A mobile camera

Once we have a bot that allows us to control our project remotely (My second bot) and we know how to move our servos (Smooth movement with servos), it is now time to put the camera on top of them (A camera for my Raspberry Pi).
Let us remember that the control is done over XMPP (for example with programs such as Pidgin, Google Talk or our preferred IM client); the idea was to avoid opening ports in the router while still being able to send instructions to the camera from anywhere.

We selected a couple of boxes for the project (they are cheap and quite simple to adapt to our needs). In the bigger box we made two holes (this way we could mount two servos, even though in the end we only used one of them):

[Photo: we have painted the box #raspi]

Inside the box we made the connections (batteries for the servos, and connections for the control from the Raspberry Pi, which sits outside the box):

[Photo: the box as a support for the motors]

The camera goes in a smaller box that will be attached to the selected servo.

[Photo: and we have a better-looking prototype #raspi]

When we send the adequate instructions, the camera goes to the selected position, stops to take the picture and sends it by mail. Finally, it returns to the initial position.
We can see all the sequence in the following video.

The project’s code can be found at err-plugins (it may evolve further; the main code in its current state can be seen in pruebas.py).

In the last few weeks a similar project has been published, “Raspberry Eye” Remote Servo Cam. It has two main differences: it can move the camera on two axes (our project can only move left and right) and it is controlled using a web interface.

So, what’s next?
I have several ideas, but I haven’t decided what to do yet: it would be nice to give the camera some autonomy (motion detection? detection of changes in the scene?); I wouldn’t mind adding some more movement (maybe wheels, so that the camera can take pictures in different parts of the house? This hexapod really impressed me). Going further, maybe we could think about other control devices (wearables?).

Of course, please feel free to comment, discuss and make suggestions… All comments are welcome.

Smooth movement with servos

One of the main problems with servos is that they move quite fast, as can be seen in the video we included in Adding movement: servos.
With the setup I had imagined, this was a problem. The camera has a non-negligible weight, and if we put something on top of the servo the whole assembly can become unstable. See, for example:

[Photo: more tests #frikeando #voz #motores #raspi #c3po]

The solution for this problem is quite simple: when we want to move to a certain position, we can reach it by means of a series of small steps. We indicate a set of successive positions for the servo, each one a bit closer to the final destination. In this way, even with fast movements, the camera stays more or less stable.

The code could be similar to the one we can see here:

def move(self, servo, pos, posIni=MIN, inc=10):

	posFin = posIni + (MAX - MIN) * pos
	steps = abs(posFin - posIni) / inc

	print "Pos ini", posIni
	print "Pos fin", posFin
	print "Steps", int(steps)

	if pos < 0:
		pos = -pos
		sign = -1
	else:
		sign = 1

	for i in range(int(steps)):
		# One small step towards the target; the pause (VEL) controls the speed.
		servo.ChangeDutyCycle(posIni + sign * (i + 1) * inc)
		time.sleep(VEL)


That is, if we start at position posIni and we want to move a certain fraction of the available range (a real number between 0 and 1), we can compute the final position if we know the total range (MAX - MIN):

posFin=posIni + (MAX-MIN)*pos

And then, we can compute the needed steps to reach this destination; if we use increments of 10 (inc=10):

steps=abs(posFin - posIni) / inc

We are using the absolute value because the movement can be forward or backward (depending on the starting point of the movement). The direction is handled by this conditional:

if pos < 0:

Finally, we use a for loop to reach the destination:

for i in range(int(steps)):
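Stripped of the GPIO calls, the sequence of intermediate positions visited by that loop can be sketched as a pure-Python helper (MIN, MAX and inc here are illustrative values for the sketch, not necessarily the ones used on the real servo):

```python
MIN, MAX = 2.5, 12.5  # illustrative duty-cycle limits for this sketch


def step_positions(pos, posIni=MIN, inc=1.0):
    """Return the successive positions visited on the way to the target.

    pos is the fraction of the full range to move (0..1), as in the post.
    """
    posFin = posIni + (MAX - MIN) * pos
    steps = int(abs(posFin - posIni) / inc)
    sign = 1 if posFin >= posIni else -1
    return [posIni + sign * (i + 1) * inc for i in range(steps)]
```

For instance, starting at MIN and moving half of the range yields five steps of size 1.0, ending exactly at 7.5.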

The result can be seen in the following video:

[Photo: we mounted the camera on the motor that moves more slowly #raspi]

There we can observe a forward and a backward movement (to recover the initial position) with an improvised model.
The speed can be controlled with the pause between steps (the VEL value).

Maybe we should have chosen another type of motor, but this approach solved the problem.

My second bot

In Raspberry Pi: ¿qué temperatura hace en mi casa? (only in Spanish, sorry) we presented our first attempt at building a bot. It allowed us to interact with the Raspberry Pi from wherever we were, provided we had an Internet connection. In this video we can see that interaction using IRC.

I tested SleekXMPP and phenny but found some limitations, so I continued my search. When I found err I discovered that it was under active development at that moment and had a somewhat active community on Google+, Err. It provides a modular architecture for adding features to the bot.

My first steps were to adapt some tests I had programmed for phenny and to add the possibility of taking pictures with my cameras and sending them by email. The code is at err-plugins (it will change in the future, so we will pay attention to the current version):

The first one is pruebas.plug. It contains some meta-information needed to define the module, following the bot syntax:

Name = Pruebas
Module = pruebas

Description = let's try things !

And the file pruebas.py contains the actual code for the programmed actions. For example, the following code takes a picture and then sends it by mail:

@botcmd
def foto(self, msg, args):
	"""Take a picture"""
	quien = msg.getFrom().getStripped()
	yield "I'm taking the picture, wait a second "
	if args:
		try:
			cam = int(args)
		except:
			cam = 0
	else:
		cam = 0
	yield "Camera %s" % cam
	self.camera("/tmp/imagen.png", cam)
	yield "Now I'm sending it"
	self.mail("/tmp/imagen.png", quien)
	my_msg = "I've sent it to ... %s" % quien
	yield my_msg

The first line indicates that this function defines an instruction for the bot. The name of the function will be the command we will need to send by IM (with a configurable prefix that serves to differentiate between instructions for the bot and other strings).

In our case, the instruction

    .foto

would execute a function that is almost the same as the one commented in Sending an image by mail in Python.

The main differences are:

  • It gets its parameters from the function call (Err manages this)

    def foto(self, msg, args):

  • It replies to the mail address of the person who sent the order:

    quien = msg.getFrom().getStripped()

  • The argument can be 0, 1 or absent (no validation is done) because we have two cameras attached to our raspi. By default (no parameter provided, or an uninterpretable one) it uses camera 0.
  • Now it replies telling us the chosen camera:

    yield "Camera %s"%cam

  • And now it calls the actual function in charge of taking the picture; its parameters are very similar to the ones commented in a previous post (the name of the file and the chosen camera):

    self.camera("/tmp/imagen.png", cam)

  • Now it calls the function that will send the picture to the previously defined mail address, so the parameters are the name of the file and this address.

    self.mail("/tmp/imagen.png", quien)

  • Finally, it uses yield again to reply, finishing the process.
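Since the post notes that no validation is done, the camera argument handling could be hardened with a small helper along these lines (a sketch; the function name and the set of valid indices are my own assumptions):

```python
def parse_camera(args, default=0, valid=(0, 1)):
    """Interpret the bot argument as a camera index.

    Falls back to `default` for empty, non-numeric or out-of-range input.
    """
    try:
        cam = int(args)
    except (TypeError, ValueError):
        return default
    return cam if cam in valid else default
```

The command would then call parse_camera(args) instead of the bare try/except block.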

If we look at the code, the main difference for these two functions (camera and mail) is that they do not have a @botcmd line; they are internal functions, not available as bot commands. They need some configuration options (as presented in Sending an image by mail in Python).

Errbot manages this by means of:

def get_configuration_template(self):
	return {'ADDRESS': u'kk@kk.com', 'FROMADD': u'kk@kk.com',
	        'TOADDRS': u'kk@kk.com', 'SUBJECT': u'Imagen',
	        'SMTPSRV': u'smtp.gmail.com:587', 'LOGINID': u'changeme',
	        'LOGINPW': u'changeme'}

It is a dictionary with the parameters we need to configure.

If we send the order via IM:

.config Pruebas

In this case, Pruebas is the name of the module, and we have selected the dot (.) as the prefix that marks the following string as an instruction for the bot. The config instruction returns the current configuration (if the module has not been configured it returns the defined template; otherwise it returns the actual values). These values can be used as a template for the module configuration.

.config Pruebas {'TOADDRS': u'mydir@mail.com', 'SMTPSRV':
u'smtp.gmail.com:587', 'LOGINPW': u'mypassword', … }

We are almost done, soon we will be able to show the whole thing.

Adding movement: servos

Having a camera (or two) attached to our raspi helps us discover one of the annoying limitations they have: they cannot move!
Fortunately, there are plenty of options for doing this. I decided to buy a couple of servos.

[Photo: motor #raspi]

They are cheap, small and noisy.

There are lots of pages explaining the theory behind their inner workings, so we will only recall a couple of things here: they have some rotation constraints (the ones I bought can only move 180 degrees) and they are controlled by sending pulses whose duration determines the angle (for those interested, you can have a look at How do servos work? -in English- or at Trabajar con Servos -in Spanish-).
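The pulse-to-angle relation can be made concrete: at 50Hz the period is 20 ms, and hobby servos typically expect pulses between roughly 0.5 ms and 2.5 ms, i.e. duty cycles between about 2.5% and 12.5% (the exact limits depend on the servo, so the values below are assumptions):

```python
def angle_to_duty(angle, min_duty=2.5, max_duty=12.5):
    """Map an angle in [0, 180] degrees to a PWM duty cycle (percent).

    min_duty/max_duty are typical values; check your servo's datasheet.
    """
    if not 0 <= angle <= 180:
        raise ValueError("angle out of range")
    return min_duty + (max_duty - min_duty) * angle / 180.0
```

With these values, 90 degrees corresponds to a 7.5% duty cycle (a 1.5 ms pulse), which is the usual central position.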

From our program, our mission will be to find a way to send the adequate pulses to the pin where we have connected the servo (remember: the connection between the physical world and the computer).

There are lots of examples in the net.

For example, the programs servo, servo2, servoYT, and servoYT2 are based on what we can see in the video Servo control using Raspberry pi (and also in this one: Servo Control with the Raspberry Pi).

As usual, we are commenting on the main steps here, following the third program.

First, the python modules that we need:

import RPi.GPIO as GPIO
import time

The first one is used to send instructions through the pins of our raspi. The second one is for managing time-related data.

Now, some setup: we will refer to the pins by their board number and we configure pin 11 as output.

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)

Now, we are going to define the controller with a frequency of 50Hz and we’ll start it in the central position:

p = GPIO.PWM(11, 50)
p.start(7.5)

Finally, a bit more code, changing the position each second:

    while True:
        print "Uno"
        p.ChangeDutyCycle(2.5)   # one extreme
        time.sleep(1)
        print "Dos"
        p.ChangeDutyCycle(12.5)  # the other extreme
        time.sleep(1)
        print "Tres"
        p.ChangeDutyCycle(7.5)   # back to the central position
        time.sleep(1)

That is, it starts at the central position and moves to both extremes: from the center it goes to one side, then to the contrary one, and finally it returns to the initial position. You can see these movements in the following video:

By the end of the video you can see that we can control more than one servo (with the only limitation being the number of available pins). The code for this can be seen at:
We have added pin number 12 and we use two controllers (p1 and p2). Then we just send instructions to each one in sequence. You can have a look at this mini-video:

[Photo: two motors #raspi]

We will see soon how to manage all the parts we have commented until now in order to finish the project.

This post was originally published in Spanish at: Añadiendo movimiento: servos.

Sending an image by mail in Python

Once we are able to take a picture with our webcam (A camera for my Raspberry Pi), the next step is to see the picture from wherever we are.

There are lots of texts explaining how to configure a web server for this,
but I didn’t want to expose a web server on my raspi to the Internet.
You’d need to set up the server, open some ports in the router and deal
with the problem of not having a fixed IP.
I found this approach not very robust.
There is also the possibility of somebody cracking the server and
accessing our network in some way (maybe difficult, but not impossible).

I also evaluated the possibility of sending the images by means of an
instant messaging app, but I’m not sure whether this can be done (or maybe
I just haven’t been able to find the adequate documentation), so I
discarded this option.

The final choice was the old and reliable email. My bot is going to be
able to receive requests in different ways (XMPP, IRC, …) and it will send
the images as a reply by email.

There are lots of documents explaining how to prepare a message with an
attachment. In fact, I had a program from previous experiments and this was
the one I decided to use.
It can be seen at mail.py.

It basically constructs a message from its components (From, To, Subject, attachments, …).

It needs some parameters, which have to be configured. The way to do this
is by means of an auxiliary module that is imported at the beginning of the program.

import mailConfig

The only content of this file is the set of variables whose values need to
be adapted. Our program just reads them (it could, of course, use them
directly):
destaddr = mailConfig.ADDRESS
fromaddr = mailConfig.FROMADD
toaddrs = mailConfig.TOADDRS
subject = mailConfig.SUBJECT
smtpsrv = mailConfig.SMTPSRV
loginId = mailConfig.LOGINID
loginPw = mailConfig.LOGINPW

imgFile = '/tmp/imagen.png'

We also select a default filename for the image; we can choose a
different one from the command line.

We also set up a default address to send emails to (destaddr), but we can
also include a different one on the command line (not very robust: there is
no validation of the email address).

From this, we can construct the message.

Detecting and filling in the parameters of the object we are sending:

format, enc = mimetypes.guess_type(imgFile)
main, sub = format.split('/')
adjunto = MIMEBase(main, sub)

Notice that in this way the program can be used for sending other files,
not just images.
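For instance, the mimetypes module deduces the main type and subtype from the file name alone, which is why the same code works for other kinds of attachments (a small self-contained sketch of that detection step; the fallback for unknown extensions is my own addition):

```python
import mimetypes


def mime_parts(filename):
    """Return [main, sub] for a file name, as the script above does."""
    ctype, enc = mimetypes.guess_type(filename)
    if ctype is None:
        # Unknown extension: a safe generic default.
        ctype = 'application/octet-stream'
    return ctype.split('/')
```

So a .png gives image/png, while a .txt would give text/plain with exactly the same code.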

Now we construct the attachment, with the adequate encoding, and we
attach it to the message:

adjunto.add_header('Content-Disposition', 'attachment; filename="%s"' % imgFile)

Finally, we add the other parameters:

mensaje['Subject'] = subject
mensaje['From'] = fromaddr
mensaje['To'] = destaddr
mensaje['Cc'] = toaddrs

The message is empty; it does not contain text (exercise for the reader:
can you add some text? Something like: ‘This picture was taken on day
xx-xx-xxxx at hh:mm’).
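As a hint for that exercise, a text part can be attached to the same multipart message with MIMEText (a sketch with the standard email package; the function and variable names are mine, not from mail.py):

```python
import time
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


def build_message(fromaddr, destaddr, subject):
    """Build a multipart message that already carries a descriptive text."""
    mensaje = MIMEMultipart()
    mensaje['Subject'] = subject
    mensaje['From'] = fromaddr
    mensaje['To'] = destaddr
    # The suggested text, with the current date and time filled in.
    texto = "This picture was taken on %s" % time.strftime(
        "%d-%m-%Y at %H:%M", time.gmtime())
    mensaje.attach(MIMEText(texto))
    return mensaje
```

The image attachment would then be added to this same mensaje object exactly as above.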

And finally, we send the message (direct negotiation with the SMTP server):

server = smtplib.SMTP(smtpsrv)
server.starttls()
server.login(loginId, loginPw)
server.sendmail(fromaddr, [destaddr]+[toaddrs], mensaje.as_string(0))
server.quit()

In this way we have a small Python program that will send a file by mail.
We can provide the name of the file on the command line, and we can also
provide an email address.
By default it will send an image with the fixed name to the pre-configured
address.
As a sort of backup and logging system, the program will always send a copy of the mail to toaddrs.

On the configuration side, we need ‘destaddr’, ‘fromaddr’ and ‘toaddrs’ to be valid email addresses.

The server ‘smtpsrv’ can be any server we can use; the program uses
authenticated sending (this is the reason for needing the user and
password). For example, we could use the Google servers, with the
configuration:

smtpsrv = 'smtp.gmail.com:587'

And we would use the user and password of a pre-existing account.

A camera for my Raspberry Pi

This post was originally published at: A camera for my Raspberry Pi. Not many visits later, and given that I did not enjoy writing there (I’m an xmlrpc man), I’m trying to give that post a second life here.

My first idea was to attach a webcam to my Raspberry Pi and later use it in a more complex project. This post reports the initial steps with my camera.

The main reference for buying a camera for the Raspberry was RPi USB Webcams. From the models shown there (and available in a local store near home) I selected the Logitech C270. As I understood it at the time, it should work directly connected to the USB port. Unfortunately this was not correct (it needs a powered USB hub; this is stated more clearly in that page now), and this caused me some headaches and frustration.

During the tests I also bought another camera (second hand, this time), the one sold for the PlayStation PS3 (if I’m correct). I had read that it did not need a powered hub and it was really cheap (5 euros), so it was worth a try.

We can see a picture of the cameras:

[Photo: trying another camera (PS3)]

While it is true that this camera worked better directly connected to
the USB port of the raspi, the machine was not behaving correctly (note
that I also had a USB WiFi adapter: I’m using WiFi to connect via SSH to
the computer, and also to keep it connected almost permanently to the
Internet).

The image quality of this camera is worse than the Logitech one. It has
two positions (zoom) that need to be managed by hand (you cannot access
this feature from a program, as far as I know). I’ve kept it as a secondary
camera.

So, after a lot of days testing the cameras, I decided to buy a powered
USB hub, the EasyConnect 7 Port USB2 Powered Hub made by Trust.

[Photo: as the song said…]

During this time I was trying the different available options for the camera. It is worth remembering that there exists a camera officially supported by the project. It is supposed to work perfectly well directly connected to our machine; for me it was a bit expensive (I had bought the Logitech previously), and the connection does not seem very adequate for reusing the camera in other projects. For more information, you can have a look at Raspberry Pi Camera Module.

Some programs for the camera:

Of course, there are more.

But I also discovered the OpenCV project; with it you can access a camera from your programs (in Python, for example): manage several cameras, their parameters, …

You can have a look at a small example, cam.py. The program takes a picture with the camera and stores it in a file whose name is pre-configured. You can also provide a name for the file where the image will be stored.

We can comment on some lines of the code:

Definition of the name of the file:

imgFile = '/tmp/imagen.png'

It is always the same. We could use something like this:

imgFile = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime()) + '.png'

in order to have several images stored without worrying about the name (be careful with the storage capacity!).
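If we go down that road, a small cleanup routine can keep the storage under control by deleting the oldest images (a sketch; the directory layout, the .png pattern and the keep limit are my own assumptions):

```python
import glob
import os


def prune_images(directory, keep=100):
    """Delete all but the `keep` most recent .png files (keep >= 1)."""
    # Sort by modification time, oldest first.
    files = sorted(glob.glob(os.path.join(directory, '*.png')),
                   key=os.path.getmtime)
    for old in files[:-keep]:
        os.remove(old)
```

Called periodically (from cron, for example), it would keep only the newest pictures on the SD card.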

Then there is code to check whether you have provided a filename on the command line (not very robust: it does not validate anything). Later, it initializes the camera:

cam = cv2.VideoCapture(0)

For capturing the image we use a small function:

def get_image():
	retval, im = cam.read()
	return im

Again, no validation is done; it just hopes that everything went well.

The function is used by this sentence:

img = get_image()

And, finally, the image is written to the file:

cv2.imwrite(imgFile, img)

Hopefully, I will continue writing about more things I have done with the camera.

This text is a (sort of) translation of the original one, that has been published at: Una cámara en la Raspberry Pi (in Spanish).