A robotic leg

This post is more of a note to self than something of actual value. Anyway, there have been some advances, and some tabs open in the browser, and I felt it was better to share them here for future reference than to wait for an (eventual) finishing point of the project.
Maybe it will be useful for somebody.

In A mobile camera we anticipated the idea of making something that can move. In fact, the inspiration was Stubby, a full-featured miniature hexapod. There is more info in Stubby the (Teaching) Hexapod.

My only (and not small) problem with that design was the manual skill and the tools needed: wood cutting, machining,… I was wandering (physically and mentally) around my possibilities, and one of the solutions was to use wooden sticks; I'd need to cut and drill them, but that was not too scary for me.

The Internet is full of projects such as A spider called “Chopsticks”, which uses chopsticks for the legs, and Popsicle Stick Hexapod R.I.P.. Their ideas were similar to my own and they gave me some encouragement. I had also discovered Build a 12-Servo Hexapod. It has some limitations, but it shows some interesting ideas.
Just to comply with my initial statement (more tabs!), we can see some more projects like
Hexpider, with a different design (it can even write!), and 6-legged robot project. All of them have helped me by providing insight and ideas about the movement and the articulations (at a very basic level; some elaboration is needed, which will be shown in further posts).

With these ideas I visited a DIY store looking for inspiration. I quickly forgot the idea of wooden sticks because I discovered some plastic tubes that seemed more convenient: they should be easier to cut and they should be lighter. You can also find aluminium sticks that would have a nicer look, but at this stage of the project the plastic tubes seemed easier to use.

[Instagram photo shared by Fernando Tricas García (@ftricas)]

My supposition was correct and this material is easy to work with: we can make holes and fix the servo with a screw, as can be seen in the following image:

[Instagram photo: “La pata” (The leg) #raspi #servo, shared by Fernando Tricas García (@ftricas)]

The picture is not very good, but it should be enough to get the idea of how the different parts are joined. I'm very grateful for similar pictures from other projects that provided hints about how to proceed. As you can see, I've chosen a design with three servos for each leg.

We have used cable ties to join some parts; maybe we'll need some better methods to improve these unions. It should be easy to make more ‘aggressive’ operations if needed.

It was quite surprising to see how fast I could assemble the leg with these tools; we will see if I can go as fast in the future (hint: no).

For the movement of the legs we had some experience with servos (Adding movement: servos). The whole code was rewritten following the ideas of PiCam.

On the software side, I will only show a couple of small programs that can be found at servo.

The first one can move each joint in an independent way (we wanted to be able to test them from the command line): legOneMove.py.

We have the three joints associated with three GPIO ports:

servoGPIO = [17, 23, 15]

and we will use a function for the transformation of an angle into the needed pulse:

def angleMap(angle):
    return int((round((1950.0/180.0),0)*angle)/10)*10+550
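As a quick sanity check of that transformation (the function is copied as written in legOneMove.py), we can tabulate the pulse widths for a few angles:

```python
def angleMap(angle):
    # same transformation as in legOneMove.py: maps 0-180 degrees
    # to a pulse value, rounded down to a multiple of 10, offset by 550
    return int((round((1950.0 / 180.0), 0) * angle) / 10) * 10 + 550

# pulse values for the two extremes and the centre
for a in (0, 90, 180):
    print(a, angleMap(a))
```

So 0 degrees maps to 550 and the pulse grows roughly linearly from there.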

The movement function is very simple:

from RPIO import PWM  # the library that generates the servo pulses

def movePos(art, pos):
    servo = PWM.Servo()
    print art
    servo.set_servo(art, angleMap(pos))

Shame on me: I discovered that I needed a final delay because when the program finishes it stops sending the needed pulses, and the movement is not completed.

Finally, in

movePos(servoGPIO[int(sys.argv[1])], int(sys.argv[2]))

we pass as the first argument the joint we are moving (mapped to the adequate GPIO port). The second argument is the angle. Notice that no bounds or limit checking is done, so bad things can happen if the parameters are not adequate.
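A minimal defensive wrapper (a sketch, not part of the original legOneMove.py) could validate both arguments before anything moves: the joint index must select one of the three GPIO ports and the angle must stay within the servo's 0-180 degree range:

```python
servoGPIO = [17, 23, 15]  # the three GPIO ports, as above

def checkedArgs(joint, angle):
    # hypothetical validation; the original program does none of this
    if not 0 <= joint < len(servoGPIO):
        raise ValueError("joint must be 0, 1 or 2")
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    return servoGPIO[joint], angle

print(checkedArgs(1, 90))
```

With this in place, a bad command-line argument would raise a clear error instead of sending an out-of-range pulse to the servo.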

The second program is legServo.py. It simulates the movements the leg needs in order to walk: raise the leg, move it forward, lower it, move it backwards, and so on…
Some better movements will be needed in the future but do not forget that this is just a proof of concept.
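The walking cycle described above (raise, move forward, lower, move backwards) can be sketched as a simple sequence of joint commands; the joint names and angles here are illustrative assumptions, not the values used in legServo.py:

```python
# illustrative gait cycle for one leg with three joints
# ("knee"/"hip" names and the angles are assumptions, not from legServo.py)
STEPS = [
    ("raise leg",     {"knee": 60}),
    ("move forward",  {"hip": 120}),
    ("lower leg",     {"knee": 90}),
    ("move backward", {"hip": 90}),
]

def walkCycle(moveJoint):
    # moveJoint(joint, angle) would wrap movePos() on real hardware
    for name, joints in STEPS:
        for joint, angle in joints.items():
            moveJoint(joint, angle)

# without hardware we can just record the commands that would be sent
log = []
walkCycle(lambda j, a: log.append((j, a)))
print(log)
```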

Now we can see a video with a sequence of these movements repeated several times that I recorded with my son’s help.

[Instagram video: “En movimiento” (In motion) #servo #raspi, shared by Fernando Tricas García (@ftricas)]

We can now see another video with some previous tests, taking advantage of the wonderful YouTube video editor, with two joints and with three joints:

The next steps will be to construct the other legs (four or six?) and we'll need to see whether we need some more hardware (maybe some more inputs/outputs in order to control all the servos for the legs, and maybe something more). We will also need something for the ‘body’.

This post was originally published in Spanish at: Una pata robótica.


Firing a camera when somebody is around

After the summer break we return with a small project. We added movement to our camera (Adding movement: servos) and with this we were able to change the orientation of the camera in the room (A mobile camera), but we weren't able to see interesting things most of the time (it is difficult to find the adequate moments).

I was curious about proximity sensors, so I decided to give them a try by buying a couple of HC-SR04 sensors, which work with ultrasound.

[Instagram photo: “Ojos que no ven” (Eyes that do not see), shared by Fernando Tricas García (@ftricas)]

The objective is to take a picture when somebody or something passes in front of the camera: for this we measure the distance to the obstacle in front of the sensor, and when a change is observed we can suppose that there is something there.

I did some experiments with the Raspi, but the results were unsatisfactory: the measurements are not accurate (and it is not easy to filter out the bad ones), so this is not adequate for our purposes.

Just in case, you can check an example in HC-SR04 Ultrasonic Range Sensor on the Raspberry Pi.

The connections:

The problems seem to be related to the fact that the raspi is not very good at real time, so minor variations in time measurement can appear (with these sensors we are measuring the time that some sound pulses take to go and return until they find some obstacle).
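The computation behind the measurement is simple: sound travels at roughly 343 m/s at room temperature, and the measured echo time covers the round trip, so we divide by two. A small sketch:

```python
SPEED_OF_SOUND = 34300.0  # cm/s, approximate value at room temperature

def echoToDistance(echo_seconds):
    # the pulse travels to the obstacle and back, hence the division by 2
    return echo_seconds * SPEED_OF_SOUND / 2.0

# an echo of about 2.9 ms corresponds to roughly half a metre
print(echoToDistance(0.0029))
```

At these speeds, a timing jitter of a few hundred microseconds already shifts the result by several centimetres, which is why accurate timing matters so much here.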

Since we had an Arduino, we decided to check whether it was more adequate. This would give us:

– More accurate measurements.
– A way to learn how to communicate between the Raspberry Pi and the Arduino.

Of course, this will open the door for new experiments.

The connections with the Arduino:

Following HC-SR04 Ultrasonic Sensor, it has been quite easy to prepare the Arduino sketch and to connect the sensor (the code is available at sketch.ino in its current form; there may be some changes in the future).

We found that the measurements were more accurate: sometimes there can be a difference of one or two centimetres, but this is not a problem when we are trying to detect something passing, because in that case there should be a difference of 20 cm or more.

Now we needed a way to communicate between the Arduino and the Raspberry Pi (in order to reuse some previous code).

Arduino sends text that can be easily read and processed at the Raspberry.
There seem to be several ways to do the communication: a serial port over USB (Connect Raspberry Pi and Arduino with Serial USB Cable), using I2C (Raspberry Pi and Arduino Connected Using I2C) and by means of GPIO (Raspberry Pi and Arduino Connected Over Serial GPIO).
I chose the first one but I should experiment with the others in the near future.


dist = 0
while 1:
	distAnt = dist
	dist = int(ser.readline().strip())

	if abs(distAnt - dist) > 10:
		print "Alert!!"

That is: we are storing the previous measurement (distAnt), we obtain a new one (dist = … ) and we activate an alert if there is a difference greater than 10 cms.
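The same detection loop can be exercised without any hardware by feeding it a pre-recorded list of readings (a sketch: the serial port is replaced by a plain iterable, and the 10 cm threshold is the one used above):

```python
def detectChanges(readings, threshold=10):
    # readings: distances in cm, as the Arduino would send them over serial
    alerts = []
    dist = None
    for line in readings:
        distAnt = dist
        dist = int(line)
        # an alert fires when consecutive readings differ by more than
        # the threshold; the first reading has nothing to compare against
        if distAnt is not None and abs(distAnt - dist) > threshold:
            alerts.append(dist)
    return alerts

# 150 -> 151 -> 152 is just noise; 152 -> 60 is something passing in front
print(detectChanges(["150", "151", "152", "60", "61", "150"]))
```

On the real system the iterable would be the pyserial object, which yields one line per readline() call.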

Since we wanted to take a picture, we have reused some code that can be seen at: A camera for my Raspberry Pi and, following previous ideas, we’ll send it by email (Sending an image by mail in Python).

The code can be seen at serialPicture.py.

There was a problem: we establish the connection with the mail server directly in order to send the image. We cannot avoid the time consumed by the camera (which is not negligible), but we can avoid waiting for the mail sending.
For this we create a subprocess (see multiprocessing) which does this part of the work.

p = Process(target=mail, args=(name,who))

That is, we take the picture and then we launch a new process that performs the sending. Since I had no previous experience with parallel code in Python, I'm not sure whether some process cleaning/ending is needed. No synchronization nor waiting for the process to finish is needed, so everything seems to be working well.
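A minimal, self-contained version of that pattern looks like this (a sketch with a stand-in for the real mail function; the file name and address are illustrative). Calling join() at the end is a cheap way to make sure the sending finished before the program exits:

```python
from multiprocessing import Process, Queue

def mail(name, who, results):
    # stand-in for the real mail-sending function, which would talk
    # to the SMTP server; here it just reports what it was asked to send
    results.put((name, who))

def sendInBackground(name, who):
    results = Queue()
    # launch the slow part (the mail sending) in a separate process,
    # so the main program does not have to wait for the SMTP dialogue
    p = Process(target=mail, args=(name, who, results))
    p.start()
    result = results.get()  # blocks until the child has produced its result
    p.join()                # optional cleanup: reap the finished process
    return result

if __name__ == "__main__":
    print(sendInBackground("/tmp/imagen.png", "me@example.org"))
```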

Some final remarks: none of these processes is really fast; nobody should expect to use this code as a ‘trap’ for taking pictures of a flying bird (even a child running won’t be captured).

What can we do now?
We could mount the sensor on one of our servos (as in A mobile camera) and use it to construct a map of the room; this would be a different way to detect changes. When something is noticed we can scan the space with the camera, taking several pictures (or even recording a video; I've been avoiding video until now, but for sure we will try it in the future).
Of course, suggestions and questions are welcome here, and there are surely more ideas out there.
One more remark: the sensor will work even when there is not enough light to take the picture; maybe we could add a light sensor to avoid firing the camera (or, perhaps, illuminate the scene when we are taking a picture).

A mobile camera

Once we have a bot which allows us to control our project remotely (My second bot) and we know how to move our servos (Smooth movement with servos), it is time to put the camera on them (A camera for my Raspberry Pi).
Let us remember that the control is done using XMPP (for example with programs such as Pidgin, Google Talk or our preferred IM client); the idea was to avoid opening ports in the router while still being able to send instructions to the camera from anywhere.

We selected a couple of boxes for the project (they are cheap and quite simple to adapt to our needs). In the bigger box we made two holes (this way we can mount two servos, even if in the end we only used one of them):

[Instagram photo: “Hemos pintado la caja” (We have painted the box) #raspi, shared by Fernando Tricas García (@ftricas)]

Inside the box we made the connections (batteries for the servos, and connections for the control from the Raspberry Pi, which is outside of the box):

[Instagram photo: “Caja como soporte para los motores” (Box as a support for the motors), shared by Fernando Tricas García (@ftricas)]

The camera goes in a smaller box that will be attached to the selected servo.

When we send the adequate instructions, the camera goes to the selected position, stops to take the picture and sends it by mail. Finally, it returns to the initial position.
We can see all the sequence in the following video.

The project's code can be found at err-plugins (it may evolve further; the main code in its current state can be seen at pruebas.py).

In the last few weeks a similar project has been published, “Raspberry Eye” Remote Servo Cam. It has two main differences: it can move the camera on two axes (our project can only move left and right) and it is controlled using a web interface.

So, what’s next?
I have several ideas, but I haven't decided what to do: it would be nice for the camera to have some autonomy (motion detection? detection of changes in the scene?); I wouldn't mind adding some more movement (maybe adding wheels so that the camera can take pictures in different parts of the house? this hexapod really impressed me). Going further, maybe we could think about other control devices (wearables?).

Of course, please feel free to comment, discuss and make suggestions… All comments are welcome.

Smooth movement with servos

One of the main problems with servos is that they move quite fast, as can be seen in the video we included in Adding movement: servos.
With the setup I had imagined, this was a problem. The camera has a non-negligible weight, and if we put something on top of the servo all of this can become unstable. See, for example:

The solution for this problem is quite simple: when we want to move to a certain position, we can reach it by means of a set of small steps. We can indicate a set of successive positions for the servo, each one a bit closer to the final destination. In this way, even with fast movements, the camera is more or less stable.

The code could be similar to the one we can see here:

def move(self, servo, pos, posIni=MIN, inc=10):

	posFin = posIni + (MAX - MIN) * pos
	steps = abs(posFin - posIni) / inc

	print "Pos ini", posIni
	print "Pos fin", posFin
	print "Steps", int(steps)

	if pos < 0:
		pos = -pos
		sign = -1
	else:
		sign = 1

	for i in range(int(steps)):
		# loop body reconstructed: send each intermediate position to the
		# servo, then pause; the pause (VEL) controls the speed
		self.servo.set_servo(servo, posIni + sign * i * inc)
		time.sleep(VEL)


That is, if we start at a position (posIni) and we want to move a certain percentage of the available range (a real number between 0 and 1), we can compute the final position if we know the total range (MAX – MIN):

posFin=posIni + (MAX-MIN)*pos

And then, we can compute the needed steps to reach this destination; if we use increments of 10 (inc=10):

steps=abs(posFin - posIni) / inc

We use the absolute value because the movement can be forward or backward (depending on the starting point of the movement). This is solved by means of this conditional:

if pos < 0:

Finally, we use a for loop to reach the destination:

for i in range(int(steps)):

The result can be seen in the following video:

There we can observe a forward and a backward movement (to recover the initial position) with an improvised model.
The speed can be controlled with the time between steps (the VEL value).
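Without hardware, the stepping logic can be simulated by collecting the intermediate positions that would be sent to the servo (a sketch with made-up positions; on the real servo each value would be a set_servo() call followed by the VEL pause):

```python
def smoothPositions(posIni, posFin, inc=10):
    # the list of intermediate positions the servo is asked to reach,
    # one per step, moving in the right direction
    sign = 1 if posFin >= posIni else -1
    steps = abs(posFin - posIni) // inc
    return [posIni + sign * i * inc for i in range(int(steps) + 1)]

# a sweep from 600 to 650 in steps of 10, and the way back
print(smoothPositions(600, 650))
print(smoothPositions(650, 600))
```

Smaller inc values (or a longer pause between steps) give a slower, smoother sweep; larger ones approach the original jerky single jump.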

Maybe we should have chosen another type of motor, but we could solve the problem with this approach.

Adding movement: servos

Having a camera (or two) attached to our raspi helps us discover one of the annoying limitations they have: they cannot move!
Fortunately, there are plenty of options for fixing this. I decided to buy a couple of servos.

[Instagram photo: “Motor” #raspi, shared by Fernando Tricas García (@ftricas)]

They are cheap, small and noisy.

There are lots of pages explaining the theory behind their inner workings, so we will only remind here a couple of things: they have some rotation constraints (the ones I bought can only move 180 degrees) and the way to control them is by sending pulses whose duration determines the angle (for interested people, have a look at How do servos work? -in English- or at Trabajar con Servos -in Spanish-).
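To make the pulse idea concrete: a typical hobby servo expects a pulse every 20 ms (a 50 Hz signal) whose width, roughly between 1 and 2 ms, encodes the angle. Expressed as a duty cycle (the figures below are the usual textbook approximations, not measured values for these particular servos):

```python
PERIOD_MS = 20.0  # one pulse every 20 ms, i.e. a 50 Hz control signal

def pulseToDuty(pulse_ms):
    # fraction of the period during which the signal stays high, in percent
    return pulse_ms / PERIOD_MS * 100.0

# typical values: 1.5 ms is the centre; 1 ms and 2 ms are the extremes
for pulse in (1.0, 1.5, 2.0):
    print(pulse, pulseToDuty(pulse))
```

This is why 7.5 is the usual duty-cycle value for the central position when driving a servo from a 50 Hz PWM output.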

From our program, our mission will be to find the way to send the adequate pulses to the selected pin where we have connected the servo (remember: physical world–computer connection).

There are lots of examples in the net.

For example, the programs : servo, servo2, servoYT, and servoYT2 are based on what we can see in the video Servo control using Raspberry pi (and also in this one Servo Control with the Raspberry Pi).

As usual, we are commenting on the main steps here, following the third program.

First the python modules that we need:

import RPi.GPIO as GPIO
import time

The first one is used for sending instructions through the pins of our raspi. The second one is for managing time-related data.

Now, some setup: we will refer to the pins by their number, and we configure pin 11 as output:

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)

Now, we are going to define the controller with a frequency of 50Hz and we’ll start it in the central position:

p = GPIO.PWM(11, 50)
p.start(7.5)

Finally, a bit more code, changing the position each second:

    while True:
        p.ChangeDutyCycle(7.5)   # central position
        print "Uno"
        time.sleep(1)
        p.ChangeDutyCycle(12.5)  # one extreme
        print "Dos"
        time.sleep(1)
        p.ChangeDutyCycle(2.5)   # the other extreme
        print "Tres"
        time.sleep(1)

That is, it starts at the central position and moves to both extremes. From the centre it goes to one side, then to the contrary one, and finally it returns to the initial position. You can see these movements in the following video:

By the end of the video you can see that we can control more than one servo (with the only limitation of the number of available pins). The code for this can be seen at:
We have added pin number 12 and we use two controllers (p1 and p2). Then, we just send instructions to each one in sequence. You can have a look at this mini-video:

[Instagram video: “Dos motores” (Two motors) #raspi, shared by Fernando Tricas García (@ftricas)]

We will soon see how to manage all the parts we have commented on until now in order to finish the project.

This post was originally published in Spanish at: Añadiendo movimiento: servos.

Sending an image by mail in Python

Once we are able to take a picture with our webcam (A camera for my Raspberry Pi), the next step is to see the picture from wherever we are.

There are lots of texts explaining how to configure a web server for this,
but I didn't want to expose a web server on my raspi to the Internet.
You'd need to set up the server, open some ports in the router and take into
account the problem of not having a fixed IP.
I found this approach not very robust.
There is also the possibility of somebody cracking the server and
accessing our network in some way (maybe difficult, but not impossible).

I also evaluated the possibility of sending the images by means of an
instant messaging app, but I'm not sure whether this can be done (or maybe
it is just that I've not been able to find the adequate documentation), so I
discarded this option.

The final choice was the old and reliable email. My bot is going to be
able to receive requests in different ways (XMPP, IRC, …) and it will send
the images as a reply by email.

There are lots of documents explaining how to prepare a message with an
attachment. In fact, I had a program from previous experiments and this was
the one I decided to use.
It can be seen at mail.py.

It basically constructs a message from its components (From, To, Subject, Attachments, …).

It needs some parameters that have to be configured. This is done by means
of an auxiliary module that is imported at the beginning of the program.

import mailConfig

The only content of this file is the variables whose values need to be
adapted. Our program just reads them (it could, of course, use them directly):

destaddr = mailConfig.ADDRESS
fromaddr = mailConfig.FROMADD
toaddrs = mailConfig.TOADDRS
subject = mailConfig.SUBJECT
smtpsrv = mailConfig.SMTPSRV
loginId = mailConfig.LOGINID
loginPw = mailConfig.LOGINPW

imgFile = '/tmp/imagen.png'

We also select a default filename for the image; a different one can be
chosen from the command line.

We also set up a default address to send emails to (destaddr), but we can
include a different one on the command line (not very robust: there is
no validation of the email address).

From this, we can construct the message.

Detecting and filling in the parameters for the object we are sending:

format, enc = mimetypes.guess_type(imgFile)
main, sub = format.split('/')
adjunto = MIMEBase(main, sub)

Notice that in this way the program can be used for sending other files,
which need not be images.

Now we construct the attachment, with the adequate encoding, and we
attach it to the message:

adjunto.add_header('Content-Disposition', 'attachment; filename="%s"' % imgFile)
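For completeness, the whole attachment sequence around that header looks roughly like this (a sketch using the standard library's email package; only the add_header line is verbatim from mail.py, the surrounding lines are a reconstruction):

```python
import mimetypes
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders

def buildMessage(imgFile, data, fromaddr, destaddr, subject):
    # data: the raw bytes of the file; on the raspi this would come
    # from open(imgFile, 'rb').read()
    mensaje = MIMEMultipart()
    mensaje['Subject'] = subject
    mensaje['From'] = fromaddr
    mensaje['To'] = destaddr

    # guess the MIME type from the file name, as in mail.py
    format, enc = mimetypes.guess_type(imgFile)
    main, sub = format.split('/')
    adjunto = MIMEBase(main, sub)
    adjunto.set_payload(data)
    encoders.encode_base64(adjunto)  # mail-safe encoding of the binary payload
    adjunto.add_header('Content-Disposition',
                       'attachment; filename="%s"' % imgFile)
    mensaje.attach(adjunto)
    return mensaje

msg = buildMessage('imagen.png', b'\x89PNG...', 'from@example.org',
                   'to@example.org', 'A picture')
print(msg['Subject'])
```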

Finally, we add the other parameters:

mensaje['Subject'] = subject
mensaje['From'] = fromaddr
mensaje['To'] = destaddr
mensaje['Cc'] = toaddrs

The message body is empty, it does not contain text (exercise for the reader:
can you add some text? Something like: ‘This picture was taken on day
xx-xx-xxxx at hh:mm’).

And finally, we send the message (direct negotiation with the smtp server):

server = smtplib.SMTP(smtpsrv)
server.starttls()  # most servers (e.g. Google's) will require TLS
server.login(loginId, loginPw)
server.sendmail(fromaddr, [destaddr]+[toaddrs], mensaje.as_string(0))

In this way we have a small Python program that will send a file by mail.
We can provide the name of the file on the command line and we can also
provide an email address.
By default it will send an image with the fixed name to the pre-configured address.
As a sort of backup and logging system, the program will always send a copy of the mail to ‘toaddrs’.

On the configuration side, we need ‘destaddr’, ‘fromaddr’ and ‘toaddrs’ to be valid email addresses.

The server ‘smtpsrv’ can be any server that we can use, and the program uses
authenticated sending (this is the reason for needing the user and
password). For example, we can use the Google servers, and the
configuration would be:


And we could use some user and password for a pre-existing account.
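A hypothetical mailConfig.py could look like the following (all the values are placeholders to adapt; this file holds credentials, which is a good reason to keep it out of any public repository):

```python
# mailConfig.py -- placeholder values, adapt them to your own account
ADDRESS = 'destination@example.org'   # default destination (destaddr)
FROMADD = 'raspi@example.org'         # sender address (fromaddr)
TOADDRS = 'backup@example.org'        # backup/logging copy (toaddrs)
SUBJECT = 'Image from the raspi'
SMTPSRV = 'smtp.gmail.com:587'        # e.g. the Google servers
LOGINID = 'raspi@example.org'
LOGINPW = 'a-secret-password'
```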

A camera for my Raspberry Pi

This post was originally published at: A camera for my Raspberry Pi. Not many visits later, and given the fact that I did not enjoy writing there (I'm an xmlrpc man), I'm trying to give that post a second life here.

My first idea was to attach a webcam to my Raspberry Pi and later use it for a more complex project. This post reports the initial steps with my camera.

The main reference for buying a camera for the Raspberry was RPi USB Webcams. From the models shown there (and that were available in a local store near home), I selected the Logitech C270. As I understood it at the time, it should work directly connected to the USB port. Unfortunately this was not correct (it needs a powered USB hub; this is now stated more clearly in the info), which caused me some headaches and frustration.

During the tests I also bought another camera (second-hand, this time) that is sold for the PlayStation (the PS3, if I'm correct). I had read that it did not need a powered hub and it was really cheap (5 euros), so it was worth a try.

We can see a picture of the cameras:

[Instagram photo: “Probando otra cámara (PS3)” (Trying another camera (PS3)), shared by Fernando Tricas García (@ftricas)]

While it is true that this camera worked better directly connected to
the USB port of the raspi, the machine was not behaving correctly (it is
important to note that I also had a USB WiFi adapter; I use WiFi to
connect via SSH to the computer, and also to keep it connected almost
permanently to the Internet).

Image quality with this camera is worse than with the Logitech one. It has
two positions (zoom) that need to be managed by hand (you cannot access
this feature from a program, as far as I know). I've kept it as a secondary
camera.
So, after a lot of days testing the cameras, I decided to buy a powered USB
hub: the EasyConnect 7 Port USB2 Powered Hub made by Trust.

[Instagram photo: “Como decía la canción…” (As the song said…), shared by Fernando Tricas García (@ftricas)]

During this time I was trying the different available options for the camera. It is worth remembering that there exists a camera supported by the project. It is supposed to work perfectly well directly connected to our machine; for me it was a bit expensive (I had already bought the Logitech), and the connection seems not very adequate for reusing the camera in other projects. For more information, you can have a look at Raspberry Pi Camera Module.

Some programs for the camera:

Of course, there are more.

But I also discovered the OpenCV project; with it you can access a camera from your programs (in Python, for example): manage several cameras, their parameters, …

You can have a look at a small example, cam.py. The program takes a picture with the camera and stores it in a file whose name is pre-configured. You can provide a different name for the file where the image will be stored.

We can comment here some lines of the code:

Definition of the name of the file:

imgFile = '/tmp/imagen.png'

It is always the same. We could use something like this:

imgFile = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime())+'.png'

in order to have several images stored without worrying about the name (be careful with the storage capacity!).

Then, there is code to check if you have provided a filename on the command line (not very robust: it does not validate anything). Later, it initializes the camera:

cam = cv2.VideoCapture(0)

For capturing the image we use a small function:

def get_image():
    retval, im = cam.read()
    return im

Again, no validation is done; it just hopes that everything went OK.

The function is used by this sentence:

img = get_image()

And, finally, the image is written to the file:

cv2.imwrite(imgFile, img)

Hopefully, I will continue writing more things I have done with the camera.

This text is a (sort of) translation of the original one, that has been published at: Una cámara en la Raspberry Pi (in Spanish).