Storing credentials of your Python programs in the keyring


As I mentioned in previous posts, I'm a happy user of ErrBot (My second bot).
My main concern was being able to control some functions on some machines without having to connect via ssh or exposing web pages to the world.

I wasn't comfortable with the idea of storing my credentials in the configuration file, which is the suggested way to connect to the services with the selected backend ('username' and 'password').

In order to avoid this I’m proposing here an alternative approach, based on the keyring module. Provided you have it correctly installed, you can store your credentials there and use them from your Python programs. In this case, from ErrBot.

The module's documentation shows how to store your credentials and so on, so we will not replicate it here.

Notice that the credential storage relies on the operating system and, depending on the configuration, anybody logged into your account will have access to the credentials without a password (you'll need this if you want a bot that can start unattended). We only gain the added security of not having the credentials stored in the configuration file, but not much more.

Then you need to make some changes in the file. First of all, import the module:

import keyring

Later, before the BOT_IDENTITY section, you can add three variables: the server (needed in order to select the account in the keyring), the username, and the password obtained from the keyring:

server = 'jabber-fernand0movilizado'
username = ''
password = keyring.get_password(server,username)

Finally, in the BOT_IDENTITY structure, in the backend you have selected, you
can put (XMPP backend, for example):

    'username': username,  # The JID of the user you have created for the bot
    'password': password,       # The corresponding password for this user

In this way, when the bot starts it gets the credentials from the keyring. You can use a similar approach in your own programs and, if you do not need the program to start unattended, you can protect the keyring entries with a password.

A robotic leg

This post is more of a note to self than something of actual value. Anyway, there were some advances and some tabs open in the browser, and I felt it was better to share them here for future reference than to wait for an (eventual) finishing point of the project.
Maybe it will be useful for somebody.

In A mobile camera we anticipated the idea of making something that can move. In fact, the inspiration was Stubby, a full-featured miniature hexapod. There is more info in Stubby the (Teaching) Hexapod.

My only (and not small) problem with that design was the manual skills and the tools needed: wood cutting, machining,… I was wandering (physically and mentally) around my possibilities, and one of the solutions was to use wooden sticks; I'd need to cut and drill them, but that was not too scary for me.

The Internet is full of projects such as A spider called “Chopsticks”, which uses chopsticks for the legs, and Popsicle Stick Hexapod R.I.P. Their ideas were similar to my own and they gave me some encouragement. I had also discovered Build a 12-Servo Hexapod; it has some limitations but shows some interesting ideas.
Just to comply with my initial statement (more tabs!), we can see some more projects like
Hexpider, with a different design (it can even write!), and 6-legged robot project. All of them have helped me by providing insight and ideas about the movement and articulations (at a very basic level; some elaboration is needed that will be shown in further posts).

With these ideas I visited a DIY store to get some inspiration. I quickly forgot the idea of wooden sticks because I discovered some plastic tubes that seemed more convenient: they should be easier to cut and they should be lighter. You can also find aluminium sticks that would have a nicer look, but at this stage of the project the plastic tubes seemed easier to use.


[Photo by Fernando Tricas García (@ftricas)]

My supposition was correct and this material is easy to manage: we can make holes and fix the servo with a screw, as it can be seen in the following image:

[Photo: La pata #raspi #servo]

The picture is not very good, but it should be enough to get the idea of how the different parts are joined. I'm very grateful for similar pictures from other projects that provided hints about how to proceed. As you can see, I've chosen a design with three servos for each leg.

We have used cable ties to join some parts; maybe we'll need better methods to improve these unions. It should be easy to make more 'aggressive' operations if needed.

It was quite surprising to see how fast I could build the leg with these tools; we will see if I can go as fast in the future (hint: no).

For the movement of the legs we had some experience with servos (Adding movement: servos). The whole code was rewritten following the ideas of PiCam.

[Video: Rápido-lento-rápido #raspi #err #errbot]

On the software side, I will only show a couple of small programs that can be found at servo.

The first one can move each joint in an independent way (we wanted to be able to test them from the command line).

We have the three joints associated to three GPIO ports:

servoGPIO = [17, 23, 15]

and we will use a function for the transformation of an angle into the needed pulse width:

def angleMap(angle):
    return int((round((1950.0/180.0), 0)*angle)/10)*10 + 550
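For instance, assuming the usual servo pulse range of roughly 550 to 2500 microseconds, this function maps the 0-180 degree range as follows (a quick check in Python 3 syntax; note that 180 degrees lands slightly above the typical maximum):

```python
def angleMap(angle):
    # round(1950/180) = 11 microseconds per degree, quantized to steps of 10,
    # with 550 as the minimum pulse width
    return int((round((1950.0/180.0), 0)*angle)/10)*10 + 550

print(angleMap(0))    # 550
print(angleMap(90))   # 1540
print(angleMap(180))  # 2530
```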

The movement function is very simple:

def movePos(art, pos):
    servo = PWM.Servo()  # PWM comes from the RPIO module
    print art            # the GPIO port we are using
    servo.set_servo(art, angleMap(pos))

Shame on me: I discovered that I needed a final delay because, when the program finishes, it stops sending the needed pulses and the movement is not completed.

Finally, in

movePos(servoGPIO[int(sys.argv[1])], int(sys.argv[2]))

we are passing as the first argument the joint we are moving (mapped to the adequate GPIO port). The second argument is the angle. Notice that no bounds or limit checking is done, so bad things can happen if the parameters are not adequate.

The second program is a simulation of the movements the leg needs in order to walk: raise the leg, move it forward, lower it, move it backwards, and so on…
Better movements will be needed in the future, but do not forget that this is just a proof of concept.

Now we can see a video with a sequence of these movements repeated several times that I recorded with my son’s help.

[Video: En movimiento #servo #raspi]

We can now see another video with some previous tests, taking advantage of the wonderful YouTube video editor, with two joints and with three joints:

The next steps will be to construct the other legs (four or six?) and we'll need to see if we need more hardware (maybe we will need more inputs/outputs in order to control all the servos for the legs, and maybe something more). We will also need something for the 'body'.

This post was published originally in Spanish, at: Una pata robótica.

Publishing in Facebook each post from this blog

Some time ago we published the way of Extracting links from a webpage with Python as a first step towards publishing complete blog posts on Facebook. The idea was to prepare the text obtained from an RSS feed in order to publish it on a Facebook page (or in other places). Let us remember that Facebook does not allow (or I didn't find the way) to include HTML in page posts.
We had previously presented some related ideas in Publishing in Twitter when posting here, in that case for Twitter.

Now we are going to use the Facebook API and an unofficial package which implements it in Python, Facebook Python SDK.

We can install it with

fernand0@aqui:~$ sudo pip install facebook-sdk

It will need `BeautifulSoup` and `requests` (and maybe some other modules). If they are not installed in our system, we will get the adequate ‘complaints’. We can install them as usual with pip (or our preferred system).

We need some credentials in order to publish on Facebook. First we have to register our application at Facebook My Apps (button 'Add a new App'; there are plenty of tutorials if you need help). We will use the 'advanced setup' (registering web applications seems to be easier) and some identifiers will be provided (mainly the OAUTH token; we can find them at My Apps, following the link for our app). We will store this token in ~/.rssFacebook, and it will later be used in our program.
This configuration file is similar to this one
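Judging from how the program reads it later (a Facebook section with an oauth_access_token key), it presumably looks something like this (the token value is a placeholder):

```ini
[Facebook]
oauth_access_token: XXXXXXXXXXXXXXXXXXXX
```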


The program is very simple; it can be downloaded from (link to the version commented here; there have been some further evolutions).

The program starts by reading the configuration of the available blogs, and we need to choose one (if there were just one, no selection would be needed):

config = ConfigParser.ConfigParser()'~/.rssBlogs'))

i = 1
print "Configured blogs:"

for section in config.sections():
        print str(i), ')', section, config.get(section, "rssFeed")
        i = i + 1

if (i > 2):
        i = raw_input('Select one: ')
        i = 1

print "You have chosen ", config.get("Blog"+str(i), "rssFeed")

The configuration file must contain a section for each blog; each one of them will have an RSS feed, the Twitter account and the name of the Facebook page. For this site it would have the following entries:
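The entries would look something like this (the values are placeholders, following the keys the program reads: rssFeed, twitterAc, pageFB):

```ini
[Blog1]
rssFeed: http://example.com/blog/feed
twitterAc: myTwitterAccount
```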


Notice that the Facebook account is empty: this blog has not a Facebook page (yet?).
We could have a second blog:
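Again a placeholder sketch, this time with a Facebook page configured:

```ini
[Blog2]
rssFeed: http://example.com/otherblog/feed
twitterAc: myOtherTwitterAccount
pageFB: MyFacebookPageName
```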


This configuration file can have yet another field, linksToAvoid, that will be used for selecting some links that won't be shown (I have another blog and in this way I can avoid the categories' links).

if (config.has_option("Blog"+str(i), "linksToAvoid")):
        linksToAvoid = config.get("Blog"+str(i), "linksToAvoid")
        linksToAvoid = ""

We will read now the last post of the blog and we will extract the text and links in a similar way as seen in Extracting links from a webpage with Python (not shown here).

And now the links we want to avoid:

                print linksToAvoid
                print re.escape(linksToAvoid)
                print str(link['href'])
                print, link['href'])
                if ((linksToAvoid == "")
                        or (not, link['href']))):
                        link.append(" ["+str(j)+"]")
                        linksTxt = linksTxt + "["+str(j)+"] " + link.contents[0] + "\n"
                        linksTxt = linksTxt + "    " + link['href'] + "\n"
                        j = j + 1

We then check whether the post contains some image. If not, we will not add one, and Facebook will choose the first image it can find on our page.
We could configure a default image to be used when needed (in case we have not included an image in our post and we do not like the one chosen by Facebook), or we could always try to add some image to our posts.

if len(pageImage) > 0:
        imageLink = pageImage[0]["src"]
        imageLink = ""

Now we will read the Facebook configuration and we will ask for the list of pages the user manages (remember that we have established the desired one in ~/.rssBlogs):'~/.rssFacebook'))
oauth_access_token = config.get("Facebook", "oauth_access_token")

graph = facebook.GraphAPI(oauth_access_token)
pages = graph.get_connections("me", "accounts")

We could define more Facebook accounts but I have not tested this feature, so maybe it won’t work as expected (and, of course, there is no way to select one of them).

for i in range(len(pages['data'])):
        if (pages['data'][i]['name'] == pageFB):
                print "Writing in... ", pages['data'][i]['name']
                graph2 = facebook.GraphAPI(pages['data'][i]['access_token'])
                graph2.put_object(pages['data'][i]['id'],
                        "feed", message = theSummary, link=theLink,
                        picture = imageLink,
                        name=theTitle, caption='',
                        description=linksTxt)

statusTxt = "Publicado: "+theTitle+" "+theLink

This program has been tested during the last months and the solution seems to work (maybe you'll want to check the latest version, which has some bugs corrected).
The most cumbersome part was getting the credentials and registering the app (with a 'fake' production step; 'fake' because I'm the only user of the app).

This post was published originally (in Spanish) at: Publicar en Facebook las entradas de este sitio.

If you have doubts, comments, ideas… Please comment!

Extracting links from a webpage with Python

Some time ago we presented a small program that helped us to publish in Twitter: Publishing in Twitter when posting here.

Later I started having a look at the Facebook API and doing some tests. I discovered that Facebook does not allow publishing links with their anchor text: it transforms them into links that you can click on, but with the URL itself as the text. I wanted to publish the whole text on Facebook (it will not easily show the whole entry, just a small part and a link to click in order to see more; and so on).

The netiquette of some mailing lists has always caught my attention: they add numbers near the anchor text of links and write those numbers with the corresponding links at the end. See, for example, this Support page.

I decided to follow this path in order to publish on my Facebook pages. In the following I will try to explain some parts of the program that does this. The code is available at (version at this moment; maybe there will be changes later).

There are several ways to extract links: regular expressions, some HTML parser (in our Blogómetro project we used this approach with the Simple SGML parser). Looking for alternatives I found Beautiful Soup, as a fast way to parse a web page and I decided to give it a try.

In order to use it we need some modules. We will publish in Facebook using the RSS feed, so we will also need to include the ‘feedparser’ module.

import feedparser
from bs4 import BeautifulSoup
from bs4 import NavigableString
from bs4 import Tag

Now we can read the RSS feed:

feed = feedparser.parse(url)

for i in range(len(feed.entries)):

And now the magic of BeautifulSoup can start:

soup = BeautifulSoup(feed.entries[i].summary)
links = soup("a")

That is, we parse the RSS entry looking for links (the "a" tag). The text of the entry is in the 'summary' field, and we are interested in the entry at position 'i'. The call returns the list of HTML elements with that tag.

In some entries we include images, but we do not want them to appear in the text. For this we use 'isinstance' to check whether there is another HTML tag inside the text. We will go through the list of links together with a counter 'j' in order to associate the numbers and the links (in the original HTML; we have not modified it yet).

	j = 0
	linksTxt = ""
	for link in links:
		if not isinstance(link.contents[0], Tag):
			# We want to avoid embedded tags (mainly <img ... )
			link.append(" ["+str(j)+"]")
			linksTxt = linksTxt + "["+str(j)+"] " + link.contents[0] + "\n"
			linksTxt = linksTxt + "    " + link['href'] + "\n"
			j = j + 1

The content of the link (now we know that it is not an image or another HTML tag) will be available at `link.contents[0]` (of course, there could be more content, but our links tend to be simple).

        linksTxt = linksTxt + "["+str(j)+"] " + link.contents[0] + "\n"

and the link is at `link['href']`.

                linksTxt = linksTxt + "    " + link['href'] + "\n"

Now we need the text of the HTML.

        print soup.get_text()

Sometimes this text can have line breaks, spaces, … We could suppress them, but we usually have very simple entries, so we are not going to pay attention to this problem.

Now, we can add the links at the end:

        if linksTxt != "":
                print "Links :"
                print linksTxt

Publishing in Twitter when posting here

I don't think RSS is dead. But we can see how many people are using social networking sites to get their information. For this reason I was publishing the entries of my blogs using services like IFTTT and They are easy to use and they work pretty well. Nevertheless, one is always wondering whether one could prepare one's own programs to manage these publications and learn something new on the way.

I started with Facebook publishing, but I'm presenting here a program for Twitter publishing: we only need to publish the title and the link (and, maybe, some introductory text).

I found the twitter project as a starting point. It has an important part of the work already implemented. We can install it using pip:

fernand0@here:~$ sudo pip install twitter

It needs `BeautifulSoup` and maybe some other modules. If they are not available in our system we will get the adequate ‘complaints’.

Now we can execute it.
This step is useful in order to do the authentication steps with Twitter and get the OAuth token; our program will not have to deal with this part and will be smaller and simpler.
Not so long ago it was possible to send tweets with just the username and password, but Twitter decided to start using a more sophisticated system based on OAuth.

fernand0@here:~$ twitter

The program launches a browser for authentication and then giving our app the adequate permissions. This generates the tokens and other information needed to interact with Twitter. They will be stored at `~/.twitter_oauth` (in a Unix-like system, I’d be happy to know about other systems) that we will reuse in our own application.

The program is quite simple; it can be downloaded from (V.2014-12-07) (link to the commented version; the program has been updated to correct bugs and add features).

We will start reading the configuration:

config = ConfigParser.ConfigParser()'~/.rssBlogs'))
rssFeed = config.get("Blog1", "rssFeed")
twitterAc = config.get("Blog1", "twitterAc")

This configuration file must contain a section for each blog (this program uses only the configuration of the first one). Each section will contain the RSS feed, the name of the Twitter account and the name of the Facebook account (it can be empty if it won't be used). For example, for this blog it would be:
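The entries would look something like this (placeholder values; the Facebook field is left empty because this blog has no Facebook page):

```ini
[Blog1]
rssFeed: http://example.com/blog/feed
twitterAc: myTwitterAccount
```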


It also needs the Twitter configuration:'~/.rssTwitter'))
CONSUMER_KEY = config.get("appKeys", "CONSUMER_KEY")
CONSUMER_SECRET = config.get("appKeys", "CONSUMER_SECRET")
TOKEN_KEY = config.get(twitterAc, "TOKEN_KEY")
TOKEN_SECRET = config.get(twitterAc, "TOKEN_SECRET")

We can use the tokens that were generated before (we can copy them from the app); in my system the helper is at `/usr/local/lib/python2.7/dist-packages/twitter/` and the tokens are stored at `~/.twitter_oauth`.

The configuration file is as follows:
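Judging from the keys read above, ~/.rssTwitter presumably looks like this (placeholder values; the second section is named after the Twitter account):

```ini
[appKeys]

[myTwitterAccount]
TOKEN_SECRET: yyyyyyyyyyyyyyyyyyyy
```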


Notice that you can configure as many Twitter accounts as needed. The name of the second section is the same as the one used in the previous configuration file.

Now we can read the RSS feed in order to extract the required data:

feed = feedparser.parse(rssFeed)

i = 0 # It will publish the last added item

soup = BeautifulSoup(feed.entries[i].title)
theTitle = soup.get_text()
theLink = feed.entries[i].link

For this, we will use `feedparser` in order to download the RSS feed and process it.

We are choosing the first entry (position 0), which will be the most recently published one. For Twitter we just need the title and the link.
We use BeautifulSoup to process the title, in order to strip any tags it may contain (CSS, HTML entities, ...).

And finally, we will build the tweet:

statusTxt = "Publicado: "+theTitle+" "+theLink

We can now proceed to the steps of identification, authentication and publishing:

t = Twitter(auth=OAuth(TOKEN_KEY, TOKEN_SECRET,
                       CONSUMER_KEY, CONSUMER_SECRET))
t.statuses.update(status=statusTxt)


This entry was originally published (in Spanish) at: Publicar en Twitter las entradas de este sitio.

Firing a camera when somebody is around

After the summer break we are returning with a small project. We added movement to our camera (Adding movement: servos) and with this we were able to change the orientation of the camera in the room (A mobile camera) but we weren’t able to see interesting things most of the time (it is difficult to find the adequate moments).

I was curious about proximity sensors, so I decided to give them a try buying a couple of HC-SR04, which work with ultrasounds.

[Photo: Ojos que no ven]

The objective is to take a picture when somebody/something passes in front of the camera: we measure the distance to the obstacle in front of the sensor and, when a change is observed, we can suppose that something is there.

I did some experiments with the Raspi but the results were unsatisfactory: the measures are not accurate (even though it is easy to filter out the clearly wrong ones) and this is not adequate for our purposes.

Just in case, you can check an example in HC-SR04 Ultrasonic Range Sensor on the Raspberry Pi.

The connections:

[Photo: Probando el sensor de distancia #raspi]

The problems seem to be related to the fact that the raspi is not very good at real time and minor variations in time measurement can appear (with these sensors we are measuring the time that some sound pulses take to go and return until they find some obstacle).

Since we had an Arduino, we decided to check whether it was more adequate. This would give us:

– More accurate measures.
– Learning the way to communicate the Raspberry Pi and the Arduino.

Of course, this will open the door for new experiments.

The connections with the Arduino:

[Photo: Probando el sensor de distancia #arduino #raspi]

Following HC-SR04 Ultrasonic Sensor, it was quite easy to prepare the Arduino sketch and to connect the sensor (the code is available at sketch.ino in its current form; there can be some changes in the future).

We found that the measures were more accurate: sometimes there can be a difference of one or two centimeters, but this is not a problem when we are trying to detect something passing by, because in that case there should be a difference of 20 cm or more.

Now we needed a way to connect the Arduino to the Raspberry Pi (in order to reuse some previous code).

Arduino sends text that can be easily read and processed at the Raspberry.
There seem to be several ways to do the communication: a serial port over USB (Connect Raspberry Pi and Arduino with Serial USB Cable), using I2C (Raspberry Pi and Arduino Connected Using I2C) and by means of GPIO (Raspberry Pi and Arduino Connected Over Serial GPIO).
I chose the first one but I should experiment with the others in the near future.


import serial

# The port name is an assumption; it may differ on your system
ser = serial.Serial('/dev/ttyACM0', 9600)
dist = int(ser.readline().strip())

while 1:
	distAnt = dist
	dist = int(ser.readline().strip())

	if abs(distAnt - dist) > 10:
		print "Alert!!"

That is: we store the previous measurement (distAnt), we obtain a new one (dist), and we raise an alert if the difference is greater than 10 cm.
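This detection logic is easy to test without any hardware; here is a sketch in Python 3, with the serial reads replaced by a list of measurements in cm (the function name is made up for the example):

```python
def alerts(measurements, threshold=10):
    """Return the indices where consecutive readings differ by more than threshold."""
    hits = []
    for i in range(1, len(measurements)):
        if abs(measurements[i] - measurements[i - 1]) > threshold:
            hits.append(i)
    return hits

# 120 cm to the far wall; something passes at ~55 cm and then leaves
print(alerts([120, 121, 119, 55, 54, 120]))  # [3, 5]
```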

Since we wanted to take a picture, we have reused some code that can be seen at: A camera for my Raspberry Pi and, following previous ideas, we’ll send it by email (Sending an image by mail in Python).

The code can be seen at

There was a problem: we establish the connection with the mail server directly in order to send the image. We cannot avoid the time consumed by the camera (which is not negligible), but we can avoid waiting for the mail to be sent.
For this we create a subprocess (see multiprocessing) which does this part of the work.

from multiprocessing import Process

p = Process(target=mail, args=(name, who))

That is, we take the picture and then we launch a new process that performs the sending. Since I had no previous experience with parallel code in Python, I'm not sure whether some process cleaning/ending is needed. No synchronization nor waiting for the process to finish is needed, and everything seems to be working well.

Some final remarks: none of these processes is really fast; nobody should expect to use this code as a ‘trap’ for taking pictures of a flying bird (even a child running won’t be captured).

What can we do now?
We could mount the sensor on one of our servos (as in A mobile camera) and construct a map of the room; this would be a different way to detect changes. When something is noticed we can scan the space with the camera, taking several pictures (or even recording a video; I've been avoiding video until now, but we will surely try it in the future).
Of course, you may have suggestions or questions here, and there are surely more ideas out there.
One more remark: the sensor will work even when there is not enough light to take the picture; maybe we could add a light sensor to avoid firing the camera (or, perhaps, illuminate the scene when we are taking a picture).

A mobile camera

Once we have a bot which allows us to control our project remotely (My second bot) and we know how to move our servos (Smooth movement with servos) it is now the time to put the camera over them (A camera for my Raspberry Pi).
Let us remember that the control is done using XMPP (for example with programs such as Pidgin, Google Talk or our preferred IM client); the idea was to avoid opening ports in the router while still being able to send instructions to the camera from anywhere.

We selected a couple of boxes for the project (they are cheap and quite simple to adapt to our needs). In the bigger box we made two holes (this way we could mount two servos, even though in the end we only used one of them):

[Photo: Hemos pintado la caja #raspi]

Inside the box we made the connections (batteries for the servos, and connections for the control from the Raspberry Pi, which is outside of the box):
[Photo: Caja como soporte para los motores]

The camera goes in a smaller box that will be attached to the selected servo.

[Photo: Y tenemos un prototipo de mejor aspecto #raspi]

When we send the adequate instructions, the camera goes to the selected position, it stops for taking the picture and it sends it by mail. Finally, it returns to the initial position.
We can see all the sequence in the following video.

The project’s code can be found at err-plugins (it can have further evolutions; the main code in its current state can be seen at

In the last weeks a similar project has been published: “Raspberry Eye” Remote Servo Cam. It has two main differences: it can move the camera on two axes (our project can only move left and right) and it is controlled using a web interface.

So, what’s next?
I have several ideas, but I haven't decided what to do: it would be nice for the camera to have some autonomy (motion detection? detection of changes in the scene?); I wouldn't mind adding some more movement either (maybe adding wheels so that the camera can take pictures in different parts of the house? this hexapod really impressed me). Going further, maybe we could think about other control devices (wearables?).

Of course, please feel free to comment, discuss and make suggestions… All comments are welcome.

Who is in my network?

It can be useful to know which devices are connected to our home network:
you can always assign a fixed IP to each device, but it is a process that can be
painful (if you are not used to managing these things) and does not scale
well when new devices appear (a frequent thing nowadays).

For this reason I enjoyed it very much when I discovered Fing, a tool for discovering devices on our network (it can be installed on Android devices, iOS devices, and desktop computers). I wanted to have it on my laptop (now this work would not be necessary, since they have released the tool for several operating systems) and I was looking for a solution.

The suggestions were twofold: nmap and arp should help with this, but I'm not familiar with them. When I found the WiFinder project I decided to try to adapt it for my purposes. I forked the project and started to adapt it.

The result is a small program (link to the commented version; there can be further evolutions). It should have a better input/output system and I would like to add some features, but the main ideas are there.
First of all, the code related to the port scanning:

import nmap  # import the module

nm = nmap.PortScanner()  # create an instance of nmap.PortScanner

Here is the actual scanning instruction:

nm.scan(hosts='', arguments='-n -sP -PE -T5')
# executes a ping scan

hosts_list = [(nm[x]['addresses']) for x in nm.all_hosts()]

From the obtained list we will keep the information using as an index the MAC address (which is the part that will remain constant for each device), and including the new discovered devices:

if not ipList.has_key(addresses['mac']):
	ipList[addresses['mac']] = ("", addresses['ipv4'])

The data structure is a hash indexed by the MAC address that contains the IP (than can change at any time) and a name that we will assign to each device (in a similar way as done in Fing).
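The bookkeeping around this idea is just a dictionary update; a sketch in Python 3, with the nmap results replaced by a literal list (field names follow the code above; the function name is made up):

```python
def update_devices(ip_list, scan_results):
    """Merge a scan into the MAC-indexed table, keeping the names we assigned."""
    for addresses in scan_results:
        mac = addresses.get('mac')
        if mac is None:
            continue  # some hosts do not report a MAC address
        if mac not in ip_list:
            ip_list[mac] = ("", addresses['ipv4'])    # new device, no name yet
            name = ip_list[mac][0]                    # keep the assigned name
            ip_list[mac] = (name, addresses['ipv4'])  # refresh the (possibly new) IP
    return ip_list

devices = {"aa:bb:cc:dd:ee:ff": ("laptop", "")}
scan = [{"mac": "aa:bb:cc:dd:ee:ff", "ipv4": ""},
        {"mac": "11:22:33:44:55:66", "ipv4": ""}]
print(update_devices(devices, scan))
```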

We are using pickle for persistence:

fIP = open(fileName, "w")
pickle.dump(ipList, fIP)

Finally, I have some doubts about Fing's inner workings: it does not need special privileges (or it should not need them, since it started as a mobile app). But nmap needs to be run as root to obtain MAC addresses (the program must be executed with sudo and the user needs the adequate permissions).
Since it is dangerous to have a program running with root privileges, I decided to learn how to drop them when they are not needed anymore. I found Dropping Root Permissions In Python and I included the function drop_privileges:

user_name = os.getenv("SUDO_USER")
pwnam = pwd.getpwnam(user_name)

Here we are obtaining the user’s data.

# Try setting the new uid/gid
os.setgid(pwnam.pw_gid)
os.setuid(pwnam.pw_uid)

We are assigning the user's privileges to the process, and in consequence dropping the root privileges.

This has to be done in the program when we do not need these high privileges anymore (that is, in our case, when we do not need nmap anymore).
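Putting the pieces together, the function looks roughly like this (a sketch along the lines of that post, with error handling omitted):

```python
import os
import pwd

def drop_privileges():
    """Give up root privileges and become the user who invoked sudo."""
    if os.getuid() != 0:
        return  # we are not root: nothing to drop
    user_name = os.getenv("SUDO_USER")
    pwnam = pwd.getpwnam(user_name)
    # Set the group first: once the uid is dropped we cannot change it anymore
    os.setgid(pwnam.pw_gid)
    os.setuid(pwnam.pw_uid)
```

In our program this would be called right after the nm.scan(...) call, since that is the last operation needing root.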

If you have ideas for improvement, comments, questions…

Smooth movement with servos

One of the main problems with servos is that they move quite fast, as can be seen in the video we included in Adding movement: servos.
With the setup I had imagined, this was a problem: the camera has a non-negligible weight and if we put something on top of the servo the whole thing can become unstable. See, for example:

[Video: Más pruebas #frikeando #voz #motores #raspi #c3po]

The solution for this problem is quite simple: when we want to move to a certain position, we can reach it by means of a series of small steps. We can indicate a set of successive positions for the servo, each one a bit closer to the final destination. In this way, even with fast movements, the camera is more or less stable.

The code could be similar to the one we can see here:

def move(self, servo, pos, posIni=MIN, inc=10):

	posFin = posIni + (MAX - MIN) * pos
	steps = abs(posFin - posIni) / inc

	print "Pos ini", posIni
	print "Pos fin", posFin
	print "Steps", steps

	if pos < 0:
		pos = -pos
		sign = -1
		sign = 1

	for i in range(int(steps)):
		# each step: one small increment towards posFin, then a short pause
		self.servo.set_servo(servo, int(posIni + sign * i * inc))
		time.sleep(VEL)


That is, if we start at a position (posIni) and we want to move a certain fraction of the available range (a number between 0 and 1), we can compute the final position if we know the total range (MAX - MIN):

posFin=posIni + (MAX-MIN)*pos

And then, we can compute the needed steps to reach this destination; if we use increments of 10 (inc=10):

steps=abs(posFin - posIni) / inc

We use the absolute value because the movement can be forward or backward (depending on the starting point of the movement). This is handled by this conditional:

if pos < 0:

Finally, we use a for loop to reach the destination:

for i in range(int(steps)):
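The walkthrough above can be condensed into a small self-contained sketch that just computes the sequence of intermediate positions. MIN, MAX and the step size are example values here; the real limits depend on your servo:

```python
MIN, MAX = 50, 250   # example position limits; adjust for your servo
INC = 10             # step size

def step_positions(pos, posIni=MIN, inc=INC):
    """Return the intermediate positions from posIni toward
    posIni + (MAX - MIN) * pos, moving inc units at a time."""
    posFin = posIni + (MAX - MIN) * pos
    steps = int(abs(posFin - posIni) / inc)
    sign = -1 if pos < 0 else 1
    return [posIni + sign * i * inc for i in range(1, steps + 1)]
```

Feeding each of these positions to the servo, with a short pause in between, is what produces the smooth movement; with the example limits, step_positions(0.5) yields ten positions ending at 150.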

The result can be seen in the following video:

Video: "Montamos la cámara en el motor que se mueve más despacio" ("We mount the camera on the motor that moves more slowly") #raspi, posted by Fernando Tricas García (@ftricas)

There we can observe a forward and a backward movement (to recover the initial position) with an improvised model.
The speed can be controlled with the time between steps (the VEL value).

Maybe we should have chosen another type of motor, but this approach let us solve the problem.

My second bot

In Raspberry Pi: ¿qué temperatura hace en mi casa? ("What is the temperature at my house?"; only in Spanish, sorry) we presented our first attempt at building a bot. It allowed us to interact with the Raspberry Pi from anywhere, provided we had an Internet connection. In this video we can see that interaction using IRC.

I tested SleekXMPP and phenny, but I found some limitations and continued my search. When I found err I discovered that it was under development at the time and had a somewhat active community on Google+, Err. It provides a modular architecture for adding features to the bot.

My first steps were to adapt some tests I had programmed for phenny and to add the ability to take pictures with my cameras and send them by email. The code is at err-plugins (it will change in the future, so we will pay attention to the current version):

The first one is pruebas.plug. It contains the meta-information needed to define the module, following the bot's syntax:

[Core]
Name = Pruebas
Module = pruebas

[Documentation]
Description = let's try things !

And the file contains the actual code for the programmed actions. For example, the following code takes a picture and then sends it by mail:

@botcmd
def foto(self, msg, args):
	"""Take a picture"""
	quien = msg.getFrom().getStripped()
	yield "I'm taking the picture, wait a second "
	if args:
		try:
			cam = int(args)
		except ValueError:
			cam = 0
	else:
		cam = 0
	yield "Camera %s" % cam
	# Placeholder name for the picture-taking helper
	self.take_picture("/tmp/imagen.png", cam)
	yield "Now I'm sending it"
	self.mail("/tmp/imagen.png", quien)
	my_msg = "I've sent it to ... %s" % quien
	yield my_msg

The first line indicates that this function defines an instruction for the bot. The name of the function will be the command we will need to send by IM (with a configurable prefix that serves to differentiate between instructions for the bot and other strings).
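As a rough sketch of the idea (outside Errbot, so the decorator and messaging machinery are omitted here): a command implemented as a generator can yield several partial replies, and the bot framework sends each one back as a separate message:

```python
def foto(args):
    """Sketch of a generator-style command: each yield would become
    a separate chat reply in the bot framework."""
    yield "I'm taking the picture, wait a second"
    cam = int(args) if args.strip().isdigit() else 0  # default camera 0
    yield "Camera %s" % cam
    yield "Now I'm sending it"
```

Iterating over foto("1") produces the three progress messages one by one, which is what lets the bot report progress while a slow operation (taking and mailing the picture) is still running.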

In our case, the instruction

    .foto

would execute a function that is almost the same as the one commented in Sending an image by mail in Python.

The main differences are:

  • It gets its parameters from the function call (Err manages this)

    def foto(self, msg, args):

  • It replies to the mail address of the person who sent the order:

    quien = msg.getFrom().getStripped()
  • The argument can be 0, 1 or absent (no validation is done) because we have two cameras attached to our raspi. By default (no parameter provided, or a parameter that cannot be interpreted) it uses camera 0.
  • Now it replies telling us the chosen camera:

    yield "Camera %s"%cam

  • And now it calls the actual function in charge of taking the picture; its parameters are very similar to the ones commented in a previous post (the name of the file and the chosen camera; the helper appears here with a placeholder name):

    self.take_picture("/tmp/imagen.png", cam)

  • Then it calls the function that will send the picture to the previously defined mail address; the parameters are the name of the file and this address.

    self.mail("/tmp/imagen.png", quien)

  • Finally, it uses yield again to reply, finishing the process.
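The argument handling described in the list above can be isolated into a small helper (a sketch for illustration, not part of the original plugin):

```python
def choose_camera(args):
    """Return the camera index from the command argument,
    falling back to camera 0 when it is absent or not a number."""
    if args:
        try:
            return int(args)
        except ValueError:
            return 0
    return 0
```

choose_camera('1') selects the second camera, while choose_camera('') and choose_camera('whatever') both fall back to camera 0.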

If we look at the code, the main difference for these two functions is that they do not have a @botcmd line; they are internal functions and are not available as bot commands. They need some configuration options (as presented in Sending an image by mail in Python).

Errbot manages this by means of:

def get_configuration_template(self):
	return {'ADDRESS': u'', 'FROMADD': u'',
		'TOADDRS': u'', 'SUBJECT': u'Imagen',
		'SMTPSRV': u'', 'LOGINID': u'changeme',
		'LOGINPW': u'changeme'}

It is a dictionary with the parameters we need to configure.
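Conceptually, configuring the plugin amounts to overriding some keys of that template; a minimal sketch of the merge (an illustration of the idea, not Errbot's actual implementation):

```python
TEMPLATE = {'ADDRESS': u'', 'FROMADD': u'',
            'TOADDRS': u'', 'SUBJECT': u'Imagen',
            'SMTPSRV': u'', 'LOGINID': u'changeme',
            'LOGINPW': u'changeme'}

def apply_config(template, supplied):
    """Start from the template defaults and overwrite
    only the keys the user supplied."""
    config = dict(template)
    config.update(supplied)
    return config
```

Keys that are not supplied keep their template defaults, so a partial configuration (for example, only LOGINPW) is enough.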

If we send the order via IM:

.config Pruebas

In this case, Pruebas is the name of the module, and we have selected the dot (.) as the prefix indicating that the following string is an instruction for the bot. The config instruction returns the current configuration (if the module has not been configured it returns the defined template; otherwise it returns the actual values). These values can be used as a template for the module configuration.
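The prefix convention can be illustrated with a tiny parser (again a sketch of the idea, not the bot's real dispatcher):

```python
PREFIX = '.'  # the character chosen to mark bot instructions

def parse_command(line):
    """Split an incoming IM line into (command, arguments)
    when it starts with the bot prefix; return None otherwise."""
    if not line.startswith(PREFIX):
        return None
    cmd, _, args = line[len(PREFIX):].partition(' ')
    return cmd, args
```

A line like ".config Pruebas" is split into the command "config" and the argument "Pruebas", while ordinary chat lines are ignored.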

.config Pruebas {'TOADDRS': u'', 'SMTPSRV':
u'', 'LOGINPW': u'mypassword',

We are almost done, soon we will be able to show the whole thing.