Storing credentials of your Python programs in the keyring


As I introduced in previous posts I’m a happy user of ErrBot (My second bot).
My main concern was to be able to control some functions in some machines without having to connect via ssh or exposing web pages to the world.

I wasn’t comfortable with the idea of storing my credentials in the config.py file, which is the suggested way to connect to the services with the selected backend (‘username’ and ‘password’).

In order to avoid this I’m proposing here an alternative approach, based on the keyring module. Provided you have it correctly installed, you can store your credentials there and use them from your Python programs. In this case, from ErrBot.

The module’s documentation shows how to store your credentials and so on; we will not replicate that here.

Notice that the credential storage relies on the operating system and, depending on the configuration, anybody logged into your account will have access to the credentials without a password (you’ll need this if you want a bot that can start unattended). We are only obtaining the added security of not having the credentials stored in the config.py file, but not much more.

Then you need to make some changes in the config.py file. First of all, importing the module:

import keyring

Later, before the BOT_IDENTITY section, you can add three variables: the server (needed in order to select the account in the keyring), the username, and the password retrieved from the keyring:

server = 'jabber-fernand0movilizado'
username = 'fernand0movilizado@gmail.com'
password = keyring.get_password(server,username)

Finally, in the BOT_IDENTITY structure, in the backend you have selected, you
can put (XMPP backend, for example):

    'username': username,  # The JID of the user you have created for the bot
    'password': password,       # The corresponding password for this user

In this way, when the bot starts it gets the credentials from the keyring. You can use a similar approach in your own programs and, if you do not need the program to start unattended, you can protect the keyring entries with a password.
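Putting the pieces together, the relevant fragment of config.py would look like this (a sketch: the server and account names are the examples used above, and BOT_IDENTITY will contain whatever other keys your backend needs):

```python
import keyring

server = 'jabber-fernand0movilizado'        # keyring service name (example)
username = 'fernand0movilizado@gmail.com'   # account stored under that service
password = keyring.get_password(server, username)

BOT_IDENTITY = {
    'username': username,  # the JID of the bot's account
    'password': password,  # fetched from the keyring, never written here
}
```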


A robotic leg

This post is more of a note to self than something of actual value. Anyway, there were some advances and some tabs open in the browser, and I felt it was better to share them here for future reference than to wait for an (eventual) finishing point of the project.
Maybe it will be useful for somebody.

In A mobile camera we anticipated the idea of making something that can move. In fact, the inspiration was Stubby, a full-featured miniature hexapod. There is more info in Stubby the (Teaching) Hexapod.

My only (and not small) problem with that design was the manual skills and the tools needed: wood cutting, machining,… I was wandering (physically and mentally) around my possibilities, and one of the solutions was to use wooden sticks; I’d need to cut them and make perforations, but that was not too scary for me.

The Internet is full of projects such as A spider called “Chopsticks”, which uses chopsticks for the legs, and Popsicle Stick Hexapod R.I.P. Their ideas were similar to my own and they gave me some encouragement. I had also discovered Build a 12-Servo Hexapod; it has some limitations but shows some interesting ideas.
Just to comply with my initial statement (more tabs!), we can see some more projects like
Hexpider, with a different design (it can even write!), and 6-legged robot project. All of them have helped me by providing insight and ideas about the movement and articulations (at a very basic level; some elaboration is needed that will be shown in further posts).

With these ideas I visited a DIY store looking for inspiration. I quickly forgot the idea of wooden sticks because I discovered some plastic tubes that seemed more convenient: they should be easier to cut and lighter. You can also find aluminium sticks that would have a nicer look, but at this stage of the project the plastic tubes seemed easier to use.

(Instagram photo: “Palo”, shared by Fernando Tricas García, @ftricas)

My supposition was correct and this material is easy to manage: we can make holes and fix the servo with a screw, as it can be seen in the following image:

(Instagram photo: “La pata #raspi #servo”, shared by Fernando Tricas García, @ftricas)

The picture is not very good, but it should be enough to get the idea of how the different parts are joined. I’m very grateful for similar pictures from other projects that provided hints about how to proceed. As you can see, I’ve chosen a design with three servos for each leg.

We have used cable ties for joining some parts; maybe we’ll need better methods to improve these unions. It should be easy to make more ‘aggressive’ operations if needed.

It was quite surprising to see how fast I could configure the leg with these tools; we will see if I can go so fast in the future (hint: no).

For the movement of the legs we had some experience with servos (Adding movement: servos). The whole code was rewritten following the ideas of PiCam.

(Instagram post: “Rápido-lento-rápido #raspi #err #errbot”, shared by Fernando Tricas García, @ftricas)

On the software side, I will only show a couple of small programs that can be found at servo.

The first one can move each joint in an independent way (we wanted to be able to test them from the command line): legOneMove.py.

We have the three joints associated to three GPIO ports:

servoGPIO = [17, 23, 15]

and we will use a function for the transformation of an angle into the needed pulse width:

def angleMap(angle):
    return int((round((1950.0/180.0), 0)*angle)/10)*10 + 550
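As a quick sanity check (Python 3 syntax here, since the pulse values do not depend on the hardware), we can print the pulse widths this mapping produces for a few angles:

```python
def angleMap(angle):
    # Same mapping as above: ~10.83 us per degree (rounded to 11),
    # snapped to 10 us steps, with a 550 us offset at 0 degrees.
    return int((round((1950.0/180.0), 0)*angle)/10)*10 + 550

for angle in (0, 45, 90, 135, 180):
    print(angle, "->", angleMap(angle))
# 0 -> 550, 45 -> 1040, 90 -> 1540, 135 -> 2030, 180 -> 2530
```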

The movement function is very simple:

def movePos(art, pos):
    servo = PWM.Servo()
    print art  # the GPIO pin we are driving
    servo.set_servo(art, angleMap(pos))
    time.sleep(VEL)  # leave time for the servo to reach the position

Shame on me: I discovered that I needed that last delay because when the program finishes it stops sending the needed pulses and the movement is not completed.

Finally, in

movePos(servoGPIO[int(sys.argv[1])], int(sys.argv[2]))

we pass as the first argument the joint we are moving (mapped to the corresponding GPIO pin). The second argument is the angle. Notice that no bounds or limit checking is done, so bad things can happen if the parameters are not adequate.
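A hedged sketch of that missing checking (the helper names and limits here are mine, not part of the original program): clamping the angle and validating the joint index before calling movePos would avoid most accidents.

```python
servoGPIO = [17, 23, 15]  # same pin list as above

def clampAngle(angle, low=0, high=180):
    # Keep the requested angle inside the servo's safe range.
    return max(low, min(high, angle))

def safeJoint(index):
    # Validate the joint index before mapping it to a GPIO pin.
    if not 0 <= index < len(servoGPIO):
        raise ValueError("joint index out of range: %d" % index)
    return servoGPIO[index]

# The call in the original program would then become something like:
# movePos(safeJoint(int(sys.argv[1])), clampAngle(int(sys.argv[2])))
print(clampAngle(200), safeJoint(1))  # 180 23
```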

The second program is legServo.py. It is a simulation of the movements the leg needs in order to walk: raise the leg, move it forward, lower it, move it backwards, and so on…
Better movements will be needed in the future, but do not forget that this is just a proof of concept.

Now we can see a video with a sequence of these movements repeated several times that I recorded with my son’s help.

(Instagram video: “En movimiento #servo #raspi”, shared by Fernando Tricas García, @ftricas)

We can now see another video with some previous tests, taking advantage of the wonderful YouTube video editor, with two joints and with three joints:

The next steps will be to construct the other legs (four or six) and to see if we need more hardware (maybe we will need more inputs/outputs in order to control all the servos of the legs, and maybe something more). We will also need something for the ‘body’.

This post was published originally in Spanish, at: Una pata robótica.

Publishing in Facebook each post from this blog

Some time ago we published the way of Extracting links from a webpage with Python as a first step towards publishing complete blog posts in Facebook. The idea was to prepare the text obtained from an RSS feed in order to publish it in a Facebook page (or elsewhere). Let us remember that Facebook does not allow (or I didn’t find the way) including HTML in pages’ posts.
We had previously presented some related ideas in Publishing in Twitter when posting here, in that case for Twitter.

Now we are going to use the Facebook API and an unofficial package which implements it in Python, Facebook Python SDK.

We can install it with

fernand0@aqui:~$ sudo pip install facebook-sdk

It will need `BeautifulSoup` and `requests` (and maybe some other modules). If they are not installed in our system, we will get the adequate ‘complaints’. We can install them as usual with pip (or our preferred system).

We need some credentials in order to publish on Facebook. First we have to register our application at Facebook My Apps, button ‘Add a new App’ (there are plenty of tutorials if you need help). We will use the ‘advanced setup’ (registering web applications seems to be easier) and some identifiers will be provided (mainly the OAuth token; we can find them at My Apps, following the link for our app). We will store this token in ~/.rssFacebook, and it will later be used by our program.
This configuration file is similar to this one:

[Facebook]
oauth_access_token:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

The program is very simple, it can be downloaded from rssToPages.py (link to the version commented here, there have been some further evolutions).

The program starts by reading the configuration of the available blogs, and we need to choose one. If there were just one, no selection would be needed:

config = ConfigParser.ConfigParser()

config.read([os.path.expanduser('~/.rssBlogs')])

print "Configured blogs:"

i=1
for section in config.sections():
        print str(i), ')', section, config.get(section, "rssFeed")
        i = i + 1

if (i > 2):
        i = raw_input ('Select one: ')
else:
        i = 1

print "You have chosen ", config.get("Blog"+str(i), "rssFeed")

The configuration file must contain a section for each blog; each one of them will have an RSS feed, the Twitter account and the name of the Facebook page. For this site it would have the following entries:

[Blog1]
rssFeed:https://makingfernand0.wordpress.com/feed/
twitterAc:makingFernand0
pageFB:

Notice that the Facebook account is empty: this blog does not have a Facebook page (yet?).
We could have a second blog:

[Blog2]
rssFeed:http://fernand0.github.io/feed.xml
twitterAc:mbpfernand0
pageFB:fernand0.github.io
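To make the layout of ~/.rssBlogs concrete, here is a minimal, self-contained sketch that parses such a configuration and lists its sections (Python 3’s configparser here; the original uses Python 2’s ConfigParser, but the file format is the same):

```python
import configparser

# The same two example blogs shown above, inlined for the sketch.
sample = """
[Blog1]
rssFeed:https://makingfernand0.wordpress.com/feed/
twitterAc:makingFernand0
pageFB:

[Blog2]
rssFeed:http://fernand0.github.io/feed.xml
twitterAc:mbpfernand0
pageFB:fernand0.github.io
"""

config = configparser.ConfigParser()
config.read_string(sample)  # with a real file: config.read([path])

for i, section in enumerate(config.sections(), start=1):
    print(i, ")", section, config.get(section, "rssFeed"))
```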

This configuration file can have yet another field, linksToAvoid, that will be used to filter out some links (I have another blog and in this way I can avoid the categories’ links).

if (config.has_option("Blog"+str(i), "linksToAvoid")):
        linksToAvoid = config.get("Blog"+str(i), "linksToAvoid")
else:
        linksToAvoid = ""

We will read now the last post of the blog and we will extract the text and links in a similar way as seen in Extracting links from a webpage with Python (not shown here).

And now we filter out the links we want to avoid (the first four prints are just debug traces):

                print linksToAvoid
                print re.escape(linksToAvoid)
                print str(link['href'])
                print re.search(linksToAvoid, link['href'])
                if ((linksToAvoid =="")
                        or (not re.search(linksToAvoid, link['href']))):
                        link.append(" ["+str(j)+"]")
                        linksTxt = linksTxt + "["+str(j)+"] " + link.contents[0] + "\n"
                        linksTxt = linksTxt + "    " + link['href'] + "\n"
                        j =  j + 1
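This filter is easy to check in isolation; a minimal sketch of the same logic (the keepLink helper is my name, not part of the original program):

```python
import re

def keepLink(href, linksToAvoid):
    # An empty pattern means "avoid nothing"; otherwise drop any
    # href that matches the configured regular expression.
    return linksToAvoid == "" or not re.search(linksToAvoid, href)

links = ["https://example.org/post/42",
         "https://example.org/category/python",
         "https://example.org/about"]
print([h for h in links if keepLink(h, "category")])
# ['https://example.org/post/42', 'https://example.org/about']
```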

We then check if the post contains an image. If not, we will not add one, and Facebook will pick the first image it can find on our page.
We could configure a default image to be used when needed (when we have not included an image in our post and we do not like the one chosen by Facebook), or we could always try to add an image to our posts.

if len(pageImage) > 0:
        imageLink = (pageImage[0]["src"])
else:
        imageLink = ""

Now we will read the Facebook configuration and ask for the list of pages the user manages (remember that we have set the desired one in ~/.rssBlogs):

config.read([os.path.expanduser('~/.rssFacebook')])
oauth_access_token= config.get("Facebook", "oauth_access_token")

graph = facebook.GraphAPI(oauth_access_token)
pages = graph.get_connections("me", "accounts")

We could define more Facebook accounts but I have not tested this feature, so maybe it won’t work as expected (and, of course, there is no way to select one of them).

for i in range(len(pages['data'])):
        if (pages['data'][i]['name'] == pageFB):
                print "Writing in... ", pages['data'][i]['name']
                graph2 = facebook.GraphAPI(pages['data'][i]['access_token'])
                graph2.put_object(pages['data'][i]['id'],
                        "feed", message = theSummary, link=theLink,
                        picture = imageLink,
                        name=theTitle, caption='',
                        description=theSummary.encode('utf-8'))

statusTxt = "Publicado: "+theTitle+" "+theLink

This program has been tested during the last months and the solution seems to work (maybe you’ll want to check the latest version, which has some bugs corrected).
The most cumbersome part was getting the credentials and registering the app (with a ‘fake’ production step; ‘fake’ for me because I’m the only user of the app).

This post was published originally (in Spanish) at: Publicar en Facebook las entradas de este sitio.

If you have doubts, comments, ideas… Please comment!

Extracting links from a webpage with Python

(Image: links in a Facebook page)

Some time ago we presented a small program that helped us publish in Twitter: Publishing in Twitter when posting here.

Later I started having a look at the Facebook API and doing some tests. I discovered that Facebook does not allow publishing links with their anchor text: it transforms them into links you can click on, but whose text is the URL itself. I wanted to publish the whole text in Facebook (it will not easily show the whole entry, just a small part and a link to click in order to see more, and so on).

The netiquette of some mailing lists has always caught my attention: they add numbers near the anchor text of links and, at the end, write these numbers with the corresponding links. See, for example, this Support page.
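That numbering convention is easy to reproduce; as an illustration, here is a minimal version using only the standard library’s html.parser (the original program uses Beautiful Soup instead, as shown below):

```python
from html.parser import HTMLParser

class LinkNumberer(HTMLParser):
    # Rebuild the text with [n] markers after each link and
    # collect (text, href) pairs for the footer.
    def __init__(self):
        super().__init__()
        self.out = []
        self.links = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self.href is not None:
            # Text inside a link: append the marker, remember the pair.
            self.out.append("%s [%d]" % (data, len(self.links)))
            self.links.append((data, self.href))
            self.href = None
        else:
            self.out.append(data)

html = 'See <a href="https://errbot.readthedocs.io">ErrBot</a> for details.'
p = LinkNumberer()
p.feed(html)
print("".join(p.out))
for n, (text, href) in enumerate(p.links):
    print("[%d] %s" % (n, text))
    print("    " + href)
```

This prints the plain text with the marker, followed by the numbered list of links, in the mailing-list style described above.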

I decided to follow this path in order to publish in my Facebook pages. In the following I will try to explain some parts of the program that does this. The code is available at rssToLinks.py (the version at this moment; there may be changes later).

There are several ways to extract links: regular expressions, some HTML parser (in our Blogómetro project we used this approach with the simple SGML parser). Looking for alternatives I found Beautiful Soup, a fast way to parse a web page, and I decided to give it a try.

In order to use it we need some modules. We will publish in Facebook using the RSS feed, so we will also need to include the ‘feedparser’ module.

import feedparser
from bs4 import BeautifulSoup
from bs4 import NavigableString
from bs4 import Tag

Now we can read the RSS feed:

feed = feedparser.parse(url)

for i in range(len(feed.entries)):

And now the magic of BeautifulSoup can start:

soup = BeautifulSoup(feed.entries[i].summary)
links = soup("a")

That is, we parse the RSS entry looking for links (the “a” tag). The entry text is in the ‘summary’ part, and we are interested in the entry at position ‘i’. The call returns the list of HTML elements with that tag.

In some entries we include images, but we do not want them to appear in the text. For this we use ‘isinstance’ to check whether there is another HTML tag inside the link. We walk the list of links together with a counter ‘j’ in order to associate the numbers and the links (in the original HTML; we have not modified it yet).

	j = 0
	linksTxt = ""
	for link in links:
		if not isinstance(link.contents[0], Tag):
			# We want to avoid embedded tags (mainly <img ... )
			link.append(" ["+str(j)+"]")
			linksTxt = linksTxt + "["+str(j)+"] " + link.contents[0] + "\n"
			linksTxt = linksTxt + "    " + link['href'] + "\n"
			j = j + 1

The content of the link (now that we know it is not an image or another HTML tag) will be available at `link.contents[0]` (of course, there could be more content, but our links tend to be simple).

        linksTxt = linksTxt + "["+str(j)+"] " + link.contents[0] + "\n"

and the link is at `link['href']`.

                linksTxt = linksTxt + "    " + link['href'] + "\n"

Now we need the text of the HTML.

        print soup.get_text()

Sometimes this text can have line breaks, spaces, … We could suppress them, but we usually have very simple entries, so we are not going to pay attention to this problem.
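If we did want to normalize the whitespace, a one-liner is enough: split() with no arguments collapses any run of whitespace, including newlines.

```python
text = "Some  text\nwith   odd\n\nspacing."
print(" ".join(text.split()))  # Some text with odd spacing.
```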

Now, we can add at the end the links:

        if linksTxt != "":
                print
                print "Links :"
                print linksTxt

Publishing in Twitter when posting here

I don’t think RSS is dead. But we can see how many people are using social networking sites to get their information. For this reason I was publishing the entries of my blogs using services like IFTTT and dlvr.it. They are easy to use and they work pretty well. Nevertheless, one always wonders whether one could prepare one’s own programs to manage these publications and learn something new on the way.

I started with Facebook publishing, but I’m presenting here a program for Twitter publishing: we only need to publish the title and the link (and, maybe, some introductory text).

I found the project twitter as a starting point. It implements an important part of the work. We can install it using pip:

fernand0@here:~$ sudo pip install twitter

It needs `BeautifulSoup` and maybe some other modules. If they are not available in our system we will get the adequate ‘complaints’.

Now we can execute it.
This step is useful to do the authentication steps with Twitter and get the OAuth token; our program will not deal with this part, so it will be smaller and simpler.
Not so long ago it was possible to send tweets with just the username and password, but Twitter decided to start using a more sophisticated system based on OAuth.

fernand0@here:~$ twitter

The program launches a browser for authentication, where we give our app the adequate permissions. This generates the tokens and other information needed to interact with Twitter. They will be stored at `~/.twitter_oauth` (on a Unix-like system; I’d be happy to know about other systems), and we will reuse them in our own application.

The program is quite simple; it can be downloaded from rssToTwitter.py (V.2014-12-07) (link to the commented version; the program has been updated to correct bugs and add features).

We will start reading the configuration:

config = ConfigParser.ConfigParser()

config.read([os.path.expanduser('~/.rssBlogs')])
rssFeed = config.get("Blog1", "rssFeed")
twitterAc = config.get("Blog1", "twitterAc")

This configuration file must contain a section for each blog (this program uses only the configuration of the first one). Each section will contain the RSS feed, the name of the Twitter account and the name of the Facebook account (it can be empty if it won’t be used). For example, for this blog it would be:

[Blog1]
rssFeed:https://makingfernand0.wordpress.com/feed/
twitterAc:MakingFernand0
pageFB:

It also needs the Twitter configuration:

config.read([os.path.expanduser('~/.rssTwitter')])
CONSUMER_KEY = config.get("appKeys", "CONSUMER_KEY")
CONSUMER_SECRET = config.get("appKeys", "CONSUMER_SECRET")
TOKEN_KEY = config.get(twitterAc, "TOKEN_KEY")
TOKEN_SECRET = config.get(twitterAc, "TOKEN_SECRET")

We can use the keys that were generated before (we can copy them from the app); on my system the command-line tool is at `/usr/local/lib/python2.7/dist-packages/twitter/cmdline.py` and the tokens are stored at `~/.twitter_oauth`.

The configuration file is as follows:

[appKeys]
CONSUMER_KEY:xxxxxxxxxxxxxxxxxxxxxx
CONSUMER_SECRET:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[makingfernand0]
TOKEN_KEY:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TOKEN_SECRET:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Notice that you can configure as many Twitter accounts as needed. The name of the second section is the same as the one used in the previous configuration file.

Now we can read the RSS feed in order to extract the required data:

feed = feedparser.parse(rssFeed)

i = 0 # It will publish the last added item

soup = BeautifulSoup(feed.entries[i].title)
theTitle = soup.get_text()
theLink = feed.entries[i].link


For this, we use `feedparser` to download the RSS feed and process it.

We are choosing the first entry (position 0), which is the last one published. For Twitter we just need the title and the link.
We use BeautifulSoup to process the title, in order to strip any tags it may contain (CSS, HTML entities, …).

And finally, we will build the tweet:


statusTxt = "Publicado: "+theTitle+" "+theLink

We can now proceed to the steps of identification, authentication and publishing:

t = Twitter(
	auth=OAuth(TOKEN_KEY, TOKEN_SECRET, CONSUMER_KEY, CONSUMER_SECRET))

t.statuses.update(status=statusTxt)

This entry was originally published (in Spanish) at: Publicar en Twitter las entradas de este sitio.