Introducing Petapass

March 27, 2011 at 12:41 PM | Tags: password, python, gui, pygtk, release, security, petapass

This post was imported from an earlier version of this blog. Original here.

A few weeks ago, I wrote about a scheme for better passwords. Building on that idea, I'm pleased to announce Petapass, a stateless password generator. The name is a play on my first name as well as the very large number of passwords you can create.

The traditional approach to password management is to store passwords in an encrypted file (various password managers use this approach). Petapass instead implements a stateless password management scheme - all the necessary state resides in your head. It hashes a master password and a per-login descriptive token to generate unique 10-character passwords. The token is merely something you will remember when you need to log in (such as "myblog"). Portable across OSes, nothing to steal, lose or synchronize. I like to think of it as RESTful password management.
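
In Python, the whole scheme boils down to a few lines. This is an illustrative sketch, not Petapass's actual code — the hash function and encoding used here are assumptions:

```python
import base64
import hashlib

def stateless_password(master: str, token: str, length: int = 10) -> str:
    """Derive a deterministic per-login password from a master password
    and a memorable token. Same inputs always give the same output,
    so nothing needs to be stored anywhere."""
    digest = hashlib.sha256((master + ":" + token).encode("utf-8")).digest()
    # Base64 mixes upper- and lowercase letters, digits and symbols;
    # trim the encoded digest to the desired length.
    return base64.b64encode(digest).decode("ascii")[:length]
```

Given the same master password, `stateless_password(pw, "myblog")` always yields the same 10 characters, while each token produces an unrelated password.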

Petapass implements a simple GUI. It provides a "daemon mode", where it will remember your master password for a configurable timeout. After entering the token, the generated password is copied to the clipboard, allowing you to easily paste it to a login form or ssh prompt. Binding the command to show the window to a global hotkey makes Petapass unobtrusive and easy to use.

Full details at PyPI. Linux only for now - a Windows version should be trivial, while OS X requires a Cocoa GUI.

Note: I couldn't get the window to always raise to the foreground the way I wanted - if you've got PyGTK skills and a few minutes, please ping me.

These comments were imported from an earlier version of this blog.

Jeff Barea 2011/03/27 19:56:41 -0700

I like the concept. One thing I've noticed is that introducing third-party programs makes it less secure than it would seem.

Mostly talking about the clipboard copying. A whole host of other security issues, but some of them are simply out of anyone's control really.

ncoghlan_dev 2011/03/27 20:23:36 -0700

I believe Brett Cannon did something along these lines with OpLop (only web based). I've never quite seen the point of stateless generators: yes, it provides substantially improved resistance to dictionary attacks against the sites themselves, but it doesn't help much with remembering your tokens for rarely used sites.

And if you decide to save the tokens somewhere... you're back to needing an encrypted password store. And once you're using one of those *anyway* why not just generate the passwords directly and not bother with the tokens?

None 2011/03/28 08:49:41 -0700

Peter Fein said...
@ncoghlan: yeah, it's similar to OpLop (which does have a Python implementation, btw). Doing this in a browser at all feels risky, and doing it on a third party webpage is *insane* - you're implicitly trusting all of the code loaded by Oplop every time you use it. While I might trust Brett, I'm relying on his & Google's security.

As for remembering tokens, that doesn't seem to be a problem in practice - the tokens themselves don't need to be hard to guess - the domain name (perhaps without TLD) is fine.

@Jeff Barea: The clipboard aspect could be improved (by being eliminated entirely). It's particularly a problem when you're using a clipboard history manager, like parcellite. Perhaps I can add another command to "paste" directly to the current X11 window. See this bug:

Eric 2011/03/28 13:21:09 -0700

I keep wishing that something like this would work for me, but the passwords I'm required to create are full of incompatible restrictions. Some sites are restricted to ten characters, others require at least twelve. Some sites prohibit special characters, others require them. I use an encrypted password store partly to avoid manipulating the result of such a stateless system to fit the need.

PlayerPiano: Amaze Your Friends!

November 16, 2010 at 05:10 PM | Tags: pycon2009, python, demo, slides, playerpiano, video, release, lightningtalk, presentation, pycon

This post was imported from an earlier version of this blog. Original here.


PlayerPiano amazes your friends by running Python doctests in a fake interactive shell.

This is one of my favorite pieces of code - an app for developers, by a developer. I think it really shows off the potential of computers as a tool for human communication (mostly unfulfilled, IMO). The basic idea belongs to Ian Bicking (if you haven't seen his "Topics of Interest" talk, go watch it. Now). I realized I could use doctest to extract the code samples, making for a much more usable tool.
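
The doctest trick is the whole engine: `doctest.DocTestParser` already splits a session transcript into source statements and expected output, which is exactly the raw material a fake interactive shell needs to replay. A minimal illustration (not PlayerPiano's actual code):

```python
import doctest

TRANSCRIPT = """
>>> x = 2 + 2
>>> x
4
"""

# Each Example carries the source to "type" into the fake shell
# and the output to display afterwards.
examples = doctest.DocTestParser().get_examples(TRANSCRIPT)
for ex in examples:
    print("feed: %r  expect: %r" % (ex.source, ex.want))
```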


Being a tool for demonstration, the best explanation is a demo. Here's my 2009 Pycon lightning talk on the subject. And yes, my hands were shaking quite a lot (two cups of coffee immediately before presenting to 1000+ people was probably not the best idea).

Ironically, our talks about code often feature remarkably little actual code. Live typing is slow, difficult and boring for an audience. PlayerPiano makes demoing code easier, by scaling Python's shell culture up to the ballroom. With PlayerPiano, your presentations can be interactive demos with vocal explanations, leaving your slides to summarize for an audience that's already on the web. I hope it's helpful to speakers at next year's Pycon or at your local user group.

Future Directions

As speakers, we have a terrific resource we've been neglecting - namely, a local network and our audience's laptops. Rather than being a distraction from the presentation (via email & Facebook checks), perhaps we could use the sea of laptops to engage with it. At the 2009 sprint, I worked on adding AJAX support (my first serious JavaScript foray). The idea was to have the code being demoed gradually revealed on a web page in sync with what was on the projector, adding additional context (the speaker's notes, or other explanatory text). I got a decent prototype working using STOMP, but never really got the UX to where I wanted (it's been ripped out in the current release - find it in the old_orbited branch). Perhaps someone out there with more JavaScript-fu than I would like to implement this with websockets? Lemme know.

Pete cooks, rides bikes and hacks Python. Maybe for you? Don't worry, he wears pants.

Nothing like a broken installer...

November 11, 2010 at 09:53 AM | Tags: packaging, python, sucks, twiggy, release

This post was imported from an earlier version of this blog. Original here.

… to let you know you have users. ;–)

Please grab version 0.4.2 of Twiggy; the previous tarball was missing a bunch of files.

Python packaging sucks. Sucks. Really, let’s just throw in the towel and use CPAN.

Anyone written a package for testing that your packaging/release process went correctly? Seems crazy that we’d need something like that, but I’ve messed up releases so many times that a sanity checker would be nice.

These comments were imported from an earlier version of this blog.

Craig McQueen 2010/11/11 14:09:32 -0800

Sucks? I'd be interested to hear the details.

I've worked on a couple of (really simple) Python packages (cobs, crcmod), and managed to get it down to a 1-step process for building each package, i.e. python sdist bdist_msi

That doesn't suck at all, it's quite straightforward. But then perhaps your package is more advanced than the ones I've worked with. Does your packaging process require a bunch of manual steps, and what stops them being automated? I'm interested to hear the details.

Ben Finney 2010/11/11 14:27:51 -0800

The trouble with saying “let's use CPAN” is that CPAN is code for several things at once.

* CPAN is a registry of packages. Many people think that's *all* CPAN is, and they say “oh Python already has PyPI, stop complaining”.

* CPAN is a *repository* of the friggin' *source code* to all those packages; if a package is in CPAN, you know exactly where to get the package and it can be installed. With PyPI, there's no guarantee the source code is anywhere.

* CPAN is a repository of *all free-software* packages; the vast majority are under GPL or Artistic License terms. The packages on PyPI could be under any license, and many of them are non-free.

* CPAN is a command-line tool, ‘cpan(1)’, for installing every CPAN package.

It's so seamless and, ironically, One Obvious Way To Do It, that people mean “the combination of CPAN and the ‘cpan(1)’ tool” when they say “CPAN”. That's all made possible because the above points are reliably true for CPAN. Because those are not true for PyPI, Python packaging sucks.

So what we actually need is something that is *all* of the above for Python, and is the expected, official, no-brainer location to publish and seek packages for Python. The trouble is, the people who might be interested in doing so rarely see the problem, because PyPI *appears* to solve the problem.

Sadly, I think the horse has bolted; the PyPI administrators actively resist efforts to change the policy to require source code for PyPI packages under free software licenses. It's just a link index, with optional file storage that may or may not be used.

Brandon Craig Rhodes 2010/11/11 14:41:12 -0800

I know exactly what you mean, Peter! The same thing happened to me with a package release on Saturday: some files were not included, so I remembered I had to add a, and, great, the files got included in my .tar.gz! Everything should be fine, right? Wrong. When the .tar.gz was actually installed, the files were missing — because I needed to add them as "package_data=[]" in in addition to having them in

When I release packages with the little "pyron" tool I made, there's no problem (it does both steps for you), but for bigger projects I like to use standard approaches — and therefore trip over the repetition that distutils requires.

By the way, under 2.7, Tarek has improved things so that things like package_data get added to the MANIFEST automatically — so maybe someday things will be easier.

None 2010/11/11 14:59:39 -0800


The problem's not the commands, but writing the At a minimum, you've got version numbers in two places and a list of files in two places, in addition to the files themselves. Totally violates DRY. Then you need to remember to upload your files & docs.

I swear, every time I sit down to write a for a new project it takes me at least half a day of re-reading the distutils docs and mucking with an edit-install-uninstall loop. And we haven't even gotten to setuptools. ;-(


I'm not really serious about using CPAN; it's more a straw man (though one that actually seems to walk around). That said, yeah, the CPAN model is nice b/c it's integrated and covers most of a distributor's needs in one place, including perldocs. CPAN also requires/validates PGP signatures, IIRC.

The problem with PyPI/ is that it's only half of a release management tool. It lacks the hooks needed to support the other things you want to do when distributing - publish new docs, tag your VCS, upload your tarball (if you host elsewhere). Maybe I should write a Makefile to handle all this, but... ick.


I actually think the implicit inclusion of files in your tarball that aren't in the makes things worse - it leads to forgetting about the to begin with (which has its own implicit includes). Someone really needs to `import this`.

George 2010/11/11 15:02:09 -0800

"the PyPI administrators actively resist efforts to change the policy to require source code for PyPI packages under free software licenses".

What does the inclusion of non-free software have to do with PyPI's alleged suckage, let alone the OP's frustration? Irrelevant.

The fetching-building-installing part of PyPI packages is almost a solved problem with pip/virtualenv (or buildout, no experience with it) and several mirrors. What's still a pain (and what the OP is complaining about) is the packaging part, how to wrap your code in a package to upload to PyPI (or your site) for others to install.

Marius Gedminas 2010/11/11 16:33:39 -0800

There's non-free software on PyPI? I've never noticed. I'll have to be more careful when I look for software there.

I've started adding Makefile rules for 'make distcheck' to verify that my is sane, because of the pain you're describing. You can find an example at (Bazaar has horrible web URLs, I hope your URLizer won't choke on that colon in the middle). It's crazy, complicated and obfuscated, but at least it stops me from making broken releases on PyPI.

I haven't had a problem forgetting to include something in package_data yet. If I do, I'll start extending make distcheck to create a virtualenv, install my local sdist, and run the testsuite of the installed package.

Python packaging sucks, but at least it's getting better. Remember the time when we had no automated dependency resolution systems?

None 2010/11/11 17:50:55 -0800


re: virtualenv + tests: yeah, I had thoughts along that direction. Though that only keeps your package from being totally borked, not from forgetting to include things that your tests don't capture (docs, random little scripts, whatever).

Really though, testing your package seems like trying to put gloves on after you've already got frostbite.

log.name("twiggy").info("What's new, what's next")

November 09, 2010 at 11:00 AM | Tags: python, release, logging, twiggy

This post was imported from an earlier version of this blog. Original here.

An update about Twiggy, my new Pythonic logger.

What’s New

Yesterday I released a new version 0.4.1 of Twiggy. This release adds full test coverage (over 1000 lines, nearly twice the lines of actual code). I’ve fixed a number of important bugs in the process, so you’re encouraged to upgrade.

The features system is currently deprecated, pending a reimplementation in version 0.5. Features are currently global (shared by all log instances); they really should be per-object so libraries can use them without stepping on each other. Expect some clever metaprogramming voodoo to make this work while keeping things running fast.

What’s Next

Here’s a little preview of what you can expect over the next few weeks:

Be the best, steal from the rest

I’ll be adding support for context fields, a feature inspired by Logbook’s stacks. This allows an application to add fields to all log messages on a per-thread or per-process basis.

>>> from twiggy import *
>>> quickSetup()
>>> log.process(x=42)
>>> log.thread(y=100)
>>> log.debug('yo')
>>> def doit():
...     log.debug('no y')
...     log.thread(y=999)
...     log.debug('different y')
>>> import threading
>>> t = threading.Thread(target=doit)
>>> t.start(); t.join()
DEBUG:x=42:no y
DEBUG:x=42:y=999:different y

This is a killer feature for logging/debugging in webapps. One often wants to inject the request ID into all messages, including libraries that don’t know/care that they’re running on the web. There’ll be methods for clearing these contexts, as well as context managers to use with the with: statement.
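
The mechanics are simple enough to sketch with a `threading.local` — this is an illustration of the idea, not Twiggy's implementation:

```python
import threading

_context = threading.local()

def thread_fields(**fields):
    """Attach fields to every message logged from the current thread."""
    current = getattr(_context, "fields", {})
    _context.fields = dict(current, **fields)

def emit(level, message):
    """Render a message with the current thread's context fields merged in."""
    fields = getattr(_context, "fields", {})
    parts = [level]
    parts += ["%s=%s" % item for item in sorted(fields.items())]
    parts.append(message)
    return ":".join(parts)
```

A web framework would call `thread_fields(request_id=...)` once per request; every later `emit` in that thread then carries the field automatically, including from libraries that know nothing about the web.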

Stdlib compatibility layer

0.5 will improve compatibility with the standard library’s logging package. This compatibility will be two-way. You’ll be able to:

  • configure twiggy to use stdlib logging as an output backend
  • inject an API shim that emulates basic logging functionality

The latter requires some explanation. 90-plus percent of the logging code I’ve ever seen only uses the most basic functionality: creating loggers, logging messages and capturing tracebacks. For such code, it should be possible to do:

from twiggy import logging_compat as logging
log = logging.getLogger("oldcode")"Shh, don't tell")

Even better, twiggy will provide a logging_compat.hijack() method to inject itself into sys.modules so that no modification to old code is needed at all.

I don’t expect this compatibility layer to work for everyone – notably, custom handlers won’t be supported (the underlying models are just too different), but this should ease the transition pain for many people.


Also planned for 0.5 is support for user-defined counters. This feature is still taking shape, but it’ll look something like:

>>> def deep():
...     with log.increment('depth'):
...         log.info("it's dark")
...         abyss()
...         log.warning("coming back up")
>>> def abyss():
...     with log.increment('depth'):
...         log.info("it's cold")
>>> deep()
INFO:depth=1:it's dark
INFO:depth=2:it's cold
WARNING:depth=1:coming back up

Outputs will be able to transform the depth field into useful visual formatting – for example, by using indentation to group lines together in a console app, or by setting a CSS class in HTML. Hell yeah, structured logging.
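
For a console output, that transform is nearly a one-liner (a hypothetical formatter, not twiggy API):

```python
def format_indented(level, message, depth=0):
    """Turn a numeric depth field into visual nesting for console output."""
    return "%s%s: %s" % ("  " * depth, level, message)
```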


Other forthcoming changes include: a port to Python 3, PEP-8 compliance, rewriting the features system, support for the warnings module and various minor enhancements. I’ll continue to support Python 2.7 using 3to2.


I should probably stop there, but I’m excited by what’s further down the road. That includes:

  • lazy logging: an output backend that groups messages together by a key, and only outputs them if some condition is met. For example, capture messages by request ID, and output all of them together if any one message is ERROR or higher.
  • cluster logging: Twiggy will support easily setting up a master logging daemon to receive messages from multiple processes on a machine or across your cluster.
  • unittest support: stuff the expected log output in your test docstring, apply a decorator, and Twiggy will add additional asserts to ensure your logs come out right.
  • backends, backends, backends: email, HTTP, SQL, CouchDB, syslog, NT event log… Maybe even backends that open tickets in your bug tracker or stream live logs to your browser. Yeah.

What do you want?

Now is your opportunity to let me know what you want in a logger. Got a feature I haven’t thought of? Crazy idea? Think I should implement your favorite backend sooner? Tell me in the comments below.
