
toppcloud renamed to Silver Lining

After some pondering at PyCon, I decided on a new name for toppcloud: Silver Lining. I’ll credit a mysterious commenter "david" with the name idea. The command line is simply silver; silver update has a nice ring to it.

There’s a new site: cloudsilverlining.org; not notably different than the old site, just a new name. The product is self-hosting now, using a simple app that runs after every commit to regenerate the docs, and with a small extension to Silver Lining itself (to make it easier to host static files). Now that it has a real name I also gave it a real mailing list.

Silver Lining also has its first test. Not an impressive test, but a test. I’m hoping with a VM-based libcloud backend that a full integration test can run in a reasonable amount of time. Some unit tests would be possible, but so far most of the bugs have been interaction bugs so I think integration tests will have to pull most of the weight. (A continuous integration rig will be very useful; I am not sure if Silver Lining can self-host that, though it’d be nice/clever if it could.)

Packaging
Programming
Python
Silver Lining
Web

Comments (7)

Permalink

Throw out your frameworks! (forms included)

No, I should say forms particularly.

I have lots of things to blog about, but nothing makes me want to blog like code. Ideas are hard, code is easy. So when I saw Jacob’s writeup about dynamic Django form generation I felt a desire to respond. I didn’t see the form panel at PyCon (I intended to but I hardly saw any talks at PyCon, and yet still didn’t even see a good number of the people I wanted to see), but as the author of an ungenerator and as a general form library skeptic I have a somewhat different perspective on the topic.

The example created for the panel might display that perspective. You should go read Jacob’s description; but basically it’s a simple registration form with a dynamic set of questions to ask.

I have created a complete example, because I wanted to be sure I wasn’t skipping anything, but I’ll present a trimmed-down version.

First, the basic control logic:


from webob.dec import wsgify
from webob import Response, exc
from formencode import htmlfill

@wsgify
def questioner(req):
    questions = get_questions(req) # This is provided as part of the example
    if req.method == 'POST':
        errors = validate(req, questions)
        if not errors:
            ... save response ...
            return exc.HTTPFound(location='/thanks')
    else:
        errors = {}
    ## Here's the "form generation":
    page = page_template.substitute(
        action=req.url,
        questions=questions)
    page = htmlfill.render(
        page,
        defaults=req.POST,
        errors=errors)
    return Response(page)

def validate(req, questions):
    # All manual, but do it however you want:
    errors = {}
    form = req.POST
    if (form.get('password')
        and form['password'] != form.get('password_confirm')):
        errors['password_confirm'] = 'Passwords do not match'
    fields = questions + ['username', 'password']
    for field in fields:
        if not form.get(field):
            errors[field] = 'Please enter a value'
    return errors
 

I’ve just manually handled validation here. I don’t feel like doing it with FormEncode. Manual validation isn’t that big a deal; FormEncode would just produce the same errors dictionary anyway. In this case (as in many form validation cases) you can’t do better than hand-written validation code: it’s shorter, more self-contained, and easier to tweak.

After validation the template is rendered:


page = page_template.substitute(
    action=req.url,
    questions=questions)
 

I’m using Tempita, but it really doesn’t matter. The template looks like this:


<form action="{{action}}" method="POST">
New Username: <input type="text" name="username"><br />
Password: <input type="password" name="password"><br />
Repeat Password:
  <input type="password" name="password_confirm"><br />
{{for question in questions}}
  {{question}}: <input type="text" name="{{question}}"><br />
{{endfor}}
<input type="submit">
</form>
 

Note that the only "logic" here is to render the form to include fields for all the questions. Obviously this produces an ugly form, but it’s very obvious how you make this form pretty, and how to tweak it in any way you might want. Also if you have deeper dynamicism (e.g., get_questions starts returning the type of response required, or weird validation, or whatever) it’s very obvious where that change would go: display logic goes in the form, validation logic goes in that validate function.

This just gives you the raw form. You wouldn’t need a template at all if it wasn’t for the dynamicism. Everything else is added when the form is "filled":


page = htmlfill.render(
    page,
    defaults=req.POST,
    errors=errors)
 

How exactly you want to calculate defaults is up to the application; you might want query string variables to be able to pre-fill the form (use req.params), you might want the form bare to start (like here with req.POST), you can easily implement wizards by stuffing req.POST into the session to repeat a form, you might read the defaults out of a user object to make this an edit form. And errors are just handled automatically, inserted into the HTML with appropriate CSS classes.
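As one concrete illustration of the edit-form case, here’s a minimal sketch that pre-fills from a user object; the user object and its attributes are hypothetical, but htmlfill.render is used exactly as above:

defaults = {
    'username': user.username,  # hypothetical: `user` comes from your auth layer
    'password': '',             # don't echo a password back into the form
}
page = htmlfill.render(page, defaults=defaults, errors={})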

A great aspect of this pattern if you use it (I’m not even sure it deserves the moniker library): when HTML 5 Forms finally come around and we can all stop doing this stupid server-side overthought nonsense, you won’t have overthought your forms. Your mind will be free and ready to accept that the world has actually become simpler, not more complicated, and that there is knowledge worth forgetting (forms are so freakin’ stupid!). If at all possible, dodging complexity is far better than cleverly responding to complexity.

HTML
Programming
Python
Web

Comments (27)

Permalink

Why toppcloud (Silver Lining) will not be agnostic

I haven’t received a great deal of specific feedback on toppcloud (update: renamed Silver Lining), only a few people (Ben Bangert, Jorge Vargas) seem to have really dived deeply into it. But — and this is not unexpected — I have already gotten several requests about making it more agnostic with respect to… stuff. Maybe that it not (at least forever) require Ubuntu. Or maybe that it should support different process models (e.g., threaded and multiple processes). Or other versions of Python.

The more I think about it, and the more I work with the tool, the more confident I am that toppcloud should not be agnostic on these issues. This is not so much about an "opinionated" tool; toppcloud is not actually very opinionated. It’s about a well-understood system.

For instance, Ben noticed a problem recently with weird import errors. I don’t know quite why mod_wsgi has this particular problem (when other WSGI servers that I’ve used haven’t), but the fix isn’t that hard. So Ben committed a fix and the problem went away.

Personally I think this is a bug with mod_wsgi. Maybe it’s also a Python bug. But it doesn’t really matter. When a bug exists it "belongs" to everyone who encounters it.

toppcloud is not intended to be a transparent system. When it’s working correctly, you should be able to ignore most of the system and concentrate on the relatively simple abstractions given to your application. So if the configuration reveals this particular bug in Python/mod_wsgi, then the bug is essentially a toppcloud bug, and toppcloud should (and can) fix it.

A more flexible system can ignore such problems as being "somewhere else" in the system. Or, if you don’t define these problems as someone else’s problem, then a more flexible system is essentially always broken somewhere; there is always some untested combination, some new component, or some old component that might get pushed into the mix. Fixes for one person’s problem may introduce a new problem in someone else’s stack. Some fixes aren’t even clear. toppcloud has Varnish in place, so it’s quite clear where a fix related to Django and Varnish configuration goes. If these were each components developed by different people at different times (like with buildout recipes) then fixing something like this could get complicated.

So I feel very resolved: toppcloud will hardcode everything it possibly can. Python 2.6 and only 2.6! (Until 2.7, but then only 2.7!). Only Varnish/Apache/mod_wsgi. I haven’t figured out threads/processes exactly, but once I do, there will be only one way! And if I get it wrong, then everyone (everyone) will have to switch when it is corrected! Because I’d much rather have a system that is inflexible than one that doesn’t work. With a clear and solid design I think it is feasible to get this to work, and that is no small feat.

Relatedly, I think I’m going to change the name of toppcloud, so ideas are welcome!

Programming
Python
Silver Lining
Web

Comments (23)

Permalink

toppcloud (Silver Lining) and Django

I wrote up instructions on using toppcloud (update: renamed Silver Lining) with Django. They are up on the site (where they will be updated in the future), but I’ll drop them here too…

Creating a Layout

First thing you have to do (after installing toppcloud of course) is create an environment for your new application. Do that like:


$ toppcloud init sampleapp
 

This creates a directory sampleapp/ with a basic layout. The first thing we’ll do is set up version control for our project. For the sake of documentation, imagine you go to bitbucket and create two new repositories, one called sampleapp and another called sampleapp-lib (and for the examples we’ll use the username USER).

We’ll go into our new environment and use these:


$ cd sampleapp
$ hg clone http://bitbucket.org/USER/sampleapp src/sampleapp
$ rm -r lib/python/
$ hg clone http://bitbucket.org/USER/sampleapp-lib lib/python
$ mkdir lib/python/bin/
$ echo "syntax: glob
bin/python*
bin/activate
bin/activate_this.py
bin/pip
bin/easy_install*
"
> lib/python/.hgignore
$ mv bin/* lib/python/bin/
$ rmdir bin/
$ ln -s lib/python/bin bin
 

Now there is a basic layout setup, with all your libraries going into the sampleapp-lib repository, and your main application in the sampleapp repository.

Next we’ll install Django:


$ source bin/activate
$ pip install Django
 

Then we’ll set up a standard Django site:


$ cd src/sampleapp
$ django-admin.py startproject sampleapp
 

Also we’d like to be able to import this code. It’d be nice if there were a setup.py file, so we could run pip install -e src/sampleapp, but django-admin.py doesn’t create one itself. Instead we’ll get the package on the import path more manually, with a .pth file:


$ echo "../../src/sampleapp" > lib/python/sampleapp.pth
 

Also there’s the tricky $DJANGO_SETTINGS_MODULE that you might have had problems with before. We’ll use the file lib/python/toppcustomize.py (which is imported every time Python is started) to make sure that is always set:


$ echo "import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'sampleapp.settings'
"
> lib/python/toppcustomize.py
 

Also we have a file src/sampleapp/sampleapp/manage.py, and that file doesn’t work quite how we’d like. Instead we’ll put a file into bin/manage.py that does the same thing:


$ rm sampleapp/manage.py
$ cd ../..
$ echo '#!/usr/bin/env python
from django.core.management import execute_manager
from sampleapp import settings
if __name__ == "__main__":
    execute_manager(settings)
' > bin/manage.py
$ chmod +x bin/manage.py
 

Now, if you were just using plain Django you’d do something like run python manage.py runserver. But we’ll be using toppcloud serve instead, which means we have to set up the two other files toppcloud needs: app.ini and the runner. Here’s a simple app.ini:


$ echo '[production]
app_name = sampleapp
version = 1
runner = src/sampleapp/toppcloud-runner.py
' > src/sampleapp/toppcloud-app.ini
$ rm app.ini
$ ln -s src/sampleapp/toppcloud-app.ini app.ini
 

The file must be in the "root" of your application, and named app.ini, but it’s good to keep it in version control, so we set it up with a symlink.

It also refers to a "runner", which is the Python file that loads up the WSGI application. This looks about the same for any Django application, and we’ll put it in src/sampleapp/toppcloud-runner.py:


$ echo 'import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
' > src/sampleapp/toppcloud-runner.py
 

Now if you want to run the application, you can:


$ toppcloud serve .
 

This will load it up on http://localhost:8080, and serve up a boring page. To do something interesting we’ll want to use a database.

Setting Up A Database

At the moment the only good database to use is PostgreSQL with the PostGIS extensions. Add this line to app.ini:


service.postgis =
 

This makes the database "available" to the application. For development you still have to set it up yourself. You should create a database sampleapp on your computer.
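For example, with a local PostgreSQL install that has a PostGIS template database (template_postgis is a common convention, but your setup may differ):

$ createdb -T template_postgis sampleapp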

Next, we’ll need to change settings.py to use the new database configuration. Here are the lines you’ll see:


DATABASE_ENGINE = ''           # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = ''             # Or path to database file if using sqlite3.
DATABASE_USER = ''             # Not used with sqlite3.
DATABASE_PASSWORD = ''         # Not used with sqlite3.
DATABASE_HOST = ''             # Set to empty string for localhost. Not used with sqlite3.
DATABASE_PORT = ''             # Set to empty string for default. Not used with sqlite3.
 

First add this to the top of the file:


import os
 

Then you’ll change those lines to:


DATABASE_ENGINE = 'postgresql_psycopg2'
DATABASE_NAME = os.environ['CONFIG_PG_DBNAME']
DATABASE_USER = os.environ['CONFIG_PG_USER']
DATABASE_PASSWORD = os.environ['CONFIG_PG_PASSWORD']
DATABASE_HOST = os.environ['CONFIG_PG_HOST']
DATABASE_PORT = ''
 

Now we can create all the default tables:


$ manage.py syncdb
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
...
 

Now we have an empty project that doesn’t do anything. Let’s make it do a little something (this is all really based on the Django tutorial).


$ manage.py startapp polls
 

Django magically knows to put the code in src/sampleapp/sampleapp/polls/; we’ll set up the model in src/sampleapp/sampleapp/polls/models.py:


from django.db import models

class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice = models.CharField(max_length=200)
    votes = models.IntegerField()
 

And activate the application by adding 'sampleapp.polls' to INSTALLED_APPS in src/sampleapp/sampleapp/settings.py. Also add 'django.contrib.admin' to get the admin app in place. Run manage.py syncdb to get the tables in place.

You can try toppcloud serve . and go to /admin/ to login and see your tables. You might notice all the CSS is broken.

toppcloud serves static files out of the static/ directory. You don’t actually put static in the URLs; these files are available at the top level (unless you create a static/static/ directory). The best way to put files in there is generally symbolic links.

For Django admin, do this:


$ cd static
$ ln -s ../lib/python/django/contrib/admin/media admin-media
 

Now edit src/sampleapp/sampleapp/settings.py and change ADMIN_MEDIA_PREFIX to '/admin-media'.
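That is, after the edit the setting reads:

ADMIN_MEDIA_PREFIX = '/admin-media'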

(Probably some other links should be added.)

One last little thing you might want to do: replace this line in settings:


SECRET_KEY = 'ASF#@$@#JFAS#@'
 

With this:


from tcsupport.secret import get_secret
SECRET_KEY = get_secret()
 

Then you don’t have to worry about checking a secret into version control.

You still don’t really have an application, but the rest is mere "programming" so have at it!

Programming
Python
Silver Lining
Web

Comments (2)

Permalink

A new way to deploy web applications

Deployment is one of the things I like least about development, and yet without deployment the development doesn’t really matter.

I’ve tried a few things (e.g. fassembler), built a few things (virtualenv, pip), but deployment just sucked less as a result. Then I got excited about App Engine; everyone else was getting excited about "scaling", but really I was excited about an accessible deployment process. When it comes to deployment App Engine is the first thing that has really felt good to me.

But I can’t actually use App Engine. I was able to come to terms with the idea of writing an application to the platform, but there are limits… and with App Engine there were simply too many limits. Geo stuff on App Engine is at best a crippled hack, I miss lxml terribly, I never hated relational databases, and almost nothing large works without some degree of rewriting. Sometimes you can work around it, but you can never be sure you won’t hit some wall later. And frankly working around the platform is tiring and not very rewarding.


So… App Engine seemed neat, but I couldn’t use it, and deployment was still a problem.

What I like about App Engine: an application is just files. There’s no build process, no fancy copying of things in weird locations, nothing like that; you upload files, and uploading files just works. Also, you can check everything into version control. Not just your application code, but every library you use, the exact files that you installed. I really wanted a system like that.

At the same time, I started looking into "the cloud". It took me a while to get a handle on what "cloud computing" really means. What I learned: don’t overthink it. It’s not magic. It’s just virtual private servers that can be requisitioned automatically via an API, and are billed on a short time cycle. You can expand or change the definition a bit, but this definition is the one that matters to me. (I’ve also realized that I cannot get excited about complicated solutions; only once I realized how simple cloud computing is could I really get excited about the idea.)

Given the modest functionality of cloud computing, why does it matter? Because with a cloud computing system you can actually test the full deployment stack. You can create a brand-new server, identical to all servers you will create in the future; you can set this server up; you can deploy to it. You get it wrong, you throw away that virtual server and start over from the beginning, fixing things until you get it right. Billing is important here too; with hourly billing you pay cents for these tests, and you don’t need a pool of ready servers because the cloud service basically manages that pool of ready servers for you.

Without "cloud computing" we each too easily find ourselves in a situation where deployments are ad hoc, server installations develop over time, and servers and applications are inconsistent in their configuration. Cloud computing makes servers disposable, which means we can treat them in consistent ways, testing our work as we go. It makes it easy to treat operations with the same discipline as software.

Given the idea from App Engine, and the easy-to-use infrastructure of a cloud service, I started to script together something to manage the servers and start up applications. I didn’t know what exactly I wanted to do to start, and I’m not completely sure where I’m going with this. But on the whole this feels pretty right. So I present the provisionally-named: toppcloud (Update: this has been renamed Silver Lining).


How it works: first you have a directory of files that defines your application. This probably includes a checkout of your "application" (let’s say in src/mynewapp/), and I find it also useful to use source control on the libraries (which are put in lib/python/). There’s a file, app.ini, that defines some details of the application (very similar to app.yaml).

While app.ini is a (very minimal) description of the application, there is no description of the environment. You do not specify database connection details, for instance. Instead your application requests access to a database service. For instance, one of these services is a PostgreSQL/PostGIS database (which you get if you put service.postgis in your app.ini file). If you ask for that then there will be environmental variables, CONFIG_PG_DBNAME etc., that will tell your application how to connect to the database. (For local development you can provide your own configuration, based on how you have PostgreSQL or some other service installed.)
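As an illustration, here’s a minimal sketch of how an application might consume those variables; the CONFIG_PG_* names come from above, while the helper name and the libpq-style DSN format are my assumptions:

import os

def pg_dsn():
    # Build a connection string from the service's environmental variables.
    return 'dbname=%s user=%s password=%s host=%s' % (
        os.environ['CONFIG_PG_DBNAME'],
        os.environ['CONFIG_PG_USER'],
        os.environ['CONFIG_PG_PASSWORD'],
        os.environ['CONFIG_PG_HOST'])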

The standard setup is also a virtualenv environment. It is set up so every time you start that virtualenv environment you’ll get those configuration-related environmental variables. This means your application configuration is always present, your services always available. It’s available in tests just like it is during a request. Django accomplishes something similar with the (much maligned) $DJANGO_SETTINGS_MODULE, but toppcloud builds it into the virtualenv environment instead of the shell environment.

And how is the server setup? Much like with App Engine that is merely an implementation detail. Unlike App Engine that’s an implementation detail you can actually look at and change (by changing toppcloud), but it’s not something you are supposed to concern yourself with during regular application development.

The basic lifecycle using toppcloud looks like:

toppcloud create-node
Create a new virtual server; you can create any kind of supported server, but only Ubuntu Jaunty or Karmic are supported (and Jaunty should probably be dropped). This step is where the "cloud" part actually ends. If you want to install a bare Ubuntu onto an existing physical machine that’s fine too — after toppcloud create-node the "cloud" part of the process is pretty much done. Just don’t go using some old Ubuntu install; this tool is for clean systems that are used only for toppcloud.
toppcloud setup-node
Take that bare Ubuntu server and set it up (or update it) for use with toppcloud. This installs all the basic standard stuff (things like Apache, mod_wsgi, Varnish) and some management scripts that toppcloud runs. This is written to be safe to run over and over, so upgrading and setting up a machine are the same. It needs to be a bare server.
toppcloud init path/to/app/
Setup a basic virtualenv environment with some toppcloud customizations.
toppcloud serve path/to/app
Serve up the application locally.
toppcloud update --host=test.example.com path/to/app/
This creates or updates an application at the given host. It edits /etc/hosts so that the domain is locally viewable.
toppcloud run test.example.com script.py
Run a script (from bin/) on a remote server. This allows you to run things like django-admin.py syncdb.

There are a few other things: stuff to manage the servers and change around hostnames or the active version of applications. It’s growing to fit a variety of workflows, but I don’t think its growth is unbounded.


So… this is what toppcloud is. From the outside it doesn’t do a lot. From the inside it’s not actually that complicated either. I’ve included a lot of constraints in the tool, but I think it offers an excellent balance. The constraints are workable for applications (insignificant for many applications), while still exposing a simple and consistent system that’s easier to reason about than a big-ball-of-server.

Some of the constraints:

  1. Binary packages are supported via Ubuntu packages; you only upload portable files. If you need a library like lxml, you need to request that the package (python-lxml) be installed in your app.ini. If you need a version of a binary library that is not yet packaged, I think creating a new deb is reasonable. (A sketch of such an app.ini follows this list.)
  2. There is no Linux distribution abstraction, but I don’t care.
  3. There is no option for the way your application is run — there’s one way applications are run, because I believe there is a best practice. I might have gotten the best practice wrong, but that should be resolved inside toppcloud, not inside applications. Is Varnish a terrible cache? Probably not, but if it is we should all be able to agree on that and replace it. If there are genuinely different needs then maybe additional application or deployment configuration will be called for — but we shouldn’t add configuration just because someone says there is a better practice (and a better practice that is not universally better); there must be justifications.
  4. By abstracting out services and persistence some additional code is required for each such service, and that code is centralized in toppcloud, but it means we can also start to add consistent tools usable across a wide set of applications and backends.
  5. All file paths have to be relative, because files get moved around. I know of some particularly problematic files (e.g., .pth files), and toppcloud fixes these automatically. Mostly this isn’t so hard to do.
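To make the first constraint concrete, here’s a sketch of an app.ini requesting a binary dependency. The app_name, version, runner, and service.postgis keys all appear elsewhere in these posts; the key for requesting Ubuntu packages (packages below) is my assumption, so check the documentation:

[production]
app_name = myapp
version = 1
runner = src/myapp/toppcloud-runner.py
service.postgis =
# assumed key name for requesting Ubuntu packages:
packages = python-lxml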

These particular compromises are ones I have not seen in many systems (and I’ve started to look more). App Engine I think goes too far with its constraints. Heroku is close, but closed source.

This is different than a strict everything-must-be-a-package strategy. This deployment system is light and simple and takes into account reasonable web development workflows. The pieces of an application that move around a lot are all well-greased and agile. The parts of an application that are better to Do Right And Then Leave Alone (like Apache configuration) are static.

Unlike generalized systems like buildout this system avoids "building" entirely, making deployment a simpler and lower risk action, leaning on system packages for the things they do best. Other open source tools emphasize a greater degree of flexibility than I think is necessary, allowing people to encode exploratory service integration into what appears to be an encapsulated build (I’m looking at you buildout).

Unlike requirement sets and packaging and versioning libraries, this makes all the Python libraries (typically the most volatile libraries) explicit and controlled, and can better ensure that small updates really are small. It doesn’t invalidate installers and versioning, but it makes that process even more explicit and encourages greater thoughtfulness.

Unlike many production-oriented systems (what I’ve seen in a lot of "cloud" tools) this incorporates both the development environment and production environment; but unlike some developer-oriented systems this does not try to normalize everyone’s environment, and instead relies on developers to set up their systems however is appropriate. And unlike platform-neutral systems this can ensure an amount of reliability and predictability through extremely hard requirements (it is deployed on Ubuntu Jaunty/Karmic only).

But it’s not all constraints. Toppcloud is solidly web framework neutral. It’s even slightly language neutral. Though it does require support code for each persistence technique, it is fairly easy to do, and there are no requirements for "scalable systems"; I think unscalable systems are a perfectly reasonable implementation choice for many problems. I believe a more scalable system could be built on this, but as a deployment and development option, not a starting requirement.

So far I’ve done some deployments using toppcloud; not a lot, but some. And I can say that it feels really good; lots of rough edges still, but the core concept feels really right. I’ve made a lot of sideways attacks on deployment, and a few direct attacks… sometimes I write things that I think are useful, and sometimes I write things that I think are right. Toppcloud is at the moment maybe more right than useful. But I genuinely believe this is (in theory) a universally appropriate deployment tool.


Alright, so now you think maybe you should look more at toppcloud…

Well, I can offer you a fair amount of documentation. A lot of that documentation refers to design, and a bit of it to examples. There are also a couple of projects you can look at; they are all small, but:

  • Frank (will be interactivesomerville.org), probably the largest of these projects. It’s a volunteer-written Django/Pinax application (Pinax was a bit tricky) for collecting community feedback on the Boston Greenline project; if that sounds interesting to you, you might want to chip in on the development (if so, check out the wiki).
  • Neighborly, with minimal functionality so far (we need to run more sprints), but with an installation story.
  • bbdocs, a very simple bitbucket document generator that builds the toppcloud site.
  • geodns, another simple no-framework PostGIS project.

Now, the letdown. One thing I cannot offer you is support. THERE IS NO SUPPORT. I cannot now, and I might never really be able to support this tool. This tool is appropriate for collaborators, for people who like the idea and are ready to build on it. If it grows well I hope that it can grow a community, I hope people can support each other. I’d like to help that happen. But I can’t do that by bootstrapping it through unending support, because I’m not good at it and I’m not consistent and it’s unrealistic and unsustainable. This is not an open source dead drop. But it’s also not My Future; I’m not going to build a company around it, and I’m not going to use all my free time supporting it. It’s a tool I want to exist. I very much want it to exist. But even very much wanting something is not the same as being an undying champion, and I am not an undying champion. If you want to tell me what my process should be, please do!


If you want to see me get philosophical about packaging and deployment and other stuff like that, see my upcoming talk at PyCon.

Packaging
Programming
Python
Silver Lining
Web

Comments (13)

Permalink

WebOb decorator

Lately I’ve been writing a few applications (e.g., PickyWiki, and revisiting a request-tracking application, VaingloriousEye), and I usually use no framework at all. Pylons would be a natural choice, but given that I am comfortable with all the components, I find myself inclined to assemble the pieces myself.

In the process I keep writing bits of code to make WSGI applications from simple WebOb-based request/response cycles. The simplest form looks like this:


from webob import Request, Response, exc

def wsgiwrap(func):
    def wsgi_app(environ, start_response):
        req = Request(environ)
        try:
            resp = func(req)
        except exc.HTTPException, e:
            resp = e
        return resp(environ, start_response)
    return wsgi_app

@wsgiwrap
def hello_world(req):
    return Response('Hi %s!' % (req.POST.get('name', 'You')))
 

But each time I write it, I change things slightly, implementing more or fewer features. For instance, handling methods, or coercing other responses, or handling middleware.

Having implemented several of these (and reading other people’s implementations) I decided I wanted WebOb to include a kind of reference implementation. But I don’t like to include anything in WebOb unless I’m sure I can get it right, so I’d really like feedback. (There’s been some less than positive feedback, but I trudge on.)

My implementation is in a WebOb branch, primarily in webob.dec (along with some doctests).

The most prominent way this is different from the example I gave is that it doesn’t change the function signature; instead it adds an attribute, .wsgi_app, which is the WSGI application associated with the function. My goal with this is that the decorator isn’t intrusive.
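A rough sketch of that attribute style (the helper name is mine; this is an illustration, not the actual webob.dec implementation):

from webob import Request

def wsgiwrap_attr(func):
    # Leave func callable as func(req); hang the WSGI side off an attribute.
    def wsgi_app(environ, start_response):
        resp = func(Request(environ))
        return resp(environ, start_response)
    func.wsgi_app = wsgi_app
    return func

Here’s the case where I’ve been bothered: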


class MyClass(object):
    @wsgiwrap
    def form(self, req):
        return Response(form_html...)

    @wsgiwrap
    def form_post(self, req):
        handle submission
 

OK, that’s fine, then I add validation:


@wsgiwrap
def form_post(self, req):
    if req not valid:
        return self.form
    handle submission
 

This still works, because the decorator allows you to return any WSGI application, not just a WebOb Response object. But that’s not helpful, because I need errors…


@wsgiwrap
def form_post(self, req):
    if req not valid:
        return self.form(req, errors)
    handle submission
 

That is, I want to have an optional argument to the form method that passes in errors. But I can’t do this with the traditional wsgiwrap decorator; instead I have to refactor the code to have a third method that both form and form_post use. Of course, there’s more than one way to address this issue, but this is the technique I like.

The one other notable feature is that you can also make middleware:


@wsgify.middleware
def cap_middleware(req, app):
    resp = app(req)
    resp.body = resp.body.upper()
    return resp

capped_app = cap_middleware(some_wsgi_app)
 

Otherwise, for some reason I’ve found myself putting an inordinate amount of time into __repr__. Why I’ve done this I cannot say.

Programming
Python
Web

Comments (11)

Permalink

Avoiding Silos: “link” as a first-class object

One of the constant annoyances to me in web applications is the self-proclaimed need for those applications to know about everything and do everything, and only spotty ad hoc techniques for including things from other applications.

An example might be blog navigation or search, where you can only include data from the application itself. Or "Recent Posts", which can only show locally-produced posts. What if I post something elsewhere? I have to create some shoddy placeholder post to refer to it. Bah! Underlying this, the data is usually structured in a specific way, with the HTML being a sort of artifact of the database, the markup transient and a slave to the database’s structure.

An example of this might be a recent post listing like:


<ul>
  for post in recent_posts:
    <li>
      <a href="/post/{{post.year}}/{{post.month}}/{{post.slug}}">
        {{post.title}}</a>
    </li>
</ul>
 

There’s clearly no room for exceptions in this code. I am thus proposing that any system like this should have the notion of a "link" as a first-class object. The code should look like this:


<ul>
  for post in recent_posts:
    <li>
      {{post.link()}}
    </li>
</ul>
 

Just like with changing IDs to links in service documents, the template doesn’t actually look any more complicated than it did before (simpler, even). But now we can use simple object-oriented techniques to create first-class links. The code might look like:


class Post(SomeORM):
    def url(self):
        if self.type == 'link':
            return self.body
        else:
            base = get_request().application_url
            return '%s/%s/%s/%s' % (
                base, self.year, self.month, self.slug)

    def link(self):
        return html('<a href="%s">%s</a>') % (
            self.url(), self.title)
 

The addition of the .url() method has the obvious effect of making these offsite links work. Using a .link() method has the added advantage of allowing things like HTML snippets to be inserted into the system (even though that is not implemented here). By allowing arbitrary HTML in certain places you make it possible for people to extend the site in little ways — possibly adding markup to a title, or allowing an item in the list that actually contains two URLs (e.g., <a href="url1">Some Item</a> (<a href="url2">via</a>)).

In the context of Python I recommend making these into methods, not properties, because it allows you to later add keyword arguments to specialize the markup (like post.link(abbreviated=True)).
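A sketch of what that keyword argument might look like, extending the Post class above; the cutoff length and truncation rule are invented for illustration:

def link(self, abbreviated=False):
    title = self.title
    if abbreviated and len(title) > 30:
        # arbitrary cutoff, purely illustrative
        title = title[:27] + '...'
    return html('<a href="%s">%s</a>') % (self.url(), title)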

One negative aspect of this is that you cannot affect all the markup through the template alone, you may have to go into the Python code to change things. Anyone have ideas for handling this problem?

HTML
Programming
Python
Web

Comments (13)

Permalink

Using pip Requirements

Following onto a set of recent posts (from James, me, then James again), Martijn Faassen wrote a description of Grok’s version management. Our ideas are pretty close, but he’s using buildout, and I’ll describe how to do the same things with pip.

Here’s a kind of development workflow that I think works well:

  • A framework release is prepared. Ideally there’s a buildbot that has been running (as Pylons has, for example), so the integration has been running for a while.
  • People make sure there are released versions of all the important components. If there are known conflicts between pieces, libraries and the framework update their install_requires in their setup.py files to make sure people don’t use conflicting pieces together.
  • Once everything has been released, there is a known set of packages that work together. Using a buildbot maybe future versions will also work together, but they won’t necessarily work together with applications built on the framework. And breakage can also occur regardless of a buildbot.
  • Also, people may have versions of libraries already installed, but just because they’ve installed something doesn’t mean they really mean to stay with an old version. While known conflicts have been noted, there’s going to be lots of unknown conflicts and future conflicts.
  • When starting development with a framework, the developer would like to start with some known-good set, which is a set that can be developed by the framework developers, or potentially by any person. For instance, if you extend a public framework with an internal framework (or even a public sub-framework like Pinax) then the known-good set will be developed by a different set of people.
  • As an application is developed, the developer will add on other libraries, or use some of their own libraries. Development will probably occur at the trunk/tip of several libraries as they are developed together.
  • A developer might upgrade the entire framework, or just upgrade one piece (for instance, to get a bug fix they are interested in, or follow a branch that has functionality they care about). The developer doesn’t necessarily have the same notion of "stable" and "released" as the core framework developers have.
  • At the time of deployment the developer wants to make sure all the pieces are deployed together as they’ve tested them, and how they know them to work. At any time, another developer may want to clone the same set of libraries.
  • After initial deployment, the developer may want to upgrade a single component, if only to test that an upgrade works, or if it resolves a bug. They may test out combinations only to throw them away, and they don’t want to bump versions of libraries in order to deploy new combinations.

This is the kind of development pattern that requirement files are meant to assist with. They can provide a known-good set of packages. Or they can provide a starting point for an active line of development. Or they can provide a historical record of how something was put together.

The easy way to start a requirement file for pip is just to put the packages you know you want to work with. For instance, we’ll call this project-start.txt:


Pylons
-e svn+http://mycompany/svn/MyApp/trunk#egg=MyApp
-e svn+http://mycompany/svn/MyLibrary/trunk#egg=MyLibrary
 

You can plug away for a while, and maybe you decide you want to freeze the file. So you do:


$ pip freeze -r project-start.txt project-frozen.txt
 

By using -r project-start.txt you give pip freeze a template for it to start with. From that, you’ll get project-frozen.txt that will look like:


Pylons==0.9.7
-e svn+http://mycompany/svn/MyApp/trunk@1045#egg=MyApp
-e svn+http://mycompany/svn/MyLibrary/trunk@1058#egg=MyLibrary

## The following requirements were added by pip --freeze:
Beaker==0.2.1
WebHelpers==0.9.1
nose==1.4
# Installing as editable to satisfy requirement INITools==0.2.1dev-r3488:
-e svn+http://svn.colorstudy.com/INITools/trunk@3488#egg=INITools-0.2.1dev_r3488
 

At that point you might decide that you don’t care about the nose version, or you might have installed something from trunk when you could have used the last release. So you go and adjust some things.

Martijn also asks: how do you have framework developers maintain one file, and then also have developers maintain their own lists for their projects?

You could start with a file like this for the framework itself. Pylons for instance could ship with something like this. To install Pylons you could then do:


$ pip -E MyProject install \
>    -r http://pylonshq.com/0.9.7-requirements.txt
 

You can also download that file yourself, add some comments, rename the file, add your project to it, and use that. When you freeze, the order of the packages and any comments will be preserved, so you can keep track of what changed. Also it should be amenable to source control, and diffs would be sensible.

You could also use indirection, creating a file like this for your project:


-r http://pylonshq.com/0.9.7-requirements.txt
-e svn+http://mycompany/svn/MyApp/trunk#egg=MyApp
-e svn+http://mycompany/svn/MyLibrary/trunk#egg=MyLibrary
 

That is, requirements files can refer to each other. So if you want to maintain your own requirements file alongside the development of an upstream requirements file, you could do that.

Packaging
Python

Comments (3)

Permalink

A Few Corrections To “On Packaging”

James Bennett recently wrote an article on Python packaging and installation, and Setuptools. There’s a lot of issues, and writing up my thoughts could take a long time, but I thought at least I should correct some errors, specifically category errors. Figuring out where all the pieces in Setuptools (and pip and virtualenv) fit is difficult, so I don’t blame James for making some mistakes, but in the interest of clarifying the discussion…

I will start with a kind of glossary:

Distribution:
This is something-with-a-setup.py: a tarball, zip file, a checkout, etc. Distributions have names; this is the name in setup(name="...") in the setup.py file. They have some other metadata too (description, version, etc.), and Setuptools adds some metadata of its own. Distutils doesn’t make it very easy to add to the metadata: it’ll whine a little about things it doesn’t know, but won’t do anything with that extra data. Fixing this problem in Distutils is an important aspect of Setuptools, and part of what makes Distutils itself unsuitable as a basis for good library management.
package/module:
This is something you import. It is not the same as a distribution, though usually a distribution will have the same name as a package. In my own libraries I try to name the distribution with mixed case (like Paste) and the package with lower case (like paste). Keeping the terminology straight here is very difficult; and usually it doesn’t matter, but sometimes it does.
Setuptools The Distribution:
This is what you install when you install Setuptools. It includes several pieces that Phillip Eby wrote, that work together but are not strictly a single thing.
setuptools The Package:
This is what you get when you do import setuptools. Setuptools largely works by monkeypatching distutils, so simply importing setuptools activates its functionality from then on. This package is entirely focused on installation and package management, it is not something you should use at runtime (unless you are installing packages as your runtime, of course).
pkg_resources The Module:
This is also included in Setuptools The Distribution, and is for use at runtime. This is a single module that provides the ability to query what distributions are installed, metadata about those distributions, and information about the location where they are installed. It also allows distributions to be "activated". A distribution can be available but not activated. Activating a distribution means adding its location to sys.path, and probably you’ve noticed how long sys.path is when you use easy_install. Almost everything that allows different libraries to be installed, or allows different versions of libraries, does it through some management of sys.path. pkg_resources also allows for generic access to "resources" (i.e., non-code files), and lets those resources be in zip files. pkg_resources is safe to use; it doesn’t do any of the funny stuff that people get annoyed with. (A short usage sketch follows this glossary.)
easy_install:
This is also in Setuptools The Distribution. The basic functionality it provides is that given a name, it can search for a package with that distribution name, also satisfying a version requirement if one is given. It then downloads the package and installs it (using setup.py install, but with the setuptools monkeypatches in place). After that, it checks the newly installed distribution to see if it requires any other libraries that aren’t yet installed, and if so it installs them.
Eggs the Distribution Format:
These are zip files that Setuptools creates when you run python setup.py bdist_egg. Unlike a tarball, these can be binary packages, containing compiled modules, and generally contain .pyc files (which are portable across platforms, but not Python versions). This format only includes files that will actually be installed; as a result it does not include doc files or setup.py itself. All the metadata from setup.py that is needed for installation is put in files in a directory EGG-INFO.
Eggs the Installation Format:
Eggs the Distribution Format are a subset of the Installation Format. That is, if you put an Egg zip file on the path, it is installed, no other process is necessary. But the Installation Format is more general. To have an egg installed, you either need something like DistroName-X.Y.egg/ on the path, and then an EGG-INFO/ directory under that with the metadata, or a path like DistroName.egg-info/ with the metadata directly in that directory. This metadata can exist anywhere, and doesn’t have to be directly alongside the actual Python code. Egg directories are required for pkg_resources to activate and deactivate distributions, but otherwise they aren’t necessary.
pip:
This is an alternative to easy_install. It works somewhat differently than easy_install, but not much. Mostly it is better than easy_install, in that it has some extra features and is easier to use. Unlike easy_install, it downloads all distributions up-front, and generates the metadata to read distribution and version requirements. It uses Setuptools to generate this metadata from a setup.py file, and uses pkg_resources to parse this metadata. It then installs packages with the setuptools monkeypatches applied. It just happens to use the setup.py install option --single-version-externally-managed, which gets Setuptools to install packages in a more flat manner, with Distro.egg-info/ directories alongside the package. Pip installs eggs! I’ve heard the many complaints about easy_install (and I’ve had many myself), but ultimately I think pip does well by just fixing a few small issues. Pip is not a repudiation of Setuptools or the basic mechanisms that easy_install uses.
PoachEggs:
This is a defunct package that had some of the features of pip (particularly requirement files) but used easy_install for installation. Don’t bother with this, it was just a bridge to get to pip.
virtualenv:
This is a little hack that creates isolated Python environments. It’s based on virtual-python.py, which is something I wrote based on some documentation notes PJE wrote for Setuptools. Basically virtualenv just creates a bin/python interpreter that has its own value of sys.prefix, but uses the system Python and standard library. It also installs Setuptools to make it easier to bootstrap the environment (because bootstrapping Setuptools is itself a bit tedious). I’ll add pip to it too sometime. Using virtualenv you don’t have to worry about different library versions, because for any one environment you will probably only need one version of a library. On any one machine you probably need different versions, which is why installing packages system-wide is problematic for most libraries. (I’ve been meaning to write a post on why I think using system packaging for libraries is counter-productive, but that’ll wait for another time.)
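To ground the pkg_resources entry above, here’s a short sketch of the runtime queries it supports (the distribution name is arbitrary):

import pkg_resources

# Query an installed distribution's metadata:
dist = pkg_resources.get_distribution('Pylons')
print dist.project_name, dist.version, dist.location

# Activate a distribution satisfying a version requirement
# (this is the sys.path management described above):
pkg_resources.require('Pylons>=0.9.7')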

So… there’s the pieces involved, at least the ones I can remember now. And I haven’t really discussed .pth files, entry points, sys.path trickery, site.py, distutils.cfg… sadly this is a complex state of affairs, but it was also complex before Setuptools.

There are a few things that I think people really dislike about Setuptools.

First, zip files. Setuptools prefers zip files, for reasons that won’t mean much to you, and maybe are more historical than anything. When a distribution doesn’t indicate if it is zip-safe, Setuptools looks at the code and sees if it uses __file__, and if not it presumes that the code is probably zip-safe. The specific problem James cites is what appears to be a bug in Django: Django looks for code and can’t traverse into zip files in the same way that Python itself can. Setuptools didn’t itself add anything to Python to make it import zip files; that functionality was added to Python some time before. The zipped eggs that Setuptools installs use existing (standard!) Python functionality.

That said, I don’t think zipping libraries up is all that useful, and while it should work, it doesn’t always, and it makes code harder to inspect and understand. So since it’s not that useful, I’ve disabled it when pip installs packages. I also have had it disabled on my own system for years now, by creating a distutils.cfg file with [easy_install] zip_ok = False in it. Sadly App Engine is forcing me to use zip files again, because of its absurdly small file limits… but that’s a different topic. (There is an experimental pip zip command mostly intended for App Engine.)

Another pain point is version management with setup.py and Setuptools. Indeed it is easy to get things messed up, and it is easy to piss people off by overspecifying, and sometimes things can get in a weird state for no good reason (often because of easy_install’s rather naive leap-before-you-look installation order). Pip fixes that last point, but it also tries to suggest more constructive and less painful ways to manage other pieces.

Pip requirement files are an assertion of versions that work together. setup.py requirements (the Setuptools requirements) should contain two things: 1: all the libraries used by the distribution (without which there’s no way it’ll work) and 2: exclusions of the versions of those libraries that are known not to work. setup.py requirements should not be viewed as an assertion that by satisfying those requirements everything will work, just that it might work. Only the end developer, testing the system together, can figure out if it really works. Then pip gives you a way to record that working set (using pip freeze), separate from any single distribution or library.
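Here’s a sketch of point 2 as it would look in a setup.py; the distribution names and the excluded version are made up for illustration:

from setuptools import setup

setup(
    name='MyApp',
    version='0.1',
    install_requires=[
        'WebOb',         # a library the distribution uses; most versions might work
        'SomeLib!=1.2',  # hypothetical: 1.2 is known not to work
    ],
)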

There’s also a lot of conflicts between Setuptools and package maintainers. This is kind of a proxy war between developers and sysadmins, who have very different motivations. It deserves a post of its own, but the conflicts are about more than just how Setuptools is implemented.

I’d love if there was a language-neutral library installation and management tool that really worked. Linux system package managers are absolutely not that tool; frankly it is absurd to even consider them as an alternative. So for now we do our best in our respective language communities. If we’re going to move forward, we’ll have to acknowledge what’s come before, and the reasoning for it.

Packaging
Python

Comments (40)

Permalink

lxml: an underappreciated web scraping library

When people think about web scraping in Python, they usually think BeautifulSoup. That’s okay, but I would encourage you to also consider lxml.

First, people think BeautifulSoup is better at parsing broken HTML. This is not correct. lxml parses broken HTML quite nicely. I haven’t done any thorough testing, but at least the BeautifulSoup broken HTML example is parsed better by lxml (which knows that <td> elements should go inside <table> elements).
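If you want to try this yourself, a quick experiment (the broken markup is arbitrary):

from lxml import html

broken = '<p>unclosed <b>bold <td>stray cell'
doc = html.fromstring(broken)
print html.tostring(doc)  # lxml builds a well-formed tree out of this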

Second, people feel lxml is harder to install. This is correct. BUT, lxml 2.2alpha1 includes an option to compile static versions of the underlying C libraries, which should improve the installation experience, especially on Macs. To install this new way, try:


$ STATIC_DEPS=true easy_install 'lxml>=2.2alpha1'
 

Once you have lxml installed, you have a great parser (which happens to be super-fast, and that is not a tradeoff). You get a fairly familiar API based on ElementTree, which, though a little strange-feeling at first, offers a compact and canonical representation of a document tree, compared to more traditional representations. But there’s more…

One of the features that should be appealing to many people doing screen scraping is that you get CSS selectors. You can use XPath as well, but usually that’s more complicated (for example). Here’s an example I found getting links from a menu in a page in BeautifulSoup:


from BeautifulSoup import BeautifulSoup
import urllib2
soup = BeautifulSoup(urllib2.urlopen('http://java.sun.com').read())
menu = soup.findAll('div',attrs={'class':'pad'})
for subMenu in menu:
    links = subMenu.findAll('a')
    for link in links:
        print "%s : %s" % (link.string, link['href'])
 

Here’s the same example in lxml:


from lxml.html import parse
doc = parse('http://java.sun.com').getroot()
for link in doc.cssselect('div.pad a'):
    print '%s: %s' % (link.text_content(), link.get('href'))
 

lxml generally knows more about HTML than BeautifulSoup. Also I think it does well with the small details; for instance, the lxml example will match elements in <div class="pad menu"> (space-separated classes), which the BeautifulSoup example does not do (obviously there are other ways to search, but the obvious and documented technique doesn’t pay attention to HTML semantics).

One feature that I think is really useful is .make_links_absolute(). This takes the base URL of the page (doc.base) and uses it to make all the links absolute. This makes it possible to relocate snippets of HTML or whole sets of documents (as with this program). This isn’t just <a href> links, but stylesheets, inline CSS with @import statements, background attributes, etc. It doesn’t see quite all links (for instance, links in Javascript) but it sees most of them, and works well for most sites. So if you want to make a local copy of a site:


from lxml.html import parse, open_in_browser
doc = parse('http://wiki.python.org/moin/').getroot()
doc.make_links_absolute()
open_in_browser(doc)
 

open_in_browser serializes the document to a temporary file and then opens a web browser (using webbrowser).

Here’s an example that compares two pages using lxml.html.diff:


from lxml.html.diff import htmldiff
from lxml.html import parse, tostring, open_in_browser, fromstring

def get_page(url):
    doc = parse(url).getroot()
    doc.make_links_absolute()
    return tostring(doc)

def compare_pages(url1, url2, selector='body div'):
    basis = parse(url1).getroot()
    basis.make_links_absolute()
    other = parse(url2).getroot()
    other.make_links_absolute()
    el1 = basis.cssselect(selector)[0]
    el2 = other.cssselect(selector)[0]
    diff_content = htmldiff(tostring(el1), tostring(el2))
    diff_el = fromstring(diff_content)
    el1.getparent().insert(el1.getparent().index(el1), diff_el)
    el1.getparent().remove(el1)
    return basis

if __name__ == '__main__':
    import sys
    doc = compare_pages(sys.argv[1], sys.argv[2], sys.argv[3])
    open_in_browser(doc)
 

You can use it like:


$ python lxmldiff.py \
'http://wiki.python.org/moin/BeginnersGuide?action=recall&rev=70' \
'http://wiki.python.org/moin/BeginnersGuide?action=recall&rev=81' \
'div#content'
 

Another feature lxml has is form handling. All the cool sexy new sites use minimal forms, but searching for "registration forms" I get this nice complex form. Let’s look at it:


>>> from lxml.html import parse, tostring
>>> doc = parse('http://www.actuaryjobs.com/cform.html').getroot()
>>> doc.forms
[<Element form at -48232164>]
>>> form = doc.forms[0]
>>> form.inputs.keys()
['thank_you_title', 'City', 'Zip', ... ]
 

Now we have a form object. There are two ways to get to the fields: form.inputs, which gives us a dictionary of all the actual <input> elements (and textarea and select). There’s also form.fields, which is a dictionary-like object. The dictionary-like object is convenient, for instance:


>>> form.fields['cEmail'] = 'me@example.com'
 

This actually updates the input element itself:


>>> tostring(form.inputs['cEmail'])
'<input type="input" name="cEmail" size="30" value="test2">'
 

I think it’s actually a nicer API than htmlfill and can serve the same purpose on the server side.
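To illustrate that server-side use, a minimal sketch (the markup is invented for the example):

from lxml.html import fromstring, tostring

page = fromstring('<html><body><form><input type="text" name="cEmail"></form></body></html>')
form = page.forms[0]
form.fields['cEmail'] = 'me@example.com'  # fill the default value
print tostring(page)                      # render the page with the value filled in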

But then you can also use the same interface for scraping, by filling fields and getting the submission. That looks like:


>>> import urllib
>>> action = form.action
>>> data = urllib.urlencode(form.form_values())
>>> if form.method == 'GET':
...     if '?' in action:
...     action += '&' + data
...     else:
...         action += '?' + data
...     data = None
>>> resp = urllib.urlopen(action, data)
>>> resp_doc = parse(resp).getroot()
 

Lastly, there’s HTML cleaning. I think all these features work together well, do useful things, and it’s all based on an actual understanding of HTML instead of just treating tags and attributes as arbitrary. (Also, if you really like jQuery, you might want to look at pyquery, which is a jQuery-like API on top of lxml.)

HTML
Programming
Python

Comments (51)

Permalink