
Why doctest.js is better than Python’s doctest

I’ve been trying, not too successfully I’m afraid, to get more people to use doctest.js. There are probably a few reasons people don’t. They are all wrong! Doctest.js is the best!

One issue in particular is that people (especially people in my Python-biased circles) are perhaps thrown off by Python’s doctest. I think Python’s doctest is pretty nice; I enjoy testing with it, but there’s no question that it has a lot of problems. I’ve thought about trying to fix doctest, and even made a repository, but I only really got as far as creating a list of issues I’d like to fix. And, like so many before me, I never actually made those fixes. Doctest has, in its life, only really had a single period of improvement (in the time leading up to Python 2.4). That’s not a recipe for success.

Of course doctest.js takes inspiration from Python’s doctest, but I wrote it as a real test environment, not for a minimal use case. In the process I fixed a bunch of doctest’s issues, and in places Javascript itself provided some usability help.

Some issues:

Doctest.js output is predictable

The classic pitfall of Python’s doctest is printing a dictionary:


>>> print {"one": 1, "two": 2}
{'two': 2, 'one': 1}
 

The print order of a dictionary is arbitrary, based on a hash algorithm that can change, or that mixes things up as items are added or removed. To make it worse, the output is usually stable, so you can write tests that turn out to be unexpectedly fragile. But there’s no reason why dict.__repr__ must use an arbitrary order; personally I take it as a bit of unfortunate laziness.

If doctest had used pprint for all of its printing it would have helped some. But not enough, because this kind of code is fairly common:


def __repr__(self):
    return '<ThisClass attr=%r>' % self.attr
 

and that %r invokes a repr() that cannot be overridden.

In doctest.js I always try to make output predictable. One reason this is fairly easy is that there’s nothing like repr() in Javascript, so doctest.js has its own implementation. It’s as though I started with pprint, with no competing notion of repr to contend with.

Good matching

In addition to unpredictable output, there’s also just hard-to-match output. Output might contain blank lines, for instance, and Python’s doctest requires a very ugly <BLANKLINE> token to handle that. Whitespace might not be normalized. Maybe there’s output you simply don’t care about. Maybe there’s a volatile item like a timestamp.

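For example, matching output that contains a blank line in Python’s doctest means writing the token out literally:


>>> print "one\n\ntwo"
one
<BLANKLINE>
two
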
Doctest.js includes, by default, ellipsis: ... matches any length of text. But it also includes another wildcard, ?, which matches just one number or word. This avoids cases where ... swallows up too much output when you only meant to skip a single word.

Also doctest.js doesn’t use ... for other purposes. In Python’s doctest, ... is used for continuation lines, meaning you can’t just ignore output, like:


>>> print who_knows_what_this_returns()
...
 

Or even worse, you can’t ignore the beginning of an item:


>>> print some_request
...
X-Some-Header: foo
...
 

The way I prefer to use doctest.js there isn’t any continuation-line symbol at all (and when there is one, it’s >).

Also doctest.js normalizes whitespace, normalizes " and ', and just generally tries to be reasonable.

Doctest.js tests are plain Javascript

Not many editors know how to syntax highlight and check doctests, with their >>> in front of each line and so forth. And the whole thing is tweaky: you need to use a continuation (...) on some lines, and start statements with >>>. It’s an awkward way to compose.

Doctest.js started out with the same notion, though with different symbols ($ and >). But recently with the rise of a number of excellent parsers (I used Esprima) I’ve moved my own tests to another pattern:


print(something())
// => expected output
 

This is already a fairly common way to write examples. Just as you may have read pseudocode before you knew Python and thought "that looks like Python!", doctest.js looks like example pseudocode.

Doctest.js tests are self-describing

Python’s doctest has some options, some important options that affect the semantics of the test, that you can only turn on in the runner. The most important option is ELLIPSIS. Either your test was written to use ELLIPSIS or it wasn’t; that a test can’t self-describe its requirements means that test running is fragile.

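In Python the flag ends up being enabled globally for a whole run, something like this (a sketch; mymodule is a stand-in for wherever your doctests live):


import doctest
import mymodule  # hypothetical module whose docstrings hold the tests

doctest.testmod(mymodule, optionflags=doctest.ELLIPSIS)
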
I made the hackiest package ever to get around this in Python, but it’s hacky and lame.

Exception handling isn’t special

Python’s doctest treats exceptions differently from other output. So if you print something before the exception, it is thrown away, never to be seen. And you can’t use some of the same matching techniques.

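A minimal illustration: this example passes, and the printed line is simply discarded; there is no way to assert anything about it:


>>> print "got this far"; raise ValueError("boom")
Traceback (most recent call last):
    ...
ValueError: boom
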
Doctest.js just prints out exceptions, and it’s matched like anything else.

This particular case is one of several places where it feels like Python’s doctest is just being obstinate. Doing it the right way isn’t harder. Python’s doctest makes debugging exception cases really hard.

Doctest.js has a concept of "abort"

I’m actually pretty okay with Python doctest’s notion that you just run all the tests, even when one fails. Getting too many failures is a bit of a nuisance, but it’s not that bad. But there’s no way to just give up, and there needs to be. If you are relying on something to be importable, or some service to be available, there’s no point in going on with the tests.

Doctest.js lets you call Abort() and further tests are cancelled.

Distinguishing between debugging output and deliberate output

Maybe it’s my own fault for being a programming troglodyte, but I use a lot of print for debugging. This becomes a real problem with Python’s doctest, since it captures all that printing and the extra output causes tests to fail.

Javascript has something specifically for printing debugging output: console.log(). Doctest.js doesn’t mess with that; instead it adds a new function, print(). Only stuff that is printed (not logged) is treated as expected output. It’s like console.log() goes to stderr and print() goes to stdout.

Doctest.js also forces the developer to print everything they care about. For better or worse Javascript has many more expressions than Python (including assignments), so an expression on its own isn’t a good clue for whether you care about its result. I’m not sure this is better, but it’s part of the difference.

Doctest.js also groups your printed statements according to the example you are in (an example being a block of code and an expected output). This is much more helpful than watching a giant stream of output go to the console (the browser console or terminal).

Doctest.js handles async code

This admittedly isn’t that big a deal for Python, but for Javascript it is a real problem. Not a problem for doctest.js in particular, but a problem for any Javascript test framework. You want to test return values, but lots of functions don’t "return"; instead they call some callback or create some kind of promise object, and you have to test for side effects.

I think doctest.js has a really great answer for this, which is not to say that Python’s doctest is that much worse; rather, in the context of Javascript doctest.js has something really useful and unique. If callback-driven async code had ever been very popular in Python then this sort of feature would be nice there too.

The browser is a great environment

A lot of where doctest.js is much better than Python’s doctest is simply that it has a much more powerful environment for displaying results. It can highlight failed or passing tests. When there’s a wildcard in expected output, it can show the actual output without adding any particular extra distraction. It can group console messages with the tests they go with. It can show both a simple failure message, and a detailed line-by-line comparison. All these details make it easy to identify what went wrong and fix it. The browser gives a rich and navigable interface.

I’d like to get doctest.js working well on Node.js (right now it works, but is not appealing), but I just can’t bring myself to give up the browser. I have to figure out a good hybrid.

Python’s doctest lacks a champion

This is ultimately the reason Python’s doctest has all these problems: no one cares about it, no one feels responsible for it, and no one feels empowered to make improvements to it. And to make things worse there is a cadre of people that will respond to suggestions with their own criticisms: that doctest should never be used beyond its original niche, that its constraints are features.

Doctest is still great

I’m ragging on Python’s doctest only because I love it. I wish it were better, and I made doctest.js the way I wish Python’s doctest had been made. Doctest, and more generally example/expectation-oriented code, is a great way to explain things, to make tests readable, to make test-driven development feasible, to create an environment that errs on the side of over-testing instead of under-testing, and to make failures and resolutions symmetric. It’s still vastly superior to BDD, avoiding all of BDD’s aping of readability while still embracing the sense of test-as-narrative.

But, more to the point: use doctest.js, read the tutorial, or try it in the browser. I swear, it’s really nice to use.

Javascript
Mozilla
Programming
Python


Python Application Package

I’ve been thinking some more about deployment of Python web applications, and deployment in general (in part leading up to the Web Summit). And I’ve got an idea.

I wrote about this about a year ago and recently revised some notes on a proposal but I’ve been thinking about something a bit more basic: a way to simply ship server applications, bundles of code. Web applications are just one use case for this.

For now let’s call this a "Python application package". It has these features:

  1. There is an application description: this tells the environment about the application. (This is sometimes called "configuration" but that term is very confusing and overloaded; I think "description" is much clearer.)
  2. Given the description, you can create an execution environment to run code from the application and acquire objects from the application. So there would be a specific way to set up sys.path, and a way to indicate any libraries that are required but not bundled directly with the application.
  3. The environment can inject information into the application. (Also this sort of thing is sometimes called "configuration", but let’s not do that either.) This is where the environment could indicate, for instance, what database the application should connect to (host, username, etc).
  4. There would be a way to run commands and get objects from the application. The environment would look in the application description to get the names of commands or objects, and use them in some specific manner depending on the purpose of the application. For instance, WSGI web applications would point the environment to an application object. A Tornado application might simply have a command to start itself (with the environment indicating what port to use through its injection).

There are a lot of things you can build from these pieces, and in a sophisticated application you might use a bunch of them at once. You might have some WSGI, maybe a separate non-WSGI server to handle Web Sockets, something for a Celery queue, a way to accept incoming email, etc. In pretty much all cases I think a basic application lifecycle is needed: commands to run when the application is first installed, to verify the environment is acceptable, to back up its data, and to uninstall it.

There are also some things that all environments should set up the same way or inject into the application. E.g., $TMPDIR should point to a place where the application can keep its temporary files. Or, every application should have a directory (perhaps specified in another environmental variable) where it can write log files.

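To make the injection concrete, here’s a sketch of it from the environment’s side; $TMPDIR is standard, while APP_LOG_DIR and run-server.py are made-up names for illustration:


import os
import subprocess

# The container decides the locations and injects them; the
# application just reads them from os.environ.
env = dict(os.environ)
env['TMPDIR'] = '/var/apps/app1/tmp'
env['APP_LOG_DIR'] = '/var/apps/app1/log'
subprocess.check_call(['python', 'run-server.py'], env=env)
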
Details?

To get more concrete, here’s what I can imagine from a small application description; probably YAML would be a good format:


platform: python, wsgi
require:
  os: posix
  python: <3
  rpm: m2crypto
  deb: python-m2crypto
  pip: requirements.txt
python:
  paths: vendor/
wsgi:
  app: myapp.wsgiapp:application
 

I imagine platform as kind of a series of mixins. This system doesn’t really need to be Python-specific; when creating something similar for Silver Lining I found PHP support relatively easy to add (handling languages that aren’t naturally portable, like Go, might be more of a stretch). So python is one of the features this application uses. You can imagine lots of modularization for other features, but it would be easy and unproductive to get distracted by that.

The application has certain requirements of its environment, like the version of Python and the general OS type. The application might also require libraries, ideally only libraries that are not portable (M2Crypto being an example). Modern package management works pretty nicely for this stuff, so relying on system packages as a first try I believe is best (I’d offer requirements.txt as a fallback, not as the primary way to handle dependencies).

I think it’s much more reliable if applications primarily rely on bundling their dependencies directly (i.e., using a vendor directory). The tool support for this is a bit spotty, but I believe this package format could clarify the problems and solutions. Here is an example of how you might set up a virtualenv environment for managing vendor libraries (you then do not need virtualenv to use those same libraries), and do so in a way where you can check the results into source control. It’s kind of complicated, but works (well, almost works – bin/ files need fixing up). It’s a start at least.

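On the application side, the path setup implied by the paths: vendor/ entry above amounts to very little code (a sketch; nothing Silver Lining-specific about it):


import os
import sys

# Put the bundled libraries at the front of the import path.
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, 'vendor'))
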
Support Library

On the environment side we need a good support library. pywebapp has some of the basic features, though it is quite incomplete. I imagine a library looking something like this:


import subprocess
from apppackage import AppPackage
app = AppPackage('/var/apps/app1.2012.02.11')
# Maybe a little Debian support directly:
subprocess.call(['apt-get', 'install'] +
                app.config['require']['deb'])
# Or fall back to virtualenv/pip:
app.create_virtualenv('/var/app/venvs/app1.2012.02.11')
app.install_pip_requirements()
wsgi_app = app.load_object(app.config['wsgi']['app'])
 

You can imagine building hosting services on this sort of thing, or setting up continuous integration servers (app.run_command(app.config['unit_test'])), and so forth.

Local Development

If designed properly, I think this format is as usable for local development as it is for deployment. It should be able to run directly from a checkout, with the "development environment" being an environment just like any other.

This rules out, or at least makes less exciting, the use of zip files or tarballs as a package format. The only justification I see for using such archives is that they are easy to move around; but we live in the FUTURE and there are many ways to move directories around and we don’t need to cater to silly old fashions. If that means a script that creates a tarball, FTPs it to another computer, and unzips it there, then fine – this format should not specify anything about how you actually deliver the files. But let’s not worry about copying WARs.

Packaging
Python
Silver Lining
Web


My Unsolicited Advice For PyPy

I think the most interesting work in programming languages right now is about the runtime, not syntax or even the languages themselves. Which places PyPy in an interesting position, as they have put a great deal of effort into abstracting out the concept of runtime from the language they are implementing (Python).

There are of course other runtime environments available to Python. The main environment has been and continues to be CPython — the runtime developed in parallel with the language, with continuous incremental feedback and improvement from the Python developer community. It is the runtime that informs and is informed by the language. It’s also the runtime that is most easy-going about integrating with C libraries, and by extension it is part of the vague but important runtime environment of "Unix". There’s also Jython and IronPython. I frankly find these completely uninteresting. They are runtimes controlled by companies, not communities, and the Python implementations are neither natural parts of their runtime environments, nor do the runtimes include many concessions to make themselves natural for Python.

PyPy is somewhere different. It still has a tremendous challenge because Python was not developed for PyPy. Even small changes to the language seem impossible — something as seemingly innocuous as making builtins static seems to be stuck in a conservative reluctance to change. But unlike Jython and IronPython they aren’t stuck between a rock and a hard place; they just have to deal with the rock, not the hard place.

So here is my unsolicited advice on what PyPy-the-runtime should consider. Simple improvements to performance and the runtime are fine, but being incrementally better than CPython only goes so far, and I personally doubt it will ever make a big impact on Python that way.

PyPy should push hard on concurrency and reliability. If it is fast enough then that’s fine; that’s done as far as I’m concerned. I say this because I’m a web programmer, and speed is uninteresting to me. Certainly opinions will differ. But to me speed (as it’s normally defined) is really really uninteresting. When or if I care about speed I’m probably more drawn to Cython. I do care about latency, memory efficiency, scalability/concurrency, resource efficiency, and most of all worst cases. I don’t think a JIT addresses any of these (and can even make things worse). I don’t know of benchmarks that measure these parameters either.

I want a runtime with new and novel features; something that isn’t just incrementally better than CPython. This itself might seem controversial, as the only point to such novel features would be for people to implement at least some code intended for only PyPy. But if the features are good enough then I’m okay with this — and if I’m not drawn to write something that will only work on PyPy, I probably won’t be drawn to use PyPy at all; natural conservatism and inertia will keep me (and most people) on CPython indefinitely.

What do I want?

  • Microprocesses. Stackless and greenlets have given us micro-threads, but it’s just not the same. Which is not entirely a criticism — it shows that unportable features are interesting when they are good features. But I want the next step, which is processes that don’t share state. (And implicitly I don’t just want standard async techniques, which use explicit concurrency and shared state.)
  • Shared objects across processes with copy-on-write; then you can efficiently share objects (like modules!) across concurrent processes without the danger of shared state, but without the overhead of copying everything you want to share. Lack of this is hurting PHP, as you can’t have a rich set of libraries and share-nothing without killing your performance. (A sketch of the closest thing available today, and why it falls short, follows this list.)
  • I’d rather see a break in compatibility for C extensions to support this new model, than to abandon what could be PyPy’s best feature to support CPython’s C extension ecosystem. Being a web programmer I honestly don’t need many C modules, so maybe I’m biased. But if the rest of the system is good enough then the C extensions will come.
  • Make sure resource sharing that happens outside of the Python environment is really solid. C libraries are often going to be unfriendly towards microprocesses; make sure what is exposed to the Python environment is solid. That might even mean a dangerous process mode that can handle ctypes and FFI and where you carefully write Python code that has extra powers, so long as there’s a strong wall between that code and "general" code that makes use of those services.
  • Cython — it’s doing a lot of good stuff, and has a much more conservative but also more predictable path to performance (through things like type annotation). I think it’s worth leaning on. I also have something of a hunch that it could be a good way to do FFI in a safe manner, as Cython already supports multiple targets (Python 2 and 3) from the same codebase. Could PyPy be another target?
  • Runtime introspection of the runtime. We have great language introspection (probably much to the annoyance of PyPy developers who have to copy this) but currently runtime introspection is poor-to-nonexistent. What processes are running? How much memory is each using? Where? Are they holding on to resources? Are they blocking on some non-Python library? How much CPU have they been using? Then I want to be able to kill processes, send them signals, adjust priorities, etc.

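As mentioned in the list above, here’s a rough sketch of the closest thing available today, plain os.fork() on posix, and the catch that makes it insufficient in CPython:


import os

# fork() shares the parent's memory copy-on-write, so big_table is not
# copied up front.
big_table = dict((i, str(i)) for i in xrange(1000000))

pid = os.fork()
if pid == 0:
    # Child: reading ought to share pages with the parent. But in
    # CPython even a read updates the object's reference count,
    # dirtying pages and defeating much of the sharing -- which is why
    # this wants support from the runtime rather than a library.
    print big_table[123456]
    os._exit(0)
os.waitpid(pid, 0)
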
And I guess it doesn’t have to be "PyPy", but a new backend for PyPy to target; it doesn’t have to be the only path PyPy pursues.

With a runtime like this PyPy could be an absolutely rocking platform for web development. Python could be as reliable as, oh… PHP? Sorry, I probably won’t win arguments that way ;) As good as Erlang! Maybe we could get the benefits of async without the pain of callbacks or Deferreds. And these are features people would use. Right now I’m perceiving a problem where there are lots of people standing on the sidelines cheering you on but not actually using PyPy.

So: I wouldn’t tell anyone what to do, and if someone tries this out I’ll probably only be on the sidelines cheering you on… but I really think this could be awesome.

Update: there’s some interesting comments on Hacker News as well.

Programming
Python


A Python Web Application Package and Format (we should make one)

At PyCon there was an open space about deployment, and the idea of drop-in applications (Java-WAR-style).

I generally get pessimistic about 80% solutions, and dropping in a WAR file feels like an 80% solution to me. I’ve used the Hudson/Jenkins installer (which I think is specifically a project that got WARs on people’s minds), and in a lot of ways that installer is nice, but it’s also kind of wonky: it makes configuration unclear, it’s not always clear when it installs or configures itself through the web and when you have to do that at the system level, nor is it clear where it puts files and data, etc. So a great initial experience doesn’t feel like a great ongoing experience to me — and it doesn’t have to be that way. If those were necessary compromises, sure, but they aren’t. And because we don’t have WAR files, if we’re proposing to make something new, then we have every opportunity to make things better.

So the question then is what we’re trying to make. To me: we want applications that are easy to install, that are self-describing, self-configuring (or at least guide you through configuration), reliable with respect to their environment (not dependent on system tweaking), upgradable, and respectful of persistence (the data that outlives the application install). A lot of this can be done by the "container" (to use Java parlance; or "environment") — if you just have the app packaged in a nice way, the container (server environment, hosting service, etc) can handle all the system-specific things to make the application actually work.

At which point I am of course reminded of my Silver Lining project, which defines something very much like this. Silver Lining isn’t just an application format, and things aren’t fully extracted along these lines, but it’s pretty close and it addresses a lot of important issues in the lifecycle of an application. To be clear: Silver Lining is an application packaging format, a server configuration library, a cloud server management tool, a persistence management tool, and a tool to manage the application with respect to all these services over time. It is a bunch of things, maybe too many things, so it is not unreasonable to pick out a smaller subset to focus on. Maybe an easy place to start (and good for Silver Lining itself) would be to separate at least the application format (and tools to manage applications in that state, e.g., installing new libraries) from the tools that make use of such applications (deploy, etc).

Some opinions I have on this format, exemplified in Silver Lining:

  • It’s not zipped or a single file, unlike WARs. Uploading zip files is not a great API. Geez. I know there’s this desire to "just drop in a file"; but there’s no getting around the fact that "dropping a file" becomes a deployment protocol and it’s an incredibly impoverished protocol. The format is also not subtly git-based (ala Heroku) because git push is not a good deployment protocol.
  • But of course there isn’t really any deployment protocol inferred by a format anyway, so maybe I’m getting ahead of myself ;) I’m saying a tool that deploys should take as an argument a directory, not a single file. (If the tool then zips it up and uploads it, fine!)
  • Configuration "comes from the outside". That is, an application requests services, and the container tells the application where those services are. For Silver Lining I’ve used environmental variables. I think this one point is really important — the container tells the application. As a counter-example, an application that comes with a Puppet deployment recipe is essentially telling the server how to arrange itself to suit the application. This will never be reliable or simple!
  • The application indicates what "services" it wants; for instance, it may want to have access to a MySQL database. The container then provides this to the application. In practice this means installing the actual packages, but also creating a database and setting up permissions appropriately. The alternative is never having any dependencies, meaning you have to use SQLite databases or ad hoc structures, etc. But in fact installing databases really isn’t that hard these days.
  • All persistence has to use a service of some kind. If you want to be able to write to files, you need to use a file service. This means the container is fully aware of everything the application is leaving behind. All the various paths an application should use are given in different environmental variables (many of which don’t need to be invented anew, e.g., $TMPDIR).
  • It uses vendor libraries exclusively for Python libraries. That means the application bundles all the libraries it requires. Nothing ever gets installed at deploy-time. This is in contrast to using a requirements.txt list of packages at deployment time. If you want to use those tools for development that’s fine, just not for deployment.
  • There is also a way to indicate other libraries you might require; e.g., you might need lxml, or even something that isn’t quite a library, like git (if you are making a github clone). You can’t do those as vendor libraries (they include non-portable binaries). Currently in Silver Lining the application description can contain a list of Ubuntu package names to install. Of course that would have to be abstracted some.
  • You can ask for scripts or a request to be invoked for an application after an installation or deployment. It’s lame to test is-this-app-installed on every request, which is the frequent alternative. Also, it gives the application the chance to signal that the installation failed.
  • It has a very simple (possibly/probably too simple) sense of configuration. You don’t have to use this if you make your app self-configuring (i.e., build in a web-accessible settings screen), but in practice it felt like some simple sense of configuration would be helpful.
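
Here’s the configuration sketch promised above: the container provisions the service and tells the application where it is. These particular variable names are illustrative, not Silver Lining’s actual ones:


import os

import MySQLdb

# The application only reads what the container injected.
conn = MySQLdb.connect(
    host=os.environ['CONFIG_MYSQL_HOST'],
    user=os.environ['CONFIG_MYSQL_USER'],
    passwd=os.environ['CONFIG_MYSQL_PASSWORD'],
    db=os.environ['CONFIG_MYSQL_DBNAME'])
files_dir = os.environ['CONFIG_FILES']  # from a file service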

Things that could be improved:

  • There are some places where you might be encouraged to use routines from the silversupport package. There are very few! But maybe an alternative could be provided for these cases.
  • A little convention-over-configuration is probably suitable for the bundled libraries; silver includes tools to manage things, but it gets a little twisty. When creating a new project I find myself creating several .pth files, special customizing modules, etc. Managing vendor libraries is also not obvious.
  • Services are IMHO quite important and useful, but also need to be carefully specified.
  • There’s a bunch of runtime expectations that aren’t part of the format, but in practice would be part of how the application is written. For instance, I make sure each app has its own temporary directory, and that it is cleared on update. If you keep session files in that location, and you expect the environment to clean up old sessions — well, either all environments should do that, or none should.
  • The process model is not entirely clear. I tried to simply define one process model (unthreaded, multiple processes), but I’m not sure that’s suitable — most notably, multiple processes have a significant memory impact compared to threads. An application should at least be able to indicate what process models it accepts and prefers.
  • Static files are all convention over configuration — you put static files under static/ and then they are available. So static/style.css would be at /style.css. I think this is generally good, but putting all static files under one URL path (e.g., /media/) can be good for other reasons as well. Maybe there should be conventions for both.
  • Cron jobs are important. Though maybe they could just be yet another kind of service? Many extra features could be new services.
  • Logging is also important; Silver Lining attempts to handle that somewhat, but it could be specified much better.
  • Silver Lining also supports PHP, which seemed to cause a bit of stress. But just ignore that. It’s really easy to ignore.

There is a description of the configuration file for apps. The environmental variables are also notably part of the application’s expectations. The file layout is explained (together with a bunch of Silver Lining-specific concepts) in Development Patterns. Besides all that there is admittedly some other stuff that is only really specified in code; but in Silver Lining’s defense, specified in code is better than unspecified ;) App Engine provides another example of an application format, and would be worth using as a point of discussion or contrast (I did that myself when writing Silver Lining).

Discussing WSGI stuff with Ben Bangert at PyCon he noted that he didn’t really feel like the WSGI pieces needed that much more work, or at least that’s not where the interesting work was — the interesting work is in the tooling. An application format could provide a great basis for building this tooling. And I honestly think that the tooling has been held back more by divergent patterns of development than by the difficulty of writing the tools themselves; and a good, general application format could fix that.

Packaging
Programming
Python
Web


Silver Lining: More People!

OK… so I said before that Silver Lining is for collaborators, not users. And that’s still true… it’s not a polished experience where you can confidently ignore the innards of the tool. But it does stuff, and it works, and you can use it. So… I encourage some more of you to do so.

Now would be a perfectly good time, for instance, to port an application you use to the system. Almost all Python applications should be portable. The requirements are fairly simple:

  1. The application needs a WSGI interface.
  2. It needs to be Python 2.6 compatible.
  3. Any libraries that aren’t pure-Python need to be available as deb packages in some form.
  4. Any persistence needs to be provided as a service; if the appropriate service isn’t already available you may need to write some code.

Also PHP applications should work (though you may encounter more rough edges), with these constraints:

  1. No .htaccess files, so you have to implement any URL rewriting in PHP (e.g., for WordPress).
  2. Source code is not writable, so self-installers that write files won’t work. (Self-installing plugins might be workable, but that hasn’t been worked out yet.)
  3. And the same constraints for services.

So… take an application, give it a try, and tell me what you encounter.

Also I’d love to get feedback and ideas from people with more sysadmin background, or who know Ubuntu/Debian tricks. For instance, I’d like to handle some of the questions packages ask about on installation (right now they are all left as defaults, not always the right answer). I imagine there’s some non-interactive way to handle those questions but I haven’t been able to find it.

Python
Silver Lining
Web


WebTest HTTP testing

I’ve yet to see another testing system for local web testing that I like as much as WebTest… which is perhaps personal bias for something I wrote, but then I don’t have that same bias towards everything I’ve written. Many frameworks build in their own testing systems but I don’t like the abstractions — they touch lots of internal things, or skip important steps of the request, or mock out things that don’t need to be mocked out. WSGI can make this testing easy.

There’s also a hidden feature here: because WSGI is basically just describing HTTP, it can be a means of representing not just incoming HTTP requests, but also outgoing HTTP requests. If you are running local tests against your application using WebTest, with just a little tweaking you can turn those tests into HTTP tests (i.e., actually connect to a socket). But doing this is admittedly not obvious; hence this post!

Here’s what a basic WebTest test looks like:


from webtest import TestApp
import json

wsgi_app = acquire_wsgi_application_somehow()
app = TestApp(wsgi_app)

def test_login():
    resp = app.post('/login', dict(username='guest', password='guest'))
    resp.mustcontain('login successful')
    resp = resp.click('home')
    resp.mustcontain('<a href="/profile">guest</a>')
    # Or with a little framework integration:
    assert resp.templatevars.get('username') == 'guest'

# Or an API test:
def test_user_query():
    resp = app.get('/users.json')
    assert 'guest' in resp.json['userList']
    user_info = dict(username='guest2', password='guest2', name='Guest')
    resp = app.post('/users.json', params=json.dumps(user_info),
                    content_type='application/json')
    assert resp.json == user_info
 

The app object is a wrapper around the WSGI application, and each of those methods runs a request and gets the response. The response object is a WebOb response with several additional helpers for testing (things like .click() which finds a link in HTML and follows it, or .json which loads the body as JSON).

You don’t have to be using a WSGI-centric framework like Pylons to use WebTest, it works fine with anything with a WSGI frontend, which is just about everything. But the point of my post: you don’t have to use it with a WSGI application at all. Using WSGIProxy:


import os
import urlparse

if os.environ.get('TEST_REMOTE'):
    from wsgiproxy.exactproxy import proxy_exact_request
    wsgi_app = proxy_exact_request
    parsed = urlparse.urlsplit(os.environ['TEST_REMOTE'])
    app = TestApp(proxy_exact_request, extra_environ={
                  'wsgi.url_scheme': parsed.scheme,
                  'HTTP_HOST': parsed.netloc,
                  'SERVER_NAME': parsed.netloc})
else:
    wsgi_app = acquire_wsgi_application_somehow()
    app = TestApp(wsgi_app)
 

It’s a little crude to control this with an environmental variable ($TEST_REMOTE), but it’s an easy way to pass an option in when there’s no better way (and many test runners don’t make options easy). The extra_environ option puts in the host and scheme information into each request (the default host WebTest puts in is http://localhost). WSGIProxy lets you send a request to any host, kind of bypassing DNS, so SERVER_NAME is actually the server the request goes to, while HTTP_HOST is the value of the Host header.

Going over HTTP there are a couple features that won’t work. For instance, you can pass information about your application back to the test code by putting values in environ['paste.testing_variables'] (which is how you’d make resp.templatevars work in the first example). It’s also possible to use extra_environ to pass information into your application, for example to get your application to mock out user authentication; this is fairly safe because in production no request can put those same special keys into the environment (using custom HTTP headers means you must carefully filter requests in production). But custom environ values won’t work over HTTP.

The thing that got me thinking about this is the work I’m doing on Silver Lining, where I am taking apps and rearranging the code and modifying the database configuration and setup to fit this deployment system. Having done that, it would be really nice to be able to run some functional tests, and I really want to run them over HTTP. If an application has tests using something like Selenium or Windmill those would also work great, but those tools can be a bit more challenging to work with, and applications still need smaller tests anyway, so being able to reuse tests like these would be most useful.

Programming
Python
Web


More Sentinels

I’ve been casually perusing Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. One of the things I am noticing is that Lisp traditionally has a terrible lack of sentinels: special objects denoting some kind of meaning. Specifically, in Common Lisp the empty list and false and nil are all the same thing. As a result there are all these cases where you want to distinguish false from empty, especially when false represents a failure of some sort; in these AI examples it’s usually a failure to find something, while the empty list could mean "the thing is already found, no need to look". And there are lots of other cases where this causes problems.

More modern languages usually distinguish between these objects. Python for instance has [], False and None. They might all test as "falsish", but if you care to tell the difference it is easy to do; especially common is a test for x is None. Modern Lisps also stopped folding together all these notions (Scheme for example has #f for false as a completely distinct object, though null and the empty list are still the same). XML-RPC is an example of a language missing null… and though JSON is almost the same data model, it is a great deal superior for having null. In comparison no one seems to care much one way or the other about making a strong distinction between True/False and 1/0.

These are all examples of sentinels: special objects that represent some state. None doesn’t mean anything in particular, but it means lots of things specifically. Maybe it means "not found" in one place, or "give me anything, I don’t care" in another. But sometimes you need more than one of these in the same place, or None isn’t entirely clear.

One thing I noticed while reading some Perl 6 examples is that they’ve added a number of new sentinels. One is *. So you could write something like item(*) to mean "give me any item, your choice". While the Perl tendency to use punctuation is legend, words work too.

I wonder if we need a few more sentinel conventions? If so what?

Of course any object can become a sentinel if you use it like that, None isn’t more unique than any other object. (None is conveniently available everywhere.)

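The usual Python trick when None might be a legitimate value is to mint a fresh, private object; a minimal sketch:


_missing = object()  # unique sentinel; no other object can be it

def first_match(items, predicate, default=_missing):
    for item in items:
        if predicate(item):
            return item
    if default is _missing:
        raise LookupError('no item matched')
    return default
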
Any seems useful, ala Perl’s *. But… there’s already an any available everywhere as well. It happens to be a function, but it’s also a unique named object… would it be entirely too weird to do obj is any? And there are very few cases where the actual function any would be an appropriate input, making it a good sentinel.

Programming
Python


The Web Server Benchmarking We Need

Another WSGI web server benchmark was published. It’s a decent benchmark, despite some criticisms. But it benchmarks what everyone benchmarks: serving up a trivial app really, really quickly. This is not very useful to me. Also, performance is not, to me, the most important way to differentiate servers.

In Silver Lining we’re using mod_wsgi. Silver Lining isn’t tied to mod_wsgi (applications can’t really tell), and we may revisit that decision (mostly because of memory concerns), but it is a deliberate choice. mod_wsgi is one of the few multiprocess WSGI servers, and it manages its children (the same way Apache manages all its children). So if a child stops responding, it gets taken out of the pool and killed (brutal efficiency! Or at least brutal terminology). Child processes are also recycled, guarding against memory leaks or other peculiarities. Sometimes these kinds of things are dismissed for covering up bugs, but (a) production is a lousy time to learn about bugs, (b) it’s like a third tier of garbage collection, and (c) the bugs you are avoiding are often bugs you can’t fix anyway (for instance, if your mysql driver leaks memory, is that the application developer’s fault?)

I wish there were competition among servers not to see who can tweak performance for entirely unrealistic situations, but to see who can implement the most fail-safe server. We’re missing good benchmarks. Unfortunately benchmarks are a pain in the butt to write and manage.

But I hope someone writes a benchmark like that. Here are some things I’d like to see benchmarked (minimal sketches of a few of these follow the list):

  • A "realistic" CPU-bound application. for i in xrange(10000000): pass is a reasonable start.
  • An application that generates big responses, e.g., "x"*100000.
  • An I/O bound application. E.g., one that reads a big file.
  • A simply slow application (time.sleep(1)).
  • Applications that wedge. while 1: pass perhaps? Or lock = threading.Lock(); lock.acquire(); lock.acquire(). Wedging in C and wedging in Python are different, so a bunch of different kinds of wedging.
  • Applications that segfault. ctypes is specially designed for this.
  • Applications that leak memory like a sieve, e.g., global_var.extend(['x']*10000).
  • Large uploads.
  • Slow uploads, like a client that takes 30 seconds to upload 1Mb.
  • Also slow downloads.
  • In each case it is interesting what happens when something bad happens to just a portion of requests. E.g., if 1% of requests wedge hard. A good container will serve the other 99% of requests properly. A bad container will have its worker pool exhausted and completely stop.
  • Mixing and matching these could be interesting. For instance Dave Beazley found some bad GIL results mixing I/O and CPU-bound code.
  • Add ideas in the comments and I’ll copy them into this list.
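
To show how little application code is involved, here are sketches of a few of the misbehaving WSGI apps described above; they are trivial on purpose, since the server’s behavior is the thing under test:


import time

def slow_app(environ, start_response):
    # A simply slow application.
    time.sleep(1)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['slept\n']

def wedging_app(environ, start_response):
    # Wedges hard in Python; never responds.
    while 1:
        pass

leak = []
def leaking_app(environ, start_response):
    # Leaks memory like a sieve.
    leak.extend(['x'] * 10000)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['leaked\n']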

The hardest part of writing this is not the applications (they are simple). One annoyance is wiring up the applications, but handily Nicholas covers that well in his benchmark. You also have to make sure to clean up, as many servers will not exit cleanly from some of the tests. Another nuisance is that some of these require funny clients. These aren’t too hard to write, but you can’t just use ab. Then you have to report.

Anyway: I would love it if someone did this, and packaged it as repeatable/runnable code/scripts. I’ll help some, but I can’t lead. I’d both really like to see the results, and in my ideal world people writing servers would start using these benchmarks to make their servers more robust.

Programming
Python
Silver Lining
Web


What Does A WebOb App Look Like?

Lately I’ve been writing code using WebOb and just a few other small libraries. It’s not entirely obvious what this looks like, so I thought I’d give a simple example.

I make each application a class. Instances of the class are "configured applications". So it looks a little like this (for an application that takes one configuration parameter, file_path):


class Application(object):
    def __init__(self, file_path):
        self.file_path = file_path
 

Then the app needs to be a WSGI app, because that’s how I roll. I use webob.dec:


from webob.dec import wsgify
from webob import exc
from webob import Response

class Application(object):
    def __init__(self, file_path):
        self.file_path = file_path
    @wsgify
    def __call__(self, req):
        return Response('Hi!')
 

Somewhere separate from the application you actually instantiate Application. You can use Paste Deploy for that, configure it yourself, or just do something ad hoc (a lot of mod_wsgi .wsgi files are like this, basically).

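For instance, an ad hoc version using the standard library’s wsgiref (a sketch; the file_path value is made up):


from wsgiref.simple_server import make_server

application = Application(file_path='/var/data/myapp')
make_server('127.0.0.1', 8080, application).serve_forever()
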
I use webob.exc for things like exc.HTTPNotFound(). You can raise that as an exception, but I mostly just return the object (to the same effect).

Now you have Hello World. Then I sometimes do something terrible: I start handling URLs like this:


@wsgify
def __call__(self, req):
    if req.path_info == '/':
        return self.index(req)
    elif req.path_info.startswith('/view/'):
        return self.view(req)
    return exc.HTTPNotFound()
 

This is lazy and a very bad idea. So you want a dispatcher. There are several (e.g., selector). I’ll use Routes here… the latest release makes it a bit easier (though it could still be streamlined a bit). Here’s a pattern I think makes sense:


from routes import Mapper
from routes.util import URLGenerator

class Application(object):
    map = Mapper()
    map.connect('index', '/', method='index')
    map.connect('view', '/view/{item}', method='view')

    def __init__(self, file_path):
        self.file_path = file_path

    @wsgify
    def __call__(self, req):
        results = self.map.routematch(environ=req.environ)
        if not results:
            return exc.HTTPNotFound()
        match, route = results
        link = URLGenerator(self.map, req.environ)
        req.urlvars = match
        kwargs = match.copy()
        method = kwargs.pop('method')
        req.link = link
        return getattr(self, method)(req, **kwargs)

    def index(self, req):
        ...
    def view(self, req, item):
        ...
 

Another way you might do it is to skip the class, which means skipping a clear place for configuration. I don’t like that, but if you don’t care about that, then it looks like this:


def index(req):
    ...
def view(req, item):
    ...

map = Mapper()
map.connect('index', '/', view=index)
map.connect('view', '/view/{item}', view=view)

@wsgify
def application(req):
    results = map.routematch(environ=req.environ)
    if not results:
        return exc.HTTPNotFound()
    match, route = results
    link = URLGenerator(map, req.environ)
    req.urlvars = match
    kwargs = match.copy()
    view = kwargs.pop('view')
    req.link = link
    return view(req, **kwargs)
 

Then application is pretty much boilerplate. You could put configuration in the request if you wanted, or use some other technique (like Contextual).

I talked some with Ben Bangert about what he’s trying with these patterns, and he’s doing something reminiscent of Pylons controllers (but without the rest of Pylons) and it looks more like this (with my own adaptations):


class BaseController(object):
    special_vars = ['controller', 'action']

    def __init__(self, request, link, **config):
        self.request = request
        self.link = link
        for name, value in config.items():
            setattr(self, name, value)

    def __call__(self):
        action = self.request.urlvars.get('action', 'index')
        if hasattr(self, '__before__'):
            self.__before__()
        kwargs = self.request.urlvars.copy()
        for attr in self.special_vars:
            if attr in kwargs:
                del kwargs[attr]
        return getattr(self, action)(**kwargs)

class Index(BaseController):
    def index(self):
        ...
    def view(self, item):
        ...

class Application(object):
    map = Mapper()
    map.connect('index', '/', controller=Index)
    map.connect('view', '/view/{item}', controller=Index, action='view')

    def __init__(self, **config):
        self.config = config

    @wsgify
    def __call__(self, req):
        results = self.map.routematch(environ=req.environ)
        if not results:
            return exc.HTTPNotFound()
        match, route = results
        link = URLGenerator(self.map, req.environ)
        req.urlvars = match
        controller = match['controller'](req, link, **self.config)
        return controller()
 

That’s a lot of code blocks, but they all really say the same thing ;) I think writing apps with almost-no-framework like this is pretty doable, so if you have something small you should give it a go. I think it’s especially appropriate for applications that are an API (not a "web site").

Programming
Python
Web


Joining Mozilla

As of last week, I am now an employee of Mozilla! Thanks to everyone who helped me out during my job search.

I’ll be working both with the Mozilla Web Development (webdev) team, and Mozilla Labs.

The first thing I’ll be working on is deployment. In part because I’ve been thinking about deployment lately, in part because streamlining deployment is just generally enabling of other work (and a personal itch to be scratched), and because I think there is the possibility to fit this work into Mozilla’s general mission, specifically Empowering people to do new and unanticipated things on the web. I think the way I’m approaching deployment has real potential to combine the discipline and benefits of good development practices with an accessible process that is more democratic and less professionalized. This is some of what PHP has provided over the years (and I think it’s been a genuinely positive influence on the web as a result); I’d like to see the same kind of easy entry using other platforms. I’m hoping Silver Lining will fit both Mozilla’s application deployment needs, as well as serving a general purpose.

Once I finish deployment and can move on (oh fuck what am I getting myself into) I’ll also be working with the web development group, which has adopted Python for many of their new projects (e.g., Zamboni, a rewrite of the addons.mozilla.org site), and with Mozilla Labs on Weave or some of their other projects.

In addition my own Python open source work is in line with Mozilla’s mission and I will be able to continue spending time on those projects, as well as entirely new projects.

I’m pretty excited about this — it feels like there’s a really good match between Mozilla and what I’m good at, what I care about, and how I care about it.

Mozilla
Non-technical
Programming
Python
