

Python Application Package

I’ve been thinking some more about deployment of Python web applications, and deployment in general (in part leading up to the Web Summit). And I’ve got an idea.

I wrote about this about a year ago and recently revised some notes on a proposal but I’ve been thinking about something a bit more basic: a way to simply ship server applications, bundles of code. Web applications are just one use case for this.

For now let's call this a "Python application package". It has these features:

  1. There is an application description: this tells the environment about the application. (This is sometimes called "configuration" but that term is very confusing and overloaded; I think "description" is much clearer.)
  2. Given the description, you can create an execution environment to run code from the application and acquire objects from the application. So there would be a specific way to setup sys.path, and a way to indicate any libraries that are required but not bundled directly with the application.
  3. The environment can inject information into the application. (Also this sort of thing is sometimes called "configuration", but let’s not do that either.) This is where the environment could indicate, for instance, what database the application should connect to (host, username, etc).
  4. There would be a way to run commands and get objects from the application. The environment would look in the application description to get the names of commands or objects, and use them in some specific manner depending on the purpose of the application. For instance, WSGI web applications would point the environment to an application object. A Tornado application might simply have a command to start itself (with the environment indicating what port to use through its injection).

There’s a lot of things you can build from these pieces, and in a sophisticated application you might use a bunch of them at once. You might have some WSGI, maybe a separate non-WSGI server to handle Web Sockets, something for a Celery queue, a way to accept incoming email, etc. In pretty much all cases I think a basic application lifecycle is needed: commands to run when an application is first installed, something to verify the environment is acceptable, hooks for backing up its data, and for uninstalling it.

There are also some things that all environments should set up the same way or inject into the application. E.g., $TMPDIR should point to a place where the application can keep its temporary files. Or, every application should have a directory (perhaps specified in another environmental variable) where it can write log files.
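
To make that concrete, here’s a minimal sketch of the application’s side of this, assuming the environment sets $TMPDIR and a made-up $APP_LOG_DIR before starting the application (the log variable name is just for illustration):


import logging
import os
import tempfile

# $TMPDIR is honored by tempfile.gettempdir(); $APP_LOG_DIR is a
# hypothetical variable the environment might set for log files.
tmp_dir = tempfile.gettempdir()
log_dir = os.environ.get('APP_LOG_DIR', tmp_dir)

logging.basicConfig(
    filename=os.path.join(log_dir, 'app.log'),
    level=logging.INFO)
scratch_file = os.path.join(tmp_dir, 'scratch.dat')
 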

Details?

To get more concrete, here’s what I can imagine from a small application description; probably YAML would be a good format:


platform: python, wsgi
require:
  os: posix
  python: <3
  rpm: m2crypto
  deb: python-m2crypto
  pip: requirements.txt
python:
  paths: vendor/
wsgi:
  app: myapp.wsgiapp:application
 

I imagine platform as kind of a series of mixins. This system doesn’t really need to be Python-specific; when creating something similar for Silver Lining I found PHP support relatively easy to add (handling languages that aren’t naturally portable, like Go, might be more of a stretch). So python is one of the features this application uses. You can imagine lots of modularization for other features, but it would be easy and unproductive to get distracted by that.

The application has certain requirements of its environment, like the version of Python and the general OS type. The application might also require libraries, ideally only libraries that are not portable (M2Crypto being an example). Modern package management works pretty nicely for this stuff, so I believe relying on system packages is the best first try (I’d offer requirements.txt as a fallback, not as the primary way to handle dependencies).

I think it’s much more reliable if applications primarily rely on bundling their dependencies directly (i.e., using a vendor directory). The tool support for this is a bit spotty, but I believe this package format could clarify the problems and solutions. Here is an example of how you might set up a virtualenv environment for managing vendor libraries (you then do not need virtualenv to use those same libraries), and do so in a way where you can check the results into source control. It’s kind of complicated, but works (well, almost works – bin/ files need fixing up). It’s a start at least.
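
As a sketch of what the python: paths: vendor/ setting above amounts to, the environment (or a tiny stub in the application) just puts the bundled libraries on sys.path before importing anything else; paths are relative to the application root:


import os
import sys

# Put the application's bundled (vendor) libraries ahead of anything
# installed system-wide; __file__ here stands in for some file at the
# application root.
app_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(app_root, 'vendor'))
 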

Support Library

On the environment side we need a good support library. pywebapp has some of the basic features, though it is quite incomplete. I imagine a library looking something like this:


import subprocess
from apppackage import AppPackage
app = AppPackage('/var/apps/app1.2012.02.11')
# Maybe a little Debian support directly:
subprocess.call(['apt-get', 'install'] +
                app.config['require']['deb'])
# Or fall back to virtualenv/pip:
app.create_virtualenv('/var/app/venvs/app1.2012.02.11')
app.install_pip_requirements()
wsgi_app = app.load_object(app.config['wsgi']['app'])
 

You can imagine building hosting services on this sort of thing, or setting up continuous integration servers (app.run_command(app.config['unit_test'])), and so forth.

Local Development

If designed properly, I think this format is as usable for local development as it is for deployment. It should be able to run directly from a checkout, with the "development environment" being an environment just like any other.

This rules out, or at least makes less exciting, the use of zip files or tarballs as a package format. The only justification I see for using such archives is that they are easy to move around; but we live in the FUTURE and there are many ways to move directories around and we don’t need to cater to silly old fashions. If that means a script that creates a tarball, FTPs it to another computer, and there it is unzipped, then fine – this format should not specify anything about how you actually deliver the files. But let’s not worry about copying WARs.

Packaging
Python
Silver Lining
Web

Comments (9)

Permalink

Git-as-sync, not source-control-as-deployment

I don’t like systems that use git push for deployment (Heroku et al). Why? I do a lot of this:


$ git push deploy
... realize I forgot a domain name ...
$ git commit -m "fix domain name" -a ; git push deploy
... realize I didn't do something right with the database setup ...
$ git commit -m "configure database right" -a ; git push deploy
... dammit, I didn't fix it quite right ...
$ git commit -m "typo" -a ; git push deploy
 

And then maybe I’d actually like to keep my config out of my source control, or have a build process that I run locally, or any number of things. I’d like to be able to test deployment, but every deployment is a commit, and I like to commit tested work. I think I could use git rebase but I lack the discipline to undo my work so I can do it correctly. This is why I don’t do continuous commits.

There’s a whole different level of weirdness when you use GitHub Pages as you aren’t pushing to a deployment-specific remote, you are pushing to a deployment-specific branch.

So I’ve generally thought: git deployment is wrong.

Then I was talking to some other people at Mozilla and they mentioned that ops was using git for simply moving files around even though the stuff they were deploying was itself in Mercurial. They had a particular site with a very large number of files, and it was faster to use git than rsync (git has more metadata than rsync; rsync has to look at everything every time you sync). And that all seemed very reasonable; git is a fine way to sync things.

But I kind of forgot about it all, and just swore to myself as I did too many trivial commits and wrote too many meaningless commit messages.

Still… it isn’t so hard to separate these concerns, is it? So I wrote up a quite small command called git-sync. The basic idea: copy the working directory to a new location (minus .git/), commit that, and push the result to your deployment remote. You can send modified and untracked files, and you can run a build script before committing and push the result of the build script, all without sullying your "real" source control. And you happen to get a history of deployments, which is a nice bonus.
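
For the record, here’s a rough Python sketch of that flow — this is not the actual git-sync code, just the shape of the idea, and the "deploy" remote and "master" branch are assumptions:


import os
import shutil
import subprocess

def git_sync(workdir, staging, remote_url):
    # Mirror the working directory (minus .git/) into a separate staging
    # repository, commit whatever changed, and push to the deploy remote.
    if not os.path.isdir(os.path.join(staging, '.git')):
        if not os.path.isdir(staging):
            os.makedirs(staging)
        subprocess.check_call(['git', 'init'], cwd=staging)
        subprocess.check_call(
            ['git', 'remote', 'add', 'deploy', remote_url], cwd=staging)
    # Clear the previous copy (keeping the staging repo's own .git/)...
    for name in os.listdir(staging):
        if name == '.git':
            continue
        path = os.path.join(staging, name)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
    # ...then copy over the current working tree, untracked files included.
    for name in os.listdir(workdir):
        if name == '.git':
            continue
        src = os.path.join(workdir, name)
        dest = os.path.join(staging, name)
        if os.path.isdir(src):
            shutil.copytree(src, dest)
        else:
            shutil.copy2(src, dest)
    subprocess.check_call(['git', 'add', '-A'], cwd=staging)
    # git commit exits non-zero when there is nothing new; that's fine here.
    subprocess.call(['git', 'commit', '-m', 'deploy'], cwd=staging)
    subprocess.check_call(['git', 'push', 'deploy', 'master'], cwd=staging)
 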

I’ve only used this a little bit, but I’ve enjoyed when I have used it, and it makes me feel much better/clearer about my actual commits. It’s really short right now, and probably gets some things entirely wrong (e.g., moving over untracked files). But it works well enough to be improved (winkwinknudgenudge).

So check it out: https://github.com/ianb/git-sync

Programming
Web

Comments (11)

Permalink

A Python Web Application Package and Format (we should make one)

At PyCon there was an open space about deployment, and the idea of drop-in applications (Java-WAR-style).

I generally get pessimistic about 80% solutions, and dropping in a WAR file feels like an 80% solution to me. I’ve used the Hudson/Jenkins installer (which I think is specifically a project that got WARs on people’s minds), and in a lot of ways that installer is nice, but it’s also kind of wonky: it makes configuration unclear, it’s not always clear when it installs or configures itself through the web versus when you have to do that at the system level, nor is it clear where it puts files and data, etc. So a great initial experience doesn’t feel like a great ongoing experience to me — and it doesn’t have to be that way. If those were necessary compromises, sure, but they aren’t. And because we don’t have WAR files, if we’re proposing to make something new, then we have every opportunity to make things better.

So the question then is what we’re trying to make. To me: we want applications that are easy to install, that are self-describing, self-configuring (or at least guide you through configuration), reliable with respect to their environment (not dependent on system tweaking), upgradable, and respectful of persistence (the data that outlives the application install). A lot of this can be done by the "container" (to use Java parlance; or "environment") — if you just have the app packaged in a nice way, the container (server environment, hosting service, etc) can handle all the system-specific things to make the application actually work.

At which point I am of course reminded of my Silver Lining project, which defines something very much like this. Silver Lining isn’t just an application format, and things aren’t fully extracted along these lines, but it’s pretty close and it addresses a lot of important issues in the lifecycle of an application. To be clear: Silver Lining is an application packaging format, a server configuration library, a cloud server management tool, a persistence management tool, and a tool to manage the application with respect to all these services over time. It is a bunch of things, maybe too many things, so it is not unreasonable to pick out a smaller subset to focus on. Maybe an easy place to start (and good for Silver Lining itself) would be to separate at least the application format (and tools to manage applications in that state, e.g., installing new libraries) from the tools that make use of such applications (deploy, etc).

Some opinions I have on this format, exemplified in Silver Lining:

  • It’s not zipped or a single file, unlike WARs. Uploading zip files is not a great API. Geez. I know there’s this desire to "just drop in a file"; but there’s no getting around the fact that "dropping a file" becomes a deployment protocol and it’s an incredibly impoverished protocol. The format is also not subtly git-based (ala Heroku) because git push is not a good deployment protocol.
  • But of course there isn’t really any deployment protocol inferred by a format anyway, so maybe I’m getting ahead of myself ;) I’m saying a tool that deploys should take as an argument a directory, not a single file. (If the tool then zips it up and uploads it, fine!)
  • Configuration "comes from the outside". That is, an application requests services, and the container tells the application where those services are. For Silver Lining I’ve used environmental variables. I think this one point is really important — the container tells the application. As a counter-example, an application that comes with a Puppet deployment recipe is essentially telling the server how to arrange itself to suit the application. This will never be reliable or simple!
  • The application indicates what "services" it wants; for instance, it may want to have access to a MySQL database. The container then provides this to the application (there’s a sketch of the application’s side of this just after this list). In practice this means installing the actual packages, but also creating a database and setting up permissions appropriately. The alternative is never having any dependencies, meaning you have to use SQLite databases or ad hoc structures, etc. But in fact installing databases really isn’t that hard these days.
  • All persistence has to use a service of some kind. If you want to be able to write to files, you need to use a file service. This means the container is fully aware of everything the application is leaving behind. All the various paths an application should use are given in different environmental variables (many of which don’t need to be invented anew, e.g., $TMPDIR).
  • It uses vendor libraries exclusively for Python libraries. That means the application bundles all the libraries it requires. Nothing ever gets installed at deploy-time. This is in contrast to using a requirements.txt list of packages at deployment time. If you want to use those tools for development that’s fine, just not for deployment.
  • There is also a way to indicate other libraries you might require; e.g., you might need lxml, or even something that isn’t quite a library, like git (if you are making a github clone). You can’t do those as vendor libraries (they include non-portable binaries). Currently in Silver Lining the application description can contain a list of Ubuntu package names to install. Of course that would have to be abstracted some.
  • You can ask for scripts or a request to be invoked for an application after an installation or deployment. It’s lame to have to test is-this-app-installed on every request, which is the frequent alternative. Also, it gives the application the chance to signal that the installation failed.
  • It has a very simple (possibly/probably too simple) sense of configuration. You don’t have to use this if you make your app self-configuring (i.e., build in a web-accessible settings screen), but in practice it felt like some simple sense of configuration would be helpful.
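
To make the services point concrete: from the application’s side, consuming a MySQL service might look something like this. The variable names are invented for illustration (Silver Lining has its own specific names); the point is just that the container decides where the database lives and the application only reads the environment:


import os

import MySQLdb

# Illustration only: the container creates the database, sets up
# permissions, and exports connection settings; the application never
# hard-codes them.
conn = MySQLdb.connect(
    host=os.environ['MYSQL_HOST'],
    user=os.environ['MYSQL_USER'],
    passwd=os.environ['MYSQL_PASSWORD'],
    db=os.environ['MYSQL_DB'])
 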

Things that could be improved:

  • There are some places where you might be encouraged to use routines from the silversupport package. There are very few! But maybe an alternative could be provided for these cases.
  • A little convention-over-configuration is probably suitable for the bundled libraries; silver includes tools to manage things, but it gets a little twisty. When creating a new project I find myself creating several .pth files, special customizing modules, etc. Managing vendor libraries is also not obvious.
  • Services are IMHO quite important and useful, but also need to be carefully specified.
  • There’s a bunch of runtime expectations that aren’t part of the format, but in practice would be part of how the application is written. For instance, I make sure each app has its own temporary directory, and that it is cleared on update. If you keep session files in that location, and you expect the environment to clean up old sessions — well, either all environments should do that, or none should.
  • The process model is not entirely clear. I tried to simply define one process model (unthreaded, multiple processes), but I’m not sure that’s suitable — most notably, multiple processes have a significant memory impact compared to threads. An application should at least be able to indicate what process models it accepts and prefers.
  • Static files are all convention over configuration — you put static files under static/ and then they are available. So static/style.css would be at /style.css. I think this is generally good, but putting all static files under one URL path (e.g., /media/) can be good for other reasons as well. Maybe there should be conventions for both.
  • Cron jobs are important. Though maybe they could just be yet another kind of service? Many extra features could be new services.
  • Logging is also important; Silver Lining attempts to handle that somewhat, but it could be specified much better.
  • Silver Lining also supports PHP, which seemed to cause a bit of stress. But just ignore that. It’s really easy to ignore.

There is a description of the configuration file for apps. The environmental variables are also notably part of the application’s expectations. The file layout is explained (together with a bunch of Silver Lining-specific concepts) in Development Patterns. Besides all that there is admittedly some other stuff that is only really specified in code; but in Silver Lining’s defense, specified in code is better than unspecified ;) App Engine provides another example of an application format, and would be worth using as a point of discussion or contrast (I did that myself when writing Silver Lining).

Discussing WSGI stuff with Ben Bangert at PyCon he noted that he didn’t really feel like the WSGI pieces needed that much more work, or at least that’s not where the interesting work was — the interesting work is in the tooling. An application format could provide a great basis for building this tooling. And I honestly think that the tooling has been held back more by divergent patterns of development than by the difficulty of writing the tools themselves; and a good, general application format could fix that.

Packaging
Programming
Python
Web

Comments (18)

Permalink

Javascript on the server AND the client is not a big deal

All the cool kids love Node.js. I’ve used it a little, and it’s fine; I was able to do what I wanted to do, and it wasn’t particularly painful. It’s fun to use something new, and it’s relatively straight-forward to get started so it’s an emotionally satisfying experience.

There are several reasons you might want to use Node.js, and I’ll ignore many of them, but I want to talk about one in particular:

Javascript on the client and the server!

Is this such a great feature? I think not…

You only need to know one language!

Sure. Yay ignorance! But really, this is fine but unlikely to be relevant to any current potential audience for Node.js. If you are shooting for a very-easy-to-learn client-server programming system, Node.js isn’t it. Maybe Couch or something similar has that potential? But I digress.

It’s not easy to have expertise at multiple languages. But it’s not that hard. It’s considerably harder to have expertise at multiple platforms. Node.js gives you one language across client and server, but not one platform. Node.js programming doesn’t feel like the browser environment. They do adopt many conventions when it’s reasonable, but even then it’s not always the case — in particular because many browser APIs are the awkward product of C++ programmers exposing things to Javascript, and you don’t want to reproduce those same APIs if you don’t have to (and Node.js doesn’t have to!) — an example is the event pattern in Node, which is similar to a browser but less obtuse.

You get to share libraries!

First: the same set of libraries is probably not applicable. If you can do it on the client then you probably don’t have to do it on the server, and vice versa.

But sometimes the same libraries are useful. Can you really share them? Browser libraries are often hard to use elsewhere because they rely on browser APIs. These APIs are frequently impossible to implement in Javascript.

Actually they are possible to implement in Javascript using Proxies (or maybe some other new and not-yet-standard Javascript features). But not in Node.js, which uses V8, and V8 is a pretty conservative implementation of the Javascript language. (Update: it is noted that you can implement proxies — in this case a C++ extension to Node)

Besides these unimplementable APIs, it is also just a different environment. There is the trivial: the window object in the browser has a Node.js equivalent, but it’s not named window. Performance is different — Node has long-running processes, while the browser may or may not. Node can have blocking calls, which are useful even if you can’t use them at runtime (e.g., require()); but you can’t really have any of these at any time on the browser. And then of course all the system calls, none of which you can use in the browser.

All these may simply be surmountable challenges, through modularity, mocking, abstractions, and so on… but ultimately I think the motivation is lacking: the domain of changing a live-rendered DOM isn’t the same as producing bytes to put onto a socket.

You can work fluidly across client and server!

If anything I think this is dangerous rather than useful. The client and the server are different places, with different expectations. Any vagueness about that boundary is wrong.

It’s wrong from a security perspective, as the security assumptions are nearly opposite on the two platforms. The client trusts itself, and the server trusts itself, and both should hold the other in suspicion (though the client can be more trusting because the browser doesn’t trust the client code).

But it’s also the wrong way to treat HTTP. HTTP is pretty simple until you try to make it simpler. Efforts to make it simpler mostly make it more complicated. HTTP lets you send serialized data back and forth to a server, with a bunch of metadata and other doodads. And that’s all neat, but you should always be thinking about sending information. And never sharing information. It’s not a fluid boundary, and code that touches HTTP needs to be explicit about it and not pretend it is equivalent to any other non-network operation.

Certainly you don’t need two implementation languages to keep your mind clear. But it doesn’t hurt.

You can do validation the same way on the client and server!

One of the things people frequently bring up is that you can validate data on the client and server using the same code. And of course, what web developer hasn’t been a little frustrated that they have to implement validation twice?

Validation on the client is primarily a user experience concern, where you focus on bringing attention to problems with a form, and helping the user resolve those problems. You may be able to avoid errors entirely with an input method that avoids the problem (e.g., if you have a slider for a numeric input, you don’t have to worry about the user inputting a non-numeric value).

Once the form is submitted, if you’ve done thorough client-side validation you can also avoid friendly server-side validation. Of course all your client-side validation could be avoided through a malicious client, but you don’t need to give a friendly error message in that case, you can simply bail out with a simple 400 Bad Request error.

At that point there’s not much in common between these two kinds of validation — the client is all user experience, and the server is all data integrity.

You can do server-side Javascript as a fallback for the client!

Writing for clients without Javascript is becoming increasingly less relevant, and if we aren’t there yet, then we’ll certainly get there soon. It’s only a matter of time, the writing is on the wall. Depending on the project you might have to put in workarounds, but we should keep those concerns out of architecture decisions. Maintaining crazy hacks is not worth it. There’s so many terrible hacks that have turned into frameworks, and frameworks that have justified themselves because of the problems they solved that no longer matter… Node.js deserves better than to be one of those.

In Conclusion Or Whatever

I’m not saying Node.js is bad. There are other arguments for it, and you don’t need to make any argument for it if you just feel like using it. It’s fun to do something new. And I’m as optimistic about Javascript as anyone. But this one argument, I do not think it is very good.

Javascript
Programming
Web

Comments (29)

Permalink

Doctest.js & Callbacks

Many years ago I wrote a fairly straight-forward port of Python’s doctest to Javascript. I thought it was cool, but I didn’t really talk about it that much. Especially because I knew it had one fatal flaw: it was very unfriendly towards programming with callbacks, and Javascript uses a lot of callbacks.

On a recent flight I decided to look at it again, and realized fixing that one flaw wasn’t actually a big deal. So now doctest.js really works. And I think it works well: doctest.js.

I have yet to really use doctest.js on more than a couple real cases, and as I do (or you do?) I expect to tweak it more to make it flow well. But having tried a couple of examples I am particularly liking how it can be used with callbacks.

Testing with callbacks is generally a tricky thing. You want to make assertions, but they happen entirely separately from the test runner’s own loop, and your callbacks may not run at all if there’s a failure.

I came upon some tests recently that used Jasmine, a BDD-style test framework. I’m not a big fan of BDD but I’m fairly new to serious Javascript development so I’m trying to withhold judgement. The flow of the tests is a bit peculiar until you realize that it’s for async reasons. I’ll try to show something that roughly approximates a real test of an XMLHttpRequest API call:


it("should give us no results", function() {
  runs(function () {
    var callback = createSpy('callback for results');
    $.ajax({
      url: '/search',
      data: {q: "query unlikely to match anything"},
      dataType: "json",
      success: callback
    });
  });
  waits(someTimeout);
  runs(function () {
    expect(callback).toHaveBeenCalled();
    expect(callback.mostRecentCall.args[0].length).toEqual(0);
  });
});
 

So, the basic pattern is that it() creates a group of tests, and each call to runs() is a block of code to execute sequentially. Then between these runs blocks you can signal the runner to wait for some result, either for a timeout (which is fragile) or until specific conditions are met.

Another popular test runner is QUnit; it’s popular particularly because it’s what jQuery uses, and my own impression is that QUnit is just very simple and so least likely to piss you off.

QUnit has its own style for async:


test("should give us no results", function () {
  stop();
  expect(1);
  $.ajax({
    url: '/search',
    data: {q: "query unlikely to match anything"},
    dataType: "json",
    success: function (result) {
      ok(result.length == 0, 'No results');
      start();
    }
  });
});
 

stop() confused me for a bit until I realized it really refers to stopping the test runner; of course the function itself continues on regardless. What will happen is that the function will return, but nothing will have really been tested — the success callback will not have been run, and cannot run until all Javascript execution stops and control is given back to the browser. So the test runner will use setTimeout to let time pass before the test continues. In this case it will continue once start() is called. And expect() also makes it fail if it didn’t get at least one assertion during that interval — it would otherwise be easy to simply miss an assertion (though in this example it would be okay because if the success callback isn’t invoked then start() will never be called, and the runner will time out and signal that as a failure).

So… now for doctest.js. Note that doctest.js isn’t "plain" Javascript; it looks like what an interactive Javascript session might look like (I’ve used shell-style prompts instead of typical console prompts, because the consoles didn’t exist when I first wrote this, and because >>>/... kind of annoy me anyway).


$ success = Spy('success', {writes: true});
$ $.ajax({
>   url: '/search',
>   data: {q: "query unlikely to match anything"},
>   dataType: "json",
>   success: success.func
> });
$ success.wait();
success([])
 

With doctest.js you still get a fairly linear feel — it’s similar to how Jasmine works, except every $ prompt is potentially a place where the loop can be released so something async can happen. Each prompt is equivalent to run() (though unless you call wait, directly or indirectly, everything will run in sequence).

There’s also an implicit assertion for each stanza, which is that anything written must be matched ({writes: true} makes the spy/mock object write out any invocations). This makes it much harder to miss something in your tests.

Update: just for the record, doctest has changed some, and while that example still works, this would be the "right" way to do it now:


$.ajax({
  url: '/search',
  data: {q: "query unlikely to match anything"},
  dataType: "json",
  success: Spy("search.success", {wait: true, ignoreThis: true})
});
// => search.success([])
 

There is a new format that I now prefer with plain Javascript and "expected output" in comments. Spy("search.success", {wait: true, ignoreThis: true}) causes the test to wait on the Spy immediately (though the same pattern as before is also possible and sometimes preferable), and in all likelihood jQuery will set this to something we don’t care about, so ignoreThis: true keeps it from being printed. (Or maybe you are interested in it, in which case you’d leave that out)

Anyway, back to the original conclusion (update over)…

I’ve never actually found Python’s doctest to be a particularly good way to write docs, and I don’t expect any different from doctest.js, but I find it a very nice way to write and run tests… and while Python’s doctest is essentially abandoned and lacks many features to make it a more humane testing environment, maybe doctest.js can do better.

Javascript
Programming
Testing
Web

Comments (3)

Permalink

The Browser Desktop, developer tools

I find myself working in a Windows environment due to some temporary problems with my Linux installation. In terms of user experience Windows is not terrible. But more notable, things mostly just feel the same. My computing experience is not very dependent on the operating system… almost. Most of what I do is in a web browser — except programming itself. Probably a lot of you have the same experience: web browser, text editor, and terminal are pretty much all I need. I occasionally play with other tools, but none of them stick. Of course underlying the terminal and text editor UI is a whole host of important software — interpreters, version control tools, checkouts of all my projects, etc. So really there’s two things keeping us from a browser-only world: a few bits of UI, and a whole bunch of tools. Can we bridge this? I’m thinking (more speculatively than as an actual plan): could I stay on Windows without ever having to "use" Windows?

Browsers are clearly capable of implementing a capable UI for a terminal or editor; not a trivial endeavor, but not impossible. We need a way of handling the tools. The obvious answer in that case is a virtual machine. The virtual machine would certainly be using Linux, as there’s clear consensus that if you remove the UI and hardware considerations and just consider tools then Linux is by far the best choice — who uses Mac servers? And Windows is barely worth mentioning. I worked in a Linux VM for a while but found it really unsatisfying — but that was using the Linux UI through a VMWare interface.

So instead imagine: you start up a headless VM (remembering the tools are not about UI, so there’s no reason to have a graphical user interface on the VM), you point your browser at this VM, and you use a browser-based developer environment that mediates all the tools (the lightest kind of mediation is just simulating a terminal and using existing console-based interfaces). Look at your existing setup and just imagine a browser window in place of each not-browser-window app you are using.

I’m intrigued then by the idea of adding more to these interfaces, incrementally. Like HTML in the console, or applications lightly wrapping individual tools. IDEs never stick for me, maybe in part because I can’t commit, and also there’s collaboration issues with these tools (I’m never in a team where we would be able to agree on a single environment). But incremental decentralized improvements seem genuinely workable — improvement more in the style of the web, the browser providing the central metaphor.

I call this a Browser Desktop because it’s a fairly incremental change at this point and other terms (Web OS, Cloud OS) are always presented with unnecessary hyperbole. What "operating system" you are using in this imagined system is a somewhat uninteresting semantic question; the OS hasn’t disappeared, it’s just boring. "The Cloud" is fine, but too easy to overthink, and there are many technical reasons to use a hybrid of local and remote pieces. "Internet Operating System" is more a framing concept than a thing-that-can-be-built. Chromium OS is essentially the same idea… I’m not really sure how they categorize themselves.

What would be painful right now? Good Javascript terminals exist. Bespin is hard at work on an editor worthy of being used by programmers. The browser needs to be an extremely solid platform. Google Chrome has done a lot in this direction, and Firefox is moving the same direction with the Electrolysis project. It’s okay to punt for now on all the "consumer" issues like music and media handling… and anyway, other people are hard at work on those things. Web sockets will help with some kinds of services that ideally will connect directly to a port; it’s not the same as a raw socket, but I feel like there’s potential for small intermediaries (e.g., imagine a Javascript app that connects to a locally-hosted server-side app that proxies to ssh). Also AddOns can be used when necessary (e.g., ChatZilla <https://addons.mozilla.org/en-US/firefox/addon/16>).

I’d like much better management of all these "apps" aka pages aka windows or tabs — things like split screens and workspaces. Generally I think using such a system heavily will create all sorts of interesting UI tensions. Which might be annoying for the user, but if it’s a constructive annoyance…

On the whole… this seems doable. It’s navel gazing in a sense — programmers thinking about programming — but one good thing about navel gazing is that programmers have traditionally been quite good at navel gazing, and while some results aren’t generally applicable (e.g., VM management) the exercise will certainly create many generally applicable side products. It would encourage interesting itch-scratching. There’s lots of other "web OS" efforts out there, but I’ve never really understood them… they copy desktop metaphors, or have weird filesystem metaphors, or create an unnecessarily cohesive experience. The web is not cohesive, and I’m pretty okay with that; I don’t expect my experiences in this context to be any more cohesive than my tasks are cohesive. In fact it’s exactly the lack of cohesiveness that interests me in this exercise — the browser mostly gives me the level of cohesiveness I want, and I’m open to experimentation on the rest. And maybe the biggest interest for me is that I am entirely convinced that traditional GUI applications are a dead end; they rise and fall (mobile apps being a current rise) but I can’t seriously imagine long term (10 year) viability for any current or upcoming GUI system. I’m certain the browser is going to be along for the long haul. Doing this would let us Live The Future ;)

Mozilla
Programming
Web

Comments (11)

Permalink

Silver Lining: More People!

OK… so I said before Silver Lining is for collaborators not users. And that’s still true… it’s not a polished experience where you can confidently ignore the innards of the tool. But it does stuff, and it works, and you can use it. So… I encourage some more of you to do so.

Now would be a perfectly good time, for instance, to port an application you use to the system. Almost all Python applications should be portable. The requirements are fairly simple:

  1. The application needs a WSGI interface.
  2. It needs to be Python 2.6 compatible.
  3. Any libraries that aren’t pure-Python need to be available as deb packages in some form.
  4. Any persistence needs to be provided as a service; if the appropriate service isn’t already available you may need to write some code.

Also PHP applications should work (though you may encounter more rough edges), with these constraints:

  1. No .htaccess files, so you have to implement any URL rewriting in PHP (e.g., for WordPress).
  2. Source code is not writable, so self-installers that write files won’t work. (Self-installing plugins might be workable, but that hasn’t been worked out yet.)
  3. And the same constraints for services.

So… take an application, give it a try, and tell me what you encounter.

Also I’d love to get feedback and ideas from people with more sysadmin background, or who know Ubuntu/Debian tricks. For instance, I’d like to handle some of the questions packages ask during installation (right now they are all left as defaults, which is not always the right answer). I imagine there’s some non-interactive way to handle those questions but I haven’t been able to find it.

Python
Silver Lining
Web

Comments (5)

Permalink

WebTest HTTP testing

I’ve yet to see another testing system for local web testing that I like as much as WebTest… which is perhaps personal bias for something I wrote, but then I don’t have that same bias towards everything I’ve written. Many frameworks build in their own testing systems but I don’t like the abstractions — they touch lots of internal things, or skip important steps of the request, or mock out things that don’t need to be mocked out. WSGI can make this testing easy.

There’s also a hidden feature here: because WSGI is basically just describing HTTP, it can be a means of representing not just incoming HTTP requests, but also outgoing HTTP requests. If you are running local tests against your application using WebTest, with just a little tweaking you can turn those tests into HTTP tests (i.e., actually connect to a socket). But doing this is admittedly not obvious; hence this post!

Here’s what a basic WebTest test looks like:


from webtest import TestApp
import json

wsgi_app = acquire_wsgi_application_somehow()
app = TestApp(wsgi_app)

def test_login():
    resp = app.post('/login', dict(username='guest', password='guest'))
    resp.mustcontain('login successful')
    resp = resp.click('home')
    resp.mustcontain('<a href="/profile">guest</a>')
    # Or with a little framework integration:
    assert resp.templatevars.get('username') == 'guest'

# Or an API test:
def test_user_query():
    resp = app.get('/users.json')
    assert 'guest' in resp.json['userList']
    user_info = dict(username='guest2', password='guest2', name='Guest')
    resp = app.post('/users.json', params=json.dumps(user_info),
                    content_type='application/json')
    assert resp.json == user_info
 

The app object is a wrapper around the WSGI application, and each of those methods runs a request and gets the response. The response object is a WebOb response with several additional helpers for testing (things like .click() which finds a link in HTML and follows it, or .json which loads the body as JSON).

You don’t have to be using a WSGI-centric framework like Pylons to use WebTest, it works fine with anything with a WSGI frontend, which is just about everything. But the point of my post: you don’t have to use it with a WSGI application at all. Using WSGIProxy:


import os
import urlparse

if os.environ.get('TEST_REMOTE'):
    from wsgiproxy.exactproxy import proxy_exact_request
    wsgi_app = proxy_exact_request
    parsed = urlparse.urlsplit(os.environ['TEST_REMOTE'])
    app = TestApp(proxy_exact_request, extra_environ={
                  'wsgi.url_scheme': parsed.scheme,
                  'HTTP_HOST': parsed.netloc,
                  'SERVER_NAME': parsed.netloc})
else:
    wsgi_app = acquire_wsgi_application_somehow()
    app = TestApp(wsgi_app)
 

It’s a little crude to control this with an environmental variable ($TEST_REMOTE), but it’s an easy way to pass an option in when there’s no better way (and many test runners don’t make options easy). The extra_environ option puts in the host and scheme information into each request (the default host WebTest puts in is http://localhost). WSGIProxy lets you send a request to any host, kind of bypassing DNS, so SERVER_NAME is actually the server the request goes to, while HTTP_HOST is the value of the Host header.

Going over HTTP there are a couple features that won’t work. For instance, you can pass information about your application back to the test code by putting values in environ['paste.testing_variables'] (which is how you’d make resp.templatevars work in the first example). It’s also possible to use extra_environ to pass information into your application, for example to get your application to mock out user authentication; this is fairly safe because in production no request can put those same special keys into the environment (using custom HTTP headers means you must carefully filter requests in production). But custom environ values won’t work over HTTP.
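
The mock-authentication trick is roughly this (the test.user key is an invented name; the application would check for it and skip real authentication when it is present):


from webtest import TestApp

# wsgi_app as acquired earlier; 'test.user' is a made-up environ key.
app = TestApp(wsgi_app, extra_environ={'test.user': 'guest'})

# Inside the application (on the WSGI side) something like:
#     user = environ.get('test.user') or authenticate(environ)
# No real HTTP request can set 'test.user', so this stays test-only -- but,
# as noted, it won't survive the switch to running over HTTP.
resp = app.get('/profile')
resp.mustcontain('guest')
 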

The thing that got me thinking about this is the work I’m doing on Silver Lining, where I am taking apps and rearranging the code and modifying the database configuration and setup to fit this deployment system. Having done that, it would be really nice to be able to run some functional tests, and I really want to run them over HTTP. If an application has tests using something like Selenium or Windmill that would also work great, but those tools can be a bit more challenging to work with and applications still need smaller tests anyway, so being able to reuse tests like these would be most useful.

Programming
Python
Web

Comments (1)

Permalink

The Web Server Benchmarking We Need

Another WSGI web server benchmark was published. It’s a decent benchmark, despite some criticisms. But it benchmarks what everyone benchmarks: serving up a trivial app really really quickly. This is not very useful to me. Also, to me performance is not the most important way to differentiate servers.

In Silver Lining we’re using mod_wsgi. Silver Lining isn’t tied to mod_wsgi (applications can’t really tell), and we may revisit that decision (mostly because of memory concerns), but it is a deliberate choice. mod_wsgi is one of the few multiprocess WSGI servers, and it manages its children (the same way Apache manages all its children). So if a child stops responding, it gets taken out of the pool and killed (brutal efficiency! Or at least brutal terminology). Child processes are also recycled, guarding against memory leaks or other peculiarities. Sometimes these kinds of things are dismissed for covering up bugs, but (a) production is a lousy time to learn about bugs, (b) it’s like a third tier of garbage collection, and (c) the bugs you are avoiding are often bugs you can’t fix anyway (for instance, if your mysql driver leaks memory, is that the application developer’s fault?)

I wish there was competition among servers not to see who can tweak their performance for entirely unrealistic situations, but to see who can implement the most fail-safe server. We’re missing good benchmarks. Unfortunately benchmarks are a pain in the butt to write and manage.

But I hope someone writes a benchmark like that. Here’s some things I’d like to see benchmarked:

  • A "realistic" CPU-bound application. for i in xrange(10000000): pass is a reasonable start.
  • An application that generates big responses, e.g., "x"*100000.
  • An I/O bound application. E.g., one that reads a big file.
  • A simply slow application (time.sleep(1)).
  • Applications that wedge. while 1: pass perhaps? Or lock = threading.Lock(); lock.acquire(); lock.acquire(). Wedging in C and wedging in Python are different, so a bunch of different kinds of wedging.
  • Applications that segfault. ctypes is specially designed for this.
  • Applications that leak memory like a sieve, e.g., global_var.extend(['x']*10000).
  • Large uploads.
  • Slow uploads, like a client that takes 30 seconds to upload 1Mb.
  • Also slow downloads.
  • In each case it is interesting what happens when something bad happens to just a portion of requests. E.g., if 1% of requests wedge hard. A good container will serve the other 99% of requests properly. A bad container will have its worker pool exhausted and completely stop.
  • Mixing and matching these could be interesting. For instance Dave Beazley found some bad GIL results mixing I/O and CPU-bound code.
  • Add ideas in the comments and I’ll copy them into this list.

The hardest part of writing this is not the applications (they are simple). One annoyance is wiring up the applications, but handily Nicholas covers that well in his benchmark. You also have to make sure to clean up, as many servers will not exit cleanly from some of the tests. Another nuisance is that some of these require funny clients. These aren’t too hard to write, but you can’t just use ab. Then you have to report.
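
Just to show how little code the applications themselves take, here are a few of the misbehaving apps from the list above sketched as plain WSGI callables (the harness, the funny clients, and the reporting are the real work):


import time

def cpu_bound_app(environ, start_response):
    # A "realistic" CPU-bound request.
    for i in xrange(10000000):
        pass
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['done']

def slow_app(environ, start_response):
    # Simply slow.
    time.sleep(1)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['done']

def big_response_app(environ, start_response):
    # A big response body.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['x' * 100000]

def wedged_app(environ, start_response):
    # Wedges hard (in Python); never returns.
    while 1:
        pass
 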

Anyway: I would love it if someone did this, and packaged it as repeatable/runnable code/scripts. I’ll help some, but I can’t lead. I’d both really like to see the results, and in my ideal world people writing servers would start using these benchmarks to make their servers more robust.

Programming
Python
Silver Lining
Web

Comments (23)

Permalink

What Does A WebOb App Look Like?

Lately I’ve been writing code using WebOb and just a few other small libraries. It’s not entirely obvious what this looks like, so I thought I’d give a simple example.

I make each application a class. Instances of the class are "configured applications". So it looks a little like this (for an application that takes one configuration parameter, file_path):


class Application(object):
    def __init__(self, file_path):
        self.file_path = file_path
 

Then the app needs to be a WSGI app, because that’s how I roll. I use webob.dec:


from webob.dec import wsgify
from webob import exc
from webob import Response

class Application(object):
    def __init__(self, file_path):
        self.file_path = file_path
    @wsgify
    def __call__(self, req):
        return Response('Hi!')
 

Somewhere separate from the application you actually instantiate Application. You can use Paste Deploy for that, configure it yourself, or just do something ad hoc (a lot of mod_wsgi .wsgi files are like this, basically).
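
For example, the separate instantiation might be nothing more than this — myapp and the file_path value are placeholders, and the wsgiref part is just one ad hoc way to run it locally:


# e.g. app.wsgi for mod_wsgi, or a little dev-server script
from myapp import Application

application = Application(file_path='/var/lib/myapp/data')

if __name__ == '__main__':
    # Quick local run with the standard library:
    from wsgiref.simple_server import make_server
    make_server('localhost', 8080, application).serve_forever()
 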

I use webob.exc for things like exc.HTTPNotFound(). You can raise that as an exception, but I mostly just return the object (to the same effect).

Now you have Hello World. I then sometimes do something terrible, I start handling URLs like this:


@wsgify
def __call__(self, req):
    if req.path_info == '/':
        return self.index(req)
    elif req.path_info.startswith('/view/'):
        return self.view(req)
    return exc.HTTPNotFound()
 

This is lazy and a very bad idea. So you want a dispatcher. There are several (e.g., selector). I’ll use Routes here… the latest release makes it a bit easier (though it could still be streamlined a bit). Here’s a pattern I think makes sense:


from routes import Mapper
from routes.util import URLGenerator

class Application(object):
    map = Mapper()
    map.connect('index', '/', method='index')
    map.connect('view', '/view/{item}', method='view')

    def __init__(self, file_path):
        self.file_path = file_path

    @wsgify
    def __call__(self, req):
        results = self.map.routematch(environ=req.environ)
        if not results:
            return exc.HTTPNotFound()
        match, route = results
        link = URLGenerator(self.map, req.environ)
        req.urlvars = match
        kwargs = match.copy()
        method = kwargs.pop('method')
        req.link = link
        return getattr(self, method)(req, **kwargs)

    def index(self, req):
        ...
    def view(self, req, item):
        ...
 

Another way you might do it is to skip the class, which means skipping a clear place for configuration. I don’t like that, but if you don’t care about that, then it looks like this:


def index(req):
    ...
def view(req, item):
    ...

map = Mapper()
map.connect('index', '/', view=index)
map.connect('view', '/view/{item}', view=view)

@wsgify
def application(req):
    results = map.routematch(environ=req.environ)
    if not results:
        return exc.HTTPNotFound()
    match, route = results
    link = URLGenerator(map, req.environ)
    req.urlvars = match
    kwargs = match.copy()
    view = kwargs.pop('view')
    req.link = link
    return view(req, **kwargs)
 

Then application is pretty much boilerplate. You could put configuration in the request if you wanted, or use some other technique (like Contextual).

I talked some with Ben Bangert about what he’s trying with these patterns, and he’s doing something reminiscent of Pylons controllers (but without the rest of Pylons) and it looks more like this (with my own adaptations):


class BaseController(object):
    special_vars = ['controller', 'action']

    def __init__(self, request, link, **config):
        self.request = request
        self.link = link
        for name, value in config.items():
            setattr(self, name, value)

    def __call__(self):
        action = self.request.urlvars.get('action', 'index')
        if hasattr(self, '__before__'):
            self.__before__()
        kwargs = self.request.urlvars.copy()
        for attr in self.special_vars:
            if attr in kwargs:
                del kwargs[attr]
        return getattr(self, action)(**kwargs)

class Index(BaseController):
    def index(self):
        ...
    def view(self, item):
        ...

class Application(object):
    map = Mapper()
    map.connect('index', '/', controller=Index)
    map.connect('view', '/view/{item}', controller=Index, action='view')

    def __init__(self, **config):
        self.config = config

    @wsgify
    def __call__(self, req):
        results = self.map.routematch(environ=req.environ)
        if not results:
            return exc.HTTPNotFound()
        match, route = results
        link = URLGenerator(self.map, req.environ)
        req.urlvars = match
        controller = match['controller'](req, link, **self.config)
        return controller()
 

That’s a lot of code blocks, but they all really say the same thing ;) I think writing apps with almost-no-framework like this is pretty doable, so if you have something small you should give it a go. I think it’s especially appropriate for applications that are an API (not a "web site").

Programming
Python
Web

Comments (30)

Permalink