Why doctest.js is better than Python’s doctest

I’ve been trying, not too successfully I’m afraid, to get more people to use doctest.js. There are probably a few reasons people don’t. They are all wrong! Doctest.js is the best!

One issue in particular is that people (especially people in my Python-biased circles) are perhaps thrown off by Python’s doctest. I think Python’s doctest is pretty nice, I enjoy testing with it, but there’s no question that it has a lot of problems. I’ve even thought about trying to fix doctest, and even made a repository, but I only really got as far as creating a list of issues I’d like to fix. But, like so many before me, I never actually made those fixes. Doctest has, in its life, only really had a single period of improvement (in the time leading to Python 2.4). That’s not a recipe for success.

Of course doctest.js takes inspiration from Python’s doctest, but I wrote it as a real test environment, not for a minimal use case. In the process I fixed a bunch of issues with doctest, and in places Javascript has also provided helpful usability.

Some issues:

Doctest.js output is predictable

The classic pitfall of Python’s doctest is printing a dictionary:


>>> print {"one": 1, "two": 2}
{'two': 2, 'one': 1}
 

The print order of a dictionary is arbitrary, based on a hash algorithm that can change, or mix things up as items are added or removed. And to make it worse, the output is usually stable, so you can write tests that are unexpectedly fragile. But there’s no reason why dict.__repr__ must use an arbitrary order. Personally I take it as a bit of unfortunate laziness.

If doctest had used pprint for all of its printing it would have helped some. But not enough, because this kind of code is fairly common:


def __repr__(self):
    return '<ThisClass attr=%r>' % self.attr
 

and that %r invokes a repr() that cannot be overridden.

In doctest.js I always try to make output predictable. One reason this is fairly easy is that there’s nothing like repr() in Javascript, so doctest.js has its own implementation. It’s like I started with pprint and no other notion existed.
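
For example, here’s a sketch of the kind of predictability I mean, written in the expected-output comment style described later in this post (I’m assuming the sorted-key order and exact formatting here; the point is just that whatever it prints, it prints the same way every time):

print({two: 2, one: 1});
// => {one: 1, two: 2}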

Good matching

In addition to unpredictable output, there’s also just hard-to-match output. Output might contain blank lines, for instance, and Python’s doctest requires a very ugly <BLANKLINE> token to handle that. Whitespace might not be normalized. Maybe there’s boring output. Maybe there’s just a volatile item like a timestamp.

Doctest.js includes, by default, ellipsis: ... matches any length of text. But it also includes another wildcard, ?, which matches just one number or word. This avoids cases where ... swallows up too much when you only wanted to match a single word.
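
A sketch of the difference (the names are made up, and the expected-output comment style is explained in a later section): ? pins down a single value, while ... happily swallows the rest of the line.

var id = Math.floor(Math.random() * 1000);
print("created user " + id + " at " + new Date());
// => created user ? at ...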

Also doctest.js doesn’t use ... for other purposes. In Python’s doctest ... is used for continuation lines, meaning you can’t just ignore output, like:


>>> print who_knows_what_this_returns()
...
 

Or even worse, you can’t ignore the beginning of an item:


>>> print some_request
...
X-Some-Header: foo
...
 

The way I prefer to use doctest.js there is no continuation line symbol at all (but if there is one, it’s >).

Also doctest.js normalizes whitespace, normalizes " and ', and just generally tries to be reasonable.

Doctest.js tests are plain Javascript

Not many editors know how to syntax highlight and check doctests, with their >>> in front of each line and so forth. And the whole thing is tweaky, you need to use a continuation (...) on some lines, and start statements with >>>. It’s an awkward way to compose.

Doctest.js started out with the same notion, though with different symbols ($ and >). But recently with the rise of a number of excellent parsers (I used Esprima) I’ve moved my own tests to another pattern:


print(something())
// => expected output
 

This is already a fairly common way to write examples. Like how you may have read pseudocode before ever learning Python and thought: that looks like Python! Doctest.js looks like example pseudocode.

Doctest.js tests are self-describing

Python’s doctest has some options, some important options that affect the semantics of the test, that you can only turn on in the runner. The most important option is ELLIPSIS. Either your test was written to use ELLIPSIS or it wasn’t – that a test can’t self-describe its requirements means that test running is fragile.

I made the hackiest package ever to get around this in Python, but it’s hacky and lame.

Exception handling isn’t special

Python’s doctest treats exceptions differently from other output. So if you print something before the exception, it is thrown away, never to be seen. And you can’t use some of the same matching techniques.

Doctest.js just prints out exceptions, and it’s matched like anything else.

This particular case is one of several places where it feels like Python’s doctest is just being obstinate. Doing it the right way isn’t harder. Python’s doctest makes debugging exception cases really hard.
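
For instance, a sketch of what that looks like (the exception text is whatever the engine produces, and I’m not asserting exactly how doctest.js prefixes it, hence the leading wildcard):

undefinedFunction();
// => ...undefinedFunction is not defined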

Doctest.js has a concept of "abort"

I’m actually pretty okay with Python doctest’s notion that you just run all the tests, even when one fails. Getting too many failures is a bit of a nuisance, but it’s not that bad. But there’s no way to just give up, and there needs to be. If you are relying on something to be importable, or some service to be available, there’s no point in going on with the tests.

Doctest.js lets you call Abort() and further tests are cancelled.
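
A minimal sketch of how that might be used (SomeRequiredLibrary is a made-up name, and I’m not asserting whether Abort() accepts a message argument):

// If a hard dependency never loaded, there is no point running anything else.
if (typeof SomeRequiredLibrary == "undefined") {
  Abort();
}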

Distinguishing between debugging output and deliberate output

Maybe it’s my own fault for being a programming troglodyte, but I use a lot of print for debugging. This becomes a real problem with Python’s doctest, as it tracks all that printing and it causes tests to fail.

Javascript has something specifically for printing debugging output: console.log(). Doctest.js doesn’t mess with that, it adds a new function print(). Only stuff that is printed (not logged) is treated as expected output. It’s like console.log() goes to stderr and print() goes to stdout.
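
A small sketch of the split (the value here is just a stand-in):

var status = 200;                       // stand-in for some real result
console.log("about to check status");   // debugging chatter: visible, but not compared
print("status: " + status);             // deliberate output: compared to the line below
// => status: 200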

Doctest.js also forces the developer to print everything they care about. For better or worse Javascript has many more expressions than Python (including assignments), so the mere presence of an expression isn’t a good clue for whether you care about its result. I’m not sure this is better, but it’s part of the difference.

Doctest.js also groups your printed statements according to the example you are in (an example being a block of code and an expected output). This is much more helpful than watching a giant stream of output go to the console (the browser console or terminal).

Doctest.js handles async code

This admittedly isn’t that big a deal for Python, but for Javascript it is a real problem. Not a problem for doctest.js in particular, but a problem for any Javascript test framework. You want to test return values, but lots of functions don’t "return", instead they call some callback or create some kind of promise object, and you have to test for side effects.

Doctest.js I think has a really great answer for this, which is not so much to say that Python’s doctest is so much worse, but in the context of Javascript doctest.js has something really useful and unique. If callback-driven async code had ever been very popular in Python then this sort of feature would be nice there too.

The browser is a great environment

A lot of where doctest.js is much better than Python’s doctest is simply that it has a much more powerful environment for displaying results. It can highlight failed or passing tests. When there’s a wildcard in expected output, it can show the actual output without adding any particular extra distraction. It can group console messages with the tests they go with. It can show both a simple failure message, and a detailed line-by-line comparison. All these details make it easy to identify what went wrong and fix it. The browser gives a rich and navigable interface.

I’d like to get doctest.js working well on Node.js (right now it works, but is not appealing), but I just can’t bring myself to give up the browser. I have to figure out a good hybrid.

Python’s doctest lacks a champion

This is ultimately the reason Python’s doctest has all these problems: no one cares about it, no one feels responsible for it, and no one feels empowered to make improvements to it. And to make things worse there is a cadre of people who will respond to suggestions with their own criticisms that doctest should never be used beyond its original niche, that its constraints are features.

Doctest is still great

I’m ragging on Python’s doctest only because I love it. I wish it was better, and I made doctest.js in a way I wish Python’s doctest was made. Doctest, and more generally example/expectation oriented code, is a great way to explain things, to make tests readable, to make test-driven development feasible, to create an environment that errs on the side of over-testing instead of under-testing, and to make failures and resolutions symmetric. It’s still vastly superior to BDD, avoiding all BDD’s aping of readability while still embracing the sense of test-as-narrative.

But, more to the point: use doctest.js, read the tutorial, or try it in the browser. I swear, it’s really nice to use.

Javascript
Mozilla
Programming
Python

Git-as-sync, not source-control-as-deployment

I don’t like systems that use git push for deployment (Heroku et al). Why? I do a lot of this:


$ git push deploy
... realize I forgot a domain name ...
$ git commit -m "fix domain name" -a ; git push deploy
... realize I didn't do something right with the database setup ...
$ git commit -m "configure database right" -a ; git push deploy
... dammit, I didn't fix it quite right ...
$ git commit -m "typo" -a ; git push deploy
 

And then maybe I’d actually like to keep my config out of my source control, or have a build process that I run locally, or any number of things. I’d like to be able to test deployment, but every deployment is a commit, and I like to commit tested work. I think I could use git rebase but I lack the discipline to undo my work so I can do it correctly. This is why I don’t do continuous commits.

There’s a whole different level of weirdness when you use GitHub Pages as you aren’t pushing to a deployment-specific remote, you are pushing to a deployment-specific branch.

So I’ve generally thought: git deployment is wrong.

Then I was talking to some other people at Mozilla and they mentioned that ops was using git for simply moving files around even though the stuff they were deploying was itself in Mercurial. They had a particular site with a very large number of files, and it was faster to use git than rsync (git has more metadata than rsync; rsync has to look at everything every time you sync). And that all seemed very reasonable; git is a fine way to sync things.

But I kind of forgot about it all, and just swore to myself as I did too many trivial commits and wrote too many meaningless commit messages.

Still… it isn’t so hard to separate these concerns, is it? So I wrote up a quite small command called git-sync. The basic idea: copy the working directory to a new location (minus .git/), commit that, and push the result to your deployment remote. You can send modified and untracked files, and you can run a build script before committing and push the result of the build script, all without sullying your "real" source control. And you happen to have a nice history of deployments, which is also nice.
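
Roughly, it does the moral equivalent of this (a sketch of the manual steps, not git-sync’s actual implementation; the paths, remote name, and build script are made up, and the deploy copy is assumed to have been initialized once as its own repository with a deploy remote):

$ rsync -a --exclude=.git ./ ../myapp-deploy/   # copy the working tree, minus .git/
$ cd ../myapp-deploy
$ ./build.sh                                    # optional local build step
$ git add -A
$ git commit -m "deploy"
$ git push deploy master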

I’ve only used this a little bit, but I’ve enjoyed when I have used it, and it makes me feel much better/clearer about my actual commits. It’s really short right now, and probably gets some things entirely wrong (e.g., moving over untracked files). But it works well enough to be improved (winkwinknudgenudge).

So check it out: https://github.com/ianb/git-sync

Programming
Web

My Unsolicited Advice For PyPy

I think the most interesting work in programming languages right now is about the runtime, not syntax or even the languages themselves. Which places PyPy in an interesting position, as they have put a great deal of effort into abstracting out the concept of runtime from the language they are implementing (Python).

There are of course other runtime environments available to Python. The main environment has and continues to be CPython — the runtime developed in parallel with the language, and with continuous incremental feedback and improvement by the Python developer community. It is the runtime that informs and is informed by the language. It’s also the runtime that is most easy-going about integrating with C libraries, and by extension it is part of the vague but important runtime environment of "Unix". There’s also Jython and IronPython. I frankly find these completely uninteresting. They are runtimes controlled by companies, not communities, and the Python implementations are neither natural parts of their runtime environments, nor do the runtimes include many concessions to make themselves natural for Python.

PyPy is somewhere different. It still has a tremendous challenge because Python was not developed for PyPy. Even small changes to the language seem impossible — something as seemingly innocuous as making builtins static seems to be stuck in a conservative reluctance to change. But unlike Jython and IronPython they aren’t stuck between a rock and a hard place; they just have to deal with the rock, not the hard place.

So here is my unsolicited advice on what PyPy-the-runtime should consider. Simple improvements to performance and the runtime are fine, but being incrementally better than CPython only goes so far, and I personally doubt it will ever make a big impact on Python that way.

PyPy should push hard on concurrency and reliability. If it is fast enough then that’s fine; that’s done as far as I’m concerned. I say this because I’m a web programmer, and speed is uninteresting to me. Certainly opinions will differ. But to me speed (as it’s normally defined) is really really uninteresting. When or if I care about speed I’m probably more drawn to Cython. I do care about latency, memory efficiency, scalability/concurrency, resource efficiency, and most of all worst cases. I don’t think a JIT addresses any of these (and can even make things worse). I don’t know of benchmarks that measure these parameters either.

I want a runtime with new and novel features; something that isn’t just incrementally better than CPython. This itself might seem controversial, as the only point to such novel features would be for people to implement at least some code intended for only PyPy. But if the features are good enough then I’m okay with this — and if I’m not drawn to write something that will only work on PyPy, I probably won’t be drawn to use PyPy at all; natural conservatism and inertia will keep me (and most people) on CPython indefinitely.

What do I want?

  • Microprocesses. Stackless and greenlets have given us micro-threads, but it’s just not the same. Which is not entirely a criticism — it shows that unportable features are interesting when they are good features. But I want the next step, which is processes that don’t share state. (And implicitly I don’t just want standard async techniques, which use explicit concurrency and shared state.)
  • Shared objects across processes with copy-on-write; then you can efficiently share objects (like modules!) across concurrent processes without the danger of shared state, but without the overhead of copying everything you want to share. Lack of this is hurting PHP, as you can’t have a rich set of libraries and share-nothing without killing your performance.
  • I’d rather see a break in compatibility for C extensions to support this new model, than to abandon what could be PyPy’s best feature to support CPython’s C extension ecosystem. Being a web programmer I honestly don’t need many C modules, so maybe I’m biased. But if the rest of the system is good enough then the C extensions will come.
  • Make sure resource sharing that happens outside of the Python environment is really solid. C libraries are often going to be unfriendly towards microprocesses; make sure what is exposed to the Python environment is solid. That might even mean a dangerous process mode that can handle ctypes and FFI and where you carefully write Python code that has extra powers, so long as there’s a strong wall between that code and "general" code that makes use of those services.
  • Cython — it’s doing a lot of good stuff, and has a much more conservative but also more predictable path to performance (through things like type annotation). I think it’s worth leaning on. I also have something of a hunch that it could be a good way to do FFI in a safe manner, as Cython already supports multiple targets (Python 2 and 3) from the same codebase. Could PyPy be another target?
  • Runtime introspection of the runtime. We have great language introspection (probably much to the annoyance of PyPy developers who have to copy this) but currently runtime introspection is poor-to-nonexistent. What processes are running? How much memory is each using? Where? Are they holding on to resources? Are they blocking on some non-Python library? How much CPU have they been using? Then I want to be able to kill processes, send them signals, adjust priorities, etc.

And I guess it doesn’t have to be "PyPy", but a new backend for PyPy to target; it doesn’t have to be the only path PyPy pursues.

With a runtime like this PyPy could be an absolutely rocking platform for web development. Python could be as reliable as, oh… PHP? Sorry, I probably won’t win arguments that way ;) As good as Erlang! Maybe we could get the benefits of async without the pain of callbacks or Deferreds. And these are features people would use. Right now I’m perceiving a problem where there’s lots of people standing on the sidelines cheering you on but not actually using PyPy.

So: I wouldn’t tell anyone what to do, and if someone tries this out I’ll probably only be on the sidelines cheering you on… but I really think this could be awesome.

Update: there’s some interesting comments on Hacker News as well.

Programming
Python

A Python Web Application Package and Format (we should make one)

At PyCon there was an open space about deployment, and the idea of drop-in applications (Java-WAR-style).

I generally get pessimistic about 80% solutions, and dropping in a WAR file feels like an 80% solution to me. I’ve used the Hudson/Jenkins installer (which I think is specifically a project that got WARs on people’s minds), and in a lot of ways that installer is nice, but it’s also kind of wonky: it makes configuration unclear, it’s not always clear when it installs or configures itself through the web and when you have to do that at the system level, nor is it clear where it puts files and data, etc. So a great initial experience doesn’t feel like a great ongoing experience to me — and it doesn’t have to be that way. If those were necessary compromises, sure, but they aren’t. And because we don’t have WAR files, if we’re proposing to make something new, then we have every opportunity to make things better.

So the question then is what we’re trying to make. To me: we want applications that are easy to install, that are self-describing, self-configuring (or at least guide you through configuration), reliable with respect to their environment (not dependent on system tweaking), upgradable, and respectful of persistence (the data that outlives the application install). A lot of this can be done by the "container" (to use Java parlance; or "environment") — if you just have the app packaged in a nice way, the container (server environment, hosting service, etc) can handle all the system-specific things to make the application actually work.

At which point I am of course reminded of my Silver Lining project, which defines something very much like this. Silver Lining isn’t just an application format, and things aren’t fully extracted along these lines, but it’s pretty close and it addresses a lot of important issues in the lifecycle of an application. To be clear: Silver Lining is an application packaging format, a server configuration library, a cloud server management tool, a persistence management tool, and a tool to manage the application with respect to all these services over time. It is a bunch of things, maybe too many things, so it is not unreasonable to pick out a smaller subset to focus on. Maybe an easy place to start (and good for Silver Lining itself) would be to separate at least the application format (and tools to manage applications in that state, e.g., installing new libraries) from the tools that make use of such applications (deploy, etc).

Some opinions I have on this format, exemplified in Silver Lining:

  • It’s not zipped or a single file, unlike WARs. Uploading zip files is not a great API. Geez. I know there’s this desire to "just drop in a file"; but there’s no getting around the fact that "dropping a file" becomes a deployment protocol and it’s an incredibly impoverished protocol. The format is also not subtly git-based (ala Heroku) because git push is not a good deployment protocol.
  • But of course there isn’t really any deployment protocol inferred by a format anyway, so maybe I’m getting ahead of myself ;) I’m saying a tool that deploys should take as an argument a directory, not a single file. (If the tool then zips it up and uploads it, fine!)
  • Configuration "comes from the outside". That is, an application requests services, and the container tells the application where those services are. For Silver Lining I’ve used environmental variables (there’s a small sketch of what this looks like after this list). I think this one point is really important — the container tells the application. As a counter-example, an application that comes with a Puppet deployment recipe is essentially telling the server how to arrange itself to suit the application. This will never be reliable or simple!
  • The application indicates what "services" it wants; for instance, it may want to have access to a MySQL database. The container then provides this to the application. In practice this means installing the actual packages, but also creating a database and setting up permissions appropriately. The alternative is never having any dependencies, meaning you have to use SQLite databases or ad hoc structures, etc. But in fact installing databases really isn’t that hard these days.
  • All persistence has to use a service of some kind. If you want to be able to write to files, you need to use a file service. This means the container is fully aware of everything the application is leaving behind. All the various paths an application should use are given in different environmental variables (many of which don’t need to be invented anew, e.g., $TMPDIR).
  • It uses vendor libraries exclusively for Python libraries. That means the application bundles all the libraries it requires. Nothing ever gets installed at deploy-time. This is in contrast to using a requirements.txt list of packages at deployment time. If you want to use those tools for development that’s fine, just not for deployment.
  • There is also a way to indicate other libraries you might require; e.g., you might need lxml, or even something that isn’t quite a library, like git (if you are making a github clone). You can’t do those as vendor libraries (they include non-portable binaries). Currently in Silver Lining the application description can contain a list of Ubuntu package names to install. Of course that would have to be abstracted some.
  • You can ask for scripts or a request to be invoked for an application after an installation or deployment. It’s lame to try to test if is-this-app-installed on every request, which is the frequent alternative. Also, it gives the application the chance to signal that the installation failed.
  • It has a very simple (possibly/probably too simple) sense of configuration. You don’t have to use this if you make your app self-configuring (i.e., build in a web-accessible settings screen), but in practice it felt like some simple sense of configuration would be helpful.
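
To make the "configuration comes from the outside" point concrete, here’s a sketch of what an application’s startup might look like (the variable names are hypothetical, not Silver Lining’s actual set; the point is only that the container sets them and the application just reads them):

import os

db = connect_mysql(                       # connect_mysql is a stand-in for your driver
    host=os.environ['CONFIG_MYSQL_HOST'],
    dbname=os.environ['CONFIG_MYSQL_DBNAME'],
    user=os.environ['CONFIG_MYSQL_USER'],
    password=os.environ['CONFIG_MYSQL_PASSWORD'],
)
upload_dir = os.environ['CONFIG_FILES']   # writable, persistent storage from a file service
scratch_dir = os.environ['TMPDIR']        # per-application temporary space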

Things that could be improved:

  • There are some places where you might be encouraged to use routines from the silversupport package. There are very few! But maybe an alternative could be provided for these cases.
  • A little convention-over-configuration is probably suitable for the bundled libraries; silver includes tools to manage things, but it gets a little twisty. When creating a new project I find myself creating several .pth files, special customizing modules, etc. Managing vendor libraries is also not obvious.
  • Services are IMHO quite important and useful, but also need to be carefully specified.
  • There’s a bunch of runtime expectations that aren’t part of the format, but in practice would be part of how the application is written. For instance, I make sure each app has its own temporary directory, and that it is cleared on update. If you keep session files in that location, and you expect the environment to clean up old sessions — well, either all environments should do that, or none should.
  • The process model is not entirely clear. I tried to simply define one process model (unthreaded, multiple processes), but I’m not sure that’s suitable — most notably, multiple processes have a significant memory impact compared to threads. An application should at least be able to indicate what process models it accepts and prefers.
  • Static files are all convention over configuration — you put static files under static/ and then they are available. So static/style.css would be at /style.css. I think this is generally good, but putting all static files under one URL path (e.g., /media/) can be good for other reasons as well. Maybe there should be conventions for both.
  • Cron jobs are important. Though maybe they could just be yet another kind of service? Many extra features could be new services.
  • Logging is also important; Silver Lining attempts to handle that somewhat, but it could be specified much better.
  • Silver Lining also supports PHP, which seemed to cause a bit of stress. But just ignore that. It’s really easy to ignore.

There is a description of the configuration file for apps. The environmental variables are also notably part of the application’s expectations. The file layout is explained (together with a bunch of Silver Lining-specific concepts) in Development Patterns. Besides all that there is admittedly some other stuff that is only really specified in code; but in Silver Lining’s defense, specified in code is better than unspecified ;) App Engine provides another example of an application format, and would be worth using as a point of discussion or contrast (I did that myself when writing Silver Lining).

Discussing WSGI stuff with Ben Bangert at PyCon he noted that he didn’t really feel like the WSGI pieces needed that much more work, or at least that’s not where the interesting work was — the interesting work is in the tooling. An application format could provide a great basis for building this tooling. And I honestly think that the tooling has been held back more by divergent patterns of development than by the difficulty of writing the tools themselves; and a good, general application format could fix that.

Packaging
Programming
Python
Web

Javascript on the server AND the client is not a big deal

All the cool kids love Node.js. I’ve used it a little, and it’s fine; I was able to do what I wanted to do, and it wasn’t particularly painful. It’s fun to use something new, and it’s relatively straight-forward to get started so it’s an emotionally satisfying experience.

There are several reasons you might want to use Node.js, and I’ll ignore many of them, but I want to talk about one in particular:

Javascript on the client and the server!

Is this such a great feature? I think not…

You only need to know one language!

Sure. Yay ignorance! But really, this is fine but unlikely to be relevant to any current potential audience for Node.js. If you are shooting for a very-easy-to-learn client-server programming system, Node.js isn’t it. Maybe Couch or something similar has that potential? But I digress.

It’s not easy to have expertise at multiple languages. But it’s not that hard. It’s considerably harder to have expertise at multiple platforms. Node.js gives you one language across client and server, but not one platform. Node.js programming doesn’t feel like the browser environment. They do adopt many conventions when it’s reasonable, but even then it’s not always the case — in particular because many browser APIs are the awkward product of C++ programmers exposing things to Javascript, and you don’t want to reproduce those same APIs if you don’t have to (and Node.js doesn’t have to!) — an example is the event pattern in Node, which is similar to a browser but less obtuse.

You get to share libraries!

First: the same set of libraries is probably not applicable. If you can do it on the client then you probably don’t have to do it on the server, and vice versa.

But sometimes the same libraries are useful. Can you really share them? Browser libraries are often hard to use elsewhere because they rely on browser APIs. These APIs are frequently impossible to implement in Javascript.

Actually they are possible to implement in Javascript using Proxies (or maybe some other new and not-yet-standard Javascript features). But not in Node.js, which uses V8, and V8 is a pretty conservative implementation of the Javascript language. (Update: it is noted that you can implement proxies — in this case a C++ extension to Node)

Besides these unimplementable APIs, it is also just a different environment. There is the trivial: the window object in the browser has a Node.js equivalent, but it’s not named window. Performance is different — Node has long-running processes, while the browser may or may not. Node can have blocking calls, which are useful even if you can’t use them at runtime (e.g., require()); but you can’t really have any of these at any time on the browser. And then of course all the system calls, none of which you can use in the browser.

All these may simply be surmountable challenges, through modularity, mocking, abstractions, and so on… but ultimately I think the motivation is lacking: the domain of changing a live-rendered DOM isn’t the same as producing bytes to put onto a socket.

You can work fluidly across client and server!

If anything I think this is dangerous rather than useful. The client and the server are different places, with different expectations. Any vagueness about that boundary is wrong.

It’s wrong from a security perspective, as the security assumptions are nearly opposite on the two platforms. The client trusts itself, and the server trusts itself, and both should hold the other in suspicion (though the client can be more trusting because the browser doesn’t trust the client code).

But it’s also the wrong way to treat HTTP. HTTP is pretty simple until you try to make it simpler. Efforts to make it simpler mostly make it more complicated. HTTP lets you send serialized data back and forth to a server, with a bunch of metadata and other do-dads. And that’s all neat, but you should always be thinking about sending information. And never sharing information. It’s not a fluid boundary, and code that touches HTTP needs to be explicit about it and not pretend it is equivalent to any other non-network operation.

Certainly you don’t need two implementation languages to keep your mind clear. But it doesn’t hurt.

You can do validation the same way on the client and server!

One of the things people frequently bring up is that you can validate data on the client and server using the same code. And of course, what web developer hasn’t been a little frustrated that they have to implement validation twice?

Validation on the client is primarily a user experience concern, where you focus on bringing attention to problems with a form, and helping the user resolve those problems. You may be able to avoid errors entirely with an input method that avoids the problem (e.g., if you have a slider for a numeric input, you don’t have to worry about the user inputting a non-numeric value).

Once the form is submitted, if you’ve done thorough client-side validation you can also avoid friendly server-side validation. Of course all your client-side validation could be avoided through a malicious client, but you don’t need to give a friendly error message in that case, you can simply bail out with a simple 400 Bad Request error.

At that point there’s not much in common between these two kinds of validation — the client is all user experience, and the server is all data integrity.

You can do server-side Javascript as a fallback for the client!

Writing for clients without Javascript is becoming increasingly less relevant, and if we aren’t there yet, then we’ll certainly get there soon. It’s only a matter of time, the writing is on the wall. Depending on the project you might have to put in workarounds, but we should keep those concerns out of architecture decisions. Maintaining crazy hacks is not worth it. There’s so many terrible hacks that have turned into frameworks, and frameworks that have justified themselves because of the problems they solved that no longer matter… Node.js deserves better than to be one of those.

In Conclusion Or Whatever

I’m not saying Node.js is bad. There are other arguments for it, and you don’t need to make any argument for it if you just feel like using it. It’s fun to do something new. And I’m as optimistic about Javascript as anyone. But this one argument, I do not think it is very good.

Javascript
Programming
Web

Doctest.js & Callbacks

Many years ago I wrote a fairly straight-forward port of Python’s doctest to Javascript. I thought it was cool, but I didn’t really talk about it that much. Especially because I knew it had one fatal flaw: it was very unfriendly towards programming with callbacks, and Javascript uses a lot of callbacks.

On a recent flight I decided to look at it again, and realized fixing that one flaw wasn’t actually a big deal. So now doctest.js really works. And I think it works well: doctest.js.

I have yet to really use doctest.js on more than a couple real cases, and as I do (or you do?) I expect to tweak it more to make it flow well. But having tried a couple of examples I am particularly liking how it can be used with callbacks.

Testing with callbacks is generally a tricky thing. You want to make assertions, but they happen entirely separately from the test runner’s own loop, and your callbacks may not run at all if there’s a failure.

I came upon some tests recently that used Jasmine, a BDD-style test framework. I’m not a big fan of BDD but I’m fairly new to serious Javascript development so I’m trying to withhold judgement. The flow of the tests is a bit peculiar until you realize that it’s for async reasons. I’ll try to show something that roughly approximates a real test of an XMLHttpRequest API call:


it("should give us no results", function() {
  runs(function () {
    var callback = createSpy('callback for results');
    $.ajax({
      url: '/search',
      data: {q: "query unlikely to match anything"},
      dataType: "json",
      success: callback
    });
  });
  waits(someTimeout);
  runs(function () {
    expect(callback).toHaveBeenCalled();
    expect(callback.mostRecentCall.args[0].length).toEqual(0);
  });
});
 

So, the basic pattern is that it() creates a group of tests, and each call to runs() is a set of items to call sequentially. Then between these runs blocks you can signal the runner to wait for some result, either with a timeout (which is fragile) or by setting up specific conditions to wait for.

Another popular test runner is QUnit; it’s popular particularly because it’s what jQuery uses, and my own impression is that QUnit is just very simple and so least likely to piss you off.

QUnit has its own style for async:


test("should give us no results", function () {
  stop();
  expect(1);
  $.ajax({
    url: '/search',
    data: {q: "query unlikely to match anything"},
    dataType: "json",
    success: function (result) {
      ok(result.length == 0, 'No results');
      start();
    }
  });
});
 

stop() confused me for a bit until I realized it really refers to stopping the test runner; of course the function continues on regardless. What will happen is that the function will return, but nothing will have really been tested — the success callback will not have been run, and cannot run until all Javascript execution stops and control is given back to the browser. So the test runner will use setTimeout to let time pass before the test continues. In this case it will continue once start() is called. And expect() also makes it fail if it didn’t get at least one assertion during that interval — it would otherwise be easy to simply miss an assertion (though in this example it would be okay because if the success callback isn’t invoked then start() will never be called, and the runner will time out and signal that as a failure).

So… now for doctest.js. Note that doctest.js isn’t "plain" Javascript, it looks like what an interactive Javascript session might look like (I’ve used shell-style prompts instead of typical console prompts, because the consoles didn’t exist when first I wrote this, and because >>>/... kind of annoy me anyway).


$ success = Spy('success', {writes: true});
$ $.ajax({
>   url: '/search',
>   data: {q: "query unlikely to match anything"},
>   dataType: "json",
>   success: success.func
> });
$ success.wait();
success([])
 

With doctest.js you still get a fairly linear feel — it’s similar to how Jasmine works, except every $ prompt is potentially a place where the loop can be released so something async can happen. Each prompt is equivalent to run() (though unless you call wait, directly or indirectly, everything will run in sequence).

There’s also an implicit assertion for each stanza, which is that anything written must be matched ({writes: true} makes the spy/mock object write out any invocations). This makes it much harder to miss something in your tests.

Update: just for the record, doctest has changed some, and while that example still works, this would be the "right" way to do it now:


$.ajax({
  url: '/search',
  data: {q: "query unlikely to match anything"},
  dataType: "json",
  success: Spy("search.success", {wait: true, ignoreThis: true})
});
// => search.success([])
 

There is a new format that I now prefer with plain Javascript and "expected output" in comments. Spy("search.success", {wait: true, ignoreThis: true}) causes the test to wait on the Spy immediately (though the same pattern as before is also possible and sometimes preferable), and in all likelihood jQuery will set this to something we don’t care about, so ignoreThis: true keeps it from being printed. (Or maybe you are interested in it, in which case you’d leave that out)

Anyway, back to the original conclusion (update over)…

I’ve never actually found Python’s doctest to be a particularly good way to write docs, and I don’t expect any different from doctest.js, but I find it a very nice way to write and run tests… and while Python’s doctest is essentially abandoned and lacks many features to make it a more humane testing environment, maybe doctest.js can do better.

Javascript
Programming
Testing
Web

The Browser Desktop, developer tools

I find myself working in a Windows environment due to some temporary problems with my Linux installation. In terms of user experience Windows is not terrible. But more notable, things mostly just feel the same. My computing experience is not very dependent on the operating system… almost. Most of what I do is in a web browser — except programming itself. Probably a lot of you have the same experience: web browser, text editor, and terminal are pretty much all I need. I occasionally play with other tools, but none of them stick. Of course underlying the terminal and text editor UI is a whole host of important software — interpreters, version control tools, checkouts of all my projects, etc. So really there’s two things keeping us from a browser-only world: a few bits of UI, and a whole bunch of tools. Can we bridge this? I’m thinking (more speculatively than as an actual plan): could I stay on Windows without ever having to "use" Windows?

Browsers are clearly capable of implementing a capable UI for a terminal or editor; not a trivial endeavor, but not impossible. We need a way of handling the tools. The obvious answer in that case is a virtual machine. The virtual machine would certainly be using Linux, as there’s clear consensus that if you remove the UI and hardware considerations and just consider tools then Linux is by far the best choice — who uses Mac servers? And Windows is barely worth mentioning. I worked in a Linux VM for a while but found it really unsatisfying — but that was using the Linux UI through a VMWare interface.

So instead imagine: you start up a headless VM (remembering the tools are not about UI, so there’s no reason to have a graphical user interface on the VM), you point your browser at this VM, and you use a browser-based developer environment that mediates all the tools (the lightest kind of mediation is just simulating a terminal and using existing console-based interfaces). Look at your existing setup and just imagine a browser window in place of each not-browser-window app you are using.

I’m intrigued then by the idea of adding more to these interfaces, incrementally. Like HTML in the console, or applications lightly wrapping individual tools. IDEs never stick for me, maybe in part because I can’t commit, and also there’s collaboration issues with these tools (I’m never in a team where we would be able to agree on a single environment). But incremental decentralized improvements seem genuinely workable — improvement more in the style of the web, the browser providing the central metaphor.

I call this a Browser Desktop because it’s a fairly incremental change at this point and other terms (Web OS, Cloud OS) are always presented with unnecessary hyperbole. What "operating system" you are using in this imagined system is a somewhat uninteresting semantic question; the OS hasn’t disappeared, it’s just boring. "The Cloud" is fine, but too easy to overthink, and there are many technical reasons to use a hybrid of local and remote pieces. "Internet Operating System" is more a framing concept than a thing-that-can-be-built. Chromium OS is essentially the same idea… I’m not really sure how they categorize themselves.

What would be painful right now? Good Javascript terminals exist. Bespin is hard at work on an editor worthy of being used by programmers. The browser needs to be an extremely solid platform. Google Chrome has done a lot in this direction, and Firefox is moving the same direction with the Electrolysis project. It’s okay to punt for now on all the "consumer" issues like music and media handling… and anyway, other people are hard at work on those things. Web sockets will help with some kinds of services that ideally will connect directly to a port; it’s not the same as a raw socket, but I feel like there’s potential for small intermediaries (e.g., imagine a Javascript app that connects to a locally-hosted server-side app that proxies to ssh). Also AddOns can be used when necessary (e.g., ChatZilla <https://addons.mozilla.org/en-US/firefox/addon/16>).

I’d like much better management of all these "apps" aka pages aka windows or tabs — things like split screens and workspaces. Generally I think using such a system heavily will create all sorts of interesting UI tensions. Which might be annoying for the user, but if it’s a constructive annoyance…

On the whole… this seems doable. It’s navel gazing in a sense — programmers thinking about programming — but one good thing about navel gazing is that programmers have traditionally been quite good at navel gazing, and while some results aren’t generally applicable (e.g., VM management) the exercise will certainly create many generally applicable side products. It would encourage interesting itch-scratching. There’s lots of other "web OS" efforts out there, but I’ve never really understood them… they copy desktop metaphors, or have weird filesystem metaphors, or create an unnecessarily cohesive experience. The web is not cohesive, and I’m pretty okay with that; I don’t expect my experiences in this context to be any more cohesive than my tasks are cohesive. In fact it’s exactly the lack of cohesiveness that interests me in this exercise — the browser mostly gives me the level of cohesiveness I want, and I’m open to experimentation on the rest. And maybe the biggest interest for me is that I am entirely convinced that traditional GUI applications are a dead end; they rise and fall (mobile apps being a current rise) but I can’t seriously imagine long term (10 year) viability for any current or upcoming GUI system. I’m certain the browser is going to be along for the long haul. Doing this would let us Live The Future ;)

Mozilla
Programming
Web

WebTest HTTP testing

I’ve yet to see another testing system for local web testing that I like as much as WebTest… which is perhaps personal bias for something I wrote, but then I don’t have that same bias towards everything I’ve written. Many frameworks build in their own testing systems but I don’t like the abstractions — they touch lots of internal things, or skip important steps of the request, or mock out things that don’t need to be mocked out. WSGI can make this testing easy.

There’s also a hidden feature here: because WSGI is basically just describing HTTP, it can be a means of representing not just incoming HTTP requests, but also outgoing HTTP requests. If you are running local tests against your application using WebTest, with just a little tweaking you can turn those tests into HTTP tests (i.e., actually connect to a socket). But doing this is admittedly not obvious; hence this post!

Here’s what a basic WebTest test looks like:


from webtest import TestApp
import json

wsgi_app = acquire_wsgi_application_somehow()
app = TestApp(wsgi_app)

def test_login():
    resp = app.post('/login', dict(username='guest', password='guest'))
    resp.mustcontain('login successful')
    resp = resp.click('home')
    resp.mustcontain('<a href="/profile">guest</a>')
    # Or with a little framework integration:
    assert resp.templatevars.get('username') == 'guest'

# Or an API test:
def test_user_query():
    resp = app.get('/users.json')
    assert 'guest' in resp.json['userList']
    user_info = dict(username='guest2', password='guest2', name='Guest')
    resp = app.post('/users.json', json.dumps(user_info),
                    content_type='application/json')
    assert resp.json == user_info
 

The app object is a wrapper around the WSGI application, and each of those methods runs a request and gets the response. The response object is a WebOb response with several additional helpers for testing (things like .click() which finds a link in HTML and follows it, or .json which loads the body as JSON).

You don’t have to be using a WSGI-centric framework like Pylons to use WebTest, it works fine with anything with a WSGI frontend, which is just about everything. But the point of my post: you don’t have to use it with a WSGI application at all. Using WSGIProxy:


import os
import urlparse

if os.environ.get('TEST_REMOTE'):
    from wsgiproxy.exactproxy import proxy_exact_request
    wsgi_app = proxy_exact_request
    parsed = urlparse.urlsplit(os.environ['TEST_REMOTE'])
    app = TestApp(proxy_exact_request, extra_environ={
                  'wsgi.url_scheme': parsed.scheme,
                  'HTTP_HOST': parsed.netloc,
                  'SERVER_NAME': parsed.netloc})
else:
    wsgi_app = acquire_wsgi_application_somehow()
    app = TestApp(wsgi_app)
 

It’s a little crude to control this with an environmental variable ($TEST_REMOTE), but it’s an easy way to pass an option in when there’s no better way (and many test runners don’t make options easy). The extra_environ option puts in the host and scheme information into each request (the default host WebTest puts in is http://localhost). WSGIProxy lets you send a request to any host, kind of bypassing DNS, so SERVER_NAME is actually the server the request goes to, while HTTP_HOST is the value of the Host header.
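
Running the same tests against a live server is then just a matter of setting that variable when you invoke your test runner; something like this (the host and runner here are hypothetical):

$ TEST_REMOTE=http://staging.example.com nosetests tests/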

Going over HTTP there are a couple features that won’t work. For instance, you can pass information about your application back to the test code by putting values in environ['paste.testing_variables'] (which is how you’d make resp.templatevars work in the first example). It’s also possible to use extra_environ to pass information into your application, for example to get your application to mock out user authentication; this is fairly safe because in production no request can put those same special keys into the environment (using custom HTTP headers means you must carefully filter requests in production). But custom environ values won’t work over HTTP.

The thing that got me thinking about this is the work I’m doing on Silver Lining, where I am taking apps and rearranging the code and modifying the database configuration and setup to fit this deployment system. Having done that, it would be really nice to be able to run some functional tests, and I really want to run them over HTTP. If an application has tests using something like Selenium or Windmill that would also work great, but those tools can be a bit more challenging to work with and applications still need smaller tests anyway, so being able to reuse tests like these would be most useful.

Programming
Python
Web

More Sentinels

I’ve been casually perusing Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. One of the things I am noticing is that Lisp traditionally has a terrible lack of sentinels: special objects denoting some kind of meaning. Specifically in Common Lisp the empty list and false and nil are all the same thing. As a result there’s all these cases where you want to distinguish false from empty, especially when false represents a failure of some sort. In these AI examples, usually a failure to find something, while in many cases the empty list could mean "the thing is already found, no need to look". But there’s also lots of other examples when this causes problems.

More modern languages usually distinguish between these objects. Python for instance has [], False and None. They might all test as "falsish", but if you care to tell the difference it is easy to do; especially common is a test for x is None. Modern Lisps also stopped folding together all these notions (Scheme for example has #f for false as a completely distinct object, though null and the empty list are still the same). XML-RPC is an example of a language missing null… and though JSON is almost the same data model, it is a great deal superior for having null. In comparison no one seems to care much one way or the other about making a strong distinction between True/False and 1/0.

These are all examples of sentinels: special objects that represent some state. None doesn’t mean anything in particular, but it means lots of things specifically. Maybe it means "not found" in one place, or "give me anything I don’t care" in another. But sometimes you need more than one of these in the same place, or None isn’t entirely clear.

One thing I noticed while reading some Perl 6 examples is that they’ve added a number of new sentinels. One is *. So you could write something like item(*) to mean "give me any item, your choice". While the Perl tendency to use punctuation is legend, words work too.

I wonder if we need a few more sentinel conventions? If so what?

Of course any object can become a sentinel if you use it like that, None isn’t more unique than any other object. (None is conveniently available everywhere.)
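
For the record, the usual Python trick is exactly that: make a throwaway object and compare with is. A sketch (the function is made up; the pattern is not):

_missing = object()   # a private sentinel; unlike None, nothing else can ever be it

def first_match(items, predicate, default=_missing):
    # Distinguish "no item matched" from a legitimate None result.
    for item in items:
        if predicate(item):
            return item
    if default is _missing:
        raise LookupError("no item matched")
    return default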

Any seems useful, ala Perl’s *. But… there’s already an any available everywhere as well. It happens to be a function, but it’s also a unique named object… would it be entirely too weird to do obj is any? And there’s very few cases where the actual function any would be an appropriate input, making it a good sentinel.

Programming
Python

The Web Server Benchmarking We Need

Another WSGI web server benchmark was published. It’s a decent benchmark, despite some criticisms. But it benchmarks what everyone benchmarks: serving up a trivial app really really quickly. This is not very useful to me. Also, performance is not to me the most important differentiation of servers.

In Silver Lining we’re using mod_wsgi. Silver Lining isn’t tied to mod_wsgi (applications can’t really tell), and we may revisit that decision (mostly because of memory concerns), but it is a deliberate choice. mod_wsgi is one of the few multiprocess WSGI servers, and it manages its children (the same way Apache manages all its children). So if a child stops responding, it gets taken out of the pool and killed (brutal efficiency! Or at least brutal terminology). Child processes are also recycled, guarding against memory leaks or other peculiarities. Sometimes these kinds of things are dismissed for covering up bugs, but (a) production is a lousy time to learn about bugs, (b) it’s like a third tier of garbage collection, and (c) the bugs you are avoiding are often bugs you can’t fix anyway (for instance, if your mysql driver leaks memory, is that the application developer’s fault?)

I wish there was competition among servers not to see who can tweak their performance for entirely unrealistic situations, but to see who can implement the most fail-safe server. We’re missing good benchmarks. Unfortunately benchmarks are a pain in the butt to write and manage.

But I hope someone writes a benchmark like that. Here are some things I’d like to see benchmarked (a couple are sketched as plain WSGI apps after the list):

  • A "realistic" CPU-bound application. for i in xrange(10000000): pass is a reasonable start.
  • An application that generates big responses, e.g., "x"*100000.
  • An I/O bound application. E.g., one that reads a big file.
  • A simply slow application (time.sleep(1)).
  • Applications that wedge. while 1: pass perhaps? Or lock = threading.Lock(); lock.acquire(); lock.acquire(). Wedging in C and wedging in Python are different, so a bunch of different kinds of wedging.
  • Applications that segfault. ctypes is specially designed for this.
  • Applications that leak memory like a sieve, e.g., global_var.extend(['x']*10000).
  • Large uploads.
  • Slow uploads, like a client that takes 30 seconds to upload 1Mb.
  • Also slow downloads.
  • In each case it is interesting what happens when something bad happens to just a portion of requests. E.g., if 1% of requests wedge hard. A good container will serve the other 99% of requests properly. A bad container will have its worker pool exhausted and completely stop.
  • Mixing and matching these could be interesting. For instance Dave Beazley found some bad GIL results mixing I/O and CPU-bound code.
  • Add ideas in the comments and I’ll copy them into this list.
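
To give a flavor, here’s what two of these might look like as plain WSGI applications (Python 2 idiom, matching the snippets above; nothing here is tuned, which is the point):

import time

def slow_app(environ, start_response):
    # The "simply slow" application: each request holds a worker for a full second.
    time.sleep(1)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['slept\n']

def leaky_app(environ, start_response, _hoard=[]):
    # Leaks memory like a sieve: the mutable default argument outlives every request.
    _hoard.extend(['x'] * 10000)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['leaked %d chunks so far\n' % len(_hoard)]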

The hardest part of writing this is not the applications (they are simple). One annoyance is wiring up the applications, but handily Nicholas covers that well in his benchmark. You also have to make sure to clean up, as many servers will not exit cleanly from some of the tests. Another nuisance is that some of these require funny clients. These aren’t too hard to write, but you can’t just use ab. Then you have to report.

Anyway: I would love it if someone did this, and packaged it as repeatable/runnable code/scripts. I’ll help some, but I can’t lead. I’d both really like to see the results, and in my ideal world people writing servers would start using these benchmarks to make their servers more robust.

Programming
Python
Silver Lining
Web
