January 2009

Woonerf and Python

At TOPP there’s a lot of traffic discussion, since a substantial portion of the organization is dedicated to Livable Streets initiatives. One of the traffic ideas people have gotten excited about is Woonerf. This is a Dutch traffic planning idea. In areas where there’s the intersection of lots of kinds of traffic (car, pedestrian, bike, destinations and through traffic) you have to deal with the contention for the streets. Traditionally this is approached as a complicated system of rules and right-of-ways. There’s spaces for each mode of transportation, lights to say which is allowed to go when (with lots of red and green arrows), crosswalk islands, concrete barriers, and so on.

A problem with this is that a person can only pay attention to so many things at a time. As the number of traffic controls increases, the controls themselves dominate your attention. It’s based on the ideal that so long as everyone pays attention to the controls, they don’t have to pay attention to each other. Of course, if there’s a circumstance the controls don’t take into account then people will deviate (for instance, crossing somewhere other than the crosswalk, or getting in the wrong lane for a turn, or the simple existence of a bike, which is usually unaccounted for). If all attention is on the controls, and everyone trusts that the controls are being obeyed, these deviations can lead to accidents. This can create a vicious cycle where the controls become increasingly complex in an attempt to take into account every possibility, with the addition of things like Jersey barriers to exclude deviant traffic. At least in the U.S., and especially in the suburbs or at complex intersections, this feeling of an overcontrolled and restricted traffic plan is common.

Copenhagen retail street

So: Woonerf. This is an extreme reaction to traffic controls. An intersection designed with the principles of Woonerf eschews all controls. This includes even things like curbs and signage. It removes most cues about behavior, and specifically of the concept of "right of way". Every person entering the intersection must view it as a negotiation. The use of eye contact, body language, and hand signals determines who takes the right of way. In this way all kinds of traffic are peers, regardless of destination or mode of transport. Also each person must focus on where they are right now, and not where they will be a minute from now; they must stay engaged.


Code as Jersey Barrier

So, I was reading a critique of Python where someone was saying how they missed public/private/protected distinctions on attributes and methods. And it occurred to me: Python’s object model is like Woonerf.

Python does not enforce rules about what you must and must not do. There are cues, like leading underscores, the __magic_method__ naming pattern, or at the module level there’s __all__. But there are no curbs, you won’t even feel the slightest bump when you access a "private" attribute on an instance.
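
To make this concrete, here’s a minimal sketch (the Connection class and its attributes are made up purely for illustration); every cue here is a convention, not a control:

__all__ = ['Connection']   # only affects "from module import *", nothing more

class Connection(object):
    """A hypothetical class, just to illustrate Python's naming cues."""

    def __init__(self, host):
        self.host = host        # public: part of the intended API
        self._socket = None     # leading underscore: "please treat as internal"

    def _connect(self):
        # An internal helper; the underscore is a cue, not a barrier.
        self._socket = ('pretend socket for', self.host)

    def send(self, data):
        if self._socket is None:
            self._connect()
        return len(data)

conn = Connection('example.com')
conn.send('hello')
print(conn._socket)    # no curb, no bump: Python lets you right in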

This can lead to conflicts. For example, during discussions on installation, some people will argue for creating requirements like "SomeLibrary>=1.0,<2.0", with the expectation that, even though version 2.0 doesn’t exist yet, anything you install in the 1.x line will maintain compatibility with your application. This is an unrealistic expectation. Do you and the library maintainer have the same idea about what compatibility means? What if you depend on something the maintainer considers a bug?

Practically, you can’t be sure that future versions of a library will work. You also can’t be sure they won’t work; there’s nothing that requires the maintainer of the library to break your application with version 2.0. This is where it becomes a negotiation. If you decide to cross without a crosswalk (use a non-public API) then okay. You just have to keep an eye out. And library authors, whether they like it or not, need to consider the API-as-it-is-used as much as the API-they-have-defined. In open source in particular, there are a lot of ways to achieve this communication. We don’t use some third party (e.g., a QA team or language features) to enforce rules on both sides (there are no traffic controls), instead the communication is more flat, and speaks as much to intentions as mechanisms. When someone asks "how do I do X?" a common response is: "what are you trying to accomplish?" Often an answer to the second question makes the first question irrelevant.

Woonerf is great for small towns, for creating a humane space. Is it right for big cities and streets, for busy people who want to get places fast, for trucking and industry? I’m not sure, but probably not. This is where a multi-paradigm approach is necessary. Over time libraries have to harden and become more static; innovation should happen on top of them, not in the library. Sometimes we create third-party controls through interfaces (of one kind or another). I suppose in this case there is a kind of negotiation about how we negotiate — there’s no one process for how to build negotiation-free foundations in Python. But it’s best not to harden things you aren’t sure are right, and I’m pretty sure there’s no "right" at this very-human level of abstraction.

Programming


Cultural Imperialism, Technology, and OLPC

A couple posts have got me thinking about cultural imperialism lately: a post by Guido van Rossum about "missionaries" and OLPC, and (not about OLPC at all) a post by Chris Hardie and a speech by Wade Davis.

Some of the questions raised: are we destroying cultures? If so, what can we do about it? Must we be hands off? I will add these questions: is it patronizing to make these choices for other people, no matter how enlightened we try to be? How much change is inevitable? Can we help make the change positive instead of resisting change?

More specifically: what is the effect of OLPC on cultures where it is introduced? Especially small cultures, cultures that have been relatively isolated, cultures that are vulnerable. The internet Quechua community is pretty slim, for example. Introducing the internet into a community will lead the children to favor Spanish more strongly, and identify with that more dominant culture over their family and community culture.

Criticisms like Guido’s are common:

I’m not surprised that the pope is pleased by the OLPC program. The mentality from which it springs is the same mentality which in past centuries created the missionary programs. The idea is that we, the west, know what’s good for the rest of the world, and that we therefore must push our ideas onto the "third world" by means of the most advanced technology available. In past centuries, that was arguably the printing press, so we sent missionaries armed with stacks of bibles. These days, we have computers, so we send modern missionaries (of our western lifestyle, including consumerism, global warming, and credit default swaps) armed with computers.

This kind of criticism is easy, because it doesn’t have any counterproposal. It’s not saying much more than "you all suck" to the people involved.

Cultural imperialism is a genuine phenomenon. In an attempt to subjugate or assimilate, the dominant culture may explicitly and cynically enforce its cultural norms: through its religion, by requiring all schools to operate in the dominant language, even by going as far as suggesting how we arrange ourselves during sex.

But it’s not clear to me that what’s happening now is cultural imperialism. It’s more market-oriented homogenization. Food manufacturers don’t use high-fructose corn syrup because they want to make us fat — they just give us what we want, enabling our latent tendency to become obese. Similarly, I think the way culture is spread currently encourages homogeneity without explicitly attempting to destroy culture.

This is where I think a protectionist stance — the idea we should just be hands-off — is patronizing. People aren’t abandoning their cultures because they are stupid and being manipulated. People make decisions, choosing what they think is best for themselves and their families. These decisions lead them to leave rural areas, learn the dominant language, try to conform through education, and even just enjoy a dominant culture which is often far more entertaining than a smaller and more traditional culture.

The irony is that once they’ve done this they’ve traded their position for a place in the bottom rung of the dominant society. And it’s true that in many cases they’ve made these decisions because they’ve been forced out of their traditional life by political and legal systems they don’t understand. But to blame it all on oppression is to be blind to the many concrete benefits of our modern world. Corrugated metal roofs are simply superior to thatched roofs, and we can get all romantic about traditional building processes and material independence, but we do so from homes with roofs that don’t leak. Leaking roofs are just objectively unpleasant. And frankly people like TV, you don’t have to tell people to like TV, it just happens.

So I believe that assimilation pressure is natural and inevitable in our times.

What then of technology, of the internet and laptops?

I believe OLPC takes an important stance when it selects open source and open licensing for its content. It is valuing freedom, but more importantly encouraging self-determination, trying to build up a user base that can act as peers in this project, not as simply receivers of first-world largess. But it will be culturally disruptive. And I’m okay with that. In a patriarchal culture, giving girls access to this technology will be destructive to that power structure. Yay! I believe in the moral rightness of that one girl making her own choices, finding her own truths, more than I believe in the validity of the culture she was born into. If you believe people should be able to make their own choices (so long as they are aware of the real consequence of their choices), then you must allow for them to choose to abandon their own cultures for something they find more appealing. They might know better than you if that’s a good choice. I think we all hope that instead they transform their own cultures, but that’s not our choice to make.

What I find unpleasant is if they leave a true identity to find themselves in a place of cultural subservience. If they feel they can’t preserve the part of their culture they most value. Perhaps because of discrimination they feel they must hide their past, or they build up a sense of self-loathing. Perhaps they become isolated, unable to find peers that understand where they come from. And perhaps there is no higher culture at all that they can use to exalt their understanding of the world — do they have a literature? Do they have non-traditional music forms of their own? Do they have a forum where people who share their perspective can have serious discussions? Cultures aren’t destroyed so much as they are starved out of existence.


I think assimilation is inevitable, and can be positive. If we were all able to speak to each other, with some shared second or third language, I think the world would be a better place. I’m not a Christian, but I’m not afraid of anyone knowing The Bible. There’s no piece of culture that I would want to deny from anyone. Each new song, each new book, each new idea… I believe they will all make you a better person, if only in a small way.

And on the internet our culture is cumulative. There’s only so many hours of programming on TV or the radio, only so many pages in a newspaper. On the internet the presence of one kind of culture does not exclude any other. There’s room for a Quechua community as much as any other. But the online Quechua community won’t have exclusive rights to its members like a traditional culture claims — children will live between cultures.

Cumulative culture is not a promise that anyone will care. Languages can still die, cultures can still die, identities become forgotten. If these smaller cultures are going to be preserved, they must adapt to the partially-assimilated status of their members. There must be new art and new ideas and new identities. This is why I believe in the laptop project, because it can enable the creation and sharing of these new ideas. I think it will give smaller cultures a chance to survive — there’s no promises, literature doesn’t write itself, but maybe there is at least a chance.

This is also why I am more skeptical of mobile phones, audio devices, and any device that doesn’t actively enable content creation. Mobile phones are not how culture is made. They let people chat, consume information, communicate in a 12-key pidgin. But the mobile phone user is not a peer in a world wide web of information. The mobile phone user lives on a proprietary network, with a proprietary device, and while it perhaps breaks down some hierarchies through disintermediation, it does so in a transient way. The uptake is certainly faster, but the potential seems so much lower.

I don’t know if OLPC will be successful. That’s as unclear now as ever. But it’s trying to do the right thing, and I think it’s a better chance than most for maintaining or improving the richness of the world’s culture.

Non-technical
OLPC
Politics


Modern Web Design, I Renounce Thee!

I’m not a designer, but I spend as much time looking at web pages as the next guy. So I took interest when I came upon this post on font size by Wilson Miner, which in turn is inspired by the 100e2r (100% easy to read) standard by Oliver Reichenstein.

The basic idea is simple: we should have fonts at the "default" size, about 16px, no smaller. This is about the size of text in print, read at a reasonable distance (typically closer up than a screen):

https://ianbicking.org/wp-content/uploads/images/typesize_comparison2.jpg

Also it calls out low-contrast color schemes, which I think are mostly passé, and I will not insult you, my reader, by suggesting you don’t entirely agree. Because if you don’t agree, well, I’m afraid I’d have to use some strong words.

I think small fonts, low contrast, and huge amounts of whitespace are a side effect of the audience designers create for.

This makes me think of Modern Architecture:

https://ianbicking.org/wp-content/uploads/images/300px-seagram.jpg

This is a form of architecture popular for skyscrapers and other dramatic structures, with their soaring heights and other such dramatic adjectives. These are buildings designed for someone looking at the building from five hundred feet away. They are not designed for occupants. But that’s okay, because the design isn’t sold to occupants, it is sold to people who look at the sketches and want to feel very dramatic.

Similarly, I think the design pattern of small fonts is something meant to appeal to shallow observation. By deemphasizing the text itself, the design is accentuated. Low-contrast text is even more obviously the domination of design over content. And it may very well look more professional and visually pleasing. But web design isn’t for making sites visually pleasing, it is for making the experience of the content more pleasing. Sites exist for their content, not their design.

In 100e2r he also says let your text breathe. You need whitespace. If you view my site directly, you’ll notice I don’t have big white margins around my text. When you come to my site, it’s to see my words, and that’s what I’m going to give you! When I want to let my text breathe with lots of whitespace this is what I do:

https://ianbicking.org/wp-content/uploads/images/500px-my-white-desktop.jpg

Is a huge block of text hard to read? It is. And yeah, I’ve written articles like that. But the solution?

WRITE BETTER

Similarly, it’s hard to read text if you don’t use paragraphs, but the solution isn’t to increase your line height until every line is like a paragraph of its own.

The solution to the drudgery of large swathes of text is:

  1. Make your blocks of text smaller.
  2. Use something other than paragraphs of text.

Throw in a list. Do some indentation. Toss in even a stupid picture. Personally I try to throw in code examples, because that’s how we roll on this blog.

That’s good writing, that’s content that is easy to read. It’s not easy to write, and I’m sure I miss the mark more often than not. But you can’t design your way to good content. If you want to write like this, if you want to let the flow of your text reflect the flow of your ideas, you need room. Huge margins don’t give you room. They are a crutch for poor writing, and not even a good crutch.

So in conclusion: modern design be damned!

HTML
Non-technical
Web


Atompub as an alternative to WebDAV

I’ve been thinking about an import/export API for PickyWiki; I want something that’s sensible, and works well enough that it can be the basis for things like creating restorable snapshots, integration with version control systems, and being good at self-hosting documentation.

So far I’ve made a simple import/export system based on Atom. You can export the entire site as an Atom feed, and you can import Atom feeds. But whole-site import/export isn’t enough for the tools I’d like to write on top of the API.

WebDAV would seem like a logical choice, as it lets you get and put resources. But it’s not a great choice for a few reasons:

  • It’s really hard to implement on the server.
  • Even clients are hard to implement.
  • It uses GET to get resources. This is probably its most fatal flaw. There is no CMS that I know of (except maybe one) where the thing you view in the browser is the thing that you’d actually edit. To work around this, CMSes use User-Agent sniffing or an alternate URL space.
  • WebDAV is worried about "collections" (i.e., directories). The web basically doesn’t know what "collections" are, it only knows paths, and paths are strings.
  • (In summary) WebDAV uses HTTP, but it is not of the web.

I don’t want to invent something new though. So I started thinking of Atom some more, and Atompub.

The first thought is how to fix the GET problem in WebDAV. A web page isn’t an editable representation, but it’s pretty reasonable to put an editable representation into an Atom entry. Clients won’t necessarily understand extensions and properties you might add to those entries, but I don’t see any way around that. An entry might look like:


<entry>
  <content type="html">QUOTED HTML</content>
  ... other normal metadata (title etc) ...
  <privateprop:myproperty xmlns:privateprop="URL" name="foo" value="bar" />
</entry>
 

While there is special support for HTML, XHTML, and plain text in Atom, you can put any type of content in <content>, encoded in base64.
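
As a sketch of what building such an entry might look like from Python (the make_entry helper is hypothetical, but the <content> handling follows the Atom rules just described):

import base64
from xml.etree import ElementTree as ET

ATOM_NS = 'http://www.w3.org/2005/Atom'

def make_entry(title, body_bytes, content_type):
    # Build a minimal Atom entry; text and html go in as escaped text,
    # anything else goes into <content> base64-encoded.
    entry = ET.Element('{%s}entry' % ATOM_NS)
    ET.SubElement(entry, '{%s}title' % ATOM_NS).text = title
    content = ET.SubElement(entry, '{%s}content' % ATOM_NS)
    content.set('type', content_type)
    if content_type in ('text', 'html'):
        content.text = body_bytes.decode('utf-8')
    else:
        content.text = base64.b64encode(body_bytes).decode('ascii')
    return ET.tostring(entry, encoding='unicode')

print(make_entry('A page', b'<p>QUOTED HTML</p>', 'html'))
print(make_entry('An image', b'\x89PNG...', 'image/png'))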

To find the editable representation, the browser page can point to it. I imagine something like this:


<link rel="alternate" type="application/atom+xml; type=entry"
 href="this-url?format=atom">
 

The actual URL (in this example this-url?format=atom) can be pretty much anything. My one worry is that this could be confused with feed detection, which looks like:


<link rel="alternate" type="application/atom+xml"
 href="/atom.xml">
 

The only difference is "; type=entry", which I’m betting a lot of clients don’t pay attention to.
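
A client that did want to tell the two apart would have to look at the type attribute’s parameters rather than compare whole strings; a small sketch:

def is_entry_link(type_attr):
    # Split "application/atom+xml; type=entry" into the bare media type and
    # its parameters, then look for type=entry among the parameters.
    parts = [p.strip() for p in type_attr.split(';')]
    if parts[0] != 'application/atom+xml':
        return False
    return any(p.replace(' ', '') == 'type=entry' for p in parts[1:])

print(is_entry_link('application/atom+xml; type=entry'))  # True: an entry document
print(is_entry_link('application/atom+xml'))              # False: a plain feed link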

The Atom entries then can have an element:


<link rel="edit" href="this-url" />
 

This is a location where you can PUT a new entry to update the resource. You could allow the client to PUT directly over the old page, or use this-url?format=atom or whatever is convenient on the server-side. Additionally, DELETE to the same URL would delete.
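
In Python that round trip might look something like this (standard library only; the URL is a placeholder for whatever rel="edit" points at):

from urllib import request

EDIT_URL = 'http://example.com/page/path?format=atom'  # taken from rel="edit"

def put_entry(edit_url, entry_bytes):
    # Replace the resource by PUTting a new Atom entry to its edit link.
    req = request.Request(
        edit_url, data=entry_bytes, method='PUT',
        headers={'Content-Type': 'application/atom+xml; type=entry'})
    return request.urlopen(req).status

def delete_entry(edit_url):
    # Delete the resource behind the same edit link.
    req = request.Request(edit_url, method='DELETE')
    return request.urlopen(req).status

# put_entry(EDIT_URL, new_entry_xml)
# delete_entry(EDIT_URL)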

This handles updates and deletes, and single-page reads. The next issue is creating pages.

Atompub makes creation fairly simple. First you have to get the Atompub service document. This is a document with the type application/atomsvc+xml and it gives the collection URL. It’s suggested you make this document discoverable like:


<link rel="service" type="application/atomsvc+xml"
 href="/atomsvc.xml">
 

This document then points to the "collection" URL, which for our purposes is where you create documents. The service document would look like:


<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>SITE TITLE</atom:title>
    <collection href="/atomapi">
      <atom:title>SITE TITLE</atom:title>
      <accept>*/*</accept>
      <accept>application/atom+xml;type=entry</accept>
    </collection>
  </workspace>
</service>
 

Basically this indicates that you can POST any media to /atomapi (both Atom entries, and things like images).

To create a page, a client then does a POST like:


POST /atomapi
Content-Type: application/atom+xml; type=entry
Slug: /page/path

<entry xmlns="...">...</entry>
 

There’s an awkwardness here: you can suggest (via the Slug header) what the URL for the new page should be. The client can find the actual URL of the new page from the Location header in the response. But the client can’t demand that the slug be respected (getting an error back if it is not), and there are lots of use cases where the client doesn’t just want to suggest a path (for instance, other documents that are being created might rely on that path for links).
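
The client side of that exchange might be sketched like this (the collection URL and entry are placeholders):

from urllib import request

def create_page(collection_url, entry_bytes, suggested_path):
    # POST the entry to the Atompub collection; Slug is only a suggestion,
    # so the real URL has to be read back out of the Location header.
    req = request.Request(
        collection_url, data=entry_bytes, method='POST',
        headers={'Content-Type': 'application/atom+xml; type=entry',
                 'Slug': suggested_path})
    resp = request.urlopen(req)
    return resp.headers['Location']

# new_url = create_page('http://example.com/atomapi', entry_xml, '/page/path')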

Also, "slug" implies… well, a slug. That is, some path segment probably derived from the title. There’s nothing stopping the client from putting a complete path in there, but it’s very likely to be misinterpreted (e.g. translating /page/path to /2009/01/pagepath).

But I digress. Anyway, you can post every resource as an entry, base64-encoding the resource body, but Atompub also allows POSTing media directly. When you do that, the server puts the media somewhere and creates a simple Atom entry for the media. If you wanted to add properties to that entry, you’d edit the entry after creating it.

The last missing piece is how to get a list of all the pages on a site. Atompub does have an answer for this: just GET /atomapi will give you an Atom feed, and for our purposes we can demand that the feed is complete (using paging so that any one page of the feed doesn’t get too big). But this doesn’t seem like a good solution to me. GData specifies a useful set of queries for feeds, but I’m not sure that it’s very useful here; the kind of queries a client needs to do for this use case aren’t things GData was designed for.
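
Walking such a complete-but-paged feed is mechanical enough; a sketch, assuming each page of the feed advertises the next one with a rel="next" link:

from urllib import request
from xml.etree import ElementTree as ET

ATOM = '{http://www.w3.org/2005/Atom}'

def all_entries(feed_url):
    # Follow rel="next" links until the feed is exhausted, yielding entries.
    while feed_url:
        root = ET.parse(request.urlopen(feed_url)).getroot()
        for entry in root.findall(ATOM + 'entry'):
            yield entry
        feed_url = None
        for link in root.findall(ATOM + 'link'):
            if link.get('rel') == 'next':
                feed_url = link.get('href')

# for entry in all_entries('http://example.com/atomapi'):
#     print(entry.findtext(ATOM + 'title'))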

The queries that seem most important to me are queries by page path (which allows some sense of "collections" without being formal) and by content type. Also, to allow incremental updates on the client side, these queries should be filterable by last-modified time (i.e., all pages modified since I last looked). Reporting queries (date of creation, update, author, last editor, and custom properties) of course could be useful, but don’t seem as directly applicable.

Also, often the client won’t want the complete Atom entry for the pages, but only a list of pages (maybe with minimal metadata). I’m unsure about the validity of abbreviated Atom entries, but it seems like one solution. Any Atom entry can have something like:


<link rel="self" type="application/atom+xml; type=entry"
 href="url?format=atom" />
 

This indicates where the entry exists, though it doesn’t suggest very forcefully that the actual entry is abbreviated. Anyway, I could then imagine a feed like:


<feed>
  <entry>
    <content type="some/content-type" />
    <link rel="self" href="..." />
    <updated>YYYY-MM-DDTHH:MM:SSZ</updated>
  </entry>
  ...
</feed>
 

This isn’t entirely valid, however — you can’t just have an empty <content> tag. You can use a src attribute to use indirection for the content, and then add Yet Another URL for each page that points to its raw content. But that’s just jumping through hoops. This also seems like an opportunity to suggest that the entry is incomplete.

To actually construct these feeds, you need some way of getting the feed. I suggest that another entry be added to the Atompub service document, something like:


<cmsapi:feed href="URI-TEMPLATE" />
 

That would be a URI Template that accepted several known variables (though frustratingly, URI Templates aren’t properly standardized yet). Things like:

  • content-type: the content type of the resource (allowing wildcards like image/*)
  • container: a path to a container, e.g., /2007 would match all pages in /2007/...
  • path-regex: some regular expression to match the paths
  • last-modified: return all pages modified at the given date or later

All parameters would be ANDed together.
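
For a simple query-parameter form of the template, the client side could be as small as the following sketch; the cmsapi:feed element, the variable names, and the expansion rules are all assumptions, since the element itself is only a proposal:

from urllib.parse import urlencode

def build_feed_query(feed_base, **params):
    # A stand-in for real URI Template expansion: drop unset variables and
    # AND the rest together as query parameters.
    query = [(name.replace('_', '-'), value)
             for name, value in params.items() if value is not None]
    return feed_base + '?' + urlencode(query)

print(build_feed_query('http://example.com/atomapi/feed',
                       content_type='image/*',
                       container='/2007',
                       last_modified='2009-01-01T00:00:00Z'))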

So, open issues:

  • How to strongly suggest a path when creating a resource (better than Slug)
  • How to rename (move) or copy a page (it’s easy enough to punt on copy, but I’d rather moving be a little more formal than just recreating the resource in a new location and deleting the original)
  • How to represent abbreviated Atom entries

With these resolved I think it’d be possible to create a much simpler API than WebDAV, and one that can be applied to existing applications much more easily. (If you think there’s more missing, please comment.)

HTML
Programming
Web
