PyCon 2004: Making Python Faster and Better

March 31, 2004

Kendall Grant Clark

I spent the second half of last week attending what has become one of my very favorite conferences driven by a tech community, namely, PyCon. It was held in the lovely Foggy Bottom neighborhood of Washington, DC, in a building that is part of the campus of George Washington University.

Of course there are other tech-community driven conferences, including most notably ApacheCon. What's interesting about tech-cons is that they struggle to balance a charming modesty and unpretentiousness with efficiency and a kind of corporate-centric professionalism. I like the tech-cons best that settle this tension by embracing the small, the ordinary, the do-it-ourselves ethic. After attending both this year and last, I think PyCon's organizers are consistently getting this mix right, and they should be commended. In what remains of this report, I describe the high points of the keynotes and talks I attended.

PyCon's keynote speakers were a diverse assortment: Mitch Kapor, Guido van Rossum, and Bruce Eckel (whom I missed).

Mitch Kapor: The Virtues of Open Source

Mitch Kapor -- whom you may remember as the designer of Lotus 1-2-3 -- extolled the virtues of the open source methodology of building software. Kapor's latest venture, the Open Source Applications Foundation, relies on Python since its flagship project, Chandler, is a personal information manager (and more) built with Python.

Kapor suggested that open source will just win in the long run, but he pointed out some serious challenges: patents and other intellectual property issues, as well as project sustainability and security. Kapor also made some suggestions specific to Python, particularly as a result of OSAF's experience thus far working on Chandler. The most important one is that Python really needs to be faster -- something that, for me at least, turned into a kind of leitmotif of PyCon itself.

Guido van Rossum: The BDFL's Privileges

One of the privileges of being Python's benevolent-dictator-for-life, that is, of being Guido van Rossum, is that you get to give a free-form keynote at PyCon. If the past two PyCons are any guide, Guido is using this time to present a State of Python address, during which he summarizes where Python's been and where it's going next. I always enjoy these talks very much indeed.

Another BDFL privilege is the power of pronunciamento, that is, the power to pronounce that some language debate or design conundrum will be resolved thus-and-so. In fact, the culture of python-dev (the main developers' mailing list) actually contains a precise discursive form: Guido will often send a mail in which he officially pronounces an issue decided or debate ended. Of course, these debates are free to continue, and Guido has been known to change his mind, but a pronouncement is a pretty strong indication of where the community and the language are going.

The power to pronounce is more than just a cultural quirk, though. The development of Python has evolved -- or so it seems to me, and I've been watching since before the 1.5 release -- more or less in the same way as the Linux kernel development process has evolved. Both Guido and Linus Torvalds made their bones and won their moral authority within their communities by writing code, often of very high quality, and by making design decisions about complex systems. But as those systems grew more and more complex, and increased dramatically in scope, both Guido and Linus have tended to write less code and to focus more on the design of the system. Though the comparison breaks down given the Linux kernel's much larger scope, it's more or less accurate, I think.

A concerted, sustained focus on design issues is only possible because Guido and his language have attracted some very, very good programmers. Guido is able to judge among competing patches and proposals the semantics or implementation strategy or syntax that he favors; and he often has a very wide range of existing alternatives to choose from. While Guido still writes Python code, the actual development of the language is team-shaped, with Guido calling most of the shots. That is, as one learns to reckon such things, a very good sign for Python's future.

One of the issues Guido talked about during his keynote is the development of Python 2.4, which will feature real, hard-won performance gains, many of which are due to the fine work of Raymond Hettinger. Armin Rigo -- of Psyco and PyPy fame -- is also working to reduce the cost of function calls by reducing the size of stack frames. The expected relative performance of 2.4 versus 2.3 was the first occurrence of the performance motif at PyCon; we'll see it return in the discussion of IronPython, Starkiller, and PyPy.

Looking to the future, and the mythical Python 3000, Guido mentioned making the language lazier, which is to say more systematically based on iterators: in just about every place where it's possible to return an iterator, instead of a concrete in-memory sequence, Guido wants to return an iterator, eventually. That will break a lot of code, so one shouldn't expect it to happen any time soon; but it will also simplify the language by eliminating some now-redundant builtins and methods (xrange, dict.iterkeys) and modules (itertools).
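The eager-versus-lazy distinction Guido has in mind is easy to sketch; the function names below are my own, illustrative inventions, not anything proposed for the language:

```python
# A hedged sketch of the "lazy" direction: instead of building a whole
# list in memory, return an iterator that yields values on demand.
def squares_eager(n):
    result = []
    for i in range(n):
        result.append(i ** 2)
    return result          # concrete in-memory sequence

def squares_lazy(n):
    for i in range(n):
        yield i ** 2       # a generator: values produced one at a time

print(squares_eager(5))        # the whole list exists at once
print(list(squares_lazy(5)))   # same values, computed only when asked for
```

The lazy version never holds more than one value at a time, which is exactly why it scales to very large (even unbounded) sequences.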

As for new features in 2.4, look for more standard library improvements, including a new collections module with an implementation of deque, a fast, double-ended queue. Also, get ready for generator expressions (think: sum(x**2 for x in range(10))) to join list comprehensions as an at-first-I-hated-it-but-now-I-can't-live-without-it feature of the language. The other huge change will be some kind of decorator syntax, which will be used to annotate functions and methods. The outstanding issue seems to be where to put the list of expressions. Recall that one problem with being responsible for a language that many people feel has an elegant syntax is that it gets harder and harder to add new stuff while retaining the sense of elegance. There are five competing proposals:

  1. The front runner: def meth [classmethod] (cls, arg1, arg2):
  2. The C# option: [classmethod] def meth(cls, arg1, arg2):
  3. The Barry Warsaw option: def [classmethod] meth(cls, arg1, arg2):
  4. PEP 318's option: def meth(cls, arg1, arg2) [classmethod]:
  5. The ugly duckling: def meth(cls, arg1, arg2): [classmethod]
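For context, the 2.4 features just mentioned are easy to sketch; note that the decorator example uses the 2.3-era wrap-after-definition idiom, which is precisely what the new syntax is meant to abbreviate:

```python
from collections import deque

# deque: fast appends and pops at both ends
d = deque([1, 2, 3])
d.appendleft(0)
d.append(4)
print(d.popleft(), d.pop())   # removes from the left and right ends

# generator expression: a lazy cousin of the list comprehension
total = sum(x ** 2 for x in range(10))
print(total)                  # 285

# What decorator syntax abbreviates: in Python 2.3 you wrap the
# function by hand after defining it.
class C:
    def meth(cls):
        return cls.__name__
    meth = classmethod(meth)  # the step decorators replace

print(C.meth())
```

Whichever surface syntax wins, the semantics are just this rebinding step moved next to the definition.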

Based on Guido's preference and arguments, and the response of the people at PyCon, the front runner is likely to appear in Python 2.4. During the question and answer period, Guido talked about a variety of issues. In response to both community discussions in the past year and to Mitch Kapor's mention of security, Guido addressed the issue of adding a capabilities model to Python. His view seems to be that capabilities don't solve all the interesting problems, which I suspect the capabilities people might dispute, and that Python with a capabilities model would be a very different language.

Guido also mentioned the XML processing capabilities in Python's standard library, which he seemed a bit dissatisfied with at the API level. At both this year's and last year's PyCon I heard programmers grumbling about XML, RDF, and the Semantic Web. Guido's complaints about XML processing seemed to strike a chord with many Python programmers, though I confess not to be among that number. Guido's primary complaint seemed to be a variant of the classic complaint about having to do state management with event-based APIs like SAX; he specifically mentioned the pain of having to write handlers for XML elements and attributes. This leads me to think that some of Uche Ogbuji's declarative data-binding work should either be moved into the stdlib or be more widely publicized, or both.
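The state-management complaint is easy to illustrate with the stdlib's own SAX API: even trivial extraction requires the handler to track where it is in the document by hand. (The books/title document structure here is invented for illustration.)

```python
import xml.sax

class TitleHandler(xml.sax.ContentHandler):
    """Collect the text of every <title> element."""
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.in_title = False   # manual state flag -- the pain point
        self.titles = []

    def startElement(self, name, attrs):
        if name == "title":
            self.in_title = True
            self.titles.append("")

    def characters(self, content):
        if self.in_title:       # only meaningful because of the flag
            self.titles[-1] += content

    def endElement(self, name):
        if name == "title":
            self.in_title = False

handler = TitleHandler()
xml.sax.parseString(b"<books><title>A</title><title>B</title></books>",
                    handler)
print(handler.titles)   # ['A', 'B']
```

Every additional element of interest means another flag or an explicit stack, which is exactly the bookkeeping a data-binding layer hides.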

Web Programming with Python

I attended two tracks after Mitch Kapor's keynote. The first, web programming, is near and dear to my heart. The talks I heard included a presentation of mod_python, which is billed as a way of building web applications with Python inside Apache. The latest release of mod_python features a Python Server Pages implementation. I won't say much about mod_python here because, frankly, I don't really favor it for web application or service programming in Python. It's fast but tends to encourage a style of programming that's too low-level and too tied to Apache. However, my experience with mod_python predates the latest release and its PSP implementation; while I don't favor the PSP approach, it should at least help with the print-to-stdout wart.

Next, I heard a talk about Nevow (say "nouveau"), Donovan Preston's Web Application Construction Kit, which includes a very Pythonic DOM implementation called "stan", as well as several other impressive pieces. One thing about Preston's presentation that I enjoyed was the gleefulness with which he bashed the W3C's DOM. While I very much admire the work of the W3C, there have been some clunkers; from a Python programmer's perspective, the DOM may be the biggest clunker of all.

Nevow is similar to my preferred way -- best exemplified in Python by XIST -- of generating markup languages from Python. While still rather young, Nevow is a project which I'll be sure to check out in detail. One problem with using Nevow, or so it seems right now, is that it may be fairly closely tied to Twisted, and I prefer Quixote these days for web programming. Let me sum up my impression of Nevow by saying that I would use it heavily if it could be freed from Twisted. If the Python community ever decides to make one web application framework, which I don't believe it will do, Nevow would be on the short list of feeder projects.
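The appeal of generating markup from Python structures, rather than wrangling a DOM, is easy to sketch. To be clear, this is not stan's or XIST's actual API, just a toy rendering of the general approach: elements as nested tuples, serialized by a short recursive function.

```python
# A generic sketch of Pythonic markup generation (NOT stan's real API):
# an element is (tag, child, child, ...); a string is text content.
def render(node):
    if isinstance(node, str):
        return node
    tag, children = node[0], node[1:]
    inner = "".join(render(child) for child in children)
    return "<%s>%s</%s>" % (tag, inner, tag)

doc = ("html", ("body", ("p", "Hello, ", ("b", "world"))))
print(render(doc))
# <html><body><p>Hello, <b>world</b></p></body></html>
```

Because the document is ordinary Python data, it can be built, transformed, and tested with ordinary Python code -- no node factories, no live tree mutation.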

Speaking of Quixote, the next talk I heard was by Andrew Kuchling, one of the primary developers and designers of Quixote. Andrew gave a brief overview of Quixote, playing up its principled agnosticism: Quixote imposes very few restrictions as to what other technologies you want to use with it. It includes both a templating language (of sorts) and a forms-handling module, but neither is required. Andrew pointed out that Graham Fawcett has implemented a small clone of Nevow for use in Quixote, something I'll be digging into very soon.

I can't really say much about Quixote here, except that it is far and away the cleanest, most programmer-friendly way of building web applications and services I've used, and I cannot recommend it highly enough to you.

Making Python Faster: IronPython, Starkiller, PyPy

After lunch on the first day, during which I chatted with John and Charlie, two Caltech programmers using Python to do experimental economics, I checked out the Python implementation track. Based on what I heard there, I think absolute Python performance will start improving at a faster rate than ever before.

Jim Hugunin, of Jython and AspectJ fame, has implemented Python on Microsoft's CLR (that is, IronPython compiles Python source to CLR IL bytecode), achieving impressive performance gains in some cases. Overall, Hugunin claims a 40% speed improvement over CPython.

IronPython's performance relative to CPython is not 40% faster in every aspect. Rather, it's a mixture of much faster, much slower, and about the same. For example, eval() is between two and 100 times slower than in CPython; range() is about three times slower. Library functions are anywhere from three times slower to two times faster. The list sort() method is three times slower in IronPython because CPython's version is so highly optimized. On the flip side, unicode.find() is more than twice as fast; Unicode performance is clearly important to .NET.
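Comparisons like these are typically gathered with per-operation micro-benchmarks, which the stdlib's timeit module makes easy. The snippets below are my own illustrations, not Hugunin's actual benchmark suite:

```python
import timeit

# Time two of the operations discussed above, in isolation.
sort_time = timeit.timeit("sorted(data)",
                          setup="data = list(range(1000, 0, -1))",
                          number=1000)
find_time = timeit.timeit("s.find('needle')",
                          setup="s = 'hay' * 1000 + 'needle'",
                          number=1000)
print("sort: %.4fs  find: %.4fs" % (sort_time, find_time))
```

Running the same snippets under two interpreters and comparing the ratios, operation by operation, is how a "three times slower here, twice as fast there" profile emerges.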

Alas, Hugunin is not releasing IronPython until (or unless?) he can figure out how to complete it, which I took to mean until or unless he can get that development funded. That's too bad, especially since it means his performance claims cannot be independently verified, and others who are interested cannot contribute to IronPython. One of the main areas of performance increase in IronPython is user-defined functions, which can be many times faster than in CPython.

In Hugunin's view, IronPython is faster than CPython when it is possible to use native CLR bits, since the CLR is very highly optimized, with hundreds of person-years of programmer attention. But, given the static nature of CLR semantics, it often takes some ingenuity to implement Python's late binding. Hugunin also uses Python to build C# "fast paths", that is, explicit hardcodings of common cases, like calling functions with zero to seven arguments -- a trick that Guido seemed interested in using in CPython.
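The fast-path trick is worth spelling out. Hugunin's versions are generated C# inside IronPython; this Python rendering is only an illustration of the shape of the idea: special-case the common low argument counts so the generic calling machinery is the exception, not the rule.

```python
# Illustrative sketch of "fast path" call dispatch (not IronPython code):
# handle the common small arities directly, fall back to the generic
# *args machinery only for everything else.
def call(func, args):
    n = len(args)
    if n == 0:
        return func()
    if n == 1:
        return func(args[0])
    if n == 2:
        return func(args[0], args[1])
    # ...real fast paths would continue up to seven arguments...
    return func(*args)        # generic, slower fallback

print(call(pow, (2, 10)))
```

Since the overwhelming majority of real calls take only a few arguments, the branch predictor and the JIT both reward this kind of hardcoding.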

IronPython is important for two reasons. First, it provides a testbed for new performance tricks and approaches, and that's a good thing. More generally, a sign of a language's maturity is the breadth and depth of implementations of it, so having a viable .NET implementation of Python is a very good thing. Second, if Microsoft's biggest gamble ever, .NET, pays off, Python programmers don't want to be left out in the cold. Having IronPython to run on Microsoft platforms and on open source platforms, by way of Mono, will mean that Python programmers, especially in areas like web services, will continue to be relevant and in some demand.

Starkiller, Or: "There is only one logical conclusion: we must destroy the sun"

Next up Mike Salib, a student at MIT, presented Starkiller, his static type inference tool for Python -- basically, it's a Python to C++ compiler which handles all of Python except for eval(). That's a motif within the PyCon leitmotif of performance, namely, giving up eval for the sake of potentially huge performance gains. I've been using Python actively for six or seven years, and I've never needed to use eval in anger. In my view, trading eval for speed is a no-brainer. (Yes, I know it's not that simple; but it's a very memorable line!)
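Why is eval the one thing Starkiller gives up? A static inferencer can see every call site in ordinary code and trace which types flow where; eval can conjure arbitrary behavior from a run-time string, so nothing can be concluded about it ahead of time. A tiny illustration:

```python
# Static analysis can follow ordinary calls: the int 21 flows into x,
# so a type inferencer can conclude double returns an int here.
def double(x):
    return x * 2

print(double(21))

# But this call is assembled from a string at run time. Its effect is
# opaque to any ahead-of-time analysis, even though it does the same
# thing -- which is why dropping eval buys so much.
code = "doub" + "le(21)"
print(eval(code))
```

Banning eval means every call target is visible in the source, which is precisely the property type inference algorithms like Agesen's need.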

Salib's presentation, the most amusing at PyCon, focused on how one might make Python faster, concluding that the key is reducing the number of run-time choice points so that all the well-known compiler-optimization tricks can be applied.

Salib, who described himself as a serious Python user, didn't shy away from the claim that Python is very slow. He offered a list of reasons:

  1. "Layers of indirection"
  2. "Dynamic binding"
  3. "Dynamic dispatch"
  4. "No structure/size information"
  5. "Run time choice points foil the last 30 years of optimization research"
  6. "Speed comes from eliminating run time choice points"

As Salib wrote in his PyCon proposal:

I have developed a type inference algorithm for Python that is able to resolve most dispatches statically. This algorithm is based on Ole Agesen's Cartesian Product Algorithm for type inference of Self programs. I have built a type inferencer for Python based on this algorithm called Starkiller.

I won't say any more about the details of Salib's research, only that it would be really, really good if he could find a job with a company that wants to fund his continued work on Starkiller. Starkiller is presently owned by MIT and will be released under the GPL soon (probably by May).

Oh, one more thing: Salib claims that Starkiller is sixty (60!) times faster than CPython for the very contrived, "pathological" benchmark of calling fibonacci and factorial functions repeatedly in a loop.
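For the curious, the kind of benchmark described looks something like this (the exact functions and loop sizes are my guesses, not Salib's published harness). Tight recursive calls are pathological precisely because they consist almost entirely of the dynamic-dispatch overhead Starkiller eliminates:

```python
# Toy "pathological" benchmark: nothing but function-call overhead.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fact(n):
    if n <= 1:
        return 1
    return n * fact(n - 1)

for _ in range(100):
    fib(15)
    fact(50)

print(fib(15), fact(10))   # 610 3628800
```

Real applications spend far more time in attribute lookups, I/O, and C-coded library routines, so a 60x result here says little about typical workloads -- which is exactly why Salib calls it contrived.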

The last presentation I heard on the first day was by Jacob Hallen and Armin Rigo, who presented PyPy, a project to build a Python interpreter written in Python.

PyPy seems an outside shot to be the next-generation mainline Python interpreter; still, it's the only project I can imagine eventually replacing the CPython trunk. PyPy is built to be very modular, with pluggable object spaces. That means that something like Starkiller's static type inference could, or so I assume, be plugged into PyPy. In fact, a few days after PyCon I read Armin Rigo's report of the conference and of the Starkiller talk, in which he said that Starkiller's type inferencing would make a powerful combination with the dataflow graphs of Python programs that PyPy is already able to generate.

The other notable point about PyPy, and with this we come full circle back to IronPython, is that it's on the verge of receiving significant funding from the European Union, perhaps as much as 1.3 million euros. That's not only real money; it's a huge amount of funding for an open source project. The details are what matter, of course, but if that funding could be used to achieve some kind of integration between Starkiller and PyPy, Python performance might take a very robust jump in short order.

All things considered, PyCon has become one of my favorite community-organized technical events. It's sorta like off-brand political parties in the US: they never have as many resources as the Republicans or Democrats, but there are lots of opportunities and the horizons are wide open.