Mozilla and Opera Renew the Browser Battle

June 16, 2004

Kendall Grant Clark

As a lifelong, committed user of minority operating systems (OS/2 back in the day, Linux exclusively from 1994 till last year, now OS X at home and Linux on servers), I have always had, since discovering the Web in 1994, a strange and strained relationship with browser vendors. The Netscape browsers worked pretty well; then -- it felt sudden, though it must not have been -- they were totally useless. There was a period after the demise of Netscape during which I thought I might have to switch from Linux, since having a decent browser was crucial. But with some long-suffering patience, and careful use of Opera's offerings, I now find myself with a wealth of choices, not only on OS X -- where I use a freewheeling mixture of Mozilla, Safari, and Firefox daily -- but also on Linux.

Windows users who continue to use Internet Explorer fall into one of several camps: they don't know or don't care about web standards and compliance with them; they can't tell the difference between a bloated piece of software and a quality piece of software; or they can tell, but the difference is of no importance to them. If they don't know or care about IE's many defects, why should I?

Because there are so many of them!

In an ideal world, technology standards would unify and rationalize markets, and perhaps create new ones. While bodies like ISO, the W3C, and OASIS sometimes achieve this kind of benefit, in the real world, given Microsoft's total domination of desktop computing, the fact that its browser does not excel ends up being a pain not only for users but also for developers and publishers.

What Could be Done?

These worries, considerations, and issues apply to the present state of web architecture, by which I mean: standardized markup languages (HTML and XHTML); standardized means of styling and transforming those languages (CSS and XSLT); third-party media types and browser plugins or standalone apps for handling them (Adobe's PDF, the zoo of digital video and music formats, and Flash); and client-side event models and scripting languages for providing some measure of server-agnostic interactivity (DOM, JavaScript, and Java WebStart).
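
To make the "transforming" half of that stack concrete, here is a minimal Python sketch of running an XSLT stylesheet over a document. The third-party lxml library and the toy document and stylesheet are my own choices for illustration, not anything the standards themselves mandate.

    from lxml import etree  # third-party library; one common XSLT engine for Python

    # A toy XHTML-ish document and a stylesheet that pulls out its title.
    doc = etree.fromstring(
        "<html><head><title>Hello</title></head><body><p>Hi</p></body></html>"
    )
    stylesheet = etree.fromstring("""\
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <heading><xsl:value-of select="/html/head/title"/></heading>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(stylesheet)   # compile the stylesheet once
    result = transform(doc)              # apply it to the document
    print(str(result))                   # serializes to <heading>Hello</heading>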

That is more or less the state of the art as it is practiced today. On the horizon, however, are some interesting new developments, many of which have been covered well by XML.com. I have in mind SVG and XForms.

Let's step back from this level of architecture for a moment and consider a bigger question: why not drive the browser into the desktop? My theory is that the Web has always been an application delivery platform. For its first few years the Web seemed like a publishing platform only, but that had more to do with, on the one hand, the default settings of browsers and the Apache HTTP server and, on the other, Tim Berners-Lee's day job -- supporting physicists at CERN -- than it had to do with any inherent limitations of the technology itself.

In other words, a browser dereferences an identifier of a web resource and retrieves a representation of the state of that resource -- that's what the Web does. What that browser subsequently does with that resource representation, how the user interacts with the browser and with the representations it retrieves, and the ways in which the browser is or isn't integrated with the rest of the operating system are all matters of contingent fact, convention, and customary expectation. They are not matters of technical necessity per se. If that weren't the case, there could be no such thing as Service Oriented Architectures, Semantic Webs, or Web Services.
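
That dereference-and-retrieve cycle fits in a few lines of code. Here is a minimal Python sketch of "what the Web does"; the URL is a placeholder, not a real resource.

    import urllib.request  # HTTP client from the Python standard library

    # Dereference a URI: ask the server for a representation of the
    # resource's current state. The URL below is only a placeholder.
    with urllib.request.urlopen("http://example.org/some/resource") as resp:
        media_type = resp.headers.get("Content-Type")  # how to interpret the bytes
        body = resp.read()                             # the representation itself

    # Everything the client does next -- render it, script against it, hand
    # it to another program -- is convention, not anything HTTP dictates.
    print(media_type, len(body), "bytes")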

Let me be clear. I have long opposed the idea of deeply integrating the browser into and with the desktop operating system, but I did so for very tactical reasons, namely, that if that were the game, Microsoft would have won even more quickly and decisively than it did. And that would have been a very bad thing, indeed. It might have been enough to strangle the Linux baby in its cradle, and it would have made Steve Jobs's fusion of BSD and NeXT an absolute irrelevance.

Having been clear, now let me be direct. I think, speaking merely of technical merit and elegance, integrating the Web into various kinds of stuff is very interesting. For example, I have been talking to fellow researchers and Python friends about integrating the Web -- well, web services, really -- with the Python programming language. I think there are real benefits to be gained from using, say, URIs, markup languages, HTTP, and WSDL to manage some future version of what's known colloquially as the Python "standard library" -- think of it as CPAN-on-crack.
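
Here is a purely hypothetical sketch of that idea. The package index URL, the meta.json convention, and the helper function are all invented for illustration; nothing like them exists.

    import json
    import urllib.request

    # Purely hypothetical: imagine every package in some future standard
    # library is a web resource with its own URI, and its metadata is a
    # representation you can simply GET. The index URL and the meta.json
    # convention are made up for this sketch.
    INDEX = "http://packages.example.org/python/"

    def package_metadata(name):
        """Dereference the (imaginary) URI for a package's metadata."""
        with urllib.request.urlopen(INDEX + name + "/meta.json") as resp:
            return json.load(resp)

    # package_metadata("some.module")  # would fail: the index is imaginary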

Why not, then, integrate the Web with my individual-human-oriented operating system? Why not use representations of resources retrieved from the Web, identified by URI, to drive complex, native applications? It seems like, at worst, an interesting possibility or, at best, like something I could eat off of for the next ten years. After all, somebody's gotta explain the fun new stuff to folks playing along at home.
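
Pushed a step further, the same pattern can drive a native user interface. The sketch below uses Python's Tkinter purely as a stand-in for a "complex, native application"; the URI is, again, a placeholder.

    import tkinter as tk
    import urllib.request

    URI = "http://example.org/motd.txt"  # placeholder; any text resource would do

    # Fetch a representation of the resource's state...
    with urllib.request.urlopen(URI) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    # ...and use it to drive a native window rather than a browser page.
    root = tk.Tk()
    root.title("Web-driven native app")
    tk.Label(root, text=text, justify="left").pack(padx=20, pady=20)
    root.mainloop()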

Tactics Versus Technical Merit

One reason not to do this, of course, is that it would play right into Microsoft's monopolistic hands. My reading of the relevant, recent history is that this would have been completely disastrous if it had been the Web's original drift and orientation. There seem to be two possibilities: either it would have meant the early demise of the Web itself, because the implementation burdens would have been too great; or it would have meant Microsoft would have paid attention to the Web sooner and there never would have been a Netscape.

But here we are, a decade or so later, and things are different. Microsoft still dominates the browser market in terms of sheer numbers, but the landscape has changed. There is a general acceptance of web standards by vendors. There are many, partially overlapping communities of developers who demand compliance with web standards. And there's open source software, which is here to stay. (Though I can imagine the world being slightly different and, thus, open source taking a dramatic, decisive legal beating.)

Perhaps different conditions make different things possible? Are the Microsoft counterbalances strong enough to allow for actual, rather than illusory, innovation? Despite Microsoft's claims to the contrary, monopolistically structured markets have not been hotbeds of innovation. If open source is strong enough to offer a real counterbalance, maybe we can start to innovate -- always suspiciously and carefully -- in the browser space again?

The W3C's Workshop on Web Applications and Compound Documents

Anyway, these are some of the thoughts I've been having as I read about new developments in the struggle over web standards. The amazing folks responsible for the Mozilla browser family have, as has been widely reported, started to be more aggressive both in promoting their technology and in dissing Microsoft's. The W3C convened the Workshop on Web Applications and Compound Documents (WACD), which spawned some very interesting position papers. In what remains of this column I want to look at the position paper jointly submitted by the Mozilla Foundation and Opera Software, as well as at a new organization, the Web Hypertext Application Technology Working Group.

To my way of thinking, the W3C organizes a workshop when it -- by which I mean Tim Berners-Lee and advisers -- isn't sure whether or how to proceed in some area. For example, the controversy surrounding binary variants of XML was persistent and not very cleanly or obviously structured. The Binary Infoset Workshop (see my "Binary Killed the XML Star?" for a report) apparently helped the W3C and some of its member organizations clarify the issues enough to begin standardization work in that area.

I take the WACD Workshop to have been a response, in part, to pressures being exerted on, well, everyone else by Microsoft's extraordinarily ambitious .NET plans, including the FUD about Longhorn: Indigo, Avalon (which includes but is more than just XAML), and WinFS. That's pretty scary stuff, frankly, since it both embraces and extends core W3C technologies (the similarity of RDF and WinFS is very revealing; the use of vector-based user interfaces seems a dagger aimed at SVG; and so on) and maps out a very rich integration of the Web and the operating system.

Open Source's Response

One way to respond to these moves by Microsoft is to mirror them in open source. This is precisely what Novell's Ximian folks have been doing in the Mono project. Mono is a remarkably ambitious open source implementation of large chunks of the .NET framework and APIs. It has made very significant progress toward that goal and, since Novell acquired Ximian, one can only imagine that Mono has a real chance of succeeding.

But what would success mean in this context? Having open source desktops that are able to use .NET web applications will be good for Linux users and for the industry as a whole, but will it be good for the Web? In other words, the mirror-Microsoft strategy is only one possible response.

Mozilla + Opera = Innovation?

Another response is to preempt Microsoft. Yes, Microsoft has more resources than any other player in this space; but the scope of its ambition is so stunning that it will take years to ship, which leaves an opening. Perhaps an open source proposal for a next-generation web application framework could preempt it? That's how I'm thinking about the joint proposal by Mozilla and Opera at the WACD Workshop. As MozillaZine put it, the position paper "describes a device-independent Web application framework based on HTML and backwards-compatible with existing Web content...[Mozilla and Opera] are keen to get parts of this framework in place soon to prevent a single-vendor solution...becoming dominant". No need to wonder about the identity of that "single-vendor solution", of course.

The Microsoft position paper consists -- yes, I'm making this up, but just barely -- of two sentences: "Avalon and XAML. Die puny humans!" But what about the details of MozOpera's position paper? First, it sounds a note of urgency: "To compete with other players in this field, user agents with initial implementations of jointly-developed specifications should ideally be shipping before the end of the year 2004." Second, it points to an example of a jointly-produced specification in this area, Web Forms 2.0, which is hosted by something calling itself WHAT, about which more below.

The remainder of the MozOpera position paper consists of two bits: first, responses to all of the questions posed by the conveners of the WACD Workshop -- questions and answers to which I commend your careful attention. Second, MozOpera lays out seven principles underlying its approach:

  1. "Backwards compatibility, clear migration path" -- That is, we don't need no stinkin' Avalon and XAML, we've already got HTML/CSS/DOM/JavaScript.
  2. "Well-defined error handling".
  3. "Users should not be exposed to authoring errors" -- Which I can't help but read as a repudiation of the W3C TAG's Architecture of the World Wide Web notion that "silent recovery from error is always harmful", a notion that I and many others have suggested needs some clarification.
  4. "Practical use" -- In other words, standardization efforts should be driven by real use cases, not by monopolistic market pressures and megalomania.
  5. "Scripting is here to stay" -- You can't do everything with declarative markup, natch.
  6. "Device-specific profiling should be avoided" -- Please don't hurt us, All Powerful Robot. A good principle, but Microsoft may be able to agree with it, but not concede one actual inch in the bargain, which is scary but true.
  7. "Open process".

It's not easy to tell whether that last item, "open process", was MozOpera taking a shot at the W3C or at Microsoft or both. Ian Hickson, an Opera developer and W3C habitué, glossed it this way:

Mozilla obviously want this because they live in the Free software world, where everything is done in the open, and where that policy has reaped them many benefits. As far as Opera goes, we want an open process because our experience with the Web Forms 2 work ... is that developing core Web technologies in the open is far more productive than creating the technologies mostly behind closed doors, getting input from the real world only every few weeks or months.

I suppose, then, it's a complaint about the pace of W3C Working Group development. This is interesting, because the W3C was founded as an industrial consortium in part because the ISO moved too slowly back in the early days of the Web. We've come full circle now that MozOpera is complaining that the W3C moves too slowly. I take no position on that claim since different W3C WGs have different policies and paces. The WG I've been involved with lately, RDF Data Access, has done everything publicly and actively solicits feedback from anyone.

It's obvious, however, that MozOpera thinks some relevant W3C WGs are moving too slowly, giving Microsoft's rapacity too much of a head start. Hence the need to form the new Web Hypertext Application Technology Working Group, which at this point is little more than a domain name, a small web site, and a mailing list. But maybe that's enough?

I find it odd that in an effort to move more quickly and more publicly, the WHAT site says that the "creation of this forum follows from several months of work by private e-mail on specifications" for web application technologies (my emphasis). Well, you know, which is it, public or private? Is the W3C too slow or too private? What's the point of kvetching about the W3C being too private and then being even more private yourself? Not a very auspicious start.

I'm willing to overlook this inconsistency since the eventual plan is to "submit [WHAT] work for standardization to a standards body when it has reached an appropriate level of maturity. The current focus is rapid, open development and iteration to reach that level". That's fair enough. In a rapidly evolving market -- which may, in fact, be the creation of a new one -- some sacrifices can be made in the interests of keeping the underdog in the game.

Finally, there are three specification documents already available from WHAT: Web Applications 1.0, Web Forms 2.0, and Web Controls 1.0. The WHAT site promises additional docs, including a CSS Rendering Object Model.

Conclusions

As we can see, there are at least three broad avenues forward for the Web with regard to web applications. First, there is the Microsoft .NET project of tying web applications very tightly to future versions of the Windows OS, that is, to what is now called Longhorn. Given Microsoft's monopolistic position and its willingness to abuse it, I am very pessimistic about its intentions in this area. That Novell and Ximian are determined to mirror Microsoft's .NET plans is both tactically clever and strategically sound. But it's a hard, hard slog.

Second, the W3C is developing or has already standardized several technologies -- XML, XPath, XQuery, CSS, XHTML, SVG, XForms, and the Multimodal Interaction Activity all seem relevant here -- which, taken together, could serve as a next-generation platform for web application development, assuming that vendor support is forthcoming.

Third, the browser makers behind WHAT (Opera, the Mozilla Foundation, and Apple's Safari team -- though it's not clear which of these have taken formal positions regarding WHAT), in an effort to preempt Microsoft if possible, have created a forum for rapid development outside the W3C. The technical approach of WHAT is focused mostly on backwards compatibility with existing browser architectures, which strikes me as prudent. Having support for these new features in Mozilla and its derivatives, in Opera, and in Safari may end up helping the W3C as much as splitting the standardization process hurts it.

Whether any of this heads off or reins in the Microsoft juggernaut is the one real question I wish I could answer. In truth, I have no idea.