Reports from WWW10
Last week I attended the Tenth International World Wide Web Conference in Hong Kong, the most important annual conference for developers, academics, and publishers working with the Web. This article is a collection of reports written from the conference.
Opening the conference on Wednesday, Tim Berners-Lee told the attendees they could congratulate themselves for the progress made so far on the Web, but that they weren't finished building yet.
Announcing the release of a landmark XML specification, W3C XML Schema, Berners-Lee explained that the three specifications -- XML 1.0, XML Namespaces, and XML Schema -- formed the new foundation of XML. XML Schema allows the description, in XML, of XML languages, such as SVG or XHTML, and it's designed to replace DTDs, which served the same purpose in XML 1.0.
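To give a flavor of what "describing XML languages in XML" means, here is a minimal sketch of a schema document. The "book" element and its contents are invented for illustration; only the schema namespace itself comes from the specification.

```xml
<!-- A minimal, hypothetical schema: declares a "book" element
     containing a string title, with a required ISBN attribute.
     A DTD would express the same constraints in its own,
     non-XML syntax. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
      </xs:sequence>
      <xs:attribute name="isbn" type="xs:string" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Because the schema is itself an XML document, it can be generated, queried, and transformed with the same tools as any other XML, which is one of the arguments for moving beyond DTDs.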
The development of the XML Schema specification has been characterized by controversy and criticism, dating back to early concerns in late 1999 over whether Microsoft would support it. Berners-Lee praised the Schema Working Group for its perseverance in difficult circumstances. Though many in the XML developer community still have reservations about the specification, most agree that XML Schema will succeed, and indeed has to.
So now, over three years since the XML 1.0 Recommendation was first published, the W3C has built a foundation for XML that its member companies think can be used in today's applications. However, there's more to the total XML architecture than the foundation. Berners-Lee noted that a key technology, the XML Query language, a kind of SQL for XML data, is still in development, as are XLink and XPointer, XML technologies for linking documents together.
There was a glimmer of encouragement for those who feel that XML's complexity has got out of hand. Berners-Lee said there was a need to revisit and simplify the existing XML architecture. A W3C Recommendation, he said, was not the end of the road for a specification: technologies needed a second look in light of implementation experience and questions of interpretation.
Berners-Lee then detailed his plans for the next stage in the Web, the so-called Semantic Web, recalling the early days of 1990 -- the "fun stage". The Semantic Web initiative is aimed at making the Web machine-readable by creating interoperable formats for information. It is currently seen mostly as a vision for the future, perhaps in a similar way as the Web itself was originally perceived ten years ago. However, Berners-Lee was not perturbed that industry was not embracing the Semantic Web at the moment, preferring that it attracts the attention of developers who will help build it.
Certainly, the Semantic Web attracted more interest at this conference than at WWW9: an afternoon Semantic Web session packed a room with over 350 attendees. The W3C still has some way to go in communicating its vision, though, as many participants were left scratching their heads afterward. The Semantic Web is something a lot of people are attracted to, only to find that the actual details elude them.
While business-oriented XML applications have been grabbing the tech media spotlight, exciting work has been in progress in the world of web multimedia.
Presenting on the user interface work of the W3C, Thierry Michel gave the audience the latest news on SMIL (pronounced "smile"). The Synchronized Multimedia Integration Language, an XML language, allows the coordination of multimedia elements in a presentation. SMIL 1.0 has been around for a while now, implemented in programs like RealPlayer.
SMIL allows items to be laid out not just over the page but over time too. It provides alternative presentations depending on language or available bandwidth, and supports accessibility features like captioning and ALT text. There are currently around ten SMIL browsers available.
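The idea of layout over time can be sketched in a few lines of SMIL 1.0. The file names below are invented; the par, seq, and switch elements and the system-bitrate test attribute are from SMIL 1.0.

```xml
<!-- A minimal, hypothetical SMIL 1.0 presentation: a slide and its
     narration play in parallel, then a video follows. The switch
     picks the first alternative that fits the viewer's bandwidth. -->
<smil>
  <body>
    <seq>
      <par>
        <img src="slide1.png" dur="10s"/>
        <audio src="narration.rm"/>
      </par>
      <switch>
        <video src="movie-hi.rm" system-bitrate="56000"/>
        <video src="movie-lo.rm" system-bitrate="14400"/>
      </switch>
    </seq>
  </body>
</smil>
```

The seq element runs its children one after another, while par runs them simultaneously, which is all that's needed to express most timeline-style presentations.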
Michel explained what the next generation of SMIL, SMIL 2.0, had in store. The most significant change is the splitting of SMIL into modules. This allows various components of the language to be reused in other contexts. As well as the modules, there are now three profiles defined, including a "basic" flavor for lightweight devices and an XHTML-SMIL flavor for use in browsers.
The splitting up of SMIL allows implementations, such as Internet Explorer 5.5, to apply SMIL's timing and animation features to XHTML documents. Michel demonstrated some impressive functionality running in IE5.5. In addition to modules and profiles, SMIL 2.0 will bring new visual transition and animation effects and increased support for internationalization and accessibility.
The SMIL Working Group expects SMIL 2.0 to reach the final stage of its development, W3C Recommendation, during Summer 2001. The final recommendation will ship with a set of test cases for implementors.
Following Thierry Michel, Chris Lilley presented on the progress of the W3C's Scalable Vector Graphics format, SVG. SVG represents images in their component vector form, as opposed to raster formats such as JPEG or PNG. As a result, graphics can be both smaller in file size and vastly superior in rendering quality.
SVG has been a long time in development, and during that time it's acquired several good implementations, the most popular being Adobe's browser plugin. SVG 1.0 will soon reach Recommendation stage, and there is already much interest in the next version of the specification. Features likely to be in that new version include more support for small devices, not just PDAs but also cell phones, and the integration of SVG with other XML user interface languages such as MathML and XForms.
A burning question among pro-standards conference attendees was how to get browser and editing tool vendors to implement XHTML.
XHTML is the latest incarnation of HTML. It uses an XML format and enforces a higher degree of strictness than HTML does -- XHTML documents are either right or wrong; there's no halfway house where the browser tries to recover. If you make an error, you have to fix it!
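As a sketch of what that strictness looks like, here is a minimal XHTML 1.0 Strict document (the content is invented; the DOCTYPE and namespace are from the XHTML 1.0 specification):

```xml
<!-- A minimal XHTML 1.0 document. Unlike legacy HTML, tags must be
     lowercase, attribute values quoted, and every element closed:
     note the explicitly closed br and li elements. -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Strictness in action</title></head>
  <body>
    <p>First line.<br />Second line.</p>
    <ul>
      <li>Every list item is closed.</li>
    </ul>
  </body>
</html>
```

Omit a closing tag or leave an attribute unquoted and the document is simply not XHTML; an XML parser will reject it rather than guess at the author's intent.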
As with many web standards though, the speed of implementation in browsers and editors is likely to be disappointingly slow, so what can users and standards-makers do to encourage its uptake? Participants in the W3C "Town Hall" meeting discussed just this. For XML geeks, the advantages are obvious -- having XHTML web pages allows them to use tools like XSLT and other XML-specific technologies.
For web designers, XHTML is advantageous because it's an indisputable standard -- write valid XHTML and you're more likely to get the same result in each browser that implements it, as there's nothing browser-specific about XHTML.
However, neither of those are necessarily a compelling reason for the average user, who happily surfs around unaware that web designers have had to jump through hoops to get the pages looking right. During discussions in the meeting, though, one advantage came up that would benefit both vendors and users: processing XHTML is simply much faster than processing HTML.
As XHTML is strict, the browser doesn't have to waste time guessing what the page should look like: the page is either correct or it isn't. What's more, it doesn't take much to add this feature to browsers: if a page's DOCTYPE declares XHTML, switch in the new, fast XHTML parser; if not, fall back to the existing code. Unfortunately, browser vendor Microsoft was utterly noncommittal about its plans to implement XHTML. Dave Massey from Microsoft commented that they are "investigating" XHTML and may add it to their browser, but he couldn't say if or when.
Once again, it seems, it falls to users to campaign for standards compliance from vendors in order to get fast-loading, predictable web pages. Microsoft always says it implements according to users' priorities, so grassroots action seems like a potential route forward. However, with millions of users, Microsoft is likely to listen to the big spenders first -- what voice do average web users and developers have?
The last day of WWW10 was given over to Developers' Day, which offered the chance for developers working on web technologies to share and discuss details of their projects. I attended the Semantic Web day, where in the morning several W3C Team members showcased Annotea, their tool for annotating web pages.
Annotation isn't new, and there have been several attempts to add it to the Web so far. One of these was the controversial (and now defunct) Third Voice, allowing post-it note style adornment (some might say defacement) of pages. Annotea takes a slightly different approach, being non-proprietary and based on open web standards.
Annotea is basically an annotation server: it uses an RDF database and a simple HTTP front end to store annotations and respond to annotation queries. The W3C has deployed an Annotea server at http://annotest.w3.org/, but anybody is free to deploy one -- so you can have multiple sources of annotation.
In order to view or add annotations, you need a user agent which supports them. Amaya, the W3C browser/editor, has pretty sophisticated support for annotations and is a good place to start experimenting. The screenshot below shows me adding an annotation to a W3C page mentioning XML.
Taking a look under the hood, Annotea is quite simple to understand. It uses an RDF schema to define the properties of an annotation, and the schema is reasonably easy to read and figure out. A fun and relatively simple experiment would be to have a weblog or comment system export its content in Annotea format, so that it shows up in annotation-aware user agents.
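Roughly, an annotation on the wire is a small piece of RDF/XML along these lines. Treat this as a sketch: the target page, body URL, and author are invented, and the property set is simplified from the Annotea annotation schema rather than copied from it.

```xml
<!-- A hypothetical Annotea annotation in RDF/XML: it points at the
     page being annotated and at a separate document holding the
     annotation's text. URLs and names here are invented. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:a="http://www.w3.org/2000/10/annotation-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description>
    <rdf:type rdf:resource="http://www.w3.org/2000/10/annotation-ns#Annotation"/>
    <a:annotates rdf:resource="http://example.org/some-page.html"/>
    <a:body rdf:resource="http://example.org/annotations/note1.html"/>
    <dc:creator>A. Reader</dc:creator>
    <a:created>2001-05-04</a:created>
  </rdf:Description>
</rdf:RDF>
```

A weblog exporter along these lines would just need to emit one such description per comment, with a:annotates pointing at the page the comment refers to.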
Annotea is designed to let you use any number of annotation servers, in keeping with the decentralized nature of the Web. That means you can switch in annotations from your trusted sources or, in a corporate setting, ensure your annotations get no further than your own intranet.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc.