
Dividing Factors

September 5, 2001

Leigh Dodds

Chasing the tail end of a flurry of activity on XML-DEV over the last few weeks, the XML-Deviant reports on signs of increasing divides in the XML community and on some refactorings in the W3C pipeline.

Dividing

Last week the XML-Deviant noted that the recent squabbles concerning typing, XML, and W3C XML Schemas hinted at increasing divides in the XML community. There are further signs this week, including many comments urging acceptance of the diversity of XML applications and of their differing requirements and viewpoints.

Given the heat of some of the recent debates, particularly the continuing attacks on XML Schemas, Tim Bray advised an even-handed approach, noting that the existence of XML Schemas and its competitors is a good thing and that nothing about them detracts from the basic utility of XML.

XSDL and its competitors are an unqualifiedly good thing. They provide immensely better expressive hooks for the language designer and for the authoring program that wishes to support a human in direct creation of XML content. The data typing system will have lots of supporting libraries which will facilitate all sorts of interchange tasks. So let's not diss the contribution of the schema folks.

...I don't think that all the PSVI theorists in the world, laid end-to-end, are any threat to the everyday working usefulness of XML.

In another post Bray said that each side should acknowledge that opinions are divided. This seemed to be the general theme this week: a desire to see acknowledgment of the diversity of opinions and, further, to have that acknowledgment enshrined not only in cordial email debates but also in the standards track itself -- more specifically, in the architecture that the W3C is designing. An appreciation of diversity already seems prevalent at the grass-roots level, as Simon St. Laurent observed while summarizing the atmosphere at the recent Extreme Markup conference.

Amongst those sharing this viewpoint, Michael Brennan was particularly keen on embracing plurality.

We need an XML processing framework that accepts the plurality of application domains without prejudice or favoritism. We need an XML processing framework that does not take a specific metadata vocabulary for annotating information items and a particular set of transformations and bless them and insist that everyone accept the notion that these are not annotations and transformations, but rather the process of realizing an XML instance's True Form. The only true form of an instance is that of the instance itself, and that's nothing but a bunch of text and pointy brackets. Everything else is layered atop that to suit a particular application domain or processing model.

In a lengthy essay, "The Tragedy of the Commons", Jeff Lowery reviewed the current state of affairs.

Nobody should foist the solutions to their requirements onto those who don't need them. Another truism is that you should only pay for what you need. Some will argue that XML adheres to that philosophy, but the recent arguments about namespaces and PSVI indicate otherwise. Let's not alienate people who are honestly trying to use the technology as best they see fit. Let's agree to disagree, but let's also 1) understand that there are different, incompatible requirements from two main factions; 2) understand that divergence is inevitable, but can be controlled and accommodated in a rational, well-ordered way.

Lowery concluded by arguing that the extent of the common ground should be understood and the common work factored out accordingly. One might argue that this is the core of the "Daring to Do Less" ideal: deliver the minimum that's useful. This isn't far from the philosophy of extreme programming as discussed in a previous XML-Deviant column. Michael Brennan agreed with Lowery's sentiment.

As I've worked with XML and followed debates on this list, though, I have acquired a deep and growing appreciation of the traditions that XML inherits and the rich and varied domains it serves. It would be a shame for that to be sacrificed to suit the needs of one domain, especially when the needs of that domain *can* be layered on top of general foundations that can serve everyone's needs (of that I am firmly convinced).

This exchange prompted Sean McGrath to once more argue for layering above well-formed XML as a means to avoid "brain puree".

Brennan's desire to follow the traditions of XML was echoed in a separate essay posted by Simon St. Laurent, in which he argued that somewhere in XML's development since its days as "SGML on the Web", some lessons have been forgotten.

...we've seen all kinds of features added to XML. We've been told that we only need to use the features we want to use for our projects, and that we can run wild with the features we like.

Some place in there, though, the lessons of the Web seem to have been lost. What began as "SGML for the Web" seems to be turning into "Markup for my particular situation which happens to use XYZ toolset with KTM options turned on/off." The Web-building idea that information is most valuable when most accessible, even on a lowest-common denominator basis seems to have been forgotten by the feature-hungry.

This growing unrest largely speaks for itself. Is "internet time" so fast that we're already approaching a level of complexity comparable to SGML's? Some might argue that this was inevitable, and that Daring to Do Less is a strategy that succeeds only in the short term anyway. Yet the majority would likely disagree or, at the very least, argue that the simplicity-complexity spiral needn't be followed so quickly.

Factoring


Perhaps the TAG (the W3C's promised Technical Architecture Group) will help resolve some of these issues, whenever it finally appears. It is promising, however, that some factoring is happening already.

This week the DOM Working Group produced an updated draft of the DOM Level 3 XPath specification. The announcement notes that, following feedback from the community, the dependency on XPath 2.0 has been removed. This is good news: it means that a standard mechanism for querying a DOM tree with XPath may be available much sooner than the intertwined XPath 2.0/XSLT 2.0/XQuery specifications, and that alone may meet many people's requirements for an XML query language. Interestingly, the requirements document also explains that the DOM Working Group originally believed an XPath query interface for the DOM should be handled elsewhere in the W3C, but there were no takers.

The current draft shows how the DOM and XPath data models are mismatched; unfortunately, it adds new classes to the API rather than reusing those already available. The DOM's requirement for "liveness", surely useful only in a browser context, is likely at the center of this. One might speculate whether, if pushed further, this browser-related requirement could be factored out, making the core API smaller and more manageable. It is a clear illustration of how valuing some requirements above others can cause problems later.
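To make this concrete, here is a minimal sketch of the proposed interface in use, based on the API shape that browsers eventually shipped for this specification; the catalog document and the query are invented for illustration. Note that results come back wrapped in a new XPathResult class rather than in the DOM's existing NodeList, and that requesting a snapshot sidesteps the liveness question: the result set is fixed at evaluation time and is not updated if the tree is later mutated.

```typescript
// A sketch of DOM Level 3 XPath in TypeScript, run in a browser.
// The catalog document and the expression are invented for this example.

const doc: Document = new DOMParser().parseFromString(
  `<catalog>
     <book id="b1"><title>XML in a Nutshell</title></book>
     <book id="b2"><title>Learning XML</title></book>
   </catalog>`,
  "application/xml"
);

// Evaluate an XPath 1.0 expression against the DOM tree.
const result: XPathResult = doc.evaluate(
  "/catalog/book/title",                  // the query
  doc,                                    // context node
  null,                                   // no namespace resolver needed
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, // one of several result types
  null                                    // no result object to reuse
);

// Walk the snapshot: a plain, non-live collection of matching nodes.
for (let i = 0; i < result.snapshotLength; i++) {
  console.log(result.snapshotItem(i)?.textContent); // "XML in a Nutshell", ...
}
```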

Elsewhere, the RDF Core Working Group, which is showing an admirable degree of openness, has indicated that it will be generating a range of deliverables. Posting to the w3c-rdfcore-wg mailing list, Brian McBride gave a list of the expected deliverables, indicating the target audiences and scope of each document. Early internal drafts of these documents are also available for public consumption.

The deliverables will include a much-sought-after split between the RDF model and syntax specifications, a revised RDF Schema specification, a tutorial that is likely to serve a role similar to the XML Schema primer, and a suite of Test Cases demonstrating different aspects of RDF functionality.
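To see why the model/syntax split matters, consider a single RDF statement. The model is an abstract subject-predicate-object triple; RDF/XML is merely one way of writing it down. A minimal sketch follows, in which the subject URI is invented for illustration and the predicate is the Dublin Core creator property.

```typescript
// The model: one abstract statement, a subject-predicate-object triple.
interface Triple {
  subject: string;   // a resource, identified by a URI (invented here)
  predicate: string; // a property, also identified by a URI
  object: string;    // a literal in this case; could be another resource
}

const statement: Triple = {
  subject: "http://example.org/xml-deviant",
  predicate: "http://purl.org/dc/elements/1.1/creator",
  object: "Leigh Dodds",
};

// One syntax for that same model: an RDF/XML serialization.
const rdfXml = `<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="${statement.subject}">
    <dc:creator>${statement.object}</dc:creator>
  </rdf:Description>
</rdf:RDF>`;
```

Separating the two specifications lets tools agree on the triples while differing on how they are serialized.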

This is promising stuff, and it should make RDF a much more digestible technology.