XSLT UK 2001 Report

April 25, 2001

Jeni Tennison

Table of Contents

XSLT and the Art of Motorcycle Maintenance

XSLT Design Patterns

XSLT Performance

The XSLT Compiler for JVM

Building and Maintaining the DocBook XSL Family

XSLT and Databases: A Compelling Combination for Web Apps

Charlie, An XML Application Framework

Schematron: validating XML using XSLT

Short Papers

Markup Meets Middleware

Using XSLT to Derive Schemas from UML

Meaning Definition Language

XSLT as a Query Language

Explaining XSL Formatting Objects

The RenderX approach to XSL formatter design

Experiments with XSLT With Topic Maps

Conclusions

April 8th and 9th 2001 saw the first conference dedicated to XSLT take place at Keble College in Oxford. While the basis of the conference was XSLT, this didn't stop people talking about the XSL effort in general or about other vocabularies and technologies that work with or against XSLT.

Opening Address

The conference was opened by Norm Walsh of Sun Microsystems, a member of the XSL Working Group and maintainer of one of the more complex XSL applications -- the DocBook XSL family, which he talked about later in the day. Norm set the scene for the conference, reminding us of the origins of XSLT and outlining four requirements that must be met if XSLT and XPath are to become as ubiquitous as XML:

  • interoperable tools,
  • cooperative specs,
  • optimizations or compilations of stylesheets, and
  • information set pipelines.

XSLT and the Art of Motorcycle Maintenance

Next up was David Carlisle, from NAG Ltd., one of the editors of MathML and an XSL-List regular. David gave another view of XSLT's heritage, as a functional programming language fitting into the same development path as Scheme or DSSSL. He outlined the benefits of taking a functional approach to presenting information, especially with web-based content, where random access means that you need something that allows you to process only parts of the content and still work reliably (for example, in numbering pages without having to process each page to construct the number). David had the title for his talk thrust upon him, but he still managed to bring in a reference to the seminal book "Zen and the Art of Motorcycle Maintenance" with a quote.

After a while he says, "Can I have a motorcycle when I get old enough?"

"If you take care of it."

"What do you have to do?"

"Lots of things. You've been watching me."

"Will you show me all of them?"

"Sure."

"Is it hard?"

"Not if you have the right attitudes. It's having the right attitudes that's hard."

"Oh."

After a while I see he is sitting down again. Then he says, "Dad?"

"What?"

"Will I have the right attitudes?"

"I think so," I say. "I don't think that will be any problem at all."

And so we ride on and on, down through Ukiah, and Hopland, and Cloverdale, down into the wine country...

Beginners can find XSLT difficult to deal with, especially when they come from a procedural languages background. But XSLT isn't hard if you have the right attitude.

XSLT Design Patterns

I spoke next, representing only myself and drawing on my experience answering questions on XSL-List. I outlined some of the design patterns that have emerged in the use of XSLT. Using examples from an application I worked on for Xi advise bv, I spoke about design patterns at four levels.

application level
combining stylesheets and using XSLT within a wider context -- I specifically talked about getting multiple views of the same data using XSLT
stylesheet level
the flow of processing within the application -- I talked about the differences between push and pull, and how to combine them, and about grouping by position, in hierarchies and by value (using the Muenchian Method, sketched after this list)
template level
patterns in instructions such as Wendell Piez's method for repetition and David Allouche's method for normalizing strings
XPath level
expressions for getting unique nodes, for set manipulation and for conditional XPaths, such as Oliver Becker's method (sketched below)
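
As a flavor of the stylesheet-level patterns, here is a minimal sketch of the Muenchian Method, grouping item elements by a category attribute; the element and attribute names are hypothetical, invented for the illustration:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- index every item by the value being grouped on -->
      <xsl:key name="items-by-cat" match="item" use="@category"/>
      <xsl:template match="list">
        <!-- visit only the first item in each group... -->
        <xsl:for-each select="item[generate-id() =
                              generate-id(key('items-by-cat', @category)[1])]">
          <group category="{@category}">
            <!-- ...then pull in all of that group's members via the key -->
            <xsl:copy-of select="key('items-by-cat', @category)"/>
          </group>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>

The trick is that comparing generate-id() values is a cheap test of node identity, so each group is emitted exactly once however the items are ordered in the source.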

Throughout, I talked about the way that identifying these methods can help us to identify the areas where XSLT and XPath need to be developed.
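
One of those methods deserves a sketch of its own: Oliver Becker's technique for conditional expressions, which simulates an if/else inside a single XPath 1.0 expression. Here $cond, $yes and $no are hypothetical variables standing for a boolean test and the two candidate strings:

    <xsl:value-of select="concat(
        substring($yes, 1, number($cond) * string-length($yes)),
        substring($no, 1, number(not($cond)) * string-length($no)))"/>

When $cond is true, number($cond) is 1 and number(not($cond)) is 0, so the first substring yields all of $yes and the second yields the empty string; when it is false, the roles reverse.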

XSLT Performance

We were then treated to a talk by Mike Kay that highlighted the experiences of implementers. Now at Software AG, he is a member of the XSL Working Group and another regular contributor on XSL-List, but he's probably best known as the implementer of the Saxon XSLT processor and the author of the XSLT Programmer's Reference.

Mike advised that you only need to worry about the performance of XSLT processors or stylesheets if you have business requirements demanding a certain throughput or response time, although you might also be concerned about the predictability, tuneability, or scalability of a particular stylesheet.

While he didn't specifically talk about Saxon, Mike showed the basic way an XSLT processor works: taking the XML stylesheet and turning it into a tree, 'compiling' that tree, similarly turning the XML source into a tree, and then constructing the result tree (in theory in memory, though in practice often output immediately).

Mike described the most important factors in XSLT processor efficiency: tight code, name management, XPath queries, XSLT pattern matching, pipelining, and the storage of node sets. He discussed the issues involved in constructing a node tree for XPath/XSLT processing, especially given its differences from the DOM. (XPath node trees don't include CDATA or entity nodes, and whitespace is handled differently.) He also outlined the Tiny Tree Model that he now uses in Saxon (after seeing a similar technique in Xalan), where transient objects are created from arrays as required. This gives real advantages, allowing run-time decisions about which access paths to store (for example, a node's parent pointer only needs to be stored if the stylesheet actually navigates to parents).

The areas for future optimization that implementers have barely touched yet are:

  • parallel execution, which should be possible as XSLT is side-effect free
  • compilation of stylesheets into byte code, something picked up by Morten Jørgensen in the next talk
  • global optimization of processing flow, as opposed to local optimization of XPaths
  • serial transformations, if it's possible to detect those (parts of) transformations that don't require access to the entire tree
  • exploiting XML schemas

There were some tips for users too:

  • follow good performance engineering practice: record the time a stylesheet takes before and after making each change, and change it back if it doesn't improve
  • use small documents rather than large ones
  • don't assume that the processor makes a particular optimization
  • minimize the number of visits to each node
  • use variables
  • use temporary trees (result tree fragments in XSLT 1.0)
  • use keys (illustrated after this list)
  • don't use xsl:number
  • don't bother with changes that can yield less than a 10% improvement
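
To illustrate the tip about keys, compare the two lookups below. The employee/manager structure is invented for the example, and note that the xsl:key declaration must sit at the top level of the stylesheet:

    <!-- without a key: rescans the whole document on every lookup -->
    <xsl:value-of select="//employee[@id = current()/@manager]/name"/>

    <!-- with a key: the index is built once and then reused -->
    <xsl:key name="emp-by-id" match="employee" use="@id"/>
    <xsl:value-of select="key('emp-by-id', @manager)/name"/>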

The XSLT Compiler for JVM

Morten Jørgensen, from Sun Microsystems, introduced the XSLT Compiler (XSLTC). XSLTC creates "translets": Java classes that run about 30-200% faster than interpretive XSLT processors and are usually about a quarter of the size of an XSLT processor and stylesheet. Because of their size and platform independence, these translets can run on virtually anything, including handheld machines.

With XSLTC, stylesheets can be compiled into translet bundles, each one of which contains a main class and a set of auxiliary classes for elements that require special handling. These are shipped with an XSLT runtime library, containing a tailored DOM with SAX interfaces for input and output.

For authors using XSLTC, Morten outlined a few tips. The main body of a translet is a switch statement, with each case being a particular match pattern. Authors should therefore keep match patterns simple and, in particular, avoid unioned match patterns. At an application level, developers should take advantage of the cacheability of the DOMs used by XSLTC, as XML parsing can take as much as 50% of the total processing time.
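
One way of following the advice about unions -- a hypothetical refactoring, not something Morten showed -- is to split the unioned pattern into separate templates that share a named template:

    <!-- instead of one template with match="title | subtitle"... -->
    <xsl:template match="title">
      <xsl:call-template name="heading"/>
    </xsl:template>
    <xsl:template match="subtitle">
      <xsl:call-template name="heading"/>
    </xsl:template>
    <!-- ...the shared output lives in a single named template -->
    <xsl:template name="heading">
      <h2><xsl:apply-templates/></h2>
    </xsl:template>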

XSLTC is still alpha software, but the only outstanding features needed for conformance with XSLT 1.0 are support for simplified stylesheets (where the document element of the stylesheet is not xsl:stylesheet), the namespace axis, and id() and key() functions within match patterns.
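
For reference, a simplified stylesheet is just a literal result element carrying an xsl:version attribute, like this fragment (the doc/title path is invented):

    <html xsl:version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <body>
        <h1><xsl:value-of select="/doc/title"/></h1>
      </body>
    </html>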

Building and Maintaining the DocBook XSL Family

Norm Walsh took to the stage again, this time in his role as maintainer of the DocBook stylesheets. They can be used to transform DocBook into HTML or XSL-FO and include support for XML vocabularies derived from DocBook as well as full DocBook itself. The DocBook stylesheets are a huge undertaking, currently around 1000 templates.

They are designed to be modular, to be customizable through parameterization and the use of template documents, and to have a literate programming style. They use extensions to address some of the problems.

Norm focused on the problem of supporting internationalization within the DocBook stylesheets. The main issue with internationalization is the use of different text or labels within the output. For example, in English chapters are labeled 'Chapter' while in French they are labeled 'Chapitre'. Norm has addressed this by keeping the translations for this generated text in an external XML document; a named template looks up the appropriate word and inserts it.
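
The real DocBook mechanism differs in its details, but the idea can be sketched like this, with an invented document format and template name:

    <!-- translations.xml (hypothetical format): one entry per language and keyword -->
    <translations>
      <term lang="en" key="chapter">Chapter</term>
      <term lang="fr" key="chapter">Chapitre</term>
    </translations>

    <!-- named template that looks the word up in the external document -->
    <xsl:template name="gentext">
      <xsl:param name="key"/>
      <xsl:param name="lang" select="'en'"/>
      <xsl:value-of select="document('translations.xml')
                            /translations/term[@lang = $lang and @key = $key]"/>
    </xsl:template>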

However, it's not just the terminology that differs between languages; it's also the arrangement of generated phrases, the formatting of numbers, and so on. Norm's current solution is a template string holding escape codes, such as %t, that mark where the various pieces of text should be inserted.

Norm thus separates the translation of words from the arrangement of phrases and both of these from the stylesheets themselves. This means that translators do not have to worry about learning XSLT or even DocBook in order to translate and rearrange terms. Norm admitted, though, that it does mean that translators have to learn the escape codes, and that XSLT doesn't cope with string processing of this kind very well. If he has to introduce more escape codes (there are currently three), he will try another approach.

XSLT and Databases: A Compelling Combination for Web Apps

Steve Muench, an XML Evangelist from Oracle, is a member of the XSL Working Group, a member of the team developing the XSQL Servlet, and author of Building Oracle XML Applications. The basis of Steve's talk was the equation

SQL + XML + XSLT = WOW

Earlier in the day, we'd heard from Mike Kay about the problems with XSLT applications reading in large documents. Steve's approach to this problem is to use a blend of technologies, so that the XML source fed into the transformation contains just the information that you want to see. It's a common problem for people to pull hundreds of rows from a database as XML and then use XSLT to filter them down to the few rows they're actually interested in. Steve presented XSQL as a solution to those problems.

The basis of this approach is to have data (or documents) stored within databases, and an application that both dynamically produces XML from this data and imports XML data back into the database. The XML produced from the database can then be restructured with XSLT to produce the view that's required. Oracle's XML SQL Utility gives a mapping between an Object View of the structure of the database and XML output, either through DOM or SAX. The XSQL Servlet supports this by processing requests, accessing the database, and increasing performance by pooling connections, pooling stylesheets, and caching XPaths.

These techniques and tools make up XSQL Pages, a server-side processing framework like Cocoon, but based on databases rather than file structures. XSQL Pages hold queries in a loose XML structure (actually textual SQL queries as the body of an xsql:query element). While there is a command-line utility to use them, the functionality comes to the fore when they're used with JSP. This allows the parameterization of the SQL queries through URLs, so that the same XSQL Page can be used to access different parts of the database.
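
To give a flavor of the approach -- the table, column, and parameter names here are invented -- an XSQL Page pairs a parameterized query with a stylesheet reference:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="rows-to-table.xsl"?>
    <page connection="demo" xmlns:xsql="urn:oracle-xsql">
      <!-- {@dept} is replaced by the 'dept' parameter from the request URL -->
      <xsql:query>
        SELECT ename, sal FROM emp WHERE deptno = {@dept}
      </xsql:query>
    </page>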

Steve showed many examples of XSLT being used to format the same data in different ways: as HTML tables, as new SQL queries for inserting data into different databases, and as bar charts using SVG. All together, it was a very convincing demonstration of the power of the approach.

Charlie, An XML Application Framework

The next talk was also about a framework for XML using XSLT. Petr Cimprich, of Ginger Alliance, presented Charlie as a solution to the problems of performance with server-side applications (like XSQL and Cocoon) and of portability with client-side applications (like Internet Explorer).

Charlie can sit on a client machine, acting as a kind of proxy: it takes connections from clients through handlers and translates them into actions. These actions can involve accessing information on a server through data drivers, which currently support file access, HTTP, SQL, and SOAP.

Schematron: validating XML using XSLT

Leigh Dodds, from XMLhack.com and Ingenta, and author of the XML-Deviant column on XML.com, spoke next about Schematron, a user-centered schema language that uses XPath expressions to describe the rules governing a particular XML vocabulary. Unlike grammar-based schema languages, it copes well with hard documents (such as those containing multiple namespaces) and with constraints that grammars find difficult to express (such as those governing accessibility).

Leigh talked through some of the syntax of Schematron, including the differences between assert, which checks conformance, and report, which highlights features in the XML document. Diagnostics, introduced in Schematron 1.5, give additional information to the user, and patterns group rules together to allow validation phases. Validation doesn't have to be a single process but, rather, an iterative one tied to the authoring process.
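
A small invented example shows the difference in intent between the two instructions (the namespace is that of Schematron 1.5):

    <schema xmlns="http://www.ascc.net/xml/schematron">
      <pattern name="Image accessibility">
        <rule context="img">
          <!-- assert complains when its test is false: conformance checking -->
          <assert test="@alt">An img element should carry an alt attribute.</assert>
          <!-- report speaks up when its test is true: highlighting features -->
          <report test="@width and not(@height)">Width given without height.</report>
        </rule>
      </pattern>
    </schema>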

Schematron schemas are transformed using XSLT into a particular Schematron implementation, a stylesheet that can be run with an instance document to give information about the validity of that instance document. These implementations may give user diagnostic information to help with authoring, or RDF descriptions of errors for application processing, or anything that can be produced using XSLT.

For the future, Leigh talked about integration with XML Query, access of Schematron schemas through RDDL, and its use in authoring environments.

Short Papers

There was a series of short papers and presentations. The most important of these was the announcement by Sharon Adler, co-chair of the XSL Working Group, that XSLT 1.1 is officially "on hold" so that the WG can focus on XSLT 2.0 and XPath 2.0. The Working Group's target for XSLT 2.0 is the end of the year.

Francis Norton, from iE Ltd, spoke about using schemas, in particular XML Schema, as documentation of the requirements on XML documents, and he talked about producing HTML documentation from XML schemas. He also showed how to use Schematron to encode things like intra-document constraints, and how Schematron can be integrated with XML Schema through the appinfo element, using XSLT to pull out the Schematron schema as required.

I spoke a little about Extensions to XSLT (EXSLT) and the website that Jim Fuller, Uche Ogbuji, Dave Pawson and I have set up. The aims are to standardize extension functions and elements and to provide a repository of implementations for the extensions to help implementers and authors. Anyone can get involved. See http://www.exslt.org for more details.

Ken Holman, from Crane Softwrights Ltd and a member of the OASIS committee on XSLT conformance, presented the OASIS test suite (after showing off his new Chrysler with "XML GURU" license plates). The various discretionary items within the XSLT Recommendation mean that putting together a test suite is not straightforward. Anyone can submit test cases; the committee will decide whether they are acceptable and assemble them into a normative package. Each XSLT processor implementer will submit a definition of the choices they have made on the discretionary items, and stylesheets will be used to construct a test suite configured for that particular implementation. The committee's work is currently fairly far behind schedule; for now, it is focusing on xsl:number to try out the process.

Markup Meets Middleware

The first talk of the second day was given by Wolfgang Emmerich from University College London and Zühlke Engineering AG. He talked about some work he'd done to support trading within a German bank. The existing trading system was a tangle of direct interfaces between the various systems, and the goal of the project was to introduce a common infrastructure with a central hub managing communication between them.

Wolfgang described the integration issues on two levels -- the logical integration of information (how to gain a common data format), and the reliable transfer of data across the system. Wolfgang comes from a CORBA background, and in the initial phase of the project, they experimented with both an IDL/CORBA and an XML/XSLT solution to the problem. However, the IDL/CORBA approach had a number of drawbacks:

  • it is hard to accommodate change when using IDL because of the dependencies between different parts of the representation
  • CORBA may not be a standard technology in two years' time
  • the mappings between different data formats are hard, and need to be done by IDL specialists rather than the people who know the business rules

Thus they decided to use XML/XSLT as the basis of the system. They drew upon several existing standard DTDs, such as FpML and FixML, with transformations from the proprietary formats used by the bank's systems into the common FixML. The XSLT was supported with extension functions to carry out conversions and validations, especially of dates. The system was required to support 100,000 transfers a day, with each trade delivered within 10 seconds. They originally stored the mappings between values in XML files, but this proved too slow, so they moved the mappings into a database. They also used cached, compiled stylesheets to improve the speed of the solution.

Wolfgang finally gave a useful summary of the real world benefits of XML/XSLT. They had originally been worried about using open source XSLT processors in a real world system but have found them to be very good quality. They estimate that the current solution is four times more cost efficient than the original system, and that there will be a complete return on investment by the end of the year.

Using XSLT to Derive Schemas from UML

The next speaker was Mario Jeckle from DaimlerChrysler Research and Technology. As a representative of a car company, he justified his presence at an IT conference by pointing out that IT is critical in car development. The central theme of Mario's talk was that developing XML and XML schemas should be transparent to users. If XML is the next ASCII, then users shouldn't have to care about the fact that they're using it.

Mario pointed out several problems that need to be addressed when developing XML vocabularies. They need to be flexible enough to accommodate change, quick to develop, coherent with legacy systems, accurate, well enough styled to be usable, integrated with other systems, and reusable. As an approach, Mario discussed taking UML diagrams and turning them into XMI, an XML vocabulary designed to represent UML diagrams. This XMI can then be transformed -- using XSLT, naturally -- into XML Schemas.

Some of the talk was spent outlining the syntax of UML, which is a standard graphical modeling language that is supported by many products but has no standard textual, portable representation. XMI was developed as an interchange format for UML, and it can be used as a basis for code generation, model assessment, and checking modeling guidelines, using the information within the UML models in a programmatic way.

However, UML is not a static standard. The developers of XMI needed an easy way to keep the XMI schema synchronized with UML as it developed. Fortunately, the structure of UML can be described in UML itself, as meta-level models known as the MOF. Therefore, given an automated means of transforming a UML model into an XML Schema, the developers of XMI could automate the generation of the XMI schema from the MOF.

Unfortunately, Mario didn't have much time to go into the technical details of the generation of schemas from UML models, but it seemed to be a fairly straightforward process. The transformation they use supports UML data types but also allows XML Schema data types to be used within UML models. One issue with UML models is that they don't have a distinct starting point for navigation around the model, whereas XML documents have to have a single document element. For this reason, the XML Schemas the transformation produces allow arbitrary nesting.

Meaning Definition Language

The advertised speaker for the next slot was Ben Robb of cScape Strategic Internet Services Ltd, but he was unable to speak due to work commitments. Instead, Robert Worden of Charteris spoke on the Meaning Definition Language (MDL) as a means of indicating the semantics of an XML vocabulary. The aim of MDL is to support meaning-level queries, automated XML transformation, a meaning-level API, and the Semantic Web in general. It works by associating particular nodes within an XML vocabulary with a meaning-level description of the domain, such as a UML class model (represented in XMI), RDF, or DAML+OIL (an RDF-based XML vocabulary for representing ontologies).

Robert discussed the mappings between XML vocabularies and the underlying conceptual model that they represent. Generally, instances are represented as XML elements, although there can be a conditional aspect to the mapping, and not all instances in a particular domain will be represented within an XML document. Similarly, properties are usually represented as attributes, but it's important to identify the object that the attribute is a property of.

Associations between instances are represented in many different ways within XML vocabularies. They can be represented through ID/IDREF pairs, through element nesting, or through what Robert called overloading, where several instances are bundled together within another element that represents the association.

The MDL encodes these mappings and can be embedded in a schema, such that for each XML vocabulary of a particular domain, there is a mapping between it and a common conceptual model. MDL's benefits arise when users phrase queries in terms of the meaning of the information they wish to retrieve rather than having to know about a particular XML vocabulary. For transformations, it is possible to map between any two vocabularies via the conceptual model rather than having to design separate transformations for each pair of vocabularies.

XSLT as a Query Language

Next to take the stand was Evan Lenz of XYZFind Corp, who spoke about the XML Query work and the correspondence between it and XSLT. The XML Query work at W3C has produced a number of documents: requirements, use cases, a data model, an algebra, and a syntax known as XQuery.

Most of the requirements are very similar to those of XSLT. XML Query should be declarative, have closure, have an XML syntax, perform transformations, have a human-readable syntax (which may or may not be the same as the XML syntax), operate on multiple documents, enable references between those documents, and use XML Schema information. Evan argued that this last requirement is the only one that XSLT, as it currently stands, does not satisfy.

Evan went on to describe some of the differences between XML Query and XSLT. The XML Query data model operates on the post-schema validation infoset (PSVI) and includes both ordered and unordered forests. The algebra is strongly typed, which enables static analysis, optimization and composable queries. XQuery uses XPath, as XSLT does, although it uses a restricted form that only allows the abbreviated syntax, rather than the full flexibility of axes. XQuery has similar constructs for most of the instructions in XSLT, but it doesn't have templates, instead having user-defined functions. Also, everything in XQuery is an expression.

The majority of the talk was spent going through some of the XML Query use cases, examining the XQuery solution and comparing it to the XSLT solution. In the main, these served to underline the question "What does XQuery do that XSLT doesn't do already?", and while Evan may have been preaching to the converted about the power of XSLT, it is interesting to highlight the features introduced by XQuery as these are areas that XSLT may address in its next incarnation:

  • RANGE operator in predicates to get nodes between certain positions
  • dereference operator (->), operating in a similar way to id() but with an implicit name test
  • DEFAULT namespace declarations, which specify the default namespace to be used when interpreting unprefixed names in XPaths
  • functions such as avg() and distinct()
  • BEFORE and AFTER operators to get those nodes that occur before or after another node
  • SOME and EVERY operators that make explicit whether a comparison needs to be true for just one node in the node set, or for all of them
  • filter() function that copies all nodes aside from those in a second set
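
To see why the question arises at all, consider an invented version of a typical use case -- "return the titles of books costing under 50" -- which is already a one-template stylesheet in XSLT:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <cheap-books>
          <!-- bib.xml and its book/price/title structure are invented -->
          <xsl:copy-of select="document('bib.xml')//book[price &lt; 50]/title"/>
        </cheap-books>
      </xsl:template>
    </xsl:stylesheet>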

Evan raised some issues about whether XSLT could be used as a query language, in particular whether the use of full XPaths made optimization difficult, and the problem of the way that the XSLT built-in templates are set up to dump out the text of a document by default. But he rounded off by countering Steve Muench's assertion that "SQL + XML + XSLT = WOW" with the statement that just XML + XSLT = WOAH BABY!

Explaining XSL Formatting Objects

The final talk of the morning was given by Arved Sandström of e-plicity, the release coordinator of FOP, a formatting-objects-to-PDF renderer. Arved talked through the XSL 1.0 Candidate Recommendation and the purpose of formatting objects. The anticipated use of formatting objects is as a final, unchangeable format, produced by an XSLT stylesheet and rendered mainly as PDF, but also possibly as PostScript, PCL, MIF, or RTF.

Arved made the distinction between two different types of document: content-driven documents, such as books, and layout-driven documents, such as newspapers. Formatting objects are currently limited to a single flow with simple page masters, which means that certain things, such as marginal notes, are difficult (or ugly) to achieve using XSL-FO.

Using XSL-FO involves taking the result tree from an XSLT transformation, which comprises a number of elements and attributes in the XSL-FO namespace, and objectifying these into a formatting object tree. This tree is then refined to give the layout, which is in turn converted to an area tree which can be rendered. The XML elements have attributes, which are objectified into properties on the formatting objects, and finally traits on the area tree. These traits describe constraints on the layout, such as leeway on hyphenation or line or page breaking, which means that different renderers may render the same set of formatting objects in different ways.

Arved went through the formatting object basics. XSL-FO documents consist of an initial section which describes the page masters, followed by a number of page sequences. There is currently only one kind of page master, a simple page master, which has a central region surrounded by a header, a footer and left and right margins. Within these areas there are block and inline areas. XSL-FO has good support for lists and tables, supports references such as page numbers and citations, doesn't have support for tables of contents or indices, but does have markers. For electronic versions of documents, there are formatting objects that support links and dynamic display. Finally, XSL-FO supports floats and footnotes and allows you to incorporate external graphics or in-stream foreign objects such as snippets of SVG or MathML. Arved mentioned that he was using a left-to-right, top-to-bottom, Western view of documents when describing these terms, and that actually the XSL-FO Recommendation is a lot less Western-oriented.
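
A minimal XSL-FO document -- a hypothetical hello-world, using the attribute names of the final Recommendation -- shows the page-master/page-sequence split that Arved described:

    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
      <fo:layout-master-set>
        <!-- one simple page master: an A4 page with a single body region -->
        <fo:simple-page-master master-name="A4"
                               page-height="29.7cm" page-width="21cm"
                               margin="2cm">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <fo:page-sequence master-reference="A4">
        <fo:flow flow-name="xsl-region-body">
          <fo:block font-size="18pt" space-after="12pt">Chapter 1</fo:block>
          <fo:block>Ordinary paragraph text flows into the region body.</fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>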

There is a bit of a barrier to using XSL-FO: people need to have information in XML, be able to write XSLT stylesheets to convert it into XSL-FO, and have some expertise in page layout in order to make best use of the formatting objects. In a web publishing environment, such as when using FOP with Cocoon, rendering large documents can take a long time. Arved talked about several possible remedies: piping streams of rendering information rather than using a batch process; delivering XML and XSLT to the client, so that the rendering can be carried out where there is relatively more free processing power; and tying renderers more closely to XSLT processors, so that information can be piped between them or at least passed as a DOM rather than through a file. He also pointed out that XSL-FO and CSS are very similar, with CSS being more oriented towards web publishing while XSL-FO is suited to printing, and he speculated that implementations may be able to support both with a fair amount of code reuse. For XSL 2.0, Arved anticipates general regions and more internationalization support.

The RenderX approach to XSL formatter design

Still on the topic of XSL-FO, David Tolpin from RenderX spoke next about the design of XEP, an XSL-FO renderer. David first talked through FORM, the parser used by XEP, which parses the XSL-FO, validates it, retrieves images, expands shorthands, calculates properties, and adjusts the tree structure as required.

The kernel of XEP is FO2PDF, which takes the normalized XSL-FO tree and converts it into PDF. The FO tree is interpreted as a stream of events; certain objects, such as lists, tables, or out-of-line floats, are managed through several separate streams, which are linked together at critical points. There are exceptions that take control away from this main stream, for things such as footnotes, page numbers, or keeps and breaks. Rendering constraints can clash with each other, which means that a renderer often needs to backtrack to attempt to satisfy them. XEP avoids backtracking as much as possible by constantly keeping track of the last point at which a page break was possible.

The result of the layout process is converted into a pragmatic internal vocabulary, which can be saved and then processed later by one of the output producers available with XEP. The XEP output producers include XML, PostScript, and three PDF producers.

David talked a little about the particular problems that the RenderX team had faced in developing XEP, such as managing nested tables and repeated table heads. He discussed their approach to footnotes, where the content of the footnotes is rendered backwards, to tell how much space they use up, and then reversed in the final page. As far as performance is concerned, David pointed out that parsing the XML is the slowest part, and that this could be alleviated by linking directly to the XSLT processor producing the XSL-FO or by reducing the default attribute set that is specified in XSL 1.0. Speed is a big issue for the RenderX team; the features included in XEP are assessed primarily according to the speed hit that they would incur, rather than their conformance to the specification.

Experiments with XSLT With Topic Maps

For the last talk of the conference, Ken Holman appeared again, this time to talk about the use of XSLT with Topic Maps. Ken set himself a number of goals: to render topic maps automatically, to navigate using them, to render different topics in different ways, and to merge topic maps -- all using XSLT.

Topic Maps express navigational meta-information about topics. Ken introduced the idea of Topic Maps by comparing them to glossaries, where each term might reference other terms, and thesauri, which contain synonyms and antonyms, as well as broader and narrower terms.

The results of Ken's experiments are a number of stylesheets based on version 0.2 of the XTM (XML Topic Maps) standard, which is now out of date. Ken constructed a navigation tool in which different topics have different looks, inherited from further up the topic hierarchy. The associations between a topic and the stylesheet for that topic are themselves represented within a Topic Map. The final set of HTML pages for the Topic Map is generated through a two-step process: the first step creates a set of stylesheets and a batch file; running the batch file then generates the HTML.

Ken went through a number of the lessons that he learned while authoring these stylesheets. This was his first experience of using namespaces within stylesheets, and he was caught by the fact that XPaths are not interpreted using the default namespace. He also found that using xsl:copy rather than literal result elements is a good way to keep namespace declarations under control. As he was authoring extensible stylesheets, he recommended the use of namespaces for named templates. Ken demonstrated how to use the Allouche method to eliminate unwanted whitespace, and how to use xml:space within the stylesheet to preserve the indenting scheme he wanted rather than that imposed by the XSLT processor.
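
The namespace gotcha is worth a sketch: XPath 1.0 names without a prefix match only names in no namespace, so templates for a namespaced vocabulary must use a prefix even when the source document declares a default namespace (the XTM namespace URI below is illustrative):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:xtm="http://www.topicmaps.org/xtm/1.0/">
      <!-- match="topic" would never fire against namespaced XTM input -->
      <xsl:template match="xtm:topic">
        <xsl:apply-templates/>
      </xsl:template>
    </xsl:stylesheet>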

Two tips that I hadn't seen used before involved using terminating messages to prevent stylesheets being used in inappropriate ways. In stylesheets that were designed to be imported rather than used as the main stylesheet, Ken included a template matching the root node that gave a message indicating that the stylesheet should not be used in that way (naturally this assumes that the importing stylesheet also has a template matching the root node). He used a similar technique to check that the stylesheet was being used on the correct type of document: one template matching the (named) document element, and another, more general, one matching any document element and reporting an error. Ken also raised the possibility of allowing xsl:sort and xsl:key to take content, so that the sort value or key value could be calculated using XSLT rather than being limited to an XPath expression.
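
The first of these guards might look something like the following sketch; because an importing stylesheet's templates take precedence over imported ones, the importing stylesheet's own root-node template silently overrides the trap:

    <!-- in a module intended only for import: fail fast if run directly -->
    <xsl:template match="/">
      <xsl:message terminate="yes">
        This stylesheet is a library module; import it from a main stylesheet.
      </xsl:message>
    </xsl:template>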

Conclusions

The XSLT UK '01 conference was a very enjoyable opportunity to get to know the people behind the names on XSL-List and to be brought up to date with some of the advances and developments in the fields of XSLT and XSL-FO. I'm sure all who attended are looking forward to the next XSLT UK conference, whether it's held in 6 months or in a year.

Many thanks are due to Sebastian Rahtz and Dave Pawson for organizing it. The conference was sponsored by on-IDLE, who kept a modest profile during the proceedings. 75% of the profits from the conference will be going to local charities in Oxford.