April 27, 2005
1060 NetKernel is a software infrastructure, but any label I attach to it will generate misconceptions unless I first describe its origins and essential architectural principles.
Where did it come from? Back in 1999 I was leading a small team of researchers at Hewlett-Packard Labs. We were exploring e-payment and e-contract systems — even back then we could see that the communicability of XML was going to transform business systems, and that XML message exchange was the route to interoperability. So while the systems we were developing were in a vertical domain, they were characterized by XML messages exchanged over the wire and bound to procedural objects.
We discovered two things pretty quickly.
- The X in XML really delivers. Businesses change very frequently and therefore, so must business messages. Using XML as the extensible messaging format gave us great freedom to adapt to the business imperative to change.
- Binding XML to procedural code destroys most of the flexibility promised by the XML. Procedural code is innately brittle with respect to changes in the business data model.
The net result was that we could craft fairly sophisticated large-scale systems, but the cost of creating, maintaining and changing these systems imposed a large economic threshold below which costs outweighed benefits. We'd found a glass floor. In addition, our own research experiences were echoed by external feedback from colleagues working on RosettaNet (still the largest XML messaging system); XML was cheaper than EDI but not cheap enough.
To put these experiences another way, and in empathy with the large majority of enterprise developers: writing procedural code to process XML is hideously painful — a pain that just won't go away!
It felt like we'd stumbled over an interesting challenge. A challenge that had an historical parallel; replace "XML" with "RDBMS" in the statements above and you have a picture of the 1970s relational database before the introduction of the declarative SQL abstraction. Aha. Could we find an abstraction that encompassed the XML technology set and which decoupled the XML machinery from the XML process?
This introductory preamble is retrospective and, hopefully, has a coherence that didn't exist when we set out to solve the problem. Our first experiments were to prototype XML pipeline frameworks. In many respects these were very similar to Cocoon, Orbeon, SXPipe, etc.: essentially a declarative language runtime which executed a sequence of XML operations as a linear pipeline (see the HP tech report on Dexter1 and Dexter2).
We made mistakes and threw more than one away, but the process of experimentation allowed us to understand the requirements for a general solution.
Language Diversity. It rapidly became clear that a single language runtime is too limited for general applications. There have been several proposals for XML pipeline languages. In our test applications we discovered that, as a minimum, we needed both a linear-flow language and a recursive tree-composition language. However, we'd also found that while declarative languages are excellent for rapid assembly of XML operations, they are terrible for expressing business logic and logical flow-control; it's far easier to use a procedural language for control. In general we felt that language diversity was a necessity and would enable design-pattern diversity: object-oriented, linear-flow, functional-composition, and so on.
XML Object Model Diversity. We built both SAX and DOM based engines, and both had limitations. SAX is efficient but difficult to use for custom components. DOM can be unwieldy and also has API limitations. New generations of object models have been, and will continue to be, developed as XML matures. It was clear that with XML object models, just like languages, one size did not fit all; we needed to allow for appropriate choice, integration of legacy code and future extensibility.
Intrinsic Caching. To manipulate XML it must first be parsed into an in-memory form. Parsing is expensive, so if a resource is used frequently its in-memory form should be available from a cache. XML pipelines create lots of intermediate results, and frequently these are reusable. Caching results, both intermediate and final, massively improves performance.
Libraries. Any general purpose processing environment scales efficiently when code reuse is encouraged. It was apparent that support for libraries of XML pipeline assemblies was essential for maintenance and scalability. It was also clear from our experience with vertical applications that domain-specific XML libraries would be an inevitable requirement.
Dynamic Discovery. XML technologies are many and varied; back then it seemed like there was a new one announced every day. In order to use a technology in a high-level pipeline it has to be named and located. We tried using declarative registries, but realized that this led to management and maintenance complexity. We wanted dynamic discovery of components so that XML pipelines could rely on the environment to resolve the implementation. This would also enable hot-update and install, which is essential for production systems.
Exceptions. To be reliable and transactional, a processing model must enable exception management. Many of the XML technologies do not have an intrinsic exception model. It was clear that the environment had to extrinsically provide uniform exception management across any assembly of XML technologies.
Debugging. It is very difficult to develop reliable software without the ability to stop and inspect the execution of the code. XML is generally very hard to debug at the object model level simply because the XML is dispersed through a collection of linked objects. It was essential that an XML pipeline support breakpoints and dynamic inspection.
Over a number of years of investigation, we began to understand that "XML pipeline" was not an adequate expression of what we were encountering; we were specifying the requirements for "XML processes", a process being defined as "the execution of a set of operations that is not linearly predetermined but is dynamically evaluated based upon the input data to the system" — or, and this is critical — "by exceptions."
Back to First Principles
We had a specification, but did the XML technologies have a unifying core foundation? For example, the technologies have an implicit relationship to the Web and are somewhat informed by Roy Fielding's description of the REST architecture.
It seemed pretty clear that the URI is the axiomatic starting point from which to develop an abstraction. The URI is the common factor that underpins the XML technologies and provides the foundation of the Web model. From this follows the simple idea that a resource may be requested using a URI: the first principle of the Web.
What if software components were treated as URI-addressable services and invoked by making Web-like URI requests?
What is NetKernel?
1060 NetKernel is the logical extrapolation of the simple idea of using URIs to dynamically locate and invoke software components.
NetKernel manages a dynamically populated virtual URI address space, composed by linking modules. Modules may expose software services and resources on their public URI interface and have a protected internal URI space which itself may consist of local and imported address spaces. The NetKernel URI address-space is analogous to the way Unix abstracts multiple file-systems into a uniform, logical file-system. The Unix abstraction treats everything as a file. In NetKernel everything is a URI-addressable resource.
At NetKernel's heart is an asynchronous process scheduler which manages processes and low-level thread allocation. Most importantly, the kernel is a dynamic URI-resolver. URI requests issued to the kernel are resolved through the NetKernel URI address space where they ultimately connect with code. Requests are qualified by REST-like verbs; we have generalized from HTTP (GET, PUT, POST, etc) to a set of application protocol independent verbs: SOURCE, SINK, DELETE, EXISTS and NEW.
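As a toy illustration of this resolution model, the sketch below maps (verb, URI scheme) pairs to handlers and dispatches requests to them. All class and method names here are invented for the example; this is not the NetKernel API.

```python
# Illustrative sketch only: a toy "kernel" that resolves URI requests to
# handlers, generalizing HTTP verbs to SOURCE/SINK/DELETE/EXISTS/NEW.
class ToyKernel:
    def __init__(self):
        self._handlers = {}  # (verb, scheme) -> callable

    def register(self, verb, scheme, handler):
        """Expose a handler for a verb within a URI scheme's address space."""
        self._handlers[(verb, scheme)] = handler

    def issue(self, verb, uri):
        """Resolve the URI through the registry and execute the handler."""
        scheme = uri.split(":", 1)[0]
        handler = self._handlers.get((verb, scheme))
        if handler is None:
            raise LookupError(f"unresolvable request: {verb} {uri}")
        return handler(uri)

kernel = ToyKernel()
# A trivial service: SOURCE on a data: URI returns the literal payload.
kernel.register("SOURCE", "data", lambda uri: uri.split(",", 1)[1])
print(kernel.issue("SOURCE", "data:text/plain,hello"))  # -> hello
```

The point of the sketch is only that resolution is dynamic: handlers can be registered or replaced at runtime, which is what enables hot-update of services.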
So, what is NetKernel? NetKernel is the URI addressing model of the Web combined with a Unix-like kernel. We sometimes describe NetKernel as a REST microkernel and, if we get ahead of ourselves, as a "virtual operating system".
This all sounds tremendously complicated. Well, it can seem that way when taken all at once, but in fact what emerges from this abstraction is a very simple, self-consistent way to treat software components as services. The URI address space provides the same horizontal and vertical scaling properties as the Web and results in general software systems which inherit Web-like adaptability and tolerance to change.
Executing a Service
To date, the Web model has not presented an intrinsically programmable environment, although frequently resources are dynamically generated (hence technologies such as CGI, Servlets etc). NetKernel is a Web-like environment in which the URI address space can be treated as an executable program, but to accomplish this we have to think about URIs differently.
We have created a new URI scheme called the "Active URI", which is fully compliant with the IETF URI specification. An active URI consists of a base part followed by any number of named arguments, each of which is itself a URI. Here's an example:

active:xslt+operator@mytransform.xsl+operand@mydocument.xml

This URI uniquely expresses the XSLT transformation of mydocument.xml by the stylesheet mytransform.xsl. (Don't worry about the operator/operand names; these are simply a convention we have adopted as a convenient way to retain a uniform interface for common services.) When a SOURCE request for this URI is issued to the kernel, it resolves the URI through the address space, locates and executes the software component which performs XSLT transformation, and the result is the transformed document.
Since a named active URI argument is also a URI, it could be another Active URI (escaped appropriately). The active URI, in combination with the local NetKernel environment, is a functional program. Issuing an active URI to the kernel results in the lazy evaluation of the program to compute the resource which it expresses.
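This composition can be sketched as a small URI builder. The `active:base+name@value` layout follows the convention described in the text, but the helper below, and its use of standard percent-encoding for nested URI arguments, is an illustrative assumption rather than the normative escaping rules of the Active URI draft.

```python
from urllib.parse import quote

def active_uri(base, **args):
    """Compose an active URI from a base service name and named URI arguments."""
    parts = [f"active:{base}"]
    for name, value in args.items():
        # Percent-encode the argument so a nested URI survives intact.
        parts.append(f"{name}@{quote(value, safe='')}")
    return "+".join(parts)

# A simple transform request...
inner = active_uri("xslt", operator="mytransform.xsl", operand="mydocument.xml")
# ...which, being itself a URI, can be the argument of another service:
outer = active_uri("tidy", operand=inner)
```

Because each composed URI is a value, nesting them is exactly function composition: issuing `outer` would lazily evaluate `inner` first.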
A direct consequence of the Web-like properties of the NetKernel abstraction is that the active URI is the unique "vector" to the computed result, so just like the Web we can cache the resource under its URI key. In fact every resource in NetKernel, even an intermediate computational result, has a unique URI and has the potential to be cached.
Since all resources are obtained through the kernel, the dependency chain of every resource is known. This allows the NetKernel cache to be "dependency-aware" such that changes to any given resource are automatically propagated to invalidate computed dependents.
The NetKernel URI address space, in conjunction with the dependency model, provides a computational environment in which caching is systemic, transparent, and fine-grained. This has very significant consequences for performance but also has a dramatic impact on the elegance of application design patterns.
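A minimal sketch of such a dependency-aware cache, assuming a simple recursive invalidation strategy; the class and method names are illustrative, not NetKernel's implementation.

```python
class DependencyCache:
    """Cache resources under their URI keys, recording what each was derived from."""

    def __init__(self):
        self._values = {}      # uri -> cached representation
        self._dependents = {}  # uri -> set of uris computed from it

    def put(self, uri, value, depends_on=()):
        self._values[uri] = value
        for dep in depends_on:
            self._dependents.setdefault(dep, set()).add(uri)

    def get(self, uri):
        return self._values.get(uri)

    def invalidate(self, uri):
        # Evict the resource and, transitively, everything computed from it.
        self._values.pop(uri, None)
        for dependent in self._dependents.pop(uri, set()):
            self.invalidate(dependent)

cache = DependencyCache()
cache.put("file:mydocument.xml", "<doc/>")
cache.put("active:xslt+operand@file:mydocument.xml", "<html/>",
          depends_on=["file:mydocument.xml"])
cache.invalidate("file:mydocument.xml")  # the transform result is evicted too
```

The key property is that the cache, not the application, tracks the dependency chain, so invalidation propagates without any application code.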
Anything above a first-order active URI is really too difficult to work with directly. Fortunately, you don't have to. Services are composed into processes and applications by using higher-order programming languages which abstract the underlying URI infrastructure. On NetKernel, a language runtime is just another type of service — a service for executing code for a given language. In turn, the code may execute other services by issuing further URI requests.
In addition to procedural languages, we have written two declarative language runtimes. The first, Declarative Process Markup Language (DPML), is a very simple XML syntax for constructing active URIs; the DPML runtime dynamically compiles the XML syntax to a functional, active URI program.
Our other declarative language is XML Recursion Language (XRL). XRL is like XInclude with services: inclusion references fire service invocations into the URI address space in order to recursively compose an XML document. XRL is an elegant and powerful way of building XHTML applications. You can think of a rendered Web page as the plan view of an XML tree; using XRL, the page elements may be recursively sourced from fine-grained services. The result from each service (you could think of these as micro-servlets) may be cached with its individual dependency associations. Fine-grained caching means that any given page may be composed from document fragments with independent life spans, the net result of which is that overall computation time is always at a local minimum.
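The recursive-composition idea can be sketched in a few lines. Note that the `{include:uri}` placeholder syntax and the `services` table below are invented for the example; they are not XRL's XML syntax.

```python
import re

# A stand-in for the URI address space: URI -> service result.
services = {
    "page:header": "<h1>Hello</h1>",
    "page:body": "<p>{include:page:footer}</p>",  # results may include further references
    "page:footer": "<small>bye</small>",
}

def compose(template):
    """Expand include references by invoking services, recursing into each result."""
    def resolve(match):
        return compose(services[match.group(1)])
    return re.sub(r"\{include:([^}]+)\}", resolve, template)

html = compose("<div>{include:page:header}{include:page:body}</div>")
```

In a real deployment each fragment would be served from the dependency-aware cache, so only the fragments whose sources changed are recomputed.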
Since language runtimes on NetKernel are simply services, it is very easy to add new ones, even application-specific languages such as custom workflow. We anticipate that there is the potential for several new languages that will intimately reflect the underlying NetKernel abstraction. We are certainly not satisfied that DPML or XRL have yet found the limits.
Finally, we can return to the original motivating problem: can the economics of XML processing be made to add up?
NetKernel provides a wide range of XML technologies as a set of modular service libraries. At the last count this was more than one hundred: from XSLT (v1.0 and 2.0), XQuery, and validation, to XML spell-checking, XHTML tidiers, SOAP message slicing/dicing tools, and SVG renderers.
Placing the XML technologies behind service interfaces encapsulates the API complexity and makes it very simple to compose genuine XML applications. Even more importantly, it provides an environment which is easily, even dynamically, reconfigurable to tolerate changes to the message syntax.
As an example, here's a DPML process to execute the XSLT transform we talked about earlier:

```xml
<idoc>
  <seq>
    <instr>
      <type>xslt</type>
      <operator>mytransform.xsl</operator>
      <operand>mydocument.xml</operand>
      <target>this:response</target>
    </instr>
  </seq>
</idoc>
```
When executed by the DPML runtime, the single instruction (<instr>) in this script will be compiled down to the active URI shown earlier. DPML is a service composition assembly language. Within DPML the this: URI address space is special; this:response is the resource which is returned by the DPML process.
Here's the same program in Python using the NKF API:
```python
# Create the URI request for the XSLT service
request = context.createSubRequest("active:xslt")
request.addArgument("operator", "mytransform.xsl")
request.addArgument("operand", "mydocument.xml")
# Synchronously issue the request to the kernel for evaluation
result = context.issueSubRequest(request)
# Create and issue the response
response = context.createResponseFrom(result)
context.setResponse(response)
```
You will have noticed that neither of these processes has any need to understand the object model used by the underlying service. That is because NetKernel dynamically performs object model translation (this is just another service). It is possible to write service implementations with SAX, DOM, JDOM etc. A higher order process does not have to care; the kernel ensures that the required resource is provided to the appropriate service at the appropriate time. A consequence of this abstraction is that XML parsing and serialization are encompassed and are themselves simply services.
You can think of NetKernel service composition as analogous to a generalization of the Unix-pipeline model, but since NetKernel exists on top of a Java virtual machine environment we are not limited to directing binary streams and can pass around higher order representational objects (streams or otherwise). When there is a mismatch (service A requests service B but does not pass the required resource instance) the process will gracefully fall back to binary streams, the lowest common denominator, and simply continue.
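A toy sketch of this mediation-with-fallback idea, assuming a converter registry keyed by (from-type, to-type) pairs; the function names are hypothetical and unrelated to NetKernel's actual machinery.

```python
import xml.etree.ElementTree as ET

converters = {}  # (from_type, to_type) -> converter function

def register(from_type, to_type, fn):
    converters[(from_type, to_type)] = fn

def provide(value, wanted):
    """Give a service the representation type it asked for."""
    if isinstance(value, wanted):
        return value
    direct = converters.get((type(value), wanted))
    if direct:
        return direct(value)
    # No direct converter: fall back via a byte stream, the lowest
    # common denominator, by serializing and re-parsing.
    as_bytes = converters[(type(value), bytes)](value)
    return converters[(bytes, wanted)](as_bytes)

register(ET.Element, bytes, ET.tostring)
register(bytes, ET.Element, ET.fromstring)
register(str, bytes, lambda s: s.encode())

doc = ET.fromstring("<doc/>")
same = provide(doc, ET.Element)          # already the wanted type, passed through
parsed = provide("<x/>", ET.Element)     # no direct str->Element converter: via bytes
```

The calling service never inspects the incoming model; it simply states what it needs, which is what lets SAX, DOM, and JDOM implementations coexist.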
NetKernel is a symmetric peer. It is both client and server. On the server side it supports pluggable transports including HTTP (REST), JMS, SOAP 1.1/1.2, POP, IMAP, and runs as a self-contained standalone application server. It also has an embeddable API and so can be deployed as a "co-processor", for example, within an existing J2EE application server. As a client it provides library services for HTTP (REST), SOAP 1.1/1.2, JMS, SMTP, RDBMS/SQL, etc.
A transport on NetKernel is analogous to a device driver; its job is to receive external events and map them to service executions in the NetKernel URI space. It is simple to add new transports. As an example, we have written a "particle simulator" GUI application in which GUI events are issued as URI service requests to a service-oriented simulation model. The results of the model are then rendered in the GUI.
As food for thought, another way to think of NetKernel is as a Web browser with pluggable rendering surfaces.
We started this article with a discussion of our early experience of building XML systems. We developed NetKernel to solve that problem. Along the way we discovered something quite general. Today we think of the XML technologies as being like the lexical (line-based ASCII) tools which every flavor of Unix provides (sed, awk, grep, etc.). To NetKernel, XML is the highest common denominator data type: it is flexible and, when combined with atomic service-based technologies, very powerful. But NetKernel has no dependence on XML. You may create your own application-specific object models, or add other standard data models. For example, it would be very straightforward to add an RDF tool set.
NetKernel is different but it is not a theoretical concept. It is used in real, large-scale production systems today. NetKernel is available under a dual-license model. If you keep it on the open-source commons it is free to use. The full system with source is available for download.
Unfortunately, this article is necessarily brief. I intend to describe the new design patterns which stem from programming in the URI space, and practicalities such as exception handling, transactions, and URI-space breakpoint debugging, in a future article.
HP Labs Technical Report on the Dexter project. http://www.hpl.hp.com/techreports/2004/HPL-2004-23.html
W3C XML pipeline language note. http://www.w3.org/TR/2002/NOTE-xml-pipeline-20020228/
REST is described in Roy Fielding's dissertation. http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
IETF Active URI Internet Draft. http://ietf.mirror.netmonic.com/draft-butterfield-active-uri-01.txt
"Why Functional Programming Matters", J. Hughes, 1984. http://www.md.chalmers.se/~rjmh/Papers/whyfp.html
Trimondo B2B portal, a joint venture of Deutsche Post and Lufthansa. http://www.1060research.com/buzz/case_trimondo.html
Further information on 1060 NetKernel, including downloads of the full open-source system. http://www.1060research.com/netkernel/