
A Web of Rules

October 23, 2003

Kendall Grant Clark

How the Semantic Web Will Really Happen

Whether Tim Berners-Lee's idea of a public Semantic Web -- what Jim Hendler, in his invited talk at ISWC 2003, called a "web of semantics" -- ever becomes a concrete reality is still an open question; it's also still an interesting one. No matter what its eventual resolution, this idea has stimulated public investment in an interesting, burgeoning field of scientific inquiry. It did not create that field out of whole cloth, of course, since AI researchers have been interested in the Web for as long as there has been a Web. A distributed, decentralized hypermedia system is just too rich with possibility for AI folks to ignore. Together with industry, the W3C and other standards bodies, and the web-hacker community, the academic research community is working hard to redeem some of the promise of AI web technology, if not yet the public Web itself.

My view, sustained by an admittedly simplistic analogy to the way the Web itself developed, is that if the Semantic Web is to happen, it will be because of a loosely coupled collaboration between three communities: the academics, the industrialists, and the hackers. This view gives me some pain, however, since the hacker community (by which I mean people who develop open source software for fun and for profit) is perhaps the one least engaged in the Semantic Web effort.

Before I alienate friends and fellow travelers, let me explain myself. There are some obvious inflection points at which hackers are engaged with the Semantic Web; these points include FOAF, RDF, RSS 1.0, and N3. By and large, however, the hackers are not engaged with the Semantic Web effort and, more to the point, it hasn't yet generally ignited their technical imagination. Most notably, the LAMP (Linux, Apache, MySQL, [Perl, Python, PHP]) crowd has not yet bought into the animating ideas of the Semantic Web. That's a problem which deserves some thought.

What makes me think, you may be asking yourself, that the hackers and the LAMP crowd will ever work on the Semantic Web effort? After all, the open source world isn't exactly a hotbed of knowledge representation, formal reasoning, and logic programming. Ah, dear and gentle reader, I'm glad that I made you ask yourself that question, for now I can deploy my simplistic analogy with the Web itself. Before the Web, the free software world -- as it was called back then -- was, first, considerably smaller. As others have noted, the Web was an enabling technology for the hackers as much as the hackers (by creating the LAMP platform) enabled the Web itself. But, second, before the Web the free software world was hardly a hotbed of relational database, hypertext, and document markup technology. The Web was the motivation for an entire generation of hackers to learn SQL, SGML, XML, and so on.

It's not much of a leap to think that the promise of the Semantic Web may fuel a new generation of hackers to learn RDF, OWL, and rule systems. I anticipate that, at some point, we will talk about, say, an RORB (RDF, OWL, Rules, Bayes) platform for Semantic Web development.

The aforementioned inflection points are an ideal starting point. Hackers love FOAF. RSS 1.0 has a loyal following; it's almost certainly still the most widely deployed RDF data application, though perhaps not for much longer. There are other possible points of inflection as well, including OWL and Bayesian classification systems. Hackers also love Bayesian systems, largely because they're good at stemming the tide of spam.

A Web of Rules

I won't say much about OWL here, since I've written about it for XML.com in the past, except to say that I think OWL has a very bright, very important future. I also won't say much about Bayesian systems, since others have written about them for various O'Reilly publications, except to point out that Bayesian systems are being used for research projects, some of which I've heard talks about at ISWC this week. (I'll mention briefly one such project which is using a Bayesian classification technique to aid human efforts to classify images -- a project which seems perfectly suited to hacker and academic collaboration.)

What I want to talk about at some length is rule systems. For reasons about which I won't speculate here, generic rule systems have not been a key tool in the hacker's toolkit. In fact, I suspect that more hackers have used a rule system than can really say what a rule system is. I'm referring, of course, to the make build tool, which is (or, more accurately, contains) a kind of very domain-specific rule system. Likewise for the infinitely clever system configuration engine, cfengine ("a middle to high level policy language for building expert systems which administrate and configure large computer networks").

Hackers are, then, perfectly willing and able to use domain-specific rule systems and rule languages. That's an encouraging sign because, if ISWC 2003 is any fair indication, academics love rule systems. And why not? They are very powerful tools. The crucial point here is that generic rule systems are not much used amongst the hackers. Perhaps more tellingly, there doesn't seem to be much awareness of rule systems amongst the hackers. Rules are not part of the typical hacker toolkit.
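
For readers who've never sat down with one, the heart of a generic rule system is small enough to sketch in a few lines of Python. This is a purely illustrative toy of my own, not any particular engine: facts are tuples, rules derive new facts from existing ones, and forward chaining runs the rules to a fixed point.

# A minimal, hypothetical sketch of a generic forward-chaining rule system.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent_rule(facts):
    # If X is a parent of Y and Y is a parent of Z, X is a grandparent of Z.
    return {("grandparent", x, z)
            for (p1, x, y1) in facts if p1 == "parent"
            for (p2, y2, z) in facts if p2 == "parent" and y1 == y2}

rules = [grandparent_rule]

# Forward chaining: apply every rule until no new facts can be derived.
while True:
    derived = set().union(*(rule(facts) for rule in rules)) - facts
    if not derived:
        break
    facts |= derived

print(facts)  # now includes ("grandparent", "alice", "carol")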

Yet Semantic Web researchers are crazily enthusiastic about a web of rules. In addition to there having been an entire day of meetings at ISWC devoted to rules, about which more below, I heard lots of talks and read many posters which referred to rules, rule engines, rule-based representations of policies, etc. There is also the matter of Tim Berners-Lee's layer cake diagram of the Semantic Web, the next layer of which is a rule layer.

Together with OWL and Bayesian systems, rule systems are a potential big win in the loosely coupled collaboration between hackers and academics. In addition to the reasons I've offered for thinking this might be the case, I just can't think of any substantial blocking reasons.

An Idiosyncratic Survey of Rules for the Semantic Web

Since I think rule systems are going to be important as we move forward, I want to offer a brief, idiosyncratic overview of some of the ways rule systems might be used to build the Semantic Web -- short because I really should get back to the lovely beach on Sanibel Island before ISWC 2003 is over; idiosyncratic because I don't claim to have a synoptic view of the state of the field.

RuleML

The canonical place to start is with RuleML, the rule markup language. While I have to say that RuleML is my leading candidate for the much-coveted Simon St. Laurent Award for Most Awful XML Markup (an award I just made up on the spot, of course), it is an interesting project. Before explaining why it's interesting, however, let me show you a very brief snippet of RuleML, but you really must look at it indirectly and only for a moment, in the same way it will not do to stare too long directly into the sun:


<cterm>
   <_opc><ctor>atom</ctor></_opc>
   <_r n="opr"><rel>buy</rel></_r>
   <var>person</var>
   <var>merchant</var>
   <var>object</var>
</cterm>

Yes, there really is an element named "_r" which has an attribute named "n". Brutal.

So, obviously, the RuleML guys don't intend anyone to deal with RuleML instances by hand. The problem, of course, as XML veterans can attest, is that this is always wrong: no matter how sincere the intent that some markup language is meant to be consumed and produced only by machines, it is always the case that some human eventually ends up having to deal with that markup.

All that having been duly said and noted, RuleML is still interesting because it suggests some of the ways in which rules and the Semantic Web may interact. One of the overarching metathemes of Semantic Web research -- and one which bears out my contention in "Commercializing the Semantic Web" that the Semantic Web is an effort to webize AI, rather than to AI-ize the Web -- is to determine with some formality what happens when you reconceptualize AI models, techniques, and tools as able to produce and consume URIs. That is, what happens when you rethink AI with a global, distributed, and decentralized hypermedia system in mind.

According to Harold Boley, one of the RuleML developers, RuleML has been doing just that since 2001, when it moved to an RDF-like style of knowledge representation, using URIs as additions to or substitutes for logical constants, relations and functional symbols, clauses and rulebase labels. In other words, what happens when you webize a rule system?
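
As a rough, hypothetical Python sketch of what webizing buys you (the URIs and the rule are my inventions, not RuleML's): once logical constants and relation symbols are full URIs rather than local atoms, facts and rules published at different sites can be merged without name clashes.

EX = "http://example.org/terms#"   # a made-up namespace

facts = {
    # buy(Person, Merchant, Object), with every symbol a URI
    (EX + "buy", EX + "kendall", EX + "powells", EX + "book"),
}

def ownership_rule(facts):
    # buy(Person, Merchant, Object) => own(Person, Object)
    return {(EX + "own", f[1], f[3])
            for f in facts
            if len(f) == 4 and f[0] == EX + "buy"}

facts |= ownership_rule(facts)
print(facts)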

One of the interesting developments in the RuleML world, which I can't say much about here, is OO RuleML, a kind of modular rule system toolkit, which allows you to mix and match various permutations of user-level roles, URI-grounded clauses, and order-sorted terms.

Harold Boley also made one of the more XML.com-salient points I've heard at ISWC thus far: XML supports a kind of positional knowledge representation wherein parent elements are focus points applied to ordered child elements. RDF supports a kind of role knowledge representation wherein unordered descriptions focus a resource that has various properties associated with it by way of predication. This tension between the ordered and the unordered aspects of XML and RDF has been a subject of intense debate and interest -- even if not always in these precise terms -- among XML developers. We might consider this an inflection point of the other kind; that is, a point at which XML developers might be able to learn valuable lessons from old AI hands. Consider all the ink that's been spilled about using unorderable RDF as a syndication format, for example.

RLS, the Rule Language Server

Tanel Tammet, one of the developers of RLS, a GPL'd rule server comprising a big pile of Scheme and C, talked about RLS's implementation.

RLS targets two goals: first, integrating various data formats and rule languages, including SQL, RDF, CSV, and HTML (rules themselves may be expressed in various formalisms, including RDFS, first-order logic, Datalog, and so on); and, second, providing a tool for Semantic Web developers interested in doing rule processing.

RLS is another good example of a concrete tool which is available now for web hackers to play with and contribute to. And it's a full-on, core Semantic Web tool.

Reactive Rules: Integrating Rules and Events

Guy Sharon, an IBM researcher in Haifa, Israel, presented some of his research on reactive rule systems. A reactive rule is one which associates a response with an event. Sharon is one of the developers of ADI (Active Dependency Integration). In addition to reactive rules, this research focuses on predictive rules, which try to redirect some system toward a better result, eliminate problems, or offer advanced warnings.
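
In hacker terms, a reactive rule is the familiar event-condition-action pattern. Here is a minimal, hypothetical Python sketch (the event names and functions are invented for illustration, and this is not ADI itself):

reactive_rules = []

def rule(event_type):
    # Register a function as the response to a kind of event.
    def register(action):
        reactive_rules.append((event_type, action))
        return action
    return register

@rule("disk_nearly_full")
def free_space(event):
    print("archiving old logs on", event["host"])

def dispatch(event):
    # Fire every rule whose event type matches the incoming event.
    for event_type, action in reactive_rules:
        if event["type"] == event_type:
            action(event)

dispatch({"type": "disk_nearly_full", "host": "db1"})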

The research also aims at figuring out how to infer rules from dependency models. Large, complex business systems, the kind which are IBM's bread and butter, contain not only many dependencies, but many kinds of dependency: functional, workflow, causal, and business logic. These can be expressed with rules, which can handle the interactions between these dependencies in order to resolve them.

Rules and ACLs

One of the core competencies of rule systems, as implemented in actual projects and tools, is policy management and control. Thus, it's not surprising that rule systems are particularly useful in security contexts. One talk I heard earlier this week concerned rule-based processing of an XML Access Control List (ACL) model, which provides a fine-grained ACL model for XML instances, user-defined policies for conflict resolution, and a declarative semantics (using XDD, the XML Declarative Description language). It's also worth noting that an XML-based rule language -- XET, An Equivalent-Transformation-Based XML Rule Language -- has been used in the implementation of the ACL model.
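
To make the flavor of this concrete, here is a toy, hypothetical Python sketch of fine-grained, rule-based access control with a user-defined conflict-resolution policy (deny overrides grant). It is my own invention, not the XDD/XET implementation from the talk.

# Each rule: (role, path_prefix, action, effect)
acl_rules = [
    ("staff", "/patients",            "read", "grant"),
    ("staff", "/patients/psychiatry", "read", "deny"),
]

def allowed(role, path, action):
    effects = {effect
               for (r, prefix, a, effect) in acl_rules
               if r == role and a == action and path.startswith(prefix)}
    if "deny" in effects:          # conflict resolution: deny wins
        return False
    return "grant" in effects      # default: no matching rule means no access

print(allowed("staff", "/patients/cardiology/42", "read"))  # True
print(allowed("staff", "/patients/psychiatry/7",  "read"))  # False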

This project is using RDF Schema as a formalism for expressing ontologies for authorization, group, and role semantics. It's not clear whether a transition to OWL is on the cards, but even among academic researchers there are many projects and tools which are still using RDFS. At the very least this suggests that hackers who are still using RDFS, or thinking about using it, should realize that while OWL is gaining momentum, it's still very early days.

OWL Rules

Finally, I want to say a few words about OWL after all. One of the threads of the various rules talks I heard is the question of how (never whether) to integrate ontology formalisms with rule systems. Various approaches are possible, but one which deserves mention here is the relatively new proposal, by Ian Horrocks and Peter Patel-Schneider, called "A Proposal for an OWL Rules Language" (specifically for OWL DL and OWL Lite variants).

Before taking a very brief look at this OWL Rules proposal, why might you want to add rules to an OWL project? In many domains, one needs a way to represent conditions which must obtain before, say, a certain relation may exist between various entities or before, say, an entity may be said to possess a certain property. Consider a very trivial example from the health care domain. If you'd like to manage particular health care services in OWL, an obstetrician's system might need to represent women not merely as subclasses of a person class, but as kinds of entities which can move from one state, being a woman, to another, being a mother. Obviously this state transition is precipitated by a series of events, including conception, pregnancy and birth. If one has a rule system neatly integrated within OWL, then one can model this series of events as a set of constitutive conditions that must occur in order for some entities to obtain some new relations or properties. Rules offer an elegant way to represent event and relation antecedents.
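
Here is the shape of that example as a toy Horn-style rule in Python -- the vocabulary is invented, and this is not the Horrocks/Patel-Schneider syntax: the antecedent (body) tests class and property assertions, and the consequent (head) asserts a new class membership.

facts = {
    ("type", "eve", "Woman"),
    ("gaveBirthTo", "eve", "abel"),
}

def mother_rule(facts):
    # Woman(?x) AND gaveBirthTo(?x, ?y)  =>  Mother(?x)
    women = {x for (p, x, c) in facts if p == "type" and c == "Woman"}
    return {("type", x, "Mother")
            for (p, x, y) in facts
            if p == "gaveBirthTo" and x in women}

facts |= mother_rule(facts)
print(("type", "eve", "Mother") in facts)  # True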

The chief virtues of OWL Rules can be stated simply:

  • OWL Rules is a relatively noninvasive change to OWL. OWL is rapidly nearing W3C Recommendation status; the noninvasiveness of OWL Rules suggests that it will be relatively easy, as a matter of W3C process, to achieve consensus about it. It also suggests, though more weakly, that the implementation of OWL Rules will not be particularly difficult.
  • OWL Rules seems to have won the support of the DAML community, an important constituency within the relevant W3C working groups.
  • OWL Rules is very expressive, since it leverages the expressiveness of OWL in both parts of a rule, the antecedent (body) and consequent (head) expressions.
  • OWL Rules leverages all of one's OWL expertise, which is an important social factor in technology adoption.

Another virtue of OWL Rules is that it provides a very elegant way to do what might be called multimodal semantic processing. One thing that's especially clear from the Semantic Web academic research community is that no single technique, model, or method is alone sufficient to achieve the Semantic Web. Multimodal semantic processing -- spanning at least ontological, machine-learning-based, and rule-based approaches -- is going to be a necessity rather than an option.

What is to be Done?

I suggest that these three communities -- industry, academia, hackerdom -- are necessary for the development of a robust, public Semantic Web. I also suggest that the coordination between them may be informal and loosely coupled. Yet there must be some coordination, and conferences such as ISWC may well prove a valuable avenue for such coordination.

There are cultural mismatches that must be addressed. On the hacker side, we tend to want a less structured conference, one with more time set aside for actual programming and design work; more time set aside for "open space", self-organizing conference planning; more web-accessible planning and archival mechanisms (weblogs, IRC channels, conference proceedings on the Web); and an explicit open source project conference track. On the academic side, of course, we want a formal, peer-reviewed paper process, as well as the maintenance of professor-graduate student social and professional relations.

I'm not sure one conference can encompass both of these cultures, plus whatever it is that the industrial folks need and want. But I'm not sure one conference cannot encompass these cultures. It's worth thinking about and, perhaps, experimenting with.

Aside from conference considerations, there are other things we can all do. Professors should encourage (or mandate?) their students to use open source software whenever possible, to participate in relevant open source projects and communities, to use open source resources like SourceForge in order to increase the visibility of research and increase the prospects for mutually fruitful collaboration. Finally, everyone in academia should think about the lesson of N3.

What is the lesson of N3? It is that, given its skunkworks origin, no one has any real idea what N3 is or what its formal semantics are. Academic researchers tend, therefore, to ignore it, and perhaps they should. But despite these obvious flaws, N3 has been used by web hackers in a way that Prolog, Mercury, Haskell, Mozart/Oz and other real logic programming languages may never be used. It's worthwhile to think about why this has been the case, as well as the utility of it being the case. Of course formal semantics and hackability are not necessarily mutually exclusive, but they seem to occur together in a single project very rarely in practice. That's unfortunate.

Finally, what can the hackers do differently? I think the primary needful thing is to overcome our stubborn pride, our tendency to hack first, research later, if at all. I have a theory, which may be totally false, that the very best hackers frequently consult deep computer science research, but that they tend to do so secretly because hacker culture, or at least large swathes of it, tends to downplay the importance of using formal research results. That's also unfortunate, and we should work hard to overturn it.