The Next Web?
by Simon St. Laurent
The Semantic Web
In the next phase of the conversation, XML never yielded all that wonderful improvement for searches, but that was OK, since better searching was one of the things the Semantic Web was meant to deliver. Why rely on tag names when you could have a whole infrastructure of knowledge representation formalisms (RDF and OWL) that could tell you exactly where to find what you need? There could be vast collections of metadata at your fingertips, ready for slicing, dicing, and analysis.
Tim Berners-Lee, creator of the World Wide Web and chief promoter of the Semantic Web, explained the vision like this:
While Web pages are not generally written for machines, there is a vast amount of data in them, such as stock quotes and many parts of online catalogues, with well-defined semantics. I take as evidence of the desperate need for the Semantic Web the many recent screen-scraping products, such as those used by the brokers, to retrieve the normal Web pages and extract the original data. What a waste: Clearly there is a need to be able to publish and read data directly.
Most databases in daily use are relational databases—databases with columns of information that relate to each other, such as the temperature, barometric pressure, and location entries in a weather database. The relationships between the columns are the semantics—the meaning—of the data. These data are ripe for publication as a semantic web page. For this to happen, we need a common language that allows computers to represent and share data, just as HTML allows computers to represent and share hypertext. The consortium is developing such a language, the Resource Description Framework (RDF), which, not surprisingly, is based on XML. In fact it is just XML with some tips about which bits are data and how to find the meaning of the data. RDF can be used in files on and off the Web. It can also be embedded in regular HTML Web pages. The RDF specification is relatively basic, and is already a W3C Recommendation. What we need now is a practical plan for deploying it. (Weaving the Web, 1999, p. 181)
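To make the weather example concrete, here's a minimal sketch of the sort of RDF Berners-Lee describes. The rdf: namespace is the real one; the w: weather vocabulary and the station URI are invented for illustration:

    <?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:w="http://example.org/weather#">
      <!-- One reading: the subject is a station, the properties are the columns -->
      <rdf:Description rdf:about="http://example.org/stations/airport">
        <w:location>Portland, Oregon</w:location>
        <w:temperature>17.5</w:temperature>
        <w:barometricPressure>1013</w:barometricPressure>
      </rdf:Description>
    </rdf:RDF>

Each property-value pairing is one of the column relationships Berners-Lee mentions, expressed so that software can read it directly, without scraping.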
While development of RDF, OWL, and related standards, as well as software to support those standards, continues, no one has yet found a "practical plan for deploying it," at least not in the broad way Berners-Lee proposed. Mixing XML with HTML is a much less complicated proposition, but web developers never found it especially appealing either. Semantic Web technologies and projects (consider RSS 1.0, FOAF, and DOAP) are definitely designed to address real problems, but they haven't yet come together to create anything like the Semantic Web vision.
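FOAF gives a feel for what those projects produce. A minimal FOAF description looks something like this (the foaf: namespace is the real one; the person and addresses are invented):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <foaf:Person>
        <foaf:name>Jane Hacker</foaf:name>
        <foaf:mbox rdf:resource="mailto:jane@example.org"/>
        <foaf:homepage rdf:resource="http://example.org/~jane/"/>
      </foaf:Person>
    </rdf:RDF>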
The Services Web
As the Semantic Web story was taking off, a different group of developers (also largely under the aegis of the W3C) saw another set of possibilities for getting those stock quotes to those hungry brokers. Their vision still combined XML and the Web, but freed the notion of the web from the notion of a web browser. Web Services initially combined the protocol side of the web equation, HTTP, with XML. The trio of SOAP, WSDL, and UDDI would let developers create, define, and share their mechanisms for exchanging data among computers.
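As a rough sketch of what that looks like on the wire, a SOAP 1.1 request for a stock quote might resemble the following. The service address, the q: namespace, and the GetQuote operation are hypothetical stand-ins, not any particular published service:

    POST /quotes HTTP/1.1
    Host: example.com
    Content-Type: text/xml; charset="utf-8"
    SOAPAction: "http://example.com/GetQuote"

    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <q:GetQuote xmlns:q="http://example.com/quotes">
          <q:symbol>ORCL</q:symbol>
        </q:GetQuote>
      </soap:Body>
    </soap:Envelope>

In the full vision, a WSDL document would define the GetQuote operation and its messages, and a UDDI registry would let developers discover the service in the first place.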
As it turned out, while web services have proven popular for Enterprise Application Integration (EAI), Service-Oriented Architecture (SOA), and a wide variety of other business-oriented projects, they've had very little impact on the traditional Web. Some web companies do expose their data to outsiders through SOAP-based APIs, but the vision of a large market of open services accessible to anyone who needs data or processing has largely faded. Instead, web services have become a replacement for CORBA and similar architectures.
Services haven't vanished, though perhaps it's unfortunate that SOAP-based services got the title of "Web Services" simply for their use (or, some would say, abuse) of HTTP. Another architecture for web services, REST (Representational State Transfer), is quite deliberately built on the traditional web browser/web server model, allowing for much easier integration with things like human visitors exploring a service through a web browser. Despite its greater compatibility with traditional web models, though, REST hasn't become an instant business success either.
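The contrast shows up clearly in the requests themselves. A RESTful version of the same hypothetical quote service simply exposes each quote as a resource at its own URL, retrievable with an ordinary HTTP GET (the URL and the response vocabulary are again invented for illustration):

    GET /quotes/ORCL HTTP/1.1
    Host: example.com

    HTTP/1.1 200 OK
    Content-Type: text/xml

    <?xml version="1.0"?>
    <quote symbol="ORCL">
      <last>12.34</last>
      <currency>USD</currency>
    </quote>

Because the service is just a URL returning a document, a human with a web browser can fetch the same resource a program does.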
The Next XHTML
While grand visions of an XML- or RDF-enriched Web competed with SOAP-based services for attention, the HTML community, both at the W3C and elsewhere, had some ideas of its own.
XHTML 1.0, recasting HTML as an XML vocabulary, was the first small step. The vast majority of web developers haven't noticed XHTML, though the acronym is becoming more common as new editions of HTML books roll off the presses with an 'X' appended to their titles. Because XHTML documents are well-formed XML, any XML parser can process them, which eases the pain of the screen-scraping Berners-Lee complained of earlier. It also offers a key advantage for advanced developers using CSS and Dynamic HTML: a clear set of document structures on which to build.
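A minimal XHTML 1.0 document shows what the recasting means in practice: an XML declaration, a namespace, lowercase element names, and every element explicitly closed:

    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Hello, XHTML</title>
      </head>
      <body>
        <p>Every element is closed, even <br /> empty ones.</p>
      </body>
    </html>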
XHTML 1.1 attempted to modularize HTML into smaller, reusable components. For many XML developers, XHTML 1.1 served primarily to demonstrate how both DTDs and W3C XML Schema lacked clean mechanisms for modularization, but it did at least open the way for smaller versions of XHTML, notably XHTML Basic, aimed at mobile devices with limited bandwidth and processing power. (Actually, a lot of those phones have more power and bandwidth than the computer/modem combination on which I first surfed the Web.)
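An XHTML Basic document opts into that smaller profile simply by declaring the XHTML Basic DTD instead of one of the full XHTML 1.0 DTDs:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN"
        "http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd">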