XML.com: XML From the Inside Out


Write Once, Publish Everywhere

August 16, 2000

It's 2:00 p.m. -- I am surfing the Web. Suddenly, I remember that I have an airplane to catch at 5:00 p.m. Driven by a sudden adrenaline rush, I log onto an XML server, read the project progress report, and finally decide to print it and read it later on the plane.

3:00 p.m. -- Gee ... time flies, and I am stuck in traffic. Will I be late for the plane? I pick up my mobile phone and tell it to browse to our corporate XML server to grab some e-mail. "Acme Corp.," I say, and the phone replies by displaying the login screen. Keeping an eye on the traffic jam in front of me, I enter my user ID and password, select the e-mail option, and the phone displays the message subject list on its WML mini-browser.

3:30 p.m. -- I am not far from the airport, but the traffic is moving slower than the guy I just saw walking beside the street. I am getting more and more nervous about missing my flight. Again, I pick up my mobile phone and call Tellme (1-800-555-8355). The Tellme virtual host welcomes me with a joyful "Tellme." I reply immediately by saying "traffic," and I learn that the traffic jam is caused by an accident. I need to make a rapid decision. I say "MapQuest" to my mobile phone, and it brings me to the MapQuest site. I enter the info with my two thumbs, thinking that WAP phones really need some improvements in human factors. (I said the same thing in 1978 about my first microcomputer.) Be patient, my inner voice says, trying to bring some wisdom to the situation. But the adrenaline flows and reminds me that the urgency is not resolved. Is there an alternative route? MapQuest seems to suggest one. OK, I can take the next street and follow the instructions to the airport. Yes! No traffic jam there.

4:05 p.m. -- I am finally at the airport....

In just two hours, our hero used landline and wireless devices to find the information he needed. He browsed the Web with his fingers, thumbs, and his voice. He used, in fact, three different browsers to access the Web:

  • A plain old HTML browser, running on a wired PC
  • A WML mini-browser, incorporated in a mobile phone
  • A VoiceXML browser, located somewhere on the West Coast

The Challenge

In the 20th century, the challenge was to create an electronic publishing infrastructure. As a by-product, we also gained a new application infrastructure.

In the 21st century, the challenge is to adapt to the new pervasive Internet. We are no longer restricted to PCs connected by a wire to the Web. As developers, we no longer have to deal with a world dominated solely by Windows. We have to adapt our content and applications to a plethora of new devices. This new wave comes, surprisingly, through the phone, and most particularly the mobile phone.

Mobile phones are now equipped with WML mini-browsers. The same mobile phone can also be used as a voice-browsing device. Voice browsing became feasible because of tremendous improvements in recent years in voice recognition and voice synthesis technologies. The limiting form factor of the human palm created a market for palm-top computers. It seems that smart phones and palm computers may collide in the same market space as phones become small computers and small computers become phones.

The computer of the future will understand what we say, talk to us, show us images and movies, and fit in a palm. The keyboard is not a natural extension of the human body. After all, we do not communicate naturally with a keyboard, but with our voice.

One might call it an evolution -- one that started with huge computing devices taking up entire rooms and is rapidly morphing into interactive devices that fit in a palm. But, for the immediate future, let's focus on the first step of this technology: delivering our content on an HTML browser, a WML browser, and finally a VoiceXML browser.

Our Project

Our task is to create interactive applications using XML technologies. These applications should be accessible on three different devices: the telephone (VoiceXML), mobile-phone mini-browsers (WML), and finally, PC browsers (HTML).

To realize this vision, we will create abstract models and encode them in XML. Then, we must recognize the device and transform these models into the appropriate rendering format. The rules of the game are to use, as much as possible, XML technologies that are freely available, and to restrict our scope to XML technologies documented by a public specification.
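As a rough illustration of the "recognize the device" step, here is a minimal sketch in Python. It assumes the server recognizes clients by HTTP content negotiation on the Accept header; the MIME types are the registered ones for each format, but the dispatch table and function names are hypothetical, not part of any project code.

```python
# Sketch: pick a rendering format from a client's HTTP Accept header.
# The dispatch table maps registered MIME types to target formats;
# the function name and fallback policy are illustrative assumptions.

RENDERERS = {
    "text/vnd.wap.wml": "WML",               # WAP mini-browser on a mobile phone
    "application/voicexml+xml": "VoiceXML",  # voice-browser gateway
    "text/html": "HTML",                     # plain old PC browser
}

def pick_rendering(accept_header: str) -> str:
    """Return the target format for the client's Accept header,
    falling back to HTML when nothing matches."""
    for entry in accept_header.split(","):
        mime = entry.split(";")[0].strip()   # drop quality parameters like ;q=0.8
        if mime in RENDERERS:
            return RENDERERS[mime]
    return "HTML"

print(pick_rendering("text/vnd.wap.wml, */*"))  # a WAP phone gets WML
print(pick_rendering("text/html, image/png"))   # a PC browser gets HTML
```

In a full pipeline, the string returned here would select which transformation (for example, which XSLT stylesheet) is applied to the abstract XML model before the result is sent back to the client.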

Each rendering device is constrained by its form factor. So, before exploring the technologies involved, let's first look at these devices' form factors.

The Form Factor

Each device imposes some limits on interaction. Often these limits are imposed by the form factor. For instance, a 21-inch computer screen can display more information than a phone's screen. Also, a phone comes with a limited set of keys, and is better suited to vocal interaction than to text entry.

We can say that the phone's form factor makes it better suited to aural interaction, and that the PC's form factor makes it better suited to visual interaction.

There is also the possibility that palm-top computers, equipped with small screens (though bigger than phone screens) and a small, discreet headset, may merge the visual and aural worlds more effectively than today's devices. Who will do that? Maybe a phone company learning how to add the visual world to its aural device. Or a palm-top company learning how to add voice interaction to its visual world. Who knows?

Enough philosophy. Ready for the lab?
