
Benchmarking XML Parsers on Solaris

June 9, 1999

Steven Marcus

Clark Cooper's Benchmarking XML Parsers article is an even-handed attempt to compare the relative performance of XML parsing using C, Java, Perl and Python.

From the original article:

There aren't really many surprises. The C parsers (especially Expat) are very fast, the script-language parsers are slow, and the Java parsers occupy a middleground for larger documents. However for smaller documents (less than .5 megabytes), the Perl and Python parsers are actually faster than either Java parser tested.

After reading the article, and the paragraph above in particular, I was curious whether the reported results would be similar using Sun's Solaris-optimized Java virtual machine. Solaris is the "showcase" environment for Java, and Sun claims to have spent much time and money optimizing Java for its platform. Table 1 summarizes the results:

Table 1: xmlbench/Solaris results (times in seconds)

                 REC   chrmed    med   chrbig     big
  C-Expat       0.05     0.10   0.34     0.31    1.34
  Java-xp       1.26     1.37   1.98     1.96    4.53
  Java-xml4j    1.47     1.61   2.62     2.21    6.41
  Java-xml-tr2  1.34     1.37   2.07     1.83    4.47
  Perl          1.34     3.69   8.61    11.94   33.57
  Python        1.38     3.93   9.63    12.88   37.95

For people who prefer column charts:

[Column charts of the Table 1 results]

Java Performance

These results show that Java can outperform Perl and Python even in the "small document" case cited above.

Also interesting is the relative performance of Java on each platform. The following table and graph show the results normalized to expat on each platform: the Solaris times were divided by the Solaris expat time (making expat equal to 1) and, independently, the Linux times by the Linux expat time:

Table 2: Performance (time) relative to expat (=1)

                        REC   chrmed     med   chrbig     big
  Java-avg (Solaris)  27.13    14.50    6.54     6.45    3.83
  Perl (Solaris)      26.80    36.90   25.32    38.52   25.05
  Python (Solaris)    27.60    39.30   28.32    41.55   28.32
  Java-avg (Linux)    54.33    28.01   15.18    13.66   10.75
  Perl (Linux)        28.26    31.09   22.13    31.62   21.86
  Python (Linux)      33.00    43.61   32.06    46.74   32.75

[Column chart: relative performance to expat]
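
For concreteness, here is a minimal sketch (mine, not part of xmlbench) of the per-platform normalization behind Table 2, with the Solaris REC column from Table 1 hard-coded:

    // Normalize.java -- reproduces the Solaris REC column of Table 2
    // from the raw Table 1 times; illustration only, not part of xmlbench.
    public class Normalize {
        public static void main(String[] args) {
            double expat   = 0.05;                       // C-Expat, REC, Solaris
            double javaAvg = (1.26 + 1.47 + 1.34) / 3.0; // mean of xp, xml4j, xml-tr2

            System.out.println("Java-avg: " + (javaAvg / expat)); // ~27.13
            System.out.println("Perl:     " + (1.34 / expat));    // ~26.80
            System.out.println("Python:   " + (1.38 / expat));    // ~27.60
        }
    }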

The results on Solaris still show a large relative performance gap between native code (expat) and Java when running this benchmark. Is this an issue? Can it be "explained"?

Native code has a clear "startup" speed advantage: Java, Perl and Python all have considerably more overhead before "real work" begins. Also, note that the test harness runs each test application 3 times. Consequently, the Java JIT compilation overhead (and the Perl and Python bytecode-compilation overhead) is incurred 3 times, and that cost must be amortized over a smaller amount of work.
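
The effect is easy to demonstrate. The sketch below (mine, not part of xmlbench) parses the same document several times inside a single JVM and times each pass separately; the first pass includes class loading and JIT compilation, while later passes approximate steady-state speed. The class name is hypothetical, and com.jclark.xml.sax.Driver appears only as an example of a SAX driver class (it is xp's).

    // WarmupTimer.java -- hypothetical experiment, not part of xmlbench.
    // Parses one document repeatedly in a single JVM to separate startup
    // and JIT cost (first pass) from steady-state cost (later passes).
    import org.xml.sax.Parser;
    import org.xml.sax.helpers.ParserFactory;

    public class WarmupTimer {
        public static void main(String[] args) throws Exception {
            String driver = args[0]; // e.g. "com.jclark.xml.sax.Driver" (xp)
            String uri = new java.io.File(args[1]).toURL().toString();
            Parser parser = ParserFactory.makeParser(driver);
            for (int i = 1; i <= 5; i++) {
                long start = System.currentTimeMillis();
                parser.parse(uri);
                System.out.println("pass " + i + ": "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }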

In particular, this benchmark would put Sun's "HotSpot" Java virtual machine at a disadvantage: HotSpot defers optimization until it has identified frequently executed code, so short-lived processes like these never reach its full speed.

It's possible that a different XML benchmark would show a smaller relative performance gap between Java and native code. That's the trouble with benchmarks: you can almost always design or find one to give you the results you want.

Perl v. Python Performance

One small surprise is the convergence of Perl and Python performance in these tests. The results from the original article show a clear speed advantage for Perl, which took from 66% to 85% of Python's time. In the results here that advantage narrows: Perl takes 89% to 98% of Python's time. That is, Python's speed has improved to the point that it is almost as fast as Perl. I have not investigated this, but the likely explanation is that Python's performance improved between version 1.5.1 (used in the original article) and version 1.5.2 (used here).

Java Notes

  1. Performance differences do exist between the various Java XML parsers, but xmlbench reveals no order-of-magnitude differences. Most applications can therefore choose an XML parser based on features and/or support. The largest difference, 25%, occurs when xml4j processes big.xml. It will be interesting to rerun these tests with later versions of xml4j to see whether that performance gap closes.
  2. The results for xml-tr2 were obtained by modifying the Java-xml4j/Statsax.java file. "Minimal" changes were made, perhaps 4 lines: the SAX ParserFactory is now used to create the parser, so the resulting Statsax.java can be used without modification by both xml4j and xml-tr2 (a sketch of this kind of change appears after this list). The output of the modified Statsax application was verified to be identical when run with the two different parsers. The harness.pl file was modified to test the new parser. If there is interest, and with the original author's approval, the updated xmlbench files will be made available here.
  3. The test harness was modified to increase the initial and maximum Java heap sizes to 128M and 256M respectively. The results were only marginally different, so they are not reproduced here. At least for this benchmark, the Solaris_JDK_1.2.1_03_pre-release default initial heap size of 80M(?) appears adequate to avoid excessive garbage-collection overhead.
  4. IBM's Java virtual machine for Windows currently supports Java 1.1. Subjectively, it performs very well compared to Javasoft's original virtual machine for Windows. However, the xmlbench test files as provided require Java 2 support. I plan to post results for IBM's virtual machine when IBM releases a Java 2 implementation.
  5. Other Java execution environments may reduce the "startup overhead" mentioned above. In particular, the AS/400 Java environment compiles Java bytecodes to native code the first time they are run and then caches the resulting native code. This optimization could narrow the gap between native (expat) and Java results.
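
Here is a hedged sketch of the kind of change described in note 2 above (the actual diff to Statsax.java is not shown in this article; the commented class names are the usual SAX driver classes for each parser, used only as examples):

    // StatsaxSketch.java -- illustration only, not the author's actual diff.
    // Replacing a hard-coded parser class with a SAX ParserFactory lookup
    // lets one driver program work with both xml4j and xml-tr2.
    import org.xml.sax.Parser;
    import org.xml.sax.helpers.ParserFactory;

    public class StatsaxSketch {
        public static void main(String[] args) throws Exception {
            // Before (hypothetical): tied to one implementation, e.g.
            //   Parser parser = new com.ibm.xml.parsers.SAXParser();
            // After: the implementation is chosen at run time via the
            // "org.xml.sax.parser" system property, e.g.
            //   java -Dorg.xml.sax.parser=com.sun.xml.parser.Parser StatsaxSketch file.xml
            Parser parser = ParserFactory.makeParser();
            parser.parse(new java.io.File(args[0]).toURL().toString());
        }
    }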

Solaris Notes

  1. Solaris 7 does not include the GNU version of "time", which adds the output formatting required by the testing harness supplied in xmlbench. I installed time-1.7 from the GNU site. The configure script supplied with time-1.7 caused build errors, so I regenerated it using autoconf-2.13.
  2. EGCS has many strong points, but UltraSparc code generation is not necessarily one of them. The relative performance of Perl and Python with respect to Java might improve if they were compiled with Sun's C compiler. Code generation may also account for Perl/Linux's generally better relative performance compared with Perl/Solaris, as the EGCS team has likely spent more time optimizing Intel x86 code generation.

Configuration Notes

  • Ultra 5 333MHz SPARC with 384M RAM
  • Solaris 7
  • Java 1.2 (Solaris_JDK_1.2.1_03_pre-release)
  • EGCS 1.1.2 was used to compile Perl, Python, expat, etc.
  • Perl 5.005_54
  • XML-Parser-2.23
  • expat version 19990425 as supplied in XML-Parser-2.23
  • Python 1.5.2
  • Pyexpat from the Python SIG's xml-0.5.1 package using expat from XML-Parser-2.23
  • xp v0.5
  • xml4j 2.0.9
  • xml-tr2 from Javasoft

and the xmlbench.tar.gz supplied with the original article, as of 19990507.

Disclosure

I am a consultant working with Java, Python and XML. I also find Java an extremely productive environment that is well suited to server-side applications. I think benchmarks work best as general indications of relative performance, which is how I have attempted to present them here. I consider fair and meaningful benchmarks hard to construct, and I use them infrequently in "real world" decisions.

My apologies to the author of C-Rxp: I do not have that parser installed.

All credit to Clark Cooper for creating xmlbench, for writing it in a way that I could add the xml-tr2 test quickly, and for writing a Perl script even I can modify.

This file is the Excel spreadsheet that massages the data and produces the chart "Relative performance to expat". Please be gentle if I have committed some statistical blunder.

Feedback

Please contact me if you take issue with the methodology or conclusions.
I am happy to link to any other results.