
2 Answers


(I am sure M. Kay will jump in but in the meantime)

AFAIK XSLT transformation is always done in memory; I do not know of any streaming XSLT implementation (I guess that would be hard, since the whole XML tree has to be 'visible' to the stylesheet).

What we found is that Saxon has much better overall performance than Xalan. Spending less time on each document is another way of improving performance: you can process more documents with the same amount of memory over the same period of time.
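
For example (a minimal sketch, not our production code; the file names are placeholders), you can request Saxon explicitly through JAXP rather than relying on the JDK's bundled Xalan, and compile the stylesheet once so it can be reused across documents:

    import javax.xml.transform.*;
    import javax.xml.transform.stream.*;

    public class SaxonTransform {
        public static void main(String[] args) throws Exception {
            // Ask JAXP for Saxon explicitly instead of the default (Xalan).
            TransformerFactory factory = TransformerFactory.newInstance(
                    "net.sf.saxon.TransformerFactoryImpl", null);

            // Compile the stylesheet once and reuse it across documents;
            // recompiling per document is a common hidden cost.
            Templates templates = factory.newTemplates(
                    new StreamSource("transform.xsl"));

            Transformer transformer = templates.newTransformer();
            transformer.transform(new StreamSource("input.xml"),
                                  new StreamResult("output.xml"));
        }
    }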

Saxon has (had?) its own DocumentBuilder implementation, but we did not notice a memory gain from using it in lieu of Xerces.
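
If you want to try it yourself, Saxon's s9api exposes that builder directly; a minimal sketch (the file name is a placeholder):

    import java.io.File;
    import javax.xml.transform.stream.StreamSource;
    import net.sf.saxon.s9api.*;

    public class SaxonBuild {
        public static void main(String[] args) throws SaxonApiException {
            Processor processor = new Processor(false); // false = open-source edition
            DocumentBuilder builder = processor.newDocumentBuilder();

            // Builds Saxon's own compact tree rather than a Xerces DOM.
            XdmNode doc = builder.build(new StreamSource(new File("input.xml")));
            System.out.println(doc.getBaseURI());
        }
    }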

For large XML documents, we split them into smaller pieces using a (streaming) map/reduce algorithm before running them through the XSLT. Our map/reduce code sits on top of XML Dog.
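
The sketch below is not our actual implementation, just the general idea: a StAX loop that feeds one chunk at a time to the transformer, so only one piece of the document is in memory at once. The "record" element name and file names are made up, and whether a StAXSource positioned mid-document works this smoothly depends on your processor:

    import java.io.FileInputStream;
    import javax.xml.stream.*;
    import javax.xml.transform.*;
    import javax.xml.transform.stax.StAXSource;
    import javax.xml.transform.stream.*;

    public class SplitAndTransform {
        public static void main(String[] args) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream("big.xml"));

            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("piece.xsl"));

            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(reader.getLocalName())) {
                    // With the reader parked on a start tag, a StAXSource
                    // hands just that subtree to the transformer.
                    transformer.transform(new StAXSource(reader),
                                          new StreamResult(System.out));
                }
            }
            reader.close();
        }
    }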

Answered 2012-09-26T16:53:10.137

A factor of 3 expansion between the raw XML size and the size of the in-memory tree is certainly normal; in fact it's low. See for example http://dev.saxonica.com/blog/mike/2012/09/

Streamed transformation is starting to become possible for a limited class of transformations. See for example http://www.saxonica.com/documentation/sourcedocs/streaming.xml. But when your documents are only 5MB in size, I'm not sure it's the right approach for you, at least not without further evidence.

It seems to me that you have concluded that memory allocation by the XSLT processor is the critical factor in the performance of your workload, without any real evidence that this is the case. It would be interesting to see, for example, how the transformation time compares with the parsing time; many people are surprised to find that the transformation cost is sometimes tiny compared to the parsing cost. Before addressing one aspect of your system's performance, you need to work out where the true bottlenecks are.
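
For instance (a rough sketch with placeholder file names, not a proper benchmark; you would want to repeat the runs to warm up the JVM), you can time the parse and the transform separately:

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.*;
    import org.w3c.dom.Document;

    public class ParseVsTransform {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);

            long t0 = System.nanoTime();
            Document doc = dbf.newDocumentBuilder().parse("input.xml");
            long t1 = System.nanoTime();

            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("transform.xsl"));
            transformer.transform(new DOMSource(doc),
                                  new StreamResult("output.xml"));
            long t2 = System.nanoTime();

            // Compare where the time actually goes before optimizing.
            System.out.printf("parse: %d ms, transform: %d ms%n",
                    (t1 - t0) / 1000000, (t2 - t1) / 1000000);
        }
    }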

Answered 2012-09-26T17:09:02.967