The problem
When opening very large XML files locally, on your machine, it almost always takes an age for the file to open – often the application appears to freeze because the system thinks it has stopped responding.
This is an issue if you serve users XML backups of rather complex databases or systems they use – the likelihood of them being able to open large backups, let alone use them, is slim.
Is pagination possible?
I use XSLT to present readable backups to users. Along the same lines, would it be possible to pull in only one page of data at a time, so the entire file is not read in one go, avoiding the issues above?
I imagine the answer is simply no – but I would like to know whether anyone else has seen the same issues and resolved them.
Note: This is on a local machine only, it must not require an internet connection. JavaScript can be used if it makes things easier.
Pagination with XSLT is possible, but will probably not lead to the desired results: For XSLT to work, the whole XML document must be parsed into a DOM tree.
What you could do is experiment with streaming transformations: http://stx.sourceforge.net/
Or you could preprocess the large XML file to cut it up into smaller pieces before processing with XSLT. For this I'd use a command-line tool like XMLStarlet.
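As a rough illustration of that preprocessing step, here is a minimal sketch using Python's standard-library incremental parser to split records into chunks without loading the whole document into memory. The element name `record` and the chunk size are assumptions – adjust them to your actual schema:

```python
import xml.etree.ElementTree as ET

def split_xml(source, record_tag="record", per_chunk=100):
    """Yield lists of serialized <record> elements, per_chunk at a time.

    Uses iterparse so only one record is held in memory at once,
    rather than building a DOM tree for the entire file.
    """
    chunk = []
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == record_tag:
            chunk.append(ET.tostring(elem, encoding="unicode"))
            elem.clear()  # release the element's children to keep memory flat
            if len(chunk) == per_chunk:
                yield chunk
                chunk = []
    if chunk:
        yield chunk  # final partial chunk
```

Each yielded chunk could then be wrapped in a small root element and written to its own file, giving you page-sized documents that open (and transform with XSLT) quickly.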