We are increasingly running into a problem on DocRaptor where individual users' documents degrade the service by monopolizing scarce resources. We used to be able to identify the culprits just by looking at who was making lots of docs, but it turns out a lot of people are making a lot of PDFs around the clock.
To see the sorts of statistics that are available (if you're using a Unix-like operating system), try doing ‘man 2 getrusage’. A literal interpretation of ‘RAM consumption’ might be the ‘maximum resident set size’ field, but the right fields to use would depend in part on what resources your server is most short of (swap space, address space, or the much more likely case that the server is finishing all the jobs but slowly due to thrashing). You might actually want one of the less obvious measures, such as number of major page faults (plus number of I/O requests).
(Note that your shell might well have a built-in version of time that lacks a --format option, in which case you might need to tell the shell to use the gnu version: e.g. by giving a full path such as ‘/usr/bin/time’ or ‘/opt/gnu/time’, or some shells allow ‘\time’ or ‘command time’ to consult $PATH.)
Good call re: time. That will get me through this current issue.
I figured this might also be a way to ease into a conversation about getting more stats out of Prince. PDF stats would also be cool: the number and size of resources downloaded (along with their timings), and some complexity measures to go with the page count, such as the number of boxes generated, fonts used, etc.
I think we can revisit this in February in preparation for the Prince 11 release. Having timings available for downloading remote resources would help to diagnose slow conversions.