What is the difference between "long-term" and "short-term" average?
A servlet's long-term average execution time is the total amount of time the servlet has spent executing since WebLogic Server started, divided by the total number of times it has been invoked since the server started. This is the average execution time displayed in the WebLogic Administration Console.
The short-term average execution time is the amount of time that a servlet spent executing since the last sample divided by the number of times that the servlet was invoked since the last sample.
The long-term average calculation tends to dampen the effect of peaks and valleys in performance, whereas the short-term average is better at reflecting peaks and valleys.
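As a rough sketch (this is not the WebLogic API; the class and field names are illustrative), the two averages can be computed from a pair of cumulative counters plus the values recorded at the previous sample:

```java
// Illustrative sketch only, not a WebLogic class: long- vs short-term
// average execution time computed from cumulative counters.
public class ServletStats {
    // Cumulative totals since server start
    long totalInvocations;
    long totalExecMillis;
    // Totals recorded at the previous sample
    long lastInvocations;
    long lastExecMillis;

    /** Long-term average: cumulative time / cumulative invocations. */
    double longTermAverageMillis() {
        return totalInvocations == 0 ? 0.0
                : (double) totalExecMillis / totalInvocations;
    }

    /** Short-term average: deltas since the last sample. */
    double shortTermAverageMillis() {
        long invocations = totalInvocations - lastInvocations;
        long millis = totalExecMillis - lastExecMillis;
        return invocations == 0 ? 0.0 : (double) millis / invocations;
    }

    /** Record the current totals as the baseline for the next sample. */
    void sample() {
        lastInvocations = totalInvocations;
        lastExecMillis = totalExecMillis;
    }
}
```

Because the long-term average divides by the ever-growing cumulative invocation count, each new data point moves it only slightly; the short-term average divides only by the invocations in the latest interval, so it reacts immediately.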
For example, suppose that since the WebLogic Server was started, a servlet was invoked 90 times and each invocation required 1 second, for a total of 90 seconds executing. Its long-term average execution time would be 1 second.
Now suppose that the next time the job collects data, the number of invocations was now 100 and the total amount of time spent executing was 110 seconds. The long-term average execution time would be 1.1 seconds.
However, the short-term average tells a much different story. The servlet spent 110 − 90 = 20 seconds executing and was invoked 100 − 90 = 10 times, for an average execution time of 2 seconds. Compared to the previous sample, the execution time has doubled!

If you are interested in finding peaks and valleys, or want to know how performance varies by time of day, use the short-term average rather than the long-term average.
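The arithmetic in the example can be checked with a short snippet (plain Java; the numbers are taken directly from the example above):

```java
// Worked example from the text: totals at two successive samples, in seconds.
public class AverageExample {
    public static void main(String[] args) {
        double prevExecSecs = 90, prevInvocations = 90;   // first sample
        double curExecSecs = 110, curInvocations = 100;   // second sample

        // Long-term: cumulative totals since server start.
        double longTerm = curExecSecs / curInvocations;   // 110 / 100 = 1.1 s

        // Short-term: deltas between the two samples.
        double shortTerm = (curExecSecs - prevExecSecs)
                / (curInvocations - prevInvocations);     // 20 / 10 = 2.0 s

        System.out.println("long-term average:  " + longTerm + " s");
        System.out.println("short-term average: " + shortTerm + " s");
    }
}
```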