Profiling

January 30, 2014

Terms

Profiling – dynamic analysis of software, consisting of gathering various metrics and computing some statistics from them. Usually you do profiling to analyze performance, though that's not the only use case – there is, for example, research on profiling for energy consumption analysis.

Do not confuse profiling with tracing. Tracing is the recording of a program's runtime steps in order to debug it – you are not gathering any metrics.

Also, don't confuse profiling with benchmarking. Benchmarking is all about marketing: you launch some predefined procedure to get a couple of numbers that you can print in your marketing brochures.

Profiler – a program that does profiling.

Profile – the result of profiling: statistics calculated from the gathered metrics.

Metrics

There are a lot of metrics that a profiler can gather and analyze. I won't list them all, but instead try to arrange them into a hierarchy:

  • Time metrics
    • Program/function runtime
    • I/O latency
  • Space metrics
    • Memory usage
    • Open files
    • Bandwidth
  • Code metrics
    • Call graph
    • Function hit count
    • Loop depth
  • Hardware metrics
    • CPU cache hit/miss ratio
    • Interrupt count

Profiling methods

The variety of metrics implies a variety of methods to gather them. And I have a beautiful hierarchy for that, yeah:

  • Invasive profiling – changing profiled code
    • Source code instrumentation
    • Static binary instrumentation
    • Dynamic binary instrumentation
  • Non-invasive profiling – without changing any code
    • Sampling
    • Event-based
    • Emulation

(Those are all the methods I know of. If you come up with another – feel free to contact me.)

A quick review of the methods.

Source code instrumentation is the simplest one. If you have the source code, you can add special profiling calls to every function (not manually, of course) and then launch your program. The profiling calls will trace the function call graph and can also compute time spent in functions, branch prediction probabilities, and a lot of other things – see the sketch below. But oftentimes you don't have the source code. And that makes me a saaaaad panda.
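
To make this concrete, here's a minimal sketch using GCC's -finstrument-functions flag, which makes the compiler insert a call at every function entry and exit. The hooks below just log raw addresses; a real profiler would timestamp them and aggregate per function:

    /* trace.c – build with: gcc -finstrument-functions trace.c */
    #include <stdio.h>

    /* the hooks themselves must not be instrumented, or we'd recurse */
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *fn, void *caller)
    {
        fprintf(stderr, "enter %p (called from %p)\n", fn, caller);
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *fn, void *caller)
    {
        (void)caller;
        fprintf(stderr, "exit  %p\n", fn);
    }

    static int work(int n) { return n * 2; }

    int main(void)
    {
        printf("%d\n", work(21));
        return 0;
    }

From matched enter/exit pairs you can reconstruct the call graph, and with timestamps added – per-function runtime.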

Binary instrumentation is what you can guess by yourself – you modify the program's binary image, either on disk (program.exe) or in memory. This is what reverse engineers love to do: to research some proprietary commercial software or to analyze malware, they instrument the binary and observe the program's behavior.

Anyway, binary instrumentation is also really useful in profiling – many modern tools are built on top of binary instrumentation ideas (SystemTap, ktap, DTrace).
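
To give a flavor of in-memory patching, here's a deliberately crude toy of my own (not how SystemTap or the others actually work; it assumes x86-64 Linux and an unoptimized build): it overwrites the first five bytes of a function with a jump to a counting hook. Real dynamic binary instrumentation frameworks relocate the clobbered instructions so the original code still runs; this toy simply throws them away:

    /* patch.c – build with: gcc -O0 patch.c; x86-64 Linux only */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static long hits;

    void target(void) { puts("original body"); }
    void hook(void)   { hits++; puts("instrumented!"); }

    int main(void)
    {
        uint8_t *p = (uint8_t *)target;
        uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);

        /* make the code pages writable (hardened kernels may refuse W+X) */
        mprotect((void *)((uintptr_t)p & ~(page - 1)), 2 * page,
                 PROT_READ | PROT_WRITE | PROT_EXEC);

        /* 0xe9 <rel32> = jmp, relative to the end of the 5-byte instruction */
        int32_t rel = (int32_t)((uint8_t *)hook - (p + 5));
        p[0] = 0xe9;
        memcpy(p + 1, &rel, sizeof rel);

        target();   /* now runs hook() instead of the original body */
        printf("hook hit %ld times\n", hits);
        return 0;
    }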

OK, so sometimes you can't instrument even the binary code – e.g. you're profiling an OS kernel, or some pretty complicated system consisting of many tightly coupled modules that won't work after instrumentation. That's why you have non-invasive profiling.

Sampling is the first natural idea you can come up with when you can't modify any code. The point is that the profiler periodically inspects the CPU registers (e.g. the PSW – program status word) and analyzes what is going on. By the way, this is the only reasonable way to get hardware metrics – by periodically polling the PMU (performance monitoring unit).
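
Here's a self-contained user-space sketch of the idea (my own toy, assuming x86-64 Linux/glibc): a SIGPROF timer fires every 10 ms of consumed CPU time and the handler grabs the interrupted instruction pointer. A real sampling profiler would map each sampled address back to a function and build a histogram:

    /* sample.c – build with: gcc sample.c */
    #define _GNU_SOURCE               /* for REG_RIP on glibc */
    #include <stdio.h>
    #include <stdint.h>
    #include <signal.h>
    #include <sys/time.h>
    #include <ucontext.h>

    static volatile sig_atomic_t samples;
    static volatile uintptr_t last_pc;

    static void on_sample(int sig, siginfo_t *si, void *ucv)
    {
        (void)sig; (void)si;
        ucontext_t *uc = ucv;
        samples++;
        last_pc = uc->uc_mcontext.gregs[REG_RIP];  /* interrupted PC */
    }

    int main(void)
    {
        struct sigaction sa = { .sa_sigaction = on_sample,
                                .sa_flags = SA_SIGINFO };
        sigaction(SIGPROF, &sa, NULL);

        /* deliver SIGPROF every 10 ms of CPU time we consume */
        struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
        setitimer(ITIMER_PROF, &it, NULL);

        volatile double x = 0;        /* CPU-bound work to be sampled */
        for (long i = 0; i < 100000000; i++)
            x += i * 0.5;

        printf("%d samples, last pc = %#lx\n",
               (int)samples, (unsigned long)last_pc);
        return 0;
    }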

Event-based profiling is about gathering events that must be somehow prepared/preinstalled by the vendor of the profiled subject. Examples are inotify, kernel tracepoints in Linux, and VTune events.
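
For instance, with inotify (mentioned above) the kernel itself produces the events and the profiler merely collects them. A sketch, assuming a Linux box – run it and touch a few files in /tmp:

    /* events.c – build with: gcc events.c */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(void)
    {
        int fd = inotify_init();
        if (fd < 0 || inotify_add_watch(fd, "/tmp", IN_OPEN | IN_MODIFY) < 0) {
            perror("inotify");
            return 1;
        }

        /* buffer aligned for struct inotify_event, as the man page advises */
        char buf[4096] __attribute__((aligned(8)));
        int seen = 0;
        while (seen < 10) {           /* report the first 10 events */
            ssize_t len = read(fd, buf, sizeof buf);
            if (len <= 0)
                break;
            for (char *p = buf; p < buf + len; seen++) {
                struct inotify_event *ev = (struct inotify_event *)p;
                printf("mask=%#x name=%s\n", ev->mask,
                       ev->len ? ev->name : "(watched dir)");
                p += sizeof *ev + ev->len;
            }
        }
        return 0;
    }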

And finally, emulation is just running your program in an isolated environment like a virtual machine or QEMU, thus giving you full control over program execution but distorting its behavior.

Resources