Our field specialization process proceeds through nine basic steps, supported along the way with proprietary tools that can contend with code bases comprising millions of lines of source code.
Starting from representative workloads, dynamic analysis creates a runtime profile to identify hot routines.
A particular hotspot function may be invoked from multiple calling contexts, and not every invocation makes the function a hotspot: some contexts may invoke the function only a few times, whereas others invoke it frequently. It is therefore important to isolate the identified hot routines by calling context, using proprietary tooling.
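The idea of separating a routine's profile by calling context can be sketched as follows. This is a hypothetical illustration, not our actual tooling: each call site is tagged with an identifier so that invocation counts accumulate per context, revealing that only one context drives the hotspot.

```c
/* Hypothetical sketch: per-call-site counters for a hot routine.
 * The site ids and functions below are invented for illustration. */
#define NUM_SITES 2
static long invocations[NUM_SITES];

static int hot_routine(int site, int x) {
    invocations[site]++;          /* per-context invocation count */
    return x * x;                 /* stand-in for the real work */
}

/* Context A calls the routine once per request; context B calls it
 * inside a tight loop. Only context B makes the routine a hotspot. */
static void context_a(void) { hot_routine(0, 7); }

static void context_b(void) {
    for (int i = 0; i < 100000; i++)
        hot_routine(1, i);
}
```

A context-insensitive profile would merge both counters and could suggest specializing for the wrong caller; separating them shows where the time actually goes.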
Within each such routine context, the scope of the analysis is narrowed to the code regions (a single function, one or more loops, or even individual statements) that contribute the bulk of the execution time, via fine-grained dynamic analysis.
Further analyses and customized visualizations, tailored to specific categories of specialization, focus on and characterize the mechanics of the performance bottlenecks. For example, is the bottleneck due to machine instructions that could be eliminated if more were known about the context, to instruction cache misses, or to data cache misses?
Given the understood mechanics, we apply a collection of analysis tools to identify specialization opportunities.
Each specialization opportunity involves two distinct source locations: the site where an invariant is created and the site where that invariant can be exploited. We use our tools to trace the flow of these invariants from the creation site to the opportunity site. In complex applications, values flow through convoluted paths; our visualizations are critical to this step.
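The separation between creation site and opportunity site can be illustrated with a minimal sketch. The struct and functions here are hypothetical stand-ins, not actual DBMS code: a value is computed once at setup and then flows, unchanged, into a hot routine where it could be specialized away.

```c
/* Hypothetical sketch of an invariant's flow from creation site to
 * opportunity site. Names are invented for illustration. */
struct session {
    int block_size;    /* set once at startup; invariant thereafter */
};

/* Creation site: the invariant comes into existence here. */
static struct session open_session(int block_size) {
    struct session s;
    s.block_size = block_size;
    return s;
}

/* Opportunity site: block_size never changes by the time execution
 * reaches this hot routine, so the division could be specialized
 * (e.g., into a shift when block_size is a known power of two). */
static int blocks_needed(const struct session *s, int bytes) {
    return (bytes + s->block_size - 1) / s->block_size;
}
```

In a real code base the value typically passes through many intermediate structures and call layers between the two sites, which is why tracing the flow requires tool support.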
With the results of the above analyses, we can apply our proprietary collection of methods to specialize the target hotspot based on identified invariants.
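A simple, hypothetical example conveys the flavor of such a specialization (our actual methods are proprietary). Here a per-iteration check on a runtime-invariant flag is hoisted out of a hot loop by emitting a variant in which the invariant's value is baked in:

```c
#include <stdbool.h>

/* Generic routine: branches inside the hot loop on a flag that is in
 * fact invariant for a given context (hypothetical example). */
int sum_lengths_generic(const int *lens, int n, bool nullable) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        if (nullable && lens[i] < 0)   /* invariant check, every iteration */
            continue;
        total += lens[i];
    }
    return total;
}

/* Specialized variant for the invariant nullable == false:
 * the per-iteration branch is eliminated entirely. */
int sum_lengths_not_null(const int *lens, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += lens[i];
    return total;
}
```

For contexts where the invariant holds, the specialized variant does strictly less work per iteration and presents a simpler body to the compiler's optimizer.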
We then generalize each specialization to contend with the full range of possibilities implied by its invariants and productize it, including adding calls to the Spiff Runtime Environment API within the application source code.
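To convey the shape of this generalization, here is a hypothetical sketch only: the Spiff Runtime Environment API is proprietary, so the names spec_register() and spec_dispatch() below are invented for illustration and do not reflect its real interface. The idea is that the application registers a specialized variant for a given invariant value and falls back to the generic code for all other values.

```c
#include <stddef.h>

/* Hypothetical dispatch sketch; all names invented for illustration. */
typedef int (*sum_fn)(const int *vals, int n);

static int sum_generic(const int *vals, int n) {
    int t = 0;
    for (int i = 0; i < n; i++) t += vals[i];
    return t;
}

/* Variant specialized for the invariant n == 4: loop fully unrolled. */
static int sum_n4(const int *vals, int n) {
    (void)n;
    return vals[0] + vals[1] + vals[2] + vals[3];
}

static sum_fn registered_n4;  /* stand-in for a runtime dispatch table */

/* Stand-in for a runtime-API registration call. */
static void spec_register(int invariant_n, sum_fn fn) {
    if (invariant_n == 4) registered_n4 = fn;
}

/* Stand-in for runtime-API dispatch: use the specialized variant when
 * the invariant holds, otherwise fall back to the generic routine. */
static int spec_dispatch(const int *vals, int n) {
    if (n == 4 && registered_n4 != NULL)
        return registered_n4(vals, n);
    return sum_generic(vals, n);
}
```

The fallback path is what makes the specialization safe across the full range of possibilities: workloads outside the invariant still get correct, if unspecialized, behavior.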
The final step is to perform comprehensive correctness and performance tests.
The result: a faster DBMS on any workload that touches the routine that was specialized.