

Field specialization can be applied to both legacy and new code
As we work with our customers on their ever-evolving code bases, we find that new features and components added to those code bases often bring more specialization opportunities. Our own experience with PostgreSQL supports this observation. As an example, PostgreSQL 9.6 introduced more native support for the JSON datatype, specifically a new indexing scheme and the associated logic. Hotspot analysis on workloads utilizing this datatype revealed new specialization opportunities.


Field specialization comes in many guises
We have developed many types of specialization techniques, in three basic categories: some eliminate unnecessary instruction execution at runtime, some minimize instruction-cache pressure, and some reduce memory-access overheads for data-intensive computation. The performance benefits of individual specializations are additive; when many specializations are applied aggressively in concert, the overall performance improvement can be maximized. We use Q7 from TPC-H as a


Adding spiffs exposes further specialization opportunities
The first step in applying field specialization is to look for runtime hotspots with profiling tools. Once field specialization is applied to address the top few hotspot routines, their overhead will typically drop significantly. We can then start the field specialization process once again: since the original hotspots have been addressed, other routines will rise in the profile, becoming new hotspots that present further opportunities for field specialization. Let’s look a


Field Specialization and the Cloud: A Great Combination
Field specialization and cloud applications go well together, in four advantageous ways. First, field specialization speeds up DBMSes and other data-intensive enterprise applications. This results in a more responsive cloud service while reducing hardware-provisioning costs, benefiting both the cloud vendor and end users. More efficient applications also translate directly into lower energy consumption, thereby reducing operating costs. Second, continuous specialization works espec


How big are spiffs, anyway?
Recall that a spiff is code added to the DBMS source that, given values of invariants known in cold code, generates specialized code (termed “speccode”) that will be called in hot code when the query is executed. How big are spiffs? There are several senses of “how big” that are important. The first is the amount of original code that is specialized by each spiff. In our experience over trials on four proprietary DBMSes and four open-source DBMSes, the code to be specialize


How is field specialization done, exactly?
Our field specialization process proceeds through nine basic steps, supported along the way by proprietary tools that can contend with code bases comprising millions of lines of source code. Starting from representative workloads, dynamic analysis creates a runtime profile to identify hot routines. A particular hotspot function may be invoked from multiple different calling contexts. Not all such invocations will cause the function to be a hotspot, e.g., some contexts ma