Field specialization comes in many guises
We have developed many types of specialization techniques, falling into three basic categories:
Some eliminate unnecessary instruction execution at runtime (sketched below),
some minimize instruction cache pressure, and
some reduce memory access overheads for data-intensive computation.
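To make the first category concrete, here is a minimal, hypothetical C sketch (not Postgres source and not actual spiff code; the names ColMeta, copy_col_generic, and copy_col_int4 are invented for illustration). A generic routine re-reads runtime-invariant per-column metadata for every tuple, whereas a variant specialized to one table's schema has those values folded in, eliminating the branch and the metadata loads.

```c
#include <string.h>

/* Hypothetical per-column metadata, invariant for a given table. */
typedef struct {
    int len;        /* byte width of the column value          */
    int byval;      /* 1 if passed by value, 0 if by reference */
} ColMeta;

/* Generic copy: consults the invariant metadata on every call. */
static void copy_col_generic(char *dst, const char *src, const ColMeta *m)
{
    if (m->byval)
        memcpy(dst, src, m->len);              /* branch + length load */
    else
        memcpy(dst, *(char * const *)src, m->len);
}

/* Copy specialized for a 4-byte by-value column: the branch and the
 * metadata loads disappear; the compiler emits a single 32-bit move. */
static void copy_col_int4(char *dst, const char *src)
{
    memcpy(dst, src, 4);
}
```

The specialized variant executes strictly fewer instructions per tuple, which is the essence of the first category above.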
The performance benefits of individual specializations are largely additive, so applying many specializations aggressively and in concert maximizes the overall improvement.
We use query Q7 from TPC-H as an example.
This query consists of table scans, joins, predicates, and aggregates. These query operators present different types of bottlenecks: some stem from instruction execution and others from memory access, such as data cache misses. For this query, four kinds of spiffs were relevant: table spiffs and hash-join spiffs, which address instruction execution overhead and memory access overhead, respectively, and predicate spiffs and aggregate spiffs, both of which address instruction execution overhead.
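To suggest how a hash-join spiff can address memory access overhead, here is another hypothetical C sketch (again, not the actual spiff code; Bucket, probe_generic, and probe_int4 are invented names). A generic probe hashes through a function pointer and compares keys byte-wise with a length known only at run time, while a variant specialized for a 4-byte integer key inlines the hash and collapses the comparison to a single integer test, executing fewer instructions and touching less memory per probe.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical hash-table bucket holding an inline 4-byte key. */
typedef struct Bucket {
    uint32_t key;
    struct Bucket *next;
} Bucket;

typedef uint32_t (*hash_fn)(const void *key, size_t len);

/* Generic probe: indirect call to hash the key, byte-wise comparison. */
static Bucket *probe_generic(Bucket **table, size_t nbuckets,
                             const void *key, size_t keylen, hash_fn h)
{
    Bucket *b = table[h(key, keylen) % nbuckets];
    while (b && memcmp(&b->key, key, keylen) != 0)
        b = b->next;
    return b;
}

/* Probe specialized for a 4-byte integer key: hash inlined, comparison
 * reduced to one integer equality test. */
static Bucket *probe_int4(Bucket **table, size_t nbuckets, uint32_t key)
{
    Bucket *b = table[(key * 2654435761u) % nbuckets];
    while (b && b->key != key)
        b = b->next;
    return b;
}
```

Fewer instructions and fewer bytes read per probe translate into fewer data cache misses when the hash table does not fit in cache.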
On stock Postgres, we observed that the top few routines in the execution profile all relate to these query operators. The execution time of Q7 on the TPC-H 1 GB dataset was 1,802 milliseconds. The table above illustrates (i) that spiffs target different bottlenecks and (ii) that their performance benefits are cumulative.
A third advantage is that spiffs can provide even greater benefit as the amount of data grows (assuming the same query plan is chosen). To illustrate, consider the speedups of these spiffs on the TPC-H 10 GB dataset, shown below. The execution time of Q7 on stock Postgres was 23.4 seconds.
Regardless of the workload and the runtime configuration, field specialization improves performance whenever it is used, because it targets hotspots that arise across a wide variety of workloads.