How does field specialization fit in with other approaches to optimizing DBMSs?
DBMSs are by their nature very general: they can handle whatever schema is specified, any data that is consistent with that schema, and any query or modification that is presented. This generality has enabled their proliferation and use in many domains. And through such innovations as indexing structures, concurrency control mechanisms, and sophisticated query optimization strategies, DBMSs are already quite efficient.
The primary approach DBMS vendors use to increase efficiency further is specialization: a wide range of techniques that customize the DBMS code to the specific context in which each query is evaluated, thereby avoiding the inefficiencies that generality imposes. Much of the work over the last three decades on improving DBMS performance can be characterized as specialization at various levels of granularity: at the architectural level (e.g., a columnar store), at the component level (e.g., adding a spatial index), at the user-specified level (e.g., user-defined triggers), and at the operator level (e.g., a new kind of join).
Dataware’s technology for DBMS optimization, called field specialization, applies at a finer granularity than any of these other specializations. Field specialization works at the machine-code level, removing individual machine instructions that prior analysis has shown to be unnecessary given the values of variables identified as invariant. Because it operates below the source level, field specialization can be applied to code that alters the overall DBMS architecture, modifies or adds a DBMS component, evaluates constructs such as user-specified triggers, or implements improved or additional query evaluation operators.
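To make the idea concrete, here is a minimal sketch in C (a hypothetical illustration, not Dataware's actual code or tooling). A generic comparison routine takes a type tag that is invariant for the duration of a query; a specialized variant for the integer case drops the dispatch instructions entirely, which is the kind of removal field specialization performs directly on machine code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical column types; `col_type` is invariant while one query runs. */
typedef enum { TYPE_INT, TYPE_TEXT } col_type;

/* Generic routine: the branch on `type` is re-evaluated on every call,
 * even though `type` never changes during query evaluation. */
static int compare_generic(col_type type, const void *a, const void *b) {
    if (type == TYPE_INT)
        return *(const int *)a - *(const int *)b;
    return strcmp((const char *)a, (const char *)b);
}

/* Specialized variant: with `type` known to be TYPE_INT, the dispatch
 * disappears. Field specialization achieves an analogous effect at the
 * machine-instruction level, without requiring a source-level rewrite. */
static int compare_int_specialized(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}
```

Invoked in an inner loop over millions of tuples, the specialized variant does strictly less work per call while producing identical results.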
The result is that field specialization yields multiplicative performance gains when used in conjunction with these coarser-grained approaches.