Abstract:
A query optimizer optimizes a query against a partitioned database table by determining common characteristics of the partitions and generating a virtual maintained temporary index that spans multiple partitions. Using the virtual maintained temporary index allows the query optimizer to generate an access plan based on that index, which relieves the optimizer of having to individually optimize access to each partition for partitions that share common characteristics.
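As a rough illustration of the idea in this abstract, the following Python sketch groups partitions that share a characteristic (here, the same indexed columns) and represents each group as a single virtual index object the optimizer could plan against; the class, field names, and grouping key are assumptions for illustration, not the patented implementation.

from collections import defaultdict

class VirtualTemporaryIndex:
    def __init__(self, key_columns, partitions):
        self.key_columns = key_columns    # columns the virtual index spans
        self.partitions = partitions      # partitions sharing these characteristics

def build_virtual_indexes(partitions):
    # Group partitions by a shared characteristic (their indexed columns),
    # so the optimizer can plan once per group instead of once per partition.
    groups = defaultdict(list)
    for p in partitions:
        groups[tuple(sorted(p["indexed_columns"]))].append(p["name"])
    return [VirtualTemporaryIndex(list(cols), names) for cols, names in groups.items()]

partitions = [
    {"name": "sales_2023_q1", "indexed_columns": ["region", "order_date"]},
    {"name": "sales_2023_q2", "indexed_columns": ["region", "order_date"]},
    {"name": "sales_archive", "indexed_columns": ["order_id"]},
]
for vi in build_virtual_indexes(partitions):
    print(vi.key_columns, "->", vi.partitions)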
Abstract:
Methods, systems, and products are provided for code path tracking. Embodiments include identifying an instrumented trace point in software code to be path tracked; identifying a function executed at the instrumented trace point in the software code; identifying parameters for the function executed at the instrumented trace point; and recording a description of the function, the parameters, and the result of the execution of the function using the parameters.
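A minimal Python sketch of the recording step described here, using a decorator as the instrumented trace point; the decorator-based approach and all identifiers are illustrative assumptions rather than the claimed method.

import functools

trace_log = []

def trace_point(func):
    # Wrap a function so each call records its description, parameters, and result.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        trace_log.append({
            "function": func.__qualname__,   # description of the executed function
            "parameters": (args, kwargs),    # parameters used at the trace point
            "result": result,                # result of executing the function
        })
        return result
    return wrapper

@trace_point
def apply_discount(price, rate):
    return price * (1 - rate)

apply_discount(100.0, 0.15)
print(trace_log)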
Abstract:
A plan for executing a query in a relational database is obtained. A query for accessing data in the relational database is received. The query specifies N tables in the relational database from which data is to be retrieved. A determination is made whether the syntax of the query matches the syntax of a plan in a plan cache for executing the query. Responsive to the syntax of the query matching the syntax of a plan in the plan cache, matches are identified between the generic table formats of the N tables specified in the query and the generic table formats of the N tables specified in the plan. The plan for executing the query is obtained based on whether the syntax of the query matches the syntax of the plan and based on the identified matches between the generic table formats of the N tables specified in the query and those of the N tables specified in the plan.
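A minimal sketch of the two-step match this abstract describes, assuming the plan cache is a list of entries storing each plan's normalized syntax and the column name/type signatures of its N tables; the data layout and helper names are assumptions.

def generic_format(table):
    # Reduce a table to its generic format: ordered (column name, type) pairs.
    return tuple((c["name"], c["type"]) for c in table["columns"])

def lookup_plan(plan_cache, query_syntax, query_tables):
    for plan in plan_cache:
        # Step 1: the query's syntax must match the cached plan's syntax.
        if plan["syntax"] != query_syntax:
            continue
        # Step 2: the N tables in the query must match, by generic format,
        # the N tables the plan was built for.
        query_formats = sorted(generic_format(t) for t in query_tables)
        plan_formats = sorted(generic_format(t) for t in plan["tables"])
        if query_formats == plan_formats:
            return plan   # reuse the cached plan
    return None           # no reusable plan; optimize from scratch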
Abstract:
Embodiments of the invention provide techniques for optimizing database queries for energy efficiency. In general, a query optimizer is configured to compare energy requirements of query plans, and to select a query plan requiring minimal energy to execute. In one embodiment, the query optimizer may also compare time performance of the query plans, and may select a query plan by matching to a user preference for a relative priority between energy requirements and time performance.
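One way to read the selection step is as a weighted choice between estimated energy and estimated time per plan; the sketch below assumes both estimates are available and that the user preference is a number in [0, 1], which is an illustration rather than the claimed method.

def choose_plan(plans, energy_priority=1.0):
    # plans: dicts with estimated "energy_joules" and "time_seconds".
    # energy_priority = 1.0 selects purely on energy; 0.0 purely on time.
    min_energy = min(p["energy_joules"] for p in plans)
    min_time = min(p["time_seconds"] for p in plans)
    def score(p):
        # Normalize so energy and time are comparable before weighting.
        return (energy_priority * p["energy_joules"] / min_energy
                + (1.0 - energy_priority) * p["time_seconds"] / min_time)
    return min(plans, key=score)

candidates = [
    {"name": "index_scan", "energy_joules": 40.0, "time_seconds": 2.0},
    {"name": "table_scan", "energy_joules": 90.0, "time_seconds": 1.2},
]
print(choose_plan(candidates, energy_priority=0.8)["name"])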
Abstract:
A query facility for database queries dynamically determines whether selective portions of a database table are likely to benefit from separate query execution strategies, and constructs appropriate separate execution strategies accordingly. Preferably, the database contains at least one relatively large table comprising multiple partitions, each sharing the definitional structure of the table and containing a different respective discrete subset of the table records. The query facility compares metadata for different partitions to determine whether sufficiently large differences exist among the partitions, and in appropriate cases selects one or more partitions for separate execution strategies. Preferably, partitions are ranked for separate evaluation using a weighting formula which takes into account: (a) the number of indexes for the partition, (b) the recency of change activity, and (c) the size of the partition.
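The abstract names three ranking factors but not the formula itself; the sketch below combines them as a simple weighted sum, with the weights and scaling chosen purely for illustration.

import time

def rank_partitions(partitions, w_indexes=1.0, w_recency=1.0, w_size=1.0):
    # Each partition dict carries "index_count", "last_change_epoch", "row_count".
    now = time.time()
    def weight(p):
        # Recency decays with the number of days since the last change.
        recency = 1.0 / (1.0 + (now - p["last_change_epoch"]) / 86400.0)
        return (w_indexes * p["index_count"]
                + w_recency * recency
                + w_size * p["row_count"] / 1_000_000.0)
    # Higher-weighted partitions are the strongest candidates for a
    # separate execution strategy.
    return sorted(partitions, key=weight, reverse=True)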
Abstract:
Embodiments of the invention provide techniques for aggregating database queries for energy efficiency. In one embodiment, queries received by a DBMS are aggregated and staged according to hard-disk drives required for query execution. Each group of queries accessing a given drive may be dispatched for execution together. Further, the queries received by a DBMS may be matched to patterns of previously received queries. The matching patterns may be used to predict other queries which are likely to be received by the DBMS. The received queries may be staged to be dispatched with the predicted queries. By aggregating queries to be executed, access to each hard-disk drive may be optimized, thus reducing the overall energy consumption required for executing the queries.
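A minimal sketch of the grouping-and-dispatch step, assuming each staged query already knows which drives it touches; the pattern matching and query prediction mentioned in the abstract are omitted, and all names are assumptions.

from collections import defaultdict

def stage_queries(queries):
    # Group incoming queries by the hard-disk drive each one accesses.
    staged = defaultdict(list)
    for q in queries:
        for drive in q["drives"]:
            staged[drive].append(q["sql"])
    return staged

def dispatch(staged):
    # Execute every query touching a given drive as one batch, so the drive
    # is activated once per batch rather than once per query.
    for drive, batch in staged.items():
        print(f"drive {drive}: executing {len(batch)} queries together")

dispatch(stage_queries([
    {"sql": "SELECT * FROM orders", "drives": ["hdd0"]},
    {"sql": "SELECT * FROM invoices", "drives": ["hdd0"]},
    {"sql": "SELECT * FROM archive", "drives": ["hdd2"]},
]))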
Abstract:
Embodiments of the invention provide techniques for maintaining I/O value caches for database queries. Each maintained cache may be configured for use with a particular database query. Each cache may be persistently maintained in a system, meaning the cache is not automatically deleted after some period of time, and may thus be used to process subsequent instances of the same query. By use of the maintained cache, executing subsequent instances of the query may be avoided, thus saving time and system resources. Further, the maintained cache may be adapted to process other queries having similar characteristics to the initial query. The data included in each cache may be refreshed as required by changes to the underlying data.
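A minimal sketch of a persistently maintained per-query result cache that is refreshed when its underlying tables change; the class structure, the invalidate-on-change policy, and all names are assumptions about one way such a cache could be organized.

class MaintainedCache:
    def __init__(self):
        self._entries = {}   # query text -> {"tables": set, "rows": list}

    def get(self, query):
        entry = self._entries.get(query)
        return entry["rows"] if entry else None

    def put(self, query, tables, rows):
        # The entry is kept until the underlying data changes,
        # not expired automatically after some period of time.
        self._entries[query] = {"tables": set(tables), "rows": rows}

    def on_table_changed(self, table):
        # Refresh (here: drop) every cached result that depends on the changed table.
        for query, entry in list(self._entries.items()):
            if table in entry["tables"]:
                del self._entries[query]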