Abstract:
Techniques are provided for more efficiently using the bandwidth of the I/O path between a CPU and volatile memory during the performance of database operations. Relational data from a relational table is stored in volatile memory as column vectors, where each column vector contains values for a particular column of the table. A binary-comparable format may be used to represent each value within a column vector, regardless of the data type associated with the column. The column vectors may be compressed and/or encoded while in volatile memory, and decompressed/decoded on the fly within the CPU. Alternatively, the CPU may be designed to perform operations directly on the compressed and/or encoded column vector data. In addition, techniques are described that enable the CPU to perform vector processing operations on the column vector values.
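To make the column-vector idea concrete, the following is a minimal sketch of a dictionary-encoded column vector that is filtered without decoding every value. The class name, the choice of dictionary encoding, and the use of NumPy for vectorized comparison are illustrative assumptions, not the patented format.

```python
# Minimal sketch: a dictionary-encoded column vector and a vectorized filter.
import numpy as np

class ColumnVector:
    def __init__(self, values):
        # Encode column values as small integer codes over a sorted dictionary,
        # so comparisons run on fixed-width, binary-comparable codes.
        self.dictionary = np.array(sorted(set(values)))
        self.codes = np.searchsorted(self.dictionary, np.array(values))

    def filter_equals(self, value):
        # Translate the predicate constant into the code domain once, then
        # compare every code in a single vectorized pass over the column.
        code = np.searchsorted(self.dictionary, value)
        if code >= len(self.dictionary) or self.dictionary[code] != value:
            return np.zeros(len(self.codes), dtype=bool)
        return self.codes == code

# Usage: store the "state" column as a column vector and evaluate
# WHERE state = 'CA' against the encoded codes.
states = ColumnVector(["CA", "NY", "CA", "TX", "NY", "CA"])
matching_rows = np.flatnonzero(states.filter_equals("CA"))
print(matching_rows)  # -> [0 2 5]
```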
Abstract:
A shared-nothing database system is provided in which parallelism and workload balancing are increased by assigning the rows of each table to “slices”, and storing multiple copies (“duplicas”) of each slice across the persistent storage of multiple nodes of the shared-nothing database system. When the data for a table is distributed among the nodes of a shared-nothing system in this manner, requests to read data from a particular row of the table may be handled by any node that stores a duplica of the slice to which the row is assigned. For each slice, a single duplica of the slice is designated as the “primary duplica”. All DML operations (e.g., inserts, deletes, updates) that target a particular row of the table are performed by the node that has the primary duplica of the slice to which the particular row is assigned. The changes made by the DML operations are then propagated from the primary duplica to the other duplicas (“secondary duplicas”) of the same slice.
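The routing described above can be sketched as follows. The node names, the hash-based slice assignment, and the placement map are illustrative assumptions; they stand in for whatever assignment and propagation mechanism the system actually uses.

```python
# Minimal sketch: routing reads and DML by slice and duplica placement.
import random

NODES = ["node1", "node2", "node3"]
NUM_SLICES = 8

# Each slice has one primary duplica and secondary duplicas on other nodes.
placement = {
    s: {"primary": NODES[s % len(NODES)],
        "secondaries": [n for n in NODES if n != NODES[s % len(NODES)]]}
    for s in range(NUM_SLICES)
}

def slice_of(row_key):
    # Rows are assigned to slices, here simply by hashing the row key.
    return hash(row_key) % NUM_SLICES

def node_for_read(row_key):
    # A read may be served by any node that holds a duplica of the slice.
    p = placement[slice_of(row_key)]
    return random.choice([p["primary"]] + p["secondaries"])

def node_for_dml(row_key):
    # All DML on a row goes to the node holding the primary duplica; that
    # node later propagates the change to the secondary duplicas.
    return placement[slice_of(row_key)]["primary"]

print(node_for_read("order#42"), node_for_dml("order#42"))
```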
Abstract:
A computer analyzes a relational schema of a database to generate a data entry schema and encodes the data entry schema as JSON. The data entry schema is sent to a database client so that the client can validate entered data before the entered data is sent for storage. From the client, entered data is received that conforms to the data entry schema because the client used the data entry schema to validate the entered data before sending the data. The entered data that conforms to the data entry schema is then stored into the database. The data entry schema and the relational schema have corresponding constraints on a datum to be stored, such as a range limit for a database column or an express set of distinct valid values. A constraint may specify a format mask or regular expression that values in the column should conform to, or a correlation between values of multiple columns.
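A small sketch of what such a JSON-encoded data entry schema and the client-side check might look like is shown below. The table name, constraint keywords, and validation helper are illustrative assumptions, not the format defined by the abstract.

```python
# Minimal sketch: a JSON data entry schema and client-side validation.
import json
import re

data_entry_schema = json.loads("""
{
  "table": "employees",
  "columns": {
    "salary": {"type": "number", "min": 0, "max": 500000},
    "status": {"type": "string", "allowed": ["ACTIVE", "ON_LEAVE", "RETIRED"]},
    "email":  {"type": "string", "pattern": "^[^@]+@[^@]+[.][^@]+$"}
  }
}
""")

def validate_row(row, schema):
    # The client applies the schema's constraints before sending the row.
    errors = []
    for name, rules in schema["columns"].items():
        value = row.get(name)
        if value is None:
            errors.append(f"{name} is missing")
            continue
        if "min" in rules and value < rules["min"]:
            errors.append(f"{name} below minimum")
        if "max" in rules and value > rules["max"]:
            errors.append(f"{name} above maximum")
        if "allowed" in rules and value not in rules["allowed"]:
            errors.append(f"{name} not an allowed value")
        if "pattern" in rules and not re.match(rules["pattern"], str(value)):
            errors.append(f"{name} does not match required format")
    return errors

print(validate_row({"salary": 90000, "status": "ACTIVE",
                    "email": "a@example.com"}, data_entry_schema))  # -> []
```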
Abstract:
Techniques herein use in-memory column vectors to process data that is external to a database management system (DBMS) and logically join the external data with data that is native to the DBMS. In an embodiment, a computer maintains a data dictionary for native data that is durably stored in a DBMS and external data that is not durably stored in the DBMS. From a client through a connection to the DBMS, the computer receives a query. The computer loads the external data into an in-memory column vector that resides in random access memory of the DBMS. Based on the query and the data dictionary, the DBMS executes a data join of the in-memory column vector with the native data. To the client through said connection, the computer returns results of the query based on the data join.
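The shape of that join can be sketched as follows. The CSV source, column names, and the hash-join implementation are illustrative assumptions standing in for whatever external source and join strategy the DBMS actually uses.

```python
# Minimal sketch: load external data into in-memory column vectors and
# join it with native rows.
import csv
import io

# External data, e.g. a CSV stream that is not durably stored in the DBMS.
external_csv = io.StringIO("dept_id,budget\n10,250000\n20,180000\n")
rows = list(csv.DictReader(external_csv))

# Load the external column values into in-memory column vectors.
ext_dept_id = [int(r["dept_id"]) for r in rows]
ext_budget = [int(r["budget"]) for r in rows]

# Native data already managed by the DBMS: (employee, dept_id) pairs.
native_employees = [("alice", 10), ("bob", 20), ("carol", 10)]

# Hash join: build a lookup over the external column vector, probe with the
# native rows, and return the joined result to the client.
budget_by_dept = dict(zip(ext_dept_id, ext_budget))
result = [(name, dept, budget_by_dept[dept])
          for name, dept in native_employees if dept in budget_by_dept]
print(result)  # [('alice', 10, 250000), ('bob', 20, 180000), ('carol', 10, 250000)]
```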
Abstract:
A hashing scheme includes a cache-friendly, latchless, non-blocking, dynamically resizable hash index with constant-time lookup operations that is also amenable to fast lookups via remote memory access. Specifically, the hashing scheme provides each of the following features: latchless reads, fine-grained lightweight locks for writers, non-blocking dynamic resizability, cache-friendly access, constant-time lookup operations, and amenability to remote memory access via the RDMA protocol through one-sided read operations, as well as non-RDMA access.
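One way to picture latchless reads alongside fine-grained writer locks is the version-validated bucket below. This is a simplified single-process sketch under assumed semantics: dynamic resizing, cache-line layout, and RDMA access are omitted, and the class and field names are invented for illustration.

```python
# Simplified sketch: latchless reads validate a per-bucket version instead
# of taking a latch; writers take a fine-grained per-bucket lock and bump
# the version around each update.
import threading

class Bucket:
    def __init__(self):
        self.version = 0              # even = stable, odd = write in progress
        self.lock = threading.Lock()
        self.entries = {}             # key -> value

class HashIndex:
    def __init__(self, num_buckets=64):
        self.buckets = [Bucket() for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        b = self._bucket(key)
        with b.lock:                  # fine-grained writer lock, per bucket
            b.version += 1            # mark bucket unstable
            b.entries[key] = value
            b.version += 1            # mark bucket stable again

    def get(self, key):
        b = self._bucket(key)
        while True:                   # latchless read: retry if torn
            before = b.version
            if before % 2 == 0:
                value = b.entries.get(key)
                if b.version == before:
                    return value

idx = HashIndex()
idx.put("k1", 100)
print(idx.get("k1"))  # -> 100
```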
Abstract:
A method, apparatus, and system for OZIP, a data compression and decompression codec, is provided. OZIP utilizes a fixed size static dictionary, which may be generated from a random sampling of input data to be compressed. Compression by direct token encoding to the static dictionary streamlines the encoding and avoids expensive conditional branching, facilitating hardware implementation and high parallelism. By bounding token definition sizes and static dictionary sizes to hardware architecture constraints such as word size or processor cache size, hardware implementation can be made fast and cost effective. For example, decompression may be accelerated by using SIMD instruction processor extensions. A highly granular block mapping in optional stored metadata allows compressed data to be accessed quickly at random, bypassing the processing overhead of dynamic dictionaries. Thus, OZIP can support low latency random data access for highly random workloads, such as for OLTP systems.
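The essence of compressing against a static dictionary built from a random sample can be sketched as follows. The word-level tokenization, dictionary size, and literal-escape scheme are illustrative assumptions, not the OZIP format itself.

```python
# Minimal sketch: build a fixed-size static dictionary from a random sample
# of the input, then compress by direct token encoding against it.
import random
from collections import Counter

def build_static_dictionary(sample_words, max_entries=255):
    # Keep the most frequent words seen in the sample; the dictionary is
    # fixed before encoding begins and never grows during compression.
    return [word for word, _ in Counter(sample_words).most_common(max_entries)]

def compress(words, dictionary):
    index = {w: i for i, w in enumerate(dictionary)}
    out = []
    for w in words:
        if w in index:
            out.append(("TOKEN", index[w]))   # direct token encoding
        else:
            out.append(("LITERAL", w))        # word absent from the dictionary
    return out

def decompress(tokens, dictionary):
    return [dictionary[i] if kind == "TOKEN" else i for kind, i in tokens]

data = "the quick brown fox jumps over the lazy dog the fox".split()
dictionary = build_static_dictionary(random.sample(data, 6))
encoded = compress(data, dictionary)
assert decompress(encoded, dictionary) == data
```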
Abstract:
A method and apparatus for efficiently processing data in various formats in a single instruction multiple data (“SIMD”) architecture is presented. Specifically, a method to unpack fixed-width bit values in a bit stream to a fixed-width byte stream in a SIMD architecture is presented. A method to unpack variable-length byte-packed values in a byte stream in a SIMD architecture is presented. A method to decompress a run-length-encoded compressed bit-vector in a SIMD architecture is presented. A method to return the offset of each bit set to one in a bit-vector in a SIMD architecture is presented. A method to fetch bits from a bit-vector at specified offsets relative to a base in a SIMD architecture is presented. A method to compare values stored in two SIMD registers is presented.
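As a scalar reference for the first operation listed above, the sketch below unpacks fixed-width bit values from a packed bit stream into one byte per value. A SIMD implementation would process many values per instruction; this plain-Python version only illustrates the unpacking logic, and the MSB-first packing convention is an assumption.

```python
# Scalar reference sketch: unpack fixed-width bit values into a byte stream.
def unpack_fixed_width(packed: bytes, bit_width: int, count: int) -> bytes:
    out = bytearray()
    bit_pos = 0
    for _ in range(count):
        value = 0
        for i in range(bit_width):
            byte = packed[(bit_pos + i) // 8]
            bit = (byte >> (7 - (bit_pos + i) % 8)) & 1
            value = (value << 1) | bit
        out.append(value)             # each unpacked value occupies one byte
        bit_pos += bit_width
    return bytes(out)

# Usage: three 3-bit values 5, 2, 7 packed MSB-first into two bytes.
packed = bytes([0b10101011, 0b10000000])
print(list(unpack_fixed_width(packed, 3, 3)))  # -> [5, 2, 7]
```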
Abstract:
A method and apparatus are provided for optimizing queries received by a database system that relies on an intelligent data storage server to manage storage for the database system. The storage manager stores compression units in hybrid columnar format, evaluates simple predicates, and returns only data blocks containing rows that satisfy those predicates. The returned data blocks are not necessarily stored persistently on disk. That is, the storage manager is not limited to returning disk block images. The hybrid columnar format enables optimizations that provide better performance when processing typical database workloads, including both fetching rows by identifier and performing table scans.
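A small sketch of the storage-side predicate evaluation is shown below. The block layout, column names, and predicate representation are illustrative assumptions; the point is only that whole blocks with no qualifying rows are skipped before anything is returned to the database server.

```python
# Minimal sketch: the storage side evaluates a simple predicate against
# column-wise compression units and returns only blocks with matching rows.

# Each "compression unit" holds a block id and its columns stored column-wise.
compression_units = [
    {"block": 1, "columns": {"order_id": [1, 2, 3], "amount": [10, 200, 30]}},
    {"block": 2, "columns": {"order_id": [4, 5, 6], "amount": [15, 25, 35]}},
]

def scan_with_predicate(units, column, predicate):
    # Check the predicate against the column vector of each unit and skip
    # whole blocks with no qualifying rows.
    returned = []
    for unit in units:
        if any(predicate(v) for v in unit["columns"][column]):
            returned.append(unit["block"])
    return returned

# Usage: only blocks containing a row with amount > 100 come back.
print(scan_with_predicate(compression_units, "amount", lambda v: v > 100))  # -> [1]
```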