HARDWARE-ACCELERATED COROUTINES FOR LINKED DATA STRUCTURES

    Publication number: US20240143406A1

    Publication date: 2024-05-02

    Application number: US17976969

    Application date: 2022-10-31

    CPC classification number: G06F9/5044 G06F9/485 G06F9/5016 G06F9/505

    Abstract: A computer assigns many threads to a hardware pipeline that contains a sequence of hardware stages that include a computing stage, a suspending stage, and a resuming stage. Each cycle of the hardware pipeline can concurrently execute a respective distinct stage of the sequence of hardware stages for a respective distinct thread. A read of random access memory (RAM) can be requested for a thread only during the suspending stage. While a previous state of a finite state machine (FSM) that implements a coroutine of the thread is in the suspending stage, a read of RAM is requested, and the thread is unconditionally suspended. While the coroutine of the thread is in the resuming stage, an asynchronous response from RAM is correlated to the thread and to a next state of the FSM. While in the computing stage, the next state of the FSM executes based on the asynchronous response from RAM.
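The staged suspend/resume flow in the abstract can be sketched in software. This is a minimal simulation, not the hardware design: the state names, the fixed RAM latency, and the toy linked data are all assumptions made for illustration. A read is issued only in the suspending stage, the thread is then unconditionally suspended, and the asynchronous response is later correlated back to the thread before its next FSM state computes.

```python
# Hypothetical software sketch of the per-thread coroutine FSM described
# above (state names, LATENCY, and the toy RAM contents are assumptions).
from collections import deque

RAM = {0: "head", 1: "next", 2: "tail"}   # toy linked structure in RAM
LATENCY = 2                               # cycles until a RAM response arrives

class ThreadFSM:
    def __init__(self, tid, addr):
        self.tid, self.addr = tid, addr
        self.state = "SUSPENDING"         # reads may only be issued here
        self.result = None

pending = deque()                         # in-flight reads: (ready_cycle, fsm)
threads = [ThreadFSM(t, t) for t in range(3)]
done = []

cycle = 0
while threads or pending:
    # Suspending stage: request the read, then unconditionally suspend.
    if threads:
        fsm = threads.pop(0)
        pending.append((cycle + LATENCY, fsm))
    # Resuming stage: correlate an arrived asynchronous response to its
    # thread and to the next FSM state.
    if pending and pending[0][0] <= cycle:
        _, fsm = pending.popleft()
        fsm.result = RAM[fsm.addr]        # asynchronous response from RAM
        fsm.state = "COMPUTING"           # next state executes on the response
        done.append((fsm.tid, fsm.result))
    cycle += 1

print(done)
```

Because each thread is suspended while its read is outstanding, other threads occupy the pipeline stages in the meantime, which is the latency-hiding effect the hardware pipeline provides.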

    Efficient usage of one-sided RDMA for linear probing

    Publication number: US11966356B2

    Publication date: 2024-04-23

    Application number: US18088353

    Application date: 2022-12-23

    CPC classification number: G06F15/17331 G06F16/245 G06F16/2455

    Abstract: Systems and methods for reducing latency of probing operations of remotely located linear hash tables are described herein. In an embodiment, a system receives a request to perform a probing operation on a remotely located linear hash table based on a key value. Prior to performing the probing operation, the system dynamically predicts a number of slots for a single read of the linear hash table to minimize total cost for an average probing operation. The system determines a hash value based on the key value and determines a slot of the linear hash table to which the hash value corresponds. After predicting the number of slots, the system issues an RDMA request to perform a read of the predicted number of slots from the linear hash table starting at the slot to which the hash value corresponds.
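The probing flow above can be sketched as follows. This is an illustrative local simulation only: the cost-model stand-in in `predict_read_size`, the table size, and the `rdma_read` helper are assumptions, with the one-sided RDMA read modeled as a single contiguous multi-slot fetch.

```python
# Hedged sketch of linear-hash probing with one predicted multi-slot read
# (the prediction heuristic and table layout are illustrative assumptions).
TABLE_SIZE = 8
table = [None] * TABLE_SIZE               # simulated remote linear hash table

def insert(key, value):
    s = hash(key) % TABLE_SIZE
    while table[s] is not None:
        s = (s + 1) % TABLE_SIZE          # linear probing on insert
    table[s] = (key, value)

def predict_read_size(load_factor):
    # Toy stand-in for the dynamic prediction: read more slots when the
    # table is fuller, so one RDMA read usually covers the probe chain.
    return 1 if load_factor < 0.5 else 4

def rdma_read(start, n):
    # One-sided read of n contiguous slots (wrapping), as a single request.
    return [table[(start + i) % TABLE_SIZE] for i in range(n)]

def probe(key):
    s = hash(key) % TABLE_SIZE            # slot the hash value maps to
    load = sum(x is not None for x in table) / TABLE_SIZE
    n = predict_read_size(load)           # predicted before the read is issued
    for entry in rdma_read(s, n):
        if entry is not None and entry[0] == key:
            return entry[1]
    return None                           # further reads would follow (omitted)

insert(3, "x")
insert(5, "y")
print(probe(3))
```

Reading several slots in one request trades slightly larger transfers for fewer round trips, which is why the read size is predicted to minimize the average total probing cost.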

    PRODUCING NATIVELY COMPILED QUERY PLANS BY RECOMPILING EXISTING C CODE THROUGH PARTIAL EVALUATION

    Publication number: US20240095249A1

    Publication date: 2024-03-21

    Application number: US17948821

    Application date: 2022-09-20

    CPC classification number: G06F16/24562 G06F8/41 G06F16/24542

    Abstract: In an embodiment, a database management system (DBMS) hosted by a computer receives a request to execute a database statement and responsively generates an interpretable execution plan that represents the database statement. The DBMS decides whether execution of the database statement will or will not entail interpreting the interpretable execution plan and, if not, the interpretable execution plan is compiled into object code based on partial evaluation. In that case, the database statement is executed by executing the object code of the compiled plan, which provides acceleration. In an embodiment, partial evaluation and Turing-complete template metaprogramming (TMP) are based on using the interpretable execution plan as a compile-time constant that is an argument for a parameter of an evaluation template.
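The patent's mechanism is C++ template metaprogramming with the plan as a compile-time constant; the following is only a hedged Python analogue of the same partial-evaluation idea, with an assumed toy plan shape. The interpreter walks the plan on every call, while the specializer walks the constant plan once and emits a closure with no plan dispatch left at run time.

```python
# Python analogue of partially evaluating an interpreter against a
# constant execution plan (plan shape and operators are assumptions).
plan = ("filter", ("scan",))              # toy interpretable execution plan

def interpret(node, rows):
    # Generic interpreter: dispatches on the plan at every execution.
    op = node[0]
    if op == "scan":
        return rows
    if op == "filter":
        return [r for r in interpret(node[1], rows) if r > 10]

def specialize(node):
    # "Partial evaluation": the plan is consumed here, once, producing
    # straight-line closures with no remaining interpretation overhead.
    op = node[0]
    if op == "scan":
        return lambda rows: rows
    if op == "filter":
        child = specialize(node[1])
        return lambda rows: [r for r in child(rows) if r > 10]

compiled = specialize(plan)               # plan dispatch happens only here
rows = [5, 11, 42]
assert interpret(plan, rows) == compiled(rows) == [11, 42]
```

In the C++ setting described by the abstract, the same split happens at compile time: the plan constant drives template instantiation, so the emitted object code is already specialized to that statement.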

    STORAGE FORMATS FOR IN-MEMORY CACHES
    Invention Application

    Publication number: US20190102391A1

    Publication date: 2019-04-04

    Application number: US15943335

    Application date: 2018-04-02

    Abstract: Techniques related to cache storage formats are disclosed. In some embodiments, a set of values is stored in a cache as a set of first representations and a set of second representations. For example, the set of first representations may be a set of hardware-level representations, and the set of second representations may be a set of non-hardware-level representations. Responsive to receiving a query to be executed over the set of values, a determination is made as to whether or not it would be more efficient to execute the query over the set of first representations than to execute the query over the set of second representations. If the determination indicates that it would be more efficient to execute the query over the set of first representations than to execute the query over the set of second representations, the query is executed over the set of first representations.
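The dual-format idea above can be sketched as follows. The concrete representation choices (native integers vs. dictionary codes) and the boolean efficiency decision are illustrative assumptions; the point is that the same values are cached in two formats and each query runs over whichever format is judged more efficient for it.

```python
# Illustrative sketch of caching one column in two representations
# (the formats and the cost decision are assumptions for illustration).
values = [3, 1, 3, 2, 3]

hw_repr = list(values)                    # first: hardware-level native ints
dictionary = sorted(set(values))          # second: dictionary-encoded codes
codes = [dictionary.index(v) for v in values]

def run_query(predicate_value, prefer_hw):
    # prefer_hw stands in for the efficiency determination made per query.
    if prefer_hw:
        # Hardware-level format: compare native values directly.
        return sum(1 for v in hw_repr if v == predicate_value)
    # Encoded format: translate the predicate once, then scan small codes.
    if predicate_value not in dictionary:
        return 0
    code = dictionary.index(predicate_value)
    return sum(1 for c in codes if c == code)

assert run_query(3, prefer_hw=True) == run_query(3, prefer_hw=False) == 3
```

Either path returns the same answer; the determination only changes which cached representation is scanned, trading decode work against memory footprint and scan speed.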
