METHOD FOR GENERIC VECTORIZED D-HEAPS
    Invention Application

    Publication No.: US20200348933A1

    Publication Date: 2020-11-05

    Application No.: US16399226

    Application Date: 2019-04-30

    Abstract: Techniques are provided for obtaining generic vectorized d-heaps for any data type for which horizontal aggregation SIMD instructions are not available, including primitive as well as complex data types. A generic vectorized d-heap comprises a prefix heap and a plurality of suffix heaps. Each suffix heap of the plurality of suffix heaps comprises a d-heap. A plurality of key values stored in the heap are split into key prefix values and key suffix values. Key prefix values are stored in the prefix heap and key suffix values are stored in the plurality of suffix heaps. Each entry in the prefix heap includes a key prefix value of the plurality of key values and a reference to the suffix heap of the plurality of suffix heaps that includes all key suffix values of the plurality of key values that share the respective key prefix value.
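
    A minimal C++ sketch of the prefix/suffix split follows; the 64-bit key width, the 32/32 split, and all names are illustrative assumptions, and std::map and std::priority_queue stand in for the prefix heap and the suffix d-heaps rather than the patented vectorized implementation.

        #include <cstdint>
        #include <functional>
        #include <iostream>
        #include <map>
        #include <queue>
        #include <vector>

        // Sketch of the prefix/suffix split: each 64-bit key is divided into a
        // 32-bit prefix and a 32-bit suffix; suffixes sharing a prefix live in
        // one suffix heap.  Scalar standard containers are used here purely
        // for illustration.
        class SplitKeyMinHeap {
        public:
            void push(std::uint64_t key) {
                std::uint32_t prefix = static_cast<std::uint32_t>(key >> 32);
                std::uint32_t suffix = static_cast<std::uint32_t>(key);
                suffixHeaps_[prefix].push(suffix);   // suffixes grouped by shared prefix
            }

            std::uint64_t top() const {              // assumes the heap is non-empty
                const auto& [prefix, suffixes] = *suffixHeaps_.begin();  // smallest prefix
                return (static_cast<std::uint64_t>(prefix) << 32) | suffixes.top();
            }

            void pop() {
                auto it = suffixHeaps_.begin();
                it->second.pop();
                if (it->second.empty()) suffixHeaps_.erase(it);
            }

        private:
            using MinHeap = std::priority_queue<std::uint32_t, std::vector<std::uint32_t>,
                                                std::greater<std::uint32_t>>;
            std::map<std::uint32_t, MinHeap> suffixHeaps_;  // key prefix -> heap of suffixes
        };

        int main() {
            SplitKeyMinHeap h;
            for (std::uint64_t key : {(42ULL << 32) | 7, (42ULL << 32) | 3, (5ULL << 32) | 9})
                h.push(key);
            std::cout << h.top() << "\n";            // smallest key: prefix 5, suffix 9
        }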

    NAMED ENTITY DISAMBIGUATION USING ENTITY DISTANCE IN A KNOWLEDGE GRAPH

    Publication No.: US20200342055A1

    Publication Date: 2020-10-29

    Application No.: US16392386

    Application Date: 2019-04-23

    Abstract: Techniques are described herein for performing named entity disambiguation. According to an embodiment, a method includes receiving input text, extracting a first mention and a second mention from the input text, and selecting, from a knowledge graph, a plurality of first candidate vertices for the first mention and a plurality of second candidate vertices for the second mention. The method also includes evaluating a score function that analyzes vertex embedding similarity between the plurality of first candidate vertices and the plurality of second candidate vertices. Based on evaluating and optimizing the score function, the method selects a first selected candidate vertex from the plurality of first candidate vertices and a second selected candidate vertex from the plurality of second candidate vertices. Further, the method includes mapping a first entry from the knowledge graph to the first mention and mapping a second entry from the knowledge graph to the second mention. In this embodiment, the first entry corresponds to the first selected candidate vertex and the second entry corresponds to the second selected candidate vertex.
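
    The candidate-selection step can be illustrated with a small C++ sketch that scores every pair of candidate-vertex embeddings and keeps the best pair; cosine similarity, the exhaustive pairwise loop, and all names are assumptions, since the abstract only specifies a score function over vertex embeddings.

        #include <cmath>
        #include <cstddef>
        #include <utility>
        #include <vector>

        // Each candidate vertex is reduced to its embedding; the pair of
        // candidates (one per mention) with the highest similarity is kept.
        using Embedding = std::vector<double>;

        double cosineSimilarity(const Embedding& a, const Embedding& b) {
            double dot = 0.0, normA = 0.0, normB = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) {
                dot   += a[i] * b[i];
                normA += a[i] * a[i];
                normB += b[i] * b[i];
            }
            return dot / (std::sqrt(normA) * std::sqrt(normB));
        }

        // Returns the indices of the best-scoring pair of candidate vertices.
        std::pair<std::size_t, std::size_t>
        selectCandidatePair(const std::vector<Embedding>& firstCandidates,
                            const std::vector<Embedding>& secondCandidates) {
            std::pair<std::size_t, std::size_t> best{0, 0};
            double bestScore = -1.0;
            for (std::size_t i = 0; i < firstCandidates.size(); ++i)
                for (std::size_t j = 0; j < secondCandidates.size(); ++j) {
                    double score = cosineSimilarity(firstCandidates[i], secondCandidates[j]);
                    if (score > bestScore) { bestScore = score; best = {i, j}; }
                }
            return best;
        }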

    Efficient data decoding using runtime specialization

    Publication No.: US10684873B2

    Publication Date: 2020-06-16

    Application No.: US16006668

    Application Date: 2018-06-12

    Abstract: Computer-implemented techniques described herein provide efficient data decoding using runtime specialization. In an embodiment, a method comprises a virtual machine executing a body of code of a dynamically typed language, wherein executing the body of code includes: querying a relational database, and in response to the query, receiving table metadata indicating data types of one or more columns of a first table in the relational database. In response to receiving the table metadata: for a first column of the one or more columns, generating decoding machine code to decode the first column based on the data type of the first column, and executing the decoding machine code to decode the first column of the one or more columns.
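
    A hedged C++ sketch of the specialization idea follows, assuming just two column types and using a per-column std::function in place of generated machine code: once the table metadata arrives, the decoder is chosen once per column and then applied to every value without further type checks.

        #include <cstdint>
        #include <cstring>
        #include <functional>

        // Real runtime specialization would emit machine code (e.g. through a
        // JIT); a decoder object built once per column stands in here, and the
        // two column types are assumptions.
        enum class ColumnType { Int32, Float64 };

        using Decoder = std::function<double(const std::uint8_t*)>;

        Decoder makeDecoder(ColumnType type) {
            switch (type) {
                case ColumnType::Int32:
                    return [](const std::uint8_t* bytes) {
                        std::int32_t value;
                        std::memcpy(&value, bytes, sizeof value);
                        return static_cast<double>(value);
                    };
                case ColumnType::Float64:
                    return [](const std::uint8_t* bytes) {
                        double value;
                        std::memcpy(&value, bytes, sizeof value);
                        return value;
                    };
            }
            return {};
        }

        // Usage sketch: Decoder decode = makeDecoder(columnType);
        //               then, for each encoded value v in the column: total += decode(v);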

    Reducing synchronization of tasks in latency-tolerant task-parallel systems

    Publication No.: US10678588B2

    Publication Date: 2020-06-09

    Application No.: US15597460

    Application Date: 2017-05-17

    Abstract: Techniques are provided for reducing synchronization of tasks in a task scheduling system. A task queue includes multiple tasks, some of which require an I/O operation while other tasks require data stored locally in memory. A single thread is assigned to process tasks in the task queue. The thread determines if a task at the head of the task queue requires an I/O operation. If so, then the thread generates an I/O request, submits the I/O request, and may place the task at (or toward) the end of the task queue. When the task reaches the head of the task queue again, the thread determines if the data requested by the I/O request is available yet. If so, then the thread processes the task. Otherwise, the thread may place the task at (or toward) the end of the task queue again.
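
    The requeueing behaviour can be sketched with a plain std::deque and a single worker loop; the Task fields and the stubbed I/O hooks are assumptions for illustration, not the patented scheduler.

        #include <deque>

        // Single worker thread; a task that needs I/O is never waited on.  The
        // I/O hooks are stubbed so the sketch compiles; a real system would
        // connect them to an asynchronous I/O interface.
        struct Task {
            bool needsIO = false;      // does this task require data from storage?
            bool ioSubmitted = false;  // has its I/O request already been issued?
        };

        void submitAsyncIO(Task&) {}                // stub: would issue a non-blocking read
        bool ioReady(const Task&) { return true; }  // stub: has the requested data arrived?
        void process(Task&) {}                      // stub: would run the task on local data

        void workerLoop(std::deque<Task>& queue) {
            while (!queue.empty()) {
                Task task = queue.front();
                queue.pop_front();
                if (task.needsIO && !task.ioSubmitted) {
                    submitAsyncIO(task);
                    task.ioSubmitted = true;
                    queue.push_back(task);          // revisit once the data may have arrived
                } else if (task.needsIO && !ioReady(task)) {
                    queue.push_back(task);          // data not available yet; try again later
                } else {
                    process(task);                  // data is local or the I/O has completed
                }
            }
        }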

    FAST DETECTION OF VERTEX-CONNECTIVITY WITH DISTANCE CONSTRAINT

    Publication No.: US20200151216A1

    Publication Date: 2020-05-14

    Application No.: US16185236

    Application Date: 2018-11-09

    Abstract: Embodiments perform real-time vertex connectivity checks in graph data representations via a multi-phase search process. This process includes an efficient first search phase using landmark connectivity data that is generated during a preprocessing phase. Landmark connectivity data maps the connectivity of a set of identified landmarks in a graph to other vertices in the graph. Upon determining that the subject vertices are not closely related via landmarks, embodiments implement a second search phase that performs a brute-force search for connectivity, between the subject vertices, among the graph's non-landmark vertices. This brute-force search prevents exploration of cyclical paths by recording the vertices on a currently-explored path in a stack data structure. The second search phase is automatically aborted upon detecting that the non-landmark vertices in the graph are over a threshold density. In this case, embodiments perform a third search phase involving either a modified breadth-first search or modified bidirectional search.
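
    The brute-force second phase can be sketched as a depth-limited DFS that records the current path to avoid cycles; the adjacency-list graph type, the recursion, and the hop-limit handling are illustrative assumptions, and the landmark phase and density-triggered fallback are omitted.

        #include <cstddef>
        #include <unordered_set>
        #include <vector>

        // Depth-limited DFS: the vertices of the current path are recorded so
        // cyclic paths are never re-entered.
        using Graph = std::vector<std::vector<int>>;   // adjacency lists

        bool dfsWithinDistance(const Graph& g, int current, int target,
                               std::size_t maxHops,
                               std::vector<int>& path,
                               std::unordered_set<int>& onPath) {
            if (current == target) return true;
            if (path.size() > maxHops) return false;   // distance constraint reached
            for (int next : g[current]) {
                if (onPath.count(next)) continue;      // would close a cycle; skip
                path.push_back(next);
                onPath.insert(next);
                if (dfsWithinDistance(g, next, target, maxHops, path, onPath)) return true;
                onPath.erase(next);
                path.pop_back();
            }
            return false;
        }

        bool connectedWithin(const Graph& g, int source, int target, std::size_t maxHops) {
            std::vector<int> path{source};
            std::unordered_set<int> onPath{source};
            return dfsWithinDistance(g, source, target, maxHops, path, onPath);
        }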

    Visualizing UI tool for graph construction and exploration with alternative action timelines

    Publication No.: US10585575B2

    Publication Date: 2020-03-10

    Application No.: US15609629

    Application Date: 2017-05-31

    Abstract: Techniques herein organize and display as branches the historical versions of filtrations of a property graph in a way that suits interactive exploration. In embodiments, a computer loads metadata that describes versions of filtration of a graph that contains vertices interconnected by edges. Based on the metadata, the computer displays, along a timeline, version indicators that each represents a respective historical version of filtration of the graph. The computer displays, responsive to receiving an interactive selection of a particular version indicator of the plurality of version indicators, a particular version of filtration of the graph that is represented by the particular version indicator. Subsequences of versions for the timeline may be organized as branches that may be interactively created and merged. Branching and merging are integrated into the general lifecycle of graph filtration. A version timeline may be presented and operated as a tool for historical navigation and speculative exploration.
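
    One possible data model for the branching timeline is sketched below in C++: each filtration version records its parent version(s), so branching and merging fall out of the parent links. All field names and the string payload are illustrative assumptions, not the patented representation.

        #include <string>
        #include <utility>
        #include <vector>

        // A branch is a new child of any historical version; a merge is a
        // version with two parents.
        struct FiltrationVersion {
            int id;
            std::vector<int> parents;       // one parent normally, two after a merge
            std::string filterDescription;  // e.g. "degree > 10 AND label = 'person'"
        };

        struct VersionTimeline {
            std::vector<FiltrationVersion> versions;

            int branchFrom(int parentId, std::string filter) {
                int id = static_cast<int>(versions.size());
                versions.push_back({id, {parentId}, std::move(filter)});
                return id;
            }

            int merge(int leftId, int rightId, std::string filter) {
                int id = static_cast<int>(versions.size());
                versions.push_back({id, {leftId, rightId}, std::move(filter)});
                return id;
            }
        };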

    Flushing by copying entries in a non-coherent cache to main memory

    Publication No.: US10509725B2

    Publication Date: 2019-12-17

    Application No.: US13791863

    Application Date: 2013-03-08

    Abstract: Techniques are provided for performing a flush operation in a non-coherent cache. In response to determining to perform a flush operation, a cache unit flushes certain data items. The flush operation may be performed in response to a lapse of a particular amount of time, such as a number of cycles, or an explicit flush instruction that does not indicate any cache entry or data item. The cache unit may store change data that indicates which entry stores a data item that has been modified but not yet flushed. The change data may be used to identify the entries that need to be flushed. In one technique, a dirty cache entry that is associated with one or more relatively recent changes is not flushed during a flush operation.
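
    The change-data bookkeeping can be sketched as follows, with a dirty flag and a last-modified cycle per entry; the cycle-based recency test, all field names, and the stubbed write-back are assumptions made for illustration.

        #include <cstdint>
        #include <vector>

        // A flush copies back only dirty entries that were not changed within
        // a recent window, then clears their change data.
        struct CacheEntry {
            std::uint64_t address = 0;
            std::uint64_t value = 0;
            bool dirty = false;                  // modified but not yet flushed
            std::uint64_t lastModifiedCycle = 0;
        };

        void writeBackToMainMemory(const CacheEntry&) {}  // stub for the actual copy

        void flush(std::vector<CacheEntry>& entries,
                   std::uint64_t currentCycle,
                   std::uint64_t recencyWindow) {
            for (CacheEntry& entry : entries) {
                if (!entry.dirty) continue;                       // nothing to copy back
                if (currentCycle - entry.lastModifiedCycle < recencyWindow)
                    continue;                                     // recently changed; keep for later
                writeBackToMainMemory(entry);
                entry.dirty = false;                              // change data cleared after flush
            }
        }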

    Defining subgraphs declaratively with vertex and edge filters

    Publication No.: US10445319B2

    Publication Date: 2019-10-15

    Application No.: US15592050

    Application Date: 2017-05-10

    Abstract: Techniques herein optimally distribute graph query processing across heterogeneous tiers. In an embodiment, a computer receives a graph query to extract a query result (QR) from a graph in a database operated by a database management system (DBMS). The graph has vertices interconnected by edges. Each vertex has vertex properties, and each edge has edge properties. The computer decomposes the graph query into filter expressions (FE's). Each FE is processed as follows. A filtration tier to execute the FE is selected from: the DBMS which sends at least the QR to a stream, a stream evaluator that processes the stream as it arrives without waiting for the entire QR to arrive and that stores at least the QR into memory, and an in-memory evaluator that identifies the QR in memory. A translation of the FE executes on the filtration tier to obtain vertices and/or edges that satisfy the FE.
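
    The per-expression tier selection can be sketched as a small decision function over the three tiers named in the abstract; the two flags and the decision order are invented heuristics for illustration, whereas a real planner would weigh selectivity, data location, and evaluator capabilities.

        #include <string>

        // Each filter expression is assigned to one of three filtration tiers.
        enum class FiltrationTier { Dbms, StreamEvaluator, InMemoryEvaluator };

        struct FilterExpression {
            std::string text;             // e.g. "vertex.age > 30" or "edge.cost < 0.5"
            bool pushableToSql = false;   // can the DBMS evaluate it directly?
            bool needsWholeGraph = false; // does it require the full graph in memory?
        };

        FiltrationTier chooseTier(const FilterExpression& fe) {
            if (fe.pushableToSql)    return FiltrationTier::Dbms;             // filter inside the database
            if (!fe.needsWholeGraph) return FiltrationTier::StreamEvaluator;  // filter the stream as it arrives
            return FiltrationTier::InMemoryEvaluator;                         // filter after loading into memory
        }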
