AUTOMATED GENERATION OF MEMORY CONSUMPTION AWARE CODE

    Publication number: US20170168779A1

    Publication date: 2017-06-15

    Application number: US14969231

    Application date: 2015-12-15

    CPC classification number: G06F8/456

    Abstract: Techniques generate memory-optimization logic for concurrent graph analysis. A computer analyzes domain-specific language logic that analyzes a graph having vertices and edges. The computer detects parallel execution regions that create thread locals. Each thread local is associated with a vertex or edge. For each parallel region, the computer calculates how much memory is needed to store one instance of each thread local. The computer generates instrumentation that determines how many threads are available and how many vertices and edges will create thread locals. The computer generates tuning logic that determines how much memory is originally needed for the parallel region based on how much memory is needed to store the one instance, how many threads are available, and graph size. The tuning logic detects a memory shortage based on the original amount of memory needed exceeding how much memory is available and accordingly adjusts the execution of the parallel region.
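
The tuning logic above amounts to a simple arithmetic check. A minimal sketch (function and parameter names are hypothetical, not from the patent): estimate the memory a parallel region needs from the per-instance size and the number of graph elements, and shrink the work per pass when the estimate exceeds what is available.

```python
# Hypothetical sketch of the tuning logic: estimate the memory a
# parallel region needs and reduce how many graph elements are
# processed at once when a shortage is detected.
def plan_parallel_region(bytes_per_instance, num_threads, num_elements,
                         available_bytes):
    """Return how many graph elements one pass may process."""
    # Original requirement: one thread-local instance per vertex/edge.
    required = bytes_per_instance * num_elements
    if required <= available_bytes:
        return num_elements  # no shortage: run in a single pass
    # Shortage: split the region into smaller batches that fit,
    # but never below one element per available thread.
    return max(num_threads, available_bytes // bytes_per_instance)

# 64-byte thread locals for a million edges against a 16 MiB budget
# forces batching; a small graph runs in one pass.
print(plan_parallel_region(64, 8, 1_000_000, 16 * 2**20))
print(plan_parallel_region(64, 8, 1_000, 16 * 2**20))
```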

    EFFICIENT METHOD FOR INDEXING DATA TRANSFERRED BETWEEN MACHINES IN DISTRIBUTED GRAPH PROCESSING SYSTEMS

    Publication number: US20170147706A1

    Publication date: 2017-05-25

    Application number: US14947382

    Application date: 2015-11-20

    CPC classification number: G06F17/30958 G06F17/30584

    Abstract: Techniques herein index data transferred during distributed graph processing. In an embodiment, a system of computers divides a directed graph into partitions. The system creates one partition per computer and distributes each partition to a computer. Each computer builds four edge lists that enumerate edges that connect the partition of the computer with a partition of a neighbor computer. Each of the four edge lists has edges of a direction, which may be inbound or outbound from the partition. Edge lists are sorted by identifier of the vertex that terminates or originates each edge. Each iteration of distributed graph analysis involves each computer processing its partition and exchanging edge data or vertex data with neighbor computers. Each computer uses an edge list to build a compactly described range of edges that connect to another partition. The computers exchange described ranges with their neighbors during each iteration.
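
Because each edge list is sorted by vertex identifier, a contiguous range of edges can be described compactly as an offset and a count instead of an enumeration. A hedged sketch of that idea (the tuple layout and function name are illustrative assumptions):

```python
from bisect import bisect_left, bisect_right

# One of the four per-neighbor edge lists, sorted by the id of the
# vertex that originates each edge.  A compact (offset, count) pair
# into the sorted list then describes every edge whose source vertex
# falls in a span of ids, without enumerating the edges.
def edge_range(sorted_edges, lo_vertex, hi_vertex):
    keys = [src for src, _dst in sorted_edges]
    start = bisect_left(keys, lo_vertex)
    end = bisect_right(keys, hi_vertex)
    return start, end - start  # compactly described range

edges = sorted([(1, 9), (1, 4), (3, 7), (5, 2), (8, 6)])
print(edge_range(edges, 1, 4))  # edges originating at vertices 1..4
```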

    ADVANCED INTERACTIVE COMMAND-LINE FRONT-END FOR GRAPH ANALYSIS SYSTEMS
    Invention application (in force)

    Publication number: US20170024192A1

    Publication date: 2017-01-26

    Application number: US14805882

    Application date: 2015-07-22

    Abstract: Systems and methods for interactive front-end graph analysis are provided herein. According to one embodiment, a front-end application receives, from a compiler, first meta-information for a particular graph analysis procedure, where the first meta-information identifies a set of input parameters for passing graph information to the particular graph analysis procedure. The front-end application registers, using the first meta-information, the particular graph analysis procedure as an available command. The front-end application also receives second meta-information that identifies, for each respective graph object of a set of one or more graph objects, a respective set of graph characteristics. In response to receiving a request to apply the particular graph analysis procedure to the set of one or more graph objects, the front-end application enforces a set of one or more constraints based on the first meta-information and the second meta-information.

    Latency-hiding context management for concurrent distributed tasks in a distributed system
    Invention grant (in force)

    Publication number: US09535756B2

    Publication date: 2017-01-03

    Application number: US14619414

    Application date: 2015-02-11

    CPC classification number: G06F9/5016 G06F9/546 G06F9/547 G06F2209/548

    Abstract: Techniques are provided for latency-hiding context management for concurrent distributed tasks. A plurality of task objects is processed, including a first task object corresponding to a first task that includes access to first data residing on a remote machine. A first access request is added to a request buffer. A first task reference identifying the first task object is added to a companion buffer. A request message including the request buffer is sent to the remote machine. A response message is received, including first response data responsive to the first access request. For each response of one or more responses of the response message, the response is read from the response message, a next task reference is read from the companion buffer, and a next task corresponding to the next task reference is continued based on the response. The first task is identified and continued.
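
The two buffers work in lockstep: the request buffer batches remote accesses while the companion buffer records, at the same index, which task each response should resume. A minimal sketch under that reading (names are illustrative, not from the patent):

```python
# Requests bound for a remote machine are batched; the companion
# buffer remembers, position by position, the task to resume when
# the matching response arrives.
request_buffer, companion_buffer = [], []

def add_request(task_ref, address):
    request_buffer.append(address)      # what to fetch remotely
    companion_buffer.append(task_ref)   # who to resume with the answer

def handle_response(responses, resume):
    # Responses arrive in request order, so pairing them with the
    # companion buffer identifies the task to continue for each one.
    for task_ref, response in zip(companion_buffer, responses):
        resume(task_ref, response)

add_request("task-1", 0x10)
add_request("task-2", 0x20)
resumed = []
handle_response(["a", "b"], lambda t, r: resumed.append((t, r)))
print(resumed)
```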

    LATENCY-HIDING CONTEXT MANAGEMENT FOR CONCURRENT DISTRIBUTED TASKS
    Invention application (in force)

    Publication number: US20160232037A1

    Publication date: 2016-08-11

    Application number: US14619414

    Application date: 2015-02-11

    CPC classification number: G06F9/5016 G06F9/546 G06F9/547 G06F2209/548

    Abstract: Techniques are provided for latency-hiding context management for concurrent distributed tasks. A plurality of task objects is processed, including a first task object corresponding to a first task that includes access to first data residing on a remote machine. A first access request is added to a request buffer. A first task reference identifying the first task object is added to a companion buffer. A request message including the request buffer is sent to the remote machine. A response message is received, including first response data responsive to the first access request. For each response of one or more responses of the response message, the response is read from the response message, a next task reference is read from the companion buffer, and a next task corresponding to the next task reference is continued based on the response. The first task is identified and continued.

    Efficiently counting triangles in a graph
    Invention grant (in force)

    Publication number: US09361403B2

    Publication date: 2016-06-07

    Application number: US14139269

    Application date: 2013-12-23

    CPC classification number: G06F17/30958 G06F17/30312

    Abstract: Techniques for identifying common neighbors of two nodes in a graph are provided. One technique involves performing a binary split search and/or a linear search. Another technique involves creating a segmenting index for a first neighbor list. A second neighbor list is scanned and, for each node indicated in the second neighbor list, the segmenting index is used to determine whether the node is also indicated in the first neighbor list. Techniques are also provided for counting the number of triangles. One technique involves pruning nodes from neighbor lists based on the node values of the nodes whose neighbor lists are being pruned. Another technique involves sorting the nodes in a node array (and, thus, their respective neighbor lists) based on the nodes' respective degrees prior to identifying common neighbors. In this way, when pruning the neighbor lists, the neighbor lists of the highly connected nodes are significantly reduced.
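
The degree-sort-then-prune idea can be sketched compactly: orient every edge from its lower-degree endpoint toward its higher-degree one, so each neighbor list shrinks, then count common neighbors over the pruned lists. This is a hedged illustration of the counting scheme, not the patent's exact algorithm (set intersection stands in for the described index/search techniques):

```python
# Count triangles by ranking vertices by degree, pruning each
# neighbor list to higher-ranked neighbors only, and intersecting
# the pruned lists.  Each triangle is counted exactly once because
# every edge is oriented toward the higher-ranked endpoint.
def count_triangles(adj):
    order = sorted(adj, key=lambda v: (len(adj[v]), v))
    rank = {v: i for i, v in enumerate(order)}
    pruned = {v: [u for u in adj[v] if rank[u] > rank[v]] for v in adj}
    triangles = 0
    for v in adj:
        for u in pruned[v]:
            # common higher-ranked neighbors of v and u close a triangle
            triangles += len(set(pruned[v]) & set(pruned[u]))
    return triangles

# A 4-clique contains exactly 4 triangles.
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
print(count_triangles(k4))
```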

    INVALIDATING ENTRIES IN A NON-COHERENT CACHE
    Invention application (under examination, published)

    Publication number: US20140258635A1

    Publication date: 2014-09-11

    Application number: US13791847

    Application date: 2013-03-08

    CPC classification number: G06F12/0808 G06F12/0891 Y02D10/13

    Abstract: Techniques are provided for performing an invalidate operation in a non-coherent cache. In response to receiving an invalidate instruction, a cache unit only invalidates cache entries that are associated with invalidation data. In this way, a separate invalidate instruction is not required for each cache entry that is to be invalidated. Also, cache entries that are not to be invalidated remain unaffected by the invalidate operation. A cache entry may be associated with invalidation data if an address of the corresponding data item is in a particular set of addresses. The particular set of addresses may have been specified as a result of an invalidation instruction specified in code that is executing on a processor that is coupled to the cache.
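
A minimal sketch of the selective invalidation, modeling the cache as a dictionary (the class and its layout are assumptions for illustration): a single invalidate operation carries an address set, and only entries whose backing address falls in that set are dropped.

```python
# One invalidate operation removes every entry whose address is in
# the given address set; unrelated entries are left untouched, so no
# per-entry invalidate instruction is needed.
class NonCoherentCache:
    def __init__(self):
        self.entries = {}  # address -> cached value

    def invalidate(self, address_set):
        for addr in list(self.entries):
            if addr in address_set:       # entry is associated with
                del self.entries[addr]    # the invalidation data

cache = NonCoherentCache()
cache.entries = {0x10: "a", 0x20: "b", 0x30: "c"}
cache.invalidate({0x10, 0x30})
print(sorted(cache.entries))
```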

    Exploiting Intra-Process And Inter-Process Parallelism To Accelerate In-Memory Processing Of Graph Queries In Relational Databases

    Publication number: US20250138874A1

    Publication date: 2025-05-01

    Application number: US18384743

    Application date: 2023-10-27

    Abstract: The illustrative embodiments provide techniques that utilize graph topology information to partition work according to ranges of vertices so that each unit of work can be computed independently by different worker processes (inter-process parallelism). The illustrative embodiments also provide an approach for decomposing the graph neighbor matching operations and the property projection operation into fine-grained, configurable-size tasks that can be processed independently by threads (intra-process parallelism) without the need for expensive synchronization primitives. For graph neighbor matching operations, a given set of source vertices is split into smaller tasks that are assigned to dedicated threads for processing. Each thread is responsible for computing a number of matching source vertices and propagating them to the next graph match operator for further processing. For property projection operations, the computed graph paths are organized into rows that contain the requested properties for each element of the path (vertices and/or edges).
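
The intra-process side can be sketched with a thread pool: split the source vertices into fixed-size tasks that threads match independently, with no shared mutable state and therefore no synchronization primitives. Names, the task size, and the use of `ThreadPoolExecutor` are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Split a set of source vertices into fine-grained, configurable-size
# tasks; each task computes its neighbor matches independently.
def match_neighbors(graph, sources, task_size=2, workers=4):
    tasks = [sources[i:i + task_size]
             for i in range(0, len(sources), task_size)]

    def run(task):
        # Matches computed from this task's vertices only — no locks.
        return [(v, n) for v in task for n in graph.get(v, [])]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [pair for part in pool.map(run, tasks) for pair in part]

g = {0: [1, 2], 1: [2], 2: [0]}
print(sorted(match_neighbors(g, [0, 1, 2])))
```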

    Subqueries in distributed asynchronous graph queries

    Publication number: US12197436B2

    Publication date: 2025-01-14

    Application number: US18091242

    Application date: 2022-12-29

    Abstract: A graph processing engine is provided for executing a graph query comprising a parent query and a subquery nested within the parent query. The subquery uses a reference to one or more correlated variables from the parent query. Executing the graph query comprises initiating execution of the parent query, pausing the execution of the parent query responsive to the parent query matching the one or more correlated variables in an intermediate result set, generating a subquery identifier for each match of the one or more correlated variables, modifying the subquery to include a subquery aggregate function and a clause to group results by subquery identifier, executing the modified subquery using the intermediate result set and collecting subquery results into a subquery results table responsive to pausing execution of the parent query, and resuming execution of the parent query using the subquery results table.
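
The rewrite step can be illustrated in miniature: tag each match of the correlated variables with a subquery identifier, run the rewritten subquery once over all matches, and group its aggregated results by that identifier into a results table. The function, its inputs, and the aggregate are hypothetical stand-ins for the engine's internals:

```python
from collections import defaultdict

# Assign a subquery identifier per match of the correlated variables,
# then group rewritten-subquery rows by that identifier and aggregate,
# producing the subquery results table the parent query resumes with.
def run_subquery_batched(matches, subquery_rows, aggregate=len):
    ids = {m: i for i, m in enumerate(matches)}   # match -> subquery id
    grouped = defaultdict(list)
    for match, row in subquery_rows:              # rows tagged by match
        grouped[ids[match]].append(row)
    # GROUP BY subquery id, with the injected aggregate function.
    return {sq_id: aggregate(rows) for sq_id, rows in grouped.items()}

table = run_subquery_batched(
    ["alice", "bob"],                       # correlated-variable matches
    [("alice", 1), ("alice", 2), ("bob", 3)])
print(table)
```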
