Abstract:
The present invention provides a method and system for providing targeted applications within a search engine results page. The method and system includes receiving a search query from a user and interpreting the search query. The method and system first maps the interpreted query to one or more action templates, wherein mapping the interpreted query to the one or more action templates comprises selecting one or more actions associated with the interpreted query. The method and system then maps the selected one or more actions to a plurality of applications and selects one or more applications associated with the one or more actions. Finally, the method and system displays the one or more applications within the search results page.
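As a rough illustration of the query-to-action-to-application mapping described above, the following Python sketch shows one way such a pipeline could be wired together. All names (ACTION_TEMPLATES, APP_REGISTRY, interpret_query) and the intent classification are hypothetical and are not taken from the disclosure itself.

```python
# Hypothetical mapping tables; not part of the disclosed system.
ACTION_TEMPLATES = {
    "movie": ["buy_tickets", "watch_trailer"],
    "restaurant": ["reserve_table", "view_menu"],
}

APP_REGISTRY = {
    "buy_tickets": ["TicketApp"],
    "watch_trailer": ["VideoApp"],
    "reserve_table": ["ReservationApp"],
    "view_menu": ["MenuApp"],
}

def interpret_query(query: str) -> str:
    # Placeholder interpretation step: classify the query into a known intent.
    return "movie" if "movie" in query.lower() else "restaurant"

def applications_for_query(query: str) -> list[str]:
    intent = interpret_query(query)                # interpret the search query
    actions = ACTION_TEMPLATES.get(intent, [])     # map to action templates / actions
    apps = []
    for action in actions:                         # map actions to applications
        apps.extend(APP_REGISTRY.get(action, []))
    return apps                                    # candidates for the results page

print(applications_for_query("movie showtimes tonight"))
```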
Abstract:
Techniques are provided for detecting and resolving conflicts between native and transactional applications sharing a common database. As transactions are received at the database system, a timestamp is assigned to both the start time and the commit time of a transaction, where the timestamps are synchronized with a logical clock in the database system. When the database system receives a native operation, the database system increments the time in the logical clock and assigns that updated time to the native operation. When the transaction is ready to commit, the database system may determine conflicts between native and transactional operations. If the database system determines that a native operation conflicts with a transactional operation, the database system will abort the transaction.
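A minimal sketch of this timestamp-based conflict check follows, assuming a single monotonic logical clock and a per-key record of the latest native write; the class and method names are illustrative and the real validation (including reads and durability) is omitted.

```python
import itertools

class Database:
    def __init__(self):
        self.clock = itertools.count(1)     # logical clock shared by all operations
        self.last_native_write = {}         # key -> timestamp of latest native operation

    def native_write(self, key, value):
        ts = next(self.clock)               # increment clock, assign time to native op
        self.last_native_write[key] = ts
        # ... apply the native write to storage ...

    def begin(self):
        return {"start": next(self.clock), "writes": set()}

    def commit(self, txn):
        commit_ts = next(self.clock)        # commit timestamp from the same clock
        for key in txn["writes"]:
            native_ts = self.last_native_write.get(key, 0)
            if txn["start"] < native_ts < commit_ts:
                # A native operation touched this key after the transaction started.
                raise RuntimeError("abort: native operation conflicts with transaction")
        # ... make the transaction's writes durable ...
        return commit_ts
```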
Abstract:
Methods and apparatus for performing top-k query processing include pruning a list of documents for a query term to identify a subset of that list, where pruning includes skipping a document in the list based, at least in part, on the contribution of the query term to the score of the document and on the term upper bound of each other query term, in the set of query terms, that matches the document.
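The sketch below illustrates upper-bound pruning in this spirit (MaxScore/WAND-style): documents are drawn from one term's posting list, and a document is skipped when its contribution from that term plus the upper bounds of the remaining terms cannot beat the current k-th best score. The data layout and scoring are assumptions, not the disclosed method.

```python
import heapq

def top_k(postings, upper_bound, k):
    """postings: term -> {doc_id: score contribution}; upper_bound: term -> max contribution."""
    heap = []                                        # min-heap of (score, doc) for the k best so far
    terms = list(postings)
    lead = terms[0]                                  # traverse one term's posting list
    others_bound = sum(upper_bound[t] for t in terms[1:])
    for doc, contrib in postings[lead].items():
        threshold = heap[0][0] if len(heap) == k else float("-inf")
        if contrib + others_bound <= threshold:
            continue                                 # prune: cannot enter the top-k
        score = contrib + sum(postings[t].get(doc, 0.0) for t in terms[1:])
        if len(heap) < k:
            heapq.heappush(heap, (score, doc))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, doc))
    return sorted(heap, reverse=True)
```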
Abstract:
In one embodiment, a search engine may generate and store a plurality of search index segments such that each of the search index segments is stored in a corresponding one of a plurality of heaps of memory. The plurality of search index segments may include inverted index segments mapping content to documents containing the content. A garbage collection module may release one or more of the heaps of memory.
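A loose sketch of this arrangement is shown below: each segment owns a dedicated "heap" (modeled here as a plain object rather than an actual memory arena), and a collection step releases a whole heap once its segment is retired. All names are illustrative.

```python
class Heap:
    def __init__(self):
        self.data = {}                 # inverted index segment: term -> set of doc ids
    def release(self):
        self.data.clear()              # return the segment's memory in one step

class Segment:
    def __init__(self):
        self.heap = Heap()             # each segment stored in its own heap
        self.live = True
    def index(self, doc_id, terms):
        for term in terms:
            self.heap.data.setdefault(term, set()).add(doc_id)

def collect(segments):
    """Garbage-collection step: release the heaps backing retired segments."""
    for seg in segments:
        if not seg.live:
            seg.heap.release()
```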
Abstract:
Multi-thread systems and methods are described for concurrently handling requests to commit data updates to a database by a plurality of data transactions. The database preferably supports multi-versioning and the data transactions are preferably isolated by snapshot isolation. In one embodiment, concurrent and lock-free handling of requests to commit data updates includes performing two types of concurrent data conflict detection. A transaction proceeds to commit only if it passes both types of conflict detection. The first type of conflict detection is based on a hash map between data keys and their commit timestamps, whereas the second type of conflict detection is based on a log that keeps track of the status of transactions whose requests to commit are actively being processed. In another embodiment, concurrent conflict detection for data items in concurrent transactions is broken down into buckets and locks are used for accessing each bucket. These systems and methods maintain the transactional integrity of the database while improving throughput by maximizing concurrency of data commits in a multi-thread environment.
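The following is a simplified sketch of the first type of conflict detection only (the hash map from data keys to commit timestamps); the in-progress transaction log and the bucket-locked variant are omitted, and a coarse lock stands in for the lock-free atomics, so this is an assumption-laden illustration rather than the patented implementation.

```python
import itertools
import threading

clock = itertools.count(1)
last_commit_ts = {}                      # hash map: data key -> commit timestamp of last writer
table_lock = threading.Lock()            # stands in for lock-free compare-and-swap updates

def try_commit(start_ts, write_set):
    with table_lock:
        # Conflict check: some key in the write set was committed after this
        # transaction's snapshot was taken.
        for key in write_set:
            if last_commit_ts.get(key, 0) > start_ts:
                return None              # conflict detected: caller aborts or retries
        commit_ts = next(clock)
        for key in write_set:
            last_commit_ts[key] = commit_ts
        return commit_ts
```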
Abstract:
In one embodiment, a processor of a computing device receives a query. The computing device may compare a centroid of each of a plurality of clusters to the query such that a subset of the plurality of clusters is selected, each of the plurality of clusters having a set of data points. An assignment of the subset of the plurality of clusters may be communicated to a hardware accelerator of the computing device. A plurality of threads of the hardware accelerator of the computing device may generate one or more distance tables that store results of intermediate computations corresponding to the query and the subset of the plurality of clusters. The distance tables may be stored in shared memory of the hardware accelerator. A plurality of threads of the hardware accelerator may determine a plurality of data points using the distance tables. The processor may provide query results pertaining to at least a portion of the plurality of data points.
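A CPU-only sketch of the overall flow (centroid probing followed by table-based distance evaluation) is given below; the hardware accelerator, its thread layout, and shared memory are not modeled, and the array shapes and function names are assumptions.

```python
import numpy as np

def search(query, centroids, cluster_points, nprobe=4, k=10):
    # Compare the query to each cluster centroid and select a subset of clusters.
    dists = np.linalg.norm(centroids - query, axis=1)
    probed = np.argsort(dists)[:nprobe]

    candidates = []
    for c in probed:
        points = cluster_points[c]                       # data points assigned to this cluster
        # "Distance table": intermediate per-dimension computations reused below.
        table = (points - query) ** 2
        candidates.extend((float(row.sum()), (c, i)) for i, row in enumerate(table))

    candidates.sort(key=lambda t: t[0])
    return candidates[:k]                                # query results for the nearest points
```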
Abstract:
The present teaching relates to concurrency control in log-structured merge (LSM) data stores. In one example, a call is received from a thread for writing a value to a key of LSM components. A shared mode lock is set on the LSM components in response to the call. The value is written to the key once the shared mode lock is set on the LSM components. The shared mode lock is released from the LSM components after the value is written to the key.
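A minimal sketch of the shared-mode write path follows, assuming the exclusive mode is reserved for structural operations such as swapping LSM components; the lock class and store names are illustrative only.

```python
import threading

class SharedExclusiveLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0
        self._exclusive = False
    def acquire_shared(self):
        with self._cond:
            while self._exclusive:
                self._cond.wait()
            self._shared += 1
    def release_shared(self):
        with self._cond:
            self._shared -= 1
            self._cond.notify_all()
    def acquire_exclusive(self):
        with self._cond:
            while self._exclusive or self._shared:
                self._cond.wait()
            self._exclusive = True
    def release_exclusive(self):
        with self._cond:
            self._exclusive = False
            self._cond.notify_all()

class LSMStore:
    def __init__(self):
        self.memtable = {}
        self.lock = SharedExclusiveLock()
    def put(self, key, value):
        self.lock.acquire_shared()       # shared mode: many writer threads proceed concurrently
        try:
            self.memtable[key] = value   # write the value to the key
        finally:
            self.lock.release_shared()   # release once the write completes
```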
Abstract:
Briefly, embodiments of methods and/or systems of providing services in a distributed file system are disclosed. For one embodiment, as an example, a system may be capable of forming server-side mount tables comprising hierarchically organized namespaces. Server-side mount tables may be replicated or reproduced across services platforms, for example.
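An illustrative sketch of a server-side mount table and a naive replication step is shown below; none of these names come from the disclosed system, and real replication would of course involve more than copying an in-memory dictionary.

```python
class MountTable:
    def __init__(self):
        self.mounts = {}                          # namespace prefix -> backing file service

    def mount(self, prefix, service):
        self.mounts[prefix] = service

    def resolve(self, path):
        # Longest-prefix match over the hierarchically organized namespace.
        best = max((p for p in self.mounts if path.startswith(p)), key=len, default=None)
        return self.mounts.get(best)

def replicate(table, platforms):
    """Reproduce the mount table across services platforms."""
    for platform in platforms:
        platform.mounts = dict(table.mounts)
```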