CONTENT PREFETCHING AND CACHE MANAGEMENT
    Invention Application

    Publication Number: US20170187822A1

    Publication Date: 2017-06-29

    Application Number: US14982136

    Application Date: 2015-12-29

    Applicant: Yahoo! Inc.

    Inventor: Bart Thomée

    Abstract: The presentation of content items within a hosting item is typically performed by on-demand retrieval of the content item from a content server. However, on-demand retrieval may impose an undesirable delay in the presentation of the content; may spontaneously alter the layout of the hosting item; and/or may involve an expedient but unsophisticated selection among the content items of a content store (e.g., random selection), resulting in the presentation of irrelevant and/or redundant content. Instead, a device may prefetch content items into a content cache, such that when a user later requests to view a hosting item, the device may insert a content item selected from the content cache. The device may also notify the content server when a content item has been presented to the user; by marking the content item as such, the content server may provide additional, fresh content for the device's content cache.
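The prefetch-select-notify cycle described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names (`ContentCache`, `prefetch`, `select_item`, `mark_presented`) are hypothetical.

```python
class ContentCache:
    """Client-side cache of prefetched content items (illustrative sketch)."""

    def __init__(self):
        self._items = []        # content items fetched ahead of demand
        self._presented = []    # ids of items already shown to the user

    def prefetch(self, items):
        """Store content items retrieved from the content server in advance."""
        self._items.extend(items)

    def select_item(self):
        """Pick a not-yet-presented cached item to insert into the hosting item."""
        for item in self._items:
            if item["id"] not in self._presented:
                return item
        return None  # cache exhausted; fall back to on-demand retrieval

    def mark_presented(self, item):
        """Record that an item was shown, and build the notification payload
        that tells the content server to supply fresh content."""
        self._presented.append(item["id"])
        return {"presented": item["id"]}
```

Because selection happens locally, the hosting item can be laid out immediately, and the presented-item notification lets the server avoid resending redundant content.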

    System and Method for Preemptive Request Processing

    Publication Number: US20170171311A1

    Publication Date: 2017-06-15

    Application Number: US14964982

    Application Date: 2015-12-10

    Applicant: SAP SE

    CPC classification number: H04L67/1097 H04L67/2842 H04L67/2847 H04L67/42

    Abstract: Embodiments described herein relate to an improved technique for preemptive client application request processing based on observed user access patterns and/or models. The system includes a framework engine operable to trace sequences of requests to one or more service provider applications, including which particular client requests are likely to be followed by other particular client requests for each service. Based on the resulting traces, the framework can determine the probability of a particular request B following another particular request A. When request A is received by the service provider application, and when the probability is high enough (e.g., >50%) that request B will follow request A in the sequence of requests, the framework is operable to simulate request B in a background process and provide a response to request B from a local memory storage.

    OPTIMIZING PREDICTIVELY CACHING REQUESTS TO REDUCE EFFECTS OF LATENCY IN NETWORKED APPLICATIONS

    Publication Number: US20170154265A1

    Publication Date: 2017-06-01

    Application Number: US15362157

    Application Date: 2016-11-28

    CPC classification number: G06N5/02 G06N7/005 G06N20/00 H04L67/2847 H04L67/42

    Abstract: A method for creating a cache by predicting database requests by an application and storing responses to the database requests is disclosed. In an embodiment, the method involves identifying a networked application having a client portion and a server portion coupled to the client portion over a network characterized by a first latency, identifying a database used to store activity related to the networked application, identifying a request-response context of the networked application, using the request-response context to predict requests the networked application is likely to make using the database, using the request-response context to predict responses to the requests, creating a cache having the requests and/or the responses stored therein, and providing the cache to a predictive cache engine coupled to the client portion of the networked application by a computer-readable medium that has a second latency less than the first latency.
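The predictive cache engine described above can be sketched as a cache seeded with predicted request-response pairs, with the real database as the high-latency fallback. This is a hedged illustration only; the names (`PredictiveCacheEngine`, `build_cache`, `query`) and the dict-based prediction input are assumptions, not the claimed method.

```python
class PredictiveCacheEngine:
    """Answers predicted requests from a low-latency local cache,
    falling back to the higher-latency database on a miss."""

    def __init__(self, database):
        self.database = database  # callable: request -> response (first latency)
        self.cache = {}           # local medium with the second, lower latency

    def build_cache(self, predictions):
        """Seed the cache with predicted request -> predicted response pairs
        derived from the application's request-response context."""
        self.cache.update(predictions)

    def query(self, request):
        if request in self.cache:
            return self.cache[request]    # low-latency path, no network trip
        response = self.database(request) # network round trip (first latency)
        self.cache[request] = response    # remember for subsequent queries
        return response
```

A predicted request is answered without touching the database at all, which is the latency reduction the method targets; unpredicted requests still work, at the cost of one round trip.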
