Abstract:
A method includes providing a first journal and a first digest matrix in a first computer and a second journal and a second digest matrix in a second computer, a second journal entry indicating a previous modification of an object on a third computer; sending a representation of a previous latest second journal entry; comparing the representation with a comparable portion of the second journal; if a current latest second journal entry referring to the third computer is newer than the previous latest second journal entry, transmitting from the second journal to the first journal the second journal entry that is newer than the previous latest second journal entry; adjusting a cache in the first computer according to the first journal; updating the representation; receiving a request for an object; and identifying, via the first digest matrix, the second computer as a supplier of the object.
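The flow can be pictured with a minimal sketch, assuming journal entries carry a monotonically increasing sequence number, the digest matrix maps object identifiers to candidate supplier computers, and the first computer records the second computer as a supplier for objects it learns about from the second journal; all names below are illustrative, not taken from the claims.

```python
# A sketch of the journal-synchronization step under the assumptions above.

from dataclasses import dataclass

@dataclass
class JournalEntry:
    sequence: int        # assumed monotonically increasing per journal
    object_id: str       # object whose previous modification the entry records
    modified_on: str     # e.g. the third computer

def newer_entries(second_journal, previous_latest_seq):
    """Second-journal entries newer than the representation previously sent."""
    return [e for e in second_journal if e.sequence > previous_latest_seq]

def synchronize(first_journal, cache, digest_matrix,
                second_journal, previous_latest_seq, second_computer_id):
    """Pull newer entries, adjust the cache, and update the representation."""
    pulled = newer_entries(second_journal, previous_latest_seq)
    for entry in pulled:
        first_journal.append(entry)
        cache.pop(entry.object_id, None)      # cached copy is stale after the modification
        digest_matrix.setdefault(entry.object_id, set()).add(second_computer_id)
    if pulled:
        previous_latest_seq = max(e.sequence for e in pulled)
    return previous_latest_seq                # updated representation

def find_supplier(digest_matrix, object_id):
    """Identify, via the digest matrix, a computer able to supply the object."""
    return next(iter(digest_matrix.get(object_id, set())), None)
```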
Abstract:
A communication process and system for communicating between communication participants (S1, S2, S3, S4) that are provided for control and/or monitoring of a technological process, the communication participants being connected to each other by way of a bus system (B) and being identifiable by their addresses. Each communication participant manages a first group of references as so-called service access points (SAPs), and for at least one of the service access points, a second group of references is managed. Access to an individual reference from this second group of references is carried out using the address of the accessing communication participant. A source address lookup table may be provided to convert participant addresses into a unique natural number corresponding to a particular reference.
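A minimal sketch of this addressing scheme, assuming the second group of references is simply an array indexed by the natural number that the source address lookup table assigns to each participant address; the SAP identifiers and bus addresses are illustrative.

```python
# A sketch of the per-participant reference lookup under the assumptions above.

class ServiceAccessPoint:
    def __init__(self, sap_id, max_participants):
        self.sap_id = sap_id
        # Second group of references: one slot per possible accessing participant.
        self.references = [None] * max_participants

# First group of references managed by one participant: its SAPs.
saps = {0x10: ServiceAccessPoint(0x10, max_participants=4)}

# Source address lookup table: bus address of a participant -> unique natural number.
source_address_lookup = {0x02: 0, 0x05: 1, 0x0A: 2, 0x0C: 3}

def access_reference(sap_id, source_address):
    """Select a SAP's per-participant reference using the accessing participant's address."""
    index = source_address_lookup[source_address]
    return saps[sap_id].references[index]
```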
Abstract:
Storage capability otherwise going underutilized in a LAN is made available for sharing among workstations connected to the LAN. Systems connected to the LAN are surveyed for storage capability potentially available for sharing, a weighting function indicative of each system's shareable storage capability is derived, and data files to be stored are scattered among and gathered from the connected systems.
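A minimal sketch of the survey, weighting, and scatter/gather steps, assuming the weighting function is simply each system's normalized share of the free space it offers; that choice, the block-level scattering, and all names are assumptions for illustration only.

```python
# A sketch of the survey / weight / scatter / gather steps under the assumptions above.

import random

def derive_weights(surveyed):
    """surveyed: {system: bytes offered for sharing}. Returns normalized weights."""
    total = sum(surveyed.values())
    return {system: offered / total for system, offered in surveyed.items()}

def scatter(blocks, weights):
    """Assign each block index to a system, biased by the system's weight."""
    systems = list(weights)
    chances = [weights[s] for s in systems]
    return {i: random.choices(systems, weights=chances)[0] for i in range(len(blocks))}

def gather(placement, fetch_block):
    """Reassemble a file by fetching each block from the system it was scattered to."""
    return b"".join(fetch_block(placement[i], i) for i in sorted(placement))

weights = derive_weights({"ws-a": 4_000_000, "ws-b": 12_000_000, "ws-c": 8_000_000})
placement = scatter([b"block-0", b"block-1", b"block-2"], weights)
```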
Abstract:
A request handler in a server handles requests from a client as a user navigates through an application having a plurality of states. A data generator is coupled to the request handler. A cache is coupled to the data generator. The data generator processes the requests received by the request handler and, based at least in part on the requests, stores data in the cache. An application state controller is coupled to the request handler, and a preprocessor is coupled to the application state controller. The requests handled by the request handler indicate the current state of the application in which the requesting user is located, and such an indication is forwarded to the application state controller. The application state controller reads a graphical usage description, which illustrates the flow of the application from state to state, and determines a likely next state based on the current state. The application state controller produces a control signal based on the indication of the user's current state and the likely next state. The preprocessor generates a preprocess signal based on the control signal. Responsive to the preprocess signal, the data generator caches the data that are likely to be needed as the user navigates through the states of the application.
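A minimal sketch of the predictive caching path, assuming the usage description can be reduced to a weighted state graph and the controller simply picks the highest-likelihood transition; the states, likelihoods, and function names are illustrative, not the patent's own structures.

```python
# A sketch of the predictive caching path under the assumptions above.

# Usage description modeled as a weighted state graph: state -> {next state: likelihood}.
usage_graph = {
    "search":  {"results": 0.9, "home": 0.1},
    "results": {"detail": 0.7, "search": 0.3},
}

cache = {}

def generate_data(state):
    """Stand-in for the data generator producing the data a state needs."""
    return f"data for {state}"

def likely_next_state(current_state):
    """Application state controller: pick the most likely next state from the graph."""
    candidates = usage_graph.get(current_state, {})
    return max(candidates, key=candidates.get) if candidates else None

def handle_request(current_state):
    """Serve the current state and, via the preprocess step, cache the likely next one."""
    response = cache.get(current_state) or generate_data(current_state)
    next_state = likely_next_state(current_state)
    if next_state and next_state not in cache:
        cache[next_state] = generate_data(next_state)   # data likely to be needed next
    return response

handle_request("search")   # also pre-caches the data for "results"
```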
Abstract:
An architecture is described whose characteristics and scale are realized according to a minimized cost function, with the ability to control and govern reliability, availability, bandwidth, capacity and quality of service as desired, subject to a desired type of management software or framework.
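One way to picture such a cost function, assuming it is a weighted sum of shortfalls against per-attribute targets and that candidate configurations are scored and the cheapest chosen; the attribute list, weights, and functions are illustrative assumptions, not taken from the abstract.

```python
# A sketch of choosing a configuration by minimizing such a cost function.

ATTRIBUTES = ("reliability", "availability", "bandwidth", "capacity", "qos")

def cost(config, targets, weights):
    """Penalize each attribute by how far the configuration falls short of its target."""
    return sum(weights[a] * max(0.0, targets[a] - config[a]) for a in ATTRIBUTES)

def choose(candidates, targets, weights):
    """Realize the architecture as the candidate configuration with the minimum cost."""
    return min(candidates, key=lambda c: cost(c, targets, weights))
```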
Abstract:
A method and system are provided for retrieving a Web page in a multiple cache networking system. Data requested to be cached by browsers is cached among a plurality of processors in the multiple cache networking system. A request for cached data is received from a browser. A determination is made as to which of the plurality of processors are operative. A load level of each of the operative processors is then determined. Each of the operative processors is queried to locate the requested cached data. An address of the operative processor having the requested cached data is output.
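A minimal sketch of the lookup, assuming each cache processor exposes its operative status, a load figure, and the set of URLs it holds; the class, field names, and the preference for the least-loaded processor are illustrative assumptions.

```python
# A sketch of locating the requested cached data under the assumptions above.

class CacheProcessor:
    def __init__(self, address, operative, load, cached_urls):
        self.address = address
        self.operative = operative
        self.load = load                       # e.g. number of pending requests
        self.cached_urls = set(cached_urls)

def locate_cached_page(processors, url):
    """Filter operative processors, prefer the least loaded, return the holder's address."""
    operative = [p for p in processors if p.operative]
    for p in sorted(operative, key=lambda p: p.load):
        if url in p.cached_urls:
            return p.address                   # address of processor with the data
    return None                                # not cached; fall through to the origin server

procs = [CacheProcessor("10.0.0.2", True, 3, {"/index.html"}),
         CacheProcessor("10.0.0.3", False, 0, {"/index.html"}),
         CacheProcessor("10.0.0.4", True, 1, set())]
print(locate_cached_page(procs, "/index.html"))   # -> "10.0.0.2"
```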
Abstract:
A method and system for transparently combining remote and local storage to provide an extended file system, such as a virtual local drive, for a computer system client/user, e.g., a user of a pocket-sized personal computer or a cable set-top box. A client device may load file system object data while storing the directories and files remotely, retrieving the files only when required. Via its local storage, the extended file system handles unreliable connections and delays. When a connection to an extended file system server is present, the extended file system provides automatic downloading of information that is not locally cached and automatic uploading of information that has been modified on the client. Extended file system attributes are employed to determine the actual location of file system data, and a lightweight protocol is defined to download or upload remote data by low-level components that make the remote source transparent from the perspective of the application. The system scales to large networks because it employs the lightweight protocol and establishes a connection only to retrieve and submit data.
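A minimal sketch of the cache-or-fetch behavior, assuming a per-file attribute records whether data is cached locally and whether it has been modified on the client; the `download`/`upload` calls are placeholders standing in for the lightweight protocol, and all names and paths are illustrative.

```python
# A sketch of transparent remote/local combination under the assumptions above.

import os

CACHE_DIR = "/tmp/efs_cache"          # illustrative local cache location

class ExtendedFileSystem:
    def __init__(self, server):
        self.server = server          # placeholder with download(path) / upload(path, data)
        self.attributes = {}          # path -> {"local": bool, "dirty": bool}
        os.makedirs(CACHE_DIR, exist_ok=True)

    def _local(self, path):
        return os.path.join(CACHE_DIR, path.strip("/").replace("/", "_"))

    def read(self, path):
        attr = self.attributes.setdefault(path, {"local": False, "dirty": False})
        if not attr["local"]:                         # not cached: download transparently
            data = self.server.download(path)
            with open(self._local(path), "wb") as f:
                f.write(data)
            attr["local"] = True
        with open(self._local(path), "rb") as f:
            return f.read()

    def write(self, path, data):
        with open(self._local(path), "wb") as f:
            f.write(data)
        self.attributes[path] = {"local": True, "dirty": True}

    def synchronize(self):
        """When a server connection is present, upload information modified on the client."""
        for path, attr in self.attributes.items():
            if attr["dirty"]:
                with open(self._local(path), "rb") as f:
                    self.server.upload(path, f.read())
                attr["dirty"] = False
```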
Abstract:
Under the present invention, a small cache is used for the selective buffering of devices of a heterogeneous striping group (i.e., a striping group made of devices with unequal capacities) to match the load on each device to its capacity. The inventive caching algorithm utilizes a device map, or disk map, and applies a cache distribution factor for each device of a group to determine how to selectively buffer blocks read from different devices of a striping group, thereby placing different loads on the different devices of a striping group in accordance with their capacities.
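A minimal sketch of selective buffering, assuming one simple choice of cache distribution factor in which blocks read from lower-capacity devices are buffered more aggressively so that the residual disk load tracks capacity; the factor formula, device map layout, and LRU policy are illustrative assumptions, not the patent's algorithm.

```python
# A sketch of selective buffering with per-device distribution factors.

import random
from collections import OrderedDict

def distribution_factors(device_map):
    """device_map: {device: capacity}. Factor = 1 - capacity / max capacity (assumed)."""
    max_cap = max(device_map.values())
    return {dev: 1.0 - cap / max_cap for dev, cap in device_map.items()}

class SelectiveCache:
    def __init__(self, device_map, size):
        self.factors = distribution_factors(device_map)
        self.size = size
        self.entries = OrderedDict()              # small LRU cache of block data

    def read(self, device, block, read_from_disk):
        if (device, block) in self.entries:
            self.entries.move_to_end((device, block))
            return self.entries[(device, block)]
        data = read_from_disk(device, block)
        # Selectively buffer: cache with probability given by the device's factor.
        if random.random() < self.factors[device]:
            self.entries[(device, block)] = data
            if len(self.entries) > self.size:
                self.entries.popitem(last=False)  # evict least recently used
        return data
```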
Abstract:
In a transmission protocol in which a user running an application in an address space in one data processing system wishes to transmit a data packet to another address space in another data processing system by means of direct memory access, directly from a sending buffer to a receiving buffer with no copy, a mechanism is provided for minimizing the need for retransmission and for ensuring proper entry into the target data processing system address space. In particular, when the first system does not receive an acknowledgment from the receiver, a special data packet with a retransmit flag bit set is sent to the second system. When the second system receives the data packet with the retransmit flag bit set, it responds either by sending a new acknowledgment or by sending a request for retransmission. No such request is transmitted back to the first system, however, before the receiving system has verified that receipt of a retransmitted packet would be appropriate. In particular, the second system, before requesting retransmission, checks that the tag association is still valid, so that an adapter at the second system remains capable of matching tags in data packet headers with appropriate real address memory locations within address spaces belonging to the second, receiving data processing system. In this manner, needless retransmission of packets does not occur, and retransmission occurs only when receipt of the data packet is appropriate.
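The receiver-side decision can be sketched minimally, assuming the adapter's tag association is modeled as a tag-to-buffer-address table and delivered packets are tracked by sequence number; the names and the three-way outcome are illustrative, not the protocol's actual message set.

```python
# A sketch of the receiver's handling of a packet whose retransmit flag bit is set.

ACK, REQUEST_RETRANSMIT, DROP = "ack", "request_retransmit", "drop"

class Receiver:
    def __init__(self):
        self.received_sequences = set()
        self.tag_table = {}            # tag -> real address of the receive buffer

    def tag_association_valid(self, tag):
        """The adapter can still map the header tag to a real buffer address."""
        return tag in self.tag_table

    def handle_retransmit_query(self, sequence, tag):
        if sequence in self.received_sequences:
            return ACK                           # data already landed; send a new acknowledgment
        if self.tag_association_valid(tag):
            return REQUEST_RETRANSMIT            # receipt of the retransmitted packet is appropriate
        return DROP                              # buffer mapping gone; retransmission would be needless
```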
Abstract:
A system and method for a secure cached subscription system are described. In one embodiment, the system comprises a content provider and a caching device connected to the content provider. The content provider speculatively downloads information into the caching device based upon a user's data. A processing device is connected via a high-bandwidth connection to the caching device for processing the information.
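A minimal sketch of the speculative download, assuming the user's data is a subscription profile listing content channels and the caching device is a simple key-value store read by the processing device; the catalog, profile shape, and names are illustrative assumptions only.

```python
# A sketch of speculative downloading into the caching device under the assumptions above.

catalog = {
    "news":   ["headlines-0415", "headlines-0416"],
    "sports": ["scores-0416"],
    "movies": ["trailer-123"],
}

def speculative_download(user_profile, cache):
    """Content provider side: pre-load items matching the user's subscriptions."""
    for channel in user_profile["subscriptions"]:
        for item in catalog.get(channel, []):
            cache.setdefault(item, f"encrypted payload of {item}")

def fetch(cache, item):
    """Processing device side: served from the caching device when the guess was right."""
    return cache.get(item)

cache = {}
speculative_download({"subscriptions": ["news", "sports"]}, cache)
print(fetch(cache, "scores-0416"))
```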