Abstract:
The embodiments herein provide a system and a method for integrating data from a source to a destination. The method comprises generating a global id, setting an event id corresponding to an entity id in the global id, polling data from a source, sorting changes of a source system based on a time of update and an entity id, creating and comparing an old as-of-state value and a new as-of-state value for each field of each update to the entity in the source and destination to detect a conflict on an entity, sending the time of update of the entity and a revision id of a change to the destination, comparing the global id with an event id for each entity at the destination to detect the presence of the entity in the destination, and processing an entity at the destination based on an event id.
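For illustration, the per-entity conflict check described above might look roughly like the following C sketch. All of the names (change_t, by_time_then_entity, detect_conflict) and the treatment of field values as short strings are assumptions; the abstract does not define concrete data structures.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    long entity_id;
    long update_time;           /* time of update at the source  */
    char old_as_of_state[16];   /* field value before the update */
    char new_as_of_state[16];   /* field value after the update  */
} change_t;

/* Sort changes by time of update, then by entity id, as in the method. */
static int by_time_then_entity(const void *a, const void *b)
{
    const change_t *x = a, *y = b;
    if (x->update_time != y->update_time)
        return (x->update_time > y->update_time) - (x->update_time < y->update_time);
    return (x->entity_id > y->entity_id) - (x->entity_id < y->entity_id);
}

/* Flag a conflict when the destination's current field value no longer
 * matches the old as-of-state value recorded for the source update.    */
static int detect_conflict(const change_t *c, const char *dest_value)
{
    return strcmp(c->old_as_of_state, dest_value) != 0;
}

int main(void)
{
    change_t changes[] = {
        { 7, 1700000200L, "draft",  "review"    },
        { 3, 1700000100L, "review", "published" },
    };
    qsort(changes, 2, sizeof changes[0], by_time_then_entity);

    const char *value_at_destination = "archived";   /* destination diverged */
    for (int i = 0; i < 2; i++) {
        if (detect_conflict(&changes[i], value_at_destination))
            printf("conflict on entity %ld: expected '%s', found '%s'\n",
                   changes[i].entity_id, changes[i].old_as_of_state,
                   value_at_destination);
    }
    return 0;
}
```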
Abstract:
A multi-threaded binary translation system performs atomic operations by a thread; such operations include processing a load-linked instruction and a store-conditional instruction. The store-conditional instruction updates data stored in a shared memory address only when at least three conditions are satisfied. The conditions are: a copy of the load-linked shared memory address of the load-linked instruction is the same as the store-conditional shared memory address, a reservation flag indicates that the thread has a valid reservation, and the copy of data stored by the load-linked instruction is the same as data stored in the store-conditional shared memory address.
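The three store-conditional conditions lend themselves to a short sketch. The structures and function names below are illustrative rather than the patented design, and a real multi-threaded translator would additionally have to perform the check and the store atomically (for example under a lock or via compare-and-swap), which this single-threaded sketch omits.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    uintptr_t ll_addr;      /* copy of the load-linked shared memory address   */
    uint32_t  ll_data;      /* copy of the data observed by load-linked        */
    bool      reservation;  /* set by LL, cleared when the reservation is lost */
} thread_state_t;

/* Load-linked: record the address, snapshot the data, take a reservation. */
static uint32_t load_linked(thread_state_t *t, volatile uint32_t *addr)
{
    t->ll_addr = (uintptr_t)addr;
    t->ll_data = *addr;
    t->reservation = true;
    return t->ll_data;
}

/* Store-conditional: update memory only when all three conditions hold. */
static bool store_conditional(thread_state_t *t, volatile uint32_t *addr,
                              uint32_t new_value)
{
    bool ok = t->reservation                   /* valid reservation       */
           && t->ll_addr == (uintptr_t)addr    /* same shared address     */
           && t->ll_data == *addr;             /* data unchanged since LL */
    if (ok)
        *addr = new_value;
    t->reservation = false;                    /* reservation is consumed */
    return ok;
}

int main(void)
{
    static volatile uint32_t shared = 7;
    thread_state_t t = {0};

    uint32_t v = load_linked(&t, &shared);
    bool done = store_conditional(&t, &shared, v + 1);
    printf("SC %s, shared = %u\n", done ? "succeeded" : "failed", (unsigned)shared);
    return 0;
}
```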
Abstract:
A functional simulator with watchpoint support includes a CPU having a first-level DMI cache, a watchpoint manager having a second-level DMI cache, an interconnect module, and a memory controller. The simulator is operated by a front-end tool. Watchpoints corresponding to predetermined memory addresses are set by the front-end tool and stored as a watchpoint address list in the watchpoint manager. When the first-level DMI cache receives a memory access request and fails to complete it, the CPU transmits the request to the watchpoint manager. The watchpoint manager searches the watchpoint address list for the memory address associated with the memory access request. If a match is found, the watchpoint manager generates a watchpoint hit signal and notifies the front-end tool.
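As a rough illustration, the lookup performed by the watchpoint manager after a first-level DMI cache failure could be sketched as below. The fixed-size address list and the notify_front_end callback are assumptions, not the simulator's actual interfaces.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_WATCHPOINTS 16

typedef struct {
    uint64_t addrs[MAX_WATCHPOINTS];   /* watchpoint address list set by the front-end */
    int      count;
} watchpoint_manager_t;

/* Called after the first-level DMI cache fails to complete the access. */
static bool watchpoint_lookup(const watchpoint_manager_t *wm, uint64_t addr)
{
    for (int i = 0; i < wm->count; i++)
        if (wm->addrs[i] == addr)
            return true;               /* generate a watchpoint hit signal */
    return false;
}

static void notify_front_end(uint64_t addr)
{
    printf("watchpoint hit at 0x%llx: notifying front-end tool\n",
           (unsigned long long)addr);
}

int main(void)
{
    watchpoint_manager_t wm = { { 0x1000, 0x2040 }, 2 };
    uint64_t request_addr = 0x2040;    /* access that missed the first-level DMI cache */

    if (watchpoint_lookup(&wm, request_addr))
        notify_front_end(request_addr);
    return 0;
}
```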
Abstract:
A network device comprises a service card (e.g., a lawful intercept (LI) service card) executing a communication protocol to receive, from one or more sources (e.g., law enforcement agents), intercept information specifying at least one destination and criteria for matching one or more packet flows. The network device further includes a network interface card to receive a packet from a network, and a control unit to provide the packet from the interface card to the LI service card. The LI service card executes a flow match detection module that, when the packet matches the criteria of the intercept information, forwards the packet to the destination specified by the intercept information. The network device may provide real-time intercept and relaying of specified network-based communications. Moreover, the techniques described herein allow LEAs to tap packet flows with little delay after specifying intercept information, e.g., within 50 milliseconds, even in high-volume networks.
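A minimal sketch of the flow match detection step might look like the following, assuming the intercept criteria reduce to an exact match on address and port fields; the structures and the forward_to stub are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
} packet_t;

typedef struct {
    uint32_t src_ip, dst_ip;       /* criteria for matching a packet flow */
    uint16_t src_port, dst_port;
    const char *destination;       /* destination specified by the source */
} intercept_t;

static bool flow_matches(const packet_t *p, const intercept_t *i)
{
    return p->src_ip == i->src_ip && p->dst_ip == i->dst_ip &&
           p->src_port == i->src_port && p->dst_port == i->dst_port;
}

static void forward_to(const char *destination, const packet_t *p)
{
    printf("forwarding %u:%u -> %u:%u to %s\n",
           (unsigned)p->src_ip, (unsigned)p->src_port,
           (unsigned)p->dst_ip, (unsigned)p->dst_port, destination);
}

int main(void)
{
    intercept_t tap = { 0x0A000001, 0x0A000002, 1234, 80, "lea-collector-1" };
    packet_t    pkt = { 0x0A000001, 0x0A000002, 1234, 80 };

    if (flow_matches(&pkt, &tap))          /* flow match detection module        */
        forward_to(tap.destination, &pkt); /* relay to the specified destination */
    return 0;
}
```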
Abstract:
Systems and methods of managing memory provide for detecting a request to activate a memory portion that is limited in size to a partial page size, where the partial page size is less than a full page size associated with the memory. In one embodiment, detecting the request may include identifying a row address and partial page address associated with the request, where the partial page address indicates that the memory portion is to be limited to the partial page size.
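One way to picture the detection step is a decoder that splits an activate request into a row address and a partial page address, as sketched below; the bit layout and field widths are invented for illustration and are not specified by the abstract.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PARTIAL_PAGE_FLAG  (1u << 15)   /* assumed: request asks for a partial page */
#define PARTIAL_ADDR_MASK  0x3u         /* assumed: selects one quarter of the page */

typedef struct {
    uint16_t row_address;
    bool     partial;           /* limit activation to the partial page size */
    uint8_t  partial_page_addr; /* which fraction of the full page to open   */
} activate_request_t;

static activate_request_t decode_activate(uint16_t row_bits, uint16_t ctrl_bits)
{
    activate_request_t r;
    r.row_address       = row_bits;
    r.partial           = (ctrl_bits & PARTIAL_PAGE_FLAG) != 0;
    r.partial_page_addr = ctrl_bits & PARTIAL_ADDR_MASK;
    return r;
}

int main(void)
{
    activate_request_t r = decode_activate(0x01A4, PARTIAL_PAGE_FLAG | 0x2);
    printf("row 0x%04x: %s activation, partial page address %u\n",
           (unsigned)r.row_address,
           r.partial ? "partial-page" : "full-page",
           (unsigned)r.partial_page_addr);
    return 0;
}
```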
Abstract:
The invention provides a transcorneal vision assistance device implantable in the eye of a patient. A preferred embodiment of the transcorneal microtelescope vision assistance device is implantable in the eye of a patient and includes a keratoprosthesis configured to replace a portion of the cornea of the patient and to be secured to a remaining front portion of the cornea. A microtelescope is carried by the keratoprosthesis for transcorneal mounting of the microtelescope.
Abstract:
A method, apparatus, and system are described in which a memory controller may have two or more registers to create and track zones of memory in a volatile memory device. The memory controller controls, on an individual basis, a power consumption state of a first zone of memory and a second zone of memory within the volatile memory device; the volatile memory device contains one or more memory arrays.
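A toy model of per-zone control might look like the following; the register layout, zone boundaries, and power-state encoding are assumptions made only for the sketch.

```c
#include <stdint.h>
#include <stdio.h>

typedef enum { ZONE_ACTIVE, ZONE_STANDBY, ZONE_POWERED_DOWN } zone_power_t;

typedef struct {
    uint32_t     base_row;     /* zone-tracking register: first row of the zone */
    uint32_t     limit_row;    /* zone-tracking register: last row of the zone  */
    zone_power_t power_state;  /* controlled individually per zone              */
} memory_zone_t;

typedef struct {
    memory_zone_t zones[2];    /* first and second zone within the device */
} memory_controller_t;

static void set_zone_power(memory_controller_t *mc, int zone, zone_power_t s)
{
    mc->zones[zone].power_state = s;   /* per-zone, not whole-device, control */
}

int main(void)
{
    memory_controller_t mc = {
        { { 0,    4095, ZONE_ACTIVE },
          { 4096, 8191, ZONE_ACTIVE } }
    };

    /* Keep the first zone active while powering down the unused second zone. */
    set_zone_power(&mc, 1, ZONE_POWERED_DOWN);

    for (int i = 0; i < 2; i++)
        printf("zone %d: rows %u-%u, state %d\n", i,
               (unsigned)mc.zones[i].base_row, (unsigned)mc.zones[i].limit_row,
               (int)mc.zones[i].power_state);
    return 0;
}
```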
Abstract:
The temperatures of multiple devices of a memory module are determined. In one example, a memory module includes a printed circuit board, a plurality of memory chips on the printed circuit board, each chip containing a plurality of memory cells and a thermal sensor, and a multiplexer on the printed circuit board, independent of the memory chips, coupled to each of the thermal sensors. A current source is coupled to the multiplexer to provide a current to each one of the thermal sensors, and a voltage detector is coupled to the multiplexer to detect a voltage from each of the thermal sensors when the current is applied. A temperature circuit is coupled to the voltage detector to determine a temperature for each memory chip based on the detected voltage.
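The measurement loop implied above, in which the multiplexer selects each thermal sensor in turn and the detected voltage is converted to a temperature, might be sketched as follows; the sample voltages and the linear voltage-to-temperature conversion are placeholders rather than values from the disclosure.

```c
#include <stdio.h>

#define NUM_CHIPS 8

/* Stand-in for the voltage detector reading the mux output (millivolts). */
static double read_sensor_voltage_mv(int mux_channel)
{
    static const double sample_mv[NUM_CHIPS] =
        { 712.0, 708.5, 715.2, 709.8, 711.1, 707.3, 713.6, 710.0 };
    return sample_mv[mux_channel];
}

/* Assumed diode-style characteristic: voltage drops ~2 mV per degree C. */
static double voltage_to_celsius(double mv)
{
    const double v_at_25c_mv = 720.0;
    const double mv_per_degc = -2.0;
    return 25.0 + (mv - v_at_25c_mv) / mv_per_degc;
}

int main(void)
{
    /* Select each thermal sensor through the multiplexer in turn,
     * drive the current source, and convert the detected voltage. */
    for (int chip = 0; chip < NUM_CHIPS; chip++) {
        double mv = read_sensor_voltage_mv(chip);
        printf("chip %d: %.1f mV -> %.1f degC\n",
               chip, mv, voltage_to_celsius(mv));
    }
    return 0;
}
```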
Abstract:
Network analysis techniques are described for generating and outputting traffic flow packets that include traffic flow information indicative of network traffic flows. More specifically, the traffic flow packets may be generated and output in a rate-controlled fashion, which can avoid loss of the traffic flow information carried in the traffic flow packets. For example, rate-controlled transmission can be extremely useful in ensuring that transmitted traffic flow packets will be received by a packet flow collector without overloading the input buffers of the packet flow collector.
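The abstract does not name a particular rate-control algorithm; a token bucket is one common way to pace export traffic, and the sketch below uses it with made-up rates and packet sizes purely for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double tokens;          /* bytes currently allowed to be sent   */
    double rate_bps;        /* refill rate: export bytes per second */
    double burst_bytes;     /* bucket depth                         */
} rate_limiter_t;

static void refill(rate_limiter_t *rl, double elapsed_s)
{
    rl->tokens += rl->rate_bps * elapsed_s;
    if (rl->tokens > rl->burst_bytes)
        rl->tokens = rl->burst_bytes;
}

/* Send a traffic flow packet only when the paced budget allows it. */
static bool try_send_flow_packet(rate_limiter_t *rl, double packet_bytes)
{
    if (rl->tokens < packet_bytes)
        return false;       /* hold the packet instead of overrunning the collector */
    rl->tokens -= packet_bytes;
    return true;
}

int main(void)
{
    rate_limiter_t rl = { 1500.0, 10000.0, 3000.0 };

    for (int i = 0; i < 5; i++) {
        refill(&rl, 0.05);  /* 50 ms between export attempts */
        printf("packet %d: %s\n", i,
               try_send_flow_packet(&rl, 1400.0) ? "sent" : "deferred");
    }
    return 0;
}
```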