Abstract:
In one embodiment, an apparatus comprises a memory and a processor. The memory is to store sensor data captured by one or more sensors associated with a first device. Further, the processor comprises circuitry to: access the sensor data captured by the one or more sensors associated with the first device; determine that an incident occurred within a vicinity of the first device; identify a first collection of sensor data associated with the incident, wherein the first collection of sensor data is identified from the sensor data captured by the one or more sensors; preserve, on the memory, the first collection of sensor data associated with the incident; and notify one or more second devices of the incident, wherein the one or more second devices are located within the vicinity of the first device.
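As an illustrative sketch of the flow described in this abstract, the Python snippet below buffers sensor readings, pins the collection associated with an incident so the rolling buffer cannot overwrite it, and notifies nearby devices via callbacks. The class name, method names, and 30-second window are hypothetical choices, not details from the claims.

```python
# A minimal sketch, assuming a rolling in-memory buffer: readings are buffered,
# an incident pins the relevant collection so it cannot be overwritten, and
# callbacks standing in for nearby "second devices" are notified.
# All names here are hypothetical.
import time
from collections import deque

class IncidentRecorder:
    def __init__(self, window_seconds=30):
        self.window_seconds = window_seconds
        self.buffer = deque()       # rolling (timestamp, reading) pairs
        self.preserved = []         # collections pinned after an incident

    def ingest(self, reading, now=None):
        now = time.time() if now is None else now
        self.buffer.append((now, reading))
        # Discard readings that have aged out of the rolling window.
        while now - self.buffer[0][0] > self.window_seconds:
            self.buffer.popleft()

    def on_incident(self, notify_callbacks, now=None):
        now = time.time() if now is None else now
        # Identify and preserve the collection of sensor data for the incident.
        collection = list(self.buffer)
        self.preserved.append(collection)
        # Notify nearby devices of the incident.
        for notify in notify_callbacks:
            notify({"time": now, "samples": len(collection)})

recorder = IncidentRecorder()
for value in (0.1, 0.2, 9.9):
    recorder.ingest(value)
recorder.on_incident([print])    # prints the incident notification
```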
Abstract:
An adaptive video end-to-end network is described that uses local abstraction. One example includes an image sensor to generate a sequence of images, a processor coupled to the image sensor to analyze the sequence of images to detect an event, to select images related to the event, and to generate metadata regarding the event, and a communications interface coupled to the processor to send the metadata through a network connection to a central node.
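A hedged sketch of the edge-side behavior: analyze a frame sequence, flag an event, and forward only metadata to a central node. The trivial intensity-difference detector and the send_to_central_node callback are stand-ins, not the described implementation.

```python
# Stand-in analysis: flag a large mean-intensity change between consecutive
# frames as an "event". Frames are modeled as flat lists of pixel values.
def detect_event(prev_frame, frame, threshold=30.0):
    if prev_frame is None:
        return None
    diff = abs(sum(frame) / len(frame) - sum(prev_frame) / len(prev_frame))
    return {"type": "intensity_change", "score": diff} if diff > threshold else None

def process_sequence(frames, send_to_central_node):
    prev = None
    for index, frame in enumerate(frames):
        event = detect_event(prev, frame)
        if event is not None:
            # Only metadata about the event (not the raw video) is sent upstream.
            send_to_central_node({"frame_index": index, **event})
        prev = frame

# Usage: metadata is emitted only for the frame where intensity jumps.
process_sequence([[10] * 4, [10] * 4, [90] * 4], print)
```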
Abstract:
A region or group of pixels may be textured as a unit, using a range specifier and one or more anchor pixels to define the group. In some embodiments, processing grouped pixels improves efficiency.
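A minimal sketch, assuming a rectangular range specifier: one anchor pixel plus a (width, height) range defines the pixel group, and the whole group is textured as a unit. The function name and data layout are illustrative only.

```python
# Texture a group of pixels as a unit: the anchor pixel and the range
# specifier define the group, and one texel value is applied to all of it.
def texture_group(image, anchor, range_spec, texel):
    ax, ay = anchor
    width, height = range_spec
    for y in range(ay, min(ay + height, len(image))):
        for x in range(ax, min(ax + width, len(image[0]))):
            image[y][x] = texel

image = [[0] * 6 for _ in range(4)]
texture_group(image, anchor=(1, 1), range_spec=(3, 2), texel=7)
for row in image:
    print(row)
```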
Abstract:
A system and method are configured to detect conflicts when converting scalar processes to parallel processes (“SIMDifying”). Conflicts may be detected for an unordered single index, an ordered single index and/or ordered pairs of indices. Conflicts may be further detected for read-after-write dependencies. Conflict detection is configured to identify operations (i.e., iterations) in a sequence of iterations that may not be done in parallel.
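The core idea can be illustrated in scalar Python: given the memory indices touched by a vector of iterations, report which lanes conflict with an earlier lane and therefore cannot run in the same parallel pass. This models the functional effect only, not any particular instruction or encoding.

```python
# Detect lanes whose index repeats an earlier lane's index; such iterations
# must be serialized into a later pass rather than executed in parallel.
def detect_conflicts(indices):
    seen = {}
    conflicts = []
    for lane, idx in enumerate(indices):
        if idx in seen:
            conflicts.append((lane, seen[idx]))   # lane conflicts with earlier lane
        else:
            seen[idx] = lane
    return conflicts

# Lanes 3 and 5 reuse indices already claimed by lanes 0 and 2.
print(detect_conflicts([4, 1, 7, 4, 2, 7]))   # -> [(3, 0), (5, 2)]
```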
Abstract:
An apparatus to detect streaming data in memory is presented. In one embodiment, the apparatus uses reuse bits and S-bit status for cache lines, wherein an S-bit status indicates that the data in a cache line are potentially streaming data. To enhance the efficiency of a cache, different measures can be applied to make the streaming data the next victim during a replacement.
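A rough functional model of the policy, with all structural details assumed: each line carries a reuse bit and an S-bit, lines that see no reuse are flagged as potentially streaming, and flagged lines are preferred victims at replacement.

```python
# Illustrative model, not the patented circuit.
class CacheLine:
    def __init__(self, tag):
        self.tag = tag
        self.reuse = False   # set on a hit after the initial fill
        self.s_bit = False   # set when the line is suspected streaming data

class StreamingAwareSet:
    def __init__(self, ways=4):
        self.ways = ways
        self.lines = []

    def access(self, tag):
        for line in self.lines:
            if line.tag == tag:
                line.reuse = True          # reused, so not streaming
                line.s_bit = False
                return "hit"
        if len(self.lines) >= self.ways:
            self._evict()
        self.lines.append(CacheLine(tag))
        return "miss"

    def _evict(self):
        # Lines with no observed reuse are flagged as potentially streaming.
        for line in self.lines:
            if not line.reuse:
                line.s_bit = True
        # Streaming-flagged lines become the next victim; otherwise fall back
        # to the oldest line in the set.
        victim = next((l for l in self.lines if l.s_bit), self.lines[0])
        self.lines.remove(victim)

cache_set = StreamingAwareSet(ways=2)
for tag in ("A", "A", "B", "C", "D"):   # "A" is reused; "B"-"D" stream through
    cache_set.access(tag)
print([line.tag for line in cache_set.lines])   # the reused "A" survives
```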
Abstract:
A processing core implemented on a semiconductor chip is described having first execution unit logic circuitry that includes first comparison circuitry to compare each element in a first input vector against every element of a second input vector. The processing core also has second execution logic circuitry that includes second comparison circuitry to compare a first input value against every data element of an input vector.
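Functionally, the two compare primitives can be sketched as follows; mask encodings and element widths are assumptions rather than details from the description.

```python
# First primitive: compare each element of one input vector against every
# element of a second input vector, producing one bitmask per element.
def compare_all_to_all(vec_a, vec_b):
    masks = []
    for a in vec_a:
        mask = 0
        for bit, b in enumerate(vec_b):
            if a == b:
                mask |= 1 << bit
        masks.append(mask)
    return masks

# Second primitive: compare a single input value against every data element
# of an input vector.
def compare_scalar_to_vector(value, vec):
    mask = 0
    for bit, element in enumerate(vec):
        if value == element:
            mask |= 1 << bit
    return mask

print(compare_all_to_all([1, 2, 3], [3, 1, 1]))   # -> [6, 0, 1]
print(compare_scalar_to_vector(3, [3, 1, 3]))     # -> 5
```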
Abstract:
A directory of a private cache hierarchy is provided to maintain coherency between data stored in the cache hierarchy, where the directory is to enable concurrent cache-to-cache transfer of data to two private caches from another private cache. This directory can be implemented in a system having a multi-core processor. Other embodiments are described.
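A very rough functional model (not the hardware protocol): the directory records an owner and a set of sharers per line, and on simultaneous read requests from two cores it schedules forwards from the owning private cache to both requesters, which is the concurrent cache-to-cache transfer referred to above. All names are hypothetical.

```python
# Directory bookkeeping only; timing, ordering, and invalidation are omitted.
class Directory:
    def __init__(self):
        self.entries = {}   # address -> {"owner": core, "sharers": set()}

    def write(self, core, address):
        self.entries[address] = {"owner": core, "sharers": {core}}

    def read_many(self, cores, address):
        entry = self.entries[address]
        # One (source, destination) forward per requester; both can proceed
        # concurrently from the owning private cache.
        forwards = [(entry["owner"], core) for core in cores
                    if core != entry["owner"]]
        entry["sharers"].update(cores)
        return forwards

directory = Directory()
directory.write(0, 0x40)                 # core 0's private cache owns line 0x40
print(directory.read_many([1, 2], 0x40)) # -> [(0, 1), (0, 2)]
```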
Abstract:
Methods, apparatuses and systems to decrease the energy consumption of a memory chip while increasing its effective bandwidth during the execution of any workload. Methods, apparatuses and systems may allow a memory chip to utilize a plurality of virtual row buffers to respond to requests for data included in a memory array block. Methods, apparatuses and systems may further eliminate or reduce the cost associated with transferring unnecessary data from a memory array block to row buffers by altering the data transfer size between a memory array block and a row buffer.
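An illustrative simulation under assumed parameters: a bank keeps several small virtual row buffers, each holding only a segment of a row, so a miss transfers a configurable segment size instead of the full row. Buffer count, segment size, and the LRU replacement are arbitrary choices for the sketch.

```python
from collections import OrderedDict

class VirtualRowBufferBank:
    def __init__(self, num_buffers=4, segment_size=64, row_size=8192):
        self.num_buffers = num_buffers
        self.segment_size = segment_size
        self.row_size = row_size
        self.buffers = OrderedDict()     # (row, segment) -> open, in LRU order
        self.bytes_transferred = 0

    def access(self, address):
        row = address // self.row_size
        segment = (address % self.row_size) // self.segment_size
        key = (row, segment)
        if key in self.buffers:
            self.buffers.move_to_end(key)
            return "buffer hit"
        if len(self.buffers) >= self.num_buffers:
            self.buffers.popitem(last=False)       # evict least-recently-used
        self.buffers[key] = True
        # Only the requested segment moves from the array to a row buffer.
        self.bytes_transferred += self.segment_size
        return "buffer miss"

bank = VirtualRowBufferBank()
for addr in (0, 16, 8192, 8200, 64):
    print(bank.access(addr))
print("bytes moved from the array:", bank.bytes_transferred)  # 3 * 64
```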
Abstract:
In some embodiments, the invention involves utilizing a tree merge sort in a platform to minimize cache reads/writes when sorting large amounts of data. An embodiment uses blocks of pre-sorted data residing in "leaf nodes" in memory storage. A pre-sorted block of data from each leaf node is read from memory and stored in faster cache memory. A tree merge sort is performed on the cache-resident nodes until a block of data migrates to a root node. Sorted blocks reaching the root node are written to an output list in memory storage until all pre-sorted data blocks have been moved to cache and merged upward to the root. The completed output list in memory storage contains the fully sorted data. Other embodiments are described and claimed.
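A simplified functional sketch of the merge phase, with the block and cache management idealized: one element per leaf node is kept in a heap that stands in for the cache-resident merge tree, and values reaching the root are appended to the output list in memory.

```python
import heapq

def tree_merge_sort(leaf_blocks):
    # Each leaf block is pre-sorted; the heap plays the role of the
    # cache-resident merge tree, holding one candidate per leaf.
    heap = []
    for leaf_id, block in enumerate(leaf_blocks):
        if block:
            heapq.heappush(heap, (block[0], leaf_id, 0))
    output = []                                 # sorted output list in storage
    while heap:
        value, leaf_id, pos = heapq.heappop(heap)   # value reaches the root
        output.append(value)
        nxt = pos + 1
        if nxt < len(leaf_blocks[leaf_id]):
            # Refill from the same leaf so the merge continues upward.
            heapq.heappush(heap, (leaf_blocks[leaf_id][nxt], leaf_id, nxt))
    return output

print(tree_merge_sort([[1, 4, 9], [2, 3, 8], [5, 6, 7]]))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```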