Abstract:
A method for delivering audio/video data through a hardware device using a software application comprises, at the hardware end, receiving an encrypted application key, an encrypted random session key, and encrypted audio/video data from the software. The hardware then decrypts the encrypted application key using a secret encryption key, decrypts the encrypted random session key using the application key, and decrypts the encrypted audio/video data using the random session key. The hardware may then deliver the unencrypted audio/video data by way of a display and speakers. The secret encryption key is securely embedded within the hardware device at an earlier point in time.
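As an illustration only, the following C sketch shows the three-stage key unwrapping the abstract describes; hw_decrypt, the 16-byte key size, and the repeating-key XOR placeholder are assumptions standing in for whatever cipher the hardware actually implements.

    #include <stddef.h>
    #include <stdint.h>

    /* Placeholder cipher: a repeating-key XOR standing in for the hardware's
     * real block cipher (an assumption, not from the source). */
    static void hw_decrypt(const uint8_t key[16], const uint8_t *in,
                           uint8_t *out, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            out[i] = in[i] ^ key[i % 16];
    }

    /* Three-stage key unwrapping described in the abstract:
     * secret key -> application key -> session key -> audio/video payload. */
    void decrypt_av_stream(const uint8_t secret_key[16],    /* embedded at manufacture */
                           const uint8_t enc_app_key[16],   /* received from the application */
                           const uint8_t enc_session_key[16],
                           const uint8_t *enc_av_data, size_t av_len,
                           uint8_t *av_out)
    {
        uint8_t app_key[16], session_key[16];

        hw_decrypt(secret_key, enc_app_key, app_key, 16);
        hw_decrypt(app_key, enc_session_key, session_key, 16);
        hw_decrypt(session_key, enc_av_data, av_out, av_len);
        /* av_out now holds clear audio/video data for the display and speakers. */
    }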
Abstract:
A depth write disable apparatus and method for controlling evictions, such as depth values, from a depth cache to a corresponding depth buffer in a zone rendering system. When the depth write disable circuitry is enabled, evictions from the depth cache (which typically occur during the rendering of the next zone) to the depth buffer are prevented. In particular, once the depth buffer is initialized (i.e., cleared) to a constant value at the beginning of a scene, the depth buffer does not need to be read. The depth cache handles intermediate depth reads and writes within each zone. Since the memory-resident depth buffer is not required after a scene is rendered, it never needs to be written. The final depth values for a zone can thus be discarded (i.e., rather than written to the depth buffer) after each zone is rendered.
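A minimal C sketch of the eviction decision, assuming a simple dirty-line cache model; the structure fields and the 16-value line size are illustrative, not from the source.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative model of one depth cache line. */
    struct depth_cache_line {
        bool     dirty;
        uint32_t tag;                 /* line index within the depth buffer */
        uint32_t depth_values[16];
    };

    /* When depth writes are disabled for the scene, a dirty line evicted at a
     * zone boundary is simply dropped instead of being written back to the
     * memory-resident depth buffer. */
    void evict_line(struct depth_cache_line *line, uint32_t *depth_buffer,
                    bool depth_write_disable)
    {
        if (line->dirty && !depth_write_disable) {
            for (int i = 0; i < 16; i++)
                depth_buffer[line->tag * 16 + i] = line->depth_values[i];
        }
        /* Otherwise the final depth values for the zone are discarded. */
        line->dirty = false;
    }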
Abstract:
A method and apparatus for managing overlay data requests are disclosed. One embodiment of an apparatus includes a request unit and a timer. A request is made by a graphics controller to the request unit for a line of overlay data. The request unit divides the request from the graphics controller into a series of smaller requests. The smaller requests are issued to a memory controller. Delays are inserted between successive smaller requests in order to allow other system resources to more easily gain access to memory.
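The splitting-with-delays behavior might look like the following C sketch; issue_memory_request, wait_cycles, and the chunk/gap parameters are hypothetical stand-ins for the request unit, timer, and memory controller interface.

    #include <stdint.h>
    #include <stdio.h>

    /* Stub hooks standing in for the memory controller and the timer; real
     * hardware would issue bus transactions and count clock cycles. */
    static void issue_memory_request(uint32_t addr, uint32_t bytes)
    {
        printf("request: addr=0x%08x bytes=%u\n", (unsigned)addr, (unsigned)bytes);
    }

    static void wait_cycles(uint32_t cycles)
    {
        (void)cycles;   /* placeholder for a hardware delay */
    }

    /* Split one overlay-line request into smaller requests, inserting a delay
     * between them so other agents can win arbitration for memory. */
    void request_overlay_line(uint32_t line_addr, uint32_t line_bytes,
                              uint32_t chunk_bytes, uint32_t gap_cycles)
    {
        for (uint32_t off = 0; off < line_bytes; off += chunk_bytes) {
            uint32_t n = (line_bytes - off < chunk_bytes) ? line_bytes - off
                                                          : chunk_bytes;
            issue_memory_request(line_addr + off, n);
            if (off + n < line_bytes)
                wait_cycles(gap_cycles);   /* timer-driven gap between requests */
        }
    }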
Abstract:
A method for mutual exclusion of drawing engine execution on a graphics device is disclosed. The method checks a busy signal of an executing drawing engine. The executing drawing engine is one of a plurality of drawing engines of the graphics device and the only drawing engine executing out of the plurality of drawing engines. The method forwards a graphics instruction and associated data packet to a next drawing engine to execute after the executing drawing engine has completed execution. The next drawing engine to execute is one of the plurality of drawing engines.
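A rough C sketch of the hand-off, assuming a polled busy flag and a function-pointer execute hook; the struct layout and the round-robin choice of the next engine are assumptions for illustration.

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative model of a drawing engine. */
    struct drawing_engine {
        volatile bool busy;
        void (*execute)(struct drawing_engine *self,
                        const void *instruction, const void *data);
    };

    /* Forward a graphics instruction to the next engine only after the
     * currently executing engine has dropped its busy signal, so at most one
     * of the plurality of engines executes at a time. */
    void dispatch_next(struct drawing_engine *engines, size_t count,
                       size_t current, const void *instruction, const void *data)
    {
        while (engines[current].busy)
            ;   /* spin on the busy signal of the executing engine */

        size_t next = (current + 1) % count;
        engines[next].busy = true;
        engines[next].execute(&engines[next], instruction, data);
        engines[next].busy = false;
    }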
Abstract:
A data storage array is provided having a number, n, of sequential data storage areas for the storage of data. A valid status array including n bits is provided where there is a one to one correspondence between the bits of the valid status array and the data storage areas of the data storage array. When valid data are written into a data storage area, the status bit of the valid status array corresponding to this data storage area is set to indicate that valid data are present. When data are read out of the data storage area, the corresponding status bit is cleared indicating the absence of valid data. If the data storage array is one that is written to in a random access manner and read from sequentially, as a queue, then the valid status array would indicate the presence of valid data at the head of the queue for the data storage array.
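A small C sketch of the valid status array in use, assuming an 8-entry array written in a random access manner and read sequentially as a queue; names and sizes are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define N 8   /* number of sequential data storage areas (illustrative) */

    /* Data storage array with a one-to-one valid status array. */
    static uint32_t storage[N];
    static bool     valid[N];
    static unsigned head;      /* next area to read when used as a queue */

    /* Random-access write: store data and set the corresponding valid bit. */
    void write_entry(unsigned index, uint32_t data)
    {
        storage[index] = data;
        valid[index]   = true;
    }

    /* Sequential read from the head of the queue: succeeds only when the
     * valid bit shows data is present, then clears the bit. */
    bool read_head(uint32_t *out)
    {
        if (!valid[head])
            return false;       /* no valid data at the head of the queue */
        *out = storage[head];
        valid[head] = false;
        head = (head + 1) % N;
        return true;
    }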
Abstract:
A graphics engine may include a decryption device, a renderer, and a sprite or overlay engine, all connected to a display. A memory may have protected and non-protected portions in one embodiment. An application may store encrypted content in the non-protected portion of said memory. The decryption device may access the encrypted material, decrypt the material, and provide it to the renderer of the graphics engine. The graphics engine may then process the decrypted material using the protected portion of the memory. Only graphics devices can access the protected portion of the memory in at least one mode, preventing access by outside sources. In addition, the protected memory may be stolen memory that is not identified to the operating system, making that stolen memory inaccessible to applications running on the operating system.
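One way to picture the access control is the C sketch below; the address range, the agent list, and the single access_allowed check are assumptions used only to illustrate the "graphics devices only" rule for the protected (stolen) region.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative layout of the two memory regions; addresses are assumptions. */
    #define PROTECTED_BASE  0x80000000u
    #define PROTECTED_SIZE  0x00800000u   /* "stolen" memory hidden from the OS */

    enum agent { AGENT_CPU, AGENT_GFX_DECRYPT, AGENT_GFX_RENDER, AGENT_GFX_OVERLAY };

    /* In the protected mode described in the abstract, only graphics devices
     * may touch the protected portion; CPU/application accesses are rejected. */
    bool access_allowed(enum agent who, uint32_t addr)
    {
        bool in_protected = addr >= PROTECTED_BASE &&
                            addr <  PROTECTED_BASE + PROTECTED_SIZE;
        if (!in_protected)
            return true;                   /* non-protected memory is open */
        return who != AGENT_CPU;           /* only graphics agents allowed */
    }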
Abstract:
An apparatus and method are disclosed for synchronization of command processing from multiple command queues. Various embodiments employ a condition code register that indicates which queues should have processing suspended until a specified event condition occurs. Upon satisfaction of the specified condition, processing of commands from the suspended queue is resumed.
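A compact C sketch of the condition code register idea, assuming one suspend bit per queue; the bit layout and helper names are illustrative, not the patented register format.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_QUEUES 4

    /* Condition code register: bit i set means command queue i is suspended
     * until its specified event condition occurs (bit layout is assumed). */
    static uint32_t condition_code;

    void suspend_queue(unsigned q)          { condition_code |=  (1u << q); }
    void signal_event_for_queue(unsigned q) { condition_code &= ~(1u << q); }

    /* The command processor skips queues whose suspend bit is set and resumes
     * processing their commands once the event condition clears the bit. */
    bool queue_may_issue(unsigned q)
    {
        return (condition_code & (1u << q)) == 0;
    }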
Abstract:
Embodiments of the present invention relate to accessing a first pair of adjacent data blocks using a first channel of a dual channel memory device; and simultaneously accessing a second pair of adjacent data blocks using a second channel of the memory device, the second pair being spaced apart from the first pair by a predetermined interval.
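Interpreting the spacing loosely, the paired accesses might look like the C sketch below; BLOCK_BYTES, INTERVAL, and the address mapping are assumptions, and channel_read is a stub for the real dual-channel interface.

    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_BYTES 64
    #define INTERVAL    2   /* predetermined spacing between the pairs, in blocks (assumed) */

    /* Stub for a channel access; real hardware drives both channels concurrently. */
    static void channel_read(int channel, uint32_t addr, uint32_t bytes)
    {
        printf("ch%d: addr=0x%08x bytes=%u\n", channel, (unsigned)addr, (unsigned)bytes);
    }

    /* Fetch two adjacent blocks on the first channel and, at the same time, two
     * adjacent blocks on the second channel placed INTERVAL blocks beyond the
     * end of the first pair. */
    void dual_channel_fetch(uint32_t first_block)
    {
        uint32_t first_addr  = first_block * BLOCK_BYTES;
        uint32_t second_addr = (first_block + 2 + INTERVAL) * BLOCK_BYTES;

        channel_read(0, first_addr,  2 * BLOCK_BYTES);   /* first adjacent pair  */
        channel_read(1, second_addr, 2 * BLOCK_BYTES);   /* second adjacent pair */
    }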
Abstract:
Embodiments of the present invention provide a memory arbiter for directing chipset and graphics traffic to system memory. Page consistency and priorities are used to optimize memory bandwidth utilization and guarantee latency to isochronous display requests. The arbiter also contains a mechanism to prevent CPU requests from starving lower priority requests. The memory arbiter thus provides a simple, easy-to-validate architecture that prevents the CPU from unfairly starving low priority agents and takes advantage of grace periods and memory page detection to optimize arbitration switches, thus increasing memory bandwidth utilization.
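A simplified C sketch of one arbitration policy consistent with the abstract; the grant-streak counter standing in for the anti-starvation mechanism and the three-source priority order are assumptions, and page detection and grace periods are omitted.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative request sources, highest priority first, except that a
     * starvation limit can force the CPU to yield. */
    enum source { SRC_DISPLAY_ISOCH, SRC_CPU, SRC_LOW_PRIORITY, SRC_NONE };

    struct arb_state {
        bool     pending[3];        /* one pending flag per source            */
        uint32_t cpu_grant_streak;  /* consecutive CPU grants                 */
        uint32_t cpu_streak_limit;  /* threshold before the CPU must yield    */
    };

    /* Pick the next request: isochronous display always wins; the CPU wins
     * next unless it has been granted long enough to starve low priority
     * agents that are waiting. */
    enum source arbitrate(struct arb_state *s)
    {
        if (s->pending[SRC_DISPLAY_ISOCH])
            return SRC_DISPLAY_ISOCH;

        bool cpu_must_yield = s->cpu_grant_streak >= s->cpu_streak_limit &&
                              s->pending[SRC_LOW_PRIORITY];
        if (s->pending[SRC_CPU] && !cpu_must_yield) {
            s->cpu_grant_streak++;
            return SRC_CPU;
        }
        if (s->pending[SRC_LOW_PRIORITY]) {
            s->cpu_grant_streak = 0;
            return SRC_LOW_PRIORITY;
        }
        return SRC_NONE;
    }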
Abstract:
A method and apparatus for efficient translation lookaside buffer (“TLB”) management of three-dimensional surfaces is disclosed. A three-dimensional surface is represented as a square pixel surface. The square-surface representation is stored in a single entry of the TLB.
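A speculative C sketch of a single TLB entry covering a whole square surface; the field layout and the assumption of one byte per pixel for addressing are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>

    /* One TLB entry covering an entire square pixel surface representing a
     * three-dimensional surface (field layout is an assumption). */
    struct surface_tlb_entry {
        bool     valid;
        uint32_t surface_base;   /* virtual base address of the square surface */
        uint32_t phys_base;      /* physical base of the backing allocation    */
        uint32_t side_log2;      /* log2 of the square side, in pixels         */
    };

    /* A lookup hits if the address falls anywhere inside the square surface
     * (one byte per pixel assumed), so the whole surface is translated by
     * this single entry. */
    bool tlb_translate(const struct surface_tlb_entry *e,
                       uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t size = (1u << e->side_log2) * (1u << e->side_log2);
        if (!e->valid || vaddr < e->surface_base || vaddr >= e->surface_base + size)
            return false;
        *paddr = e->phys_base + (vaddr - e->surface_base);
        return true;
    }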