Abstract:
Disclosed are systems and methods for protecting secret device keys, such as High-bandwidth Digital Content Protection (HDCP) device keys. Instead of storing secret device keys in the clear, a security algorithm and one or more protection keys are stored on the device. The security algorithm is applied to the secret device keys and the one or more protection keys to produce encrypted secret device keys. The encrypted secret device keys are then stored either on-chip or off-chip.
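As an illustrative sketch only (not the actual HDCP key-protection algorithm), the following Python shows one way a security algorithm could combine secret device keys with a protection key to produce encrypted keys for storage. The SHA-256-based keystream, function names, and key sizes are assumptions chosen to keep the example self-contained; a real device would use a hardware cipher.

```python
import hashlib
import os

def keystream(protection_key: bytes, length: int) -> bytes:
    """Derive a keystream from the protection key (illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(protection_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def protect_device_keys(secret_device_keys: bytes, protection_key: bytes) -> bytes:
    """Apply the security algorithm to the secret device keys and the
    protection key, producing encrypted keys for on- or off-chip storage."""
    ks = keystream(protection_key, len(secret_device_keys))
    return bytes(a ^ b for a, b in zip(secret_device_keys, ks))

protection_key = os.urandom(16)      # hypothetical on-chip protection key
device_keys = os.urandom(40 * 7)     # placeholder for the secret device keys
stored = protect_device_keys(device_keys, protection_key)
# The same routine recovers the keys when needed (the XOR is its own inverse).
assert protect_device_keys(stored, protection_key) == device_keys
```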
Abstract:
Systems and methods are provided for supplying a quality score associated with an advertisement response, which is used to assist in determining how the advertisement response is displayed on a user interface. Methods include sending a quality score associated with an advertisement response, as well as receiving and using a quality score to determine where to display at least a portion of the advertisement response on a user interface.
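A minimal sketch of how a received quality score might drive placement of an advertisement response on a user interface. The slot names, thresholds, and field names are hypothetical and only illustrate the idea of using the score to choose where the response is displayed.

```python
def choose_display_slot(ad_response: dict) -> str:
    """Map the quality score accompanying an advertisement response to a UI slot.
    Thresholds are arbitrary, illustrative values."""
    score = ad_response.get("quality_score", 0.0)
    if score >= 0.8:
        return "top_banner"
    if score >= 0.5:
        return "sidebar"
    return "footer"

response = {"creative": "Spring sale on widgets", "quality_score": 0.72}
print(choose_display_slot(response))  # -> "sidebar"
```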
Abstract:
Systems and methods are provided for dynamically rotating keywords to be used to alter an advertisement request from an advertisement requester. In one embodiment, a method includes identifying a base request from an advertisement requester; identifying at least two potential keywords associated with the base request; and assigning a usage weight to each of the at least two potential keywords, the usage weight determining a percentage of times that each of the at least two potential keywords should be selected to dynamically alter the base request.
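One way to realize usage weights is weighted random selection, so that each keyword is chosen roughly the stated percentage of the time and used to alter the base request. A minimal sketch, with hypothetical keywords and weights:

```python
import random

def rotate_keyword(base_request: str, weighted_keywords: dict) -> str:
    """Pick one keyword according to its usage weight (a percentage of
    selections) and use it to dynamically alter the base request."""
    keywords = list(weighted_keywords)
    weights = list(weighted_keywords.values())
    chosen = random.choices(keywords, weights=weights, k=1)[0]
    return f"{base_request} {chosen}"

# Hypothetical example: "hotels" is selected ~70% of the time, "motels" ~30%.
print(rotate_keyword("cheap rooms", {"hotels": 70, "motels": 30}))
```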
Abstract:
Systems and methods are provided for increasing user response to advertisements. Embodiments include an optimization engine configured to identify a base request from a requester; identify at least one potential keyword associated with the base request; associate the at least one potential keyword with at least one criteria bin; and identify at least one keyword from the at least one criteria bin based on a weight given to the at least one criteria bin. Embodiments also include a routing system communicating with the requester, a supplier, and the optimization engine, the routing system configured to dynamically alter the base request with the at least one keyword to form an altered request and to send the altered request to the supplier.
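A sketch, under assumed data structures, of the two-stage selection described above: keywords are grouped into criteria bins, a bin is picked according to its weight, and a keyword drawn from that bin alters the base request before it is routed to the supplier. The bin names, weights, and keywords are hypothetical.

```python
import random

# Hypothetical criteria bins: each bin groups keywords meeting some criterion
# (e.g., high historical click-through) and carries a selection weight.
criteria_bins = {
    "high_ctr":  {"weight": 60, "keywords": ["discount hotels", "hotel deals"]},
    "seasonal":  {"weight": 30, "keywords": ["summer getaways"]},
    "long_tail": {"weight": 10, "keywords": ["boutique hotel downtown"]},
}

def alter_request(base_request: str) -> str:
    """Pick a bin by weight, pick a keyword from that bin, and append it
    to the base request to form the altered request sent to the supplier."""
    names = list(criteria_bins)
    weights = [criteria_bins[n]["weight"] for n in names]
    bin_name = random.choices(names, weights=weights, k=1)[0]
    keyword = random.choice(criteria_bins[bin_name]["keywords"])
    return f"{base_request} {keyword}"

print(alter_request("rooms in boston"))
```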
Abstract:
Methods for fabricating two metal gate stacks for complementary metal oxide semiconductor (CMOS) devices are provided. A first metal layer may be deposited onto a gate dielectric. Next, a mask layer may be deposited on the first metal layer and subsequently etched. The first metal layer is then etched. Without removing the mask layer, a second metal layer may be deposited. In one embodiment, the mask layer is a second metal layer. In other embodiments, the mask layer is a silicon layer. Subsequent fabrication steps include depositing another metal layer (e.g., another PMOS metal layer), depositing a cap, etching the cap to define gate stacks, and simultaneously etching the first and second gate regions, which have a similar thickness but differing metal layers.
Abstract:
An improved content search mechanism uses a graph that includes intelligent nodes, avoiding the overhead of post-processing and improving the overall performance of a content processing application. An intelligent node is similar to a node in a DFA graph but includes a command. The command in the intelligent node allows additional state for the node to be generated and checked. This additional state allows the content search mechanism to traverse the same node with two different interpretations. Because state is generated for the node, the graph of nodes does not grow exponentially. The mechanism also allows a user function to be called upon reaching a node, which can perform any desired user tasks, including modifying the input data or position.
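A simplified sketch of an "intelligent node": a DFA-style node extended with an optional command that can generate and check per-node state, or act as a user function that adjusts the input position. The class layout and command interface are assumptions for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class IntelligentNode:
    """A DFA-style node extended with an optional command and extra state."""
    transitions: dict = field(default_factory=dict)   # input char -> next node
    command: Optional[Callable] = None                # runs when node is reached
    state: dict = field(default_factory=dict)         # additional per-node state

def traverse(start: IntelligentNode, data: str) -> None:
    node, pos = start, 0
    while pos < len(data):
        if node.command is not None:
            # The command may generate/check state or modify the position,
            # letting the same node be traversed under different interpretations.
            pos = node.command(node, data, pos)
        nxt = node.transitions.get(data[pos])
        if nxt is None:
            break
        node, pos = nxt, pos + 1

# Example: a command that counts how many times the start node is visited.
def count_visits(node, data, pos):
    node.state["visits"] = node.state.get("visits", 0) + 1
    return pos

accept = IntelligentNode()
middle = IntelligentNode(transitions={"b": accept})
start = IntelligentNode(transitions={"a": middle}, command=count_visits)
traverse(start, "ab")
print(start.state)  # {'visits': 1}
```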
Abstract:
A content aware application processing system is provided for allowing directed access to data stored in a non-cache memory, thereby bypassing cache coherent memory. The processor includes a system interface to cache coherent memory and a low latency memory interface to a non-cache coherent memory. The system interface directs memory access for ordinary load/store instructions executed by the processor to the cache coherent memory. The low latency memory interface directs memory access for non-ordinary load/store instructions executed by the processor to the non-cache memory, thereby bypassing the cache coherent memory. The non-ordinary load/store instruction can be a coprocessor instruction. The memory can be a low-latency type memory. The processor can include a plurality of processor cores.
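The split between ordinary and non-ordinary load/store paths can be pictured with a small dispatch sketch. The class and method names below are hypothetical and only model the routing described above, not the processor's actual interfaces or memory types.

```python
class CacheCoherentMemory:
    def __init__(self):
        self.cells = {}
    def load(self, addr):
        return self.cells.get(addr, 0)
    def store(self, addr, value):
        self.cells[addr] = value

class LowLatencyMemory(CacheCoherentMemory):
    """Non-cache-coherent, low-latency memory reached through its own interface."""
    pass

class Core:
    def __init__(self):
        self.system_if = CacheCoherentMemory()   # system interface
        self.ll_if = LowLatencyMemory()          # low latency memory interface
    def load(self, addr, non_ordinary=False):
        # Ordinary loads go to cache coherent memory; non-ordinary (e.g.,
        # coprocessor) loads bypass it and use the low latency interface.
        mem = self.ll_if if non_ordinary else self.system_if
        return mem.load(addr)
    def store(self, addr, value, non_ordinary=False):
        mem = self.ll_if if non_ordinary else self.system_if
        mem.store(addr, value)

core = Core()
core.store(0x10, 42)                      # ordinary store -> cache coherent memory
core.store(0x10, 7, non_ordinary=True)    # bypasses cache coherent memory
print(core.load(0x10), core.load(0x10, non_ordinary=True))  # 42 7
```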