Abstract:
Techniques for tracking memory usage of a data processing system are described herein. According to one embodiment, a memory manager is to perform a first lookup operation in a memory allocation table to identify an allocation entry based on a handle representing a memory address of a memory block allocated to a client and to retrieve a trace entry pointer from the allocation entry. The memory manager is then to perform a second lookup operation in a memory trace table to identify a trace entry based on the trace entry pointer and to increment a memory allocation count of the trace entry. The memory allocation count is utilized to indicate a likelihood of the client causing a memory leak.
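A minimal sketch of the two-table lookup in Python; the class names, fields, and threshold check are illustrative assumptions, not the embodiment's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class TraceEntry:
    call_stack: tuple          # identifies the allocating client
    allocation_count: int = 0  # running count used as a leak signal

@dataclass
class AllocationEntry:
    size: int
    trace_entry_ptr: int       # pointer (key) into the memory trace table

class MemoryTracker:
    def __init__(self):
        self.allocation_table = {}  # handle (memory address) -> AllocationEntry
        self.trace_table = {}       # trace entry pointer -> TraceEntry

    def record_allocation(self, handle, size, call_stack):
        trace_ptr = hash(call_stack)
        self.trace_table.setdefault(trace_ptr, TraceEntry(call_stack))
        self.allocation_table[handle] = AllocationEntry(size, trace_ptr)

    def track(self, handle):
        # First lookup: allocation entry identified by the handle.
        alloc = self.allocation_table[handle]
        # Second lookup: trace entry identified by the trace entry pointer.
        trace = self.trace_table[alloc.trace_entry_ptr]
        # A count that keeps growing suggests the client may be leaking memory.
        trace.allocation_count += 1

    def likely_leakers(self, threshold):
        return [t for t in self.trace_table.values()
                if t.allocation_count > threshold]
```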
Abstract:
The subject technology provides for receiving a neural network (NN) model to be executed on a target platform, the NN model including multiple layers that include operations, some of the operations being executable on multiple processors of the target platform. The subject technology further sorts the operations from the multiple layers in a particular order based at least in part on grouping the operations that are executable by a particular processor of the multiple processors. The subject technology determines, based at least in part on a cost of transferring the operations between the multiple processors, an assignment of one of the multiple processors for each of the sorted operations of each of the layers in a manner that minimizes a total cost of executing the operations. Further, for each layer of the NN model, the subject technology includes an annotation to indicate the processor assigned for each of the operations.
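The cost-minimizing assignment can be pictured as a small dynamic program over a chain of sorted operations. The sketch below is an assumption for illustration (the processor names, cost tables, and per-chain recurrence are not taken from the description); it picks, for each operation, the processor that minimizes execution cost plus the cost of transferring data from the previous operation's processor.

```python
PROCESSORS = ["cpu", "gpu", "neural_engine"]   # hypothetical target processors

def assign_processors(ops, exec_cost, transfer_cost):
    """ops: operation names already sorted/grouped into a chain.
    exec_cost[op][proc]: cost of running op on proc (omit if unsupported).
    transfer_cost[(p, q)]: cost of moving data from p to q (0 when p == q)."""
    INF = float("inf")
    # best[q] = minimal total cost so far with the most recent op placed on q
    best = {q: exec_cost[ops[0]].get(q, INF) for q in PROCESSORS}
    choices = []                                 # back-pointers for traceback
    for op in ops[1:]:
        new_best, picks = {}, {}
        for q in PROCESSORS:
            prev = min(PROCESSORS,
                       key=lambda p: best[p] + transfer_cost[(p, q)])
            picks[q] = prev
            new_best[q] = (best[prev] + transfer_cost[(prev, q)]
                           + exec_cost[op].get(q, INF))
        best = new_best
        choices.append(picks)
    # Trace back the cheapest chain to annotate each op with its processor.
    proc = min(best, key=best.get)
    assignment = [proc]
    for picks in reversed(choices):
        proc = picks[proc]
        assignment.append(proc)
    return list(reversed(assignment))
```

A real pass would also honor per-layer grouping and operations restricted to a single processor; the sketch only captures the execution-plus-transfer trade-off that the total-cost minimization weighs.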
Abstract:
Embodiments described herein ensure differential privacy when transmitting data to a server that estimates a frequency of such data amongst a set of client devices. The differential privacy mechanism may provide a predictable degree of variance for frequency estimations of data. The system may use a multibit histogram model or Hadamard multibit model for the differential privacy mechanism, both of which provide a predictable degree of accuracy of frequency estimations while still providing mathematically provable levels of privacy.
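For a sense of how a multibit histogram approach keeps estimation variance predictable, here is a generic randomized-response sketch over a one-hot encoding; the flip probability and debiasing constants are standard textbook choices assumed for illustration, not necessarily the embodiment's exact parameters.

```python
import math
import random

def privatize(value_index, domain_size, epsilon):
    # One-hot encode the value, then flip every bit independently; keeping a
    # bit with probability e^(eps/2) / (1 + e^(eps/2)) yields an epsilon-DP report.
    p_keep = math.exp(epsilon / 2) / (1 + math.exp(epsilon / 2))
    one_hot = [1 if i == value_index else 0 for i in range(domain_size)]
    return [b if random.random() < p_keep else 1 - b for b in one_hot]

def estimate_frequencies(reports, epsilon):
    # Server side: sum the noisy bits and debias so the expected estimate
    # equals the true count; the variance depends only on epsilon and n.
    n, k = len(reports), len(reports[0])
    p = math.exp(epsilon / 2) / (1 + math.exp(epsilon / 2))
    sums = [sum(r[i] for r in reports) for i in range(k)]
    return [(s - n * (1 - p)) / (2 * p - 1) for s in sums]
```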
Abstract:
In an exemplary process for remote execution of machine-learned models, one or more signals from a second electronic device are detected by a first electronic device. The second electronic device includes a machine-learned model associated with an application implemented on the first electronic device. Based on the one or more signals, a communication connection is established with the second electronic device and a proxy to the machine-learned model is generated. Input data is obtained via a sensor of the first electronic device. A representation of the input data is sent to the second electronic device via the proxy and the established communication connection. The representation of the input data is processed through the machine-learned model to generate an output. A result derived from the output is received via the communication connection and a representation of the result is outputted.
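A rough sketch of the proxy flow; the socket transport, JSON encoding, and method names here are assumptions chosen for illustration, not the communication mechanism described in the embodiment.

```python
import json
import socket

class ModelProxy:
    """Local stand-in for a machine-learned model hosted on a second device."""

    def __init__(self, host, port):
        # Connection established after detecting the second device's signals.
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile("r")

    def predict(self, input_representation):
        # Send a representation of the sensor input to the second device...
        self.sock.sendall((json.dumps(input_representation) + "\n").encode())
        # ...which runs the model and returns a result derived from its output.
        return json.loads(self.reader.readline())

# Hypothetical usage on the first device:
#   proxy = ModelProxy("second-device.local", 9000)
#   result = proxy.predict({"audio_features": [0.12, 0.80, 0.33]})
#   print(result)   # output a representation of the result
```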
Abstract:
Embodiments described herein provide a privacy mechanism to protect user data when transmitting the data to a server that estimates a frequency of such data amongst a set of client devices. In one embodiment, a differential privacy mechanism is implemented using a count-mean-sketch technique that can reduce the resources required to enable privacy while providing provable guarantees regarding privacy and utility. For instance, the mechanism can provide the ability to balance utility (e.g., accuracy of estimations) against resource requirements (e.g., transmission bandwidth and computational complexity).
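A rough count-mean-sketch aggregator, sketched under common assumptions (SHA-256 as the hash family, a randomized-response bit flip, and the usual collision correction); the sketch width, hash count, and constants are illustrative, not the embodiment's parameters. The utility/bandwidth trade-off mentioned above corresponds to the choice of K and M.

```python
import hashlib
import math
import random

K, M = 16, 1024   # number of hash functions, sketch width (illustrative)

def bucket(value, j):
    digest = hashlib.sha256(f"{j}:{value}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % M

def client_report(value, epsilon):
    # Each client picks one hash at random and sends a small noisy vector,
    # trading estimation accuracy against transmission bandwidth via K and M.
    j = random.randrange(K)
    p = math.exp(epsilon / 2) / (1 + math.exp(epsilon / 2))
    row = [1 if i == bucket(value, j) else 0 for i in range(M)]
    return j, [b if random.random() < p else 1 - b for b in row]

def estimate(reports, candidate, epsilon):
    # Server: accumulate noisy rows into a K x M sketch, debias the cell the
    # candidate hashes to in each row, then correct for hash collisions.
    n = len(reports)
    p = math.exp(epsilon / 2) / (1 + math.exp(epsilon / 2))
    sketch = [[0] * M for _ in range(K)]
    rows = [0] * K
    for j, noisy in reports:
        rows[j] += 1
        for i, b in enumerate(noisy):
            sketch[j][i] += b
    total = sum((sketch[j][bucket(candidate, j)] - rows[j] * (1 - p))
                / (2 * p - 1) for j in range(K))
    return (M / (M - 1)) * (total - n / M)
```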
Abstract:
Embodiments described herein provide a privacy mechanism to protect user data when transmitting the data to a server that estimates a frequency of such data amongst a set of client devices. One embodiment uses a differential privacy mechanism to enhance a user experience by identifying particular websites that exhibit particular characteristics. In one embodiment, websites that are associated with a high resource consumption are identified. High resource consumption can be identified based on thresholds for particular resources, such as processor, memory, network bandwidth, and power usage.
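The thresholding step can be sketched simply; the resource names and limits below are illustrative assumptions, and `privatize` stands in for whatever local differential-privacy mechanism is applied before submission.

```python
# Hypothetical per-site resource thresholds (illustrative values only).
THRESHOLDS = {"cpu_percent": 50.0, "memory_mb": 500.0,
              "network_mbps": 10.0, "power_mw": 300.0}

def is_high_resource(usage):
    """usage: measured per-website resource consumption, keyed like THRESHOLDS."""
    return any(usage.get(name, 0.0) > limit
               for name, limit in THRESHOLDS.items())

def maybe_report(domain, usage, privatize):
    # Only websites that cross a resource threshold are reported, and the
    # domain passes through the differential-privacy mechanism first.
    return privatize(domain) if is_high_resource(usage) else None
```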
Abstract:
Disclosed are systems, methods, and non-transitory computer-readable storage media for efficiently monitoring the operating context of a computing device. In some implementations, the context daemon and/or the context client can be terminated to conserve system resources. For example, if the context daemon and/or the context client are idle, they can be shut down to conserve battery power or free other system resources (e.g., memory). When an event occurs (e.g., a change in current context) that requires the context daemon and/or the context client to be running, the context daemon and/or the context client can be restarted to handle the event. Thus, system resources can be conserved while still providing relevant context information collection and callback notification features.
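A minimal sketch of the idle-shutdown and on-demand-restart pattern; the class, method, and timing names are illustrative, not the context daemon's actual interfaces.

```python
import time

class ContextDaemon:
    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.running = False
        self.callbacks = []          # context-client callback registrations
        self.last_event = time.time()

    def on_context_change(self, new_context):
        if not self.running:
            self.running = True      # restart only when an event needs handling
        self.last_event = time.time()
        for notify in self.callbacks:
            notify(new_context)      # deliver callback notifications

    def maybe_shutdown(self):
        # Idle long enough: shut down to free memory and conserve battery.
        if self.running and time.time() - self.last_event > self.idle_timeout:
            self.running = False
```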
Abstract:
One embodiment provides a system that implements a 1-bit protocol for differential privacy for a set of client devices that transmit information to a server. Implementations may leverage specialized instruction sets or engines built into the hardware or firmware of a client device to improve the efficiency of the protocol. In one embodiment, the client device may use cryptographic functions such as hashes (including SHA) or block ciphers (including AES) to provide an efficient mechanism for implementing differential privacy; for example, the client device may utilize these cryptographic functions to randomize information sent to the server.
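The hardware-friendly randomization might look like the following sketch, which uses Python's hashlib SHA-256 as a stand-in for a device's cryptographic engine; the bit-derivation and flip rule are illustrative assumptions, not the embodiment's exact 1-bit protocol.

```python
import hashlib
import math
import random

def one_bit_report(value, seed, epsilon):
    # Derive a deterministic bit from the value with a cryptographic hash;
    # on-device SHA (or AES) instructions make this step cheap for clients.
    digest = hashlib.sha256(f"{seed}:{value}".encode()).digest()
    bit = digest[0] & 1
    # Randomize the bit so the server sees only an epsilon-DP view of it.
    keep = random.random() < math.exp(epsilon) / (1 + math.exp(epsilon))
    return seed, bit if keep else 1 - bit
```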
Abstract:
Systems and methods are disclosed for a server learning new words generated by user client devices in a crowdsourced manner while maintaining local differential privacy of client devices. A client device can determine that a word typed on the client device is a new word that is not contained in a dictionary or asset catalog on the client device. New words can be grouped into classifications such as entertainment, health, finance, etc. A differential privacy system on the client device can comprise a privacy budget for each classification of new words. If there is privacy budget available for the classification, then one or more new terms in a classification can be sent to a new term learning server, and the privacy budget for the classification is reduced. The privacy budget can be periodically replenished.
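Per-classification budget accounting might be sketched as follows; the classification names, costs, and replenishment interval are illustrative assumptions.

```python
import time

class PrivacyBudgetManager:
    def __init__(self, per_class_budget, replenish_seconds):
        self.limit = per_class_budget
        self.replenish_seconds = replenish_seconds
        self.state = {}   # classification -> (remaining budget, last refill)

    def try_spend(self, classification, cost=1):
        remaining, last = self.state.get(classification,
                                         (self.limit, time.time()))
        if time.time() - last > self.replenish_seconds:
            remaining, last = self.limit, time.time()   # periodic replenishment
        if remaining >= cost:
            self.state[classification] = (remaining - cost, last)
            return True    # ok to send the privatized new term to the server
        self.state[classification] = (remaining, last)
        return False       # budget exhausted for this classification

# Hypothetical usage:
#   budget = PrivacyBudgetManager(per_class_budget=4, replenish_seconds=86400)
#   if budget.try_spend("health"):
#       send_to_learning_server(privatized_term)
```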