Abstract:
Disclosed are systems, methods, and non-transitory computer-readable storage media for notifying context clients of changes to the current context of a computing device. In some implementations, a context client can register to be called back when the context daemon detects a specified context. For example, the context client can specify a context in which it is interested. When the context daemon detects that the current context of the computing device corresponds to the registered context, the context daemon can notify the context client that the current context matches the context of interest. Thus, a context client does not need its own logic for obtaining context updates and detecting the context changes that are relevant to it.
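
A minimal sketch of the registration pattern described above, assuming a hypothetical ContextDaemon type with a predicate-based register call; none of these names or signatures come from the abstract:

```swift
// Hypothetical context snapshot; a real daemon would populate this from device state.
struct Context {
    var isLocked: Bool
    var locationName: String
}

// Hypothetical daemon that lets clients register a predicate and a callback.
final class ContextDaemon {
    private var registrations: [(predicate: (Context) -> Bool, callback: (Context) -> Void)] = []

    func register(interestedIn predicate: @escaping (Context) -> Bool,
                  callback: @escaping (Context) -> Void) {
        registrations.append((predicate: predicate, callback: callback))
    }

    // Called whenever the daemon observes a new current context.
    func update(current context: Context) {
        for registration in registrations where registration.predicate(context) {
            registration.callback(context)  // notify only clients whose registered context matched
        }
    }
}

// A context client registers once and is called back when its context of interest occurs.
let daemon = ContextDaemon()
daemon.register(interestedIn: { $0.locationName == "Home" && !$0.isLocked }) { context in
    print("Client notified: device is unlocked at \(context.locationName)")
}

daemon.update(current: Context(isLocked: true, locationName: "Home"))   // no callback
daemon.update(current: Context(isLocked: false, locationName: "Home"))  // callback fires
```

The client supplies only the predicate and the callback; observing the current context and matching it against each registration stays inside the daemon.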
Abstract:
Embodiments of the present disclosure present devices, methods, and computer-readable media for creating machine learning models. Application developers can select a machine learning template from a plurality of templates appropriate for the type of data used in their application. The templates can include templates for classifying images, text, sound, motion, and tabular data. A graphical user interface allows for intuitive selection of training data and validation data and for integration of the trained model into the application. The techniques further display a numerical score for both the training accuracy and the validation accuracy computed using the test data. The application provides a live mode that executes the machine learning model on a mobile device, allowing the model to be tested with data from one or more sensors (e.g., camera or microphone) on the mobile device.
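
A minimal sketch of the workflow the abstract describes, using hypothetical ModelTemplate and train names rather than any actual product API; the accuracy values are placeholders:

```swift
import Foundation

// Hypothetical template identifiers mirroring the categories named in the abstract.
enum ModelTemplate {
    case imageClassifier, textClassifier, soundClassifier, motionClassifier, tabularClassifier
}

// Hypothetical summary of a trained model, carrying the two scores the interface would display.
struct TrainedModel {
    let template: ModelTemplate
    let trainingAccuracy: Double
    let validationAccuracy: Double
}

// Stand-in for the training step driven by the graphical interface; the scores are placeholders.
func train(template: ModelTemplate, trainingData: URL, validationData: URL) -> TrainedModel {
    // A real implementation would run the selected template's training pipeline here.
    return TrainedModel(template: template, trainingAccuracy: 0.97, validationAccuracy: 0.93)
}

let model = train(template: .imageClassifier,
                  trainingData: URL(fileURLWithPath: "/data/train"),
                  validationData: URL(fileURLWithPath: "/data/validate"))
print("Training accuracy: \(model.trainingAccuracy), validation accuracy: \(model.validationAccuracy)")
```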
Abstract:
The subject technology provides for determining that a machine learning model in a first format includes sufficient data to conform to a particular model specification in a second format, the second format corresponding to an object-oriented programming language, wherein the machine learning model includes a model parameter. The subject technology transforms the machine learning model into a transformed machine learning model that is compatible with the particular model specification. The subject technology generates a code interface and code for the transformed machine learning model, the code interface including code statements in the object-oriented programming language, the code statements corresponding to an object representing the transformed machine learning model, where the object includes an interface to update the model parameter. Further, the subject technology provides the generated code interface and the code for display in an integrated development environment (IDE), the IDE enabling modification of the generated code interface and the code.
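
The generated code interface might resemble the following sketch, assuming a hypothetical SentimentClassifier model; the class, its prediction body, and the threshold parameter are illustrative only:

```swift
import Foundation

// Hypothetical input and output types the generator would derive from the model specification.
struct SentimentInput {
    let text: String
}

struct SentimentOutput {
    let label: String
    let confidence: Double
}

// Hypothetical generated class: an object representing the transformed model, with an
// interface for updating a model parameter (here, a decision threshold).
final class SentimentClassifier {
    private(set) var threshold: Double

    init(threshold: Double = 0.5) {
        self.threshold = threshold
    }

    // Generated interface for updating the model parameter.
    func updateThreshold(to newValue: Double) {
        threshold = newValue
    }

    // Generated prediction entry point; the body is a stand-in for the actual model evaluation.
    func prediction(from input: SentimentInput) -> SentimentOutput {
        let score = input.text.contains("great") ? 0.9 : 0.2
        let label = score >= threshold ? "positive" : "negative"
        return SentimentOutput(label: label, confidence: score)
    }
}

let classifier = SentimentClassifier()
classifier.updateThreshold(to: 0.8)
print(classifier.prediction(from: SentimentInput(text: "great update")).label)
```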
Abstract:
The subject technology transforms a machine learning model into a transformed machine learning model in accordance with a particular model specification when the machine learning model does not conform to the particular model specification, the particular model specification being compatible with an integrated development environment (IDE). The subject technology generates a code interface and code for the transformed machine learning model, the code interface including code statements in an object-oriented programming language, the code statements corresponding to an object representing the transformed machine learning model. Further, the subject technology provides the generated code interface and the code for display in the IDE, the IDE enabling modification of the generated code interface and the code.
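
A sketch of the conformance check and transformation step that precedes code generation, with hypothetical MLModelDocument and ModelSpecification types standing in for the real formats:

```swift
// Hypothetical description of a model file as it arrives from a training tool.
struct MLModelDocument {
    let formatName: String
    let layers: [String]
}

// Hypothetical target specification that the IDE understands.
struct ModelSpecification {
    let formatName: String
}

enum ConversionError: Error {
    case missingLayers
}

// Transform the model only when it does not already conform to the specification.
func conform(_ model: MLModelDocument, to spec: ModelSpecification) throws -> MLModelDocument {
    guard model.formatName != spec.formatName else {
        return model                         // already conforms; nothing to do
    }
    guard !model.layers.isEmpty else {
        throw ConversionError.missingLayers  // not enough data to transform
    }
    // A real converter would rewrite weights and metadata; here we only retag the format.
    return MLModelDocument(formatName: spec.formatName, layers: model.layers)
}

let imported = MLModelDocument(formatName: "third-party", layers: ["dense", "softmax"])
do {
    let converted = try conform(imported, to: ModelSpecification(formatName: "ide-native"))
    print("Model is now in format: \(converted.formatName)")
} catch {
    print("Conversion failed: \(error)")
}
```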
Abstract:
The subject technology provides for generating machine learning (ML) model code from an ML document file, the ML document file being in a first data format and being converted to code in an object-oriented programming language different from the first data format. The subject technology further provides for receiving additional code that calls a function provided by the ML model code. The subject technology compiles the ML model code and the additional code, the compiled ML model code including object code corresponding to the ML model code and the compiled additional code including object code corresponding to the additional code. The subject technology generates a package including the compiled ML model code and the compiled additional code. Further, the subject technology sends the package to a runtime environment on a target device for execution.
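
A sketch of the compile-and-package flow as plain data transformations; the SourceFile, ObjectFile, and Package types and the stand-in compile function are hypothetical:

```swift
// Hypothetical source artifacts: generated model code and developer-written code that calls it.
struct SourceFile {
    let name: String
    let contents: String
}

// Hypothetical compiled artifact and package.
struct ObjectFile {
    let name: String
    let bytes: [UInt8]
}

struct Package {
    let objects: [ObjectFile]
}

// Stand-in compiler: a real flow would invoke the toolchain for each source file.
func compile(_ source: SourceFile) -> ObjectFile {
    ObjectFile(name: source.name + ".o", bytes: Array(source.contents.utf8))
}

// Bundle the compiled model code and the compiled additional code into one package.
func buildPackage(modelCode: SourceFile, additionalCode: SourceFile) -> Package {
    Package(objects: [compile(modelCode), compile(additionalCode)])
}

// Stand-in for sending the package to a runtime environment on a target device.
func send(_ package: Package, toDevice device: String) {
    print("Sending \(package.objects.count) object files to \(device)")
}

let modelCode = SourceFile(name: "ImageClassifier.swift", contents: "/* generated from the ML document */")
let appCode = SourceFile(name: "main.swift", contents: "/* calls a function provided by the ML model code */")
send(buildPackage(modelCode: modelCode, additionalCode: appCode), toDevice: "target-device")
```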
Abstract:
The subject technology provides for parsing a line of code in a project of an integrated development environment (IDE). The subject technology executes the parsed line of code indirectly, using an interpreter. The interpreter references a translated source code document generated by a source code translation component from a machine learning (ML) document written in a particular data format. The translated source code document includes code in a chosen programming language specific to the IDE, and the code of the translated source code document is executable by the interpreter. Further, the subject technology provides, by the interpreter, an output of the executed parsed line of code.
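
A toy sketch of the indirection described above: the interpreter parses a line and executes it through functions exposed by a translated source document rather than the original ML document. All names here are hypothetical:

```swift
// Hypothetical translated source document: function names mapped to executable closures,
// as produced by a source code translation component from an ML document.
struct TranslatedSourceDocument {
    let functions: [String: ([Double]) -> Double]
}

// Hypothetical interpreter: parses a line such as "predict 0.2 0.8" and executes it
// indirectly through the translated document rather than the original ML document.
struct Interpreter {
    let document: TranslatedSourceDocument

    func execute(line: String) -> String {
        let parts = line.split(separator: " ").map { String($0) }
        guard let name = parts.first, let function = document.functions[name] else {
            return "error: unknown function"
        }
        let arguments = parts.dropFirst().compactMap { Double($0) }
        return "\(function(arguments))"
    }
}

// Toy "model": the translated document exposes a single scoring function.
let document = TranslatedSourceDocument(functions: [
    "predict": { inputs in inputs.reduce(0, +) / Double(max(inputs.count, 1)) }
])

let interpreter = Interpreter(document: document)
print(interpreter.execute(line: "predict 0.2 0.8 0.5"))  // output of the executed parsed line
```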
Abstract:
Systems and methods for proactively populating an application with information that was previously viewed by a user in a different application are disclosed herein. An example method includes: while displaying a first application, obtaining information identifying a first physical location viewed by a user in the first application. The method also includes exiting the first application and, after exiting the first application, receiving a request from the user to open a second application that is distinct from the first application. In response to receiving the request and in accordance with a determination that the second application is capable of accepting geographic location information, the method includes presenting the second application so that the second application is populated with information that is based at least in part on the information identifying the first physical location.
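
A sketch of the handoff, assuming a hypothetical PhysicalLocation record and an Application protocol with a capability flag; the second application is populated only when it can accept geographic location information:

```swift
// Hypothetical record of the location the user viewed in the first application.
struct PhysicalLocation {
    let name: String
    let latitude: Double
    let longitude: Double
}

// Hypothetical capability check: only some applications accept geographic location information.
protocol Application {
    var name: String { get }
    var acceptsGeographicLocation: Bool { get }
    func present(prepopulatedWith location: PhysicalLocation?)
}

struct RideApp: Application {
    let name = "RideApp"
    let acceptsGeographicLocation = true
    func present(prepopulatedWith location: PhysicalLocation?) {
        if let location = location {
            print("\(name) opened with destination \(location.name)")
        } else {
            print("\(name) opened normally")
        }
    }
}

// Open the second application, populating it only when it can accept location information.
func open(_ app: Application, lastViewedLocation: PhysicalLocation?) {
    let location = app.acceptsGeographicLocation ? lastViewedLocation : nil
    app.present(prepopulatedWith: location)
}

// The user viewed a restaurant in the first application, exited, then asked to open a second one.
let viewed = PhysicalLocation(name: "Cafe on Main St", latitude: 37.33, longitude: -122.03)
open(RideApp(), lastViewedLocation: viewed)
```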
Abstract:
The subject technology provides for dynamic task allocation for neural network models. The subject technology determines an operation performed at a node of a neural network model. The subject technology assigns an annotation to indicate whether the operation is better performed on a CPU or a GPU based at least in part on hardware capabilities of a target platform. The subject technology determines whether the neural network model includes a second layer. In response to determining that the neural network model includes a second layer, the subject technology determines, for each node of the second layer, a second operation performed at that node. Further, the subject technology assigns a second annotation to indicate whether the second operation is better performed on the CPU or the GPU based at least in part on the hardware capabilities of the target platform.
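
A sketch of the per-node annotation pass, with a hypothetical operation set and a simple placement heuristic standing in for the hardware-capability logic:

```swift
enum Operation {
    case convolution, matrixMultiply, stringLookup, conditionalBranch
}

enum Device {
    case cpu, gpu
}

struct Node {
    let operation: Operation
    var annotation: Device? = nil
}

struct Layer {
    var nodes: [Node]
}

// Hypothetical hardware description of the target platform.
struct TargetPlatform {
    let hasGPU: Bool
}

// Annotate each node of each layer with the device the operation is better performed on.
func annotate(layers: [Layer], for platform: TargetPlatform) -> [Layer] {
    layers.map { layer in
        var layer = layer
        layer.nodes = layer.nodes.map { node in
            var node = node
            switch node.operation {
            case .convolution, .matrixMultiply:
                node.annotation = platform.hasGPU ? .gpu : .cpu  // data-parallel work favors the GPU
            case .stringLookup, .conditionalBranch:
                node.annotation = .cpu                           // control-heavy work stays on the CPU
            }
            return node
        }
        return layer
    }
}

let model = [
    Layer(nodes: [Node(operation: .convolution), Node(operation: .stringLookup)]),  // first layer
    Layer(nodes: [Node(operation: .matrixMultiply)])                                // second layer
]
for (index, layer) in annotate(layers: model, for: TargetPlatform(hasGPU: true)).enumerated() {
    for node in layer.nodes {
        print("layer \(index): \(node.operation) -> \(node.annotation!)")
    }
}
```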
Abstract:
Pacer activity data of a user may be managed. For example, historical activity data of a user corresponding to a particular time of a day prior to a current day may be received. Additionally, a user interface configured to display an activity goal of the user may be generated and the user interface may be provided for presentation. In some aspects, the user interface may be configured to display a first indicator that identifies cumulative progress towards the activity goal and a second indicator that identifies predicted cumulative progress towards the activity goal. The cumulative progress may be calculated based on monitored activity from a start of the current day to the particular time of the current day and the predicted cumulative progress may be calculated based on the received historical activity data corresponding to the particular time of the day prior to the current day.
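
A sketch of how the two indicator values might be computed from hourly samples; the data shapes and the averaging rule are assumptions, not the claimed method:

```swift
// Hypothetical hourly activity samples (e.g., move minutes).
let todaySoFar: [Double] = [10, 20, 35, 15]  // monitored activity from the start of the current day
let historicalDays: [[Double]] = [           // hourly activity from days prior to the current day
    [12, 18, 30, 20, 40, 25],
    [ 8, 22, 28, 18, 35, 30]
]
let activityGoal = 150.0

// First indicator: cumulative progress from the start of the current day to the particular time.
let cumulative = todaySoFar.reduce(0, +)

// Second indicator: predicted cumulative progress at that same time of day, computed here as the
// average of the historical cumulative totals at that hour (an assumed rule for illustration).
let hour = todaySoFar.count
let historicalAtThisHour = historicalDays.map { $0.prefix(hour).reduce(0, +) }
let predicted = historicalAtThisHour.reduce(0, +) / Double(historicalAtThisHour.count)

print("Cumulative progress: \(cumulative) of \(activityGoal)")
print("Predicted cumulative progress: \(predicted)")
```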