Abstract:
An electronic device with a touch-sensitive display and one or more sensors to detect signals from a stylus associated with the device displays a user interface in a viewing mode, the user interface including a content region and a first control region. While displaying the user interface in the viewing mode, the device detects an input by a first contact on the touch-sensitive display; and, in response to detecting the input: when the first contact is a stylus contact in the content region, the device displays, in the content region, a mark drawn in accordance with movement of the first contact in the input; and when the first contact is a non-stylus contact in the content region, the device performs a navigation operation in the content region in accordance with movement of the first contact, without displaying the mark that corresponds to the first contact in the content region.
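The branching behavior described above can be illustrated with a minimal sketch. All names here (`ContentRegion`, `handle_contact`) are hypothetical, and the stylus-detection sensors are reduced to a boolean flag:

```python
from dataclasses import dataclass, field

@dataclass
class ContentRegion:
    """Hypothetical model of the abstract's content region."""
    marks: list = field(default_factory=list)   # strokes drawn by the stylus
    scroll_offset: float = 0.0                  # navigation (scroll) state

def handle_contact(region, is_stylus, movement):
    """Dispatch on contact type: a stylus draws a mark, a finger navigates."""
    if is_stylus:
        # Stylus contact: display a mark drawn in accordance with the movement.
        region.marks.append(movement)
    else:
        # Non-stylus contact: navigate; no mark is displayed.
        region.scroll_offset += sum(movement)
```

A stylus contact with movement `[1.0, 2.0]` appends a stroke, while the same movement from a finger only changes the scroll offset.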
Abstract:
An electronic device with a touch-sensitive display and one or more sensors to detect signals from a stylus associated with the device: displays a user interface in a viewing mode, the user interface including a content region and a first control region; while displaying the user interface in the viewing mode, detects an input by a first contact on the touch-sensitive display; and, in response to detecting the input: when the first contact is a stylus contact in the content region: changes from the viewing mode to an editing mode; and displays, in the content region, a mark drawn in accordance with movement of the first contact; and when the first contact is a non-stylus contact in the content region: remains in the viewing mode; and performs a navigation operation in the content region in accordance with movement of the first contact without displaying the mark.
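This variant adds a mode change to the dispatch: the stylus contact both switches the device from the viewing mode to an editing mode and draws a mark. A minimal sketch of that state machine, with hypothetical names:

```python
class DocumentUI:
    """Hypothetical sketch of the viewing/editing mode switch described above."""

    def __init__(self):
        self.mode = "viewing"
        self.marks = []
        self.scroll_offset = 0.0

    def on_contact(self, is_stylus, movement):
        if is_stylus:
            self.mode = "editing"          # stylus contact: viewing -> editing
            self.marks.append(movement)    # and a mark is drawn
        else:
            # Non-stylus contact: remain in viewing mode and navigate.
            self.scroll_offset += sum(movement)
```

Note that a finger contact never leaves the viewing mode, so casual scrolling cannot accidentally mark up the document.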
Abstract:
Systems and methods for proactively populating an application with information that was previously viewed by a user in a different application are disclosed herein. An example method includes: while displaying a first application, obtaining information identifying a first physical location viewed by a user in the first application. The method also includes exiting the first application and, after exiting the first application, receiving a request from the user to open a second application that is distinct from the first application. In response to receiving the request and in accordance with a determination that the second application is capable of accepting geographic location information, the method includes presenting the second application so that the second application is populated with information that is based at least in part on the information identifying the first physical location.
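The flow above can be sketched as a shared store that records the location viewed in the first application, which a capable second application reads when opened. This is a hypothetical illustration; the function and store names are not from the source:

```python
# Hypothetical shared store for the most recently viewed physical location.
location_store = {}

def record_viewed_location(location):
    """Called while the first application displays a physical location."""
    location_store["last_viewed"] = location

def open_app(app_name, accepts_location):
    """Open a second application; if it can accept geographic location
    information, populate it from the previously recorded location."""
    if accepts_location and "last_viewed" in location_store:
        return {"app": app_name, "prefill": location_store["last_viewed"]}
    return {"app": app_name, "prefill": None}
```

An application that cannot accept geographic location information opens unpopulated, matching the determination step in the abstract.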
Abstract:
Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture facial expressions of a user, including eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
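The dog-ear example suggests secondary animation driven by simple physics: the ears lag the head's motion rather than tracking it rigidly. A one-dimensional sketch of such an easing rule (the function and `stiffness` parameter are hypothetical, not from the source):

```python
def animate_ears(head_positions, stiffness=0.5):
    """Ears ease toward the head position each frame, lagging its motion.

    A minimal 1-D stand-in for the physics-based secondary animation
    described above: an up-and-down head movement produces a delayed,
    floppy ear response.
    """
    ear = 0.0
    frames = []
    for head in head_positions:
        ear += stiffness * (head - ear)   # move a fraction of the way per frame
        frames.append(ear)
    return frames
```

Holding the head at position 1.0 makes the ear converge toward it over successive frames rather than snapping there instantly.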
Abstract:
Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display are disclosed herein. In one aspect, a method includes obtaining information identifying a first physical location viewed by a user in a first application. The method further includes detecting a first input. In response to detecting the first input: a second application is identified that is capable of accepting geographic location information; and an affordance is presented that is distinct from the first application, with a suggestion to open the second application. The suggestion includes information about the first physical location. The method further includes detecting a second input at the affordance. In response to detecting the second input at the affordance, the second application is opened and populated to include information that is based at least in part on the information identifying the first physical location.
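The suggestion step can be sketched as scanning installed applications for one capable of accepting geographic location information and building an affordance that carries the viewed location. Names and the dictionary shape here are illustrative assumptions:

```python
def suggest_app(installed_apps, viewed_location):
    """Return a suggestion affordance for the first application that can
    accept geographic location information, or None if there is none.

    Each app is a dict like {"name": ..., "accepts_location": bool}
    (a hypothetical representation, not from the source)."""
    for app in installed_apps:
        if app.get("accepts_location"):
            return {"open": app["name"], "suggestion": viewed_location}
    return None
```

Selecting the returned affordance would then open the suggested application populated with the location information, as the abstract describes.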
Abstract:
Various customization options are provided for customizing a 3D avatar of a head. Features of the head, and assets corresponding to those features, can be customized using blend shapes. The amount of storage required for the plurality of blend shapes is minimized by determining overlapping blend shapes that can be reused across a plurality of different assets. Further, techniques are provided for dynamically changing an avatar in accordance with selected features and assets.
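The storage-minimization idea can be sketched as deduplication: each distinct blend shape is stored once, and assets hold references into the shared store. This is a hypothetical illustration, with blend shapes reduced to hashable tuples of vertex deltas:

```python
def build_blendshape_library(assets):
    """Store each distinct blend shape once; assets reference shared entries.

    `assets` maps an asset name to its list of blend shapes (here, lists of
    vertex deltas). Overlapping shapes across assets are stored only once."""
    seen = {}        # shape content -> index into storage
    storage = []     # unique blend shapes
    asset_refs = {}  # asset name -> indices into storage
    for name, shapes in assets.items():
        refs = []
        for shape in shapes:
            key = tuple(shape)
            if key not in seen:
                seen[key] = len(storage)
                storage.append(shape)
            refs.append(seen[key])
        asset_refs[name] = refs
    return storage, asset_refs
```

Two assets that share a blend shape contribute only three stored shapes between them instead of four, which is the overlap-reuse saving the abstract describes.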
Abstract:
Systems and methods for proactively assisting users with accurately locating a parked vehicle are disclosed herein. An example method includes: automatically, and without instructions from a user, determining that a user of the electronic device is in a vehicle that has come to rest at a geographic location. Upon determining that the user has left the vehicle at the geographic location, the method includes automatically, and without instructions from the user, determining whether positioning information, retrieved from a location sensor to identify the geographic location, satisfies accuracy criteria. Upon determining that the positioning information does not satisfy the accuracy criteria, the method includes providing a prompt to the user to input information about the geographic location. In response to providing the prompt, the method includes receiving information from the user about the geographic location and storing the received information as vehicle location information.
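The accuracy-gated fallback can be sketched as a single check: if the sensor's reported error is within a threshold, the position is stored as-is; otherwise the user is prompted. The function name, threshold value, and `ask_user` callback are hypothetical stand-ins:

```python
def record_parking_spot(position, accuracy_m, ask_user, max_error_m=20.0):
    """Store positioning info if it satisfies the accuracy criteria;
    otherwise prompt the user for a description of the location.

    `accuracy_m` is the sensor's estimated error in meters, and `ask_user`
    is a callback standing in for the prompt described above."""
    if accuracy_m <= max_error_m:
        return {"source": "sensor", "location": position}
    # Positioning information fails the accuracy criteria: ask the user.
    return {"source": "user", "location": ask_user()}
```

With a 5 m fix the sensor position is stored directly; with a 150 m fix the method falls back to whatever the user enters (e.g., "level 2, row C").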