Abstract:
A method executes software that includes a view hierarchy with a plurality of views, and displays one or more views of the view hierarchy. The method executes software elements associated with a particular view, wherein the particular view includes one or more event recognizers. Each event recognizer has an event definition based on sub-events, and an event handler that specifies an action for a target and is configured to send the action to the target in response to an event recognition. The method detects a sequence of sub-events, and identifies one of the views of the view hierarchy as a hit view that establishes which views in the hierarchy are actively involved views. The method delivers a respective sub-event to event recognizers for each actively involved view, wherein each event recognizer for actively involved views in the view hierarchy processes the respective sub-event prior to processing a next sub-event in the sequence of sub-events.
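The delivery loop described above can be sketched as follows. This is a minimal, hypothetical model, not any real API: the class names (`View`, `EventRecognizer`), the choice of "hit view plus its ancestors" as the actively involved views, and the tap-style event definition are all assumptions for illustration.

```python
class EventRecognizer:
    def __init__(self, definition, target, action):
        self.definition = list(definition)  # the sub-event sequence that defines the event
        self.target = target
        self.action = action
        self.seen = []

    def process(self, sub_event):
        self.seen.append(sub_event)
        # When the observed sub-events match the definition, send the
        # action to the target (the "event recognition").
        if self.seen == self.definition:
            self.action(self.target)

class View:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.recognizers = []

def actively_involved(hit_view):
    # The hit view establishes the actively involved views; here we
    # assume they are the hit view and all of its ancestors.
    views = []
    view = hit_view
    while view is not None:
        views.append(view)
        view = view.parent
    return views

def deliver(sub_events, hit_view):
    views = actively_involved(hit_view)
    # Every recognizer of every actively involved view processes the
    # current sub-event before the next sub-event is delivered.
    for sub_event in sub_events:
        for view in views:
            for recognizer in view.recognizers:
                recognizer.process(sub_event)
```

A recognizer defined by the sub-event sequence `["down", "up"]` on the hit view fires its action only after both sub-events have been delivered in order.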
Abstract:
An electronic device detects an input comprising one or more touches on a region of a touchscreen display that includes application content of a first application. In response to detecting the input comprising the one or more touches, the electronic device, in accordance with a determination that the one or more touches correspond to a system gesture for switching applications, replaces display of the first application with a second application without delivering information corresponding to the one or more touches to the first application; and in accordance with a determination that the one or more touches do not correspond to a system gesture, delivers the information corresponding to the one or more touches to the first application.
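The routing decision described above can be sketched in a few lines. The predicate for a "system gesture" used here (a touch beginning at the bottom screen edge, `y == 0`) is purely an assumption for the example; the abstract does not specify the gesture.

```python
def is_system_gesture(touches):
    # ASSUMPTION: treat any touch starting at the bottom edge (y == 0)
    # as the system gesture for switching applications.
    return any(y == 0 for (_x, y) in touches)

def route_touches(touches, first_app, second_app, display):
    if is_system_gesture(touches):
        # Switch applications; the touch information is never
        # delivered to the first application.
        display["front"] = second_app
    else:
        # Otherwise deliver the touch information to the first application.
        first_app["touches"].extend(touches)
    return display
```

The key design point is that the determination happens before delivery, so the first application never observes the touches that triggered an app switch.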
Abstract:
At a portable electronic device that includes a portable-device display and is in communication with a vehicle display, displaying a first user interface on the portable-device display. Sending, from the portable electronic device to the vehicle display, information for generating a second user interface, the second user interface including an affordance. While the second user interface is displayed on the vehicle display, detecting an input activating the affordance in the second user interface, and in response, causing the portable electronic device to invoke a digital assistant. In response to invoking the digital assistant, prompting a user for an audible request. In response to receiving the audible request, causing display, within the second user interface, of a digital assistant dialogue box; and subsequently causing display, within the second user interface, of a user interface object associated with a search result, and maintaining the first user interface on the portable-device display.
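The two-display flow described above can be modeled as a small state machine. This is an illustrative sketch only; the class name, the string placeholders for interface elements, and the shape of the vehicle UI are all invented for the example.

```python
class PhoneAndVehicle:
    def __init__(self):
        # The phone shows its own (first) user interface...
        self.phone_ui = "first user interface"
        # ...while sending information for a second user interface,
        # including an affordance, to the vehicle display.
        self.vehicle_ui = {"affordance": "assistant button", "elements": []}
        self.assistant_active = False

    def activate_affordance(self):
        # Activating the affordance on the vehicle display invokes the
        # digital assistant on the portable device, which prompts the
        # user for an audible request.
        self.assistant_active = True
        self.vehicle_ui["elements"].append("prompt for audible request")

    def receive_audible_request(self, request):
        assert self.assistant_active
        # The dialogue box and a search-result object are displayed
        # within the second user interface; the phone's own interface
        # is maintained unchanged.
        self.vehicle_ui["elements"].append("dialogue box: " + request)
        self.vehicle_ui["elements"].append("search result for " + request)
```

The point the sketch captures is the separation of concerns: the assistant runs on the phone, but all of its visible output lands in the vehicle-side interface.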
Abstract:
Disclosed herein is a technique for implementing a secure lock screen on a computing device. The secure lock screen is configured to permit particular applications to display their content—such as main user interfaces (UIs)—while maintaining a desired overall level of security on the computing device. Graphics contexts, which represent drawing destinations associated with the applications, are tagged with entitlement information that indicates whether or not each graphics context should be displayed on the computing device when the computing device is in a locked-mode. Specifically, an application manager tags each application that is initialized, where the tagging is based on a level of entitlement possessed by the application. In turn, a rendering server that manages the graphics contexts can identify the tagged entitlement information and display or suppress the content of the applications in accordance with their entitlements.
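The tagging scheme described above can be sketched as two cooperating components. The class and field names here are assumptions for illustration; real graphics contexts and entitlements are opaque system objects.

```python
class AppManager:
    def init_app(self, name, entitled):
        # Tag the application's graphics context at initialization,
        # based on the level of entitlement the application possesses.
        return {"app": name, "show_on_lock_screen": entitled}

class RenderingServer:
    def visible_contexts(self, contexts, locked):
        if not locked:
            # Unlocked: every context may be displayed.
            return contexts
        # Locked mode: display only contexts tagged with the
        # entitlement, and suppress the rest.
        return [c for c in contexts if c["show_on_lock_screen"]]
```

Because the rendering server consults only the tag, the per-application policy is fixed once at initialization rather than re-evaluated at every frame.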
Abstract:
A software application includes a plurality of views and an application state. The application includes instructions for displaying one or more views, where a respective view includes a respective gesture recognizer having a corresponding delegate, detecting one or more touches on a touch-sensitive surface, and processing a respective touch. The processing includes obtaining a receive touch value based on the application state by executing the delegate; when the receive touch value meets predefined criteria, processing the respective touch at the respective gesture recognizer; and conditionally sending information corresponding to the respective touch to the software application in accordance with an outcome of the processing by the respective gesture recognizer and in accordance with the receive touch value determined by the delegate. The software application is executed in accordance with the outcome of the processing of the respective touch by the respective gesture recognizer.
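The delegate check described above can be sketched as follows. The delegate's decision rule (receive touches only while the application state is "active") is an assumption made for the example, as is the use of a simple list as the application's delivery queue.

```python
class Delegate:
    def receive_touch_value(self, app_state):
        # Obtain the receive touch value based on the application state.
        # ASSUMPTION: touches are received only while the app is active.
        return app_state == "active"

class GestureRecognizer:
    def __init__(self, delegate):
        self.delegate = delegate
        self.recognized = []

    def process_touch(self, touch, app_state, app_queue):
        receive = self.delegate.receive_touch_value(app_state)
        if not receive:
            # The receive touch value fails the criteria: the touch is
            # never processed at this recognizer.
            return False
        self.recognized.append(touch)
        # Conditionally send touch information on to the application.
        app_queue.append(touch)
        return True
```

The design point is that the delegate gates the recognizer from outside: the recognizer itself stays generic, while application-state policy lives in the delegate.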
Abstract:
Systems, methods, and devices can allow applications to provide complication data to be displayed in a complication of a watch face. A client application can create a complication data object according to a template to efficiently select how the complication data is to be displayed. For example, a complication controller on the watch can receive new data and determine which template to use. The complication data object can be sent to a display manager that can identify the selected template and display the data according to the template.
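The template flow described above can be sketched in miniature. The template names and data fields here are invented for illustration and do not correspond to any real complication API.

```python
# ASSUMED templates: each maps a data dictionary to a display string.
TEMPLATES = {
    "short-text": lambda data: data["text"][:10],
    "gauge": lambda data: "{}%".format(int(100 * data["fraction"])),
}

def make_complication_object(template, data):
    # The client application packages its new data together with the
    # template it selected for displaying that data.
    assert template in TEMPLATES
    return {"template": template, "data": data}

def render(complication_object):
    # The display manager identifies the selected template and
    # displays the data according to it.
    template = complication_object["template"]
    return TEMPLATES[template](complication_object["data"])
```

Keeping the template choice inside the data object is what makes the handoff efficient: the display manager needs no knowledge of the client application, only of the shared template catalog.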
Abstract:
While displaying one or more views of a first software application, an electronic device detects a sequence of touch inputs. The electronic device, in accordance with a determination that no gesture recognizer of the first software application recognizes a portion of the sequence of touch inputs, delivers the sequence of touch inputs to a second software application, and in accordance with a determination that at least one gesture recognizer in the second software application recognizes the sequence of touch inputs, processes the sequence of touch inputs with the at least one gesture recognizer in the second software application that recognizes the sequence of touch inputs.
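The fallback described above can be sketched as a two-stage dispatch. The representation of a recognizer as the touch-sequence prefix it accepts is an assumption for the example.

```python
def recognizes(recognizer, touches):
    # ASSUMPTION: a recognizer is modeled as the touch-sequence prefix
    # it accepts; it "recognizes a portion" if that prefix matches.
    n = len(recognizer)
    return touches[:n] == recognizer

def process_touch_sequence(touches, first_app_recognizers, second_app_recognizers):
    # Touches reach the second application only when no recognizer of
    # the first application recognizes any portion of the sequence.
    if any(recognizes(r, touches) for r in first_app_recognizers):
        return "first"
    if any(recognizes(r, touches) for r in second_app_recognizers):
        return "second"
    return None
```

The first application thus always gets priority; the second application's recognizers are consulted only on a complete miss.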
Abstract:
A method includes displaying a suggestion region above an on-screen keyboard. The suggestion region includes multiple suggested character strings. The method further includes: detecting a gesture that begins within a predefined key of the on-screen keyboard; and responsive to detecting the gesture: in accordance with a determination that the gesture ends within the predefined key, inserting a first character string into a text field; and in accordance with a determination that the gesture ends outside of the predefined key, inserting a second character string into the text field, wherein the second character string is different from the first character string.
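The end-point test described above can be sketched directly. The key geometry and the two candidate strings are assumptions for the example; the abstract leaves both unspecified.

```python
def inside(point, rect):
    # rect is (x0, y0, x1, y1); point is (x, y).
    (x, y), (x0, y0, x1, y1) = point, rect
    return x0 <= x <= x1 and y0 <= y <= y1

def handle_gesture(start, end, key_rect, first_string, second_string, text_field):
    # The gesture is assumed to begin within the predefined key.
    assert inside(start, key_rect)
    if inside(end, key_rect):
        # Gesture ends within the key: insert the first string.
        text_field.append(first_string)
    else:
        # Gesture ends outside the key: insert the second, different string.
        text_field.append(second_string)
    return text_field
```

A plausible use (assumed, not from the abstract) is a suggestion key that inserts a base word on tap but an alternate form when the gesture is dragged off the key.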