Abstract:
A user interface based interaction method includes: displaying a user interface; when a level i focal point stays in a level i area X of the user interface, if an instruction for moving the level i focal point is received, moving, in response to the instruction, the level i focal point from the level i area X to a level i area Y of the user interface; and when the level i focal point stays in the level i area Y, if an instruction for moving a level i+1 focal point is received, moving, in response to the instruction, the level i+1 focal point from an interface element a in the level i area Y to an interface element b in the level i area Y.
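The two-level focus movement described above can be sketched as a small state machine; the class, method, and area names here are hypothetical, chosen only to illustrate the level i / level i+1 relationship:

```python
class FocusManager:
    """Sketch, assuming a two-level focus model: a level-i focal point
    selects an area, and a level-(i+1) focal point selects an interface
    element inside the currently focused area."""

    def __init__(self, areas):
        self.areas = areas             # {area_name: [element, ...]}
        self.area = next(iter(areas))  # level-i focal point (current area)
        self.element = 0               # level-(i+1) focal point (element index)

    def move_area(self, target):
        # moving the level-i focal point, e.g. from area X to area Y
        if target in self.areas:
            self.area = target
            self.element = 0

    def move_element(self, step=1):
        # moving the level-(i+1) focal point, e.g. from element a to element b
        n = len(self.areas[self.area])
        self.element = (self.element + step) % n
```

In this sketch, moving the level-i focal point resets the level-(i+1) focal point to the first element of the newly focused area.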
Abstract:
A method for generating an identification pattern, a terminal device, and the like are provided. The method includes: obtaining a first image (110), where the first image represents any one of a Chinese character, an English character, or an Arabic numeral; performing transformation on the first image based on a contour line of the first image to obtain a second image (120), where the second image includes the first image and a plurality of contour lines; and generating an identification pattern of a software program based on the second image (130). This application can improve a display effect of the identification pattern.
Abstract:
A picture obtaining method and apparatus and a picture processing method and apparatus are provided. The method includes: obtaining a grayscale image corresponding to a first picture, and a first image, where a size of the first picture is equal to a size of the first image, the first image includes N parallel lines, a spacing between two adjacent lines does not exceed a spacing threshold, and N is an integer greater than 1; translating pixels included in each line in the first image based on the grayscale image to obtain a second image, where the second image includes a contour of an image in the first picture; and setting a pixel value of each pixel included in each line in the second image to obtain a second picture.
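As a rough illustration of the line-translation step, the following sketch displaces evenly spaced horizontal lines by an amount driven by the grayscale image. The function name, the displacement rule, and the final pixel value are all assumptions; the abstract does not fix them:

```python
import numpy as np

def line_art(gray, n_lines=40, max_shift=6):
    """Hypothetical sketch: draw evenly spaced horizontal lines on a
    white canvas, shifting each line pixel vertically by an amount
    proportional to the darkness of the grayscale image beneath it."""
    h, w = gray.shape
    canvas = np.full((h, w), 255, dtype=np.uint8)  # white background
    spacing = max(1, h // n_lines)                 # spacing between lines
    for y0 in range(0, h, spacing):                # base position of each line
        for x in range(w):
            # darker pixels push the line further from its base position,
            # so the displaced lines trace the contours of the picture
            shift = int((255 - int(gray[y0, x])) / 255 * max_shift)
            y = min(h - 1, y0 + shift)
            canvas[y, x] = 0  # assumed final pixel value for line pixels
    return canvas
```

The contour of the underlying picture then emerges from where the lines bend, which matches the abstract's claim that the second image "includes a contour of an image in the first picture".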
Abstract:
An information displaying method and a terminal are provided. The method includes: obtaining audio data to be played in chronological order; determining, based on attribute information of a sound represented by the audio data at any moment, a shape of a graph corresponding to that moment, where the graph includes a closed curve with a bump, and a maximum distance among the distances from points on the bump to a center of the graph is positively correlated with a value indicated by the attribute information at that moment; and displaying the graph corresponding to that moment. The bump in the graph changes with the value indicated by the attribute information of the sound, and such a graph is presented to a user to enhance the user's perception of the attribute information of the audio data and improve user experience.
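A minimal sketch of such a graph, assuming the attribute value is a normalized amplitude and the bumps are modeled with a rectified sinusoidal term (both assumptions, not stated in the abstract):

```python
import math

def graph_points(amplitude, n_points=360, base_radius=1.0, n_bumps=8):
    """Sketch: a closed curve around the origin whose bump height is
    positively correlated with the sound's amplitude value."""
    pts = []
    for k in range(n_points):
        theta = 2 * math.pi * k / n_points
        # bump term: the maximum distance from the centre grows with amplitude
        r = base_radius + amplitude * max(0.0, math.sin(n_bumps * theta))
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Rendering one such point set per moment yields a closed curve whose bumps swell and shrink with the audio attribute, as the abstract describes.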
Abstract:
In a privacy information generation method, a terminal device displays, on a display, an interactive element of a privacy settings page for a target application. In response to a first gesture operation performed by a user on the interactive element of the privacy settings page, the terminal device determines a privacy precision for the target application according to the first gesture operation. When the target application requests privacy information from the terminal device, the terminal device generates the privacy information based on the privacy precision determined for the target application.
Abstract:
A virtual robot image presentation method and an apparatus are provided to improve virtual robot utilization and user experience. In this method, an electronic device generates a first virtual robot image and presents the first virtual robot image. The first virtual robot image is determined by the electronic device based on scene information. The scene information includes at least one of first information and second information, where the first information represents a current time attribute, and the second information represents a type of an application currently running on the electronic device. With the foregoing method, virtual robot images in a human-machine interaction process are richer and more vivid, which improves user experience and thereby increases the user's utilization of the virtual robot.
Abstract:
An image processing method and apparatus are provided. The method includes: obtaining a source image; determining spot superposition positions according to pixel values of the source image, where values of pixels of the source image located at the spot superposition positions are greater than a preset first threshold; and blurring the source image, and performing, at the spot superposition positions of the source image, image fusion between the source image and spot images to obtain a processed image, where the spot superposition positions and the spot images fused at those positions are in a one-to-one correspondence.
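The described pipeline might be sketched as follows; the 3x3 box blur and the maximum-based fusion are stand-ins, since the abstract does not specify the blurring or fusion operators, and the function name is hypothetical:

```python
import numpy as np

def add_bokeh(src, spot, threshold=200):
    """Hypothetical sketch: find bright pixels (the spot superposition
    positions), blur the source, then fuse one spot image at each
    position (one-to-one correspondence of positions and spots)."""
    ys, xs = np.where(src > threshold)  # spot superposition positions
    # crude 3x3 box blur, standing in for the unspecified blurring step
    padded = np.pad(src.astype(float), 1, mode='edge')
    h, w = src.shape
    out = sum(padded[dy:dy + h, dx:dx + w]
              for dy in range(3) for dx in range(3)) / 9.0
    sh, sw = spot.shape
    for y, x in zip(ys, xs):
        y0, x0 = y - sh // 2, x - sw // 2
        if 0 <= y0 and 0 <= x0 and y0 + sh <= h and x0 + sw <= w:
            # screen-like fusion: keep the brighter of spot and background
            out[y0:y0 + sh, x0:x0 + sw] = np.maximum(
                out[y0:y0 + sh, x0:x0 + sw], spot)
    return out.clip(0, 255).astype(np.uint8)
```

This mirrors the bokeh-style effect implied by the abstract: highlights above the threshold become the centers of superimposed light spots on a blurred background.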
Abstract:
A video playback method includes: obtaining playback progress information of all sub-files in a video file, where the video file includes at least two sub-files; displaying a playback progress bar list, where the playback progress bar list includes playback progress bars of the at least two sub-files, and the playback progress bar of each sub-file displays the playback progress of that sub-file according to the playback progress information of that sub-file; receiving a user instruction for selecting, according to the playback progress bars, a target sub-file to be played, where the target sub-file is any one of the at least two sub-files; and playing the target sub-file according to the instruction.
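A text-mode sketch of the progress bar list, assuming the playback progress information is a fractional value per sub-file in [0.0, 1.0] (the function name and rendering are illustrative only):

```python
def build_progress_bar_list(progress_info, bar_width=20):
    """Sketch: render one text progress bar per sub-file from its
    fractional playback progress (0.0 = unplayed, 1.0 = finished)."""
    bars = []
    for name, frac in progress_info.items():
        filled = int(round(frac * bar_width))  # played portion of the bar
        bars.append(f"{name}: [{'#' * filled}{'-' * (bar_width - filled)}]")
    return bars
```

A real player would render these as widgets and map a tap on any bar to the corresponding target sub-file, per the selection step in the abstract.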