Abstract:
A system and process for generating a video animation from the frames of a video sprite with user-controlled motion is presented. An object is extracted from the frames of an input video and processed to generate a new video sequence, or video sprite, of that object. In addition, the translation velocity of the object is computed for each frame and associated with that frame in the newly generated video sprite. The user specifies a desired path for the object featured in the video sprite to follow in the video animation. Frames of the video sprite showing the object of interest are selected and inserted into a background image, or a frame of a background video, along the prescribed path. The video sprite frames are selected by comparing the last-selected frame to the other video sprite frames and choosing one that the comparison identifies as an acceptable transition from the last-selected frame. Each newly selected video sprite frame is inserted at a point along the prescribed path dictated by the velocity associated with the object in the last-inserted frame. This process of comparing, selecting and inserting video sprite frames to create successive frames of the video animation continues for as long as new frames of the video animation are desired.
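The selection-and-placement loop can be sketched as follows. This is a minimal illustration, not the patented method: the mean-squared-error transition cost, the slack factor defining an "acceptable" transition, and the function names (point_on_path, select_next_frame, animate) are all assumptions introduced here; speeds holds the magnitude of the per-frame translation velocity.

```python
import numpy as np

def point_on_path(path, s):
    """Return the point a distance s along a polyline path (an Nx2 array)."""
    path = np.asarray(path, dtype=float)
    for p0, p1 in zip(path[:-1], path[1:]):
        seg_len = np.linalg.norm(p1 - p0)
        if s <= seg_len:
            return p0 + (p1 - p0) * (s / max(seg_len, 1e-9))
        s -= seg_len
    return path[-1]

def select_next_frame(frames, last_idx, rng, slack=1.5):
    """Compare the last-selected sprite frame against every other frame
    and pick one whose visual difference is an acceptable transition
    (here: mean-squared pixel error within a slack of the best match,
    chosen at random to avoid repetitive loops)."""
    last = frames[last_idx].astype(np.float32)
    cost = np.array([np.mean((f.astype(np.float32) - last) ** 2)
                     for f in frames])
    cost[last_idx] = np.inf               # never re-select the same frame
    ok = np.flatnonzero(cost <= slack * cost.min())
    return int(rng.choice(ok))

def animate(frames, speeds, path, n_out, rng=None):
    """Build the animation: at each step, place the current sprite frame
    at the current point on the user-drawn path, advance along the path
    by that frame's object speed, then select the next frame."""
    rng = rng or np.random.default_rng(0)
    idx, s, placements = 0, 0.0, []
    for _ in range(n_out):
        placements.append((idx, point_on_path(path, s)))
        s += speeds[idx]                  # velocity associated with the frame
        idx = select_next_frame(frames, idx, rng)
    return placements
```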
Abstract:
A graphics system including a custom graphics and audio processor produces exciting 2D and 3D graphics and surround sound. The system includes a graphics and audio processor including a 3D graphics pipeline and an audio digital signal processor. Improved fog simulation is provided by enabling backwards exponential and backwards exponential-squared fog density functions to be used in the fog calculation. Improved exponential and exponential-squared fog density functions are also provided, which add the ability to program a fog start value. A range adjustment function adjusts fog based on the X position of the pixels being rendered, thereby preventing range error as the line of sight moves away from the Z axis. An exemplary Fog Calculation Unit, as well as exemplary fog control functions and fog-related registers, are also disclosed.
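The fog density functions and the range adjustment lend themselves to a short sketch. The exact curves below are plausible readings rather than the hardware's actual formulas: base-e exponentials with assumed scale constants, and "backwards" variants that mirror the curve across the fog range so density rises steeply near the start rather than the end. The programmable fog start value enters through the normalization of z.

```python
import math

def fog_factor(z, start, end, mode):
    """Fog blend factor in [0, 1] (0 = no fog, 1 = full fog).
    start/end bound the fog range; the programmable start value
    shifts where the density curve begins."""
    t = min(max((z - start) / (end - start), 0.0), 1.0)
    if mode == "exp":         # exponential
        return 1.0 - math.exp(-8.0 * t)
    if mode == "exp2":        # exponential squared
        return 1.0 - math.exp(-(4.0 * t) ** 2)
    if mode == "rev_exp":     # backwards exponential (assumed mirror)
        return math.exp(-8.0 * (1.0 - t))
    if mode == "rev_exp2":    # backwards exponential squared (assumed mirror)
        return math.exp(-(4.0 * (1.0 - t)) ** 2)
    raise ValueError(mode)

def range_adjust(z_eye, x_eye):
    """Adjust the fog distance for pixels off the view (Z) axis: use the
    true eye-to-pixel range rather than the raw Z depth, so fog does not
    thin out as the line of sight moves away from the Z axis."""
    return z_eye * math.sqrt(1.0 + (x_eye / z_eye) ** 2)
```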
Abstract:
Three-dimensional data that defines a bone in a three-dimensional model is coded by coding a quaternion defining an orientation of the bone, coding vectors defining a displacement of the bone and a scaling factor for the bone, and coding a value defining a time corresponding to the orientation, displacement and scaling of the bone.
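A minimal sketch of the data being coded, assuming a flat little-endian float layout; the patent covers the coding of these quantities, not this particular byte format or these names.

```python
from dataclasses import dataclass
import struct

@dataclass
class BoneKey:
    """One coded bone sample: orientation (unit quaternion), displacement
    and scaling vectors, and the time the sample applies to."""
    rot: tuple     # (w, x, y, z) unit quaternion for the orientation
    disp: tuple    # (dx, dy, dz) displacement of the bone
    scale: tuple   # (sx, sy, sz) scaling factors for the bone
    t: float       # time corresponding to this orientation/displacement/scale

def encode(key: BoneKey) -> bytes:
    """Serialize one bone key as 11 little-endian floats (an assumed layout)."""
    return struct.pack("<11f", *key.rot, *key.disp, *key.scale, key.t)

def decode(buf: bytes) -> BoneKey:
    """Recover the bone key from the assumed 11-float layout."""
    v = struct.unpack("<11f", buf)
    return BoneKey(v[0:4], v[4:7], v[7:10], v[10])
```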
Abstract:
A set of viewpoints for a given scene of 3D objects is defined by a system that restricts the degrees of freedom available to a user through use of a bounding surface (a viewpoint sphere), and provides varying degrees of automation, ranging from predefined viewpoints to generated tour paths to interactive selection using free navigation. The system calculates the scene sphere, the minimum bounding sphere that contains the set of objects in the scene, and then derives the viewpoint sphere by calculating the viewpoint sphere radius. The user then chooses the mode of viewpoint selection: completely automated, semi-automated, or free navigation. The output is a set of viewpoints for the given scene of objects.
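A sketch of the two geometric steps, under the assumption that the viewpoint sphere radius is chosen so that a camera with a given field of view just frames the scene sphere (r_view = r_scene / sin(fov/2)). The centroid-based bounding sphere and the equatorial layout for the automated mode are likewise simplifications introduced here.

```python
import numpy as np

def scene_sphere(points):
    """Approximate minimum bounding sphere of the scene's vertices
    (centroid plus farthest-point radius; an exact min-sphere solver
    such as Welzl's algorithm could be substituted)."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    return center, radius

def viewpoint_sphere_radius(scene_radius, fov_deg=45.0):
    """Distance at which a camera with the given field of view just
    frames the whole scene sphere: r_view = r_scene / sin(fov / 2)."""
    return scene_radius / np.sin(np.radians(fov_deg) / 2.0)

def auto_viewpoints(center, r_view, n=8):
    """Fully automated mode: n camera positions evenly spaced around the
    viewpoint sphere's equator, all looking at the scene centre."""
    angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return [center + r_view * np.array([np.cos(a), 0.0, np.sin(a)])
            for a in angles]
```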
Abstract:
A character is represented in a character generator as a set of polygons. The character may be manipulated using three-dimensional animation techniques. A code for a character may be used to access a set of curves defining the outline of the character. This set of curves is transformed into a set of polygons. The set of polygons may be rendered as a three-dimensional object. The set of polygons may be created by converting the curves into sets of connected line segments and then tessellating the polygon defined by the line segments. Animation properties are represented using a normalized scale along a path or over time. Animation may be provided in a manner that is independent of the spatial and temporal resolution of the video to which it is applied. Such animation may be applied to characters defined by a set of polygons. Various three-dimensional spatial transformations, lighting effects and other colorizations may be provided. A user interface for editing a character string may provide two alternate displays. A first display allows a user to input and view any desired portion of the character string for the purpose of editing. A second display allows a user to view how the character string appears at a selected point in time during a titling effect for the purpose of animation. In both displays, the text is displayed in a three-dimensional form. This interface may be combined with a timeline editing interface for editing an associated video program, or other user interface, to permit layering of titling effects and adjustment of animation properties, positioning and timing.
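The curve-to-polygon conversion can be sketched in two steps: flattening each outline curve into connected line segments, then tessellating the polygon those segments define. Quadratic Bezier outlines, uniform sampling, and ear clipping are assumptions; any flattening and tessellation scheme fits the pipeline described.

```python
import numpy as np

def flatten_quad(p0, p1, p2, n=8):
    """Approximate one quadratic Bezier outline curve by n connected
    line segments (uniform parameter sampling; adaptive flattening
    would also work)."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    t = np.linspace(0.0, 1.0, n + 1)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def _cross(o, a, b):
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def _inside(p, a, b, c):
    return (_cross(a, b, p) > 0 and _cross(b, c, p) > 0
            and _cross(c, a, p) > 0)

def ear_clip(poly):
    """Tessellate a simple counter-clockwise polygon (list of 2D points)
    into triangle index triples by ear clipping."""
    idx, tris = list(range(len(poly))), []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            if _cross(poly[i], poly[j], poly[l]) <= 0:
                continue                  # reflex corner, not an ear
            if any(_inside(poly[m], poly[i], poly[j], poly[l])
                   for m in idx if m not in (i, j, l)):
                continue                  # another vertex lies inside
            tris.append((i, j, l)); idx.pop(k); break
    return tris + [tuple(idx)]
```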
Abstract:
A method and arrangements for transmitting, receiving and displaying graphic images are presented. The graphic images include dynamic icons (dynacons), i.e. graphic subpictures comprising two or more motion phases. By alternately displaying these motion phases, an attractive motion effect is created. Dynacons enhance the appearance of graphic images considerably. This is especially useful in the transmission of an electronic television program guide, e.g. to indicate the types of television programs to come.
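A minimal sketch of the display loop, with a caller-supplied draw routine standing in for whatever blits the subpicture into the on-screen guide; the names and timing are assumptions.

```python
import itertools
import time

def run_dynacon(phases, draw, period=0.5, cycles=3):
    """Animate a dynamic icon (dynacon) by alternately displaying its
    motion phases; draw(phase) is whatever renders one subpicture."""
    for phase in itertools.islice(itertools.cycle(phases),
                                  cycles * len(phases)):
        draw(phase)
        time.sleep(period)

# e.g. run_dynacon(["film-reel phase A", "film-reel phase B"], print)
```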
Abstract:
An image generation apparatus and information storage medium wherein the flow of a fluid over a course influences the behavior of a moving body. The moving body is moved along the course based on manipulation data and on flow data PSn (position of a point Sn), flow velocity VSn, and flow direction αSn, this data being set for the course in an object space. A flow-velocity vector VECFn at the position of the moving body is obtained by interpolation based on the flow data that is set for sample points Sn and the position data for the moving body, and the moving body is moved in accordance with this VECFn. The flow velocity VSn is assumed to be the maximum value through the cross-section of the course, and the interpolation is based on this VSn and the flow velocity at either the left edge Ln or the right edge Rn of the course. The flow velocities at the left edge Ln and right edge Rn of the course are made greater than zero. The flow data is set for the sample points Sn in a one-to-one correspondence with course data PCn, αCn, WLn, and WRn.
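The cross-section interpolation can be sketched as follows. Linear blending between the centreline maximum and the (nonzero) edge speed is an assumption, as are the function names; interpolation along the course between successive sample points Sn would blend the same quantities in the same way.

```python
import numpy as np

def flow_vector(lateral, alpha, v_max, v_edge, half_width):
    """Flow-velocity vector acting on the moving body at a given lateral
    offset from the course centreline. v_max is the maximum speed through
    the cross-section, v_edge the speed at the nearer bank (kept greater
    than zero), alpha the flow direction in radians."""
    t = min(abs(lateral) / half_width, 1.0)
    speed = v_max + (v_edge - v_max) * t       # linear profile (assumed)
    return speed * np.array([np.cos(alpha), np.sin(alpha)])

def step_body(pos, lateral, alpha, v_max, v_edge, half_width,
              control=None, dt=1.0 / 60.0):
    """Advance the moving body by the player's manipulation input plus
    the interpolated flow (a hypothetical composition of the two)."""
    control = np.zeros(2) if control is None else np.asarray(control)
    flow = flow_vector(lateral, alpha, v_max, v_edge, half_width)
    return np.asarray(pos, dtype=float) + (control + flow) * dt
```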
Abstract:
A video editing system in which source clips are added to a composed video sequence by adding them to a curved time line that displays the entire temporal arrangement of the program elements. Editing is carried out in a region of the time line shown at high temporal resolution, for maximum accuracy.
Abstract:
Utterances comprising text and behavioral movement commands entered by a user are processed to identify patterns in the behavioral movements executed by the user's visual representation. Once identified, the patterns are used to generate behavioral movements in response to new utterances received from the user, without requiring the user to explicitly alter the behavioral characteristics the user has selected. An application module parses an utterance generated by a user to determine the presence of gesture commands. If a gesture command is found in an utterance, the utterance is stored for behavioral-learning processing. A stored utterance is analyzed together with the existing stored utterances to determine whether they provide the basis for creating a new behavioral rule. Newly stored utterances are first analyzed to generate the different contexts associated with the behavioral movement. In an embodiment in which contexts are stored, the new contexts are compared with the contexts of the existing utterances in the log to determine whether any of them should be used as the basis for a new behavioral rule. If a context appears in the log at a frequency above a threshold, that context is used as the basis for a new behavioral rule. The new behavioral rule is then used to modify existing rules or to create more generally applicable rules. New general rules are not created unless the number of existing rules that could support one exceeds a threshold; this threshold controls how persistent a user's behavior must be before a new rule is created.
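A minimal sketch of the learning step, with several assumptions not taken from the abstract: gesture commands marked with a ':gesture' token syntax, contexts defined as word n-grams of the utterance text, and a simple frequency count over the log.

```python
import re
from collections import Counter

GESTURE = re.compile(r":(\w+)")   # assumed syntax, e.g. "hi there :wave"

def parse(utterance):
    """Split an utterance into plain text and any gesture commands."""
    gestures = GESTURE.findall(utterance)
    return GESTURE.sub("", utterance).strip(), gestures

def contexts(text, n=1):
    """Candidate contexts for a gesture: the word n-grams of the
    utterance text (one plausible definition of 'context')."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def learn_rules(log, threshold=3):
    """Scan the stored utterances and promote any (context, gesture)
    pair appearing at or above the threshold frequency into a new
    behavioral rule mapping context -> gesture."""
    counts = Counter()
    for utterance in log:
        text, gestures = parse(utterance)
        for g in gestures:
            for ctx in contexts(text):
                counts[(ctx, g)] += 1
    return {ctx: g for (ctx, g), c in counts.items() if c >= threshold}

# e.g. learn_rules(["hi there :wave"] * 3)
#      -> {"hi": "wave", "there": "wave"}
```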
Abstract:
A method for delivering animation includes transmitting a single source image to a client system. Parameters that generate a function are transmitted to the client system. Modulation frames are generated with the function. The modulation frames are applied to the single source image to generate the animation.
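A sketch under the assumption of one concrete function family, a travelling sinusoid whose frequency, depth and phase step would be the transmitted parameters; the patent covers whatever function the parameters generate.

```python
import numpy as np

def modulation_frames(shape, n_frames, freq=2.0, depth=0.4):
    """Generate modulation frames from a parameterised function: here, a
    sinusoid travelling across the image width, one phase per frame."""
    h, w = shape
    x = np.linspace(0.0, 2 * np.pi * freq, w)
    for k in range(n_frames):
        phase = 2 * np.pi * k / n_frames
        yield 1.0 + depth * np.sin(x + phase)[None, :].repeat(h, axis=0)

def animate(source, n_frames=24):
    """Apply each modulation frame to the single source image (grayscale
    or colour) to produce the animation on the client side."""
    img = source.astype(np.float32)
    return [np.clip(img * m[..., None] if img.ndim == 3 else img * m,
                    0, 255).astype(np.uint8)
            for m in modulation_frames(img.shape[:2], n_frames)]
```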