Abstract:
Embodiments described herein include a system comprising a processor coupled to display devices, sensors, remote client devices, and computer applications. The computer applications orchestrate content of the remote client devices simultaneously across the display devices and the remote client devices, and allow simultaneous control of the display devices. The simultaneous control includes automatically detecting a gesture of at least one object from gesture data received via the sensors. The detecting comprises identifying the gesture using only the gesture data. The computer applications translate the gesture to a gesture signal, and control the display devices in response to the gesture signal.
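The control flow this abstract describes (sense, detect a gesture from the gesture data alone, translate it to a signal, fan the signal out to every display at once) can be pictured with a minimal sketch. Everything here is an assumption for illustration: SensorFrame, GestureSignal, detect_gesture, and Display are hypothetical names, not interfaces from the embodiments.

```python
# Minimal sketch of the detect -> translate -> control loop. All names
# and the toy detector are illustrative assumptions, not the claimed system.
from dataclasses import dataclass
from typing import Iterable, List, Optional, Tuple


@dataclass
class SensorFrame:
    """Raw gesture data for one sampling instant."""
    positions: List[Tuple[float, float, float]]  # tracked 3-space points
    timestamp: float


@dataclass
class GestureSignal:
    """A detected gesture translated into a signal displays can consume."""
    name: str
    timestamp: float


def detect_gesture(frame: SensorFrame) -> Optional[str]:
    # Placeholder detector: identifies the gesture using only the gesture
    # data in the frame (no markers, no external state).
    if len(frame.positions) >= 5:  # e.g. five fingertips visible
        return "open-hand"
    return None


class Display:
    def __init__(self, name: str) -> None:
        self.name = name

    def apply(self, signal: GestureSignal) -> None:
        print(f"{self.name}: applying {signal.name} at t={signal.timestamp}")


def control_loop(frames: Iterable[SensorFrame], displays: List[Display]) -> None:
    """Detect gestures, translate each to a signal, and drive every
    display simultaneously with that signal."""
    for frame in frames:
        gesture = detect_gesture(frame)
        if gesture is None:
            continue
        signal = GestureSignal(gesture, frame.timestamp)
        for display in displays:
            display.apply(signal)


if __name__ == "__main__":
    frames = [SensorFrame(positions=[(0.0, 0.0, 0.0)] * 5, timestamp=0.0)]
    control_loop(frames, [Display("wall-left"), Display("wall-right")])
```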
Abstract:
Systems and methods for initializing real-time, vision-based hand tracking systems are described. The systems and methods image a body and receive gesture data that is absolute three-space data of an instantaneous state of the body at a point in time and space, and perform at least one of: determining an orientation of the body using an appendage of the body, and tracking the body using at least one of the orientation and the gesture data.
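The initialization step can be sketched as follows, under the assumption that the body is a hand and the appendage a finger; GestureData, orientation_from_appendage, and BodyTracker are illustrative names only. The orientation is taken as the unit vector from the body's reference point toward the appendage, and position tracking is direct because the gesture data is already absolute three-space data.

```python
# Illustrative sketch of initialization from absolute three-space gesture
# data. Names, fields, and the orientation rule are assumptions.
import math
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class GestureData:
    """Absolute three-space state of the body at one instant."""
    palm: Vec3       # reference point of the body
    appendage: Vec3  # tip of the appendage (e.g. a finger)
    timestamp: float


def orientation_from_appendage(sample: GestureData) -> Vec3:
    """Unit vector from the body's reference point to its appendage."""
    dx, dy, dz = (a - p for a, p in zip(sample.appendage, sample.palm))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    return (dx / norm, dy / norm, dz / norm)


class BodyTracker:
    """Tracks the body from the orientation and/or the raw gesture data."""

    def __init__(self) -> None:
        self.position: Optional[Vec3] = None
        self.orientation: Optional[Vec3] = None

    def update(self, sample: GestureData) -> None:
        self.orientation = orientation_from_appendage(sample)
        # Absolute data: the position is read directly, no integration.
        self.position = sample.palm


tracker = BodyTracker()
tracker.update(GestureData(palm=(0, 0, 0), appendage=(0, 0.1, 0), timestamp=0.0))
print(tracker.position, tracker.orientation)
```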
Abstract:
A Spatial Operating Environment (SOE) with markerless gestural control includes a sensor coupled to a processor that runs numerous applications. A gestural interface application executes on the processor. The gestural interface application receives data from the sensor that corresponds to a hand of a user detected by the sensor, and tracks the hand by generating images from the data and associating blobs in the images with tracks of the hand. The gestural interface application detects a pose of the hand by classifying each blob as corresponding to an object shape. The gestural interface application generates a gesture signal in response to a gesture comprising the pose and the tracks, and controls the applications with the gesture signal.
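One plausible reading of "associating blobs in the images with tracks of the hand" is greedy nearest-neighbor data association. The sketch below assumes exactly that; it is not the SOE's actual matching algorithm, and the distance threshold is arbitrary.

```python
# Hypothetical blob-to-track association via greedy nearest-neighbor
# matching. A stand-in for whatever data association the SOE performs.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def associate(blobs: List[Point], tracks: List[List[Point]],
              max_dist: float = 50.0) -> Dict[int, int]:
    """Greedily match each blob (x, y) to the nearest existing track,
    extending matched tracks with the blob's position."""
    assignments: Dict[int, int] = {}
    unclaimed = set(range(len(tracks)))
    for b_idx, (bx, by) in enumerate(blobs):
        best, best_d = None, max_dist
        for t_idx in unclaimed:
            tx, ty = tracks[t_idx][-1]  # last known position of the track
            d = math.hypot(bx - tx, by - ty)
            if d < best_d:
                best, best_d = t_idx, d
        if best is not None:
            assignments[b_idx] = best
            unclaimed.discard(best)
            tracks[best].append((bx, by))
    return assignments


tracks = [[(100.0, 100.0)], [(300.0, 120.0)]]
blobs = [(104.0, 98.0), (290.0, 126.0)]
print(associate(blobs, tracks))  # {0: 0, 1: 1}
```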
Abstract:
Embodiments include vision-based interfaces performing hand or object tracking and shape recognition. The vision-based interface receives data from a sensor, and the data corresponds to an object detected by the sensor. The interface generates images from each frame of the data, and the images represent numerous resolutions. The interface detects blobs in the images and tracks the object by associating the blobs with tracks of the object. The interface detects a pose of the object by classifying each blob as corresponding to one of a number of object shapes. The interface controls a gestural interface in response to the pose and the tracks.
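The "images represent numerous resolutions" step can be sketched with a simple box-filter image pyramid plus a threshold detector standing in for whatever blob detector the embodiments use; both are assumptions for illustration, and only numpy is required.

```python
# Sketch of the multi-resolution step: build an image pyramid from one
# sensor frame and find bright pixels as candidate blobs at each level.
# The box filter and threshold are illustrative assumptions.
import numpy as np


def pyramid(frame: np.ndarray, levels: int = 3):
    """Yield the frame at successively halved resolutions."""
    img = frame.astype(np.float32)
    for _ in range(levels):
        yield img
        # 2x2 box-filter downsample (crop to even dimensions first).
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))


def detect_blobs(img: np.ndarray, thresh: float = 200.0):
    """Return coordinates of pixels above threshold as candidate blobs."""
    ys, xs = np.nonzero(img > thresh)
    return list(zip(xs.tolist(), ys.tolist()))


frame = np.zeros((64, 64))
frame[10:14, 20:24] = 255.0  # synthetic "hand" blob
for level, img in enumerate(pyramid(frame)):
    print(level, img.shape, len(detect_blobs(img)))
```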