Abstract:
Workspace coordination systems and methods are presented. A method can comprise: accessing in real time respective information associated with a first actor and a second actor, including sensed activity information; analyzing the information, including analyzing activity of the first actor with respect to the second actor; and forwarding respective feedback based on the results of the analysis. The feedback can include an individual objective specific to either the first actor or the second actor. The feedback can also include a collective objective with respect to the first actor and the second actor. The analyzing can include automated artificial intelligence analysis. Sensed activity information can be associated with a grid within the activity space. It is appreciated there can be various combinations of actors (e.g., human and device, device and device, human and human, etc.). The feedback can be a configuration layout suggestion. The feedback can be a suggested assignment of a type of actor to an activity.
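As an illustrative sketch only (not the patented implementation), the grid-based analysis above could associate each actor's sensed positions with cells of a grid over the activity space and emit a layout suggestion when two actors contend for the same cells. The function names, cell size, and returned fields are all assumptions:

```python
from collections import Counter

def cell_of(pos, cell_size=1.0):
    """Map an (x, y) position to a grid cell index."""
    return (int(pos[0] // cell_size), int(pos[1] // cell_size))

def overlap_feedback(a_positions, b_positions, cell_size=1.0):
    """Count per-cell activity for two actors and flag shared cells.

    A shared cell suggests the configuration layout should separate the actors.
    """
    a_cells = Counter(cell_of(p, cell_size) for p in a_positions)
    b_cells = Counter(cell_of(p, cell_size) for p in b_positions)
    shared = sorted(set(a_cells) & set(b_cells))
    if shared:
        return {"feedback": "layout_suggestion", "shared_cells": shared}
    return {"feedback": "ok", "shared_cells": []}
```

The same cell-overlap score could equally drive the other feedback types the abstract mentions, such as assigning a different type of actor to the contended activity.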
Abstract:
An information processing apparatus including a learning unit that learns a predetermined time-series pattern. An output unit outputs a time-series pattern corresponding to the result of learning by the learning unit. An adjusting unit receives a time-series pattern obtained from an action performed by an action unit on the basis of the time-series pattern supplied from the output unit and on external teaching for the action, and adjusts the time-series pattern supplied from the output unit in correspondence with the input time-series pattern. The learning unit then learns the time-series pattern supplied from the output unit and adjusted by the adjusting unit.
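The teach–adjust–learn loop above can be sketched minimally, under the assumption that "adjusting" means blending the output pattern toward an external teaching pattern and "learning" means moving the stored pattern toward the adjusted one; the gains and class names are invented for illustration:

```python
def adjust(output_pattern, teaching_pattern, gain=0.5):
    # Adjusting unit: pull the output pattern toward the externally taught one.
    return [o + gain * (t - o) for o, t in zip(output_pattern, teaching_pattern)]

class PatternLearner:
    # Learning unit holding the stored time-series pattern.
    def __init__(self, length, rate=0.3):
        self.pattern = [0.0] * length
        self.rate = rate

    def output(self):
        # Output unit: emit the pattern corresponding to the learning result.
        return list(self.pattern)

    def learn(self, adjusted):
        # Learn the adjusted pattern by moving the stored one toward it.
        self.pattern = [p + self.rate * (a - p)
                        for p, a in zip(self.pattern, adjusted)]

teaching = [0.0, 0.5, 1.0, 0.5]       # external teaching signal
learner = PatternLearner(len(teaching))
for _ in range(100):                  # repeated teach-adjust-learn cycles
    learner.learn(adjust(learner.output(), teaching))
```

With these gains the stored pattern converges geometrically toward the teaching signal, which is the closed-loop behavior the abstract describes.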
Abstract:
An example method includes receiving position data indicative of the position of a demonstration tool. Based on the received position data, the method further includes determining a motion path of the demonstration tool, wherein the motion path comprises a sequence of positions of the demonstration tool. The method additionally includes determining a replication control path for a robotic device, where the replication control path includes one or more robot movements that cause the robotic device to move a robot tool through a motion path that corresponds to the motion path of the demonstration tool. The method also includes providing for display of a visual simulation of the one or more robot movements within the replication control path.
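A minimal sketch of the first two steps, assuming the motion path is obtained by filtering raw position samples by a minimum step distance and the replication control path is a sequence of relative tool moves (both simplifications; the function names and threshold are illustrative):

```python
def motion_path(samples, min_step=0.01):
    """Filter raw position samples into a motion path: keep a sample only
    when it has moved at least min_step from the last kept position."""
    path = [samples[0]]
    for p in samples[1:]:
        dist = sum((a - b) ** 2 for a, b in zip(p, path[-1])) ** 0.5
        if dist >= min_step:
            path.append(p)
    return path

def replication_moves(path):
    """Turn a motion path into relative move commands for the robot tool."""
    return [tuple(b - a for a, b in zip(p, q))
            for p, q in zip(path, path[1:])]
```

The resulting list of relative moves is what a visual simulation would step through before the robot executes them.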
Abstract:
A method of training a robot to autonomously execute a robotic task includes moving an end effector through multiple states of a predetermined robotic task to demonstrate the task to the robot in a set of n training demonstrations. The method includes measuring training data, including at least the linear force and the torque via a force-torque sensor while moving the end effector through the multiple states. Key features are extracted from the training data, which is segmented into a time sequence of control primitives. Transitions between adjacent segments of the time sequence are identified. During autonomous execution of the same task, a controller detects the transitions and automatically switches between control modes. A robotic system includes a robot, force-torque sensor, and a controller programmed to execute the method.
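The segmentation and transition-detection steps can be sketched under a deliberately simple assumption: that measured force magnitude alone separates a "free-space" primitive from an "in-contact" primitive. The threshold, labels, and function name are illustrative, not the patented feature extraction:

```python
def segment_primitives(force_magnitudes, contact_threshold=5.0):
    """Label each force-torque sample as a free-space or in-contact control
    primitive and return the indices where the control mode should switch."""
    labels = ["contact" if f >= contact_threshold else "free"
              for f in force_magnitudes]
    transitions = [i for i in range(1, len(labels))
                   if labels[i] != labels[i - 1]]
    return labels, transitions
```

During autonomous execution, the controller would watch for the same threshold crossings and switch control modes at each detected transition.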
Abstract:
A numerical controller including an automatic display unit of a teach program includes a manual movement axis monitor unit for monitoring whether there is an axis moved by manual feed, a teach target program selection and determination unit for selecting and determining a teach program controlling the axis, and a teach block selection and determination unit for selecting and determining a teach point from a movement direction of the axis, and selecting and determining, as a teach block, a block in the teach program in which the teach point is an end point.
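The teach-block selection step might be sketched as follows, assuming each block is represented as a (start_point, end_point) pair of axis coordinates and the manual-feed direction is +1 or -1; the representation and function name are assumptions:

```python
def select_teach_block(blocks, axis, direction):
    """Given teach-program blocks as (start_point, end_point) tuples, return
    the index of the first block that moves the monitored axis in the same
    direction as the manual feed; its end point is the teach point."""
    for i, (start, end) in enumerate(blocks):
        delta = end[axis] - start[axis]
        if delta * direction > 0:
            return i
    return None
```

A display unit would then automatically highlight the returned block in the teach program.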
Abstract:
The invention proposes a method for imitation learning of movements of a robot, wherein the robot performs the following steps: observing a movement of an entity in the robot's environment, recording the observed movement as a sensorial data stream, representing the recorded movement in different task space representations, and selecting a subset of the task space representations for imitation learning and reproduction of the movement to be imitated.
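One common heuristic for the selection step, used here purely as an illustrative sketch, is to prefer task spaces in which the recorded movement is most consistent across demonstrations (low inter-demonstration variance marks task-relevant spaces). The scoring rule and names are assumptions:

```python
def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def select_task_spaces(demos_by_space, k=1):
    """Score each task-space representation by its mean per-timestep variance
    across demonstrations and keep the k lowest-scoring (most consistent)."""
    scores = {}
    for name, demos in demos_by_space.items():
        per_t = [variance([d[t] for d in demos])
                 for t in range(len(demos[0]))]
        scores[name] = sum(per_t) / len(per_t)
    return sorted(scores, key=scores.get)[:k]
```

Here a hand-relative-to-object trajectory that repeats exactly across demonstrations would be selected over an absolute-coordinate trajectory that varies with where the object happened to be.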
Abstract:
The systems and methods provide an action recognition and analytics tool for use in manufacturing, health care services, shipping, retailing and other similar contexts. Machine learning action recognition can be utilized to determine cycles, processes, actions, sequences, objects and/or the like in one or more sensor streams. The sensor streams can include, but are not limited to, one or more video sensor frames, thermal sensor frames, infrared sensor frames, and/or three-dimensional depth frames. The analytics tool can provide for automatic creation of certificates for each instance of a subject product or service. The certificates can string together snippets of the sensor streams along with indicators of cycles, processes, actions, sequences, objects, parameters and the like captured in the sensor streams.
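As a data-structure sketch only, a per-unit certificate stringing snippets together with their recognized indicators might look like the following; the field names and shape are assumptions, not the patented record format:

```python
def make_certificate(product_id, snippets):
    """Assemble a per-unit certificate: each entry pairs a sensor-stream
    snippet (stream name and frame range) with the indicator recognized
    in it (an action, process, parameter, etc.)."""
    return {
        "product_id": product_id,
        "entries": [
            {"stream": s["stream"], "frames": s["frames"], "action": s["action"]}
            for s in snippets
        ],
    }
```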
Abstract:
In one embodiment a method comprises: accessing information associated with a first actor, including sensed activity information associated with an activity space; analyzing the activity information, including analyzing activity of the first actor with respect to a plurality of other actors; and forwarding feedback based on the results of the analysis, wherein the results include identification of a second actor as a replacement actor to replace the first actor, wherein the second actor is one of the plurality of other actors. The activity space can include an activity space associated with performance of a task. The analyzing can comprise: comparing information associated with activity of the first actor within the activity space with anticipated activity of respective ones of the plurality of other actors within the activity space; and analyzing deviations between the activity of the first actor and the anticipated activity of the second actor.
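A minimal sketch of the deviation-based comparison, assuming activity is reduced to equal-length numeric traces and the replacement actor is the candidate whose anticipated activity deviates least from the first actor's observed activity (the metric and names are illustrative):

```python
def deviation(observed, anticipated):
    """Mean absolute deviation between an observed and an anticipated
    activity trace of equal length."""
    return sum(abs(o - a) for o, a in zip(observed, anticipated)) / len(observed)

def pick_replacement(first_actor_trace, anticipated_by_actor):
    """Identify the candidate actor whose anticipated activity deviates
    least from the first actor's observed activity in the activity space."""
    return min(anticipated_by_actor,
               key=lambda name: deviation(first_actor_trace,
                                          anticipated_by_actor[name]))
```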
Abstract:
The systems and methods provide an action recognition and analytics tool for use in manufacturing, health care services, shipping, retailing and other similar contexts. Machine learning action recognition can be utilized to determine cycles, processes, actions, sequences, objects and/or the like in one or more sensor streams. The sensor streams can include, but are not limited to, one or more video sensor frames, thermal sensor frames, infrared sensor frames, and/or three-dimensional depth frames. The analytics tool can provide for process validation, anomaly detection and in-process quality assurance.
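A sketch of process validation over recognized actions, assuming the expected process is an ordered action list and anomalies are missing, unexpected, or out-of-order actions (the rule and field names are assumptions):

```python
def validate_process(observed_actions, expected_actions):
    """Compare the recognized action sequence against the expected process:
    report missing actions, unexpected actions, and order violations."""
    missing = [a for a in expected_actions if a not in observed_actions]
    extra = [a for a in observed_actions if a not in expected_actions]
    # Order check over the actions common to both sequences.
    observed_common = [a for a in observed_actions if a in expected_actions]
    expected_common = [a for a in expected_actions if a in observed_actions]
    in_order = observed_common == expected_common
    return {"valid": not missing and not extra and in_order,
            "missing": missing, "extra": extra, "in_order": in_order}
```

In-process use would run this per cycle as the sensor streams are recognized, flagging an anomaly as soon as a check fails.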
Abstract:
A machine learning device includes a state observation unit for observing state variables that include at least one of the state of an assembly constituted of first and second components, an assembly time and information on a force, the result of a continuity test on the assembly, and at least one of position and posture command values for at least one of the first and second components and direction, speed and force command values for an assembly operation; and a learning unit for learning, in a related manner, at least one of the state of the assembly, the assembly time and the information on the force, the result of the continuity test on the assembly, and at least one of the position and posture command values for at least one of the first and second components and the direction, speed and force command values for the assembly operation.
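A deliberately reduced sketch of relating command values to observed outcomes: here only a force command value and the continuity-test result are stored, and the learner proposes the command with the best recorded pass rate. Both the state reduction and the pass-rate rule are assumptions for illustration:

```python
class AssemblyLearner:
    """Relate command values for an assembly operation to observed outcomes.

    Stores, per force command value, the continuity-test results observed
    for assemblies performed with that command, and proposes the command
    with the best pass rate.
    """
    def __init__(self):
        self.outcomes = {}  # force command -> list of continuity-test results

    def observe(self, force_command, continuity_ok):
        # State observation unit: record one assembly outcome.
        self.outcomes.setdefault(force_command, []).append(continuity_ok)

    def best_command(self):
        # Learning unit: command value with the highest pass rate so far.
        return max(self.outcomes,
                   key=lambda c: sum(self.outcomes[c]) / len(self.outcomes[c]))
```

The full device would observe the richer state variables listed above (assembly state, assembly time, position/posture, direction, and speed commands) in the same related manner.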