Abstract:
Embodiments herein relate generally to changing spatial audio fields that are defined for audio sources. In the embodiments, the spatial audio fields are indicated to a user performing audio mixing, for instance by displaying them as polygons on a touch screen. The spatial audio fields move as the related audio sources move, and/or as the position of a notional consumer changes. Apparatus of the embodiments is configured to detect whether at any time (initially, or after movement) there is overlapping of two spatial audio fields. If an overlap is detected, this is indicated to the user performing audio mixing. The apparatus then responds to a user input (e.g. a gesture on the touch screen) by detecting the nature of the user input and then moving or resizing one or both of the overlapping spatial audio fields such that overlapping is avoided or reduced.
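The overlap-detection and resizing behaviour described above can be illustrated with a minimal sketch, assuming each spatial audio field is simplified to an axis-aligned bounding box of its displayed polygon; the `AudioField`, `overlaps` and `shrink_to_avoid` names are hypothetical, not taken from the embodiments:

```python
from dataclasses import dataclass

@dataclass
class AudioField:
    # Axis-aligned bounding box of the field's on-screen polygon
    # (a hypothetical simplification of the displayed shape).
    x0: float
    y0: float
    x1: float
    y1: float

def overlaps(a: AudioField, b: AudioField) -> bool:
    """True if the two fields' boxes intersect."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def shrink_to_avoid(a: AudioField, b: AudioField) -> AudioField:
    """Return a copy of `a` trimmed horizontally so it no longer overlaps `b`,
    sketching the 'resize one field to reduce overlap' response."""
    if not overlaps(a, b):
        return a
    if a.x0 < b.x0:  # a lies to the left: pull its right edge back to b's left edge
        return AudioField(a.x0, a.y0, b.x0, a.y1)
    return AudioField(b.x1, a.y0, a.x1, a.y1)  # a lies to the right
```

A real implementation would work on the actual polygons and could equally move a field rather than shrink it, per the user's gesture.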
Abstract:
A method and corresponding system for correcting for deviations in a performance that includes a plurality of audio sources, the method comprising detecting a parameter relating to an audio source, determining if the parameter deviates from a predetermined characteristic and in response to it being determined that the parameter deviates from the predetermined characteristic, causing display of a user interface configured to control the parameter, to allow a user to correct the deviation.
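The deviation check in this abstract can be sketched as a simple threshold comparison, assuming the "predetermined characteristic" is a target value with a tolerance; the function names and the level/target/tolerance representation are illustrative assumptions:

```python
def deviates(value: float, target: float, tolerance: float) -> bool:
    """True if the detected parameter deviates from the predetermined
    characteristic by more than the tolerance (hypothetical threshold check)."""
    return abs(value - target) > tolerance

def sources_needing_correction(sources: dict) -> list:
    """Return the names of audio sources whose parameter deviates, i.e. those
    for which a corrective user interface would be displayed."""
    return [name for name, (value, target, tol) in sources.items()
            if deviates(value, target, tol)]
```

In the described system, each returned source would trigger display of a UI control for the deviating parameter.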
Abstract:
A method comprising: causing display of a polyhedral virtual object, having a first number (M) of faces, in a virtual visual space, wherein each of at least a second number (N) of the M faces displays content captured from an associated one of N different camera perspectives; causing rotation of the polyhedral virtual object in the virtual visual space to select a first face of the M faces of the polyhedral virtual object by orienting the first face in a predetermined direction within the virtual visual space; and causing display of the content captured from the camera perspective associated with the selected first face of the polyhedral virtual object.
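The rotate-to-select behaviour can be sketched with a flat ring of faces standing in for the 3-D polyhedron, assuming a rotation is quantised to whole-face steps; `select_feed` and its parameters are hypothetical names:

```python
def select_feed(faces: list, front_index: int, rotation_steps: int) -> str:
    """Rotate the ring of content-bearing faces by `rotation_steps` and return
    the camera feed on the face now oriented in the predetermined 'front'
    direction (a 1-D stand-in for rotating a polyhedron)."""
    n = len(faces)
    return faces[(front_index + rotation_steps) % n]
```

A full implementation would track a 3-D orientation and pick whichever face normal is closest to the predetermined viewing direction.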
Abstract:
An apparatus configured, based on a first virtual space in which visual imagery is presented to a first user and a second, different, virtual space in which visual imagery is presented to a second user, the first and second virtual spaces being based on respective virtual reality content that comprises visual imagery, wherein a representation of the first user, viewable by the second user, is provided in the second virtual space, and based on a communication initiation input from the second user, to provide for communication and for presentation in the second virtual space, at a virtual location based on a location of the representation of the first user, of a context-volume comprising a sub-volume of the first virtual space at least partly surrounding the first user, to enable the second user to see the first virtual space currently experienced by the first user.
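Extracting the context-volume can be sketched as selecting the part of the first virtual space near the first user, assuming a crude spherical sub-volume over a point-based scene; the function and data shapes are illustrative assumptions, not the apparatus's actual representation:

```python
def context_volume(scene_points: list, user_pos: tuple, radius: float) -> list:
    """Return the sub-volume of the first virtual space: the scene points
    within `radius` of the first user (a spherical context-volume sketch)."""
    def dist_sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [p for p in scene_points if dist_sq(p, user_pos) <= radius * radius]
```

The selected sub-volume would then be presented in the second virtual space at the location of the first user's representation.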
Abstract:
A method comprising: rendering a user interface for user selection of sound objects for rendering, each sound object being associated with a location in a three-dimensional sound space, wherein the user interface maps sound objects onto at least one shape and identifies sound objects on the shape at a collection of locations on the shape that differs from the associated locations of the identified sound objects; and in response to a user actuation selecting a sound object, rendering at least the selected sound object in the three-dimensional sound space at its associated location.
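The mapping of sound objects onto a shape at locations that differ from their associated 3-D locations can be sketched as follows, assuming the shape is a unit circle and the objects are keyed by name; both assumptions are illustrative:

```python
import math

def map_to_circle(objects: dict) -> dict:
    """Place each sound object evenly around a unit circle (the UI's collection
    of locations on the shape), regardless of its associated 3-D location."""
    n = len(objects)
    return {name: (math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i, name in enumerate(objects)}

def select(objects: dict, name: str) -> tuple:
    """On user actuation, return the associated 3-D location at which the
    selected sound object is rendered in the sound space."""
    return objects[name]
```

The UI positions are thus purely presentational; rendering always uses the object's own location in the three-dimensional sound space.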
Abstract:
A method comprising: causing display of a sound-source virtual visual object in a three-dimensional virtual visual space; causing display of a multiplicity of interconnecting virtual visual objects in the three-dimensional virtual visual space, wherein at least some of the multiplicity of interconnecting virtual visual objects interconnect visually a sound-source virtual visual object and a user-controlled virtual visual object, wherein a visual appearance of each interconnecting virtual visual object is dependent upon one or more characteristics of a sound object associated with the sound-source virtual visual object to which the interconnecting virtual visual object is interconnected, and wherein audio processing of the sound objects to produce rendered sound objects depends on user-interaction with the user-controlled virtual visual object and user-controlled interconnection of interconnecting virtual visual objects between sound-source virtual visual objects and the user-controlled virtual visual object.
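The dependence of a connector's appearance on the associated sound object can be sketched with a hypothetical mapping from loudness and spectral centroid to thickness and colour; the characteristic names and the mapping itself are assumptions for illustration only:

```python
def connector_appearance(sound_obj: dict) -> dict:
    """Map characteristics of a sound object to the visual appearance of its
    interconnecting object (hypothetical mapping: loudness in [0, 1] sets line
    thickness, spectral centroid sets a colour class)."""
    thickness = 1 + round(4 * sound_obj["loudness"])
    colour = "warm" if sound_obj["centroid_hz"] < 1000 else "cool"
    return {"thickness": thickness, "colour": colour}
```

In the described method, such appearance updates would accompany any user-controlled re-interconnection, alongside the corresponding change in audio processing.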
Abstract:
A method comprises receiving information associated with a first content item, designating a first bead apparatus (842) to be associated with the first content item, the first content item being identified by a first content item identifier, causing display of a visual representation of the first content item identifier by the first bead apparatus on a display of the first bead apparatus, receiving information indicative of a content item selection input of the first bead apparatus indicative of selection of the first content item, receiving input indicative of a tag selection input that identifies a tag, and causing an establishment of an association between the first content item and the tag based, at least in part, on the tag selection input. The tag selection input may relate to a tap input associated with the bead apparatus, a rotation input associated with the bead apparatus, and/or the like.
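The select-then-tag flow of this abstract can be sketched as a small state machine, assuming content items are identified by string identifiers; the `BeadTagger` class and its method names are hypothetical:

```python
class BeadTagger:
    """Sketch of associating a tag with the content item currently selected on
    a bead apparatus (e.g. via a tap or rotation input)."""

    def __init__(self):
        self.tags = {}        # content item identifier -> set of tags
        self.selected = None  # identifier of the currently selected content item

    def select_content(self, content_id: str) -> None:
        """Record a content item selection input."""
        self.selected = content_id

    def apply_tag(self, tag: str) -> None:
        """Establish an association between the selected item and the tag."""
        if self.selected is None:
            raise ValueError("no content item selected")
        self.tags.setdefault(self.selected, set()).add(tag)
```

The tag selection input itself could arrive as a tap or rotation gesture on the bead, as the abstract notes.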
Abstract:
An apparatus, method, and computer program product for: associating a first item with a first portion of a hovering field, the hovering field at least partially encompassing a device, providing a first virtual item representative of the first item and controlling spatial audio in dependence on a position of the first virtual item.
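Controlling spatial audio in dependence on a virtual item's position can be sketched with a hypothetical mapping from the item's position (relative to the device at the origin) to a stereo pan and a distance-based gain; both formulas are illustrative assumptions:

```python
import math

def spatial_params(x: float, y: float, z: float) -> tuple:
    """Derive simple spatial-audio parameters from a virtual item's position
    in the hovering field around the device (device at the origin)."""
    pan = max(-1.0, min(1.0, x))          # left/right from lateral offset
    dist = math.sqrt(x * x + y * y + z * z)
    gain = 1.0 / (1.0 + dist)             # quieter as the item moves away
    return pan, gain
```

Moving the virtual item within the hovering field would then continuously update the rendered pan and gain.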