Abstract:
A method comprising: causing rendering of a first sound scene comprising multiple first sound objects; in response to direct or indirect user specification of a change in sound scene from the first sound scene to a mixed sound scene based in part on the first sound scene and in part on a second sound scene, causing selection of one or more second sound objects of the second sound scene comprising multiple second sound objects; causing selection of one or more first sound objects in the first sound scene; and causing rendering of a mixed sound scene by rendering the first sound scene while de-emphasising the selected one or more first sound objects and emphasising the selected one or more second sound objects.
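As an illustration of the mixing step, a minimal Python sketch follows. The SoundObject class, the gain values and the mix_scenes helper are assumptions made for illustration only; the abstract itself only states that the selected first sound objects are de-emphasised and the selected second sound objects emphasised while the first sound scene is rendered.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SoundObject:
    name: str
    gain_db: float  # playback gain applied at render time

def mix_scenes(first, second, selected_first, selected_second,
               de_emphasis_db=-12.0, emphasis_db=6.0):
    """Build the mixed scene: keep the first scene, attenuate its selected
    objects, and add the selected objects of the second scene with a boost."""
    mixed = []
    for obj in first:
        if obj.name in selected_first:
            obj = replace(obj, gain_db=obj.gain_db + de_emphasis_db)  # de-emphasise
        mixed.append(obj)
    for obj in second:
        if obj.name in selected_second:
            mixed.append(replace(obj, gain_db=obj.gain_db + emphasis_db))  # emphasise
    return mixed
```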
Abstract:
Apparatus is configured to associate each of one or more spatially-distributed audio sources in a virtual space, each audio source providing one or more audio signals representing audio for playback through a user device, with a respective fade-in profile which defines how audio volume for the audio source is gradually increased from a minimum level to a target volume level as a function of time. The apparatus is also configured to identify, based on user position, a current field-of-view within the virtual space and, in response to detecting that one or more new audio sources have a predetermined relationship with respect to the current field-of-view, to fade in the audio from the or each new audio source according to the fade-in profile for the respective audio source, so as to increase its volume gradually towards the target volume level defined by the fade-in profile.
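The following Python sketch illustrates one possible reading of this behaviour, assuming a linear fade-in profile and an azimuth-based field-of-view test. The names FadeInProfile, AudioSource and update_gains, and the 90-degree view cone, are illustrative assumptions rather than details taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class FadeInProfile:
    duration_s: float       # time taken to reach the target volume
    target_volume: float    # linear gain in [0, 1]
    minimum_volume: float = 0.0

    def volume_at(self, elapsed_s: float) -> float:
        """Gradually increase from the minimum level towards the target volume."""
        fraction = max(0.0, min(elapsed_s / self.duration_s, 1.0))
        return self.minimum_volume + fraction * (self.target_volume - self.minimum_volume)

@dataclass
class AudioSource:
    source_id: str
    azimuth_deg: float              # direction of the source in the virtual space
    profile: FadeInProfile
    fade_started_at: float | None = None

def in_field_of_view(source: AudioSource, view_azimuth_deg: float, fov_deg: float = 90.0) -> bool:
    # One possible "predetermined relationship": the source lies inside the view cone.
    delta = (source.azimuth_deg - view_azimuth_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_deg / 2.0

def update_gains(sources: list[AudioSource], view_azimuth_deg: float, now_s: float) -> dict[str, float]:
    gains = {}
    for src in sources:
        if in_field_of_view(src, view_azimuth_deg):
            if src.fade_started_at is None:
                src.fade_started_at = now_s        # newly in view: start its fade-in
            gains[src.source_id] = src.profile.volume_at(now_s - src.fade_started_at)
        else:
            gains[src.source_id] = 0.0
    return gains
```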
Abstract:
A method is disclosed which comprises receiving a plurality of audio signals representing audio from respective audio sources in a space, and defining for each audio source a spatial audio field indicative of the propagation of its audio signals within the space. A restricted zone is defined within the spatial audio field of a first audio source. A first control signal may be received for changing the spatial audio field of a second audio source such that said spatial audio field is moved towards, and overlaps part of, the restricted zone of the first audio source. Responsive to the first control signal, the method may comprise changing the spatial audio field of the second audio source so that there is no overlap with the restricted zone.
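A minimal sketch of how the restricted zone might be enforced follows, simplified to one-dimensional extents along a single axis of the space. The AudioField and move_field names, and the interval representation itself, are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class AudioField:
    start: float  # field extent along one axis of the space (metres)
    end: float

    def overlaps(self, other: "AudioField") -> bool:
        return self.start < other.end and other.start < self.end

def move_field(field: AudioField, requested_shift: float,
               restricted: AudioField) -> AudioField:
    """Apply the requested shift, then pull the field back so that it does not
    overlap the restricted zone of the first audio source."""
    moved = AudioField(field.start + requested_shift, field.end + requested_shift)
    if not moved.overlaps(restricted):
        return moved
    if requested_shift > 0:                      # approaching the zone from below
        excess = moved.end - restricted.start
    else:                                        # approaching the zone from above
        excess = moved.start - restricted.end
    return AudioField(moved.start - excess, moved.end - excess)
```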
Abstract:
A method is disclosed, including providing data indicative of dimensions of a real-world space within which a virtual world is to be consumed. The method may also include identifying one or more objects within said real-world space, and determining one or more available areas within the real-world space for rendering three-dimensional virtual content, based at least partly on the dimensions of the real-world space. The method may also include identifying one or more of the objects as being movable; identifying, from a set of three-dimensional virtual content items, one or more candidate items that cannot be rendered within the available area(s) but can be rendered if one or more of the movable objects is moved; and providing an indication to a virtual reality user device of the candidate virtual item(s) and of the movable object(s) required to be moved.
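A simple area-based approximation of the candidate-item test is sketched below; real placement would require actual three-dimensional geometry, so the SceneObject and ContentItem classes and the footprint arithmetic are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    footprint_m2: float
    movable: bool

@dataclass
class ContentItem:
    name: str
    required_area_m2: float

def candidate_items(room_area_m2, objects, items):
    """Items that cannot be rendered in the currently available area but could be
    if the movable objects were moved out of the way (area-based approximation)."""
    occupied = sum(o.footprint_m2 for o in objects)
    movable_area = sum(o.footprint_m2 for o in objects if o.movable)
    available = room_area_m2 - occupied
    candidates = []
    for item in items:
        if available < item.required_area_m2 <= available + movable_area:
            candidates.append((item, [o.name for o in objects if o.movable]))
    return candidates
```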
Abstract:
Systems and methods for distributed audio mixing are disclosed, comprising providing one or more predefined constellations, each constellation defining a spatial arrangement of points forming a shape or pattern, and receiving positional data indicative of the spatial positions of a plurality of audio sources in a capture space. A correspondence may be identified between a subset of the audio sources and a constellation based on the relative spatial positions of the audio sources in the subset. Responsive to said correspondence, at least one action, for example an audio, video and/or controlling action, may be applied to audio sources of the subset.
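One naive way to test the correspondence between a subset of sources and a constellation is to compare normalised pairwise distances. The sketch below assumes two-dimensional positions and a fixed tolerance, and all names in it are illustrative rather than taken from the disclosure.

```python
from itertools import combinations
import math

def pairwise_signature(points):
    """Sorted pairwise distances, normalised by the largest one, as a crude
    scale- and rotation-invariant shape descriptor."""
    dists = sorted(math.dist(a, b) for a, b in combinations(points, 2))
    return [d / dists[-1] for d in dists] if dists and dists[-1] > 0 else dists

def matches_constellation(source_positions, constellation, tolerance=0.1):
    """True if the relative spatial arrangement of the audio sources corresponds
    to the constellation's arrangement of points, within a tolerance."""
    if len(source_positions) != len(constellation):
        return False
    sig_a = pairwise_signature(source_positions)
    sig_b = pairwise_signature(constellation)
    return all(abs(a - b) <= tolerance for a, b in zip(sig_a, sig_b))

# Example: three performers standing roughly in an equilateral triangle
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
sources = [(2.0, 2.0), (4.0, 2.1), (3.0, 3.8)]
if matches_constellation(sources, triangle, tolerance=0.15):
    print("apply the constellation's audio/video/control action to this subset")
```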
Abstract:
A method is disclosed comprising receiving a representation of an image (702), the image being based, at least in part, on at least one operational circumstance; determining a first part of the representation based, at least in part, on a position of a first bead apparatus (704); causing display of the first part of the representation by the first bead apparatus (706); determining a second part of the representation based, at least in part, on a position of a second bead apparatus (708); and causing display of at least a portion of the second part of the representation by the second bead apparatus (710).
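A small sketch of how the representation might be partitioned across bead apparatuses by position follows, assuming the beads are ordered along a strand and the image is split into equal horizontal strips. The function name and the strip-based split are illustrative assumptions.

```python
def partition_image(image_width_px, bead_positions):
    """Split the image into horizontal strips, one per bead, ordered by the
    beads' positions on the strand; returns (bead_id, x_start, x_end) tuples."""
    ordered = sorted(bead_positions.items(), key=lambda kv: kv[1])
    strip = image_width_px // len(ordered)
    parts = []
    for index, (bead_id, _pos) in enumerate(ordered):
        x_start = index * strip
        x_end = image_width_px if index == len(ordered) - 1 else x_start + strip
        parts.append((bead_id, x_start, x_end))
    return parts

# A 96-pixel-wide image shared across three beads positioned along the strand
print(partition_image(96, {"bead_a": 0.1, "bead_b": 0.5, "bead_c": 0.9}))
# [('bead_a', 0, 32), ('bead_b', 32, 64), ('bead_c', 64, 96)]
```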
Abstract:
A method comprises receiving information associated with a content item, designating a first bead apparatus (716) to be associated with a first content item segment of the content item, the first content item segment being identified by a first content item segment identifier, causing display of a visual representation of the first content item segment identifier by the first bead apparatus (726), designating a second bead apparatus (712) to be associated with a second content item segment of the content item, the second content item segment being identified by a second content item segment identifier, causing display of a visual representation of the second content item segment identifier by the second bead apparatus (722), receiving information indicative of a selection input of the second bead apparatus, and causing rendering of the second content item segment based, at least in part, on the selection input (734). The causation of rendering comprises sending information indicative of a content item segment to a separate apparatus or causing sending of information indicative of a content item segment by another apparatus to a separate apparatus (732) such as a bead apparatus, an electronic apparatus, a server, a computer, a laptop, a television, a phone and/or the like.
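The designation and selection flow could be sketched as below, where BeadController, Segment and send_to_device are hypothetical stand-ins; displaying an identifier on a bead and forwarding a segment to a separate apparatus are reduced to simple callables for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    identifier: str       # e.g. a track title or chapter label
    content: bytes        # the media payload for this segment

class BeadController:
    def __init__(self, send_to_device):
        self.send_to_device = send_to_device   # forwards data to a separate apparatus
        self.assignments: dict[str, Segment] = {}

    def designate(self, bead_id: str, segment: Segment) -> None:
        """Associate a bead with a content item segment and show its identifier."""
        self.assignments[bead_id] = segment
        self.display_identifier(bead_id, segment.identifier)

    def display_identifier(self, bead_id: str, identifier: str) -> None:
        print(f"{bead_id} shows '{identifier}'")       # stand-in for the bead's display

    def on_selection(self, bead_id: str) -> None:
        """Selection input on a bead causes rendering of its segment elsewhere."""
        self.send_to_device(self.assignments[bead_id])

controller = BeadController(send_to_device=lambda seg: print(f"render '{seg.identifier}' on the TV"))
controller.designate("bead_1", Segment("Chapter 1", b"..."))
controller.designate("bead_2", Segment("Chapter 2", b"..."))
controller.on_selection("bead_2")   # renders the second segment on the separate apparatus
```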
Abstract:
An apparatus is configured to, based on first imagery (301) of at least part of a body of a user (204) and contemporaneously captured second imagery (302) of a scene, the second imagery comprising at least a plurality of images taken over time, and based on expression-time information indicative of when a user expression of the user (204) occurs, provide a time window (303) temporally extending from a first time (t-1) prior to the time (t) of the expression-time information to a second time (t-5) equal to or prior to the first time (t-1). The time window (303) is provided to identify at least one expression-causing image (305) from the plurality of images of the second imagery (302) that was captured in said time window, and to provide for recordal of the at least one expression-causing image (305) with at least one expression-time image (306) comprising at least one image from the first imagery (301).
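A minimal sketch of the time-window selection follows, assuming frames are plain dictionaries with a timestamp and that the window runs from five seconds to one second before the expression time; the offsets and the select_expression_record name are illustrative assumptions.

```python
def select_expression_record(scene_frames, user_frames, expression_time,
                             window_start_offset=1.0, window_end_offset=5.0):
    """Pick the scene image(s) captured in the window [t - 5 s, t - 1 s] before
    the expression, plus the user image at the expression time, for recordal."""
    t1 = expression_time - window_start_offset   # first time, prior to the expression
    t2 = expression_time - window_end_offset     # second time, at or before the first time
    expression_causing = [f for f in scene_frames if t2 <= f["timestamp"] <= t1]
    expression_image = min(user_frames,
                           key=lambda f: abs(f["timestamp"] - expression_time))
    return {"expression_causing": expression_causing,
            "expression_time_image": expression_image}
```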
Abstract:
A method comprising: determining a portion of a visual scene, wherein the portion is dependent upon a position of a sound source within the visual scene; and enabling adaptation of the visual scene to provide, via a display, spatially-limited visual highlighting of the portion of the visual scene.
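Spatially-limited highlighting could be approximated as below by brightening only a region around the sound source's position in the image; the square region, the gain factor and the nested-list pixel representation are illustrative assumptions rather than the disclosed adaptation.

```python
def highlight_region(source_xy, image_size, radius_px=80):
    """Clamp a square highlight region, centred on the sound source's position
    in the visual scene, to the image bounds."""
    x, y = source_xy
    width, height = image_size
    left, top = max(0, int(x - radius_px)), max(0, int(y - radius_px))
    right, bottom = min(width, int(x + radius_px)), min(height, int(y + radius_px))
    return left, top, right, bottom

def apply_highlight(pixels, region, gain=1.4):
    """Brighten only the pixels inside the region (spatially-limited highlighting)."""
    left, top, right, bottom = region
    for y in range(top, bottom):
        for x in range(left, right):
            r, g, b = pixels[y][x]
            pixels[y][x] = (min(255, int(r * gain)),
                            min(255, int(g * gain)),
                            min(255, int(b * gain)))
```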
Abstract:
A method comprising: rendering a first media scene based upon media content provided by a content-rendering application via one or more rendering devices worn by a user; determining a priority for an event that occurs near the user, the event being independent of the content-rendering application; and automatically modifying the rendered first media scene to render a modified second media scene based at least in part upon media content provided by the content-rendering application and at least in part upon other media content associated with the event.
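A toy Python sketch of priority-based modification follows, assuming integer priorities and a simple duck-or-overlay scheme; the Event class, the thresholds and the dictionary scene representation are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    priority: int          # e.g. 0 = ignore, 1 = notify, 2 = interrupt

def modify_scene(scene, event):
    """Blend the rendered scene with event content according to the event's priority."""
    if event.priority >= 2:
        # High priority: duck the application's content and render the event prominently.
        return {"app_content": scene["app_content"], "app_gain": 0.2,
                "event_content": event.description, "event_gain": 1.0}
    if event.priority == 1:
        # Medium priority: keep the scene and overlay a subtle notification.
        return {**scene, "event_content": event.description, "event_gain": 0.3}
    return scene   # low priority: leave the first media scene unmodified

scene = {"app_content": "vr-game-audio", "app_gain": 1.0}
print(modify_scene(scene, Event("doorbell", priority=2)))
```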