Abstract:
A method is disclosed comprising: receiving a representation of an image (702), the image being based, at least in part, on at least one operational circumstance; determining a first part of the representation based, at least in part, on a position of a first bead apparatus (704); causing display of the first part of the representation by the first bead apparatus (706); determining a second part of the representation based, at least in part, on a position of a second bead apparatus (708); and causing display of at least a portion of the second part of the representation by the second bead apparatus (710).
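The partitioning described above can be pictured with a minimal sketch; the function name and the column-slicing scheme are illustrative assumptions, not taken from the abstract:

```python
# Illustrative sketch: an image representation is split into parts, and each
# bead apparatus displays the part determined by its position in the strand.
# part_for_bead and the equal-width column split are assumptions.

def part_for_bead(image_columns, bead_index, bead_count):
    """Return the slice of image columns assigned to one bead."""
    per_bead = len(image_columns) // bead_count
    start = bead_index * per_bead
    return image_columns[start:start + per_bead]

image = list(range(12))                                    # 12 columns of a representation
first = part_for_bead(image, bead_index=0, bead_count=3)   # first bead's part
second = part_for_bead(image, bead_index=1, bead_count=3)  # second bead's part
```

Each bead thus renders only the slice implied by its position, so together the beads display the whole representation.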
Abstract:
An apparatus for audio signal processing of audio objects within at least one audio scene, the apparatus comprising at least one processor configured to: define, for at least one time period, at least one contextual grouping comprising at least two of a plurality of audio objects, and at least one further audio object of the plurality of audio objects outside of the at least one contextual grouping, the plurality of audio objects being within at least one audio scene; and define, with respect to the at least one contextual grouping, at least one first parameter and/or parameter rule type which is configured to be applied with respect to a common element associated with the at least two of the plurality of audio objects, and wherein the at least one first parameter and/or parameter rule type is configured to be applied with respect to an individual element associated with the at least one further audio object outside of the at least one contextual grouping, the at least one first parameter and/or parameter rule type being applied in audio rendering of both the at least two of the plurality of audio objects and the at least one further audio object.
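The group-versus-individual parameter idea can be sketched as follows; every name here (`AudioObject`, `apply_gain_rule`, the gain values) is an illustrative assumption, and gain is used as a stand-in for whatever parameter rule type the abstract leaves open:

```python
# Hedged sketch of the contextual-grouping idea: a parameter (here, gain) is
# applied via a common element shared by the grouped objects, and via an
# individual element for each object outside the grouping.

from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str
    gain: float = 1.0

def apply_gain_rule(objects, grouping, group_gain, individual_gain):
    """Scale grouped objects by a common gain; others individually."""
    for obj in objects:
        if obj.name in grouping:
            obj.gain *= group_gain       # common element for the grouping
        else:
            obj.gain *= individual_gain  # individual element
    return objects

objects = [AudioObject("vocals"), AudioObject("guitar"), AudioObject("crowd")]
grouping = {"vocals", "guitar"}  # one contextual grouping for this time period
apply_gain_rule(objects, grouping, group_gain=0.5, individual_gain=0.8)
```

The same rule type is applied in rendering both sets of objects, only through different elements, which matches the structure of the claim.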
Abstract:
An apparatus including circuitry configured to: obtain at least one location and/or orientation associated with a user; obtain, based on the at least one location and/or orientation, one or more audio elements, wherein the one or more audio elements at least partially form an audio scene; obtain, based on the at least one location and/or orientation, at least one auxiliary audio element, the at least one auxiliary audio element being at least one audio element or a combination of audio elements, wherein the at least one auxiliary audio element is associated with at least a part of the audio scene and is located in an outside zone of the audio scene; and render the obtained audio elements and/or the at least one auxiliary audio element.
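One way to read this is that nearby elements are obtained individually while distant parts of the scene are represented by a single auxiliary element. A minimal sketch, assuming a distance threshold and the function names shown (none of which come from the abstract):

```python
# Illustrative sketch: select audio elements near the user's location, and
# stand in for everything beyond a listening radius with one auxiliary
# element. select_elements and the radius model are assumptions.
import math

def select_elements(user_pos, elements, radius):
    near, outside = [], []
    for name, pos in elements.items():
        if math.dist(user_pos, pos) <= radius:
            near.append(name)
        else:
            outside.append(name)
    # one auxiliary element represents the outside zone of the scene
    auxiliary = "+".join(sorted(outside)) if outside else None
    return near, auxiliary

elements = {"bird": (1.0, 0.0), "fountain": (2.0, 1.0), "traffic": (30.0, 0.0)}
near, aux = select_elements((0.0, 0.0), elements, radius=5.0)
```

Rendering would then mix the near elements with the auxiliary element, rather than fetching every distant element individually.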
Abstract:
An apparatus, method and computer program are described comprising: capturing images using one or more imaging devices of a foldable user device, wherein the user device comprises a visual display, wherein, in a first mode of operation, a display output on the visual display is modified depending on a folding angle of the foldable user device; capturing audio using one or more microphones of the foldable user device; providing a wind mode indication in a second mode of operation; and disabling the modification of the visual display depending on the folding angle in the second mode of operation.
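The two modes can be sketched as a small decision rule; the function name, the 120-degree threshold, and the "split"/"full" output labels are all illustrative assumptions:

```python
# Minimal sketch: in the first mode the display output tracks the folding
# angle; in the second (wind) mode that modification is disabled.

def display_output(folding_angle, wind_mode):
    """Return the display layout for the given fold angle and mode."""
    if wind_mode:
        return "full"   # second mode: folding-angle modification disabled
    # first mode: output depends on the folding angle (threshold assumed)
    return "split" if folding_angle < 120 else "full"

normal = display_output(folding_angle=90, wind_mode=False)
windy = display_output(folding_angle=90, wind_mode=True)
```

At the same fold angle, the wind mode keeps the display output fixed, as the abstract's "disabling the modification" suggests.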
Abstract:
An apparatus configured to: obtain audio signals related to one or more audio sources; determine a zoom effect related to the one or more audio sources; and generate audio effect information for the one or more audio sources based, at least partially, on the zoom effect and a zoom factor, wherein the audio effect information is configured to enable control of audio signal processing associated with the obtained audio signals, wherein the zoom factor is associated with, at least, the one or more audio sources.
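A simple reading of "audio effect information based on the zoom effect and a zoom factor" is per-source gain that follows the zoom. The model below is a hedged assumption, not the claimed method; `zoom_gains`, the gain cap, and the attenuation formula are all illustrative:

```python
# Illustrative sketch: zooming toward a source raises its gain and
# attenuates the other sources in proportion to the zoom factor.

def zoom_gains(sources, focused, zoom_factor):
    """Return per-source gain for a given zoom factor (1.0 = no zoom)."""
    gains = {}
    for src in sources:
        if src == focused:
            gains[src] = min(zoom_factor, 4.0)  # amplify the zoom target (cap assumed)
        else:
            gains[src] = 1.0 / zoom_factor      # attenuate the rest
    return gains

gains = zoom_gains(["speaker", "crowd"], focused="speaker", zoom_factor=2.0)
```

The resulting gain map is one concrete form the "audio effect information" could take when handed to the downstream audio signal processing.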
Abstract:
An apparatus is disclosed comprising means for providing first positional data indicative of a first user position in a virtual space which comprises a plurality of audio objects. The means may also be for receiving, based at least partially on the first positional data, data indicative of a first allocation of the audio objects into a prioritized order or into one of a plurality of prioritized groups in which each group has a respective priority order, and receiving audio data associated with at least some of the audio objects. The means may also be for rendering the received audio data for at least some of the audio objects associated with the received audio data, based on the first allocation data, in which audio objects with a higher priority or in a higher priority group are rendered with priority over audio objects allocated with a lower priority or in a lower priority group.
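The priority-ordered rendering can be pictured as sorting objects by their allocated group and spending a rendering budget from the top; the budget model and the names below are assumptions for illustration:

```python
# Hedged sketch: audio objects are allocated to prioritized groups and
# rendered highest-priority first until a budget (e.g. available decoder
# instances) is exhausted. render_order and the budget are assumptions.

def render_order(allocation, budget):
    """allocation: dict mapping object id -> priority group (0 = highest)."""
    ordered = sorted(allocation, key=lambda obj: allocation[obj])
    rendered = ordered[:budget]  # higher-priority objects win the budget
    dropped = ordered[budget:]
    return rendered, dropped

allocation = {"narrator": 0, "music": 1, "footsteps": 2, "wind": 2}
rendered, dropped = render_order(allocation, budget=2)
```

Objects in a higher-priority group are rendered with priority over lower-priority ones, matching the allocation rule in the abstract.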
Abstract:
Apparatuses, methods and computer programs are described comprising: providing an incoming call indication in response to an incoming call, the incoming call indication including an initial ambient audio signal comprising a combination of first ambient audio and second ambient audio; receiving an ambient audio control command; and adjusting the initial ambient audio signal to generate an adjusted ambient audio signal depending on the ambient audio control command.
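The adjustable combination of two ambient signals can be sketched as a weighted mix; the function name, the equal-weight default, and the sample values are illustrative assumptions:

```python
# Illustrative sketch: the incoming-call indication mixes two ambient audio
# signals; an ambient audio control command re-weights the mix.

def mix_ambient(first, second, weight_first=0.5):
    """Sample-wise weighted mix of two equal-length ambient signals."""
    w2 = 1.0 - weight_first
    return [weight_first * a + w2 * b for a, b in zip(first, second)]

caller_ambient = [0.2, 0.4, 0.6]
callee_ambient = [1.0, 1.0, 1.0]
initial = mix_ambient(caller_ambient, callee_ambient)        # initial 50/50 mix
adjusted = mix_ambient(caller_ambient, callee_ambient, 0.8)  # command favors caller
```

The control command here simply moves the weight, producing the adjusted ambient audio signal from the same two inputs.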
Abstract:
Examples of the disclosure relate to apparatus, methods and computer programs. The apparatus includes circuitry configured for obtaining a spatial audio signal, where the spatial audio signal includes at least one participant audio object and at least one private audio object, wherein the private audio object is associated with the participant that generated the participant audio object. The apparatus also includes circuitry configured for causing the participant audio object to be rendered in a first spatial location and causing the private audio object to be rendered in a second spatial location, so that the rendering of the private audio object is less prominent than the rendering of the participant audio object.
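"Less prominent" could mean quieter and further away; a minimal sketch under those assumptions (the offsets, gains, and `place_objects` name are all illustrative):

```python
# Hedged sketch: the private audio object is rendered less prominently than
# its participant object, here by lowering its gain and placing it further
# from the listener. All values are assumptions.

def place_objects(participant_pos=(0.0, 0.0, 1.0)):
    """Return rendering parameters for a participant and its private object."""
    participant = {"pos": participant_pos, "gain": 1.0}  # first spatial location
    x, y, z = participant_pos
    private = {"pos": (x, y, z + 1.5), "gain": 0.4}      # second, less prominent
    return participant, private

participant, private = place_objects()
```

Any renderer consuming these parameters would play the private object at the second spatial location with reduced prominence, as the abstract requires.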
Abstract:
An apparatus, method and computer program are disclosed, comprising rendering a virtual scene of a virtual space that corresponds to a virtual position of a user in the virtual space, as determined at least in part by the position of the user in a physical space. Embodiments also involve identifying one or more objects in the virtual scene which are in conflict with attributes of the physical space. Embodiments also involve detecting one or more blinking periods of the user when consuming the virtual scene. Embodiments also involve modifying the position of the one or more conflicting objects in the virtual scene based on a detected context. The modifying may be performed within the one or more detected blinking periods.
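The blink-gated update can be sketched as a guard that defers the position change until a blinking period is detected; the function and data shapes below are assumptions for illustration:

```python
# Illustrative sketch: conflicting virtual objects are moved only while the
# user is blinking, so the change is not visible. move_conflicting and the
# position format are assumptions.

def move_conflicting(objects, conflicts, new_positions, blinking):
    """Apply position changes for conflicting objects only during a blink."""
    if not blinking:
        return objects                  # defer until the next blinking period
    moved = dict(objects)
    for obj in conflicts:
        moved[obj] = new_positions[obj]
    return moved

objects = {"lamp": (1.0, 0.0), "table": (2.0, 2.0)}
conflicts = ["table"]                   # e.g. overlaps a physical-space wall
targets = {"table": (2.0, 1.0)}
same = move_conflicting(objects, conflicts, targets, blinking=False)
moved = move_conflicting(objects, conflicts, targets, blinking=True)
```

Only the conflicting object moves, and only inside the detected blinking period; non-conflicting objects are untouched either way.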