Abstract:
An apparatus comprising an audio detector configured to analyse a first audio signal to determine at least one audio source, wherein the first audio signal is generated from the sound-field in the environment of the apparatus; an audio generator configured to generate at least one further audio source; and a mixer configured to mix the at least one audio source and the at least one further audio source such that the at least one further audio source is associated with the at least one audio source.
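The abstract above describes a detect/generate/mix chain. Below is a minimal sketch of such a chain in Python, assuming mono PCM buffers and a single estimated direction of arrival per source; the class and function names (AudioSource, detect_sources, generate_further_source, mix_associated) are illustrative assumptions and are not taken from the application.

```python
# Illustrative sketch of the detect/generate/mix chain; names are assumptions.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class AudioSource:
    samples: np.ndarray   # mono PCM samples
    direction_deg: float  # estimated direction of arrival


def detect_sources(signal: np.ndarray) -> List[AudioSource]:
    """Stand-in for the audio detector: here the whole captured signal is
    treated as one source arriving from an assumed direction."""
    return [AudioSource(samples=signal, direction_deg=30.0)]


def generate_further_source(n_samples: int, freq_hz: float = 440.0,
                            rate_hz: int = 48_000) -> np.ndarray:
    """Stand-in for the audio generator: a synthetic tone."""
    t = np.arange(n_samples) / rate_hz
    return 0.1 * np.sin(2 * np.pi * freq_hz * t)


def mix_associated(detected: AudioSource, generated: np.ndarray) -> AudioSource:
    """Mix the generated source with the detected one, associating it by
    placing it at the detected source's direction."""
    length = min(len(detected.samples), len(generated))
    mixed = detected.samples[:length] + generated[:length]
    return AudioSource(samples=mixed, direction_deg=detected.direction_deg)


if __name__ == "__main__":
    captured = np.random.default_rng(0).normal(scale=0.05, size=48_000)
    source = detect_sources(captured)[0]
    extra = generate_further_source(len(captured))
    out = mix_associated(source, extra)
    print(f"mixed {len(out.samples)} samples at {out.direction_deg} degrees")
```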
Abstract:
An apparatus comprising: an input configured to receive from at least one co-operating apparatus at least one audio signal; an audio signal analyser configured to analyse the at least one audio signal to determine at least one audio component position relative to the at least one co-operating apparatus recording position; and a processor configured to determine a position value based on the at least one co-operating apparatus recording position and the apparatus position, and further configured to apply the position value to the at least one audio component position, such that the at least one audio component position is substantially aligned with the apparatus position.
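One way to read the position-alignment step is as a simple coordinate translation. The sketch below assumes positions are 2-D Cartesian coordinates and treats the position value as the offset between the co-operating recording position and the apparatus position; the coordinate model and function names are assumptions, not details from the abstract.

```python
# Illustrative sketch: align an audio component position with the apparatus
# position by applying an offset derived from the two recording positions.
import numpy as np


def position_value(cooperating_pos: np.ndarray, apparatus_pos: np.ndarray) -> np.ndarray:
    """Offset that maps the co-operating recording position onto the
    apparatus position."""
    return apparatus_pos - cooperating_pos


def align_component(component_pos: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Apply the position value so the audio component position is expressed
    relative to the apparatus rather than the co-operating apparatus."""
    return component_pos + offset


if __name__ == "__main__":
    cooperating_recording_pos = np.array([2.0, 0.0])
    apparatus_pos = np.array([0.0, 0.0])
    component_pos = np.array([3.0, 1.0])  # as reported by the co-operating apparatus

    offset = position_value(cooperating_recording_pos, apparatus_pos)
    print(align_component(component_pos, offset))  # -> [1. 1.]
```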
Abstract:
An apparatus comprising at least one processor and at least one memory, the memory comprising machine-readable instructions that, when executed, cause the apparatus to: store in a non-volatile memory multiple sets of predetermined spatial audio processing parameters for differently moving sound sources; provide in a man-machine interface an option for a user to select one of the stored sets; and in response to the user selecting one of the stored sets, use the selected set of predetermined spatial audio processing parameters to spatially process audio from one or more sound sources.
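A minimal sketch of the stored-parameter-set selection is shown below, assuming each set is a small dictionary of gain and motion-related settings; the parameter names, their values, and the placeholder processing step are illustrative assumptions rather than details from the abstract.

```python
# Illustrative parameter sets for differently moving sound sources.
PARAMETER_SETS = {
    "static": {"gain": 1.0, "pan_smoothing": 0.0, "doppler": False},
    "slow_moving": {"gain": 1.0, "pan_smoothing": 0.3, "doppler": False},
    "fast_moving": {"gain": 0.9, "pan_smoothing": 0.7, "doppler": True},
}


def select_parameter_set(user_choice: str) -> dict:
    """Stand-in for the man-machine-interface selection step."""
    return PARAMETER_SETS[user_choice]


def spatially_process(samples, params: dict):
    """Placeholder spatial processing: applies only the selected gain."""
    return [s * params["gain"] for s in samples]


if __name__ == "__main__":
    chosen = select_parameter_set("fast_moving")
    print(spatially_process([0.5, -0.25, 0.125], chosen))
```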
Abstract:
An approach is provided for efficiently capturing, processing, presenting, and/or associating audio objects with content items and geo-locations. A processing platform may determine a viewpoint of a viewer of at least one content item associated with a geo-location. Further, the processing platform and/or a content provider may determine at least one audio object associated with the at least one content item, the geo-location, or a combination thereof. Furthermore, the processing platform may process the at least one audio object for rendering one or more elements of the at least one audio object based, at least in part, on the viewpoint.
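The viewpoint-based rendering step could look roughly like the sketch below, which assumes the viewpoint is a compass bearing in degrees and that an audio object's contribution is weighted by its angular offset from that bearing; the flat-earth bearing approximation and all names are assumptions, not details from the description.

```python
# Illustrative viewpoint-dependent rendering of a geo-located audio object.
import math
from dataclasses import dataclass


@dataclass
class AudioObject:
    name: str
    lat: float
    lon: float


def bearing_deg(viewer_lat: float, viewer_lon: float, obj: AudioObject) -> float:
    """Approximate bearing from the viewer to the audio object
    (flat-earth approximation, adequate for short distances)."""
    dy = obj.lat - viewer_lat
    dx = (obj.lon - viewer_lon) * math.cos(math.radians(viewer_lat))
    return math.degrees(math.atan2(dx, dy)) % 360.0


def render_gain(viewpoint_deg: float, obj_bearing_deg: float) -> float:
    """Emphasise objects in front of the viewer: gain falls off with the
    angular difference between viewpoint and object bearing."""
    diff = abs((obj_bearing_deg - viewpoint_deg + 180.0) % 360.0 - 180.0)
    return max(0.0, 1.0 - diff / 180.0)


if __name__ == "__main__":
    fountain = AudioObject("fountain", lat=60.1700, lon=24.9384)
    b = bearing_deg(60.1699, 24.9380, fountain)
    print(f"bearing {b:.1f} deg, gain {render_gain(45.0, b):.2f}")
```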
Abstract:
An approach is provided for annotating point of interest information to structures. One or more representations of at least one structure are determined. One or more partitions of the at least one structure are determined based, at least in part, on one or more features of the one or more representations. One or more points of interest associated with the at least one structure are determined. One or more points of interest are determined to be rendered for presentation based, at least in part, on the one or more partitions.
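A minimal sketch of associating points of interest with partitions is given below, assuming the partitions are floors identified by a height range and each point of interest carries a height; the partitioning criterion and data model are illustrative assumptions.

```python
# Illustrative assignment of points of interest to structure partitions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Partition:
    label: str
    min_height_m: float
    max_height_m: float


@dataclass
class PointOfInterest:
    name: str
    height_m: float


def assign_pois(partitions: List[Partition],
                pois: List[PointOfInterest]) -> Dict[str, List[str]]:
    """Decide which points of interest are rendered with which partition."""
    out = {p.label: [] for p in partitions}
    for poi in pois:
        for p in partitions:
            if p.min_height_m <= poi.height_m < p.max_height_m:
                out[p.label].append(poi.name)
                break
    return out


if __name__ == "__main__":
    floors = [Partition("ground floor", 0.0, 4.0),
              Partition("first floor", 4.0, 8.0)]
    pois = [PointOfInterest("cafe", 1.5), PointOfInterest("gallery", 5.0)]
    print(assign_pois(floors, pois))
```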