-
Publication No.: US11222652B2
Publication Date: 2022-01-11
Application No.: US16516780
Filing Date: 2019-07-19
Applicant: Apple Inc.
Inventor: Ante Jukic , Mehrez Souden , Joshua D. Atkins
Abstract: A learning-based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component, which contains information about the direct signal propagation from the speech source to the microphone array, and the reverberant speech component, which contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct and reverberant signal components and estimate the distance from the extracted characteristics using the learned mapping.
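For illustration only, here is a minimal sketch of the kind of pipeline the abstract describes: given per-frame estimates of the direct and reverberant components, compute simple energy features (including a direct-to-reverberant ratio) and map them to a per-frame distance with a small fully connected network. The feature choice, the network size, and all names are assumptions, not the disclosed architecture.

```python
# Sketch only: per-frame direct/reverberant energy features -> distance regression.
import numpy as np
import torch
import torch.nn as nn

def frame_features(direct, reverb, eps=1e-8):
    """direct, reverb: (channels, frames) magnitude estimates per time frame."""
    e_dir = np.log(np.mean(direct ** 2, axis=0) + eps)   # log direct energy
    e_rev = np.log(np.mean(reverb ** 2, axis=0) + eps)   # log reverberant energy
    drr = e_dir - e_rev                                   # log direct-to-reverberant ratio
    return np.stack([e_dir, e_rev, drr], axis=-1)         # (frames, 3)

class DistanceNet(nn.Module):
    def __init__(self, n_feat=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_feat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),      # per-frame distance estimate (e.g., meters)
        )

    def forward(self, x):              # x: (frames, n_feat)
        return self.net(x).squeeze(-1)

# Toy usage with random placeholders standing in for the component estimates:
direct = np.abs(np.random.randn(4, 100))   # 4-mic array, 100 frames
reverb = np.abs(np.random.randn(4, 100))
feats = torch.tensor(frame_features(direct, reverb), dtype=torch.float32)
dist_per_frame = DistanceNet()(feats)       # supervised training not shown
```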
-
Publication No.: US10334357B2
Publication Date: 2019-06-25
Application No.: US15721644
Filing Date: 2017-09-29
Applicant: Apple Inc.
Inventor: Joshua D. Atkins , Mehrez Souden , Symeon Delikaris-Manias , Peter Raffensperger
Abstract: Impulse responses of a device are measured. A database of sound files is generated by convolving source signals with the impulse responses of the device. The sound files from the database are transformed into the time-frequency domain. One or more sub-band directional features are estimated at each sub-band of the time-frequency domain. A deep neural network (DNN) is trained for each sub-band based on the estimated one or more sub-band directional features and a target directional feature.
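As a hedged sketch of the data-generation side only: source signals are convolved with measured device impulse responses to simulate the multi-channel pickup, transformed with an STFT, and a simple inter-channel phase-difference feature is computed per sub-band. The specific directional feature and the per-sub-band network are not specified by the abstract and are illustrative assumptions here.

```python
# Sketch only: simulate device pickup from impulse responses and extract a sub-band feature.
import numpy as np
from scipy.signal import fftconvolve, stft

fs = 16000
source = np.random.randn(fs)                 # 1 s placeholder source signal
irs = np.random.randn(2, 256) * 0.1          # placeholder 2-mic device impulse responses

# Simulate the device pickup: one convolution per microphone channel.
mics = np.stack([fftconvolve(source, ir)[: len(source)] for ir in irs])

# Time-frequency transform of each channel.
_, _, X = stft(mics, fs=fs, nperseg=512)     # X: (2, freq_bins, frames)

# Example sub-band directional feature: inter-channel phase difference (IPD).
ipd = np.angle(X[0] * np.conj(X[1]))         # (freq_bins, frames)

# A DNN would then be trained per sub-band to map features such as ipd[k]
# to a target directional feature for that sub-band (training not shown).
```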
-
Publication No.: US20190172476A1
Publication Date: 2019-06-06
Application No.: US15830955
Filing Date: 2017-12-04
Applicant: Apple Inc.
Inventor: Jason Wung , Mehrez Souden , Ramin Pishehvar , Joshua D. Atkins
IPC: G10L21/02 , G10L25/30 , G10L15/02 , G10L21/0232 , G10L25/03
Abstract: A number of features are extracted from a current frame of a multi-channel speech pickup and from side information that is a linear echo estimate, a diffuse signal component, or a noise estimate of the multi-channel speech pickup. A DNN-based speech presence probability (SPP) value is produced for the current frame in response to the extracted features being input to the DNN. The DNN-based SPP value is applied to configure a multi-channel filter whose input is the multi-channel speech pickup and whose output is a single audio signal. In one aspect, the system is designed to run online, at low enough latency for real-time applications such as voice trigger detection. Other aspects are also described and claimed.
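A minimal, illustrative sketch of one way an SPP value can configure a multi-channel filter (not the claimed system): the per-frame SPP, assumed to come from a DNN, drives a recursive noise-covariance update, which in turn defines an MVDR-style filter that collapses the multi-channel pickup into a single output. The smoothing constant and all variable names are assumptions.

```python
# Sketch only: SPP-driven noise covariance update + MVDR weights for one frequency bin.
import numpy as np

def update_noise_cov(Phi_nn, x, spp, alpha=0.9):
    """x: (mics,) frequency-bin snapshot; spp: speech presence probability from the DNN."""
    beta = alpha + (1.0 - alpha) * spp       # update more strongly when speech is likely absent
    return beta * Phi_nn + (1.0 - beta) * np.outer(x, x.conj())

def mvdr_weights(Phi_nn, steering):
    """Classic MVDR: w = Phi_nn^-1 d / (d^H Phi_nn^-1 d)."""
    num = np.linalg.solve(Phi_nn, steering)
    return num / (steering.conj() @ num)

# Toy usage over a few frames of one bin:
mics = 3
Phi_nn = np.eye(mics, dtype=complex)
steering = np.ones(mics, dtype=complex)      # placeholder steering vector
for _ in range(10):
    x = np.random.randn(mics) + 1j * np.random.randn(mics)
    spp = 0.2                                # stand-in for the DNN output
    Phi_nn = update_noise_cov(Phi_nn, x, spp)
    y = mvdr_weights(Phi_nn, steering).conj() @ x   # single-channel output sample
```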
-
Publication No.: US20240312468A1
Publication Date: 2024-09-19
Application No.: US18605688
Filing Date: 2024-03-14
Applicant: Apple Inc.
Inventor: Ismael H. Nawfal , Symeon Delikaris Manias , Mehrez Souden , Joshua D. Atkins
IPC: G10L19/008 , H04S7/00
CPC classification number: G10L19/008 , H04S7/30 , H04S2420/11
Abstract: A sound scene is represented as first order Ambisonics (FOA) audio. A processor formats each signal of the FOA audio into a stream of audio frames, provides the formatted FOA audio to a machine learning model that reformats the formatted FOA audio into a target or desired higher order Ambisonics (HOA) format, and obtains output audio of the sound scene in the desired HOA format from the machine learning model. The output audio in the desired HOA format may then be rendered according to a playback audio format of choice. Other aspects are also described and claimed.
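A minimal sketch of the data flow only, with assumed shapes: 4-channel FOA audio is split into fixed-size frames, passed through a placeholder network that outputs (N+1)^2 = 9 channels for an assumed 2nd-order target HOA format, and reassembled. The actual model architecture and training are not described by the abstract and are not implied here.

```python
# Sketch only: frame FOA audio, map 4 channels -> 9 channels with a placeholder model.
import numpy as np
import torch
import torch.nn as nn

def to_frames(x, frame_len=1024):
    """x: (4, samples) FOA audio -> (n_frames, 4, frame_len)."""
    n = x.shape[1] // frame_len
    return torch.tensor(x[:, : n * frame_len], dtype=torch.float32) \
        .reshape(4, n, frame_len).permute(1, 0, 2)

class FoaToHoa(nn.Module):
    def __init__(self, order=2):
        super().__init__()
        out_ch = (order + 1) ** 2                        # 9 channels for 2nd-order HOA
        self.mix = nn.Conv1d(4, out_ch, kernel_size=1)   # placeholder model

    def forward(self, frames):                           # (n_frames, 4, frame_len)
        return self.mix(frames)                          # (n_frames, 9, frame_len)

foa = np.random.randn(4, 48000)                          # 1 s of FOA at 48 kHz (placeholder)
hoa_frames = FoaToHoa()(to_frames(foa))
hoa = hoa_frames.permute(1, 0, 2).reshape(9, -1)         # reassembled 2nd-order HOA stream
```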
-
Publication No.: US11546692B1
Publication Date: 2023-01-03
Application No.: US17370679
Filing Date: 2021-07-08
Applicant: Apple Inc.
Inventor: Symeon Delikaris Manias , Mehrez Souden , Ante Jukic , Matthew S. Connolly , Sabine Webel , Ronald J. Guglielmone, Jr.
Abstract: An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.
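Purely as an illustrative sketch, not the disclosed renderer: an audio embedding and a visual embedding (e.g., derived from the video of the recording) are fused and mapped to per-output-channel gains that spatially place a captured sound. The encoders, embedding sizes, and gain-based rendering are all assumptions.

```python
# Sketch only: fuse audio and visual embeddings to predict spatial panning gains.
import torch
import torch.nn as nn

class AVSpatialMapper(nn.Module):
    def __init__(self, audio_dim=128, video_dim=128, out_channels=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 256), nn.ReLU(),
            nn.Linear(256, out_channels),
        )

    def forward(self, audio_emb, video_emb):
        g = self.fuse(torch.cat([audio_emb, video_emb], dim=-1))
        return torch.softmax(g, dim=-1)      # per-channel gains summing to 1

# Toy usage: pan a mono source into stereo using the predicted gains.
mapper = AVSpatialMapper()
gains = mapper(torch.randn(1, 128), torch.randn(1, 128))   # (1, 2)
mono = torch.randn(1, 48000)                                # placeholder source signal
stereo = gains.unsqueeze(-1) * mono.unsqueeze(1)            # (1, 2, 48000) output channels
```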
-
Publication No.: US20220369030A1
Publication Date: 2022-11-17
Application No.: US17322539
Filing Date: 2021-05-17
Applicant: Apple Inc.
Inventor: Mehrez Souden , Jason Wung , Ante Jukic , Ramin Pishehvar , Joshua D. Atkins
IPC: H04R3/04 , H04R3/00 , H04R5/04 , G10L21/0216 , G10L25/78
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
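A minimal sketch under stated assumptions: a fixed pick-up beam aimed at the playback loudspeaker produces an echo-dominant reference, and a simple NLMS adaptive filter then cancels that reference from a microphone signal. The beam weights, filter length, and step size are placeholders, not the claimed system.

```python
# Sketch only: echo-dominant beam as reference, NLMS cancellation on one microphone.
import numpy as np

def echo_dominant_reference(mics, beam_weights):
    """mics: (channels, samples); beam_weights: (channels,) aimed at the loudspeaker."""
    return beam_weights @ mics               # (samples,) echo-dominant reference

def nlms_cancel(mic, ref, taps=256, mu=0.5, eps=1e-6):
    """Remove the component of `mic` predictable from `ref` (echo), keep the rest."""
    w = np.zeros(taps)
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        r = ref[n - taps:n][::-1]            # most recent reference samples
        e = mic[n] - w @ r                   # error = mic minus predicted echo
        w += mu * e * r / (r @ r + eps)      # NLMS weight update
        out[n] = e                           # "clean" sample
    return out

mics = np.random.randn(4, 16000)             # placeholder 4-mic capture
beam = np.full(4, 0.25)                      # placeholder beam weights toward the speaker
ref = echo_dominant_reference(mics, beam)
clean0 = nlms_cancel(mics[0], ref)           # echo-reduced first microphone signal
```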
-
Publication No.: US10798511B1
Publication Date: 2020-10-06
Application No.: US16378438
Filing Date: 2019-04-08
Applicant: Apple Inc.
Inventor: Jonathan D. Sheaffer , Juha O. Merimaa , Jason Wung , Martin E. Johnson , Peter A. Raffensperger , Joshua D. Atkins , Symeon Delikaris Manias , Mehrez Souden
IPC: H04S5/00 , G10K11/178 , H04R1/40
Abstract: Processing input audio channels for generating spatial audio can include receiving a plurality of microphone signals that capture a sound field. Each microphone signal can be transformed into a frequency domain signal. From each frequency domain signal, a direct component and a diffuse component can be extracted. The direct component can be processed with a parametric renderer. The diffuse component can be processed with a linear renderer. The components can be combined, resulting in a spatial audio output. The levels of the components can be adjusted to match a direct to diffuse ratio (DDR) of the output with the DDR of the captured sound field. Other aspects are also described and claimed.
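A minimal sketch of one possible direct/diffuse split, not the claimed pipeline: each STFT bin of a two-microphone capture is weighted by a diffuseness estimate derived from inter-channel coherence, placeholder renderers stand in for the parametric and linear stages, and the diffuse part is rescaled so the output direct-to-diffuse ratio (DDR) matches that of the capture. The coherence-based estimator and the gain matching are illustrative assumptions.

```python
# Sketch only: coherence-based direct/diffuse split with DDR matching.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
mics = np.random.randn(2, fs)                       # placeholder 2-mic capture
_, _, X = stft(mics, fs=fs, nperseg=512)            # (2, freq, frames)

# Magnitude-squared coherence per bin, averaged over frames.
Sxx = np.mean(np.abs(X[0]) ** 2, axis=-1, keepdims=True)
Syy = np.mean(np.abs(X[1]) ** 2, axis=-1, keepdims=True)
Sxy = np.mean(X[0] * np.conj(X[1]), axis=-1, keepdims=True)
coherence = np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-12)  # near 1 for direct sound, lower for diffuse

direct = coherence * X[0]                            # would feed the parametric renderer
diffuse = (1.0 - coherence) * X[0]                   # would feed the linear renderer

# Placeholder renderers (identity here); rescale the rendered diffuse part so the
# output DDR matches the captured DDR.
rendered_direct, rendered_diffuse = direct, diffuse.copy()
ddr_capture = np.sum(np.abs(direct) ** 2) / (np.sum(np.abs(diffuse) ** 2) + 1e-12)
ddr_output = np.sum(np.abs(rendered_direct) ** 2) / (np.sum(np.abs(rendered_diffuse) ** 2) + 1e-12)
rendered_diffuse *= np.sqrt(ddr_output / ddr_capture)

_, out = istft(rendered_direct + rendered_diffuse, fs=fs, nperseg=512)
```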
-
Publication No.: US20200312315A1
Publication Date: 2020-10-01
Application No.: US16368403
Filing Date: 2019-03-28
Applicant: Apple Inc.
Inventor: Feipeng Li , Mehrez Souden , Joshua D. Atkins , John Bridle , Charles P. Clark , Stephen H. Shum , Sachin S. Kajarekar , Haiying Xia , Erik Marchi
IPC: G10L15/20
Abstract: An acoustic environment aware method for selecting a high quality audio stream during multi-stream speech recognition. A number of input audio streams are processed to determine if a voice trigger is detected, and, if so, a voice trigger score is calculated for each stream. An acoustic environment measurement is also calculated for each audio stream. The trigger score and acoustic environment measurement are combined for each audio stream, and the stream with the highest combined score is selected as the preferred audio stream. The preferred audio stream is output to an automatic speech recognizer. Other aspects are also described and claimed.
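A minimal sketch of the selection logic only, with assumed scoring functions: each stream gets a voice-trigger score (from a detector, not shown) and an acoustic environment measurement (here a crude SNR proxy); the two are combined per stream and the highest-scoring stream is forwarded to the recognizer. The equal weighting and the SNR estimator are assumptions.

```python
# Sketch only: combine trigger score and environment score, pick the best stream.
import numpy as np

def environment_score(stream, frame=512):
    """Crude SNR proxy: ratio of loud-frame energy to quiet-frame energy, in dB."""
    frames = stream[: len(stream) // frame * frame].reshape(-1, frame)
    energy = np.mean(frames ** 2, axis=1)
    return 10 * np.log10(np.percentile(energy, 90) / (np.percentile(energy, 10) + 1e-12))

def select_stream(streams, trigger_scores, w_env=1.0):
    combined = [score + w_env * environment_score(s)
                for s, score in zip(streams, trigger_scores)]
    best = int(np.argmax(combined))
    return best, streams[best]               # preferred stream goes to the ASR

streams = [np.random.randn(16000) for _ in range(3)]   # placeholder audio streams
trigger_scores = [0.7, 0.9, 0.8]                        # stand-ins for detector output
idx, preferred = select_stream(streams, trigger_scores)
```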
-
Publication No.: US20180350379A1
Publication Date: 2018-12-06
Application No.: US15613127
Filing Date: 2017-06-02
Applicant: Apple Inc.
Inventor: Jason Wung , Joshua D. Atkins , Ramin Pishehvar , Mehrez Souden
IPC: G10L21/02 , G10L21/0232 , G10L21/0272 , G10L21/038
CPC classification number: G10L21/0205 , G10L21/0208 , G10L21/0232 , G10L21/0272 , G10L21/038 , G10L2021/02082 , G10L2021/02166 , H04M9/082
Abstract: A digital speech enhancement system that performs a specific chain of digital signal processing operations upon a multi-channel sound pickup, to result in a single, enhanced speech signal. The operations are designed to be computationally less complex yet as a whole yield an enhanced speech signal that produces accurate voice trigger detection and low word error rates by an automatic speech recognizer. The constituent operations or components of the system have been chosen so that the overall system is robust to changing acoustic conditions, and can deliver the enhanced speech signal with low enough latency that the system can be used online (enabling real-time voice trigger detection and streaming ASR). Other embodiments are also described and claimed.
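A minimal skeleton of a low-latency, frame-by-frame enhancement chain, not the specific chain claimed here: each stage consumes and produces one frame of the multi-channel pickup, and the final stages collapse it to a single enhanced signal. The stage names and their order are illustrative placeholders.

```python
# Sketch only: frame-by-frame chain of enhancement stages ending in one channel.
import numpy as np

class EnhancementChain:
    def __init__(self, stages):
        self.stages = stages                 # list of callables, applied in order

    def process_frame(self, frame):          # frame: (channels, frame_len)
        for stage in self.stages:
            frame = stage(frame)
        return frame

# Placeholder stages; real ones would be echo cancellation, dereverberation,
# beamforming, and residual noise suppression, each stateful and low-latency.
echo_cancel = lambda f: f
dereverb = lambda f: f
beamform = lambda f: np.mean(f, axis=0, keepdims=True)   # collapse to 1 channel
noise_suppress = lambda f: f

chain = EnhancementChain([echo_cancel, dereverb, beamform, noise_suppress])
enhanced = chain.process_frame(np.random.randn(4, 256))  # (1, 256) enhanced frame
```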
-
Publication No.: US20240314509A1
Publication Date: 2024-09-19
Application No.: US18605701
Filing Date: 2024-03-14
Applicant: Apple Inc.
Inventor: Ismael H. Nawfal , Mehrez Souden , Juha O. Merimaa
CPC classification number: H04S7/30 , H04S1/007 , H04S2420/11
Abstract: A sound scene is represented as first order Ambisonics (FOA) audio. A processor formats each signal of the FOA audio into a stream of audio frames, provides the formatted FOA audio to a machine learning model that reformats the formatted FOA audio into a target or desired higher order Ambisonics (HOA) format, and obtains output audio of the sound scene in the desired HOA format from the machine learning model. The output audio in the desired HOA format may then be rendered according to a playback audio format of choice. Other aspects are also described and claimed.
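Since this abstract matches the related application above, the sketch here covers only the final step, rendering the HOA output to a playback format of choice with a static decoding matrix. The horizontal-only harmonic basis, pseudo-inverse decoder, and toy loudspeaker layout are assumptions, not the disclosed renderer.

```python
# Sketch only: decode a (horizontal) 2nd-order HOA stream to a 5-loudspeaker layout.
import numpy as np

def sh_matrix_2d(order, azimuths):
    """Rough horizontal-only (circular-harmonic) matrix, for illustration only."""
    cols = [np.ones_like(azimuths)]
    for m in range(1, order + 1):
        cols += [np.cos(m * azimuths), np.sin(m * azimuths)]
    return np.stack(cols, axis=1)            # (speakers, components)

hoa = np.random.randn(5, 48000)              # placeholder horizontal 2nd-order HOA (5 components)
speakers = np.deg2rad([-135.0, -45.0, 0.0, 45.0, 135.0])
Y = sh_matrix_2d(2, speakers)                # (5 speakers, 5 components)
D = np.linalg.pinv(Y.T)                      # mode-matching style decoder (speakers x components)
loudspeaker_feeds = D @ hoa                  # (5 speakers, samples)
```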