Abstract:
An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing and processing the musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to those selections. Each composed and generated digital piece of music contains a set of musical notes arranged and performed within it. The engine includes a digital piece creation subsystem and a digital audio sample producing subsystem supported by virtual musical instrument libraries.
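For illustration only, the Python sketch below shows one way the flow described in this abstract could be organized: a request object carrying the user-selected experience descriptors and time/space parameters, and an engine with a digital piece creation step and a sample-producing step backed by a virtual instrument library. All names and the note-generation logic are hypothetical stand-ins, not the patented subsystems.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CompositionRequest:
    experience_descriptors: List[str]                         # e.g. ["happy", "uplifting"]
    duration_seconds: float                                   # time parameter
    spot_markers: List[float] = field(default_factory=list)   # space parameters

class CompositionEngine:
    def __init__(self, instrument_library):
        self.instrument_library = instrument_library          # virtual instrument samples

    def compose(self, request: CompositionRequest):
        # Digital piece creation: map descriptors to a note sequence (placeholder logic).
        notes = [("C4", 0.5)] * int(request.duration_seconds * 2)
        # Digital audio sample producing: render each note from the sample library.
        return [self.instrument_library.get(pitch, b"") for pitch, _ in notes]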
Abstract:
An audio converter device and a method for using the same. The audio converter device receives the digital audio data from a first device via a local area network. The audio converter device decompresses the digital audio data and converts the digital audio data into analog electrical data. The audio converter device transfers the analog electrical data to an audio playback device.
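A rough, self-contained sketch of the receive/decompress/convert pipeline, assuming the compressed audio arrives as zlib-packed 16-bit PCM over a TCP connection on the local network; those transport and codec choices are illustrative assumptions, not what the abstract specifies.

import socket, struct, zlib

def run_converter(listen_port=5000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", listen_port))
    srv.listen(1)
    conn, _ = srv.accept()
    compressed = b""
    while chunk := conn.recv(4096):        # receive digital audio data from the first device
        compressed += chunk
    conn.close()
    pcm = zlib.decompress(compressed)      # decompress the digital audio data
    samples = struct.unpack(f"<{len(pcm) // 2}h", pcm)
    # Normalized sample values; a DAC stage would convert these to analog
    # electrical data and pass them to the audio playback device.
    return [s / 32768.0 for s in samples]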
Abstract:
An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine driven by a set of emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user during an automated music composition and generation process. The system includes a system user interface allowing the system user to input (i) linguistic and/or graphical-icon-based musical experience descriptors, and (ii) a video, audio recording, image, slide-show, or event marker.
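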
Abstract:
A method of generating a stream comprising synchronized interactive content is described. The method comprises the steps of: transmitting a first stream from a studio to a terminal or a terminal system of a first user and to a buffer; transmitting, in response to the first stream, a second stream to a mixer connected to the buffer, the second stream comprising content generated in reaction to the content of the first stream; providing the mixer with a temporal relation between the packets in the first and second streams; and generating a first output stream comprising substantially synchronized content by mixing packets of the second stream with packets of the buffered first stream on the basis of the temporal relation.
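As a loose illustration of the mixing step, the sketch below aligns packets of the second (reaction) stream with the buffered first stream using a single timestamp offset standing in for the temporal relation supplied to the mixer; the packet structure, the fixed offset, and all names are assumptions.

def mix_streams(buffered_first, second, offset_ms):
    # Each packet is a (timestamp_ms, payload) tuple; offset_ms is the
    # temporal relation between the two streams provided to the mixer.
    aligned_second = [(ts - offset_ms, payload) for ts, payload in second]
    merged = sorted(buffered_first + aligned_second, key=lambda p: p[0])
    return merged                    # substantially synchronized output stream

out = mix_streams([(0, "A0"), (40, "A1")], [(100, "B0"), (140, "B1")], offset_ms=100)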
Abstract:
An acoustic system includes a supply device, which is connected to a network and configured to supply an acoustic signal to the network, and at least one output device configured to output a sound that is based on the acoustic signal supplied from the supply device via the network. The acoustic system also includes a detection unit configured to detect whether the at least one output device is in a state of being capable of outputting the sound, and a control device configured to control, based on a result of the detection by the detection unit, to which of the at least one output device the acoustic signal is to be supplied.
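A simplified sketch of the detection-and-control idea: each networked output device is polled for readiness, and the acoustic signal is routed only to devices currently able to produce sound. The OutputDevice interface and the readiness criteria are hypothetical.

class OutputDevice:
    def __init__(self, name, online=True, muted=False):
        self.name, self.online, self.muted = name, online, muted

    def can_output(self):            # the detection unit's check
        return self.online and not self.muted

    def play(self, signal):
        print(f"{self.name} playing {len(signal)} samples")

def route_signal(signal, devices):
    for device in devices:           # the control device selects targets
        if device.can_output():
            device.play(signal)

route_signal([0.0] * 480, [OutputDevice("kitchen"), OutputDevice("patio", online=False)])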
Abstract:
Joint sound model generation techniques are described. In one or more implementations, a plurality of models of sound data received from a plurality of different sound scenes are jointly generated. The joint generating includes learning information as part of generating a first said model of sound data from a first one of the sound scenes and sharing the learned information for use in generating a second one of the models of sound data from a second one of the sound scenes.
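The toy sketch below only gestures at the joint-generation idea: a statistic learned while modeling the first sound scene (here, per-band means of a spectrogram) is shared as the starting point for the second scene's model. The modeling itself is a placeholder, not the described technique.

import numpy as np

def learn_model(spectrogram, shared_init=None):
    basis = shared_init if shared_init is not None else spectrogram.mean(axis=1)
    residual = spectrogram - basis[:, None]
    return {"basis": basis, "residual_energy": float((residual ** 2).mean())}

scene_a = np.abs(np.random.randn(64, 200))                    # stand-in spectrograms
scene_b = np.abs(np.random.randn(64, 180))
model_a = learn_model(scene_a)                                # learn on the first scene
model_b = learn_model(scene_b, shared_init=model_a["basis"])  # share the learned information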
Abstract:
A method of presenting music to a user of an electronic device comprises the step of providing score data representing the musical score of the piece of music in a graphical representation, audio data representing a recording of the piece of music, and music data representing one or more parts of the piece of music in a digital format such as MIDI or MusicXML. The music data representing a part of the music that has been selected by the user is transformed into part sound signals using a sound generator. The part sound signals and audio sound signals are merged so as to obtain a merged sound signal in which the piece of music as represented by the music data and as represented by the audio data is synchronized. Finally, and simultaneously, the sound of the piece of music is played audibly using the merged sound signal, the musical score is displayed on a display, and the sub-portion of the musical score corresponding to the passage of the piece of music that is presently audible is highlighted on the display.
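To illustrate the highlighting step only, the sketch below maps the current playback position in the merged sound signal to the measure of the score that should be highlighted; the measure timings and function names are illustrative assumptions.

def measure_to_highlight(measure_starts_s, position_s):
    # measure_starts_s: start time of each measure within the merged audio, ascending.
    current = 0
    for i, start in enumerate(measure_starts_s):
        if position_s >= start:
            current = i
    return current

starts = [0.0, 2.1, 4.3, 6.4]                          # measure onsets after score/audio alignment
print(measure_to_highlight(starts, position_s=5.0))    # -> 2, i.e. highlight the third measure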
Abstract:
Synthetic multi-string musical instruments have been developed for capturing and rendering musical performances on handheld or other portable devices in which a multi-touch sensitive display provides one of the input vectors for an expressive performance by a user or musician. Visual cues may be provided on the multi-touch sensitive display to guide the user in a performance based on a musical score. Alternatively, or in addition, uncued freestyle modes of operation may be provided. In either case, it is not the musical score that drives digital synthesis and audible rendering of the synthetic multi-string musical instrument. Rather, it is the stream of user gestures captured at least in part using the multi-touch sensitive display that drives the digital synthesis and audible rendering.
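A thumbnail sketch of gesture-driven (rather than score-driven) synthesis: each touch gesture selects a string and a position along it, and that gesture alone determines the synthesized tone. The tuning, fret mapping, and plain sine rendering are simplifying assumptions.

import math

OPEN_STRING_HZ = [82.4, 110.0, 146.8, 196.0, 246.9, 329.6]   # standard guitar tuning, low E to high E

def gesture_to_tone(string_index, fret, duration_s=0.5, sample_rate=44100):
    freq = OPEN_STRING_HZ[string_index] * (2 ** (fret / 12.0))   # the gesture, not the score, picks the pitch
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * t / sample_rate) for t in range(n)]

samples = gesture_to_tone(string_index=2, fret=5)     # D string, 5th position -> ~196 Hz (G)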
Abstract:
In exemplary embodiments of the present invention, systems and methods are provided to implement and facilitate cross-fading, interstitials, and other effects/processing of two or more media elements in a personalized media delivery service so that each client or user has a consistent, high-quality experience. The effects or crossfade processing can occur on the broadcast, publisher, or server side, yet still be personalized to a specific user, allowing a personalized experience for each individual user while minimizing the processing burden on the downstream side or client device. This approach enables a consistent user experience, independent of client device capabilities, both static and dynamic. The cross-fade can be implemented after decoding the relevant chunks of each component clip, processing, re-encoding, and re-chunking, or, in a preferred embodiment, the cross-fade or other effect can be implemented on the chunks relevant to the effect in the compressed domain, thus obviating any loss of quality from re-encoding. A large-scale personalized content delivery service can be implemented by limiting the processing to essentially the first and last chunks of any file, since there is no need to process the full clip. In exemplary embodiments of the present invention, this type of processing can easily be accommodated in cloud computing technology, where the first and last files may be conveniently extracted and processed within the cloud to meet the required load. Processing may also be done locally, for example by the broadcaster, with sufficient processing power to manage peak load.
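The schematic sketch below shows only the chunk-limited aspect: a cross-fade is computed across the last chunk of the outgoing clip and the first chunk of the incoming clip, leaving every other chunk untouched. It operates on already-decoded sample chunks; the compressed-domain and cloud-side variants described above are not represented.

def crossfade_chunks(outgoing_last, incoming_first):
    n = min(len(outgoing_last), len(incoming_first))
    return [outgoing_last[i] * (1 - i / n) + incoming_first[i] * (i / n)
            for i in range(n)]

faded = crossfade_chunks([1.0] * 1000, [0.5] * 1000)
# The untouched body chunks of both clips are streamed as-is around `faded`.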