Abstract:
Joint sound model generation techniques are described. In one or more implementations, a plurality of models of sound data received from a plurality of different sound scenes are jointly generated. The joint generation includes learning information as part of generating a first of the models of sound data from a first one of the sound scenes, and sharing the learned information for use in generating a second of the models of sound data from a second one of the sound scenes.
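The abstract leaves the model family unspecified; the following is a minimal Python sketch, assuming the models are nonnegative spectral bases learned by NMF and that the shared learned information is the first scene's bases, used to seed the second scene's model. All data and names are illustrative.

```python
import numpy as np

def learn_bases(spectrogram, n_bases, init=None, n_iter=100, rng=None):
    """Learn nonnegative spectral bases W and activations H for one scene.

    `init` optionally seeds W with bases learned from another scene,
    which is one plausible form of the "shared learned information".
    """
    if rng is None:
        rng = np.random.default_rng(0)
    F, T = spectrogram.shape
    W = init.copy() if init is not None else rng.random((F, n_bases))
    H = rng.random((n_bases, T))
    eps = 1e-9
    for _ in range(n_iter):  # standard multiplicative NMF updates
        H *= (W.T @ spectrogram) / (W.T @ W @ H + eps)
        W *= (spectrogram @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Jointly generate two scene models: scene B reuses scene A's bases.
rng = np.random.default_rng(1)
scene_a = np.abs(rng.standard_normal((64, 200)))
scene_b = np.abs(rng.standard_normal((64, 150)))
W_a, _ = learn_bases(scene_a, n_bases=8)
W_b, _ = learn_bases(scene_b, n_bases=8, init=W_a)  # shared information
```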
Abstract:
The apparatus displays, on a display screen, an icon placement region having a time axis defined therein and, in accordance with an input instruction, displays an icon image with which feature amount information of material data indicative of a sound material is associated. The apparatus further sets, along the time axis and in accordance with an input instruction, a type of feature amount database in which material data and feature amount information of the material data are associated with each other. The apparatus then references the feature amount database of the type having a correspondence relationship, on the time axis, with an icon image to identify material data similar to the feature amount information associated with the icon image, and audibly generates a sound at a timing corresponding to the position of the icon image on the time axis and with content corresponding to the identified material data.
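A minimal sketch of the lookup step, assuming feature amounts are plain vectors compared by Euclidean distance; the database names, feature dimensions, and time ranges below are illustrative, not taken from the patent.

```python
import numpy as np

# Hypothetical feature amount databases: each maps material names to
# feature vectors; a database type is assigned per time range.
DATABASES = {
    "drums":  {"kick": np.array([0.9, 0.1]), "snare": np.array([0.4, 0.8])},
    "synths": {"pad":  np.array([0.2, 0.3]), "lead":  np.array([0.7, 0.6])},
}

def nearest_material(db, feature):
    """Identify the material whose stored features best match the icon's."""
    return min(db, key=lambda name: np.linalg.norm(db[name] - feature))

# Icons placed on the time axis: (time in beats, associated feature vector).
icons = [(0.0, np.array([0.85, 0.15])), (2.0, np.array([0.25, 0.35]))]
db_by_range = [((0.0, 1.9), "drums"), ((2.0, 4.0), "synths")]

for t, feat in icons:
    db_type = next(name for (lo, hi), name in db_by_range if lo <= t <= hi)
    material = nearest_material(DATABASES[db_type], feat)
    print(f"t={t}: play '{material}' from the {db_type} database")
```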
Abstract:
A method performed by one or more processing devices includes receiving information indicative of an input chord progression, with the input chord progression comprising a plurality of chords; identifying chord changes in the plurality of chords; identifying, based on the chord changes, moving tones in the input chord progression; selecting, from the moving tones, guide tones that provide an outline of a harmony to be used in generating a synthesized melody; generating, based on the selected guide tones and one or more interpolation operations, interpolation tones for interpolation among the guide tones; and generating, based on interpolation of the interpolation tones with the guide tones, the synthesized melody.
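The pipeline in this abstract (chord changes, moving tones, guide tones, interpolation, melody) can be illustrated with a small sketch. The chord encoding, the guide-tone choice, and the chromatic interpolation rule below are simplifying assumptions, not the patented method.

```python
# Chords as MIDI pitch sets (a hypothetical encoding; the abstract does
# not fix a representation). ii-V-I in C: Dm7, G7, Cmaj7.
progression = [
    {62, 65, 69, 72},  # D F A C
    {55, 59, 62, 65},  # G B D F
    {60, 64, 67, 71},  # C E G B
]

def moving_tones(prog):
    """Tones that change at each chord change (not held in common)."""
    return [sorted(b - a) for a, b in zip(prog, prog[1:])]

def pick_guide_tones(prog):
    """One guide tone per chord; taking the third tone from the bottom is
    a stand-in for the harmony-outlining selection in the abstract."""
    return [sorted(chord)[2] for chord in prog]

def interpolate(guides, steps=2):
    """Insert passing tones between consecutive guide tones."""
    melody = []
    for a, b in zip(guides, guides[1:]):
        melody.append(a)
        for i in range(1, steps + 1):
            melody.append(round(a + (b - a) * i / (steps + 1)))
    melody.append(guides[-1])
    return melody

print(moving_tones(progression))
print(interpolate(pick_guide_tones(progression)))  # the synthesized melody
```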
Abstract:
A system and method for enhancing audio, the method including: receiving audio input tracks, at least one of which includes a restriction parameter; determining a restricted audio input track, the restricted audio input track being the audio input track that includes the restriction parameter; manipulating another audio input track based on musical properties of the restricted audio input track; and combining the restricted audio input track and the manipulated audio input track into a single output audio track.
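A minimal sketch of the restriction logic, assuming tempo is the musical property driving the manipulation; the Track fields and the time-stretch rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str
    tempo: float              # BPM
    gain: float               # linear gain
    restricted: bool = False  # carries the restriction parameter

def enhance(tracks):
    """Align unrestricted tracks to the restricted track's tempo (one
    hypothetical musical property), then combine into one output."""
    anchor = next(t for t in tracks if t.restricted)
    for t in tracks:
        if not t.restricted:
            stretch = anchor.tempo / t.tempo  # time-stretch ratio
            t.tempo = anchor.tempo
            print(f"stretch {t.name} by {stretch:.2f} to {anchor.tempo} BPM")
    return tracks  # a real system would sum the aligned audio here

enhance([Track("vocals", 120, 1.0, restricted=True), Track("drums", 124, 0.8)])
```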
Abstract:
Provided are methods, systems, and computer-readable media for providing interaction between remote players and one or more local players of a rhythm-action game executed on a game platform. One or more local players are identified to participate in a networked session of a rhythm-action game corresponding to a predetermined band template, each local player associated with a type of simulated musical instrument. A first type of simulated musical instrument, represented in the predetermined band template but not associated with any of the one or more local players, may then be identified, along with a remote player associated with the first type of simulated musical instrument. The game platforms of the local and remote players then communicate to establish a networked session of the rhythm-action game with the one or more local players and the identified remote player before initiating a game session in which the players play the game.
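The matchmaking step can be sketched as simple set arithmetic over the band template; the player names and instrument types below are invented for illustration.

```python
band_template = {"guitar", "bass", "drums", "vocals"}
local_players = {"alice": "guitar", "bob": "drums"}
remote_pool = {"carol": "bass", "dave": "vocals", "erin": "guitar"}

# Instrument types in the template not associated with any local player.
open_parts = band_template - set(local_players.values())

# Match one remote player to each open part before starting the session.
session = dict(local_players)
for part in sorted(open_parts):
    match = next((p for p, inst in remote_pool.items() if inst == part), None)
    if match:
        session[match] = part
print(session)  # networked-session roster: locals plus matched remotes
```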
Abstract:
The free-space gesture MIDI controller technique described herein marries the technology embodied in a free-space gesture controller with MIDI controller technology, allowing a user to control a virtually unlimited variety of electronic musical instruments through body gesture and pose. One embodiment of the technique uses the human body gesture recognition capability of a free-space gesture control system and translates human gestures into musical actions. Rather than directly connecting a specific musical instrument to the free-space gesture controller, the technique generalizes its capability and instead outputs standard MIDI signals, thereby allowing the free-space gesture control system to control any MIDI-capable instrument.
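A minimal sketch of the gesture-to-MIDI translation, building raw MIDI note-on and control-change bytes; the particular gesture features (hand height and speed) and their mappings are illustrative assumptions, not the technique's actual mapping.

```python
def note_on(channel, pitch, velocity):
    """Raw 3-byte MIDI note-on message (status 0x90 | channel)."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def control_change(channel, controller, value):
    """Raw 3-byte MIDI control-change message (status 0xB0 | channel)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def gesture_to_midi(hand_height, hand_speed):
    """Map normalized gesture features (0..1) to MIDI: height picks the
    pitch; speed drives the strike velocity and a modulation CC."""
    pitch = 48 + int(hand_height * 24)          # C3..C5 range
    velocity = 32 + int(hand_speed * 95)
    return [note_on(0, pitch, velocity),
            control_change(0, 1, int(hand_speed * 127))]  # mod wheel

for msg in gesture_to_midi(hand_height=0.75, hand_speed=0.4):
    print(msg.hex(" "))
```

Because the output is standard MIDI bytes rather than instrument-specific commands, any MIDI-capable instrument can consume it, which is the generalization the abstract describes.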
Abstract:
A method includes sending a first message to a conference bridge from a first device. The first message includes an audio signal, a video signal, a musical instrument digital interface signal, or combinations thereof. The conference bridge establishes a voice over internet protocol call between the first device, a second device, and a mix controller external to the conference bridge. The method includes receiving a second message from the conference bridge at the first device. The second message comprises a mixed audio signal produced by the conference bridge from the first message and a third message received by the conference bridge from the second device. The mix controller sets a mixing level for each audio signal used to produce the mixed audio signal. The method also includes processing the second message via the first device to generate an output.
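The bridge's mixing step can be sketched as a per-source weighted sum, with the weights standing in for the levels set by the external mix controller; the device names and signals are illustrative.

```python
import numpy as np

def bridge_mix(signals, levels):
    """Mix per-device audio with per-signal levels set by an external
    mix controller, as in the abstract (names are illustrative)."""
    mixed = sum(levels[dev] * sig for dev, sig in signals.items())
    return np.clip(mixed, -1.0, 1.0)  # avoid overload in the mixed message

t = np.linspace(0, 1, 8000)
signals = {                      # audio from the "first" and "third" messages
    "device_1": np.sin(2 * np.pi * 440 * t),
    "device_2": np.sin(2 * np.pi * 660 * t),
}
levels = {"device_1": 0.7, "device_2": 0.3}  # mix-controller settings
mixed = bridge_mix(signals, levels)          # the bridge's "second message"
print(mixed.shape, float(mixed.max()))
```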
Abstract:
A system and method including an agent for selecting at least two songs among simultaneously streaming songs based on user information, and for inserting additional content during the interval between the end of the earlier song and the start of the later song.
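A minimal sketch of the agent's selection-and-insertion logic, assuming the "user information" reduces to a set of preferred tags; the scoring rule and song fields are illustrative assumptions.

```python
def schedule_with_insert(songs, insert, user_prefs):
    """Pick the two songs that best match user preferences, then place
    `insert` in the gap between the earlier song's end and the later start."""
    def score(s):
        return len(set(s["tags"]) & user_prefs)
    a, b = sorted(songs, key=score, reverse=True)[:2]
    first, second = sorted((a, b), key=lambda s: s["start"])
    gap = (first["start"] + first["length"], second["start"])
    return [first["title"], (insert, gap), second["title"]]

songs = [
    {"title": "Song A", "start": 0.0,   "length": 180.0, "tags": ["jazz"]},
    {"title": "Song B", "start": 200.0, "length": 210.0, "tags": ["jazz", "live"]},
    {"title": "Song C", "start": 150.0, "length": 160.0, "tags": ["metal"]},
]
print(schedule_with_insert(songs, "station ident", {"jazz", "live"}))
```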
Abstract:
Sharing of a music experience amongst a group of people, each using a personal communication device, is described. In some cases the group can congregate at the same geographic location; alternatively, at least some of the group can be located at widely dispersed locations and yet still share a music experience. Information can be passed between the personal communication devices using point-to-point wireless communication, a distributed network of computers such as the Internet, a wireless cellular communication network, and so on. The information can include an indication of a shared music characteristic. Each personal communication device can use the shared music characteristic to identify the stored music items whose characteristics match, or most closely match, the shared music characteristic, and to begin playing them privately at about the same time.
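A minimal sketch of the matching step, assuming the shared characteristic is a small set of numeric attributes compared by a weighted distance; the attributes and weights are illustrative assumptions.

```python
# A shared characteristic broadcast to every device (values illustrative).
shared = {"tempo": 120, "energy": 0.8}

def closest_item(library, shared):
    """Pick the locally stored item whose characteristics best match
    the shared ones (simple weighted distance; weights are assumed)."""
    def dist(item):
        return (abs(item["tempo"] - shared["tempo"]) / 60
                + abs(item["energy"] - shared["energy"]))
    return min(library, key=dist)

device_a = [{"title": "Track 1", "tempo": 118, "energy": 0.7},
            {"title": "Track 2", "tempo": 90,  "energy": 0.4}]
device_b = [{"title": "Track 9", "tempo": 122, "energy": 0.9}]

# Each device resolves the match locally, then starts playback together.
for lib in (device_a, device_b):
    print(closest_item(lib, shared)["title"])
```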
Abstract:
A method of generating a stream comprising synchronized interactive content is described. The method comprises the steps of: transmitting a first stream from a studio to a terminal or a terminal system of a first user and to a buffer; transmitting, in response to the first stream, a second stream to a mixer connected to the buffer, the second stream comprising content generated in reaction to the content of the first stream; providing the mixer with a temporal relation between the packets in the first and second streams; and generating a first output stream comprising substantially synchronized content by mixing packets in the second stream with packets of the buffered first stream on the basis of the temporal relation.
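The synchronization step can be sketched as a timestamp join between the buffered first stream and the second stream, assuming the "temporal relation" is a one-to-one timestamp mapping; the packet contents are illustrative.

```python
from collections import deque

# Packets as (timestamp, payload); the buffer holds the first stream
# while the round trip to the first user produces the second stream.
first_stream = deque([(t, f"studio-{t}") for t in range(5)])
second_stream = [(t, f"reaction-{t}") for t in range(5)]

def mix(buffered, reactions):
    """Join packets by the temporal relation (here, equal timestamps)
    to produce a substantially synchronized output stream."""
    index = {ts: payload for ts, payload in buffered}
    for ts, reaction in reactions:
        if ts in index:
            yield ts, (index[ts], reaction)  # synchronized output packet

for ts, pair in mix(first_stream, second_stream):
    print(ts, pair)
```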