Abstract:
Digital signal processing and machine learning techniques can be employed in a vocal capture and performance social network to computationally generate vocal pitch tracks from a collection of vocal performances captured against a common temporal baseline, such as a backing track or an original performance by a popularizing artist. In this way, crowd-sourced pitch tracks may be generated and distributed for use in subsequent karaoke-style vocal audio captures or other applications. Large numbers of performances of a song can be used to generate a pitch track. Computationally determined pitch tracks from individual audio signal encodings of the crowd-sourced vocal performance set are aggregated and processed as an observation sequence of a trained Hidden Markov Model (HMM) or other statistical model to produce an output pitch track.
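As a concrete illustration of the aggregation and decoding steps (not the patented method itself), the sketch below quantizes each performance's per-frame pitch estimate to a semitone bin, takes the per-frame modal note across performances as the observation sequence, and Viterbi-decodes it under a trained HMM; the semitone quantization, the modal voting, and the assumption that trained log-domain HMM parameters (log_A, log_B, log_pi) are already available are all illustrative choices.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Standard Viterbi decode: the most likely hidden-state path for a
    discrete observation sequence under an HMM with log-domain params."""
    n_states, T = log_A.shape[0], len(obs)
    delta = np.zeros((T, n_states))           # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def aggregate_observations(tracks_hz, lowest_midi=36, n_notes=48):
    """Quantize per-performance f0 tracks (rows: performances, cols:
    frames, values: Hz; voiced frames assumed) to semitone bins and
    take the modal note per frame as the observation."""
    midi = 69 + 12 * np.log2(np.asarray(tracks_hz) / 440.0)
    bins = np.clip(np.round(midi).astype(int) - lowest_midi, 0, n_notes - 1)
    return np.array([np.bincount(col, minlength=n_notes).argmax()
                     for col in bins.T])
```

The decoded state path can then be mapped back to note numbers (state index plus lowest_midi) to form the output pitch track.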
Abstract:
Methods and systems are described that are utilized for remotely controlling a musical instrument. A first digital record comprising musical instrument digital commands from a first electronic instrument for a first item of music is accessed. The first digital record is transmitted over a network using a network interface to a remote, second electronic instrument for playback to a first user. Optionally, video data is streamed to a display device of a user while the first digital record is played back by the second electronic instrument. A key change command is transmitted over the network using the network interface to the second electronic instrument to cause the second electronic instrument to play back the first digital record for the first item of music in accordance with the key change command. The key change command may be transmitted during the streaming of the video data.
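As a rough sketch of how the receiving instrument might honor a key change during playback, the snippet below transposes note events by a semitone offset before rendering them; it uses the real mido library for MIDI handling, while the network transport, the command format, and the file name are assumptions.

```python
import mido

def apply_key_change(messages, semitones):
    """Transpose note events by the key-change offset; all other
    messages (control changes, etc.) pass through untouched."""
    for msg in messages:
        if msg.type in ('note_on', 'note_off'):
            yield msg.copy(note=max(0, min(127, msg.note + semitones)))
        else:
            yield msg

# Play a received record (hypothetical file name) a minor third higher.
record = mido.MidiFile('received_record.mid')
with mido.open_output() as port:  # default system MIDI output
    for msg in apply_key_change(record.play(), semitones=3):
        port.send(msg)
```

MidiFile.play() yields messages in real time, so the transposition happens on the fly and a newly received key-change command could simply swap the offset between messages.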
Abstract:
A method of generating music content from input music content that includes developing models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in a sound repository. The models, in the form of source code, are then sent to a melody generator. First, the generator is configured with specific parameters using a controller conforming to MIDI standards and supplemented with composition characteristics read from the user preference database. Next, the content is generated automatically using artificial-intelligence algorithms, producing a digital score of the composition with the desired characteristics. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music record. Finally, the composition and its record are verified by a critic module using neural-network-based algorithms.
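The generate-then-verify loop can be pictured with a toy sketch: a stand-in generator proposes melodies and a stand-in critic scores them. Both the random-walk generator and the hand-written scoring rule below are placeholders for the patent's AI-based generation and neural-network critic.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes, one octave

def generate_melody(length=16, seed=None):
    """Toy generator: a constrained random walk on a scale."""
    rng = random.Random(seed)
    idx, melody = rng.randrange(len(C_MAJOR)), []
    for _ in range(length):
        idx = max(0, min(len(C_MAJOR) - 1, idx + rng.choice([-2, -1, 0, 1, 2])))
        melody.append(C_MAJOR[idx])
    return melody

def critic_score(melody):
    """Toy critic: rewards stepwise motion and a tonic ending."""
    steps = sum(abs(a - b) <= 2 for a, b in zip(melody, melody[1:]))
    return steps / (len(melody) - 1) + (1.0 if melody[-1] % 12 == 0 else 0.0)

# Generate candidates; keep the composition the critic rates highest.
best = max((generate_melody(seed=s) for s in range(32)), key=critic_score)
```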
Abstract:
A music application guides a user with some musical experience through the steps of creating and editing a musical enhancement file that enhances, and plays in synchronicity with, an audio signal of an original artist's recorded performance. This enables others, perhaps with less musical ability than the original artist, to play along with the original artist by following melodic, chordal, rhythmic, and verbal prompts. The music application accounts for differences between the timing of the performance and a standard tempo by guiding the user through the process of creating a tempo map for the performance and by associating the tempo map with the MIDI information of the enhancement file. Enhancements may contain MIDI information, audio signal information, and/or video signal information, which may be played back in synchronicity with the recorded performance to provide an aural and visual aid to others playing along who may have less musical experience.
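A tempo map can be thought of as a list of (tick, tempo) change points against which MIDI tick positions are converted to wall-clock time, so enhancement events line up with the recording even as its tempo drifts. The sketch below assumes that representation; the change points and pulses-per-quarter value are made up for illustration.

```python
# Tempo map: (tick, microseconds per quarter note) change points,
# e.g. derived from tapping along with the recorded performance.
TEMPO_MAP = [(0, 500_000), (1920, 517_000), (3840, 492_000)]
PPQ = 480  # MIDI pulses (ticks) per quarter note

def tick_to_seconds(tick, tempo_map=TEMPO_MAP, ppq=PPQ):
    """Convert a MIDI tick to seconds by accumulating elapsed time
    across each tempo segment up to the requested tick."""
    seconds, prev_tick, prev_tempo = 0.0, 0, tempo_map[0][1]
    for t, tempo in tempo_map:
        if t >= tick:
            break
        seconds += (t - prev_tick) * prev_tempo / (ppq * 1e6)
        prev_tick, prev_tempo = t, tempo
    return seconds + (tick - prev_tick) * prev_tempo / (ppq * 1e6)
```

Scheduling each enhancement event at tick_to_seconds(event_tick) keeps MIDI prompts aligned with the performance rather than with a fixed metronome.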
Abstract:
A method of presenting music to a user of an electronic device comprises the steps of providing score data representing the musical score of a piece of music in a graphical representation, audio data representing a recording of the piece of music, and music data representing one or more parts of the piece of music in a digital format such as MIDI or MusicXML. The music data representing a part of the music that has been selected by the user is transformed into part sound signals using a sound generator. The part sound signals and the audio sound signals are merged so as to obtain a merged sound signal in which the piece of music as represented by the music data and by the audio data are synchronized. Finally, and simultaneously, the sound of the piece of music is played audibly using the merged sound signal, the musical score is displayed on a display, and the sub-portion of the musical score corresponding to the passage of the piece of music that is presently audible is highlighted on the display.
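A minimal sketch of the merge step might synthesize the selected part from its note data and mix it with the recording at a shared sample rate. The sine-tone synthesis and simple additive mix below are assumptions standing in for the actual sound generator, and time alignment is taken as already established.

```python
import numpy as np

SR = 44_100  # shared sample rate for both signals

def synthesize_part(notes, sr=SR):
    """Render (midi_note, start_s, duration_s) tuples as sine tones,
    a stand-in for the device's sound generator."""
    total = max(start + dur for _, start, dur in notes)
    out = np.zeros(int(total * sr))
    for midi, start, dur in notes:
        freq = 440.0 * 2 ** ((midi - 69) / 12)
        t = np.arange(int(dur * sr)) / sr
        i = int(start * sr)
        out[i:i + len(t)] += 0.3 * np.sin(2 * np.pi * freq * t)
    return out

def merge(audio, part, part_gain=0.8):
    """Mix the synthesized part into the recording, assuming both are
    already synchronized and sampled at the same rate."""
    mix = np.zeros(max(len(audio), len(part)))
    mix[:len(audio)] += audio
    mix[:len(part)] += part_gain * part
    return np.clip(mix, -1.0, 1.0)
```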
Abstract:
Embodiments of the present invention provide a tonal analysis method including the steps of: parsing the notes of a musical score to generate a time-ordered plurality of sonorities; confirming a plurality of tonal centers, each having a tone; evaluating the chord of a sonority against a confirmed tonal center to determine whether the chord is a functional symbol of that tonal center; and identifying a tonally stable region of the musical score for a confirmed tonal center, then treating the chord of each sonority in the tonally stable region as a functional symbol of that tonal center. Embodiments of the present invention also provide a non-transitory computer-readable medium that stores the output of the tonal analysis method as sonority data structures associated with theory line entry data structures.
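To make the sonority and functional-symbol notions concrete, here is a simplified sketch (not the patent's analysis): notes are grouped by onset into time-ordered pitch-class sets, and a sonority counts as a functional symbol of a tonal center when its chord fits one of that center's diatonic triads. Restricting "functional" to major-scale triads is an illustrative simplification.

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone steps above the tonic

def sonorities(notes):
    """Group (onset, midi_pitch) events into a time-ordered list of
    sonorities: the pitch-class set sounding at each distinct onset."""
    by_onset = {}
    for onset, pitch in notes:
        by_onset.setdefault(onset, set()).add(pitch % 12)
    return [by_onset[t] for t in sorted(by_onset)]

def diatonic_triads(tonic):
    """The seven triads built on the major scale of a tonal center."""
    scale = [(tonic + step) % 12 for step in MAJOR_SCALE]
    return [frozenset(scale[i % 7] for i in (d, d + 2, d + 4))
            for d in range(7)]

def is_functional(sonority, tonic):
    """Is the sonority's chord contained in some triad of the center?"""
    return any(sonority <= triad for triad in diatonic_triads(tonic))
```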
Abstract:
The degree of difficulty of a musical piece is evaluated comprehensively, taking into account the expression and dynamics of the piece as well. A segment evaluator evaluates a degree of segment difficulty, which indicates the difficulty of performing a segment (for example, a predetermined number of measures) that forms part of the musical piece. The segment evaluator divides the musical piece into a plurality of segments and evaluates the degree of segment difficulty for each segment in accordance with a predetermined algorithm. A musical piece evaluator evaluates a degree of musical piece difficulty, which indicates the difficulty of performing the entire musical piece. The musical piece evaluator evaluates the degree of musical piece difficulty based on how the degree of segment difficulty changes within the musical piece.
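The two evaluators can be sketched with toy formulas: note density standing in for the predetermined segment algorithm, and a piece-level score built from the peak segment difficulty plus the average swing between consecutive segments. Both formulas are assumptions, not the patent's.

```python
import statistics

def segment_difficulty(segment_notes, measures=4):
    """Toy segment evaluator: difficulty as note density over a
    fixed-size segment (e.g. a predetermined number of measures)."""
    return len(segment_notes) / measures

def piece_difficulty(segments):
    """Toy musical-piece evaluator: combine the hardest segment with
    how much difficulty changes from segment to segment."""
    scores = [segment_difficulty(s) for s in segments]
    swings = [abs(a - b) for a, b in zip(scores, scores[1:])]
    return max(scores) + statistics.mean(swings or [0.0])
```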
Abstract:
A computer system and method for generating event distribution information using an audio file and controlling events corresponding to user inputs based on the event distribution information. The computer-implemented method includes: extracting a predefined number of event-triggering times from the event distribution information, wherein the event distribution information is associated with an audio file currently played on the computer and the event-triggering times are arranged in sequential order; determining the current play time of the audio file; and controlling event locations corresponding to user inputs on the display of the computer based on a comparison of the current play time with the extracted event-triggering times.
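As a sketch of the comparison step, the snippet below pulls the next few trigger times out of precomputed event distribution information and maps each one's gap to the current play time onto a screen position (a falling-note layout); the data layout, the two-second fall time, and the lane height are assumptions.

```python
import bisect

FALL_TIME = 2.0  # seconds an event is on screen before its trigger time

def upcoming_events(trigger_times, now, count=8):
    """Extract the next `count` event-triggering times at or after the
    current play time; `trigger_times` is sorted ascending."""
    i = bisect.bisect_left(trigger_times, now)
    return trigger_times[i:i + count]

def event_position(trigger_time, now, lane_height=600):
    """Map time-until-trigger to a vertical pixel position so the
    event reaches the hit line (y = lane_height) at its trigger time."""
    progress = 1.0 - (trigger_time - now) / FALL_TIME
    return int(max(0.0, min(1.0, progress)) * lane_height)
```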
Abstract:
The system provides voice call services to a plurality of users using at least one music engine. The music engine generates music in one of a plurality of musical styles. In response to one or more first commands entered by a first user, music of a particular style is selected by the first user, and the at least one music engine is controlled so that it generates music of the particular style selected by the first user.
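One way to picture the style control is a parameter table per style that the engine consults while generating; the parameter set (tempo and scale) and the command handling below are assumptions for illustration.

```python
import random

STYLES = {
    "jazz":      {"tempo_bpm": 140, "scale": [0, 2, 3, 5, 7, 9, 10]},  # dorian
    "classical": {"tempo_bpm": 100, "scale": [0, 2, 4, 5, 7, 9, 11]},  # major
}

class MusicEngine:
    """Toy music engine: the style selected by the user's command
    determines the parameters driving generation during the call."""
    def __init__(self, style="classical"):
        self.set_style(style)

    def set_style(self, style):  # invoked when a user command arrives
        self.params = STYLES[style]

    def next_note(self, root=60):
        """Emit one (midi_note, duration_s) pair in the current style."""
        note = root + random.choice(self.params["scale"])
        return note, 60.0 / self.params["tempo_bpm"]
```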