Abstract:
The present disclosure provides a generation device having a plurality of operators that receive user operations that cause generation of a sound. The generation device includes: at least one first operator arranged in a first region and configured to receive a first user operation that causes generation of a rhythm sound signal; at least one second operator arranged in a second region and configured to receive a second user operation that causes generation of a melody sound signal; and at least one third operator arranged in a third region and configured to receive a third user operation that causes a sound effect to be applied to a synthesized sound signal of the generated rhythm sound signal and the generated melody sound signal. The first region, the second region, and the third region are regions of the generation device that are distinct from one another.
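The signal flow this abstract describes can be sketched briefly: the rhythm and melody signals are synthesized (mixed) into one signal, and the effect triggered by the third operator acts on that mixed signal. The function names, the simple additive mix, and the gain stand-in for "sound effect" are all illustrative assumptions, not details from the patent.

```python
# Illustrative sketch of the described signal flow; names and the
# gain-based "effect" are hypothetical stand-ins.

def mix(rhythm, melody):
    """Synthesize the rhythm and melody signals sample by sample."""
    return [r + m for r, m in zip(rhythm, melody)]

def apply_effect(signal, gain):
    """Stand-in sound effect: a simple gain applied to the mixed signal."""
    return [gain * s for s in signal]

rhythm = [1, 0, 1, 0]
melody = [0, 2, 0, 2]
# The effect operator acts on the synthesized (mixed) signal, not on
# either source signal alone.
out = apply_effect(mix(rhythm, melody), gain=0.5)
# -> [0.5, 1.0, 0.5, 1.0]
```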
Abstract:
A signal processing device includes an electronic controller including at least one processor. The electronic controller is configured to execute a reception unit, a generation unit, and a processing unit. The reception unit is configured to receive first time-series data that include sound data, and second time-series data that are generated based on the first time-series data and that include at least data indicating a timing of a human action. The generation unit is configured to generate, based on the second time-series data, third time-series data that provide notification of the timing of the human action. The processing unit is configured to synchronize and output an output signal based on the first time-series data and an output signal based on the third time-series data, such that the timing of the human action for the first time-series data and the timing of the human action for the third time-series data match.
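The synchronization step can be sketched as follows: given the sound data and a notification track, pad whichever track starts earlier so that the action timing lands at the same output position in both. This is a minimal sketch under assumed names; the patent does not specify this padding mechanism.

```python
# Hypothetical sketch of aligning two output signals so that the
# human-action timing in each output coincides.

def synchronize(sound, notification, sound_action_idx, notif_action_idx, fill=0):
    """Pad the earlier-starting track so both action timings coincide."""
    offset = sound_action_idx - notif_action_idx
    if offset > 0:
        # The notification track fires too early: delay it.
        notification = [fill] * offset + notification
    elif offset < 0:
        # The sound track fires too early: delay it instead.
        sound = [fill] * (-offset) + sound
    return sound, notification

sound = [0, 0, 0, 9, 0]   # action cue at index 3
notif = [7, 0]            # notification at index 0
s, n = synchronize(sound, notif, 3, 0)
# both outputs now place the action at the same index
```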
Abstract:
In one embodiment, a method for the enhanced display of the hands of a pianist playing a piano is disclosed, including: (a) recording at least one video stream of at least one hand of a pianist playing a piano; and (b) while the pianist is playing the piano, using at least a portion of the piano as a display for displaying the video stream.
Abstract:
Vocal audio of a user together with performance synchronized video is captured and coordinated with audiovisual contributions of other users to form composite duet-style or glee club-style or window-paned music video-style audiovisual performances. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors. Selections are in accord with a visual progression that codes a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.
Abstract:
The present disclosure provides a system and method for representing music in three dimensions using contexts based on tonal centers, to form three-dimensional geometric shapes. The musical notation method described herein is easy to understand and visualize. The method is based on three-dimensional structures which may represent contexts. The contexts may be formed by combining diminished and augmented scales shown as symmetrical three-dimensional geometric shapes. These symmetrical geometric shapes may be formed from a plurality of polygons, which may include polygons composed of a set of related notes from a diminished or augmented scale, together forming a looped harmonic polygon. Each note in a respective scale is placed at a vertex of a harmonic polygon, wherein the vertices of the harmonic polygons are selected from notes in a twelve-note chromatic scale.
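The vertex placement can be illustrated in two dimensions: notes built from equal semitone steps out of the twelve-note chromatic scale land on the vertices of a regular polygon around the chromatic circle (an augmented triad gives a triangle, a diminished seventh a square). This is only an illustrative sketch of the symmetric-division idea, not the patent's actual three-dimensional geometry, and all names are assumptions.

```python
# Illustrative sketch: notes spaced by equal semitone steps placed at
# the vertices of a regular polygon on the chromatic circle.
import math

CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

def harmonic_polygon(root, step):
    """Return (note, (x, y)) vertices for notes `step` semitones apart."""
    count = 12 // step  # step=4 -> 3 vertices (augmented triad),
                        # step=3 -> 4 vertices (diminished seventh)
    vertices = []
    for k in range(count):
        pitch = (root + k * step) % 12
        angle = 2 * math.pi * pitch / 12
        vertices.append((CHROMATIC[pitch], (math.cos(angle), math.sin(angle))))
    return vertices

# Augmented triad on C: C, E, G# form an equilateral triangle.
print([name for name, _ in harmonic_polygon(0, 4)])  # ['C', 'E', 'G#']
```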
Abstract:
An electronic apparatus includes a plurality of pads, each of the plurality of pads including a touch sensor and an acceleration sensor; a sound output interface configured to output sounds that are set to the respective pads; a display configured to display visual feedback; and a processor. The processor is configured to, in response to the touch sensor in a pad among the plurality of pads detecting a touch of the pad and the acceleration sensor in the pad detecting an intensity of the touch that is greater than or equal to a threshold value: determine that a beat is performed on the pad; control the sound output interface to output the sound that is set to the pad on which the beat is determined to be performed, with a magnitude corresponding to the intensity of the beat; and control the display to display the visual feedback corresponding to the beat determined to be performed.
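The beat-detection rule reduces to a simple conjunction: a beat registers only when the pad is touched and the measured intensity meets the threshold, and the output magnitude then tracks the intensity. The threshold value and function names below are hypothetical; the abstract only specifies the comparison, not a concrete value.

```python
# Minimal sketch of the beat-detection rule; THRESHOLD is an
# illustrative value, not one given in the abstract.

THRESHOLD = 0.3

def detect_beat(touched, intensity, threshold=THRESHOLD):
    """Return the output magnitude for a beat, or None if no beat."""
    if touched and intensity >= threshold:
        return intensity  # output magnitude corresponds to beat intensity
    return None

detect_beat(True, 0.8)    # -> 0.8 (beat; sound output at this magnitude)
detect_beat(True, 0.1)    # -> None (touch too light to count as a beat)
detect_beat(False, 0.9)   # -> None (acceleration alone, no touch)
```

Requiring both sensors to agree is what distinguishes an intentional strike from an incidental brush or a knock transmitted through the housing.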
Abstract:
A musical instrument accessory with a capo carrying an adjustable connector, which in turn carries a screen-receiving fixture. When the capo is clamped to the neck of a guitar at a first position, the screen-receiving fixture may be arranged and oriented for convenient viewing, by the guitar musician, of a screen carried therein. When the capo is moved to a second position, viewing the screen necessitates re-orientation of the screen-receiving fixture to accommodate the geometry of the second position; this re-orientation is conveniently carried out by re-orienting the adjustable connector.
Abstract:
The proposed method creates a detailed, accurate, frequency- and amplitude-based three-dimensional moving display of digital audio files. It adds timestamps, used later in the process to achieve accurate synchronization between the moving display and playback of the analyzed audio file. It details how the analyzed data are processed and enhanced to prominently show the most fundamental elements in the audio file. The method proposes different layouts for displays and ways of showing separate elements of the audio simultaneously. Upcoming audio is displayed in locations that allow viewers to anticipate and react to events about to happen. It introduces the temporal plane of playback for clearly showing the part of the moving display that corresponds to the exact part of the analyzed audio that is playing. The temporal plane of playback demonstrates the direct correlation with the audio and the accurate synchronization between sound and picture.
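The timestamping step can be sketched simply: each analysis frame of the audio is tagged with its playback time (frame start divided by sample rate), so the display can later be held in lockstep with playback. The function name and the frame/hop sizes are illustrative assumptions, not values from the method.

```python
# Hypothetical sketch of tagging analysis frames with playback
# timestamps for display/audio synchronization.

def timestamp_frames(samples, sample_rate, frame_size, hop_size):
    """Return (timestamp_seconds, frame) pairs for an audio sample list."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop_size):
        t = start / sample_rate  # playback time of this frame's first sample
        frames.append((t, samples[start:start + frame_size]))
    return frames

frames = timestamp_frames(list(range(8)), sample_rate=4, frame_size=4, hop_size=2)
# frames start at samples 0, 2, 4 -> timestamps 0.0, 0.5, 1.0 seconds
```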
Abstract:
A scenario generation system, a scenario generation method, and a scenario generation program are provided. A scenario generation system used for video playback synchronized with musical piece playback includes: a situation estimating portion for estimating a situation expressed by the musical piece; a video specifying portion for specifying at least one video, constituted by scenes each having a time-series order, that is suited to the estimated situation; and a scenario generating portion for generating a scenario associating the scenes constituting the specified video with each section of the musical piece. As a result, a scenario can be generated from the scenes each having the time-series order, and a synchronized video giving a natural impression can be played back in correspondence with the musical piece playback on the basis of the scenario.
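The scenario-generating step can be sketched as an order-preserving assignment: the specified video's ordered scenes are spread over the sections of the musical piece, so the scenes never play out of their time-series order. The even-spreading rule and all names here are assumptions for illustration; the abstract does not specify how scenes are distributed.

```python
# Illustrative sketch: map each song section to a scene while
# preserving the scenes' time-series order.

def generate_scenario(scenes, sections):
    """Associate each section of the piece with a scene, in order."""
    scenario = []
    for i, section in enumerate(sections):
        # Spread the ordered scenes evenly over the sections; integer
        # division keeps the mapping monotone (order-preserving).
        scene = scenes[min(i * len(scenes) // len(sections), len(scenes) - 1)]
        scenario.append((section, scene))
    return scenario

scenario = generate_scenario(["scene1", "scene2"],
                             ["intro", "verse", "chorus", "outro"])
# -> [('intro', 'scene1'), ('verse', 'scene1'),
#     ('chorus', 'scene2'), ('outro', 'scene2')]
```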
Abstract:
This document describes a device for receiving and displaying graphical representations of digital music tracks and their components (in the form of digital interactive phrases, or “DIPs”). The device allows a user to play the music tracks using a new format, blend, mix or mash different music tracks together, via a digital interactive phrase process, and produce and listen to the blended, mixed or mashed digital music.