Abstract:
Embodiments of the invention are directed to using image data and contextual data to determine information about a scene, based on one or more previously obtained images. Contextual data, such as the location of image capture, can be used to determine previously obtained images related to the contextual data and other location-related information, such as billboard locations. Even with low-resolution devices, such as cell phones, image attributes, such as a histogram or optically recognized characters, can be compared between the previously obtained images and the newly captured image. Attributes matching within a predefined threshold indicate matching images. Information on the content of matching previously obtained images can be provided back to the user who captured the new image. User profile data can refine the content information. The content information can also be used as search terms for additional searching or other processing.
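As a rough illustration of the histogram-comparison step described above, the sketch below compares a newly captured image against previously obtained images that share its contextual data and accepts a match when the histogram difference falls within a predefined threshold. The normalized grayscale histogram, L1 distance, threshold value, and the candidate record layout are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def histogram(image, bins=32):
    """Normalized intensity histogram for a grayscale image array (illustrative)."""
    counts, _ = np.histogram(image, bins=bins, range=(0, 255))
    return counts / counts.sum()

def histograms_match(new_image, candidate_image, threshold=0.25):
    """Treat two images as matching when their histograms differ by less than
    a predefined threshold; L1 distance and 0.25 are assumed values."""
    distance = np.abs(histogram(new_image) - histogram(candidate_image)).sum()
    return distance <= threshold

def find_matches(new_image, candidates_near_location):
    """Compare the new image against previously obtained images selected by
    contextual data (e.g. capture location); return those that match."""
    return [c for c in candidates_near_location
            if histograms_match(new_image, c["pixels"])]
```

In practice, the content information associated with any matching previously obtained image would then be returned to the user or fed into further searching.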
Abstract:
The systems and methods described create a mathematical representation of each of the media objects for which user ratings are known. The mathematical representations take into account the subjective rating value assigned by a user to the respective media object and the user who assigned the rating value. The media object with the mathematical representation closest to that of the seed media object is then selected as the most similar media object to the seed media object. In an embodiment, the mathematical representation is a vector representation in which each user is a different dimension and each user's rating value is the magnitude of the vector in that dimension. Similarity between two songs is determined by identifying the vectors closest to that of the seed song. Closeness may be determined by subtracting the vectors or by calculating the dot product of each vector with that of the seed media object.
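A minimal sketch of the vector representation described above, assuming ratings are held in per-media dictionaries keyed by user: each user is one dimension, the user's rating is the magnitude in that dimension, and closeness is scored here with a normalized dot product (vector subtraction, i.e. distance, is the alternative the abstract mentions). Function and variable names are illustrative.

```python
import numpy as np

def rating_vector(ratings_by_user, all_users):
    """Vector with one dimension per user; magnitude is that user's rating (0 if unrated)."""
    return np.array([ratings_by_user.get(u, 0.0) for u in all_users])

def most_similar(seed_ratings, other_media, all_users):
    """Return the media object whose rating vector is closest to the seed's,
    scored by normalized dot product (assumed scoring choice)."""
    seed = rating_vector(seed_ratings, all_users)
    best, best_score = None, -np.inf
    for media_id, ratings in other_media.items():
        vec = rating_vector(ratings, all_users)
        denom = np.linalg.norm(seed) * np.linalg.norm(vec)
        score = (seed @ vec) / denom if denom else 0.0
        if score > best_score:
            best, best_score = media_id, score
    return best
```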
Abstract:
Systems and methods for generating and playing a sequence of media objects based on a mood gradient are also disclosed. A mood gradient is a sequence of items, in which each item is a media object having known characteristics or a representative set of characteristics of a media object, that is created or used by a user for a specific purpose. Given a mood gradient, one or more new media objects are selected for each item in the mood gradient based on the characteristics associated with that item. In this way, a sequence of new media objects is created that exhibits a similar variation in media object characteristics. The mood gradient may be presented to a user or created via a display illustrating a three-dimensional space in which each dimension corresponds to a different characteristic. The mood gradient may be represented as a path through the three-dimensional space, and icons representing media objects are located within the three-dimensional space based on their characteristics.
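A short sketch of the selection step described above: for each item in the mood gradient, pick a not-yet-used media object whose characteristics lie closest to that item's point in the three-dimensional characteristic space. The specific axes (e.g. tempo, energy, valence), data layout, and Euclidean distance are assumptions for illustration only.

```python
import math

def nearest_media(target, library, used):
    """Pick the unused media object whose characteristics are closest to the
    target point in the three-dimensional characteristic space."""
    candidates = [m for m in library if m["id"] not in used]
    return min(candidates, key=lambda m: math.dist(target, m["characteristics"]))

def playlist_from_gradient(mood_gradient, library):
    """For each item in the mood gradient, select a new media object with
    similar characteristics, producing a sequence with a similar variation."""
    playlist, used = [], set()
    for point in mood_gradient:  # e.g. (tempo, energy, valence) - illustrative axes
        choice = nearest_media(point, library, used)
        playlist.append(choice)
        used.add(choice["id"])
    return playlist
```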
Abstract:
An online music system includes a music database configured to store musical selections and to store a user profile for respective users of the online music system, an advertiser account management system configured to store bid amounts from advertisers seeking to provide information to the users of the online music system, and a user recommendation system coupled to the music database to present information about musical selections to respective users based on the stored user profile and the stored bid amounts.
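One way the recommendation system might combine the two stored inputs is sketched below. The abstract only states that recommendations are based on both the user profile and the bid amounts; the linear blend, the per-genre profile, and the weighting parameter are assumptions for illustration.

```python
def rank_selections(selections, user_profile, bids, bid_weight=0.3):
    """Rank musical selections for a user by blending profile affinity with
    advertiser bid amounts (blend and bid_weight are illustrative assumptions)."""
    def score(sel):
        affinity = user_profile.get(sel["genre"], 0.0)  # e.g. per-genre preference in [0, 1]
        bid = bids.get(sel["id"], 0.0)                  # advertiser bid for this selection
        return (1 - bid_weight) * affinity + bid_weight * bid
    return sorted(selections, key=score, reverse=True)
```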