US20020002897A1 - Incremental sequence completion system and method - Google Patents
Definitions
- the user profiling system may be implemented using a parameter relating to a “continuity/discontinuity” mode on a user profile.
- the database may contain information representing a plurality of collections of descriptor/value pairs, each of the values for descriptors being selected from descriptor/value lists, and each of the descriptors being associated with a descriptor type.
- the descriptor types may at least comprise Integer-Type, Taxonomy-Type and Discrete-Type.
- descriptor types may have mathematical similarity functions.
- the database may comprise musical pieces, and the sequence of items may comprise music programs.
- the database may contain data corresponding to musical pieces, and the attribute(s) may express objective data associated with an item, such as a song title, the author of the musical piece, the duration of the musical piece, or the recording label.
- the database may contain data corresponding to musical pieces, and the attribute(s) may express subjective data, associated with an item, which describe musical properties thereof, such as style, type of voice, music setup, type of instruments, tempo, type of melody, or the main theme of the lyrics.
- the invention also relates to an interactive radio station providing a personalised sequence of musical items, characterised in that the sequence is generated by the above methods, thereby taking into account user tastes interactively.
- the invention further concerns a system adapted to implement the method of any one of claims 1 to 19, comprising a general-purpose computer and a monitor for display of the generated information.
- a computer program product adapted to carry out any one of the above methods, when loaded into a general purpose computer.
- FIG. 1 illustrates the taxonomy of musical styles, in which links indicate a similarity relation between styles; for instance, “Jazz-Crooner” is represented as similar to “Soul-Blues”;
- FIG. 2 illustrates the overall data flow of the present invention; and
- FIG. 3 is a view of a screen showing how to implement a sequence completion system and a user profiling system in an embodiment of the invention.
- the present disclosure partly relates to constraint satisfaction programming techniques contained in EP 0 961 209, which is herein expressly incorporated by reference in its entirety.
- An important aspect of the database is that the values of attributes are linked to each other by similarity relations. These similarity relations are used for specifying constraints on the continuity of the sequence (e.g. the preceding example contains a constraint on the continuity of styles). More generally, the taxonomies on attribute values establish links of partial similarity between items, according to a specific dimension of musical content.
- the taxonomy of styles in accordance with the present invention explicitly represents relations of similarity between styles as a non-directed graph in which vertices are styles and edges express similarity. It currently includes 400 different styles, covering most of western music.
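The graph structure described above can be sketched as follows; the edge list is a tiny invented fragment of the 400-style taxonomy, and the breadth-first distance is one plausible way to measure how far apart two styles are:

```python
# Sketch of the style taxonomy: a non-directed graph whose vertices are
# styles and whose edges express similarity. Styles and edges are invented.
from collections import deque

STYLE_EDGES = [
    ("Jazz-Crooner", "Soul-Blues"),
    ("Soul-Blues", "Rhythm-and-Blues"),
    ("Jazz-Crooner", "Swing"),
]

def build_graph(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def style_distance(graph, start, goal):
    """Breadth-first search: number of similarity edges between two styles."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nxt in graph.get(node, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # styles not connected

graph = build_graph(STYLE_EDGES)
print(style_distance(graph, "Jazz-Crooner", "Rhythm-and-Blues"))  # 2
```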
- the database, which can be a database of music titles, contains content information needed for specifying the constraints.
- Each item is described in terms of attributes which take their value in a predefined taxonomy.
- the attributes are of two sorts: technical attributes (descriptors) and content attributes (values).
- Technical attributes include the name of the title (e.g. name of a song), the name of the author (e.g. singer's name), the duration (e.g. “279 sec”), and the recording label (e.g. “Epic”).
- Content attributes describe musical properties of individual titles.
- the attributes are the following: style (e.g. “Jazz Crooner”), type of voice (e.g. “muffled”), music setup (e.g. “instrumental”), type of instruments (e.g. “brass”), tempo (e.g. “slow-fast”), and other optional attributes such as the type of melody (e.g. “consonant”) or the main theme of the lyrics (e.g. “love”).
- the database is created manually by experts. However, it should be noted that 1) some attributes could be extracted automatically from the signal, such as the tempo, see e.g. Scheirer, E. D., J. of the Acoustical Society of America, 103 (1), 588-601, 1998, and 2) all the attributes are simple, i.e. do not require sophisticated musical analysis.
- the above database is called “a metadatabase”.
- This database contains descriptions of music titles. These descriptions are sets of descriptor/value associations. Although the invention is largely independent of the actual structure of the metadatabase, an example of such a metadatabase is given.
- the descriptors are typically as follows:
- each descriptor is associated with a Descriptor-Type.
- the Tempo descriptor is of Integer-Type (its value is an integer).
- the Style descriptor is of type Taxonomy-Type.
- the main instrument descriptor is of type DiscreteDescriptor, which can take its value in a finite set of discrete values.
- each descriptor has an associated similarity relation similarity_X; this relation indicates whether a value for a given descriptor is similar to another value.
- Other descriptors can have mathematical similarity functions.
- the tempo descriptors range over integers. Accordingly, similarity relations can be defined using thresholds: similar_tempo(a, b) holds if |a − b| ≤ threshold.
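The three kinds of similarity relation named above (threshold comparison for Integer-Type, taxonomy links for Taxonomy-Type, and plain equality for Discrete-Type) can be sketched as follows; the threshold value and the style pair are invented for illustration:

```python
# Sketch of the three similarity relations; concrete values are assumptions.
TEMPO_THRESHOLD = 20  # hypothetical threshold, in beats per minute

def similar_tempo(a, b, threshold=TEMPO_THRESHOLD):
    """Integer-Type descriptors: similar when within a threshold."""
    return abs(a - b) <= threshold

# Pairs of styles linked by an edge in the taxonomy (invented fragment).
SIMILAR_STYLES = {frozenset(("Jazz-Crooner", "Soul-Blues"))}

def similar_style(a, b):
    """Taxonomy-Type descriptors: similar when linked in the taxonomy."""
    return a == b or frozenset((a, b)) in SIMILAR_STYLES

def similar_discrete(a, b):
    """Discrete-Type descriptors (e.g. main instrument): plain equality."""
    return a == b
```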
- a profile is a dictionary associating title numbers to grades. Title numbers are taken from a given music catalogue. Grades are numbers within a given grade range, such as [0, 1]. For instance a user profile could be:
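As an illustration, a hypothetical profile of the form just described (the title numbers and grades are invented) might be:

```python
# Hypothetical user profile: a dictionary associating title numbers from a
# music catalogue to grades in the range [0, 1]. All numbers are invented.
profile = {
    1231: 1.0,  # liked
    9823: 0.8,
    23:   0.0,  # disliked
}

def good_titles(profile, min_grade=0.5):
    """Titles with a 'good' grade (here, grade >= min_grade)."""
    return {t for t, g in profile.items() if g >= min_grade}
```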
- a profile is typically unique to a user.
- SEQ is the sequence already heard: song 1231, song 9823, . . . , song 23;
- a user's profile.
- the device may take some technical parameters which make it possible to tune the output.
- This parameter is in fact a set of parameters, which indicates how “continuous” the sequence should be with respect to several musical dimensions.
- the possible values indicate the type of continuity for each descriptor.
- the range of values depends on the type of the descriptor.
- Discrete Descriptors
- a value of 0 means that the corresponding descriptor for the next item to compute should be similar to the “current value” of the same descriptor (the current value is explicitly defined in the algorithm).
- a value of 1 means that the corresponding descriptor for the next item to compute should not be similar to the “current value” of the same descriptor.
- Integer Descriptors
- a value of 0 means that the corresponding descriptor for the next item to compute should be similar to the “current value” of the same descriptor.
- a value of −1 means that the corresponding descriptor for the next item to compute should be “less” than the current value.
- a value of +1 means that the corresponding descriptor for the next item to compute should be “more” than the current value.
- Taxonomy Descriptors (as, e.g., in Style)
- Values range from 0 to n, where n is the maximum distance between nodes using the similarity relation.
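A minimal sketch of how these parameter values could filter candidate items follows; the function name, the descriptor-kind argument (which disambiguates the discrete value 1 from the integer value +1), and the argument layout are assumptions, not the patent's implementation:

```python
# Hedged sketch of interpreting one continuity-parameter value.
def satisfies_continuity(kind, mode, current, candidate, similar=None, distance=None):
    """Check a candidate value for one descriptor against the continuity mode.

    "taxonomy": mode n bounds the graph distance between values.
    "discrete": mode 0 = similar to the current value, 1 = not similar.
    "integer":  mode 0 = similar, -1 = "less" than, +1 = "more" than current.
    """
    if kind == "taxonomy":
        return distance(current, candidate) <= mode
    if kind == "discrete":
        return similar(current, candidate) if mode == 0 else not similar(current, candidate)
    if kind == "integer":
        if mode == 0:
            return similar(current, candidate)
        return candidate < current if mode == -1 else candidate > current
    raise ValueError(kind)
```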
- This parameter can take on the following four basic values:
- 1+3 means the union of the titles obtained by 1 and the titles obtained by 3.
- a repetition is a title which is present more than once in the sequence.
- n = number of items (length of the sequence).
- d′ = (d·n − 1)/(n − 1), and d′ belongs to [0, 1], varying as d.
- This parameter is used by the computing algorithm, in particular to determine the “current value” to be compared against. It is also used to determine the title to be repeated, if any.
- This number can take any value from 1 to n. When the value is greater than 1, the process is applied iteratively that number of times, with the same input parameters, except for the input sequence SEQ, which is iteratively augmented with the output of the preceding computation.
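The iterative application described above can be sketched as follows; compute_bext is a hypothetical stand-in for the device's single-step "best next item" computation:

```python
# Sketch of the "number of items to be generated at a time" behaviour: the
# single-step computation is applied repeatedly, with the input sequence SEQ
# augmented by each preceding output.
def generate(seq, profile, count, compute_bext):
    """Append `count` items to `seq`, one best-next-item at a time."""
    seq = list(seq)  # do not mutate the caller's sequence
    for _ in range(count):
        nxt = compute_bext(seq, profile)  # best next item for the current SEQ
        seq.append(nxt)
    return seq
```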
- some parameters can be provided by either the server (e.g. an Internet Radio wanting to impose particular titles, styles, etc.), or the user himself (e.g. a title he or she likes or does not like).
- the implementation also uses a constraint solver (described in patent application EP 0 961 209).
- POT: titles in the profile which correspond to a “good” grade (for instance, titles with grade “1”, in the case of a Boolean grade).
- POT: titles obtained by metadata analysis, from titles “close” to the profile's good titles (described infra).
- POT: titles obtained by metadata analysis, from titles “far” from the profile's good titles (described infra).
- mean values for SEQ are computed for the various descriptors: style, tempo, energy, RhythmType, VoiceType, MainInstrument, etc.
- POT cannot be empty (in the worst case, all continuity constraints have been removed, so POT is not filtered).
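The guarantee that POT cannot be empty suggests a relaxation loop of roughly the following shape; the constraint representation and the order in which constraints are dropped are invented for illustration:

```python
# Sketch of the relaxation guarantee: continuity constraints are dropped one
# by one until the filtered POT is non-empty; in the worst case, all
# constraints are removed and POT is returned unfiltered.
def filter_pot(pot, constraints):
    """Filter POT by the constraints, relaxing until a candidate survives."""
    constraints = list(constraints)
    while True:
        filtered = [t for t in pot if all(c(t) for c in constraints)]
        if filtered or not constraints:
            return filtered if filtered else list(pot)  # worst case: unfiltered
        constraints.pop()  # relax: drop one continuity constraint
```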
- collaborative filtering is used to compute, from a profile, a set of titles to be recommended, based on this similarity measure.
- This algorithm also computes a set of titles from a profile. Instead of basing the computation on profile similarity, as in collaborative filtering, the computation is based on metadata similarity.
- a global distance measure on titles is defined, from each individual descriptor. Any distance measure can be used here.
- a simple distance measure is for instance:
- D(T1, T2) = the number of descriptors which have a non-similar value.
- Threshold is set to be “small” if only “close” titles are sought, and larger if “distant” titles are sought.
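The distance measure D and the threshold-based selection of “close” or “distant” titles can be sketched as follows; the descriptor set and similarity relations passed in are illustrative assumptions:

```python
# Sketch of the simple distance measure: the number of descriptors with
# non-similar values between two titles. Titles are dictionaries of
# descriptor/value pairs; per-descriptor similarity relations are passed in.
def metadata_distance(t1, t2, similar_by_descriptor):
    return sum(
        0 if similar(t1[d], t2[d]) else 1
        for d, similar in similar_by_descriptor.items()
    )

def close_titles(title, catalogue, similar_by_descriptor, threshold):
    """'Close' titles: distance at most a small threshold; a larger
    threshold yields the 'distant' titles instead."""
    return [t for t in catalogue
            if metadata_distance(title, t, similar_by_descriptor) <= threshold]
```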
Description
- The present invention relates to an incremental sequence completion system and method designed to compute e.g. music sequences in a variety of different contexts and situations, including: Internet adaptive or interactive radio, digital audio broadcasting (DAB) with intelligent scheduling, music recommendation systems, and other innovative Electronic Music Distribution (EMD) services in general. These sequences are generated “iteratively”, step by step. The present invention also concerns a system or server adapted to implement such a method.
- Advances in networking and transmission of digital multimedia data have made it possible to provide users with huge catalogues of information, such as music catalogues. These advances thus raise not only the problem of distribution, but also the problem of choosing the desired information among huge catalogues.
- Such new developments raise music selection problems which may depend on users' aims or those of content providers. Although modelling a user's goal in accessing music is very complex, two basic elements, i.e. desire of repetition and desire of surprise, can be identified.
- The desire of repetition means that people want to listen to music they already know, or similar to what they already know. Sequences of repeating notes create expectations of the same notes to occur. On the other hand, the desire for surprise is a key to understanding music at all levels of perception.
- Of course, these two desires are contradictory, and the issue in music selection is precisely to find the right compromise: provide users with items they already know, or items they do not know but would probably like.
- From the viewpoint of record companies, the goal of music delivery is to achieve a better exploitation of the catalogue. Indeed, record companies have problems with the exploitation of their catalogue using standard distribution schemes. For technical reasons, only a small part of the catalogue is actually “active”, i.e. proposed to users, in the form of easily available products. More importantly, the analysis of music sales clearly shows decreases in the sales of albums, and short-term policies based on selling many copies of a limited number of items (hits) are no longer efficient. Additionally, the sales of general-purpose “samplers” (e.g. “Best of love songs”) are no longer profitable, because users already have the hits, and do not want to buy CDs in which they like only a fraction of the titles. Instead of proposing a small number of hits to a large audience, a natural solution is to increase diversity by proposing more customised albums to users.
- The system according to the present invention makes it possible to compute one step in the music sequence generation process. When implementing the inventive system, the server typically receives repeated calls to provide full-fledged EMD services.
- For instance, a user may obtain an initial music title by using a device or system of the invention, thereby starting the procedure from an empty sequence. The system then computes a next title using a sequence containing the first computed title, and so on. The system computes only the “best next item”, sometimes referred to here as “bext”, of a given sequence of items. This makes it possible to compute different kinds of continuations, and to take into account possible changes in the user's taste or in the sequence heard.
- The system according to the present invention takes into account two main parameters:
- 1) a context of what is listened to, given by a sequence of items that is supposed to have already been heard by the user; and
- 2) a user profile, defining the taste of the user.
- Typically, the items are music titles, and the sequences are music programs composed of a succession of titles, e.g. interactive Internet Radio and “on-demand” music compilations.
- The system produces the “best next item”, i.e. the “bext”. Here, the term “bext” means the item proposed by the server, which should satisfy two criteria: 1) conforming to the user's taste, and 2) being consistent within the given context (defined by the sequence).
- The main innovative idea of the present invention resides in combining two elements, i.e. 1) an incremental sequence completion system and 2) a standard-user profiling system. The term completion is well known in the field of computing, and refers to the technique of completing by anticipation a sequence of which the first elements are given as an input.
- The method in accordance with the present invention is capable of operating interactively. In other words, the user (recipient of the sequence) can send data to the server during the course of a sequence generation to modify the selections to follow in the sequence. These data can e.g. correspond to parameters that form the user profile. A dialogue can thereby be established between the user and the server of the sequence: the server delivers an item of the sequence and the user may, in response, indicate his or her appreciation of that item, e.g. through user profile parameters. The response is taken into account by the server to modify, if need be, the corresponding profile accordingly. In this way, the server can evolve in real time by such interactions with the user to provide an increasingly accurate choice of best next items in the sequence, and thereby effect an optimised completion through a better anticipation of the next item of the sequence likely to satisfy the user.
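The dialogue described above implies a simple profile update on each user response; the grade range and the function name below are assumptions:

```python
# Sketch of the interactive dialogue: after each delivered item, the user's
# response (a grade in [0, 1], by assumption) updates the profile before the
# next best-next-item is computed.
def apply_feedback(profile, title, grade):
    """Record the user's appreciation of a delivered title."""
    updated = dict(profile)  # leave the previous profile untouched
    updated[title] = grade
    return updated

profile = {1231: 1.0}
profile = apply_feedback(profile, 9823, 0.2)  # user disliked title 9823
```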
- In the present invention, the term “database” is used for designating any collection of data, e.g. covering both pre-stored data and dynamically stored data. There are many situations in which it is necessary or desirable to create a sequence of items (e.g. music titles) from a collection of items for which data are available. It is also important that a created sequence is “coherent”, i.e. there should exist a particular relationship between attributes (or descriptors) of the items which constitute a sequence. Typically, the attributes of the items, components of the sequence, should not be too dissimilar, especially for successive items in the same sequence.
- A system for producing “coherent” sequences of items in a particular order is known from patent document EP-A-0 961 209. However, this patent deals specifically with sequences having a length that is initially fixed, i.e. known a priori.
- The items are generally stored in a database and described in terms of data pairs, each pair respectively consisting of an attribute and the corresponding value. The problem of creating the desired fixed length sequence is treated as “Constraint Satisfaction Programming (CSP)”, also disclosed in the above EP application. The sequence to be obtained is specified by formulating a collection of constraints holding on items in the database. Each constraint describes a particular property of the sequence, and the sequence can be specified by any number of constraints.
- The items in the database exhibit a particular generic format with associated taxonomies for at least some of the attribute values. Also, the constraints are specified out of a predetermined library of generic constraint classes which have been specially formulated. The special constraint classes allow the expression of desired properties of the target sequence, notably properties of similarity between groups of items, properties of dissimilarity and properties of cardinality. These constraint classes enable the properties of coherent sequences to be expressed in a particularly simple manner.
- It is the combination of the use of a generic format for items in the database and the special constraint classes which makes it possible to use CSP solution techniques to solve the combinatorial problem of building an ordered collection of elements satisfying a number of constraints.
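The three generic constraint classes named above (similarity, dissimilarity, cardinality) can be illustrated as predicates over a candidate sequence of items; these concrete forms are invented sketches, not the patented constraint library:

```python
# Illustrative constraint classes over a sequence of items, where each item
# is a dictionary of attribute/value pairs.
def similarity_constraint(attr, similar):
    """Successive items should have similar values for `attr`."""
    return lambda seq: all(similar(a[attr], b[attr]) for a, b in zip(seq, seq[1:]))

def dissimilarity_constraint(attr):
    """No two successive items share the same value for `attr`."""
    return lambda seq: all(a[attr] != b[attr] for a, b in zip(seq, seq[1:]))

def cardinality_constraint(attr, value, at_most):
    """At most `at_most` items in the sequence take `value` for `attr`."""
    return lambda seq: sum(1 for it in seq if it[attr] == value) <= at_most
```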
- Much work has been carried out in user recommendation systems. Most of this work is based on the idea of managing user “profiles”, using some sort of collaborative filtering approach (for instance, the FireFly technology). Similarity measures between profiles make it possible to compute the closest profiles to a given individual. Data analysis techniques then make it possible to extract the most common taste of these “close” profiles, which is then recommended to the user.
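A profile similarity measure of the kind alluded to can be sketched as follows; the overlap-based formula is an invented illustration, not the FireFly method:

```python
# Sketch of a similarity measure between two user profiles (dictionaries
# mapping title numbers to grades in [0, 1]).
def profile_similarity(p1, p2):
    """Average agreement on commonly graded titles, in [0, 1]; 0 if no overlap."""
    common = set(p1) & set(p2)
    if not common:
        return 0.0
    agreement = sum(1.0 - abs(p1[t] - p2[t]) for t in common)
    return agreement / len(common)
```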
- The concept of the present invention is to combine this technology with another technology, namely an incremental sequence completion system, which makes it possible to create sequences of items (and not simply sets of items, as in collaborative filtering). Sequences here mean ordered collections of items, as found typically in the context of music listening (e.g. radio programs, concerts, compilations). A technology to produce sequences of items was previously the subject of patent application EP-A 0 961 209 described above. The previous patent application, however, considered fixed-length sequences, and did not explicitly take into account user profiling. Furthermore, it was not fully adapted to incremental sequence generation.
- The proposed invention makes it possible to propose not only the items likely to please the user (standard collaborative filtering), but also the items that fit well in the given sequence. Moreover, the invention described here does not compute actual sequences, but is limited to computing the next item in a given sequence. This makes it possible to use the invention in a variety of contexts, for different EMD applications, taking the user's interaction into account as needed.
- To this end, there is provided a method of generating incrementally a sequence of items from a database containing the items. The method is characterised in that the sequence is generated by implementing in combination a sequence completion system and a user profiling system, thereby taking into account both sequence coherence and user profile.
- Typically, the item comprises at least one attribute.
- Further, the items are linked to each other in the sequence by similarity relations in attributes of the items.
- Suitably, the sequence generating system is implemented by generating a desired next item in the sequence on the basis of similarity relationships between the item and the sequence.
- The desired-next-item is further generated by user profiling techniques and/or metadata analysis techniques.
- The sequence may represent music titles.
- The method according to the invention may further comprise the steps of providing the database with a parameter relating to a “continuity/discontinuity” mode on the sequence, a parameter relating to a “repetitivity” mode on the sequence, a parameter relating to a “length of past” mode on the sequence, a parameter relating to an “explicit constraints” mode on said sequence and a parameter relating to the “number of items to be generated at a time” mode, respectively.
- Likewise, the user profiling system may be implemented using a parameter relating to a “continuity/discontinuity” mode on a user profile.
- In the above methods, the database may contain information representing a plurality of collections of descriptor/value pairs, each of the values for descriptors being selected from descriptor/value lists, and each of the descriptors being associated with a descriptor type.
- Further, the descriptor types may at least comprise Integer-Type, Taxonomy-Type and Discrete-Type.
- Further yet, at least some of the descriptor types may have mathematical similarity functions.
- In the above methods of the invention, the database may comprise musical pieces, and the sequence of items may comprise music programs.
- The database may contain data corresponding to musical pieces and the attribute(s) may express objective data associated with an item, such as a song title, the author of the musical piece, the duration of the musical piece, or the recording label.
- Likewise, the database may contain data corresponding to musical pieces and the attribute(s) may express subjective data, associated with an item, which describe musical properties thereof, such as style, type of voice, music setup, type of instruments, tempo, type of melody, or main theme of the lyrics.
- There is also provided an implementation of the method mentioned above, for creating a user recommendation system, each recommendation taking into account both sequence coherence and user profile.
- The invention also relates to an interactive radio station providing a personalised sequence of musical items, characterised in that the sequence is generated by the above methods, thereby taking into account user tastes interactively.
- The invention further concerns a system adapted to implement the method of any one of claims 1 to 19, comprising a general-purpose computer and a monitor for display of the generated information. There is also provided a computer program product adapted to carry out any one of the above methods when loaded into a general-purpose computer.
- The above and other objects, features and advantages of the present invention will be made apparent from the following description of the preferred embodiments, given as non-limiting examples, with reference to the accompanying drawings, in which:
- FIG. 1 illustrates the taxonomy of musical styles in which links indicate a similarity relation between styles. For example, “Jazz-Crooner” is represented as similar to “Soul-Blues”;
- FIG. 2 illustrates overall data flow of the present invention; and
- FIG. 3 is a view of a screen showing how to implement a sequence completion system and a user profiling system in an embodiment of the invention.
- The following description of the preferred embodiments will begin with an explanation of the constitutive elements of the preferred embodiment. In the preferred examples, the invention is applied to the automatic composition of music programmes.
- The present disclosure partly relates to constraint satisfaction programming techniques contained in EP 0 961 209, which is herein expressly incorporated by reference in its entirety.
- Taxonomies of Values and Similarity Relations
- An important aspect of the database is that the values of attributes are linked to each other by similarity relations. These similarity relations are used for specifying constraints on the continuity of the sequence (e.g. the preceding example contains a constraint on the continuity of styles). More generally, the taxonomies on attribute values establish links of partial similarity between items, according to a specific dimension of musical content.
- Some of these relations are simple ordering relations. For instance, tempos take their value in the ordered list (fast, fast-slow, slow-fast, slow). Other attributes, such as style, take their value in full-fledged taxonomies. The taxonomy of styles is particularly worth mentioning, because it embodies global knowledge of music that the system is able to exploit.
- The taxonomy of styles in accordance with the present invention explicitly represents relations of similarity between styles as a non-directed graph in which vertices are styles and edges express similarity. It currently includes 400 different styles, covering most of western music.
- 1) Database
- The database, which can be a database of music titles, contains content information needed for specifying the constraints. Each item is described in terms of attributes which take their value in a predefined taxonomy. The attributes are of two sorts: technical attributes (descriptors) and content attributes (values). Technical attributes include the name of the title (e.g. name of a song), the name of the author (e.g. singer's name), the duration (e.g. “279 sec”), and the recording label (e.g. “Epic”). Content attributes describe musical properties of individual titles. The attributes are the following: style (e.g. “Jazz Crooner”), type of voice (e.g. “muffled”), music setup (e.g. “instrumental”), type of instruments (e.g. “brass”), tempo (e.g. “slow-fast”), and other optional attributes such as the type of melody (e.g. “consonant”), or the main theme of the lyrics (e.g. “love”).
- In the current state, the database is created manually by experts. However, it should be noted that 1) some attributes could be extracted automatically from the signal, such as the tempo, see e.g. Scheirer, E. D., J. of the Acoustical Society of America, 103 (1), 588-601, 1998, and 2) all the attributes are simple, i.e. do not require sophisticated musical analysis.
- The above database is called “a metadatabase”. This database contains descriptions of music titles. These descriptions are sets of descriptor/value associations. Although the invention is largely independent of the actual structure of the metadatabase, an example of such a metadatabase is given. The descriptors are typically as follows:
- Style
- Tempo
- Energy
- VoiceType
- MainInstrument
- RhythmType
- The possible values for each of these descriptors are taken from descriptor-value lists. Each descriptor is associated with a Descriptor-Type. For instance, the Tempo descriptor is of Integer-Type (its value is an integer). The Style descriptor is of Taxonomy-Type. The MainInstrument descriptor is of Discrete-Type, which can take its value in a finite set of discrete values.
- For some descriptor types, there is also provided a similarity relation similarity_X. This relation indicates whether a value for a given descriptor is similar to another value. For instance, the Style descriptor takes its value in a taxonomy of styles, in which the similarity relation is explicitly present (e.g. style_value=“Disco:US” could be explicitly stated as similar to style_value=“Disco:Philadelphia Sound”), cf. A Taxonomy of Musical Genres by F. PACHET and D. CAZALY, RIAO 2000, Content-Based Multimedia Information Access, Collège de France, Paris, Apr. 14, 2000 (copy included in the present application file). Other descriptors can have a mathematical similarity function. For instance, the Tempo descriptor ranges over integers; accordingly, similarity relations can be defined using thresholds: similar_tempo(a, b) holds if |b−a|<threshold.
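By way of illustration, the three kinds of similarity relations can be sketched as follows (a minimal sketch: the tempo threshold value and the style edges are assumptions for illustration, not values taken from the actual taxonomy):

```python
# Illustrative similarity relations for the three descriptor types.
# TEMPO_THRESHOLD and STYLE_EDGES are assumed example values.

TEMPO_THRESHOLD = 20  # assumed threshold on integer tempo values

def similar_tempo(a, b, threshold=TEMPO_THRESHOLD):
    """Integer-Type similarity: similar_tempo(a, b) holds if |b - a| < threshold."""
    return abs(b - a) < threshold

# Taxonomy-Type similarity: a non-directed graph in which vertices are
# styles and edges express similarity (tiny fragment from the text).
STYLE_EDGES = {
    ("Disco:US", "Disco:Philadelphia Sound"),
    ("Jazz-Crooner", "Soul-Blues"),
}

def similar_style(a, b):
    """Two styles are similar if equal or linked by an edge of the taxonomy."""
    return a == b or (a, b) in STYLE_EDGES or (b, a) in STYLE_EDGES

def similar_discrete(a, b):
    """Discrete-Type similarity: plain equality on a finite value set."""
    return a == b
```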
- 2) User Profile
- The embodiment utilises so-called user profiles. A profile is a dictionary associating title numbers to grades. Title numbers are taken from a given music catalogue. Grades are numbers within a given grade range, such as [0, 1]. For instance a user profile could be:
- song1=1,
- song45=0,
- song1234=1,
- A profile is typically unique to a user.
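A profile of this kind can be sketched as a plain dictionary from title numbers to grades (the helper good_titles is hypothetical; it merely extracts the well-graded titles that the algorithm later uses to seed the candidate set):

```python
# A user profile: a dictionary associating title numbers to grades in the
# grade range [0, 1], using the illustrative song numbers from the text.
profile = {
    "song1": 1,
    "song45": 0,
    "song1234": 1,
}

def good_titles(profile, good_grade=1):
    """Titles with a 'good' grade (e.g. grade 1 for a Boolean grade)."""
    return {title for title, grade in profile.items() if grade == good_grade}
```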
- 3) Parameters of the Invention
- i) Main Parameters
- SEQ is the sequence already heard: song1231, song9823, . . . , song23
- P is a user's profile.
- Additionally, the device may take some technical parameters which make it possible to tune the output.
- ii) Technical Parameters
- P1: Mode continuity/discontinuity of the sequence
- This parameter is in fact a set of parameters, which indicates how “continuous” the sequence should be with respect to several musical dimensions.
- These dimensions correspond to the descriptors as found in the metadatabase:
- continuity_style: 0, 1, 2, 3
- continuity_tempo: −1, 0, 1
- continuity_energy: −1, 0, 1
- continuity_voice: 0, 1
- continuity_MainInstrument: 0, 1
- continuity_rhythmType: 0, 1
- The possible values indicate the type of continuity for each descriptor. The range of values depends on the type of descriptor.
- a) Discrete Descriptors
- A value of 0 means that the corresponding descriptor for the next item to compute should be similar to the “current value” of the same descriptor (current value is explicitly defined in the algorithm).
- A value of 1 means that the corresponding descriptor for the next item to compute should not be similar to the “current value” of the same descriptor (current value is explicitly defined in the algorithm).
- b) Integer Descriptors
- A value of 0 means that the corresponding descriptor for the next item to compute should be similar to the “current value” of the same descriptor (current value is explicitly defined in the algorithm).
- A value of −1 means that the corresponding descriptor for the next item to compute should be “less” than the current value.
- A value of +1 means that the corresponding descriptor for the next item to compute should be “more” than the current value.
- c) Taxonomy Descriptors (as, e.g., in Style)
- Values range from 0 to n, where n is the maximum distance between nodes using the similarity relation.
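The three continuity rules above can be condensed into a single check (an illustrative sketch; the function name and argument layout are assumptions, `similar` is the descriptor type's similarity relation, and `distance` stands for the node distance in the taxonomy under the similarity relation):

```python
def satisfies_continuity(descriptor_type, mode, current, candidate,
                         similar, distance=None):
    """Check one continuity constraint on the candidate next item."""
    if descriptor_type == "discrete":
        # 0: similar to the current value; 1: not similar to it.
        if mode == 0:
            return similar(current, candidate)
        return not similar(current, candidate)
    if descriptor_type == "integer":
        # 0: similar; -1: less than the current value; +1: more than it.
        if mode == 0:
            return similar(current, candidate)
        return candidate < current if mode == -1 else candidate > current
    if descriptor_type == "taxonomy":
        # 0..n: maximum allowed distance between nodes in the taxonomy.
        return distance(current, candidate) <= mode
    raise ValueError(descriptor_type)
```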
- P2: Mode continuity/discontinuity of the profile
- This parameter can take on the following four basic values:
- 0=compute only titles which are explicitly present in the profile
- 1=compute only titles which are obtained by collaborative filtering (CF)
- 2=compute only “close” titles which are obtained by using metadata (MD)
- 3=compute only “distant” titles which are obtained by using metadata (MD)
- Additionally, any combination of these four values can be specified, using “+” sign.
- For instance: 1+3 means the union of the titles obtained by 1 and the titles obtained by 3.
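The “+” combination can be read as a set union over the candidate sets produced by the basic modes, e.g. (a sketch; the function and parameter names are assumed):

```python
def candidates(mode_spec, basic_modes):
    """Union of the title sets for a combined mode specification.

    mode_spec:   e.g. "1+3"
    basic_modes: dict mapping each basic mode value to its set of titles
    """
    result = set()
    for part in mode_spec.split("+"):
        result |= basic_modes[int(part)]
    return result
```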
- P3: Repetitivity of sequence
- This parameter indicates how “repetitive” the sequence should be.
- A repetition is a title which is present more than once in the sequence.
- It is a percentage value, i.e. it ranges from 0% to 100%. A sequence having no repetition has a repetitivity of 0%. A sequence having the same title repeated all the time (whatever the sequence length) has a repetitivity of 100%.
- Repetitivity is defined as follows:
- Let n=number of items (length of the sequence).
- Let d=(number of different items in the sequence)/n.
- By definition, d belongs to [1/n, 1]. Since we want a value that belongs to [0, 1], we therefore define:
- d′=(d.n−1)/(n−1), and d′ belongs to [0, 1], varying as d.
- Finally we define the repetitivity:
- r=1−d′=(1−d).n/(n−1)
- with the convention that r(empty sequence)=r(singleton)=0
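The definitions above can be sketched directly, with the stated convention for the empty sequence and the singleton:

```python
def repetitivity(seq):
    """Repetitivity r = 1 - d' = (1 - d) * n / (n - 1), where
    d = (number of different items in the sequence) / n, with the
    convention r(empty sequence) = r(singleton) = 0."""
    n = len(seq)
    if n <= 1:
        return 0.0
    d = len(set(seq)) / n
    return (1 - d) * n / (n - 1)
```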
- P4: Length of past to be taken into account
- This can take a value from 1 to n, where n is the length of the input sequence. This parameter is used by the computing algorithm, in particular to determine the “current value” to be compared against. It is also used to determine the title to be repeated, if any.
- P5: Explicit constraints
- These constraints are the same as in the previous patent application EP-A-0 961 209, e.g. constraints imposing a title, a style, etc. They are used only when P6 is >1 and therefore imply that a fixed-length subsequence has to be produced.
- P6: Length of sequence to be produced
- This number can take any value from 1 to n. When the value is greater than 1, the process is applied iteratively P6 times, with the same input parameters except for the input sequence SEQ, which is iteratively augmented with the output of the preceding computation.
- 4) Implementation: the Algorithm
- The computation of the next song takes into account all the input parameters, and exploits the metadatabase, whose design is outside the scope of the present patent application.
- Depending on the application envisaged, some parameters can be provided by either the server (e.g. an Internet Radio wanting to impose particular titles, styles, etc.), or the user himself (e.g. a title he or she likes or does not like).
- The implementation also uses a constraint solver (described in patent application EP 0 961 209).
- The algorithm always returns a title (unless the initial metadatabase is empty).
- Compute set POT of potential candidate titles.
- If P2=0 then POT=Titles in the profile which correspond to a “good” grade (for instance, titles with grade “1”, in the case of a Boolean grade).
- If P2=1 then POT=Titles obtained by collaborative filtering (CF, described infra).
- If P2=2 then POT=Titles obtained by metadata analysis, from titles “close” to the profile's good titles (described infra).
- If P2=3 then POT=Titles obtained by metadata analysis, from titles “far” from the profile's good titles (described infra).
- If P2=4 then POT=all titles in the metadatabase.
- Combinations of the basic cases (e.g. “1+3”) are treated by computing the union of the results of each basic case.
- IF POT is empty, THEN relax constraints until POT is not empty.
- This can happen e.g. if the profile P is empty and P2=0. In this case, relax the constraint P2=0 and choose P2=1 instead. Repeat, if necessary up to P2=4, which ensures that POT is not empty.
- IF (P6=1) THEN (compute only one next item)
- Compute r=repetitivity (SEQ), using the following formula:
- IF SEQ is empty, THEN r=0 (by convention)
- ELSE
- r=1−d′=(1−d).n/(n−1),
- where d=number of different titles in SEQ, and n=length(SEQ).
- IF r<P3 THEN (an item in the sequence has to be repeated)
- Choose a title in SEQ which 1) is close to POT, 2) is far back in SEQ, and 3) has not been repeated yet. This is done by performing a left-to-right scan of SEQ, with a past length determined by parameter P4 (length of past). Each title in this subsequence is graded according to the three criteria above, and a global score is given by the sum of the criteria. The best item is selected. If either SEQ is empty or P4=0, skip to the ELSE part.
- ELSE
- Compute from SEQ the source descriptors for the continuity constraints:
- According to the value of P4 (length of past), mean values over SEQ are computed for the various descriptors: style, tempo, energy, RhythmType, VoiceType, MainInstrument, etc.
- Filter POT to keep only matching titles:
- Remove from POT the titles which do not satisfy the continuity constraints, taking the computed mean values as the current values.
- WHILE POT is empty DO
- 1) Remove a continuity constraint
- 2) Re-filter POT as above (with one less continuity constraint)
- At this point, POT cannot be empty (in the worst case, all continuity constraints have been removed, so POT is not filtered).
- RESULT=Random (POT)
- END (P6=1)
- IF (P6>1) THEN (compute several items at once)
- IF (P5 is empty) THEN
- REPEAT P6 TIMES WHOLE PROCESS with same input parameters except:
- P6=1;
- SEQ←SEQ+RESULT,
- END REPEAT
- ELSE (P5 is not empty)
- Compute next subsequence of P6 items using constraints disclosed in previous patent application EP-A-0 961 209 and specified in P5, augmented with continuity constraints (P1).
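The core of the P6=1 branch — filter POT by the continuity constraints, remove one constraint and re-filter while the result is empty, then choose at random — can be sketched as follows (an illustrative sketch, not the patented implementation: constraints are represented as predicates over titles, and POT is assumed non-empty, as guaranteed by the earlier relaxation step):

```python
import random

def next_item(pot, continuity_constraints, rng=random):
    """Pick the next title from the non-empty candidate set POT.

    Filters POT by the continuity constraints; while the filtered set is
    empty, removes one constraint and re-filters. In the worst case all
    continuity constraints have been removed, so POT is unfiltered and
    the loop terminates with a random unconstrained choice."""
    constraints = list(continuity_constraints)
    while True:
        filtered = [t for t in sorted(pot) if all(c(t) for c in constraints)]
        if filtered:
            return rng.choice(filtered)
        constraints.pop()  # relax: drop one continuity constraint
```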
- Collaborative Filtering Algorithm
- This method is well known in the prior art, e.g. under the name of “Firefly” (MIT), or in an article by U. Shardanand and P. Maes entitled “Social Information Filtering: Algorithms for Automating “Word of Mouth””, published in “Proceedings of the ACM Conference on Human Factors in Computing Systems”, pp. 210-217, 1995.
- It basically provides a similarity measure between two titles, based on profile similarity.
- In our invention, collaborative filtering is used to compute, from a profile, a set of titles to be recommended, based on this similarity measure.
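A minimal memory-based variant of this step might look as follows (an assumption for illustration: the invention only requires some profile-based similarity measure, and the agreement-ratio measure and threshold used here are not from the original):

```python
def profile_similarity(p1, p2):
    """Fraction of commonly graded titles on which two profiles agree."""
    common = set(p1) & set(p2)
    if not common:
        return 0.0
    return sum(p1[t] == p2[t] for t in common) / len(common)

def recommend_cf(user, others, sim_threshold=0.5, good_grade=1):
    """Recommend titles graded 'good' by profiles similar to the user's,
    excluding titles the user has already graded."""
    recs = set()
    for other in others:
        if profile_similarity(user, other) >= sim_threshold:
            recs |= {t for t, g in other.items()
                     if g == good_grade and t not in user}
    return recs
```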
- Metadata Analysis Algorithm
- This algorithm also computes a set of titles from a profile. Instead of basing the computation on profile similarity, as in collaborative filtering, the computation is based on metadata similarity.
- A global distance measure on titles is defined, from each individual descriptor. Any distance measure can be used here. A simple distance measure is for instance:
- D(T1, T2)=Number of descriptors which have a non-similar value.
- We then consider all titles X in the database which have a distance D(X, T)<Threshold for at least one title T of the profile.
- The value of Threshold is set to be “small” if only “close” titles are sought, and larger if “distant” titles are sought.
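The distance measure and the threshold-based selection can be sketched as follows (illustrative; the descriptor names and the per-descriptor similarity relations are assumptions):

```python
def metadata_distance(t1, t2, similar_by_descriptor):
    """D(T1, T2) = number of descriptors whose values are not similar.

    similar_by_descriptor maps each descriptor name to its similarity
    relation (e.g. a threshold test for Tempo, equality for Style)."""
    return sum(0 if similar(t1[d], t2[d]) else 1
               for d, similar in similar_by_descriptor.items())

def titles_within(database, profile_titles, threshold, similar_by_descriptor):
    """All titles X in the database with D(X, T) < threshold for at least
    one title T of the profile ('close' for a small threshold)."""
    return {name for name, x in database.items()
            if any(metadata_distance(x, t, similar_by_descriptor) < threshold
                   for t in profile_titles)}
```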
Claims (23)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00401915 | 2000-07-04 | ||
EP00401915.4 | 2000-07-04 | ||
EP00401915A EP1170722B1 (en) | 2000-07-04 | 2000-07-04 | Incremental music title item sequence completion apparatus and method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20020002897A1 true US20020002897A1 (en) | 2002-01-10 |
US6452083B2 US6452083B2 (en) | 2002-09-17 |
Family
ID=8173756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/897,243 Expired - Lifetime US6452083B2 (en) | 2000-07-04 | 2001-07-02 | Incremental sequence completion system and method |
Country Status (4)
Country | Link |
---|---|
US (1) | US6452083B2 (en) |
EP (1) | EP1170722B1 (en) |
JP (1) | JP4804658B2 (en) |
DE (1) | DE60045001D1 (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK200101619A (en) * | 2001-11-01 | 2003-05-02 | Syntonetic Aps | Automatic for template-based sequence production, as well as automatic sequence production method |
US20060080103A1 (en) * | 2002-12-19 | 2006-04-13 | Koninklijke Philips Electronics N.V. | Method and system for network downloading of music files |
EP1685710A1 (en) * | 2003-11-12 | 2006-08-02 | Philips Intellectual Property & Standards GmbH | Program recommendation system |
CN1882999A (en) * | 2003-11-18 | 2006-12-20 | 皇家飞利浦电子股份有限公司 | User aware audio playing apparatus and method |
US7394011B2 (en) * | 2004-01-20 | 2008-07-01 | Eric Christopher Huffman | Machine and process for generating music from user-specified criteria |
JPWO2005096629A1 (en) * | 2004-03-31 | 2007-08-16 | 株式会社デンソーアイティーラボラトリ | Program guide creation method, program guide creation device, and program guide creation system |
JP2006084749A (en) * | 2004-09-16 | 2006-03-30 | Sony Corp | Content generation device and content generation method |
EP1849099B1 (en) * | 2005-02-03 | 2014-05-07 | Apple Inc. | Recommender system for identifying a new set of media items responsive to an input set of media items and knowledge base metrics |
JP4626376B2 (en) | 2005-04-25 | 2011-02-09 | ソニー株式会社 | Music content playback apparatus and music content playback method |
FR2897176B1 (en) * | 2006-02-08 | 2008-09-19 | Immofrance Com | DATA ENTRY ASSISTANCE SYSTEM |
EP1895505A1 (en) | 2006-09-04 | 2008-03-05 | Sony Deutschland GmbH | Method and device for musical mood detection |
US7863511B2 (en) * | 2007-02-09 | 2011-01-04 | Avid Technology, Inc. | System for and method of generating audio sequences of prescribed duration |
US9990655B2 (en) | 2007-08-24 | 2018-06-05 | Iheartmedia Management Services, Inc. | Live media stream including personalized notifications |
US9699232B2 (en) | 2007-08-24 | 2017-07-04 | Iheartmedia Management Services, Inc. | Adding perishable content to media stream based on user location preference |
WO2009029222A1 (en) | 2007-08-24 | 2009-03-05 | Clear Channel Management Services, L.P. | System and method for providing a radio-like experience |
US11265355B2 (en) | 2007-08-24 | 2022-03-01 | Iheartmedia Management Services, Inc. | Customized perishable media content based on user-specified preference for static or variable location |
FR2928766B1 (en) * | 2008-03-12 | 2013-01-04 | Iklax Media | METHOD FOR MANAGING AUDIONUMERIC FLOWS |
US8650094B2 (en) * | 2008-05-07 | 2014-02-11 | Microsoft Corporation | Music recommendation using emotional allocation modeling |
US8344233B2 (en) * | 2008-05-07 | 2013-01-01 | Microsoft Corporation | Scalable music recommendation by search |
US8026436B2 (en) * | 2009-04-13 | 2011-09-27 | Smartsound Software, Inc. | Method and apparatus for producing audio tracks |
US20120198355A1 (en) * | 2011-01-31 | 2012-08-02 | International Business Machines Corporation | Integrating messaging with collaboration tools |
US10984387B2 (en) * | 2011-06-28 | 2021-04-20 | Microsoft Technology Licensing, Llc | Automatic task extraction and calendar entry |
US10361981B2 (en) | 2015-05-15 | 2019-07-23 | Microsoft Technology Licensing, Llc | Automatic extraction of commitments and requests from communications and content |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1189231B1 (en) * | 1993-05-26 | 2005-04-20 | Pioneer Electronic Corporation | Recording Medium for Karaoke |
US5693902A (en) * | 1995-09-22 | 1997-12-02 | Sonic Desktop Software | Audio block sequence compiler for generating prescribed duration audio sequences |
US6231347B1 (en) * | 1995-11-20 | 2001-05-15 | Yamaha Corporation | Computer system and karaoke system |
US5918303A (en) * | 1996-11-25 | 1999-06-29 | Yamaha Corporation | Performance setting data selecting apparatus |
US6243725B1 (en) * | 1997-05-21 | 2001-06-05 | Premier International, Ltd. | List building system |
US6201176B1 (en) * | 1998-05-07 | 2001-03-13 | Canon Kabushiki Kaisha | System and method for querying a music database |
EP0961209B1 (en) * | 1998-05-27 | 2009-10-14 | Sony France S.A. | Sequence generation using a constraint satisfaction problem formulation |
JP4487332B2 (en) * | 1998-05-29 | 2010-06-23 | ソニー株式会社 | Information processing apparatus and method, recording medium, and information processing system |
US5969283A (en) * | 1998-06-17 | 1999-10-19 | Looney Productions, Llc | Music organizer and entertainment center |
JP3646011B2 (en) * | 1998-10-22 | 2005-05-11 | 三菱電機株式会社 | Retrieval system and computer-readable recording medium on which program of retrieval system is recorded |
US6248946B1 (en) * | 2000-03-01 | 2001-06-19 | Ijockey, Inc. | Multimedia content delivery system and method |
- 2000
- 2000-07-04 DE DE60045001T patent/DE60045001D1/en not_active Expired - Lifetime
- 2000-07-04 EP EP00401915A patent/EP1170722B1/en not_active Expired - Lifetime
- 2001
- 2001-07-02 US US09/897,243 patent/US6452083B2/en not_active Expired - Lifetime
- 2001-07-04 JP JP2001203929A patent/JP4804658B2/en not_active Expired - Fee Related
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7277877B2 (en) * | 2002-08-14 | 2007-10-02 | Sony Corporation | System and method for selecting a music channel |
US6791020B2 (en) | 2002-08-14 | 2004-09-14 | Sony Corporation | System and method for filling content gaps |
USRE44569E1 (en) | 2002-08-14 | 2013-11-05 | Sony Corporation | System and method for filling content gaps |
US20040194612A1 (en) * | 2003-04-04 | 2004-10-07 | International Business Machines Corporation | Method, system and program product for automatically categorizing computer audio files |
US9053181B2 (en) | 2003-11-03 | 2015-06-09 | James W. Wieder | Adaptive personalized playback or presentation using count |
US8656043B1 (en) | 2003-11-03 | 2014-02-18 | James W. Wieder | Adaptive personalized presentation or playback, using user action(s) |
US11165999B1 (en) | 2003-11-03 | 2021-11-02 | Synergyze Technologies Llc | Identifying and providing compositions and digital-works |
US10970368B1 (en) | 2003-11-03 | 2021-04-06 | James W. Wieder | Distributing digital-works and usage-rights to user-devices |
US10223510B1 (en) | 2003-11-03 | 2019-03-05 | James W. Wieder | Distributing digital-works and usage-rights to user-devices |
US9858397B1 (en) | 2003-11-03 | 2018-01-02 | James W. Wieder | Distributing digital-works and usage-rights to user-devices |
US7884274B1 (en) | 2003-11-03 | 2011-02-08 | Wieder James W | Adaptive personalized music and entertainment |
US8370952B1 (en) | 2003-11-03 | 2013-02-05 | Wieder James W | Distributing digital-works and usage-rights to user-devices |
US8396800B1 (en) | 2003-11-03 | 2013-03-12 | James W. Wieder | Adaptive personalized music and entertainment |
US9773205B1 (en) | 2003-11-03 | 2017-09-26 | James W. Wieder | Distributing digital-works and usage-rights via limited authorization to user-devices |
US9645788B1 (en) | 2003-11-03 | 2017-05-09 | James W. Wieder | Adaptively scheduling playback or presentation, based on user action(s) |
US9098681B2 (en) | 2003-11-03 | 2015-08-04 | James W. Wieder | Adaptive personalized playback or presentation using cumulative time |
US9053299B2 (en) | 2003-11-03 | 2015-06-09 | James W. Wieder | Adaptive personalized playback or presentation using rating |
US20060080356A1 (en) * | 2004-10-13 | 2006-04-13 | Microsoft Corporation | System and method for inferring similarities between media objects |
US20100153469A1 (en) * | 2005-06-30 | 2010-06-17 | Koninklijke Philips Electronics, N.V. | Electronic device and method of creating a sequence of content items |
US20070073574A1 (en) * | 2005-09-23 | 2007-03-29 | Everyoung Media, Llc | Network marketing system |
US20070157797A1 (en) * | 2005-12-14 | 2007-07-12 | Sony Corporation | Taste profile production apparatus, taste profile production method and profile production program |
US7671267B2 (en) * | 2006-02-06 | 2010-03-02 | Mats Hillborg | Melody generator |
US20090025540A1 (en) * | 2006-02-06 | 2009-01-29 | Mats Hillborg | Melody generator |
US8527525B2 (en) | 2008-06-30 | 2013-09-03 | Microsoft Corporation | Providing multiple degrees of context for content consumed on computers and media players |
US20090327341A1 (en) * | 2008-06-30 | 2009-12-31 | Microsoft Corporation | Providing multiple degrees of context for content consumed on computers and media players |
US8832016B2 (en) | 2011-12-09 | 2014-09-09 | Nokia Corporation | Method and apparatus for private collaborative filtering |
Also Published As
Publication number | Publication date |
---|---|
DE60045001D1 (en) | 2010-11-04 |
JP2002117069A (en) | 2002-04-19 |
JP4804658B2 (en) | 2011-11-02 |
EP1170722A1 (en) | 2002-01-09 |
US6452083B2 (en) | 2002-09-17 |
EP1170722B1 (en) | 2010-09-22 |
Similar Documents
Publication | Title
---|---
US6452083B2 (en) | Incremental sequence completion system and method
US7130860B2 (en) | Method and system for generating sequencing information representing a sequence of items selected in a database
US11669296B2 (en) | Computerized systems and methods for hosting and dynamically generating and providing customized media and media experiences
Pachet et al. | A combinatorial approach to content-based music selection
Ragno et al. | Inferring similarity between music objects with application to playlist generation
Whitman et al. | Artist detection in music with minnowmatch
Pachet et al. | A taxonomy of musical genres
US7279629B2 (en) | Classification and use of classifications in searching and retrieval of information
US6545209B1 (en) | Music content characteristic identification and matching
US7877387B2 (en) | Systems and methods for promotional media item selection and promotional program unit generation
JP4343330B2 (en) | Sequence information generation method and sequence information generation system
US7533091B2 (en) | Methods, systems, and computer-readable media for generating a suggested list of media items based upon a seed
KR20220147156A (en) | Music generator
Pachet et al. | Automatic generation of music programs
Knees et al. | Combining audio-based similarity with web-based data to accelerate automatic music playlist generation
Smith et al. | Towards a hybrid recommendation system for a sound library
Dias et al. | From manual to assisted playlist creation: a survey
Kostek | Music information retrieval: the impact of technology, crowdsourcing, big data, and the cloud in art
Craw et al. | Music recommendation: audio neighbourhoods to discover music in the long tail
Kathavate | Music recommendation system using content and collaborative filtering methods
Garg et al. | Implementing and analyzing behaviour of ML based Tamil songs recommendation system
CN1643529B (en) | Method of personalizing and identifying communications
Pontello et al. | Mixtape: using real-time user feedback to navigate large media collections
US7254618B1 (en) | System and methods for automatic DSP processing
Schedl et al. | Multimedia information retrieval: music and audio
Legal Events
Code | Title | Description
---|---|---
AS | Assignment | Owner name: SONY FRANCE S.A., FRANCE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PACHET, FRANCOIS;CAZALY, DANIEL;REEL/FRAME:011954/0409. Effective date: 20010517
STCF | Information on status: patent grant | Free format text: PATENTED CASE
FPAY | Fee payment | Year of fee payment: 4
FPAY | Fee payment | Year of fee payment: 8
FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
FPAY | Fee payment | Year of fee payment: 12
AS | Assignment | Owner name: SONY EUROPE LIMITED, ENGLAND. Free format text: MERGER;ASSIGNOR:SONY FRANCE SA;REEL/FRAME:052149/0560. Effective date: 20110509
AS | Assignment | Owner name: SONY EUROPE B.V., UNITED KINGDOM. Free format text: MERGER;ASSIGNOR:SONY EUROPE LIMITED;REEL/FRAME:052162/0623. Effective date: 20190328