Paper presented at MML 2011, 4th International Workshop on Machine Learning and Music: Learning from Musical Structure, Sierra Nevada, Spain, December 17, 2011.
Learning melodic analysis rules
Plácido R. Illescas, David Rizo, José Manuel Iñesta
Department of Software and Computing Systems, University of Alicante
placidoroman@gmail.com, {drizo,inesta}@dlsi.ua.es
Rafael Ramírez
Department of Information and Communication Technologies, Universitat Pompeu Fabra
rafael.ramirez@upf.edu
1 Introduction
Musical analysis is a means to better understand the composer's intentions when creating a piece, and it can be used as an intermediate description of a musical work for other purposes, e.g. expressive performance [4] or music comparison [5]. A musical analysis can be decomposed into melodic, harmonic, and tonal function analyses. Melodic analysis studies the stylistic characteristics of a note from a contrapuntal point of view, while tonal and harmonic analyses investigate the roles of chords in particular musical pieces.
Automatic musical analysis has been approached from different perspectives: grammars, expert systems, probabilistic models, and model matching have been proposed for implementing tonal analysis (a comprehensive review can be found in [1]).
In this work we focus on automatic melodic analysis. One question that arises when building a melodic analysis system using a priori music theory is whether it is possible to automatically extract analysis rules from examples, and how similar the learnt rules are to music theory rules. This work investigates this question: given a dataset of analyzed melodies, our objective is to automatically learn analysis rules and to compare them with music theory rules.
2 Melodic analysis
The data set used in this work consists of transcriptions, in MusicXML format, of harmonized chorales by J. S. Bach (BWV 26, 29, 253, 272, 274, 275, 280, 285, 437, and 438). Each of the 2528 notes in the data set is characterised by its interval with the previous and following note, the direction of both intervals, whether it is tied or not, a ratio [3] describing the rhythm based on the inter-onset interval ratio, and its position in the bar, which determines the stability or metrical strength of the note. After a manual melodic analysis, each note has been classified as a harmonic tone, passing tone, neighbor tone, appoggiatura, escape note, suspension, anticipation, or pedal note. These classes are unbalanced (more than half of the notes are tagged as harmonic)¹.
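For concreteness, the per-note representation described above can be sketched as a record. The field names below are illustrative assumptions, not the attribute names of the authors' actual encoding:

```python
# Hypothetical sketch of the per-note feature vector described in the text.
# Field names are assumptions made for illustration only.
from dataclasses import dataclass

@dataclass
class NoteFeatures:
    prev_interval: int       # interval with the previous note, in semitones
    next_interval: int       # interval with the following note, in semitones
    prev_interval_dir: str   # "ASCENDING", "DESCENDING", or "EQUAL"
    next_interval_dir: str
    tied: bool               # whether the note is tied to the previous one
    ratio: float             # inter-onset interval ratio [3]
    duration: float          # note duration, in beats
    instability: int         # metrical strength derived from bar position
    melodic_tag: str         # manual label: "harmonic", "passing", ...

note = NoteFeatures(prev_interval=2, next_interval=2,
                    prev_interval_dir="DESCENDING",
                    next_interval_dir="ASCENDING",
                    tied=False, ratio=1.0, duration=0.5,
                    instability=5, melodic_tag="neighbor")
```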
We have applied the RIPPER algorithm [2] to learn melodic analysis rules that classify each note as either a harmonic, passing, neighbor, appoggiatura, escape, suspension, anticipation, or pedal note. The algorithm grows one rule at a time by greedily adding antecedents until the rule is 100% accurate on the growing set: at each step it tries every possible value of each attribute and selects the condition with the highest information gain. Less prevalent classes are considered first.
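The growing phase just described can be illustrated with a simplified sketch. Note this is not Cohen's full RIPPER, which additionally prunes each rule on a held-out set and uses an MDL-based stopping criterion; it only shows the greedy, gain-driven antecedent selection on a toy data set:

```python
# Simplified sketch of RIPPER's rule-growing phase (illustration only).
import math

def foil_gain(pos, neg, pos_after, neg_after):
    # FOIL-style information gain used to pick the next antecedent.
    if pos_after == 0:
        return -math.inf
    before = math.log2(pos / (pos + neg))
    after = math.log2(pos_after / (pos_after + neg_after))
    return pos_after * (after - before)

def grow_rule(examples, target):
    """Greedily add (attribute = value) antecedents until the rule covers
    no example of another class, as in the growing phase described above."""
    rule, covered = [], examples
    while any(e["class"] != target for e in covered):
        pos = sum(e["class"] == target for e in covered)
        neg = len(covered) - pos
        best = None
        for attr in covered[0]:
            if attr == "class":
                continue
            for value in {e[attr] for e in covered}:  # try every value
                subset = [e for e in covered if e[attr] == value]
                p = sum(e["class"] == target for e in subset)
                gain = foil_gain(pos, neg, p, len(subset) - p)
                if best is None or gain > best[0]:
                    best = (gain, attr, value, subset)
        if best is None or best[0] <= 0:
            break  # no condition improves the rule any further
        rule.append((best[1], best[2]))
        covered = best[3]
    return rule

# Toy growing set: here a "suspension" is a tied note resolving downwards.
notes = [
    {"tied": True,  "dir": "DESC", "class": "suspension"},
    {"tied": True,  "dir": "ASC",  "class": "harmonic"},
    {"tied": False, "dir": "DESC", "class": "harmonic"},
    {"tied": False, "dir": "ASC",  "class": "harmonic"},
]
# grow_rule(notes, "suspension") → [("tied", True), ("dir", "DESC")]
```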
¹Note that we include all four voices of the harmonized chorales, and the lower voices commonly just carry the harmony.
Musicologist analysis       #Rules   Example rule
Strongly agree              1        prevIntervalDir = EQUAL ∧ duration ≤ 0.5 ∧ ¬tied ∧ nextIntervalDir = DESC ∧ instability ≤ 3 ∧ nextInterval ≤ 2 ∧ nextIntervalMode = MINOR ∧ ratio ≤ 0.5 → melodictag = appogiatura
Agree                       20       instability ≥ 5 ∧ nextIntervalDir = ASCENDING ∧ prevIntervalDir = DESCENDING ∧ prevInterval ≤ 2 ∧ nextIntervalMode = MAJOR → melodictag = neighbor
Neither agree nor disagree  15       tied ∧ nextIntervalMode = MAJOR ∧ duration ≥ 2 → melodictag = suspension
Disagree                    3        instability ≥ 5 ∧ instability ≥ 13 ∧ instability ≤ 13 ∧ prevIntervalDir = DESCENDING ∧ nextIntervalMode = MINOR → melodictag = neighbor
Strongly disagree           0        —
Table 1: Likert-type scale analysis of the rules. The rule exemplifying the Agree entry needs the condition nextInterval = 2 to correspond exactly to music theory knowledge. The rule for Neither agree nor disagree contains an irrelevant clause (nextIntervalMode = MAJOR); on the contrary, nextInterval = 2 is missing and the instability factor should be small. The Disagree rule is irrelevant because it is ambiguous.
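As an illustration, a learned rule such as the Agree example in Table 1 is just a conjunction of attribute tests. The sketch below encodes it as a predicate over a note's attribute dictionary; the attribute names are copied from the printed rule, not from any real implementation:

```python
# Hypothetical encoding of the "Agree" rule from Table 1 as a predicate;
# attribute names follow the rule text, not the authors' actual code.
def neighbor_rule(note):
    return (note["instability"] >= 5
            and note["nextIntervalDir"] == "ASCENDING"
            and note["prevIntervalDir"] == "DESCENDING"
            and note["prevInterval"] <= 2
            and note["nextIntervalMode"] == "MAJOR")

example = {"instability": 6, "nextIntervalDir": "ASCENDING",
           "prevIntervalDir": "DESCENDING", "prevInterval": 2,
           "nextIntervalMode": "MAJOR"}
# neighbor_rule(example) is True, i.e. the rule fires: melodictag = neighbor
```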
3 Results
The results show that it is possible to obtain accurate melodic analysis models (the obtained accuracy is 92%) and that the learning algorithm is able to extract meaningful rules that can be applied to melodically analyze Baroque music. Table 1 shows the musicologist's Likert-type scale analysis of the rules. From a musicological point of view, just one of the rules corresponds exactly with music theory knowledge, although most of the rules are extremely similar to music theory rules. The difference between the learnt rules and music theory rules is that the former disregard conditions that are common to most notes in the data set, e.g. next intervals of two semitones. Since this is the most common interval in the data set, it is not surprising that the algorithm has not considered this information discriminant. Moreover, as expected, the algorithm has generated groups of rules that could be merged into music theory knowledge.
To conclude, it seems that it is indeed possible to automatically extract analysis rules from examples using machine learning techniques, and that the resulting rules closely correspond to music theory knowledge. We plan to extend our data set, especially with pieces containing a greater proportion of non-harmonic notes. We believe this would improve the generality and accuracy of the melodic analysis model.
References
[1] Jérôme Barthelemy. Figured bass and tonality recognition. In Proc. ISMIR, 2001.
[2] William W. Cohen. Fast effective rule induction. In Proc. of the Twelfth International Conference on Machine Learning, pages 115–123, 1995.
[3] Plácido R. Illescas, David Rizo, and José Manuel Iñesta. Harmonic, melodic, and functional automatic analysis. In Proceedings of the 2007 International Computer Music Conference, pages 165–168, 2007.
[4] R. Ramírez, A. Pérez, S. Kersten, D. Rizo, P. R. Illescas, and José Manuel Iñesta. Modeling celtic violin expressive performance. In Proc. Int. Workshop on Machine Learning and Music, MML 2008, pages 7–8, Helsinki, Finland, 2008.
[5] David Rizo. Symbolic music comparison with tree data structures. PhD thesis, Universidad de Alicante, 2010.