Structuring Music through Markup Language:
Designs and Architectures

Steyn J (ed) 2013
Structuring Music through Markup Language:
Designs and Architectures
New York: IGI Information Science Reference
DOI: 10.4018/978-1-4666-2497-9
ISBN13: 9781466624979
ISBN10: 1466624973
EISBN13: 978146662498

Chapters

  1. The information architecture of music
    Jacques Steyn
  2. The Physics of Music
    Jyri Pakarinen
  3. Expressing Musical Features, Class Labels, Ontologies and Metadata Using ACE XML 2.0
    Cory McKay and Ichiro Fujinaga
  4. Towards an encoding of Musical Interaction
    Antoine Allombert and Myriam Desainte-Catherine
  5. Chronicle: XML-representation of symbolic music and other complex time structures
    Wijnand Schepens and Marc Leman
  6. Representing music as work in progress
    Gerard Roma and Perfecto Herrera
  7. Structuring music-related movements
    Alexander Refsum Jensenius
  8. Expressiveness in music performance: analysis, models, mapping, encoding
    Sergio Canazza, Giovanni De Poli, Antonio Rodà, Alvise Vidolin
  9. MusicXML: The First Decade
    Michael D. Good
  10. Universal information architecture of acoustic music instruments
    Jacques Steyn

1. The information architecture of music


Jacques Steyn

Information architecture is about information structures and their relations within an information space; in this chapter, the music information space. To determine what these structures and relationships are, an ontological investigation is undertaken. Ontology has a specific meaning in Information Systems, and is here treated as a methodology that results in a specific information architecture. Ontologies can apply to many levels of investigation and description, and to any of the contemporary music disciplines. Music is here demarcated to a core consisting of pitch-frequency and tempo-time relationships, mapped onto music space. The roles of PitchSets ("octaves"), scales, and tuning systems within this space are explained and proposed as the core components of the object "music". The most basic and generic markup language for music should thus start from this core; all other ontologies and markup are secondary to this object.
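A core markup built on pitch-frequency and tempo-time relationships might look roughly as follows. This is purely an illustrative sketch; the element and attribute names are hypothetical and are not taken from the chapter:

```xml
<!-- Hypothetical sketch of a pitch/tempo core markup.
     Element names are illustrative only, not the chapter's schema. -->
<music>
  <tuning system="equal-temperament" divisions="12">
    <reference pitch="A4" frequency="440"/>  <!-- anchor of the pitch-frequency mapping -->
  </tuning>
  <pitchset span="octave" ratio="2/1"/>      <!-- a PitchSet: the frequency-doubling span -->
  <scale name="major" degrees="0 2 4 5 7 9 11"/>  <!-- scale degrees in semitone steps -->
  <tempo bpm="96"/>                          <!-- the tempo-time side of the core -->
</music>
```

Under such a scheme, every other pitch follows from the reference by the tuning rule (in 12-tone equal temperament, multiplying the reference frequency by 2^(n/12) for n semitone steps), so scales and tuning systems are derived from, rather than added to, the core.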


2. The Physics of Music


Jyri Pakarinen

This chapter discusses the central physical phenomena involved in music. The aim is to explain the relevant issues at an understandable level, without delving unnecessarily deeply into the underlying mathematics. The chapter is divided into two main sections: musical sound sources and sound transmission to the observer. The first section starts from the definition of sound as wave motion, and then guides the reader through the vibration of strings, bars, membranes, plates, and air columns, that is, the oscillating sources that create the sound in most musical instruments. Resonating structures, such as instrument bodies, are also reviewed, and the section ends with a discussion of potential physical markup parameters for musical sound sources. The second section starts with an introduction to the basics of room acoustics, and then explains the acoustic effect that a human observer has on the sound field. The section ends with a discussion of which sound transmission parameters could be used in a general music markup language. Finally, a concluding section is presented.
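As an example of the kind of relationship covered, the fundamental frequency of an ideal stretched string follows directly from three physical parameters (the standard textbook result; length $L$, tension $T$, and linear mass density $\mu$), with the higher partials as integer multiples:

```latex
f_1 = \frac{1}{2L}\sqrt{\frac{T}{\mu}}, \qquad f_n = n\,f_1 \quad (n = 1, 2, 3, \dots)
```

Parameters of exactly this kind — lengths, tensions, densities — are natural candidates for the physical markup parameters the section discusses.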


3. Expressing Musical Features, Class Labels, Ontologies and Metadata Using ACE XML 2.0

Cory McKay and Ichiro Fujinaga

This chapter presents ACE XML, a set of file formats that are designed to meet the special representational needs of research in music information retrieval (MIR) in general and automatic music classification in particular. The ACE XML formats are designed to represent a wide range of musical information clearly and simply using formally structured frameworks that are flexible and extensible. This chapter includes a critical review of existing file formats that have been used in MIR research. This is followed by a set of design priorities that are proposed for use in developing new formats and improving existing ones. The details of the ACE XML specification are then described in this context. Finally, research priorities for the future are discussed, as well as possible uses for ACE XML outside the specific domain of MIR.


4. Towards an encoding of Musical Interaction

Antoine Allombert and Myriam Desainte-Catherine

While representing musical processes or musical scores through markup languages is now well established, we argue that there is still a need for a format to encode musical material with which a musician can interact. The lack of such a format is especially acute for contemporary music that involves computing processes. We propose a formal representation for composing musical scores in which some temporal properties can be interactively modified during execution. This makes it possible to create scores that a performer can interpret in the same way a musician interprets a score of instrumental music. The formal representation comes with an XML format for encoding the scores and for interfacing the representation with other types of markup-language music description.
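The idea of a score with interactively modifiable temporal properties might be encoded along the following lines. This is a hypothetical sketch only; the element names and structure are illustrative and do not reproduce the chapter's actual XML format:

```xml
<!-- Hypothetical sketch: a score whose durations carry performer-modifiable
     bounds and interaction points. Names are illustrative only. -->
<interactive-score>
  <box id="intro" start="0s" duration="8s">
    <duration-range min="6s" max="12s"/>  <!-- the performer may stretch or compress -->
  </box>
  <box id="theme" duration="16s">
    <after ref="intro"/>                  <!-- temporal constraint: theme follows intro -->
    <trigger event="pedal-down"/>         <!-- interaction point that starts the box -->
  </box>
</interactive-score>
```

The essential point such a format captures is the separation between fixed temporal constraints (ordering, bounds) and the values the performer chooses interactively at execution time.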


5. Chronicle: XML-representation of symbolic music and other complex time structures

Wijnand Schepens and Marc Leman

Chronicle is a new universal system for representing time-related data. Although the system was developed for the representation of symbolic music, it is readily applicable to other types of data with complex time structure, such as audio, multimedia, poetry, choreography, task scheduling and so on.

Different levels of complexity are supported. The lowest level deals with a stream of events ordered in time. Subsequent higher levels add the possibility to organize the data in groups and subgroups forming a hierarchical structure, local timing, automatic layout of sequential or parallel sections, association of data with other elements, working with multiple timescales, time mappings and more.

The system is primarily concerned with the treatment of time and structure. The domain developer is free to choose the event data types, timescales and organizational constraints most suited for the application.

The Chronicle system defines an XML-based encoding based on a set of level-specific DTDs, but it also offers software support in the form of classes, interfaces, libraries for reading and writing XML, and tools for level reduction.

It is hoped that developers of new XML encodings in different domains can use Chronicle as a powerful base layer. The software can also be useful for researchers who need an easy and flexible way to store XML data.
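The two lowest levels described above — a timed event stream, and hierarchical grouping with sequential or parallel layout — might look something like this. The sketch is hypothetical; element names are illustrative and are not taken from Chronicle's actual DTDs:

```xml
<!-- Hypothetical sketch of a Chronicle-style document.
     Element names are illustrative, not from the actual DTDs. -->
<chronicle>
  <group id="phrase1" layout="sequential">    <!-- higher level: hierarchical grouping -->
    <event time="0.0" dur="0.5" data="C4"/>   <!-- lowest level: events ordered in time -->
    <event time="0.5" dur="0.5" data="E4"/>
    <group id="chord1" layout="parallel">     <!-- a parallel section within the sequence -->
      <event time="1.0" dur="1.0" data="G4"/>
      <event time="1.0" dur="1.0" data="B4"/>
    </group>
  </group>
</chronicle>
```

Because Chronicle leaves the event data types to the domain developer, the `data` attribute here could equally carry a syllable of poetry, a dance gesture, or a scheduled task rather than a pitch.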


6. Representing music as work in progress

Gerard Roma and Perfecto Herrera

In this chapter we discuss an approach to music representation that supports collaborative composition given current practices based on digital audio. A music work is represented as a directed graph that encodes sequences and layers of sound samples. We discuss graph grammars as a general framework for this representation. From a grammar perspective, we analyze the use of XML for storing production rules, music structures, and references to audio files. We describe an example implementation of this approach.
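A directed graph of sample references, together with a production rule, might be serialized as follows. This is a hypothetical sketch; the element names, file paths, and rule syntax are illustrative and not the chapter's actual schema:

```xml
<!-- Hypothetical sketch: a work in progress as a directed graph over
     sound samples, with one production rule. Names are illustrative. -->
<piece>
  <graph>
    <node id="kick" sample="samples/kick.wav"/>
    <node id="bass" sample="samples/bass.wav"/>
    <node id="pad"  sample="samples/pad.wav"/>
    <edge from="kick" to="bass" type="sequence"/>  <!-- bass follows kick in time -->
    <edge from="kick" to="pad"  type="layer"/>     <!-- pad sounds together with kick -->
  </graph>
  <rule id="r1">                                   <!-- production rule: rewrite a node -->
    <lhs node="bass"/>
    <rhs><node id="bass-var" sample="samples/bass-var.wav"/></rhs>
  </rule>
</piece>
```

Keeping the rules in the same document as the graph is what makes the representation a "work in progress": applying or retracting rules yields successive versions of the piece without discarding its history.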


7. Structuring music-related movements


Alexander Refsum Jensenius

The chapter starts by discussing the importance of body movement in both music performance and perception, and argues that future research in the field requires solutions for streaming and storing music-related movement data alongside other types of musical information. This is followed by a suggestion for a multilayered approach to structuring movement data, in which each layer represents a separate and consistent subset of information. Finally, two prototype implementations are presented: a setup for storing GDIF data in SDIF files, and an example of how GDIF-based OSC streams can allow more flexible and meaningful mapping from controller to sound engine.


8. Expressiveness in music performance: analysis, models, mapping, encoding

       
Sergio Canazza, Giovanni De Poli, Antonio Rodà, Alvise Vidolin

During the last decade, in the fields of both systematic musicology and cultural musicology, considerable research effort (using methods borrowed from music informatics, psychology, and neuroscience) has been spent connecting two worlds that seemed very distant or even antithetical: machines and emotions. Within the Sound and Music Computing framework of human-computer interaction in particular, interest has grown in finding ways to allow machines to communicate expressive, emotional content through a nonverbal channel. This interest is justified by the objective of enhanced interaction between humans and machines, exploiting communication channels that are typical of human-human communication and that can therefore be easier and less frustrating for users, in particular for non-technically skilled users (e.g. musicians, teachers, students, the general public). While research on emotional communication has found its way into more traditional fields of computer science such as Artificial Intelligence, novel fields are also focusing on these issues: examples are research on Affective Computing in the United States, KANSEI Information Processing in Japan, and Expressive Information Processing in Europe. This chapter presents the state of the art in the computational study of music performance. In addition, analysis methods and synthesis models of expressive content in music performance, developed by the authors, are presented. Finally, an encoding system for music performance expressiveness is detailed, using an XML-based approach.


9. MusicXML: The First Decade


Michael D. Good

MusicXML is a universal interchange and distribution format for common Western music notation. MusicXML's design and development began in 2000, with the goal of becoming for digital sheet music what MP3 is for recorded music. Developed by Recordare, MusicXML can represent music from the 17th century onwards, including guitar tablature and other notations used to notate or transcribe contemporary popular music. MusicXML is supported by over 160 applications. This chapter describes the development and history of MusicXML.
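The flavor of the format can be seen from its canonical minimal example: a score-partwise document encoding a single whole-note middle C (following the published MusicXML tutorial; treat minor details such as the version number as illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<score-partwise version="3.0">
  <part-list>
    <score-part id="P1"><part-name>Music</part-name></score-part>
  </part-list>
  <part id="P1">
    <measure number="1">
      <attributes>
        <divisions>1</divisions>                      <!-- duration units per quarter note -->
        <key><fifths>0</fifths></key>                 <!-- C major: no sharps or flats -->
        <time><beats>4</beats><beat-type>4</beat-type></time>
        <clef><sign>G</sign><line>2</line></clef>     <!-- treble clef -->
      </attributes>
      <note>
        <pitch><step>C</step><octave>4</octave></pitch>
        <duration>4</duration>                        <!-- four quarter-note units -->
        <type>whole</type>
      </note>
    </measure>
  </part>
</score-partwise>
```

The verbosity is deliberate: by spelling out both sound-oriented data (pitch, duration) and notation-oriented data (type, clef, key), the format can serve interchange between sequencers, scanners, and notation editors alike.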


10. Universal information architecture of acoustic music instruments


Jacques Steyn

This chapter describes the information architecture of acoustic music instruments. As contemporary acoustics research has not yet covered all the possible materials, shapes, designs, and other essential properties of possible acoustic instruments, the model proposed in this chapter serves as a high-level analysis with meta-level XML-based markup elements and attributes. The ultimate design goal of the proposed model is a software synthesis application that could faithfully recreate the acoustic sounds of music instruments, as well as create novel acoustic sounds using virtual music instruments.

Although this chapter is not much concerned with organology, ethnomusicology, or acoustics, the findings of these fields informed the proposed information architecture. On the highest level, according to acoustic properties, instruments can be classified as air instruments or solid instruments. The second level within this classification consists of pipes, strings, bars, and membranes. The acoustic properties of the component parts of each of these classes of instruments are considered and marked up. It is proposed that algorithms be created independently for each of these components, which might lead to more realistic-sounding instruments, as well as to totally new sounds based on the properties of acoustic instruments, and even the virtual creation of new types of acoustic instrument that would be impossible to build with real materials.

It is further proposed that the properties of different components serve as modifiers on a base soundwave. Modifiers include the materials used in the construction of instruments, energy sources, the dimensions of 3D acoustic cavities, and the relationships between instrument components. For example, for pipes the pipe-like components are the mouth cavity, lips, mouthpiece, pipe body, holes and valves, flare, and bell; these properties are powerful enough to describe oddly shaped pipe instruments (such as the serpent and sousaphone), as well as virtual polygon-shaped instruments. Oddly shaped string instruments, bars, and membranes can also be described.
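A component-as-modifier markup along these lines might look as follows for a pipe instrument. The sketch is hypothetical; element names, attributes, and values are illustrative and not the chapter's actual schema:

```xml
<!-- Hypothetical sketch: instrument components as modifiers on a base
     soundwave. Names and values are illustrative only. -->
<instrument class="air" type="pipe">
  <base-wave form="sine"/>                <!-- the soundwave the modifiers act on -->
  <modifier component="mouthpiece" cup-depth="shallow"/>
  <modifier component="pipe-body" material="brass" length="2.9m" bore="conical"/>
  <modifier component="valves" count="3"/>
  <modifier component="bell" flare="0.3m"/>
</instrument>
```

Because each modifier is declared independently, a synthesis engine could implement one algorithm per component and compose them, which is what opens the door to physically impossible but acoustically plausible virtual instruments.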

The properties of a wide range of music instruments have been considered, from ancient acoustic instruments to modern ones, including the instruments of many music cultures. Based on a logical analysis and synthesis of previous research, rather than on acoustic lab results, a high-level, generic, and universal model of the information architecture of acoustic music instruments is constructed.
