Popular science reading about music and the brain

On this page I found a nice reading list of books about music and the brain. Several of these I haven’t read yet 🙂

  • Oliver Sacks – Musicophilia
  • Daniel Levitin – This Is Your Brain on Music
  • Aniruddh Patel – Music, Language, and the Brain
  • Alex Ross – Listen to This
  • Robert Jourdain – Music, the Brain, and Ecstasy
  • John Ortiz – The Tao of Music
  • Anthony Storr – Music and the Mind

Apart from pop-sci reading, there are also the documentary mini-series and movies:

 

Relevance of Neuromusicology for Music Research

After Steve Novella’s talk at TAM2012, where he mentioned the way news
reporting will often include a stock photo of a brain scan, I was
inspired to search for studies in neuromusicology. By chance I found
this paper, which seemed interesting.

Relevance of Neuromusicology for Music Research, Journal of New Music Research, 28 (1999), No. 3, pp. 186-199

Thoughts:

  • Helmholtz’s theory of pitch perception has long been important. I
    should check whether this is the same as what is described in Bigand’s
    work, in ‘Generative Theory of Tonal Music’ or in the “Connectionist
    Framework”.
  • If new materials are being produced to teach aural training
    (hørelære) in Denmark, are they being informed by knowledge from
    neuromusicology?
  • He seems to be arguing that, because music and musical phenomena are
    so complex and because we don’t have any other ways of doing so, it
    is OK to combine evidence-based approaches with e.g. musical
    intuition and music theory in order to choose which avenues to
    pursue. In other words, to use approaches whose scientific
    applicability we are unsure of. (p6, c2, l43 - p7, c1, l45)
The paper raises some interesting questions, but a Google Scholar
search for citations of this paper doesn’t seem to find any that
expand on what is discussed here (if I understand the Brazilian
articles correctly).

Errata:

p1, c2, l15: extend should be extent
p7, c2, l14: extend should be extent
p7, c2, l29: extend should be extent
p10, c1, l37: a]. should be al.

Perspectives

This fall I’ve been attending an introductory class in Music Psychology. The field is very intriguing, but since it has been an introductory course we haven’t gotten to dig very deeply into the different subjects.

Some of the new subjects that I have had to read about for the course have been interesting enough that I want to list them here. Hopefully it’ll make sense when seen together with my earlier post on algorithmic composition and where to go.

Cognition and musicology. Whether[1] music is an intrinsic result of, or a precondition for, having a brain capable of advanced speech.

Creativity and computers. Is it possible to simulate or recreate human creativity in a sufficiently advanced computer model? I read an article stipulating that the “human performance” that sets played music apart from digitally created music can be categorised as variations on a set of comparatively few parameters. With a sufficiently intelligent model it could be possible to recreate the effect using VSTs. A cognition model built with artificial intelligence could perhaps also learn to produce creativity through random variation and feedback.
Both of these make discussing the social and philosophical implications obvious questions.
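To make the “few parameters” idea concrete, here is a minimal sketch, assuming (my assumption, not the article’s) that human performance can be reduced to just two parameters: micro-timing jitter and velocity jitter applied to a mechanically exact score.

```python
import random

def humanize(notes, timing_sd=0.01, velocity_sd=5, seed=None):
    """Apply small random deviations in onset time and velocity to a
    mechanically exact note list, imitating a human performance.

    notes: list of (onset_seconds, midi_pitch, velocity) tuples.
    timing_sd: standard deviation of the timing jitter, in seconds.
    velocity_sd: standard deviation of the velocity jitter (MIDI 1-127).
    """
    rng = random.Random(seed)
    performed = []
    for onset, pitch, vel in notes:
        onset = max(0.0, onset + rng.gauss(0, timing_sd))   # micro-timing
        vel = min(127, max(1, round(vel + rng.gauss(0, velocity_sd))))
        performed.append((onset, pitch, vel))
    return performed

# A mechanically exact fragment: three notes, all at velocity 80.
mechanical = [(0.0, 60, 80), (0.5, 62, 80), (1.0, 64, 80)]
performed = humanize(mechanical, seed=42)
```

A real model would of course condition the deviations on musical context rather than using pure randomness, which is where the “sufficiently intelligent model” would come in.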

Algorithmic composition would need to, at least, simulate creativity to have success as more than a composition tool.

Not a topic from the course, but interesting:
Computer assisted composition, is it creativity in the traditional sense or is it ‘tinkering’, and does it matter either way?

Interesting… (for me at least 😉 )

[1] A Google search shows that about 2.7 million pages exist with the words “wether or not” rather than “whether or not” 🙂

Where to go?

Last year I had the opportunity to attend ICMC2007 in Copenhagen. A lot of different people attended the conference: instrument designers, composers, audiology therapists and of course developers and DSP researchers. The topics presented at the conference were accordingly broad, to appeal to all these different people, and even though a lot of it was very interesting to me, it wasn’t specific to what I want to explore.

So where can I go? I’ve come across a few possibilities, which are the basis of this post.

Also, for a possible application of algorithmically generated music, look at beatsuite.com

Note: Just found out that this post hadn’t been posted but was saved as a draft. Will have to remember to check in the future.

Reproduced performer

In this online article, researchers have achieved some success reproducing the physics of a clarinettist and a clarinet. They are of course refining the model, but they already have one that allows them to recreate passages of music quite well.

This is interesting even though it is a problem separate from the area I want to pursue. The focus of the article is mainly that music reproduced in this way is very compact compared to e.g. mp3.

Journals

It seems like my department hasn’t received the newest edition of the Computer Music Journal, but I had a look in the edition for Winter 2007.

There were a few interesting things.

The introduction mentioned a study into how different intervals sound consonant or dissonant because of the interference patterns that are created in the cochlea.

There was also an interesting article: Paul Nauert, Division- and Addition-Based Models of Rhythm in a Computer Assisted Composition System. Sadly the articles don’t seem to be accessible from the website, but I will have a go later through the Royal Library’s remote connection system. 🙂

Bachelor of Arts – where to go from here?

I seriously need to use bookmarks more – it took me 10 minutes to find my own blog. This also doesn’t reflect very well on the search function of the CU blog portal. Maybe it’ll get better as I add more entries.

I’ve finished my BA in musicology with the project “What is understood by the term Crunk in Denmark?”. I ended with a grade of 7 (ECTS: C), and I’m very sure that the oral defence helped raise the grade, so don’t expect too much if you start reading it 😉

I’ve enrolled for the two year masters degree in Musicology, but I’m not sure whether that’s the way I want to go. My priorities are split between working and studying, as they seem to be somewhat mutually exclusive, but I’m not really in doubt that I want to finish a masters degree at some point. The question will be which degree and when.

Overview

..or maybe just a slightly more detailed first post. This time in English.

In this blog I will try to focus on Computer Music, but what is Computer Music?

Computer Music isn’t:
a musical genre (techno, electronica and so on)
DSP (Digital signal processing)

The reason I single these things out is that they are what people at, respectively, the department of Computer Science and the department of Musicology often assume. So, what is it then?

For me, Computer Music is music produced by a computer, with the computer as the creator of the music. Typically, those who have studied computer music have been one of two kinds: either computer science researchers looking at implementations of DSP and at how to synthesise rhythmical patterns and melodic lines, or composers who see the algorithms in computer music as a(nother) way of producing original music.

I would like to focus my attention somewhere between the two:

Even though the parameters of the program are set by a person, can that person be said to be the creator of the music when the actual score, if not the sounds, is produced by the computer?

When a computer analyses a collection of one composer’s music, for instance through Markovian analysis, it is possible to create fair imitations of that composer’s music, derived solely from recognition of patterns in his works.
If you look at the recognised patterns, do they correspond to the traditional, scholarly recognised characteristic elements of the composer’s works?
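As a sketch of the kind of Markovian imitation described above (a toy first-order model over single pitches; a real analysis would use much richer state than this):

```python
import random
from collections import defaultdict

def train_markov(melody):
    """Count pitch-to-pitch transitions in a melody (list of MIDI pitches)."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def imitate(transitions, start, length, seed=None):
    """Generate a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:              # dead end: restart from the initial pitch
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

# A made-up "corpus" of one short melody; a real study would use whole works.
corpus = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]
table = train_markov(corpus)
imitation = imitate(table, start=60, length=8, seed=1)
```

Every step of the imitation is, by construction, a transition that occurs somewhere in the corpus, which is exactly why the output can sound stylistically plausible without any explicit music theory.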

Is it possible to make a computer generate commercially acceptable, synthetically scored music in “stereotypical genres”, like in-game background music for MMORPGs? MMORPGs, and traditional computer RPGs, usually use the in-game music to reflect the game character’s situation with respect to the game environment. For instance, there is music for “walking in the forest”, “walking in town” and “combat”. These of course vary in quality from game to game, but they are always pre-scored. This leads to two problems: when the game character’s environment changes, e.g. the character is walking in the forest and is attacked, the music abruptly changes from one type to the other; and if the game is played for a long time, the music can become repetitive.
Algorithmically composed music could ensure that the music never repeats. It could be possible to create models of “forest” and “combat” music and, instead of switching from one to the other, allow the forest music to be influenced by the combat model as the game character moves into danger. This would also create a different model mixture depending on the setting.
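A minimal sketch of such a blend, assuming each situation has its own transition-probability model (the two tiny models and the `danger` parameter here are invented for illustration):

```python
def blend_models(forest, combat, danger):
    """Linearly interpolate two transition-probability tables.

    forest/combat: dicts mapping state -> {next_state: probability}.
    danger: 0.0 (pure forest music) .. 1.0 (pure combat music).
    """
    blended = {}
    for state in set(forest) | set(combat):
        f = forest.get(state, {})
        c = combat.get(state, {})
        blended[state] = {
            nxt: (1 - danger) * f.get(nxt, 0.0) + danger * c.get(nxt, 0.0)
            for nxt in set(f) | set(c)
        }
    return blended

# Invented toy models: forest music mostly stays calm, combat mostly tense.
forest = {"calm": {"calm": 0.9, "tense": 0.1}}
combat = {"calm": {"calm": 0.2, "tense": 0.8}}

halfway = blend_models(forest, combat, danger=0.5)
```

Because the blend is a convex combination, each state’s probabilities still sum to 1, so the blended table is itself a valid model and the music can slide continuously from “forest” to “combat” as `danger` rises.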

This post has gone from trying to define my view of Computer Music to presenting a couple of examples. I’ll have to come back and explain some of the terms I used, like Markov analysis and algorithmic composition, but that’ll have to wait for some other time.

Feel free to comment if you’ve read this. I’m mostly writing this blog to collect my own thoughts on this, but it would be nice if anyone else finds it interesting.

Første indlæg / first post

…so what now?

Starting a blog was of course not a thoughtless act. Here I will try to present some of my thoughts on the academic interplay between computer science and musicology – the first focus will be Computer Music, but first there are some practical questions.

  • Should I write in English?
  • How often should I write?
  • How important is the layout of an academic blog?

All this and more in the next installment.