Hercules Seghers, "Piles of Books" (1629-1630), etching (via Rijksmuseum)

Hercules Seghers, “Piles of Books” (1629–1630), etching (via Rijksmuseum)

The future of e-books, or any electronic text, may be soundtracked. A new experiment in automation is generating music in response to the emotion of words in literature.

TransPose was created by Hannah Davis, a programmer, artist, and musician based in New York, and Saif Mohammad, a research officer at the National Research Council Canada focused on natural language processing and computational linguistics. As Davis and Mohammad admit on their site, “we don’t claim to be making beautiful music yet,” but the results are nonetheless intriguing.

The text of a novel — the first experiments include To Kill a Mockingbird and Alice in Wonderland — is segmented into four parts, with the octaves determined by the “joy and sadness densities” and the length of the notes set by the density of those emotions. The emotions themselves are identified through a database of words linked to eight sentiments: joy, anticipation, anger, disgust, trust, fear, surprise, and sadness. Each section of the book thus gets its own emotional profile.
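To make the mechanics a little more concrete, here is a minimal sketch of how per-section emotion densities might be computed from a word-emotion database. The tiny LEXICON dictionary and the function names (emotion_densities, section_profiles) are illustrative assumptions, not the project’s actual code or data.

```python
from collections import Counter

# Illustrative stand-in for a word-emotion association database;
# a real lexicon covers thousands of words. These entries are assumptions.
LEXICON = {
    "happy": {"joy", "trust"},
    "friend": {"joy", "trust"},
    "dark": {"fear", "sadness"},
    "murder": {"fear", "sadness", "anger", "disgust"},
    "sudden": {"surprise", "anticipation"},
}

EMOTIONS = ("joy", "anticipation", "anger", "disgust",
            "trust", "fear", "surprise", "sadness")


def emotion_densities(words):
    """Return the fraction of words associated with each emotion."""
    counts = Counter()
    for word in words:
        for emotion in LEXICON.get(word.lower(), ()):
            counts[emotion] += 1
    total = max(len(words), 1)
    return {e: counts[e] / total for e in EMOTIONS}


def section_profiles(text, n_sections=4):
    """Split a text into equal word-count sections and profile each one."""
    words = text.split()
    size = max(len(words) // n_sections, 1)
    return [emotion_densities(words[i * size:(i + 1) * size])
            for i in range(n_sections)]
```

In the project’s terms, the joy and sadness values of each section would then drive choices like octave and note length.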

Most literature is obviously a bit too complicated to make this an entirely accurate exercise, but based on the preliminary selections it isn’t far off. The TransPose piano pounds out a piece for A Clockwork Orange in the key of C Minor — with the first emotion fear and the second sadness — at a tempo of 171, while The Road plods along with the same emotions at a tempo of 42, and The Little Prince is sprightly in C Major with its emotions of trust and joy. As Davis and Mohammad state in their joint research paper on the project, they anticipate applications such as “audio-visual e-books that generate music when certain pages are opened — music that accentuates the mood conveyed by the text in those pages,” as well as film soundtracks or even a “tweet stream that is accompanied by music that captures the aggregated sentiment towards an entity.”
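As a rough illustration of how such profiles could translate into the keys and tempos described above, the sketch below maps the balance of positive versus negative emotion to a major or minor key and overall emotional activity to a tempo. The rules and thresholds here are invented for illustration and are not the mapping Davis and Mohammad actually use.

```python
def musical_parameters(profile):
    """Map a list of per-section emotion-density dicts to a key and tempo."""
    def avg(emotion):
        return sum(section.get(emotion, 0.0) for section in profile) / len(profile)

    # Invented rule: a major key when positive emotions outweigh negative ones.
    positive = avg("joy") + avg("trust")
    negative = avg("sadness") + avg("fear")
    key = "C major" if positive >= negative else "C minor"

    # Invented rule: livelier texts get a faster tempo, between roughly 40 and 180 BPM.
    activity = avg("joy") + avg("anger") + avg("surprise") + avg("anticipation")
    tempo = int(40 + min(activity * 350, 140))
    return key, tempo
```

Fed the section profiles from the earlier sketch, a text dominated by fear and sadness would land in a minor key at a slow tempo, in the spirit of the examples above.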

They explain that this is just the first stage, and as they investigate this algorithm-based music they hope to better gauge the rhythm of characters and “non-emotional features” like scene settings. Future Ennio Morricones likely don’t have to worry about being out of a job yet, but the experience of electronic literature and reading could soon be enhanced with a little mood-setting accompaniment.

More music from TransPose can be found on their site, and more on the project is discussed in the paper “Generating Music from Literature” by Hannah Davis and Saif Mohammad.

