Music composed by, and co-composed with, computers having “musical intelligence”.
St. Dunstan and All Saints, Stepney High Street, London, UK.
This unique concert will feature works created with computers as creative partners, drawing on a uniquely human tradition: instrumental folk music. We aren’t so interested in whether a computer can compose a piece of music as well as a human can, but rather in how we composers, musicians and engineers can use artificial intelligence to explore creative domains we hadn’t thought of before. This follows recent sensational stories of artificial intelligence producing both remarkable achievements – a computer beating humans at Jeopardy! – and unintended consequences – a chatbot mimicking racist tropes. We now live in an age, for better or worse, in which artificial intelligence is seamlessly integrated into the daily lives of many. It is easy to feel surrounded and threatened by these new tools, yet at the same time empowered by them. Find more information in our recent article at The Conversation: ‘Machine folk’ music composed by AI shows technology’s creative side.
Our concert centres on a computer program we have trained with over 23,000 “Celtic” tunes – the kind typically played in communities and at festivals around Ireland, France and the UK. We will showcase works in which composers and musicians co-create music with our program, drawing upon the features it has learned from this tradition and combining them with human imagination. A trio of traditional Irish musicians led by Daren Banarsë will play three sets of computer-generated “Celtic” tunes. Ensemble x.y will perform a work by Oded Ben-Tal, a 21st-century homage to the folk-song arrangements of composers such as Brahms, Britten and Berio. They will also perform a work by Bob L. Sturm created from material the computer program itself titled “Chicken”. You will hear pieces performed on the fine organ of St Dunstan generated by two computer programs co-creating music: our system generates a melody, and another system harmonises it in the style of a Bach chorale. Another work, by Nick Collins at Durham, blends computer models of three different musicians and composers: Iannis Xenakis, Ed Sheeran and Adele. Our concert will provide an exciting glimpse into how new musical opportunities are enabled by partnerships: between musicians from different traditions; between scientists and artists; and, last but not least, between humans and computers.
Other featured performers:
- Úna Monaghan, a composer and researcher currently based at Cambridge, will perform her works for Irish harp and live electronics, combining elements of Irish traditional music with computer sound controlled via a motion sensor and pitch detection.
- Elaine Chew, a musician and Professor at the Centre for Digital Music, will perform a series of solo piano works “re-composed” by MorpheuS.
Machine learning has been making headlines with its sometimes alarming progress in skills previously thought to be the preserve of humans. Now these artificial systems are “composing” music. Our event, part concert and part talk, aims to demystify machine learning for music. We will describe how we are using state-of-the-art machine learning methods to teach a computer specific musical styles, take the audience behind the scenes of such systems, and show how we are using them in both music performance and composition. Human musicians will play several works composed by and with such systems. The audience will see how these tools can augment human creativity rather than replace it.
Programme includes:
- Richard Salmon plays artificial Bach chorales on the St. Dunstan organ
- Daren Banarsë and musicians play computer-generated tunes in the Irish style
- Luca Turchet interprets computer-generated tunes on his new “Smart Mandolin”
- Ensemble x.y plays “Two short pieces and an interlude”, a composition co-created by Bob L. Sturm and computer; and “Bastard Tunes”, a composition co-created by Oded Ben-Tal and computer
- Jennifer Walshe performs a new work interpreting computer-generated text
What will become of music when machines are capable of creative acts like composing?
The pieces in this concert come from our application of machine learning to model a crowd-sourced collection of over 23,000 music transcriptions available online at https://thesession.org. This collection consists of many transcriptions of traditional music played in Ireland and the UK. Here is one example transcription with symbols specifying the meter, key and basic melody of the tune “The Morning Lark”:
M: 6/8
K: Dmaj
AFD D2A,|DEF Adc|BGG DGG|B2B BdB|AFD D2A,|DEF A3|def gfe|fd^c d2B:|
|:ABd fdd|add fdB|Add fed|edB BAF|Add fdd|add fdd|faf ede|fd^c d2B:|
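Internally, a computer treats such a transcription as a sequence of discrete symbols. The sketch below, a minimal Python illustration rather than our system’s actual code, shows one way of splitting a tune into such tokens; the token pattern here is our illustrative assumption, not the exact vocabulary our system uses:

```python
import re

# A minimal, illustrative tokenizer for ABC-style transcriptions like the
# one above. It splits a tune into the kinds of symbols the text mentions:
# meter and key fields, notes (with accidentals, octave marks and
# durations), rests, and bar/repeat lines.
TOKEN_PATTERN = re.compile(
    r"M:\s*\S+"                  # meter field, e.g. M:6/8
    r"|K:\s*\S+"                 # key field, e.g. K:Dmaj
    r"|\|:|:\||\|"               # repeat signs and bar lines
    r"|[=^_]?[A-Ga-g][,']*\d*"   # a note: accidental, pitch, octave, duration
    r"|z\d*"                     # a rest, e.g. z2
)

def tokenize(transcription: str) -> list[str]:
    """Split an ABC-style transcription into a sequence of musical tokens."""
    return TOKEN_PATTERN.findall(transcription)

print(tokenize("M:6/8 K:Dmaj AFD D2A,|DEF Adc|"))
# ['M:6/8', 'K:Dmaj', 'A', 'F', 'D', 'D2', 'A,', '|', 'D', 'E', 'F', 'A', 'd', 'c', '|']
```

Sequences of such tokens are what a model of the collection actually reads and writes; turning its output back into a playable transcription is just a matter of joining the tokens together.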
Our machine learning system (folk-rnn) creates models of these transcriptions, which one can then use to generate any number of new transcriptions that resemble those in the real collection. Here’s one example transcription completely “composed” by our models:
M:6/8
K:Dmix
|:A2D D2B|AGE c2A|GEE c2E|E2D DEG|A2D D2B|AGE c2d|ecA GEA|D3 D3:|
|:Add Add|ede dcA|GEE cEE|GAB cBc|Add Add|edc AGA|BcB AGE|D3 D3:|
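To demystify what “creates models” means here: the model is trained to predict the next token of a transcription given the tokens so far, and new tunes are generated by sampling one token at a time. The sketch below (written with PyTorch) shows this in broad strokes; the architecture, layer sizes and sampling loop are generic illustrations, not folk-rnn’s actual implementation.

```python
import torch
import torch.nn as nn

# A generic next-token model over the musical tokens described above.
# Layer sizes and vocabulary are illustrative assumptions.
class TuneModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=3, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)           # (batch, time) -> (batch, time, embed)
        x, state = self.lstm(x, state)   # recurrent state carries the tune so far
        return self.head(x), state       # logits over the token vocabulary

@torch.no_grad()
def sample(model, start_id: int, end_id: int, max_len: int = 200, temperature: float = 1.0):
    """Generate one transcription, token by token, until the end-of-tune
    token (or a length cap) is reached."""
    token = torch.tensor([[start_id]])
    state, out = None, []
    for _ in range(max_len):
        logits, state = model(token, state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        next_id = torch.multinomial(probs, 1).item()
        if next_id == end_id:
            break
        out.append(next_id)
        token = torch.tensor([[next_id]])
    return out
```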
We are interested in how such machine learning models can contribute to music practice, both in and out of the traditional practices of the training data. We are also interested in how the methods used to create such models can themselves be improved by working together with practitioners.
The works in this concert demonstrate different ways of making music with our models. Sturm builds his three short works by curating from a large number of tunes generated by the model. In fact, the example transcription above is the subject of his “March to the Mainframe”. Another work, “The Humours of Time Pigeon”, comes from a failure of the model – though nonetheless the right kind of failure. To create material for his piece “Bastard Tunes”, Ben-Tal interacts with the model by seeding its generation process with melodic fragments that differ from the patterns in its training data, “prodding” it out of its “Irish training”. Each movement of “Bastard Tunes” is built from the continuations the model generates.
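This “seeding” can be pictured as a small extension of the sampling sketch above: the chosen melodic fragment is fed through the network first, so that generation continues from wherever the fragment leads it. Again, this is a hypothetical sketch, not the project’s actual interface:

```python
@torch.no_grad()
def sample_from_seed(model, seed_ids, end_id, max_len=200, temperature=1.0):
    """Prime the network with a human-chosen melodic fragment (a list of
    token ids), then let it continue the tune from there."""
    logits, state = model(torch.tensor([seed_ids]))  # absorb the whole seed
    out = list(seed_ids)
    for _ in range(max_len):
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        next_id = torch.multinomial(probs, 1).item()
        if next_id == end_id:
            break
        out.append(next_id)
        logits, state = model(torch.tensor([[next_id]]), state)
    return out  # the seed followed by the model's continuation
```

A fragment far from the model’s “Irish training” pushes the sampled continuations away from the patterns of the training data, which is exactly the creative friction the piece exploits.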
Technological innovation has transformed music in the past, from the development of the piano to the gramophone to online streaming services. Machines of all kinds play a significant role in our musical lives, from the recording studio to the devices through which we listen. What will be the impact of today’s technologies on music, musicians and audiences? While there is much concern about artificial intelligence replacing or even displacing humans, this concert shows the potential for these approaches to augment human creativity. Machine learning for music opens new avenues for engaging with music and for developing new practices.
Join us at St. James’s Sussex Gardens for an evening featuring artificial and biological intelligences working together. The evening, both a concert and a demonstration, presents a diverse programme of music created with the assistance of folk-rnn, a machine learning system trained on folk music from Ireland and the UK. folk-rnn can create music both within that traditional style and outside it.
You’ll hear some of London’s best Irish musicians playing a mix of traditional and generated tunes. Will you be able to tell the difference? Pieces generated by folk-rnn and harmonised by a different system trained on the music of J. S. Bach (DeepBach) will be performed on the church organ. The New Music Players will perform pieces composed interactively with the system and premiere the winning piece of the folk-rnn composition competition, Gwyl Werin for mixed quartet by Derri Joseph Lewis. The piece was judged and selected by three leading experts in music and artificial intelligence: interdisciplinary artist Elaine Chew, musician and Google AI researcher Sageev Oore, and composer and researcher Oded Ben-Tal. Ben-Tal will discuss this research project, conducted with his colleague Bob Sturm of KTH Stockholm.
Dessert, wine, and refreshments will be served.
The concert features the metamorphosis of music into data and of data into music. Some works cast data as sound (sonification); others feature material generated by artificial intelligence (AI) trained on large music databases. The composers on the concert are Margaret Schedel (USA), Oded Ben-Tal (UK), Zoë Gorman (UK), Scott Cazan (SE), Bob L. T. Sturm (SE), and various music AIs.