Neural Nets and Music: Can Artificial Intelligence Be Creative?

Sound Machines 2.0 is not a techno punk rock band. It’s not even a human music group – it’s a self-playing, auto-composing robot quintet designed by the engineering firm Festo. Using advanced artificial intelligence, this “band” analyzes the acoustic fingerprints of existing musical pieces and then generates and performs its own original compositions.

Photo credit: http://www.festo.com/

Each of these periscope-like machines (pictured on the right) produces sound with bouncing pistons that strike a piano hammer against a string. To adjust the pitch, the string is sectioned off by a programmable slider that mimics human fingers sliding up and down a fingerboard. Though the instruments are electronically controlled, the music they produce bears unmistakable qualities of Baroque- and Romantic-period music.
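For an ideal string at fixed tension, pitch is inversely proportional to the vibrating length, which is how a slider can stand in for a finger. The short Python sketch below illustrates that relationship; the open-string length and pitch are invented values for illustration, since Festo has not published its control code.

```python
# Illustrative sketch: for an ideal string at fixed tension, frequency is
# inversely proportional to vibrating length, so a slider position follows
# directly from the target pitch. OPEN_LENGTH_M and OPEN_FREQ_HZ are assumed
# values, not Festo specifications.
OPEN_LENGTH_M = 1.0    # assumed vibrating length of the open string, in meters
OPEN_FREQ_HZ = 110.0   # assumed open-string pitch (A2), in Hz

def vibrating_length_for(freq_hz: float) -> float:
    """Length of string the slider should leave free to sound freq_hz."""
    return OPEN_LENGTH_M * OPEN_FREQ_HZ / freq_hz

print(vibrating_length_for(220.0))  # one octave up -> half the length: 0.5
```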

How does it work?

In a purely procedural sense, Sound Machines 2.0 follows the same general approach that human composers do. The model begins by developing a motif (a short recurring musical phrase) and then expands upon it. The skill of a composer, then, lies in taking this simple idea and transforming it into an exquisitely layered, fully harmonized composition.
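One way to picture this expansion in code is to encode a motif as a list of pitch classes and derive new material through classical transformations such as transposition and inversion. The sketch below is purely illustrative and assumes nothing about Festo’s actual algorithm, which has not been published.

```python
# Illustrative sketch: develop a motif by chaining simple transformations.
# The motif and the particular transformations are assumptions for the example.
motif = [0, 4, 7, 4]  # pitch classes (C, E, G, E)

def transpose(phrase, interval):
    """Shift every note of the phrase up by a fixed interval (mod 12)."""
    return [(p + interval) % 12 for p in phrase]

def invert(phrase):
    """Mirror the phrase around its first note."""
    root = phrase[0]
    return [(2 * root - p) % 12 for p in phrase]

# Expand the four-note idea into a longer line.
development = motif + transpose(motif, 5) + invert(motif) + transpose(motif, 7)
print(development)
```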

On a microscopic scale, the process of composing can be broken down mathematically into simple transitions between tones and chords that satisfy the rules of harmony. Assembling an entire piece from these simple building blocks, however, is extremely difficult and caused great anguish even to the most skilled composers of centuries past. Beethoven is said to have mulled over his sonatas for years, and Bach reportedly took several weeks at a time to put together his elaborate polyphonic compositions. Even Mozart, widely regarded as the most ingenious composer ever to have lived, would spend entire days on just one movement of a set.

Now, with advanced algorithms and robust hardware, Sound Machines 2.0 can produce full-length compositions in mere minutes by analyzing human music and attempting to recreate it. Does this drastically improved efficiency (and potential output volume) make Sound Machines 2.0 a better composer than these centuries-old household names?

To answer this, we must dive into how music-making machines decide what music is “good” or “bad.” Sound Machines 2.0 and other systems like it use artificial neural networks to refine their compositions through trial and error. Neural nets are multi-layer software systems designed to loosely imitate how biological neurons process information, performing tasks such as pattern and symbol recognition. They are composed of abstract neuron-like structures – called nodes – that pass information unidirectionally from layer to layer, each step refining the signal until the desired final result is returned. When a robot “learns,” it uses a feedback mechanism: it takes in massive data sets of popular human-created music and gradually adjusts itself to emulate their various styles.
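To make that layered, one-directional flow concrete, here is a minimal feedforward pass in Python with NumPy. The layer sizes (12 inputs, one per pitch class, scoring 12 candidate next notes) are assumptions chosen for the example; Festo has not disclosed the actual network architecture.

```python
# Minimal feedforward pass: each layer transforms its input and hands the
# refined signal on to the next layer, exactly once, in one direction.
import numpy as np

rng = np.random.default_rng(0)

layer_sizes = [12, 32, 12]  # assumed sizes: 12 pitch-class inputs, 12 output scores
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass a signal unidirectionally through the layers, refining it each time."""
    for w, b in zip(weights, biases):
        x = np.tanh(x @ w + b)  # node activations for this layer
    return x

scores = forward(rng.normal(size=12))  # scores for 12 possible next notes
print(scores)
```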

In composing new music, the user of a robot music group would likely begin by feeding the algorithm digitized representations of songs in the desired style. The computer then assigns random values to a given set of variables and executes the program. The output at this point is unlikely to resemble the input music, but the computer measures the similarity between the two and adjusts the variable values accordingly. After many iterations and refinements, it can produce a new song that shares characteristics with the given musical style.
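The loop just described (random starting values, execute, compare, adjust, repeat) can be shown in miniature. The toy below fits a 12-by-12 note-transition table to a tiny melody by gradient descent; the melody, model, and learning rate are all invented for illustration and are far simpler than anything a real system would use.

```python
# Toy refinement loop: start with random parameters, generate output,
# measure its distance from the training music, and nudge the parameters.
# Everything here is an invented, miniature stand-in for a real system.
import numpy as np

rng = np.random.default_rng(1)

# Training melody as pitch classes (0 = C, 4 = E, 7 = G).
melody = np.array([0, 4, 7, 4, 0, 4, 7, 0])

# One-hot pairs: given the current note, predict the next note.
x = np.eye(12)[melody[:-1]]
y = np.eye(12)[melody[1:]]

W = rng.normal(scale=0.1, size=(12, 12))  # step 1: random starting values

for step in range(500):                # repeat the refinement many times
    pred = x @ W                       # execute the program
    error = pred - y                   # compare the output with the input music
    W -= 0.1 * x.T @ error / len(x)    # adjust the variable values

print(np.argmax(W[0]))  # most likely note after C: prints 4 (an E), as in the melody
```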

The reason for this perceived ease of music production is that the 12-tone system used in Western music is deeply mathematical: pitches, intervals, and chords all map cleanly onto numbers. This quality allows music to be easily represented and imitated by algorithms, but it also raises the question: can robots really compose creative, unique music?
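The mathematics is easy to see in the tuning system itself: in equal temperament, each of the 12 semitones multiplies frequency by the twelfth root of two, so every pitch follows from a single reference note. A quick Python check:

```python
# Equal temperament: each semitone scales frequency by 2**(1/12).
A4 = 440.0  # standard concert-pitch reference, in Hz

def pitch_hz(semitones_from_a4: int) -> float:
    """Frequency of the note a given number of semitones above or below A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(round(pitch_hz(3), 2))    # C5: 523.25 Hz
print(round(pitch_hz(-9), 2))   # C4 (middle C): 261.63 Hz
print(pitch_hz(12))             # A5: 880.0 Hz, exactly one octave up
```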

One could argue that artificial intelligence exhibits at most what might be called secondary, or “derivative,” creativity, given that the foundational material for all of its compositions comes from external sources. After all, creativity draws not on a single input source but on a combination of social and personal contexts. We enjoy music because it is an emotional experience rooted in an artist’s personal background and life story.

In an article published by ABC Australia, Jon McCormack of Monash University said, “So much of what we think about art is humans communicating to each other.” So when robots make music with no personal or human influences to draw from, it can hardly be deemed original or sensational.

Although there is still much uncertainty about the future of machine-produced music, there is undeniably growing interest in exploring the capabilities of machine creativity. Hand in hand with this development comes progress not in a machine’s ability to replace human artistry, but in our ability to unite the two to create compelling, artistic content enjoyable across all mediums.