This presentation is a kind of diary of research in sound and music generation by computer. Throughout its duration I will play various recordings and examples. My research has two directions, the first of which is the use of recurrent neural networks for jazz improvisation. I have trained various recurrent neural networks to reproduce human renditions of jazz melodies, using notes and chords as input. The trained networks are then used in different ways to produce new melodies. I will discuss representation issues, training issues, and the use of reinforcement learning with the trained networks.
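To make the setup concrete, here is a minimal sketch of the kind of network involved: a simple Elman-style recurrent cell that consumes a melody note together with its current chord and outputs a distribution over the next note. The encodings (pitch classes, chord roots), dimensions, and weights are illustrative assumptions, not the actual model or representation used in this research.

```python
import numpy as np

# Illustrative sketch only: an untrained Elman RNN whose input is a
# one-hot melody note concatenated with a one-hot chord root, and whose
# output is a probability distribution over the next melody note.
# All sizes and encodings are hypothetical choices for this example.

rng = np.random.default_rng(0)

N_NOTES = 12    # melody pitch classes (assumed encoding)
N_CHORDS = 12   # chord roots (assumed encoding)
N_HIDDEN = 16

W_xh = rng.normal(scale=0.1, size=(N_HIDDEN, N_NOTES + N_CHORDS))
W_hh = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_hy = rng.normal(scale=0.1, size=(N_NOTES, N_HIDDEN))

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(h, note, chord):
    """One recurrent step: consume (note, chord), return the new hidden
    state and a distribution over the next melody note."""
    x = np.concatenate([one_hot(note, N_NOTES), one_hot(chord, N_CHORDS)])
    h = np.tanh(W_xh @ x + W_hh @ h)
    return h, softmax(W_hy @ h)

# Run the (untrained) network over a short note/chord sequence.
h = np.zeros(N_HIDDEN)
melody, chords = [0, 4, 7, 4], [0, 0, 5, 5]
for note, chord in zip(melody, chords):
    h, p = step(h, note, chord)
```

Once trained on human renditions, a network like this can generate new melodies by sampling from `p` at each step and feeding the sampled note back in as the next input.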
The second research direction is an effort to use computer science algorithms to generate grains of sound, i.e., granular synthesis. Part of this is to demonstrate algorithms through sound, and to have a little fun. Most recently we have used reinforcement learning to control a granular synthesis engine in order to shape certain spectral characteristics of the resulting clouds of grains. This is work in progress, and we will listen to current results.
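As a rough illustration of the objects involved, the sketch below scatters short Hann-windowed sine grains into a buffer to form a cloud, then computes its spectral centroid, the kind of scalar spectral characteristic a reinforcement-learning controller could be asked to steer. Grain count, durations, and pitch range are arbitrary assumptions for the example; this is not the actual synthesis engine.

```python
import numpy as np

# Illustrative granular-synthesis sketch: scatter short Hann-windowed
# sine grains at random onsets and pitches into a buffer ("cloud"),
# then measure one spectral characteristic of the result.
# All parameters here are hypothetical.

rng = np.random.default_rng(1)
SR = 44100            # sample rate (Hz)
DUR = 1.0             # cloud length (s)
GRAIN_MS = 40         # grain length (ms)
N_GRAINS = 200

cloud = np.zeros(int(SR * DUR))
g_len = int(SR * GRAIN_MS / 1000)
t = np.arange(g_len) / SR
window = np.hanning(g_len)

for _ in range(N_GRAINS):
    freq = rng.uniform(200.0, 2000.0)              # random grain pitch
    onset = rng.integers(0, len(cloud) - g_len)    # random grain onset
    cloud[onset:onset + g_len] += window * np.sin(2 * np.pi * freq * t)

cloud /= np.max(np.abs(cloud))                     # normalize

# Spectral centroid: the amplitude-weighted mean frequency of the cloud,
# one scalar an RL agent could be rewarded for pushing toward a target.
spectrum = np.abs(np.fft.rfft(cloud))
freqs = np.fft.rfftfreq(len(cloud), 1 / SR)
centroid = (freqs * spectrum).sum() / spectrum.sum()
```

In an RL framing, the grain parameters (pitch range, density, durations) would form the action space, and the distance between the measured centroid and a target value would drive the reward.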