Do Androids Dream of EDM?

A RubyKaigi Talk by Julian and Eric

About

This talk and repository show how to use a recurrent neural network (specifically, an LSTM) to generate music from training MIDI files with TensorFlow's Magenta project and Ruby.

Dependencies

  • Python, with the packages listed in requirements.txt (including Magenta/TensorFlow)
  • Ruby, with Bundler for the gems in the Gemfile
  • Optionally, timidity for listening to the generated MIDI files

Installation

  1. Install Python requirements ($ pip install -r requirements.txt). We recommend using VirtualEnv to manage your Python version(s) and dependencies.
  2. Install Ruby requirements ($ bundle).

Training and Generating Music

You can train and use the LSTM neural network as follows:

  1. Place the training MIDI files in the midi/ directory.
  2. Change to the Ruby directory: $ cd src/rb.
  3. Run the main Ruby file: $ ruby main.rb.

This will convert the MIDI files to a TFRecord file (which contains NoteSequence protocol buffers), create SequenceExamples from the TFRecord file, train the network on the data, and generate music from the resulting checkpoints. You can view the training and evaluation data via $ tensorboard --logdir=src/py/melody_rnn/checkpoints:

(Screenshot: TensorBoard showing the training and evaluation runs for the melody RNN.)

Generated music will be written to the generated/ directory in the root of this project. We use timidity to listen to it: $ brew install timidity && timidity path_to_your.midi.
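Under the hood, the pipeline described above corresponds roughly to driving Magenta's Melody RNN command-line tools. The following is a minimal sketch of that flow in Ruby; the paths, hyperparameters, and flags are illustrative assumptions based on Magenta's Melody RNN tooling, not code copied from this repo's main.rb:

```ruby
# Sketch of the MIDI -> TFRecord -> SequenceExamples -> train -> generate flow.
# Paths and flags are illustrative, not this repo's actual configuration.

MIDI_DIR       = File.expand_path('../../midi', __dir__)
NOTE_SEQUENCES = 'notesequences.tfrecord'
SEQ_EXAMPLES   = 'sequence_examples'
RUN_DIR        = File.expand_path('../py/melody_rnn/checkpoints', __dir__)
GENERATED_DIR  = File.expand_path('../../generated', __dir__)

def run!(cmd)
  puts "+ #{cmd}"
  system(cmd) || abort("Command failed: #{cmd}")
end

# 1. Convert MIDI files to NoteSequence protocol buffers in a TFRecord file
run! "convert_dir_to_note_sequences --input_dir=#{MIDI_DIR} --output_file=#{NOTE_SEQUENCES}"

# 2. Turn the NoteSequences into SequenceExamples for training and evaluation
run! "melody_rnn_create_dataset --config=basic_rnn --input=#{NOTE_SEQUENCES} " \
     "--output_dir=#{SEQ_EXAMPLES} --eval_ratio=0.10"

# 3. Train the LSTM on the SequenceExamples
run! "melody_rnn_train --config=basic_rnn --run_dir=#{RUN_DIR} " \
     "--sequence_example_file=#{SEQ_EXAMPLES}/training_melodies.tfrecord " \
     "--num_training_steps=2000"

# 4. Generate new MIDI files from the trained checkpoints
run! "melody_rnn_generate --config=basic_rnn --run_dir=#{RUN_DIR} " \
     "--output_dir=#{GENERATED_DIR} --num_outputs=10 --num_steps=128 " \
     "--primer_melody='[60]'"
```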

Roadmap

  • Generate MIDI files using Magenta and Python.
  • Call into the Python code using the rubypython gem (currently very minimal; see the sketch after this list).
  • Help extend tensorflow.rb to more seamlessly integrate Ruby + Magenta.
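
For context, the rubypython gem embeds a Python interpreter that Ruby can drive directly. A minimal, self-contained example of the mechanism (the math module here is just a stand-in, not part of this project):

```ruby
require 'rubypython'

# Minimal example of calling Python from Ruby with the rubypython gem.
# The math module is only a stand-in; the same mechanism can import
# Magenta's Python modules.
RubyPython.start

math = RubyPython.import('math')
puts math.sqrt(16).rubify # => 4.0 (rubify converts the PyObject to a Ruby value)

RubyPython.stop
```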
