Full-featured Algorithmic Intelligence Music Augmentator (AIMA) with multi-instrument MIDI output.
A perfect tool for a musician or a composer to stay competitive and relevant in the era of Artificial Intelligence :)
-
Without working, functional General Artificial Intelligence (GAI), the creation of proper Music AI is NOT currently possible. All current SOTA Music AI implementations (e.g. MuseNet or Magenta) rely on similar music-augmentation algorithms, heavy pre/post music/MIDI processing, and other tricks/hacks to compensate for the shortcomings of regular AI (as opposed to GAI).
-
No need for 10k USD GPUs to train/run the code/software. All you need is the cheapest computer/CPU to use/run the MM code. MM is small and fast enough to be deployed/run on a Raspberry Pi (see the MM repo for the Raspberry Pi code/implementation).
-
Super fast "training" on MIDI dataset/super fast music generation. It takes about 10-20 minutes to process/tune the average MIDI dataset with MM as opposed to hours or days with AI implementations. Same applies to music generation, as It takes the cheapest computer w/o a GPU and about 1 minute to generate an orignial performance with MM.
-
The code/implementation/ideas used for MM can be adapted for raw audio/music generation.
-
MM does NOT have the same ethical and copyright issues as AI models/systems, as it is pure algorithms/code/regular software, while offering similar output and quality of music.
Video: https://youtu.be/46hKTkU7CDU
Option 1:
- Click on Meddleying_MAESTRO.ipynb above
- Click on the blue "Open in Colab" button in the GitHub preview
Option 2 (a consolidated shell sketch follows these steps):
- git clone https://github.com/asigalov61/Meddleying-MAESTRO/
- cd Meddleying-MAESTRO/
- Install all requirements listed in the Requirements folder
- Unzip the provided MIDI dataset archive in Dataset into the Dataset folder, or copy your own MIDIs into Dataset
- python MM_MIDI_Processor.py
- python MM_Generator.py
- If everything worked, a graph of the composition will pop up. Close it to start the fluidsynth player.
- Type quit in fluidsynth to return to the command-line prompt.
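For convenience, here is a minimal sketch of the Option 2 steps as a single shell session. It assumes a Linux/macOS shell with git and Python 3 installed; the exact dependency list and dataset archive names live in the repo's Requirements and Dataset folders, so those two steps are left as comments rather than guessed commands.

```bash
# Minimal sketch of Option 2 (assumes a Linux/macOS shell with git and Python 3).
git clone https://github.com/asigalov61/Meddleying-MAESTRO/
cd Meddleying-MAESTRO/

# 1) Install the packages listed in the Requirements folder (e.g. with pip).
# 2) Unzip the provided MIDI dataset into Dataset/, or copy your own .mid files there.

python MM_MIDI_Processor.py   # process/"tune" on the MIDIs in Dataset/
python MM_Generator.py        # generate a performance, show the graph, then play it via fluidsynth
```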
Enjoy! :)