Simple visual demo of the Expressive Guitar Technique classifier, coded in Processing.
The Expressive Guitar Technique Classifier is a deep learning algorithm that can classify the expressive technique used by a guitarist.
The classifier was developed to run on an embedded system (running ElkOS) and to produce a prediction of the technique used with a maximum latency of 20 ms from each individual note.
The recognition information can be used in real time to trigger or modify synthetic sounds and prerecorded audio samples, or to control stage lighting, fog machines, video transitions, and more.
This repo contains the code for a set of simple visuals that demonstrate the potential of the system.
Depending on the technique predicted by the classifier (indices 0 to 7), the visual changes color and animation speed.
The sketch receives OSC messages in one of the following formats:

- /guitarClassifier/class i <predicted_technique[0-7]>
- /guitarClassifier/class if <predicted_technique[0-7]> <confidence>
- /guitarClassifier/class ffffffff <conf.class0> <conf.class1> <conf.class2> <conf.class3> <conf.class4> <conf.class5> <conf.class6> <conf.class7>
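
For illustration, a minimal Processing sketch along these lines can receive and parse the messages with the oscP5 library. The listening port (9001) and the hue/speed mapping below are assumptions made for this example and may differ from the actual sketch in this repo.

```processing
// Minimal sketch (assumptions: oscP5 installed, classifier sends to port 9001;
// the hue/speed mapping is illustrative, not necessarily the repo's mapping).
import oscP5.*;
import netP5.*;

OscP5 oscP5;
int currentClass = 0;      // last predicted technique (0-7)
float confidence = 1.0;    // confidence of the last prediction, if provided
float angle = 0;           // rotation angle driving the animation

void setup() {
  size(640, 640);
  colorMode(HSB, 360, 100, 100);
  oscP5 = new OscP5(this, 9001);   // listening port is an assumption
}

void draw() {
  background(0);
  // Map the predicted class to a hue and to a rotation speed.
  float hue = map(currentClass, 0, 7, 0, 300);
  float speed = 0.02 + currentClass * 0.01;
  angle += speed;

  translate(width / 2, height / 2);
  rotate(angle);
  noStroke();
  fill(hue, 80, 100 * confidence);
  rectMode(CENTER);
  rect(0, 0, 200, 200);
}

// oscP5 calls this callback for every incoming OSC message.
void oscEvent(OscMessage msg) {
  if (!msg.checkAddrPattern("/guitarClassifier/class")) return;

  if (msg.checkTypetag("i")) {
    // Format 1: class index only.
    currentClass = msg.get(0).intValue();
    confidence = 1.0;
  } else if (msg.checkTypetag("if")) {
    // Format 2: class index plus confidence.
    currentClass = msg.get(0).intValue();
    confidence = msg.get(1).floatValue();
  } else if (msg.checkTypetag("ffffffff")) {
    // Format 3: one confidence per class; take the argmax.
    int best = 0;
    for (int i = 1; i < 8; i++) {
      if (msg.get(i).floatValue() > msg.get(best).floatValue()) best = i;
    }
    currentClass = best;
    confidence = msg.get(best).floatValue();
  }
}
```

All three message formats reduce to a single class index plus an optional confidence value, which the draw loop then turns into color and speed.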
Inspired by basboy12's Processing_audio_visualizer.