A GUI for the Neutrino neural singing synthesizer, written in Flutter (and using synaps!).
Coming soon! Progress so far:
- Added undo/redo
- Voices now cache their rendered audio until something in that voice is edited (see the sketch after this list)
- Fixed some minor bugs with reactivity
- Cleaned up code, abstracted the pianoroll to facilitate future extensions (e.g. f0/mgc editing)
- Upgraded to use the synaps state management library (and as a result cleaned up more code!)
- Fixed bugs with note dragging across semitones
- Cleaned up code, documented many functions and classes
- Fixed a bug with the flutter_desktop_audio plugin that caused deadlocks
- Basic functionality implemented
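The per-voice audio caching mentioned above works roughly as in the sketch below. This is only an illustration of the idea: the class and method names (Voice, Note, audio, _render) are hypothetical and are not taken from this project's actual source.

```dart
import 'dart:typed_data';

// Illustrative only: names here do not reflect the real project API.
class Note {
  Note(this.semitone, this.start, this.duration);

  int semitone;
  double start;
  double duration;
}

class Voice {
  final List<Note> notes = [];

  // Cached render of this voice; null means it is stale and must be re-rendered.
  Float32List? _cachedAudio;

  // Any edit to the voice invalidates the cache.
  void addNote(Note note) {
    notes.add(note);
    _cachedAudio = null;
  }

  void transposeNote(Note note, int semitones) {
    note.semitone += semitones;
    _cachedAudio = null;
  }

  // Re-render only when the cache has been invalidated; otherwise reuse it.
  Float32List audio() => _cachedAudio ??= _render();

  Float32List _render() {
    // Placeholder for the actual NEUTRINO synthesis call.
    return Float32List(0);
  }
}
```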
Building Steps:
- Clone this repository somewhere
- flutter pub get -> gets all packages
- flutter pub run build_runner build -> builds the generated files for json_serializable/synaps
- flutter build windows/macos/linux -> sigh, just read this: https://flutter.dev/desktop#distribution
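For convenience, the same steps as copy-pasteable commands (shown for a Windows target; substitute macos or linux as needed, and see the Flutter desktop docs linked above for distribution details). Replace <repo-url> with this repository's clone URL:

```sh
git clone <repo-url> neutrino_gui   # clone this repository somewhere
cd neutrino_gui
flutter pub get                     # gets all packages
flutter pub run build_runner build  # builds the json_serializable/synaps generated files
flutter build windows               # or: flutter build macos / flutter build linux
```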
N.B. I use a patched version of flutter_desktop_audio which I do not plan to release publicly, because I do not wish to maintain that codebase. I will eventually replace that module with my own FFI-based audio playback library. Until then, if you want to build this yourself, shoot me a message and I can give you source access to my version of flutter_desktop_audio.
This is my first Dart/Flutter project, and I am still learning how the Flutter engine works. Though I can figure out how to write logic for a component, I may not know the best way to do it, and I am certainly not familiar with the best practices for Dart and Flutter. The more code I write, the more I will learn, and the more my code quality will improve. I am sorry in advance.