
React Audio Components (JS & TSX)


Web Audio API

Introduction

Audio on the web has been fairly primitive up to this point and until very recently has had to be delivered through plugins such as Flash and QuickTime. The introduction of the [audio](https://html.spec.whatwg.org/multipage/media.html#audio) element in HTML5 is very important, allowing for basic streaming audio playback. But, it is not powerful enough to handle more complex audio applications. For sophisticated web-based games or interactive applications, another solution is required. It is a goal of this specification to include the capabilities found in modern game audio engines as well as some of the mixing, processing, and filtering tasks that are found in modern desktop audio production applications.

The APIs have been designed with a wide variety of use cases [webaudio-usecases] in mind. Ideally, it should be able to support any use case which could reasonably be implemented with an optimized C++ engine controlled via script and run in a browser. That said, modern desktop audio software can have very advanced capabilities, some of which would be difficult or impossible to build with this system. Apple’s Logic Audio is one such application which has support for external MIDI controllers, arbitrary plugin audio effects and synthesizers, highly optimized direct-to-disk audio file reading/writing, tightly integrated time-stretching, and so on. Nevertheless, the proposed system will be quite capable of supporting a large range of reasonably complex games and interactive applications, including musical ones. And it can be a very good complement to the more advanced graphics features offered by WebGL. The API has been designed so that more advanced capabilities can be added at a later time.

Using the Web Audio API (MDN Web Docs)

Let's take a look at getting started with the Web Audio API. We'll briefly look at some concepts, then study a simple boombox example that allows us to load an audio track, play and pause it, and change its volume and stereo panning.

The Web Audio API does not replace the `<audio>` media element, but rather complements it, just like `<canvas>` coexists alongside the `<img>` element. Your use case will determine what tools you use to implement audio. If you want to control playback of an audio track, the `<audio>` media element provides a better, quicker solution than the Web Audio API. If you want to carry out more complex audio processing, as well as playback, the Web Audio API provides much more power and control.

A powerful feature of the Web Audio API is that it does not have a strict "sound call limitation". For example, there is no ceiling of 32 or 64 sound calls at one time. Some processors may be capable of playing more than 1,000 simultaneous sounds without stuttering.

Our boombox looks like this:

A boombox with play, pan, and volume controls

Note the retro cassette deck with a play button, and vol and pan sliders to allow you to alter the volume and stereo panning. We could make this a lot more complex, but this is ideal for simple learning at this stage.

Check out the final demo here on Codepen, or see the source code on GitHub.

Modern browsers have good support for most features of the Web Audio API. There are a lot of features of the API, so for more exact information, you'll have to check the browser compatibility tables at the bottom of each reference page.

Everything within the Web Audio API is based around the concept of an audio graph, which is made up of nodes.

The Web Audio API handles audio operations inside an audio context, and has been designed to allow modular routing. Basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. You have input nodes, which are the source of the sounds you are manipulating, modification nodes that change those sounds as desired, and output nodes (destinations), which allow you to save or hear those sounds.

Several audio sources with different channel layouts are supported, even within a single context. Because of this modular design, you can create complex audio functions with dynamic effects.

To be able to do anything with the Web Audio API, we need to create an instance of the audio context. This then gives us access to all the features and functionality of the API.

```js
const AudioContext = window.AudioContext || window.webkitAudioContext;

const audioContext = new AudioContext();
```

So what's going on when we do this? A BaseAudioContext is created for us automatically and extended to an online audio context. We'll want this because we're looking to play live sound.

Note: If you just want to process audio data, for instance, buffer and stream it but not play it, you might want to look into creating an OfflineAudioContext.
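As a minimal sketch of that offline workflow (the oscillator source here is just an arbitrary example; the constructor arguments are channel count, length in sample-frames, and sample rate):

```js
// Render one second of stereo audio (44100 frames at 44100 Hz) without
// playing it through the speakers.
const offlineContext = new OfflineAudioContext(2, 44100, 44100);

const oscillator = offlineContext.createOscillator();
oscillator.connect(offlineContext.destination);
oscillator.start();

offlineContext.startRendering().then((renderedBuffer) => {
  // renderedBuffer is an AudioBuffer you can inspect, encode, or play later.
  console.log(`Rendered ${renderedBuffer.duration} seconds of audio`);
});
```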

Now, the audio context we've created needs some sound to play through it. There are a few ways to do this with the API. Let's begin with a simple method — as we have a boombox, we most likely want to play a full song track. Also, for accessibility, it's nice to expose that track in the DOM. We'll expose the song on the page using an `<audio>` element.

```html
<audio src="myCoolTrack.mp3"></audio>
```

Note: If the sound file you're loading is held on a different domain you will need to use the crossorigin attribute; see Cross Origin Resource Sharing (CORS) for more information.
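For illustration, a track loaded from another origin might look like the following sketch (the URL is a placeholder; the server must also send the appropriate CORS headers):

```html
<!-- Hypothetical cross-origin track, for illustration only. -->
<audio src="https://media.example.com/myCoolTrack.mp3" crossorigin="anonymous"></audio>
```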

To use all the nice things we get with the Web Audio API, we need to grab the source from this element and pipe it into the context we have created. Lucky for us there's a method that allows us to do just that — AudioContext.createMediaElementSource:

```js
const audioElement = document.querySelector('audio');

const track = audioContext.createMediaElementSource(audioElement);
```

Note: The `<audio>` element above is represented in the DOM by an object of type HTMLMediaElement, which comes with its own set of functionality. All of this has stayed intact; we are merely allowing the sound to be available to the Web Audio API.

When playing sound on the web, it's important to allow the user to control it. Depending on the use case, there's a myriad of options, but we'll provide functionality to play/pause the sound, alter the track's volume, and pan it from left to right.

Controlling sound programmatically from JavaScript is covered by browsers' autoplay support policies, and as such it is likely to be blocked unless permission is granted by the user (or the page is allowlisted). Autoplay policies typically require either explicit permission or user engagement with the page before scripts can trigger audio to play.

These special requirements are in place essentially because unexpected sounds can be annoying and intrusive, and can cause accessibility problems. You can learn more about this in our article Autoplay guide for media and Web Audio APIs.

Since our scripts are playing audio in response to a user input event (a click on a play button, for instance), we're in good shape and should have no problems from autoplay blocking. So, let's start by taking a look at our play and pause functionality. We have a play button that changes to a pause button when the track is playing:

```html
<button data-playing="false" role="switch" aria-checked="false">
    <span>Play/Pause</span>
</button>
```

Before we can play our track we need to connect our audio graph from the audio source/input node to the destination.

We've already created an input node by passing our audio element into the API. For the most part, you don't need to create an output node; you can just connect your other nodes to BaseAudioContext.destination, which handles the situation for you:

```js
track.connect(audioContext.destination);
```

A good way to understand these nodes is to draw them as an audio graph. This is what our current audio graph looks like:

an audio graph with an audio element source connected to the default destination

Now we can add the play and pause functionality.

```js
const playButton = document.querySelector('button');

playButton.addEventListener('click', function () {
    // Resume the context if it was suspended by the autoplay policy
    if (audioContext.state === 'suspended') {
        audioContext.resume();
    }

    // Play or pause the track depending on its current state
    if (this.dataset.playing === 'false') {
        audioElement.play();
        this.dataset.playing = 'true';
    } else if (this.dataset.playing === 'true') {
        audioElement.pause();
        this.dataset.playing = 'false';
    }
}, false);
```

We also need to take into account what to do when the track finishes playing. Our HTMLMediaElement fires an ended event once it's finished playing, so we can listen for that and run code accordingly:

```js
audioElement.addEventListener('ended', () => {
    playButton.dataset.playing = 'false';
}, false);
```

Let's delve into some basic modification nodes, to change the sound that we have. This is where the Web Audio API really starts to come in handy. First of all, let's change the volume. This can be done using a GainNode, which controls the amplitude of our sound wave.

There are two ways you can create nodes with the Web Audio API. You can use the factory method on the context itself (e.g. audioContext.createGain()) or via a constructor of the node (e.g. new GainNode()). We'll use the factory method in our code:

```js
const gainNode = audioContext.createGain();
```
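For comparison, a sketch of the equivalent constructor form (our code sticks with the factory method):

```js
// Same result as audioContext.createGain(); the options argument is optional.
const gainNode = new GainNode(audioContext, { gain: 1 });
```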

Now we have to update our audio graph from before, so the input is connected to the gain, then the gain node is connected to the destination:

```js
track.connect(gainNode).connect(audioContext.destination);
```

This will make our audio graph look like this:

an audio graph with an audio element source, connected to a gain node that modifies the audio source, and then going to the default destination

The default value for gain is 1; this keeps the current volume the same. Gain accepts any single-precision float value, from roughly -3.4 × 10³⁸ up to about 3.4 × 10³⁸. Here we'll allow the boombox to move the gain up to 2 (double the original volume) and down to 0 (this will effectively mute our sound).

Let's give the user control to do this — we'll use a range input:

```html
<input type="range" id="volume" min="0" max="2" value="1" step="0.01">
```

Note: Range inputs are a really handy input type for updating values on audio nodes. You can specify a range's values and use them directly with the audio node's parameters.

So let's grab this input's value and update the gain whenever the user changes the input's value:

```js
const volumeControl = document.querySelector('#volume');

volumeControl.addEventListener('input', function () {
    gainNode.gain.value = this.value;
}, false);
```

Note: The values of node objects (e.g. `GainNode.gain`) are not simple values; they are actually objects of type AudioParam — these are called parameters. This is why we have to set `GainNode.gain`'s value property, rather than setting the value on gain directly. Parameters are much more flexible; for example, you can pass a parameter a specific set of values to change between over a set period of time.
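As a sketch of that flexibility, a two-second fade-out scheduled on the gain parameter might look like this:

```js
// Anchor the automation at the current value, then ramp linearly to 0
// over the next two seconds.
const now = audioContext.currentTime;
gainNode.gain.setValueAtTime(gainNode.gain.value, now);
gainNode.gain.linearRampToValueAtTime(0, now + 2);
```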

Great, now the user can update the track's volume! The gain node is the perfect node to use if you want to add mute functionality.
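A minimal mute toggle on top of the same gain node could look like this sketch (`previousVolume` is our own bookkeeping variable, not part of the API):

```js
let previousVolume = 1;

function toggleMute() {
    if (gainNode.gain.value > 0) {
        previousVolume = gainNode.gain.value; // remember the current volume
        gainNode.gain.value = 0;              // silence the output
    } else {
        gainNode.gain.value = previousVolume; // restore the remembered volume
    }
}
```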

Let's add another modification node to practice what we've just learnt.

There's a StereoPannerNode, which changes the balance of the sound between the left and right speakers, if the user has stereo capabilities.

Note: The StereoPannerNode is for simple cases in which you just want stereo panning from left to right. There is also a PannerNode, which allows for a great deal of control over 3D space, or sound spatialisation, for creating more complex effects. This is used in games and 3D apps to create birds flying overhead, or sound coming from behind the user for instance.
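To give a flavour of the difference, here is a rough sketch of positioning a sound in 3D space with a PannerNode (not part of our boombox graph; the coordinates are arbitrary):

```js
// Place the sound two units to the left of and one unit behind the listener.
const panner3d = new PannerNode(audioContext, {
    panningModel: 'HRTF',
    positionX: -2,
    positionY: 0,
    positionZ: -1,
});
track.connect(panner3d).connect(audioContext.destination);
```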

To visualise the stereo panner, we will be making our audio graph look like this:

An image showing the audio graph showing an input node, two modification nodes (a gain node and a stereo panner node) and a destination node.

Let's use the constructor method of creating a node this time. When we do it this way, we have to pass in the context and any options that the particular node may take:

```js
const pannerOptions = { pan: 0 };
const panner = new StereoPannerNode(audioContext, pannerOptions);
```

Note: The constructor method of creating nodes is not supported by all browsers at this time. The older factory methods are supported more widely.
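If you need to support such browsers, a feature-detection sketch that could replace the constructor call above:

```js
let panner;
if (typeof StereoPannerNode === 'function') {
    panner = new StereoPannerNode(audioContext, pannerOptions);
} else {
    // Fall back to the older, more widely supported factory method.
    panner = audioContext.createStereoPanner();
    panner.pan.value = pannerOptions.pan;
}
```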

Here our values range from -1 (far left) to 1 (far right). Again, let's use a range type input to vary this parameter:

```html
<input type="range" id="panner" min="-1" max="1" value="0" step="0.01">
```

We use the values from that input to adjust our panner values in the same way as we did before:

```js
const pannerControl = document.querySelector('#panner');

pannerControl.addEventListener('input', function () {
    panner.pan.value = this.value;
}, false);
```

Let's adjust our audio graph again, to connect all the nodes together:

```js
track.connect(gainNode).connect(panner).connect(audioContext.destination);
```

The only thing left to do is give the app a try: Check out the final demo here on Codepen.

Great! We have a boombox that plays our 'tape', and we can adjust the volume and stereo panning, giving us a fairly basic working audio graph.

This covers quite a few of the basics you need to start adding audio to your website or web app. There's a lot more functionality in the Web Audio API, but once you've grasped the concept of nodes and how to put an audio graph together, you can move on to more complex functionality.

There are other examples available to learn more about the Web Audio API.

The Voice-change-O-matic is a fun voice manipulator and sound visualization web app that allows you to choose different effects and visualizations. The application is fairly rudimentary, but it demonstrates the simultaneous use of multiple Web Audio API features. (run the Voice-change-O-matic live).

A UI with a sound wave being shown, and options for choosing voice effects and visualizations.

Another application developed specifically to demonstrate the Web Audio API is the Violent Theremin, a simple web application that allows you to change pitch and volume by moving your mouse pointer. It also provides a psychedelic lightshow (see Violent Theremin source code).

A page full of rainbow colors, with two buttons labeled Clear screen and mute.

Also see our webaudio-examples repo for more examples.

Source

Modular Routing

Modular routing allows arbitrary connections between different AudioNode objects. Each node can have inputs and/or outputs. A source node has no inputs and a single output. A destination node has one input and no outputs. Other nodes such as filters can be placed between the source and destination nodes. The developer doesn’t have to worry about low-level stream format details when two objects are connected together; the right thing just happens. For example, if a mono audio stream is connected to a stereo input it should just mix to left and right channels appropriately.

In the simplest case, a single source can be routed directly to the output. All routing occurs within an AudioContext containing a single AudioDestinationNode.
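In code, that simplest case might look like this sketch (using an oscillator as the single source):

```js
// The simplest possible graph: one source routed straight to the destination.
const context = new AudioContext();
const oscillator = context.createOscillator();
oscillator.connect(context.destination); // the context's AudioDestinationNode
oscillator.start();
```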



Repository File Tree

```
├── ./README.md
├── ./r-audio
  ├── ./README.md
  ├── ./examples
    ├── ./examples/README.md
    ├── ./examples/assets
      ├── ./examples/assets/audio
        ├── ./examples/assets/audio/a.wav
        ├── ./examples/assets/audio/b.wav
        └── ./examples/assets/audio/clarinet.mp3
      └── ./examples/assets/js
          └── ./examples/assets/js/bit-crusher.js
    ├── ./examples/audio-worklet.js
    ├── ./examples/buffers-channels.js
    ├── ./examples/complex-effects-graph.js
    ├── ./examples/custom-nodes.js
    ├── ./examples/delay-lines.js
    ├── ./examples/examples.js
    ├── ./examples/gain-matrix.js
    ├── ./examples/index.html
    ├── ./examples/index.js
    ├── ./examples/media-element.js
    ├── ./examples/media-stream.js
    └── ./examples/mutation.js
  ├── ./index.js
  ├── ./package-lock.json
  ├── ./package.json
  ├── ./src
    ├── ./src/audio-nodes
      ├── ./src/audio-nodes/analyser.js
      ├── ./src/audio-nodes/audio-worklet.js
      ├── ./src/audio-nodes/biquad-filter.js
      ├── ./src/audio-nodes/buffer-source.js
      ├── ./src/audio-nodes/channel-merger.js
      ├── ./src/audio-nodes/channel-splitter.js
      ├── ./src/audio-nodes/constant-source.js
      ├── ./src/audio-nodes/convolver.js
      ├── ./src/audio-nodes/delay.js
      ├── ./src/audio-nodes/dynamics-compressor.js
      ├── ./src/audio-nodes/gain.js
      ├── ./src/audio-nodes/iir-filter.js
      ├── ./src/audio-nodes/index.js
      ├── ./src/audio-nodes/media-element-source.js
      ├── ./src/audio-nodes/media-stream-source.js
      ├── ./src/audio-nodes/oscillator.js
      ├── ./src/audio-nodes/panner.js
      ├── ./src/audio-nodes/stereo-panner.js
      └── ./src/audio-nodes/wave-shaper.js
    ├── ./src/base
      ├── ./src/base/audio-context.js
      ├── ./src/base/audio-node.js
      ├── ./src/base/component.js
      ├── ./src/base/connectable-node.js
      └── ./src/base/scheduled-source.js
    └── ./src/graph
        ├── ./src/graph/cycle.js
        ├── ./src/graph/extensible.js
        ├── ./src/graph/pipeline.js
        ├── ./src/graph/split-channels.js
        ├── ./src/graph/split.js
        └── ./src/graph/utils.js
  └── ./webpack.config.js
├── ./react-audio-recorder
  ├── ./README.md
  ├── ./dist
    ├── ./dist/AudioContext.d.ts
    ├── ./dist/AudioContext.js
    ├── ./dist/AudioRecorder.d.ts
    ├── ./dist/AudioRecorder.js
    ├── ./dist/dist
      └── ./dist/dist/AudioRecorder.min.js
    ├── ./dist/downloadBlob.d.ts
    ├── ./dist/downloadBlob.js
    ├── ./dist/getUserMedia.d.ts
    ├── ./dist/getUserMedia.js
    ├── ./dist/waveEncoder.d.ts
    ├── ./dist/waveEncoder.js
    ├── ./dist/waveInterface.d.ts
    └── ./dist/waveInterface.js
  ├── ./package-lock.json
  ├── ./package.json
  ├── ./src
    ├── ./src/AudioContext.ts
    ├── ./src/AudioRecorder.tsx
    ├── ./src/downloadBlob.ts
    ├── ./src/getUserMedia.ts
    ├── ./src/waveEncoder.ts
    └── ./src/waveInterface.ts
  ├── ./tsconfig.json
  ├── ./types
    └── ./types/dom.d.ts
  └── ./webpack.config.js
├── ./react-native-voice-processor-main
  ├── ./README.md
  ├── ./android
    ├── ./android/build.gradle
    ├── ./android/gradle
      └── ./android/gradle/wrapper
          ├── ./android/gradle/wrapper/gradle-wrapper.jar
          └── ./android/gradle/wrapper/gradle-wrapper.properties
    ├── ./android/gradle.properties
    ├── ./android/gradlew
    ├── ./android/gradlew.bat
    ├── ./android/settings.gradle
    └── ./android/src
        └── ./android/src/main
            ├── ./android/src/main/AndroidManifest.xml
            └── ./android/src/main/java
                └── ./android/src/main/java/ai
                    └── ./android/src/main/java/ai/picovoice
                        └── ./android/src/main/java/ai/picovoice/reactnative
                            └── ./android/src/main/java/ai/picovoice/reactnative/voiceprocessor
                                ├── ./android/src/main/java/ai/picovoice/reactnative/voiceprocessor/VoiceProcessorModule.java
                                └── ./android/src/main/java/ai/picovoice/reactnative/voiceprocessor/VoiceProcessorPackage.java
  ├── ./babel.config.js
  ├── ./example
    ├── ./example/android
      ├── ./example/android/app
        ├── ./example/android/app/build.gradle
        ├── ./example/android/app/debug.keystore
        ├── ./example/android/app/proguard-rules.pro
        └── ./example/android/app/src
            ├── ./example/android/app/src/debug
              ├── ./example/android/app/src/debug/AndroidManifest.xml
              └── ./example/android/app/src/debug/java
                  └── ./example/android/app/src/debug/java/com
                      └── ./example/android/app/src/debug/java/com/example
                          └── ./example/android/app/src/debug/java/com/example/reactnativevoiceprocessor
                              └── ./example/android/app/src/debug/java/com/example/reactnativevoiceprocessor/ReactNativeFlipper.java
            └── ./example/android/app/src/main
                ├── ./example/android/app/src/main/AndroidManifest.xml
                ├── ./example/android/app/src/main/java
                  └── ./example/android/app/src/main/java/ai
                      └── ./example/android/app/src/main/java/ai/picovoice
                          └── ./example/android/app/src/main/java/ai/picovoice/reactnative
                              └── ./example/android/app/src/main/java/ai/picovoice/reactnative/voiceprocessorexample
                                  ├── ./example/android/app/src/main/java/ai/picovoice/reactnative/voiceprocessorexample/MainActivity.java
                                  └── ./example/android/app/src/main/java/ai/picovoice/reactnative/voiceprocessorexample/MainApplication.java
                └── ./example/android/app/src/main/res
                    ├── ./example/android/app/src/main/res/drawable
                      ├── ./example/android/app/src/main/res/drawable/ic_launcher_background.xml
                      └── ./example/android/app/src/main/res/drawable/ic_launcher_foreground.xml
                    ├── ./example/android/app/src/main/res/mipmap-anydpi-v26
                      ├── ./example/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml
                      └── ./example/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml
                    ├── ./example/android/app/src/main/res/mipmap-hdpi
                      ├── ./example/android/app/src/main/res/mipmap-hdpi/ic_launcher.png
                      └── ./example/android/app/src/main/res/mipmap-hdpi/ic_launcher_round.png
                    ├── ./example/android/app/src/main/res/mipmap-mdpi
                      ├── ./example/android/app/src/main/res/mipmap-mdpi/ic_launcher.png
                      └── ./example/android/app/src/main/res/mipmap-mdpi/ic_launcher_round.png
                    ├── ./example/android/app/src/main/res/mipmap-xhdpi
                      ├── ./example/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png
                      └── ./example/android/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png
                    ├── ./example/android/app/src/main/res/mipmap-xxhdpi
                      ├── ./example/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png
                      └── ./example/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png
                    ├── ./example/android/app/src/main/res/mipmap-xxxhdpi
                      ├── ./example/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png
                      └── ./example/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png
                    └── ./example/android/app/src/main/res/values
                        ├── ./example/android/app/src/main/res/values/strings.xml
                        └── ./example/android/app/src/main/res/values/styles.xml
      ├── ./example/android/build.gradle
      ├── ./example/android/gradle
        └── ./example/android/gradle/wrapper
            ├── ./example/android/gradle/wrapper/gradle-wrapper.jar
            └── ./example/android/gradle/wrapper/gradle-wrapper.properties
      ├── ./example/android/gradle.properties
      ├── ./example/android/gradlew
      ├── ./example/android/gradlew.bat
      └── ./example/android/settings.gradle
    ├── ./example/app.json
    ├── ./example/babel.config.js
    ├── ./example/index.tsx
    ├── ./example/ios
      ├── ./example/ios/File.swift
      ├── ./example/ios/Podfile
      ├── ./example/ios/Podfile.lock
      ├── ./example/ios/VoiceProcessorExample
        ├── ./example/ios/VoiceProcessorExample/AppDelegate.h
        ├── ./example/ios/VoiceProcessorExample/AppDelegate.m
        ├── ./example/ios/VoiceProcessorExample/Base.lproj
          └── ./example/ios/VoiceProcessorExample/Base.lproj/LaunchScreen.xib
        ├── ./example/ios/VoiceProcessorExample/Images.xcassets
          ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/Contents.json
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-1024.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-20.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-20@2x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-20@3x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-29.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-29@2x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-29@3x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-40.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-40@2x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-40@3x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-60@2x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-60@3x.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-76.png
            ├── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-76@2x.png
            └── ./example/ios/VoiceProcessorExample/Images.xcassets/AppIcon.appiconset/pv_circle_512-83.5@2x.png
          └── ./example/ios/VoiceProcessorExample/Images.xcassets/Contents.json
        ├── ./example/ios/VoiceProcessorExample/Info.plist
        └── ./example/ios/VoiceProcessorExample/main.m
      ├── ./example/ios/VoiceProcessorExample-Bridging-Header.h
      ├── ./example/ios/VoiceProcessorExample.xcodeproj
        ├── ./example/ios/VoiceProcessorExample.xcodeproj/project.pbxproj
        └── ./example/ios/VoiceProcessorExample.xcodeproj/xcshareddata
            └── ./example/ios/VoiceProcessorExample.xcodeproj/xcshareddata/xcschemes
                └── ./example/ios/VoiceProcessorExample.xcodeproj/xcshareddata/xcschemes/VoiceProcessorExample.xcscheme
      └── ./example/ios/VoiceProcessorExample.xcworkspace
          ├── ./example/ios/VoiceProcessorExample.xcworkspace/contents.xcworkspacedata
          └── ./example/ios/VoiceProcessorExample.xcworkspace/xcshareddata
              └── ./example/ios/VoiceProcessorExample.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist
    ├── ./example/metro.config.js
    ├── ./example/package-lock.json
    ├── ./example/package.json
    ├── ./example/src
      └── ./example/src/App.tsx
    └── ./example/yarn.lock
  ├── ./ios
    ├── ./ios/VoiceProcessor-Bridging-Header.h
    ├── ./ios/VoiceProcessor.m
    ├── ./ios/VoiceProcessor.swift
    └── ./ios/VoiceProcessor.xcodeproj
        └── ./ios/VoiceProcessor.xcodeproj/project.pbxproj
  ├── ./package.json
  ├── ./react-native-voice-processor.podspec
  ├── ./src
    └── ./src/index.tsx
  ├── ./tsconfig.json
  └── ./yarn.lock
├── ./react-player
  ├── ./README.md
  ├── ./config
    ├── ./config/env.js
    ├── ./config/jest
      ├── ./config/jest/cssTransform.js
      └── ./config/jest/fileTransform.js
    ├── ./config/modules.js
    ├── ./config/paths.js
    ├── ./config/pnpTs.js
    ├── ./config/webpack.config.js
    └── ./config/webpackDevServer.config.js
  ├── ./package.json
  ├── ./public
    ├── ./public/16x16_radius.png
    ├── ./public/24x24_radius.png
    ├── ./public/32x32_radius.png
    ├── ./public/64x64_radius.png
    ├── ./public/_redirects
    ├── ./public/favicon.png
    ├── ./public/index.html
    └── ./public/manifest.json
  ├── ./scripts
    ├── ./scripts/build.js
    ├── ./scripts/start.js
    └── ./scripts/test.js
  ├── ./src
    ├── ./src/App.test.js
    ├── ./src/assets
      ├── ./src/assets/css
        ├── ./src/assets/css/base.styl
        └── ./src/assets/css/mixins
            ├── ./src/assets/css/mixins/animations.styl
            ├── ./src/assets/css/mixins/breakpoints.styl
            ├── ./src/assets/css/mixins/colors.styl
            ├── ./src/assets/css/mixins/reset.styl
            ├── ./src/assets/css/mixins/root.styl
            └── ./src/assets/css/mixins/zindex.styl
      ├── ./src/assets/github
        └── ./src/assets/github/GitHub-Mark-Light-32px.png
      ├── ./src/assets/logo
        ├── ./src/assets/logo/16x16.png
        ├── ./src/assets/logo/16x16.svg
        ├── ./src/assets/logo/16x16_radius.png
        ├── ./src/assets/logo/24x24.png
        ├── ./src/assets/logo/24x24.svg
        ├── ./src/assets/logo/24x24_radius.png
        ├── ./src/assets/logo/32x32.png
        ├── ./src/assets/logo/32x32.svg
        ├── ./src/assets/logo/32x32_radius.png
        ├── ./src/assets/logo/64x64.png
        ├── ./src/assets/logo/64x64.svg
        ├── ./src/assets/logo/64x64_radius.png
        ├── ./src/assets/logo/audio-player.ai
        └── ./src/assets/logo/audio-player_radius.ai
      ├── ./src/assets/music
        ├── ./src/assets/music/fantastic.mp3
        ├── ./src/assets/music/legends-never-die.mp3
        ├── ./src/assets/music/rise.mp3
        └── ./src/assets/music/short-legends-never-die.mp3
      ├── ./src/assets/spotify
        ├── ./src/assets/spotify/icon
          ├── ./src/assets/spotify/icon/Spotify_Icon_RGB_Black.png
          ├── ./src/assets/spotify/icon/Spotify_Icon_RGB_Green.png
          └── ./src/assets/spotify/icon/Spotify_Icon_RGB_White.png
        └── ./src/assets/spotify/logo
            ├── ./src/assets/spotify/logo/Spotify_Logo_RGB_Black.png
            ├── ./src/assets/spotify/logo/Spotify_Logo_RGB_Green.png
            └── ./src/assets/spotify/logo/Spotify_Logo_RGB_White.png
      └── ./src/assets/svg
          └── ./src/assets/svg/logo.svg
    ├── ./src/components
      ├── ./src/components/_boilerplate
        ├── ./src/components/_boilerplate/index.jsx
        └── ./src/components/_boilerplate/style.styl
      ├── ./src/components/app-footer-nav
        ├── ./src/components/app-footer-nav/index.jsx
        └── ./src/components/app-footer-nav/style.styl
      ├── ./src/components/app-version
        ├── ./src/components/app-version/index.jsx
        └── ./src/components/app-version/style.styl
      └── ./src/components/audio
          ├── ./src/components/audio/audio.worker.js
          ├── ./src/components/audio/index.jsx
          └── ./src/components/audio/style.styl
    ├── ./src/context
      └── ./src/context/app-context.jsx
    ├── ./src/export-components.js
    ├── ./src/index.js
    ├── ./src/modules
      └── ./src/modules/app
          ├── ./src/modules/app/index.jsx
          └── ./src/modules/app/style.styl
    └── ./src/serviceWorker.js
  └── ./yarn.lock
└── ./react-web-audio-graph
    ├── ./react-web-audio-graph/README.md
    ├── ./react-web-audio-graph/package.json
    ├── ./react-web-audio-graph/public
      ├── ./react-web-audio-graph/public/favicon.ico
      ├── ./react-web-audio-graph/public/index.html
      ├── ./react-web-audio-graph/public/logo192.png
      ├── ./react-web-audio-graph/public/logo512.png
      ├── ./react-web-audio-graph/public/manifest.json
      └── ./react-web-audio-graph/public/robots.txt
    ├── ./react-web-audio-graph/src
      ├── ./react-web-audio-graph/src/App.tsx
      ├── ./react-web-audio-graph/src/components
        ├── ./react-web-audio-graph/src/components/Audio.tsx
        ├── ./react-web-audio-graph/src/components/ContextMenu.tsx
        ├── ./react-web-audio-graph/src/components/Flow.tsx
        ├── ./react-web-audio-graph/src/components/FlowContextMenu.tsx
        ├── ./react-web-audio-graph/src/components/Node.tsx
        ├── ./react-web-audio-graph/src/components/Nodes.tsx
        ├── ./react-web-audio-graph/src/components/Note.tsx
        ├── ./react-web-audio-graph/src/components/Project.tsx
        ├── ./react-web-audio-graph/src/components/controls
          ├── ./react-web-audio-graph/src/components/controls/Slider.tsx
          └── ./react-web-audio-graph/src/components/controls/Toggle.tsx
        └── ./react-web-audio-graph/src/components/nodes
            ├── ./react-web-audio-graph/src/components/nodes/ADSR.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Analyser
              ├── ./react-web-audio-graph/src/components/nodes/Analyser/Visualiser.tsx
              └── ./react-web-audio-graph/src/components/nodes/Analyser/index.tsx
            ├── ./react-web-audio-graph/src/components/nodes/AndGate.tsx
            ├── ./react-web-audio-graph/src/components/nodes/AudioBufferSource.tsx
            ├── ./react-web-audio-graph/src/components/nodes/BiquadFilter.tsx
            ├── ./react-web-audio-graph/src/components/nodes/ChannelMerger.tsx
            ├── ./react-web-audio-graph/src/components/nodes/ChannelSplitter.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Comparator.tsx
            ├── ./react-web-audio-graph/src/components/nodes/ConstantSource.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Delay.tsx
            ├── ./react-web-audio-graph/src/components/nodes/DelayEffect.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Destination.tsx
            ├── ./react-web-audio-graph/src/components/nodes/DynamicsCompressor.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Equalizer.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Gain.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Gate.tsx
            ├── ./react-web-audio-graph/src/components/nodes/InputSwitch.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Keyboard.css
            ├── ./react-web-audio-graph/src/components/nodes/Keyboard.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Meter.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Metronome.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Noise.tsx
            ├── ./react-web-audio-graph/src/components/nodes/NotGate.tsx
            ├── ./react-web-audio-graph/src/components/nodes/OrGate.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Oscillator.tsx
            ├── ./react-web-audio-graph/src/components/nodes/OscillatorNote.tsx
            ├── ./react-web-audio-graph/src/components/nodes/OutputSwitch.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Quantizer.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Rectifier.tsx
            ├── ./react-web-audio-graph/src/components/nodes/SampleAndHold.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Sign.tsx
            ├── ./react-web-audio-graph/src/components/nodes/StereoPanner.tsx
            ├── ./react-web-audio-graph/src/components/nodes/Transformer.tsx
            ├── ./react-web-audio-graph/src/components/nodes/WaveShaper.tsx
            └── ./react-web-audio-graph/src/components/nodes/XorGate.tsx
      ├── ./react-web-audio-graph/src/context
        ├── ./react-web-audio-graph/src/context/AudioContextContext.tsx
        ├── ./react-web-audio-graph/src/context/ContextMenuContext.tsx
        ├── ./react-web-audio-graph/src/context/NodeContext.tsx
        └── ./react-web-audio-graph/src/context/ProjectContext.tsx
      ├── ./react-web-audio-graph/src/fonts
        └── ./react-web-audio-graph/src/fonts/bravura
            ├── ./react-web-audio-graph/src/fonts/bravura/bravura.css
            ├── ./react-web-audio-graph/src/fonts/bravura/bravura.woff
            └── ./react-web-audio-graph/src/fonts/bravura/bravura.woff2
      ├── ./react-web-audio-graph/src/hooks
        ├── ./react-web-audio-graph/src/hooks/nodes
          ├── ./react-web-audio-graph/src/hooks/nodes/useAnalyserNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useAudioWorkletNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useBiquadFilterNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useChannelMergerNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useChannelSplitterNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useConstantSourceNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useDelayNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useDestinationNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useDynamicsCompressorNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useGainNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useOscillatorNode.tsx
          ├── ./react-web-audio-graph/src/hooks/nodes/useStereoPannerNode.tsx
          └── ./react-web-audio-graph/src/hooks/nodes/useWaveShaperNode.tsx
        └── ./react-web-audio-graph/src/hooks/useAnimationFrame.ts
      ├── ./react-web-audio-graph/src/index.css
      ├── ./react-web-audio-graph/src/index.tsx
      ├── ./react-web-audio-graph/src/logo.svg
      ├── ./react-web-audio-graph/src/react-app-env.d.ts
      ├── ./react-web-audio-graph/src/reportWebVitals.ts
      ├── ./react-web-audio-graph/src/setupTests.ts
      ├── ./react-web-audio-graph/src/types
        ├── ./react-web-audio-graph/src/types/AudioWorkletGlobalScope.d.ts
        ├── ./react-web-audio-graph/src/types/AudioWorkletProcessor.d.ts
        └── ./react-web-audio-graph/src/types/worklet-loader.d.ts
      ├── ./react-web-audio-graph/src/utils
        ├── ./react-web-audio-graph/src/utils/audioContext.ts
        ├── ./react-web-audio-graph/src/utils/channels.ts
        ├── ./react-web-audio-graph/src/utils/handles.ts
        ├── ./react-web-audio-graph/src/utils/notes.ts
        ├── ./react-web-audio-graph/src/utils/scale.ts
        └── ./react-web-audio-graph/src/utils/units.ts
      └── ./react-web-audio-graph/src/worklets
          ├── ./react-web-audio-graph/src/worklets/StoppableAudioWorkletProcessor.ts
          ├── ./react-web-audio-graph/src/worklets/adsr-processor.types.ts
          ├── ./react-web-audio-graph/src/worklets/adsr-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/and-gate-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/comparator-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/gate-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/meter-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/noise-processor.types.ts
          ├── ./react-web-audio-graph/src/worklets/noise-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/not-gate-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/or-gate-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/quantizer-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/rectifier-processor.types.ts
          ├── ./react-web-audio-graph/src/worklets/rectifier-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/sample-and-hold-processor.types.ts
          ├── ./react-web-audio-graph/src/worklets/sample-and-hold-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/sign-processor.worklet.ts
          ├── ./react-web-audio-graph/src/worklets/transformer-processor.worklet.ts
          └── ./react-web-audio-graph/src/worklets/xor-gate-processor.worklet.ts
    ├── ./react-web-audio-graph/tsconfig.json
    └── ./react-web-audio-graph/yarn.lock
```

