Building a Real-Time Spectrum Analyzer Plot using HTML5 Canvas, Web Audio API & React

Amanda Barrafato
6 min read · Dec 18, 2020

As an audio engineer turned software engineer, I am still in awe of the digital tools I used to measure audio: meters on a digital audio console, digital decibel meters, and RTA (real-time analyzer) plots in programs like Smaart all interest me.

However, all of these products cost a lot of money. Here is my low-cost attempt to create an RTA plot using React, the HTML5 canvas, and the browser’s built-in Web Audio API.

Here’s an image of what we’re trying to achieve:

Image from: https://www.rationalacoustics.com/smaart/smaart-v8/

The general data flow of Web Audio API is to:

1. Create the audio context
2. Inside the context, create sources — such as <audio>, oscillator, stream
3. Create effects nodes, such as reverb, biquad filter, panner, compressor
4. Choose final destination of audio, for example your system speakers
5. Connect the sources up to the effects, and the effects to the destination.
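
In plain JavaScript, that flow looks roughly like this (a minimal sketch using an oscillator and a biquad filter as stand-ins; not code we will reuse below):

const context = new AudioContext()          // 1. create the audio context
const source = context.createOscillator()   // 2. create a source inside it
const filter = context.createBiquadFilter() // 3. create an effect node
source.connect(filter)                      // 5. connect the source to the effect...
filter.connect(context.destination)         // 4./5. ...and the effect to the speakers
source.start()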

Not all of the above applies to what we are trying to achieve. Here, our general flow will be:

1. Ask for microphone permissions
2. Create Audio Context
3. Create a component that creates the source (microphone), gets data from the microphone onto state, and analyzes it
4. Create a component with the canvas we are going to animate the data onto, animate that data on the canvas logarithmically to mimic a true RTA plot
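
Before we write any code, it helps to see how the pieces nest. A sketch of the component hierarchy, using the component names from this article:

// AudioChecker: asks for mic permission, keeps the MediaStream on state
//   AudioAnalyzer: wraps the analyser node, samples frequency data each frame
//     RTA: draws that data onto a canvas on a logarithmic frequency axis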

Our first component is going to get the user’s permission to use the microphone and pass that stream down as props to our second component. First, initialize audio on state as null.

class AudioChecker extends Component {
  constructor(props) {
    super(props)
    this.state = {
      audio: null
    }
  }
  ...

Then, create a function to get the user’s microphone permissions, and set that onto state (remember to bind these functions):

async getMicrophone() {
  const audio = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: false
  })
  this.setState({ audio })
}
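
Note that getUserMedia returns a promise that rejects if the user denies permission or no microphone exists, so a more defensive version might wrap it in try/catch (a sketch; the error handling is my own addition):

async getMicrophone() {
  try {
    const audio = await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: false
    })
    this.setState({ audio })
  } catch (error) {
    // permission denied, or no microphone available
    console.error('Could not access the microphone:', error)
  }
}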

Then, make a function that turns off the microphone (users will thank you for this). This will loop through each MediaStreamTrack associated with the MediaStream and stop it:

stopMicrophone() {
  this.state.audio.getTracks().forEach(track => track.stop())
  this.setState({ audio: null })
}

Let’s add a button to the component’s render that toggles between those methods, as well as the second component we’ll make, the AudioAnalyzer, with the audio from state passed down as props.

handleMicrophone() {
  if (this.state.audio) {
    this.stopMicrophone()
  } else {
    this.getMicrophone()
  }
}

render() {
  return (
    <div>
      <div className="charts">
        <button type="button" onClick={this.handleMicrophone}>
          {this.state.audio ? 'Stop Microphone' : 'Start Microphone'}
        </button>
      </div>
      <div className="analyzers">
        {this.state.audio ? (
          <AudioAnalyzer audio={this.state.audio} />
        ) : (
          <div className="message">
            Start your microphone in order to start!
          </div>
        )}
      </div>
    </div>
  )
}

Next, we will create the AudioAnalyzer component. Web Audio API has two built-in functions that take arrays as arguments and copy the current frequency data from the audio source into them: getFloatFrequencyData and getByteFrequencyData. For our purposes, we are using getFloatFrequencyData because it fills a Float32Array with decibel values, as opposed to getByteFrequencyData, which squeezes the same data into a Uint8Array of integers from 0 to 255. It therefore gives us a more precise measurement of each frequency.
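
Side by side, the two calls look like this (assuming analyser is an AnalyserNode like the one we create in componentDidMount below):

const floatData = new Float32Array(analyser.frequencyBinCount)
analyser.getFloatFrequencyData(floatData) // decibel values, e.g. -74.3

const byteData = new Uint8Array(analyser.frequencyBinCount)
analyser.getByteFrequencyData(byteData)   // integers 0-255, e.g. 112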

Since we have to pass the array into getFloatFrequencyData as an argument, we want to initialize it here as a Float32Array:

class AudioAnalyzer extends Component {
  constructor(props) {
    super(props)
    this.state = {
      audioData: new Float32Array(0),
      bufferLength: 0
    }
    // bind animate (defined below) so requestAnimationFrame can call it
    this.animate = this.animate.bind(this)
  }

In our AudioAnalyzer’s componentDidMount, we will set up the audio context, the bufferLength, the FFT size, and a rafId for the animation. The sample rate on my computer defaults to 44,100 Hz, which gives us data from 0 Hz up to the Nyquist frequency of 22,050 Hz, comfortably covering the 20 Hz to 20,000 Hz range of human hearing (which is what we want to display).
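
To make those numbers concrete, here is the arithmetic behind the analyser settings we are about to apply (fftSize is set to 4096 in the code below):

const sampleRate = 44100
const fftSize = 4096
const binCount = fftSize / 2          // frequencyBinCount = 2048 data points
const binWidth = sampleRate / fftSize // ≈ 10.77 Hz covered by each bin
// the last bin (index 2047) sits just under the 22,050 Hz Nyquist limit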

componentDidMount() {
  // webkitAudioContext is for Safari; AudioContext is for the rest
  this.audioContext = new (window.AudioContext ||
    window.webkitAudioContext)({ sampleRate: 44100 })
  // create an audio analyser from the Web Audio API
  this.analyser = this.audioContext.createAnalyser()
  // FFT size
  this.analyser.fftSize = 4096
  // smoothing time constant
  this.analyser.smoothingTimeConstant = 0.8
  // trying to account for my room tone, adjust this as needed
  this.analyser.minDecibels = -96
  this.analyser.maxDecibels = 0
  // frequency bin count / buffer length (half the FFT size)
  const bufferLength = this.analyser.frequencyBinCount
  this.setState({ bufferLength })
  // this is how many data points we'll be collecting
  this.dataArray = new Float32Array(bufferLength)

  // getting the microphone as a source of audio
  this.source = this.audioContext.createMediaStreamSource(this.props.audio)
  // connect the microphone to the analyser
  this.source.connect(this.analyser)
  // kick off the animation loop
  this.rafId = requestAnimationFrame(this.animate)
}
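
One of those settings deserves a closer look: smoothingTimeConstant blends each new analysis frame with the previous one, which keeps the bars from jittering. Per the Web Audio spec, the blend works roughly like this:

// with smoothingTimeConstant = 0.8:
// smoothed[k] = 0.8 * previous[k] + 0.2 * current[k]
// 0 means no smoothing (jumpy bars); values near 1 mean slow, steady bars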

Next, we have to create the animate method itself (we already bound it to this component in the constructor above). We also have to cancel the animation and disconnect the microphone in componentWillUnmount:

animate() {
  this.analyser.getFloatFrequencyData(this.dataArray)
  this.setState({ audioData: this.dataArray })
  // recursive, so it schedules itself again on the next frame
  this.rafId = requestAnimationFrame(this.animate)
}

componentWillUnmount() {
  cancelAnimationFrame(this.rafId)
  this.analyser.disconnect()
  this.source.disconnect()
}

In the render method of this component, we’ll add the RTA component we’re about to make and pass down the audioData array and bufferLength from state as props.
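
That render method might look like this (a sketch; RTA is the component we build next):

render() {
  return (
    <RTA
      audioData={this.state.audioData}
      bufferLength={this.state.bufferLength}
    />
  )
}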

In the RTA component, we will create a canvas in the render method and create a React reference to that canvas in its constructor:

this.canvas = React.createRef()

The render method will look like:

render() {
  return (
    <div id="graphic">
      <canvas width="945" height="445" ref={this.canvas} id="graphicChart" />
    </div>
  )
}

We need to create two methods on this component in order to accurately draw the data we receive from getFloatFrequencyData. The first method, frequencyToXAxis, takes in a frequency and calculates where on the x-axis that data should fall on a logarithmic scale.

Our second method, draw, will paint the data onto the canvas. We will invoke it in componentDidUpdate, so the chart redraws every time new audio data arrives.

The getFloatFrequencyData method gives us an array of decibel measurements, one per frequency bin, without any indication of which frequency each one belongs to. To find where on the x-axis each data point should be drawn, we first calculate its frequency from its index in the array, then pass that frequency to frequencyToXAxis to get the x position, and use its decibel reading to set the bar’s height.
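
Before the code, a quick worked example of that index-to-frequency conversion, using the 2048 bins and 22,050 Hz Nyquist limit from our setup:

// frequency = round(i * 44100 / 2 / bufferLength)
// i = 100:  round(100 * 22050 / 2048)  ≈ 1077 Hz
// i = 2047: round(2047 * 22050 / 2048) ≈ 22039 Hz (just under Nyquist)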

frequencyToXAxis(frequency) {
  const minF = Math.log(20) / Math.log(10)
  const maxF = Math.log(20000) / Math.log(10)

  let range = maxF - minF
  // 945 is the canvas width in px
  let xAxis = (Math.log(frequency) / Math.log(10) - minF) / range * 945
  return xAxis
}

draw() {
  const { audioData } = this.props
  const canvas = this.canvas.current
  const height = canvas.height
  const width = canvas.width
  const context = canvas.getContext('2d')
  context.clearRect(0, 0, width, height)

  // loop to create the bars so I get to 20k!
  for (let i = 0; i < this.props.bufferLength; i++) {
    let value = audioData[i]

    // finding the frequency from the index
    let frequency = Math.round(i * 44100 / 2 / this.props.bufferLength)
    // need to convert the dB value because it ranges from -96 to 0
    let barHeight = (value / 2 + 70) * 10
    let barWidth = width / this.props.bufferLength * 2.5
    context.fillStyle = 'rgb(' + (barHeight + 200) + ',100,100)'
    // finding the x location in px from the frequency
    let x = this.frequencyToXAxis(frequency)
    let h = height - barHeight / 2
    if (h > 0) {
      context.fillRect(x, h, barWidth, barHeight)
    }
  }
}

// updates every time the audio data is updated!
componentDidUpdate() {
  this.draw()
}
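
As a sanity check on frequencyToXAxis: minF is log10(20) ≈ 1.30 and maxF is log10(20000) ≈ 4.30, so a 1,000 Hz tone lands at roughly:

// (log10(1000) - 1.30) / 3 * 945
// = (3.00 - 1.30) / 3 * 945 ≈ 535 px, just past the middle of the canvas,
// which is exactly what a logarithmic frequency axis should do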

And when we put all of those methods and components together, we get a chart that looks like this:

RTA plot we created! A representation of me humming quietly so as to not disturb my roommates in our Zoom world.

This is sort of useful! It just lacks any labels telling me where the frequencies are, so I added a background that shows them:

Our implementation of the RTA plot! A representation of me humming a little louder.

Thanks for reading my first article! Let me know what you think; I’d appreciate any comments on improvements I can make!
