soundStrider v1.0.2

Anatomy of a Web Audio performance hotfix

As promised, the latest soundStrider hotfix resolves some performance degradation issues that caused audio stuttering and drop-outs. With some tactical problem solving I was able to trace the problem to eight sounds that created audio sources which were never stopped. Continue reading to learn more about my approach to debugging a complex Web Audio application.

The problem

Synthesis with the Web Audio API is achieved by instantiating audio sources and combining them in various ways. These include the OscillatorNode, AudioBufferSourceNode, and ConstantSourceNode interfaces. When audio sources are started they consume CPU resources until explicitly stopped. Therefore performance issues can arise when too many of them are running concurrently. Typically they manifest as stuttering or drop-outs.
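As a minimal sketch of that lifecycle (browser-only, since it relies on the Web Audio API):

```javascript
const context = new AudioContext()

// An oscillator consumes CPU from start() until stop()
const oscillator = context.createOscillator()
oscillator.connect(context.destination)
oscillator.start()

// A source left running here would keep costing CPU indefinitely;
// scheduling a stop releases those resources
oscillator.stop(context.currentTime + 1) // stop one second from now
```

Note that source nodes are single-use: once stopped they cannot be restarted, so a new node must be created for each playback.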

For an application the size and scope of soundStrider it can be difficult to determine where and how these issues arise. In its post-mortem I lamented the lack of a proper audio inspector in its target browsers. I feared that I would need to audit its codebase line-by-line to correct its most egregious audio performance issues. It became one of those impossible tasks that I kept avoiding despite its importance. Depression does things like that.

Thankfully I was wrong. The solution involved a clever trap to catch and correct some pretty simple mistakes that never should have made it to production. So it’s only a little embarrassing!

The solution

To avoid auditing the codebase I thought it would be a fun experiment to build some basic profiling tools. If I could keep track of when and where audio sources were being started and stopped, then I could calculate the difference and focus on those areas. This proved to be an elegant approach that captured all of the issues in a fraction of the time:

Figure 1: Decorating an AudioContext instance with debugging features
function debuggable(context) {
  if (!(context instanceof AudioContext)) {
    throw new Error('Please provide an AudioContext')
  }

  const runningSources = new Map()

  const sourceFactories = [
    'createBufferSource',
    'createConstantSource',
    'createOscillator',
  ]

  for (const key of sourceFactories) {
    const factory = context[key]

    context[key] = function () {
      const node = factory.apply(this, arguments),
        start = node.start,
        stop = node.stop,
        uuid = crypto.randomUUID()

      node.start = function () {
        const error = new Error()
        runningSources.set(uuid, error.stack)
        return start.apply(this, arguments)
      }

      node.stop = function () {
        runningSources.delete(uuid)
        return stop.apply(this, arguments)
      }

      return node
    }
  }

  context.getRunningSources = () => [...runningSources.values()]

  return context
}

This essentially overrides several methods of an AudioContext instance so that the audio sources it creates are added to and removed from an inner Map when they are started and stopped. The custom getRunningSources() method is then defined to expose its contents. Because it’s considered bad practice to change the prototypes of native objects like AudioNode, I found this to be a good use case for the decorator design pattern. With a decorator function we can apply these changes purposefully to specific instances and create closure over our inner variables.
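The same pattern works for any object whose methods need instrumenting without touching its prototype. A generic sketch, using a made-up player object purely for illustration:

```javascript
// Decorate a single instance: wrap one of its methods and close
// over private state, leaving the prototype untouched
function countCalls(target, methodName) {
  const original = target[methodName]
  let calls = 0 // the closure keeps this counter private to the decorator

  target[methodName] = function () {
    calls += 1
    return original.apply(this, arguments)
  }

  target.getCallCount = () => calls
  return target
}

// Hypothetical example object, not part of soundStrider
const player = countCalls({ play: () => 'playing' }, 'play')
player.play()
player.play()
console.log(player.getCallCount()) // 2
```

Only the decorated instance changes; any other object created from the same constructor keeps its original behavior.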

Interestingly this solution is only possible because of the non-standard Error.stack property. Without the ability to capture a stack trace as a string when the audio sources are started, there would be no way to locate where in the codebase they originated from. Otherwise a full audit would’ve been necessary.
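For reference, capturing a stack trace as a string is as simple as constructing an Error. The stack property is non-standard but supported by all major engines:

```javascript
function startSource() {
  // Constructing an Error captures the current call stack;
  // .stack is non-standard but implemented by V8, SpiderMonkey,
  // and JavaScriptCore
  const error = new Error()
  return error.stack
}

const trace = startSource()
console.log(typeof trace) // 'string'
// The trace names startSource and its callers, pointing back to the call site
```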

Let’s put our custom getRunningSources() method to use:

Figure 2: Example usage of the debuggable() decorator function
const context = debuggable(new AudioContext())

const bufferSource = context.createBufferSource(),
  constantSource = context.createConstantSource(),
  oscillator = context.createOscillator()

bufferSource.start()
constantSource.start()
oscillator.start()

console.log(context.getRunningSources())
// Outputs an array of stack traces, one per node

bufferSource.stop()
constantSource.stop()
oscillator.stop()

console.log(context.getRunningSources())
// Outputs an empty array

From here I was able to playtest the game under a variety of scenarios without digging into its code. Periodically I would check for unstopped audio sources, examine their stack traces, and rectify any underlying issues. In every case the culprit was a complicated circuit where I had mistakenly overlooked one of its many oscillators. Lesson learned!

Upcoming changes

Since soundStrider was released I’ve received a lot of feedback to make it more inclusive. Its next minor update will address some big issues.

Recently I learned that photosensitivity is an umbrella that covers a variety of conditions beyond epilepsy. Something I didn’t realize was how the visualizer can aggravate these conditions and cause harm with its high-contrast colors and striped patterns. In the next update I plan to add a Graphics Settings screen. It will offer selectable color palettes for different or less stimulating experiences. Additional settings may include adjusting its hue, saturation, brightness, and contrast.

A common request from the audio gaming community is a screen for learning the navigational cues present in its Adventure mode. This is a difficult request to fulfil; however, I believe it should be a target for this update as well. Part of the difficulty is routing the sounds to a separate mix, making them slightly more reusable, and then emulating the navigational systems in an understandable way. If I end up copy-pasting it all and maintaining it in two places, then I won’t be too happy, but it won’t be the end of the world. Otherwise I just need to decide where it lives in the growing maze of menus!

Thanks again to all players who have given me their feedback since release. Supporting soundStrider is going to be a long and slow process and I’m grateful you’re all here for the ride. I’m looking forward to sharing more soon.