
soundStrider Postmortem

Reflecting on a nine-month journey with the Web Audio API

Nine months ago I started a solo game development journey that began with a prototype and ended with my first self-published title. It’s been an incredible ride that’s tested my fortitude and taught me transferable lessons. Throughout this postmortem I will present an in-depth analysis of creating soundStrider with the Web Audio API, its effect on my finances, and my takeaways. Please join me in wrapping up this chapter of my life.

The pitch

Whether you’re new or returning I’d like to introduce you to my game.

soundStrider is a psychedelic exploration game where you wander endless audio worlds of synthesized sounds that evoke naturalistic, abstract, and musical spaces. Any location can be saved as a bookmark to revisit later for meditation, studying, or sleep. And whenever you feel inspired, a virtual instrument with collectible presets, ranging from acoustic instruments to digital synths, adapts its controls to the surrounding world.

Let’s take a short trip through some of its possibilities:

Developing soundStrider

Realizing this pitch—starting from its early conceptual design to the final experience shown in the trailer—was the most challenging software development project I’ve ever undertaken. It was rigorous and multidisciplinary. Yet it felt like I’d been training for it my entire life.

My game development journey started early with RPG Maker. Throughout childhood I created small worlds, but I was more interested in engineering systems like banks and day-night cycles in its event manager than in telling a story. Over the years I tinkered with various ideas and engines, but they never materialized into finished products. The fascination remained dormant.

Simultaneously I had a passion for music. I composed all the music in those games and eventually that became my fixation. When I first picked up an instrument my instinct was to record it. Over the years the sounds I produced grew more elaborate and processed. This inspired my pursuit of a degree in computer music, where I learned the crafts of synthesis and digital signal processing.

While in college I started a career in web development and pursued it for the past decade. Bouts of impostor syndrome pushed me to understand and learn from my weaknesses and failures. That itself warrants a post-mortem. Since then I feel I’ve completely changed but deep down there’s still the imaginative child who wants to create.

The intersection of all that brings us here.

Early conceptual design

Nine months ago I entered my first game jam as a solo developer and created Soundsearcher, a minimalist audio exploration game that can be completed in a few minutes. Creating it was revelatory because it demonstrated the power of the Web Audio API and how it could be leveraged in crafting spatial audio experiences. In my post about building Soundsearcher I announced that I was beginning work on its successor.

It started as a game design document that I kept in version control to track its changes. There I outlined my objective to build upon this prototype into a full game. My goals included expanding it with more to hear and do, packaging it as an inclusive experience, and giving it wider distribution. From there I began to strategize.

To expand the game I wanted to use procedural generation to place a variety of biomes with their own unique environmental and footstep sounds, and then link them together with repeatable quests that encourage exploration between them. To promote inclusivity I wanted to implement a realistic psychoacoustic model with binaural rendering and more support for assistive technologies. And thinking about its distribution helped me define its unique look and consider its utility beyond being just a game.

Figure 1: Expanding from Soundsearcher into sixteen palettes of sounds
Elemental Astral Beach Aquatic
Mainframe Limbic Storm Forest
Trance Urban Mountain Classic
Pulse Industrial Desert Tundra

Perhaps my biggest realization was that I needed more research, tooling, and systems to make soundStrider possible. I couldn’t simply pile these features on top of a fragile prototype built in 48 hours. It needed to be reworked.

Building a game engine

The decision to build a game engine isn’t one to take lightly, even for experienced software engineers. For example, users on the r/gamedev subreddit frequently dissuade their peers from doing it, emphasizing that energy spent building an engine is energy that can’t be devoted to shipping a good game.

Personally I like building hammers and I felt that soundStrider presented a unique use case. Besides, in targeting the web platform I actually didn’t need to worry about many of the low-level issues that engines typically solve, like the rendering pipeline or system compatibility. In many ways its engine is more of a framework that wraps around existing Web APIs.

Figure 2: The soundStrider engine as of December 2019
Debugging interface

The biggest architectural change between the two games is the engine’s publish-subscribe messaging pattern. It allows game systems to be modularized and to subscribe to engine events, like a physics frame or whenever state is imported or exported. Other enhancements include a virtualized mixer that allows modules to create buses for more advanced routing and layering, a streaming service that abstracts away prop spawning and culling, and a plethora of helpful utilities.

Those utilities included custom implementations of seeded pseudo-random number generation and Perlin noise. At this point developing the engine was still an extracurricular learning experience without deadlines so I was happy to reinvent them. Understanding how those algorithms fundamentally worked helped reveal truths about their usefulness and how I could apply them to procedural world generation.
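As an illustration of the first utility, here’s mulberry32, a well-known tiny seeded generator. It isn’t necessarily the algorithm soundStrider uses, but it shows why seeding matters for procedural generation: the same seed always replays the same sequence.

```javascript
// mulberry32: a tiny seeded pseudo-random number generator
function createSeededRandom(seed) {
  let state = seed >>> 0 // coerce to an unsigned 32-bit integer

  return () => {
    state = (state + 0x6D2B79F5) >>> 0
    let t = state
    t = Math.imul(t ^ (t >>> 15), t | 1)
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61)
    // Map the 32-bit result into [0, 1)
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

// The same seed regenerates the same world
const random = createSeededRandom(12345)
const first = random()
```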

Implementing the psychoacoustic model was mostly a deep dive back into the college textbooks I kept. Each prop has a binaural circuit that combines two monaural circuits into a stereo signal. Each monaural circuit models the space between the sound source and the listener’s ear. Every frame they calculate attributes like interaural arrival time and intensity differences and acoustic shadow to emulate human hearing. A nice side effect from this method is that the Doppler effect is built-in due to the pitch shifting caused by fast modulations of arrival times.
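soundStrider’s exact model isn’t reproduced here, but Woodworth’s classic spherical-head formula is a reasonable sketch of the arrival-time attribute such a monaural circuit might compute each frame (the constants are textbook averages, not the game’s values):

```javascript
// Woodworth's approximation of interaural time difference (ITD)
// for a spherical head; an illustrative model, not soundStrider's code
const HEAD_RADIUS = 0.0875 // meters, an average human head
const SPEED_OF_SOUND = 343 // meters per second in air

function interauralTimeDifference(azimuth) {
  // azimuth in radians: 0 is straight ahead, PI / 2 is fully to one side
  return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + Math.sin(azimuth))
}

const ahead = interauralTimeDifference(0) // no delay between ears
const side = interauralTimeDifference(Math.PI / 2) // roughly 0.66 milliseconds
```

Feeding a value like `side` into a per-ear DelayNode is what makes fast modulations of arrival time shift pitch, producing the built-in Doppler effect described above.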

In total the engine took about two months to build after wrapping up Soundsearcher and understanding its flaws. From there I had initially wanted to take it slow, but what happened next really accelerated my journey.

Switching to full-time

Personally I believe that time is our most valuable resource. We waste the best years of our lives through the exploitation of our labor. This is a deeply intersectional problem that can’t be easily solved. Yet I fantasize about a world with four-day workweeks, three-month vacations, compassionate sick time, and regular gap years supported by automation, green energy, social policies, tax reforms, and a universal basic income. This utopia is possible if only we had the courage to confront our capitalism.

When you can’t fix the system, taking care of yourself is the next best option. So for the past four years I was diligently yet quietly planning a sabbatical. Every paycheck I would deposit a third of it into savings and project how long it’d last. It was beginning to add up, but then I lost my job.

Figure 3: After the exit interview I went straight to this protest
Impeach and remove

In the past I haven’t shared much more about it than it being the catalyst for soundStrider, but in fact the wound is quite deep. I attribute it directly to a private meeting I had with the company owner, during which I revealed my passions for audio programming and game development and suggested there was a lot we could learn and apply as an agency from those areas. My letting go exploited a legal loophole: my position was eliminated, so I was neither fired nor laid off. In practice this meant that, just one month before COVID-19 was first documented in the US, I was ineligible to file for unemployment or receive extended benefits from the CARES Act. I’ve since learned that this is a pattern of abuse that has happened to previous employees. It’s heartless.

After a few days of recovering from the whiplash, I decided my sabbatical was due and officially announced soundStrider. Then I packed a bag and took a short holiday to prepare for the hardest year of my life.

Alchemizing sound from code

The craft of synthesis is like the sixth sense of mathematics. At first the toolbox is overwhelming, but our intuition expands with every experimental combination. Signal processing applies basic arithmetic operations to signal and constant operands such that a circuit flows from input to output. To mix two signals we add them together, to amplify a signal we multiply it by a constant, and so on. From more elaborate circuits come more elaborate results.

The Web Audio API provides a toolbox of basic building blocks that need to be combined into circuits. In articulating the sounds of soundStrider, a repeating pattern of common circuits emerged, like FM synthesizers and feedback delays. Extracting these circuits into factory functions early on helped with clarity, consistency, and efficiency. By reducing my mental load I could then focus on the creative aspects of its sound design.

Below is an example of how a simple FM synthesizer might be built from a factory. FM synthesis applies frequency modulation to a carrier signal to produce a harmonically-rich signal. By returning an interface of parameters it provides easy and indirect access to its internals for automation:

Figure 4: A factory function that builds an FM synthesizer
function createFmSynth({
  carrierDetune = 0,
  carrierFrequency = 440,
  carrierType = 'sine',
  context,
  gain = 0,
  modulatorDepth = 0,
  modulatorDetune = 0,
  modulatorFrequency = 0,
  modulatorType = 'sine',
  when = 0,
} = {}) {
  if (!(context instanceof AudioContext)) {
    throw new Error('Please provide an AudioContext')
  }

  const carrier = context.createOscillator(),
    depth = context.createGain(),
    modulator = context.createOscillator(),
    output = context.createGain()

  // Set up circuit: the modulator, scaled by its depth, drives the carrier frequency
  modulator.connect(depth)
  depth.connect(carrier.frequency)
  carrier.connect(output)

  // Set parameters
  carrier.detune.value = carrierDetune
  carrier.frequency.value = carrierFrequency
  carrier.type = carrierType
  depth.gain.value = modulatorDepth
  modulator.detune.value = modulatorDetune
  modulator.frequency.value = modulatorFrequency
  modulator.type = modulatorType
  output.gain.value = gain

  // Start oscillators
  carrier.start(when)
  modulator.start(when)

  return {
    connect: (...args) => output.connect(...args),
    disconnect: (...args) => output.disconnect(...args),
    param: {
      carrierDetune: carrier.detune,
      carrierFrequency: carrier.frequency,
      gain: output.gain,
      modulatorDepth: depth.gain,
      modulatorDetune: modulator.detune,
      modulatorFrequency: modulator.frequency,
    },
    stop: (when = context.currentTime) => {
      carrier.stop(when)
      modulator.stop(when)
    },
  }
}
With this one factory a variety of textures can be achieved, from rich horns and bright pads to howling wolves and scuttling crabs. The intuition comes from careful modeling that deconstructs each sound into its harmonic and envelope components. Some are rich and some are dull, some are quick and some are slow. It’s what differentiates strings and idiophones.

So what makes a glockenspiel? Harmonically it exhibits a pure root tone, with a tapering series of slightly dissonant overtones. It’s more complex than a triangle wave, but with finely tuned FM synthesis and a highpass filter it becomes recognizable. Its percussive characteristics arrive from a gain envelope applying a quick attack and gradual release. Here’s an application of the createFmSynth() factory function defined above:

Figure 5: Simple FM model of a glockenspiel
// Assumes an AudioContext `context` and a destination node `destination` in scope
function playGlockenspiel({
  frequency = 440,
  gain = 0,
  when = 0,
} = {}) {
  // Ensure future times
  when = Math.max(when, context.currentTime)

  // Instantiate synth
  const synth = createFmSynth({
    carrierFrequency: frequency,
    context,
    modulatorDepth: frequency / 2,
    modulatorDetune: 215,
    modulatorFrequency: frequency * 3,
    modulatorType: 'sawtooth',
    when,
  })

  // Filter the synth
  const filter = context.createBiquadFilter()
  filter.type = 'highpass'
  filter.frequency.value = frequency * 6

  synth.connect(filter)
  filter.connect(destination || context.destination)

  // Schedule gain envelope
  const attack = when + 1/64,
    decay = attack + 1,
    release = decay + 2

  const gainParam = synth.param.gain

  gainParam.setValueAtTime(0, when)
  gainParam.exponentialRampToValueAtTime(gain, attack)
  gainParam.exponentialRampToValueAtTime(gain / 8, decay)
  gainParam.linearRampToValueAtTime(0, release)

  // Stop the oscillators once the release completes
  synth.stop(release)

  return synth
}
Putting it all together yields this interactive example. With each click it sequences a major triad and visualizes the notes with an oscilloscope:

Figure 6: Playing the synthetic glockenspiel

In total 137 unique sounds were created with factories like these, including environmental sounds, footsteps, instruments, and navigational cues. The majority of them are driven by procedural systems and made flexible through randomization to produce infinite arrangements and possibilities. Inventing them occupied most of my development time, but with each sound it felt easier as my sixth sense grew stronger.

Sharpening an accessibility focus

Long-time readers should by now be familiar with my commitment to accessibility. It was a difficult awakening for me in my professional career when I first learned of the Web Content Accessibility Guidelines. I obsessed over my failures to make inclusive interfaces that were perceivable, operable, robust, and understandable. I reflected on why they were never mentioned in school or by my employers or colleagues. And then I acted by incorporating them into my workflow and advocacy.

The mainstream concept of accessibility is a technological accident. Our bodies are our only interface with the world, and those with power who’ve constructed our society have excluded others from participation due to lack of awareness or necessity. When we stand on the shoulders of these giants the exclusion exponentiates as we leave more folks behind in a world that’s increasingly globalized by high-speed communications and computing. So it’s essentially a poorly maintained backdoor built into a system that by design works perfectly for a narrowing definition of abled folks. It might be the best solution we’ve got, but its existence should give you pause.

With soundStrider I wanted to add a new chapter to my accessibility journey. Through an audio game I had the opportunity to produce something that was inherently accessible, at least within the constraints of modern technology. I leveraged my professional expertise to engineer an inclusive interface. With careful sound design and physical modeling I built a navigable acoustic space. Various control schemes were implemented to accommodate a spectrum of physical and musical abilities. And a plethora of changes and additions were made based on player feedback.

What’s funny about this journey is that I had no idea that audio games were a real genre. soundStrider is just a cool product of my passions. I was aware of gamers who are blind, typically featured winning Street Fighter tournaments, but I didn’t fully grasp the depth of their community or the true visionaries who delight and empower them. My entire world opened up when I did. From them I learn every day and my work exudes their energy.

Ultimately I’m grateful for their contributions to this journey. In fact, entire features wouldn’t exist without the input from the audio gaming community. The Audio Mixer and Performance Settings screens, as well as extensive user manual revisions, are the direct result of several conversations we had on the forums. Since release I’ve outlined future updates based on their suggestions that would further improve its accessibility. I’m looking forward to playing their games and watching our relationship grow.

Road to release

With a vision, an engine, a sabbatical, a sixth sense, and an ethos, the road to release was pretty straightforward. For the first four months I adhered to a strict morning schedule and worked at least six hours a day, five days a week, with daily journals and biweekly milestones. It shifted slightly when the pandemic hit and my locality entered lockdown. Basically I quit journaling and wearing pants, but I stayed focused on soundStrider and its roadmap.

Every day was strung together with challenges and their learning experiences. Obviously I can’t dive into all of it here. Initially this devlog contained a trove of reflections, but its frequency and scope scaled back as I confronted the complexities of my goals and milestones. Most weeks I was too busy to write good content and meet my deadlines. However, there are some illuminating posts about process that are worthy of summary:

  • soundStrider v0.2.0. In my first post after announcing soundStrider I discussed how I crafted its visual identity, implemented its menu systems, and set up task automation for building and distributing the game.
  • soundStrider v0.3.0. In the next update I provided a detailed tutorial of building a spectrogram with the Web Audio API. Afterwards I discussed how I ported the sounds from Soundsearcher into the new engine.
  • soundStrider v0.4.0. After that I revealed how I implemented its procedural quest system and my approach to its navigational cues. Then I reflected on several accessibility insights gained from watching Mark Brown’s Designing for Accessibility series.
  • Improving soundStrider. In my Beta 4 update announcement I unpacked my lessons learned from introducing soundStrider to the audio gaming community. There I opened up about my philosophy of game design and my definition of a game.
  • soundStrider Released. In the first half of my release announcement I detailed how the poison system and virtual instrument were designed and implemented. The second half is a compilation of patch notes from its final beta releases to launch.

When I finally uploaded its final builds and clicked the publish buttons, a wave of emotions passed that I never had the time to process until now. Release day was the start of GMTK Jam 2020 and my code switch was instantaneous. I guess I just like to keep going like that.

By the numbers

With soundStrider finally released, let’s dive into its results! Below we’ll assess the accuracy of its roadmap, calculate its effect on my finances, reveal its Steam wishlists and conversions, and reflect on its total units shipped:

Development roadmap

In my second soundStrider devlog I announced its roadmap. Over the next six months it would undergo four development phases that would end with its release. For the most part it was quite accurate:

  • Early access release. This was projected for the end of March. The demo released on February 16 and the first alpha followed on March 30. It was quite polished, containing 11 of 16 palettes and 4 of 10 features.
  • Alpha point releases. Throughout April and into the first half of May there were 6 alpha updates that introduced the remaining palettes and one new feature. This period lasted 3 weeks longer than predicted due to an optimization sprint, implementing escort quests, and preparing for the beta. This was also when COVID-19 began to take its toll on my local community, during which I dealt with debilitating anxiety that upended my workflow and daily schedule.
  • Beta point releases. Leading up to release there were 6 beta updates. Instead of my planned feature freeze, I rapidly iterated on its remaining features, two of which were planned to be revealed on launch day but instead received their own beta updates. With this rapid iteration I made some mistakes and had to issue 2 critical hotfixes. Overall this stage did yield vast improvements in accessibility and performance without breaking any saves.
  • Stable gold release. With a target release date of June 15, the exact release date of July 10 was estimated 6 months in advance with an 87% accuracy. The estimate was off because of an extended alpha period, additional betas for secret features, and a conflict with the Steam Summer Sale. Yet the release was feature complete and far beyond my original imagination. There are still some stability issues that I intend to address soon.

Opportunity cost

While developing soundStrider I was unemployed, therefore its opportunity cost is the wages I would have earned from similar full-time employment. At the end of each day I logged my time into a spreadsheet categorized by task. This wasn’t entirely accurate because it didn’t account for short breaks or distractions, so for future projects I’d like to invest in a more robust time tracking solution.

To calculate its costs I assigned hourly rates to each category:

  • Development. I spent 828 hours implementing, testing, or documenting the software. A $75 per hour rate is appropriate for my skill level as an engineer while accounting for my inexperience with professional game development. This totals $62,100.
  • Design. I spent 70 hours creating graphical assets, storyboarding screens, capturing screenshots and video, editing trailers, and writing marketing copy. Given my basic proficiencies an entry level rate of $40 per hour is appropriate. This totals $2,800.
  • Devlog. I spent 102 hours creating content for this blog minus this post. A living wage of $20 per hour is appropriate because I’m not a professional writer and its quality varied over time. This totals $2,040.
  • Miscellaneous. I spent 29 hours on journaling, live streaming, and time management. For this a living wage of $20 per hour is also appropriate. This totals $580.

Overall I spent 1,050 hours over 159 work days for an average of 38 hours per week. In total it cost me $67,520 of potential wages. In comparison this is much greater than my entire salary last year, so I will be more mindful when negotiating future employment opportunities.

Burn rate

While unemployed I burnt through half my savings to support the full-time development of soundStrider. I adapted with extreme frugality, spending $11,000 on expenses like rent, utilities, health insurance, and basic needs for an average monthly burn of $1,571.

Ultimately this would have been impossible without help from my partner. I’m extremely grateful for their emotional support and covering half of our living expenses. They work so hard and I can’t thank them enough.


Net revenue

To date soundStrider has sold 38 retail copies for $298 in net revenue. Of these copies, 28 were purchased via since early access began and 10 were purchased via Steam upon release.

Wishlists and conversions

To date soundStrider has received 239 wishlists on Steam with a conversion rate of 3.7%. This is well below average for a typical game launch, but without real data on similar games with similar development or marketing budgets it’s difficult to know how it compares.

Something that surprised me was how little my participation in the Steam Game Festival helped with its visibility or wishlists. I created a thoughtful demo, uploaded new screenshots and trailers, rewrote my store page, and hosted a live stream. My expectation for a virtual convention with equal access to each booth was countered by an algorithm that only promoted popular titles. It was discouraging to say the least.

Otherwise I put very little effort into its marketing or growing its wishlists. I posted on Reddit a few times, uploaded a few trailers, and updated its devlog regularly. If my livelihood relied on its success, then I certainly would have tried harder to reach potential players, streamers, or publishers. But nothing in the world could convince me to rejoin Facebook or Twitter!

Total units shipped

From its inception soundStrider was a niche passion project that was about transforming the folks it ultimately reached. To me that meant getting it into as many hands as possible—even if it meant giving it away for free. So looking at the total number of units shipped presents a clearer indicator of success.

The first playable soundStrider demo was released about one-fifth of the way into its development cycle. To date it’s received 282 downloads and 1,496 browser plays on A few months later the demo was released on Steam, where it has 3,744 downloads and 89 unique users to date.

Last month I was proud to participate in’s Bundle for Racial Justice and Equality. Each of the 814,578 contributors has soundStrider in their library. To date 1,107 downloads can be attributed to the bundle.

On release day I started a free community copy program on for low-income and marginalized folks. Whenever soundStrider is purchased at its recommended price another copy is added to the pool. So far 4 community copies have been claimed.

Unlike my wishlists and sales, these numbers absolutely warm my heart. I’ve created a game that nearly 1 in 10,000 people own and virtually anyone can try. Now that’s a mission accomplished.

Lessons learned

Overall the results weren’t surprising for my first self-published title.

I understood the risks I was taking by developing soundStrider and managed my expectations accordingly. I didn’t expect a hit, to be noticed by influential publications or streamers, or to recoup my burn. So financial success would have been an arrogant and short-sighted goal.

How I judge its success is how I grew as a developer and person. Every day I learned more about my crafts that I can apply to future projects. Lately I’ve been engaging more in the audio gaming community, understanding their struggles, and have even made a few friends. Now I have a polished portfolio piece that I look forward to sharing and improving. This is success.

What follows are the technical lessons I learned from building soundStrider on the web platform. In pushing myself to create such detailed worlds I also pushed the boundaries of current browsers and specifications. Performance was volatile so its scope was constantly reduced. Ideally it should’ve been implemented in an environment like SuperCollider, but I persisted with what I knew. Perhaps when browser technologies improve it could be expanded with more concurrent voices, farther render distances, and increased realism. Or at least perform as intended after a few Chromium updates?

Web audio is powerful

The fact that a game like soundStrider is even possible in the browser is a testament to the Web Audio API’s strength. It can support virtually any type of synthesis, represent large and complex graphs, and build circuits with computational uses beyond audio. With its AudioNode interface theoretically anything is possible.

Let’s deconstruct a scene in the Elemental palette. A field of twenty oscillators surrounds you. Each oscillator is a filtered sawtooth wave with another triangle wave modulating its amplitude. Those oscillators are processed binaurally and passed through a reverb send to emulate their positions in a realistic physical space. Before output a lowpass filter and a compressor perform the final mastering step. Overall this graph contains about 300 nodes, forty of which are running oscillators.

For other palettes these graphs become more complex. Many of their objects leverage AM, FM, additive, subtractive, and granular synthesis techniques to produce their uncanny sounds. Whenever they emit a sound they create these circuits, connect them to the graph, and dismantle them when they’re finished. Imagine then how the graph breathes as your movements produce footsteps and the music flows through your instrument.

For some of the instruments there was a use case for scaling or interpolating input values into different ranges. For instance, the modulation wheel event produces values in the range [0, 127]. Those events might need to scale to [0.5, 2] in order to control the frequency of a BiquadFilterNode. With a couple of ConstantSourceNode and GainNode instances a circuit can be built to scale these values. This alludes to the greater computational tasks that could be achieved with more complex circuits.
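The arithmetic behind that circuit is an ordinary linear map. Sketched as a plain function, with the node equivalent noted in comments:

```javascript
// Linearly map a value from [inMin, inMax] onto [outMin, outMax].
// As a signal circuit, the multiply would be a GainNode and the
// constant offset a ConstantSourceNode summed into the output.
function scale(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin)
}

scale(0, 0, 127, 0.5, 2)    // 0.5, the mod wheel at rest
scale(63.5, 0, 127, 0.5, 2) // 1.25, halfway
scale(127, 0, 127, 0.5, 2)  // 2, fully engaged
```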

Web audio is young

When the first draft of the Web Audio API specification was published in 2011, its eventual adoption revolutionized audio on the web. Historically audio could only be played via a Flash or Java applet loaded within an <embed> element. The introduction of the <audio> element later allowed developers to embed or trigger audio files natively with scripts. Then the Web Audio API opened endless possibilities with its addition of real-time processing and synthesis.

Through soundStrider I’ve learned that, despite its progress, there are some things the specification still can’t do. I believe this is a result of the introduction of the AudioWorklet interface and WebAssembly. In practice both are powerful additions to the web platform, but they have become excuses not to expand the specification’s features. Here are some suggestions.

For example, I had a use case for a circuit that calculates the reciprocal of a signal. With the current specification there are two ways to accomplish this. An AudioWorkletProcessor could be registered to perform the operation. Or, after scaling the signal into the range [-1, 1], a reciprocal could be calculated by passing it through a WaveShaperNode instance with an appropriate curve and then scaling it back. Both solutions are absurd.
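To make the second workaround concrete, here’s one way such a WaveShaperNode curve might be generated; the resolution and clamp limit are arbitrary illustrative choices:

```javascript
// Build a WaveShaperNode curve approximating 1 / x over the shaper's
// [-1, 1] input domain, clamped near the pole at zero. The signal must
// be pre-scaled into that domain and post-scaled back afterwards.
function createReciprocalCurve(length = 1024, limit = 100) {
  const curve = new Float32Array(length)

  for (let i = 0; i < length; i++) {
    const x = (i / (length - 1)) * 2 - 1 // map the index onto [-1, 1]
    const y = x === 0 ? limit : 1 / x    // guard the undefined point
    curve[i] = Math.max(-limit, Math.min(limit, y))
  }

  return curve
}

// In practice: shaper.curve = createReciprocalCurve()
const curve = createReciprocalCurve()
```

The clamp avoids the infinities mentioned below, at the cost of accuracy near zero.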

Signal division is actually a common ingredient in digital signal processing. To solve this problem I would propose a DividerNode interface that accepts an input signal, divides it by the value of an a-rate AudioParam (which could be a constant value or a signal), and outputs the result. Other parameters could help prevent the exceptions caused by infinities and infinitesimals by clamping its output. If the GainNode interface performs multiplication, then this would be its inverse.

The advantage of the AudioWorklet interface is how it enables multithreaded audio by performing its processing in real-time off the main thread. Something that would improve the AudioWorkletGlobalScope is giving it an AudioContext instance so processors could leverage the API to build audio subgraphs natively. For example, all the ingredients of binaural processing exist in the API, but to offload it to another thread its filtering and delay would need to be implemented from scratch. Inconsistencies arise when these are mixed with native nodes, therefore access to the same logic would eliminate this and provide developers with more tools.

Optimization is complex

All of this happens in near real-time. It’s magical to keep introducing more nodes and complexity, but—suddenly—an invisible wall appears. The audio stutters and drops out inconsistently. What’s insidious about the invisible wall is that it’s a quantum black box; it’s not clearly defined, its boundaries differ vastly per system, and its arrival is undetectable. This is the Sisyphean task of optimizing a web audio application.

As one of the principal architects of the Web Audio API, Paul Adenot has provided several performance and debugging notes. I’m grateful for this excellent resource, which outlines the computational and memory needs of every node and best practices for their optimization. However, there are some missing browser features that could help developers inspect and mitigate performance issues—or circumvent them entirely.

Currently the resources available to web audio applications are limited to a fraction of the available CPU. For a game like soundStrider I would have loved command line flags or user permissions that grant real-time access and more CPU power to the BaseAudioContext such that it performed like a native synthesis environment. This would reduce the barrier of entry to creating complex web audio applications and offer more power when needed.

Additionally the BaseAudioContext should provide its resource usage as read-only floats, like the values shown in Chromium’s Web Audio panel. Hypothetically this could be exposed through an AudioPerformance interface. By accessing this programmatically developers could detect when the graph is becoming too complex. Then they could develop dynamic optimization strategies like disconnecting or simplifying lower-priority sounds and effects. This would allow them to scale the graph based on the device and its available resources.
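Even without such an interface, the strategy itself can be sketched around whatever load signal is available. In this sketch the caller supplies a `load` estimate in [0, 1]; the `VoiceManager` name, the threshold, and the per-voice savings guess are all my own inventions for illustration:

```javascript
// Tracks active voices by priority so the lowest-priority ones can
// be shed when the estimated rendering load climbs too high.
class VoiceManager {
  constructor(threshold = 0.8) {
    this.threshold = threshold;
    this.voices = []; // entries of { priority, disconnect }
  }

  // Register a voice with a priority and a callback that tears down
  // its subgraph (e.g. by calling disconnect() on its output node).
  add(priority, disconnect) {
    this.voices.push({ priority, disconnect });
    this.voices.sort((a, b) => a.priority - b.priority);
  }

  // Called periodically with an estimated load in [0, 1]. Sheds the
  // lowest-priority voices until the estimate falls below the
  // threshold, and returns how many were disconnected.
  update(load) {
    let shed = 0;
    while (load > this.threshold && this.voices.length > 0) {
      this.voices.shift().disconnect();
      shed += 1;
      load -= 0.1; // rough guess at the savings per voice
    }
    return shed;
  }
}
```

With a real AudioPerformance interface, the `load` argument would come straight from the context instead of a heuristic, which is what would make strategies like this reliable across devices.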

Ultimately what would improve the developer experience is more tooling for inspecting the audio graph. Earlier versions of Firefox provided a primitive inspector, but it has since been removed. Ideally an audio inspector would allow for the freezing of execution, the ability to step through events, and a GUI that visualizes all AudioNode instances, their connections, and their parameter events and values. Without such tools it’s difficult to profile issues with garbage collection, orphaned graphs, or inefficiencies in implementation, because the rendering happens so quickly and opaquely.

I can do this

My biggest lesson is a mantra we should all repeat more often: I can do this.

For a project of this scope it’s easy as time goes on to feel overwhelmed, lose sight of your vision, or feel like an impostor. Life is busy, the world changes, so many other developers are making great things, and software tools are constantly being released and updated. It may be difficult to stay motivated, keep up, and feel relevant, but you can do this too.

Finding the right motivators can keep you afloat. Mine were knowing my strengths, welcoming my curiosities, and effecting positive change in others. My education and career prepared me for this; I crave to learn and grow because we can always do better; and soundStrider has the potential to touch and heal so many folks. Look deeply into yourself and your goals and you’ll find the motivators for your long-term project too.

When it’s over, take that energy to your next big thing, and repeat.

Life after soundStrider

With soundStrider released and a fuller understanding of its development cycle, I’m ready to continue my game development journey. In its release announcement I outlined a few avenues to explore.

After this post goes live I’m beginning work on an update which will address some outstanding performance issues. Because it’s difficult to profile garbage collection issues within the audio graph, I fear it will require a complete line-by-line audit of the codebase. My hope is that I overlooked something simple, but it could result in a major overhaul. Thankfully I have tools for testing individual palettes and props that could help illuminate leaks.

Once I understand the performance issues further I plan to formally announce the open-source release of its engine. With it I hope to inspire other web developers who would like to create interactive audio experiences by providing the tools for building synthesized sounds and positioning them as props on a binaural stage. While you can already clone the repository, it would benefit greatly from my findings in the next point release.

From there, there are so many options to pursue. Audo is due for a post-jam update. No Video Jam will be underway. And with my sabbatical coming to a close I’ll be looking for work in accessibility engineering, audio programming, front-end development, game development, sound design, or consultancy. Please contact me if you think my expertise would be a good fit for you.

Eventually I’d like to issue an interactive anthology covering all playable soundStrider versions. I’m still unsure of how exactly I’ll execute this. At its simplest it’d provide a wiki-like experience that pairs each executable with its patch notes. More information will be available on that as it progresses.

Play soundStrider today

Hey there! If you’ve gotten this far and you’re still interested in my game, then there are a few ways you can support my work.

First, soundStrider and its demo are available on Steam:

If you’re looking for DRM-free copies or the ability to play in your browser, contribute a community copy, or leave me a donation, then you’ll want to head over to

Thanks for reading!