
This project was born from an idea I had for an audio-reactive shader on my VRChat avatar. It first feeds live samples from a capture of the system audio into the GPU for processing. It then runs those samples through a semi-custom GPU-accelerated DFT, extracting frequency information that is used to automatically adjust the visualizer's gain. Finally, it combines several interference patterns driven by the raw audio data to generate shapes that appear to move with the music.
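The auto-gain and interference steps can be sketched on the CPU. The project's actual DFT is a semi-custom GPU implementation; below, NumPy's rFFT stands in for it, and every parameter name (`smoothing`, `target`, `freqs`) is an illustrative assumption rather than something from the project.

```python
import numpy as np

def auto_gain(samples, prev_gain=1.0, smoothing=0.2, target=1.0, eps=1e-6):
    """Derive a gain factor from one audio frame's magnitude spectrum.

    CPU sketch of the idea: window the frame, take the magnitude spectrum
    (rFFT here in place of the project's GPU DFT), and nudge the gain so
    the spectral peak trends toward a constant target level.
    """
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    peak = spectrum.max() + eps          # eps avoids division by zero on silence
    raw_gain = target / peak
    # Smooth across frames so the visualizer doesn't flicker
    return prev_gain + smoothing * (raw_gain - prev_gain)

def interference_pattern(x, y, t, gain, freqs=(3.0, 5.0)):
    """Two traveling waves multiplied together; their interference forms
    shapes that appear to move as time t and the audio-driven gain change."""
    return gain * np.sin(freqs[0] * x + t) * np.sin(freqs[1] * y - t)
```

A usage pass might feed one frame of captured audio into `auto_gain` each tick, then evaluate `interference_pattern` per pixel (or per shader fragment on the GPU) with the smoothed gain.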