This is an experiment created with help from ChatGPT. The Eyesy normally only reacts to the overall amplitude of the incoming audio. I was hoping to create something that would react to the amplitude of different frequency bands within the audio signal.
The result uses a pared-down set of numpy functions to analyse the audio and break it into three frequency bands – low, medium and high.
This mode is an experiment to test the concept. Three scopes are drawn, each reacting to a different frequency band. It's not perfect (there is still some frequency bleed between bands) but it seems to work.
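For anyone curious about the basic idea, here is a minimal sketch of how a buffer of audio samples could be split into three bands with numpy's FFT. This is not the exact code from the mode – the sample rate and the band edges (200 Hz / 2000 Hz) are assumptions for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed; the Eyesy's actual audio rate may differ

def band_levels(samples, low_cut=200.0, high_cut=2000.0):
    """Return (low, mid, high) amplitudes for one buffer of audio samples."""
    samples = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples))                # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), 1.0 / SAMPLE_RATE)
    low = spectrum[freqs < low_cut].sum()                  # bass energy
    mid = spectrum[(freqs >= low_cut) & (freqs < high_cut)].sum()
    high = spectrum[freqs >= high_cut].sum()               # treble energy
    return low, mid, high
```

Inside the mode, something like this gets called once per frame on the incoming audio buffer, and each of the three scopes is scaled by its band's level.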
I’m posting this in the hopes that someone with more coding experience could further refine the idea. Or perhaps C&G can even build something like this into the firmware so that all patches can be even more dynamically reactive!
Knob 1 – Amplitude of the scopes
Knob 2 – Line thickness
Knob 3 – Filter tightness; I think it reacts best with the knob mostly down.
Knob 4 – Standard color picker (with LFO) for the three scopes. The colors of the three scopes are offset, so they will always be different.
Knob 5 – Standard background color picker. A rough sketch of this knob mapping follows below.
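This is only a simplified illustration of how the knobs could be wired up in a draw() function, assuming the standard Eyesy API (etc.knob1–etc.knob5, etc.audio_in, etc.xres/etc.yres, etc.color_picker_lfo(), etc.color_picker_bg()). band_levels() is the hypothetical helper from the sketch above, and the scaling constants are guesses, not the mode's actual values:

```python
import pygame

def draw(screen, etc):
    etc.color_picker_bg(etc.knob5)               # Knob 5: background color
    gain = etc.knob1 * 4.0                       # Knob 1: scope amplitude
    thickness = max(1, int(etc.knob2 * 10))      # Knob 2: line thickness
    # Knob 3 would adjust the band filter tightness inside band_levels()
    low, mid, high = band_levels(etc.audio_in)

    for i, level in enumerate((low, mid, high)):
        # Knob 4: color picker with LFO, offset per scope so they always differ
        color = etc.color_picker_lfo((etc.knob4 + i * 0.33) % 1.0)
        y = int(etc.yres * (i + 1) / 4)          # stack the three scopes vertically
        height = min(int(level * gain * 0.001), etc.yres // 4)
        pygame.draw.line(screen, color, (0, y - height), (etc.xres, y - height), thickness)
```

The real mode draws full waveform scopes rather than flat lines, but the structure – one band level driving each of three drawings – is the same.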
