How and Why I Use AI In My Music Production

I’m working on a project for a label, and I used AI to assist in creating it. And, I’ll be honest, it’s a track for Prognosis.

I can hear it now – “oh my GOD, Chad, how COULD YOU?!?!?! AI?!?!?!”

That’s a perfectly valid response. After all, you know and I know that Dan and his crew wouldn’t stand for that shit.

I ain’t trying to pull a fast one on them.

Let me be clear – every track I have ever written and released has been 100% written, recorded, and produced by me and my brain. I have never, and will never, use, promote, or endorse AI-produced music (samples, beats, instrumentals, melodies, patterns, any of that prompt-generated bullshit, forever).

That being said, I am not against using AI as a personal assistant. In the past, I’ve discussed how I use AI for my marketing. And what follows is how I used AI, just today, to help me finish a track I’m working on.

My AI Personal Assistant For A Prognosis Track

I am at the final listening stage of a remix I am completing for Prognosis. I need to review the track in many different listening environments to understand whether there are audio issues that need to be addressed. There are a LOT of variables to consider. So, I gathered up all of the headphones I own (five in total) and reviewed the entire track (9+ minutes long) at length on each pair while taking notes on the things I was intentionally listening for. I did my best not to get lost in the track but to actually listen to it critically at a reasonably toned-down volume level. The headphones included:

• Apple EarPods
• Basin IEMs
• Truthear IEMs
• Sennheiser HD6xx
• Audio Technica M50x

I would assume that about 80% of the general public listens to music on headphones of similar cost and quality.

Obviously, there is still a car test and a test on a high-quality home sound system to complete, but I already have a great deal of data to consider.

All of this being said, I knew that putting all of this data to any sort of useful purpose would require organization, and I had about an hour’s worth of notes to go through.

This is what I used AI for. I gave it instructions for what I wanted it to do (i.e. read all my notes and identify common issues, suggest ways to fix those issues using the plugins I have available, explain why it is suggesting these particular fixes, and suggest ways for me to verify the results of its suggestions). After a bit of work, I eventually got down to something that looks like this:

🧠 High-Level Summary of Issues (With Prioritization)
🔴 Top Priority Fixes (Heard Across Most or All Devices)
• Snare and hi-hats are too flat and lack presence
• Mid-bass is consistently drowned out by kick and sub-bass
• Sub-bass and low-end are muddy — masking the mix
• Kick + Kick sub-bass overpower the track (especially in M50x)
• Ambient pads lack clarity, presence, and have muddy low-end
🟡 Secondary Fixes
• Kick transient could be sharper (more punch needed)
• Ambient pads might benefit from a high-frequency layer for clarity
• Waah Waah element may have resonance issues (HD6xx + M50x)
• Drum bus compression may be contributing to muffled tone — try bypassing
• Cymbals (esp. crashes) lack reverb and sparkle
✅ Elements That Sound Good
• High-end mix balance is generally acceptable (especially on better IEMs)
• Chirp element sounds great on all devices
• General warmth and vibe are coming through well on M50x

Then, to go even further, I asked the AI to make a checklist of things I can check or fix to address my issues. It looks like this:

🥁 DRUMS & PERCUSSION
🔲 Snare Rework
• ☐ Replace with a TR-909 snare or hybrid (909 body + clap/top layer)
• ☐ Apply snare layering (transient, body, clap/air layers)
• ☐ Use transient shaper or parallel compression for snap
• ☐ Boost ~2–5kHz slightly for presence
• ☐ Consider adding reverb tail (short plate)
🔲 Hi-Hats + Cymbals
• ☐ Increase hat volume 1–2dB
• ☐ Use EQ to bring forward ~7kHz–12kHz range
• ☐ Add short stereo reverb to crashes to help them bloom
🔲 Kick Drum Enhancement
• ☐ Slightly increase transient (transient designer or EQ)
• ☐ Consider layering: transient + low-end + body
• ☐ A/B test kick layering vs. single sample
• ☐ Slight reduction of sub-layer volume (especially for M50x translation)
🔲 Drum Bus Check
• ☐ Temporarily bypass drum bus compression and compare
• ☐ If compressing:
◦ Use slow attack (20–30ms), fast release (50–80ms)
◦ Ratio: 2:1 or less
◦ GR: <3dB
• ☐ If still dull, reintroduce drum parallel bus for energy

🎚️ LOW END & BASS MIXING
🔲 Sub-Bass Cleanup
• ☐ EQ sculpt:
◦ HPF to ~30Hz
◦ Notch out muddy buildup areas (e.g. 50–60Hz, 80Hz)
• ☐ Ensure mono below 100Hz
• ☐ Use kick-triggered sidechain or manual ducking
🔲 Mid-Bass Reinforcement
• ☐ Boost level 1–3dB
• ☐ Parallel compress mid-bass with high ratio, blend back in
• ☐ Add light saturation (for harmonics that translate better)
• ☐ EQ boost 250–500Hz as needed (watch masking with pads)
• ☐ Balance it just under the main bass line in loud sections
🔲 Kick/Sub Balance
• ☐ Reassess kick/sub levels: reduce both slightly if overpowering
• ☐ Use spectrum analyzer to confirm separation
• ☐ Ensure enough space for mid-bass to live

🌫️ AMBIENCE & TEXTURE ELEMENTS
🔲 Pads / Ambient Layers
• ☐ EQ out mud: HPF at 150–250Hz depending on pad character
• ☐ Add high-frequency layer (filtered noise, airy synth, shimmer) to sit around 8–12kHz
• ☐ Volume boost + subtle reverb/delay
• ☐ Optional: sidechain pads subtly to kick or master rhythm element
🔲 Waah Waah Element
• ☐ Check for resonance: sweep EQ for harsh peaks around 300–800Hz
• ☐ Apply notch EQ cuts (Q: 3–5, Gain: -2 to -4dB)
• ☐ Consider light stereo widening or pitch modulation for movement

📐 FINAL PRE-MASTER CHECKS
🔲 Mix Level Targets
• ☐ Integrated LUFS: -14 LUFS for pre-master
• ☐ Peak level: -6 dBFS
• ☐ Use TDR Prism or a similar analyzer to confirm clean low-end spread and tonal balance
🔲 Reference Listening
• ☐ Recheck across all of the following:
◦ Apple EarPods
◦ Basin IEMs
◦ Truthear IEMs
◦ Sennheiser HD6xx
◦ Audio Technica M50x
• ☐ Optional: car stereo, Bluetooth speaker, mono check
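A quick aside on that peak target: measuring true LUFS takes a proper gated, K-weighted meter, but the peak check is simple enough to sketch in a few lines of Python (the sample values here are made up for illustration):

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS for float samples normalized to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0 else float("-inf")

# A mix peaking at half of full scale lands right at the -6 dBFS target
print(round(peak_dbfs([0.1, -0.5, 0.25]), 1))  # → -6.0
```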

That kind of organization took less than five minutes to produce.
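For anyone curious, the “identify common issues” step is basically a tally across devices, something you could sketch yourself in Python (the notes and device names below are invented for illustration):

```python
from collections import Counter

# Hypothetical per-headphone listening notes (illustrative only)
notes = {
    "EarPods": ["snare too flat", "sub-bass muddy"],
    "IEM A":   ["snare too flat", "pads lack clarity"],
    "IEM B":   ["sub-bass muddy", "snare too flat"],
    "HD6xx":   ["pads lack clarity", "snare too flat"],
    "M50x":    ["kick overpowers", "sub-bass muddy", "snare too flat"],
}

# Count how many devices reported each issue
tally = Counter(issue for issues in notes.values() for issue in issues)

# Issues heard on the most devices rise to the top of the fix list
for issue, count in tally.most_common():
    print(f"{issue}: heard on {count}/{len(notes)} devices")
```

An issue flagged on all five pairs is a top-priority fix; one flagged on a single pair may just be that headphone’s character.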

THAT is how I use AI in the development of my music. I use it as a personal assistant. I use it as a glorified spreadsheet. I use it to keep track of notes, to reference manuals, and to get helpful guidance on how to use my DAW and the tools it contains.

Never have I asked AI to write a song for me. But I have asked it to explain parallel compression.
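Since I brought it up: parallel compression just means blending a heavily squashed copy of a signal back in underneath the untouched original. As a sketch (signals as plain sample lists, and the blend amount is arbitrary):

```python
def parallel_compress(dry, compressed, blend=0.3):
    """New York style parallel compression: mix a squashed copy of the
    signal back in under the untouched original, instead of compressing
    the whole thing outright."""
    return [d + blend * c for d, c in zip(dry, compressed)]

# Half of the compressed copy blended under the dry signal
print(parallel_compress([1.0, 0.0], [0.5, 0.5], blend=0.5))  # → [1.25, 0.25]
```

The dry signal keeps its transients; the compressed layer brings up the quiet detail underneath.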

[Image caption: This image is AI slop.]

Look. I know there are ethical dilemmas when it comes to using AI. Should we or should we not use it… the perpetuation of the theft of intellectual property… what it means to be an artist in 2025… automation and our replacement in the workforce…

I don’t know about all of you, but this shit is tough. I’m barely making it. Most of you are as well.

I don’t see any other way to compete than to incorporate AI into at least some aspects of our lives. I don’t know how we’re expected to make a living while fighting AI by shunning all things AI.

As I see it right now, AI is taking over and nobody is taking up arms against it.

So, are we just expected to lay down and die? Allow the absolutely worst people and businesses in society to use AI to do their evil bidding while we common folk chastise each other for using AI to get an advantage? It’s almost as if this was planned…

Nah. I’m gonna use AI as a personal assistant so that I can get WAY more of my creativity out into the world. Because, let’s face it, nobody else is going to do it for me like AI can, as cheaply as AI can. My job isn’t in any rush to give me a raise. I’ve somehow got to compete without losing my soul. It’s just the reality.