Digital Audio Stair-Step Myth (or Old Man Yells At Cloud)
I know, I know: I'm an old curmudgeon, an RCA-trained audio engineer and AES officer for over 30 years, and I resent having to take all the AMX beginner courses in audio, and blah blah blah...
But while watching the AMX video, I heard a repetition of a myth in digital audio that we engineers have been trying to dispel for as long as digital audio has been around: the "Stair Step" graph of a digital waveform. The idea is that digital audio is represented as a series of stair steps, with horizontal flat sections between sample points.
This is erroneous and not what is actually happening. The video presents interpolation as a method of smoothing the stair-step waveform into something more natural and analog-ish.
What most folks don't seem to realize is that the most important part of digital audio is the conversion between analog and digital. No A/D or D/A converter ever made produces a true square step from sample point A to sample point B. Instead, there is a smooth curve as the amplitude rises, falls, or holds steady. The voltage change (ΔV) follows a curve very similar to an inertial curve. The end result is not a jagged or stair-steppy waveform. Even with a cheap, bad D/A converter you don't see a jagged waveform.
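For the curious, here's a minimal sketch of why the stair steps never survive reconstruction. This assumes Python with NumPy, and the sample rate, test frequency, and grid spacing are just illustrative numbers I picked; it applies the textbook Whittaker-Shannon sinc interpolation to a sampled sine and shows the result is a smooth curve, not a staircase.

```python
import numpy as np

fs = 8000.0            # sample rate in Hz (illustrative)
f = 1000.0             # sine frequency, well below Nyquist (fs/2 = 4000 Hz)
n = np.arange(64)      # sample indices
samples = np.sin(2 * np.pi * f * n / fs)

# Reconstruct on a 16x finer time grid (time measured in sample periods)
# using the Whittaker-Shannon formula: x(t) = sum_n x[n] * sinc(t - n).
t = np.arange(0, 64, 1 / 16)
recon = np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

# Away from the edges (where truncating the sinc sum causes small errors),
# the reconstruction lands right back on the original smooth sine.
true = np.sin(2 * np.pi * f * t / fs)
mid = slice(16 * 16, 48 * 16)          # middle section, away from edge effects
print(np.max(np.abs(recon[mid] - true[mid])))   # tiny residual error
```

No flat horizontal sections anywhere: between any two sample points the reconstructed curve glides smoothly, which is exactly what a real D/A's reconstruction filter does in analog hardware.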
The "harsh digital" sound most folks experienced with early converters had a lot to do with poorly designed electronics responsible for creating the analog voltage. They were cheaply made because they had to fit on a chip, and they used very poor quality components with lots of harmonic distortion and horrible signal-to-noise ratios. (Not to mention poor noise rejection, given that they were living inside a digital processor producing tons of EM noise.)
Modern A/D and D/A converters do a much better job of interpolation and have much better noise rejection and signal-to-noise ratios. But all of this is still the analog part of the chain; the digital side has changed little in 25 years.
And while I'm being an audio curmudgeon: higher sample rates are a gimmick. I master records as part of my living, and I receive files all the time at 96 kHz, 192 kHz, etc. Trust me, there is no audio information in them above 22-23 kHz.
Everything above that is just white noise. It actually lowers the signal-to-noise ratio of the audio chain, which is what we all perceive as good or bad quality. A normal 44.1 kHz sample rate gives plenty of frequency coverage for us humans (0-22 kHz), and if you're an audiophile, I can say that a 48 kHz sample rate almost always captures all the high-frequency content that is actually recorded (0-24 kHz).
If you truly want to make the listening experience more glorious, go with a higher bit depth. 24 bits has a noticeably clearer sound quality than 16. I've done analog vs. digital side-by-side shootouts with AES members where we blind-tested 24-bit/44.1 kHz digital playback against an ATR-24 1" reel-to-reel (2-channel) analog machine. It was nearly impossible to tell the difference reliably.
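There's a simple reason bit depth is where the real headroom lives. The standard textbook figure for the theoretical quantization SNR of an N-bit converter with a full-scale sine is about 6.02·N + 1.76 dB (the real-world numbers are of course lower). A quick sketch:

```python
def quantization_snr_db(bits: int) -> float:
    """Theoretical quantization SNR for an N-bit converter, full-scale sine.

    Standard textbook approximation: SNR ~= 6.02 * N + 1.76 dB.
    """
    return 6.02 * bits + 1.76

# 16-bit: roughly 98 dB of dynamic range; 24-bit: roughly 146 dB,
# far beyond the noise floor of any analog electronics in the chain.
print(quantization_snr_db(16))
print(quantization_snr_db(24))
```

Those extra ~48 dB are why 24-bit recordings sound cleaner, while extra sample rate just adds ultrasonic noise.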
And lastly, GET OFF MY LAWN.