So how could one demonstrate the difference between a 16-bit and a 24-bit file of the same audio, unless the level of the audio was entirely contained in the bottom 8 bits?
With some difficulty! I think it's easier to demonstrate that dither works as a means of extending the apparent bit depth than it is to come up with a good demonstration of any difference between 16- and 24-bit resolution - certainly on a real-world signal, at any rate.
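For the dither side, here's a minimal sketch of the idea (assuming Python/NumPy; the 48 kHz rate, 1 kHz tone and -100 dBFS level are illustrative choices, not anything from the post above): a sine sitting below half a 16-bit LSB disappears completely under plain rounding, but survives TPDF dithering as a tone buried in benign noise - which is the sense in which dither extends the apparent bit depth.

```python
import numpy as np

rng = np.random.default_rng(0)                # seeded for repeatability
fs = 48_000                                   # assumed sample rate
t = np.arange(fs) / fs                        # one second of samples
x = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)  # 1 kHz sine at -100 dBFS

q = 2.0 ** -15                                # 16-bit LSB, full scale = +/-1

# Plain rounding: the whole sine sits below half an LSB,
# so every sample rounds to zero and the tone vanishes.
plain = np.round(x / q) * q

# TPDF dither: triangular noise of +/-1 LSB peak, added before rounding.
# The tone survives, buried in signal-independent noise.
tpdf = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * q
dithered = np.round((x + tpdf) / q) * q

def tone_level(y):
    """Level of the 1 kHz FFT bin in dBFS (bin width is 1 Hz here)."""
    mag = np.abs(np.fft.rfft(y)[1000]) * 2 / len(y)
    return 20 * np.log10(mag) if mag > 0 else float("-inf")

print(f"plain rounding: 1 kHz tone at {tone_level(plain):6.1f} dBFS")   # -inf: gone
print(f"TPDF dithered:  1 kHz tone at {tone_level(dithered):6.1f} dBFS")  # ~ -100
```

Averaging over the FFT means the tone shows up cleanly in the dithered version even though any single sample is dominated by the dither noise - much as the ear picks the tone out of the hiss.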
It's easier to demonstrate the difference visually: a sine wave at -80 dBFS looks a damn sight better in 24-bit than it does in 16-bit, although that can be a bit misleading. Generally the 24-bit version will also sound a bit better with the reproduction level turned up, simply because the background noise level will be lower.
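To put numbers on that, a quick sketch along the same lines (again Python/NumPy, with the sample rate and tone frequency as arbitrary choices): quantise the same -80 dBFS sine to 16 and to 24 bits and measure the residual error. At 16-bit the undithered error floor sits only about 20 dB below the tone, and it's correlated distortion rather than hiss; at 24-bit it's roughly 48 dB lower again, which is the headroom you hear when the monitoring level goes up.

```python
import numpy as np

fs = 48_000                                   # assumed sample rate
t = np.arange(fs) / fs
x = 10 ** (-80 / 20) * np.sin(2 * np.pi * 1000 * t)   # 1 kHz sine at -80 dBFS

for bits in (16, 24):
    q = 2.0 ** -(bits - 1)                    # LSB size for full scale = +/-1
    err = np.round(x / q) * q - x             # undithered quantisation error
    rms = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    # Each extra bit buys roughly 6 dB, so expect the 24-bit floor
    # to sit about 48 dB below the 16-bit one.
    print(f"{bits}-bit: error floor {rms:6.1f} dBFS, "
          f"{-80 - rms:.0f} dB below the tone")
```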