Again today, as before, I do not get the predicted results: dither does not make anything sound more true, or reveal anything otherwise too low in level to hear. This might be me or CE2k doing something wrong, or it may come from using digitally generated tones, or maybe this is just the way it really is.
You've done something wrong. I've done this exact thing using generated tones and CE2k.
DITHERED 16 BIT
[...]
I understand that the data extends less far in time (to the right on the screen) because 16-bit does not support any signal there; all sample values are zero.
After correct dithering they will not all be zero - you appear to be describing the non-dithered case.
But frankly, I'm not going to pursue this further. All you need to know about dither (and the answer to the question in the thread title) is this:
It is clearly shown by the maths that dithering is a necessary and effective procedure in the single, important case of reducing the bit depth.

If the reduction is (say) from 32 to 24 bits, the difference is unlikely to be anywhere near audible, but there is no harm in dithering anyway (the CPU overhead is negligible). If the reduction is from 24 to 16 bits, the improvement is often audible, and again there is no reason not to do it.

In a correctly staged workflow, the dithered reduction to 16 bits should take place only once, at the end. In this single case it is worth using noise shaping, which reduces the audibility of the noise that results from the dithering process - but noise shaping should not be used if the dithered signal is to be processed further.
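To make the bit-depth reduction concrete, here is a minimal sketch of TPDF (triangular probability density function) dither applied before quantizing float samples to 16 bits. This is an illustrative example, not CE2k's implementation; the function name and the use of NumPy are my own choices. It also demonstrates the point made above about the quiet tail of a signal: a tone whose amplitude is below half a 16-bit LSB rounds to silence without dither, but survives as tone-plus-noise with it.

```python
import numpy as np

def dither_to_16bit(samples, rng=None):
    """Quantize float samples in [-1.0, 1.0) to 16-bit integers,
    adding TPDF dither of +/-1 LSB peak before rounding.
    (Illustrative sketch, not any particular editor's algorithm.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    scale = 2 ** 15  # one 16-bit LSB corresponds to 1/scale in float terms
    # TPDF dither: the sum of two independent uniform variables, each
    # spanning +/-0.5 LSB, which gives a triangular distribution.
    tpdf = (rng.uniform(-0.5, 0.5, samples.shape)
            + rng.uniform(-0.5, 0.5, samples.shape))
    quantized = np.round(samples * scale + tpdf)
    return np.clip(quantized, -scale, scale - 1).astype(np.int16)

# A 1 kHz tone at an amplitude of 0.25 LSB: far below the 16-bit
# quantization step, so plain rounding would zero it out entirely.
t = np.arange(48000) / 48000.0
quiet = (0.25 / 32768.0) * np.sin(2 * np.pi * 1000 * t)

undithered = np.round(quiet * 32768.0)   # all zeros - the tone is lost
dithered = dither_to_16bit(quiet)        # nonzero - tone preserved in noise
```

Without dither, quantization acts as a gate below the last bit; with TPDF dither, the quantization error is decorrelated from the signal and the sub-LSB tone is encoded as a small amount of noise rather than being truncated to silence.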
Paul