AudioMasters
Topic: CoolEdit 2000 program error? No...  (Read 3783 times; locked)

Reply #15 - SteveG (Administrator) - September 12, 2010, 05:27:13 PM

Quote
No it isn't - that's a time-based error, and this is a level-based one; specifically a failure to adhere to mid-tread encoding.

Maybe I should have looked harder - it looked like a time-based one to me.

But even so, that doesn't have to be an error... just a decision.

Because IIRC, there are distortion differences between the different ways of handling this, and mid-tread encoding isn't necessarily best. The notes I've got suggest that mid-tread even alignment yields a larger distortion figure than mid-tread odd alignment or mid-rise. This means that if you are encoding to some common schemes, things are not so good. I also recollect that it's not easy to compare these sensibly either - too many variables.

Somewhere there's an IEEE paper with all this in, but I haven't seen it lately.
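
To make the jargon concrete, here is a minimal sketch of the two quantiser shapes in question; the step size and test values are mine, purely for illustration:

Code:
/* Sketch only: the two quantiser families under discussion.
   delta is the step size; names and test values are illustrative. */
#include <math.h>
#include <stdio.h>

/* Mid-tread: zero sits on an output level, so tiny inputs quantise to 0. */
double mid_tread(double x, double delta) {
    return delta * round(x / delta);
}

/* Mid-rise: zero sits on a threshold between levels, so there is no
   output level at exactly 0 - everything lands at +/- delta/2 or beyond. */
double mid_rise(double x, double delta) {
    return delta * (floor(x / delta) + 0.5);
}

int main(void) {
    double inputs[] = { -1.2, -0.6, 0.0, 0.6, 1.2 };
    for (int i = 0; i < 5; i++)
        printf("x=% .1f  tread=% .1f  rise=% .1f\n",
               inputs[i], mid_tread(inputs[i], 1.0), mid_rise(inputs[i], 1.0));
    return 0;
}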

Reply #16 - pwhodges - September 12, 2010, 05:40:05 PM

Quote
But even so, that doesn't have to be an error... just a decision.

Sure.  But the decision should be consistent throughout the program; and since doing the mathematical things (like the RMS level) in the way that's natural for the processor really only works with mid-tread encoding*, this is what should be used in the case at issue (and is in later versions of the program).

Integer maths can be really hard!

Paul

*  I know this stuff really intimately, because I have not merely programmed related operations in a context in which low-level behaviour was of the essence, but implemented them in microprogram as well.
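
To pin down the conversion at issue, here is a minimal sketch; the scale factor and function names are my assumptions, since the actual CoolEdit source is not public:

Code:
/* Sketch only: models "simple truncation" vs mid-tread rounding when
   shortening a float sample to 16 bits.  The 32768 scale and the
   function names are assumptions, not CoolEdit's actual code. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int16_t to16_truncated(float s) {
    /* floorf() models twos-complement truncation (just dropping the
       low bits), which is what the reported behaviour looks like */
    return (int16_t)floorf(s * 32768.0f);
}

int16_t to16_rounded(float s) {
    /* mid-tread: round to the nearest output level (clipping omitted) */
    return (int16_t)lrintf(s * 32768.0f);
}

int main(void) {
    float tiny = -1.0f / 65536.0f;   /* half an output lsb below zero */
    printf("truncated: %d   rounded: %d\n",
           to16_truncated(tiny), to16_rounded(tiny));   /* -1 vs 0 */
    return 0;
}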

Reply #17 - oretez - September 12, 2010, 07:48:11 PM

Thanks Paul.  I genuinely appreciated your taking the time to explore this. 

While I've dealt with some of the macro issues of this (not at a medical imaging level, but with regard to color-space dynamic range issues, converting vector to bitmap, using data-compressed images originating at 72 ppi (with a different pitch) for 1400 dpi print processes, and now video compression for editing while attempting to restore resolution for the final product), I think I learned something from your presentation. I'm not entirely sure I can say succinctly exactly 'what', but something 'clicked' that I'm going to have to ruminate on; it seems applicable to some scaling issues I'm having with a current animation project, for which I got sucked into helping with the graphics as well as the audio.

Initially my gut reaction is to tilt more or less in Steve's direction (and, just so I can continue to be insulting to everybody, I viewed his framing of 'decision' vs. anomaly as a particularly 'English' (vs. Welsh, for example) bit of phrasing). From CEP 1 through CEP 2, inclusive of CE2k, truncation is what I expected, certainly based on what little I know about CEP's development and the creators' connection with, and dependence on, the MS API (and in 1994, at that). I always thought that some of CEP's issues with 'statistics' stemmed from some of these decisions . . .

While I would, as a client, agree that a program should be consistent in how it does math, I also think that whether mid-tread rounding is better or worse than truncation depends on the goals. Admittedly, at the macro level of pressure waves generated from the files this tends to be immaterial, and while I'll accept that Paul's graphs display an amplitude anomaly, on a practical basis I'd tend to be more concerned about the subsequent time/phase issues this might introduce. Whether the issue is noticeable once you move beyond an isolated sine wave to the complex interplay of fundamental transients and sustained harmonics (perhaps more problematic with Steinways than Bosendorfers) might even be open to debate . . . which suggests why, on a practical level, dither became the standard operation in early DAWs that, in my experience, tended to convert via truncation.

Whether you can actually hear any of this is not something I'm going to lose sleep over, but I do appreciate Paul's presentation. While my experience is extremely limited and at least a decade out of date, my memory is that strict truncation was the accepted forensic practice. (In my very limited experience, specific issues were fought over in specific cases, specific jurisdictions, and specific courtrooms, frequently by people who did not understand the fundamental issues . . . nor, as it turned out, was I ever the best choice for this type of work, but in the mid 90s there were times when I was the only game in town. And one issue was the ability to demonstrate exactly what a program did - information that most consumer-oriented software developers were loath to supply.)

In the video editing world, X, Y plus the previous and succeeding frames would be what I'd want, but there you are dealing with static situations. With regard to audio, I know from working with clip restoration that mid-tread (even in programs capable of some relatively sophisticated integrals) can generate some interesting results - anomalous in the sense that the 'common sense' little man sitting at the point of the clip would go 'whoa, that ain't what happened!', yet which the trends (integrals prior to the clip intersecting extrapolation from integrals after it) suggest as the best-case scenario of what happened at the clip. At times, intersecting asymptotes from artificially selected generalized curves (hyperbolae, etc.) actually do a better job. In some ways that's a debate at the opposite end from the OP's speculations.

And unfortunately there is no good-hearted way for me to get out of having interjected here (other than to apologize for stepping in, in the first place), because I am still far from clear how AndyH's evidence supports his claim of a bug discovery; even the (sort of) anomalous behavior I 'observed' at the noise floor reflected exactly what I'd expect of pressure waves generated by instruments, then truncated. That the truncated -95 (peak) piano did not exactly match my prediction suggested to me that something relatively sophisticated was going on in the conversion. Based on Paul's info, that is probably simply an artifact of the programmers' strict-truncation-vs-mid-tread decisions . . . still, even though I can't hear the damn thing, the statistics matched my eddy-current expectations for the action of pressure waves at the noise floor.

There is a reason why I continue to love this forum.

Reply #18 - SteveG (Administrator) - September 12, 2010, 10:10:11 PM

No, it really could have been a decision - albeit a difficult one to make. I found a link to a recent IEEE paper that begins to show why - if, for instance, you assume that a 16-bit reduction is going to be used as a compression source - you might not necessarily want to do what's easiest to compute:

IEEE paper

To be fair, any differences would most likely be pretty difficult to spot, and from that POV I'm sure that Paul's right - when you rewrite, you take the opportunity to make the coding more efficient in any way you can. Even so, we are talking about something that you really won't be able to detect for quite a while after listening to anything in a mid-frequency range around 0dB...

The only reason that I've had anything to do with any of this is that when I did my Institute of Acoustics diploma, one part of my special study related very much to coding errors introduced by data compression, and what difference this could make if you were taking measurements from the results - would they stand up in a Public Enquiry, for instance. This was all a part of the background research.

The results were slightly interesting, and a bit of an operational change resulted from them, as it was clearly demonstrable that the effects were nowhere near as bad as had previously been suspected and were, in fact, quite predictable.


Reply #19 - pwhodges - September 12, 2010, 10:47:45 PM
Quote
No, it really could have been a decision - albeit a difficult one to make. I found a link to a recent IEEE paper that begins to show why - if, for instance, you assume that a 16-bit reduction is going to be used as a compression source - you might not necessarily want to do what's easiest to compute:

I'll read the paper in more detail later, but at first glance I don't think its arguments are applicable to the audio scenario, which has to handle both positive and negative values, and so has a concern with symmetry about zero that image data doesn't have.  It would appear that the CoolEdit/Audition programmers took my view when they changed the code somewhere between CE2000 and Audition 3 to correct the lack of symmetry.

Paul
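
The symmetry point in one tiny sketch (values are in output-lsb units, purely illustrative):

Code:
/* Sketch: floor-style truncation is asymmetric about zero, whereas
   rounding treats +x and -x alike. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 0.3;
    printf("floor(+x) = %d   floor(-x) = %d\n",
           (int)floor(x), (int)floor(-x));   /* 0 and -1: asymmetric */
    printf("round(+x) = %d   round(-x) = %d\n",
           (int)round(x), (int)round(-x));   /* 0 and  0: symmetric  */
    return 0;
}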

Reply #20 - AndyH - September 13, 2010, 12:12:41 AM

Quote
You haven't made the slightest attempt to understand this at all, have you?

Aside from that not being true, the question doesn’t tell me anything about why you believe it. Your explanation of the circumstances involved values using the least significant bit, or affecting the two least significant bits.
Isn’t that bits 15 and 16?
Shouldn’t the maximum value of -120dB tones, or -130dB audio (both of which give the same results as -90dB) be completely below bit 16?
Does this have something to do with that “systematic error” introduced by amplifying the generated tone by -30dB or -40dB in order to achieve a sufficiently low level?
What is this error?

Paul’s statement “This has no effect on audio” isn’t true, but perhaps he is writing about something other than what those words mean to me. I accept that it doesn’t affect music quality, because the results at -96dB are at too low a level to be heard when listening to music. However, it isn’t only the program Statistics that are affected. The converted 16 bit tone is much louder than the 32 bit version if played. It is much brighter in Spectral View. It has a higher peak in Frequency Analysis.

I don’t know if the Program Statistics of -130dB at 32 bit vs -96dB at 16 bit are correct, but there is definitely a big difference in signal level, not only in the total (original tone plus error products) but at the frequency of the original tone itself. Using FFT notch filters on the other frequencies (remove below 2800Hz and above 3200Hz for a 3kHz tone), the Statistics go down by only 1dB, still 33dB higher than in the 32 bit file.

Is this truncation at the 16th bit responsible for all those results?
In the sample tone that I just processed, a 0.002% DC offset is shown in Statistics after converting to 16 bit without dither. Using Amplify at 0dB, I did a DC Bias Adjust to 0% Absolute (it is necessary to first convert back to 32 bit, or the offset is reintroduced by the operation that removes it). This changes the Statistics min/max sample values from 0/1 to -0.5/0.5, and the DC offset to 0%, but no other numbers change, so the level error isn’t correctable in this way. Are these results as should be expected?
These kinds of results are not evident in a 24 bit file saved from 32 bits. Can that be for any other reason except that rounding rather than simple truncation is employed in the 24 bit conversion?
In this case a 32 bit file at -145dB produces digital silence at 24 bit, going back to the questions about Steve’s explanations. If it were not for this problem of no rounding in the 16 bit conversion, which hadn’t been pointed out when Steve wrote his part, would Steve’s explanations of my results still be valid?
If so, is that consistent with all the different results between the 24 bit and 16 bit undithered conversions?

Reply #21 - pwhodges - September 13, 2010, 08:39:21 AM
Quote
Paul’s statement “This has no effect on audio” isn’t true, but perhaps he is writing about something other than what those words mean to me. I accept that it doesn’t affect music quality-- because the results at -96dB are at too low a level to be heard when listening to music. However, it isn’t only the program Statistics that are affected. The converted 16 bit tone is much louder than the 32 bit version if played. It is much brighter in Spectral View. It has a higher peak in Frequency Analysis.

You are describing a difference in the distortion products resulting from an undithered bit-depth conversion; as it is indisputable that you should never reduce bit depth without dithering, that part of the investigation is pointless.  The issue you raised must have been noted years ago, as it has been corrected in later versions of the software; it will never be corrected in this long-obsolete version.
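
For readers wondering what the dithered path looks like, a minimal textbook TPDF sketch; this is not CE2000's actual dither code:

Code:
/* Minimal TPDF-dither sketch - the textbook shape, not CE2000's actual
   code.  Triangular noise spanning +/-1 lsb is added before rounding,
   which decorrelates the quantisation error from the signal. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int16_t to16_dithered(float s) {
    /* the sum of two uniforms in [-0.5, 0.5] has a triangular pdf */
    float r1 = (float)rand() / RAND_MAX - 0.5f;
    float r2 = (float)rand() / RAND_MAX - 0.5f;
    long  v  = lrintf(s * 32768.0f + r1 + r2);
    if (v >  32767) v =  32767;           /* clip to the 16-bit range */
    if (v < -32768) v = -32768;
    return (int16_t)v;
}

int main(void) {
    /* a steady quarter-lsb input now varies around 0 and 1 instead of
       collapsing to one fixed value (srand() seeding omitted) */
    for (int i = 0; i < 8; i++)
        printf("%d ", to16_dithered(1.0f / 131072.0f));
    printf("\n");
    return 0;
}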

Quote
These kinds of results are not evident in a 24 bit file saved from 32 bits. Can that be for any other reason except that rounding rather than simple truncation is employed in the 24 bit conversion?

Yes it can - the reason is that the 32-bit format already only contains 24 bits of data (it is a kind of floating-point format, so the other eight bits are the exponent).

Paul
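
A sketch of the layout being described, assuming a standard IEEE-754 single for illustration; Cool Edit's own 32-bit variant may pack the fields differently:

Code:
/* Sketch: an IEEE-754 single carries 24 significant bits (a 23-bit
   stored fraction plus one implicit bit); the remaining 8 bits are
   the exponent.  Assumes standard IEEE singles. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float s = -0.3f;
    uint32_t bits;
    memcpy(&bits, &s, sizeof bits);   /* reinterpret without aliasing UB */
    printf("sign=%u  exponent=%u  fraction=0x%06x\n",
           (unsigned)(bits >> 31),
           (unsigned)((bits >> 23) & 0xff),
           (unsigned)(bits & 0x7fffff));
    return 0;
}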

Reply #22 - AndyH - September 13, 2010, 07:38:22 PM

Quote
When a signal with greater than 16 bits depth is converted to 16 bits by CoolEdit2000 without dither, this is done by simple truncation. For correct results, it should actually be done with rounding.

When the signal level is so low in 32 bit that the 18 or 19 or 20 most significant bits are all zero, is it still this truncation at bit 16 that produces the results I’ve reported?

Reply #23 - pwhodges - September 13, 2010, 11:51:08 PM
If you have an audio signal without DC offset, the top bits do not stay at zero however low the level; this is because the signals are in twos-complement integer form, in which negative numbers are filled all the way up to the highest bit with ones.  So the effect I have demonstrated is still happening; you can see that in my second graph from the fact that the unrounded truncation changes as the signal passes zero, however small a change is made to pass it.
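
In code the point looks like this; the shift simply models keeping the top 16 of 32 bits, and assumes the usual arithmetic right shift for negative values:

Code:
/* Sketch: however quiet a negative sample is, its top bits are all
   ones, so keeping only the top 16 bits yields -1 rather than 0.
   Assumes arithmetic right shift, as on all mainstream compilers. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t quiet = -37;                       /* tiny negative sample */
    int16_t kept  = (int16_t)(quiet >> 16);    /* drop the low 16 bits */
    printf("%d truncated to 16 bits -> %d\n",
           (int)quiet, (int)kept);             /* prints -37 ... -> -1 */
    return 0;
}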

But please stop trying to explain your figures; I have demonstrated the reason that they are inaccurate at low levels, and they will be totally meaningless for signals below 16 bits (because the error will stay at the 16-bit level while the expected result continues to fall).  Continuing to look at them is a complete waste of your time and ours that could be more productively spent, I don't know, recording or something...

Paul

Reply #24 - AndyH - September 14, 2010, 08:32:50 AM

Thank you

Reply #25 - pwhodges - September 14, 2010, 09:42:45 AM
I realise that in my attempt to be brief, I told a lie.  This effect is nothing to do with twos-complement arithmetic in particular (only to the extent that it does things right where signed-magnitude does things wrong). 

As it seems to me that my last explanation was too cryptic, I am preparing a detailed illustration that should lay this all to rest - I shall edit it into this post later today.

Paul

Edit as promised:

First a table.  In this I am simplifying by using 8 bits and reducing them to 4 bits, just to save typing.  Also I am representing the numbers as fixed point, with the top 4 bits being the whole-number part and the bottom 4 the fraction (I have included a binary point to clarify this, but the computer doesn't bother). 

Consider something a bit like a sine wave (I am guessing similar numbers rather than getting them right!) at a very low level, which is fractional from the point of view of the 4 bit integer that we will end up with after shortening from 8 bits.  Successive digitised sample values might be, say: 0.0625 (1/16), 0.1875 (3/16), 0.25 (1/4), 0.25, 0.125 (1/8), -0.0625, -0.1875, -0.25, -0.25, -0.125.  The table shows these values with their twos-complement binary versions, then the truncated and rounded versions.

Value      Binary       Truncated     Rounded

0.0625     0000.0001    0000   (0)    0000   (0)
0.1875     0000.0011    0000   (0)    0000   (0)
0.25       0000.0100    0000   (0)    0000   (0)
0.25       0000.0100    0000   (0)    0000   (0)
0.125      0000.0010    0000   (0)    0000   (0)
-0.0625    1111.1111    1111  (-1)    0000   (0)
-0.1875    1111.1101    1111  (-1)    0000   (0)
-0.25      1111.1100    1111  (-1)    0000   (0)
-0.25      1111.1100    1111  (-1)    0000   (0)
-0.125     1111.1110    1111  (-1)    0000   (0)
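
The two output columns can be checked mechanically; a quick sketch that reproduces them, holding the samples in 4.4 fixed point and assuming arithmetic right shifts:

Code:
/* Quick check of the 8-bit -> 4-bit table above.  Truncation drops the
   fraction bits; rounding adds half an output lsb first.  Assumes
   arithmetic right shift for negative values. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double v[] = { 0.0625, 0.1875, 0.25, 0.25, 0.125,
                  -0.0625, -0.1875, -0.25, -0.25, -0.125 };
    for (int i = 0; i < 10; i++) {
        int fixed   = (int)lrint(v[i] * 16);  /* 8-bit 4.4 representation */
        int trunced = fixed >> 4;             /* matches 'Truncated' column */
        int rounded = (fixed + 8) >> 4;       /* matches 'Rounded' column */
        printf("% .4f  trunc=%2d  round=%2d\n", v[i], trunced, rounded);
    }
    return 0;
}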


The effect of this is as shown in the left-hand graph below.  Note that the square wave that we end up with is considerably larger than the sine wave we started with!  The right-hand part of the graph shows that the boot is on the other foot if we move the starting point up by half a (reduced) bit. 

[graphs not preserved in this archive: truncated and rounded output of the sub-lsb wave, before and after a half-bit offset]

This is why I said there was no effect on music signals; because this quantisation distortion - for that is what it is - will average out to the same net amount on any real signals whose level is varying all over the place.  However, if rounding is not done, very low-level signals will not end up as zero and RMS measurements will become misleading at those levels (as Andy saw).
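
The RMS point is easy to demonstrate numerically; a sketch with an arbitrary tone frequency and level:

Code:
/* Sketch: a sine of 0.4 lsb peak should measure as (near) silence, but
   floor-truncation turns it into a 0/-1 square wave with an RMS of
   about -93 dBFS in 16-bit terms.  Frequency and level are arbitrary. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;
    const int n = 48000;
    double sum_t = 0, sum_r = 0;
    for (int i = 0; i < n; i++) {
        double s = 0.4 * sin(2 * PI * 3000.0 * i / 48000.0);  /* in lsbs */
        double t = floor(s);            /* truncated: 0 or -1 */
        double r = round(s);            /* rounded: always 0 here */
        sum_t += t * t;
        sum_r += r * r;
    }
    printf("truncated RMS: %.1f dBFS\n",
           20 * log10(sqrt(sum_t / n) / 32768.0));
    printf("rounded   RMS: %s\n",
           sum_r == 0 ? "digital silence" : "nonzero");
    return 0;
}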

To illustrate how a real signal will have different, but comparable, quantisation distortion using truncation or rounding, consider the following graphs:

[graphs not preserved in this archive: quantisation distortion of a real signal under truncation vs. rounding]

I hope this clarifies things a bit; I really don't have the time to do more.

Paul

Reply #26 - AndyH - September 15, 2010, 10:17:59 PM

When I opened this thread I was just reporting my observations; I did not expect any discussion. The program code isn’t available, so the condition was probably never going to be explained. It is nice that Paul was able to deduce a cause.

Things have been written, however, and there are a couple of points I hope to have clarified further. Steve wrote
Quote
And creating sine waves with Cool Edit, and then amplifying them, is no way to create a sine wave to use for measurement purposes at all - all you are doing is introducing a systematic error.
What is this systematic error? The symptoms of the problem I reported were obvious, even though I didn’t know what caused them. I can find no symptoms of any problem in the tones I generate, either before or after amplifying.

If there really is something, it is important to me to know what I’m dealing with. Recording and related practices may be what interest most people here, but we each pursue our own goals. Mine aren’t anyone else’s responsibility, but this claim does create a difficulty for me and I really would appreciate an elaboration.

The other point isn’t relevant to the topic here, but since reference has been made to it a number of times, it provides the opportunity to once again issue a challenge that interests me. Most people are not only proponents of using dither, they are insistent that
Quote
it is indisputable that you should never reduce bit depth without dithering

I have no arguments about what happens to the data, but even here (in CE2k), where all samples falling below -90.3 dB end up with values of either +1 or 0 when converted to 16 bit without dither (even music, not just generated tones), I don’t believe it makes any audible difference. If anyone can show me music where it does make a difference, I am interested.

Reply #27 - SteveG (Administrator) - September 15, 2010, 10:39:28 PM

Quote
I can find no symptoms of any problem in the tones I generate, either before or after amplifying.

Of course you can't - you have no easy means of checking their absolute accuracy at all. You have no idea how amplitude-accurate even the original tone was unless you go to the trouble of working out exactly how many samples high it is, and if you amplify it, you have the same problem - only you've compounded it. This is a systematic error, and when it comes down to signals at the lsb level, it's likely to be significant unless carefully accounted for. Paul has shown you where the errors arise, and unless you can create signals of known accuracy around those levels, you don't stand a chance of making any valid assertions at all. Also, using sine waves in this instance doesn't lead you exactly to the answer anyway - the relationship between that square wave and the sine wave isn't entirely straightforward.
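
A sketch of the kind of systematic error being described here; all numbers are invented for illustration:

Code:
/* Sketch: the peak sample of a digitised sine is not its nominal
   amplitude.  A tone requested at 11.6 lsb peak stores a peak of 11
   once truncated - and amplifying first only moves the same problem
   around.  Values are illustrative only. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;
    double nominal = 11.6;               /* requested peak, in lsbs */
    int peak = 0;
    for (int i = 0; i < 48000; i++) {
        double s = nominal * sin(2 * PI * 3000.0 * i / 48000.0);
        int q = (int)floor(s);           /* truncating conversion */
        if (q > peak) peak = q;
    }
    printf("nominal peak %.1f lsb, stored peak %d lsb\n", nominal, peak);
    return 0;
}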

We are not interested in the slightest in your dither problem, which is yours alone - so officially, please stop going on about it.

Reply #28 - AndyH - September 20, 2010, 09:25:58 PM

Thanks for explaining. I don’t believe that has been an issue for anything I’ve done but it is good to know about the possibility in case it ever matters.

The results discussed here apply to any audio, generated or recorded. Since I can manipulate sample values, I could determine something more accurately, but the result of this program peculiarity is consistent over a range of more than 80 dB, so I don’t see any point to the exercise.

Paul has provided a reasonable explanation and that is fine. Even no explanation at all would have been fine. I didn’t start out asking anything, I was only reporting my observation to interested parties, if any.

It doesn’t matter except as an item of intellectual curiosity, but Paul’s explanation is at least incomplete. The program results are the inverse of his example based on twos-complement - the inverse of what twos-complement truncation should produce. Perhaps there is just another calculation step involved in the program, or maybe some other variant of binary representation is in use here.

As far as forbidding inquiries about music samples goes, I can’t see that as anything other than you reacting as though something that doesn’t interest you is a thorn in your personal side. My post is just words in the forum database. I’m not attacking anyone; I am only questioning an idea, a convention.

If the topic interests no one, which you can’t possibly know, no one will pay any attention. That has certainly happened with more than a few of my posts. Or, maybe I will get a sample audio file, if such a file is possible. What is the problem? Why would you even mention it unless you had something to offer?

Your position seems rather like that of an official of some one true church who fears that heresy will condemn the souls of all who read it, or maybe like a current German government official who is eager to throw into prison anyone who dares to speak or write something that is not in accordance with official German State Doctrine. Except here the rules seem to be made up on a whim.

If you care to explain how my understanding of your “official” action is biased, I might be more sympathetic, and possibly even have the insight to avoid future conflicts.

I also think your attempt to prevent people from finding out what this thread is about, by changing the title, is very poor form and exceedingly misleading. You and oretez wrote a lot of column inches denying what I reported. Someone will have to read through that before getting to Paul’s verification. Your change to the title is very likely to give potential readers the impression that what I reported is not actually in the program, so they are less likely to ever read far enough to know otherwise, or maybe even to open the thread to find out what it is about.

Whether you want to admit this as an error, corrected in a later revision of the program, or call it something like an expression of an outdated viewpoint changed in later updates, it is an inconvenient fault for someone stuck with this version of the software.

This program bug is why I did not get the expected results from Paul’s proposed fade demonstration and was consequently told I was too stupid to do the experiment correctly. It is the reason I was unable to duplicate the demonstration reported by Vanderkooy and Lipshitz on the differences between dithered and undithered low level 16 bit signals a few years ago.

Reply #29 - SteveG (Administrator) - September 20, 2010, 10:00:22 PM

Quote
If you care to explain how my understanding of your “official” action is biased, I might be more sympathetic, and possibly even have the insight to avoid future conflicts.
You just don't get it, do you?

Since you are not doing directly as formally asked, and have never had the wit to avoid conflicts (and I have received complaints about your behaviour, the politest of which described you as 'really annoying'), I am going to reward your conduct initially with a one-week ban, effective immediately.

And if that doesn't work, I will consider extending it.
