AudioMasters
Topic: CoolEdit 2000 program error? No...  (Read 3784 times; locked)
AndyH (Member) « on: September 10, 2010, 08:59:01 AM »
I presume the program behavior has been corrected in later versions, so this probably won’t interest many people. Then again, it probably came up quite rarely even when CE2k was current, but someone who does use the program might like to know about it.

For some years I’ve known that certain results did not match textbook descriptions, at least as I understood the text, but no reports of CoolEdit malfunctions were brought up in response to my inquiries. Only after the experiences in a recent thread did I look at the situation more closely.

Basically, the problem is undithered conversion from 32 bit to 16 bit, at least as the signal level approaches the lower limit of 16 bit resolution. Below a certain signal level, all undithered conversions to 16 bit produce exactly the same 16 bit result, regardless of the level in the 32 bit file. I believe the 16 bit level should decrease as the 32 bit level decreases, then drop to zero when the 32 bit level falls below 16 bit resolution.
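To make the claim concrete, here is a small Python sketch of what floor-style truncation would do to low-level tones. This is an illustration only, assuming floor-style truncation and a DC-removed level measurement; it is not CoolEdit's actual code:

```python
import math

FS = 32768  # 16-bit full scale; quantised values are multiples of 1/FS

def truncate_16(x):
    # Assumed conversion: floor-style (two's-complement) truncation.
    return math.floor(x * FS) / FS

def stats_db(samples):
    # Peak and RMS in dB after removing DC, roughly mimicking an
    # AC-coupled level measurement.
    mean = sum(samples) / len(samples)
    ac = [s - mean for s in samples]
    peak = max(abs(s) for s in ac)
    rms = math.sqrt(sum(s * s for s in ac) / len(ac))
    db = lambda v: 20 * math.log10(v) if v > 0 else float("-inf")
    return db(peak), db(rms)

def tone(level_db, freq=1000, sr=48000, n=48000):
    amp = 10 ** (level_db / 20)
    return [amp * math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# Three very different input levels, converted without dither:
results = {L: stats_db([truncate_16(s) for s in tone(L)])
           for L in (-91, -110, -130)}
for L, (pk, rms) in results.items():
    print(f"{L} dB in -> peak {pk:.1f} dB, RMS {rms:.1f} dB out")
```

On this sketch, every input level between -91dB and -130dB comes out at an identical level of roughly -96dB, which is the behaviour described above.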

In the screen shot, the lavender column heading is the amplification factor used to lower the level of the 32 bit generated tone (i.e. generate @ -80dB, amplify by -10.4dB).

I don’t know whether the values above the (X -10.4dB) level are correct or not; I don’t know how to calculate them myself.

I saw various instances of dithered conversions coming out wrong too, but those results are not consistent, so the existence of a program error there must be considered speculation.
Reply #1 · SteveG (Administrator) « September 10, 2010, 11:28:06 AM »
You can't calculate this by normal methodology at all - dither is statistical. What you have to calculate is how much confidence you have in the probability of any given result being what it theoretically computes to. And on that basis alone, you cannot call anything like this a bug until you've proved that it reliably exceeds the statistical limits set for the conversion. And since they haven't been published...
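The statistical nature of dither can be illustrated with a toy experiment. This is a sketch only, assuming TPDF dither and rounding, and says nothing about CoolEdit's own dithering: a tone well below the 16-bit LSB survives dithered quantisation on average, even though no individual sample is correct.

```python
import math, random

FS = 32768
random.seed(0)  # deterministic run for the illustration

def dithered_16(x):
    # Assumed scheme: +/-1 LSB triangular-PDF dither, then rounding.
    d = random.random() - random.random()
    return math.floor(x * FS + d + 0.5) / FS

# A tone roughly 14 dB below the 16-bit LSB.
amp = 10 ** (-110 / 20)
n = 200000
tone = [amp * math.sin(2 * math.pi * i / 100) for i in range(n)]
out = [dithered_16(s) for s in tone]

# Least-squares gain of the output against the input: statistically
# the sub-LSB tone is preserved, sample by sample it is not.
gain = sum(a * b for a, b in zip(tone, out)) / sum(a * a for a in tone)
print(round(gain, 2))
```

With this assumed dither the recovered gain comes out close to 1, while any single output sample is off by up to a full LSB, which is why per-sample comparisons prove nothing either way.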
Reply #2 · AndyH (Member) « September 10, 2010, 08:30:47 PM »
The per sample dither value is either random or based on noise shaping calculations, but is its value not constrained within fairly narrow limits (my usual setting uses a depth of 0.5 bits)? Isn’t it likely that the variations will be undetectable in Analyze/Statistics since the dB calculations there are only presented to two decimal places?

Anyway, the problem I’m reporting does not involve dither, so why do you bring up dither characteristics as an argument against calling it a bug? If all the data in the 24/32 bit audio is below the upper 16 bits, and the data is simply truncated, the result should be zero. That is simple arithmetic.

Of course, if there is something in the 17th bit and the process involves rounding, then the data below the top 16 bits must amount to less than 0.5 at the 17th-bit position for the result to be zero, but the principle is the same. Since that isn’t what the program does, the program isn’t working properly.
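The two arithmetics being argued about can be put side by side in a toy example (sample values expressed in units of one 16-bit LSB; an illustration, not the program's code):

```python
import math

# Sample values in units of one 16-bit LSB -- everything here lies
# entirely below bit 16, so by the argument above it "should" vanish.
sub_lsb = [0.3, -0.2, 0.49, -0.499]

floored = [math.floor(v) for v in sub_lsb]  # floor-style truncation
rounded = [round(v) for v in sub_lsb]       # round-to-nearest

print(floored)  # [0, -1, 0, -1] -- negative values do NOT vanish
print(rounded)  # [0, 0, 0, 0]   -- rounding sends everything to zero
```

So with round-to-nearest the result really is zero, exactly as argued; with floor-style truncation, negative sub-LSB samples become -1 LSB, which is one way a "silent" signal can survive the conversion.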
Reply #3 · ryclark (Member) « September 10, 2010, 11:22:00 PM »
What on earth are you talking about?
Reply #4 · SteveG (Administrator) « September 11, 2010, 12:56:55 AM »
Okay, ignore the dither. If it's truncated, then bit 16 will be at whatever value it was at in the 32-bit stream. And as a consequence, the lowest actual level above digital zero will be determined most likely by the lowest two bits, not just bit 16.

And quite frankly, none of us could give a stuff, so could you at least try to make an effort to keep this deeply neurotic problem you have with some massively outdated software to yourself, please? Whatever you think about it is not going to result in any action, therefore it's a completely pointless discussion.
Reply #5 · AndyH (Member) « September 11, 2010, 08:02:33 AM »
I wrote about the problem described in the third paragraph of my first post, and illustrated in the attachment. To restate, low level signals in a 24 or 32 bit file are not converted to 16 bit correctly. This is definitely true for undithered 16 bit conversions. I’m not sure about dithered conversions; if there is a problem there, it is of much smaller magnitude, but I’ve seen some evidence of erratic program behavior with dithered 16 bit conversions.

Since I’m not the only one still using the older software, it is possible that someone else may be interested; thus I report the problem. No one who is piqued at simply misreading the post can speak for those people. I already expressed my awareness that there will be no fixes for the program; that is hardly the point.
Reply #6 · SteveG (Administrator) « September 11, 2010, 09:31:28 AM »
Quote
To restate, low level signals in a 24 or 32 bit file are not converted to 16 bit correctly. This is definitely true for undithered 16 bit conversions.
To restate what I said (which is rather more relevant) - you have not proved that in the slightest. And since what you stated was clearly wrong, nobody else is going to be in the slightest bit interested.

If there was even a shred of truth in what you are saying, then it would have been discovered and fixed years ago.
Reply #7 · AndyH (Member) « September 11, 2010, 08:54:14 PM »
I don’t want to engage in a philosophical discussion of proof. I can’t provide a tight mathematical proof and I don’t think there is any other kind of “proof.” I have, however, demonstrated the malfunctioning in a pretty convincing way. If anyone can make a reasonable claim that those results are what should be expected, that would certainly be interesting.

I don’t understand why there is any confusion on this. It is pretty simple, and the data is there in my post for anyone who cares to look at it. Take a low level audio file in 32 bit space. Convert it to 16 bit, no dither. It doesn’t matter whether you use 32 bit audio at -91dB or at -130dB; you end up with exactly the same level in 16 bit: just slightly below -96dB. The particular tone generated in the 32 bit space is still quite apparent and easily heard, although not exactly the same sound as in 32 bit because of the harmonics.

The change to this constant 16 bit result, instead of a result corresponding to the 32 bit level, starts at a certain level; anything above that looks more reasonable. The level at which the malfunction starts is well above the minimum 16 bit resolution.

Part of the lack of “proof” is that I can’t actually say it occurs that way at any and every level between the highest and lowest that I’ve tested; I’ve just made a few sample files at different levels. I don’t claim they fulfill the requirements of any good statistical model. However, if it is not consistent, it is an even stranger bug than it first appears.

Do the same thing going from 32 bit to 24 bit and the results are more reasonable and logical.

As to anyone’s interest, value is always personal. Certainly doing this is not going to be a very common operation, but I can’t be the only one who reads about how digital audio works and tries experiments to observe results. If the program is broken, it isn’t very useful. Since the malfunction isn’t explained, how does one know that other results involving bit depth conversion are exactly what they should be? This program behavior explains why the demonstration of dither/no dither on a fade, presented in that other infamous thread, does not produce the results expected from doctrine.
Reply #8 · SteveG (Administrator) « September 11, 2010, 11:31:55 PM »
Quote
I don’t want to engage in a philosophical discussion of proof. I can’t provide a tight mathematical proof and I don’t think there is any other kind of “proof.” I have, however, demonstrated the malfunctioning in a pretty convincing way.

Well you don't even begin to convince me.

Basically, all you've explained is that you haven't really got the faintest idea of what you are talking about. When you truncate a signal, even one that you've carefully manipulated to fall at some arbitrary level slightly above the lsb point, the uncertainty in the conversion means that this signal, if it hits that final point at all, will waggle that bit up and down - resulting in anything up to a 6dB instantaneous error, or possibly even more. It's because this always happens, and produces errors, that dither is invariably used to cover up the artefact by giving back additional statistical resolution.

It's the same problem at what might be called the 0dB level. It's not too difficult to manipulate signal samples so that the resultant peaks when reproduced go significantly above 0dB. And if you do the same thing at the lsb point, you are introducing the possibility of signal troughs going lower - but not in a statistically controlled manner - which is what dither achieves. You cannot ignore lsb uncertainty - with any signal at all, even if it's artificially created really carefully, you are going to get errors at the point where it could disappear. And creating sine waves with Cool Edit, and then amplifying them, is no way to create a sine wave to use for measurement purposes at all - all you are doing is introducing a systematic error.

So, you have proved nothing  - except that you haven't understood the issues involved at all.
Reply #9 · oretez (Member) « September 12, 2010, 01:08:21 AM »
I am quite sure AndyH won't believe me, and my guess is I'll barely get this in before the thread gets locked,

but I actually still have CE2K installed on at least one machine. The reason is not only that I retain an affection for that build, but that in trying to be scrupulous about licensing, in the context of the number of computers I use, it's typical to retain copies of a previous version on at least one machine. Since I approached V3 not via upgrade but as a full 'seat', CE2k remains on two systems with V1.5 that were upgraded from V1, which used CE2k as the source for the initial Adobe upgrade. I know I do not have to retain earlier versions once the upgrade is accomplished, but it's not unusual to do that. And as V3 was not an upgrade, CE2k can reside quite legally on machines other than the AA v3.x ones, yada yada.

In any case I did not attempt to duplicate AndyH's steps. (I'm sitting outside literally watching the flood water level go down until it reaches a point where I can get to a pontoon boat currently high-centered on a jet-ski lift, leverage it off, and tie it to the bank before it goes completely walkabout. The water is not quite Pakistani (after the current floods) toxicity level, but I don't want to 'wade in the water' any more than I have to.) Even with my current project more or less restraining serious thought or action, I am fairly sure that might take far more 'thinking' than I care to put into it. That in itself is not meant to be slanderous (or libelous, depending on one's read on internet publication, which the Craigslist decision this past week might at some future date influence), merely an indication that after a quick scan I have no idea exactly what AndyH did, nor entirely what it was he was trying to demonstrate or test.

But he seemed to be fairly certain that he had demonstrated that his ten-year-old version of Syntrillium's old software did/does not execute bit conversion accurately. For anyone particularly interested, my results suggest that my version of the software performs 32-to-16 conversion exactly as one would hope.

I'm not going to go through all the details: first because they are nearly boring even to me, and in any case, in shuffling between CE2k and AA1.5, using audio that was not a pure sine wave and then, for the hell of it, checking with a CE2k-generated sine wave, I did not log all my exact steps or variations.

I started with a different app as a VSTi host for a sampled Bosendorfer (selected rather than a synth because of a different thread on this forum), because with 32 bit float and a virtual instrument you can generate/record well above the D/A's ability to reproduce. I generated a short file with huge dynamic range and peak amplitude well above (let's say +24 dBfu) what can be represented in a 16 bit file. I then used other software (as issues with Synt/Adobe 'statistics' are detailed elsewhere on the forum, I'll side-step that issue) to get a snapshot of peak amplitude, dynamic range, yada yada yada,

then truncated the dynamic range to below what can be represented in a 16 bit file (for this experiment always sans dither), converted to 16 bits, and got the expected result (silence). I then truncated the dynamic range to just above what can be represented by a 16 bit file and converted to 16 bits. While the file did not have exactly the same characteristics (statistics) as the 32 bit file (Steve did a reasonable, i.e. brief, job presenting the lsb issues with this type of test; I would not expect files with coherent information only in the least significant bits to have no variability), it did have more or less the expected peak amplitude @ -92.xx dB (I tried to 'set' the 32 bit float @ -95 dB, but it is probable that I did not hit that exactly).

With regard to whatever it is that AndyH thinks he has demonstrated, I think Steve has addressed the principal issue with clarity and, again, brevity, so I won't ramble on about that. I started the tests (while sitting outdoors watching water levels recede; the compromise here is that waiting even an hour too long leaves the boat high-centered until another near-flood rain storm, while going in too soon means not only toxic water to deal with but a current that's a bitch) prior to AndyH's reply #7, but on reading it, his explanation suggests exactly the results I would expect if the program were working as intended!

In any case, with regard to personal use, I'm comfortable that I've demonstrated, for myself, that a 10-yr.-old bit of software that doesn't exist anymore, and can't legally be installed on systems on which it does not already exist (no upgrade path from CEP), actually does, in 32-float-to-16-integer conversion, pretty much what you would hope.

So, for AndyH: if he is comfortable that he has, for his personal use, demonstrated a huge anomaly in the software he uses, it is probably time for him either to stop working in 32 bit float or . . . to migrate to some software with which he's more comfortable!
Reply #10 · AndyH (Member) « September 12, 2010, 07:53:33 AM »
Just so I’m clear on this. You two are telling me that it is perfectly reasonable to start with two generated tones in a 32 bit file, one measuring (Analyze/Statistics)
peak  -90.4dB
RMS   -93.4dB
and the other measuring
peak  -120dB
RMS   -123dB
then convert to 16 bit without dither and have both tones become identical, in spectral view and with Frequency Analysis, both measuring
peak  -96.3dB
RMS   -96.3dB
with great consistency, regardless of the generated frequency, repeating with many different signal levels, while I don’t get such strange-seeming (to me, anyway) results if I save relatively equivalent files as 24 bit packed, no dither?
If this is so, why were you telling me that the results I got on the fade comparison (convert to 16 bit, dither vs no dither) were incorrect? This is exactly what I reported then.
Reply #11 · SteveG (Administrator) « September 12, 2010, 11:09:25 AM »
You haven't made the slightest attempt to understand this at all, have you?
Reply #12 · pwhodges (Member) « September 12, 2010, 11:22:14 AM »
I used to do critical data acquisition and analysis (of images for medical diagnosis) using data from ADCs, and got very familiar with various problems that can arise with handling such signals using integer maths.  So I just reinstalled CoolEdit2000 to see what's happening here.

Yes, there is a bug; no, it does not affect audio quality in any respect.  

When a signal with greater than 16 bits depth is converted to 16 bits by CoolEdit2000 without dither, this is done by simple truncation. For correct results, it should actually be done with rounding.  The effect of this varies according to the architecture of the computer and the program; if the sample handling is done in a signed-magnitude representation, then the bucket with value zero after truncation is twice as wide as it should be, giving crossover distortion (I believe some very early DAWs had this problem). However, using the twos-complement notation which is native to the PC, and so is used by CoolEdit, the effect is of a DC offset of half an LSB, which I judge not to be sonically significant (though had I written the program, I would have corrected it).

This offset throws the indirect measurements that Andy is using, making them far less useful than he assumes, which is why I chose to investigate this at the actual sample level.
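The off-centre zero bucket is easy to show numerically. Here is a sketch of the two conversion rules, using a hypothetical ramp in units of one 16-bit LSB (not either program's actual code):

```python
import math

def trunc_q(v):
    # Two's-complement truncation, as CoolEdit 2000 appears to do.
    return math.floor(v)

def round_q(v):
    # Mid-tread rounding: round to the nearest level.
    return math.floor(v + 0.5)

# A ramp through zero, a couple of LSBs wide, in 1/8-LSB steps.
ramp = [i / 8 for i in range(-16, 17)]

zeros_trunc = [v for v in ramp if trunc_q(v) == 0]
zeros_round = [v for v in ramp if round_q(v) == 0]

print(min(zeros_trunc), max(zeros_trunc))  # 0.0 0.875  (one-sided bucket)
print(min(zeros_round), max(zeros_round))  # -0.5 0.375 (centred on zero)
```

The truncating rule's zero bucket covers [0, 1) LSB instead of [-0.5, 0.5) LSB, so every sample is shifted down by half an LSB on average, which is exactly the DC offset described above.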

Below are two screen captures showing this effect. The first is of a ramp generated in 32 bits, and the second is the same ramp after conversion to 16 bits. The images are aligned; note that the zero section of the second is not centred on the crossover point in the first - this is the bug.

Paul



Reply #13 · SteveG (Administrator) « September 12, 2010, 02:54:38 PM »
That's the equivalent of an aperture error - exactly what I would have expected at the zero point, given the truncation method.
Reply #14 · pwhodges (Member) « September 12, 2010, 04:22:54 PM »
No it isn't - that's a time-based error, and this is a level-based one; specifically a failure to adhere to mid-tread encoding.

This has no effect on audio, but the calculations that Andy is using, such as RMS level, are only correct for mid-tread encoding, so he is being given the wrong answers.  This is of no practical concern for audio purposes, but makes the program unsuitable for investigating behaviour at this level as Andy is trying to do.

I have a vague memory of this coming up before, many years ago; at any rate, the rounding to maintain mid-tread encoding is correctly done in Audition 3, as the image below shows (the cursor is at the zero crossing of the 32-bit ramp).

Paul
