AudioMasters
 
Topic: Edit view display
Reply #15 by bonne on July 28, 2006, 06:11:24 AM
Quote from: SteveG
I've put 4 screenshots in this PDF document.
Quote from: bonne

Our friend Paul Frindle doubts that CEP has the necessary underlying reconstruction math (like you find in DACs) that is needed to get it right.

Did he actually say that? If he did, he's simply wrong - it's always had this ability, since it was shareware back in the 90s. Not only that, but it will redraw the resultant waveform on the fly as you move the samples around - because calculating the resultant display values isn't anything like as difficult as he suggests, in practice.

The other thing to note is that if you play back the file, the 0 dB part doesn't clip at all, but the overload triggers the clip indicator when it occurs. The only thing it won't do is tell you the peak value of the clip; you have to find it and observe the value on the waveform display.

None of this is guesswork within the software - it's all accurate.


Thanks for the PDF, Steve - I'll study it more closely.

Here's what Paul said:

"If cooledit truly displays an upsampled wavefore that is derived after reconstruction it would possibly be more useful in the way you describe. But I can't confirm that it actually does though? But one thing to remeber is that the errors appear only at the DAC reconstruction filter - so an upsampling process intended for other reasons may not have the correct response characteristics to accurately display the problems.

Short single sample peaks are unlikely to cause overs - it's the action of what happens over many samples that makes a recon over. The relationship is mostly far too complex to be seen directly. That's why the expensive math processing is needed.

IME the greatest cause of recon errors is actually quiet parts of the music that have been boosted in the mix or by comps and limiters. E.g. sparse portions of Shania Twain produce much larger and more frequent overs (if boosted) than Green Day at full pelt!! The correlation between the overs and what one would expect from the material itself is not at all intuitive at first sight.

Unfortunately a 1 ms burst of something nasty is actually very audible indeed. The Oxford recon compensation overcomes this by dealing with the errors before they arrive at the output (i.e. using a fairly long look-ahead). This is why it has quite a long latency.
"

The Oxford recon compensation he's referring to here is in the Sony Oxford Limiter (unfortunately not available on the TC Powercore platform yet), which has a precise reconstruction meter in combination with a very elaborate dynamic process for removing any intersample peaks from the program. See the second link I posted.
Reply #16 by MusicConductor on July 28, 2006, 08:26:58 PM
Bonne, this may not sound like a very friendly welcome. That is not my intention -- there's nothing personal here whatsoever, just a pursuit of accurate information.

I'm not anything like the engineer that Steve is, Bonne, but this stinks of marketing, which Sony tends to do a great deal of. I don't buy Paul's quote. Proper reconstruction shouldn't need any analysis more than a dozen or so samples either side of the target. If it does, you have some severe ringing going on, don't you? So much for purity, then.

Furthermore, Shania Twain's CDs these days are ridiculously compressed and limited beyond belief, so of course there will be inter-sample problems, and maybe even flat-topping as well. This is true of many pop CDs. For example, I've analyzed work from the highly respected mastering guru Bernie Grundman, who lets flat-topped clipping go in the final product (up to 50 samples!) and, in at least one instance, used no dither and merely truncated to 16 bits. So just because a great engineer says something is so, I don't trust it without good evidence - not when people like Grundman can allow that kind of distortion in the final product and yet be picky about minuscule other details of little consequence... lots of smoke and mirrors, if you ask me.

Now, if Paul Frindle has hard proof of his claims, please bring it on.  

By the way, it is because of Cool Edit 96 that I learned how oversampling and intersample overshoot work in the first place. He really ought to look into this.

The only real-world scenario for inter-sample overshoot that I've had to deal with happens with close-miked, one-mike-per-player trombones. The spikes in that asymmetrical waveform can be within range, yet still show a clip on the meters, just as Steve illustrated.
Reply #17 by SteveG on July 28, 2006, 10:38:58 PM
Quote from: MusicConductor
Proper reconstruction shouldn't need any analysis more than a dozen or so samples either side of target. If it does, you have some severe ringing going on, don't you? [...] So just because a great engineer says something is so, I don't trust it without good evidence...

I'm not trying to attack or defend anybody here, either. If I understand what Frindle was suggesting correctly (he uses an awful lot of words to get across a simple principle - makes me look quite sparse!), he's suggesting that it is artefacts introduced into processing at lower levels that, when raised to the region around 0 dB, are likely to cause these problems. Now, there is quite a lot that could be said about this technically, but what it boils down to in practice is that with an equal energy distribution across the audio band, you are very unlikely to encounter more than a couple of dB of overs caused by this. And it is inevitable that they will occur only very briefly, unless somebody has deliberately engineered it otherwise - or simply listened to the result without observing the technicalities of it, which is what happens with some so-called 'mastering engineers', as mentioned above. You certainly don't have to go to ridiculous (and expensive) lengths to get rid of it (the marketing angle again); there are enough technically accurate filters within CEP/AA as it stands to eliminate this problem (if it actually is one) without spending a fortune on Sony Oxford processors. But if you do spend a fortune on them, then you are going to be in exactly the same position as everybody else defending poor wallet decisions, like those in the ProTools camp.

Oh, Sony Oxford processors work with Pro Tools, don't they...? rolleyes

And if your samples remain in their relatively correct positions within the gamut when amplified, then you only have a problem anyway if your D-A converter hasn't got enough headroom on its reference to cope with output a few dB beyond what it normally handles, or the following analog stage clips for the same reason. You can either recognise the problem (easy to do with CEP/AA) and fix it, or decide that it's not a real issue and ignore it - it's up to you. But I would also remind anybody else reading this that for a lot of mastering, Audition has become the tool of choice - and one of the reasons for that is precisely because it shows up this type of error, and has the tools to fix it.

Quote
The only real-world scenario for inter-sample overshoot that I've had to deal with happens with close-miked one-mike-per-player Trombones.  The spikes in that asymmetrical waveform can be within range, yet still show a clip on the meters just as Steve illustrated.

Along similar lines, I'm going to make a new bassoon recording soon - this can cause the same thing, for very similar reasons. I only know one bassoon player, and she is in a lot of demand these days, so it won't be so easy to find a good time. The good news? She's one of my sisters.
Reply #18 by bonne on July 29, 2006, 06:09:44 AM
Guys,

This is not about the "Protools camp" vs other camps. I have friends recording on Acid, Pyramix, Paris, Cubase, CEP, Logic and Samplitude. They are all happy with their chosen platform and getting good results with any of them. Some use Mac, others PC. It doesn't matter. To each his own. Let's refrain from platform envy/warfare. These are just tools.

This is not about marketing, either. Paul is now retired and does not work for Sony. He thus has more time on his hands and contributes on various forums in discussions about digital audio. He gets a lot of respect for doing this and is held in VERY high regard by his peers. Like two other design engineers, Dan Lavry and George Massenburg, who are also leading authorities on digital audio - not just from a theoretical point of view but when it comes to practical application of the theories - he has been active explaining things to people in forumland. And yes, he uses quite a few words sometimes, mainly because many people just don't get it.

This is about improving one's skills and the quality of our recordings. Who can be against that? Why resist that? Why be suspicious and look for ulterior motives?

Here is Paul's explanation of the issue we are discussing, once again:

http://www.sonyoxford.co.uk/pub/plugins-sony/products/limiter-Tech_Detail.htm#recon

If you have a program littered with undetected intersample peaks, due to heavy processing with EQ, comps, limiters and/or improper gain staging while mixing, your final loudness maximization (in order to get a competitive final product) is gonna bring these out and make your recording sound like rubbish when played on an average consumer CD player. These distortions due to intersample peaks come in addition to the other distortions created by the heavy processing of the samples themselves.

OK, a lot of people don't hear the difference, including many recordists, and don't see it as a real issue. Fair enough - if you can't hear it and you don't see it in your DAW, why worry about it (except that other people, your listeners, may hear it)?

A lot of today's recordings sound pathetically harsh, narrow, flat (one-dimensional) and distorted. Many people like it that way. Some of us are aiming for full, warm, strong and dimensional. If you have ever heard how fantastic digital audio CAN sound, you'd be interested in knowing HOW to get there.

But, like I said some posts ago, I didn't really come here to discuss these issues. I had a specific question about the way the CEP editor draws and displays the waveform/samples. The reason was to try and find a way of seeing these intersample peaks in order to deal with them, without having the Sony Oxford Limiter, the TC 6000 Brickwall limiter or the RME DIGICheck software, which all allow you to detect the intersample peaks.

The jury is still out as to whether the CEP editor has similar capability.

Cheers

Jørn
Reply #19 by SteveG on July 29, 2006, 09:48:05 AM
Quote from: bonne

The jury is still out as to whether the CEP editor has similar capability.

No jury, no case. It's never been an issue that it can display these peaks accurately - it simply can.
Quote
This is not about marketing, either. Paul is now retired and does not work for Sony.

I was aware of this - he mentions it in the thread. But old habits die hard, and they may well be paying him a pension; he wouldn't want to jeopardise that, now, would he? But anyway, I find it very hard to believe that he wouldn't want to promote something he's been involved in developing - to continue to promote it would be quite natural.

And for similar reasons, quite a few of us feel the same way about Audition. It's not perfect, but some of the things it does, and shows, are simply not replicated in other software. Or if they are, then you have to pay a staggering price for it.
Reply #20 by bonne on July 29, 2006, 03:45:00 PM
Don't get me wrong, Steve, I have a lot of respect for CEP/Audition. A friend has it. I'm even considering getting it myself. I've seen some great improvements in the last upgrade - the plug-in delay compensation that you and I talked about earlier being one welcome addition.

As for Paul Frindle's contributions in forumland and his motives for making them, here is a link to the "sticky" thread on the Reason In Audio forum on ProSoundWeb (formerly the George Massenburg Forum). This great thread was made a "sticky" at the request of Terry Manning of Compass Point Studios in the Bahamas, who should be well known in England. Read Paul's posts - lots of very informative stuff and very little marketing.

http://recforums.prosoundweb.com/index.php/t/4918/4796/

In Terry Manning's review of the Sony Oxford Limiter in his own forum on PSW, Whatever Works, he pays homage to "the brilliant Paul Frindle."

http://recforums.prosoundweb.com/index.php/m/88149/4796/?srch=sony+oxford+limiter#msg_88149


Back to the topic at hand. I finally had a chance to print out your screenshots today (I'm in transit at the moment).
They look convincing. Paul's question is:

" .......But one thing to remeber is that the errors appear only at the DAC reconstruction filter - so an upsampling process intended for other reasons may not have the correct response characteristics to accurately display the problems. "

You wrote: "Not only that, but it will redraw the resultant waveform on the fly as you move the samples around - because calculating the resultant display values isn't anything like as difficult as he suggests, in practice."

It's not so much the graphic display values as such we're after; it's a precise image of the actual audio signal after reconstruction that we need in order to check for these overshoots. But by all means, if you think the CEP display values closely mirror the audio signal after reconstruction, then this is just what the doctor ordered.

I wonder if anyone else could shed some light on this. Are there people around who have been involved in the programming of the editor display, or have first hand information about it?

Cheers

Jørn
Reply #21 by SteveG on July 29, 2006, 04:21:08 PM
Quote from: bonne

It's not so much the graphic display values as such we're after; it's a precise image of the actual audio signal after reconstruction that we need in order to check for these overshoots. But by all means, if you think the CEP display values closely mirror the audio signal after reconstruction, then this is just what the doctor ordered.

If you download the trial version and try moving samples around, you'd rapidly realise that the only thing it can be displaying is a precise visual image of the audio values you get. We know this because metering the signals externally on a known-accurate peak measuring system gives the same instantaneous values that Audition displays. Lower peak values displayed within the program itself are also accurate - why should it display anything other than this? It would be pointless. The ultimate accuracy of reconstruction, though, is limited to the accuracy of the D-A conversion. What Audition displays is the theoretical value - but there again, no software is ever going to be able to do more than this. My B&K 2636 can measure soundcard outputs directly, though - and the correlation is as good as you'd expect after going through a soundcard's theoretical operating ceiling. But ultimately, this is the limiting factor as far as playback is concerned - the way your particular D-A handles overshoots.

What this means in practice is that if your output produces potential overshoot conditions, the extent of them is inevitably going to vary between different playback systems - which is why it shouldn't be allowed to happen, really. Fortunately, spotting them with Audition is easy.

Quote
I wonder if anyone else could shed some light on this. Are there people around who have been involved in the programming of the editor display, or have first hand information about it?

The only person who could give you definitive information about the way the display is formed is the program's original creator, David Johnson - it's his code. David is a member of this forum, but he hasn't visited for a while, so it's unlikely (but not impossible, of course) that he'll see this.
Reply #22 by MusicConductor on July 29, 2006, 08:27:47 PM
Quote from: bonne
Guys,

This is not about the "Protools camp" vs other camps. ...Let's refrain from platform envy/warfare. These are just tools.

This is about improving one's skills and the quality of our recordings. Who can be against that? Why resist that? Why be suspicious and look for ulterior motives?...

Jørn


We completely agree.  I'm not suspicious or looking for a target to bomb.  My comments were simply evaluating the quotes you had furnished.  When I have more time, I'll enjoy the latest link you've offered us.

I also agree that digital sound can be warm and lovely and believe that the best of my work exemplifies that.  And no, I use minimal compression and don't leave clipping on my CDs!

It has been years since David Johnston last posted here, which I say with great regret.  Whenever he did, it tended to be slightly revelatory.
Reply #23 by bonne on July 31, 2006, 03:17:40 PM
Quote from: SteveG
Quote from: bonne
It's not so much the graphic display values as such we're after; it's a precise image of the actual audio signal after reconstruction that we need...

If you download the trial version and try moving samples around, you'd rapidly realise that the only thing it can be displaying is a precise visual image of the audio values you get. We know this because metering the signals externally on a known-accurate peak measuring system gives the same instantaneous values that Audition displays...


Yes, indeed. But the question remains whether what CEP shows is an approximation of the reconstructed audio signal (and to what degree of approximation), or whether it really shows a precise visual representation of the signal, like the Sony Oxford and TC Electronic kits mentioned earlier.
Paul Frindle wrote in the "Q for Paul Frindle" thread:

"It is not a given by any means that the presence of large samples alone indicate an over - so if you hit them by lowering their value you could be unnecessarily reducing the levels - AND - by doing so you may actually create an error that was not there in the first place!. Generally messing with undecoded sample values by visual analysis is not effective - because your sample display and eyes cannot reconstruct the band limited signal that will exist after the decoder filter.. Remember that the filter required to reconstruct PCM is the equivalent of a suite of successively accumulated calculations taking into account upwards of 200 sample positions on your wave editor screen."

A dozen or so samples either side of the target, as MusicConductor suggested, may not be enough to get this right. I don't think Sony would throw resources at the "expensive math" in the Sony Oxford Limiter if it was not considered necessary to get the over-detection needed to make it what Terry Manning calls the most transparent software limiter on the market.

Paul Frindle saw the need to develop the "expensive math". David J. (is it Johnson or Johnston?) may have seen the same thing, and may indeed have implemented it when he wrote the code.

I had hoped that David himself, or somebody else, could confirm that.
Reply #24 by SteveG on July 31, 2006, 05:42:07 PM
Quote from: bonne
Yes, indeed. But the question remains whether what CEP shows is an approximation of the reconstructed audio signal (and to what degree of approximation), or whether it really shows a precise visual representation of the signal...

If you read the rest of the paragraph you quoted, and think about it, you'd realise that neither CEP/AA, nor Sony, nor anybody else come to that, can actually predict accurately what will come from a reconstruction - because they all come before the reconstruction filter, and have no knowledge of what it contains. So there is no question, as I already explained - this is a theoretical approximation, because that's all it ever can be. If you tried it, you'd very rapidly realise how it works and what it does. If you recreate the same experiment in Mathcad, you get exactly the same result.

The further you get from a particular sample, the less effect you can have on it. If you generate silence at 44.1k, and move one sample to its maximum value, the effect on the samples around it will reduce to the background noise level 20 samples away. In fact, this is what happens at any sample rate you care to try. That's a worst-case scenario in terms of a ringing disturbance, and it's over in about 0.5 ms at 44.1k - just as it should be. The entire duration of the effect lasts about a millisecond, because it causes a symmetrical disturbance either side of the discontinuity. The created pulse will ring at the sample rate, and therefore can't be analysed easily by reference to the normal methods. Neither will this signal, as it stands, make it through an adequate reconstruction filter, which by definition has to remove signals at the sample rate.
Reply #25 by bonne on August 04, 2006, 04:03:35 PM
Quote from: SteveG
If you read the rest of the paragraph you quoted, and think about it, you'd realise that neither CEP/AA, nor Sony, nor anybody else come to that, can actually predict accurately what will come from a reconstruction... If you recreate the same experiment in Mathcad, you get exactly the same result.


I will check out the CEP editor at my friend's house when I get back from my holidays.

I agree with you that the reconstruction filter in intersample-detecting software shows a theoretical approximation. But I still feel that Sony, TC Electronic and RME, mentioned in the "Q for Paul Frindle" thread, who all have many years of experience designing D-A converters, know what they are doing when they see the need to oversample and/or use the "expensive math" to get the best results. The TC Brickwall uses a five-times oversampling algorithm.

When Paul says, "Remember that the filter required to reconstruct PCM is the equivalent of a suite of successively accumulated calculations taking into account upwards of 200 sample positions on your wave editor screen", I'm sure he knows what he is talking about, and I'm equally sure he wouldn't go to this length if there were another, easier and cheaper way to get the precision he's looking for.

I also see that iZotope uses oversampling in the intersample peak detecting meter in Ozone.

CEP/Audition's interpolation may or may not be as precise as needed.
I will check this out and let you know what I find.

Cheers

Jørn