AudioMasters
 
Topic: chord electronics technology?  (Read 1285 times)
« on: May 21, 2004, 03:42:16 AM »

Guest

http://www.chordelectronics.co.uk/chord_tech.asp?view=dac

Hi, I just wanted someone to explain how tap lengths affect audio performance in an easier-to-understand way.

Has anyone heard such a system in action?

Also, would an average hi-res (24-bit / 96kHz) player sound better than this technology?
Reply #1
« on: May 21, 2004, 03:54:14 AM »
SteveG
Administrator
Posts: 8319

Quote from: tannoyingteflon

That entire article is a work of fiction based on a comprehensive misunderstanding of the basic principles involved.

Reply #2
« on: May 21, 2004, 05:24:29 AM »

Guest

OK, I'll take your word for it, but could you please explain what tap length is and what it's for.

What I'm really puzzled about in digital audio is sampling, and why rates higher than 44.1kHz exist.

I remember tomwar pointing out that higher sampling rates affect transients. Is this true?

If microphones can capture faster transients at sampling rates above 44.1kHz, can loudspeakers reproduce them or benefit from higher sampling rates?

In summary - WHY DO WE NEED HIGHER SAMPLING RATES?
THAT IS THE QUESTION. cheesy
Reply #3
« on: May 21, 2004, 10:25:11 AM »
SteveG
Administrator
Posts: 8319

Okay, this is the core of the crap: (in an appropriate colour)
The answer is not being able to hear inaudible supersonic information, but the ability to hear the timing of transients more clearly. It has long been known that the human ear and brain can detect differences in the phase of sound between the ears to the order of microseconds. This timing difference between the ears is used for localising high frequency sound. Since transients can be detected down to microseconds, the recording system needs to be able to resolve timing of one microsecond. A sampling rate of 1 MHz is needed to achieve this!

However, 44.1 kHz sampling can be capable of accurately resolving transients by the use of digital filtering. Digital filtering can go some way towards improving resolution without the need for higher sampling rates. However, in order to do this the filters need to have infinitely long tap lengths. Currently all reconstruction filters have relatively short tap lengths - the largest commercial device is only about 256 taps. It is this short tap length, and the filter algorithm employed, that generates the transient timing errors. These errors turned out to be very audible. Going from 256 taps to 2048 taps gave a massive improvement in sound quality - much smoother, more focused sound quality, with an incredibly deep and precise sound stage.
This 'order of microseconds' is technically true - but you can measure any time at all in these units! The actual minimum number of microseconds difference that the human brain can distinguish as a timing error (this is called ITD - interaural time difference) occurs with sounds directly in front of you, and is probably in the order of some tens of µs - but there are two things they've completely overlooked. The first is that the maximum frequency that can be positionally determined by this method is about 743Hz (yes, you read that right), and the second is that it's not the sampling frequency that determines the accuracy of this inter-channel timing information at all - this is determined entirely by the degree of jitter in the clock signals. That's why a really low-jitter clock will reduce the apparent transient 'smear' of a soundstage - you need a clock with a jitter rate at least an order of magnitude (preferably more) lower than the maximum clock frequency the system handles to achieve this. So for a 96k sample rate, you need a clock with considerably less than 1µs jitter to achieve a stable sonic field, and it's this that tends to distinguish the indifferent systems from the good ones - the actual sample rate is irrelevant. That first paragraph actually tells out-and-out lies. If you want to see the math of this, I'll do it for you, but it's not going to look too good on the forum.
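[Editor's note: the ~743Hz figure can be sanity-checked with back-of-envelope arithmetic. This is a sketch, not a calculation from the thread; the speed of sound and ear-to-ear distance are assumed round numbers.]

```python
# Phase-based (ITD) localisation becomes ambiguous once half a wavelength
# fits into the ear-to-ear path, so its usable range tops out around there.
c = 343.0   # speed of sound in air, m/s (assumed)
d = 0.23    # effective ear-to-ear path, m (assumed)

f_max = c / (2 * d)     # highest frequency with unambiguous interaural phase
itd_max = d / c * 1e6   # largest possible interaural delay, in microseconds

print(round(f_max))     # ~746 Hz, in the ballpark of the ~743Hz quoted above
print(round(itd_max))   # ~671 us: the delays involved are indeed "tens to hundreds of us"
```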

The rest of the article is also misleading, because what ultimately determines a filter's ability to achieve inter-channel stability (and therefore positional 'smear') is still clock jitter, not the number of taps in a filter. This whole 'tap' business is all about how FIR and IIR filters work, and the whole basis of FFTs as well. If you look at the FFT size numbers in any of CEP/AA's filters, these relate to 'tap' numbers as well, but the whole explanation of how this works is rather more complex, because it involves a detailed understanding of how a digital filter works. Now, it's true that a filter that uses more taps is going to have an improved transient and phase response - but the number of taps doesn't affect the inter-channel stability - just the ripple response of the filter. And quite frankly, an oversampled output, which eliminates the need for a steep-slope anti-alias filter (this is what they are actually talking about underneath all that bull), is going to sound better than any brick-wall filter anyway, simply because the massive phase shifts that extend well back into the audible range won't happen.
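[Editor's note: the tap-count trade-off can be seen concretely with a plain Hamming-windowed-sinc lowpass design - a generic textbook filter, not Chord's or any particular DAC's. More taps buys a narrower transition band; it says nothing about inter-channel timing.]

```python
import numpy as np

def windowed_sinc_lowpass(num_taps, cutoff_hz, fs):
    """Hamming-windowed sinc FIR lowpass; more taps -> narrower transition band."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    fc = cutoff_hz / fs
    h = 2 * fc * np.sinc(2 * fc * n)   # truncated ideal lowpass impulse response
    h *= np.hamming(num_taps)
    return h / h.sum()                 # normalise to unity gain at DC

def attenuation_db(h, freq_hz, fs):
    """Magnitude response at one frequency, expressed as attenuation in dB."""
    w = 2 * np.pi * freq_hz / fs
    H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
    return -20 * np.log10(abs(H))

fs = 44100.0
short = windowed_sinc_lowpass(255, 20000, fs)    # roughly the "256 tap" scale
long_ = windowed_sinc_lowpass(2047, 20000, fs)   # roughly the "2048 tap" scale

# Just above the cutoff, the long filter is already deep into its stopband
# while the short one is still mid-transition:
print(attenuation_db(short, 20150, fs), attenuation_db(long_, 20150, fs))
```

Both filters pass the audio band essentially untouched; the extra taps only sharpen the edge.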

I think that it's demonstrably true that an oversampled system with a low-slope filter on the output - which is all that's necessary to remove the now much higher frequency sample clock (way above audio) - is going to sound better than any finite-step filter working on lower clock-rate audio. But this has nothing to do with inter-channel positional accuracy at all. And also, sampling theory dictates that using an extended number of taps in a filter will in fact compromise its response to lower frequencies unless the system is oversampled sufficiently. You can't win at both ends of the spectrum at the same time unless you put a lot more resources in.

Reply #4
« on: May 21, 2004, 10:52:02 AM »

Guest

Thanks, Steve, for your informative reply.
Hey I was only joking about killing you. cheesy

Also, Steve, can you tell me any good audio review websites?
I'm thinking of upgrading my audio player.
Reply #5
« on: May 21, 2004, 11:30:53 AM »
SteveG
Administrator
Posts: 8319

Quote from: tannoyingteflon

Also, Steve, can you tell me any good audio review websites?
I'm thinking of upgrading my audio player.

I'm afraid that I have no idea at all, because I don't look at them. The basic principle though, would be to find a site where they've actually measured what happens with players, as well as listened to them. I'd guess that this would be more likely to be one of the hi-fi magazine websites - because they would still have an income stream from selling hard copy to pay for this. And most of them are protected by being in large publishing houses, who may well have the resources to invest in proper tests anyway.

I have got (in a somewhat incomplete form) an explanation of what goes on with digital filtering - it just needs a few diagrams to clarify the process. When I've sorted this out (may be a few weeks), I'll post it on the forum.

Reply #6
« on: May 21, 2004, 12:07:59 PM »
AndyH
Member
Posts: 1481

You use the term "oversampled system" which I don't think I've run into before. Is this simply the way you chose to say oversampling, such as the Mia's ADC uses 64X oversampling and its DAC uses 128X oversampling, or is it a reference to some other aspect of digital audio?
Reply #7
« on: May 21, 2004, 03:34:04 PM »
SteveG
Administrator
Posts: 8319

Quote from: AndyH
You use the term "oversampled system" which I don't think I've run into before. Is this simply the way you chose to say oversampling, such as the Mia's ADC uses 64X oversampling and its DAC uses 128X oversampling, or is it a reference to some other aspect of digital audio?

No, it simply refers to a combination of an oversampled DAC and a low-pass filter. The moment you add any two components together it becomes a system - this is a generic use of the word. Sometimes specific words are used in place of 'system' - in an audio mixer, for instance, the mic preamp, EQ, aux sends, direct output, fader, etc. would be referred to as a 'channel' - but generically, this can be described as a system too.

Reply #8
« on: May 23, 2004, 05:42:34 AM »
AndyH
Member
Posts: 1481

Perhaps I should have made this a new topic, but it seems to fit in here without too much squinting.

A soundcard is a system with several functions, A to D and D to A being two of the most common. All the soundcard specs I've looked at say something about their converters, but none say anything about their filters. There have been some comments in this forum about the quality of the anti-aliasing filters being one thing that sets the more expensive cards apart from the less expensive ones, but no discussion of the how or the why of that. The implication is that good filters are expensive.

The things I've read, that I can follow, talk about oversampling to make filter design easier and to avoid some side effects of overly steep filters. 64X and 128X oversampling is common in soundcards. I have an older  CD player, which sounds quite alright to me, that proclaims "4 times oversampling digital filter" on its faceplate.

There are subcultures in the audio world highly opposed to anti-aliasing filters. There are DIY DACs plans available and some filterless commercial HiFi DACs. Some use oversampling and some avoid even that. I've read about this idea less frequently on the ADC side, but it does have some following, a notable one being the recordings done for LSO (176.4kHz sampling rate, no anti-aliasing filters during recording, none during conversion for CD).

What I haven't been able to grasp is why at 128X, or even at 64X, a filter is necessary at all. The things I've read don't consider such finer points, at least not in terms I recognize. Is it possible to make a statement that might illuminate this point without getting into heavy math or great detail?

* * * * * * * * * * *
Another thing in this general area that I've wondered about is the dynamic range in digital recording. I understand that each bit represents about 6dB and so dynamic range increases as bit depth increases. However, the actual limits are quite a ways from the theoretical limits due to device noise and the practical limits, in most cases, are considerably short of the device limits.

On the other hand, there are quantization errors. If I have the terminology correct, these are errors due to the fact that D to A conversion is, of necessity, in discrete steps. The levels of these steps essentially never exactly equal the input signal level at the moment of sampling. The mismatch makes for error in the digitized result and is the reason we add noise (dither) into recordings.

If nothing else, the answer to my forthcoming question is "standards." Improvements to devices have to be in conformity to the established specifications or the new will be incompatible with the already existing. What I really want to know is if there is something more basic than that, something along the line of "that is the way this universe works," like quantum levels for electron orbitals.

Could better quality be pursued by making the errors smaller through making the steps smaller? Instead of putting those additional bits (18-bit, 20-bit, 24-bit) into increased dynamic range, a great deal of which is unusable, could they have been put into decreasing the size of the steps within the realistically usable dynamic range?
Reply #9
« on: May 23, 2004, 06:46:12 AM »

Guest

Post deleted.  cheesy
Reply #10
« on: May 23, 2004, 10:54:10 AM »
SteveG
Administrator
Posts: 8319

Quote from: AndyH

The things I've read, that I can follow, talk about oversampling to make filter design easier and to avoid some side effects of overly steep filters. 64X and 128X oversampling is common in soundcards. I have an older  CD player, which sounds quite alright to me, that proclaims "4 times oversampling digital filter" on its faceplate.

There are subcultures in the audio world highly opposed to anti-aliasing filters. There are DIY DACs plans available and some filterless commercial HiFi DACs. Some use oversampling and some avoid even that. I've read about this idea less frequently on the ADC side, but it does have some following, a notable one being the recordings done for LSO (176.4kHz sampling rate, no anti-aliasing filters during recording, none during conversion for CD).

What I haven't been able to grasp is why at 128X, or even at 64X, a filter is necessary at all. The things I've read don't consider such finer points, at least not in terms I recognize. Is it possible to make a statement that might illuminate this point without getting into heavy math or great detail?

The people who advocate no filtering are deluding themselves, and clearly don't understand the real implications of doing this at all. It's quite simple - you simply don't let unfiltered clock signals out on your audio lines. Firstly, you won't get FCC (or any other standards body) approval to manufacture the equipment. The reason for this is simple - basically you'd be manufacturing a transmitter radiating a square(ish) wave output at the clock frequency from the audio output sockets. Also, if the amp, preamp or whatever you are connecting to has one of these ridiculously extended frequency responses, you will inevitably increase the intermodulation distortion levels, and have a higher resulting background noise for your troubles.

As far as filters go, it is expensive to build relatively good brickwall filters. But it is not expensive to build a simple passive -6dB/oct filter that starts to roll off at a suitable point above the audio band. Even if you don't start this roll-off until you've got to 50kHz, with a simple passive filter the response will be down to the background noise level by the time you've got to the clock rate you need for 64x oversampling. Anybody who knows about the phase response of filters will tell you that the rate of phase change with a 6dB/oct filter isn't an issue, even within the audio band.
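[Editor's note: the first-order arithmetic can be sketched by counting octaves - a -6dB/oct filter loses 6dB per doubling of frequency above its corner. The 50kHz corner and 64x figure come from the post; the rest is illustrative, and the depth actually reached at the clock also depends on how strong the clock images are to begin with.]

```python
import math

fs = 44100.0
corner = 50e3     # corner frequency from the post, above the audio band
clock = 64 * fs   # 64x oversampling clock, roughly 2.82 MHz

octaves = math.log2(clock / corner)   # doublings between corner and clock
rolloff_db = 6.0 * octaves            # first-order slope: 6 dB per octave
print(round(octaves, 1), round(rolloff_db))
```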

Quote

On the other hand, there are quantization errors. If I have the terminology correct, these are errors due to the fact that D to A conversion is, of necessity, in discrete steps. The levels of these steps essentially never exactly equal the input signal level at the moment of sampling. The mismatch makes for error in the digitized result and is the reason we add noise (dither) into recordings.

Absolutely correct. If you look at my accurate formula for calculating dynamic range (it's in another thread somewhere), you'll find that it's the step size indeterminacy that actually extends the theoretical dynamic range slightly. The effect of the dither is to 'smooth' the steps into something more like a continuous ramp. This is the reason that the shape of the noise is significant - it affects our perception of this effect at different frequencies.
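[Editor's note: a small numpy sketch of this 'smoothing' - plain TPDF dither and a deliberately huge quantiser step to exaggerate the effect, so nothing here is any real converter's parameters. Without dither the quantisation error tracks the signal; with dither it is decorrelated into benign noise.]

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
x = 0.25 * np.sin(2 * np.pi * 997 * t)   # low-level sine, non-harmonic of fs
step = 0.25                               # huge step: signal spans only +/-1 LSB

def quantize(sig, dither):
    # TPDF dither: sum of two uniform +/-0.5 LSB sources, added before rounding
    d = step * (rng.uniform(-0.5, 0.5, sig.size) +
                rng.uniform(-0.5, 0.5, sig.size)) if dither else 0.0
    return np.round((sig + d) / step) * step

err_plain = quantize(x, dither=False) - x
err_dith = quantize(x, dither=True) - x

# Correlation of the error with the signal: strong without dither, near zero with it
c_plain = abs(np.corrcoef(x, err_plain)[0, 1])
c_dith = abs(np.corrcoef(x, err_dith)[0, 1])
print(c_plain, c_dith)
```

The dithered version trades correlated (signal-dependent) distortion for a slightly higher but steady noise floor, which is exactly the ramp-smoothing described above.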

Quote

Could better quality be pursued by making the errors smaller through making the steps smaller? Instead of putting those additional bits (18 bit, 20 bit, 24 bit) into increased dynamic range, a great deal of which is unuseable, could they have been put into decreasing the size of the steps within the realistically useable dynamic range?

Unfortunately not. You have to remember that the system uses linear steps, but that we are considering the effect of these on the bit depth in logarithmic terms. The inevitable implication of this is that extending the bit depth in a linear system reduces the noise floor, and that's all it can ever do. This isn't such a simple concept to get one's head around - most people have to think about this quite a lot!
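[Editor's note: the "it can only lower the noise floor" point reduces to simple arithmetic - each added bit halves the linear step size, which is a fixed ~6.02dB in logarithmic terms wherever the steps sit.]

```python
import math

for bits in (16, 20, 24):
    # smallest step relative to full scale, expressed in dB
    floor_db = 20 * math.log10(2 ** bits)
    print(bits, round(floor_db, 1))   # 16 -> 96.3, 20 -> 120.4, 24 -> 144.5
```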

The only way around this is to use non-linear steps - which is what some of the mu-law encode/decode systems do. These can effectively give you smaller steps at the top of your chosen range. But at this point, you have to throw away all of the rules about dither, which rely on equal-height steps, and you are also a hostage to fortune when it comes to the tracking accuracy of encoder/decoder systems. The other significant problem with this is that there will be a perceived worsening of quality as signals get quieter. And as for manipulating the signals in software - it's a nightmare. Think about doing even a simple thing like amplification in AA/CEP. Instead of a computation formula that works at any level, you'd need to calculate new step sizes every time you did it. It's not going to happen. Not even a little bit...
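[Editor's note: a sketch of mu-law companding as used in G.711 telephony - an example of the non-linear-step idea, with the telephony standard's mu = 255 constant rather than anything from this thread. Quantising uniformly in the companded domain makes the effective step size, and hence the absolute error, vary with signal level instead of being constant.]

```python
import numpy as np

MU = 255.0  # G.711-style mu-law constant

def mu_encode(x):
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_decode(y):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def mu_quantize(x, bits=8):
    """Uniform quantiser applied in the companded (non-linear) domain."""
    levels = 2.0 ** (bits - 1)   # steps per unit in the +/-1 companded range
    y = np.round(mu_encode(x) * levels) / levels
    return mu_decode(y)

quiet = np.linspace(-0.01, 0.01, 1001)
loud = np.linspace(0.90, 1.00, 1001)
err_quiet = np.max(np.abs(mu_quantize(quiet) - quiet))
err_loud = np.max(np.abs(mu_quantize(loud) - loud))
# The effective step (hence worst-case error) differs hugely between the two ranges
print(err_quiet < err_loud)   # True
```

Note how amplifying a mu-law signal would land samples on differently sized steps, which is the software nightmare described above.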

Reply #11
« on: May 23, 2004, 11:30:29 AM »
SteveG
Administrator
Posts: 8319

Quote from: tannoyingteflon
Dynamic range and audio detail ratio (if I can call it that) aren't the same for digital and analog (I guess). Just take a look at low-level audio detail in CD players!
So even though we don't need a dynamic range better than CD for most applications, it would still benefit digital audio to go beyond 16-bit. And seeing that computers work with bytes, 24-bit was perfect for hi-res audio.

Is there any truth in what I'm saying, Steve, or have I got it completely wrong?

It's certainly true that low level detail suffers in 16-bit systems - but it's also true that to resolve this in a domestic system with signals recorded up to 0dB is quite a tall order, because the deterioration doesn't get serious until you are down below the -60dB level. Yes, you can perceive a difference, but only reliably in a direct A/B test - in absolute terms it's nowhere near as bad as you might think, because our own built-in masking system hides most of this, and a typical domestic environment does the rest.
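[Editor's note: to put a number on "doesn't get serious until below the -60dB level" - illustrative arithmetic only: a quiet passage at -60dBFS in a plain 16-bit system has only the headroom left between it and the ~-96dBFS quantisation floor to describe its detail.]

```python
import math

bits = 16
floor_dbfs = -20 * math.log10(2 ** bits)   # ~-96.3 dBFS quantisation floor
signal_dbfs = -60.0                        # a quiet passage
remaining_snr = signal_dbfs - floor_dbfs   # dB left to resolve low-level detail
print(round(remaining_snr, 1))             # 36.3
```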

As for bytes and computers, the two bytes in a 16-bit system are rather easier to handle than the 3 bytes in a 24-bit system - which is why they get stored either as 4-byte blocks with 25% of the storage wasted, or as packed 3-byte blocks which have a sort-of 3 block 'granularity' to them. That's hardly a problem these days, but when 16-bit audio was introduced, it was rather more of an issue, as was storage in general. 16 bit systems were (and still are in domestic terms for the vast majority of people) a good compromise between what was possible and what was actually economically affordable. If you look at most analog recording equipment, you will find that it struggles even to 16-bit levels of performance in noise terms. Certainly some of it sounds better, but a good 16-bit system will actually give most analog systems rather more of a run for their money than their owners are ever likely to admit.
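[Editor's note: the 3-byte handling can be sketched in a few lines - these are illustrative helper functions, not any particular file format's API: packed little-endian 24-bit samples versus padding each sample out to a 4-byte container.]

```python
import struct

def pack_s24le(samples):
    """Pack signed 24-bit ints into contiguous 3-byte little-endian frames."""
    out = bytearray()
    for s in samples:
        out += struct.pack('<i', s)[:3]   # keep the low 3 bytes of a 32-bit int
    return bytes(out)

def unpack_s24le(data):
    out = []
    for i in range(0, len(data), 3):
        b = data[i:i + 3]
        pad = b'\xff' if b[2] & 0x80 else b'\x00'   # sign-extend to 32 bits
        out.append(struct.unpack('<i', b + pad)[0])
    return out

samples = [0, 1, -1, 8388607, -8388608]   # full signed 24-bit range
packed = pack_s24le(samples)
print(len(packed), len(samples) * 4)      # 15 bytes packed vs 20 in 4-byte blocks
print(unpack_s24le(packed) == samples)    # True: lossless round trip
```

The packed form saves the 25% overhead at the cost of 3-byte granularity, as described above.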

But it seems very likely that 24-bit systems are where we will end up - although CDs aren't going to go away in a hurry, simply because there are too many vested interests at stake. Despite what the purists might claim, a carefully engineered 24-bit 96k sample rate system will comfortably out-perform any analog kit. People will still like the sound of recordings that get processed via analog tape, though - but the reasons for this are completely different, and relate very much to particular distortion characteristics that are introduced... It seems strange, but a lot of people actually don't appear to like the sound of undistorted recordings at all!

Reply #12
« on: May 23, 2004, 01:07:34 PM »

Guest

Thanks, Steve Cool - I thought my comments were a waste of time; that's why I deleted it. huh
Reply #13
« on: June 02, 2004, 05:36:58 AM »

Guest

Steve, what is meant by 20-bit or 24-bit filters?

Does this mean that the interpolation in oversampling is done at the stated bit depth? Does a higher-resolution filter make a system sound better?
Much appreciated.
Reply #14
« on: June 20, 2004, 01:45:34 AM »
MartysProduction_dot_com
Member
Posts: 168

Steve,

I read a lot of your posts and replies.  Dude, no sarcasm and no bullsh*t--YOU are a genius.

Smiles,

MM

Marty Mitchell, CENM
Chief Executive Noize Maker
www.MartysProduction.com
The BEST Noize You'll Ever Hear!™