Author |
Topic
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Sat Sep 22, 2001 10:06 am
|
|
|
I've been on the fence in regards to moving to Win2k--a damned fine, stable OS that I use at the office for non-audio. All the same, I use a Delta 66 for audio work and am quite wary of the WDM/CEP 1.2a/MME issue. M-Audio boasts a 6/01 release of a beta driver that supports MME in Win2k. Has anyone been using it to any success, especially with 24-bit audio?
I'd ask M-Audio but their response mechanisms can be spotty at times--and I'd rather listen to a user experience, as well. Thanks!
|
|
Syntrillium M.D.
Location: USA
Posts: 5124
|
Posted - Mon Sep 24, 2001 9:13 am
|
|
|
Hi HtB. Well, essentially, if you're comfortable with your current 9x running the most current (4.1.22.27) MME driver, I'd probably stick with that.
We've had some success with the 6/1 WDM/MME driver on some systems...but as it's stated on the M-Audio site (as well as on Microsoft) their WDM spec really suits WDM specific Apps for full 24-bit support; otherwise, you may be limited to 16-bit.
So, if you have the option of experimentation, you can certainly give it a try; but for now, I'd probably stick with your 9x/Me system, and the earlier MME-only driver.
---Syntrillium, M.D.
_________________
|
|
|
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Mon Sep 24, 2001 10:05 am
|
|
|
Yeaahhh...like Beetle said elsewhere regarding Windows 9x: it ain't that bad.
I'll hold the line, then. I'm sending you a question via e-mail, if you don't mind. Thanks!
|
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Tue Sep 25, 2001 7:53 am
|
|
|
As circumstances have it, M-Audio seems to have released a subsequent version of their driver...I may be interested in ginning up a Win2k bootable and giving it a whirl.
If anyone is actively running CEP1.2a with Win2k and a Delta, a scoop would be nice. If not--well, a collective crossing of fingers will do.
|
|
PapillonIrl
Posts: 158
|
Posted - Wed Sep 26, 2001 7:09 am
|
|
|
I'm going to be installing a Delta 44 on my dual boot (2K and 98SE) DAW today, Ideally I'd like to continue just running 2K.
I'll keep you all posted on the results.
pAp.
|
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Wed Sep 26, 2001 9:58 am
|
|
|
Sounds great. Good luck!
|
|
PapillonIrl
Posts: 158
|
Posted - Wed Sep 26, 2001 3:20 pm
|
|
|
Ok.
Took some fiddling but the new drivers seem to be up to scratch. I have been testing it with four mics going into this mixer...
http://www.behringer.com/eng/products/eurorack/mx2004a.htm
and from four channel inserts in the mixer into the breakout box. It records each mic to a seprate channel with virtually no noise, unlike my old card. I'm doing everything in 32-bit in CEP and have the CODEC sample rate set to 96,000 in the Midiman Mixer. Does this mean I'm not limited to 16-bit (excuse my ignorance) ? I didn't intsall any of the drivers from the CD. I believe these would have to be uninstalled using Midiman's uninstall programs (more downloading) if I had used them.
There is only one problem, and again, I'm pretty sure it's down to my ignorance and not the soundcard...the signal reaching Cool Edit seems very weak when I set the Midiman Mixer software to accept a signal of +4db. This is (supposedly) the output of my mixer so why is this happening ? If I set Midiman software to accept a signal of -10db the waveform looks right. This can't be right. Is it anything to do with a balanced\unbalanced thing ? I never took the time to understand these terms properly, maybe it's time I did.
I would really appreciate if somebody familiar with this souncard could take a look at the the link above concerning my mixer and try and help me out as it is doing my head in !
I do reckon you are safe enough going fully 2K, Heavens. I don't see how this could be a driver or OS issue. I have my old soundcard enabled as well, and I can swap between the two cards for monitoring on different speakers, no problem. Seems surprisingly stable.:)
pAp.
Edited by - papillonirl on 09/26/2001 3:27:15 PM
Edited by - papillonirl on 09/26/2001 3:34:28 PM
Edited by - papillonirl on 09/26/2001 3:37:48 PM
|
|
|
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Wed Sep 26, 2001 4:52 pm
|
|
|
Dammit, Papillon, you just raised the stakes! I was all set to give up and wait a few months and go for XP. :)
But if you're...let's see:
1. Under Win2k
2. Using CEP, therefore MME drivers exclusively
3. Running a D-44, virtually identical to the D-66
4. Not having any problems with the x.x.x.25 multiclient driver
...then everything is looking green. Now, I do have an Athlon chip and have to make sure I don't have that ridiculous chipset conflict--but other than that it looks like you've forged a path, my good fellow.
I just finished a demo and should be in a nice lull with other in-progress mixes, so...gonna give this horse a ride soon.
** Oh, regarding the +4/consumer/-10 settings for recording: +4dB will offer the least signal to an app, -10dB the most. I think the settings refer to input adjustment, e.g., industry-grade, hot signals (+4dB) don't need the boost. That's my rational explanation, anyway. Hey, SteveG, Graeme, Beetle or anybody else, what's the deal with those settings? Are they old-school standards that have carried over? Inquiring minds want to know. :)
Again--well done, Pap. Brave of you. As soon as I find the time I'll go ahead myself.
|
|
|
|
SteveG
Location: United Kingdom
Posts: 6695
|
Posted - Wed Sep 26, 2001 6:33 pm
|
|
|
Quote: |
** Oh, regarding the +4/consumer/-10 settings for recording: +4dB will offer the least signal to an app, -10dB the most. I think the settings refer to input adjustment, e.g., industry-grade, hot signals (+4dB) don't need the boost. That's my rational explanation, anyway. Hey, SteveG, Graeme, Beetle or anybody else, what's the deal with those settings? Are they old-school standards that have carried over? Inquiring minds want to know. :)
-Heavens to Betsy |
No, they're not 'old-school standards', they're all current. I'll give you a brief explanation, and you probably won't want to know. This is messy, and actually more complicated than this explanation...
Decibels aren't absolute values. They are logarithmic ratios that have to be referenced to something in order to mean anything at all. Now, the trouble with audio is that there's more than one reference that's commonly used, and with your +4 and -10 figures, they're each using a different reference.... groan....
Common Units
There's dBV, which is the level compared to 1 Volt RMS, where 1 Volt is 0dBV.
There's dBu, which is the level compared to 0.775V RMS with an unloaded (o/c) output (u=unloaded) where 0.775 Volts is 0dBu.
There's dBm, which actually represents a power level, compared to 1mW across 600ohms, which just happens to be 0.775 Volts again, so 0.775 Volts across 600ohms = 0dBm.
Now wouldn't you know it. We're talking +4dBu and -10dBV. Yes, there are some conversion factors we can use (with care), like this one: 1dBV = +2.2dBu
So +4dBu = 1.224V, but going down to -10dBV is a difference of 11.8dB and not 14dB! This means that if you do a little logarithmic calculating for dBu, you get (and I'm not working it all backwards, it's too complicated)
20 Log 0.2/0.775 = -11.76dBu, which is near enough.
So the answer to your question is that -10dBV systems (the 'domestic' standard) have a notional max. output of 200mV and +4dBu systems (the 'professional' standard) would produce an output of 1.224V for the same signal.
So in English, a +4dBu input needs lots more signal to drive it than a -10dBV one, and a +4dBu output will provide a lot more output level than a -10dBV one. And if you are not confused by now, you haven't been paying attention...
Well, you did ask!
Steve
Edited by - SteveG on 09/27/2001 01:30:52 AM
_________________
 |
|
|
|
PapillonIrl
Posts: 158
|
Posted - Thu Sep 27, 2001 1:41 am
|
|
|
Quote: |
So in English, a +4dBu input needs lots more signal to drive it than a -10dBV one, and a +4dBu output will provide a lot more output level than a -10dBV one. And if you are not confused by now, you haven't been paying attention...
Well, you did ask!
Steve
|
Thanks for the explanation Steve, I knew the dB unit was logarithmic, but had no idea there were different reference values.
Hmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm....
This makes sense to me in a sick, twisted kind of way. But just to clarify, the main output on my mixer has a button beside it that toggles between '+4dB' and 'Mic', and I always have it set on +4.
Now I've always assumed that this output level applied to the channel inserts also, (never seemed to be much difference betweeen main\insert as far as the signal reaching my old soundcard is concerned) so if my output level is +4, why, when the delta mixer is expecting a signal at +4, is the signal reaching CEP so unusably weak ?
Is it because, as you pointed out, the +4dB setting on the delta mixer is referenced to a DIFFERENT dB unit than the +4dB settingo on my mixer ?
Please tell me life is not this complicated :(
Oh, and Heavens, I have to say I was pleasantly surprised with the stabilty of the new drivers. I have a Pentium 3 system with a Gigabyte motherboard in case you're wondering, and I take no responsiblity for any nightmares you may have with the same drivers and Microschlong. But if there is no issues with the Athlon chipset, I reckon you'll be fine.
pAp.
Edited by - papillonirl on 09/27/2001 01:48:54 AM
Edited by - papillonirl on 09/27/2001 01:50:32 AM
|
|
|
|
PapillonIrl
Posts: 158
|
Posted - Thu Sep 27, 2001 3:20 am
|
|
|
I posted my experiences in another forum and one response I got included these comments...
Quote: |
CEP, like most audio software, works internally with 32bit floating point numbers (decimal). For some reason the CEP people have an option to save files in this 32bit format. I suppose they claim it saves on a conversion routine which can speed things up slightly. Personally I think it's so that they can say "32 bit recording" because it sounds cool. Anyhow, this doesn't mean that you're recording at 24bit. In fact, I thought that CEP was a 16bit application...but my information could be wrong.
With older Delta drivers, you were limited to 16bit recording in MME applications and I'm not sure if that's still the case. However, if you're doing WDM or ASIO, you will not have this limitation. You'll have to find out the kind of driver you're using in CEP. For instance in n-Track I have the option to choose between using the Delta ASIO, WDM, MME, or DS drivers.
|
Anybody have any thoughts on this ?
I thought CEP used only MME drivers, no ?
Finally, how to I know for absolute certain that I am recording at 24-bit ?
Thanks,
pAp.
Edited by - papillonirl on 09/27/2001 03:21:20 AM
|
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Thu Sep 27, 2001 4:20 am
|
|
|
Thanks, SteveG...I'll need to unravel that like an onion but the concept makes sense.
Pap, CEP can definitely answer those questions. That query about actually recording 24-bit is interesting, too. And like I said, I'll investigate any possible conflicts with my Athlon...I don't have any problems now but I'd hate to exacerbate anything.
|
|
PapillonIrl
Posts: 158
|
Posted - Thu Sep 27, 2001 7:27 am
|
|
|
Okay in case anybody is having similar problems, the reason for my weak signal is that the channel inserts on my mixer are NOT balanced, only the main outs are.
*slaps forehead*
Sorted.
Am I right in saying then, that if I use good, short cables I should be able to record with the delta mixer set to accept -10dB with no loss in quality (compared to balanced/+4dB) ?
Thanks again to SteveG, I've really on understood your post properly now ( 5th time reading ) that's some important **** to know I reckon.
pAp.
Edited by - papillonirl on 09/27/2001 07:31:17 AM
Edited by - papillonirl on 09/27/2001 07:37:53 AM
|
|
PapillonIrl
Posts: 158
|
Posted - Thu Sep 27, 2001 7:39 am
|
|
|
Deleted - sometimes I get on my own nerves.
Edited by - papillonirl on 09/27/2001 07:49:01 AM
|
|
Syntrillium M.D.
Location: USA
Posts: 5124
|
Posted - Thu Sep 27, 2001 9:03 am
|
|
|
Ahh...okay, you beat me to it. Just regarding the -10/+4 deal...the general consensus is to use +4 for balanced signals and -10 for non-balanced. If you're using short cables, I wouldn't worry about losing too much by going the -10 route - really. Don't sweat it.
As far as the quote above (from some other forum)...frankly, it couldn't be more incorrect.
Yes, Cool Edit uses MME drivers...
No, Cool Edit is not limited to 16-bit and you can very confidently record 24-bit signals, saved in 32-bit float. This is not a gimmick, this is real. You can verify that the signals are greater than 16-bit by bringing the file into edit view and zooming in to sample level. If you begin to notice that the little squares are falling in between sample values (on the Y-axis) as opposed to being right on the line, this is telling you that you're in greater than 16-bit depth. For more details on how to verify that, contact ;
Regarding WDM...Uh, again, that guy was pretty off the mark. Actually, with many apps out there (except Sonar) you WILL have 16-bit limitation in recording using WDM drivers...It actually references this in an FAQ on M-Audio's Delta site.
Hope that clears a few things up for you.
---Syntrillium, M.D.
_________________
|
|
|
|
SteveG
Location: United Kingdom
Posts: 6695
|
Posted - Fri Sep 28, 2001 1:13 am
|
|
|
Quote: |
Ahh...okay, you beat me to it. Just regarding the -10/+4 deal...the general consensus is to use +4 for balanced signals and -10 for non-balanced. If you're using short cables, I would worry about losing too much by going the -10 route - really. Don't sweat it.
-Synt |
Did you really mean would, or wouldn't? Just in case there's any slight confusion here, a few more words of explanation might be in order, I think.
For a start, the balanced/unbalanced thing is purely pragmatic...
Domestic = cheap = unbalanced = -10dBV
Professional = expensive = needs to be interference-proof = balanced = +4dBu
The balanced +4dBu situation is a 'belt and braces' approach. You have a much hotter signal, so any interference or hum needs to be about 5 times the size it would have to be for the same effect on a domestic system. Then you say 'well who cares, anyway?' and put in a balanced system that will pretty effectively cancel out all interference and hum at whatever level you run it at. And don't give me the 'long runs' argument either. I've run a balanced -10dBV (sorry, Synt!) signal half a mile with virtually no losses. I could do this, because it's dBV, and I used a nice low-impedance driver, because nobody said that I couldn't. There didn't seem to be much point in going to +4dBu.
So I don't think it matters at all what level you run at, as long as everything in the chain is getting the signal that it needs to see to work properly. If you run unbalenced signals and you get hum and interference, then that's a good sign that you need to check out your working environment, as there's a good chance that the same interference will be getting into the internally unbalanced bits of your kit anyway.
Does anybody want to know how balanced lines do their stuff, or are you all happy with that? (There might be a test... )
Pap, have you really got a mixer where the main output switches between +4dBu and mic? What on earth is it?
Steve
_________________
 |
|
|
|
jonrose
Location: USA
Posts: 2901
|
Posted - Fri Sep 28, 2001 1:23 am
|
|
|
From the above post...
They do have some interesting ideas, don't they?
Actually, I use two of their 8024's - those are the digital EQ's - when I'm out gigging. Pretty handy, sound okay, and are cheap.
Can't vouch for their inexpensive mixers, though the largest desk they make isn't bad for the price. Still, I think I'd go elsewhere in that price range, given a choice.
All the best... -Jon
:-)
_________________
 |
|
|
|
PapillonIrl
Posts: 158
|
Posted - Fri Sep 28, 2001 2:22 am
|
|
|
Synt - Thanks, I had a feeling this was the case, thanks for confirming it.
Steve - Thats the balanced\unbalanced thing finally clear in my head; you truly are a gentleman & a scholar.;D
Jon - Yeah I'm starting to really hate this mixer, we bought it for live use originally, and we needed one quickly, (and cheaply) after spending alot of money on an active speaker system for live gigs and an outboard effects unit. I'm starting to find, after trying out some differentvpreamps from my local music shark, that the preamps on the Behringher reeeeeally suck. Well, I was warned.
pAp.
Anyone want to buy a mixing desk ?
|
|
Heavens to Betsy
Location: USA
Posts: 508
|
Posted - Fri Sep 28, 2001 5:51 am
|
|
|
Quote: |
Does anybody want to know how balanced lines do their stuff, or are you all happy with that? (There might be a test...) |
Throw it down, Steve! Many of us are musicians who could use a good dose of scientific how-and-why.
So, then: with balanced cables, use of either +4dBu or -10dBV is inconsequential, save for fact that a good, balanced connection wouldn't have a problem driving a +4dBu?
Quote: |
If you begin to notice that the little squares are falling in between sample values (on the Y-axis) as opposed to being right on the line, this is telling you that you're in greater than 16-bit depth. --Synt |
True--very fun to investigate if you have a moment to zoom in.
|
|
Syntrillium M.D.
Location: USA
Posts: 5124
|
Posted - Fri Sep 28, 2001 9:03 am
|
|
|
Hi guys...well, this thread wound up pretty nice, huh!
And thanks SteveG for catching my typo! I fixed it, so I hope there isn't any confusion!!
Alright you guys, onto the next one!
Happy recording, everyone!
---Syntrillium, M.D.
_________________
|
|
|
|
AndyH
Posts: 1425
|
Posted - Fri Sep 28, 2001 5:00 pm
|
|
|
SteveG, or anyone who knows who can still stand to talk to me
If -10dBV is a standard wherein 1 volt RMS is 0dBV, is it the case that, when using the -10dBV inputs, 1 volt of audio signal will record at 0dB?
or do the VU meter readings relate to still another standard?
or is the question not meaningful because there are too many other variables?
|
|
jonrose
Location: USA
Posts: 2901
|
Posted - Fri Sep 28, 2001 5:54 pm
|
|
|
Geez, Andy - don't be so self-deprecating! You've just managed to type out a very concise question, after all...
;-)
I'll have to ask you for at least one more bit of information, though: What VU meters are you concerned with - the level meters within Cool Edit, or perhaps meters in an external device...?
Keep in mind that 1V placed across various input loads will also produce various results, especially if there are reactive components involved whose response will change with respect to frequency... meters are usually calibrated to some specified voltage level, input at a standard frequency or frequencies.
(An example: Let's say a tape deck's service manual requires a technician to calibrate the input metering circuits for each track to read 0dbVU with a 1kHz, 1V signal applied to the inputs of the deck - that's all fine and dandy for that deck, but this reading is dBVU, which can vary greatly among different machines!)
Also, remember that the level meters you're using on the computer are calibrated to a scale that is different from any that Steve mentioned above: 0dbFS (zero decibels, full-scale) is the absolute ceiling on this scale, so it doesn't equate with the other scales used for analog devices.
You probably didn't want to hear that either, did you?
;-)
Best... -Jon
_________________
 |
|
|
|
SteveG
Location: United Kingdom
Posts: 6695
|
Posted - Fri Sep 28, 2001 5:54 pm
|
|
|
Quote: |
Alright you guys, onto the next one!
Happy recording, everyone!
---Syntrillium, M.D. |
Wishful thinking, I'm afraid. And so to AndyH...
Quote: |
SteveG, or anyone who knows who can still stand to talk to me |
...assuming that I still will...
Quote: |
If -10dBV is a standard wherein 1 volt RMS is 0dBV, is it the case that, when using the -10dBV inputs, 1 volt of audio signal will record at 0dB?
or do the VU meter readings relate to still another standard?
or is the question not meaningful because there are too many other variables? |
The reason that it's a -10dBV input is because 200mV is what is supposedly required to achieve a max level undistorted signal through the device, any headroom spec notwithstanding. And yes, 0dB is the level that the VU (=Virtually Useless) meters will probably indicate with a steady tone at -10dBV applied to the input. Yes, it means something different - 0dB on the meter indicates the 'optimum' record level, perhaps...
But if you try to feed in a signal that's at 0dBV, you'll probably exceed the headroom anyway, as well as either physically or metaphorically 'bending the VU meter needles round the end-stops'.
What it all comes down to is that all these levels are slightly arbitrary in analog kit, unless you happen to know what 0dBFS is. You should get this one quite easily - it's the point where clipping occurs (Full Scale). So the reference level is really being stated as 0dBFS - dB(headroom). And even this isn't sufficient for analog tape, which doesn't clip, it just distorts more and more. Here, 0dB is usually stated as a tape flux density, or a level achievable for a given distortion figure, and even this varies with the bias settings. Don't go there unless you are very brave.
BTW, VU really stands for Volume Unit. Fine for steady lineup tone, but the specification won't let a VU meter read transients. For that, you need a peak-reading meter of some description, which will indicate accurately what's going on.
Some of the quirks of balanced lines and signals will be forthcoming when I've figured out a good way of explaining them without pictures.
Steve
Edited by - SteveG on 10/01/2001 02:54:57 AM
_________________
 |
|
|
|
SteveG
Location: United Kingdom
Posts: 6695
|
Posted - Fri Sep 28, 2001 5:56 pm
|
|
|
Whoops! didn't realise that jonrose had posted! Mind you, it all seems to tie up...
_________________
 |
|
|
|
jonrose
Location: USA
Posts: 2901
|
Posted - Fri Sep 28, 2001 5:59 pm
|
|
|
Only one second before you, it would seem...
We've managed to do that a few times, lately!
;-)
_________________
 |
|
|
|
AndyH
Posts: 1425
|
Posted - Sun Sep 30, 2001 6:17 pm
|
|
|
I was thinking primarily of the meters in the recording software, and how, or if, they relate to the sound card specs.
The sound card gets fed a signal. In the analogue world, amplifier and/or buffer circuits vary in the maximum signal level they can handle without clipping (comparing only circuits intended for the same purpose, e.g line level inputs; not comparing mc phono inputs with power amplifier inputs).
Sound card inputs are not analogue (or maybe there is some kind of analogue circuitry at the input, to control input impedance or otherwise buffer the digital circuitry?, but my imagination suggest there are also differences (one card vs another card) in their ability to handle input signals without clipping (or getting shut down, or doing whatever they do when they get too much signal).
The USB sound card I just returned (under hardware and sound cards) said 5 volts peak input. I don’t have the manual anymore, so I can’t be more specific. The card I obtained in exchange says +2dBV peak analogue input signal.
If the dBV scale is based on comparing to 1 volt RMS, and 1 volt (RMS?) is 0 dBV, then do we not have
20 log (1/1) = 0dBV?
if so, then
20 log (v/1) = 2dBV
log v = 2dBV/20
v = 1.2589
So unless I can’t remember simple algebra after too many years of little use, the soundcard manual says the peak input voltage is approx. 1.3 volts, although I am missing something if there is a significant reason for stating it as +2dVB instead of as 1.3 volts. I guess that means it can handle 1.3 volt signals, but if 200 mV coming in is going to produce 0dBFS in CoolEdit, any signal of 200 mV or more needs to be attenuated before getting ... where?
One can attenuate in the analogue circuitry preceding the sound card, or one can use the faders in the soundcard control panel (or the Windows “play control” mixer, as appropriate to the particular soundcard).
What do the later actually do? Do they act upon the soundcard, reducing the signal level somewhere between the input jacks and the output (the output, for a PCI sound card is the feed to the PCI bus, I think)? If that is so, it would seem to follow that there is some analogue circuitry in the soundcard input side, and that circuitry is (in part) some solid state volume control.
The reason I am trying to get this straight comes from the frequent statements that digital transforms, such as amplification or compression or limiting, etc, are best done in 32 bit mode. I believe I understand the computational reasons for this. If the soundcard faders are software that change the sample values, it should follow that they are best never used (i.e. always set at max), because they cannot partake of the larger word lengths, and will introduce greater distortion. If, however, the faders control analogue circuitry, they would probably be no more harmful than an amplifier’s volume control (assuming similar quality of components).
To further confuse things, on a related note, this soundcard can select for ‘consumer’ settings and ‘-10dBv’ settings on its analogue output side. Both are unbalanced only, both are fed through the same output jacks. ‘Consumer’ is said to be less sensitive and thus capable of handling the ‘hottest’ signals. On the specs page, peak output is listed as consumer setting = +2dBV and -10dBV setting = -4dBV.
My real reason for asking still has to do with transformations of the data:
The digital music recording has particular values for each sample. These samples get converted to an analogue signal in the D/A converter. If the resulting analogue signal can have two different amplitudes (for any given digital value) then either the D/A conversion process is different, depending on the setting, or there is an analogue amplifier (in the soundcard) following the D/A, and that amplifier uses different settings to produce the different signal amplitudes.
If the different signal levels are achieved via different D/A processing, then there is the possibility that the qualtity also differs and one might achieve better or worse results by virtue of choosing either the ‘consumer’ or the ‘-10dBV’ setting. Then there is also the same question as on the input side, to use, or not use, the output faders. (I realize that none of this output side stuff matters when one is writing to CDR)
So to restate the questions:
Where, and how do the input faders act?
Are the different output levels (in my card’s case, ‘consumer’ vs ‘-10dBV’) achieved in the D/A conversion, or in the analogue circuitry following D/A, or am I so far into fantasy land there is no point in trying to answer?
Can adjusting the output level via the software mixer panel effect the quality of the digital (which would in turn, directly and immediately, effect the analogue output)?
|
|
|
|
SteveG
Location: United Kingdom
Posts: 6695
|
Posted - Mon Oct 01, 2001 2:53 am
|
|
|
Well, I'll try...
Quote: |
I was thinking primarily of the meters in the recording software, and how, or if, they relate to the sound card specs.
The sound card gets fed a signal. In the analogue world, amplifier and/or buffer circuits vary in the maximum signal level they can handle without clipping (comparing only circuits intended for the same purpose, e.g line level inputs; not comparing mc phono inputs with power amplifier inputs).
Sound card inputs are not analogue (or maybe there is some kind of analogue circuitry at the input, to control input impedance or otherwise buffer the digital circuitry?, but my imagination suggest there are also differences (one card vs another card) in their ability to handle input signals without clipping (or getting shut down, or doing whatever they do when they get too much signal). |
Analog inputs on sound cards are analog! All the same restrictions apply in terms of overload, etc. You have to precondition the analog input before feeding it to the [t:106b3da6d5]lions[/t:106b3da6d5] A/D converter.
Quote: |
The USB sound card I just returned (under hardware and sound cards) said 5 volts peak input. I don’t have the manual anymore, so I can’t be more specific. The card I obtained in exchange says +2dBV peak analogue input signal.
If the dBV scale is based on comparing to 1 volt RMS, and 1 volt (RMS?) is 0 dBV, then do we not have
20 log (1/1) = 0dBV?
if so, then
20 log (v/1) = 2dBV
log v = 2dBV/20
v = 1.2589
So unless I can’t remember simple algebra after too many years of little use, the soundcard manual says the peak input voltage is approx. 1.3 volts, although I am missing something if there is a significant reason for stating it as +2dVB instead of as 1.3 volts. I guess that means it can handle 1.3 volt signals, but if 200 mV coming in is going to produce 0dBFS in CoolEdit, any signal of 200 mV or more needs to be attenuated before getting ... where? |
Well, your algebra's not wrong, but you are plainly unaware of the difference between RMS, peak, and (wait for it) peak-to-peak values. I really wish that manufacturers wouldn't do this! And in this particular case, I suspect that they've been very sloppy (like their power supply design?) and they don't mean this at all. According to the published spec on their website, your USB card has a -10dBV input, in other words, you will fully drive the A/D converter with 200mV, although I bet that's not true in practice. Now:
peak = the maximum value of either the positive or negative half of a cycle of a waveform.
peak-to-peak (P-P) = the max amplitude of the whole of the waveform
RMS = Root of the Mean value Squared. You can look up how it's derived, but for sine waves it is 1/(sq. rt 2), or 0.707 times the P-P value. The peak value of a sine wave is 1.414 times greater than the RMS value.
People only quote 'peak' values in order to confuse, or in ignorance. So, P-P 5V = 3.535V RMS
20 Log 3.535/1 = +10.95dB
Now if this means anything at all, it's probably indicating the point at which the soundcard analog input will clip. But according to the spec, you will have overloaded the A/D converter by then anyway.
Quote: |
One can attenuate in the analogue circuitry preceding the sound card, or one can use the faders in the soundcard control panel (or the Windows “play control” mixer, as appropriate to the particular soundcard).
What do the later actually do? Do they act upon the soundcard, reducing the signal level somewhere between the input jacks and the output (the output, for a PCI sound card is the feed to the PCI bus, I think)? If that is so, it would seem to follow that there is some analogue circuitry in the soundcard input side, and that circuitry is (in part) some solid state volume control.
The reason I am trying to get this straight comes from the frequent statements that digital transforms, such as amplification or compression or limiting, etc, are best done in 32 bit mode. I believe I understand the computational reasons for this. If the soundcard faders are software that change the sample values, it should follow that they are best never used (i.e. always set at max), because they cannot partake of the larger word lengths, and will introduce greater distortion. If, however, the faders control analogue circuitry, they would probably be no more harmful than an amplifier’s volume control (assuming similar quality of components). |
Judging by the zipper noise you get from moving the faders, and far more significantly, the fact that high quality VCAs are expensive, I think you'll find that volume changes are achieved by either changing the reference voltage of the A/D or by scaling the output values. Without looking it up, I presently have no more info about this to hand. But if you scale the reference voltage, you can retain the resolution of the conversion process and account for different input levels. If I was designing one of these from scratch, I'd certainly look carefully at a combination of this, and some fixed-value attenuators at the input. Muting of unwanted analog channels is much easier to do, so this could be done immediately pre-A/D. But this is all speculation. If I have time, I'll look it up.
Quote: |
To further confuse things, on a related note, this soundcard can select for ‘consumer’ settings and ‘-10dBv’ settings on its analogue output side. Both are unbalanced only, both are fed through the same output jacks. ‘Consumer’ is said to be less sensitive and thus capable of handling the ‘hottest’ signals. On the specs page, peak output is listed as consumer setting = +2dBV and -10dBV setting = -4dBV.
My real reason for asking still has to do with transformations of the data:
The digital music recording has particular values for each sample. These samples get converted to an analogue signal in the D/A converter. If the resulting analogue signal can have two different amplitudes (for any given digital value) then either the D/A conversion process is different, depending on the setting, or there is an analogue amplifier (in the soundcard) following the D/A, and that amplifier uses different settings to produce the different signal amplitudes.
If the different signal levels are achieved via different D/A processing, then there is the possibility that the qualtity also differs and one might achieve better or worse results by virtue of choosing either the ‘consumer’ or the ‘-10dBV’ setting. Then there is also the same question as on the input side, to use, or not use, the output faders. (I realize that none of this output side stuff matters when one is writing to CDR) |
If you think about it, you will realise from what I said before that there are other possibilities...
Quote: |
So to restate the questions:
Where, and how do the input faders act?
Are the different output levels (in my card’s case, ‘consumer’ vs ‘-10dBV’) achieved in the D/A conversion, or in the analogue circuitry following D/A, or am I so far into fantasy land there is no point in trying to answer? |
No, just not understanding the physics of A/D converters. You can do the same thing with D/A references as well...
Quote: |
Can adjusting the output level via the software mixer panel effect the quality of the digital (which would in turn, directly and immediately, effect the analogue output)? |
Depends what you feed it to. If it's done by reference scaling, each successive bit is just represented by a smaller or larger jump in the output voltage.
So, in conclusion, I'm pretty sure that CE's meters work on the basis that they read 0dB for the highest code you can digitise, so they are effectively using dBFS as a reference. What this means in analog terms is determined by the reference voltage on the A/D converter and any attenuation/amplification preceding it. These are not dBFS related, because they are voltages. None of the analog inputs I've ever seen are set up that precisely - that costs money, and isn't strictly neccessary. You get the level matching so it's in the right ball park and watch the meters. And try listening to the music occasionally. It will help you keep calm.
Steve
Edited by - SteveG on 10/01/2001 02:59:24 AM
_________________
 |
|
|
|
SteveG
Location: United Kingdom
Posts: 6695
|
|
AndyH
Posts: 1425
|
Posted - Tue Oct 02, 2001 3:35 am
|
|
|
SteveG, you said that one relatively inexpensive way for the soundcard faders to function is to adjust the reference voltage on the converter. What I am trying to get at is whether or not this is as good as, or better than, adjusting (decreasing) the signal in the exterior analogue circuitry.
In the exterior circuitry, the adjustment is generally called a volume control, and it is most frequently a variable resistence (e.g. potentiometer or stepped attenuators). In most analogue circuits there is very little distortion, and generally not a whole lot of noise, that can be attributed to the volume control. Therefore, for the soundcard faders to be as good, they would have to produce as little distortion and noise.
Perhaps that is what you mean, that the converter circuitry is generally linear enough over its working range of reference voltages that it does not matter if one adjusts the voltage (to adjust the volume). You say, for instance, “If it's done by reference scaling, each successive bit is just represented by a smaller or larger jump in the output voltage” but I am really not sure if you are saying ‘that is as good a way to do it (as good as the analogue volume control)’ or just saying ‘that is a way to do it’.
My soundcard’s spec page says
peak analog input signal: +2dBV
I need to complete my calculation from above by multiplying the 1.2589 volts by the 1.414 conversion factor to get a peak signal voltage of 1.780 volts. This is in-between the -10dBV (200 millivolts) standard and the +4dBu (1.224 volts) standard, but clearly closer to the later.
I don’t see anything is the (manufacturer revealed) specifications that would let me guess whether this is the voltage that will produce a maximum digital value of 16777216 (this card claims to have 24 bit converters) or it is something else, like maybe the maximum voltage before the sound card dies and needs to be buried. However, I can’t think of any logical reason the manufactures would bother to give any value other than the maximum signal value that can be correctly converted to digital.
However, this is not a professional card with balanced inputs, so the most reasonable expectation is that it is designed to work with most equiptment that might feed it, like HiFi preamps. If those are built around the -10dBV standard .... I don’t think all this has led to any greater insight into my card, even though your explanations of the standards seem understandable in a more general venue.
I did, by the way, start reading some of the material at the site you mentioned, but There are an awful lot of branches and topics there. I did not, so far, run into anything that is relevant to this topic.
|
|
|
|
Jim Smitherman
Posts: 352
|
Posted - Sun Dec 02, 2001 10:00 pm
|
|
|
Quote: |
Regarding WDM...Uh, again, that guy was pretty off the mark. Actually, with many apps out there (except Sonar) you WILL have 16-bit limitation in recording using WDM drivers...It actually references this in an FAQ on M-Audio's Delta site.
---Syntrillium, M.D. |
Sonar is Big Bucks apps, would be nice to have, but N-tracks is small bucks, and it supports win2k 24 bit WDM/ASIO (as well as midi, though alas, no notation features), and in fact, lets you use or abuse any driver you have on your system. SO . . . ..
the Big Question: will cool2000 upgrade anytime soon to support WDM, and 24 bit under win2k/XP? Isn't this sort of unavoidable if you want to keep your client base? Does working in 32 bit mode (playing back at 16) get around this limitation, at least for editing purposes? I fear i'm using cool less now that I'm running win2k (and, yes, 9x is THAT bad. good riddance) and I've used cooledit since the windows 3.1 days.
Jim
|
|
Syntrillium M.D.
Location: USA
Posts: 5124
|
Posted - Mon Dec 03, 2001 2:31 pm
|
|
|
Well Jim, I can't really comment on N-tracks.
I can tell you that we're very aware of the WDM issues facing the PC-audio world, and will certainly keep you posted as things develop.
---Syntrillium, M.D.
_________________
|
|
|
|
Amrad
Location: United Kingdom
Posts: 10
|
Posted - Fri Jun 13, 2003 7:03 pm
|
|
|
Hello Steve,
I trust you won't mind if I draw to your attention an error in your explanation of the properties of a sine wave! <g>
Quote: |
Well, your algebra's not wrong, but you are plainly unaware of the difference between RMS, peak, and (wait for it) peak-to-peak values. I really wish that manufacturers wouldn't do this! And in this particular case, I suspect that they've been very sloppy (like their power supply design?) and they don't mean this at all. According to the published spec on their website, your USB card has a -10dBV input, in other words, you will fully drive the A/D converter with 200mV, although I bet that's not true in practice. Now:
peak = the maximum value of either the positive or negative half of a cycle of a waveform.
peak-to-peak (P-P) = the max amplitude of the whole of the waveform
RMS = Root of the Mean value Squared. You can look up how it's derived, but for sine waves it is 1/(sq. rt 2), or 0.707 times the P-P value. The peak value of a sine wave is 1.414 times greater than the RMS value. |
The RMS value of a sine wave is 0.707 times the peak value, not the peak-to-peak value, as you claim! I realise that this may just have been a typing error, but, for the sake of accuracy, I think it should be pointed out! The RMS value is equal to the p-p value divided by 2.828, or multiplied by 0.3535 (= 0.707/2).
Regards,
Dave.
|
|
Amrad
Location: United Kingdom
Posts: 10
|
Posted - Fri Jun 13, 2003 9:22 pm
|
|
|
Hello Steve,
I'm somewhat puzzled by some of the figures you qoute as they don't agree with mine! <g>
According to my calculations, -10dBV = 316mV, not the 200mV you say it represents, and I calculate 200mV as being -14dBV, as follows:
20 Log 0.316/1 = 10dBV
and
20 Log 0.2/1 = 14dBV
Also, the ratio of the voltage at +4dBu (=1.228V) to that at -10dBV (=0.316V) gives a reduction in decibels of:
20 Log 0.316/1.228 = -11.79dB
That you arrived at the correct figure was due to the fact that you used the incorrect -10dBV voltage of 0.2V and compared it with the voltage at 0dBu instead of the voltage at +4dBu. Had you compared 0.2V with 1.228V this would have given:
20 Log 0.2/1.228 = -15.76dB
which is incorrect!
Using the correct -10dBV voltage of 0.316V and the 0dBu voltage of 0.775 gives a reduction in decibels of:
20 Log 0.316/0.775 = -7.79dBu
I'd be interested in hearing your comments on the above.
Regards,
Dave.
SteveG wrote: |
Common Units
There's dBV, which is the level compared to 1 Volt RMS, where 1 Volt is 0dBV.
There's dBu, which is the level compared to 0.775V RMS with an unloaded (o/c) output (u=unloaded) where 0.775 Volts is 0dBu.
There's dBm, which actually represents a power level, compared to 1mW across 600ohms, which just happens to be 0.775 Volts again, so 0.775 Volts across 600ohms = 0dBm.
Now wouldn't you know it. We're talking +4dBu and -10dBV. Yes, there are some conversion factors we can use (with care), like this one: 1dBV = +2.2dBu
So +4dBu = 1.224V, but going down to -10dBV is a difference of 11.8dB and not 14dB! This means that if you do a little logarithmic calculating for dBu, you get (and I'm not working it all backwards, it's too complicated)
20 Log 0.2/0.775 = -11.76dBu, which is near enough.
So the answer to your question is that -10dBV systems (the 'domestic' standard) have a notional max. output of 200mV and +4dBu systems (the 'professional' standard) would produce an output of 1.224V for the same signal.
So in English, a +4dBu input needs lots more signal to drive it than a -10dBV one, and a +4dBu output will provide a lot more output level than a -10dBV one. And if you are not confused by now, you haven't been paying attention...
Well, you did ask!
Steve |
|
|
|
|
ozpeter
Location: Australia
Posts: 3200
|
Posted - Sat Jun 14, 2003 6:13 am
|
|
|
If there is anything amiss here (and I wouldn't have a clue) it's gone unchallenged for 18 months!
- Ozpeter
|
|
Amrad
Location: United Kingdom
Posts: 10
|
Posted - Mon Jun 16, 2003 7:01 pm
|
|
|
Hello Ozpeter,
I think you've answered your own question there! <g>
It may be that anyone who read it simply took the figures as correct and didn't bother checking them or, if they did, they didn't bother commenting on them or, like yourself, they "wouldn't have a clue"!
I'm surprised, though, that Steve hasn't responded!
ozpeter wrote: |
If there is anything amiss here (and I wouldn't have a clue) it's gone unchallenged for 18 months!
- Ozpeter |
|
|
|
Topic
|