This is the original question and ensuing discussion, but edited somewhat so that just the pertinent information remains:
The mastering house that I'm using for a recent album I engineered with CEP2 is requesting 24-bit files. All of my files are saved as 16-bit.
It appears that Audition/CEP2 does not have an option to save or convert from 16- or 32-bit to 24-bit. Am I missing something, or is there some way to achieve this?
Do a 'Save as', still as a wav file, and click on the 'options' box. Then you'll have the option to save them as a 24-bit packed integer file - which is presumably what they want. Although why they can't do the conversion themselves is a bit of a mystery...
One thing, though - you can only do this if you have a 32-bit FP file in the first place. But since you've mixed in CEP/AA, this will presumably be available to you quite easily - otherwise you'll have to reconvert your 16-bit files...
Is a 24-bit packed integer file the same as a file saved as 24-bit with Pro Tools?
A. Not quite... the ProTools file is a Mac-generated .aiff file made to a standard called SDII (Sound Designer II), part of one of the OMF 'standards'. Now, I believe that there may be translators available, but I've never had to use one, so I don't know what really happens if you do - or even whether one actually exists.
Ensuing discussion:
Is this what your mastering house wants? A ProTools file?
Nah, they just want a 24-bit mixdown. I was just wondering if saving as '24-bit packed integer' with Audition was the same as a standard 24-bit file -- for instance, an .aif file generated by Pro Tools or any other multitrack software that supports 24-bit.
My DAC is 24-bit compatible, but Audition is not; so I've been mixing and recording as 16-bit. It's always kind of bugged me, but this is the first time it's an issue with a project.
It shouldn't bug you - because Audition works internally in what is effectively 24-bit floating point. It's called 32-bit because the format is a 23-bit mantissa plus a sign bit and an 8-bit exponent, but the precision is effectively 24-bit. The reason for working like this is to retain resolution whenever any amplitude-related operation is carried out on the audio - which it can do over about a 1500dB range. If Audition worked in 24-bit integer mode, that range would be restricted to about 144dB. So mixing in 16-bit is actually going to make your product sound worse than it would if you used Audition the way it was designed to be used - ideally you'd track and mix in 32-bit and then do a single, final conversion to whatever format you need to deliver.
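Those dynamic-range figures are easy to sanity-check, assuming the standard IEEE 754 single-precision layout - a quick sketch, not anything specific to Audition's internals:

```python
import math

# An n-bit integer format spans 20*log10(2^n) dB of dynamic range,
# i.e. roughly 6.02 dB per bit.
for bits in (16, 24):
    print(f"{bits}-bit integer: {20 * math.log10(2 ** bits):.1f} dB")

# 32-bit IEEE floats: normal numbers run from about 2^-126 up to
# nearly 2^128, which gives the ~1500 dB range quoted above.
dr_float = 20 * math.log10(2.0 ** 128 / 2.0 ** -126)
print(f"32-bit float: {dr_float:.0f} dB")
```

The 16-bit and 24-bit figures come out at about 96dB and 144dB respectively, and the float range at just over 1500dB.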
If you mix in 16-bit, which is an integer format, then there's a very good chance that you've reduced your effective resolution to rather less than 16-bit - because every level drop of 6dB will lose you a bit, and if you save and then boost that level back again, you will retain the resolution loss. This can noticeably screw up things like reverb tails, etc.
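The bit-loss argument can be sketched numerically - this is an illustrative Python/NumPy example with a made-up sine signal, not anything from the actual session:

```python
import numpy as np

# Attenuate, store as 16-bit integer, then boost back - versus doing
# the same round trip entirely in floating point.
sr = 44100
t = np.arange(sr) / sr
signal = 0.9 * np.sin(2 * np.pi * 440 * t)

def to_int16(x):
    # Quantise to 16-bit integer, as saving a 16-bit file would
    return np.round(x * 32767).astype(np.int16)

drop = 10 ** (-36 / 20)  # a 36 dB drop: six 6-dB steps, so ~6 bits lost

# Integer path: quantisation happens at the low level, and the error
# is amplified when the level is restored afterwards.
restored_int = (to_int16(signal * drop).astype(np.float64) / 32767) / drop

# Floating-point path: the same gain round trip is essentially lossless.
restored_float = (signal * drop) / drop

err_int = np.max(np.abs(restored_int - signal))
err_float = np.max(np.abs(restored_float - signal))
print(f"16-bit integer round trip error: {err_int:.6f}")
print(f"floating-point round trip error: {err_float:.2e}")
```

The integer path leaves an audible-scale error (roughly the size of a 10-bit quantisation step), while the floating-point path's error is down at machine rounding level.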
Your DAC is operating in 24-bit integer mode because it has to - it's a hardware device. Audition has no such limitation, and is accordingly engineered to take the best advantage possible from it. And there's no such thing as a 'standard' 24-bit file anyway - there are several formats, and everybody uses their own 'standard'. That's why there are translators available. Mind you, I'm not saying that it wouldn't be a good idea to have a proper standard for audio file transfer - but if there was one, nobody would accept the ProTools 24-bit integer one anyway, because it's too limiting. The IEEE 32-bit standard that Audition uses would be a far better bet.
Ok, so if I went back to my multitrack session that was recorded 16-bit, would it help to do the mixdowns over in 32-bit and save the stereo track as 24-bit? Or is it too late -- have I already screwed up by tracking 16-bit?
As long as you have not added any effects to your tracks, there should be no difference between converting your files from 16-bit to 32-bit and recording them as 32-bit from the beginning - unless your recording device supports incoming resolutions higher than 16-bit. I have an Extigy which has internal 24-bit processing but (and I had to research this) only inputs and outputs 16-bit, so I'm recording into the device in 16-bit whether I like it or not; I just let Audition/CEP do its floating-point conversion on the fly.
I guess I'll just have to reload all the sessions and save mixdowns as 32-bit, THEN save them as 24-bit from Edit View.
This is kind of a hassle, since the whole thing is arranged, sequenced, scrubbed, and ready for mastering - but c'est la vie, eh?
Redoing the mixdown in 32-bit should be fine - you end up with a 32-bit master that you can then convert to whatever format the mastering lab wants. There are a few other issues about operating in 32-bit mode, but they're all good things, not bad ones. Probably the most important is that as long as it sounds good, you really don't have to worry about the mixdown level, even if it's way undermodded or even slightly overloaded, because normalising the FP mixdown will restore everything to the correct peak levels without any loss of resolution at all - this is the magic of floating point, and something you just can't achieve with a 24-bit integer system. Mind you, an overloaded mix doesn't usually sound too good when you monitor it, because at that stage the soundcard can't cope with it - which is ultimately why the mixdown needs to be normalised so that the peaks don't exceed 0dBFS, or sit slightly below. That way it converts back to integer levels the card can cope with.
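The normalisation point can be illustrated with a little floating-point arithmetic - a sketch with made-up numbers, not Audition's actual code:

```python
import numpy as np

# A stand-in "mixdown" that peaks past 0 dBFS in integer terms
# but is perfectly representable in floating point.
rng = np.random.default_rng(0)
mix = rng.uniform(-0.8, 0.8, 1000)
mix[500] = 1.4  # ~+2.9 dBFS - this sample would clip a 24-bit integer file

# Normalise the float mixdown: scale so the peak sits exactly at 0 dBFS.
peak = np.max(np.abs(mix))
normalised = mix / peak

# No resolution is lost - scaling back recovers the original (to within
# float rounding), which an integer format cannot do once it has clipped.
recovered = normalised * peak
print(np.max(np.abs(recovered - mix)))
```

The residual after the round trip is down at machine-epsilon level, and the normalised peak lands exactly at full scale.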
Do you mean to say that I should convert each of the individual 16-bit tracks of any given session to 32-bit and THEN load them into the session, and then use 32-bit from then on;
Or, would it suffice to load the 16-bit tracks into the session (with all settings now at 32-bit) and then save the mixdown at 32-bit and then use 32-bit from then on?
As long as no editing is to be done to the 16-bit source files, and all processing and mixing of them is done in AA's multitrack window, and the mixdown file is 32-bit, there seems to be no difference whether a 16-bit source file is converted to 32-bit or not. AA's internal processing is all 32-bit, so the "conversion" as well as any alteration is done there, and retained in the final output.
I placed a 16-bit source file in a Multitrack View track twice: once at full volume, the other lowered 72dB (it doesn't matter whether you alter the track volume or the block volume, as the result is the same). After mixdown I raised the soft portion of the file 72dB and invert-pasted it over the full-volume portion. The cancellation is perfect. This is true both with Full Reverb applied to the track(s) and with it dry. If converting to 32-bit were necessary, I'd expect a great deal of quantization error to be introduced by the -72dB block/track, but there is none.
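That null test can be sketched in code - a rough equivalent using 32-bit floats, with a synthetic signal standing in for the real 16-bit file:

```python
import numpy as np

# 'source' plays the role of the 16-bit file's samples, promoted to
# 32-bit float the way the multitrack engine would hold them.
source = (np.round(np.sin(np.linspace(0, 20, 2048)) * 32767)
          .astype(np.int16).astype(np.float32) / np.float32(32767))

gain = np.float32(10 ** (-72 / 20))  # the -72 dB track/block volume
soft = source * gain                 # the attenuated copy in the mixdown
restored = soft / gain               # boosted back up 72 dB afterwards

# Invert-pasting over the full-volume copy leaves only float rounding
# error - far below one 16-bit LSB, i.e. a 'perfect' null in practice.
residual = np.max(np.abs(restored - source))
print(residual)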
I have many a session with both 16-bit and 32-bit files. As long as the sample rate is the same, AA doesn't care.
Thanks again for all the help. This'll be a lot easier next project now. I'm kind of glad that the mastering house made a fuss about it. Incidentally, they said that they cannot work with 32-bit at all on their end -- that 24-bit is the pro standard. Strange.
It may be their pro standard - and it does rather imply that they are using ProTools. You may yet need some conversion SW - but this is still only going to achieve what you want from a 32-bit master. And you might like to ask them why they are using such a poor system from this POV. I think that a snort of contempt would be in order! I'd like to see a bit more of a ProTools backlash...
Also, you might like to bear in mind that there are mastering apps like iZotope's Ozone around that work within AA - and they are Floating Point apps. In the particular case of Ozone, it works internally at 64-bit resolution, and gives exceedingly fine results.
Now, these 'professional' 24-bit systems your mastering house has got are only working at the same audio resolution that a consumer DVD can manage - I think that you have good reason for questioning what the mastering house is doing in this day and age if that's all they can manage... AA can do rather better than that!
I converted some 16-bit PCM .wav files to 32-bit AIFF files, and it seemed to work like a charm. I then opened one of the converted AIFF files in Quicktime and viewed its properties. Quicktime said that it was a 16-bit file with 24-bit compression. This further confuses the situation, it seems.
Not really. This is the problem with trying to convert in this direction (16>32) rather than the other (32>anything you want). You've started with an integer file, and the converter knows this! It may sound daft, but try converting to 32-bit first, and then to AIFF...
And a bit about ProTools and their integer-based format:
--which, if you look at the DSP core overview, says it has a "48-bit precision mix bus providing nearly 300 dB of internal dynamic range." That's fine, but there seems to be no evidence (even with the upcoming PT 6.4 software release) that 32-bit float is available as an import (or export) option.
That's not surprising - because I don't think it is. 48-bit precision mixing with 300dB of dynamic range implies that this is integer-based, not floating-point, arithmetic. The actual dynamic range this gives is 289dB, which is presumably what they mean by 'nearly 300dB'.
Now, compare this to Audition. 32-bit floating-point internal processing gives, as I pointed out earlier, nearly 1500dB of internal dynamic range. The Pro Tools system uses inherently dated architecture - they haven't even bothered to use sensible, up-to-date DSP as far as I can tell. I know I'm biased, but their product really isn't up to much these days.
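For what it's worth, the arithmetic behind the 289dB figure checks out:

```python
import math

# A 48-bit integer mix bus spans 2^48 amplitude steps,
# i.e. 20*log10(2^48) dB of dynamic range.
dr_48 = 20 * math.log10(2 ** 48)
print(f"48-bit integer bus: {dr_48:.0f} dB")  # 289 dB - 'nearly 300'
```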