Bouncing in 32-bit WAV

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

07 Jan 2022

Billy+ wrote:
07 Jan 2022
I always assumed that the export limit was due to the audio interface limits?

when I'm exporting without my dedicated interface switched on the export options is 16 bit
if I turn on my interface I can export at 24 bit or 16 bit....

I generally export at 24-bit from Reason at around -6 dB and then use Ozone standalone to add the extra polish/volume (mastering), as I always find that even if I'm not clipping in Reason, the exported files nearly always are, but I'm under the understanding that this is due to the conversion from the 64-bit DAW to 24-bit audio files.
That's very likely the origin of the 16-/24-bit export limitation: they're writing to disk what would be written to the audio interface. But the fact that you can get access to the 32-bit data with a normalized bounce just irritates me. Why don't they go one step further, put a RIFF header on that data, and make it a WAV, rather than turning it down and making it 24-bit?
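
For illustration, here is a minimal sketch of what "putting a RIFF header on that data" involves: wrapping raw 32-bit float samples in a RIFF/WAVE container with format code 3 (IEEE float). The helper name write_float32_wav and the use of Python's struct module are mine, not anything from Reason.

```python
import struct

def write_float32_wav(path, samples, sample_rate=44100, channels=2):
    """Wrap interleaved 32-bit float samples in a minimal RIFF/WAVE container.

    Format code 3 = IEEE float. A 'fact' chunk (recommended for non-PCM data)
    is omitted here to keep the sketch short.
    """
    data = struct.pack("<%df" % len(samples), *samples)
    block_align = channels * 4                       # bytes per sample frame
    byte_rate = sample_rate * block_align
    fmt = struct.pack("<HHIIHH", 3, channels, sample_rate,
                      byte_rate, block_align, 32)
    body = (b"WAVE"
            + b"fmt " + struct.pack("<I", len(fmt)) + fmt
            + b"data" + struct.pack("<I", len(data)) + data)
    with open(path, "wb") as f:
        f.write(b"RIFF" + struct.pack("<I", len(body)) + body)
```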

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

07 Jan 2022

selig wrote:
07 Jan 2022
integerpoet wrote:
07 Jan 2022
My understanding is that parts of the mixer internal to Reason work with 32-bit floating point samples and others work with 64-bit floating point samples. So the quality is "in there" and it's just a matter of how to surface it for the user.
No, the 'quality' isn't in there. 32 bit isn't higher quality on any level, it just gives more dynamic range.
Of course, I can't argue exporting anything more than -144 dBFS (24-bit integers) would actually sound better.

All I was really getting at was that 64-bit floating point samples are already rattling around inside Reason.

So it's not as if they'd need to do a pervasive refactoring of internal signal paths to be able to export that.
The only real implication is mastering engineers don't have to worry about those folks who don't keep levels under 0 dBFS for whatever reasons.
"Real" isn't exactly an audio engineering word, so you can't be wrong, but Neil Young might try to argue otherwise. :-)
Last edited by integerpoet on 07 Jan 2022, edited 4 times in total.

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

I thought you couldn't clip on mix channels in Reason. Also, if I want louder mixes my settings need to be 32bit or 64bit? Is 64bit worth the file size?



orthodox wrote:
07 Jan 2022
selig wrote:
07 Jan 2022
No, the 'quality' isn't in there. 32 bit isn't higher quality on any level, it just gives more dynamic range. The only real implication is mastering engineers don't have to worry about those folks who don't keep levels under 0 dBFS for whatever reasons.
I am those folks, and I find it ridiculous that I have to keep channel(!) peaks under 0 dBFS. Peaks are something I can deal with later on the master channel. I'm typically exporting stems for external mixing, and I hate having to adjust the channel faders to a sane state twice: first in Reason, and then in the other DAW after they are normalized on the bounce from Reason.

User avatar
orthodox
RE Developer
Posts: 2286
Joined: 22 Jan 2015
Location: 55°09'24.5"N 37°27'41.4"E

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
I thought you couldn't clip on mix channels in Reason. Also, if I want louder mixes my settings need to be 32bit or 64bit? Is 64bit worth the file size?
You couldn't, until you need to bounce them to individual files from Reason.

I have never used 64 bits; maybe some day in my restoration work. I don't see any other use for it.

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

07 Jan 2022

integerpoet wrote:
07 Jan 2022
All I was really getting at was that 64-bit floating point samples are already rattling around inside Reason.
Oh, I meant to provide some details about this too. Reason's mixer does do 64-bit summing, but those bits are only available internally to the mixer. In theory, since the master channel is part of the mixer, the final output could be written to disk as a 64-bit float. In actual practice, truncating a 64-bit float to a 32-bit float is not going to make any audible difference. It might only be marginally useful to do the 64-bit math when mixing a very quiet channel with a very loud one, but we're talking a difference in level on the order of >96 dB. I've only written one bit of DSP code that ever required 64-bit floats. It was for a very slow envelope generator. It needed to decay from 100% (1.00) in such small amounts that the decrement was smaller than the step size of a 32-bit float (when storing values near 1.0), so subtracting the very small value resulted in the same starting value. In theory the same thing could happen when adding two audio signals together, but that means one signal is so quiet it's not going to be audible against the louder one anyway.
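
A quick sketch of that step-size problem, using NumPy's float32/float64 types for illustration (the envelope level and decrement values here are made up):

```python
import numpy as np

level32 = np.float32(1.0)
level64 = np.float64(1.0)
decrement = 1e-9          # smaller than the float32 step size near 1.0 (~1.19e-7)

level32 -= np.float32(decrement)
level64 -= decrement

print(level32 == np.float32(1.0))   # True  -> the 32-bit envelope never moves
print(level64 == 1.0)               # False -> the 64-bit envelope decays as intended
```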

Everywhere else inside of Reason, anywhere you see an audio cable, the values are being passed as 32-bit floats. That applies to any RE or VST. While the VST spec allows for 64-bit floats, there's nothing in the RE SDK that would let an RE receive or send anything other than 32-bit audio. The RE is welcome to do its internal processing in 64-bit, but there's no way to get those results back into Reason.

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

07 Jan 2022

ScuzzyEye wrote:
07 Jan 2022
Everywhere else inside of Reason, anywhere you see an audio cable, the values are being passed as 32-bit floats.
Sounds perfect for the current title of this thread. :-)

User avatar
Billy+
Posts: 4166
Joined: 09 Dec 2016

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
I thought you couldn't clip on mix channels in Reason. Also, if I want louder mixes my settings need to be 32bit or 64bit? Is 64bit worth the file size?
You technically don't, but each time the bit depth is reduced you are reducing the available space, and eventually a signal that wasn't clipping will start to clip.

You won't ever get louder than 0 without clipping, no matter the bit depth. 16-bit, 32-bit, and 64-bit all still clip above 0; the bit depth has nothing to do with the loudness limit.

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
Also, if I want louder mixes my settings need to be 32bit or 64bit?
Perhaps counter-intuitively, louder mixes require fewer bits. Bit depth is sometimes equated with dynamic range, but what it's actually measuring is the ratio between the quietest sound and the loudest sound. Each bit doubles the range of values that can be stored, and a doubling of amplitude is measured as an increase of 6 dB. If you want to be able to record a signal that's -6 dB and also represent one that's -12 dB in the same format, you'll need 2 bits. CDs use 16 bits, so they can represent a 96 dB signal-to-noise ratio. Said another way, you can store a sound that's 96 dB quieter than the loudest signal before it falls below the noise floor. Said one more way, 24-bit values can store signals down to 144 dB below full scale. That's why the loudest peak is called -0 dB: it's 0 below the loudest storable value, and then we measure how much lower you can go before the signal disappears. More bits let you go quieter, not louder.
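
The roughly-6-dB-per-bit figure is easy to check; a quick sketch in plain Python, nothing Reason-specific:

```python
import math

for bits in (1, 16, 24, 32):
    dynamic_range = 20 * math.log10(2 ** bits)   # ratio between loudest value and one step
    print(f"{bits:2d} bits -> {dynamic_range:6.1f} dB")

# 16 bits ->  96.3 dB (CD), 24 bits -> 144.5 dB
```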

User avatar
orthodox
RE Developer
Posts: 2286
Joined: 22 Jan 2015
Location: 55°09'24.5"N 37°27'41.4"E

07 Jan 2022

Billy+ wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
I thought you couldn't clip on mix channels in Reason. Also, if I want louder mixes my settings need to be 32bit or 64bit? Is 64bit worth the file size?
You technically don't, but each time the bit depth is reduced you are reducing the available space, and eventually a signal that wasn't clipping will start to clip.
I don't understand that. Why is the bit depth reduced, from what?

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

In theory if it's not clipping I can turn it up and in essence have a louder mix.

Billy+ wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
I thought you couldn't clip on mix channels in Reason. Also, if I want louder mixes my settings need to be 32bit or 64bit? Is 64bit worth the file size?
You technically don't, but each time the bit depth is reduced you are reducing the available space, and eventually a signal that wasn't clipping will start to clip.

You won't ever get louder than 0 without clipping, no matter the bit depth. 16-bit, 32-bit, and 64-bit all still clip above 0; the bit depth has nothing to do with the loudness limit.

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

But lower bit depth is lower dynamic range and easier to clip. I'm confused now.


ScuzzyEye wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
Also, if I want louder mixes my settings need to be 32bit or 64bit?
Perhaps counter-intuitively, louder mixes require fewer bits. Bit depth is sometimes equated with dynamic range, but what it's actually measuring is the ratio between the quietest sound and the loudest sound. Each bit doubles the range of values that can be stored, and a doubling of amplitude is measured as an increase of 6 dB. If you want to be able to record a signal that's -6 dB and also represent one that's -12 dB in the same format, you'll need 2 bits. CDs use 16 bits, so they can represent a 96 dB signal-to-noise ratio. Said another way, you can store a sound that's 96 dB quieter than the loudest signal before it falls below the noise floor. Said one more way, 24-bit values can store signals down to 144 dB below full scale. That's why the loudest peak is called -0 dB: it's 0 below the loudest storable value, and then we measure how much lower you can go before the signal disappears. More bits let you go quieter, not louder.

User avatar
orthodox
RE Developer
Posts: 2286
Joined: 22 Jan 2015
Location: 55°09'24.5"N 37°27'41.4"E

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
In theory if it's not clipping I can turn it up and in essence have a louder mix.
The mix will eventually be sent to the sound card or exported, where it will clip.

User avatar
Billy+
Posts: 4166
Joined: 09 Dec 2016

07 Jan 2022

orthodox wrote:
07 Jan 2022
Billy+ wrote:
07 Jan 2022


You technically don't, but each time the bit depth is reduced you are reducing the available space, and eventually a signal that wasn't clipping will start to clip.
I don't understand that. Why is the bit depth reduced, from what?
Um, text-based discussion can be very difficult sometimes......

The DAW is 64-bit, but when you export to a file at 24-bit the bit depth is reduced;
if you then take the file and convert it to a low-quality MP3 the bit depth is reduced even further.

That is my simple understanding of bit depth reduction.

So exporting a 0 dB file from Reason at 24-bit and slamming it into a low-quality MP3 could potentially make the audio clip..

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

What's the point then of using 32-bit floating point to avoid clipping in the mix, only to clip at export??

orthodox wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
In theory if it's not clipping I can turn it up and in essence have a louder mix.
The mix will eventually be sent to the sound card or exported, where it will clip.

User avatar
orthodox
RE Developer
Posts: 2286
Joined: 22 Jan 2015
Location: 55°09'24.5"N 37°27'41.4"E

07 Jan 2022

Billy+ wrote:
07 Jan 2022
So exporting a 0 dB file from Reason at 24-bit and slamming it into a low-quality MP3 could potentially make the audio clip..
Ok now I understand the process, but it won't start to clip from that.

User avatar
orthodox
RE Developer
Posts: 2286
Joined: 22 Jan 2015
Location: 55°09'24.5"N 37°27'41.4"E

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
What's the point then of using 32-bit floating point to avoid clipping in the mix, only to clip at export??
I can suppress the peaks in the Master Section, with Maximizer or just by reducing the mix volume with Master Fader.

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

You can do the same at 24-bit. What advantages do you get with 32-bit float if we still have to reduce levels on the master?


orthodox wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
What's the point then of using 32-bit floating point to avoid clipping in the mix, only to clip at export??
I can suppress the peaks in the Master Section, with Maximizer or just by reducing the mix volume with Master Fader.

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
But lower bit depth is lower dynamic range and easier to clip. I'm confused now.
The loudest signal you can store is when all bits are 1. That will be where -0 dBFS is. How much quieter you can go below that and still store a recognizable signal is where you need more bits. If you just want to make the loudest possible square wave, you only need 1 bit: bit turns on, speaker gets pushed out all the way; bit turns off, speaker gets pulled in all the way. If you want to describe softer signals, you need more bits.

It's when you do math on the signal and get a result that exceeds the maximum value that you lose information. The normal way to handle that is to just store it as the loudest possible value. When you look at a graphical representation of that, the graph is clipped off. More bits won't let you go higher; they just give you more precision for the smaller values you can store.

Where some confusion might creep in is the difference between using integer values, that is, counting numbers (0, 1, 2, 3, etc.), and floating point values, which are fractions (0.0, 0.1, 0.2, 0.3, etc.). Early on in this thread I started going a little deep on how floating point values are stored in memory. A 32-bit floating point number has the same step size as a 24-bit integer. You can count to 16 million and represent every value along the way using 24-bit integers. You can also do the same with 32-bit floating point. What if you wanted to count to 32 million using just 24 bits? You'd have to count by 2s: 0, 2, 4, etc., and then remember that whatever value you're storing is actually 2 times that. That's basically what the extra 8 bits in a 32-bit floating point number do. They say: look at this number in these 24 bits, and multiply or divide it by this other value to get the real number. With 8 bits you get 256 different ranges of values you can work with, but each range still only has 16 million steps. The step size just gets scaled, from very small to very large.
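
A small sketch of that scaling behaviour, using NumPy's spacing() to show the gap between adjacent 32-bit float values (the sample magnitudes are arbitrary):

```python
import numpy as np

# The 24-bit significand gives ~16.7 million steps per range; the 8-bit exponent
# rescales that grid, so the step size grows with the magnitude being stored.
for value in (0.001, 1.0, 2.0, 1000.0, 1e6):
    print(value, np.spacing(np.float32(value)))

# 1.0 -> ~1.19e-07, 2.0 -> ~2.38e-07, 1e6 -> 0.0625
```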

Normally audio, when working in floating point, uses the range -1.0 to +1.0 for the peaks and troughs of the waveform. Again, there will be 16 million evenly spaced steps in that range. What floating point allows is for the results of signal-processing math that would normally set all the bits of an integer to 1 to still be stored and still mean something.

You can't send that audio to any real-world hardware. DACs (Digital-to-Analog Converters) work with integer values. So the DAW converts -1.0 to 0 and +1.0 to 16 million. Anything below -1.0 is still 0, and anything above +1.0 is still 16 million. That is, it gets clipped. So you can do math turning a signal up really loud in one channel, and then more math turning it down again in the master, and the floating point values won't lose anything; but when you want to hear it, you need to make sure it's in the range where the hardware can process it. That being: all bits set to 1 is the loudest, and anything less than that is quieter. More bits let you have quieter signals, but the loudest you can ever go is setting all of them to 1, and that's always fully loud.
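
For illustration, a sketch of that conversion step, roughly what happens on the way to integer PCM; the helper name float_to_int24 and the exact signed scaling are my assumptions, not Reason's actual code:

```python
import numpy as np

def float_to_int24(samples):
    """Clip floats to [-1.0, +1.0], then scale to signed 24-bit integer codes."""
    clipped = np.clip(samples, -1.0, 1.0)                  # anything hotter is flattened here
    return np.round(clipped * 8388607).astype(np.int32)    # 2**23 - 1

mix = np.array([0.5, 1.7, -2.0])      # the hot samples are fine as floats...
print(float_to_int24(mix))            # ...but clip on the way to integer PCM
# [ 4194304  8388607 -8388607]
```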

User avatar
orthodox
RE Developer
Posts: 2286
Joined: 22 Jan 2015
Location: 55°09'24.5"N 37°27'41.4"E

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
You can do the same at 24-bit. What advantages do you get with 32-bit float if we still have to reduce levels on the master?
I have problems when I bounce the mix channels to files for external mixing. Right now, with 24 bits, I have to ensure that each(!) one of some 40(!) mix channels is under 0 dBFS. With a 32-bit bounce I wouldn't have that problem.

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

Thanks for the breakdown. So floating-point is about the dynamic range of the lowest frequencies and not peaks. If that's true then there must be a certain level in dB or dynamic range that would benefit from floating-point math? As it wouldn't make sense for audio that is inaudible.
ScuzzyEye wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
But lower bit depth is lower dynamic range and easier to clip. I'm confused now.
The loudest signal you can store is when all bits are 1. That will be where -0 dBFS is. How much quieter you can go below that and still store a recognizable signal is where you need more bits. If you just want to make the loudest possible square wave, you only need 1 bit: bit turns on, speaker gets pushed out all the way; bit turns off, speaker gets pulled in all the way. If you want to describe softer signals, you need more bits.

It's when you do math on the signal and get a result that exceeds the maximum value that you lose information. The normal way to handle that is to just store it as the loudest possible value. When you look at a graphical representation of that, the graph is clipped off. More bits won't let you go higher; they just give you more precision for the smaller values you can store.

Where some confusion might creep in is the difference between using integer values, that is, counting numbers (0, 1, 2, 3, etc.), and floating point values, which are fractions (0.0, 0.1, 0.2, 0.3, etc.). Early on in this thread I started going a little deep on how floating point values are stored in memory. A 32-bit floating point number has the same step size as a 24-bit integer. You can count to 16 million and represent every value along the way using 24-bit integers. You can also do the same with 32-bit floating point. What if you wanted to count to 32 million using just 24 bits? You'd have to count by 2s: 0, 2, 4, etc., and then remember that whatever value you're storing is actually 2 times that. That's basically what the extra 8 bits in a 32-bit floating point number do. They say: look at this number in these 24 bits, and multiply or divide it by this other value to get the real number. With 8 bits you get 256 different ranges of values you can work with, but each range still only has 16 million steps. The step size just gets scaled, from very small to very large.

Normally audio, when working in floating point, uses the range -1.0 to +1.0 for the peaks and troughs of the waveform. Again, there will be 16 million evenly spaced steps in that range. What floating point allows is for the results of signal-processing math that would normally set all the bits of an integer to 1 to still be stored and still mean something.

You can't send that audio to any real-world hardware. DACs (Digital-to-Analog Converters) work with integer values. So the DAW converts -1.0 to 0 and +1.0 to 16 million. Anything below -1.0 is still 0, and anything above +1.0 is still 16 million. That is, it gets clipped. So you can do math turning a signal up really loud in one channel, and then more math turning it down again in the master, and the floating point values won't lose anything; but when you want to hear it, you need to make sure it's in the range where the hardware can process it. That being: all bits set to 1 is the loudest, and anything less than that is quieter. More bits let you have quieter signals, but the loudest you can ever go is setting all of them to 1, and that's always fully loud.

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

07 Jan 2022

Billy+ wrote:
07 Jan 2022
if you then take the file and convert it to a low-quality MP3 the bit depth is reduced even further.

So exporting a 0 dB file from Reason at 24-bit and slamming it into a low-quality MP3 could potentially make the audio clip..
MP3s (and other lossy formats) have a bit rate, not a bit depth.

It is true that encoding PCM audio into an MP3 and then decoding it for playback can produce values that clip, but it's not because of a change in bit depth.

Going back to MP3s not having a bit depth: they use a process called a discrete cosine transform. Basically, they split an audio signal into a number of signals represented by cosine functions with different amplitudes. When those functions are added back together you get (approximately) the original signal. The more (and higher-frequency) cosine waves you use, the closer you can represent the original signal. Just how many you can fit into a unit of time is the rate of the MP3. The cosine function can be computed to infinite resolution, but in practice it's usually only done to 24 bits (it used to be 16). The amplitude of each cosine individually won't be more than the 24-bit integer can hold, but when you sum all the different frequencies together it's possible that in some places the final value will clip.

So if you know your audio is going to be encoded into a lossy format (e.g. it's going to be streamed), it's probably best to leave 1 dB of headroom, so that the reconstructed signal won't add up to anything greater than 0 dBFS.
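
As a sketch of that last point, here's the headroom math in code (the helper name apply_headroom is mine, and 1 dB is just the rule of thumb from the post, not a standard from any spec):

```python
import numpy as np

def apply_headroom(samples, headroom_db=1.0):
    """Scale a full-scale mix down so codec overshoot on decode stays under 0 dBFS."""
    gain = 10 ** (-headroom_db / 20)    # 1 dB of headroom ~= a gain of 0.891
    return np.asarray(samples) * gain

print(apply_headroom([1.0, -1.0]))      # [ 0.89125094 -0.89125094]
```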

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

Got you! So it allows you the freedom of just making music and letting the engineer do the level mixing?

orthodox wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
You can do the same at 24-bit. What advantages do you get with 32-bit float if we still have to reduce levels on the master?
I have problems when I bounce the mix channels to files for external mixing. Right now, with 24 bits, I have to ensure that each(!) one of some 40(!) mix channels is under 0 dBFS. With a 32-bit bounce I wouldn't have that problem.

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

07 Jan 2022

Goriila Texas wrote:
07 Jan 2022
Thanks for the breakdown. So floating-point is about the dynamic range of the lowest frequencies and not peaks. If that's true then there must be a certain level in dB or dynamic range that would benefit from floating-point math? As it wouldn't make sense for audio that is inaudible.
The lowest volume level, not frequency. But yes. The biggest benefit of floating point is storing intermediate values that don't fit nicely in integers. This can be either very quiet signals that can then be turned up, or very loud signals that can then be turned down. It's about not losing anything in the processing, not the final output.

That's what prompted this thread. If your absolute final output is coming from Reason, 24-bit is more than adequate. But if you or someone else is going to take what you made in Reason and continue processing it, it would be really nice to get access to the exact same data that Reason is using internally, to pass to another program. Right now you can't.
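
To make that concrete, a sketch comparing a boost-then-cut round trip through a 32-bit float hand-off versus a 24-bit integer intermediate; the +12 dB figure, NumPy usage, and scaling are just for illustration, not Reason's behaviour:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 5).astype(np.float32)   # a stand-in for a hot stem
boost = np.float32(10 ** (12 / 20))                 # +12 dB somewhere in the chain

# Float hand-off: boost, pass along, cut 12 dB downstream -- nothing is lost.
float_restored = (x * boost) / boost

# 24-bit integer hand-off: the boosted samples clip in the file; the cut can't undo it.
int_handoff = np.clip(np.round(x * boost * 8388607), -8388608, 8388607)
int_restored = (int_handoff / 8388607) / boost

print(np.max(np.abs(float_restored - x)))   # ~0
print(np.max(np.abs(int_restored - x)))     # ~0.75 error at the loudest samples
```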

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

Thank you all learned something today :thumbup:

ScuzzyEye wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
Thanks for the breakdown. So floating-point is about the dynamic range of the lowest frequencies and not peaks. If that's true then there must be a certain level in dB or dynamic range that would benefit from floating-point math? As it wouldn't make sense for audio that is inaudible.
The lowest volume level, not frequency. But yes. The biggest benefit of floating point is storing intermediate values that don't fit nicely in integers. This can be either very quiet signals that can then be turned up, or very loud signals that can then be turned down. It's about not losing anything in the processing, not the final output.

That's what prompted this thread. If your absolute final output is coming from Reason, 24-bit is more than adequate. But if you or someone else is going to take what you made in Reason and continue processing it, it would be really nice to get access to the exact same data that Reason is using internally, to pass to another program. Right now you can't.

Goriila Texas
Posts: 983
Joined: 31 Aug 2015
Location: Houston TX
Contact:

07 Jan 2022

Wait... floating point is about the lowest volume, but many want a 32-bit floating point bounce to help with stopping clipping, and clipping is going over 0 dB.

ScuzzyEye wrote:
07 Jan 2022
Goriila Texas wrote:
07 Jan 2022
Thanks for the breakdown. So floating-point is about the dynamic range of the lowest frequencies and not peaks. If that's true then there must be a certain level in dB or dynamic range that would benefit from floating-point math? As it wouldn't make sense for audio that is inaudible.
The lowest volume level, not frequency. But yes. The biggest benefit of floating point is storing intermediate values that don't fit nicely in integers. This can be either very quiet signals that can then be turned up, or very loud signals that can then be turned down. It's about not losing anything in the processing, not the final output.

That's what prompted this thread. If your absolute final output is coming from Reason, 24-bit is more than adequate. But if you or someone else is going to take what you made in Reason and continue processing it, it would be really nice to get access to the exact same data that Reason is using internally, to pass to another program. Right now you can't.
