I guess in the Quicktime case a different encoder must be used. That does raise the question of why different encoders are used at all, and whether the encoder that's used for Quicktime couldn't also be used for MP4, especially as in both cases it's ffmpeg's LibAV that writes the container. Perhaps there's some technical or legal reason for this use of two different encoders, but on the surface it does seem a bit odd.

In general, I was rather confused why Windows was limited to 16-bit, and why on macOS you have to choose between 16, 24 and 32-bit. AAC, like MP3, has a variable bit depth that changes from sample to sample; I think it's usually regarded as being 32-bit float? So logically you would think one should just pass in the source audio at its native bit depth. With ffmpeg you'd certainly do that, and it would use whatever source audio you had to create the AAC (there's a sketch of this at the end of the post). But as we see from the Windows AAC docs, it seems these OS encoders require PCM input of a fixed bit depth, so there's an unfortunate but necessary conversion step first. I couldn't quickly find docs for the macOS equivalent.

I tested on macOS, doing three test renders where I rendered out a video with a WAV dialogue track to MP4 H.264/AAC, with the AAC export bit depth set to 16-bit, 24-bit and 32-bit respectively. My source audio in Resolve for these exports was 24-bit PCM. I then imported these exports into Reaper and inverted the phase of the 24-bit version, and did two test renders: 16-bit + 24-bit (inverted phase), and 24-bit (inverted phase) + 32-bit (this null test is sketched in code below).

The second render was a 100% null: there was no difference between the 24-bit and 32-bit renders. But the 16-bit and 24-bit test did show some minor differences between those two outputs. This implies that choosing 16-bit when the source audio is 24-bit involves a conversion down to 16-bit, which can lose data. I assume choosing 24-bit when the source audio is 32-bit would do likewise.
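For comparison, here's a minimal sketch of the ffmpeg route mentioned above: you just hand ffmpeg the source audio, and there's no input bit-depth switch to set because its native AAC encoder converts whatever PCM it's given to float internally. The file names and bitrate are invented for illustration, and it assumes ffmpeg is on your PATH.

```python
"""Hypothetical ffmpeg-based export, for comparison with the OS encoders."""
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "timeline.mov",            # source video (invented name)
        "-i", "dialogue_24bit.wav",      # 24-bit PCM dialogue track (invented name)
        "-map", "0:v", "-map", "1:a",    # video from the first input, audio from the second
        "-c:v", "libx264",               # H.264 video
        "-c:a", "aac",                   # native AAC encoder: input is converted to float
        "-b:a", "256k",                  #   internally, so no 16/24/32-bit choice is exposed
        "output.mp4",
    ],
    check=True,
)
```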
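And here's a rough Python sketch of the null test I did in Reaper, for anyone who wants to repeat it from the command line. It decodes the audio of two MP4 renders to 32-bit integer PCM with ffmpeg (again assumed to be on the PATH; the file names are hypothetical and numpy is required), then subtracts one from the other, which is the same as inverting the phase and summing, and reports the residual peak. A peak of exactly zero is the 100% null case.

```python
import subprocess
import wave

import numpy as np


def decode_to_int32(mp4_path: str, wav_path: str) -> np.ndarray:
    """Decode the MP4's audio track to 32-bit integer PCM WAV via ffmpeg,
    then load the samples into a numpy array."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", mp4_path, "-vn", "-c:a", "pcm_s32le", wav_path],
        check=True,
    )
    with wave.open(wav_path, "rb") as w:
        frames = w.readframes(w.getnframes())
    # Interleaved little-endian 32-bit ints; widen to int64 so the subtraction can't overflow.
    return np.frombuffer(frames, dtype="<i4").astype(np.int64)


a = decode_to_int32("render_24bit.mp4", "a.wav")   # hypothetical file names
b = decode_to_int32("render_32bit.mp4", "b.wav")

n = min(len(a), len(b))        # guard against a tiny length mismatch between decodes
residual = a[:n] - b[:n]       # same as inverting one track's phase and summing
peak = int(np.abs(residual).max())

print(f"peak residual: {peak} ({peak / 2**31:.2e} of full scale)")
print("100% null" if peak == 0 else "the renders differ")
```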
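Finally, a toy illustration of why that conclusion makes sense. This isn't Resolve's actual conversion code (a real converter would normally dither rather than just truncate), but the data-loss point is the same: padding a 24-bit sample out to 32 bits is reversible, while cutting it down to 16 bits throws away the bottom 8 bits for good.

```python
sample_24 = 0x123456             # an arbitrary 24-bit sample value

as_32 = sample_24 << 8           # pad up to 32-bit: only adds zero bits
as_16 = sample_24 >> 8           # cut down to 16-bit: drops the low 8 bits

print((as_32 >> 8) == sample_24) # True  - the round trip up and back is lossless
print((as_16 << 8) == sample_24) # False - the 0x56 byte is gone for good
```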