I wish to know whether I have a correct understanding of how the way audio is encoded and then compressed determines the size of the resulting compressed file. Here is an example of what I think happens.
Suppose a 240-second song is encoded digitally with a sampling rate of 44.1 kHz. Then suppose that 16 bits are used to encode each "frame". This would result in 44,100 x 16 x 240 / 8 = 21,168,000 bytes of data. Thus this example song would require 21.168 MB uncompressed when stored on an audio CD.
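To make the arithmetic concrete, here is a small Python sketch of the calculation as I understand it (the variable names are just mine for illustration):

```python
# Uncompressed (PCM) size as I understand it: samples per second
# times bits per "frame"/sample times duration, converted to bytes.
sample_rate_hz = 44_100      # samples per second
bits_per_sample = 16         # bits used for each "frame"
duration_s = 240             # length of the song in seconds

uncompressed_bytes = sample_rate_hz * bits_per_sample * duration_s // 8
print(uncompressed_bytes)    # 21168000, i.e. about 21.168 MB
```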
Next, compress the roughly 21 MB file using a bitrate (assumed constant) of 128 kbit/s. The compressed size will be a function of the 240 seconds and the bitrate only, ignoring the size of the uncompressed file. That is, I assume the 16 bits used for each "frame" during the encoding, as well as the sampling rate, are transparent to the compression process. So the compressed file's size will be 240 x 128,000 bits / 8 = 3,840,000 bytes. Then this 3.84 MB is a typical file size for, say, an MP3 file holding a 4-minute song.
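And the corresponding sketch for the compressed size, which (under my assumption above) depends only on the duration and the bitrate:

```python
# Compressed size under a constant bitrate: duration times bitrate,
# converted from bits to bytes. The sampling rate and bit depth of the
# original do not enter this calculation (my assumption above).
duration_s = 240             # length of the song in seconds
bitrate_bps = 128_000        # 128 kbit/s, assumed constant

compressed_bytes = duration_s * bitrate_bps // 8
print(compressed_bytes)      # 3840000, i.e. about 3.84 MB
```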
If my reasoning in this example is correct, great. If not, I would appreciate knowing where I have misunderstood.