I need to store an exact audio position in a database, namely SQLite. I could store the frame position (sample offset / channels) as an integer, but this would cause extra data maintenance in case of certain file conversions.
So I'm thinking about storing the position in seconds as an 8-byte floating point value, i.e. a double in C, stored as a REAL in SQLite. That makes the database structure more consistent.
But, given a maximum samplerate of 192kHz, is the double precision sufficient so that I can always recover the exact frame position when multiplying the value by the samplerate?
Is there a certain maximum position above which an error may occur? What is this maximum position?
PS: this is about SQLite REAL, but also about the C and Java double type which may hold the position value at various stages.
Update:
Since the discussions now focus on the risks related to conversion and rounding, here's the C method that I'm planning to use:
// Given these types:
int samplerate;
long long framepos;
double position;
// First compute the position in seconds from the framepos:
position = (double) framepos / samplerate;
// Now store the position in an SQLite REAL column, and retrieve it later
// Then compute the framepos back from position, with rounding:
framepos = (long long) (position * samplerate + 0.5);
Is this safe and symmetrical?
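For reference, here is a brute-force check of the round trip that I could run (just a sketch; the range and rate are arbitrary test values, not part of my actual code):

/* Sketch: verify that framepos -> seconds -> framepos is lossless
   for a range of positions at 192 kHz (test values are arbitrary). */
#include <stdio.h>

int main(void)
{
    const int samplerate = 192000;
    long long failures = 0;

    for (long long framepos = 0; framepos < 10000000; framepos++) {
        double position = (double) framepos / samplerate;
        long long back = (long long) (position * samplerate + 0.5); /* same rounding as above */
        if (back != framepos)
            failures++;
    }
    printf("failures: %lld\n", failures);
    return 0;
}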
A double has 53 bits of precision (52 stored mantissa bits plus an implicit leading bit). Depending on the exponent, some of these bits will represent whole numbers (seconds in your case), the others fractions of a second. At 48 kHz, at least 16 bits are needed to make the sub-second part precise enough (18 bits at 192 kHz, more if rounding is not optimal). Even then, that leaves 35 bits for the seconds, which will span just over a thousand years.
So even if you need an extra bit or two for the sub-second part to guard against rounding, and even if SQL loses a bit or two of precision converting to decimal and back here and there, you aren't anywhere near losing sample precision with your double precision number. Make sure your rounding works correctly: converting to integer in C truncates toward zero, so even an infinitesimally small conversion error below the true value could throw you off by 1.
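For example, you can make the rounding explicit with llround from C99's <math.h>, which rounds to nearest instead of truncating; a minimal sketch (the function name is illustrative, not the asker's code):

/* Sketch: explicit round-to-nearest conversion (llround is C99; link with -lm). */
#include <math.h>

long long position_to_framepos(double position, int samplerate)
{
    return llround(position * (double) samplerate);
}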
I would store it as a (64-bit) integer representing microseconds (10^6, approx 2**20, per second). This avoids floating point hardware/software, is readily understood by all, and gives you a range of 0..2**44 seconds, which is over half a million years.
As an alternative, use a readable fixed-precision decimal representation (20 digits are enough for a 64-bit value), right-justified and padded with leading zeros. The cost of conversion is negligible compared to DB accesses anyway.
One advantage of these options is that any database will trivially know how to order them, which is not necessarily obvious for floating point values.
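For instance, the conversion and the zero-padded text form could look like this (just a sketch; the function name, the seconds/remainder split, and the 192 kHz rate are illustrative assumptions):

/* Sketch: frame position -> integer microseconds, plus a 20-digit
   zero-padded decimal string that sorts correctly as text. */
#include <inttypes.h>
#include <stdio.h>

/* Round to the nearest microsecond; splitting into whole seconds and a
   remainder keeps the intermediate products well inside 64 bits. */
static int64_t frames_to_micros(int64_t framepos, int64_t samplerate)
{
    int64_t secs = framepos / samplerate;
    int64_t rem  = framepos % samplerate;              /* < samplerate */
    return secs * 1000000 + (rem * 1000000 + samplerate / 2) / samplerate;
}

int main(void)
{
    int64_t micros = frames_to_micros(123456789, 192000);

    char buf[32];
    snprintf(buf, sizeof buf, "%020" PRId64, micros);  /* "00000000000643004109" */
    printf("%s\n", buf);
    return 0;
}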
As the answer by Matthias Wandel explains, there's probably nothing to worry about. OTOH by using integers you would get fixed precision regardless of the magnitude which might be useful.
Say, use a 64-bit integer, and store the time as microseconds. That gives you an equivalent sampling precision of 1 MHz and a range of almost 300000 years (if my quick calculation is correct).
Edit: Even when taking into account that the timestamp (in microseconds) * sample_rate must fit into a 64-bit integer, you still have a range of about 1.5 years (2**63/1e6/3600/24/365/192e3), assuming a max sample rate of 192kHz.
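One way around that intermediate overflow (my own sketch, not something the answer spells out) is to split the timestamp into whole seconds and a sub-second remainder before multiplying:

/* Sketch: microseconds -> frame position without letting
   micros * samplerate overflow a 64-bit integer. */
#include <inttypes.h>
#include <stdio.h>

static int64_t micros_to_frames(int64_t micros, int64_t samplerate)
{
    int64_t secs = micros / 1000000;
    int64_t rem  = micros % 1000000;                    /* < 1,000,000 */
    /* rem * samplerate <= 1e6 * 192e3, far below INT64_MAX */
    return secs * samplerate + (rem * samplerate + 500000) / 1000000;
}

int main(void)
{
    /* ~100 years in microseconds: a naive micros * samplerate would
       overflow int64_t, but the split version stays in range. */
    int64_t micros = INT64_C(100) * 365 * 24 * 3600 * 1000000;
    printf("%" PRId64 "\n", micros_to_frames(micros, 192000));
    return 0;
}

Since one microsecond is finer than a 192 kHz sample period (about 5.2 µs), rounding to the nearest microsecond and back reproduces the original frame position exactly.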