POSIX uses struct timeval to represent time intervals.
struct timeval { time_t tv_sec; suseconds_t tv_usec; };
GHS Integrity represents Time in the following manner:
struct Time { time_t Seconds; unsigned Fraction; };
For example, a Fraction of 0x80000000 represents 0.5 sec, and 0x40000000 represents 0.25 sec.
What is the best way to convert from timeval to Time?
(p.s. The answer is not to link the POSIX library into Integrity and use POSIX calls.)
This is an unusual way to represent time.
Anyway, there are two easy ways to do it in either direction, provided you have either 64-bit integers or floating point available (the former is more likely on an embedded system):
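For instance, a minimal sketch of both versions (the function names are hypothetical, and the `Time` layout is taken from the question; the conversion factor is 2^32 / 10^6, since `Fraction` counts 1/2^32-second units):

```c
#include <stdint.h>
#include <time.h>
#include <sys/time.h>   /* struct timeval */

struct Time { time_t Seconds; unsigned Fraction; };

/* 64-bit integer version: Fraction = tv_usec * 2^32 / 10^6. */
struct Time timeval_to_Time_u64(struct timeval tv)
{
    struct Time t;
    t.Seconds  = tv.tv_sec;
    t.Fraction = (unsigned)(((uint64_t)tv.tv_usec << 32) / 1000000u);
    return t;
}

/* Floating-point version: the fraction of a second, scaled by 2^32. */
struct Time timeval_to_Time_fp(struct timeval tv)
{
    struct Time t;
    t.Seconds  = tv.tv_sec;
    t.Fraction = (unsigned)(((double)tv.tv_usec / 1000000.0) * 4294967296.0);
    return t;
}

/* And back the other way, using 64-bit integers. */
struct timeval Time_to_timeval(struct Time t)
{
    struct timeval tv;
    tv.tv_sec  = t.Seconds;
    tv.tv_usec = (suseconds_t)(((uint64_t)t.Fraction * 1000000u) >> 32);
    return tv;
}
```

With the example values from the question, `tv_usec = 500000` yields a Fraction of 0x80000000 and `tv_usec = 250000` yields 0x40000000, as expected.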
Obviously both are only integer approximations, because most values in one scale cannot be represented exactly in the other. In one direction you may also lose some precision, because the `Fraction` form can represent much finer times: one increment of `Fraction` is less than 0.00024 microseconds. But that only matters if your timer can actually measure such values, which is unlikely – most timers cannot even resolve single microseconds, and the value you see in `tv_usec` is often rounded.

If neither 64-bit integers nor floating point are an option, you could do it iteratively with an extra variable. I have been wondering whether there is a simpler (and cheaper, considering that this is timing code) way to do such scaling than the equivalent of an iterative 64-bit multiplication and division carried out with two 32-bit integers. Of the two ideas that came to mind, one does not scale exactly evenly and may produce results that are off by up to 9 bits, and the one that compensates for that turns out to be no cheaper. If something new comes to mind I will post it here, but this is an interesting challenge. Does anyone else have a good algorithm or snippet? Perhaps with the aid of a small precomputed table?
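For reference, the iterative 32-bit-only route I had in mind can be sketched as a 32-step shift-and-subtract long division (the function name `usec_to_fraction` is made up for illustration):

```c
#include <stdint.h>

/* Compute floor(usec * 2^32 / 10^6) using only 32-bit arithmetic,
 * via bit-by-bit long division. Assumes 0 <= usec < 1000000, as in
 * a normalized timeval. */
uint32_t usec_to_fraction(uint32_t usec)
{
    uint32_t r = usec;  /* running remainder, always < 1000000 < 2^20 */
    uint32_t q = 0;     /* quotient (the Fraction), built one bit at a time */
    int i;

    for (i = 0; i < 32; i++) {
        q <<= 1;
        r <<= 1;        /* r stays below 2^21, so this cannot overflow */
        if (r >= 1000000u) {
            r -= 1000000u;
            q |= 1u;
        }
    }
    return q;
}
```

This is exact, but it costs 32 shift/compare/subtract steps per conversion, which is precisely the expense the question above is trying to avoid.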