I’m using an open-source library for I2C bus operations. This library frequently calls a function to obtain the current timestamp with millisecond resolution.
Example Call:
nowtime = timer_nowtime();
while ((i2c_CheckBit(dev) == true) && ((timer_nowtime() - nowtime) < I2C_TIMEOUT));
The application using this i2c library uses a lot of CPU capacity. I found that the running program spends most of its time calling the function timer_nowtime().
The original function:
unsigned long timer_nowtime(void)
{
    static bool usetimer = false;
    static unsigned long long inittime;
    struct tms cputime;

    if (usetimer == false)
    {
        inittime = (unsigned long long)times(&cputime);
        usetimer = true;
    }

    return (unsigned long)((times(&cputime) - inittime) * 1000UL / sysconf(_SC_CLK_TCK));
}
My aim now is to improve the efficiency of this function. I tried it this way:
unsigned long timer_nowtime(void)
{
    struct timespec systemtime;

    clock_gettime(CLOCK_REALTIME, &systemtime);

    // convert to a millisecond timestamp
    // incorrect way, because (1 / 1000000UL) always returns 0 -> thanks Pace
    //return (unsigned long) ((systemtime.tv_sec * 1000UL)
    //        + (systemtime.tv_nsec * (1 / 1000000UL)));
    return (unsigned long)((systemtime.tv_sec * 1000UL)
            + (systemtime.tv_nsec / 1000000UL));
}
Unfortunately, I can’t declare this function inline (no clue why).
Which way is more efficient for obtaining a current timestamp with ms resolution? I’m sure there is a more performant way to do so. Any suggestions?
Thanks.
It’s clear that your example call spends most of its CPU time in the timer_nowtime() function: you are polling, and the loop eats your CPU time. You could swap in a faster timer function and thereby get more loop iterations, but the loop would still spend most of its CPU time in that function. You will not reduce CPU usage by changing your timer function alone. What you can change is the loop itself: introduce wait times between polls, but only if that makes sense in your application, e.g.:
The timer: I think gettimeofday() would be a good choice; it has high precision and is available on most (all?) Unices.
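A minimal sketch of a millisecond timestamp built on gettimeofday(); the function name `now_ms_gtod` is made up for the example:

```c
#include <stddef.h>
#include <sys/time.h>

/* Sketch: millisecond timestamp from gettimeofday(). Note that this
 * follows the wall clock, so it can jump if the system time is set. */
unsigned long now_ms_gtod(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return (unsigned long)(tv.tv_sec * 1000UL + tv.tv_usec / 1000UL);
}
```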