Suppose a function has to use a deeply nested pointer very extensively:
function (ptr_a_t ptr_a) {
    ...
    a = ptr_a->ptr_b->ptr_c->val;
    b = ptr_a->ptr_b->ptr_c->val;
    ...
}
Assuming all pointers are checked and valid, is there any performance degradation, problem with atomicity, or other caveat (aside from readability) compared with:
function (ptr_a_t ptr_a) {
    val = ptr_a->ptr_b->ptr_c->val;
    ...
    a = val;
    b = val;
    ...
}
Update
I compiled this C file (written only for investigation purposes) with gcc -S:
#include <stdio.h>

typedef struct {
    int val;
} c_str_t;

typedef struct {
    c_str_t *p_c;
} b_str_t;

typedef struct {
    b_str_t *p_b;
} a_str_t;

void func (a_str_t *p_a)
{
    int a, b;
    a = p_a->p_b->p_c->val;
    b = p_a->p_b->p_c->val;
    printf("%d %d\n", a, b);
}
For gcc -S (no optimization):
movl 8(%ebp), %eax
movl (%eax), %eax
movl (%eax), %eax
movl (%eax), %eax
movl %eax, -4(%ebp)
movl 8(%ebp), %eax
movl (%eax), %eax
movl (%eax), %eax
movl (%eax), %eax
movl %eax, -8(%ebp)
For gcc -S -O1:
movl 8(%ebp), %eax
movl (%eax), %eax
movl (%eax), %eax
movl (%eax), %eax
movl %eax, 8(%esp)
movl %eax, 4(%esp)
I observe the same output when using the volatile qualifier inside the structures. So the repeated nested dereferences appear to get optimized down to a single chain walk regardless.
Whether these will be treated the same is implementation-dependent. Compile your code both ways and examine the assembly output to see how your compiler treats each case.
On an embedded system I am developing for, I added an "intermediate" pointer like you did and saw an appreciable speed-up in the function's execution time. In my case, the compiler was re-calculating the pointer chain from scratch each time and was not optimizing it away. Your compiler may be different; the only real way to tell is to try it both ways and measure the execution time.