I was wondering how a C compiler (I tried gcc and clang) organizes the argument list of a function internally. For this purpose I wrote a small program consisting of two files:
foo.c:
double foo (double (*f) (void *, double), void * arg, double x)
{
return f (arg, x);
}
main.c:
#include <stdio.h>
extern double foo (double (*) (double), double);
double bar (double x)
{
return x + x;
}
int main ()
{
double x = foo (&bar, 2);
printf ("%g\n", x);
return 0;
}
and played with the argument list and the body of foo in foo.c. When foo.c is not consistent with the declaration in main.c, the behavior of the program should be undefined according to the C standard (if I understand correctly?).
I have tried several variants (including the one above) for which the program prints 4. One of the more exotic ones is:
double foo (double x, double (*f) (char, double, int), int r)
{
char a;
return f (a, x, r);
}
However, if I try something like
double foo (double x, double (*f) (char, double, double), double y)
{
char a;
return f (a, y, x);
}
the result is not 4, but if I write f(a, x, y) instead, I get 4 again.
The experiment makes me think that the argument list is internally represented as a set of arrays, one per type, in which the information about the relative order of arguments of different types is lost. For example, the argument list (char a, double x, int i, double y, char b) would be stored as something like (char:{a, b}, int:{i}, double:{x, y}), and casting it to (double z, char c) would yield (char:{c = a}, double:{z = x}), where I write T:{...} for an array of type T.
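To make the example concrete, this is how I would test the hypothesis (the five-argument signature is just the example above; the declaration and the definition disagree on purpose, so as far as I understand this is still undefined behavior and I am only asking why it appears to work):

foo.c:

double foo (char a, double x, int i, double y, char b)
{
/* under the hypothesis, x receives the caller's first double */
return x + x;
}

main.c:

#include <stdio.h>
/* deliberately inconsistent declaration, (double z, char c) */
extern double foo (double, char);
int main ()
{
/* if the hypothesis holds, z lands in x and this prints 4 */
printf ("%g\n", foo (2, 'c'));
return 0;
}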
So:
Is my interpretation correct?
Is this behavior standardized somewhere?
How much can I rely on such behavior?
This behavior would allow some generic programming (see the sketch below). Does anybody exploit it in practice?
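By generic programming I mean something like the following sketch (apply and twice are names I just made up; the explicit cast silences the compiler, but as far as I understand the call is still undefined and merely happens to work, as in my foo/bar example):

#include <stdio.h>
/* a higher-order routine expecting a callback that takes a context pointer */
double apply (double (*f) (void *, double), void *ctx, double x)
{
return f (ctx, x);
}
/* a callback with no context parameter at all */
double twice (double x)
{
return x + x;
}
int main ()
{
/* relies on the observed behavior: the pointer and the double seem to
travel independently, so twice never sees the unused void * slot */
double y = apply ((double (*) (void *, double)) &twice, NULL, 2);
printf ("%g\n", y); /* I expect 4, as in the foo/bar example */
return 0;
}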
Thanks!