Suppose the following program is run on an x86_64 system:
#include <stdlib.h>

int main(void) {
    // sizeof(int)  == 4
    // sizeof(int*) == 8
    // sizeof(long) == 8

    // I would like 2 distinct memory locations to hold these two integers
    int *mem1 = (int *)malloc(2 * sizeof(int));
    mem1[0] = 1;
    mem1[1] = 2;

    // I would like 1 distinct memory location to hold this one long
    long *mem2 = (long *)malloc(1 * sizeof(long));
    mem2[0] = 3;

    free(mem1);
    free(mem2);
    return 0;
}
Since malloc receives a number of bytes to allocate, both calls to malloc look exactly the same. How does malloc know to allocate 16 bytes to store the two-integer array but only 8 bytes for the single long?
Edit for clarity:
Based on the following assumptions, storing these two different arrays should require a different amount of space for each. However, malloc appears to reserve the same amount of space in both calls in this program. Yet array sizes are correctly determined even for arrays of datatypes whose lengths differ from long's.
Can someone help me identify a flaw in this understanding of memory, or point to something that malloc / libc is doing in the background? Here are the assumptions I'm operating on:
- At each memory address on this system, a maximum of one long can be stored
- mem[idx] refers to the address of mem plus the offset idx, and that address cannot point to data in the middle of another item in memory (so mem1[0] cannot refer to the lower half-word of mem1, and mem1[1] can't then refer to the high word)
- When an array of integers is made, two integers are not packed on this system into one long