I don't think you can figure that out from your example; there's too little code in that function.
If it were a bigger function that used the array multiple times, you might find hints pointing to it, such as the same base address combined with different offsets popping up throughout the generated machine code.
Weak assumption:
for (i = 0; i < 10; i++)
    array[i] = i * 2;
This would allow you to assume, by looking at the generated code, that you're dealing with an array of 10 ints.
Stronger case:
int *array = NULL;

array = malloc(10 * sizeof *array);
if (array == NULL)
    return ENOMEM;

for (i = 0; i < 10; i++)
    array[i] = i * 2;
This would make it a certainty that you're dealing with an array of 10 ints: the byte count passed to malloc is right there at the call site.
In your case you only have the raw information: the function allocated 10 * sizeof(int) bytes on the stack (which itself also depends on the optimizer, but that's another topic).
So it comes down to the heuristics and code-pattern-recognition algorithms that programs like IDA use to feed you as much reliable information as possible.
The rest is up to the reverser's experience.