The basic problem is that the number of times each instruction or method or whatever is executed is NOT a reliable predictor of anything ... except how long it will take to run the same program on the same input again.
To get a useful predictor, you need a good model of how the program's execution time depends on its inputs. Producing such a model automatically is not a tractable problem.
The Halting Theorem says that you cannot, in general, determine a priori whether a program is going to halt for a given input, let alone how long it will take to do so.
In practice, theorem proving technology is not up to the task for anything but really simple programs.
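To see just how quickly that ceiling is hit, here is a hedged sketch (the function name and example are mine, not from any particular prover or library): nobody can currently prove that this three-line loop halts for every positive n (that's the Collatz conjecture), never mind derive a formula for its running time.

```python
# A tiny illustration of why halting (let alone running time) resists a priori
# analysis: whether this loop terminates for every positive n is the Collatz
# conjecture, which is still an open problem.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps


print(collatz_steps(27))  # 111 steps; no known formula predicts this from n alone
```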
Attempting to get a model by empirical means (i.e. by trying to fit a curve to experimental results; see the sketch after this list) is not going to work because:
- It is not possible to work out (automatically) what the key variables are for the model. Indeed, the variables may not even be numeric.
- Even if you managed that, you have no way of knowing whether the program's performance will fit one of the classic curves with any degree of accuracy. You may get a model, but you have no way of knowing how accurate it actually is.
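Just to make the curve-fitting idea concrete, here is a minimal sketch of what it would look like, assuming we had already guessed that the single key variable is the input size n (the part that can't be automated) and that a power law is the right family of curves. `work(n)` and the other names are made up for illustration.

```python
# Minimal sketch of the empirical (curve-fitting) approach. work(n) is a
# hypothetical stand-in for the program under study.
import math
import time


def work(n):
    # Placeholder workload; replace with the program being modelled.
    return sum(i * i for i in range(n))


def time_once(n):
    start = time.perf_counter()
    work(n)
    return time.perf_counter() - start


def measure(sizes, repeats=5):
    # Take the best of several runs at each size to reduce timing noise.
    return [(n, min(time_once(n) for _ in range(repeats))) for n in sizes]


def fit_power_law(samples):
    # Least-squares fit of log(t) = log(a) + k*log(n), i.e. the model t ~ a * n**k.
    xs = [math.log(n) for n, _ in samples]
    ys = [math.log(t) for _, t in samples]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    k = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = math.exp(mean_y - k * mean_x)
    return a, k


if __name__ == "__main__":
    a, k = fit_power_law(measure([10_000, 20_000, 40_000, 80_000]))
    print(f"fitted model: t ~ {a:.3g} * n**{k:.2f}")
```

Even when the fit looks good, it only describes the inputs you happened to measure; the factoring example below is a case where it tells you next to nothing.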
Just to illustrate, consider the problem of factoring a large number. If the number has small factors, then the factorization will be quick. If it doesn't, then ... well, according to Wikipedia, the worst-case computational complexity of the best available method for factoring integers is:
O(exp((64b/9)^(1/3) · (log b)^(2/3)))

where b is the number of bits in the number. That's intractable for large values of b.
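To see how wildly the actual time can swing for same-sized inputs, here is a rough Python sketch using naive trial division (not the number field sieve). The two numbers below are adjacent, and hence the same size in bits, yet one gives up its smallest factor instantly and the other only after millions of trial divisions.

```python
# Two inputs of the same size whose factoring times differ enormously.
import time


def smallest_factor(n):
    # Naive trial division; returns the smallest prime factor of n.
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n  # n itself is prime


hard = 15_485_863 * 32_452_843  # product of two ~8-digit primes
easy = hard - 1                 # adjacent number, but even, so factored instantly

for n in (easy, hard):
    start = time.perf_counter()
    f = smallest_factor(n)
    print(f"{n}: smallest factor {f}, {time.perf_counter() - start:.4f}s")
```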
So ...
An analytical solution that could give us a useful prediction would be equivalent to a fast analytical method that tells us whether a number is (easily) factorizable, and how large its factors are. That is a mathematically hard problem.
An empirical solution cannot exist, because no useful empirical model exists ... at least, not one that doesn't involve first doing the factorization.