I did a few tests, and caching the value seems to be marginally better than evaluating the condition on every iteration.
Here are my test results when using std::vector.
Code: Testing for-loop caching with std::vector
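For reference, here is a minimal sketch of the two loop forms being compared (placeholder function names and loop body of my own choosing, not the exact benchmark code):

    #include <vector>
    #include <cstddef>

    void evaluated(std::vector<int>& v)
    {
        // Condition re-evaluated: v.size() is called on every iteration.
        for (std::size_t i = 0; i < v.size(); ++i)
            v[i] += 1;
    }

    void cached(std::vector<int>& v)
    {
        // Condition cached: the size is read once before the loop starts.
        for (std::size_t i = 0, n = v.size(); i < n; ++i)
            v[i] += 1;
    }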
Timing results (4 runs each for the evaluated and cached versions):
Evaluated:
- Compilation time: 2,16 sec, absolute running time: 10,94 sec, absolute service time: 13,11 sec
- Compilation time: 1,76 sec, absolute running time: 9,98 sec, absolute service time: 11,75 sec
- Compilation time: 1,76 sec, absolute running time: 10,11 sec, absolute service time: 11,88 sec
- Compilation time: 1,91 sec, absolute running time: 10,62 sec, absolute service time: 12,53 sec
Cached:
- Compilation time: 1,84 sec, absolute running time: 9,55 sec, absolute service time: 11,39 sec
- Compilation time: 1,75 sec, absolute running time: 9,85 sec, absolute service time: 11,61 sec
- Compilation time: 1,83 sec, absolute running time: 9,41 sec, absolute service time: 11,25 sec
- Compilation time: 1,86 sec, absolute running time: 9,87 sec, absolute service time: 11,73 sec
Here are my test results when using std::list.
Code: Testing for-loop caching with std::list
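Again, a minimal sketch of the two variants (placeholder names and body, not the exact benchmark code):

    #include <list>

    void evaluated(std::list<int>& l)
    {
        // Condition re-evaluated: l.end() is called on every iteration.
        for (auto it = l.begin(); it != l.end(); ++it)
            *it += 1;
    }

    void cached(std::list<int>& l)
    {
        // Condition cached: the end iterator is obtained once up front.
        for (auto it = l.begin(), end = l.end(); it != end; ++it)
            *it += 1;
    }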
Timing results (2 runs each for the evaluated and cached versions):
Evaluated:
- Compilation time: 1,9 sec, absolute running time: 17,94 sec, absolute service time: 19,84 sec
- Compilation time: 1,84 sec, absolute running time: 17,52 sec, absolute service time: 19,36 sec
Cached:
- Compilation time: 1,81 sec, absolute running time: 17,74 sec, absolute service time: 19,56 sec
- Compilation time: 1,92 sec, absolute running time: 17,29 sec, absolute service time: 19,22 sec
I used the absolute running time as the comparison metric. Caching the condition is consistently, though only marginally, better than evaluating it on every iteration.