I wrote a simple OpenGL ES 2.0 program running on my Tegra 2 based Android 2.2 device. It simply renders a big quad on the screen with some diffuse and specular lighting (no normal mapping). Naturally, I was curious about the performance. I measure it by summing the frame delta times over one second and then dividing that sum by the number of frames counted in that second, which gives the average frame time in milliseconds. Here's the code:
long currentTime = System.currentTimeMillis();
long deltaTime = currentTime - lastTime; // time since the previous frame, in ms
lastTime = currentTime;
timeCounter += deltaTime; // total time accumulated in the current window
FPS++; // frames counted in the current window
if (timeCounter >= 1000) // roughly one second has passed
{
    // print the frame count and the average frame time in ms
    System.out.println(FPS + " " + (float)timeCounter/(float)FPS);
    timeCounter = 0;
    FPS = 0;
}
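A side note on the clock: System.currentTimeMillis() is tied to wall-clock time (it can jump if the system time changes) and is fairly coarse on some devices, so System.nanoTime() is generally the safer choice for measuring intervals. Here is the same counter adapted to nanoTime (my untested adaptation, same variables as above, with timeCounter now in nanoseconds):

long currentTime = System.nanoTime();
long deltaTime = currentTime - lastTime; // time since the previous frame, in ns
lastTime = currentTime;
timeCounter += deltaTime;
FPS++;
if (timeCounter >= 1000000000L) // one second, in nanoseconds
{
    // average frame time, converted back to milliseconds
    System.out.println(FPS + " " + (timeCounter / 1.0e6) / FPS);
    timeCounter = 0;
    FPS = 0;
}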
After I run the program, the first 2-3 measurements (seconds) give me around 31-33 FPS, but then it drops to 28-29 for one measurement and finally settles at 26, where it stays. What is interesting here is that 26 = 52/2, and no matter what I do I cannot make the program run faster than 52 FPS, even with the simplest solid-colored fragment shaders. Moreover, changing the shader a little (removing the pow function from the specular computation, for instance) does not affect the performance at all: it is still 26 FPS. It takes a much heavier (or much lighter) change to see the FPS move. So I am wondering whether there is some synchronization (vsync?) I am running into here. Has anyone experienced anything like this?
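If it is vsync, the numbers would actually make sense: a frame that takes even slightly longer than one refresh interval has to wait for the next vsync, so the displayed rate snaps to refresh/2, refresh/3, and so on. A quick sketch of that arithmetic, treating the 52 FPS cap I measured as the effective refresh rate (that rate is my assumption, not something confirmed for the device, and the frame times below are made up for illustration):

double refreshHz = 52.0; // effective cap observed above, assumed to be the refresh rate
double vsyncMs = 1000.0 / refreshHz; // ~19.2 ms per refresh interval
double[] frameTimesMs = { 10.0, 19.0, 20.0, 30.0, 38.0, 40.0 }; // hypothetical frame times
for (double t : frameTimesMs)
{
    // the frame is displayed at the first vsync after it finishes rendering
    int intervals = (int) Math.ceil(t / vsyncMs);
    System.out.println(t + " ms -> " + (refreshHz / intervals) + " FPS");
}

Under this model, every frame time between roughly 19.3 ms and 38.5 ms lands on exactly 26 FPS, which would explain why removing a single pow from the shader changes nothing.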
I also made a simple program in Unity 3D which does exactly what my original program does. I even copied the GLSL fragment shader and adapted it a little to work in Unity, instead of using the preferred Cg/HLSL. The timings from the Unity version clearly show a stable 31-33 FPS, which is what I get in my program for the first 2-3 seconds. I know Unity is a good piece of software, but I would expect my program, under the same conditions, to perform almost exactly the same :).