I'm making a simple fractal viewing app for Android, just for fun. I'm also using it as an opportunity to learn OpenGL, since I've never worked with it before. Using the Android port of the NeHe tutorials as a starting point, my approach is to have one class (FractalModel) that does all the math to create the fractal, and a FractalView class that does all the rendering.
The difficulty I'm having is in getting the rendering to work. Since I'm essentially plotting a graph of points of different colors, where each point should correspond to one pixel, I thought I'd handle this by rendering 1x1 rectangles over the entire screen, using the screen dimensions to calculate the offsets so that there's a 1:1 correspondence between the rectangles and the physical pixels. Since the color of each pixel will be calculated independently, I can reuse the same rendering code to render different parts of the fractal (I want to add panning and zooming later on).
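To make the intent concrete, here's a rough sketch of the mapping I have in mind. The quadForPixel helper is just for illustration, and it assumes the visible coordinate range runs from -1 to 1 in both axes (2 units across), which is exactly the assumption I'm unsure about below:

//Illustrative only: computes the corner coordinates of the quad covering pixel
//(col, row), assuming the visible range is -1 to 1 in both x and y.
//The helper name and layout are mine, not from the tutorial code.
private float[] quadForPixel(int col, int row, int screenWidth, int screenHeight){
    float pixelWidth = 2.0f / screenWidth;
    float pixelHeight = 2.0f / screenHeight;
    float left = -1.0f + col * pixelWidth;
    float top = 1.0f - row * pixelHeight;
    float right = left + pixelWidth;
    float bottom = top - pixelHeight;
    //BL, BR, TL, TR - the same triangle-strip order I use in the view class below
    return new float[]{
        left, bottom, 0f,
        right, bottom, 0f,
        left, top, 0f,
        right, top, 0f
    };
}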
Here is the view class I wrote:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.opengl.GLU;

public class FractalView extends GLSurfaceView implements Renderer {
    private float[] mVertices;
    private FloatBuffer[][] mVBuffer;
    private ByteBuffer[][] mBuffer;
    private int mScreenWidth;
    private int mScreenHeight;
    private float mXOffset;
    private float mYOffset;
    private int mNumPixels;

    //references to current vertex coordinates
    private float xTL;
    private float yTL;
    private float xBL;
    private float yBL;
    private float xBR;
    private float yBR;
    private float xTR;
    private float yTR;

    public FractalView(Context context, int w, int h){
        super(context);
        setEGLContextClientVersion(1);
        mScreenWidth = w;
        mScreenHeight = h;
        mNumPixels = mScreenWidth * mScreenHeight;
        mXOffset = 1.0f / mScreenWidth;
        mYOffset = 1.0f / mScreenHeight;
        mVertices = new float[12];
        mVBuffer = new FloatBuffer[mScreenHeight][mScreenWidth];
        mBuffer = new ByteBuffer[mScreenHeight][mScreenWidth];
    }

    public void onDrawFrame(GL10 gl){
        int i, j;
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glLoadIdentity();
        mapVertices();
        gl.glColor4f(0.0f, 1.0f, 0.0f, 0.5f);
        //draw one quad per screen pixel
        for(i = 0; i < mScreenHeight; i++){
            for(j = 0; j < mScreenWidth; j++){
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVBuffer[i][j]);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, mVertices.length / 3);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            }
        }
    }

    public void onSurfaceChanged(GL10 gl, int w, int h){
        if(h == 0){                                  //Prevent A Divide By Zero By
            h = 1;                                   //Making Height Equal One
        }
        gl.glViewport(0, 0, w, h);                   //Reset The Current Viewport
        gl.glMatrixMode(GL10.GL_PROJECTION);         //Select The Projection Matrix
        gl.glLoadIdentity();                         //Reset The Projection Matrix
        //Calculate The Aspect Ratio Of The Window
        GLU.gluPerspective(gl, 45.0f, (float)w / (float)h, 0.1f, 100.0f);
        gl.glMatrixMode(GL10.GL_MODELVIEW);          //Select The Modelview Matrix
        gl.glLoadIdentity();
    }

    public void onSurfaceCreated(GL10 gl, EGLConfig config){
        gl.glShadeModel(GL10.GL_SMOOTH);             //Enable Smooth Shading
        gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);     //Black Background
        gl.glClearDepthf(1.0f);                      //Depth Buffer Setup
        gl.glEnable(GL10.GL_DEPTH_TEST);             //Enables Depth Testing
        gl.glDepthFunc(GL10.GL_LEQUAL);              //The Type Of Depth Testing To Do
        //Really Nice Perspective Calculations
        gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
    }

    private void mapVertices(){
        int i, j;
        //start with the quad in the top-left corner of the screen
        xTL = -1;
        yTL = 1;
        xTR = -1 + mXOffset;
        yTR = 1;
        xBL = -1;
        yBL = 1 - mYOffset;
        xBR = -1 + mXOffset;
        yBR = 1 - mYOffset;
        for(i = 0; i < mScreenHeight; i++){
            for(j = 0; j < mScreenWidth; j++){
                //assign coords to vertex array
                mVertices[0] = xBL;
                mVertices[1] = yBL;
                mVertices[2] = 0f;
                mVertices[3] = xBR;
                mVertices[4] = yBR;
                mVertices[5] = 0f;
                mVertices[6] = xTL;
                mVertices[7] = yTL;
                mVertices[8] = 0f;
                mVertices[9] = xTR;
                mVertices[10] = yTR;
                mVertices[11] = 0f;
                //wrap the vertices in a direct FloatBuffer for this quad
                mBuffer[i][j] = ByteBuffer.allocateDirect(mVertices.length * 4);
                mBuffer[i][j].order(ByteOrder.nativeOrder());
                mVBuffer[i][j] = mBuffer[i][j].asFloatBuffer();
                mVBuffer[i][j].put(mVertices);
                mVBuffer[i][j].position(0);
                //transform right
                transformRight();
            }
            //transform down
            transformDown();
            //reset x
            xTL = -1;
            xTR = -1 + mXOffset;
            xBL = -1;
            xBR = -1 + mXOffset;
        }
    }

    //transform all the coordinates 1 "pixel" to the right
    private void transformRight(){
        xTL = xTL + mXOffset; //TL
        xBL = xBL + mXOffset; //BL
        xBR = xBR + mXOffset; //BR
        xTR = xTR + mXOffset; //TR
    }

    //transform all of the coordinates 1 pixel down
    private void transformDown(){
        yTL = yTL - mYOffset;
        yBL = yBL - mYOffset;
        yBR = yBR - mYOffset;
        yTR = yTR - mYOffset;
    }
}
Basically I'm trying to do it the same way as this (the square in lesson 2), but with far more objects. I'm assuming 1 and -1 roughly correspond to the screen edges (I know this isn't totally true, but I don't really understand how to use projection matrices and want to keep this as simple as possible, unless there's a good resource out there I can learn from), and I understand that OpenGL's coordinates are separate from real screen coordinates. When I run my code I just get a black screen (it should be green), but LogCat shows the garbage collector working away, so I know something is happening. I'm not sure if it's a bug caused by me not doing something right, or if it's just REALLY slow. In either case, what should I do differently? I feel like I may be going about this all wrong. I've looked around, and most of the tutorials and examples are based on the link above.
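In case it clarifies what I mean by the screen-edge assumption, this is the kind of projection setup I believe would pin -1 and 1 exactly to the screen edges. It's an untested sketch rather than something I've gotten working, and the clipping range I picked (-1 to 1 on every axis) is just a guess:

//Untested sketch: replacing the gluPerspective call in onSurfaceChanged with an
//orthographic projection, so that x and y from -1 to 1 should map directly to
//the edges of the viewport (the near/far values of -1 and 1 are my own guess).
public void onSurfaceChanged(GL10 gl, int w, int h){
    gl.glViewport(0, 0, w, h);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f); //left, right, bottom, top, near, far
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}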
Edit: I know I could also go about this by generating a texture that fills the entire screen and just drawing that, though the link where I read about it said this would be slower, since you're not supposed to re-upload a texture every frame. That said, I only really need to regenerate the texture when the view changes (panning or zooming), so I could write my code to take that into account. The main difficulty I'm having at the moment is drawing the bitmap and getting it to display correctly.
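For what it's worth, this is roughly how I pictured the texture route. It's an untested outline (the method name, the field, and the ARGB_8888 choice are mine), using Bitmap and GLUtils.texImage2D to upload the per-pixel colors:

//Untested outline: build a Bitmap from an array of per-pixel ARGB colors and
//upload it as a texture, to be drawn on a single full-screen quad.
//Needs android.graphics.Bitmap and android.opengl.GLUtils.
private int[] mTextureId = new int[1];

private void uploadFractalTexture(GL10 gl, int[] pixelColors, int w, int h){
    Bitmap bitmap = Bitmap.createBitmap(pixelColors, w, h, Bitmap.Config.ARGB_8888);
    gl.glGenTextures(1, mTextureId, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId[0]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); //copies the bitmap into the bound texture
    bitmap.recycle();
}

Drawing would then, I think, just be one textured quad covering the screen, with texturing enabled and a texture coordinate array set up alongside the vertex array, but that's the part I haven't figured out how to get displaying correctly.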