
I have developed a map for Android using OpenGL ES. It displays my map just fine, and I have just added touch-event handling so that I can pan and fling it, which also works.

But there is a lag of about 1 second. I would obviously like panning of the image to be as smooth as possible.

I have a lot of vector data to display, but there must still be other ways to make the interaction smoother. I have 17,000 polygons (land parcels) and about 1,500 lines (road centerlines), all of which are pre-loaded into lists holding FloatBuffers when the application starts. When I enter my map activity, the renderer loops through those lists, as you will see in the code below.

I would really appreciate some pointers on how to speed this up.

(Also note: please ignore the scale detector and any rotation code; they don't work, and right now I'm focusing only on panning the map.)

[Screenshot of the rendered map]

package com.ANDRRA1.utilities;

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.GestureDetector;
import android.view.ScaleGestureDetector;
import android.view.animation.DecelerateInterpolator;
import android.view.animation.Interpolator;

public class CustomGLView extends GLSurfaceView {

    public vboCustomGLRenderer mGLRenderer;

    public CustomGLView(Context context){
        super(context);
    }

    public CustomGLView(Context context, AttributeSet attrs) 
    {
        super(context, attrs);  
    }

    // Overloads the superclass setRenderer(Renderer) with the concrete renderer type.
    public void setRenderer(vboCustomGLRenderer renderer) 
    {
        mGLRenderer = renderer;
        super.setRenderer(renderer);

        super.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }

    private static final int INVALID_POINTER_ID = -1;

    private float mPosX;
    private float mPosY;

    private float mLastTouchX;
    private float mLastTouchY;
    private float mLastGestureX;
    private float mLastGestureY;
    private int mActivePointerId = INVALID_POINTER_ID;
    private int mActivePointerId2 = INVALID_POINTER_ID;
    float oL1X1, oL1Y1, oL1X2, oL1Y2;

    private ScaleGestureDetector mScaleDetector = new ScaleGestureDetector(getContext(), new ScaleListener());
    private GestureDetector mGestureDetector = new GestureDetector(getContext(), new GestureListener());

    private float mScaleFactor = 1.f;

    // The following variables control the fling gesture.
    private Interpolator animateInterpolator;
    private long startTime;
    private long endTime;
    private float totalAnimDx;
    private float totalAnimDy;
    private float lastAnimDx;
    private float lastAnimDy;

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // Let the ScaleGestureDetector inspect all events.
        mScaleDetector.onTouchEvent(ev);
        mGestureDetector.onTouchEvent(ev);

        final int action = ev.getAction();
        switch (action & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN: {

                if (!mScaleDetector.isInProgress()) {
                    final float x = ev.getX();
                    final float y = ev.getY();

                    mLastTouchX = x;
                    mLastTouchY = y;
                    mActivePointerId = ev.getPointerId(0);
                }
                break;
            }
            case MotionEvent.ACTION_POINTER_DOWN: {
                if (mScaleDetector.isInProgress()) {
                    mActivePointerId2 = ev.getPointerId(1);

                    mLastGestureX = mScaleDetector.getFocusX();
                    mLastGestureY = mScaleDetector.getFocusY();

                    oL1X1 = ev.getX(ev.findPointerIndex(mActivePointerId));
                    oL1Y1 = ev.getY(ev.findPointerIndex(mActivePointerId));
                    oL1X2 = ev.getX(ev.findPointerIndex(mActivePointerId2));
                    oL1Y2 = ev.getY(ev.findPointerIndex(mActivePointerId2));
                }
                break;
            }

            case MotionEvent.ACTION_MOVE: {

                // Only move if the ScaleGestureDetector isn't processing a gesture.
                if (!mScaleDetector.isInProgress()) {
                    final int pointerIndex = ev.findPointerIndex(mActivePointerId);
                    final float x = ev.getX(pointerIndex);
                    final float y = ev.getY(pointerIndex);

                    final float dx = x - mLastTouchX;
                    final float dy = y - mLastTouchY;

                    mPosX += dx;
                    mPosY += dy;

                    mGLRenderer.setEye(dx, dy);
                    requestRender();

                    mLastTouchX = x;
                    mLastTouchY = y;
                }
                else{
                    final float gx = mScaleDetector.getFocusX();
                    final float gy = mScaleDetector.getFocusY();

                    final float gdx = gx - mLastGestureX;
                    final float gdy = gy - mLastGestureY;

                    mPosX += gdx;
                    mPosY += gdy;

                    mLastGestureX = gx;
                    mLastGestureY = gy;
                }

                break;
            }

            case MotionEvent.ACTION_UP: {
                mActivePointerId = INVALID_POINTER_ID;

                break;
            }
            case MotionEvent.ACTION_CANCEL: {
                mActivePointerId = INVALID_POINTER_ID;
                break;
            }
            case MotionEvent.ACTION_POINTER_UP: {

                final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK) 
                        >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
                final int pointerId = ev.getPointerId(pointerIndex);
                if (pointerId == mActivePointerId) {
                    // This was our active pointer going up. Choose a new
                    // active pointer and adjust accordingly.
                    final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                    mLastTouchX = ev.getX(newPointerIndex);
                    mLastTouchY = ev.getY(newPointerIndex);
                    mActivePointerId = ev.getPointerId(newPointerIndex);
                }
                else{
                    final int tempPointerIndex = ev.findPointerIndex(mActivePointerId);
                    mLastTouchX = ev.getX(tempPointerIndex);
                    mLastTouchY = ev.getY(tempPointerIndex);
                }

                break;
            }
        }

        return true;
    }

    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            mScaleFactor *= detector.getScaleFactor();

            // Don't let the object get too small or too large.
            mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 10000.0f));

            //invalidate();
            return true;
        }
    }

    private class GestureListener extends GestureDetector.SimpleOnGestureListener {
        @Override
        public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {

            if (e1 == null || e2 == null){
                return false;
            }
            final float distanceTimeFactor = 0.4f;
            final float totalDx = (distanceTimeFactor * velocityX/2);
            final float totalDy = (distanceTimeFactor * velocityY/2);

            onAnimateMove(totalDx, totalDy, (long) (1000 * distanceTimeFactor));
            return true;
        }
    }

    public void onAnimateMove(float dx, float dy, long duration) {
        animateInterpolator = new DecelerateInterpolator();
        startTime = System.currentTimeMillis();
        endTime = startTime + duration;
        totalAnimDx = dx;
        totalAnimDy = dy;
        lastAnimDx = 0;
        lastAnimDy = 0;

        post(new Runnable() {
            @Override
            public void run() {
                onAnimateStep();
            }
        });
    }

    private void onAnimateStep() {
        long curTime = System.currentTimeMillis();
        float percentTime = (float) (curTime - startTime) / (float) (endTime - startTime);
        float percentDistance = animateInterpolator.getInterpolation(percentTime);
        float curDx = percentDistance * totalAnimDx;
        float curDy = percentDistance * totalAnimDy;

        float diffCurDx = curDx - lastAnimDx;
        float diffCurDy = curDy - lastAnimDy;
        lastAnimDx = curDx;
        lastAnimDy = curDy;

        doAnimation(diffCurDx, diffCurDy);

        if (percentTime < 1.0f) {
            post(new Runnable() {
                @Override
                public void run() {
                    onAnimateStep();
                }
            });
        }
    }

    public void doAnimation(float diffDx, float diffDy) {
        mPosX += diffDx;
        mPosY += diffDy;

        mGLRenderer.setEye(diffDx, diffDy);
        requestRender();
    }

    public float angleBetween2Lines(float L1X1, float L1Y1, float L1X2, float L1Y2, float L2X1, float L2Y1, float L2X2, float L2Y2)
    {
        float angle1 = (float) Math.atan2(L1Y1 - L1Y2, L1X1 - L1X2);
        float angle2 = (float) Math.atan2(L2Y1 - L2Y2, L2X1 - L2X2);

        float angleDelta = findAngleDelta( (float)Math.toDegrees(angle1), (float)Math.toDegrees(angle2));
        return -angleDelta;
    }

    private float findAngleDelta( float angle1, float angle2 )
    {
        return angle1 - angle2;
    }
}

And the renderer:

package com.ANDRRA1.utilities;

import java.nio.FloatBuffer;
import java.util.ListIterator;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;

public class vboCustomGLRenderer  implements GLSurfaceView.Renderer {

    /**
     * Store the model matrix. This matrix is used to move models from object space (where each model can be thought
     * of being located at the center of the universe) to world space.
     */
    private float[] mModelMatrix = new float[16];

    /**
     * Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
     * it positions things relative to our eye.
     */
    private float[] mViewMatrix = new float[16];

    /** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
    private float[] mProjectionMatrix = new float[16];

    /** Allocate storage for the final combined matrix. This will be passed into the shader program. */
    private float[] mMVPMatrix = new float[16];

    /** This will be used to pass in the transformation matrix. */
    private int mMVPMatrixHandle;

    /** This will be used to pass in model position information. */
    private int mPositionHandle;

    /** This will be used to pass in model color information. */
    private int mColorUniformLocation;

    /** How many bytes per float. */
    private final int mBytesPerFloat = 4;   

    /** Offset of the position data. */
    private final int mPositionOffset = 0;

    /** Size of the position data in elements. */
    private final int mPositionDataSize = 3;

    /** Stride of the position data, in bytes. */
    private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;

    // geometry types
    private final byte wkbPoint = 1;
    private final byte wkbLineString = 2;
    private final byte wkbPolygon = 3;
    //private final byte wkbMultiPoint = 4;
    //private final byte wkbMultiLineString = 5;
    //private final byte wkbMultiPolygon = 6;
    //private final byte wkbGeometryCollection = 7;

    // Big Endian
    final int wkbXDR = 0;
    // Little Endian
    final int wkbNDR = 1;


    float count = 0;

    // Position the eye behind the origin.
    public volatile float eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2);
    public volatile float eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2);

    // Position the eye behind the origin.
    //final float eyeZ = 1.5f;
    public volatile float eyeZ = 1.5f;

    // We are looking toward the distance
    public volatile float lookX = eyeX;
    public volatile float lookY = eyeY;
    public volatile float lookZ = 0.0f;

    // Set our up vector. This is where our head would be pointing were we holding the camera.
    public volatile float upX = 0.0f;
    public volatile float upY = 1.0f;
    public volatile float upZ = 0.0f;


    public vboCustomGLRenderer() {
    }

    public void setEye(float x, float y){

        eyeX -= (x/screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (y/screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the camera position (View matrix)
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {


        Thread.currentThread().setPriority(Thread.MIN_PRIORITY);

        // Set the background frame color
        //White
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

        // Set the view matrix. This matrix can be said to represent the camera position.
        // NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
        // view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);

        final String vertexShader =
            "uniform mat4 u_MVPMatrix;      \n"     // A constant representing the combined model/view/projection matrix.

          + "attribute vec4 a_Position;     \n"     // Per-vertex position information we will pass in.
          + "attribute vec4 a_Color;        \n"     // Per-vertex color information we will pass in.              

          + "varying vec4 v_Color;          \n"     // This will be passed into the fragment shader.

          + "void main()                    \n"     // The entry point for our vertex shader.
          + "{                              \n"
          + "   v_Color = a_Color;          \n"     // Pass the color through to the fragment shader. 
                                                    // It will be interpolated across the triangle.
          + "   gl_Position = u_MVPMatrix   \n"     // gl_Position is a special variable used to store the final position.
          + "               * a_Position;   \n"     // Multiply the vertex by the matrix to get the final point in                                                                   
          + "}                              \n";    // normalized screen coordinates.

        final String fragmentShader =
                "precision mediump float;       \n"     // Set the default precision to medium. We don't need as high of a 
                                                        // precision in the fragment shader.                
              + "uniform vec4 u_Color;          \n"     // This is the color from the vertex shader interpolated across the 
                                                        // triangle per fragment.             
              + "void main()                    \n"     // The entry point for our fragment shader.
              + "{                              \n"
              + "   gl_FragColor = u_Color;     \n"     // Pass the color directly through the pipeline.          
              + "}                              \n";                                                

        // Load in the vertex shader.
        int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

        if (vertexShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(vertexShaderHandle, vertexShader);

            // Compile the shader.
            GLES20.glCompileShader(vertexShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(vertexShaderHandle);
                vertexShaderHandle = 0;
            }
        }

        if (vertexShaderHandle == 0)
        {
            throw new RuntimeException("Error creating vertex shader.");
        }

        // Load in the fragment shader.
        int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);

        if (fragmentShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);

            // Compile the shader.
            GLES20.glCompileShader(fragmentShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(fragmentShaderHandle);
                fragmentShaderHandle = 0;
            }
        }

        if (fragmentShaderHandle == 0)
        {
            throw new RuntimeException("Error creating fragment shader.");
        }

        // Create a program object and store the handle to it.
        int programHandle = GLES20.glCreateProgram();

        if (programHandle != 0) 
        {
            // Bind the vertex shader to the program.
            GLES20.glAttachShader(programHandle, vertexShaderHandle);           

            // Bind the fragment shader to the program.
            GLES20.glAttachShader(programHandle, fragmentShaderHandle);

            // Bind attributes
            GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
            GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

            // Link the two shaders together into a program.
            GLES20.glLinkProgram(programHandle);

            // Get the link status.
            final int[] linkStatus = new int[1];
            GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

            // If the link failed, delete the program.
            if (linkStatus[0] == 0) 
            {               
                GLES20.glDeleteProgram(programHandle);
                programHandle = 0;
            }
        }

        if (programHandle == 0)
        {
            throw new RuntimeException("Error creating program.");
        }

        // Set program handles. These will later be used to pass in values to the program.
        mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
        mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
        mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");

        // Tell OpenGL to use this program when rendering.
        GLES20.glUseProgram(programHandle);

    }

    static float mWidth = 0;
    static float mHeight = 0;
    static float mLeft = 0;
    static float mRight = 0;
    static float mTop = 0;
    static float mBottom = 0;
    static float mRatio = 0;
    float screen_width_height_ratio;
    float screen_height_width_ratio;
    final float near = 1.5f;
    final float far = 10.0f;

    double screen_vs_map_horz_ratio = 0;
    double screen_vs_map_vert_ratio = 0;

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {

        // Adjust the viewport based on geometry changes,
        // such as screen rotation
        // Set the OpenGL viewport to the same size as the surface.
        GLES20.glViewport(0, 0, width, height);
        //Log.d("","onSurfaceChanged");

        screen_width_height_ratio = (float) width / height;
        screen_height_width_ratio = (float) height / width;

        //Initialize
        if (mRatio == 0){
            mWidth = (float) width;
            mHeight = (float) height;

            //map height to width ratio
            float map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
            float map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
            float map_width_height_ratio = map_extents_width/map_extents_height;
            //float map_height_width_ratio = map_extents_height/map_extents_width;
            if (screen_width_height_ratio > map_width_height_ratio){
                mRight = (screen_width_height_ratio * map_extents_height)/2;
                mLeft = -mRight;
                mTop = map_extents_height/2;
                mBottom = -mTop;
            }
            else{
                mRight = map_extents_width/2;
                mLeft = -mRight;
                mTop = (screen_height_width_ratio * map_extents_width)/2;
                mBottom = -mTop;
            }

            mRatio = screen_width_height_ratio;
        }

        if (screen_width_height_ratio != mRatio){
            final float wRatio = width/mWidth;
            final float oldWidth = mRight - mLeft;
            final float newWidth = wRatio * oldWidth;
            final float widthDiff = (newWidth - oldWidth)/2;
            mLeft = mLeft - widthDiff;
            mRight = mRight + widthDiff;

            final float hRatio = height/mHeight;
            final float oldHeight = mTop - mBottom;
            final float newHeight = hRatio * oldHeight;
            final float heightDiff = (newHeight - oldHeight)/2;
            mBottom = mBottom - heightDiff;
            mTop = mTop + heightDiff;

            mWidth = (float) width;
            mHeight = (float) height;

            mRatio = screen_width_height_ratio;
        }

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        Matrix.frustumM(mProjectionMatrix, 0, mLeft, mRight, mBottom, mTop, near, far);
    }

    @Override
    public void onDrawFrame(GL10 unused) {

        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

        // The following lists hold the vector data in FloatBuffers, pre-loaded when the application starts.
        ListIterator<mapLayer> orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
        while (orgNonAssetCatLayersList_it.hasNext()) {
            mapLayer MapLayer = orgNonAssetCatLayersList_it.next();

            ListIterator<FloatBuffer> mapLayerObjectList_it = MapLayer.objFloatBuffer.listIterator();
            ListIterator<Byte> mapLayerObjectTypeList_it = MapLayer.objTypeArray.listIterator();
            while (mapLayerObjectTypeList_it.hasNext()) {

                switch (mapLayerObjectTypeList_it.next()) {
                    case wkbPoint:
                        break;
                    case wkbLineString:
                        Matrix.setIdentityM(mModelMatrix, 0);
                        //Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f);
                        drawLineString(mapLayerObjectList_it.next(), MapLayer.lineStringObjColor);
                        break;
                    case wkbPolygon:
                        Matrix.setIdentityM(mModelMatrix, 0);
                        //Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f);
                        drawPolygon(mapLayerObjectList_it.next(), MapLayer.polygonObjColor);
                        break;
                }
            }
        }
    }

    private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

        GLES20.glLineWidth(2.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_STRIP, 0, geometryBuffer.capacity()/mPositionDataSize);
    }

    private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

        GLES20.glLineWidth(1.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_LOOP, 0, geometryBuffer.capacity()/mPositionDataSize);
    }
}

5 Answers


The other answers here are already very good and show where to look and what to improve. I suspect the slowdown comes from calling drawPolygon and drawLineString thousands of times per frame (since you have thousands of polygons and lines), with each of those methods making several OpenGL calls of its own. You really want to batch these calls so that all the polygons are drawn in one draw call and all the lines in another.

In my experience this is hard to time accurately: OpenGL buffers its calls, and even the Android tracer gives inaccurate results. What you can do is remove or change code between runs, time the whole draw loop, and see how things change.

Try removing Thread.currentThread().setPriority(Thread.MIN_PRIORITY);, and rework the app to put your data into vertex buffer objects bound with GL_STATIC_DRAW. Draw all the lines with a single draw call. To avoid state changes that break up draw calls, you can pass the color as an attribute instead of a uniform. You can also compute and upload the matrix uniform once per overall draw instead of once per line.
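
For illustration, here is a rough, untested sketch of that batching idea. The LineBatch class and its names are mine, not an existing API, and it assumes the line strips have already been converted into independent GL_LINES segments (interior vertices duplicated) so a single draw call doesn't connect separate roads:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import java.util.List;

    import android.opengl.GLES20;

    public class LineBatch {
        public int vboHandle;     // GL buffer object name
        public int vertexCount;   // total vertices in the batch

        // Concatenate the per-object buffers and upload them once with
        // GL_STATIC_DRAW. Call on the GL thread (e.g. from
        // onSurfaceCreated), not per frame.
        public static LineBatch upload(List<FloatBuffer> parts, int floatsPerVertex) {
            int totalFloats = 0;
            for (FloatBuffer fb : parts) totalFloats += fb.capacity();

            FloatBuffer batched = ByteBuffer.allocateDirect(totalFloats * 4)
                    .order(ByteOrder.nativeOrder()).asFloatBuffer();
            for (FloatBuffer fb : parts) {
                fb.position(0);
                batched.put(fb);
            }
            batched.position(0);

            int[] name = new int[1];
            GLES20.glGenBuffers(1, name, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, name[0]);
            GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, totalFloats * 4,
                    batched, GLES20.GL_STATIC_DRAW);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

            LineBatch batch = new LineBatch();
            batch.vboHandle = name[0];
            batch.vertexCount = totalFloats / floatsPerVertex;
            return batch;
        }

        // Per frame: one bind, one attribute pointer, one draw call.
        public void draw(int positionHandle) {
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboHandle);
            GLES20.glEnableVertexAttribArray(positionHandle);
            GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 3 * 4, 0);
            GLES20.glDrawArrays(GLES20.GL_LINES, 0, vertexCount);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
        }
    }

Polygons can be batched the same way; that turns thousands of JNI crossings per frame into a handful.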

Answered 2013-07-10T19:34:02.823

Rather than trying to optimize rendering efficiency, I'd suggest you focus on limiting how much you draw based on the zoom level and the viewport. If you're getting decent response times with as many as 17,000 polygons, you're probably not making many inefficient rendering calls.

Look at the image you posted. If you're rendering 17,000 polygons and 1,500 lines there, most of that detail is wasted, because we can't see detail at that level, right? I certainly can't make out 17,000 polygons.

Instead, keep the full detail loaded, and write code that limits the detail drawn based on the zoom level. Unsurprisingly, this approach is called a level-of-detail algorithm. If you've ever done much with mipmaps, it's based on the same principle.

I would compute level-of-detail data for all the zoom levels you want, and reference this cached data based on the current zoom level. When the user is between your discrete zoom levels, just use the nearest one.

When you need high detail at closer zoom levels, you can stay fast by using a spatial-partitioning algorithm to cull the lines and polygons in your level-of-detail data that don't need to be rendered.

Let me know if you need anything clarified. This stuff is easy to talk about but tricky to code. Good luck!

Edit:

One LoD implementation is to compute the polygon and line positions at a given matrix scale, then discard any points that aren't far enough apart. I would simply cast their float positions to integers and see how that looks. Do this for several zoom levels, store the results in arrays, and then round whatever zoom level you're at to pick the nearest cached LoD data.
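
As a rough sketch of that decimation pass (my own helper, assuming 3 floats per vertex as in your buffers, and a zoom-dependent scale pixelsPerMapUnit):

    // Keep only vertices that land on a new integer pixel at this zoom.
    // Needs java.nio.ByteBuffer / ByteOrder / FloatBuffer.
    static FloatBuffer decimate(FloatBuffer src, float pixelsPerMapUnit) {
        int n = src.capacity() / 3;
        float[] kept = new float[src.capacity()];
        int keptCount = 0;
        int lastPx = Integer.MIN_VALUE, lastPy = Integer.MIN_VALUE;
        for (int i = 0; i < n; i++) {
            float x = src.get(i * 3);
            float y = src.get(i * 3 + 1);
            float z = src.get(i * 3 + 2);
            int px = (int) (x * pixelsPerMapUnit);   // float -> int, as described above
            int py = (int) (y * pixelsPerMapUnit);
            if (px != lastPx || py != lastPy) {      // drop points that don't move a pixel
                kept[keptCount * 3]     = x;
                kept[keptCount * 3 + 1] = y;
                kept[keptCount * 3 + 2] = z;
                keptCount++;
                lastPx = px;
                lastPy = py;
            }
        }
        FloatBuffer out = ByteBuffer.allocateDirect(keptCount * 3 * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        out.put(kept, 0, keptCount * 3);
        out.position(0);
        return out;
    }

Run it once per zoom level at load time and cache the results; at render time pick the buffer for the nearest precomputed level.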

Answered 2013-07-10T04:58:22.663

A few things jump out at me.

1) Don't create objects in your drawing routines, such as onDrawFrame. Iterators like

    ListIterator<mapLayer> orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();

create objects, and creating objects inside your draw routine hurts performance; a rough iterator-free version follows below.
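
For example (this sketch assumes the collections are ArrayLists, so get(i) is cheap, and that the type and buffer lists are parallel, i.e. one buffer entry per type entry, which the iterator version does not strictly require):

    // Inside onDrawFrame: plain indexed loops, no per-frame iterator objects.
    List<mapLayer> layers = default_settings.orgNonAssetCatMappableLayers;
    for (int i = 0, n = layers.size(); i < n; i++) {
        mapLayer layer = layers.get(i);
        for (int j = 0, m = layer.objTypeArray.size(); j < m; j++) {
            byte type = layer.objTypeArray.get(j);      // a byte[] would avoid even this unboxing
            FloatBuffer geom = layer.objFloatBuffer.get(j);
            // dispatch to drawLineString / drawPolygon exactly as before
        }
    }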

2) Reduce OpenGL calls as much as possible. Java still has to cross the JNI boundary for every OpenGL call you make, so it helps if you can put everything into a few large buffers and avoid changing OpenGL state. I would try to organize the data into as few buffers as possible: one for drawing the lines and another for drawing the polygons.

You may also want to consider rendering only part of your data at different zoom levels. Others may have better ideas for this, and if you look around online I'm sure you'll find them.

3) Always measure your performance to see where your real problem is. Android has several tools for this (Traceview, Systrace, the OpenGL ES Tracer).

See more general Android performance tips at http://developer.android.com/training/articles/perf-tips.html

Answered 2013-07-10T04:59:42.627

Has nobody mentioned FBOs?? I mean, LoD would be a good approach, but in some cases you can use FBOs. It depends on many things, but you should consider them!

Instead of drawing the scene every frame, you can render the whole scene (or part of it) into a framebuffer object once and just display that image. That cuts the polygon count down to just a few (in the best case 2, i.e. one quad).
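
As a minimal sketch of that off-screen setup with plain GLES20 calls (surfaceWidth/surfaceHeight are assumed to come from onSurfaceChanged; error handling kept to the bare minimum):

    private int mFboHandle;
    private int mFboTexture;

    // Called once on the GL thread, e.g. from onSurfaceChanged.
    private void createMapFbo(int surfaceWidth, int surfaceHeight) {
        int[] tex = new int[1];
        int[] fbo = new int[1];

        // Color texture the map will be rendered into. Non-power-of-two
        // sizes in ES 2.0 require CLAMP_TO_EDGE and no mipmaps.
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                surfaceWidth, surfaceHeight, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

        // Framebuffer with that texture as its color attachment.
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, tex[0], 0);
        if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new RuntimeException("Map FBO is incomplete");
        }
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // back to the screen

        mFboHandle = fbo[0];
        mFboTexture = tex[0];
    }

To refresh the cache you bind mFboHandle, draw the whole vector map once, and bind framebuffer 0 again; after that, each frame just draws one quad textured with mFboTexture until the cached image goes stale.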

A new problem is how to handle panning/zooming, since you'd have to regenerate the FBO on the fly. You could draw a blurry image until the FBO is ready, or take a more advanced approach with some preloading, such as dividing the map into squares and preloading nine of them (the center plus its 8 neighbors), much like google-maps loads its map data.

You also need to watch memory consumption; you can't just render every combination into an FBO.

I'll say it again: FBOs are not a standalone solution. Just keep them in mind and see whether you can use them somewhere!

Answered 2013-07-12T16:06:24.653

Are you running this on an emulator or on a real phone?

Could you add some code to check the rendering time of your functions?

Try to find out where the time is being spent that way.
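
Something like this, for example (profiling only: glFinish() stalls the pipeline, but without it OpenGL's buffering makes the numbers meaningless; needs android.util.Log):

    @Override
    public void onDrawFrame(GL10 unused) {
        long t0 = System.nanoTime();

        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
        // ... existing drawing code ...

        GLES20.glFinish();  // force all queued GL commands to complete before timing
        Log.d("Renderer", "frame took " + (System.nanoTime() - t0) / 1000000 + " ms");
    }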

Answered 2013-07-10T04:50:51.770