
Question: Does anyone know how to use the Tango Java (Jacobi) API onFrameAvailable() callback to get Tango's color camera image buffer?

Background:

I have an augmented reality application that displays video as the Tango's background. Following this example, I successfully built the video overlay sample using the Java API (Jacobi). My application works, and the video renders correctly in the background.

As part of the application, I want to store a copy of the video back buffer when the user presses a button, so I need access to the camera's RGB data.

According to the Jacobi release notes, any class that wants access to the camera RGB data should implement onFrameAvailable() in OnTangoUpdateListener. I did this, but I don't see any handle or parameter for actually getting the pixels:

Java API

@Override
public void onFrameAvailable(int cameraId) {
    //Log.w(TAG, "Frame available!");
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        tangoCameraPreview.onFrameAvailable();
    }
}

As shown, onFrameAvailable has only a single parameter: an integer holding the id of the camera that produced the view. Contrast this with the C library callback, which provides access to the image buffer:

C API

TangoErrorType TangoService_connectOnFrameAvailable(
    TangoCameraId id, void* context,
    void (*onFrameAvailable)(void* context, TangoCameraId id,
                             const TangoImageBuffer* buffer));

I was expecting the Java method to include something like the buffer object in the C API call.
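For illustration only, this is roughly the overload I was hoping for. It does not exist in the Jacobi Java API; TangoImageBuffer here is simply the C struct imagined as a Java class:

// Hypothetical -- this overload does NOT exist in the Jacobi Java API
public void onFrameAvailable(int cameraId, TangoImageBuffer buffer) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        // buffer would carry width, height, format, and the raw pixel data,
        // analogous to the TangoImageBuffer* passed to the C callback above
    }
}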

What I've tried

I tried extending the TangoCameraPreview class and saving the image there, but I only get a black background.

public class CameraSurfaceView extends TangoCameraPreview {


    private boolean takeSnapShot = false;

    public void takeSnapShot() {
        takeSnapShot = true;
    }

    /**
     * Grabs a copy of the surface (which is rendering the Tango color camera)
     * https://stackoverflow.com/questions/14620055/how-to-take-a-screenshot-of-androids-surface-view
     */
    public void screenGrab2(){

        int width = this.getWidth();
        int height = this.getHeight();
        long fileprefix = System.currentTimeMillis();

        View v = getRootView();
        v.setDrawingCacheEnabled(true);
        // This is the important part :)
        // Without it the view will have a dimension of 0,0 and the bitmap will be null
        v.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
                MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
        v.layout(0, 0, width, height);

        v.buildDrawingCache(true);

        Bitmap image = v.getDrawingCache();

        //TODO: make separate subdirectories for each exploitation session
        String targetPath =Environment.getExternalStorageDirectory()  + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".jpg";   

        if(!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }

        try {           
            File targetDirectory = new File(targetPath);
            File photo=new File(targetDirectory, imageFileName);
            FileOutputStream fos=new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.JPEG, 100, fos);          
            fos.flush();
            fos.close();
            Log.i(this.getClass().getCanonicalName(), "Grabbed an image in target path:" + targetPath);
        } catch (FileNotFoundException e) {
            Log.e(CameraSurfaceView.class.getName(), "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(CameraSurfaceView.class.getName(), "Exception " + e);
            e.printStackTrace();
        }

    }


    /**
     * Grabs a copy of the surface (which is rendering the Tango color camera)
     */
    public void screenGrab(){

        int width = this.getWidth();
        int height = this.getHeight();
        long fileprefix = System.currentTimeMillis();

        Bitmap image = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

        // Draw the view into the bitmap (a SurfaceView's GL content is not captured
        // this way, which is why the result comes out black)
        Canvas canvas = new Canvas(image);
        this.draw(canvas);

        //TODO: make separate subdirectories for each exploitation session
        String targetPath =Environment.getExternalStorageDirectory()  + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".jpg";   

        if(!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }

        try {           
            File targetDirectory = new File(targetPath);
            File photo=new File(targetDirectory, imageFileName);
            FileOutputStream fos=new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.JPEG, 100, fos);          
            fos.flush();
            fos.close();
            Log.i(this.getClass().getCanonicalName(), "Grabbed an image in target path:" + targetPath);
        } catch (FileNotFoundException e) {
            Log.e(CameraSurfaceView.class.getName(), "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(CameraSurfaceView.class.getName(), "Exception " + e);
            e.printStackTrace();
        }

    }

    @Override
    public void onFrameAvailable() {
        super.onFrameAvailable();
        if(takeSnapShot) {
            screenGrab();
            takeSnapShot = false;

        }
    }

    public CameraSurfaceView(Context context) {
        super(context);
        // TODO Auto-generated constructor stub
    }
}

Where I'm headed

I'm getting ready to root the device and then use the onFrameAvailable method to cue an external root process such as one of these:

Post 23610900

Post 10965409

Post 4998527

I'm hoping I can find a way to avoid the root hack.
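For reference, a rough sketch of what that root-based fallback would look like (a hypothetical helper, not code from the posts above; it assumes a rooted device with the su binary and shells out to Android's stock screencap tool):

private void grabScreenWithRoot(String outputPath) {
    final String tag = "RootScreenGrab";
    try {
        // Ask the root shell to run screencap, which writes the current display
        // contents to a PNG at outputPath. Requires a rooted device.
        Process p = Runtime.getRuntime().exec(
                new String[]{"su", "-c", "screencap -p " + outputPath});
        p.waitFor();
        Log.i(tag, "screencap exited with code " + p.exitValue());
    } catch (IOException e) {
        Log.e(tag, "Root screen grab failed", e);
    } catch (InterruptedException e) {
        Log.e(tag, "Root screen grab interrupted", e);
    }
}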

Thanks in advance!


2 Answers


OK, I figured out a way to make it work.

UPDATE: My working solution is here:

https://github.com/stevehenderson/GoogleTango_AR_VideoCapture

I basically set up a "man-in-the-middle (renderer)" attack on the render pipeline. This approach intercepts the setRenderer call from the TangoCameraPreview base class and gives access to the base renderer's onDrawFrame() method and the GL context. I then add extra methods to this intercepting renderer that allow the GL buffer to be read.

The general approach

1) Extend the TangoCameraPreview class (ReadableTangoCameraPreview in my case). Override setRenderer(GLSurfaceView.Renderer renderer), keep a reference to the base renderer, and replace it with your own "wrapping" GLSurfaceView.Renderer that adds methods for dumping the back buffer to an image on the device.

2) Create your own GLSurfaceView.Renderer implementation (my ScreenGrabRenderer class) that implements all of the GLSurfaceView.Renderer methods by passing them through to the base renderer captured in step 1. Also add a few new methods to "cue" it when you want to grab the image.

3) Implement the ScreenGrabRenderer described in step 2 above.

4) Use a callback interface (my TangoCameraScreengrabCallback) to communicate when an image has been copied.

It works pretty well and lets you grab the camera pixels into an image without rooting the device.

Note: I did not need to closely synchronize the captured image with the point cloud, so I haven't checked the latency. For best results you may need to invoke the C methods proposed by Mark.

Here's what each of my classes looks like:

///Main Activity Class where bulk of Tango code is
.
.
.

// Create our Preview view and set it as the content of our activity.
mTangoCameraPreview = new ReadableTangoCameraPreview(getActivity());

RelativeLayout preview = (RelativeLayout) view.findViewById(R.id.camera_preview);
preview.addView(mTangoCameraPreview);


.
.
.
//When you want to take a snapshot, call the takeSnapShotMethod()
//(you can make this respond to a button)
mTangoCameraPreview.takeSnapShot();
.
.
.
.
.
//Main Tango Listeners
@Override
public void onFrameAvailable(final int cameraId) {
    // Forward the color camera frame to the preview on the UI thread
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
                mTangoCameraPreview.onFrameAvailable();
            }       
        }
    });
}

ReadableTangoCameraPreview Class

public class ReadableTangoCameraPreview extends TangoCameraPreview implements  TangoCameraScreengrabCallback  {

    Activity mainActivity;
    private static final String TAG = ReadableTangoCameraPreview.class.getSimpleName();

    //An intercept renderer
    ScreenGrabRenderer screenGrabRenderer;

    private boolean takeSnapShot = false;

    @Override
    public void setRenderer(GLSurfaceView.Renderer renderer) {  
        //Create our "man in the middle"
        screenGrabRenderer= new ScreenGrabRenderer(renderer);

        //Set its callback
        screenGrabRenderer.setTangoCameraScreengrabCallback(this);

        //Tell the TangoCameraPreview class to use this intermediate renderer
        super.setRenderer(screenGrabRenderer);
        Log.i(TAG,"Intercepted the renderer!!!");       
    }


    /**
     * Set a trigger for snapshot.  Call this from the main activity
     * in response to a user input
     */
    public void takeSnapShot() {
        takeSnapShot = true;
    }   

    @Override
    public void onFrameAvailable() {
        super.onFrameAvailable();
        if(takeSnapShot) {
            //screenGrabWithRoot();
            screenGrabRenderer.grabNextScreen(0,0,this.getWidth(),this.getHeight());
            takeSnapShot = false;           
        }
    }

    public ReadableTangoCameraPreview(Activity context) {
        super(context); 
        mainActivity = context;     

    }

    public void newPhoto(String aNewPhotoPath) {
        //This gets called when a new photo has been grabbed/created in the renderer
        Log.i(TAG, "New image available at " + aNewPhotoPath);
    }

}

ScreenGrabRenderer Class

(intercepts the default TangoCameraPreview renderer)

/**
 * This is an intermediate class that intercepts all calls to the TangoCameraPreview's
 * default renderer.
 * 
 * It simply passes all render calls through to the default renderer.
 * 
 * When required, it can also use the renderer methods to dump a copy of the frame to a bitmap
 * 
 * @author henderso
 *
 */
public class ScreenGrabRenderer implements GLSurfaceView.Renderer  {


    TangoCameraScreengrabCallback mTangoCameraScreengrabCallback;

    GLSurfaceView.Renderer tangoCameraRenderer;
    private static final String TAG = ScreenGrabRenderer.class.getSimpleName();

    private String lastFileName = "unset";

    boolean grabNextScreen = false;

    int grabX = 0;
    int grabY = 0;
    int grabWidth = 640;
    int grabHeight = 320;

    public void setTangoCameraScreengrabCallback(TangoCameraScreengrabCallback aTangoCameraScreengrabCallback) {
        mTangoCameraScreengrabCallback = aTangoCameraScreengrabCallback;
    }

    /**
     * Cue the renderer to grab the next screen.  This is a signal that will
     * be detected inside the onDrawFrame() method
     *
     * @param x x origin of the region to grab (GL window coordinates)
     * @param y y origin of the region to grab (GL window coordinates)
     * @param w width of the region to grab
     * @param h height of the region to grab
     */
    public void grabNextScreen(int x, int y, int w, int h) {
        grabNextScreen = true;
        grabX=x;
        grabY=y;
        grabWidth=w;
        grabHeight=h;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        tangoCameraRenderer.onSurfaceCreated(gl, config);

    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        tangoCameraRenderer.onSurfaceChanged(gl, width, height);        
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        tangoCameraRenderer.onDrawFrame(gl);    
        if(grabNextScreen) {
            screenGrab(gl);
            grabNextScreen=false;
        }
    }


    /**
     * 
     * Creates a bitmap given a certain dimension and an OpenGL context
     *  
     * This code was lifted from here:
     * 
     * http://stackoverflow.com/questions/5514149/capture-screen-of-glsurfaceview-to-bitmap 
     */
    private Bitmap createBitmapFromGLSurface(int x, int y, int w, int h, GL10 gl)
            throws OutOfMemoryError {
        int bitmapBuffer[] = new int[w * h];
        int bitmapSource[] = new int[w * h];
        IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
        intBuffer.position(0);

        try {
            gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, intBuffer);
            int offset1, offset2;
            for (int i = 0; i < h; i++) {
                offset1 = i * w;
                offset2 = (h - i - 1) * w;
                for (int j = 0; j < w; j++) {
                    int texturePixel = bitmapBuffer[offset1 + j];
                    int blue = (texturePixel >> 16) & 0xff;
                    int red = (texturePixel << 16) & 0x00ff0000;
                    int pixel = (texturePixel & 0xff00ff00) | red | blue;
                    bitmapSource[offset2 + j] = pixel;
                }
            }
        } catch (GLException e) {
            Log.e(TAG,e.toString());
            return null;
        }

        return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
    }


    /**
     * Writes a copy of the GLSurface backbuffer to storage
     */
    private void screenGrab(GL10 gl) {
        long fileprefix = System.currentTimeMillis();
        String targetPath =Environment.getExternalStorageDirectory()  + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".png";   
        String fullPath = "error";

        Bitmap image = createBitmapFromGLSurface(grabX,grabY,grabWidth,grabHeight,gl);
        if(!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }
        try {           
            File targetDirectory = new File(targetPath);
            File photo=new File(targetDirectory, imageFileName);
            FileOutputStream fos=new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.PNG, 100, fos);          
            fos.flush();
            fos.close();
            fullPath =targetPath + imageFileName;
            Log.i(TAG, "Grabbed an image in target path:" + fullPath);      

            ///Notify the outer class(es)
            if(mTangoCameraScreengrabCallback != null) {
                mTangoCameraScreengrabCallback.newPhoto(fullPath);
            } else {
                Log.i(TAG, "Callback not set properly..");
            }

        } catch (FileNotFoundException e) {
            Log.e(TAG,"Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(TAG,"Exception " + e);
            e.printStackTrace();
        }   
        lastFileName = fullPath;
    }


    /**
     * Constructor
     * @param baseRenderer
     */
    public ScreenGrabRenderer(GLSurfaceView.Renderer baseRenderer) {
        tangoCameraRenderer = baseRenderer;     
    }
}

TangoCameraScreengrabCallback Interface (not needed unless you want to pass information back from the screen-grab renderer)

    /*
     * The TangoCameraScreengrabCallback is a generic interface that provides callback mechanism 
     * to an implementing activity.
     * 
     */
    interface TangoCameraScreengrabCallback {
        public void newPhoto(String aNewPhotoPath);
    }
answered 2015-03-23T02:17:48.340

I haven't tried the latest release yet, but it was exactly this missing capability that drove me to the C API to get image data - and I believe a recent post on the G+ page seems to indicate that the Unity API now returns image data as well - certainly an odd lag for a company that keeps wanting to scold us whenever we don't use Java :-)

answered 2015-03-22T13:13:13.607