
I'm trying to display an arFragment twice on my screen by simply taking the feed of one fragment and pointing a second screen element at the same feed, but I can't figure out which element to use for that.

I know that I get the current camera image by calling

ArFragment arFragment = (ArFragment) getSupportFragmentManager()
                           .findFragmentById(R.id.arFragment);
Image image = arFragment.getArSceneView().getArFrame().acquireCameraImage();
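For reference, what comes back from acquireCameraImage() is an android.media.Image in YUV_420_888 format (not any kind of texture), it throws a NotYetAvailableException until the first camera frame has arrived, and the image has to be closed again when you're done with it. A minimal sketch of that call (the log tag is just for illustration):

Frame frame = arFragment.getArSceneView().getArFrame();
if (frame != null) {
    try {
        // the CPU-side copy of the camera feed, format is ImageFormat.YUV_420_888
        Image image = frame.acquireCameraImage();
        Log.d("ArFeed", "camera image " + image.getWidth() + "x" + image.getHeight());
        image.close(); // release it, otherwise ARCore runs out of image buffers
    } catch (NotYetAvailableException e) {
        // no camera frame has been produced yet
    }
}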

But I don't know how to get hold of another screen object and set its view to the feed the arFragment gives me. Something like this, for example:

TextureView secondView = (TextureView) findViewById(R.id.texture);
secondView.setSurfaceTexture((SurfaceTexture) image);

produces an inconvertible types error (an android.media.Image simply isn't a SurfaceTexture, so the cast can't work).

I can't use a second arFragment, because that one would want to claim the camera as well (which, unsurprisingly, gives a black screen and a "camera already in use" error). I also haven't found an

arFrame.assignCamera();

method, which probably doesn't matter anyway, because the camera the fragment works with is just a software object, not the actual hardware. I just don't know where the hardware gets bound to the fragment, and if I remember correctly I can't read or write at that level anyway.

I could convert the feed to a bitmap and put that on an ImageView, but I'm a bit scared of doing that 60 times per second. There has to be a simpler solution, right? ...

It can't be that hard to display a view twice -.-


1 Answer


OK, got it. The conversion to a bitmap is a bit of black magic, but I guess there really is no direct way.

So I set up a ByteBuffer per plane, took apart the YUV components of the android.media.Image, compressed them to a JPEG, decoded that into a Bitmap and rotated it by 90° to match the original orientation.

// get the arFragment
arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.arFragment);
ArSceneView arSceneView = arFragment.getArSceneView();

// set up a Listener to trigger on every frame
arSceneView.getScene().addOnUpdateListener(frameTime -> 
{
  try 
  {
   Frame frame = arSceneView.getArFrame();
   Image androidMediaImage = frame.acquireCameraImage();
   int imageWidth = androidMediaImage.getWidth();
   int imageHeight = androidMediaImage.getHeight();

   // select the target Container to display the image in
   ImageView secondView = (ImageView) findViewById(R.id.imageView3);
   byte[] nv21;

   // an android.media.Image is a YUV image consisting of 3 planes (Y, U, V)
   ByteBuffer yBuffer = androidMediaImage.getPlanes()[0].getBuffer();
   ByteBuffer uBuffer = androidMediaImage.getPlanes()[1].getBuffer();
   ByteBuffer vBuffer = androidMediaImage.getPlanes()[2].getBuffer();

   // set up a byte array large enough to hold all three planes
   int ySize = yBuffer.remaining();
   int uSize = uBuffer.remaining();
   int vSize = vBuffer.remaining();

   nv21 = new byte[ySize + uSize + vSize];

   // Fill in the array. This part follows a snippet from https://www.programcreek.com,
   // where it was pointed out that U and V have to be swapped for NV21
   yBuffer.get(nv21, 0, ySize);
   vBuffer.get(nv21, ySize, vSize);
   uBuffer.get(nv21, ySize + vSize, uSize);

   // combine the three layers to one nv21 image
   YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, imageWidth, imageHeight, null);
   // Open a Bytestream to feed the compressor
   ByteArrayOutputStream out = new ByteArrayOutputStream();
   // compress the yuv image to Jpeg. This is important, because the BitmapFactory can't read a
   // yuv-coded image directly (believe me, I tried -.-)
   yuvImage.compressToJpeg(new Rect(0, 0, imageWidth, imageHeight), 50, out);
   // now write down the bytes of the image into an array
   byte[] imageBytes = out.toByteArray();
   // and build the bitmap using the Factory
   Bitmap bitmapImage = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);

   // use a Matrix for the rotation
   Matrix rotationMatrix = new Matrix();
   // the thing is basically a bunch of numbers which then can be used to compute the new location of each pixel
   rotationMatrix.postRotate(90);
   // the rotatedImage will be our target image
   Bitmap rotatedImage = Bitmap.createBitmap(bitmapImage, 0,0, bitmapImage.getWidth(), bitmapImage.getHeight(), rotationMatrix, true);

   // it's so easy!!!!
   secondView.setImageBitmap(rotatedImage);

   // release the image so ARCore can hand out the next camera frame
   androidMediaImage.close();
  } catch (NotYetAvailableException e)
  {
    e.printStackTrace();
  }
});
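One caveat to the code above: copying each plane with a single get() assumes the Y, U and V planes are tightly packed, which happened to be true on my device but isn't guaranteed by YUV_420_888 (the planes may carry a row stride and a pixel stride). If someone gets colour garbage out of this, a stride-aware variant of the copy would look roughly like the helper below (the name yuv420ToNv21 is just made up; it also assumes U and V share the same strides, which is what I've seen in practice):

// rough sketch of a stride-aware YUV_420_888 -> NV21 conversion
private static byte[] yuv420ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // copy the Y plane row by row, skipping any padding at the end of each row
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuffer.position(row * yRowStride);
        yBuffer.get(nv21, pos, width);
        pos += width;
    }

    // interleave V and U (NV21 wants VUVU...) while honouring row and pixel stride
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();
    int chromaRowStride = vPlane.getRowStride();
    int chromaPixelStride = vPlane.getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int index = row * chromaRowStride + col * chromaPixelStride;
            nv21[pos++] = vBuffer.get(index);
            nv21[pos++] = uBuffer.get(index);
        }
    }
    return nv21;
}

The rest stays the same: feed the returned array into the YuvImage constructor instead of the hand-copied one.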

Obviously feel free to correct me if I'm completely wrong and there is a simpler solution. But at least it works, so I'm happy <3

answered 2018-12-06T10:41:46.020