How do I convert the Bitmap returned by BitmapFactory.decodeFile()
to YUV format (similar to what the camera's onPreviewFrame() returns in a byte array)?
Viewed 44,793 times
6 Answers
58
Here is some code that actually works:
// untested function
byte[] getNV21(int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);

    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

    scaled.recycle();
    return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;

    int yIndex = 0;
    int uvIndex = frameSize;

    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {

            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;

            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
            // meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
            // pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }

            index++;
        }
    }
}
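The layout this produces can be sanity-checked off-device with plain Java (no android.graphics types needed). The sketch below re-implements the same integer math on a bare int[] of ARGB pixels and feeds it a solid-red 4x2 frame; the class and method names and the test frame are illustrative, not part of the answer:

```java
public class Nv21Check {
    // Same BT.601 integer math as encodeYUV420SP above, on plain int[] ARGB pixels.
    public static byte[] encode(int[] argb, int w, int h) {
        byte[] yuv = new byte[w * h * 3 / 2]; // Y plane + quarter-res V and U
        int yIndex = 0, uvIndex = w * h, index = 0;
        for (int j = 0; j < h; j++) {
            for (int i = 0; i < w; i++) {
                int R = (argb[index] >> 16) & 0xff;
                int G = (argb[index] >> 8) & 0xff;
                int B = argb[index] & 0xff;
                int Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                int U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                int V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
                yuv[yIndex++] = (byte) Math.max(0, Math.min(255, Y));
                // one interleaved VU pair per 2x2 block of pixels
                if (j % 2 == 0 && i % 2 == 0) {
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, V));
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, U));
                }
                index++;
            }
        }
        return yuv;
    }

    public static void main(String[] args) {
        int w = 4, h = 2;
        int[] red = new int[w * h];
        java.util.Arrays.fill(red, 0xFFFF0000); // opaque red
        byte[] yuv = encode(red, w, h);
        System.out.println(yuv.length);            // 12 = 4*2*3/2
        System.out.println(yuv[0] & 0xff);         // 82  (Y for pure red)
        System.out.println(yuv[w * h] & 0xff);     // 240 (V for pure red)
        System.out.println(yuv[w * h + 1] & 0xff); // 90  (U for pure red)
    }
}
```

Red maps to a high V and low U, as expected for a chroma format where V carries the red-difference signal.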
Answered 2012-10-24T18:38:54.037
4
Below is the code for converting a Bitmap to Yuv (NV21) format.
void yourFunction() {
    // mBitmap is your bitmap
    int mWidth = mBitmap.getWidth();
    int mHeight = mBitmap.getHeight();

    int[] mIntArray = new int[mWidth * mHeight];

    // Copy pixel data from the Bitmap into the 'mIntArray' array
    mBitmap.getPixels(mIntArray, 0, mWidth, 0, 0, mWidth, mHeight);

    // NV21 needs width * height * 3 / 2 bytes
    byte[] yuvData = new byte[mWidth * mHeight * 3 / 2];

    // Call the encoding function: convert mIntArray to Yuv binary data
    encodeYUV420SP(yuvData, mIntArray, mWidth, mHeight);
}
static public void encodeYUV420SP(byte[] yuv420sp, int[] rgba,
        int width, int height) {
    final int frameSize = width * height;

    int[] U = new int[frameSize];
    int[] V = new int[frameSize];

    int r, g, b, y, u, v;
    for (int j = 0; j < height; j++) {
        int index = width * j;
        for (int i = 0; i < width; i++) {
            r = Color.red(rgba[index]);
            g = Color.green(rgba[index]);
            b = Color.blue(rgba[index]);

            // rgb to yuv (note the parentheses: '+' binds tighter than '>>',
            // so '>> 8 + 16' would shift by 24)
            y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
            v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;

            // clip y
            yuv420sp[index] = (byte) ((y < 0) ? 0 : ((y > 255) ? 255 : y));
            U[index] = u;
            V[index++] = v;
        }
    }
    // pack the sub-sampled chroma as interleaved VU pairs (NV21) after the Y plane
    int uvIndex = frameSize;
    for (int j = 0; j < height; j += 2) {
        for (int i = 0; i < width; i += 2) {
            int index = width * j + i;
            yuv420sp[uvIndex++] = (byte) V[index];
            yuv420sp[uvIndex++] = (byte) U[index];
        }
    }
}
Answered 2012-02-17T10:58:26.603
0
A bmp file will be in RGB888 format, so you will need to convert it to YUV. I haven't come across any API in Android that will do this for you.
But you can do it yourself; see this link for how.
Answered 2011-05-13T05:29:28.060
0
If converting a Bitmap to a YUV byte[] in Java is too slow for you, you can try Google's libyuv.
Answered 2017-12-07T06:18:00.500
0
With the OpenCV library you can replace the encodeYUV420SP Java function with a single native OpenCV line, and it is about 4x faster:
Mat mFrame = Mat(height,width,CV_8UC4,pFrameData).clone();
Full example:
Java side:
Bitmap bitmap = mTextureView.getBitmap(mWidth, mHeight);
int[] argb = new int[mWidth * mHeight];
// get the ARGB pixels, then process them with the 8UC4 OpenCV conversion
bitmap.getPixels(argb, 0, mWidth, 0, 0, mWidth, mHeight);
// native method (NDK or CMake)
processFrame8UC4(argb, mWidth, mHeight);
Native side (NDK):
JNIEXPORT jint JNICALL Java_com_native_detector_Utils_processFrame8UC4
    (JNIEnv *env, jobject object, jintArray frame, jint width, jint height) {
    // note: the exported symbol needs the Java_ prefix, and the argument
    // order must match the Java declaration (frame, width, height)
    jint *pFrameData = env->GetIntArrayElements(frame, 0);
    // this is the line:
    Mat mFrame = Mat(height, width, CV_8UC4, pFrameData).clone();
    // what follows is just an extra example of a grayscale conversion:
    Mat mout;
    cvtColor(mFrame, mout, CV_RGBA2GRAY); // mFrame has 4 channels
    int objects = face_detection(env, mout);
    env->ReleaseIntArrayElements(frame, pFrameData, 0);
    return objects;
}
Answered 2018-03-17T00:18:42.513
-1
First you compute the rgb data:

r = (p >> 16) & 0xff;
g = (p >> 8) & 0xff;
b = p & 0xff;

y = 0.2126f * r + 0.7152f * g + 0.0722f * b;
u = -0.09991f * r - 0.33609f * g + 0.436f * b;
v = 0.615f * r - 0.55861f * g - 0.05639f * b;

y, u and v are the components of the YUV pixel. (These are the BT.709 coefficients; the integer code in the other answers uses BT.601, which is what the Android camera's NV21 frames conventionally use.)
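As a minimal standalone sketch, here is that float math for one packed ARGB pixel, using exact BT.709 coefficients (the rounded values above approximate these); the class and method names are illustrative:

```java
public class RgbToYuv {
    // Float RGB -> analog YUV using BT.709 coefficients
    // (full-range, no offsets; each coefficient row sums to 1, 0, 0).
    public static float[] toYuv(int p) {
        int r = (p >> 16) & 0xff;
        int g = (p >> 8) & 0xff;
        int b = p & 0xff;
        float y = 0.2126f * r + 0.7152f * g + 0.0722f * b;
        float u = -0.09991f * r - 0.33609f * g + 0.436f * b;
        float v = 0.615f * r - 0.55861f * g - 0.05639f * b;
        return new float[] {y, u, v};
    }

    public static void main(String[] args) {
        // White is achromatic: full luma, zero chroma.
        float[] yuv = toYuv(0xFFFFFFFF);
        System.out.printf("%.1f %.1f %.1f%n", yuv[0], yuv[1], yuv[2]);
    }
}
```

Note this gives float values with u and v centered on 0; to store them in bytes like the NV21 code above, you would still need to add 128 to u and v and clamp to 0..255.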
Answered 2015-06-23T14:02:37.920