I would like to perform face detection / tracking on a video file (e.g. an MP4 from the user's gallery) using the Android Vision FaceDetector API. I can see many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (e.g. on the android-vision github), but nothing on video files.
I tried looking at the source code for CameraSource through Android Studio, but it is obfuscated, and I couldn't see the original online. I imagine there are many commonalities between using the camera and using a file. Presumably I just play the video file on a Surface, and then pass that to a pipeline.
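
For context, this is roughly how I understand the detector side is wired up in the camera samples (a sketch adapted from the android-vision examples; if this is right, only the frame source should need to change):

```java
import android.content.Context;
import com.google.android.gms.vision.MultiProcessor;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

FaceDetector buildDetector(Context context) {
    FaceDetector detector = new FaceDetector.Builder(context)
            .setTrackingEnabled(true)   // keep face IDs stable across frames
            .setMode(FaceDetector.FAST_MODE)
            .build();

    // Same pipeline wiring as in the CameraSource samples: the detector
    // hands results to a MultiProcessor, which creates one Tracker per face.
    detector.setProcessor(
            new MultiProcessor.Builder<Face>(new MultiProcessor.Factory<Face>() {
                @Override
                public Tracker<Face> create(Face face) {
                    return new Tracker<Face>() { /* per-face callbacks */ };
                }
            }).build());

    return detector;
}
```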
Alternatively, I can see that Frame.Builder has the functions setImageData and setTimestampMillis. If I were able to read the video in as a ByteBuffer, how would I pass that to the FaceDetector API? I guess this question is similar, but it has no answers. Similarly, I could decode the video into Bitmap frames and pass those to setBitmap.
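
To make the Bitmap route concrete, this is the kind of loop I have in mind (an untested sketch; using MediaMetadataRetriever to decode frames, and the 100 ms sampling step, are just my assumptions):

```java
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

void detectFacesInVideo(FaceDetector detector, String videoPath) {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(videoPath);
    long durationMs = Long.parseLong(retriever.extractMetadata(
            MediaMetadataRetriever.METADATA_KEY_DURATION));

    int frameId = 0;
    // Sample a frame every 100 ms (arbitrary; a real frame rate would come
    // from the video metadata or a MediaCodec-based decoder).
    for (long timeMs = 0; timeMs < durationMs; timeMs += 100) {
        // getFrameAtTime takes microseconds; OPTION_CLOSEST decodes the
        // nearest frame rather than the nearest sync (key) frame.
        Bitmap bitmap = retriever.getFrameAtTime(
                timeMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
        if (bitmap == null) continue;

        Frame frame = new Frame.Builder()
                .setBitmap(bitmap)
                .setId(frameId++)
                .setTimestampMillis(timeMs)
                .build();

        // Synchronous detection on this frame; alternatively
        // detector.receiveFrame(frame) would push it through the
        // Tracker pipeline instead.
        SparseArray<Face> faces = detector.detect(frame);
        // ... use faces ...
    }
    retriever.release();
}
```

I'm not sure whether detect() (synchronous) or receiveFrame() (feeding the tracking pipeline above) is the right entry point for this.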
Ideally I don't want to render the video to the screen, and the processing should happen as fast as the FaceDetector API is capable of.