Category: Android Platform

2020-02-26 14:22:56

Live streaming should be familiar to everyone by now, and the technology behind it keeps maturing. One feature you may have seen: when a danmaku (bullet comment) drifts over the streamer's face it disappears, and once it leaves the face region it shows up again. The principle is very simple — it is face detection: any danmaku inside the detected face region is hidden. Easier said than done, though. This article covers how to implement face recognition on an RTMP video stream in the following steps:

  • Choosing a solution
  • Integrating FFmpeg
  • Parsing the stream with FFmpeg
  • Drawing the frames with OpenGL
  • Face tracking and drawing the face rectangle

I. Choosing a Solution

My first thought was to use an off-the-shelf player: feed it a URL and it plays. Integration turned out to be easy and the video displayed fine, but there was one fatal problem — the player exposed no interface for the raw frame data, so there was nothing to run face recognition on. I therefore switched to FFmpeg. That said, if all you want is to play an RTMP stream on a device, bilibili's ijkplayer framework is perfectly adequate, and both integration and usage are very simple.

With the parsing approach settled, what remains is rendering and face recognition. For rendering I use OpenGL, mainly because I had already wrapped a custom SurfaceView earlier and could reuse it directly. For the face recognition engine I chose ArcSoft (虹软), for two reasons: first, it is easy to use and its demo is well written, so a lot of code can be lifted straight from it; second, it is free. I have used engines from other vendors, but they all come with trial periods, and I dislike anything with an expiry date. ArcSoft also performs well, so it was the natural first choice.

II. Integrating FFmpeg

1. Directory Structure

Create cpp and jniLibs directories under src/main and place the FFmpeg libraries inside, as shown in the figure below.


(Figure: project directory structure with the cpp and jniLibs directories)
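For reference, here is a sketch of the resulting layout. This is an assumption based on the CMakeLists.txt shown below (FFmpeg headers in cpp/ffmpeg, prebuilt armeabi-v7a .so files in jniLibs), not a listing from the original project:

src/main
├── cpp
│   ├── CMakeLists.txt
│   ├── rtmpplayer-lib.cpp
│   └── ffmpeg/                  (FFmpeg headers, referenced by include_directories)
└── jniLibs
    └── armeabi-v7a
        ├── libavcodec.so
        ├── libavdevice.so
        ├── libavfilter.so
        ├── libavformat.so
        ├── libavutil.so
        ├── libpostproc.so
        ├── libswresample.so
        └── libswscale.so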


2. CMakeLists

First, create two files under src/main/cpp: CMakeLists.txt and rtmpplayer-lib.cpp. CMakeLists.txt manages and builds the native libraries; rtmpplayer-lib.cpp holds the JNI code that parses the data stream.


CMakeLists.txt


# For more information about using CMake with Android Studio, read the
# documentation: https://d.android.com/studio/projects/add-native-code.html

# Sets the minimum version of CMake required to build the native library.
cmake_minimum_required(VERSION 3.4.1)

# Creates and names a library, sets it as either STATIC
# or SHARED, and provides the relative paths to its source code.
# You can define multiple libraries, and CMake builds them for you.
# Gradle automatically packages shared libraries with your APK.
add_library( # Sets the name of the library.
        rtmpplayer-lib

        # Sets the library as a shared library.
        SHARED

        # Provides a relative path to your source file(s).
        rtmpplayer-lib.cpp)

# Adds the FFmpeg headers to the include path.
include_directories(ffmpeg)

# Searches for a specified prebuilt library and stores the path as a
# variable. Because CMake includes system libraries in the search path by
# default, you only need to specify the name of the public NDK library
# you want to add. CMake verifies that the library exists before
# completing its build.
find_library( # Sets the name of the path variable.
        log-lib

        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log)

# Specifies libraries CMake should link to your target library. You
# can link multiple libraries, such as libraries you define in this
# build script, prebuilt third-party libraries, or system libraries.
target_link_libraries( # Specifies the target library.
        rtmpplayer-lib

        # Links the target library to the log library
        # included in the NDK.
        ${log-lib}
        android
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libavcodec.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libavdevice.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libavfilter.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libavformat.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libavutil.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libpostproc.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libswresample.so
        ${PROJECT_SOURCE_DIR}/../jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libswscale.so
        )

3. build.gradle

In build.gradle we need to specify the location of the CMakeLists.txt above, as well as the ABI to build for.



android {
    defaultConfig {
        ...
        ...
        externalNativeBuild {
            cmake {
                abiFilters "armeabi-v7a"
            }
        }
        ndk {
            abiFilters 'armeabi-v7a' // only build the armv7 .so
        }
    }
    externalNativeBuild {
        cmake {
            // path is the location of the CMakeLists.txt above
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}



4. Completing the Build

With the steps above in place we can build. Click Build > Refresh Linked C++ Projects, then in the Gradle panel on the right run other > externalNativeBuildDebug. Once the build completes you will find the compiled .so under build/intermediates/cmake. If you can see librtmpplayer-lib.so there, congratulations — the FFmpeg integration is done.



III. Parsing the Stream with FFmpeg

1. Parsing the RTMP Stream in JNI

In the rtmpplayer-lib.cpp file mentioned above, we write the JNI code that parses the RTMP data stream.



#include <jni.h>
#include <string>
#include <android/log.h>
#include <fstream>

#define LOGE(FORMAT, ...) __android_log_print(ANDROID_LOG_ERROR, "player", FORMAT, ##__VA_ARGS__);
#define LOGI(FORMAT, ...) __android_log_print(ANDROID_LOG_INFO, "player", FORMAT, ##__VA_ARGS__);

extern "C" {
#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavdevice/avdevice.h"
}

static AVPacket *pPacket;
static AVFrame *pAvFrame, *pFrameNv21;
static AVCodecContext *pCodecCtx;
struct SwsContext *pImgConvertCtx;
static AVFormatContext *pFormatCtx;
uint8_t *v_out_buffer;
jobject frameCallback = NULL;
bool stop;

extern "C"
JNIEXPORT jint JNICALL
Java_com_example_rtmpplaydemo_RtmpPlayer_nativePrepare(JNIEnv *env, jobject, jstring url) {
    // Initialization
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55, 28, 1)
#define av_frame_alloc avcodec_alloc_frame
#endif
    // The Java callback must be set via nativeSetCallback before prepare is called
    if (frameCallback == NULL) {
        return -1;
    }
    // Allocate the frames
    pAvFrame = av_frame_alloc();
    pFrameNv21 = av_frame_alloc();
    const char *temporary = env->GetStringUTFChars(url, NULL);
    char input_str[500] = {0};
    strcpy(input_str, temporary);
    env->ReleaseStringUTFChars(url, temporary);

    // Register all available formats, codecs and devices
    avcodec_register_all();
    av_register_all();
    avformat_network_init();
    avdevice_register_all();

    pFormatCtx = avformat_alloc_context();
    int openInputCode = avformat_open_input(&pFormatCtx, input_str, NULL, NULL);
    LOGI("openInputCode = %d", openInputCode);
    if (openInputCode < 0)
        return -1;
    avformat_find_stream_info(pFormatCtx, NULL);

    int videoIndex = -1;
    // Walk the streams, find the first video stream and record its codec information
    for (unsigned int i = 0; i < pFormatCtx->nb_streams; i++) {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            // for this stream, videoIndex comes out as 1
            videoIndex = i;
            break;
        }
    }
    if (videoIndex == -1) {
        return -1;
    }
    pCodecCtx = pFormatCtx->streams[videoIndex]->codec;
    AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    avcodec_open2(pCodecCtx, pCodec, NULL);

    int width = pCodecCtx->width;
    int height = pCodecCtx->height;
    LOGI("width = %d , height = %d", width, height);
    int numBytes = av_image_get_buffer_size(AV_PIX_FMT_NV21, width, height, 1);
    v_out_buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
    av_image_fill_arrays(pFrameNv21->data, pFrameNv21->linesize, v_out_buffer, AV_PIX_FMT_NV21,
                         width,
                         height, 1);
    pImgConvertCtx = sws_getContext(
            pCodecCtx->width,   // source width
            pCodecCtx->height,  // source height
            pCodecCtx->pix_fmt, // source pixel format
            pCodecCtx->width,   // target width
            pCodecCtx->height,  // target height
            AV_PIX_FMT_NV21,    // target pixel format
            SWS_FAST_BILINEAR,  // scaling algorithm; see http://www.cnblogs.com/mmix2009/p/3585524.html
            NULL,
            NULL,
            NULL);
    pPacket = (AVPacket *) av_malloc(sizeof(AVPacket));
    // Fire the onPrepared callback
    jclass clazz = env->GetObjectClass(frameCallback);
    jmethodID onPreparedId = env->GetMethodID(clazz, "onPrepared", "(II)V");
    env->CallVoidMethod(frameCallback, onPreparedId, width, height);
    env->DeleteLocalRef(clazz);
    return videoIndex;
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_rtmpplaydemo_RtmpPlayer_nativeStop(JNIEnv *env, jobject) {
    // Stop playback
    stop = true;
    if (frameCallback == NULL) {
        return;
    }
    jclass clazz = env->GetObjectClass(frameCallback);
    jmethodID onPlayFinishedId = env->GetMethodID(clazz, "onPlayFinished", "()V");
    // Fire the onPlayFinished callback
    env->CallVoidMethod(frameCallback, onPlayFinishedId);
    env->DeleteLocalRef(clazz);
    // Release resources
    sws_freeContext(pImgConvertCtx);
    av_free(pPacket);
    av_free(pFrameNv21);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_rtmpplaydemo_RtmpPlayer_nativeSetCallback(JNIEnv *env, jobject,
                                                           jobject callback) {
    // Set (or replace) the Java callback
    if (frameCallback != NULL) {
        env->DeleteGlobalRef(frameCallback);
        frameCallback = NULL;
    }
    frameCallback = (env)->NewGlobalRef(callback);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_example_rtmpplaydemo_RtmpPlayer_nativeStart(JNIEnv *env, jobject) {
    // Start playback
    stop = false;
    if (frameCallback == NULL) {
        return;
    }
    // Read packets in a loop
    int count = 0;
    while (!stop) {
        if (av_read_frame(pFormatCtx, pPacket) >= 0) {
            // Decode
            int gotPicCount = 0;
            int decode_video2_size = avcodec_decode_video2(pCodecCtx, pAvFrame, &gotPicCount,
                                                           pPacket);
            LOGI("decode_video2_size = %d , gotPicCount = %d", decode_video2_size, gotPicCount);
            LOGI("pAvFrame->linesize %d %d %d", pAvFrame->linesize[0], pAvFrame->linesize[1],
                 pCodecCtx->height);
            if (gotPicCount != 0) {
                count++;
                // Convert the decoded frame to NV21
                sws_scale(
                        pImgConvertCtx,
                        (const uint8_t *const *) pAvFrame->data,
                        pAvFrame->linesize,
                        0,
                        pCodecCtx->height,
                        pFrameNv21->data,
                        pFrameNv21->linesize);
                // Compute the frame data size from the line sizes and height
                // (assumes linesize == width, i.e. rows without padding)
                int dataSize = pCodecCtx->height * (pAvFrame->linesize[0] + pAvFrame->linesize[1]);
                LOGI("pAvFrame->linesize %d %d %d %d", pAvFrame->linesize[0],
                     pAvFrame->linesize[1], pCodecCtx->height, dataSize);
                jbyteArray data = env->NewByteArray(dataSize);
                env->SetByteArrayRegion(data, 0, dataSize,
                                        reinterpret_cast<const jbyte *>(v_out_buffer));
                // Fire the onFrameAvailable callback
                jclass clazz = env->GetObjectClass(frameCallback);
                jmethodID onFrameAvailableId = env->GetMethodID(clazz, "onFrameAvailable", "([B)V");
                env->CallVoidMethod(frameCallback, onFrameAvailableId, data);
                env->DeleteLocalRef(clazz);
                env->DeleteLocalRef(data);
            }
        }
        av_packet_unref(pPacket);
    }
}


2. Java-Layer Data Callbacks

With the JNI work above done, we have the decoded raw data. All that is left is to hand the raw data up to the Java layer, and we are essentially finished. We do this with a callback interface.


//RTMP callbacks
public interface PlayCallback {
    //called when the stream has been prepared
    void onPrepared(int width, int height);
    //called for every decoded frame
    void onFrameAvailable(byte[] data);
    //called when playback finishes
    void onPlayFinished();
}

Next we simply pass this callback into the native layer, and JNI delivers the parsed data back to Java.


RtmpPlayer.getInstance().nativeSetCallback(new PlayCallback() {
    @Override
    public void onPrepared(int width, int height) {
        //nativeStart() loops and blocks, so it must run on a worker thread, not the main thread
        RtmpPlayer.getInstance().nativeStart();
    }

    @Override
    public void onFrameAvailable(byte[] data) {
        //raw frame data, in NV21 format
        Log.i(TAG, "onFrameAvailable: " + Arrays.hashCode(data));
        surfaceView.refreshFrameNV21(data);
    }

    @Override
    public void onPlayFinished() {
        //playback finished
    }
});
//prepare the stream
int code = RtmpPlayer.getInstance().prepare("rtmp://58.200.131.2:1935/livetv/hunantv");
if (code == -1) {
    //a return value of -1 means the RTMP prepare step failed
    Toast.makeText(MainActivity.this, "prepare Error", Toast.LENGTH_LONG).show();
}
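The comment in onPrepared points out that nativeStart() blocks in a read/decode loop, so it must not run on the main thread. A minimal sketch of moving it onto a worker thread, assuming the same RtmpPlayer API as above (a plain Thread here; an executor would work just as well):

@Override
public void onPrepared(int width, int height) {
    // nativeStart() blocks until nativeStop() is called, so run it off the main thread
    new Thread(new Runnable() {
        @Override
        public void run() {
            RtmpPlayer.getInstance().nativeStart();
        }
    }, "rtmp-play-thread").start();
}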

The data received in onFrameAvailable is exactly the NV21 data we need. The figure below shows the callbacks I got while playing the Hunan TV stream; the hashCode differs on every callback, so the data can be considered to be refreshing in real time.

(Figure: logcat output of the onFrameAvailable callbacks, each with a different hashCode)
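As a quick sanity check on the callback data (not part of the original demo): an NV21 frame is a full-resolution Y plane plus an interleaved VU plane at quarter resolution, i.e. width * height * 3 / 2 bytes when the rows are unpadded. Assuming you cache the width and height passed to onPrepared, you can verify each frame roughly like this:

@Override
public void onFrameAvailable(byte[] data) {
    // width and height are assumed to have been cached from onPrepared(width, height)
    int expectedSize = width * height * 3 / 2; // Y plane + interleaved VU plane
    if (data.length != expectedSize) {
        Log.w(TAG, "unexpected NV21 frame size: " + data.length + ", expected " + expectedSize);
    }
    surfaceView.refreshFrameNV21(data);
}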


3. Java/JNI Interaction

We create an RtmpPlayer singleton as the bridge between the Java layer and JNI.



public class RtmpPlayer {

    private static volatile RtmpPlayer mInstance;
    private static final int PREPARE_ERROR = -1;

    private RtmpPlayer() {
    }

    //double-checked locking so concurrent callers cannot create more than one instance
    public static RtmpPlayer getInstance() {
        if (mInstance == null) {
            synchronized (RtmpPlayer.class) {
                if (mInstance == null) {
                    mInstance = new RtmpPlayer();
                }
            }
        }
        return mInstance;
    }

    //prepare the stream; call nativePrepare only once and reuse its result
    public int prepare(String url) {
        int code = nativePrepare(url);
        if (code == PREPARE_ERROR) {
            Log.i("rtmpPlayer", "PREPARE_ERROR ");
        }
        return code;
    }

    //load the native library
    static {
        System.loadLibrary("rtmpplayer-lib");
    }

    //prepare the stream
    private native int nativePrepare(String url);
    //start playback
    public native void nativeStart();
    //set the callback
    public native void nativeSetCallback(PlayCallback playCallback);
    //stop playback
    public native void nativeStop();
}
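One ordering detail worth noting from the native code: nativePrepare() returns -1 unless nativeSetCallback() has been called first, and the read loop in nativeStart() only exits once nativeStop() flips the stop flag (which also fires onPlayFinished and releases the FFmpeg resources). A hedged sketch of tearing playback down from an Activity's onDestroy() — the Activity wiring is an assumption, not part of the original demo:

@Override
protected void onDestroy() {
    // ends the native read loop, triggers onPlayFinished() and frees the decoder resources
    RtmpPlayer.getInstance().nativeStop();
    super.onDestroy();
}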


IV. Summary

At this point we have obtained the raw NV21 data. Since time is limited and there is a lot to cover, the write-up is split into two parts. In the second part I will describe how to draw the NV21 data with OpenGL, how to run face recognition on the raw NV21 frames, and how to draw the face rectangle — which is, after all, why we went to such lengths to get the NV21 data in the first place. If you run into problems with the FFmpeg integration covered here, the demo I uploaded is linked in the appendix at the end of the second part; working from the demo may make things easier.


