Category: Android

2013-08-26 22:54:36

Android Camera: How AppNotifier Works

As discussed in the previous chapter, AppNotifier creates and starts a thread during initialize(). This thread continuously checks whether the lower layers have sent any messages and handles each one accordingly. Three message queues are involved: msgQ, mEventQ, and mFrameQ. Earlier articles also covered where these messages come from: the sources are wired up via setEventProvider() and setFrameProvider(), and AppCallbackNotifier is then started through start() and setMeasurements().
Let's use mFrameQ as an example.
void AppCallbackNotifier::frameCallbackRelay(CameraFrame* caFrame)
{
    LOG_FUNCTION_NAME;
    AppCallbackNotifier *appcbn = (AppCallbackNotifier*) (caFrame->mCookie);
    appcbn->frameCallback(caFrame);
    LOG_FUNCTION_NAME_EXIT;
}
The lower layer invokes the registered frameCallbackRelay() method, which in turn calls frameCallback():
void AppCallbackNotifier::frameCallback(CameraFrame* caFrame)
{
    ///Post the event to the event queue of AppCallbackNotifier
    TIUTILS::Message msg;
    CameraFrame *frame;

    LOG_FUNCTION_NAME;

    if ( NULL != caFrame )
        {
        frame = new CameraFrame(*caFrame);
        if ( NULL != frame )
            {
            msg.command = AppCallbackNotifier::NOTIFIER_CMD_PROCESS_FRAME;
            msg.arg1 = frame;
            mFrameQ.put(&msg);
            }
        else
            {
            CAMHAL_LOGEA("Not enough resources to allocate CameraFrame");
            }
        }

    LOG_FUNCTION_NAME_EXIT;
}
Here a new CameraFrame is first allocated to hold a copy of the data delivered by the callback; put() then pushes the message onto the mFrameQ queue. The permanently running thread mentioned above detects the new message and starts processing it.
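The producer/consumer pattern described above can be sketched with standard C++ primitives. This is a minimal stand-in, not the TIUTILS API: Message, FrameQueue, and NOTIFIER_CMD_PROCESS_FRAME here are simplified illustrative names.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Simplified stand-in for TIUTILS::Message.
struct Message {
    int command;
    void *arg1;
};

enum { NOTIFIER_CMD_PROCESS_FRAME = 1 };

// Thread-safe queue sketching mFrameQ: producers put(), the notifier
// thread blocks in get() until a message arrives.
class FrameQueue {
public:
    void put(const Message &msg) {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mQueue.push(msg);
        }
        mCond.notify_one();  // wake the notifier thread
    }

    Message get() {
        std::unique_lock<std::mutex> lock(mMutex);
        mCond.wait(lock, [this] { return !mQueue.empty(); });
        Message msg = mQueue.front();
        mQueue.pop();
        return msg;
    }

private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::queue<Message> mQueue;
};

// Producer side, mirroring frameCallback(): wrap the copied frame in a
// message and enqueue it.
inline void postFrame(FrameQueue &q, void *frameCopy) {
    Message msg{NOTIFIER_CMD_PROCESS_FRAME, frameCopy};
    q.put(msg);
}
```

The real code additionally signals the notifier thread through its own event mechanism; the condition variable here plays that role.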
The question we really care about is how frameCallbackRelay() is wired to the lower layers: how do those layers end up invoking it as a callback? Let's trace that connection.
mFrameProvider = new FrameProvider(frameNotifier, this, frameCallbackRelay);
Start here: when the FrameProvider is constructed, the frame callback is passed in, and the constructor saves the function pointer in mFrameCallback:
FrameProvider(FrameNotifier *fn, void* cookie, frame_callback frameCallback)
        :mFrameNotifier(fn), mCookie(cookie), mFrameCallback(frameCallback) { }
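The cookie-plus-function-pointer idiom used here, where a static relay casts the opaque cookie back to the owning object, can be sketched as follows. The types below (Frame, Notifier, Provider) are illustrative stand-ins for the real CameraHal classes.

```cpp
// Minimal sketch of the cookie/relay registration pattern.
struct Frame {
    int id;
    void *mCookie;  // stamped with the subscriber's cookie before delivery
};

typedef void (*frame_callback)(Frame *);

class Notifier {
public:
    int lastFrameId = -1;

    // Static entry point, mirroring frameCallbackRelay(): recover the
    // object from the cookie and forward the call.
    static void relay(Frame *f) {
        Notifier *self = static_cast<Notifier *>(f->mCookie);
        self->onFrame(f);
    }

private:
    void onFrame(Frame *f) { lastFrameId = f->id; }
};

// Provider side, mirroring FrameProvider's constructor: store the
// function pointer and cookie; deliver() mimics the adapter invoking
// callback(frame).
class Provider {
public:
    Provider(void *cookie, frame_callback cb)
        : mCookie(cookie), mCallback(cb) {}

    void deliver(Frame *f) {
        f->mCookie = mCookie;  // subscriber's cookie travels with the frame
        mCallback(f);
    }

private:
    void *mCookie;
    frame_callback mCallback;
};
```

This is why frameCallbackRelay() can be a plain function pointer: the object identity rides along in mCookie rather than in a bound `this`.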
Then, during CameraHal initialization, setFrameProvider() issues the following calls:
mFrameProvider->enableFrameNotification(CameraFrame::IMAGE_FRAME);
mFrameProvider->enableFrameNotification(CameraFrame::RAW_FRAME);
Let's look at the implementation of enableFrameNotification(), found in CameraHalUtilClasses:
int FrameProvider::enableFrameNotification(int32_t frameTypes)
{
    LOG_FUNCTION_NAME;
    status_t ret = NO_ERROR;

    ///Enable the frame notification to CameraAdapter (which implements FrameNotifier interface)
    mFrameNotifier->enableMsgType(frameTypes<<MessageNotifier::FRAME_BIT_FIELD_POSITION, mFrameCallback, NULL, mCookie);

    LOG_FUNCTION_NAME_EXIT;
    return ret;
}
Note that this method passes along the callback we saved in the FrameProvider constructor. Next, let's look at enableMsgType(), implemented in BaseCameraAdapter:
void BaseCameraAdapter::enableMsgType(int32_t msgs, frame_callback callback, event_callback eventCb, void* cookie)
{
    Mutex::Autolock lock(mSubscriberLock);

    LOG_FUNCTION_NAME;

    int32_t frameMsg = ((msgs >> MessageNotifier::FRAME_BIT_FIELD_POSITION) & EVENT_MASK);
    int32_t eventMsg = ((msgs >> MessageNotifier::EVENT_BIT_FIELD_POSITION) & EVENT_MASK);

    if ( frameMsg != 0 )
        {
        CAMHAL_LOGVB("Frame message type id=0x%x subscription request", frameMsg);
        switch ( frameMsg )
            {
            case CameraFrame::PREVIEW_FRAME_SYNC:
                mFrameSubscribers.add((int) cookie, callback);
                break;
            case CameraFrame::FRAME_DATA_SYNC:
                mFrameDataSubscribers.add((int) cookie, callback);
                break;
            case CameraFrame::SNAPSHOT_FRAME:
                mSnapshotSubscribers.add((int) cookie, callback);
                break;
            case CameraFrame::IMAGE_FRAME:
                mImageSubscribers.add((int) cookie, callback);
                break;
            case CameraFrame::RAW_FRAME:
                mRawSubscribers.add((int) cookie, callback);
                break;
            case CameraFrame::VIDEO_FRAME_SYNC:
                mVideoSubscribers.add((int) cookie, callback);
                break;
            case CameraFrame::REPROCESS_INPUT_FRAME:
                mVideoInSubscribers.add((int) cookie, callback);
                break;
            default:
                CAMHAL_LOGEA("Frame message type id=0x%x subscription no supported yet!", frameMsg);
                break;
            }
        }

    if ( eventMsg != 0)
        {
        CAMHAL_LOGVB("Event message type id=0x%x subscription request", eventMsg);
        if ( CameraHalEvent::ALL_EVENTS == eventMsg )
            {
            mFocusSubscribers.add((int) cookie, eventCb);
            mShutterSubscribers.add((int) cookie, eventCb);
            mZoomSubscribers.add((int) cookie, eventCb);
            mMetadataSubscribers.add((int) cookie, eventCb);
            }
        else
            {
            CAMHAL_LOGEA("Event message type id=0x%x subscription no supported yet!", eventMsg);
            }
        }

    LOG_FUNCTION_NAME_EXIT;
}
Several subscriber tables show up here, among them mFrameDataSubscribers, mImageSubscribers, and mRawSubscribers. The callback we registered during initialization is added to the appropriate table. But we still don't know how the lower layers actually invoke it, so let's keep digging.
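The way enableMsgType() splits one 32-bit word into a frame field and an event field can be sketched like this. The listing above does not show the actual values of FRAME_BIT_FIELD_POSITION, EVENT_BIT_FIELD_POSITION, or EVENT_MASK in MessageNotifier, so the positions 0/16 and the 16-bit mask below are purely illustrative.

```cpp
#include <cstdint>

// Illustrative bit-field packing: frame subscriptions in the low half,
// event subscriptions in the high half of one int32. Positions and mask
// width are assumptions, not the real MessageNotifier constants.
namespace sketch {

constexpr int FRAME_BIT_FIELD_POSITION = 0;   // assumed
constexpr int EVENT_BIT_FIELD_POSITION = 16;  // assumed
constexpr int32_t FIELD_MASK = 0xFFFF;        // assumed

// Sender side, mirroring enableFrameNotification()'s shift.
inline int32_t packFrame(int32_t frameTypes) {
    return frameTypes << FRAME_BIT_FIELD_POSITION;
}

inline int32_t packEvent(int32_t eventTypes) {
    return eventTypes << EVENT_BIT_FIELD_POSITION;
}

// Receiver side, mirroring the first lines of enableMsgType().
inline int32_t frameMsgOf(int32_t msgs) {
    return (msgs >> FRAME_BIT_FIELD_POSITION) & FIELD_MASK;
}

inline int32_t eventMsgOf(int32_t msgs) {
    return (msgs >> EVENT_BIT_FIELD_POSITION) & FIELD_MASK;
}

}  // namespace sketch
```

The point of the scheme is that a single enableMsgType() entry point can carry both kinds of subscription without two separate APIs.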
The flow mirrors the previous article, which analyzed the V4LCameraAdapter path; this article focuses on OMXCameraAdapter instead. OMXCameraAdapter has the following method:
status_t OMXCameraAdapter::sendCallBacks(CameraFrame frame, OMX_IN OMX_BUFFERHEADERTYPE *pBuffHeader, unsigned int mask, OMXCameraPortParameters *port)
{
  status_t ret = NO_ERROR;

  LOG_FUNCTION_NAME;

  if ( NULL == port)
    {
      CAMHAL_LOGEA("Invalid portParam");
      return -EINVAL;
    }

  if ( NULL == pBuffHeader )
    {
      CAMHAL_LOGEA("Invalid Buffer header");
      return -EINVAL;
    }

  Mutex::Autolock lock(mSubscriberLock);

  //frame.mFrameType = typeOfFrame;
  frame.mFrameMask = mask;
  frame.mBuffer = (CameraBuffer *)pBuffHeader->pAppPrivate;
  frame.mLength = pBuffHeader->nFilledLen;
  frame.mAlignment = port->mStride;
  frame.mOffset = pBuffHeader->nOffset;
  frame.mWidth = port->mWidth;
  frame.mHeight = port->mHeight;
  frame.mYuv[0] = NULL;
  frame.mYuv[1] = NULL;

  if ( onlyOnce && mRecording )
    {
      mTimeSourceDelta = (pBuffHeader->nTimeStamp * 1000) - systemTime(SYSTEM_TIME_MONOTONIC);
      onlyOnce = false;
    }

  frame.mTimestamp = (pBuffHeader->nTimeStamp * 1000) - mTimeSourceDelta;

  ret = setInitFrameRefCount(frame.mBuffer, mask);

  if (ret != NO_ERROR) {
     CAMHAL_LOGDB("Error in setInitFrameRefCount %d", ret);
  } else {
      ret = sendFrameToSubscribers(&frame);
  }

  CAMHAL_LOGVB("B 0x%x T %llu", frame.mBuffer, pBuffHeader->nTimeStamp);

  LOG_FUNCTION_NAME_EXIT;

  return ret;
}
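The timestamp handling in the method above deserves a note: the OMX buffer timestamp (microseconds) is scaled to nanoseconds, and on the first recorded frame a delta against the system monotonic clock is captured so that all subsequent frame timestamps land on the recorder's time base. A small sketch, with made-up input numbers:

```cpp
#include <cstdint>

// Sketch of the mTimeSourceDelta / onlyOnce logic in sendCallBacks().
// nowNs stands in for systemTime(SYSTEM_TIME_MONOTONIC).
struct TimeAligner {
    int64_t delta = 0;
    bool onlyOnce = true;

    int64_t align(int64_t omxTimestampUs, int64_t nowNs) {
        if (onlyOnce) {
            // Captured exactly once, on the first frame while recording.
            delta = omxTimestampUs * 1000 - nowNs;
            onlyOnce = false;
        }
        // frame.mTimestamp = (nTimeStamp * 1000) - mTimeSourceDelta
        return omxTimestampUs * 1000 - delta;
    }
};
```

After the first call, differences between OMX timestamps are preserved exactly on the monotonic base, which is what the video encoder downstream needs.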
This method works much like its V4LCameraAdapter counterpart: it fills in the CameraFrame structure and then calls sendFrameToSubscribers(), which is implemented in BaseCameraAdapter:
status_t BaseCameraAdapter::sendFrameToSubscribers(CameraFrame *frame)
{
    status_t ret = NO_ERROR;
    unsigned int mask;

    if ( NULL == frame )
        {
        CAMHAL_LOGEA("Invalid CameraFrame");
        return -EINVAL;
        }

    for( mask = 1; mask < CameraFrame::ALL_FRAMES; mask <<= 1){
      if( mask & frame->mFrameMask ){
        switch( mask ){

        case CameraFrame::IMAGE_FRAME:
          {
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
            CameraHal::PPM("Shot to Jpeg: ", &mStartCapture);
#endif
            ret = __sendFrameToSubscribers(frame, &mImageSubscribers, CameraFrame::IMAGE_FRAME);
          }
          break;
        case CameraFrame::RAW_FRAME:
          {
            ret = __sendFrameToSubscribers(frame, &mRawSubscribers, CameraFrame::RAW_FRAME);
          }
          break;
        case CameraFrame::PREVIEW_FRAME_SYNC:
          {
            ret = __sendFrameToSubscribers(frame, &mFrameSubscribers, CameraFrame::PREVIEW_FRAME_SYNC);
          }
          break;
        case CameraFrame::SNAPSHOT_FRAME:
          {
            ret = __sendFrameToSubscribers(frame, &mSnapshotSubscribers, CameraFrame::SNAPSHOT_FRAME);
          }
          break;
        case CameraFrame::VIDEO_FRAME_SYNC:
          {
            ret = __sendFrameToSubscribers(frame, &mVideoSubscribers, CameraFrame::VIDEO_FRAME_SYNC);
          }
          break;
        case CameraFrame::FRAME_DATA_SYNC:
          {
            ret = __sendFrameToSubscribers(frame, &mFrameDataSubscribers, CameraFrame::FRAME_DATA_SYNC);
          }
          break;
        case CameraFrame::REPROCESS_INPUT_FRAME:
          {
            ret = __sendFrameToSubscribers(frame, &mVideoInSubscribers, CameraFrame::REPROCESS_INPUT_FRAME);
          }
          break;
        default:
          CAMHAL_LOGEB("FRAMETYPE NOT SUPPORTED 0x%x", mask);
          break;
        }//SWITCH
        frame->mFrameMask &= ~mask;

        if (ret != NO_ERROR) {
            goto EXIT;
        }
      }//IF
    }//FOR

 EXIT:
    return ret;
}
It then calls __sendFrameToSubscribers() for each frame type set in the mask. So V4LCameraAdapter and OMXCameraAdapter interact with the AppNotifier through the same path, with BaseCameraAdapter serving as the common interface. Let's look at the implementation of __sendFrameToSubscribers(), which was also covered before:
status_t BaseCameraAdapter::__sendFrameToSubscribers(CameraFrame* frame,
                                                     KeyedVector<int, frame_callback> *subscribers,
                                                     CameraFrame::FrameType frameType)
{
    size_t refCount = 0;
    status_t ret = NO_ERROR;
    frame_callback callback = NULL;

    frame->mFrameType = frameType;

    if ( (frameType == CameraFrame::PREVIEW_FRAME_SYNC) ||
         (frameType == CameraFrame::VIDEO_FRAME_SYNC) ||
         (frameType == CameraFrame::SNAPSHOT_FRAME) ){
        if (mFrameQueue.size() > 0){
          CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(frame->mBuffer);
          frame->mYuv[0] = lframe->mYuv[0];
          frame->mYuv[1] = frame->mYuv[0] + (frame->mLength + frame->mOffset)*2/3;
        }
        else{
          CAMHAL_LOGDA("Empty Frame Queue");
          return -EINVAL;
        }
      }

    if (NULL != subscribers) {
        refCount = getFrameRefCount(frame->mBuffer, frameType);

        if (refCount == 0) {
            CAMHAL_LOGDA("Invalid ref count of 0");
            return -EINVAL;
        }

        if (refCount > subscribers->size()) {
            CAMHAL_LOGEB("Invalid ref count for frame type: 0x%x", frameType);
            return -EINVAL;
        }

        CAMHAL_LOGVB("Type of Frame: 0x%x address: 0x%x refCount start %d",
                     frame->mFrameType,
                     ( uint32_t ) frame->mBuffer,
                     refCount);

        for ( unsigned int i = 0 ; i < refCount; i++ ) {
            frame->mCookie = ( void * ) subscribers->keyAt(i);
            callback = (frame_callback) subscribers->valueAt(i);

            if (!callback) {
                CAMHAL_LOGEB("callback not set for frame type: 0x%x", frameType);
                return -EINVAL;
            }

            callback(frame);
        }
    } else {
        CAMHAL_LOGEA("Subscribers is null??");
        return -EINVAL;
    }

    return ret;
}
In this method, the needed callback is looked up in the corresponding subscriber table and finally invoked, and the callback invoked here is exactly the one we registered during initialization. At this point the lower layers and the AppNotifier are fully connected. We have not spent much time on how the OMX layer talks to the kernel driver to obtain the data and eventually calls sendCallBacks(); that deserves a chapter of its own. Likewise, we have not yet walked through how the AppNotifier delivers the data, step by step, into the hands of the upper-layer app, so here is a brief description.
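The per-type dispatch just described can be condensed into a small sketch: a (cookie, callback) table per frame type, with the cookie stamped into the frame before each invocation. std::map stands in for Android's KeyedVector; Frame2 and frame_cb are illustrative names.

```cpp
#include <map>

// Sketch of the subscriber-table dispatch in __sendFrameToSubscribers().
struct Frame2 {
    int type;
    void *mCookie;  // receives each subscriber's cookie in turn
};

typedef void (*frame_cb)(Frame2 *);

// Deliver the frame to every subscriber in the table, returning how many
// callbacks were invoked (the real code bounds this by the buffer's refCount).
inline int dispatch(Frame2 *frame,
                    const std::map<void *, frame_cb> &subscribers) {
    int delivered = 0;
    for (const auto &entry : subscribers) {
        frame->mCookie = entry.first;  // each subscriber sees its own cookie
        entry.second(frame);
        ++delivered;
    }
    return delivered;
}
```

With the cookie pointing back at an AppCallbackNotifier instance and the callback being frameCallbackRelay(), this loop is precisely the moment the adapter hands frames over to the notifier.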
As noted above, the AppNotifier creates and starts a thread at construction time that checks for incoming messages and dispatches them by type. Here we analyze only notifyFrame():
void AppCallbackNotifier::notifyFrame()
{
    ///Receive and send the frame notifications to app
    TIUTILS::Message msg;
    CameraFrame *frame;
    MemoryHeapBase *heap;
    MemoryBase *buffer = NULL;
    sp<MemoryBase> memBase;
    void *buf = NULL;

    LOG_FUNCTION_NAME;

    {
        Mutex::Autolock lock(mLock);
        if(!mFrameQ.isEmpty()) {
            mFrameQ.get(&msg);
        } else {
            return;
        }
    }

    bool ret = true;

    frame = NULL;
    switch(msg.command)
        {
        case AppCallbackNotifier::NOTIFIER_CMD_PROCESS_FRAME:

                frame = (CameraFrame *) msg.arg1;
                if(!frame)
                    {
                    break;
                    }

                if ( (CameraFrame::RAW_FRAME == frame->mFrameType )&&
                    ( NULL != mCameraHal ) &&
                    ( NULL != mDataCb) &&
                    ( NULL != mNotifyCb ) )
                    {

                    if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE) )
                        {
#ifdef COPY_IMAGE_BUFFER
                        copyAndSendPictureFrame(frame, CAMERA_MSG_RAW_IMAGE);
#else
                        //TODO: Find a way to map a Tiler buffer to a MemoryHeapBase
#endif
                        }
                    else {
                        if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE_NOTIFY) ) {
                            mNotifyCb(CAMERA_MSG_RAW_IMAGE_NOTIFY, 0, 0, mCallbackCookie);
                        }
                        mFrameProvider->returnFrame(frame->mBuffer,
                                                    (CameraFrame::FrameType) frame->mFrameType);
                    }

                    mRawAvailable = true;

                    }
                else if ( (CameraFrame::IMAGE_FRAME == frame->mFrameType) &&
                          (NULL != mCameraHal) &&
                          (NULL != mDataCb) &&
                          (CameraFrame::ENCODE_RAW_YUV422I_TO_JPEG & frame->mQuirks) )
                    {

                    int encode_quality = 100, tn_quality = 100;
                    int tn_width, tn_height;
                    unsigned int current_snapshot = 0;
                    Encoder_libjpeg::params *main_jpeg = NULL, *tn_jpeg = NULL;
                    void* exif_data = NULL;
                    const char *previewFormat = NULL;
                    camera_memory_t* raw_picture = mRequestMemory(-1, frame->mLength, 1, NULL);

                    if(raw_picture) {
                        buf = raw_picture->data;
                    }

                    CameraParameters parameters;
                    char *params = mCameraHal->getParameters();
                    const String8 strParams(params);
                    parameters.unflatten(strParams);

                    encode_quality = parameters.getInt(CameraParameters::KEY_JPEG_QUALITY);
                    if (encode_quality < 0 || encode_quality > 100) {
                        encode_quality = 100;
                    }

                    tn_quality = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY);
                    if (tn_quality < 0 || tn_quality > 100) {
                        tn_quality = 100;
                    }

                    if (CameraFrame::HAS_EXIF_DATA & frame->mQuirks) {
                        exif_data = frame->mCookie2;
                    }

                    main_jpeg = (Encoder_libjpeg::params*)
                                    malloc(sizeof(Encoder_libjpeg::params));

                    // Video snapshot with LDCNSF on adds a few bytes start offset
                    // and a few bytes on every line. They must be skipped.
                    int rightCrop = frame->mAlignment/2 - frame->mWidth;

                    CAMHAL_LOGDB("Video snapshot right crop = %d", rightCrop);
                    CAMHAL_LOGDB("Video snapshot offset = %d", frame->mOffset);

                    if (main_jpeg) {
                        main_jpeg->src = (uint8_t *)frame->mBuffer->mapped;
                        main_jpeg->src_size = frame->mLength;
                        main_jpeg->dst = (uint8_t*) buf;
                        main_jpeg->dst_size = frame->mLength;
                        main_jpeg->quality = encode_quality;
                        main_jpeg->in_width = frame->mAlignment/2; // use stride here
                        main_jpeg->in_height = frame->mHeight;
                        main_jpeg->out_width = frame->mAlignment/2;
                        main_jpeg->out_height = frame->mHeight;
                        main_jpeg->right_crop = rightCrop;
                        main_jpeg->start_offset = frame->mOffset;
                        if ( CameraFrame::FORMAT_YUV422I_UYVY & frame->mQuirks) {
                            main_jpeg->format = TICameraParameters::PIXEL_FORMAT_YUV422I_UYVY;
                        }
                        else { //if ( CameraFrame::FORMAT_YUV422I_YUYV & frame->mQuirks)
                            main_jpeg->format = CameraParameters::PIXEL_FORMAT_YUV422I;
                        }
                    }

                    tn_width = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH);
                    tn_height = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT);
                    previewFormat = parameters.getPreviewFormat();

                    if ((tn_width > 0) && (tn_height > 0) && ( NULL != previewFormat )) {
                        tn_jpeg = (Encoder_libjpeg::params*)
                                      malloc(sizeof(Encoder_libjpeg::params));
                        // if malloc fails just keep going and encode main jpeg
                        if (!tn_jpeg) {
                            tn_jpeg = NULL;
                        }
                    }

                    if (tn_jpeg) {
                        int width, height;
                        parameters.getPreviewSize(&width,&height);
                        current_snapshot = (mPreviewBufCount + MAX_BUFFERS - 1) % MAX_BUFFERS;
                        tn_jpeg->src = (uint8_t *)mPreviewBuffers[current_snapshot].mapped;
                        tn_jpeg->src_size = mPreviewMemory->size / MAX_BUFFERS;
                        tn_jpeg->dst_size = calculateBufferSize(tn_width,
                                                                tn_height,
                                                                previewFormat);
                        tn_jpeg->dst = (uint8_t*) malloc(tn_jpeg->dst_size);
                        tn_jpeg->quality = tn_quality;
                        tn_jpeg->in_width = width;
                        tn_jpeg->in_height = height;
                        tn_jpeg->out_width = tn_width;
                        tn_jpeg->out_height = tn_height;
                        tn_jpeg->right_crop = 0;
                        tn_jpeg->start_offset = 0;
                        tn_jpeg->format = CameraParameters::PIXEL_FORMAT_YUV420SP;
                    }

                    sp<Encoder_libjpeg> encoder = new Encoder_libjpeg(main_jpeg,
                                                      tn_jpeg,
                                                      AppCallbackNotifierEncoderCallback,
                                                      (CameraFrame::FrameType)frame->mFrameType,
                                                      this,
                                                      raw_picture,
                                                      exif_data, frame->mBuffer);
                    gEncoderQueue.add(frame->mBuffer->mapped, encoder);
                    encoder->run();
                    encoder.clear();
                    if (params != NULL)
                      {
                        mCameraHal->putParameters(params);
                      }
                    }
                else if ( ( CameraFrame::IMAGE_FRAME == frame->mFrameType ) &&
                             ( NULL != mCameraHal ) &&
                             ( NULL != mDataCb) )
                    {

                    // CTS, MTS requirements: Every 'takePicture()' call
                    // who registers a raw callback should receive one
                    // as well. This is not always the case with
                    // CameraAdapters though.
                    if (!mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE)) {
                        dummyRaw();
                    } else {
                        mRawAvailable = false;
                    }

#ifdef COPY_IMAGE_BUFFER
                    {
                        Mutex::Autolock lock(mBurstLock);
#if defined(OMAP_ENHANCEMENT)
                        if ( mBurst )
                        {
                            copyAndSendPictureFrame(frame, CAMERA_MSG_COMPRESSED_BURST_IMAGE);
                        }
                        else
#endif
                        {
                            copyAndSendPictureFrame(frame, CAMERA_MSG_COMPRESSED_IMAGE);
                        }
                    }
#else
                     //TODO: Find a way to map a Tiler buffer to a MemoryHeapBase
#endif
                    }
                else if ( ( CameraFrame::VIDEO_FRAME_SYNC == frame->mFrameType ) &&
                             ( NULL != mCameraHal ) &&
                             ( NULL != mDataCb) &&
                             ( mCameraHal->msgTypeEnabled(CAMERA_MSG_VIDEO_FRAME) ) )
                    {
                    AutoMutex locker(mRecordingLock);
                    if(mRecording)
                        {
                        if(mUseMetaDataBufferMode)
                            {
                            camera_memory_t *videoMedatadaBufferMemory =
                                             mVideoMetadataBufferMemoryMap.valueFor(frame->mBuffer->opaque);
                            video_metadata_t *videoMetadataBuffer = (video_metadata_t *) videoMedatadaBufferMemory->data;

                            if( (NULL == videoMedatadaBufferMemory) || (NULL == videoMetadataBuffer) || (NULL == frame->mBuffer) )
                                {
                                CAMHAL_LOGEA("Error! One of the video buffers is NULL");
                                break;
                                }

                            if ( mUseVideoBuffers )
                              {
                                CameraBuffer *vBuf = mVideoMap.valueFor(frame->mBuffer->opaque);
                                GraphicBufferMapper &mapper = GraphicBufferMapper::get();
                                Rect bounds;
                                bounds.left = 0;
                                bounds.top = 0;
                                bounds.right = mVideoWidth;
                                bounds.bottom = mVideoHeight;

                                void *y_uv[2];
                                mapper.lock((buffer_handle_t)vBuf, CAMHAL_GRALLOC_USAGE, bounds, y_uv);
                                y_uv[1] = y_uv[0] + mVideoHeight*4096;

                                structConvImage input = {frame->mWidth,
                                                          frame->mHeight,
                                                          4096,
                                                          IC_FORMAT_YCbCr420_lp,
                                                          (mmByte *)frame->mYuv[0],
                                                          (mmByte *)frame->mYuv[1],
                                                          frame->mOffset};

                                structConvImage output = {mVideoWidth,
                                                          mVideoHeight,
                                                          4096,
                                                          IC_FORMAT_YCbCr420_lp,
                                                          (mmByte *)y_uv[0],
                                                          (mmByte *)y_uv[1],
                                                          0};

                                VT_resizeFrame_Video_opt2_lp(&input, &output, NULL, 0);
                                mapper.unlock((buffer_handle_t)vBuf->opaque);
                                videoMetadataBuffer->metadataBufferType = (int) kMetadataBufferTypeCameraSource;
                                /* FIXME remove cast */
                                videoMetadataBuffer->handle = (void *)vBuf->opaque;
                                videoMetadataBuffer->offset = 0;
                              }
                            else
                              {
                                videoMetadataBuffer->metadataBufferType = (int) kMetadataBufferTypeCameraSource;
                                videoMetadataBuffer->handle = camera_buffer_get_omx_ptr(frame->mBuffer);
                                videoMetadataBuffer->offset = frame->mOffset;
                              }

                            CAMHAL_LOGVB("mDataCbTimestamp : frame->mBuffer=0x%x, videoMetadataBuffer=0x%x, videoMedatadaBufferMemory=0x%x",
                                            frame->mBuffer->opaque, videoMetadataBuffer, videoMedatadaBufferMemory);

                            mDataCbTimestamp(frame->mTimestamp, CAMERA_MSG_VIDEO_FRAME,
                                                videoMedatadaBufferMemory, 0, mCallbackCookie);
                            }
                        else
                            {
                            //TODO: Need to revisit this, should ideally be mapping the TILER buffer using mRequestMemory
                            camera_memory_t* fakebuf = mRequestMemory(-1, sizeof(buffer_handle_t), 1, NULL);
                            if( (NULL == fakebuf) || ( NULL == fakebuf->data) || ( NULL == frame->mBuffer))
                                {
                                CAMHAL_LOGEA("Error! One of the video buffers is NULL");
                                break;
                                }

                            *reinterpret_cast<buffer_handle_t*>(fakebuf->data) = reinterpret_cast<buffer_handle_t>(frame->mBuffer->mapped);
                            mDataCbTimestamp(frame->mTimestamp, CAMERA_MSG_VIDEO_FRAME, fakebuf, 0, mCallbackCookie);
                            fakebuf->release(fakebuf);
                            }
                        }
                    }
                else if(( CameraFrame::SNAPSHOT_FRAME == frame->mFrameType ) &&
                             ( NULL != mCameraHal ) &&
                             ( NULL != mDataCb) &&
                             ( NULL != mNotifyCb)) {
                    //When enabled, measurement data is sent instead of video data
                    if ( !mMeasurementEnabled ) {
                        copyAndSendPreviewFrame(frame, CAMERA_MSG_POSTVIEW_FRAME);
                    } else {
                        mFrameProvider->returnFrame(frame->mBuffer,
                                                    (CameraFrame::FrameType) frame->mFrameType);
                    }
                }
                else if ( ( CameraFrame::PREVIEW_FRAME_SYNC == frame->mFrameType ) &&
                            ( NULL != mCameraHal ) &&
                            ( NULL != mDataCb) &&
                            ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME)) ) {
                    //When enabled, measurement data is sent instead of video data
                    if ( !mMeasurementEnabled ) {
                        copyAndSendPreviewFrame(frame, CAMERA_MSG_PREVIEW_FRAME);
                    } else {
                         mFrameProvider->returnFrame(frame->mBuffer,
                                                     (CameraFrame::FrameType) frame->mFrameType);
                    }
                }
                else if ( ( CameraFrame::FRAME_DATA_SYNC == frame->mFrameType ) &&
                            ( NULL != mCameraHal ) &&
                            ( NULL != mDataCb) &&
                            ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME)) ) {
                    copyAndSendPreviewFrame(frame, CAMERA_MSG_PREVIEW_FRAME);
                } else {
                    mFrameProvider->returnFrame(frame->mBuffer,
                                                ( CameraFrame::FrameType ) frame->mFrameType);
                    CAMHAL_LOGDB("Frame type 0x%x is still unsupported!", frame->mFrameType);
                }

                break;

        default:

            break;

        };

exit:

    if ( NULL != frame )
        {
        delete frame;
        }

    LOG_FUNCTION_NAME_EXIT;
}
The code above handles each message according to its type, via returnFrame(), copyAndSendPictureFrame(), and various callbacks. These are all important, but we won't dig into them further here; earlier articles have already touched on how the data ultimately reaches the app layer.

To be continued...