Android Camera Data Flow Analysis: A Complete Walkthrough
I spent quite a bit of time analyzing this data flow. I have not done much Android work, so this is a record of my own understanding; if anything here is wrong, corrections from more experienced readers are welcome, and I will keep revising it over time.
The analysis starts from the app's onCreate: packages/apps/OMAPCamera/src/com/ti/omap4/android/camera/Camera.java
onCreate does a lot of initialization, but the statements we really care about are the following:
// don't set mSurfaceHolder here. We have it set ONLY within
// surfaceChanged / surfaceDestroyed, other parts of the code
// assume that when it is set, the surface is also set.
SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
SurfaceHolder holder = preview.getHolder();
holder.addCallback(this);
Here we obtain a SurfaceView instance, get its SurfaceHolder through it, and register this class as the holder's callback via addCallback.
SurfaceView is defined at: frameworks/base/core/java/android/view/SurfaceView.java
SurfaceHolder is defined at: frameworks/base/core/java/android/view/SurfaceHolder.java
The explanation in the following article is quite good: http://blog.chinaunix.net/uid-9863638-id-1996383.html
SurfaceFlinger is part of Android's multimedia stack. It is implemented as a system-wide service that provides surface composition: it combines the 2D and 3D surfaces of all applications.
Before looking at SurfaceFlinger itself, some background on display basics.
Each application may own one or more UI layers, and each layer is called a surface (or window). In the figure from the original article (not reproduced here) there are four surfaces: the home screen plus three surfaces drawn in red, green and blue, while the two buttons belong to the home surface. From this, two problems of on-screen composition become clear:
a. Each surface has a position and size on screen, plus content to display. Content, size and position can all change as we switch between applications. How should such changes be handled?
b. Surfaces can overlap. In the example, green covers blue, and red covers green, blue and the home screen underneath, possibly with some transparency. How should this layering be described?
Consider the second problem first. Imagine a Z axis perpendicular to the screen plane; every surface gets a coordinate on this axis, and that coordinate determines what is in front of what. This ordering along the Z axis is known as the Z-order.
For the first problem, we need a structure that records the position and size of an application's UI, plus a buffer that holds the content to display. That is exactly what a surface is: a container carrying control information such as size and position, together with a buffer dedicated to the pixels to be shown.
One issue remains: what happens when surfaces overlap, possibly with transparency? That is what SurfaceFlinger solves. It composes (merges) the individual surfaces into one main surface and sends the result to the framebuffer or a V4L2 output, which is what finally appears on screen.
In practice this merge can be done in two ways: in software, which is SurfaceFlinger's job, or in hardware, which is what Overlay is for.
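To make the Z-order and alpha-blending idea concrete, here is a toy sketch (my own illustration, not Android code): layers are sorted by their Z value and blended bottom-to-top using their alpha.

#include <algorithm>
#include <vector>

// Toy illustration of Z-order composition; the real SurfaceFlinger is far more involved.
struct Layer {
    int z;          // position on the Z axis; larger means closer to the viewer
    float alpha;    // 1.0 = opaque, 0.0 = fully transparent
    float color[3]; // the layer's (solid) color, for simplicity
};

// Blend all layers into one output pixel, back to front.
void composite(std::vector<Layer> layers, float out[3]) {
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.z < b.z; });
    out[0] = out[1] = out[2] = 0.0f;
    for (const Layer& l : layers) {          // bottom-most layer first
        for (int c = 0; c < 3; ++c)
            out[c] = l.alpha * l.color[c] + (1.0f - l.alpha) * out[c];
    }
}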
The usual pattern is to subclass SurfaceView and implement the SurfaceHolder.Callback interface.
Why the interface? Because SurfaceView comes with a rule: all drawing must happen after the Surface has been created (a Surface is essentially a mapping onto display memory; whatever is written into it can be copied straight to the display, which makes rendering very fast) and must stop before the Surface is destroyed. The callback's surfaceCreated and surfaceDestroyed therefore mark the boundaries of the drawing code.
The methods to override are:
(1) public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {} // fired when the size of the surface changes
(2) public void surfaceCreated(SurfaceHolder holder) {} // fired on creation; the drawing thread is usually started here
(3) public void surfaceDestroyed(SurfaceHolder holder) {} // fired on destruction; the drawing thread is usually stopped and released here
All of these are reimplemented in the app; the one to focus on is surfaceChanged:
-
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
-
// Make sure we have a surface in the holder before proceeding.
-
if (holder.getSurface() == null) {
-
Log.d(TAG, "holder.getSurface() == null");
-
return;
-
}
-
-
Log.v(TAG, "surfaceChanged. w=" + w + ". h=" + h);
-
-
// We need to save the holder for later use, even when the mCameraDevice
-
// is null. This could happen if onResume() is invoked after this
-
// function.
-
mSurfaceHolder = holder;
-
-
// The mCameraDevice will be null if it fails to connect to the camera
-
// hardware. In this case we will show a dialog and then finish the
-
// activity, so it's OK to ignore it.
-
if (mCameraDevice == null) return;
-
-
// Sometimes surfaceChanged is called after onPause or before onResume.
-
// Ignore it.
-
if (mPausing || isFinishing()) return;
-
-
setSurfaceLayout();
-
-
// Set preview display if the surface is being created. Preview was
-
// already started. Also restart the preview if display rotation has
-
// changed. Sometimes this happens when the device is held in portrait
-
// and camera app is opened. Rotation animation takes some time and
-
// display rotation in onCreate may not be what we want.
-
if (mCameraState == PREVIEW_STOPPED) { // Check whether preview has already started; the first camera start and re-entering with the camera already open take different paths
-
startPreview(true);
-
startFaceDetection();
-
} else {
-
if (Util.getDisplayRotation(this) != mDisplayRotation) {
-
setDisplayOrientation();
-
}
-
if (holder.isCreating()) {
-
// Set preview display if the surface is being created and preview
-
// was already started. That means preview display was set to null
-
// and we need to set it now.
-
setPreviewDisplay(holder);
-
}
-
}
-
-
// If first time initialization is not finished, send a message to do
-
// it later. We want to finish surfaceChanged as soon as possible to let
-
// user see preview first.
-
if (!mFirstTimeInitialized) {
-
mHandler.sendEmptyMessage(FIRST_TIME_INIT);
-
} else {
-
initializeSecondTime();
-
}
-
-
SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
-
CameraInfo info = CameraHolder.instance().getCameraInfo()[mCameraId];
-
boolean mirror = (info.facing == CameraInfo.CAMERA_FACING_FRONT);
-
int displayRotation = Util.getDisplayRotation(this);
-
int displayOrientation = Util.getDisplayOrientation(displayRotation, mCameraId);
-
-
mTouchManager.initialize(preview.getHeight() / 3, preview.getHeight() / 3,
-
preview, this, mirror, displayOrientation);
-
-
}
The highlighted part above is the key. Let's go straight to the startPreview method, the handler for opening the camera for the first time, which performs some initialization. When the camera is already open, startPreview is not needed; the other branch above is taken and the display is simply restarted.
-
private void startPreview(boolean updateAll) {
-
if (mPausing || isFinishing()) return;
-
-
mFocusManager.resetTouchFocus();
-
-
mCameraDevice.setErrorCallback(mErrorCallback);
-
-
// If we're previewing already, stop the preview first (this will blank
-
// the screen).
-
if (mCameraState != PREVIEW_STOPPED) stopPreview();
-
-
setPreviewDisplay(mSurfaceHolder);
-
setDisplayOrientation();
-
-
if (!mSnapshotOnIdle) {
-
// If the focus mode is continuous autofocus, call cancelAutoFocus to
-
// resume it because it may have been paused by autoFocus call.
-
if (Parameters.FOCUS_MODE_CONTINUOUS_PICTURE.equals(mFocusManager.getFocusMode())) {
-
mCameraDevice.cancelAutoFocus();
-
}
-
mFocusManager.setAeAwbLock(false); // Unlock AE and AWB.
-
}
-
-
if ( updateAll ) {
-
Log.v(TAG, "Updating all parameters!");
-
setCameraParameters(UPDATE_PARAM_INITIALIZE | UPDATE_PARAM_ZOOM | UPDATE_PARAM_PREFERENCE);
-
} else {
-
setCameraParameters(UPDATE_PARAM_MODE);
-
}
-
-
//setCameraParameters(UPDATE_PARAM_ALL);
-
-
// Inform the mainthread to go on the UI initialization.
-
if (mCameraPreviewThread != null) {
-
synchronized (mCameraPreviewThread) {
-
mCameraPreviewThread.notify();
-
}
-
}
-
-
try {
-
Log.v(TAG, "startPreview");
-
mCameraDevice.startPreview();
-
} catch (Throwable ex) {
-
closeCamera();
-
throw new RuntimeException("startPreview failed", ex);
-
}
-
-
mZoomState = ZOOM_STOPPED;
-
setCameraState(IDLE);
-
mFocusManager.onPreviewStarted();
-
if ( mTempBracketingEnabled ) {
-
mFocusManager.setTempBracketingState(FocusManager.TempBracketingStates.ACTIVE);
-
}
-
-
if (mSnapshotOnIdle) {
-
mHandler.post(mDoSnapRunnable);
-
}
-
}
The idea here is: setPreviewDisplay first binds the surface as the preview window. This call goes all the way down into the HAL and performs some very important initialization that sets up the data callbacks.
I have to stress this point: I had long been looking for what decides whether overlay is used or not, and this setPreviewDisplay method is the "culprit".
The parameter passed into setPreviewDisplay is the SurfaceView. On the way down to the HAL the parameter changes form, but in my understanding it is like a person changing clothes:
Zhang San in today's outfit is still the same Zhang San as yesterday. At the HAL the parameter arrives in the form of a preview_stream_ops; you will see this gradually below.
In CameraHal's setPreviewWindow, whether the preview_stream_ops passed down is NULL is what decides between using and not using overlay. This matters a lot.
This article only mentions overlay here; the rest of the analysis assumes the non-overlay path for the whole data flow, so be careful not to mix the two up.
The overlay data path will be analyzed in a separate chapter, together with what ultimately decides whether overlay is used.
The flow is: app --> frameworks --> JNI --> camera client --> camera service --> hardware interface --> HAL module --> HAL. A small sketch of the last two hops follows.
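A simplified sketch (not the actual TI code; the names my_set_preview_window and fill_ops are illustrative) of how "hardware interface --> hal module --> HAL" works in the legacy camera HAL: the module exposes a plain C table of function pointers, and CameraHardwareInterface only ever calls through that table.

#include <hardware/camera.h>

static int my_set_preview_window(struct camera_device* dev,
                                 struct preview_stream_ops* window) {
    // forward to the C++ CameraHal object that the module keeps for this device
    return 0;
}

static void fill_ops(camera_device_ops_t* ops) {
    ops->set_preview_window = my_set_preview_window;
    // ops->start_preview, ops->set_callbacks, ... are wired up the same way
    // inside the module's open() routine.
}

// The framework side then only ever does something like:
//   if (mDevice->ops->set_preview_window)
//       mDevice->ops->set_preview_window(mDevice, &mHalPreviewWindow.nw);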
It is well worth looking at the call sequence on the camera service side first:
// set the Surface that the preview will use
status_t CameraService::Client::setPreviewDisplay(const sp<Surface>& surface) {
    LOG1("setPreviewDisplay(%p) (pid %d)", surface.get(), getCallingPid());

    sp<IBinder> binder(surface != 0 ? surface->asBinder() : 0);
    sp<ANativeWindow> window(surface);
    return setPreviewWindow(binder, window);
}
To be honest I do not fully understand this part yet. The Surface passed in from the app is converted into an IBinder and an ANativeWindow, and these two are then passed as arguments to the overloaded setPreviewWindow:
-
status_t CameraService::Client::setPreviewWindow(const sp<IBinder>& binder,
-
const sp<ANativeWindow>& window) {
-
Mutex::Autolock lock(mLock);
-
status_t result = checkPidAndHardware();
-
if (result != NO_ERROR) return result;
-
-
// return if no change in surface.
-
if (binder == mSurface) {
-
return NO_ERROR;
-
}
-
-
if (window != 0) {
-
result = native_window_api_connect(window.get(), NATIVE_WINDOW_API_CAMERA);
-
if (result != NO_ERROR) {
-
LOGE("native_window_api_connect failed: %s (%d)", strerror(-result),
-
result);
-
return result;
-
}
-
}
-
-
// If preview has been already started, register preview buffers now.
-
if (mHardware->previewEnabled()) {
-
if (window != 0) {
-
native_window_set_scaling_mode(window.get(),
-
NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW);
-
native_window_set_buffers_transform(window.get(), mOrientation);
-
result = mHardware->setPreviewWindow(window);
-
}
-
}
-
-
if (result == NO_ERROR) {
-
// Everything has succeeded. Disconnect the old window and remember the
-
// new window.
-
disconnectWindow(mPreviewWindow);
-
mSurface = binder;
-
mPreviewWindow = window;
-
} else {
-
// Something went wrong after we connected to the new window, so
-
// disconnect here.
-
disconnectWindow(window);
-
}
-
-
return result;
-
}
The code above then calls into setPreviewWindow in CameraHardwareInterface:
status_t setPreviewWindow(const sp<ANativeWindow>& buf)
{
    LOGV("%s(%s) buf %p", __FUNCTION__, mName.string(), buf.get());

    if (mDevice->ops->set_preview_window) {
        mPreviewWindow = buf;
#ifdef OMAP_ENHANCEMENT_CPCAM
        mHalPreviewWindow.user = mPreviewWindow.get();
#else
        mHalPreviewWindow.user = this;
#endif
        LOGV("%s &mHalPreviewWindow %p mHalPreviewWindow.user %p", __FUNCTION__,
                &mHalPreviewWindow, mHalPreviewWindow.user);
        return mDevice->ops->set_preview_window(mDevice,
                buf.get() ? &mHalPreviewWindow.nw : 0);
    }
    return INVALID_OPERATION;
}
By this point the parameter being passed down has gone from the original Surface to an ANativeWindow to a preview_stream_ops; what reaches the lower layers has essentially changed form. We will meet this variable again when the data is called back up, so keep it in mind.
Calling it an essential change is only a manner of speaking; dig a little deeper and this preview_stream_ops is really just the surface in yet another form.
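To see why, here is a simplified sketch of the adapter that sits between the two. This is not the real header (the actual definitions live in hardware/libhardware's camera.h and in CameraHardwareInterface; field names are abbreviated, buffer_handle_t is a stand-in, and the wrapper is my reading of the wiring): preview_stream_ops is just a table of C function pointers, and CameraHardwareInterface implements each entry by forwarding to the corresponding ANativeWindow operation on the surface it saved.

typedef void* buffer_handle_t;   // stand-in for the real gralloc handle type

// Simplified sketch: a plain C "vtable" over the preview window, so the HAL
// never needs to see the C++ ANativeWindow type.
struct preview_stream_ops {
    int (*dequeue_buffer)(struct preview_stream_ops* w, buffer_handle_t** buffer, int* stride);
    int (*enqueue_buffer)(struct preview_stream_ops* w, buffer_handle_t* buffer);
    int (*set_buffer_count)(struct preview_stream_ops* w, int count);
    int (*set_buffers_geometry)(struct preview_stream_ops* w, int width, int height, int format);
    // ... cancel_buffer, set_crop, set_usage, lock_buffer, and so on
};

// CameraHardwareInterface keeps the sp<ANativeWindow> and fills the table with
// thin wrappers, roughly like this (the real code recovers the object from the
// user field next to the table):
// static int __dequeue_buffer(preview_stream_ops* w, buffer_handle_t** buffer, int* stride) {
//     ANativeWindow* anw = /* the saved mPreviewWindow */;
//     return /* forward to the matching ANativeWindow dequeue call */ 0;
// }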
Only through this does the call pass through the hardware interface into the HAL module and then into the HAL itself:
int camera_set_preview_window(struct camera_device * device,
        struct preview_stream_ops *window)
{
    int rv = -EINVAL;
    ti_camera_device_t* ti_dev = NULL;

    LOGV("%s", __FUNCTION__);

    if(!device)
        return rv;

    ti_dev = (ti_camera_device_t*) device;

    rv = gCameraHals[ti_dev->cameraid]->setPreviewWindow(window);

    return rv;
}
Which lands in the HAL layer:
-
status_t CameraHal::setPreviewWindow(struct preview_stream_ops *window)
-
{
-
status_t ret = NO_ERROR;
-
CameraAdapter::BuffersDescriptor desc;
-
-
LOG_FUNCTION_NAME;
-
mSetPreviewWindowCalled = true;
-
-
//If the Camera service passes a null window, we destroy existing window and free the DisplayAdapter
-
if(!window)
-
{
-
if(mDisplayAdapter.get() != NULL)
-
{
-
///NULL window passed, destroy the display adapter if present
-
CAMHAL_LOGD("NULL window passed, destroying display adapter");
-
mDisplayAdapter.clear();
-
///@remarks If there was a window previously existing, we usually expect another valid window to be passed by the client
-
///@remarks so, we will wait until it passes a valid window to begin the preview again
-
mSetPreviewWindowCalled = false;
-
}
-
CAMHAL_LOGD("NULL ANativeWindow passed to setPreviewWindow");
-
return NO_ERROR;
-
}else if(mDisplayAdapter.get() == NULL)
-
{
-
// Need to create the display adapter since it has not been created
-
// Create display adapter
-
mDisplayAdapter = new ANativeWindowDisplayAdapter();
-
ret = NO_ERROR;
-
if(!mDisplayAdapter.get() || ((ret=mDisplayAdapter->initialize())!=NO_ERROR))
-
{
-
if(ret!=NO_ERROR)
-
{
-
mDisplayAdapter.clear();
-
CAMHAL_LOGEA("DisplayAdapter initialize failed");
-
LOG_FUNCTION_NAME_EXIT;
-
return ret;
-
}
-
else
-
{
-
CAMHAL_LOGEA("Couldn't create DisplayAdapter");
-
LOG_FUNCTION_NAME_EXIT;
-
return NO_MEMORY;
-
}
-
}
-
-
// DisplayAdapter needs to know where to get the CameraFrames from inorder to display
-
// Since CameraAdapter is the one that provides the frames, set it as the frame provider for DisplayAdapter
-
mDisplayAdapter->setFrameProvider(mCameraAdapter);
-
-
// Any dynamic errors that happen during the camera use case has to be propagated back to the application
-
// via CAMERA_MSG_ERROR. AppCallbackNotifier is the class that notifies such errors to the application
-
// Set it as the error handler for the DisplayAdapter
-
mDisplayAdapter->setErrorHandler(mAppCallbackNotifier.get());
-
-
// Update the display adapter with the new window that is passed from CameraService
-
ret = mDisplayAdapter->setPreviewWindow(window);
-
if(ret!=NO_ERROR)
-
{
-
CAMHAL_LOGEB("DisplayAdapter setPreviewWindow returned error %d", ret);
-
}
-
-
if(mPreviewStartInProgress)
-
{
-
CAMHAL_LOGDA("setPreviewWindow called when preview running");
-
// Start the preview since the window is now available
-
ret = startPreview();
-
}
-
} else {
-
// Update the display adapter with the new window that is passed from CameraService
-
ret = mDisplayAdapter->setPreviewWindow(window);
-
if ( (NO_ERROR == ret) && previewEnabled() ) {
-
restartPreview();
-
} else if (ret == ALREADY_EXISTS) {
-
// ALREADY_EXISTS should be treated as a noop in this case
-
ret = NO_ERROR;
-
}
-
}
-
LOG_FUNCTION_NAME_EXIT;
-
-
return ret;
-
-
}
This is where the source of the display data, the display target, and the error callback handler are wired up; finally the preview is started:
-
status_t CameraHal::startPreview() {
-
LOG_FUNCTION_NAME;
-
-
// When tunneling is enabled during VTC, startPreview happens in 2 steps:
-
// When the application sends the command CAMERA_CMD_PREVIEW_INITIALIZATION,
-
// cameraPreviewInitialization() is called, which in turn causes the CameraAdapter
-
// to move from loaded to idle state. And when the application calls startPreview,
-
// the CameraAdapter moves from idle to executing state.
-
//
-
// If the application calls startPreview() without sending the command
-
// CAMERA_CMD_PREVIEW_INITIALIZATION, then the function cameraPreviewInitialization()
-
// AND startPreview() are executed. In other words, if the application calls
-
// startPreview() without sending the command CAMERA_CMD_PREVIEW_INITIALIZATION,
-
// then the CameraAdapter moves from loaded to idle to executing state in one shot.
-
status_t ret = cameraPreviewInitialization(); // This call is very important; it is analyzed in detail below
-
-
// The flag mPreviewInitializationDone is set to true at the end of the function
-
// cameraPreviewInitialization(). Therefore, if everything goes alright, then the
-
// flag will be set. Sometimes, the function cameraPreviewInitialization() may
-
// return prematurely if all the resources are not available for starting preview.
-
// For example, if the preview window is not set, then it would return NO_ERROR.
-
// Under such circumstances, one should return from startPreview as well and should
-
// not continue execution. That is why, we check the flag and not the return value.
-
if (!mPreviewInitializationDone) return ret;
-
-
// Once startPreview is called, there is no need to continue to remember whether
-
// the function cameraPreviewInitialization() was called earlier or not. And so
-
// the flag mPreviewInitializationDone is reset here. Plus, this preserves the
-
// current behavior of startPreview under the circumstances where the application
-
// calls startPreview twice or more.
-
mPreviewInitializationDone = false;
-
-
//Enable the display adapter if present, actual overlay enable happens when we post the buffer (this mention of overlay is the place I had been looking for; the highlighted part will be discussed in detail later)
-
if(mDisplayAdapter.get() != NULL) {
-
CAMHAL_LOGDA("Enabling display");
-
int width, height;
-
mParameters.getPreviewSize(&width, &height);
-
-
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
-
ret = mDisplayAdapter->enableDisplay(width, height, &mStartPreview);
-
#else
-
ret = mDisplayAdapter->enableDisplay(width, height, NULL);
-
#endif
-
-
if ( ret != NO_ERROR ) {
-
CAMHAL_LOGEA("Couldn't enable display");
-
-
// FIXME: At this stage mStateSwitchLock is locked and unlock is supposed to be called
-
// only from mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW)
-
// below. But this will never happen because of goto error. Thus at next
-
// startPreview() call CameraHAL will be deadlocked.
-
// Need to revisit mStateSwitch lock, for now just abort the process.
-
CAMHAL_ASSERT_X(false,
-
"At this stage mCameraAdapter->mStateSwitchLock is still locked, "
-
"deadlock is guaranteed");
-
-
goto error;
-
}
-
-
}
-
-
CAMHAL_LOGDA("Starting CameraAdapter preview mode");
-
//Send START_PREVIEW command to adapter
-
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW); // from here the call goes into BaseCameraAdapter
-
-
if(ret!=NO_ERROR) {
-
CAMHAL_LOGEA("Couldn't start preview w/ CameraAdapter");
-
goto error;
-
}
-
CAMHAL_LOGDA("Started preview");
-
-
mPreviewEnabled = true;
-
mPreviewStartInProgress = false;
-
return ret;
-
-
error:
-
-
CAMHAL_LOGEA("Performing cleanup after error");
-
-
//Do all the cleanup
-
freePreviewBufs();
-
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
-
if(mDisplayAdapter.get() != NULL) {
-
mDisplayAdapter->disableDisplay(false);
-
}
-
mAppCallbackNotifier->stop();
-
mPreviewStartInProgress = false;
-
mPreviewEnabled = false;
-
LOG_FUNCTION_NAME_EXIT;
-
-
return ret;
-
}
BaseCameraAdapter implements the sendCommand method of its parent class; the relevant case is:
-
case CameraAdapter::CAMERA_START_PREVIEW:
-
{
-
-
CAMHAL_LOGDA("Start Preview");
-
-
if ( ret == NO_ERROR )
-
{
-
ret = setState(operation);
-
}
-
-
if ( ret == NO_ERROR )
-
{
-
ret = startPreview();
-
}
-
-
if ( ret == NO_ERROR )
-
{
-
ret = commitState();
-
}
-
else
-
{
-
ret |= rollbackState();
-
}
-
-
break;
-
-
}
Next, the startPreview call. As analyzed in an earlier article, the startPreview invoked here is not the one in BaseCameraAdapter but the override in V4LCameraAdapter:
-
status_t V4LCameraAdapter::startPreview()
-
{
-
status_t ret = NO_ERROR;
-
-
LOG_FUNCTION_NAME;
-
Mutex::Autolock lock(mPreviewBufsLock);
-
-
if(mPreviewing) {
-
ret = BAD_VALUE;
-
goto EXIT;
-
}
-
-
for (int i = 0; i < mPreviewBufferCountQueueable; i++) {
-
-
mVideoInfo->buf.index = i;
-
mVideoInfo->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-
mVideoInfo->buf.memory = V4L2_MEMORY_MMAP;
-
-
ret = v4lIoctl(mCameraHandle, VIDIOC_QBUF, &mVideoInfo->buf); // queue the (already mmap'ed) buffer to the driver
-
if (ret < 0) {
-
CAMHAL_LOGEA("VIDIOC_QBUF Failed");
-
goto EXIT;
-
}
-
nQueued++;
-
}
-
-
ret = v4lStartStreaming();
-
-
// Create and start preview thread for receiving buffers from V4L Camera
-
if(!mCapturing) {
-
mPreviewThread = new PreviewThread(this); // start the preview thread
-
CAMHAL_LOGDA("Created preview thread");
-
}
-
-
//Update the flag to indicate we are previewing
-
mPreviewing = true;
-
mCapturing = false;
-
-
EXIT:
-
LOG_FUNCTION_NAME_EXIT;
-
return ret;
-
}
-
status_t V4LCameraAdapter::v4lStartStreaming () {
-
status_t ret = NO_ERROR;
-
enum v4l2_buf_type bufType;
-
-
if (!mVideoInfo->isStreaming) {
-
bufType = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-
-
ret = v4lIoctl (mCameraHandle, VIDIOC_STREAMON, &bufType); // start streaming, i.e. start the preview capture
-
if (ret < 0) {
-
CAMHAL_LOGEB("StartStreaming: Unable to start capture: %s", strerror(errno));
-
return ret;
-
}
-
mVideoInfo->isStreaming = true;
-
}
-
return ret;
-
}
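For reference, this follows the standard V4L2 mmap streaming sequence. Below is a minimal, self-contained sketch of that sequence (plain V4L2, not the TI adapter; the device path and buffer count are arbitrary assumptions) so the QBUF/STREAMON calls above and the DQBUF in the preview thread below are easier to place:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

// Minimal V4L2 mmap capture sketch: request buffers, map them, queue them,
// start streaming, then dequeue a filled buffer and re-queue it.
int run_capture_once() {
    int fd = open("/dev/video0", O_RDWR);            // device node is an assumption
    if (fd < 0) return -1;

    v4l2_requestbuffers req = {};
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);                 // driver allocates the buffers

    void* ptrs[4];
    for (unsigned i = 0; i < req.count && i < 4; i++) {
        v4l2_buffer buf = {};
        buf.index = i; buf.type = req.type; buf.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);            // where and how big is buffer i
        ptrs[i] = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);                // hand buffer i to the driver
    }

    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);               // capture starts here

    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_DQBUF, &buf);                   // blocks until a frame is ready
    // ptrs[buf.index] now holds one captured frame; process it, then re-queue:
    ioctl(fd, VIDIOC_QBUF, &buf);

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    close(fd);
    return 0;
}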
Now let's see what the newly started preview thread actually does:
-
int V4LCameraAdapter::previewThread()
-
{
-
status_t ret = NO_ERROR;
-
int width, height;
-
CameraFrame frame;
-
void *y_uv[2];
-
int index = 0;
-
int stride = 4096;
-
char *fp = NULL;
-
-
mParams.getPreviewSize(&width, &height);
-
-
if (mPreviewing) {
-
-
fp = this->GetFrame(index);
-
if(!fp) {
-
ret = BAD_VALUE;
-
goto EXIT;
-
}
-
CameraBuffer *buffer = mPreviewBufs.keyAt(index);
-
CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(buffer);
-
if (!lframe) {
-
ret = BAD_VALUE;
-
goto EXIT;
-
}
-
-
debugShowFPS();
-
-
if ( mFrameSubscribers.size() == 0 ) {
-
ret = BAD_VALUE;
-
goto EXIT;
-
}
-
// From here on, as I understand it, the frame data is converted and optionally saved
-
y_uv[0] = (void*) lframe->mYuv[0];
-
//y_uv[1] = (void*) lframe->mYuv[1];
-
//y_uv[1] = (void*) (lframe->mYuv[0] + height*stride);
-
convertYUV422ToNV12Tiler ( (unsigned char*)fp, (unsigned char*)y_uv[0], width, height);
-
CAMHAL_LOGVB("##...index= %d.;camera buffer= 0x%x; y= 0x%x; UV= 0x%x.",index, buffer, y_uv[0], y_uv[1] );
-
-
#ifdef SAVE_RAW_FRAMES
-
unsigned char* nv12_buff = (unsigned char*) malloc(width*height*3/2);
-
//Convert yuv422i to yuv420sp(NV12) & dump the frame to a file
-
convertYUV422ToNV12 ( (unsigned char*)fp, nv12_buff, width, height);
-
saveFile( nv12_buff, ((width*height)*3/2) );
-
free (nv12_buff);
-
#endif
-
-
frame.mFrameType = CameraFrame::PREVIEW_FRAME_SYNC;
-
frame.mBuffer = buffer;
-
frame.mLength = width*height*3/2;
-
frame.mAlignment = stride;
-
frame.mOffset = 0;
-
frame.mTimestamp = systemTime(SYSTEM_TIME_MONOTONIC);
-
frame.mFrameMask = (unsigned int)CameraFrame::PREVIEW_FRAME_SYNC;
-
-
if (mRecording)
-
{
-
frame.mFrameMask |= (unsigned int)CameraFrame::VIDEO_FRAME_SYNC;
-
mFramesWithEncoder++;
-
}
-
-
ret = setInitFrameRefCount(frame.mBuffer, frame.mFrameMask);
-
if (ret != NO_ERROR) {
-
CAMHAL_LOGDB("Error in setInitFrameRefCount %d", ret);
-
} else {
-
ret = sendFrameToSubscribers(&frame);
-
}
-
}
-
EXIT:
-
-
return ret;
-
}
A few notes on the code above. As I see it, this is the relay station of the whole data return path: the buffer obtained in the highlighted part is the video data coming back from the underlying driver.
What I do not yet fully understand is how the driver's video data gets associated with mPreviewBufs and with index, such that buffer = mPreviewBufs.keyAt(index) yields the matching CameraBuffer; I will dig into that shortly.
Continuing for now: once the video data is obtained, it is converted and, if required, saved to a file for later use.
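The conversion in question is convertYUV422ToNV12 and its tiler variant. As an illustration only (my own simplified version, not the TI implementation, ignoring stride and tiling), converting packed YUYV (interleaved YUV422) into NV12 looks roughly like this:

#include <stdint.h>

// Simplified YUYV (YUV422 interleaved) -> NV12 conversion: copy every Y, and keep
// one interleaved U/V pair per 2x2 block (here taken from even rows only).
// Real implementations average the chroma of the two rows and honor stride/tiling.
void yuyv_to_nv12(const uint8_t* src, uint8_t* dst, int width, int height) {
    uint8_t* dstY = dst;
    uint8_t* dstUV = dst + width * height;
    for (int y = 0; y < height; y++) {
        const uint8_t* line = src + y * width * 2;       // YUYV: 2 bytes per pixel
        for (int x = 0; x < width; x += 2) {
            dstY[y * width + x]     = line[x * 2];       // Y0
            dstY[y * width + x + 1] = line[x * 2 + 2];   // Y1
            if ((y & 1) == 0) {                          // chroma subsampled vertically
                dstUV[(y / 2) * width + x]     = line[x * 2 + 1]; // U
                dstUV[(y / 2) * width + x + 1] = line[x * 2 + 3]; // V
            }
        }
    }
}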
Finally, the CameraBuffer obtained above is used to fill in a CameraFrame. This structure is crucial: as I understand it, the data ultimately flows back up through the sendFrameToSubscribers(&frame) call.
But first, let's trace how the driver's video data ends up associated with mPreviewBufs and index.
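My reading of the association, as a sketch only (a std::map stands in for Android's KeyedVector, and I am assuming GetFrame is a thin wrapper around VIDIOC_DQBUF; the TI code may differ in details): UseBuffersPreview registers the i-th CameraBuffer at index i, the driver reports which slot it filled via v4l2_buffer.index, and that same index is used to look the CameraBuffer back up.

#include <map>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

struct CameraBuffer { /* stand-in for the real type */ };

// Registration side (the role of mPreviewBufs.add(&bufArr[i], i) in UseBuffersPreview):
std::map<int, CameraBuffer*> previewBufs;    // index -> display-side buffer
void registerBuffers(CameraBuffer* bufArr, int num) {
    for (int i = 0; i < num; i++)
        previewBufs[i] = &bufArr[i];         // i-th V4L2 buffer <-> i-th CameraBuffer
}

// Capture side (the role of GetFrame + previewThread): the driver fills a slot and
// reports its index; the same index selects the matching CameraBuffer.
CameraBuffer* dequeueOne(int fd, void* mapped[], void** frameData) {
    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) return nullptr;
    *frameData = mapped[buf.index];          // the mmap'ed pointer for that slot (fp)
    return previewBufs[buf.index];           // the CameraBuffer handed out as keyAt(index)
}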
To answer that we have to come back to a very important method already mentioned above.
It is the first step of startPreview: cameraPreviewInitialization.
-
status_t CameraHal::cameraPreviewInitialization()
-
{
-
-
status_t ret = NO_ERROR;
-
CameraAdapter::BuffersDescriptor desc;
-
CameraFrame frame;
-
unsigned int required_buffer_count;
-
unsigned int max_queueble_buffers;
-
-
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
-
gettimeofday(&mStartPreview, NULL);
-
#endif
-
-
LOG_FUNCTION_NAME;
-
-
if (mPreviewInitializationDone) {
-
return NO_ERROR;
-
}
-
-
if ( mPreviewEnabled ){
-
CAMHAL_LOGDA("Preview already running");
-
LOG_FUNCTION_NAME_EXIT;
-
return ALREADY_EXISTS;
-
}
-
-
if ( NULL != mCameraAdapter ) {
-
ret = mCameraAdapter->setParameters(mParameters); // push the parameters down into the CameraAdapter
-
}
-
-
if ((mPreviewStartInProgress == false) && (mDisplayPaused == false)){
-
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_RESOLUTION_PREVIEW,( int ) &frame); // this command fills in frame with the preview resolution
-
if ( NO_ERROR != ret ){
-
CAMHAL_LOGEB("Error: CAMERA_QUERY_RESOLUTION_PREVIEW %d", ret);
-
return ret;
-
}
-
-
///Update the current preview width and height
-
mPreviewWidth = frame.mWidth; // initialize the preview width and height
-
mPreviewHeight = frame.mHeight;
-
}
-
-
///If we don't have the preview callback enabled and display adapter,
-
if(!mSetPreviewWindowCalled || (mDisplayAdapter.get() == NULL)){
-
CAMHAL_LOGD("Preview not started. Preview in progress flag set");
-
mPreviewStartInProgress = true;
-
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_SWITCH_TO_EXECUTING);
-
if ( NO_ERROR != ret ){
-
CAMHAL_LOGEB("Error: CAMERA_SWITCH_TO_EXECUTING %d", ret);
-
return ret;
-
}
-
return NO_ERROR;
-
}
-
-
if( (mDisplayAdapter.get() != NULL) && ( !mPreviewEnabled ) && ( mDisplayPaused ) )
-
{
-
CAMHAL_LOGDA("Preview is in paused state");
-
-
mDisplayPaused = false;
-
mPreviewEnabled = true;
-
if ( NO_ERROR == ret )
-
{
-
ret = mDisplayAdapter->pauseDisplay(mDisplayPaused);
-
-
if ( NO_ERROR != ret )
-
{
-
CAMHAL_LOGEB("Display adapter resume failed %x", ret);
-
}
-
}
-
//restart preview callbacks
-
if(mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME)
-
{
-
mAppCallbackNotifier->enableMsgType (CAMERA_MSG_PREVIEW_FRAME);//
-
}
-
-
signalEndImageCapture();
-
return ret;
-
}
-
-
required_buffer_count = atoi(mCameraProperties->get(CameraProperties::REQUIRED_PREVIEW_BUFS));
-
-
///Allocate the preview buffers
-
ret = allocPreviewBufs(mPreviewWidth, mPreviewHeight, mParameters.getPreviewFormat(), required_buffer_count, max_queueble_buffers);
-
-
if ( NO_ERROR != ret )
-
{
-
CAMHAL_LOGEA("Couldn't allocate buffers for Preview");
-
goto error;
-
}
-
-
if ( mMeasurementEnabled )
-
{
-
-
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA,
-
( int ) &frame,
-
required_buffer_count);
-
if ( NO_ERROR != ret )
-
{
-
return ret;
-
}
-
-
///Allocate the preview data buffers
-
ret = allocPreviewDataBufs(frame.mLength, required_buffer_count);
-
if ( NO_ERROR != ret ) {
-
CAMHAL_LOGEA("Couldn't allocate preview data buffers");
-
goto error;
-
}
-
-
if ( NO_ERROR == ret )
-
{
-
desc.mBuffers = mPreviewDataBuffers;
-
desc.mOffsets = mPreviewDataOffsets;
-
desc.mFd = mPreviewDataFd;
-
desc.mLength = mPreviewDataLength;
-
desc.mCount = ( size_t ) required_buffer_count;
-
desc.mMaxQueueable = (size_t) required_buffer_count;
-
-
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW_DATA,
-
( int ) &desc);
-
}
-
-
}
-
-
///Pass the buffers to Camera Adapter
-
desc.mBuffers = mPreviewBuffers;
-
desc.mOffsets = mPreviewOffsets;
-
desc.mFd = mPreviewFd;
-
desc.mLength = mPreviewLength;
-
desc.mCount = ( size_t ) required_buffer_count;
-
desc.mMaxQueueable = (size_t) max_queueble_buffers;
-
-
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW,( int ) &desc);
-
-
if ( NO_ERROR != ret )
-
{
-
CAMHAL_LOGEB("Failed to register preview buffers: 0x%x", ret);
-
freePreviewBufs();
-
return ret;
-
}
-
-
mAppCallbackNotifier->startPreviewCallbacks(mParameters, mPreviewBuffers, mPreviewOffsets, mPreviewFd, mPreviewLength, required_buffer_count);
-
///Start the callback notifier
-
ret = mAppCallbackNotifier->start();
-
-
if( ALREADY_EXISTS == ret )
-
{
-
//Already running, do nothing
-
CAMHAL_LOGDA("AppCallbackNotifier already running");
-
ret = NO_ERROR;
-
}
-
else if ( NO_ERROR == ret ) {
-
CAMHAL_LOGDA("Started AppCallbackNotifier..");
-
mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);
-
}
-
else
-
{
-
CAMHAL_LOGDA("Couldn't start AppCallbackNotifier");
-
goto error;
-
}
-
-
if (ret == NO_ERROR) mPreviewInitializationDone = true;
-
return ret;
-
-
error:
-
-
CAMHAL_LOGEA("Performing cleanup after error");
-
-
//Do all the cleanup
-
freePreviewBufs();
-
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
-
if(mDisplayAdapter.get() != NULL)
-
{
-
mDisplayAdapter->disableDisplay(false);
-
}
-
mAppCallbackNotifier->stop();
-
mPreviewStartInProgress = false;
-
mPreviewEnabled = false;
-
LOG_FUNCTION_NAME_EXIT;
-
-
return ret;
-
}
Here memory for the preview buffers is allocated first, and the buffers are then handed to the CameraAdapter via mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW, ( int ) &desc).
Inside sendCommand this command is handled as follows:
-
case CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW:
-
CAMHAL_LOGDA("Use buffers for preview");
-
desc = ( BuffersDescriptor * ) value1;
-
-
if ( NULL == desc )
-
{
-
CAMHAL_LOGEA("Invalid preview buffers!");
-
return -EINVAL;
-
}
-
-
if ( ret == NO_ERROR )
-
{
-
ret = setState(operation);
-
}
-
-
if ( ret == NO_ERROR )
-
{
-
Mutex::Autolock lock(mPreviewBufferLock);
-
mPreviewBuffers = desc->mBuffers;
-
mPreviewBuffersLength = desc->mLength;
-
mPreviewBuffersAvailable.clear();
-
mSnapshotBuffersAvailable.clear();
-
for ( uint32_t i = 0 ; i < desc->mMaxQueueable ; i++ )
-
{
-
mPreviewBuffersAvailable.add(&mPreviewBuffers[i], 0); // this ties mPreviewBuffersAvailable to mPreviewBuffers
-
}
-
// initial ref count for undeqeueued buffers is 1 since buffer provider
-
// is still holding on to it
-
for ( uint32_t i = desc->mMaxQueueable ; i < desc->mCount ; i++ )
-
{
-
mPreviewBuffersAvailable.add(&mPreviewBuffers[i], 1);
-
}
-
}
-
-
if ( NULL != desc )
-
{
-
ret = useBuffers(CameraAdapter::CAMERA_PREVIEW,
-
desc->mBuffers,
-
desc->mCount,
-
desc->mLength,
-
desc->mMaxQueueable);
-
}
-
-
if ( ret == NO_ERROR )
-
{
-
ret = commitState();
-
}
-
else
-
{
-
ret |= rollbackState();
-
}
-
-
break;
This ends up in V4LCameraAdapter's useBuffers method, which then calls UseBuffersPreview:
-
status_t V4LCameraAdapter::UseBuffersPreview(CameraBuffer *bufArr, int num)
-
{
-
int ret = NO_ERROR;
-
LOG_FUNCTION_NAME;
-
-
if(NULL == bufArr) {
-
ret = BAD_VALUE;
-
goto EXIT;
-
}
-
-
ret = v4lInitMmap(num);
-
if (ret == NO_ERROR) {
-
for (int i = 0; i < num; i++) {
-
//Associate each Camera internal buffer with the one from Overlay
-
mPreviewBufs.add(&bufArr[i], i); // this is where mPreviewBufs gets associated with desc->mBuffers
-
CAMHAL_LOGDB("Preview- buff [%d] = 0x%x ",i, mPreviewBufs.keyAt(i));
-
}
-
-
// Update the preview buffer count
-
mPreviewBufferCount = num;
-
}
-
EXIT:
-
LOG_FUNCTION_NAME_EXIT;
-
return ret;
-
}
At this point it is worth digging into how the mAppCallbackNotifier object is initialized, since it determines how many of the callbacks get set up.
Where is it initialized? In CameraHal::initialize:
-
/**
-
@brief Initialize the Camera HAL
-
-
Creates CameraAdapter, AppCallbackNotifier, DisplayAdapter and MemoryManager
-
-
@param None
-
@return NO_ERROR - On success
-
NO_MEMORY - On failure to allocate memory for any of the objects
-
@remarks Camera Hal internal function
-
-
*/
-
-
status_t CameraHal::initialize(CameraProperties::Properties* properties)
-
{
-
LOG_FUNCTION_NAME;
-
-
int sensor_index = 0;
-
const char* sensor_name = NULL;
-
-
///Initialize the event mask used for registering an event provider for AppCallbackNotifier
-
///Currently, registering all events as to be coming from CameraAdapter
-
int32_t eventMask = CameraHalEvent::ALL_EVENTS;
-
-
// Get my camera properties
-
mCameraProperties = properties;
-
-
if(!mCameraProperties)
-
{
-
goto fail_loop;
-
}
-
-
// Dump the properties of this Camera
-
// will only print if DEBUG macro is defined
-
mCameraProperties->dump();
-
-
if (strcmp(CameraProperties::DEFAULT_VALUE, mCameraProperties->get(CameraProperties::CAMERA_SENSOR_INDEX)) != 0 )
-
{
-
sensor_index = atoi(mCameraProperties->get(CameraProperties::CAMERA_SENSOR_INDEX));
-
}
-
-
if (strcmp(CameraProperties::DEFAULT_VALUE, mCameraProperties->get(CameraProperties::CAMERA_NAME)) != 0 ) {
-
sensor_name = mCameraProperties->get(CameraProperties::CAMERA_NAME);
-
}
-
CAMHAL_LOGDB("Sensor index= %d; Sensor name= %s", sensor_index, sensor_name);
-
-
if (strcmp(sensor_name, V4L_CAMERA_NAME_USB) == 0) {
-
#ifdef V4L_CAMERA_ADAPTER
-
mCameraAdapter = V4LCameraAdapter_Factory(sensor_index);
-
#endif
-
}
-
else {
-
#ifdef OMX_CAMERA_ADAPTER
-
mCameraAdapter = OMXCameraAdapter_Factory(sensor_index);
-
#endif
-
}
-
-
if ( ( NULL == mCameraAdapter ) || (mCameraAdapter->initialize(properties)!=NO_ERROR))
-
{
-
CAMHAL_LOGEA("Unable to create or initialize CameraAdapter");
-
mCameraAdapter = NULL;
-
goto fail_loop;
-
}
-
-
mCameraAdapter->incStrong(mCameraAdapter);
-
mCameraAdapter->registerImageReleaseCallback(releaseImageBuffers, (void *) this);
-
mCameraAdapter->registerEndCaptureCallback(endImageCapture, (void *)this);
-
-
if(!mAppCallbackNotifier.get())
-
{
-
/// Create the callback notifier
-
mAppCallbackNotifier = new AppCallbackNotifier();
-
if( ( NULL == mAppCallbackNotifier.get() ) || ( mAppCallbackNotifier->initialize() != NO_ERROR))
-
{
-
CAMHAL_LOGEA("Unable to create or initialize AppCallbackNotifier");
-
goto fail_loop;
-
}
-
}
-
-
if(!mMemoryManager.get())
-
{
-
/// Create Memory Manager
-
mMemoryManager = new MemoryManager();
-
if( ( NULL == mMemoryManager.get() ) || ( mMemoryManager->initialize() != NO_ERROR))
-
{
-
CAMHAL_LOGEA("Unable to create or initialize MemoryManager");
-
goto fail_loop;
-
}
-
}
-
-
///Setup the class dependencies...
-
-
///AppCallbackNotifier has to know where to get the Camera frames and the events like auto focus lock etc from.
-
///CameraAdapter is the one which provides those events
-
///Set it as the frame and event providers for AppCallbackNotifier
-
///@remarks setEventProvider API takes in a bit mask of events for registering a provider for the different events
-
/// That way, if events can come from DisplayAdapter in future, we will be able to add it as provider
-
/// for any event
-
mAppCallbackNotifier->setEventProvider(eventMask, mCameraAdapter);
-
mAppCallbackNotifier->setFrameProvider(mCameraAdapter);
-
-
///Any dynamic errors that happen during the camera use case has to be propagated back to the application
-
///via CAMERA_MSG_ERROR. AppCallbackNotifier is the class that notifies such errors to the application
-
///Set it as the error handler for CameraAdapter
-
mCameraAdapter->setErrorHandler(mAppCallbackNotifier.get());
-
-
///Start the callback notifier
-
if(mAppCallbackNotifier->start() != NO_ERROR)
-
{
-
CAMHAL_LOGEA("Couldn't start AppCallbackNotifier");
-
goto fail_loop;
-
}
-
-
CAMHAL_LOGDA("Started AppCallbackNotifier..");
-
mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);
-
-
///Initialize default parameters
-
initDefaultParameters();
-
-
-
if ( setParameters(mParameters) != NO_ERROR )
-
{
-
CAMHAL_LOGEA("Failed to set default parameters?!");
-
}
-
-
// register for sensor events
-
mSensorListener = new SensorListener();
-
if (mSensorListener.get()) {
-
if (mSensorListener->initialize() == NO_ERROR) {
-
mSensorListener->setCallbacks(orientation_cb, this);
-
mSensorListener->enableSensor(SensorListener::SENSOR_ORIENTATION);
-
} else {
-
CAMHAL_LOGEA("Error initializing SensorListener. not fatal, continuing");
-
mSensorListener.clear();
-
mSensorListener = NULL;
-
}
-
}
-
-
LOG_FUNCTION_NAME_EXIT;
-
-
return NO_ERROR;
-
-
fail_loop:
-
-
///Free up the resources because we failed somewhere up
-
deinitialize();
-
LOG_FUNCTION_NAME_EXIT;
-
-
return NO_MEMORY;
-
-
}
Several objects are instantiated here; the one I really care about is mAppCallbackNotifier. It is created, initialize() is called on it, and its event provider and frame provider are set.
Let's see what setFrameProvider actually does:
void AppCallbackNotifier::setFrameProvider(FrameNotifier *frameNotifier)
{
    LOG_FUNCTION_NAME;
    ///@remarks There is no NULL check here. We will check
    ///for NULL when we get the start command from CameraAdapter
    mFrameProvider = new FrameProvider(frameNotifier, this, frameCallbackRelay);
    if ( NULL == mFrameProvider )
    {
        CAMHAL_LOGEA("Error in creating FrameProvider");
    }
    else
    {
        //Register only for captured images and RAW for now
        //TODO: Register for and handle all types of frames
        mFrameProvider->enableFrameNotification(CameraFrame::IMAGE_FRAME);
        mFrameProvider->enableFrameNotification(CameraFrame::RAW_FRAME);
    }

    LOG_FUNCTION_NAME_EXIT;
}
A FrameProvider object is instantiated and the relevant notifications are enabled. One of its constructor arguments, frameCallbackRelay, is a callback function; a sketch of how this subscription presumably works follows. With that in mind, let's go back to the sendFrameToSubscribers call we saw in previewThread.
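My understanding of the subscription mechanism, as a self-contained sketch (a std::map stands in for Android's KeyedVector, and names such as mFrameSubscribers follow the TI sources only loosely): enabling a frame notification stores a (cookie, callback) pair in the adapter, and sendFrameToSubscribers later walks that table, stuffs the cookie into frame->mCookie and invokes the callback.

#include <map>

struct CameraFrame { void* mCookie; /* ...buffer, type, length... */ };
typedef void (*frame_callback)(CameraFrame* frame);

// Subscription table inside the adapter (the role of mFrameSubscribers):
std::map<void*, frame_callback> frameSubscribers;   // cookie -> callback

// What enableFrameNotification amounts to: AppCallbackNotifier registers itself
// (cookie = this) together with frameCallbackRelay as the callback.
void subscribe(void* cookie, frame_callback cb) {
    frameSubscribers[cookie] = cb;
}

// What __sendFrameToSubscribers amounts to: for each subscriber, tag the frame
// with that subscriber's cookie and hand the frame to its callback.
void dispatch(CameraFrame* frame) {
    for (auto& entry : frameSubscribers) {
        frame->mCookie = entry.first;
        entry.second(frame);
    }
}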
That method simply delegates to the following implementation:
-
status_t BaseCameraAdapter::__sendFrameToSubscribers(CameraFrame* frame,
-
KeyedVector<int, frame_callback> *subscribers,
-
CameraFrame::FrameType frameType)
-
{
-
size_t refCount = 0;
-
status_t ret = NO_ERROR;
-
frame_callback callback = NULL;
-
-
frame->mFrameType = frameType;
-
-
if ( (frameType == CameraFrame::PREVIEW_FRAME_SYNC) ||
-
(frameType == CameraFrame::VIDEO_FRAME_SYNC) ||
-
(frameType == CameraFrame::SNAPSHOT_FRAME) ){
-
if (mFrameQueue.size() > 0){
-
CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(frame->mBuffer);
-
frame->mYuv[0] = lframe->mYuv[0];
-
frame->mYuv[1] = frame->mYuv[0] + (frame->mLength + frame->mOffset)*2/3;
-
}
-
else{
-
CAMHAL_LOGDA("Empty Frame Queue");
-
return -EINVAL;
-
}
-
}
-
-
if (NULL != subscribers) {
-
refCount = getFrameRefCount(frame->mBuffer, frameType);
-
-
if (refCount == 0) {
-
CAMHAL_LOGDA("Invalid ref count of 0");
-
return -EINVAL;
-
}
-
-
if (refCount > subscribers->size()) {
-
CAMHAL_LOGEB("Invalid ref count for frame type: 0x%x", frameType);
-
return -EINVAL;
-
}
-
-
CAMHAL_LOGVB("Type of Frame: 0x%x address: 0x%x refCount start %d",
-
frame->mFrameType,
-
( uint32_t ) frame->mBuffer,
-
refCount);
-
-
for ( unsigned int i = 0 ; i < refCount; i++ ) {
-
frame->mCookie = ( void * ) subscribers->keyAt(i);
-
callback = (frame_callback) subscribers->valueAt(i);
-
-
if (!callback) {
-
CAMHAL_LOGEB("callback not set for frame type: 0x%x", frameType);
-
return -EINVAL;
-
}
-
-
callback(frame);
-
}
-
} else {
-
CAMHAL_LOGEA("Subscribers is null??");
-
return -EINVAL;
-
}
-
-
return ret;
-
}
The most important part is marked above: the subscribers KeyedVector is used to find the corresponding frame->mCookie and callback for each subscriber.
The callback obtained here is precisely the frameCallbackRelay function passed in by setFrameProvider above. Let's look at its implementation:
-
void AppCallbackNotifier::frameCallbackRelay(CameraFrame* caFrame)
-
{
-
LOG_FUNCTION_NAME;
-
AppCallbackNotifier *appcbn = (AppCallbackNotifier*) (caFrame->mCookie);
-
appcbn->frameCallback(caFrame);
-
LOG_FUNCTION_NAME_EXIT;
-
}
-
-
void AppCallbackNotifier::frameCallback(CameraFrame* caFrame)
-
{
-
///Post the event to the event queue of AppCallbackNotifier
-
TIUTILS::Message msg;
-
CameraFrame *frame;
-
-
LOG_FUNCTION_NAME;
-
-
if ( NULL != caFrame )
-
{
-
-
frame = new CameraFrame(*caFrame);
-
if ( NULL != frame )
-
{
-
msg.command = AppCallbackNotifier::NOTIFIER_CMD_PROCESS_FRAME;
-
msg.arg1 = frame;
-
mFrameQ.put(&msg);
-
}
-
else
-
{
-
CAMHAL_LOGEA("Not enough resources to allocate CameraFrame");
-
}
-
-
}
-
-
LOG_FUNCTION_NAME_EXIT;
-
}
This callback simply wraps the frame into a msg message structure and puts that message onto mFrameQ, the global message queue; this is the queue from which the data is taken on its way to the app.
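This is a classic producer/consumer hand-off. As a rough, self-contained sketch of the pattern (not the TIUTILS::MessageQueue implementation; I use a std::queue plus condition variable): frameCallback plays the producer, and notificationThread, shown below, plays the consumer.

#include <condition_variable>
#include <mutex>
#include <queue>

struct Message { int command; void* arg1; };

std::queue<Message> frameQ;          // the role of mFrameQ
std::mutex qLock;
std::condition_variable qCond;

// Producer (the role of AppCallbackNotifier::frameCallback), called on the adapter's thread.
void postFrame(void* frame) {
    std::lock_guard<std::mutex> lock(qLock);
    frameQ.push(Message{/*NOTIFIER_CMD_PROCESS_FRAME*/ 0, frame});
    qCond.notify_one();
}

// Consumer (the role of notificationThread): waits until something arrives, then processes it.
void notificationLoop() {
    for (;;) {
        std::unique_lock<std::mutex> lock(qLock);
        qCond.wait(lock, [] { return !frameQ.empty(); });
        Message msg = frameQ.front();
        frameQ.pop();
        lock.unlock();
        // notifyFrame(msg): copy or convert the frame and invoke the app data callback
    }
}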
Recall that when AppCallbackNotifier was created, its initialize() method was called to do the initial setup:
-
/**
-
* NotificationHandler class
-
*/
-
-
///Initialization function for AppCallbackNotifier
-
status_t AppCallbackNotifier::initialize()
-
{
-
LOG_FUNCTION_NAME;
-
-
mPreviewMemory = 0;
-
-
mMeasurementEnabled = false;
-
-
mNotifierState = NOTIFIER_STOPPED;
-
-
///Create the app notifier thread
-
mNotificationThread = new NotificationThread(this);
-
if(!mNotificationThread.get())
-
{
-
CAMHAL_LOGEA("Couldn't create Notification thread");
-
return NO_MEMORY;
-
}
-
-
///Start the display thread
-
status_t ret = mNotificationThread->run("NotificationThread", PRIORITY_URGENT_DISPLAY);
-
if(ret!=NO_ERROR)
-
{
-
CAMHAL_LOGEA("Couldn't run NotificationThread");
-
mNotificationThread.clear();
-
return ret;
-
}
-
-
mUseMetaDataBufferMode = true;
-
mRawAvailable = false;
-
-
mRecording = false;
-
mPreviewing = false;
-
-
LOG_FUNCTION_NAME_EXIT;
-
-
return ret;
-
}
The most important thing this initialization does is start a thread that listens for every message coming up from the HAL and forwards the resulting events or data toward the app. Here is the thread's implementation:
-
bool AppCallbackNotifier::notificationThread()
-
{
-
bool shouldLive = true;
-
status_t ret;
-
-
LOG_FUNCTION_NAME;
-
-
//CAMHAL_LOGDA("Notification Thread waiting for message");
-
ret = TIUTILS::MessageQueue::waitForMsg(&mNotificationThread->msgQ(),
-
&mEventQ,
-
&mFrameQ,
-
AppCallbackNotifier::NOTIFIER_TIMEOUT);
-
-
//CAMHAL_LOGDA("Notification Thread received message");
-
-
if (mNotificationThread->msgQ().hasMsg()) {
-
///Received a message from CameraHal, process it
-
CAMHAL_LOGDA("Notification Thread received message from Camera HAL");
-
shouldLive = processMessage();
-
if(!shouldLive) {
-
CAMHAL_LOGDA("Notification Thread exiting.");
-
return shouldLive;
-
}
-
}
-
-
if(mEventQ.hasMsg()) {
-
///Received an event from one of the event providers
-
CAMHAL_LOGDA("Notification Thread received an event from event provider (CameraAdapter)");
-
notifyEvent();
-
}
-
-
if(mFrameQ.hasMsg()) {
-
///Received a frame from one of the frame providers
-
//CAMHAL_LOGDA("Notification Thread received a frame from frame provider (CameraAdapter)");
-
notifyFrame();
-
}
-
-
LOG_FUNCTION_NAME_EXIT;
-
return shouldLive;
-
}
The thread blocks until a message arrives. Focusing on the path we care about, preview: when a message is found on mFrameQ, notifyFrame is called.
-
void AppCallbackNotifier::notifyFrame()
-
{
-
///Receive and send the frame notifications to app
-
TIUTILS::Message msg;
-
CameraFrame *frame;
-
MemoryHeapBase *heap;
-
MemoryBase *buffer = NULL;
-
sp<MemoryBase> memBase;
-
void *buf = NULL;
-
-
LOG_FUNCTION_NAME;
-
-
{
-
Mutex::Autolock lock(mLock);
-
if(!mFrameQ.isEmpty()) {
-
mFrameQ.get(&msg);
-
} else {
-
return;
-
}
-
}
-
-
bool ret = true;
-
-
frame = NULL;
-
switch(msg.command)
-
{
-
case AppCallbackNotifier::NOTIFIER_CMD_PROCESS_FRAME:
-
-
frame = (CameraFrame *) msg.arg1;
-
if(!frame)
-
{
-
break;
-
}
-
-
if ( (CameraFrame::RAW_FRAME == frame->mFrameType )&&
-
( NULL != mCameraHal ) &&
-
( NULL != mDataCb) &&
-
( NULL != mNotifyCb ) )
-
{
-
-
if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE) )
-
{
-
#ifdef COPY_IMAGE_BUFFER
-
copyAndSendPictureFrame(frame, CAMERA_MSG_RAW_IMAGE);
-
#else
-
//TODO: Find a way to map a Tiler buffer to a MemoryHeapBase
-
#endif
-
}
-
else {
-
if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE_NOTIFY) ) {
-
mNotifyCb(CAMERA_MSG_RAW_IMAGE_NOTIFY, 0, 0, mCallbackCookie);
-
}
-
mFrameProvider->returnFrame(frame->mBuffer,
-
(CameraFrame::FrameType) frame->mFrameType);
-
}
-
-
mRawAvailable = true;
-
-
}
-
else if ( (CameraFrame::IMAGE_FRAME == frame->mFrameType) &&
-
(NULL != mCameraHal) &&
-
(NULL != mDataCb) &&
-
(CameraFrame::ENCODE_RAW_YUV422I_TO_JPEG & frame->mQuirks) )
-
{
-
-
int encode_quality = 100, tn_quality = 100;
-
int tn_width, tn_height;
-
unsigned int current_snapshot = 0;
-
Encoder_libjpeg::params *main_jpeg = NULL, *tn_jpeg = NULL;
-
void* exif_data = NULL;
-
const char *previewFormat = NULL;
-
camera_memory_t* raw_picture = mRequestMemory(-1, frame->mLength, 1, NULL);
-
-
if(raw_picture) {
-
buf = raw_picture->data;
-
}
-
-
CameraParameters parameters;
-
char *params = mCameraHal->getParameters();
-
const String8 strParams(params);
-
parameters.unflatten(strParams);
-
-
encode_quality = parameters.getInt(CameraParameters::KEY_JPEG_QUALITY);
-
if (encode_quality < 0 || encode_quality > 100) {
-
encode_quality = 100;
-
}
-
-
tn_quality = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY);
-
if (tn_quality < 0 || tn_quality > 100) {
-
tn_quality = 100;
-
}
-
-
if (CameraFrame::HAS_EXIF_DATA & frame->mQuirks) {
-
exif_data = frame->mCookie2;
-
}
-
-
main_jpeg = (Encoder_libjpeg::params*)
-
malloc(sizeof(Encoder_libjpeg::params));
-
-
// Video snapshot with LDCNSF on adds a few bytes start offset
-
// and a few bytes on every line. They must be skipped.
-
int rightCrop = frame->mAlignment/2 - frame->mWidth;
-
-
CAMHAL_LOGDB("Video snapshot right crop = %d", rightCrop);
-
CAMHAL_LOGDB("Video snapshot offset = %d", frame->mOffset);
-
-
if (main_jpeg) {
-
main_jpeg->src = (uint8_t *)frame->mBuffer->mapped;
-
main_jpeg->src_size = frame->mLength;
-
main_jpeg->dst = (uint8_t*) buf;
-
main_jpeg->dst_size = frame->mLength;
-
main_jpeg->quality = encode_quality;
-
main_jpeg->in_width = frame->mAlignment/2; // use stride here
-
main_jpeg->in_height = frame->mHeight;
-
main_jpeg->out_width = frame->mAlignment/2;
-
main_jpeg->out_height = frame->mHeight;
-
main_jpeg->right_crop = rightCrop;
-
main_jpeg->start_offset = frame->mOffset;
-
if ( CameraFrame::FORMAT_YUV422I_UYVY & frame->mQuirks) {
-
main_jpeg->format = TICameraParameters::PIXEL_FORMAT_YUV422I_UYVY;
-
}
-
else { //if ( CameraFrame::FORMAT_YUV422I_YUYV & frame->mQuirks)
-
main_jpeg->format = CameraParameters::PIXEL_FORMAT_YUV422I;
-
}
-
}
-
-
tn_width = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH);
-
tn_height = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT);
-
previewFormat = parameters.getPreviewFormat();
-
-
if ((tn_width > 0) && (tn_height > 0) && ( NULL != previewFormat )) {
-
tn_jpeg = (Encoder_libjpeg::params*)
-
malloc(sizeof(Encoder_libjpeg::params));
-
// if malloc fails just keep going and encode main jpeg
-
if (!tn_jpeg) {
-
tn_jpeg = NULL;
-
}
-
}
-
-
if (tn_jpeg) {
-
int width, height;
-
parameters.getPreviewSize(&width,&height);
-
current_snapshot = (mPreviewBufCount + MAX_BUFFERS - 1) % MAX_BUFFERS;
-
tn_jpeg->src = (uint8_t *)mPreviewBuffers[current_snapshot].mapped;
-
tn_jpeg->src_size = mPreviewMemory->size / MAX_BUFFERS;
-
tn_jpeg->dst_size = calculateBufferSize(tn_width,
-
tn_height,
-
previewFormat);
-
tn_jpeg->dst = (uint8_t*) malloc(tn_jpeg->dst_size);
-
tn_jpeg->quality = tn_quality;
-
tn_jpeg->in_width = width;
-
tn_jpeg->in_height = height;
-
tn_jpeg->out_width = tn_width;
-
tn_jpeg->out_height = tn_height;
-
tn_jpeg->right_crop = 0;
-
tn_jpeg->start_offset = 0;
-
tn_jpeg->format = CameraParameters::PIXEL_FORMAT_YUV420SP;;
-
}
-
-
sp<Encoder_libjpeg> encoder = new Encoder_libjpeg(main_jpeg,
-
tn_jpeg,
-
AppCallbackNotifierEncoderCallback,
-
(CameraFrame::FrameType)frame->mFrameType,
-
this,
-
raw_picture,
-
exif_data, frame->mBuffer);
-
gEncoderQueue.add(frame->mBuffer->mapped, encoder);
-
encoder->run();
-
encoder.clear();
-
if (params != NULL)
-
{
-
mCameraHal->putParameters(params);
-
}
-
}
-
else if ( ( CameraFrame::IMAGE_FRAME == frame->mFrameType ) &&
-
( NULL != mCameraHal ) &&
-
( NULL != mDataCb) )
-
{
-
-
// CTS, MTS requirements: Every 'takePicture()' call
-
// who registers a raw callback should receive one
-
// as well. This is not always the case with
-
// CameraAdapters though.
-
if (!mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE)) {
-
dummyRaw();
-
} else {
-
mRawAvailable = false;
-
}
-
-
#ifdef COPY_IMAGE_BUFFER
-
{
-
Mutex::Autolock lock(mBurstLock);
-
#if defined(OMAP_ENHANCEMENT)
-
if ( mBurst )
-
{
-
copyAndSendPictureFrame(frame, CAMERA_MSG_COMPRESSED_BURST_IMAGE);
-
}
-
else
-
#endif
-
{
-
copyAndSendPictureFrame(frame, CAMERA_MSG_COMPRESSED_IMAGE);
-
}
-
}
-
#else
-
//TODO: Find a way to map a Tiler buffer to a MemoryHeapBase
-
#endif
-
}
-
else if ( ( CameraFrame::VIDEO_FRAME_SYNC == frame->mFrameType ) &&
-
( NULL != mCameraHal ) &&
-
( NULL != mDataCb) &&
-
( mCameraHal->msgTypeEnabled(CAMERA_MSG_VIDEO_FRAME) ) )
-
{
-
AutoMutex locker(mRecordingLock);
-
if(mRecording)
-
{
-
if(mUseMetaDataBufferMode)
-
{
-
camera_memory_t *videoMedatadaBufferMemory =
-
mVideoMetadataBufferMemoryMap.valueFor(frame->mBuffer->opaque);
-
video_metadata_t *videoMetadataBuffer = (video_metadata_t *) videoMedatadaBufferMemory->data;
-
-
if( (NULL == videoMedatadaBufferMemory) || (NULL == videoMetadataBuffer) || (NULL == frame->mBuffer) )
-
{
-
CAMHAL_LOGEA("Error! One of the video buffers is NULL");
-
break;
-
}
-
-
if ( mUseVideoBuffers )
-
{
-
CameraBuffer *vBuf = mVideoMap.valueFor(frame->mBuffer->opaque);
-
GraphicBufferMapper &mapper = GraphicBufferMapper::get();
-
Rect bounds;
-
bounds.left = 0;
-
bounds.top = 0;
-
bounds.right = mVideoWidth;
-
bounds.bottom = mVideoHeight;
-
-
void *y_uv[2];
-
mapper.lock((buffer_handle_t)vBuf, CAMHAL_GRALLOC_USAGE, bounds, y_uv);
-
y_uv[1] = y_uv[0] + mVideoHeight*4096;
-
-
structConvImage input = {frame->mWidth,
-
frame->mHeight,
-
4096,
-
IC_FORMAT_YCbCr420_lp,
-
(mmByte *)frame->mYuv[0],
-
(mmByte *)frame->mYuv[1],
-
frame->mOffset};
-
-
structConvImage output = {mVideoWidth,
-
mVideoHeight,
-
4096,
-
IC_FORMAT_YCbCr420_lp,
-
(mmByte *)y_uv[0],
-
(mmByte *)y_uv[1],
-
0};
-
-
VT_resizeFrame_Video_opt2_lp(&input, &output, NULL, 0);
-
mapper.unlock((buffer_handle_t)vBuf->opaque);
-
videoMetadataBuffer->metadataBufferType = (int) kMetadataBufferTypeCameraSource;
-
/* FIXME remove cast */
-
videoMetadataBuffer->handle = (void *)vBuf->opaque;
-
videoMetadataBuffer->offset = 0;
-
}
-
else
-
{
-
videoMetadataBuffer->metadataBufferType = (int) kMetadataBufferTypeCameraSource;
-
videoMetadataBuffer->handle = camera_buffer_get_omx_ptr(frame->mBuffer);
-
videoMetadataBuffer->offset = frame->mOffset;
-
}
-
-
CAMHAL_LOGVB("mDataCbTimestamp : frame->mBuffer=0x%x, videoMetadataBuffer=0x%x, videoMedatadaBufferMemory=0x%x",
-
frame->mBuffer->opaque, videoMetadataBuffer, videoMedatadaBufferMemory);
-
-
mDataCbTimestamp(frame->mTimestamp, CAMERA_MSG_VIDEO_FRAME,
-
videoMedatadaBufferMemory, 0, mCallbackCookie);
-
}
-
else
-
{
-
//TODO: Need to revisit this, should ideally be mapping the TILER buffer using mRequestMemory
-
camera_memory_t* fakebuf = mRequestMemory(-1, sizeof(buffer_handle_t), 1, NULL);
-
if( (NULL == fakebuf) || ( NULL == fakebuf->data) || ( NULL == frame->mBuffer))
-
{
-
CAMHAL_LOGEA("Error! One of the video buffers is NULL");
-
break;
-
}
-
-
*reinterpret_cast<buffer_handle_t*>(fakebuf->data) = reinterpret_cast<buffer_handle_t>(frame->mBuffer->mapped);
-
mDataCbTimestamp(frame->mTimestamp, CAMERA_MSG_VIDEO_FRAME, fakebuf, 0, mCallbackCookie);
-
fakebuf->release(fakebuf);
-
}
-
}
-
}
-
else if(( CameraFrame::SNAPSHOT_FRAME == frame->mFrameType ) &&
-
( NULL != mCameraHal ) &&
-
( NULL != mDataCb) &&
-
( NULL != mNotifyCb)) {
-
//When enabled, measurement data is sent instead of video data
-
if ( !mMeasurementEnabled ) {
-
copyAndSendPreviewFrame(frame, CAMERA_MSG_POSTVIEW_FRAME);
-
} else {
-
mFrameProvider->returnFrame(frame->mBuffer,
-
(CameraFrame::FrameType) frame->mFrameType);
-
}
-
}
-
else if ( ( CameraFrame::PREVIEW_FRAME_SYNC== frame->mFrameType ) &&
-
( NULL != mCameraHal ) &&
-
( NULL != mDataCb) &&
-
( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME)) ) {
-
//When enabled, measurement data is sent instead of video data
-
if ( !mMeasurementEnabled ) {
-
copyAndSendPreviewFrame(frame, CAMERA_MSG_PREVIEW_FRAME);
-
} else {
-
mFrameProvider->returnFrame(frame->mBuffer,
-
(CameraFrame::FrameType) frame->mFrameType);
-
}
-
}
-
else if ( ( CameraFrame::FRAME_DATA_SYNC == frame->mFrameType ) &&
-
( NULL != mCameraHal ) &&
-
( NULL != mDataCb) &&
-
( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME)) ) {
-
copyAndSendPreviewFrame(frame, CAMERA_MSG_PREVIEW_FRAME);
-
} else {
-
mFrameProvider->returnFrame(frame->mBuffer,
-
( CameraFrame::FrameType ) frame->mFrameType);
-
CAMHAL_LOGDB("Frame type 0x%x is still unsupported!", frame->mFrameType);
-
}
-
-
break;
-
-
default:
-
-
break;
-
-
};
-
-
exit:
-
-
if ( NULL != frame )
-
{
-
delete frame;
-
}
-
-
LOG_FUNCTION_NAME_EXIT;
-
}
Different frame types are handled in different ways here. Staying with the preview path, the handling is the branch highlighted above; let's look at copyAndSendPreviewFrame:
-
void AppCallbackNotifier::copyAndSendPreviewFrame(CameraFrame* frame, int32_t msgType)
-
{
-
camera_memory_t* picture = NULL;
-
CameraBuffer * dest = NULL;
-
-
// scope for lock
-
{
-
Mutex::Autolock lock(mLock);
-
-
if(mNotifierState != AppCallbackNotifier::NOTIFIER_STARTED) {
-
goto exit;
-
}
-
-
if (!mPreviewMemory || !frame->mBuffer) {
-
CAMHAL_LOGDA("Error! One of the buffer is NULL");
-
goto exit;
-
}
-
-
dest = &mPreviewBuffers[mPreviewBufCount];
-
-
CAMHAL_LOGVB("%d:copy2Dto1D(%p, %p, %d, %d, %d, %d, %d,%s)",
-
__LINE__,
-
dest,
-
frame->mBuffer,
-
mPreviewWidth,
-
mPreviewHeight,
-
mPreviewStride,
-
2,
-
frame->mLength,
-
mPreviewPixelFormat);
-
-
/* FIXME map dest */
-
if ( NULL != dest && dest->mapped != NULL ) {
-
// data sync frames don't need conversion
-
if (CameraFrame::FRAME_DATA_SYNC == frame->mFrameType) {
-
if ( (mPreviewMemory->size / MAX_BUFFERS) >= frame->mLength ) {
-
memcpy(dest->mapped, (void*) frame->mBuffer->mapped, frame->mLength);
-
} else {
-
memset(dest->mapped, 0, (mPreviewMemory->size / MAX_BUFFERS));
-
}
-
} else {
-
if ((NULL == frame->mYuv[0]) || (NULL == frame->mYuv[1])){
-
CAMHAL_LOGEA("Error! One of the YUV Pointer is NULL");
-
goto exit;
-
}
-
else{
-
copy2Dto1D(dest->mapped,
-
frame->mYuv,
-
mPreviewWidth,
-
mPreviewHeight,
-
mPreviewStride,
-
frame->mOffset,
-
2,
-
frame->mLength,
-
mPreviewPixelFormat);
-
}
-
}
-
}
-
}
-
-
exit:
-
mFrameProvider->returnFrame(frame->mBuffer, (CameraFrame::FrameType) frame->mFrameType);
-
-
if((mNotifierState == AppCallbackNotifier::NOTIFIER_STARTED) &&
-
mCameraHal->msgTypeEnabled(msgType) &&
-
(dest != NULL) && (dest->mapped != NULL)) {
-
AutoMutex locker(mLock);
-
if ( mPreviewMemory )
-
mDataCb(msgType, mPreviewMemory, mPreviewBufCount, NULL, mCallbackCookie);
-
}
-
-
// increment for next buffer
-
mPreviewBufCount = (mPreviewBufCount + 1) % AppCallbackNotifier::MAX_BUFFERS;
-
}
I will not analyze the intermediate steps in detail here; we only care about the final highlighted call, mDataCb, whose declaration is:
camera_data_callback mDataCb;
The implementation behind this callback is the key part. It is effectively provided by CameraService, via the following chain:
1. CameraService calls mHardware->setCallbacks(notifyCallback, dataCallback, dataCallbackTimestamp, (void *)cameraId);
2. CameraHardwareInterface calls mDevice->ops->set_callbacks(mDevice, __notify_cb, __data_cb, __data_cb_timestamp, __get_memory, this);
3. camerahal_module calls gCameraHals[ti_dev->cameraid]->setCallbacks(notify_cb, data_cb, data_cb_timestamp, get_memory, user);
4. CameraHal calls mAppCallbackNotifier->setCallbacks(this, notify_cb, data_cb, data_cb_timestamp, get_memory, user);
5. which brings us to AppCallbackNotifier; here is its setCallbacks implementation:
void AppCallbackNotifier::setCallbacks(CameraHal* cameraHal,
        camera_notify_callback notify_cb,
        camera_data_callback data_cb,
        camera_data_timestamp_callback data_cb_timestamp,
        camera_request_memory get_memory,
        void *user)
{
    Mutex::Autolock lock(mLock);

    LOG_FUNCTION_NAME;

    mCameraHal = cameraHal;
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mRequestMemory = get_memory;
    mCallbackCookie = user;

    LOG_FUNCTION_NAME_EXIT;
}
Here it looks as if mDataCb points straight at the callback defined in CameraService, and it is through this mechanism that the data captured at the bottom finally reaches the CameraService layer.
One caveat, though: saying that mDataCb points directly at the CameraService callback is not quite accurate. More precisely, the function stored in mDataCb eventually calls into the callback defined in CameraService.
It is worth spending a moment on this callback chain.
What mDataCb really refers to is the __data_cb function defined in CameraHardwareInterface, as determined by this call:
mDevice->ops->set_callbacks(mDevice,
__notify_cb,
__data_cb,
__data_cb_timestamp,
__get_memory,
this);
Here is the definition of __data_cb:
static void __data_cb(int32_t msg_type,
        const camera_memory_t *data, unsigned int index,
        camera_frame_metadata_t *metadata,
        void *user)
{
    LOGV("%s", __FUNCTION__);
    CameraHardwareInterface *__this =
            static_cast<CameraHardwareInterface *>(user);
    sp<CameraHeapMemory> mem(static_cast<CameraHeapMemory *>(data->handle));
    if (index >= mem->mNumBufs) {
        LOGE("%s: invalid buffer index %d, max allowed is %d", __FUNCTION__,
             index, mem->mNumBufs);
        return;
    }
    __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
}
And where does this mDataCb (the one inside CameraHardwareInterface) come from?
-
/** Set the notification and data callbacks */
-
void setCallbacks(notify_callback notify_cb,
-
data_callback data_cb,
-
data_callback_timestamp data_cb_timestamp,
-
void* user)
-
{
-
mNotifyCb = notify_cb;
-
mDataCb = data_cb;
-
mDataCbTimestamp = data_cb_timestamp;
-
mCbUser = user;
-
-
LOGV("%s(%s)", __FUNCTION__, mName.string());
-
-
if (mDevice->ops->set_callbacks) {
-
mDevice->ops->set_callbacks(mDevice,
-
__notify_cb,
-
__data_cb,
-
__data_cb_timestamp,
-
__get_memory,
-
this);
-
}
-
}
So mDataCb in CameraHardwareInterface ends up pointing at the dataCallback defined in CameraService; the path just takes a small detour.
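Summarizing the detour as a sketch (self-contained stand-in types; the real signatures carry more arguments): the HAL holds one function pointer, which is CameraHardwareInterface's static __data_cb; that trampoline recovers the CameraHardwareInterface object from the user cookie and forwards to the mDataCb it stored, which is CameraService::Client::dataCallback.

// Sketch of the two-hop callback chain; types and arguments are simplified.
typedef void (*data_callback)(int msgType, const void* data, void* user);

struct HardwareInterface {
    data_callback mDataCb;   // set to CameraService::Client::dataCallback
    void* mCbUser;           // cookie handed back to the service
};

// Hop 1: the HAL calls this C trampoline (it only ever sees a plain function pointer).
void __data_cb_sketch(int msgType, const void* data, void* user) {
    HardwareInterface* self = static_cast<HardwareInterface*>(user);
    // Hop 2: forward to the callback the service registered via setCallbacks().
    self->mDataCb(msgType, data, self->mCbUser);
}

// So the chain is:
//   AppCallbackNotifier::mDataCb --> CameraHardwareInterface::__data_cb
//                                --> CameraService::Client::dataCallback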
Let's keep going and see exactly how the data is delivered to the app. Below is the definition of dataCallback in CameraService:
-
void CameraService::Client::dataCallback(int32_t msgType,
-
const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
-
LOG2("dataCallback(%d)", msgType);
-
-
sp<Client> client = getClientFromCookie(user);
-
if (client == 0) return;
-
if (!client->lockIfMessageWanted(msgType)) return;
-
-
if (dataPtr == 0 && metadata == NULL) {
-
LOGE("Null data returned in data callback");
-
client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
-
return;
-
}
-
-
switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
-
case CAMERA_MSG_PREVIEW_FRAME:
-
client->handlePreviewData(msgType, dataPtr, metadata);
-
break;
-
case CAMERA_MSG_POSTVIEW_FRAME:
-
client->handlePostview(dataPtr);
-
break;
-
case CAMERA_MSG_RAW_IMAGE:
-
client->handleRawPicture(dataPtr);
-
break;
-
case CAMERA_MSG_COMPRESSED_IMAGE:
-
client->handleCompressedPicture(dataPtr);
-
break;
-
#ifdef OMAP_ENHANCEMENT
-
case CAMERA_MSG_COMPRESSED_BURST_IMAGE:
-
client->handleCompressedBurstPicture(dataPtr);
-
break;
-
#endif
-
default:
-
client->handleGenericData(msgType, dataPtr, metadata);
-
break;
-
}
-
}
The preview data continues upward through the client->handlePreviewData(msgType, dataPtr, metadata) call above; in this way the data moves from the CameraService layer to the camera client layer.
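The hop from CameraService into the client process goes over Binder. As far as I can tell (a hedged sketch; I have not traced handlePreviewData line by line here), it ends up invoking the ICameraClient interface, whose proxy marshals the call back into the app's process, where the Camera::dataCallback shown next receives it. A self-contained sketch of the proxy/stub relationship (real classes: ICameraClient, BpCameraClient, BnCameraClient; details simplified):

struct ICameraClient {
    virtual ~ICameraClient() {}
    virtual void dataCallback(int msgType, const void* data) = 0;
};

// In the cameraserver process, the client handle is really a proxy whose
// dataCallback() marshals the arguments and sends a Binder transaction.
struct BpCameraClientSketch : ICameraClient {
    void dataCallback(int msgType, const void* data) override {
        // writeInt32(msgType); writeStrongBinder(memory); remote()->transact(...);
        (void)msgType; (void)data;
    }
};

// In the app process, the stub unmarshals the transaction and calls the real
// implementation, i.e. the Camera object whose dataCallback appears below.
struct CameraSketch : ICameraClient {
    void dataCallback(int msgType, const void* data) override {
        // listener->postData(msgType, dataPtr, metadata);  // forwards to JNICameraContext
        (void)msgType; (void)data;
    }
};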
Now look at the camera client side:
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}
So what exactly is this listener? Remember the listener that was set in the JNI layer during initialization? Let's look at it again: frameworks/base/core/jni/android_hardware_Camera.cpp
-
// connect to camera service
-
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
-
jobject weak_this, jint cameraId)
-
{
-
sp<Camera> camera = Camera::connect(cameraId);
-
-
if (camera == NULL) {
-
jniThrowRuntimeException(env, "Fail to connect to camera service");
-
return;
-
}
-
-
// make sure camera hardware is alive
-
if (camera->getStatus() != NO_ERROR) {
-
jniThrowRuntimeException(env, "Camera initialization failed");
-
return;
-
}
-
-
jclass clazz = env->GetObjectClass(thiz);
-
if (clazz == NULL) {
-
jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
-
return;
-
}
-
-
// We use a weak reference so the Camera object can be garbage collected.
-
// The reference is only used as a proxy for callbacks.
-
sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
-
context->incStrong(thiz);
-
camera->setListener(context);
-
-
// save context in opaque field
-
env->SetIntField(thiz, fields.context, (int)context.get());
-
}
From the code above, JNICameraContext is the listener class, and it is installed via setListener. The class is defined in frameworks/base/core/jni/android_hardware_Camera.cpp:
// provides persistent context for calls from native code to Java
class JNICameraContext: public CameraListener
{
public:
    JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera);
    ~JNICameraContext() { release(); }
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
    void postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata);
    void addCallbackBuffer(JNIEnv *env, jbyteArray cbb, int msgType);
    void setCallbackMode(JNIEnv *env, bool installed, bool manualMode);
    sp<Camera> getCamera() { Mutex::Autolock _l(mLock); return mCamera; }
    bool isRawImageCallbackBufferAvailable() const;
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);
    void clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers);
    void clearCallbackBuffers_l(JNIEnv *env);
    jbyteArray getCallbackBuffer(JNIEnv *env, Vector<jbyteArray> *buffers, size_t bufferSize);

    jobject mCameraJObjectWeak; // weak reference to java object
    jclass mCameraJClass;       // strong reference to java class
    sp<Camera> mCamera;         // strong reference to native object
    jclass mFaceClass;          // strong reference to Face class
    jclass mRectClass;          // strong reference to Rect class
    Mutex mLock;

    /*
     * Global reference application-managed raw image buffer queue.
     *
     * Manual-only mode is supported for raw image callbacks, which is
     * set whenever method addCallbackBuffer() with msgType =
     * CAMERA_MSG_RAW_IMAGE is called; otherwise, null is returned
     * with raw image callbacks.
     */
    Vector<jbyteArray> mRawImageCallbackBuffers;

    /*
     * Application-managed preview buffer queue and the flags
     * associated with the usage of the preview buffer callback.
     */
    Vector<jbyteArray> mCallbackBuffers; // Global reference application managed byte[]
    bool mManualBufferMode;              // Whether to use application managed buffers.
    bool mManualCameraCallbackSet;       // Whether the callback has been set, used to
                                         // reduce unnecessary calls to set the callback.
};
The postData declared above is the method that Camera::dataCallback invokes on the listener; let's look at how postData is implemented:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            LOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            LOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}
Next, let's look at the copyAndPost method:
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        LOGV("copyAndPost: off=%ld, size=%d", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    LOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                LOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                LOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            LOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
The code above first creates a Java byte array obj and copies the frame data from the native memory heap into it. CallStaticVoidMethod is JNI's mechanism for calling a Java method from C/C++, and the method that finally runs is postEventFromNative() in the framework's Camera.java.
From this point on, the callback enters the camera framework layer:
frameworks/base/core/java/android/hardware/Camera.java
private static void postEventFromNative(Object camera_ref,
                                        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}
After sendMessage, the message is processed by the event handler, which is also defined in the framework layer:
private class EventHandler extends Handler
{
    private Camera mCamera;

    public EventHandler(Camera c, Looper looper) {
        super(looper);
        mCamera = c;
    }

    @Override
    public void handleMessage(Message msg) {
        switch(msg.what) {
        case CAMERA_MSG_SHUTTER:
            if (mShutterCallback != null) {
                mShutterCallback.onShutter();
            }
            return;

        case CAMERA_MSG_RAW_IMAGE:
            if (mRawImageCallback != null) {
                mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_COMPRESSED_IMAGE:
            if (mJpegCallback != null) {
                mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_FRAME:
            if (mPreviewCallback != null) {
                PreviewCallback cb = mPreviewCallback;
                if (mOneShot) {
                    // Clear the callback variable before the callback
                    // in case the app calls setPreviewCallback from
                    // the callback function
                    mPreviewCallback = null;
                } else if (!mWithBuffer) {
                    // We're faking the camera preview mode to prevent
                    // the app from being flooded with preview frames.
                    // Set to oneshot mode again.
                    setHasPreviewCallback(true, false);
                }
                cb.onPreviewFrame((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_POSTVIEW_FRAME:
            if (mPostviewCallback != null) {
                mPostviewCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_FOCUS:
            if (mAutoFocusCallback != null) {
                mAutoFocusCallback.onAutoFocus(msg.arg1 == 0 ? false : true, mCamera);
            }
            return;

        case CAMERA_MSG_ZOOM:
            if (mZoomListener != null) {
                mZoomListener.onZoomChange(msg.arg1, msg.arg2 != 0, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_METADATA:
            if (mFaceListener != null) {
                mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_ERROR:
            Log.e(TAG, "Error " + msg.arg1);
            if (mErrorCallback != null) {
                mErrorCallback.onError(msg.arg1, mCamera);
            }
            return;

        default:
            Log.e(TAG, "Unknown message type " + msg.what);
            return;
        }
    }
}
As we can see above, all of the callbacks are dispatched here: the shutter callback mShutterCallback.onShutter(), the still-capture data callback mRawImageCallback.onPictureTaken(), the auto-focus callback, and so on.
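To make that dispatching concrete, here is a minimal sketch of how an application would register these callbacks through the legacy android.hardware.Camera API. This is my own illustration, not code from the OMAP Camera app; the class name StillCaptureDemo is hypothetical, and it assumes setPreviewDisplay()/startPreview() have already been called (as happens in the app's surfaceChanged path), since both autoFocus() and takePicture() require a running preview.

import android.hardware.Camera;
import android.util.Log;

// Hypothetical demo class: registers the callbacks that EventHandler dispatches above.
public class StillCaptureDemo {
    private static final String TAG = "StillCaptureDemo";

    // Assumes the caller has already done setPreviewDisplay() and startPreview().
    public static void capture(final Camera camera) {
        // The focus result comes back as CAMERA_MSG_FOCUS.
        camera.autoFocus(new Camera.AutoFocusCallback() {
            @Override
            public void onAutoFocus(boolean success, Camera cam) {
                Log.d(TAG, "focused: " + success);
                // Shutter, raw and JPEG callbacks map to CAMERA_MSG_SHUTTER,
                // CAMERA_MSG_RAW_IMAGE and CAMERA_MSG_COMPRESSED_IMAGE respectively.
                cam.takePicture(
                        new Camera.ShutterCallback() {
                            @Override
                            public void onShutter() { Log.d(TAG, "shutter"); }
                        },
                        null, // no raw callback registered in this sketch
                        new Camera.PictureCallback() {
                            @Override
                            public void onPictureTaken(byte[] jpeg, Camera c) {
                                Log.d(TAG, "jpeg size = " + jpeg.length);
                            }
                        });
            }
        });
    }
}

Each of these callback objects ends up stored in a member of Camera.java (mShutterCallback, mJpegCallback, ...), which is exactly what the switch in handleMessage() checks for null before invoking.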
By default no preview-frame callback is delivered unless the app has called setPreviewCallback. In other words, preview data can be called back all the way up to the application; the system simply does not do so by default. To go a bit deeper: as the CAMERA_MSG_PREVIEW_FRAME branch above shows, the framework checks whether a PreviewCallback (an interface defined in the framework) has been registered via setPreviewCallback, and invokes it only if it has. The interface's onPreviewFrame method has to be implemented by the developer; there is no default implementation, so an app that needs the frames must provide its own (this is my own understanding). Take a look at the definition of the PreviewCallback interface in frameworks/base/core/java/android/hardware/Camera.java, and the usage sketch that follows it:
/**
 * Callback interface used to deliver copies of preview frames as
 * they are displayed.
 *
 * @see #setPreviewCallback(Camera.PreviewCallback)
 * @see #setOneShotPreviewCallback(Camera.PreviewCallback)
 * @see #setPreviewCallbackWithBuffer(Camera.PreviewCallback)
 * @see #startPreview()
 */
public interface PreviewCallback
{
    /**
     * Called as preview frames are displayed. This callback is invoked
     * on the event thread {@link #open(int)} was called from.
     *
     * @param data the contents of the preview frame in the format defined
     *  by {@link android.graphics.ImageFormat}, which can be queried
     *  with {@link android.hardware.Camera.Parameters#getPreviewFormat()}.
     *  If {@link android.hardware.Camera.Parameters#setPreviewFormat(int)}
     *  is never called, the default will be the YCbCr_420_SP
     *  (NV21) format.
     * @param camera the Camera service object.
     */
    void onPreviewFrame(byte[] data, Camera camera);
};
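Here is the promised usage sketch: an application-side implementation of onPreviewFrame wired up with setPreviewCallbackWithBuffer and addCallbackBuffer, which is exactly what drives the mManualBufferMode / mCallbackBuffers path seen earlier in JNICameraContext::copyAndPost. This is my own illustration, not code from the OMAP app; the class name PreviewFrameDemo is hypothetical, and the buffer size assumes the default NV21 preview format.

import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.util.Log;

// Hypothetical demo class: receives preview frames using application-managed buffers.
public class PreviewFrameDemo implements Camera.PreviewCallback {
    private static final String TAG = "PreviewFrameDemo";

    // Assumes setPreviewDisplay() has already been called on this camera.
    public void start(Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // NV21, the default preview format, uses 12 bits per pixel.
        int bufferSize = size.width * size.height
                * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;

        // Hand a couple of byte[] buffers to the framework; they end up in
        // JNICameraContext::mCallbackBuffers via addCallbackBuffer().
        camera.addCallbackBuffer(new byte[bufferSize]);
        camera.addCallbackBuffer(new byte[bufferSize]);

        camera.setPreviewCallbackWithBuffer(this);
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // 'data' is one NV21 preview frame copied up from the native IMemory heap.
        Log.d(TAG, "got preview frame, " + data.length + " bytes");
        // Return the buffer so the next frame can reuse it; otherwise delivery stops
        // once mCallbackBuffers runs empty (see the "Out of buffers" branch in copyAndPost).
        camera.addCallbackBuffer(data);
    }
}

If the plain setPreviewCallback(PreviewCallback) is used instead, the JNI layer allocates a new byte array for every frame (the NewByteArray branch of copyAndPost), which is simpler but puts more pressure on the garbage collector.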
Note also that the hand-off of preview data between the two buffers, the capture buffer and the display buffer, which is what makes real-time preview display possible, is done in the HAL layer.
That completes a rough walk-through of the whole process. There are surely many gaps along the way; these are only my own study notes, and some of my interpretations are bound to be wrong and will need correcting later.
To be continued...