Category: LINUX

2011-06-07 16:47:22

This post talks about the Android audio system, using audio recording as the example. I have been debugging an audio recording bug these days, and I found that very few people write about sound recording, so I think I should do it.

This topic has 2 parts: 1. Audio Abstract Layer; 2. Hardware Layer.

Let's start with part 1:

I assume you can get a copy of the Android code; this topic uses the Android 2.0 code. The architecture may evolve further in the future.

* Related code:
[1]. droid/frameworks/base/media/libmedia/AudioRecord*
[2]. droid/external/opencore/android/author/android_audio_input*
[3]. droid/frameworks/base/libs/audioflinger/*

* Role:

** Control Server - AudioFlinger

(in system server) Handles commands such as creating a new AudioTrack, and controls the hardware devices.

** Media Server - AudioRecord

(in media server) Compresses/decompresses audio data.

** Client Side - Application

Uses JNI & RPC to request the media server to record/play sound.

* Communication between Role
AudioFlinger and the media server use shared memory (called a Heap in Android) for IPC. The general idea: raw data lives in the Heap, and a semaphore tells the other side that data is ready or that more data is needed.

The application and the media server communicate via Binder RPC.

* Control Stream:

Application -> request recording -> Java -> JNI -> media server -> opencore -> opencore/android/author/android_audio_input*

** After the application RPCs to the media server

** MediaServer side: android_audio_input.cpp: audio_thread_func():

1.1 Create an AudioRecord object, passing the AudioSource, sample rate, format, channels, buffer count, and flags; the class is defined in [1].

1.2 In AudioRecord.cpp:

This class then calls the set() member function, which checks the incoming params, gets an AudioPolicyService client from Binder, and computes the frame size in bytes by the formula:

frameSize (bytes) = channelCount * bytesPerSample (PCM 16-bit is 2 bytes)

The recording buffer is set to at least 2 frames.
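The frame-size formula above can be sketched like this (illustrative names, not the real AudioRecord members):

```cpp
#include <cstddef>

// bytesPerSample is 2 for PCM 16-bit, 1 for PCM 8-bit.
constexpr std::size_t frameSize(std::size_t channelCount,
                                std::size_t bytesPerSample) {
    return channelCount * bytesPerSample;
}

// The recording buffer must hold at least 2 frames.
constexpr std::size_t minBufferBytes(std::size_t channelCount,
                                     std::size_t bytesPerSample) {
    return 2 * frameSize(channelCount, bytesPerSample);
}
```

For example, stereo PCM 16-bit gives a 4-byte frame, so the minimum buffer is 8 bytes.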

set() will call openRecord() to RPC to AudioFlinger::openRecord().

** Into AudioFlinger
1.3 In AudioFlinger:

openRecord(): this function checks the permission first, and gets a RecordThread from the RecordThread pool.

It then creates a new RecordTrack, uses the RecordTrack to create a RecordHandle, and returns this handle to the caller. The cblk is shared memory between AudioFlinger and its user.

** Back to MediaServer
1.4 Go back to AudioRecord:

openRecord() cares a lot about the cblk (the control block of the track).

Back to set():

set() will create a new ClientRecordThread; this thread continuously calls this object's processAudioBuffer(). processAudioBuffer() first checks whether the position has reached the marker position; when it has, it calls into the cblk (the control block is a bit strange: you can get a member and call it as a function, much like a callable class in C++).

Then it calls obtainBuffer(). This function keeps waiting for the cblk's lock; when the buffer has been filled, AudioFlinger gives up the lock, so here we can acquire it. If no data can be obtained from the device, this function prints the log message "obtainBuffer timed out (is the CPU pegged?)"; normally, when that log shows up, you can't hear the recorded sound unless the CPU really is pegged.

If it gets the buffer, the function calls mCbf with the event EVENT_MORE_DATA.
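The obtainBuffer() wait can be sketched as follows (simplified, with illustrative names; the real function retries and reports to the callback, but the timeout shape is the same):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>

// Stand-in for the track control block shared with AudioFlinger.
struct Cblk {
    std::mutex lock;
    std::condition_variable cv;
    bool filled = false;            // set by AudioFlinger when data arrives
};

enum Status { OK, TIMED_OUT };

// Wait for AudioFlinger to fill the buffer; on timeout, print the same
// warning the real AudioRecord logs and let the caller loop again.
Status obtainBuffer(Cblk& cblk, std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> g(cblk.lock);
    if (!cblk.cv.wait_for(g, timeout, [&] { return cblk.filled; })) {
        std::puts("obtainBuffer timed out (is the CPU pegged?)");
        return TIMED_OUT;           // no data from the device yet
    }
    cblk.filled = false;            // claim the buffer; caller then fires
    return OK;                      // EVENT_MORE_DATA via mCbf
}
```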

** AudioFlinger
1.5 Back on the AudioFlinger side:

After AudioFlinger creates a Track, it starts a record thread, which enters a loop: [Receive From Device] -> [Signal User] -> ... -> [Signal User]. When receiving from the device, it first locks the shared memory, reads the frames, then unlocks & signals the users; in record mode, the media server is the user.

(Figure omitted: a state diagram of this process; the arrows show the sequence of state transitions.)


By the way, you may have noticed this post looks somewhat like org-mode; you are right, the original was written in org-mode.

The ALSA hardware layer is becoming a standard part of the Android master tree.
The Android ALSA layer library code base:
The major development work was done by Wind River.
This part is a continuation of the previous post on the Android audio system (part 1); if you haven't read it, I strongly suggest you read it first.

** Roles

***  AudioSystem - Audio Abstract Layer

code: frameworks/base/media/libmedia/AudioSystem.*

*** AudioPolicyManager - Platform-specific policy manager
ALSA: AudioPolicyManagerALSA.*: this manager can be implemented on OSS or ALSA.
*** AudioStream[In|Out]Alsa
This class is controlled by the AudioPolicyManager and takes charge of the read and audio-mix jobs, using the ALSA PCM interface.
*** ALSA device handler: reference code: alsa_default.cpp
This part is compiled as a separate shared library, and alsa_default.cpp just acts as a fallback. (This is just like other libhardware modules: libhardware first searches for [ModuleName].[value of ro.hardware].so, such as alsa.dream.so, then continues with ro.product.board, ro.board.platform, and ro.arch; if it can't find any of these libraries, it falls back to searching for [ModuleName].default.so, such as alsa.default.so.)
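The search order described above can be sketched as a small helper (the property values like "dream" are examples; the real code reads the system properties and dlopen()s the first library it finds):

```cpp
#include <string>
#include <vector>

// Build the ordered list of library names libhardware would try for a
// module, from the values of ro.hardware, ro.product.board,
// ro.board.platform, ro.arch (in that order), ending with the fallback.
std::vector<std::string> moduleSearchOrder(
        const std::string& module,
        const std::vector<std::string>& propValues) {
    std::vector<std::string> names;
    for (const std::string& v : propValues)
        if (!v.empty())
            names.push_back(module + "." + v + ".so");
    names.push_back(module + ".default.so");   // the fallback, always last
    return names;
}
```

For example, on a device where ro.hardware is "dream", the "alsa" module is searched as alsa.dream.so first and alsa.default.so last.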
** Communication between roles:
AudioSystem has an AudioPolicyService client; it RPCs to the AudioPolicyManager class, in this case AudioPolicyManagerALSA.
AudioStream[In|Out] uses the hardware lib via dlopen() on the ALSA handler library, and then calls open/close directly.

** Control flow:

*** Media Server side.

When recording, AudioSystem will call AudioSystem::getInput first; this call is delivered to AudioPolicyManagerALSA::getInput(). AudioPolicyManagerALSA::getInput() checks the channels, input format, and sample rate, and calls mpClientInterface's openInput() interface (AudioPolicyManagerALSA.cpp:1014); this call goes back into AudioFlinger's openInput() (AudioPolicyService.cpp:504).

*** Back to AudioFlinger   

In AudioFlinger::openInput, this function will call AudioHardware's openInputStream(); this drops into the AudioHardwareALSA class (AudioHardwareALSA.cpp:229). This function walks the hardware list and calls the open function of the device, which goes into the open function of the hardware shared library (such as alsa_default.cpp).

*** Into hardware handler library

In open() it calls snd_pcm_open(), an ALSA user-space library function. This function does some device checking and applies the config (such as hooks in asound.conf) that belongs to this device. It also does some parameter setting, such as setting the hardware params and software params. After open() returns, AudioHardwareALSA will new an AudioStreamInALSA object from the handle that open() returned, plus the acoustics.
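As a sketch of that open sequence (this is not the actual alsa_default.cpp code, just the standard libasound capture-open pattern; it needs libasound and a real capture device to run):

```cpp
#include <alsa/asoundlib.h>

// Open a capture PCM and set the hardware params, roughly what the
// handler library's open() does before the software params are applied.
int open_capture(snd_pcm_t** pcm) {
    int err = snd_pcm_open(pcm, "default", SND_PCM_STREAM_CAPTURE, 0);
    if (err < 0) return err;              // device check / asound.conf hooks

    snd_pcm_hw_params_t* hw;
    snd_pcm_hw_params_malloc(&hw);
    snd_pcm_hw_params_any(*pcm, hw);      // start from the full config space

    // Hardware params: access type, sample format, channels, rate.
    snd_pcm_hw_params_set_access(*pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(*pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(*pcm, hw, 2);
    unsigned int rate = 44100;
    int dir = 0;
    snd_pcm_hw_params_set_rate_near(*pcm, hw, &rate, &dir);

    err = snd_pcm_hw_params(*pcm, hw);    // commit the hardware params
    snd_pcm_hw_params_free(hw);
    return err;
}
```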

*** Leave hardware handler library

In AudioStreamInALSA, it initializes the ALSAStreamOps class with the ALSA hardware handler (for example, alsa_default), and creates an acoustic device at the same time. After that, AudioHardwareALSA calls the stream's set(); this lands in ALSAStreamOps::set(). This function looks at the channel config, format, and sample rate in the hardware handler, and prepares the parameters AudioHardwareALSA needs. These parameters finally return to AudioFlinger, which checks whether the values are the same as what it wanted; if a value differs and it is not due to an error, AudioFlinger will call openInputStream() again with these parameters.
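That renegotiation step can be sketched as follows (illustrative names, not the real AudioFlinger code; the stand-in "hardware" here only supports 48000 Hz stereo and reports what it actually configured):

```cpp
// Parameters negotiated between AudioFlinger and the input stream.
struct StreamParams {
    unsigned rate;
    unsigned channels;
};

bool operator==(const StreamParams& a, const StreamParams& b) {
    return a.rate == b.rate && a.channels == b.channels;
}

// Stand-in for openInputStream(): the hardware ignores unsupported
// requests and returns the parameters it really configured.
StreamParams openInputStream(const StreamParams& want) {
    (void)want;
    return StreamParams{48000, 2};
}

// If the stream came back with different values than requested (and it
// is not an error), reopen once using the values the hardware reported.
StreamParams openWithRenegotiate(StreamParams want) {
    StreamParams got = openInputStream(want);
    if (!(got == want))
        got = openInputStream(got);    // retry with the reported params
    return got;
}
```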

If the input opens successfully, a RecordThread is created and put into the record thread pool. From then on, the record process is the same as for other, non-ALSA sound hardware. This is how ALSA connects to the AudioSystem, and how the ALSA layer connects to the ALSA hardware handler.