Category: iOS Platform
2013-01-18 11:28:06
Audio Toolbox Framework: Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. In other words, the Audio Toolbox framework covers synchronized playback, audio capture, audio format conversion, audio stream parsing, recording, and so on.

Recording audio:
Object declaration:
#import <AudioToolbox/AudioToolbox.h>

// Number of audio queue buffers used for recording
// (3 is the value used in Apple's Audio Queue Services examples)
static const int kNumberBuffers = 3;

@interface MicCapture : NSObject {
    AudioStreamBasicDescription mDataFormat;              // format of the recorded audio
    AudioQueueRef               mQueue;                    // the recording audio queue
    AudioQueueBufferRef         mBuffers[kNumberBuffers];  // reusable queue buffers
    UInt32                      bufferByteSize;            // size of each buffer, in bytes
    SInt64                      mCurrentPacket;            // running packet count
    BOOL                        mIsRunning;                // YES while capture is active
}
@end
Set up the recording callback function:
static void HandleInputBuffer(void                               *aqData,
                              AudioQueueRef                      inAQ,
                              AudioQueueBufferRef                inBuffer,
                              const AudioTimeStamp               *inStartTime,
                              UInt32                             inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    MicCapture *THIS = (__bridge MicCapture *)aqData;
    if (THIS->mIsRunning == false)
        return;

    // Do something with the input buffer
    // (the data is at inBuffer->mAudioData, inBuffer->mAudioDataByteSize bytes)

    // Hand the buffer back to the queue so it can be reused
    AudioQueueEnqueueBuffer(THIS->mQueue, inBuffer, 0, NULL);
}
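As a hedged sketch of what the "do something with the input buffer" step could look like, the fragment below appends the raw 16-bit PCM bytes to an NSMutableData. The mCapturedData ivar is a hypothetical addition for illustration and is not part of the original class.

// Inside HandleInputBuffer, in place of the "do something" comment.
// mCapturedData is a hypothetical NSMutableData ivar added to MicCapture:
[THIS->mCapturedData appendBytes:inBuffer->mAudioData
                          length:inBuffer->mAudioDataByteSize];
THIS->mCurrentPacket += inNumPackets;   // keep a running packet count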
Configure the recording audio format:
- (void)setUpAudioFormat
{
    // Query the current hardware values (they are overwritten with fixed values below)
    UInt32 size = sizeof(mDataFormat.mSampleRate);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                            &size,
                            &mDataFormat.mSampleRate);

    size = sizeof(mDataFormat.mChannelsPerFrame);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                            &size,
                            &mDataFormat.mChannelsPerFrame);

    // Uncompressed, interleaved 16-bit PCM at 44.1 kHz, stereo
    mDataFormat.mFormatID         = kAudioFormatLinearPCM;
    mDataFormat.mSampleRate       = 44100.0;
    mDataFormat.mChannelsPerFrame = 2;
    mDataFormat.mBitsPerChannel   = 16;
    mDataFormat.mBytesPerFrame    = mDataFormat.mChannelsPerFrame * sizeof(SInt16);
    mDataFormat.mBytesPerPacket   = mDataFormat.mBytesPerFrame;
    mDataFormat.mFramesPerPacket  = 1;
    mDataFormat.mFormatFlags      = // kLinearPCMFormatFlagIsBigEndian |
                                    kLinearPCMFormatFlagIsSignedInteger |
                                    kLinearPCMFormatFlagIsPacked;
}
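A quick, hedged sanity check on the linear-PCM bookkeeping above (not part of the original post): with two 16-bit channels each frame is 4 bytes, and since one packet holds one frame, one second of audio works out to 44100 × 4 bytes.

// Hedged sanity check of the format arithmetic:
UInt32 bytesPerFrame  = (UInt32)(2 * sizeof(SInt16));       // 2 channels * 2 bytes = 4
UInt32 bytesPerSecond = (UInt32)(44100.0 * bytesPerFrame);  // 176400 bytes of PCM per second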
// Derive a recording audio queue buffer size
void DeriveBufferSize(AudioQueueRef               audioQueue,
                      AudioStreamBasicDescription ASBDescription,
                      Float64                     seconds,
                      UInt32                      *outBufferSize)
{
    static const int maxBufferSize = 0x50000;   // upper bound: 320 KB

    int maxPacketSize = ASBDescription.mBytesPerPacket;
    if (maxPacketSize == 0) {
        // VBR format: ask the queue for the largest possible packet size
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty(audioQueue,
                              kAudioQueueProperty_MaximumOutputPacketSize,
                              // in Mac OS X v10.5, instead use
                              // kAudioConverterPropertyMaximumOutputPacketSize
                              &maxPacketSize,
                              &maxVBRPacketSize);
    }

    Float64 numBytesForTime = ASBDescription.mSampleRate * maxPacketSize * seconds;
    *outBufferSize = (UInt32)(numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
}
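For the constant-bit-rate format configured above, the result is easy to check by hand. A hedged example, using the same names as the code in this post:

// With mDataFormat set up as above (4-byte CBR packets at 44.1 kHz) and the
// 1-second duration used in startMicCapture below:
UInt32 oneSecondBufferSize = 0;
DeriveBufferSize(mQueue, mDataFormat, 1.0, &oneSecondBufferSize);
// 44100 * 4 * 1 = 176400 bytes, well under the 0x50000 (327680-byte) cap.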
// Start recording
- (void)startMicCapture
{
    if (mIsRunning)
        return;

    // Set up the audio format
    [self setUpAudioFormat];

    // Create a recording audio queue
    OSStatus err = AudioQueueNewInput(&mDataFormat,
                                      HandleInputBuffer,
                                      (__bridge void *)self,
                                      NULL,                     // NULL run loop: callback on an internal thread
                                      kCFRunLoopCommonModes,
                                      0,
                                      &mQueue);
    if (err != noErr) {
        NSLog(@"AudioQueueNewInput failed: %d", (int)err);
        return;
    }

    // Set an audio queue buffer size (about one second of audio)
    DeriveBufferSize(mQueue, mDataFormat, 1, &bufferByteSize);

    // Prepare a set of audio queue buffers
    for (int i = 0; i < kNumberBuffers; ++i) {
        AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]);
        AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL);
    }

    mCurrentPacket = 0;
    mIsRunning = true;
    AudioQueueStart(mQueue, NULL);
}
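If you need to pause capture without tearing the queue down, AudioQueuePause keeps the queue and its buffers alive until the next AudioQueueStart. The two methods below are a hedged sketch and are not part of the original class.

// Hedged sketch: optional pause/resume on the same recording queue.
- (void)pauseMicCapture
{
    if (mIsRunning)
        AudioQueuePause(mQueue);        // stops pulling audio, keeps queue state and buffers
}

- (void)resumeMicCapture
{
    if (mIsRunning)
        AudioQueueStart(mQueue, NULL);  // resumes the paused queue as soon as possible
}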
// Stop recording
- (void)stopMicCapture
{
    if (mIsRunning) {
        AudioQueueStop(mQueue, true);       // stop immediately
        mIsRunning = false;
        AudioQueueDispose(mQueue, true);    // release the queue and its buffers
    }
}
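Putting it together, a hedged usage sketch from a hypothetical caller such as a view controller (the variable name and call sites are assumptions, not part of the original post):

// Hedged usage sketch:
MicCapture *micCapture = [[MicCapture alloc] init];
[micCapture startMicCapture];     // begins filling and re-enqueueing the buffers

// ... capture runs, HandleInputBuffer fires once per filled buffer ...

[micCapture stopMicCapture];      // stops the queue and disposes of it and its buffers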
If you need to play and record at the same time (as in a voice-call scenario), you also have to configure the relevant Audio Session settings, because this text assumes, by default, that only one of the two operations (recording or playback) runs at a time. Set it up as follows:
UInt32 category = kAudioSessionCategory_PlayAndRecord;
error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(category),
                                &category);
if (error) printf("couldn't set audio category!");
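For completeness, a hedged sketch of the surrounding Audio Session calls this snippet assumes have been made elsewhere (using the same C-based Audio Session API as the rest of this post): the session must be initialized before properties are set and activated before recording or playback starts.

// Hedged sketch of the surrounding Audio Session setup:
OSStatus error = AudioSessionInitialize(NULL, NULL, NULL, NULL);
if (error) printf("couldn't initialize audio session!");

UInt32 category = kAudioSessionCategory_PlayAndRecord;
error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(category), &category);
if (error) printf("couldn't set audio category!");

error = AudioSessionSetActive(true);
if (error) printf("couldn't activate audio session!");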
Audio playback follows the same program logic as above; for details, see:
Audio Queue Services Programming Guide
Audio Session Programming Guide