I previously posted the code for my Snake game, and I wanted to make it voice-controlled (up, down, left, right), so I chose the iFlytek (科大讯飞) SDK. There is some official documentation, but a few details can still trip up people developing on Linux, like me, so here is my summary.
First download the Linux version of the iFlytek SDK; you need to register first. In the downloaded include folder there are four files: msp_errors.h, msp_types.h, qisr.h and qtts.h. The first two define common data structures; qisr.h is the header for speech recognition and qtts.h is the header for speech synthesis. Since I only need speech recognition, including qisr.h in my code is enough. The bin folder is a bit of a mess, but the important parts are the two shared libraries libmsc.so and libspeex.so; I simply copied both into /usr/lib.
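For reference, with those two libraries in /usr/lib, building against the SDK should look roughly like this when run from the SDK's root directory; the file name upload_grammar.c is just a placeholder for your own source, and the exact set of extra system libraries may vary between SDK versions (libmsc is the one that matters):

sudo cp bin/libmsc.so bin/libspeex.so /usr/lib/
gcc upload_grammar.c -o upload_grammar -I./include -lmsc -lpthread -ldl -lrt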
In the bin folder you will also notice a file named asr_keywords_utf8.txt. The SDK works like this: write the words you want to recognize into asr_keywords_utf8.txt, upload it to iFlytek's server, and get back a GrammarID. One upload is said to be "valid forever", meaning you are not supposed to upload repeatedly and waste server space; once you have the GrammarID, you can reuse it in different programs to recognize the same words. For example, I want to recognize "左, 右, 上, 下, 图书馆, 独自" (left, right, up, down, library, alone), so I write these characters into asr_keywords_utf8.txt, which must be UTF-8 encoded (the default on Linux anyway). Below is my code for uploading this txt file and obtaining the GrammarID:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#include "qisr.h"
#include "msp_errors.h"

#define TRUE 1
#define FALSE 0

int main()
{
    /* Log in with your own appid (bound to the account the SDK was downloaded with). */
    int ret = QISRInit("appid=xxxxxxx");
    if (ret != MSP_SUCCESS)
    {
        printf("QISRInit with errorCode: %d \n", ret);
        return 0;
    }

    char GrammarID[128];
    memset(GrammarID, 0, sizeof(GrammarID));
    const int MAX_KEYWORD_LEN = 4096;
    ret = MSP_SUCCESS;
    const char* sessionID = NULL;

    /* Start a recognition session; no GrammarID yet, we are about to create one. */
    sessionID = QISRSessionBegin(NULL, "ssm=1,sub=asr", &ret);
    if (ret != MSP_SUCCESS)
    {
        printf("QISRSessionBegin with errorCode: %d \n", ret);
        return ret;
    }

    /* Read the UTF-8 keyword list from disk (leave room for the terminating zero). */
    char UserData[MAX_KEYWORD_LEN];
    memset(UserData, 0, MAX_KEYWORD_LEN);
    FILE* fp = fopen("asr_keywords_utf8.txt", "rb");
    if (fp == NULL)
    {
        printf("keyword file cannot open\n");
        return -1;
    }
    unsigned int len = (unsigned int)fread(UserData, 1, MAX_KEYWORD_LEN - 1, fp);
    UserData[len] = 0;
    fclose(fp);

    /* Upload the keyword list; the string returned by the server is the GrammarID. */
    const char* testID = QISRUploadData(sessionID, "contact", UserData, len, "dtt=keylist", &ret);
    if (ret != MSP_SUCCESS)
    {
        printf("QISRUploadData with errorCode: %d \n", ret);
        return ret;
    }
    memcpy((void*)GrammarID, testID, strlen(testID));
    printf("GrammarID: \"%s\" \n", GrammarID);

    QISRSessionEnd(sessionID, "normal");
    return 0;
}
After running this, the GrammarID is printed to the terminal; write it down and you are set. Next comes recording the audio to be recognized. Not knowing the requirements at first, I just recorded a clip with Ubuntu's built-in recorder and it could never be recognized. Only after asking on a forum did I understand: iFlytek requires audio at a 16 kHz or 8 kHz sample rate, 16-bit samples, mono, in PCM or WAV format. The built-in recorder defaults to 32-bit samples, so you have to record with ffmpeg or with your own code. The ffmpeg command is:

ffmpeg -f alsa -i hw:0 -ar 16000 -ac 1 lib.wav
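If you would rather not use ffmpeg, arecord from alsa-utils should be able to produce the same kind of file; I have not verified this one, so treat it as a sketch (-f S16_LE for 16-bit samples, -r 16000 for the sample rate, -c 1 for mono, -d 2 for a two-second recording):

arecord -f S16_LE -r 16000 -c 1 -d 2 lib.wav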
I recorded a 2-second clip of myself saying "图书馆" in Mandarin; below is the recognition code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#include "qisr.h"
#include "msp_errors.h"

#define TRUE 1
#define FALSE 0

int run_asr(const char* asrfile);

const int BUFFER_NUM = 4096;
const int MAX_KEYWORD_LEN = 4096;

int main(int argc, char* argv[])
{
    int ret = MSP_SUCCESS;
    const char* asrfile = "lib.wav";

    ret = QISRInit("appid=xxxxxx");
    if (ret != MSP_SUCCESS)
    {
        printf("QISRInit with errorCode: %d \n", ret);
        return 0;
    }

    ret = run_asr(asrfile);
    QISRFini();
    getchar();          /* keep the terminal open until a key is pressed */
    return 0;
}

int run_asr(const char* asrfile)
{
    int ret = MSP_SUCCESS;
    FILE* fp = NULL;
    char buff[BUFFER_NUM];
    unsigned int len;
    int status = MSP_AUDIO_SAMPLE_CONTINUE, ep_status = -1, rec_status = -1, rslt_status = -1;

    /* GrammarID obtained from the upload program above. */
    const char* GrammarID = "c66d4eecd37d4fe1c8274a2224b832d5";
    /* rst=json makes the server return the full result as UTF-8 JSON. */
    const char* param = "rst=json,sub=asr,ssm=1,aue=speex,auf=audio/L16;rate=16000";
    const char* sess_id = QISRSessionBegin(GrammarID, param, &ret);
    if (MSP_SUCCESS != ret)
    {
        printf("QISRSessionBegin err %d\n", ret);
        return ret;
    }

    fp = fopen(asrfile, "rb");
    if (NULL == fp)
    {
        printf("failed to open file, please check the file.\n");
        QISRSessionEnd(sess_id, "normal");
        return -1;
    }

    printf("writing audio...\n");

    int count = 0;

    /* Feed the audio to the server in BUFFER_NUM-byte chunks. */
    while (!feof(fp))
    {
        len = (unsigned int)fread(buff, 1, BUFFER_NUM, fp);
        status = feof(fp) ? MSP_AUDIO_SAMPLE_LAST : MSP_AUDIO_SAMPLE_CONTINUE;
        if (status == MSP_AUDIO_SAMPLE_LAST)
            printf("MSP_AUDIO_SAMPLE_LAST\n");
        if (status == MSP_AUDIO_SAMPLE_CONTINUE)
            printf("MSP_AUDIO_SAMPLE_CONTINUE\n");

        ret = QISRAudioWrite(sess_id, buff, len, status, &ep_status, &rec_status);
        if (ret != MSP_SUCCESS)
        {
            printf("\nQISRAudioWrite err %d\n", ret);
            break;
        }

        printf("%d\n", count++);

        /* Partial results may already be available while audio is still being sent. */
        if (rec_status == MSP_REC_STATUS_SUCCESS)
        {
            const char* result = QISRGetResult(sess_id, &rslt_status, 0, &ret);
            if (ret != MSP_SUCCESS)
            {
                printf("error code: %d\n", ret);
                break;
            }
            else if (rslt_status == MSP_REC_STATUS_NO_MATCH)
                printf("get result nomatch\n");
            else if (result != NULL)
                printf("get result[%d/%d]:len:%u\n %s\n", ret, rslt_status, (unsigned int)strlen(result), result);
        }
        printf(".");
    }
    printf("\n");

    /* All audio sent: keep polling until the final result is complete. */
    if (ret == MSP_SUCCESS)
    {
        printf("get result~~~~~~~\n");
        char asr_result[1024] = "";
        unsigned int pos_of_result = 0;
        int loop_count = 0;
        do
        {
            const char* result = QISRGetResult(sess_id, &rslt_status, 0, &ret);
            if (ret != 0)
            {
                printf("QISRGetResult err %d\n", ret);
                break;
            }

            if (rslt_status == MSP_REC_STATUS_NO_MATCH)
            {
                printf("get result nomatch\n");
            }
            else if (result != NULL)
            {
                /* Dump the raw JSON to a file for later inspection. */
                FILE* f = fopen("data.txt", "wb");
                printf("~~~%u\n", (unsigned int)strlen(result));
                fwrite(result, 1, strlen(result), f);
                fclose(f);

                printf("[%d]:get result[%d/%d]: %s\n", loop_count, ret, rslt_status, result);
                strcpy(asr_result + pos_of_result, result);
                pos_of_result += (unsigned int)strlen(result);
            }
            else
            {
                printf("[%d]:get result[%d/%d]\n", loop_count, ret, rslt_status);
            }
            usleep(500000);
        } while (rslt_status != MSP_REC_STATUS_COMPLETE && loop_count++ < 30);

        if (strcmp(asr_result, "") == 0)
        {
            printf("no result\n");
        }
    }

    QISRSessionEnd(sess_id, NULL);
    printf("QISRSessionEnd.\n");
    fclose(fp);

    return 0;
}
The output looks like this:
kl@kl-Latitude:~/xunfeiSDK$ ./a.out
writing audio...
MSP_AUDIO_SAMPLE_CONTINUE
0
.MSP_AUDIO_SAMPLE_CONTINUE
1
.MSP_AUDIO_SAMPLE_CONTINUE
2
.MSP_AUDIO_SAMPLE_CONTINUE
3
.MSP_AUDIO_SAMPLE_CONTINUE
4
.MSP_AUDIO_SAMPLE_CONTINUE
5
.MSP_AUDIO_SAMPLE_CONTINUE
6
.MSP_AUDIO_SAMPLE_CONTINUE
7
.MSP_AUDIO_SAMPLE_CONTINUE
8
.MSP_AUDIO_SAMPLE_CONTINUE
9
.MSP_AUDIO_SAMPLE_CONTINUE
10
.MSP_AUDIO_SAMPLE_CONTINUE
11
.MSP_AUDIO_SAMPLE_CONTINUE
12
.MSP_AUDIO_SAMPLE_CONTINUE
13
.MSP_AUDIO_SAMPLE_CONTINUE
14
.MSP_AUDIO_SAMPLE_CONTINUE
15
.MSP_AUDIO_SAMPLE_CONTINUE
16
.MSP_AUDIO_SAMPLE_CONTINUE
17
.MSP_AUDIO_SAMPLE_CONTINUE
18
.MSP_AUDIO_SAMPLE_CONTINUE
19
.MSP_AUDIO_SAMPLE_CONTINUE
20
.MSP_AUDIO_SAMPLE_CONTINUE
21
.MSP_AUDIO_SAMPLE_CONTINUE
22
.MSP_AUDIO_SAMPLE_LAST
23
.
get result~~~~~~~
[0]:get result[0/2]
~~~123
[1]:get result[0/5]: {"sn":1,"ls":true,"bg":0,"ed":0,"ws":[{"bg":0,"cw":[{"sc":"85","gm":"0","w":"图书馆","mn":[{"contact":"图书馆"}]}]}]}
QISRSessionEnd.
The output format is a gotcha. The official example prints the recognition result directly, but that result is GB2312-encoded and shows up as garbage in a Linux terminal. It took me a while to figure out that if you set rst to json in the second parameter (param) of QISRSessionBegin(), the full result comes back as JSON with the Chinese characters in UTF-8, and you can then parse it cleanly with a JSON module. The code as a whole is quite straightforward; a quick walkthrough:
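For that parsing step, here is a minimal sketch that pulls the recognized word out of the JSON. It assumes the cJSON library (not part of the iFlytek SDK) and the "ws"/"cw"/"w" layout shown in the output above; extract_word is my own name. Compile it together with cJSON.c, or link against libcjson if it is installed system-wide.

/* Minimal sketch: extract the recognized word from the JSON result.
 * Assumes the cJSON library and the "ws"/"cw"/"w" layout shown above;
 * extract_word is an illustrative name, not an SDK function. */
#include <stdio.h>
#include "cJSON.h"

/* Returns a pointer into the cJSON tree; copy it before calling cJSON_Delete if you need to keep it. */
static const char* extract_word(cJSON* root)
{
    cJSON* ws = cJSON_GetObjectItem(root, "ws");
    if (ws == NULL || cJSON_GetArraySize(ws) == 0)
        return NULL;
    cJSON* cw = cJSON_GetObjectItem(cJSON_GetArrayItem(ws, 0), "cw");
    if (cw == NULL || cJSON_GetArraySize(cw) == 0)
        return NULL;
    cJSON* w = cJSON_GetObjectItem(cJSON_GetArrayItem(cw, 0), "w");
    return (w != NULL) ? w->valuestring : NULL;
}

int main(void)
{
    /* In the real program this would be the string returned by QISRGetResult(). */
    const char* json =
        "{\"sn\":1,\"ls\":true,\"ws\":[{\"cw\":[{\"w\":\"图书馆\"}]}]}";
    cJSON* root = cJSON_Parse(json);
    if (root == NULL)
    {
        printf("JSON parse error\n");
        return 1;
    }
    const char* word = extract_word(root);
    printf("recognized word: %s\n", word ? word : "(none)");
    cJSON_Delete(root);
    return 0;
}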
1. First call QISRInit(); the parameter is your own appid. Every SDK download requires registration, so the appid is unique and identifies the user. Different account tiers have daily limits on how often the SDK can be called; after all, with more people using it, recognition performance is bound to drop.
2. Then pass the GrammarID, the input/output parameter string param, and the status return value ret into QISRSessionBegin() to initialize; its return value is the sessionID, which is one of the main parameters for all the functions that follow.
3. Open your audio file and feed it in with QISRAudioWrite(), either in chunks or all at once. The first parameter is the sessionID returned above, the second is a pointer to the audio data, the third is the size of that data, the fourth is the audio-sending status (whether you are done sending), and the last two are output values from the server for endpoint-detection status and recognition status.
4. Call QISRGetResult() to fetch the recognition result. The first parameter is again the sessionID, the second outputs the recognition status, the third is the interval for interacting with the server (officially recommended 5000; I used 0), and the fourth is the status return value ret. The function's return value is the JSON result shown above.
5. Finally, the cleanup work; this is C after all, not Java, haha.
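To close the loop with the snake game from the beginning of this post, here is a tiny hypothetical sketch of the glue code that maps the recognized keyword (the "w" field extracted above) to a movement direction; Direction and word_to_direction are my own names, not SDK symbols, and the keyword strings are the UTF-8 ones uploaded earlier.

/* Hypothetical glue between the recognizer and the snake game. */
#include <string.h>

typedef enum { DIR_NONE, DIR_UP, DIR_DOWN, DIR_LEFT, DIR_RIGHT } Direction;

static Direction word_to_direction(const char* w)
{
    if (w == NULL)            return DIR_NONE;
    if (strcmp(w, "上") == 0) return DIR_UP;
    if (strcmp(w, "下") == 0) return DIR_DOWN;
    if (strcmp(w, "左") == 0) return DIR_LEFT;
    if (strcmp(w, "右") == 0) return DIR_RIGHT;
    return DIR_NONE;          /* e.g. "图书馆" or "独自": ignore */
}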
When reposting, please credit the source: http://blog.csdn.net/littlethunder/article/details/17047663