Category: C/C++

2014-04-22 20:59:23


Tutorial 06: Synching Audio

Synching Audio

So now we have a decent enough player to watch a movie, so let's see what kind of loose ends we have lying around. Last time, we glossed over synchronization a little bit, namely synchronizing audio to a video clock rather than the other way around. We're going to do this the same way as with the video: make an internal video clock to keep track of how far along the video thread is, and sync the audio to that. Later we'll look at how to generalize things to sync both audio and video to an external clock, too.

Implementing the video clock

Now we want to implement a video clock similar to the audio clock we had last time: an internal value that gives the current time offset of the video currently being played. At first, you might think this would be as simple as updating the timer with the PTS of the last frame shown. However, don't forget that the time between video frames can be pretty long when we get down to the millisecond level. The solution is to keep track of another value: the time at which we set the video clock to the PTS of the last frame. That way the current value of the video clock will be PTS_of_last_frame + (current_time - time_at_which_PTS_value_was_set). This solution is very similar to what we did with get_audio_clock.

So, in our big struct, we're going to put a double video_current_pts and an int64_t video_current_pts_time. The clock updating takes place in the video_refresh_timer function:

void video_refresh_timer(void *userdata) {

  /* ... */

  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];

      is->video_current_pts = vp->pts;
      is->video_current_pts_time = av_gettime();
      /* ... the rest of the timing code ... */
    }
  }
}

Don't forget to initialize it in stream_component_open:

    is->video_current_pts_time = av_gettime();

And now all we need is a way to get the information:
double get_video_clock(VideoState *is) {
  double delta;

  delta = (av_gettime() - is->video_current_pts_time) / 1000000.0;
  return is->video_current_pts + delta;
}


Abstracting the clock

But why force ourselves to use the video clock? We'd have to go and alter our video sync code so that the audio and video aren't trying to sync to each other. Imagine the mess if we tried to make it a command line option like it is in ffplay. So let's abstract things: we're going to make a new wrapper function, get_master_clock, that checks an av_sync_type variable and then calls get_audio_clock, get_video_clock, or whatever other clock we want to use. We could even use the computer clock, which we'll call get_external_clock:

enum {
  AV_SYNC_AUDIO_MASTER,
  AV_SYNC_VIDEO_MASTER,
  AV_SYNC_EXTERNAL_MASTER,
};

#define DEFAULT_AV_SYNC_TYPE AV_SYNC_VIDEO_MASTER

double get_master_clock(VideoState *is) {
  if(is->av_sync_type == AV_SYNC_VIDEO_MASTER) {
    return get_video_clock(is);
  } else if(is->av_sync_type == AV_SYNC_AUDIO_MASTER) {
    return get_audio_clock(is);
  } else {
    return get_external_clock(is);
  }
}
main() {
...
  is->av_sync_type = DEFAULT_AV_SYNC_TYPE;
...
}


Synchronizing the Audio

Now the hard part: synching the audio to the video clock. Our strategy is going to be to measure where the audio is, compare it to the video clock, and then figure out how many samples we need to adjust by, that is, do we need to speed up by dropping samples or do we need to slow down by adding them?

We're going to run a synchronize_audio function each time we process a set of audio samples, shrinking or expanding them as needed. However, we don't want to sync every single time it's off, because we process audio packets a lot more often than video packets. So we're going to set a minimum number of consecutive calls to the synchronize_audio function that have to be out of sync before we bother doing anything. Of course, just like last time, "out of sync" means that the audio clock and the video clock differ by more than our sync threshold.

So now let's say we've gotten N audio sample sets that have been out of sync. The amount we are out of sync can also vary a good deal, so we're going to take an average of how far each of those has been out of sync. For example, the first call might have shown we were out of sync by 40ms, the next by 50ms, and so on. But we're not going to take a simple average, because the most recent values are more important than the previous ones. So we're going to use a fractional coefficient, say c, and sum the differences like this: diff_sum = new_diff + diff_sum*c. When we are ready to find the average difference, we simply calculate avg_diff = diff_sum * (1-c).

Note: What the heck is going on here? This equation looks like magic! Well, it's basically a weighted mean using a geometric series as weights. I don't know if there's a name for this (I even checked Wikipedia!).

Here's what our function looks like so far:

/* Add or subtract samples to get a better sync, return new
   audio buffer size */
int synchronize_audio(VideoState *is, short *samples,
		      int samples_size, double pts) {
  int n;
  double ref_clock;
  
  n = 2 * is->audio_st->codec->channels;
  
  if(is->av_sync_type != AV_SYNC_AUDIO_MASTER) {
    double diff, avg_diff;
    int wanted_size, min_size, max_size, nb_samples;
    
    ref_clock = get_master_clock(is);
    diff = get_audio_clock(is) - ref_clock;

    if(diff < AV_NOSYNC_THRESHOLD) {
      // accumulate the diffs
      is->audio_diff_cum = diff + is->audio_diff_avg_coef
	* is->audio_diff_cum;
      if(is->audio_diff_avg_count < AUDIO_DIFF_AVG_NB) {
	is->audio_diff_avg_count++;
      } else {
	avg_diff = is->audio_diff_cum * (1.0 - is->audio_diff_avg_coef);

       /* Shrinking/expanding buffer code.... */

      }
    } else {
      /* difference is TOO big; reset diff stuff */
      is->audio_diff_avg_count = 0;
      is->audio_diff_cum = 0;
    }
  }
  return samples_size;
}


So we're doing pretty well; we know approximately how off the audio is from the video or whatever we're using for a clock. So let's now calculate how many samples we need to add or lop off by putting this code where the "Shrinking/expanding buffer code" section is:

if(fabs(avg_diff) >= is->audio_diff_threshold) {
  wanted_size = samples_size +
    ((int)(diff * is->audio_st->codec->sample_rate) * n);
  /* multiply before dividing, otherwise the integer division
     (100 - 10) / 100 truncates to zero */
  min_size = samples_size * (100 - SAMPLE_CORRECTION_PERCENT_MAX) / 100;
  max_size = samples_size * (100 + SAMPLE_CORRECTION_PERCENT_MAX) / 100;
  if(wanted_size < min_size) {
    wanted_size = min_size;
  } else if (wanted_size > max_size) {
    wanted_size = max_size;
  }
}

Remember that audio_length * (sample_rate * # of channels * 2) is the number of bytes in audio_length seconds of audio. Therefore, the number of bytes we want is the number of bytes we already have, plus or minus the number of bytes that corresponds to the amount of time the audio has drifted. We'll also set a limit on how big or small our correction can be, because if we change our buffer too much, it'll be too jarring to the user.


Correcting the number of samples

Now we have to actually correct the audio. You may have noticed that our synchronize_audio function returns a sample size, which will then tell us how many bytes to send to the stream. So we just have to adjust the sample size to the wanted_size. This works for making the sample size smaller. But if we want to make it bigger, we can't just make the sample size larger because there's no more data in the buffer! So we have to add it. But what should we add? It would be foolish to try and extrapolate audio, so let's just use the audio we already have by padding out the buffer with the value of the last sample.

if(wanted_size < samples_size) {
  /* remove samples */
  samples_size = wanted_size;
} else if(wanted_size > samples_size) {
  uint8_t *samples_end, *q;
  int nb;

  /* add samples by copying final samples */
  nb = wanted_size - samples_size;
  samples_end = (uint8_t *)samples + samples_size - n;
  q = samples_end + n;
  while(nb > 0) {
    memcpy(q, samples_end, n);
    q += n;
    nb -= n;
  }
  samples_size = wanted_size;
}
Now we return the sample size, and we're done with that function. All we need to do now is use it:
void audio_callback(void *userdata, Uint8 *stream, int len) {

  VideoState *is = (VideoState *)userdata;
  int len1, audio_size;
  double pts;

  while(len > 0) {
    if(is->audio_buf_index >= is->audio_buf_size) {
      /* We have already sent all our data; get more */
      audio_size = audio_decode_frame(is, is->audio_buf, sizeof(is->audio_buf), &pts);
      if(audio_size < 0) {
	/* If error, output silence */
	is->audio_buf_size = 1024;
	memset(is->audio_buf, 0, is->audio_buf_size);
      } else {
	audio_size = synchronize_audio(is, (int16_t *)is->audio_buf,
				       audio_size, pts);
	is->audio_buf_size = audio_size;
      }
      is->audio_buf_index = 0;
    }
    /* ... copy the (possibly resized) buffer out to the stream ... */
  }
}

All we did was insert the call to synchronize_audio. (Also, make sure to check the source code where we initialize the above variables I didn't bother to define.)


One last thing before we finish: we need to add an if clause to make sure we don't sync the video if it is the master clock:

if(is->av_sync_type != AV_SYNC_VIDEO_MASTER) {
  ref_clock = get_master_clock(is);
  diff = vp->pts - ref_clock;

  /* Skip or repeat the frame. Take delay into account
     FFPlay still doesn't "know if this is the best guess." */
  sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay :
                    AV_SYNC_THRESHOLD;
  if(fabs(diff) < AV_NOSYNC_THRESHOLD) {
    if(diff <= -sync_threshold) {
      delay = 0;
    } else if(diff >= sync_threshold) {
      delay = 2 * delay;
    }
  }
}
And that does it! Make sure you check through the source file to initialize any variables that I didn't bother defining or initializing. Then compile it:
gcc -o tutorial06 tutorial06.c -lavutil -lavformat -lavcodec -lz -lm `sdl-config --cflags --libs`
and you'll be good to go.


Next time we'll make it so you can rewind and fast forward your movie.


The source code follows (parts of it are modified, with some comments added):


// tutorial06.c
// A pedagogical video player that really works!
//
// This tutorial was written by Stephen Dranger (dranger@gmail.com).
//
// Code based on FFplay, Copyright (c) 2003 Fabrice Bellard,
// and a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
// Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1
//
// Use the Makefile to build all the samples.
//
// Run using
// tutorial06 myvideofile.mpg
//
// to play the video.
//
//
//
// gcc -o tutorial06 tutorial06.c -lavformat -lavcodec -lavutil -lswscale -lz -lm -L/opt/libffmpeg/lib/ -I/opt/libffmpeg/include/ \
//    -ldl -lpthread -lSDL2 -I/opt/libsdl/include/ -L/opt/libsdl/lib/ -g
// ./tutorial06 testvideo.mov


#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libswscale/swscale.h>
#include <libavutil/avstring.h>
#include <libavutil/time.h>

#include <SDL2/SDL.h>
#include <SDL2/SDL_thread.h>

#ifdef __MINGW32__
#undef main /* Prevents SDL from overriding main() */
#endif

#include <stdio.h>
#include <math.h>

#define SDL_AUDIO_BUFFER_SIZE 1024
#define MAX_AUDIO_FRAME_SIZE 192000

#define MAX_AUDIOQ_SIZE (5 * 16 * 1024)
#define MAX_VIDEOQ_SIZE (5 * 256 * 1024)

#define AV_SYNC_THRESHOLD 0.01
#define AV_NOSYNC_THRESHOLD 10.0

#define SAMPLE_CORRECTION_PERCENT_MAX 10
#define AUDIO_DIFF_AVG_NB 20

#define FF_ALLOC_EVENT (SDL_USEREVENT)
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)
#define FF_QUIT_EVENT (SDL_USEREVENT + 2)

#define VIDEO_PICTURE_QUEUE_SIZE 1
#define DEFAULT_AV_SYNC_TYPE AV_SYNC_VIDEO_MASTER

SDL_Window    *screen;
SDL_Renderer    *renderer;
#define PRE_WIDTH 904
#define PRE_HEIGHT 600

static int global_readframe_cntout = 0;

typedef struct PacketQueue {
    AVPacketList *first_pkt, *last_pkt;
    int nb_packets;
    int size;
    SDL_mutex *mutex;
    SDL_cond *cond;
} PacketQueue;


typedef struct VideoPicture {
    SDL_Texture *bmp;
    AVFrame *pFrameYUV;
    char *bufpoint;
    double pts;
    int width, height; /* source height & width */
    int allocated;
} VideoPicture;

typedef struct VideoState {
    AVFormatContext *pFormatCtx;
    int videoStream, audioStream;
    int av_sync_type;
    double external_clock; /* external clock base */
    int64_t external_clock_time;
    double audio_clock;
    AVStream *audio_st;
    PacketQueue audioq;                        /** queue of audio AVPackets */
    uint8_t audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2];    /** decoded audio data */
    unsigned int audio_buf_size;                    /** total size of the decoded audio data */
    unsigned int audio_buf_index;                /** offset of the data already consumed */
    AVFrame audio_frame;                    /** holds one decoded audio frame */
    AVPacket audio_pkt;                    /** holds the audio packet currently being decoded */
    uint8_t *audio_pkt_data;                /** initialized to point at the audio data of an AVPacket */
    int audio_pkt_size;                    /** initialized to the data length of an AVPacket */
    int audio_hw_buf_size;                /** what is this for??? */
    double audio_diff_cum;                    /** used for AV difference average computation */
    double audio_diff_avg_coef;                /** fixed coefficient c, used for the weighted average */
    double audio_diff_threshold;
    int audio_diff_avg_count;                /** counter: check the sync status every AUDIO_DIFF_AVG_NB frames */
    double frame_timer;
    double frame_last_pts;
    double frame_last_delay;

    /**
     If a frame carries no pts information, this value is used as that frame's pts
     */
    double video_clock; ///<pts of last decoded frame / predicted pts of next decoded frame
    double video_current_pts; ///<current displayed pts (different from video_clock if frame fifos are used)
    int64_t video_current_pts_time; ///<time (av_gettime) at which we updated video_current_pts - used to have running video pts

    AVStream *video_st;
    PacketQueue videoq;
    VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE];
    int pictq_size, pictq_rindex, pictq_windex;
    SDL_mutex *pictq_mutex;
    SDL_cond *pictq_cond;
    SDL_Thread *parse_tid;
    SDL_Thread *video_tid;

    char filename[1024];
    int quit;

    AVIOContext *io_context;
    struct SwsContext *sws_ctx;
} VideoState;

enum {
    AV_SYNC_AUDIO_MASTER,
    AV_SYNC_VIDEO_MASTER,
    AV_SYNC_EXTERNAL_MASTER,
};

/* Since we only have one decoding thread, the Big Struct
   can be global in case we need it. */
VideoState *global_video_state;

void packet_queue_init(PacketQueue *q)
{
    memset(q, 0, sizeof(PacketQueue));
    q->mutex = SDL_CreateMutex();
    q->cond = SDL_CreateCond();
}

int packet_queue_put(PacketQueue *q, AVPacket *pkt)
{
    AVPacketList *pkt1;
    if(av_dup_packet(pkt) < 0) {
        return -1;
    }
    pkt1 = av_malloc(sizeof(AVPacketList));
    if (!pkt1)
        return -1;
    pkt1->pkt = *pkt;
    pkt1->next = NULL;

    SDL_LockMutex(q->mutex);

    if (!q->last_pkt)
        q->first_pkt = pkt1;
    else
        q->last_pkt->next = pkt1;
    q->last_pkt = pkt1;
    q->nb_packets++;
    q->size += pkt1->pkt.size;
    SDL_CondSignal(q->cond);

    SDL_UnlockMutex(q->mutex);
    return 0;
}

static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block)
{
    AVPacketList *pkt1;
    int ret;

    SDL_LockMutex(q->mutex);

    for(;;) {

        if(global_video_state->quit) {
            ret = -1;
            break;
        }

        pkt1 = q->first_pkt;
        if (pkt1) {
            q->first_pkt = pkt1->next;
            if (!q->first_pkt)
                q->last_pkt = NULL;
            q->nb_packets--;
            q->size -= pkt1->pkt.size;
            *pkt = pkt1->pkt;
            av_free(pkt1);
            ret = 1;
            break;
        } else if (!block) {
            ret = 0;
            break;
        } else {
            SDL_CondWait(q->cond, q->mutex);
        }
    }

    SDL_UnlockMutex(q->mutex);
    return ret;
}

double get_audio_clock(VideoState *is)
{
    double pts;
    int hw_buf_size, bytes_per_sec, n;

    pts = is->audio_clock; /* maintained in the audio thread */
    hw_buf_size = is->audio_buf_size - is->audio_buf_index;
    bytes_per_sec = 0;
    n = is->audio_st->codec->channels * 2;
    if(is->audio_st) {
        bytes_per_sec = is->audio_st->codec->sample_rate * n;
    }
    if(bytes_per_sec) {
        pts -= (double)hw_buf_size / bytes_per_sec;
    }

    return pts;
}

double get_video_clock(VideoState *is)
{
    double delta;
    delta = (av_gettime() - is->video_current_pts_time) / 1000000.0;
    return is->video_current_pts + delta;
}

double get_external_clock(VideoState *is)
{
    return av_gettime() / 1000000.0;
}

double get_master_clock(VideoState *is)
{
    if(is->av_sync_type == AV_SYNC_VIDEO_MASTER) {
        return get_video_clock(is);
    } else if(is->av_sync_type == AV_SYNC_AUDIO_MASTER) {
        return get_audio_clock(is);
    } else {
        return get_external_clock(is);
    }
}

/* Add or subtract samples to get a better sync, return new audio buffer size */
int synchronize_audio(VideoState *is, short *samples, int samples_size, double pts)
{
    int n;
    double ref_clock;

    // n bytes per sample set; not sure why it's multiplied by 2 -- presumably 16-bit samples
    n = 2 * is->audio_st->codec->channels;

    if(is->av_sync_type == AV_SYNC_AUDIO_MASTER)
        goto funout;

    double diff, avg_diff;
    int wanted_size, min_size, max_size /*, nb_samples */;
    ref_clock = get_master_clock(is);
    diff = get_audio_clock(is) - ref_clock;

    if(diff < AV_NOSYNC_THRESHOLD) {
        // accumulate the diffs
        is->audio_diff_cum = diff + is->audio_diff_avg_coef * is->audio_diff_cum;
        if(is->audio_diff_avg_count < AUDIO_DIFF_AVG_NB) {
            is->audio_diff_avg_count++;
        } else {
            /**
             Weighted average: the closer a value is to the present, the larger its weight, i.e. the bigger its influence
             */
            avg_diff = is->audio_diff_cum * (1.0 - is->audio_diff_avg_coef);
            if(fabs(avg_diff) >= is->audio_diff_threshold) {
                wanted_size = samples_size + ((int)(diff * is->audio_st->codec->sample_rate) * n);
                /* multiply before dividing so the integer division doesn't truncate to zero */
                min_size = samples_size * (100 - SAMPLE_CORRECTION_PERCENT_MAX) / 100;
                max_size = samples_size * (100 + SAMPLE_CORRECTION_PERCENT_MAX) / 100;
                if(wanted_size < min_size) {
                    wanted_size = min_size;
                } else if (wanted_size > max_size) {
                    wanted_size = max_size;
                }
                if(wanted_size < samples_size) {
                    /* remove samples */
                    samples_size = wanted_size;
                } else if(wanted_size > samples_size) {
                    uint8_t *samples_end, *q;
                    int nb;

                    /* add samples by copying the final sample */
                    //nb = (samples_size - wanted_size); // original code; this looks like a bug
                    nb = wanted_size - samples_size;
                    samples_end = (uint8_t *)samples + samples_size - n;
                    q = samples_end + n;
                    while(nb > 0) {
                        memcpy(q, samples_end, n);
                        q += n;
                        nb -= n;
                    }
                    samples_size = wanted_size;
                }
            }
        }
    } else {
        /* difference is TOO big; reset diff stuff */
        is->audio_diff_avg_count = 0;
        is->audio_diff_cum = 0;
    }

funout:
    return samples_size;
}

int audio_decode_frame(VideoState *is, double *pts_ptr)
{
    int len1, data_size = 0, n;
    AVPacket *pkt = &is->audio_pkt;
    double pts;

    for(;;) {
        while(is->audio_pkt_size > 0) {
            int got_frame = 0;
            /**
             One AVPacket can contain several encoded audio frames; we decode one audio frame at a time, so decoding takes several passes.
             The return value len1 is the number of bytes of raw encoded audio consumed; those len1 bytes decode into data_size bytes.
             */
            len1 = avcodec_decode_audio4(is->audio_st->codec, &is->audio_frame, &got_frame, pkt);
            if(len1 < 0) {
                /* if error, skip frame */
                is->audio_pkt_size = 0;
                break;
            }
            if (got_frame){
                data_size =
                av_samples_get_buffer_size(NULL, is->audio_st->codec->channels,is->audio_frame.nb_samples,is->audio_st->codec->sample_fmt,1);
                memcpy(is->audio_buf, is->audio_frame.data[0], data_size);
            }
            is->audio_pkt_data += len1;
            is->audio_pkt_size -= len1;
            if(data_size <= 0) {
                /* No data yet, get more frames */
                continue;
            }

            pts = is->audio_clock;
            *pts_ptr = pts;
            n = 2 * is->audio_st->codec->channels;
            /**
             Each audio AVPacket contains several audio AVFrames; compute the pts of each AVFrame
             */
            is->audio_clock += (double)data_size /
            (double)(n * is->audio_st->codec->sample_rate);

            /* We have data, return it and come back for more later */
            return data_size;
        }
        if(pkt->data)
            av_free_packet(pkt);

        if(is->quit) {
            return -1;
        }

        /* next packet */
        if(packet_queue_get(&is->audioq, pkt, 1) < 0) {
            return -1;
        }

        is->audio_pkt_data = pkt->data;
        is->audio_pkt_size = pkt->size;
        /* if update, update the audio clock w/pts */
        /**
         Initialize audio_clock to the pts of the audio AVPacket
         */
        if(pkt->pts != AV_NOPTS_VALUE) {
            is->audio_clock = av_q2d(is->audio_st->time_base)*pkt->pts;
        }
    }
}

void audio_callback(void *userdata, Uint8 *stream, int len)
{
    VideoState *is = (VideoState *)userdata;
    int len1, audio_size;
    double pts;

    while(len > 0) {
        if(is->audio_buf_index >= is->audio_buf_size) {
            /* We have already sent all our data; get more */
            /**
             audio_decode_frame returns the (decoded) length of one audio frame
             */
            audio_size = audio_decode_frame(is, &pts);
            if(audio_size < 0) {
                /* If error, output silence */
                is->audio_buf_size = 1024;
                memset(is->audio_buf, 0, is->audio_buf_size);
            } else {
                /**
                 Sync the audio to another clock. Suppose we sync to video and the audio is ahead:
                 if video is at second 7 and audio is at second 10, keep playing second 10 of the audio for 3 seconds,
                 so that when video reaches second 11 the audio is also exactly at second 11
                 (the stretching is done by repeating the last sample).

                 If the audio is behind -- video at second 7, audio at second 5 -- truncate samples so that what the audio
                 would have played during seconds 6, 7 and 8 finishes within second 6; when video reaches second 9,
                 the audio is also exactly at second 9.
                 */
                audio_size = synchronize_audio(is, (int16_t *)is->audio_buf, audio_size, pts);
                is->audio_buf_size = audio_size;
            }
            is->audio_buf_index = 0;
        }
        len1 = is->audio_buf_size - is->audio_buf_index;
        if(len1 > len)
            len1 = len;
        memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
        len -= len1;
        stream += len1;
        is->audio_buf_index += len1;
    }
}

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque)
{
    SDL_Event event;
    event.type = FF_REFRESH_EVENT;
    event.user.data1 = opaque;
    SDL_PushEvent(&event);
    return 0; /* 0 means stop timer */
}

/* schedule a video refresh in 'delay' ms */
static void schedule_refresh(VideoState *is, int delay)
{
    SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}

void video_display(VideoState *is)
{
    SDL_Rect rect = {0};
    VideoPicture *vp;
    //AVPicture pict;
    float aspect_ratio;
    int w, h, x, y;
    //int i;
    vp = &is->pictq[is->pictq_rindex];
    if(vp->bmp) {
        if(is->video_st->codec->sample_aspect_ratio.num == 0) {
            aspect_ratio = 0;
        } else {
            aspect_ratio = av_q2d(is->video_st->codec->sample_aspect_ratio) * is->video_st->codec->width / is->video_st->codec->height;
        }

        if(aspect_ratio <= 0.0) {
            aspect_ratio = (float)is->video_st->codec->width / (float)is->video_st->codec->height;
        }

        //h = screen->h;
        //w = ((int)rint(h * aspect_ratio)) & -3;
        //if(w > screen->w) {
        //    w = screen->w;
        //    h = ((int)rint(w / aspect_ratio)) & -3;
        //}
        //x = (screen->w - w) / 2;
        //y = (screen->h - h) / 2;

        rect.x = 0;
        rect.y = 0;
        //rect.w = w;
        //rect.h = h;
        rect.w = vp->width;
        rect.h = vp->height;

        SDL_UpdateTexture(vp->bmp, &rect, vp->pFrameYUV->data[0], vp->pFrameYUV->linesize[0] );
        SDL_RenderClear( renderer );
        SDL_RenderCopy( renderer, vp->bmp, &rect, &rect );
        SDL_RenderPresent( renderer );

        free(vp->bufpoint);
        av_free(vp->pFrameYUV);
        // SDL_DisplayYUVOverlay(vp->bmp, &rect);
    }
}

void video_refresh_timer(void *userdata)
{
    VideoState *is = (VideoState *)userdata;

    VideoPicture *vp;
    double actual_delay, delay, sync_threshold, ref_clock, diff;

    if(is->video_st) {
        if(is->pictq_size == 0) {
            schedule_refresh(is, 1);
        } else {
            vp = &is->pictq[is->pictq_rindex];

            is->video_current_pts = vp->pts;
            is->video_current_pts_time = av_gettime();
            /**
             The pts delta between two video frames should lie in (0s, 1.0s); at a frame rate of 25fps it should hover around 40ms, i.e. 0.04s.
             If it falls outside that range, set this delta pts to the same value as the previous one.
             */
            delay = vp->pts - is->frame_last_pts; /* the pts from last time */
            if(delay <= 0 || delay >= 1.0) {
                /* if incorrect delay, use previous one */
                delay = is->frame_last_delay;
            }
            /* save for next time */
            is->frame_last_delay = delay;
            is->frame_last_pts = vp->pts;

            /* update delay to sync to audio if not master source */
            /**
             If the video syncs to itself there's no need to change delay; if it syncs to the audio or to the wallclock,
             we have to check whether we are within the sync threshold and whether delay needs to change.
             */
            if(is->av_sync_type != AV_SYNC_VIDEO_MASTER) {
                ref_clock = get_master_clock(is);
                diff = vp->pts - ref_clock;

                /* Skip or repeat the frame. Take delay into account
                 FFPlay still doesn't "know if this is the best guess." */
                sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD;
                if(fabs(diff) < AV_NOSYNC_THRESHOLD) {
                    if(diff <= -sync_threshold) {
                        delay = 0;
                    } else if(diff >= sync_threshold) {
                        delay = 2 * delay;
                    }
                }
            }
            is->frame_timer += delay;

            /* compute the REAL delay */
            /**
             When stream_component_open handles the video it initializes frame_timer: is->frame_timer = (double)av_gettime() / 1000000.0;
             Call that time time_a and treat it as the start of the video. Suppose the first frame should be shown delay seconds later,
             i.e. at wallclock time time_a + delay; call that time_c, and let the current time be time_b. The times relate as follows:
             time_a ------ time_b ------------------------------------------------ time_c
             If at time_b we still waited the full delay seconds before showing the frame, we'd be late,
             so we use av_gettime to correct delay into actual_delay.
             */
            actual_delay = is->frame_timer - (av_gettime() / 1000000.0);
            if(actual_delay < 0.010) {
                /* Really it should skip the picture instead */
                actual_delay = 0.010;
            }
            schedule_refresh(is, (int)(actual_delay * 1000 + 0.5));

            /* show the picture! */
            video_display(is);

            /* update queue for next picture! */
            if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
                is->pictq_rindex = 0;
            }
            SDL_LockMutex(is->pictq_mutex);
            is->pictq_size--;
            SDL_CondSignal(is->pictq_cond);
            SDL_UnlockMutex(is->pictq_mutex);
        }
    } else {
        schedule_refresh(is, 100);
    }
}
void alloc_picture(void *userdata)
{
    VideoState *is = (VideoState *)userdata;
    VideoPicture *vp;

    vp = &is->pictq[is->pictq_windex];
    if(vp->bmp) {
        // we already have one make another, bigger/smaller
        SDL_DestroyTexture(vp->bmp);
    }

    // Allocate a place to put our YUV image on that screen
    vp->bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, is->video_st->codec->width, is->video_st->codec->height);
    vp->width = is->video_st->codec->width;
    vp->height = is->video_st->codec->height;

    SDL_LockMutex(is->pictq_mutex);
    vp->allocated = 1;
    SDL_CondSignal(is->pictq_cond);
    SDL_UnlockMutex(is->pictq_mutex);
}

  526. int queue_picture(VideoState *is, AVFrame *pFrame, double pts)
  527. {
  528.     VideoPicture *vp;
  529.     SDL_Rect rect;

  530.     /* wait until we have space for a new pic */
  531.     SDL_LockMutex(is->pictq_mutex);
  532.     while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE && !is->quit) {
  533.         SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  534.     }
  535.     SDL_UnlockMutex(is->pictq_mutex);

  536.     if(is->quit)
  537.         return -1;

  538.     // windex is set to 0 initially
  539.     vp = &is->pictq[is->pictq_windex];

  540.     /* allocate or resize the buffer */
  541.     if(!vp->bmp || vp->width != is->video_st->codec->width || vp->height != is->video_st->codec->height) {
  542.         SDL_Event event;

  543.         vp->allocated = 0;
  544.         /* we have to do it in the main thread */
  545.         event.type = FF_ALLOC_EVENT;
  546.         event.user.data1 = is;
  547.         SDL_PushEvent(&event);

  548.         /* wait until we have a picture allocated */
  549.         SDL_LockMutex(is->pictq_mutex);
  550.         while(!vp->allocated && !is->quit) {
  551.             SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  552.         }
  553.         SDL_UnlockMutex(is->pictq_mutex);
  554.         if(is->quit) {
  555.             return -1;
  556.         }
  557.     }
  558.     /* We have a place to put our picture on the queue */
  559.     /* If we are skipping a frame, do we set this to null but still return vp->allocated = 1? */

  560.     if(vp->bmp) {
  561.         vp->pFrameYUV = av_frame_alloc();    // avcodec_alloc_frame() is deprecated; use av_frame_alloc() as in video_thread
  562.         // SDL_LockYUVOverlay(vp->bmp);
  563.         int numBytes = avpicture_get_size(PIX_FMT_YUV420P, is->video_st->codec->width, is->video_st->codec->height);
  564.         uint8_t* buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
  565.         vp->bufpoint = buffer;
  566.         avpicture_fill((AVPicture *)vp->pFrameYUV, buffer, PIX_FMT_YUV420P, is->video_st->codec->width, is->video_st->codec->height);
  567.         /* point pict at the queue */
  568.         
  569.         // Convert the image into YUV format that SDL uses
  570.         sws_scale(is->sws_ctx,(uint8_t const * const *)pFrame->data,pFrame->linesize,0, is->video_st->codec->height, vp->pFrameYUV->data, vp->pFrameYUV->linesize);

  571.         // SDL_UnlockYUVOverlay(vp->bmp);
  572.         vp->pts = pts;

  573.         /* now we inform our display thread that we have a pic ready */
  574.         if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
  575.             is->pictq_windex = 0;
  576.         }

  577.         SDL_LockMutex(is->pictq_mutex);
  578.         is->pictq_size++;
  579.         SDL_UnlockMutex(is->pictq_mutex);
  580.     }
  581.     
  582.     return 0;
  583. }

  584. double synchronize_video(VideoState *is, AVFrame *src_frame, double pts)
  585. {
  586.     double frame_delay;

  587.     //pts of last decoded frame
  588.     if(pts != 0) {
  589.         /* if we have pts, set video clock to it */
  590.         is->video_clock = pts;
  591.     } else {
  592.         /* if we aren't given a pts, set it to the clock */
  593.         pts = is->video_clock;
  594.     }

  595.     //predicted pts of next decoded frame
  596.     /* update the video clock */
  597.     frame_delay = av_q2d(is->video_st->codec->time_base);    // 1 / fps
  598.     /* if we are repeating a frame, adjust clock accordingly */
  599.     frame_delay += src_frame->repeat_pict * (frame_delay * 0.5);    // repeat_pict counts extra *fields*; each field lasts half a frame, hence the 0.5
  600.     is->video_clock += frame_delay;
  601.     
  602.     return pts;
  603. }

  604. uint64_t global_video_pkt_pts = AV_NOPTS_VALUE;
  605. int video_thread(void *arg)
  606. {
  607.     VideoState *is = (VideoState *)arg;
  608.     AVPacket pkt1, *packet = &pkt1;
  609.     int frameFinished;
  610.     AVFrame *pFrame;
  611.     double pts;

  612.     pFrame = av_frame_alloc();

  613.     for(;;) {
  614.         if(packet_queue_get(&is->videoq, packet, 1) < 0) {
  615.             // means we quit getting packets
  616.             break;
  617.         }
  618.         pts = 0;

  619.         // Save global pts to be stored in pFrame in first call
  620.         global_video_pkt_pts = packet->pts;
  621.         // Decode video frame
  622.         /**
  623.          A frame's pts is the pts of the first packet belonging to that frame,
  624.          but the decoder does not tell us which packet that is.
  625.          However, whenever a packet starts a frame, avcodec_decode_video2() asks for a frame buffer via a callback,
  626.          and ffmpeg lets us supply that allocation callback ourselves (our_get_buffer).
  627.          So our callback saves the pts of the packet being decoded at the moment the buffer is allocated.
  628.         */
  629.         avcodec_decode_video2(is->video_st->codec, pFrame, &frameFinished, packet);

  630.         if(packet->dts == AV_NOPTS_VALUE && pFrame->opaque && *(uint64_t*)pFrame->opaque != AV_NOPTS_VALUE) {
  631.             pts = *(uint64_t *)pFrame->opaque;
  632.         } else if(packet->dts != AV_NOPTS_VALUE) {
  633.             pts = packet->dts;
  634.         } else {
  635.             pts = 0;
  636.         }
  637.         /**
  638.          A pts is a timestamp measured in ticks of the stream's time_base.
  639.          For example, if a stream's time_base is 1/25 s, a pts of 50 means 2 s.
  640.          Likewise, if the pts clock ticks at 90 kHz while the wall clock ticks in seconds,
  641.          converting a pts to wall-clock time is just: wallclock = pts * (1/90000)
  642.          */
  643.         pts *= av_q2d(is->video_st->time_base);

  644.         av_free_packet(packet);

  645.         // Did we get a video frame?
  646.         if(frameFinished) {
  647.             pts = synchronize_video(is, pFrame, pts);
  648.             if(queue_picture(is, pFrame, pts) < 0) {
  649.                 break;
  650.             }
  651.         }
  652.     }
  653.     
  654.     av_free(pFrame);
  655.     return 0;
  656. }

  657. /* These are called whenever we allocate a frame
  658.  * buffer. We use this to store the global_pts in
  659.  * a frame at the time it is allocated.
  660.  */
  661. int our_get_buffer(struct AVCodecContext *c, AVFrame *pic, int flags)
  662. {
  663. //    int ret = avcodec_default_get_buffer(c, pic);
  664.     int ret = avcodec_default_get_buffer2(c, pic, flags);
  665.     uint64_t *pts = av_malloc(sizeof(uint64_t));

  666.     *pts = global_video_pkt_pts;
  667.     pic->opaque = pts;
  668.     
  669.     return ret;
  670. }

  671. void our_release_buffer(struct AVCodecContext *c, AVFrame *pic)
  672. {
  673.     if(pic) av_freep(&pic->opaque);

  674. //    avcodec_default_release_buffer(c, pic);
  675.     if(!(c->codec_type == AVMEDIA_TYPE_VIDEO))
  676.         abort();
  677.     av_frame_unref(pic);
  678. }

  679. int stream_component_open(VideoState *is, int stream_index)
  680. {
  681.     AVFormatContext *pFormatCtx = is->pFormatCtx;
  682.     AVCodecContext *codecCtx = NULL;
  683.     AVCodec *codec = NULL;
  684.     AVDictionary *optionsDict = NULL;
  685.     SDL_AudioSpec wanted_spec, spec;

  686.     if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
  687.         return -1;
  688.     }

  689.     // Get a pointer to the codec context for the video stream
  690.     codecCtx = pFormatCtx->streams[stream_index]->codec;

  691.     if(codecCtx->codec_type == AVMEDIA_TYPE_AUDIO) {
  692.         // Set audio settings from codec info
  693.         wanted_spec.freq = codecCtx->sample_rate;
  694.         wanted_spec.format = AUDIO_S16SYS;
  695.         wanted_spec.channels = codecCtx->channels;
  696.         wanted_spec.silence = 0;
  697.         wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
  698.         wanted_spec.callback = audio_callback;
  699.         wanted_spec.userdata = is;

  700.         // open an audio device
  701.         if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
  702.             fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
  703.             return -1;
  704.         }
  705.         is->audio_hw_buf_size = spec.size;
  706.     }
  707.     codec = avcodec_find_decoder(codecCtx->codec_id);
  708.     if(!codec || (avcodec_open2(codecCtx, codec, &optionsDict) < 0)) {
  709.         fprintf(stderr, "Unsupported codec!\n");
  710.         return -1;
  711.     }

  712.     switch(codecCtx->codec_type) {
  713.         case AVMEDIA_TYPE_AUDIO:
  714.             is->audioStream = stream_index;
  715.             is->audio_st = pFormatCtx->streams[stream_index];
  716.             is->audio_buf_size = 0;
  717.             is->audio_buf_index = 0;
  718.             /* averaging filter for audio sync */
  719.             is->audio_diff_avg_coef = exp(log(0.01 / AUDIO_DIFF_AVG_NB));    /** 0.0005 */
  720.             is->audio_diff_avg_count = 0;
  721.             /* Correct audio only if larger error than this */
  722.             is->audio_diff_threshold = 2.0 * SDL_AUDIO_BUFFER_SIZE / codecCtx->sample_rate;
  723.             memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
  724.             packet_queue_init(&is->audioq);
  725.             //play the audio
  726.             SDL_PauseAudio(0);
  727.             break;
  728.         case AVMEDIA_TYPE_VIDEO:
  729.             is->videoStream = stream_index;
  730.             is->video_st = pFormatCtx->streams[stream_index];
  731.             //initialize the frame timer and the initial previous frame delay(40ms)
  732.             is->frame_timer = (double)av_gettime() / 1000000.0;
  733.             is->frame_last_delay = 40e-3;
  734.             is->video_current_pts_time = av_gettime();
  735.             packet_queue_init(&is->videoq);
  736.             is->sws_ctx = sws_getContext(is->video_st->codec->width,is->video_st->codec->height,is->video_st->codec->pix_fmt,is->video_st->codec->width,
  737.                             is->video_st->codec->height,PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
  738.             codecCtx->get_buffer2 = our_get_buffer;
  739.             codecCtx->release_buffer = our_release_buffer;
  740.             is->video_tid = SDL_CreateThread(video_thread, "video_tid", is);
  741.             break;
  742.         default:
  743.             break;
  744.     }
  745.     
  746.     return 0;
  747. }

  748. int decode_interrupt_cb(void *opaque)
  749. {
  750.     return (global_video_state && global_video_state->quit);
  751. }

  752. int decode_thread(void *arg)
  753. {
  754.     VideoState *is = (VideoState *)arg;
  755.     AVFormatContext *pFormatCtx = NULL;
  756.     AVPacket pkt1, *packet = &pkt1;

  757.     int video_index = -1;
  758.     int audio_index = -1;
  759.     int i;

  760.     AVDictionary *io_dict = NULL;
  761.     AVIOInterruptCB callback;

  762.     is->videoStream=-1;
  763.     is->audioStream=-1;

  764.     global_video_state = is;

  765.     // will interrupt blocking functions if we quit
  766.     callback.callback = decode_interrupt_cb;
  767.     callback.opaque = is;

  768.     /**
  769.      Unclear what this io_context is really for:
  770.      it is only opened here and never attached to pFormatCtx.
  771.      */
  772.     if (avio_open2(&is->io_context, is->filename, 0, &callback, &io_dict)){
  773.         fprintf(stderr, "Unable to open I/O for %s\n", is->filename);
  774.         return -1;
  775.     }

  776.     // Open video file
  777.     if(avformat_open_input(&pFormatCtx, is->filename, NULL, NULL)!=0)
  778.         return -1; // Couldn't open file

  779.     is->pFormatCtx = pFormatCtx;

  780.     // Retrieve stream information
  781.     if(avformat_find_stream_info(pFormatCtx, NULL)<0)
  782.         return -1; // Couldn't find stream information

  783.     // Dump information about file onto standard error
  784.     av_dump_format(pFormatCtx, 0, is->filename, 0);

  785.     // Find the first video stream
  786.     for(i=0; i<pFormatCtx->nb_streams; i++) {
  787.         if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO && video_index < 0) {
  788.             video_index=i;
  789.         }
  790.         if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO && audio_index < 0) {
  791.             audio_index=i;
  792.         }
  793.     }
  794.     if(audio_index >= 0) {
  795.         stream_component_open(is, audio_index);
  796.     }
  797.     if(video_index >= 0) {
  798.         stream_component_open(is, video_index);
  799.     }

  800.     if(is->videoStream < 0 || is->audioStream < 0) {
  801.         fprintf(stderr, "%s: could not open codecs\n", is->filename);
  802.         goto fail;
  803.     }

  804.     // main decode loop
  805.     for(;;) {
  806.         if(is->quit) {
  807.             break;
  808.         }
  809.         // seek stuff goes here
  810.         if(is->audioq.size > MAX_AUDIOQ_SIZE || is->videoq.size > MAX_VIDEOQ_SIZE) {
  811.             SDL_Delay(10);
  812.             continue;
  813.         }
  814.         if(av_read_frame(is->pFormatCtx, packet) < 0) {
  815.             is->quit = global_readframe_cntout++ < 8 ? 0 : 1;
  816.             SDL_Delay(200);
  817.             /*
  818.             if(is->pFormatCtx->pb->error == 0) {
  819.                 SDL_Delay(100); // no error; wait for user input
  820.                 continue;
  821.             } else {
  822.                 break;
  823.             }
  824.             */
  825.         }
  826.     
  827.         // Is this a packet from the video stream?
  828.         if(packet->stream_index == is->videoStream) {
  829.             packet_queue_put(&is->videoq, packet);
  830.         } else if(packet->stream_index == is->audioStream) {
  831.             packet_queue_put(&is->audioq, packet);
  832.         } else {
  833.             av_free_packet(packet);
  834.         }
  835.     }

  836.     /* all done - wait for it */
  837.     while(!is->quit) {
  838.         SDL_Delay(100);
  839.     }

  840. fail:
  841.     if(1){
  842.         SDL_Event event;
  843.         event.type = FF_QUIT_EVENT;
  844.         event.user.data1 = is;
  845.         SDL_PushEvent(&event);
  846.     }
  847.     return 0;
  848. }

  849. int main(int argc, char *argv[])
  850. {
  851.     if(argc < 2) {
  852.         fprintf(stderr, "Usage: test <file>\n");
  853.         return -1;
  854.     }

  855.     SDL_Event event = {0};
  856.     VideoState *is;
  857.     is = av_mallocz(sizeof(VideoState));
  858.     if(!is) return -1;

  859.     if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
  860.         fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
  861.         return -1;
  862.     }

  863.     // Register all formats and codecs
  864.     av_register_all();
  865.   
  866.     // Make a screen to put our video
  867.     screen = SDL_CreateWindow("Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, PRE_WIDTH, PRE_HEIGHT, SDL_WINDOW_FULLSCREEN_DESKTOP);
  868.     if(!screen) {
  869.         fprintf(stderr, "SDL: could not set video mode - exiting (%s)\n", SDL_GetError());
  870.         av_free(is);
  871.         return -1;
  872.     }
  873.     renderer = SDL_CreateRenderer(screen, -1, 0);
  874.     
  875.     av_strlcpy(is->filename, argv[1], sizeof(is->filename));
  876.     is->pictq_mutex = SDL_CreateMutex();
  877.     is->pictq_cond = SDL_CreateCond();

  878.     schedule_refresh(is, 40);    // sends a FF_REFRESH_EVENT to the main loop 40 ms from now

  879.     is->av_sync_type = DEFAULT_AV_SYNC_TYPE;
  880.     is->parse_tid = SDL_CreateThread(decode_thread, "parse_tid", is);
  881.     if(!is->parse_tid) {
  882.         av_free(is);
  883.         return -1;
  884.     }

  885.     for(;;) {
  886.         SDL_WaitEvent(&event);
  887.         switch(event.type) {
  888.             case FF_QUIT_EVENT:
  889.             case SDL_QUIT:
  890.                 is->quit = 1;
  891.                 /*
  892.                 * If the video has finished playing, then both the picture and
  893.                 * audio queues are waiting for more data. Make them stop
  894.                 * waiting and terminate normally.
  895.                 */
  896.                 SDL_CondSignal(is->audioq.cond);
  897.                 SDL_CondSignal(is->videoq.cond);
  898.                 SDL_Quit();
  899.                 av_free(is);
  900.                 return 0;
  901.             case FF_ALLOC_EVENT:
  902.                 alloc_picture(event.user.data1);
  903.                 break;
  904.             case FF_REFRESH_EVENT:
  905.                 video_refresh_timer(event.user.data1);
  906.                 break;
  907.             default:
  908.                 break;
  909.         }
  910.     }
  911.     
  912.     av_free(is);
  913.     return 0;
  914. }







