Category: Embedded

2011-02-14 16:53:47

An ffmpeg and SDL Tutorial
 
Tutorial 02: Outputting to the Screen

Code: (excerpted from ffplay)
 
SDL and Video
To draw to the screen, we're going to use SDL. SDL stands for Simple DirectMedia Layer, and it is an excellent cross-platform multimedia library that is used in several projects. You can get the library from the official SDL site, or you can download the development package for your operating system if there is one. You'll need the libraries to compile the code for this tutorial (and for the rest of them, too).

SDL has many methods for drawing images to the screen, and it has one in particular that is meant for displaying movies on the screen - what it calls a YUV overlay. YUV is a way of storing raw image data, like RGB. Roughly speaking, Y is the brightness (or "luma") component, and U and V are the color components. (It's more complicated than RGB because some of the color information is discarded; you might have only 1 U and 1 V sample for every 2 Y samples.) SDL's YUV overlay takes in a raw array of YUV data and displays it. It accepts 4 different kinds of YUV formats, but YV12 is the fastest. There is another YUV format called YUV420P that is the same as YV12, except that the U and V arrays are switched. The 420 means it is subsampled at a ratio of 4:2:0, basically meaning there is 1 color sample for every 4 luma samples, so the color information is quartered. This is a good way of saving bandwidth, as the human eye does not perceive the change. The "P" in the name means that the format is "planar", which simply means that the Y, U, and V components are in separate arrays. ffmpeg can convert images to YUV420P, with the added bonus that many video streams are already in that format, or are easily converted to it.

  * A note: there is a great deal of annoyance from some people at the convention of calling "YCbCr" "YUV". Generally speaking, YUV is an analog format and YCbCr is a digital format; ffmpeg and SDL both refer to YCbCr as YUV in their code and macros.

So our current plan is to replace the SaveFrame() function from the previous tutorial, and instead output our frame to the screen. But first we have to start by seeing how to use the SDL library. First we have to include the headers and initialize SDL:

  #include <SDL.h>
  #include <SDL_thread.h>

  if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
    fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
    exit(1);
  }

SDL_Init() essentially tells the library what features we're going to use. SDL_GetError(), of course, is a handy debugging function.

Creating a Display

Now we need a place on the screen to put stuff. The basic area for displaying images with SDL is called a surface:

  SDL_Surface *screen;

  screen = SDL_SetVideoMode(pCodecCtx->width, pCodecCtx->height, 0, 0);
  if(!screen) {
    fprintf(stderr, "SDL: could not set video mode - exiting\n");
    exit(1);
  }

This sets up a screen with the given width and height. The next option is the bit depth of the screen - 0 is a special value that means "same as the current display". (This does not work on OS X; see source.)

Now we create a YUV overlay on that screen so we can input video to it:

  SDL_Overlay *bmp;

  bmp = SDL_CreateYUVOverlay(pCodecCtx->width, pCodecCtx->height,
                             SDL_YV12_OVERLAY, screen);

As we said before, we are using YV12 to display the image.

Displaying the Image

Well, that was simple enough! Now we just need to display the image. Let's go all the way down to where we had our finished frame. We can get rid of all that stuff we had for the RGB frame, and we're going to replace the SaveFrame() call with our display code. To display the image, we're going to make an AVPicture struct and set its data pointers and linesizes to our YUV overlay:

  if(frameFinished) {
    SDL_LockYUVOverlay(bmp);

    AVPicture pict;
    pict.data[0] = bmp->pixels[0];
    pict.data[1] = bmp->pixels[2];
    pict.data[2] = bmp->pixels[1];

    pict.linesize[0] = bmp->pitches[0];
    pict.linesize[1] = bmp->pitches[2];
    pict.linesize[2] = bmp->pitches[1];

    // Convert the image into YUV format that SDL uses
    img_convert(&pict, PIX_FMT_YUV420P,
                (AVPicture *)pFrame, pCodecCtx->pix_fmt,
                pCodecCtx->width, pCodecCtx->height);

    SDL_UnlockYUVOverlay(bmp);
  }

First, we lock the overlay because we are going to be writing to it. This is a good habit to get into so you don't have problems later. The AVPicture struct, as shown before, has a data pointer that is an array of 4 pointers. Since we are dealing with YUV420P here, we only have 3 channels, and therefore only 3 sets of data. Other formats might have a fourth pointer for an alpha channel or something. linesize is what it sounds like. The analogous structures in our YUV overlay are the pixels and pitches variables. ("pitches" is the term SDL uses to refer to the width of a given line of data.) So what we do is point the three arrays of pict.data at our overlay, so when we write to pict, we're actually writing into our overlay, which of course already has the necessary space allocated. Note that the U and V pointers are crossed (pict.data[1] points at bmp->pixels[2] and vice versa): the overlay is YV12 while ffmpeg writes YUV420P, and those two formats store the chroma planes in opposite order. Similarly, we get the linesize information directly from our overlay. We change the conversion format to PIX_FMT_YUV420P, and we use img_convert() just like before.

Drawing the Image

But we still need to tell SDL to actually show the data we've given it; that is what SDL_DisplayYUVOverlay() below does. We also pass this function a rectangle that says where the movie should go and what width and height it should be scaled to. This way, SDL does the scaling for us, and it can be assisted by your graphics processor for faster scaling:

  SDL_Rect rect;

  if(frameFinished) {
    /* ... code ... */
    // Convert the image into YUV format that SDL uses
    img_convert(&pict, PIX_FMT_YUV420P,
                (AVPicture *)pFrame, pCodecCtx->pix_fmt,
                pCodecCtx->width, pCodecCtx->height);

    SDL_UnlockYUVOverlay(bmp);
    rect.x = 0;
    rect.y = 0;
    rect.w = pCodecCtx->width;
    rect.h = pCodecCtx->height;
    SDL_DisplayYUVOverlay(bmp, &rect);
  }

Now our video is displayed!

Let's take this time to show you another feature of SDL: its event system. SDL is set up so that when you type, or move the mouse in the SDL application, or send it a signal, it generates an event. Your program then checks for these events if it wants to handle user input. Your program can also make up events to send to the SDL event system. This is especially useful when doing multithreaded programming with SDL, which we'll see in a later tutorial. In our program, we're going to poll for events right after we finish processing a packet. For now, we're just going to handle the SDL_QUIT event so we can exit:

  SDL_Event event;

  av_free_packet(&packet);
  SDL_PollEvent(&event);
  switch(event.type) {
  case SDL_QUIT:
    SDL_Quit();
    exit(0);
    break;
  default:
    break;
  }
And there we go! Get rid of all the old cruft, and you're ready to compile. If you are using Linux or a variant, the best way to compile using the SDL libs is this:
  gcc -o tutorial02 tutorial02.c -lavutil -lavformat -lavcodec -lz -lm \
  `sdl-config --cflags --libs`
sdl-config just prints out the proper flags for gcc to include the SDL libraries properly. You may need to do something different to get it to compile on your system; please check the SDL documentation for your system. Once it compiles, go ahead and run it.

What happens when you run this program? The video is going crazy! In fact, we're just displaying all the video frames as fast as we can extract them from the movie file. We don't have any code right now for figuring out when we need to display video. Eventually (in a later tutorial), we'll get around to syncing the video. But first we're missing something even more important: sound!

 
