
libav* incorrect decode


I use libav to save frames from a video.

The problem is that if I call the decode() function several times, the second and subsequent calls are not handled correctly.

The first call produces this output (everything works fine):

[swscaler @ 0x8b48510]No accelerated colorspace conversion found from yuv420p to bgra.
good

The second call produces this (it cannot find the stream parameters, even though the data is exactly the same):

[mp3 @ 0x8ae5800]Header missing
Last message repeated 223 times
[mp3 @ 0x8af31c0]Could not find codec parameters (Audio: mp1, 0 channels, s16)
[mp3 @ 0x8af31c0]Estimating duration from bitrate, this may be inaccurate
av_find_stream_info

Can you please tell me where the error is?

main.cpp

avcodec_init();
avcodec_register_all();
av_register_all();
char *data;
int size;
//fill data and size
...
decode(data, size);
decode(data, size);

video.cpp

int f_offset = 0;
int f_length = 0;
char *f_data = 0;

int64_t seekp(void *opaque, int64_t offset, int whence)
{
    switch (whence)
    {
    case SEEK_SET:
        if (offset > f_length || offset < 0)
            return -1;
        f_offset = offset;
        return f_offset;
    case SEEK_CUR:
        if (f_offset + offset > f_length || f_offset + offset < 0)
            return -1;
        f_offset += offset;
        return f_offset;
    case SEEK_END:
        if (offset > 0 || f_length + offset < 0)
            return -1;
        f_offset = f_length + offset;
        return f_offset;
    case AVSEEK_SIZE:
        return f_length;
    }

    return -1;
}
int readp(void *opaque, uint8_t *buf, int buf_size)
{
    if (f_offset == f_length)
        return 0;

    int length = buf_size <= (f_length - f_offset) ? buf_size : (f_length - f_offset);

    memcpy(buf, f_data + f_offset, length);
    f_offset += length;

    return length;
}

bool decode(char *data, int length)
{
    f_offset = 0;
    f_length = length;
    f_data = data;

    int buffer_read_size = FF_MIN_BUFFER_SIZE;
    uchar *buffer_read = (uchar *) av_mallocz(buffer_read_size + FF_INPUT_BUFFER_PADDING_SIZE);

    AVProbeData pd;
    pd.filename = "";
    pd.buf_size = 4096 < f_length ? 4096 : f_length;
    pd.buf = (uchar *) av_mallocz(pd.buf_size + AVPROBE_PADDING_SIZE);
    memcpy(pd.buf, f_data, pd.buf_size);

    AVInputFormat *pAVInputFormat = av_probe_input_format(&pd, 1);
    if (pAVInputFormat == NULL)
    {
        std::cerr << "AVIF";
        return false;
    }
    pAVInputFormat->flags |= AVFMT_NOFILE;

    ByteIOContext ByteIOCtx;
    if (init_put_byte(&ByteIOCtx, buffer_read, buffer_read_size, 0, NULL, readp, NULL, seekp) < 0)
    {
        std::cerr << "init_put_byte";
        return false;
    }

    AVFormatContext *pFormatCtx;
    if (av_open_input_stream(&pFormatCtx, &ByteIOCtx, "", pAVInputFormat, NULL) < 0)
    {
        std::cerr << "av_open_stream";
        return false;
    }

    if (av_find_stream_info(pFormatCtx) < 0)
    {
        std::cerr << "av_find_stream_info";
        return false;
    }

    int video_stream;
    video_stream = -1;
    for (uint i = 0; i < pFormatCtx->nb_streams; ++i)
        if (pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO)
        {
            video_stream = i;
            break;
        }
    if (video_stream == -1)
    {
        std::cerr << "video_stream == -1";
        return false;
    }

    AVCodecContext *pCodecCtx;
    pCodecCtx = pFormatCtx->streams[video_stream]->codec;

    AVCodec *pCodec;
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL)
    {
        std::cerr << "pCodec == NULL";
        return false;
    }

    if (avcodec_open(pCodecCtx, pCodec) < 0)
    {
        std::cerr << "avcodec_open";
        return false;
    }

    AVFrame *pFrame;
    pFrame = avcodec_alloc_frame();
    if (pFrame == NULL)
    {
        std::cerr << "pFrame == NULL";
        return false;
    }
    AVFrame *pFrameRGB;
    pFrameRGB = avcodec_alloc_frame();
    if (pFrameRGB == NULL)
    {
        std::cerr << "pFrameRGB == NULL";
        return false;
    }

    int numBytes;
    numBytes = avpicture_get_size(PIX_FMT_RGB32, pCodecCtx->width, pCodecCtx->height);
    uint8_t *buffer;
    buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
    if (buffer == NULL)
    {
        std::cerr << "buffer == NULL";
        return false;
    }

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB32, pCodecCtx->width, pCodecCtx->height);

    SwsContext *swsctx;
    swsctx = sws_getContext(
                pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB32,
                SWS_BILINEAR, NULL, NULL, NULL);
    if (swsctx == NULL)
    {
        std::cerr << "swsctx == NULL";
        return false;
    }

    AVPacket packet;
    while (av_read_frame(pFormatCtx, &packet) >= 0)
    {
        if (packet.stream_index == video_stream)
        {
            int frame_finished;
            avcodec_decode_video2(pCodecCtx, pFrame, &frame_finished, &packet);

            if (frame_finished)
            {
                sws_scale(swsctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

                std::cerr << "good";
                av_close_input_stream(pFormatCtx);

                return true;
            }
            else
                std::cerr << "frame_finished == 0";
        }
    }

    std::cerr << "av_read_frame < 0";
    return false;
}

ffmpeg -version

FFmpeg 0.6.2-4:0.6.2-1ubuntu1
libavutil     50.15. 1 / 50.15. 1
libavcodec    52.72. 2 / 52.72. 2
libavformat   52.64. 2 / 52.64. 2
libavdevice   52. 2. 0 / 52. 2. 0
libavfilter    1.19. 0 /  1.19. 0
libswscale     0.11. 0 /  0.11. 0
libpostproc   51. 2. 0 / 51. 2. 0


You have probably read some libav tutorial and simply copy/pasted almost all of its code into your decode() function. That is the real problem. Look at your code: every time you want to decode a frame, no matter whether audio or video, you open an input context, initialize codecs and allocate buffers all over again, and you never close or free any of it. Keep in mind that even if you opened/initialized and closed/freed everything correctly, you would still get the same frame on every call to decode(), because this approach seeks the file position back to the beginning each time decode() is called.
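
A minimal structural sketch of one way to avoid that, assuming hypothetical open_input()/read_one_frame()/close_input() helpers that keep the format, codec and scaler contexts alive between calls (these helpers are not part of your code, just an illustration):

// Hedged sketch: open the input once, decode as many frames as needed,
// then clean up once, instead of redoing everything per frame.
DecoderState st;                   // hypothetical struct holding pFormatCtx, pCodecCtx, swsctx, ...
if (!open_input(&st, data, size))  // probe, open stream, find streams and codec: done once
    return false;

for (int i = 0; i < 2; ++i)
    read_one_frame(&st);           // av_read_frame() + avcodec_decode_video2() continue from the
                                   // current position, so each call yields a new frame

close_input(&st);                  // avcodec_close(), close of the input, frees: done once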

Moreover, you call av_close_input_stream() instead of av_close_input_file(), and you forgot to close the codec with avcodec_close(), to free the allocated picture with avpicture_free(), to free the allocated frames with av_free(), and to free the read packets with av_free_packet(). In addition, your functions seekp() and readp() may be wrong too.
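
For reference, a minimal cleanup sketch for the resources allocated inside decode(), using the old 0.6-era API from your version output; the variable names are the ones from your code, and the exact pairing of open/close calls may need adjusting for your custom I/O setup:

// Hedged sketch: release everything decode() allocated, roughly in reverse order of allocation.
av_free_packet(&packet);           // packet filled by av_read_frame()
sws_freeContext(swsctx);           // software-scaler context
av_free(buffer);                   // buffer was av_malloc'ed and attached with avpicture_fill()
av_free(pFrameRGB);                // frames from avcodec_alloc_frame() are freed with av_free()
av_free(pFrame);
avcodec_close(pCodecCtx);          // codec opened with avcodec_open()
av_close_input_stream(pFormatCtx); // counterpart of av_open_input_stream();
                                   // av_close_input_file() pairs with av_open_input_file()
av_free(pd.buf);                   // probe buffer
av_free(buffer_read);              // I/O buffer passed to init_put_byte()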

One more piece of advice: sws_getContext() is deprecated and you should use sws_getCachedContext() instead. As the name suggests, in your case (multiple calls to sws_getContext(), which is still wrong in itself) it will also be faster, because it reuses the previous context when the parameters have not changed.
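
A minimal sketch of the sws_getCachedContext() call, assuming the SwsContext pointer is kept across calls (the static variable here is an assumption, not something in your code):

// Hedged sketch: reuse the scaler context instead of recreating it on every call.
// sws_getCachedContext() returns the old context if the parameters are unchanged,
// otherwise it frees it and allocates a new one.
static SwsContext *swsctx = NULL;   // assumption: persists across decode() calls
swsctx = sws_getCachedContext(swsctx,
             pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
             pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB32,
             SWS_BILINEAR, NULL, NULL, NULL);
if (swsctx == NULL)
{
    std::cerr << "sws_getCachedContext failed";
    return false;
}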

Please read some tutorials on libav again. They all seem to be out of date, but you can simply replace the deprecated or removed functions with the new ones, which you can find in the official libav Doxygen documentation. Here are some links:

http://www.inb.uni-luebeck.de/~boehme/using_libavcodec.html
http://dranger.com/ffmpeg/ffmpeg.html

You will find up-to-date examples in the official API documentation of libav:

http://libav.org/doxygen/master/examples.html

They explain the most common use cases.

