
The Archive Base Latest Questions

Asked by Editorial Team on May 16, 2026 at 11:49 am


So I have VideoEncoder.h with this code:

/*
  FFmpeg simple Encoder
 */


#ifndef __VIDEO_ENCODER_H__
#define __VIDEO_ENCODER_H__

#include "ffmpegInclude.h"
#include <Windows.h>
#include <string>

class VideoEncoder
{
  private:

  // output file name
  std::string     outputFilename;
  // output format.
  AVOutputFormat  *pOutFormat;
  // format context
  AVFormatContext *pFormatContext;
  // video stream context
  AVStream * pVideoStream;
  // audio streams context
  AVStream * pAudioStream;
  // convert context context
  struct SwsContext *pImgConvertCtx;
  // encode buffer and size
  uint8_t * pVideoEncodeBuffer;
  int nSizeVideoEncodeBuffer;

  // audio buffer and size
  uint8_t * pAudioEncodeBuffer;
  int nSizeAudioEncodeBuffer;


  // count of sample
  int audioInputSampleSize;
  // current picture
  AVFrame *pCurrentPicture;

  // audio buffer
  char* audioBuffer;
  int   nAudioBufferSize;
  int   nAudioBufferSizeCurrent;

  public:

  VideoEncoder() 
  {
    pOutFormat = NULL;
    pFormatContext = NULL;
    pVideoStream = NULL;
    pImgConvertCtx = NULL;
    pCurrentPicture = NULL;
    pVideoEncodeBuffer = NULL;
    nSizeVideoEncodeBuffer = 0;
    pAudioEncodeBuffer = NULL;
    nSizeAudioEncodeBuffer = 0;
    nAudioBufferSize = 1024 * 1024 * 4;
    audioBuffer      = new char[nAudioBufferSize];
    nAudioBufferSizeCurrent = 0;
  }

  virtual ~VideoEncoder() 
  {
    Finish();
  }

  // init output file
  bool InitFile(std::string& inputFile, std::string& container);
  // Add video and audio data
  bool AddFrame(AVFrame* frame, const char* soundBuffer, int soundBufferSize);
  // end of output
  bool Finish();

  private: 

  // Add video stream
  AVStream *AddVideoStream(AVFormatContext *pContext, CodecID codec_id);
  // Open Video Stream
  bool OpenVideo(AVFormatContext *oc, AVStream *pStream);
  // Allocate memory
  AVFrame * CreateFFmpegPicture(int pix_fmt, int nWidth, int nHeight);
  // Close video stream
  void CloseVideo(AVFormatContext *pContext, AVStream *pStream);
  // Add audio stream
  AVStream * AddAudioStream(AVFormatContext *pContext, CodecID codec_id);
  // Open audio stream
  bool OpenAudio(AVFormatContext *pContext, AVStream *pStream);
  // close audio stream
  void CloseAudio(AVFormatContext *pContext, AVStream *pStream);
  // Add video frame
  bool AddVideoFrame(AVFrame * frame, AVCodecContext *pVideoCodec);
  // Add audio samples
  bool AddAudioSample(AVFormatContext *pFormatContext, 
    AVStream *pStream, const char* soundBuffer, int soundBufferSize);
  // Free resources.
  void Free();
  bool NeedConvert();
};

#endif // __VIDEO_ENCODER_H__

So I see InitFile, AddFrame, and Finish here.

While in VideoEncoder.cpp I see this:

#include <stdio.h>
#include <stdlib.h>
#include "ffmpegInclude.h"
#include <math.h>
#include "VideoEncoder.h"
#include "Settings.h"

#define MAX_AUDIO_PACKET_SIZE (128 * 1024)

bool VideoEncoder::InitFile(std::string& inputFile, std::string& container)
{
  bool res = false;

  const char * filename = inputFile.c_str();
  outputFilename = inputFile;

  // Initialize libavcodec
  av_register_all();

  if (container == std::string("auto"))
  {
    // Create format
    pOutFormat = guess_format(NULL, filename, NULL);
  }
  else
  {
    // use container
    pOutFormat = guess_format(container.c_str(), NULL, NULL);
  }

  if (pOutFormat) 
  {
    // allocate context
    pFormatContext = avformat_alloc_context();
    if (pFormatContext) 
    {    
      pFormatContext->oformat = pOutFormat;
      memcpy(pFormatContext->filename, filename, min(strlen(filename), 
        sizeof(pFormatContext->filename)));

      // Add video and audio stream
      pVideoStream   = AddVideoStream(pFormatContext, pOutFormat->video_codec);
      pAudioStream   = AddAudioStream(pFormatContext, pOutFormat->audio_codec);

      // Set the output parameters (must be done even if no
      // parameters).
      if (av_set_parameters(pFormatContext, NULL) >=0) 
      {
        dump_format(pFormatContext, 0, filename, 1);

        // Open Video and Audio stream
        res = false;
        if (pVideoStream)
        {
          res = OpenVideo(pFormatContext, pVideoStream);
        }

        res = OpenAudio(pFormatContext, pAudioStream);

        if (res && !(pOutFormat->flags & AVFMT_NOFILE)) 
        {
          if (url_fopen(&pFormatContext->pb, filename, URL_WRONLY) < 0) 
          {
            res = false;
            printf("Cannot open file\n");
          }
        }

        if (res)
        {
          av_write_header(pFormatContext);
          res = true;
        }
      }    
    }   
  }

  if (!res)
  {
    Free();
    printf("Cannot init file\n");
  }

  return res;
}


bool VideoEncoder::AddFrame(AVFrame* frame, const char* soundBuffer, int soundBufferSize)
{
  bool res = true;
  int nOutputSize = 0;
  AVCodecContext *pVideoCodec = NULL;

  if (pVideoStream && frame && frame->data[0])
  {
    pVideoCodec = pVideoStream->codec;

    if (NeedConvert()) 
    {
      // RGB to YUV420P.
      if (!pImgConvertCtx) 
      {
        pImgConvertCtx = sws_getContext(pVideoCodec->width, pVideoCodec->height,
          PIX_FMT_RGB24,
          pVideoCodec->width, pVideoCodec->height,
          pVideoCodec->pix_fmt,
          SWS_BICUBLIN, NULL, NULL, NULL);
      }
    }

    // Allocate picture.
    pCurrentPicture = CreateFFmpegPicture(pVideoCodec->pix_fmt, pVideoCodec->width, 
      pVideoCodec->height);

    if (NeedConvert() && pImgConvertCtx) 
    {
      // Convert RGB to YUV.
      sws_scale(pImgConvertCtx, frame->data, frame->linesize,
        0, pVideoCodec->height, pCurrentPicture->data, pCurrentPicture->linesize);      
    }

    res = AddVideoFrame(pCurrentPicture, pVideoStream->codec);

    // Free temp frame
    av_free(pCurrentPicture->data[0]);
    av_free(pCurrentPicture);
    pCurrentPicture = NULL;
  }

  // Add sound
  if (soundBuffer && soundBufferSize > 0)
  {
    res = AddAudioSample(pFormatContext, pAudioStream, soundBuffer, soundBufferSize);
  }

  return res;
}


bool VideoEncoder::Finish()
{
  bool res = true;

  if (pFormatContext)
  {
    av_write_trailer(pFormatContext);
    Free();
  }

  if (audioBuffer)
  {
    delete[] audioBuffer;
    audioBuffer = NULL;
  }

  return res;
}


void VideoEncoder::Free()
{
  bool res = true;

  if (pFormatContext)
  {
    // close video stream
    if (pVideoStream)
    {
      CloseVideo(pFormatContext, pVideoStream);
    }

    // close audio stream.
    if (pAudioStream)
    {
      CloseAudio(pFormatContext, pAudioStream);
    }

    // Free the streams.
    for(size_t i = 0; i < pFormatContext->nb_streams; i++) 
    {
      av_freep(&pFormatContext->streams[i]->codec);
      av_freep(&pFormatContext->streams[i]);
    }

    if (!(pFormatContext->flags & AVFMT_NOFILE) && pFormatContext->pb) 
    {
      url_fclose(pFormatContext->pb);
    }

    // Free the stream.
    av_free(pFormatContext);
    pFormatContext = NULL;
  }
}

AVFrame * VideoEncoder::CreateFFmpegPicture(int pix_fmt, int nWidth, int nHeight)
{
  AVFrame *picture     = NULL;
  uint8_t *picture_buf = NULL;
  int size;

  picture = avcodec_alloc_frame();
  if ( !picture)
  {
    printf("Cannot create frame\n");
    return NULL;
  }

  size = avpicture_get_size(pix_fmt, nWidth, nHeight);

  picture_buf = (uint8_t *) av_malloc(size);

  if (!picture_buf) 
  {
    av_free(picture);
    printf("Cannot allocate buffer\n");
    return NULL;
  }

  avpicture_fill((AVPicture *)picture, picture_buf,
    pix_fmt, nWidth, nHeight);

  return picture;
}


bool VideoEncoder::OpenVideo(AVFormatContext *oc, AVStream *pStream)
{
  AVCodec *pCodec;
  AVCodecContext *pContext;

  pContext = pStream->codec;

  // Find the video encoder.
  pCodec = avcodec_find_encoder(pContext->codec_id);
  if (!pCodec) 
  {
    printf("Cannot find video codec\n");
    return false;
  }

  // Open the codec.
  if (avcodec_open(pContext, pCodec) < 0) 
  {
    printf("Cannot open video codec\n");
    return false;
  }

  pVideoEncodeBuffer = NULL;      
  if (!(pFormatContext->oformat->flags & AVFMT_RAWPICTURE)) 
  {
    /* allocate output buffer */
    nSizeVideoEncodeBuffer = 10000000;
    pVideoEncodeBuffer = (uint8_t *)av_malloc(nSizeVideoEncodeBuffer);
  }

  return true;
}


void VideoEncoder::CloseVideo(AVFormatContext *pContext, AVStream *pStream)
{
  avcodec_close(pStream->codec);
  if (pCurrentPicture)
  {
    if (pCurrentPicture->data)
    {
      av_free(pCurrentPicture->data[0]);
      pCurrentPicture->data[0] = NULL;
    }
    av_free(pCurrentPicture);
    pCurrentPicture = NULL;
  }

  if (pVideoEncodeBuffer)
  {
    av_free(pVideoEncodeBuffer);
    pVideoEncodeBuffer = NULL;
  }
  nSizeVideoEncodeBuffer = 0;
}


bool VideoEncoder::NeedConvert()
{
  bool res = false;
  if (pVideoStream && pVideoStream->codec)
  {
    res = (pVideoStream->codec->pix_fmt != PIX_FMT_RGB24);
  }
  return res;
}


AVStream *VideoEncoder::AddVideoStream(AVFormatContext *pContext, CodecID codec_id)
{
  AVCodecContext *pCodecCxt = NULL;
  AVStream *st    = NULL;

  st = av_new_stream(pContext, 0);
  if (!st) 
  {
    printf("Cannot add new video stream\n");
    return NULL;
  }

  pCodecCxt = st->codec;
  pCodecCxt->codec_id = (CodecID)codec_id;
  pCodecCxt->codec_type = CODEC_TYPE_VIDEO;
  pCodecCxt->frame_number = 0;
  // Put sample parameters.
  pCodecCxt->bit_rate = 2000000;
  // Resolution must be a multiple of two.
  pCodecCxt->width  = W_VIDEO;
  pCodecCxt->height = H_VIDEO;
  /* time base: this is the fundamental unit of time (in seconds) in terms
     of which frame timestamps are represented. for fixed-fps content,
     timebase should be 1/framerate and timestamp increments should be
     identically 1. */
  pCodecCxt->time_base.den = 25;
  pCodecCxt->time_base.num = 1;
  pCodecCxt->gop_size = 12; // emit one intra frame every twelve frames at most

  pCodecCxt->pix_fmt = PIX_FMT_YUV420P;
  if (pCodecCxt->codec_id == CODEC_ID_MPEG2VIDEO) 
  {
      // Just for testing, we also add B frames 
      pCodecCxt->max_b_frames = 2;
  }
  if (pCodecCxt->codec_id == CODEC_ID_MPEG1VIDEO)
  {
      /* Needed to avoid using macroblocks in which some coeffs overflow.
         This does not happen with normal video, it just happens here as
         the motion of the chroma plane does not match the luma plane. */
      pCodecCxt->mb_decision = 2;
  }

  // Some formats want stream headers to be separate.
  if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
  {
      pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
  }

  return st;
}


AVStream * VideoEncoder::AddAudioStream(AVFormatContext *pContext, CodecID codec_id)
{
  AVCodecContext *pCodecCxt = NULL;
  AVStream *pStream = NULL;

  // Try create stream.
  pStream = av_new_stream(pContext, 1);
  if (!pStream) 
  {
    printf("Cannot add new audio stream\n");
    return NULL;
  }

  // Codec.
  pCodecCxt = pStream->codec;
  pCodecCxt->codec_id = codec_id;
  pCodecCxt->codec_type = CODEC_TYPE_AUDIO;
  // Set format
  pCodecCxt->bit_rate    = 128000;
  pCodecCxt->sample_rate = 44100;
  pCodecCxt->channels    = 1;
  pCodecCxt->sample_fmt  = SAMPLE_FMT_S16;

  nSizeAudioEncodeBuffer = 4 * MAX_AUDIO_PACKET_SIZE;
  if (pAudioEncodeBuffer == NULL)
  {      
    pAudioEncodeBuffer = (uint8_t * )av_malloc(nSizeAudioEncodeBuffer);
  }

  // Some formats want stream headers to be separate.
  if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
  {
    pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
  }

  return pStream;
}


bool VideoEncoder::OpenAudio(AVFormatContext *pContext, AVStream *pStream)
{
  AVCodecContext *pCodecCxt = NULL;
  AVCodec *pCodec = NULL;
  pCodecCxt = pStream->codec;

  // Find the audio encoder.
  pCodec = avcodec_find_encoder(pCodecCxt->codec_id);
  if (!pCodec) 
  {
    printf("Cannot find audio codec\n");
    return false;
  }

  // Open it.
  if (avcodec_open(pCodecCxt, pCodec) < 0) 
  {
    printf("Cannot open audio codec\n");
    return false;
  }

  if (pCodecCxt->frame_size <= 1) 
  {
    // Ugly hack for PCM codecs (will be removed ASAP with new PCM
    // support) to compute the input frame size in samples.
    audioInputSampleSize = nSizeAudioEncodeBuffer / pCodecCxt->channels;
    switch (pStream->codec->codec_id) 
    {
      case CODEC_ID_PCM_S16LE:
      case CODEC_ID_PCM_S16BE:
      case CODEC_ID_PCM_U16LE:
      case CODEC_ID_PCM_U16BE:
        audioInputSampleSize >>= 1;
        break;
      default:
        break;
    }
    pCodecCxt->frame_size = audioInputSampleSize;
  } 
  else 
  {
    audioInputSampleSize = pCodecCxt->frame_size;
  }

  return true;
}


void VideoEncoder::CloseAudio(AVFormatContext *pContext, AVStream *pStream)
{
  avcodec_close(pStream->codec);
  if (pAudioEncodeBuffer)
  {
    av_free(pAudioEncodeBuffer);
    pAudioEncodeBuffer = NULL;
  }
  nSizeAudioEncodeBuffer = 0;
}


bool VideoEncoder::AddVideoFrame(AVFrame * pOutputFrame, AVCodecContext *pVideoCodec)
{
  bool res = false;

  if (pFormatContext->oformat->flags & AVFMT_RAWPICTURE) 
  {
    // Raw video case. The API will change slightly in the near
    // future for that.
    AVPacket pkt;
    av_init_packet(&pkt);

    pkt.flags |= PKT_FLAG_KEY;
    pkt.stream_index = pVideoStream->index;
    pkt.data= (uint8_t *) pOutputFrame;
    pkt.size= sizeof(AVPicture);

    res = av_interleaved_write_frame(pFormatContext, &pkt);
    res = true;
  } 
  else 
  {
    // Encode
    int nOutputSize = avcodec_encode_video(pVideoCodec, pVideoEncodeBuffer, 
      nSizeVideoEncodeBuffer, pOutputFrame);
    if (nOutputSize > 0) 
    {
      AVPacket pkt;
      av_init_packet(&pkt);

      if (pVideoCodec->coded_frame->pts != AV_NOPTS_VALUE)
      {
        pkt.pts = av_rescale_q(pVideoCodec->coded_frame->pts, 
          pVideoCodec->time_base, pVideoStream->time_base);
      }

      if(pVideoCodec->coded_frame->key_frame)
      {
        pkt.flags |= PKT_FLAG_KEY;
      }
      pkt.stream_index = pVideoStream->index;
      pkt.data         = pVideoEncodeBuffer;
      pkt.size         = nOutputSize;

      // Write frame
      res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
    }
    else 
    {
      res = false;
    }
  }

  return res;
}


bool VideoEncoder::AddAudioSample(AVFormatContext *pFormatContext, AVStream *pStream, 
                                        const char* soundBuffer, int soundBufferSize)
{
  AVCodecContext *pCodecCxt;    
  bool res = true;  

  pCodecCxt       = pStream->codec;
  memcpy(audioBuffer + nAudioBufferSizeCurrent, soundBuffer, soundBufferSize);
  nAudioBufferSizeCurrent += soundBufferSize;

  BYTE * pSoundBuffer = (BYTE *)audioBuffer;
  int nCurrentSize    = nAudioBufferSizeCurrent;

  // Size of packet on bytes.
  // FORMAT s16
  DWORD packSizeInSize = 2 * audioInputSampleSize;

  while(nCurrentSize >= packSizeInSize)
  {
    AVPacket pkt;
    av_init_packet(&pkt);

    pkt.size = avcodec_encode_audio(pCodecCxt, pAudioEncodeBuffer, 
      nSizeAudioEncodeBuffer, (const short *)pSoundBuffer);

    if (pCodecCxt->coded_frame && pCodecCxt->coded_frame->pts != AV_NOPTS_VALUE)
    {
      pkt.pts = av_rescale_q(pCodecCxt->coded_frame->pts, pCodecCxt->time_base, pStream->time_base);
    }

    pkt.flags |= PKT_FLAG_KEY;
    pkt.stream_index = pStream->index;
    pkt.data = pAudioEncodeBuffer;

    // Write the compressed frame in the media file.
    if (av_interleaved_write_frame(pFormatContext, &pkt) != 0) 
    {
      res = false;
      break;
    }

    nCurrentSize -= packSizeInSize;  
    pSoundBuffer += packSizeInSize;      
  }

  // save excess
  memcpy(audioBuffer, audioBuffer + nAudioBufferSizeCurrent - nCurrentSize, nCurrentSize);
  nAudioBufferSizeCurrent = nCurrentSize; 

  return res;
}

(The code samples come from an example presented in this article, which is in Russian.)

What is all this VideoEncoder:: prefix for, and why declare these functions in the .h file rather than defining the whole class inside the .cpp file?

And why, further on in the code, is only #include "VideoEncoder.h" needed to use the VideoEncoder class? Why do functions that are merely declared in VideoEncoder.h work perfectly without any reference to VideoEncoder.cpp?


1 Answer

Editorial Team, answered on May 16, 2026 at 11:49 am

The fundamental reason the .cpp file has so many functions prefixed with VideoEncoder:: is that the .cpp file contains the implementation of what the .h file declares as the interface. It is common practice in C++ to declare a class's functionality in a header file and then implement it in a separate .cpp file.
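To illustrate with a minimal hypothetical class (Counter is not taken from the question's code): the header declares the interface, and the .cpp file supplies the definitions, each prefixed with ClassName:::

```cpp
// Counter.h -- the interface: what the class can do
#ifndef COUNTER_H
#define COUNTER_H

class Counter
{
public:
  void Increment();      // declared here...
  int  Value() const;
private:
  int value = 0;
};

#endif // COUNTER_H

// Counter.cpp -- the implementation (shown in the same listing for
// brevity; in a real project it would begin with #include "Counter.h")
void Counter::Increment() { ++value; }          // ...defined here,
int  Counter::Value() const { return value; }   // with the Counter:: prefix
```

The Counter:: prefix tells the compiler that these free-standing definitions are the bodies of the member functions declared inside class Counter, exactly as VideoEncoder:: does in VideoEncoder.cpp.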

One benefit is that whenever you change the implementation details of the class (leaving the interface untouched), only that .cpp file has to be recompiled; nothing else needs to be rebuilt. Developers push this technique as far as they can, declaring as little as possible in the .h file. Read up on the pImpl pattern (after reading a good C++ introduction). The basic idea is that the less information a .h file exposes, the less time is needed to recompile the files that include it. So you put into the .h file only those functions that are absolutely essential for the class's clients to understand what the class is all about.
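A sketch of the pImpl idea mentioned above (Widget, Impl, and Answer are hypothetical names): the header exposes only an opaque pointer, so the private data can change without forcing clients to recompile.

```cpp
#include <memory>

// Widget.h -- nothing about the private data is visible here
class Widget
{
public:
  Widget();
  ~Widget();
  int Answer() const;
private:
  struct Impl;                  // forward declaration only
  std::unique_ptr<Impl> pImpl;  // opaque pointer to the real state
};

// Widget.cpp -- the only file that knows what Impl contains
struct Widget::Impl
{
  int secret = 42;              // changing this never touches Widget.h
};

Widget::Widget() : pImpl(new Impl) {}
Widget::~Widget() = default;    // defined where Impl is a complete type
int Widget::Answer() const { return pImpl->secret; }
```

Because the destructor is defined in the .cpp file, where Impl is complete, std::unique_ptr can delete it correctly while the header stays ignorant of the struct's contents.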

As for why functions declared in the .h file work perfectly without ever referencing the .cpp file: it comes down to how C++ programs are built. (People are right that you should pick up a C++ book and study it carefully.) A C++ program consists of compilation units: individual .cpp files that can be compiled independently. Each .cpp file only needs to #include declarations of the functionality it uses; that is enough to satisfy the compiler. There is, however, a second stage: linking. The linker checks whether an actual implementation exists for everything that was used, by searching the .obj files generated by the compiler for an entry corresponding to each function. If the entry is not there, you get a linker error (the compiler itself reports nothing, because the function was declared somewhere, just never implemented).


© 2021 The Archive Base. All Rights Reserved
