What is the difference between an image and a video frame?

The question looks innocent enough. A video is a sequence of images (called frames) captured and eventually displayed at a given frequency. However, stopping at a specific frame of the sequence yields a single video frame, i.e. an image.

If we talk of a sequence of video frames, that is always true. It also holds if an image compression algorithm (an “intra-frame” coding system) is applied to each individual frame. Such a coding system may not give an exciting compression ratio, but it serves very well the needs of applications that require the ability to decode any frame on its own. This is the case of Motion JPEG (now largely forgotten), Motion JPEG 2000 (used for movie distribution and other applications) and some profiles of the MPEG video coding standards used for studio or contribution applications.

If the application domain requires more powerful compression algorithms, the design criteria are bound to be different. Interframe video compression that exploits the redundancy between frames must be used. In general, however, if video is compressed using an interframe coding mode, a single frame may very well not be an image, because its pixels may have been encoded using pixels of other frames. This can be seen in the figure below, dating back 30 years to MPEG-1 times.

The first picture (I-picture) on the left is compressed using only its own pixels. The fourth (P-picture) is predictively encoded starting from the I-picture. The second and third (B-pictures) are interpolated from the first and the fourth. This continues in the next frames, where the sequence can be P-B-B-B-P: the last P-picture is predicted from the first P-picture, and three interpolated pictures (B-pictures) are created from the first and last P-pictures.
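The dependency pattern just described also dictates the order in which pictures travel in the bitstream: a B-picture cannot be decoded until both of its reference pictures are available, so pictures are sent out of display order. A minimal sketch of this reordering (illustrative only, not an excerpt from any MPEG specification):

```python
def coding_order(types):
    """Given picture types in display order (e.g. I B B P ...), return the
    indices in decoding order: each I/P reference picture is decoded before
    the B-pictures interpolated between it and the previous reference."""
    order = []
    pending_b = []  # B-pictures waiting for their future reference
    for i, t in enumerate(types):
        if t in ("I", "P"):
            order.append(i)          # decode the reference first...
            order.extend(pending_b)  # ...then the B-pictures it anchors
            pending_b = []
        else:  # "B"
            pending_b.append(i)
    order.extend(pending_b)          # trailing B-pictures, if any
    return order

# The I-B-B-P group of pictures described above:
print(coding_order(list("IBBPBBP")))  # [0, 3, 1, 2, 6, 4, 5]
```

Note how the I-picture (frame 0) and the first P-picture (frame 3) are decoded before the two B-pictures displayed between them.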

All MPEG interframe coding schemes – MPEG-1, MPEG-2, MPEG-4 Visual and AVC, MPEG-H (HEVC), and MPEG-I (VVC) – have intraframe-encoded pictures. This is needed because in broadcasting applications the time it takes for a decoder to “tune in” must be as short as possible. Having an intra-coded picture, say, every half a second or every second is a way to achieve that. Intra-coded pictures are also helpful in interactive applications where the user may wish to jump anywhere in a video.
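The relation between intra-picture spacing and tune-in time is simple arithmetic: in the worst case, a decoder joining the stream must wait a full intra period for the next intra-coded picture. A back-of-envelope sketch (the function name is illustrative):

```python
def worst_case_tune_in_s(fps, intra_period_frames):
    """Worst-case tune-in delay in seconds: a decoder joining the stream
    just after an intra-coded picture must wait one full intra period
    before it can start decoding."""
    return intra_period_frames / fps

# An I-picture every 15 frames at 30 fps bounds tune-in to half a second:
print(worst_case_tune_in_s(30, 15))  # 0.5
```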

Therefore, some specific video frames in an interframe coding scheme can be images.

Why don’t we make the algorithms for image coding and intra-coded pictures of an interframe coding scheme the same?

We could, but this has never been done, for several reasons:

  1. The intra-coding mode is a subset of a general interframe video coding scheme. Such schemes are rather complex; many coding tools have been designed over the years, and when the intraframe coding mode is designed, some tools are used because “they are already there”.
  2. Most applications employing an interframe coding scheme have strict real time decoding requirements. Hence complexity of decoding tools plays a significantly more critical role in an interframe coding scheme than in a still picture coding scheme.
  3. A large number of coding tools in an interframe video coding scheme are focused on motion-related processing.
  4. Because capturing video produces far more data than capturing images, the impact of a coding-efficiency improvement is different.
  5. Real time delivery requirements of coded video have led MPEG to develop significantly different System Layer technologies (e.g. DASH) and make different compromises at the system layer.
  6. Comparisons between the performance of the still-picture coding mode of the various interframe coding standards and available image coding standards have not been performed in an environment based on a test design agreed among experts from all areas.
  7. There is no proven need or significant benefit of forcing the still picture coding mode of an MPEG scheme to be the same as any image compression standard developed by JPEG or vice-versa.

There is no reason to believe that this conclusion will not be confirmed in future video coding systems. So why are there several image compression schemes that have no relationship with video coding systems? The answer is obvious: the industry that needs compressed images is different from the industry that needs compressed video. The requirements of the two industries are different and, in spite of the commonality of some compression tools, the specifications of the image compression schemes and of the video compression schemes turn out to be different and incompatible.

One could say that the needs of traditional 2D image and video are well covered by existing standards. But what about new technologies that enable immersive 2D visual experiences?

One could take a top-down philosophical approach. This is intellectually rewarding but technology is not necessarily progressing following a rational approach. The alternative is to take a bottom-up experiential approach. MPEG has constantly taken the latter approach and, in this particular case, it acts in two directions:

  1. Metadata for Immersive Video (MIV). This represents a dynamic immersive visual experience with 3 streams of data: Texture, Depth and Metadata. Texture information is obtained by projecting the scene on a series of suitably selected planes. Texture and Depth are currently encoded with HEVC.
  2. Point Clouds with a large number of points can efficiently represent immersive visual content. Point clouds are projected on a fixed number of planes and projections can be encoded using any video codec.

Both #1 and #2 coding schemes include the equivalent of video intra-coded pictures. As for video, these are designed using the tools that exist in the equivalent of video inter-coded pictures.
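The projection idea in #2 can be sketched in a few lines. The snippet below is a toy illustration, not the actual MPEG point-cloud codec: it orthographically projects points onto a single plane, keeping for each cell the point nearest to the plane. The resulting depth map is exactly the kind of 2D array an ordinary video codec can compress:

```python
def project_to_plane(points, resolution=1.0):
    """Orthographically project 3D points (x, y, z) onto the XY plane.
    Each (x, y) cell keeps the smallest z, i.e. the point closest to the
    plane, producing a depth map suitable for a 2D video codec."""
    depth = {}
    for x, y, z in points:
        cell = (round(x / resolution), round(y / resolution))
        if cell not in depth or z < depth[cell]:
            depth[cell] = z
    return depth

# Two points fall in the same cell; only the nearer one survives:
print(project_to_plane([(0, 0, 2.0), (0, 0, 1.0), (1, 0, 3.0)]))
```

A real codec such as V-PCC projects onto several planes (e.g. the faces of a bounding cube), packs the resulting patches into atlases, and codes texture alongside depth, but the principle is the same.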

MPEG and JPEG are grown up


A group of MPEG and JPEG members have developed a proposal that seeks to leverage the impact MPEG and JPEG standards have had on thousands of companies and billions of people all over the world.

A few numbers related to 2018 tell a long story. At the device level, the installed base of MPEG-enabled devices was worth 2.8 trillion USD, and the value of devices in that year was in excess of 1 trillion USD. At the service level, the revenues of the PayTV industry were ~230 billion USD and the total turnover of global digital terrestrial television was ~200 billion USD.

Why we need to do something

So far MPEG and JPEG have been hosted by Subcommittee 29 (SC 29). The group thinks that it is time to revitalise the 27-year-old SC 29 structure. To achieve this goal, consider the following:

  1. MPEG has been and continues to be able to conceive strategic visions for new media user experiences, design work plans in response to industry needs, develop standards in close collaboration with client industries, demonstrate their performance and promote their use.
  2. For many years MPEG and JPEG have provided standards to operate and innovate the broadcast, broadband and mobile distribution industries, and the imaging industry, respectively;
  3. MPEG and JPEG have become the reference committee for their industries;
  4. MPEG reference industries’ needs for more standards continue to grow causing a sustained increase in MPEG members attending (currently 600);
  5. JPEG and MPEG have a track record of widely deployed standards developed for and in collaboration with other committees that require a more appropriate level of liaison;
  6. MPEG and JPEG operate as virtual SCs, each with a structure of interacting subgroups covering the required areas of expertise, including a strategic planning function;
  7. MPEG and JPEG have independent and universally recognised strong brands that must be preserved unfettered and enhanced;
  8. MPEG and JPEG are running standardisation projects whose operation must be guaranteed;

A Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis has been carried out on MPEG. The results point to the need for MPEG:

  1. To achieve an SC status compatible with its wide scope of work and large membership (1500 registered members and 600 attending physical meetings)
  2. To retain its scope and structure slightly amended to improve the match of standards with market needs and leverage internal talents
  3. To keep and enhance the MPEG brand.

What should be done

This is the proposal:

  1. MPEG becomes a JTC 1 SC (SC 4x) with the title “MPEG compression and delivery of Moving Pictures, Audio and Other Data”;
  2. JPEG becomes SC 29 with the title “JPEG Coding of digital representations of images”;
  3. MPEG/JPEG subgroups become working groups (WG) or advisory groups (AG) of SC 4x/SC 29. MPEG adds a Market needs AG;
  4. Both SC 4x and SC 29 retain existing collaborations with ITU-T and their collaborative stance with other committees/bodies, e.g. by setting up joint working groups (JWG);
  5. SC 4x may create, in addition to genomics, WGs/JWGs for compression of other types of data with relevant committees, building on MPEG’s common tool set;
  6. If selected as secretariat (a proposal for a new SC 4x requires that a National Body be ready to take the secretariat), the Italian National Body (ITNB) is willing to take the following steps to expedite a smooth transition:
    1. Nominate the MPEG convenor as SC 4x chair;
    2. Nominate an “SC 4x chair elect” from a country other than Italy, using criteria of 1) continuity of MPEG’s vision and strategy, 2) full understanding of the scope of SC 4x and 3) record of performance in the currently held position;
    3. Call for nominations of convenors of SC 4x working groups (WG). We nominate current subgroup chairs as convenors of the respective WGs.

The benefits of the proposal

The proposal brings a significant number of benefits:

  1. It has a positive impact on the heavy load of MPEG and JPEG work plans:
    1. It supports and enhances MPEG work plan, as MPEG is moved to SC 4x, retaining its proven structure, modus operandi and relationships with client industries in scope;
    2. It supports and enhances JPEG work plan, as SC 29 elevates JPEG SGs to WGs, retaining its proven modus operandi and relationships with client industries in scope;
  2. It preserves and builds upon the established MPEG and JPEG brands;
  3. It retains and improves all features of MPEG success, in particular its structure and modus operandi:
    1. SC 4x holds its meetings collocated with the meetings of its WGs and AGs requesting to meet;
    2. SC 4x facilitates the formation of break-out groups during meetings and of ad hoc groups in between meetings;
    3. SC 4x exploits inter-group synergies by facilitating joint meetings between different WGs and AGs during physical meetings;
    4. SC 4x promotes the use of all ICT tools that can improve its effectiveness, e.g. teleconferencing and MPEG-specific IT tools to support standards development.
  4. It enhances MPEG’s and JPEG’s collaboration stance with other committees via Joint Working Groups;
  5. It improves MPEG’s supplier-client relationship with its client industries with its new status;
  6. It adds formal governance to the well-honed MPEG and JPEG structures;
  7. It balances continuity and renewal of MPEG leadership at all levels;
  8. It formalises MPEG’s and JPEG’s high-profile standard reference roles for the video and image sectors, respectively.

The title and scope of SC 4x

Upon approval by JTC 1 and ratification by the TMB, SC 4x will assume the following:

  1. Title: MPEG compression and delivery of moving pictures, audio and other data;
  2. Scope: Standardisation in the area of efficient delivery of moving pictures and audio, their descriptions and other data
    • Serve as the focus and proponent for JTC 1’s standardisation program for broadcast, broadband and mobile distribution based on analysis, compression, transport and consumption of digital moving pictures and audio, including conventional and immersive, generated or captured by any technology;
    • Serve as the focus and co-proponent for JTC 1’s standardisation program on efficient storage, processing and delivery of genomic and other data, in agreement and collaboration with the relevant committees.

The SC 4x structure

  1. WG 11 subgroups become:
    1. SC 4x Advisory Groups (AG) – do not produce standards;
    2. SC 4x Working Groups (WG) – produce standards;
  2. Minor adjustments to the WG 11 subgroup structure are made to strengthen productivity:
    1. A new Market needs AG to enhance alignment of standards with market needs (to be installed at an appropriate time after establishment of SC 4x);
    2. Genome Coding moves from a Requirements activity to WG level;
  3. SC 4x retains WG 11’s collaborative stance with other committees/bodies, e.g. Collaborative Teams with ITU-T on Video Coding and Joint Working Groups with ISO/IEC committees to carry out commonly agreed projects.

Joint Working Groups (JWG) may be established if the need for common standards with other ISO/IEC committees is identified.

SC 4x will constantly monitor the state of standards development and adapt its structure accor­dingly, including by establishing new WGs, e.g. on standards for other data types.

SC 4x meetings

  1. For the time being, to effectively pursue its standardisation goals, SC 4x will continue the practice of quarterly meetings collocated with its AGs and WGs (same time/place), organised as an “SC 4x week” virtually identical to MPEG’s. Extended plenaries are joint meetings of all WGs/AGs. SC 4x plenaries are held on the Sunday before the week and for an hour after the extended plenary on Friday. The last plenary deals with matters such as liaisons, meeting schedules, etc. that used to be handled by WG 11 plenaries.
| Day | Time | Meeting | Chaired by |
| --- | --- | --- | --- |
| Sunday | 14-16 | SC 4x plenary | Chair |
| Monday | 09-13 | Extended SC 4x plenary to review AhG reports and plan for the week | Chair elect |
| Wednesday | 09-11 | Extended SC 4x plenary to review work done so far by AGs/WGs and plan for the rest of the week | Chair elect with Tech. Coord. AG Convenor |
| Friday | 14-17 | Extended SC 4x plenary to review and approve recommendations produced by AGs/WGs | Chair |
| Friday | 17-18 | Plenary to act on matters requiring SC 4x intervention | Chair |
  2. WGs and AGs could have longer meeting durations (i.e. start before the first SC 4x meeting);
  3. Carry out a thorough review of all details of meeting sessions, agendas, document registration, etc. with the involvement of all affected experts;
  4. Institut Mines Télécom’s unique services, offered for the last 15 years, would be warmly welcomed to preserve and continually improve WG 11’s operating efficiency with the involvement of all WG/AG members.

Title and scope of SC 29

(the following is a first attempt at defining the SC 29 title and scope after creation of SC 4x)

Upon approval by JTC 1, SC 29 will change its title and scope as follows:

  1. Title: JPEG coding of digital representations of images
  2. Scope: Development of international standards for
  • Efficient digital representations, processing and interchange of conventional and immersive images
  • Efficient digital representations of image-related sensory and digital data, such as medical and satellite imagery
  • Support to digital image coding applications
  • Maintenance of ISO/IEC 13522

The structure of SC 29

  1. WG 1 subgroups become:
    1. SC 29 Advisory Groups (AG) – do not produce standards;
    2. SC 29 Working Groups (WG) – produce standards;
  2. SC 29 may set up Joint Working Groups, e.g. with SC 4x and TC 42, to carry out commonly agreed projects.

(the following is a first attempt at defining the SC 29 structure after creation of SC 4x, using the current SG structure of WG 1)

  1. SC 29 meetings: similar organisation as currently done by JPEG.

Why don’t MPEG and JPEG work together?

This is a reasonable question, and it has a simple answer. They can and should; however, the following should be taken into consideration.

In an MPEG moving picture codec, there is always a still picture coding mode, a mode of the general moving picture coding scheme, whose tools are a subset of the tools of the complete moving picture coding scheme.

No need or significant benefit has ever been found that justifies the adoption of a JPEG image coding scheme as the still-picture coding mode of an MPEG moving picture coding scheme. Ditto for other schemes.

There is no reason to believe that the same should not apply to such media types as point cloud and lightfield. The still picture coding mode of a dynamic (time dependent) point cloud or lightfield coding scheme uses coding tools from the general coding scheme, not those independently developed for images.

Image compression schemes have their own market, separate from the market of moving picture compression schemes. Often the market for images anticipates the market for moving pictures. That is why independent JPEG standards can be useful.
