Standards and business models


Some may think that the title is an oxymoron. Indeed, standards, certainly international ones, are published by not-for-profit organisations. How could they have a business model?

The answer is that around a standard there are quite a few entities, some of which are far from being not-for-profit.

Therefore, this article intends to analyse how business models can influence standards.

The actors of standardisation

Let’s first have a look at the actors of standardisation.

  1. The first actor is the organisation issuing standards. It may be an international organisation such as ISO, IEC or ETSI, or a trade association or an industry forum, but the organisation itself has not been designed to make money. A typical arrangement is a membership fee that allows an individual or a company employee to participate. Another is to make users of the standard pay to obtain the specification.
  2. The second actor is the staff of the standards developing organisation. Depending on the type of organisation, their role may be marginal or highly influential.
  3. The third actor is the company that is a member of the organisation issuing standards.
  4. The fourth actor is the expert, typically the personnel sent by the company to contribute to the development of the standards.

From the interaction of these actors the standard is created. The standard then creates an ecosystem, and companies become members of that ecosystem.

Why do companies participate in standard development?

Here is an initial list of motivations prompting companies to send their personnel to a standards committee.

  1. A company is interested in shaping the landscape of how a new technology will be used by concerned companies or industries. This is the case of Artificial Intelligence (AI), a technology that has recently matured and whose use has different, sometimes unexpected, implications. JTC 1/SC 42 has recently been formed to define AI architectures, frameworks, models etc. This kind of participation is not exclusive to companies. Universities find it useful to join this “exploratory” work because it may help them identify new research topics.
  2. A company is interested in developing a new product or launching a new service that requires a new standard technology.
  3. A company may be obliged by national regulations to participate in the development of a standard.
  4. A company or, more and more often, a university owns technology it believes is useful or even required to draft a standard that a committee plans to develop. Again, a relevant case for this is MPEG, where the number of Non-Practicing Entities (NPEs) is on the rise.
  5. A university or, not infrequently, a company wants to keep abreast of what is going on in a technology field or become aware as early as possible of the emergence of new standards that will affect its domain. MPEG is a typical case because it is a group open to new ideas and is attended by all relevant players.

Not all standards are born equal

The word “equal” in the title does not imply that there is a hierarchy of standards where some are more important than others. It simply means that the same name “standard” can be attached to quite different things.

The compact disc (CD) can be taken as the emblem of a traditional standard. Jointly developed by Philips and Sony, the CD quickly defeated the competing product by RCA and became the universal digital music distribution medium. The technical specification of the CD was originally defined in the Red Book and later became the IEC 60908 standard.

MPEG introduced a new process that replaced the traditional sequence of product development, marketplace success and eventual ratification by a recognised standards organisation. This is how the process can be summarised:

  1. Identify the need for a new standard
  2. Develop requirements
  3. Issue call for proposals (CfP)
  4. Integrate technologies obtained from the CfP
  5. Draft the standard

In the early MPEG days, most participants were companies interested in developing new products or launching new services. They actively contributed to the standards because they needed them, but also because they had relevant technologies developed in their laboratories.

Later the range of contributors to standard development got larger. The fact that in the mid-1990s a patent from Columbia University, clearly an NPE, had been declared essential in MPEG-2 video made headlines and prompted many to follow suit. The trend so initiated continues to this day.

After MPEG-2, the next step was to revive the old model represented by the CD. MPEG-4 became just one “product”, but other companies developed other “products”, some of which were recognised as “standards” by a professional organisation. The creation of such standards implied the conversion of an internal company specification to the standard format of the professional organisation. The use of those “standards” was “free” in the sense that no fees were charged for their use. However, other strings, of less immediate comprehension to laymen, were typically attached.

MPEG (formally WG 11) is about “coding of moving pictures and audio”, while a parallel group called JPEG (formally WG 1) is about “coding of digital representations of images”. The two groups operate based on different “business models”. Today the ubiquitous JPEG standard for image compression is royalty free because the 20-year validity of any patents has long expired. However, even before the 20-year limit was reached, the JPEG standard could be used freely with no charge. The same happened to the less famous but still quite important JPEG 2000 standard used for movie distribution and to the less used JPEG XR standard.

More recently a consortium was formed to develop a royalty-free video compression specification. In rough, imperfect but sufficiently descriptive words, members of that consortium can freely use the specification in their products and services.

The business model of a standard is a serious matter

From the above we see that working on a standard has the basic motivation of creating a technology to enable a certain function in an ecosystem. The ecosystem can be anything from the ensemble of users of a product/service of a company, to a country, an industry or the world at large. Beyond this common motivation, however, a company contributing to the development of a standard can have widely different motivations that I simplify as follows:

  1. The common technology is encumbered because, by rewarding inventions, the ecosystem has embedded the means to constantly innovate its enabling technologies for new products and services. This is the basis of the MPEG business model that has ensured 30 years of development to the digital media industry. It has advantages and disadvantages:
    1. The advantage of this model is that, once a licence for the standard has been defined, no one can hold the community hostage.
    2. The disadvantage is that getting agreement to the licence may prove difficult, thus disabling or hampering the business of the entire community.
  2. The common technology is “free” because the members of the ecosystem have assessed that they do not have an interest in the technology per se but only in the technology as an enabler of other functions on which their business is built. This is the case of Linux/Android and most web technologies. Here, too, there are advantages and disadvantages:
    1. The advantage of this model is that anybody can access the technology by accepting the “free” licence.
    2. The disadvantage is that a member of the ecosystem can be excluded for whatever reason and have its business ruined.

Parallel worlds

It is clear now that “standard” is a name that can be assigned to things that have the promotion of the creation of an ecosystem in common but may be very different otherwise. The way the members of the ecosystem operate is completely different depending on whether the standard is encumbered or free.

Let’s see the concrete cases of MPEG and JPEG. In the late 1980s they started as two groups of roughly the same size (30 people). Thirty years later MPEG has become a 600-member group and JPEG a 60-member group. In spite of handling similar technologies, less than 1% of MPEG members attend JPEG meetings. Why?

The answer is that MPEG decided (more correctly, was forced by the very complex IP environment of video and audio coding) to adopt the encumbered standard model, while JPEG could decide to adopt the free standard model. In the last 30 years companies have heavily invested in MPEG standards because they have seen a return from that investment, and a host of new companies were created and are operating thanks to the reward coming from their inventions. JPEG developed less because fewer companies saw a return from the free standard business model.

Few members are common to MPEG and JPEG because the MPEG and JPEG business models are antithetical.


I would like to apply the elements above to some current discussions where some people argue that, since JPEG and some MPEG experts have similar expertise, we should put them together to make “synergy”.

The simple answer to this argument is that it would be foolish to do that. JPEG people produce free standards because those who have a business in mind want to make money from something else that is enabled by the free standard. If JPEG people are mixed with MPEG people who want encumbered standards, the business of the JPEG people is gone.

People had better play the game they know, not improvise competence in things they don’t know. It is more or less the same story as in Einige Gespenster gehen um in der Welt – die Gespenster der Zauberlehrlinge.

The right solution is MPEGfuture.

Posts in this thread

On the convergence of Video and 3D Graphics



For a few years now, MPEG has explored the issue of how to efficiently represent (i.e. compress) data from a range of technologies offering users dynamic immersive visual experiences. Here the word “dynamic” captures the fact that the user can have an experience where objects move in the scene as opposed to being static.

Being static and dynamic may not appear to be a conceptually important difference. In practice, however, products that handle static scenes may be orders of magnitude less complex than those handling dynamic scenes. This is true both at the capture-encoding side and at the decoding-display side. This consideration implies that industry may need standards for static objects much earlier than for dynamic objects.

Industry has guided MPEG to develop two standards that are based on two approaches that are conceptually similar but are targeted to different applications and involve different technologies:

  1. Point clouds generated by multiple cameras and depth sensors in a variety of setups. These may contain up to billions of points with colours, material properties and other attributes to offer reproduced scenes characterised by high realism, free interaction and navigation.
  2. Multi-view videos generated by multiple cameras that capture a 3D scene from a pre-set number of viewpoints. This arrangement can also provide limited navigation capabilities.
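The two input formats above differ fundamentally: a point cloud is an unordered set of attributed 3D points, while a multi-view capture is a set of per-camera videos, i.e. ordered 2D pixel grids. A minimal sketch in Python (the field names are illustrative, not taken from any MPEG specification):

```python
from dataclasses import dataclass, field

# A point cloud is an unordered set of points, each carrying a 3D
# position plus appearance attributes.
@dataclass
class Point:
    x: float
    y: float
    z: float
    colour: tuple           # (r, g, b)
    reflectance: float = 0.0

# A multi-view capture is a set of per-camera videos: each view is a
# sequence of texture frames (2D pixel grids), optionally with depth.
@dataclass
class View:
    camera_id: int
    texture_frames: list = field(default_factory=list)  # one 2D grid per frame
    depth_frames: list = field(default_factory=list)    # optional per-pixel depth

cloud = [Point(0.1, 0.2, 0.3, (255, 0, 0))]   # toy cloud with one red point
views = [View(camera_id=i) for i in range(10)]  # a 10-camera rig
```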

The compression algorithms employed for the two sources of information have similarities and differences as well. The purpose of this article is to briefly describe the algorithms involved in a general point cloud and in the particular case that MPEG calls 3DoF+ (central case in Figure 1), and to investigate to what extent the algorithms are similar or different and whether they can share technologies today and in the future.

Figure 1 – 3DoF (left), 3DoF+ (centre) and 6DoF (right)

Computer-generated scenes and video are worlds apart

A video is composed of a sequence of matrices of coloured pixels. A computer-generated 3D scene and its objects, however, are not represented like a video but by geometry and appearance attributes (colour, reflectance, material…). In other words, a computer-generated scene is based on a model.

Thirty-one years ago, MPEG started working on video coding and 7 years later did the same for computer-generated objects. The (ambitious) title of MPEG-4 “Coding of audio-visual objects” signalled MPEG’s intention to handle the two media types jointly.

The Video and 3D Graphics competence centres (read Developing standards while preparing for the future to know more about how work in MPEG is carried out by competence centres and units) largely worked independently until the need to compress real-world 3D objects in 3D scenes became important to industry.

The Video and 3D Graphics competence centres attacked the problem using their own specific backgrounds: 3D Graphics used point clouds because they are a 3D graphics representation (they have geometry), while Video used the videos obtained from a number of cameras (because they only have colours).

Video came up with a solution that is video-based (obviously, because there was no geometry to encode), and 3D Graphics came up with two solutions: one that encodes the 3D geometry directly (G-PCC) and another that projects the point cloud objects on fixed planes (V-PCC). In V-PCC, it is possible to apply traditional video coding because geometry is implicit.

Point cloud compression

MPEG is currently working on two PCC standards: G-PCC, a purely geometry-based approach without much to share with conventional video coding, and V-PCC, which is heavily based on video coding. Why do we need two different algorithms? Because G-PCC does a better job in “new” domains (say, automotive), while V-PCC leverages video codecs already installed on handsets. The fact that V-PCC is due to become FDIS in January 2020 makes it extremely attractive to an industry where novelty in products is a matter of life or death.

V-PCC seeks to map a point of the 3D cloud to a pixel of a 2D grid (an image). To be efficient, this mapping should be as stationary as possible (only minor changes between two consecutive frames) and should not introduce visible geometry distortions. Then the video encoder can take advantage of the temporal and spatial correlations of the point cloud geometry and attributes by maximising temporal coherence and minimising distance/angle distortions.

The 3D to 2D mapping should guarantee that all the input points are captured by the geometry and attribute images so that they can be reconstructed without loss. Simply projecting the point cloud onto the faces of a cube or a sphere bounding the object does not guarantee lossless reconstruction, because auto-occlusions (points projected onto the same 2D pixel are not all captured) may generate significant distortions.
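The auto-occlusion problem can be illustrated with a toy orthogonal projection (an illustrative sketch, not part of the V-PCC specification): two points that fall into the same 2D cell compete for it, and only the nearer one survives.

```python
# Project points orthogonally onto the z = 0 plane of a bounding cube,
# keeping only the nearest z per (x, y) cell. Any farther point sharing
# the cell is auto-occluded, i.e. lost by the projection.
def project_to_xy(points):
    depth = {}  # (x, y) cell -> nearest z seen so far
    for (x, y, z) in points:
        cell = (int(x), int(y))
        if cell not in depth or z < depth[cell]:
            depth[cell] = z
    return depth

points = [(1.2, 2.7, 0.5), (1.4, 2.9, 3.0)]  # both map to cell (1, 2)
image = project_to_xy(points)
# image == {(1, 2): 0.5} : the point at z = 3.0 has been auto-occluded
```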

To avoid these negative effects, V-PCC decomposes the input point cloud into “patches”, which can be independently mapped to a 2D grid through a simple orthogonal projection. Mapped patches do not suffer from auto-occlusions and do not require re-sampling of the point cloud geometry. The decomposition aims to produce patches with smooth boundaries, while minimising the number of patches and the mapping distortions. This is an NP-hard optimisation problem that V-PCC solves by applying the heuristic segmentation approach of Figure 2.

Figure 2: from point cloud to patches

An example of how an encoder operates is provided by the following steps (note: the encoder process is not standardised):

  1. At every point the normal on the point cloud “surface” is estimated;
  2. An initial clustering of the point cloud is obtained by associating each point to one of the six planes forming the unit cube (each point is associated with the plane that has the closest normal). Projections on diagonal planes are also allowed;
  3. The initial clustering is iteratively refined by updating the cluster index associated with each point based on its normal and the cluster indexes of its nearest neighbours;
  4. Patches are extracted by applying a connected component extraction procedure;
  5. The 3D patches so obtained are projected and packed into the same 2D frame.
  6. The only attribute per point that is mandatory to encode is the colour (see right-hand side of Figure 3); other attributes, such as reflectance or material properties, can optionally be encoded.
  7. The distances (depths) of the points to the corresponding projection plane are used to generate a grey-scale image which is encoded using a traditional video codec. When the object is complex and several points project to the same 2D pixel, two depth layers are used, encoding a near plane and a far plane (the left-hand side of Figure 3 shows one single depth layer).

Figure 3: Patch projection
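Step 2 of the list above can be sketched in a few lines. Since the encoding process is not standardised, this is only illustrative: each point is assigned to the unit-cube face whose outward normal has the largest dot product with the point's estimated normal.

```python
# Outward normals of the six faces of the unit cube.
FACE_NORMALS = [
    (1, 0, 0), (-1, 0, 0),
    (0, 1, 0), (0, -1, 0),
    (0, 0, 1), (0, 0, -1),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def initial_cluster(point_normals):
    """For each estimated point normal, return the index of the cube
    face whose normal is closest (largest dot product)."""
    return [
        max(range(len(FACE_NORMALS)), key=lambda i: dot(n, FACE_NORMALS[i]))
        for n in point_normals
    ]

# A normal pointing almost along +x goes to face 0; one pointing almost
# along -z goes to face 5.
clusters = initial_cluster([(0.9, 0.1, 0.0), (0.1, 0.0, -0.95)])
# clusters == [0, 5]
```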

3DoF+ compression

3DoF+ is a simpler case of the general visual immersion case to be specified by part 12 Immersive Video of MPEG-I. In order to provide sufficient visual quality for 3DoF+, a large number of source views needs to be used, e.g. 10 to 25 views for a 30 cm radius viewing space. Each source view can be captured as omnidirectional or perspectively projected video with texture and depth.

If such a large number of source views were independently coded with legacy 2D video coding standards such as HEVC, an impractically high bitrate would be generated, and a costly, large number of decoders would be required to view the scene.
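A back-of-the-envelope calculation makes the point; the figures below are illustrative assumptions, not MPEG requirements:

```python
# Independently coding 25 HD source views multiplies both the pixel
# rate the decoders must sustain and the aggregate bitrate by 25.
views = 25
width, height, fps = 1920, 1080, 30
bitrate_per_view_mbps = 10          # a plausible HEVC rate for one HD view

pixel_rate = views * width * height * fps      # pixels per second
total_bitrate = views * bitrate_per_view_mbps  # Mbit/s

# pixel_rate == 1_555_200_000 (about 1.5 Gpixel/s)
# total_bitrate == 250 Mbit/s, with 25 decoder instances needed
```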

The Depth Image Based Rendering (DIBR) inter-view prediction tools of 3D-HEVC may help to reduce the bitrate, but the 3D-HEVC codec is not widely deployed. Additionally, the parallel camera setting assumption of 3D-HEVC may affect the coding efficiency of inter-view prediction with arbitrary camera settings.

MPEG-I Immersive Video targets the support of 3DoF+ applications, with a significantly reduced coding pixel rate and limited bitrate using a limited number of legacy 2D video codecs applied to suitably pre- and post-processed videos.

The encoder is described by Figure 4.

Figure 4: Process flow of 3DoF+ encoder

An example of how an encoder operates is described below (note that the encoding process is not standardised):

  1. Multiple views (possibly one) are selected from the source views;
  2. The selected source views are called basic views and the non-selected views additional views;
  3. All additional views are pruned by synthesising the basic views to the additional views to erase non-occluded areas;
  4. Pixels left in the pruned additional views are grouped into patches;
  5. Patches in a certain time interval may be aggregated to increase temporal stability of the shape and location of patches;
  6. Aggregated patches are packed into one or multiple atlases (Figure 5).

Figure 5: Atlas Construction process

  7. The selected basic view(s) and all atlases with patches are fed into a legacy encoder (an example of what an input looks like is provided by Figure 6)

Figure 6: An example of texture and depth atlas with patches
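The packing of step 6 is an encoder choice and is not standardised; a naive "shelf" packer gives the flavour of what an implementation might do:

```python
# Place patch bounding boxes into an atlas left to right, opening a new
# "shelf" (row) when the current one fills up. Real encoders use smarter
# packing, but the idea is the same.
def pack_patches(patch_sizes, atlas_width):
    """patch_sizes: list of (w, h). Returns (x, y) positions in the atlas."""
    positions, x, y, shelf_h = [], 0, 0, 0
    for w, h in patch_sizes:
        if x + w > atlas_width:          # row full: open a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        positions.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return positions

positions = pack_patches([(100, 50), (200, 80), (50, 60)], atlas_width=256)
# positions == [(0, 0), (0, 50), (200, 50)]
```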

The atlas parameter list of Figure 4 contains, for every patch in the atlas: its starting position in the atlas, its source view ID, its location in the source view and its size. The camera parameter list comprises the camera parameters of all indicated source views.
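The metadata just described can be pictured as two simple record types (the field names below are illustrative, not the normative syntax of the standard):

```python
from dataclasses import dataclass

@dataclass
class PatchParams:
    atlas_pos: tuple        # (x, y) starting position in the atlas
    source_view_id: int     # which source view the patch was cut from
    view_pos: tuple         # (x, y) location in the source view
    size: tuple             # (width, height) of the patch

@dataclass
class CameraParams:
    view_id: int
    position: tuple         # camera position in the scene
    orientation: tuple      # e.g. yaw, pitch, roll
    projection: str         # "omnidirectional" or "perspective"

atlas_parameter_list = [PatchParams((0, 0), 3, (128, 64), (32, 32))]
camera_parameter_list = [CameraParams(3, (0.0, 0.0, 0.0), (0, 0, 0), "perspective")]
```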

At the decoder (Figure 7) the following operations are performed:

  1. The atlas parameter and camera parameter lists are parsed from the metadata bitstream;
  2. The legacy decoder reconstructs the atlases from the video bitstream;
  3. An occupancy map with patch IDs is generated according to the atlas parameter list and the decoded depth atlas;
  4. When users watch the 3DoF+ content, the viewports corresponding to the position and orientation of their head are rendered using patches in the decoded texture and depth atlases, and corresponding patch and camera parameters.

Figure 7: Process flow of 3DoF+ decoder
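Step 3 of the decoding process can be sketched as painting each patch's rectangle into a 2D map with its patch ID (an illustrative sketch; the real process also uses the decoded depth atlas):

```python
# Rebuild an occupancy map by painting each patch's rectangle with its
# patch ID; 0 marks unoccupied atlas samples.
def build_occupancy_map(width, height, patches):
    """patches: list of (patch_id, x, y, w, h) in atlas coordinates."""
    occupancy = [[0] * width for _ in range(height)]
    for patch_id, x, y, w, h in patches:
        for row in range(y, y + h):
            for col in range(x, x + w):
                occupancy[row][col] = patch_id
    return occupancy

occ = build_occupancy_map(8, 4, [(1, 0, 0, 2, 2), (2, 4, 1, 3, 2)])
# occ[0][0] == 1, occ[1][5] == 2, occ[3][7] == 0 (unoccupied)
```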

Figure 8 shows how the quality of synthesised viewports decreases with a decreasing number of views. With 24 views the image looks perfect, with 8 views there are barely visible artefacts on the tube on the floor, but with only two views the artefacts become noticeable. The goal of 3DoF+ is to achieve the quality of the leftmost image when using the bitrate and pixel rate of the rightmost case.

Figure 8: Quality of synthesized video as a function of the number of views

Commonalities and differences of PCC and 3DoF+

V-PCC and 3DoF+ can use the same 2D video codec, e.g. HEVC. For 3DoF+, the input to the encoder and the output from the decoder are sequences of texture and depth atlases containing patches; these are somewhat similar to the V-PCC sequences of geometry/attribute video data, which also contain patches.

Both 3DoF+ and V-PCC have metadata describing the positions and parameters of the patches in the atlas or video. However, 3DoF+ must describe the view ID each patch belongs to and its camera parameters in order to support flexible camera settings, while V-PCC just needs to indicate which of the 6 fixed cube faces each patch is projected onto. V-PCC does not need camera parameter metadata.

3DoF+ uses a renderer to generate a synthesised viewport at any desired position and in any desired direction, while V-PCC re-projects the pixels of the decoded video into 3D space to regenerate the point cloud.

Further, the V-PCC goal is to reconstruct the 3D model, in order to obtain the 3D coordinates for each point. For 3DoF+, the goal is to obtain some additional views by interpolation but not necessarily any possible view. While both methods use patches/atlases and encode them as video + depth, the encoders and decoders are very different because the input formats (and, implicitly, the output) are completely different.

The last difference is how the two groups developed their solutions. It is already known that G-PCC has much more flexibility in representing the geometry than V-PCC. It is also expected that compression gains will be bigger for G-PCC than for V-PCC. However, the overriding advantage of V-PCC is that it can use existing and widely deployed video codecs. Industry would not accept dumping V-PCC to rely exclusively on G-PCC.

How can we achieve further convergence?

You may ask: I understand the differences between PCC and 3DoF+, but why was convergence not identified at the start? The answer depends on the nature of MPEG.

MPEG could have done that if it were a research centre. At its own will it could put researchers together on common projects and give them the appropriate time. Eventually, this hypothetical MPEG could have merged and united the two cultures (within its organisation, not the two communities at large), identified the common parts and, step by step, defined all the lower layers of the solution.

But MPEG is not a research centre, it is a standards organisation whose members are companies’ employees “leased” to MPEG to develop the standards their companies need. Therefore, the primary MPEG task is to develop the standards its “customers” need. As explained in Developing standards while preparing for the future, MPEG has a flexible organisation that allows it to accomplish its primary duty to develop the standards that industry needs while at the same time explore the next steps.

Now that we have identified that there are commonalities, does MPEG need to change its organisation? By all means no. Look at the MPEG organisation of Figure 9.

Figure 9 – The flat MPEG organisation

The PCC work is developed by a 3DG unit (soon to become two because of the widely different V-PCC and G-PCC) and the 3DoF+ standard is developed by a Video unit. These units are at the same level and can easily talk to one another, now even more than they did before, because they have concrete matters to discuss. This will continue for the next challenges of 6DoF, where the user can freely move in a virtual 3D space corresponding to a real 3D space.

The traditional Video and 3D Graphics tools can also continue to be in the MPEG tool repository and continue to be supplemented by new technologies that make them more and more friendly to each other.

This is the power of the flat and flexible MPEG organisation as opposed to the hierarchical and rigid organisation advocated by some. A rigid hierarchical organisation where standards are developed in a top-down fashion is unable to cope with the conflicting requirements that MPEG continuously faces.


MPEG is synonymous with technology convergence, and the case illustrated in this article is just the most recent. It indicates that more such cases will appear in the future as more sophisticated point cloud compression is introduced and technologies supporting the full navigation of 6DoF become available.

This can happen without the need to change the MPEG organisational structure, because the MPEG organisation has been designed to allow units to interact in the same easy way whether they are in the same competence centre or in different ones.

Many thanks to Lu Yu (Zhejiang University) and Marius Preda (Institut Polytechnique de Paris) who are the real authors of this article.




Developing standards while preparing for the future


In Einige Gespenster gehen um in der Welt – die Gespenster der Zauberlehrlinge I described the case of an apprentice who operates in an environment that practises some mysterious arts (that in the past could have been called sorcery). Soon the apprentice wakes up to the importance of what he is learning and doing and thinks he has learned enough to go his own way.

The reality is that the apprentice has not yet learnt the art, and not because he has not studied and practised it diligently and for enough time. He has not learnt it because no one can say “I know the art”. One can say that one has practised the art for a certain time, has gained so much experience or has had this or that success (better not to talk of failures or, better, to talk of successes after failures). The best one can say is that successes were the result of time-tested teamwork.

Nothing fits this description better than the organisation of human work, and this article will deal with how MPEG has developed a non-apprentice-based organisation.

Work organisation across the millennia

Thousands of years ago (and even rather recently) there was slave labour. By “owning” the humans, people assumed they could impose any task on them (until a Spartacus came along, I mean).

More advanced than slavery is hired labour, because humans are not owned: you pay them and they work for you. You can do what you want within a contract, but only up to a point (much lower than with slave labour). If you cross a threshold, you have to deal with unions, or simply with workers leaving for another job and employer.

Fortunately, there have been many innovations over and beyond this archaic form of relationship. One case is when you have a work force hired to do intellectual work, such as research on new technologies. Managing the work force is more complicated, but there is an unbeatable tool: the promise to share the revenues of any invention that the intellectual worker makes.

Here, too, there are many variations to bind researcher and employer, to the advantage of both.

The MPEG context is quite different

Apart from not being a company, MPEG has a radically different organisation from any of those described above. In MPEG there are “workers”, but MPEG has no control over them because MPEG is not paying any salary. Someone else does. Still, MPEG has to improve the relationship between itself, the “virtual employer”, and its “workers”, if it is not to produce dull standards.

Here are some of the means it uses: projecting the development of a standard as a shared intellectual adventure, pursuing the goal with a combination of collective and personal advantages, promoting a sense of belonging to a great team, flashing the possibility of acquiring personal fame because “we are making history here”, and more.

For the adventure to be possible, however, MPEG has to entice two types of “worker”. One is the researcher who knows things and the other is the employer who pays the salary. Both have to buy into the adventure.

This is not the end of the story, because MPEG must also convince users of the standard that it will make sense for their industrial plans. By providing requirements, the users of the standard establish a client-supplier relationship with MPEG.

Thirty years ago, matters were much simpler because the guy who paid the salary was regularly the same guy who used the standard. Today things are more complicated because the guy who pays the salary of the “worker” may very well not have any relationship with the guy who uses the standard, because his role may stop at providing the technologies that are used in the standards.

Organising the work in MPEG

So far so good. This is the evolution of an established business that MPEG brought about. This evolution, however, was accompanied by substantial changes in the object of work. In MPEG-1 and MPEG-2, audio, video, 3D graphics and systems were rather well delimited areas (not really, but compared with today, certainly so). Starting with MPEG-4, however, the different pieces of MPEG business got increasingly entangled.

If MPEG had been a company, it could have launched a series of restructurings, a favourite activity of many companies, which think that a restructuring shows how flexible their organisation is. They can think and say that because they are not aware of the human costs of such reorganisations.

I said that MPEG is not a company and MPEG “workers” are not really workers but researchers rented out by their employers, or self-styled entrepreneurs, or students working on a great new idea etc. In any case, MPEG “workers” are intellectually highly prominent individuals.

When it started its work on video coding for interactive applications on digital media, MPEG did not have particularly innovative organisational ideas. Little by little it extended the scope of its work to other areas that were required to provide complete audio-visual solutions.

MPEG ended up building a peculiar competence centre-based organisation by reacting to the changing conditions at each step of its evolution. The organisation has gradually morphed (see here for the full history). Today the competence centres are: Requirements, Systems, Video, Video collaborations with ITU-T, Audio, 3D Graphics and Tests.

The innovative parts of the MPEG organisation are the units formed inside each of these competence centres. They can have one of two main goals: to address specific items that are required to develop one or more standards, or to investigate issues that may not be directly related to a standard. This mix of two developments, technologies for the standards and know-how for future standards, is something MPEG is proud of.

Units may be temporary or long-lived. All units are formed around a natural leader, often as a result of an ad hoc group whose creation may have been triggered by a proposal.

A graphical description of the MPEG organisation is provided by Figure 1

Figure 1 – The flat MPEG organisation

The units working on standards under development produce outputs which are integrated by the relevant competence centre plenaries and implemented by the editors with the assistance of the experts who developed the component technologies. The activity of the units of the joint groups with ITU-T is limited to the development of specific standards. Therefore, those groups do not have units working on explorations unless directly aimed at providing answers to open issues in their standards under development.

This is MPEG’s modus operandi, which some (outside MPEG) think is the MPEG process described in How does MPEG actually work?. Nothing is farther from the truth. MPEG’s modus operandi is the informal but effective organisation that permeates the MPEG work and allows interactions to happen when they are needed, by those who need them, at the time they need them. It is a system that allows MPEG to get the most out of individual initiative, combining the need to satisfy industry needs now with the need to create the right conditions for future standards tomorrow.

Proposing, as mentioned in Einige Gespenster gehen um in der Welt – die Gespenster der Zauberlehrlinge, to create a group that merges the Video and 3D Graphics competence centres based on a hierarchical structure with substructures is the prehistory of work organisation – fortunately not stretching back to slave labour. This is something that today would not even be considered in the organisation of a normal company, to say nothing of the organisation of a peculiar entity such as MPEG.

Units are highly mobile. They interact with other groups either because an issue is identified by the competence centre chairs or plenaries, or on the initiative of the unit itself. Interaction can also take place between groups or between units in different groups.

The number of units at any given time is rather large, exceeding 20 or even 30. Therefore the IT support system described in Digging deeper in the MPEG work, and reproduced below, helps MPEG members keep up with the dynamics of all these interactions by providing information on what is being discussed, where and when.

Figure 2 – How MPEG people know what happens where


A good example of how MPEG’s modus operandi can pursue its primary goal of producing standards, while at the same time keeping abreast of what comes next, is the common layer shared by 3DoF+ and 3DG. This is something that MPEG thought conceptually existed and could have been designed in a top-down fashion. We did not do it because MPEG is not a not-for-profit organisation that pursues the goal of furthering the understanding of science. MPEG is a not-for-profit organisation developing standards while at the same time preparing the know-how for the next challenges. Not by imposing a vision of the future, but by doing the work today and investigating the next steps, we get ready to respond to future requests from the industry.

What the next steps of 3DoF+ and 3DG convergence will be is another story for another article.
