
Codecs and Wrappers for Digital Video

In the last Greatbear article we quoted sage advice from the International Association of Sound and Audiovisual Archives (IASA): ‘Optimal preservation measures are always a compromise between many, often conflicting parameters.’ [1]

While this statement is true in general for many different multi-format collections, the issue of compromise and conflicting parameters becomes especially apparent with the preservation of digitized and born-digital video. The reasons for this are complex, and we shall outline why below.

Lack of standards (or are there too many formats?)

Carl Fleischhauer writes, reflecting on the Federal Agencies Digitization Guidelines Initiative (FADGI) research exploring Digital File Formats for Videotape Reformatting (2014), ‘practices and technology for video reformatting are still emergent, and there are many schools of thought. Beyond the variation in practice, an archive’s choice may also depend on the types of video they wish to reformat.’ [2]

We have written in depth on this blog about the labour intensity of digital information management in relation to reformatting and migration processes (which are of course Greatbear’s bread and butter). We have also discussed how the lack of settled standards tends to make preservation decisions radically provisional.

In contrast, we have written about default standards that have emerged over time through common use and wide adoption, highlighting how parsimonious, non-interventionist approaches may be more practical in the long term.

The problem for those charged with preserving video (as opposed to digital audio or images) is that ‘video, however, is not only relatively more complex but also offers more opportunities for mixing and matching. The various uncompressed-video bitstream encodings, for example, may be wrapped in AVI, QuickTime, Matroska, and MXF.’ [3]

What then, is this ‘mixing and matching’ all about?

It refers to all the possible combinations of bitstream encodings (‘codecs’) and ‘wrappers’ that are available as target formats for digital video files. Want to mix your JPEG 2000 lossless with your MXF, or FFV1 with your AVI? Well, go ahead!
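
To make that concrete, here is a minimal sketch using FFmpeg (one common open source tool for this job, though not the only one) driven from Python, writing the same lossless FFV1 stream into two different wrappers. The filenames are hypothetical:

```python
import subprocess

# Encode the same hypothetical source twice with lossless FFV1 video and
# uncompressed PCM audio, wrapped first in Matroska, then in AVI.
# Assumes FFmpeg is installed and on the PATH; filenames are placeholders.
for container in ("mkv", "avi"):
    subprocess.run(
        ["ffmpeg", "-i", "source.mov",
         "-c:v", "ffv1",        # FFV1: lossless video encoding
         "-c:a", "pcm_s16le",   # 16-bit uncompressed PCM audio
         f"master.{container}"],
        check=True,
    )
```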

What, then, is the difference between a codec and a wrapper?

As the FADGI report states: ‘Wrappers are distinct from encodings and typically play a different role in a preservation context.’ [4]

The wrapper or ‘file envelope’ stores key information about the technical life and structural properties of the digital object. Such information is essential for long-term preservation because it helps to identify, contextualize and outline the significant properties of the digital object.

Information stored in wrappers can include (see the inspection sketch after the list):

  • Content: number of video streams, length of frames
  • Context: title of object, who created it, description of contents, reformatting history
  • Video rendering: width, height, bit depth, colour model within a given colour space, pixel aspect ratio, frame rate, compression type, compression ratio and codec
  • Audio rendering: bit depth, sample rate, bit rate, compression codec, type of uncompressed sampling
  • Structure: relationship between audio, video and metadata content

(adapted from the Jisc infokit on High Level Digitisation for Audiovisual Resources)
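
If you want to see these wrapper-level properties for a file of your own, FFprobe (part of the FFmpeg project) will report them as structured data. A minimal sketch in Python, assuming FFprobe is installed and using a hypothetical filename:

```python
import json
import subprocess

# Dump the wrapper ("format") and per-stream properties as JSON.
# Assumes FFprobe is installed; "master.mkv" is a placeholder filename.
result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "master.mkv"],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

print(info["format"]["format_name"])  # the wrapper, e.g. "matroska,webm"
for stream in info["streams"]:
    # codec plus rendering details: width/height for video, sample rate for audio
    print(stream["codec_type"], stream["codec_name"],
          stream.get("width"), stream.get("sample_rate"))
```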

Codecs, on the other hand, define the parameters of the captured video signal. They are a ‘set of rules which defines how the data is encoded and packaged,’ [5] encompassing width, height and bit depth, colour model within a given colour space, pixel aspect ratio and frame rate, as well as the bit depth, sample rate and bit rate of the audio.

Although the wrapper is distinct from the encoded file, the encoded file cannot be read without its wrapper. The digital video file, then, comprises a wrapper and at least one codec, often two, to account for audio and images, as the illustration below from AV Preserve makes clear.

[Diagram from AV Preserve’s A Primer on Codecs for Moving Image and Sound Archives]

Pick-and-mix complexity

Why then, are there so many possible combinations of wrappers and codecs for video files, and why has a settled standard not been agreed upon?

Fleischhauer at The Signal does an excellent job outlining the different preferences within practitioner communities, in particular relating to the adoption of ‘open’ and commercial/proprietary formats.

Compellingly, he articulates a geopolitical divergence between these two camps, with those based in the US allegedly opting for commercial formats, and those in Europe opting for ‘open.’ This observation is all the more surprising because of the advice in FADGI’s Creating and Archiving Born Digital Video: ‘choose formats that are open and non-proprietary. Non-proprietary formats are less likely to change dramatically without user input, be pulled from the marketplace or have patent or licensing restrictions.’ [6]

One answer to the question of why there are so many different formats lies in different approaches to information management in an information-driven economy. The combination of competition and innovation results in a proliferation of open source formats and their proprietary doubles (or triplets, quadruples, etc.) that are constantly evolving in response to market ‘demand’.

Impact of the Broadcast Industry

An important driver of change to highlight here is the role of the broadcast industry.

Format selections in this sector have a profound impact on the creation of digital video files that will later become digital archive objects.

In the world of video, Kummer et al explain in an article in the IASA journal, ‘a codec’s suitability for use in production often dictates the chosen archive format, especially for public broadcasting companies who, by their very nature, focus on the level of productivity of the archive.’ [7] Broadcast production companies create content that needs to be able to be retrieved, often in targeted segments, with ease and accuracy. They approach the creation of digital video objects differently from an archivist, who would be concerned with maintaining file integrity rather than ensuring the source material’s productivity.

Furthermore, production contexts in the broadcast world have a very short life span: ‘a sustainable archiving decision will have to [be] made again in ten years’ time, since the life cycle of a production system tends to be between 3 and 5 years, and the production formats prevalent at that time may well be different to those in use now.’ [8]

Take, for example, H.264/AVC, ‘by far the most ubiquitous video coding standard to date. It will remain so probably until 2015 when volume production and infrastructure changes enable a major shift to H.265/HEVC […] H.264/AVC has played a key role in enabling internet video, mobile services, OTT services, IPTV and HDTV. H.264/AVC is a mandatory format for Blu-ray players and is used by most internet streaming sites including Vimeo, YouTube and iTunes. It is also used in Adobe Flash Player and Microsoft Silverlight and it has also been adopted for HDTV cable, satellite, and terrestrial broadcasting,’ writes David Bull in his book Communicating Pictures.

HEVC, meanwhile, is ‘poised to make a major impact on the video industry’ and ‘offers the potential for up to 50% compression efficiency improvement over AVC.’ Furthermore, HEVC has a ‘specific focus on bit rate reduction for increased video resolutions and on support for parallel processing as well as loss resilience and ease of integration with appropriate transport mechanisms.’ [9]

Increased compression

Codecs developed for use in the broadcast industry deploy increasingly sophisticated compression schemes that reduce bit rate but retain image quality. As AV Preserve explain in their codec primer paper, ‘we can think of compression as a second encoding process, taking coded information and transferring or constraining it to a different, generally more efficient code.’ [10]
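
As a toy illustration of ‘constraining information to a more efficient code’ (nothing like the motion-compensated transform coding real broadcast codecs use), consider simple run-length encoding:

```python
from itertools import groupby

def run_length_encode(data: bytes) -> list[tuple[int, int]]:
    """Re-express a byte stream as (value, run length) pairs."""
    return [(value, len(list(run))) for value, run in groupby(data)]

# A flat region of an image compresses dramatically; noise barely at all.
flat_row = bytes([255] * 64)
print(run_length_encode(flat_row))  # [(255, 64)]: 64 bytes become one pair
```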

The explosion of mobile video data in the current media moment is one of the main reasons why sophisticated compression codecs are being developed. This should not pose any particular problems for the audiovisual archivist per se: if a file is ‘born’ with a high degree of compression, its authenticity should not, ideally, be compromised in subsequent migrations.

Nevertheless, the influence of the broadcast industry tells us a lot about the types of files that will be entering the archive in the next 10-20 years. On a perceptual level, we might note an endearing irony: the rise of super HD and ultra HD goes hand in hand with increased compression applied to the captured signal. While compression cannot, necessarily, be understood as a simple ‘taking away’ of data, its increased use in ubiquitous media environments underlines how the perception of high definition is engineered in very specific ways, and this engineering does not automatically correlate with capturing more, or better quality, data.

Like the error correction we have discussed elsewhere on the blog, it is often the anticipation of malfunction that is factored into the design of digital media objects. This, in turn, creates the impression of smooth, continuous playback, despite the chaos operating under the surface. The greater the clarity of the visual image, the more the signal has been squeezed and manipulated so that it can be transmitted with speed and accuracy. [11]

MXF

Staying with the broadcast world, we will finish this article by focussing on the MXF wrapper that was ‘specifically designed to aid interoperability and interchange between different vendor systems, especially within the media and entertainment production communities. [MXF] allows different variations of files to be created for specific production environments and can act as a wrapper for metadata & other types of associated data including complex timecode, closed captions and multiple audio tracks.’ [12]
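
In practice, rewrapping material into MXF need not touch the encoded essence at all. A hedged sketch using FFmpeg’s stream copy, with hypothetical filenames and assuming the source streams use codecs that FFmpeg’s MXF muxer accepts (MPEG-2 or DNxHD video with PCM audio, for example):

```python
import subprocess

# Copy the existing video and audio bitstreams into an MXF wrapper without
# re-encoding ("-c copy" means stream copy). Filenames are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "input.mov", "-c", "copy", "output.mxf"],
    check=True,
)
```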

The Presto Centre’s latest TechWatch report (December 2014) asserts ‘it is very rare to meet a workflow provider that isn’t committed to using MXF,’ making it ‘the exchange format of choice.’ [13]

We can see such adoption in action with the Digital Production Partnership’s AS-11 standard, which came into operation in October 2014 to streamline digital file-based workflows in the UK broadcast industry.

While the FADGI reports highlight the instability of archival practices for video, the Presto Centre argue that practices are ‘currently in a state of evolution rather than revolution, and that changes are arriving step-by-step rather than with new technologies.’

They also highlight the key role of the broadcast industry as future archival ‘content producers,’ and the necessity of developing technical processes that can be complementary for both sectors: ‘we need to look towards a world where archiving is more closely coupled to the content production process, rather than being a post-process, and this is something that is not yet being considered.’ [14]

The world of archiving and reformatting digital video is undoubtedly complex. As the quote used at the beginning of the article states, any decision can only ever be a compromise that takes into account organizational capacities and available resources.

What is positive is the amount of research openly available that can empower people with the basics, or help them delve into the technical depths of codecs and wrappers if so desired. We hope this article has pointed you towards many of the interesting resources available and introduced some of the key issues.

As ever, if you have a video digitization project you need to discuss, contact us—we are happy to help!

References:

[1] IASA Technical Committee (2014) Handling and Storage of Audio and Video Carriers, 6. 

[2] Carl Fleischhauer, ‘Comparing Formats for Video Digitization.’ http://blogs.loc.gov/digitalpreservation/2014/12/comparing-formats-for-video-digitization/.

[3] Federal Agencies Digital Guidelines Initiative (FADGI), Digital File Formats for Videotape Reformatting Part 5. Narrative and Summary Tables. http://www.digitizationguidelines.gov/guidelines/FADGI_VideoReFormatCompare_pt5_20141202.pdf, 4.

[4] FADGI, Digital File Formats for Videotape, 4.

[5] AV Preserve (2010) A Primer on Codecs for Moving Image and Sound Archives & 10 Recommendations for Codec Selection and Management, www.avpreserve.com/wp-content/…/04/AVPS_Codec_Primer.pdf, 1.

[6] FADGI (2014) Creating and Archiving Born Digital Video Part III. High Level Recommended Practices, http://www.digitizationguidelines.gov/guidelines/FADGI_BDV_p3_20141202.pdf, 24.

[7] Jean-Christophe Kummer, Peter Kuhnle and Sebastian Gabler (2015) ‘Broadcast Archives: Between Productivity and Preservation’, IASA Journal, vol 44, 35.

[8] Kummer et al, ‘Broadcast Archives: Between Productivity and Preservation,’ 38.

[9] David Bull (2014) Communicating Pictures, Academic Press, 435-437.

[10] AV Preserve, A Primer on Codecs for Moving Image and Sound Archives, 2.

[11] For more reflections on compression, check out this fascinating talk from software theorist Alexander Galloway. The more practically bent can download and play with VISTRA, a video compression demonstrator developed at the University of Bristol, ‘which provides an interactive overview of some of the key principles of image and video compression.’

[12] FADGI, Digital File Formats for Videotape, 11.

[13] Presto Centre, AV Digitisation and Digital Preservation TechWatch Report #3, https://www.prestocentre.org/, 9.

[14] Presto Centre, AV Digitisation and Digital Preservation TechWatch Report #3, 10-11.

DVCAM transfers, error correction coding & misaligned machines

This article is inspired by a collection of DVCAM tapes sent in by London-based cultural heritage organisation Sweet Patootee. Below we will explore several issues that arise from the transfer of DVCAM tapes, one of the many Digital Video formats that emerged in the mid-1990s. A second article will follow soon focusing on the content of the Sweet Patootee archive, a fascinating collection of video-taped oral histories of First World War veterans from the Caribbean.

The main issue we want to explore below is the role error correction coding performs both in the composition of the digital video signal and during the preservation playback. We want to highlight this issue because it is often assumed that DVCAM, which first appeared on the market in 1996, is a fairly robust format.

The work we have done to transfer these tapes to digital files indicates that error correction coding is working in overdrive to ensure we can see and hear these recordings. The implication is that DVCAM collections, and wider DV-based archives, should really be a preservation priority for institutions, organisations and individuals.

Before we examine this in detail, let’s learn a bit about the technical aspects of error correction coding.

Error error error

Error correction coding is a staple part of audio and audio-visual digital media. It is of great importance in the digital world of today, where higher volumes of transmitted signals require greater degrees of compression, and therefore sophisticated error correction schemes, as this article argues.

Error correction works through a process of prediction and calculation known as interpolation or concealment. It takes an estimation of the original recorded signal in order to reconstruct parts of the data that have been corrupted. Corruption can occur due either to wear and tear or to insufficiencies in the original recorded signal.
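
A deliberately simplified sketch of that interpolation idea: if the system knows which samples are corrupt, each one can be estimated from its neighbours. Real error concealment is far more sophisticated, but the principle is the same:

```python
def conceal(samples: list[int], bad: set[int]) -> list[int]:
    """Estimate known-bad samples as the mean of their immediate neighbours.

    Assumes the bad samples are isolated and not at the ends of the stream.
    """
    out = list(samples)
    for i in bad:
        out[i] = (samples[i - 1] + samples[i + 1]) // 2  # linear interpolation
    return out

signal = [10, 12, 99, 16, 18]  # the sample at index 2 has been corrupted
print(conceal(signal, {2}))    # [10, 12, 14, 16, 18]
```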

Yet as Hugh Robjohns explains in the article ‘All About Digital Audio’ from 1998:

 ‘With any error protection system, if too many erroneous bits occur in the same sample, there is a risk of the error detection system failing, and in practice, most media failures (such as dropouts on tape or dirt on a CD), will result in a large chunk of data being lost, not just the odd data bit here and there. So a technique called interleaving is used to scatter data around the medium in such a way that if a large section is lost or damaged, when the data is reordered many smaller, manageable data losses are formed, which the detection and correction systems can hopefully deal with.’
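
A minimal sketch of the interleaving Robjohns describes: scatter the bytes on the way in, reorder them on the way out, and a burst error in between turns into isolated, correctable single-byte errors:

```python
def interleave(data: bytes, depth: int) -> bytes:
    # Scatter neighbouring bytes: byte i is sent in row i % depth.
    assert len(data) % depth == 0
    return b"".join(data[i::depth] for i in range(depth))

def deinterleave(data: bytes, depth: int) -> bytes:
    # Invert interleave(): split into rows, then read column by column.
    n = len(data) // depth
    rows = [data[i * n:(i + 1) * n] for i in range(depth)]
    return bytes(b for column in zip(*rows) for b in column)

sent = bytearray(interleave(b"ABCDEFGHIJKL", 4))
sent[4:7] = b"***"  # a three-byte burst error during transmission
print(deinterleave(bytes(sent), 4).decode())  # AB*DE*GHI*KL: three scattered errors
```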

There are many different types of error correction, and ‘like CD-ROMs, DV uses Reed-Solomon (RS) error detection and correction coding. RS can correct localised errors, but seldom can reconstruct data damaged by a dropout of significant size (burst error),’ explains this wonderfully detailed article about DV video formats, archived on the web archive.
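
For the curious, the third-party Python package reedsolo (pip install reedsolo) lets you experiment with RS coding directly; a sketch, noting that recent versions of the library return a tuple from decode():

```python
from reedsolo import RSCodec  # third-party: pip install reedsolo

rsc = RSCodec(10)  # 10 parity bytes can correct up to 5 corrupted bytes
encoded = rsc.encode(b"DV uses Reed-Solomon coding")

damaged = bytearray(encoded)
damaged[3:6] = b"\x00\x00\x00"  # a small, localised error: recoverable

# Recent reedsolo releases return (message, message + ecc, error positions);
# a burst beyond the parity's capacity raises ReedSolomonError instead.
message, _, _ = rsc.decode(bytes(damaged))
print(message)  # the original text, recovered
```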

The difference correction makes

Digital technology’s error correction is one of the key things that differentiates it from its analogue counterparts. As the IASA’s Guidelines on the Production and Preservation of Digital Audio Objects (2009) explains:

‘Unlike copying analogue sound recordings, which results in inevitable loss of quality due to generational loss, different copying processes for digital recordings can have results ranging from degraded copies due to re-sampling or standards conversion, to identical “clones” which can be considered even better (due to error correction) than the original.’ (65)

To think that digital copies can, at times, exceed the quality of the original digital recording is both an astonishing and paradoxical proposition. After all, we are talking about a recording that improves at the perceptual level, despite being compositionally damaged. It is important to remember that error correction coding cannot work miracles, and there are limits to what it can do.

Dietrich Schüller and Albrecht Häfner argue in the International Association of Sound and Audiovisual Archives’s (IASA) Handling and Storage of Audio and Video Carriers (2014) that ‘a perfect, almost error free recording leaves more correction capacity to compensate for handling and ageing effects and, therefore, enhances the life expectancy.’ If a recording is made however ‘with a high error rate, then there is little capacity left to compensate for further errors’ (28-29).

The bizarre thing about error correction coding, then, is the appearance of clarity it can create. And if there are no other recordings to compare with the transferred file, it is really hard to know what the recorded signal would look and sound like were its errors not being corrected.

When we watch the successfully migrated, error corrected file post-transfer, it matters little whether the original was damaged. If a clear signal is transmitted with high levels of error correction, the errors will not be transferred, only the clear image and sound.

Contrast this with a damaged analogue tape, where the damage would be clearly discernible on playback. The plus point of analogue tapes is that they degrade gracefully: it is possible to play back an analogue tape recording with real physical deterioration and still get surprisingly good results.

Digital challenges

The big challenge of working with any digital recording on magnetic tape is knowing when a tape is in poor condition prior to playback. Often a tape will look fine and, because of error correction, will sound fine too, until it stops working entirely.

How then did we know that the Sweet Patootee tapes were experiencing difficulties?

Professional DV machines such as our DVCPRO deck have a warning function that flashes when the error correction coding is working at heightened levels. On our first attempt to play back the tapes we noticed that regular sections on most of the tapes could not be fixed by error correction.

The ingest software we use is designed to automatically retry sections of the tape with higher levels of data corruption until a signal can be retrieved. Imagine a process where a tape automatically goes through a playing-rewinding loop until the signal can be read. We were able to play back the tapes eventually, but the high level of error correction was concerning.
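
The capture software’s internals aren’t something we can publish, so the following is purely an invented sketch of that playing-rewinding loop, with hypothetical names throughout:

```python
MAX_RETRIES = 5

def capture_section(deck, start_timecode):
    """Hypothetical sketch: re-play a troublesome section of tape until
    error correction copes, keeping the cleanest pass if it never succeeds."""
    best = None
    for _ in range(MAX_RETRIES):
        deck.seek(start_timecode)    # rewind to the start of the section
        block = deck.read_section()  # play it back and capture the stream
        if best is None or block.error_count < best.error_count:
            best = block             # remember the cleanest pass so far
        if block.error_count == 0:
            break                    # fully corrected: stop retrying
    return best
```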

Around 25% of the recorded signal in DVCAM is composed of subcode data, error detection and error correction.

DVCAM & misalignment

It is not just the over-active error correction on DVCAMs that should set alarm bells ringing.

Alan Griffiths from Bristol Broadcast Engineering, a trained SONY engineer with over 40 years’ experience working in the television industry, told us that early DVCAM machines pose particular preservation challenges. The main problem is that the ‘mechanisms are completely different’ for earlier DVCAM machines, which means that there is ‘no guarantee’ they will play back effectively on later models.

Recordings made on early DVCAM machines exhibit back tension problems and tracking issues. This increases the likelihood of DV dropout on playback because a loss of information was recorded onto the original tape. The IASA confirm that ‘misalignment of recording equipment leads to recording imperfections, which can take manifold form. While many of them are not or hardly correctable, some of them can objectively be detected and compensated for.’

One possible solution to this problem, as with DAT tapes, is to ‘misalign’ the replay digital video tape recorder to match the misaligned recordings. However ‘adjustment of magnetic digital replay equipment to match misaligned recordings requires high levels of engineering expertise and equipment’ (2009; 72), and must therefore not be ‘tried at home,’ so to speak.

Our experience with the Sweet Patootee tapes indicates that DVCAM tapes are a more fragile format than is commonly thought, particularly if your DVCAM collection was recorded on early machines. If you have a large collection of DVCAM tapes we strongly recommend that you begin to assess the contents and make plans to transfer them to digital files. As always, do get in touch if you need any advice to develop your plans for migration and preservation.

 
