Posts Tagged ‘metadata’

Digital Video in Mixed-Content Archives

Monday, September 12th, 2016

On a recent trip to one of Britain’s most significant community archives, I was lucky enough to watch a rare piece of digitised video footage from the late 1970s.

As the footage played it raised many questions in my mind: who shot it originally? What format was it originally created on? How was it edited? How was it distributed? What was the ‘life’ of the artefact after it ceased to actively circulate within communities of interest and use? How, and by whom, was it digitised?

As someone familiar with the grain of video images, I could make an educated guess about the format. I also made other assumptions about the video. I imagined there was a limited amount of tape available to capture the live events, for example, because a number of still images were used to sustain the rolling audio footage. This was unlikely to be an aesthetic decision given that the aim of the video was to document a historic event. I could be wrong about this, of course.

When I asked the archivist the questions flitting through my mind she had no answers. She knew who the donor of the digital copy was, but nothing about the file’s significant properties. Nor was this important information included in the artefact’s record.

This struck me as a hugely significant problem with the status of digitised material – and especially perhaps video – in mixed-content archives where the specificities of AV content are not accounted for.

Due to the haphazard, hand-to-mouth way mixed-content archives have acquired digital items, bit by bit (no pun intended), it seems more than likely that this situation is the rule rather than the exception: maintaining access is often privileged over preserving the content and context of the digitised video artefact.

As a researcher I was able to access the video footage, and this of course is better than nothing.

Yet I was viewing the item in an ahistoric black hole. It was profoundly decontextualised; an artefact reduced to its barest essence.

Standard instabilities

This is not in any way a criticism of the archive in question. In fact, this situation is wholly understandable given that digital video is one of those ‘media formats that exist in crisis.’

Video digitisation remains a complex and unstable area of digital preservation. It is, as we have written elsewhere on this blog, the final frontier of audiovisual archiving. This seems particularly true within the UK context where there is currently no systematic plan to digitise video collections, unlike film and audio.

The central challenge of digital video preservation remains the bewildering number of potential codec/wrapper combinations that can be used to preserve video content.

There are signs, however, that file-format stabilities are emerging. The No Time to Wait: Standardizing FFV1 & Matroska for Preservation symposium (Berlin, July 2016) brought together software developers and archivists who want to make the shared dream of an open source lossless video standard, fit for archival purpose, a reality.

It seems like the very best minds are working together to solve this problem, so Great Bear are quietly optimistic that a workable, open source standard for video digital preservation is within reach in the not-too-distant future.
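For readers who want to experiment, here is a minimal sketch, assuming FFmpeg is installed, of how a lossless FFV1/Matroska preservation master might be produced from a captured file. The filenames and parameter choices are illustrative assumptions only, not a prescribed workflow.

```python
# A minimal sketch (not any particular archive's workflow) of producing a
# lossless FFV1/Matroska preservation master from a captured video file
# using FFmpeg. Filenames and parameter choices are illustrative assumptions.
import subprocess

def make_ffv1_master(capture_file: str, master_file: str) -> None:
    """Transcode a capture to FFV1 version 3 in a Matroska wrapper."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", capture_file,
            "-c:v", "ffv1",       # lossless FFV1 video codec
            "-level", "3",        # FFV1 version 3
            "-g", "1",            # every frame is an intra frame
            "-slicecrc", "1",     # per-slice CRCs for fixity checking
            "-c:a", "flac",       # lossless audio
            master_file,          # e.g. "preservation_master.mkv"
        ],
        check=True,
    )

if __name__ == "__main__":
    make_ffv1_master("capture.mov", "preservation_master.mkv")
```

Intra-only frames and per-slice CRCs are commonly cited reasons why FFV1 suits archival use, but any settings should be tested against your own material before a large-scale transfer.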

Metadata

Yet as my experience in the archive makes clear, the challenge of video digitisation is not about file format alone.

There is a pressing need to think very carefully about the kinds of metadata and other contextual material that need to be preserved within and alongside the digitised file.

Due to limited funding and dwindling technical capacity, there is likely to be only one opportunity to transfer material currently recorded on magnetic tape. This means that in 2016 there really can be no dress rehearsal for your video digitisation plans.

As Joshua Ranger strongly emphasises:

‘Digitization is preservation…For audiovisual materials. And this bears repeating over and over because the anti-digitization voice is much stronger and generally doesn’t include any nuance in regards to media type because the assumption is towards paper. When we speak about digitization for audio and video, we now are not speaking about simple online access. We are speaking about the continued viability, about the persistence and the existence of the media content.’

What information will future generations need to understand the digitised archive materials we produce?

An important point to reckon with here is that not all media are the same. The affordances of particular technologies, within specific historical contexts, have enabled new forms of community and communicative practice to emerge. Media are also disruptive (if not deterministic) – they influence how we see the world and what we can do.

On this blog, for example, Peter Sachs Collopy discussed how Portapak technology enabled video artists and activists in the late 1960s/early 1970s to document and re-play events quickly.

Such use of video is also evident in the 1975 documentary Les prostituées de Lyon parlent (The prostitutes of Lyon speak).

Les prostituées documents a wave of church occupations by feminist activists in France.

The film demonstrates how women used emergent videotape technology to transmit footage recorded within the church onto TV screens positioned outside. Here videotape technology, and in particular its capacity to broadcast uni-directional messages, was used to protect and project the integrity of the group’s political statements. Video, in this sense, was an important tool that enabled the women – many of whom were prostitutes and therefore without a voice in French society – to ‘speak’.

Peter’s interview and Les prostituées de Lyon parlent are specific examples of how AV formats are concretely embedded within a social-historical and technical context. The signal captured – when reduced to bit stream alone – is simply not an adequate archival source. Without sufficient context too much historical substance is shed.

In this respect I disagree with Ranger’s claim that ‘all that really may be needed moving ahead [for videotape digitisation] is a note in the record for the new digital preservation master that documents the source.’ To really preserve the material, the metadata record needs to be rich enough for a future researcher to understand how a format was used, and what it enabled users to do.

‘Rich enough’ will always be down to subjective judgement, but such judgements can be usefully informed by understanding what makes AV archive material unique, especially within the context of mixed-content archives.

Moving Forward

So, to think about this practically. How could the archive item I discuss at the beginning of the article be contextualised in a way that was useful to me, as a researcher?

At the most basic level the description would need to include:

  • The format it was recorded on, including details of tape stock and machine used to record material
  • When it was digitised
  • Who digitised it (an individual, an institution)

In an ideal world the metadata would include:

  • Images of the original artefact – particularly important if the digital version is now the only remaining copy
  • Storage history (of original and copy)
  • Accompanying information (e.g., production sheets, distribution history – anything that can illuminate the ‘life’ of artefact, how it was used)

This information could be embedded in the container file or be stored in associated metadata records.
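As a purely illustrative sketch, and not a recommended schema, the fields listed above could be written out as a JSON ‘sidecar’ record stored next to the digitised file. Every field name and value below is invented for the example.

```python
# Illustrative sketch only: writing the contextual fields suggested above as a
# JSON "sidecar" record stored alongside the digitised video file. The field
# names here are ad hoc assumptions, not a formal metadata standard.
import json
from pathlib import Path

record = {
    "source_format": "U-matic (low band)",
    "tape_stock": "Sony KCA-60",                  # hypothetical example
    "recording_machine": "Sony VO-2630",          # hypothetical example
    "digitisation_date": "2016-09-12",
    "digitised_by": "Community archive volunteer, name unknown",
    "original_artefact_images": ["tape_front.jpg", "tape_spine.jpg"],
    "storage_history": "Stored in donor's attic until 2009; archive strongroom thereafter",
    "accompanying_material": ["production_sheet.pdf", "distribution_notes.txt"],
}

sidecar = Path("video_item_0001.metadata.json")
sidecar.write_text(json.dumps(record, indent=2, ensure_ascii=False))
```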

Matroska sample with embedded preservation metadata

These suggestions may seem obvious, but it is surprising the extent to which they are overlooked, especially when the most pressing concern during digitisation is access alone.

In every other area of archival life, preserving the context of an item is deemed important. The difference with AV material is that the context of use is often complex, and in the case of video, is always changing.

As stressed earlier: in 2016 and beyond you will probably only get one chance to transfer collections stored on magnetic tape, so it is important to integrate rich descriptions as part of the transfer.

Capturing the content alone is not sufficient to preserve the integrity of the video artefact. Creating a richer metadata record will take more planning and time, but it will definitely be worth it, especially if we try to imagine how future researchers might want to view and understand the material.

Videokunstarkivet – Norway’s Digital Video Art Archive

Monday, July 7th, 2014

We have recently digitised a U-matic video tape of eclectic Norwegian video art from the 1980s. The tape documents a performance by Kjartan Slettemark, an influential Norwegian/ Swedish artist who died in 2008. The tape is the ‘final mix’ of a video performance entitled Chromakey Identity Blue in which Slettemark live mixed several video sources onto one tape.

The theoretical and practical impossibility of documenting live performance has been hotly debated in recent times by performance theorists, and there is some truth to those claims when we consider the encounter with Slettemark’s work in the Great Bear studio. The recording is only one aspect of the overall performance which, arguably, was never meant as a stand alone piece. This was certainly reflected in our Daily Mail-esque reaction to the video when we played it back. ‘Eh? Is this art?! I don’t get it!’ was the resounding response.

Having access to the wider context of the performance is sometimes necessary if the intentions of the artist are to be appreciated. Thankfully, Slettemark’s website includes part-documentation of Chromakey Identity Blue, and we can see how the different video signals were played back on various screens, arranged on the stage in front of (what looks like) a live TV audience.

Seen alongside this documentation, the performance immediately evokes the wider context of 70s/80s video art, which used the medium to explore the relationship between the body, space, the screen and, in Slettemark’s case, the audience. A key part of Chromakey Identity Blue is the interruption of the audience’s presence in the performance, realised when their images are screened across the face of the artist, whose chroma key mask enables him to perform a ‘special effect’ layering two images or video streams together.

What unfolds through Slettemark’s performance is at times humorous, suggestive and moving, largely because of the ways the faces of different people interact, perform or simply ignore their involvement in the spectacle. As Marina Abramovic‘s use of presence testifies, there can be something surprisingly raw and even confrontational about incorporating the face into relational art. As an ethical space, meeting with the ‘face’ of another became a key concept for twentieth century philosopher Emmanuel Levinas. The face locates, Bettina Bergo argues, ‘“being” as an indeterminate field’ in which ‘the Other as a face that addresses me […] The encounter with a face is inevitably personal.’

If an art work like Slettemark’s is moving then, it is because it stages moments where ‘faces’ reflect and interface across each other. Faces meet and become technically composed. Through the performance of personal-facial address in the artwork, it is possible to glimpse for a brief moment the social vulnerability and fragility such meetings engender. Brief, because in Chromakey Identity Blue the seriousness is diffused by the kitsch use of a disco ball that the artist moves across the screen to symbolically change the performed image, conjuring the magical feel of new technologies and how they facilitate different ways of seeing, being and acting in the world.

Videokunstarkivet (The Norwegian Video Art Archive)

VKA DAM Interface

The tape of Slettemark was sent to us by Videokunstarkivet, an exciting archival project mapping all the works of video art that have been made in Norway since the mid-1960s. Funded by the Norwegian Arts Council, the project has built the digital archival infrastructure from the bottom up, and those working on it have learnt a good many things along the way. Per Platou, who is managing the project, was generous enough to share some of these insights for readers of our blog, along with a selection of images from the archive’s interface.

There are several things to be considered when creating a digital archive ‘from scratch’. Often at the beginning of a large project it is possible to look around for examples of best practice within your field. This isn’t always the case for digital archives, particularly those working almost exclusively with video files, whose communities of practice are unsettled and established ways of working few and far between. The fact that even in 2014, when digital technologies have been widely adopted throughout society, there is still no firm agreement on standard access and archival file formats for video indicates the peculiar challenges of this work.

Because of this, projects such as Videokunstarkivet face multiple challenges, with significant amounts of improvisation required in the construction of the project infrastructure. An important consideration is the degree of access users will have to the archive material. As Per explained, publicly re-publishing the archive material from the site in an always open access form is not a concern of the Videokunstarkivet, largely due to the significant administrative issues involved in gaining licensing and copyright permissions. ‘I didn’t even think there was a difference between collecting and communicating the work, yet after a while I saw there is no point in showing everything; it has to be filtered and communicated in a certain way.’

VKA DAM Interface

Instead, interested users will be given a research key or password which enables them to access the data and edit metadata where appropriate. If users want to re-publish or show the art in some form, contact details for the artist/copyright holder are included as part of the entry. Although the Videokunstarkivet deals largely with video art, entries on individual artists include information about other archival collections where their material may be stored, in order to facilitate further research. Contemporary Norwegian video artists are also encouraged to deposit material in the database, ensuring that ongoing collecting practices are built in to the long-term project infrastructure.

VKA DAM Interface

Another big consideration in constructing an archive is what to collect. Per told me that video art in Norway really took off in the early 80s. Artists who incorporated video into their work weren’t necessarily specialists in the medium; ‘there just happened to be a video camera nearby so they decided to use it.’ Video was therefore often used alongside films, graphics, performance and text, making the starting point for the archive, according to Per, ‘a bit of a mess really.’ Nonetheless, Videokunstarkivet ‘approaches every artist like it was Edvard Munch,’ because it is very hard to know exactly what will be culturally valuable 10, 20 or even 100 years from now. While it may not be appropriate to ‘save everything!’ for larger archival projects, for a self-contained and focused archival project such as the Videokunstarkivet, an inclusive approach may well be perfectly possible.

Building software infrastructures

Another important aspect of the project is the technical considerations – the actual building of the back and front end of the software infrastructure that will be used to manage newly migrated digital assets.

It was very important that the Videokunstarkivet archive was constructed using open source software. This was necessary to ensure resilience in a rapidly changing technological context, and so that the project could benefit from any improvements in the code as they are tested out by user communities.

The project uses an adapted version of the Digital Asset Management system ResourceSpace that was developed with LIMA, an organisation based in Holland that preserves, distributes and researches media art. Per explained that ‘since Resource Space was originally meant for photos and other “light” media files, we found it not so well suited for our actual tasks.’ Video files are of course far ‘heavier’ than image or even uncompressed audio files. This meant that there were some ‘pretty severe’ technical glitches in the process of establishing a database system that could effectively manage and play back large, uncompressed master and access copies. Through establishing the Videokunstarkivet archive they were ‘pushing the limits of what is technically possible in practice’, largely because internet servers are not built to handle large files, particularly not if those files are being transcoded back and forth across the file management system. In this respect, the project is very much ‘testing new ground’, creating an infrastructure capable of effectively managing, and enabling people to remotely access, large amounts of high-quality video data.

VKA DAM Interface

Access files will be available to stream using open source encoded files, WebM (hi and lo) and x264 (hi and lo), ensuring that streaming conditions can be adapted to individual server capabilities. The system is also set up to manage large-scale file transcoding should there be a substantial change in file format preferences. These changes can occur without compromising the integrity of the uncompressed master file.
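To illustrate the general idea, rather than Videokunstarkivet’s actual pipeline, the sketch below shows how ‘hi’ and ‘lo’ access copies in WebM and H.264 (via FFmpeg’s libx264 encoder) might be generated from an uncompressed master. Bitrates, quality settings and filenames are arbitrary assumptions.

```python
# Rough sketch (not Videokunstarkivet's actual pipeline) of generating "hi" and
# "lo" access copies in WebM (VP8/Vorbis) and H.264 (libx264/AAC) from an
# uncompressed master file. Bitrates and filenames are arbitrary assumptions.
import subprocess

RENDITIONS = [
    ("webm_hi.webm", ["-c:v", "libvpx", "-b:v", "4M", "-c:a", "libvorbis"]),
    ("webm_lo.webm", ["-c:v", "libvpx", "-b:v", "1M", "-c:a", "libvorbis"]),
    ("h264_hi.mp4",  ["-c:v", "libx264", "-crf", "18", "-c:a", "aac"]),
    ("h264_lo.mp4",  ["-c:v", "libx264", "-crf", "28", "-c:a", "aac"]),
]

def make_access_copies(master: str) -> None:
    # Each rendition is transcoded from the same master so the preservation
    # file is never altered, only read.
    for out_name, codec_args in RENDITIONS:
        subprocess.run(["ffmpeg", "-i", master, *codec_args, out_name], check=True)

make_access_copies("uncompressed_master.mov")
```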

The interface is built with Bootstrap which has been adapted to create ‘a very advanced access-layer system’ that enables Videokunstarkivet to define user groups and access requirements. Per outlined these user groups and access levels as follows:

‘- Admin: Access to everything (i.e.Videokunstarkivet team members)

– Research: Researchers/curators can see video works, and almost all the metadata (incl previews of the videos). They cannot download master files. They can edit metadata fields, however all their edits will be visible for other users (Wikipedia style). If a curator wants to SHOW a particular work, they’ll have to contact the artist or owner/gallery directly. If the artist agrees, they (or we) can generate a download link (or transcode a particular format) with a few clicks.

– Artist: Artists can up/download uncompressed master files freely, edit metadata and additional info (contact, cv, websites etc etc). They will be able to use the system to store digital master versions freely, and transcode files or previews to share with who they want. The ONLY catch is that they can never delete a master file – this is of course coming out of national archive needs.’

Følstad overview
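Purely as an illustration of the access model Per describes, and not the project’s actual ResourceSpace configuration, the three roles could be pictured as a simple permission matrix:

```python
# Toy sketch of the three access levels Per describes, expressed as a simple
# permission matrix. This is purely illustrative; the real system is an adapted
# ResourceSpace installation, not this Python dictionary.
PERMISSIONS = {
    "admin": {
        "view_previews": True,
        "edit_metadata": True,
        "download_master": True,
        "delete_master": True,       # full access for Videokunstarkivet team
    },
    "research": {
        "view_previews": True,
        "edit_metadata": True,       # edits visible to other users, wiki-style
        "download_master": False,    # must contact artist/owner for a download link
        "delete_master": False,
    },
    "artist": {
        "view_previews": True,
        "edit_metadata": True,
        "download_master": True,     # may store and share their own masters
        "delete_master": False,      # masters can never be deleted (archival requirement)
    },
}

def can(role: str, action: str) -> bool:
    return PERMISSIONS.get(role, {}).get(action, False)

assert can("artist", "download_master") and not can("artist", "delete_master")
```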

Per approached us to help migrate the Kjartan Slettemark tape because of the thorough approach and conscientious methodology we apply to digitisation work. As a media archaeology enthusiast, Per stressed that it was desirable for both aesthetic and archival reasons that the materiality of U-matic video was visible in the transferred file. He didn’t want the tape, in other words, to be ‘cleaned up’ in any way. To migrate the tape to digital file we used our standardised transfer chain for U-matic tape. This includes using an appropriate time-base corrector contemporary to the U-matic era, and conversion of the dub signal using a dedicated external dub – Y/C converter circuit.

We are very happy to be working with projects such as the Videokunstarkivet. It has been a great opportunity to learn about the nuts and bolts design of cutting-edge digital video archives, as well as discover the work of Kjartan Slettemark, whose work is not well-known in the UK. Massive thanks must go to Per for his generous sharing of time and knowledge in the process of writing this article. We wish the Videokunstarkivet every success and hope it will raise the profile of Norwegian video art across the world.

Going ‘tape-less’: AS-11 Digital Production Partnership standards

Wednesday, May 7th, 2014

Is this the end of tape as we know it? Maybe not quite yet, but October 1, 2014, will be a watershed moment in professional media production in the UK: it is the date that file format delivery will finally ‘go tape-less.’

Establishing end-to-end digital production will cut out what is now seen as the cumbersome use of video tape in file delivery. Using tape essentially adds a layer of media activity to a process that is predominantly file based anyway. As Mark Harrison, Chair of the Digital Production Partnership (DPP), reflects:

Example of a workflow for the DPP AS-11 standard

‘Producers are already shooting their programmes on tapeless cameras, and shaping them in tapeless post production environments. But then a strange thing happens. At the moment a programme is finished it is transferred from computer file to videotape for delivery to the broadcaster. When the broadcaster receives the tape they pass it to their playout provider, who transfers the tape back into a file for distribution to the audience.’

Founded in 2010, the DPP are a ‘not-for-profit partnership funded and led by the BBC, ITV and Channel 4 with representation from Sky, Channel 5, S4/C, UKTV and BT Sport.’ The purpose of the coalition is to help ‘speed the transition to fully digital production and distribution in UK television’ by establishing technical and metadata standards across the industry.

The transition to a standardised, tape-less environment has further been rationalised as a way to minimise confusion among media producers and help economise costs for the industry. As reported on Avid Blogs, production companies, who often have to respond to rapidly evolving technological environments, are frantically preparing for deadline day. ‘It’s the biggest challenge since the switch to HD’, said Andy Briers, from Crow TV. Moreover, this challenge is as much financial as it is technical: ‘leading post houses predict that the costs of implementing AS-11 delivery will probably be more than the cost of HDCAM SR tape, the current standard delivery format’, writes David Wood on televisual.com.

Outlining the standard

Audio post production should now be mixed to the EBU R128 loudness standard. As stated in the DPP’s producer’s guide, this new audio standard ‘attempts to model the way our brains perceive sound: our perception is influenced by frequency and duration of sound’ (9).
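As an illustrative aside, and not something the DPP specification itself mandates, a finished mix could be checked against the R128 integrated loudness target of -23 LUFS using FFmpeg’s ebur128 analysis filter; the filename below is hypothetical.

```python
# Illustrative sketch: checking a finished mix against the EBU R128 integrated
# loudness target of -23 LUFS using FFmpeg's ebur128 analysis filter. This is
# an assumption about one possible QC step, not part of the DPP spec itself.
import subprocess

def measure_r128(audio_file: str) -> str:
    """Run FFmpeg's ebur128 filter and return its analysis log (written to stderr)."""
    result = subprocess.run(
        ["ffmpeg", "-nostats", "-i", audio_file,
         "-filter_complex", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    return result.stderr  # contains an "Integrated loudness ... LUFS" summary

print(measure_r128("final_mix.wav"))
```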

In addition, the following specifications must be observed to ensure the delivery format is ‘technically legal.’

  • HD 1920×1080 in an aspect ratio of 16:9 (1080i/25)
  • AVC-I in MXF (Material Exchange Format) OP1a files to AS11 specification
  • DPP required metadata
  • Photosensitive Epilepsy (flashing) testing to the Ofcom standard / Harding Test

The shift to file-based delivery will require new kinds of vigilance and attention to detail in order to manage the specific problems that will potentially arise. The DPP producer’s guide states: ‘unlike the tape world (where there may be only one copy of the tape) a file can be copied, resulting in more than one essence of that file residing on a number of servers within a playout facility, so it is even more crucial in file-based workflows that any redelivered file changes version or number’.

Another big development within the standard is the important role performed by metadata, both structural (inherent to the file) and descriptive (added during the course of making the programme). While broadcasters may be used to manually writing metadata as descriptive information on tape boxes, the metadata must now be added to the digital file itself. Furthermore, ‘the descriptive and technical metadata will be wrapped with the video and audio into a new and final AS-11 DPP MXF file,’ and if ‘any changes to the file are [made it is] likely to invalidate the metadata and cause the file to be rejected. If any metadata needs to be altered this will involve re-wrapping the file.’
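By way of illustration, the structural metadata ‘inherent to the file’ can be inspected with a tool such as ffprobe. The sketch below is an assumption about one possible check, with a hypothetical filename; it does not validate the DPP’s required descriptive metadata.

```python
# A small sketch of inspecting the structural metadata "inherent to the file"
# with ffprobe. The MXF filename is hypothetical, and this does not validate
# the DPP's required descriptive metadata, only what ffprobe can report.
import json
import subprocess

def probe(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

info = probe("programme_as11.mxf")
video = next(s for s in info["streams"] if s["codec_type"] == "video")
print(video["width"], video["height"], video.get("field_order"))
```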

Interoperability: the promise of digital technologies

The sector-wide agreement and implementation of digital file-delivery standards are significant because they represent a commitment to manufacturing full interoperability, an inherent potential of digital technologies. As French philosopher of technology Bernard Stiegler explains:

‘The digital is above all a process of generalised formalisation. This process, which resides in the protocols that enable interoperability, makes a range of diverse and varied techniques. This is a process of unification through binary code of norms and procedures that today allow the formalisation of almost everything: traveling in my car with a GPS system, I am connected through a digitised triangulation process that formalises my relationship with the maps through which I navigate and that transform my relationship with territory. My relationships with space, mobility and my vehicle are totally transformed. My inter-individual, social, familial, scholarly, national, commercial and scientific relationships are all literally unsettled by the technologies of social engineering. It is at once money and many other things – in particular all scientific practices and the diverse forms of public life.’

Jigsaw with the pieces representing various technical elements fitting together

This systemic homogenisation described by Stiegler is called into question if we consider whether the promise of interoperability – understood here as different technical systems operating efficiently together – has ever been fully realised by the current generation of digital technologies. If it had been, initiatives like the DPP’s would never have to be pursued in the first place – all kinds of technical operations would run in a smooth, synchronous manner. Amid the generalised formalisation there are many micro-glitches and incompatibilities that slow operations down at best, and grind them to a halt at worst.

With this in mind we should note that standards established by the DPP are not fully interoperable internationally. While the DPP’s technical and metadata standards were developed in close alliance with the US-based Advanced Media Workflow Association’s (AMWA) recently released AS-11 specification, there are also key differences.

As reported in 2012 by Broadcast Now, Kevin Burrows, DPP Technical Standards Lead, said: ‘[The DPP standards] have a shim that can constrain some parameters for different uses; we don’t support Dolby E in the UK, although the [AMWA] standard allows it. Another difference is the format – 720 is not something we’d want as we’re standardising on 1080i. US timecode is different, and audio tracks are referenced as an EBU standard.’ Like NTSC and PAL video/DVD then, the technical standards in the UK differ from those used in the US. We arguably need, therefore, to think about the interoperability of particular technical localities rather than make claims about the generalised formalisation of all technical systems. Dis-synchrony and technical differences remain despite standardisation.

The AmberFin Academy blog has also explored what it describes as the ‘interoperability dilemma’. It suggests that the DPP’s careful planning means their standards are likely to function in an efficient manner: ‘By tightly constraining the wrapper, video codecs, audio codecs and metadata schema, the DPP Technical Standards Group has created a format that has a much smaller test matrix and therefore a better chance of success. Everything in the DPP File Delivery Specification references a well defined, open standard and therefore, in theory, conformance to those standards and specification should equate to complete interoperability between vendors, systems and facilities.’ The blog does, however, offer these words of caution about user interpretation: ‘despite the best efforts of the people who actually write the standards and specifications, there are areas that are, and will always be, open to some interpretation by those implementing the standards, and it is unlikely that any two implementations will be exactly the same. This may lead to interoperability issues.’

It is clear that there is no one simple answer to the dilemma of interoperability and its implementation. Establishing a legal commitment, and a firm deadline date for the transition, is however a strong message that there is no turning back. Establishing the standard may also lead to a certain amount of technological stability, comparable to the development of the EIAJ video tape standards in 1969, the first standardised format for industrial/non-broadcast video tape recording. Amid these changes in professional broadcast standards, the increasingly loud call for standardisation among digital preservationists should also be acknowledged.

For analogue and digital tapes however, it may well signal the beginning of an accelerated end. The professional broadcast transition to ‘full-digital’ is a clear indication of tape’s obsolescence and vulnerability as an operable media format.

A word about metadata and digital collections

Monday, September 23rd, 2013

Metadata is data about data. Maybe that sounds pretty boring, but archivists love it, and it is really important for digitisation work.

As mentioned in the previous post that focused on the British Library’s digital preservation strategies, as well as in many other features on this blog, it is fairly easy to change a digital file without knowing it, because you can’t see the changes. Sometimes changing a file is reversible (as in non-destructive editing) but sometimes it is not (destructive editing). What is important to realise is that changing a digital file irrevocably, or applying lossy instead of lossless compression, will affect the integrity and authenticity of the data.

What is perhaps worse in the professional archive sector than changing the structure of the data is not making a record of that change in the metadata.

Metadata is a way to record all the journeys a data object has gone through in its lifetime. It can be used to highlight preservation concerns if, for example, a file has undergone several cycles of coding and decoding that potentially make it vulnerable to degradation.
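A toy sketch of what recording those journeys might look like in practice: each significant event is appended to a simple log file. The field names are ad hoc assumptions, not a formal schema such as PREMIS.

```python
# Toy sketch of recording a data object's "journeys": each significant event
# (migration, transcode, checksum repair) is appended to a simple JSON log.
# Field names are ad hoc assumptions, not a formal schema such as PREMIS.
import json
from datetime import date
from pathlib import Path

def log_event(log_path: Path, event: str, agent: str, note: str = "") -> None:
    events = json.loads(log_path.read_text()) if log_path.exists() else []
    events.append({"date": date.today().isoformat(),
                   "event": event, "agent": agent, "note": note})
    log_path.write_text(json.dumps(events, indent=2))

log = Path("item_0001_events.json")
log_event(log, "transcode", "archive volunteer",
          "Lossy H.264 access copy made; lossless master untouched")
```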

Example of Metadata

Metadata can in fact be split into three kinds, as Ian Ireland writes in this article:

‘technical data (info on resolution, image size, file format, version, size), structural metadata (describes how digital objects are put together such as a structure of files in different folders) and descriptive (info on title, subject, description and covering dates) with each type providing important information about the digital object.’
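Purely for illustration, Ireland’s three kinds might be pictured as three groups of fields within a single record; all field names and values below are invented.

```python
# Illustrative only: Ireland's three kinds of metadata pictured as three groups
# of fields within a single record. Field names and values are made up.
metadata_record = {
    "technical": {
        "resolution": "720x576",
        "file_format": "Matroska/FFV1",
        "file_size_bytes": 48_318_382_080,
    },
    "structural": {
        "folder": "collection_A/tapes/item_0001/",
        "related_files": ["item_0001.mkv", "item_0001.metadata.json"],
    },
    "descriptive": {
        "title": "Community festival documentation",
        "subject": "local history",
        "covering_dates": "1978-1979",
    },
}
```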

As the previous blog entry detailed, digital preservation is a dynamic, constantly changing sector. Furthermore, managing digital collections requires far greater intervention than managing physical objects or even analogue media. In such a context data objects undergo rapid changes as they adapt to the technical systems they are opened by and moved between. One would speculate that this produces a large stream of metadata.

What is most revealing about the metadata surrounding digital objects is that they create a trail of information not only about the objects themselves. They also document our changing relationship to, and knowledge about, digital preservation. Metadata can help tell the story of how a digital object is transformed as different technical systems are adopted and then left behind. The marks of those changes are carried in the data object’s file structure, and in the metadata that further elaborates those changes.

As with physical heritage collections, a practice of minimal intervention is the ideal for maintaining both the integrity and authenticity of digital collections. But mistakes are made, and attempts to ‘clean up’ or otherwise clarify digital data do happen, so when they do, it is important to record those changes, because they help guide how we look after archives in the long term.

