Posts Tagged ‘video’

Digital Video in Mixed-Content Archives

Monday, September 12th, 2016

On a recent trip to one of Britain’s most significant community archives, I was lucky enough to watch a rare piece of digitised video footage from the late 1970s.

As the footage played it raised many questions in my mind: who shot it originally? What format was it originally created on? How was it edited? How was it distributed? What was the ‘life’ of the artefact after it ceased to actively circulate within communities of interest/ use? How, and by whom, was it digitised?

As someone familiar with the grain of video images, I could make an educated guess about the format. I also made other assumptions about the video. I imagined there was a limited amount of tape available to capture the live events, for example, because a number of still images were used to sustain the rolling audio footage. This was unlikely to be an aesthetic decision given that the aim of the video was to document a historic event. I could be wrong about this, of course.

When I asked the archivist the questions flitting through my mind she had no answers. She knew who the donor of the digital copy was, but nothing about the file’s significant properties. Nor was this important information included in the artefact’s record.

This struck me as a hugely significant problem with the status of digitised material – and especially perhaps video – in mixed-content archives where the specificities of AV content are not accounted for.

Due to the haphazard and hand-to-mouth way mixed-content archives have acquired digital items, it seems more than likely this situation is the rule rather than the exception: acquired bit by bit (no pun intended), maintaining access is often privileged over preserving the content and context of the digitised video artefact.

As a researcher I was able to access the video footage, and this of course is better than nothing.

Yet I was viewing the item in an ahistoric black hole. It was profoundly decontextualised; an artefact reduced to its barest essence.

Standard instabilities

This is not in any way a criticism of the archive in question. In fact, this situation is wholly understandable given that digital video is an example of a ‘media format that exists in crisis.’

Video digitisation remains a complex and unstable area of digital preservation. It is, as we have written elsewhere on this blog, the final frontier of audiovisual archiving. This seems particularly true within the UK context where there is currently no systematic plan to digitise video collections, unlike film and audio.

The challenge with digital video preservation remains the bewildering number of potential codec/ wrapper combinations that can be used to preserve video content.

There are signs, however, that file-format stabilities are emerging. The No Time to Wait: Standardizing FFV1 & Matroska for Preservation symposium (Berlin, July 2016) brought together software developers and archivists who want to make the shared dream of an open source lossless video standard, fit for archival purpose, a reality.

It seems like the very best minds are working together to solve this problem, so Great Bear are quietly optimistic that a workable, open source standard for video digital preservation is within reach in the not too distant future.
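For the curious, the kind of preservation target the symposium discussed can be sketched as an FFmpeg invocation. The snippet below simply assembles the argument list rather than running anything; the file names and the choice of uncompressed audio are our own illustrative assumptions, not a formal recommendation:

```python
# Sketch: assemble an FFmpeg command for an FFV1/Matroska preservation
# master. All parameter choices here are illustrative assumptions.

def ffv1_mkv_command(source: str, master: str) -> list[str]:
    """Build (but do not run) an FFmpeg argument list producing a
    lossless FFV1 version 3 video stream with per-slice checksums,
    plus uncompressed PCM audio, inside a Matroska wrapper."""
    return [
        "ffmpeg", "-i", source,
        "-c:v", "ffv1",          # lossless video codec
        "-level", "3",           # FFV1 version 3: slicing support
        "-slicecrc", "1",        # per-slice CRCs for fixity checking
        "-c:a", "pcm_s24le",     # uncompressed 24-bit audio
        master,
    ]

cmd = ffv1_mkv_command("capture.avi", "master.mkv")
print(" ".join(cmd))
```

The `.mkv` extension is what tells FFmpeg to use the Matroska wrapper; the same encodings could equally be placed in another container.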

Metadata

Yet as my experience in the archive makes clear, the challenge of video digitisation is not about file format alone.

There is a pressing need to think very carefully about the kind of metadata and other contextual material that need to be preserved within and alongside the digitised file.

Due to limited funding and dwindling technical capacity, there is likely to be only one opportunity to transfer material currently recorded on magnetic tape. This means that in 2016 there really can be no dress rehearsal for your video digitisation plans.

As Joshua Ranger strongly emphasises:

‘Digitization is preservation…For audiovisual materials. And this bears repeating over and over because the anti-digitization voice is much stronger and generally doesn’t include any nuance in regards to media type because the assumption is towards paper. When we speak about digitization for audio and video, we now are not speaking about simple online access. We are speaking about the continued viability, about the persistence and the existence of the media content.’

What information will future generations need to understand the digitised archive materials we produce?

An important point to reckon with here is that not all media are the same. The affordances of particular technologies, within specific historical contexts, have enabled new forms of community and communicative practice to emerge. Media are also disruptive (if not deterministic) – they influence how we see the world and what we can do.

On this blog, for example, Peter Sachs Collopy discussed how porta-pak technology enabled video artists and activists in the late 1960s/ early 1970s to document and re-play events quickly.

Such use of video is also evident in the 1975 documentary Les prostituées de Lyon parlent (The prostitutes of Lyon speak).

Les prostituées documents a wave of church occupations by feminist activists in France.

The film demonstrates how women used emergent videotape technology to transmit footage recorded within the church onto TV screens positioned outside. Here videotape technology, and in particular its capacity to broadcast uni-directional messages, was used to protect and project the integrity of the group’s political statements. Video, in this sense, was an important tool that enabled the women – many of whom were prostitutes and therefore without a voice in French society – to ‘speak’.

Peter’s interview and Les prostituées de Lyon parlent are specific examples of how AV formats are concretely embedded within a social-historical and technical context. The signal captured – when reduced to bit stream alone – is simply not an adequate archival source. Without sufficient context too much historical substance is shed.

In this respect I disagree with Ranger’s claim that ‘all that really may be needed moving ahead [for videotape digitisation] is a note in the record for the new digital preservation master that documents the source.’ To really preserve the material, the metadata record needs to be rich enough for a future researcher to understand how a format was used, and what it enabled users to do.

‘Rich enough’ will always be down to subjective judgement, but such judgements can be usefully informed by understanding what makes AV archive material unique, especially within the context of mixed-content archives.

Moving Forward

So, to think about this practically. How could the archive item I discuss at the beginning of the article be contextualised in a way that was useful to me, as a researcher?

At the most basic level the description would need to include:

  • The format it was recorded on, including details of tape stock and machine used to record material
  • When it was digitised
  • Who digitised it (an individual, an institution)

In an ideal world the metadata would include:

  • Images of the original artefact – particularly important if the digital version is now the only remaining copy
  • Storage history (of original and copy)
  • Accompanying information (e.g., production sheets, distribution history – anything that can illuminate the ‘life’ of artefact, how it was used)

This information could be embedded in the container file or be stored in associated metadata records.
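As a sketch of the sidecar option, the checklist above could be serialised as a simple record that travels alongside the file. The field names and values below are our own invention; a real implementation would want to look to established schemas such as PREMIS or PBCore:

```python
import json

# Sketch: a sidecar metadata record covering the fields listed above.
# Field names and example values are illustrative, not drawn from any
# formal metadata standard.
record = {
    "source_format": "U-matic",
    "tape_stock": "Sony KCA-60",          # assumed example value
    "transfer_machine": "Sony VO-9600P",  # assumed example value
    "digitised_on": "2016-09-12",
    "digitised_by": "an individual or institution, named here",
    "original_images": ["cassette_front.jpg", "cassette_spine.jpg"],
    "storage_history": "Held in donor's attic until 2010; archive store thereafter.",
    "accompanying_material": ["production_sheet.pdf"],
}

# Written next to the preservation master so the two travel together.
with open("master.mkv.metadata.json", "w") as f:
    json.dump(record, f, indent=2)
```

The same key–value pairs could instead be embedded as tags in a container such as Matroska, which supports arbitrary tagging.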

Matroska sample with embedded preservation metadata

These suggestions may seem obvious, but it is surprising the extent to which they are overlooked, especially when the most pressing concern during digitisation is access alone.

In every other area of archival life, preserving the context of an item is deemed important. The difference with AV material is that the context of use is often complex and, in the case of video, always changing.

As stressed earlier: in 2016 and beyond you will probably only get one chance to transfer collections stored on magnetic tape, so it is important to integrate rich descriptions as part of the transfer.

Capturing the content alone is not sufficient to preserve the integrity of the video artefact. Creating a richer metadata record will take more planning and time, but it will definitely be worth it, especially if we try to imagine how future researchers might want to view and understand the material.

Codecs and Wrappers for Digital Video

Thursday, April 9th, 2015

In the last Great Bear article we quoted sage advice from the International Association of Audiovisual Archivists: ‘Optimal preservation measures are always a compromise between many, often conflicting parameters.’ [1]

While this statement is true in general for many different multi-format collections, the issue of compromise and conflicting parameters becomes especially apparent with the preservation of digitized and born-digital video. The reasons for this are complex, and we shall outline why below.

Lack of standards (or are there too many formats?)

Carl Fleischhauer writes, reflecting on the Federal Agencies Digitization Guidelines Initiative (FADGI) research exploring Digital File Formats for Videotape Reformatting (2014), ‘practices and technology for video reformatting are still emergent, and there are many schools of thought. Beyond the variation in practice, an archive’s choice may also depend on the types of video they wish to reformat.’ [2]

We have written in depth on this blog about the labour intensity of digital information management in relation to reformatting and migration processes (which are of course Great Bear’s bread and butter). We have also discussed how the lack of settled standards tends to make preservation decisions radically provisional.

In contrast, we have written about default standards that have emerged over time through common use and wide adoption, highlighting how parsimonious, non-interventionist approaches may be more practical in the long term.

The problem for those charged with preserving video (as opposed to digital audio or images) is that ‘video, however, is not only relatively more complex but also offers more opportunities for mixing and matching. The various uncompressed-video bitstream encodings, for example, may be wrapped in AVI, QuickTime, Matroska, and MXF.’ [3]

What then, is this ‘mixing and matching’ all about?

It refers to all the possible combinations of bitstream encodings (‘codecs’) and ‘wrappers’ that are available as target formats for digital video files. Want to mix your JPEG2000 – Lossless with your MXF, or FFV1 with your AVI? Well, go ahead!

What, then, is the difference between a codec and a wrapper?

As the FADGI report states: ‘Wrappers are distinct from encodings and typically play a different role in a preservation context.’ [4]

The wrapper or ‘file envelope’ stores key information about the technical life or structural properties of the digital object. Such information is essential for long term preservation because it helps to identify, contextualize and outline the significant properties of the digital object.

Information stored in wrappers can include:

  • Content – number of video streams, length of frames
  • Context – title of object, who created it, description of contents, re-formatting history
  • Video rendering – width, height and bit depth, colour model within a given colour space, pixel aspect ratio, frame rate, compression type, compression ratio and codec
  • Audio rendering – bit depth and sample rate, bit rate and compression codec, type of uncompressed sampling
  • Structure – relationship between audio, video and metadata content

(adapted from the Jisc infokit on High Level Digitisation for Audiovisual Resources)

Codecs, on the other hand, define the parameters of the captured video signal. They are a ‘set of rules which defines how the data is encoded and packaged,’ [5] encompassing width, height and bit depth, colour model within a given colour space, pixel aspect ratio and frame rate, as well as the bit depth, sample rate and bit rate of the audio.

Although the wrapper is distinct from the encoded file, the encoded file cannot be read without its wrapper. The digital video file, then, comprises a wrapper and at least one codec – often two, to account for audio and images – as this illustration from AV Preserve makes clear.

Diagram taken from AV Preserve’s A Primer on Codecs for Moving Image and Sound Archives
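The wrapper/codec split can also be expressed as a simple data structure – a toy model of ours, not any formal schema – with one wrapper holding one or more encoded streams, and the same encodings free to be ‘mixed and matched’ across different wrappers:

```python
from dataclasses import dataclass

# Toy model of the wrapper/codec distinction. A single file pairs
# one wrapper with one or more encoded essence streams.

@dataclass
class Stream:
    kind: str    # "video" or "audio"
    codec: str   # bitstream encoding, e.g. "ffv1", "jpeg2000", "pcm_s24le"

@dataclass
class VideoFile:
    wrapper: str            # "mkv", "mxf", "avi", "mov"
    streams: list[Stream]

# The same encodings can be combined with different wrappers:
a = VideoFile("mkv", [Stream("video", "ffv1"),
                      Stream("audio", "pcm_s24le")])
b = VideoFile("mxf", [Stream("video", "jpeg2000"),
                      Stream("audio", "pcm_s24le")])
print(a.wrapper, [s.codec for s in a.streams])
```

In practice a tool such as ffprobe or MediaInfo reports exactly this split: a container format plus a codec per stream.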

Pick and mix complexity

Why then, are there so many possible combinations of wrappers and codecs for video files, and why has a settled standard not been agreed upon?

Fleischhauer at The Signal does an excellent job outlining the different preferences within practitioner communities, in particular relating to the adoption of ‘open’ and commercial/ proprietary formats.

Compellingly, he articulates a geopolitical divergence between these two camps, with those based in the US allegedly opting for commercial formats, and those in Europe opting for ‘open.’ This observation is all the more surprising because of the advice in FADGI’s Creating and Archiving Born Digital Video: ‘choose formats that are open and non-proprietary. Non-proprietary formats are less likely to change dramatically without user input, be pulled from the marketplace or have patent or licensing restrictions.’ [6]

One answer to the question of why there are so many different formats lies in different approaches to information management in an information-driven economy. The combination of competition and innovation results in a proliferation of open source formats and their proprietary doubles (or triplets, quadruples, etc.) that are constantly evolving in response to market ‘demand’.

Impact of the Broadcast Industry

An important driver of change to highlight here is the role of the broadcast industry.

Format selections in this sector have a profound impact on the creation of digital video files that will later become digital archive objects.

In the world of video, Kummer et al explain in an article in the IASA journal, ‘a codec’s suitability for use in production often dictates the chosen archive format, especially for public broadcasting companies who, by their very nature, focus on the level of productivity of the archive.’ [7] Broadcast production companies create content that needs to be able to be retrieved, often in targeted segments, with ease and accuracy. They approach the creation of digital video objects differently to how an archivist would, who would be concerned with maintaining file integrity rather than with ensuring the source material’s productivity.

Furthermore, production contexts in the broadcast world have a very short life span: ‘a sustainable archiving decision will have to be made again in ten years’ time, since the life cycle of a production system tends to be between 3 and 5 years, and the production formats prevalent at that time may well be different to those in use now.’ [8]

Take, for example, H.264/ AVC ‘by far the most ubiquitous video coding standard to date. It will remain so probably until 2015 when volume production and infrastructure changes enable a major shift to H.265/ HEVC […] H.264/ AVC has played a key role in enabling internet video, mobile services, OTT services, IPTV and HDTV. H.264/ AVC is a mandatory format for Blu-ray players and is used by most internet streaming sites including Vimeo, YouTube and iTunes. It is also used in Adobe Flash Player and Microsoft Silverlight and it has also been adopted for HDTV cable, satellite, and terrestrial broadcasting,’ writes David Bull in his book Communicating Pictures.

HEVC, which is ‘poised to make a major impact on the video industry […] offers the potential for up to 50% compression efficiency improvement over AVC.’ Furthermore, HEVC has a ‘specific focus on bit rate reduction for increased video resolutions and on support for parallel processing as well as loss resilience and ease of integration with appropriate transport mechanisms.’ [9]
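To put that 50% figure into concrete terms, a quick back-of-envelope calculation; the 8 Mbit/s AVC bitrate below is an illustrative assumption of ours, not a measured value:

```python
# Back-of-envelope: what a 50% compression-efficiency gain means for
# storage. The 8 Mbit/s AVC figure is an assumed, illustrative bitrate.
avc_bitrate = 8_000_000            # bits per second, assumed HD AVC stream
hevc_bitrate = avc_bitrate * 0.5   # comparable quality at half the rate

seconds_per_hour = 3600
bits_per_gb = 8 * 1000**3          # bits in a decimal gigabyte

avc_gb = avc_bitrate * seconds_per_hour / bits_per_gb
hevc_gb = hevc_bitrate * seconds_per_hour / bits_per_gb
print(f"1 hour of video: AVC ~{avc_gb:.1f} GB, HEVC ~{hevc_gb:.1f} GB")
```

Halving the bitrate halves the storage or transmission cost for the same hour of content, which is why broadcasters care so much.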

Increased compression

Codecs developed for use in the broadcast industry deploy increasingly sophisticated compression that reduces bit rate but retains image quality. As AV Preserve explain in their codec primer paper, ‘we can think of compression as a second encoding process, taking coded information and transferring or constraining it to a different, generally more efficient code.’ [10]

The explosion of mobile video data in the current media moment is one of the main reasons why sophisticated compression codecs are being developed. This should not pose any particular problems for the audiovisual archivist per se – if a file is ‘born’ with high degrees of compression, the authenticity of the file should not, ideally, be compromised in subsequent migrations.

Nevertheless, the influence of the broadcast industry tells us a lot about the types of files that will be entering the archive in the next 10-20 years. On a perceptual level, we might note an endearing irony: the rise of super HD and ultra HD goes hand in hand with increased compression applied to the captured signal. While compression cannot, necessarily, be understood as a simple ‘taking away’ of data, its increased use in ubiquitous media environments underlines how the perception of high definition is engineered in very specific ways, and this engineering does not automatically correlate with capturing more, or better quality, data.

Like error correction that we have discussed elsewhere on the blog, it is often the anticipation of malfunction that is factored into the design of digital media objects. These, in turn, create the impression of smooth, continuous playback—despite the chaos operating under the surface. The greater the clarity of the visual image, the more the signal has been squeezed and manipulated so that it can be transmitted with speed and accuracy. [11]

MXF

Staying with the broadcast world, we will finish this article by focussing on the MXF wrapper that was ‘specifically designed to aid interoperability and interchange between different vendor systems, especially within the media and entertainment production communities. [MXF] allows different variations of files to be created for specific production environments and can act as a wrapper for metadata & other types of associated data including complex timecode, closed captions and multiple audio tracks.’ [12]

The Presto Centre’s latest TechWatch report (December 2014) asserts ‘it is very rare to meet a workflow provider that isn’t committed to using MXF,’ making it ‘the exchange format of choice.’ [13]

We can see such adoption in action with the Digital Production Partnership’s AS-11 standard, which came into operation October 2014 to streamline digital file-based workflows in the UK broadcast industry.

While the FADGI report highlights the instability of archival practices for video, the Presto Centre argue that practices are ‘currently in a state of evolution rather than revolution, and that changes are arriving step-by-step rather than with new technologies.’

They also highlight the key role of the broadcast industry as future archival ‘content producers,’ and the necessity of developing technical processes that can be complementary for both sectors: ‘we need to look towards a world where archiving is more closely coupled to the content production process, rather than being a post-process, and this is something that is not yet being considered.’ [14]

The world of archiving and reformatting digital video is undoubtedly complex. As the quote used at the beginning of the article states, any decision can only ever be a compromise that takes into account organizational capacities and available resources.

What is positive is the amount of research openly available that can empower people with the basics, or help them delve into the technical depths of codecs and wrappers if so desired. We hope this article has pointed you towards some of the interesting resources available, and flagged some of the key issues.

As ever, if you have a video digitization project you need to discuss, contact us—we are happy to help!

References:

[1] IASA Technical Committee (2014) Handling and Storage of Audio and Video Carriers, 6. 

[2] Carl Fleischhauer, ‘Comparing Formats for Video Digitization.’ http://blogs.loc.gov/digitalpreservation/2014/12/comparing-formats-for-video-digitization/.

[3] Federal Agencies Digital Guidelines Initiative (FADGI), Digital File Formats for Videotape Reformatting Part 5. Narrative and Summary Tables. http://www.digitizationguidelines.gov/guidelines/FADGI_VideoReFormatCompare_pt5_20141202.pdf, 4.

[4] FADGI, Digital File Formats for Videotape, 4.

[5] AV Preserve (2010) A Primer on Codecs for Moving Image and Sound Archives & 10 Recommendations for Codec Selection and Management, www.avpreserve.com/wp-content/…/04/AVPS_Codec_Primer.pdf, 1.

[6] FADGI (2014) Creating and Archiving Born Digital Video Part III. High Level Recommended Practices, http://www.digitizationguidelines.gov/guidelines/FADGI_BDV_p3_20141202.pdf, 24.

[7] Jean-Christophe Kummer, Peter Kuhnle and Sebastian Gabler (2015) ‘Broadcast Archives: Between Productivity and Preservation’, IASA Journal, vol 44, 35.

[8] Kummer et al, ‘Broadcast Archives: Between Productivity and Preservation,’ 38.

[9] David Bull (2014) Communicating Pictures, Academic Press, 435-437.

[10] AV Preserve, A Primer on Codecs for Moving Image and Sound Archives, 2.

[11] For more reflections on compression, check out this fascinating talk from software theorist Alexander Galloway. The more practically bent can download and play with VISTRA, a video compression demonstrator developed at the University of Bristol ‘which provides an interactive overview of some of the key principles of image and video compression.’

[12] FADGI, Digital File Formats for Videotape, 11.

[13] Presto Centre, AV Digitisation and Digital Preservation TechWatch Report #3, https://www.prestocentre.org/, 9.

[14] Presto Centre, AV Digitisation and Digital Preservation TechWatch Report #3, 10-11.

Videokunstarkivet – Norway’s Digital Video Art Archive

Monday, July 7th, 2014

We have recently digitised a U-matic video tape of eclectic Norwegian video art from the 1980s. The tape documents a performance by Kjartan Slettemark, an influential Norwegian/ Swedish artist who died in 2008. The tape is the ‘final mix’ of a video performance entitled Chromakey Identity Blue in which Slettemark live mixed several video sources onto one tape.

The theoretical and practical impossibility of documenting live performance has been hotly debated in recent times by performance theorists, and there is some truth to those claims when we consider the encounter with Slettemark’s work in the Great Bear studio. The recording is only one aspect of the overall performance which, arguably, was never meant as a stand alone piece. This was certainly reflected in our Daily Mail-esque reaction to the video when we played it back. ‘Eh? Is this art?! I don’t get it!’ was the resounding response.

Having access to the wider context of the performance is sometimes necessary if the intentions of the artist are to be appreciated. Thankfully, Slettemark’s website includes part-documentation of Chromakey Identity Blue, and we can see how the different video signals were played back on various screens, arranged on the stage in front of (what looks like) a live TV audience.

Upon seeing this documentation, the performance immediately evokes the wider context of 70s/ 80s video art, which used the medium to explore the relationship between the body, space, screen and, in Slettemark’s case, the audience. A key part of Chromakey Identity Blue is the interruption of the audience’s presence in the performance, realised when their images are screened across the face of the artist, whose wearing of a chroma key mask enables him to perform a ‘special effect’ which layers two images or video streams together.

What unfolds through Slettemark’s performance is at times humorous, suggestive and moving, largely because of the ways the faces of different people interact, perform or simply ignore their involvement in the spectacle. As Marina Abramović’s use of presence testifies, there can be something surprisingly raw and even confrontational about incorporating the face into relational art. As an ethical space, meeting with the ‘face’ of another became a key concept for twentieth century philosopher Emmanuel Levinas. The face locates, Bettina Bergo argues, ‘“being” as an indeterminate field’ in which ‘the Other as a face that addresses me […] The encounter with a face is inevitably personal.’

If an art work like Slettemark’s is moving, then, it is because it stages moments where ‘faces’ reflect and interface across each other. Faces meet and become technically composed. Through the performance of personal-facial address in the artwork, it is possible to glimpse for a brief moment the social vulnerability and fragility such meetings engender. Brief, because the seriousness is diffused in Chromakey Identity Blue by the kitsch use of a disco ball that the artist moves across the screen to symbolically change the performed image, conjuring the magical feel of new technologies and how they facilitate different ways of seeing, being and acting in the world.

Videokunstarkivet (The Norwegian Video Art Archive)

VKA DAM Interface

The tape of Slettemark was sent to us by Videokunstarkivet, an exciting archival project mapping all the works of video art that have been made in Norway since the mid-1960s. Funded by the Norwegian Arts Council, the project has built the digital archival infrastructure from the bottom up, and those working on it have learnt a good many things along the way. Per Platou, who is managing the project, was generous enough to share some of the insights for readers of our blog, and a selection of images from the archive’s interface.

There are several things to be considered when creating a digital archive ‘from scratch’. Often at the beginning of a large project it is possible to look around for examples of best practice within your field. This isn’t always the case for digital archives, particularly those working almost exclusively with video files, whose communities of practice are unsettled and established ways of working few and far between. The fact that even in 2014, when digital technologies have been widely adopted throughout society, there is still not any firm agreement on standard access and archival file formats for video files indicates the peculiar challenges of this work.

Because of this, projects such as Videokunstarkivet face multiple challenges, with significant amounts of improvisation required in the construction of the project infrastructure. An important consideration is the degree of access users will have to the archive material. As Per explained, publicly re-publishing the archive material from the site in an always open access form is not a concern of the Videokunstarkivet, largely due to the significant administrative issues involved in gaining licensing and copyright permissions. ‘I didn’t even think there was a difference between collecting and communicating the work, yet after a while I saw there is no point in showing everything; it has to be filtered and communicated in a certain way.’

VKA DAM Interface

Instead, interested users will be given a research key or password which enables them to access the data and edit metadata where appropriate. If users want to re-publish or show the art in some form, contact details for the artist/ copyright holder are included as part of the entry. Although the Videokunstarkivet deals largely with video art, entries on individual artists include information about other archival collections where their material may be stored, in order to facilitate further research. Contemporary Norwegian video artists are also encouraged to deposit material in the database, ensuring that ongoing collecting practices are built in to the long-term project infrastructure.

Another big consideration in constructing an archive is what to collect. Per told me that video art in Norway really took off in the early 80s. Artists who incorporated video into their work weren’t necessarily specialists in the medium, ‘there just happened to be a video camera nearby so they decided to use it.’ Video was therefore often used alongside films, graphics, performance and text, making the starting point for the archive, according to Per, ‘a bit of a mess really.’ Nonetheless, Videokunstarkivet ‘approaches every artist like it was Edvard Munch,’ because it is very hard to know now exactly what will be culturally valuable in 10, 20 or even 100 years from now. While it may not be appropriate to ‘save everything!’ for larger archival projects, for a self-contained and focused archival project such as the Videokunstarkivet, an inclusive approach may well be perfectly possible.

Building software infrastructures

Another important aspect of the project is technical considerations – the actual building of the back/ front end of the software infrastructure that will be used to manage newly migrated digital assets.

It was very important that the Videokunstarkivet archive was constructed using Open Source software. It was necessary to ensure resilience in a rapidly changing technological context, and so the project could benefit from any improvements in the code as they are tested out by user communities.

The project uses an adapted version of the Digital Asset Management system Resource Space that was developed with LIMA, an organisation based in Holland that preserves, distributes and researches media art. Per explained that ‘since Resource Space was originally meant for photos and other “light” media files, we found it not so well suited for our actual tasks.’ Video files are of course far ‘heavier’ than image or even uncompressed audio files. This meant that there were some ‘pretty severe’ technical glitches in the process of establishing a database system that could effectively manage and play back large, uncompressed master and access copies. Through establishing the Videokunstarkivet archive they were ‘pushing the limits of what is technically possible in practice’, largely because internet servers are not built to handle large files, particularly not if those files are being transcoded back and forth across the file management system. In this respect, the project is very much ‘testing new ground’, creating an infrastructure capable of effectively managing, and enabling people to remotely access, large amounts of high-quality video data.

VKA DAM Interface

Access files will be available to stream using the open source encodings WebM (hi and lo) and x264 (hi and lo), ensuring that streaming conditions can be adapted to individual server capabilities. The system is also set up to manage large-scale file transcoding should there be a substantial change in file format preferences. These changes can occur without compromising the integrity of the uncompressed master file.
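As a sketch of how those four access renditions might be scripted with FFmpeg: libvpx and libx264 are FFmpeg’s standard WebM and H.264 encoders, but the bitrates and file naming below are assumptions of ours rather than Videokunstarkivet’s actual settings:

```python
# Sketch: generate FFmpeg commands for hi/lo WebM and H.264 access
# copies from an uncompressed master. Bitrates and naming are assumed.

PROFILES = {
    "webm_hi": ("libvpx", "4M", "webm"),
    "webm_lo": ("libvpx", "1M", "webm"),
    "x264_hi": ("libx264", "4M", "mp4"),
    "x264_lo": ("libx264", "1M", "mp4"),
}

def access_commands(master: str) -> list[list[str]]:
    """Build one FFmpeg argument list per access profile (not run here)."""
    stem = master.rsplit(".", 1)[0]
    return [
        ["ffmpeg", "-i", master, "-c:v", codec, "-b:v", bitrate,
         f"{stem}_{name}.{ext}"]
        for name, (codec, bitrate, ext) in PROFILES.items()
    ]

for cmd in access_commands("slettemark_master.mov"):
    print(" ".join(cmd))
```

The master file is only ever read, never rewritten, so regenerating access copies under a future format preference leaves its integrity untouched.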

The interface is built with Bootstrap which has been adapted to create ‘a very advanced access-layer system’ that enables Videokunstarkivet to define user groups and access requirements. Per outlined these user groups and access levels as follows:

‘– Admin: Access to everything (i.e. Videokunstarkivet team members)

– Research: Researchers/curators can see video works, and almost all the metadata (incl previews of the videos). They cannot download master files. They can edit metadata fields, however all their edits will be visible for other users (Wikipedia style). If a curator wants to SHOW a particular work, they’ll have to contact the artist or owner/gallery directly. If the artist agrees, they (or we) can generate a download link (or transcode a particular format) with a few clicks.

– Artist: Artists can up/download uncompressed master files freely, edit metadata and additional info (contact, cv, websites etc etc). They will be able to use the system to store digital master versions freely, and transcode files or previews to share with who they want. The ONLY catch is that they can never delete a master file – this is of course coming out of national archive needs.’
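The three tiers Per describes could be modelled as a simple role-permission table. The following is a hypothetical sketch of the logic described above – the role and action names are our own, not the actual system's:

```python
# Hypothetical model of the three access tiers: a role-permission table.
# Action names are illustrative, not taken from the Videokunstarkivet system.

PERMISSIONS = {
    "admin":    {"view", "edit_metadata", "download_master", "upload_master",
                 "transcode", "delete_master"},
    "research": {"view", "edit_metadata"},          # previews only, no masters
    "artist":   {"view", "edit_metadata", "download_master", "upload_master",
                 "transcode"},                      # everything except deletion
}

def allowed(role, action):
    """Check whether a given role may perform a given action."""
    return action in PERMISSIONS.get(role, set())

# The archival constraint: only admin may ever delete a master file.
assert not allowed("artist", "delete_master")
assert not allowed("research", "download_master")
assert allowed("artist", "download_master")
```

Encoding the 'never delete a master' rule at this level, rather than leaving it to user discipline, is what turns a file store into something closer to a national archive.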

Per approached us to help migrate the Kjartan Slettemark tape because of the thorough approach and conscientious methodology we apply to digitisation work. As a media archaeology enthusiast, Per stressed that it was desirable for both aesthetic and archival reasons that the materiality of U-matic video was visible in the transferred file. He didn’t want the tape, in other words, to be ‘cleaned up’ in any way. To migrate the tape to digital file we used our standardised transfer chain for U-matic tape. This includes using an appropriate time base corrector contemporary to the U-matic era, and conversion of the dub signal using a dedicated external dub-to-Y/C converter circuit.

We are very happy to be working with projects such as the Videokunstarkivet. It has been a great opportunity to learn about the nuts and bolts design of cutting-edge digital video archives, as well as discover the work of Kjartan Slettemark, whose work is not well-known in the UK. Massive thanks must go to Per for his generous sharing of time and knowledge in the process of writing this article. We wish the Videokunstarkivet every success and hope it will raise the profile of Norwegian video art across the world.

Mistress or master? Digitising the cultural heritage of women’s movements

Monday, April 14th, 2014

U-Matic video case with lettering ‘mistress copy’

The Women’s Liberation Movement (WLM) is full of quirky examples of how womyn tried to wrestle culture from the sordid grip of male domination.

Part of this process was reinventing the world in wimmin’s image, word and song; to create and reclaim a lasting herstory in which sisterhood could flourish.

A recent U-Matic video tape transfer conducted in the Great Bear studio offers a window into this cultural heritage. 

State Your Destination was a film made by Bristol-based 80s feminist film collective Women in Moving Pictures (W.I.M.P.S.), whose complete archive is stored at the Feminist Archive South.

We previously migrated another film by W.I.M.P.S called In Our Own Time, screened at the recent Translation/ Transmission Women’s Film Season which took place at Watershed.

The way women shirked the language of patriarchy is evident on the tape box. We digitised the ‘MISTRESS’ copy, not the master copy.

Seeing the mistress copy today is a reminder of the way gendered language influences how we can think about cultural forms.  The master copy, of course, in conventional understanding, is the finished article, the final cut. The master of the house – the person in charge – is gendered male. Yet is this still the case?

Writing about a similar issue almost thirty years later, sound theorists Jonathan Sterne and Tara Rodgers seem to think so:

‘If we find that audio-technical discourse renders signal processing in terms of masculinist languages of mastery and domination of nature, can we help but wonder after its broader social implications? Does it not also suggest a gendered set of relations to these technologies? Is it any wonder we still find the design, implementation, marketing, and use of audio-signal processing technologies to be male-dominated fields? [To change things] it will require fundamentally rethinking how we model, describe, interact, and sound with signal processing technologies’.

For feminist women who felt systematically excluded from certain kinds of cultural and economic activity, the gendering of language was an extension of violence they experienced because they were women.

Making the tape a MISTRESS may help rectify the problem, as does crossing out the very idea of a master copy.

Digitising Stereo Master Hi-Fi VHS Audio Recordings

Tuesday, March 18th, 2014

The history of amateur recording is peppered with examples of people who stretched technologies to their creative limit. Whether this comes in the form of hours spent trying things out and learning through doing, endlessly bouncing tracks in order to turn an 8-track recording into a 24-track epic or making high quality audio masters on video tape, people have found ways to adapt and experiment using the tools available to them.

Hollow Hand Demos

One of the lesser known practices in the history of amateur home recording is the making of high quality stereo mixdowns and master recordings from multi-track audio tape onto consumer-level Hi-Fi VCRs.

We are currently migrating a stereo master VHS Hi-Fi recording of London-based indie band Hollow Hand. Hollow Hand later adopted the name Slanted and were active in London between 1992-1995. The tapes were sent in by Mark Venn, the bass player with Slanted and engineer for these early recordings, which were made in 1992 in the basement of a Clapham squat. Along with the Hi-Fi VHS masters, we have also been sent eight reels of AMPEX ¼ inch tape of Slanted that are being transferred for archival purposes. Mark intends to remix the eight-track recordings digitally but as of yet has no plans for a re-release.

When Mark sent us the tapes to be digitised he thought they had been encoded with a SONY PCM, a mixed digital/ analogue recording method we have covered in a previous blog post. The tapes had, however, been recorded directly from the FOSTEX eight track recorder to the stereo Hi-Fi function on a VHS video tape machine. For Mark at the time this was the best way to get a high quality studio master because other analogue and digital tape options, such as Studer open reel to reel and DAT machines, were financially off-limits to him. It is worth mentioning that Hi-Fi audio technologies were introduced in the VHS model by JVC around 1984, so using this method to record stereo masters would have been fairly rare, even among people who did a lot of home recording. It was certainly a bit of a novelty in the Great Bear Studio – they are the first tapes we have ever received that were recorded in this way, and we see a lot of tape.

Using the Hi-Fi function on VHS tape machines was probably as good as it got in terms of audio fidelity for those working in an exclusively analogue context. It produced a master recording comparable in quality to a CD, particularly if the machine had manual audio recording level control. This is because, as we wrote about in relation to PCM/ Betamax, video tape could accommodate greater bandwidth than audio tape (particularly audio cassette), therefore leading to better quality recordings.

One of our replacement upper head drums

VHS Hi-Fi audio is achieved using audio frequency modulation (AFM) and relies on a form of magnetic recording called ‘depth multiplexing’. This is when

‘the modulated audio carrier pair was placed in the hitherto-unused frequency range between the luminance and the colour carrier (below 1.6 MHz), and recorded first. Subsequently, the video head erases and re-records the video signal (combined luminance and colour signal) over the same tape surface, but the video signal’s higher centre frequency results in a shallower magnetization of the tape, allowing both the video and residual AFM audio signal to coexist on tape.’

Challenges for migrating Hi-Fi VHS Audio

Although the recordings of Hollow Hand are in good working condition, analogue masters to VHS Hi-Fi audio do face particular challenges in the migration process.

Playing back the tapes in principle is easy if both tape and machine are in optimum condition, but if either are damaged the original recordings can be hard to reproduce.

A particular problem for Hi-Fi audio emerges when the tape heads wear: it becomes harder to track the Hi-Fi audio recording because the radio frequency (RF) signal can’t be read consistently off the tape. Hi-Fi recordings are harder to track because of depth multiplexing, namely the position of the recorded audio relative to the video signal. Even though no video signal as such is reproduced in the playback of Hi-Fi audio, the video signal is still there, layered on top of the audio signal, essentially making it harder to access. Of course when tape heads/ drums wear down they can always be replaced, but acquiring spare parts will become increasingly difficult in years to come, making Hi-Fi audio recordings on VHS particularly threatened.

In order to migrate tape-based media to digital files in the most effective way possible, it is important to use appropriate machines for the transfer. The Panasonic AG-7650 we used to transfer the Hollow Hand tapes afforded us great flexibility because it is possible to select which audio tracks are played back at any given time, which meant we could isolate the Hi-Fi audio track. The Panasonic AG-7650 also has tracking meters, which make it easy to assess and adjust the tracking of the tape and tape head where necessary.

As ever, the world of digitisation continues to generate anomalies, surprises and good stories. Who knows how many other video/ audio hybrid tapes are out there! If you do possess an archive collection of such tapes we advise you to take action to ensure they are migrated because of the unique problems they pose as a storage medium.

‘Missing Believed Wiped’: The Search For Lost TV Treasures

Monday, March 10th, 2014

Contemporary culture is often presented as drowning in mindless nostalgia, with everything that has ever been recorded circulating in a deluge of digital information.

Whole subcultures have emerged in this memory boom, as digital technologies enable people to come together via a shared passion for saving obscurities presumed to be lost forever. One such organisation is Kaleidoscope, whose aim is to keep the memory of ‘vintage’ British television alive. Their activities capture an urgent desire bubbling underneath the surface of culture to save everything, even if the quality of that everything is questionable.

Of course, as the saying goes, one person’s rubbish is another person’s treasure. As with most cultural heritage practices, the question of value is at the centre of people’s motivations, even if that value is expressed through a love for Pan’s People, Upstairs, Downstairs, Dick Emery and the Black and White Minstrel Show.

We were recently contacted by a customer hunting for lost TV episodes. His request: to lay hands on any old tapes that may unwittingly be laden with lost jewels of TV history. His enquiry is not so strange, since an episode of 70s Top of the Pops – a programme a large proportion of which was deleted from the official BBC archive – trailed at the end of a ½ inch EIAJ video tape we recently migrated. And how many other video tapes stored in attics, sheds or barns potentially contain similar material? Or, as stated on the Kaleidoscope website:

‘Who’d have ever imagined that a modest, sometimes mould-infested collection of VHS tapes in a cramped back bedroom in Pill would lead to the current Kaleidoscope archive, which hosts the collections of many industry bodies as well as such legendary figures as Bob Monkhouse or Frankie Howerd?’

Selection and appraisal in the archive

Mysterious tapes?

Living in an age of seemingly infinite information, it is easy to forget that any archival project involves keeping some things and throwing away others. Careful consideration of the value of an item needs to be made, both in relation to contemporary culture and the projected needs of subsequent generations.

These decisions are not easy and carry great responsibility. After all, how is it possible to know what society will want to remember in 10, 20 or even 30 years from now, let alone 200? The need to remember is not static either, and may change radically over time. What is kept now also strongly shapes future societies because our identities, lives and knowledge are woven from the memory resources we have access to. Who then would be an archivist?

When faced with such a conundrum the impulse to save everything is fairly seductive, but this is simply not possible. Perhaps things were easier in the analogue era, when physical storage constraints conditioned the arrangement of the archive. Things had to be thrown away because the clutter was overwhelming. With the digital archive, always storing more seems possible because data appears to take up less space. Yet as we have written about before on the blog, just because you can’t touch or even see digital information doesn’t mean it is not there. Energy consumption is costly in a different way, and still needs to be accounted for when appraising how resource intensive digital archives are.

For those who want their media memories to remain intact, whole and accessible, learning about the clinical nature of archival decisions may raise concern. The line does however need to be drawn somewhere. In an interview in 2004 posted on the Digital Curation Centre’s website, Richard Wright, who worked in the BBC’s Information and Archives section, explained the long term preservation strategy for the institution at the time.

‘For the BBC, national programmes that have entered the main archive and been fully catalogued have not, in general, been deleted. The deletions within the retention policy mainly apply to “contribution material” i.e. components (rushes) of a final programme, or untransmitted material. Hence, “long-term” for “national programmes that have entered the main archive and been fully catalogued” means in perpetuity. We have already kept some material for more than 75 years, including multiple format migrations.’

Value – whose responsibility?

For all those episodes, missing believed wiped, the treasure hunters who track them down tread a fine line between a personal obsession and offering an invaluable service to society. You decide.

What is inspiring about amateur preservationists is that they take the question of archival value into their own hands. In the 21st century, appraising and selecting the value of cultural artefacts is therefore no longer the exclusive domain of the archivist, even if expertise about how to manage, describe and preserve collections certainly is.

Does the popularity of such activities change the constitution of archives? Are they now more egalitarian spaces that different kinds of people contribute to? It certainly suggests that now, more than ever, archives always need to be thought of in plural terms, as do the different elaborations of value they represent.

2″ Quad Video Tape Transfers – new service offered

Monday, March 3rd, 2014

We are pleased to announce that we are now able to support the transfer of 2″ Quadruplex Video Tape (PAL, SECAM & NTSC) to digital formats.

Quadruplex Scanning Diagram

2” Quad was a popular broadcast analogue video tape format whose halcyon period ran from the late 1950s to the 1970s. The first quad video tape recorder made by AMPEX in 1956 cost a modest $45,000 (that’s $386,993.38 in today’s money).

2” Quad revolutionized TV broadcasting which previously had been reliant on film-based formats, known in the industry as ‘kinescope‘ recordings. Kinescope film required significant amounts of skilled labour as well as time to develop, and within the USA, which has six different time zones, it was difficult to transport the film in a timely fashion to ensure broadcasts were aired on schedule.

To counter these problems, broadcasters sought to develop magnetic recording methods, which had proved so successful for audio, for use in the television industry.

The first experiments directly adapted the longitudinal recording method used to record analogue audio. This however was not successful because video recordings require far more bandwidth than audio. Recording a video signal with stationary tape heads (as they are in the longitudinal method) meant that the tape had to be run at a very high speed in order to accommodate sufficient bandwidth to reproduce a good quality video image. A lot of tape was used!
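A rough back-of-the-envelope calculation shows why. The head-to-tape speed needed is the highest signal frequency multiplied by the shortest wavelength the head can resolve on the tape; the wavelength figure below is an assumption for illustration, not a number from any particular machine:

```python
# Order-of-magnitude illustration (figures are assumptions): head-to-tape
# speed required = signal frequency x shortest resolvable recorded wavelength.

min_wavelength_m = 5e-6   # assume heads resolve ~5 micrometre wavelengths
audio_bw_hz = 20e3        # ~20 kHz analogue audio bandwidth
video_bw_hz = 5e6         # ~5 MHz broadcast video bandwidth

def speed_for(freq_hz):
    """Head-to-tape speed (m/s) needed to record the given frequency."""
    return freq_hz * min_wavelength_m

print(f"audio: {speed_for(audio_bw_hz):.2f} m/s")  # a modest linear tape speed
print(f"video: {speed_for(video_bw_hz):.0f} m/s")  # ~25 m/s of tape per second
```

Pulling tape past stationary heads at tens of metres per second is clearly impractical, which is why spinning the heads across the tape instead, as described next, was the breakthrough.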

Ampex, who at the time owned the trademark marketing name for ‘videotape’, then developed a method where the tape heads moved quickly across the tape, rather than the other way round. On the 2” quad machine, four magnetic record/reproduce heads are mounted on a headwheel spinning transversely (width-wise) across the tape, striking the tape at a 90° angle. The recording method was not without problems because, as the Toshiba Science Museum writes, it ‘combined the signal segments from these four heads into a single video image’, which meant that ‘some colour distortion arose from the characteristics of the individual heads, and joints were visible between signal segments.’

Quad scanning

The limitations of Quadruplex recording influenced the development of the helical scan method, that was invented in Japan by Dr. Kenichi Sawazaki of the Mazda Research Laboratory, Toshiba, in 1954. Helical scanning records each segment of the signal as a diagonal stripe across the tape. ‘By forming a single diagonal, long track on two-inch-wide tape, it was possible to record a video signal on one tape using one head, with no joints’, resulting in a smoother signal. Helical scanning was later widely adopted as a recording method in broadcast and domestic markets due to its simplicity, flexibility, reliability and economical use of tape.

This brief history charting the development of 2″ Quad recording technologies reveals that efficiency and cost-effectiveness, alongside media quality, were key factors driving the innovation of video tape recording in the 1950s.

 

Early digital tape recordings on PCM/ U-matic and Betamax video tape

Monday, February 3rd, 2014

We are now used to living in a born-digital environment, but the transition from analogue to digital technologies did not happen overnight. In the late 1970s, early digital audio recordings were made possible by a hybrid analogue/digital system. It was composed of the humble transport and recording mechanisms of the video tape machine, and a not so humble PCM (pulse-code modulation) digital processor. Together they created the first two-channel stereo digital recording system.

Inside a Betamax Video Recorder

The first professional-use digital processing machine, made by SONY, was the PCM-1600. It was introduced in 1978 and used a U-Matic tape machine. Later models, the PCM-1610/1630, acted as the first standard for mastering audio CDs in the 1980s. SONY employee Toshitada Doi, whose impressive CV includes the development of the PCM adaptor, the Compact Disc and the CIRC error correction system, visited recording studios around the world in an effort to facilitate the professional adoption of PCM digital technologies. He was not, however, welcomed with open arms, as the SONY corp. website explains:

‘Studio engineers were opposed to digital technology. They criticized digital technology on the grounds that it was more expensive than analogue technology and that it did not sound as soft or musical. Some people in the recording industry actually formed a group called MAD (Musicians Against Digital), and they declared their position to the Audio Engineering Society (AES).’

Several consumer/ semi-professional models were marketed by SONY in the 70s and 80s, starting with the PCM-1 (1977). In a retro-review of the PCM-F10 (1981), Dr Frederick J. Bashour explains that

‘older model VCRs often worked better than newer ones since the digital signal, as seen by the VCR, was a monochrome pattern of bars and dots; the presence of modern colour tweaking and image compensation circuits often reduced the recording system’s reliability and, if possible, were turned off.’

Why did the evolution of an emerging digital technology stand on the shoulders of what had, by 1981, become a relatively mature analogue technology? It all comes down to the issue of bandwidth. A high quality PCM audio recording required 1–1.5 MHz of bandwidth, far greater than a conventional analogue audio signal (15–20 kHz). While this bandwidth was beyond the scope of analogue audio recording technology of the time, video tape recorders did have the capacity to record signals with higher bandwidths.
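The arithmetic behind that figure is easy to check. A data rate in bit/s is not strictly the same thing as an analogue bandwidth in Hz, but it sets the order of magnitude:

```python
# Raw data rate of two-channel 16-bit PCM audio at 44.1 kHz - the kind of
# signal a PCM adaptor had to squeeze onto video tape.

sample_rate_hz = 44_100
bits_per_sample = 16
channels = 2

data_rate = sample_rate_hz * bits_per_sample * channels
print(data_rate)  # 1411200 bit/s, i.e. ~1.4 Mbit/s - the 1-1.5 MHz ballpark
```

No analogue audio recorder of the era could come close to this; a video recorder, built for megahertz-range signals, could.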

If you have ever wondered where the 16-bit/44.1 kHz sampling standard for the CD came from, it was because in the early 1980s, when the CD standard was agreed, there was no other practical way of storing digital sound than by a PCM converter and video recorder combination. As the Wikipedia entry for the PCM adaptor explains, ‘the sampling frequencies of 44.1 and 44.056 kHz were thus the result of a need for compatibility with the 25-frame (CCIR 625/50 countries) and 30-frame black and white (EIA 525/60 countries) video formats used for audio storage at the time.’ The sampling rate was adopted as the standard for CDs and, unlike many other things in our rapidly changing technological world, it hasn’t changed since.
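The compatibility arithmetic can be checked directly. The line counts below are the commonly cited figures for PCM adaptor formats, which stored three 16-bit samples per usable video line; they are not numbers taken from the post itself:

```python
# Why 44.1 kHz? Samples per second = fields per second x usable lines per
# field x samples per line. Line counts are the usual PCM adaptor figures.

samples_per_line = 3

pal_fields, pal_lines = 50, 294        # 625/50 video
ntsc_fields, ntsc_lines = 60, 245      # 525/60 monochrome video

print(pal_fields * pal_lines * samples_per_line)    # 44100 -> 44.1 kHz
print(ntsc_fields * ntsc_lines * samples_per_line)  # 44100 -> 44.1 kHz
print(round(59.94 * ntsc_lines * samples_per_line)) # 44056 -> 44.056 kHz
                                                    # (colour NTSC field rate)
```

Both video standards land on exactly 44,100 samples per second, which is why the same rate could serve 625/50 and 525/60 countries alike.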

The fusion of digital and analogue technologies did not last long, and the introduction of DAT tapes in 1987 rendered the PCM digital converters/ video tape system largely obsolete. DAT recorders basically did the same job as PCM/ video but came in one, significantly smaller, machine. DAT machines had the added advantage of being able to accept multiple sampling rates (the standard 44.1 kHz, as well as 48kHz, and 32kHz, all at 16 bits per sample, and a special LP recording mode using 12 bits per sample at 32 kHz for extended recording time).

Problems with migrating early digital tape recordings

There will always be a risk with any kind of magnetic tape recording that there won’t be enough working tape machines to play back the material recorded on them in the future. As spare parts become harder to source, machines with worn-out transport mechanisms will simply become inoperable. We are not quite at this stage yet, and at Great Bear we have plenty of working U-Matic, Betamax and VHS machines, so don’t worry too much! Machine obsolescence is however a real threat facing tape based archives.

Such a problem comes into sharp relief when we consider the case of digital audio recordings made on analogue video tape machines. Audio recordings ‘work’ the tape transport in a far more vigorous fashion than average domestic video use. The tape may be rewound and fast-forwarded more often, and in a professional environment may be in constant use, leading to greater wear and tear.

Those who chose to adopt digital early and made recordings on tape will have marvelled at the lovely clean recordings and the wonders of error correction technology. As a legacy format however, tape-based digital recordings are arguably more at risk than their analogue counterparts. They are doubly compromised by the fragility of tape and the particular problems that befall digital technologies when things go wrong.

Example of edge damage on a video tape

‘Edge damage’ is very common in video tape and can happen when the tape transport becomes worn. This can alter the alignment of the transport mechanism, causing it to move up and down and crush the tape. As you can see in this photograph, the edge of this tape has become damaged.

Because it is a digital recording, this has led to substantial problems with the transfer, namely that large sections of the recording simply ‘drop out’. In instances such as these, where the tape itself has been damaged, analogue recordings on tape are infinitely more recoverable than digital ones. Dr John W.C. Van Bogart explains that

‘even in instances of severe tape degradation, where sound or video quality is severely compromised by tape squealing or a high rate of dropouts, some portion of the original recording will still be perceptible. A digitally recorded tape will show little, if any, deterioration in quality up to the time of catastrophic failure when large sections of recorded information will be completely missing. None of the original material will be detectable in these missing sections.’

This risk of catastrophic, as opposed to gradual, loss of information on tape-based digital media is what makes these recordings particularly fragile and at risk. What is particularly worrying about digital tape recordings is that they may not show any external signs of damage until it is too late. We therefore encourage individuals, recording studios and memory institutions to assess the condition of their digital tape collections and take prompt action if the recorded information is valuable.

***

 The story of PCM digital processors and analogue tapes gives us a fascinating window into a time when we were not quite analogue, but not quite digital either, demonstrating how technologies co-evolve using the capacities of what is available in order to create something new.

 

Digitise VHS Tapes – Bristol’s Meet Your Feet

Monday, October 14th, 2013

We recently digitised some VHS tapes from when Bristol-based band Meet Your Feet performed on HTV in 1990. Meet Your Feet

‘formed in 1988 as a result of three of the women getting together to start a women’s music workshop, Meet Your Feet played its first gig in June 1988, when asked to get a set together for a Benefit Gig against section 28. This gig was so successful that the band decided to stay together and gradually the original line-up of the early years of the band evolved: Carol Thomas, vocals; Diana Milstein, founder member, bass and lyricist; Diggy, percussion; Heie Gelhaus, founder member, keyboards and songwriter; Julie Lockhart, vocals; Karen Keen, sax; Sue Hewitt, founder member, drums and songwriter; Vicki Burke, sax’ (taken from the  Women’s Liberation Music Archive).

During the 80s the band achieved great success and performed at prestigious festivals such as Glastonbury and WOMAD, as well as appearing on Radio 4’s Woman’s Hour. They played together until 1992 before disbanding, reformed in 2010 and continue to play shows in Bristol and beyond. Meet Your Feet’s style, which draws from Latin, Jazz and Soul influences, interspersed with passionate, upbeat political lyrics, aligns them with other ‘women’s music’ bands from the 1980s, such as The Guest Stars and Hi-Jinx.

Meet Your Feet from Adrian Finn on Vimeo.

The video clip we digitised is interesting because it indicates how novel women’s bands were in 1990.

After the band finish performing their new single, they take part in a short interview where they are asked:

‘It’s an obvious question, but I am going to ask it: why all women?’

Julie Lockhart, one of the singers, responds wittily, but not without a tinge of bewilderment, ‘Um, we were born that way!’

Can you imagine an all-male group being asked a similar question in a television interview, either now or in the early 1990s?! It just wouldn’t happen, because no one notices if all the members of a group are male; it just seems completely normal.

The interview goes on to emphasise gender issues, rather than focus on other aspects, such as themes in the band’s music, or the fact that it is a large group (there are nine people in the band, after all, which is a lot!).

This is not a criticism of the interviewer’s questions as such. Yet the fact that it was necessary to ask them about their gender speaks volumes about how surprising it was to see women playing music together. The interview continues as follows:

Presenter: Are there any real advantages to being an all female group?

Sue Hewitt: We listen to each other more, and spin ideas off each other a lot more easily

Julie Lockhart: We giggle a lot more

Presenter: Do you row a lot because you are on the road? It’s a hard life, isn’t it, very intense?

Julie Lockhart: No, that’s the obvious difference – we never row!

Presenter: Do you find it hard to be taken seriously by men who come to see an all girl band?

Sue Hewitt: Well no, not all the time. I think initially some men take the view of ‘oh well, it’s just a bunch of girls on stage’, but when we get up there and start playing they think, ohhh [they can play as well]

It is frustrating that such questions had to be asked, and maybe they wouldn’t be now – although it is still often the case that in music, as in other areas of cultural life, women’s gender is marked, while male gender is not. We have all heard, for example, the phrase ‘female-fronted band’. When do we ever hear of bands that are ‘male-fronted’?

It is really valuable to have access to recordings such as those of Meet Your Feet, not only as a documentation of their performances, but also to demonstrate the attitudes and assumptions that women faced when they participated in a male dominated cultural field.

It is also good to know that Meet Your Feet are still performing and undoubtedly upsetting a few stereotypes and expectations along the way, so make sure you catch them at a show soon!

1/2 inch EIAJ skipfield reel to reel videos transferred for Stephen Bell

Monday, October 7th, 2013

We recently digitised a collection of 1/2 inch EIAJ skipfield reel to reel videos for Dr Stephen Bell, Lecturer in Computer Animation at Bournemouth University.

CLEWS SB 01 from Stephen Bell on Vimeo.

Stephen wrote about the piece:

‘The participatory art installation that I called “Clews” took place in “The White Room”, a bookable studio space at the Slade School of Art, over three days in 1979. People entering the space found that the room had been divided in half by a wooden wall that they could not see beyond, but they could enter the part nearest the entrance. In that half of the room there was a video monitor on a table with a camera above it pointing in the direction of anyone viewing the screen. There was also some seating so that they could comfortably view the monitor. Pinned to the wall next to the monitor was a notice including cryptic instructions that referred to part of a maze that could be seen on the screen. Participants could instruct the person with the video camera to change the view by giving simple verbal instructions, such as ‘up’, “down”, “left”, “right”, “stop”, etc. until they found a symbol that indicated an “exit”.’

‘My plan was to edit the video recordings of the event into a separate, dual screen piece but it was too technically challenging for me at the time. I kept the tapes though, with the intention of completing the piece when time and resources became available. This eventually happened in 2012 when, researching ways to get the tapes digitized, I discovered Greatbear in Bristol. They have done a great job of digitizing the material and this is the first version of the piece I envisaged all those years ago.’

Nice to have a satisfied customer!

Measuring signals – challenges for the digitisation of sound and video

Monday, September 9th, 2013

In a 2012 report entitled ‘Preserving Sound and Moving Pictures’ for the Digital Preservation Coalition’s Technology Watch Report series, Richard Wright outlines the unique challenges involved in digitising audio and audiovisual material. ‘Preserving the quality of the digitized signal’ across a range of migration processes that can negotiate ‘cycles of lossy encoding, decoding and reformatting is one major digital preservation challenge for audiovisual files’ (1).

Wright highlights a key issue: understanding how data changes as it is played back, or moved from location to location, is important for thinking about digitisation as a long term project. When data is encoded, decoded or reformatted it alters shape, therefore potentially leading to a compromise in quality. This is a technical way of describing how elements of a data object are added to, taken away or otherwise transformed when they are played back across a range of systems and software that are different from the original data object.

Time-Based-Corrector

To think about this in terms which will be familiar to people today, imagine converting an uncompressed WAV into an MP3 file. You then burn your MP3s onto a CD as WAV files so they will play back on your friend’s CD player. The WAV file you started off with is not the same as the WAV file you end up with – it’s been squished and squashed, and in terms of data storage is far smaller. While the smaller file size may be a bonus, the loss of quality isn’t. But this is what happens when files are encoded, decoded and reformatted.
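The asymmetry can be illustrated with a toy ‘codec’ – emphatically not a real MP3 encoder – that simply discards low-order detail from each sample. Decoding reconstructs an approximation, and no number of further cycles brings the discarded information back:

```python
# Toy illustration of generational loss (not a real audio codec): a "lossy
# encode" throws away fine detail; decoding can only approximate the original.

def lossy_encode(samples, step=8):
    return [s // step for s in samples]      # discard low-order detail

def decode(encoded, step=8):
    return [e * step for e in encoded]       # reconstruct an approximation

original = [3, 10, 17, 100, 255]
first_gen = decode(lossy_encode(original))
second_gen = decode(lossy_encode(first_gen))

print(first_gen)                 # [0, 8, 16, 96, 248]
print(first_gen == original)     # False: information is gone for good
print(first_gen == second_gen)   # True: further cycles never restore it
```

This is why archival practice keeps an uncompressed master and derives compressed access copies from it, rather than ever transcoding one compressed copy into another.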

Subjecting data to multiple layers of encoding and decoding does not only apply to digital data. Take Betacam video for instance, a component analogue video format introduced by Sony in 1982. If your video was played back using the composite output, the circuitry within the Betacam video machine would have needed to encode it. The difference may have looked subtle, and you may not have even noticed any change, but the structure of the signal would be altered in a ‘lossy’ way and cannot be recovered to its original form. The encoding of a component signal, which is split into two or more channels, to a composite signal, which essentially squashes the channels together, is comparable to the lossy compression applied to digital formats such as MP3 audio, MPEG-2 video, etc.
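A rough illustration of why squashing channels together is lossy: the sketch below halves the resolution of a colour (chroma) channel by averaging pairs of samples, then expands it back. This is a toy analogy (loosely modelled on 4:2:2-style chroma subsampling, not actual composite encoding), but it shows the same one-way loss of information.

```python
def subsample_chroma(chroma):
    """Average each pair of chroma samples, halving the resolution
    (crudely mimicking reduced chroma bandwidth)."""
    return [(chroma[i] + chroma[i + 1]) / 2 for i in range(0, len(chroma), 2)]

def reconstruct(chroma_half):
    """Expand back to full resolution by repeating each sample."""
    return [c for c in chroma_half for _ in range(2)]

chroma = [10, 30, 30, 30]
roundtrip = reconstruct(subsample_chroma(chroma))
print(roundtrip)  # [20.0, 20.0, 30.0, 30.0] -- not the original [10, 30, 30, 30]
```

Wherever neighbouring samples differed, the averaged value is all that survives; no later processing step can tell whether the original pair was (10, 30) or (20, 20).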

UMatic-Time-Based-Corrector

A central part of the work we do at Great Bear is to understand the changes that may have occurred to the signal over time, and try to minimise further losses in the digitisation process. We use a range of specialist equipment so we can carefully measure the quality of the analogue signal, including external time base correctors and waveform monitors. We also make educated decisions about which machine to use for playback, in line with what we expect the original recording was made on.

If we take for granted that any kind of recording, whether analogue or digital, will have been altered in its lifetime in some way – through changes to the signal or file structure, or because of poor storage – an important question arises from an archival point of view. What do we do with the quality of the data customers send us to digitise? If the signal of a video tape is fuzzy, should we try to stabilise the image? If there is hiss and other forms of noise on tape, should we reduce it? Should we apply the same conservation values to audio and film as we do to historic buildings, such as ruins, or great works of art? Should we practice minimal intervention, use appropriate materials and methods that aim to be reversible, while ensuring that full documentation of all work undertaken is made, creating a trail of endless metadata as we go along?

Do we need to preserve the ways magnetic tape, optical media and digital files degrade and deteriorate over time, or are the rules different for media objects that store information which is not necessarily exclusive to them (the same recording can be played back from a vinyl record, a cassette tape, a CD, an 8-track cartridge or an MP3 file, for example)? Or should we ensure that we can hear and see clearly, and risk altering the original recording so we can watch a digitised VHS on a flat screen HD television, in line with our current expectations of media quality?

Time-Based-Correctors

Richard Wright suggests it is the data, rather than the original media, which is the important thing in the digital preservation of audio and audiovisual material.

‘These patterns (for film) and signals (for video and audio) are more like data than like artefacts. The preservation requirement is not to keep the original recording media, but to keep the data, the information, recovered from that media’ (3).

Yet it is not always easy to understand which parts of the data should be discarded, and which parts should be kept. Audiovisual and audio data are a product of both form and content, and it is worth taking care over the practices we use to preserve our collections in case we overlook the significance of this point and lose something valuable – culturally, historically and technologically.

Sony V62 EIAJ reel to reel video tape transfer for Barrie Hesketh

Monday, July 8th, 2013

We have recently been sent a Sony V62 high density video tape by Barrie Hesketh. Barrie has had an active career in theatre and in 1966 he set up the Mull Little Theatre on the Isle of Mull off the West Coast of Scotland with his late wife Marianne Hesketh. Specialising in what Barrie calls the ‘imaginative use of nothing’ they toured the UK, Germany and Holland and gained a lot of publicity worldwide in the process. Both Marianne and Barrie were awarded MBEs for their services to Scottish Theatre.

You can read a more detailed history of the Mull Little Theatre in this book written by Barrie.

Panasonic VTR NV-8030 transferring a tape

Panasonic VTR NV-8030 EIAJ ½” reel to reel video recorder

The video tape Barrie sent us came from when he and Marianne were working as actors in residence at Churchill College at Cambridge University. Barrie and Marianne had what Barrie described as ‘academic leanings,’ gained from their time as students at the Central College of Speech and Drama in London.

In a letter Barrie sent with the tape he wrote:

‘I own a copy of a video tape recording made for me by the University of Cambridge video unit in 1979. I was researching audience/actors responses and the recording shows the audience on the top half of the picture, and the actors on the bottom half – I have not seen the stuff for years, but have recently been asked about it.’

While audience research is a fairly common practice now in the Creative Arts, in 1979 Barrie’s work was pioneering. Barrie was very aware of his audiences’ interests when he performed, and was keen to identify what he calls ‘the cool part’ of the audience, and find out ways to ‘warm them up.’

Recording audience responses was a means to sharpen the attention of actors. He was particularly interested in using the research to identify ‘includers’. These were individuals who influenced the wider audience by picking up the intentions of the performers and clearly responding. The movement of such an individual (who would look around from time to time to see if other people ‘got it’) would be picked up in the peripheral vision of other audience members, and awareness gradually trickled through the audience. Seeing such behaviour helped Barrie to understand how to engage audiences in his subsequent work.

Screenshot of the Audience Reactions

Barrie’s tape would have been recorded on one of the later reel-to-reel tape machines that conformed to the EIAJ Standard.

The EIAJ-1 was developed in 1969 by the Electronic Industries Association of Japan. It was the first standardized format for industrial/non-broadcast video tape recording. Once implemented it enabled video tapes to be played on machines made by different manufacturers.

Prior to the introduction of the standard, tapes could not be interchanged between comparable models made by different manufacturers. The EIAJ standard changed all this, and certainly makes the job of transferring tapes easier for us today! Imagine the difficulties we would face if we had to get exactly the right machine for each tape transfer. It would probably magnify the problem of tape and machine obsolescence affecting magnetic tape collections.

In the Great Bear Studio we have the National Panasonic Time Lapse VTR NV-8030 and Hitachi SV-640.

Diagram of a Panasonic VTR NV-8030

Like Ampex tapes, Sony EIAJ tapes tend to suffer from sticky shed syndrome, caused by absorption of moisture into the binder of the tape. Tapes need to be dehydrated and cleaned before being played back, as we did with Barrie’s tape.

The tape is now being transferred and Barrie intends to give copies to his sons. It will also be used by Dr Richard Trim in an academic research project. In both cases it is gratifying to give these video tapes a new lease of life through digitisation. No doubt they will be of real interest to Barrie’s family and the wider research community.

D1 digital video transfer – new additions and economies of size

Monday, June 10th, 2013

A recent addition to the Great Bear digitising studio is a BTS D1 digital video cassette recorder.

As revolutionary as it was at the time, early digital audio and video tape recording is more threatened with obsolescence than earlier analogue formats.

bts-dcr-300-d1-digital-video-recorder

Introduced in 1986, D-1 was the very first, real-time, digital broadcast-quality tape format. It stored uncompressed digitized component video, had uncompromising picture quality and used enormous bandwidth for its time. The maximum record time on a D-1 tape is 94 minutes.

Enormous is certainly the word for the D1 tape! Compared with the so-called ‘invisible’ nature of today’s digital data and the MiniDV cassette introduced in the mid-1990s, this tape from 1992 is of comic proportions.

d1-minidv-tape-comparison-2

D-1 was notoriously expensive and the equipment required large infrastructure changes in facilities which upgraded to this digital recording format.

Early D-1 operations were plagued with difficulties, though the format quickly stabilized and is still renowned for its superb standard definition image quality, sometimes referred to as a ‘no compromise’ format.

D-1 recorded the data as uncompressed 8-bit 4:2:2 component video, unlike today’s delivery formats, which compress digital video to save space and bandwidth for practical delivery to the home, sacrificing picture and sound quality in the process.
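A back-of-envelope calculation shows why uncompressed 8-bit 4:2:2 used such ‘enormous bandwidth for its time’. The figures below assume 625-line/PAL active-picture dimensions (720 × 576 at 25 fps) and are illustrative only: an actual D-1 recording also carries audio, error correction and other overheads.

```python
# Uncompressed 8-bit 4:2:2: chroma is sampled at half the
# horizontal resolution of luma.
luma_samples = 720 * 576            # Y samples per frame
chroma_samples = 2 * (360 * 576)    # Cb + Cr samples per frame
bits_per_frame = (luma_samples + chroma_samples) * 8

mbit_per_sec = bits_per_frame * 25 / 1e6
print(round(mbit_per_sec))          # ~166 Mbit/s of active video

gb_per_94min = mbit_per_sec * 94 * 60 / 8 / 1000
print(round(gb_per_94min))          # roughly 117 GB on one 94-minute tape
```

Around 117 GB for a single 94-minute tape – in 1986, when hard disks were measured in tens of megabytes, tape was the only practical way to store such a stream.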

D-1 was supplanted by subsequent D-series formats that recorded either component (D-5) or composite (D-3) signals.

Digitising Ampex U-Matic KCS-20 Video Tapes

Wednesday, May 29th, 2013

We are currently digitising a collection of U-Matic Ampex KCS-20 video tapes for Keith Barnfather, the founder of Reeltime Pictures.

Reeltime Pictures are most well-known for their production of documentaries about the BBC series Doctor Who. They also made Doctor Who spin-off films, a kind of film equivalent of fan fiction, that revived old and often marginal characters from the popular TV series.

The tapes we were sent were Ampex U-Matic video tapes. For those of you out there who have recorded material on Ampex tape, be it audio or video, we have bad news. While much magnetic tape is more robust than most people imagine, this is not true of tape made by Ampex in the 1970s and 1980s.

Nearly all Ampex tape degrades disgracefully with age. A common outcome is ‘sticky shed syndrome’, a condition created by the deterioration of the binder in a magnetic tape, which holds the iron oxide magnetic coating to its plastic carrier. So common was this problem with Ampex tape that the company patented a process of baking the tape (to be done strictly at 54° Centigrade, for a period of 16 hours) that enables the tape to be played back.

ampex-umatic-tapes-dehydrating

In order to migrate the Ampex video tapes to a digital format they therefore have to be dehydrated in our incubator. This is a careful process where we remove the tape from its outer shell to minimise ‘outgassing’. Outgassing refers to the release of a gas that has become dissolved, trapped, frozen or absorbed in a material. This can have significant effects if the released gas collects in a closed environment where air is stagnant or recirculated. The smell of new cars is a good example of outgassing that most people are familiar with.

When baking a tape in an enclosed incubator, it can therefore be vulnerable to the potential release of gasses from the shell, as well as the tape and its constituent material parts. Removing the shell primarily minimises danger to the tape, as it is difficult to know in advance what chemicals will be released when baking occurs.

It is important to stress that tape dehydration needs to be done in a controlled manner within a specifically designed lab incubator. This enables the temperature to be carefully regulated to the degree. Such precision cannot, of course, be achieved with domestic ovens (which are designed to cook things!), or even food dehydrators, which offer very little temperature control.

So if you do have Ampex tapes, whether audio or video, we recommend that you treat them with extreme care, and if what is recorded on them is important to you, migrate them to a digital format before they deteriorate beyond recovery.

Digitise VHS Tape – Martin Smith’s Life Can Be Wonderful

Monday, May 20th, 2013

In February 2013 we digitised a VHS tape from Martin Smith, the 1994 documentary Life Can Be Wonderful. The VHS tape was the only copy of the film Smith owned, and it is quite common for Great Bear to digitise projects where the film maker does not have the master copy. This is because original copies are often held by large production companies, and films can be subject to complex distribution and screening conditions.

Life Can Be Wonderful is about the life of Smith’s good friend Stanley Forman, a committed communist and major figure in British left-wing cinema, who passed away at the age of 91 on 7 February 2013. Forman’s dedication to communism remained a controversial issue until his death. Smith described his conflicts with his friend: ‘most often they centred on what I saw as his refusal to own up to the enormity of Stalin’s crimes. On camera he told me that I was his dear friend, “but not a dear comrade” and apologised for failing to convey “the spirit of the times”‘.

Stanley Forman is a fascinating figure in terms of the work we do at Great Bear. He is described on the website Putney Debater as ‘the archive man.’ The site goes on to say:

His company, Plato/Education and Television Films (ETV), held a unique library of left-wing documentaries which amounted to the history of the twentieth century from a socialist perspective. Established in 1950 as Plato Films, the outfit was what would be called in Cold War ideology a front organisation, set up by members of the Communist Party to distribute films from behind the Iron Curtain. Under the slogan ‘See the other half of the world’, Plato provided the movement with a film distributor for documentaries from the Soviet Union and Eastern Europe, taking in China (until the Sino-Soviet split), Cuba, Vietnam and elsewhere, which would otherwise never be seen here.

The Educational and Television Films archive is held at the British Film Institute, and some material is available to view on the JISC Media Hub website.


designed and developed by
greatbear analogue and digital media ltd, 0117 985 0500
Unit 26, The Coach House, 2 Upper York Street, Bristol, BS2 8QN, UK
