Perils Of Digital Preservation

What are the greatest perils of digital preservation?

  • the total collapse of our modern digital infrastructure, vanishing our digital artifacts and memories in one fell swoop?
  • small-scale hard drive failures and format obsolescence, surgically and quietly rendering our files inaccessible?
  • forgetting a single digital object during a software / hardware migration?

I had a near miss with the last one recently. It was a snowflake of an object. Without naming titles or identifiers, it was a book scanned as a one-off digital object. An important, interesting, and culturally valuable book. And this is precisely why it got lost in the shuffle.

During migration, or even general record-keeping, auditing, and intellectual control, we focus on the big collections. Or, we cut our teeth on the small ones, working up to the big push (which makes sense when, perhaps, one collection is literally 1,000x larger than the small ones). We measure our success in achieving a 100% migration rate – both in quantity and fidelity – in groups: “got 2293/2293 for that collection, 422/422 for the other, and 16/16 for that little tyke over there,” and so on, and so forth.

But what about those other objects that have made it into our purview and custody? The objects that have no collection, that have no measurement of quantity outside of their self-reflexive parity? Those are the ones at risk.

I have likened it to “hitching your wagon” to a known entity. Or “safety in numbers.” The list goes on. The moment we tether an object to another, preferably a bunch, they benefit from the visibility of the herd.

The original files for the object were always safe, but all the work that went into ingest, creating derivatives, and modeling for shifting platforms would have been lost. Not to mention any additional content, metadata, or insight that might have accompanied the object as it matured as a digital object.

I never did write up our conversion from single-object ebooks to ebooks that are modeled as multiple objects, but it was quite an undertaking. Not only did the object in question not belong to a collection, but once it had missed the ebook migration, it had two strikes against it. It no longer registered in QA and auditing as an “ebook”; instead, it drifted into the tepid abyss of non-intellectually controlled items.

Do you “have” an object if it is not controlled?

Is every connection to a collection or another object a distinction in an otherwise entropic stew of files on a server?

There are all kinds of safeguards and practices against “misplacing” a digital object like this, but in some way, don’t they all involve tethering? Even if it is but a sliver of metadata that reads, “I am object, hear me roar”?
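One concrete form of that tethering safeguard is an “orphan audit”: compare the full inventory of object identifiers against everything claimed by some collection, and flag whatever falls through. A minimal sketch, with entirely hypothetical identifiers and data structures:

```python
def find_orphans(all_objects, collections):
    """Return object IDs that belong to no collection."""
    tethered = set()
    for members in collections.values():
        tethered.update(members)
    return sorted(set(all_objects) - tethered)

# Hypothetical inventory: the one-off scanned book never joined a collection.
objects = ["book-0001", "map-0042", "photo-0097"]
collections = {
    "maps": ["map-0042"],
    "photos": ["photo-0097"],
}

print(find_orphans(objects, collections))  # → ['book-0001']
```

The herd metaphor in code: anything not in the set difference benefits from the visibility of its collection, while the orphan only shows up because we explicitly went looking for it.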


A strange thought struck me today while working with a colleague on tuning caching for our digital collections with Varnish.

We have been working to cache thumbnails and single item pages, and in the process I just about physically tripped over the interesting difference between caching website resources and archiving a rendered version of the website.

To cache a single item page, we have been experimenting with using Python to make headless HTTP requests to our front-end PHP framework, Slim. I was delighted that a single request would put into motion the reconciling work that Slim does for a single item page, including a couple of API calls to our backend, and then save that rendering as a static HTML response for future visits. Awesome.
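The cache-warming requests themselves can be very simple. A rough sketch of the idea in Python, using only the standard library – the base URL, URL pattern, and item IDs here are hypothetical stand-ins, not our actual endpoints:

```python
from urllib.request import urlopen

BASE = "https://digital.example.edu"  # hypothetical front-end base URL

def item_urls(item_ids, base=BASE):
    """Build the single item page URL for each item ID."""
    return [f"{base}/item/{item_id}" for item_id in item_ids]

def warm(urls, fetch=urlopen):
    """Request each URL once so the cache in front can store the response."""
    statuses = {}
    for url in urls:
        resp = fetch(url)
        statuses[url] = resp.status
    return statuses

print(item_urls(["abc123"]))  # → ['https://digital.example.edu/item/abc123']
```

The `fetch` parameter is injectable so the warming logic can be tested without network access; in production it would just be `urlopen`, and Varnish, sitting in front, does the actual saving.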

But on testing today, we noticed that a “preview” image was not cached, and had to load fresh on the first visit. Actually, a handful of things were not cached. Anything the browser requested after our front-end framework had delivered its payload had not been cached by that early item page caching. Thinking it through, this is expected! But it was interesting, and got the wheels turning…

What if we were to use a headless browser to render the page? Something like Selenium, or Splash – one of my favorites, from the wonderful people at ScrapingHub – or any of the myriad other headless browser options out there. What would happen then? Thinking it through, it became clear it would work for caching the entirety of the page, but not in the way I had originally anticipated.

When I think of headless browsers, and the amazing things they do, one product is the HTML of the page, fully formed even after JavaScript Ajax calls (which are incredibly common now). However, I had not deeply considered what happens to other resources, like images, which are pulled in via <img> tags. What do headless browsers do with these? Are they content to leave them as-is, or do they pull in the binary bits and drop those where the image would have landed? Interesting in its own right, but there was more!

Firing off a headless browser for a single item page – one that contains at least one additional image request via an <img> tag – should trigger the HTTP requests needed for Varnish to cache those URLs. So, if one were to load that single item page after a headless browser already had, one would not receive the entirety of the page pre-rendered like headless browsers provide, but would instead just be delighted with the virtually instant response of any HTTP requests the page needed.
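The same effect can even be approximated without a full headless browser: fetch the page HTML, scrape out the <img> src URLs, and request each one so Varnish caches every resource the page needs. A toy sketch of the scraping half, using only the standard library (a real headless browser would fire these requests for us, JavaScript-driven ones included):

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Collect src attributes from <img> tags as the HTML streams through."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def image_urls(html):
    """Return the src of every <img> tag in an HTML document."""
    parser = ImgCollector()
    parser.feed(html)
    return parser.srcs

# Hypothetical item page markup with one derivative image request.
page = '<html><body><img src="/images/item1/preview.jpg"></body></html>'
print(image_urls(page))  # → ['/images/item1/preview.jpg']
```

Each returned URL would then get its own GET request, putting it into the cache for the next visitor. The limitation, of course, is exactly the one above: this only finds resources present in the raw HTML, not anything JavaScript requests later.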

Which introduces this interesting area between raw, completely unrendered pages and archived instances (like WARC files). If we cache each HTTP request the page requires, the only thing we leave to the browser is to actually put all the pieces together as a whole (including firing javascript).

I realize as I type this out, that some of the nuance of the insight may be lost without more discussion, but suffice it to say, caching is an interesting and ceaselessly beguiling beast.


I was reflecting today, while putting together some thoughts for class, that learning / memorizing a single standard is useful, but learning how to learn standards can be so much more valuable.

We have covered MARC and EAD, DACS, ISAD(G), and the list goes on. Obviously, each of these is critically important and uniquely interesting in its own right, but they do not lend themselves to a linear read. Standards encapsulate everything from history, to why, to how, to specific rules, to integration with other standards. Each standard is a complex network of information with multiple inroads, and cannot be treated as a linear text to be read once and understood in its entirety. Furthermore, standards may vary greatly from one to another. Some may explain tag libraries for EAD-based standards; others might attempt to codify a body’s norms of behavior or philosophies into workflows and decision trees.

However different they might be, and however little they may lend themselves to a single-read-and-understand, standards also share a striking similarity: they are standards! They are attempting to make order out of chaos, to impose or suggest a way of doing things so that people and systems may be interoperable across space and time. And in this similarity, they open themselves up to those familiar with standards.

Just today I was reading the meeting notes from a Hydra / Sufia related working group that was interested in codifying the metadata principles and formats for Sufia. It was mentioned that a handful of well-made standards in other, related areas were using a fixed set of words to help standardize the standards! Words like MUST, SHOULD, and ALLOW, that would help humans and machines parse the rules for this particular standard. The IIIF Image API is a nice example of a relatively new standard, where a considerable amount of work has been done to make sure it is expressive, succinct, and unambiguous. The discussions leading up to the standard, I’m sure, were quite lively and full of questioning. But the result is a standard with clear language and vision.
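That machine-parseability is not just rhetorical. Because the requirement keywords are a small, fixed vocabulary (the RFC 2119 convention), even a toy script can classify a spec’s rules by their level of obligation. A sketch, with made-up rule sentences standing in for real spec text:

```python
import re

# Compound keywords first, so "MUST NOT" is not matched as plain "MUST".
KEYWORDS = ("MUST NOT", "SHOULD NOT", "MUST", "SHOULD", "MAY")

def requirement_level(rule):
    """Return the first RFC 2119-style keyword found in a rule, or None."""
    for kw in KEYWORDS:
        if re.search(rf"\b{kw}\b", rule):
            return kw
    return None

rules = [
    "The size parameter MUST be present.",
    "Servers SHOULD support the sized-fit syntax.",
    "Clients MAY cache responses.",
]
print([requirement_level(r) for r in rules])  # → ['MUST', 'SHOULD', 'MAY']
```

A human reads the same words and gets the same signal instantly, which is exactly the point: one vocabulary serving both audiences.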

So, back to my notes, I got to thinking. The value of learning some of these standards is not to internalize their every twist and turn, their specific rules or exceptions, but instead to feel their radiating essence. What standards are similar? What standards are complementary? How much of the standard’s documentation is narrative, and how much is meant to be referenced? How much is distinctly machine-readable (thinking RDF ontologies, XML schemas, etc.)?

If it is anything like learning programming languages – and I believe that it is – learning the shape and confines of a single standard opens the door to picking up other standards quickly. The first time you see MUST and SHOULD in a document is jarring, but seeing them in a different standard’s documentation is as comforting as an old friend.