Foray Into ML

In everything gloriously complex, it can be difficult to ascertain when you’ve reached a level of understanding that might merit debrief and reflection. In my forays into machine learning, this is probably as good a point as any.

I’ve been interested in Machine Learning for quite some time now: from first stumbling on ImageNet and how it helped computers learn to classify images, to actual tinkering with an alpha release of Google Vision, through the heady days of self-driving cars, and, backing up even further, initial goes at deriving meaning and topics from text with Python libraries like NLTK.

“Quite some time” being relative, of course. It’s a rapidly developing area of computer science, philosophy, mathematics, the humanities, and their intersections.

Recently I’ve embarked on a project to derive topics from a corpus of academic articles in PDF form (around 700 of them), then, by re-submitting one of them to the model, ask which other articles are “similar”. After quite a bit of trial and error, researching these emerging areas, and homing in on workflows I might actually whittle into working, I’m thrilled to have some results coming back that are – genuinely – spine-tingling.

The guts of this project rest on the Python library gensim: a masterful library meant to humanize the multi-dimensional math that underpins machine learning (and I use that term loosely here).

The rough sketch is thus:

  1. Point this little tool (called atm, for “article topic modeling”, and for the way it dispenses fun things to think about) at a Dropbox folder, with API credentials
  2. Download the entire directory, in this case ~700 articles
  3. Create a bag of words for each document
  4. Strip out stopwords and punctuation (poorly at this point, I might add)
  5. Then the fun part: create a Latent Dirichlet Allocation (LDA) model with gensim (sketched in code just after this list)
  6. Index the ~100 topics suggested by this model for the given corpus
  7. Finally, query the model with a document (in this case, an article from the corpus) for similarity to other documents in the corpus
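Steps 3 through 6 boil down to a handful of gensim calls. A minimal sketch, assuming the downloaded PDFs have already been converted to plain-text files; the paths, stopword list, and parameters here are illustrative, not atm’s actual code:

```python
import glob
from gensim import corpora, models
from gensim.utils import simple_preprocess

# A tiny, illustrative stopword list; atm's real filtering is more involved
# (and, as noted above, still imperfect)
STOPWORDS = {"the", "and", "of", "a", "to", "in", "is", "that", "for", "on"}

def tokenize(text):
    # simple_preprocess lowercases, strips punctuation, and tokenizes
    return [t for t in simple_preprocess(text) if t not in STOPWORDS]

# Assume each downloaded PDF already has a plain-text counterpart;
# "articles/*.txt" is a stand-in path, not atm's actual layout
paths = sorted(glob.glob("articles/*.txt"))
texts = [tokenize(open(p, encoding="utf-8").read()) for p in paths]

# Map each token to an integer id, then express every document as a bag of words
dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(text) for text in texts]

# Fit an LDA model that proposes ~100 topics for this corpus
lda = models.LdaModel(bow_corpus, id2word=dictionary, num_topics=100, passes=5)
```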

The results are other documents in the corpus, each with a similarity score indicating how closely its topical vectors match the submitted article. I should stop myself here: the details are still forming, and while I’m getting a decent grasp on how this works at a relatively low level, that’s fodder for another post. For now, the rough sketches.
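Continuing the sketch above, the query step (step 7) builds a similarity index over every document’s topic distribution and ranks the corpus against the re-submitted article. Again, a rough sketch rather than atm’s actual code:

```python
from gensim import similarities

# Build a similarity index over every document's LDA topic distribution
index = similarities.MatrixSimilarity(lda[bow_corpus], num_features=lda.num_topics)

# Re-submit one article from the corpus as the query document
query_bow = bow_corpus[0]

# Cosine similarity between the query's topic vector and every other document's
sims = index[lda[query_bow]]

# Rank the corpus by similarity and print the closest matches
ranked = sorted(enumerate(sims), key=lambda pair: pair[1], reverse=True)
for doc_id, score in ranked[:10]:
    print(f"{paths[doc_id]}  {score:.3f}")
```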

Below are the results of a query, run through atm in a Jupyter notebook:

[Screenshot: atm query results in a Jupyter notebook]

I submitted an article called Arnold_2003.pdf, and it suggested a handful of articles that match topically. The magic, the interest, lies in how these topics are derived and how the similarities are ranked. Much of that can be attributed to the LDA model gensim creates for me. While these results are fascinating in their own right, what really sends chills down my spine is how much this process and workflow share with other domains like image processing, self-driving cars, and speech recognition.

Google’s TensorFlow has a wonderful tutorial, MNIST For ML Beginners, that helped with my understanding. When the inputs you’re dealing with are 28x28 pixel images of nothing but handwritten numerals from 0-9, you can begin to wrap your head around the math that supports machine learning. When we quantify input – sound, visual, text – into vectors and matrices, we can look for patterns over moving windows of input. Well, computers can. They can see patterns in a machine-digestible version of the media we experience with our senses. And when, with great grit and finesse, those patterns are bubbled up into higher-level libraries, we can apply them to actual corpuses. It’s phenomenal.
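As a toy illustration of that quantifying step (my own example, not from the tutorial): each 28x28 image is just 784 numbers, and a batch of them is a matrix, which is the form the underlying math actually consumes.

```python
import numpy as np

# A stand-in for one 28x28 grayscale MNIST digit: pixel intensities in [0, 1]
image = np.random.rand(28, 28)

# Flatten it into a single 784-dimensional vector...
vector = image.reshape(-1)          # shape: (784,)

# ...and stack a batch of such vectors into a matrix, the shape the
# model's matrix multiplications operate on
batch = np.stack([np.random.rand(28, 28).reshape(-1) for _ in range(32)])
print(vector.shape, batch.shape)    # (784,) (32, 784)
```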

The results from atm are already encouraging, and that’s with almost by-the-book tuning from the tutorials. I want more corpuses, more targeted querying, adjustments to the modeling parameters, the works. But for now, I’ve been thrilled to get something working with the tools I love and understand.

Much more to come on this front. One example: taking the thousands of PDFs that will soon flood our digital collections platform at Wayne State Digital Collections, running topic modeling over them, and providing new and interesting ways to find related documents.

Access Challenge

Stumbled on an interesting access challenge for our Digital Collections today.

When searching for the keyword “library”, all records in the Digital Collections are returned, because “library” exists somewhere in the metadata of every record (probably embedded in a value like “Wayne State University Library”).

Though Solr’s stellar ranking boosts relevant records to the top of the results – items that have “library” in the title, or prominently in the description – it’s still a little unnerving that it returns so many positive results.

One option would be to exclude particular Solr fields from the index. BUT, should we start accepting records from other institutions, with varying metadata, those fields might become important to search and facet on. I’m sure there are workarounds for this scenario, but it’s interesting all the same, and indicative of the iterative nature of tuning search and discovery systems.
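One middle-ground sketch uses edismax field boosting rather than dropping fields from the index entirely; the core name and field names here are hypothetical, not our actual Solr schema:

```python
import requests

# Weight title and description matches far above a catch-all metadata field,
# instead of excluding that field from the index altogether
params = {
    "q": "library",
    "defType": "edismax",
    "qf": "title^10 description^5 all_metadata_text^0.1",
    "rows": 20,
    "wt": "json",
}
resp = requests.get(
    "http://localhost:8983/solr/digital_collections/select", params=params
)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```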

Digilib And Image Processing

I am following a thread on the IIIF Google Group, ruminating on how IIIF and the Image API might support more advanced image processing. I am probably mischaracterizing, or reading a bit too much into the conversation, but something interesting emerged for me from some of the early comments.

There was an acknowledgement that, as stewards of digital images looking to the future, it’s likely we will start undertaking image processing – OCR, classification, etc. – on the images we have at our disposal. Perhaps for metadata enrichment, perhaps for digital humanities work; the possibilities are extensive. IIIF, and the Image API, provide an excellent and standardized way to access images. Image processing is often helped by preparing images in particular ways, such as converting to grayscale to aid edge detection, and that is something IIIF might be able to help with. What if the API, in addition to rotating, scaling, selecting, and some limited color options, could help facilitate image processing of our visual resources?
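As a rough sketch of what that could look like in practice: request a grayscale derivative via the Image API and hand it to whatever processing comes next. The server, prefix, and identifier below are made up; the URL syntax (region/size/rotation/quality.format) and the “gray” quality value come from the Image API spec.

```python
import io

import numpy as np
import requests
from PIL import Image

# Hypothetical IIIF Image API request: full region, best-fit within 1024x1024,
# no rotation, grayscale quality, JPEG output
url = "https://iiif.example.org/loris/some-identifier/full/!1024,1024/0/gray.jpg"

resp = requests.get(url)
resp.raise_for_status()

# Hand the grayscale derivative off as a plain pixel array, ready for
# OCR, edge detection, classification, etc.
img = Image.open(io.BytesIO(resp.content))
pixels = np.asarray(img)
print(pixels.shape)
```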

This conversation has been fascinating on many levels.

Robert Casties responded to the thread, pointing out that the digilib project has methods and functionality that would do just such things. I wasn’t familiar with digilib, but what a neat project! It appears to be out of Germany, dating back to the early-to-mid 2000s. In many ways it mirrors the IIIF ecosystem of image servers and standardized APIs for requesting images. Details drift and overlap here and there, but it’s devilishly similar to image servers such as Loris (which we use here at Wayne) or Cantaloupe.

IIIF has what it calls the “Image API”, the particular GET parameters used to request images. Digilib appears to have something similar called the “Scaler API”. Digilib also appears to support IIIF directly – perhaps an update to a project that pre-dates the IIIF movement, acknowledging the increasing prevalence of IIIF in the digital repository sphere.

Though I’ve yet to install or interact with digilib, something deep in the fingers and toes tells me I like it. It has a page called “Ancient History”, in German, which makes sense given where digilib originated. In principle and architecture, it very much mirrors what I found so appealing about IIIF when I first stumbled on it in 2011 or 2012. The “Ancient History” page dates the project back to the late 1990s, when this kind of thinking about serving digital images online was pretty revolutionary.

I’ve strayed a bit from the original impetus for penning this post – ruminating on how standards like IIIF can support downstream image processing – but, as I like to say, that’s okay! It’s been a fascinating thread to follow, and I’m hoping more people will weigh in on how they envision emerging image delivery standards helping get these images into machine learning environments.