-
"Electronic records" is a particularly awful phrase and does not even actually capture anything about the underlying records at all. As far as the term goes, it's not too far off from "machine readable records." As a profession, can we start actually thinking critically about the underlying technical issues and push for using terms that more accurately describe what it is we're dealing with? I understand it's a convenient catch-all term, but there is a large range of issues that differ with the kinds of data and systems.
-
When Mark asked me to write about our use of Drupal at the Dickinson College Archives and Special Collections, the first thing I thought about was when our Archives Reference Blog was initially launched in April 2007. I couldn't believe that it had been two years already. I am pleased to report that my colleagues at Dickinson and I are enormously happy with the results of those two years. I hope others may find this brief explanation of how and why we are using Drupal as a reference management tool helpful and instructive.
The concept for our implementation of Drupal was a simple one. I was thinking about the fact that we help researchers every day to locate information they want, but what they discover among our collections or learn from them seldom gets shared, except by those who write for publication. So, what if we shared via the web, through a simple blog format, the basic questions posed by our researchers along with a brief summary of the results?
-
If you don't, I'll make your data linkable.
-
I've been fairly quiet lately as I've been busy with this and that, but I thought I'd let everyone know that I've begun putting together a series of posts entitled "Drupal for Archivists." Drupal, as you may or may not know, is a flexible and extensible open source content management system. The series will include a general overview of some of the important concepts, but it'll focus less on the basics of getting people up and running, since there are plenty of resources out there already, such as the wonderful tutorials and articles available from Lullabot. Instead, I've drafted a handful of guest bloggers to discuss how and why they're using Drupal. Keep your eyes peeled!
-
The always groundbreaking Brooklyn Museum has now released an API to allow the public to interact with their collections data. I can't even tell you how happy this makes me from an open data perspective. It's also the direction that makes the whole "detailed curation by passionate amateurs" thing possible.
There are only three simple methods for accessing the data. Ideally, they'd put their collections metadata up as linked data, but now I'm daring to dream a little. Hey, wait a minute! I think that's the perfect way to start playing around with the API. After some digging through the documentation, I'm seeing that all the objects and creators seem to have URIs. Take a crack at it - the registration form is ready for you, and a rough sketch of what a first request might look like follows below.
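Here's a minimal sketch in Python of that first request. Fair warning: the base URL, method name, and parameter names below are my placeholders, not anything copied from the documentation, so check the docs and substitute the real values along with the key you get when you register.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical values: replace with the real endpoint, method name,
    # and parameters from the Brooklyn Museum API documentation, plus
    # your own API key.
    API_KEY = "YOUR_API_KEY"
    BASE_URL = "http://www.brooklynmuseum.org/opencollection/api/"

    params = urllib.parse.urlencode({
        "method": "collection.search",  # hypothetical method name
        "api_key": API_KEY,
        "keyword": "brooklyn bridge",
        "format": "json",
    })

    # Fetch and decode the JSON response.
    with urllib.request.urlopen(BASE_URL + "?" + params) as response:
        results = json.load(response)

    print(results)

The interesting part will be poking through those results for the object and creator URIs.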
-
It's official - I've moved the codebase for worldcat, my Python module for working with the OCLC WorldCat APIs, to be hosted on Bitbucket, which uses the Mercurial distributed version control system. You can find the new codebase at http://bitbucket.org/anarchivist/worldcat/.
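Grabbing the code is a one-liner if you have Mercurial installed:

    hg clone http://bitbucket.org/anarchivist/worldcat/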
-
The Society of American Archivists released the Thesaurus for Use in College and University Archives as an electronic publication this week. Specifically, it was issued as a series of PDF files. Is this data stored in some sort of structured format somewhere? If so, it's not available directly from the SAA site. There's no good reason why TUCUA shouldn't be converted to structured, linkable data expressed using SKOS, the Simple Knowledge Organization System. It's not like I need another project, but I'm sure I could write a scraper to harvest the terms out of the PDF, and while I'm at it, I could write one to harvest the Glossary of Archival Terminology as well (a quick sketch of the SKOS side follows below). Someone, please stop me. I really don't need another project.
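To give a sense of how little it would take once the terms were scraped, here's a minimal sketch using Python and rdflib. The base URI and the two terms are made up for illustration; the real ones would come out of the PDF, and a real conversion would need a stable URI scheme for the thesaurus.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
    # Made-up base URI for illustration only.
    TUCUA = Namespace("http://example.org/tucua/")

    g = Graph()
    g.bind("skos", SKOS)

    # Two made-up terms standing in for entries harvested from the PDF.
    broader = TUCUA["academic-ceremonies"]
    narrower = TUCUA["commencements"]

    for concept, label in [(broader, "Academic ceremonies"),
                           (narrower, "Commencements")]:
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))

    # Record the hierarchical relationship between the two terms.
    g.add((narrower, SKOS.broader, broader))

    print(g.serialize(format="turtle"))

The scraping is the tedious part; the SKOS half is practically free.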
-
I'm really looking forward to next week's code4lib conference in Providence, despite my utter failure to complete or implement the project on which I am presenting. In particular, I'm really looking forward to the linked data preconference. Like some of my fellow attendees, I've already hammered out a FOAF file for the preconference so that Ed Summers' combo FOAF crawler and attendee info web app can pick it up.
This is what the sample output looks like using my FOAF data. It's good to see we're well on our way to having a kind of sample RDF data that anyone can easily create and play with. At a bare minimum, you can create your FOAF data using FOAF-A-Matic and then edit it to add the assertions you need to get it to play nice with Ed's application; a bare-bones sketch of such a file follows below. See you in Providence, but go FOAF yourself first.
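If you'd rather skip FOAF-A-Matic entirely, the skeleton is small enough to write by hand. This is a minimal RDF/XML sketch with made-up personal details; the specific assertions Ed's application expects aren't shown here, so add those per his instructions.

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <foaf:Person>
        <!-- Made-up details for illustration -->
        <foaf:name>Jane Archivist</foaf:name>
        <foaf:nick>janearch</foaf:nick>
        <!-- SHA1 hash of mailto:you@example.org, so you don't have to
             publish your address in the clear -->
        <foaf:mbox_sha1sum>0000000000000000000000000000000000000000</foaf:mbox_sha1sum>
        <foaf:homepage rdf:resource="http://example.org/"/>
        <!-- Add the assertions Ed's application expects here -->
      </foaf:Person>
    </rdf:RDF>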
-
ArchivesNext recently inquired about how archivists measure the success of 2.0 initiatives. It's hard to determine whether some 2.0-ish initiatives will really impact statistics when you haven't defined what results you're trying to see. I'd like to open the question further: how do we begin developing metrics for things that sit on the cusp between forms of outreach? Furthermore, I'm curious about where this information is captured. Do archivists wait until the end to gather survey data, or are they working toward something like what we at NYPL Labs are doing with Infomaki, our new usability tool developed by Michael Lascarides, our user analyst?
-
So, it's time for another rant about my issues with EAD. This one is pretty straightforward and short, and it comes down to the fact that I should essentially be able to mix and match metadata schemas. This is not a new idea, and I'm tired of the archives community treating it like one. Application profiles, as they are called, give us a structured way to combine elements from different schemas, prevent the addition of new and arbitrary elements, and tighten existing standards for particular use cases.
The EAD community has accepted the concept of combining XML namespaces, but only on a very limited level. The EAD 2002 Schema allows EAD data to be embedded in other XML documents, such as METS. However, I can't do it the other way around; for example, I can't work a MODS or MARCXML record into a finding aid. Why not? As I said in my last dEAD Reckoning rant as well as during my talk at EAD@10, the use of encoding analog attributes is misguided, confusing, and just plain annoying.
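To make the complaint concrete, here's a sketch of the kind of finding aid fragment I'd like to be able to write, with a MODS record dropped in under its own namespace instead of a pile of encodinganalog attributes. The collection title is made up, and note that this is not valid against the EAD 2002 schema, which is exactly the problem.

    <ead xmlns="urn:isbn:1-931666-22-9"
         xmlns:mods="http://www.loc.gov/mods/v3">
      <archdesc level="collection">
        <did>
          <unittitle>Example Papers</unittitle>
          <!-- What I want: a namespaced MODS record right here.
               The EAD 2002 schema won't allow it. -->
          <mods:mods>
            <mods:titleInfo>
              <mods:title>Example Papers</mods:title>
            </mods:titleInfo>
          </mods:mods>
        </did>
      </archdesc>
    </ead>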