This class has been a wild roller coaster. I think the most eye-opening aspect of it was realizing how pervasive and widespread the world of metadata is. Metadata is used to define and control our world through different schemas, but the underlying principle is the same. It makes me want to start organizing my entire life by metadata, now that I see the far-flung impacts metadata has had and is still having on our world. I think that is why I like Dublin Core so much. While it is a simple enough schema to learn and employ, it is so customizable that it can be tailored to nearly any collection. It was fascinating to see how many times it cropped up in our descriptions of digital repositories, or even as the basis for other schemas. Managing metadata is definitely a skill worth learning.
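Just to see that simplicity for myself, here is a minimal sketch of a Dublin Core record built with Python’s standard library. The namespace URI is the real Dublin Core one, but the record and its field values are made up for illustration.

```python
import xml.etree.ElementTree as ET

# The Dublin Core element namespace (this URI is the real one).
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# A made-up record for one photograph in a football collection.
record = ET.Element("record")
for element, value in [
    ("title", "Homecoming Game, 1975"),
    ("creator", "University Photographic Services"),
    ("date", "1975-10-18"),
    ("subject", "Football"),
    ("identifier", "fb-1975-042"),
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
```

Five lines of data and you have a sharable, standards-based record; that low barrier to entry is a big part of the appeal.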
I admit that when I looked at all the elements to index for our football project, I was intimidated. But the more I worked with it, the more pleasantly surprised I was at how taking the project on in small chunks made it easier. This is an instance where working element by element instead of record by record definitely helped me. I started with the elements that were easier for me (i.e. the elements that I and my group had created) and then worked out to the more difficult ones. Having tackled the Player Name element, filling in information like title and description became much easier.
So Southern Mississippi doesn’t have player numbers or positions for most of their 1975 roster. I’m not quite sure how to figure out this information from a blank roster, so I guess I’ll have to leave most of those fields blank. It was also interesting, as I went through indexing the title, subject, and description elements, how much more difficult it was to describe images from Southern Mississippi games when I was missing some key metadata. I feel like those records are a lot more ambiguous, or perhaps even have incorrect information entered.
The entire point of the unique identifier is, as the name suggests, to create a unique code that specifies each record in a database or collection. Having this unique identifier allows a user to disambiguate records, create specific searches, link records, and more. However, creating a unique identifier can be more complicated than it sounds. A unique ID can be as simple as a number that has no relation to the record or object. In other cases the ID can encode information about the object, ranging from an accession number to date and subject information.
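Here is a quick sketch of the two approaches in Python; the uuid module is real, but the semantic naming pattern is something I invented for illustration, not a standard.

```python
import uuid

# Two flavors of unique identifier, sketched side by side.

# 1. An opaque ID: guaranteed unique, but says nothing about the object.
print(uuid.uuid4())  # e.g. 9f1c2e6a-8a7b-4c1d-b2e0-...

# 2. A "meaningful" ID that encodes collection, accession year, and
#    sequence number (this pattern is made up for illustration).
def semantic_id(collection: str, year: int, sequence: int) -> str:
    return f"{collection}-{year}-{sequence:04d}"

print(semantic_id("fb", 1975, 42))  # fb-1975-0042
```

The trade-off is easy to see: the opaque ID never has to be redesigned, while the meaningful ID lets a human reader learn something from the code itself.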
It took reading through a couple of the posted articles for me to wrap my head around microformats. I’ll be honest, I had to go to Wikipedia to make sure I had a firm grasp on what “microformat” actually meant. The terminology that finally made me get it was the discussion of syntax and semantics.
As I recall, syntax is the grammar of a schema and semantics is the meaning. Just as I could never quite get my head around English grammar until I learned Latin sentence construction, taking a step back often helps me put new formats into perspective. Microformats tell computers what various bits of XHTML or HTML mean in a human context by giving them handles: this bit of information is a nominative, the nominative is the subject of the sentence, and so forth. Once I wrapped my head around that, I could understand why people are as interested in the possibilities as they are. Formats that help make computer data relevant, immediately understandable, and useful to the human part of the equation are always exciting.
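To make the “handles” idea concrete, here is a toy sketch: the class names in the HTML (“vcard”, “fn” for formatted name, “org” for organization) are real hCard microformat properties, while the sample card and the little parser are my own illustration.

```python
from html.parser import HTMLParser

# A minimal hCard: ordinary HTML, with class names acting as the
# semantic "handles". The card itself is invented.
HCARD = """
<div class="vcard">
  <span class="fn">Jane Smith</span>,
  <span class="org">Example University Library</span>
</div>
"""

class HCardParser(HTMLParser):
    """Collects the text of elements whose class is an hCard property."""
    PROPS = {"fn", "org"}

    def __init__(self):
        super().__init__()
        self.current = None   # the property we are currently inside, if any
        self.found = {}       # property name -> text

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        hits = self.PROPS.intersection(classes)
        if hits:
            self.current = hits.pop()

    def handle_data(self, data):
        if self.current:
            self.found[self.current] = data.strip()
            self.current = None

parser = HCardParser()
parser.feed(HCARD)
print(parser.found)  # {'fn': 'Jane Smith', 'org': 'Example University Library'}
```

A human sees a name and an employer; the class names let the machine see the same thing.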
I found the article “New Metadata Standards for Digital Resources: MODS and METS” by Rebecca Guenther and Sally McCallum to be really helpful. I felt like I had heard both terms bandied about while having only a general understanding of what they are and how they work. The clear definitions and examples really helped me grasp both the complexities and the excitement that MODS and METS can bring to the library world.
MODS allows a solid level of description for records and has good crosswalks to and from MARC 21. While a rich format like MARC 21 has provided libraries with interoperability, it can also make transitions very difficult, since there is such a backlog of materials to convert. Since MODS already provides a middle ground between MARC 21 and simpler schemas such as Dublin Core, I wonder if it might also provide a good crosswalk to future systems such as FRBR. In any case, it is exciting to see the world of digital resources supported by strong metadata standards.
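To picture what a crosswalk actually does, here is a toy sketch mapping a few Dublin Core fields onto their rough MODS equivalents. The pairings follow the spirit of the published DC-to-MODS mapping, but the dict and the record are my own illustration, not the authoritative crosswalk.

```python
# Approximate homes for simple Dublin Core fields in MODS.
DC_TO_MODS = {
    "title":      "titleInfo/title",
    "creator":    "name/namePart",
    "date":       "originInfo/dateIssued",
    "identifier": "identifier",
}

# A made-up source record in simple Dublin Core.
dc_record = {
    "title": "1975 Football Program",
    "creator": "University Athletics Department",
    "date": "1975",
    "identifier": "fb-1975-001",
}

for dc_field, mods_path in DC_TO_MODS.items():
    print(f"dc:{dc_field:<10} -> mods:{mods_path}  ({dc_record[dc_field]})")
```

Even this toy version shows why MODS makes a good middle ground: each simple field has an obvious landing spot, while the MODS side still leaves room for the finer distinctions MARC 21 records carry.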
I found the “Christmas Themed Exploration” of metadata to be a fun romp. It was a good reminder that metadata is not only all around us, but that it can be very simple as well as very complex. I think Swoger makes a good point that metadata can seem overwhelming to a lot of newcomers, and demystifying it with some simple and festive examples was a good idea.
The research article by Jeonghyun Kim et al., “Competencies Required for Digital Curation: An Analysis of Job Advertisements,” is a useful compilation of what employers are looking for when they ask for a digital curator. More than that, the article looks at the emerging concept of digital curation, what it is coming to mean in the job market, and whether the competencies necessary to perform it can be identified. I think what I find both so exciting and intimidating about this article is realizing how new positions like digital curator actually are. The article was published in 2013 to help create a list of competencies around which to build a program teaching those necessary skills. While curation is a core information skill and has been the realm of libraries and museums for hundreds of years, bringing curation into the digital world requires forethought and imagination. Metadata is only one of the skills necessary for digital curation, and the list can be very daunting, especially to young professionals like me. However, taking stock of the emerging opportunities and identifying the needs therein is both necessary and helpful.
Twitter gifted the Library of Congress its entire collection of Tweets in 2010. This is old news, I know, but it got me thinking. I remember sitting around the dinner table at college and laughing with my friends as they Tweeted that their words were going into the Library of Congress. Still, one has to wonder how these Tweets will be organized. Are they stored by date and author only, or is there a more sophisticated method of organization? In the article I read (G.L. “Library of Congress to Archive Twitter.” American Libraries 2010: 24. JSTOR Journals), the Library of Congress was aware of the need for finding aids to make the collection viable. The concept is mind-boggling. How would you classify billions of Tweets? Cataloging at the Tweet level would be insane, which makes me wonder what other methods might be used. Perhaps automatically generated metadata pulled from word frequency might be viable, but the collection still seems just too big to handle. Maybe I’ll have a better idea of how to organize such a collection by the end of this class.
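Just to sketch the word-frequency idea, here is a toy example: count the non-stopwords across a batch of tweets and treat the most frequent terms as candidate subject metadata. The tweets and the stop list are made up, and a real system for billions of Tweets would obviously need far more than this.

```python
from collections import Counter
import re

# Invented sample tweets standing in for a batch from the archive.
tweets = [
    "Our words are going into the Library of Congress!",
    "The Library of Congress is archiving every public tweet.",
    "How would you even catalog billions of tweets?",
]

# A tiny, made-up stop list of words too common to be useful subjects.
STOP = {"the", "of", "is", "are", "a", "an", "into", "our",
        "you", "even", "would", "how", "every"}

counts = Counter(
    word
    for tweet in tweets
    for word in re.findall(r"[a-z']+", tweet.lower())
    if word not in STOP
)

# The most frequent remaining terms become candidate subject metadata.
print(counts.most_common(3))  # e.g. [('library', 2), ('congress', 2), ...]
```

It is crude, but it hints at how description could happen at the batch level instead of Tweet by Tweet.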