Dallas, not the most inspiring city to host a museum conference. Starting the conference off with a Fort Worth museum tour and a great view from the 24th floor made me rethink that (a bit). Beforehand I was curious how I would experience a museum-focussed conference such as MCN2014, but the many parallels with my work made it worthwhile.
Cristiano Bianchi from keepthinking gave a thought-provoking talk about responsiveness, starting off with the question whether responsiveness only involves adapting to different screen sizes. One of the insights was that tablets are often used for reading long pieces of text, yet are typically served less content. Responsiveness should be more context dependent, especially now that more and more varied devices are used to access web content. Having a smaller screen should mean better prioritisation of information, not less information, and data structures should support this.
The session Wisdom of the Crowds showed a number of different approaches to using crowdsourcing in a museum context. Dominic McDevitt-Parks gave a very practical talk on how to set up a Wikipedia edit-a-thon, while the Columbus Museum showed how they use existing social networks to spark successful participatory initiatives focussed on photography. In their experience such initiatives help build a community; the lessons learned: test your hashtag, find a fitting audience and keep it simple.
The theme of Linked Data was more prominent than expected, even having its own session. Nonetheless, an ideal use case such as integrating the collections of 150+ museums in Nova Scotia never mentioned the existence of RDF. Apparently these museums were small enough to be told to all use the same content management system. I was pleasantly surprised, though, by the use of DBpedia for autocompletion of species names in a horticultural database.
The International Image Interoperability Framework (IIIF) was a good showcase of how Linked Data principles can be used to integrate data (loved the co-op silos picture). IIIF focusses on image delivery and provides two API specifications. The Image API is used to retrieve images from servers, enabling a standardised way of specifying size, format and region. The second API supports JSON-LD and is used to retrieve metadata about images. While all of this is based on Linked Data, they did a good job of hiding this from the user and keeping things as simple as possible. Mirador, an image viewer aimed at humanities scholars that utilises both APIs, was showcased and could be very suitable for the INVENiT project.
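The standardised addressing the Image API offers boils down to a URL template of the form {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}. A minimal sketch of building such request URLs (the server address and identifier below are hypothetical, used purely for illustration):

```python
def iiif_image_url(server, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL from its path components."""
    return f"{server}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# The full image at default quality:
full = iiif_image_url("https://example.org/iiif", "page1")
# -> https://example.org/iiif/page1/full/full/0/default.jpg

# A 150px-wide rendering of the 1000x1000 region starting at (100, 100):
crop = iiif_image_url("https://example.org/iiif", "page1",
                      region="100,100,1000,1000", size="150,")
# -> https://example.org/iiif/page1/100,100,1000,1000/150,/0/default.jpg
```

Because every parameter lives in the URL path, any compliant image server can answer the same request, which is what makes viewers like Mirador interchangeable across collections.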
The actual Linked Data session was a bit disappointing: people discussing their one year of experience in trying to publish their collections as Linked Data and a Google employee requesting more event data (as in when an exhibition will take place). There was an interesting question from the audience though: “We all know that Linked Data will be the future for museums, but how do we convince our museum directors, in terms of crowds, buzz and revenue?”. Formulating an appropriate answer was surprisingly hard; the best candidate: “Think about putting a man on the moon”.
Source: Chris Dijkshoorn