From 7 to 10 October I visited the Eighth International Conference on Knowledge Capture (K-CAP 2015) in Palisades, New York. The conference was scheduled right before ISWC and only a few hours' drive away, so many participants attended both conferences. About 60 people attended this edition of K-CAP, which is comparable to the number of participants at earlier editions.
Workshops and Tutorials
On the first day of the conference one workshop and two tutorials were scheduled. The two half-day tutorials were:
“Acquisition of Complex Logical Knowledge: Authoring, Assimilation, and Assembly” by Benjamin Grosof, Michael Kifer, and Paul Fodor
“Knowledge Processing with Big Data and Semantic Web Technologies” by Ali Hasnain, Naoise Dunne, and Stefan Decker
I attended the full-day workshop on “Scientific Knowledge Capture”, organized by Peter Clark, Tim Clark, Imme Ebert-Uphoff, Yolanda Gil, and Mark Musen. The main lesson I learned in this workshop is that our way of doing science has to change, because the context of doing science is changing: datasets are growing larger, machines have more computing power, and stakeholders (funders, publishers, peers) require transparency and reproducibility. Computer scientists can support the move towards this new way of doing science by offering tools and approaches to capture scientific knowledge and represent it in such a way that it can be used by machines. Several examples of such tools and approaches were presented in the workshop. But on top of that, domain scientists also have to change their way of working and thinking, and gain basic knowledge and skills in knowledge engineering.
Both the organizers and the audience enjoyed the workshop and thought it would certainly be worth organizing again at future editions of K-CAP.
From 8 to 10 October the main conference program was scheduled, with 16 full papers, a poster session, and two invited talks. The organizers also reserved a time slot for presenters from the last edition of K-CAP (2013), as that conference was cancelled due to extreme weather conditions.
Chris Welty gave a keynote presentation on “crowd truth”. Many computer scientists create the ground truth for their experiments based on two assumptions: 1) we need experts to guide us to the truth, and 2) disagreement is bad. In his talk Chris challenged both assumptions. He showed that allowing disagreement between people provides useful information about the level of ambiguity in the given examples, and that using a crowd of untrained workers can be just as reliable as using highly trained (and expensive) experts.
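The intuition that disagreement carries signal can be sketched with a toy measure: treating the entropy of the crowd's labels per item as an ambiguity score. This is a simplification for illustration only, not the actual CrowdTruth metrics from the talk.

```python
from collections import Counter
from math import log2

def ambiguity(labels):
    """Shannon entropy of the crowd's labels for one item:
    0.0 means full agreement; higher values mean more disagreement,
    which can flag an ambiguous example rather than a bad worker."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Clear-cut item: all ten workers agree, so ambiguity is zero.
clear_item = ambiguity(["spam"] * 10)
# Ambiguous item: workers split evenly, so ambiguity is maximal (1 bit).
ambiguous_item = ambiguity(["spam"] * 5 + ["ham"] * 5)
```

Under this toy view, a high-ambiguity item is not thrown away as noise; it is itself useful information about the example.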
The keynote talk by Yolanda Gil was on “scientific knowledge capture”. Some decades ago domain scientists wanted more data and more computing power. Now that they have plenty of both, they need their computers to have more (scientific) knowledge, so that they can work smarter and answer new questions. Scientific knowledge takes many forms; it is much more than taxonomies and vocabularies. There are many ways of capturing it, for example by recording how scientists describe their data, how they describe their software, or how they follow lines of inquiry. Capturing and storing these types of scientific knowledge will help domain scientists do better and smarter science.
José Luis Redondo-García, Giuseppe Rizzo and Raphaël Troncy won the best paper award with their work on “The Concentric Nature of News Semantic Snapshots: Knowledge Extraction for Semantic Annotation of News Items”. A single news article on the Web often gives a limited picture of the story being reported, while the Web holds sufficient information to enrich the article and provide a broader picture. The authors propose a concentric approach to representing the context of a news item: representative entities are collected via named-entity recognition and entity expansion, then harmonized into a single model and arranged according to dimensions such as frequency (the core), informativeness, semantic connectivity, and popularity (the crust).
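The concentric arrangement can be illustrated with a small sketch. Everything here is hypothetical: the thresholds, the entity scores, and the two-dimension simplification are my own illustration, not the authors' actual model.

```python
def semantic_snapshot(entities, core_freq=3, crust_pop=0.8):
    """Arrange extracted entities into concentric layers.

    entities: list of (name, frequency_in_article, web_popularity) tuples,
    with hypothetical scores. Entities frequent in the article itself go
    to the core; entities mainly popular in the wider Web context go to
    the crust; the rest sit in between. Thresholds are illustrative.
    """
    snapshot = {"core": [], "middle": [], "crust": []}
    for name, freq, pop in entities:
        if freq >= core_freq:
            snapshot["core"].append(name)
        elif pop >= crust_pop:
            snapshot["crust"].append(name)
        else:
            snapshot["middle"].append(name)
    return snapshot

# Hypothetical entities extracted from a news item about Expo Milano.
entities = [
    ("Expo Milano", 5, 0.9),   # mentioned often: core
    ("Milan", 2, 0.95),        # rare in the article, popular on the Web: crust
    ("pavilion", 1, 0.2),      # neither: middle
]
snapshot = semantic_snapshot(entities)
```

The design point is that the layers degrade gracefully: consumers that only need the gist read the core, while those building a broader picture also pull in the crust.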
Maribel Acosta, Elena Simperl, Fabian Flöck and Maria Esther Vidal won the best student paper award for “HARE: A Hybrid SPARQL Engine to Enhance Query Answers via Crowdsourcing”.
Missing values in RDF data sets affect the completeness of SPARQL query answers. HARE is a novel hybrid query processing engine that brings together machine and human computation to execute SPARQL queries. It uses the estimated completeness of portions of an RDF data set, together with crowd knowledge, to decide which parts of a query should be executed against the data set and which via crowd computing.
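The routing decision can be sketched as follows. This is a simplified illustration under my own assumptions (a flat threshold, per-pattern completeness estimates given as plain numbers); HARE's actual estimation model is more involved.

```python
def route_patterns(patterns, completeness, threshold=0.7):
    """Decide, per triple pattern, whether to answer from the data set
    or to ask the crowd.

    patterns: SPARQL triple patterns as strings.
    completeness: estimated fraction of the expected answers that the
    RDF data set actually contains for each pattern (hypothetical values).
    Patterns over well-populated parts of the data are answered locally;
    patterns over sparse parts are delegated to crowd computing.
    """
    plan = {"dataset": [], "crowd": []}
    for p in patterns:
        target = "dataset" if completeness[p] >= threshold else "crowd"
        plan[target].append(p)
    return plan

# Hypothetical query: directors are well covered, budgets are sparse.
patterns = ["?film dbo:director ?d", "?film dbo:budget ?b"]
completeness = {"?film dbo:director ?d": 0.95, "?film dbo:budget ?b": 0.3}
plan = route_patterns(patterns, completeness)
```

The appeal of the hybrid approach is that the crowd is only consulted where the data set is estimated to be incomplete, keeping human effort (and cost) focused on the gaps.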
The people's choice best poster award was won by Giuseppe Rizzo, Oscar Corcho and Raphaël Troncy with their poster on “The 3cixty Knowledge Base for Expo Milano 2015: Enabling Visitors to Explore the City”.