A look back at UCDS at ICT.Open2018

Two weeks ago, ICT.Open2018 was held in Amersfoort. This event brings together Computer Science researchers from all over the Netherlands, and our research group was present with many posters and presentations.

We even won a prize! (Well, a second-place prize, but awesome nonetheless.) Xander Wilcke presented work on using Knowledge Graphs for Machine Learning and was awarded the runner-up prize for best poster presentation at ICT.Open2018. Congrats!

The knowledge graph for end-to-end learning poster at #ictopen2018 check it out at stand nr 4 @UserCentricDS pic.twitter.com/3INrp63lN9

— Victor de Boer (@victordeboer) March 19, 2018

 

Ronald Siebes presented work in the ArchiMediaL project on reconstructing 4D street views from historical images.

Let Ronald Siebes tell you all about reconstructing #4D street views from historical image collections, as proposed by #ArchiMediaL @VU_Science @tudelft @UserCentricDS pic.twitter.com/6WQUGta4VC

— Victor de Boer (@victordeboer) March 20, 2018

Oana Inel presented her work on Named Entity Recognition and Gold Standard critiquing. She also demonstrated the Clariah MediaSuite.

@oana_inel shows that 'gold standards' in NER might not be gold, nor standard. #ICTOPEN2018 @UserCentricDS @VU_Science #crowdtruth pic.twitter.com/cSRIH3YUS7

— Victor de Boer (@victordeboer) March 19, 2018

Advanced digital data services for media scholars: @oana_inel demonstrating the @CLARIAH_NL MediaSuite at #ictopen2018 @VU_Science @benglabs #clariah #digitalhumanities pic.twitter.com/K6wzGFdHFR

— Victor de Boer (@victordeboer) March 19, 2018

Anca Dumitrache talked about using crowdsourcing as part of the Machine Learning life cycle.

@anca_dmtrch proposes #crowdsourcing to fix errors in distant supervision data in the machine learning session at #ICTOPEN2018 #crowdtruth @UserCentricDS @VU_Science pic.twitter.com/2QXNcSkdRN

— Victor de Boer (@victordeboer) March 19, 2018

Tobias Kuhn talked about Reliable Granular References to Changing Linked Data, which was previously published at ISWC2017.

Cristina Bucur introduced Linkflows: enabling a web of linked semantic publishing workflows.

I myself talked a bit about current work in the ABC-Kb Network Institute project.

@victordeboer presenting "UX Challenges of information organization: the assessment of language impairment in bilingual children" @ #ictopen2018 @networkinstvu @UserCentricDS @VU_Science pic.twitter.com/2CY4esa4vy

— Oana Inel (@oana_inel) March 20, 2018

All in all, this was quite a nice edition of the yearly event for our group. See you next year in Amersfoort!


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

An Augmented Reality App to Annotate Art

[This post is based on the Bachelor project by Jurjen Braam and reuses content from his thesis]

The value of Augmented Reality applications has been shown for a number of different tasks. Most of these show that AR applications add to the immersiveness of an experience. For his Bachelor Project, VU student Jurjen Braam researched to what extent AR technology makes sense for the task of annotating artworks.

To this end, Jurjen built a mobile application which allows experts or laypeople to add textual annotations to artworks in three different modes. One mode doesn't show the artwork at all but allows for textual input; the second mode shows the work as an image and allows for localised annotations. The last mode is the AR mode, which projects the artwork into the physical space, using the device camera and screen.

Three modes of the Application (Text, 2D, AR)
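
To make the three annotation modes concrete, here is a minimal data-model sketch in Python; the names and fields are invented for this post and do not come from Jurjen's thesis:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class Mode(Enum):
    TEXT = "text"      # artwork not shown; free-text input only
    IMAGE_2D = "2d"    # artwork shown as an image; annotations can be localised
    AR = "ar"          # artwork projected into physical space via camera and screen

@dataclass
class Annotation:
    artwork_id: str
    mode: Mode
    text: str
    # Point on the artwork (x, y as fractions of width/height);
    # only meaningful in the 2D and AR modes.
    position: Optional[Tuple[float, float]] = None

# Text mode: the note cannot be anchored to a spot on the work
plain = Annotation("night-watch", Mode.TEXT, "Painted in 1642.")
# 2D (or AR) mode: the note is anchored to a location on the artwork
local = Annotation("night-watch", Mode.IMAGE_2D, "A dog!", position=(0.7, 0.9))
```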

Jurjen evaluated the three modes through a small user study, which showed that immersion and enjoyment were highest in the AR mode, but that this mode was the least efficient. Participants also indicated that for annotation tasks, larger screens would be preferable.

User evaluation in action

This research was a unique endeavour combining a proven technology (AR) with a well-known task (annotation), and it identified interesting possibilities for follow-up research.


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

W4RA work featured in BBC article

Screenshot of the BBC article "'Siri, will talking ever top typing?' by Padraig Belton"

A BBC web article, "Siri, will talking ever top typing?" by Padraig Belton, features our W4RA work on voice interfaces for farmer information in Northern Ghana.

Francis Dittoh talks about the need for specific information for farmers in their own language and discusses ongoing research into our Kasadaka system. Anna Bon talks more about the web of voices. Very nice to see our work recognized by international media!

Read more at http://www.bbc.com/news/business-43409952


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

Field trip to Ghana, February 2018

As part of the ongoing W4RA efforts, the VU ICT4D team visited West Africa once more. This time, we visited Mali, Burkina Faso and Ghana. I personally went to Ghana to talk to external PhD candidate Francis Dittoh and his colleagues at the University for Development Studies (UDS) and the SARI institute in Tamale, in Northern Ghana. After first flying to Accra, I was able to have a short meeting with my good friend Nana Baah Gyan, who is now an ICT for Development expert in that city. I then flew on to Tamale, where we met up with Francis to talk about his PhD work on information systems for rural farmers.

Francis and myself talking to a shea-butter guide

After colleagues Anna Bon and Hans Akkermans arrived from Burkina Faso, we met with the UDS vice-chancellor as well as the Dean of the Faculty of Engineering to discuss the ongoing collaboration of W4RA and VU with Prof. Saa Dittoh and Francis. We hope to formalize these collaborations in a Memorandum of Understanding between the two universities. We also discussed the ambition of UDS to set up a curriculum in software engineering as part of the undergraduate programmes. Such a programme would include a yearly community service course, where students go into the field. This is very interesting for us, as it aligns well with the goals of the ICT4D course at VU. The programmes also include a French language course to allow for smoother cooperation with other Sahel countries (specifically Burkina Faso).

The UDS mission statement

We also visited the Savanna Agriculture Research Institute (CSIR-SARI). The situation there is comparable to that in Burkina Faso. SARI governs the use of hybrid seeds, which cannot be reused year by year but result in a higher yield. SARI is interested in educating farmers on how to handle these seeds (fertilization, planting, etc.). There are three classes of seeds: 1) breeder seeds, grown at research institutes; 2) foundation seeds; and 3) certified seeds, produced by ~1000 farmers monitored by ~10 seed companies and certified by an external agency.

SARI research institute

Potentially interesting for us is their connection to rural farmers. SARI now mostly maintains this connection through extension workers from the Ministry of Agriculture. However, because of budget cuts, their numbers have been reduced to roughly one extension worker per 2,500 farmers. Therefore, SARI is open to the idea of communicating with farmers directly, for example through a voice-based system. An interesting opportunity could be a two-track strategy, with a smartphone app for extension workers to allow them to do their work better and a companion voice app for farmers. SARI is very much interested in developing applications in a co-creation process. This would match the research proposal that Francis has submitted to SARI and UDS.

A personal highlight was the 10-hour road trip that Francis invited me to join instead of taking a flight back to Accra. Ghana is a beautiful country that changes before your eyes on such a trip. My sincere hope is that the proposed collaborations will lead to many more visits to this great country.


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

Trip Report: Royal Society & KNAW – Responsible Data Science

Last week, I had the pleasure of attending a bilateral meeting between the Royal Society and the KNAW. The aim was to strengthen the relations between the UK and Dutch scientific communities. The meeting focused on three scientific areas: quantum physics & technology; nanochemistry; and responsible data science. I was there for the latter. The event was held at Chicheley Hall, which is a classic baroque English country house (think Pride & Prejudice). It's a marvelous venue – very much similar in concept to Dagstuhl (but with an English vibe), where you are wholly immersed in academic conversation.


One of the fun things about the event was getting a glimpse of what colleagues from other technical disciplines are doing. It was cool to see Prof. Bert Weckhuysen's enthusiasm for using imaging technologies to understand catalysts at the nanoscale. Likewise, seeing both the progress and the investment (!) in quantum computing from Prof. Ian Walmsley was informative. I also got an insider intro to the challenges of engineering a quantum computer from Dr. Ruth Oulton.

The responsible data science track had ~15 people. What I liked was that the organizers included not only computer scientists but also legal scholars, politicians, social scientists, philosophers and policy makers. The session consisted primarily of talks, but luckily everyone was open to discussion throughout. Broadly, responsible data science covers the ethics of the practice and implications of data science, or, put another way:

Call to arms on Responsible Data Science by Jeroen van den Hoven "if we don't do it, somebody else will do it to us" @hoven_j #rdsuknl pic.twitter.com/48xb8SvR5S

— Frank van Harmelen (@FrankVanHarmele) February 21, 2018

For more context, I suggest starting with two sources: 1) the Dutch consortium on responsible data science and 2) the paper 10 Simple Rules for Responsible Big Data Research. I took away two themes, both from the track and from my various chats with people during coffee breaks, dinner and at the bar.

1) The computer science community is engaging

It was apparent throughout the meeting that the computer science community is confronting the challenges head on. A compelling example was the talk by Dr. Alastair Beresford from Cambridge about Device Analyzer, a system that captures the activity of users' mobile phones in order to provide data to improve device security, which it has:

I'm at a joint @royalsociety @_knaw symposium on responsible data science. Interesting opening talk from @arberesford about their Android DeviceAnalyzer – and the huge vulnerabilities found! https://t.co/Kpg3lZl0d1 #RDSUKNL #fb pic.twitter.com/CqVG9PlteM

— Julian Huppert (@julianhuppert) February 21, 2018

He talked compellingly about the trade-offs between consent and privacy and how the project tries to manage these issues. In particular, I thought how they handle data sharing with other researchers was interesting. It reminded me very much of how the Dutch Central Bureau of Statistics manages microdata on populations.

Another example was the discussion by Prof. Maarten de Rijke of ongoing work on diversity in recommender and search systems. He called out the Conference on Fairness, Accountability, and Transparency (FAT*), which was happening just after this meeting and where the data science community is engaging with these issues. Indeed, one of my colleagues was tweeting from that meeting:

Huge Kudos to https://t.co/uUaHfDb28i for reporting & helping fix this space. IBM replicated their results internally and released a new, improved API! 👏 #FAT2018

— Corey Harper (@chrpr) February 24, 2018

Julian Huppert, former MP, discussed the independent review board set up by DeepMind Health to enable transparency about their practices; he is part of that board. Interestingly, Richard Horton, Editor of The Lancet, is also part of that board. Furthermore, Prof. Bart Jacobs discussed the polymorphic-encryption-based privacy system he's developing for a collaboration between Google's Verily and Radboud University around Parkinson's disease. This is an example that even the majors are engaged around these notions of responsibility. To emphasize this engagement even more, during the meeting a new report on the Malicious Uses of AI came out from a number of well-known organizations.

One thing that I kept thinking is that we need more assets or concrete artifacts that data scientists can apply in practice.

So my question is how do I build values into a standard development life cycle? Need actionable artifacts Things like https://t.co/VG63JibOau are important for practice #rdsuknl @hoven_j

— Paul Groth (@pgroth) February 21, 2018

For example, I like the direction outlined in this article from Dr. Virginia Dignum about defining concrete principles using a design-for-values approach. See TU Delft's Design for Values Institute for more on this kind of approach.

2) Other methods needed

As data scientists, we tend to want to use an experimental, data-driven approach even for these notions surrounding responsibility.

Computer/data scientists tend to look at news #diversity as a statistical measure of similarity/serendipity etc, while social scientists often look at as a precondition for democracy –@nhelberger #rdsuknl #filterbubble #fakenews

— IB (@1Br0wn) February 21, 2018

Even though I think there's absolutely a role here for a data-driven approach, it's worth looking at other, more qualitative methods, for example survey instruments, an ethnographic approach, or even studying the textual representation of the regulatory apparatus. For instance, reflecting on the notion of Thick Data is compelling for data science practice. This was brought home by Dr. Ian Brown in his talk on data science and regulation, which combined both an economic and a survey view.

Personally, I tried to bring some social science literature to bear when discussing the need for transparency in how we source our data. I also argued for the idea that adopting a responsible approach is also actually good for the operational side of data science practice:

#rdsuknl @pgroth on reusing datasets pic.twitter.com/DI4LoiLUPW

— Virginia Dignum (@vdignum) February 22, 2018

While I think it's important for computer scientists to look at different methods, it's also important for other disciplines to gain insight into the actual process of data science itself, as Dr. Linnet Taylor grappled with in her talk about observing a data governance project.

Overall, I enjoyed both the setting and the content of the meeting. If we can continue to have these sorts of conversations, I think the data science field will be much better placed to deal with the ethical and other implications of our technology.

Random Thoughts

  • Peacocks!
  • Regulating Code – something for the reading list
  • Somebody remind me to bring a jacket next time I go to an English Country house!
  • I always love it when egg codes get brought up when talking about provenance.
  • I was told that I had a "Californian conceptualization" of things – I don't think it was meant as a compliment – but I'll take it as such 🙂
  • Interesting pointer to work by Seda Gurses on privacy and software engineering from @1Br0wn
  • Lots of discussion of large internet majors and monopolies. There's lots of academic work on this, but I really like Ben Thompson's notion of aggregators as the way to think about them.
  • Merkle trees are great – but blockchain is a nicer name 😉

 

Source: Think Links

Posted in Paul Groth, Staff Blogs


A Voice Service Development Kit for the Kasadaka platform

[This post is written by André Baart and describes his MSc thesis]

While internet usage in the developing world is still low, the adoption of simple mobile phones is widespread. A way to offer the advantages of the internet to these populations is through voice-based information systems. The KasaDaka voice-services platform is aimed at providing voice-services in the context of ICT for Development (ICT4D). The platform is based on a Raspberry Pi and a GSM modem, which enables affordable voice-service hosting using the locally available GSM network. The platform takes into account the special requirements of the ICT4D context, such as limited internet connectivity and low literacy rates.

This research focuses on lowering the barrier to entry of voice-service development by reducing the skill set needed to do so. A Voice Service Development Kit (VSDK) was developed that allows the development of voice-services by deploying and customizing provided building blocks. These building blocks each represent a type of interaction that is often found in voice-services (for example, a menu, user voice input, or the playback of a message). The researcher argues that the simplification of voice-service development is an essential step towards sustainable voice-services in the ICT4D context, as this increases the potential number of local voice-service developers, removing the dependency on foreign (and thus expensive) developers and engineers. This simplification should ideally be achieved by providing a graphical interface for voice-service development.
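
To give a feel for the building-block idea, here is a minimal sketch in Python; the class names and composition style are invented for this post, and the real VSDK uses its own abstractions with a graphical interface on top:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class MessagePlayback:
    """Play a pre-recorded audio message to the caller."""
    audio_file: str  # e.g. a prompt recorded in the local language

@dataclass
class UserVoiceInput:
    """Record a spoken answer from the caller."""
    prompt: MessagePlayback
    max_seconds: int = 30

@dataclass
class Menu:
    """Let the caller pick an option with the phone keypad."""
    prompt: MessagePlayback
    options: List[Union["Menu", MessagePlayback, UserVoiceInput]]

# A tiny market-information service, composed entirely from blocks:
service = Menu(
    prompt=MessagePlayback("welcome.wav"),
    options=[
        MessagePlayback("maize_prices.wav"),                  # caller presses 1
        UserVoiceInput(MessagePlayback("ask_question.wav")),  # caller presses 2
    ],
)
```

The point of the VSDK is that such a service tree is assembled from a graphical interface rather than written as code.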

The VSDK was evaluated during the ICT4D course at the Vrije Universiteit Amsterdam, where students built applications for various ICT4D use-cases using the VSDK. Afterwards, a survey was conducted, which provided insight into the students' experiences with voice-service development and the VSDK. From the results of the evaluation it is concluded that the building-block approach to voice-service development used in the VSDK is successful for the development of simple voice-services. It allows newcomers to (voice-service) development to quickly develop simple voice-services from a graphical interface, without requiring programming experience.

The VSDK combined with the existing KasaDaka platform provides a good solution for the hosting and development of voice-services in the ICT4D context.

More details can be found in the complete thesis. A slidedeck is included below. You can find the VSDK code on André's GitHub: http://github.com/abaart/KasaDaka-VSDK

 


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

Machine-to-machine communication in rural conditions: Realizing KasadakaNet

[This post describes research by Fahad Ali and is based on his MSc thesis]

Contextual constraints (lack of infrastructure, low literacy, etc.) play an important role in ICT for Development (ICT4D) projects. The Kasadaka project offers a technological platform for knowledge-sharing applications in rural areas of Sub-Saharan Africa. However, the lack of stable internet connections restricts the exchange of data between distributed Kasadaka instances, which leads us to research alternative ways of machine-to-machine (m2m) communication.

Example of a KasadakaNet situation, with a wifi-donkey mounted on a bus, visiting a city and two remote villages, creating a so-called sneakernet

Fahad Ali's research focuses on mobile elements and on using wifi sneakernets for this m2m communication, enabling information sharing between geographically distributed devices. He developed a Raspberry Pi-based device called the Wifi-donkey that can be mounted on a vehicle and facilitates information exchange with nearby devices, using the built-in wifi card of the rPi 3. The solution is based on the PirateBox offline file-sharing and communications system, which is built with free software, and uses off-the-shelf Linux software components and configuration settings to allow it to discover and connect to nearby Kasadaka devices using wifi technologies.

Experimental setup: the wifi-donkey taped to an Amsterdam balcony to test range and bandwidth.

We evaluated the solution by simulating a low-resource setting and performing so-called "pass-bys" in an Amsterdam residential area. During these pass-bys, SPARQL queries are exchanged between host and client devices and we measure the number of RDF triples transferred. This setup matches earlier case requirements as described in Onno Valkering's work. Results show that the system works fairly reliably in the simulated setting. The machine-to-machine communication method can be used in various ICT4D projects that require some sort of data-sharing functionality.
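
For a rough flavour of what happens during a pass-by, the sketch below shows how a client might pull triples from a discovered host. The endpoint URL, the query and the use of a SPARQL CONSTRUCT request over HTTP are illustrative assumptions, not details from Fahad's implementation:

```python
import requests
from rdflib import Graph

# Hypothetical SPARQL endpoint of a Kasadaka host discovered over wifi
HOST_ENDPOINT = "http://192.168.4.1:8000/sparql"

# A CONSTRUCT query returns triples, so the payload is itself RDF
QUERY = """
CONSTRUCT { ?s ?p ?o }
WHERE { ?s ?p ?o }
LIMIT 1000
"""

def pass_by_sync() -> int:
    """Fetch triples from a host while it is in wifi range and return
    the number of RDF triples transferred (the evaluation metric)."""
    response = requests.get(
        HOST_ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "text/turtle"},
        timeout=10,  # pass-bys are short: fail fast once out of range
    )
    response.raise_for_status()
    graph = Graph()
    graph.parse(data=response.text, format="turtle")
    return len(graph)
```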

You can find out more about Fahad’s work through the following resources:


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

DIVE+ collection enrichment paper wins best paper award at MTSR2017

Last week, I visited the 11th Metadata and Semantics Research Conference (MTSR2017) in Tallinn, Estonia. This conference brings together computer scientists, information scientists and people from the domain of digital libraries to discuss their work on metadata and semantics. The 2017 edition of the conference drew around 70 people, which is a great size for a single-track conference with lively discussions. The programme included interesting tracks on Cultural Heritage and Library (meta)data, as well as one on Digital Humanities.

On the last day I presented our paper "Enriching Media Collections for Event-based Exploration" [draft pdf], co-authored with the people in the CLARIAH and DIVE+ team working on data enrichment and APIs: Liliana Melgar, Oana Inel, Carlos Martinez Ortiz, Lora Aroyo and Johan Oomen. The slides for the presentation can be found here on SlideShare. We were very happy to hear that our paper was awarded the MTSR2017 Best Paper Award!

In the paper, we present a methodology to publish, represent, enrich, and link heritage collections so that they can be explored by domain-expert users. We present four methods to derive events from media object descriptions. We also present a case study in which four datasets with mixed media types are made accessible to scholars, and we describe the building blocks for event-based proto-narratives in the knowledge graph.
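
As an impression of what such an event-centric representation can look like, here is an illustrative sketch using the Simple Event Model (SEM) vocabulary; the example event and the media-to-event linking property are invented for this post, not taken from the paper:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SEM = Namespace("http://semanticweb.cs.vu.nl/2009/11/sem/")
EX = Namespace("http://example.org/")  # placeholder namespace

g = Graph()
g.bind("sem", SEM)

# A media object description such as "Opening of the Delta Works, 1958"
# yields an event node that links an actor, a place and a time:
event = EX["event/delta-works-opening"]
g.add((event, RDF.type, SEM.Event))
g.add((event, RDFS.label, Literal("Opening of the Delta Works")))
g.add((event, SEM.hasActor, EX["agent/queen-juliana"]))
g.add((event, SEM.hasPlace, EX["place/zeeland"]))
g.add((event, SEM.hasTimeStamp, Literal("1958")))

# Linking the media object to the event enables event-based exploration
# across collections (EX.depicts is a placeholder property):
g.add((EX["media/newsreel-123"], EX.depicts, event))
```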


Source: Victor de Boer

Posted in Staff Blogs, Victor de Boer

Trip Report: ISWC 2017

Last week, I conferenced! I attended the 16th International Semantic Web Conference (ISWC 2017) in Vienna at the beginning of the week and then headed up to FORCE 2017 in Berlin for the back half of the week. For the last several ISWCs, I've been involved in the organizing committee, but this year I got to relax. It was a nice chance to just be an attendee and see what was up. This was made even nicer by the really tremendous job Axel, Jeff and their team did in organizing both the logistics and the program. The venues were really amazing and the wifi worked!

WU campus #iswc2017 #UrbanSketchers pic.twitter.com/LYeEgFlNy4

— Sean Bechhofer (@seanbechhofer) October 22, 2017

Amazing reception in the Vienna city hall for all #iswc2017 attendees. Enjoy the conference! pic.twitter.com/aQvZTxhNNj

— Javier Fernández (@ciutti) October 22, 2017

#iswc2017 posters and demo session in the @wu_vienna spaceship library pic.twitter.com/5Jbo6ulkrn

— Marieke van Erp (@merpeltje) October 23, 2017

Before getting into what I thought were the major themes of the conference, let's do some stats:

  • 624 participants
  • Papers
    • Research track: 197 submissions – 44 accepted – 23% acceptance rate
    • In-use: 27 submissions – 9  accepted – 33% acceptance rate
    • Resources: 73 submissions – 23 accepted – 31% acceptance rate
  • 46 posters & 61 demos
  • Over 1000 reviews were done, excluding those for the workshops / demos / posters. Just a massive amount of work in helping papers get better.

This year they expanded the number of best reviewers and I was happy to be one of them:

The best reviewers of @iswc2017 #iswc2017 – research, resources + in-use – really important work! pic.twitter.com/j30HApBqdJ

— Paul Groth (@pgroth) October 23, 2017

You can find all the papers online as preprints.

The three themes I took away from the conference were:

  1. Ecosystems for knowledge engineering
  2. Learn from everything
  3. More media

Ecosystems for knowledge engineering

This was a hard theme to find a title for, but there were several talks about how to design and engineer the combination of social and technical processes needed to build knowledge graphs. Deborah McGuinness in her keynote talked about how it took a village to create effective knowledge-driven systems. These systems are the combination of experts, knowledge specialists, ML systems, ontologies, and data sources, summed up by the following slide:

#iswc2017 @dlmcguinness takeaway artifact: Semantic-enable Framework pic.twitter.com/7tVqt80fjA

— iswc2017 (@iswc2017) October 23, 2017

My best characterization is that this falls under the rubric of knowledge engineering, something that has always been part of the semantic web community. What I saw, though, was the development of more extensive ideas and guidelines about how to create and put into practice not just human-focused systems but entire socio-technical ecosystems that leverage all manner of components.

Some examples: Gil et al.'s paper on creating a platform for high-quality ontology development and data annotation explicitly discusses the community organization along with the platform used to enable it. Knoblock et al.'s paper on creating linked data for the American Art Collaborative discusses not only the technology for generating linked data from heterogeneous sources but also the need for a collaborative workflow facilitated by a shared space (GitHub), as well as the tools used to do expert review. In one of my favorite papers, Piscopo et al. evaluated the provenance of Wikidata statements and also developed machine learning models that could judge the authoritativeness & relevance of potential source material. This could provide a helpful tool in allowing Wikidata editors to garden the statements automatically added by bots. As a last example, Jamie Taylor in his keynote discussed how at Google they have a Knowledge Graph Schema team that is there to support developers in creating interlocking data structures. The team is focused on supporting and maintaining the quality of the knowledge graph.

A big discussion area was the idea coming out of the US for a project / initiative around an Open Knowledge Network introduced by Guha. Again, I’ll put this under the notion of how to create these massive social-technical knowledge systems.

I think more work needs to be done in this space, not only with respect to the dynamics of these ecosystems, as Michael Lauruhn and I discussed in a recent paper, but also from a reuse perspective, as Pascal Hitzler has been arguing with ontology design patterns.

Learn from everything

The second theme for me was learning from everything. Essentially, this is the use of the combination of structured knowledge and unstructured data within machine learning scenarios to achieve better results. A good example of this was presented by Achim Rettinger on using cross-modal embeddings to improve semantic similarity and type prediction tasks:

Nice presentation by Achim Rettinger about a cross modal approach combining knowledge, visual and textual features #iswc2017 pic.twitter.com/Q6k74A6yUD

— Jörn Hees (@joernhees) October 25, 2017
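
The underlying intuition, in a toy sketch of my own (the actual cross-modal models are learned jointly and are far more sophisticated), is that an entity gets one vector per modality, and a shared space makes entities comparable across modalities:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v: np.ndarray) -> np.ndarray:
    """L2-normalize a vector."""
    return v / np.linalg.norm(v)

# Toy per-modality embeddings for one entity; in practice these would
# come from a KG embedding model, an image model and a text model.
kg_vec, visual_vec, text_vec = (unit(rng.random(64)) for _ in range(3))

# The simplest fusion: concatenate the modality-specific vectors so a
# downstream task (similarity, type prediction) can use all signals.
fused = np.concatenate([kg_vec, visual_vec, text_vec])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two (fused or single-modality) vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```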

Likewise, Nada Lavrač discussed in her keynote different approaches to semantic data mining, which also leverage different sources of information for learning. In particular, what was interesting is the use of network analysis to create a smaller knowledge network to learn from.

Nada Lavrač #keynote on the power of flattening Structured Data for #datamining and a few takeaway notes for #iswc2017 pic.twitter.com/Py6IfWaqBv

— iswc2017 (@iswc2017) October 24, 2017

A couple of other examples include:

It's worth calling out the winner of the renewed Semantic Web Challenge, from IBM, which used deep learning in combination with sources such as DBpedia, GeoNames and background assumptions for relation learning.


Socrates – Winner SWC

(As an aside, I think it’s pretty cool that the challenge was won by IBM on data provided by Thomson Reuters with an award from Elsevier. Open innovation at its best.)

For a broader take on the complementarity between deep learning and the semantic web, Dan Brickley's paper is a fun read. Indeed, as we start to potentially address common-sense knowledge, we will have to take more opportunities to learn from everywhere.

Future of knowledge graph according to Jamie Taylor #iswc2017 keynote: Linking entities to actions, commonsense (but important!) knowledge https://t.co/2xjWqNIRZX

— Yolanda Gil (@yolandagil) October 25, 2017

More media

Finally, I think we saw an increase in the number of works dealing with different forms of media. I really enjoyed the talk on Improving Visual Relationship Detection using Semantic Modeling of Scene Descriptions given by Stephan Brier, in which a background knowledge base was used to improve relation prediction between portions of images.


There was an entire session focused on multimodal linked data, including talks on audio (the MIDI LOD cloud, the Internet Music Archive as linked data) and images (IMGPedia, content-analyzed linked data descriptions of Wikimedia Commons). You can even mash up music with the SPARQL-DJ.

Conclusion

DBpedia won the 10-year paper award. Ten years later, semantic technologies and in particular the notion of a knowledge graph are mainstream (e.g. Thomson Reuters has a 100-billion-node knowledge graph). While we may still be focused too much on the available knowledge graphs for our research work, it seems to me that the community is branching out to begin to answer a range of new questions (how to build knowledge ecosystems? where does learning fit? …) about the intersection of semantics and the web.

Random Notes:

Source: Think Links

Posted in Paul Groth, Staff Blogs