The 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) was held in Zurich from the 5th to the 8th of July. This year the conference moved to Europe, after seven editions in the US. This post gives an account of WDAqua’s contributions.
Whilst HCOMP focuses on research around crowdsourcing and human computation, it encourages interdisciplinary contributions, ranging from HCI and artificial intelligence to economics and the social sciences, and it is open to researchers and practitioners alike. WDAqua was at HCOMP with several types of contributions.
We are very happy to announce that the paper 'How Biased Is Your NLG Evaluation' supported by QROWD won the Best Paper Award at the #CrowdBias workshop at #HCOMP2018! https://t.co/ZhKAX0lvFF #SmartMobility #EU #triples #humanassessment pic.twitter.com/cIZa2R4vuK— QROWD Project (@QrowdProject) July 5, 2018
Natural Language Generation (NLG) is the task of producing natural language text from data in structured form. It is increasingly used to improve the accessibility of information that would otherwise be hard for humans to read, e.g. by Question Answering (QA) systems that rely on structured data to produce their answers. Pavlos’ work, carried out together with Eddy Maddalena, Jonathon Hare, and Elena Simperl, investigates the similarities between expert and crowdsourced evaluation of automatically generated text. For all three features taken into consideration, the difference between crowdworkers’ and experts’ judgements was significant: compared to the experts, crowdworkers tended to underestimate one feature (fluency) and overestimate the other two (coverage and contradictions).

Alessandro Piscopo presented WDAqua to the attendees of the 1st Research Project Networking workshop.
This workshop aimed to offer representatives of a number of European and non-European research projects that involve crowdsourcing the opportunity to meet up, exchange ideas, and start new collaborations. Besides presenting WDAqua and its consortium, Alessandro spoke about an experiment he carried out alongside other researchers, in which they evaluated a two-stage approach (crowdsourcing + machine learning) to predict the quality of Wikidata references on a large scale. This piece of research was previously presented at ISWC 2017 and may be seen as an example of using human computation to enhance the answers provided by a QA system with provenance information. In addition to his talk at the workshop, Alessandro took part in the conference poster session, where he discussed WDAqua and his research with several interested attendees.
We've been at #HCOMP2018 this week, with @aliossandro presenting our project and his work. Lots of interesting discussion with researchers eager to know more about what we do! @MSCActions @EU_H2020 pic.twitter.com/GcEw4SoWyj— WDAqua (@WDAqua) July 7, 2018