Our partners at the third project meeting in Zagreb, Croatia.
While we hope you are mostly staying safely at home these days, here is some reading material from our partners' recent articles:
Methods and visualization tools for the analysis of medical, political and scientific concepts in Genealogies of Knowledge
"The article presents an approach to establishing requirements and developing visualization tools for scholarly work, which involves reviewing published methodology, software prototyping, analysis of scholarly output produced with the support of text visualization software, and interviews with users."
Reproduction, replication, analysis and adaptation of a term alignment approach
"In this paper, we look at the issue of reproducibility and replicability in bilingual terminology alignment (BTA), propose a set of best practices for reproducibility and replicability of NLP papers and analyze several influential BTA papers from this perspective. We present our attempts at replication and reproduction."
Authors: Andraž Repar, Matej Martinc, Senja Pollak
"Our task is to demystify fears": analysing newsroom management of automation in journalism
"The study explores uses of algorithmic techniques in journalists' working environments and investigates newsroom managers' negotiations of automation as innovation process aimed at ensuring partial or full replacement of human labour with technology, drawing from 15 qualitative interviews with representatives of newsroom management from legacy news institutions in the United Kingdom, Germany and the United States of America."
Our partners developed COVID-19 Explorer, a search tool for fast, interactive exploration of the current COVID-19 literature. It uses a graph-based keyword extraction methodology developed as part of the EMBEDDIA project.
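The core idea behind graph-based keyword extraction is to build a word co-occurrence graph from a text and rank words by a centrality measure such as PageRank. The sketch below is a minimal TextRank-style illustration of that general idea, not the actual EMBEDDIA implementation (whose details are described in the project's publications); all function names and parameters here are our own illustrative choices.

```python
from collections import defaultdict

def extract_keywords(tokens, window=2, damping=0.85, iters=30, top_k=3):
    """Toy graph-based keyword extraction: build a co-occurrence graph
    over a sliding window, then rank words with a few PageRank steps.
    Illustrative sketch only, not the EMBEDDIA project's method."""
    # Undirected co-occurrence graph: words within `window` positions are linked.
    graph = defaultdict(set)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[i] != tokens[j]:
                graph[tokens[i]].add(tokens[j])
                graph[tokens[j]].add(tokens[i])
    # Fixed number of PageRank iterations (no convergence check, for brevity).
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        score = {
            w: (1 - damping) + damping * sum(score[u] / len(graph[u]) for u in graph[w])
            for w in graph
        }
    # Highest-scoring words are the keyword candidates.
    return [w for w, _ in sorted(score.items(), key=lambda kv: -kv[1])[:top_k]]
```

On real text one would first tokenize, lowercase, and filter stopwords; well-connected content words then accumulate the highest scores.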
Participants and organizers of the News Automation at Work workshop in Dublin, Ireland.
We also joined SemEval and LREC!
For our task at the International Workshop on Semantic Evaluation (SemEval), we asked participants to build systems that predict the effect context has on human perception of the similarity of words. Our new datasets, containing contextual similarity ratings in four different languages, will soon be publicly released:
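One way to frame the task: a system scores a word pair once from context-free representations and once from contextualised ones, and the quantity of interest is the shift in similarity that the shared context induces. The sketch below shows that framing with hand-made toy vectors; the helper name `similarity_shift` and the vectors are our own illustrative assumptions (a real system would obtain vectors from a contextual language model).

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(a * a for a in v)))

def similarity_shift(v1_static, v2_static, v1_ctx, v2_ctx):
    """Hypothetical helper: how much does moving from context-free to
    contextualised representations change the similarity of a word pair?"""
    return cosine(v1_ctx, v2_ctx) - cosine(v1_static, v2_static)
```

A positive shift means the context pulls the two words closer together in meaning; a negative shift means it drives them apart, mirroring the human judgements collected in the datasets.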