KMi News

KMi at Dagstuhl: Shaping the Future of Knowledge Graph-Based AI

Last week, Prof. John Domingue and Dr Aisling Third attended a highly influential Dagstuhl Seminar on trust, accountability, and self-determination in AI, particularly in systems built on Knowledge Graphs (KGs); John served as a co-chair of the seminar.

Why Knowledge Graphs Matter

With AI increasingly reshaping the world and driving our most powerful digital platforms—from Google and Netflix to Spotify and Facebook—KGs serve as a foundational technology, structuring vast amounts of web data into machine-readable knowledge. However, as the power of AI continues to increase, concerns over transparency, data privacy, and AI accountability continue to grow, prompting discussions on how to align AI broadly with societal benefits and human rights.

Key Seminar Topics

The event brought together experts from across Europe, the US, Japan, Chile, and Brazil, including Jim Hendler, co-author with Sir Tim Berners-Lee of the seminal 2001 Semantic Web paper. The ensuing discussions explored:

  • Computer-Using Personal Agents
  • Integrating LLMs with Knowledge Graphs
  • Developing Knowledge Graph Ecosystems
  • Machine-Readable Policy Descriptions (ODRL-based)
  • Evaluating AI-based systems
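To make the machine-readable policy topic concrete, here is a minimal sketch of an ODRL 2.2 policy, built as a Python dict and serialised to JSON-LD. The policy content and all URIs are illustrative placeholders of our own, not material from the seminar.

```python
import json

# A minimal ODRL 2.2 policy expressed as JSON-LD (here built as a Python
# dict): a Set policy permitting 'read' access to a hypothetical knowledge
# graph, constrained to non-commercial use. All URIs are placeholders.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Set",
    "uid": "http://example.com/policy/001",
    "permission": [{
        "target": "http://example.com/kg/dataset",
        "action": "read",
        "constraint": [{
            "leftOperand": "purpose",
            "operator": "eq",
            "rightOperand": "http://example.com/vocab/non-commercial"
        }]
    }]
}

print(json.dumps(policy, indent=2))
```

Because such policies are declarative data rather than code, agents and KG platforms can exchange, compare, and enforce them automatically.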

Big Questions on AI Ethics

Additionally, attendees tackled the following critical questions:

  • How can AI be made accountable for its decisions?
  • What ensures AI systems remain transparent?
  • How can users retain autonomy in AI-driven environments?

Outcomes & Impact

The intense week led to the completion of one paper on personal agents and draft versions of several others. The attendees agreed to collaborate over the coming weeks on a manifesto outlining how trust, accountability, and autonomy in AI systems can be enhanced, and the role that Knowledge Graphs will play in achieving this.

Looking Ahead

Feedback was overwhelmingly positive, with many attendees describing it as one of the best seminars they had ever attended. As AI regulation tightens worldwide, KMi continues to play a crucial role in shaping responsible and ethical AI practices.

Celebrating Ángel Pavón Pérez’s Success: A Milestone in Fair and Responsible AI Research 

KMi is thrilled to announce that Ángel Pavón Pérez has successfully passed his viva on 28 January 2025. His groundbreaking research, titled “Enhancing Fairness in Machine Learning: Identifying and Mitigating Bias with a Focus on Gender Bias in Finance”, marks a significant contribution to the field of responsible AI.  

Ángel’s work addresses critical challenges in Machine Learning (ML) by developing innovative methods to identify and mitigate bias, particularly gender bias in financial systems. His research introduces novel approaches to uncover hidden biases, even when sensitive data is unavailable, and to balance multiple fairness constraints in ML models. These advancements pave the way for more equitable AI applications, particularly in domains at high risk of bias, such as finance.

Supervised by Miriam Fernandez, Gregoire Burel, Harith Alani, and Hasan Al-Madfai, Ángel’s work was examined by an esteemed committee, including Paul Piwek (internal) and Danica Greetham (external). His achievements reflect KMi’s commitment to cutting-edge research with real-world impact.  

Looking ahead, Ángel will continue his journey as a Research Associate at the Centre for Protecting Women Online, where he will further explore responsible AI. Congratulations, Ángel, on this remarkable achievement! We look forward to seeing your continued contributions to the field.  

Stay tuned for more updates on KMi’s pioneering research in AI and beyond!

KMi-led robotics project receives major funding to enhance hospital efficiency and safety

A pioneering robotics project, led by Prof Enrico Motta’s team at KMi, has secured Innovate UK funding to revolutionise hospital efficiency and safety through the deployment of AI-driven robots. These smart robots will handle routine tasks, such as delivering medicines, while simultaneously identifying and addressing safety hazards like intruders, wandering patients, and floor hazards.

This project builds on Agnese Chiatti’s award-winning PhD research at the OU. Agnese developed a robot capable of recognising environmental hazards, such as fire risks, a breakthrough that earned her the prestigious L’Oréal-UNESCO Award for Women in Science. Her work laid the foundation for applying robotics in real-world settings, particularly in healthcare.

In partnership with Swift Robotics, a local startup specialising in robots for hospitals and shopping centres, the project will enhance robots to take on complex responsibilities. The robots will be trialled at MK University Hospital. By automating routine deliveries and identifying risks in real-time, these robots will free up staff to focus on critical patient care, improving hospital safety and potentially saving lives. The societal impact of this project is potentially highly significant. It offers cost-saving benefits while enhancing efficiency and safety in hospitals. If successful, this innovative approach could see widespread adoption across healthcare systems in the UK and beyond.

This international effort, part of the Eureka framework’s “Resilient Enterprise” initiative, includes collaborators from Finland, Switzerland, South Korea, and the UK. The OU’s role focuses on healthcare robotics, leveraging long-standing partnerships, including one with Finland’s VTT Technical Research Centre.

Finally, this project also contributes to the MK:Smart initiative, a research programme launched in 2014, which fosters innovation and the smart city agenda in Milton Keynes.

Climate misinformation surveillance project secures €1 million grant

A pioneering research initiative, ClimateSense, led by KMi’s Director Harith Alani, has received a €1 million grant from the European CHIST-ERA programme, with £275,000 allocated from the EPSRC. The project tackles the critical issue of climate misinformation by integrating Geographic Information Systems (GIS) with AI to analyse the geopolitical spread of false claims about climate issues.

ClimateSense will develop a multidimensional GIS that combines climate data—such as temperature, carbon emissions, and precipitation—with misinformation sourced from media and social platforms. Using AI, the system will identify correlations, predict misinformation spread, and empower policymakers to counter its impact effectively.

Focusing on the next three UN Climate Change Conferences (COPs), this three-year project involves international collaboration with partners in France, the Czech Republic, and Lithuania, aiming to strengthen societal resilience against misinformation and inform evidence-based climate policies.

How Generative AI is transforming Teaching and Learning: Insights from KMi’s Prof John Domingue

Generative AI has the potential to revolutionise education in ways comparable to the industrial revolution or the advent of the internet, claims Prof John Domingue. Speaking at Fleksibel utdanning Norge’s recent conference in Norway, he explored how AI can reshape teaching, learning, and institutional practices to meet the needs of future generations.

Central to Prof Domingue’s discussion is the importance of equipping students with AI skills essential for future employment. He emphasised that educational institutions must embrace AI strategically or risk falling behind. This requires robust infrastructure, such as enterprise-level platforms that ensure data security and regulatory compliance. Institutions must also develop clear guidance on AI use for students, covering ethical considerations, privacy, and responsible utilisation.

AI’s transformative impact on education is already evident. Prof Domingue highlighted tools that support educators in creating materials collaboratively with AI, generating assessments, and identifying state-of-the-art research. In the delivery of education, AI-driven digital assistants, such as our own AIDA platform, offer personalised support to students, enhancing their learning experiences.

Assessment, a cornerstone of education, must also evolve to reflect AI’s influence. Prof Domingue suggested focusing on meta-skills like critical thinking and problem-solving, which remain uniquely human advantages in an AI-driven world.

While AI presents challenges, such as data privacy and bias, it offers unparalleled opportunities for innovation. By embracing experimentation and collaboration, educational institutions can harness the full potential of AI to enhance learning, ensuring both educators and students thrive in this new era.

CORE receives funding for five years from Microsoft’s open data initiative to increase access to open scientific research

CORE, an Open Access infrastructure operated by The Open University, has been awarded significant funding for the next five years from Microsoft, as part of its support for increasing access to data and scientific research.

Since 2020, Microsoft has worked to close the data divide and help organisations of all sizes realise the benefits of open data and the new technologies it powers. Microsoft aims to make the process of opening, sharing, and collaborating around data easier so that researchers and innovators can uncover new insights, make better decisions, and improve efficiencies when tackling some of the world’s most pressing challenges. 

CORE indexes scholarly content from over 12,000 institutional repositories, preprint servers and journals and makes this available to the global community via a range of services built on top of the data. CORE currently delivers its services to over 30 million monthly users. 

The funding will support CORE in:

  • Improving processes for indexing and increasing discoverability of Open Access content from repositories, journals and preprint services. 
  • Developing technology to improve the quantity and quality of scholarly metadata in CORE, ensuring CORE significantly contributes to the delivery of a global scholarly knowledge graph. 
  • Improving CORE services for better machine access to academic content, in line with the requirements of the OSTP memo, and in line with other OA policy requirements. 
  • Conducting research on ethical and responsible ways of using academic content in the age of AI.
  • Developing an annual statistical monitoring process for open access content growth.

Further, this funding will allow us to grow and strengthen the infrastructure that supports these services. CORE will remain a community-governed infrastructure, in line with its commitments as a signatory to the Principles of Open Scholarly Infrastructures (POSI). 

Professor Petr Knoth, founder and team lead for CORE, said:

“We are delighted to receive the support from Microsoft to deliver on our ongoing mission to become the most comprehensive index of open access scholarly documents. Open access, open science and open data are critical components of a fairer, more just world where access to scientific information is available to all. The team at CORE has been committed to this goal for over 10 years and it is fantastic to see this effort being recognised.” 

Burton Davis, Vice President and Deputy General Counsel at Microsoft, added:

“Microsoft is committed to increasing open access to scientific knowledge to help further research and fuel breakthroughs to address society’s greatest challenges, such as improving healthcare and the sustainability of our planet. We are proud to support CORE’s important work of making more scientific scholarly content openly available, helping foster innovation, including the responsible development of AI.”    

Professor Nicholas Braithwaite, Executive Dean for the STEM Faculty, said:

“I am delighted to witness CORE’s continued growth and success. CORE’s mission of making scientific knowledge open to students, researchers and the general public is engrained in The Open University’s DNA. What began as a research project has evolved into a world-leading service that serves 30 million users every month and advances not only the principles of Open Science, but also the global visibility and impact of The Open University.”  

Professor Kevin Shakesheff, Pro-Vice Chancellor for Research at The Open University, added:

“CORE is a noteworthy example of how OU research can generate substantial value beyond academia for society and industry. This new cooperation with Microsoft as part of their Open Data Initiative highlights our ability to drive meaningful change through innovative research.”

Professor Harith Alani, Director of the Knowledge Media Institute, said:

“At KMi, we are proud to see our years of dedicated research and teamwork culminate in this significant investment. The success of CORE is a testament to the exceptional vision, capability, culture, and perseverance of KMi, whose efforts continue, against all odds, to shape the future of the OU and Higher Education through cutting-edge research and innovation.” 

Best Paper Award at EKAW 2024 for groundbreaking work on capturing the political discourse in the news

A paper authored by KMi researchers Enrico Motta, Francesco Osborne, Angelo Salatino, and Iman Naja, together with Martino Pulici from the Bosch Centre for Artificial Intelligence, has received the prestigious Best Research Paper Award at the 24th International Conference on Knowledge Engineering and Knowledge Management (EKAW 24). The EKAW series of conferences provides the premier European forum for research in Knowledge-Based Systems.

The paper, entitled Capturing the Viewpoint Dynamics in the News Domain, introduces an innovative approach to capturing how the political debate around important topics is represented in news media. This is a very important issue, given that a healthy democracy requires a balanced news landscape, providing a fair account of the variety of political positions relevant to a particular issue. Indeed, in the past few years, concerns about the lack of balance in the way the political debate is covered in the UK’s news media have increased. This paper provides an innovative solution for analysing media coverage of the political discourse, allowing academics and practitioners to assess the extent to which this coverage provides a fair and balanced representation of the debate.

Technically, our solution employs a hybrid human-machine approach that combines human expertise with Large Language Models (LLMs) to analyse news. It identifies the spectrum of viewpoints within a debate and classifies claims in a news corpus according to these perspectives, enabling a comprehensive understanding of the narratives that shape public discourse.

In an age of misinformation and polarisation, such tools are invaluable for journalists, researchers, and policymakers. Indeed, the proposed solution has the potential for widespread societal impact, by enabling not just researchers but also regulatory authorities, policy actors, media organisations, and a wide range of civil society stakeholders to better monitor media performance, identify potential harms and fine tune policy, potentially contributing to improving the health of our media ecosystem, and therefore of our democracy.

This research was supported by a grant from the OU’s Open Societal Challenges Programme (Project 192, “Unlocking computational media research: Innovative AI technologies to assess fairness, balance and diversity in the media”).

Building Trust and Preempting Misinformation: Prof. Harith Alani on Combating Disinformation in Central Banking

Harith Alani, Director of KMi, was interviewed by Central Banking journal about the growing disinformation threat faced by central banks. He highlighted that while disinformation often originates externally, its real damage arises when false claims are amplified locally. Alani emphasised the importance of proactive strategies, such as fostering trust and increasing transparency, to prevent false narratives from gaining traction, rather than relying only on reactive corrections.

Alani also underscored the role of Artificial Intelligence (AI) in both exacerbating and alleviating these issues. He warned that AI-driven deepfakes and misinformation will become more convincing, complicating efforts to combat disinformation. However, AI tools also present opportunities, as they can be used to predict, detect, and prevent misinformation, offering organisations new ways to protect their reputations. Furthermore, Alani suggested that central banks collaborate with existing fact-checking organisations rather than creating their own, thus pooling resources for a more effective response to misinformation.

KMi is one of the leading labs in developing advanced technologies for tracking and combating fake news and hate speech, and Alani’s insights underline the importance of central banks not only engaging with the public through communication but also preparing for potential disinformation crises with robust strategies. His emphasis on pre-emptive communication and building long-term trust offers a strategic approach to safeguarding institutions from the growing digital misinformation threat.

KMi’s Paula Reyero-Lobo Passes Her PhD Viva with Flying Colours! 

Congratulations to Paula Reyero-Lobo, who passed her viva on 18 November 2024 for her thesis, “Addressing Bias in Hate Speech Detection: Enhancing Target Group Identification with Semantics.”

Paula’s research explored how to reduce bias in hate speech detection systems, particularly in identifying content targeting specific groups. AI systems often misclassify content from certain groups as hate speech or fail to effectively moderate harmful content directed at them. Paula tackled this challenge by investigating how semantics, including knowledge graphs and linguistic resources, can improve both human and machine understanding of the non-standard, slang, and domain-specific language frequently used in hate speech. Her findings offer a significant contribution to advancing fairer and more effective AI moderation tools.  

Her thesis was supervised by Miriam Fernandez, Enrico Daga, and Harith Alani, with her defence committee including Haiming Liu (external), John Domingue (internal), and Soraya Kouadri (chair).  

Looking ahead, Paula will join the Centre for Protecting Women Online at The Open University as an AI consultant. Working with the Ethical and Responsible Tech/AI Team, she will engage in collaborative research to address the unique challenges to women’s safety in online spaces.  

We are excited to see the continued impact of Paula’s work and wish her every success in this important field!  

KMi researchers win Best Resource Paper Award at ISWC 2024  

Gregoire Burel and Harith Alani were honoured with the Best Resource Paper Award at the prestigious International Semantic Web Conference (ISWC 2024). Their groundbreaking work, CimpleKG, is the largest knowledge graph of its kind: an innovative, open, and continuously updated semantic resource designed to combat online misinformation.

CimpleKG integrates data from 77 fact-checking organisations and over 217,000 documents, creating a comprehensive knowledge graph with over 15 million triples. By encompassing diverse topics, languages, and countries, it empowers researchers to delve deeper into misinformation trends and develop cutting-edge detection and verification tools. Its structured, enriched textual data provides a powerful foundation for building applications aimed at curbing the spread of false information online.

The data behind CimpleKG has been used in multiple KMi-led research studies, investigating the co-spread of fact-checks and misinformation online and the automatic correction of misinformation sharers on social media. It has also been integrated into applications such as the CimpleKG explorer, the Fact-checking observatory, the Iffy Index, and the MisinfoMe bot.
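For readers unfamiliar with knowledge graphs, here is a toy illustration of the triple-based representation behind a resource like CimpleKG: a graph as a set of subject-predicate-object triples, queried by pattern matching. The predicates and identifiers (cimple:hasReview, claim:42, and so on) are hypothetical stand-ins of our own, not CimpleKG’s actual vocabulary; a real deployment would use RDF tooling and SPARQL.

```python
# A knowledge graph as a set of (subject, predicate, object) triples.
# All identifiers below are illustrative placeholders.
triples = {
    ("claim:42", "schema:text", "Vaccines contain microchips"),
    ("claim:42", "cimple:hasReview", "review:7"),
    ("review:7", "schema:author", "org:factcheck-a"),
    ("review:7", "cimple:rating", "false"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Which fact-checking review covers claim:42, and what was the verdict?
review = query(s="claim:42", p="cimple:hasReview")[0][2]
verdict = query(s=review, p="cimple:rating")[0][2]
print(review, verdict)  # review:7 false
```

Because claims, reviews, and sources all live in one graph, such pattern queries can follow links across fact-checkers, languages, and topics, which is what makes a resource of CimpleKG’s scale useful for misinformation research.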

We are incredibly proud of this achievement, which underscores KMi’s commitment to impactful, socially responsible research. Congratulations to the team for this remarkable recognition! 
