New ‘blueprint’ for advancing practical, trustworthy AI

London | Researchers at the University of Sheffield and the Alan Turing Institute have developed a new “blueprint” for building AI that can learn from different kinds of data – beyond vision and language – to make the technology more deployable in the real world.

The framework – which can serve as a guide for creating and deploying AI – could make the technology more practical, ethical and effective in solving real-world problems.

Published as part of a study in the journal Nature Machine Intelligence, the framework is a roadmap for building multimodal AI – systems that learn from different types of data such as text, images, sound and sensor readings.

AI typically learns from one type of information, such as text or images, but these more advanced multimodal AI systems integrate different data sources to form a more complete picture of the world.

However, despite these advantages, the study found that most multimodal AI systems and research still learn mainly from vision and language data, which the researchers say limits their ability to tackle complex challenges that require broader data.

For example, combining visual, sensor and environmental data could help self-driving cars perform more safely in complex conditions, while integrating medical, clinical and genomic data could make AI tools more accurate at diagnosing diseases and supporting drug discovery.

The new framework could be used by both developers in industry and researchers in academia, particularly in light of findings showing that, among papers posted on arXiv – a leading open repository for computer-science preprints – in 2024 featuring AI that draws on exactly two types of data, 88.9 per cent involved vision or language data.

Professor Haiping Lu, who led the study from the University of Sheffield’s School of Computer Science and Centre for Machine Intelligence, said: “AI has made great progress in vision and language, but the real world is far richer and more complex. To address global challenges like pandemics, sustainable energy, and climate change, we need multimodal AI that integrates broader types of data and expertise.”

“The study provides a deployment blueprint for AI that works beyond the lab — focusing on safety, reliability, and real-world usefulness,” he added.

The research, which brought together 48 contributors from 22 institutions across the UK and worldwide, illustrates the new approach through three real-world use cases – pandemic response, self-driving car design, and climate change adaptation.

The work originated through collaboration supported by the Alan Turing Institute via its Meta-learning for Multimodal Data Interest Group, led by Professor Lu, which brought together researchers from across disciplines and institutions in the UK and abroad.

The collaborative foundation built through this Turing Interest Group also helped inspire the vision behind the UK Open Multimodal AI Network (UKOMAIN), a £1.8 million EPSRC-funded network now led by Professor Lu to advance deployment-centric multimodal AI across the UK.

Dr Louisa van Zeeland, Research Lead at the Alan Turing Institute, said: “By integrating and modelling large, diverse sets of data through multimodal AI, our work together with Turing collaborators is setting a new standard for environmental forecasting.”

“This sophisticated approach enables us to generate predictions over various spatial and temporal scales, driving real-world results in areas from Arctic conservation to agricultural resilience,” she added.
