
Towards Evolvable and Sustainable Multimodal Machine Learning (2024-2027)

Abstract

Machine learning systems are commonly limited to a single operational modality. Comprehending images, sound and language simultaneously requires machines to reuse knowledge and understand concepts drawn from multimodal data. This project aims to build a sparse model and develop a set of innovative algorithms that enhance model generalisation under distributional and semantic shifts while minimising the computational and labelling costs of training multimodal systems. Its outcomes will enable models to keep learning after deployment to suit varying testing scenarios, whilst reducing energy consumption and carbon emissions. The application of these techniques could benefit sectors such as e-commerce, agriculture and transport.

Experts

Dr Yadan Luo

ARC DECRA Fellow and Lecturer
School of Electrical Engineering and Computer Science
Faculty of Engineering, Architecture and Information Technology