The AI4Pain Grand Challenge 2024: Advancing Pain Assessment with Multimodal fNIRS and Facial Video Analysis
Fernandez-Rojas R., Joseph C., Hirachan N., Seymour B., Goecke R.
The Multimodal Sensing Grand Challenge for NextGen Pain Assessment (AI4PAIN) is the first international competition focused on automating the recognition of acute pain using multimodal sensing technologies. Participants are tasked with classifying pain intensity into three categories: No Pain, Low Pain, and High Pain, utilising functional near-infrared spectroscopy (fNIRS) and facial video recordings. This paper presents baseline results for our approach, evaluating each modality individually and in combination. Notably, this challenge represents a pioneering effort to advance pain recognition by integrating neurological information (fNIRS) with behavioural data (facial video). The AI4Pain Grand Challenge aims to generate a novel multimodal sensing dataset, facilitating benchmarking and serving as a valuable resource for future research in autonomous pain assessment. The results show that fNIRS data alone achieved the highest accuracy, with 43.2% on the validation set and 43.3% on the test set, while facial data yielded the lowest accuracy, with 40.0% on the validation set and 40.1% on the test set. The combined multimodal approach achieved 40.2% on the validation set and 41.7% on the test set. This challenge provides the research community with a significant opportunity to enhance the understanding of pain, ultimately aiming to improve the quality of life for many pain sufferers through advanced, automated pain assessment methods.
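To make the evaluation setup concrete, the sketch below illustrates one possible late-fusion baseline for the three-class task (No Pain / Low Pain / High Pain). It is a minimal, hypothetical example: the random-forest models, the synthetic placeholder features standing in for fNIRS and facial-video descriptors, and the probability-averaging fusion rule are all assumptions for illustration, not the challenge's official baseline pipeline.

```python
# Hypothetical late-fusion baseline for three-class pain recognition.
# Feature extraction, model choice, and fusion rule are illustrative assumptions,
# not the AI4Pain challenge's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder features, one row per trial. In practice these would be statistics
# derived from fNIRS channels (e.g. mean HbO/HbR per epoch) and from facial video
# (e.g. action-unit intensities); here they are random stand-ins.
n_train, n_val = 200, 60
X_fnirs_train, X_fnirs_val = rng.normal(size=(n_train, 24)), rng.normal(size=(n_val, 24))
X_face_train, X_face_val = rng.normal(size=(n_train, 17)), rng.normal(size=(n_val, 17))
y_train, y_val = rng.integers(0, 3, n_train), rng.integers(0, 3, n_val)

# One classifier per modality.
clf_fnirs = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_fnirs_train, y_train)
clf_face = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_face_train, y_train)

# Late fusion: average the class-probability outputs of the two models.
proba_fused = (clf_fnirs.predict_proba(X_fnirs_val) + clf_face.predict_proba(X_face_val)) / 2
y_pred = proba_fused.argmax(axis=1)

print("fNIRS-only accuracy:", accuracy_score(y_val, clf_fnirs.predict(X_fnirs_val)))
print("Face-only accuracy: ", accuracy_score(y_val, clf_face.predict(X_face_val)))
print("Fused accuracy:     ", accuracy_score(y_val, y_pred))
```

Reporting per-modality and fused accuracies in this way mirrors the comparison made in the abstract, where fNIRS, facial video, and their combination are each evaluated on the validation and test sets.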