Prof. Mel Slater
Prof. Mel Slater is a computer scientist known for his work in virtual reality, in particular on embodiment and presence in VR. He is currently a Distinguished Investigator at the University of Barcelona, where he directs the Event Lab (Experimental Virtual Environments in Neuroscience and Technology), and is a co-founder of Virtual Bodyworks Inc. In 2005 he received the IEEE Virtual Reality Career Award in recognition of pioneering achievements in the theory and applications of virtual reality.
Prof. Marios S. Pattichis
Bio: Marios S. Pattichis is the Gardner Zemke Professor in the Department of Electrical and Computer Engineering at the University of New Mexico. He has served as a Senior Associate Editor for the IEEE Transactions on Image Processing and IEEE Signal Processing Letters; as an Associate Editor for the IEEE Transactions on Image Processing, Pattern Recognition, and the IEEE Transactions on Industrial Informatics; and as a Guest Associate Editor for special issues of the IEEE Transactions on Information Technology in Biomedicine, the IEEE Journal of Biomedical and Health Informatics, Biomedical Signal Processing and Control, and Teachers College Record. He received the 2016 Lawton-Ellis Award and the 2004 Distinguished Teaching Award from the Department of Electrical and Computer Engineering at UNM. In 2022 he was elected a Fellow of the European Alliance for Medical and Biological Engineering and Science (EAMBES) for his contributions to biomedical image analysis.
Abstract: Large-scale video analysis remains extremely challenging despite significant progress in large-scale image analysis. The challenges stem from the need to properly define the problems, develop ground truth for large-scale video datasets, compress the video, and develop efficient methods that can be trained on large datasets. The talk will discuss how these fundamental issues were addressed in analyzing educational videos to support better teaching practices and to assess student participation. Unlike the majority of existing video datasets, educational video analysis proved particularly challenging due to the need to narrow the focus to what can be achieved, process long videos of over an hour, handle occlusions, process audio recorded in noisy classrooms, and present the results in a meaningful way. The talk will discuss the development of effective models for video analysis, the integration of computer vision models with bilingual speech recognition, and the development of an interactive web app for visualizing the results of human activity recognition.