Prof. Dan Casas

Dan Casas is an Assistant Professor at the Universidad Rey Juan Carlos (URJC), Spain. Previously, he was a Marie Skłodowska-Curie Individual Fellow (2016–2018) at the MSLab of URJC, a postdoctoral researcher (2015–2016) in the Graphics, Vision and Video group at the Max Planck Institute for Informatics in Saarbrücken, Germany, led by Prof. Christian Theobalt, and a postdoctoral researcher (2014–2015) at the Character Animation group of the University of Southern California's Institute for Creative Technologies in Los Angeles, USA. Dan received his PhD in Computer Graphics in 2014 from the University of Surrey (UK), supervised by Prof. Adrian Hilton. His dissertation introduced novel methods for character animation from multi-camera capture that enable the synthesis of video-realistic interactive 3D characters. During his PhD, he was also an intern in the R&D department of the Oscar-winning visual effects company Framestore. Earlier, in 2009, Dan received his M.Sc. degree from the Universitat Autònoma de Barcelona (Spain). In 2008, during the last year of his M.Sc. studies, he joined the Human Sensing Lab at Carnegie Mellon University (PA, USA) as an invited research scholar, where he investigated methods for real-time face tracking, advised by Prof. Fernando de la Torre.
Keynote Title: 3D Digital Avatars with Machine Learning
Abstract: Creating 3D digital garments is an active area of research due to its many applications in fields including fashion design, e-commerce, virtual try-on, and video games. Traditional approaches to this problem use physics-based simulation techniques to model how clothing deforms, but the high computational cost required at run time hinders the deployment of these techniques in real-world applications. Alternatively, recent methods based on Machine Learning are able to reconstruct 3D garments directly from images and to infer how 3D garments deform when worn by arbitrary body shapes. This has opened the door to the democratization of digital clothing, with a direct impact on video games, by improving the visual fidelity of 3D characters; on online shopping, by enabling customers to virtually try on clothes in online stores; and on fashion, by speeding up the design process. In this talk, I will introduce recent state-of-the-art techniques for digital avatars developed by our lab, including accessible descriptions of the fundamental components of this exciting line of research in Computer Graphics and Machine Learning.
Prof. Belen Masia

Belen Masia is an Associate Professor in the Computer Science Department at Universidad de Zaragoza, and a member of the Graphics and Imaging Lab. Previously, she was a postdoctoral researcher at the Max Planck Institute for Informatics. Her research focuses on appearance modeling, applied perception, and virtual reality. She is the recipient of a Eurographics Young Researcher Award in 2017, a Eurographics PhD Award in 2015, recognition among MIT Technology Review's top ten Innovators Under 35 in Spain in 2014, and an NVIDIA Graduate Fellowship in 2012. She has served as an Associate Editor for ACM Transactions on Graphics, Computers & Graphics, and ACM Transactions on Applied Perception. She is also a co-founder of DIVE Medical, a startup devoted to enabling an automatic, fast, and accurate exploration of the visual function, even in non-verbal patients.
Keynote Title: Modeling Attention in Immersive Environments
Abstract: Creating engaging and compelling experiences in Virtual Reality is a challenging task: large bandwidth, computation, and memory requirements are limiting factors; on top of that, there is the added difficulty of designing content for users who have control over the point of view. We argue that understanding user behavior in immersive environments can help address these challenges. In this talk, we explore approaches to modeling attention and gaze in VR scenarios. Applications range from compression to realistic avatar simulation and scene content design, as well as furthering our understanding of human perception, in particular how we selectively process the sensory information we receive.
