
Motion Sickness Prediction in Stereoscopic Videos using 3D Convolutional Neural Networks

Citation:
Tae Min Lee, Jong-Chul Yoon, and In-Kwon Lee, "Motion Sickness Prediction in Stereoscopic Videos using 3D Convolutional Neural Networks", IEEE Transactions on Visualization and Computer Graphics (SCIE) 25(5), pp. 1919-1927 (presented at IEEE VR 2019), May 2019
Abstract:
In this paper, we propose a three-dimensional (3D) convolutional neural network (CNN)-based method for predicting the degree of motion sickness induced by a 360° stereoscopic video. We consider the user's eye movement as a new feature, in addition to the motion velocity and depth features of a video used in previous work. For this purpose, we use the saliency, optical flow, and disparity maps of an input video, which represent eye movement, velocity, and depth, respectively, as the input of the 3D CNN. To train our machine-learning model, we extend the dataset established in previous work using two data augmentation techniques: frame shifting and pixel shifting. Consequently, our model predicts the degree of motion sickness more precisely than the previous method, and its predictions correlate more closely with the distribution of ground-truth sickness scores. (Journal IF = 3.780 (2018), Ranking = 9.81%, Category = COMPUTER SCIENCE, SOFTWARE ENGINEERING, SCIE)
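
The abstract does not give the network's layer configuration; the following is only a minimal sketch, assuming PyTorch, of the described input pipeline, where saliency, optical-flow, and disparity maps are stacked as channels of a spatio-temporal clip and regressed to a scalar sickness score by a small 3D CNN. The class name, layer sizes, and clip dimensions are hypothetical, not taken from the paper.

```python
# Minimal sketch (assumption): a 3D CNN that regresses a motion-sickness score
# from per-frame saliency, optical-flow, and disparity maps stacked as channels.
# Layer sizes, names, and clip length are illustrative, not the paper's.
import torch
import torch.nn as nn

class SicknessRegressor3D(nn.Module):
    def __init__(self, in_channels=4):
        # in_channels = saliency (1) + optical flow (2: u, v) + disparity (1)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),       # collapse the temporal and spatial axes
        )
        self.regressor = nn.Linear(32, 1)  # scalar degree of motion sickness

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.regressor(x)

# Example: one 16-frame clip of 4-channel 112x112 maps.
model = SicknessRegressor3D()
clip = torch.randn(1, 4, 16, 112, 112)
score = model(clip)                        # predicted sickness score
```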