• Citation:
Ki-Ho Shin, Hye-Rin Kim, and In-Kwon Lee, "Automated Music Video Generation Using Emotion Synchronization", IEEE International Conference on Systems, Man, and Cybernetics 2016, October 2016.
• Abstract:
In this paper, we present an automated music video generation framework that synchronizes the emotions of video and music. After a user uploads a video or a piece of music, the framework segments it and predicts the emotion of each segment. The preprocessing results are stored in the server's database. The user can then select a set of videos and music from the database, and the framework generates a music video. For each music segment, the system finds the most closely associated video segment by comparing low-level features and emotion differences. We compare our method with a similar music video generation method in a user preference study and show that our method produces preferable results.
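To make the matching step concrete, below is a minimal Python sketch of one way the segment association described in the abstract could work. It assumes (hypothetically) that each segment carries a valence-arousal emotion vector and a low-level feature vector, and that the two distances are combined with a weight w_emotion; the paper's actual features, emotion model, and weighting may differ.

```python
import numpy as np

def segment_distance(music_seg, video_seg, w_emotion=0.5):
    """Combined distance between a music segment and a video segment.

    Both segments are assumed to carry:
      - 'emotion':  a valence-arousal vector predicted during preprocessing
      - 'features': a vector of low-level features
    The weight w_emotion balancing the two terms is an illustrative choice,
    not a value from the paper.
    """
    d_emotion = np.linalg.norm(music_seg["emotion"] - video_seg["emotion"])
    d_feature = np.linalg.norm(music_seg["features"] - video_seg["features"])
    return w_emotion * d_emotion + (1.0 - w_emotion) * d_feature

def match_segments(music_segments, video_segments):
    """For each music segment, pick the video segment with the smallest
    combined distance. Returns one video-segment index per music segment."""
    matches = []
    for m in music_segments:
        costs = [segment_distance(m, v) for v in video_segments]
        matches.append(int(np.argmin(costs)))
    return matches

# Toy usage: two music segments, three candidate video segments.
music = [
    {"emotion": np.array([0.8, 0.6]), "features": np.array([0.7, 0.2])},
    {"emotion": np.array([-0.3, 0.1]), "features": np.array([0.1, 0.5])},
]
video = [
    {"emotion": np.array([0.7, 0.5]), "features": np.array([0.6, 0.3])},
    {"emotion": np.array([-0.2, 0.0]), "features": np.array([0.2, 0.4])},
    {"emotion": np.array([0.0, 0.9]), "features": np.array([0.9, 0.9])},
]
print(match_segments(music, video))  # prints [0, 1] for this toy data
```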