
Automated Music Video Generation Using Multi-Level Feature-Based Segmentation

Citation:
Jong-Chul Yoon, In-Kwon Lee, and Siwoo Byun, "Automated Music Video Generation Using Multi-Level Feature-Based Segmentation", Multimedia Tools and Applications (SCIE), Vol. 41, No. 2, pp. 197-214, January 2009
Abstract:
The expansion of the home video market has created a demand for video editing tools that allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a video stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve. Our aim is to automatically extract a sequence of clips from a video and assemble them to match a piece of music. Previous authors [8, 9, 16] have approached this problem by trying to synchronize passages of music with arbitrary frames in each video clip using predefined feature rules. However, each shot in a video is an artistic statement by the video-maker, and we want to preserve the coherence of the video-maker's intentions as far as possible.
Videos: