Interactive Approaches to Video Lecture Assessment
A growing number of universities and other educational institutions record videos of regularly scheduled classes and lectures to provide students with additional resources for their studies. However, a recording alone is not necessarily the same as a carefully prepared educational video. The main issue is that recordings are typically not post-processed in an editorial sense. That is, the videos often contain long periods of silence or inactivity, unnecessary repetitions, spontaneous interactions with students, or even corrections of earlier false statements or mistakes. Furthermore, there is often no summary or table of contents of the video, unlike with educational videos that supplement a certain curriculum and are well scripted and edited. Thus, the plain recording of a lecture is a good start but far from a good e-learning resource.
This thesis describes a system that closes the gap between a plain video recording and a useful e-learning resource by producing automatic summaries and providing an interactive lecture browser that visualizes automatically extracted key phrases and their importance on an augmented timeline. The lecture browser builds on four tasks: automatic speech recognition, automatic extraction and ranking of key phrases, extractive speech summarization, and the visualization of the phrases and their salience. These tasks, as well as the contributions to the state of the art, are described in detail and evaluated on a newly acquired corpus of academic spoken English, the LMELectures. A first user study shows that students using the lecture browser can solve a topic localization task about 29% faster than students who are provided with the video only.