CS 5754: Virtual Environments

 


Tips for performing a literature review

 

To perform your literature survey, first make sure you understand the scope of your chosen topic. Talk to Dr. Bowman if you need help getting started. Make a list of terminology related to your topic that might serve as good search terms.

Once you're ready to search, don't start with Google! Start with more specific search engines that are more likely to return relevant results, such as the ACM Digital Library, IEEE Xplore, and CiteSeer. Try several different versions of your search strings, and search both titles and abstracts if possible. Browse the results, looking for papers directly related to your topic.

Once you have identified and read two or three "core" papers, you can broaden your search in several ways. You can look at the references of a core paper and follow the most relevant ones (searching backward in time). You can also search for other publications by the same author(s) on the same general topic. Finally, using some of the tools (especially CiteSeer), you can find out which papers have cited your core paper and look at those (searching forward in time).

After all this, you can do a Google search to uncover anything interesting (e.g., commercial products) that the other tools might have missed.

 

Once you have identified all the papers/sites that relate to your topic, read at least the abstract of each. For those that are most interesting or relevant, read the entire paper. Then organize your papers into categories; these categories will serve as the outline for the related work section of your final report. Finally, ask yourself, "What important research questions remain in this area that have not yet been adequately addressed?" Some of the answers will come from the future work sections of the papers you read, but you will also need to think deeply about the subject yourself.

 

As you put together your annotated bibliography, you may format the references in any style you like, as long as the formatting is consistent. Include as much bibliographic information as you can for each reference (page numbers, volume/issue numbers for journals, publishers, etc.). You may cite URLs when a web page is the only available source, but most of your references should be published articles.
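
For example, one consistent journal-article format might look like the template below (the fields are placeholders for illustration, not a real citation, and this particular style is not required):

    Lastname, F. and Coauthor, S. (Year). "Title of the Article." Journal Name, vol. V, no. N, pp. X-Y.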

 

At this time you may also want to draft the related work section of your final report. As you summarize the existing research, do not simply list papers or projects and describe their content. Rather, you need to provide a readable synthesis that shows how researchers have addressed the topic and how their approaches and results are similar or different. Bring out the relationships between different papers/projects. Also show the limitations of the existing research (limitations that you will hopefully address). Support your arguments with as many citations as you can, but do not simply make the section a list of citations. Here are two examples, one showing the style that I want, and one demonstrating a poor, "laundry list" style.


An example of good literature survey style (from a paper on comparisons of VE displays):

Note how the literature is divided into several categories, how the author makes several points of his own, and how he constructs an argument demonstrating the limitations of the existing research.

 

Many authors have noted the importance of studying the differences between displays and the effects of displays on users, applications, and tasks [e.g. 7, 16]. Few, however, have provided empirical evidence of these effects.

 

One type of display comparison study found in the literature is a comparison of desktop and immersive displays for a particular task or application [e.g. 1, 11]. These studies attempt to demonstrate the effects of immersion, as opposed to the effects of a particular type of display.

 

A second type of experiment compares the value of multiple VE displays for common tasks [e.g. 25, 28]. This is closer to the intent of our work, but is not explicitly focused on 3D interaction.

 

A few studies have looked at the effects of particular display characteristics on interaction performance or usability. For example, Arthur [2] studied the effect of field of view in an HMD on performance in searching and walking tasks.

 

The prior research most similar to ours involves studies that compare users' behavior and performance when interacting with VEs using different displays. Kjeldskov [15] reports an ambitious study on the usability of 40 common 3D interaction techniques in a semi-immersive curved display and a fully-immersive surround-screen display. He found qualitative differences in the usability of particular techniques between displays, but no quantitative data was collected. Our own prior work [3] did demonstrate a statistically significant difference in users' behavior between an HMD and a CAVE during a navigation task.


An example of poor literature survey style (using the same references as the example above):

Note how there is no organization to this writing (the paragraphs don't indicate different categories or themes), how the author doesn't analyze any of the literature, and how he simply lists the existing projects and papers.

 

There are several existing examples of display comparison experiments in the literature. Brooks [7] said that such experiments were important. One group compared a desktop display to a CAVE for an oil-drilling application [11]. Arthur [2] studied the effect of field of view in an HMD on performance in searching and walking tasks. Another study [25] looked at five different displays for construction-related tasks.  Bowman et al. [3] looked at users' preferences for real and virtual turning in HMDs and CAVEs.

 

A CAVE and a semi-immersive curved display were compared by Kjeldskov [15], and he used over 40 different 3D interaction techniques with the displays. Military applications on different displays were compared by Swan and his colleagues [28]. A comparison between a CAVE and a monitor has also been performed for a statistical analysis application [1].

 

A SIGGRAPH panel considered the relative advantages and disadvantages of HMDs and surround-screen displays [16].