Last Updated: 15:15 03/09/2007
Colloquium #65: July 10, 2006

VISITOR STUDIES : Part 1 - Visitor Studies in General

Jack Hiroki IGUCHI (Professor, Graduate School of Environmental Science, Aomori University, Japan)


HISTORY OF VISITOR STUDIES

Many museum evaluation studies, often called audience research or visitor studies, have been conducted in museums during the past 70 years. One of the best-known pioneering pieces of research in visitor studies, focusing on visitors' fatigue during and after viewing exhibits ("museum fatigue"), was conducted in 1925 by Robinson E.S., a psychologist at Yale University, USA, with the American Association of Museums and the Carnegie Corporation as sponsors (Robinson 1930: 9-11). As part of this series, Bloomberg M. examined the differential effects of exhibits on school children divided into five grades of intelligence level. She concluded that the formation of questions by the children themselves was vital for learning from the exhibition (American Association of Museums, 1984: 65). However, little further visitor research or evaluation was conducted until the 1960s.

In 1968, Shettel H. published his research on criteria for judging the quality of museum science exhibitions (Shettel 1968: 137-153), one of the first of the new studies of visitor behaviour (Patterson 1989: 81). In 1973, one of the first experimental studies in museums was published by Screven, a psychologist at the University of Wisconsin-Milwaukee, USA, who concentrated on instructional design and human motivation (Ibid & American Association of Museums: 65). Also during the 1970s, Roger Miles of the NHM (Natural History Museum), London, well known as a pioneer of visitor studies in the UK, published his study of the "Human Biology" exhibition at the NHM, part of its new exhibition scheme (Miles & Tout, 1978: 36-50). Butler asserts of this exhibition that "the gallery broke new ground for science exhibitions in the UK, being concept-based rather than object-based. It has proved hugely successful in its provocative approach" (Butler S. 1992: 91).

Recently, academic bulletins in both the museum and education fields have carried a considerable number of research papers on visitor studies. Examples of journals and bulletins include the bulletin of the Centre for Social Design (Jacksonville, Alabama, USA); the "Journal of Museum Education" (Museum Education Roundtable, Washington DC, USA); and the "ILVS Review: A Journal of Visitor Behavior" (The International Laboratory for Visitor Studies, Shorewood, Wisconsin, USA).

However, this is still a new field of study and research. Museums are therefore approaching it from a variety of angles, evaluating a range of research methodologies and collecting a range of data, before visitor studies can be established as a vital science for museums.


CONTENTS OF VISITOR STUDIES

a) PHILOSOPHY AND MISSION OF VISITOR STUDIES

Among research centres for visitor studies, ILVS (The International Laboratory for Visitor Studies, Shorewood, Wisconsin, USA) is one of the most active.

The philosophy of this laboratory is that:
"The International Laboratory of Visitor Studies is an organization dedicated to improving the quality of visitor learning in public environments throughout the world. By inviting and sharing research from varied sources, cultures, and disciplines. ILVS believes that it can contribute to teaching / learning interactions which can have dynamic, long-lasting impact on public knowledge and attitudes….." (Exhibit Communications Research Inc. 1992: 157)

Also the mission statement of this laboratory is:
1) to encourage and conduct research and
2) to disseminate this information to administrators, exhibit planners, designers, interpreters, and educators.
An expanded knowledge base will help maximize the educational impact of exhibits and programs for self-directed learners. (Ibid)

Both statements emphasize planning and running educational exhibits particularly for visitors who study informally, without professional guides, in their leisure time.

b) DEFINITION OF VISITOR STUDIES

No official definition of visitor studies exists in either museology or psychology, since the field is still in its infancy. However, the "Statement of Goals" of the AAM (The American Association of Museums) (AAM Visitor Evaluation and Research Committee, 1988) is the nearest one can get. This statement implies four fundamental assumptions for visitor studies (Bitgood S., editor, 1989: 10, 11). That is:
  1) Visitor advocacy: primary mission.
Visitors should play the major role in the design of both exhibitions and programmes. Traditionally, this has not been the case.

  2) Multidisciplinary view: global approach.
A mix of viewpoints and expertise from specialists in, for example, exhibit design, education, visitor services, marketing, recreation and evaluation.

  3) Formal evaluation: a technique for answering questions.
Evaluation involves the specification of criteria for judging the effectiveness of something.

  4) Scientific: developing methods and theories.
The visitor studies approach uses a scientific model of collecting information about visitors, and a scientific model of theory building borrowing from psychology, sociology, education, and marketing to formulate empirically-based principles of visitor behaviour and informal learning.

In many academic fields, multidisciplinary research has been taking place since around the 1960s, in areas such as electronics and mechanical engineering. Specialists research not only their specific area but also other studies related to their subjects to help develop theories. Visitor studies is a typical multidisciplinary area in which specialists from a variety of academic fields can contribute.

c) METHODOLOGY OF VISITOR STUDIES

Several papers describe the methods of visitor studies, and their descriptions are quite similar. The following summary was prepared by this author, drawing mainly on the papers of Screven C.G. (1990: 37-59), Bitgood (1988: 5-7), (1989a: 18, 19) and the Centre for Social Design (1988: 8, 9).

Exhibition development can generally be divided into four stages:

EXHIBITION DEVELOPMENT
  1) Planning Stage:
Themes, audiences, objectives and messages are considered.
Front-End Evaluation (or Pre-Design Evaluation): Evaluation undertaken before the project begins to help establish objectives and messages of the exhibition.

  2) Design Stage:
Artefacts, layout, sequencing, lighting, signage, labels, panels and orientation are designed.
Formative Evaluation: to improve the functioning of the exhibit.

  3) Construction and Installation Stage:
Developmental Evaluation: to rethink the design of the exhibit and to find faults in the architects' plans or blueprints. This term is also used in the design stage.

  4) Occupancy Stage: Traffic flow, visitor usage, attitudes, interests, learning, cost-effectiveness and also crowds, fatigue and noise are examined.
Summative Evaluation (or Post-Design Evaluation / Post-Occupancy Evaluation): evaluation of the completed exhibit to determine whether the project has met its objectives.

Some papers also describe a "Post-Occupancy Stage", in which adjustments may be made to the installed exhibition to correct existing problems; the corresponding evaluation is called "Remedial Evaluation". However, the "Occupancy Stage" and "Post-Occupancy Stage" usually share the same methods of evaluation, and therefore the term "remedial evaluation" is probably unhelpful and should be avoided, since "summative" is in more common parlance (Miles R.S. 1993: 26).

METHODS OF EVALUATION

Some methods of evaluation are drawn from the field of educational evaluation, such as formative and summative evaluation, which were originally defined by Scriven M. (1977: 334-371).

  1) FRONT-END EVALUATION
Whenever a museum is going to establish an exhibit with a specific theme, it must determine visitors' existing knowledge of and preconceptions about the theme, as well as their motivations for visiting the museum and their attitudes to the exhibit. Museums should also look into the opinions of people who do not come to museums, through street interviews or mail surveys. Even among those who do visit, most visitors hold well-established misconceptions or naive notions about the exhibit topic. Museums must correct these misconceptions and increase visitors' knowledge in a satisfying manner and atmosphere.

Front-End Evaluation is conducted using some existing exhibits. The basic methods include the interview (open-ended, structured); the focus group (from marketing research, in which a small group of consumers is interviewed in depth about a particular topic or product); observation; and the questionnaire.

  2) FORMATIVE EVALUATION
In the design stage, using information gathered from visitors through Front-End Evaluation, a more realistic exhibition design is drawn up. To achieve the most effective evaluation, mock-up (often called prototype) tests are run. A mock-up is an inexpensive simulation of an exhibit or object, used to determine its most educationally effective design before the final exhibit is completed.

The methods of testing are observation, interview and questionnaire. Mock-up testing can be divided into two types, "cued" and "noncued". In the former, visitors are told that they are being observed or will be questioned; in the latter, visitors do not know they are being observed or questioned. Cued testing measures an exhibit's teaching power (how effectively it conveys its message to visitors), and noncued testing measures its holding power (a measure of time spent viewing an exhibit).

The main problem with these tests is that the effectiveness of isolated micro exhibits (parts of the whole exhibition, the macro exhibit) cannot be evaluated properly before the occupancy stage, because the environment of a mock-up differs from that of the completed exhibition. Exhibit planners nevertheless need to gather as much useful information as they can for the completed exhibition, in order to avoid modifications after opening to the public.

  3) SUMMATIVE EVALUATION
After installation of the exhibit, summative evaluation is needed. Its purpose is to examine whether the exhibit works effectively and reaches its objectives. At the same time, it provides insights for modifications or for new exhibits in the future.

Summative evaluation covers a wide range of activities. Before it begins, the architectural and physical aspects of the exhibit, from the building and the climate of the exhibit space to the artifacts and labeling, should be examined by experts. Once this has been done, formal summative evaluation can begin.

The methods of this evaluation are similar to those of formative evaluation: observation, interview and questionnaire. Observation is often called "direct observation", in contrast to "self-reporting methods" (interview and questionnaire). It covers "Attracting Power" (the ability of the exhibition to attract visitors, usually measured as the percentage of visitors who stop at an exhibit), "Holding Power" (see Formative Evaluation), "Viewing Time" (at a section and in the whole exhibit), and general behaviour, including social interaction between visitors. Observation can be obtrusive or unobtrusive (in which case visitors do not know they are being observed). It is usually best to use the least intrusive method of data collection whenever possible.
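The direct-observation measures just described reduce to simple arithmetic over tracking records. The following is a minimal sketch; the record format, field names and numbers are illustrative assumptions, not part of any published instrument cited here.

```python
# Each record: (visitor_id, stopped_at_exhibit, seconds_viewing).
# Hypothetical tracking data for five visitors passing one exhibit.
observations = [
    ("v1", True, 45),
    ("v2", False, 0),
    ("v3", True, 120),
    ("v4", True, 30),
    ("v5", False, 0),
]

def attracting_power(records):
    """Percentage of passing visitors who stop at the exhibit."""
    stops = sum(1 for _, stopped, _ in records if stopped)
    return 100.0 * stops / len(records)

def holding_power(records):
    """Mean viewing time (seconds) among visitors who stopped."""
    times = [t for _, stopped, t in records if stopped]
    return sum(times) / len(times) if times else 0.0

print(attracting_power(observations))  # 60.0 (3 of 5 visitors stopped)
print(holding_power(observations))     # 65.0 (mean of 45, 120, 30)
```

In practice, such counts are taken per exhibit element, so that low-attracting or low-holding elements can be identified and redesigned.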

In this stage, a mock-up test can also be used if the museum regulations allow this. For instance, some museums do not allow setting up mock-up exhibitions, because they are incomplete and fail to offer a sense of beauty and thus may harm the overall image of the museum.

Furthermore, summative evaluation includes looking into the long-term impact of a visit to the exhibition. One method is a follow-up investigation of visitors using mailed questionnaires asking how effective the exhibit was for them. At the same time, flyers, educational materials, planned educational activities and outreach programmes, all of which relate to the exhibition, should be evaluated.

RELIABILITY AND VALIDITY
It is vitally important to carry out the evaluation using the most reliable and valid methods available, in order to obtain usable data. There is much argument about these matters, but most papers describe the same basic ideas (Bitgood S. 1988: 4, 1989a: 13-15); (Ellis J. & Koran J. 1991: 72); (Centre for Social Design, 1988: 8-9).

RELIABILITY
This is the degree of consistency or stability of behavioural measurements.
First of all, the evaluation must be objective: personal feelings must be strictly excluded from data collection and evaluation, and observers must always follow a standardized procedure.

1. Inter-observer Reliability: whether the results recorded by several observers agree. Before starting the observation, each independent observer must agree on the detailed method.

2. Internal Consistency Reliability: whether results agree when measurements are taken at different places and times. All items on the test must be measured in the same way across times and places.

3. Replication Reliability: whether measuring the same thing twice yields the same results. The method of observation may be applied repeatedly, to the same subject or to different ones; if the same thing is measured twice, both results must agree. To make this possible, the method must be described in sufficient detail.
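Inter-observer reliability is commonly quantified as simple percent agreement, or as Cohen's kappa, which corrects agreement for chance; kappa is a standard statistic from behavioural research, not one prescribed by the papers cited here. A minimal sketch, with hypothetical observer codes ("S" = stopped, "P" = passed by):

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of items on which two observers recorded the same code."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def cohens_kappa(a, b):
    """Observed agreement corrected for chance agreement (Cohen's kappa)."""
    n = len(a)
    po = percent_agreement(a, b)              # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical codes recorded independently by two observers for ten visitors.
obs1 = ["S", "S", "P", "S", "P", "P", "S", "S", "P", "S"]
obs2 = ["S", "S", "P", "P", "P", "P", "S", "S", "P", "S"]

print(percent_agreement(obs1, obs2))  # 0.9
print(cohens_kappa(obs1, obs2))       # 0.8
```

A kappa well below the raw agreement would indicate that the observers' apparent consistency is largely what chance alone would produce, which is why agreeing on a detailed coding method beforehand matters.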

VALIDITY
This is the degree to which the results depend on the method used: the results might differ somewhat if the method were changed. The method must therefore be elaborated by experts so that the results are accurate and valuable. There are many aspects of validity; some are listed as follows.

1. Construct Validity:
The degree to which the recorded data really measure what observers are supposed to measure, especially in direct observation. For example, the viewing time recorded may include a visitor's daydreaming time.

2. Recording Validity:
The degree to which the measurement system distorts the actual behaviour of visitors, especially in self-reported methods such as the interview and questionnaire. For example, the time spent in the exhibit as reported by a visitor may be overestimated.

3. Content Validity:
The degree to which a sample of visitors' behaviour is representative of the behaviour the observer wished to test. For example, the viewing time recorded is influenced by various conditions, such as crowding, or the weather when an exhibition is outdoors.

4. Predictive Validity:
The degree to which the results can be used to predict visitors' reactions to other exhibits. If the method used for the evaluation did not take account of the above-mentioned validities (construct, recording and content validity), the results cannot be generalized to predict visitors' reactions to other exhibitions or to guide the setting up of new ones.


The descriptions above cover the fundamental principles of visitor studies. In practice, these studies are more complicated than in theory, since they cannot avoid drawing on other fields such as psychology and sociology. These matters will be discussed in the next section.

Bibliography
AAM (American Association of Museums) Visitor Evaluation and Research Committee, 1988. Statement of Goals.
American Association of Museums, 1984. Museums for a New Century: 65.
Bitgood S., 1988. An Overview of the Methodology of Visitor Studies, Visitor Behavior, V3, N3, Fall 1988: 4, 5-7.
Bitgood S., 1989. Professional Issues in Visitor Studies, Visitors Studies: Theory, Research and Practice V2: 10, 11,13,15,18,19.
Butler S., 1992. Science and Technology Museums, Leicester University Press: 91.
Center for Social Design, 1988. Method of Evaluation, Visitor Behavior, V3, N3, Fall 1988: 8, 9.
Ellis J. & Koran J., 1991. Research in Informal Settings, Some Reflections on Designs and Methodology, ILVS Review Spring 1991: 72.
Exhibit Communications Research Inc., 1992. The International Laboratory for Visitor Studies (ILVS), ILVS Review, A Journal of Visitor Behaviour, V2, N2, 1992: 157.
Miles R.S., 1993. Grasping the Greased Pig: Evaluation of Educational Exhibits, Museum Visitor Studies in the 90s, Science Museum, London: 26.
Miles R.S. & Tout A.F., 1978. Human Biology and the New Exhibition Scheme in the British Museum (Natural History), Curator, 21 (1): 36-50.
Patterson D., 1989. Contributions of Environmental Psychology to Visitors Studies, Visitor Studies: Theory, Research and Practice, V2, Centre for Social Design: 81.
Robinson E.S., 1930. Psychological Problems in the Science Museum, Museum News (USA), 8 (5): 9-11.
Screven C.G., 1990. Uses of Evaluation Before, During and After Exhibit Design, ILVS Review, 1 (2), 1990: 37-59.
Scriven M., 1977. Methodology of Evaluation, in Bellack A. & Kliebard H. (editors), Curriculum and Evaluation, Berkeley, McCutchan: 334-371.
Shettel H., 1968. An Evaluation of Existing Criteria for Judging the Quality of Science Exhibits, Curator, 11 (2): 137-153.

(This article is published here with the author's permission.)

Copyright © Japanese Institute of Global Communications