A Study of Self-Paced and Machine-Paced Inspection

A study was conducted to investigate the performance of inspectors under different conditions using computer-generated visual test items. The purpose of the study was to compare performance in self-paced vs. machine-paced inspection tasks. The factors considered were searching for a single type of flaw vs. three types of flaws, the viewing time for machine-paced inspection, and the type of instruction for self-paced inspection. The results showed that performance was better when subjects searched for only one type of flaw. Performance improved with increased viewing time but was not affected by the type of instruction. There was no difference in accuracy between self-paced and machine-paced inspection provided the viewing time was sufficient; overall, however, performance in self-paced inspection was better.


INTRODUCTION
Inspection is the careful search for nonconformities in an item when that item is compared with a standard. The importance of inspection has increased in the past decade, partly due to greater consumer demand for accountability for defective manufacture and defective materials. This has resulted in a drastic increase in the cost of liability litigation and, consequently, in higher insurance premiums.
Improved inspection helps to reduce both the number of poor-quality products reaching the consumer and the high cost of litigation.
In industry today, most inspection tasks are visual and, as a result, inspection errors are inevitable. Therefore, it is of utmost importance to assess the accuracy of visual inspection and to optimize inspection conditions.
There are several factors that affect inspector performance. Among them are: (1) the time allowed for inspecting each item, (2) inspection for single vs. multiple flaws, and (3) whether the task is machine-paced or self-paced. Geyer, Patel, and Perry (1979) and Geyer and Perry (1982) reported a study suggesting that the performance of inspectors deteriorates when they are looking for more than one type of defect in a single inspection. Half of the displays were good; the other half contained either one type of flaw or any one of three types of flaws. The results showed that the harder task (multiple-flaw inspection) was performed consistently two seconds slower. McFarling and Heimstra (1975) showed that self-paced subjects detected more defects and rated the task "less uncomfortable" than machine-paced subjects. In their experiment, subjects were asked to inspect 255 slides of printed circuits.
Half of the subjects were machine-paced throughout the 52-minute trial. The other half were self-paced but were asked to try to finish within 52 minutes. Eskew and Riche (1982) suggested that the improvement in performance on a self-paced task could be attributed either to the subject's control of the task or to the subjects' slowing of the presentation rate. They conducted an experiment in which a machine-paced and a self-paced subject performed the inspection task simultaneously, with the self-paced subject controlling the rate of presentation for both. The self-paced subject still performed better, indicating that giving the inspector control of the rate of presentation improves performance.

OBJECTIVES
The study reported here was conducted to investigate the following questions with respect to visual inspection.
(Proceedings of the Human Factors Society, 30th Annual Meeting, 1986)
(1) How do inspection mode (single flaw vs. multiple flaw) and presentation time affect accuracy in a machine-paced inspection task?
(2) How do inspection mode and type of instruction affect accuracy and response time in a self-paced inspection task?
(3) What differences in accuracy exist between self-paced and machine-paced inspection?
A further objective of the study involved a comparison with the results of Geyer and Perry (1982) using a CRT generated display rather than the slides used in the former study.

Task
The displays used for the inspection task were a computerized version of the slides used by Geyer and Perry (1982). A good display consisted of a symmetrical cross of 1's over background noise of 0's, centered with respect to the outside border of +'s (Figure 1).
A defective display contained one of three types of flaws, including:
(2) Extra 1. An extra 1 was located at either end of the vertical arm of the cross (Figure 3: centered cross of 1's with an extra 1 in one of the vertical arms).
(3) T. One of the 1's was replaced by a T somewhere in the cross (Figure 4).
The density of the 0's was 100 per display for both good and defective items. The 0's were located at random so that they could not serve as a reference for identifying defective displays.
Only one type of defect occurred on any defective item.
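The display generation described above can be sketched in code. The following is a minimal sketch, not the original TRS-80 BASIC program: the grid dimensions, arm length, and exact placement rules are assumptions, since the paper does not state them; only the border of +'s, the centered cross of 1's, the 100 random 0's, and the defect types come from the text.

```python
import random

ROWS, COLS = 19, 19          # grid size is an assumption; the paper does not state it
ARM = 5                      # half-length of the cross arms (also assumed)
NOISE_COUNT = 100            # density of 0's per display, as stated in the paper

def make_display(defect=None):
    """Build one display as a list of strings.

    defect: None (good display), "extra_1" (an extra 1 beyond one end of the
    vertical arm), or "t" (one 1 in the cross replaced by a T).
    """
    grid = [[" "] * COLS for _ in range(ROWS)]
    # Outside border of +'s.
    for c in range(COLS):
        grid[0][c] = grid[ROWS - 1][c] = "+"
    for r in range(ROWS):
        grid[r][0] = grid[r][COLS - 1] = "+"
    # Symmetrical cross of 1's, centered with respect to the border.
    cr, cc = ROWS // 2, COLS // 2
    cross = [(cr, cc + d) for d in range(-ARM, ARM + 1)] + \
            [(cr + d, cc) for d in range(-ARM, ARM + 1) if d != 0]
    for r, c in cross:
        grid[r][c] = "1"
    if defect == "extra_1":
        # Extra 1 just beyond a randomly chosen end of the vertical arm.
        end = random.choice([cr - ARM - 1, cr + ARM + 1])
        grid[end][cc] = "1"
    elif defect == "t":
        # Replace one 1 somewhere in the cross with a T.
        r, c = random.choice(cross)
        grid[r][c] = "T"
    # Scatter the 0's at random over the remaining interior cells, so their
    # positions carry no information about whether the display is defective.
    empty = [(r, c) for r in range(1, ROWS - 1) for c in range(1, COLS - 1)
             if grid[r][c] == " "]
    for r, c in random.sample(empty, NOISE_COUNT):
        grid[r][c] = "0"
    return ["".join(row) for row in grid]
```

Because only one defect is injected per display, a defective item differs from a good one by exactly one character, which matches the paper's description.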

Subjects
Two different subject groups were established to compare performance in machine-paced and self-paced inspection. Twelve volunteers from an undergraduate psychology class at the University of Oklahoma participated in the experiment. Six subjects were randomly assigned to each test group.
Subjects who were inspecting for one type of flaw received instructions as to which type they were looking for. The particular flaw that a subject had to search for was determined prior to the experiment using a counterbalanced design.

Equipment and Software
The test items were presented on a display monitor driven by a TRS-80 Color Computer (Extended BASIC, 32K RAM). The software generated the test items and recorded the subject responses, including the number of correct responses, number of misses, number of false alarms, and the response time.
The proportion of good and defective items presented to the subjects was randomized by the software with a mean of 50%.
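One way this randomization could be realized is to draw each item's status independently with probability 1/2, so the proportion of defective items has a mean of 50%. The sketch below is an assumption about the mechanism (the original BASIC program is not available), and the flaw-type names are placeholders.

```python
import random

def make_trial_sequence(n_items, flaw_types, seed=None):
    """Randomly label each of n_items as good or defective with probability
    1/2 each, so the expected proportion of defective items is 50%.
    Each defective item carries exactly one flaw, chosen at random from
    flaw_types (a single-element list in the single-flaw condition).
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_items):
        if rng.random() < 0.5:
            trials.append(("defective", rng.choice(flaw_types)))
        else:
            trials.append(("good", None))
    return trials

# A 72-item trial in the multiple-flaw condition (placeholder flaw names):
sequence = make_trial_sequence(72, ["flaw_a", "flaw_b", "flaw_c"], seed=1)
```

Drawing each item independently, rather than fixing the split at exactly 36/36, prevents a subject from counting defects and inferring the status of the remaining items.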

Procedure
Subjects were tested individually. Each subject received specific instructions according to the test group to which he or she belonged.
The subject was allowed a five-minute practice trial to become accustomed to the different response keys and to learn the difference between good and defective displays.
Each subject then performed one testing trial on each set assigned for that test group. Each test trial of 72 items lasted approximately fifteen minutes.
For the machine-paced group, the order of presentation of the four sets was determined at random for each subject. After each display was presented for the fixed time period, the subject was to answer whether the display had a flaw or not. For the self-paced group, the order of presentation of the four sets was also determined at random for each subject.
Subjects were to answer whether the display contained a flaw or not as soon as the decision was reached.

RESULTS
The results of the study are presented in Table 1 and Figures 5, 6, and 7.
Separate analyses of variance were performed for accuracy and response time.
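A between-groups comparison of this kind can be reproduced with a one-way analysis of variance. Below is a minimal pure-Python sketch; the accuracy values are illustrative placeholders, not the study's data, and the study's actual analysis may have included additional factors.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of the between-group mean
    square to the within-group mean square."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative placeholder accuracies (proportion correct) for the two
# machine-paced viewing-time groups -- NOT the study's data.
half_second = [0.83, 0.86, 0.84, 0.85, 0.87, 0.85]
one_second = [0.89, 0.91, 0.90, 0.88, 0.92, 0.90]
f_stat = one_way_anova_f(half_second, one_second)
```

The resulting F statistic, with (k-1, n-k) degrees of freedom, would be compared against the F distribution to obtain p-values like those reported below.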

Figure 5. Effect of viewing time and number of flaws on accuracy for a machine-paced inspection task.
In the machine-paced task, accuracy increased (p<0.03) when viewing time was increased from 1/2 second (85% correct) to 1 second (90% correct) and was higher (p<0.04) when subjects were looking for a single type of flaw (Figure 5).
In the self-paced task, accuracy was not affected by the type of instruction (93% correct for emphasis on accuracy and 95% correct for emphasis on speed), nor by subjects looking for single flaws (96%) vs. multiple flaws (92%) (Figure 6).
When comparing the machine-paced and self-paced tasks, there was no difference in accuracy provided the subject had sufficient time (1 second) to make a decision (accuracy was 94% for self-paced subjects, 90% for machine-paced subjects with a viewing time of 1 second, and 85% for machine-paced subjects with a viewing time of 1/2 second). Response time was not affected by the type of instructions (1.31 sec for emphasis on accuracy and 0.90 sec for emphasis on speed), but was faster (p<0.088) when the subject inspected for a single flaw (0.84 sec for single flaws vs. 1.36 sec for multiple flaws) (Figure 7).
It is interesting to note that subjects performing a self-paced inspection had an average response time of 1.1 seconds and an average accuracy of 94%. This can be compared to the machine-paced group that had a viewing time of one second: that group had a mean accuracy of 90%, confirming the results of Eskew and Riche stated earlier.

CONCLUSIONS
The practical significance of this study is in its application to the design of inspection tasks in industry. If the constraints of the production process allow self-paced inspection, it should be adopted along with appropriate instructions and inspector motivation. However, if the constraints necessitate machine-paced inspection, then a preliminary study should be performed for each inspection task to determine the required amount of inspection time for the desired level of accuracy. In addition, placing more than one inspector at each station when multiple flaws are to be identified helps to improve overall inspection performance.