November 2003 | Volume 44, Issue 11
Visual Psychophysics and Physiological Optics
Facial Recognition Using Simulated Prosthetic Pixelized Vision
Author Affiliations
  • Robert W. Thompson, Jr
    Lions Vision Research and Rehabilitation Center, Wilmer Ophthalmological Institute, The Johns Hopkins University School of Medicine, Baltimore, Maryland
  • G. David Barnett
    Lions Vision Research and Rehabilitation Center, Wilmer Ophthalmological Institute, The Johns Hopkins University School of Medicine, Baltimore, Maryland
  • Mark S. Humayun
    Doheny Retina Institute, Keck School of Medicine, University of Southern California, Los Angeles, California
  • Gislin Dagnelie
    Lions Vision Research and Rehabilitation Center, Wilmer Ophthalmological Institute, The Johns Hopkins University School of Medicine, Baltimore, Maryland
Investigative Ophthalmology & Visual Science November 2003, Vol.44, 5035-5042. doi:10.1167/iovs.03-0341
Abstract

purpose. To evaluate a model of simulated pixelized prosthetic vision that uses noncontiguous circular phosphenes, and to test the effects of phosphene and grid parameters on facial recognition.

methods. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10 × 10 to 32 × 32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast.

results. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials.

conclusions. These findings suggest that reliable face recognition can be learned with coarse pixelized grids and may therefore be possible even with a crude visual prosthesis.

In the United States, at least 1.3 million people (0.5%) are legally blind. In 2001, the American Foundation for the Blind estimated that 260,000 Americans have vision that is clinically measured as light perception vision or less, and roughly half of these individuals, or 130,000 persons, are totally blind. 1  
Electronic sensory substitution devices and prosthetic visual devices have been proposed to provide visual information to blind individuals. Sensory substitution devices use other senses, such as auditory or tactile stimulation, to communicate visual information to subjects. 2 3 4 Prosthetic visual devices electronically stimulate a portion of the subject’s visual pathway to produce the sensation of a point of light called a phosphene. Researchers have been working to develop devices to elicit the perception of phosphenes by stimulation of the visual cortex or optic nerve. 5 6 7 8 9 10 Other investigators have pursued the development of devices to stimulate the remaining retinal neurons in patients blind from photoreceptor degenerative diseases, such as retinitis pigmentosa and age-related macular degeneration. 11 12 13 14 15 
Stimulating the visual system with multiple electrodes produces phosphenes that may result in the perception of a gross image, similar to that produced by the array of lights on an athletic scoreboard. The stimulation of electrodes associated with specific phosphenes has allowed some blind subjects to recognize shapes and letters during electrical stimulation studies of the occipital cortex and retina. 16 17 
Those involved in prosthetic vision research seek to develop devices capable of producing visual perception that enables or enhances the performance of important and meaningful daily tasks. Face recognition is among the most important visual tasks in daily life. Severe visual impairment was defined in the Lighthouse National Survey as “an inability to recognize a friend at arm’s length, the inability to read ordinary newspaper print, or reports of poor or very poor vision or blindness even when wearing glasses or contact lenses.” 1 In low-vision clinics, patients often report difficulty in recognizing faces. Some patients with low vision benefit from spotting telescopes to assist them in this task. For other patients, this aid is inadequate. 
Previous simulation studies of pixelized prosthetic vision have been performed to investigate the effects of dot size, dot spacing, and grid size in the performance of tasks such as high-contrast letter discrimination, reading speed, and moving through a maze. 18 19 20 Studies testing the resolution necessary for recognition of pixelized facial images have demonstrated that at least 15 pixels per face width are required. 21 22 However, these investigators used contiguous square blocks for the pixelized images. Such images contain straight edges with high spatial frequency components that may mask the lower spatial frequency features. The purpose of our study was to use a more sophisticated model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes to evaluate multiple parameters that may affect facial recognition. 
Methods
Subjects
Four college students (two women and two men), 20 to 32 years of age, volunteered for the study. All had best-corrected visual acuity of 20/20. They received a meal coupon and a small financial remuneration for their participation. Before they were enrolled, written informed consent was obtained from all participants. The research adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of the Johns Hopkins University School of Medicine. 
Techniques
The facial images used in the study were drawn from a database of 60 patients and hospital employees. The group was balanced in the following respects: number of men and women; number of black and white individuals; and number of old, middle-aged, and young individuals. Using a digital camera with a resolution of 320 × 256 pixels with 256 gray levels, two head-on images (one serious and one smiling) and three averted images of each individual were captured for use in the study. Images were cropped to 240 × 240 pixels (18° × 18°). All faces displayed in the reference and test conditions occupied a visual field of 13° horizontally (ear to ear) and 17° vertically (chin to crown). 
Using commercial software (Microsoft Visual Basic; Microsoft Corp., Redmond, WA), we developed a program to present the facial recognition task using a 400-MHz desktop computer (Pentium II; Gateway, Poway, CA), a video card (Diamond Stealth; Diamond Computer Systems, Sunnyvale, CA) with 2 megabytes of memory, with video output to a modified low-vision enhancement system (LVES) display. 23 The LVES display has a vertical visual field of 36° and a horizontal visual field of 48°. The display has 480 vertical and 640 horizontal pixels, of which a 480 × 480-pixel area was used. Thus, each pixel spans 4.5 minutes of arc (arcmin). The LVES system (Fig. 1) is capable of displaying 256 gray levels. 
Prefiltered test images were viewed through dot masks representing the phosphene arrays a visual prosthesis wearer might perceive. The grid size (10 × 10, 16 × 16, 25 × 25, 32 × 32 dots), dot size (13.5, 31.5, 49.5, 58.5, and 67.5 arcmin), dot gap size (4.5, 22.5, and 40.5 arcmin), number of simultaneous gray levels (two, four, six, and eight), and dot dropout percentage (10%, 30%, 50%, and 70%) were varied. The dropout percentage investigates the role of dots omitted at random from the array to test how well a prosthesis wearer may be able to adjust to the “incomplete” array that may result if stimulating electrodes fail to function because of nonresponsive underlying tissue or degeneration. The effect of each variable was tested separately around the standard condition of a 16 × 16-dot grid, with 31.5-arcmin dot diameters and 4.5-arcmin dot gaps (producing a 36-arcmin pitch), with six gray levels and a 30% dropout percentage (Fig. 2 , panels with S after the parameter value). This standard test condition was chosen on the basis of preliminary test results and reports from prior retinal and cortical stimulation studies. 
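The grid geometry above follows directly from the dot and gap sizes: the pitch is the center-to-center dot spacing (dot diameter plus gap), and the grid's overall visual angle is the number of dots times the pitch. A minimal sketch in Python (the original software was written in Visual Basic; the function names here are ours):

```python
# Geometry of the simulated phosphene grid, as described in the Methods.

def pitch_arcmin(dot_arcmin, gap_arcmin):
    """Center-to-center dot spacing in arcmin (dot diameter + gap)."""
    return dot_arcmin + gap_arcmin

def grid_span_deg(n_dots, dot_arcmin, gap_arcmin):
    """Visual angle spanned by an n_dots x n_dots grid, in degrees."""
    return n_dots * pitch_arcmin(dot_arcmin, gap_arcmin) / 60.0

# Standard condition: 16 x 16 dots, 31.5-arcmin dots, 4.5-arcmin gaps.
print(pitch_arcmin(31.5, 4.5))       # 36.0 arcmin pitch, as in the text
print(grid_span_deg(16, 31.5, 4.5))  # 9.6 degrees
print(grid_span_deg(32, 31.5, 4.5))  # 19.2 degrees
```

These values match the 9.6° to 19.2° grid spans cited later in the Discussion.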
Procedure
At the start of each experimental session, the LVES head-mounted display was fitted to the subject. A test pattern was used to allow the subject to focus the image and correct for refractive errors in the right eye. Subjects wore an eye patch to cover the left eye. During each trial, subjects first viewed a reference set of four unfiltered facial images of different individuals that filled the 480 × 480-pixel display area (Fig. 3 , left column). The four faces in each trial were matched for gender and race, but contained images representing each age group. After reviewing these images, subjects depressed the space bar to clear the screen, start the viewing timer, and bring up the test image (Fig. 3 , right column). 
The test image was a partially averted facial view of one of the individuals in the reference set, filtered with a circular spatial box filter, with a width equaling the pitch of the dot mask used to scan the display. Specifically, the raw 240 × 240-pixel image was convolved with the box filter, and each dot in the mask was filled with the gray level of the pixel at its center in the convolved image. The gaps between the dots were filled with the background gray level of 0% (high-contrast) or 44% (low-contrast) intensity, as was the unused portion of the screen. The dot mask was instantly updated as the grid was scanned across the test image, with a standard video screen refresh rate of 30 Hz. Using a mouse-pointing device, subjects viewed the test image by scanning the mask simulating pixelized prosthetic vision across the filtered image. 
On determining the identity of the test face, subjects again depressed the space bar. This stopped the viewing timer and returned the display to the reference set of four facial images. Subjects then used the mouse-pointing device to select the face in the reference set that matched the identity of the filtered face scanned by the dot mask. 
Subjects performed 192 trials with the test images viewed under high-contrast conditions (darkest gray level 0%, and brightest gray level 100%). Each of the parameter combinations was presented 12 times in a pseudorandom, counterbalanced order, so that any learning effects would be evenly distributed across the test conditions (Fig. 2) . The experiment was then repeated with the test images viewed under low-contrast (12.5%) conditions (darkest gray level 44% and brightest gray level 56%). 
Performance was evaluated based on the percentage of correct responses and the average correct response time for each test condition. The percentage of correct responses indicates the subject’s accuracy—that is, the ability to discriminate among four faces using a given test grid. The average correct response time provides a measure of the ease with which a subject correctly identified the test face during a trial. Trials with a response time of less than 200 ms were discarded, because they almost certainly represented errors in testing caused by accidental space bar presses (this occurred in 3 high- and 8 low-contrast trials, out of 1536 trials). There were no response times between 200 and 900 ms, and times beyond 900 ms were likely to be from genuine responses. 
For each subject and each condition, an identification index (II) was defined to compare the perceptual efficacy of the various test grids. The rationale for using the II is to generate a single number to measure the potential of a specific set of parameters to meet the basic requirement of a prosthetic visual device (i.e., rapid and accurate visual recognition of images). The value of a prosthetic visual device to a user increases as the percentage of correctly identified images rises above that expected by chance (i.e., 25%, up to the maximum accuracy of 100%). Thus, the numerator of the II is the (average) percentage correctly identified by a subject in excess of that expected by chance. A premium is also placed on the speed with which an image is correctly identified. The value of a visual prosthetic device to an individual may be thought to double if the time and effort required to identify an image correctly is cut in half. Thus, the denominator of the II is the average time (in seconds) required for the subject to correctly identify an image:

II = (%correct − 25%) / t_correct
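The index defined above is a one-line computation; a small sketch with hypothetical inputs:

```python
def identification_index(pct_correct, mean_correct_time_s, chance=25.0):
    """II = (%correct - chance) / t_correct, with t_correct in seconds,
    as defined in the text; chance is 25% for a 4-alternative task."""
    return (pct_correct - chance) / mean_correct_time_s

# e.g. 85% correct with a 6-second mean correct response time:
print(identification_index(85.0, 6.0))  # 10.0
```

Note that performance at chance (25% correct) yields II = 0 regardless of speed, and halving the response time doubles the index, matching the rationale given in the text.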
The one-tailed t-test was used to determine whether the percentage of correct responses and the II for each trial condition significantly exceeded that expected by chance alone (25% and 0, respectively). 
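With four subjects per condition, this test reduces to a one-sample t statistic with 3 degrees of freedom. A hand-rolled sketch (the accuracies below are hypothetical, not the paper's data; 2.353 is the one-tailed 5% critical value for df = 3 from standard t tables):

```python
from math import sqrt
from statistics import mean, stdev

def one_tailed_t(sample, mu0):
    """One-sample t statistic for H1: mean(sample) > mu0."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

# Hypothetical per-subject accuracies (%) for one condition, tested
# against the 25% chance level.
acc = [70.0, 65.0, 80.0, 75.0]
t = one_tailed_t(acc, 25.0)
print(t > 2.353)  # True: significant at P < 0.05 for df = 3
```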
Results
In all but two high-contrast test conditions, the subjects achieved highly significant facial recognition accuracy (P < 0.01). The high-contrast test conditions that did not achieve statistical significance were those with only two gray levels and with a 70% dot dropout rate. Table 1 shows stimulus parameters and subject performance (means and standard deviations across subjects) at high contrast, expressed through the following measures: accuracy (% correct); relative accuracy (ratio of the mean accuracy for a condition and that for the standard condition; only for conditions with significant accuracy levels); response time (in seconds; correct responses only); and the II. Rows in the table have been ordered according to mean accuracy. 
For high-contrast conditions, the average II correlated highly with the average accuracy (r 2 = 0.85). The average II exceeded 10 only for the 16 × 16-dot grid with a 10% dropout rate and for the 25 × 25- and 32 × 32-dot grids (see Table 1 ). Note that most II significance levels were lower than those for accuracy, because of interindividual differences in response timing, and hence increased II standard deviations. Nonetheless, IIs were significant for the same 14 conditions for which accuracy levels were significant. 
Of the low-contrast test conditions, only six did not achieve significant (P < 0.05) accuracy levels—that is, those with grid areas covering less than 17% of the facial image area (the 10 × 10-dot grid, and the 16 × 16-dot grid with 13.5-arcmin dot size), with the highest dropout rate (70%), with fewer than six gray levels, and with the largest dot gaps (40.5 arcmin). Table 2 shows parameters and performance measures for the low-contrast conditions, again ordered according to mean accuracy. 
As in the high-contrast conditions, the average II correlated highly with the average accuracy (r 2 = 0.90). It exceeded seven only for the 25 × 25- and 32 × 32-dot grids (see Table 2 ), yet reached significance in all but one (49.5-arcmin dots) of the conditions with significant accuracy levels. 
In both high- and low-contrast tests, significant performance changes were noted as the test parameters were varied about the standard test condition. For some parameters, these changes are straightforward: an increase in grid size (from 10 × 10 to 32 × 32 dots), a decrease in missing dots (from 70% to 10%), and an increase in number of gray levels (from two to eight) all improved performance. Note, however, that good performance required no more than a 25 × 25-dot grid size, no fewer than 30% dot dropouts, and no more than six gray levels. For other parameters, the changes are less obvious: Dot size appeared to have an optimum range from approximately 0.5° to 1°, whereas an increase of gap size beyond the 4.5-arcmin minimum did not appear to confer a benefit. 
High-contrast test conditions (Table 1) resulted in higher average facial recognition accuracy than did low-contrast test conditions (Table 2) . Accuracy, correct response time, and II were averaged across the 16 test conditions for each subject; means and standard deviations across subjects, at high and low contrast, are shown in Figure 4 to illustrate the effect of contrast on visual performance. The high-contrast test conditions had a higher average accuracy and II. Yet, unexpectedly, the high-contrast tests also had a higher average correct response time than the low-contrast tests. 
Because the discrimination task is more difficult at low contrast than at high contrast, the shorter average response times for these conditions (Fig. 4B) are unexpected. They could be explained, however, if our subjects’ response times shortened significantly with practice, because the low-contrast tests followed the high-contrast tests. To investigate this, our subjects’ performances were plotted by chronologic quartiles (blocks of 48 consecutive trials) in Figure 5 . As would be expected for a practice effect, Figure 5B shows that the first quartile of the high-contrast trials had the longest average correct response time, and subsequent high-contrast quartiles showed progressive decreases in response time. For the low-contrast trials, response times in the first quartile were somewhat elevated from those recorded previously in the late quartiles of the high-contrast trials and were then followed by moderate decreases. Average correct response times for the second, third, and fourth quartiles were indistinguishable between the high- and low-contrast trials. Thus, the pronounced early decline in the average correct response time for the high-contrast test conditions is the sole cause of the unexpected discrepancy in Figure 4B.
The average accuracy (Fig. 5A) did not change appreciably during the high-contrast trials, but results in later low-contrast trials appeared slightly less accurate than earlier ones, whereas correct response times became shorter, suggesting a time–accuracy tradeoff that may be explained by subject fatigue. 
The II (Fig. 5C) showed a marked and almost linear increase throughout the duration of high-contrast testing, but no appreciable increase during low-contrast testing. This suggests that practice had a major effect on overall test performance, but this practice effect leveled off by the end of high-contrast testing. 
Discussion
The results of these tests indicate that multiple variables influenced facial recognition with a pixelized grid viewing system. Parameters such as contrast, grid size, dot size, dot gap, dropout rate, and gray-scale resolution had a significant effect on facial recognition speed and accuracy. 
As has been noted, the dependence of performance measures on dot and gap sizes is not straightforward. Both dot size and gap affect the dot pitch and hence the sampling frequency (cycles per face), which is known to affect facial recognition performance. In this study, only the largest dot pitch (72 arcmin, i.e., 67.5 + 4.5 arcmin and 31.5 + 40.5 arcmin), with a spatial sampling frequency of 5.6 cycles per face width, was associated with poor facial recognition. This result is consistent with previously published studies in which spatially filtered facial recognition was optimal at spatial frequencies of 8 to 16 cycles (or 16 to 32 dots) per face width. 24 25 26 27 28 Note, however, that IIs for the 6.3- and 7.3-cycles-per-face-width conditions were close to those for the standard condition, 11.1 cycles per face width. A tentative explanation is that, in most of our test conditions, the masks were limited to eight cycles (the Nyquist equivalent of 16 × 16 dots), which may have limited performance by restricting the field of view to part of the face and thus required more scanning and integration to visualize the test face. The poorer performance for the smallest dot pitch (corresponding to 22 cycles per face width), especially at low contrast, can be attributed to both field limitations and high spatial frequency content. Even for the grids with the largest dot sizes (1.1°), acceptable average accuracy (53%/48% and 40%/31% at high and low contrast, respectively) was achieved. This condition approaches the 1.75° two-point resolution reported by Humayun et al., 11 who used a handheld multielectrode probe to stimulate the retina in a blind volunteer. The phosphenes generated by electrical retinal stimulation have a perceived size of approximately 0.5° to 1°. 29 The phosphenes generated by stimulation of the occipital cortex have characteristics similar to those generated by electrical stimulation of the retina. 
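The conversion from dot pitch to sampling frequency used above can be sketched as follows; we assume the 13° ear-to-ear face width from the Methods, which yields values slightly below those quoted in the text (presumably a matter of rounding or of the exact face width used):

```python
def cycles_per_face(pitch_arcmin, face_width_deg=13.0):
    """Nyquist sampling frequency, in cycles per face width, for a
    phosphene grid with the given center-to-center pitch."""
    dots_per_face = face_width_deg * 60.0 / pitch_arcmin
    return dots_per_face / 2.0

print(round(cycles_per_face(72.0), 1))  # 5.4, near the 5.6 quoted above
print(round(cycles_per_face(36.0), 1))  # 10.8 for the standard 36-arcmin
                                        # pitch (the text lists 11.1)
```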
In particular, studies investigating stimulation of the occipital cortex have resulted in phosphenes that have varied from pinpoint to nickel-sized at arm’s length (a visual angle of approximately 2°). 8 In this study, some grids with simulated phosphenes with dot sizes of 31.5 arcmin (approximately 0.5°) and 58.5 arcmin (approximately 1°) were associated with more than 80% accuracy. This strongly suggests that face recognition with a crude visual prosthesis is feasible. 
For a given dot pitch (72 arcmin), narrower gaps seem to yield slightly better identification than wider gaps: 53% versus 48% in high-contrast tests and 40% versus 33% in low-contrast tests. Thus, minimizing the gaps between electrodes while maintaining separable phosphenes may result in improved performance with a prosthetic pixelized visual system. 
For our subjects to achieve an average accuracy over 60%, dot dropout rates had to be 30% or less. When a grid with 70% dropout was used, the average accuracy did not exceed 35%; however, one subject still achieved 50% accuracy, even at low contrast. This suggests that experienced visual prosthesis wearers may function successfully, even if dropout rates in a prosthesis increase over time, either due to device deterioration or through further degeneration of the target cell population. An individual with a 64-electrode array occipital cortex prosthetic visual device and a 67% dropout rate has been reported to discriminate reproducibly the orientation of a high-contrast tumbling E or a Landolt ring with 20/1200 acuity. 30 This suggests that well-trained visual prosthesis wearers may perform simple visual tasks, even with a primitive (and deteriorating) device. 
High-contrast tests using test grids with four to eight gray levels resulted in high accuracy levels. When two gray levels were used, the average accuracy was less than 35%; yet one subject achieved accuracies of 42% in high-contrast tests and 58% in low-contrast tests. Thus, even with only two gray levels, appreciable facial recognition performance may be possible for some subjects. Investigators of occipital cortex stimulation have reported that some subjects have been able to distinguish five to eight levels of phosphene brightness. 31 32 Prosthetic visual devices that are capable of reliably producing four or more levels of brightness are likely to have higher accuracy than those just capable of turning phosphenes “off” and “on.” 
For both the high- and low-contrast test conditions, the smallest grids (10 × 10-dot grid, 36-arcmin pitch; and 16 × 16-dot grid, 18-arcmin pitch) had the longest average correct response times. These were the only two grids that spanned a visual angle of 6° or less and had a grid-to-facial image area ratio of less than 0.17. The increased response time may be attributed to the increased effort required to scan the test image. These two small grids had a significant average accuracy for the high-contrast test condition. However, in low-contrast tests, these grids were associated with the lowest accuracies and failed to achieve statistical significance. This suggests that information integration from prolonged scanning requires high contrast, but the frustration and fatigue of the subjects may also have played a role. 
The II may be used as a single-number performance measure to evaluate the various grid parameters in terms of their importance for image identification. The larger 25 × 25- and 32 × 32-dot grids were associated with the highest accuracies and IIs in both the high- and low-contrast test conditions. Grids with an average II more than 5 had common features: a grid size 16 × 16 to 32 × 32 dots, a grid visual angle of 9.6° to 19.2°, a dot dropout rate of 30% or less, at least six gray levels, a dot size of 31.5 or 58.5 arcmin, and a gap size of 4.5 arcmin. High contrast was an additional common characteristic for 7 of the 11 grids with an average II more than 5 and for all three test conditions with an average II more than 10. 
Low IIs were associated with grids with a high dropout rate (70%), and a low gray-scale resolution (two gray levels). In addition, for low-contrast tests, the two smallest grids (10 × 10-dot, 36-arcmin pitch and 16 × 16-dot, 18-arcmin pitch), large gaps between the dots (40.5 arcmin) and low gray-scale resolution (four gray levels or less) were associated with low IIs. However, even in the test conditions resulting in the worst performance, some subjects still achieved discrimination levels above chance. This suggests that some subjects learn to use coarse distinctive traits (such as hair line, or jaw line) that remain detectable even through very crude masks. 
The simulations in this study may exceed certain properties of a visual prosthesis, such as the rapid refresh rate (30 Hz) of the grid scanning the test image. Prosthetic visual devices may have limited image refresh rates. For example, Dobelle’s 27 cortical prosthesis is reported to have a maximum refresh rate of 20 Hz, but is also claimed to provide optimal performance to the wearer at refresh rates down to 4 Hz. Tests using decreased frame rates may provide a more realistic simulation of prosthetic vision, but with increasing hardware capabilities, frame rate is unlikely to remain a limitation. In our tests, whenever more difficult viewing conditions were encountered, subjects were observed to rapidly scan the grid over the image in an effort to paint in the gaps in the image. This rapid scanning suggests that subjects use spatiotemporal integration to fill in the missing dots and expand the effective visual field. 
In Figure 4B , we noted that the average correct response time was shorter for low than for high contrast, and the analysis by chronologic quartiles in Figure 5 demonstrated that this difference is largely attributable to lack of practice in the early (i.e., high-contrast, first quartile) trials. We also noted further gradual shortening of correct response times in both high- and low-contrast conditions. However, the constancy of low-contrast IIs across quartiles, after a continued increase of II throughout the high-contrast tests (Fig. 5C) , suggests that any practice effect was essentially complete by the end of the high-contrast trials. 
An explanation for the slight decline in correct response times in the third and fourth quartiles of both high- and low-contrast conditions, with a slight decrease in accuracy at low contrast, would be the subjects’ loss of patience and commitment. Subjects may become less willing to invest the time and effort necessary to continue performing at a high accuracy level. It is possible, for example, that the correct responses during later quartiles stem from relatively fewer difficult conditions (which would require longer decision times than easier conditions). To illustrate this idea, Figure 6 presents data similar to those in Figure 5B , but separately for conditions with the highest and the lowest average correct rates (i.e., the upper and lower eight conditions in Tables 1 and 2 ). 
One may note that the difficult conditions (Fig. 6 , dashed lines) in both panels of this figure show, in more pronounced form, the same trends signaled in Figures 4B and 5B , whereas the trends are much weaker for the easy conditions (solid lines). A dramatic and continuing reduction in correct response times over the course of the high-contrast experiment (Fig. 6A) suggests that the practice effect is strongest for the difficult conditions. On the other hand, there was only a moderate reduction in response times during low-contrast tests (Fig. 6B) , combined with an apparent slight deterioration in response accuracy (Fig. 5A) . Whether this deterioration is attributable to fatigue, loss of patience, or other factors is difficult to establish, but the timing error bars for the difficult conditions at low contrast (Fig. 6B) suggest an answer. These error bars are between-subject averages of individual standard deviations across eight conditions. Therefore, the steady decrease of the error bars for difficult conditions (Fig. 6B , upward bars) suggests that subjects develop, over the course of the 96 trials, a well-honed scanning routine with fairly constant time to inspect the pixelized face and that they become more likely to resort to guessing if this scan does not provide a probable answer. 
In tests of prosthetic vision, the phosphenes have been noted to move with the subjects’ eye movements, as would be expected for electrodes rigidly attached to a retinal or cortical substrate. Thus, in a realistic simulator, the image should be stabilized on the retina, which would require an eye tracker. In our simulation, eye movements were not tracked. The subject was able to scan the sampled image using eye movements, while using the mouse to scan the grid across the filtered image. Eliminating the ability to scan and rapidly foveate multiple areas of the sampled test image will provide a more realistic simulation of prosthetic vision and may degrade task performance. We will be investigating this effect in an upcoming study. 
As one goes up the visual system, receptive fields have an increasingly complex organization. Thus, although small homogeneous dots are an adequate representation of the phosphenes seen by stimulation at the distal retinal level, more complex phosphenes may be observed by stimulation at higher levels. Conversely, it is possible that more complex stimulation patterns and parameters are needed for optimal stimulation at the ganglion cell level and beyond. It is possible that one day visual prostheses will be capable of reliably producing more complex percepts than the simple phosphenes used in this study (e.g., rings, colors, three-dimensional shapes), but at the present time it seems appropriate to restrict these simulations to the types of phosphenes reported by subjects in intraoperative and prolonged prosthetic trials. In future studies, the effects of more elaborate stimulation patterns on tasks such as facial recognition should be explored. 
Our subjects did not receive formal training to optimize the use of the pixelized grid viewing system. Teaching the techniques used by individuals to recognize faces successfully under more challenging test conditions would probably improve performance of this task, although the data in Figure 6 suggest that all four subjects gradually developed a routine that worked for them. Nonetheless, future simulation tests, with dot parameters and grid patterns matching those of actual visual prosthetic devices, should seek to develop rehabilitative strategies that may improve task performance in conditions that are difficult for our subjects and can eventually help future prosthesis wearers to derive maximum benefit from their devices. 
The subjects in this study were relatively young. Many users of visual prosthetic devices may be older individuals. Age may affect learning, fatigue, and motivation and therefore may somewhat affect the results of the study. We anticipate, however, that older individuals would be capable of acquiring the same recognition strategies as those used by our younger subjects, albeit possibly at a slower pace. 
The effects of various image enhancements may also be investigated in future studies. Such studies may examine the merits of feature extraction through contrast enhancement, edge detection, high-pass filtering, and variable zoom on the performance of tasks such as facial recognition. 
The results presented herein indicate that initially slow, but gradually swifter and more accurate, facial recognition can be achieved when pixelized dot images are scanned over convolved facial images by using a mouse-pointing device. This study also suggests that prosthetic visual devices designed to minimum specifications outlined by our results may provide blind individuals with visual percepts enabling facial recognition and similar daily living activities. 
 
Figure 1.
 
The LVES was worn by the subjects and used as a video projector to view the computer-generated reference and test images.
Figure 1.
 
The LVES was worn by the subjects and used as a video projector to view the computer-generated reference and test images.
Figure 2.
 
Pixelized test images for one of the averted images of the white male in the bottom right-hand corner of the fourth reference set in Figure 3 . The parameters were independently varied about the standard test condition (marked S), which consisted of a 16 × 16-dot grid with 31.5-arcmin dots, 4.5-arcmin gaps, a 30% dot dropout rate, and six gray levels. This standard condition is placed in each row for comparison. All conditions were presented in both high and low contrast, during separate experimental sessions. First row: appearance of variations in grid size; second row: dot size; third row: dot gap and contrast; fourth row: dot dropout rate; fifth row: number of gray levels.
Figure 2.
 
Pixelized test images for one of the averted images of the white male in the bottom right-hand corner of the fourth reference set in Figure 3 . The parameters were independently varied about the standard test condition (marked S), which consisted of a 16 × 16-dot grid with 31.5-arcmin dots, 4.5-arcmin gaps, a 30% dot dropout rate, and six gray levels. This standard condition is placed in each row for comparison. All conditions were presented in both high and low contrast, during separate experimental sessions. First row: appearance of variations in grid size; second row: dot size; third row: dot gap and contrast; fourth row: dot dropout rate; fifth row: number of gray levels.
Figure 3.
 
Samples of reference images (left column) and 32 × 32 dot test images (right column) from one of the four models in each set. In the study, the test image was the same size as the faces in the reference image. The test image has been enlarged 50% for better illustration of the pixelization. Subjects scanned the grid of dots over the test image using the mouse-pointing device.
Figure 3.
 
Samples of reference images (left column) and 32 × 32 dot test images (right column) from one of the four models in each set. In the study, the test image was the same size as the faces in the reference image. The test image has been enlarged 50% for better illustration of the pixelization. Subjects scanned the grid of dots over the test image using the mouse-pointing device.
Table 1.
 
High-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Table 1.
 
High-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Grid Size (No. dots) Dot Size (arcmin) Gap Size (arcmin) Dropout (%) Gray Levels Accuracy, † (%) Relative Accuracy Correct Resp. Time, † (s) Identification Index, † Grid Size (deg) Dot Pitch Horizontal Cycles Per Face Vertical Cycles Per Face
25×25 31.5 4.5 30 6 92 ± 7, ** 1.52 5.7 ± 1.6 12.2 ± 2.0, ** 15 36 11.1 14.3
16×16 58.5 4.5 30 6 88 ± 8, ** 1.43 8.2 ± 2.8 8.1 ± 2.1, ** 16.8 63 6.3 8.1
16×16 31.5 4.5 10 6 85 ± 13, ** 1.36 6.3 ± 1.9 10.3 ± 3.4, ** 9.6 36 11.1 14.3
32×32 31.5 4.5 30 6 83 ± 12, ** 1.32 6.2 ± 1.9 10.0 ± 3.4, ** 19.2 36 11.1 14.3
16×16 31.5 4.5 30 8 81 ± 17, ** 1.27 8.7 ± 1.8 6.6 ± 2.3, ** 9.6 36 11.1 14.3
16×16 31.5 4.5 30 6 69 ± 17 * 1.00 8.1 ± 1.7 5.4 ± 2.0 , ** 9.6 36 11.1 14.3
16×16 49.5 4.5 30 6 63 ± 11, ** 0.86 7.5 ± 3.9 6.1 ± 3.1* 14.4 54 7.4 9.5
16×16 31.5 4.5 30 4 63 ± 11, ** 0.86 9.9 ± 4.5 4.0 ± 0.9, ** 9.6 36 11.1 14.3
10×10 31.5 4.5 30 6 60 ± 10, ** 0.80 17.0 ± 9.2 2.7 ± 1.9* 6 36 11.1 14.3
16×16 31.5 22.5 30 6 59 ± 12, ** 0.77 7.6 ± 2.6 4.9 ± 2.2* 14.4 54 7.4 9.5
16×16 31.5 4.5 50 6 56 ± 10, ** 0.70 12.1 ± 5.7 3.2 ± 1.8* 9.6 36 11.1 14.3
16×16 67.5 4.5 30 6 53 ± 10, ** 0.64 7.6 ± 4.8 4.3 ± 1.8, ** 19.2 72 5.6 7.1
16×16 13.5 4.5 30 6 52 ± 8, ** 0.61 14.5 ± 4.9 2.1 ± 0.9* 4.8 18 22.2 28.5
16×16 31.5 40.5 30 6 48 ± 8, ** 0.52 8.0 ± 5.7 3.6 ± 1.6* 19.2 72 5.6 7.1
16×16 31.5 4.5 70 6 35 ± 14 11.7 ± 6.7 1.1 ± 2.0 9.6 36 11.1 14.3
16×16 31.5 4.5 30 2 32 ± 10 13.4 ± 5.7 0.8 ± 1.3 9.6 36 11.1 14.3
Table 2.
 
Low-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Table 2.
 
Low-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Grid Size (No. dots) Dot Size (arcmin) Gap Size (arcmin) Dropout (%) Gray Levels Accuracy (%) Relative Accuracy Correct Resp. Time (s) Identification Index Grid Size (deg) Dot Pitch Horizontal Cycles Per Face Vertical Cycles Per Face
32×32 31.5 4.5 30 6 68 ± 25%* 1.23 7.3 ± 4.4 7.8 ± 5.5* 19.2 36 11.1 14.3
25×25 31.5 4.5 30 6 65 ± 20%* 1.14 6.4 ± 3.3 7.7 ± 5.5* 15 36 11.1 14.3
16×16 58.5 4.5 30 6 60 ± 13%** 1.00 6.8 ± 2.9 6.1 ± 3.4* 16.8 63 6.3 8.1
16×16 31.5 4.5 30 6 60 ± 17%* 1.00 8.6 ± 3.4 4.8 ± 2.8* 9.6 36 11.1 14.3
16×16 31.5 4.5 10 6 58 ± 26%* 0.94 5.7 ± 1.2 5.6 ± 4.2* 9.6 36 11.1 14.3
16×16 31.5 4.5 30 8 53 ± 15%* 0.80 8.1 ± 2.0 3.9 ± 2.7* 9.6 36 11.1 14.3
16×16 31.5 22.5 30 6 51 ± 14%* 0.74 7.4 ± 2.8 4.2 ± 2.3* 14.4 54 7.4 9.5
16×16 49.5 4.5 30 6 46 ± 17%* 0.60 7.5 ± 4.4 3.9 ± 3.8 14.4 54 7.4 9.5
16×16 31.5 4.5 30 4 45 ± 34% 9.4 ± 4.9 1.8 ± 4.6 9.6 36 11.1 14.3
16×16 31.5 4.5 50 6 42 ± 15%* 0.49 7.2 ± 1.7 2.4 ± 2.0* 9.6 36 11.1 14.3
16×16 67.5 4.5 30 6 40 ± 4%** 0.43 6.4 ± 2.6 2.4 ± 0.9** 19.2 72 5.6 7.1
16×16 31.5 4.5 30 2 35 ± 22% 6.4 ± 3.2 1.3 ± 5.4 9.6 36 11.1 14.3
16×16 31.5 4.5 70 6 33 ± 12% 9.4 ± 4.0 1.2 ± 1.5 9.6 36 11.1 14.3
10×10 31.5 4.5 30 6 33 ± 18% 10.4 ± 3.3 0.7 ± 2.1 6 36 11.1 14.3
16×16 31.5 40.5 30 6 31 ± 18% 6.6 ± 1.6 1.5 ± 3.0 19.2 72 5.6 7.1
16×16 13.5 4.5 30 6 31 ± 10% 9.9 ± 5.4 0.0 ± 1.7 4.8 18 22.2 28.5
Figure 4.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions. Bar heights and error bars represent the mean and standard deviation across subjects.
Figure 4.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions. Bar heights and error bars represent the mean and standard deviation across subjects.
Figure 5.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions, by chronological quartiles (48 trials per subject). Data points and error bars represent the mean and standard deviation across subjects.
Figure 5.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions, by chronological quartiles (48 trials per subject). Data points and error bars represent the mean and standard deviation across subjects.
Figure 6.
 
Means and standard deviations of correct response times in chronological quartiles, for stimulus groups with highest and lowest correct response percentages, at high (A) and low (B) contrast. Note that each error bar represents the between-subjects average of individual standard deviations.
Figure 6.
 
Means and standard deviations of correct response times in chronological quartiles, for stimulus groups with highest and lowest correct response percentages, at high (A) and low (B) contrast. Note that each error bar represents the between-subjects average of individual standard deviations.
The authors thank the volunteer participants. 
Leonard, R. (2002) Statistics on Vision Impairment: A Resource Manual 5th Edition ,iv Lighthouse International New York.
Brabyn, J. (1986) Developments in electronic aids for the blind and visually impaired IEEE Eng Med Biol 4,33-37
De Volder, AG, Catalan-Ahumada, M, Robert, A, et al (1999) Changes in occipital cortex activity in early blind humans using a sensory substitution device Brain Res 826,128-134 [CrossRef] [PubMed]
Capelle, C, Trullemans, C, Arno, P, Veraart, C. (1998) A real-time experimental prototype for enhancement of vision rehabilitation using auditory substitution IEEE Trans Biomed Eng 45,1279-1293 [CrossRef] [PubMed]
Brindley, GS, Lewin, WS. (1968) The sensations produced by electrical stimulation of the visual cortex J Physiol 196,479-493 [CrossRef] [PubMed]
Dobelle, WH, Mladejovsky, MG. (1974) Phosphenes produced by electrical stimulation of human occipital cortex, and their application to the development of a prosthesis for the blind J Physiol 243,553-576 [CrossRef] [PubMed]
Pollen, DA. (1977) Responses of single neurons to electrical stimulation of the surface of the visual cortex Brain Behav Evol 14,67-86 [CrossRef] [PubMed]
Schmidt, EM, Bak, MJ, Hambrecht, FT, Kufta, CV, O’Rourke, DK, Vallabhanath, P. (1996) Feasibility of a visual prosthesis for the blind based on intracortical microstimulation of the visual cortex Brain 119,507-522 [CrossRef] [PubMed]
Normann, RA, Maynard, EM, Guillory, KS, Warren, DJ. (1996) Cortical implants for the blind IEEE Spectrum 112,54-59
Delbeke, J, Wanet-Defalque, MC, Gerard, B, Troosters, M, Michaux, G, Veraart, C. (2002) The microsystems based visual prosthesis for optic nerve stimulation Artif Organs 26,232-234 [CrossRef] [PubMed]
Humayun, MS, de Juan, E, Jr, Dagnelie, G, Greenberg, RJ, Propst, RH, Phillips, H. (1996) Visual perception elicited by electrical stimulation of retina in blind humans Arch Ophthalmol 114,40-46 [CrossRef] [PubMed]
Wyatt, J, Rizzo, JF. (1996) Ocular Implants for the blind IEEE Spectrum 112,7-53
Eckmiller, R. (1997) Learning retina implants with epiretinal contact Ophthalmic Res 29,281-289 [CrossRef] [PubMed]
Zrenner, E, Miliczek, KD, Graf, HG, et al (1997) The development of subretinal microphotodiodes for replacement of degenerated photoreceptors Ophthalmic Res 29,269-280 [CrossRef] [PubMed]
Chow, AY, Chow, VY. (1997) Subretinal electrical stimulation of the rabbit retina Neurosci Lett 225,13-16 [CrossRef] [PubMed]
Dobelle, WH, Mladejovsky, MG, Evans, JR, Roberts, TS, Girvin, JP. (1976) “Braille” reading by a blind volunteer by visual cortex stimulation Nature 259,111-112 [CrossRef] [PubMed]
Humayun, MS, de Juan, E, Jr, Weilans, JD, et al (1999) Pattern electrical stimulation of the human retina Vision Res 39,2569-2576 [CrossRef] [PubMed]
Cha, K, Horch, K, Normann, RA. (1992) Simulation of a phosphene-based visual field: visual acuity in a pixelized vision system Ann Biomed Eng 20,439-449 [CrossRef] [PubMed]
Cha, K, Horch, KW, Normann, RA, Boman, DK. (1992) Reading speed with a pixelized vision system J Opt Soc Am A 9,673-677 [CrossRef] [PubMed]
Cha, K, Horch, KW, Normann, RA. (1992) Mobility performance with a pixelized vision system Vision Res 32,1367-1372 [CrossRef] [PubMed]
Harmon, LD. (1973) The recognition of faces Sci Am 229,70-83
Bachmann, T. (1991) Identification of spatially quantised tachistoscopic images of faces: how many pixels does it take to carry identity? Eur J Cogn Psychol 3,85-103
Massof, RW. (1998) Electro-optical head-mounted low vision enhancement Pract Optom 9,214-220
Costen, NP, Parker, DM, Craw, I. (1994) Spatial content and spatial quantisation effects in face recognition Perception 23,129-146 [CrossRef] [PubMed]
Peli, E, Lee, E, Trempe, CL, Buzney, S. (1994) Image enhancement for the visually impaired: the effects of enhancement on face recognition J Opt Soc Am A 11,1929-1939 [CrossRef]
Costen, NP, Parker, DM, Craw, I. (1996) Effects of high-pass and low-pass spatial filtering on face identification Percept Psychophys 58,602-612 [CrossRef] [PubMed]
Gold, J, Bennett, PJ, Sekuler, AB. (1999) Identification of band-pass filtered letters and faces by human and ideal observers Vision Res 39,3537-3560 [CrossRef] [PubMed]
Näsänen, R. (1999) Spatial frequency bandwidth used in the recognition of facial images Vision Res 39,3824-3833 [CrossRef] [PubMed]
Humayun, MS, de Juan, E, Jr (1998) Artificial vision Eye 12,605-607 [CrossRef] [PubMed]
Dobelle, WH. (2000) Artificial vision for the blind by connecting a television camera to the visual cortex ASAIO J 46,3-9 [CrossRef] [PubMed]
Rushton, DN, Brindley, GS. (1978) Frontiers in Visual Science ,574-593 Springer-Verlag New York, NY.
Evans, JR, Gordon, J, Abramov, I, Mladejovsky, MG, Dobelle, WH. (1979) Brightness of phosphenes elicited by electrical stimulation of human visual cortex Sens Processes 3,82-94 [PubMed]
Figure 1.
 
The LVES was worn by the subjects and used as a video projector to view the computer-generated reference and test images.
Figure 1.
 
The LVES was worn by the subjects and used as a video projector to view the computer-generated reference and test images.
Figure 2.
 
Pixelized test images for one of the averted images of the white male in the bottom right-hand corner of the fourth reference set in Figure 3 . The parameters were independently varied about the standard test condition (marked S), which consisted of a 16 × 16-dot grid with 31.5-arcmin dots, 4.5-arcmin gaps, a 30% dot dropout rate, and six gray levels. This standard condition is placed in each row for comparison. All conditions were presented in both high and low contrast, during separate experimental sessions. First row: appearance of variations in grid size; second row: dot size; third row: dot gap and contrast; fourth row: dot dropout rate; fifth row: number of gray levels.
Figure 2.
 
Pixelized test images for one of the averted images of the white male in the bottom right-hand corner of the fourth reference set in Figure 3 . The parameters were independently varied about the standard test condition (marked S), which consisted of a 16 × 16-dot grid with 31.5-arcmin dots, 4.5-arcmin gaps, a 30% dot dropout rate, and six gray levels. This standard condition is placed in each row for comparison. All conditions were presented in both high and low contrast, during separate experimental sessions. First row: appearance of variations in grid size; second row: dot size; third row: dot gap and contrast; fourth row: dot dropout rate; fifth row: number of gray levels.
Figure 3.
 
Samples of reference images (left column) and 32 × 32 dot test images (right column) from one of the four models in each set. In the study, the test image was the same size as the faces in the reference image. The test image has been enlarged 50% for better illustration of the pixelization. Subjects scanned the grid of dots over the test image using the mouse-pointing device.
Figure 3.
 
Samples of reference images (left column) and 32 × 32 dot test images (right column) from one of the four models in each set. In the study, the test image was the same size as the faces in the reference image. The test image has been enlarged 50% for better illustration of the pixelization. Subjects scanned the grid of dots over the test image using the mouse-pointing device.
Figure 4.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions. Bar heights and error bars represent the mean and standard deviation across subjects.
Figure 4.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions. Bar heights and error bars represent the mean and standard deviation across subjects.
Figure 5.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions, by chronological quartiles (48 trials per subject). Data points and error bars represent the mean and standard deviation across subjects.
Figure 5.
 
Accuracy (A), correct response time (B), and II (C) averaged across all high- and low-contrast test conditions, by chronological quartiles (48 trials per subject). Data points and error bars represent the mean and standard deviation across subjects.
Figure 6.
 
Means and standard deviations of correct response times in chronological quartiles, for stimulus groups with highest and lowest correct response percentages, at high (A) and low (B) contrast. Note that each error bar represents the between-subjects average of individual standard deviations.
Figure 6.
 
Means and standard deviations of correct response times in chronological quartiles, for stimulus groups with highest and lowest correct response percentages, at high (A) and low (B) contrast. Note that each error bar represents the between-subjects average of individual standard deviations.
Table 1.
 
High-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Table 1.
 
High-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Grid Size (No. dots) Dot Size (arcmin) Gap Size (arcmin) Dropout (%) Gray Levels Accuracy, † (%) Relative Accuracy Correct Resp. Time, † (s) Identification Index, † Grid Size (deg) Dot Pitch Horizontal Cycles Per Face Vertical Cycles Per Face
25×25 31.5 4.5 30 6 92 ± 7, ** 1.52 5.7 ± 1.6 12.2 ± 2.0, ** 15 36 11.1 14.3
16×16 58.5 4.5 30 6 88 ± 8, ** 1.43 8.2 ± 2.8 8.1 ± 2.1, ** 16.8 63 6.3 8.1
16×16 31.5 4.5 10 6 85 ± 13, ** 1.36 6.3 ± 1.9 10.3 ± 3.4, ** 9.6 36 11.1 14.3
32×32 31.5 4.5 30 6 83 ± 12, ** 1.32 6.2 ± 1.9 10.0 ± 3.4, ** 19.2 36 11.1 14.3
16×16 31.5 4.5 30 8 81 ± 17, ** 1.27 8.7 ± 1.8 6.6 ± 2.3, ** 9.6 36 11.1 14.3
16×16 31.5 4.5 30 6 69 ± 17 * 1.00 8.1 ± 1.7 5.4 ± 2.0 , ** 9.6 36 11.1 14.3
16×16 49.5 4.5 30 6 63 ± 11, ** 0.86 7.5 ± 3.9 6.1 ± 3.1* 14.4 54 7.4 9.5
16×16 31.5 4.5 30 4 63 ± 11, ** 0.86 9.9 ± 4.5 4.0 ± 0.9, ** 9.6 36 11.1 14.3
10×10 31.5 4.5 30 6 60 ± 10, ** 0.80 17.0 ± 9.2 2.7 ± 1.9* 6 36 11.1 14.3
16×16 31.5 22.5 30 6 59 ± 12, ** 0.77 7.6 ± 2.6 4.9 ± 2.2* 14.4 54 7.4 9.5
16×16 31.5 4.5 50 6 56 ± 10, ** 0.70 12.1 ± 5.7 3.2 ± 1.8* 9.6 36 11.1 14.3
16×16 67.5 4.5 30 6 53 ± 10, ** 0.64 7.6 ± 4.8 4.3 ± 1.8, ** 19.2 72 5.6 7.1
16×16 13.5 4.5 30 6 52 ± 8, ** 0.61 14.5 ± 4.9 2.1 ± 0.9* 4.8 18 22.2 28.5
16×16 31.5 40.5 30 6 48 ± 8, ** 0.52 8.0 ± 5.7 3.6 ± 1.6* 19.2 72 5.6 7.1
16×16 31.5 4.5 70 6 35 ± 14 11.7 ± 6.7 1.1 ± 2.0 9.6 36 11.1 14.3
16×16 31.5 4.5 30 2 32 ± 10 13.4 ± 5.7 0.8 ± 1.3 9.6 36 11.1 14.3
Table 2.
 
Low-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Table 2.
 
Low-Contrast Grid Parameters and Test Performance Measures Ranked by Response Accuracy
Grid Size (No. dots) Dot Size (arcmin) Gap Size (arcmin) Dropout (%) Gray Levels Accuracy (%) Relative Accuracy Correct Resp. Time (s) Identification Index Grid Size (deg) Dot Pitch Horizontal Cycles Per Face Vertical Cycles Per Face
32×32 31.5 4.5 30 6 68 ± 25%* 1.23 7.3 ± 4.4 7.8 ± 5.5* 19.2 36 11.1 14.3
25×25 31.5 4.5 30 6 65 ± 20%* 1.14 6.4 ± 3.3 7.7 ± 5.5* 15 36 11.1 14.3
16×16 58.5 4.5 30 6 60 ± 13%** 1.00 6.8 ± 2.9 6.1 ± 3.4* 16.8 63 6.3 8.1
16×16 31.5 4.5 30 6 60 ± 17%* 1.00 8.6 ± 3.4 4.8 ± 2.8* 9.6 36 11.1 14.3
16×16 31.5 4.5 10 6 58 ± 26%* 0.94 5.7 ± 1.2 5.6 ± 4.2* 9.6 36 11.1 14.3
16×16 31.5 4.5 30 8 53 ± 15%* 0.80 8.1 ± 2.0 3.9 ± 2.7* 9.6 36 11.1 14.3
16×16 31.5 22.5 30 6 51 ± 14%* 0.74 7.4 ± 2.8 4.2 ± 2.3* 14.4 54 7.4 9.5
16×16 49.5 4.5 30 6 46 ± 17%* 0.60 7.5 ± 4.4 3.9 ± 3.8 14.4 54 7.4 9.5
16×16 31.5 4.5 30 4 45 ± 34% 9.4 ± 4.9 1.8 ± 4.6 9.6 36 11.1 14.3
16×16 31.5 4.5 50 6 42 ± 15%* 0.49 7.2 ± 1.7 2.4 ± 2.0* 9.6 36 11.1 14.3
16×16 67.5 4.5 30 6 40 ± 4%** 0.43 6.4 ± 2.6 2.4 ± 0.9** 19.2 72 5.6 7.1
16×16 31.5 4.5 30 2 35 ± 22% 6.4 ± 3.2 1.3 ± 5.4 9.6 36 11.1 14.3
16×16 31.5 4.5 70 6 33 ± 12% 9.4 ± 4.0 1.2 ± 1.5 9.6 36 11.1 14.3
10×10 31.5 4.5 30 6 33 ± 18% 10.4 ± 3.3 0.7 ± 2.1 6 36 11.1 14.3
16×16 31.5 40.5 30 6 31 ± 18% 6.6 ± 1.6 1.5 ± 3.0 19.2 72 5.6 7.1
16×16 13.5 4.5 30 6 31 ± 10% 9.9 ± 5.4 0.0 ± 1.7 4.8 18 22.2 28.5
×
×

This PDF is available to Subscribers Only

Sign in or purchase a subscription to access this content. ×

You must be signed into an individual account to use this feature.

×