ARVO Annual Meeting Abstract  |   June 2022
Volume 63, Issue 7
Open Access
Recording multifocal ERGs using a custom-built gaze-contingent system
Author Affiliations & Notes
  • Sara Aghajari
    New England College of Optometry, Boston, Massachusetts, United States
  • Alison Taylor
    New England College of Optometry, Boston, Massachusetts, United States
  • Peter J Bex
    Northeastern University College of Science, Boston, Massachusetts, United States
  • Fuensanta A Vera-Diaz
    New England College of Optometry, Boston, Massachusetts, United States
  • Thanasis Panorgias
    New England College of Optometry, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships   Sara Aghajari None; Alison Taylor None; Peter Bex Adaptive Sensory Technology, PerZeption Inc, Code I (Personal Financial Interest); Fuensanta Vera-Diaz Essilor International, Code C (Consultant/Contractor); Thanasis Panorgias None
  • Footnotes
    Support  NIH Grant 5R21EY031085-02
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 765 – F0417.

      Sara Aghajari, Alison Taylor, Peter J Bex, Fuensanta A Vera-Diaz, Thanasis Panorgias; Recording multifocal ERGs using a custom-built gaze-contingent system. Invest. Ophthalmol. Vis. Sci. 2022;63(7):765 – F0417.

Purpose : Recording mfERGs with commercial devices requires the subject's sustained fixation at a specific location, which is not always attainable. We used our custom-built gaze-contingent system (GCS) to test the feasibility of recording mfERGs in the presence of normal eye movements.

Methods : ERG responses were recorded in young adults (n=7) with a commercial device (Diagnosys) and with the GCS under two conditions: Fixed and gaze-contingent (GC). The stimulus consisted of a central disk (radius = 2.5°) with 5 surrounding concentric rings (inner radii = 2.5°–12.5°, outer radii = 5°–15°, step size = 2.5°), presented on an LCD monitor (refresh rate = 75 Hz). The m-sequence (m-seq) had 2^12−1 states, each lasting 39.9 ms (1 stimulus frame and 2 filler frames), and the m-seq lag between the rings was 256 states.
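A 2^12−1-state m-sequence like the one described above can be produced by a maximal-length linear feedback shift register (LFSR). The sketch below is illustrative only: the abstract does not specify the feedback polynomial, so the taps (12, 6, 4, 1), a standard primitive-polynomial choice for 12-bit LFSRs, are an assumption.

```python
def mseq(nbits=12, taps=(12, 6, 4, 1)):
    """Generate a binary m-sequence of length 2**nbits - 1 with a
    Fibonacci LFSR. Taps are 1-indexed bit positions; (12, 6, 4, 1)
    corresponds to the primitive polynomial x^12 + x^6 + x^4 + x + 1
    (an assumed choice), giving the maximal period 2**12 - 1 = 4095."""
    state = [1] * nbits              # any nonzero seed works
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])        # output the bit shifting out
        fb = 0
        for t in taps:
            fb ^= state[t - 1]       # XOR of the tapped bits
        state = [fb] + state[:-1]    # shift, feedback enters at front
    return out
```

A full period visits every nonzero register state exactly once, which is what makes the sequence balanced (2048 ones vs. 2047 zeros) and gives it the flat autocorrelation that mfERG kernel extraction relies on.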
The GCS consisted of a 1401 DAQ system and a 1902 amplifier (CED, UK), a Bits# stimulus generator (CRS, UK) alongside MATLAB and Psychtoolbox for stimulus generation, and an EyeLink eye tracker (2 kHz sampling rate; SR Research, Canada) for monitoring gaze. In Fixed mode, the stimulus and fixation point were centered on the screen, while in GC mode they were updated based on the gaze location at the beginning of the previous stimulus frame. An unfilled 2×2° square centered on the screen designated the permissible extent of gaze locations. If gaze moved beyond this area, the edges changed color to warn the subject, and the affected m-seq states were repeated. To remove blink artifacts, all states presented from 0.7 s before until 3 s after a blink detection were re-tested. The mfERG responses were computed by cross-correlating the ERG signal with the stimulus luminance time series. A photodiode at the top-left corner of the screen recorded the m-seq as rendered, based on the luminance of the central stimulus.
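The cross-correlation step can be sketched as follows. Assuming each m-seq state is coded ±1 (bright/dark) and the ERG is digitized at a fixed number of samples per state, the first-order kernel is the sign-weighted average of the ERG segments following each state onset. Function and variable names here are illustrative, not taken from the actual system.

```python
def first_order_kernel(erg, states, samples_per_state, kernel_len):
    """Estimate the first-order mfERG kernel by cross-correlating the
    ERG trace with the binary m-sequence.
    erg: list of ERG samples; states: list of 0/1 m-seq states;
    samples_per_state: ERG samples recorded per stimulus state."""
    kernel = [0.0] * kernel_len
    n = 0
    for i, s in enumerate(states):
        start = i * samples_per_state
        if start + kernel_len > len(erg):
            break                       # segment runs past the recording
        sign = 1.0 if s else -1.0       # code states as +1 / -1
        for k in range(kernel_len):
            kernel[k] += sign * erg[start + k]
        n += 1
    return [v / n for v in kernel]      # sign-weighted epoch average
```

Because the m-sequence is nearly orthogonal to its own shifts, responses to the other rings (offset by the 256-state lag noted above) average out of each ring's kernel.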

Results : A repeated measures correlation (rmcorr) analysis indicated: 1) a high correlation between the P1 latencies measured with the two modes of the GCS (r = 0.84, P < 0.001); 2) agreement between the commercial device and the GCS-GC mode in terms of P1 latencies (r = 0.35, P = 0.037); 3) no significant correlations for N1 latencies.

Conclusions : Our results suggest that making the stimulus gaze-contingent does not affect P1 timings and that our measurements are not qualitatively different from those of the commercial device. Therefore, this approach can be used with unstable fixation and to improve the spatial specificity of the mfERG technique.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.
