The proposed method, in accordance with the original CxC algorithm, captures a true steady-state signal with no gaps, saturations, modulations, adaptation phenomena, or other anomalies. In addition, the sampling process is synchronized with the stimulus generation, and an integer number of cycles is collected, allowing for optimal use of Fourier analysis.19 The ideal model of a periodic signal hidden in random noise is then well matched, and a meaningful high-resolution spectrogram may be obtained from the recording. Moreover, the test–retest reliability is improved. The distinctive feature of our method is the use of a purposely designed statistical assessment toolkit that provides a comprehensive and contextual evaluation of the quality of the recording and the reliability of the results.
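As a minimal illustration of this point, the short Python sketch below (with hypothetical sampling and stimulus parameters, not those of the actual recording system) shows that when the record contains an exact integer number of stimulus cycles, the response energy falls entirely on a single Fourier bin, with essentially no leakage into neighbouring bins.

```python
import numpy as np

# Hypothetical acquisition parameters: sampling synchronized with the stimulus
# so that the record contains an exact integer number of stimulus cycles.
fs, f_stim, n_cycles = 1000.0, 8.0, 80            # illustrative values only
n_samples = int(round(n_cycles * fs / f_stim))    # 10000 samples = exactly 80 cycles
t = np.arange(n_samples) / fs
record = np.sin(2 * np.pi * f_stim * t)           # noiseless stand-in for the steady-state response

spectrum = np.abs(np.fft.rfft(record)) / n_samples
k = int(round(f_stim * n_samples / fs))           # the response falls exactly on bin k
others = np.delete(spectrum, k)
print(f"bin {k}: {spectrum[k]:.3f}   largest other bin: {others.max():.2e}")
```

With these synthetic settings the whole response appears in a single bin while every other bin stays at numerical-noise level, which is the condition assumed by the analysis that follows.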
The three tests presented are in principle strongly correlated and free of bias effects. It should therefore be possible, under given conditions, to assess their relative power and choose the one with the best performance. In less controlled conditions, the three tests frequently agree, but when they disagree they give an indication of the origin of the problem, which in some cases may be corrected.
The CxC test has the advantage of a larger sample size but may be affected by periodic interference at frequencies near the stimulus frequency, typically coming from the 50 or 60 Hz mains. In a well-designed system, the stimulation frequency and the number of averaged cycles are adjusted to obtain a full cancellation of the interference from the final average. Nonetheless, the variance of the cycle components remains affected by the interference, and the resulting confidence region is abnormally extended.
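As a rough illustration of the cancellation condition mentioned above (the frequency values and cycle counts are hypothetical, not those of the actual system), the sketch below computes the Dirichlet-kernel attenuation that coherent averaging of a given number of stimulus cycles applies to a mains-frequency sinusoid; the interference vanishes from the average only when the averaged record spans an integer number of mains periods and the mains is not a harmonic of the stimulus.

```python
import numpy as np

def mains_attenuation(f_stim, n_cycles, f_mains=50.0):
    """Attenuation applied to a mains-frequency sinusoid when n_cycles stimulus
    periods of duration 1/f_stim are coherently averaged (Dirichlet kernel).
    A value near 0 means the interference cancels from the final average."""
    r = f_mains / f_stim                      # mains periods per stimulus cycle
    den = n_cycles * np.sin(np.pi * r)
    if np.isclose(den, 0.0):                  # mains is a harmonic of the stimulus:
        return 1.0                            # averaging cannot cancel it
    return abs(np.sin(np.pi * n_cycles * r) / den)

# Hypothetical example: 8 Hz stimulus, 50 Hz mains.
# 80 cycles span an integer number of mains periods (500) -> full cancellation;
# 50 cycles do not (312.5 mains periods) -> a residual remains.
for n in (80, 50):
    print(f"{n} cycles: attenuation = {mains_attenuation(8.0, n):.3e}")
```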
The test based on partial averaging is the only part of the toolkit where traces of the averaged signal are shown, a feature retained for its utility in offering a qualitative check of the result and spotting any evident anomaly in the examination. A specific problem appears in records where the variance of the sine component is significantly greater than that of the cosine component, typically an effect of low-frequency noise. This condition conflicts with the equal-variance assumption of the T2circ test and may distort the resulting statistic.
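One simple way to flag this condition, sketched below in Python (an illustrative check, not the toolkit's actual implementation), is to compare the variances of the cosine (real) and sine (imaginary) parts of the per-cycle Fourier components with a two-sided F-test before relying on the T2circ statistic.

```python
import numpy as np
from scipy import stats

def unequal_variance_flag(components, alpha=0.05):
    """components: 1-D complex array of Fourier coefficients at the stimulus
    frequency, one per cycle (or per partial average).  Returns the variance
    ratio, the p-value of a two-sided F-test, and True when the equal-variance
    assumption behind T2circ appears to be violated."""
    cos_var = np.var(components.real, ddof=1)
    sin_var = np.var(components.imag, ddof=1)
    n = len(components)
    f_ratio = max(sin_var, cos_var) / min(sin_var, cos_var)
    p_value = min(1.0, 2.0 * stats.f.sf(f_ratio, n - 1, n - 1))
    return f_ratio, p_value, p_value < alpha
```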
The test based on the signal-to-noise ratio is affected by anomalies in the distribution of the noise spectrum that usually are not an intrinsic feature of the noise but an artifact of the Fourier analysis, the well-known problem of spectral leakage. In the present context, the problem typically becomes important when the record has a large amplitude difference (VD) between the values of the first and last sample, a condition that is usually handled with the help of a window function. This VD may be the effect of a trend line, of low-frequency noise, or of a combination of local features of the signal. In such a situation, the Fourier analysis, made without windowing, generates spurious components that generally spread all over the spectrum, starting as large ones at low frequencies and decreasing in amplitude as frequency increases. In the vicinity of the frequency of interest the disturbance generally becomes negligible, but, as the VD may be orders of magnitude greater than the response amplitude, significant spurious components may still appear. In a small frequency range near the response frequency, the amplitude of such artifactual components may be assumed to vary in a linear fashion, with the addition of true noise components having the normal random distribution. The average of these components may therefore be regarded as an unbiased estimator of the error component at the response frequency and used to correct the result. An effective correction may also be obtained by applying a detrend algorithm or a linear high-pass filter to the signal before the Fourier analysis. A window function such as the Hamming window, commonly used in signal processing applications, would also be appropriate, but this practice is not recommended in advanced RD because it further reduces the intrinsic signal-to-noise ratio, or process gain, and may impair the ability to detect the weakest responses.
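A minimal sketch of the neighbour-bin correction described above is given below (function and parameter names are illustrative, not taken from the toolkit): it estimates the spurious component at the response bin as the complex average of a few symmetric neighbouring bins and subtracts it, relying on the linear-variation assumption stated above. Alternatively, a linear detrend could be applied to the record before the FFT, for example with scipy.signal.detrend.

```python
import numpy as np

def leakage_corrected_response(record, fs, f_resp, n_side=3):
    """Fourier component at the response frequency f_resp, corrected by
    subtracting the complex average of n_side bins on each side, which under
    the linear-variation assumption is an unbiased estimate of the spurious
    (leakage) component at the response bin."""
    n = len(record)
    spectrum = np.fft.rfft(record) / n
    k = int(round(f_resp * n / fs))           # bin index of the response frequency
    neighbours = np.concatenate((spectrum[k - n_side:k],
                                 spectrum[k + 1:k + n_side + 1]))
    return spectrum[k] - neighbours.mean()
```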
It is important to point out that the magnitude responses so far obtained result from the sum of the signal and noise amplitudes. This sum is a random variate that obeys the Rice distribution,21 and it is possible to compute the relationship between the underlying signal v and the observed response, as shown in Figure 5, where both signal and response are normalized to the noise amplitude. The mean response magnitude (central curve) was obtained from the analytic formulas of the moments of the Rice distribution, and the confidence limits and the detection probability were obtained using the Marcum Q-function for the cumulative probability22 and the critical value Q = 2.02 previously described.
The chart may be used to draw several interesting considerations. First, it may be noted that the mean magnitude curve starts at a value of 1, owing to the contribution of noise, and that the initial large positive bias over the “true” value v decreases as v increases, becoming negligible for v > 2.56 (bias error <5%). At the same time, the detection probability increases, reaching near certainty (95% level) when v is greater than 3.2. Meanwhile, the spread of the measured magnitudes narrows very little in absolute terms, so that at the v = 3.2 level it is still +43.6% and −37.0% of the mean. If an accuracy of ±20% is needed, the signal-to-noise ratio must be greater than 7.
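The Python sketch below illustrates how the mean-magnitude curve and the detection probability of Figure 5 can be computed, under the assumption (consistent with the curve starting at 1, but an assumption of this sketch) that the normalizing “noise amplitude” is the mean (Rayleigh) magnitude of the noise; the mean magnitude uses the closed-form first moment of the Rice distribution and the detection probability uses the Marcum Q-function evaluated through the noncentral chi-square survival function.

```python
import numpy as np
from scipy import stats, special

RAYLEIGH_MEAN = np.sqrt(np.pi / 2.0)   # mean noise magnitude when sigma = 1 (at v = 0 the
                                       # normalized mean magnitude is therefore 1)
Q_CRIT = 2.02                          # critical value quoted in the text

def rice_mean(a):
    """Closed-form mean of a Rice(a, sigma=1) variate, written with exponentially
    scaled Bessel functions for numerical stability."""
    x = a * a / 2.0
    return RAYLEIGH_MEAN * ((1.0 + x) * special.ive(0, x / 2.0)
                            + x * special.ive(1, x / 2.0))

def response_stats(v):
    """Mean measured magnitude and detection probability for a true signal
    amplitude v, with signal, response, and critical value all normalized to
    the mean noise magnitude (an assumption of this sketch)."""
    a = v * RAYLEIGH_MEAN                          # signal amplitude in sigma units
    mean_mag = rice_mean(a) / RAYLEIGH_MEAN        # central curve of Figure 5
    b = Q_CRIT * RAYLEIGH_MEAN                     # detection threshold in sigma units
    # Marcum Q1(a, b) via the noncentral chi-square survival function
    p_detect = stats.ncx2.sf(b * b, df=2, nc=a * a)
    return mean_mag, p_detect

for v in (2.56, 3.2, 7.0):
    m, p = response_stats(v)
    print(f"v = {v:4.2f}: mean magnitude = {m:5.2f}, detection probability = {p:.3f}")
```

Under this normalization the computed bias at v = 2.56 and the detection probability around v = 3.2 appear consistent with the thresholds quoted above.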
Thanks to the noise data produced by the toolkit, it is possible to create a database useful for estimating the noise levels of different clinical or research settings. In this way the v values previously considered may be translated into actual voltages so that, using the chart in Figure 5, it is possible to forecast the detection probability and the average dispersion of the results for the cohort to be studied. This capacity may be of value for the rational planning of clinical activity and the efficient design of clinical trials.
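As a usage sketch of this translation (the amplitude and noise figures below are hypothetical), the expected response of a cohort can be divided by the noise level drawn from such a database to obtain v, and the detection probability forecast with the same Marcum Q-function computation shown above.

```python
import numpy as np
from scipy import stats

# Hypothetical planning figures for a cohort, in microvolts.
expected_response_uv = 0.9
noise_level_uv = 0.25
q_crit = 2.02

v = expected_response_uv / noise_level_uv          # normalized signal amplitude
a = v * np.sqrt(np.pi / 2.0)                       # sigma units (same assumption as above)
b = q_crit * np.sqrt(np.pi / 2.0)
p_detect = stats.ncx2.sf(b * b, df=2, nc=a * a)    # Marcum Q1(a, b)
print(f"v = {v:.1f}  ->  forecast detection probability = {p_detect:.2f}")
```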
The dedicated software application made it possible to concentrate in a single tool a range of functions usually obtained with the help of different statistical and signal processing packages. The method provided a reliable assessment of the validity of measurements from ERG recordings with amplitudes as low as 1 µV, as may be required for the study of advanced RD patients. Moreover, the method is of potential value both as an outcome variable and as a planning tool in clinical trials on the natural history and treatment of advanced RDs. Future developments may include the use of correction schemes based on the analysis of the variance of the measured magnitude.