Spatial frequency analysis was performed in MATLAB (The MathWorks, Inc., Natick, MA, USA) with custom software. Source images were either in JPEG format (Berkeley dataset) or RAW camera images (Texas and UPenn datasets). All images were converted from the manufacturer-defined color space of each camera system (iPhone, Nikon D70 digital SLR, and Nikon D700 digital SLR) into CIE (Commission Internationale de l'Éclairage) LMS color space, whose planes correspond to the long-, medium-, and short-wavelength-sensitive cones. This transformation allowed creation of images weighted according to the human photopic luminosity function. Gamma correction, where present, was reversed to yield linear-intensity image files for analysis. After these steps, all source images were subjected to identical preprocessing. Images were first cropped to a square format and resized to 1024 × 1024 pixels for computational efficiency. The field of view of each camera/lens combination, adjusted for the square image format, was used to convert spatial frequency into cycles/degree of visual angle. A fast Fourier transform (FFT) was applied to each plane of the LMS images to generate a two-dimensional amplitude spectrum. The spectrum was then radially averaged to yield a single one-dimensional amplitude spectrum representing the mean across all orientations within the image. Linear regression was used to calculate the slope of the relationship between log amplitude and log spatial frequency.
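The core of the pipeline (FFT, radial averaging, and log-log slope fitting) can be sketched as follows. This is a minimal illustrative reimplementation in Python/NumPy, not the authors' MATLAB code; the function name `amplitude_slope` and the `fov_deg` parameter are our own, and the input is assumed to be one square, linear-intensity image plane (e.g. a single LMS channel) after the cropping, resizing, and gamma-linearization steps described above.

```python
import numpy as np

def amplitude_slope(image, fov_deg=None):
    """Radially averaged FFT amplitude spectrum and its log-log slope.

    Illustrative sketch of the analysis described in the text.
    `image` is assumed to be a square 2-D array of linear intensities
    (one plane of an LMS image, already cropped/resized).
    """
    n = image.shape[0]
    assert image.shape == (n, n), "expects a square image plane"

    # 2-D FFT amplitude spectrum, zero frequency shifted to the centre.
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))

    # Radial distance of each spectral sample from the centre (DC),
    # in units of cycles per image.
    fy, fx = np.indices((n, n)) - n // 2
    r = np.sqrt(fx**2 + fy**2)

    # Radial average: mean amplitude within integer-frequency annuli,
    # collapsing all orientations into a single 1-D spectrum.
    bins = r.astype(int).ravel()
    radial = np.bincount(bins, weights=amp.ravel()) / np.bincount(bins)

    # Keep frequencies from 1 cycle/image up to the Nyquist limit (n/2).
    freqs = np.arange(1, n // 2).astype(float)
    radial = radial[1:n // 2]

    # Convert cycles/image to cycles/degree if a field of view is given.
    if fov_deg is not None:
        freqs = freqs / fov_deg

    # Least-squares slope of log amplitude vs. log spatial frequency.
    slope, _intercept = np.polyfit(np.log10(freqs), np.log10(radial), 1)
    return slope, freqs, radial
```

As a sanity check, a white-noise image has a flat amplitude spectrum and should yield a slope near zero, whereas synthetic 1/f noise should yield a slope near −1; natural images typically fall close to the latter.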