# What is single-slit diffraction

## Does a camera have a single-slit diffraction limit?

The resolution ("sharpness") you get from a camera depends on a number of factors, listed here roughly in order of importance (although this ordering is subjective, since any one of these factors can ruin your image):

• The degree of focus
• The quality of the lens
• The resolution of the sensor (or the grain size of the film)
• Diffraction

Of course, if you don't focus your subject properly, the picture will be out of focus. Now it turns out that focusing becomes less critical as your aperture gets smaller (i.e. a larger f-number, the ratio of focal length to aperture diameter). This is because defocus blur is essentially a convolution of the image with a scaled copy of the aperture, where the scale is set by the degree of defocus. This is best explained with a diagram:

You can learn two things from this: if your film (or sensor) is in the wrong place, the light from a point will not converge to a point; and the size of the resulting "blob" depends on the angle the cone of light from the lens makes at the film. When you photograph objects at different distances, each object wants to focus onto a different plane. If your lens aperture is small, the blobs it creates are small, and we say you get good "depth of field". With a larger aperture, the blobs are bigger and only what is exactly in focus looks sharp.
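The similar-triangles picture above can be sketched in a few lines of code. This is a minimal model, not a full depth-of-field calculator; the function name and the numbers in the example are illustrative.

```python
def blur_diameter(aperture_d, image_dist, sensor_offset):
    """Diameter of the defocus "blob" on the sensor, by similar triangles.

    aperture_d    : lens aperture diameter
    image_dist    : distance from lens to the plane of sharp focus
    sensor_offset : how far the sensor sits from that plane
    (all in the same units, e.g. mm)
    """
    # The converging cone of light is aperture_d wide at the lens and
    # shrinks to zero at the focus plane, so at a distance
    # `sensor_offset` before or past focus its width is:
    return aperture_d * abs(sensor_offset) / image_dist

# Halving the aperture halves the blob, i.e. deeper depth of field:
wide = blur_diameter(aperture_d=25.0, image_dist=50.0, sensor_offset=1.0)
narrow = blur_diameter(aperture_d=12.5, image_dist=50.0, sensor_offset=1.0)
```

This is why stopping down (raising the f-number) makes focus errors more forgiving.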

Most of the time, this will determine how sharp your image is.

However, the other factors I listed can also matter. In the case of diffraction (your question), the diffraction pattern of a circular aperture (usually a reasonable assumption) is an Airy disk, and the width of the disk is approximately

w ≈ 1.22 λf/d

where f is the focal length, d is the diameter of the aperture, and λ is the wavelength of the light.

Now f/d is called the "f-number" of the lens. For most cameras this ranges between about 2.8 and 22, although a wider range is possible. The larger the f-number, the larger the diffraction effect: if you assume a wavelength of 500 nm, the blob is approximately (f-number) × 600 nm wide.
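As a quick sanity check of that rule of thumb, here is the formula above coded up directly (the function name is mine, and it assumes the 1.22 λN approximation from the text):

```python
def airy_width_um(f_number, wavelength_nm=500.0):
    """Approximate diffraction blob width in micrometres: w = 1.22 * lambda * N."""
    return 1.22 * wavelength_nm * 1e-3 * f_number  # nm -> um

# At 500 nm, 1.22 * 500 nm = 610 nm, hence "roughly N x 600 nm":
for N in (2.8, 5.6, 22):
    print(f"f/{N}: {airy_width_um(N):.1f} um")
```

At f/2.8 the blob is under 2 µm; at f/22 it has grown past 13 µm, larger than the pixels of most sensors.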

Now there is a very nice article at this link describing the effect of pixel size on a number of imaging problems. It contains a handy table with the size of the diffraction blob for different colors as a function of the f-number:

The pixel size of a Canon 1D Mark II is 8.2 µm. So if you shoot with a lens at an f-number greater than about 5.6, your performance will be limited by diffraction rather than by pixel size. Note that cameras with smaller sensors tend to have smaller pixels (the camera in an iPhone 5 has pixels of roughly 1.5 µm). This shows that such a sensor has reached the diffraction limit: more megapixels won't help in this form factor, since you can neither decrease the f-number nor make the focal length (and thus the sensor) larger. However, once the diffraction spot spans 10 or more pixels, you can start to process the image (deconvolution filtering) to make it sharper. As you can see from the table, to optimize the result you have to "deblur" the different wavelengths (the R, G, B components of the image) with kernels of different sizes.
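The per-channel deconvolution idea can be sketched with a toy Wiener-style inverse filter. This is not a real camera pipeline: the Gaussian kernel is a crude stand-in for the Airy point-spread function, and the kernel widths and the regularization term `eps` are made-up parameters chosen only to mimic red diffracting more than blue.

```python
import numpy as np

def airy_ish_kernel(size, width):
    """Crude Gaussian stand-in for the Airy blob (not a true Airy PSF)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2.0 * width ** 2))
    return k / k.sum()  # normalise so the kernel preserves total brightness

def deconvolve(channel, kernel, eps=1e-2):
    """Deblur one colour channel by dividing out the kernel in Fourier space."""
    K = np.fft.fft2(kernel, s=channel.shape)
    C = np.fft.fft2(channel)
    # Wiener-style regularised inverse filter; eps keeps noise from exploding
    # where the kernel's frequency response is near zero.
    return np.real(np.fft.ifft2(C * np.conj(K) / (np.abs(K) ** 2 + eps)))

# Red diffracts more than blue, so each channel gets its own kernel width
# (these widths are illustrative, not taken from the table):
kernels = {"R": airy_ish_kernel(11, 2.0),
           "G": airy_ish_kernel(11, 1.7),
           "B": airy_ish_kernel(11, 1.4)}
```

Running each channel through `deconvolve` with its own kernel and recombining them is the basic shape of the wavelength-dependent deblurring the table suggests.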

I have not been able to determine whether this is actually done. Your iPhone's camera has an f-number of 2.2, so diffraction isn't all that important there.

Here are some related discussions. The "starburst" effect asked about in this question is also a diffraction effect, although in that case it is caused by the diaphragm being made of blades with straight edges. It clearly shows that diffraction is taking place, but here the geometry favors only a few directions, which produces an effect over a much greater distance (if the light source is bright enough and the pattern only goes in those selected directions, you can essentially see the Nth diffraction peak).

Afterthought:

The second factor I listed, "lens quality", is something we often forget these days because most lenses are so good. But not the lens in the human eye. I have strong astigmatism: the lens in my eye is slightly cylindrical, which means there is no single distance at which a point source focuses to a point; instead, it focuses to an elongated spot. When there is a lot of light and my pupil narrows, things are much better focused, because when less of the "imperfect" lens is used, its effect is smaller. I can simulate this by partially closing my eye: by covering a particularly bad part of the lens, I end up seeing better ... Diffraction really only comes into play when these other factors are no longer the limiting ones.