Why use DSLR over Point-and-Shoot Cameras, Part II
Series Introduction: The aim of this knowledge base series is to answer the question “Why use DSLR (digital single-lens reflex) cameras when smaller cameras are lighter, cheaper, and get good results?” This is a question we hear a lot, and it is worth answering. But while the question is simple to ask, the answer is complex. We’re going to provide a technical explanation, through a three-part series of articles, of why we opt for the larger DSLR cameras and why they’re better for data collection. For a look at Part I in this series, click here.
Focusing on Lens Choice
We saw in Part I of this series that bigger is, in fact, better (at least when it comes to pixel size and image noise), but overall image quality is not just a function of one or two things. The entire imaging chain must be considered in system design, with proper attention given to size, weight, and cost in the context of drones. So, let’s consider another key area of distinction between DSLR (digital single-lens reflex) and point-and-shoot cameras: the lens.
The importance of the lens is often understated. Lens choice affects the distortions present in your data, such as chromatic aberrations, which is why we must make an informed decision about lens choice. The main advantage DSLRs have over point-and-shoots is that they can be equipped with larger lenses, giving us flexibility in choosing a sensor package that minimizes distortions.
First, there’s no answer to the question of what the “best” lens is without context. We need to think about what the image produced by a given set of optics “looks like” to the feature detection and matching algorithms that generate the input for the bundle adjustment and dense matching steps in modern drone mapping software. Since most commonly employed algorithms merge the color channels into a single luminance channel (i.e., a monochrome image), wavelength-specific lens aberrations such as longitudinal and lateral chromatic aberration are important to minimize, since they contribute blur to the merged image.
Lenses designed for larger pixels, when fitted to cameras with comparatively dense pixel arrays, perform poorly in these areas because their images are sampled beyond the tolerances they were designed to meet. This effectively produces oversampled, lower-resolution images, which is a waste of bandwidth and storage space. Conversely, pairing an over-engineered lens design with a relatively sparse sensor is often a waste of money.
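One way to sketch this mismatch is to compare the sensor’s Nyquist limit (set by pixel pitch) against the lens’s resolving power. The 120 lp/mm figure below is an illustrative assumption, not a measured value for any real lens:

```python
# Sketch: does a lens's resolving power match a sensor's sampling rate?

def sensor_nyquist_lp_mm(pixel_pitch_um: float) -> float:
    """Nyquist limit in line pairs per mm: one line pair needs two pixels."""
    pixels_per_mm = 1000.0 / pixel_pitch_um
    return pixels_per_mm / 2.0

lens_resolution_lp_mm = 120.0  # assumed resolving power of an older lens

for pitch in (4.5, 2.5):       # large pixels vs a dense pixel array
    nyquist = sensor_nyquist_lp_mm(pitch)
    if lens_resolution_lp_mm < nyquist:
        verdict = "lens-limited: oversampling, wasted bandwidth"
    else:
        verdict = "sensor-limited: lens detail is being discarded"
    print(f"{pitch} um pixels -> Nyquist {nyquist:.0f} lp/mm ({verdict})")
```

Under these assumed numbers, the lens keeps up with 4.5 µm pixels but falls short of the ~200 lp/mm a 2.5 µm array can sample, so the denser sensor records extra pixels that carry no extra detail.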
Why Aberrations Happen
Why do these aberrations happen? The answer has to do with the lens’s failure to focus multiple wavelengths of light to the same point. For a full explanation, and a look at other distortions, download the technical addendum to this article.
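The effect can be sketched with the thin-lens approximation: from the lensmaker’s equation, 1/f is proportional to (n − 1), and the refractive index n varies with wavelength, so blue light comes to focus in front of red. The refractive indices below are for Schott N-BK7 glass at the standard F, d, and C spectral lines; the 50 mm focal length is an assumed example:

```python
# Sketch: longitudinal chromatic aberration of a simple, uncorrected
# thin lens. Since 1/f is proportional to (n - 1), the focal length
# shifts with wavelength as the refractive index n does.

n_d = 1.5168    # 587.6 nm (yellow-green, design wavelength), Schott N-BK7
n_F = 1.5224    # 486.1 nm (blue)
n_C = 1.5143    # 656.3 nm (red)

f_design_mm = 50.0  # assumed focal length at the d line

def focal_length(n: float) -> float:
    """f(lambda) = f_design * (n_d - 1) / (n(lambda) - 1)."""
    return f_design_mm * (n_d - 1.0) / (n - 1.0)

f_blue = focal_length(n_F)
f_red = focal_length(n_C)
print(f"blue focus: {f_blue:.3f} mm, red focus: {f_red:.3f} mm")
print(f"longitudinal spread: {abs(f_red - f_blue):.3f} mm")
```

Even this simple model yields a focus spread of the better part of a millimeter between blue and red, which is why real lens designs combine glasses of different dispersion to pull the wavelengths back together.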
The Complexities of Lens Choice
A benefit of using modular, DSLR-style cameras is that there is usually a large choice of lenses, even within the same focal length, many of which are produced to extremely high tolerances. However, it’s common for a company producing DSLRs in both APS-C and “full frame” formats to use the same lens mount, so it’s important to check which sensor format a lens is primarily intended for. Using an APS-C-targeted design on a full-frame camera will usually result in inadequate coverage of the format, because the lens projects an image circle smaller than the sensor. On the other hand, mounting a lens designed for a 35mm (full-frame) sensor with a 4.5µm pixel size on a camera with a smaller sensor having 2.5µm pixels will not incur a loss of coverage, but may bring out “defects” that were never addressed in the design because they were irrelevant on the intended format.
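The coverage question reduces to simple geometry: a lens covers a format only if its image circle diameter is at least the sensor diagonal. The sensor dimensions below are the nominal full-frame size and a typical APS-C size (exact APS-C dimensions vary by manufacturer), and the image circle diameter is an illustrative assumption:

```python
import math

# Sketch: will a lens's image circle cover a given sensor?

def diagonal_mm(width_mm: float, height_mm: float) -> float:
    """Sensor diagonal from its width and height."""
    return math.hypot(width_mm, height_mm)

full_frame = diagonal_mm(36.0, 24.0)   # ~43.3 mm
aps_c = diagonal_mm(23.6, 15.7)        # ~28.3 mm (varies by maker)

aps_c_lens_circle = 30.0               # assumed APS-C lens image circle

print(f"APS-C lens on full frame covers? {aps_c_lens_circle >= full_frame}")
print(f"APS-C lens on APS-C covers?      {aps_c_lens_circle >= aps_c}")
```

Run with these numbers, the APS-C lens covers its intended format with a small margin but leaves the corners of a full-frame sensor dark, which is the vignetting failure described above.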
In spite of the pitfalls, and somewhat at odds with realizing the benefits of large pixels, choosing the absolute largest-format sensor may not be the best solution if suitable lenses offering minimal distortion are not available. APS-C sensors fitted with full-frame lenses may bring out some poor dispersive characteristics while improving the lens’s geometric performance, since only the inner part of the image circle is sampled. Distortion measurements may prove lower with this combination if a lens design that would be distorted on a larger-format sensor is relatively “flat” for some distance from the center of the frame. In putting ourselves in the place of the algorithms that will handle the image data, we must also consider how many parameters we want the software to handle.
Thus, our goal is to navigate the complexities of lens choice by having the flexibility to choose the appropriate lens size and mount. With point-and-shoot cameras, you have little to no flexibility. DSLRs give you options when choosing lenses.
An oft-quoted maxim (from Augustine’s Laws) is “The best way to make a silk purse from a sow’s ear is to begin with a silk sow.” In this case, the fewer imperfections that must be calibrated out of the optical system, the better and more reliable the results will be. So, we should make decisions that afford us the option of minimizing imperfections from the beginning by choosing the best lens for the job.
This is the main advantage of DSLRs: interchangeable-lens design. Much like modular payload choice, an interchangeable-lens design affords us a degree of freedom not available in point-and-shoot cameras. Finally, for the most demanding applications, the availability of well-corrected prime (i.e., non-zoom) lenses makes a very strong case for DSLRs as the heart of drone mapping payloads. Within the allowable weight and size restrictions available to us, these are the “silk sows” from which we make our “silk purses”.