The Lumariver Depth of Field Calculator (Lumariver DoF) is an app for your mobile device that helps you maximize the sharpness of your photographs. It's primarily intended for landscape, architecture and similar photography genres where the goal is to have a sharp image front to back. Rather than being a calculator for theoretical exercises, it's designed to be a practical tool in the field.
It's available for Android and iOS (distributed via the official stores, search for "lumariver") and the user interface is optimized for one-hand operation on mobile phones, but it will scale up to tablet screen sizes too.
Here's a video with a 90-second run-through of the app's main functions (no sound). It's not intended to show how things work in detail, just to give a feel for the app:
Below is an instruction video for the app's depth of field and tilt screens. It doesn't contain more information than this written documentation, but you may prefer this way of learning. The settings screens are not covered in the video though, so you need to read this manual for that part.
Before using the app for the first time:
When the app starts for the first time it's preconfigured with a "generic camera" with a 135 full-frame sensor and a "generic zoom", with which you can set almost any focal length. However, for the smoothest user experience the idea is that you enter your own camera(s) and lenses into the app. Then you just pick the camera and lens you are currently working with instead of setting sensor size, focal length and other camera/lens settings each time. The user interface also becomes cleaner to operate, for example by only showing f-stops the chosen lens can handle.
You reach the settings by pressing the cog button at the top right corner in the main depth of field screen.
As distances are shown here and there it's best to set the distance scale unit before you do anything else. You do this via Settings → Distance Scale and the Unit entry. We will return to this menu later and look at the other distance scale settings.
You get to the list of cameras via Settings → Camera Systems. At the top of the list there are three buttons, Remove, Add and Reorder. When pressing the Reorder button you get handles on each row and you can reorder the list to your liking by dragging the entries. Press the reorder button again to hide the reordering handles. When pressing the Remove button you see a removal icon on each row and if you tap one that camera system is removed (press the Remove button again to return to normal). If you remove all camera systems a generic default camera will be added back in as the app requires at least one camera system.
When you press the Add button you get three alternatives:
When you have entered your own camera system(s) you can remove the generic one if you like.
Each camera has a full name and a shortname. The shortname is used when displaying it in the header of the main screen, and should be an abbreviation. For example, with "Canon EOS 5D mark III" as full name, "5Dmk3" could be the shortname.
Sensor size is chosen from the list, or you choose a custom size and enter width and height manually. In this app "width" is always the long edge; the sides will be reordered automatically if you enter them the other way around.
You can also optionally specify live view screen size (or actually the image size on live view) and the sensor pixel pitch. The pixel pitch is only used if you want your circle of confusion to relate to that. The live view size is only used if you have tilt lenses and want the wedge span to be shown as millimeters on live view rather than on the sensor. It's generally easier to measure on live view of course.
When tapping the "Lenses" entry you get to the list of lenses. The lens list works exactly as the camera list, with the possibility to add, remove and reorder.
As with cameras you specify both the full name, shown when selecting the lens, and a shortname, shown in screen headers. For example "Canon EF 24-70mm f/2.8L II USM" as full name could be "24-70/2.8L II" as shortname.
If it's a zoom lens you toggle that and set the shortest and longest focal length in the range, otherwise you disable zoom and set the fixed focal length.
If you want access to the tilt screen for this lens, enable "Supports tilt".
Most lenses have 1/3-stop f-stop scales, but it can also be 1/2 or even full stops, which you can specify. You might prefer to use only full stops if that is what you use in the field anyway. The f-stop division is used on the f-stop scroller, so a coarser division gives fewer entries and faster scrolling.
You also specify the lowest and highest f-stop; again, you don't necessarily have to use the full range the lens can handle if you don't want to. The app is typically used for relatively small aperture photography, so if you want a shorter scroller range you don't really need to include f/1.4 wide open even if the lens supports it.
Some lenses, in particular large format lenses, may start at an "odd" f-stop, like f/6.8, and then continue with a normal 1/3-stop scale. In this case you still have to choose a starting f-stop that fits the 1/3 scale, for example the closest one below.
You can optionally specify the near limit. If you don't specify it the standard value from the distance scale will be used, otherwise the specified near limit will be the closest distance setting on the focus scroller.
Another optional feature is to specify a minimum lens step in µm. This is only used if the "Optimal" distance scale setting is enabled. In that case you have already specified a global minimum lens step, but you can override it here with individual settings per lens if you desire.
The effective focal length of a prime lens is usually a number with several decimals, close to but not exactly the announced focal length. If you specify it, you may also want to set a specific "display focal length" that matches the model name; otherwise the focal length displayed in the main screen header will include those decimals of the effective focal length.
Finally you can optionally specify the nodal point separation. Together with a correct effective focal length, this is required if you want a 100% exact distance scale. If not specified the distance error can be say +/- 100mm (largest for tele and retrofocus lenses), which is small enough to be of no real practical consequence as this app is not intended for macro distances. However, if you have a high precision focusing ring (ALPA, Cambo etc) and want to match that distance scale exactly "for show", you should specify this; that is what this function exists for. Nodal point separation is rarely found in data sheets though, and if you have a regular lens with a regular focusing ring you shouldn't worry about this number.
If you combine your lens(es) with a teleconverter/extender you need to enter each desired combination as a separate lens entry, and multiply the focal length and adjust f-stop range accordingly.
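As a sketch of that arithmetic (the helper name is just for illustration, not part of the app): both the focal length and the f-numbers scale by the converter factor.

```python
def with_teleconverter(focal_mm, f_number, factor):
    """Lens entry values for a lens + teleconverter combination.

    Both the focal length and the f-number scale by the converter
    factor, since the aperture diameter stays the same while the
    focal length is multiplied.
    """
    return focal_mm * factor, f_number * factor
```

For example, a 100mm f/2.8 lens with a 1.4× converter should be entered as a 140mm lens starting at about f/4.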
You can choose unit between meters and decimal feet. Pixel pitch and sensor/live view measurements will still be in metric units (µm and mm).
The default scale type is "Optimal" which fills the scale with a "suitable" spacing between entries, to balance precision and entry count. It can be configured with the following parameters:
The default parameters of the "Optimal" scale are good, there's normally no need to reconfigure them.
If you use a classic rangefinder to measure distance to objects it's helpful to adapt the distance scale so it matches that on your rangefinder. A few popular rangefinders are pre-configured (for both feet and meters), but you can also enter a custom scale. You enter it as a text string with dashes between each entry and leading zeroes can be left out, like this: .5-.6-.7-.8-.9-1-1.1-1.5-1.8-2.5-5-10-20-100. A dash rather than a space is used as number separator as it's available on the smartphone's numeric keyboard.
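To illustrate the entry format (a sketch of the rules only, not the app's actual parser): each entry is a decimal number, dashes separate the entries, leading zeroes may be dropped, and the values must be increasing.

```python
def parse_scale(text):
    """Parse a dash-separated rangefinder scale string.

    Entries are decimal numbers in increasing order; leading zeroes
    may be left out, so ".5" means 0.5. Illustration of the format
    only; the app's parser may differ.
    """
    values = [float(entry) for entry in text.split("-")]
    if values != sorted(values):
        raise ValueError("scale entries must be in increasing order")
    return values
```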
You can select "1/2 steps" on the scale too; then the in-between entries are filled in, which works on most mechanical rangefinders as the midpoints are easy to pinpoint.
If you don't use a fixed scale rangefinder, leaving the scale at the default "Optimal" is recommended. The same applies if you have high precision focusing rings like on an ALPA, Cambo or Arca-Swiss technical camera.
The scale will seem coarse at long distances, but this is because at those distances very tiny lens adjustments lead to large real distance changes, and the depth of field compensates for that too. So there's no need to worry about it.
A preview of the distance scale scroller is updated live at the bottom of the screen, using the currently selected lens (the "Optimal" scale depends on the lens; note that if the selected lens has individual settings that override the global scale settings, this preview can be misleading).
Some camera brands, like ALPA, provide manual high precision focusing rings with as many as 270 entries (over a 270 degree turn). It may be desirable to match the distance scale of the app with that of the lens. In most cases, however, you use the app by setting the depth of field edges and reading off the resulting (exact) focus distance rather than the other way around, and then you don't really need to match the focus distance scale with your lens. If you still want to do it, here's how.
Those 270 entries on the ring are way too many for a scroller (it becomes very slow to operate), and most of those entries cover very close distances (with resulting ultra-short depth of field), which isn't the main field of use for the app. So we don't want an exact copy, but rather keep the long distance entries and merge the shorter distances into more reasonably spaced entries. This is taken care of automatically if you set the distance scale to "Optimal".
The remaining work to match the lens scale is to find out the effective focal length of the lens, the lens step between each scale mark, and the nodal point separation.
Even if a lens is called "120mm", its effective focal length is generally slightly different, say "123.4mm". You can generally find the effective focal length in the technical data, and it's this focal length you then should enter for the lens (you can enter a separate display focal length). If you use ALPA, there's already a bundled database with most of ALPA's lenses and their effective focal lengths, so you can simply add lenses through that.
Note that if you don't intend to match the distance scale with that of a lens, it's generally not important to use the effective focal length, as it only changes the depth of field by a few percent, but if you're a perfectionist you may still want to do it.
Then you need the minimum lens step, which can be harder to find. That is the number of µm the lens is shifted between each scale mark. If it's the same for all your lenses you can use the global "minimum lens step" setting, otherwise you need to enter it specifically for each lens that differs from the global setting.
Finally you need to have the nodal point separation, that is the distance between the front and back nodal points. Again this can be hard to find.
If you do this right the long distance entries should match (due to rounding and the exactness of the technical data, they may still differ a tiny amount). If you want to match the whole scale you need to set "fixed steps from infinity" to "near limit" and set the "minimum distance step" to zero, but that's not recommended. Try to keep the number of entries in the distance scale below about 60.
This app has an advanced circle of confusion (CoC) configuration. You can read more about it in the specific section about depth of field and circle of confusion. It's highly recommended to read that before changing the default configuration.
You set both a smaller "sharp" and a larger "soft" circle of confusion. Only the "sharp" one is used unless you enable the sharp/soft CoC buttons, which let you switch between a sharper and softer near and/or far edge to make trade-offs in difficult situations.
You set a size model and scale factor for each, and then at the bottom of the screen the actual resulting CoC sizes are listed (for the currently selected camera!), and what factors they depend on.
As the calculated depth of field rarely matches fixed scales the active values in the center row are auto-adjusted to show correct values. This means that you will see values animate and change when you adjust the scrollers.
If you tap a lock button the corresponding scroller will be locked. Only one scroller can be locked at a time. The scales of the remaining scrollers will be auto-adjusted to fit the new limited range. For example if you lock the focus distance the only way to increase/decrease depth of field is by changing f-stop, and thus the near and far limit scales are recalculated to exactly match the range provided by the f-stop scale.
The purpose of locking scrollers is to support different workflows to calculate the depth of field. Here are a few examples:
Most cameras have very coarse and/or unreliable distance scales on the lens. There are exceptions such as some types of medium format digital technical cameras, but the common case is indeed that you don't have the ability to set a specific distance on the lens. This means that you must have an object to focus at. This is generally not an issue, either you have a main subject that you want to focus at anyway, or there's a stretch of objects in depth in the scene so you can pick something out at almost any distance.
There's still the challenge of measuring the distance though. A simple mechanical rangefinder is adequate, such as a classic Leitz Fokos or others found on the second-hand market, or a Fotoman rangefinder which is still in production. There are also many electronic options, such as various laser distance meters (only the better ones work well outdoors on irregular subjects though!). A laser distance meter provides much more precision than you need, but if you work with architecture photography it can be a very practical tool.
Instead of using a rangefinder you can also simply estimate distances. This won't be very exact, but since depth of field sharpness changes very gradually it does not really require high precision; the farther away, the less important the precision is.
We recommend bringing a simple mechanical rangefinder with you and using it when needed, and estimating the rest. The more you use a rangefinder the better you become at estimating distances, so in time you will need it less often.
As the app uses scrollers, which is what makes it fast and efficient to use even with one hand, you cannot set an arbitrary focus distance; you need to pick one from the entries in the distance scale.
While this might seem imprecise and limiting at first glance it is a deliberate design choice to make the app efficient, and in practical use there's no reason to worry about the distance scale spacing. This app is intended for real use in the field, not for theoretical exercises at the desk.
(You can indeed enable "Allow manual scroller values" in the settings if you want to override the fixed entries by tapping the center entry, but it's only intended to be used in special cases.)
If you've set the scale to "Optimal" (default), the coarsest scales will be for the wide angles at a distance. However, in actuality the scale is equal in precision regardless of focus distance or focal length — the reason the spacing increases is because the depth of field increases by the same amount. Shorter focal length means larger depth of field and thus a coarser scale; it's a zero-sum game between distance scale spacing and depth of field span.
This scale is also only used to set the depth of field edges, not where you put the focus. In most scenes you focus on some specific subject, and that precision is up to you and your camera. It's generally more important to put the plane of focus at your main subject than to have millimeter precision on the depth of field edges, as they are not really "edges" but a very smooth transition towards a larger blur spot.
Finally, there's no reason to be more precise than you can measure. Thus it can be worthwhile to set the scale to match your rangefinder if you're using one, which you can do in the settings.
Note that as the scales adapt to get the exact distances in the center row you will get adapted exact focus distances if you set the depth of field using the near and far scrollers. For example if you scroll the far scroller to infinity you get the exact hyperfocal distance on the focus distance scroller.
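The hyperfocal relationship behind that behavior can be sketched with the standard thin-lens formula (an approximation; the app's internal model can also account for effective focal length and nodal point separation):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in mm: focusing here puts the far depth of
    field limit at infinity and the near limit at roughly half this
    distance. Standard approximation: H = f^2 / (N * c) + f.
    """
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm
```

For example, a 24mm lens at f/11 with a 0.015mm circle of confusion gives a hyperfocal distance of about 3.5m.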
Focus stacking can also be used in full scale scenes, for example in landscape photography, and it's for this application that Lumariver DoF's stacking functionality is designed; it's not suitable for the macro use case. The purpose of using focus stacking in landscape is to allow shooting at an ideal aperture with minimal diffraction while still getting the depth of field of a smaller aperture, or to make extreme near-far compositions that aren't possible even with the smallest aperture.
Before using focus stacking, consider the following:
If you're using a view camera where you focus using a rail, you can simply move the lens by the depth of focus distance between each shot, with some suitable overlap. The app covers this use case as it allows rounding the stacking step to even quarter, third, half or full millimeters.
When working with normal single-shot depth of field you can generally do without proper lens markings, by using a rangefinder or estimation to measure the distance to a certain object which you then focus on using live view. With focus stacking you somehow need to figure out a reliable way to turn the focusing ring a specific amount.
If your lens has appropriate markings and a large wide-range focusing ring, you can easily focus stack just using the lens markings. Note though that the depth of field markings on lenses use traditional CoC sizes. Modern lenses designed for auto-focus generally don't have that good markings on the lens barrel and can then be difficult to use for stacking. Manual lenses often have poor distance scales too, but at least a long turn. You can make your own finer scale on a piece of tape and put it on the barrel. For focus stacking you just need even spacing, and you can use the app to figure out how many divisions you need from the near limit to infinity.
Instead of making your own scale and taping it to the barrel, it may be possible to make use of the rubber grip ring on the lens, which often has a small repeating pattern. Use the app to calculate how many frames are required from the near limit to infinity at a suitable f-stop. Then check how many pattern repeats the grip ring has over that range, and divide. Make sure to have an overlap.
In this case, when you perform stacking in the field, you will probably not use the app but just use that derived fixed scale on the lens. You can then disable the stacking function in the app, which removes the stack button and gives some more space to the depth of field diagram.
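As a rough sketch of the frame-count calculation (a textbook approximation with touching depth of field zones and no overlap margin; the app's own calculation adds overlap and may differ):

```python
import math

def stack_focus_distances(hyperfocal, near_limit):
    """Focus distances covering [near_limit, infinity] with touching
    depth of field zones: H, H/3, H/5, ... Focusing at H covers
    H/2..infinity, at H/3 covers H/4..H/2, and so on.
    """
    frames = math.ceil(hyperfocal / (2 * near_limit))
    return [hyperfocal / (2 * i - 1) for i in range(1, frames + 1)]
```

With a hyperfocal distance of 10m and a near limit of 1m, this gives five frames focused at 10, 3.3, 2, 1.4 and 1.1 meters.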
When stacking you can choose to start at the far point and stack backwards towards the near point, or the other way around. It may be best to start in the end that gains most from high resolution, which usually is the far end.
Here's an example stacking workflow using the app:
To understand the tilt screen and text here you need to have some basic understanding of the Scheimpflug principle and the resulting shape of the depth of field (a wedge).
In close scenes, such as when you point the camera down and focus on a small patch on the ground, it's usually better to use the traditional tilt focusing technique than the app, that is, choose a near and far point and use the tilt and focusing wheels with focus peaking on live view or ground glass. The app is most useful in grand open scenes, where tilt focus peaking is often hard to do on live view, so it's easier to measure or estimate the distance to the ground and adapt the hinge distance accordingly.
The image here shows a screenshot of the tilt screen. The black header and footer are related to the Android phone interface and are not part of the app. The look of the app itself is the same on both iOS and Android. The red numbered labels point out the functions, which are as follows:
The concept of interconnected lockable scrollers is the same as on the depth of field screen. There's one exception though: the last lock button, which is always locked, decides whether the wedge should have the far/lower limit, plane of focus or near/upper limit locked at its current slope when the wedge grows/shrinks.
When the span is locked you might notice a very slight breathing of the wedge span in the diagram when you move the other scrollers. This is because the app assumes that the span is fully independent of focus distance to make it smoother to use, while in reality focus distance has a tiny effect. The margin is small enough to have no impact in practice.
The app supports this use case by allowing you to set the ground distance separately; you do this by tapping the ground button. This should be the vertical distance from the camera down to the ground.
If you then increase the hinge distance to a larger value than the ground distance, you will see intersections in the diagram as pictured here. The intersections for the near/upper limit and the plane of focus are shown, both as horizontal distances from the camera (2.6m for the near limit and 5.0m for the plane of focus in the image) and as projected on the sensor plane (-28mm below center for the near limit and -14mm below center for the plane of focus). By studying those intersections you can find a suitable fit for the scene at hand.
Note that the projections on the sensor are relative to the sensor center before any shift has been applied.
The "incline", "span" and "tilt hinge" scroller headings double as buttons that bring you into angle measurement modes, where the device's tilt sensor is used to measure angles.
The buttons open a corresponding modal which contains instructions on how to measure. You should carefully aim the device in the direction you want, looking with one eye along the screen while holding the device with the screen facing up. While holding still, and thus not looking at the screen, you tap anywhere on the modal and the device will vibrate to indicate that the measurement was recorded.
Starting with the plane of focus incline: the measurement should be made from the hinge line, which is often not practical (as the hinge line may be below ground level), so you need to take hinge line parallax into account. Fortunately this is easy: if the distance from where you hold the device down to the hinge line is X, you just aim the same distance X above the target. If you aim at a distant target the hinge parallax will of course be negligible and you can ignore it.
If you use a live view camera you rarely need to measure the plane of focus incline as you simply set it by focusing at the middle of the tallest feature, and thus you don't need the app to make a focus distance calculation for you.
If you don't need the correct incline you can generally ignore hinge line parallax when you measure the span, as the span will be the same regardless of parallax. However, there is one exception: if there is a closer object that appears taller when viewed from the hinge line, that object should be used as the reference instead. If you need the correct incline, so you can use the focus distance setting the calculator provides, then you must always consider the hinge parallax in the same way as for the incline measurement, that is, always aim the parallax distance above the target for each angle.
Finally, the "tilt hinge" button lets you measure a vertical distance (for ground or hinge distance) through an angle and a horizontal distance. The angle should be measured from camera height, otherwise you need to compensate for the parallax. Usually it's quite easy to estimate ground or hinge distance by relating it to your own height, so you will probably not use this function often. It is useful though if the distance is large and you have a good rangefinder for the horizontal range. You then first measure the angle to the base of the object that represents the target vertical distance, and then you enter the horizontal distance using a scroller.
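The underlying trigonometry is simple (a sketch, not the app's code): the vertical distance is the horizontal distance times the tangent of the measured downward angle.

```python
import math

def vertical_distance(horizontal, angle_degrees):
    """Vertical drop from camera height to the target base, given the
    downward angle measured from camera height and the horizontal
    distance to the target."""
    return horizontal * math.tan(math.radians(angle_degrees))
```

Aiming 10m ahead at a 10 degree downward angle, for example, corresponds to a vertical distance of about 1.8m.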
Tiltable lenses are typically also shiftable, and to render features like tree trunks and buildings upright the camera is usually kept horizontal and the composition is adjusted by shifting rather than tilting the camera.
This is expected to be the normal use case, but in some situations you may want to tilt the camera and the app supports this. To show proper information in the diagrams you must tell the app how much you have tilted the camera which you do with the camera tilt button.
It brings up a modal where you can either use the device's tilt sensor (put it on a horizontal feature of the camera, like the hot shoe, and measure) or enter the slope manually using a scroller. Negative values mean sloping forward (looking down towards the ground).
The screenshot here shows how the diagram looks after the camera has been tilted -15 degrees, that is, looking down towards the ground. The diagram keeps its horizontal orientation for clarity and instead adjusts the ground line. There's also an additional perpendicular-to-the-ground dashed brown line added, and a ground-projected-at-infinity dashed brown line in the sensor plane view.
In the screenshot the diagram has been tapped to show "gnd", that is, measurements relative to the ground. So the degrees and mm are relative to the ground.
The screenshot also shows an example where a specific ground distance has been set and the intersections are shown. The horizontal distances (2.3m and 5.4m in the image) are always measured from the vertical line from the camera down to the ground (dashed brown line), not from the tilted film plane.
If the ground has been set to match the hinge, the ground distance is auto-adjusted so it goes through the hinge line. With a camera tilted forward this means it will be a slightly smaller value than the hinge distance. The value is then shown in the ground distance button so you can see what it is.
Tilt decides the hinge distance, and in the typical wide angle shot you set it to the same as the ground distance or a little larger. F-stop decides the wedge span, so you set that to cover the tallest object in the scene from base to top. So in the basic case you do the following:
This workflow is the basic one that covers most cases.
The ground distance and intersection feature becomes useful when you shoot longer lenses, say 50mm and longer or so (135 equivalent). The idea is then to set as large a hinge distance as possible in order to gain wedge span without having to stop down excessively. You can read more about that in the description of the ground distance button.
What if the ground is sloping? The app doesn't support setting a slope for the ground, but instead you just think of the ground as the reference and set camera tilt in relation to that.
Keep an eye on the hyperfocal distance, so you don't engage in a complex tilt compromise when plain focusing would suit the scene better. As a rule of thumb tilt is useful when the ground visible in the image is close, due to a low tripod, a lens shifted down or a camera looking down. If you don't have close ground visible due to a lens shifted up for example, plain focusing usually works just as well or better. Likewise if you have tall objects close, such as in a tight forest scene with tree trunks from top to bottom in the image frame, tilt is rarely better than without.
The wedge is razor thin close to the camera, so there will often be some compromise there unless the ground is very flat. To conclude, consider the app a support for your own judgement; don't expect it to always provide the perfect no-compromise solution, as that is not possible in all cases.
What's "too much out of focus"? This is defined by the circle of confusion (CoC), that is how large blur diameter we can accept before we consider the image to be out of focus. Traditionally this blur diameter is decided from a model assuming a constant relationship between print size and viewing distance and an observer with good but not exceptional eyesight. The resolving power of the observer's eyesight is the limiting factor in this model.
This traditional model works well if you want to estimate whether the tip of the nose in a headshot portrait will look blurry when focused on the eyes. In this case the viewer will look at the whole image at once at some distance, it doesn't matter if the edge of the depth of field isn't that well defined, and you generally also want the background to be blurred, so you don't want too deep a field. However, if you shoot landscape and architecture and want everything in the image to look sharp, the traditional model uses a much too large circle of confusion to make sense. The problem is two-fold: first, this type of image is more likely to be printed large and viewed up close (breaking the traditional viewing condition), and second, the large circle of confusion leads to a preference for apertures so large that they don't contribute any meaningful sharpness increase in the plane of focus, but rather just make the out of focus areas blurrier. To this can be added that digital image sharpening techniques make it feasible to push diffraction further and thus use smaller apertures. And finally, the traditional model assumes that you won't crop your image, as it sets the CoC in relation to the image diagonal.
Many landscape and architecture photographers have identified the issues with the traditional model and use their own circle of confusion sizes in their depth of field calculations. There is no specific consensus regarding what the right size is, so you need to make up your own mind. You may actually prefer to shoot at a bit larger apertures and use the subtle out-of-focus differences to "layer" the scene, and in that case the traditional model might suit you well. The Lumariver DoF app allows you to customize the circle of confusion size so you can get exactly what you prefer.
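For reference, the traditional model typically sets the circle of confusion to the sensor diagonal divided by roughly 1500 (some sources use 1730 or other divisors); a sketch of that convention, not the app's exact configuration:

```python
import math

def traditional_coc_mm(width_mm, height_mm, divisor=1500):
    """Traditional circle of confusion: sensor diagonal / ~1500.

    Assumes the whole uncropped frame is viewed at a distance roughly
    equal to the print diagonal, by an observer with good eyesight.
    """
    return math.hypot(width_mm, height_mm) / divisor
```

For a 36×24mm full-frame sensor this gives about 0.029mm, which is why 0.03mm is the commonly quoted full-frame value.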
There are these sizing models to choose from:
It's a matter of taste though, use what you think is best. You can use the sections below and the image crops to get a feeling of what type of blurs you prefer.
Lumariver DoF uses pixel pitch in some of its circle of confusion models, but what if you shoot film? While film doesn't have pixels, you can still see the pixel pitch as the smallest resolving unit on the film. Film resolving power is usually specified in line pairs per millimeter (lp/mm); to convert to pixel pitch we can say that two pixels are required for each line pair. So a film with 80 lp/mm corresponds to 1000 / (2 × 80) = 6.25µm. This means that a 4x5 inch film sheet would resolve about 300 megapixels. While it certainly can, it requires a very high scanning resolution and the image will be rather grainy when you pixel peep.
Film is generally scanned at much lower resolution than the film can resolve to minimize visible grain, and it of course makes more sense to look at the resolution of the final product. If you scan at 2000 dpi (typical scanning resolution) the corresponding pixel pitch value is 25400 / 2000 = 12.7µm which yields about 70 megapixels from a 4x5 inch film sheet. Note that the typical flatbed scanner reduces resolution considerably due to poor optics, you generally need a high quality drum scan or other dedicated film scanner to get the resolution as advertised.
So when you configure the app with a film camera system you can still specify a pixel pitch which you then can use as a reference when you decide the circle of confusion size.
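The two conversions above can be sketched as follows (the helper names are just for illustration):

```python
def pitch_from_lpmm(lp_per_mm):
    """Equivalent pixel pitch in µm for a film resolving power given
    in line pairs per mm, counting two pixels per line pair."""
    return 1000 / (2 * lp_per_mm)

def pitch_from_dpi(dpi):
    """Pixel pitch in µm of a scan at a given resolution in dpi
    (25400 µm per inch)."""
    return 25400 / dpi
```

So 80 lp/mm film corresponds to 6.25µm, and a 2000 dpi scan to 12.7µm.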
The image below shows the diffraction blur in the plane of focus at various airy disc diameters (relative to pixel pitch). With the increased popularity of pixel peeping and of image sensors without anti-alias filters (as used in the crops here), it has become common to shoot at fairly large apertures to get that knife-sharp crispness when you view the image at 100%. We don't recommend this though: knife-sharp pixels mean aliased pixels (jaggies and false detail), so for true image quality it's generally better to shoot with a slightly smaller aperture and let diffraction soften the image a little at the pixel-peep level.
From the crops we can conclude that when the airy disc diameter is about 2× the pixel pitch or below, blurring is negligible. However, there's also some aliasing, which would be even more visible if the subjects had narrower details. If the sensor lacks an anti-alias filter, as here, a suitable shooting aperture is one that gives an airy disc diameter of about 3× the pixel pitch. You can calculate this as follows: 3 × PixelPitch × 0.75, with the pixel pitch in micrometers. So with a pixel pitch of 4.5 µm a suitable shooting aperture would be 3 × 4.5 × 0.75 = f/10.125, that is about f/10; rounding up to the full stop of f/11 should be okay too. Many recommend a slightly larger shooting aperture (= smaller f-number) than this, and there is no consensus on what is optimal. You can make some test shots with your own camera and draw your own conclusion.
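The aperture rule above can be sketched as follows. The 0.75 factor is an assumption on our part, consistent with the airy disc diameter being roughly 1.34 × N micrometers for green light (~550 nm), since d = 2.44 × λ × N:

```python
# Suitable shooting aperture from pixel pitch, per the 3x rule above.
# The 0.75 factor assumes an airy disc diameter of ~1.34 * N micrometers
# (green light, ~550 nm), so N = diameter / 1.34 = diameter * ~0.75.

def suitable_aperture(pixel_pitch_um, multiplier=3):
    """f-number whose airy disc is `multiplier` times the pixel pitch."""
    return multiplier * pixel_pitch_um * 0.75

print(suitable_aperture(4.5))  # 10.125 -> shoot at about f/10
```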
As the image below shows, the diffraction blur at the plane of focus is about equal to the circle of confusion blur when the circle of confusion diameter is half the airy disc diameter. This means it's not worthwhile to use a circle of confusion diameter smaller than half the airy disc diameter, as the depth of field limits would then be required to be sharper than the plane of focus itself.
Note that the images with circle of confusion blur use a large aperture (negligible diffraction). When significant circle of confusion blur and diffraction act at the same time there is some further blurring, so the comparison doesn't show the whole truth, but it serves well as a rough guide.
The image below shows various circle of confusion blurs relative to the pixel pitch. You can use this image to get an idea of where you want to put your depth of field limit.
Does it make sense to view the circle of confusion blur at 100% pixel view as in this example? Not with the traditional meaning of depth of field, where the reference is the human eye's resolving power at a certain viewing distance. However, if you use a depth of field calculator to help you make images as sharp as possible regardless of viewing distance (large panoramic prints are often viewed up close), and if you want to make the best possible use of a high resolution camera, then viewing the circle of confusion blur at 100% and comparing it to the sharpness in the plane of focus is the right thing to do.
If you choose a too large circle of confusion, the depth of field calculations will suggest shooting at unnecessarily large apertures, making the plane of focus sharper than the camera can resolve and the depth of field limits very visibly blurrier than the plane of focus when viewing the print up close.
If you choose a too small circle of confusion, you will get such shallow depth of field that you will be tempted to stop down to very small apertures, and then the plane of focus may actually end up blurrier, due to diffraction, than the circle of confusion blur represents.
By using the airy disc and pixel pitch models in Lumariver DoF you can avoid these pitfalls.
Lumariver DoF has a unique feature in that it allows specifying both a "sharp" (= small) and a "soft" (= larger) circle of confusion. If activated, you can then switch between the two on the fly. The idea is that if you run into an "impossible" scene where you can't get enough depth of field, you switch to the softer mode, which allows the edges to be a bit more out of focus. An elegant feature is that you can choose to switch only the near or the far edge if you like.
For example, if you have detailed structures at infinity and larger structures up close you can choose to have a sharp far edge and a soft near edge. Or if atmospheric conditions are limiting resolution at infinity anyway, you could choose the other way around.
As an example, this function can help you make decisions to shoot at say f/11 with a slight blur increase at one or both edges instead of shooting at f/16 when you think it's more important to avoid a high amount of diffraction (or long shutter speed) rather than having the depth of field edges as sharp as the focal plane.
This can also be used in the tilt screen. One strategy in a difficult situation can be to have the top part of the wedge (near edge) softer as it will be higher up in the image and likely further away from the viewer.
Note that when you use an asymmetric depth of field, softer in one edge than the other, some of the standard truths about depth of field no longer hold, such as the near edge being at half the hyperfocal distance. The app will show you the appropriate numbers at all times though.
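To illustrate the "near edge at half the hyperfocal distance" rule for the standard symmetric case, here is a sketch using common approximate thin-lens depth of field formulas; the 35 mm / f/11 / 15 µm numbers are example values of our own, not from the app:

```python
# Standard (symmetric) depth-of-field relations. Distances in mm,
# circle of confusion c in mm. Approximate thin-lens formulas.

def hyperfocal(f, N, c):
    """Hyperfocal distance for focal length f, f-number N, CoC c."""
    return f * f / (N * c) + f

def near_limit(s, H, f):
    """Near depth-of-field limit when focused at distance s."""
    return s * (H - f) / (H + s - 2 * f)

def far_limit(s, H, f):
    """Far depth-of-field limit; infinite at or beyond the hyperfocal."""
    return s * (H - f) / (H - s) if s < H else float("inf")

f, N, c = 35.0, 11.0, 0.015        # 35 mm lens at f/11, 15 um CoC
H = hyperfocal(f, N, c)
print(round(H / 1000, 2))              # ~7.46 m hyperfocal distance
print(round(near_limit(H, H, f) / H, 6))  # 0.5: near edge at half of H
print(far_limit(H, H, f))              # inf: far edge at infinity
```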
The default setting in Lumariver DoF for the circle of confusion sizes is to use the airy disc model only, as that makes it independent from pixel pitch (which is not set in the default "generic" camera). The default is a good setting, but if you can provide the pixel pitch we instead recommend the following:
The goal of this configuration is that the depth of field limits should be almost exactly as sharp as the plane of focus regardless of viewing distance (even at pixel peep). It may seem that this would lead to unreasonably shallow depth of field, but that is not the case. The resulting depth of field is very workable in the field and you can shoot your lenses at reasonable apertures.
The airy disc part says that the circle of confusion should be 0.5 × airy disc, which means that diffraction blur and circle of confusion blur are about the same. This gives the intended effect for very small apertures, when diffraction blur is clearly visible in the plane of focus. However, for typical shooting apertures the diffraction blur isn't as visible, and then 0.5 × airy disc alone would make the depth of field unnecessarily shallow. Therefore we also set a 2.5 × pixel pitch size, which for typical apertures will be larger than 0.5 × airy disc and thus be the active circle of confusion.
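The interaction of the two sizes can be sketched as taking the larger of the two at each aperture. The 1.34 × N micrometer airy disc diameter is an assumed approximation for green light (~550 nm), and the 4.5 µm pitch is just an example value:

```python
# Effective circle of confusion under the recommended dual model:
# CoC = max(0.5 * airy_disc(N), 2.5 * pixel_pitch).
# Assumes airy disc diameter ~1.34 * N micrometers (green light).

def airy_disc_um(n):
    """Approximate airy disc diameter (um) at f-number n."""
    return 1.34 * n

def effective_coc_um(n, pitch_um):
    """Active circle of confusion: the larger of the two models."""
    return max(0.5 * airy_disc_um(n), 2.5 * pitch_um)

pitch = 4.5  # um, example pixel pitch
for n in (5.6, 8, 11, 16, 22):
    print(f"f/{n}: CoC = {effective_coc_um(n, pitch):.2f} um")
# At typical apertures the 2.5 x pitch term (11.25 um here) dominates;
# only around f/17 and smaller does the 0.5 x airy term take over.
```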
A suitable soft circle of confusion related to this is the following: