A modern digital camera provides many different settings to play with, depending on the lens you use with it. You can adjust the f-number, focal length, exposure time and many other settings. One of them is ISO. Of all the settings a camera provides, it is the most misunderstood.
Why is that? To understand what ISO is, what it is not and how you can use it to get the best possible results for Astrophotography, we have to dive into a small history lesson first. After that we will cover the technical aspects of modern digital cameras, what ISO is and isn’t, and how I stopped worrying and learned to love the gain.
(ISO in film)
To understand where many myths of ISO are coming from, we have to explore ISO in film first. To give you a short summary: ISO in film describes the sensitivity of the film used. A film with a smaller ISO, hence a slow film, needs longer exposures to be properly exposed and provide a picture that is not too dark. Reversed, this means films with a higher ISO are far more sensitive to light and require only short exposures, hence they are called fast films.
Officially the commonly used ISO in film ranged from as slow as 0.8 up to as fast as 10000. Doubling the value of ISO essentially doubled the sensitivity of the film. A film with ISO 200 only needed exposures that were half as long as with ISO 100 to get a properly illuminated picture. So far this sounds pretty similar to your digital ISO knowledge, does it not? Except it isn’t. More on that later.
Increasing the ISO in film, shortening the exposure time and increasing the sensitivity of the film all had adverse side effects, most notably the well-known film grain. But where does this come from? Film typically produces a negative of the image that you capture. The more substrate is “removed” (by shining light on it), the more light can shine through the negative. This results in a brighter spot on the developed image. More sensitive film utilized larger grain spots, which decreased the resolution and was one reason for a grainier effect.
The substrate of typical film itself also had a Quantum Efficiency, which was typically around 10%. That means that just 10% of the photons that hit the film were actually absorbed by the substrate. This was also the reason that high ISO meant larger grains. Since the QE was solely determined by the material of the grain used, the only option to capture more light was to increase grain size, at the cost of resolution.
So what is the takeaway here? Higher ISO meant lower resolution and higher visual noise in film. Both of those attributes are limited to the physical attributes of the substrate of the film itself. Now how does this apply to digital? I will tell you in the next chapters, but to spoil it a bit: it doesn’t at all.
Sensing down to the metal
(Basic Sensor Functionality)
You hopefully now understand what ISO meant in ancient times and how it affected images. ISO was always a property of the film used and had nothing to do with the camera body at all. All of this changed with digital cameras, since film was abandoned for pure silicon sensor technology. Sensors come, even nowadays, in two varieties: the older CCD and the more modern CMOS archetypes. To understand ISO, you have to understand the functionality of the sensor as well.
Both sensor types have their advantages and disadvantages, but I don’t want to go too deep into which sensor type is better. Generally, CMOS is newer and more relevant for most of you, so I will focus on it. Let’s dive down to the metal a bit to understand roughly how a CMOS sensor works, since it is the technical equivalent of analog film. Maybe along the way you will already see how nothing that applied to film applies to modern sensor technology anymore.
CMOS sensors are commonly used in modern DSLR bodies. They are also becoming highly popular in the Astrophotography realm. They come in various sizes, the typical ones being 35mm full frame and so-called “crop” APS-C sensors. Crops also come in two sizes, most notably with a crop factor of 1.6 or 1.5. For Astrophotography many CMOS sensors are even smaller than APS-C, with 1″ being a common sensor size.
Now let’s get down to how a typical CMOS sensor actually works. In essence, a sensor consists of millions of tiny transistors, each paired with a photo diode that can hold a specific amount of energy. This is the so-called full well capacity. Each transistor/photo diode pair represents a single pixel on the final image, carrying its full lightness information. From here on I will refer to these pairs as pixels.
The well of each pixel captures photons, converts them into electrons, and thus electric charge, and stores it. When the image capture is complete, each transistor reads out the energy from its own photo diode well and transmits it to the sensor’s ADC (analog-to-digital converter). From there the data goes through the post-processing of the camera body and results in the data that you receive on the SD card or your computer.
Note: since every photo diode captures just pure photons and doesn’t discriminate based on the light captured, each pixel of a “color” camera has a small interference filter in front of it. This limits the capture of photons to a specific bandwidth of light wavelengths. This is called the Color Filter Array (CFA), the most commonly used CFA is the Bayer matrix. This applies to almost all modern consumer cameras.
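To make the CFA idea concrete, here is a minimal Python sketch of the standard 2x2 RGGB Bayer pattern tiled across the sensor. Treat the layout as illustrative: the pattern origin and orientation vary between manufacturers.

```python
# A sketch of the 2x2 RGGB Bayer pattern repeated across a (hypothetical) 4x4 sensor.
def bayer_filter(row, col):
    """Return which color filter sits over the pixel at (row, col) for an RGGB Bayer CFA."""
    pattern = [["R", "G"],
               ["G", "B"]]
    return pattern[row % 2][col % 2]

for r in range(4):
    print(" ".join(bayer_filter(r, c) for c in range(4)))
# Prints:
# R G R G
# G B G B
# R G R G
# G B G B
```

Note that every pixel still only records a single brightness value; the missing two color channels per pixel are interpolated later during demosaicing.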
Let’s have a look at a cross section of a small portion of two sensor types, front illuminated and back illuminated:
I will focus on the front-illuminated sensor for now. As you can see, the sensor is made of multiple layers. On top we have the microlenses, which gather and focus light onto the light-receiving surface of the transistor, reducing light scattering. Below that we typically have the color filter array. Then comes the wiring that connects the individual transistors to the outside of the sensor and transports the data they gathered for further processing.
Finally we have the photo diode, which gathers the photons and converts them into an electrical charge. Every pixel, and thus every well, holds a specific amount of electrons, which is converted into a voltage. This capacity is limited by the well size and is called the full well capacity. There is a typical correlation between pixel size and well capacity: the smaller a single pixel gets, the smaller its well gets too.
“But darkarchon”, you say, “I just wanted to learn about ISO, now you’re talking about how a sensor works, please just tell me what to do?”. Well, what can I say: understanding how a sensor works on a technical level is important for fully understanding how ISO is applied to your images.
The microlenses, the overall structure of the sensor, the size of the metal wiring beneath and the pixel size all determine the sensor’s Quantum Efficiency (see where this is going?), its actual ability to gather light. Typical peak QE for front-illuminated sensors ranges between 30% and 55%. But what does that mean? In essence, 30% to 55% of the photons that fall on any given pixel are converted into electrons, and thus are used for the image. Compare that to the typical 10% of film and you can see how much more light even the cheapest digital sensors gather. Back-illuminated sensors go even further, with peak QE of around 85%. Those are mostly found in smartphones and higher-end CMOS sensors (most notably made by Sony).
Let’s sum this up real quick. Digital sensors work fundamentally differently and are technically far more complex than film. They are fixed entities with fixed attributes which are literally burned into metal. Quantum Efficiency, pixel size and full well capacity, microlenses, color filter arrays, internal structure: everything was set in stone when the sensor was created. Nothing in software can change those attributes. Except one, maybe? (No)
| Attribute | Digital | Film |
| --- | --- | --- |
| Resolution | Fixed due to the amount of pixels | Depends on ISO granularity |
| Sensitivity | Fixed, depends on the sensor structure | Substrate has a fixed QE, larger grains lead to more sensitivity |
| Visual Noise | Noise from electronics and stray photons | Grain size depending on ISO, stray photons |
| Full Well Capacity | Fixed, based on photo diode size of each pixel | Depends on ISO |
Computers can only count to 1
(The role of the ADC)
ISO in digital. What is it even? As you have seen, the sensitivity of any given sensor is set in metal by the production process. Unlike with film, ISO cannot change the sensitivity or resolution of the sensor itself. So what does it do?
Every pixel on the camera has its own small amplifier, which is controlled by the ISO setting. In Astrophotography ISO is more commonly referred to as, and aptly named, Gain. The amplifier amplifies the voltage read out from every pixel to a higher voltage, thus technically increasing the brightness of any given signal. This data is sent to the ADC and converted into digital values. Once the amplification is set in software, it is applied to the read-out image and cannot be changed afterwards.
“Wait a minute,” you might say. “Voltage? Amplification? ADC? Will you please stop with all this technical nonsense?!” No. I can’t, since ISO is a pure post-processing effect of the chip itself. This is why we have to dive deeper into full well capacities and the ADC of the sensor.
First, we have to take a look at the ADC of a sensor. Let’s take for example the ADC of a typical DSLR. An ADC converts incoming analog voltage into digital values, which are then passed on to the rest of the camera. The analog voltages passed to the ADC are the voltages actually stored in the photo diode of each single pixel. The ADC takes a fixed range of voltages and converts it into the appropriate values. The resulting values are digital values limited by the bit depth the device provides. Typical for a DSLR is a bit depth of 14 bit.
In essence that means that a typical DSLR ADC can convert a given voltage range into 16384 (2^14) distinct values. Those values are known as ADU (Analog-to-Digital Units). With that knowledge in mind, let’s dive into the full well capacity of a given sensor. Each pixel well can only hold a specific amount of electrons, determined by the pixel size and photo diode depth.
For the sake of simplification and to calculate with an example, let’s make a few assumptions for an imaginary sensor first. This sensor has the following attributes:
- Pixel full well capacity of around 40000 electrons.
- The ADC can detect voltages of 0.01 V up to 0.1 V
- The ADC is 14 bit, thus maps the 2^14 digital values on that range. 1 electron in a single pixel corresponds to 0.01 V and 40000 electrons correspond to 0.1 V.
- We don’t do any amplification of the data.
With those values in mind, let’s continue with that thought experiment.
Okay, now I have thrown a lot of numbers at you – “BUT WHAT DO THEY MEAN?” you might cry out in vain, just trying to learn what ISO to use for Astrophotography. Stay with me, it will all come together and make sense very soon.
Given those values we can make some easy calculations. 40000 electrons in a single pixel will make that pixel white. 0 electrons in a single pixel will make that pixel black. Anything in between results in a different shade of grey. This is why sensors are said to do additive capture. Makes sense so far, no?
The ADC can convert the 40000 electrons to only 16384 values. Do you see the discrepancy here? That’s right: multiple captured electrons will be assigned to a single ADU value. This is the so-called e/ADU, or, interestingly, also often named gain. To calculate the e/ADU you simply divide the full well capacity by the ADC capacity. Simple, right? In this case it’s just 40000/16384, which results in 2.441 e/ADU.
The ramifications are immediately clear. To fill one ADU value of the ADC we need 2.441 electrons! In this case the ADC is undersized for resolving the full well capacity of each pixel; it is under-sampling. There are also sensors where it is the other way round, an oversized ADC for the provided full well depth, which results in over-sampling. At this point you might start to actually understand where ISO, or amplification, comes into play.
The takeaway from this section is simple. Each sensor’s range resolution is limited by the well capacity of a single pixel as well as by the ADC employed. Sensors with larger full wells ideally have larger ADCs so each pixel can be fully resolved into a digital value.
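As a quick sanity check, the e/ADU arithmetic is easy to reproduce in a few lines of Python, using the imaginary 40000-electron, 14-bit sensor from the example:

```python
# e/ADU of the imaginary sensor from the text:
# a 40000 e- full well read out by a 14-bit ADC.
full_well = 40000        # electrons a single pixel well can hold
adc_levels = 2 ** 14     # 16384 distinct ADU values

gain = full_well / adc_levels  # electrons per ADU
print(f"{gain:.3f} e/ADU")     # -> 2.441 e/ADU, i.e. under-sampling
```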
Amplifying amplification amplifiers
(ISO in theory)
I’ve stated before that ISO in digital photography is in essence the control of an amplifier. Each pixel in a CMOS sensor holds its own amplifier. The amplifier amplifies the analog voltage that is captured by the full well and transfers it to the ADC, which then converts it to digital values. With that knowledge at hand we can dive head-on into amplifying our analog data. We still have our imaginary sensor with a 40000 electron full well capacity and a 14 bit ADC which reads out voltages from 0.01 to 0.1 V. Now what would the amplification do?
A fixed rule is that the voltage range read by the ADC cannot be changed. It is baked into the sensor and its transistors and is non-adjustable after the silicon sets. So however much we amplify our analog values, once they surpass 0.1 V they will become white in the final image.
Let’s do some simple math and assume we amplify the read-out voltage by a factor of 2. Now 1 electron corresponds to 0.02 V and 40000 electrons correspond to 0.2 V. The ADC still has a static cut-off at 0.1 V. The result: half of the 40000 electrons, practically anything above 20000 captured electrons, will become pure white and blown out in the image!
All we have done at this point is effectively halve the accessible full well capacity. This is essentially what your sensor does every time you double the ISO! It halves the accessible full well capacity for each full ISO step (100 -> 200 -> 400 -> 800 -> …).
This of course also changes the e/ADU value that your ADC converts at. With only 20000 electrons mapped to 2^14 values we get 1.22 e/ADU. Which is not a bad thing at this point, though! We get closer to the so-called Unity Gain. Unity Gain, simplified, means that 1 electron is mapped to exactly 1 ADU. With a 14 bit ADC the Unity Gain would be at an accessible full well capacity of exactly 16384 electrons. At this point of amplification the ADC of your camera provides exactly enough bit depth to neither over-sample nor under-sample the electrons read out from the sensor.
The Unity Gain of our specific sensor sits at an awkward position because it falls between full ISO stops. You might think that intermediate stops would be a good idea to use here, but they really aren’t. Intermediate ISO stops utilize the next higher or lower full ISO stop and apply some extra stretching of the data on top. Nothing is gained by utilizing intermediate stops.
It is also worth noting that changing the amplification moves our imaginary sensor from under-sampling (at ISO 200) to over-sampling (at ISO 400). It cannot provide an ISO that gives a true Unity Gain.
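The halving effect and the hunt for Unity Gain can be sketched numerically. This little Python snippet just repeats the arithmetic above for the imaginary 40000-electron, 14-bit sensor:

```python
full_well = 40000     # electrons, imaginary sensor
adc_levels = 2 ** 14  # 16384 ADU values

def accessible_well(amplification):
    """Accessible full well after analog gain; the ADC clips everything above its fixed ceiling."""
    return full_well / amplification

for amp, label in [(1, "base ISO"), (2, "+1 stop"), (4, "+2 stops")]:
    well = accessible_well(amp)
    print(f"{label}: {well:.0f} e- accessible, {well / adc_levels:.2f} e/ADU")

# Unity Gain (1 e/ADU) would need the accessible well to equal the ADC range:
print(f"Unity Gain at {full_well / adc_levels:.2f}x amplification")  # ~2.44x, between full stops
```

Each doubling of the amplification halves the accessible well and halves the e/ADU, which is exactly the per-stop behavior described above.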
From a detailed technical perspective, from the sensor technology down to the amplification, ISO is actually pretty simple to understand when you think about it. So where do those misconceptions of increasing ISO leading to more noise come from? Why does everyone correlate a high ISO with lots of noise? There are multiple reasons for that; let me explain why.
There’s an image in my noise… somewhere
(Noise in images)
Okay, so far I have given you a very simplified and theoretical view of ISO in sensors. In an ideal world this is how every sensor would perform when adjusting the ISO. If the only change were the adjustment of the accessible full well capacity, with no further ramifications, that would be really great. Sadly, that’s rarely the case in the real world.
Highly invariant variance variability
(Read noise in ISO variant and invariant sensors)
“But everyone is telling me that my ISO is too high! My images are full of noise! If I lowered my ISO, surely I would get less noise in my images!” you might say at this point. And this is something I would strongly disagree with.
High ISO in real world examples can reduce noise at the cost of dynamic range. There. I said it. Let that sink in for a bit. High ISO can reduce the visual noise in your image. Especially in the field of Astrophotography that is often the case. Why?
The main reason for that is the so called read noise. Read noise is introduced by the electronics themselves and, depending on signal strength of the data at hand, can swamp the data itself. With low signal targets like dark skies, this can often be the case. If you don’t expose for long enough to swamp the read noise itself with data, all you get will be randomized noise introduced by the electronics.
Canon cameras especially are notoriously bad in regard to low-ISO read noise. They actually scale the inverse way from what you would expect: at low ISO they have extremely high read noise, while at high ISO they have low read noise. At some ISO level they typically start to behave linearly, with the read noise not decreasing further. Why is it that way? Well, you will have to ask Canon. Sensors that behave this way are called ISO-variant sensors.
On the other hand there are sensors that do not behave like that at all. Their read noise stays approximately the same throughout the whole ISO range. For them, ISO does not matter for the exposure at all. You could dramatically underexpose (expose to the left) and still keep the read noise at a very low level throughout the whole range of amplification. Those sensors are called ISO-invariant sensors. This ISO invariance is typically found in modern CMOS sensors, most of them made by Sony (Nikon and other manufacturers employ Sony sensors in their cameras).
You might ask yourself how to find out if your camera is ISO variant or invariant. You can find some information about different camera bodies on http://sensorgen.info/. If your camera is not available there, there is an easy way to see how your camera performs throughout various ISO levels.
For that you need to set a relatively high ISO, like 6400, and expose “correctly”, using for example the Auto mode on your camera. Note down the aperture and shutter speed used. Take the same image with the same aperture and shutter speed in manual mode, but reduce the ISO one full stop at a time. Increase the brightness of all lower-ISO images to get approximately the same histogram as the highest ISO and compare the noise characteristics. If they are roughly similar it won’t matter at what ISO you shoot; your camera is ISO-invariant.
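The comparison can also be simulated. The following Python sketch (numpy required) generates synthetic frames of a faint sky at different ISOs, scales them to the same brightness and prints the resulting noise. The constant read noise across ISO is the assumption that makes this toy sensor ISO-invariant; the signal and noise levels are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
reference_iso = 6400
signal_e = 50.0       # faint sky signal, in electrons per pixel (assumed)
read_noise_e = 3.0    # read noise in electrons, assumed constant across ISO (ISO-invariant)

for iso in (800, 1600, 3200, 6400):
    gain = iso / 100.0                      # toy amplification: ADU per electron
    electrons = rng.poisson(signal_e, 100_000) + rng.normal(0.0, read_noise_e, 100_000)
    frame = gain * electrons                # what the ADC would record
    scaled = frame * (reference_iso / iso)  # brighten to match the reference histogram
    print(f"ISO {iso:4d}: noise = {scaled.std():.1f} ADU after scaling")

# The noise figures come out roughly equal: with constant read noise, the ISO
# chosen at capture time barely matters once brightness is equalized.
```

Repeating the simulation with a read noise that shrinks as ISO rises (the ISO-variant case) would instead show the low-ISO frames coming out clearly noisier after scaling.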
If I were blind would there be noise?
(Sources of noise)
“Okay”, you say. “High ISO might actually not be that bad. My images are still noisy. What now?” Well, if read noise were the only source of noise, we would really have an easy time.
To define noise, we have to define light. Or maybe we don’t actually have to define it. But the absence of it, that is basically noise. It’s as simple as that. Where you have no light, you will have noise, because sensors are not perfect entities and various sources of noise will always be present in your images; this is unavoidable. Let’s break down the various sources of noise and see what we can do about them.
Reading is hard
One of those sources is read noise. As mentioned before, it exists in all sensors and is nothing you can avoid. If you don’t have enough signal (= light), read noise will start to show. Stacking such data will show even more noise, because the stacking process treats the read noise as signal and stacks it too. How to avoid it? Expose for longer or with a faster lens. If you can’t, increase ISO to reduce read noise.
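To see why “swamping” the read noise with signal works, here is a small Python sketch of the single-exposure signal-to-noise ratio. The fixed read noise of 7 electrons is a made-up but plausible value; shot noise follows the usual square root of the signal:

```python
import math

read_noise = 7.0    # electrons per read (assumed, roughly typical of low-ISO DSLR readout)

def snr(signal_e):
    """Signal-to-noise of one exposure: shot noise sqrt(S) combined with fixed read noise."""
    return signal_e / math.sqrt(signal_e + read_noise ** 2)

for signal in (10, 100, 1000, 10000):
    ideal = math.sqrt(signal)          # SNR with no read noise at all
    print(f"{signal:6d} e-: SNR {snr(signal):6.1f} (ideal {ideal:6.1f})")
# With only 10 e- of signal the read noise dominates; by 10000 e- it is negligible.
```

Once the collected signal is far larger than the squared read noise, the exposure is shot-noise limited and reading out contributes essentially nothing, which is exactly the regime long exposures (or a faster lens) put you in.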
It’s getting hot in here
Another source of noise is dark current. Dark current accumulates in the wells of the sensor from pure heat; the longer your exposures are, the more dark current accumulates over time. On a DSLR there is little you can actually do about it, but due to its relatively even distribution this type of noise is relatively harmless. It can harm the total possible contrast of the image, though. To avoid it you have to get a dedicated cooler for your camera and cool it to sub-zero (°C) temperatures. This is a reason why Astro cameras often come with dedicated cooling mechanisms.
Can’t make me stop
While many DSLR camera bodies have hot pixel suppression built into their firmware (and some go way overboard with it, looking at you, Sony), some hot pixels might still show up in the image. Hot pixels are easily corrected in pre-processing or rejected when stacking the image. A cooling solution for the camera helps prevent them, but especially with cameras made for Astrophotography with true raw frame output you will not be able to avoid them entirely.
Is anyone here?
Photon noise. Yep, that’s a thing. Stray photons can hit various parts of the sensor and you cannot do anything about it. They will lighten up some pixels more than others. Given enough total integration time, those can easily be rejected during stacking. It’s also possible that your sensor will be hit by cosmic rays, which show up as large and very weird streaks. Next time you take dark calibration frames, inspect them carefully and I promise you will see some.
Physics wouldn’t hit a barn door from a meter away
With the QE of sensors not being perfect, one of the reasons for noise comes from short exposures: the so-called shot noise. Light distribution on the sensor is not even, and with varying QE not all pixels will be lit up evenly. Given the same camera settings, same target and everything, images will vary, since we don’t have a perfect light distribution. This is generally hard to avoid except with long exposures. But even then you won’t get the same signal strength for each pixel. During integration of many different images those levels will usually equalize and give you the true values of the pixels at hand.
Pretending to be a part
(Fixed Pattern Noise)
While some noise is not apparent in single frames, it will strongly show after integration and subtraction of the light-polluted background. This is typically fixed pattern noise. Since sensors are not perfect, they can exhibit various forms of fixed pattern noise like banding (horizontal or vertical lines) or “raining” noise (if you have it, you’ll know what I’m talking about). There is little you can do about that except to dither every frame. While that might cost capturing time, it helps a lot after integration, making the background noise more even and easier to handle. Dark frames can help with those, too.
Another source of noise that can show after integration is color mottle. Given the CFA on color sensors and the way color images are generated (demosaicing, more on that in an upcoming article), color mottle is almost unavoidable. Dithering helps a fair share to reduce it, but with a CFA sensor you cannot fully avoid it. The only way to get around it is using a monochrome sensor and full-sensor color interference filters.
Computers don’t always know best
(ISO and automatic exposure times)
In my opinion one of the reasons that ISO is so misunderstood in common photography is the way that automatic exposure modes in cameras work.
Think about it for a second: to get the same exposure with a severely limited accessible full well capacity, you need far less light than before. Less light means more noise, since not all pixels can be illuminated fully. Due to the way a sensor is built and its imperfect QE, not all pixels will be hit in an equal manner, adding various forms of chromatic noise and grain.
Due to the severely limited full well capacity at high ISO levels a single electron will be stretched over several ADU values. With our imaginary sensor of 40k we would have barely 312e to work with at an ISO of 12800! That gives us a gain of 0.019 e/ADU. The sensor is severely over-sampling at this point with 1e filling up a whole 52 ADU! That will give you a very coarse result in terms of image quality and color accuracy.
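The ISO 12800 numbers follow directly from the halving-per-stop rule; a few lines of Python with the same imaginary sensor reproduce them:

```python
full_well = 40000     # electrons, imaginary sensor
adc_levels = 2 ** 14  # 16384 ADU values

stops = 7  # ISO 100 -> 12800 is seven full stops, each halving the accessible well
accessible = full_well / 2 ** stops
gain = accessible / adc_levels  # e/ADU

print(f"Accessible well: {accessible:.1f} e-")        # -> 312.5 e-
print(f"Gain: {gain:.3f} e/ADU")                      # -> 0.019 e/ADU
print(f"One electron spans about {1 / gain:.0f} ADU") # -> about 52 ADU
```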
And what does the image processor in your camera do when you crank up the ISO? Well, it tries to expose “right”. To expose “right” and get a properly illuminated image, you have to capture for a shorter time or with a smaller aperture. So the automatic handling of exposure is really the issue. For normal photography that makes a lot of sense: sometimes you just can’t shoot slower to get a properly exposed image, or you would get read noise into the image by underexposing it.
But for Astrophotography? Your exposure time is always limited. You are almost always shooting a low-signal area. You will get noise, no matter what you do. So for Astrophotography it can make a lot of sense to shoot at a higher ISO, since it can have an actual noise-reducing effect.
Quo vadis ISO?
Let’s try to sum up what we have learned, debunk the common myths and wrap up the whole article. Here are the three most commonly named traits that are brought up in connection with a high ISO:
More Noise. Factually untrue. If anything, a higher ISO can produce images with less noise if you have to limit yourself to short exposures (in low-signal areas).
Higher Sensitivity. Factually untrue. The sensitivity of any given sensor is limited by the way it is built. It is not possible to increase the sensitivity of a sensor after it was etched in stone.
More Grain. Factually untrue. The digital grain that comes with high ISO in normal photography is due to the shorter necessary (and automatic) exposures. Given short exposure times at high ISO, the signal might look grainier or noisier since it is stretched further. Shooting the same exposure time at a low ISO can be severely worse instead.
With those common myths debunked you should have a full understanding on how ISO in your camera works and how it will affect your images by now. With that knowledge at hand you should be fully prepared to get the best Astrophotography images that your current level of equipment can offer.
As much as I would wish to, I cannot give you any real advice on what ISO to employ in your imaging efforts. This highly depends on whether your camera is ISO-variant or -invariant, the speed of the lens or scope, your exposure time and whether you’re tracking or not.
Here are some ideas how to proceed from this point:
- Find out at what point your camera becomes ISO invariant and use that point as base for Astrophotography. In doubt, try starting at ISO 1600.
- If you don’t have a tracker and can only take short exposures, stick to a higher ISO and take many shots, ideally from a fast lens.
- Should you start to blow out stars at your selected ISO and exposure time, reduce the ISO. Your exposure will be long enough to swamp the read noise at hand.
Thank you for sticking with this highly technical article. I hope you have learned something from it and understand that ISO is not the bad boogeyman many people make it out to be. In fact it can be a helpful accessory for you which will lead you to better images.
If you liked this article, check out the various other articles covering topics for beginners in my Articles section.
And if you want to stay informed, subscribe to my newsletter at the bottom of the page, like me on Facebook or follow me on Twitter!