The rise of the LED light and problems therein


LED (Light-Emitting Diode) is a type of light that has been growing in popularity for a long time now, and it is still gaining ground. The reason it is so popular is that it can create a very strong, intense light with very little power usage. Most smartphones with a light on them have an LED light, for example. It is tiny, and yet it produces a lot of light without severely draining the battery. I don't have any references, but I suspect that your screen gobbles up far more power simply by being turned on than the LED light does, even though the LED is much more intense.

This technology has lately also entered the world of photography. You can buy panels of varying sizes that create consistent light for studio use. "Consistent" is not actually 100% correct, as LED works by shooting quick bursts of light. The bursts just come at a frequency so fast that the human eye perceives them as continuous. Eyesight works much like photography in this respect: images are exposed and presented in quick succession. Because of this, the quick bursts are not visible in photography either, unless you use an insanely fast shutter. Another appealing factor of LED light is that it can often emit light in any color on the RGB (Red, Green, Blue) scale.
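If you like numbers, here is a rough way to see why. The sketch below is in Python, and the 1 kHz pulse frequency is just an assumed example (real LED drivers vary a lot), but it shows how many bursts land inside a typical exposure versus an extremely short one:

```python
# Rough check: how many LED "bursts" (PWM cycles) fit inside one exposure.
# The 1 kHz driver frequency is an assumption for illustration only.

def cycles_per_exposure(pwm_hz: float, shutter_s: float) -> float:
    """Number of full PWM cycles integrated during one exposure."""
    return pwm_hz * shutter_s

pwm_hz = 1000.0  # assumed LED pulse frequency

for shutter in (1 / 60, 1 / 250, 1 / 8000):
    n = cycles_per_exposure(pwm_hz, shutter)
    print(f"1/{round(1 / shutter)} s exposure -> {n:.3f} PWM cycles")

# Many cycles per exposure: the bursts average out and the light looks constant.
# Well under one cycle per exposure: the frame may catch the LED mid-burst,
# which is why only an insanely fast shutter reveals the flicker.
```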

So the pros are: low power usage without losing light intensity, any color is possible, and it is a rather cheap option. So why shouldn't you just run out and get one right away? Let's look a bit closer at LED versus digital camera technology.

The Bayer filter

The first thing we need to understand (at least on a superficial level) is how the camera records an image. It is a rather complex process. While normal analogue film simply has several layers that react to different colors of light and are exposed directly, a digital sensor needs a lot of processing done to the light before it can make sense of it. I'm not going to go into detail here, but to understand why LED can cause problems, there are a couple of things that are helpful to know. Probably the most important is something called a Bayer filter. Roughly illustrated, the filter looks like this:

Basic illustration representing a Bayer filter


Now imagine this is just a tiny fragment of the entire filter. There are far, far more dots of red, green and blue than shown here. Their job is to work as an exclusion filter: the green dots allow only green light through, and likewise for the other colors. Right beneath this filter sits the camera sensor itself, which records the intensity of the light that reaches it. Because the Bayer filter is static and never changes, the image processor simply knows that this particular spot on the sensor is measuring green (or red, or blue). Also notice how the green dots cover about 50% of the filter surface, while red and blue cover 25% each (one pixel is then represented by two green dots, one red and one blue). When the image processor looks at the data from the sensor, it uses that data to build an image representation of what you just shot. The fun thing is that it doesn't just look at the value of one specific pixel. It also looks at the pixels directly around it and makes an educated guess about what the pixel actually looks like. There are several algorithms for this process, and depending on the scenario, some give better results than others.
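If you are into programming, here is a tiny sketch of that idea in Python. It is not how any real camera processor works (those use far more sophisticated algorithms), and the RGGB layout is just the common textbook arrangement; it simply mosaics a flat grey patch and fills the missing colors back in by averaging neighbouring samples:

```python
import numpy as np

# Toy model of the idea above. Each sensor pixel sits behind one colour of the
# Bayer filter (RGGB layout assumed), so two of its three colour values are
# missing and must be guessed from neighbouring pixels ("demosaicing").
# Real processors use far cleverer algorithms than this neighbour averaging.

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Keep only the single colour each pixel actually measures (RGGB pattern)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def demosaic_naive(mosaic: np.ndarray) -> np.ndarray:
    """Estimate the two missing colours at each pixel from same-colour neighbours."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, mosaic, 0.0)
        total = sum(np.roll(known, off, axis=(0, 1)) for off in offsets)
        hits = sum(np.roll(mask.astype(float), off, axis=(0, 1)) for off in offsets)
        out[..., c] = total / np.maximum(hits, 1.0)  # average of nearby samples
    return out

# A flat mid-grey patch survives the round trip, because every pixel's
# neighbourhood contains samples of all three colours.
patch = np.full((8, 8, 3), 0.5)
print(np.allclose(demosaic_naive(bayer_mosaic(patch)), 0.5))  # True
```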


LED basics

There are more differences between LED and normal light besides the power consumption and color versatility. They are just not so easily noticed by the human eye. Deep insight into why this is so is not required to grasp what I'm getting at here, so don't worry if you don't understand it (I don't fully understand it either). The thing about LED light is that the band of wavelengths it emits is very narrow. How narrow exactly depends on the color, but all in all it is quite narrow compared to normal light. The problem is that if you fire a hard red color at a subject using an LED, your subject will be lit by a very narrow band of red wavelengths. Nothing else. No other shades of red, and certainly no blue or green. The light is so intense that when you adapt your camera settings to get a good exposure, the resulting image will be based more or less solely on the light from the LED. You can even this out by using a flash to push some strong, more normal light into the shot, but that kind of makes the image lose some contrast in an interesting way.

Never mind the flash solution for now. Imagine you expose your sensor to light that is red and only red. The Bayer filter will filter the incoming light so that the sensor gets an exposure, and it will try to record the values of red, green and blue. The thing is, in an intense red LED light there is no green or blue at all (due to the narrow wavelengths). This doesn't make the sensor go haywire and break; it will even successfully record an image. The problem comes when the image processor attempts to create a JPEG image from that data. Remember when I said that the processor tries to get an idea of the color distribution by looking at surrounding pixels? Well, if some of those pixels contain only red and nothing else, and then suddenly there is an even distribution of other colours, some processors simply will not understand what is going on.
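To make that concrete, here is a small made-up simulation in Python. The channel responses are invented numbers, not measurements, but they illustrate the point: metering for a normal-looking exposure under a narrow-band red LED clips the red channel while green and blue stay near zero, so there is very little left for the processor to work with:

```python
import numpy as np

# Hedged sketch of the situation described above: under a narrow-band red LED,
# essentially all the signal lands in the red channel. Exposing so the overall
# brightness looks reasonable then clips red, while green and blue stay near
# zero. The channel responses below are made-up numbers, not measured data.

# A scene with some brightness variation (the "detail" we want to keep)
detail = np.linspace(0.2, 1.0, 9)

# Under broadband "normal" light, every channel carries a share of that detail.
broadband = np.stack([0.9 * detail, 0.8 * detail, 0.7 * detail])

# Under a narrow-band red LED, only the red filter passes anything meaningful.
red_led = np.stack([1.0 * detail, 0.02 * detail, 0.01 * detail])

def expose(channels: np.ndarray, gain: float) -> np.ndarray:
    """Apply an exposure gain and clip at the sensor's full-well value (1.0)."""
    return np.clip(channels * gain, 0.0, 1.0)

for name, scene in (("broadband", broadband), ("red LED", red_led)):
    # Pick a gain that makes the average recorded value ~0.5, as a stand-in
    # for "metering for a good-looking exposure".
    gain = 0.5 / scene.mean()
    shot = expose(scene, gain)
    clipped_red = np.mean(shot[0] >= 1.0)
    print(f"{name:9s}: gain {gain:.1f}, "
          f"R mean {shot[0].mean():.2f} ({clipped_red:.0%} clipped), "
          f"G mean {shot[1].mean():.2f}, B mean {shot[2].mean():.2f}")
```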

Group shot lit almost entirely by coloured LED light, showing over exposure artefacts

In the image above, notice how the tops of the heads of the three leftmost people are downright overexposed, with very hard edges on the "glow". Notice in particular how the hand of the girl in the middle is completely purple. These artefacts are the result of the image being overexposed in only one or at most two of the colour channels (for example overexposed red and blue, but no green light at all). Those colours are simply lost, and the RAW processor does not have the detail it needs to bring anything useful back.

Self-portrait lit by a flash with umbrella on one side and a red LED unit on the other

In the shot above, I placed a flash with an umbrella to my right and held up an LED unit on my left (mirrored in the image, of course). The light from the flash overpowers some of the light from the LED. Notice for example my right cheek and the right side of my forehead: there are clear details there. The areas that would be plain shadow without the LED, however, are very deeply red with very little detail coming through. The edges of the red light are also very hard.

I usually use Adobe Lightroom, which uses the Adobe Standard processor. For situations like these, it is not a very good processor. The results differ depending on which RAW processor you use. Here are some examples, with the RAW processor named in the small caption below each image. (Thanks to reddit.com user iainmf for these.)

ACR Adobe Standard RAW Processor

Adobe Camera Faithful

DX09 Neutral Color – Factory Tonity

RawTherapee, no colour profile, luminance highlight recovery

RAW clipping in the image, showing overexposed areas that lack image data


Can anything be done to fix this?

Most stages that use LED also have normal spotlights. If both of these are used effectively, you can get a great shot with an intense color in the background. Take this example:


Knauskoret performing with normal spotlights on their bodies, LED in the background

In this shot the choir is lit with a strong spotlight, while the background is lit with deep blue colours from LED spots. There were also some coloured LED spots on the front of the choir, but the powerful spotlight evened them out.

Final words

I cannot speak for every photographer out there, but I do not think many would recommend LED as a cheap lighting alternative for a home studio, or anywhere else for that matter. Concerts have taught me that LED can be used to make dramatic backgrounds, but that is about all I would use it for. At least until someone comes up with a new invention that fixes the narrow-wavelength issue.

