The technologies that help you take better photos with your mobile

How to improve the photos we take with our smartphones is perhaps one of the most recurring questions among users. We have already shared some tricks for taking better photos with your phone, and we have even talked about mobile apps for retouching photos, but it is also helpful to know how your phone's camera really works and to better understand the technologies that drive it.

That is precisely what we are going to do today, so without further ado: these are the technologies hidden inside your phone's camera.

Pixel binning

The first technology I am going to talk about has become fashionable in recent years, and many manufacturers already include it. It is called pixel binning.

This technique is nothing more than merging several adjacent pixels into one larger pixel so that more light can be captured.

Of course, this technique does not help us in well-lit scenes; there you can use the full resolution of the sensor, which, in sensors that offer pixel binning, is usually 48 MP, 64 MP or even 108 MP. In a low-light scene, however, we can use this technique: even though it reduces the overall resolution of the image, it captures more light, and you will notice the difference in the final result.
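
To make the idea concrete, here is a minimal Python sketch of 2x2 binning on a raw grayscale frame. Real sensors bin at the hardware level and on a color filter array, so this is purely illustrative:

    import numpy as np

    def bin_pixels(raw, factor=2):
        """Merge factor x factor blocks of photosites into one pixel by averaging."""
        h, w = raw.shape
        h, w = h - h % factor, w - w % factor  # crop so dimensions divide evenly
        blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))  # each output pixel gathers the light of 4 photosites

    # An 8000 x 6000 (48 MP) readout becomes a 4000 x 3000 (12 MP) binned image
    sensor = np.random.poisson(lam=3.0, size=(8000, 6000)).astype(np.float32)
    print(bin_pixels(sensor).shape)  # (4000, 3000)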

High dynamic range (HDR)

And speaking of light, it is precisely the key element for understanding HDR technology.

HDR is a mode that increases the dynamic range through exposure bracketing: the camera takes a series of overexposed, neutral and underexposed photographs and then merges them into a single image with a greater dynamic range.
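
As a rough illustration of the merging step, here is a minimal exposure-fusion sketch, assuming three already-aligned grayscale frames normalized to the range 0-1 (real pipelines also align the frames and work on color data):

    import numpy as np

    def fuse_exposures(frames):
        """Naive exposure fusion: weight each pixel by how close it is to mid-gray.

        Well-exposed pixels (near 0.5) get a high weight; blown-out or crushed
        pixels contribute little, so each region of the result comes mostly
        from the frame that exposed it best.
        """
        weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
        total = sum(weights) + 1e-6
        return sum(w * f for w, f in zip(weights, frames)) / total

    # Simulated under-, neutral and overexposed captures of the same scene
    scene = np.random.rand(480, 640)
    frames = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.4, 1.0, 1.6)]
    hdr = fuse_exposures(frames)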

But it does not end there. You have surely heard of similar modes, such as HDR+ and enhanced HDR+ on the Google Pixel, or Smart HDR on the iPhone. These modes are nothing more than HDR that covers, if possible, an even greater dynamic range.

The catch is the amount of information and its processing, and it matters more than you might think. The greater the dynamic range we capture, the more information we have and, therefore, the longer our phone takes to process the image. So, if we want to take a quick photo, the narrower the HDR range, the faster it will be processed.

Deep Fusion and Capture Outside the Frame

I mentioned the iPhone before, so this is a good time to explain two technologies that are (for now) exclusive to Apple's phone: Deep Fusion and Capture Outside the Frame.

The first is a processing system different from Smart HDR that pursues a different result, emphasizing the sharpness of the image.

Before we even take the picture, the iPhone captures four underexposed photos and four correctly exposed photos. When the shutter is pressed, it takes a long-exposure photo and merges it with the four correctly exposed shots. Meanwhile, it looks for the sharpest of the underexposed photos and merges it with the previous fusion. The result is a photo with more detail and a greater dynamic range.
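
Apple has not published the actual algorithm, but as a very loose sketch of the pipeline described above (the sharpness measure and the blend weights are pure assumptions), it might look something like this:

    import numpy as np

    def sharpness(img):
        """Rough sharpness score: variance of the image gradient magnitude."""
        gy, gx = np.gradient(img)
        return (gx ** 2 + gy ** 2).var()

    def deep_fusion_sketch(underexposed, well_exposed, long_exposure):
        # 1. Merge the long exposure with the correctly exposed frames.
        base = (long_exposure + sum(well_exposed)) / (1 + len(well_exposed))
        # 2. Pick the sharpest of the underexposed frames for fine detail.
        detail = max(underexposed, key=sharpness)
        # 3. Blend the detail frame into the base (fixed weight, purely illustrative).
        return 0.7 * base + 0.3 * detail

    shots = [np.random.rand(480, 640) for _ in range(9)]
    result = deep_fusion_sketch(shots[:4], shots[4:8], shots[8])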

The catch is that this processing system does not work in every situation: with the ultra-wide lens, the technology does not work at all; with the standard wide lens (the usual one), Deep Fusion works in medium and low light; and, in the case of the telephoto lens, it works in all situations as long as we do not shoot straight into a large light source, such as the Sun itself.

On the other hand, we have the so-called Capture Outside the Frame. When the light conditions allow it, the iPhone takes one photograph with the telephoto lens and another with the standard wide lens (or with the wide and ultra-wide lenses), and then joins both images by stitching them along the edges so that there are no visible seams between them. The result is a photo we can use to reframe the shot later if necessary.

But beware: if you do not use the "extra" part of the photo within 30 days, it is deleted, leaving only the first capture you made.
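
A very simplified sketch of the reframing idea: if we keep the wider capture together with the crop rectangle that corresponds to the framed shot, "using the extra part" is just choosing a different crop later. The actual stitching of the two lenses is far more involved; these helper names are hypothetical:

    import numpy as np

    def capture_with_margin(wide_image, crop):
        """Keep the framed shot plus the surrounding 'outside the frame' pixels."""
        top, left, height, width = crop
        framed = wide_image[top:top + height, left:left + width]
        return {"framed": framed, "full": wide_image, "crop": crop}

    def reframe(shot, new_crop):
        """Re-crop later using pixels that fell outside the original frame."""
        top, left, height, width = new_crop
        return shot["full"][top:top + height, left:left + width]

    wide = np.random.rand(3000, 4000)                     # capture from the wider lens
    shot = capture_with_margin(wide, (500, 800, 2000, 2400))
    recomposed = reframe(shot, (300, 600, 2000, 2400))    # shift the framing afterwards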

Scene Recognition

All these technologies I am talking about are backed by artificial intelligence. Yes, the same AI we hear so much about every time a new phone launches, and it is responsible for what is known as scene recognition.

Applying artificial intelligence, the phone detects the type of scene in front of us: a plate of food, a landscape, another person's face, and so on. It then adapts the exposure, saturation and other parameters of the photo accordingly to obtain the best possible snapshot automatically, without us having to adjust anything. In the vast majority of cases, the photo improves thanks to the automatic recognition built into the phone's own camera.
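
As a toy sketch of the idea: a classifier labels the scene, and the label selects a preset of capture parameters. The classifier here is a hypothetical stand-in for the on-device model, and the preset values are invented:

    CAPTURE_PRESETS = {
        "food":      {"saturation": 1.2, "exposure_bias": 0.3},
        "landscape": {"saturation": 1.1, "exposure_bias": 0.0},
        "portrait":  {"saturation": 1.0, "exposure_bias": 0.1},
    }

    def classify_scene(frame):
        """Stand-in for the real on-device classifier (a small neural network)."""
        return "landscape"  # fixed label so the sketch runs without a model

    def settings_for(frame):
        label = classify_scene(frame)
        return CAPTURE_PRESETS.get(label, {"saturation": 1.0, "exposure_bias": 0.0})

    print(settings_for(frame=None))  # {'saturation': 1.1, 'exposure_bias': 0.0}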

Portrait mode

The next mode, or rather technology, will also sound familiar to you, because it is perhaps the best known of all: portrait mode.

Portrait mode simulates taking a picture with a brighter (wider-aperture) lens to generate what is known as the "bokeh" effect, which is nothing more than separating the main subject from the background and blurring that background so it looks far away.

Some phones do this through software alone, others rely on extra lenses or additional sensors (such as ToF sensors) and, finally, some combine both. Some people prefer a flatter blur and others something more gradual (that depends on each user's taste), but it is, without a doubt, one of the most popular and widely used technologies in today's phones.
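
For the software route, a heavily simplified sketch might look like this, assuming we already have a per-pixel depth map (the kind a ToF sensor or a second camera can provide). Real pipelines use soft segmentation masks and blur radii that grow with distance; this hard mask is only illustrative:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_bokeh(image, depth, subject_depth, tolerance=0.5):
        """Blur every pixel whose depth differs from the subject's."""
        background = np.abs(depth - subject_depth) > tolerance
        blurred = gaussian_filter(image, sigma=8)   # simulate an out-of-focus lens
        return np.where(background, blurred, image)

    image = np.random.rand(480, 640)
    depth = np.full((480, 640), 5.0)    # background roughly 5 m away
    depth[100:380, 200:440] = 1.0       # subject roughly 1 m away
    portrait = fake_bokeh(image, depth, subject_depth=1.0)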

Night mode

Finally, I want to talk to you about night mode, "Night Sight" or astrophotography. It is, in short, the method phones use to take pictures when there is little light.

Depending on the phone, it is applied in one way or another, ranging from simply taking a long-exposure photograph (letting light reach the sensor for longer) to taking many short-exposure photographs and then merging them. Either way, you will get a brighter photo with less noise.
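
The stacking variant is easy to sketch: averaging N aligned short exposures preserves the signal while cutting random noise by roughly the square root of N. This toy example assumes perfectly aligned grayscale frames; real night modes also align the frames to compensate for hand shake:

    import numpy as np

    def stack_frames(frames):
        """Average aligned short exposures: the signal stays, random noise shrinks."""
        return np.mean(np.stack(frames), axis=0)

    # 16 noisy short exposures of the same dark scene
    scene = np.full((480, 640), 0.05)
    frames = [scene + np.random.normal(0.0, 0.02, scene.shape) for _ in range(16)]
    clean = stack_frames(frames)
    print(frames[0].std(), clean.std())  # noise drops by roughly sqrt(16) = 4x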

This type of photography can also become astrophotography, which is nothing more than taking night mode to the next level. Here, instead of exposing the sensor for 5-10 seconds, we can make an exposure of 30 seconds or even more and, as the name implies, capture the light of the stars.

Astrophotography is actually the name Google has given the feature on its Pixel phones, but it is not exclusive to them: you can take this type of photo on any phone that can expose the sensor for the necessary time.

In the end, as we explained at the beginning of this article, this whole set of technologies helps you take better photos at the right moment. And, most importantly, without you having to worry about practically anything.