What is computational photography? | Popular Photography
The most profound change in photography since the transition from film to digital is happening now and most people don’t realize it yet.
That change is computational photography, which refers to processing images, often as they are captured, in ways that go beyond simply recording the light hitting the camera’s image sensor. The terms machine learning (ML) and artificial intelligence (AI) also come up when we talk about this broad range of technologies, which inevitably leads to confusion.
Does the camera do everything automatically now? Is there a place for photographers who want to shoot manually? Do cloud networks know everything about us from our photos? Have we lost control of our photos?
It’s easy to think “AI” and imagine dystopian robots armed with smartphones pushing us aside. Well, maybe it’s easy for me, because now that particular mini-movie is playing on a loop in my head. But over the past few years, I have responded to the legitimate concerns of photographers worried about the push to integrate AI and ML into the realm of photography.
So let’s explore this fascinating space we find ourselves in. In this column, I will share how these technologies are changing photography and how we can understand and benefit from them. Just as you can take better photos when you understand the relationships between aperture, shutter speed, and ISO, knowing how computational photography affects the way you shoot and edit is sure to improve your photography in general.
For now, I want to talk about what computational photography is in general and the impact it already has on the way we take photos, from capture to organization to editing.
Let’s sort out the terminology
Computational photography is an umbrella term that basically means “a microprocessor and software did extra work to create this image.”
True artificial intelligence researchers may bristle at how widely the term AI has been adopted, because we are not talking about machines that can think for themselves. And yet “AI” is used most often because it is short, and people have a general idea, based on decades of fiction and film, that it refers to a certain independence on the machine’s part. Plus, it works great in promotional material. There is a tendency to slap “AI” onto a product or feature name to make it sound cool and cutting-edge, just as we used to add “cyber” to anything that vaguely hinted at the internet.
A more precise term is machine learning, which is what most computational photography technologies are built on. In ML, software is fed thousands or millions of data samples, in our case images, as the building blocks for “learning” information. For example, some apps can identify that a landscape photo you took contains sky, trees, and a van. The software has ingested images that contain these objects and are labeled as such. So when a photo contains a green, vertical, roughly triangular shape with appendages that look like leaves, it is identified as a tree.
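If you’re curious what that looks like in practice, here’s a minimal sketch in Python using a pretrained torchvision classifier. The model choice and the image path are my own stand-ins for illustration; commercial photo apps train their own proprietary models.

```python
import torch
from torchvision import models
from PIL import Image

# Load a classifier that has already "learned" from millions of labeled images.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, and normalize as the model expects

img = Image.open("landscape.jpg").convert("RGB")  # hypothetical photo
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    scores = model(batch).softmax(dim=1)

# Print the model's top three guesses about what the photo contains.
top = scores.topk(3)
for score, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][int(idx)]}: {score:.1%}")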
Another term you may come across is high dynamic range, or HDR, which is achieved by blending several different exposures of the same scene, creating a result where a bright sky and a dark foreground are balanced in a way that a single shot could not capture. In the early days of HDR, the photographer combined the exposures manually, often resulting in garish, oversaturated images where every detail was illuminated and exaggerated. Now this same approach happens automatically in smartphone cameras during capture, with much more finesse, creating images that look closer to what your eyes, which have a much higher dynamic range than a camera’s sensor, perceive at the time.
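For a rough feel of how exposure blending works, here’s a sketch using OpenCV’s Mertens exposure fusion, a classic non-ML technique in the same spirit. The file names are hypothetical, and smartphone pipelines are far more sophisticated than this.

```python
import cv2
import numpy as np

# Three bracketed shots of the same scene: underexposed, normal, overexposed.
exposures = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

# Align the frames first; handheld shots shift slightly between exposures.
cv2.createAlignMTB().process(exposures, exposures)

# Fuse the exposures, favoring well-exposed, colorful, contrasty pixels.
fused = cv2.createMergeMertens().process(exposures)

# The result is floating point in [0, 1]; scale back to 8-bit to save it.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```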
Smarter capture

Perhaps the best example of computational photography is in your pocket or purse: the smartphone, the ultimate compact camera. It doesn’t feel disruptive, because the process of shooting on your phone is straightforward. You open the Camera app, compose the image onscreen, and press the shutter button to capture it.
Behind the scenes, however, your phone performs millions of operations to get that shot: evaluating the exposure, identifying objects in the scene, capturing multiple exposures in a split second, and blending them together to create the photo that appears just a few moments later.
In a very real sense, the photo you just captured is fabricated, a combination of exposures and algorithms that make judgments based not only on the lighting of the scene, but also on the developers’ preferences for how dark or light the scene should be rendered. That’s a far cry from removing a lens cap and exposing a strip of film to the light coming through the lens.
But let’s take a step back and be pedantic for a moment. Digital photography, even with the first digital cameras, is itself computational photography. The camera’s sensor registers the light, but the camera then applies algorithms to transform that data into colored pixels, typically compressing the result into a JPEG file optimized to look good while keeping the file size small.
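As a sketch of that baseline pipeline, here’s roughly how you might reproduce it on a computer with the rawpy library, a wrapper around LibRaw. The file names are placeholders, and a real camera does all of this in firmware.

```python
import rawpy
import imageio.v3 as iio

with rawpy.imread("shot.dng") as raw:  # hypothetical raw file
    # Demosaic the Bayer sensor data and apply white balance and a tone
    # curve, roughly what the camera's own processor does after each shot.
    rgb = raw.postprocess(use_camera_wb=True)

# Compress to JPEG, trading a little detail for a much smaller file.
iio.imwrite("shot.jpg", rgb, quality=85)
```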
Traditional camera makers like Canon, Nikon, and Sony have been slow to integrate the kinds of computational photography technologies found in smartphones, for practical and undoubtedly institutional reasons. But they haven’t been idle either. The Sony Alpha 1’s bird eye-tracking autofocus, for example, uses subject recognition to identify birds in the frame in real time.
Smarter organization
For a while now, applications such as Adobe Lightroom and Apple Photos have been able to identify faces in photos, making it easy to display all the images that contain a specific person. Machine learning now allows software to recognize all kinds of objects, which can save you from having to type keywords, a task photographers seem quite reluctant to do. You can enter a search term and view matches without touching the metadata of any of those photos.
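Conceptually, the search side is simple once a model supplies the labels. Here’s a toy Python sketch; detect_labels is a hypothetical stand-in for a real recognition model, stubbed with canned results so the example runs.

```python
from collections import defaultdict

def detect_labels(path):
    # Stand-in for a real recognition model; returns canned labels here
    # so the sketch runs without any ML dependencies.
    canned = {"beach.jpg": {"sky", "sea"}, "park.jpg": {"sky", "tree"}}
    return canned.get(path, set())

def build_index(photo_paths):
    # Map each detected label to the set of photos that contain it.
    index = defaultdict(set)
    for path in photo_paths:
        for label in detect_labels(path):
            index[label].add(path)  # the photos' own metadata is untouched
    return index

index = build_index(["beach.jpg", "park.jpg"])
print(index["sky"])  # every photo the model believes contains sky
```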
If you think keywording is a chore, what about winnowing a few thousand shots down to a more manageable number of actually good frames? Software such as Optyx can analyze all the images, flag those that are blurry or severely underexposed, and mark them for deletion. Photos with good exposure and sharp focus are surfaced instead, leaving you a few dozen to evaluate and saving you a lot of time.
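Here’s a simplified sketch of the culling idea using two classic heuristics: sharpness via the variance of the Laplacian, and average brightness. Real culling apps like Optyx rely on trained models, and my thresholds are illustrative guesses.

```python
import cv2

BLUR_THRESHOLD = 100.0  # variance of the Laplacian; lower means softer focus
DARK_THRESHOLD = 40.0   # mean brightness on a 0-255 scale

def should_cull(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # edge energy as a focus proxy
    brightness = gray.mean()
    return sharpness < BLUR_THRESHOLD or brightness < DARK_THRESHOLD

# keepers = [p for p in shoot_paths if not should_cull(p)]
```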
Smarter editing

The post-capture stage has seen a lot of ML innovation in recent years as developers add smarter photo-editing features. For example, Lightroom’s Auto feature, which applies several adjustments based on the needs of the image, improved dramatically when it started to reference Adobe’s Sensei cloud-based ML technology. Again, the software recognizes the objects and scenes in the photo, compares them to similar images in its dataset, and makes more informed choices about how to adjust the shot.
As another example, ML features can create complex selections in seconds, compared to the time it would take to draw a selection by hand using traditional tools. Skylum’s Luminar AI identifies objects, such as a person’s face, when you open a photo. Using the AI Structure tool, you can add contrast to a scene and know that the effect will not be applied to the person (which would be terribly unflattering). Or, in Lightroom, Lightroom Classic, Photoshop, and Photoshop Elements, the Select Subject feature makes an editable selection around the most important element of the photo, including a person’s hair, which is difficult to do manually.
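To see the idea behind subject selection, here’s a sketch using a pretrained semantic segmentation model from torchvision. The apps named above use their own proprietary models, and the image path and output file are hypothetical.

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained segmentation model; photo editors use proprietary equivalents.
weights = models.segmentation.DeepLabV3_ResNet50_Weights.DEFAULT
model = models.segmentation.deeplabv3_resnet50(weights=weights).eval()

img = Image.open("portrait.jpg").convert("RGB")  # hypothetical photo
batch = weights.transforms()(img).unsqueeze(0)

with torch.no_grad():
    out = model(batch)["out"][0]  # per-pixel class scores

# Pixels classified as "person" become the selection mask.
person_class = weights.meta["categories"].index("person")
mask = out.argmax(0) == person_class

# An editor would now restrict adjustments to (or away from) this mask.
Image.fromarray((mask.numpy() * 255).astype("uint8")).save("subject_mask.png")
```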
Most ML features are designed to relieve pain points that otherwise eat up valuable editing time, but some simply do a better job than previous approaches. If an image was shot at high ISO in dim lighting, it probably contains a lot of digital noise. Denoising tools have been available for some time, but they usually risk turning the photo into a collection of colored smears. Now applications such as ON1 Photo RAW and Topaz DeNoise AI use ML technology to remove noise while preserving detail.
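For contrast, here’s what the older, non-ML approach looks like in code: OpenCV’s non-local means denoiser, which averages similar patches across the image. The strength values are illustrative; push them too far and you get exactly the smearing described above.

```python
import cv2

noisy = cv2.imread("high_iso.jpg")  # hypothetical high-ISO shot

# Non-local means averages similar patches across the image to suppress noise.
# Push the strength (h, hColor) too far and fine detail smears away, which is
# exactly the weakness ML denoisers are trained to avoid.
clean = cv2.fastNlMeansDenoisingColored(noisy, None, h=8, hColor=8,
                                        templateWindowSize=7, searchWindowSize=21)
cv2.imwrite("denoised.jpg", clean)
```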
And for my last example, I want to highlight the ability to enlarge low-resolution images. Enlarging a digital photo has always carried the risk of softening the image, because often you’re just magnifying the existing pixels. Now ML-based resizing features, such as Pixelmator Pro’s ML Super Resolution or Photoshop’s Super Resolution, can increase the resolution of a shot while intelligently keeping focused areas sharp.
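Here’s a sketch of ML upscaling using OpenCV’s dnn_superres module (from opencv-contrib-python) with a pretrained EDSR network. The model file has to be downloaded separately, and Pixelmator and Adobe naturally use their own networks.

```python
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # pretrained weights, downloaded separately
sr.setModel("edsr", 4)      # network name and upscale factor

small = cv2.imread("low_res.jpg")  # hypothetical low-resolution photo
big = sr.upsample(small)           # 4x the pixels, with learned detail preservation
cv2.imwrite("upscaled.jpg", big)
```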
The “Smarter Image”

I’ve raced through these possibilities to give you a rough idea of how machine learning and computational photography are already affecting photographers. In coming columns, I’ll look at these and other features in more depth. And along the way, I’ll cover news and interesting developments in this growing field. You can’t throw a dead Pentax around these days without hitting something that has added “AI” to its name or marketing materials.
What about me, your smart(-ish? -aleck?) columnist? I have written about technology and the creative arts professionally for over 25 years, including several photography-specific books published by Pearson Education and Rocky Nook, and hundreds of articles for outlets such as DPReview, CreativePro, and The Seattle Times. I co-host two podcasts, host photo workshops in the Pacific Northwest, and drink lots of coffee.
It is an exciting time to be a photographer. Unless you’re a dystopian robot, in which case you’re probably tired of pushing photographers aside.