No invention exists in a vacuum. Every part of the technology we use is based on previous knowledge and discoveries. Bluetooth, the humble connectivity standard for your wireless headphones and keyboards, is based on more than 30,000 patents. Smartphones, on the other hand, incorporate more than 100,000 patents, and the best smartphones have unique patents that help them stand out from the competition. And each of those patents is based on prior art. Some of that technology dates back years, some decades, and some more than 2,000 years.
Camera lenses and optical technology
One of the most important parts of taking a photograph is getting light from outside the camera onto the film or photosensor. But light is not monolithic: it is a spectrum of wavelengths, and getting those various wavelengths to refract and converge coherently was the work of centuries of scientists.
The early lens
The lens, which bends light passing through it, dates back to the 5th century BC. Early lenses were used to light fires by concentrating the sun's energy on a single point, and early Roman writers were also aware of the lens's magnifying ability.
Glasses were invented in the late 13th century, giving rise to the optics industry. Around 1600, someone (it’s not clear who) came up with the idea of using more than one lens, giving rise to the telescope and microscope.
The camera obscura
Parallel to the development of the lens was the camera obscura, a closed box or room that only lets light in through a small hole (opening). That light is then projected onto the surface opposite the hole. It was first described more than 2,000 years ago and was mentioned by Aristotle as being useful for studying eclipses without having to look directly at the sun.
In the 16th century, lenses were built into the openings to help focus light. For the next 200 years, it remained a tool of artists and scientists. In the 19th century, Niépce and Daguerre revealed the secrets of chemical photography, which was essentially a means of capturing the image in focus in the lens of a camera obscura.
Early cameras used a single lens to focus light. To overcome the optical aberrations unavoidable with a single lens, manufacturers began combining two lenses made from different types of glass. As the century progressed, progress in lens design was marked by advances in understanding the shape and configuration of lens elements.
Modern lenses began to emerge in the late 19th century with the development of the anastigmat lens. These lenses corrected astigmatism, an aberration that causes areas away from the center of the frame to be out of focus. The anastigmat was the first lens design to correct most of the optical aberrations that had plagued photographers up to that time.
The 20th century saw two developments that laid the groundwork for 21st-century mobile photography. The first was the aspherical lens, which had been known for hundreds of years but only became viable for commercial use in the 20th century. The profile of most camera lenses is spherical, but the more complex geometry of aspherical lenses allowed a single element to replace more complex multi-element designs.
The next big development was the rise of plastics. In the 1930s, the first optical-quality plastic lenses were developed. Kodak led the way by putting plastic lenses in its cameras in the 1960s, selling millions of Instamatic cameras with them. The low cost and rapid manufacture of plastic lenses (compared to optically superior glass lenses) led to a low-cost photography boom in the 1970s, with some cameras costing less than $10.
These two developments, plastic and aspherical lenses, directly addressed some of the biggest problems early mobile phone camera makers would face: size and cost. Plastic lenses could be made cheaply and quickly, and aspherical lenses could overcome some of the optical shortcomings of plastic while reducing the number of lens elements needed.
Image sensors and semiconductor technology
Long before the digital camera was even a concept, scientists and inventors were familiar with the tendency of certain chemicals to react with light. Once it was discovered in the 19th century how to halt that photochemical reaction, the first photographs became possible. By the early 20th century, the quantum nature of light and the photoelectric effect had been firmly established, opening the theoretical door to electronic imaging. Still, it would take decades and a revolution in technology for our phone cameras to even be a possibility.
The genesis of today’s image sensors lies in the search for a way to combine multiple electronic components into a single device. By the 1950s, vacuum tubes were being replaced by transistors, but the complexity of computers was growing beyond human ability to maintain them. The problem was the sheer number of components that had to be maintained and wired together. If something failed, it could take days or weeks to find the fault.
By the end of the decade, all the technologies were in place to consolidate complex circuits into a single, miniaturized device. The first integrated circuits were demonstrated in 1958–59, and the unmet need was so great that the industry exploded. In 1965, Gordon Moore observed that the number of transistors on integrated circuits was doubling roughly every year, a pace he later revised to every two years: Moore’s Law.
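Moore’s Law is just exponential growth with a fixed doubling period, which makes it easy to sketch in a few lines of code. The starting transistor count below (roughly that of an early-1970s microprocessor) is illustrative, not a claim from this article:

```python
def moores_law(initial_count: int, years: float, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming it doubles
    once every `doubling_period` years (Moore's revised pace)."""
    return int(initial_count * 2 ** (years / doubling_period))

# Illustrative example: a chip with 2,300 transistors projected
# 20 years forward is 2**10 = 1,024 times denser.
print(moores_law(2_300, years=20))  # 2,355,200
```

Compounding is what makes the observation remarkable: ten doublings multiply density by more than a thousand.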
The charge-coupled device
In 1969, less than 10 years after the invention of the integrated circuit, the charge-coupled device (CCD) was invented at Bell Labs. Initially, researchers were trying to create a memory chip (DRAM would not hit the market until 1970), but the CCD’s potential for capturing images was noted early on.
The CCD is essentially a grid of metal-oxide-semiconductor (MOS) capacitors. Each capacitor acts as a photodetector, converting incoming light into stored electrical charge. In 1971, two years after the CCD was invented, researchers turned theory into practice by capturing images with a CCD device. Kodak built its first digital camera in 1975, and by the 1980s, CCDs were available in consumer camcorders.
Complementary metal-oxide semiconductors
Back in 1963, a process for producing MOS ICs called complementary MOS (CMOS) was invented, the hallmark of which was its low power consumption. In the 1980s and 1990s, CMOS was the industry standard for computer chips, but CCD was still the standard for digital imaging.
The problems with CCD image sensors were their power consumption and speed. Earlier attempts had been made to use CMOS-based image sensors, but they suffered from noise and produced unusable images. NASA’s Jet Propulsion Laboratory set out to solve these problems to make lighter, more reliable image sensors for its spacecraft. In 1993, it hit on the solution and quickly licensed the technology for public use. The combination of low power consumption, small size, and integrated design perfectly met the needs of mobile phone camera makers.
Imagining the future
This was just the foundation for the mobile photography that arrived with the new millennium. Since then, the field has grown by leaps and bounds. The first cell phone cameras paired four-element lenses with image sensors of fewer than a million photosites. Today’s cameras have seven or more lens elements and sensors made up of more than ten million photosites. And that’s before counting phones with multiple cameras and infrared depth sensors. Given how far we’ve come in the last 20 years, who knows where we’ll be in 10 more?