The first LED was invented, or more accurately, accidentally discovered, in the fall of 1961 by engineers working at Texas Instruments. James R. Biard and Gary E. Pittman were working together on a project for the U.S. Air Force to build “low noise parametric amplifiers for X-band radar receivers” when they discovered that infrared light was being emitted from one of the diodes they had built on a gallium arsenide substrate. The following year, the first commercial LED product, the SNX-100 GaAs LED, came out, selling for $130 each ($1,180.80 in today’s dollars). Over at GE, also in 1962, physicist Nick Holonyak invented the first LED with visible-spectrum light: red.
LED, of course, stands for “light-emitting diode.” Technically, an LED is a semiconductor that converts electrical energy into light when a current is passed through the layers of materials contained in the chip. This process is called electroluminescence. LEDs produce different wavelengths (colors) based on the molecular makeup of the material the current is passed through. Without getting too deep into solid-state physics, the reason the molecular makeup of the compound is so critical is that the color (wavelength) and amount of light (efficiency) produced is determined by how far the electrons have to jump, or fall — otherwise known as a “band gap” or “energy gap.”
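The link between band gap and color can be sketched numerically: a photon carrying the band-gap energy has wavelength λ = hc/E, which works out to roughly 1240 nm divided by the gap in electron-volts. This is a rough illustration, not a device model; the material names and band-gap values below are approximate, and real LED emission depends on more than the gap alone.

```python
# Approximate the emitted wavelength from a semiconductor's band gap,
# using lambda (nm) ≈ 1240 / E_gap (eV), which follows from E = h*c / lambda.
PLANCK_EV_NM = 1239.84  # h*c expressed in eV·nm

def band_gap_to_wavelength_nm(gap_ev: float) -> float:
    """Wavelength (nm) of a photon whose energy equals the band gap."""
    return PLANCK_EV_NM / gap_ev

# Illustrative, approximate band gaps for common LED materials:
for material, gap_ev in [("GaAs (infrared)", 1.42),
                         ("GaAsP (red)", 1.9),
                         ("GaP (green)", 2.26),
                         ("InGaN (blue)", 2.76)]:
    print(f"{material}: {band_gap_to_wavelength_nm(gap_ev):.0f} nm")
```

Note how the wider the gap, the shorter the wavelength: this is why blue, at the short end of the visible spectrum, demands the widest band gap of all.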
To create different LED colors, scientists had to experiment to see what happened when currents were passed through different materials with unique molecular makeups: how wide the energy gap was, and what wavelength was produced. Trying out different compounds was a slow trial-and-error process. The ideal compound had to efficiently produce photons and be thermally stable (i.e., it wouldn’t break down when current was passed through it), but it also had to be a compound that could be manufactured efficiently. Only recently was a machine learning algorithm developed that could predict which compounds might produce a desired band gap, and therefore a specific color.
The right materials to produce both red and green LEDs were discovered relatively early on in the LED development process. But the right one to create a blue LED proved elusive. Scientists knew it needed the widest energy gap of all to create light with a blue wavelength, but no one had been able to pin down the right compound.
Why did the blue LED matter so much?
Engineers knew that LEDs could revolutionize many scientific and commercial applications that required light. Back in 1969, engineers at Hewlett-Packard wrote about the idea of a “wall-mounted color television set” that would use LED technology, estimating it to be about 10 years away. But many of these applications required a pure white light, and that wasn’t yet possible.
A single LED can’t produce pure white light on its own because white light isn’t made up of a single wavelength; it’s a combination of multiple wavelengths. Red, green and blue light combined in equal amounts appears to the human eye as white. In color theory, RGB is an additive color model in which red, green and blue can be combined to produce a broad array of colors, over 16 million different hues in fact. Without a blue LED, there could be no white LED, which severely limited the applications of the new technology.
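The arithmetic behind the additive RGB model is simple enough to show directly. In the common 24-bit encoding (an assumption here, not something the original specifies), each channel takes one of 256 intensity levels, which is where the figure of over 16 million hues comes from:

```python
# Sketch of the additive RGB color model with 8 bits per channel.
def mix(r: int, g: int, b: int) -> tuple:
    """An RGB color as a (red, green, blue) triple, each channel 0-255."""
    return (r, g, b)

# Equal, full-intensity red, green and blue read to the eye as white.
white = mix(255, 255, 255)

# Three independent 8-bit channels give 256^3 distinct colors.
total_hues = 256 ** 3
print(white, total_hues)  # (255, 255, 255) 16777216
```

Without the blue channel, only combinations of red and green (256² = 65,536 colors, none of them white) would be reachable, which is the crux of why the blue LED mattered.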
Over the following decades, incremental improvements were made in the production of blue LEDs. A possible material with a wide energy gap, magnesium-doped gallium nitride, was discovered in 1972 by researchers at Stanford, but the first LED actually built with it emitted green light, not blue. Another version of events says that engineers at RCA figured out the right compound in the early ’70s, but the timing was bad: RCA founder David Sarnoff had just died, and his son Robert was trying, unsuccessfully, to make RCA a leader in computing against IBM. RCA’s blue LED project was scrapped.
In 1989, Cree introduced the first commercially available blue LED, which technically emitted blue light but was very inefficient (and therefore dim). Finally, in 1993, Japanese scientists Shuji Nakamura, Isamu Akasaki and Hiroshi Amano figured out how to produce high-brightness blue LEDs by growing crystals of gallium nitride that they had molecularly engineered to have the precise energy gap they wanted. But this method of growing gallium nitride crystals was inefficient and expensive, and it took many scientific minds another two decades to figure out how to get to mass production. Those same Japanese scientists would win the Nobel Prize in Physics in 2014 for the invention of the blue LED, which many consider one of the most significant engineering advancements of the last century, second only to the invention of the transistor.
From early on, scientists were excited about the possibilities of LEDs. They are far more energy-efficient and long-lasting, and emit significantly less heat, than earlier lighting technologies. Twenty to thirty percent of all electrical energy consumed is used for lighting, so efficiency improvements are significant, both economically and environmentally. Since the blue LED problem was solved, the quality, brightness and efficiency of LEDs have increased dramatically, while the size of the chips and production costs have fallen. In LED display technology, there are still many manufacturing improvements to come that will further decrease costs and make direct-view LED displays price-competitive across a range of AV applications. But if the rapid rate of progress since commercial blue LED production became possible is any indication, that’s not too far off.