Lightmatter: rethinking computing


It’s not every day that one gets the opportunity to change the nature of something as fundamental as computing. Jack Kilby did it when he demonstrated the first integrated circuit at TI in 1958. Robert Noyce and his colleagues at Fairchild did it when they figured out how to make it planar. Robert Dennard did it when he invented DRAM at IBM.

At Lightmatter, Nick, Darius, and Thomas have set out on an equally seminal task: to move computing into the realm of light. We are super excited to lead Lightmatter’s $11M Series A round and to be part of their journey. Here’s why.

In April 1965, Gordon Moore proposed that the component density of electronic circuits would double every year. His construct, since named “Moore’s law,” has turned out to be remarkably accurate and has stood the test of time for decades. Until now.

Transistor gate dimensions have actually stayed approximately the same since the 32 nm node of the early 2010s; chipmakers have kept packing more transistors per unit area by building them taller instead. But even that game is about to be up. Transistors have become so small (today’s cutting-edge 10 nm features are roughly 20 silicon atoms wide, given silicon’s lattice constant of about 0.54 nm) that phenomena like leakage currents and quantum tunneling require ever more sophisticated techniques to overcome, driving up expense and slowing down development cycles. And we’re very close to reaching single-atom scales, at which point this kind of scaling stops entirely. In fact, improvements in the switching power and speed of silicon gates have already slowed dramatically.

Prescient chart from Gordon Moore’s 1965 article in Electronics magazine

The community knows that we’ll soon need a different paradigm if we want to keep improving computing performance. Major efforts are underway to develop exotic new approaches: quantum computing, neuromorphic computing… and optical computing. All of them still need fundamental breakthroughs, and all require gigantic amounts of investment in “all or nothing” development efforts. We may be waiting a long time.

At least that’s what we believed until we met the Lightmatter founders. They came upon a new idea: using fairly simple silicon photonic structures, ones considered “elementary” by serious silicon photonics companies like our now-public portfolio company Acacia Communications, they can perform a key computing operation in light while leaving the rest in standard silicon. That operation, matrix multiplication, turns out to be the critical computation in artificial intelligence: the one that consumes most of the power and limits the speed. In light, it can be performed at blazing speed while using an incredibly small amount of power. And by limiting optical computing to matrix multiplication, Lightmatter doesn’t have to wait for breakthroughs like optical memory to introduce optical computing to the world.
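To make that concrete, here’s a minimal sketch (our illustration, with made-up layer sizes, not anything from Lightmatter): in a neural network, each fully connected layer is one big matrix multiplication plus a comparatively tiny amount of elementwise work, so accelerating the matmul accelerates nearly all of the computation.

```python
import numpy as np

def dense_layer(W, b, x):
    """Forward pass of one layer: the W @ x matmul dominates the cost."""
    return np.maximum(W @ x + b, 0.0)  # matrix multiply, bias add, ReLU

# Illustrative sizes only — a single 4096-wide layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096))
b = rng.standard_normal(4096)
x = rng.standard_normal(4096)

y = dense_layer(W, b, x)

# Rough operation count: the matmul needs ~2 * 4096 * 4096 multiply-adds,
# while the bias add and ReLU together need only ~2 * 4096 elementwise ops.
matmul_ops = 2 * 4096 * 4096   # ~33.6 million
elementwise_ops = 2 * 4096     # ~8 thousand
print(f"matmul share of work: {matmul_ops / (matmul_ops + elementwise_ops):.4%}")
```

Swap the `W @ x` in that layer for a photonic matrix multiplier and, in principle, the electronic silicon only has to handle the cheap elementwise leftovers.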

To test the idea, Nick built a prototype silicon photonics matrix multiplier in his lab at MIT as part of his Ph.D. thesis. It worked.

Lightmatter now had the beginnings of an artificial intelligence processor dramatically different from those being built by more than two dozen startups, all of whom are applying a similar set of processing “tricks” to the problem: exploit matrix sparsity, compress the data, increase the on-chip cache size, and add the sundry accelerators often seen in more sophisticated microprocessors. If it works, rather than fine-tuning the Chevy engine, Lightmatter will be powered by a rocket engine that they can fine-tune as well; the “tricks” everyone else is applying aren’t easy, but they are widely known.

Sure, there is still a ton of technical risk. While Nick, Darius, and Thomas are standing on the shoulders of the many engineers who spent years solving the multitude of problems with silicon photonics, they will doubtless run into unpredictable roadblocks of their own. They will need to scale up their prototype, build out the rest of the system, write the software, and build the tools. They may be delayed, and there is of course a chance of failure. But if they succeed, they will not only have built a disruptive AI processor, they will have started the optical computing revolution. And they will have a chance to take us into the next computing age, like Kilby, Noyce, and Dennard.

And isn’t that what startups are supposed to be about?

Follow us on Twitter: Stan Reiss (@stanreiss) and Matrix Partners (@MatrixPartners).