{"id":859,"date":"2021-12-17T12:18:23","date_gmt":"2021-12-17T12:18:23","guid":{"rendered":"https:\/\/wordpress.nkisiland.com\/?p=859"},"modified":"2021-12-17T12:18:23","modified_gmt":"2021-12-17T12:18:23","slug":"lightmatters-mars-chip-performs-neural-network-calculations-at-the-speed-of-light-mit-spinoff-harnesses-optical-computing-to-make-neural-networks-run-faster-and-more-efficiently","status":"publish","type":"post","link":"https:\/\/wordpress.nkisiland.com\/?p=859","title":{"rendered":"Lightmatter&#8217;s Mars Chip Performs Neural-Network Calculations at the Speed of Light: MIT spinoff harnesses optical computing to make neural networks run faster and more efficiently"},"content":{"rendered":"<div id=\"sSS_Default_Post_0_0_19_0_0_1_2_0_0\" class=\"mb-2 article_post current_post_media current_post\" style=\"font-weight: 400;color: #0d0d0d\">\n<div class=\"posts-custom posts-custom-section section-holder clearfix\">\n<div class=\"posts-wrapper clearfix\">\n<div class=\"widget post-partial tag-optoelectronics tag-processors tag-robot-software tag-optical-circuits tag-deep-learning tag-neural-networks tag-accelerator-on-a-chip tag-analog-computing tag-photonics post-section--topic\/semiconductors\">\n<article class=\"clearfix page-article sm-mb-1 quality-HD post-2650280390\">\n<div class=\"row px10\">\n<div class=\"row px10\">\n<div class=\"col sm-mb-1\">\n<div class=\"widget__body clearfix sm-mt-1\">\n<div class=\"social-author clearfix\"><strong>For some years now, electrical engineers and computer scientists have been trying hard to figure out how to perform neural-network calculations faster and more efficiently. Indeed, the design of accelerators suitable for neural-network calculations has lately become a hotbed of activity, with the most common solution, GPUs, vying with various application-specific ICs (think Google\u2019s\u00a0Tensor Processing Unit) and field-programmable gate arrays. 
(from IEEE Spectrum)<\/strong><\/div>\n<div class=\"social-author clearfix\"><\/div>\n<div class=\"social-author clearfix\">\n<div class=\"social-author clearfix\">by\u00a0<a class=\"social-author__name rm-stats-tracked\" style=\"font-weight: normal;color: #404040\" href=\"https:\/\/spectrum.ieee.org\/u\/david-schneider\">David Schneider<\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/article>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"sSS_Default_Post_0_0_19_0_0_1_2_0_1\" class=\" row-wrapper clearfix \" style=\"font-weight: 400;color: #0d0d0d\">\n<div class=\"row \">\n<div id=\"sSS_Default_Post_0_0_19_0_0_1_2_0_1_1\" class=\"current_post_content col sm-mb-2 md-mb-4 s12 m12 l9\">\n<div id=\"sSS_Default_Post_0_0_19_0_0_1_2_0_1_1_0\">\n<div id=\"sSS_Default_Post_0_0_19_0_0_1_2_0_1_1_0_0_1_0\" class=\"non_member_follow\">\n<div id=\"sOpen_Current_Default_Post_0_0_13_0_0_0_1_0\" class=\"mb-2 article_post article_post--body-and-tags\">\n<div class=\"posts-custom posts-custom-section section-holder clearfix\">\n<div class=\"posts-wrapper clearfix\">\n<div class=\"widget post-partial tag-optoelectronics tag-processors tag-robot-software tag-optical-circuits tag-deep-learning tag-neural-networks tag-accelerator-on-a-chip tag-analog-computing tag-photonics post-section--topic\/semiconductors\">\n<article class=\"clearfix page-article sm-mb-1 quality-HD post-2650280390\">\n<div class=\"row px10\">\n<div id=\"col-center\" class=\"col sm-mb-1\">\n<div class=\"widget__body clearfix sm-mt-1\">\n<div class=\"body js-expandable clearfix js-listicle-body js-update-url css-listicle-body-2650280390\">\n<div class=\"body-description\" style=\"color: #0d0d0d\">\n<p>Well, another contender has just entered the arena, one based on an entirely different paradigm: computing with light. An MIT spinoff called\u00a0Lightmatter\u00a0described its \u201cMars\u201d device at last week\u2019s\u00a0Hot Chips\u00a0virtual conference. 
Lightmatter is not the only company pursuing this novel strategy, but it seems to be ahead of its competition.<\/p>\n<p>It\u2019s somewhat misleading, though, for me to call this approach \u201cnovel.\u201d Optical computing, in fact, has a long history. It was used as far back as the late 1950s to process some of the first\u00a0synthetic-aperture radar\u00a0(SAR) images, which were constructed at a time when digital computers were not up to the task of carrying out the necessary mathematical calculations. The lack of suitable digital computers back in the day explains why engineers built analog computers of various kinds, ones based on\u00a0spinning disks,\u00a0sloshing fluids, continuous amounts of\u00a0electric charge, or even\u00a0light.<\/p>\n<p>Over the decades, researchers have from time to time resurrected the idea of computing things with light, but the concept hasn\u2019t proven widely practical. Lightmatter is trying to change that now when it comes to neural-network calculations. Its Mars device has at its heart a chip that includes an analog optical processor, designed specifically to perform the mathematical operations that are fundamental to neural networks.<\/p>\n<p>The key operations here are matrix multiplications, which consist of multiplying pairs of numbers and adding up the results. That you can perform addition with light isn\u2019t surprising, given that the electromagnetic waves that constitute light add together when two light beams are combined.<\/p>\n<p>What\u2019s trickier to understand is how you can do multiplication using light. Let me sketch that out here, although for a fuller account I\u2019d recommend reading the\u00a0very nice description\u00a0of its technology that Lightmatter has provided on its blog.<\/p>\n<p>The basic unit of Lightmatter\u2019s optical processor is what\u2019s known as a\u00a0Mach-Zehnder interferometer. 
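<\/p>\n<p>The matrix multiplications described above boil down to a multiply-accumulate pattern: multiply pairs of numbers, then sum the products. The following plain-Python sketch is purely illustrative of that pattern, not of Lightmatter\u2019s implementation:<\/p>

```python
# Naive matrix multiply: each output entry is a sum of pairwise
# products -- the multiply-accumulate pattern that an optical
# matrix processor carries out with light instead of transistors.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]  # one multiply, one add
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```

<p>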
Ludwig Mach and Ludwig Zehnder invented this device in the 1890s, so we\u2019re not talking about something exactly modern here. What\u2019s new is the notion of shrinking many Mach-Zehnder interferometers down to a size that\u2019s measured in nanometers and integrating them together on one chip for the purpose of speeding up neural-network calculations.<\/p>\n<p>Such an interferometer splits incoming light into two beams, which then take two different paths. The resulting two beams are then recombined. If the two paths are identical, the output looks just like the input. If, however, one of the two beams must travel farther than the other or is slowed, it falls out of phase with the other beam. At an extreme, it can be a full 180 degrees (one half wavelength) out of phase, in which case the two beams interfere destructively when recombined, and the output is entirely nulled.<\/p>\n<p>More generally, the field amplitude of the light at the output will be the amplitude of the light at the input times the cosine of half of the phase difference between the light traveling in its two arms. If you can control that phase difference in some convenient way, you then have a device that works for multiplication.<\/p>\n<p>Lightmatter\u2019s tiny Mach-Zehnder interferometers are constructed by fashioning appropriately small waveguides for light inside a nanophotonic chip. By using materials whose refractive index depends on the electric field they are subjected to, the relative phase of the split beam can be controlled simply by applying a voltage to create an electric field, as happens when you charge a capacitor. In Lightmatter\u2019s chip, that\u2019s done by applying an electric field of one polarity to one arm of the interferometer and an electric field of the opposite polarity to the other arm.<\/p>\n<p>As is true for a capacitor, current flows only while charge is being built up. 
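<\/p>\n<p>A toy model ties these pieces together: the controllable phase difference acts as a stored multiplier (set once and held, like charge on a capacitor), and the output field amplitude is the input amplitude times the cosine of half that phase difference. The numbers below are made up for illustration; this is a sketch of the math, not of Lightmatter\u2019s actual device behavior:<\/p>

```python
import math

def mzi_multiply(amplitude_in, phase_diff):
    """Idealized Mach-Zehnder interferometer as a multiplier:
    output amplitude = input amplitude * cos(phase_diff / 2)."""
    return amplitude_in * math.cos(phase_diff / 2)

print(mzi_multiply(1.0, 0.0))                      # identical arms: output equals input, 1.0
print(round(mzi_multiply(1.0, math.pi), 12))       # 180 degrees out of phase: fully nulled, 0.0
print(round(mzi_multiply(2.0, math.pi / 2), 4))    # intermediate phase: weight cos(pi/4), 1.4142
```

<p>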
Once there is sufficient charge to provide an electric field of the desired strength, no more current flows and thus no more energy is required. That\u2019s important here, because it means that once you have set the value of the multiplier you want to apply, no more energy is needed if that value (a \u201cweight\u201d in the neural-network calculation) doesn\u2019t subsequently change. The flow of light through the chip similarly doesn\u2019t consume energy. So you have here a very efficient system for performing multiplication, one that operates at, well, the speed of light.<\/p>\n<p>One of the weaknesses of analog computers of all kinds has been the limited accuracy of the calculations they can perform. That, too, is a shortcoming of Lightmatter\u2019s chip\u2014you just can\u2019t specify numbers with as fine a resolution as you can using digital circuitry. Fortunately, the \u201cinference\u201d calculations that neural networks carry out once they have been trained don\u2019t need much resolution. Training a neural network, however, does. \u201cTraining requires higher dynamic range; we\u2019re focused on inference because of that,\u201d says\u00a0Nicholas Harris, Lightmatter\u2019s CEO\u00a0and one of the company\u2019s founders. \u201cWe have an 8-bit-equivalent system.\u201d<\/p>\n<p>You might imagine that Lightmatter\u2019s revolutionary new equipment for performing neural-network calculations with light is at this stage just a laboratory prototype, but that would be a mistake. The company is quite far along in producing a practical product, one that can be added to any server motherboard with a PCI Express slot and immediately programmed to start cranking out neural-network inference calculations. \u201cWe are very focused on making it so that it doesn\u2019t look like alien technology,\u201d says Harris. 
He\u00a0explains that Lightmatter not only has this hardware built but has also created the necessary software toolchains to support its use with standard neural-network frameworks (TensorFlow\u00a0and\u00a0PyTorch).<\/p>\n<p>Lightmatter expects to go into production with a commercial unit based on its Mars device in late 2021. Harris says that the company\u2019s chips, sophisticated as they are, have good yields, in large part because the nanophotonic components involved are not really that small compared with what\u2019s found in cutting-edge electronic devices. \u201cYou don\u2019t get the same point defects that destroy things.\u201d So it shouldn\u2019t be difficult\u00a0to keep yields high and the pricing for the Mars device low enough to be competitive with GPUs.<\/p>\n<p>And who knows, perhaps other companies such as\u00a0Lightintelligence,\u00a0LightOn,\u00a0Optalysis, or\u00a0Fathom Computing will introduce their own light-based neural-network accelerator cards by then. Harris isn\u2019t worried about that, though. \u201cI\u2019d say we\u2019re pretty far ahead,\u201d he says.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/article>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>For some years now, electrical engineers and computer scientists have been trying hard to figure out how to perform neural-network calculations faster and more efficiently. 
Indeed, the design of accelerators suitable for neural-network calculations has lately become a hotbed of &hellip; <a href=\"https:\/\/wordpress.nkisiland.com\/?p=859\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=\/wp\/v2\/posts\/859"}],"collection":[{"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=859"}],"version-history":[{"count":5,"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=\/wp\/v2\/posts\/859\/revisions"}],"predecessor-version":[{"id":894,"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=\/wp\/v2\/posts\/859\/revisions\/894"}],"wp:attachment":[{"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=859"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=859"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wordpress.nkisiland.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=859"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}