
Our Story

In 2012, Fabio L. Traversa, PhD, joined the group of Professor Massimiliano (Max) Di Ventra, PhD, at the University of California, San Diego as a Research Scientist in Physics. Di Ventra’s group was already working on a new computing paradigm Di Ventra had named “memcomputing,” in which memory performs both the storage and the processing of information. This contrasts with current computers, which follow the von Neumann architecture, where memory and processing are kept separate. That separation creates what is known as the “von Neumann bottleneck”: a limit on throughput caused by the latency of communication between the processor and the memory.

The inspiration for memcomputing came to Di Ventra when he learned that even simple organisms, such as amoebas, can “learn” patterns when appropriately trained. He realized that this behavior is a consequence of the organism’s ability to store that information in some form of physical memory and retrieve it later to perform simple tasks. Following up on this work, Di Ventra and his team began to investigate whether the same principle could be used to compute, for example to solve a maze, find the shortest path in a network, or tackle certain optimization problems. The success of these early studies lent further support to the notion that memory can indeed serve as an efficient computational tool in novel architectures, without the need to physically separate the CPU from the memory bank as is done in our modern computers. This type of computing more closely resembles the one believed to occur in the brain.

As a first step toward a novel type of memcomputing architecture capable of performing universal logic, Traversa and Di Ventra, along with several colleagues, invented what they called “Dynamic Computing Random Access Memory” (DCRAM). This invention (patent issued in January 2016) realized Di Ventra’s vision of combining memory and processing.

From that point on, Max and Fabio continued on their own, focusing on the solution of hard combinatorial and optimization problems of great importance to both academia and industry.

Their research led to an initial design of an analog memcomputing machine. While this architecture avoided the von Neumann bottleneck, it had other limitations. In particular, because the machine was fully analog, it required infinite precision, which limited its scalability, that is, how large it could be made. Thus, while it represented an important advance, it was unsatisfactory.

Max and Fabio then focused on methods that would allow them to build a digital version of memcomputing machines. After a couple more years of research, they succeeded in designing a Digital Memcomputing Machine (DMM) that does not suffer from these scalability issues. When they realized what they had accomplished, they immediately filed for a patent. This was in 2015 (the patent issued in March 2018). They also published a paper containing the mathematical equations that establish the validity of the DMM concept, and followed it with additional papers analyzing the strengths of DMMs using functional analysis and topological field theory, further cementing the validity of the technology.

However, all of this evidence did not satisfy the scientific community. Far from it; instead, it seemed to provoke some backlash. Overcoming the conjectured limits of computational complexity is something that many in computer science and engineering have been attempting for decades. How could a couple of physicists come up with something that had eluded computer scientists all this time? Part of the challenge was that many of those who argued against their findings had actually reviewed and reacted to articles on the original analog version of memcomputing, not its digital version.

That said, memcomputing was still “theoretical”. Therefore, Max and Fabio set out to secure funding in the form of grants in order to continue their research on Digital Memcomputing Machines. Surprisingly, all of their grant applications were turned down by every entity they applied to, government agencies and foundations alike.

At that time, John A. Beane had recently joined UC San Diego as an Entrepreneur in Residence (EiR). John is a serial entrepreneur who has been involved with many successful startups, including the Pyxis Corporation (acquired for $960M) and ecoATM (acquired for $350M), which he cofounded. As an EiR, John’s role was to meet with and advise undergraduates, graduate students, professors, and staff at the university interested in commercializing their technology.

John was introduced to Max and Fabio. They pitched John on memcomputing and let him know that they were seeking funding to build a prototype of the DMM. John was truly impressed with the implications of the technology. Admittedly, the science was over his head. John told Max and Fabio that he would fund the prototype. He did not want to see this technology get put on a shelf due to lack of funding. He was not discouraged by those who had commented critically on the technology. In fact, in virtually every one of his startups, his teams were responsible for building technology that was considered “impossible” by the “experts”. Impossible wasn’t in his vocabulary. While the endeavor might be considered high risk, the magnitude of the potential reward for industry made it a risk well worth taking.

The three decided to take it one step further and create MemComputing, Inc. (founded February 26, 2016). John funded the work and negotiated with the San Diego Supercomputer Center to evaluate the prototype when it was complete. Recall that the underlying technology is a digital circuit architecture, i.e., a chip. Max and Fabio couldn’t build a chip; that would take an investment on the order of tens of millions of dollars and a good five years. The prototype was simply a software emulation of the circuit. In fact, every chip starts out as a software emulation: the “circuit” can be tested, and corrections made, well before any money is spent on the hardware. So what they were doing was not out of the ordinary.

To test the prototype, the San Diego Supercomputer Center chose a set of very hard optimization problems from the 2016 Max SAT Competition. This is an annual competition in which the best and brightest from industry and academia submit software solvers for these problems. It was now 2017, so the Supercomputer Center had the problems, the expected results, and the compute times of the competition winners.
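For readers unfamiliar with the benchmark: a Max SAT problem asks for a true/false assignment to a set of variables that satisfies as many clauses of a Boolean formula as possible. The minimal sketch below is purely illustrative; the tiny instance and the brute-force search are a toy example of ours, not a competition benchmark and not MemComputing’s method, and brute force is exactly the approach that stops working as problems grow.

```python
from itertools import product

# Toy MAX-SAT instance (purely illustrative; not a competition benchmark).
# Each clause is a list of literals: a positive number means the variable,
# a negative number means its negation. The goal is to satisfy as many
# clauses as possible.
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
num_vars = 3

best_count, best_assignment = -1, None
for bits in product([False, True], repeat=num_vars):
    # A clause is satisfied if at least one of its literals evaluates to true.
    satisfied = sum(
        any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    if satisfied > best_count:
        best_count, best_assignment = satisfied, bits

print(f"Best assignment {best_assignment} satisfies {best_count}/{len(clauses)} clauses")
```

With only 3 variables there are just 8 assignments to check; the difficulty of the competition instances comes from the fact that the number of assignments doubles with every added variable.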

The problems in the Max SAT Competition were known to scale exponentially: as you add input variables, the time required to solve the problem grows exponentially. The theory behind memcomputing predicted that it would scale polynomially on these problems. (Polynomial scaling means that the computing time grows at a much slower, more manageable pace and can therefore handle larger problems.) The MemComputing, Inc. team (John, Fabio, and Max) was confident that the prototype would indeed scale polynomially, but the Supercomputer Center found one result they did not expect: not only did the compute time grow very slowly, the software emulation of the memcomputing circuit was also faster than the winning software from the competition. One would expect the overhead of simulating a circuit to make it slower than highly optimized software written specifically to solve the problem. And it was not just faster, it was orders of magnitude faster: tens, hundreds, even thousands of times faster than the winning solvers.
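To make “exponential vs. polynomial” concrete, the short sketch below tabulates two hypothetical cost curves. The formulas (2^n and n^3) are assumptions chosen only to show how quickly the two regimes diverge; they are not measured runtimes of any solver or of the MemCPU emulation.

```python
# Purely illustrative comparison of exponential vs. polynomial growth.
# The cost formulas below (2**n and n**3) are made-up assumptions for
# illustration; they are not measured runtimes of any solver or of the
# MemCPU emulation.
for n in [10, 20, 40, 80, 160]:
    exponential_steps = 2 ** n   # doubles with every added variable
    polynomial_steps = n ** 3    # cubic growth, for comparison
    print(f"n={n:>3}: exponential ~ {exponential_steps:.3e} steps, "
          f"polynomial ~ {polynomial_steps:.3e} steps")
```

Even at a few hundred variables, the exponential column dwarfs the polynomial one, which is why the polynomial scaling observed in the tests mattered so much.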

The Supercomputer Center went one step further. They took one of the sets of problems and created larger and larger instances, wanting to see whether the memcomputing circuit would continue to scale polynomially regardless of size, as Max and Fabio’s proofs indicated. The testers built bigger and bigger problem sets, growing first to hundreds of thousands and then to millions of variables. At approximately 2 million variables, the Supercomputer Center estimated that today’s best solvers would take longer than the age of the universe to solve a problem of that size. The memcomputing circuit, emulated in software, solved it in about 2 hours. This seemingly unbelievable feat was repeated thousands of times, so it was no anomaly. It truly represented the incredible performance that the memcomputing architecture could provide.

There was another very large and very positive consequence of this performance: the company could start by releasing the software version of the technology. The memcomputing chip, when built, will solve these same problems in real time. However, recall that building a chip would take tens of millions of dollars and five or more years. By starting off as a software company, the team would require only a fraction of the investment and could complete the work years earlier with a much smaller team. So the company is releasing the technology as both a Software as a Service (SaaS) and a Software Development Kit (SDK). The SaaS, available in the summer of 2019, allows an initial period of free use so that potential customers can kick the tires before committing. Paying clients can then choose to continue using the SaaS and/or the SDK. The difference between the two is that MemComputing hosts the SaaS on its cloud, while clients can integrate the SDK into their own software and run it directly on their own systems.

Where it goes from here remains to be seen. However, many of the most recent problems that MemComputing has been working on are problems intended for Quantum Computers. Keep in mind that viable Quantum Computers are still believed to be a decade away, yet the evidence is that the MemComputing software-based MemCPU® Coprocessor is solving these problems today. The impact on industry and society could be tremendous. Think about it: MemComputing is delivering the performance expected of Quantum Computers a decade before it was thought possible.