
Publications

Feb 15, 2021

 

Global minimization via classical tunneling assisted by collective force field formation

Francesco Caravelli, Forrest C. Sheldon, Fabio L. Traversa

Simple dynamical models can produce intricate behaviors in large networks. These behaviors can often be observed in a wide variety of physical systems captured by the network of interactions. Here we describe a phenomenon where the increase of dimensions self-consistently generates a force field due to dynamical instabilities. This can be understood as an unstable (“rumbling”) tunneling mechanism between minima in an effective potential. We dub this collective and nonperturbative effect a “Lyapunov force”, which steers the system towards the global minimum of the potential function, even though the full system has a constellation of equilibrium points whose number grows exponentially with the system size. The system we study has a simple mapping to a flow network, equivalent to current-driven memristors. The mechanism is appealing for its physical relevance in nanoscale physics and for its possible applications in optimization, novel Monte Carlo schemes, and machine learning.
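
For orientation only, a minimal sketch (a generic asymmetric double-well, not the model from the paper) of the failure mode that the collective “Lyapunov force” is claimed to overcome: a plain gradient flow started near the shallow minimum never reaches the global one.

    # Toy double-well V(x) = x^4/4 - x^2/2 + 0.2*x (illustrative only).
    # The global minimum sits near x ~ -1.09; a shallower one near x ~ 0.88.
    def dV(x):
        return x**3 - x + 0.2       # dV/dx

    x, dt = 1.0, 0.01               # start in the basin of the shallow minimum
    for _ in range(10_000):
        x -= dt * dV(x)             # plain gradient descent, no collective force
    print(f"gradient flow settles at x = {x:.2f}")   # ~0.88, i.e. the local minimum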

View arXiv version  →

Feb 6, 2021

 

Directed percolation and numerical stability of simulations of digital memcomputing machines

Yuan-Hang Zhang, Massimiliano Di Ventra

Digital memcomputing machines (DMMs) are a novel, non-Turing class of machines designed to solve combinatorial optimization problems. They can be physically realized with continuous-time, non-quantum dynamical systems with memory (time non-locality), whose ordinary differential equations (ODEs) can be numerically integrated on modern computers. Solutions of many hard problems have been reported by numerically integrating the ODEs of DMMs, showing substantial advantages over state-of-the-art solvers. To investigate the reasons behind the robustness and effectiveness of this method, we employ three explicit integration schemes (forward Euler, trapezoid and Runge-Kutta 4th order) with a constant time step, to solve 3-SAT instances with planted solutions. We show that (i) even if most of the trajectories in the phase space are destroyed by numerical noise, the solution can still be achieved; (ii) the forward Euler method, although having the largest numerical error, solves the instances with the fewest function evaluations; and (iii) when increasing the integration time step, the system undergoes a “solvable-unsolvable transition” at a critical threshold, which needs to decay at most as a power law with the problem size in order to control the numerical errors. To explain these results, we model the dynamical behavior of DMMs as directed percolation of the state trajectory in the phase space in the presence of noise. This viewpoint clarifies the reasons behind their numerical robustness and provides an analytical understanding of the solvable-unsolvable transition. These results lend further support to the usefulness of DMMs in the solution of hard combinatorial optimization problems.
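
As a concrete reference for the schemes compared above, a minimal sketch of constant-step forward Euler applied to a generic ODE system dx/dt = f(x); the actual DMM equations are not reproduced here.

    import numpy as np

    def forward_euler(f, x0, dt, n_steps):
        """Integrate dx/dt = f(x) with a constant time step: one function
        evaluation per step, the cheapest of the three explicit schemes."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_steps):
            x = x + dt * f(x)
        return x

    # usage on a toy linear relaxation (stands in for the DMM ODEs)
    x_final = forward_euler(lambda x: -x, x0=[1.0, -2.0], dt=0.1, n_steps=200)
    print(x_final)    # close to the equilibrium [0, 0]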

View arXiv version  →


Related Readings

Dec 22, 2020

Coupled oscillator networks for von Neumann and non von Neumann computing

Michele Bonnin, Fabio Lorenzo Traversa, Fabrizio Bonani

The frenetic growth in the demand for computing performance and efficiency, along with the intrinsic limitations of the current mainstream solutions, is pushing the scientific community towards unconventional, and sometimes even exotic, alternatives to standard computing architectures. In this work we provide a panorama of the most relevant alternatives, both within and beyond the von Neumann architecture, highlighting which of the classical challenges, such as energy efficiency and/or computational complexity, they try to tackle. We focus on alternatives based on networks of weakly coupled oscillators. This unconventional approach, introduced by Goto and von Neumann as early as the 1950s, has recently regained interest with potential applications to both von Neumann and non-von Neumann types of computing. In this contribution, we present a general framework based on a phase equation, derived from the description of nonlinear weakly coupled oscillators, that is especially useful for computing applications. We then use this formalism to design Boolean gates such as NOT and MAJORITY, prove their working principle, and assess their stability; these gates can potentially be employed as building blocks for both von Neumann and non-von Neumann architectures.
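
As a rough, self-contained illustration (a generic Kuramoto-type phase equation is assumed here; the paper derives its own phase model), showing how weakly coupled oscillators lock their phases, which is the property exploited to encode logic states.

    import numpy as np

    # Generic phase model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    def phase_step(theta, omega, K, dt):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        return theta + dt * (omega + (K / len(theta)) * coupling)

    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=5)   # random initial phases
    omega = np.zeros(5)                             # identical free-running frequencies
    for _ in range(2000):
        theta = phase_step(theta, omega, K=1.0, dt=0.01)
    print(np.round(theta - theta[0], 3))            # phase differences tend toward 0 (locking)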

View arXiv version  →

Nov 12, 2020

Efficient solution of Boolean satisfiability problems with digital memcomputing

Sean R. B. Bearden, Yan Ru Pei & Massimiliano Di Ventra

Boolean satisfiability is a propositional logic problem of interest in multiple fields, e.g., physics, mathematics, and computer science. Beyond being a field of research in its own right, instances of the SAT problem, as it is known, require efficient solution methods in a variety of applications. It is the decision problem of determining whether a Boolean formula has a satisfying assignment, and is believed to require exponentially growing time for an algorithm to solve in the worst case. Yet, the efficient solution of many classes of Boolean formulae eludes even the most successful algorithms, not only for the worst-case scenarios, but also for typical-case instances. Here, we introduce a memory-assisted physical system (a digital memcomputing machine) that, when its non-linear ordinary differential equations are integrated numerically, shows evidence of polynomially bounded scalability while solving “hard” planted-solution instances of SAT, known to require exponential time to solve in the typical case for both complete and incomplete algorithms. Furthermore, we analytically demonstrate that the physical system can efficiently solve the SAT problem in continuous time, without the need to introduce chaos or an exponentially growing energy. The efficiency of the simulations is related to the collective dynamical properties of the original physical system that persist in the numerical integration and robustly guide the solution search even in the presence of numerical errors. We anticipate our results to broaden research directions in physics-inspired computing paradigms ranging from theory to application, from simulation to hardware implementation.
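
To make “planted-solution instances” concrete, a small sketch of one way such instances can be generated (a simple rejection scheme with a hypothetical helper; the paper uses specific generators of hard planted instances).

    import random

    def planted_3sat(n_vars, n_clauses, seed=0):
        """Random 3-SAT clauses kept only if a hidden 'planted' assignment
        satisfies them, so the instance is satisfiable by construction."""
        rng = random.Random(seed)
        planted = [rng.choice([True, False]) for _ in range(n_vars)]
        clauses = []
        while len(clauses) < n_clauses:
            vs = rng.sample(range(n_vars), 3)
            clause = [(v, rng.choice([True, False])) for v in vs]   # (variable, sign)
            if any(planted[v] == sign for v, sign in clause):       # planted must satisfy it
                clauses.append(clause)
        return planted, clauses

    planted, clauses = planted_3sat(n_vars=20, n_clauses=85)
    assert all(any(planted[v] == s for v, s in c) for c in clauses)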

Go To Publication  →


Related Readings

Jan 15, 2020

Mode-Assisted Unsupervised Learning of Restricted Boltzmann Machines

Francesco Caravelli, Forrest C. Sheldon, Fabio L. Traversa

Restricted Boltzmann machines (RBMs) are a powerful class of generative models, but their training requires computing a gradient that, unlike supervised backpropagation on typical loss functions, is notoriously difficult even to approximate. Here, we show that properly combining standard gradient updates with an off-gradient direction, constructed from samples of the RBM ground state (mode), improves their training dramatically over traditional gradient methods. This approach, which we call mode training, promotes faster training and stability, in addition to lower converged relative entropy (KL divergence). Along with the proofs of stability and convergence of this method, we also demonstrate its efficacy on synthetic datasets where we can compute KL divergences exactly, as well as on a larger machine learning standard, MNIST. The mode training we suggest is quite versatile, as it can be applied in conjunction with any given gradient method, and is easily extended to more general energy-based neural network structures such as deep, convolutional and unrestricted Boltzmann machines.
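
A schematic sketch of the idea under stated assumptions: the usual data-minus-model gradient is occasionally replaced by a data-minus-mode direction. The mode sample itself (obtained in the paper via a separate optimization step) is taken as given here, and the exact mixing schedule is simplified.

    import numpy as np

    def mode_assisted_update(W, v_data, h_data, v_model, h_model,
                             v_mode, h_mode, lr=0.01, p_mode=0.1,
                             rng=np.random.default_rng()):
        """One weight update mixing a standard gradient step with an
        off-gradient 'mode' step (simplified; not the paper's exact rule)."""
        grad_standard = np.outer(v_data, h_data) - np.outer(v_model, h_model)
        grad_mode = np.outer(v_data, h_data) - np.outer(v_mode, h_mode)
        grad = grad_mode if rng.random() < p_mode else grad_standard
        return W + lr * grad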

Go To Publication  →

View arXiv version  →

Related Readings

Mar 19, 2019

 

Aircraft Loading Optimization: MemComputing the 5th Airbus Problem

Massimiliano Di Ventra, Igor V. Ovchinnikov

On January 22nd, 2019, Airbus launched a quantum computing challenge to solve a set of problems relevant for the aircraft life cycle (Airbus challenge web page). The challenge consists of five problems that range from aircraft design to deployment. This work addresses the 5th problem. The formulation exploits an integer programming framework with a linear objective function, and the solution relies on the MemComputing paradigm. We discuss how to use the current MemComputing software, MemCPU®, to solve the proposed problem efficiently and to assess its scaling properties, which turn out to be polynomial for meaningful solutions of the problem at hand. Also discussed are possible formulations of the problem using non-linear objective functions, which allow for different optimization schemes implementable in modified MemCPU® software, potentially useful for field-operation purposes.

View arXiv version  →

Related Readings

Mar 18, 2019

 

Digital Memcomputing: from Logic to Dynamics to Topology

Massimiliano Di Ventra, Igor V. Ovchinnikov

Digital memcomputing machines (DMMs) are a class of computational machines designed to solve combinatorial optimization problems. A practical realization of DMMs can be accomplished via electrical circuits of highly non-linear, point-dissipative dynamical systems engineered so that periodic orbits and chaos can be avoided. A given logic problem is first mapped into this type of dynamical system, whose point attractors represent the solutions of the original problem. A DMM then finds the solution via a succession of elementary instantons whose role is to eliminate solitonic configurations of logical inconsistency (“logical defects”) from the circuit. By employing a supersymmetric theory of dynamics, a DMM can be described by a cohomological field theory that allows for the computation of certain topological matrix elements on instantons, which have the mathematical meaning of intersection numbers. We discuss the “dynamical” meaning of these matrix elements, and argue that the number of elementary instantons needed to reach the solution cannot exceed the number of state variables of DMMs, which in turn can grow at most polynomially with the size of the problem. These results shed further light on the relation between logic, dynamics and topology in digital memcomputing.

Go To Publication  →

View arXiv version  →

Oct 08, 2018

 

Taming a non-convex landscape with dynamical long-range order: memcomputing the Ising spin-glass

Forrest Sheldon, Fabio L. Traversa, and Massimiliano Di Ventra

Recent work on quantum annealing has emphasized the role of collective behavior in solving optimization problems. By enabling transitions of large clusters of variables, such solvers are able to navigate their state space and locate solutions efficiently despite having only local connections between elements. However, collective behavior is not exclusive to quantum annealers, and classical solvers that display collective dynamics should also possess an advantage in navigating a non-convex landscape. Here, we propose a simple model that demonstrates this effect, based on the recently suggested digital memcomputing machines (DMMs), which utilize a collection of dynamical components with memory connected to represent the structure of the underlying optimization problem. This model, when applied to finding the ground state of the Ising spin glass, undergoes a transient phase of avalanches which can span the entire lattice. We then show that a full implementation of a DMM exhibits superior scaling compared to other methods when tested on the same problem class. These results establish the advantages of computational approaches based on collective dynamics.
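
For reference, the quantity being minimized, written as a short sketch (standard Ising spin-glass energy; the random Gaussian couplings below are an illustrative assumption, not the benchmark ensemble of the paper).

    import numpy as np

    def ising_energy(spins, J):
        """E(s) = -1/2 * sum_{i,j} J_ij s_i s_j, with s_i in {-1, +1} and J symmetric."""
        return -0.5 * spins @ J @ spins

    rng = np.random.default_rng(1)
    n = 16
    J = rng.normal(size=(n, n))
    J = np.triu(J, 1); J = J + J.T          # symmetric couplings, zero diagonal
    spins = rng.choice([-1, 1], size=n)
    print(ising_energy(spins, J))           # energy of one random spin configuration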

Go To APS Physics   →

View arXiv version  →

Aug 29, 2018

MemComputing Integer Linear Programming

Fabio Lorenzo Traversa, Massimiliano Di Ventra

Integer linear programming (ILP) encompasses a very important class of optimization problems that are of great interest to both academia and industry. Several algorithms are available that attempt to explore the solution space of this class efficiently, while requiring a reasonable compute time. However, although these algorithms have reached various degrees of success over the years, they still face considerable challenges when confronted with particularly hard problem instances, such as those of the MIPLIB 2010 library. In this work we propose a radically different non-algorithmic approach to ILP based on a novel physics-inspired computing paradigm: Memcomputing. This paradigm is based on digital (hence scalable) machines represented by appropriate electrical circuits with memory. These machines can be either built in hardware or, as we do here, their equations of motion can be efficiently simulated on our traditional computers. We first describe a new circuit architecture of memcomputing machines specifically designed to solve for the linear inequalities representing a general ILP problem. We call these self-organizing algebraic circuits, since they self-organize dynamically to satisfy the correct (algebraic) linear inequalities. We then show simulations of these machines using MATLAB running on a single core of a Xeon processor for several ILP benchmark problems taken from the MIPLIB 2010 library, and compare our results against a renowned commercial solver. We show that our approach is very efficient when dealing with these hard problems. In particular, we find within minutes feasible solutions for one of these hard problems (f2000 from MIPLIB 2010) whose feasibility, to the best of our knowledge, has remained unknown for the past eight years.
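
For orientation, the standard form behind the abstract (maximize c·x subject to Ax ≤ b with integer x), written as a tiny brute-force feasibility check over binary variables; the paper instead maps the linear inequalities onto self-organizing algebraic circuits.

    import numpy as np

    c = np.array([3, 2, 4])                 # objective coefficients
    A = np.array([[2, 1, 3],
                  [1, 2, 1]])               # constraint matrix
    b = np.array([4, 3])                    # right-hand sides

    best = None
    for bits in range(2 ** len(c)):         # enumerate binary x (toy size only)
        x = np.array([(bits >> i) & 1 for i in range(len(c))])
        if np.all(A @ x <= b) and (best is None or c @ x > c @ best):
            best = x
    print(best, c @ best)                   # [0 1 1] with objective value 6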

View Results  →

View arXiv version  →

Related Readings

Jun 30, 2018

Stress-testing memcomputing on hard combinatorial optimization problems

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Haik Manukian

Memcomputing is a novel paradigm of computation that utilizes dynamical elements with memory to both store and process information on the same physical location. Its building blocks can be fabricated in hardware with standard electronic circuits, thus offering a path to its practical realization. In addition, since memcomputing is based on non-quantum elements, the equations of motion describing these machines can be simulated efficiently on standard computers. In fact, it was recently realized that memcomputing, and in particular its digital (hence scalable) version, when simulated on a classical machine provides a significant speed-up over state-of-the-art algorithms on a variety of non-convex problems…

Go To Publication  →

View arXiv version  →

Related Readings

Feb 20, 2018

Memcomputing: Leveraging memory and physics to compute efficiently

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Haik Manukian

It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory…

Go To Publication  →

View arXiv version  →

Related Readings

Jan 01, 2018

Accelerating Deep Learning with Memcomputing

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Haik Manukian

Restricted Boltzmann machines (RBMs) and their extensions, often called “deep-belief networks”, are very powerful neural networks that have found widespread applicability in the fields of machine learning and big data. The standard way of training these models resorts to an iterative unsupervised procedure based on Gibbs sampling, called “contrastive divergence”, followed by additional supervised tuning via back-propagation. However, this procedure has been shown not to follow any gradient and can lead to suboptimal solutions. In this paper, we show a very efficient alternative to contrastive divergence by means of simulations of digital memcomputing machines (DMMs). We test our approach on pattern recognition using the standard MNIST data set of hand-written numbers…
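
For reference, a compact sketch of the contrastive-divergence (CD-1) step mentioned above: one Gibbs sweep v0 → h0 → v1 → h1 followed by the usual weight update (the memcomputing-based alternative proposed in the paper is not reproduced here).

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(W, b_v, b_h, v0, lr=0.05, rng=np.random.default_rng()):
        """One CD-1 update of an RBM with binary units."""
        h0 = (rng.random(b_h.shape) < sigmoid(v0 @ W + b_h)).astype(float)    # sample hidden
        v1 = (rng.random(b_v.shape) < sigmoid(h0 @ W.T + b_v)).astype(float)  # reconstruct
        h1 = sigmoid(v1 @ W + b_h)                                            # hidden probabilities
        W = W + lr * (np.outer(v0, h0) - np.outer(v1, h1))
        return W, b_v + lr * (v0 - v1), b_h + lr * (h0 - h1)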

Go To Publication  →

View arXiv version  →

Related Readings

Dec 23, 2017

 

On the Universality of Memcomputing Machines

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Yan Ru Pei

Universal memcomputing machines (UMMs) [IEEE Trans. Neural Netw. Learn. Syst. 26, 2702 (2015)] represent a novel computational model in which memory (time non-locality) accomplishes both tasks of storing and processing of information. UMMs have been shown to be Turing-complete, namely they can simulate any Turing machine. In this paper, using set theory and cardinality arguments, we compare them with liquid-state machines (or “reservoir computing”) and quantum machines (“quantum computing”). We show that UMMs can simulate both types of machines, hence they are both “liquid-complete” (or “reservoir-complete”) and “quantum-complete”. Of course, these statements pertain only to the type of problems these machines can solve, and not to the amount of resources required for such simulations. Nonetheless, the method presented here provides a general framework in which to describe the relation between UMMs and any other type of computational model.

Go To Publication  →

View arXiv version  →

Oct 23, 2017

Evidence of an exponential speed-up in the solution of hard optimization problems

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Pietro Cicotti, Forrest Sheldon

Optimization problems pervade essentially every scientific discipline and industry. Many such problems require finding a solution that maximizes the number of constraints satisfied. Often, these problems are particularly difficult to solve because they belong to the NP-hard class, namely algorithms that always find a solution in polynomial time are not known. Over the past decades, research has focused on developing heuristic approaches that attempt to find an approximation to the solution. However, despite numerous research efforts, in many cases even approximations to the optimal solution are hard to find, as the computational time for further refining a candidate solution grows exponentially with input size. …
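
To make “maximizes the number of constraints satisfied” concrete, a two-line sketch for CNF-style clauses (hypothetical data layout: each clause is a list of (variable index, required value) pairs).

    def satisfied(clauses, assignment):
        """Count how many clauses are satisfied by a candidate assignment."""
        return sum(any(assignment[v] == val for v, val in clause) for clause in clauses)

    clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]
    print(satisfied(clauses, [True, True, False]))   # 3 of 3 clauses satisfied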

Go To Publication  →

View arXiv version  →

Related Readings

Aug 29, 2017

Instantons in Self-Organizing Logic Gates

Sean R. B. Bearden, Haik Manukian, Fabio L. Traversa, and Massimiliano Di Ventra

Self-organizing logic is a recently suggested framework that allows the solution of Boolean truth tables “in reverse”; i.e., it is able to satisfy the logical proposition of gates regardless of which terminal(s) the truth value is assigned to (“terminal-agnostic logic”). It can be realized if time nonlocality (memory) is present. A practical realization of self-organizing logic gates (SOLGs) can be done by combining circuit elements with and without memory. By employing one such realization, we show, numerically, that SOLGs exploit elementary instantons to reach equilibrium points. Instantons are classical trajectories of the nonlinear equations of motion describing SOLGs and connect topologically distinct critical points in the phase space. By linear analysis at those points, we show that these instantons connect the initial critical point of the dynamics, with at least one unstable direction, directly to the final fixed point. We also show that the memory content of these gates affects only the relaxation time to reach the logically consistent solution. Finally, we demonstrate, by solving the corresponding stochastic differential equations, that, since instantons connect critical points, noise and perturbations may change the instanton trajectory in the phase space but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organize to the correct solution. Our work provides a physical understanding of, and can serve as an inspiration for, models of bidirectional logic gates that are emerging as important tools in physics-inspired, unconventional computing.
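
A conceptual illustration of “terminal-agnostic” operation (plain enumeration only, not the dynamical SOLG realization): fix the output terminal of an OR gate and ask which input assignments are consistent with it.

    from itertools import product

    def consistent_inputs(gate, output):
        """All (a, b) input pairs consistent with the assigned output value."""
        return [(a, b) for a, b in product([0, 1], repeat=2) if gate(a, b) == output]

    # an SOLG reaches one such consistent configuration dynamically; here we just list them
    print(consistent_inputs(lambda a, b: a | b, output=1))   # [(0, 1), (1, 0), (1, 1)]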

Go To Publication  →

View arXiv version  →

Sep 17, 2017

Absence of periodic orbits in digital memcomputing machines with solutions

Fabio L. Traversa, and Massimiliano Di Ventra

In Traversa and Di Ventra [Chaos 27, 023107 (2017)] we argued, without proof, that if the non-linear dynamical systems with memory describing the class of digital memcomputing machines (DMMs) have equilibrium points, then no periodic orbits can emerge. In fact, the proof of such a statement is a simple corollary of a theorem already demonstrated in Traversa and Di Ventra [Chaos 27, 023107 (2017)]. Here, we point out how to derive such a conclusion. Incidentally, the same demonstration implies absence of chaos, a result we have already demonstrated in Di Ventra and Traversa [Phys. Lett. A 381, 3255 (2017)] using topology. These results, together with those in Traversa and Di Ventra [Chaos 27, 023107 (2017)], guarantee that if the Boolean problem the DMMs are designed to solve has a solution, the system will always find it, irrespective of the initial conditions.

Go To Publication  →

Dec 13, 2016

Memcomputing Numerical Inversion With Self-Organizing Logic Gates

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Haik Manukian

We propose to use digital memcomputing machines (DMMs), implemented with self-organizing logic gates (SOLGs), to solve the problem of numerical inversion. Starting from fixed-point scalar inversion, we describe the generalization to solving linear systems and matrix inversion. This method, when realized in hardware, will output the result in only one computational step. As an example, we perform simulations of the scalar case using a 5-bit logic circuit made of SOLGs, and show that the circuit successfully performs the inversion. Our method can be extended efficiently to any level of precision, since we prove that producing n-bit precision in the output …

Go To Publication  →

View arXiv version  →

Sep 11, 2016

Topological Field Theory and Computing with Instantons

Fabio Lorenzo Traversa, Massimiliano Di Ventra, Igor V. Ovchinnikov

It is well known that dynamical systems may be employed as computing machines. However, not all dynamical systems offer particular advantages compared to the standard paradigm of computation, in regard to efficiency and scalability. Recently, it was suggested that a new type of machines, named digital (hence scalable) memcomputing machines (DMMs), which employ non-linear dynamical systems with memory, can solve complex Boolean problems efficiently. This result was derived using functional analysis without, however, providing a clear understanding of which physical features make DMMs such an efficient computational tool. Here, we show, using a recently proposed topological field theory of dynamical systems…

Go To Publication  →

View arXiv version  →

Dec 16, 2015

Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

Fabio Lorenzo Traversa, Massimiliano Di Ventra

We introduce a class of digital machines, which we name digital memcomputing machines (DMMs), able to solve a wide range of problems, including non-deterministic polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept …
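
For intuition only, the constraint underlying the factorization example: find integers 1 < p ≤ q with p·q = N. The sketch below checks it by naive trial division; a DMM instead embeds the same multiplication constraint in the dynamics of a self-organizing circuit.

    def factor(N):
        """Return a nontrivial factor pair (p, q) with p*q == N, or None if N is prime."""
        for p in range(2, int(N ** 0.5) + 1):
            if N % p == 0:
                return p, N // p
        return None

    print(factor(2021))   # (43, 47)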

Go To Publication  →

View arXiv version  →

Related Readings

May 05, 2014

Universal Memcomputing Machines

Massimiliano Di Ventra, Fabio L. Traversa

We introduce the notion of universal memcomputing machines (UMMs): a class of brain-inspired general-purpose computing machines based on systems with memory, whereby processing and storing of information occur on the same physical location. We analytically prove that the memory properties of UMMs endow them with universal computing power (they are Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely, their collective states can support exponential data compression directly in memory. We also demonstrate that a UMM has the same computational power as a nondeterministic Turing machine, namely, it can solve nondeterministic polynomial (NP)-complete problems in polynomial time. However, by virtue …

Go To Publication  →

View arXiv version  →

Jun 01, 2014

Dynamic computing random access memory: A brain-inspired computing paradigm with memelements

Massimiliano Di Ventra, Fabio L. Traversa, Fabrizio Bonani, Yuriy V. Pershin

The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology.
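
A toy software analogue of the polymorphic logic described above: one fixed structure computes different Boolean functions depending only on a control signal (the actual DCRAM selects the operation with control pulses applied to memcapacitive cells).

    def polymorphic_gate(a: int, b: int, control: str) -> int:
        """Same 'hardware', different logic operation selected by the control signal."""
        ops = {"AND": a & b, "OR": a | b, "XOR": a ^ b}
        return ops[control]

    print(polymorphic_gate(1, 0, "OR"), polymorphic_gate(1, 0, "AND"))   # 1 0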

Go To Publication  →

View arXiv version  →