After participating in the Global AI Hackathon San Diego (June 23-25, 2017), where I implemented my own Python classes for Deep Neural Networks (with Theano), I decided to “relax” by trying to keep abreast of the latest developments in theoretical physics by watching YouTube videos of lectures on the IHÉS channel (Institut des Hautes Études Scientifiques).

After watching Barbon’s introductory talk, I was convinced that numerical computations involving tensor networks are ripe for GPU acceleration. As a first step, I implemented the first few iterations of the construction of a matrix product state (MPS) for a one-dimensional quantum many-body system, which involves applying singular value decomposition (SVD) and dense matrix multiplication to exponentially large matrices (2^L complex double-precision entries), using CUDA C/C++, cuBLAS, and cuSOLVER, with the entire computation taking place on the GPU to eliminate slow CPU-GPU memory transfers. Two iterations for L = 16 complete in about 1.5 seconds on an NVIDIA GTX 980 Ti.
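To make the construction concrete, here is a minimal NumPy sketch of the successive-SVD decomposition of a state vector into MPS tensors (my own illustrative code, not the repository's; `mps_from_state` is a hypothetical helper name). The CUDA version performs the analogous SVDs and matrix products on the GPU with cuSOLVER's `gesvd` and cuBLAS `gemm` routines:

```python
import numpy as np

def mps_from_state(psi, L):
    """Decompose a length-2**L state vector into L MPS tensors by
    repeated reshape + SVD, sweeping left to right."""
    tensors = []
    remainder = psi.reshape(1, -1)  # (left bond, rest); bond dim starts at 1
    for _ in range(L - 1):
        chi = remainder.shape[0]
        # group the left bond index with this site's physical index (dim 2)
        mat = remainder.reshape(chi * 2, -1)
        U, S, Vh = np.linalg.svd(mat, full_matrices=False)
        # U becomes this site's MPS tensor: (left bond, physical, right bond)
        tensors.append(U.reshape(chi, 2, -1))
        # absorb the singular values into the part still to be decomposed
        remainder = np.diag(S) @ Vh
    # last site: whatever is left, with a trivial right bond
    tensors.append(remainder.reshape(remainder.shape[0], 2, 1))
    return tensors
```

Note that the intermediate matrices have 2^L entries, which is exactly why the dense SVDs and multiplications dominate the cost and are attractive targets for GPU acceleration.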

I’ve placed that code in this subdirectory:

https://github.com/ernestyalumni/CompPhys/tree/master/moreCUDA/CUSOLVER

See the files create_MPS_step0.cu and create_MPS_step1_large.cu; simple cases are verified with Python NumPy (before scaling up) in cuSOLVERgesvd.ipynb:

https://github.com/ernestyalumni/CompPhys/blob/master/moreCUDA/CUSOLVER/create_MPS_step0.cu

https://github.com/ernestyalumni/CompPhys/blob/master/moreCUDA/CUSOLVER/create_MPS_step1_large.cu

https://github.com/ernestyalumni/CompPhys/blob/master/moreCUDA/CUSOLVER/cuSOLVERgesvd.ipynb

Tensor networks have largely been developed within the last decade; three applications are particularly interesting:

- **Quantum many-body physics:** while the Hilbert space grows exponentially with the number of spins in the system, tensor-network methods, from MPS to so-called PEPS, both of which involve applying SVD, QR decomposition, etc., reduce the state space in which the system’s ground state can possibly lie. They have become a powerful tool for condensed-matter physicists in the numerical simulation of quantum many-body problems, from high-temperature superconductors to strongly interacting ultracold atomic gases. cf. 1
- **Machine learning:** there is a use case for supervised learning and feature extraction with tensor networks. cf. 2, 3
- **Quantum gravity:** wormholes (Einstein-Rosen (ER) bridges) and condensates of entangled quantum pairs (Einstein-Podolsky-Rosen (EPR) pairs) have been conjectured to be intimately connected: accumulating a large density of EPR pairs (S >> 1) seems to generate a wormhole (ER), the so-called ER = EPR relation, which is implied by the AdS/CFT conjecture. Tensor-network representations have been applied to various entangled CFT states; large-scale GPU-accelerated computation of these representations and their dynamics could provide useful (and unprecedented) simulations of the gravity dual (graviton) in the bulk through AdS/CFT. cf. 4
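The state-space reduction underlying all of these applications is the same basic move: truncate each SVD at a fixed bond dimension chi, keeping only the largest singular values. A minimal NumPy sketch of that compression step (my own illustration; `truncated_svd` is a hypothetical helper, not code from the cited papers):

```python
import numpy as np

def truncated_svd(mat, chi):
    """SVD truncated to the chi largest singular values.
    The discarded weight (sum of squared dropped singular values)
    measures the truncation error."""
    U, S, Vh = np.linalg.svd(mat, full_matrices=False)
    discarded = float(np.sum(S[chi:] ** 2))
    return U[:, :chi], S[:chi], Vh[:chi, :], discarded
```

In DMRG/MPS-style algorithms this step caps the bond dimension at each site, replacing an exponentially large state vector with a polynomial number of parameters, at a cost controlled by the discarded weight.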

I believe there is valuable work to be done on GPU acceleration of tensor networks. I am asking here for help with two things: 1. colleagues, advisors, and mentors to collaborate with, so as to obtain useful feedback; 2. support, namely financial support for stipend(s), hardware, and software (NVIDIA? The Simons Foundation?). Any help with meeting or being put in contact with such people would be greatly appreciated. Thanks!

## References:

1. Ulrich Schollwöck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics 326, 96 (2011). arXiv:1008.3477 [cond-mat.str-el]
2. Johann A. Bengua, Ho N. Phien, Hoang D. Tuan, and Minh N. Do. Matrix Product State for Feature Extraction of Higher-Order Tensors. arXiv:1503.00516 [cs.CV]
3. E. Miles Stoudenmire and David J. Schwab. Supervised Learning with Quantum-Inspired Tensor Networks. arXiv:1605.05775 [stat.ML]
4. Juan Maldacena and Leonard Susskind. Cool horizons for entangled black holes. arXiv:1306.0533 [hep-th]