Hillis/Steele and Blelloch (i.e. Prefix) scan(s) methods implemented in parallel on the GPU w/ CUDA C++11

In the subdirectory scan in Lesson Code Snippets 3 is an implementation, in CUDA C++11 and C++11 and using global memory, of the Hillis/Steele (inclusive) scan and the Blelloch (prefix; exclusive) scan, each in both a parallel and a serial version.
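
For a rough sketch of the flavor of the parallel, global-memory version (this is my own illustration here, not the repository's actual code): one Hillis/Steele (inclusive) step adds to each element the element `offset` positions to its left; `offset` doubles each pass, and the two device buffers are swapped (double-buffered) between passes. All names below are made up for illustration.

#include <cuda_runtime.h>

// One Hillis/Steele (inclusive) scan step over global memory.
__global__ void hillis_steele_step(const float* in, float* out, int offset, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // add the element `offset` positions to the left, if there is one
    out[i] = (i >= offset) ? in[i - offset] + in[i] : in[i];
}

// Host-side driver: about log2(n) passes, swapping the two device buffers each pass.
// On return, d_buf0 holds the inclusive scan of its original contents.
void scan_inclusive(float* d_buf0, float* d_buf1, int n)
{
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    float *in = d_buf0, *out = d_buf1;
    for (int offset = 1; offset < n; offset <<= 1) {
        hillis_steele_step<<<blocks, threads>>>(in, out, offset, n);
        cudaDeviceSynchronize();
        float* tmp = in; in = out; out = tmp;   // this pass's output feeds the next pass
    }
    if (in != d_buf0)
        cudaMemcpy(d_buf0, in, n * sizeof(float), cudaMemcpyDeviceToDevice);
}

The Blelloch scan instead does an up-sweep (a reduction) followed by a down-sweep over a balanced tree, doing O(n) work rather than the O(n log n) work of Hillis/Steele, and it naturally produces the exclusive (prefix) scan.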

As you can see, for large float arrays, the parallel CUDA C++11 implementations running on the GeForce GTX 980 Ti smoke the serial implementations run on the CPU (my CPU is an Intel® Xeon(R) E5-1650 v3 @ 3.50GHz × 12).

(Screenshot: scans main program output, 2016-11-04)

I have a thorough write up on the README.md of my fork of Udacity’s cs344 on github.

Note that I was learning about the Hillis/Steele and Blelloch (i.e. Prefix) scan(s) methods in conjunction with Udacity’s cs344, Lesson 3 – Fundamental GPU Algorithms (Reduce, Scan, Histogram), i.e. Unit 3. I have a writeup of the notes I took related to these scans, formulating them mathematically, in my big CompPhys.pdf, Computational Physics notes.
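
For quick reference (my own summary here, not a quote from those notes): given inputs $x_0, x_1, \dots, x_{n-1}$ and an associative binary operator $\oplus$ with identity $e$, the Hillis/Steele (inclusive) scan computes $y_i = x_0 \oplus x_1 \oplus \cdots \oplus x_i$, while the Blelloch (exclusive, i.e. prefix) scan computes $y_0 = e$ and $y_i = x_0 \oplus x_1 \oplus \cdots \oplus x_{i-1}$ for $i \geq 1$. Hillis/Steele takes about $\log_2 n$ steps but $O(n \log n)$ total work; Blelloch's up-sweep/down-sweep takes about $2\log_2 n$ steps but only $O(n)$ work.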


I accidentally `dnf update` on Fedora 23 w/ NVidia GTX 980 Ti & prop. drivers & new kernel trashed my video output for the 2nd time; here’s how I recovered my system; Fedora Linux installation, including install of CUDA

20161031. Note that another, similar (i.e. only a few minor changes), version of this post, in Markdown format, is on my MLGrabbag github repository, MLGrabbag README.md

Oops.  I was on an administrator account and I accidentally ran


dnf update


dnf update , #fedoralinux #Fedora #linux I’m always so very wary about doing this, because I’ve set up my Linux setup to be as minimal (stock?) as possible with installs and dependencies. In particular, I’ve set up Fedora Linux to use the proprietary @nvidiageforce @nvidia drivers, NOT the open-source and not-so-good (they WILL trash your video output and get you to Fedora’s own blue screen of death) #negativo drivers. And I’ve changed around and added symbolic links manually into the root system’s collection of libraries involving #cuda, so that header and library inclusion at make time for my C++ programming is easier. I cringe if dnf update automatically installs negativo or “accidentally” cleans up my symbolic links or breaks dependencies with CUDA.

A video posted by Ernest Yeung (@ernestyalumni) on Oct 30, 2016 at 7:49pm PDT


I had done this before and written about this before, in the post Fedora 23 workstation (Linux)+NVIDIA GeForce GTX 980 Ti: my experience, log of what I do (and find out).

Fix

I relied upon 2 webpages for the critical, almost life-saving, terminal commands to recover video output and the previous, working “good” kernel – they were such a life-saver that they’re worth repeating, and I’ve saved an html copy of the 2 pages onto the MLgrabbag github repository:

See what video card is there and all kernels installed and present, respectively

lspci | grep VGA
lspci | grep -E "VGA|3D"
lspci | grep -i "VGA"

uname -a

Remove the offending kernel that was automatically installed by dnf update

Critical commands:

rpm -qa | grep ^kernel

uname -r

sudo yum remove kernel-core-4.7.9-100.fc23.x86_64 kernel-devel-4.7.9-100.fc23.x86_64 kernel-modules-4.7.9-100.fc23.x86_64 kernel-4.7.9-100.fc23.x86_64 kernel-headers-4.7.9-100.fc23.x86_64

Install NVidia drivers to, at least, recover video output

While at the terminal prompt (in low-resolution), change to the directory where you had downloaded the NVidia drivers (hopefully it’s there somewhere already on your hard drive because you wouldn’t have web browser capability without video output):

sudo sh ./NVIDIA-Linux-x86_64-361.42.run
reboot

dnf install gcc
dnf install dkms acpid
dnf install kernel-headers

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf

cd /etc/sysconfig
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

dnf list xorg-x11-drv-nouveau

dnf remove xorg-x11-drv-nouveau
cd /boot

## Backup old initramfs nouveau image ##
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau20161031.img

(in the last command, the name of the backup output file is arbitrary)

## Create new initramfs image ##
dracut /boot/initramfs-$(uname -r).img $(uname -r)
systemctl set-default multi-user.target

At this point, you’ll notice that the dnf update and the subsequent kernel removal would’ve trashed your C++ setup.

cf. stackexchange: gcc: error trying to exec 'cc1': execvp: No such file or directory When compile program with popen in php

For at this point, I tried to do a make of a C++ project I had:

[topolo@localhost MacCor1d_gfx]$ make
/usr/local/cuda/bin/nvcc -std=c++11 -g -G -Xcompiler "-Wall -Wno-deprecated-declarations" -L/usr/local/cuda/samples/common/lib/linux/x86_64 -lglut -lGL -lGLU -dc main.cu -o main.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Makefile:21: recipe for target 'main.o' failed
make: *** [main.o] Error 1

So you’ll have to do

dnf install gcc-c++

Might as well, while we’re at it, update NVidia proprietary drivers and CUDA Toolkit

Updating the NVidia proprietary driver is similar to installing it, but remember you have to go into the low-resolution, no-video-driver terminal (command-line) prompt:

chmod +x NVIDIA-Linux-x86_64-367.57.run
systemctl set-default multi-user.target
reboot

./NVIDIA-Linux-x86_64-367.57.run
systemctl set-default graphical.target
reboot

Update to CUDA Toolkit (9.0)

As a general principle, try to use your Linux distribution's package manager as much as possible.

Remember to do the Post-Installation Actions, which must be done manually. Usually, go to your bash profile, ~/.bashrc, and add the following environment variables:

LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}


What I did was use emacs on ~/.bashrc and added these lines manually:
PATH=/usr/local/cuda-9.0/bin:$PATH

LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:/usr/lib/x86_64-linux-gpu


Updating CUDA Toolkit (8.0)

Download CUDA Toolkit (8.0)

Then follow the instructions. If the driver is already updated before using the “.run” installation, then choose no when asked to install the drivers; otherwise, I had chosen yes and the default for all the options.

The Linux installation guide for CUDA Toolkit 8.0 is actually very thorough, comprehensive, and easy to use. Let’s look at the Post-Installation Actions, the Environment Setup:

The PATH variable needs to include /usr/local/cuda-8.0/bin

To add this path to the PATH variable:

$ export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}

In addition, when using the runfile installation method, the LD_LIBRARY_PATH variable needs to contain /usr/local/cuda-8.0/lib64 on a 64-bit system, or /usr/local/cuda-8.0/lib on a 32-bit system

To change the environment variables for 64-bit operating systems:

$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64\
${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Indeed, prior to adding the PATH variable, I was getting errors when I typed nvcc at the command line. After doing this:

[propdev@localhost ~]$ export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
[propdev@localhost ~]$ env | grep '^PATH'
PATH=/usr/local/cuda-8.0/bin:/home/propdev/anaconda2/bin:/home/propdev/anaconda2/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/propdev/.local/bin:/home/propdev/bin
[propdev@localhost ~]$ nvcc
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
nvcc fatal : No input files specified; use option --help for more information
[propdev@localhost ~]$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44

I obtain what I desired – I can use nvcc at the command line.

To get the samples that use OpenGL, be sure to have glut and/or freeglut installed:

dnf install freeglut freeglut-devel

Now for some bloody reason (please let me know), the command

$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64\
${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

still didn’t allow my CUDA programs to utilize the libraries in that lib64 subdirectory of the CUDA Toolkit. It seems like the programs, or the OS, weren’t seeing the link that should be there in /usr/lib64.

What did work was in here, libcublas.so.7.0: cannot open shared object file, with the solution at the end from atv, with an answer originally from txbob (most likely Robert Crovella, of github)

Solved. Finally I did:

sudo echo "/usr/local/cuda-7.0/lib64" > /etc/ld.so.conf.d/cuda.conf
sudo ldconfig

Thanks a lot txbob!

This is what I did:

[root@localhost ~]# sudo echo "/usr/local/cuda-8.0/lib64" > /etc/ld.so.conf.d/cuda.conf
[root@localhost ~]# sudo ldconfig
ldconfig: /usr/local/cuda-7.5/lib64/libcudnn.so.5 is not a symbolic link

and it worked; C++ programs compile with my make files.

Also, files, including in the Samples for the 8.0 Toolkit, using nvrtc compiled and worked.

Fun Nvidia video card version information, details

Doing

nvidia-smi

at the command prompt gave me this:

[propdev@localhost ~]$ nvidia-smi
Mon Oct 31 15:28:30 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57 Driver Version: 367.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 980 Ti Off | 0000:03:00.0 On | N/A |
| 0% 50C P8 22W / 275W | 423MiB / 6077MiB | 1% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1349 G /usr/libexec/Xorg 50MiB |
| 0 19440 G /usr/libexec/Xorg 162MiB |
| 0 19645 G /usr/bin/gnome-shell 127MiB |
| 0 24621 G /usr/libexec/Xorg 6MiB |
+-----------------------------------------------------------------------------+

C++, C++11/C++14

Posts, notes, resources on C++, C++11/C++14.


C++11 timing code execution

cf. Solarian Programmer’s excellent write-up, C++11 timing code performance

C++11/14 version – the classic algorithm of binary search, using C++11/14 vector library

I implemented the classic binary search algorithm using the C++11/14 vector library in vectors_binarysearch.cpp, inside the folder ../Cpp14 of my CompPhys github repository.
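
A minimal sketch of the idea (the actual code is in vectors_binarysearch.cpp; the function name here is mine, for illustration):

#include <iostream>
#include <vector>

// Classic binary search on a sorted std::vector; returns the index of key,
// or -1 if key is not present.
int binary_search_idx(const std::vector<int>& a, int key)
{
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
        if (a[mid] == key)      return mid;
        else if (a[mid] < key)  lo = mid + 1;
        else                    hi = mid - 1;
    }
    return -1;
}

int main()
{
    std::vector<int> v {1, 3, 5, 7, 9, 11};
    std::cout << binary_search_idx(v, 7) << '\n';   // prints 3
    std::cout << binary_search_idx(v, 4) << '\n';   // prints -1
}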

functor

C++ Tutorial – Functors(Function Objects) – 2016

My implementation of the examples above for functors here on github:

functors.cpp
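
As a minimal sketch of what a functor is (my own example, not necessarily the exact contents of functors.cpp): a class that overloads operator(), so its instances are callable objects that can also carry state.

#include <algorithm>
#include <iostream>
#include <vector>

// A functor: operator() is overloaded, and the object carries state (the scale factor).
struct Scale {
    double factor;
    explicit Scale(double f) : factor(f) {}
    double operator()(double x) const { return factor * x; }
};

int main()
{
    std::vector<double> v {1.0, 2.0, 3.0};
    std::transform(v.begin(), v.end(), v.begin(), Scale(10.0));
    for (double x : v) std::cout << x << ' ';   // prints 10 20 30
    std::cout << '\n';
}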

C++ templates, class templates and how to put them into header files; Useful links related to splitting up header files for declaration, split to .cpp files for definitions

While I already wrote about it in the README.md of my github repository CompPhys, folder Cpp (Useful links related to splitting up header files for declaration, split to .cpp files for definitions), I had to look it up again, and so I’ll reiterate that material here.

One of the articles linked there had an excellent treatment detailing how understandably confusing it is to split templates up into header files in C++. The examples and options are comprehensive and crystal-clear, and it gives the reason why.
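
The short version of the resolution, sketched here with a made-up Stack<T> (my illustration, not the linked article's code): the compiler must see the template's full definitions at each point of instantiation, so keep the member definitions visible from the header, either in the header itself or in a file that the header #includes at the bottom.

// Stack.h -- a hypothetical class template kept entirely in the header,
// so every translation unit that uses Stack<T> sees the definitions.
#ifndef STACK_H
#define STACK_H
#include <vector>

template <typename T>
class Stack {
public:
    void push(const T& x) { data_.push_back(x); }
    T pop();                      // declared here, defined below in the same header
    bool empty() const { return data_.empty(); }
private:
    std::vector<T> data_;
};

template <typename T>
T Stack<T>::pop()
{
    T top = data_.back();
    data_.pop_back();
    return top;
}
#endif

(The alternative, putting the definitions in a .cpp file, only works if that .cpp file explicitly instantiates every Stack<T> you will ever use.)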


Texture Object API

struct cudaResourceDesc resDesc

struct myStruct myVariable;

struct Leopard leopard;
leopard.base.animal.weight = 44;

Struct declaration
http://en.cppreference.com/w/c/language/struct

Google search
struct inherit copy from another struct declare c c++

http://stackoverflow.com/questions/1114349/struct-inheritance-in-c

Typesafe inheritance in C
http://www.deleveld.dds.nl/inherit.htm

Difference between ‘struct’ and ‘typedef struct’ in C++?
http://stackoverflow.com/questions/612328/difference-between-struct-and-typedef-struct-in-c

Google search
typedef struct

useful

typedef struct vs struct definitions [duplicate]
http://stackoverflow.com/questions/1675351/typedef-struct-vs-struct-definitions

Google search terms

how to declare an instance of a struct C++

Proper way to initialize C++ structs

http://stackoverflow.com/questions/5914422/proper-way-to-initialize-c-structs
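
To tie those links together, here is a small sketch (the type names Animal, Base, Vec3 are invented for illustration) of a plain struct declaration, a typedef struct, and the C-style “inheritance”-by-composition that the Leopard snippet above is using:

#include <cstdio>

struct Animal { double weight; };

// C-style "inheritance": the base struct is the first member,
// which is why leopard.base.animal.weight works above.
struct Base    { struct Animal animal; };
struct Leopard { struct Base base; };

// typedef struct: in C this lets you drop the `struct` keyword;
// in C++, `struct Vec3 { ... };` alone already introduces the name Vec3.
typedef struct { float x, y, z; } Vec3;

int main()
{
    struct Leopard leopard;            // declaring an instance of a struct
    leopard.base.animal.weight = 44;
    Vec3 v = {1.0f, 2.0f, 3.0f};       // aggregate (brace) initialization
    std::printf("%f %f\n", leopard.base.animal.weight, static_cast<double>(v.x));
    return 0;
}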

Abstract Algebra

I reviewed a little about rings and polynomial rings over this past weekend and I wanted to try to collect the resources I came across here. My aim for abstract algebra is to apply it to (of course) topological field theory AND (unprecedentedly) to aerospace engineering, namely combustion CFD (computational fluid dynamics).

Cornell Math 4320 Solutions (probably more “around” this link)

Using Sage Math for Abstract Algebra, especially in conjunction with jupyter notebooks, should be the way forward in the 21st century, and from there, digging deeper into various C/C++ libraries. The jupyter notebook I keep is in my qSApoly github repository:

https://github.com/ernestyalumni/qSApoly/blob/master/AbstractAlgebra.ipynb

Givaro – C++ library for arithmetic and algebraic computations

https://github.com/linbox-team/givaro

http://givaro.forge.imag.fr/

https://admin.fedoraproject.org/pkgdb/package/rpms/givaro/

Fedora package for givaro, givaro.

Electromagnetism

P.S. I just enabled Markdown for wordpress and so I’m giving wordpress another shot with Markdown, but otherwise, I find myself updating my github and writing there much more frequently because of their excellent and superior version control, terminal commands, and web interface, displaying automatically pdfs, markdown code, etc.

I was reviewing Electric propulsion which led me to review Electromagnetism, in view of differential geometry, and its covariant formulation.

All of Maxwell’s equations are contained in the following 2 statements:

$dF = 0$
$d{\star}F = 4\pi\,{\star}J$ (cgs) or $d{\star}F = {\star}J$ (SI).
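
Here $F$ is the electromagnetic field strength 2-form and $J$ is the current 1-form. With the conventions used in the notebook below (where EM_F is built as $B + E \wedge dt$),
$F = B + E \wedge dt$, with $B = B_{12}\, dx \wedge dy + B_{23}\, dy \wedge dz + B_{31}\, dz \wedge dx$ and $E = E_1\, dx + E_2\, dy + E_3\, dz$,
and $J = -\rho\, dt + j_1\, dx + j_2\, dy + j_3\, dz$.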

Here is a reprint of my jupyter notebook on my github repository Propulsion (you’ll need Sage Math and sagemanifolds) EMsage.ipynb where I show with the Minkowski metric that you recover the usual form of Maxwell’s equations, and give the differential form formulation of the Lorentz Force. This is usually shown “by hand” in textbooks, but I show it with Sage Math and sagemanifolds (stop doing tedious calculations by hand!). I want to encourage people to try their own metric beyond the Minkowski metric and make calculations for Electromagnetism in different spacetimes.

Electromagnetism

Running Sage Math and sagemanifolds with jupyter notebooks tip: First, I’ve found that compiling the development Sage Math version from the github source (see my notes on this under the “Computers” section of my wordpress blog) works with sagemanifolds, whereas installing sagemanifolds into the binary that you unpack out (the click, download, double-click) “breaks” Sage so that it doesn’t run. Also, following the Sage Math instructions on their website for building from the source didn’t work for me (!!!???).

As a tip on how to run this jupyter notebook and have sagemanifolds available: you’d want to be in the working directory you desire (e.g. Propulsion/EM), but your Sage Math build may be somewhere else (e.g. /home/topolo/Public/sage). From the working directory you’re currently in, do this:
/home/topolo/Public/sage/sage -n jupyter

For the rationale, or the math, and how the math corresponds directly to the Sage Math code here, you’re going to want to look at Gravity_Notes_grande.pdf in my Gravite repository, and in there, the $\mathbb{R}^3$ section, because I define the charts and atlases for Euclidean space $\mathbb{R}^3$ as a smooth manifold.

M = Manifold(4,'M',r'M')
cart_ch = M.chart('t x y z')
U = M.open_subset('U',coord_def={cart_ch: (cart_ch[1]<0, cart_ch[2]!=0)})
cart_ch_U = cart_ch.restrict(U)
sph_ch = U.chart(r'tsp:(-oo,oo):t_{sp} rh:(0,+oo):\rho th:(0,pi):\theta ph:(0,2*pi):\phi')
tsph, rh,th,ph = [sph_ch[i[0]] for i in M.index_generator(1)]
transit_sph_to_cart = sph_ch.transition_map(cart_ch_U, 
                                            [tsph,rh*sin(th)*cos(ph),rh*sin(th)*sin(ph),rh*cos(th)])
Sphnorm = sqrt(sum([cart_ch_U[i]**2 for i in range(1,4)]))
transit_sph_to_cart.set_inverse( cart_ch[0], Sphnorm, 
                                atan2(sqrt( sum([ cart_ch_U[i]**2 for i in range(1,3)])),cart_ch_U[3]),
                                                            atan2(cart_ch_U[2],cart_ch_U[1]))
cyl_ch = U.chart(r'tcy:(-oo,oo):t_{cy} r:(0,+oo) phi:(0,2*pi):\phi zc')
tcy, r,phi,zc = [cyl_ch[i[0]] for i in M.index_generator(1)]
transit_cyl_to_cart = cyl_ch.transition_map(cart_ch_U, [tcy,r*cos(phi),r*sin(phi),zc])
transit_cyl_to_cart.set_inverse(cart_ch_U[0], sqrt(cart_ch_U[1]**2+cart_ch_U[2]**2), 
                                    atan2( cart_ch_U[2],cart_ch_U[1]), cart_ch_U[3])

Note the mostly positive (-+++) convention I use for the Minkowski metric.

g = M.riemannian_metric('g')
g[0,0] = -1 
for i in range(1,4): g[i,i] = 1

Electric Field

def make_E(ch):
    """
    make_E = make_E(ch)
    make_E creates a time-INDEPENDENT electric field as a 1-form

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    Ecomplst = []
    for i in range(1,4):
        Earglst = ['E'+str(i),] + list(ch[1:])
        Ecomplst.append( function(Earglst[0])(*Earglst[1:]) )
    Ecomplst = [0,]+Ecomplst
    E = ch.domain().diff_form(1)
    E[ch.frame(),:,ch] = Ecomplst
    return E

def make_Et(ch):
    """
    make_Et = make_Et(ch)
    make_Et creates a time-DEPENDENT electric field as a 1-form

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    Ecomplst = []
    for i in range(1,4):
        Earglst = ['E'+str(i),] + list(ch[:])
        Ecomplst.append( function(Earglst[0])(*Earglst[1:]) )
    Ecomplst = [0,]+Ecomplst
    E = ch.domain().diff_form(1)
    E[ch.frame(),:,ch] = Ecomplst
    return E

Examples of using make_E, make_Et and displaying the results

print make_E(cart_ch).display()
make_Et(sph_ch).display(sph_ch.frame(),sph_ch)
E1(x, y, z) dx + E2(x, y, z) dy + E3(x, y, z) dz





E1(tsp, rh, th, ph) drh + E2(tsp, rh, th, ph) dth + E3(tsp, rh, th, ph) dph

Magnetic Field

Programming note: make_B and make_Bt

import itertools
def make_B(ch):
    """
    make_B = make_B(ch)
    make_B creates a time-INDEPENDENT magnetic field as a 2-form

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    B = ch.domain().diff_form(2)
    farglst = list(ch[1:]) # function argument list, e.g. (x,y,z)

    B[ch.frame(),1,2,ch] = function('B_12')(*farglst)
    B[ch.frame(),2,3,ch] = function('B_23')(*farglst)
    B[ch.frame(),3,1,ch] = function('B_31')(*farglst)

    return B

def make_Bt(ch):
    """
    make_Bt = make_Bt(ch)
    make_Bt creates a time-DEPENDENT magnetic field as a 2-form

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    B = ch.domain().diff_form(2)
    farglst = list(ch[:]) # function argument list, e.g. (x,y,z)

    B[ch.frame(),1,2,ch] = function('B_12')(*farglst)
    B[ch.frame(),2,3,ch] = function('B_23')(*farglst)
    B[ch.frame(),3,1,ch] = function('B_31')(*farglst)

    return B
print make_Bt(cart_ch).display()
make_B(cyl_ch).display(cyl_ch.frame(),cyl_ch)
B_12(t, x, y, z) dx/\dy - B_31(t, x, y, z) dx/\dz + B_23(t, x, y, z) dy/\dz





B_12(r, phi, zc) dr/\dphi - B_31(r, phi, zc) dr/\dzc + B_23(r, phi, zc) dphi/\dzc

Notice that the orientation is correct (with the right hand rule).

Electromagnetic field 2-form

EM_F = make_Bt(cart_ch) + make_Et(cart_ch).wedge(cart_ch.coframe()[0] )
EM_F.display()
-E1(t, x, y, z) dt/\dx - E2(t, x, y, z) dt/\dy - E3(t, x, y, z) dt/\dz + B_12(t, x, y, z) dx/\dy - B_31(t, x, y, z) dx/\dz + B_23(t, x, y, z) dy/\dz
EM_F[:]
[                0   -E1(t, x, y, z)   -E2(t, x, y, z)   -E3(t, x, y, z)]
[   E1(t, x, y, z)                 0  B_12(t, x, y, z) -B_31(t, x, y, z)]
[   E2(t, x, y, z) -B_12(t, x, y, z)                 0  B_23(t, x, y, z)]
[   E3(t, x, y, z)  B_31(t, x, y, z) -B_23(t, x, y, z)                 0]
latex(EM_F.exterior_der()[:]); # delete the semi-colon ; and you can get the LaTeX code-I suppress it

$ dF = \left[\left[\left[0, 0, 0, 0\right], \left[0, 0, \frac{\partial\,B_{12}}{\partial {t}} – \frac{\partial\,E_{1}}{\partial y} + \frac{\partial\,E_{2}}{\partial x}, -\frac{\partial\,B_{31}}{\partial {t}} – \frac{\partial\,E_{1}}{\partial z} + \frac{\partial\,E_{3}}{\partial x}\right], \left[0, -\frac{\partial\,B_{12}}{\partial {t}} + \frac{\partial\,E_{1}}{\partial y} – \frac{\partial\,E_{2}}{\partial x}, 0, \frac{\partial\,B_{23}}{\partial {t}} – \frac{\partial\,E_{2}}{\partial z} + \frac{\partial\,E_{3}}{\partial y}\right], \left[0, \frac{\partial\,B_{31}}{\partial {t}} + \frac{\partial\,E_{1}}{\partial z} – \frac{\partial\,E_{3}}{\partial x}, -\frac{\partial\,B_{23}}{\partial {t}} + \frac{\partial\,E_{2}}{\partial z} – \frac{\partial\,E_{3}}{\partial y}, 0\right]\right], \left[\left[0, 0, -\frac{\partial\,B_{12}}{\partial {t}} + \frac{\partial\,E_{1}}{\partial y} – \frac{\partial\,E_{2}}{\partial x}, \frac{\partial\,B_{31}}{\partial {t}} + \frac{\partial\,E_{1}}{\partial z} – \frac{\partial\,E_{3}}{\partial x}\right], \left[0, 0, 0, 0\right], \left[\frac{\partial\,B_{12}}{\partial {t}} – \frac{\partial\,E_{1}}{\partial y} + \frac{\partial\,E_{2}}{\partial x}, 0, 0, \frac{\partial\,B_{12}}{\partial z} + \frac{\partial\,B_{23}}{\partial x} + \frac{\partial\,B_{31}}{\partial y}\right], \left[-\frac{\partial\,B_{31}}{\partial {t}} – \frac{\partial\,E_{1}}{\partial z} + \frac{\partial\,E_{3}}{\partial x}, 0, -\frac{\partial\,B_{12}}{\partial z} – \frac{\partial\,B_{23}}{\partial x} – \frac{\partial\,B_{31}}{\partial y}, 0\right]\right], \left[\left[0, \frac{\partial\,B_{12}}{\partial {t}} – \frac{\partial\,E_{1}}{\partial y} + \frac{\partial\,E_{2}}{\partial x}, 0, -\frac{\partial\,B_{23}}{\partial {t}} + \frac{\partial\,E_{2}}{\partial z} – \frac{\partial\,E_{3}}{\partial y}\right], \left[-\frac{\partial\,B_{12}}{\partial {t}} + \frac{\partial\,E_{1}}{\partial y} – \frac{\partial\,E_{2}}{\partial x}, 0, 0, -\frac{\partial\,B_{12}}{\partial z} – \frac{\partial\,B_{23}}{\partial x} – \frac{\partial\,B_{31}}{\partial y}\right], \left[0, 0, 0, 0\right], \left[\frac{\partial\,B_{23}}{\partial {t}} – \frac{\partial\,E_{2}}{\partial z} + \frac{\partial\,E_{3}}{\partial y}, \frac{\partial\,B_{12}}{\partial z} + \frac{\partial\,B_{23}}{\partial x} + \frac{\partial\,B_{31}}{\partial y}, 0, 0\right]\right], \left[\left[0, -\frac{\partial\,B_{31}}{\partial {t}} – \frac{\partial\,E_{1}}{\partial z} + \frac{\partial\,E_{3}}{\partial x}, \frac{\partial\,B_{23}}{\partial {t}} – \frac{\partial\,E_{2}}{\partial z} + \frac{\partial\,E_{3}}{\partial y}, 0\right], \left[\frac{\partial\,B_{31}}{\partial {t}} + \frac{\partial\,E_{1}}{\partial z} – \frac{\partial\,E_{3}}{\partial x}, 0, \frac{\partial\,B_{12}}{\partial z} + \frac{\partial\,B_{23}}{\partial x} + \frac{\partial\,B_{31}}{\partial y}, 0\right], \left[-\frac{\partial\,B_{23}}{\partial {t}} + \frac{\partial\,E_{2}}{\partial z} – \frac{\partial\,E_{3}}{\partial y}, -\frac{\partial\,B_{12}}{\partial z} – \frac{\partial\,B_{23}}{\partial x} – \frac{\partial\,B_{31}}{\partial y}, 0, 0\right], \left[0, 0, 0, 0\right]\right]\right] = 0 $

EM_F.hodge_dual(g)[:]
[                  0  I*B_23(t, x, y, z)  I*B_31(t, x, y, z)  I*B_12(t, x, y, z)]
[-I*B_23(t, x, y, z)                   0    I*E3(t, x, y, z)   -I*E2(t, x, y, z)]
[-I*B_31(t, x, y, z)   -I*E3(t, x, y, z)                   0    I*E1(t, x, y, z)]
[-I*B_12(t, x, y, z)    I*E2(t, x, y, z)   -I*E1(t, x, y, z)                   0]
g[:]
[-1  0  0  0]
[ 0  1  0  0]
[ 0  0  1  0]
[ 0  0  0  1]

$d{\star}F$ (for $d{\star}F = 4\pi\,{\star}J$)

EM_F.hodge_dual(g).exterior_der()[:]
[[[0, 0, 0, 0],
  [0,
   0,
   I*d(B_23)/dy - I*d(B_31)/dx + I*d(E3)/dt,
   -I*d(B_12)/dx + I*d(B_23)/dz - I*d(E2)/dt],
  [0,
   -I*d(B_23)/dy + I*d(B_31)/dx - I*d(E3)/dt,
   0,
   -I*d(B_12)/dy + I*d(B_31)/dz + I*d(E1)/dt],
  [0,
   I*d(B_12)/dx - I*d(B_23)/dz + I*d(E2)/dt,
   I*d(B_12)/dy - I*d(B_31)/dz - I*d(E1)/dt,
   0]],
 [[0,
   0,
   -I*d(B_23)/dy + I*d(B_31)/dx - I*d(E3)/dt,
   I*d(B_12)/dx - I*d(B_23)/dz + I*d(E2)/dt],
  [0, 0, 0, 0],
  [I*d(B_23)/dy - I*d(B_31)/dx + I*d(E3)/dt,
   0,
   0,
   I*d(E1)/dx + I*d(E2)/dy + I*d(E3)/dz],
  [-I*d(B_12)/dx + I*d(B_23)/dz - I*d(E2)/dt,
   0,
   -I*d(E1)/dx - I*d(E2)/dy - I*d(E3)/dz,
   0]],
 [[0,
   I*d(B_23)/dy - I*d(B_31)/dx + I*d(E3)/dt,
   0,
   I*d(B_12)/dy - I*d(B_31)/dz - I*d(E1)/dt],
  [-I*d(B_23)/dy + I*d(B_31)/dx - I*d(E3)/dt,
   0,
   0,
   -I*d(E1)/dx - I*d(E2)/dy - I*d(E3)/dz],
  [0, 0, 0, 0],
  [-I*d(B_12)/dy + I*d(B_31)/dz + I*d(E1)/dt,
   I*d(E1)/dx + I*d(E2)/dy + I*d(E3)/dz,
   0,
   0]],
 [[0,
   -I*d(B_12)/dx + I*d(B_23)/dz - I*d(E2)/dt,
   -I*d(B_12)/dy + I*d(B_31)/dz + I*d(E1)/dt,
   0],
  [I*d(B_12)/dx - I*d(B_23)/dz + I*d(E2)/dt,
   0,
   I*d(E1)/dx + I*d(E2)/dy + I*d(E3)/dz,
   0],
  [I*d(B_12)/dy - I*d(B_31)/dz - I*d(E1)/dt,
   -I*d(E1)/dx - I*d(E2)/dy - I*d(E3)/dz,
   0,
   0],
  [0, 0, 0, 0]]]
latex( EM_F.hodge_dual(g).exterior_der()[:] ); # delete the semi-colon ; and you can get the LaTeX code-I suppress it

$\left[\left[\left[0, 0, 0, 0\right], \left[0, 0, i \, \frac{\partial\,B_{23}}{\partial y} – i \, \frac{\partial\,B_{31}}{\partial x} + i \, \frac{\partial\,E_{3}}{\partial {t}}, -i \, \frac{\partial\,B_{12}}{\partial x} + i \, \frac{\partial\,B_{23}}{\partial z} – i \, \frac{\partial\,E_{2}}{\partial {t}}\right], \left[0, -i \, \frac{\partial\,B_{23}}{\partial y} + i \, \frac{\partial\,B_{31}}{\partial x} – i \, \frac{\partial\,E_{3}}{\partial {t}}, 0, -i \, \frac{\partial\,B_{12}}{\partial y} + i \, \frac{\partial\,B_{31}}{\partial z} + i \, \frac{\partial\,E_{1}}{\partial {t}}\right], \left[0, i \, \frac{\partial\,B_{12}}{\partial x} – i \, \frac{\partial\,B_{23}}{\partial z} + i \, \frac{\partial\,E_{2}}{\partial {t}}, i \, \frac{\partial\,B_{12}}{\partial y} – i \, \frac{\partial\,B_{31}}{\partial z} – i \, \frac{\partial\,E_{1}}{\partial {t}}, 0\right]\right], \left[\left[0, 0, -i \, \frac{\partial\,B_{23}}{\partial y} + i \, \frac{\partial\,B_{31}}{\partial x} – i \, \frac{\partial\,E_{3}}{\partial {t}}, i \, \frac{\partial\,B_{12}}{\partial x} – i \, \frac{\partial\,B_{23}}{\partial z} + i \, \frac{\partial\,E_{2}}{\partial {t}}\right], \left[0, 0, 0, 0\right], \left[i \, \frac{\partial\,B_{23}}{\partial y} – i \, \frac{\partial\,B_{31}}{\partial x} + i \, \frac{\partial\,E_{3}}{\partial {t}}, 0, 0, i \, \frac{\partial\,E_{1}}{\partial x} + i \, \frac{\partial\,E_{2}}{\partial y} + i \, \frac{\partial\,E_{3}}{\partial z}\right], \left[-i \, \frac{\partial\,B_{12}}{\partial x} + i \, \frac{\partial\,B_{23}}{\partial z} – i \, \frac{\partial\,E_{2}}{\partial {t}}, 0, -i \, \frac{\partial\,E_{1}}{\partial x} – i \, \frac{\partial\,E_{2}}{\partial y} – i \, \frac{\partial\,E_{3}}{\partial z}, 0\right]\right], \left[\left[0, i \, \frac{\partial\,B_{23}}{\partial y} – i \, \frac{\partial\,B_{31}}{\partial x} + i \, \frac{\partial\,E_{3}}{\partial {t}}, 0, i \, \frac{\partial\,B_{12}}{\partial y} – i \, \frac{\partial\,B_{31}}{\partial z} – i \, \frac{\partial\,E_{1}}{\partial {t}}\right], \left[-i \, \frac{\partial\,B_{23}}{\partial y} + i \, \frac{\partial\,B_{31}}{\partial x} – i \, \frac{\partial\,E_{3}}{\partial {t}}, 0, 0, -i \, \frac{\partial\,E_{1}}{\partial x} – i \, \frac{\partial\,E_{2}}{\partial y} – i \, \frac{\partial\,E_{3}}{\partial z}\right], \left[0, 0, 0, 0\right], \left[-i \, \frac{\partial\,B_{12}}{\partial y} + i \, \frac{\partial\,B_{31}}{\partial z} + i \, \frac{\partial\,E_{1}}{\partial {t}}, i \, \frac{\partial\,E_{1}}{\partial x} + i \, \frac{\partial\,E_{2}}{\partial y} + i \, \frac{\partial\,E_{3}}{\partial z}, 0, 0\right]\right], \left[\left[0, -i \, \frac{\partial\,B_{12}}{\partial x} + i \, \frac{\partial\,B_{23}}{\partial z} – i \, \frac{\partial\,E_{2}}{\partial {t}}, -i \, \frac{\partial\,B_{12}}{\partial y} + i \, \frac{\partial\,B_{31}}{\partial z} + i \, \frac{\partial\,E_{1}}{\partial {t}}, 0\right], \left[i \, \frac{\partial\,B_{12}}{\partial x} – i \, \frac{\partial\,B_{23}}{\partial z} + i \, \frac{\partial\,E_{2}}{\partial {t}}, 0, i \, \frac{\partial\,E_{1}}{\partial x} + i \, \frac{\partial\,E_{2}}{\partial y} + i \, \frac{\partial\,E_{3}}{\partial z}, 0\right], \left[i \, \frac{\partial\,B_{12}}{\partial y} – i \, \frac{\partial\,B_{31}}{\partial z} – i \, \frac{\partial\,E_{1}}{\partial {t}}, -i \, \frac{\partial\,E_{1}}{\partial x} – i \, \frac{\partial\,E_{2}}{\partial y} – i \, \frac{\partial\,E_{3}}{\partial z}, 0, 0\right], \left[0, 0, 0, 0\right]\right]\right]$

EM_F.hodge_dual(g).exterior_der().hodge_dual(g)[:] 
[d(E1)/dx + d(E2)/dy + d(E3)/dz,
 -d(B_12)/dy + d(B_31)/dz + d(E1)/dt,
 d(B_12)/dx - d(B_23)/dz + d(E2)/dt,
 d(B_23)/dy - d(B_31)/dx + d(E3)/dt]
latex(EM_F.hodge_dual(g).exterior_der().hodge_dual(g)[:]); 
# delete the semi-colon ; and you can get the LaTeX code-I suppress it
\left[\frac{\partial\,E_{1}}{\partial x} + \frac{\partial\,E_{2}}{\partial y} + \frac{\partial\,E_{3}}{\partial z}, -\frac{\partial\,B_{12}}{\partial y} + \frac{\partial\,B_{31}}{\partial z} + \frac{\partial\,E_{1}}{\partial {t}}, \frac{\partial\,B_{12}}{\partial x} - \frac{\partial\,B_{23}}{\partial z} + \frac{\partial\,E_{2}}{\partial {t}}, \frac{\partial\,B_{23}}{\partial y} - \frac{\partial\,B_{31}}{\partial x} + \frac{\partial\,E_{3}}{\partial {t}}\right]

$\left[\frac{\partial\,E_{1}}{\partial x} + \frac{\partial\,E_{2}}{\partial y} + \frac{\partial\,E_{3}}{\partial z}, -\frac{\partial\,B_{12}}{\partial y} + \frac{\partial\,B_{31}}{\partial z} + \frac{\partial\,E_{1}}{\partial {t}}, \frac{\partial\,B_{12}}{\partial x} - \frac{\partial\,B_{23}}{\partial z} + \frac{\partial\,E_{2}}{\partial {t}}, \frac{\partial\,B_{23}}{\partial y} - \frac{\partial\,B_{31}}{\partial x} + \frac{\partial\,E_{3}}{\partial {t}}\right]$

Current 1-form, current conservation, and the other side (Right-Hand Side (RHS)) of $d{\star}F$

def make_J(ch):
    """
    make_J = make_J(ch)
    make_J creates a time-INDEPENDENT current as a 1-form

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    Jcomplst = []
    for i in range(1,4):
        Jarglst = ['j'+str(i),] + list(ch[1:])
        Jcomplst.append( function(Jarglst[0])(*Jarglst[1:]) )
    Jcomplst = [-function('rho')(*list(ch[1:])),] +Jcomplst
    J = ch.domain().diff_form(1)
    J[ch.frame(),:,ch] = Jcomplst
    return J


def make_Jt(ch):
    """
    make_Jt = make_Jt(ch)
    make_Jt creates a time-DEPENDENT current as a 1-form

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    Jcomplst = []
    for i in range(1,4):
        Jarglst = ['j'+str(i),] + list(ch[:])
        Jcomplst.append( function(Jarglst[0])(*Jarglst[1:]) )
    Jcomplst = [-function('rho')(*list(ch[:])),]+Jcomplst
    J = ch.domain().diff_form(1)
    J[ch.frame(),:,ch] = Jcomplst
    return J

print make_Jt(cart_ch).display() # these are examples of displaying the 4-current as 1-form in 
                                    # Cartesian and cylindrical coordinates
make_Jt(cyl_ch).display(cyl_ch.frame(),cyl_ch)
-rho(t, x, y, z) dt + j1(t, x, y, z) dx + j2(t, x, y, z) dy + j3(t, x, y, z) dz





-rho(tcy, r, phi, zc) dtcy + j1(tcy, r, phi, zc) dr + j2(tcy, r, phi, zc) dphi + j3(tcy, r, phi, zc) dzc
make_Jt(cart_ch).hodge_dual(g).hodge_dual(g).display()
rho(t, x, y, z) dt - j1(t, x, y, z) dx - j2(t, x, y, z) dy - j3(t, x, y, z) dz

So here, I had successfully shown that ${\star}\,d{\star}F = {\star}{\star}J$, i.e. $d{\star}F = {\star}J$ (or $d{\star}F = 4\pi\,{\star}J$ in cgs units), thus recovering Gauss’s law and Ampère’s law.

latex( make_Jt(cart_ch).hodge_dual(g).hodge_dual(g)[:]);
# delete the semi-colon ; and you can get the LaTeX code-I suppress it

$\boxed{ \left[\frac{\partial\,E_{1}}{\partial x} + \frac{\partial\,E_{2}}{\partial y} + \frac{\partial\,E_{3}}{\partial z}, -\frac{\partial\,B_{12}}{\partial y} + \frac{\partial\,B_{31}}{\partial z} + \frac{\partial\,E_{1}}{\partial {t}}, \frac{\partial\,B_{12}}{\partial x} - \frac{\partial\,B_{23}}{\partial z} + \frac{\partial\,E_{2}}{\partial {t}}, \frac{\partial\,B_{23}}{\partial y} - \frac{\partial\,B_{31}}{\partial x} + \frac{\partial\,E_{3}}{\partial {t}}\right] = \left[\rho\left({t}, x, y, z\right), -j_{1}\left({t}, x, y, z\right), -j_{2}\left({t}, x, y, z\right), -j_{3}\left({t}, x, y, z\right)\right] }$

Current conservation is easily calculated, $d{\star}J = 0$:

make_Jt(cart_ch).hodge_dual(g).exterior_der().hodge_dual(g).display(cart_ch)
M --> R
(t, x, y, z) |--> d(j1)/dx + d(j2)/dy + d(j3)/dz + d(rho)/dt

Lorentz Force

def make_beta(ch):
    """
    make_beta = make_beta(ch)
    make_beta creates a time-INDEPENDENT velocity field

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    betacomplst = []
    for i in range(1,4):
        betaarglst = ['beta'+str(i),] + list(ch[1:])
        betacomplst.append( function(betaarglst[0])(*betaarglst[1:]) )
    betacomplst = [1,]+betacomplst
    beta = ch.domain().vector_field()
    beta[ch.frame(),:,ch] = betacomplst
    return beta

def make_betat(ch):
    """
    make_betat = make_betat(ch)
    make_betat creates a time-DEPENDENT velocity field

    INPUT/PARAMETER
    ch = sagemanifold chart
    """
    betacomplst = []
    for i in range(1,4):
        betaarglst = ['beta'+str(i),] + list(ch[:])
        betacomplst.append( function(betaarglst[0])(*betaarglst[1:]) )
    betacomplst = [1,]+betacomplst
    beta = ch.domain().vector_field()
    beta[ch.frame(),:,ch] = betacomplst
    return beta
make_beta(cart_ch).display()
d/dt + beta1(x, y, z) d/dx + beta2(x, y, z) d/dy + beta3(x, y, z) d/dz

For interior products, you’re going to have to dig into how sagemanifolds implements Tensor products, tensor contractions, and the use of index notation, as sagemanifolds doesn’t have a “stand-alone” interior product function. From my EuclideanManifold.py implementation in sagemanifolds, look at my curl function (def curl) as a template for implementing interior products.

betaeg = make_betat(cart_ch)
Beg = make_Bt(cart_ch)
(betaeg['^i']*Beg['_ij']).display()
(-B_12(t, x, y, z)*beta2(t, x, y, z) + B_31(t, x, y, z)*beta3(t, x, y, z)) dx + (B_12(t, x, y, z)*beta1(t, x, y, z) - B_23(t, x, y, z)*beta3(t, x, y, z)) dy + (-B_31(t, x, y, z)*beta1(t, x, y, z) + B_23(t, x, y, z)*beta2(t, x, y, z)) dz

So we now have a prescription on how to implement both the interior product and the curl of 2 “vectors” –
if you want this:

$-i_{\mathbf{\beta}} B$ which is the differential form version of $\mathbf{\beta} \times B$ (curl), then do this in sagemanifolds:

-betaeg['^i']*Beg['_ij']

q = var('q',"real") # define a single charge variable in Sage Math
LorentzForce1form =  make_Et(cart_ch) - make_beta(cart_ch)['^i']*make_Bt(cart_ch)['_ij']

EY : 20160530 I have a question; is there a good way for Sage Math variables such as q in this case (var) to “play with” sagemanifolds tensors? For instance, I obtain this when I multiply a sagemanifolds 1-form by a Sage Math variable (var) q:

q * LorentzForce1form
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

 in ()
----> 1 q * LorentzForce1form


/home/topolo/Public/sage/src/sage/structure/element.pyx in sage.structure.element.ModuleElement.__mul__ (/home/topolo/Public/sage/src/build/cythonized/sage/structure/element.c:12191)()
   1369         if have_same_parent_c(left, right):
   1370             raise TypeError(arith_error_message(left, right, mul))
-> 1371         return coercion_model.bin_op(left, right, mul)
   1372 
   1373     def __imul__(left, right):


/home/topolo/Public/sage/src/sage/structure/coerce.pyx in sage.structure.coerce.CoercionModel_cache_maps.bin_op (/home/topolo/Public/sage/src/build/cythonized/sage/structure/coerce.c:9915)()
   1077         # We should really include the underlying error.
   1078         # This causes so much headache.
-> 1079         raise TypeError(arith_error_message(x,y,op))
   1080 
   1081     cpdef canonical_coercion(self, x, y):


TypeError: unsupported operand parent(s) for '*': '' and 'Free module /\^1(M) of 1-forms on the 4-dimensional differentiable manifold M'
5.*LorentzForce1form
1-form on the 4-dimensional differentiable manifold M

Nevertheless, for $q=1$, the 1-form version of the Lorentz force $F$ is given by the following. Keep in mind that we integrate 1-forms, we don’t integrate vectors (this goes back to how we transport vectors along a curve; 1-forms either circumvent this problem or are the most natural way to do integration on manifolds):

latex( LorentzForce1form.display() );

$F = \left( B_{12}\left({t}, x, y, z\right) \beta_{2}\left(x, y, z\right) - B_{31}\left({t}, x, y, z\right) \beta_{3}\left(x, y, z\right) + E_{1}\left({t}, x, y, z\right) \right) \mathrm{d} x + \left( -B_{12}\left({t}, x, y, z\right) \beta_{1}\left(x, y, z\right) + B_{23}\left({t}, x, y, z\right) \beta_{3}\left(x, y, z\right) + E_{2}\left({t}, x, y, z\right) \right) \mathrm{d} y + \left( B_{31}\left({t}, x, y, z\right) \beta_{1}\left(x, y, z\right) - B_{23}\left({t}, x, y, z\right) \beta_{2}\left(x, y, z\right) + E_{3}\left({t}, x, y, z\right) \right) \mathrm{d} z$


Fedora 23 workstation (Linux)+NVIDIA GeForce GTX 980 Ti: my experience, log of what I do (and find out)

(Screenshot: Fedora’s “Oh no! Something has gone wrong.” error screen)

Learn from my mistakes. Getting to this screen in Fedora 23 (Linux) is a mini-nightmare.



I most recently took delivery of a Titan workstation computer (thank you Titan Workstation Computers!), the Titan X199; here is some of the configuration:

  • Processor: Intel Xeon E5-1650 v3 Haswell 3.5GHz (3.8 GHz Turbo Boost) 140W 15MB L3 Cache 6 Core
  • Motherboard: MSI X99A SLI PLUS LGA 2011-v3 Intel X99 SATA 6GB/s USB 3.1 USB 3.0 ATX Intel Motherboard
  • Memory: 32GB (8x4GB) 288-Pin DDR4 2133 (PC4-17000) Desktop Memory
  • Power: 650W – Deepcool 650W ATX12V SLI Ready CrossFire Ready 80 PLUS GOLD Certified Active PFC Power Supply
  • Video Card 1: NVIDIA GeForce GTX 980 Ti 6GB 384-Bit DDR5 GDDR5 Video Card

First, I wanted to get a workstation to try out some CFD computations, parallelized to the processor(s) and GPU (hence the NVIDIA GeForce GTX 980 Ti, $649.99!), and some Deep Learning/Machine Learning computations, again parallelized out to the GPU. With the NVIDIA, I wanted to learn CUDA. Also, I wanted to build Sage Math from source (it requires a whopping 6 GB of hard drive space) and wanted a more capable computer to deal with building Sage Math all the time. Third, I wanted a workstation dedicated to Linux because a lot of the scientific/numerical computation programs work better/install or compile or make “better” in Linux, and I went with Fedora Linux, after doing a Google Search and reading about, more or less, the “best linux distro for scientific/numerical computation” (e.g. quora).

By the way, I am interested in CFD, Deep Learning/Machine Learning computations, and computational physics, and hence this new workstation, solely because I am currently “seeking opportunities in propulsion development” (i.e. I really want to work in the new companies for commercial space industry, SpaceX, Virgin Galactic, Blue Origin) and am trying to develop my skills (set) to help out in that area.

Coming back to this wordpress post, this post will be continuously updated (just like my other posts on TQFT, General Relativity, Propulsion (for aerospace stuff), and Computers; I wanted to focus on 4 main topics and collect all my writings into 4 blog posts, 1 for each topic, because I wanted to try to allow for deeper insight, than to fire off a cursory blog post, spamming followers; for instance the Computers post is a running log of various tips on programming, software, installation; Gravity has the stuff or links to, including links to my github repo, of what I pick up on GR), and it’ll link back to the Computers because this is my experience dealing with computers. So you can always easily navigate to this post from my simple menu with only 4 topics: TQFT, Gravity, Propulsion, Computers. PS. I wish wordpress had a github-like way of doing version control on blog posts and how you could Publish or push blog posts and media files from the command line, instead of the browser. I’m finding my github repositories way more easy (and fun) to update, either from the command line or browser.

Now, I was/am a sole Mac OS X/iOS user (I find myself losing my memory of how to use Windows as I haven’t used Windows in a long time; I used to edit my Windows key registry for fun) and switching to Fedora Linux so far has been a huge learning curve. I’m going to go ahead and write on tips, hints, advice, and things that I’ve learned, even if they might be rudimentary or too simple (or silly) to advanced users, because they were not simple to me (and hopefully they’ll help others)!

Oh no Fedora! Something has gone wrong; A problem has occurred (with Nvidia drm, rpm and nouveau drivers with a new Fedora kernel); panic, and how I recovered my system

Don't let this happen to your distro.
Learn from my experience: don’t let this happen to you and your Fedora 23 Workstation distro.

I had Fedora 23 Workstation (Linux) up and running, and with the fresh install, I first installed the NVIDIA proprietary drivers by simply following the instructions off their official website and using the driver itself.

Much later in the day, Fedora asked me to upgrade, via the Software program in Activities, and I did that with dnf system upgrade.

Now when I turn on the computer, it can’t go into X, i.e. the GUI, and it flickers sometimes:

(Screenshot: flickering, failed video device startup)

Taking a look at the built-in EFI boot(er) (I tried and tried again and again to reinstall off a Live USB disk, but it didn’t work because it went straight to this built-in EFI),

(Screenshot: the built-in EFI boot menu on the Titan X199)

The Fedora 4.2.3 kernel is the original kernel; 4.4.8 is the offending kernel (right after it installed and restarted automatically, Fedora’s X graphics environment doesn’t work anymore). None of the 3 boot options can load the graphics environment, and I’m not sure how to check what driver or package install was bad so I can remove it, reconfigure, and try to run again. With any of the 3 options I keep getting this until I Ctrl-Alt-F2.

Also, I was receiving error messages when I booted up and couldn’t get into my X11 X (graphical) windowing environment; I was stuck at the low-resolution command line.

From my Xorg.0.log, it said

(EE)
fedora linux Nvidia Failed to initialize the Nvidia kernel module please see the Nvidia system's kernel log for additional error messages and consult the NVIDIA README for details

No devices detected

Fatal server error:
no screens found(EE)

The symbol (EE) is where errors occurred in the boot up.

What happened to me has happened to other people when they use (or, in a kernel update, were switched over to) the nouveau drivers (open-source, I think?) for their NVIDIA GTX video card.

cf.
Nvidia driver causes boot hang when upgrading to Fedora 23

and also

Nvidia drivers not loading correctly on Fedora 23. However, I would not follow the advice given in those, for downgrading X11, nor that given in the stackexchange question, respectively.

Fix

Instead, what worked for me was following, to the letter, the If not true then false Fedora 23/22/21 nVidia Drivers Install Guide. This guide worked for me for reinstalling the proprietary NVIDIA drivers after a conflicting kernel upgrade or an accidental install of the nouveau or nvidia-drm drivers. Go there.

I also ended up back at the official NVIDIA Linux 64-bit drivers page, especially their Additional Information subpage, for instructions on how to install their proprietary drivers; that page is what helped me install the first time, and then reinstall, their driver. Also, keep in mind that you can uninstall using the same command

sh ./NVIDIA-Linux-x86_64-346.35.run --uninstall

but with the uninstall flag (look it up, I forgot the exact syntax of the uninstall command).

Before those steps mentioned above, I tried removing the kernel that was the last major change (it said it needed to install and restart; that upgrade and restart led to the “Oh no” screen of death)
cf.
http://www.labtestproject.com/using_linux/remove_fedora_kernel.html

From that page, I did the following commands:
rpm -qa | grep ^kernel

You want to be sure that you’re not removing the current kernel you’re running:
uname -r

Finally, the remove:
sudo yum remove kernel-4.4.8-300.fc23.x86_64 kernel-headers-4.4.8-300.fc23.x86_64 kernel-devel-4.4.8-300.fc23.x86_64 kernel-core-4.4.8-300.fc23.x86_64 kernel-modules-4.4.8-300.fc23.x86_64

Then I uninstalled (from the command line) and reinstalled (following the if not True then False guide) the NVidia drivers.

Wrap up

Finally, you’d want to do things like display the video card driver version:

lspci | grep VGA

So in conclusion, my advice from my experience, and of almost losing my X11, X, startX, graphical windowing environment is to

  • Be extremely careful about doing a dnf or yum system upgrade or kernel upgrade, and watch out what dependencies get installed when you do install a new program
  • If you run into trouble, check dnf history to see what steps to (manually) undo
  • In my case, I had to uninstall the new, offending kernel off the built-in EFI boot(er), following http://www.labtestproject.com/using_linux/remove_fedora_kernel.html
  • Uninstall and reinstall the proprietary Linux driver; just follow what it says.

Make a USB (live) boot disk of your distro

Whoops, the NVIDIA GTX 980 Ti did not like that last Fedora 23 upgrade, and I have no idea which item in the logs is what Fedora 23 didn’t like, and so my GUI or X (startX) isn’t starting. Unfortunately, the only thing left to do is to reinstall from a Live USB boot disk.

I did this to find out where my USB disk is on my Mac OS X:

diskutil list

I made a note of which /dev/diskn it was (where n is 1, 2, or 3, etc., e.g. /dev/disk2) and which partition number (e.g. it said #2: SANDISKCRUZ, and SANDISKCRUZ is the name I gave the disk when I formatted the USB stick, so #2 it is).

After downloading the 64-bit iso I needed for Fedora, I did, e.g.


sudo dd if=Fedora_Live-Workstation-x86_64-23-10.iso of=/dev/rdisk2s2

I read that adding the ‘r’ in ‘rdisk2s2’ speeds things up.

This process took about 86 minutes on a MacBook Pro, Late-2013 (!!!). To check the status I did Ctrl-T and it gave me records in, records out, and total bytes transferred. I tried pkill and sending a signal with kill but couldn’t work that out.

cf. How to Copy an ISO to a USB Drive from Mac OS X with dd (super useful article/link); is ‘dd’ command taking too long?, Show progress of dd command (clarified many things; his experience on using dd)

Also, keep in mind the official Fedora documentation for making a Live USB:

https://fedoraproject.org/wiki/How_to_create_and_use_Live_USB

Off to try to reinstall with this USB disk…

…And it didn’t help. I discovered on my own that Titan workstation computers keep Fedora 23 Workstation linux on the built-in EFI (EFI is like the new bootloader, newer than BIOS). No matter how many times I try to boot off the USB disk by changing the boot order or disabling the SATA disk drive, or any disk drive, the workstation directly boots to the built-in EFI boot loader, for the Fedora 23 Workstation. Aaaaaaaaaaaaaaahhhh. AAAArggggg.

See the above section, Oh no Fedora! Something has gone wrong; A problem has occurred (with Nvidia drm, rpm and nouveau drivers with a new Fedora kernel; panic, and how I recovered my system, to see how I manually, from the command line, recovered my X (graphical) environment.

Installation of NVIDIA CUDA on Fedora 23 Workstation (Linux)

See also my github repository MLgrabbag, the README.md file, for the latest update, as well.

Installation of NVIDIA’s CUDA Toolkit on a Fedora 23 Workstation was nontrivial; part of the reason is that it appears that 7.5 is the latest version of the CUDA Toolkit (as of 20160512), and 7.5 only supports (for sure) Fedora 21. And, this 7.5 version supports (out of the box) the C compiler gcc up to version 4.* and not gcc 5. But there’s no reason why the later versions, Fedora 23 as opposed to Fedora 21, gcc 5 vs. gcc 4.*, cannot be used (because I got CUDA to work on my setup, including samples). But I found that I had to do some nontrivial symbolic linking (ln).

I wanted to install CUDA for Udacity’s Intro to Parallel Programming, and in particular, in the very first lesson or video, Intro to the Class, for instructions on running CUDA locally, only the links to the official NVIDIA documentation were given, in particular for Linux,

http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/index.html

But one only needs to do a Google search and read some forum posts to see that installing CUDA, on Windows, Mac, or Linux, is highly nontrivial.

I’ll point out how I did it, and refer to the links that helped me (sometimes you simply follow, to the letter, the instructions there) and other links in which you should follow the instructions, but modify to suit your (my) system, and what NOT to do (from my experience).

Gist, short summary, steps to do (without full details), to just get CUDA to work (no graphics)

My install procedure assumes you are using the latest proprietary NVIDIA Accelerated Graphics Drivers for Linux. I removed and/or blacklisted any other open-source versions of nvidia drivers, and in particular blacklisted nouveau. See my blog post for details and description.

  1. Download the latest CUDA Toolkit (appears to be 7.5 as of 20160512). For my setup, I clicked on the boxes Linux for Operating System, x86_64 for Architecture, Fedora for Distribution, 21 for Version (only one there), runfile (local) for Installer Type (it was the first option that appeared). Then I modified the instructions on their webpage:
    1. Run `sudo sh cuda_7.5.18_linux.run`
    2. Follow the command-line prompts.
    3. Instead, I did


      $ sudo sh cuda_7.5.18_linux.run --override

      with the --override flag to use gcc 5 so I did not have to downgrade to gcc 4.*.

      Here is how I selected my options at the command-line prompts (and part of the result):


      $ sudo sh cuda_7.5.18_linux.run --override

      -------------------------------------------------------------
      Do you accept the previously read EULA? (accept/decline/quit): accept
      You are attempting to install on an unsupported configuration. Do you wish to continue? ((y)es/(n)o) [ default is no ]: yes
      Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 352.39? ((y)es/(n)o/(q)uit): n
      Install the CUDA 7.5 Toolkit? ((y)es/(n)o/(q)uit): y
      Enter Toolkit Location [ default is /usr/local/cuda-7.5 ]:
      Do you want to install a symbolic link at /usr/local/cuda? ((y)es/(n)o/(q)uit): y
      Install the CUDA 7.5 Samples? ((y)es/(n)o/(q)uit): y
      Enter CUDA Samples Location [ default is /home/[yournamehere] ]: /home/[yournamehere]/Public
      Installing the CUDA Toolkit in /usr/local/cuda-7.5 ...
      Missing recommended library: libGLU.so
      Missing recommended library: libX11.so
      Missing recommended library: libXi.so
      Missing recommended library: libXmu.so

      Installing the CUDA Samples in /home/[yournamehere]/ ...
      Copying samples to /home/propdev/Public/NVIDIA_CUDA-7.5_Samples now...
      Finished copying samples.

      Again, Fedora 23 was not a supported configuration, but I wished to continue. I had already installed NVIDIA Accelerated Graphics Driver for Linux (that’s how I was seeing my X graphical environment) but it was a later version 361.* and I did not want to uninstall it and then reinstall, which was recommended by other webpages (I had already gone through the mini-nightmare of reinstalling these drivers before, which can trash your X11 environment that you depend on for a functioning GUI).

    4. Continuing, this was also printed out by CUDA’s installer:


      Installing the CUDA Samples in /home/propdev/Public ...
      Copying samples to /home/propdev/Public/NVIDIA_CUDA-7.5_Samples now...
      Finished copying samples.

      ===========
      = Summary =
      ===========

      Driver: Not Selected
      Toolkit: Installed in /usr/local/cuda-7.5
      Samples: Installed in /home/[yournamehere]/Public, but missing recommended libraries

      Please make sure that
      - PATH includes /usr/local/cuda-7.5/bin
      - LD_LIBRARY_PATH includes /usr/local/cuda-7.5/lib64, or, add /usr/local/cuda-7.5/lib64 to /etc/ld.so.conf and run ldconfig as root

      To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-7.5/bin
      To uninstall the NVIDIA Driver, run nvidia-uninstall

      Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-7.5/doc/pdf for detailed information on setting up CUDA.

      ***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 352.00 is required for CUDA 7.5 functionality to work.
      To install the driver using this installer, run the following command, replacing with the name of this run file:
      sudo .run -silent -driver

      Logfile is /tmp/cuda_install_7123.log

      For “ PATH includes /usr/local/cuda-7.5 ” I do


      $ export PATH=/usr/local/cuda-7.5/bin:$PATH

      as suggested by Chapter 6 of CUDA_Getting_Started_Linux.pdf

      Dealing with the LD_LIBRARY_PATH, I did this: I created a new text file (open up your favorite text editor) in /etc/ld.so.conf.d called cuda.conf, e.g. I used emacs:


      sudo emacs cuda.conf

      and I pasted in the directory


      /usr/local/cuda-7.5/lib64

      (since my setup is 64-bit) into this text file. I did this because my /etc/ld.so.conf file includes files from /etc/ld.so.conf.d, i.e. it says


      include ld.so.conf.d/*.conf

      Make sure this change for `LD_LIBRARY_PATH` is made by running the command


      ldconfig

      as root.

      I check the status of this “linking” to PATH and LD_LIBRARY_PATH with the echo command, each time I reboot, or log back in, or start a new Terminal window:


      echo $PATH
      echo $LD_LIBRARY_PATH

    5. Patch the host_config.h header file

      cf. Install NVIDIA CUDA on Fedora 22 with gcc 5.1 and CUDA incompatible with my gcc version.

      To use gcc 5 instead of gcc 4.*, I needed to patch the host_config.h header file because I kept receiving errors. What worked for me was doing this to the file – original version:


      #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9)

      #error -- unsupported GNU version! gcc versions later than 4.9 are not supported!

      #endif /* __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9) */

      Commented-out version (these 3 lines)

      // #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9)

      // #error -- unsupported GNU version! gcc versions later than 4.9 are not supported!

      // #endif /* __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9) */

      Afterwards, I did not have any problems with c compiler gcc incompatibility (yet).

    6. At this point CUDA runs without problems if no graphics capabilities are needed. For instance, as a sanity check, from the samples installed with CUDA, I made `deviceQuery` and ran it:

      $ cd ~/NVIDIA_CUDA-7.5_Samples/1_Utilities/deviceQuery
      $ make -j12
      $ ./deviceQuery

      And then if your output looks something like this, then success!


      ./deviceQuery Starting...

      CUDA Device Query (Runtime API) version (CUDART static linking)

      Detected 1 CUDA Capable device(s)

      Device 0: "GeForce GTX 980 Ti"
      CUDA Driver Version / Runtime Version 8.0 / 7.5
      CUDA Capability Major/Minor version number: 5.2
      Total amount of global memory: 6143 MBytes (6441730048 bytes)
      (22) Multiprocessors, (128) CUDA Cores/MP: 2816 CUDA Cores
      GPU Max Clock rate: 1076 MHz (1.08 GHz)
      Memory Clock rate: 3505 Mhz
      Memory Bus Width: 384-bit
      L2 Cache Size: 3145728 bytes
      Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
      Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
      Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
      Total amount of constant memory: 65536 bytes
      Total amount of shared memory per block: 49152 bytes
      Total number of registers available per block: 65536
      Warp size: 32
      Maximum number of threads per multiprocessor: 2048
      Maximum number of threads per block: 1024
      Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
      Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
      Maximum memory pitch: 2147483647 bytes
      Texture alignment: 512 bytes
      Concurrent copy and kernel execution: Yes with 2 copy engine(s)
      Run time limit on kernels: Yes
      Integrated GPU sharing Host Memory: No
      Support host page-locked memory mapping: Yes
      Alignment requirement for Surfaces: Yes
      Device has ECC support: Disabled
      Device supports Unified Addressing (UVA): Yes
      Device PCI Domain ID / Bus ID / location ID: 0 / 3 / 0
      Compute Mode:
      < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

      deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = GeForce GTX 980 Ti
      Result = PASS
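      As another sanity check that needs no graphics capabilities, the bandwidthTest sample (also under 1_Utilities in the installed samples) can be built and run the same way:


      $ cd ~/NVIDIA_CUDA-7.5_Samples/1_Utilities/bandwidthTest
      $ make -j12
      $ ./bandwidthTest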

    7. Getting the other samples to run, getting CUDA to have graphics capabilities, soft symbolic linking to the existing libraries.

      The flow or general procedure I ended up having to do was to use `locate` to find the relevant `*.so.*` or `*.h` file for the missing library or missing header, respectively, and then to make soft symbolic links to them with the `ln -s` command. I found that some of the NVIDIA-included samples expect the graphical libraries (GL, GLU, X11, glut, etc.) to be in different directories than other samples do.
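      As a hypothetical example of that flow (the actual library names, versions, and target directories will differ on your system), if a sample fails to link with something like “cannot find -lGLU”:


      locate libGLU.so
      sudo ln -s /usr/lib64/libGLU.so.1 /usr/lib64/libGLU.so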

    To be continued; see my github repo MLgrabbag, the README.md file, for the latest update (html code is a pain compared to markdown and I don’t want to download any more programs to convert markdown to html, since I’m already doing a lot of installing).

    Sage Math: Installing programs on Fedora 23 (Linux)

    First, I tried this, following Sage Math install from source. I followed the steps until, at Step-by-Step installation procedure, General procedure, 4. Read the README.txt, I read README.md instead.
    I use emacs, so I did emacs README.md in the directory where sage-7.1 is; following the section More Detailed Instructions to Build from Source in README.md, I did
    export MAKE="make -j14"
    because my processor has 12 cpus. I find that out as follows
    (cf. http://www.binarytides.com/linux-cpu-information/):

    $ less /proc/cpuinfo
    and
    $ cat /proc/cpuinfo | grep processor | wc -l

    so I use a couple more jobs (14) than the 12 processors. The build took about 25-30 minutes.

    However, it failed to build, again and again, even for libraries I had successfully installed through Anaconda’s conda (from Continuum), such as git-2.6 and matplotlib. So now I am trying to follow the instructions that Eric Gourgoulhon (LUTH) gave me for building Sage Math from the git develop version, which I had already covered in my Computers post, under Starting or beginning developing (i.e. contributing code) to a major open-source project, in this case, Sage Math.

    If you’re getting errors when building from github Sage Math using my and Gourgoulhon’s instructions,

    Check the errors you’re receiving and the suggested log files. In the “root” of the sage directory (the one containing the source src), there is a logs/pkgs directory with the logs of all installed or failed packages; in my particular case, flask_babel-0.9.log failed. Reading the log, it was a “Download Error!” So it was probably a problem with my internet connection (I’ve had problems with Time Warner Cable as a service provider, with service interruptions, and I cannot recommend TWC).

    Try your make again, but to keep the previously successful package builds from being thrown away and rebuilt, type, at the command prompt of the main sage directory,


    SAGE_KEEP_BUILT_SPKGS=yes

    cf. http://hpc.wm.edu/SciClone/documentation/software/math/sage-5.1/html/en/installation/source.html
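    A minimal sketch of the retry, assuming you are at the top level of the sage directory (the variable has to be exported, or set on the same command line as make, for make to see it):

    export SAGE_KEEP_BUILT_SPKGS=yes
    make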

    Also, I was able to install from the pre-built Linux binaries.

    In my experience, the way to go for installing Sage Math, on both Mac OS X and Fedora Linux, is either to follow the instructions Eric Gourgoulhon and I give, as stated in my “Computers” blog post, to build straight from the git development version, or to use the pre-built binaries – the build-from-source instructions in Sage Math’s documentation haven’t worked for me.

    TeXLive Install for LaTeX

    This was straightforward. I did this:
    yum install texlive-scheme-full

    cf. How to fully install Latex in fedora?
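    On Fedora 23, yum is just a compatibility wrapper around dnf, so the equivalent command is:

    sudo dnf install texlive-scheme-full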

    Good intentions; bad advice i.e. DON’T follow these commands carelessly in Fedora 23 (Linux)

    You may be (at least I certainly was) in a rush to fix something and so you are furiously doing Google searches and searching forum posts and trying any kind of command(s) to fix the problem. But here, I collect commands NOT to do (casually).

    http://www.liquidweb.com/kb/how-to-install-and-configure-git-on-fedora-23/

    dnf -y upgrade

    Don’t do dnf upgrade casually. This is because NVIDIA’s proprietary drivers may conflict with the latest kernel; this has happened to others.
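    If you do need to upgrade, one way to lower the risk (a sketch; you will then have to update the kernel deliberately later, and you miss kernel security fixes in the meantime) is to hold back the kernel packages with dnf’s --exclude option:

    sudo dnf upgrade --exclude="kernel*"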

    You don’t need to downgrade X11!!!

    I found that I didn’t need to downgrade my X11 as advised in the article NVIDIA – Incompatible with Fedora 23 Xorg – and a Workaround.., as the latest NVIDIA drivers did just fine.

    How not to replace nouveau drivers in Fedora 23

    cf. HOWTO: Install NVIDIA driver on Fedora – replacing Nouveau

    I wouldn’t do it this way (and it didn’t work for me); their crucial step #4 is to blacklist nouveau in /etc/modprobe.d with the commands

    echo 'blacklist nouveau' >> /etc/modprobe.d/disable-nouveau.conf
    echo 'nouveau modeset=0' >> /etc/modprobe.d/disable-nouveau.conf

    Instead, what worked for me, again, as previously linked and written about above, is to follow, to the letter, the If !1 0 Fedora 23/22/21 nVidia Drivers Install Guide.

    While on that note, the advice in the fedoraforum post entitled [SOLVED] Oh no! Something has gone wrong didn’t help me. I was thinking of trying to do a reinstall of Fedora into the built-in EFI, but this post, how to install Fedora 11 in EFI shell and GPT partition?, didn’t help.

    I had the same problem as described here (with a similar log), in FC22: nvidia kernel module loads, but X can’t initialize GPU, but the fix the member StefanJ proposed didn’t help in my situation.

    Number of “cores” on Fedora 23 (Linux)? In Linux, they’re called cpus or processors

    grep processor /proc/cpuinfo

    cat /proc/cpuinfo | less
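    To get just the count as a single number, these also work:

    grep -c processor /proc/cpuinfo
    nproc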

    Also, other system information:

    cat /proc/meminfo | less
    lspci

    cf. http://www.cyberciti.biz/faq/linux-display-cpu-information-number-of-cpus-and-their-speed/

    getting error “Can’t create transaction lock” with rpm

    getting error “Can’t create transaction lock” with rpm

    Solution:

    “Try running your command as root. It worked for me.” –phathutshezo
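    i.e., prefix the rpm command with sudo (the package name here is just a placeholder):

    sudo rpm -Uvh some-package.rpm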

Gravity – Gravité

Table of Contents

Notes on General Relativity (GR) and Gravity – includes notes on the Central Lectures given by Dr. Frederic P. Schuller for the WE Heraeus International Winter School on Gravity and Light

Notes (LaTeX format) on General Relativity (GR) and Gravity – includes notes on the Central Lectures given by Dr. Frederic P. Schuller for the WE Heraeus International Winter School on Gravity and Light; link to github, will be the most up-to-date and permanent

Notes (pdf format) on General Relativity (GR) and Gravity – includes notes on the Central Lectures given by Dr. Frederic P. Schuller for the WE Heraeus International Winter School on Gravity and Light; link to github, will be the most up-to-date and permanent

Lecture # | Lecture name | Lecture link | Tutorial # | Tutorial name | Tutorial video link | Tutorial sheet (pdf)
3 | Lecture 3: Multilinear Algebra (International Winter School on Gravity and Light 2015) | https://youtu.be/mbv3T15nWq0 | 3 | Tutorial 3: Multilinear Algebra (International Winter School on Gravity and Light 2015) | https://youtu.be/5oeWX3NUhMA | tensors_neu.pdf

NOT Updated: NOTES ON GENERAL RELATIVITY (GR) AND GRAVITY in wide, “grande” format; includes notes on the Central Lectures given by Dr. Frederic P. Schuller for the WE Heraeus International Winter School on Gravity and Light; from wordpress

gravite github repository

github.io page for Gravite

github repository for Gravite

Euclidean space as a Manifold – R^2,R^3,R^n

Using SageManifolds, Euclidean space, R^2, R^3, and R^n, is implemented as a manifold.

Rn.sage – Euclidean spaces as manifolds using sagemanifolds

Features

  • R^2,R^3,R^n as a manifold, with a chart atlas
sage: load("Rn.sage")
sage: R2eg = R2() 
sage: R3eg = R3() 
sage: R4 = Rn(4) 
sage: R2eg.M.atlas()
[Chart (R2, (x, y)), Chart (U, (x, y)), Chart (U, (r, ph))]
sage: R3eg.M.atlas()
[Chart (R3, (x, y, z)),
 Chart (U, (x, y, z)),
 Chart (U, (rh, th, ph)),
 Chart (U, (r, phi, zc))]
sage: R4.M.atlas()
[Chart (R4, (x1, x2, x3, x4)),
 Chart (U, (x1, x2, x3, x4)),
 Chart (U, (rh, th1, th2, ph)),
 Chart (U, (r, the1, phi, z))]
  • (carefully) define a spherical coordinate and cylindrical coordinate chart on Euclidean spaces, e.g.
sage: R2eg.transit_sph_to_cart.display() 
x = r*cos(ph)
y = r*sin(ph)
sage: R3eg.transit_sph_to_cart.display() 
x = rh*cos(ph)*sin(th)
y = rh*sin(ph)*sin(th)
z = rh*cos(th)
sage: R3eg.transit_cyl_to_cart.display() 
x = r*cos(phi)
y = r*sin(phi)
z = zc
sage: R4.transit_sph_to_cart.display() 
x1 = rh*cos(ph)*sin(th1)*sin(th2)
x2 = rh*sin(ph)*sin(th1)*sin(th2)
x3 = rh*cos(th2)*sin(th1)
x4 = rh*cos(th1)
  • calculate the Jacobian!
sage: to_orthonormal2 , e2, Jacobians2 = R2eg.make_orthon_frames(R2eg.sph_ch) 
sage: Jacobians2[0].inverse()[:,R2eg.sph_ch]
[ cos(ph) -r*sin(ph)]
[ sin(ph) r*cos(ph)]
sage: to_orthonormal3sph, e3sph, Jacobians3sph = R3eg.make_orthon_frames(R3eg.sph_ch) 
sage: to_orthonormal3cyl, e3cyl, Jacobians3cyl = R3eg.make_orthon_frames(R3eg.cyl_ch) 
sage: Jacobians3sph[0].inverse()[:,R3eg.sph_ch]
[ cos(ph)*sin(th) rh*cos(ph)*cos(th) -rh*sin(ph)*sin(th)]
[ sin(ph)*sin(th) rh*cos(th)*sin(ph) rh*cos(ph)*sin(th) ]
[ cos(th)         -rh*sin(th)        0                  ]
sage: Jacobians3cyl[0].inverse()[:,R3eg.cyl_ch]
[ cos(phi) -r*sin(phi) 0]
[ sin(phi) r*cos(phi) 0] 
[ 0 0 1]
  • equip the Euclidean space manifold with a metric g and calculate the metric automatically:
sage: R2eg.equip_metric() 
sage: R3eg.equip_metric() 
sage: R4.equip_metric()

sage: R2eg.g.display(R2eg.sph_ch.frame(),R2eg.sph_ch)
g = dr*dr + r^2 dph*dph
sage: R3eg.g.display(R3eg.sph_ch.frame(),R3eg.sph_ch)
g = drh*drh + rh^2 dth*dth + rh^2*sin(th)^2 dph*dph
sage: R3eg.g.display(R3eg.cyl_ch.frame(),R3eg.cyl_ch)
g = dr*dr + r^2 dphi*dphi + dzc*dzc
sage: R4.g.display(R4.sph_ch.frame(),R4.sph_ch)
g = drh*drh + rh^2 dth1*dth1 + rh^2*sin(th1)^2 dth2*dth2 + rh^2*sin(th1)^2*sin(th2)^2 dph*dph
sage: R4.g.display(R4.cyl_ch.frame(),R4.cyl_ch)
g = dr*dr + r^2 dthe1*dthe1 + r^2*sin(the1)^2 dphi*dphi + dz*dz
  • Calculate the so-called orthonormal non-coordinate basis vectors in terms of the (local) coordinate basis vectors, showing clearly and distinctively the difference between the two (concepts)
sage: e2[1].display( R2eg.sph_ch.frame(), R2eg.sph_ch)
e_1 = d/dr
sage: e2[2].display( R2eg.sph_ch.frame(), R2eg.sph_ch)
e_2 = 1/r d/dph
sage: for i in range(1,3+1):
....:     e3sph[i].display( R3eg.sph_ch.frame(), R3eg.sph_ch )
....:
e_1 = d/drh
e_2 = 1/rh d/dth
e_3 = 1/(rh*sin(th)) d/dph
sage: for i in range(1,3+1):
....:     e3cyl[i].display( R3eg.cyl_ch.frame(), R3eg.cyl_ch )
....:
e_1 = d/dr
e_2 = 1/r d/dphi
e_3 = d/dzc

(Screenshots of the above examples: Rn_examples_01, Rn_examples_02, Rn_examples_03.)

I’m on the physics stackexchange!

profile for ernestyalumni2014 at Physics Stack Exchange, Q&A for active researchers, academics and students of physics