Friday, October 3, 2014

Rendering for a 3D Movie or Smashing Two Copper Spheres Together at 1 km/s

At one point, I needed to have a general code for finding clusters of atoms. I was going to use a cutoff distance between atoms to determine if there was a bond and thus cluster connectivity. At the time, I didn't realize I was implementing a breadth-first search. Fast forward a few days, and I realized that I needed to test it on something. I decided it would be fun to use an exploding/fragmenting structure to get many clusters to test my code on.
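In hindsight, the cluster finder boils down to something like this minimal sketch (not my original code; the brute-force O(N^2) neighbor search stands in for whatever spatial binning you would actually use):

from collections import deque
import numpy as np

def find_clusters(positions, cutoff):
    """Group atoms into clusters, where bonded means within the cutoff."""
    n = len(positions)
    # Brute-force pairwise distances; fine for small systems
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    neighbors = [np.flatnonzero((dists[i] < cutoff) & (dists[i] > 0)) for i in range(n)]
    unvisited = set(range(n))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = {seed}, deque([seed])
        while queue:  # breadth-first flood fill over the bond network
            for j in neighbors[queue.popleft()]:
                if j in unvisited:
                    unvisited.remove(j)
                    cluster.add(j)
                    queue.append(j)
        clusters.append(sorted(cluster))
    return clusters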

For this, I simply smashed two copper spheres into each other at ~1 km/s using molecular dynamics. I used a Rydberg potential for copper that I had optimized for mechanical properties. I actually implemented it in LAMMPS and sent the pair potential file and a description in their format to the developers so they could add it to the pair_style library. That was about three years ago, and as of this post, they haven't added it. I guess not enough people use the Rydberg potential for them to take interest.

Anyways, I had the atomic trajectories of these exploding copper spheres, which had fulfilled their scientific purpose. Just for fun, I decided to render the simulation in POV-Ray and make a movie with FFmpeg. Here is what that looked like:


Pretty cool huh? Well, that sat on my hard drive for about a year. Meanwhile, I had been trying to convince my adviser, John Kieffer, to purchase a 3D TV to replace our office projector, for two reasons. The first was that a 3D TV could be used to view atomic structures, and the second was that the projector's exhaust fan would cook whoever was sitting next to it. He finally found the money and put me in charge of purchasing a TV. I chose the Sony KDL-70R550A on the advice of the University of Michigan 3D Lab, because passive 3D TVs take two side-by-side images and interlace them for the 3D output. It is much easier to configure the 3D system when you don't have to worry about the GPU outputting a switching signal for each eye. The disadvantage is that you get half the lateral resolution in 3D mode. So if you have a side-by-side 3D mode on your screen, check out this video:



The funny thing is I don't care for 3D movie showings in theaters, but as a scientific tool 3D displays are pretty handy.

Sunday, September 21, 2014

My Problem with Isosurfaces of Molecular Orbitals

When people plot their MOs, they show us an isosurface of the electron density. It is common procedure to use a single density (or wavefunction) value for this surface and (hopefully) report it. I take issue with this.

By not having a standard, we have no way of knowing whether we are missing features of our MOs. Or worse, undesirable features could be hidden. I propose that the fraction of the MO contained by the isosurface be listed. Because we can create a cumulative distribution function for the MO density, specifying the fraction contained by the surface is a sufficient condition for specifying an isosurface density reproducibly.
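For concreteness, here is a sketch of how you would invert that cumulative distribution to get the isovalue for a given contained fraction (the uniform-grid density array here is random stand-in data, not a real cube file):

import numpy as np

def isovalue_for_fraction(density, fraction):
    """Isosurface value whose interior contains `fraction` of the total density."""
    flat = np.sort(density.ravel())[::-1]      # densest voxels first
    cumulative = np.cumsum(flat) / flat.sum()  # CDF over sorted density
    return flat[np.searchsorted(cumulative, fraction)]

density = np.random.rand(40, 40, 40)           # stand-in for cube-file data
iso80 = isovalue_for_fraction(density, 0.80)   # report 80% along with this value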

To demonstrate this loss of detail, I have two isosurfaces from the same series of calculations. These isosurfaces are of the highest occupied MO (HOMO) for a pair of molecules, SubPc and C60. The first animation shows an isosurface containing 80% of the HOMO as the molecules are separated. This is the percentage that I have used when presenting at conferences.
80% of the HOMO looks great! You can clearly see the lobes associated with the pi orbitals. Next, I'll render the 99% isosurface of the HOMO.
You can clearly see that some of the HOMO has tunneled to the C60 and that the amount that has tunneled decreases with separation distance. This has an actual effect on some of my calculations. Since we as a field have MOs of varying size, the total fraction contained seems like a more general metric that would let us evaluate each other's work more quickly.

Or maybe I am just being pedantic? 

If you are wondering, I detailed how I render MOs in another post. The series of images was animated with ImageMagick. In any case, I am feeling very under the weather, so do pardon any grammatical errors in this post.

Saturday, September 20, 2014

Total Electron Density in an Organic Crystal

Following on my post about electron density isosurfaces for molecular orbitals (here), I used the same methods to make total electron density isosurfaces from VASP output for a boron subphthalocyanine chloride crystal. Here is the result in very high resolution:
The cool thing is that for some of the atoms you can see where the projector augmented waves have left holes in the electron density. This comes from the so-called muffin-tin approximation, where the nuclear core and inner-shell electrons are removed and replaced with a pseudopotential.

Thermal Ellipsoids

At one point, I was supposed to compute electron/hole mobilities in boron subphthalocyanine chloride using charge transfer integrals. The problem is that these don't give good results unless you have realistic thermal distortions of the molecule. Before getting bogged down in charge transfer physics, I decided to use ab initio molecular dynamics on a single unit cell to get realistic thermal distortions. I used VASP, which can perform ab initio molecular dynamics at constant temperature (via a Nose-Hoover thermostat) and constant volume. I would have preferred a constant pressure barostat, but there is no option for that in VASP yet. Once they implement a Parrinello-Rahman barostat, I will be very happy.

Back to the molecular dynamics. Once the system had equilibrated at 300 K, I ran the simulation for about 10 ps. Once I had the trajectory data for each atom, I computed the covariance matrix of the x, y, and z position lists. A singular value decomposition (SVD) can be used to get the principal axes of the multivariate normal distribution. The isosurfaces of probability density are surfaces of constant Mahalanobis distance. We can figure out the Mahalanobis distance that corresponds to any total fraction contained in an iso-probability surface by inverting the radial cumulative distribution function, which looks like this for a 3D multivariate normal:


For an ellipsoid containing the densest 50% of the probability, we can see that we need a Mahalanobis distance of ~1.5. So we need to scale our principal axes (from the SVD of the covariance matrix) by 1.5 to get our ellipsoid.
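If you want to skip reading the plot, the inversion is one line in SciPy, since the Mahalanobis radius of a 3D standard normal follows a chi distribution with 3 degrees of freedom (a sketch with random stand-in data, not my original script):

import numpy as np
from scipy.stats import chi

r50 = chi.ppf(0.5, df=3)            # ~1.54, the "~1.5" read off the plot

xyz = np.random.randn(10000, 3)     # stand-in for one atom's trajectory
cov = np.cov(xyz, rowvar=False)     # covariance of the x, y, z positions
U, s, _ = np.linalg.svd(cov)        # singular values are the principal variances
semi_axes = r50 * np.sqrt(s)        # ellipsoid semi-axis lengths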

For rendering these ellipsoids in POV-Ray, we can use the matrix transform option on a unit sphere to get our ellipsoid. The principal axes form the column vectors of this transform matrix.
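Putting the pieces together, emitting that transform might look like the sketch below (stand-in data again). One wrinkle: POV-Ray's matrix keyword takes the 3x3 linear part row by row followed by a translation, and POV-Ray multiplies row vectors, so the column vectors of the math-convention matrix go in as the rows of the matrix keyword.

import numpy as np
from scipy.stats import chi

xyz = np.random.randn(10000, 3)                    # stand-in trajectory
U, s, _ = np.linalg.svd(np.cov(xyz, rowvar=False))
axes = (U * (chi.ppf(0.5, df=3) * np.sqrt(s))).T   # rows = scaled principal axes

values = np.concatenate([axes.ravel(), xyz.mean(axis=0)])
print("sphere { 0, 1 matrix <" + ", ".join("%.5f" % v for v in values) + "> }")

Here is what it looks like: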


Sorry if I changed writing styles too much in this post; I've been watching Luther.

How I Render Molecular Orbitals

I want to start by saying that I don't like GaussView. It renders molecular orbitals well enough to convey information, but I want very fine-grained control over how I stylistically represent data, and it bogs down our old Macs. Also, it is not free, which is odd because it is useless without Gaussian. I know that they are making money off of it, but I feel like the freemium model would work better in this case.

Anyways, Gaussian is a pretty impressive piece of software with really terrible I/O options. To render my molecular orbitals (MOs), I have to use the formchk utility to make .fchk files, then the cubegen utility to make .cube files. I usually use a fine mesh for a nicer rendering.
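For reference, the conversion pipeline driven from Python looks something like this sketch. The file names are placeholders and the cubegen arguments are from memory, so check the Gaussian documentation before trusting them:

import subprocess

# Checkpoint -> formatted checkpoint
subprocess.run(["formchk", "job.chk", "job.fchk"], check=True)
# Formatted checkpoint -> cube file: the HOMO, 150 points per side, "h" = header
subprocess.run(["cubegen", "0", "MO=HOMO", "job.fchk", "homo.cube", "150", "h"],
               check=True)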

MOs are usually rendered by an isosurface value of the wavefunction or electron density. I use electron density for my isosurfaces. Once I have my isosurface density chosen, I pick a density above and below it and remove all the cells that are outside that range. This filtering drastically speeds up the next step. I scan through all the cells and compute a linear polynomial fit for each cell and its 8 nearest neighbours. Basically, it uses a cell and its neighbours to make a linear local density function of the form:
rho = a*x + b*y + c*z + d
Since the desired isosurface forms a plane in this function, we can find the point in the plane that is closest to the center of the cell. If this closest point is inside the cell of interest, the point is added to the surface point list.
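A minimal sketch of that per-cell test (placeholder names, not my actual code): project the cell center onto the plane a*x + b*y + c*z + d = iso and keep the point if it stays inside the cell.

import numpy as np

def surface_point(center, coeffs, iso, cell_size):
    a, b, c, d = coeffs
    normal = np.array([a, b, c])
    # Step from the center along the normal onto the plane
    t = (iso - (normal @ center + d)) / (normal @ normal)
    point = center + t * normal            # closest point on the plane
    if np.all(np.abs(point - center) <= cell_size / 2.0):
        return point                       # inside this cell: a surface point
    return None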

For each point in the list, I add a sphere with a size close to the cell size. This collection of spheres representing the surface is blended together in POV-Ray using the blob object, which blends the spheres with the metaballs method for a smooth look. I color the spheres by the sign of the wavefunction at each cell. Here is an example using the highest occupied MO of subphthalocyanine chloride.
Bonus C60 molecule.
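In case it is useful, the blob scaffolding I write out looks roughly like this sketch (the threshold, strength, and color values are placeholders, not what Kirke actually emits):

def blob_from_points(points, radius, rgb):
    spheres = "\n".join(
        "  sphere { <%.4f, %.4f, %.4f>, %.4f, 1.0 }" % (x, y, z, radius)
        for x, y, z in points)
    return ("blob {\n  threshold 0.5\n" + spheres +
            "\n  pigment { color rgb <%.2f, %.2f, %.2f> }\n}" % rgb)

print(blob_from_points([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)], 0.05, (1.0, 0.0, 0.0)))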

Glass Made from Marbles

This is an old video, probably the earliest video that I can easily find. I made it using a collection of scripts, and maybe a library, that would eventually become Kirke. I had just started learning how to use POV-Ray... so this was about fall 2010. We didn't have a development server, and I didn't have my desktop, Mastodon. To get POV-Ray, I had to ask for it to be installed on Nyx. I didn't even have anything cool to make a video of; I had just joined my research group! So I asked Kathrine Sebeck for some of her data.




This is a soda-lime glass molecular dynamics simulation rendered in POV-Ray with ray tracing. Each atom had a sphere created for it, with a size meant to represent the atom's size. I have no idea what size I used for each; probably the atomic radius given on Wikipedia. Anyways, each sphere was made to look like a glass marble by changing the index of refraction. After rendering each frame, I combined them all using FFmpeg.

Meshing Densities with Smoothed Binning and Interface Tracking

About two years ago, I was working on growing oxide surfaces on silicon using molecular dynamics. I was basically just creating a bunch of oxygen atoms over the surfaces of a ~10 nm thick sheet of silicon. I kept everything at 1200 C and constant pressure so I could get a nice layer thickness versus time curve. The problem is that using simple binning to compute the layer density is a bad idea, because you'll end up with sharp peaks if you reduce the bin size to the lattice spacing or less. There are two solutions to this problem that I know of. One is to use kernel density estimation, and the other is to use smoothed bins with weighting functions. At the time, I had smoothed binning implemented with weighting functions up to 7th order, so I went with that.

I should also point out that kernel density estimation is basically the continuous analytical extension of smoothed bins with weighting functions: if the order goes to infinity and the bin size gets infinitely small, you get the same result. A good reason not to use kernel density estimation is computational complexity. For smoothed binning, the number of particles (or measurement positions) Nparticles is multiplied by the order of the weighting function Norder, giving O(Nparticles*Norder). In higher dimensions, it's O(Nparticles*Norder^dimensionality). I think I used 3rd order here.

For kernel density estimation, every output point has to include a sum over all the position data, so the complexity goes as the number of particles Nparticles times the number of output points Noutput, or O(Noutput*Nparticles). For most cases, the number of output points is much higher than the order of the binning, so it's almost always faster to use smoothed binning, though it's not as pretty.

A few words on the weighting functions. Conceivably, you could use whatever function you want to determine what fraction of a particle gets added to its nearest bins. I prefer to use convolutions of the rectangle function. Conceptually, this is like saying that a particle in a 1st order bin has a uniform distribution in that bin (a rectangular function), but giving it a second bin convolves the two rectangular probability distributions, giving a triangular function. And indeed, the triangular function gives a fraction of the mass to the two nearest bins, linearly weighted by the fractional distance between them. You can go on forever convolving these rectangular functions, getting higher order piecewise polynomials that look increasingly like a Gaussian. There is a second great aspect of the convolutions of the rectangle function: they have simple Fourier transforms, namely the sinc function raised to the power of the order of the convolution.

I will say that I am fond of third order weighting functions because the mass is given to the cell containing the particle and the two nearest cells. In this scheme, if the particle is on the border of two cells, it is given evenly to those two and no more. If it is in the middle of a cell, that cell gets 75% (the quadratic B-spline peaks at 3/4). I feel like 3rd order is the most sensible, since it shouldn't share with more than the two nearest cells.
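Here is a minimal 1D sketch of the 3rd order scheme (triangle-shaped-cloud weights, i.e. the quadratic B-spline); the names are placeholders rather than Kirke's API:

import numpy as np

def smoothed_bin_3rd_order(positions, n_bins, lo, hi):
    width = (hi - lo) / n_bins
    density = np.zeros(n_bins)
    for p in positions:
        i = int((p - lo) / width)          # bin containing the particle
        x = (p - lo) / width - (i + 0.5)   # offset from the bin center, in bin units
        # Quadratic B-spline weights over bins i-1, i, i+1; they sum to 1
        w = (0.5 * (0.5 - x) ** 2, 0.75 - x ** 2, 0.5 * (0.5 + x) ** 2)
        for di, wi in zip((-1, 0, 1), w):
            if 0 <= i + di < n_bins:
                density[i + di] += wi
    return density / width                 # mass per unit length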

But I digress. This first video shows the mass density of the thin sheet of silicon. There are also oxygen atoms, but they are removed from the density if they are not bonded to silicon.

The only problem with this method is that people want a layer thickness, and my layers are diffuse. I decided that the best course would be to fit a rectangle function whose height variance matches the variance of the heights of the oxygen atoms. (A uniform slab of width w has variance w^2/12, so the thickness follows directly from the measured variance.) This rectangle's height would represent my layer thickness. I used the total number of oxygen atoms in the layer and the volume of the rectangle to compute the oxygen density in the layer. It works very well. Here is a video with the assumed density profiles plotted alongside the binned profiles.
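The fit itself is tiny; a sketch with hypothetical names, matching the variances and dividing:

import numpy as np

def layer_thickness_and_density(z_oxygen, area):
    w = np.sqrt(12.0 * np.var(z_oxygen))   # slab thickness from the height variance
    rho = len(z_oxygen) / (area * w)       # oxygen atoms per unit volume
    return w, rho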

You can see that the layer first grows in density, then in thickness, as the surface reaches full oxygen coverage and the oxide layer grows thicker. Now I can plot my layer thickness and density versus time. I also get two surfaces (more data points) because I use a thin sheet of silicon.


The layer thickness matches the power-law growth predicted by diffusion models pretty well. Nice. As for the density, it looks like an exponential decay to a constant value as the surface saturates, and then it doesn't change much as the layer gets thicker.
Welp, that was cool. I made all the plots with Matplotlib and the movies with FFmpeg, or whatever they were calling it then. The simulation was done in LAMMPS and post-processed with Kirke.

My next post will have less text, more pictures.

Friday, September 19, 2014

Multiple Scales in Matplotlib

For my first post, I would like to share an example of multiple scales/axes being used on the same plot in Matplotlib. It also has some examples of how to manipulate the legend properties. I am going to post many Matplotlib examples here because I want to use them as future references for myself, and perhaps others will find them useful.

Here is the code that generates that plot:

from pylab import *
from pylab import log as ln
fig2=figure(2,figsize=(4,3),dpi=96*3)


ax = subplot(111)
zs = arange(0.0, 6.0, 0.01)



xi1 = 0.60
n1 = xi1/(1.0+cosh(zs + ln(2.0*xi1-1.0)))
E1 = -(1.0 - 2.0/(1.0 + exp(-zs - ln(2.0*xi1-1.0))))


ax.plot(append([-1,0,0],zs),append([-1,-1, 1.0-1.0/xi1],E1),linewidth = 2.0,label = r'$\chi = 0.60$',color='b') # I have a piecewise function to plot

axclone = ax.twinx() # where I clone the axes with the same x axis
axclone.plot(zs,n1,linewidth = 2.0,linestyle = '--',label = r'$\chi = 0.60$',color='b')

yt = [0.0, 0.25, 0.5, 0.75]

ytl = ['$0$',
    r'$\frac{1}{4}\frac{N}{\lambda}$',
    r'$\frac{1}{2}\frac{N}{\lambda}$',
    r'$\frac{3}{4}\frac{N}{\lambda}$']

axclone.set_yticks(yt) # setting ticks for the clone axis
axclone.set_yticklabels(ytl)

# Here I very much control the legend properties
leg2 = legend(title = r'$\chi=\frac{\epsilon_1}{\epsilon_1+\epsilon_2}$', fontsize = 14,ncol=2,
    handlelength = 2,
    handleheight = 0.0,
    handletextpad = 0.0,
    borderpad = 0.25,
    borderaxespad = 0.25,
    columnspacing = 0.1,
    loc = 'lower right')
setp(leg2.get_title(),fontsize = 14) # it's odd that I had to set the legend title font size with a different function.


axclone.set_ylabel(r'$n\left(z\right)$',fontsize = 22)

#tick positions
E = [-1.0, 1.0-1.0/xi1, -0.5, #0.0, 
    0.5, 1.0]

# tick labels
El = [r'$-1$',
    r'$\left(1-\frac{1}{\chi}\right)$',
    r'$-\frac{1}{2}$', #r'$0$',
    r'$\frac{1}{2}$',
    r'$1$']



# Now I set the axes and move the spines around to look nice
axclone.set_ylim(-0.5,0.5)
ax.set_ylim(-1.1,1.1)
ax.set_xlim(-1,6)
ax.spines['bottom'].set_position('center')
ax.xaxis.set_ticks_position('bottom')
ax.spines['top'].set_color('none')
ax.spines['left'].set_position(('data',0))
ax.spines['left'].set_smart_bounds(True)


ax.set_yticks(E)
ax.set_yticklabels(El)

ax.set_xticks([-1, #0,    
    1,2,3,4,5,6])
xt = [r'$-\lambda$', #r'$0$',
    r'$\lambda$', 
    r'$2\lambda$',
    r'$3\lambda$',
    r'$4\lambda$', 
    r'$5\lambda$', 
    r'$6\lambda$']

ax.set_xticklabels(xt)
ax.set_ylabel(r'$\frac{E\left(z\right)}{E_{1,\infty}}$',fontsize = 22)
ax.set_xlabel(r'$z$',fontsize = 22)
fig2.subplots_adjust(bottom=0.00,top=0.95,left=0.17,right=0.85)
savefig('Efield_with_charge_dists.png',format='png',dpi=96*3,transparent = True)
show()