Note: if you plan to do any development on this, see *Development.
I recommend you install it like this: go to a directory on your Python path and run these commands.
git clone [email protected]:espyranto/espyranto.git
pip install -e espyranto
I am pretty sure you can update espyranto later by just going to the directory and running:
git pull
We are currently working on “Generation 1” espyranto. This code is located in ./espyranto/g1/.
This provides a Plate class that reads from a directory to get the compositions of the wells, along with the kinetic data that Eric Lopato has derived from image analysis.
If you want to develop on this, I suggest you fork espyranto on Github and then install your fork. You can create pull requests to merge your development into the main code. This takes some skill and practice, so please start with small changes, and talk to Dr. Kitchin about how to do it before investing a lot of time in it.
- Make a new branch off the master of your fork
- Make a set of changes in your branch
- Push your branch to your fork
- Go to Github and make a pull request off the fork
- We will review the request, and if it is ok, merge it
- Then, update the master branch of your fork, which should now contain your changes.
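The steps above might look like this on the command line. This is a sketch, not a tested recipe; YOUR-USERNAME is a placeholder for your GitHub username, and my-feature is a placeholder branch name.

```shell
# Sketch of the fork workflow above (placeholders: YOUR-USERNAME, my-feature).
git clone [email protected]:YOUR-USERNAME/espyranto.git
cd espyranto

# keep a reference to the upstream repository
git remote add upstream [email protected]:espyranto/espyranto.git

# make a new branch off master
git checkout -b my-feature master

# ... edit files, then commit and push the branch to your fork
git add -A
git commit -m "Describe your change"
git push -u origin my-feature

# go to Github and open a pull request from the branch on your fork;
# after it is merged, update your master
git checkout master
git pull upstream master
git push origin master
```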
It is a little tricky working with branches and forks. We can add documentation here as needed to at least make it simpler.
Please report issues at https://github.com/espyranto/espyranto/issues.
If you don’t have a ~/.ssh/id_rsa.pub at NERSC, generate your ssh keys there:
ssh-keygen -t rsa -b 4096 -C "[email protected]"
Then add the contents of ~/.ssh/id_rsa.pub as a new key at https://github.com/settings/keys.
Then clone as described in *Installation.
Then see the notebook at ./readme.ipynb.
It seems you have to install these manually:
pip install --user ase pycse quantities uncertainties
This example assumes you have installed espyranto on your path.
You have to set the timestep for the images; there is currently no complete way to derive it from the data. The timestep should be the amount of time the wells are illuminated between each image.
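As a rough consistency check, you can compare a chosen timestep to the elapsed time divided by the number of intervals. This is only an estimate (the true timestep is the illumination time between images, which may differ); the times here are taken from the example run shown later in this document.

```python
# Hedged sketch: estimate an approximate timestep from the start/end
# times and image count of the example run below. This is not how
# espyranto sets the timestep; it is just a sanity check.
from datetime import datetime

start = datetime(2019, 3, 21, 20, 24, 33)
end = datetime(2019, 3, 22, 13, 2, 46)
n_images = 100

# elapsed time divided by the number of intervals between images
approx_timestep = (end - start).total_seconds() / (n_images - 1)
print(f'{approx_timestep:.0f} s')  # close to the 600 s used in the example
```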
Note, a simple way to get machine-readable key/value pairs in the metadata is to add cells in the Parameters column with contents like ‘A = Pd’. This class will parse that into the metadata dictionary.
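A minimal sketch of how such a cell could be parsed (the actual parsing inside the Plate class may differ; parse_parameter_cell is a hypothetical helper, not part of espyranto):

```python
# Hedged sketch: parse 'Key = Value' cells into a metadata dictionary.
def parse_parameter_cell(cell):
    """Parse strings like 'A = Pd' into a (key, value) pair, else None."""
    if "=" not in cell:
        return None
    key, value = cell.split("=", 1)
    return key.strip(), value.strip()

# cells that don't look like 'Key = Value' are skipped
cells = ["A = Pd", "B=Cu", "some free-text note"]
metadata = dict(filter(None, map(parse_parameter_cell, cells)))
print(metadata)  # {'A': 'Pd', 'B': 'Cu'}
```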
from espyranto.g1.plate import Plate
from espyranto.g1.mmolH import mmolH
%matplotlib inline
import matplotlib.pyplot as plt
p = Plate('/Users/jkitchin/Desktop/generation-1-data/03/RucolCurow-I/')
print(p)
mmolH data
100 images were acquired.
Start time: 2019-03-21 20:24:33
End time: 2019-03-22 13:02:46
The timestep is 600 s
mmolH data has shape: (96, 74)
Here we can access the ‘A’ element of the metadata because there is a cell containing ‘A=Pd’ in the Parameters column of the Excel spreadsheet.
p.metadata['A']
Composition in row 0:
for i, (a, b) in enumerate(zip(p.A.reshape(p.nrows, p.ncols)[0],
p.B.reshape(p.nrows, p.ncols)[0])):
print(f'col {i:2d} {p.metadata["A"]} {a:5.2f}, {p.metadata["B"]} {b:5.2f}')
mmolH = p.data['mmolH']
mmolH.maxh()
mmolH.plot_mmolH(23)
mmolH.plot_mmolH([1, 11])
mmolH.plot_mmolH_grid(); # ; suppresses a lot of matplotlib output
mmolH.plot_mmolH_max()
mmolH.plot_mmolH_max_contourf()
mmolH.show_plate(20)
mmolH.show_plate(slice(0, -1, 20));
These are reduced dimension plots. For example, here we plot maxH vs mole fraction of Ru in a scatter plot, where the size of the circle is related to the total concentration of metals.
import numpy as np
tot = p.A + p.B
x = p.A / np.where(tot > 0, tot, 1) # one cell is empty as a control
plt.scatter(x, np.max(mmolH.mmolH, axis=1), (p.A + p.B) * 200, alpha=0.5)
plt.xlabel(f'$x_{{{p.metadata["A"]}}}$')
plt.ylabel('Max H2 formed')
This suggests Ru is not very good, Cu gets better with increasing concentration, and together they are much better than you would expect.
p.movie_ffmpeg()
p.movie_imagemagick()
Eric used a smoothed function for this.
I think Kirby has been fitting a first-order rate law, maybe with a delay. That will not always work; some data does not look like that.
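One plausible form of a first-order rate law with a delay is mmolH(t) = a (1 - exp(-k (t - t0))) for t > t0, and 0 before. The sketch below fits that form to synthetic data with scipy; it is an illustration of the idea, not the actual fitting code, and the function name and parameters are assumptions.

```python
# Hedged sketch: fit a delayed first-order rate law to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def delayed_first_order(t, a, k, t0):
    """a * (1 - exp(-k * (t - t0))) for t > t0, else 0."""
    return a * (1 - np.exp(-k * np.clip(t - t0, 0, None)))

t = np.arange(0, 100, 1.0)
y = delayed_first_order(t, 2.0, 0.1, 15.0)  # synthetic, noise-free data

# initial guesses matter; these are in the right neighborhood
popt, _ = curve_fit(delayed_first_order, t, y, p0=[1.0, 0.05, 10.0])
print(popt)  # should recover roughly a=2.0, k=0.1, t0=15.0
```

On real data this can fail to converge or fit poorly when the trace does not follow this shape, which is the caveat noted above.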
Many of these show a period of no activity before there is an onset. We should develop a way to estimate that induction period so we can see what factors affect it.
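One simple way to estimate such an onset is the first image index where the signal exceeds a small fraction of its maximum. This is just a starting-point sketch (onset_index is a hypothetical helper, not part of espyranto), and it is sensitive to noise and to the threshold choice.

```python
# Hedged sketch: estimate an onset index from a kinetic trace.
import numpy as np

def onset_index(y, frac=0.05):
    """Return the first index where y exceeds frac * max(y), or None."""
    y = np.asarray(y, dtype=float)
    threshold = frac * y.max()
    idx = int(np.argmax(y > threshold))  # first True, or 0 if none
    return idx if y[idx] > threshold else None

# synthetic trace: flat for 10 steps, then a linear rise
trace = np.concatenate([np.zeros(10), np.linspace(0, 1, 20)])
print(onset_index(trace))  # 11
```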