add color blind friendly colormaps
robinzyb committed Dec 4, 2024
1 parent 110f6d5 commit f929bed
Showing 10 changed files with 151 additions and 59 deletions.
27 changes: 27 additions & 0 deletions cp2kdata/plots/colormaps.py
@@ -0,0 +1,27 @@
# create color blind friendly colormaps

from matplotlib.colors import LinearSegmentedColormap, ListedColormap
import matplotlib as mpl

# the colormap was taken from the following source:
# [1] Wong, Bang. "Points of view: Color coding." nature methods 7.8 (2010): 573.
color_blind_map = [
    # [0.0/256, 0.0/256, 0.0/256, 1],     # Black
    [230.0/256, 159.0/256, 0.0/256, 1],   # Orange
    [86.0/256, 180.0/256, 233.0/256, 1],  # Sky Blue
    [0.0/256, 158.0/256, 115.0/256, 1],   # Bluish Green
    [240.0/256, 228.0/256, 66.0/256, 1],  # Yellow
    [0.0/256, 114.0/256, 178.0/256, 1],   # Blue
    [213.0/256, 94.0/256, 0.0/256, 1],    # Vermilion
    [204.0/256, 121.0/256, 167.0/256, 1], # Reddish Purple
]

cb_lcmap = ListedColormap(color_blind_map, name='cp2kdata_cb_lcmap')
cb_lscmap = LinearSegmentedColormap.from_list(name='cp2kdata_cb_lscmap', colors=color_blind_map)


mpl.colormaps.register(cmap=cb_lcmap)
print("color blind friendly colormap registered as cp2kdata_cb_lcmap")
mpl.colormaps.register(cmap=cb_lscmap)
print("color blind friendly colormap registered as cp2kdata_cb_lscmap")


9 changes: 4 additions & 5 deletions docs/backlog.md
@@ -1,10 +1,9 @@

# Idea List
1. manipulate cube, pdos data
2. modify step information on cube files
3. extract information from output
4. generate standard test input and directory
5. generate nice figures
1. modify step information on cube files
2. extract information from output
3. generate standard test input and directory
4. generate nice figures

# TO DO
cli interface
Binary file added docs/figures/cb_lcmap_plot.png
Binary file added docs/figures/cb_lscmap_plot.png
Binary file added docs/figures/cp2kdata_cb_lcmap.png
Binary file added docs/figures/cp2kdata_cb_lscmap.png
104 changes: 104 additions & 0 deletions docs/plots.md
@@ -0,0 +1,104 @@
# Plotting in CP2KData

## Color blind friendly colormaps

> If a submitted manuscript happens to go to three male reviewers of Northern European descent, the chance that at least one will be color blind is 22 percent. {cite}`wong2010points`

This shows the importance of making plots color blind friendly.
As suggested by the above reference, I implemented the recommended color blind friendly colormaps in the CP2KData package.
Usage is summarized in the following steps.

1. Register the colormaps using cp2kdata

```python
import matplotlib as mpl
import cp2kdata.plots.colormaps
```
```text
color blind friendly colormap registered as cp2kdata_cb_lcmap
color blind friendly colormap registered as cp2kdata_cb_lscmap
```
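The registration above happens as a side effect of the import. As a minimal sketch of what that import does under the hood, using plain matplotlib (the `wong_colors` and `wong_demo` names are illustrative, not part of cp2kdata):

```python
import matplotlib as mpl
from matplotlib.colors import ListedColormap

# Wong (2010) palette, RGBA values normalized as in the committed colormaps.py
wong_colors = [
    (230/256, 159/256, 0/256, 1),    # Orange
    (86/256, 180/256, 233/256, 1),   # Sky Blue
    (0/256, 158/256, 115/256, 1),    # Bluish Green
    (240/256, 228/256, 66/256, 1),   # Yellow
    (0/256, 114/256, 178/256, 1),    # Blue
    (213/256, 94/256, 0/256, 1),     # Vermilion
    (204/256, 121/256, 167/256, 1),  # Reddish Purple
]

# build a discrete colormap and register it with matplotlib's global registry
demo_cmap = ListedColormap(wong_colors, name='wong_demo')
mpl.colormaps.register(cmap=demo_cmap)

# the colormap is now retrievable by name anywhere matplotlib accepts one
demo = mpl.colormaps['wong_demo']
```

Once registered, the name can be passed to any matplotlib API that takes a `cmap` argument.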

2. Get the colormaps

There are two colormaps in the package.
The first one is a listed colormap, which can also be understood as a discrete colormap.
```python
mpl.colormaps['cp2kdata_cb_lcmap']
```
![cbl_cbar](./figures/cp2kdata_cb_lcmap.png)

The second one is a linear segmented colormap, which can also be understood as a continuous colormap.
```python
mpl.colormaps['cp2kdata_cb_lscmap']
```
![cbls_cbar](./figures/cp2kdata_cb_lscmap.png)
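The continuous variant interpolates linearly between the seven anchor colors of the discrete palette. A rough pure-Python sketch of that interpolation (the `wong_palette` and `sample_linear` names are illustrative, not part of cp2kdata or matplotlib):

```python
# Wong (2010) anchor colors as RGB triples in [0, 1], matching colormaps.py
wong_palette = [
    (230/256, 159/256, 0/256),    # Orange
    (86/256, 180/256, 233/256),   # Sky Blue
    (0/256, 158/256, 115/256),    # Bluish Green
    (240/256, 228/256, 66/256),   # Yellow
    (0/256, 114/256, 178/256),    # Blue
    (213/256, 94/256, 0/256),     # Vermilion
    (204/256, 121/256, 167/256),  # Reddish Purple
]

def sample_linear(palette, x):
    """Sample a piecewise-linear colormap at position x in [0, 1]."""
    n = len(palette) - 1          # number of segments between anchors
    i = min(int(x * n), n - 1)    # segment index, clamped to the last one
    t = x * n - i                 # fractional position within the segment
    lo, hi = palette[i], palette[i + 1]
    return tuple((1 - t) * a + t * b for a, b in zip(lo, hi))

# the endpoints reproduce the first and last anchor colors exactly
first = sample_linear(wong_palette, 0.0)
last = sample_linear(wong_palette, 1.0)
```

This is what allows `cp2kdata_cb_lscmap` to be sampled at an arbitrary number of points, as done in the example of step 4 below.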

3. Example for using the listed colormap
```python
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import cp2kdata.plots.colormaps
plt.style.use('cp2kdata.matplotlibstyle.jcp')


cp2kdata_cb_lcmap = mpl.colormaps['cp2kdata_cb_lcmap']
plt.rcParams["axes.prop_cycle"] = plt.cycler("color", cp2kdata_cb_lcmap.colors)
row = 1
col = 1
fig = plt.figure(figsize=(3.37*col, 1.89*row), dpi=300, facecolor='white')
gs = fig.add_gridspec(row,col)
ax = fig.add_subplot(gs[0])

t = np.linspace(-10, 10, 100)
def sigmoid(t, t0):
    return 1 / (1 + np.exp(-(t - t0)))

nb_colors = len(plt.rcParams['axes.prop_cycle'])

shifts = np.linspace(-5, 5, nb_colors)
amplitudes = np.linspace(1, 1.5, nb_colors)
for t0, a in zip(shifts, amplitudes):
ax.plot(t, a * sigmoid(t, t0), '-')
ax.set_xlim(-10, 10)

fig.savefig("cb_lcmap_plot.png", dpi=100)
```
![cbl_plot](./figures/cb_lcmap_plot.png)

4. Example for using the linear segmented colormap
```python
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import cp2kdata.plots.colormaps
plt.style.use('cp2kdata.matplotlibstyle.jcp')


cp2kdata_cb_lscmap = mpl.colormaps['cp2kdata_cb_lscmap']
N = 13
plt.rcParams["axes.prop_cycle"] = plt.cycler("color", cp2kdata_cb_lscmap(np.linspace(0,1,N)))
row = 1
col = 1
fig = plt.figure(figsize=(3.37*col, 1.89*row), dpi=300, facecolor='white')
gs = fig.add_gridspec(row,col)
ax = fig.add_subplot(gs[0])

t = np.linspace(-10, 10, 100)
def sigmoid(t, t0):
    return 1 / (1 + np.exp(-(t - t0)))

nb_colors = len(plt.rcParams['axes.prop_cycle'])

shifts = np.linspace(-5, 5, nb_colors)
amplitudes = np.linspace(1, 1.5, nb_colors)
for t0, a in zip(shifts, amplitudes):
ax.plot(t, a * sigmoid(t, t0), '-')
ax.set_xlim(-10, 10)

fig.savefig("cb_lscmap_plot.png", dpi=100)
```
![cbls_plot](./figures/cb_lscmap_plot.png)
3 changes: 3 additions & 0 deletions docs/references.md
@@ -0,0 +1,3 @@
# Bibliography
```{bibliography}
```
3 changes: 3 additions & 0 deletions jupyter-book/_toc.yml
@@ -12,6 +12,9 @@ parts:
  - caption: Parameter Test
    chapters:
    - file: docs/input_test
  - caption: Plots
    chapters:
    - file: docs/plots
  - caption: Plugin
    chapters:
    - file: docs/dpdata_plugin
64 changes: 10 additions & 54 deletions jupyter-book/references.bib
@@ -1,56 +1,12 @@
---
---
@inproceedings{holdgraf_evidence_2014,
address = {Brisbane, Australia, Australia},
title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}},
booktitle = {International {Conference} on {Cognitive} {Neuroscience}},
publisher = {Frontiers in Neuroscience},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.},
year = {2014}
}

@article{holdgraf_rapid_2016,
title = {Rapid tuning shifts in human auditory cortex enhance speech intelligibility},
volume = {7},
issn = {2041-1723},
url = {http://www.nature.com/doifinder/10.1038/ncomms13654},
doi = {10.1038/ncomms13654},
number = {May},
journal = {Nature Communications},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Rieger, Jochem W. and Crone, Nathan and Lin, Jack J. and Knight, Robert T. and Theunissen, Frédéric E.},
year = {2016},
pages = {13654},
file = {Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:C\:\\Users\\chold\\Zotero\\storage\\MDQP3JWE\\Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:application/pdf}
}

@inproceedings{holdgraf_portable_2017,
title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science},
volume = {Part F1287},
isbn = {978-1-4503-5272-7},
doi = {10.1145/3093338.3093370},
abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the effciency and ease with which students can learn. This manuscript details recent advances towards using commonly-Available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. We cover the benets (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.},
booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}},
author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.},
year = {2017},
keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy}
}

@article{holdgraf_encoding_2017,
title = {Encoding and decoding models in cognitive electrophysiology},
volume = {11},
issn = {16625137},
doi = {10.3389/fnsys.2017.00061},
abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generated a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aimis to provide a practical understanding of predictivemodeling of human brain data and to propose best-practices in conducting these analyses.},
journal = {Frontiers in Systems Neuroscience},
author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. and Theunissen, F.E.},
year = {2017},
keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials}
}

@book{ruby,
title = {The Ruby Programming Language},
author = {Flanagan, David and Matsumoto, Yukihiro},
year = {2008},
publisher = {O'Reilly Media}
}
@article{wong2010points,
title={Points of view: Color coding},
author={Wong, Bang},
journal={nature methods},
volume={7},
number={8},
pages={573},
year={2010},
publisher={Nature Publishing Group}
}
