catch up #178

Merged 54 commits on May 24, 2024
Commits
90ce2d0
First compilation on macOS and it gives some good advice.
kaschau Nov 29, 2023
49e94df
Whoops, got too excited there.
kaschau Dec 2, 2023
ce2828c
Still dont need MWmix
kaschau Dec 2, 2023
5e3e250
Dont need this.
kaschau Dec 10, 2023
fe1a0dc
Renamed secondOrderKEEP to KEEPpe, added regular KEEP.
kaschau Dec 11, 2023
1dcfaef
Added my KEEP formulation.
kaschau Dec 11, 2023
b988af9
Added warning for using nChemSubSteps more than 1 with implicit chem
kaschau Dec 12, 2023
bf71c35
KEEPes standard
kaschau Dec 12, 2023
a404a31
Finally got install working.
kaschau Dec 13, 2023
2d1a325
ignore venv
kaschau Dec 13, 2023
55585c2
Installing in the correct director with pip.
kaschau Dec 17, 2023
0551134
Middle initial
kaschau Dec 17, 2023
bf0e007
Added a logo.
kaschau Dec 24, 2023
f02bb9f
Using png.
kaschau Dec 24, 2023
4a94f34
Update README.md
kaschau Dec 24, 2023
1b08ec3
Smaller images
kaschau Dec 24, 2023
4c9e86d
Typo.
kaschau Dec 24, 2023
a4c40f0
Update README.md
kaschau Dec 24, 2023
ca3bf22
Building documentation structure.
kaschau Dec 24, 2023
3536585
Add install.
kaschau Dec 24, 2023
ec63d33
Bullets
kaschau Dec 24, 2023
d4081b4
Bullets again.
kaschau Dec 24, 2023
1bdccca
More...
kaschau Dec 24, 2023
7916a49
Better images.
kaschau Dec 25, 2023
f259536
Even better images.
kaschau Dec 25, 2023
ccd54d5
Best images yet!
kaschau Dec 25, 2023
e11b1cf
New images.
kaschau Dec 26, 2023
e305ece
Need the circle.
kaschau Dec 26, 2023
686b107
Updating docs.
kaschau Dec 27, 2023
4bfdb4d
Update LICENSE
kaschau Dec 28, 2023
2ef61b3
Create CNAME
kaschau Dec 29, 2023
b68cd31
Update CNAME
kaschau Dec 29, 2023
89f20e2
Delete CNAME
kaschau Dec 29, 2023
b24580e
Create _config.yaml
kaschau Dec 30, 2023
2b428b8
Update and rename _config.yaml to _config.yml
kaschau Dec 30, 2023
2110b3c
Update README.md
kaschau Dec 30, 2023
6ccf901
Clean up (#176)
kaschau Dec 30, 2023
410159a
Little tweak to splash.
kaschau Jan 2, 2024
ea409b6
Changed Stanford_Skeletal to FFCMY
kaschau Jan 3, 2024
9ae41b9
Renamed typo module. Speeding up pure fluid detection.
kaschau Jan 8, 2024
49932e5
Case documentation (#177)
kaschau Jan 11, 2024
786ceef
Crap, my bad...
kaschau Jan 12, 2024
b1833d4
Added coprocessing documentation.
kaschau Jan 12, 2024
72822cd
Fixed markdown
kaschau Jan 12, 2024
1794ae6
Hot fix, if pure component we still nee MWmix
kaschau Feb 15, 2024
404cf4f
Bumping minimum cmake so that CMP warning goes away.
kaschau Feb 28, 2024
e789aa7
Changing name pending publication?
kaschau Mar 7, 2024
6c9cf27
Damnnnnnn
kaschau Mar 7, 2024
3c65ab6
Sorry im switching to VScode and things are hard.
kaschau Mar 7, 2024
08d4cd4
Linting with ruff, also removed incorrect limitation comment of cutGrid
kaschau Mar 25, 2024
c1ec20c
Kokkos include should be in <> not ""
kaschau Apr 5, 2024
af5a8dd
Ruff conforming entirely
kaschau Apr 5, 2024
bfbe4d4
Simplifying.
kaschau May 20, 2024
97d40f4
Making flux calculations function calls so I don't
kaschau May 23, 2024
1 change: 1 addition & 0 deletions .gitignore
@@ -6,3 +6,4 @@ __pycache__
compile_commands.json
.cache
TAGS
.venv
2 changes: 1 addition & 1 deletion AUTHORS
@@ -1 +1 @@
Kyle Schau <[email protected]>
Kyle A. Schau <[email protected]>
6 changes: 4 additions & 2 deletions CMakeLists.txt
@@ -8,7 +8,7 @@
# CMake BOILER PLATE #
###########################################################################

cmake_minimum_required(VERSION 3.16 FATAL_ERROR)
cmake_minimum_required(VERSION 3.27 FATAL_ERROR)
enable_language( C CXX )

###########################################################################
@@ -55,7 +55,6 @@ set( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall" CACHE STRING "" FORCE )
# BUILD PEREGRINE COMPUTE LIBRARY #
###########################################################################
project(compute LANGUAGES C CXX)
#Set the install path

#Turn off lto junk
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION OFF )
@@ -95,3 +94,6 @@ pybind11_add_module(compute ${compute_src} ${PROJECT_SOURCE_DIR}/src/bindings.cp
Find_Package(Kokkos REQUIRED)
target_link_libraries(compute PUBLIC Kokkos::kokkos)
set_property(TARGET compute PROPERTY CXX_STANDARD 17)

#Set the install path
install(TARGETS compute LIBRARY DESTINATION ${PROJECT_SOURCE_DIR}/src/peregrinepy/)
34 changes: 17 additions & 17 deletions LICENSE
@@ -1,28 +1,28 @@
Copyright (c) 2021-2022 Kyle A. Schau
BSD 3-Clause License

All rights reserved.
Copyright (c) 2021-2024 Kyle A. Schau

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

a. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
b. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
c. Neither the name of PEREGRINE nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
132 changes: 45 additions & 87 deletions README.md
@@ -1,119 +1,77 @@
# PEREGRINE
# PEREGRINE: Accessible, Performant, Portable Multiphysics CFD

An attempt at a superfast Python/C++ Multi-Physics CFD code using Kokkos for performance portability.
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" width="800" srcset="docs/images/pgSplashD2.jpg">
<source media="(prefers-color-scheme: light)" width="800" srcset="docs/images/pgSplashL2.jpg">
<img alt="peregrine logo" width="800" src="docs/images/pgSplashL2.jpg">
</picture>
</p>

## About

# Installation
PEREGRINE is a second-order, multiblock, structured-grid, finite-volume, 3D multiphysics CFD solver. The main novelty of PEREGRINE is its implementation in [Python](https://www.python.org) for ease of development and its use of [Kokkos](https://www.github.com/kokkos/kokkos) for performance portability. If you are unfamiliar with Kokkos, do a little digging; it is a great project with a healthy community and helpful developers. The TL;DR: Kokkos is a C++ library (not a C++ language extension) that exposes useful abstractions for data management (i.e. multidimensional arrays) and kernel execution, from CPU-serial to GPU-parallel. This allows a single-source, multiple-architecture approach in PEREGRINE. In other words, you can run a case with PEREGRINE on your laptop, then, without changing a single line of source code, run the same case on an AMD GPU based supercomputer. PEREGRINE is massively parallel inter-node via MPI communication.

## Installation

``` setup.py install ```
You must first install [Kokkos](https://www.github.com/kokkos/kokkos) and set the environment variable `Kokkos_DIR=/path/to/kokkos/install`. The Kokkos installation controls the Host/Device + Serial/Parallel execution parameters; there are no settings for the python installation.

Or just set PYTHONPATH to point to /path/to/PEREGRINE/src/peregrinepy
followed by manual install
## Easy Install
For editable python installation:

``` mkdir build; cd build; ccmake ../```
``` pip install -e . ```

To generate compile_commands.json,

``` cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ../ ```


# Data Storage

There are two means of manipulating data:

1) on the python side as numpy arrays and,
2) on the C++ side via Kokkos Views.

Numpy arrays are accessed by a dictionary attribute
of the block class called "array", i.e.

blk.array["q"][i,j,k,l]

Gives access to the primitive variables. On the C++ side, the kokkos views are accessed
as members of the same block class, i.e.
Note, installation with pip is hard coded to Debug mode. I can't figure out how to make that an option.

b.q(i,j,k,l)
## Recommended Install
For development, it is better to set the environment variable `PYTHONPATH` to point to `/path/to/PEREGRINE/src/` followed by manual installation of the C++ `compute` module:

Note, the Kokkos Views are accessible from the python side, by accessing the block class
method as in Kokkos, i.e.
```cd /path/to/PEREGRINE; mkdir build; cd build; ccmake ../; make -j install```

blk.q

However, you cannot access elements of the Kokkos view on the python side.
To generate compile_commands.json,

## Data Residence
``` cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ../ ```

Depending on if you are running a CPU or GPU simulation, the arrays live in different places.
Obviously, the Kokkos views exist wherever your execution space is, CPU or GPU. But the python
side numpy arrays in the ```blk.array``` dict are always on the host, so they are accessible
within python. To facilitate this, we also create a dictionary of mirrors, i.e.
## Documentation

blk.mirror["q"]
See the documentation [here](./docs/documentation.md).

Which is a kokkos mirror view of the main kokkos view. This mirror view always exists on the CPU,
and so the ```blk.array``` numpy arrays wrap around these mirror views. So to update the numpy
arrays from kokkos view data on the GPU, simply perform a deep_copy from the kokkos view to the
mirror view. Since the numpy array wraps the mirror view data, the numpy array will be updated
with this deep_copy operation. If you are running a CPU case, all these arrays and views point
to the same data.
## Profiling GPU via NVTX
Download and install the libraries found at [here](https://github.com/kokkos/kokkos-tools). At runtime, ensure the environment variable

CPU CPU CPU/GPU
array dict --> wraps ( mirror dict ) --> mirrors ( kokkos view )
$ export KOKKOS_PROFILE_LIBRARY=/path/to/kokkos-tools/kp_nvprof_connector.so

# Array Names
is set. Finally, run the simulation with nsys enabling cuda,nvtx trace options.

| Name | Variables | Index | Units |
|:---------:|--------------------|:-------:|--------------|
| **q** | **Primitives** | | |
| | pressure | 0 | Pa |
| | u,v,w | 1,2,3 | m/s |
| | temperature | 4 | K |
| | mass fraction | 5..ne | [] |
| **Q** | **Conserved** | | |
| | density | 0 | kg/m^3 |
| | momentum | 1,2,3 | kg m / s.m^3 |
| | total energy | 4 | J/m^3 |
| | species mass | 5..ne | kg/m^3 |
| **qh** | **Thermo** | | |
| | gamma | 0 | [] |
| | cp | 1 | J/kg.K |
| | enthalpy | 2 | J/m^3 |
| | c | 3 | m/s |
| | internal energy | 4 | J/m^3 |
| | species enthalpy\* | 5..5+ns | J/kg |
| **qt** | **Transport** | | |
| | mu | 0 | Pa.s |
| | kappa | 1 | W/m/K |
| | D[n]\* | 2..2+ns | m^2/s |
| **omega** | **Chemistry** | | |
| | dTdt | 0 | K/s |
| | d(Yi)dt\*\* | 1..ns-1 | []/s |
jsrun -p 1 -g 1 nsys profile -o outPutName --trace cuda,nvtx -f true --stats=false python -m mpi4py pgScript.py

\*We store all species' enthalpies and diffusion coeff
## Performance

\*\* While d(Yi)/dt is stored at the end of chemistry, d(rhoYi)/dt is applied to dQ/dt
PEREGRINE is pretty fast by default. However, when running a simulation with multiple chemical species, it is recommended to turn on `PEREGRINE_NSCOMPILE` in cmake and then specify the value of `numSpecies`. This will hard code `ns` at compile time and give a considerable performance improvement for EOS/transport calculations.
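
As a concrete sketch, a configure line using this option might look like the following (the variable spellings `PEREGRINE_NSCOMPILE` and `numSpecies` are taken from the text above; the species count of 9 is an arbitrary example):

```shell
# From a build directory, assuming a 9-species mechanism (illustrative values):
cmake -DPEREGRINE_NSCOMPILE=ON -DnumSpecies=9 ../
```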

# Profiling GPU via NVTX
Download and install the libraries found at
``` https://github.com/kokkos/kokkos-tools ```
At runtime, ensure the environment variable
``` export KOKKOS_PROFILE_LIBRARY=$HOME/software/sources/kokkos-tools/kp_nvprof_connector.so```
is set. Finally, run the simulation with nsys enabling cuda,nvtx trace options.
```jsrun -p 1 -g 1 nsys profile -o twelveSpecies30Cubed --trace cuda,nvtx -f true --stats=false python -m mpi4py threeDTaylorProf.py```
## Parallel I/O

# Performance
PEREGRINE is pretty fast by default. However, when running a simulation, it is recommended to turn on ```PEREGRINE_NSCOMPILE``` in cmake, and then specify the value of ```numSpecies```. This will hard code ```ns``` at compile time, and gives a considerable performance improvement for EOS/transport calculations.

# Parallel I/O
Parallel I/O can be achieved with a parallel capable h5py installation.

$ export CC=mpicc
$ export HDF5_MPI="ON"
$ export HDF5_DIR="/path/to/parallel/hdf5" # If this isn't found by default
$ pip install h5py --no-binary=h5py

``` $HDF5_DIR ``` must point to a parallel enabled HDF5 installation. Parallel I/O is only applicable when running simulations with ```config["io"]["lumpIO"]=true```.
`$HDF5_DIR` must point to a parallel enabled HDF5 installation. Parallel I/O is only applicable when running simulations with `config["io"]["lumpIO"]=true`.

## Attribution

Please use the following BibTex to cite PEREGRINE in scientific writing:

```
@misc{PEREGRINE,
author = {Kyle A. Schau},
year = {2021},
note = {https://github.com/kaschau/PEREGRINE},
title = {PEREGRINE: Accessible, Performant, Portable Multiphysics CFD}
}
```

## License

6 changes: 6 additions & 0 deletions _config.yml
@@ -0,0 +1,6 @@
remote_theme: pages-themes/[email protected]
plugins:
- jekyll-remote-theme

title: PEREGRINE's Homepage
description: Bookmark this to keep an eye on my project updates!
File renamed without changes.
64 changes: 64 additions & 0 deletions docs/codeStructure/arrayMirrorView.md
@@ -0,0 +1,64 @@
# Data Storage

There are two means of manipulating data:

1) on the python side as numpy arrays and,
2) on the C++ side via Kokkos Views.

Numpy arrays are accessed by a dictionary attribute of the block class called "array", i.e.

blk.array["q"][i,j,k,l]

Gives access to the primitive variables. On the C++ side, the kokkos views are accessed as members of the same block class, i.e.

b.q(i,j,k,l)

Note, the Kokkos Views are accessible from the python side, by accessing the block class method as in Kokkos, i.e.

blk.q

However, you cannot access elements of the Kokkos view on the python side.

## Data Residence

Depending on whether you are running a CPU or GPU simulation, the arrays live in different places. Obviously, the Kokkos views exist wherever your execution space is, CPU or GPU. But the python side numpy arrays in the ```blk.array``` dict are always on the host, so they are accessible within python. To facilitate this, we also create a dictionary of mirrors, i.e.

blk.mirror["q"]

This is a kokkos mirror view of the main kokkos view. The mirror view always exists on the CPU, and the ```blk.array``` numpy arrays wrap around these mirror views. So to update the numpy arrays from kokkos view data on the GPU, simply perform a deep_copy from the kokkos view to the mirror view. Since the numpy array wraps the mirror view data, the numpy array will be updated by this deep_copy operation. If you are running a CPU case, all these arrays and views point to the same data.

CPU CPU CPU/GPU
array dict --> wraps ( mirror dict ) --> mirrors ( kokkos view )
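
The wrap/mirror relationship above can be mimicked in plain NumPy. This is an illustrative sketch only: `mirror`, `array`, and `device` are stand-ins for `blk.mirror["q"]`, `blk.array["q"]`, and the device-resident Kokkos view; none of this is the real peregrinepy API.

```python
import numpy as np

# Host-side buffer standing in for the Kokkos mirror view (blk.mirror["q"]).
mirror = np.zeros((2, 2, 2, 5))

# The numpy array wraps the mirror's memory: shared storage, no copy,
# just like blk.array["q"] wrapping the mirror view.
array = mirror.view()

# Values standing in for the device-resident Kokkos view after a GPU kernel.
device = np.full((2, 2, 2, 5), 101325.0)

# Stand-in for a Kokkos deep_copy(mirror, view): copy device data into the
# mirror's memory in place...
np.copyto(mirror, device)

# ...and the wrapping numpy array sees the update with no further work.
assert array[0, 0, 0, 0] == 101325.0
```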

# Array Names

| Name | Variables | Index | Units |
|:---------:|--------------------|:-------:|--------------|
| **q** | **Primitives** | | |
| | pressure | 0 | Pa |
| | u,v,w | 1,2,3 | m/s |
| | temperature | 4 | K |
| | mass fraction | 5..ne | [] |
| **Q** | **Conserved** | | |
| | density | 0 | kg/m^3 |
| | momentum | 1,2,3 | kg m / s.m^3 |
| | total energy | 4 | J/m^3 |
| | species mass | 5..ne | kg/m^3 |
| **qh** | **Thermo** | | |
| | gamma | 0 | [] |
| | cp | 1 | J/kg.K |
| | enthalpy | 2 | J/m^3 |
| | c | 3 | m/s |
| | internal energy | 4 | J/m^3 |
| | species enthalpy\* | 5..5+ns | J/kg |
| **qt** | **Transport** | | |
| | mu | 0 | Pa.s |
| | kappa | 1 | W/m/K |
| | D[n]\* | 2..2+ns | m^2/s |
| **omega** | **Chemistry** | | |
| | dTdt | 0 | K/s |
| | d(Yi)dt\*\* | 1..ns-1 | []/s |

\*We store all species' enthalpies and diffusion coefficients

\*\* While d(Yi)/dt is stored at the end of chemistry, d(rhoYi)/dt is applied to dQ/dt
File renamed without changes.
Empty file.
48 changes: 48 additions & 0 deletions docs/documentation.md
@@ -0,0 +1,48 @@
# PEREGRINE Documentation


## Executable Mode

PEREGRINE can run in either a scriptable mode for simple cases or an executable mode for larger production runs. For examples of the scripting mode, see [examples](../examples). For executable mode, see the case directory structure [here](./executableMode.md).

## CoProcessing

Coprocessing with ParaView is amazing, but in my experience it takes effort to make it work well. It seems they have cleaned it up a lot from back in the day. To start, download and install ParaView from source. I have tested up to 5.11, and it works well for me. To make coprocessing work, you need to compile ParaView in Catalyst mode. With cmake, pass the argument:

> ccmake -DPARAVIEW_BUILD_EDITION:STRING=CATALYST /path/to/ParaView_src/

Assume you are installing it in `/path/to/paraview`.

### Some Tips and Tricks

You must set the cmake flags:
```
PARAVIEW_USE_MPI=ON
PARAVIEW_USE_PYTHON=ON
```

Make sure ParaView finds the correct python you are planning to use with PEREGRINE.

Once you configure a bunch of times, look for a group of options named like `VTK_MODULE_USE_EXTERNAL_VTK_*`. You want to turn on as many of those as you can; otherwise ParaView will download, and fail to install, many of these dependencies, especially `mpi4py`, `png`, `libxml`, and so on. Hopefully your cluster already has these and you can module load them, or your package manager probably has them too.


### Running

To get it to work, you have to set these environment variables so python can find the catalyst install.

```
export PYTHONPATH=$PYTHONPATH:/path/to/paraview/lib/python3.XX/site-packages
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/paraview/lib
export PYTHONPATH=$PYTHONPATH:/path/to/paraview/lib
```

You will know it worked if `from paraview.catalyst import bridge` works in an interpreter.

Good Luck!


## Code Structure
* [Python + Kokkos](./codeStructure/pythonKokkos.md)
* [Multiblock Structure](./codeStructure/multiblock.md)
* [Array, Mirror, View](./codeStructure/arrayMirrorView.md)
* [Boundary Conditions](./boundaryConditions.md)
33 changes: 33 additions & 0 deletions docs/executableMode.md
@@ -0,0 +1,33 @@
# Executable Mode (Running real cases)


The directory structure is as follows:

.myCase
├── runPeregrine.py # symlink to runPeregrine.py
├── Archive # Folder to write archive results (*.h5, *.xmf)
├── Restart # Folder to read/write restarts (q.h5, q.xmf)
├── Grid # Folder to read/write grid (g.h5, g.xmf)
├── peregrine.yaml # PEREGRINE config file (see /src/peregrinepy/files/configFile.py)
├── Input # Folder to hold all input files
│ ├── conn.yaml # Connectivity file
│ ├── bcFams.yaml # Boundary conditions file
└── └── blocksForProcs.inp # Load balancing file


## Connectivity File

The connectivity file `conn.yaml` uses GridPro notation for block connectivity.

## Boundary Conditions File

The boundary conditions file `bcFams.yaml` specifies the boundary conditions. See [templates](../src/peregrinepy/bcs/bcFamTemplates).

## Load Balancing File

The load balancing file `blocksForProcs.inp` is a plain text file in which the nth line lists the block numbers that the nth MPI rank is responsible for. For example:

```
0,1 # 0th MPI rank has blocks 0,1
2,3 # 1st MPI rank has blocks 2,3
```
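
Reading this format back takes only a few lines. The sketch below is illustrative (`parseBlocksForProcs` is a hypothetical helper, not PEREGRINE's actual reader); it follows the convention above, where line n lists the blocks owned by MPI rank n and `#` begins a comment.

```python
def parseBlocksForProcs(text):
    # The nth entry of the result is the list of block numbers for MPI rank n.
    blocksForRank = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop trailing comments
        if not line:
            continue  # skip blank lines
        blocksForRank.append([int(b) for b in line.split(",")])
    return blocksForRank


example = "0,1  # 0th MPI rank has blocks 0,1\n2,3  # 1st MPI rank has blocks 2,3\n"
assert parseBlocksForProcs(example) == [[0, 1], [2, 3]]
```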
Binary file added docs/images/pgSplashD.jpg
Binary file added docs/images/pgSplashD2.jpg
Binary file added docs/images/pgSplashL.jpg
Binary file added docs/images/pgSplashL2.jpg