Mechanical MNIST is a benchmark dataset for mechanical metamodels
The Mechanical MNIST dataset contains the results of 70,000 finite element simulations (60,000 training examples + 10,000 test examples) of a heterogeneous material subject to large deformation. Mechanical MNIST is generated by first converting the MNIST bitmap images (http://www.pymvpa.org/datadb/mnist.html) to 2D heterogeneous blocks of a Neo-Hookean material. Consistent with the MNIST bitmaps (28 x 28 pixels), the material domain is a 28 x 28 unit square. All simulations are conducted with the FEniCS computing platform (https://fenicsproject.org).
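As a rough illustration of this conversion step, the sketch below maps pixel intensities to a per-pixel stiffness-like material parameter. The linear scaling and the low/high values here are assumptions made for illustration only; see the dataset generation scripts for the exact rule used to build Mechanical MNIST.

```python
import numpy as np

def bitmap_to_modulus(bitmap, low=1.0, high=100.0):
    """Map a 28 x 28 MNIST bitmap (values 0-255) to a field of material
    parameters: soft background, stiff digit. The linear low/high mapping
    is an illustrative assumption, not necessarily the scaling used for
    Mechanical MNIST."""
    b = np.asarray(bitmap, dtype=float) / 255.0
    return low + b * (high - low)

# Example: a blank (all-background) image maps everywhere to the soft baseline.
field = bitmap_to_modulus(np.zeros((28, 28)))
print(field.shape, field.min(), field.max())  # (28, 28) 1.0 1.0
```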
The full dataset is hosted by OpenBU. Link to the collection: https://open.bu.edu/handle/2144/39371
Here we provide code to generate the low-fidelity versions of Mechanical MNIST in the "generate_dataset" folder. The script "run_FEA_simulation.py" runs the 2D coarse mesh simulations in FEniCS, and the script "run_FEA_simulation_3D_UE_twist.py" runs the 3D simulations in FEniCS. Note: both scripts require command line inputs. The folder "sample_data" will have to be un-zipped to access the input bitmaps. The code used to generate the original Mechanical MNIST dataset is available in this repository: https://github.com/elejeune11/Mechanical-MNIST
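For orientation, the block below is a generic compressible Neo-Hookean large-deformation setup in legacy FEniCS, patterned on the standard hyperelasticity demo. The mesh resolution, material constants, and boundary conditions are placeholders, not those used by "run_FEA_simulation.py"; see that script for the actual command line inputs, bitmap-dependent material field, and load stepping.

```python
from dolfin import *  # legacy FEniCS (dolfin) interface

# 28 x 28 material domain on a coarse 28 x 28 mesh (placeholder resolution)
mesh = RectangleMesh(Point(0.0, 0.0), Point(28.0, 28.0), 28, 28)
V = VectorFunctionSpace(mesh, "CG", 1)
u, v = Function(V), TestFunction(V)

# Homogeneous placeholder constants; the real scripts interpolate a
# spatially varying modulus from the input bitmap.
mu, lmbda = Constant(1.0), Constant(10.0)

# Kinematics for a 2D large-deformation problem
I = Identity(2)
F = I + grad(u)      # deformation gradient
C = F.T * F          # right Cauchy-Green tensor
Ic, J = tr(C), det(F)

# Compressible Neo-Hookean strain energy density
psi = (mu / 2) * (Ic - 2) - mu * ln(J) + (lmbda / 2) * (ln(J)) ** 2

# Fix the bottom edge, pull the top edge up by a small illustrative amount
bcs = [DirichletBC(V, Constant((0.0, 0.0)), "near(x[1], 0.0)"),
       DirichletBC(V, Constant((0.0, 1.4)), "near(x[1], 28.0)")]

solve(derivative(psi * dx, u, v) == 0, u, bcs)
print(assemble(psi * dx))  # total strain energy of the deformed block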
The folder "sample_data" contains the following:
- mnist_img_train.txt -- the input bitmaps for training data
- mnist_img_test.txt -- the input bitmaps for test data
- UE_psi_train.txt -- simulation results for the UE high fidelity dataset, training data
- UE_psi_test.txt -- simulation results for the UE high fidelity dataset, test data
- UE_CM_28_perturb_psi_train.txt -- simulation results for the UE-CM-28-perturb dataset, training data
- UE_CM_28_perturb_psi_test.txt -- simulation results for the UE-CM-28-perturb dataset, test data
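As a minimal example of working with these files, the sketch below loads them with NumPy. The assumed layout (one simulation per row, image rows flattened to 28 x 28 bitmaps, psi rows holding the strain energy outputs for that simulation) should be checked against the downloaded data.

```python
import numpy as np

# Load the sample inputs and outputs (after un-zipping "sample_data").
# Assumed layout: one simulation per row; image rows are flattened 28 x 28
# bitmaps, and psi rows hold the strain energy outputs for that simulation.
imgs = np.loadtxt("sample_data/mnist_img_train.txt")
psi = np.loadtxt("sample_data/UE_psi_train.txt")

print(imgs.shape, psi.shape)            # e.g. (n_samples, 784), (n_samples, ...)
first_bitmap = imgs[0].reshape(28, 28)  # recover the 2D bitmap for plotting
```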
We also provide the code used to create the metamodels in the paper "Exploring the potential of transfer learning for metamodels of heterogeneous material deformation" (link forthcoming).
The folder "metamodels" contains the following scripts:
- metamodels.py -- code to train CNN metamodels
- metamodels_with_pretrain.py -- code to train CNN metamodels with pretraining (a minimal sketch of the pretraining idea follows this list)
- metamodel_evaluation.py -- code to evaluate the trained metamodels (with and without pretraining); model test error can be computed directly from the outputs of this script
- metamodel_visualize_first_layer_activation.py -- code to visualize the trained CNNs, used to generate figures in the manuscript (code is adapted from: https://github.com/utkuozbulak/pytorch-cnn-visualizations)
- metamodel_visualize_first_layer_activation_plot_nicely.py -- code to nicely plot the results of the visualizations
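To give a sense of how the pretraining step in "metamodels_with_pretrain.py" works, here is a minimal, hypothetical PyTorch sketch. The architecture, hyperparameters, and checkpoint file name are illustrative assumptions and do not reproduce the models trained in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of the pretraining idea (hypothetical architecture and file
# name -- see metamodels.py / metamodels_with_pretrain.py for the real ones).
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = SimpleCNN()
# Pretraining: weights learned on the cheap, low-fidelity dataset are loaded
# before fine-tuning on the expensive, high-fidelity dataset.
# model.load_state_dict(torch.load("pretrained_low_fidelity.pt"))  # hypothetical path
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
# ... standard training loop over (bitmap, strain energy) pairs goes here ...
```

The key idea explored in the paper is that weights learned on the inexpensive low-fidelity simulations can initialize training on the more expensive high-fidelity data; the real scripts implement this with their own architectures and training settings.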
Note that "sample_data" will have to be un-zipped for the code to run as is. The full dataset can be downloaded from the OpenBU repository.