This is all the infrastructure needed for running the entire Integrated Knowledge Environment.
This will include the following:
- Sonatype Nexus - a repository manager for Maven artifacts
- PostgreSQL database - a database for Nexus
- ike website - a static website that is built from the ikmdev repo at ikmdev/ikmdev-site
- komet - a web application for managing knowledge artifacts, which can be found at ikmdev/komet
This repository is for demonstration purposes only. It is not intended for production use. Please consult security professionals before using these products in a production environment. In general, it is suggested that you use hardened images or equivalent organizational systems for production use. This repository is not intended to contain hardened images.
- Install Docker and Docker Compose (or a compatible alternative) on your local machine.
These are the steps for running on Amazon Linux: deploy a subdomain-based routing solution using Nginx as a reverse proxy on AWS, and use EasyDNS to procure a domain name and set up all of its necessary records for hosting IKE in a Box.
SSH into your EC2 instance:

```shell
ssh -i /path/to/your-key.pem ec2-user@<EC2_PUBLIC_IP>
# example: ssh -i ~/.ssh/docker-deployment-key.pem ec2-user@<EC2_PUBLIC_IP>
```

Install Docker and Docker Compose:

```shell
sudo dnf update -y
sudo dnf install docker -y
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
DOCKER_COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep tag_name | cut -d '"' -f 4)
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker --version
docker-compose --version
```

Install Git and clone the ike-in-box repository:

```shell
sudo dnf install git
git clone https://github.com/ikmdev/ike-box.git
```

If you haven't created the DNS entries for the domains and subdomains that you want to use, you can do so by following the instructions in the DNS Management document.
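The `DOCKER_COMPOSE_VERSION` line above queries the GitHub API for the latest release tag; the download URL is then assembled from that tag plus `uname`. A minimal sketch with a hard-coded tag (the `v2.27.0` value is only an example, in case you need to pin a version without API access):

```shell
# Pin a release tag instead of querying the GitHub API (v2.27.0 is illustrative)
DOCKER_COMPOSE_VERSION="v2.27.0"
URL="https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$URL"
```

Pinning avoids surprises from a new major release changing CLI behavior mid-deployment.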
This project uses a centralized domain configuration approach. All domain names are defined in a single .env file at the root of the project. This makes it easy to change the domain name across all services without having to modify multiple files.
- The repository includes a default `.env` file. Fill out the values in this file to match your expected environment and contact information. To use a different domain, simply edit the `BASE_DOMAIN` value in the `.env` file; all other configurations will automatically use the updated domain.
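To illustrate how a single `BASE_DOMAIN` value can fan out to per-service hostnames (the derived variable names below are hypothetical; check the repository's `.env` for the actual keys):

```shell
BASE_DOMAIN="example.com"            # the one value you edit in .env
KOMET_DOMAIN="komet.${BASE_DOMAIN}"  # hypothetical derived hostname
NEXUS_DOMAIN="nexus.${BASE_DOMAIN}"  # hypothetical derived hostname
echo "$KOMET_DOMAIN"
echo "$NEXUS_DOMAIN"
```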
Note: You will not have an `ACME_CHALLENGE` value at this point, so leave that entry blank.
- Create the initial DNS entries for the domain in your DNS provider.
  a. To create DNS entries in AWS, run the provided script to generate configuration files from templates:

```shell
docker-compose run aws-dns-setup
```

  b. To create DNS entries in EasyDNS, run the provided script to generate configuration files from templates:

```shell
docker-compose run easy-dns-setup
```
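The DNS setup containers create records along these lines (illustrative zone-file shapes; the subdomain names, TTL, and IP below are assumptions, not the scripts' actual output):

```shell
BASE_DOMAIN="example.com"
SERVER_IP="203.0.113.10"   # your EC2 public IP
RECORDS="$(for sub in www komet nexus; do
  printf '%s.%s. 300 IN A %s\n' "$sub" "$BASE_DOMAIN" "$SERVER_IP"
done)"
echo "$RECORDS"
```

Each subdomain gets an A record pointing at the same server; Nginx then routes by hostname.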
- Start the certificate generation process by executing the following command:

```shell
docker-compose run certbot-initialize
```

This will generate the necessary certificates for your domain and subdomains. The certificates will be stored in the `./certbot/conf` directory. At this point, an ACME challenge value should be printed on your screen.
DO NOT CLOSE THIS WINDOW OR TERMINATE THE PROCESS, as it is necessary for the next step.
- Edit the `.env` file to include the `ACME_CHALLENGE` value that was printed in the previous step. This value is necessary for the certificate generation process to complete successfully.
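Editing the `.env` entry by hand works fine; a scripted sketch is below (it runs in a scratch directory with a stand-in `.env` so it does not touch your real file, and the challenge value is hypothetical):

```shell
cd "$(mktemp -d)"
printf 'BASE_DOMAIN=example.com\nACME_CHALLENGE=\n' > .env   # stand-in .env
ACME_CHALLENGE="tok-abc123"                                  # value printed by certbot-initialize
sed -i "s/^ACME_CHALLENGE=.*/ACME_CHALLENGE=${ACME_CHALLENGE}/" .env
grep '^ACME_CHALLENGE=' .env
```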
- (Likely in a new window) Run the DNS scripts again to include the `ACME_CHALLENGE` value in the DNS records. This will allow the certificate generation process to complete successfully.
  a. For AWS, run the following command:

```shell
docker-compose run aws-dns-setup
```

  b. For EasyDNS, run the following command:

```shell
docker-compose run easy-dns-setup
```
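When re-run with `ACME_CHALLENGE` set, the scripts publish a DNS TXT record that Let's Encrypt checks during DNS-01 validation. The record takes roughly this shape (the `_acme-challenge` name is the ACME convention; the TTL and token value are illustrative):

```shell
BASE_DOMAIN="example.com"
ACME_CHALLENGE="tok-abc123"   # value from the certbot-initialize step
TXT_RECORD="$(printf '_acme-challenge.%s. 300 IN TXT "%s"' "$BASE_DOMAIN" "$ACME_CHALLENGE")"
echo "$TXT_RECORD"
```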
Use Docker Compose profiles to switch between subdomain and path-based routing.
Note:
- Only one Nginx service should be active at a time to avoid port conflicts.
- Only the nginx-subdomain service is activated when you use the `--profile subdomain` flag.
- You may want to remove the container_name: nginx line or make the names unique if you ever run both at once.
- Avoid conflicting port mappings (only one Nginx service should run at a time on the same port).
- Make sure your entrypoint script is executable inside the subdomain-based directory:

```dockerfile
RUN chmod +x /<name-of-script.sh>
```
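The profile gating described above can be pictured as a compose fragment like the following (a sketch, not the repository's actual docker-compose.yml; the `nginx-path-based` service name is hypothetical):

```yaml
# Sketch: only one nginx service starts per profile, avoiding port conflicts
services:
  nginx-subdomain:
    profiles: ["subdomain"]     # started only with --profile subdomain
    ports:
      - "${NGINX_PORT:-80}:80"
  nginx-path-based:             # hypothetical service name
    profiles: ["path-based"]    # started only with --profile path-based
    ports:
      - "${NGINX_PORT:-80}:80"
```

Because both services map the same host port, Compose profiles are what keep them mutually exclusive.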
Build and start the subdomain profile:

```shell
docker-compose --profile subdomain build
docker-compose --profile subdomain up -d
```

To shut down the applications for the subdomain profile, run the following command:

```shell
docker-compose --profile subdomain down
```

Check running containers:

```shell
docker ps
```

Check the Nginx logs (if troubleshooting):

```shell
docker logs nginx-subdomain
```

This project uses Certbot to automatically obtain and renew SSL certificates from Let's Encrypt for your domains.
You should configure the email address used by Certbot for Let's Encrypt notifications:

```shell
[email protected] docker compose up -d
```

Or combine multiple environment variables:

```shell
NGINX_PORT=8080 [email protected] docker compose up -d
```

Certificates are automatically obtained for all domains specified in the `DOMAINS` environment variable in the docker-compose.yml file. The certbot service is configured to:
- Obtain certificates for each domain if they don't exist
- Check for renewals every 12 hours
- Automatically renew certificates when they're within 30 days of expiration
- Persist certificates in the ./certbot/conf directory
No manual intervention is required for certificate renewal as long as the certbot service is running.
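You can reproduce the 30-day renewal check yourself with `openssl x509 -checkend`; a sketch using a throwaway self-signed certificate (your real certificates live under `./certbot/conf/live/<your-domain>/`):

```shell
cd "$(mktemp -d)"
# Throwaway self-signed cert valid for 40 days, standing in for a real Let's Encrypt cert
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 40 -nodes -subj "/CN=example.com" 2>/dev/null
# -checkend succeeds (exit 0) if the cert is still valid N seconds from now
if openssl x509 -checkend $((30*24*3600)) -noout -in cert.pem >/dev/null; then
  echo "more than 30 days left - no renewal needed"
else
  echo "renewal due"
fi
```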
Open your browser and visit:
Follow the steps below to build and run the static website on your local machine:
- Clone the repository from GitHub to your local machine.
- Change your local directory to the cloned repo location.
- Enter the following command to start the path-based profile:

```shell
docker-compose --profile path-based up -d
```

- To view the applications, open your web browser and navigate to http://localhost. You should see the landing page and be able to navigate to the respective pages.
- To shut down the applications, run the following command:

```shell
docker-compose --profile path-based down
```
The application should now be running in the Docker container, using Nginx as a reverse proxy with path-based routing, and can be accessed in your web browser.
Note: On the off chance that you have issues with running on the specific port on your computer, the docker-compose file is configurable to allow for other ports. This can be run in the following way, substituting 8080 for whatever port you would like to assign:

```shell
NGINX_PORT=8080 docker-compose --profile path-based up -d
```

Komet requires login credentials, which are defined in the users.ini file located at `./komet-data/users.ini`.
To access the JPRO journal view for Komet, download the required dataset from Nexus and place it in the `Solor` folder in your local machine's home directory. If you're running the app from ike-box on a server, transfer the dataset to the server using the `scp` command.
- Download the dataset (using the required credentials) to your local machine (e.g., to `~/Solor/`).
- Transfer the dataset to the server using SCP.
  Ensure you have the SSH private key (e.g., docker-deployment-key.pem) with the correct permissions:

```shell
chmod 400 ~/.ssh/docker-deployment-key.pem
```

  Use SCP to transfer the dataset by running the following command from your local machine:

```shell
scp -i ~/.ssh/docker-deployment-key.pem -r ~/Solor/ ec2-user@<server-ip>:/home/ec2-user/ike-box/komet-data
```

- Verify the transfer: check that the dataset is present in `/home/ec2-user/ike-box/komet-data`.
- Load the dataset in Komet/JPRO.
Restart Komet if necessary to pick up the new data. Access the journal view again to verify the dataset is available.
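After an `scp` transfer, it is worth confirming the files arrived intact. A local sketch of a checksum comparison (the `cp` stands in for the real network transfer, and all paths are scratch paths):

```shell
cd "$(mktemp -d)"
mkdir -p Solor komet-data
echo "demo dataset" > Solor/dataset.bin   # stand-in for the real Nexus dataset
cp -r Solor komet-data/                   # stand-in for the scp step
SRC_SUM="$(md5sum Solor/dataset.bin | cut -d' ' -f1)"
DST_SUM="$(md5sum komet-data/Solor/dataset.bin | cut -d' ' -f1)"
[ "$SRC_SUM" = "$DST_SUM" ] && echo "transfer verified"
```

In practice you would run `md5sum` once locally and once on the server over SSH, then compare the two digests.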
Technical and non-technical issues can be reported to the Issue Tracker.
Contributions can be submitted via pull requests. Please check the contribution guide for more details.