diff --git a/README.md b/README.md
index c86efe9..6f8495b 100644
--- a/README.md
+++ b/README.md
@@ -23,12 +23,12 @@ Each file has upload progress bar shown, stores the files on server giving them
 Clicking submit starts processing the files and progress state of each file is displayed in the associated card which polls the backend for celery task status.
-It Shows necessary plots and also ngspice output after file is processed. Once the files are processed Show output button is visible.
-#### Basic Demo
+It shows the necessary plots and the ngspice output after a file is processed. Once the files are processed, the Show output button becomes visible.
+#### Basic Demo
 ![Basic Demo](./documentation/MainDemo.gif)
-A set of netlist files are being uplaoded which, pressing submit triggers processing the files, once they're processed by celery worker show output button is visible showing user the ngspice output and graphs.
+A set of netlist files is uploaded; pressing submit triggers processing of the files, and once they are processed by a celery worker the Show output button appears, showing the user the ngspice output and graphs.
-#### Session Storage Demo
+#### Session Storage Demo
 ![Session Storage Demo](./documentation/maintain-session.gif)
 Session is maintained for the user , which means once the files are uploaded the user can come back and view the final results later.
@@ -37,9 +37,9 @@ Session is maintained for the user , which means once the files are uploaded the
 Any number of celery worker instances can be spawned with each celery worker having concurrency 10 making it highly scalable. Redis is used as a cache for celery.
-Django endpoints are served by gunicorn with 2 instances of it running multiple workers for scalability.
+Django endpoints are served by gunicorn, with 2 instances each running multiple workers for scalability.
-Nginx reverse proxy is used to bind backend and frontend routes and also load balancing.
+An Nginx reverse proxy binds backend and frontend routes and also performs load balancing.
 
 ## Features
@@ -47,25 +47,32 @@ Nginx reverse proxy is used to bind backend and frontend routes and also load ba
 * Containerized services
 * Stores Session of users, thus one can upload files and come back later to see results.
 * Load balanced backend(nginx), Scalable multithreaded workers (celery , docker-compose)
-* REST API using Django Rest Framework for intuitive endpoint documentation.
+* REST API using Django REST Framework for intuitive endpoint documentation.
-* Github Actions Used for CI Testing.
+* GitHub Actions used for CI testing.
 * Intuitive drag and drop file upload with progress bars.
 
 ## Execution Instructions
-* Clone the repo and install docker-compose and docker
+1. Clone the repo and install docker and docker-compose.
+2. ```sudo docker-compose -f docker-compose.prod.yml build``` builds the necessary docker containers. (It'll take a while, grab some coffee :D)
-* ``` sudo docker-compose -f docker-compose.prod.yml run --rm django ./manage.py collectstatic --noinput ```
+* To create the necessary tables in the DB and generate the static files, run the following commands (they need to be run only once, during the initial execution):
+3. ```sudo docker-compose -f docker-compose.prod.yml run --rm django ./manage.py collectstatic --noinput```
+4. ```sudo docker-compose -f docker-compose.prod.yml run --rm django ./manage.py migrate --noinput```
-* ``` sudo docker-compose -f docker-compose.prod.yml run --rm django ./manage.py migrate --noinput ```
-
-* ``` sudo docker-compose -f docker-compose.prod.yml up --scale django=2 --scale celery=3```
+* To finally run the servers, use the following command; you can modify the scale parameters to create more containers for load balancing.
+5. ```sudo docker-compose -f docker-compose.prod.yml up --scale django=2 --scale celery=3```
+6. Browse to ```http://localhost:8000```
+## Alternate execution instructions
+```
+chmod +x first_run.sh && ./first_run.sh
+```
 ### Backend Endpoint Screenshots
 Django rest framework endpoints for easy documentation and use
 ![](./documentation/DRF-Tasks.png)
-Step 1: Uploading files , returns file uid's and taskid
+Step 1: Uploading files returns file UIDs and a task ID
 ![](./documentation/S1.png)
 Step 2: Start given task using endpoint to process files
 ![](./documentation/S2.png)
diff --git a/first_run.sh b/first_run.sh
new file mode 100755
index 0000000..612c610
--- /dev/null
+++ b/first_run.sh
@@ -0,0 +1,4 @@
+sudo docker-compose -f docker-compose.prod.yml build
+sudo docker-compose -f docker-compose.prod.yml run --rm django ./manage.py collectstatic --noinput
+sudo docker-compose -f docker-compose.prod.yml run --rm django ./manage.py migrate --noinput
+sudo docker-compose -f docker-compose.prod.yml up --scale django=2 --scale celery=3
diff --git a/run_scaled.sh b/run_scaled.sh
deleted file mode 100644
index be6d677..0000000
--- a/run_scaled.sh
+++ /dev/null
@@ -1 +0,0 @@
-sudo docker-compose -f docker-compose.prod.yml up --scale django=2 --scale celery=3
\ No newline at end of file
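A note on the new first_run.sh: as added, it runs each command unconditionally, so a failed `build` would still fall through to `migrate` and `up`. A minimal hardened sketch (same docker-compose v1 CLI as the rest of the repo; the shebang, `set -e` guard, and `COMPOSE` variable are additions for illustration, not part of the committed script):

```shell
#!/bin/sh
# Sketch of a hardened first_run.sh: `set -e` aborts at the first failing
# command, so migrations never run against a broken build and `up` never
# starts half-initialized services.
set -e
COMPOSE="docker-compose -f docker-compose.prod.yml"

sudo $COMPOSE build
sudo $COMPOSE run --rm django ./manage.py collectstatic --noinput
sudo $COMPOSE run --rm django ./manage.py migrate --noinput
sudo $COMPOSE up --scale django=2 --scale celery=3
```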