How to build the code?
- Install Docker Desktop: https://docs.docker.com/desktop/
- Clone the project repository from GitHub
git clone https://github.com/sibinms/national-archives-data-import.git
- Move to the project directory
cd national-archives-data-import
- Create a .env file
touch .env
- Add the following values to the .env file
DEBUG=1
SECRET_KEY=jjFOtlCde0mHjUGfh23rH0ArHIpDvycY5taHc8Eq
DJANGO_ALLOWED_HOSTS=*
- Start the containers with Docker Compose
docker-compose up
- Open a new terminal in the same folder and run the following commands. The first opens a shell inside the web container; run the other two inside that shell
docker-compose exec web sh
python manage.py migrate
python manage.py createsuperuser
- Now your development server is ready; if needed, check the admin at http://0.0.0.0:8000/admin/login/
How to run the output?
- Now we can import the data using a management command (a rough sketch of such a command appears at the end of this section)
- Open a new terminal in the same folder and execute the following commands; run the second inside the container shell. Note: replace <record_id> with the actual record ID you want to import
docker-compose exec web sh
python manage.py import_record_by_id <record_id>
- Once you have successfully imported the data, go to http://0.0.0.0:8000/archive-records/
- You will see a page listing the imported archive records
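For reference, Django management commands live under an app's management/commands/ directory. The sketch below shows roughly what import_record_by_id could look like; the app name (archive_records), the Discovery API URL, and the payload keys (title, description, citableReference) are assumptions for illustration, not the project's actual implementation:

# archive_records/management/commands/import_record_by_id.py (illustrative sketch)
import requests
from django.core.management.base import BaseCommand, CommandError

from archive_records.models import ArchiveRecord

# Assumed endpoint of the National Archives Discovery API
API_URL = "https://discovery.nationalarchives.gov.uk/API/records/v1/details/{record_id}"


class Command(BaseCommand):
    help = "Import a single record from the National Archives API by its ID"

    def add_arguments(self, parser):
        parser.add_argument("record_id", type=str)

    def handle(self, *args, **options):
        record_id = options["record_id"]
        response = requests.get(API_URL.format(record_id=record_id), timeout=30)
        if response.status_code != 200:
            raise CommandError(f"API returned {response.status_code} for record {record_id}")
        data = response.json()
        # Payload keys below are assumptions for illustration
        record, created = ArchiveRecord.objects.update_or_create(
            reference_id=record_id,
            defaults={
                "title": data.get("title", ""),
                "description": data.get("description", ""),
                "citable_reference": data.get("citableReference", ""),
            },
        )
        self.stdout.write(self.style.SUCCESS(
            f"{'Created' if created else 'Updated'} record {record_id}"
        ))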
How to run tests?
- Open a new terminal in the same folder and execute the following commands; run the test command inside the container shell, and pass -v 2 to see detailed output
docker-compose exec web sh
python manage.py test -v 2
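As a point of reference, a minimal test for the import command might mock the HTTP call so the suite runs offline; the module paths and payload keys below are illustrative assumptions matching the command sketch above:

# archive_records/tests.py (illustrative sketch)
from unittest import mock

from django.core.management import call_command
from django.test import TestCase

from archive_records.models import ArchiveRecord


class ImportRecordByIdTests(TestCase):
    @mock.patch("archive_records.management.commands.import_record_by_id.requests.get")
    def test_import_creates_record(self, mock_get):
        # Fake a successful API response instead of hitting the network
        mock_get.return_value.status_code = 200
        mock_get.return_value.json.return_value = {
            "title": "Test title",
            "description": "Test description",
            "citableReference": "C 123",
        }
        call_command("import_record_by_id", "C123456")
        self.assertTrue(ArchiveRecord.objects.filter(reference_id="C123456").exists())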
DB Design
- Model: ArchiveRecord
- Fields:
reference_id : CharField (described as a string in the API)
title : CharField (described as a string in the API)
description : TextField (the API response contains HTML entities, so a text field is a better fit)
citable_reference : CharField (described as a string in the API)
- reference_id is unique on the model (see the sketch below)
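Based on the fields above, the model could look like this minimal sketch; the max_length values are assumptions, since the actual field options are not listed here:

# archive_records/models.py (illustrative sketch)
from django.db import models


class ArchiveRecord(models.Model):
    # Unique identifier for the record; used as the lookup key on import
    reference_id = models.CharField(max_length=255, unique=True)
    title = models.CharField(max_length=255)
    # TextField because the API description contains HTML entities of arbitrary length
    description = models.TextField()
    citable_reference = models.CharField(max_length=255)

    def __str__(self):
        return self.reference_id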
Ideal solution
- The ideal solution would be a Celery task invoked from the view
- This Celery task would run in the background, fetch the data from the API, and store it in the DB
- The client would poll an API endpoint for the output of the Celery task (a rough sketch of this flow follows)
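A rough sketch of that flow, assuming the same model and API URL as above; the task and view names and the payload keys are illustrative assumptions, not a definitive implementation:

# tasks.py (illustrative sketch of the proposed background flow)
import requests
from celery import shared_task

from archive_records.models import ArchiveRecord

API_URL = "https://discovery.nationalarchives.gov.uk/API/records/v1/details/{record_id}"  # assumed


@shared_task
def fetch_and_store_record(record_id):
    # Runs off the request cycle: fetch the record and persist it
    response = requests.get(API_URL.format(record_id=record_id), timeout=30)
    response.raise_for_status()
    data = response.json()
    ArchiveRecord.objects.update_or_create(
        reference_id=record_id,
        defaults={
            "title": data.get("title", ""),
            "description": data.get("description", ""),
            "citable_reference": data.get("citableReference", ""),
        },
    )


# views.py (illustrative sketch)
from celery.result import AsyncResult
from django.http import JsonResponse

from .tasks import fetch_and_store_record


def start_import(request, record_id):
    # Enqueue the background task and return its id so the client can poll
    task = fetch_and_store_record.delay(record_id)
    return JsonResponse({"task_id": task.id})


def import_status(request, task_id):
    # The client polls this endpoint until the state becomes SUCCESS or FAILURE
    result = AsyncResult(task_id)
    return JsonResponse({"state": result.state})

With this in place, the client would call start_import, receive a task_id, and poll import_status until the data is available at /archive-records/.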