added better readme
Rayahhhmed committed Sep 3, 2024
1 parent 9f08684 commit d3acaa9
Showing 1 changed file with 11 additions and 5 deletions.
There are a couple of things you must ensure.
The scraper's output is designed to be batch-inserted into Hasuragres (see the Hasuragres / GraphQL API).
The course data is scraped from the UNSW timetable website https://timetable.unsw.edu.au/year/.

You need to fill out the relevant environment variables in a `.env` file. See the `.env.example` file for the format.
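As a rough illustration only (the actual variable names live in `.env.example`; the names below are hypothetical), a `.env` file follows the usual `KEY=VALUE` format:

```
# Hypothetical example — check .env.example for the real variable names
HASURAGRES_URL=http://localhost:8000
HASURAGRES_API_KEY=your-api-key-here
```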

If you run `cargo run -- help`, it will print the list of available commands:

- `scrape` - Perform scraping. Creates a JSON file to store the data.
- `scrape_n_batch_insert` - Perform scraping and batch insert. Does not create a JSON file to store the data.
- `batch_insert` - Perform batch insert on JSON files created by `scrape`.
- `help` - Show this help message.

Generally, running `scrape_n_batch_insert` is enough if you do not need a JSON file with everything written to disk (it is also faster).
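The four subcommands above map naturally onto a single dispatch point in the CLI. The following is a minimal sketch of how that dispatch might look, assuming hypothetical function names (`dispatch` and the action strings are illustrative, not the actual implementation):

```rust
use std::env;

/// Map a subcommand name to a description of the action taken.
/// Unknown commands produce an error so the caller can print usage.
fn dispatch(cmd: &str) -> Result<&'static str, String> {
    match cmd {
        "scrape" => Ok("scraping and writing a JSON file"),
        "scrape_n_batch_insert" => Ok("scraping and batch inserting (no JSON file)"),
        "batch_insert" => Ok("batch inserting existing JSON files"),
        "help" => Ok("showing the help message"),
        other => Err(format!("unknown command: {other}")),
    }
}

fn main() {
    // Default to `help` when no subcommand is given, mirroring the README.
    let cmd = env::args().nth(1).unwrap_or_else(|| "help".to_string());
    match dispatch(&cmd) {
        Ok(action) => println!("{action}"),
        Err(e) => eprintln!("{e}"),
    }
}
```

A real implementation would likely use a crate such as `clap` for argument parsing, but a plain `match` keeps the dependency footprint small for a four-command tool.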
