This README is for development of the APIs. Public documentation is located here.
DO NOT PUSH DIRECTLY TO MASTER UNLESS YOU INTEND TO DEPLOY A NEW VERSION OF THE SITE. THE SITE AND ALL APIS ARE CONTINUOUSLY REDEPLOYED WHEN THE MASTER BRANCH IS UPDATED.
NOTE: At all times, the following should match the contributions page on our website (so update both simultaneously)!
The future of Brown APIs depends on you! All of our code is open source, and we rely heavily on contributions from the Brown community. You can view our code (along with open issues and future plans) on Github.
There are many ways to help further the development of Brown APIs. You can add new APIs, maintain and enhance current APIs, fix bugs, improve this website, or build better tools to help others contribute. Check the issues on our Github for suggestions of what to do first. You don't need to be able to code to help either. Reach out to CIS and other university organizations to get easier and wider access to campus data.
The APIs are written in Python and run on a Flask server. This website is also served by the same server and uses Jinja templates with the Bootstrap framework.
Data is stored in a single MongoDB database hosted on mLab.com (Note: This was probably a bad decision that could really use some contributions to fix!). Because there is only one copy of the database, developers must take care to avoid corrupting the data while testing fixes or new features.
You'll need the latest version of Python 3, along with virtualenv and pip. Go ahead and look up these programs if you aren't familiar with them. They're crucial to our development process.
- Clone this repository to your own machine:

```
git clone https://github.com/hackatbrown/brown-apis.git
```
- Open a terminal and navigate to the top level of the repository (brown-apis/).
- Create and activate a virtual environment (again, look up virtualenv online to understand what this does):

```
virtualenv -p `which python3` venv
source venv/bin/activate
```
- Install all the required libraries in your virtual environment:

```
pip install -r requirements.txt
```
- Create a new branch for your changes. For example (while on the master branch):

```
git checkout -b <descriptive-branch-name>
```
- Make any changes you want to make.
- Commit your changes, push them to origin/<branch-name>, and open a new pull request.
- To test your code, you may merge it into the stage branch. These changes will be automatically reflected on our staging server. You can put the changes from your branch onto the staging branch with:

```
git checkout stage
git fetch origin
git reset --hard origin/master
git rebase <your-branch-name>
git push --force
```
- Note: This won't work if multiple developers are doing this at the same time.
- Your code will be merged into master once your pull request is accepted. Your code will be run against flake8, a tool which checks for coding style and common mistakes. You can run flake8 locally from within the virtual environment.
- Navigate to the top-level directory (brown-apis/).
- Run the script from a package environment, allowing it to import the database from the api package:

```
python3 -m api.scripts.<scriptname>
```
where 'scriptname' does NOT include the '.py' extension.
- You can include any script arguments after the command (just like you normally would).
We use MongoDB to store various menus and schedules, as well as client information. In MongoDB, all objects are stored as JSON, and there is no schema that forces all objects in a collection to share the same fields. Thus, we keep documentation of the different collections here (and in the API overviews below) to encourage an implicit schema. Objects added to the database should follow these templates. If you add a new collection to the database, remember to add a template here, too.
- username: <STRING>,
- client_email: <STRING>,
- client_id: <STRING>,
- valid: <BOOLEAN>, <-- can this client make requests?
- joined: <DATETIME>, <-- when did this client register?
- requests: <INTEGER> <-- total number of requests made by this client (not included until this client makes their first request)
- activity: list of activity objects which take the form:
  - timestamp: <DATETIME>, <-- time of request
  - endpoint: <STRING> <-- endpoint of request
- DEPRECATED: client_name: <STRING> <-- replaced with username
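As an illustration of the client template above, a new client document could be built as a plain Python dict before insertion. The field values, the `clients` collection name, and the pymongo call mentioned in the comment are assumptions for the sketch, not confirmed by this README:

```python
from datetime import datetime, timezone

# Sketch of a client document following the implicit schema above.
# All values are hypothetical examples.
new_client = {
    "username": "jdoe",
    "client_email": "jdoe@brown.edu",
    "client_id": "a1b2c3d4",           # hypothetical opaque identifier
    "valid": True,                      # can this client make requests?
    "joined": datetime.now(timezone.utc),
    # "requests" and "activity" are only added once the client makes
    # their first request, per the template above.
}

# With pymongo, insertion would look roughly like (collection name assumed):
#   db.clients.insert_one(new_client)
```

Keeping optional fields (like `requests`) out of the initial document matches the template's note that they appear only after the first request.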
- urlname: <STRING>
- name: <STRING>
- contents: <STRING>
- imageurl: <IMAGE>
The Dining API is updated every day by a scraper that parses the menus from Brown Dining Services' website. The hours for each eatery are entered manually inside of the scraper script before each semester. When the scraper is run, all this data is stored in the database. Calls to the API trigger various queries to the database and fetch the scraped data.
- eatery: <STRING>,
- year: <INTEGER>,
- month: <INTEGER>,
- day: <INTEGER>,
- start_hour: <INTEGER>, <-- these four lines describe a menu's start/end times
- start_minute: <INTEGER>,
- end_hour: <INTEGER>,
- end_minute: <INTEGER>,
- meal: <STRING>,
- food: [ <STRING>, <STRING>, ... ] <-- list of all food items on menu
- <section>: [ <STRING>, <STRING>, ... ], <-- category (e.g. "Bistro") mapped to list of food items
- ... (there can be multiple sections per menu)
- eatery: <STRING>,
- year: <INTEGER>,
- month: <INTEGER>,
- day: <INTEGER>,
- open_hour: <INTEGER>,
- open_minute: <INTEGER>,
- close_hour: <INTEGER>,
- close_minute: <INTEGER>
- eatery: <STRING>,
- food: [ <STRING>, <STRING>, ... ]
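As a sketch of the kind of lookup the Dining API performs, a MongoDB-style query for one eatery on one day can be expressed as a dict of the template's fields. The eatery name, date, and collection name are illustrative; only the field names come from the templates above:

```python
# Query selecting all menus for one eatery on one day.
# Field names follow the menu template above; the values are made up.
query = {"eatery": "ratty", "year": 2016, "month": 10, "day": 3}

def matches(doc, query):
    """Return True if every queried field equals the document's value."""
    return all(doc.get(field) == value for field, value in query.items())

# Without a live database, the same filter can be checked on a plain dict:
sample = {
    "eatery": "ratty", "year": 2016, "month": 10, "day": 3,
    "meal": "lunch", "food": ["pizza", "salad"],
}
assert matches(sample, query)
# With pymongo this would be roughly: db.menus.find(query)
```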
The WiFi API just forwards requests to another API run by Brown CIS. Their API is protected by a password (HTTP Basic Auth) and is nearly identical to the WiFi API that we expose. The response from the CIS API is returned back to the client.
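A minimal sketch of the forwarding step, using only the standard library. The endpoint URL and credentials here are placeholders; the real values are configured server-side and are not documented in this README:

```python
import base64
import urllib.request

def build_forward_request(url, username, password):
    """Build a request for the upstream CIS API with HTTP Basic Auth attached.

    url, username, and password are placeholders for illustration.
    """
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(url)
    request.add_header("Authorization", f"Basic {credentials}")
    return request

req = build_forward_request("https://example.edu/wifi/api", "user", "secret")
# The API would then call urllib.request.urlopen(req) and return the
# upstream response body to the client unchanged.
```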
The Laundry API is updated manually with a scraper that pulls all the laundry rooms and stores them in the database. When a request is received, the API checks the request against the list of rooms in the database and optionally retrieves status information by scraping the laundry website in real time.
- room
  - name: <STRING>
  - id: <INT>
  - machines: list of objects with:
    - id: <INT>
    - type: <STRING> (one of washFL, washNdry, dry)
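For example, a room document following this template might look like the dict below. The room name and id values are made up; only the field names and the three machine type strings come from the template:

```python
# Illustrative laundry room document following the template above.
room = {
    "name": "Grad Center E",           # hypothetical room name
    "id": 1234,                         # hypothetical room id
    "machines": [
        {"id": 1, "type": "washFL"},    # front-loading washer
        {"id": 2, "type": "dry"},       # dryer
    ],
}

# Every machine type must be one of the three allowed strings:
allowed = {"washFL", "washNdry", "dry"}
assert all(m["type"] in allowed for m in room["machines"])
```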
The Academic API used to scrape course information from Banner and store it in the database. Since Banner has been deprecated for course selection, the Academic API scraper has stopped working, and we are no longer able to collect course data. Thus, the Academic API is unavailable for the foreseeable future. Contributions are especially welcome here.