Pull request #22: Jack b ps (Open). Wants to merge 10 commits into base branch `main`.
README.md (112 changes: 39 additions, 73 deletions)
# StartUpConnect
## A Full-Stack Application to Connect Students and Startups

This repo is our semester project for Database Management: a platform built with a Flask REST API, a SQL database, and a Streamlit front end that helps students on co-op connect with startups.

## What is StartUpConnect?
StartUpConnect helps Northeastern take its experiential learning to the next level by connecting students directly with startups that need their help. By matching students with startups based on their skills, co-op cycle, and target industries, StartUpConnect facilitates connections and helps more students land the co-op experiences they want.

## Key Components
- Student-startup matching based on skills and availability.
- Career center analytics for placement success, industry trends, and personalized help.
- Feedback tools to continually improve the experience for both students and companies.

## Technologies Used
- Python
- Flask
- Streamlit
- Docker
- SQL
- Mockaroo

## Current Project Components
### REST API
The backend is built with Flask, with routes organized by user persona:

- Students: uploading resumes, searching for internships, and posting co-op updates.
- Startups: posting job opportunities, reviewing and accepting candidate applications, and posting feedback for co-ops.
- Northeastern Career Center: viewing analytics on placements and skills trends.
- Post-Grads: viewing full-time job opportunities and feedback on specific startups.

### Streamlit
The front end is built with Streamlit:
- Provides dashboards for different types of users (personas)
- Interacts with the back end to query real-time results
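One way the Streamlit pages can call the positions endpoint is by turning the user's filter selections into a query string. This is a sketch, not code from the repo: the base URL and filter names are assumptions mirroring the `/positions` route, and the actual `requests.get` call is left as a comment since it needs the API container running.

```python
from urllib.parse import urlencode

def build_positions_url(base_url, filters):
    """Build a GET /positions URL, dropping filters the user left empty."""
    active = {key: value for key, value in filters.items() if value}
    query = urlencode(active)
    return f"{base_url}/positions?{query}" if query else f"{base_url}/positions"

# Example: values a Streamlit page might collect from input widgets
url = build_positions_url(
    "http://api:4000",
    {"Location": "Boston", "Industry": "", "PositionType": "Co-op"},
)
# Then fetch with: requests.get(url).json()
```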

## Contributors
- Harrison Dolgoff
- Jack Harmeling
- Rohan Francis
- Nicolas Ignaszewski

## Demo Video
- A walkthrough of the project is available [here](URL)
api/backend/applications/applications.py (new file: 29 additions, 0 deletions)
### FILE FOR APPLICATIONS FROM THE REST API MATRIX
from flask import Blueprint, request, jsonify, make_response, current_app
from backend.db_connection import db



#------------------------------------------------------------
# Create a new Blueprint object, which is a collection of
# routes.
applications = Blueprint('applications', __name__)

# Return all applications for a given job posting.
# The function parameter must match the route's <int:job_id> converter.
@applications.route('/applications/<int:job_id>', methods=['GET'])
def job_apps(job_id):
cursor = db.get_db().cursor()
cursor.execute('''
TO BE DONE
''')

theData = cursor.fetchall()

the_response = make_response(jsonify(theData))
the_response.status_code = 200
return the_response
api/backend/positions/positions.py (new file: 223 additions, 0 deletions)
### FILE FOR POSITIONS FROM THE REST API MATRIX
from flask import Blueprint, request, jsonify, make_response, current_app
from backend.db_connection import db



#------------------------------------------------------------
# Create a new Blueprint object, which is a collection of
# routes.
positions = Blueprint('positions', __name__)

# Return all available positions, applying any optional
# filters supplied in the query string
@positions.route('/positions', methods=['GET'])
def get_positions():
cursor = db.get_db().cursor()

# Get filter parameters from query string
# TO DO: Add correct filters
filters = {
'Location': request.args.get('Location'),
'ExperienceRequired': request.args.get('ExperienceRequired'),
'Skills': request.args.get('Skills'),
'Industry': request.args.get('Industry'),
'SalaryRange': request.args.get('SalaryRange'),
'PositionType': request.args.get('PositionType'),
'StartUpName': request.args.get('StartUpName'),
}

# Start with base query
query = 'SELECT * FROM positions WHERE 1=1'
params = []

# Dynamically add filters if they're provided
if filters['Location']:
query += ' AND Location LIKE %s'
params.append(f'%{filters["Location"]}%')

if filters['ExperienceRequired']:
query += ' AND ExperienceRequired LIKE %s'
params.append(f'%{filters["ExperienceRequired"]}%')

    if filters['Skills']:
        query += ' AND Skills LIKE %s'
        params.append(f'%{filters["Skills"]}%')

if filters['Industry']:
query += ' AND Industry LIKE %s'
params.append(f'%{filters["Industry"]}%')

if filters['SalaryRange']:
query += ' AND SalaryRange LIKE %s'
params.append(f'%{filters["SalaryRange"]}%')

if filters['PositionType']:
query += ' AND PositionType LIKE %s'
params.append(f'%{filters["PositionType"]}%')

if filters['StartUpName']:
query += ' AND StartUpName LIKE %s'
params.append(f'%{filters["StartUpName"]}%')

# Execute the query with any applied filters
cursor.execute(query, tuple(params))
theData = cursor.fetchall()

the_response = make_response(jsonify(theData))
the_response.status_code = 200
return the_response
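The `WHERE 1=1` pattern used above can be isolated into a small helper to show the idea on its own: every optional filter appends uniformly with `AND`, and `%s` placeholders keep values out of the SQL string. This is a sketch of the same logic, not a drop-in replacement for the route.

```python
def build_filter_query(filters):
    """Build a parameterized SELECT from a dict of optional LIKE filters.

    Returns (query, params); filters with empty values are skipped.
    """
    query = 'SELECT * FROM positions WHERE 1=1'
    params = []
    for column, value in filters.items():
        if value:  # only add filters the caller actually supplied
            query += f' AND {column} LIKE %s'
            params.append(f'%{value}%')
    return query, params

query, params = build_filter_query({'Location': 'Boston', 'Industry': None})
# query  -> "SELECT * FROM positions WHERE 1=1 AND Location LIKE %s"
# params -> ['%Boston%']
```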

# Create a new position from a JSON request body
@positions.route('/positions', methods=['POST'])
def create_position():
try:
# Get the position details from the request body
current_app.logger.info('Processing position creation request')
position_details = request.json

cursor = db.get_db().cursor()
cursor.execute('''
INSERT INTO positions (
Location,
ExperienceRequired,
Skills,
Industry,
SalaryRange,
PositionType,
StartUpName
) VALUES (
%s, %s, %s, %s, %s, %s, %s
)''', (
position_details['Location'],
position_details['ExperienceRequired'],
position_details['Skills'],
position_details['Industry'],
position_details['SalaryRange'],
position_details['PositionType'],
position_details['StartUpName']
))

# Commit the transaction
db.get_db().commit()

return_value = {
'message': 'Position created successfully'
}

the_response = make_response(jsonify(return_value))
the_response.status_code = 201 # 201 means "Created"
return the_response

except KeyError as e:
return_value = {
'error': f'Missing required field: {str(e)}'
}
return make_response(jsonify(return_value), 400)
except Exception as e:
return_value = {
'error': f'Error creating position: {str(e)}'
}
return make_response(jsonify(return_value), 500)
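The `KeyError` handler above reports only the first missing field in the JSON body. An explicit up-front check could report every gap at once; this is a sketch using the same column names as the INSERT, not part of the route.

```python
REQUIRED_FIELDS = ['Location', 'ExperienceRequired', 'Skills', 'Industry',
                   'SalaryRange', 'PositionType', 'StartUpName']

def missing_fields(payload):
    """Return all required position fields absent from the payload."""
    return [field for field in REQUIRED_FIELDS if field not in payload]

# Example: a partial payload is rejected with every missing field listed
gaps = missing_fields({'Location': 'Boston', 'Skills': 'Python'})
# gaps -> ['ExperienceRequired', 'Industry', 'SalaryRange',
#          'PositionType', 'StartUpName']
```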

# Delete a position by its job ID
@positions.route('/positions/<int:job_id>', methods=['DELETE'])
def delete_position(job_id):
try:
cursor = db.get_db().cursor()
cursor.execute('DELETE FROM positions WHERE JobID = %s', (job_id,))

if cursor.rowcount == 0:
# No position found with this ID
return_value = {
'error': f'No position found with ID {job_id}'
}
return make_response(jsonify(return_value), 404)

# Commit the transaction
db.get_db().commit()

return_value = {
'message': f'Position {job_id} deleted successfully'
}
return make_response(jsonify(return_value), 200)

except Exception as e:
return_value = {
'error': f'Error deleting position: {str(e)}'
}
return make_response(jsonify(return_value), 500)

# Update selected fields of a position by its job ID
@positions.route('/positions/<int:job_id>', methods=['PATCH'])
def update_position(job_id):
try:
current_app.logger.info(f'Processing position update request for ID {job_id}')
updates = request.json

# Start building the dynamic update query
query_parts = []
params = []

        # Build the query dynamically from the provided fields,
        # restricted to the columns the positions table actually
        # uses (matching the INSERT statement above)
        updatable = ['Location', 'ExperienceRequired', 'Skills', 'Industry',
                     'SalaryRange', 'PositionType', 'StartUpName']
        for field in updatable:
            if field in updates:
                query_parts.append(f'{field} = %s')
                params.append(updates[field])

if not query_parts:
return_value = {
'error': 'No valid fields provided for update'
}
return make_response(jsonify(return_value), 400)

# Construct the final query
query = 'UPDATE positions SET ' + ', '.join(query_parts) + ' WHERE JobID = %s'
params.append(job_id)

# Execute the update
cursor = db.get_db().cursor()
cursor.execute(query, tuple(params))

if cursor.rowcount == 0:
return_value = {
'error': f'No position found with ID {job_id}'
}
return make_response(jsonify(return_value), 404)

# Commit the transaction
db.get_db().commit()

return_value = {
'message': f'Position {job_id} updated successfully'
}
return make_response(jsonify(return_value), 200)

except Exception as e:
return_value = {
'error': f'Error updating position: {str(e)}'
}
return make_response(jsonify(return_value), 500)
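The PATCH route's query assembly can likewise be sketched as a standalone helper: only supplied fields become `SET` clauses, and an empty update maps to the route's 400 response. The column names here are assumptions mirroring the INSERT statement earlier in this file.

```python
def build_update_query(job_id, updates, allowed):
    """Build a parameterized UPDATE for only the supplied fields.

    Returns (query, params), or (None, None) when no valid field
    was provided, mirroring the route's 400 response.
    """
    parts = [f'{col} = %s' for col in allowed if col in updates]
    params = [updates[col] for col in allowed if col in updates]
    if not parts:
        return None, None
    query = 'UPDATE positions SET ' + ', '.join(parts) + ' WHERE JobID = %s'
    params.append(job_id)
    return query, params

q, p = build_update_query(7, {'Industry': 'FinTech'},
                          ['Location', 'Industry', 'SalaryRange'])
# q -> "UPDATE positions SET Industry = %s WHERE JobID = %s"
# p -> ['FinTech', 7]
```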