Our project is a website where users can input text (such as facts or news) and check it for accuracy. We will fine-tune a GPT-3 model to evaluate the accuracy of inputted text and provide users with an evaluation of whether their text is accurate or not, along with an explanation of why. Our workflow looks something like this: a user logs in to the fact checker website using their credentials. Once they log in, they see a text box where they can input text and receive an evaluation of its accuracy. They can also view their history (their previously inputted text along with the corresponding evaluations).
If you're using a version of this project that is hosted by someone else, skip this section. If you don't have access to an existing host, see here and here for instructions on setting up a development server.
- Navigate to the correct address in your web browser.
- Type or copy-paste any statements that you want evaluated for misinformation into the text field labelled 'Start typing, or copy and paste your document here.'
- Click the 'Submit' button at the bottom right of the text field.
- Wait a few seconds, then look for an evaluation (misinformation/not misinformation) above the text field.
- User accounts. Users can create accounts, which they can use to gain access to our fact checker website.
i. Non-operational: Firebase login authentication system not yet implemented.
- False text detection. Once users log in, they can input text and receive an evaluation of its accuracy along with an explanation of that evaluation.
i. Operational: Endpoints are fully connected with feedback, although the model has not yet been trained on our dataset.
- Text validation. The text that users input is vetted for acceptable language and filtered for profanity to improve result accuracy (a sketch of this check follows this list).
i. Operational: Maximum query length and profanity filter have been implemented and are displayed below the input field.
- User history. Once users log in with their credentials, they can view their history of previously inputted text along with the corresponding accuracy evaluations.
i. Non-operational: Database can store and interact with user history, but endpoint to UI not implemented.
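The text validation step can be illustrated with a minimal sketch in TypeScript. This is not the project's actual implementation: the 500-character limit and the placeholder word list are assumptions made for the example.

// Minimal sketch of client-side text validation before a query is sent to the
// backend. The 500-character cap and the blocked-word list are placeholder
// assumptions, not the project's real limits.
const MAX_QUERY_LENGTH = 500; // assumed limit
const BLOCKED_WORDS = ['badword1', 'badword2']; // hypothetical profanity list

export function validateInput(text: string): { valid: boolean; reason?: string } {
  if (text.trim().length === 0) {
    return { valid: false, reason: 'Input is empty.' };
  }
  if (text.length > MAX_QUERY_LENGTH) {
    return { valid: false, reason: `Input exceeds ${MAX_QUERY_LENGTH} characters.` };
  }
  const lowered = text.toLowerCase();
  if (BLOCKED_WORDS.some((word) => lowered.includes(word))) {
    return { valid: false, reason: 'Input contains blocked language.' };
  }
  return { valid: true };
}

A check like this runs before any request is made, which is why the length and language limits can also be surfaced below the input field as described above.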
Report all bugs as GitHub issues.
Known bugs can be found here.
Main Directory Contents:
- /src/App.jsx - React router for web page views
- /src/pages/ - Top-level web page views - About, Login, and Polygraph
- /src/components/ - React components used in the top-level pages
- /src/api/ - TypeScript modules containing API handlers for request/response with the Polygraph backend (a sketch follows this list)
- /src/util/ - TypeScript modules containing utility functions
- /public/ - Static assets (images, icons, etc.)
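As a rough illustration of what a handler in /src/api/ might look like, here is a hedged sketch. The base URL, the /evaluate route, and the response fields are assumptions made for the example; the real backend contract may differ.

// Hypothetical sketch of a handler that sends text to the Polygraph backend.
// BASE_URL, the /evaluate route, and EvaluationResult's fields are assumed
// names for illustration only; check the backend repository for the real API.
const BASE_URL = 'http://localhost:5000'; // assumed local backend address

export interface EvaluationResult {
  misinformation: boolean; // assumed field name
  explanation: string;     // assumed field name
}

export async function evaluateText(text: string): Promise<EvaluationResult> {
  const response = await fetch(`${BASE_URL}/evaluate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) {
    throw new Error(`Polygraph backend returned ${response.status}`);
  }
  return (await response.json()) as EvaluationResult;
}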
- If you don't have Node.js or npm installed, install them from here
- Clone this repository
- Navigate into the project directory:
$ cd polygraph-ui
- Install dependencies:
$ npm install
- Add a new environment variable called REACT_APP_FIREBASE_API_KEY to your development environment, with your Firebase API key as its value (a configuration sketch follows these steps).
- Make sure you have cloned and are running the Polygraph backend
- Run the app locally using the following command:
$ npm run start
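For reference, here is a sketch of how the REACT_APP_FIREBASE_API_KEY variable is typically consumed. Create React App exposes REACT_APP_-prefixed variables on process.env at build time; the authDomain and projectId values below are placeholders, and the project's actual Firebase initialization may differ.

// Sketch of Firebase initialization using the environment variable set above.
// Written against the Firebase v9 modular SDK; the authDomain and projectId
// values are placeholders, not this project's real configuration.
import { initializeApp } from 'firebase/app';
import { getAuth } from 'firebase/auth';

const firebaseConfig = {
  apiKey: process.env.REACT_APP_FIREBASE_API_KEY,
  authDomain: 'your-project.firebaseapp.com', // placeholder
  projectId: 'your-project',                  // placeholder
};

const app = initializeApp(firebaseConfig);
export const auth = getAuth(app);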
Test suites can be run from the command line with the following command:
$ npm test -- --coverage
This will run all test suites and generate a code coverage report. Note that some tests will not pass until you have set the REACT_APP_FIREBASE_API_KEY environment variable to a valid API key.
New test suites can be added by creating a file suffixed with .test.jsx anywhere in the /src/ directory. Individual test cases should follow this format:
test('test title/description', () => {
  // test body
});
Unit tests should be grouped according to the component/module that is being tested. Files should aim to test only one component or module and should have the same name as that component/module (ex: App.jsx should be tested in App.test.jsx). Each file should go in the same directory as the component/module that it is testing.
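For example, a suite for a hypothetical About page could look like the following. It assumes the project uses Jest with React Testing Library and jest-dom (the Create React App defaults); the component import and the assertion are illustrative, not the page's real content.

// About.test.jsx - illustrative suite for a hypothetical About page component.
// Assumes Jest, React Testing Library, and jest-dom are available (the Create
// React App defaults).
import { render, screen } from '@testing-library/react';
import About from './About';

test('renders the About page heading', () => {
  render(<About />);
  // The heading query is illustrative; adjust it to the page's real content.
  expect(screen.getByRole('heading')).toBeInTheDocument();
});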
The linter can be run from the command line with the following command:
$ npm run lint
This will check that your code follows this project's style guidelines.
Before attempting to build a release version, make sure that you have followed all of the steps in the setup section. A release build of the project can be generated with the following command:
$ npm run build
By default, this will place the relevant contents in the /build/ directory.