This is originally based off the research done on Atom Community Server Backend, which was written in Golang, but has been switched to JavaScript for broader support.

Developers, please read through this whole document again, as it documents large changes in how to contribute.

To read through an overview and find links for contributors, take a look at the Documentation.
Please note that there are two versions of this repo.

The version of this server that exists on confused-Techie's repo is intended to be the version that reaches feature parity with the existing Atom.io Backend Server, that is, until it reaches release version 1.0.0. Once this happens, that repo will likely be archived or stop receiving updates. The reason it should still exist untouched is so that any other user of Atom who wants a drop-in replacement for the backend server can use it, with zero modifications, to support their Atom instance.

Once version 1.0.0 is released, all new features and developments should be brought over to pulsar-edit/package-backend, since that repo will contain the Backend Server intended to be used by Pulsar.

With this in mind, please use the above to correctly address any issues or PRs, until this warning is removed.
Atom-Backend is MOVING
While this is currently the home for the new backend during development, that won't always be the case. The long-term goal is to move this package to the new Pulsar-Edit repo. If you are wondering what 'Pulsar' is while the title here says 'Atom', read more about this change.

The goal for now is, of course, to keep this new backend compatible with any fork of Atom that may arise, as it should currently be a drop-in replacement for the existing backend.

If you'd like to get more involved in the new 'Community-Based, Hackable, Text Editor', feel free to visit the 'Pulsar-Edit' Org and get involved however you can. Additionally, there's a 'Pulsar-Edit' Discord where you can get involved with the other maintainers.
To start developing with this package, getting set up should be rather quick.

In the root folder of the downloaded package, run:

`npm install .`

to install all package dependencies.

Then go ahead and create an `app.yaml` file, or rename `app.example.yaml` to `app.yaml`. The contents of these files can mirror each other, or you can read through the comments of `app.example.yaml` to find suitable alternatives for what you are testing. This config specifies the port of the server, the URL, as well as many other aspects.
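Purely as an illustration, a minimal `app.yaml` might look something like the sketch below. The key names shown here are assumptions; the comments in `app.example.yaml` are the authoritative reference for the supported keys and values.

```yaml
# Hypothetical sketch only — see app.example.yaml for the real keys.
port: 8080                       # port the API server listens on
server_url: "http://localhost"   # base URL the server advertises
```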
Finally, while not at all recommended, you can run the API Server with `node .`.

It is instead recommended that you use the built-in scripts to run the server. There are several, which can be run with `npm run $SCRIPT_NAME`:
- `start`: Starts the Backend Server normally, using your `config.yaml` for all needed values. This is what's used in production.
- `test:unit`: Used for unit testing, i.e. items placed in `./src/tests`. Using this does the following:
  - Sets `NODE_ENV=test`
  - Sets `PULSAR_STATUS=dev`
  - Runs the unit tests located in `./src/tests/` using `jest`.
  - Requires that there be no calls to the Database. This does not handle the loading of the Database in any way.
  - Uses mocked responses for all functions in `./src/storage.js` to avoid having to contact Google Storage APIs.
- `test:integration`: Used exclusively for integration testing, i.e. tests that require a Database to connect to.
  - Sets `NODE_ENV=test`
  - Sets `PULSAR_STATUS=dev`
  - Runs the integration tests located in `./src/tests_integration` using `jest`.
  - Requires the ability to spin up a local Database that it will connect to, ignoring the values in your `config.yaml`.
  - Uses mocked responses for all functions in `./src/storage.js` to avoid having to contact Google Storage APIs.
- `start:dev`: Used for local development of the backend if you don't have access to the production database. Spins up a local-only Database for testing purposes, but otherwise runs the backend server exactly like `start`.
  - Sets `PULSAR_STATUS=dev`
  - Starts up the server using `./src/dev_server.js` instead of `./src/server.js`.
  - Requires the ability to spin up a local Database that it will connect to, ignoring the values in your `config.yaml`.
  - Uses mocked responses for all functions in `./src/storage.js` to avoid having to contact Google Storage APIs.
- `api-docs`: Uses `@confused-techie/quick-webserver-docs` to generate documentation based off the JSDoc-style comments, documenting only the API endpoints. This should be done by GitHub Actions.
- `lint`: Uses `prettier` to format and lint the codebase. This should be done by GitHub Actions.
- `complex`: Uses `complexity-report` to generate complexity reports of the JavaScript. Keep in mind that this does not support ES6 yet, so not all functions are documented. This should be done by GitHub Actions.
- `js-docs`: Uses `jsdoc2md` to generate documentation based off the JSDoc comments within the codebase. This should be done by GitHub Actions.
There are some additional scripts that you likely won't encounter during normal development of the Backend, but they are documented here for posterity.

- `contributors:add`: Uses `all-contributors` to add a new contributor to the `README.md`.
- `test_search`: Uses `./scripts/tools/search.js` to run the different search methods against a preset amount of data and return the scores.
- `migrations`: Used by `@database/pg-migrations` to run the SQL scripts found in `./src/dev-runner/migrations/001-initial-migration.sql` to populate the local database when it is in use by certain scripts mentioned above.
To make development as easy as possible, and to ensure everyone can properly test their code, the Backend Server now includes the ability to automatically spin up a local database. This means that if you run either of the following scripts, it is assumed that you are able to run a local database:

- `test:integration`
- `start:dev`

Spinning up this local database requires Docker to be installed on your system. Using Docker, the Backend Server will automatically start a development database and insert data into it. This means you can safely delete any data, or otherwise test as if you were working on the production database.
To check what data you should expect to find within the database, view the Migration Script.

If you experience any issues using this new feature, please feel free to open an issue.
Researched information on the behavior that will be mirrored from the original Backend is available here. If you would like to read more about the available routes, those are documented here.

There's a crash-course guide for new contributors in developers.md, as well as JSDoc-generated documentation of the source code. Additionally, for any bug hunters, Complexity Reports are generated; keep in mind that since the tool's underlying AST generator doesn't support ES6, not everything is currently included.

Finally, there are many `TODO::`s scattered around for things that still need to be done. Otherwise, a collection of 'Good First Issues' is available, and lastly there's a collection of all functions/methods and their current status to help someone quickly jump in.

If you'd like to help Atom-Community Core, there is still much work to be done.

Please note that, for the time being, this Read Me documentation is overly verbose to allow easy communication while in development.
There is quite a bit of data that can be obtained from the API.

The API needs to be aware of:

- Package repo entries
- A list of valid users, and tokens to provide for them.
- Each user's list of all starred repos.
- Each repo's list of all users that have starred it.
- The time a repo was created.
- The last time a repo was updated.

Things to note:

- When a repo has a change of name, the old name permanently redirects to the current version of the repo.
  - This means the name of a repo can never serve as a unique identifier.
- A repo can transfer owners.
  - This means the owner or creator of a repo is also not sufficient as a unique identifier.
This means that simply storing the archive or future package repo entries will not be sufficient.

The proposed solution, which avoids having to work with complex data types, is a pointer index. This index acts as a pointer to the specific data of each package, with each key being a package's name pointing to its raw file location. If a package changes name, a new entry is created under the new name, pointing to the same file. This also means that if we don't remove the previous entry, the old name will still point to the file.
These JSON files will be saved under a UUID, like `UUIDv4.json`, allowing any content inside to change while always pointing to the same file. This `UUIDv4.json` file is also the location the `package_pointer.json` index refers to.
Within each package file will be additional data not retrieved via the API, including a list of all users that have starred the package, listed by their username, as well as the creation date and last modified date. These values will be removed before being returned via the API.

There will also be a large user object, where each username is a key, and inside will be (once we determine how to handle auth) any valid keys for the user, used to check authenticated requests, along with an array of every package they have starred. This value can use the package's name, as long as it then uses the pointer to find the package data.
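The name-to-file indirection and the user object described above can be sketched as follows. All names, UUIDs, and fields here are illustrative assumptions, not the final schema:

```javascript
// Hypothetical sketch of the package_pointer.json index. Both the current
// and a former name point at the same UUIDv4-named file, so an old name
// keeps resolving after a rename.
const packagePointer = {
  "cool-package": "b9a1c2d3-4e5f-4789-abcd-0123456789ab.json",
  "cool-package-old-name": "b9a1c2d3-4e5f-4789-abcd-0123456789ab.json",
};

// Users are keyed by username; starred packages are stored by name and
// resolved through the pointer index.
const users = {
  "example-user": {
    tokens: [], // placeholder: auth handling is still to be determined
    starred: ["cool-package"],
  },
};

// Resolve a package name to its backing file via the pointer index.
function resolvePackageFile(name) {
  return packagePointer[name];
}

// Resolve every package a user has starred to its raw file location.
function starredFiles(username) {
  return users[username].starred.map(resolvePackageFile);
}
```

Because both keys resolve to the same file, renaming a package never breaks old links, and the UUID-named file remains the stable identifier.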
Thanks goes to these wonderful people (emoji key):
- confused_techie 💻
- Giusy Digital 💻
- DeeDeeG 🤔
- ndr_brt 💻
This project follows the all-contributors specification. Contributions of any kind welcome!