Addresses API

The Addresses API lets you search the local and national address databases. Local addresses are those for which Hackney Council holds a record.

This API is based on the HackneyAddressesAPI with some improvements and updates.

For more information about how the API can be used, see the Hackney Addresses API user documentation.

Stack

  • .NET Core as a web framework.
  • nUnit as a test framework.

Contributing

Setup

  1. Install Docker.
  2. Install AWS CLI.
  3. Clone this repository.
  4. Rename the initial template.
  5. Open it in your IDE.

Note: Additional setup and troubleshooting information can be found in the docs file.

Development

Running the application

To serve the application, run it using your IDE of choice; we use Visual Studio CE and JetBrains Rider on Mac.

The application can also be served locally using Docker. It will be available on port 3000.
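
As a rough sketch, serving via Docker directly might look like the commands below (the service name addresses-api is an assumption; check docker-compose.yml for the project's actual service names):

docker-compose build addresses-api
docker-compose up addresses-api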

The Postgres database connection string is read from an environment variable, CONNECTION_STRING.

Similarly, the Elasticsearch URL is read from an environment variable, ELASTICSEARCH_DOMAIN_URL.

You can set the environment variables in launchSettings.json, but don't commit local values to git.

On Windows you can set global environment variables instead (note that you will need to reboot for global environment variables to take effect).

Local variables (in launchSettings.json) take precedence over global ones.
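
As an illustration, the environmentVariables section of launchSettings.json might be sketched like this (the profile name and values are placeholders, not the project's real settings):

{
  "profiles": {
    "AddressesAPI": {
      "commandName": "Project",
      "environmentVariables": {
        "CONNECTION_STRING": "Host=127.0.0.1;Database=devdb;Username=postgres;Password=mypassword;",
        "ELASTICSEARCH_DOMAIN_URL": "http://localhost:9200"
      }
    }
  }
}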

There is a Makefile that can be used to build and launch the application. Make can be problematic to install on Windows, but it is possible.

make build && make serve

If you are not using your own local instances, you can use the databases provided in the project, which contain seed data:

Setting up the development database

In a separate terminal, run:

make migrate-dev-database && make seed-dev-database

This will run migrations on the development database and then seed it with data. This data can then be retrieved by calling the endpoints locally.
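
As a rough example, you could then hit the locally running API along these lines (the endpoint path and query parameter are illustrative assumptions; see the user documentation for the actual request format):

curl "http://localhost:3000/api/v1/addresses?postcode=E8%201DY"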

Setting up the development elasticsearch instance

In your terminal run:

make seed-es-data

If you changed the elasticsearch seed files, then you can run

make remove-es-data

to remove the Docker container and volume. Then, the next time you start and seed the dev-elasticsearch container, it will load the new data.
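
To sanity-check that the seed data has loaded, you can query the local Elasticsearch instance directly (this assumes it is exposed on the default port 9200; check docker-compose.yml for the actual port mapping):

curl "http://localhost:9200/_cat/indices?v"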

Setting up Kibana

If you want to query the elasticsearch database you can use the provided Kibana container.

To start Kibana, run:

docker-compose up -d kibana

The Kibana UI will be available at http://localhost:5601/app/dev_tools#/console.
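
As a starting point, you could paste something like the following into the Dev Tools console to list the indices and inspect a few documents (hackney_addresses is a placeholder index name; use whichever index the seed script actually creates):

GET _cat/indices?v

GET hackney_addresses/_search
{
  "query": { "match_all": {} },
  "size": 5
}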

Running the tests

You can run the tests in a container:

make test

Or locally if you prefer:

dotnet test

If not using make, or to debug the tests in Visual Studio, start the test databases in their Docker containers before starting the tests:

docker-compose up -d test-database
docker-compose up -d test-elasticsearch
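
With the containers up, you can also run a subset of the tests from the command line using a filter, for example (the namespace shown is an assumption; adjust it to the actual test project):

dotnet test --filter "FullyQualifiedName~AddressesAPI.Tests"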

NOTE - if you have a local version of Postgres installed (and it is running on the default port 5432), you will need to stop it, otherwise the unit tests will fail - the Docker Postgres also runs on port 5432 and the two will clash.

On Windows, go to Services and stop the Postgres server service.

On a Mac, you can check which ports are in use by running:

sudo lsof -PiTCP -sTCP:LISTEN

If Postgres is running, you can either uninstall it using the uninstaller or kill the process.
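
For example, you might narrow the listing to port 5432 and, if Postgres was installed via Homebrew, stop it as a service (the brew command assumes a Homebrew installation):

sudo lsof -PiTCP -sTCP:LISTEN | grep 5432
brew services stop postgresql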

The migrations for the test database are run as part of the initial test setup.

Release process

We use a pull request workflow, where changes are made on a branch and approved by one or more other maintainers before the developer can merge into the master branch.

[Image: CircleCI workflow example]

Then we have an automated six-step deployment process, which runs in CircleCI.

  1. Automated tests (nUnit) are run to ensure the release is of good quality.
  2. The application is deployed to development automatically, where we check our latest changes work well.
  3. We manually confirm a staging deployment in the CircleCI workflow once we're happy with our changes in development.
  4. The application is deployed to staging.
  5. We manually confirm a production deployment in the CircleCI workflow once we're happy with our changes in staging.
  6. The application is deployed to production.

Our staging and production environments are hosted by AWS. We deploy to production for each feature/config change merged into the master branch.

Creating A PR

To help make code changes easier to understand when they are reviewed, we've added a PR template. When a new PR is created on a repo that uses this API template, the PR template will automatically fill in the 'Open a pull request' description textbox. The PR author can edit and change the PR description, using the template as a guide.

Adding a Migration

For this API's Postgres database in RDS, we use EF Core code-first migrations to manage its schema. To make changes to the database structure (e.g. adding columns), follow these steps:

  1. If you haven't done so previously, you need to install the dotnet-ef CLI tool by running dotnet tool install --global dotnet-ef in your terminal.
  2. Make the changes you want to the database model in the code, namely in AddressesContext or any of the DbSets listed within the file.
  3. In your terminal, navigate to the project root folder and run dotnet ef migrations add -o ./Infrastructure/Migrations -p AddressesAPI NameOfThisMigration to create the migration files. NameOfThisMigration should be replaced with your migration name, e.g. AddColumnNameToCrossReferencesTable. (A consolidated sketch of these commands appears after this list.)
  4. Go to the folder /AddressesAPI/V1/Infrastructure/Migrations and you should see two new files for the migration. In the one which doesn't end in .Designer you can check through the migration script to make sure everything is being created as you expect.
  5. If the migration file looks wrong or you have missed something, you can either:
  • Make sure the test database is running and then run:
CONNECTION_STRING="Host=127.0.0.1;Database=testdb;Username=postgres;Password=mypassword;" dotnet ef migrations remove -p AddressesAPI
  • Or you can delete the migration files and then revert the changes to AddressesContextModelSnapshot.cs. After that, make the necessary changes to the context, then create the migration files again.
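
For convenience, here is a consolidated sketch of the commands from the steps above, using the example migration name from step 3 (replace it with your own):

dotnet tool install --global dotnet-ef
dotnet ef migrations add -o ./Infrastructure/Migrations -p AddressesAPI AddColumnNameToCrossReferencesTable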

Note: You must not commit any changes to any DbSet listed in AddressesContext without creating a migration file for the change. Otherwise the change won't be reflected in the database and will cause errors.

Static Code Analysis

FxCop runs code analysis when the Solution is built.

Both the API and Test projects have been set up to treat all warnings from the code analysis as errors and therefore, fail the build.

However, we can select which warnings to suppress by setting the severity of the responsible rule to none within the .editorconfig file, e.g. dotnet_diagnostic.<RuleId>.severity = none for a single rule, or dotnet_analyzer_diagnostic.category-<Category>.severity = none for a whole category. Documentation on how to do this can be found here.
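
For illustration, a hypothetical .editorconfig entry might look like this (CA1303 and the Globalization category are example values only, not rules the project necessarily suppresses):

[*.cs]
# Example only: suppress a single rule by ID
dotnet_diagnostic.CA1303.severity = none
# Example only: suppress a whole analyzer category
dotnet_analyzer_diagnostic.category-Globalization.severity = none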

NOTE: FxCop is now deprecated by Microsoft, and a different code analysis tool is run as part of the build pipeline in CircleCI. It would be good to align these, as currently it is possible to check in code that has no issues locally, only to have it rejected by the CircleCI static analysis.

Smoke testing an environment

After deploying to an environment, there is a Postman suite which can be run manually to 'smoke test' that everything is working properly. This is at PostmanTests\Addresses-api test suite.postman_collection.json. Load this into Postman and set a global variable called addresses-api-url.

The value of this should be set to the appropriate production or staging URL, up to and including the 'api' part but without the trailing slash. You will also need to set the API key in each test: obtain a suitable key for the environment (the keys are different for V1 and V2) and paste it into the 'Auth' header in each Postman test.
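
For example, the addresses-api-url value would take roughly this shape (the hostname below is purely illustrative, not a real environment URL):

https://addresses-api.staging-env.example.com/api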

Once set up, you can run through the tests and check the results are as expected.

Agreed Testing Approach

  • Use nUnit, FluentAssertions and Moq
  • Always follow a TDD approach
  • Tests should be independent of each other
  • Gateway tests should interact with a real test instance of the database
  • Test coverage should never go down
  • All use cases should be covered by E2E tests
  • Optimise when test run speed starts to hinder development
  • Unit tests and E2E tests should run in CI
  • Test database schemas should match up with production database schema
  • Have integration tests which test from the PostgreSQL database to API Gateway

Data Migrations

A good data migration:

  • Records failure logs
  • Is automated
  • Is reliable
  • Is as close to real time as possible
  • Has observable monitoring in place
  • Does not affect any existing databases

Contacts

Active Maintainers

Other Contacts