Add server-side hashing of passwords #619

Open
tobil4sk wants to merge 13 commits into base: development from client-server-hashing
Conversation

tobil4sk (Member)

Implements argon2id hashing of passwords on the server, with automatic rehashing of the old Md5 hashes. The Md5 hashing on the client is kept as is for now.

See #314.
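
For context, the rehash-on-authentication flow could look roughly like the sketch below. This is illustrative only: `Argon2` stands in for whichever Argon2id binding the server ends up using, `User` is the SiteDb record simplified to the relevant fields, and the nullable `passArgon2`/`salt` fields follow the column scheme discussed later in this thread.

```haxe
// Sketch only: Argon2 is a hypothetical binding, not an API from this PR.
// The client still sends md5(password), so the server hashes that digest.
static function authenticate(user:User, md5FromClient:String):Bool {
    if (user.passArgon2 == null) {
        // Legacy row: only the old Md5 hash is stored.
        if (user.pass != md5FromClient)
            return false;
        // Password is correct: transparently upgrade the stored hash.
        user.salt = Argon2.generateSalt();
        user.passArgon2 = Argon2.hash(md5FromClient, user.salt);
        user.update();
        return true;
    }
    // A real implementation should compare in constant time.
    return user.passArgon2 == Argon2.hash(md5FromClient, user.salt);
}
```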

@tobil4sk force-pushed the client-server-hashing branch from 412284c to 60283ff on April 12, 2024 11:09
package haxelib.server;

import haxelib.server.SiteDb;

andyli (Member)

I think we need to use transactions for the public methods of the Update class. There are multiple replicas of the haxelib server pod running in the cluster.

andyli (Member)

Also, both dev (development-lib.haxe.org) and prod (lib.haxe.org) connect to the same DB. When dev updates the DB, would prod (which runs older code) still work correctly with the upgraded DB? I haven't thought through all the features yet, but e.g. registering a new user may not work because the salt column is NOT NULL.

tobil4sk (Member, Author)

> I think we need to use transactions for the public methods of the Update class. There are multiple replicas of the haxelib server pod running in the cluster.

I've wrapped the methods in transactions. It looks like the Manager.all() method uses locking by default. This only works on MySQL, not SQLite, but it seems the server is using MySQL anyway?

https://github.com/HaxeFoundation/record-macros/blob/00afa3924e8478203dd500f8b8a6f2174b2f8e13/src/sys/db/Manager.hx#L475-L477
https://github.com/HaxeFoundation/record-macros/blob/00afa3924e8478203dd500f8b8a6f2174b2f8e13/src/sys/db/Manager.hx#L54-L56
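
For anyone following along, wrapping a call in a transaction through sys.db looks roughly like this. This is a minimal sketch against the generic sys.db.Connection API, not necessarily the helper used in this PR:

```haxe
import sys.db.Manager;

// Sketch: run `body` in a transaction on the active Manager connection,
// rolling back on failure so parallel replicas never observe partial updates.
static function inTransaction(body:() -> Void):Void {
    var cnx = Manager.cnx;
    cnx.startTransaction();
    try {
        body();
        cnx.commit();
    } catch (e:Dynamic) {
        cnx.rollback();
        throw e;
    }
}
```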

> When dev updates the DB, would prod (which runs older code) still work correctly with the upgraded DB?

This would be an issue. Would it be possible to disable the development server while this transition is taking place? Otherwise, I can open a PR that temporarily disables registrations and any other commands that would be a problem. We can then deploy that to the prod server, perform the migration, and re-enable everything once it is done.

andyli (Member)

Thanks for adding the transactions. I believe it's alright to only care about MySQL, because that's what is being used in prod.

I am not comfortable deploying directly to prod, though. What about keeping the DB backward-compatible by leaving the existing pass column unmodified and adding a new pass_argon2 column? Once we've confirmed the changes are good in dev, we can deploy to prod and then remove the old pass column.

Concretely, the new code should:

  • Create salt and pass_argon2 columns during the start-up table migration (sketched after this list). The columns should be nullable because existing haxelib instances may still perform user registration.
  • Whenever a password needs to be used, check whether pass_argon2 is null. If so, rehash and write to salt and pass_argon2; if not, use salt and pass_argon2 directly (this is the flow sketched in the authentication example above).
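
A stage-one start-up migration along these lines could be sketched as follows; the table name and column types here are assumptions for illustration, not necessarily what the PR uses:

```haxe
// Sketch: add the new columns as NULLable so older haxelib instances can
// keep inserting rows without them. In practice this would be guarded by a
// schema-version check (cf. the Meta table discussed below), since MySQL's
// ALTER TABLE ... ADD COLUMN has no IF NOT EXISTS clause.
static function addArgon2Columns():Void {
    var cnx = sys.db.Manager.cnx;
    cnx.request("ALTER TABLE User ADD COLUMN salt VARBINARY(16) NULL");
    cnx.request("ALTER TABLE User ADD COLUMN pass_argon2 VARBINARY(128) NULL");
}
```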

Later, after the above is deployed to prod, a second-stage table migration:

  • Fill in the salt and pass_argon2 for all the remaining rows.
  • Remove the old pass column (see the sketch below).
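
Stage two could then look roughly like this sketch. Again, Argon2 is a hypothetical binding; and since the old pass column already stores md5(password), the backfill can hash that digest directly:

```haxe
// Sketch: backfill the remaining rows, then drop the legacy column.
static function finishMigration():Void {
    for (user in User.manager.all()) {
        if (user.passArgon2 == null) {
            user.salt = Argon2.generateSalt();
            // user.pass holds md5(password), which is what clients send anyway.
            user.passArgon2 = Argon2.hash(user.pass, user.salt);
            user.update();
        }
    }
    sys.db.Manager.cnx.request("ALTER TABLE User DROP COLUMN pass");
}
```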

for (m in managers) {
    if (TableCreate.exists(m)) {
        hasOldTables = true;
    } else {
        TableCreate.create(m);
    }
}
tobil4sk (Member, Author)

@andyli Regarding what you said about multiple server instances, this may be a problem here, since checking whether a table exists and creating it are not done atomically. I'm not sure whether this might end up being an issue if two instances both add conflicting Meta table entries, given that it is meant to be a table with a single field? If TableCreate.create would fail in that case anyway, then I guess it wouldn't be a problem.

andyli (Member)

Yeah, I think the latter command will fail when there are two parallel TableCreate.create() calls. Ideally we should use CREATE TABLE IF NOT EXISTS..., but well.

I agree it should be fine to let the latter one fail, since apache/k8s will restart the instance.
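
For reference, the race-free variant would issue the SQL directly, since record-macros' TableCreate doesn't expose an IF NOT EXISTS option as far as I can tell; the column list here is illustrative only:

```haxe
// Sketch: with IF NOT EXISTS, two replicas racing at start-up both succeed
// instead of the slower one failing on a duplicate CREATE TABLE.
sys.db.Manager.cnx.request(
    "CREATE TABLE IF NOT EXISTS Meta (" +
    "  dbVersion INT NOT NULL" +
    ")"
);
```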
