Add server-side hashing of passwords #619
base: development
Conversation
Force-pushed from 025d596 to 412284c:
This allows passwords to be automatically rehashed if the database is old.
Force-pushed from 412284c to 60283ff:
- Passwords are now hashed on the server, with a salt. This will be done via SQL statements for now, because I'm not sure how to run `skeema push` automatically.
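A minimal sketch of what such SQL statements might look like, run through the server's `sys.db.Connection`; the table name, column names, and types here are assumptions based on this thread, not the PR's actual schema:

```haxe
class Migration {
    // Hypothetical start-up migration; `User`, `salt` and `pass_argon2`
    // are assumed names, and the column types are placeholders.
    public static function addPasswordColumns(cnx:sys.db.Connection):Void {
        cnx.request("ALTER TABLE User ADD COLUMN salt VARBINARY(16)");
        cnx.request("ALTER TABLE User ADD COLUMN pass_argon2 VARCHAR(255)");
    }
}
```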
Force-pushed from 13ebb11 to 3bba6c3.
```haxe
package haxelib.server;

import haxelib.server.SiteDb;
```
I think we need to use transactions for the public methods of the Update class. There are multiple replicas of the haxelib server pod running in the cluster.
Also, both dev (development-lib.haxe.org) and prod (lib.haxe.org) connect to the same DB. When dev updates the DB, would prod (which runs older code) still work correctly with the upgraded DB? I haven't thought through all the features yet, but e.g. registering a new user may not work because the `salt` column is `NOT NULL`.
> I think we need to use transactions for the public methods of the Update class. There are multiple replicas of the haxelib server pod running in the cluster.
I've wrapped the methods in transactions. It looks like the `Manager.all()` method uses locking by default, though this only works on MySQL, not SQLite. It seems the server is using MySQL anyway?
https://github.com/HaxeFoundation/record-macros/blob/00afa3924e8478203dd500f8b8a6f2174b2f8e13/src/sys/db/Manager.hx#L475-L477
https://github.com/HaxeFoundation/record-macros/blob/00afa3924e8478203dd500f8b8a6f2174b2f8e13/src/sys/db/Manager.hx#L54-L56
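For reference, a minimal sketch of the kind of transaction wrapping meant here, using `startTransaction`/`commit`/`rollback` from `sys.db.Connection`; `body` is a placeholder for the actual update logic, not the PR's actual code:

```haxe
class Update {
    // Sketch only: run `body` inside a transaction so that parallel
    // server replicas cannot interleave their writes.
    static function runInTransaction(cnx:sys.db.Connection, body:() -> Void):Void {
        cnx.startTransaction();
        try {
            body();
            cnx.commit();
        } catch (e:Dynamic) {
            // Roll back partial work, e.g. if another replica won the race.
            cnx.rollback();
            throw e;
        }
    }
}
```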
> When dev updates the DB, would prod (which runs older code) still work correctly with the upgraded DB?
This would be an issue. Would it be possible to disable the development server while this transition is taking place? Otherwise, I can open a PR to temporarily disable registrations and any other commands that would be a problem. Then we can push it to disable those on the prod server temporarily, perform the migration, and re-enable everything once it is done.
Thanks for adding the transactions. I believe it's alright to care only about MySQL because that's what is being used in prod.

I am not comfortable deploying directly to prod though. What about keeping the DB backward-compatible by keeping the existing `pass` column unmodified and adding a new `pass_argon2` column? Once we've confirmed the changes are good in dev, we can deploy to prod, and then remove the old `pass` column.
Concretely, the new code should:
- Create `salt` and `pass_argon2` columns during the start-up table migration. The columns should be nullable because existing haxelib instances may still do user registration.
- Whenever there is a need to use passwords, it checks whether `pass_argon2` is null. If so, it rehashes and writes to `salt` and `pass_argon2`. If not, it uses `salt` and `pass_argon2` directly (see the sketch after this list).
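A sketch of what the check in the second step could look like; the `User` fields and the `generateSalt`/`argon2id` helpers are hypothetical placeholders, not the PR's actual code:

```haxe
// Hypothetical sketch of the lazy rehash. `md5Pass` is the MD5 hash the
// client still sends; generateSalt/argon2id stand in for whatever salt
// and Argon2id helpers the server ends up using.
static function checkPassword(user:User, md5Pass:String):Bool {
    if (user.pass_argon2 == null) {
        // Legacy row: verify against the old `pass` column...
        if (user.pass != md5Pass)
            return false;
        // ...then rehash and fill in the new columns for next time.
        user.salt = generateSalt();
        user.pass_argon2 = argon2id(md5Pass, user.salt);
        user.update();
        return true;
    }
    // Upgraded row: use salt and pass_argon2 directly.
    return user.pass_argon2 == argon2id(md5Pass, user.salt);
}
```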
Later, after the above is deployed to prod, a second-stage table migration (sketched below):

- Fill in `salt` and `pass_argon2` for all the remaining rows.
- Remove the old `pass` column.
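A hedged sketch of that second stage, reusing the hypothetical helpers from the sketch above; since the old `pass` column stores the client-side MD5 hash, the backfill can derive `pass_argon2` from it before the column is dropped:

```haxe
// Hypothetical second-stage migration, run only once prod is on the new
// code, so no old instance will write to `pass` anymore.
static function secondStageMigration(cnx:sys.db.Connection):Void {
    cnx.startTransaction();
    try {
        // Backfill rows that were never touched by the lazy rehashing.
        for (user in User.manager.search($pass_argon2 == null)) {
            user.salt = generateSalt();
            user.pass_argon2 = argon2id(user.pass, user.salt);
            user.update();
        }
        cnx.commit();
    } catch (e:Dynamic) {
        cnx.rollback();
        throw e;
    }
    // Only now is the legacy column safe to drop.
    cnx.request("ALTER TABLE User DROP COLUMN pass");
}
```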
```haxe
for (m in managers) {
    if (TableCreate.exists(m)) {
        hasOldTables = true;
    } else {
        TableCreate.create(m);
    }
}
```
@andyli Regarding what you said about multiple server instances, this may be a problem here, since checking whether a table exists and creating it is not done atomically. I'm not sure if this might end up being an issue if two instances both add conflicting `Meta` table entries, which is meant to be a table with a single field? If `TableCreate.create()` would fail in that case anyway, then I guess it wouldn't be a problem.
Yeah, I think the latter command will fail when there are two parallel `TableCreate.create()` calls. Ideally we should use `CREATE TABLE IF NOT EXISTS`... but well.

I agree it should be fine to let the latter one fail, since Apache/k8s will restart the instance.
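For what it's worth, that raw statement could be issued directly through the connection, bypassing `TableCreate` so the existence check and creation happen in one atomic statement; the `Meta` schema here is only a placeholder:

```haxe
// Hypothetical: let MySQL do the existence check atomically. The column
// definition is a placeholder, not the actual Meta table schema.
cnx.request("CREATE TABLE IF NOT EXISTS Meta ( dbVersion INT NOT NULL )");
```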
Implements argon2id hashing of passwords on the server, with automatic rehashing of the old MD5 hashes. The MD5 hashing on the client is kept as-is for now.
See #314.