
Permissions 0777 for host keys are too open - sshd terminates #81

Closed
1 task done
phantomski77 opened this issue Jan 18, 2024 · 9 comments

@phantomski77

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After creating a new openssh-server container, I couldn't connect to the SSH server with either the key or the password. The ssh client responded with an error: kex_exchange_identification: read: Connection reset by peer

Opening a terminal in the container, I could see that sshd wasn't running and that no service was listening on port 2222. The sshd log contained multiple errors, one for each host key file:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for '/etc/ssh/ssh_host_ecdsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.

And ending with:

It is required that your private key files are NOT accessible by others.
This private key will be ignored.
sshd: no hostkeys available -- exiting.

Indeed, checking the permissions of the host key files in /etc/ssh (or /config/ssh_host_keys) showed that they had all been set to 0777.

After changing them to the recommended 0600 (read/write for the owner only) with chmod 0600 ssh_host* and restarting the container, everything worked as it should.
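The fix can be sketched as a small helper, run either inside the container or against the host-side copy of the volume. The function name is made up here; the /config/ssh_host_keys path is the one mentioned above, so adjust it if your volume is mapped elsewhere:

```shell
# Hypothetical helper: tighten host key permissions the way sshd expects.
fix_hostkey_perms() {
  dir="$1"
  [ -d "$dir" ] || { echo "no such directory: $dir" >&2; return 1; }
  chmod 700 "$dir"                  # directory: owner-only
  chmod 600 "$dir"/ssh_host_*      # key files: read/write for owner only
  stat -c '%a %n' "$dir"/ssh_host_*   # verify: each file should report 600
}

# e.g. fix_hostkey_perms /config/ssh_host_keys
```

Note this also sets the .pub files to 0600, matching the chmod 0600 ssh_host* glob used above; sshd only insists on the private keys being unreadable by others.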

I don't know whether the host keys are generated with those permissions outright, or whether it's a combination of my environment (Synology NAS, where I run the container) and the PUID/PGID of the user I set it to (a strictly limited-access user), but this was the result of a clean run from the latest image (sha256:098b5b04ceb2c43ced28a89ac27bfc073a5c806b96e07a64cbe744584994486e).

Expected Behavior

The container should start with the host keys permissions correctly set to 0600.

Steps To Reproduce

  1. Synology NAS running DSM 7.2.1-69057
  2. Create new project in the Container Manager
  3. Import or create new compose.yaml file with environment variables set as desired
  4. Try to connect

Environment

- OS: Synology NAS running DSM 7.2.1-69057
- How docker service was installed: DSM Package Center

CPU architecture

x86-64

Docker creation

services:
  openssh-server:
    image: lscr.io/linuxserver/openssh-server:latest
    container_name: borg_openssh-server
    hostname: openssh-server #optional
    environment:
      - PUID=<user>
      - PGID=<group>
      - TZ=Europe/London
      - PUBLIC_KEY=<key>
#      - PUBLIC_KEY_FILE=/path/to/file #optional
#      - PUBLIC_KEY_DIR=/path/to/directory/containing/_only_/pubkeys #optional
#      - PUBLIC_KEY_URL=https://github.com/username.keys #optional
      - SUDO_ACCESS=true #optional
      - PASSWORD_ACCESS=true #optional
      - USER_PASSWORD=<password> #optional
#      - USER_PASSWORD_FILE=/path/to/file #optional
      - USER_NAME=<user> #optional
      - LOG_STDOUT= #optional
    volumes:
      - <path>:/config
    ports:
      - <port>:2222
    restart: unless-stopped

Container logs

See above

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

@thespad
Member

thespad commented Jan 18, 2024

I suspect this is an issue specific to Synology: its ACLs sit on top of an underlying POSIX permission mode of 0777, and the container isn't aware of the host ACLs, so all it will see is the 0777 permissions mask.
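For anyone curious about the mechanism, here is a minimal, self-contained demonstration (not Synology-specific; it assumes setfacl/getfacl from the acl package and skips that step if they're unavailable) of how a POSIX ACL inflates the mode that stat reports:

```shell
f=$(mktemp)
chmod 600 "$f"
mode_before=$(stat -c '%a' "$f")
echo "plain mode: $mode_before"                   # 600

# Grant an extra user access via an ACL (skipped if acl tools are missing
# or the filesystem rejects ACLs):
if command -v setfacl >/dev/null 2>&1 && setfacl -m u:daemon:rw "$f" 2>/dev/null; then
  mode_after=$(stat -c '%a' "$f")
  echo "mode with ACL: $mode_after"               # the group bits now show the ACL mask
  getfacl --omit-header "$f"                      # the real per-user entries live here
fi
rm -f "$f"
```

Once ACL entries exist, the group permission bits in the classic mode display become the ACL mask, so tools (and sshd) that only look at the POSIX mode see far wider permissions than any single user actually has.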

@delner

delner commented Jan 28, 2024

Just FYI, this is also happening on QNAP (QuTS Hero) when run in its Container Station (which is just a wrapper around Docker).

Modifying the permissions in the config volume does not work. Executing within the container itself I can see:

[Screenshot from 2024-01-28: permissions of the host key files as seen inside the container]

These files seem to be generated/contained within the Docker container env itself? Not sure how QNAP/Synology ACLs come into play...

@thespad
Member

thespad commented Jan 28, 2024

/config is a persistent mount that lives on the host filesystem, where the ACLs are in effect but invisible to the container; all the container sees are the POSIX permissions, which, because of the ACLs, are not what it expects. The files in /etc/ssh are just symlinks to the actual files in /config.
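A quick self-contained illustration of why the symlinks matter here: permissions live on the target, not on the link, so sshd (which follows the symlink in /etc/ssh) sees whatever mode the file in /config carries:

```shell
# Create a "host key" with the bad permissions, then a symlink like the
# ones in /etc/ssh (paths here are throwaway temp files, not the real ones).
tmp=$(mktemp -d)
touch "$tmp/real_key"
chmod 777 "$tmp/real_key"                 # what an ACL-backed mount exposes
ln -s "$tmp/real_key" "$tmp/link_key"

stat -Lc '%a' "$tmp/link_key"             # follows the link: prints 777

chmod 600 "$tmp/real_key"                 # fix the target, not the link...
stat -Lc '%a' "$tmp/link_key"             # ...and the link now reports 600
```

This is also why chmod-ing inside /etc/ssh and chmod-ing inside /config amount to the same thing: both operations land on the same target file.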

@delner

delner commented Jan 29, 2024

Ah, I see, I didn't know about the symlinks (makes sense). To work around the problem, I changed some permissions via a shell to make progress. I'll share what I did and what I observed when I get the chance.

@LinuxServer-CI
Contributor

This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.

@thespad closed this as not planned (won't fix, can't repro, duplicate, stale) on Feb 29, 2024
@viker81

viker81 commented Mar 21, 2024

I would like to add here (mostly for my own reference) that I ran into this same issue while running under WSL. It took a very long search and a lot of digging to find a solution. My problem was that under WSL, permission changes made with chmod on the Windows filesystem are not persisted. You can enable this by setting the metadata automount option, which allows chmod changes to be remembered.

The steps I took to fix this were:

  • Open a WSL terminal and elevate permissions:
sudo -i
  • Edit the distro's wsl.conf:
vi /etc/wsl.conf
  • Add the section below:
[automount]
options = "metadata"
  • Exit the terminal and stop the running WSL instance (in my case):
wsl --terminate ubuntu
  • Wait a few seconds and open a new terminal.
  • Navigate to the location where the config and the ssh_host_keys are stored, then run:
chmod 700 ./ssh_host_keys
chmod 600 ./ssh_host_keys/ssh_host*

to set the right permissions for sshd to start.
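After (or before) enabling the metadata option, a quick probe tells you whether chmod is actually honored on a given mount; on a WSL drvfs mount without metadata, the mode silently stays wide open. This is only a sketch (the probe file name is throwaway); run it in the directory that backs /config:

```shell
probe="./.chmod_probe.$$"
touch "$probe"
chmod 600 "$probe"
mode=$(stat -c '%a' "$probe")
rm -f "$probe"

if [ "$mode" = "600" ]; then
  echo "chmod honored (mode=$mode): permissions will persist here"
else
  echo "chmod ignored (mode=$mode): enable the metadata automount option first"
fi
```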

@aptalca
Member

aptalca commented Mar 21, 2024

With WSL, we recommend putting config folders on the local Linux filesystem, not on Windows or remote mounts, so they don't go through an abstraction layer that can and does break things.


This issue is locked due to inactivity

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 21, 2024

No branches or pull requests

6 participants