SSH issue after running devsec.hardening.ssh_hardening role #854
Hey @jobetinfosec, we would appreciate it if you used the provided template for reporting issues. Which version of our collection are you using? Since this bug was fixed in 10.0.0 (more specifically #784), it should not happen anymore.
Hi @schurzi
Interesting. What does the task below output on your system?

TASK [devsec.hardening.ssh_hardening : Ensure privilege separation directory exists]
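(For context: that task normally just creates sshd's privilege separation directory. A rough sketch of what such a task looks like, paraphrased rather than the role's exact code:)

- name: Ensure privilege separation directory exists
  ansible.builtin.file:
    path: /run/sshd    # sshd chroots here before authentication
    state: directory
    owner: root
    group: root
    mode: "0755"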
I think I found the culprit...
I am glad you solved the issue for your case. I consider failures that lead to an inaccessible server very serious, so I'd like to understand how you arrived at this problem. I tried several ways to replicate this issue with my test servers, but I could not reproduce it. Can you describe a bit more clearly how I can trigger this problem?
Hi @schurzi
Hi @schurzi. However, testing it again on another server, this time using an Ansible playbook, a further issue came up...
The Ansible playbook I used simply updates and upgrades system packages, adds 3 sudo users, and installs a few basic packages:
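(Reconstructed sketch of that playbook; the usernames and package list below are placeholders, not the originals.)

---
- name: Base server setup
  hosts: all
  become: true
  tasks:
    - name: Update and upgrade system packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Add sudo users (placeholder names)
      ansible.builtin.user:
        name: "{{ item }}"
        groups: sudo
        append: true
        shell: /bin/bash
      loop:
        - alice
        - bob
        - carol

    - name: Install a few basic packages (illustrative list)
      ansible.builtin.apt:
        name:
          - curl
          - git
          - htop
        state: present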
Any idea?
@schurzi BTW, if it can be of any help, this is the Ansible version I'm currently using:
I'm experiencing a similar issue on a DigitalOcean droplet (512 MB Memory / 10 GB Disk / SFO3 - Ubuntu 24.04 LTS x64) while running as root. My playbook runs fine but fails during SSH hardening.

Root cause update: after further testing, I've found that using these two roles together (geerlingguy.docker and devsec.hardening.ssh_hardening) is what breaks SSH access.

The culprit appears to be this line, which "Resets the ssh connection to apply user changes." This reset conflicts with the SSH hardening configuration, effectively locking out access to the server.

To reproduce: create a minimal playbook that includes both roles (in any order), and the server will become inaccessible after execution.

---
- name: Example
  hosts: example_host
  become: true
  roles:
    - role: geerlingguy.docker
    - role: devsec.hardening.ssh_hardening
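(For anyone trying to locate it: as far as I can tell, the referenced task in geerlingguy.docker is a connection reset along these lines, paraphrased rather than the role's verbatim code.)

- name: Reset ssh connection to apply user changes.
  ansible.builtin.meta: reset_connection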
I need to solve this issue. Did you manage to replicate this error somehow?
FYI, we have also encountered issues when integrating devsec.hardening.ssh_hardening on Ubuntu 24.04. It turned out that in our base 24.04 image, openssh was still at version 1:9.6p1-3ubuntu13.5, and there are important fixes to the socket activation in the later version, 1:9.6p1-3ubuntu13.6. We now apply some tasks to upgrade openssh-server before running the hardening role.
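(Concretely, something along these lines; an illustrative sketch rather than our exact tasks, and state: latest stands in for whatever version pinning you prefer.)

- name: Upgrade openssh before hardening
  hosts: all
  become: true
  pre_tasks:
    - name: Refresh the apt cache
      ansible.builtin.apt:
        update_cache: true
    - name: Upgrade openssh-server to pick up the socket activation fixes
      ansible.builtin.apt:
        name: openssh-server
        state: latest
  roles:
    - devsec.hardening.ssh_hardening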
Sorry, I am currently swamped with other tasks and will not get to work on this in the next few weeks. I believe the comment from @thomasgl-orange might have the solution in it. I want to verify this, and then we could include an update task in our role. I am not sure, however, what else needs to be done besides the update. We will need to test whether we also need to reconnect the Ansible ssh session and reload systemd, and how this should be ordered with our config changes.
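(If the update route works, the ordering to test might look roughly like this; an untested sketch, with the module choices being my assumptions.)

- name: Upgrade openssh-server first
  ansible.builtin.apt:
    name: openssh-server
    state: latest
- name: Reload systemd so the updated socket units are picked up
  ansible.builtin.systemd:
    daemon_reload: true
- name: Reset the Ansible ssh connection before applying config changes
  ansible.builtin.meta: reset_connection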
FYI: we run a daily image build pipeline with Ubuntu 24 and also get the same error, but only in about 30% of cases; the error often does not occur again on a new run.
Hi @schurzi
Same on my end... Killing the sshd / ssh processes using the provider console fixed the issue. Ubuntu 24.04 (Hetzner). Not sure it makes a difference, but I am using:
FWIW, this is indeed something I can anecdotally confirm as well. Maybe not the exact details, but running
I ran this role against a freshly installed Ubuntu 24.04 server, and at the end, the following error showed up:
fatal: [domain.tld]: FAILED! => {"changed": false, "msg": "Unable to start service ssh: Job for ssh.service failed because the control process exited with error code.\nSee \"systemctl status ssh.service\" and \"journalctl -xeu ssh.service\" for details.\n"}
Via a dashboard console, I managed to log in as the root user and check the logs:
fatal: chroot ("/run/sshd"): No such file or directory [preauth]
How may I fix this?