Configuration design: Separate Access Management? #25
The ssh key has to be stored on the machine the management cli is running on. The cli user needs read access to the private key anyway. Why is it more dangerous to put the ssh password or private ssh key (reference) into the config than just keeping it in `~/.ssh`?
Alex said:
I assume that you refer to using ~/.ssh/config and ~/.ssh/id_rsa, right?
Sure, no inventions here - we directly use SSH keys! Just consider: did you read the comments regarding ssh keys in the document "Installation exhibition computers ESO/HITS/Imaginary"?
Eric said:
Explain the mechanism. The server has a private key and a public key, and there is no reason why it wouldn't have them. The stations need the public key in their authorized_keys list, and this is transferred just once in a secure manner, because it needs to be done using the station's password... so it makes no sense to have it in the cfg. Is there some other mechanism? Because anyway nothing but a public key should leave a system... the private key shouldn't be transmitted to or known by external systems.
The same way they'll install an OS, Docker, set up networking, a user, backups, OMD, etc. on 150 stations... This is well beyond the scope of our tool! There are countless tools for parallel administration / deployment. Setting up passwordless ssh access is a security-sensitive operation that is done just once, so it has no reason to be in the config. ... But how to set up passwordless ssh to 150 stations? With a list of 150 hostnames + their passwords + a script that transfers the server's public key to the authorized_keys file in the home of the user that will run docker, using ssh with the password. Then you delete the list of hostnames + passwords. Though, once again, we don't need to take care of this with hilbert.
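A minimal sketch of such a one-time script, assuming a hypothetical `stations.txt` with one "hostname password" pair per line, a `hilbert` login user, and the sshpass utility (none of these names come from the thread):

```bash
#!/bin/bash
set -euo pipefail

# One-time distribution of the server's public key to all stations.
while read -r host pass; do
  # ssh-copy-id appends the public key to ~/.ssh/authorized_keys of
  # the remote user, creating the file with safe permissions.
  sshpass -p "$pass" ssh-copy-id -i ~/.ssh/id_rsa.pub "hilbert@$host"
done < stations.txt

# Then destroy the password list, as described above.
shred -u stations.txt
```

After this runs, the server reaches every station with a plain `ssh hilbert@<host>` and no password, which is all hilbert needs.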
I don't have access to that document I think (just searched)... but if the idea completely compromises the security of stations and ssh credentials, then it doesn't matter. If I'm failing to understand something and this is indeed reasonable and perfectly secure, then consider that a prospective customer might have the same concerns I have and go away, so we need to be clear and avoid misunderstandings.
Normal scheme: To connect to the station from the server you have both a private key and a public key on the server, and an authorized_keys file on the station with a copy of your public key. Now, this is the relevant piece of cfg:
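(The original snippet did not survive; a hypothetical reconstruction of such a per-station entry, with all field names and values illustrative except `key_ref`, the field questioned in this issue, might be:)

```yaml
stations:
  station01:
    address: 10.0.0.101              # host name / ip
    ssh_user: hilbert                # remote login user
    key_ref: ~/.ssh/station01_rsa    # the disputed key reference
```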
Problems with this:
Alex said in previous posts this was a scheme to install the ssh keys on the stations, which I don't think is something our system should do, because it's not specific to it and it's just one of many deployment tasks.
@elondaits, you didn't answer my question, I think. You said above that we should never publicize a private key. But if the private key or a reference to it is in the config, it is not more public than a private key that is stored in `~/.ssh`. But I would agree that we could just leave out the `key_ref`.

@malex984 Why would you need to put the private key into the config in order to set up password-less ssh? If password-less ssh is not configured yet, you need a password to log in. Then you place the public key into the `authorized_keys` file.

Conclusion: Our system relies on properly set up password-less ssh access. Setting this up is a sysadmin's job.
@porst17 I didn't answer it because I think we're thinking of different things... you are talking about the server's private key (which is on the same machine as this cfg), but I assume that the private key associated with a station in the cfg is in fact the private key of the station (because the private key of the server is the same for all the stations, and because Alex said it was for deploying the ssh keys to multiple stations easily). A reference (path) to the server's private key doesn't expose anything, of course... my issue was with putting references to the stations' private key files, which suggests they're accessible by the server.

As for putting the server's private key in the file, yes, that could be a security problem... because the private key should be non-writable and only readable by the user that connects through ssh to the station... while the cfg file could be readable/writeable by a different user or group (especially if it's synced through djangoplicity).
It seems that for this project WE are going to generate a single ssh key pair, and during the general SW installation phase our public ssh key will be added to authorized_keys on each station by others (automatically via puppet). Let us consider the management back-end in more detail:
The above means that private ssh keys may be installed as a part of the initial Hilbert Server Configuration Package (which will also include all docker images and the Hilbert-Station CLI with an initial config for the server as a station). AFAIK the management container may easily access private keys that were previously installed on the server in many ways. But what about …?

On the other hand, I propose to specify which of those pre-installed private ssh keys (if there are several) is required to access which station (with all ssh connection details) in the general configuration file. No pre-installed … In this case the initial Hilbert Server Configuration Package will only pre-configure access to the CMS for downloading the general configuration file (which is required in any case).

PS: in the above I assumed that our servers will be running exactly the same low-level …

NOTE: in what follows I tried to describe my understanding of the general responsibilities:
NOTE: the 1st item above may also be performed by one of our SW packages (i.e. be part of the 2nd step).
NOTE: servers are stations with special configurations - they start: Registry, OMD Server, Management Dashboard, etc.

Configuration Update procedure: if triggered, the Management Dashboard should be able to:
@elondaits I think I have a better understanding of your remarks now.

@malex984 This is again a lot of text. Do you have any specific questions?

If you don't want to expose private keys to the management backend container, we could ask for ssh-agent running on the server and then forward the agent into the container. (Does not work on docker+mac right now, though. See docker/for-mac#483 and docker/for-mac#410.) If you really need …

Managing and deploying keys should not be our concern. This is the IT admin's job. To us, it doesn't matter if the keys are set up in advance by hand, using puppet or self-tailored shell scripts. Also, replication to a second backup server is not our task. The management of the ssh keys and ssh config can be part of the syncing. This should solve most of the issues you stated above. The sync script will be specific for each installation of hilbert. I don't intend to provide a generic solution for the syncing. Hilbert has some requirements. Hilbert does not care if the IT admin is fulfilling these requirements by hand, via puppet or via a hand-written bash script that syncs via some external CMS. Hilbert takes the host name/ip (…
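For illustration, forwarding an agent into a container could look like this on a Linux host (the image name `hilbert/backend` and the station user are hypothetical):

```bash
# Start an agent on the server host and load the deployment key,
# so the private key itself never enters the container.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Bind-mount the agent socket and point SSH_AUTH_SOCK at it.
# (This socket mount is exactly what breaks on docker+mac,
# per the linked issues.)
docker run --rm -it \
  -v "$SSH_AUTH_SOCK:/ssh-agent" \
  -e SSH_AUTH_SOCK=/ssh-agent \
  hilbert/backend ssh hilbert@station01 hostname
```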
Sorry, I will try to be as brief as possible. I think we are safe to assume that ssh key pairs for stations and access to the CMS are fixed and installed separately ahead of time, and they will probably never change later on, correct?
Either
NOTE:
Do you mean the Hilbert-Server getting them from the CMS? This is how it is done in the current prototype. I thought that was not appropriate for you guys?
I considered the case of a primary server crash: they power on a backup server - upon power-on it has to automatically download something from the CMS and become as good as the failed one. This seems possible to me only if the backup server has pre-configured access to an up-to-date backup of all data & SW packages & configurations. Therefore either:
When you say "ssh key pairs" it sounds like there are a lot of them. But it's just the private/public pair on the server. The public key has to be copied to the authorized_keys of every station... just that. If the server accesses the CMS then it's the same private key, with the public key copied to the CMS. If the CMS accesses the server then it will need to install its own public key on the server... but that's something outside the scope of hilbert.
AFAIR, CMS access was never discussed in this regard.
Yes, for the current project we can make do with a single key pair for accessing:
from within the container with the Hilbert management backend. Note: that ssh key pair will be installed on the server host system! I can see the following way for the management container to use it:
Currently I tend to read-only mount the server's `~/.ssh` into the container.
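A sketch of that read-only mount, assuming the backend runs as root inside the container (image and host names are again hypothetical):

```bash
# Expose the host user's ssh setup to the management container
# without write access. Note: ssh cannot update known_hosts on a
# read-only mount, so station host keys must be accepted beforehand.
docker run --rm -it \
  -v "$HOME/.ssh:/root/.ssh:ro" \
  hilbert/backend ssh hilbert@station01 hostname
```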
What do you mean by never? If a key pair has been compromised, I am sure IT admins will want to replace it. We shouldn't rule out the possibility of changing the login credentials. But what is your actual question, and is it relevant to the discussion?
If you want to put the info relevant for generating the … Your argument that maintaining …
It seems like you still have a misunderstanding here. The main point was that hilbert does not care how, or by whom, ssh access to the stations was set up. All that matters to hilbert is that it can access the keys and maybe the ssh config on the local machine (via mounting, ssh-agent or whatever).
I have no problem with hosting the data externally. All that I am saying is that hilbert relies on a working local copy of that data. And hilbert is not responsible for putting the data there nor for keeping it up to date.
Options 1. and 3. have the same permission problem. The files in `~/.ssh` must keep their strict permissions and ownership, which is hard to guarantee for the user inside the container. I am ok with option 1.
Question (out of ignorance): Why do we need to ssh INTO the docker container? Stopping and starting the station and changing apps is done through the host system. What actions does hilbert have to perform "inside" the container that require us to ssh into it? Can we do those things in some other fashion and just avoid the issue altogether?
@elondaits You are wrong! We only need ssh OUT of the docker container to the server host itself, since it is treated by the management backend as just another station. Therefore it may need to do the following on the server:
I'm not sure I understand your explanation, perhaps there's a word or two missing, but what I understand is the following:
Is this what you're saying? If so:
@porst17 Ok, I understand. Let me summarize:
Consequences for the General Configuration (provided via CMS) are as follows:
PS: I would make the …
@elondaits I hope my comment above explains the architecture. Here are some more details:
The docker container for the Hilbert Management Backend runs the Dashboard back-end, which uses …
NOTE: there is no need to install …
No. The Server Host itself is a station with a pre-installed Hilbert-Station CLI.
Yes, …
Yes, that is one of the commands.
@porst17 I totally support your proposal above. Just a few comments:
Decision recap:
For reference please check out the sample … Let us vote on that.

PS: I do not see why additional network aliases might be necessary... AFAIK we will be able to cope with them if required. I suggest we do not discuss them now.
I think the things being discussed and decided on now don't have much to do with the original issue, and the discussion has opened up to such a wide span of things that it's really impossible for me to understand what I'm voting on or what I'm quietly accepting "by default". It seems it all has to do with ssh and credentials, but we're talking about the ssh between the server and the stations, between the stations and the hosts, credentials and security, what goes or doesn't go in the configuration, etc.

Can we have a simple diagram that shows the server and three sample stations and that indicates what containers / components exist on each of those and what forms of communication are established between each? (e.g. the ssh connections as arrows that indicate who establishes them, etc.)

On the "decision recap" above:
But on a different note:
@elondaits @porst17 Here is the extended and updated system architecture diagram: 995afb3. Partial descriptions were already given in …
Which parts require more explanation?

Update: the following comment contains an outline of the general station start-up sequence.
Yes, unfortunately the management backend runs isolated inside a docker container - it cannot directly access the host system (execution only via ssh; data has to be mounted in advance). See the current architecture diagram with the Server and a Station (since stations differ only by the Services and Applications running on them).
All systems use exactly the same simple …
Conceptually Server is just another Station. Any Station may be turned into a Server by changing its configuration accordingly.
I do not understand: at closing time the Dashboard will stop all stations before shutting down the server. After that, one would not be able to remotely power on any stations (at least via the Dashboard, since it becomes unavailable) before starting the server again... The sequence is:
Update: this PowerOn/StartUp sequence has been added to the architecture diagram with c380a48.

NOTE: the above is part of the general system architecture design (hilbert/hilbert-docker-images#28)
Now back to access management: according to the proposals by @porst17 and @elondaits, we may choose to assume that …
Note:
Good, it is decided then!
Well, this is basically my initial proposal to have all ssh details like … BUT then it was suggested in #25 (comment) (if I understood @porst17 correctly) to remove all those details from the general YAML configuration and rely on …

Please vote on:
@malex984 @elondaits I leave the decision to you. My two cents are: contents of `~/.ssh/config` are out of our control.
Yes, as this is out of our control, admins may also configure some things (e.g. default settings) in `/etc/ssh/ssh_config`.

Currently I can see only one way to re-enable the mentioned default overwriting routine within a container: to forget …

QUESTION: is there any other way to re-enable the mentioned default overwriting routine?

Well, currently we do not really use …

Please let me list all discussed points so far concerning where to put ssh access details:
I would really prefer if only one of …
With both solutions, values given in the hilbert config can be used to overwrite the main host's defaults. The second option may require more work, but may be more in line with our restart-hilbert-to-update-config philosophy.
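Assuming the second option resolves the host's defaults via OpenSSH's `ssh -G` (my reading of "the command" mentioned below; the thread does not confirm this), that lookup would be:

```bash
# Print the fully resolved client configuration for a host alias
# (OpenSSH >= 6.8); the defaults come from ~/.ssh/config,
# /etc/ssh/ssh_config and OpenSSH's built-ins.
ssh -G station01 | grep -E '^(hostname|port|user|identityfile) '
```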
@porst17 I 👍 the 2nd solution! It sounds a lot like a special … See … for details.
The 2nd solution still requires someone to configure which port and user to use for each host (otherwise the command outputs the defaults for the user it's running under), so I'm not sure we're gaining anything. Also, my original comment's intention was "we should not ruin security by doing security-sensitive stuff without being experts"... but right now we're adding a new vector, namely:
So I would go with "the simplest thing that will work", which in this case is Solution 1.
@elondaits Your security-related concerns are only valid in the context of @malex984's implementation. It is not a problem of approach 2 in general. Also: everybody knows the paths to ssh keys on a machine. The paths are …
Yes. That's why I don't understand why we need a more complex solution than the first one.
Yes. But it's a more complex solution that requires manipulating possibly sensitive information. The first idea of how to implement it had problems... that's why I think we should aim for the simplest. Especially since we're allowed to be restrictive to the users, because this is a dedicated server with dedicated stations.
I agree.
No. That does not follow from the above. You have to be sure nobody else guessed and created the path with extra permissions, for instance. You have to set the umask before creating the files so they're not readable/writeable by others. I don't claim these are unsolvable problems... just that it's not the same, and since it's not the same we have to think of the ways in which it's not the same. And that's the extra complexity we should not force ourselves to worry about by making things harder than the easiest possible.
Yes. And so our spec or code (and our tests) would need some notice/comment that says "the temp path should be secure and only readable by the user, don't use the /tmp directory, etc." to reflect this idea we had right now, because we're thinking more than 30 seconds about the problem... otherwise someone might change it and it's all for naught... If we can do it simpler, we can code with fewer concerns.
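For what it's worth, a hedged sketch of how those temp-path concerns can be met (paths, image and host names illustrative):

```bash
# mktemp -d creates the directory with mode 0700, which rules out
# both the guessed-path and the umask pitfalls mentioned above.
tmpdir=$(mktemp -d)
install -m 600 ~/.ssh/id_rsa "$tmpdir/id_rsa"

docker run --rm -v "$tmpdir:/keys:ro" \
  hilbert/backend ssh -i /keys/id_rsa hilbert@station01 hostname

rm -rf "$tmpdir"
```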
Well, the current problem is: ssh access is pre-configured on the Server host (e.g. in `~/.ssh/config`). In #25 (comment) @porst17 showed how to find out all possible references (hopefully accessible by that SPECIAL_USER) to outside files, so that they can all be collected together in a single (temporary but well-secured) place to be mounted into the container.

Update: another way to solve this issue is to proxy ALL ssh connections from within the docker container via the Server Host, e.g. to access any station …
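Such proxying is what OpenSSH calls a jump host; a sketch under assumed user and host names:

```bash
# From inside the container, hop through the server host
# (OpenSSH >= 7.3); authentication to the station still happens
# end-to-end, e.g. via a forwarded agent:
ssh -J hilbert@hilbert-server hilbert@station01 hostname

# Nested variant: let the server host (which holds the station
# keys) make the second hop, so no station key enters the container:
ssh hilbert@hilbert-server ssh hilbert@station01 hostname
```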
I understand, but it's not a huge problem. The hilbert server is a dedicated server that must run under whatever OS we support and with the dependencies we need. If we indicate we're only going to mount /etc/ssh and ~/.ssh in the container, then it's not a huge problem. And if someone, for some good reason, really needs to mount extra cfg files or directories, he can add them to the server Dockerfile. And then we can see if we should add that ourselves as well. A more complex auto-detect feature is non-trivial to develop, needs extra tests, extra docs, and requires us to be careful with security. And it's REALLY likely no one will ever need it or use it.
Do you mean …?
We already use it a lot in the prototype, where they are part of configuration files which include scripts. Clearly, there is no need to run any auto-detection on the host system if ALL services and applications were previously containerized by us. The very trivial case of using …

Furthermore:
PS: @elondaits please remember the main topic of this issue.
By auto-detection I meant @porst17's second solution.
@malex984 Your …

I agree with @elondaits that solution 2 might cause problems we are not aware of, because it tries to deal with all possible OpenSSH configurations. Therefore, I think we should stick to method 1 if @malex984 doesn't come up with a well thought-through solution for option 3. Option 1 seems easiest to implement and to maintain for now. Method 1 can be extended in the future by utilizing ideas of method 2: mount …

I am curious to see if option 3 can work, but please don't post half-baked ideas.
@porst17 Here is a working proof-of-concept script. It runs on a host, takes an ssh alias (known to the current user) as its argument and demonstrates all the necessary steps to access …

Conceptually its only benefit is that the actual proxy may be anywhere => admins may choose a separate (but accessible) host to contain the whole Access DB...

Therefore: I vote for option 1.
OK, let's implement option 1 then. Your proof-of-concept for option 3 looks very promising, and we may implement it in the future if necessary and if we all agree that it doesn't open additional security holes.
Currently … Mounting …
@porst17 DONE: …
Ok, we can reopen this issue if we find out that the current implementation isn't good enough.
Note: this is a separation of super-issue #13
Station: 4.5 Why do we need `key_ref`? If you configured passwordless ssh then you only need the username to connect. The correct key is picked by ssh.

Station: 4.6 Also, once again, I really don't understand why you'd ever publicize a private key, and if I can't understand it and you can't explain it to me, then you'll have a harder time explaining it to a customer or interested party. We shouldn't "invent" our own security mechanisms but instead use things already in place.