xthread ([personal profile] xthread) wrote 2006-01-23 10:10 am

Centralized System Configuration Management

Or, Pull Model is bad, M'Kay?

Should you have occasion to design a system to manage the configurations of a large number of computers or networking devices, you will realize a number of things quickly:
  1. You can make mistakes really quickly. If someone (including one of your administrators) puts bogus data into the system, the system itself can misconfigure a large number of the systems under management in very little time.
  2. You will want the hosts/devices under management to pull their configurations from the central server, because this allows you to configure more of the systems, including system binaries, applications, user data, etc. It's also less work, and it makes more use of existing off-the-shelf software without additional modification (see the sketch after this list).
  3. The centralized server will end up having the keys to the kingdom. You will want to distribute system binaries and user credential data from the central server to the clients, in which case you now have a single system that can be used to horribly break your network (if someone puts bogus data into it by accident) or very, very efficiently attack it (if someone puts bogus data into it on purpose).
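To make that second point concrete, here's a minimal sketch (in Python; the server name and file paths are invented for illustration) of the kind of client-side pull it implies. Note that every client in the fleet needs to be able to reach the central server, which is where the trouble starts.

    import socket
    import urllib.request

    CONFIG_SERVER = "https://config.example.com"    # hypothetical central server
    LOCAL_CONFIG = "/etc/myapp/managed.conf"        # hypothetical managed file

    def pull_config():
        # Ask the central server for this host's config and apply whatever comes back.
        # The blind trust here is what lets mistakes (point 1) propagate so quickly.
        url = f"{CONFIG_SERVER}/configs/{socket.gethostname()}"
        with urllib.request.urlopen(url, timeout=30) as resp:
            new_config = resp.read()
        with open(LOCAL_CONFIG, "wb") as f:
            f.write(new_config)

    if __name__ == "__main__":
        pull_config()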

However, that second desire, that the clients pull from the server, puts the server at risk of attack by a client (or by something claiming to be a valid client). This is no fun. In particular, it leads to you wanting to give protocols with notoriously weak security models, such as TFTP and DHCP, direct, untrammeled access to your centralized server. The one with the keys to the kingdom. The one where your admins will be helpfully setting the standard passwords that can get into everything, because there are only so many passwords people can keep track of before they start printing up little cards for their wallets listing all of the passwords for the secure systems back at the office [Ed's note: That's a bad thing].
So what to do? Push instead of pull. Have the out-of-the-box config on the client be just smart enough that it can allow the server to connect to it [Ed's note: There are more sophisticated refinements one can make to this, but that would be a much longer article]. If someone hacks the central server itself, then they own the network, but you bought that farm when you decided to do centralized management (and presumably you decided to do centralized management because you have a lot of systems to manage and want them all to behave predictably). The centralized server can get into the clients and hand them configuration information, which they may or may not respect, but the server itself is no longer at risk from all of the things that it's managing.

And that's a good thing.
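For concreteness, here's a minimal sketch of the sort of "just smart enough" out-of-the-box client described above. It's Python, the server address, port, and managed file path are all invented, and the IP check is only a stand-in for real cryptographic authentication of the server (TLS with a pinned certificate, say).

    import socketserver

    MANAGEMENT_SERVER_IP = "192.0.2.10"       # hypothetical; the only peer allowed in
    LISTEN_PORT = 8444                        # hypothetical
    LOCAL_CONFIG = "/etc/myapp/managed.conf"  # hypothetical managed file

    class PushHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Ignore anyone who isn't the management server. A real client would
            # authenticate the server cryptographically rather than trust an IP.
            if self.client_address[0] != MANAGEMENT_SERVER_IP:
                return
            new_config = self.rfile.read()
            with open(LOCAL_CONFIG, "wb") as f:
                f.write(new_config)
            # Applying the config (restarting services, etc.) would go here; the
            # client may or may not choose to respect what it was handed.

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", LISTEN_PORT), PushHandler) as server:
            server.serve_forever()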

(Anonymous) 2006-01-23 08:06 pm (UTC)
Push is indeed better, but far from perfect. Instead of a single point of failure, push gives you as many points of failure and vulnerability to attack as there are systems, scripts, and admins capable of pushing out the configs.

Better yet is rendezvous, where an admin must authenticate at the client before providing authorization for a specific central server to push one or more config files, one time only. Certainly this can be done in a script, and it allows different clients to have different passwords if you want that, or find that you need it.
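As a rough illustration only (file location, server name, and time limit are made up, and authenticating the admin and the server is left out entirely), the client side of such a rendezvous might be as small as a one-shot grant that an admin arms and the push handler consumes:

    import json
    import os
    import sys
    import time

    ARM_FILE = "/var/run/config-push-grant.json"  # hypothetical

    def arm(server_name, ttl_seconds=600):
        # Run by an admin who has already authenticated to this client: allow one
        # push, from one named server, for a limited time.
        with open(ARM_FILE, "w") as f:
            json.dump({"server": server_name, "expires": time.time() + ttl_seconds}, f)

    def consume_grant(server_name):
        # Called by the push handler; returns True at most once per arming.
        try:
            with open(ARM_FILE) as f:
                grant = json.load(f)
        except FileNotFoundError:
            return False
        os.remove(ARM_FILE)  # one time only, whether or not the check passes
        return grant["server"] == server_name and time.time() < grant["expires"]

    if __name__ == "__main__":
        arm(sys.argv[1] if len(sys.argv) > 1 else "config-master.example.com")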

No avoiding the trade-off between ease and risk.

[identity profile] xthread.livejournal.com 2006-01-23 08:16 pm (UTC)
Unfortunately rendezvous scales truly badly.

[identity profile] woody77.livejournal.com 2006-01-24 01:24 am (UTC)
Most broadcast request/response protocols scale truly badly.

By a push setup, do you mean a voluntary push or an involuntary push? A push where the client is designed to simply receive data and use it? It seems like in that case you still have to worry about a client that gets breached in a way that lets it stop listening to the config server's commands. It could then use those commands as a way to locate, and start attacking, the command servers.

With the pull scenarios, are you sure you're not confusing the methodology with the current implementations of it? Clients can be installed with a bootstrapper that allows them to use a much more robust mechanism for determining their configuration than just tftp'ing a file. And having recently written a TFTP server, I know that the protocol (like DHCP) really has no security. It boils down to whitelists and blacklists in either case.

But you could easily implement a system that bootstraps off of TFTP and then starts running code that supplies a more robust method of pulling config data. However, you still have the case of rogue client hardware coming up, running the TFTP exchange in a sandbox, and then taking apart the bootstrap program. You'd be in a situation very much like open source, with no obscurity to hide behind, and since things like MAC addresses can be spoofed, a client could be fully spoofed with the right sandbox.
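For illustration, that second stage might look something like this rough sketch (server name, paths, and certificate location are all invented, and the TFTP fetch itself is assumed to have already happened). Pinning a CA that ships inside the bootstrap image guards against a spoofed server, but does nothing about the spoofed-client problem above.

    import socket
    import ssl
    import urllib.request

    CONFIG_SERVER = "https://config.example.com"   # hypothetical
    PINNED_CA = "/bootstrap/config-ca.pem"         # hypothetical; shipped in the bootstrap image
    LOCAL_CONFIG = "/etc/myapp/managed.conf"       # hypothetical

    def second_stage_pull():
        # Trust only the CA baked into the bootstrap image, so a spoofed server
        # can't simply answer the request.
        ctx = ssl.create_default_context(cafile=PINNED_CA)
        url = f"{CONFIG_SERVER}/configs/{socket.gethostname()}"
        with urllib.request.urlopen(url, context=ctx, timeout=30) as resp:
            data = resp.read()
        with open(LOCAL_CONFIG, "wb") as f:
            f.write(data)

    if __name__ == "__main__":
        second_stage_pull()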

Then, it seems, the tack to take would be to ask: even if a client were being run spoofed by a rogue entity, what damage could it do? What are you pushing to it that it could use to attack other computers or the server?

[identity profile] xthread.livejournal.com 2006-01-24 01:40 am (UTC)
That is an excellent set of comments; sufficiently so that I need to actually compose a response instead of just dashing one off. I'll try to do so tonight; if not, we should natter some on the subject next time we're in the same place...

[identity profile] woody77.livejournal.com 2006-01-24 05:43 am (UTC)
*nods*, but when we don't have non-techies around to bore stiff...

[identity profile] xthread.livejournal.com 2006-01-24 06:44 am (UTC)
Yeah, I felt bad about that

[identity profile] evilcyber.livejournal.com 2006-01-24 08:00 pm (UTC)
Fascinating. :) I wouldn't mind chatting with you about this at some point. I can definitely speak to pain points and tradeoffs.