So basically... Stick all your configuration in source control and you can check out old versions when you screw it up?
Git push origin undo-my-last-disaster
"I'm about to make a change that will probably wipe out all of our systems." That was the message a developer once delivered to Alexis Richardson, CEO of Weaveworks, just prior to an attempted system update at his company. Richardson, who also serves as the Technical Oversight Committee Chair for the Cloud Native …
COMMENTS
-
-
Thursday 17th May 2018 13:20 GMT Voland's right hand
There was a system to do that using CVS for Cisco and other network gear as far back as 1999.
I remember setting it up in two jobs, and swearing madly that I could not use it in the third, where HP had screwed up the set term length command and paging in a way that made getting the full config via the terminal impossible.
The approach works fine if the config is human readable so you can see what has changed on the diff. If you are diffing things like machine generated json or xml you have already lost the battle.
-
Thursday 17th May 2018 18:23 GMT Crypto Monad
> There was a system to do that using CVS for Cisco and other network gear as far back as 1999.
Rancid - the HP bug should now be fixed. Or there's Oxidized.
But this isn't really "gitops". It's just sucking down the configs: if you make a screw-up, it's up to you to upload or apply the right config changes to bring it back into sync. Nobody likes rebooting routers and switches.
-
-
Thursday 17th May 2018 14:35 GMT Lee D
I've been doing that in everything from CVS, BZR, SVN, Git for years.
It just seems common sense and a natural part of change management - revision control and rollback.
It's one of the first things I do on a new Linux machine that I'm about to tinker on, over the whole /etc/ folder.
And then every change you make, you can do an "svn commit --message 'Why I am doing this'" or equivalent.
Sure, rollback isn't super-automated and amazing but there's no reason I couldn't make it so.
If I was that bothered, I'd have it auto-commit once a day too, just in case I forgot to do so. That's one cron-job which - if there was no change - literally doesn't take a single byte extra in the repo as it won't bother to commit. And, hey, that cron-job is also subject to the same revision control...
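That daily auto-commit job can be sketched like this, using git; the function name, commit message, schedule and the /etc path in the crontab line are illustrative assumptions, not anything from the comment above:

```shell
# autocommit: commit a version-controlled directory only if something
# actually changed, so a no-change day costs the repo nothing extra.
autocommit() {
    dir="$1"
    cd "$dir" || return 1
    # Clean tree, clean index and no untracked files? Do nothing at all.
    if git diff --quiet && git diff --cached --quiet && \
       [ -z "$(git ls-files --others --exclude-standard)" ]; then
        return 0
    fi
    git add -A
    git commit -q -m "auto-commit $(date +%F)"
}
# Example crontab entry (hypothetical path):
# 0 3 * * * /usr/local/sbin/autocommit /etc
```

The same shape works with `svn status`/`svn commit` if the checkout is Subversion rather than git.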
When even the little guys like me have that, and things like VM snapshotting, replication and rollback, it's actually quite disappointing to realise that someone running the bigger things thinks that this is somehow amazing.
Now... if you wrote some code that automatically detected downtime, attributed it to a recent commit, and auto-rolled-back without human intervention or losing data... now THAT I'd be impressed by. But not much.
-
Friday 25th May 2018 09:20 GMT errordeveloper
I agree with you, and actually, the point is that Kubernetes is the key enabler here.
Kubernetes uses containers, which is important for isolation, but also for packaging, and it helps a lot to ensure that what you've checked into git is what will be running. Besides that, it also constantly reconciles the configuration, and unlike traditional config management systems, which apply a change once and move on, it continuously converges the actual state to the declared one.
Additionally, to your point about automatically detecting the downtime – we do this with Prometheus.
-
-
Thursday 17th May 2018 15:58 GMT Munchausen's proxy
"So basically... Stick all your configuration in source control and you can check out old versions when you screw it up?"
It's actually more than that. The point (I think) is that you can automate that process almost completely, so you can commit a change, press a button, and the change works its way through your machine farm, paying attention to which machine does exactly what thing, and therefore needs exactly which change, and so on.
Of course, if you can automate it, you can give the button to someone who doesn't understand the inner workings, doesn't have a good model of those machines and their relationships, and is all too willing to push the button because, hey, it's automated - the button knows all that stuff.
What can go wrong?
As I understand it, there's a reason U.S. Navy submarines have so many people on board. They think it's better to have humans who know stuff in the loop than to have a fully automated system with catastrophic failure modes.
-
Thursday 17th May 2018 13:32 GMT Charlie Clark
The illusion of control
If you're willing to cede control to Kubernetes
For Kubernetes read also Ansible, Salt, Puppet, Chef, etc. But you should never cede control to these systems; you should always just be delegating.
Anyone using these systems without some form of VCS is going to be in trouble. But, of course, putting configuration information, including credentials, into a VCS brings its own problems with it.
-
Thursday 17th May 2018 13:44 GMT Anonymous Coward
Nothing new here...
You really don't need anything fancy to set this up. Simply using a Git repository where each branch contains a specific configuration and is provided through a worktree is more than enough. I've been using this system for quite a while now, blogged about it here, and it works like a charm.
Only once you try to set up something like this do you fully realize just what an amazing product Git actually is and all the weird stuff it can do. I'm still a little hyped about (for example): git push -u remote-repo +HEAD:refs/heads/newbranch. This pushes your current HEAD to the remote repository as a new branch (the leading + forces the update), and sets your local branch to track that remote branch too. Try doing that with tools such as Subversion :)
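The branch-per-configuration layout described above can be sketched with git worktrees; the branch and directory names here are illustrative assumptions, and the commenter's own blogged setup may differ:

```shell
# One branch per configuration, one worktree per branch: each checkout
# is a live directory a service can read its config from.
cd "$(mktemp -d)"               # scratch area for the demonstration
git init -q configs
cd configs
echo "base config" > app.conf
git add app.conf
git -c user.name=demo -c user.email=demo@example.com commit -q -m "base config"
git branch staging
# Materialise the staging branch as its own directory alongside the repo.
git worktree add ../configs-staging staging
```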
-
Thursday 17th May 2018 14:40 GMT Lee D
Re: Nothing new here...
Yep.
And I imagine it's not that difficult to "patch" a branch to include a configuration item, and then pull that patch into all the similar configurations to solve problems globally.
Sure, you wouldn't be able to guarantee to blanket-remove every instance of a conflicting config, but you could push 99.9% of the problem out of the way with one commit.
I do still wonder, though, why configuration is random files of plain-text and not database-driven for almost anything. Because then this is literally a transaction you can roll back at any point, and you could do things like:
UPDATE Apache_sites SET ..... WHERE SSL = 'enabled' AND SSL_private_key_expiry < NOW(); etc.
-
Friday 18th May 2018 09:47 GMT Charlie Clark
Re: Nothing new here...
I do still wonder, though, why configuration is random files of plain-text and not database-driven for almost anything.
Really?
Configuration is declarative for a reason. It makes auditing and testing a lot easier. Use databases to support deployment and maintenance, and possibly to populate some templates, but there are some things that a VCS is better suited to.
-
-
-
Thursday 17th May 2018 15:24 GMT Anonymous Coward
Wanky name - but useful idea
Hate the name, but authorizing changes to state via git hooks is a good thing, as I already trust the people with access to my repo.
However, reusing the implicit authorization of write access to a given repository as a proxy for permission to make configuration changes is different from merely storing config files in an SCM.
GitOps is still a wanky name for this idea.
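The "write access as authorization" idea boils down to a server-side hook; a minimal sketch, wrapped in a function so it is easy to exercise, where the "prod" branch name and the deploy command are made-up placeholders:

```shell
# Git feeds a post-receive hook one "<old> <new> <ref>" line per
# updated ref on stdin; only pushes to prod trigger a deploy.
on_push() {
    while read -r oldrev newrev ref; do
        if [ "$ref" = "refs/heads/prod" ]; then
            echo "deploying $newrev"
            # a real hook would kick off the rollout here, e.g.:
            # /usr/local/sbin/apply-config "$newrev"
        fi
    done
}
# In a bare repo this body would live in hooks/post-receive.
```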
-
-
Thursday 17th May 2018 19:33 GMT Alistair
svn/cvs/git > repo DEV repo QA repo NPE repo PROD repo DRP
cfengine/chef/puppet config files. (working to add ansible to the list)
rulesets based on OS/ver Purpose/env and location/toolset app/db network - lots of *if this then that* type filters.
We've been doing this for.
um
too goddamn long. I think we're at about 12 or 14 years now.
4K+ hosts both BareMetal and VM and now cloudy crap both onprem and off.