"lines of code produced or function points created"
Those have never been valid metrics, DevOps or otherwise.
If you want a brief summary of DevOps, it goes like this: a lot of those who claim to be implementing DevOps aren't getting it right. And British companies are doing worse than their peers abroad. Those are the potted findings of a CA Technologies survey earlier this year, which claimed there exists a gap in perception, pointing out that …
Well, they've never been valid. They have been tried, though: ISO9002 tried to make lines of code a thing.
My view on the poor adoption of DevOps: so many organisations got burned by their wall of process folders in ISO9002, and the many man-years that died therein, that anything resembling it is rightly treated with suspicion.
QMS standards like 9001 do NOT say that you must measure lines of code. They do say you should set measurable quality objectives and measure them. What you measure is up to you.
The problem is that it is very hard to think of good metrics for software development (or development in general). Even high-level ones, like delivery against originally planned dates, are difficult, with dependencies on other groups, clients, and changing requirements and environments.
Right. I've been hearing this Pearl of Wisdom, "you shouldn't measure lines of code", for the past 20 years.
But I've never heard of anyone actually doing it.
ISO9002 certainly didn't suggest it. All it said was "have metrics", nothing about what they should be. It's possible that someone somewhere thought "lines of code" would be a sensible metric, but I've never seen a first-hand account from anyone who worked in a company that had that idea.
and the first rule of ISO9002
Is to have a meeting on a project and document all the bits of the standard that you are NOT going to follow, thus covering your collective arses when it comes to audit time.
As for lines of code: what a load of bollocks. Some code is easy; lots is hard, especially when the effing customer keeps changing their mind every other day.
If you want the lines of code, then the mantra spoken in some circles that a function/subroutine/procedure should never be more than 11 lines long is a very good way to artificially boost your lines-of-code numbers. Does it actually mean anything in the long run? Apart from making maintenance/support a lot harder, especially as most of these functions are never used anywhere else... WTF?
And making it run a lot slower... it means, to many of us, SFA.
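For the sake of argument, here's a hypothetical sketch (names and numbers invented) of how that "never more than 11 lines" rule pads a lines-of-code metric without changing what the program does:

```python
# A hypothetical illustration of gaming a lines-of-code metric:
# the same arithmetic written once, then shredded into tiny
# functions purely to inflate the line count.

def total_price(items):
    # Straightforward version: one function, a couple of lines.
    return sum(qty * price for qty, price in items)

# "Never more than 11 lines" version: every trivial step becomes
# its own function, roughly tripling the LOC for zero benefit,
# and none of these helpers is ever reused anywhere else.
def line_cost(qty, price):
    return qty * price

def all_line_costs(items):
    return [line_cost(q, p) for q, p in items]

def total_price_padded(items):
    return sum(all_line_costs(items))

items = [(2, 3.0), (1, 4.5)]
# Both produce the same answer; only the metric has changed.
assert total_price(items) == total_price_padded(items) == 10.5
```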
YMMV, and probably will, but those are the views of a self-confessed dinosaur of the IT world with a mere 44 years of coding behind me.
I once spent a fortnight refactoring a bunch of classes that consisted of huge amounts of kludgy cut'n'paste code, created with so little thought or skill that I ended up with code that did the same thing and was 80-90% smaller in terms of both lines of code and function points.
The project manager there was a contractor, obsessed with metrics and Microsoft Project, who loved to turn up at meetings with senior management armed with all sorts of charts to impress them.
So of course, he soon appeared at my desk looking confused, clutching charts showing lines of code and function points created for various applications, with mine showing a massive negative bar. The fact that this refactoring had to be done to get the work finished quicker, and with much less code (and fewer bugs) in the product, was incomprehensible to him.
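The sort of cut'n'paste the comment above describes can be sketched like this (a hypothetical toy example, with invented names): two pasted-and-tweaked copies collapse into one parameterised function, so lines of code go down while behaviour stays identical.

```python
# Before: two functions produced by copy, paste and tweak one field.
def sales_report(rows):
    lines = []
    for r in rows:
        lines.append(f"{r['name']}: {r['sales']}")
    return "SALES\n" + "\n".join(lines)

def costs_report(rows):
    lines = []
    for r in rows:
        lines.append(f"{r['name']}: {r['costs']}")
    return "COSTS\n" + "\n".join(lines)

# After: one parameterised function replaces every pasted copy.
# Multiply this by dozens of classes and the negative LOC bar
# on the project manager's chart writes itself.
def report(title, rows, field):
    body = "\n".join(f"{r['name']}: {r[field]}" for r in rows)
    return f"{title}\n{body}"

rows = [{"name": "widgets", "sales": 10, "costs": 7}]
# Same output, far less code to maintain.
assert report("SALES", rows, "sales") == sales_report(rows)
assert report("COSTS", rows, "costs") == costs_report(rows)
```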
If code metrics are seen as part of the "DevOps" mindset, does this mean that all the "Agile Evangelists" who were obsessed with refactoring are now out of vogue? We need to be told!
... my hypothesis is that this person is an idiot. In many development sectors MTBF is all that matters, because "remediating" the failure isn't a software issue: e.g. crashing a plane, emptying the wrong bank accounts, overheating a greenhouse, flooding a factory floor... etc.
DevOps, if it has any relevance at all, is a methodology for people that think "software" automatically means: ECMAScript, AngularJS, REACT, iOS, node.js and hyperconverged cloudy dockers.
Argh. Another meaningless DevOps article. Taking obviousness, applying it to DevOps and calling it an article. Sheesh.
OK, how about this metric: is your customer satisfied? If you've reduced the cost of release by 98%, great. But was that your cost or your customer's? Was that a product they are interested in? Was that their priority? DO THEY CARE?
What you do is go to the customer and ask them, and that's your metric. And if more numbers happen to help them understand what a great job you're doing, to make them happier, then fantastic; if not, don't. If you're saving your own costs, which is sensible, don't confuse it with making your customer happy.
The rest, the internal metrics, are for yourself, based (hopefully) on a continuous strategy of improvements to your systems and processes (which may or may not include DevOps). They may be interesting as the how, but the what is customer satisfaction.
Number of mentions of the customer in this article? 0
"What you do, is go to the customer and ask them, and that's your metric. And, if more numbers happen to help them understand what a great job your doing, to make them happier then fantastic, if not don't. If your saving your own cost, which is sensible, don't confuse it with making your customer happy."
This is it: it should be about providing the product a customer wants. Instead of months of marketing making it up, you release a prototype, then fix and develop what the customer wants improved, and forget about the features you guessed at but they don't use. The quicker and smaller the release, the less the customer is annoyed by the changes (only as a general rule), and major bugs can be fixed quickly.
You don't get better at something by not doing it.
Yep, if your customer is out there, the great unwashed, then I think this is it.
It made me laugh, as I'm working on a side project, for myself, and I can't get features how I want them. I knocked something up last night, released it (Jenkins to Chef in about 5 minutes), tried it for a while, and realised what doesn't quite work how I want in practice. And I'm the customer, for myself. What chance does marketing have when the customer doesn't/can't know?
Oh, what's that, Life Support 1.3 has a problem? What, it's crashing? Oh, hang on, we'll get right on that; our normal test cycle is 4 months. What, people are dying? Blimey, well, we can try and knock it out immediately, but we won't be able to test it, and Geoff, the release guy, is on holiday, but I think he left some notes around here.
The point being, that 4-month test cycle (followed, probably, by intensive third-party testing for certification) means it probably doesn't have a crash bug, or if it does, it affects a very small number of people or circumstances.
DevOps-style development, where you throw releases out the door and see what sticks, is fine for a messaging app, but not for, say, air traffic control, financial transactions or life support, i.e. anything actually important.
No matter how clever you try to be, metrics have the same two problems as regulations:
- The more of them there are, and the more complex they are, the more overhead you have to pay to implement and track them.
- All are corruptible: given enough incentive, individuals start looking for ways to work around them for their own benefit, and eventually find them. Attempts to "fix" this problem only make the previous one (complexity and cost) worse.
There is no way around those two problems, as they are ingrained in metrics' design (overhead) and intent (reward). Your only hope is to hit the right balance of complexity vs. cost of measurement vs. cost of corruption. If that sounds like the familiar Project Management triangle of entanglement (cost, speed, quality), it is because it is exactly the same kind of problem.
I've never come across a metric that was useful, let alone incorruptible.
The biggest issue seems to be that the people who want metrics aren't actually sure what it is they're trying to measure, because measurement is an exact science and the things they want to measure are wishy-washy management bollocks like 'satisfaction' and 'performance'.
Metrics used to measure performance are the easiest to corrupt. All engineers know about booking time to projects so that they can do their job effectively while making management think their metrics work.
How about this. If loads of your engineers think you're a c*nt, you're doing a bad job. If your company is losing money, you're doing a bad job. That's about as accurate as it gets.
84 per cent of UK organisations agreed it is important "to have IT and business alignment in relation for DevOps", but just 36 per cent ... actually what?!?
Is that a typo? I can't parse it. My in-head compiler just spat out an uninformative syntax error. I think my gripe is the "in relation for", but I dare say there are other ways of fixing it.
Or is this what DevOps is -- postmodern psychobabble for PHBs?
OK, I've had an hour or so for my sub-conscious to grind it down. My current best guess is that they meant to say it is important to have "IT and business aligned" for DevOps. Admittedly this statement is so "duh-brain" bland it pre-emptively nukes taste-buds from orbit, but it is at least a statement, which is more than I can say for the original.
Every time I see the expression "Dev-Ops" I just want to bang heads against the wall. If this cannot be consistently explained, let alone consistently deployed, then it's just the next in a very, very long line of IT BS doing the rounds yet again. I've been reading about IT aligning with the business for nigh on 40 years. Hasn't anyone actually got it done yet? Well, that is, outside of any IT shop I know or have worked in. I had my fill of snake oil decades ago and am not partial to it these days. Dev-Ops. Really. Next!
Yay! DevOps! We are doing it wrong! Yay! Our culture stinks! DevOps is awesome because it is...erm...Yay DevOps!
Why do DevOps articles seem to have at least one insult in them directed at their potential audience? If anyone reading this likes the DevOps articles on this site, or is convinced by or sold on them, could they let us know? Otherwise we can tell the twats paying to feature this stuff that they are wasting their time and should take their shiny marketing patter elsewhere.
Have a great weekend everyone.
“Traditional metrics are useless,”
Well, it's nice that there's something in the article I can agree with. Unfortunately, replacing traditional metrics with other metrics just encourages people to work towards those metrics at the expense of anything you didn't think of measuring. And that will be true however much you beat your readers with the yammer hammer.
The Reg is having a fine old time with Dev Ops. Do you know what I've noticed?
"We implemented dev ops for our new start up"
"Try DevOps first on a mobile app"
"Don't get dragged down by the old legacy systems"
DevOps is great, it seems, if you're starting from scratch: if a good number of your staff weren't outsourced in the last bout of IT bait & switch, if you can turn off the 30-year-old mess of COBOL running on a mainframe that, for some reason, lazy system admins haven't moved over to, er, what's cool now? Ruby or something, running on AWS.
Virtualised networking is hard when you've got a 10 year contract to outsource your network support. Virtual storage is hard when you've just shifted all your storage SAs to Uganda.
Not to mention the fact that your devs are earning $5 a day in Manila and are barely able to code, let alone be trusted with admin access to production.
The reason we're all "doing it wrong" is because unless you work in an office with a soft play area, it's impossible to do right on all but the smallest project.
The reason the separation between Ops and Dev exists is that Devs are too gung-ho to be trusted with actual live environments, and Ops have spent the last 20 years being centralised and re-centralised, so that they can't really know what everything does, because there's so much of it, doing so many disparate things.