along with bringing in solicitors Slaughter and May
Nominative determinism?
The boss of British bank TSB has insisted it carried out rigorous testing ahead of a systems migration that saw thousands without access to their accounts for over a week – but revealed he won't be getting a chunk of his bonus. In a heated evidence session in front of Parliament's influential Treasury Committee, …
Whilst Slaughters are far from the biggest law firm even in the UK, if you put aside pure scale, they are by a long chalk THE most prestigious law firm in the UK, and certainly top five in the world. So that's going to be VERY expensive.
Unfortunately the top City law firms all know which side their bread is buttered, and will rarely act against the banks. I would wager that they are not going to come in and point any fingers at TSB. They might do against any suppliers involved, but the primary purpose is presumably to ensure that TSB are defended against any high value claims, and they can say "look, we had a proper investigation".
Ledswinger,
indeed, I was responsible for figuring out how to automate getting their many documents to Office 97 (and back if it went wrong) when the big green button was pressed to migrate to NT & Office 97.
Slaughter & May were VERY switched on.
Obvs this was 'nowt on the scale of a banking migration, but it worked properly...
Jay
"It's faithfully replicating the broken applications running in production." If that is the case then its not just the data migration and the log on that is the issue. The transaction processing should be OK in DR and that is if what TSB are saying is true. Otherwise its a much bigger problem than is being reported.
As one who knows - any paragraph in any document that doesn't contain the assurance of increased profit is not read or digested at the board table.
It does not register. So any regulatory bumf or common-sense belt-and-braces stuff is left for others to worry over - just as the last-minute confidential tag is affixed to ensure that no-one else gets to even read it.
Ooops!
Just like at a company I used to work for. The IT boss would decide to do a migration and, even when it was obvious it didn't work, he'd push on in the vain hope everything would just click into place. It never did. His replacement had the same mentality and didn't last long when he stuffed up the company payroll replacement system.
The problem, Pester said, is that the middleware systems were unable to deal with the number of customers that wanted to access the bank's systems
And he's trying to use this as an excuse? "Please sir, it wasn't me, the computer did it"
Can he not see that determining the expected load and planning for adequate resources to deal with that load are what his job is supposed to be?
It's like he's claiming that suddenly there were twice the number of customers they expected.
It's not like setting up a normal publicly accessible website, where planning for expected visitor numbers is always a bit of a gamble.
In this case, there are a finite number of account holders, so working out the expected load should be easy, even if, because of the downtime, more of their customers were trying to log in to see what had happened to their money...
/rant
GDPR looming, and the host bank going to IBM cloud.. so they HAD to migrate.
I see several problems here.
The first problem is testing, including volume testing.
It is quite obvious that testing at normal volume isn't enough. After several days of outage, you are going to see several times the requests of a normal day.
Second: quite obviously they did not design a system that refuses to allow more users than it can accommodate. That is horrible, and it's something I build in even for small companies. I have designed big systems, and the underlying auth system should also control the number of active users.
On a big system such as a bank's this is non-trivial to do, and many people don't even understand WHY you have to do it.
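For what it's worth, here's a toy sketch of the kind of admission control I mean, assuming a single-process service. MAX_ACTIVE and the function names are invented for illustration; a real bank would size the limit from load tests and enforce it in the auth tier, not in application code like this:

```python
import threading

# Hypothetical admission gate in front of the auth layer. If every slot
# is taken, new logins are refused cheaply instead of everyone timing out.
MAX_ACTIVE = 2  # tiny for the demo; a real system sizes this from capacity tests

_slots = threading.BoundedSemaphore(MAX_ACTIVE)

def try_login(user):
    """Return True if the user gets a session, False if we shed the load."""
    if not _slots.acquire(blocking=False):
        return False  # show "our systems are busy, try again shortly"
    return True

def logout(user):
    # Free the slot so a waiting user can get in.
    _slots.release()

# Demo: the third concurrent login is refused rather than degrading everyone.
print(try_login("a"), try_login("b"), try_login("c"))  # True True False
logout("a")
print(try_login("c"))  # True
```

The point is that the refusal path costs almost nothing in compute, which is exactly why shedding load beats letting the whole system fall over.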
Third: that all explains the outages. But the auth problems? People had wrongly assigned accounts. At some point they deleted and recreated at least part of my data: first my shared account showed as my personal one, then no accounts at all, then no login (deleted), then the correct account assignments. BUT it is quite suspicious that my accounts now appear in a different order. For performance reasons I very much doubt they are doing an ORDER BY in the database, so probably (speculation) I am seeing the natural storage order. As my personal account is older than the shared one, to my trained eye this suggests they ran a script to correct the account associations instead of reloading all the data. This is pure speculation and I might be wrong, of course.
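To illustrate the ORDER BY point (a toy example; the table and column names are invented): without an explicit ORDER BY, a SQL engine is free to return rows in whatever order suits it, typically storage order, which is why a changed display order can leak how the data was rebuilt:

```python
import sqlite3

# Invented schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, opened TEXT, kind TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [(1, "2005-01-10", "personal"),   # older personal account
     (2, "2012-06-01", "shared")],    # newer shared account
)

# No ORDER BY: whatever the engine gives back; not a contract.
natural = [row[2] for row in conn.execute("SELECT * FROM accounts")]

# Explicit ORDER BY: the order the UI should actually rely on.
ordered = [row[2] for row in
           conn.execute("SELECT * FROM accounts ORDER BY opened DESC")]
print(ordered)  # ['shared', 'personal']
```

If a UI relies on the unordered result, any re-insertion or repair script that touches the rows can silently change what customers see.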
Anon, as I am looking for a job right now.. and you never know..
On a well designed system? No, not at all, and the compute and memory cost of rejecting them is very small.
the system would say "Currently our systems are whateveryouwanttotellthem", for a period of time, and other users will be able to log in if the system is available.
The cost of doing this (in compute terms) is very small if done properly.
The real issue is how do you determine capacity.
I can demonstrate and do the math for it, but hey, I charge for that!
I have proven it with a live system that handles many millions of time-sensitive calls, and with other smaller systems.
You have to judge that in comparison to the reaction TSB customers are having at the moment. Being told you can't log in, then getting in a minute later and everything being fine is no big deal for most people most of the time. Compared to this balls-up it's clearly a better system.
I prefer to be mildly irritated rather than unable to access any of my money.
"But the auth problems? People had wrongly assigned accounts"
I've been into a branch today because my business accounts are still inaccessible – when they went into the system to view it, my personal account user name is somehow assigned to the business accounts.. BUT it won't let me log in with those details..
The girl in the branch was really helpful but said that they couldn't rectify it in branch – it's been passed over to the IT team. This isn't a capacity issue, this is a wholesale data fuck up.
You have to plan for peak load + a margin over and above that.
Peak load times would be end / beginning of month when people receive their salaries and pay their bills, the end / beginning of the tax year when lots of people put money in pensions and ISAs, and around Christmas, when people spend lots of money on things.
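As a back-of-envelope illustration of sizing for peak plus margin (every number below is made up for the example; you'd plug in your own measured figures):

```python
# Illustrative capacity sizing. All inputs are assumed, not TSB's real numbers.
customers = 5_400_000          # total account holders (a finite, known number)
peak_fraction = 0.20           # share trying to log in during the peak hour
session_seconds = 120          # average online-banking session length

peak_logins_per_hour = customers * peak_fraction
concurrent_sessions = peak_logins_per_hour * session_seconds / 3600

# Margin for pent-up demand, e.g. everyone checking after downtime.
headroom = 1.5

capacity_target = concurrent_sessions * headroom
print(int(capacity_target))  # 54000 concurrent sessions to provision for
```

The useful property of a bank, as noted above, is that the customer base is finite and known, so unlike a public website this estimate isn't a gamble.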
More TSB incompetence amusement - they sent me an apologetic email yesterday (obviously this has gone out to all their customers) which near the very top says "We want you to recognise a fraudulent email if you receive one. We will always greet you personally with your name and quote the last four digits of your account number..."
The account number digits were correct, but my name was nowhere to be found...
(I'm going to Nationwide today to open my new main current account, but leaving the TSB one as a backup - I suppose that means I'll be a "retained" customer in whatever face-saving report they concoct.)
Banking exec - Is my bonus affected to a point where I'll notice any dip in my income?
Yes - This terrible crisis must be resolved to the satisfaction of all affected I reeeealy mean it
No - I do not give a fuck. I will make a show of caring but I really, really do not give two shits, and will leverage this as an opportunity to cut more within the bank and cash in my share options just prior to jumping ship at the start of the next clusterfuck.
"The boss of TSB has insisted it carried out rigorous testing ahead of a systems migration that saw thousands without access to their accounts for over a week - but revealed he won't be getting a chunk of his bonus."
I'd say this is correct. They did rigorous testing on the note counting machine to count all the £50 notes that would make up their bonus payment.
The bank I worked for was moved to TCP/IP while all the others in the country at the time were captive IBM accounts using SNA.
When Burroughs became a problem and a move to IBM S/390 was made, I refused to succumb and used Cisco channel-attached routers – tunneling the SNA between sites and avoiding the FEP and associated software costs.
Imagine my astonishment when the person in charge of IT enquired about our SNA network!
It took 2 days to craft a suitably snotty reply that pointed out that while he was being a big-shot and contemplating his navel, we had saved a bundle of money and had only a single IP network.
Execs usually cannot discriminate between execute as in kill and execute as in carry out operations!
Are you just trolling?
Yes, there have been issues, but in what universe are any of the above proven today to be the cause. The root cause analysis is still under way, the result of which will probably never be made public. Everything we're being told is rumour and conjecture, including probably what has been fed to MPs.
I reiterate, you're just trolling and that's why you're AC
I've been working in a bank when they've applied an across the board 10% pay cut to all contractors to reduce costs. Similarly, NatWest previously seemed to claim outsourcing wasn't to blame for one of their previous outages, because the off-shore team didn't do the failed migration – despite the fact they'd made redundant some of those most knowledgeable about the migration process, so the on-shore team that was left had insufficient skills to ensure its successful completion, as a direct result of the off-shoring effort. It's easy to point the finger at one of the IT teams as having done something wrong, but here's the point.
Things can and do go wrong in migrations. A process fails, some unexpected data blocks the process. While you obviously try to minimise the chance of things going wrong, you also need to ensure your plans make absolutely sure that you can back out and get a reasonable level of service provision back promptly when things do go wrong. The blame for a lack of adequate planning lies with management, not any of the lower layers.
Customers aren't too fussed about a few minutes extra downtime in the night while you roll back after a failure. They don't care about a new faster system being rolled out a bit later (they probably don't want to have to learn to use a new system anyway). What they care about is when they have days or longer where they can't access their money, or where their data is exposed and leave them vulnerable to fraud or theft. I don't think they'll be finding they've made those savings, because the reputational damage has got to have outweighed many years of cost-cutting - which, unfortunately, just seems to be all too common a result of short-sighted cost-saving measures.
"I've been working in a bank when they've applied an across the board 10% pay cut to all contractors to reduce costs."
To which the appropriate response is either find a new role to coincide with your notice period and don't sign up, or if that's tricky, each renewal you look for a new role and when you find one, leave at the end of your contract, effectively giving them no notice.
It is generally the really good people who are in demand that walk away from these types of demands, so in my experience it down-skills a contract workforce and often ends up costing money.