Hahahaha. I'm on TalkTalk so I'm fine.
Anonymous for obvious reasons.
Brits using mobile networks EE, O2, Giffgaff, and BT Mobile are unable to make calls this evening, UK time, amid an ongoing network breakdown. Judging from complaints online, subscribers cannot make outgoing calls, and some can't receive any, either. It appears the fault lies in the BT national network. "There seems to be …
It doesn't matter which network you're on, you're not fine. If you're on one of the affected networks you won't be able to make or receive off-network calls, but if you're not on the affected networks then you still won't be able to make or receive calls to or from those networks.
This sort of outage affects everyone.
People need to rewatch "Brazil" - bearing in mind that all the explosions and other carnage blamed on terrorists in the movie were actually caused by decrepit equipment and shonky practices (the guy who actually fixed things was also branded a terrorist).
It's becoming uncomfortably true - those exploding pavements in London being one example.
Service Status
giffgaff Website/App and Log-In: not available
Activations/SIM Swaps: not available
Top-Ups/Purchasing goodybags/Auto Top-Ups: not available
Handset Pay Outright/Loan/Return and Repair: not available
New Number Transfers In and Out of giffgaff: not available
giffgaff Money: not available
Requesting a PAC code: not available
Calls, Texts and Data: available
Recurring goodybags: available
Outgoing calls to landlines: unavailable for some members
Outgoing calls to 0800 numbers: unavailable for some members
Outgoing calls (more info TBC): unavailable for some members
Here in the U.S., companies are selling their landline service, in my case from AT&T to Frontier Communications, and I suspect it's only a matter of time before they try to phase those old services out. You'll have to pry my landline from my cold, dead hands(TM). Why? Because I can get crystal clear connections, never dropped (except when a tree falls on the lines...), always in range. I can understand the convenience of a mobile phone, but until they can match the service, I want my wires. Cellular is simply inferior.
Horses for courses. I've heard some pretty horrid landlines, and having a cell means not paying twice to have a phone that works fine at home and abroad.
This sounds like a case of having systems that were inter-reliant, and either not realizing it or not taking appropriate steps. Hm.
Here in the U.S., companies are selling their landline service
Maybe there, but not in this part of the US. They need the landline for this DSL thingy. I hear Comcast needs some sort of a wire for their cable service thingy, too.
I'm told it's all terribly technical and that I should just make sure my check is in the mail and those nice gentlemen at the phone company will take care of it for me.
Funny ... Had difficulty calling the wife (gg to gg) and just after 6 this evening from Woodford to home a few miles away. Tried 3 times and got a series of beeps. Plenty of signal. A few mins later worked ok.
Assumed it was network congestion at the time, but never had the problem in that area before.
I wonder if a carrier ever says anything other than "some of our customers"...
Given this sounds like a major network interconnect, I would suspect it's an all or nothing.
If they have redundancy (and they certainly should have!), and that is actually still working, it looks like everyone is now being stuffed down one very narrow pipe which is incapable of handling the demand. I'd let them use a "most" for that situation.
(And a serious slap for whoever underspecified the redundant route - It's supposed to be invisible, i.e. be able to handle all the load of the primary system. If not you need to go reread what redundancy is all about!).
Well thanks anonymous coward... All I can say in my defense is that in the field of communications I work in, and have worked in, system redundancy really does mean the ability to handle 200% load.
In fact one previous existence had almost 300% so any one of three geographically distant sites could handle the entire requirement.
I will end with the definition of redundancy to those who may have been hoodwinked by salesmen...
"In engineering, redundancy is the duplication of critical components or functions of a system with the intention of increasing reliability of the system, usually in the form of a backup or fail-safe."
Duplication being the key word... Aka exact copy... System redundancy would therefore be system duplication... Which would invariably lead to 200% capacity if it's done right.
How's that for some facts?
(Automatic troll face was very apt in your case).
While network redundancy can be provided in a number of ways (diverse routing being but one), it is rare that such provision is designed to handle 200% of the expected maximum load. I'm retired now, but I would have cheerfully accepted a diverse route plan that could, when split and broken, handle up to 100% of the average load for, say, the 12 normally busier hours, but might fall short, perhaps down to 60% or less, in the few real peak periods. It comes down to practical issues like available income and the return that investment will likely make.
(I am assuming that there are a number of quiet hours where even a 30% of peak capacity could assure 100% service availability.)
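To put rough numbers on that trade-off, here's a toy sketch of how many hours a day a single surviving route of a given capacity could carry the full load by itself. The hourly load profile is completely invented for illustration; a real operator would use measured traffic data.

```python
# Invented hourly load profile, as a fraction of the daily peak (24 values).
hourly_load = [0.15, 0.10, 0.08, 0.08, 0.10, 0.20,
               0.40, 0.70, 0.90, 0.95, 0.90, 0.85,
               0.80, 0.85, 0.90, 0.95, 1.00, 0.95,
               0.80, 0.60, 0.50, 0.40, 0.30, 0.20]

def hours_covered(route_capacity):
    """Hours in the day where one route alone meets the full load."""
    return sum(1 for load in hourly_load if load <= route_capacity)

# A route sized at 100% of peak (i.e. 200% total across two routes)
# covers every hour; smaller routes only cover the quieter hours.
for capacity in (1.0, 0.6, 0.3):
    print(f"route at {capacity:.0%} of peak covers "
          f"{hours_covered(capacity)}/24 hours alone")
```

With this made-up profile, a route sized at only 60% of peak still carries the full load for half the day, which is exactly the "good enough most of the time" sizing argument above.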
Of course if the issue was access to 'logical services' rather than e.g. the physical line system then redundancy can become a whole lot more complex. Data centre failures, devices 'data bombing' each other, etc. can be a nightmare that takes time to resolve and no service level assurance can be held in those circumstances.
We all know the sorts of issues that affect interconnected devices if one or more go rogue, and flood the system with cries for whatever the device(s) wrongly think they need.
In extreme cases it can be very hard to re-establish control rapidly.
...as we pump out several thousand calls an hour, we spotted this becoming a major issue around 4pm.
Initially we suspected it was only EE, but not all numbers were affected: for example, we could dial our personal ones around 9 times in 10, but work ones would either come back with a standard busy (not fast busy) or get no response at all. As time went on the situation got worse and affected more and more numbers.
Phoning EE, they stated there were no issues in our area, despite Twitter being lit up with complaints and their own status page reporting problems.
MrsJP is on Tesco, and yesterday around 4pm she tried to call a local landline 19 times from her Tesco mobile. The call just "dropped". No message, no tone. In the end I made the call from my Vodafone work phone - it worked first time.
This incident reminded us of something similar 2 years ago - again on Tesco (which is O2 really). I tried calling my son's phone, and was told "the number you have dialled has not been recognised", which lasted a few hours.
However, for balance, I haven't had any problems (yet) with the giffgaff SIMs I use in my Wileyfox ....
It's remarkable how complicated making a phone call is these days.
It used to be a direct wire connection between two handsets (admittedly, a long time ago).
It's now a radio to a mast, on to a microwave backhaul (sometimes), on to another mast, onto a fibre line into an exchange, then bounced to the caller's network servers, then back out and around wires and switches and such, then on to the recipient's servers, then bounced around more switches and such, then spat out of a fibre line onto a mast, then maybe fired off over a microwave backhaul onto another mast, then on to the recipient's phone. Pretty much every part of that could be owned & operated by a different company, and yet most of the time you dial a mobile and the call goes through in a matter of seconds.
It's quite a feat, and we don't realise it until this sort of thing happens.
And it's not just the physical interconnections that can go wrong.
There's a reason some calls take longer to establish than others. Because of the way number blocks were and still are allocated, if a customer ports to another network, the original network has to maintain a record for that customer.
Now imagine you port three or four times (as I've done)... Call setup can take a couple of seconds. Why? When someone calls you, the caller's network has to handshake with each network in turn to reach the callee's current network. Anyone who's ever hosted your number. Essentially they follow the breadcrumbs down the line (for billing and routing purposes), and all of these systems have to play nice. If one provider's system has a bit of a wobble... Chaos.
I understand it's also moderately complex for landline numbers ported to SIP providers, although UK mobiles take the cake.
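For illustration only, here's a toy model of that breadcrumb-following. The network names, the record layout and the `route_call` helper are all invented; real number portability runs over SS7 signalling and porting databases, not anything this simple.

```python
# Forwarding records: each network that ever hosted a ported number keeps
# a pointer to the network it ported out to. The caller's network follows
# the chain hop by hop until a network claims the number itself.
PORTING_RECORDS = {
    "NetA": {"07700900123": "NetB"},   # original range holder
    "NetB": {"07700900123": "NetC"},   # first port
    "NetC": {"07700900123": "NetD"},   # second port
    "NetD": {},                        # current host: no onward record
}

def route_call(number, range_holder, max_hops=10):
    """Follow onward porting records to find the number's current host."""
    hops = [range_holder]
    network = range_holder
    for _ in range(max_hops):
        onward = PORTING_RECORDS.get(network, {}).get(number)
        if onward is None:          # this network is the current host
            return network, hops
        network = onward
        hops.append(network)
    raise RuntimeError("routing loop or broken porting record")

host, path = route_call("07700900123", "NetA")
print(f"current host: {host}, via {' -> '.join(path)}")
```

Every extra port adds a hop to that chain, and every hop is another system that has to be up and answering, which is why one provider having "a bit of a wobble" can break calls to numbers it no longer even hosts.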
From a conversation with someone at a telecoms company dealing with mobile and SIP who really knew what he was talking about, I got the impression our comms networks might as well be held together with sticky tape and love. It also sounds like a seriously soft target for attacks on critical infrastructure (a bit like those unprotected SCADA systems) and only operates because the networks basically trust each other not to screw it up.
> Now imagine you port three or four times (as I've done)... Call setup can take a couple of seconds. Why? When someone calls you, the caller's network has to handshake with each network in turn to reach the callee's current network.
I'd hope the s/w was written to first try the "last known good" operator, and not all of them each time. And it may fire them off in parallel.
Mostly, I'd expect the delays being caused by:
a) paging. If you're in spotty coverage, you may need to be paged multiple times for your phone to notice that you've got an incoming call. There are gaps of O(seconds) between the paging messages to give you time to respond on a potentially congested random access channel.
b) more paging (but less likely). If the network has lost track of you, it gradually expands its search within the network to find you - so the first time you're paged, it might only be on cells which aren't covering you.
c) LTE, but not VoLTE. Here you have to get redirected down to a lower tech to establish your call. Not sure of the signalling flow, but I would expect incremental delay here.
d) network congestion (either end of the call). This may redirect the call setup to a different technology, too, which will incur delay in processing
Occasionally, you will also need to re-establish your ciphering/encryption keys to the network, which will incur further delays.
A lot of the delay is the air interface, which can take 10s-100s of milliseconds to transfer a small signalling packet. Once the data is on the wire, it whizzes along, and hardware processing/routing is in the micro/nano second scale (unless you're calling the moon).
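A rough back-of-envelope model of those components looks something like the sketch below. All the numbers are illustrative guesses, not measured values, and the breakdown into three terms plus a fallback penalty is my own simplification.

```python
# Illustrative constants - guesses at the order of magnitude only.
PAGING_INTERVAL_S = 2.0     # gap between paging attempts (order of seconds)
AIR_INTERFACE_S = 0.1       # per signalling message over the radio
WIRED_HOP_S = 0.000_05      # per-hop wired processing, micro-second scale

def setup_delay(paging_attempts=1, csfb_redirect=False, signalling_msgs=6):
    """Estimate call-setup delay from the dominant components."""
    delay = (paging_attempts - 1) * PAGING_INTERVAL_S   # missed pages
    delay += signalling_msgs * AIR_INTERFACE_S          # air interface
    delay += 20 * WIRED_HOP_S                           # core network hops
    if csfb_redirect:
        delay += 1.0   # LTE -> 3G/2G fallback: rough extra second
    return delay

print(f"good coverage:          {setup_delay():.2f}s")
print(f"spotty coverage + CSFB: "
      f"{setup_delay(paging_attempts=3, csfb_redirect=True):.2f}s")
```

Even with made-up numbers, the point holds: the wired hops are noise, and nearly all of the user-visible delay comes from the air interface and from waiting out paging intervals.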