DR
Yup, DR is still a requirement across data centres... 'the cloud' isn't magic and neither is bog-standard hosting.
HostingUK and its big brother, iomart, are still struggling to restore online services more than 12 hours after a bit barn blackout. A fibre cable break took down the St Asaph data centre in Wales last night and, judging by the howling on social media, pretty much all of HostingUK. Users began reporting problems from 2055 UK …
A cloud is great.
As ONE item of redundancy.
You could spread across several different clouds, across in-house and cloud, across in-house, externally-hosted and cloud. But a particular cloud is ONE point of potential failure.
Anyone who thinks otherwise ends up with problems like this.
Anyone with a brain, in such an instance, would go "Okay, time to failover to our secondary site which has NOTHING in common with the primary... not a company name, not a cable, not a service, not a switch".
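That "nothing in common with the primary" test can be sketched in a few lines. This is a toy illustration, not anyone's real tooling: site names and dependency labels below are invented, and in practice the dependency list would come from an asset/contract inventory rather than hand-typed dicts.

```python
# Toy sketch: a secondary only counts as a true backup if it shares
# NOTHING with the primary - not a company, a cable, or a substation.
# All names and dependency labels here are invented for illustration.

def independent_backups(primary, candidates):
    """Return candidate sites whose dependencies don't overlap the primary's."""
    shared_with_primary = set(primary["deps"])
    return [s for s in candidates if not shared_with_primary & set(s["deps"])]

primary = {"name": "primary-dc",
           "deps": {"iomart", "bt-fibre-east", "grid-substation-a"}}
candidates = [
    # Same owner and same cable as the primary - not a real backup.
    {"name": "sister-dc", "deps": {"iomart", "bt-fibre-east"}},
    # Different provider, different power feed - independent.
    {"name": "other-cloud", "deps": {"aws-eu-west", "grid-substation-b"}},
]

usable = independent_backups(primary, candidates)
print([s["name"] for s in usable])  # ['other-cloud']
```

The point of modelling it this way is that "redundancy" becomes a checkable property of the inventory, not a line in a sales brochure.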
"You could spread across several different clouds, across in-house and cloud, across in-house, externally-hosted and cloud. But a particular cloud is ONE point of potential failure."
Depends which cloud. Go to one of the big boys - AWS or Azure say - and you'll get a data centre network with the capability* for you to host and run your applications with proper and complete redundancy and fault tolerance built in. Go to one of the relative minnows who have been bought and sold by a bunch of VCs with costs stripped to the bone each time and you'll get what you pay for. For better or worse this whole sector is becoming like the big supermarket chains vs the Mom'n'Pop corner stores - the bigger guys do it mostly better and cheaper, but diversity dies.
* although they give you the capability to do it right, they're kind enough to let you do it wrong if you are so inclined, so you'll be just as vulnerable to this kind of failure unless you tick the right boxes on the web console too
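"Ticking the right boxes" is checkable too. A minimal sketch, assuming a hypothetical inventory of instances with their availability zones (the fleet data below is invented): a deployment only counts as redundant if it actually spans more than one zone.

```python
from collections import Counter

def spans_multiple_zones(instances, minimum=2):
    """True only if the fleet is spread over at least `minimum` distinct zones."""
    zones = Counter(i["az"] for i in instances)
    return len(zones) >= minimum

# Invented inventory: everything quietly landed in one zone
# despite the 'redundant' intent - the cloud equivalent of one duct.
fleet = [{"id": "web-1", "az": "eu-west-1a"},
         {"id": "web-2", "az": "eu-west-1a"}]

print(spans_multiple_zones(fleet))  # False
```

The big providers expose zone placement in their consoles and APIs; the failure mode is nobody ever asserting on it.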
"Anyone from advertising would probably sell a one-drive desktop computer as a cloud computing service. To me it's just marketing, and I take no notice until actually seeing what is behind the marketing rubbish."
They do. Just look at the marketing for consumer NAS boxes (or even just external USB drives with a built-in web interface FFS!)
I've seen this time and time again over the past 10 years..
A small data centre is initially set up to serve local businesses (small hosting outfits or online companies); they market themselves as having multiple connections but fail to mention they all use the same pipes.
This is not always their fault, as properly diverse geographical routing to multiple PoPs is not feasible or too expensive. But once acquired, a company the size of iomart has no excuse for not addressing this.
in spite of what the telco salesman said
Or in our case, after the supplier of one fibre bought the supplier of the other and decided to reduce costs by 'simplifying' the joint network. It takes ongoing monitoring to track that sort of behind-the-scenes shenanigans.
It also takes money. I remember St Asaph, but can't remember who originally came up with a cunning plan to locate a datacentre there. I do remember a couple of pitches to buy it or PoP it though, and quickly found a.. snag.
Network wise, it's pretty much in the middle of nowhere, so the civils costs to get either our own fibre, or persuade other carriers to run fibre to it were horrendous.. And then there would be minor logistical challenges, like persuading the Welsh HA to grant permits to dig up kinda key roads in and around N.Wales. Since then, there's been work for supporting wind farms off N.Wales, and sorting out backhaul for some new cables that land around there, so there may be some more options. Wales isn't (or wasn't) very well served by competing fibre providers, especially if you were after dark fibre.
But such is telecomms. When looking for services, it's critical to specify exactly what you're after. If you want true diversity, or to specify minimum route separation, you have to order that as a service. Otherwise providers can (and often will) re-groom capacity to suit their needs, and unless you have a contract specifying route separation, you're generally SOL. Or just offline. Just ordering 2 circuits and hoping for the best may work out cheaper in the short term, but it's best to try and speak to the planners so you get what you need.
And in our case the secondary provider, who was using KComms' fibre network when we signed the contract (while our main route used BT), then subcontracted his service to Openreach. This left a site with dual entrances from separate fibre networks, with disparate paths back to separate cabinets, being routed onto the same single fibre 100 metres up the road. All completely transparent to our network monitoring kit.
all completely transparent to our network monitoring kit.
Ah, well... You can monitor changes with the right kit: either an OTDR and occasional tests to see if the route length has changed at all, or it's often a function built into decent DWDM kit, which can then alert on changes, cuts, and sometimes degradation. But those only work on dark fibre routes.
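The route-length check described above can be sketched simply (the baseline, readings and tolerance below are invented numbers): record the OTDR-measured length at turn-up, then alert whenever a later reading drifts past tolerance, which is the tell-tale sign the route has been quietly re-groomed.

```python
def route_changed(baseline_m, measured_m, tolerance_m=50.0):
    """Flag a dark-fibre route whose measured length drifts past tolerance."""
    return abs(measured_m - baseline_m) > tolerance_m

# Baseline recorded at turn-up; later readings from periodic OTDR tests.
# All figures invented for illustration.
baseline = 42_180.0
readings = [42_176.0, 42_181.0, 43_950.0]  # last one: route quietly re-routed

alerts = [m for m in readings if route_changed(baseline, m)]
print(alerts)  # [43950.0]
```

A real deployment would feed this from the DWDM kit's telemetry and page someone on the alert, but the comparison itself is this trivial.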
"properly diverse geographical routing to multiple PoPs is not feasible or too expensive"
In a lot of cases the telco sells a customer "geographically diverse connections" and then as soon as they're out of sight of the premises both connections end up shunted down the same duct.
BT has been particularly bad for this wee rort. Make sure you get a map of your entire fibre route from start to finish and get the salestwat to sign off a statutory declaration that it's been done to spec.
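Once you have those route maps, checking for the shunted-down-the-same-duct trick is just a set intersection over duct or chamber identifiers. A minimal sketch with invented IDs:

```python
def shared_segments(route_a, route_b):
    """Duct/chamber IDs appearing in both 'diverse' routes - ideally empty."""
    return sorted(set(route_a) & set(route_b))

# Hypothetical duct IDs transcribed from the signed-off route maps.
route_a = ["dc-entry-n", "duct-114", "duct-209", "exchange-x"]
route_b = ["dc-entry-s", "duct-551", "duct-209", "exchange-y"]  # oops

print(shared_segments(route_a, route_b))  # ['duct-209']
```

A non-empty result means both circuits die to one digger, whatever the contract says, so it's worth re-running the check whenever the provider re-grooms.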
"they market themselves as having multiple connections but fail to mention they all use the same pipes."
Yes, that struck me too on reading the article. Surely those "converging fibres" ought to converge inside the DC, having taken different routes to get there.
In North Wales one would have thought that the farming would mostly be livestock rather than arable.
Moreover, putting in an occasional crop of winter barley or similar would only involve ploughing, harrowing and seeding to 12 inches at most.
Trench your fibre down a couple of feet and no normal farming practices will ever touch it - as evidenced here where it was cut by a trencher cutting a micro-trench to a depth of two metres, not a plough, disc harrow or other common farming implement.
In this specific case though, the fibre to St Asaph runs from Chester all the way out to Holyhead and across to Dublin as well as looping down around West Wales.
Seems short-sighted that all the transit for their DC would be going East/in-bound to England and that they can't fail over to a Westerly route, even if traffic has to go via Ireland in the process. This is presumably what they refer to in their status updates when they mention transit re-route and new fibre provisioning as two of their three resolution paths (in addition to getting the existing links repaired!).
Hedge cutters can be a bigger risk than the dreaded backhoe fade. Depending on how cables are buried, soil creep can move them. Then a hedge cutter, being a big spinning spiky disc, can quite happily dig into the soil, yoink the cable up and break it. And running hazard tape above the cable is easily ignored. Billing the farmer for damage might be less easily ignored, depending on how the wayleave was granted.
I've lost home connectivity (I work from home in a field down a lane) twice now with hedgecutters borking it and have had to fallback to 4G connectivity.. fallback to 4G connectivity... fallback... Hey, I have a great idea for a money earner! We'll get all the kids together, and put on a show and call it Ethel the Aardvark Goes Business Continuity Planning.
"Who exactly thought it was "a good idea" to plant a fibre line in a farmers field "
When I was last involved in a project of that nature, the "planting" involved a pair of Caterpillar D8 bulldozers and a mole plough putting the armoured fibre _at least_ 2 metres down, with plenty of warning tape on top before the trench was reclosed.
It's not so much "who thought" of the path as "who cut corners" when implementing it.
A datacentre I dealt with found out the hard way that, despite spending a fortune on three separate power lines heading out in different directions, they all ended up at the same substation, which went poof when an idiot with a hacksaw removed himself from the mortal plane trying to steal a copper bus bar!
Rules of natural selection, as my Spanish wife calls it (she's a Dr). Her favourite mentions:
- You'll rarely find a yellow line or "mind the gap" signs in train stations in Spain; if people are too stupid or drunk to see the gap, well, that's their fault.
- They don't close a whole motorway after an accident, just the lane; people should see the results of stupidity.
Have an upvote and a beer :)
I've honestly lost count of how many DCs have single suppliers and/or single points of failure.
I remember one company I worked for, now gone, that boasted how one of theirs had redundant-everything, including power and fibre.
However, both power and data were from a single provider. Both came into the DC together down the same routes.
I know of at least one large gov department that could be shut down just by digging a trench outside their office. All of their redundant network lines run down the same physical route, side by side, until they branch off in opposite directions.
Back in the early seventies, when I was working for a very large electrical engineering company in the Midlands, we were all beavering away making Arnold's Millions for him, when the whole site went dark. A construction company was improving the main road out of town towards the motorway, and a very big digger had cut right through the main incomer to our site. In the resulting explosion, the bucket and outer half of the arm had melted, and the molten metal had cooled and fused into a solid lump in the resultant hole. It took most of the week to reinstate the high-voltage supply to the factory, but the offices were given an emergency supply by means of a couple of diesel generators in the car park. (Icon because that's what it looked like from our office.)
We did have two different fibre paths (2 fibre trunks with 12 strands each) between our two DCs. We even had (under NDA) the full schematics for these paths through the city, including the layout of the electric, gas and waste lines. Inlets into the buildings were on different sides and the cables never crossed; only on one street bend were they on opposite sides for about 10m. And the cables ran inside old gas pipes that were used by the local gas/electric company for fibre deployment.
One day HP OpenView not only screamed red for one side of the network interconnect, but even for the SAN links on that side. Our monitoring guy cross-referenced that, and OpenView alerted a full cable break.
We went into our DC, used the OTDR equipment and found the break at exactly that distance.
Cue 3 networkers and 2 SAN people running flat out to that position (faster on foot, because we could cut through a park), with our team lead alerting facilities about an incoming power outage (the substation main line was about 1m below our fibre).
We managed to stop the backhoe driver before he destroyed our backup link, and municipal utilities had some serious words with them (and much swearing). On the paperwork the construction company used, the fibre conduits were still classified as active gas lines. How they could even think to continue after hitting one, the mind boggles.
You mean:
- Not to request a Google Earth map with the routes of the fibre prior to signing any paperwork.
- Not to stipulate contractually that any changes to the fibre topology must be made clear in advance, potentially allowing for contract break (pun intended) when they happen, or prohibiting shenanigans like that outright?
We did have redundant paths from our AS and telephone exchange to two different providers. They were contractually obliged to keep them at least 5m apart. About 50km south, the lines had to cross the river Elbe at Hamburg. The lines were in different pipes and the distance was kept, but they used the same bridge for the crossing, only on different sides. One day a lorry carrying diesel fuel caught fire and damaged the bridge. The fire destroyed all cables inside the pipes on both sides of the bridge. We lost all connectivity for a day.
Legal had to fess up that they had not required different crossings or risk mitigation beyond 25km.