Engage sprinklers....
... disengage when water damage exceeds fire damage.
A flood and fire in one of BT's major exchanges in London's west end has left stores on Oxford Street and Regent Street unable to process card payments in the last few shopping days before Christmas. The small blaze overnight at the Gerrard Street facility in Soho has left thousands of businesses without broadband and …
Credit card swipe terminals are ideal for radio. The message format is fixed and short.
Summer fairs have been provided with both CC swipe terminals and ATMs, the latter in a muscular-looking security vehicle complete with guards, all connected wirelessly.
Very practical applications they are, too.
Ah, never been in Gerrard Street exchange?
Do you know the area, too?
The exchange is an example of a quart in a pint pot; some of the local businesses affected would also be several 'model agencies'.
Gawd nose how some sort of wireless would work around there.
(go for Street View to get a better idea)
That would suggest someone had sat down and worked out all the ways something could go wrong, and actually taken a stab at figuring out a way to fix it. This is BT we're talking about here. They're about as prepared as BP. Sprinklers in the server room, as has already been pointed out, is about the peak of their disaster recovery technology. As for redundancy, oh yes, they have lots of those...
The article makes no mention of sprinklers in there; that was a previous commentator's invention - and it's a telephone exchange, not a server farm. I've spent decades in premises like these, and I've never, ever seen sprinklers in a BT equipment room. Some other operators used Halon before it was banned, but water? Never.
Full redundancy requires twice as much kit - plus whatever you use to switch between the two sets. Do you fancy paying twice as much as you do now for your phone line or for broadband?
I'm sure they have mobile disaster recovery kit but given the failure it would appear to be much quicker to just replace what's damaged than try and lug a few containers into central London and then wire up tens of thousands of circuits by hand.
Time to dig out the old zip-zap manual credit-card imprinters.
Assuming that the stores can find the imprinters and the carbon-paper sheets, and can train the staff on how to check the customers' signatures. Of course, there's no security with them, which is why the till operators need to check the sigs.
The last time they were used was probably Jan 2000 when one of the major credit-card processors went tits-up.
A lot of cards now are "Electronic Use Only" (i.e. non-swipeable). Also, I think Chip & Pin was partly designed to get rid of that possibility too.
I blame BT but, most importantly, the stores. You're raking in millions of pounds worth of business and can't afford a second independent connection to the Internet? Hell a 3G stick would probably do in an emergency, or a leased line, or a sat connection, or something.
You're seriously telling me that if the exchange goes down you can't take cards *at all*, except (possibly) by an intensively manual process that most customers wouldn't be arsed to wait for?
Venus is a fibre internet operator based in Soho Exchange and others. Since our offices are around the corner, we were able to bring over some fully charged UPS units (battery backup) and get all our customers back up and running by 9.30am.
It's quite clear that BT has worked very hard to get this sorted out, and power was restored for many of the LLU broadband operators before lunchtime.
The problem is that the melted circuit comes AFTER the generators etc... We've seen things like this happen in quite a few of the major data centres... it seems that disaster recovery plans are very hard to get 100% right.
Introducing dual feeds after the main/standby power supplies is actually a massive safety risk. The chance of an electrician being killed if you did that is high. Pull the fuse out - the circuit should be dead. Dual feed it and that may well not be the case. I don't think electrical regulations even allow it in normal circumstances.
The theoretical risk of a 'stupid' thick cable and fuse, clearly marked and running in its own ducting and trays failing is minute. Alas in this case it looks like the minute risk happened. As it's a 'stupid' thick cable, it's relatively easy to replace, you just need to wait for things to cool down a bit.
across all segments. Looks to me like this failure is close to the end of a distribution line which is where it gets exponentially more expensive to maintain redundancies. Especially since somehow or another you have to pay for them.
Several years back the CIO was talking big about having two vendors supply internet access to us, and even have them coming in to different corners of the building so that if one line got cut during street work, the other would still be up. Then he got the price quote for the work and binned the plan.
Of course, I don't know the specific geography so maybe I'm all wet and this is a large geographical area where such redundancies should have been planned in.
Looks like your CIO did things backwards, as usual when unskilled people try to set up disaster recovery. I've lost count of the number of times a customer relationship has started with the note "customer has bought XXX, and wants to know how to set it up to get maximum availability" :( As the old Irish joke has it:
"Can you tell me which way I should go to get to Dublin?"
"Oh, I wouldn't start from here."
Before even thinking about the solution you need to make the business case. Work out what can go wrong, what impact it might have, how likely it is, and how much money it will cost you when it does.
Then you can figure out if it's financially worthwhile to protect against, and what sort of solutions are worth considering. Just leaping in with "oh, two internet feeds must be better than one" isn't the way to start.
Most operators will offer full redundancy as an option. It's not 'cheap'. It's almost certainly cheaper to lose a day's business once every fifteen or twenty years than to buy full resilience.
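A back-of-the-envelope sketch of that business case, with entirely made-up figures (the revenue, outage frequency, and line costs below are illustrative assumptions, not real numbers from the article):

```python
# Back-of-the-envelope resilience business case (all figures hypothetical).
daily_revenue = 50_000.0      # assumed card takings lost in one outage day (GBP)
outage_every_years = 15       # assumed frequency of a day-long exchange failure
expected_annual_loss = daily_revenue / outage_every_years

redundant_line_cost = 6_000.0  # assumed annual cost of a fully diverse second feed

print(f"Expected annual loss without resilience: GBP {expected_annual_loss:,.0f}")
print(f"Annual cost of full resilience:          GBP {redundant_line_cost:,.0f}")
print("Resilience pays off" if redundant_line_cost < expected_annual_loss
      else "Cheaper to absorb the rare outage")
```

With these numbers the expected annual loss is about GBP 3,300, well under the cost of a diverse feed - which is exactly the point the comment makes.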
The network hasn't failed; the services provided by one node have - as far as I'm aware, the national PSTN is still working.
Last mile redundancy: you don't get it unless you specifically ask to be provided with diverse feeds from two or more different exchanges, which is not cheap but can be done.
Also the water damage was caused by ingress from an adjacent building. Most telecoms/data centre facilities use a high fog mist system which is not the same as a "sprinkler". This is designed to lower the air temperature and smother the flame whilst reducing water damage to electronics.
Inert gas suppression systems are fairly uncommon now (except in very specific applications) due to health and safety risks (e.g. suffocation).
Inert gas suppression systems (Inergen, Argonite, FM-200 etc) are used (I've seen them in the UK, Europe, and some dodgy places in sub-Saharan Africa), but they require a gas-tight room. A typical 1950s-era BT exchange would leak like a sieve. Everywhere I've seen this type of system, the building had a purpose-built room, or a room within a room was built - usually with the AC switchgear outside the gas-protected area...
The thing is that the multi-million-pound retailers use ten-quid-a-month broadband connections to connect their POS systems to the world.
The way the broadband network is built it does not and cannot provide any kind of decent SLA. Anyone who thinks they can spend 10 quid a month, or even fifty quid a month and get rock solid connectivity is seriously deluded.
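For what it's worth, the arithmetic behind paying for redundancy is simple. A sketch with assumed per-link availability figures (the 99% uptime for a cheap line is an illustrative guess):

```python
# Availability of one cheap DSL link vs two independent links (assumed figures).
a_single = 0.99                       # assumed uptime of a ten-quid broadband line
a_dual = 1 - (1 - a_single) ** 2      # both must fail at once - if truly independent

hours_per_year = 365 * 24
print(f"One link:  {a_single:.4%} up, ~{(1 - a_single) * hours_per_year:.0f} h/yr down")
print(f"Two links: {a_dual:.4%} up, ~{(1 - a_dual) * hours_per_year:.1f} h/yr down")
```

Two nominally 99% links cut expected downtime from roughly 88 hours a year to under one - but only if they genuinely fail independently, which is exactly what a single exchange or a single duct breaks.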
Reminds me of a system admin of ours, many years ago. Standing on a stepladder to put something on a shelf, he overbalanced. Made a wild grab to save himself, and caught the lever for the main power breaker for the server room.
He said the strangest part of it, after the initial Oh Shit! moment, was the sound of his footsteps echoing across the silent room, without the sound of fans, disks and A/C that normally drowned out all other noise.
Cost him a few beers, that did...
I don't know how it is in the UK, but here in the US one of the problems you have with trying to get redundancy from different providers is pipe sharing. The cable companies will trench with the phone company: snap that pipe and the cable and phone companies both lose signal. Same thing with other data providers. You can have a Sprint exchange on one side of town and AT&T on the other. If you get AT&T and Sprint running to your building, at some point they will share the same pipe. I saw this with MCI and AT&T: MCI had an exchange, and that exchange was used as a peering site for AT&T. Three miles down the road they were sharing the same pipe. At the last mile it looked like it was coming from different areas.
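The shared-conduit problem can be made concrete: once both "diverse" feeds run through one duct, the duct's failure rate dominates, and the nominal redundancy buys almost nothing. A rough sketch with invented probabilities:

```python
# Why a shared conduit kills nominal redundancy (all probabilities invented).
p_link = 0.01    # assumed chance one provider's feed is down in a given year
p_duct = 0.005   # assumed chance the shared duct is cut (backhoe, fire, flood)

# Truly diverse paths: an outage needs both independent links to fail.
p_both_independent = p_link ** 2

# Shared duct: one cut takes out both feeds at once.
p_both_shared = p_duct + (1 - p_duct) * p_link ** 2

print(f"Independent paths: {p_both_independent:.6f}")
print(f"Shared conduit:    {p_both_shared:.6f}")
```

With these numbers the shared-duct outage probability is about fifty times higher than the truly diverse case, which is why physically separate entry points (as in the CIO anecdote above) are what actually cost the money.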
Not seen one of those for ages. Last time was when I worked in the north east and some muppet plugged a yellow FLT into a blue socket (yellow and blue were both DC charge sockets, but with reversed pins).
Not only did it blow a 300 A high-rupturing-capacity fuse, it ripped the door off the fuse cabinet and melted around 100 metres of trunked wiring that also fed the oil heating system for the warehouse. The pictured fuse looks bigger, but its state is similar to the 300 A one I saw.
There is a *good* reason why you cannot have the power on with the door open ;-)
In our case $Boss replaced the trunked wiring, but I had to sit and work out which wire went where, as there were no labels and the wires were all "black" or as good as ;-(
FYI, HRC fuses hold a set of parallel fuse elements enclosed in sand. The plan is that as each fuse wire goes, the sand absorbs the heat etc. However, if the short exceeds the rating by any significant amount - as in a reverse-wired FLT - you get what I like to call a proper bang. In such cases all the fuse elements go pop at once and the sand and glass "cannae take it".
Jacqui
The power blew after a fire and subsequent water damage from an adjacent building.
A BT spokesman said: “BT can confirm that water damage from an adjacent business disrupted power supplies to an exchange in the west end of London.
“We expect that all services will be restored by the close of business today, and apologise to all customers impacted.”
If the retail outlets didn't bother to invest in alternate connectivity they'll be stuck with the standard fix times on a single BT line i.e. they get what they pay for.