(when say the kill shot(s) came from elsewhere)
My money's on the grassy knoll...
4260 publicly visible posts • joined 19 May 2010
The number of pupils signing up for GCSE computing has plateaued just years after the qualification was introduced, raising concerns that not enough is being done to help teachers with more difficult courses.
Because it's the teachers' fault, not the fact that the course is completely crap, and any pupil looking at their GCSE choices will instantly recognise that.
We must be doing something wrong then, we only have two VDSL2 connections which offer real-world speeds of 60Mb/s down and 16Mb/s up (we are 2 miles from the exchange), and then two dedicated 20Mb/s fibre links. We are an SME, so where would we get 100Mb/s uplinks from without going bankrupt?
"The four roles in question are:"
*lists five items*
Our chief weapon is surprise...surprise and fear...fear and surprise.... Our two weapons are fear and surprise...and ruthless efficiency.... Our *three* weapons are fear, and surprise, and ruthless efficiency...and an almost fanatical devotion to the Pope.... Our *four*...no... *Amongst* our weapons.... Amongst our weaponry...are such elements as fear, surprise.... I'll come in again.
It's taken me a while to respond but I wanted to check my facts before posting.
I remembered reading an article which stated that as larger and larger capacity disks are used in RAID arrays, the more likely you are to see non-recoverable read errors on some sector of the disc. If you take the manufacturers' stated figures, the URE rate is about one per 10^14 bits read.
Which means that, on average, once every 100,000,000,000,000 bits the disk will be unable to read a sector. That figure of 10^14 bits is roughly equivalent to 12TB. So if you read an entire drive of 12TB or more, you should statistically expect a URE at some point.
So imagine if you have a disc failure in a RAID array using 12TB disks. The time taken to rebuild the array means that there is a significant chance of a non-recoverable read error on one of the other drives during the rebuild (BER / URE). That would be GAME OVER.
So using more, smaller drives in larger arrays in RAID 6 or better, greatly reduces the chances of a multiple failure and data loss.
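To put rough numbers on that, here's a quick back-of-envelope sketch. The 1-in-10^14 figure is the typical consumer-drive spec rate mentioned above, and treating bit errors as independent is a simplification, so take the percentages as indicative rather than gospel:

```python
import math

URE_RATE = 1e-14  # unrecoverable read errors per bit (typical consumer-drive spec)

def p_at_least_one_ure(capacity_tb: float) -> float:
    """Probability of hitting at least one URE when reading the whole drive."""
    bits = capacity_tb * 1e12 * 8  # decimal terabytes -> bits
    # Assuming independent per-bit failures: P = 1 - (1 - p)^n ~= 1 - exp(-p * n)
    return 1 - math.exp(-URE_RATE * bits)

for tb in (1, 4, 12):
    print(f"{tb:>2} TB: {p_at_least_one_ure(tb):.1%}")
```

On those assumptions a full read of a 12TB drive has roughly a 60% chance of hitting at least one URE, which is exactly why a rebuild that has to read every remaining drive in the array is so hairy.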
What would be the point in churning out cheap 1 TB drives versus more expensive bigger drives that have a lower cost per byte?
It's an interesting question, at the end of the day spinning rust will break, and I definitely would not be comfortable in having 16TB of data on one drive in an array, the rebuild time would be horrendous.
I for one would rather have larger arrays of smaller drives (although I'm not sure that 1TB isn't too small nowadays, as you say).
But there must be a sweet spot where price-per-byte and the time taken to recover from a failure intersect; I haven't looked into it, but I would guess it's around 4TB.
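One way to eyeball that sweet spot is rebuild time, which scales linearly with capacity. A rough sketch, assuming a made-up but plausible 150 MB/s sustained rewrite rate (real rebuilds are slower, since the array usually stays in service):

```python
RATE_MB_S = 150  # assumed sustained sequential rewrite rate, MB/s

def rebuild_hours(capacity_tb: float) -> float:
    """Best-case time to rewrite an entire replacement drive."""
    return capacity_tb * 1e6 / RATE_MB_S / 3600  # TB -> MB, then MB/s -> hours

for tb in (1, 4, 12, 16):
    print(f"{tb:>2} TB: {rebuild_hours(tb):4.1f} hours")
```

Even in this best case a 16TB drive is the better part of 30 hours of rebuild, during which the array is degraded and every surviving drive is being hammered.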
Spectra Logic's Digital Data Storage Outlook 2017 report predicts IBM will emerge as the sole tape drive manufacturer.
Looking at the way IBM is rapidly disappearing up its own backside, this may be wishful thinking!
It (or they) seems to be divesting itself of anything hardware-related, in an effort to become all cloudy and edgy, and service-oriented.
If Three did not breach OFCOM rules then what right in law does OFCOM have to impose a penalty?
Apparently reading and comprehension have failed here.
The incident they were investigating - a failure of a fibre link - was not deemed to be in breach of OFCOM rules.
However in the process of investigating that incident, they found that Three only had one Datacentre handling all emergency traffic, with no failover, which is a breach of OFCOM requirements.
If you can show your applications side-by-side unsnapped doesn't the need to snap go away?
It can be a pain to have to manually resize and move all the windows to fit, compared to just maximising them on one monitor. To the best of my knowledge, you can no longer tile application windows in current versions of MS Windows.
And certain applications don't necessarily show all the content if they aren't maximised - a classic example is RDP sessions, which (depending on the resolution of the remote machine) may not show the whole desktop and taskbar unless the window is maximised.
Whenever you think you've sussed The Rules in biology, somebody discovers an organism that breaks 'em.
Ahh, that made me think of the following - sometimes referred to as the Harvard Law of Animal Behaviour:
"under strictly controlled experimental conditions of temperature, time, lighting, feeding, and training, the organism will behave as it damn well pleases."
2) One of my Uncles called a fuel can a flimsy (he claimed he learned the term from the Brits in Italy during the last days of WWII ... not sure where this actually came from, he's the only person I ever heard use it).
This comes from the British-issued fuel cans being made of very, very thin metal, due, I suppose, to rationing of materials. Most British land forces quickly adopted captured German ones (and later copied and manufactured their own), which were much more strongly built. Hence why they are still commonly called Jerry cans.
When I was a lad, at boarding school, the only loo paper provided was Izal Medicated.
I too remember (without fondness) Izal Medicated. Very similar to greaseproof paper, and completely non-absorbent.
But, 40(cough) years on, it's the smell of carbolic soap which always brings back memories of my infant school toilets - tiled in dark green, with lighter patches where the moss had taken hold...
"please let us know if this service interruption (some services possibly up to two weeks) could be a major impact to your product release schedules".
Well, given that:
The IT services provided by those facilities included virtual systems, control desk, software configuration management, build automation, ID build automation, appscan source vulnerability scanning, and continuous delivery with UrbanCode Deploy
I would suggest it might have an impact?
I wonder if this is a somewhat drastic method of finding out what services they need to keep... turn 'em off for two weeks, and if no-one complains, they obviously weren't required.
If you want control over your connection, use YOUR OWN router. Using your own modem would be a good idea too.
And what if the ISP doesn't allow third-party routers / modems to be connected? This is quite common.
And to forestall your next idea, what if there is only one ISP in your area, so you can't change?
You can run any version of OS as a guest on, say, a Linux base build, take an image regularly, and only operate the PC using the guest OS, so that if anything goes wrong you can immediately restore a known-good image.
Any zero-day or unpatched exploit should then be less of an issue, as you just restore the known-good image. So any old OS can be supported indefinitely, running the proprietary software.
The problem may be that whilst you can emulate software in a virtual environment, it is not so easy to emulate custom hardware in a VM. This is not the case for all of the NHS's problems by any means, but may be a reason for sticking with real hardware in some cases.
And if the breach is down to having to use weak encryption because the Government wants to snoop on everybody all the time?
An interesting point, but the fact is the majority of the data breaches that have happened are not down to encryption failures, they are down to easily preventable exploits like SQL injection, which should have been a solved problem years ago.
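For anyone who hasn't seen why SQL injection is so easily preventable, here's a minimal Python/sqlite3 illustration (the table, data and payload are all invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterised query treats the payload as plain data
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # every row comes back
print(safe)    # nothing matches
```

The parameterised form is no harder to write than the concatenated one, which is rather the point.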
European Union ministers have approved new rules for video that will oblige Facebook, Google, Twitter and others to remove hate speech and sexually explicit videos online or face stiff fines.
Because of course, American companies, in America, are going to care about what some bunch of other countries want to do. How do the EU propose to enforce these fines?
I'm not sure if it was BT or O2, who I recently had the pleasure of listening to for an hour.
But what drove me mad was that despite the hold music being a pleasant piece of Chopin, I think, every time the "your call is important to us, please hold" message came on, it restarted the music from the beginning.
How bloody annoying that is!
The bottom line is I had a much-needed doctor's appointment cancelled last week. And part of me feels like I should feel sorry for them and take part in the public anger against hackers. But another part of me feels annoyed I lost that appointment because someone didn't know how to do back-ups.
I think you are making an unwarranted assumption. The actual number of systems affected by the ransomware was quite small, and most were simply shut down as a precaution, and to limit the spread of the infection, which was absolutely the right thing to do at the time.
This was obviously a difficult decision, but in balancing the ability to honour appointments for a day against the likely impact of a ransomware infection, the answer is clear. There is no indication that GP surgeries do not have sufficient valid backups available.
We have spent the week fundamentally changing the way we manage our office networks, in order that we have some protection against Cryptolocker, WannaCry and other ransomware attacks.
Should we have done this before? YES
Could we have done this before? NO.
It's only due to the widespread publicity garnered by the WannaCry attack that our Directors and PHBs have been stung into releasing the necessary funding to allow us to do it.
Luckily, we've long had a plan ready to implement.
So our backups are now on a separate LAN, with no direct routing, and no SMB connectivity.
We've also restricted SMB between individual hosts on the LAN, and moved all non-essential hosts (directors' phones, laptops, tablets etc), to a separate WiFi network, with no access to the corporate LAN.
It makes life harder to do certain things, but it does mean that even if the boss's secretary clicks on an attachment, or a link in an email, we are probably going to survive it.
I'm feeling a lot more comfortable at the end of this week than I was at the start of it.