I'd say less so
I mean historically, you tolerated a bit of droptus/vodafail or worse because their plans were priced competitively. If reliability was critical, you paid your Telstra tax. These days, Telstra is probably the worst of the three.
> Why not just have an increasing delay between logon attempts?
That defence only works against online attacks, and it is probably easier anyway to detect enumeration attempts from the same IP and blacklist it. More likely, though, someone forgot to password-protect their MongoDB instance, the database gets lifted, and then they throw hashcat at it.
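To make the online/offline distinction concrete, here's a minimal sketch (Python, with made-up base/cap numbers) of the throttling defence being discussed. It only slows an attacker who has to go through your login endpoint; it does nothing once the hash dump has walked out the door:

```python
import time

def lockout_delay(failed_attempts: int, base: float = 0.5, cap: float = 300.0) -> float:
    """Exponential backoff for online guessing: the wait doubles with each
    consecutive failure, capped so a legitimate user isn't locked out forever.
    Useless against offline cracking of a stolen hash database."""
    return min(cap, base * (2 ** failed_attempts))

def throttled_login(check_password, username: str, password: str, failures: dict) -> bool:
    """Sleep before checking credentials; reset the failure counter on success."""
    time.sleep(lockout_delay(failures.get(username, 0)))
    if check_password(username, password):
        failures[username] = 0
        return True
    failures[username] = failures.get(username, 0) + 1
    return False
```

Even at the 300-second cap, this costs an online guesser years per account; hashcat against a lifted database never hits it at all.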
I mean, don't misunderstand me, cold callers are only one step above politicians, but would they reasonably be expected to know how many printers a specific farm might require?
And farmers who apparently can't afford $1/L milk at colesworths but don't notice 80K in their IT consumables budget? Weird.
Also, been a while since I looked at my map, but why is the Brisbane Times reporting on a Victorian tribunal decision about 2 Melbourne businesses?
Exactly. The Windows zero day (for example) will get reported to Microsoft when both
* A better/faster/less detectable exploit is discovered/purchased; AND
* They catch an adversary doing it.
If the first point hasn't happened, the second point won't be a consideration.
Tudge has got to go. As outrageous as the design flaws in the system are, the really shocking thing is that they haven't paused the automation whilst they sort it out. I am in the fortunate position of having had very little to do with them, but what I did see was an organisation unable to arrange for a human to assist with an enquiry. Everything was about being redirected to their online portal, seemingly developed by Satan himself, which, once you followed its instructions, told you that you needed to go in person.
The sooner they understand that half their clients are only there because of poor and immoral decisions made in boardrooms half a world away and not because of laziness (some are, of course), the sooner they can start treating people with the respect they deserve as humans.
> the clear implication of the government (a) lobbying itself and (b) pretending to be a private individual.
I have no problems with (a). It is a good thing™ for governments to extol their positions on any matter and to be forced to justify the positions they are advocating. My problem is with (b), and it is a big problem. It has the optics of an attempt to present a case for or against change without the usual scepticism applied to a normal government mouthpiece.
> some are likely to go to litigation.
Not buying it. There is no way they'll pay up without lawyers at 12 paces.
<Tinfoil hat mode>
I could believe that HPE were offered a very good settlement in exchange for falling on their sword. The government really doesn't want any more IT failures on its watch.
</Tinfoil hat mode>
Curious about the down vote. Happy for anyone to disagree with me, but at least state your argument so I can see where you're coming from.
Despite what the story states, it is not going to be everyone's job to check the backup. Most of those 75 won't even have the rights to do so, nor should they. And it's not unheard of for places to be temporarily overstaffed. Think about what happens with planned mergers or spin-offs, where IT functions can get duplicated for a while or sit idle until some other department gets up to capacity. Sometimes it is cheaper to pay people to do nothing for a few months than to scale up or down, especially where the skillsets are not so fungible.
That said, it appears some mock DR exercises would have been a better use of time with hindsight.
> Difficult call - without further information
Very easy call. Of the 32.5 people let go, I could believe that it was one person's job to do the backup monitoring and tapes, so fair enough for them. Also, someone was their manager and failed in their oversight; that's another one or two. So what did the other 30 do wrong? Assuming they had finished their tasks and had their management's permission to be running whatever application, the decision to let them go can't have been disciplinary.
> Ultimately somebody has to have the power to do this because shutting down servers is a valid admin activity. However it should be made a multistep process with plenty of Are You Sure? types prompts
How about "Please enter the shutdown validation GUID. This can be found on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’."
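Less facetiously, the usual way to make a destructive command genuinely multistep is to require the operator to type back a known token, the way cloud consoles make you type the resource name before deletion. A minimal sketch, with hypothetical names:

```python
import secrets

# The "GUID in the filing cabinet": generated once, stored out-of-band.
SHUTDOWN_TOKEN = secrets.token_hex(16)

def confirm_shutdown(typed: str, expected: str = SHUTDOWN_TOKEN) -> bool:
    """Gate shutdown behind an explicitly typed token rather than a
    reflexively clicked 'Are you sure?' dialog. compare_digest avoids
    leaking the token through comparison timing."""
    return secrets.compare_digest(typed, expected)
```

The point is friction: clicking "Yes" three times becomes muscle memory, but retyping a 32-character token never does.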
Firstly, not the down voter. I agree with the general gist of what you are saying. My only disagreement would be about the ease of detection. Remember that a lot of things need to hold true for encryption to be secure. A few years back, Debian's RNG was accidentally screwed up by removing some code that looked buggy but was necessary for seed initialisation. That fundamentally compromised all encryption operations for a two-year period.
See https://www.schneier.com/blog/archives/2008/05/random_number_b.html
For me the question isn't whether a back-doored encryption approach would prevent some crime, even some serious crime. Of course it will. There is a large overlap in the Venn diagram of idiots and criminals, so it is obvious that some idiot criminal is going to use the back-doored crypto and be caught with much fanfare. The question is what we have to trade off to get that. One risk is some rogue group getting their hands on such a key. The other is misbehaviour by its trusted custodians.
Anyone who has studied history will immediately recognise the difficulty of considering that to be "a good trade-off". Heck, we have detailed information now in the public domain about top secret intelligence operations, compromised hardware/software/algorithms because they couldn't stop one of their own from "stealing it". Colour me unconvinced on this...
Nearly 80% of the state's population live in or around Adelaide, and no-one lives in most of it. Compare that to about 65% for Sydney metro vs NSW. Don't get me wrong, they definitely have questions to answer. Running so close to the margin and with plenty of warning, everyone should have been on standby. Can't do much about freak storms knocking down your towers (other than, you know, maintenance), but this should not have happened. They have really stuffed up if SHY sounds sensible.
> The Register has filed a freedom of information request with the ATO, seeking documents explaining the nature of the outages
HAHAHAHAHA. You must be new here. When even the Attorney General's appointments over a specific date range are "too much effort" to respond to an FOI request, I really don't like your chances...
> or partitioning the document and using the same hash method to "cover" overlapping portions of it
Just be aware that whilst the pigeonhole principle shows it is mathematically certain that two colliding documents must exist when the input size exceeds the hash output size, it does not follow that two inputs smaller than the hash output cannot collide. In a well-designed algorithm, such collisions should be both very rare and computationally infeasible to find, i.e. nothing short of brute force.
> There are far more possible documents than there are hash function outputs
Known as the pigeonhole principle. Where the size of the input exceeds the size of the hash output, there not only "can be" but "must be" collisions. To those harping on about file size differences: Windows Explorer rounds to the nearest KB unless you specifically check the details in the file's properties. For most use cases a slightly different file size is unimportant, but it is impressive nonetheless that they could locate a collision even constraining themselves to a valid file format and the same file size.
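You can watch the pigeonhole/birthday effect at toy scale. A sketch in Python, truncating SHA-256 to 16 bits so a collision turns up in milliseconds; with a real 160- or 256-bit digest the same search is astronomically expensive:

```python
import hashlib
from itertools import count

def find_truncated_collision(prefix_bytes: int = 2):
    """Find two distinct inputs whose SHA-256 digests agree in the first
    prefix_bytes. With a 16-bit truncation, the pigeonhole principle
    guarantees a collision within 2**16 + 1 distinct inputs; the birthday
    bound typically finds one after only a few hundred."""
    seen = {}
    for i in count():
        msg = f"msg-{i}".encode()
        d = hashlib.sha256(msg).digest()[:prefix_bytes]
        if d in seen:
            return seen[d], msg  # two different messages, same truncated digest
        seen[d] = msg

a, b = find_truncated_collision()
```

Scaling the same birthday math to SHA-1's 160 bits needs on the order of 2^80 hashes by brute force; the SHAttered team's cleverness was cutting that down to something a large GPU budget could afford.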
Designing effective hash functions is really hard. I had to write one last year and stuffed it up big time. It wasn't a cryptographic hash but one that would see hundreds of thousands of our objects hashed into different buckets for faster dictionary lookups. So basically you had an infinite combination of inputs that had to hash to 32 bits (about 4 billion values). I managed to create an accidental swarm to zero, which meant that whilst there was good distribution generally, a substantial proportion of real-world objects would end up in bucket 0. After fixing it, the worst I saw was in the range of 3 objects per bucket out of a million.
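That swarm-to-zero failure mode is cheap to test for. A sketch of the kind of distribution check that would have caught it (Python, using the built-in hash as a stand-in for the custom one):

```python
from collections import Counter

def bucket_load(keys, num_buckets: int = 1024):
    """Hash every key into a bucket and report the worst-case bucket load
    versus the mean load. A healthy hash keeps the two within a small
    factor; a swarm-to-zero bug shows up as one bucket holding a huge
    share of all keys."""
    keys = list(keys)
    counts = Counter(hash(k) % num_buckets for k in keys)
    worst = counts.most_common(1)[0][1]
    mean = len(keys) / num_buckets
    return worst, mean

worst, mean = bucket_load(f"object-{i}" for i in range(100_000))
```

Run over a sample of real-world objects rather than synthetic ones: my bug only showed up against production data, where the fields I was mixing happened to cancel out.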
Why does the size have to be identical? As long as the artifact is a believable size you can launch an attack.
Imagine an executable file download through some sort of update mechanism that uses sha1 to validate the binary before executing it. No-one will notice that the 64MB upgrade.exe is 25KB larger. But now the attacker has replaced the intended payload with their own. It would be interesting to know about the collision algorithm. Like does it require the two files to be nearly identical? Does a large file take longer to generate a collision?
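If the updater pins a digest out-of-band, the fix is to pin one from a collision-resistant family. A hedged sketch (Python; `verify_payload` and the sample bytes are illustrative, not any real updater's API):

```python
import hashlib

def verify_payload(payload: bytes, expected_sha256_hex: str) -> bool:
    """Reject an update unless its SHA-256 matches the pinned value.
    SHA-1 is collision-broken (SHAttered, 2017), and matching file size
    is no defence: the SHAttered PDFs were identical in length."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256_hex

# Illustrative payload standing in for upgrade.exe:
blob = b"pretend this is the 64MB upgrade.exe"
pinned = hashlib.sha256(blob).hexdigest()
```

On the questions asked: the published SHA-1 attack is identical-prefix, so the two colliding files share a common prefix and differ in crafted near-collision blocks, and the cost is dominated by the collision search itself rather than the overall file size.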
It really did look like some sort of phishing attack. And certainly Google have now through lack of foresight opened up their user base to fall for the next one. They should have had a website explaining exactly why you needed to reauthenticate. Not a mystery popup!
> your fictitious drone videos them from a handy window would more likely to go unnoticed than a HDD thrashing for no apparent reason
Nonsense. It's trivial to mask. Just call the executable svchost.exe and no-one will bat an eyelid when it randomly consumes all the system resources.
You are treating this attack vector as if it is a fairy tale, but remember Stuxnet was a weapon that accidentally got out; it was designed to take out Iran's nuclear enrichment capabilities on air-gapped systems. It is not beyond comprehension to imagine a machine that is not air-gapped but is firewalled off. Sometimes the observer just needs a private key so they can MitM the legitimate channel without detection. This sort of bandwidth could send out a private key sub-second with no packets apparently leaving the network.
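The "sub-second" claim is easy to sanity-check. A back-of-envelope sketch (Python; the 10 kbit/s channel rate is an assumed figure for illustration, not a measurement of any particular covert channel):

```python
def exfil_seconds(secret_bits: int, channel_bits_per_sec: float) -> float:
    """How long a covert channel needs to leak a secret of a given size."""
    return secret_bits / channel_bits_per_sec

# A 256-bit ECC private key over an assumed 10 kbit/s covert channel:
ecc = exfil_seconds(256, 10_000)      # well under a second
# Even a 4096-bit RSA private exponent:
rsa = exfil_seconds(4096, 10_000)     # still under half a second
```

The point is that private keys are tiny; a channel far too slow to exfiltrate documents is still plenty fast enough to leak the one secret that unlocks the legitimate traffic.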
Why? It will just say something like "in the discharge of their authorised duties, the employee agrees to at all times refrain from actions likely to cause damage to the company, its suppliers, customers, associates, ...."
If your company gives you a car, you have the right to depress the accelerator or brake hard to avoid an emergency. It does not follow that you are permitted to do it for kicks until you've damaged it.
Hope he loses. What an arse hat.
WRT the collision rate of fingerprints, that is a side issue, and it actually becomes worse in some cases. Some occupations are notorious for using chemical compounds that effectively eat away the prints, so for those people the templates have far fewer points of interest and collisions become possible. Most APIs won't let such people record a template at all. But the templates are basically a set of angles and distance measurements. No two scans of the same finger would ever produce the same measurements, any more than taking two photos from a tripod could create a byte-wise identical bitmap. The question is never "are they a match" (hint: infinite FRR). It is always "are they acceptably close". That's where the complex maths starts, because you are expecting features in a similar location to distort in a similar way, and some features are missing altogether because of sloppy scans.
Most biometric APIs I have played with allow you to trade off your false accept rate (FAR) vs false reject rate (FRR). FAR and FRR are opposite sides of the same coin. You can't improve one without making the other worse. There are usually two broad use cases.
1. The person claims an identity and this is a second factor where they prove it. (Well technically they only prove they have your finger/iris/hand but you need to understand your threat model)
2. Out of a large number of candidates, decide which identity has presented their digit.
With 1, you can tolerate a much higher FAR (it's the FRR that makes usability suck). With 2, you need a very small FAR, but that does require a nicer template and a nicer scan than 1 does.
If you take a mobile phone use case, it's actually much closer to 1. You want it to unlock even with the vaguest of touches in any orientation and with any light level. You could tolerate a 1:10000 FAR quite easily. For blame purposes, you want FAR to be 1:10s of millions+.
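The FAR/FRR trade-off can be shown in a few lines. A toy sketch (Python; the scores and thresholds are made up for illustration — real systems work on template feature distances, not a single scalar):

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Estimate FAR/FRR at a given threshold from labelled match scores.
    FAR: fraction of impostor attempts wrongly accepted.
    FRR: fraction of genuine attempts wrongly rejected.
    Moving the threshold trades one directly for the other."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.92, 0.85, 0.71, 0.60]   # same-finger comparisons
impostor = [0.50, 0.41, 0.33, 0.20]  # different-finger comparisons
```

With these toy numbers, a lax phone-unlock threshold of 0.45 accepts every genuine touch but lets one impostor in four through, while a strict threshold of 0.65 shuts out all impostors at the cost of rejecting a quarter of genuine scans — the same coin, flipped.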
Humans pay no taxes. They merely pass them on as a reduction of their consumption from businesses. Just as silly an argument.
According to economics, businesses will charge as much as the market will bear and no more and seldom less. Humans will seek out the best deal for them (including a cost minimisation objective). Taxation is an input cost to businesses. So are employee salaries. So are executive salaries. Any business who wants to raise prices in response has to either have a monopoly/duopoly or hope that their competitors follow suit.
It's a genuinely interesting problem. If more folk are going to be displaced (I hate that word because it doesn't capture the impact on the individuals), then unemployment costs and pensions will rise, the tax take will drop, and the spending capacity of society as a whole will drop. That is a self-reinforcing downward spiral, so we are royally stuffed if we don't find some way of dealing with it. I think there are several issues with Bill G's suggestion (when does a piece of equipment become a robot, and how do you handle the import tariffs you would need to stop offshoring of robot labour, to give two examples) but it is worth considering in the mix of ideas.
You also need to take into account the new gas export infrastructure that has come online in recent years. Prior to this, the supply and demand equation was primarily about domestic consumers. Now gas prices follow the international market, as many producers can make more by selling overseas. As a result, wholesale prices have at least doubled, and this ruins the economic assumptions behind such plants. But hey, at least Chevron, Santos, ExxonMobil et al contribute to our collective wealth by paying a fair share of tax.....
> I've seen more fobs fail than work.
That I strongly doubt. Yes, fobs can run out of battery, but in my experience you tend to get at least a small warning, where for a few days or weeks you have to press it a few times before it dies entirely. And yes, operating them with gloves can be a challenge.
But
We have seen Jeeps get remotely driven into ditches. We have seen Nissans have their climate control activated from another hemisphere (literally). And by now some of these cars are being sold to second and third owners who are blissfully unaware that the original owner's iPhone can still unlock them. And that's before the more novel attacks from fake charging points that sideload apps, as demonstrated just this week, which could quite easily grab those credentials and the GPS location where that phone is often kept.
Now I grant that water can block some frequencies used by key fobs, but frankly if the ice is that thick, you ain't even getting to the handle, forget about driving it today.
> Tried using it on a frozen winter morning in the dark
No. Temperatures around here seldom drop that low and my car is garaged. And the transponder on my keyring does a reasonable job of unlocking the doors even if there is ice over the lock. There's just no need to do it over the internet. It adds a whole bunch of security attack vectors. The only reason it's there is so they can add an extra bullet point on their feature comparison when you are picking your trim level.
No, I don't mean legacy stuff using MFC. Of course that uses it, but not all things that use it are MFC. If I look at the processes on this system that have a handle open to gdi32, I can see 148 of them, including things like Firefox, Chrome, cmd, devenv (VS 2015), Notepad++, powershell, Process Explorer (ironically), sqlservr, w3wp (IIS worker), and of course the various office applications, updaters, svchosts etc. I literally just created a new, otherwise empty WPF application, and even it loads gdi32 when you double click it.
That is the Graphics Device Interface library (the 32 comes from Win32, not the process bitness). You know, responsible for minor things like drawing lines and shapes, painting bitmaps on your screen (actually, even a printer is considered a canvas) and rendering text. Basically the 2D stuff.
Whilst it has its quirks, even the modern frameworks will, at the end of the day, be interacting with it at the bottom of the object rabbit hole. They could abandon it, I guess, but that would just break backwards compatibility with all the Win32 applications out there, and, well, WinRT never really took off. BTW, for 32-bit processes on 64-bit machines it is really just a shim that translates calls into the equivalent 64-bit instructions. I can understand why they may be cautious about changes. At low levels, the medicine can easily be worse than the illness.