* Posts by dan1980

2933 publicly visible posts • joined 5 Aug 2013

Victorian government teacher-laptop scheme illegal, says judge

dan1980

The rule is simple: if your job requires you to use some specific piece of equipment then your employer supplies it.

Voting machine memory stick drama in Georgia sparks scandal, probe

dan1980

I don't care for the Diebold vs not-Diebold or the fraud vs mistake argument.

They are valid, sure, but arguing about them risks diverting from the main point, which is that electronic voting machines have been proven worse than good old paper and pen.

Yes, a paper ballot requires plenty of staff to make it all work but it's no more expensive (actually cheaper, I believe) than an electronic system. And yes, there are opportunities for fraud but the nature of the process means that it is much more difficult to impact the outcome.

The question is: what is the benefit of electronic machines over paper?

TPP: 'Scary' US-Pacific trade deal published – you're going to freak out when you read it

dan1980

". . . and that the deal is so broad and balanced across multiple industries that claims to know what the impact will be are wildly speculative at best."

I don't know the author personally, nor any of his other work, so I can't determine the tone of this statement. So, that disclaimer aside, is that really supposed to be a comfort?

One of the most important criticisms of FTAs is exactly this: that it is very difficult to work out what the result will be. Which prompts one to wonder - if the impact of such an agreement is unknown at the time of ratification then on what grounds have the participants decided to engage in the process in the first place?

Saying that trying to predict the real-world impact of an FTA is "wildly speculative at best" implies that when our governments assure us that the results will be a boon for the country and its people, they are putting forward a proposition for which they have no evidence - it could just as well be that Australia (e.g.) will get utterly shafted.

FTAs are, conceptually, a good thing. I have never been accused of having even a toenail on the right-hand side of the political/economic/social line but it is my strong belief that businesses are the heart of a country, just as they were the heart of small communities as we trace our social history back through time. Businesses and investment in businesses allow people to be employed and taxes to be paid which - ostensibly - allows for an overall improvement in the community/country.

So, to the extent that FTAs have the potential to increase these measures, they have the potential to increase the quality of life for those covered by it.

BUT, the broad nature of modern FTAs means that gains in one area can be, and often are, offset by losses in another. This comes about due to the negotiations between governments and the strengths of various lobbies in each country.

To directly address one of the author's points, however: he talks about the role of FTAs in simplifying the laws that have built up, piecemeal, over generations. That is not incorrect. BUT, implicit in that assertion is that, as times change, so should our laws and agreements, to reflect the new situation(s). Again, I agree with this underlying assumption.

We should, however, take this further than just the laws governing tariffs and imports and regulations. We should also apply this logic to the concept of 'Investor-State Dispute Settlement' (ISDS). These provisions were added to agreements due to the risks faced dealing with countries that had poorly-developed legal systems. And, while that was an important consideration 30 or 40 years ago, it just isn't so much anymore, and especially not when the agreements are between countries with well-developed legal systems. Say what you might about lawyers but to suggest that countries like Australia and Japan and New Zealand and the US and Canada have poor legal systems is ignorant at best. (But more accurately, grossly insulting to the detailed, exacting and rich history of these countries and their legal traditions - many of which share a common heritage.)

And that's one of the biggest elephants - ISDS was developed for the express purpose of protecting investors against the possibility that they would lose their investments due to the volatile nature of politics in less-developed countries. The idea being that the less stable a political system and the less independent and rigorous a judicial system, the less confidence an investor will have and thus the less likely an investor will be to, well, invest.

That's the heart of it - investment is considered a good thing (I make no assertions as to whether it actually is good or not) so it is logically reasoned that any agreement aimed at promoting investment must address barriers to that goal. ISDS was implemented to address this in countries with unstable political or legal systems, and so there is no reason for ISDS provisions to exist in an FTA between the US and Australia.

Why? Because the problem that ISDS solved does not exist in these modern countries. Hell - not only are the legal systems in the US, Australia, Canada and New Zealand well-developed, they share a common history in the British legal system!

Startup founder taken hostage by laid-off workers

dan1980

Re: re. "a mass streamlining ....etc"

@frank l

While I do agree - in jest - with that, I think that, language aside, this mass lay-off was actually warranted. I don't say that lightly but it seems that their only other choice was to close shop and thereby lose everyone, which would be the worst outcome.

They were floundering and the hot cash injection seemed to come with the condition that they quickly shed large amounts of overhead. Unfortunately, there is simply no quicker way to do that than to close offices.

It's regrettable, to be sure, but the fact that this was nearly forced upon them, coupled with the fact that the founders themselves went out to deliver the bad news in person, means that I think this is one of the (few) instances where a mass layoff of staff has not been done in order to increase the value of the stock by half a percent or to reach a target for some performance bonus. (Which amounts to taking all the pay from the fired staff and giving it to the CEO - 'blood money', really.)

So, since this did not seem to be purely for profit but was instead a last-ditch attempt to keep the company alive, and taking into account the personal responsibility the founders evidently felt (otherwise why go in person?), I think that, while the frustration and upset of the staff is fully understandable, their behaviour is reprehensible.

AMD sued: Number of Bulldozer cores in its chips is a lie, allegedly

dan1980

Re: Everyone knows

That's, in a way, my point above - that perhaps we are moving towards some hybrid architecture where the idea that a 'core' necessarily must contain a dedicated FPU is not useful anymore.

As a layman - and corrections are very, very welcome - I wonder if such a hybrid architecture might have elements of this AMD part, which is to say that the FPU as a component of the CPU could be shared between several cores and used just for those functions and instructions that can't be efficiently offloaded to a GPU-style processor.

The plaintiffs are essentially asking the judge to legally define a 'core' such that it must contain a full, dedicated FPU. To me that sounds a bit restrictive.

dan1980

"The lawsuit . . . claims it is impossible for an eight-core Bulldozer-powered processor to truly execute eight instructions simultaneously – it cannot run eight complex math calculations at any one moment due to the shared FPU design, in other words."

Okay, so let's try to understand what this is saying. I am not really qualified in the area of processor design but I can read a sentence good.

It seems to me that the plaintiffs are using the term 'instruction[s]' in a very specific, restricted sense to mean a "complex math calculation" and therefore one that must engage the shared FPU. Unless I am gravely mistaken, however, there are plenty of instructions that would not need to engage the FPU.

In essence, the plaintiffs appear to be attempting to exclude such 'instructions' by definitional fiat. So, one suspects that this case will end up involving rather a lot of complex expert testimony regarding exactly what the definition of a 'core' is.
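For what it's worth, the distinction I'm drawing can be sketched as a toy model - purely illustrative, not a claim about real Bulldozer timings - of two cores in a module sharing one FPU: integer streams run fully in parallel, while FP streams have to take turns.

```python
# Toy model (not real silicon): a Bulldozer-style module with two cores
# sharing one FPU. Each core has its own integer pipeline, so integer
# ops never contend; FP ops must serialise on the single shared FPU.

def cycles_needed(core_a, core_b):
    """Cycles to retire two instruction streams ('int'/'fp' lists).

    Per cycle: both cores may retire an integer op, but only one
    may use the shared FPU; the other FP op stalls a cycle.
    """
    a, b = list(core_a), list(core_b)
    cycles = 0
    while a or b:
        cycles += 1
        fpu_busy = False
        for stream in (a, b):
            if not stream:
                continue
            if stream[0] == "int":
                stream.pop(0)        # own integer pipeline, no contention
            elif not fpu_busy:
                fpu_busy = True      # grab the shared FPU this cycle
                stream.pop(0)
            # else: stall, waiting for the FPU
    return cycles

# Two all-integer streams: 4 ops retire in 2 cycles (true parallelism).
print(cycles_needed(["int", "int"], ["int", "int"]))  # 2
# Two all-FP streams: 4 ops take 4 cycles (serialised on the FPU).
print(cycles_needed(["fp", "fp"], ["fp", "fp"]))      # 4
```

Which is the point: whether the module counts as 'two cores' depends entirely on how much of your workload looks like the first case rather than the second.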

I have an AMD proc on one of my home PCs, an Intel in the other. An Intel in my laptop and HTPC and an Intel on my work PC. I manage a shed load of servers and there is a mix there (though slightly favouring Intel). So I really don't support one or the other and, while the chip business is a bit shaky at the moment, I think AMD is vital for a healthy industry - many here will remember the increase in power in Intel chips that came from the competition of AMD.

In the end, I think it would be exceptionally arrogant for any judge to rule that the Bulldozer 'modules' do not in fact contain two 'cores' because to do so would be tantamount to legally defining exactly what constitutes a 'core'. Perhaps some might think that's not so dreadful an idea but I doubt a judge would be inclined to do that - especially considering that, as a poster above mentioned, there are plenty of historical processors that had no FPU at all and there is always the possibility that future architectures will structure this relationship differently, as AMD have done.

If you have a legally bound definition of a 'core' then that means processor manufacturers will be forced to design within that definition or risk having their processors viewed as inferior due to having fewer 'cores'. And that has the potential to stifle innovation.

We are seeing a big rise in the importance of GPUs and one can imagine that the future will bring us architectures that meld these two together. What if these new parts don't fit neatly into a legal definition of a 'core'?

Perhaps at that point we may need new terminology anyway, but the point is there - in such a highly technical, continually evolving field, legally defining what some technology or term is runs the risk of more-or-less forcing vendors to fit their research and production into that box.

Top FBI lawyer: You win, we've given up on encryption backdoors

dan1980

Arrogance is definitely part of it - not just an arrogance when it comes to their abilities and knowledge but also when it comes to their 'mission'; they believe that their purpose and their function is more important than any petty concerns like privacy or freedom.

In other words, they are either self-righteous or stupid (possibly both at the same time). And that is a generous way to read it. The other way to interpret their behaviour is as outright fascism.

But, going back to what you said about 'time pressure', that's not relevant here because they weren't developing something in house - they were asking the 'tech community' to come up with the solution. And yes, there are certainly times when you end up making a bunch of assumptions as shortcuts so you can get on with the work. That's valid in some situations, but the important thing in such an approach is that, when someone else (or indeed everyone else) points out not only that your assumptions are wrong but exactly how and why they are wrong, then you should bloody well LISTEN TO THEM!

That, to me, is the most damning thing about this affair - whatever excuses and explanations can be trotted out for why this plan was devised and pushed for in the first place cannot be used to explain why they persisted despite the advice of all the experts who weighed in on it.

In that way, it's similar to the elliptic curve cryptography PRNG blow-up (last year?). In that instance, it seemed that the NSA developed an intentionally flawed cryptographic standard that was then published and promoted by NIST. There were several detailed analyses from experts in the field dissecting it and showing exactly where the weaknesses were but NIST continued to promote it.

dan1980

"Maybe that is scientifically and mathematically not possible."

Maybe?

You could add that it was also plain stupid and displayed a dangerous* lack of understanding or a truly Orwellian lack of concern for anything beyond the power of the government.

So, even assuming that this admission is 100% genuine and the FBI really understands that this isn't the way forward, security is the very last place that one should indulge in 'magical thinking', and the fact that an agency (supposedly) as important to the safety of the public as this one pushes for measures without actually understanding the environment, the technology or the consequences is thoroughly damning.

In using the term "magical thinking", along with the reference to the "amazing technology sector", they are attempting to imply that they were doing it for the right reasons and had good intentions but perhaps dared to dream a bit too big; that they put too much faith in the IT world to get creative and sort it out.

But that is rubbish. The FBI shouldn't be engaging in 'magical thinking' in any part of their jobs. EVERY expert in the tech and security sectors pointed out the faults from DAY ONE. What is their excuse for not listening to them until now? Hell, let's go back further - if they didn't have the knowledge and expertise to start with then why didn't they at least run the idea by those experts beforehand?

That's a far larger and deeper problem than just being optimistic or simply being dumb - it's being arrogant and reckless and believing that, because terrorism, they should be allowed a free pass to do what they want without having to be accountable. It's exhibited every time a spokesperson for these agencies talks about how people against some measure are 'weakening our security' and 'putting lives at risk' and 'enabling criminals to roam free'. It's the implied assertion that due diligence and caution and placing a value on personal privacy and rights are all unimportant red-tape that must be cleared because the threats are so dire and so immediate that we can't waste a single second on doing things properly.

It's the unspoken assertion that when you allow rights and privacy and any kind of expectation of a free way of life to get in the way of security then the 'terrorists have won'.

And that's what they really need to apologise for and really need to change.

* - Literally. Given the positions of the people pushing for this, their stubbornness in demanding things without understanding or caring about the negative consequences can't just be laughed off as they have the power to steamroll any objections.

Drones are dropping drugs into prisons and the US govt just doesn't know what to do

dan1980

Re: Kaboom

@Aqua Marina

Let's put aside the 'shoot them down' part because there is still the question of exactly how you would accomplish that. I'm not saying it's impossible - just that it is something that would need at least some measure of discussion on which method or technology would be best.

The first part of your comment, however, is SPOT ON. You start by making some laws restricting drone use in sensitive areas.

That's a roll-a-six-to-start because if it's not illegal then what right do you have to stop the drones flying overhead in the first place?

GCHQ's CESG team's crypto proposal isn't dumb, it's malicious... and I didn't notice

dan1980

The goal of all this is not just to track phone or internet records or whatever any given proposal or law is targeted at.

The core goal, the real aim, is to have every person uniquely identifiable to the government and for that identity to follow them wherever they go and be attached to whatever they do.

They want an internal passport.

Not only that, but they want that passport to tie into and be required for everything you do. In regards to phone numbers and IP addresses, the goal - I am nearly sure - is to have these identifiers attached not to hardware like phones and routers, but to people, such that when you use a phone you have to log in (in whatever fashion) and that phone is then assigned your phone-number equivalent. Modern VoIP systems already have this feature - Cisco call it 'Extension Mobility'.

Every device, of course, would also have a unique identifier (IMEI for mobiles, but a similar system for 'fixed' phones) and it would be easy to have phones able to dial out without anyone logged in, but to restrict this to certain emergency numbers so as to handle that concern.

To receive calls, of course, you must be logged in and this provides further benefit for tracking people.

Perhaps that all sounds a bit far fetched but spooks want to be able to uniquely identify who is making or receiving a call and they have indicated that they want to do so with phone numbers - something not currently suited for the task. One interpretation of that is that they are just being idiots. BUT, another way to interpret the intersection of those facts is that spooks want phone numbers to become unique identifiers of the person using the phone at that moment.
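To make the speculation concrete, here's a hypothetical sketch of that 'number follows the person' model - loosely in the spirit of hotdesking features like Cisco's Extension Mobility, but all names, numbers and methods here are made up for illustration, not any real system's API:

```python
# Hypothetical sketch: personal numbers attach to people, not devices.
# A person logs in to a device; their number follows them; incoming
# calls route to whichever device they are logged in on.

EMERGENCY = {"000", "112", "911"}  # dialable even when logged out

class NumberRegistry:
    def __init__(self):
        self.logged_in = {}  # device_id -> personal number

    def login(self, device_id, personal_number):
        # The person's number attaches to whatever device they pick up.
        self.logged_in[device_id] = personal_number

    def logout(self, device_id):
        self.logged_in.pop(device_id, None)

    def can_dial(self, device_id, dialled):
        # A logged-out device can still reach emergency numbers.
        return device_id in self.logged_in or dialled in EMERGENCY

    def route_incoming(self, personal_number):
        # Find the device the person is logged in on - which is
        # exactly what makes this model attractive for tracking.
        for device, number in self.logged_in.items():
            if number == personal_number:
                return device
        return None
```

Note what falls out of the design: to receive any call at all, you must identify yourself to the network first, and the registry is, by construction, a live record of who is where.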

Anyone have a spare roll of foil? I'm almost out.

dan1980
Meh

Any colour, as long as it's black

Is it just me or are we about to find that the US ends up spying on, monitoring and slurping from its citizens less than the UK and Australia?

As I commented in a previous post, it seems to me that the AU and UK governments have watched, with distress, the backlash against the revealed surveillance, and the way laws have lapsed and compromises are being investigated as a result. As they have watched this example (however weak) of 'democracy in progress', they have become very scared indeed that the people might somehow get a say in how they are governed.

And so they are each doing their level best to enshrine as much surveillance in law as possible - casting such laws in the vaguest and most permissive of language, protected only by bare assertions that everything will be fine, really.

The ridiculous two-party system leaves us with zero choice because both sides want the same thing, and so our votes mean nothing on these matters. We get the choice of intrusive, unregulated surveillance or intrusive, unregulated surveillance. The only choice you have, at least in Australia, is to choose one single letter; you can have your intrusive, unregulated surveillance implemented by someone with the letters ALP behind them, or you can have your intrusive, unregulated surveillance implemented by someone with the letters LNP behind them.

Yay democracy.

Join The Reg in Sydney for a FAST Christmas

dan1980
IT Angle

Small problem . . .

I am greatly amused that, on the signup page, the t-shirts come in medium and above only. Is that a comment on the general fitness level of IT bods?

Also, where is the sales part being held and will there actually be some hands-on of the gear and some demos of deploying and managing it?

Like many in IT running virtual workloads (rather than racks of NoSQL boxes) I am interested in the new wave of converged boxes and have previously considered rolling my own with the various software packages available to share storage and cache to SSDs but, again, like many in IT, just can't find the time.

But I've really no interest in a sales pitch - I've seen the fluff and read the somewhat ambitious numbers and what I need now is actual face-time with these things because we don't have the resources to buy a couple for testing.

Jet boating is fine and a beer to see off the last of the sun is always welcome but if we can get some clarification of exactly what we'll be getting out of the 'boring' part, that would be helpful.

When it comes down to it, the thing - beyond some hands-on - that would be most helpful is a frank discussion about what particular workloads, and at what scale, this makes sense for over other setups, like the traditional cluster+SAN. Because, let's face it, this kit is bloody expensive - at least in these early days - and one can easily deploy a perfectly capable SAN-backed cluster (packing more compute power) for the price of a trio of these units. Especially if you're already running other such systems and have that expertise in house.

Thanks,

d

Former nbn CEO Mike Quigley ends his silence, unloads on government

dan1980

Re: It's all politics

@P.Lee

This is exactly my point - both plans were going to be dreadfully planned and ineptly managed and thus unnecessarily costly and delayed.

But that is de rigueur for any government project, nearly a given for a government infrastructure project and a certainty for a government technology project. Thus, if the NBN - a government technology infrastructure project - did not run wildly over time and budget and was not plagued by inefficiencies and mistakes from day one, then I would start looking over my shoulder, waiting to see a couple of dudes on horses.

In the end, one plan would have balanced the pain and expense and delays with a good, future-proof system that unshackled us from Telstra; the other will not.

dan1980

"It's regrettable, Quigley told the program, that this particular omelette can't be unscrambled: the Telstra agreements in place and nbnTM's ownership of the copper assets make it impossible to return to FTTP."

Which is exactly the way Abbott wanted it.

It doesn't matter whether one thinks FTTP was the best solution or not, or even whether the NBN as a whole was a good idea - or not. The fact is that it was there, on the table and underway, and Abbott wanted it destroyed.

The contracts that the LNP have signed the Australian people up to ensure that any future ALP government wishing to bring back the original FTTP plan will have to incur great costs in doing so, which is something that the LNP will be able to criticise loudly from opposition.

They have salted the proverbial earth in an act of political and ideological spite and I, for one, feel utterly betrayed by those who actively made this happen, and have done so without caring one iota that we, the tax payers, will be paying for this for decades.

It is a fact that an FTTP infrastructure is more future-proof than an FTTN one and a fact that the longer an FTTP network is in place, the cheaper it will become compared to an FTTN network. No serious analysis has ever suggested otherwise.

Given that, it doesn't matter how much bandwidth people want now. Even if, as people like aberglas (above) assert, the vast majority of people neither want nor need the speed that FTTP could provide, it would work out cheaper in far fewer years than one might think.

Of course, one might argue that the whole NBN concept - regardless of technology - was inherently bad and wasteful, but it was already underway and so we must look at the outcomes from that point.

I am not joking, nor exaggerating, when I say that I feel betrayed by Abbott and Turnbull here and, though I should be hardened to this kind of disappointment, it almost brings me to tears - and, again, I am not exaggerating. This has cemented what I always knew anyway: that our politicians put the people last.

At Microsoft 'unlimited cloud storage' really means one terabyte

dan1980

Re: Oh, the productivity, the fliexbilty, the POSSIBILITIES! @dan1980

@Sandtitz

"Maybe MS just asked the user?"

That is absolutely a possibility. I would think it's a rather slim possibility, but you are correct - they could have done that.

dan1980

Re: Free storage isn't free storage

@Martin an gof

"I don't think 1TB is at all excessive these days."

Indeed. But more importantly, Microsoft didn't think it was excessive either, making reference to the increasing amount of data people need to store and how the landscape has changed over time, such that the increases (from 7GB to 15GB for free and from 1TB to 'unlimited' for O365) represent what is suitable and reasonable now. Or at least at the time they made their announcements.

Apart from questions such as: "how can you abuse an unlimited service?", the biggest question this raises is about just how viable cloud storage really is.

Many people see it as the future and I certainly agree that there is a place for it in a great many situations. (Not all, by any means.) But a move like this really must make people stop and think. If your cloud vendor tells you that limits are "a thing of the past" and encourages you to start backing up your PC and shifting all your files over with the knowledge that you'll have enough space for everything - not just now but in the future as your needs grow - what happens when they turn around and pull the rug from under you?

It undermines confidence in the entire cloud model in my view because it brings home just how much you, as a customer, are at their mercy. Larger clients may well be able to secure contracts with guaranteed levels of service and provision but many of the smaller companies can't.

It's a very, very bad look.

dan1980

Re: @dan1980 "It's not just that they should have expected this to happen;..................

@Arctic fox

Indeed it is quite the feat to have Bott disapprove!

dan1980

Re: Free storage isn't free storage

@Graham Jordan

Trivially easy for some. One of my clients is a professional photographer and generates huge amounts of data. His main home storage unit is an 8-disk RAID-6 array with 1TB drives - 6TB usable - and he is looking at buying a new unit that can handle 3TB drives. He writes his data to this and to 2TB external drives, which go offsite.

Another client is a geologist and the magnetic anomaly data generated is quite impressive. Have a friend who is a freelance graphic designer. Her current storage is 4TB but there are numerous extra drives lying around and if it could all be consolidated then that would be handy. Another client is a web designer who deals with a fair bit of animation - they're at around 6TB over two servers and several backups drives.

One of my colleagues runs a relatively small test lab at home and that's over 3TB of VMs.

Point is that it's really not that difficult to have significantly more than 1TB that could be stored in the cloud. In all of the above instances, it would be used for a second-level backup, mostly for previously-generated resources that are rarely needed but need to be kept safe and off-site all the same.

dan1980

Re: Hmm, yes. When even Ed Bott* reacts in the following way.........

@Arctic fox

It's not just that they should have expected this to happen; they were essentially encouraging it.

Okay, maybe they didn't fully appreciate what some people might do but they did say that unlimited storage provides unlimited "possibilities".

What they are saying - or at least conveying - is that once you remove the limits the service becomes something qualitatively different and this is a point of difference and a reason to choose OneDrive over competing services.

The point is that, for some people and some uses, the difference between, say, 1TB and 5TB is irrelevant. If you've got 20TB of data, what good does it do to have a service that goes from providing 5% of what you need to 25%?

One of the 'yay us, we're great and isn't this awesome' explanations of why it was awesome and, moreover, responsive to customer needs, was that this would enable people to store all their stuff in one place. This is an outcome/configuration that Microsoft have advocated and talked-up as being a major benefit of these capacity increases and the O365 OneDrive limit removal.

To come back and say that they're discontinuing the service because some people are actually storing all their stuff on there is as amusing as it is predictable.

dan1980
Headmaster

Re: Oh, the productivity, the fliexbilty, the POSSIBILITIES!

Sorry. I just want to apologise for the unforgivably poor spelling of the word "flexibility".

And, honest truth, when I wrote the above line, I wrote: "felxibility". Damned inability to order my letters correctly.

dan1980
Meh

Oh, the productivity, the fliexbilty, the POSSIBILITIES!

Reading through the linked post explaining the change, we see:

". . . a small number of users backed up numerous PCs and stored entire movie collections and DVR recordings."

The first question that popped into my mind on reading this was: how the hell do they know what people are storing on there?

Yes, of course, they have the technical ability to find out - it's elementary - but what does it say for how much Microsoft respect their customers' privacy? Okay, these few people were really using rather a lot of space but it was billed as an UNLIMITED service so they can hardly be judged to be taking the mickey.

I mean, that's the very point of an UNLIMITED service - you can do things with it that you simply can't with a limited service, and that includes backing up a few dozen PCs and some servers and it includes digitising your entire Blu-Ray and DVD and CD and recorded TV collection and it includes storing the results of your professional photography and videography business so you can share links of selected works with clients and it includes creating a temporary storage point for your entire virtual infrastructure to enable the IT support at a branch office on the other side of the world to selectively download reference machines for testing without having to build and maintain an entire parallel system full-time.

So, really, what's the point of an UNLIMITED system if it's not to enable you to do those things that other, limited systems won't allow?

The answer, of course, as explained in the article, is that it was there as a PR stunt - a bullet point in marketing material that Microsoft could hold up as unique. They presumably figured that the overall effect would be only a small increase in storage needs and it would thus be a quick and cheap bit of PR.

It is telling, however, that the limit they have settled on is exactly the same as the limit that was in place before the move to 'unlimited' storage: 1TB. If it's just a few people storing huge volumes of data then why not instead drop it to 10TB, or even 5TB*? Likewise, why drop the 200GB and 100GB paid plans and limit them to 50GB for new sign-ups, or slash the 15GB free storage to 5GB?

The last one is particularly amusing considering that 15GB limit was an upgrade from a 7GB limit, meaning that the decision to drop to 5GB is a 30% decrease compared to what it was before the upgrade.

With that upgrade, Microsoft explained that:

". . . we believe providing 15 GB for free right out of the gate – with no hoops to jump through – will make it much easier for people to have their documents, videos, and photos available in one place."

So, if doubling storage to 15GB - "right out of the gate" - will make it easier for people to have all their stuff "in one place", what does cutting it by two thirds do?

At that same time, they also told us how they were giving customers "as much flexibility as possible" by providing their monthly subscriptions at "dramatically reduced rates" and thus giving you the option to purchase 100GB of extra storage for $1.99 or 200GB for $3.99.

So again, if the option to purchase a 100GB or 200GB plan is providing "flexibility" at "dramatically reduced rates", then how are we to interpret the decision to not only cut the number of options in half but to cut the storage in half as well, while of course keeping the price the same? One can only conclude that this move is designed to reduce "flexibility" and to do so at dramatically increased rates.

But back to the big one - the axing of 'unlimited' storage for OneDrive for Office 365 - we were informed that we could "get more done on the devices [we] love" because, with "unlimited OneDrive storage coming to Office 365 . . . the possibilities are, well, unlimited."

An example of just such a possibility was provided by Chris Jones (VP for OneDrive & SharePoint), who suggested that customers:

" . . . get started using [their] 1 TB of storage today by backing up all those work files kicking around on your PC – with the knowledge that even more storage is on its way!"

That's right, folks, start those backups and copies safe in the knowledge that there'll be plenty of room not only for the backups but for all the other 'unlimited' possibilities you will now have. So don't stint and don't worry 'cause Microsoft have got you covered and if you can dream it then you can do it.

Because, you know, "storage limits [are] a thing of the past with Office 365 . . ."

----------------------------------------------------------------------------------------------------------

* - The reason, I believe, is that if you provide unlimited storage, you can no longer wax lyrical about what wonders the latest incremental increase will enable. If the storage is unlimited then there's no way to increase it and thus one less opportunity to try and convince everyone how awesome and responsive to user needs you are.

In other words, if storage limits are a "thing of the past" then the, perhaps unanticipated, repercussion is that being able to drip-feed storage limit increases for PR is also a "thing of the past".

Net neutrality debate: If startups want to rival Google, they must show some green to telcos

dan1980

Re: A bit like leeches

@Only me!

"What I should pay for is an unmetered service at say 100 Meg."

Actually, here is a point - the 'unmetered' part. People in the US, however much they complain about speed, take download quantities for granted. In Australia, we have had limits since day dot.

One might argue that download limits are not the same thing as available bandwidth and thus it costs a carrier no more for you to download 1TB than it does for you to download 1MB. This may indeed be strictly true but the missing bit is that people expect to be able to download that TB at a decent rate and that is where the problem comes in.

You see, what download limits achieve is to limit how frequently people are downloading, which limits the amount of concurrent use in the network, which limits the bandwidth being used at any one time, which increases the speed for everyone.

That's a generalisation to be sure but then that's exactly the point - it's something that works in aggregate.

If you have 100GB per month, you may not necessarily spend an entire weekend streaming HD video from Netflix, whereas someone on an unlimited plan has no reason not to do that every weekend - social life notwithstanding. Likewise they will torrent files (legally or illegally) and be quite comfortable running backups every day (while they're at work) of their home PCs to a cloud-based backup service.

All this is wonderful for the user but the more subscribers doing this, the less bandwidth is available for everyone, including those who just watch a movie now and again.

Perhaps it is just a cultural difference based on our historical experiences, but I have no problem with the concept of metered downloads and in fact I think it's a relatively fair way to structure things.

The idea - in theory at least - is that the higher the volume of data you download, the longer you are going to be consuming the (limited) bandwidth and thus the more of an impact you will have on the speed achievable for everyone. To accommodate that while keeping speed constant requires more capacity, so the extra money from your larger plan goes toward the periodic upgrades that make sure capacity keeps up with demand.

Of course, that's naive but it's not completely ridiculous. Until recently in Australia, we had a truly excellent ISP: Internode. They were always my no.1 recommendation for personal users because they were actually a little more expensive than many others but they really did re-invest that money back into their network, and so we found that the overall experience with them was noticeably better than with most other ISPs.

Now, if you are really paying the proper cost for a 0-contention line then that is another matter because you should be able to get that speed 24/7 regardless of what anyone else is doing, but the vast majority of residential services simply don't work like that and are much cheaper as a result.

dan1980

Re: A bit like leeches

@frank ly

Or, to use a popular analogy, you could charge trucks on a toll road based on the value of the goods they were transporting, rather than their weight (which is what the extra charge for trucks over cars and cars over motorbikes amounts to).

The important consideration is that there can be no such thing as a 'fast lane' without a 'slow(er) lane'. If you have the capability to transmit at a certain speed/bandwidth then that is something you must actively be withholding from other clients. There's just no two ways about it.

dan1980

Re: Basic economics, is all

@Yes Me

There is one fundamental difference between adding more lanes to a road and adding more bandwidth to links: with a road there is a real physical barrier - the availability of land. That same barrier is just not relevant when it comes to Internet links.

Yes, increasing bandwidth may require some physical space, but it is just not comparable.

The point is that adding more lanes to roads brings in a lot of considerations and hurdles whereas adding more bandwidth to your links is largely just about the providers investing the money.

Anti-adblocker firm PageFair's users hit by fake Flash update

dan1980

Re: Hah!

@nematoad

Regarding 3rd party services 'enhanc[ing] functionality', well, some of it is indeed for the visitors. A CDN can certainly make for a quicker and more responsive experience as well as faster downloads of files.

Other third-party tools can be javascript libraries used to build portions of the site, such as jQuery, wForms/qForms and js Charts or DateJS and these can definitely be used to 'enhance' a website. That's subjective, of course, and a prettier menu does not necessarily equal a better experience but a website with well-built forms with good validation that are able to parse information in all the myriad ways that people may enter it, well, that is good for everyone.

Other examples are pre-built engines for things like eCommerce, which can offer a far broader range of payment options and, generally, better security than many smaller sites could offer on their own.

Yes, many third-party services benefit people other than the visitor but it's far too general a statement to say that all third-party services do.

dan1980

Re: NoScript

@Steven Roper

I'm with you. I use NoScript and, while it's a pain to enable things selectively, for me it is better than the alternative. And I never enable trackers - even if the page won't load without them. I enable one thing at a time until the content I want comes up, but never trackers. (At least the ones I can identify.)

I am no web developer but it is insane the number of websites that will just be unusable - in any way - without a half-dozen sets of JavaScript.

Doctor Who's The Zygon Invasion shape-shifts Clara and brings yet more hybrids

dan1980

And all this time Clara was asleep and has no idea who the Doctor is?

Not likely (the second part) - she was cloned when she went to find the child (Sandeep's) father. Also, why Clara in that scenario? That supposes that the Zygons cloned an otherwise unremarkable and unimportant human and then worked to get that person into the Doctor's company and trust. Far, far more sensible to just clone someone already in that position.

One must also remember that these rogue Zygons are apparently a NEW breed and so they have only actually come about (born/hatched/grown/whatever) in the recent past, well after Clara met the Doctor. Unless, of course, they have time travel capabilities but that has not been hinted at and seems unlikely anyway.

Okay, I'm a nerd - it's official.

dan1980

Incorrect: Ingrid Oliver is lovely. (And funny.)

South Australia: Great for wine, murder, insecure, outdated over budget government IT

dan1980

Re: Someone should write SA Health an interoperability paper

Then well done for your work, in vain though it was.

Reports and studies and papers and analyses are used for two things in politics: to tell them something they already want to hear or to be seen to be doing something without actually having to, you know, do anything.

Perhaps this is why so many gov IT projects fail - this process has alienated anyone who might otherwise want to get involved as they know their work won't be appreciated and that technical considerations will give way to political concerns.

dan1980

Well now I know that the Reg's southern bureau is losing the plot. To have Richard report that an Australian government IT project was poorly planned, poorly implemented, is going over budget and doesn't do what it's supposed to, well, I just can't believe that . . .

(Irony)

Does anyone involved in these projects have the first idea how to run them? Budget overruns happen - not just to government projects - and that's something you should, well, budget for. Or at least be prepared for. What's not really acceptable is ending up with a system that doesn't address the real requirements and needs because accurately identifying what a system must do is fundamental to designing it in the first place.

It's a roll-a-six-to-start.

I am reminded of a review document dissecting what went wrong with an asset management system the NT government deployed. In it, people admitted that they simply hadn't made any real attempt to understand the processes and procedures currently in place and how the new system would need to be designed to address them or, on the other side, how the procedures and processes might need to change to fit the new system.

Further, there was core functionality that didn't exist because no one had taken the time to learn and understand what the current system did.

That's the part that baffles me because I don't understand how a project can even get off the ground without those involved first forming an accurate understanding of what the project needs to achieve.

Telstra claims ideas created in Hackathon as its own for 18 months

dan1980

Re: All your base are belong to us

Because you didn't read the T&Cs.

I expect there to be the usual explanations of a 'misunderstanding' from Telstra and a change in language.

Ransomware victims: Just pay up, grin, and bear it – says the FBI

dan1980

Paying ransomware seems rather on par with paying patent trolls: prudent for an individual in the short term but perpetuating the problem in the long term.

Dad who shot 'snooping vid drone' out of the sky is cleared of charges

dan1980

@Your alien overlord - fear me

No, high-powered green lasers are a genuine issue for pilots and, certainly in Australia, if you were found to be aiming them upwards, it might not go down overly well.

I would suggest that some kind of counter-drone would be a fantastic idea. It would 'need' a camera to record the incident, as evidence if needed, a second camera to detect drones and help 'home in' on them and then a mounted laser that disrupts the camera.

Of course drones come in many configurations so that might be difficult but I'm sure someone clever could figure it out.

If not that then another option would be for a specially hardened drone that could disable other drones simply by hitting them. It wouldn't need to be overly forceful.

It's almost time for Australia's fibre fetishists to give up

dan1980

Re: No..that's stupid.

@matthew24

Actually, the potential of a network is indeed relevant, though not in any way the whole story.

Many people who criticise the FTTP plan cite the figures you just have, expecting that these numbers are the only reason to implement such a network.

Not so.

Yes, fast speeds are important, but more important is that fast speeds are available everywhere. The reason being that we currently have a rich cousin/poor cousin situation where some areas are well served and others are poorly served. This matters because internet communications are a vital part of modern infrastructure, as important as postal services were in ye olde days. By that I mean that the ease and speed of information transfer is directly relevant to businesses and, with our population centres growing while regional areas stagnate, it is vital to start boosting growth in the less traditional locations.

This was part of the point of a homogeneous network - to enable businesses and people to access adequate services no matter where they were, thus opening up pathways for investment in more areas. As I have said more than once, I, personally, can vouch for clients who have either shelved expansion plans or changed branch office locations based on connectivity. That might sound far-fetched but when one location requires $15k for fibre install and ~8k/mo for 10/10mb while another site is able to get EFM at 20/20 for a fraction of that, well, that can and does influence decisions. I have a client that closed a branch office (2 pax) because connectivity was so poor that the database didn't work properly, resulting in a succession of people quitting because the system hampered their ability to make sales, which in turn reduced their earning potential.

It's really not just a hypothetical - inadequate comms genuinely hurts 'the economy'.

So, really, the question is not whether the average person will use X Mbps or Y Mbps but whether a user or business needing Z Mbps will be able to get that regardless of where they set up. If the answer is yes then that promotes growth and allows cheaper areas to attract investment. What good, after all, is saving $5,000/mo when the Internet connection costs $8,000/mo more than if you set up closer to the city?

dan1980

@Simon

"Or even a remarkable achievement."

Do you know why it is a remarkable achievement? Because they have managed higher speeds on a medium inherently unsuited to it. No one is gaping at fibre achieving these speeds for a very good reason - it's designed to handle the fastest speeds that the equipment - whatever equipment - can throw at it.

When it comes to it, the very reason this achievement is so remarkable is that it was done using a medium that is not designed for it. They are pushing the boundaries but you simply cannot push the boundaries forever.

Further, the fact that it is cutting-edge stuff actually works against it for this purpose. VDSL2 is still new enough that there can be issues with interoperability and limited choice. Achieving these speeds over fibre is de rigueur and the equipment to do it is mature, stable and well understood.

What this means is that to get copper to approach speeds that are by-the-by with fibre, you have to use the bleeding edge of technology, with all the pitfalls of lock-in, bugs and, of course, cost.

Don't get me wrong - if I had 1Gbps to my home, that would be bleeding amazing but that really is not the point. It's about choosing the best technology/topology for the job and FTTP ticks all the boxes. The boxes that FTTN + copper ticks are price and speed of deployment but both of these are based on massive assumptions about the existing state of the current infrastructure as well as very selective reporting of on-going costs.

Whatever the speed achieved by copper, the running costs of an active FTTN node will never be able to compete with the running costs of a passive fibre distribution node and nor will the maintenance costs of copper be able to compete with that of fibre.

dan1980

Re: Is el reg reduced to trolling for business?

@Simon

"It all reminds me of when Nicholas Negroponte wrote off wireless as a carriage medium in Wired in about 1992. And then along came WiFi, 3G ... the rest his is history."

An interesting point but there is one crucial difference: when you're dealing with radiowave-based wireless, there is no choice of medium: it's the air. There is no danger of choosing the wrong medium or upgrade cost to replace it. You just replace the equipment at either end. Further, with the exception of spectrum issues, you can easily migrate over time from one technology to another - say from GPRS to 3G, and on to 4G and whatever comes next (5G . . .)

This is manifestly not the case with wired connections because the choice of technology is bound up with the medium you have installed. You also have actual physical ports at exchanges/nodes that are connected, physically, to one type of device or another, and to transition from one technology to another requires not only installing the new device but physically moving a section of the physical medium from one port to another.

But, taking the spirit of the reference more generally, there is something you are missing. Yes, wireless is now very prevalent and some of the technologies can achieve respectable bandwidth. But it is not a direct competitor to wired connectivity because, for any wireless technology, there is a faster wired technology.

And this is important because the required bandwidth keeps increasing - we not only have HD streaming but ULTRA HD streaming. Sure that's overkill for many but the point is that data requirements generally don't go down.

Now, while 4G may be able to handle a HD stream from Netflix, provided the coverage is decent, it is a shared medium so the more people using it for data-intensive applications, the slower it is for everyone. I recall that when iPhones were released in Australia, several of the networks were simply unable to sustain 3G connectivity in certain areas. Optus in North Sydney, for example (despite it being their headquarters at the time!)

Again, that's focusing too specifically on wireless rather than the spirit of the quote but my point is that the nature of wireless technology as a whole - rather than any specific wireless protocol - has certain downsides.

So, if you choose air as your medium over copper, you are bound by the inherent limitations of that medium, the main one being that it is a shared medium. With copper over fibre, you have bound yourself to a different set of limitations, being increased attenuation - which limits the lengths - and a more involved and frequent maintenance and replacement schedule.

By the time copper catches up to where fibre is now, fibre will be further along.

dan1980
Happy

Re: Is el reg reduced to trolling for business?

@Simon

"Prats? Hmmm ... not sure I want to wear that."

If you caper around the place wearing lycra then wearing the moniker of 'prat' is hardly unprecedented : )

(For anyone from another country reading, the joke/good-nature is implied by our national genius for insulting each other as a form of affection*.)

* - Not that the thought of Simon dodging buses down Parramatta road clad in skin tight apparel is the source of the 'affection' . . .

dan1980

Continuing, you say:

When we planned FTTP we did so because it was felt the useful life for copper wasn't long. Turns out it can probably do the job for quite some time yet.

Again, what you really mean here is the useful life of copper as a medium which is only one part of the equation. Connecting increasingly able devices to lengths of copper does not do anything to extend the lifetime of that copper - it will need replacing at some point, if it doesn't already (and much does). And pure speed and network infrastructure life are not the only benefits of a FTTP NBN - we would get to unshackle from the existing providers, which is nothing to be scoffed at.

As a final note, even if all the copper currently installed was in A1 condition, that doesn't come free because we still have to rent it from the providers and that is money that, when combined with running costs and upgrades will eventually exceed what a FTTP network would have cost.

For once we had a real long-term plan (however poorly implemented) where, instead of selling off infrastructure for a quick dollar only to rent it back, we were going to actually invest in new, future-proof infrastructure that would work out cheaper (and better) in the long run - yes, even with the inefficiencies and contractor issues and overruns and blowouts.

I have rambled again . . .

dan1980

I also have to disagree.

Repeating what I said in my (long) post below, you are not making the correct distinction, which is between copper as a medium and copper as something that is actually currently installed, as it is in Australia.

The only way a FTTN deployment makes any kind of sense is by showing that you can save time and money by utilising as much of the existing infrastructure as possible. For every bit of copper you have to replace, that's fibre that you could have installed.

Working in IT, there are plenty of clients that I have who use ADSL2 connections that are around 1km from the exchange but who are getting ADSL1 speeds, along with not-infrequent dropped packets and outright disconnections due to horrid attenuation and noise on the line.

As you say, we need information about the state of copper and the cost-effectiveness of using copper is entirely dependent on not having to replace too much of it.

Copper, as a medium, may well have a future, but that future is certainly not brighter than fibre so you are constantly trying to increase the abilities of copper with new technologies to overcome the deficiencies. This is great and interesting and very useful, but if you were starting from scratch, you would be mad to choose to install copper for a national network.

Now, we certainly aren't starting from scratch, but the more of that old copper we replace with new copper, the closer the costs come to equalising. And the longer the network stays in place, the more expensive the copper becomes: it requires more frequent replacement (if it is to keep up with, or even approach, fibre) and the nodes themselves have not-insignificant running costs when they are deployed in the kind of numbers required to keep the distances short enough to see appreciable benefits from these new technologies.

dan1980

@cantankerous swineherd

Appreciating the comment, it is important to note that the connection will not, in the majority of planned instances, be between the premises and the exchange but between the premises and the node.

Thus the question is not how far away the exchange is but how far away the node is.

These higher-bandwidth connections, like VDSL and VDSL2 and indeed like ADSL2 drop off sharply. Thus at about 1km (of cable) there is little difference but even 500m drastically lowers the benefit.

The upshot is that to get an appreciable benefit from these higher-bandwidth technologies, you have to keep the distances short and thus deploy a lot of nodes. And, as I mentioned in my post below, each node is a collection of active devices, which need to be powered and cooled and maintained.

For another look at one of the FTTN nodes, this close-up shows some of the DSLAM. Note the fans (there are more above, cooling the whole unit), the numerous switches and half-dozen or more different cable types - just in this one small section.

An upgrade to technology requires that each of these nodes be upgraded, replacing hardware and reconfiguring. If lucky, that just means a straight swap but the new units may have different power needs that then requires upgrades to that section of the node as well. It's also more training for techs, which costs more. You'd also need a new gateway device at the premises. So yeah, there are possible future upgrade paths but they require time and money to implement.

This idea of starting with lesser devices and technology and then upgrading as time goes on results in higher costs as hardware is replaced and a patchwork network of old and new technology, where new sites are installed with newer, faster equipment while older sites and therefore older customers are stuck with older, slower equipment.

The older equipment is then selectively replaced, but only in those areas that are more 'commercially feasible'. Some locations will have the equipment only partially replaced, leading to nodes with a mixture of ports - some faster, some slower.

The end result is a repeat of what happens right now with Telstra - which was something that a homogeneous, all-fibre network was going to rectify so that we no longer had this ridiculous situation where the bandwidth you are able to achieve is dependent not only on the suburb you live in, but sometimes down to the street or even the block. (Or the timing of the installation where there are no high speed ports left.)

That's something that the new NBN doesn't adequately address and is a huge part of what was important about the original plan - it's not just about the max Mbps a given technology and medium can achieve.

dan1980

Simon . . .

Mate.

You are confusing two things: copper as a medium and copper as it actually is. So the question is not whether copper as a medium has a future but whether the copper that is currently running through the streets and bunched up in flooded pits and hung over trees and tangled together in masses of inadequately protected splices bundled together with duct tape is suitable for the future.

The argument for FTTN being cheaper and quicker is not predicated on the medium and equipment being cheaper or easier to install; it's based on the assumption that much of the existing infrastructure will be REUSED.

Unfortunately, as many know, there is a lot of very dodgy copper out there. And even if all that was replaced with nice new copper, the problem will just recur.

Even, however, assuming that these newer technologies really will provide - and continue to provide - the increasing bandwidth that is required even now and will definitely be required in the future, that's not the whole story because you also have the overhead of the N - the nodes.

Part of the speed promised by these technologies, as with VDSL before them, is based on a relatively short distance because the faster the speed, the sharper the drop-off as you move further away. Most tests seem to agree that once you get past 1km from the exchange, there is little difference between VDSL2 and ADSL2. Some suggest that it doesn't equalize until about 2-3km but others show it happening much sooner. Seeing as the state of the copper has an effect, I think it would be prudent to take a lower measure.
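To illustrate the shape of that drop-off, here's a quick sketch. The Mbps figures below are ballpark numbers of my own choosing, not measurements from any test, so treat them purely as an illustration of the convergence pattern described above:

```python
# Illustrative sync rates (Mbps) by loop length (km) - NOT real measurements.
# Real rates depend heavily on the condition of the copper.
vdsl2 = {0.3: 100, 0.5: 70, 1.0: 30, 1.5: 15}
adsl2 = {0.3: 24, 0.5: 22, 1.0: 18, 1.5: 13}

# The VDSL2 advantage is dramatic near the node and evaporates past ~1km
for km in sorted(vdsl2):
    gain = vdsl2[km] / adsl2[km]
    print(f"{km:>3} km: VDSL2 ~{vdsl2[km]} Mbps vs ADSL2+ ~{adsl2[km]} Mbps ({gain:.1f}x)")
```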

So, the nodes have to be built close to premises for there to be a real, meaningful benefit. And that is indeed the plan, but it does have a downside: a lot of nodes, each requiring active equipment that must be powered, cooled and maintained - which includes testing and replacing batteries.

FTTP, on the other hand, does not require active equipment in the path and thus the distribution nodes can be smaller and less numerous as well as cheaper to install, cheaper to run (nothing) and cheaper to maintain.

I linked some comparative images in some previous posts and I think I overdid it that time so here are just a select few.

FTTN nodes

FTTP 'nodes'

If you look at the second fibre distribution cabinet, they have started with one 1x12 splitter. To expand capacity, you drop in additional splitters into the slots in the bottom left and then cable up to a patch-point in the main section. These assemblies can be cabled up - or purchased ready cabled - before a tech goes out to install it. And, while that box is currently using a 1x12, you can get up to 1x32 units in a similar size, giving you 192 connections from 6 incoming lines in a very compact unit - one that's still larger than it really needs to be, as evidenced by the last image.

All this complexity and need for power and cooling and maintenance costs money and so, the longer the network is in place, the cheaper a FTTP network becomes. I saw some conservative estimates (i.e. using the Coalition figures) that indicated parity of total cost by 2027 - 12 years. Every day after that, FTTP is cheaper.
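As a sketch of how that break-even works, here's a toy model. Every dollar figure is a hypothetical placeholder, not taken from any costing document; the point is only the shape of the curves - lower build cost plus higher running cost versus higher build cost plus near-zero running cost:

```python
# Toy break-even model: all figures are hypothetical placeholders chosen
# only so that parity lands around the ~12-year mark cited above.
fttn_capex, fttn_opex = 30e9, 2.0e9   # cheaper to build; powered nodes + copper upkeep
fttp_capex, fttp_opex = 45e9, 0.75e9  # dearer to build; passive plant costs little to run

def total_cost(capex: float, opex: float, years: int) -> float:
    """Cumulative cost after a given number of years of operation."""
    return capex + opex * years

# First year at which FTTP's cumulative cost is no higher than FTTN's
year = next(y for y in range(1, 50)
            if total_cost(fttp_capex, fttp_opex, y) <= total_cost(fttn_capex, fttn_opex, y))
print(f"FTTP total cost dips below FTTN after ~{year} years")  # ~12 years
```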

So to have an article that implies - if not outright states - that the case for fibre is now, or will soon be, moot is not necessarily giving the whole story. That's because it's not just about the raw bandwidth available over a given medium in a lab.

The NBN project gave us the opportunity to replace the existing, aging copper network with a new, future-proof network and, in the process, uncouple us from Telstra and the problems that this has caused.

It was ambitious and poorly planned when it came to the details and poorly costed and definitely poorly handled. BUT, the goal was not just to increase Mbps and so any comparison that focuses solely on this is missing the point. When that comes from the government, it is a deliberate attempt to mislead. From Simon I hope it's just him trying to salvage some good news by suggesting that at least that one aim - of increasing available bandwidth significantly - might not be completely lost.

Even then, much still depends on the state of the copper and how this fits in to the mixed networks being proposed. And, further, those in poorly-serviced areas are likely to continually be poorly serviced.

You own the software, Feds tell Apple: you can unlock it

dan1980

Re: @dan1980 - designing the phone so it is impossible for Apple to unlock

@DougS

That was my understanding - that it is something that has been done. But I wasn't sure whether it was actually a deliberate policy of Apple or whether the situation comes down not to a conscious decision on the part of Apple management but to the idiosyncrasies of the iOS software, such that in one instance the phone is unlockable and the next it isn't and then the one after that it is again, simply because they're using different code which has a side effect of enabling this.

So what I am saying is that it needs to actually be a requirement, such that the software is tested on this front and patched/corrected if it fails.

When you talk about opening up the phone, however, that - to me - falls into Apple tampering with something the customer does actually 'own': the phone itself. Thus Apple would presumably have a defense against being compelled to do this as the EULA no more allows them to dick-around with peoples' phone hardware than it allows them to break into their home.

How it would actually go down is another question but it is a markedly different situation when judged by the argument being made in this case, which is that the EULA means Apple owns and controls the software.

dan1980

Re: Hypothetical situation ....

I would say not because the key differentiator here is that the people you sell the locks to own those locks. In the case of software, such as iOS, the software is not owned by the end user; it is owned by the vendor. (In this case Apple.)

This is relevant because it is not the phone per se that is locked but the software and while the user may own the phone hardware, they are merely using the software under a strict and limited license, with Apple retaining actual ownership.

Not that I support what is being asked for but it does seem to hold up.

dan1980

@Mark 85

Not quite so because there is one crucial point you're missing here: Apple are technically able to unlock this phone.

As the article said, Apple could not say that they were unable to do what was required and that is why they have taken the tack that they are not allowed to do so. If they couldn't do it anyway then it would be a moot point.

Of course, there is still an interesting and troubling precedent waiting to be set but it is - at least conceptually - easy to avoid without having to change a line of the EULA: design the software and hardware such that it is impossible for Apple to unlock their phones.

As I understand it, that is already the case in some instances so it just needs to be a policy that this is always done. Perhaps, and I am no engineer, but one can imagine a system that would allow an Apple store to unlock the device but still actually require the customer's input, for example with a secondary code that the user can choose.

This case actually ties in with other issues around cloud services being asked to hand over data* because the whole issue comes about because the companies can hand over the data. If they were technically unable to do so then all the warrants and national security letters in the world wouldn't help.

Yes, it would be great if this wasn't an issue because law enforcement agencies actually respected privacy and proper process (which does appear to have happened here) but it is obvious that that is not the case, so the only way to secure data is with real 'no knowledge' systems, meaning that the vendors cannot access the data at all.

This is something that needs to happen and needs to happen as quickly as possible and in as concerted a manner as possible so that it gets done before these agencies can cry to law-makers demanding back doors.

* - Like the MS Ireland case.

FBI, US g-men tried to snatch DNA results from blood-testing biz. What a time to be alive

dan1980

Governments and law enforcement agencies in particular believe that their goals trump any rights to privacy of any kind for any person.

That is the underlying problem with all of this.

When you have legislation that uses terms like 'reasonable' you are always going to run into a problem because that is a relative term and can be played with by these agencies because, as stated, they think invading privacy is reasonable.

Standards body wants standards for IoT. Vendors don't care

dan1980

"SOC calls on the industry to be fair in how it collects and handles data, transparent in what it intends to do with that data, and to make privacy a design consideration."

Thanks - I needed that one. Been a lousy day and that was just the pick-me-up I needed.

Yahoo! launches! password-free! push! logins! for! mobes!

dan1980

Huh?

I'm not quite getting this. So, once enabled, is this 'push' method the only way to access the account? If so, what happens if you misplace your mobile - are you prevented from logging in? If there is another way to access the account in that instance, wouldn't that likely take the form of a password of sorts?

If it does then that would have all the problems of using a password as the main method of authentication. Actually, it'd be even harder to remember as it would be used so infrequently.

Want to self-certify for Safe Harbor? Never mind EU, yes we can

dan1980

Re: Listen carefully

@AC

"Does anyone believe that for over a decade the EU failed to comprehend that "Safe[sic] Harbor" is a sham?"

Well, it depends what you mean by "the EU". The EU itself, as in the governing body, has repeatedly made noises about this but the members themselves have, largely, ignored that. This ruling now makes it very clear and very public that those member governments can no longer just ignore the problem.

Euro privacy warriors: You've got until January to fix safe harbor mess – or we unleash hell

dan1980

Okay, so let me get this straight . . .

The courts have ruled that it is ILLEGAL to store this data in the US but this illegal activity continues until a solution can be found.

Is that right?

Now, I fully understand that these changes cannot happen overnight and it won't help to tell the affected companies that they can no longer service requests from EU citizens, effective immediately. BUT, what's to stop another (otherwise identical) case being lodged right now? The operation of these services is illegal.

My question is: how is it up to the politicians to set deadlines? Surely the court should be doing this as, in the absence of some ruling that says the behaviour will be temporarily allowed, what's to stop further lawsuits being brought?

I am not a legal scholar in any sense so excuse the naivety but surely the best course of action would be for the court to decide on a deadline? It can't continue indefinitely, right?

P.S. - I don't mean that the courts should rule on a deadline for amended legislation, as it's not their business to make law. I mean a deadline for when the operation of these companies must comply with the law - whatever that is.