207 posts • joined 3 Jul 2009
Funnily enough, China fuming, senator cheering after Huawei CFO cuffed by Canadian cops at Uncle Sam's request
China doth protest too much
Let's not forget that China has a history of doing the same. Australian mining executive Stern Hu was jailed by the Chinese for eight years after China wasn't getting its way during iron ore price negotiations and Rio refused to admit Chinese company Chinalco into one of its projects.
Pencils don't leak when stored
We use pencils because they are easier than pens to store for the long period between elections. If that worries you, well, you are permitted to use your own writing device to mark the ballot.
Voter ID will almost certainly disenfranchise many people living in remote areas. Very few hold their issued documents (birth certificates and so on), and getting replacements when the mail takes three weeks and delivery is based on addresses rather than names isn't straightforward.
A lot of the posts here show poor familiarity with Australia's polling process. The idea that you'd be able to open a ballot box and fiddle with the contents is a little unrealistic. So pencil marks are fine. It's well worth volunteering to be a scrutineer at least once in your life. It is eye opening to see the degree to which Australian elections are secure.
There's little fraud, partly because compulsory voting means that the real voter will also present themselves, so fraud is quickly discovered. Australians aren't upset by being required to appear on a Saturday to vote. They are upset when they appear on a Saturday to vote and are then told they have already voted. That's the sort of anger which leads people to give their full cooperation to the Federal Police, and then not to let the Police slack off.
The undermining of our voting process is really happening through the postal voting system. For example, political parties put themselves forward as the agency through which to obtain postal voting papers. The range of acceptable reasons for postal voting is also too broad: eg, employers should be required to release people for voting, rather than those staff seeking a postal vote. Postal voting means that incidents close to the election date have less influence than they ought to. You'll remember it was only days before the first ACT election that it started to become known that the leading party was a pack of new right loons. The recent by-election in Wentworth would have been much less close if there had been fewer postal votes, as noted by former PM Turnbull.
IBM already had access to Red Hat's patents, including for patent defence purposes. Look up "open innovation network".
This acquisition is about: (1) IBM needing growth, or at least a plausible scenario for growth. (2) Red Hat wanting an easy expansion of its sales channels, again for growth. (3) Red Hat stockholders being given an offer they can't refuse.
This acquisition is not about: cultural change at IBM. Which is why the acquisition will 'fail'. The bottom line is that engineering matters at the moment (see: Google, Amazon), and IBM sacked their engineering culture across the past two decades. To be successful IBM need to get that culture back, and acquiring Red Hat gives IBM the opportunity to create a product-building, client-service culture within IBM. Except that IBM aren't taking the opportunity, so there's a large risk the reverse will happen -- the acquisition will destroy Red Hat's engineering- and service-oriented culture.
Your RSS is grass: Mozilla euthanizes feed reader, Atom code in Firefox browser, claims it's old and unloved
RSS also useful for enterprises
An interesting choice because RSS is very useful for enterprise applications. It's the easiest way to get "dashboard summaries" into people's browsers (for things like Top Ten open issues, or unread phone messages). So by removing these features Mozilla is pushing their users towards IM clients like Slack.
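To make the dashboard idea concrete, here's a minimal sketch (standard library only) of publishing a Top Ten open-issues list as an RSS 2.0 feed. The issue titles and intranet URL are invented for illustration.

```python
import xml.etree.ElementTree as ET

def issues_to_rss(issues, title="Top Ten open issues"):
    # Build a bare-bones RSS 2.0 document: one <channel>, one <item>
    # per open issue. Enough for a browser or feed reader to render.
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = "https://intranet.example/issues"
    ET.SubElement(channel, "description").text = "Open issue summary"
    for issue in issues:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = issue
    return ET.tostring(rss, encoding="unicode")

feed = issues_to_rss(["#101 printer offline", "#102 VPN flapping"])
```

Point a browser's (former) live-bookmark feature, or any feed reader, at a URL serving this document and the summary updates itself.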
Why CPU rng
A few reasons:
1) The CPU's random number generator can be truly random, based upon provably random physical phenomena, rather than pseudo-random numbers derived by mathematical manipulation.
2) There are some sources of actually-random data in a computer, although they are usually not of the same strength as "provably random". An example is the jitter from disk drive events. But these sources are rapidly disappearing as physical devices give way to silicon. This underlies the operational problem of not enough 'entropy' (aka real randomness) being available as a machine starts.
3) It's "too easy" for these actually-random sources of data in a computer to be influenced from outside the computer, since they are not built as cryptographic devices. The random instructions within the CPU, by contrast, can include tamper detectors (such as for high EM fields).
4) Timing and other covert channel attacks are simpler against software than against hardware. Those attacks are also simpler against hardware not intended to be cryptographic than against hardware designed with covert channels in mind. It is easier in hardware to build a black box where all instances of the instruction take the same time to complete, use the same power, and so on. (As an aside, the current issue with CPUs is that the care in design needed to defeat covert channels, done for the RDRAND instruction, needs to be repeated throughout the CPU design for other instructions.)
These reasons explain the last line of Ted's LKML e-mail: "Note: I trust [Intel's hardware instruction] RDRAND more than I do Jitter Entropy [from the computer's hardware devices]".
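As a hedged illustration of the entropy-at-boot point: on Linux the kernel exposes its entropy estimate through /proc, while userspace should simply draw from a seeded CSPRNG interface. The /proc path is Linux-specific; elsewhere the helper just reports None.

```python
import secrets

def kernel_entropy_estimate():
    """Return the kernel's entropy-pool estimate in bits, or None
    on systems without the Linux proc interface."""
    try:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read().strip())
    except OSError:
        return None

# Once the pool is seeded, a CSPRNG draw is the right interface --
# applications should not try to consume "raw" entropy directly.
key = secrets.token_bytes(32)  # 256 bits of key material
```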
Steele memo not only source of Russian influence
Note that the Steele Report isn't the sole source. From July 2016 Australia's intelligence agencies were warning the US's FBI of Russian attempts at subversion of their Presidential election. The initial Australian intelligence was gained from old-fashioned "drink the source under the table" espionage.
Obviously this second source doesn't fit into the argument the Nunes Memo is promoting, since it makes the Steele Report irrelevant -- the FBI was going to investigate whatever the provenance of Steele's work.
SHL just got real-mode: US lawmakers demand answers on Meltdown, Spectre handling from Intel, Microsoft and pals
Another set of predictions
The development we didn't predict. The reputation of Silicon Valley -- and IT startups in general -- was trashed by their insularity, their poor behaviour towards women, their dismissive attitude towards social responsibility in general, and towards paying taxes in particular.
How my prediction went: poorly. WPIT didn't make the mainstream press as a potential sinkhole of taxpayer funds and a risk to the nation's economy, one which needs to be managed beyond the usual levels of IT project executive oversight.
2018 prediction: optical networking prices will continue to plummet, and many corporate networks will insert a WDM fabric under their ethernet transmission. The NBN's powered boxes by the side of the road will look archaic within half a decade. A newer design would use small, cheap, unpowered in-pit WDM muxes. Yes that needs fibre-to-the-premises, but fibre is now cheaper than copper for all but trivial cables.
Related: networking gear from China will become so cheap that bespoke-made items will make sense for large networks.
"If the burden of argument in the US is the same as English law than it would be balance of probabilities". That applies to issues of fact, but the meaning of that clause of the GPLv2 is an issue of law. So the court will determine that matter of law, and if Perens is correct in his assessment of the license then he has a defence of truth for the claim of defamation.
Single-flow speed of nBase-xx4 links (was: SFPs + Fiber = cost more than switch?)
"So, that means that the highest possible speed for a single connection is 100Gb?"
No, you get 400Gbps. The nBase-xx4 interfaces run four "lanes" of ethernet symbols. The symbols are round-robined between the four lanes. An ethernet symbol is 64 bits logical, 66 bits on the wire (to allow for clock recovery).
If you are thinking that this means the media carrying the four lanes needs to have exactly the same latency then you would be correct. This is conveniently enforced using fibre assemblies and connectors with multiple fibres.
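A toy model of the lane distribution (helper names are mine): symbols are dealt round-robin across four lanes on transmit and re-interleaved on receive, which is why a single flow still gets the aggregate rate rather than one lane's rate.

```python
def to_lanes(symbols, lanes=4):
    # Deal symbols round-robin: lane i gets symbols i, i+4, i+8, ...
    return [symbols[i::lanes] for i in range(lanes)]

def from_lanes(lane_lists):
    # Re-interleave. This only works if the lanes stayed in lockstep --
    # hence the matched-latency multi-fibre assemblies.
    out = []
    for group in zip(*lane_lists):
        out.extend(group)
    return out

symbols = list(range(8))  # eight 66b symbols, abstracted as ints
assert from_lanes(to_lanes(symbols)) == symbols
```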
Warrant for access to a safe?
You can't get a warrant to access a safe from a safe manufacturer. There is no backdoor. They'll just tell you to buy a drill and brute-force it.
You can place a warrant against the safe's end-user. But that's exactly what the feds are trying to avoid here. Because this isn't about access to gain evidence, it's about access to do surveillance. That's why the Five Eyes forum was seen as appropriate by the Australian government, and downright Orwellian to the rest of us.
Re: I don't get it...
"Intel are claimed to be using protected IP in their product, but Apple are being taken to court?"
Yep. You are thinking along the right track. If you buy a chip from I, and they've used Q's invention without a patent license, then I is the only party from which Q can gain satisfaction. You, as the purchaser of I's physical product, have no liability (which isn't as great as it sounds, as the settlement between I and Q might well remove from the market the product you purchased, thus lowering its usefulness).
But to this we add the ITC. They can prevent import of a product into the USA based upon a claim of patent infringement. Now toss in some sharp business practice by Q: they ask you for a patent license. Now you can respond "no", upon which Q says "it would be a shame if we made an allegation of patent infringement to the ITC". Now you could choose to fight this out, and win. But a win is not useful if you have been forbidden from selling your widgets for the years the court system can take. So you pay Q.
Moreover Apple are complaining that Qualcomm aren't just seeking a patent license based on the price of the radio chip (bugger all) but based on the price of the iPhone. That is, the patent license fee covers the inventions of others too. That's cuteness by Apple -- you can base a patent license fee on the phase of the moon -- but all the same it is an appealing argument.
So what's the cost to people running internet routers? We've taken a handful of route table entries, and auctioning them off by /24 increases the number of route table entries a hundred-fold. I think we should put a stop to this behaviour before it becomes endemic, and filter out the more-specifics of auctioned addresses.
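The arithmetic behind the "hundred-fold" worry, as a quick sketch: splitting an allocation into /24s turns one route table entry into 2^(24 − prefix length) entries.

```python
def more_specifics(prefix_len, target_len=24):
    # One aggregate of the given length becomes this many /24 routes
    # in the default-free zone.
    return 2 ** (target_len - prefix_len)

print(more_specifics(16))  # a /16 auctioned off as /24s -> 256 entries
```

A /16 broken up this way lands 256 entries in every DFZ router on the planet, which is the externality the auction price never captures.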
SDN future is driven from cloud providers, not supplier strategies
"The problem is that large customers rely almost exclusively on Cisco and VMware, and they aren't interested in the open-source switches and open-source hypervisors with open-source management software that's needed to make hybrid SDN actually workable today."
This paragraph summarises my issue with the article. It reads as if the enterprise vendors are the major source of influence over SDN, whereas SDN is being driven by the cloud vendors, all of whom build their own switches and run their own software on those switches. The future of SDN in the enterprise is likely to be a byproduct of the main game at those cloud vendors, rather than anything in the strategic plans of VMware or Cisco.
In my view it's very likely that one of the cloud vendor SDN technologies will become so widely known that enterprises persisting with traditional enterprise networking and VxLAN will find themselves in an expensive niche.
I'd be a little cautious about ascribing an outage to the last thing to fail in a chain of failures, especially in a report written by one of the players. It soft-soaps AEMO running its own weather models, and thus missing the warnings from BoM. The result was that the SA grid hadn't been prepared for a major weather event. There are also a number of forward-looking statements in the report about future grid design, but the question of why AEMO management failed to address these design issues prior to the SA outage isn't discussed.
There's plenty of blame for all involved. Even for SA residents and their installation of air conditioning rather than purchasing efficient homes in the first place. Demand management is one area in which the SA government hasn't sought change, despite it being one of the cheapest ways to lower electricity prices.
Let's see what other countries do
I suppose the test will be what the UK and France do, as they have access to substantially the same facts.
Banning large batteries from the cabin isn't the worst idea. It's basically a decision that they'd rather deal with explosions of 150g to 1000g of explosive in the hold than in the cabin. The list of airports seems to approximate those where a substitution of battery for explosive could be expected and which also have flights to the USA.
I also wonder if the agencies are concerned about an explosive laptop being used as a tool in a larger scheme, such as breaching the flight deck door.
Weather in South Australia
Folks, it hardly matters what the energy mix was. Let's have a thought experiment where we bring back into operation the coal-burning power stations at Port Augusta and Leigh Creek. The six tornadoes would still have cut the large powerlines between Adelaide and those generators.
The essential failure was the lack of awareness of South Australian weather at NEMCO. That led to poor decisions, such as not bringing online all the gas generation actually located in Adelaide. We even had this misunderstanding from the Deputy Prime Minister, who said that this wasn't a severe weather event on par with a cyclone -- which is to misunderstand the destruction a tornado can cause, albeit over a smaller area than a cyclone.
The shutdown of wind power due to electrical distribution system instability was very unfortunate. But again, that software behaviour was squarely NEMCO's job to know. And they didn't. At least being software this issue is cheap to fix. Not that there was enough wind power for the state in any case, since those tornado-affected distribution lines were carrying power from many of those windmills too.
The discussion about nuclear reactors is even more laughable. Less than a year ago South Australia had a Royal Commission into the nuclear fuel cycle -- including nuclear power -- which reported that all forms of nuclear power are uneconomic for this state.
What is really interesting is the very different read of this issue within South Australia -- among people who actually experienced the edge of the weather event -- and elsewhere.
"Effectiveness" is code
Note that the spokesperson is saying that the future review is into the "effectiveness" of the section. In Australian Public Service policy language "effectiveness" is a very different thing from "efficiency". "Effectiveness" is how well the mechanism works _without regard_ to other factors, such as expense or the robustness of the Australian Internet.
This would signal a substantial policy change from the current s115a, which requires the judge to weigh up the competing interests when approving a proposed injunction to block access to the "online location". That is, the legislators desired website blocking to be "efficient" rather than merely "effective". Therefore "effectiveness" should not be the primary criterion for evaluation of the legislation.
It would have been useful for Simon to have questioned the spokesperson on their choice of words. If the response was written then the expectation is that words hold their usual meaning.
Re: I must be way out of step..
I think what is lacking is compelling *systems*.
Drones aren't an interesting thing. A set of drones which can find a lost child on a crowded Bondi Beach is interesting.
Similarly wearables aren't interesting. But a wearable which manages your diet and exercise is interesting. At the moment they only pump out raw numbers, and if you want to track diet and exercise there's still a lot of "getting things to talk with things" to do the analysis. Let alone putting that analysis into immediately useful terms: can I have this bit of cake I just waved under the wristband's camera?
The basic problem is that whilst hardware is cheap, systems are expensive. The iPhone wasn't only a touch screen, battery, CPU and radio. It was the "app store" system which made that bit of glass interesting; just as iTunes Music Store made the iPod a better MP3 player than the better hardware from Creative.
CES simply threw a lot of hardware out there. Worse still, it will throw out different hardware next year. So systems builders who rely upon products released at CES will never get beyond the "make it run on the platform" stage before having to start over. At best CES is a demo of technical capability which allows systems builders to assess potential hardware partners.
My prediction: WPIT
The acronym WPIT will become known outside Canberra. The Welfare Payments Infrastructure Transformation is essentially the replacement of the Model 204 database and applications code originally established by the Department of Social Security in 1983. The code has survived name changes (to Human Services/Centrelink), umpteen ministers, and 35 years of budgets and mini-budgets of changes -- all of which had to be live by a particular date, a date usually set for political or accounting reasons rather than as the result of an implementation plan. So we're talking a lot of programming to a deadline, with no nice-to-haves which might ease future maintenance or migration.
The cost of rewriting this code to run on a replacement system is said by the government to be $1b to $1.5b. Even $1.5b seems optimistic: on simple SLOC-based measures the 30m lines of code will cost roughly $2b to rewrite. It's hard to see how it could be lower, as many of the usual measures for reducing cost aren't available for this task (eg, incremental feature delivery). All this technical discussion hides that Australia doesn't have many people with management experience of projects of this scale, and management is where the real risk hides (the seeming over-optimism about future project costs is a worrying sign).
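For transparency, the back-of-envelope behind the "roughly $2b" figure; the fully-loaded cost per rewritten line is my assumption, not an official number.

```python
def rewrite_cost(sloc, dollars_per_line):
    # Crude SLOC-based estimate: lines of code times a fully-loaded
    # cost per rewritten line (analysis, coding, testing, migration).
    return sloc * dollars_per_line

estimate = rewrite_cost(30_000_000, 70)  # assumed $70/line
print(f"${estimate / 1e9:.1f}b")  # -> $2.1b
```

Even halving the per-line assumption leaves the government's $1b floor looking thin, which is the point.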
This is high stakes IT: the scale; the risk to clients; the macroeconomic risk. Stuff this up and there's no saving your government, and your country could enter recession.
The Minister appears competent, which is a good start. But of course if he's too good then he won't be content to stay at DHS for the decade this job will take.
Not sure this works in Assange's favour
I don't think this is a win for Assange. He still can't leave the embassy, as the UK will arrest him for his failure to appear, at which point the USA might well lob in an extradition request -- a request which will then be top of the queue, assuming Sweden withdraw their request for arrest.
As for things being different with President Trump, let's see. Trump owes the FBI a lot, and the US law enforcement and intelligence agencies desperately want Assange, if only to make an example of him, as they are doing with Manning. I'm not sure Trump views Assange as anything more than a convenient dropbox for the work of Putin, and if Wikileaks hadn't done the job then someone else would have been found.
I get the feeling that this is much more about solving Ecuador's problem than Assange's problem.
VW Dieselgate engineer sings like a canary: Entire design team was in on it – not just a few bad apples, allegedly
Realistic tests are a recent development
The problem with faulting the 'government' tests is the assumption that the test is possible outside a lab. Remember how VW got busted: a lab had finally made its emissions test gear small enough to fit inside a car, so emissions could at last be tested in the field.
Before the car-portable test what is the government to do? To not regulate at all, because no realistic test was possible? Or to regulate a lab test and then ensure some real-world effect by preventing car makers from optimising specifically for the test?
Update -- Comodo to abandon trademark registration
This thread <https://forums.comodo.com/general-discussion-off-topic-anything-and-everything/shame-on-you-comodo-t115958.3.html> contains the most hilarious statement ever by a CEO, see comment #3. A staffer later posts that Comodo will file to abandon the trademark registration:
"@robinalden Reply #28 on: Yesterday at 03:41:45 PM:
"Comodo has filed for express abandonment of the trademark applications at this time instead of waiting and allowing them to lapse.
"Following collaboration between Let's Encrypt and Comodo, the trademark issue is now resolved and behind us and we'd like to thank the Let's Encrypt team for helping to bring it to a to a resolution."
I think it very much depends on the sector as to what BYOD means.
For universities it means that students bring their own laptop and expect it to work with minimal fuss: connect to wireless, print, plug in somewhere to recharge. There's no attraction at all in a device without a screen -- the huge use of mobiles by students suggests that the screen is actually the important part of the computer.
For schools I wonder if you could take your idea one step further. The kids don't carry their computers around at all, but only the computer's storage (say, a Micro SD card). That storage is the boot device for a VM at both home and school. Add some simple software maintenance and I think this has some value and is worth poking around with. The biggest problem would be Windows.
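A sketch of what booting the student's storage might look like, assuming qemu on the host machine; the device path and options are illustrative only (and Windows licensing is exactly where this gets hard).

```python
def qemu_args(sd_device="/dev/mmcblk0", mem_mb=2048):
    # Build (but don't run) a qemu invocation that uses the SD card
    # as the VM's raw boot disk. The device path is hypothetical --
    # it would be whatever the card reader enumerates as.
    return [
        "qemu-system-x86_64",
        "-m", str(mem_mb),
        "-drive", f"file={sd_device},format=raw",
    ]

print(" ".join(qemu_args()))
```

Home and school would run the same wrapper, so the child's environment follows the card rather than the hardware.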
Business doesn't know what to do about BYOD, and keeps watering down the concept in the hope that it becomes something else. Unfortunately in doing so it loses the benefits of the BYOD approach, and loops back to the start of the process without making any headway. Increased BYOD by contractors and the lack of "enterprise mobile" means they'll have to grasp the nettle eventually, if only by offering "outside the firewall" Internet with certificate-mediated access (VPN or PKI) back into selected resources.
@James51 and originality
The NAPLAN test is the worst sort of high stakes testing. Writing an essay outside of the standard criteria will -- even with humans marking -- get you poor results, as it won't fit within the marking rubrics. These rubrics -- 'marking criteria' would be the less jargony phrase -- are designed to allow no scope for creativity. As a trivial example of creativity: if you gave the answer as a poem that would garner no additional marks and would threaten the marks allowed for grammar and spelling.
The NAPLAN system is gamed by schools, with weeks of "teaching to the test" being commonplace. Although the government denies it, NAPLAN preparation constrains the time available for actual teaching of material. In particular the Year 9 NAPLAN falls exactly when algebra is being taught, and in a recent corridor chat at a teaching conference there was consensus that student ability in basic symbolic manipulation has fallen because NAPLAN has vacuumed time away from that foundation skill.
The government denies the tests are high stakes. But in reality they gate admission to all advanced programmes. Even oversubscribed trades programmes are often filled by NAPLAN ranking -- so why wouldn't you drag up your school's average given the opportunity?
Perhaps more attention to dimensions and weight?
Looking around uni is always interesting, as students put down their own cash for laptops and expect to use them seriously rather than for games. The typical notebook by far is the MacBook Air, followed by the Dell XPS 13. With that in mind I'd suggest that this review doesn't give enough attention to dimensions, weight, and battery life. Just on dimensions alone it is difficult to recommend a lot of the laptops in this review, as they're not going to fit well into a school bag.
If you want to see what bargain manufacturers could be doing for school users then look at the Toshiba Chromebook 2. Small, light, good screen, quiet. It's underpowered for Windows, and its lack of sockets limits its upgradability (and thus lifetime), but you'd hope that manufacturers would take hints from the form factor.
Return to Home not much safer
The Return to Home function doesn't solve the problem. There are perhaps 25% odds that the path to Home will cross the firebombing circuit. Realistically Return to Ground is the only safe alternative. But there's a strong scofflaw element among drone operators, and given the likely loss of the aircraft such a safety mechanism is liable to be disabled.
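The 25% figure can be sanity-checked with a toy Monte Carlo, under the (strong) assumption that Home lies on a uniformly random bearing and the firebombing circuit blocks one 90-degree quadrant around the fireground.

```python
import random

def crossing_fraction(trials=100_000, blocked_deg=90):
    # Fraction of random Home bearings that fall inside the blocked arc.
    hits = sum(random.uniform(0, 360) < blocked_deg for _ in range(trials))
    return hits / trials

print(round(crossing_fraction(), 2))  # ~0.25
```

Change the blocked arc and the risk scales linearly, which is why Return to Ground rather than Return to Home is the only option that removes it entirely.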
What's needed here is a social change. As one small example, no stories in the online media with INCREDIBLE DRONE FOOTAGE OF FIRE from non-official sources.
BTW, there's a huge lack of understanding of aviation in the drone forums discussing this issue -- postings claiming that rotor blades aren't under any stress when hitting a drone, or that drone shrapnel can be sucked through turbine blades without threatening the aircraft. There's no appreciation at all for the pilot workload of firefighting operations, something apparent to even beginning pilots.
BYOD works in some organisations, not that you'd know it from this author
Reading this article you wouldn't think that universities happily have thousands of BYOD devices on their networks every working day. It would have been better if the article, rather than condemning BYOD outright, looked at how they do it and the risks and benefits to the organisation.
Rather too upbeat
What an odd article. No large computing platform uses Oracle hardware or software: Google, Facebook, Amazon, and so on. They don't even subcontract Oracle's engineering expertise to construct their own internal-use products.
What's left is really the crumbs, with an "enterprise" label whacked on. And those crumbs are under threat from the products developed or maintained by the large computing platforms: from Linux to OpenStack to Software Defined Networking. Worse still, despite costing more than Google, Facebook, et al, enterprise applications typically have worse performance and uptime than the cloud applications.
So Microsoft joins the fray
This is just Microsoft's (late but good) attempt at owning cloud authentication. Every company is trying to do that at the moment: Facebook, Google, LinkedIn, ... It is part of the reason that authentication on the web is such a mess. Microsoft has some advantage in already being at the heart of a lot of enterprise authentication, and is trying to use that as a lever.
I use LibreOffice on a Mac. It is good. It will even open Visio drawings, which is nice.
But my daughter also uses LibreOffice and trying to round trip documents -- author them on LibreOffice/Mac, edit them at school on Word/Windows, edit them again at home on LibreOffice/Mac -- fails too often to make for happy users. There's only so many times you're willing to go and fix formatting details.
So I'd strongly recommend LibreOffice/Mac if you are able to share the document as an unrevisable PDF. If you need to edit the thing then either use LibreOffice or Word at both ends.
Not Amazon but ASIO
If such a proposal does get up then it won't be put to tender. The "agencies" will make sure they are legislated to provide the service (because that is "more secure") and will charge well over the odds for it. And then in a few decades' time we'll find out that they've been sharing it with all and sundry and using it well beyond its legislated purpose.
Telecommunications is a substitute for travel, we urgently need substitutes for travel
Money for government services has to be raised somehow. Complaining merely because it is on something we use a lot of is no better than the special pleading of other groups on whom taxes fall.
The questions for taxes are whether they are fair, efficient to collect, don't distort the economy in unwelcome ways, and don't conflict with broader policy priorities. At the moment a tax on Internet traffic would be progressive (the rich paying more than the poor) and efficient to collect (easily measured, with identifiable parties to request payment from).
You could argue that the effect on economic activity isn't going to be great. The "tax" of overpriced mobile telecommunications hasn't stopped people using mobile phones. Demand for telecommunications seems very inelastic to price.
I am opposed to this because of the conflict with the policy priorities of government. Increasing the price of telecommunications inhibits greater use of telecommunications; telecommunications at high speeds are a substitute for travel; and reducing the use of internal combustion engines is a national priority to avoid climate change catastrophe.
Almost all governments seeking increased funding could, for the next 10 to 20 years, do that through a carbon tax. That would kill two birds: advantage government policy in an important area, and raise revenue.
Interaction of low power LED and better PV solar efficiency.
To me the real effect of LEDs using less power is that designs using PV solar cells and a small battery become the obvious solution to powering the LED, rather than paying the cost of cabling to the mains.
So LEDs might well use more electricity. But not add to CO2 creation.
Tapping backhaul providers would do the same job more simply
Cables are cut by fishing boats all the time; they are pulled up onboard a ship and repaired all the time. So although repair and re-splicing is fiddly, it is an everyday fiddly task, not some impossibility as your article comes close to suggesting. If the 10kV were actually enough to damage a fishing trawler we'd be very happy -- sadly it readily dissipates in seawater. That gives the NSA a simple technique to cut a live cable: clamp chains to the cable 100m apart, chop the cable in the middle, pull up the chains.
The point Briscoe makes is that SCCN would know about this. But you can readily imagine some misdirection, such as cutting the cable again a few km away to give a despatched repair ship something to fix.
The question is -- is this likely? And it's not really, because of the backhaul problem. You've applied your splice, you've got a copy of all the data, now how do you get that data back to land? The only choice is to hire wavelengths or complete fibre on the same cable under some pretext (such as connecting Pine Gap back to the USA). That's not really possible to do mid-span without a high chance of stuff up (such as the wavelength used gaining power mid-span, or a FEC incompatibility).
The NSA's desire is much more simply met by tapping the backhaul fibre heading away from the landing site: there's no water, no voltage, no close monitoring, no forward error correction. Just simple dark fibre in a conduit.
The NSA could require a Room 641A type arrangement to tap each cable as it is patched from the undersea cable headend to the customer. But Briscoe is saying that isn't the case (although he explicitly did not call out the Australian landing sites in his denial). Briscoe might well be truthful -- you can only imagine that having had Room 641A revealed by a junior technician that the NSA would look to less apparent ways to do the task.
I don't think it's likely that the NSA are using CALEA or other interception requests for transmission networks. Those legislative mechanisms don't suit transmission networks at all.
BTW, carriers don't encrypt link traffic; it was thought that there was no need. It's fair to say that the various leaks from the NSA are changing that view. However encryption of high-speed, high-latency, naturally high-error-rate links isn't as simple as you might hope. That means it's expensive, and thus the engineering desire has to overcome the beancounting hardheads.
"Expensive" is in the engineering sense
Oninoshiko, it's all very well to put in a dig at Cisco, but you miss the meaning of "expensive". In this space "expensive" is really talking about wireless transmission time. The more you transmit, the shorter the battery life or, in the case of an aircraft, the more money you hand over to satellite owners.
The article does a fine job of describing the current situation, where the applications and architecture are obvious but, unless there is one interoperable specification, it's not going to find its way into consumer devices.
Bear in mind the real work has yet to begin. For example, there are no profiles for things such as machines reporting their parts inventory and status. Let alone any way to determine to whom that inventory should be reported: to the manufacturer, or to your chosen washing machine repairer? It's in everyone's interest for the repairer to appear with the likely-to-be-faulty part on their first visit -- that's a feature any purchaser would see the benefit in. And yet the industry is doing its best to make sure this never happens. Let alone more advanced applications.
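As the post says, no interoperable profile for this exists -- that's the complaint. Purely as a hypothetical sketch (every field name and threshold below is invented for illustration), a parts-inventory report that would let a repairer bring the right part on the first visit might look like:

```python
import json

# Hypothetical illustration only: no standard profile for this exists.
# A washing machine might report each part's wear against its rated life,
# so the repairer can guess the faulty part before the first visit.
def parts_status_report(serial, parts):
    """parts: list of (part_no, cycles_used, rated_cycles) tuples."""
    return json.dumps({
        "device_serial": serial,
        "parts": [
            {"part_no": p, "cycles": c, "fault_suspected": c > rated}
            for p, c, rated in parts
        ],
    })

report = parts_status_report("WM-1234", [
    ("door-interlock", 4200, 5000),  # within rated cycles
    ("drain-pump", 9100, 8000),      # past rated cycles: likely culprit
])
```

The open question the post raises -- who receives this report -- is exactly what a profile would have to settle.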
Uber, lighting $100 notes by the box
Adelaide is a small Australian city of a million people -- it's pretty much on the other side of the planet from everything. Yet Uber are burning cash here like there is no tomorrow: huge billboards near the airport, relentless Facebook ads trying to recruit drivers, direct appeals to taxi drivers. Yet Uber isn't available to answer simple questions: does driving for Uber breach the Passenger Transport Act? Do I need insurance above the typical motor vehicle insurance?
Anyway, if Uber are burning so much cash here in Adelaide, they must be shovelling it into the incinerator in larger cities. It all seems very "crash through or crash", and very much aimed at IPO returns to venture capital rather than any sane way to build the business.
@IGnatius T Foobar -- ARM
Sure, Intel could have used ARM. But it would have been stupid to do so.
Firstly, there are no ARM designs for 14nm, so Intel would have to license and then develop a design. That already costs Intel more than using its own x86 architecture, for which it need not pay licensing fees. Intel *would* have to license: using the historically-licensed design isn't going to cut it, as Intel would want the 64-bit ARMv8 design. It does not make financial sense to develop a 32-bit design with its 4GB memory ceiling -- that simply won't have the sales lifetime for the investment required.
Secondly, the market would expect those ARM designs to retail for less than what they would get for the equivalent x86 design.
Thirdly, ARM sales have less lock-in than with x86. If a fab overcommits capacity then a customer can run off some cheap ARM and threaten your business plan. That doesn't happen with x86.
Fourthly, a lot of the work making ARM work on 14nm would benefit later arrivals to the 14nm process, who could then license that work from ARM Ltd rather than pay the development costs themselves. Why would Intel, the first with a 14nm process, ease the way for its rivals?
The statement that "Linux runs fine on ARM" is irrelevant. Linux runs fine on x86 too.
Generation-long problem, but what are the side effects?
Phil argues it's going to be one of those generation-long problems, similar to access to strong crypto. That doesn't mean that there aren't knock-on issues beyond that generation. In that way Phil is too sanguine.
Take crypto. When I wrote a Pine patch to provide PGP-encrypted mail there was a notice issued preventing the export of that beyond Australia. So we had a generation of mail clients without strong crypto (Pine was the "market leader" in Internet e-mail clients at that time, so competitors would have sought feature parity). Importantly, without strong crypto there can not be sophisticated crypto key management.
That lack of sophisticated key management -- that is, knowing who you communicate with and how well you know who they are -- pretty directly allowed the rise of spam. There have been attempts since at "email reputation management" to mark particular users as spammers or compromised, but the lack of widespread key management for e-mail means that those attempts have never got much further than the network layer -- marking particular IP addresses as suspect.
The cost of the side-effects has been immense. We can't even mark a Nigerian scammer as untrusted. It's not at all clear that the two decades of additional ability to tap email have resulted in less threat to people's welfare.
Not too bad
This isn't too bad for a well designed network.
(1) OSPF shouldn't be seen or accepted on the leaf subnets used by computers. (2) It requires the defeat of OSPF authentication (easy or hard, depending solely upon the randomness of the key).
A surprising element is that Cisco's OSPF will accept unicast OSPF from anyone, not just predefined unicast neighbours. That's something to add to the router protection access control lists.
On a poorly designed network this is a bit of a disaster, since the only recovery is to reboot the router (which isn't really an issue: since it has just blackholed all IPv4 traffic the router was no longer doing much worthwhile anyway). By far the quickest work-around for those networks is to deploy OSPF MD5 authentication.
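To see why the randomness of the key is the whole game: OSPF cryptographic authentication (RFC 2328, Appendix D) appends the shared key, padded to 16 bytes, to the packet and sends the MD5 digest of the result. There's no per-packet secret, so a low-entropy key can be brute-forced offline from one captured packet. A minimal sketch of the digest computation:

```python
import hashlib

# Sketch of OSPF MD5 authentication (RFC 2328, Appendix D): the shared
# key, padded with zeros to 16 bytes, is appended to the OSPF packet and
# the MD5 digest of the whole is sent trailing the packet. The receiver
# recomputes the digest with its own copy of the key and compares.
def ospf_md5_digest(packet: bytes, key: bytes) -> bytes:
    padded_key = key[:16].ljust(16, b"\x00")
    return hashlib.md5(packet + padded_key).digest()

def verify(packet: bytes, received_digest: bytes, key: bytes) -> bool:
    return ospf_md5_digest(packet, key) == received_digest
```

A captured packet-plus-digest pair is enough to test candidate keys offline -- which is why a random key makes this "hard" and a dictionary word makes it "easy".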
Nice work picking up on the importance of single laser versus multiple lasers and wave-division multiplexing.
In the future you could also look into the uncorrected error rate, the distance (or optical loss), and whether an ITU-specified cable was used (ie, something which may be in the ground versus something lab-built). That would help with the apples-v-oranges nature of these sorts of comparisons (and I'm not at all suggesting that the variation is deliberate, merely that it reflects the small number of labs working in this field, all of which make different reasonable choices in an environment where there's no pressure for interoperation).
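The distance-versus-loss point is simple arithmetic. Using assumed typical figures (not from the article): deployed G.652 fibre attenuates roughly 0.2 dB/km at 1550 nm, with a small extra loss at each splice, so a real buried span loses noticeably more than the same length of unspliced lab fibre:

```python
# Illustrative link-budget arithmetic with assumed typical figures:
ATTENUATION_DB_PER_KM = 0.2   # assumed for deployed G.652 fibre at 1550 nm
SPLICE_LOSS_DB = 0.1          # assumed loss per fusion splice
SPLICE_SPACING_KM = 4         # assumed cable-drum length between splices

def span_loss_db(length_km: float) -> float:
    """Total optical loss of a deployed span: fibre loss plus splices."""
    splices = int(length_km // SPLICE_SPACING_KM)
    return length_km * ATTENUATION_DB_PER_KM + splices * SPLICE_LOSS_DB

# A 100 km deployed span: 20 dB of fibre loss plus 25 splices at 0.1 dB
# each, 22.5 dB in all -- versus a bare 20 dB for unspliced lab fibre.
```

That 2.5 dB gap is exactly the apples-v-oranges problem: a hero result quoted only by distance may have been run over the easier fibre.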
Tbps not a useful measure
The lifetime of a router is set by the port density of its fastest interface. Quoting that, rather than aggregate inter-port Tbps, is a more useful measure of the awesomeness (or otherwise) of a router. Also useful is the maximum packets-per-second of small packets: this matters particularly where CPU and network-processor designs are used, as that limit is usually reached well before the bps limit.
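The small-packet point is easy to quantify. On the wire each Ethernet frame also carries an 8-byte preamble and a 12-byte inter-frame gap, so a minimum 64-byte frame occupies 84 bytes (672 bits) of line time:

```python
# Packets-per-second at minimum frame size: a 64-byte Ethernet frame
# occupies 64 + 8 (preamble) + 12 (inter-frame gap) = 84 bytes on the
# wire, i.e. 672 bits per packet.
def max_pps(line_rate_bps: int, frame_bytes: int = 64) -> int:
    wire_bits = (frame_bytes + 8 + 12) * 8
    return line_rate_bps // wire_bits

# A single 10 Gb/s port must forward roughly 14.88 million 64-byte
# packets per second -- the per-packet workload that exhausts a CPU or
# network processor long before the bps figure is reached.
tenG_pps = max_pps(10_000_000_000)
```

Which is why a pps figure for small packets tells you more about a router's headroom than its aggregate Tbps.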