What will an oligopoly feel like with the end of net neutrality?
I don't know.
But the US is going to find out.
On the internet, being big business is always better than being small, and being biggest is best of all.
16327 publicly visible posts • joined 10 Jun 2009
1) Make stuff people like to buy.
2) Don't have it collect personal data on them or their belongings.
3) Don't send it to a remote server farm.
4) Don't sell it to whoever pays you the most.
I like to think of it as the (none of your f**king) business model.
I'm guessing that means: a) there are a lot of parameters to twiddle (not <10, but 100s or 1000s); b) the "success" criteria are complex (it's one of those n-dimensional optimization problems); c) prioritizing them is a massive PITA; d) this algo (and its UI) implements a "design of experiments" process to identify which parameters would give the biggest data take from a shot; and e) human judgement is then used to evaluate the result, so the operator decides which is "better."
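The (a)-(e) loop can be sketched as a screening search that surfaces the top few candidate settings for the human-judgement step. A minimal sketch in Python, where the score function, parameter ranges and trial count are all assumptions (a real rig would use proper design-of-experiments methods rather than random sampling):

```python
import random

def screen_parameters(score, space, trials=100, top_k=5, seed=0):
    """Randomly sample parameter settings and return the top_k, best first,
    for a human operator to judge."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        # One candidate setting: a value drawn from each parameter's range.
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        results.append((score(params), params))
    results.sort(key=lambda r: r[0], reverse=True)
    return results[:top_k]
```

With hundreds of parameters this kind of naive screening saturates fast, which is where the hardware turnaround cycle bites: every "trial" is a real shot.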
It would seem that building hardware that can run on an 8 min turnaround cycle is at least as important to this as the SW.
A few notes on other fusion systems and fission reactors.
PWRs don't run at 1000F; they run at around 593F. They run at about 155 Atm to stop the water boiling at that temperature. Their efficiency is around 25-30%. Modern high-pressure coal/gas/oil boilers can hit 932F and about 35%+. PWRs are only dominant because the USN paid essentially all the development costs for Westinghouse. As power plants they make great submarine drives.
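The efficiency figures track the temperatures via the Carnot limit; a quick sanity check in Python at the temperatures quoted (the 300K heat-sink temperature is an assumption):

```python
def carnot_efficiency(t_hot_f, t_cold_k=300.0):
    """Carnot limit 1 - Tc/Th, with the hot side given in Fahrenheit."""
    t_hot_k = (t_hot_f - 32.0) * 5.0 / 9.0 + 273.15
    return 1.0 - t_cold_k / t_hot_k

# PWR primary loop at ~593F vs a modern fired boiler at ~932F.
pwr_limit = carnot_efficiency(593.0)     # ~0.49 ideal; real plants get ~25-30%
boiler_limit = carnot_efficiency(932.0)  # ~0.61 ideal; real plants get ~35%+
```

Real cycles land well under the ideal figure, but the gap between the two technologies is roughly what the limit predicts.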
The USN did fund a fusion project directed by the late Dr Bussard (he of interstellar ramjet fame), and it was progressing well. His lectures on YouTube are interesting on why people don't think tokamaks are very good. I think they are still in business and still making slow progress, due more to lack of funding and the need to improve their modelling SW (their design is not exactly off the shelf).
Both MIT with ARC and a British company plan to use High Temperature Superconductor tapes of rare-earth barium copper oxide (REBCO) at around 20K (which is high temperature to people who are used to liquid He at 4K), with innovative engineering of the tokamak, to deliver a net-power-generating fusion system costing less than $300m to develop. Both plan much higher magnetic fields than ITER, and hence can be much, much smaller.
Yes, physicists have thought about getting the heat out. Current plans call for a blanket of molten Lithium to absorb the neutrons from the Deuterium-Tritium fusion reaction, breeding more Tritium without a fission reactor, and then run it through an HX to drive a steam turbine at the same conditions as a conventional fired power plant. The wall materials are difficult, as they have to take spacecraft-reentry temperatures and high radiation fluxes and be repairable/replaceable by remote control. Something like the nose of the Space Shuttle (RCC) seems to be a candidate.
And as for a free idea....
Laser fusion systems turn the laser energy into "Extreme Ultra Violet," or (as everyone who isn't trying to sell a wafer fab exposure tool calls it) soft X-rays, in the 250eV range. The EUV tools use 20kW lasers to hit a liquid metal target to get <100W of actual exposure energy (IIRC more like 10W), which is not much when you're trying to expose a 300mm dia wafer.
A more direct route would be to use a "Smith-Purcell" generator. This uses electrons launched across a diffraction grating of alternating conducting and insulating ridges. There appear to be conflicting theories of how the process works at the quantum level (so plenty of opportunity to optimize it). The grating period would be in the nanometre range, and the electron beam (ideally a wide wave front) needs to be about as close to the grating as the grating period, IE about 6-7nm for emission at right angles to the grating, which needs a near atomically smooth plane. Coupling falls off exponentially with distance, so closer is better, without hitting the grating.
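The geometry above follows from the standard Smith-Purcell dispersion relation; a quick sketch in Python (the period and beta plugged in are illustrative assumptions, not a design):

```python
import math

def smith_purcell_wavelength(period_m, beta, theta_rad, order=1):
    """Smith-Purcell relation: lambda = (d/n) * (1/beta - cos(theta)).

    period_m: grating period d; beta: electron speed as v/c;
    theta_rad: emission angle from the beam direction; order: harmonic n.
    """
    return (period_m / order) * (1.0 / beta - math.cos(theta_rad))

# At right angles (cos ~ 0) with near-light-speed electrons, lambda ~ d,
# which is why a ~6-7nm period is needed for soft X-ray output.
wl = smith_purcell_wavelength(6.5e-9, 0.99, math.pi / 2)
```

Slower electrons push the emitted wavelength longer, so for soft X-rays the beam has to be fairly relativistic as well as very close to the grating.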
The upside is that electron emission is a very efficient process and can be quite fine tuned to a specific emission energy, making acceleration to the level needed to drive the grating quite efficient also, if you can form a layer
I'm guessing there are 2-3 PhDs and a shedload of degrees to be earned building a machine that could make this work.
Hahahahahahahahahahahahahahahahahahaha
Final task description, maybe.
Now this.
"Meanwhile, the Pentagon's director of operational test and evaluation told a US Congress committee earlier this year that the aircraft won't be ready before 2019, mentioning 158 "Category 1" software flaws that could cause death, severe injury or illness unless fixed."
158 Cat 1. IE if it fails, people and planes start falling out of the sky (assuming none of them are in the software controlling takeoff, of course).
It's true what they say: C/C++ lets you make errors faster (even with a 158-page style guide).
IIRC Python has pretty good facilities for adding packages to the language already.
As for sending your IP to "anonymous server farms in unknown jurisdictions": that's the cloud, which gets you latency and security issues for free.
No doubt something that sounds like a great idea at the end of a 25Mb/s pipe in SF, but less attractive elsewhere.
About 764 billion times bigger in fact.
It's also enormously faster and probably not far off the power consumption of that single chip.
Is there any other field that's progressed that much in that short a time span (even aircraft flight speed)?
This is not AI, it's data whoring sh*t.
True.
But your timeline is off. People called Lotus Agenda a PIM, in the 80's.
Although it was written with input from people on the AI side at Stanford I don't think they specifically called it AI.
Icon because I always thought there should have been an option to turn "Clippy" into "Gimpy."
systemd
Oh! "DNS lib underscore bug bites everyone's favorite init tool, blanks Netflix."
OK that's a use case.
So to need this functionality at boot time you need...
(Remote drive) X (only known by host name, not IP address) X (Must be available to apps by the time server is booted).
And basically, if you can get the IP address any other way, or you can delay starting the apps that need that drive and mount it through a script, this use case disappears.
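Put another way, the whole use case reduces to a few lines of boot-time script that resolve the name themselves and retry until DNS is up. A minimal sketch in Python (the hostname and what you do with the resolved address are site-specific assumptions):

```python
import socket
import time

def resolve_with_retry(hostname, attempts=5, delay_s=2.0):
    """Resolve a hostname to an IPv4 address, retrying while DNS comes up."""
    for i in range(attempts):
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            if i + 1 == attempts:
                raise
            time.sleep(delay_s)

# e.g. resolve_with_retry("nas.example.com"), then hand the address to
# mount(8): no init-system DNS parsing involved.
```

If the script can't resolve the name after a few tries it fails loudly, which is exactly what you want at boot instead of an init tool quietly mis-parsing a record.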
So much for DevOps
The only use case I can come up with is
Boot problem --> need to Google something --> have no other PC/server/laptop/tablet/phone with internet connectivity with which to do this.
But I don't know. Do you often have to look up a bunch of domain names to get their IP addresses to stock some data file or other?
If you don't it just seems very odd.
It seems to work as
Map algorithm to --> generic FPGA architecture
Re map algorithm --> specifics of FPGA architecture.
I guess the problem getting it published would be "Why don't the FPGA mfgs do this already in their SW?"
The honest answer is probably "Because they want to sell chips. As long as the design algorithms are good enough and fast enough for their products, they don't really care about optimal routing, and of course everyone knows if you really need the last iota of speed you go to ASICs anyway."
Turning the conceptual logical building blocks of the algorithm into the HW building blocks on the chips is often called "compilation." It seems FPGA vendors will re-discover the "optimize" stage, where the system takes more time to generate a more efficient result. This will no doubt be announced as a massive leap forward.
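The "optimize" stage is essentially what simulated-annealing place-and-route already does: spend more time, accept occasional uphill moves, keep the best layout seen. A toy 1-D placement sketch in Python (the netlist, cost model and cooling schedule are all made up for illustration):

```python
import math
import random

def wirelength(order, nets):
    """Total span of each net, given a left-to-right ordering of cells."""
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net)
               for net in nets)

def anneal(order, nets, steps=2000, temp=2.0, cooling=0.995, seed=1):
    """Swap random cell pairs, accepting worse layouts with Boltzmann
    probability, and return the best placement seen."""
    rng = random.Random(seed)
    order = list(order)
    cost = wirelength(order, nets)
    best, best_cost = list(order), cost
    for _ in range(steps):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        new_cost = wirelength(order, nets)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = list(order), cost
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        temp *= cooling
    return best, best_cost
```

More steps and a slower cooling schedule trade runtime for wirelength, which is exactly the "more time for a more efficient result" knob described above.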
As for changing cable lengths in a mainframe: this was mentioned in ref to Multics. Because the GE645 didn't have a central clock, it was all asynchronous. Lots of "timing" signals, but no central "clock" signal as such. Something did something once a pulse got to it down a piece of coax of a specific length. Needs more time? Stick in a longer cable. Speeding up would have been more difficult, as you'd probably need to shorten multiple cables to get the result.
What I can't get about FPGAs is that individual transistor toggle speeds have continued to climb over the years. They've got to be over 10GHz by now.
So why TF can't you get an FPGA that can routinely map algorithms that run at >1GHz?
If it's because they are all using the same poor mapping algorithms we might be in for quite a performance boost in the next few years.
Hard to believe intelligent people don't know this already, but they probably don't feel it in their bones.
Now try and get the PHB's to actually spend money doing them.
And the biggest piece of BS.
"IT change drives business change."
Wrong. You want to change your business. You need to change IT to do that.
But that requires senior management to understand their business well enough to realize they need to change in the first place.
That can be tested.
If I'm reading Imagination Engines' text-heavy web site correctly, that is. FWIW it's a notion I agree with.
In principle all human knowledge can be processed by a set of very complex neural networks, since that's what a human brain is.
But IRL humans cannot program this system directly (and did not know it existed till the invention of the microscope).
That level is only dealt with directly when we learn manual tasks like walking, or when learning a language solely by matching sounds in one language against sounds (and their meanings) in another. Even that assumes we know a language already (what if the person has been deaf since birth?).
For everything else we operate with higher level abstractions. Words on paper --> language conventions --> concepts --> restructure thinking --> change weights within the NNs.
Would multilayer neural nets deep learning be easier if we acknowledged that?
It's simulation.
Or search through a "solution space."
And guess what: as the game rules get more complex, the solution space gets much bigger, so the ability to "imagine" consequences X moves ahead becomes much more valuable.
A fact first learned when people decided to try to get computers to play chess.
So what makes this such a big deal other than the current hype for all things AI & "deep learning" in particular?
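The "imagine consequences X moves ahead" part is plain game-tree search, the thing chess programs have done since the 1950s. A minimal minimax sketch in Python over a toy tree (the tree itself is an arbitrary example):

```python
def minimax(node, maximizing=True):
    """Score a game tree: leaves are numbers, internal nodes are lists
    of child positions; players alternate max/min at each level."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Branching factor b over depth d means b**d leaves to evaluate:
# the solution-space blow-up as rules (and lookahead) get deeper.
tree = [[3, 5], [2, 9]]  # two moves each, opponent replies minimized
```

The blow-up is why everything since has been about pruning and evaluation heuristics rather than the search idea itself, which hasn't really changed.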
Ooops.
MUMPS was not Forth based, as it preceded Forth by about 5 years (1966 vs 1971).
Although its terseness and design of breaking code into 2KB blocks is very Forth like.
OTOH, variables <==> files <==> b-trees, meaning anything can be made persistent across all instances (IE a file) just by putting "^" in front of the name, is not very Forth-like.
And then there is the command abbreviation, combined with the number of spaces between some of them being significant. That could make for a complete mindf**k when reading through old code, to the point of writing a tool to expand such abbreviations to make the whole thing more readable.
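Such an expander is easy to sketch. A deliberately naive Python pass that assumes commands simply alternate with their argument lists, and ignores strings, comments and postconditionals (all things a real tool would have to handle):

```python
# Standard single-letter MUMPS command abbreviations.
ABBREV = {"S": "SET", "W": "WRITE", "I": "IF", "F": "FOR", "D": "DO",
          "Q": "QUIT", "K": "KILL", "G": "GOTO", "N": "NEW", "R": "READ"}

def expand_line(line):
    """Expand abbreviated MUMPS commands in one line of code.

    Naive: assumes tokens alternate command, arguments, command, arguments,
    so only even-numbered tokens are candidates for expansion.
    """
    tokens = line.split(" ")
    return " ".join(ABBREV.get(tok.upper(), tok) if i % 2 == 0 else tok
                    for i, tok in enumerate(tokens))
```

It falls over exactly where MUMPS itself gets tricky (argumentless commands need two spaces, which breaks the alternation), which rather proves the original point.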
"She's an undertaker now..."
That's sort of my point.
IIRC it was Forth based and allows abbreviations of commands. IOW it's for those who find C a bit too verbose.
I think it can legitimately be said that after you've used it you won't want to use another programming language.
Because you won't want to do programming ever again.
So the takeaway is
"We are GoDaddy.
We are not as s**t as our competitors at losing investors' money."
Which is true. They had to roll out in 53 countries to lose that kind of money. HPE could do that in just one.
Be interesting to see what that did to their stock price.
A noble idea, but seriously difficult to do in a government system like (for example) Universal Credit.
The problem is the fear (not the reality, the fear) of the number of interfaces between each of those "systems that does one thing well".
It doesn't help that the people who really understand how the existing systems work together (because the UK civil service is a "mature" environment IE it's got a very complex ecosystem already) are buried deep in the bureaucracy and have taken years to learn this.
And this will continue until those writing these specs realize the fear is not real.
This statement has to be parsed with a lot of care.
Are they talking about a remote trigger whose signal is undetectable? Probably not. Except what good would that do you? What you want is the hold to be RF shielded to stop it doing anything if someone sends it.
Are they talking about a remote trigger that would not show up on current sensors? That sounds more likely. Dodgy circuit board on Xray machine monitor? Time for a deeper probe.
Note: modern laptop batteries are not passive devices; they contain embedded electronics as well. So on balance just about true, and IRL likely to be so for decades to come.