>They need to yank IBM out of this group, as they have no products nor services that qualify them to produce anything, on this scale
OpenPOWER
15 publicly visible posts • joined 16 Jun 2009
>It took them 18 months to publish their own code under the LGPL?
That depends on whether it was a completely in-house project, whether they started with an already existing application and adapted it, or whether multiple entities from different projects worked on the software collaboratively.
My guess is that the copyright wasn't transferred by the authors when it was being written, so they had to do a code audit, track down all the authors wherever they are now and ask them to sign a form before they could change the license.
Another factor was probably the issue (as mentioned in another comment) that anything produced by the US government is automatically public domain. If the government funded part of the development then that might have needed some legal checks to make sure they could actually release it under the LGPL.
>Opteron is still in the number three super computer...
Yes, but that machine is 4 years old.
More relevant to the current health of competition is the chip manufacturer for the new machines in the last release of the Top 500 list. There were 154 new machines, 153 of them used various models of Intel CPU and 1 of them used a Sunway CPU (the new Chinese-designed #1 machine).
I've done plenty of deleting directories I didn't mean to thanks to badly defined variables, but two similar cock-ups stand out in my memory:
The first, which I've done a few times, is accidentally adding a trailing / to the "src" of an rsync command when trying to update a subdirectory, for example:
$ ls /bar
aaa bbb ccc ddd eee
$ ls /path/to/foo
file
$ rsync -a --delete /path/to/foo /bar
$ ls /bar
aaa bbb ccc ddd eee foo
$ rsync -a --delete /path/to/foo/ /bar
$ ls /bar
file
There's then a slow, dawning realisation of what's happened, followed by profuse swearing and "oh shit, where can I get that data back from?"
The other was when I was a young, misguided tcsh user. I was telling some veteran ksh users how good it was because it had features like "set rmstar", which warns you before an "rm *". I then proceeded to demonstrate this in my home directory, on my network login, on a different machine than I normally used, in a shell where the option was unset. Much hilarity ensued.
> Backups?
The story went that the backups were in a mounted directory which rm happily traversed and trashed, which is not an inconceivable scenario if a naive user was backing up to a network share or Dropbox.
In the age of ransomware it's become even more important not to store backups anywhere they can be easily accessed.
Classical simulated annealing is not a particularly good optimization technique, as was acknowledged in the paper itself:
"It is often quipped that Simulated Annealing is only for the 'ignorant or desperate'."
There are much better classical optimization techniques which would compare far more favourably, but they used simulated annealing because it is the closest classical analogue to quantum annealing.
Simulated annealing also doesn't do well on potential energy surfaces containing deep, narrow wells, where it can get trapped. Quantum annealing can tunnel very efficiently through such barriers, so they chose a problem which suited it. From the paper:
"carefully crafted proof-of-principle problems with rugged energy landscapes that are dominated by large and tall barriers"
I was hitting my data limit before WiFi-assist.
I went with the strategy of turning off mobile data for every app I had installed.
I then turned it back on as and when I first needed it.
I haven't hit my data cap since, not even with WiFi-assist, and haven't noticed any loss of functionality.
iOS gives you the information about how much data different apps are using right in the settings.
Nathan Hobbes wrote: "Not to state the obvious, but this explicitly talks about having 2 separate sockets, one for handsfree (mic/headphones) and one just for headphones"
Claim 1 ends with "said input-output and output interfaces to be jointly employed for said first and second headsets where said first and second headsets being the same headset" which, at a stretch, captures the single-connector, single-headset option.
In that case the prior art would be a mobile phone which played music/radio before 2001.
Chris Simpson wrote:
"What about Folding@Home, Granted not one single computer but 5+ Petaflop"
What Folding@Home does is millions of slightly different small problems.
What a supercomputer does is one very large problem.
The difference is tolerance to latency.
The answer to one F@H problem is independent of all the others, so the work can be task-farmed. If you're dealing with one big simulation, you need the fastest possible communication between all the CPUs, otherwise you'll be waiting an eternity for your answer.
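Here's a toy sketch of what task farming looks like, using Python's concurrent.futures. The work_unit function is a made-up stand-in for a real F@H-style job; the point is that each unit depends only on its own input:

```python
from concurrent.futures import ThreadPoolExecutor

def work_unit(seed):
    """One independent work unit: its answer depends only on its
    own input, never on another unit's state."""
    total = 0
    for i in range(1, 10_000):
        total = (total * 31 + seed + i) % 1_000_003
    return seed, total

# Because the units share no state, they can be scattered across any
# number of machines and gathered whenever they happen to finish;
# latency between workers never blocks progress.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(work_unit, range(8)))
```

A tightly coupled simulation is the opposite: every timestep needs data from neighbouring domains, so every node waits on the slowest link.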
Let's say you want to do some molecular dynamics on a piece of material the size of a grain of salt. Just holding the coordinates and velocities of all the atoms would require about 10 petabytes of memory. The drive for larger simulations, fewer approximations and finer resolutions will continue to feed these machines, although Amdahl's law and other software engineering problems are rearing their heads.
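As a rough sanity check of that arithmetic (the per-atom layout is my assumption, and the atom count is inferred by working backwards from the stated 10 PB, not given in the post):

```python
# Minimal state per atom in a molecular dynamics run: position and
# velocity, i.e. 6 double-precision values (x, y, z, vx, vy, vz).
bytes_per_atom = 6 * 8  # 48 bytes

# Working backwards: 10 PB at 48 bytes/atom corresponds to a system
# of roughly 2e14 atoms (an inferred figure, not from the post).
n_atoms = 2e14
total_petabytes = n_atoms * bytes_per_atom / 1e15
print(f"~{total_petabytes:.1f} PB just for coordinates and velocities")
```

And that is before forces, neighbour lists, or any per-atom bookkeeping the MD code itself needs, which typically multiplies the footprint several times over.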