Re: (beer because there's no saki)
You're right... there is no precedent
(A computer with no future ahead of it, I suppose...)
Francis Bacon's essay of the same title starts off:
DISSIMULATION is but a faint kind of policy, or wisdom; for it asketh a strong wit, and a strong heart, to know when to tell truth, and to do it. Therefore it is the weaker sort of politics, that are the great dissemblers.
http://www.authorama.com/essays-of-francis-bacon-7.html
Besides still being a relevant observation on the value of privacy (rendered as "closeness, reservation, and secrecy"), it's also telling as an indictment of Bush and Blair. All politicians lie, and we expect them to "dissemble" (pretend not to be what they are) and "simulate" (pretend to be what they are not) to some degree or another, and as appropriate to the circumstances. However, these men have taken simulation and dissimulation to such a level that one wonders if they are the only ones who cannot see their lies as anything but transparent falsehoods. Such people are beyond being merely immoral---they are outright dangerous.
"worse than CUDA with a fraction of the performance"
How do you make that out? Care to mention a bit of hardware that uses CUDA (in a more or less similar price range) and that outperforms a PPU/SPU app? I'm not saying it doesn't exist, but your comment is meaningless without looking at hardware--the programming architectures simply aren't directly comparable. On top of that, you also need to consider what type of application you're running. You can't make a blanket statement like that without considering what problem is being thrown at the system.
While I admit I'm not an expert on CUDA, I would consider myself pretty good at programming on the Cell. It seems to me that it would be a lot more difficult to get anything approaching full utilisation of the GPU in a CUDA architecture in comparison. The main reason is that CUDA seems to be designed around throwing many less powerful compute cores at a problem, and due to Amdahl's law, there's a limit to how effective this can be, depending on the type of problem. CUDA does seem to be an advance over most previous GPGPU attempts, mainly in supporting a richer set of programming primitives. Bitwise operations are the main one I see, but perhaps it also supports branching? I'm fairly ignorant on this, but I know previous GPU shaders didn't support branching, something that seriously restricted their power from a programmer's point of view. Another CUDA downside is the lack of recursive function calls. There are probably more restrictions I'm not aware of.
On the Cell, on the other hand, the SPUs are much more capable computers in their own right, with a full and pretty rich instruction set. There's still a performance penalty for (unhinted) conditional branches, and it's really designed as a vector processor, so there's a performance (under-utilisation) hit there too if you're writing scalar code for it. But the point is that it's a much more advanced core. Add to that the great inter-core communication possibilities (hardware mailboxes and interrupts and very high bandwidth DMA, though not directly to the GPU--you have to use a flip buffer in main memory) and it's quite possible to think of dedicating some SPUs to specific tasks the way you would threads or concurrent processes in other systems. Or of using a hybrid model, with some SPUs loading code dynamically as needed for small, compute-intensive kernels, while others are statically allocated to certain tasks/threads.
The Cell is also much better at keeping the main CPU free from having to act as a master to slave cores, with all the attendant housekeeping that can entail. In a properly designed application, SPUs can basically act asynchronously and can coordinate work amongst themselves, with main CPU overhead kept to a minimum. My guess is that a CUDA system needs to dedicate a fair amount of main CPU grunt to keep GPU cores singing. And that's power that could be spent implementing other parts of your app that the GPU can't help you with at all.
The upshot of all this is that while a CUDA system could very well beat a PS3 at a narrowly-defined task (eg, password cracking, though not things that involve rainbow hashes, since memory/disk bandwidth is the bottleneck there), such applications generally have to be embarrassingly parallel to begin with. So maybe you can write part of your render pipeline by throwing more cores at it in a CUDA system (constrained by your memory/DMA buses), but because of Amdahl's law, there's a limit to how far that will take you. At a certain point, you need to start thinking of apps as distributed programs with complex data interdependencies instead of purely parallel ones with only simple pipelines, and that's where the Cell's architecture really shines, in my opinion.
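It's easy to put rough numbers on the Amdahl's law ceiling mentioned above. A quick sketch (the 95% parallel fraction and core counts are arbitrary illustrative figures, not measurements of any real CUDA or Cell system):

```python
# Amdahl's law: speedup from n cores when only a fraction p of the
# work can be parallelised at all.
def amdahl_speedup(p, n):
    """Upper bound on speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelisable, piling on cheap cores
# tops out around 20x, no matter how many you throw at it:
for n in (8, 32, 128, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.95, n):5.1f}x")
```

The serial 5% dominates surprisingly quickly, which is the whole "many weak cores vs fewer strong cores" trade-off in a nutshell.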
Hmmm.. I foolishly hit reload on the page that had the original url. Now it's gone. Does the Register search box have some sort of silly iframe deficiency that lets search terms embed other pages? Or is it simply a well-placed %40 and a double decoding problem with browsers exclusively? Now that the original post has been edited, I guess I'll never know :(
And Bill Posters is innocent, too.
Seriously, though, what's with the influx of rabid AC commenters on this story? It surely can't be the PR departments of media companies, and I sincerely doubt it's a bunch of artists. Although maybe if it is the latter, it might explain why they aren't making as much money as they'd like from their fans ...
I was quite shocked to read your letter describing your problems balancing our bank account. I suggest we meet up to discuss same. How does Blackfriars Bridge on the 18th sound?
So mote it be!
PS please bring some bricks, if you can, on the off chance that we may chuck them at the ducks.
Dear Shakira,
I get letters telling me since I moved away
you've taken to hanging out on that rock about a mile from shore
given what I know about that rock mainly that it's populated by seals
I strongly suggest to you that you not hang out there anymore
'cause the seal is a wily and a vicious creature
and the seal will bite you if you give him half a chance
yeah the seal has a mind set on violence
and the seal is the sworn enemy of man
now when I say that the seal is vicious I use the term advisedly
according to webster's 9th new collegiate definition 4b.
which states that vicious means marked by ferocity
and offers as a synonym...savage
'cause the seal is a vicious and a wily creature
and the seal has a mind full of evil designs
and the seal will harm you and laugh about it
yeah the seal is not a creature you want to toy with
yeah the seal is not a creature you want to toy with
in the equipment used to etch a pattern on silicon. Quantum tunnelling is how the uncertainty principle manifests as electrons travel along ever-smaller circuits. As you shrink circuits and the energy levels involved approach the Planck scale (whether measured in terms of energy gaps or distances), electrons start to apparently "teleport" at random, so smaller circuits introduce quantum glitches.
If, however, you're talking about the actual process by which circuits are etched, you're probably talking about very high energy beams (x-ray lithography or, in this case, an electromagnetically accelerated electron beam). There the energy of the photon (x-ray) or electron (CRT-like accelerator) can be ramped up well in excess of the Planck-scale energy levels, so the beam won't be as affected by the uncertainty principle.
There are still problems, though. Even x-ray lithography (higher energies relative to UV) mightn't have enough energy to cast a clean shadow against the mask--hence (I take it) the need for multiple masks and x-ray sources. As for CRT, aiming is still hard at high energies due to the need to have a very high frequency circuit for steering the electron beam. Aiming has been a problem with CRTs since the beginning. The traditional solution (to get the electron to hit the right pixel) is to have a charged mesh close to the target which helps to focus electrons that are slightly off-target or absorb those that are more wildly off. Higher-energy electron beams probably do something similar.
The designers of these kinds of etching hardware still have to worry about the uncertainty principle as they get to ever-smaller scales, but the physical description of their problems manifests more as wave/particle duality (inability to cast hard shadows due to edges causing a diffusion/diffraction of the beam) than quantum tunnelling per se ("teleporting" low-energy electrons in a circuit). At least, that's how I understand it...
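For what it's worth, the exponential sensitivity of tunnelling to barrier width can be sketched with the textbook square-barrier (WKB-style) estimate, T ~ exp(-2*kappa*d). The 1 eV barrier height and nanometre widths below are round illustrative figures I've picked, not numbers from the article:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837e-31     # electron mass, kg
EV   = 1.602176634e-19   # joules per electron-volt

def tunnel_probability(barrier_ev, width_m):
    """Crude square-barrier tunnelling estimate T ~ exp(-2*kappa*d),
    valid when the electron's energy is well below the barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 1 eV barrier: leakage is negligible at 5 nm but very much not
# at 1 nm -- hence "smaller circuits introduce quantum glitches".
for d_nm in (5, 2, 1):
    print(f"{d_nm} nm barrier: T ~ {tunnel_probability(1.0, d_nm * 1e-9):.1e}")
```

The take-away is just the shape of the curve: halve the insulating gap a couple of times and the leakage probability grows by many orders of magnitude.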
I can see cases where it matters (basically anything that requires interactivity, such as database back-ends, game servers or things that need to be as real-time as possible) but how many of these are going to be held up by client-side latencies (application loading) or network latencies (things accessed from cloud or web interfaces)? I can also think of many more applications (indexing, compiling, transcoding, rendering, etc.) that are more compute-bound or that would much prefer to have bigger throughput/bandwidth than lower latencies. Besides, in a lot of applications the latencies associated with data transfer can be hidden by doing double- or triple-buffering of work packets or precaching data. This can usually reduce latencies to effectively nil, provided there's a discernible access pattern and it's not just purely random access (which the tiered, "temperature"-based storage caches won't handle well anyway). So does latency really matter so much?
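The double-buffering trick described above can be sketched in a few lines. This is a toy Python model (the sleeps just stand in for transfer latency and compute time; it's not any particular storage or network API):

```python
import queue
import threading
import time

def fetch(i):
    time.sleep(0.01)              # simulated transfer latency
    return f"packet-{i}"

def process(pkt):
    time.sleep(0.01)              # simulated compute time
    return pkt.upper()

def run(n):
    """Process n packets while a background thread prefetches ahead."""
    buf = queue.Queue(maxsize=2)  # two packets in flight = double buffer

    def producer():
        for i in range(n):
            buf.put(fetch(i))
        buf.put(None)             # sentinel: no more packets

    threading.Thread(target=producer, daemon=True).start()

    out = []
    while (pkt := buf.get()) is not None:
        out.append(process(pkt))  # fetch of the next packet overlaps this
    return out

print(run(5))
```

Because the fetch of packet i+1 overlaps the processing of packet i, the transfer latency effectively disappears whenever compute time dominates and the access pattern is predictable, which was exactly the point.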
Maybe because their artificial gravity system is effectively in a bubble with no net change to gravitational forces outside the ship? So like, maybe, the gravity in all the levels pulls in a direction they decide to call "down", but up at the "top" of the ship they've got the reverse pull to balance things out?
TBH though, invoking "artificial gravity" explanations kind of bugs me. We know it's all made up, but there's no need to lampshade it. The one exception: inertial systems that actually work, ie spinning ships and stations, a la 2001.
I know the article wasn't being entirely serious when suggesting it, but dual sim isn't going to allow you to create two walled-off identities. The problem is that the IMEI is transmitted when you register with the GSM system and this is uniquely tied to the phone, not the SIM. So no good for drug dealers, extortionists, kidnappers, etc.
The first password hacking I ever tried was to extract netbios (iirc) login passwords from memory with debug on IBM PS/2s. Must have been in the late 80's. It turned out to be surprisingly easy as the password often remained in memory even after the user had logged out. Security was a bit of a joke back then, though, and there wasn't much practical use for the networking except to play snipes.
I did think I noticed some difference when I wired them up and picked the wiring that I thought sounded better. I'll do the mono test later and see what that sounds like. I'm not looking forward to resoldering them when I'm still not 100% sure that I can aurally tell the difference between the two polarities :(
You wouldn't happen to have any links that explain exactly why phase matters for headphones, would you? My searches didn't turn up anything conclusive, and I still can't quite understand what makes out of phase headphones sound wrong. Thanks :)
I had an old set of cans whose cable had broken off in one ear and I finally got around to soldering it back on. With no markings on the terminals, I downloaded some in-phase and out-of-phase samples to listen to after wiring it up both ways. From reading how out-of-phase wiring should sound, I *think* I wired it up right, but I still have doubts. I know this isn't a problem these cans are supposed to fix, but I'm curious whether anyone here can give a definitive answer as to whether headphones being out of phase really matters?
AFAICT, out of phase stereo signals sound different on speakers because the sound waves coming from each speaker interfere with each other, selectively destroying parts of the signal it shouldn't. Thinking about this in terms of headphones, it's not clear that there should be any interference pattern set up at all unless the brain is doing some analogous processing on the sounds. So my question, as per title--does stereo phase matter at all when wiring headphones?
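For anyone wanting to run the same experiment without hunting for samples online, it's easy to generate your own in-phase and out-of-phase test tones. A sketch using only Python's standard library (the 440 Hz tone, amplitude, and 16-bit stereo format are arbitrary choices of mine):

```python
import math
import struct
import wave

def write_tone(path, out_of_phase, freq=440, secs=2, rate=44100):
    """Write a stereo 16-bit WAV test tone; inverting the right
    channel simulates a mis-wired (out-of-phase) driver."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for n in range(rate * secs):
            s = int(20000 * math.sin(2 * math.pi * freq * n / rate))
            r = -s if out_of_phase else s
            frames += struct.pack("<hh", s, r)  # left, right
        w.writeframes(frames)

write_tone("in_phase.wav", False)
write_tone("out_of_phase.wav", True)
```

Switching between the two files on the headphones in question should make any audible difference (or lack of one) much easier to judge than resoldering and trusting your memory.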
On topic: speaking of stereo, it seems that if each bud had two sensors (antennae placed at right angles to each other) you could do detection in much the same way that a theremin works. The articles mention only one sensor, and the reg article mentions breaking a circuit, but I wonder if there isn't some mini theremin gizmo at work here? Can you even miniaturise it to that level? I have no idea.
Off topic again... I've had theremins on my mind since the recent reg article on increasing screen resolutions on tablet computers. I mentioned the problem of accurately hitting a patch of screen with a finger as the resolution of the monitor goes up. Later it occurred to me that something like a theremin could detect an incoming finger and selectively zoom in on the target area (with the feature possibly keyed to a gesture, like circling the finger in as it approaches the screen). It might need two detectors for near/far range. Just throwing the idea out there, fwiw.
Several commenters have already made similar posts, but I'll just throw out a few objections of my own...
1. Sheer volume. There are hundreds of thousands, if not millions, of pieces of software out there. How are you ever going to certify each one?
2. Barriers to entry. Only the very biggest companies can afford certification, making it harder or impossible for small producers to compete.
3. "No warranty" boilerplate and liability. GPL says "there is no warranty for this free software" and it's also often repeated in other docs. Most proprietary software also has the same "no warranty, not even a guarantee of fitness for purpose" kind of language. This is incompatible with kitemark-like schemes and it could open up the producer to some sort of liability.
4. Alternative: bug bounties. Money spent on certification would be much better spent paying people to find bugs. It also gives users much more confidence that the makers are serious about software quality.
5. Alternative: certify processes, not products. Although it's overkill, at least the ISO 9000 standards have the right idea (IMO) by certifying that you're following good practices and not making guarantees of product quality/safety.
6. Impostors/policing: If you have a software kitemark and you teach users to associate it with quality/safety, isn't this just another way for conmen to trick you? You'd need a massive software signing infrastructure to certify each piece of software--you can't just rely on stickers saying something is approved/certified. Signing all software is completely impractical, even for the likes of Microsoft, so what chance does it have for a voluntary/semi-regulatory body?
7. Reputation: how do you build up brand trust with a voluntary system like this? Each bug or security lapse erodes not only the credibility of the software producer, but also the certifying agency. Do you really want to tarnish the established kitemark "brand" like this?
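To make point 6 concrete, here's the kind of sign/verify machinery a certification scheme would need for every single release. This is a deliberately minimal sketch: real schemes would use public-key signatures (RSA, Ed25519, etc.) plus a certificate chain; the HMAC here just keeps the example self-contained in the standard library, and every name in it is made up:

```python
import hashlib
import hmac

CERTIFIER_KEY = b"not-a-real-key"   # stand-in for the certifier's secret

def sign(blob: bytes) -> str:
    """Certifier signs a digest of the released binary."""
    digest = hashlib.sha256(blob).digest()
    return hmac.new(CERTIFIER_KEY, digest, hashlib.sha256).hexdigest()

def verify(blob: bytes, signature: str) -> bool:
    """User (or OS) checks the binary against the published signature."""
    return hmac.compare_digest(sign(blob), signature)

release = b"\x7fELF...pretend this is a binary"
sig = sign(release)
print(verify(release, sig))                  # genuine release
print(verify(release + b"tampered", sig))    # modified binary fails
```

The crypto itself is the easy part; the objection stands because key distribution, revocation, and re-signing every patch of every product is the infrastructure nobody volunteers to run.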
Not sure exactly what you're trying to get at. It would probably be wrong of me to point out that the Camera Obscura was invented in Greek times (not exactly a "modern" camera, as you put it). But I'll match your quote with a contradicting one:
"when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be." -- Lord Kelvin
I meant that to specifically refer to magnetic repulsion, as in "build a big enough electromagnet that will interact with the Earth's magnetic field and knock the thing off course". Not the fundamental force of attraction/repulsion between charged bodies. OK, I know at least that Maxwell's equations unify "electricity" and magnetism (and light), but what I was thinking of was whether an electromagnet (using a purely magnetic motive force) could deflect the thing enough once you take into account the relative size of the Earth and the attendant gravitational attraction between it and the asteroid. I meant that the electromagnet would probably have to be huge in order to overcome Earth's specific gravitational attraction, and not a statement about the relative size of forces in the abstract.
It would be quite interesting to see whether such a magnet-based solution would work, or whether building and powering a large enough one would even be possible.
might be to land and secure a giant electromagnet to it. It would have to be damned big, though, since the electromagnetic force is tiny compared to the force due to gravity and mass. A hybrid solution combining the idea of magnetic propulsion and the solar sail concept might be to attach superconducting tethers (which should be nicely chilled in the vacuum of space) to the rock and have them extend out as straight as possible (easy if the rock has any rotational moment, but prone to snapping if it's rotating too fast). Perhaps the combination of interactions between the system and the solar wind, the system and the earth's magnetic field and the magnetic flux generated by spinning the conducting wires in the other fields might be enough to impart it with enough momentum to direct it off course? I don't know enough about magnetism, let alone about spinning superconducting wires to know whether it actually works like this.
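To put rough numbers on "damned big", here's a back-of-envelope comparison of the dipole-dipole force on a generously sized superconducting loop in Earth's field versus plain gravity on a modest asteroid. Every figure below (loop size, current, asteroid mass, distance) is a made-up round number purely for illustration:

```python
import math

MU0     = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
G       = 6.674e-11            # gravitational constant
M_EARTH = 5.972e24             # Earth's mass, kg
M_DIP   = 8.0e22               # Earth's magnetic dipole moment, A*m^2

r      = 7.0e6                 # distance from Earth's centre, m (~LEO)
m_ast  = 1.0e11                # asteroid mass, kg (a few hundred metres)
m_loop = 1.0e6 * 1.0e6         # 1 MA current around a 1 km^2 loop, A*m^2

# Coaxial dipole-dipole force (best-case alignment):
f_mag  = 3 * MU0 * M_DIP * m_loop / (2 * math.pi * r**4)
f_grav = G * M_EARTH * m_ast / r**2

print(f"magnetic force ~ {f_mag:.1e} N")
print(f"gravity        ~ {f_grav:.1e} N")
```

Even with an absurdly heroic megampere loop a kilometre across, the magnetic push comes out tens of newtons against hundreds of gigannewtons of gravity, which rather supports the "damned big" assessment.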
For the sake of argument, let's say I wanted to see if there were any free MP3 downloads from Lackluster (an alias of electronic artist Esa Ruoho). Typing in "lackluster download free mp3" into Google currently gives the following link as the third result: http://www.lackluster.org/releases?type=69&format=All
Scanning through that page, I count over 20 recordings which the artist has made available for free download. After checking out the site, it's fairly obvious that this is legit and that kudos is due to the artist for providing us with such an array of freebies. However, if these proposals were to be implemented, my search terms would be subject to extra scrutiny since they include many trigger words. I realise that the proposals here would not automatically stop me from accessing the search results in this case, /provided/ the artist has taken steps to register his site with an as yet non-existent "certification" entity. However, no guarantees are given that the site will not be blacklisted by default based purely on my search terms.
It's highly likely that these "certification" entities will, in fact, be either collection agencies or agents of the big labels. Neither have a sterling record (to pardon the pun) when it comes to copyrights they don't actually own. A case in point is Edwyn Collins, who was (in)famously prevented from sharing his music via myspace: see http://www.guardian.co.uk/music/2009/oct/06/edwyn-collins-sharing-music
The essential point that I am trying to make here is that although the proposals seem innocuous enough on first reading (provided you read past the knee-jerk reaction that this is simply censorship, pure and simple, and try to see some merit in them), I think it's highly likely that this will end up hurting independent artists. In effect, the established players (not really a good word for them) in the music industry are attempting to set themselves up as gatekeepers, deciding what you can and cannot access. It doesn't matter whether their intentions are as pure and egalitarian as they make themselves out to be here, it's almost guaranteed that "mistakes" will happen, and innocent sites will find themselves cut off from their audiences.
The devil is really in the details of implementation. Can independent artists (including those artists that just compose, record and release music just for fun) ensure that all the search engines won't blacklist them by default? What happens if takedown notices are issued in error? Will the accusation count for more than the eventual exoneration (ie, will there be mechanisms for ensuring that accusations are effectively forgotten once overturned)? How will sites and artists know that their traffic is being blocked by search engines? How will sites and artists know who to contact to remedy the situation? What sort of bureaucracy will be involved in getting relisted? What happens to site rankings if some legitimate takedown orders are processed against it--will other users of the site end up being tarred with the same brush and have their details delisted? What about searches with similar keywords? What about sites like archive.org? What about blocking sites in other countries that don't use English as the lingua franca? Et cetera...
Just my €0.02.
I assume that all these devices are touch-screen displays. It's all well and good producing higher res screens, but there's a limit to how useful the screens are if we're stuck with addressing them with ever-pudgier fingers (relatively speaking). Should parents-to-be plan on reviving the ancient art of bonsai digitry in the next five years?
(icon illustrative of problem)
Maybe it's because I'm all atwitter about the news about the Raspberry Pi going into production, but it seems to me that there are only a few metrics that really interest me about this segment of the market (basically, low-powered, cheap PCs). The first two are LCU(*)/GFlop and GFlops/watt. Unfortunately, the entire netbook market has either stagnated or regressed on both of these fronts since their first introduction. It's not as if screen size or installed RAM has broken out of the (alleged) plateau (allegedly) imposed by Wintel, and GFlops/watt has only improved marginally, at least in the Atom/SoC world (and you can forget about low power usage outside of Atom in the x86 world).
The other thing (not really a hard metric as such) is the number of available cores and ease of programmability of these extra cores. A lot of SoC chipsets will handle video decoding and a modicum of accelerated graphics processing, but it's very hard to put those bits of hardware to good use for general programming tasks. OpenCL is nice, but where are the frikkin drivers?
So that's why I'd rather invest in 6--12 RP boards and a few USB hubs and Ethernet switches over ultrabooks, tablets, smartphones, e-readers or any other deliberately crippled sub-notebook format. I may be wrong, but I think a lot of people agree with this sentiment.
(*) Local Currency Unit. I like putting spare netbooks to work in parallel transcoding clusters when they're not downloading web pages or showing short/interruptible videos. That's why $£€/GFlops matter to me. As does GFlops/watt.
I've just (in the last two days) updated one of my PS3s to firmware 3.55 from 2 point something and have managed to retain (rebuild) my OtherOS capability thanks to custom firmware signed with Sony's "private" package-signing keys. Now that I know that I can continue to use it/get back into using it for development I might actually be tempted to buy another one (even a lower-wattage slim model that never had OtherOS to begin with). Don't discount the power of pragmatism over principles.
No, the photon doesn't have any expiry date on it just because it came from the soup(*). Also, energy is conserved because the mirror's kinetic energy is sapped by an amount equal to the energy of the created photon (and its momentum by the photon's momentum). Or, since the mirror isn't physical in this case, the creation of photons means that the power supply for the apparatus has to pump in more joules in order to achieve the same acceleration curve.
(* Although I'm sure the Thomas Edison Electric Light Company (aka General Electric) would love to be able to sell photons with a "use-by" date, a quick search shows that photons are thought to have an infinite half-life)
For the few people who asked this question (and couldn't be bothered to do a web search themselves):
http://en.wikipedia.org/wiki/Comparison_of_online_music_stores
This isn't a complete list. There are quite a few independent labels and some bands (groups) that offer FLAC downloads. You'd have to go to their websites to check for yourself. There seem to be many more electronic artists whose catalogues are available in FLAC format as compared with more mainstream/pop artists. Check out bleep.com for a pretty decent selection in this genre. FLAC costs more than MP3, but that's totally understandable.