the US will be the new Somalia
Not if the MPAA and RIAA have their way. A lot of lobby dollars says that won't happen.
Interesting. I was wondering about this the other day. I was reading about subvocalisation, which is where people mentally form words when reading. When learning foreign languages, this can be a necessary step, but if you get into the habit of hearing each single word in your head as you read it, it means that your reading speed is limited to how fast you can vocalise it (so reading speed = speaking speed, effectively).
Anyway, that got me thinking about how people who are deaf from birth process written material. I suppose that's a variation on the "aphantasia" you mentioned, though I still wonder whether people who were born deaf can have mind's-eye-style auditory hallucinations even absent the signals needed to prime them. Is it possible that the brain uses other sense data (such as muscle memory of tongue position, mouth shape and so on, as gained from speech practice) as a proxy for subvocalisation?
I haven't read the paper, so I'm not sure why the authors decided to investigate this, or even how it's implemented. While I was reading the article, though, I was thinking about a couple of things. The first is how researchers reckon that sleep is necessary for most (if not all) things with a brain. Something to do with assimilating memories and inputs, most likely, and shifting experiences around between different layers of memory. The other thing I was thinking about is research on combining neural nets with expert systems of some kind, particularly of the fuzzy-logic variety. Oh, and also some of the stuff that Douglas Hofstadter was researching on "creative analogies" and kinds of symbolic intelligence.
Like I said, I have no idea how these guys are implementing their nets, but it seems to me that something that mimics the way the human brain dreams, complete with multiple levels of memory (with associated reinforcement and deliberate forgetting) and some sort of symbolic reinterpretation of neural network states (equivalent to codifying an expert system) would give you a system that is capable of the same kind of trick as outlined in the article. Namely, integrating new "experiences" and "skills" without nuking what's there already.
The biggest problem with neural nets is that they are opaque. You can observe a net's "thinking" only by reference to its outputs, but explaining the reasons (and hence deriving a usable expert system that isn't just a non-symbolic rehash of the neural weights) isn't easy. Still, if you could combine a kind of symbolic (associative) memory with something that's designed to play around with stored memories (ie, dream), for example by building trial fuzzy cognitive maps, you could perhaps compress the large neural-network state matrices into more manageable expert-system-like rules.
I'm sure that the learning algorithms would have to be adapted for this to work. You can't just compress a neural network state into a fixed expert system without lossage. So as stuff is shifted around between different types of memory, the system would have to self-check to make sure that the new model still works with the training set. Probably this would involve replaying and reformulating the steps that the net made as it learned (or "experienced") as a result of being corrected (with back-propagation or whatever). I imagine that a kind of blockchain structure could work very well, albeit one that provides a very subjective and revisionist version of events, thanks to it needing to be rewritten as the underlying representation of stored knowledge shifts around across the different memories and procedural parts.
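For what it's worth, the "replay plus self-check" idea above can be sketched in a few lines. Everything here (the class name, the eviction policy, the tolerance) is made up for illustration; it's a toy replay buffer, not any real framework's API:

```python
import random

class ReplayConsolidator:
    """Toy sketch: keep a buffer of past (input, label) "experiences" and
    replay them alongside new data, so consolidation doesn't nuke what's
    there already. Purely illustrative names and policies."""

    def __init__(self, capacity=100):
        self.buffer = []          # long-term "memory" of past experiences
        self.capacity = capacity

    def remember(self, example):
        if len(self.buffer) >= self.capacity:
            # deliberate forgetting: evict a random old memory
            self.buffer.pop(random.randrange(len(self.buffer)))
        self.buffer.append(example)

    def training_batch(self, new_examples, replay_ratio=1):
        """Mix new experiences with replayed old ones (the "dreaming" step)."""
        k = min(len(self.buffer), replay_ratio * len(new_examples))
        return list(new_examples) + random.sample(self.buffer, k)

    def self_check(self, model, tolerance=0.0):
        """Verify a (possibly compressed) model still fits stored memories."""
        errors = sum(1 for x, y in self.buffer if model(x) != y)
        return errors / max(len(self.buffer), 1) <= tolerance
```

The `self_check` step is the important bit: whenever knowledge gets shifted into a more compact representation, you replay the stored experiences through the new model and only accept the compression if it still fits.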
> leads to the concept of infinite _dimensions_
Erm, no. That's a pretty fundamental misunderstanding of what fractional dimensions are.
To take the example you mentioned, of coastlines not being circles, the length we measure depends on the length of the ruler we pick to measure it. The fractal part is due to self-similarity at various scales and the overall "crinkliness" of the thing being measured.
A few things:
* physical law determines that things have to bottom out at the Planck scale, so any weirdness observed with your set of rulers is merely an epiphenomenon when compared with c/Planck-based metrics
* Mandelbrot's "nature" is not the same "nature" as in the "nature of reality" (whether it be relativistic, string-theoretic or multiversal or whatever); Mandelbrot's "nature" is stochastic and has underlying power laws
* using relativistic rulers is by definition the "wrong thing" when dealing with the fundamental nature of things; it's like measuring how "plaid" the universe is
* something like the fractal/Hausdorff dimension is a mathematical abstraction, not a real "dimension" (again, see power laws)
Besides, just because there are fractions doesn't mean that there have to be an infinite number of numerators and denominators (and associated explanations for them as separate things) in the universe. Unless you want to argue exactly that, your argument falls apart.
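To make the coastline/ruler point concrete, here's a quick plain-Python sketch (no claims to rigour) that generates a Koch curve and then "measures" it with rulers of different lengths. The measured length grows as the ruler shrinks, which is exactly the coastline paradox, and the rate of growth is what the fractal/Hausdorff dimension captures:

```python
import math

def koch(p, q, depth):
    """Return the points of a Koch curve from p to q (excluding q)."""
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)          # end of first third
    b = (x1 - dx, y1 - dy)          # start of last third
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    apex = (mx - (b[1] - a[1]) * math.sqrt(3) / 2,   # tip of the bump
            my + (b[0] - a[0]) * math.sqrt(3) / 2)
    pts = []
    for s, e in ((p, a), (a, apex), (apex, b), (b, q)):
        pts += koch(s, e, depth - 1)
    return pts

def ruler_length(points, ruler):
    """Greedily walk the polyline in steps of a fixed ruler length."""
    pos, steps = points[0], 0
    for pt in points[1:]:
        if math.dist(pos, pt) >= ruler:
            steps += 1
            pos = pt
    return steps * ruler

pts = koch((0.0, 0.0), (1.0, 0.0), 5) + [(1.0, 0.0)]
coarse = ruler_length(pts, 0.3)    # big ruler skips the crinkles
fine = ruler_length(pts, 0.01)     # small ruler follows them
```

Plot log(length) against log(1/ruler) and the slope gives you the excess over dimension 1; that's the power law doing the work, not any extra physical dimension.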
I suppose it's just a case of "because I/we can".
ARP spoofing is still a thing. If you can connect to the same network segment, you can craft packets that make other machines on the segment associate your MAC address with the IP address of the real DHCP server. From there, you just run a rogue DHCP server handing out bogus IP addresses and routing information, so that you can man-in-the-middle machines the next time they renew their DHCP lease.
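For the curious, the spoofed ARP reply itself is just a 42-byte frame. A sketch of building one by hand (field layout per RFC 826; actually putting it on the wire would need a raw socket and root, which I'm leaving out):

```python
import struct

def arp_reply(attacker_mac, spoofed_ip, target_mac, target_ip):
    """Build a raw Ethernet frame carrying an ARP reply that claims
    spoofed_ip lives at attacker_mac. MACs are 6-byte, IPs 4-byte
    bytestrings. Illustrative only: sending is deliberately omitted."""
    # Ethernet header: dst MAC, src MAC, EtherType 0x0806 (ARP)
    eth = target_mac + attacker_mac + b'\x08\x06'
    # ARP header: htype=1 (Ethernet), ptype=0x0800 (IPv4),
    # hlen=6, plen=4, opcode=2 (reply)
    arp = struct.pack('!HHBBH', 1, 0x0800, 6, 4, 2)
    arp += attacker_mac + spoofed_ip + target_mac + target_ip
    return eth + arp
```

The target caches the sender-MAC/sender-IP pair, which is the whole attack: from then on, traffic for `spoofed_ip` comes to you.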
I suppose that a USB-based attack is probably going to be quicker. If it auto-configures, then there's no waiting around for existing DHCP leases to expire. As an attacker, you still have the problem of needing to connect to the local net segment and doing traffic forwarding (masquerading as the target machine) so that the user (and any running applications) doesn't notice any discontinuity.
Given that both methods need physical access to the LAN, I think that a Breaking Bad style device (that Walter White plugged into his DEA brother-in-law's PC Ethernet port) is probably the best approach, though I'm sure that it will need some sort of power supply.
To be honest, the Japanese doesn't look too bad, though as it's a single long run-on sentence, it's hard to deal with anaphoric references. That aside, it appears that the only real problem with the final translation is not knowing what to do with 使いやすい and 簡単に, which both get translated to "easily".
This, surely, is an artefact of focusing on collocation data. On the one hand, I think that this is a very sensible approach to translation between language pairs (eg, 彼は背が高い versus "he is tall"), but on the other, the more hops you take through intermediate languages, the more it becomes a case of Chinese whispers. Once you start stringing together the little islands that make up sensible, mutually intelligible utterances without any reference to the underlying semantics, you're bound to end up with an archipelago where the first and last islands will definitely not be mutually comprehensible.
I don't know if you speak Japanese, or if you just picked it as an intermediate language for its strangeness factor. If you do, I'm sure that you can come up with many examples where the character of each individual language and (to take a slightly Whorfian viewpoint) the cultural backdrops and implied meanings make it difficult to translate things exactly. Stuff like the differences between I shall/will vs "going to" in English or conditional + いい[のに] (or ちょっと) in Japanese, plus all the rules for ellipsis in each language and what they mean, plus, obviously, things like explicit anaphora in English vs implicit topics and referents in Japanese. Handling all of that needs deep understanding of both target languages at both a linguistic and (sometimes) a cultural level, so it's no surprise that this "island hopping" leads to mutual unintelligibility at the ends of the chain.
I think that if you're just looking for number crunching or high-level stuff, then Torvalds's comments probably don't apply. Porting the kernel to a new ARM board isn't straightforward because there's no standard equivalent of the PC BIOS to arbitrate between hardware and OS at boot time. ARM provides reference implementations, but chip and board manufacturers can go off and make their own proprietary changes. Those manufacturers (eg, Samsung) are often quite antipathetic to the free-software guys, not wanting to open up the platform unless you pay.
However, we have guys like Linaro (plus other small hardware manufacturers like hardkernel) doing a great job on getting the main components (like boot loader and kernel, and maybe GPU?) working. Once you have that (and we can assume they have this for the board mentioned in the article), as a user you can pretty much forget about it and start thinking about the over-the-top stuff like Docker instances or some sort of parallel/distributed number crunching framework (eg, MPI or Hadoop; unfortunately, OpenCL is a bit sketchy on ARM thanks to vendors not fully/properly supporting it).
re: Don't look at the world. Just look at yourself.
You've probably heard of the Irish dad who (inadvertently) did that on his trip to Las Vegas. Here's a nicely acerbic take on it, and on selfies/vlogging in general:
http://www.vice.com/read/what-irish-gopro-dad-can-teach-us-about-the-future-of-vlogging-104
This could be quite useful as an out-of-band signalling method. The article goes on to say that it could be used as a broadcast medium in something like a stadium. I think that this sort of OOB channel could also be useful as an adjunct to a reliable multicast system. The problem with many multicast algorithms is that explicit ACK/NAK traffic gets progressively worse the more listening stations you add, to the point where it consumes the bandwidth the broadcaster needs, making the whole thing less efficient.
To quote Leonard Cohen: "The fourth, the fifth / The minor fall, the major lift / The baffled king composing Hallelujah"
Assume we have a modulation scheme that signifies an explicit ACK/NAK using a particular "chord", and a Bluetooth-like (base) frequency-hopping algorithm to encode frame numbers. Then, provided the receiving stations have enough power to pump out their ACK/NAK packets, the broadcasting station can listen to a wide spectrum of audio input and use an FFT plus some sort of convolution (?) algorithm to detect specific chords at any base frequency. As a first pass, this should be able to figure out the actual error rate (by listening to the loudness of the NAK chord signature across all frequencies), and with more processing it could identify particular packets/frames that need to be retransmitted.
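The chord-detection half of that is less exotic than it sounds. A toy numpy sketch, with made-up tone pairs standing in for the ACK/NAK signatures (the frequencies, sample rate and function names are all invented for illustration):

```python
import numpy as np

RATE = 8000                  # sample rate in Hz (illustrative)
ACK_CHORD = (1000, 1250)     # hypothetical ACK signature: two tones
NAK_CHORD = (1000, 1500)     # hypothetical NAK signature

def make_chord(freqs, duration=0.1, rate=RATE):
    """Synthesise a chord as a sum of sine tones."""
    t = np.arange(int(rate * duration)) / rate
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

def detect_chord(signal, candidates, rate=RATE):
    """FFT the signal and pick the candidate chord whose component
    frequencies carry the most spectral energy."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)

    def energy(chord):
        # sum the magnitude at the bin nearest each chord tone
        return sum(spectrum[np.argmin(np.abs(freqs - f))] for f in chord)

    return max(candidates, key=energy)
```

The frequency-hopping part would then just shift both tones of the chord by the same base offset per frame, so the broadcaster scans for the chord's *interval* signature rather than absolute frequencies.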
Still with the stadium example, you could imagine shrinking the technology down so that each phone could act as a transceiver, with a quorum-sensing algorithm quenching explicit OOB signalling in a localised area (with a hard cut-off to effectively become deaf to all the other nearby chirps outside a certain radius) along with lower-bandwidth retransmission of lost packets and possibly directionality so that those at the back can find out just how blessed the cheesemakers are.
(I'll bid you farewell. Don't know I'll be back---they're moving me tomorrow to the tower down the track ...)
Have something like a riot cannon mounted on the front of the car. It could hit a person with enough force to fling them into the air, but presumably not kill them. This would give the car a few more moments in which to compute another course of action that safely avoids the obstacle (or simply hits it at a less lethal speed).
A giant boxing glove mounted on a kind of scissor mechanism could also work, as it's easier to reset/reload. Or a cannon loaded with quick-setting riot foam, which could first reduce the relative velocity of impact, and second perhaps protect the person from a lethal knock by immobilising and/or cushioning the blow.
Thinking in particular of the episode with the hospital with no patients. Obviously not a success by any normal person's standards, but in the fairytale land of civil-service budgets and metrics, it's the most efficient hospital in the land.
Come to think of it, I'm not even sure Sir Humphrey could get his enormous brain around Ms. Cole's logic.
First off, I hate this "reputational damage" malarkey. What's wrong with the good old-fashioned "damage to their reputation"?
Secondly, without going full "they deserved it" for having such a basic vulnerability (SQLi is basic), the vuln was so obviously latent, just waiting for someone to come along and turn the key, as it were, that you have to ask: should the full cost/blame fall only on the first guy to "immanentise the escutcheon"?
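(For anyone unsure why SQLi counts as basic: it's the difference between splicing user input into the SQL string and letting the driver bind it. A minimal sqlite3 illustration, with a made-up table and the classic quote trick:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

evil = "nobody' OR '1'='1"

# Vulnerable: attacker input spliced straight into the SQL text,
# so the OR clause becomes part of the query and matches every row
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '%s'" % evil).fetchall()

# Safe: the driver binds the value as data, so the quote trick is inert
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()
```

Here `leaked` contains alice's secret while `safe` comes back empty, which is the entire bug class in four lines.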
> And IBM has Watson, a machine that famously beat human competitors by answering more
> questions correctly on the American game show Jeopardy.
Hmm, should that be "questioning more answers"? It is "Jeopardy", after all. I guess I'll have to leave it to the AI to decide which is more correct...
> "I know what it means, i didn't even have to look it up......."
I was rather surprised that the author didn't try to work it in somewhere (oops, no pun intended).
Kind of hard (oops) to make a pun out of "priapism", but maybe describe the spiders as "peripatetic priapistic poisoners"?
> ARM was inspired by 6502.
Yes and no.
http://www.theregister.co.uk/2009/06/11/pcw?page=2:
Sophie Wilson, the best 6502 programmer ever, became disappointed with what she could do with the BBC Micro, and went off on her own to design a RISC processor that would do all the good things she liked about the 6502, and all the other things which she wished the 6502 could do.
So apparently the nice thing about 6502 was the simplicity of it, but they were determined to build something completely different (a RISC processor with no real architectural heritage from the 6502 itself):
https://people.cs.clemson.edu/~mark/admired_designs.html#wilson
I can still write in hex for [the 6502] - things like A9 (LDA #) are tattooed on the inside of my skull. The assembly language syntax (but obviously not the mnemonics or the way you write code) and general feel of things are inspirations for ARM's assembly language and also for FirePath's. I'd hesitate to say that the actual design of the 6502 inspired anything in particular - both ARM and FirePath come from that mysterious ideas pool which we can't really define (it's hard to believe that ARM was designed just from using the 6502, 16032 and reading the original Berkeley RISC I paper - ARM seems to have not much in common with any of them!)
A USB dongle that I can plug into a PVR (or other box) that appears to the box to be a standard USB drive, but in reality connects wirelessly to wherever your actual storage resides. It might not be the most effective use of your wireless bandwidth, though: a USB 2.0 connection would saturate an 802.11n link, but you might get two or three such devices working on an 802.11ac link. Still, given the convenience and cool factor, it seems like it could be a useful gadget to have.
I suppose a more useful version of this would come with wires. Do any NAS boxes exist that let you emulate a different disk drive (each with its own storage space/quota) over different USB OTG links?
Yeah, "meh" on the "manbang" being funny. I actually liked Samurai Champloo (from Manglobe studios) and didn't feel overly inclined to fall into paroxysms of laughter at the mention of either "loo" or "globe". But that's just me ...
Anyway, on a slightly different, but slightly related note, check out Chuck Norris vs. Communism. Best Romanian film I've ever seen. Hmmm... not meant to damn with faint praise ...