Re: I don't quite follow the logic.
Gawker published Hulk Hogan's sex tape - this was illegal
Was it? Citation needed.
He was smart enough to make a cipher that took fifty years to crack
He created a cipher that no one put enough work into to crack for 50 years. Or someone did, but didn't publish the fact. That's not a useful indicator of the design quality of a cryptosystem, and says very little about the capabilities of its "inventor" (a dubious title anyway, since Z doesn't appear to have used any concepts not already well-documented for pen-and-paper ciphers).
Kerckhoffs's principle, one of the tenets of modern cryptography, was demonstrated to be wrong here.
It most certainly was not "demonstrated to be wrong". You don't understand Kerckhoffs's Principle.
The point of KP is that the key is the secret parameter to the cryptosystem. If parts of the system are (believed to be) secret, they become part of the key vector, in addition to the nominal key.
And that's a poor contribution to the key vector, because they can't easily be administered, and generally the strength of the additional security is difficult to estimate accurately, because it has dependencies on the known parts of the cryptosystem, which reduce its effective entropy.
The most economical and easiest-modeled contribution to the security of the system is additional key material chosen over a uniform distribution, forming a single homogeneous secret key. Anything else is sub-optimal.
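To put rough numbers on that (a toy illustration - the figures are assumptions, not measurements):

```python
import math

# A 128-bit key drawn uniformly at random contributes exactly 128 bits
# of entropy, and that figure is trivial to administer and reason about.
uniform_key_bits = 128

# Suppose the "secret design" amounts to picking one of ~1000 plausible
# pen-and-paper cipher variants (an assumed figure). Even treating that
# choice as uniform, it adds at most log2(1000) bits - and in practice
# less, because the choice is correlated with the known parts of the
# system, which reduces its effective entropy.
design_choices = 1000
design_bits = math.log2(design_choices)

print(f"uniform key material: {uniform_key_bits} bits")
print(f"secret design adds at most: {design_bits:.1f} bits")  # ~10 bits
```

Ten-ish bits of hard-to-audit obscurity, versus a few more characters of ordinary key material: that's the economics Kerckhoffs points at.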
Had Zodiac (who probably knew little about cryptography) chosen a stronger but known system, with a longer key, the message would still resist decryption. But, of course, that wasn't Z's intent anyway; there's little point in sending a publicity-seeking message which can't ever be read and is indistinguishable from noise. Z was probably hoping the message would be decrypted within a year or so of receipt, when it was still current.
Those other ideas collapse when revealed as untruths.
I am firmly in favor of strong protections for freedom of expression, and automatically hostile toward any censorship system. But this comment is simply incorrect, as a vast number of methodologically-sound psychological experiments and the vast sweep of human history both attest.
Even testable false hypotheses don't show any sign of being overwhelmed by truth. Take, oh, the Flat Earthers. Or the homeopaths. And of course untestable hypotheses (religion, conspiracy theories, solipsism, etc) cannot logically be refuted.
Education helps. Some economic pressures can help - though it's difficult to institute most of those without unacceptable constraints on expression. For the most part, though, we have to bear the costs of significant numbers of people believing false ideas and acting accordingly, as the price of freedom of expression. That's a trade-off inherent in the human condition.
It's a wind-eye. A society can choose to be blind but sheltered from the cold, or confront the gale and see.
To be fair, the water problems in California are at least as much the fault of the Federal government as the state. In particular, they're the result of bad policies created by the Bureau of Reclamation (which rivals the Army Corps of Engineers and the Tennessee Valley Authority for the title of "most destructive US Federal organization"), and rampant corruption which let the Bureau ignore the only good aspects of those policies.
The result was not just massive misuse of water resources, but misuse to benefit a handful of wealthy agriculturalists rather than the bulk of the state's population.
(There are critiques of Cadillac Desert, but the updated edition is still the best accessible, general treatment of the water problem in the western US that I know of.)
The explanation for the overrun is bizarre. Who starts a web-crawling project and thinks "oh, yeah, the web is definitely an acyclic graph"? Making a mistake like that is just wildly technically incompetent.
If someone came to me with some web-link-traversal project for any purpose, my first question would be how they're handling loops, because that's important for performance and scalability. And if the response was "oh, we hadn't thought of that", it would be a long time before anything got deployed in any sort of environment that might incur liability.
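For what it's worth, the loop handling in question is a few lines of bookkeeping - a visited set alongside the work queue. A minimal sketch, with a hypothetical get_links callback standing in for whatever fetch-and-parse step a real crawler would use:

```python
from collections import deque

def crawl(start, get_links, max_pages=10_000):
    """Breadth-first link traversal with a visited set, so cycles in
    the link graph can't trap the crawler or blow up its work queue."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)
        for link in get_links(url):
            if link not in visited:   # the loop check in question
                visited.add(link)
                queue.append(link)
    return order

# A tiny cyclic "web": a -> b -> c -> a. Without the visited set,
# this traversal would never terminate.
links = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(crawl("a", lambda u: links.get(u, [])))  # → ['a', 'b', 'c']
```

A production crawler would also normalize URLs before the membership test (and probably use a Bloom filter at scale), but the principle is the same.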
In the US, at least, it's not just "someone skilled in the art", but someone of ordinary skill in the art. (I recently attended a presentation by a lawyer specializing in the IT patent process who discussed this point.) In other words, the intent of the "obvious" provision in US patent law is that an invention shouldn't be eligible if a hypothetical most-common-practitioner would find it obvious.
There are a lot of programmers, and sometimes what's obvious to you or me is not obvious to many of them. I think there's ample evidence for that in, well, pretty much any large-enough sample of software.
The obviousness test and other restrictions still seem to reject a fair number of applications, considering that USPTO only grants around half of the applications every year. (Yes, that's arguably still too high, but it's a far cry from the usual accusations in these parts of rubber-stamping everything that comes before them.) But it's not a high bar.
There is no maths that can make it work without increasing workforce combined with inflation to reduce the real value of future pensions paid.
Yes, and that's exacerbated by increasing lifespan (without a corresponding increase in the retirement age) and rising post-retirement medical costs.
Really, what has to keep increasing is total productivity, not necessarily the size of the workforce, but that's no easier to achieve.
"Hell if I know" makes a lot more sense than the Matrix backstory, and would have saved us some of the more tedious scenes in that tedious series.
(Is this thread all a bunch of whoosh, or did everyone understand alain williams' joke but hijack the thread back to "computer AI" anyway? Just curious.)
"Naturally programmed" is already a questionable metaphor that does a poor job of representing the complex and heterogeneous relations among biological drives, conditioning, unconscious impulses, and conscious choices, even without taking interpersonal and social interactions into account.
Like many of the posts in response to this story, OP is just naive, reductive sociobiology, absurdly generalized. Its explanatory power is negligible.
I admit I have very little idea what the IT job market is like these days (or, really, ever has been like), but I hope you're right that having social-media accounts isn't treated as a qualification.
I have heard many reports of interviewers and HR representatives requesting access to candidates' accounts, which is a fine argument for not having them. I mean, I'd refuse such a request; but I have the luxury of doing that. Someone in a more-precarious financial situation might not.
Citing the Devil's Dictionary ought to be adequate warning that the author is being sarcastic.
I don't think dispossessing and removing members of the Five Civilized Tribes is inherently worse than similar actions against, say, the Iroquois Confederacy, or the Pueblos, or the Anishinaabe, or any of the other native peoples of the Americas. Or of anywhere else. All of those actions (and many others, such as forced assimilation, reneging on treaties, termination, BIA fraud, etc, etc) were reprehensible, and ranking them by how "civilized" the victims were is rather suspect.
An excellent point, but it's also true that some organizations are much more prone to fire the lawyer-guns. And sending C&Ds to security researchers is nearly always an indication of a firm which neither understands nor cares about security. When a company that supposedly specializes in security products does it, it's a red flag.
Right. Supremacy != generally useful. Also, as Aaronson pointed out recently, supremacy is a slippery beast; it's not trivial to determine where the classical cutoff might be for some of these problem domains.
Supremacy has lost most of its utility as a measure of QC progress.
And, of course, even if we eventually have a reliably-working "generally useful" QC machine, that says nothing about the economics. It might be able to solve only a handful of interesting problems in an economically-feasible fashion.
(Ironically, again according to Aaronson, the recent Pan & Lu BosonSampling experiment was constrained by the economics of classical computing: "A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there." But that's just a reminder that ultimately it's the economics that will dominate "useful".)
Yeah, OP needs a lot of qualifications on that statement.
There are days here where we have several inches of snow on the ground and icicles hanging from the roof, but none of the heating (wood stove, gas heater, or electric radiant floor, depending on which room you're in) is on, and the house is still warm enough that it would be uncomfortable if the air weren't so dry and thin. All thanks to passive solar - and we don't even have that much window area.
Now, the Stately Manor in Michigan - that's another story. I keep the thermostat there at a sensible setting and everyone bundles up in sweaters and the like. With the drafts and humidity, plus force-air heating, that house rarely feels warm in the winter. It's pretty, though, and I enjoy that sort of climate too.
It's almost like the northern hemisphere is a large place with a variety of climates.
Here at the Mountain Fastness it's still t-shirt weather most days. Sure, there's snow on the ground a few dozen feet away, on the north side of the fence where it's shaded from the sun. But if you're sitting somewhere the sun can shine on you, you definitely don't want any sort of jacket. And in the house by midday we have the door open, and sometimes the windows. Passive-aggressive solar heating.
Mind you, we had a foot of snow on the ground in October, and there's a good chance of a White Christmas. But it's not uncommon, particularly in the spring, to have a snowstorm one day and be mowing the grass a couple of days later.
Different people are different. When I was in my teens and 20s, I found getting up early was a real chore, and looked to that morning shower, coffee, and in the winters the brisk1 Boston weather to get me going.
These days, I wake up between 6:00 and 6:30 with no assistance from an alarm, dress,2 start the coffee maker, build the fire in the wood stove, and start working. At some point I remember to get up and get my first cup of coffee. If I'm going to feel tired during the day, it will likely be mid-day; I'm most productive in the mornings and evenings.
1For which read "agonizingly cold, wet, and windy".
2I suppose I could work in the nude - the only person likely to see me is my wife, and she's seen all that before. But we have pets, so I'm not going to skip footwear, at least, and wearing nothing but shoes seems silly.
OPAQUE is, broadly speaking, resistant to offline brute-forcing. It's more resistant than most password-verifier systems in use today, because it uses an oblivious PRF. That not only prevents the server from learning the client's password (as with the best-known and most widely deployed PAKE, Tom Wu's SRP), but also prevents the client from learning the server's salt.
Matt Green has a good write-up on PAKEs, OPAQUE, and why OPAQUE is superior to SRP.
Note that OPAQUE can be implemented with any PRF, so it can be used with scrypt or (better) Argon2 or some other memory-hard PRF, which makes brute-forcing far more expensive than just using something like a salted message-digest function. And the workload is mostly on the client, so it scales.
As with SRP or any other improvement to web authentication, the problem is browser support - because obviously you can't trust a Javascript implementation you download from the server; that has the same threat model as a compromised server harvesting your credentials.
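For the curious, here's a toy sketch of the blinded-evaluation idea at the heart of an OPRF. This is emphatically not the real OPAQUE construction (which works over elliptic-curve groups and wraps the OPRF in a key exchange), and the 10-bit prime is laughably insecure - it just keeps the arithmetic readable:

```python
import hashlib
import secrets

# Toy DH-OPRF over a safe-prime group. p = 2q + 1 with q prime.
p = 1019
q = (p - 1) // 2   # order of the quadratic-residue subgroup

def hash_to_group(pw: bytes) -> int:
    """Map a password into the order-q subgroup (squaring lands us there)."""
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % p
    return pow(h, 2, p)

def oprf(pw: bytes, server_key: int) -> int:
    """Client and server jointly compute H(pw)^k without the server
    seeing pw, and without the client learning k."""
    h = hash_to_group(pw)
    r = secrets.randbelow(q - 1) + 1           # client's random blind
    blinded = pow(h, r, p)                     # client -> server: h^r
    evaluated = pow(blinded, server_key, p)    # server -> client: h^(r*k)
    return pow(evaluated, pow(r, -1, q), p)    # client unblinds: h^k

k = 123  # server's secret OPRF key
# Two runs use different random blinds but agree on the output:
assert oprf(b"hunter2", k) == pow(hash_to_group(b"hunter2"), k, p)
```

The server only ever sees h^r, which is uniformly distributed for random r, so it learns nothing about the password; the client gets a stable, server-keyed value it can feed into the rest of the protocol (and, per the above, that value can then be run through Argon2 or another memory-hard function).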
In case anyone's concerned: The OpenSSL issue is rated High because it's a potential DoS, but nothing more than that - it's a null dereference. And in practice it probably doesn't affect many applications. The most plausible attack vector involves a malicious certificate and a malicious CRL. Some applications check received certificates for CRL access points, and then use those to try to download an updated CRL, which would make that attack feasible; but it's not trivial to implement that using OpenSSL (it requires using various fairly-obscure OpenSSL APIs and using some sort of HTTP client), so I believe it's relatively rare.
Of course there's something to be said for updating to the latest 1.1.1 release, and if you're on 1.0.2 and don't have a support contract you have bigger problems. But this isn't one that most people have to scramble over.
The 432 had some good ideas - a capability CPU could have made a huge difference for security and reliability of PCs. But then as now the market was more than willing to sacrifice security for performance, and the instruction-set ship had sailed. Firms like IBM and Apple with captive markets could still get away with changing architectures; Intel and most of its customers couldn't.
I recall a bit of common wisdom from circa 1990: The 80486 was the best CISC CPU ever, and the i860 was the worst RISC design ever, but the 860 still outperformed the 486. (I said it was common; I didn't say it was right. But there was a grain of truth in it: despite its design flaws, the 860 managed 5-10 times the MFLOPS of the 486, so if floating-point was what you wanted...)
I believe the IBM z10 was still true CISC, dispatching the actual zArchitecture CISC instructions to the cores (based in part on this IJRD article).
That was 2009, though. The current z CPU is the z15, and this writeup mentions "CISC instruction cracking" in one of the illustrations, which certainly sounds like the pipeline is decoding CISC instructions into simpler ones.
That would also make sense because z10 is superscalar but in-order, while z15 is out-of-order. It's generally easier to reorder RISCy instructions.
z has over a thousand opcodes, between the public instructions and the special ones used in microcode. Going to RISC cores was probably inevitable. z10 cores were big - a thousand opcodes means a lot of gates.
BITNet, 1981. Usenet, 1980.
Christensen & Suess's Chicago CBBS went online in 1978.
According to Melinda's history, online VMSHARE started in 1977, running over TYMNET.
Wikipedia mentions Computer Memory in Berkeley, which was a pure computer bulletin board system (log on to a central system and read or post messages; no interactive chat, file downloads, etc) that started in 1973. (I don't recall hearing of Computer Memory before; it's an interesting case.)
Someone may well know of earlier examples of constructs that could reasonably be called (online) "social networks". It depends on your definition, of course. If a community of people of reasonable size, with no other obvious connection (e.g. same employer), could use the system to post and read messages that are public to the community (as opposed to the point-to-point nature of email), and the system is used for social interaction (and not just instrumental communication, e.g. for work), then I think it qualifies. VMSHARE would definitely meet that definition, and I suspect Computer Memory would have.
Fun fact from Wikipedia: Computer Memory was coin-op. You'd deposit a coin into a box attached to the terminal to get access. I'm taking the Wiki article's word for this, but I hope it's true; I love the Futurama feel of a coin-op social network.
Known RCEs, or unknown issues from code of uncertain provenance? Pick your poison, I suppose.
(And, of course, never ever run Teams, whether in its own "app" or under a general-purpose browser, with elevated privileges. I'm guessing you aren't, but it doesn't bear thinking about how many Windows users are.)
If you don't need the *.foo filter,
ls | xargs rm
is shorter. And even if you do,
ls | grep '\.foo$' | xargs rm
is still technically shorter, and avoids the chance of accidentally omitting the -maxdepth argument. But it's a bit inelegant.
Also note that the "+" form of -exec was, I think, only standardized in the 4th edition of the Single UNIX Specification; not all UNIX systems support it.
But all of these are vulnerable to spaces in filenames. Assuming sufficiently capable versions of the utilities in question, you're a lot safer with:
find . -mindepth 1 -maxdepth 1 -name '*.foo' -print0 | xargs -0 rm
which prevents any argument splitting: the NUL-delimited names survive spaces (and even newlines), and -mindepth 1 keeps "." itself out of the list. Since xargs batches its arguments, rm runs only a handful of times even for a large directory; unless this is part of the inner loop of some performance-sensitive action, with modern systems you'll probably never notice the difference.
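A quick sanity check of the NUL-delimited approach (assuming GNU find, xargs, and rm, and with the *.foo filter included):

```shell
# Scratch directory with awkward names, including an embedded space.
tmp=$(mktemp -d)
touch "$tmp/plain.foo" "$tmp/with space.foo" "$tmp/keep.bar"

# -mindepth 1 keeps the directory itself out of the list; -print0 and
# -0 delimit names with NULs, so spaces in filenames can't split them.
find "$tmp" -mindepth 1 -maxdepth 1 -name '*.foo' -print0 | xargs -0 rm --

ls "$tmp"   # only keep.bar remains
```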
TL;DR: There are various ways to get around globbing limits. Watch out for argument splitting, though.
Back in the mists of time, IKEA stuff was mainly pine
Actually, here at the Mountain Fastness we have an IKEA bedframe which is made entirely of dimensional-lumber pine (except for the steel struts and other hardware). It's only four or five years old, so not that far back.
I hate IKEA, other half does not
I hear ya, man. Even in the world of flat-pack furniture they're close to the bottom of my list. I don't like the designs, I don't like the engineering, I loathe the instructions.
even discovers on its own that I'm taking a walk
Hooray, more surveillance!
Personally, I can tell on my own whether I'm taking a walk, and when I've exercised sufficiently.
It even does that with hand washing, and then automatically times the Covid19 20 seconds for you.
So on the plus side, there's no need to behave like an adult. Your nanny-watch will guide your every action.
Wearables: Instrumentality for the willingly helpless.
Today JavaScript runs on every computing device
Well, this rather casts doubt on everything else Ben Vinegar has to say, doesn't it?
The substantial majority of computing devices here in my home and surroundings don't run any version of ECMAScript. Perhaps he's unaware that there are embedded computers.
And, of course, there are general-purpose computers which never run ECMAScript. I don't know that I have any of those physically on the premises (I don't really have the room for 'em here), but I use some on a regular basis. For that matter, I have VMs on my work and personal computers which have never executed any ECMAScript, though I suppose we could quibble about whether those are "devices".
Brian Subirana had a Viewpoint piece in the July 2020 CACM arguing for a "Voice Name System" to standardize wake phrases and other aspects of voice control. Interesting, and somewhat horrifying. Here's a sample aside:
We developed a skill in our lab and in July 2017 Alexa's parsing surprisingly changed. The phrase "Alexa, Target shopping list add soap" went from adding it to the skill's list to adding it into Amazon's shopping list.
I take issue with the "surprisingly" (what, Amazon decided to block you from using a competitor? shocking!), but it's only one of the examples of voice-control abuse that Subirana mentions. (This is probably the article that describes the Burger King ad I mentioned in another post.) And this is from someone promoting voice control.
That is real simple, and the small exercise is good for me.
I realize this makes me insufferably self-righteous,1 but I can't deny that I feel avoiding all these home-automation systems is itself a kind of paideia, and makes me a better person. Making even a small effort to do things deliberately attests to my relationship with my environment.
It's the same reason why I don't garage my car, and particularly dislike attached garages. When I go somewhere, I should be exposed to the outside world, at least briefly.
Obviously there are people with disabilities, and people who need to monitor and control systems remotely for good reason. I'm not interested in making my life "easier", though; compared with the vast sweep of human existence, historical and contemporary, it's already absurdly easy. A little incidental labor is a good-in-itself.
1Too late.
There may well be cases of advertisements accidentally waking voice assistants, but the famous case is a Burger King ad which deliberately activated the Google assistant. It was voted "most intrusive advertisement of all time" at some industry shindig. Can't be bothered to look up the reference at the moment, but just yesterday I was reading an article in CACM that discussed it briefly (and was properly cited).
Oh, there's a huge array of documented security issues with "voice assistants". I think I see a new paper on the subject every week or two.
Personally, I find the things excruciatingly annoying, so I wouldn't use them even if I wasn't concerned about the vulnerabilities.
I remember when I got a Sun workstation (a SPARCstation something-or-other) around 1990 and it came with a (corded) microphone. "What the heck will I use that for?", I wondered. Nothing, as it turns out; I had that machine for eight or nine years and never activated the mic.
When OS/2 Warp came out with Dragon Naturally Speaking built into it and those "I talk to my PC" advertisements, I thought that was kind of nifty - in theory. Again, I never actually used it, and even the theoretical appeal gradually faded.
In my callow youth I read Kevin O'Donnell's "McGill Feighan" novels, which featured (incidentally) a home-automation system with voice input. That seemed terribly cool. Then I grew up.
(My favorite pop-cultural reference to voice command is the Bat-Computer from the 1960s Batman television series, though. You may recall that it had natural-language voice input, but output was a punchcard, which Batman would read. Bit of hit-and-miss in predicting UI development there.)
the internet enters a new phase where the underlying protocols and standards themselves are starting to change
As opposed to that long period when no new RFCs were published, which happened ... never.
Sure, there's this New IP noise. But there has always been noise, just as there have always been people reinventing TCP using UDP and all of the other "new" protocols some folks are excited about.
IP has changed in the past - IPv6 is the obvious example, but even in v4 there were changes to protocol handling such as CIDR. TCP has changed: Path MTU, PAWS, window scaling, and so on. (Not a lot has happened to UDP, though there is ROHC.) The higher-level protocols change with wild abandon. We aren't entering a time where Internet protocols are "starting to change"; they've never stopped changing.
I need to give QEMU a try. I use VirtualBox (carefully kept free of the Extensions with their "Oracle now owns your soul" license) occasionally when I want a VM locally rather than just using one of the many on our network, and it does the job, but it never hurts to give alternatives a spin. This advent calendar and the seasonal time off might be my excuse. (Though any time off is typically preempted by my granddaughters, whose requirements expand to fit available resources. It's the Mythical Grandmonth.)
vm_map.c line 644: map_addr = start_aligned;
vm_map.c line 645: for (map_addr = start_aligned;
Maybe they should assign start_aligned to map_addr a few more times, just to be safe. (Yes, this is not a vulnerability. It's a code smell.)
So, yeah, I think this could use some desk-checking, static analysis, and dynamic analysis (either under a test framework, or using a symbolic-evaluation-simulated-execution hybrid).
By the way, what's a "fuzzy-logic probe" in this context? Do you mean fuzzing? Fuzzing has nothing to do with fuzzy logic, and I'm not aware of any common application of fuzzy logic (which is more often found in control systems) to software vulnerability testing. I may well be unaware of some innovation in this area, though.
I have to agree. Curtailing overreach by the executive branch (which has expanded its power under other administrations too - Obama's endorsement and use of Yoo's foul application of the "unitary executive" theory to permit any sort of abuse is a glaring example) and enforcing the rule of law is more important than tinkering with the H-1B system, however broken the latter might be. Congress has patently failed to do its job for the past two decades, so the judicial branch and the private sector are the only restrictions on executive abrogation of power.
It might be argued that an individual has already surrendered their expectation of privacy when they choose, under licence, to drive a glass box on public roads.
Expectation of privacy should not be atomic. And under current US law, at least, there's still some expectation of privacy for vehicle operators and passengers. The Fourth Amendment still applies to vehicles; they can't be searched without a warrant, permission, or reasonable cause (which is why the police try so hard to get permission to search when they stop a vehicle not being operated by a wealthy white person).
Or perhaps BB mean that drivers who choose to enter into an agreement with an insurance company to supply said insurer with live data can now do so using this BB system. In which case, their system isn't compromising the driver's privacy, since it is only transmitting data that the customer had agreed to.
Say, do you have a bridge for sale?
BB used to have a pretty good reputation in this regard, back in the phone days when that was profitable.
FTFY.
Let's stay sceptical. Cynicism is counterproductive.
Cynicism is accurate, to a first approximation.
Yes, it actually takes some effort to leave an S3 bucket unsecured too.
If the bit about "their Hong Kong IT provider" is true, then it's time to find a new provider. It should be trivial to provide basic security for a cloud-based backup system, including encrypting the data at rest. This is inexcusable.