Re: Really?
This is blatant Neo-anti-neoism and we of the People's Neoist Front will not stay silent in the face of it. For shame, sir. For shame.
The problem is data centers in daft locations like Phoenix, or elsewhere in the Great American Desert (as it used to be known), using up scarce freshwater resources. If that water then precipitates down over wetter parts of the country, or the ocean, that's still a big problem.
Using seawater would avoid this, but 1) you'd have to build data centers near the sea, rather than in Arizona1 or Texas2 or other terrible choices; and 2) evaporating seawater leaves behind all sorts of tricky stuff to deal with, particularly salt.
As for "captur[ing] the humidity": That would require either a great deal of additional energy input to move the heat somewhere else, or magical engineering. That steam will condense when it gets high enough in the atmosphere to dump its heat through radiation. How big a cooling tower do you think you can build? Again, that water will return to earth somewhere far away. (And, no, a bunch of data centers along the Pacific coast of the US will not put enough steam into the atmosphere to make a difference to the drought.)
1Seriously, Arizona. What, was the Sahara not available?
2Where you can have simultaneous water and energy crises. Yay!
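For a sense of scale, here's a back-of-envelope sketch of the water an evaporative cooling plant consumes. The 100 MW heat load is an assumed figure for illustration (not from the article), and the latent heat of vaporization is the standard ~2.26 MJ/kg; real cooling towers also lose water to drift and blowdown, so this is a lower bound on the evaporative case.

```python
# Back-of-envelope: water evaporated to reject a data center's heat.
# Assumed figures: a 100 MW facility rejecting essentially its whole
# power draw as heat, and water's latent heat of vaporization of
# ~2.26 MJ/kg at typical cooling-tower temperatures.

LATENT_HEAT_MJ_PER_KG = 2.26   # energy to evaporate 1 kg of water
POWER_MW = 100                 # assumed total heat load
SECONDS_PER_DAY = 86_400

heat_mj_per_day = POWER_MW * SECONDS_PER_DAY       # 1 MW = 1 MJ/s
water_kg_per_day = heat_mj_per_day / LATENT_HEAT_MJ_PER_KG
water_m3_per_day = water_kg_per_day / 1000         # 1 m^3 of water ~ 1000 kg

print(f"{water_m3_per_day:,.0f} m^3/day")  # roughly 3,800 m^3/day
```

Nearly four thousand cubic metres a day, from one site, in a desert. That water doesn't come back.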
If we can waste enough of their time they become unprofitable.
Unfortunately I don't think that's practical. Few people will do it, and thanks to the low overhead of VoIP, bots, and minimally-compensated people working from home, these scams are profitable even with a very low hit rate. It might make some recipients feel better to waste the callers' time – and if so, go right ahead – but I doubt it has any noticeable effect on the bottom line.
I don't know when they started with macros
The important date here is 1999, when David L. Smith (writing as "Kwyjibo") created and released the Melissa virus for Word. Melissa was a pretty major event, hitting ~1M users and making the national news.
That it took Microsoft over two decades to properly restrict macro execution in Office products (after years of ineffective half measures) shows just how resistant the Office product-management team is to curtailing "features" that are actually serious security vulnerabilities.
Yeah, it's not clear from the summary in the article what exactly the test was. I expect it was actually in effect "find the passage in the context window that doesn't match what you would expect for the next token", so it wasn't diffing Real TGG against Modified TGG; it was running Modified TGG against the entire model, which included somewhere in its parameter space a gradient matching Real TGG but not the actual text in a literal representation.
Not a hugely interesting experiment, as far as I'm concerned. Exactly what I'd expect a really large model to be able to do. So what?
Now, if the underlying model had been trained on a data set from which all copies, excerpts, and references to Real TGG had been removed, and it still caught the offending passage, that would be a slightly more interesting experiment. (It's feasible for a transformer LLM to do this, if there's enough similarity between the world of the novel and the world of the training set for most-probable completion to get a strong disagreement on the altered passage.) A better test would be to use a freshly-written unpublished novel, of course, so there's no possibility of data-set contamination. But even then, all you've confirmed is that the surface of parameter space contains a gradient that diverges sufficiently at the point where the out-of-place passage appears.
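The "diverging gradient" idea can be illustrated without an LLM at all. Here's a toy sketch (sample text and numbers invented for illustration): score each sentence by its average per-word surprisal under a crude frequency model built from a reference text, and flag the biggest outlier. A real transformer does the same thing with token log-probabilities instead of unigram counts, which is why finding the planted passage is unsurprising.

```python
# Toy version of "find the passage that doesn't fit": the sentence with
# the highest average surprisal (-log2 of estimated word probability)
# under a reference-text frequency model is flagged as out of place.
import math
from collections import Counter

reference = ("in my younger and more vulnerable years my father gave me "
             "some advice that i have been turning over in my mind ever since").split()
counts = Counter(reference)
total = sum(counts.values())

def surprisal(sentence):
    # Average -log2 P(word); unseen words get a small floor count of 0.1.
    words = sentence.lower().split()
    return sum(-math.log2(counts.get(w, 0.1) / total) for w in words) / len(words)

sentences = [
    "my father gave me some advice",
    "i have been turning it over in my mind",
    "the quantum blockchain synergizes hyperscale paradigms",  # the planted passage
]
flagged = max(sentences, key=surprisal)
print(flagged)  # the planted jargon sentence scores highest
```

The model "catches" the altered passage simply because it sits in a low-probability region; no understanding required.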
And that's a big problem with LLMs. They converge on a middle ground of expectation. They seek to reduce surprise, which is another way of saying they reduce information entropy in the output. They're bland. They have no style. They have no conversation, as we used to say of uninteresting people. They regurgitate the most likely continuation, in a dull fashion. You can anneal them into slightly higher valleys with prompting, but the existing models and their architectures fundamentally lack the inconsistency of human discourse. And that's what makes us interesting.
Counterpoint: Putting a PC on every middle-manager's desk destroyed the typing pool. Typist was a skilled trade, and replacing the typing pool with a PC and word-processing software cost those jobs. It also cost managers time, because dictation to a skilled human is faster than hunt-and-peck typing into Microsoft Word or the like, and it reduced the quality of business prose, because the final copy was no longer produced by trained professional writers.
There have been a number of studies which suggest the "PC revolution" was actually fairly expensive in terms of productivity.
On a similar note, giving spreadsheet software to bookkeepers and others who understood how to use paper spreadsheets was productive. Giving them to people with no idea how to use them correctly? Quite possibly not.
A CACM article on the 20th anniversary of Powerpoint (which presumably was published around 16 years ago, but I'm not going to go look for it) noted that in the '80s, similar presentations were generally either B&W overhead transparencies1 or carefully-orchestrated multimedia presentations with synchronized slide projectors and tape decks that took many hours to create. Now Powerpoint Rangers generate zillions of fancy presentations every day with graphics! and animation! and mind-numbing stupidity! – which, yes, is a lot more output, but is it more value?
And I recall a Byte article from many years ago (obviously) about the "FatBits" option in MacPaint: the zoom function, basically. The author suggested that having a zoom function, and being able to do pixel-by-pixel editing, led to people wasting a vast amount of time fiddling with details that no audience member was likely to notice, and thus offered essentially no return on investment.
Information technology has severe revenge effects, especially when it attracts a lot of attention2 and triggers obsessive behavior in users.
1Or "foils", if you worked at IBM, the Land of Our Own Damn Nomenclature, Live With It.
2One of the great ironies of the current LLM fervor is that it was touched off by a paper titled "Attention is All You Need". The use of "attention" as a term of art in transformer algorithms is an accidental gesture toward the greatest problem they currently cause.
I agree with the last point. Certainly it will vary quite heavily by the user. I've read a number of recent research papers about LLMs, and reasonably well-informed and intelligent articles from a variety of perspectives, and I've yet to see an LLM do anything better than I can, in my areas. Or, indeed, do anything that would make one worth my time.
Natural language is a poor search interface and a poor user interface for the vast majority of use cases. LLM code completion is a trap: learned helplessness coupled with a failure to understand the proposed solution, and a concomitant one to explore the solution space and potentially learn something. Leaning on an LLM at a minimum costs the user the opportunities for skill development and serendipitous discovery.
This is always true of information technology, of course. The printing press cost a number of scribes the opportunity to incidentally learn things from the books they copied. But the trade-off for the printing press was clearly profitable: a small opportunity cost to a few people, which could be recouped by using some of their returned time to simply read, in exchange for a huge benefit to many people. So far the demonstrated "benefits" of LLMs are much, much less, and the cost to users much higher.
Would "products with digital elements" include antique clocks with numbers on their faces? Would it include gloves?
It's an idiotic term, well-suited for the rest of this idiotic bill. Good intentions perhaps, but a braindead approach to achieving them. And I say that as someone who's long argued that market forces will not fix the vast software-security crisis and regulation is necessary.
China sees great potential for blockchain in many industries – as you'd expect of a nation that likes to know what its citizens get up to.
No, actually, I wouldn't, since blockchain is an impressively terrible way to surveil people and industrial activity, particularly under an authoritarian regime. For any use case that doesn't involve distributed Byzantine consensus, blockchain is just a really poor design for an append-only ledger.
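To make the point concrete, here's a minimal sketch (all names and records invented) of a tamper-evident append-only log: each entry commits to the hash of the previous one. This gives you the "immutability" people want from blockchain with none of the consensus machinery, which is exactly why a single operator, state or otherwise, doesn't need blocks, mining, or distributed agreement.

```python
# A plain hash chain: appending is cheap, and any edit to history breaks
# every subsequent hash, so tampering is detectable on verification.
import hashlib, json

def append(log, record):
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    log.append({"record": record, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    prev = "0" * 64
    for e in log:
        body = json.dumps({"record": e["record"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append(log, "shipment 1 received")
append(log, "shipment 2 received")
print(verify(log))           # True
log[0]["record"] = "edited"  # tamper with history...
print(verify(log))           # ...and verification fails: False
```

Twenty-odd lines, no miners, no tokens. That's the competition blockchain has to beat for the "audit trail" use case.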
Agreed. I think she figures this is a good career move even if Twitantic continues to sink, and I suspect she's right about that. CEOs are rarely held to account for the failure of their firms, this is a move further into the circles of the club, and coming in to try to rescue a disaster gives an exec some credibility (didn't just take the safe jobs) even when it fails.
Who thought that auto linking, fetching and executing in mails was a good idea?
Borenstein and Freed started us down this particular crumbling cliffside path.
Admittedly, RFC 1341 was inspired partly by the need to support character sets outside ASCII, which is a legitimate problem. And 7.4.2 manages to list a surprising number of security issues with "active" content, for 1992; unfortunately it's clear few implementers gave this much thought.
Even "displaying what was contained within it" is an unnecessary vulnerability, since many image-rendering libraries, for example, have had exploitable flaws.
MIME hugely increased the attack surface of email, and overly-ambitious MIME MUAs ushered in a world of pain.
Image display ought to be optional, with images not rendered until the user asks them to be. (Outlook has incomplete support for this; I raised an issue about Outlook's rendering of Windows metafile images, which can't be disabled, decades ago on VULN-DEV, for example.) Only local fonts should be allowed, with no font embedding. There's no reason to support audio or video at all. And so on.
With 5 of 48 orders analyzed. So it's probably more like $250M wasted, or 1/7. That sounds pretty unreasonable.
And that's just wasted in this fashion. How much waste for overpriced products? How much for systems that are not fit for purpose, or are significantly less productive than they should be?
That's up to the shareholders, and the shareholders apparently have decided not to do so.
I'd say I'm surprised that Apotheker has been appointed to a number of boards (at least two as chair) since the debacle, except really I'm not. Everyone knows that corporate boards are a club and you have to offend the other members to get kicked out. Merely being terrible at your job is regarded as a quirk.
Under what statute do you believe HP's management and board committed criminal negligence?
They were negligent, sure. They were foolish and irresponsible. They cost their shareholders dearly. However, they were doing the job they were hired to do – just very, very poorly (with a few exceptions, such as Lesjak). The remedy allowed for this is for the board to replace the senior management, and for shareholders to replace board members (not necessarily in that order).
But, hey, don't let facts get in the way of your uninformed rant.
Sorry, who would have performed due diligence? We know HP didn't; that's well documented and has been discussed ad nauseam here and elsewhere.
The record is clear that Apotheker didn't read the preliminary report, fired the consultants before they could prepare the final report, and ignored advice from his own CFO, among other things. He was wildly reckless and incompetent. None of that is in doubt.
although the US taxpayer would be happy to pay for the same thing
Well I, for one, wouldn't. We spend far, far too much on incarcerating people in this country. And while Lynch is very likely guilty and is not at the top of my list of people I'd like to see released, he's also not near the top of my list of people I think deserve to be locked up.
Why would anyone ask? When a listed company is bought, the money goes to the shareholders. I haven't bothered looking, but the scheme would have to be published. It's not like this was some kind of secret deal – it was widely discussed before, during, and afterward, not least here (interminably) in the comments pages of the Register.
And they'll be happy to pay you once you deliver a working one. COD.
As someone noted above, this is not a risk. Microsoft has just promised to buy a little (for them) electricity at a reasonable price in the future, should it be available. Unless the price of electricity drops enormously by then, they're not taking on any risk.
Trust experience, question everything else.
An impressively foolish maxim.
Personal experience is by definition anecdotal. The sample size of personal experience will be much too small to justify any generalizations for most categories of experience.
Humans are prey to a large number of well-documented perceptual and cognitive limitations and traps. Our ability to observe situations and draw rational conclusions from them is severely limited. That's why we have epistemological protocols for mitigating those limitations and not trusting personal experience.
Learning from experience is both necessary and unavoidable. But "trusting" it is the hallmark of uncritical thought.
Or the job may not be what you want to do. Or the company culture may be a poor fit, or you may not get along with your new co-workers. There might not be a good new job that doesn't require you to relocate. There are many reasons why jobs are not fungible, and those claims of "employers want a zillion more people in specialized field X" are largely meaningless.
I dare say I could find a new job quickly if I needed to, but the idea of switching, with all its attendant costs and stresses, sounds awful.
not everyone is able to carry out their work whilst lounging on a sun kissed beach sipping a Margarita
Sure. I find the sun washes out the laptop screen and makes it too hard to see what I'm doing.
(Also I don't drink alcohol, so that margarita just ends up sitting beside me.)
I've been working from home for nearly a quarter-century. I don't have any worries about which or how many hours I work; I've never found that to be a problem.
I used to enjoy periodic trips to various offices. That was gradually being reduced to cut costs before the pandemic, and of course halted entirely during it. I wouldn't mind the occasional one, though now my "local" airport is a 2 1/2 hour drive rather than a 30 minute one, so travel is more of a hassle. (There is one big office about a five-hour drive away, which would be fine for an overnight trip, and another that's about ten hours.)
If we had an office near me I wouldn't mind going in occasionally. I remain utterly unpersuaded by back-to-the-office mandates, however, which are just as much of a broad generalization as "people work just as well from home", and equally unsupported by anything I've seen. If there are methodologically-sound studies on the question they've escaped my attention.
Nor is the problem "time spent communicating". That can be just as productive as any other activity. I've known plenty of programmers who could take four hours to accomplish something that could have been done in ten minutes if they'd asked the right question of the right person.
This is a rubbish study, based on rubbish data and rubbish premises. Pure marketing fluff.
... or already deleted. I created it about 15 years ago to see whether I thought there was anything of value in Twitter. I never posted anything, but I followed a few people for a little while via RSS, until Twitter went to OAuth and broke my reader. There was no compelling reason to get things working again,1 and I haven't used my Twitter account since. Don't even recall what my handle was.
1I keep seeing people in IT security – one of my fields – insisting that Twitter is an important source of information for them, but I've yet to see anything reported elsewhere that made me think "wow, I wish I'd seen this a day earlier on Twitter". Similarly, the various reposts and summaries of Twitter conversations I read in articles always leave me with the impression that seeing them in situ would have added no value whatsoever.
Secure Boot was always vulnerable to the theft of a private key. That's true for any security feature that relies on a secret.
Not that I'm saying Secure Boot was a good idea – I believe there are legitimate concerns with it. But this isn't due to a flaw in the design of Secure Boot; it's due to a flaw in MSI's security which let the private key be discovered and exfiltrated by attackers. It's not, in fact, a Microsoft bug at all. It's just exploited by malware written to attack Windows, and Microsoft are therefore providing a patch for it. (And that patch is problematic because key revocation is a hard problem.)
Exactly what people have said about every other WordPress plugin vulnerability.
No one has to use them. But people do. This is not the fault of the WordPress developers, except that they opened the door.
There's no cheap, simple fix for this problem. "Don't use plugins" is not a fix, because the problem is other people deciding to use plugins. It's all just part of the tremendous mess the industry has made of the Web, starting with Netscape's decision to stick LiveScript into the browser, and Microsoft's to invent DHTML (compounded by Microsoft's invention of XHR, and Google's popularization of it).