Re: Extension of the Existing Situation
Why do you think your process of generating a work in the style of another artist is equivalent to a large transformer model doing so, in fact or under the law? That's a rather glaring assumption.
My mother died of pancreatic cancer a few years ago, and in her last several years she was pretty stress-free. Others I know who have succumbed to the disease were also not what I'd consider "stressed" (at least not significantly beyond what I've experienced as the norm).
The topic of this thread is just selection bias, I think. There are quite a few famous tech figures; it's not hard to find a handful who have died of various fairly-common causes. Pancreatic cancer may be responsible for "only" on the order of 10⁴ deaths in the US each year, but that's still a pretty big pool.
And relatively wealthy people will be more likely than the population at large to die from a cause such as pancreatic cancer, because they can afford a lifestyle and medical care that will improve their resistance to a number of other more-common pathologies. Reducing the likelihood of generally-more-common ends shifts probability mass toward the less-common, treatment-resistant ones. (As someone noted above, pancreatic is so often fatal because it's detected so late.)
Oh, everyone has something to teach you – if nothing else, precisely what sort of idiot they are.
Not everyone's teaching is worth your investment of time and attention, of course. We have actually developed some decent mechanisms to help people filter that; they're not perfect, but they do a lot of the work. Social media largely lacks them.
Indeed.
Back when I still regularly read Usenet – which I started on in 1990 (I know, I was late to the party), though I'd used various BBSes and the like before that – there certainly were people whose names I'd learn to recognize. There were some who were famous in that context, like Kibo; there were some whose names I recognized from meatspace, or whose outside activities I'd become aware of over time. There were even a few I eventually met in person, and in a few cases some I formed close relationships with.
Aside from that last group, I'm not sure I'd claim I "connected" with any of them in any significant way. I was interested in what they had to say, perhaps. I learned from them, and perhaps taught some. But the same is true of the thousands of fiction authors whose work I've read, as part of my literature studies or on my own, and I wouldn't claim any connection with them.
How many of these social-media connections have any life outside a few posts? Any actual emotional or intellectual relationship?
Pesce started his piece by suggesting that having a lot of social-media input is (or so he thought at the time) a good way to learn. Well, there are already plenty of excellent ways to learn – far, far more than any of us could possibly explore in our lifetime. I've yet to see any evidence that social media offers anything of substance not to be had elsewhere.
Depends on your definition of "forging", I suppose. In some contexts, it's used to mean something along the lines of producing a document that falsely attests to provenance and authenticity, regardless of the document's accuracy in other respects. Under that sort of definition, using a key that shouldn't be authorized for this purpose would be a sort of "forging".
In the forensics and cryptanalysis of PKI-based attacks, "forging" seems to be used in a number of ways, so I can't really fault Microsoft for their use of the term here.
But your point is well taken; the distinction improves technical accuracy.
Not under just Earth-gravitational compression, no. The problem is underspecified, of course. It's physically possible to compress quite a lot of feathers into a bucket, however "bucket" is defined. So both answers are wrong with some probability, but the "bucket is denser" answer is wrong with higher probability...
... unless we're also including the bucket itself ("the density of a bucket of feathers" doesn't definitively imply it's not included), in which case all bets are off. If I were to use, say, a galvanized steel bucket that I have here and fill it with feathers, the resulting agglomeration would definitely be denser than a single feather.
So it's a stupid problem, and people declaring that one answer or the other is obviously correct haven't given it enough thought.
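For illustration, a back-of-the-envelope calculation with assumed figures (the feather density and bucket mass below are rough guesses, not measurements) shows how much including the bucket shifts the answer:

```python
# Back-of-the-envelope with ASSUMED figures: the density and masses
# below are rough guesses for illustration, not measurements.
FEATHER_DENSITY = 0.0025   # g/cm^3, loosely packed down (assumed)
BUCKET_MASS = 1100.0       # g, a ~10 L galvanized steel bucket (assumed)
BUCKET_VOLUME = 10_000.0   # cm^3 (10 litres)

# Feathers alone: the bulk density is just that of loose down.
feathers_only = FEATHER_DENSITY

# Bucket included: total mass over the same volume.
with_bucket = (BUCKET_MASS + FEATHER_DENSITY * BUCKET_VOLUME) / BUCKET_VOLUME

print(f"feathers alone:  {feathers_only:.4f} g/cm^3")
print(f"bucket included: {with_bucket:.4f} g/cm^3")
```

Under these assumptions the bucket contributes some forty-odd times the mass of its contents, which is rather the point: "the density of a bucket of feathers" is a different question depending on whether the bucket counts.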
I don't know what you (and five upvoters) have been reading, but it's discussed plenty in the literature already, considering how young this field is.
See for example Will GPT models choke on their own exhaust?, a post from Ross Anderson, which links to a paper from his group on arXiv investigating this issue. It's also been raised (in a technically sophisticated fashion) in places like LW posts, so it's not just researchers in their day jobs looking at the problem.
The lay press and J Random Tweeter may not be flagging the issue, but actual, like, researchers are hardly keeping silent about it. Which is hardly surprising since the problem is prima facie evident in the training strategy.
Agreed. Wikipedia says the permanent population is around 5 1/2 million, so if "millions of tourists" are visiting each year, that's a significant portion of the population. Why cram more in? (Yes, the tourists are not all there at the same time. It's still a measure of "tourism density". Indonesia has 50x the population and 2600x the area; it's hardly surprising it also sees more tourists.)
Of course, the problem with having a Tourism Board is that attracting more tourists is pretty much their reason for existing. So that's what they'll do, whether it makes sense or not.
Two. OO is clearly established, and there are a number of real-world fuzzy-logic applications, often in areas like industrial control.
Other hyped technologies lost their lustre but actually succeeded in a quieter way. "Mobile" was a source of much scrambling for a while, and technologies like WML were the rage, but then hardware caught up and mobile apps became just another mundane deployment option. There's a lot less chatter about Big Data these days, but there's still a lot of big-data processing and data science happening; people just stopped treating it as something exciting.
There are hyped technologies which found some market success and then plateaued, failing to reach the heights some people forecast. Wearables and home assistants are good examples.
And then of course there were those that succeeded only in niches, like Fourth-Generation computing and blockchain. (From the latter I'm excluding good applications of proper Merkle graphs.) And the ones that still attract a lot of hype with little to show for it, such as VR.
Personally, I am still wildly unimpressed by LLMs and other deep-transformer-stack applications, but I suppose we'll see.
You could implement that now with a couple of VMs (or a VM and a host OS). One VM has Internet access but restricted access to the corporate network, and the guest OS only allows signing on as a limited-privilege account (with maybe some provision for installing approved software by the end user). The other VM has access to the corporate network, source repositories, etc, but not to the Internet.
If you run a grownup windowing system, such as X11, you could even have your side-by-side application windows.
Any jobs that require consulting documentation require Internet access these days, so everyone who actually does anything to the Google stack is going to need Internet access.
This is simply false. It is entirely possible to mirror or proxy all the documentation that a set of developers legitimately require.
OP was referring to documentation, not the package implementations.
That said, there is certainly no reason why Google couldn't have their own copies of the documentation, and indeed they should. If you're hosting your own package repositories (which ought to be standard practice everywhere; public code repositories are toxic and have been the source of many, many vulnerabilities over the years), you should also be hosting documentation to match the package versions you're hosting.
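As one concrete shape of that practice: a Python shop, say, can point pip at an internal mirror so developer machines never consult the public index. This is only a sketch; the hostname is a made-up placeholder:

```ini
# /etc/pip.conf -- sketch only; pypi.corp.example is a hypothetical internal host
[global]
index-url = https://pypi.corp.example/simple/
trusted-host = pypi.corp.example
```

Matching documentation can then be served from the same host, versioned alongside the packages it describes.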
Developer systems are a huge source of vulnerabilities in most organizations. Developers build and execute code, often of uncertain provenance. They tend to download and install a lot of tools and toys. They often run with excess privilege – I don't know how many devs I've seen running browsers as root / Administrator, out of sheer laziness. Studies have shown that developers are more likely than average users to commit certain security failures or fall for certain types of phishing and other attacks, possibly due to a combination of comfort level (so lower suspicion) and overconfidence. Developers and development systems are a tempting target for attackers, since they provide a route to supply-chain and infrastructure attacks.
Assuming your system has finite storage, if the stack keeps growing – due to unbounded recursion, say – then at some point you'll hit the limit. Where and how an OS imposes that limit varies; UNIX users should be familiar with ulimit and getrlimit/setrlimit, for example (and their quirks, such as hard versus soft limits and the "unlimited" setting, which means "not limited by this mechanism, so you'll hit some other limit").
On a protected-memory system growing the stack shouldn't actually "overflow" as such; the process's request to add another page will simply be denied. But "stack overflow" is still commonly used for that condition.
"Stack overflow" is also used to refer to overflowing a specific stack-allocated area into an adjacent area, which doesn't involve (attempting to) resize the stack at all.
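As a concrete illustration of the soft/hard distinction and the "unlimited" sentinel, here's a small Unix-only Python sketch using the `resource` module (which wraps getrlimit). Note that the unbounded recursion trips CPython's own recursion guard long before the OS stack limit is reached:

```python
import resource  # Unix-only module wrapping getrlimit/setrlimit

# Inspect the process's stack-size limit.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

def describe(value):
    # RLIM_INFINITY is the "unlimited" sentinel: not limited by this
    # mechanism, so some other limit (e.g. available memory) applies.
    return "unlimited" if value == resource.RLIM_INFINITY else f"{value} bytes"

print("soft stack limit:", describe(soft))
print("hard stack limit:", describe(hard))

# Unbounded recursion. In CPython the interpreter's own recursion
# guard fires well before the OS-imposed stack limit is hit.
def recurse():
    return recurse()

hit_guard = False
try:
    recurse()
except RecursionError:
    hit_guard = True
print("recursion guard tripped:", hit_guard)
```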
One could get a piece of mirror roughly cut to fit and glue it in place until a proper replacement can be had.
Might not even need to get it cut. I once did an emergency replacement after the mirror glass fell out of a wing mirror (on a rather antiquated vehicle) with a hand mirror and some 5-minute epoxy and some sort of shim (don't recall what I used for that). Basically glued the hand mirror to what remained of the wing-mirror internals. Worked well enough until I was able to get a replacement.
But the style these days is for excessively complex, fragile solutions to simple, long-solved problems.
Reminds me of the Haynes instructions to replace the alternator in the Civic I had many years ago, which also specified removing a number of other parts. Turns out you can unbolt the alternator from its bracket, wiggle it up higher in the engine, unbolt the bracket itself from the frame, wiggle that out, then by a series of rotations get the alternator out through the gap. Much faster and less to go wrong.
Agreed. While prescriptivism in English diction and usage is no more than wishful thinking, I doubt you'd find many educated people in the US who think "illusive" is an acceptable substitute for "elusive" even in AmEng. It's a straightforward homophonic confusion.
(Alas, we've lost the similar fight against "insure" versus "ensure", but then those two share an etymology – unlike elusive/illusive – and "insure" as a variant of "ensure" has been in use for centuries. Nothing to be done about it now.)
That delay comes at the expense of needing to hire lawyers - and they will not work for a promise of jam tomorrow.
Oh, I don't know about that. Trump's had some success in getting lawyers, of a sort, to work for free. What's Sydney Powell doing these days? She hasn't been disbarred yet.
Is that not an unfair contract by asking the user to sign away their rights.
Eh? In the US, at least, contracts restrict parties' rights all the time. My mortgage, for example, forbids my renting out the house.1 Restricting rights is one of the main things contracts do.
That's what makes certain rights "inalienable" – they can't be forfeited. Other rights can be.
1Presumably to combat the plague that is AirBnB, which I'm fine with. Unfortunately it also prevents using the house as a long-term rental, which we might want to do in the future.
Who doesn't want to take it for a drive?
Me. I don't.
I've done ML and NLP work, and I have degrees in writing and rhetoric (and CS). I've read a bunch of the Transformer papers. I find the architecture uninspiring and the actual implementations completely uninteresting. The transcript excerpts I've read in various papers and articles don't make me any more inclined to use the damned things.
I have much better things to do with my time.
Here in northern New Mexico I pay a bit less than 10¢ per kWh, partly because the local electric cooperative is now completely daytime-photovoltaic (on average; battery banks allow a bit of carryover which compensates for the rare cloudy days). Of course there are fixed costs as well, and in the winter months when we're not running the ceiling fans (we don't have, or want, A/C) and the fridge and freezers don't work as hard, it's not unusual for the fixed costs to form the larger part of the monthly bill.
(There's more lighting in the winter months, of course, but with LEDs that barely registers. And we have electric radiant-floor heat in a couple rooms but between the large thermal mass of the building and passive solar gain, it doesn't come on very much.)
We're looking into getting a PV & battery backup system for the new house, just to be able to run the well pump during outages. The house isn't large but has a single-plane roof so we could fit a decent area of panels.
Bullshit has been used to describe "humanist" grotesque typefaces since they were first invented. Probably used to describe most other typefaces too, for that matter, but the humanist-grotesque designers seem particularly prone to it for some reason.
I suppose the designers of geometric-grotesques have other justifications they can trumpet ("simplicity!" "geometry!"), so they don't need to handwave in the direction of abstractions like "trust". And serif-typeface designers can just look smug.
Sans Serif are better to read where there is more text
Citation needed.
In my graduate Visual Rhetoric course we read a couple of studies1 debunking this and other myths regarding typography and readability. Readability of various typeface features and families varies widely by reader, and generalizations about the superiority of sans-serif, for example, don't appear to hold under scrutiny.
1Reasonably methodologically sound, though N was not particularly large.
I'm not sure why they want this clause to hire people and scan them other than maybe they want to hedge their bets
Showing you used some likeness you licensed for $200 is much cheaper than trying to prove in court that you used a machine-generated likeness that just happens to resemble the plaintiff.
Sure. I wouldn't give them the time of day either. Unfortunately there are potential reasons to boycott most or all of the big-media conglomerates these days. I'm staying away from Warner Brothers Discovery after Zaslav's massive act of vandalism.1 Once in a great while I watch something on Prime but I'm seriously thinking of boycotting them, given Amazon's generally repulsive business practices. Disney is pretty awful too, particularly with their continued abuse of IP law for their own benefit and against others'. There's some decent imported content but in the US the conglomerates are buying up all the import-reselling streaming services.
Truth is I don't watch television much any more, and I suspect it will be less and less as time goes on.
It's not like there was some golden age of ethical studios and producers, of course, but I'm tired of participating in it. At least with books2 there's a dribble of money going to the creators.
1And, I suppose, because he and his company are generally assholes. Rebranding HBO – the original subscription TV service in the US, a significant historical legacy – as "Max" just because they can is typical of their "cheap and stupid content for cheap and stupid viewers" philosophy, which of course is right out of the Discovery playbook. A plague on the lot of them.
2Conventional books, with a single or sometimes two authors, or the occasional collection, where I have some decent evidence about the provenance and confirmation that the authors are real people.
Challenge any one to explain exactly how it isn't a Ponzi scheme. They can't.
There are many examples of cryptocurrency fraud which are not Ponzi schemes. Not all fraudulent practices are Ponzi schemes, and reducing everything to one is simply lazy thinking.
Celsius does appear to have been running a Ponzi scheme, particularly in their lending business and issuance of the CEL token. But they also appear to have been conducting other fraudulent operations which were not Ponzi schemes, such as market manipulation and making claims of excessive returns.
If we're going to criticize the cybercurrency industry – and, sure, do so, because it's a pile of shit – we could at least try to do so accurately.
there IS no ethical way to earn billions
Some estimates claim J. K. Rowling is a billionaire. The Harry Potter books are crap,1 but I don't think they're unethical. And if they get some portion of their readers to read2 more than they otherwise would have, they may even have a net good effect.
1Abysmally shallow characterization, plots that depend on dei ex machina magical toys that are never used again and completely unrealistic behavior, utter lack of stylistic grace – compelling evidence, if any were needed, that popularity is unrelated to quality.
2And write. Fanfic may be the subject of a great deal of scorn, but it's still intellectual paideia and a flexing of the literary muscles. And I've seen reputable claims that some of it is quite good. (I'll note that the protagonist's fanfic-in-a-fiction in Rowell's Fangirl is superior to its Potter-like fictional source material; obviously that's not an example of real fanfic, but Rowell apparently did a lot of research into fanfic when writing the novel.)
Similar comments have been made by a number of security experts, such as the editors of SANS NewsBites. There's a lot of concern over the lack of information in the report and possible explanations. One of the SANS editors (Frost, maybe) noted that the only obvious "leaked key" he could think of offhand in this scenario is the key used to sign JWTs, and losing control of that would be bad indeed.
NoScript and uBlock Origin and the like can certainly block many of the major tracking mechanisms, because most sites are content to use well-known mechanisms and lose data from the small portion of users who block them. But they can't prevent all tracking, because there is no perfect decision procedure for determining whether a resource – which can include anything on the page – has a nefarious purpose. Any HTTP request can be a side channel.
It's not possible in general to distinguish a tracking resource from a non-tracking resource.
If I have my server inject an IMG element with SRC attribute that specifies an image server and a unique path for every response, then when the browser requests the image, the image server has a reliable way of associating that request with the one that served the page that included the IMG element. The returned image data might be a single-pixel transparent GIF, but the browser doesn't know that until it receives the response. Since the path changes, the browser can't cache it.
And that's just the simplest form of web bug. There are many variants.
HTTP requires the client identify the desired resource, obviously, and the resource identifier (the Request-URI) can be used as a side channel.
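A minimal sketch of that simplest form, in Python. Every name here (including "tracker.example") is illustrative, not a real service, and a real deployment would sit behind an HTTP server:

```python
import uuid

# token -> metadata about the page view that embedded the bug
page_views = {}

def render_page(user_agent):
    """Serve a page fragment embedding a 1x1 tracking image with a unique path."""
    token = uuid.uuid4().hex
    page_views[token] = {"ua": user_agent}
    # A fresh path per response defeats the browser cache, so the image
    # server sees exactly one request per page view.
    return f'<img src="https://tracker.example/px/{token}.gif" width="1" height="1">'

def handle_image_request(path):
    """On the image server: tie the Request-URI back to the original page view."""
    token = path.rsplit("/", 1)[-1].split(".")[0]
    view = page_views.get(token)
    # A real server would return a valid 1x1 transparent GIF; placeholder here.
    pixel = b"GIF89a..."
    return view, pixel
```

In practice the correlation table lives on the tracker's backend, and the key can equally ride in any other part of the request – a query string, a cookie, a CSS URL.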
Either it's moral to do it or not.
What are you, a child? That might be the stupidest thing I've read in a comment today.
Either it's legal or not. Doesn't matter if it's Mom or Microsoft.
Since a great number of laws distinguish between individual and corporate actors, this is patently false.
You're zero for two. Care to try again?
I hope they didn't incentivise him
Her. As the article notes, and a quick bit of research could have confirmed.
Corley is pretty highly regarded. I find this decision rather surprising, to be honest. "Yes, the bear has a history of mauling hikers, but has promised not to maul this particular hiker for at least 10 years." But I haven't read the (redacted) opinion, just the summary in the article.
(Of course, Corley clerked for Keeton, who is famous for various things; but among them is coming to the wrong decision – twice – in Lotus v. Borland, which had to be reversed on appeal.)