Re: Department of Energy?
Right on the DoE's website's home page:
The Energy Department's 17 National Labs tackle the critical scientific challenges of our time -- from combating climate change to discovering the origins of our universe.
To be fair, photographic evidence is already highly suspect. I don't think Samsung are contributing much to that problem.
(A wider problem is that all evidence is suspect, or should be. Can't trust still photography, video, audio recordings. There's a ton of research showing eyewitness accounts are crap. Some forensic science is complete rubbish, such as bite identification and facial reconstruction; the good stuff is highly susceptible to abuse, as various instances of tampering by labs, planting of evidence, and other cases have shown. But this has always been a problem.)
Exactly. Yes, part of the problem here is that Samsung is being deceptive in its marketing; but the bigger part is the idea that an altered photograph is inherently a better photograph.
I can find all the nice photos, real and faked, of the moon that I like online. I don't need a phone that lies to me in order to get one. And the same goes for any other pictures I take with any device.
There's an obnoxious advertisement for the Google Pixel Whatever that plays on some streaming service we have, where they show it "removing distractions", i.e. other people, from photos. What the everlovin' fuck is wrong with these developers? What sort of vile, narcissistic relationship do you have to have with the world to believe that other people are mere "distractions" from the fantasy you want to construct with your faked photography?
I don't find Richard's argument convincing in the slightest. I don't believe anyone needs "better" photographs achieved through fakery. Grow up and learn to live in the real goddamned world.
And, of course, he didn't say he'd open-source all the Twitter code; he said he'd open-source "the algorithm", by which he presumably meant the ranking-and-propagating algorithm for individual tweets. Twitter has a large codebase (just look at that page of bits they have open-sourced to get an idea of how large), so presumably even most Twitter developers would have needed to do quite a bit of exploring around the repositories to find that particular chunk of code, even if it's properly isolated and abstracted away.
So it was a dumb thing to promise, and likely quite difficult to deliver.
The Supreme Court has a bunch of Catholics as members
Six of nine: Roberts, Thomas, Alito, Sotomayor, Kavanaugh, and Barrett. But the latter two have only been on the bench a few years, and it's hard to see Sotomayor joining the other five in some sort of Catholic cabal.
And this is a historical aberration. Only 15 of the 115 SCOTUS justices have been Catholic, so nearly half of them are presently on the bench.
I'd argue more for undue organized-Christianity influence of various competing sects in US Federal and state government. The pressures applied by various Evangelical organizations since the late nineteenth century are well documented, for example. That's not to give the Roman Catholic Church a pass – they put their thumb on the scales whenever they can – but it's a scrum, not a coordinated movement. And even in the RCC there are many differences of opinion; it's not nearly so consistent as it likes to pretend.
Lying requires agency, which transformer LLMs lack. The term of art is "hallucinated", which means the token predictor followed a gradient into a non-factual basin.
Train a model on a corpus of much of the web, and it'll have the average accuracy of what's on the web.
Chat-GPT is not a liar. It's not malicious. It's a tool, and it's a blunt one that isn't really good for much except generating anecdotes about how lousy or wonderful (depending on your inclination to be impressed by Stupid ML Tricks) it is.
Scientific fraud can be difficult to prove, too, particularly in such details as who actually committed the fraud. You could have six or eight co-authors on a paper, most of whom did their part of the research in a perfectly above-board way, and just one or two who faked initial observations or tweaked datasets that others subsequently analyzed.
It is time to institute an across-the-board rule that says something along the lines of "If you are caught with AI fakes in your paper, you and your department will never get grant money ever again. Period. Because we can't trust cheaters, and everyone knows that once a cheat, always a cheat." And maybe rivet a big, polished brass S to the department head's forehead for being Stupid enough to allow the paper to be submitted for review in his/er department's name.
It's completely unreasonable to expect a department chair to police the academic output of the department. That's not what they're trained for, they don't have the resources for it, and it's not a feasible process anyway.
So all that would happen under this regime is, first, a bunch of innocent researchers would be punished because of one bad actor; and then universities would tie the whole nonsense up in the courts and get the rule stayed by judicial order forever.
We absolutely do need to mitigate the revenge effects of the publication mandate, which has all sorts of other problems, such as the excruciatingly low rate of reproduction studies (because they're not rewarded). And there are a number of cognate issues; the ACM has in recent years raised the problem of the close relationship between conference presentation and publication, for example, which can also impede good research. But a guilt-by-association overreaction certainly will not help.
Compensation for board members isn't always this generous, but it's usually pretty damn sweet considering the amount of work involved. Yeah, I'd read some reports and attend some meetings in swank locations for an extra $100K/year, thanks.
But you know they do good work, like not punishing executives who fail utterly. The club doesn't run itself.
I thought it was because MongoDB is "web scale".
But seriously, yeah. Adopting a tool because it lets you make wild stabs in the dark is a good way to cut yourself.
Congress repealed Glass-Steagall, in the GLBA, which you might note is named for three Republican legislators. Clinton just signed it.
More importantly, both 1999 and 2008 were earlier than 2010, so your point is irrelevant to the claim you're arguing against. What members of either party did prior to Dodd-Frank says nothing about who did what to Dodd-Frank.
New York just took over Signature Bank. Signature was more like Silvergate than like SVB – heavily exposed to the cryptocurrency players. The Signature takeover closes another of the very few connections between cryptocurrency exchanges and the legitimate US financial system.
More details as usual are available from Molly White. Worth reading if you're a cryptocurrency supporter (for the news) or skeptic (for the lulz – seriously, this is one of my favorite daily treats).
That's Hamilton!, Ohio, with the exclamation point. Which really tells you most of what you need to know about Hamilton!.
(I know, I know. Email corrections.)
But of course this is the point. Enjoy-your-symptom surveillance will be abused by law enforcement in small jurisdictions at least as eagerly as it will be in large ones. And the small ones are the ones more likely to be conducting personal vendettas. The NSA doesn't care about you personally, but you might have rubbed a local sheriff's deputy the wrong way. Yet many people happily sign up, and indeed pay, for the privilege of being surveilled.
It's not Nineteen Eighty-Four; it's Brave New World.
Otherwise both are basically barren rocks into which we would have to burrow for shelter and find ice to melt for water
I.e. far inferior to adverse environments available here on Earth, such as Antarctica. Perhaps we are "killing Earth" (though that's a rather strong claim, even accepting AGW, pollution, etc), but it will take a lot of work to make Antarctica or the Sahara or other less-convenient spots on this planet less amenable to human life than the Moon and Mars are.
Space exploration as primary research? Fine. But human colonization of other planets would have enormous costs and offers very little return.
Anything a significant portion of the audience recognizes as a word, is a word. That's how human language works.
Sorry, but those are the slithy toves of English.
As for your actual question, none of the LLMs currently deployed are capable of "imagining" anything. They're just walking gradients in parameter space, and there's no evidence that space is of sufficiently high dimensionality to implement anything that can reasonably be called "imagination". They can certainly converge on unexpected optimizations; that's been demonstrated more than once with much smaller transformer-architecture models that optimize for solving mathematical problems, for example. But currently the hallucinations are just gradients that lead into non-preferred basins.
Crank a transformer model up a few orders of magnitude in parameters and maybe you'd get functional imagination, though it would help to have a much larger prompting context as well. Crank it up further and run a lot of them and you'd have some non-zero probability of getting a Boltzmann Brain within parameter space, though what that would look like from the model interface is impossible to predict.
If memory serves, Codrescu wrote an essay suggesting that the rise in accusations of "political correctness" followed immediately on the demise of the USSR and the end of the Cold War. As accusations of being a Communist lost their sting, conservatives had to find a new label for their villains. Second-rate public intellectuals (I use the term loosely) like Dinesh D'Souza, Bill O'Reilly, and William Bennett replaced the Cold War with a fairly feeble culture war, primarily targeting academics. (Codrescu wasn't the only one to make this connection, of course; Wikipedia notes that Du Mez, for example, develops this argument at greater length.)
Codrescu had a nice little formulation about conservatives replacing "CP" (i.e. Communist Party) with "PC", which is indeed about the level of subtlety of the US culture war of the 1990s.
if you discount the native peoples of North America who have largely been displaced
'Round these parts, those are fighting words. Plenty of places in the US have significant native and native-mixed populations.
Last I checked, my wife wasn't "displaced", and discounting her has not gone well for others in the past.
But I take the force of your argument, which is that anti-immigration sentiment is historically and economically naive, and the resulting policies are both intellectually and morally offensive.
Honestly, I'm having trouble thinking of a technology that turns bad programmers into good ones. The only one I can think of that works in a significant number of cases is education.
COBOL was the first widely-deployed attempt at that, as far as I can recall, and it didn't succeed. COBOL may have let non-programmers write parts of programs, but it didn't make them good programmers. Functional programming has its advantages, but it didn't turn bad programmers into good ones. Same for structured programming, object orientation, 4GLs, StackOverflow, GitHub Copilot, and so on.
All the technologies you use now were "bleeding edge" at some point.
And the force of the argument here is precisely against needing "bleeding edge" technology. It's that an evolution in RDBMS capabilities, which is already underway thanks to SQL/PGQ standardization, removes the need to switch from an established technology to a significantly different one.
Your question about "the researchers had put similar effort into extending the native graph DB" doesn't make sense. The capabilities they added to DuckDB are already in graph DBMSes that support GQL, because they're a subset of GQL. What the paper shows is that it's feasible to add the SQL/PGQ enhancements to an existing analytic RDBMS, and that when they did so, performance was superior to the existing graph DBMS.
I'm not qualified to have a strong opinion on the debate here, but this particular line of argument is irrelevant to it. The question at hand is whether GDBMSes fundamentally handle a significant subset of use cases better than RDBMSes. "GDBMSes have X and RDBMSes don't have X yet, but it's been shown they can have it" supports the RDBMS side, not the GDBMS one.
Or one idiot single-issue politician with an axe to grind proposes something, and most of the rest are afraid to be seen voting against it and being accused of being "soft on crime". Or the surveillance-state lobby suggests those who support it will find a bit of support for their own pet projects. And so on.
"Log4j is used in the vast majority of software," ArmorCode's Lambert said
That hissing is the sound of someone's credibility rapidly evaporating.
Log4j is a Java component, and Java doesn't represent a majority of existing software, much less a "vast" majority of it. What an idiotic thing to claim.
Yes, and converting from some sufficiently-regular "plain text" collection of information to JSON or XML, or vice versa, is something that can easily be scripted. If someone's "plain text" summary of their dependencies can't easily be converted, then it's probably not useful for any real-world purpose anyway.
Really, if you have more than a handful of dependencies on external components, this information should be in a database anyway, in which case generating CycloneDX, say, is trivial.
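To show how small that scripting job is, here's a minimal Python sketch. The one-dependency-per-line input format and the component names are hypothetical, and the output is only a rough CycloneDX-shaped skeleton, not a complete, spec-conformant SBOM:

```python
import json

# Hypothetical "plain text" dependency list: one "name version" pair per line.
plain_text = """\
log4j-core 2.17.1
commons-lang3 3.12.0
jackson-databind 2.14.2
"""

deps = []
for line in plain_text.splitlines():
    name, version = line.split()
    deps.append({"name": name, "version": version})

# Rough CycloneDX-shaped output (the real spec has more required fields).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [{"type": "library", **d} for d in deps],
}
print(json.dumps(sbom, indent=2))
```

Going the other direction (JSON back to a flat text report) is just as mechanical, which is the point: the hard part is having accurate dependency data at all, not the serialization format.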
There are some great Asimov stories about robots that lie in order to comply with the laws of robotics.
Every time I see someone refer to Asimov's laws, I have to wonder whether they've actually read the stories (and novels), because most of them are about unexpected results and failures of the schema. The laws aren't a description of how machine intelligence would or should behave, or a prescription for achieving alignment; they're a thought experiment that shows how difficult alignment is.
Asimov's main point was: Look, here are three very simple principles that seem sound, individually and as a set. Now look how quickly they fall apart.
LLMs are highly likely to report a number of classes of non-factual information, and larger models are more likely to do so. Cleo Nardo recently posted an accessible explanation of why that's the case.
Chat-GPT never "thought" Hanff was dead. It doesn't think. It's a long way from anything that can reasonably be described as thinking.
It was phrase-extending and it hit a gradient in parameter space that took it down a non-factual path. That's all. Everything else imputed to it in Hanff's piece – lying, "doubling down", making things up – is a category error. There are no qualia in the GPT models. There's no malevolence. There's no imagination.
I am not a fan of LLMs, which I regard as unimpressive (and I have cred in this field too), an enormous waste of resources, a terrible distraction from things that matter, and a likely source of adverse effects. But could we please stop turning them into demons? They're just very crappy tools.
Right, which is why they should be feeding into a SIEM, and probably into a UIBA system to flag unusual behavior.
Logs aren't good for much other than post-exploit forensics – if that – if they're not being continuously analyzed, and there's no way humans can do that. Even if you could find people to try, human beings are not good at constant vigilance.
Exactly. Any CI/CD system that doesn't at a minimum support version control and individual authentication for all of its configuration components is rubbish. Any organization that uses CI/CD and doesn't at a minimum keep all of that under version control with individual authentication is direly broken and, frankly, deserves to get pwned.
Hell, moving more stuff into the version-control domain is one of the biggest advantages of IoC, along with the usual things like repeatability, auditing, recoverability...
It'd be better if CI/CD configuration was signed, and those signatures verified regularly, but version control and strong authentication are at least a start.
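As a minimal sketch of what "signed and regularly verified" could mean in practice — using HMAC with a shared secret rather than full public-key signatures, and with a hypothetical inline config standing in for a real pipeline file:

```python
import hashlib
import hmac

# In practice this would come from a secrets manager, never from source control.
SECRET = b"example-shared-secret"

def sign_config(config_bytes: bytes) -> str:
    """Return a hex HMAC-SHA256 tag for a CI/CD config file's contents."""
    return hmac.new(SECRET, config_bytes, hashlib.sha256).hexdigest()

def verify_config(config_bytes: bytes, expected_tag: str) -> bool:
    """Constant-time check that the config still matches its recorded tag."""
    return hmac.compare_digest(sign_config(config_bytes), expected_tag)

config = b"stages:\n  - build\n  - deploy\n"
tag = sign_config(config)

assert verify_config(config, tag)              # untouched config verifies
assert not verify_config(config + b"#", tag)   # any tampering fails the check
```

A real deployment would use asymmetric signatures so the verifier never holds signing material, and would run the check on a schedule and on every pipeline start; but even this much catches silent edits to pipeline definitions.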
"[T]ruly new inventions" is so patently a "no true Scotsman" cipher that this thesis is hardly worth responding to.
What was a "truly new invention"? The wheel/roller, lever, wedge, and inclined plane all have analogues in nature. Control of fire is conceptually obvious once uncontrolled fire is observed. Go on, tell us what counts.