Suggestion: Replace Management at Boeing by an AI
OK, that's probably too easy as a real test ...
226 publicly visible posts • joined 2 Sep 2016
> Once it started hitting the atmosphere its disassembling accelerated very rapidly.
Not really.
At first it managed to maintain intended attitude (tiles-protected belly first) well into the altitude where plasma builds up.
You could see the fin on one side moving to maintain that attitude (which seemed to work pretty well at first).
Then Starship rolled out of control, the fin basically folded up and stopped moving (it would have been interesting to see what happened to the other fin at that moment).
The craft then rotated the tiles sideways, exposing unprotected skin to the plasma, then reoriented to an engine-first reentry.
Then transmission stopped.
Overall absolutely fascinating.
A big Thumbs Up to the SpaceX team.
The only thing even weirder than trying to build security on complex and notoriously hard to manage Microsoft tools like Windows, AD, Outlook, Azure and Exchange would be to cut loose AI to "help" manage that mess.
Generally: You don't fix complexity by throwing more complexity at it. AI or not.
Trying to fix security with inconsistently - and in some cases unpredictably - performing AI is not even selling the usual Snake Oil; it's more like suggesting you fill your fire extinguisher with gasoline...
Finally, Microsoft being unable to keep intruders out of - let alone drive them completely out of - their own systems does not seem to be an especially good marketing pitch for this service.
https://www.theregister.com/2024/03/08/microsoft_confirms_russian_spies_stole/
As I understand the piece, they were regular Cognizant employees.
So while Google dealt with "contractors" from Cognizant, they were in fact regular employees there.
So they should have got standard sick/vacation pay from Cognizant, notice period, etc.
> It claimed that US president Joe Biden held Putin responsible for Nalvalny's death, and that, in response, Putin called the accusations "baseless and politically motivated."
> Putin has not made a public statement about Navalny’s death.
Similar to DS9999's argument:
The point is: a US President accusing Putin of XXX, and Putin calling the accusations "baseless and politically motivated", is such a strong statistical signal in the training data that the "AI" assumes it to be true in this specific case as well.
Which again shows that LLMs are not about rational understanding or facts, but about statistical relationships between words.
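To sketch that mechanism (with a hypothetical mini-corpus and invented counts, purely for illustration): a purely statistical model just emits whatever continuation it saw most often in training, whether or not it is true in the case at hand.

```python
from collections import Counter

# Hypothetical mini-corpus of (context word, following word) pairs.
# The counts are made up to illustrate the mechanism, nothing more.
corpus = [
    ("accusations", "baseless"),
    ("accusations", "baseless"),
    ("accusations", "baseless"),
    ("accusations", "unanswered"),
]

counts = Counter(corpus)

def most_likely_next(word: str) -> str:
    """Return the continuation most frequently seen after `word` in training."""
    candidates = {nxt: n for (w, nxt), n in counts.items() if w == word}
    return max(candidates, key=candidates.get)

# Frequency alone decides the "answer" - no facts are consulted.
print(most_likely_next("accusations"))  # prints "baseless"
```

Scale that up by a few billion parameters and you get fluent text, but the underlying move is the same: the statistically dominant continuation wins.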
IANAL and all, but this having to do with the now corrected value sounds most plausible.
Lending 26M$ to a company expected to be worth 600-something M$ might seem like a relatively low risk.
Lending 26M$ to a company now shown to be worth only 36M$ as per the offer basically means the company is worth 10M$ plus the money you gave them.
I can see why, as a lender, you'd call debt of 2.6x the actual value a "default" ...
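The back-of-the-envelope version of that ratio, using the figures quoted above:

```python
loan = 26          # M$ lent to the company
offer_value = 36   # M$ the company is now shown to be worth, per the offer

# Net out the lent money: what the company is worth on its own.
value_ex_loan = offer_value - loan   # 10 M$

ratio = loan / value_ex_loan
print(f"debt is {ratio:.1f}x the remaining value")  # debt is 2.6x the remaining value
```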
I'm pretty sure that can be done, the specs are low enough to be mobile - but I don't get the point _why_ one should attempt it.
A radiation-rich, unserviceable environment is still hard to design for, so having comms equipment in space and computing/storage on the ground is the established norm.
What is the actual benefit of also moving compute and storage into space?
> Fewer software licences mainly, as there are vastly fewer cores
Fewer cores means less processing power.
One can also get X64 servers with fewer cores.
Or are you trying to claim here that one core in this box is as powerful as 8-12 X64 cores?
> Also less power usage and other environmentals,
Citation needed ...
> And finally, fewer admin costs,
Citation urgently needed. Maintenance & admin costs for specialized hardware typically come at a premium over bread-and-butter X64 gear.
Maybe Quantum-something, "AI" and the attached fine collection of Snake Oils explain this price tag.
From the link:
IBM LinuxONE 4 is the latest iteration of IBM LinuxONE enterprise servers with on-chip AI inferencing and industry-first quantum-safe technologies.
Clicking on "quantum-safe" lands you on a full pool of snake oil:
- IBM Quantum Safe Explorer
- IBM Quantum Safe Advisor
- IBM Quantum Safe Remediator
Can run anything every other Linux server can run, too. Just much more expensive...
> It's only a shrinking market until customers realise they might be better off on-prem.
The enormous complexity and high cost of running and maintaining complex setups of traditional enterprise software on-prem were what led most customers to start investigating cloud in the first place. Cloud was cheap for a while, so instead of getting complexity and volume under control on-prem, the existing hairball of code was thrown into the cloud.
Today, there is no longer a cheap and secure platform to run a lot of complex stuff, so customers IMHO need to do it the hard way this time:
Start getting it (more) simple again; reduce the number of active software components. Reduce complexity. Reduce use of traditional "Enterprise" catch-all software that often lumps dozens of separate tools together, creating more complexity than it solves.
From there, deploy a smaller number of potentially simpler components - only what you really need - either on-prem or in the cloud (a mix will probably be optimal).
for building and flying a wildly successful experimental aircraft and for achieving 1st flight on another celestial body.
And hats off to JPL and NASA as organizations for daring to undertake this experiment despite the probably not too positive risk assessment.
Maybe I missed the irony tags in your post, however ...
This is a failure of reasoning many people make when judging AI behavior.
There is nothing human, there is nothing like rational thought or understanding in AIs.
There is instead some pretty clever code and a lot of data, mostly trained statistics, which generate outputs from inputs.
If you ask a person to repeat a word over and over, they will not start telling you stories from their years at school.
Some AIs start spilling their training data.
https://www.theregister.com/2023/12/01/chatgpt_poetry_ai/
The failure modes - or the ways to mislead a person vs. an AI into doing something unexpected/stupid/dangerous - are completely different between humans and AIs.
Answers given by LLMs do seem to be pretty sharp at times.
This is an illusion.
Searching through the text at https://docs.google.com/document/d/1k0vNLaU__btQrN1AREEnUo5LANJGiSPw/edit?pli=1
It contains "Artificial Intelligence" quite a few times and "AI system" over 1,000 times; however, I was not able to find a clear definition of either term.
AI today (even if you exclude marketing-only usage) is an umbrella for multiple quite different technologies that overlap heavily with Big Data, Statistics and Analytics.
So this proposed legislation, lacking a clear definition of what it actually tries to regulate and what not, could become an ongoing legal risk for companies working in technologies that technical laymen (e.g. judges) might consider AI-related - laymen who would necessarily have to rule based on this text without a clear understanding of the technology.
For both your posts above: I see a sequence of words that do not seem to form a coherent thought or argument.
Have these just been generated by an AI-type process, maybe without sufficient training related to the actual subject at hand?
> “consistently reproducible correct output” is a ridiculously high bar.
It is not for traditional, deterministic von Neumann architectures.
> You know what would be worth billions of dollars a year in itself? An LLM that could perform code-review at decent accuracy rates; not perfect, just decent. Spot the standard top 10 coding errors, plus top 10 “best practice style” issues, finding 90% of those actually existing on released production codebases. Just that.
No, it won't. This kind of tool has been available for years now - completely free of any "AI"...
https://owasp.org/www-community/Source_Code_Analysis_Tools
We can agree, however, that humans are the creative but inconsistent, hard-to-make-correct part of the human-tech interaction, which needs processes and tools and reviews and whatnot to get things right overall.
But how does combining inconsistently performing humans with inconsistently performing AI solve any problem then?
OK, they tell you to "think big", but ...
AI as practiced today in the form of ML and LLMs is still (admittedly a bit oversimplified) statistics on steroids.
While these technologies often seem to produce impressive results at first sight in a number of applications, the missing details, the intermittent complete failures caused by "hallucinations", and the complete lack of any process resembling real understanding generate even more "impressive" results.
Call me old-fashioned, but a working solution that turns out consistently reproducible correct output still has to be demonstrated to be achievable (let alone actually achieved) by the current AI approach.
So it might still be a bit too early to optimize our chip production capacity large-scale towards the kind of hardware only usable by this specific branch of technology.
Your single argument seems to be that warming is/was slowing down - which the theory does not explain, so the theory must be wrong.
If so, this is not correct.
Once you look at air/ocean/soil as a combined system of energy storage and average out natural oscillations, the long-term trend is unbroken.
https://www.climate.gov/news-features/climate-qa/why-did-earths-surface-temperature-stop-rising-past-decade (from 2013, so this is hardly news)
The theory of rising CO2 trapping more heat on earth still stands.
It is actually pretty simple, and CO2's ability to absorb/re-emit infrared radiation can be shown in a simple lab setting - as John Tyndall already did in 1859.
For everyone willing to trust their own eyes and brain.
Who are _you_ calling a cultist?
> There are of the order of a million million tons of CO2 in the atmosphere. Sounds a lot, but is trivial by planetary standards.
120ppm sounds tiny, but the rise from 280 to 400ppm already gives us a hard time.
> Natural events have in the past meant that sometimes there is a lot more, sometimes a lot less.
Yeah and sometimes sea level was hundreds of meters higher or lower than today.
> The first major failing of this article is that it fails to discuss those natural processes.
happening typically over the course of millions of years, not just 150.
> The second major failing of this article is that it fails to discuss the oceans. There is a thousand times as much CO2 in the oceans, dissolved or as carbonates, than there is in the atmosphere. If we did somehow withdraw 1.0E9 tons of CO2 from the atmosphere, .99E9 tons would be released from the oceans to restore equilibrium.
Actually no.
Currently the raised CO2 in the atmosphere is "venting" into the oceans, seeking a new equilibrium, and leading to more acidification there.
> What are those 'natural processes' I mentioned? I suggest emission from the junctions of tectonic plates, as CO2 is expelled from subducted carbonate rocks. Expelled into the ocean, where we do not directly see it, but still dwarfing any man-made emissions.
Again those processes exist, but are actually pretty slow.
The rise in CO2 since 1850 can be attributed to humans burning coal and oil. Volcanoes etc. are little blips in the human-made long-term trend upwards.
> It is time to stop the hot air about CO2, and to start preparing ourselves for an inevitable further rise in sea level to a geological long term normality.
It's time to stop listening to distractors, it's time to stop companies that enable and pay distractors - like you.
> > Once the amount of biomass stabilizes
> That never happens.
> Many habitats are anoxic because of bacteria proliferation and therefore C can't bind with O.
Where are large anoxic habitats where we actually try to grow forests for the purpose of capturing CO2?
The big Carboniferous coal deposits were built up under very specific circumstances during a relatively short period 300-350My ago, when large flooded continental shelves provided the anoxic environment that kept dead wood from oxidizing back to CO2 under water long enough for it to be covered by mud and sand and become trapped in an anaerobic environment.
This is basically the same process that manages to preserve wooden ship wrecks for hundreds or thousands of years.
If what you assume (forests somehow continually storing more and more un-oxidized C without growing in biomass) were true, we would find coal (and oil) deposits from all ages all over the planet - not only in very few places, all from one specific period of time when conditions "just were right" for coal/oil to form.
> then ends up in the huge amount of dissolved ocean organic matter and eventually accumulates in deep sea sediments.
> Same applies to carbon sequestrated as CaCO3 from biological origin
These are two examples of real long-term CO2 sinks - but a completely different process from growing forests as CO2 sinks.
> Combustion is never complete. That's why burnt forest sites are black.
Maybe, but the dead, not completely burned wood then also decays to CO2, like all other dead wood exposed to fresh air and water.
A new forest will eventually grow, capturing back the CO2 released in the fire, but this will again take ~ 50 years.
> So, that story of "stabilisation" of biological carbon sinks is a fallacy.
I'd wish you were right, because then CO2 capture would be a lot easier.
But this is only wishful thinking.
They are sinks, yes, up to a point while the forest grows.
Once the amount of biomass stabilizes (tree growth volume equals tree death), so does its bound CO2 volume.
Plus, this balance is only stable while the forest is healthy.
Once weather shifts it may become too dry or too wet to sustain the chosen species of trees at a given location.
Pests or droughts can weaken the forest, kill trees, free the bound CO2.
Forest fires can free _all_ CO2 that has been bound in several decades in a few hours/days.
So yes, a growing, healthy forest is a sink.
I would not qualify it as "great" because of its long-term fragility.
No law is perfect.
And yes, it can be complex to interpret.
And yes, it may codify things you do not support/accept.
However, that does not make the law or the agencies that enforce it unconstitutional.
Neither does that allow a private company to ignore the law.
And I assume that no one at SpaceX but Musk really thinks this suit is a bright idea - even if no one dares to voice that concern for fear of being fired, too.
With this case, SpaceX starts to feel the effects of Musk's deteriorating leadership, which may well damage SpaceX's reputation and business.
After all, those guys demanding separation between SpaceX and Musk had a very valid point...
Wiping out, or at least limiting, the variability of fake caller IDs should be a manageable task.
Always displaying a correct country code for example already would help a lot.
My parents are both in their 80s and get harassed by those kinds of calls multiple times a week, from fake, presumably local numbers.
Blocking numbers (basically the only available defense mechanism on fixed lines) is meaningless, as long as attackers are able to switch country/region/number with ease.
Worse: basically, telcos are acting as partners in crime here. They earn money by carrying those fake calls.
So not only are they not incentivized to stop this abuse - au contraire - they even profit from it.
Looks like a regulation issue to me... fixing it would finally also mean less human trafficking and fewer abductions.
There does not seem to be a downside besides less profit for telcos.
> What they have all failed to realise is that the data feeding this monster is what makes GenAI good - or bad.
True. I think that is one of the more common misunderstandings in the current hype.
If your business problem cannot be described as detecting weak statistical dependencies (or the lack thereof) in very large data sets, then the current GenAI approach will probably not lead to a useful solution for your business. Which is true for, I guess, 99% of companies...
Nearly every new version of old IT products nowadays presumably contains "AI" capabilities in some form.
SAP, Excel, Fortinet, Cisco, etc.
However, if you look closer at the so-called AI capabilities, they usually closely resemble what was advertised in recent years as simply automation of some kind, "integrated analytics", "big data", "data fabric" or any of the "smart"-somethings of the late 2010s.
The seemingly unstoppable trend of AI-in-everything often seems to result from a marketing department in full overdrive, while credibility for the claims is built by some interesting and often impressive simulation of text comprehension by publicly available LLMs - which are still just applied statistics on steroids...
Contrary to what the companies above (and many others) claim, I have yet to see an "AI" implementation in older products that amounts to more than a more or less useful text generator, one that manages to summarize things it read "on the internet" sometimes correctly.
On one hand, it should by now be obvious to any investor that his intensifying god complex, FU attitude and right-wing conspiracy tendencies can no longer be ignored.
All of those pose a clear danger to anyone who plans to actually earn money by giving some of theirs to Mr Musk.
So it will be interesting to observe whether Mr. Musk's ongoing harsh treatment of the investors/victims of his X/Twitter personal blog results in enough people thinking hard about this proposal for his personal "non-woke" AI bot.
On the other hand - as the article seems to indicate - investment decisions driven by greed and rational reasoning usually do not seem to be tightly coupled. Or coupled at all.
Everyone being hurt by this investment should have known better.
I'm out of this and prefer to invest in popcorn...
Networked products that employ AI in the wild not only introduce a new set of classic attack vectors, now even completely new anti-AI methodologies might be developed.
Wonder what hallucinating AIs may cause in security appliances. It will be interesting to find out.
Very much this.
And then replace "location harvesting" with "information harvesting about citizens" in general.
The amount of unclear and often low-quality data (guess-work) about private citizens being sold commercially as "verified information" is staggering in some countries.
Each bit of it - true or false - can hurt you by limiting your effective ability to get/keep a job, get affordable housing, get affordable loans, etc., without you ever knowing why you did not get a specific job or an apartment you intended to rent, or why you are always the one getting the most expensive loan offers from banks.
What is true for the CIA (executing kills based on obtained metadata) is also true for commercial entities buying this data: business decisions, often automated, are made based on purchased metadata.
In both cases the metadata cannot be controlled/verified/corrected by the victim of the decisions based on it.
And in both cases, the user of this information will not even confirm/deny what information played a role in the decision-making.
Worse: the information harvesters can disseminate low-quality or even wrong information about you without any control, verification or consequences.
Personal Information is in many countries a very big market without any controls or safeguards.
You seem to imply that NSA wiretapping takes place on American soil only, which is not the case.
Wiretapping takes place in exchanges throughout the world: UK, Australia, Germany, Pakistan, etc.
So this law makes it legal for the NSA to, e.g., record a call between London and Berlin by two non-US nationals.
I understood Musk's actual business plan for TwiX to be the opportunity to ram your adverts specifically into the eyeballs of right-wing nuts, whose opinions are "cancelled" in the rest of the "woke" media.
So why does anyone act surprised when adverts on this platform are shown next to hate and antisemitism? That's simply what the designated target audience consumes most.
This is just how this dark twin of Twitter is designed to operate nowadays.
Don't like your ads next to hate? Don't advertise on TwiX...
Agree. Over the last few years Musk has behaved as if someone successfully planted the idea that "hate sells" would somehow work out commercially.
This is a style that wasn't obvious (at least to me) in Musk 10 years ago.
We know from his other ventures he is determined to follow what impulse tells him is the right way.
I just wonder how deep he will run TwiX into the ground before he re-evaluates this belief or digs out of the bubble he has obviously fallen into - and how dangerous he will become to the success of Tesla and SpaceX if he stays in his bubble.
Maybe him being pushed out of PayPal was a sign of things to come ...
> The sh1t has already hit the fan.
Not really... wait a few months/years, and AI impact and potential will probably grow, increasing the risk further.
However, given the non-linear and non-deterministic behavior of LLMs in general, combined with an ever-changing, fluid definition of "national security", good luck determining whether any given system might pose such a risk.