Infinite Improbability
I presume the caffeine is best administered as a fresh cup of really hot tea.
Interestingly, the NEC V20 could emulate an 8080 in hardware, as well as being a slightly enhanced 8088. In theory it could run CP/M programs natively, as CP/M was written for the 8080, although in practice, after the Z80 came out, later versions of CP/M and its applications started to use the Z80 extensions to the 8080 instruction set, which the V20 didn’t have.
In the ideal case you might expect your drones to have fall-back modes if one method of communication is blocked.
I wonder if, rather than being conventional military equipment, these are a quick and dirty lash-up of off-the-shelf commercial kit, without all the backups, to see if they can fill a hole in their arsenal quickly and on the cheap.
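Something like the little sketch below is the sort of minimum I’d expect - to be clear, the link names, priorities and the stand-in health check are all invented for illustration:

    import random

    # Minimal sketch of a comms fall-back loop. Link names, ordering and
    # the random 'health check' are invented purely for illustration.
    LINKS = ["satellite", "radio", "cellular"]  # preferred order (hypothetical)

    def link_usable(link):
        # Stand-in for a real check (signal strength, jamming detection, ...)
        return random.random() > 0.5

    def pick_link():
        for link in LINKS:
            if link_usable(link):
                return link
        return None  # everything blocked: fall back to, say, return-to-home

    print(pick_link())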
I find it depends on what I’m doing - if I want to motor through something, WFH can let me get on with fewer interruptions than when I’m in the office.
On the other hand, working in the office reminds me that the people I promised some work for still exist and that I should actually do something for them instead of just the other projects that are more interesting.
A bit over 20 years ago I started using Minolta SLR cameras, followed by DSLRs. My current Sony camera (which has a compatible lens system) replaced the optical viewfinder with an electronic one. While being able to overlay graphics into the eyepiece is useful, its electronic version of reality took a bit of getting used to, and I still like the purity of my earlier cameras’ optical systems.
I remember the 68000 did the same thing. As well as the standard clock signal, there was a slow clock output at 1/10 the CPU clock speed - typically 0.8MHz for an 8MHz CPU - which was used to feed the E input of 6800-family IO devices, as Motorola were a bit slow getting 68000 IO devices out. If I remember correctly, there were a few 68000 opcodes specifically designed to access 8-bit 6800-family devices that only used half the data bus.
You have a good point - I was trying to find out about the economics of self-publishing, and came across an article suggesting many authors make more from talking about their subject than from actually writing about it, and the books are more a way in to doing the talks.
Running the numbers, if I’ve understood it correctly:
97 ‘books’, each of which takes 6-8 hours to write (call it 7), means 679 hours of work.
$2000 of income from that works out at about $2.95 an hour.
At that productivity rate I might question whether it’s actually worth it!
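For anyone who wants to check my arithmetic, a trivial snippet (the 7-hour figure is just the midpoint of the range quoted above):

    # Back-of-envelope check of the self-publishing numbers quoted above.
    BOOKS = 97
    HOURS_PER_BOOK = 7       # midpoint of the 6-8 hours per 'book'
    TOTAL_INCOME = 2000.0    # dollars

    total_hours = BOOKS * HOURS_PER_BOOK      # 679 hours
    hourly_rate = TOTAL_INCOME / total_hours  # ~$2.95/hour
    print(f"{total_hours} hours, about ${hourly_rate:.2f}/hour")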
It seems that the only real intelligence in LLM AI comes from the user knowing how to prompt it to give the answer they want. Everything after that is just a souped-up word association game.
In the words of Dr Alfred Lanning’s hologram in I, Robot: "You must ask the right questions".
I think there are some AIs (or rather ‘applied statistics’ - I don’t believe there is any actual intelligence) that are yielding good results in some fields, and I wouldn’t dismiss it all - for example, the pancreatic-cancer-detecting AI.
But that, I think, is the problem - the more reliable AIs are trained on a limited dataset specific to the problem. When you have a very large language model of everything, something that can answer general questions on anything, I think the noise in the system is going to make it unreliable.
Maybe what we need instead of general AI are multiple independent AIs trained for specific tasks - e.g. software development assistants trained on good software, medical assistants trained on medical sources, etc. - without them also being trained on the complete works of Shakespeare, all the nonsense on Reddit and so on, so that what they produce is focussed on their ‘expertise’ without the noise of all the other stuff. Yes, it probably means they will have more limited conversational abilities and users will have to think more about how they query them, but I’d rather have that, if it means they ‘know their stuff’, than plausible but wrong answers some of the time.
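As a rough sketch of what I mean - a thin dispatcher in front of a set of specialist models, where every name and keyword is hypothetical:

    # Sketch of the 'specialist AIs' idea: route each query to a model
    # trained only on its own domain. All names/keywords are hypothetical.
    SPECIALISTS = {
        "code":    "assistant trained only on vetted open-source software",
        "medical": "assistant trained only on peer-reviewed medical sources",
    }
    KEYWORDS = {
        "code":    {"function", "bug", "compile", "python"},
        "medical": {"symptom", "diagnosis", "dosage"},
    }

    def route(query):
        words = set(query.lower().split())
        for domain, keys in KEYWORDS.items():
            if words & keys:
                return SPECIALISTS[domain]
        # Refusing beats a plausible-but-wrong answer from a generalist.
        return "out of scope - please pick a specialist"

    print(route("why does this python function have a bug?"))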
Sure, not everyone can, and not everyone wants to (I prefer to go in to work).
But talking about Dell: yes, they produce physical products, but many of their production lines are outsourced to Asia, with some in Europe and South America too. Not everyone involved in creating a physical product needs to be there to actually make the thing.
If, as other articles in this esteemed journal suggest, ChatGPT had mostly been taught using works of science fiction, a bunch of hackers might well be more familiar with the source material than academic linguistics professors.
Not suggesting of course that we’re all a bunch of nerds who prefer a rollicking space opera to the complete works of Jane Austen!
If it read everything known to humankind only once (and there were no quotations of other works within those, which excludes a lot of works), then it might be that it doesn’t have an internal representation of any one work.
However, if it has had the same document as input multiple times (e.g. it’s crawled the web and found multiple copies, or multiple copies of extracts) or there is something that is often quoted in other works, wouldn’t the model parameters be biased towards reproducing sequences found in those works?
Could that reinforcement then be considered ‘memorising’?
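A toy way to see the effect - a counting-based bigram model, nothing like a real LLM of course, but the duplication bias works the same way:

    # Toy bigram 'language model': duplicated training text biases the
    # next-word probabilities towards reproducing the duplicated passage.
    # (Illustrative only - real LLMs are vastly more complex, but heavy
    # duplication has a similar reinforcing effect on their parameters.)
    from collections import Counter, defaultdict

    def train(words):
        counts = defaultdict(Counter)
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
        return counts

    def prob(counts, a, b):
        total = sum(counts[a].values())
        return counts[a][b] / total if total else 0.0

    quote = "to be or not to be that is the question".split()
    other = "to be happy is to be free".split()

    once = train(quote + other)       # each work seen exactly once
    many = train(quote * 50 + other)  # the famous quote crawled 50 times

    print(prob(once, "be", "or"))  # 0.25  - one continuation among several
    print(prob(many, "be", "or"))  # ~0.49 - duplication makes it dominant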
When I’m watching a movie (particularly a historical drama, where there are known facts) and they get something very wrong about a subject I have a particular interest in, my suspension of disbelief vanishes: it sets me wondering what else they’ve got wrong in subjects I don’t know so much about.
I get a similar feeling with ChatGPT - I’ve seen enough examples of things I know about that it’s got wrong that I have trouble believing anything it comes out with.
OK, so it may be fine to use it as a springboard for ideas, but I wouldn’t trust it with facts.
I do actually think that ChatGPT might be beneficial - not because it produces (or doesn’t produce) any reasonable code, but because the ‘conversation’ you have with it along the way, correcting it and providing more information, might clarify in your own mind how to solve a particular problem.
“down to the training data”
With current AI generations, as I understand it, the training data was frozen about three years ago.
However, if future generations are continually learning, how ring-fenced will the training data be? And how easy would it be to deliberately introduce extra vulnerabilities into AI-generated code by poisoning the training data - say, by posting multiple copies of intentionally vulnerable code along with the associated keywords?
“and ideally, we used to write and unit test our own code”
I thought the ideal was that someone else writes the unit tests, based on the specification and interface of the unit. That way the test writer is less likely to make the same assumptions as the unit’s author, and more likely to pick up obscure fault conditions.
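For instance, given a spec like ‘clamp(value, lo, hi) returns value limited to the range [lo, hi], and raises ValueError if lo > hi’ (a made-up example - the function and its spec are hypothetical), a spec-driven test writer would probe exactly the edges the implementer might assume away:

    import unittest

    # A candidate implementation (hypothetical). The point is that the
    # tests below are written from the spec alone, not from this code.
    def clamp(value, lo, hi):
        if lo > hi:
            raise ValueError("lo must not exceed hi")
        return max(lo, min(value, hi))

    class TestClampAgainstSpec(unittest.TestCase):
        def test_within_range(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_boundaries(self):
            # The edges an implementer often assumes away.
            self.assertEqual(clamp(0, 0, 10), 0)
            self.assertEqual(clamp(10, 0, 10), 10)

        def test_outside_range(self):
            self.assertEqual(clamp(-1, 0, 10), 0)
            self.assertEqual(clamp(11, 0, 10), 10)

        def test_invalid_bounds(self):
            # The obscure fault condition the spec calls out.
            with self.assertRaises(ValueError):
                clamp(5, 10, 0)

    if __name__ == "__main__":
        unittest.main()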
I remember that back in the late 70s/early 80s, RAM was one of the more expensive components in a computer (more so than ROM), and backing storage was slow or expensive. So various computers of the era were sold with the ability for the end user to buy extra firmware, either as a ROM to plug into the motherboard (e.g. the various word processor, spreadsheet and graphics extension ROMs for the BBC Micro) or as a cartridge, so whole applications were there instantly and didn’t eat valuable RAM.
I say ‘buy’, but more likely you’d buy a blank EPROM and borrow someone else’s firmware and an EPROM programmer.
“the features now integrated into its existing anti-plagiarism products can detect AI-generated text with "98 percent confidence" – but has failed to provide any evidence of this.”
As someone who will soon be receiving a pile of student dissertations in his inbox to mark (which, incidentally, are screened by TurnItIn), if such a statement appeared in their dissertations without supporting evidence, that would be a fail from me.
I saw another article yesterday about ChatGPT being asked to generate Windows licence keys (which it did with a low success rate).
Ask it directly and it tells you it’s illegal and you should pay for it (actually as the demo was for Windows 95, it said to buy a newer one). Reformulate the question and it has a go.
If it’s so easy to hoodwink ChatGPT into doing things it’s specifically not supposed to do, I have an inheritance from a Nigerian Prince it might be interested in.