Re: Only X ?
As stated in the story, it isn't the only one, but it is the only one that seems to be blatantly ignoring its legal duties.
The rumours circulating in German IT forums are that the networking gear licences ran out and nobody bothered to renew them.
Rumour has it that the person who switched the network gear from 5- or 10-year licensing to annual licensing left the company a few months back, and nobody took over responsibility for the licensing. So when the invoice turned up, nobody countersigned it, and accounting didn't pay it, as it hadn't been signed off... The networking gear then stopped working when the licences expired...
If that is the case, big oops!
Probably because most people don't have graphics cards or slots to put them in. Most have laptops or compact desktops with integrated graphics.
Also, GPUs are much more efficient than current CPUs at AI tasks, but they are still a long way from being optimised for AI. Then there are bandwidth issues: the graphics RAM is quick, but you still have to shovel all the data over the bus to the graphics card.
This is where integrated designs, like the Apple Silicon range, excel: fast memory directly integrated into the chiplets.
Integrating the NPU into the CPU, along with graphics cores, makes sense for a lot of devices where discrete GPUs are not needed, or where there is no energy or thermal headroom to accommodate them.
I'm glad I have a dumb car, and I won't be updating it to anything smart.
Given the track record on security alone, I just don't see these companies providing monthly security updates in 10, 15 or 20 years. I'll continue to treat cars like every other appliance I buy: I'll buy a dumb version and add cheap, replaceable smarts where they make sense.
Europe has relatively good free-speech rules, but you can't incite a riot, or incite harm, death or racial hatred, for example.
Germany is more sensitive: Holocaust denial and the glorification of National Socialism are illegal, which, given its history, is understandable.
The big problem with these platforms is that they ignored such laws when they were small, or threw lawyers at the problem instead of actually tackling it, until the cost of lawyers and fines exceeded acceptable levels or the cost of doing it properly. But by the time they were big enough that authorities really started pressing them to comply with the law, they were too big to do any sort of compliance at scale.
If they had implemented compliance at the beginning, it would have scaled with their userbase. The userbase wouldn't have been able to grow as quickly, as it would have had to be held in check by their compliance responsibilities, but at least we wouldn't have the problems we face today. Lawyers, though, seem to be cheaper than staff to actually deal with the problems these companies have caused.
No, you can do that. But other programs shouldn't be able to do that without your permission.
If you try and edit the config file yourself, you should get a warning, but can continue.
If another program tries to edit the settings, or to replace part of the program, it is blocked and you are warned. If you started that other program, you can let it continue, but until you give it permission, it should be blocked.
This is the part that is broken: if that other program uses a sandboxed program, like TextEdit, to do the dirty work for it (automation), you don't get warned. If you try and edit the config file yourself with TextEdit, you won't get a warning either.
The PDF example is a system-wide operating system setting. It isn't changing the PDF reader or its settings; it is using a central OS API function to define which application the OS calls when opening a PDF file.
This is one application changing specific settings in another application, or the application's code. Only the program itself should be able to change its settings.
For example, you don't want a rogue web browser add-in changing the configuration of the AV software to whitelist the downloads folder, so that downloaded files are no longer checked for malware. The security mechanism should stop this, but it seems that if the add-in uses a go-between (e.g. a sandboxed app like TextEdit) to do the work for it (automation), it can bypass this security feature. This is what the feature in macOS is supposed to stop, but it seems it is not doing its job properly.
SfB still exists; there is still SfB on-site. SfB in Microsoft 365 has, AFAIK, been deprecated and users moved to Teams.
This is the editing of the configuration file that belongs inside the application package. This should be blocked by the OS, which should let the user decide whether to continue.
The nearest equivalent on Windows would be a user trying to change a configuration file in the application folder or the Windows folder: Windows will ask them to enter an administrator username and password before they can save any changes (assuming they are following best practice and aren't logged in with an administrator account). The macOS feature is a bit more thorough, or rather is supposed to be: even logged in as an admin, an application shouldn't be able to change another application or its settings without the OS informing the user of the fact and letting them decide whether to proceed.
This would, for example, stop malware from changing the settings in the web browser so that it stops checking for malware, changing the AV software to whitelist a certain app or directory, or overwriting an application with an infected version. Using automation and existing sandboxed apps, the malware can seemingly get around this restriction.
We had a sex-obsessed character in one game. The DM was getting tired of him wenching his way through every tavern or encounter on the road, so he arranged an encounter with a witch, who cursed him...
Let's just say, if it was dark, we didn't need a torch, just a trouser-less dwarf. It also made the character less appealing to the women he encountered...
My T480 is nearing its end of life. My colleague, who joined a month after me, had an L480, but it died a couple of months back. It looks like that generation of Ls had a lot of problems: dry solder joints, bad BIOS updates, and it throttled itself to 400MHz on several occasions, for example.
But with companies championing AI, and Apple putting neural cores in their iPhones and Apple Silicon Macs, it is a poor show that they still can't work out the context of a sentence and decide whether a pronoun or a verb is needed. It is even worse in German, where every verb is also a noun and all nouns start with capital letters; even the keyboard on the iPhone and iPad gets confused by this all the time.
Having the option to mark words as pronouns would be useful - although I suspect, given the sinking levels of comprehension, that might confuse some users.
I know that when I learnt German, I had to go back and relearn some English concepts I took for granted. I knew what nouns, pronouns, adjectives and adverbs were and could use them without thinking, but actually thinking about them when trying to apply them to a foreign language made me realise how little I need to think when speaking or reading English. And the verb tenses: present, future I/II, subjunctive, past, past perfect, future I/II subjunctive, future progressive and so on. Then subject, accusative, dative, genitive, singular, plural (and, in German, masculine, feminine or neuter).
I feel for the author and the people he interviewed. I have the problem doubled, in that I dictate or speak to Siri in both English and German, and that often confuses the system.
The same is true when typing. On the iPhone, it turns verbs, especially German verbs, into nouns (capitalises them), and it often goes back and changes words earlier in a sentence, ones you have already checked, replacing them with random other words, either changing the meaning of the sentence or turning it into complete gibberish. As you have already checked those earlier words, you often don't notice they have changed, but I will often be looking at the text whilst typing and notice random words in other parts of the text changing.
Android does this as well, to a lesser extent.
macOS does some autocompletion, and if you are typing a word it doesn't know, it will always replace it with a known word! I was typing a reply yesterday, and every time I spelt out a company name it didn't know in full and pressed space, it replaced the name with a word from its dictionary. Being a touch typist and looking at some source material whilst typing, I failed to notice this for a while and had to go back and replace all instances with the company's name. I have since turned off the autocorrect feature in macOS.
I hope Apple really do pull their finger out, or realise that it is harder than they thought and go back to the dictation software makers and work with them again.
For me, the doctor, dentist and shops (supermarket, clothing and shoe shops, electronics, book store, etc.) are within a 15-minute walk, the railway station as well. When I worked in the city, I'd walk to the station every morning and catch the train to work.
I now work in the town where I live and commute by bike: 15 minutes. I hardly need to use the car at all these days. When I was working from home, I'd walk to the local supermarket at lunchtime and buy ingredients to cook a fresh meal.
I've always lived in towns where everything was within walking distance, apart from work. I spent a long time working on contract all over the country.
But where I now live, the doctor, dentist, shops and railway station are all within a 15-minute walk, and work is a 15-minute bike ride (12 minutes by car). I wouldn't want to live somewhere where I couldn't just pop to the shops at lunchtime and buy fresh produce. Even at work, I can walk around the corner to the supermarket, buy fresh stuff and cook it in the office kitchen.
Companies like Google have been championing products that allow remote collaboration for well over a decade, but they don't want their staff to make use of their tools? Am I reading that right?
Sundar Pichai said the company had to optimize its use of what expensive real estate remained.
FTFY: We invested too heavily in real estate, and it would look bad to investors if it stood empty most of the time, so we need you to come back to cover our poor decisions.
This is why cars should be "dumb". This is being found while the cars are still relatively new; what happens when a similar fault is found in a 10- or 20-year-old car?
Cars have a much longer life than consumer electronics, and unless the manufacturers are willing to invest in long-term security, they should leave the vehicles as dumb as possible, at least in terms of accessibility from outside the vehicle.
Yes, we had a well in our garden, in north Germany. It was used by the previous owner for nearly all the water in the house. We removed the pipes and only used it for watering the garden, but the well ran dry in 2016. When we tried to dig a new one, we hit bedrock at about 6m and found only about 20cm of water, too shallow to pump up to the surface.
Also, if you are in volume manufacturing, the last thing you want is a flaky cloud system embodying the company motto: "That's the only way to deliver innovation with speed and agility."
You want stability, you want consistency, you want 100% uptime. You don't want "move fast and break things"; that might work for social media, but it doesn't work in big business.
"That's the only way to deliver innovation with speed and agility,"
And the one thing you don't want, when you are running a business, is your software vendor moving fast and breaking things. If the ERP system is offline or returning the wrong results, it will cost your company the proverbial arm and a leg. That is why you have development, test and staging environments, so that you know things will work when they finally hit the live system.
I work in an industry where processing can't be in the cloud, due to regulations, so no matter how hard SAP tries to push, the systems will remain on-prem in that industry. Luckily, we use a different system, tailored to our industry, and, surprise, surprise, it is on-prem only.
As we work 24/7, downtime for updates also has to be planned in, usually in a 2-3 week window. We get many more releases from the supplier, but only about a quarter make it onto our test system, and 80% of those were broken in recent months and had to go back to the supplier to be fixed before they could be tested again. One bug meant no invoices for line items with six-figure quantities: if you supplied 250,000kg, the invoice would show 50,000kg! And those were official public releases from the supplier. Just imagine if 80% of updates broke the system so badly you couldn't work, because it was a "fast and agile" cloud solution!
This is also interesting, and other manufacturers have started doing similar things. The chips are being optimised for the expected tasks and load. That means they might turn in poor general benchmark results, because the parts needed for those are not optimised (or might not even be present), but they will scream when doing the tasks they are designed for, and use less power in the process.
I think this is very interesting and hope more and more companies will start doing this. It makes running standard benchmarks difficult, and futile, because current benchmarks aren't designed around real-world tasks. Maybe we will start seeing more specialised benchmarks (AI tasks, Hadoop, various transaction-processing benchmarks). When speccing a system, you then look at the chips designed for your workload and at the relevant benchmarks for the type of work you will be doing. Specialisation and optimisation, instead of a jack of all trades.
As an aside, regarding optimisation: we had a mainframe manufacturer turn up and install a demo machine for us, as we were looking to consolidate a fleet of VAX computers with a few mainframes. The rep gave us a tape and said, "Here is a FORTRAN benchmark. Compile it on the VAX and on our system, then call me in a week, when ours is finished..."
By the time he got back to the office, there was a message for him to call us back: the VAX was finished! It turns out that the mainframe might have been quicker, but the VAX, or at least its compiler, was smarter. The VAX FORTRAN compiler had looked at the code and decided that no input, lots of processing of a huge in-memory array, transformations of the arrays and so on, and no output meant that the processing in the middle just wasn't needed. It generated a stub program that finished running before the operator had taken their finger off the return key... The mainframe, on the other hand, was using its amazing processing power to prove to the world just how fast it was!
Work smarter, not harder.
It makes sense that Google are looking to integrate an LLM, and I assumed that they (Apple, Amazon, Microsoft, Google & Meta) were all already working on it in the background.
The problem is, ChatGPT is a bit of fun, and if it throws up the wrong answers, well, it is still an experiment; it isn't a product that is good enough or consistent enough for daily use. But if you have a voice assistant, you want it to do what you tell it and to give you correct answers. (Yes, I know, the current one isn't very good at that, either.)
If you are sitting at a keyboard and ChatGPT & Co. give you some wild answer, you can go, "wait, what?" and look into it. If you are out and about, using voice, you generally don't have the time to stop and think about the answer; a "wait, what?" moment will just get you frustrated, and you won't use it again.
This is where additional testing is really needed before you can expose the product to the world. The problem is, you get half-finished products like ChatGPT that provide some good answers, but it is a coin toss whether you get a sensible answer or a hallucination.
They can't afford that with a "real" assistant; it is better that it remains halfway usable in its current form than that it gives wrong answers half the time or doesn't do what you have told it to do. An AI overhaul for beta users? Yes. An AI overhaul, without sufficient testing and proven accuracy, for the general population? No way.
I gave an upvote, but... it depends on the copyright the code is bound by, and that doesn't necessarily have to be in the header of each (or any) module of the code. If there is no header describing which licence the code is held under, you will need to find out which one applies. You can't just assume that, because there is no copyright notice or open source licence in the code, it is public domain; you should assume it is copyrighted until you can prove otherwise. E.g. if it is internal code stolen from a company, it might not have copyright headers, because they never expected the code to be seen by anyone outside the company...
And the NHTSA wants it all by July 19 – just two weeks after it sent the letter.
It might be 2 weeks after notification of fines for non-compliance, but that is still over a year after it first asked for the information. It seems like a kick in the butt to get them moving, and if they have already dragged their feet (at least on some of the information) for over a year, it doesn't sound that unreasonable.