4686 posts • joined 9 Mar 2007
Re: There are some technical bugs we can certainly fix
Well, VNC is one way this could be done. Considering the terrible state of the web, I don't think VNC would actually require more data than the current web; after all, most websites are now larger than screenshots of themselves.
However, there are lots of other ways to do this. I don't have a set answer here, but it's something I'd like to encourage experimentation on.
Re: It's fascinating to see how people are so much behind the times
"There only four things that schools need to teach:"
Well, those things are absolutely important, but we also need to show children the world around them. Even if you are a great learner, knowing what to learn, and what might interest you, is hard. School needs to show you the world, at least how it is now and how it was before.
There are some technical bugs we can certainly fix
For example, the Web has the problem that a page can embed third-party elements. This used to be used for web counters, but is now mostly abused by advertisers and Facebook.
Imagine a different protocol, one that is more like a terminal protocol. You have your "screen", which in the case of traditional terminal protocols is composed of a grid of character cells, and in a new standard might be more like the browser's DOM tree. This "screen" can be manipulated via a single persistent TCP/IP connection. If you just want to display a quasi-static document, the DOM tree includes some sort of URL for links, and you send that URL when connecting. After the whole "screen" has been transmitted, the connection is dropped or put into an "idle" state from which you can request a new page from the same server if you wish.
If you want to use an application, however, the connection stays open and elements of the "screen" can send events to the server. This allows for much simpler "web apps", as they can now work synchronously and don't have to string disjoint HTTP requests together into some sort of session.
The beauty of this is that it's compatible with what we already have. SSH can easily carry such a stream, and you can outsource your authentication and encryption to it. One could even make it backwards-compatible with ANSI terminals, so you could instantly use it as a drop-in replacement for your terminal.
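A minimal sketch of what the framing for such a protocol could look like, assuming an invented JSON-over-TCP wire format (the "screen"/"event" frame names and fields are made up for illustration, not any real standard):

```python
import json

# Hypothetical wire format for the "screen" protocol sketched above:
# each frame is a 4-byte length prefix followed by a JSON object with
# an "op" field. All names here are invented for illustration.

def encode_frame(op, **fields):
    """Serialize one protocol frame as a length-prefixed JSON blob."""
    payload = json.dumps({"op": op, **fields}).encode("utf-8")
    return len(payload).to_bytes(4, "big") + payload

def decode_frame(data):
    """Parse one frame; returns (frame_dict, remaining_bytes)."""
    length = int.from_bytes(data[:4], "big")
    payload = data[4:4 + length]
    return json.loads(payload.decode("utf-8")), data[4 + length:]

# The server pushes the initial "screen" (a DOM-like tree); the client
# either drops the connection (static document) or keeps it open and
# sends events back (application mode).
screen = encode_frame("screen", tree={"tag": "p", "text": "hello",
                                      "href": "doc://example/next"})
event = encode_frame("event", node=3, kind="click")

frame, rest = decode_frame(screen + event)
assert frame["op"] == "screen"
frame, rest = decode_frame(rest)
assert frame["op"] == "event" and rest == b""
```

Because everything travels over one byte stream, the whole thing could indeed be tunnelled through SSH unchanged.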
Re: Look to SciFi for inspiration
Well, we can still prepare for it, for example by making future alternatives to the web work with lower bandwidth and complexity requirements. This would let lower-bandwidth devices take part.
I personally don't think the Internet itself is broken; IP(v6) is too simple to be broken. What is broken is the protocols on top of it, particularly the new ones big corporations try to force upon us. So maybe, just as we kickstarted the popular Internet on top of the telephone network, we could kickstart new ways of communicating on top of IP(v6).
It's fascinating to see how people are so much behind the times
I mean, there already is a set of guidelines for the "Digital World" (whatever that is supposed to be), and that's the "Hacker ethic". Additionally, what is needed is to educate people about computers, to give them some idea of how they work. In kindergarten we learned how printing works by building our own sets of movable type from potatoes. Today computers represent a technical achievement just as important as printing was.
If you do not give people the tools they need to understand the world around them, you are sure to enter a dark age in which only an elite can control the population. Democracy needs good education, and we have failed to provide computer education for too long now.
Re: Any change to notepad is big news of course.
Well, Notepad++ and Notepad are completely different things. Notepad has its use as a "paste buffer" that can also strip formatting information from your data.
I've looked at Notepad++, and I see little reason to use it. It seems to lack a unifying vision; it just looks like a lot of non-orthogonal features bolted onto a simple word processor. It fulfills most prejudices people have about Windows software.
Sending a photo via SMS
I mean, yes, there were standards to send images via SMS... however, I doubt there is much use in sending a 32x32 monochrome picture these days.
Of course, the sensible thing to do would be to define a standard format for document "facsimiles" which includes a simple high-resolution bitmap of the page along with a UTF-8 export of its contents.
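A toy sketch of such a facsimile container, assuming a plain zip archive with one bitmap and one text member per page (the member names and layout are invented for illustration; no such standard exists):

```python
import io
import zipfile

# Hypothetical "facsimile" container: one archive holding a bitmap of
# each page plus a plain UTF-8 export of its text. Member names are
# invented for illustration.
def make_facsimile(bitmap: bytes, text: str) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("page-0001.png", bitmap)                # page scan
        zf.writestr("page-0001.txt", text.encode("utf-8"))  # searchable text
    return buf.getvalue()

def read_text(facsimile: bytes) -> str:
    """Recover the machine-readable text without any OCR step."""
    with zipfile.ZipFile(io.BytesIO(facsimile)) as zf:
        return zf.read("page-0001.txt").decode("utf-8")

doc = make_facsimile(b"\x89PNG...placeholder...", "Invoice no. 42")
assert read_text(doc) == "Invoice no. 42"
```

The point of pairing the two is that the bitmap preserves the exact appearance (as a fax does), while the text export stays searchable and accessible.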
If you ban fax machines, people are going to send office documents through email... which is _much_ worse security-wise.
What I don't understand about a memory-centric architecture
Memory is one of the slower parts of computing today. Whenever your CPU actually has to access it, it takes a long time. Caching solves part of the problem, but it quickly gets very difficult.
Wouldn't it make more sense not to have one of the slowest parts of your computer be your bottleneck?
SEAL up your data just like Microsoft: Redmond open-sources 'simple' homomorphic encryption blueprints
No actual applications are currently on the horizon, but it might be useful some day.
However, we now have people peddling homomorphic encryption as the solution for the cloud. To those people I can only say: if you don't want others to get at your data, store and process it on your own computers.
It's actually not that relevant
Those LAN ports are barely ever used, and when they are, it's with a second network card. After all, the LAN port is just a cheaper way of running GPIB to a PC.
And seriously, if a single device on your network can own the whole network, you have seriously messed up.
" I don't see a dust filter on any one of them!"
Yes, that's because, through clever design, those are integrated into the case. Essentially, meshed surfaces are fairly decent dust filters. The idea is that they catch all the big particles, while the rest is simply blown out by the fans. I first saw an early version of this in an expensive measurement device: a hive-like structure, perhaps 2-4 cm deep, mounted on the air inlet. This structure caught all the relevant dust and kept the rest of the device virtually dust free.
Re: There's a non filler talk on that topic
"Not sure what a non-filler is"
Some sites have lots of articles without any useful content; those I call "filler articles". It seems there are even whole conferences devoted to nothing but filler material, designed to take up space and make you look innovative even though it's extremely low on content.
In a way this article is why reviews today are mostly worthless
It gets the facts wrong while going on about how the device looks. It ignores "no-go" areas like the missing Ethernet port or the non-hot-swappable battery. It doesn't actually test anything, like how long it takes to replace the keyboard. In short, it's mostly worthless, as it brings no information you couldn't get from the marketing blurb.
It was a lit CeBIT see, got teeny weeny, world's biggest tech show yearly party... closed its German fest's doors yesterday
Yeah, but it was dead for years
Instead of showing you new products and ideas, and telling you things you couldn't read in the marketing blurbs, they just had marketing droids telling you things you already knew.
And now, instead of refocusing on technology, they did the same thing bad companies do when trying to attract new employees: make a cargo-cult festival out of it.
Re: Win10 telemetry had one job. And it failed.
"But... wasn't that the whole point of telemetry??????"
Why do you assume that it's for quality control? Why would Microsoft care about quality at all? I mean, they had a short period when they cared about quality and every developer had to fix bugs before writing new code. That happened in the early 2000s, just after Windows XP. Although it was most likely a coincidence (Vista), managers probably saw that as the reason why sales slowed down afterwards, so they did a U-turn.
Because of Hype
"Why would any even partially competent sysadmin still do these things?"
There are lots of people out there who happen to come across a few gigabytes of data. Then they find out that when they put it into an SQL table without thinking about what they are doing, everything is slow. They decide that this must already be "big data", so they google "big data" and come across all those tools designed for it. Since they have already proven that they have no idea what they are doing, they will of course fail at installing their fancy new toys.
People who both know what they are doing and actually need things like Hadoop to achieve their goals are rather rare. Therefore it's likely that any given installation was done by people who have no idea what they are doing.
Re: AP Powers down when not in use
"Remind me again how clients find APs in the first place!"
Yes, but the APs that are running can signal the controller to turn on extra APs. After all, range typically isn't the problem with high-density installations, so you'll have fewer APs turned on when there are fewer users and less data, with the other APs automatically turning on when demand rises.
Since those installations typically have ways to nudge the user into roaming to another AP, that can work rather smoothly... in theory.
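The controller logic described above can be sketched in a few lines; all thresholds and numbers here are invented for illustration, not from any real WLAN controller:

```python
# Toy controller logic: keep a minimum set of APs powered for coverage,
# and wake extra ones when client demand grows. Both constants are
# made-up example values.
MIN_APS = 2          # always-on APs so clients can find the network
CLIENTS_PER_AP = 30  # assumed capacity per access point

def aps_to_power(total_clients: int, installed_aps: int) -> int:
    """How many APs the controller should have switched on."""
    needed = -(-total_clients // CLIENTS_PER_AP)  # ceiling division
    return max(MIN_APS, min(installed_aps, needed))

assert aps_to_power(0, 10) == 2      # quiet hours: minimum coverage only
assert aps_to_power(200, 10) == 7    # busy: ceil(200 / 30) = 7 APs
assert aps_to_power(1000, 10) == 10  # capped at what is installed
```

The always-on minimum answers the "how do clients find APs" objection: discovery is handled by the APs that never sleep.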
Now if Mozilla would look at itself
They'd flag themselves for:
* doing DNS over JSON over HTTPS with Cloudflare
* putting more and more privacy-invading features into the web (e.g. Bluetooth, WebAssembly)
* trying to coax people into having accounts with them to share their browser history
and probably lots more privacy threatening stuff going on at Mozilla.
OK Google, what is African ISP Main One, and how did it manage to route your traffic into China through Russia?
The obvious solution would be a "Web of Trust"
After all, you have lots of entities peering with each other. Each of those peerings requires an agreement, and it would be sensible to use that occasion to also sign each other's keys; after all, you typically know who you are peering with.
However, I don't think attribution is the main problem here, as it is usually rather easy to find the culprit. What's really needed is route filtering.
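The route-filtering idea amounts to: only accept announcements a peer is actually entitled to make. A minimal sketch, with invented peer names and documentation prefixes standing in for data a real operator would pull from an IRR database or the peering agreement:

```python
import ipaddress

# Per-peer allow-lists of announceable prefixes. Peer names and
# prefixes are made up for illustration (2001:db8::/32 is the IPv6
# documentation range).
PEER_FILTERS = {
    "peer-a": [ipaddress.ip_network("2001:db8:a::/48")],
    "peer-b": [ipaddress.ip_network("2001:db8:b::/48")],
}

def accept_announcement(peer: str, prefix: str) -> bool:
    """Accept only prefixes covered by the peer's agreed allocations."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed)
               for allowed in PEER_FILTERS.get(peer, []))

# A legitimate more-specific route from the right peer passes:
assert accept_announcement("peer-a", "2001:db8:a:1::/64")
# The same route leaked or hijacked via the wrong peer is dropped:
assert not accept_announcement("peer-b", "2001:db8:a:1::/64")
```

Had filters like this been in place along the path, the misrouted Google traffic would have been dropped at the first filtered hop.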
Re: Why do companies use full blown PC's for displays?
"His reason for not doing it - if I left, nobody would know how it worked."
Yes, but seriously, if you actually have people who know Windows, that is justified. Unfortunately, in 99% of companies using Windows, nobody has the faintest clue about Windows. Even Microsoft often seems not to have read its own documentation.
Also, if I had a company with people who know Windows, I'd seriously be worried that Microsoft would buy them out; after all, there are perhaps a couple of thousand people who actually know Windows, while there are millions of people who know Linux.
I remember the times...
... when those systems ran on dedicated, redundant hardware which was switched between several times per second, so any fault would show up immediately. Those systems would also check every graphics primitive they had drawn.
And before that, there were fault-detecting relay circuits which would signal when any of their relays failed.
Today, it seems, those systems are built on the least suitable platforms for the job, with no thought given to how to make them work safely.
Russian computer failure on ISS is nothing to worry about – they're just going to turn it off and on again
Actually, transient failures are to be expected
Up there, although not very far from the surface of the Earth, you are still exposed to plenty of cosmic radiation. Things like latch-ups, where a parasitic thyristor in your chip is triggered by a particle strike, shorting out part of the chip, are not uncommon.
Then again, this is space flight, and the task at hand is not very computationally intensive. So it's likely those 3 computers use space-hardened hardware with structure sizes from the 1980s; we are probably talking about the complexity of an early-1980s home computer.
It's a common theme
Usually, adding more complexity to a problem makes it less secure. That's why most common "security in a box" solutions had vulnerabilities of their own. One prominent example was Microsoft executing Visual Basic code in a virtual machine running at "system" privileges in order to find out whether a given program was malevolent. It's also common for AV systems to choke while processing obscure archive formats.
Worldwide Web wizard Tim Berners-Lee sticks wellington boot into Worldwide Web's giants: Time to break 'em up?
Let's take a look at Prestel/Minitel/Bildschirmtext/Ceefax
What if we look at those old online services? Their biggest problem was that they were controlled by the post office. This was because back in the 1970s, when they were thought up, the idea of a private person owning their own computer was considered idiotic. Today that is a very real possibility: virtually everybody has their own computer, connected to a high-speed network capable of establishing a connection in milliseconds.
What few people know is that those old technologies were meant to be extended. For example, the French Minitel had a provision for vector graphics, and the Singapore system even had full-colour photographic images. Provisions for audio were made, and adding video wouldn't be hard.
Now think about it: building a WYSIWYG editor for those old standards is near trivial, and 40x24 characters also work quite well on mobile or TV-like devices. On desktop devices you can always display several "successive" frames. When run over TCP/IP(v6), speed is no issue, and since TCP/IPv6 is peer-to-peer, it is really simple for anyone to have their own website.
Since those standards were essentially terminal protocols, they assume a persistent connection. This makes session management trivial: one TCP connection is one session, with no cookies or other complicated stuff to get wrong. Instead you have a fairly simple program looking for key presses and sending files.
Of course, I do not claim that this is the solution to the problems of the web, but I believe that in order to find a better solution, one must also know about previous attempts and how they worked.
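That "simple program looking for key presses" really is small. A toy sketch of the server-side session loop, with invented page names and menu keys (a real Minitel-style service would send 40x24 character frames, omitted here):

```python
# Sketch of the "one TCP connection = one session" model: the handler
# is just a loop that reads key presses and answers with pages. Page
# contents and keys are invented for illustration.
PAGES = {
    "1": "== News ==\n...",
    "2": "== Weather ==\n...",
}

def handle_session(keys):
    """Yield one page per key press; the iteration itself is the session."""
    yield "== Welcome ==\nPress 1 for news, 2 for weather."
    for key in keys:
        yield PAGES.get(key, "Unknown choice, try again.")

# Feeding it the key presses a client would send over the connection:
pages = list(handle_session("12"))
assert pages[0].startswith("== Welcome ==")
assert pages[1].startswith("== News ==")
assert pages[2].startswith("== Weather ==")
```

No cookies, tokens, or session stores: the session's entire state lives in the handler's local variables for the lifetime of the connection.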
Re: Well it's probably the Google brain drain
"It's not 'just as bad' - in ways that matter (security/privacy) it's orders of magnitude worse."
Compared to what? None of the mobile operating systems out there are any good for security and privacy. It's like comparing how tasty different kinds of industrial waste are: sure, the one coming from the sewage works might be tastier than the one coming from your lead mine, but neither is suitable for human consumption.
Well it's probably the Google brain drain
In the eyes of potential employees, Google used to be an ad-supported company doing cool stuff. Now that image is shifting more and more towards a company doing mundane stuff to shift more ads.
The result is that more and more of the smart people are leaving the company, leaving the "not so smart" people behind. Eventually this means that the average competence inside the company falls considerably below the average competence of new hires, as the "smart" ones leave quickly while the "dumb" ones stay behind.
Eventually you are left with a company of people who are bad at what they are doing. Add the inability of those people to take any criticism, and you are probably where Google is now.
Google rarely produces "cool stuff" any more; their Android is just as bad as any other mobile operating system, lacking the simple core design idea that all truly successful software has.
Even their AI developments are more or less a few new ideas applied to insane amounts of CPU power.
Welcome back, 'ping of death', it has been... a few months. Now it's Apple's turn to do the patching
You know the funny thing is...
... that Microsoft still calls Office "productivity software", even though virtually everything you can do with it can be done better, and with less work, in more suitable software packages. It's probably more of a time waster than Minesweeper and Solitaire combined.
It's not about what they know...
"Would you mind enlightening us further about what you know and they don't?"
... but what they choose to believe. The basic idea behind a blockchain is that you have a public log which everybody can check, and which everybody does check, in a very distributed way.
That's only superficially what logistics needs. Sure, a common log would be good, but it shouldn't necessarily be public, and you only have a smallish number of partners. A more sensible solution would be a contract (as in, a shipping order) which is signed by both the sender and the recipient. This would be stored by all parties who need to know, and in case of a dispute, every side can prove which contract was agreed upon. No "blockchain" or crypto puzzles necessary.
Microsoft to staff: We remain locked and loaded with US military – and will keep adding voice to AI ethics debate
The important word here is "still"
I mean, commercial distributions seem particularly interested in trying out new things that can increase their number of support calls. It's probably just that networkd is either too new and therefore not yet in the release, or still works so badly that even the most rudimentary tests fail.
There is no reason to use systemd's NTP daemon, yet more and more distros ship with it enabled instead of a sane NTP server.
Re: Not possible
"This code is actually pretty bad and should raise all kinds of red flags in a code review."
Yeah, but for that you need people who can do code reviews, and also people who can accept criticism. That also means saying "no" to people who are bad at coding, and saying it repeatedly if they don't learn.
SystemD seems to be the area where people gather who want to get code in for their resumes, not people who actually want to make the world a better place.
Fujitsu: Closes director's gate to Tait, 9 execs abdicate, and for German workers – a crap Weihnachtszeit
Well not quite
ISPs still know which websites you go to by looking at the IP headers. If you send a packet to an IP address, they can just look up who that address is registered to and know that you were talking to that particular company.
There are only two reasons for which this is useful:
1. Censorship avoidance (until this service is blocked)
2. Centralizing the DNS infrastructure so it's easier to monitor.
Considering that both Cloudflare and Mozilla are large companies which could be coaxed into acting for people who would like 2 to happen, I don't think it's a good idea.
But then again, the RFC is not really a big deal. That puts it onto the same level as "IP over Avian Carriers" (RFC 1149) or "Scenic Routing for IPv6" (RFC 7511).