Have they fixed the serial port issue?
The Raspberry Pi 3 essentially had only its "mini-UART" on the header, which had its clock derived from the CPU core clock and was therefore rather unusable.
Well, it's actually rather expensive. We are talking about perhaps 5 Euros per board, which is far too expensive for something only a few people need.
What makes it expensive is that it needs galvanic isolation. So it's about the same number of parts you'd have in a fixed-voltage power supply, only you need beefier components, as you're dealing with lower voltages and therefore higher currents.
I'm working on a new way to digitize video tapes. Essentially I'm trying to capture the raw signal from the video heads and process it later. Gigabit Ethernet, even when it's just connected via USB2, can carry more than 30 megasamples per second, which should be good enough for most consumer formats.
The Pi would then interface with an FPGA with some extra RAM essentially used as a FIFO.
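A quick back-of-the-envelope check of that claim (the 8-bit sample depth and the ~35 MB/s of usable USB 2.0 bulk throughput are my assumptions, not measured figures):

```python
# Rough throughput check: does a 30 Msps capture stream fit through
# a USB2-attached Gigabit Ethernet adapter? (sample depth and the
# practical USB 2.0 throughput figure are assumptions)
SAMPLE_RATE = 30e6        # 30 megasamples per second
BITS_PER_SAMPLE = 8       # assumed ADC resolution

payload_mbps = SAMPLE_RATE * BITS_PER_SAMPLE / 1e6   # 240 Mbit/s
usb2_practical_mbps = 35 * 8                         # ~280 Mbit/s usable bulk
gige_mbps = 1000

print(f"stream: {payload_mbps:.0f} Mbit/s")
print(f"fits Gigabit Ethernet: {payload_mbps < gige_mbps}")
print(f"fits USB2-attached GigE: {payload_mbps < usb2_practical_mbps}")
```

So with 8-bit samples it squeezes in under the USB2 bottleneck; at 10 or 16 bits per sample it no longer would.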
"Also, for an UPS with an ATX connector fitted you would have to install it inside the computer case. Would there be sufficient room for the battery, and a free drive bay where you can put the monitoring panel? "
Well, but then you need more converters. You'd first need to convert your 12V to whatever voltage your battery needs to charge (e.g. 14V), and then you need converters for all the voltages on your ATX connector: +12V, +5V, +3.3V, -5V and -12V.
That's a lot of converters, all working at fairly low voltages, which makes them less efficient. It's far easier to have two converters, one from 230V AC to 14V DC and one from 10-14V DC to 230V AC, than all the converters you'd need when starting from the ATX connector.
What would work is a dual-stage integrated design, where you build the UPS into your power supply. Essentially you have a barely regulated conversion from 230V AC to (up to) 28V DC, then you go from 20-28V DC (depending on the battery charge) to the voltages you need. Google does that in their servers.
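To illustrate why fewer stages win, here's a rough sketch with assumed per-stage efficiencies (the 90% and 92% figures are purely illustrative, not measured):

```python
# Illustrative comparison: many cascaded low-voltage converters
# versus a two-stage design. Efficiencies are assumed values.

def chain(*etas):
    """Overall efficiency of converter stages in series."""
    eta = 1.0
    for e in etas:
        eta *= e
    return eta

# ATX-based route: 12V -> 14V charger, battery, then rail converters
many_stage = chain(0.90, 0.90, 0.90)
# integrated route: 230V AC -> ~28V DC, then 28V DC -> rails
two_stage = chain(0.92, 0.92)

print(f"three stages: {many_stage:.2f}")   # ~0.73 overall
print(f"two stages:   {two_stage:.2f}")    # ~0.85 overall
```

Losses multiply, so every extra stage you can drop buys you efficiency across the board.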
Well... actually those pieces of software have often been designed by "software engineers". It's just that there are many really bad ones around. Just look at old devices made back when the firmware was actually designed by hardware engineers. Those tend to work for 20+ years without ever needing a firmware update, because the people who wrote them knew what they were doing.
It's not a question about the field, but about experience.
Edit:
Actually, here it's also often a question of outsourcing. HW companies tend to outsource the code to SW companies who have no clue about the product they are building. And thanks to outsourcing, even if the SW people knew better, they couldn't improve anything, as the specification has already been agreed upon.
Well, a good electrical engineer can usually code somewhat better than a bad CS person. It's just that good people are rare in both professions, and the fields are dominated by hacks... though there are lots of people who go into CS because they can re-install Windows. There's no real equivalent of that in EE.
However, I think the point behind the rant is clear: there are things that could be done better. For example, UPSes could have a mode where they acquire a minimal charge in their battery before turning the output back on after the power returns, to avoid PCs crashing when the power fails again before they have had a chance to shut down.
First of all, converting those 3.5-ish volts up to the 12 volts used by mainboards isn't simple.
If you want to combine your PSU and your battery backup, do it the Google way and use a 24-ish volt intermediate stage with two 12V lead-acid batteries. The first stage converts the 230 volts to roughly 24-28 volts to charge the batteries, while the second stage converts that down to the voltages the rest of the computer needs.
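The 24-28 volt window falls straight out of lead-acid cell chemistry; a minimal sanity check using the standard per-cell voltages (6 cells per "12V" battery):

```python
# Standard lead-acid per-cell voltages: ~2.25 V float charge,
# ~2.40 V absorption charge, ~1.75 V end-of-discharge.
CELLS_PER_BATTERY = 6        # a "12V" lead-acid battery has 6 cells
BATTERIES_IN_SERIES = 2

cells = CELLS_PER_BATTERY * BATTERIES_IN_SERIES   # 12 cells total
float_v = 2.25 * cells       # steady float-charge voltage
boost_v = 2.40 * cells       # upper charge voltage
discharged_v = 1.75 * cells  # don't discharge below this

print(f"float:      {float_v:.1f} V")    # 27.0 V
print(f"absorption: {boost_v:.1f} V")    # 28.8 V
print(f"discharged: {discharged_v:.1f} V")
```

So the bus naturally swings between roughly 21 and 28.8 volts, which is exactly the input range the second stage has to handle.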
"Of course slapping a 15kHz analogue filter on all audio ports would also work."
Actually, no. You could still use lower frequencies. Thanks to spread-spectrum techniques you can make the signal less audible than your fans. All you would hear is a very soft noise from your speakers, and you couldn't even be sure whether it actually came from the speakers or from some fan running at low speed.
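The reason such a signal can hide below the fan noise is processing gain: spreading a very slow data stream over a wide band lets the receiver correlate it back out from far below audibility. A rough calculation (the chip and bit rates are made-up example values, not from any real covert channel):

```python
import math

# Example values only: a slow data stream spread over the low audio band.
chip_rate = 8000    # chips per second (the spread signal)
bit_rate = 10       # actual data bits per second

# Classic DSSS processing gain: 10 * log10(chip rate / bit rate)
processing_gain_db = 10 * math.log10(chip_rate / bit_rate)
print(f"processing gain: {processing_gain_db:.0f} dB")   # ~29 dB
```

Roughly 29 dB of gain means the transmission can sit about 29 dB below the noise floor and still be recovered by a receiver that knows the spreading code.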
What you can do, of course, is install an amplifier between the sound chip and your speakers/headphones so information can only travel one way, and turn off your microphones when you don't need them.
Also don't run malware and don't allow Javascript to access the sound devices.
... as it can be abused to push specially crafted malicious updates to a selected few targets.
This is why proper update mechanisms don't bother with that and just have a signed file available on a webserver, which makes it much harder to push individualized updates.
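A minimal sketch of the "one signed file for everyone" idea, using a published SHA-256 digest as a stand-in for a real public-key signature (real updaters verify an actual signature, e.g. Ed25519; the payloads here are placeholders):

```python
import hashlib

def verify_update(blob: bytes, published_sha256: str) -> bool:
    """Everyone checks the same published digest, so the server
    cannot quietly hand a special build to one selected victim
    without the digest mismatching for them."""
    return hashlib.sha256(blob).hexdigest() == published_sha256

firmware = b"example firmware image"           # placeholder payload
good = hashlib.sha256(firmware).hexdigest()    # what the vendor publishes

print(verify_update(firmware, good))           # True
print(verify_update(b"tampered image", good))  # False
```

The point is that the digest (or signature) is public and identical for all users, so a targeted malicious build is immediately detectable by comparison.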
"it just adds more capacity using existing infrastructure, no need to lay additional cables to get more bandwidth between point a and point b."
Well, we are actually far away from fully utilising the fibres currently in use. Many fibres carry only a single wavelength. However, this will essentially trickle down as slower ports become cheaper and cheaper.
What's fascinating is that we are still essentially at the sophistication level of 9600 bps modems, although with light it's obviously much harder to reach that level of sophistication. There is still quite a lot of headroom for optical systems.
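The headroom is real but logarithmic: each doubling of the constellation size buys only one more bit per symbol, which is part of why every extra bit/s/Hz gets harder. A quick illustration:

```python
import math

def bits_per_symbol(points: int) -> float:
    """An M-point constellation encodes log2(M) bits per symbol."""
    return math.log2(points)

# Each doubling of the constellation buys only one more bit:
for m in (2, 4, 16, 64, 256):
    print(m, bits_per_symbol(m))
```

Plain on-off keying (2 "points") carries one bit per symbol; packing in each additional bit demands a cleaner signal, which with light is much harder than it was over copper phone lines.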
"And if they have a word processing product that sufficiently meets their needs, then why exactly spend extra money on a newer product?"
Yeah, though I don't think Word meets anybody's needs; it's usually used because people don't know anything better, or because they are forced by their IT departments to use it.
It's not uncommon for non-technical people to use office software. Besides, lots of people actually enjoy wasting their time getting a program like Word to output something decent-looking onto paper. I mean, if you can do a task in 10 hours instead of one, the result must be worth so much more.
So I didn't have the "slow mode".
Using tokens to store BASIC code wasn't uncommon back then, as it not only reduced the memory footprint, but also gave you faster BASIC execution.
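A toy illustration of the idea, using a few real token values from Commodore BASIC V2 as an example (the C64 stored each keyword as a single byte with the high bit set; a real tokenizer also skips over string literals, which this sketch does not):

```python
# A few real Commodore BASIC V2 token values:
TOKENS = {"PRINT": 0x99, "GOTO": 0x89, "FOR": 0x81, "IF": 0x8B}

def tokenize(line: str) -> bytes:
    """Replace known keywords with their one-byte tokens; everything
    else is stored as a plain character code."""
    out = bytearray()
    i = 0
    while i < len(line):
        for kw, tok in TOKENS.items():
            if line.startswith(kw, i):
                out.append(tok)
                i += len(kw)
                break
        else:
            out.append(ord(line[i]))
            i += 1
    return bytes(out)

src = 'PRINT "HI":GOTO 10'
print(len(src), len(tokenize(src)))   # 18 bytes shrink to 11
```

The interpreter then never has to re-parse keyword text at runtime: seeing byte 0x99 it jumps straight to the PRINT routine, which is where the speedup comes from.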
Such low end home computers also nicely show where the line between a "single purpose" computer and a real universal one lies.
A big show-stopper is the keyboard layout: the "3rd" layer of the keyboard doesn't work, so even basic symbols like "-" (minus) require going through an on-screen keyboard. Considering that they actually printed those signs on the keys, I assume this is still a bug.
Here's a video of someone trying Android as a primary operating system. He does that by using Termux.
https://media.ccc.de/v/zeteco-59-termux_als_betriebssystem
I have to say I'm impressed by the hinge; it stays open by itself, so you don't need a desk to use the keyboard. For touch typing, the little bumps on F and J are missing, but the keyboard is too small for that anyhow. It should work for typing with your thumbs, though. Since these early models still have Android on them, it's hard to judge them properly.
The FM radio does have RDS, but I haven't been able to get it to play stereo, and the app for it is the usual Android fare, terribly barebones for something shipped a billion times. There is some quite heavy crosstalk into the audio even when nothing is playing; it's particularly audible when scrolling.
Yes, it kinda comes with an SSH client, but it's closed source and has to be installed separately. They must have looked hard to find a closed-source SSH client, and that kinda defeats the point of having SSH.
Other than that it seems to be stock Android, which constantly tries to nudge you into getting a Google account.
Well, Android has the problem of carrying lots of crud one does not really want.
On the other hand, Android is severely lacking on other fronts. For example, if I want to take a photograph and send it via e-mail, it's very hard to scale it down to some reasonable size. On a normal Debian system I could just use ImageMagick or GIMP to scale it down.
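On Debian that's a one-liner like `convert photo.jpg -resize '1600x1600>' small.jpg` (the `>` makes ImageMagick shrink only images that are larger). The size logic behind such a fit-to-box resize is trivial; a sketch (the 1600-pixel box is just an example value):

```python
def fit_within(width: int, height: int, box: int = 1600) -> tuple[int, int]:
    """Scale (width, height) down to fit inside a box x box square,
    preserving aspect ratio; never scale up."""
    scale = min(1.0, box / max(width, height))
    return round(width * scale), round(height * scale)

print(fit_within(4000, 3000))   # (1600, 1200)
print(fit_within(800, 600))     # (800, 600), already small enough
```

That's all a "shrink for e-mail" feature has to compute, which makes its absence from stock Android all the more puzzling.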
The same goes for simple things like access to file servers. Yes, you can get an SMB client for Android, but for that you need Google Play Store access or root. On a normal Linux system that's just installing a package and running a mount command.
Well, but adding a browser to your project and using web technology for GUI applications is about the worst thing you can do. The web was never made for applications, so people have to do strange things to get those working.
The only reason one would ever do a thing like that is that they just don't know anything else. Just like companies who rely on VBA for their processes.
... you'd surely not want them to call the emergency services. After all, the vast majority of calls would be "butt-calls".
Besides, the majority of people still believe that their devices don't send everything off to some manufacturer's cloud for processing.
Usually those are the ones which seem trendy or are discussed during training. That's why you had lots of ultra crappy Windows desktop software in the 1990s, and later lots of crappy Java software.
Today early programmers typically mess around making Websites or mobile apps.
Particularly since some people seem to think that "office software", "smartphones" or badly implemented chats like "Slack" are part of "digitalization". Those are productivity destroyers, causing more distractions and more time wasted on things people aren't good at.
Computers can greatly increase your productivity if you use/program them directly, but believing that you can just put a computer in front of someone and expect them to be more productive is stupid.
Well, your 4.4 billion devices will usually be behind some sort of NAT. But if there's a bug in the SIM-card implementation, you can still trigger it from the network side.
If I were an operator, I'd be very reluctant to allow this on my network. If you hand out 1000 SIMs, you have a guarantee that you'll have at most 1000 users. If you hand out a key, you never know how many devices will use it.
I mean, just look at all those snakeoil companies peddling products anywhere from useless to harmful.
I'm sure that if Microsoft offered, for example, a version of Windows cut down to the functionality of Windows 2000, but with all the bugs removed, people would buy it, given the choice.
Well, actually, to exploit Spectre/Meltdown you'd first have to find a way to execute code on the management processor of the switch. And in that case you usually have full control over that processor anyway, as you're typically either root or there are no security boundaries between "userland" and "kernel".
Even though I cannot imagine a situation where executing code on the switch's control processor could be made even worse by Spectre/Meltdown or any similar bug that's still to be discovered, there are companies who have surprised us over and over again with stupid ways of increasing the attack surface of their products.
We expect virtually all the things we buy to be sufficiently bug-free. If there's a bug in your washing machine causing it to leak after a few months, we expect the manufacturer to fix that problem at their cost, perhaps even giving you a new one for free.
Why do we accept so much from software companies? We allow them to push out fix after fix, with some even breaking important functionality. We even pay for the fixes ourselves, by downloading and installing them.
Microsoft took risks by not checking that piece of software adequately, and by making it so complex that they cannot fix it easily; yet now that they have failed, they don't want to carry the blame. It's not as if they took proper precautions, for example by keeping it as simple as possible or writing it in a memory-safe language.
I mean, yes, 8 gigabytes of RAM should be standard by now; even my trusty old X200 has that.
What's more useful, however, are things like the ThinkLight. It's something so insanely simple and useful, I have no idea why it's not standard on all laptops. Any patents should have expired by now.
Instead, manufacturers focus on making the devices thinner and thinner. Something that may be useful if you want to steal them (more fit into your bag), but is really irrelevant when using them.
The problems probably arise in SIP and its hugely complex implementation in VoLTE, which allows handovers to UMTS and even GSM during a call, as well as authentication via the 56-bit DES key in your SIM instead of a password. (OK, I'm overdramatizing; SIMs are slowly switching over to triple DES.)
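For scale, a 56-bit keyspace is tiny by modern standards; a rough brute-force estimate (the search rate is an assumed figure for dedicated FPGA/ASIC cracking hardware, not a measurement):

```python
# Rough brute-force estimate for single DES (56-bit key).
# The search rate is an assumption for dedicated hardware.
keyspace = 2 ** 56
keys_per_second = 1e12

worst_case_hours = keyspace / keys_per_second / 3600
print(f"worst case: about {worst_case_hours:.0f} hours")   # ~20 hours
```

At that assumed rate the whole keyspace falls in under a day, which is why single DES is considered broken for authentication.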
"How would you fit the 40 plus icons on the ribbon on to a 2 x 3 inch screen?"
I'd say that 90% of the programs used in companies today predate that ribbon nonsense by one or two decades. Remember, the golden age of Windows GUI software was the 1990s and early 2000s; after that, people built web applications whenever possible.
After all, Win32 is kinda the core value of the Windows platform.
If I worked at Microsoft, I'd start a program to develop an "RDP-like" solution which cuts the GUI up around its elements and re-positions them to work on mobile devices (automatically, or guided by some additional files).
This would bring most Windows software (which is legacy anyhow) to mobile devices without the hassle of having unusable interfaces.