Re: I live in hope ...
I've read this comment several times and can't quite make out if you missed a word or you're being sarcastic!
Clearly the author didn't do enough testing.
I spent a fair chunk of last Thursday fixing a collection of unit tests that had a similar problem. I removed a redundant class and suddenly found myself facing over a dozen test failures relating to code that I hadn't directly touched.
I was in a bad mood by the time I was done.
Dyson spheres are highly unlikely to ever be constructed anywhere. They would have a totally devastating impact on the climate of any planet orbiting the star, so are not a practical solution even if the means existed to make them.
Once you have the sphere you don't need the planet ;)
Assuming you can solve the gravity problem, you have so much surface area to inhabit inside a Dyson sphere that the loss of one planet, or even a hundred or a thousand, is no big deal.
There are two proven ways to reduce recidivism:
1) Never release prisoners.
2) Use the incarceration to rehabilitate prisoners and support them on release.
Too many people think that 'criminal' is a sub-species of humanity and that they are beyond hope. These are the 'lock 'em up and throw away the key' brigade. Despite centuries of evidence they persist in thinking that locking someone up will somehow stop them doing it again on release.
Anyone who understands human behaviour knows that locking someone up makes them more bitter and resentful. What's needed is a serious attempt to treat 'criminality' as an illness. Most criminals can be changed if enough effort is put in.
Based on the look of that UI (especially the icon on the button) it must date back to the last century. Probably written using one of the Borland tools, Borland C++ most likely. Dunno if that's bad because no-one has bothered to invest in keeping it up to date, or impressive because apparently it's still fit for purpose (sort of - depends how often this kind of thing has happened over the last 20+ years).
"I am paying for 80Mbps service and am currently getting 16Mb/s with loads of DSL carrier drops."
No you aren't. No CP currently sells, or has ever sold, such a package. The complainant is paying for an up-to-80Mb/s connection. The technology provides a data connection that is rate-adapted to their telephone line, up to a theoretical maximum of 80Mb/s on the best lines.
16Mb/s in and of itself fits that description and merely indicates a relatively high level of electrical resistance (a long line, or perhaps aluminium involved). However, the frequent drops should not be happening and indicate a fault that Zen ought to be able to report to Openreach. Fixing that issue would eventually increase the complainant's speed. Whether it would ever get as high as 80Mb/s depends on their particular line's electrical resistance and, to a lesser extent, on the amount of electrical interference it is subject to.
In the meantime the complainant could ask Zen to drop them down to a 40/10 service so as to save some money.
You can apparently get decent shielding with water. Make a spacecraft’s skin hollow and store the water for the crew’s needs there.
I seem to recall that's what Robert Heinlein proposed in one of his books. Podkayne of Mars, I think.
We had a drinks lady back when I was working for S&S (producers of Dr Solomon's Antivirus Toolkit) back in the 90s. She was a lovely lady but she insisted on filling the kettle from the drinking water using the cold output tap. So she'd empty the cold reservoir to fill the kettle then fill a couple of glasses for those who wanted them with barely cool water.
By the time the kettle had fought the laws of thermodynamics the water was at room temperature. I got in the habit of getting myself some water when I saw her gathering the cups. That way I could beat her to the cold water :)
Many years ago at polytechnic I was at a loose end. I'd already finished my assignments but as I liked programming I thought I'd wander into the lab and have a bit of fun. I entered my credentials and behind me there was a beep. I looked at the Bleasdale (uh huh, that long ago) and the screen was showing the text 'Panic. System stopped'.
So while the other students moaned I quietly slipped away.
It's bad enough trying to find a tap that supports my house 'water network' already. It took me a couple of weeks to find a pair of bathroom taps that worked down to 0.1 bar. 0.2 bar taps do work upstairs but not particularly well. Tolerable for a small ensuite sink but not acceptable for the main bathroom.
I'm still putting up with the 'low pressure compatible' kitchen sink taps. They claim to be 0.5 bar which in practice means that the hot tap does little more than dribble out. It's enough to fill a pan or the sink but you need the patience of a saint. I did find a 0.2 bar version but it was twice the price at nearly £300.
I hate plumbing.
P.S.: If anyone knows of a three way (hot, cold, drinking) kitchen tap with the hot/cold on a hose that works on less than 0.2 bar please tell me. Surely I'm not the only house left in the UK with a gravity-fed hot water system?
What's wrong is
1) Programmers not using new/delete when it's available to them (you ought to write your own wrappers for C).
2) C++ programmers defaulting to the heap for storage instead of the stack, statics or fields. The heap should be your storage of last resort.
3) C++ programmers not using RAII and smart pointers for those situations where the heap is the only reasonable choice.
4) Not enough use being made of copy ctors (especially private ones).
5) Not enough use being made of references, especially const references.
6) Not enough use being made of const full stop.
7) Programmers using a language that requires them to choose whether or not to follow the above rules (and others) in order to write safe code.
I loved programming in C++ but it requires too much knowledge and too much care and attention. I prefer C# these days. That too has its issues but at least you have to try (or be deliberately stupid) to write dangerous code.
Conversely, what percentage of users are running a Linux Desktop
Oh very true, and really this goes to the heart of it. "The year of the Linux desktop" never did happen. Windows and MacOS are the two most popular operating systems. Linux is used by sysadmins and a few geeks.
As long as Linux users are happy to accept they don't represent the majority of computer users then all is fine.
Many (many, many) years ago a friend went to get cash out of an ATM. This would be in the 1980s when I was young and stupid (I'm not young any more). He put his card in then typed his PIN. There was a clap of thunder and the ATM rebooted. His card remained inside the machine.
I think he said he waited ten minutes before giving up and going home.
None of the solutions I've employed have consumed much more than 10Wh. It's possible that a cloud solution might have consumed less but it would have cost more. The current Win7 implementation is running on hardware rated at 20W maximum. The fact that it's barely lukewarm to the touch leads me to think it's probably averaging a lot less than that.
Most likely it's idle, and the idle rating is 7W.
you are supposed to get & apply a fix to a computer that MS has just turned into a brick?
If you read the KB articles you'll see that repeated reboots should eventually bring up the recovery console, and from there you can manually run chkdsk /f and all will be fine again. It doesn't look to me like you need to download anything. Just reboot and reboot until either you get sick of it or the recovery console appears. I don't mind that idea, but can't help thinking that if it could at least display a message, e.g. 'Two more reboots and I'll appear' or 'Hold down F12 for recovery console', it would be more friendly.
I haven't tried this myself (and given how my PC is behaving I think it's a hardware fault) but recovering from this issue appears to require nothing more than some patience and rebooting the affected computer a few times.
Still a bit shitty but hardly the end of the world.
I said it will be a pain without ipmi. If you do not have console access remotely and have to schlep a monitor to the server each time something is not working properly
The only time I have to attach a monitor, keyboard and mouse is when the failure is catastrophic. This has only happened three times in the 14 years I've been running this server. The rest of the time I just use Remote Desktop. I'd estimate the up-time of this server as being better than 99% (tentatively estimating a total of one week of it being dead during 14 years of 24/7/52 operation).
It doesn't do a huge amount but it is a mail, video and music server. It started off as Vista on a low-spec Toshiba laptop, then a Fit PC2. Then it was upgraded to Win7 and ran on a Fit-PC3. Then I bought an AcePC and it ran as Win10 for nearly a year. Now it's back on the Fit-PC3 as Win7.
The times when it was dead have been annoying and frustrating simply because losing a server always is. I did consider some flavour of Linux when thinking of Win7 but it meant learning too much new stuff and I couldn't find a mail server that would offer aliased addressing as easily as VPOP3 does. So in the end I just upgraded to Win7 which if I remember correctly took about an hour.
Windows is a perfectly competent server platform (even for things waaaaay beyond what I need). I am familiar with Windows and it's had an awesome uptime for me. Linux would have had to be very, very much better to attract me and I could see no evidence of that. Just me having to learn a load of new stuff and no expectation that it would be any more stable than Windows.
Interesting. My new server died shortly after getting this update. It runs headless so I don't know if it tried to run Chkdsk or not but I'm now back to Win7 and old hardware. I might investigate the new hardware further over the holiday but it's probably a coincidence. The failed machine seems to be incapable of booting off anything. After the manufacturer logo it just switches to text mode and goes no further.
I assume that if this bug is the cause users would at least see some evidence of Windows starting up. Still, it's worth a look.
A 70Mb/s connection can handle multiple 4K streams with ease. You only need 20Mb/s, and if not streaming live even less would suffice. Given enough time to run through a decoder, 15Mb/s will do. A good ADSL2 connection would be enough for a single 4K channel.
So a 70Mb/s connection would be 'bursty' while carrying a 4K video stream. And of course we shouldn't overlook the fact that most people do not have 4K TVs, and of those that do, most are sitting too far away to detect the difference between 4K and 'standard' HD.
They do. They are. Ofcom price-controls FTTC and ADSL but is allowing BT more or less free rein on how much it charges for FTTP. Last I heard, FTTC was allowed to be barely profitable while ADSL has to be sold at a loss.
But as for actual penalties: How is that in any way helpful? You're suggesting penalising a company that is struggling to finance an extremely expensive roll-out of technology that 90% (at least) of its customers don't need yet. There is no evidence of significant demand for FTTP speeds. Most people don't want it (or at least see no reason to pay for the best they can already get).
Encouraging them to keep upgrading and investing makes sense. But there is no justification for penalising Openreach. UK internet usage has always been amongst the highest in the world. The idea that most of the British public is being held back by BT's network is rubbish. There are a few people without access to a decent connection (~5%) and an even smaller number of people whose connection is inadequate for their particular needs.
Well over 90% of the country have a connection they are happy with. Encouragement is what's needed not penalties.
So... the figure they give is correct then? The phrase 'up to' has two slightly different but equally valid meanings in English. Here's an example of the kind of usage intended by ISPs:
Motor cars can travel at speeds up to 320mph.
- An entirely correct statement even if you happen to own a Reliant Robin.
You are connected using a technology that delivers speeds up to 80Mb/s.
- An entirely correct statement even if your particular telephone line's characteristics limit your connection to 5Mb/s.
Some would argue that you should be charged according to your speed. The problem with that argument is that it's not representative of the costs to the ISP. The difference in running costs between a 5Mb/s and a 70Mb/s connection is minuscule. It might even be more expensive to provide the slower service because:
* It implies a poor quality line which is statistically more likely to experience faults.
* Slightly more electrical power might be required to push the signal that distance (depends how much power is saved by not having to transmit/receive the higher frequencies).
As far as actual usage goes, a 5Mb/s connection running flat out 24/7/52 is probably more of a problem for the ISP than a 70Mb/s connection being used intermittently. Of course a 70Mb/s connection running flat out 24/7/52 is worst of all, but it's a lot harder for most people to do that. What ISPs want is bursty traffic, and a 70Mb/s connection is almost always going to be lovely, easy to deal with bursty traffic :)
In my experience most estimates (and they are only estimates) are quite accurate. People I've known who got significantly less than what they were advised were experiencing various issues mostly internal to their own property. I've helped many people fix their broadband and get their connection speed back up to more or less what their ISP predicted.
The only things I couldn't fix were people signing up to cheap ISPs who then got throughput drops during peak hours. But the solution there has always been to move to a better ISP. BT's last mile almost always has enough capacity, it's the ISP that often doesn't. As for VM - they just like to run a hot network and you have to live with it. At least there if you've signed up for 200Mb/s a 50% throughput drop might not be noticeable apart from the jitter.
One of the things I noticed living on mainland Europe, compared to friends and family in the UK is that my broad band speed is pretty consistent 24 hours a day. Whereas it seems the UK is more prone to over sell which means that there are times in the day when my friends in the UK seem to really struggle for bandwidth
That just means your friends are tightwads and are signing up with the cheaper providers :)
I use IDNet. It costs a bit more but this is what I get in the evening. My sync speed isn't quite the full 80Mb/s but I get the same throughput 24/7. Oh and IPv6 since they rolled out dual-stack on their network many, many years ago.
Region is always a sore point with me. I live in Brackley so officially that's 'East Midlands' but we're so far south that most of us have more in common with the South East (Banbury and Bicester) and indeed our bit of Northamptonshire pokes down into the South East (part of the M40 is in Northamptonshire).
South Midlands is a thing and would make more sense but that's not an official UK region.
Anyway it's fairly academic in this case. It was obvious we were going to be in tier 2 and anyone who needed to check probably hasn't been paying attention to local news.
I've sworn through most of my life (verbally and metaphorically) that one day I will catch the compiler out. That I will be the one in the right.
The great news is that the moment finally came a year or so ago (and continues sporadically to this day) with Visual Studio. It's quite common now when working on C# to be able to make the compiler try again because it got it wrong, and it actually does get it right the next time. Mostly it's because it sometimes can't find the files it's supposed to be generating (.gs) but on several occasions it has just been plain wrong.
The funniest was the time it told me off because object didn't have a default constructor. I almost danced with joy while pointing at the monitor and giggling maniacally. Well, okay, thanks to working from home that is, actually, what I did. After 30 years of putting up with sanctimonious shit from compilers I feel entitled :)