Do combine harvesters pay the congestion charge?
If you have an actual "local farmer's market" in Marylebone, you must have a farm nearby.
Where's the farm that is "local" to Marylebone?
276 publicly visible posts • joined 4 Feb 2015
The animal in question, now identified as a feline called 'Bernice', had merely got fed up with being locked in a box and had taken the first opportunity to hoof it through an open window.
All the experimenter had to do was set a saucer of milk by the window and call the cat's name in an enticing voice.
Most people are experts in one (occasionally more than one) domain, and complete amateurs outside it.
It's very likely that a professional plumber or electrician could come to your home, take one look at the shower you fitted or the extra socket you installed, and realise that you have no idea what you're doing.
You probably don't even have the best tools for the job. This is because you are not trained in that area and no-one has told you differently.
OK, so if bots are half of all internet traffic
That's not what the article, or the original Device Atlas post, says.
They say that half of all web requests come from bots. That's not the same as saying that half the traffic sent in response to those requests is due to bots, and it's certainly not the same as saying that half of all internet traffic is due to bots.
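A toy calculation shows why the two measures can differ wildly (the byte counts here are invented purely for illustration - bots tend to fetch small pages, while humans pull down images and video too):

```python
# 50% of requests from bots, but response sizes differ (invented figures)
bot_requests, human_requests = 500, 500
bot_bytes_each, human_bytes_each = 2_000, 200_000

bot_bytes = bot_requests * bot_bytes_each
human_bytes = human_requests * human_bytes_each
bot_share = bot_bytes / (bot_bytes + human_bytes)
print(f"Bots: 50% of requests, {bot_share:.0%} of bytes")  # about 1% of bytes
```

Same request count, two orders of magnitude apart in traffic.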
See Fred Brooks and The Mythical Man Month, and the discussion of the tendency towards an irreducible number of errors.
"In a suitably complex system there is a certain irreducible number of errors. Any attempt to fix observed errors tends to result in the introduction of other errors."
True, but in a lot of industries, especially finance, the regulatory environment changes, and that makes the software out of date. Effectively, the software gains bugs by not changing, and so failing to keep up with the world in which it operates.
Everyone knows the official figures are unobtainable
That's true - no one ever gets the quoted figure. But the point of the official figure, oddly enough, is not to say what the actual consumption is. The intention is that it's used purely as a relative comparator, not an absolute: you know that a car with an official consumption figure of 70 mpg will be more efficient than one with a 60 mpg figure, not that either will actually deliver the quoted figure.
The problem is that it has a unit associated with it, so it becomes measurable in the real world. If it was simply a grading, say from A to Z, with A being really good and Z being really bad, people wouldn't complain. That grading comes with its own issues, though, such as how do you grade a new car in a few years' time that beats the current 'A' grade?
I don't work in an NPM & JavaScript world, so this may be way off base, but if the system a developer is working on won't even build without this external code being available at what is, essentially, compile time, does that mean that if someone changes the hosted JavaScript, your compiled code now uses that changed JavaScript?
If so, how on earth do you test a system today, and know that it still works tomorrow when you rebuild it, knowing that you haven't changed any of your code?
I can understand taking a snapshot of third-party code, and using that instead of rolling your own - that makes perfect sense. Refreshing it periodically would also be a good idea. But why is there a need to always pull down the latest? How does that enable you to build stable and tested systems?
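For what it's worth, the snapshot approach is exactly what pinning exact versions in a package.json gets you, rather than loose semver ranges that pull in whatever is newest. A minimal illustrative fragment (the package names and versions are just examples):

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "lodash": "4.17.21"
  }
}
```

With a range like `"^1.3.0"` instead, each fresh build can resolve to a different published version; an exact pin (plus a committed lockfile) means today's tested build and tomorrow's rebuild use the same bytes.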
there's nothing in the open source community that's proven to work reliably at our scale.
Yes, it's simply outrageous that no-one has decided to use their own free time, and coding just for the love of it, to develop something that Dropbox can use without having to pay for it.
MSVC has no business removing this memset
It wouldn't, since the memory was subsequently referenced by another function call. What it might do is optimize away a call to memset that was used at the end of a function to clear out memory holding information such as a clear-text password, so that it wasn't left in memory.
See https://msdn.microsoft.com/en-us/library/windows/desktop/aa366877.aspx for an example.
As I understand it, these things communicate back to the Bezos mothership primarily for the voice recognition engine. Sure, a request to play an internet radio station or tracks from Spotify requires an internet connection, but for tasks like setting a timer, controlling the lighting and so on, nothing beyond that remote voice processing needs the connection at all.
Once it becomes commercially viable to package the required processing power into the device itself, I can see them really taking off. I'd be delighted to have one central device that responded to voice commands and could do things like set an alarm, turn on the lights and draw the curtains*, but only if I could be sure that nothing, and I mean nothing, was leaving the four walls of the house.
*additional hardware sold separately
A few years ago now, I put together a totally fanless system - a BUC-666 case (a close cousin of the CS-80 mentioned), a CR-95 cooler, SSDs, an Nvidia Quadro NVS 450 and a fanless PSU.
It has a four-core i5 (2.9GHz 3570T) and the CPU never goes over 50C.
The peace and quiet was a revelation. I find it so much easier to concentrate without the whirring. When I run a backup to a 2.5" external hard disk, the noise of the disk is the loudest thing in the room.
The downsides are that the graphics card, which was picked for its quad-monitor support, is no good for playing any game more demanding than Solitaire, and the big heatsink sits over the memory slots on the motherboard to such an extent that two of the four DIMMs can't be removed with the heatsink in place.
For me, those are very minor issues, and a price well worth paying for the resulting silence.
The two other possibilities:
1) They actually made a large profit, and they got the creative accountants in to avoid having to actually pay tax.
2) They actually made a large(er) loss, and they got the creative accountants in to avoid having to answer awkward questions from the investors.
Don't forget, you can always tell a good accountant with a simple Q & A:
Q: "What's two plus two?"
A: "What would you like it to be?"
DevOps is like the fire safety procedures in any office. You won't need a fire extinguisher or an evacuation plan every day but they are there because when there is a fire having them is incredibly helpful
An interesting analogy, since the key point of an evacuation plan is to get everyone out of the way, and let the professionals, who are trained to deal with the emergency situation, come in and work unimpeded.
Every office I know has a policy of "if there's a fire, hit the fire alarm button and get out. Don't try to tackle the fire yourself". Partly that's the lawyers and health and safety getting involved, but partly it's because, unless you know what you're doing, tackling a fire as an amateur can make things worse (there are certain types of extinguisher for certain types of fire - get it wrong and you make matters worse).
Shhh...
How can they say it's resilient now?
Because resilient doesn't mean failure-proof.
Resilient:
a : capable of withstanding shock without permanent deformation or rupture
b : tending to recover from or adjust easily to misfortune or change
A resilient service comes back again, just as Heart apparently has.
In fact you really want people to learn, have an alert every time in email programs that can never be disabled asking the user if they really know the sender, trust the sender, if this is a typical message?
All this will do is teach people that whenever they read an e-mail, there's an extra pop-up window they have to get rid of. They won't learn anything about security - they'll just learn that computers are now a little bit more annoying.
All that needed is - common templating system and automated template builds from mib files
Until you have something that isn't monitored via SNMP. To take just one example - connect to a remote web server, validate its SSL certificate, and warn you if it's due to expire soon. I don't know of any way to do that via SNMP.
At the operating system level, there's lots of really useful information on Windows that simply isn't exposed via SNMP. This is less of an issue on Linux, which broadly has better SNMP support - but try monitoring the contents of the logs in /var via SNMP.
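For the certificate-expiry example above, a monitoring tool ends up doing the check outside SNMP entirely. A minimal Python sketch of the idea (the host name and 30-day threshold are illustrative):

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    # Certificate expiry dates come back like 'Jun  1 12:00:00 2025 GMT'
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect, validate the certificate chain, and return days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return (parse_not_after(not_after) - datetime.now(timezone.utc)).days

# e.g. raise an alert if days_until_expiry("example.com") < 30
```

The TLS handshake itself does the validation; an invalid chain raises an `ssl.SSLError` before you ever get a date back.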
You want the Audit failures to stop? Block IP
This misunderstands the problem - an auto-block treats the symptoms, not the cause. If you have a disgruntled employee, or someone who doesn't know how to do their job properly, an IP block really isn't going to help.
Yes I want more space automatically allocated
You can't allocate "more disk space in the cloud" when the disk in question is full of SQL Server log files on an internal production box. Even where it is possible, simply allocating disk space may well just mask an underlying configuration problem rather than actually solving it.
Drop the server out the load balancer and automatically rebuild another one or add more servers
First, even if this was in some sort of load-balanced situation (e.g. a web server farm), simply dropping the box and rebuilding doesn't address the underlying problem, which is probably a software bug somewhere that needs fixing. Second, there are a lot of systems out there that can't simply have more servers allocated - not everything is built that way.
The role of monitoring is to detect problems, proactively if possible and reactively when not. The role of the system administrator is to make sure that, whenever possible, the problem doesn't re-occur, and that's very difficult to automate.
I haven't come across a good one yet!
Clearly, you haven't looked at the one I help develop! :)
Don't just monitor and alert a human. Fix it
We get this quite a lot, but in reality, a lot of problems that a system administrator needs to know about can't be fixed automatically, or if they can, any automated solution is probably the wrong one. For example:
1. An unusually high number of audit failures are logged against a production SQL Server, and these are coming from inside the network. How would you fix that automatically? It may be a disgruntled employee trying to "hack" the system, or it could be a genuine mistake made in good faith by someone who simply needs some training. An automated system can't know.
2. A server is running low on disk space. What's the automatic response? Delete the oldest files? Delete the biggest files? Somehow automatically reconfigure the SAN to allocate more space? None of those is the right answer - the only practical way to do it is to get a human expert to look at the situation and decide.
3. A process is burning 100% CPU across all cores and slowing everything down. The possible solutions are to force-terminate the process or lower its priority to allow other processes to run. Neither of those two is the "right" answer - they don't solve the problem, just mask it.
I think that one of the problems, oddly enough, is the move away from the waterfall model, with its stages of system test -> integration test -> user acceptance test.
There seems to be an idea developing that TDD and automated unit testing result in fully-working software.
"What do you mean, it has bugs? I checked it in, and our CI server said it passed all the tests. That means it works"
There is a bit of "Quis custodiet ipsos custodes?" here - how do you know the tests are correct and provide full coverage? To bring the Latin up to date, "Who tests the tests?"
I've seen plenty of code that has lots of unit tests, and the code passes all of them, but they either end up testing that the compiler basically works (e.g. making sure that a getter/setter pair gets and sets as expected), or the actual functionality is mocked out just to get the test to pass.
This isn't to say that unit tests are a bad idea (they aren't) or that the waterfall model is the best way to write software (it isn't), but in the rush to improve the way software is developed, some of the key tasks have been left behind.
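To illustrate the getter/setter point, here's the kind of test I mean (hypothetical code, not from any real project) - it passes, shows up as coverage, and tells you nothing beyond the fact that assignment works:

```python
class Config:
    """Trivial class of the kind that attracts 'coverage' tests."""
    def __init__(self) -> None:
        self._timeout = 0

    def set_timeout(self, seconds: int) -> None:
        self._timeout = seconds

    def get_timeout(self) -> int:
        return self._timeout

def test_timeout_roundtrip():
    cfg = Config()
    cfg.set_timeout(30)
    # Passes every time, but only proves the language works,
    # not that any actual business logic is correct.
    assert cfg.get_timeout() == 30

test_timeout_roundtrip()
```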
From the linked FAQ:
"It relies on user interaction: double-clicking the .JAR attachment in the email"
From many posts on El Reg:
"I set up my husband / wife / partner / S.O. with Ubuntu / Mint / Cinnamon"
I think you can only be smug if you believe that none of that group of people would ever double-click an enticing attachment in an e-mail.
Judge for yourself (possibly NSFW depending on policies)
I'm sure they have a UPS. Even if you have a UPS that doesn't guarantee you won't lose power to one or more racks.
Maybe the UPS was being serviced, and someone forgot to bypass it first. Perhaps the bypass switch itself was faulty. Possibly there was a fault with the power distribution infrastructure between the UPS and the racks. The list goes on...
"skyscraper exteriors above a certain elevation are hermetically sealed to contain the pressure"
That makes no sense. You would need to hermetically seal the whole building, not just the upper floors, and that is probably impossible - and even if it could be done, it would make getting in and out rather tricky. When was the last time you went into a tall building and had to go through an airlock?
In any case, even if it were true, the pressure difference from ground level at 1000 ft is about 4%, nowhere near enough to "create quite a blast".
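That figure is easy to sanity-check with the isothermal barometric formula (standard constants, 15 °C assumed):

```python
import math

# Isothermal barometric formula: p/p0 = exp(-M*g*h / (R*T))
M = 0.0289644   # molar mass of air, kg/mol
g = 9.80665     # standard gravity, m/s^2
R = 8.3144598   # gas constant, J/(mol*K)
T = 288.15      # 15 degrees C in kelvin
h = 304.8       # 1000 ft in metres

ratio = math.exp(-M * g * h / (R * T))
print(f"Pressure drop at 1000 ft: {(1 - ratio):.1%}")  # a few per cent
```

That comes out at roughly 3.5%, in line with the "about 4%" above - and nothing like the pressure difference needed for a blast.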
I'm sure those who build these things have very good reasons for it, but I've never understood why skyscraper windows have to be cleaned from the outside. Surely some sort of mechanism could be included in the frame to allow the whole panel to rotate 180 degrees, so that the outside face is reachable by someone on the inside?
There must be more to it than just the power cord getting hot due to current flow. The amount of power drawn by a Surface will not cause a noticeable temperature rise in the cable. Simple test - next time you're making a cuppa, feel the kettle lead. That's taking 13A - way more than any Surface tablet, and it will be cool to the touch. It won't get even slightly warm, no matter how much you coil it.
What's more likely is that the insulators between the cores are failing, and the live is bridging over to neutral or earth. That would cause heat to be generated, and would be more likely to occur with cables that have been coiled and uncoiled lots of times.
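A back-of-the-envelope I²R comparison makes the point. The ~0.02 Ω lead resistance and the 36 W Surface supply figure are assumptions for illustration, not measurements:

```python
def cable_dissipation_w(current_a: float, resistance_ohm: float = 0.02) -> float:
    """Heat generated along the lead: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

kettle = cable_dissipation_w(13.0)          # roughly 3.4 W over the whole lead
surface = cable_dissipation_w(36.0 / 230)   # ~36 W supply at 230 V mains
print(f"kettle lead: {kettle:.2f} W, tablet lead: {surface:.5f} W")
```

If a 13 A kettle lead stays cool dissipating a few watts, a current thousands of times smaller in heating terms can't be warming an intact cable - the fault has to be in the insulation.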
I'm not in favour of this level of surveillance at all, but one of the problems is that no government can point to the successes it has with it.
Let's say PRESTON was instrumental in preventing a Paris-style attack in London, and the terrorists were caught before they could harm anyone. This is simply not going to appear on the 10 O'clock news. If you have these capabilities, the last thing you do is tell anyone about them, since by doing so, you necessarily expose some of your SigInt capabilities, and that just makes your job harder next time.
It's the restore that's the problem. Having 1TB of backups remotely is great, provided you can get the data back. We have 6TB of co-located off-site storage, which we update once a month (about 1TB goes up each month). This is throttled to avoid taking all our outbound bandwidth (it's on a 1 Gbps link at the hoster).
Should disaster strike, we won't even try to restore data over the internet. Someone will drive down to the hoster, put the physical device in the back of a car, and drive back to the office.
The bandwidth of a bunch of 4TB disks doing 70mph on the M1 is quite high...
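For the curious, a rough number - the disk count, distance and speed are all assumed for illustration:

```python
# Ten 4 TB disks, 200 miles at a steady 70 mph
disks, tb_each = 10, 4
hours = 200 / 70                      # just under 3 hours door to door
bits = disks * tb_each * 1e12 * 8     # total payload in bits
gbps = bits / (hours * 3600) / 1e9
print(f"Effective bandwidth: {gbps:.0f} Gbit/s")  # about 31 Gbit/s
```

Some 30x the hoster's 1 Gbps link - never underestimate the bandwidth of a station wagon full of tapes, as the old saying goes. The latency, of course, is terrible.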