I have a feeling they were bought out by Yodel some years back... I have memories of them being "less than good" as well...
edit - a quick google reveals they went into administration and were bought out by "DX Deliveries" in 2016...
Why did the senior DBA let him do it if he was really looking over his shoulder?
More to the point, why did the senior DBA teach the junior DBA to use TRUNCATE, especially on live data.
If you know you're on a database that is definitely not a production one, preferably on a server that definitely isn't live, and you've made sure you don't have another session open on any live server that you might accidentally type a command into, then maybe you might use TRUNCATE. Even then, you'd probably DROP or DELETE FROM instead, unless the table in question was huge or the hardware very slow, because if you're doing something on a test system, you'd typically want to replicate exactly what you'd do, and how you'd do it, on a live system.
Typically, if you're doing anything that involves deleting data from a live database, the very first thing you do is make a copy of the data you're about to operate on. The senior DBA should have been teaching the junior that as a matter of course. If not making a copy of the entire table, then at least running your DELETE statement as a SELECT first, to copy that data somewhere else.
Then, of course, you'd check for any cascading delete constraints...
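The copy-before-delete routine above can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3 module and an invented `orders` table (SQLite doesn't have TRUNCATE, which rather proves the point about not relying on it):

```python
import sqlite3

# In-memory database standing in for a hypothetical live system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("stale",), ("stale",), ("active",)])

# Step 1: copy the rows you are about to delete somewhere safe,
# using the same WHERE clause the DELETE will use.
conn.execute("CREATE TABLE orders_backup AS "
             "SELECT * FROM orders WHERE status = 'stale'")

# Step 2: run the DELETE inside a transaction so it can be rolled back.
with conn:
    conn.execute("DELETE FROM orders WHERE status = 'stale'")

remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
backed_up = conn.execute("SELECT COUNT(*) FROM orders_backup").fetchone()[0]
print(remaining, backed_up)  # 1 2
```

Sharing the WHERE clause between the backup SELECT and the DELETE means the copy is exactly the set of rows you're about to destroy, not an approximation of it.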
Also, taking into consideration Moore's Law (yes, I know it's a rule of thumb, and probably can't be projected back to the '40s): with a doubling of processor power every 18 months, computers in 2019 would be 2^52 times faster than they were in 1940.
Using the same encryption techniques but a 256-bit rather than an 88-bit key, a decryption time of, for argument's sake, 12 hours in 1940 would become, on the same hardware, 5 × 10^47 years (that's 6 × 10^37 times the age of the universe).
Applying Moore's Law, and assuming faster hardware, that still comes out at 1.4 × 10^22 times the age of the universe.
And algorithms are better now.
Even if my back-of-an-envelope calculations are out by 30 orders of magnitude, you're still talking about hundreds of years.
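The back-of-envelope figures above are easy to check mechanically. A quick sketch (the 12-hour 1940 crack time is the same arbitrary assumption as in the text):

```python
# Back-of-envelope check of the figures above (illustrative only).
base_time_hours = 12       # assumed 1940 crack time for an 88-bit key
extra_bits = 256 - 88      # each extra bit doubles the search space
speedup = 2 ** 52          # Moore's-law doublings, ~18 months, 1940 -> 2019
age_of_universe = 1.38e10  # years

crack_256_years = base_time_hours * 2 ** extra_bits / 24 / 365.25
crack_256_modern = crack_256_years / speedup

print(f"{crack_256_years:.1e} years")                    # ~5.1e+47
print(f"{crack_256_modern / age_of_universe:.1e} x age") # ~8.2e+21
```

The last figure lands within an order of magnitude of the 1.4 × 10^22 quoted above, which is as close as a rule-of-thumb calculation like this deserves.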
Quite apart from the fact that ciphers are completely different now from those used in the 1940s*, you are aware that, all other things being equal, a 256-bit key is not 2.9 times harder to crack than an 88-bit one (256/88), but in fact 3.7 × 10^50 times harder (2^(256−88)), since every bit added to a key doubles the number of permutations?
*One of the main things that made the Enigma ciphers crackable was the propensity of certain Nazi commanders to always start and end their messages with the same phrases. Knowing part of the plaintext made cracking the encrypted messages (just) possible on a timescale that meant they were still meaningful.
Indeed, algorithmic AI is the real "hard" problem of AI, and it has been for half a century or so.
If you can identify the fundamental aspects of consciousness and reproduce those in algorithmic form, then you might have a fighting chance of creating something with the intelligence of something more advanced than an insect. That might even incorporate an ANN for state processing, to replicate the way actual neural networks do it. The technological capability to do so is still science fiction.
We need to learn about it, learn how to accommodate ourselves to it and how it can be used to our benefit.
The problem is the third one of those - beyond gimmicks, where the result doesn't really matter if the "AI" gets it wrong*, it is next to useless, because it is utterly unreliable. It's fine for processing inputs where all the permutations are known and nothing unexpected can crop up, but do you know what else is good for that (and considerably cheaper and easier to maintain)? A traditional algorithm.
*The best recent example I've seen is "Alexa, remind me to feed the baby". Google it, and you may see the issue with asking "AI" to do anything reliably.
A good error message should have four parts:
- A meaningful title, e.g. "Could not make fondue"
- What happened, e.g. "Out of cheese"
- What the user can do about it, e.g. "Please refill the cheese hopper."
- Where applicable, technical information for troubleshooting, e.g. "The cheese processor reported error code #47 - OUT OF CHEESE ERROR"
Where at all possible, the fourth part should be logged somewhere reachable instead, and the message should contain something like "Please quote log ref #C4EE5E".
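The four parts above fit naturally into a small structured error type. A hypothetical sketch (the class name and fields are invented for illustration):

```python
# A hypothetical error type carrying the four parts described above.
class UserFacingError(Exception):
    def __init__(self, title, what, action, detail=None):
        self.title = title     # a meaningful title
        self.what = what       # what happened
        self.action = action   # what the user can do about it
        self.detail = detail   # technical info for troubleshooting (optional)
        super().__init__(title)

    def render(self):
        """Assemble the user-facing message, omitting detail if absent."""
        lines = [self.title, self.what, self.action]
        if self.detail:
            lines.append(self.detail)
        return "\n".join(lines)

err = UserFacingError(
    title="Could not make fondue",
    what="Out of cheese.",
    action="Please refill the cheese hopper.",
    detail="Cheese processor reported error #47 - OUT OF CHEESE ERROR",
)
print(err.render())
```

Keeping the parts as separate fields, rather than one pre-baked string, also makes it trivial to send the technical detail to a log and show the user only a log reference instead.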
Lock the bus, mask interrupts, anything else you need to do, and you can have arbitrarily complex code as an atomic op.
Well, that may prevent something else from interrupting your arbitrary code. You still need to ensure that it can't fail half-way through, or if it does, it fails in a way that leaves the state of whatever it was working on unchanged. This is harder than you may think, once you factor in things like accounting for hardware failure (disk crashes, memory failures, etc.) or power outages. There's a point where you have to accept that the universe might hate you today, and your software might do something unexpected as a result. There is no such thing as "unreachable code" - if it's compiled in, it can run, under specific enough circumstances...
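One common way to get the "fails leaving state unchanged" property for on-disk state is write-to-temp-then-rename, since `os.replace` is atomic on POSIX filesystems. A minimal sketch (the function name and JSON payload are illustrative, not from the original):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write data so that a crash mid-way leaves the old file intact."""
    # Create the temp file in the same directory as the target, so the
    # final rename stays on one filesystem (cross-device renames aren't
    # atomic).
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # push bytes to disk before the rename
        os.replace(tmp, path)     # atomic swap: old state or new, never half
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file
        raise
```

Even this only narrows the window; it doesn't repeal the point above about the universe hating you, since the disk itself can still lie about what it has durably written.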
Hmmm. Magnesium bin? Well, if you're going to go with that, why not put a couple of kilos of thermite mix in the bottom, since it would be a shame to waste that ignition temperature, and just in case the boss finds the real water extinguisher (thermite burns underwater)...
Bonus points if the bin is on the floor above the underground car-park, and placed directly above the boss's car...
Don't get me started with "MVVM" - not the concept (it does make sense), but the fucking acronym, "Model, View, ViewModel".
Can you spot the problem? It's like the braindead middle-endian American date format that nobody else on the planet would think of using: a view sits on a viewmodel, which in turn sits on a model, so why on earth is it called "MVVM", and not "VVMM", or even "MVMV" if looking at it from the other end? Either way, the view doesn't sit in the middle of the stack...
Abstracting things out into layers is supposed to make comprehension and maintainability of the code base easier for all involved. Inconsistent jargon doesn't really help the first of those...
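For anyone who hasn't met the pattern, the actual layering order the acronym scrambles looks like this. A toy sketch (all class names and the temperature example are invented):

```python
# Minimal sketch of the layering order: the view depends on the
# viewmodel, which depends on the model - V -> VM -> M, top to bottom.

class Model:                      # bottom: raw data / business state
    def __init__(self):
        self.celsius = 21.5

class ViewModel:                  # middle: adapts the model for display
    def __init__(self, model):
        self.model = model

    @property
    def display_temp(self):
        return f"{self.model.celsius:.1f} °C"

class View:                       # top: presentation only
    def __init__(self, vm):
        self.vm = vm

    def render(self):
        return f"Temperature: {self.vm.display_temp}"

print(View(ViewModel(Model())).render())  # Temperature: 21.5 °C
```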
Yeah, that whole "I'll comply with the laws of the country I'm in, but only when I've left the country, at my convenience and at their cost" defence. Why should the Swedish prosecutors have to leave the country and travel to a place of Assange's choosing to do their job?
To use an analogy, imagine buying an item from an online tat bazaar (let's call it iBuy) where the seller states that the item is collection only, and is based in, let's say, Leeds.
After negotiating the sale, you then refuse to pay until the seller comes and personally delivers the item to you, at your new address, in the south of France. You refuse to return to Leeds to collect the item, and complain that the seller is being unreasonable. Now, I don't know what the wholly fictitious iBuy's terms of service would say about this, but I doubt they would smile favourably upon you.
I believe the point here is that Sweden's extradition treaty with the US is more equitable than ours, and requires evidence to be put forward, rather than simply allegations.
IANAL, but my understanding is that it is considerably easier for the US to extradite someone from the UK than it is from Sweden.
Without wanting to state the obvious...
It tends to be more difficult to obtain evidence 7 years after the fact. If the investigation hadn't been hampered at the time by Mr A skipping across the North Sea to Blighty, the investigators would quite conceivably have been able to interview him then and, if that indicated further avenues of enquiry, to follow them and possibly obtain corroborating evidence (or otherwise; no assumption of guilt is made).
...We had a bomb scare in our building some time back.
The receptionists spotted a suspicious package that someone dodgy-looking had crept in and left in the lobby while they were away from their desk.
They duly hit what they thought was the evacuation alarm, which unlike the fire alarm, didn't release any door locks (including those on the fire escapes).
Cue the occupants of the building proceeding to evacuate through reception, the only way out of the building, unknowingly past the suspicious package. (We weren't told until we were outside why we were being evacuated, or what the single-tone alarm meant.)
Fortunately, it turned out to not be a bomb. I think it was some stolen goods some toerag had stashed there thinking it was a good idea.
It also highlighted how thoroughly useless the local plod are at dealing with bomb scares: they didn't turn up in the hour I spent waiting outside the building before giving up and going home, and apparently took a couple of hours more to send someone round, who promptly picked up the suspicious bag to take a look.
So, in summary, fails all round.
And sometimes people are either sensitive to certain smells or just more aware of odours in the environment.
Certain things can also make people more sensitive to some or all odours, either temporarily, or rarely, permanently, including, but not limited to, hangovers, migraines, pregnancy and effects of medication.
I think the salient point here is that if a web site acts only as a neutral conduit for user content, between users, then it's fair to say that they are acting as a "common carrier" and aren't responsible for that content.
On the other hand, if they start "promoting" some content over other content, using opaque algorithms that the user has no control over, and inserting advertising based on that content, then they should be seen as a publisher, and are responsible for the lawfulness of that content.
I think FB have gone quite a long way over the line from one (where they used to be) to the other (where they clearly are now). The relevant lines are where they started "curating" content, and where they started targeting advertising based on a profile built up from your "likes" and "views" and your demographic data. There's no way they should be able to claim anything other than responsibility for those adverts, especially if they are political adverts with no fact-checking, during an election.
...cue a rant about how the "new" HTC10 my wife bought wouldn't do the OTA upgrade to Android 9 due to a "corrupt file system" and also wouldn't re-flash the official stock image from HTC, crashing half-way through the update process. Bought from an eBay seller purporting to be in Leicester and then, when trying to return the phone as faulty, turns out to be in Singapore (and also turns out to have several eBay accounts all with very similar names, created on the same date).
Dodgier than a Naruto-running Jerboa driving a Dodge Cherokee to its dodgeball match...
Worryingly, most of the strains in circulation aren't covered by the BCG jab, which IIRC contains 23 separate needles for the strains it does cover (I stand to be corrected on this, as it's three decades since I had one). This says something about why TB is such a successful pathogen (and still the world's number one killer among infectious diseases). It is also hard to kill because (a) it lives in the lungs, where it is hard for antibiotics to reach; (b) it divides slowly (antibiotics typically kill cells when they divide); (c) it has a thick cell wall, affording the bacterium some degree of protection; (d) people don't finish their courses of antibiotics and stop when they feel better, leading to low-level disease reservoirs; and (e) a lot of infected people are asymptomatic.
If you develop a cough, I'd suggest a visit to your GP...
The thing is, the human vision system is very good at spotting moving things, especially in peripheral vision, and that's without using doppler radar/lidar, which presumably this could use (if it's sensitive enough). Admittedly, that's only good for things moving towards or away from you; the vision system involved needs to be a lot better at detecting transverse movement.
I think the long-and-short of it is: until "AI" can produce something that at least approximates theory of mind, it's going to be no use in applications that require theory-of-mind to work, such as anticipating what other sentient beings in the environment may do, be they people, animals, or, if self-driving-cars do end up with proper AI, other vehicles.
The more pertinent question should be: if it detected an unknown object that may or may not have been going into its path of travel, why didn't it slow down (unless it was already going much faster than the 39 mph it was doing when it hit)?
If a human driver saw something in the road that they couldn't identify, ignored it, didn't slow down and then hit that something after it turned out to be a person at 39 mph, and killed them, they'd quite possibly be facing a charge of causing death by careless driving.
Hazard perception is a major component of the driving theory test, and one of the most important skills when driving. Humans aren't actually that great at it, hence the need for driving lessons beyond those required to learn how to control a vehicle, and it's the sort of thing a computer should be better equipped to deal with.
As I said, at the very least, you'd expect the response to an unidentified hazard to be to slow down until it has been better evaluated. If that results in a self-driving car erratically braking, then this suggests that the hazard identification technology isn't yet good enough, and shouldn't be on the road, not that the brakes should be disabled...
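The "slow down until the hazard is better evaluated" policy above is simple enough to sketch. This is a hypothetical illustration; the detection format and the caution factor are invented, not from any real self-driving stack:

```python
# Hypothetical sketch of "slow down on unidentified hazard".
# The detection dicts and the 0.5 caution factor are assumptions.

def target_speed(current_mph, detections):
    """Reduce speed while any detected object remains unclassified."""
    CAUTION_FACTOR = 0.5  # assumed: halve speed until the object is known
    if any(d["class"] == "unknown" for d in detections):
        return current_mph * CAUTION_FACTOR
    return current_mph

# An unclassified object ahead: ease off rather than plough on at 39 mph.
detections = [{"class": "unknown", "bearing_deg": -12}]
print(target_speed(39, detections))  # 19.5
print(target_speed(39, []))          # 39
```

The point isn't the numbers; it's that "unknown" should map to a more cautious state, not be filtered out as noise.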
I'm sorry to tell you, Mr Uber, but on this occasion, you have not passed your driving test.
I suspect that it's probably possible to use spectrograms in quality control to check product consistency.
In general IR spectroscopy does one job well - verifying the purity and identity of a single compound. If you know the spectrum of the chemical you think you are looking at, IR spectroscopy will tell you whether it is that compound, or not, and probably whether it is of high purity or not.
It's used in chemistry labs to provide an extra data point to establish identity of chemicals, along with other things like hi-res mass-spectrometry, which will tell you the elemental (and isotopic) composition of your sample, NMR, which tells you about the functional groups and structure, and various forms of chromatography, which tell you how many different compounds you actually have in your mixture.
If someone were to come up with a hand-held GC-MS-NMR machine then they could do real tricorder-type stuff. They'd also be wiping every bank card in the room, because NMR magnetic fields are of the order of 1-10 T (for reference, tens of thousands of times stronger than the Earth's roughly 50 µT field)...
Shine an invisible near-infrared light on something, read the reflected spectrum, analyse it, and voila! Every chemical has a unique spectrographic fingerprint
I'm assuming that those touting this have never tried to read a Raman or infrared spectrum, then. Sure, every chemical has its own fingerprint, but those spectra aren't composed of neat individual peaks, like mass spectrometry; they're sometimes very broad curves. It would be difficult to identify the spectra of two separate chemicals mixed together, let alone the thousands of individual compounds you're likely to get in a mouthful of food.
Now, you might be able to do something clever by blasting your food with monochromatic infrared at a very specific frequency to pick up compounds with known narrow vibrational frequencies, but I would think that the IR sources for this would be expensive, and because you're looking at a mixture, not pure compounds, it's quite likely that the other compounds in that mouthful of food will alter a compound's chemical environment enough to shift the peaks you're looking for (e.g. by hydrogen bonding), leading to false positives and false negatives.
Looking at this from both a (lapsed) chemist's perspective, and that of a software dev, I'd say that even if you threw in some fancy-pants pattern recognition using AI, this sort of application is still many decades away.
Don't get me wrong, my ability to get 100 Mbps is relatively recent, and is contingent on my being in a property in the middle of a largeish city that is plumbed into (co-ax) cable internet. Prior to that, I had to make do with spending the same amount, or more, on unreliable ADSL that gave anything between 2 Mbps and 12 Mbps, apparently depending on whether it was wet out or not, because my property didn't have cable. If I lived somewhere rural, I'd be thankful to get that. I can still remember the days of dial-up, and nobody in the household being able to make a phone call while you were using it, and in the scale of things, it really wasn't that long ago.
Still, for any serious business to describe 50 Mbps as a "fat pipe" in this day and age is a bit rich. Yes, they may get better SLAs than consumer broadband, less contention, and possibly symmetric upload/download speeds, but I bet you that 99.99% of the time, the service actually provided is indistinguishable from consumer offerings, except for the price.
I do note that the OP is in SA, so perhaps the average offerings there are worse than in the UK. I'd find that surprising, though, given that our infrastructure in this country is, by and large, suffering from decades of under-investment.
I think it's a fairly common exploit to go searching in public repositories for cloud service keys. I've heard of it happening before - this may even be the same instance?
The first mistake here, of course, is putting your business source code in a public repository. I'm pretty sure you can host stuff on, for example, GitHub, and share it with those that need it, and nobody else, for no cost. Public repositories are fine for open-source stuff, but even then I'd still be working on a fork in a private repo until all my changes were ready to commit (sans AWS keys).
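Catching leaked credentials before they're pushed is largely a pattern-matching job. A rough sketch of the sort of check a pre-commit hook might run (the heuristic relies on AWS access key IDs having the well-known shape "AKIA" followed by 16 uppercase alphanumerics; the function name is invented):

```python
import re

# Rough heuristic: AWS access key IDs look like "AKIA" plus 16
# uppercase alphanumeric characters. A pre-commit hook could scan
# staged files for this pattern and refuse the commit on a match.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_suspect_keys(text):
    """Return any strings in text that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

# AWS's own documentation example key, safe to use in tests.
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_suspect_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Heuristics like this produce false negatives (secret access keys are just 40 base64-ish characters, much harder to spot), so they complement, rather than replace, keeping the repo private in the first place.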
Haha, you crazy Americans with your irrational hatred of socialism. You do know socialism and totalitarianism aren't the same thing, right?
Uncle Joe isn't hiding under the bed waiting to brainwash your children into being nice to each other.
I know, I know, you think corporate free-market capitalism is the solution to all of life's woes. Come back and tell me that again when you don't have the highest per-capita prison population in the developed world (and most of the undeveloped world too), and a massive (rising) gap between the wealth of the richest and poorest.
Edit - I'll just add, from a historical perspective, that the thing which most accurately describes what you are thinking of is the polar opposite of the "S" word: the one that starts with "F", and seems to be on the rise again.
Why should the NHS spend a fortune upgrading their systems just because some scrote has a cheap SDR dongle and chooses to broadcast their pager messages?
The same reason that they shouldn't be putting confidential patient files in a dumpster without shredding them. IIRC, trusts can, and have, been fined for doing exactly that.
Natural or otherwise, it depends on the half-life of the isotope in question.
More strictly accurately, it depends on the half-life, the decay type and decay energy.
Radioactive decay typically involves one or more of electron emission (beta radiation), alpha particle emission (alpha radiation), high-energy photon emission (gamma radiation), and neutron emission.
Charged particles (alpha and beta emission) cause damage by basically smashing into other atoms, alpha more so, because an alpha particle consists of a positively charged combination of two protons and two neutrons (some 7,300 times more massive than the electron released in beta decay, and with twice the magnitude of electric charge).
The amount of damage such particles can cause is pretty much directly related to the amount of energy they hold when they are emitted, which is specific to the isotope that is decaying, but for alpha particles is usually around 5 MeV.
How harmful gamma radiation is depends very much on the energy of the photon. Gamma rays typically cause damage by ionising the atoms they pass by.
Neutrons can hit other nuclei and make them into unstable isotopes. This requires a direct hit on the nucleus, and the vast majority of every atom is empty space, so neutron flux has to be quite high for this to be hazardous. A single decay has a very low chance of being harmful, compared to the potential of an alpha particle. Neutrons, however, being uncharged, can travel a lot further, so the safe distance from a neutron source is a lot greater than the safe distance from an alpha source - alpha particles typically don't get more than a few cm, but neutrons will travel many metres. Free neutrons also have a half-life of around ten minutes before decaying into charged particles (a proton and an electron) and an antineutrino. If one does so when it's inside you, that proton and electron are going to cause damage to whatever is around them.
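Half-life arithmetic like the free-neutron figure above follows one formula: the fraction remaining after time t, for half-life T, is 2^(−t/T). A quick sketch using a ~10-minute neutron half-life:

```python
# Remaining fraction after time t for half-life T:  N/N0 = 2 ** (-t / T)
# Illustrated with the free neutron's roughly 10-minute half-life.

def remaining_fraction(t, half_life):
    """Fraction of an unstable population surviving after time t."""
    return 2 ** (-t / half_life)

# After 30 minutes (three half-lives), about (1/2)**3 of free
# neutrons remain; the rest have decayed in flight.
print(round(remaining_fraction(30, 10), 3))  # 0.125
```

The same formula covers everything from neutrons (minutes) to the long-lived isotopes discussed above (millennia); only the value of T changes.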
It might have identified itself to Amazon as a TV. "Samsung Huawei" makes me immediately suspicious that it was not. Presumably the connection is done over the internet.
I'm sure I could open up Fiddler right now and make an API request to one of Amazon's publicly accessible APIs telling it all sorts of things that aren't true, including user agent, IP address et al.
I suspect an undisclosed flaw in one of Amazon's APIs that allowed someone to set up a spoof device and make purchases through that 'device', no TV involved.
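The Fiddler point above generalises: any client can claim to be anything in the headers it sends. A sketch using Python's stdlib (the URL and device string are made up, and nothing is actually sent over the network here):

```python
import urllib.request

# Sketch of how trivially client-supplied metadata can be faked: build
# a request whose User-Agent claims to be a TV. The endpoint URL and
# the device string are invented; the request is never actually sent.
req = urllib.request.Request(
    "https://api.example.com/devices/register",
    headers={"User-Agent": "SmartTV/1.0 (Samsung-Huawei-Hybrid)"},
)

# The server sees only what the client chose to put in the headers.
print(req.get_header("User-agent"))  # SmartTV/1.0 (Samsung-Huawei-Hybrid)
```

Which is why a server should never treat a self-reported device identity as evidence of anything, least of all as authorisation to bill purchases to an account.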