Just the kids?
from the report:
> Figure 55: Childrens belief in the truthfulness in [ social media ] websites ... 28% of 12-15's think "all or most is true"
How many people read this article and thought it was true?
You can imagine the scene inside Talk Talk's IT department:
The IT boss is there, yelling at his/her/its subordinates: "Your (note the shifting of ownership) security was so crap that even children could break into it! Maybe I should sack the lot of you and employ some kids, instead?"
and from the back of the room comes the anonymous, quiet reply: "We've been telling you it was hopeless for years, but you management did nothing about it. Maybe we should replace the management team with some script kiddies who know the importance of security in IT systems?"
Journalist writes article saying journalists won't be replaced by machines.
The triumph of hope over experience, or was that article already written by a computer?
I can see a large number of sub-editors being replaced, though. After all, it can't be that difficult to automate the spelling / grammar / fact-checking aspect, can it?
The two rules for defending your job against all-comers, including automation:
1.) Don't tell them everything you know.
2.) Well, that would be telling. Wouldn't it?
The ability to read small stuff a long way away is a function of the pixel size and focal length (magnification, to the layman) of the lens, not the number of pixels on the sensor. More pixels will increase your field of view, but not your resolution.
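To put numbers on it (all figures here are my own invented examples, not from the article or the chip's spec), the smallest real-world detail one pixel can resolve is pixel pitch divided by focal length, times the distance to the subject:

```python
# Back-of-the-envelope resolution calculation. All figures are
# illustrative choices, not taken from the article or any datasheet.
def ground_sample_distance(pixel_pitch_m, focal_length_m, distance_m):
    """Size of the smallest scene detail covered by one pixel."""
    return pixel_pitch_m / focal_length_m * distance_m

# Example: 1.1 um pixels behind a 50 mm lens, subject 100 m away.
gsd = ground_sample_distance(1.1e-6, 50e-3, 100.0)
print(f"{gsd * 1000:.1f} mm per pixel")  # 2.2 mm per pixel
```

Doubling the pixel count at the same pitch only widens the field of view; it takes a longer lens or smaller pixels to shrink that 2.2 mm figure.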
It's still an impressive chip. Gimme a call when I can get the colour version in my DSLR for < a grand.
> Players are encouraged to bribe, do deals with drugs lords, set up shady contacts with weapons traders, flash money around and make a killing, all without getting caught.
What a really corrupt person would have done is claim this was a training programme, not a video game, and then apply for a grant to develop it.
> the wake will be held in the Restaurant at the end of the Universe
> Attendance by Invitation Only
And invitations may be posted to oneself from the Big Bang Burger Bar, where you can also deposit 1p which, by the laws of compound interest, will be more than enough to pay for the funeral service when the universe ends.
Definitely worth growing an extra arm for.
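A quick sanity check on the Big Bang Burger Bar scheme, with every figure entirely invented (1% per annum, a 1e10-year universe):

```python
import math

# 1p deposited at 1% per annum until the end of the universe (call it
# 1e10 years). The balance is far too large for a float, so work in
# log10. All figures invented purely for illustration.
principal_pounds = 0.01
annual_rate = 0.01
years = 1e10

log10_balance = math.log10(principal_pounds) + years * math.log10(1 + annual_rate)
print(f"final balance: about 10^{log10_balance:.0f} pounds")
```

Something north of 10^40,000,000 pounds: comfortably more than any conceivable funeral bill.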
> don't understand why Time decided to show everyone what the Oculus looks like when you're observing someone using it, instead of an illustration of what the experience of using the Oculus is like
Two obvious reasons: first, more people will see someone else using one of these than will ever use one themselves. Since most people form an opinion of another (a) very quickly and (b) almost entirely based on visual perception, it is important for potential buyers to be aware of how others will perceive them if they are seen wearing one of these. It's similar to buying a little motor-scooter, Segway (remember them?) or the "wrong" sort of car. It still gets you from A to B, but many will not like being the object of ridicule.
Secondly, it's impossible to depict to someone on the web, or reading a magazine, what the user-experience is like, as it will look just like the innards of any other video-game screenshot.
As a final offering, the shot also shows that the user (or wearer) is basically blindfolded while using this device. That should alert all pranksters to get their thinking-caps on as to how best to take advantage of the user's predicament.
So when will be the year of "Oculus on the desktop?"
Seriously: needing a dam' wire to connect it, and generally being so big, ugly and intrusive, this thing doesn't stand a chance of adoption - except by the sort who are attracted to the tech for its own sake.
Think: 3-D TV headsets, but without the attraction of content that ordinary people want to watch.
Tweets are fine for conveying factual information in one direction if you don't require an immediate response - or any response at all. However I would assume that the emergency services would usually like a bit more information than can be packed into such a short message - and that the additional information would depend a lot on the circumstances of the emergency.
And then we have the question of how, exactly, you would report a fire in Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch (or even its abbreviated form: Llanfairpwllgwyngyll) if you spell it wrong and the tweet-operator, possibly outsourced and completely unfamiliar with Anglesey, tries to Google the place you're talking about. Or, worse: your spell checker "autocorrects" your location and sends the emergency services somewhere completely different.
Personally, on the few occasions I've called for emergency assistance, the knowledge that there was actually someone there was very reassuring. Simply firing off a tweet and maybe (or maybe not) getting a response sometime doesn't seem very helpful.
> pornographic content will be forbidden from appearing on the Oculus store
Well, I suppose that's one way to kill it off.
I guess the smut-mongers will either have to wait a few weeks until there is a cheaper Chinese clone on the market, or wait a day or two until someone roots the device and makes an absolute fortune for themselves with their own Succubus Rift store.
> However, Microsoft could release their own Linux distribution and this would be (almost) comparable to Apple utilising similar technology in their O/S.
The thing is, no user cares, any more, what the underlying O/S is. They do care about the quality, range and ease of use of the applications they want to run.
And this is where Linux still falls down, flat on its face. Sure, it provides some apps that are said to be "compatible" - where compatible means they can read some of the same file formats and perform some of the same functions: GIMP and OpenOffice/LibreOffice are the examples used to "prove" that Linux can do anything that Windows can.
Until, that is, you actually try to do that "anything" with these free tools. Then you find that they are lacking in basic features, have such terrible UIs, simply don't work, or have major bugs that have sat at the highest priority for many YEARS and will never be fixed, as the support team admits: "Just to be honest - it's very unlikely that this will be fixed soon. We are a community of volunteers - people fix what bugs they want to fix generally. In the grand scheme of things this really isn't a big deal - compare to crashes" [ ref: https://bugs.documentfoundation.org/show_bug.cgi?id=37960&redirected_from=fdo ]
And it turns out that most Linux advocates simply don't understand that people are willing to pay for stuff that works. Everything past Windows XP has been at least as usable and reliable as Linux, supported a far wider range of peripherals and has professionally written and supported apps that actually work.
And if, like me and everyone else who works for pay, you can place a monetary value on the personal time spent dickin' about trying to dig out obscure documentation, patches, fixes and "how tos" - Linux is no longer "free" in financial terms, either.
And for those people who do want "free" AND who want something that's simple to install, easy and intuitive to use and only costs pennies, there's always Android.
> Microsoft says it can measure 43 metrics* and spit 'em all out into a pretty, GUI-fied Azure console.
And as usual, all these monitoring tools do is find easy-to-record data from the kernel and present it in a whizzy, pretty graphical format.
Even though it is all totally irrelevant.
Providing this is like telling a car driver the piston temperature, the headlight colour, the average pressure exerted by passengers on the seats and the methane content of the cabin. What drivers want to know is answers to the important questions, such as: am I going too fast? will something break? do I need to take corrective action?
And so it is with computer monitoring. All the monitoring services seem to be in a race (and, truth be told: have been for decades) to provide the greatest number of different measurements of obscure, irrelevant and often inter-related factors. However none of them provide anything that is of primary importance, such as: how long do I have to wait for the answer to appear? can I run something in the background without affecting the important stuff? Is there time to back my stuff up before I go for lunch?
So if Microsoft want to merely mimic all the crap that today's tools produce, they'll go down the metrics route. Even though most of the stuff is irrelevant, has little effect on the BIG QUESTIONS and in itself (without knowing what the applications are doing) provides nothing of value. As all the other monitoring tools that have come, promised and then disappeared into oblivion have done since the 1980s (remember "sar" and "vmstat"?).
However, if they want to truly provide something that is useful, user-centric and actionable, they will extract: I-O data (volume, latency, cache efficiency) on a per-file basis; query times broken down into CPU, storage and network latency; Internet access times by site (or IP address); memory usage per process - identifying shared, CoW'd and local; and, best of all, a calculation of how much "slack" is available in these key areas for users to fill up with additional workloads.
All of this is hard. And most of it is not available from Unix or Linux kernels without a lot of hacking about. If it was easy then the dozens of other capacity planning / performance monitoring programs, companies and freeware would have done it years ago. However they have all failed to produce information that users value - which is the tricky bit, but, ultimately, all that matters.
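As a sketch of the difference, here is the simplest possible user-centric measurement - wall-clock wait for an answer - as opposed to yet another kernel counter. Every name here is invented for illustration:

```python
import time
from functools import wraps

# Record the one number users actually care about: how long they
# waited for the answer to appear. An illustrative sketch only.
def measure_wait(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_wait_s = time.perf_counter() - start
        return result
    wrapper.last_wait_s = None
    return wrapper

@measure_wait
def run_query():
    time.sleep(0.05)  # stand-in for a real query hitting CPU/storage/network
    return 42

run_query()
print(f"user waited {run_query.last_wait_s:.3f}s for the answer")
```

Trivial to collect, and it answers "how long do I have to wait?" directly, which no amount of piston-temperature metrics will.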
> stop repressive regimes around the world from buying sophisticated software that can be used to spy on political opponents and others
But it's only repressive regimes who WANT to spy on "political opponents and others¹".
[1] where "others" would imply everyone who isn't a political opponent. So that has pretty much all of us covered.
> It's simply some disgustingly expensive carnival attraction.
First of all, the money is NOT destroyed, it's merely moved from one individual who has too much to a company that is at the very least doing something that no-one has done before (or yet). Making progress, if you will.
Although it won't get us to Mars or the Moon, it's investing in technologies and techniques that make space travel better, cheaper, (hopefully) safer and more accessible. That adds to the sum total of moving in the right direction and is probably better for us all and the person involved than other ways of blowing the same amount of cash.
Whenever someone asks when a piece of software will be ready (or any other IT goal met, for that matter) the standard response is that it'll take "another couple of weeks".
That is long enough to make progress - possibly even get close to something that works. It's also long enough in the future that you can hope whoever you made the commitment to will have forgotten. It's also close enough to satisfy the person asking, without them criticising or objecting too much.
Of course, it's incredibly unlikely that you will have it all sorted by then - but when they come back you can say "oh there were a few complications, but I reckon it should be ready in a week ..... or two"
> Why the obsession with using smartphones as remotes?
It's arguable that a true IoT device shouldn't even need a remote. In fact, most of them shouldn't need a person to tell them what to do, at all.
Take an IoT'd lightswitch as an example. Instead of a boring old PIR to detect the presence of people and then turn the light on or off accordingly, a better solution is to design in a $5 ultrasonic Doppler sensor to detect the direction of people: into or out of the room. When $number_of_people == 0 OR $light_level > daylight then turn off the lights.
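For the avoidance of doubt, the whole control logic is a handful of lines. A toy sketch: the Doppler events and lux threshold are hypothetical stand-ins for real hardware drivers.

```python
# A toy version of the remote-free lightswitch described above.
DAYLIGHT_LUX = 500  # assumed "brighter than we need the lights" level

class RoomLights:
    def __init__(self):
        self.occupants = 0
        self.on = False

    def doppler_event(self, direction):
        """direction: +1 = someone walked in, -1 = someone walked out."""
        self.occupants = max(0, self.occupants + direction)

    def update(self, light_level_lux):
        # When $number_of_people == 0 OR $light_level > daylight: lights off.
        self.on = not (self.occupants == 0 or light_level_lux > DAYLIGHT_LUX)

room = RoomLights()
room.doppler_event(+1)            # someone enters
room.update(light_level_lux=120)  # dull evening
print(room.on)                    # True: occupied and dark
```

No remote, no app, no butler required.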
All that this obsession with remote controls does is to add one more level of indirection into the equation. Instead of a person making a physical action to control a device, they are now required to do so by means of another piece of equipment. As a design concept it's as daft as calling for your butler to do these trivial things for you: more trouble than it's worth, so I'm told.
Let's just sit back and see how long it is until the station receives a donation of LAN equipment from, say, Iran, the PRC or N. Korea. I'm sure it would all be perfectly fine and that there would be no chance it would have any backdoors, bugs, covert comms channels, spyware or other nasties.
Although the pollsters claim a 3% margin, that is clearly shown to be false. You just have to look at the results of the various polls, where the spread from one poll to another was much greater than 3%. When Sky News was broadcasting before the election they had a rolling graphic on the screen that summarised the most recent polls. Looking at the numbers as they went by, one could see that few of the polls were within 3% of each other.
It would be easy to dismiss the spread by saying that the polls were taken on different days. However, if there really was so much variation: one day to the next, then the pollsters were measuring a quickly changing variable to too high a degree of accuracy. The poll might have been accurate to the stated degree, but its shelf-life at that accuracy was so short that it was probably obsolete before it was even published.
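For reference, the quoted ±3% comes from the standard margin-of-error formula for a simple random sample, and it only describes one snapshot in time:

```python
import math

# Standard 95% margin of error for a simple random sample:
# moe = z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% confidence.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A poll of ~1,000 people at p = 0.5 gives the familiar 3 points:
print(f"+/- {margin_of_error(0.5, 1000) * 100:.1f} percentage points")
```

A spread between same-week polls much wider than this points at house effects or, as argued above, at measuring a fast-moving variable to spurious precision.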
... don't value what you measure.
> This put Tories and Labour within a whisker of one another around the 34 per cent mark.
But that's an irrelevant statistic. The winner of a First Past the Post election (as we have in the UK, designed primarily to produce clear, if somewhat less than truly representative, outcomes) is the party with the most seats, not the most votes.
The system is, by design, highly sensitive to small differences in the number of votes between the winning candidate and the runner-up: there's only 1 winner per seat, whether the majority is 1 or 10,000.
Everyone: the pollsters, the media who commissioned them and the general public all know this. So to huff and puff and say "well, we were almost right with a measurement of something that's useless" is a ridiculous defence.
And if you really are off-grid and rely on this technology to keep you powered up, you'll need a backup in case of failure (guarantees are nice, but they don't keep the lights on and if it takes a week to deliver a replacement to your remote, off-grid location ... ), or for those times when your "old" battery is being replaced.
The thing about multiple LA batteries is just that: you already have the makings of a resilient solution. Or at least one that can operate at reduced levels, rather than being a single all-or-nothing proposition.
So, as with all H.A. systems: computer or home, the cost of a truly reliable system is a multiple of the cost of a single purchase.
> a fan of the Internet of Things (IoT)
If you're a true geek you'll be rolling your own IoT out of ESP8266s and looking seriously at the possibility of configuring them as a mesh network. Given that they are < £5 a pop, any geek's house should have them in every room (they also work as wifi APs and stations), hooked in to the home internet, too.
> the concerns of law enforcement authorities on new encryption technologies
Encryption is just one form of obfuscation. Another is substituting one word or phrase for what you really mean: like saying "the product" instead of a crate-full of weapons, or "collateral damage" instead of dead civilians.
However, it goes further. We are all aware that in broadband sales the phrase "up to" means less than, "unlimited" means we know you're stupid enough to believe that and saying something "may" happen means it's a dead cert. - unless that would be a good thing™. So would banning encryption also outlaw weasel words used in advertising, or require politicians to tell the truth or for government communiques to contain facts?
If only the European Commissioner would stop speaking in code.
> Let the world hope that you're not in security, since you clearly lack any understanding of it.
Lack understanding - hardly. Is asking for a considered and quantifiable measure of risk and downside really such a bad thing?
At least with that information people would be able to make a proper assessment of the threats they face and hence apply the correct amount of effort, instead of employing Wild Assed Guesses that address the wrong issues, failing to resource their security teams correctly, and never learning how to tell a real threat from ignorant media jibberings.
You never know, the next step might even lead to fact-based professionalism.
> “Nine out of 10 times when we see equipment from that manufacturer, 90 percent of the time, this is the password.
And exactly how many cases have there been of this being exploited? It would be interesting to see a study of how many times "well known" security holes actually do get compromised.
What a lot of security professionals do (and you can't blame them, since that's how they make their money) is to point at every vulnerability: whether theoretical, practical or exploitable for gain and say "LOOK! it's a massive security hole. everyone must fix it immediately".
Now, it's true that once a weakness has been "outed" it's far more likely to be exploited - especially if hackers can get some material gain from it. However, that doesn't mean that every single weakness is in that class. At least not until some security geek goes blabbing to the entire world about it. It may even be that the small cost of having a single password across a long-lived range of equipment is far outweighed by the savings and speed of maintaining it, compared with having to call someone in when you've changed the password and subsequently forgotten it.
> 13% of people can comfortably understand foreign languages.
13% of people know about Google translate AND have the nous to find the same product described in their native language AND manage to get a decent price for inter-country delivery charges AND trust "foreign" postal / courier services.
Sounds like a pretty high percentage, to me.
> If you want to fix the world, then economics is the way to do it.
I would humbly suggest that economics describes the problem - though that is the crucial first step before it can possibly be fixed. In order to fix it, the world needs education (foremost for Guardian journalists opinion-scribblers).
MS have finally realised that they were their own biggest competitor.
People didn't feel the need to upgrade from XP to something newer, as what they already had was good enough. This isn't so much a compliment to XP, that it's stood the test of time. It's more a criticism of everything that's come since: failing in innovation or backwards compatibility, being too expensive in terms of what you got, and not providing anything sufficiently "must have" that people were willing - or able - to ditch XP and invest in a newer product.
It's still unclear that W10 has addressed these failings. But I think we can expect MS to have built something into W10 to force its adopters out of their "bad" habits.
> focusing in on fully exposed buttocks
Given that "arsehole" can describe both a part of the body and a person who displays ignorance, it's a shame that exposing a buttock isn't construed by FB as pointing out the failures or views of such a person.
There would then be much less need to unfriend or block people's posts.
> So expansive and vague is the BBC's current Royal Charter, that BeebWash would actually tick all its boxes (not only the "mission to inform, educate and entertain", obviously, but also "sustaining citizenship and civil society") and happily sit within the corporation's remit.
I have the sneaking suspicion that whoever wrote the BBC charter naturally assumed that the phrase " ... by making programmes and broadcasting them " was such a blindingly obvious limit to the scope of the British Broadcasting Corporation's charter that it simply did not need to be explicitly stated.
However, the BBC has always had a penchant for jumping on trendy bandwagons (sometimes even creating them, then jumping on). When it involves something that few of their "customers" understand, then it allows them to go into full patronising mode, too. As well as collecting "charter" brownie points to make up for all the crap they push out, which clearly has no (positive) charter content, at all.
But it is just that: a bandwagon. The BBC has many more influential ways to promote "digital" stuff. The most effective, cheapest and least popular with its management and programme makers would be to present "techies" in a positive light. Have someone on Eastenders who does more than argue in cliches with the other characters and who (say) gives them advice on how to fix their PC, or load apps onto their phone, or scan for viruses. They could even have programmes (gasp!) that don't ghettoise "geeks" by assuming a set of interests, banishing them to obscure channels and times (e.g. Click), talking to people as if they were 8 years old, and limiting each "segment" to a sound-bite friendly couple of minutes - complete with the annoying electronic background music that is used to inform us that this is a technical subject, and the worn-out theme of a literal "journey" of discovery.
> a springboard for more advanced products such as the Arduino
If an Arduino counts as "more advanced" than this jobbie, it's difficult to imagine just how basic this Micro-bit device will be.
[ Edit: though since you could make an S-R flip-flop out of a 7400, I think you'd already be bordering on the limit of understanding of most 11 y/o's ]
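For anyone who fancies checking, the behaviour of that 7400 S-R latch can be simulated in a few lines. A toy model: active-low inputs as in the real circuit, with the loop standing in for the feedback settling:

```python
# S-R latch built from two cross-coupled NAND gates (half a 7400).
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s_bar, r_bar, q, q_bar):
    for _ in range(4):  # a few passes are enough for the outputs to settle
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, q_bar = sr_latch(0, 1, 0, 1)      # pulse S low: latch sets
print(q, q_bar)                      # 1 0
q, q_bar = sr_latch(1, 0, q, q_bar)  # pulse R low: latch resets
print(q, q_bar)                      # 0 1
```

With both inputs held high the latch simply remembers its last state, which is the whole point of the exercise - and probably still a stretch for most 11 y/o's.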
> the intelligence will be added on as a paid-for upgrade
Please can we (the callers) have this, too?
Most of the time, calling the "support" desk is a screening process to weed out those callers who aren't absolutely desperate to get help, assistance or support. The trial-by-patience means that while you are on hold you will either be flipping through the (badly worded, inaccurate and out-of-date) instruction manual, or searching out solutions on the web, or scanning through Amazon for an alternative product that will have different and hopefully not show-stopping faults. If you succeed in finding a fix of your own, then you ring off before you get connected to the (non-flying) drone whose only suggestion, after taking all your personal details for the Nth time, is to "turn it off and on again".
If only there was a way of getting through to the "right sales or support staffer for a particular task no matter where they are". Something along the lines of "Press 1 if you're an idiot. Press 2 if you've already turned it off and on again. Press 3 if you think there's a race condition in the interrupt routine. Press 4 if you're prepared to pay £10 to get in front of the other 23 people in the queue and talk to our 1 single qualified techy RIGHT NOW".
> Endurance of five years at 10 full drive writes/day
OK, I admit I needed my fingers and my toes to work this one out.
But 10 "full drive writes" per day seems to me to be 80TB of data (on an 8TB drive).
A day contains 86,400 seconds. So to write 80TB (80,000 GB) in that number of seconds requires a write-rate of:
80,000 / 86400 or about 0.9 GByte/sec.
But the drive spec says it can write less than 560MB/sec.
</confused>
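Spelling the sum out (assuming an 8TB drive, so that 10 full drive writes come to 80TB a day):

```python
# The arithmetic behind the confusion: can the drive physically write
# 10 full drives per day at its rated speed? (8TB capacity assumed.)
capacity_gb = 8_000
full_writes_per_day = 10
seconds_per_day = 86_400
max_write_mb_per_s = 560  # from the drive spec

required_gb_per_s = capacity_gb * full_writes_per_day / seconds_per_day
print(f"needs {required_gb_per_s:.2f} GB/s sustained")  # ~0.93 GB/s
print(required_gb_per_s * 1000 > max_write_mb_per_s)    # True
```

In other words, the rated endurance describes more data than the drive could actually accept at its maximum write speed, which is presumably why the spec reads so oddly.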
This project seems to be an extension of a Sound Triggered Flash. That was a staple of hobbyist electronics magazines going right back to the 1970s. Its popularity came from only needing a handful of easily sourced components.
How can something so simple cost so much and go so wrong?
I suspect the answer is that the project creators fell foul of the problem of a lot of software projects (free as well as commercial): they tried to do too much, added too many features, made the thing too flexible and (ultimately) didn't really have much idea what their goal was. As more cash poured in from the "corrrr, that's so kewl!" brigade - desperate for a new toy that would probably only be used a few times before something even "kewler" grabbed what little attention they had - the project used that as an opportunity to extend, expand and "improve" the original design.
It's just a shame they hadn't heard of the Arduino - which seems to have all of the functions and features they wanted and can be made to do all the stuff they were after (possibly without a UI - but really .... what the hell is that all about?) - for a small fraction of the £300 £250 they were asking for a production Triggertrap.
What do people want from their "news"?
If we watch TV news, it's pretty clear that the primary goal is simply to keep viewers watching. This is done partly with vivid (lurid?) images - sometimes inserted more for their shock value than to convey information (though in factual programmes almost NO information comes from the video stream: just turn away from the picture and you STILL get the whole, errr, "picture"; the same cannot be said for turning the sound off). And partly by keeping viewers from switching over with promises of stories about loss (since fear of loss is probably the greatest motivator of all), hints about celebs and other small furry animals, and SPORT. Just keep watching the boring stuff about bad things happening in far away countries, and we'll get to the juicy stuff ... after the break.
For newspapers, the intention seems to be to push a point of view (and also to get readers to look at the advertisements). Since the article uses The Guardian's Comment is Free moniker (it's not free, BTW: it's very heavily censored, as any commentard who even suggests that their angry-women columnists might only be using a selected version of the facts soon discovers) we can use their text as examples. The Grauniad's pieces appear to be (very carefully) crafted to encourage clickage, attract eyeballs and generally maximise advertisement revenue. They do this by putting their own political slant on their pieces - which one wouldn't really call "journalism".
If you want "information" from the news media, then the simple way is to skim the headlines. In most cases this tells you all that is known about a breaking story, the rest of the piece being merely guesswork, conjecture or a rehash of what people on twitter are saying about it. Google News does this very well - and it's so quick to use. It also seems that journos are writing their stories to be GN friendly, with all the facts in the first sentence or two.
Understanding and background can be found, but that usually requires the "expert" to arrive. So for depth regarding stories, one generally has to wait a day or two. But by then it's all been forgotten and the next set of 140-character news-bites has washed away any important but dull stories, and we're back to cats playing with celebs (or what colour is a dress) again.
> I think this is a great kit for someone who just wants to get the end result achieved.
The difficulty is that it doesn't work.
I experienced a number of IO errors, all related to I²C communications, during the time I spent with the Kit. Dexter Industries Forum posts relating to the GrovePI+’s predecessor suggest this is not uncommon but at least can be caught and managed in the core Python code.
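The "caught and managed" approach mentioned there amounts to something like this retry wrapper. A sketch only: `read_sensor` stands in for whichever GrovePi/smbus call is actually throwing the IOError.

```python
import time

# Retry a flaky I2C read a few times before giving up. The
# read_sensor callable is a hypothetical stand-in for a real call.
def read_with_retry(read_sensor, retries=3, delay_s=0.05):
    for _ in range(retries - 1):
        try:
            return read_sensor()
        except IOError:
            time.sleep(delay_s)  # transient bus glitch: back off, retry
    return read_sensor()  # final attempt: let any error propagate
```

Which works for transient glitches, but of course also quietly papers over a genuinely flaky bus - hardly "plug and play".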
If the kit has a buggy I²C implementation (or the software running on the Pi does) then it's of little use to non-technical people whose abilities limit them to "plug and play" components. Not only would those people not be able to produce a working outcome, but they wouldn't have the skills to diagnose or fix the problems themselves.
And for the more advanced user: would the time needed to work through or around those bugs, or to work within the limitations imposed by them, be worth the convenience of buying this kit - especially when there are debugged hardware alternatives with known quality software available from other sources? Or even (perish the thought) Arduino solutions that have a proven track record.
Maybe it would be better to wait for version 2.0
One simple way for IT admin people to gauge their competence is this 2-stage process:
Tot up the number of *ckups you have fixed, avoided or alerted your employers to. Assign a realistic financial value to them - remembering to include fractional values where it wasn't just you who contributed to the fix (or fault). Subtract the value of the ones you have caused.
If the total amount saved is greater than the cost of them employing you (not just what you get paid, but the cost of your employment, including overheads) then voilà you can count yourself among those competent to do your job.
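Worked through with entirely invented numbers:

```python
# The two-stage competence test above, with every figure invented
# purely for illustration.
fixes_value = 40_000      # *ckups fixed, avoided or flagged (your share only)
caused_value = 5_000      # value of the ones you caused
employment_cost = 30_000  # salary plus overheads, not just take-home pay

net_saving = fixes_value - caused_value
print(net_saving > employment_cost)  # True: competent, on these numbers
```

Tweak the figures to taste; the uncomfortable part is assigning honest values to the fractional contributions.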
> The Post Office is ramping up its plans to become a virtual mobile phone network
So it'll take 3 days for them to connect your call (or 2 weeks at Christmas) - which might go to someone else if they can't reach the person you wanted to speak to. And then, if they fail to connect your incoming calls, you'll get a TXT saying you have to go to the Sorting Office to collect your voicemail?
> They're still out there.
In theory, yes.
However, it's 50 years since anyone put a nuke on a rocket, lit the blue touchpaper and had a successful "boom" - rather than a <phut>, ooops or "oh crap, it's heading back in our direction". That means that the last people who did it (assuming they were in their 20s and 30s) are now retired, and the people they trained and passed the "tricks of the trade" on to are themselves getting on and have (presumably) passed all the folklore to a new generation.
So would a system that was last end-to-end tested half a century ago, with all the subsequent innovation, upgrades, redesigns, changes and cost-cutting have any realistic chance of working? I can't see much hope for it - but I hope nobody reads this and decides to try it out.