Two tiers
He went on to argue that 5G will cause applications and services to split into two tiers: non-5G-connected and 5G-connected.
Excellent. I can just ignore the "5G-connected" tier, then.
Obviously use cases vary considerably, but I use multiple laptops more or less daily, and I almost never bother using them on battery. The battery is very handy as a built-in UPS and when moving the machine from one room to another; but I don't think I've ever gone more than a couple of hours on battery on any of them, and even that I wouldn't particularly miss.
I've been a touch-typist since 1983, and the placement of the Fn and Ctrl keys on Lenovo keyboards has never been a problem for me. Mileage varies, I suppose.
My complaint about the keyboards on these things is that they lose far too much space to that accursed touchpad. But there's no getting around that; the market for laptops without a touchpad is too small for anyone to bother.
Who exactly can grab this private key, and how?
Anyone with a copy of the Atlassian Confluence desktop application, by debugging. The private key is embedded in the desktop app.
This is a classic error, and per the Twitter thread, there are likely many, many more offenders.
You need to discard part of the merge history "all the time" on public branches? I have to side with Hipp on this one; that sounds like bad practice, frankly.
Pretty much all the rebase use cases I've ever seen boil down to "Shit, I've made a mess. How can I pretend to the world that didn't happen?" Maybe there are good use cases for it, but I haven't seen any examples.
an aspect ratio of 16:5
Eh, why not "an aspect ratio of 3.2"? Most people won't care. Of those who do, some will be able to do the arithmetic to convert it to whatever denominator pleases them. The others will look for some magical term they don't understand, or assume that bigger numbers are better, so for them there's no point in converting to "more sensible values".
Personally I like screen ratios around 0.425π.
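The arithmetic really is that trivial; an aspect ratio is just a quotient, so 16:5 and 3.2:1 describe the same shape. (And, for the record, 0.425π lands within a couple of percent of good old 4:3.)

```python
import math

# 16:5 and 3.2:1 are the same shape; a ratio is just a quotient.
print(16 / 5)            # 3.2

# 0.425*pi is within about 0.2% of 4:3 (1.3333...)
print(0.425 * math.pi)   # ~1.3352
print(4 / 3)             # ~1.3333
```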
Try playing Wing Commander 2 on a 286 with EGA at a few seconds per frame.
Luxury! In my day we played games on Model 33s. After every move you had to wait for it to print the next page character-by-character. But we were happy.
We had this issue at the Mountain Fastness - television set wasn't particularly wide, but we wanted to put it on a quite narrow cabinet we'd picked up for $10 at a salvage place.
A plank on top of the cabinet solved that nicely. I was even going to whack it down to the correct length with the mitre saw at some point, for extra classiness, but then after a year or so we decided to wall-mount the TV instead. Then we unmounted it a year later and sat it on an antique oak dresser we picked up for $45 someplace.
I suppose what I'm saying is that these things are easier when you don't give a rat's ass what it looks like. It's not going to win a beauty competition against the views out the windows anyway.
If you're sat roughly 2 feet away from the screen, that's really poor ergonomics, HSE and your eyes,
Yes. I've never seen the attraction of these huge monitors. Frankly, it's been decades since I was even interested in anything bigger than my laptop screens. They do a good job of filling the center of my field of vision when I'm working. I don't have any use for more screen real estate.
When I was working at IBM circa 1990 I had a multi-headed workstation with four monitors connected to it. At the time it seemed pretty nifty, but I quickly realized that I only ever made real use of two of the screens, and one of those was connected to an experimental graphics card so it was only used for testing.
I realize I'm in the minority here, but I have no interest at all in these giant monitors.
I recall a VR demonstration at SIGGRAPH 1989. It wasn't thrilling 30 years ago, and it's not thrilling now.
It's hardly a "new technology" (it's improved, true, but clearly not enough to make much of a difference in the consumer market), and if Microsoft is going to "miss out" on any big VR opportunity, it won't be soon.
I own a .org, and I will not be renewing it. I don't want to provide even the smallest support to Chehade.
I won't be supporting ISOC either, though to be honest I haven't in the past (other than by paying for a .org); their mission is well down my priority list.
It may well be that one of the better practical approaches to "AI Explainability" is to use image recognition techniques for many subordinate feature-groups, followed by statistical pattern matching of the presence of several (but not requiring all) of these subordinate feature-groups (also in realistic geometric arrangements).
In the jargon of the field, that's interpretability, not explainability. The latter is analyzing models post hoc; the former is constructing models using already-understood features. See the last link in my post above.
When you come up with a robust mechanism for doing that, you let us know, eh?
So far the best humanity has been able to come up with appears to be late capitalism with a moderately-strong regulatory state, and that's still so buggy Adobe would be ashamed of it.
I've seen several collections of NN activation "heat maps" for image classification, and they're pretty much always surprising. That is not, I think, very reassuring.
There have been some attempts to extract deeper explanations for these "evolved features" (that is, functions that the NN stack has generated through unsupervised learning). Explanatory techniques such as the feature ranking performed by Google AI Explanations aren't particularly useful for the deep convolutional stacks typically used for image recognition, because the evolved features weren't designed by humans, so they don't make much sense to human analysts and don't carry useful labels like "max temp". So researchers have turned to other approaches: information theory, for example, and visualization.
Heat maps are one approach to visualizing what signals are being extracted from images by deep convolutional NNs. Unfortunately, while this area of research has produced hundreds of really quite interesting and thought-provoking papers, we still really don't understand why deep CNN architectures evolve the features they evolve. That's one reason for the more recent push toward creating interpretable models rather than trying to explain black-box ones.
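To make the heat-map idea concrete, here's a minimal occlusion-sensitivity sketch: slide an occluding patch over the image and record how much the model's score drops at each position. The "classifier" below is a toy brightness scorer standing in for a real CNN's class probability, and all the names and sizes are invented for illustration; real saliency methods (Grad-CAM and friends) are considerably more involved.

```python
import numpy as np

# Toy "classifier": scores an image by the mean brightness of its centre
# region. A stand-in for a real CNN's class probability, nothing more.
def score(img):
    h, w = img.shape
    return img[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean()

def occlusion_heatmap(img, patch=4):
    """Zero out each patch-sized block in turn; record the score drop."""
    base = score(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i : i + patch, j : j + patch] = 0.0
            heat[i // patch, j // patch] = base - score(occluded)
    return heat

img = np.zeros((16, 16))
img[8:12, 8:12] = 1.0   # a bright square the scorer depends on
heat = occlusion_heatmap(img)

# The hottest cell is the one covering the bright square.
print(tuple(int(x) for x in np.unravel_index(heat.argmax(), heat.shape)))
# → (2, 2)
```

The point of the exercise, and the point of the research: the heat map tells you *where* the signal came from, but not *why* the model evolved a feature that responds to it.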
Windows Remote Desktop sucks for remote printing
Why would anyone use RDP for printing?
I mean, SMB sucks too - axiomatically - but it does remote printing, and has since, what, the late 1980s?[1] And it's built into every version of Windows.
If you're going to do Windows-to-Windows remote printing, SMB printer sharing is by far the simplest solution.
You can also do it with non-Windows clients and servers using Samba. Of course, if you have a pure-UNIXy environment, lpd or CUPS would probably make more sense.
(Ah, lpd. I remember when that was the new kid. Hell, it wasn't even standardized until RFC 1179 in 1990!)
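(For the curious: an RFC 1179 job is just two files pushed over TCP port 515 - a data file and a tiny control file of one-letter commands. A minimal control file looks roughly like this; the host, user, and file names here are made up, `f` means "print the named data file as formatted text", and `U` means unlink it when done:)

```
Hprintclient
Pmjwilson
JquarterlyReport.txt
fdfA001printclient
UdfA001printclient
NquarterlyReport.txt
```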
[1] That would be LAN Manager 1.x. IBM actually introduced SMB in '83, but I don't remember if it supported printer sharing for those first few years. And it turns out the oldest version of the SMB spec I have to hand is relatively recent; it's revision 6.0, from 1996.
Generally they seem to discontinue these services when they feel they've scooped enough data from them. They're just harvesting exercises.
Personally, I'm just as glad they're killing Google Cloud Print; it will be one fewer thing to go in and disable every time my phone updates.
Yes, I never use my phone for this either. Even when people have their phone unlocked and the bar code on the screen, it often (in my experience) takes several tries to get it to scan, while the paper boarding passes generally scan on the first try. It's just obnoxious.
If I'm in an aisle seat I'll usually wait until the end of the line, or close to it. If I'm in a middle or window I'll board with my group in the hope of not having to make someone else get up to let me by. (I try to never have carry-on luggage that needs to go overhead, because I'll be damned if I'll let the airlines con me into fighting for overhead space. So I don't need to rush on for that.)
The funny thing is that I believe the usage of "digital" instead of "numerical" in English is wrong as well.
People believe all sorts of things with no justification. I suppose that might be funny to some.
Regarding "digital" in English: The use of "digital" to mean information technology that primarily represents data using discrete rather than continuous encodings was apparently introduced around 1940, but is based on usage - in the sense of a digital representation of a number - that goes back to the fifteenth century. "Digital" in general was not commonly used before its adoption as a term of art in ICT to distinguish from analogue methods. Since this new usage was both justified etymologically and not in conflict with any existing popular usage, it's difficult to see how anyone could argue that it is somehow unjustified.
Certainly, in English, there's very little ground to argue that "numerical" would be better in this context. Analogue computers are also numerical; and there is nothing about, say, discrete signaling systems which makes them "numerical".
Am I the only one that thinks that
If the Internet has taught us anything, it's taught us that there's an excellent chance any opinion, no matter how idiotic, is shared by at least a few. So, no, probably not.
... the phrase "Digital Transformation" is retarded?
I suspect the phrase was coined by someone who was developmentally typical.
I don't see anything wrong with it myself. My digits go through transformations all the time - typically affine translations, modified catenaries, that sort of thing.
I'm usually on board with the "don't blame the user" argument, as my past posts will show. In this case, though, there are quite a few red flags, significant user action was required (it's not the same as recording a potentially-embarrassing scene with your phone and failing to secure that device), and people were uploading particularly sensitive data.
While it's clear that the people behind this service should be banned from creating software until they've passed some remedial courses in basic thinking, in this case I feel at least some of the users share a significant portion of the blame.
And in this case, the VEEDiots repeatedly ignored messages from the researchers, and from the Register. They're clearly not competent.
Personally, I'm still (despite the continual deluge of evidence) a bit amazed that people use services like this. Really, folks. You're trusting a startup with no reputation or record with your sensitive data. Even their name is stupid. Would you buy medications from a startup named PILZ.io? (Yes, I suppose many of these people would.)
It is certainly possible for an entity to issue securities that are backed by real assets (such as stock in a corporation with tangible assets), and that could, theoretically, include some real-world asset for backing a cryptocoin
My understanding from the article is that's precisely what Zaslavskiy was promising - that his funds owned sufficient non-cryptocoin assets to guarantee the value of the coins. He wasn't claiming the coins themselves were backed by anything.
For any sensible investor that should raise all sorts of red flags, but obviously sensible investors were not his target audience.
There are days when I think it would be viable to create a fund that invests in obviously fraudulent operations, then cashes out of them (taking Ponzi profits, i.e. bleeding off capital supplied by other, duped investors) before they go bust.
I don't really know what Microsoft is trying to do (short of maintain a revenue stream)
That's probably about all that can be meaningfully said about Microsoft as a whole.
Divisions will have more-specific goals and directions, but overall strategy for an organization that size is always going to be superficial and amorphous.
Yes, let's ruin astronomy so more people can play games.
In my experience, satellite Internet service is generally rubbish. It's not so much the latency I mind, though it is annoying for SSH connections (I don't play online games). Weather can frequently cripple or disrupt service, and existing providers such as HughesNet impose ridiculous data-volume caps. Frankly, I don't see that improving much.
The vast majority of the rural US population has utility electricity delivered by overhead wire. Adding fiber lines to those poles is a relatively small additional expense: most of the infrastructure already exists, and the power companies already have to do routine maintenance. At the Mountain Fastness, we have fiber-to-the-premises courtesy of the local power co-operative, which does exactly this. They created an ISP subsidiary to fund the rollout.
For the cost of launching the satellites to provide equivalent coverage, you can run a lot of fiber alongside existing infrastructure. And once the fiber is already run next to someone's property, it becomes a lot easier to sell them a hookup and ISP contract.
It might take SCOTUS to clarify the exception, but lower courts can, and likely will, throw out evidence gathered in what the presiding judge considers a violation of the intent of the exception; and proceeds from that evidence, under the "fruit of the poisonous tree" doctrine.
Historically, the US (as part of its UK-derived common-law model) has let the courts refine the scope of exceptions such as "extenuating circumstances". It's very difficult to define these things in a productive way in legislation.
And, of course, there already exists substantial precedent regarding extenuating circumstances for other police activities that normally require warrants. It's not like this is a procedural vacuum.
as a piece of design it was hard to beet
Indeed. I wonder when its like will turnip again. I suspect such design is not enough of a carrot to win the market; it's not easy to salsify consumers. It seems like Motorola is still approaching phone design rather gingerly. (I'm trying to stop, but they keep cumin.)
Indeed. Mine fits in my pocket just fine. I'm not seeing the advantage here.
Back in the day when decent ones were available on the gray market, I did prefer slider phones - but that was because they offered full mechanical keyboards.
A folding touchscreen is about as appealing to me as, say, edible furniture: it's not that I can't imagine use cases, it's just that they're extremely implausible, of very small incremental value, and likely to have bad failure modes.
I don't see this doing well. As a novelty, it's far too expensive, even if it proves reliable. Yes, there are some people willing to spend stupid amounts of money on a phone, but that market is limited. Making it a Verizon exclusive in the US won't help with that. On the other hand, it's not exclusive enough to be a Veblen good. It's conceivable some tastemaker will get it to go mainstream, but I wouldn't bet on it.