Wonder how it will compare to the Nokia monster cameras
as seen on the Nokia 808 and Lumia 1020. The former in particular should be hard to beat; some test reports indicate the Lumia 1020 implementation was not quite as good.
1981 publicly visible posts • joined 18 May 2007
Drivers should be shipped as source code and built with a compiler at install time.
Yes, but even this would not work in Linux (given current policies), because the driver API is not so stable even at the source level. This is justified by the need to preserve the freedom to change the kernel implementation.
> So how do you do a JIT compile, where data is necessarily code and code is necessarily data? Harvard architectures can't do a JIT compile, which is a necessary speed boost sometimes.
Compile the code as data to a page (or pages) marked non-executable, then change the protection to execute-only. Arrange things so that the compiler is the only application that can change the page protection bits this way, and that it will compile only data that has been originally loaded from valid bytecode files (use checksums for example). This also requires that the CPU refuses to execute anything from a writable page. Perhaps not foolproof, but should make it much harder for malware to write stuff to a data page at run-time and then execute it.
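The protection flip described above can be sketched on Linux roughly like this (an illustrative Python sketch, Linux-specific, calling libc's mprotect through ctypes; it only demonstrates the permission change and verifies it via /proc/self/maps, without actually executing the generated bytes):

```python
import ctypes
import mmap

libc = ctypes.CDLL(None, use_errno=True)
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

PAGE = mmap.PAGESIZE

def jit_flip_demo():
    # 1. "Compile" into a page that is readable and writable but NOT executable.
    buf = mmap.mmap(-1, PAGE,
                    flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE)
    buf.write(b"\x90" * 16)  # stand-in for freshly generated code

    # 2. Flip the page to read+execute; from now on writes are refused.
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    if libc.mprotect(addr, PAGE, mmap.PROT_READ | mmap.PROT_EXEC) != 0:
        raise OSError(ctypes.get_errno(), "mprotect failed")

    # 3. Confirm via /proc/self/maps that the page is now r-xp.
    with open("/proc/self/maps") as maps:
        for line in maps:
            if line.startswith(f"{addr:x}-"):
                return line.split()[1]  # permission field, e.g. "r-xp"
    return None
```

A real JIT would of course also have to flush the instruction cache on some architectures, and the OS would have to enforce that no page is ever writable and executable at the same time.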
No defensive programming can fix that.
No, but that is not what it is about. The application must just be able to decide it cannot handle the situation, give a sensible error message, and exit, instead of mysteriously freezing. This is especially important for software in consumer devices.
Handling error situations well is one of the things that distinguishes quality software from poor hacks.
Sony: “The symptoms being experienced are not a failure of the TV, but are as a result of specification changes made by YouTube that exceed the capability of the TV’s hardware.”
Total BS from Sony. If your system crashes because it gets unexpected input from the network, it is your fault. The YouTube application need not work with the unexpected input, but it must notify the user and shut down gracefully, without taking the system with it.
But the Bravia bug is typical of the software quality of consumer devices. Like the LG DVD player I have that locks up if it is fed a disc in a format it cannot handle, or one that is too scratched.
LibreOffice is now what OpenOffice should have been. It is already far ahead. Among other things, LibreOffice has cleaned up the code base and build system, making further development much easier.
Problems in the original build system were one reason why the security bug was not fixed in a timely fashion in OpenOffice: they could not even compile the dang thing! OpenOffice really is a dead office suite walking.
Or in my case it was, "I have to upgrade because I keep getting that security message".
The living-room laptop had that disease until I finally got annoyed enough to find and run a "never10" (or some such) free utility on it, which shut it up by patching the registry. The other Windows 7 laptop in the house got the Linux treatment.
The first one would have been Linuxified as well, but I need one Windows machine to run my negative scanner that has no Linux driver.
Sadly, as many recent reports have shown, much of the Rest of the World are busy talking out of a similar orifice to the one Mr Comey appears to favour, and demanding, or moving towards demanding, the same thing.
Yes, and if the FBI gets its way in the U.S., it will accelerate similar backdoor schemes elsewhere. When every major government wants access to a backdoor, the magic keys will leak even faster, and the security afforded by such encryption will be worse than that of a girl's toy lock on her pink diary.
Given enough time and resources all messages can be broken and read.
Enough time, sure. As in millions of years. And adding bits to the key makes the time go up exponentially. DES with its 56-bit key is now considered crackable, so it has been replaced by algorithms with longer keys. I expect they too will be replaced as computing power grows. But it does not really matter, as long as the time needed for a brute-force attack is longer than the time the message is expected to stay relevant.
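To put numbers on "exponentially", a back-of-the-envelope sketch (the 10^12 keys-per-second rate is an arbitrary assumption for illustration, not a measured figure):

```python
RATE = 1e12                       # hypothetical: 10^12 keys tried per second
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(key_bits, rate=RATE):
    """Expected years to find a key by brute force (half the keyspace)."""
    return (2 ** key_bits / 2) / rate / SECONDS_PER_YEAR

des = years_to_search(56)    # about 10 hours at this rate: crackable
aes = years_to_search(128)   # on the order of 10^18 years: not happening
```

Each extra key bit doubles the expected search time, which is why going from 56 to 128 bits moves the attack from "a weekend project" to "longer than the age of the universe".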
But it has exposed USB ports. Seriously?
I wonder if the attack could be extended to work with other attached devices, like a mouse: you can send configuration and status request commands to it. Or if the PC or laptop has earphones attached, you could send very high-pitched modulated sound, which would turn into very low-frequency radio. Sound cards can often output up to 20 kHz; it does not matter if the earphone does not reproduce it, and most adults cannot hear it anyway, so the hidden carrier would be undetectable.
Now, if some specific company gets better treatment than others, it can be ruled "state aid" - the government "pays" the company by forgoing taxes - which is forbidden by EU rules.
Not to mention extremely unfair to other companies, Irish or foreign.
Any true free market enthusiast should actually be cheering the Commission, even if they don't like taxation: if there are taxes, the same rules shall apply to everyone, so as to not distort the market.
wget is broken and should DIE, dev tells Microsoft
>People still used FTP?
I still often find it to be the only common way to move files between unlike systems. Even if a better alternative is available for some OS, it may not have been installed by whoever is in charge of the system I need to communicate with. Or there is a stupidly configured firewall blocking the way for other methods. I don't think FTP is going away any time soon...
The technology, once invented, cannot be uninvented. If you can park something around the moon, you can plow something into the Earth.
The same states that can (perhaps) alter the orbits of rocks in space also have the capability of dropping fusion bombs anywhere on Earth. So this does not give me anything extra to worry about...
HMD global Oy, the parent company of Nokia,
Say WHAT? HMD Global just tries to relaunch the "Nokia" phone brand, but it is most certainly not the parent of Nokia the company (which is still going strong in network equipment). Nokia just licenses the brand to HMD, and has a representative in HMD's board.
Sloppy reporting.
instead of fixing long standing but difficult issues like FOSS GPU drivers STILL sucking,
Doesn't the blame here belong more to information-hiding hardware vendors?
(If I were the Great Dictator, I would prohibit the sale of any computing-related hardware, unless full programming information is made available for at most nominal cost, and without NDA restrictions.)
Isn't it about time we just assume that the default setting is security = nonexistent?
Looks like it. The problem is, security problems are not visible to most customers, until too late, and the vendors escape any liability. Same thing has happened in comparable situations with other technology. Cars used to be "unsafe at any speed", until increased awareness and regulation improved the situation.
>Honestly, the best protection against macro viruses now is to be running an up to date version of Word. It won't run macros unless you, the user, explicitly enable them.
Not sure if that helps against a good phishing attack. If the attachment comes from a plausible-looking sender, the recipient is likely to enable the macros anyway, especially if it looks like the document cannot be read otherwise.
Really, the only solution is using document formats with no macro feature, or at most macros that are strictly limited to operating on the document contents itself, with no kind of programmable access to the file system or network at all.
"LibreOffice isn't quite as fast as Word, but it's getting there. What is yet to be determined is not only whether or not I can defang all the "smart quote"-like stupidity and either have it preserve my settings through upgrades or make the settings changes something easy that can be injected at boot."
Yes, unfortunately LibreOffice also comes with these "I know better than you do how you want to write" settings enabled by default, but they can be turned off ("Tools->AutoCorrect Options" and "Tools->Spelling and Grammar...->Options..."), and so far it has been very good at retaining these settings across upgrades (though I have not yet tried the latest version).
But "Microsoft Love's Linux".....
When they see an advantage in doing so, like in cloudy stuff, where Linux currently rules (the "embrace" phase). So there is no inconsistency.
Anyway, from Microsoft's point of view, this was about fixing a bug. Supporting Linux on these tablets was never promised.
"There's no hardware you can trust."
Actually, there could be: a mechanical switch or jumper connected directly to the write-enable pin of the firmware memory. Low-tech, and it would keep control in the hands of the owner of the machine instead of Microsoft, which is of course why we have the overly complicated UEFI "secure boot" instead. (And when you hand a complex spec to vendors, some of them are guaranteed to screw up the implementation.)
From article: "and if users desperately need to run 32-bit legacy applications, the'll have to do so in containers or virtual machines."
A strange statement. Actually, the x86_64 version of the Linux kernel runs 32-bit applications perfectly transparently, provided the distribution ships the 32-bit versions of the shared libraries and they are installed. Or at least that is how it is in Red Hat and openSUSE, where 32-bit libs live in /lib and /usr/lib and 64-bit libs in /lib64 and /usr/lib64, so installing them side by side is no problem.
I'm not that familiar with Ubuntu and other Debian derivatives. Maybe they use /lib and /usr/lib also in 64-bit systems, in which case I can see why they have extra trouble here. Too bad, they could have avoided it.
The rules ask for the source as a zipped text file, but there are two common text file representations: CRLF-terminated lines, as on Windows, and LF-terminated lines, as on Linux and other Unix-style systems (I am not sure whether any Macs still use CR-terminated lines; I believe the older ones did). Can the judges handle all of these, or must the entry be normalized to one specific format?
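Normalizing an entry to LF-only line endings is trivial anyway, something along these lines (a sketch; it handles both CRLF and lone-CR inputs):

```python
def normalize_newlines(data: bytes) -> bytes:
    """Convert CRLF (Windows) and lone CR (classic Mac) line endings to LF.

    The CRLF pass must run first, or a CRLF pair would turn into two LFs.
    """
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

normalize_newlines(b"line1\r\nline2\rline3\n")  # -> b"line1\nline2\nline3\n"
```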
The rules say each program must be submitted as a single zipped text file. This is a bit unnatural for Java, which requires a 1-1 relationship between public classes and source files, although probably feasible in this case. The problem does not appear to require a complex program. Just use a single public class.
Thanks for your comments. Some replies: the delay loop at the start of some versions is meant to bring a low-resolution (one-second) clock function to the next tick, so the actual measured code starts just after a second has flipped over. This reduces jitter a bit. However, I'm not sure how much it mattered. For example, the difference between Python 2.7 and JavaScript on Node.js was very large; any clocking method would have detected it. But I agree that using the time libraries of each language is one potential source of error in close cases, because they may be implemented more or less efficiently. This can be mitigated by doing a lot of computation between peeks at the clock, as the test programs in fact try to do.
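The tick-alignment trick amounts to something like this (an illustrative sketch; `wait_for_tick` is my name for it, not taken from the benchmark code):

```python
import time

def wait_for_tick(clock=time.time):
    """Spin until a one-second clock rolls over to the next tick.

    Starting the measured code right after the rollover reduces jitter
    when the available clock has only one-second resolution.
    """
    start = int(clock())
    while int(clock()) == start:
        pass                      # busy-wait until the second flips
    return clock()

t = wait_for_tick()
# t % 1.0 is now close to 0: the fractional second has just reset
```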
About the dynamically allocated array in C++: I did it that way to keep the versions in different languages closer, and believe it should not have any effect. Firstly, the allocation and deallocation of the array occur outside the measurement loop, so that overhead is not included. Secondly, any C or C++ compiler worth its salt will keep the base address of the allocated array in a CPU register during a tight loop like this, so there is no difference between accessing it and a stack-allocated array (which would in fact also be accessed indirectly via a register).
I'd be wary of drawing conclusions from implementing half a page of code in various languages and running it.
I fully agree one should not draw too many conclusions from microbenchmarks like this, but it helps get a feel of how various features behave in different languages or compilers.
I also find it hard to believe that you'll outperform C or C++ in an integer focused task, using a JVM language. I'd be very interested to replicate your results, if you provide some details on your methodology.
After thinking about it, I did not find it hard to understand. Java is a statically typed language, and modern JVMs do JIT compilation, where they can apply all the same optimizations as the C++ compiler (at least for algorithms like this that do not require run-time type information). So it comes down to which compiler has the better code generator. If you want to check for yourself, see macrorodent.blogspot.fi, where I just posted the benchmarks. If you get interesting results, please post comments there.
The overhead is minimal (add a segment to the LDT) and you can trap any overrun from any language. Sure when you DO trap, there is a huge overhead... but you are debugging then!
Actually there is quite a bit of overhead with this method, because access to such far data requires a more complex code sequence than for data in the default data segment. You need to load a segment register (a compiler can sometimes optimize this away, but usually not, and there are not many of these registers: only ES, FS and GS are free for general use). Loading a segment register is expensive in protected mode on the 386 architecture (it loads the descriptor data and checks protections), and the overhead has only gotten worse in succeeding generations of the Intel architecture, because Intel sees it as a legacy feature that almost nobody uses. It is kept around for compatibility, but they don't care about its performance.
Yes, I too have worked with an embedded system that uses the Intel segmentation feature for fine-grained memory protection (still occasionally do), and I can assure you it is a bad idea!
If you must use strncpy, then at least use 'strncpy(buffer, string, maxlen-1)' to make room for the null.
Reasons for that include having to take into account old C libraries. The strl* functions are newfangled inventions. I recall reading somewhere that the reason for the dangerous behaviour of strncpy when the target size is exceeded comes from its usage in the original Unix file system, where file name components were limited to 14 characters. They were stored in fixed-size directory entries with 14 bytes reserved for the name, and only names shorter than 14 characters were NUL-terminated. So strncpy with size 14 writing to the file name field did exactly the right thing...
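That 14-byte directory-entry behaviour is easy to demonstrate by calling the real strncpy from Python through ctypes (a sketch; it assumes a Unix-style libc reachable via ctypes.CDLL(None)):

```python
import ctypes

libc = ctypes.CDLL(None)
libc.strncpy.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
libc.strncpy.restype = ctypes.c_char_p

field = ctypes.create_string_buffer(14)   # 14-byte "directory entry" name field

# Full-length name: strncpy copies exactly 14 bytes and adds NO terminating NUL.
libc.strncpy(field, b"directory-14ch", 14)
full = field.raw                          # b"directory-14ch", no b"\x00" inside

# Short name: strncpy pads the rest of the field with NUL bytes.
libc.strncpy(field, b"short", 14)
padded = field.raw                        # b"short" followed by 9 NUL bytes
```

The missing terminator in the first case is precisely what makes strncpy dangerous when the result is later treated as an ordinary C string.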
That only works for STATIC bounds-checking, but a lot of the overruns come from DYNAMIC buffers with bounds only known at runtime
This gets language-dependent. If you have a language where the compiler knows how the size of a dynamic array can be determined (for example Java), it can optimize bounds checking also in those cases. I agree this is hard to make work in C, and we might not even want to, if we just use C as a close-to-the-metal language, and use something else for higher-level applications.
About that Java, which always has array bounds checking enabled: last summer I spent some idle time seeing how well various current languages do on the classic Sieve of Eratosthenes benchmark (which mainly loops through an integer array). The test was on CentOS 7 Linux, and the "contestants" included C++ (GCC 4.8.3), Java (1.7), Python (2.7.5) and JavaScript (Node.js 0.12.7). The clear winner? Java. C++ was close, of course. Of the two dynamic languages, JavaScript beat Python handily: it was about 10 times as fast, and achieved about half the C++ or Java performance (which I find impressive).
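For reference, the benchmark loop looks roughly like this in Python (a sketch of the same classic algorithm, not the exact benchmark code):

```python
import time

def sieve(limit):
    """Sieve of Eratosthenes: count the primes below limit."""
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Cross out all multiples of i, starting from i*i.
            for j in range(i * i, limit, i):
                is_prime[j] = 0
    return sum(is_prime)

# Time one run the way the benchmark does: plenty of work between clock peeks.
start = time.perf_counter()
count = sieve(1_000_000)          # 78498 primes below one million
elapsed = time.perf_counter() - start
```

The inner crossing-out loop is exactly the kind of tight array traversal where a JIT can hoist the bounds check out of the loop, which is why checked Java can keep up with unchecked C++.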
That's the big problem with bounds-checking: it necessarily draws a performance penalty in a world where speed mattered.
Yes, if done naïvely, but a good compiler can actually eliminate most of the overhead (for example, deduce that looping over an array needs to check the bounds only once). Of course, the early compilers for microcomputers were limited in this department.
It is amusing to see claims that weakening the copyright protection of APIs is bad for free software, because if APIs had been considered copyright-protected back in the 1980s, there would not be any free software implementations of operating systems. Free software started by building tools (like Emacs and GCC) running on proprietary Unix versions, and only later replaced the operating system part as well with a mostly compatible implementation. In the case of the BSDs, only part of the system needed replacement, but there we had an ugly lawsuit until AT&T was forced to concede that very little of the original Unix implementation remained in BSD.
Had the Unix API been considered copyrightable back then, neither Linux nor FreeBSD could exist in their present form. Probably not at all, because they would have had to implement an API totally different from any other OS, and would therefore have started with a nonexistent set of applications.
Many have pointed out that Oracle's flagship product actually implements an API originally from IBM (the SQL language for issuing commands to the database). Strong API protection might have meant Oracle also would never have started. Similarly, Microsoft's first product was a re-implementation of DEC's flavour of the BASIC language, which arguably includes an API; I'm pretty sure they did not license it. (At the time it was actually still unclear whether even code could be copyrighted - Bill Gates famously wrote an open letter pleading with microcomputer hobbyists not to copy his BASIC.) Now that Oracle and Microsoft are established companies, they want to block the same routes they used to get started. Understandable in a way, but not something we should encourage.
The field of software already has a serious lock-in problem, which makes it hard for users with an investment in some existing platform to switch vendors, and which promotes monopolies. Strong protection of interfaces will make it even worse. So I fervently hope Google wins. It should also be remembered that the same judgment can be used against Google, if someone makes a competing product that re-implements the Android API and Google sues. If that happens, I for one will certainly root for Google's opponent!
but if you want a Linux machine I'd say go with anyone OTHER than Dell.
All big PC vendors are like that. Dealing with a small-scale PC assembler where you can specify known Linux-friendly components is a better way. The result is also likely to be more upgradeable and repairable, as it will contain only generic parts, instead of funny stuff specially molded for Dell, HP or whatever.
.. +1 As long as you're not making that suggestion to the manufacturers for their consumer PCs. Just imagine the enhancements they could do then, or look at the nonsense the carriers and manufacturers do to Android phones.
On the other hand, a laptop manufacturer that simply pre-loaded an up-to-date, well-known Linux distribution with NO "enhancements" (apart from harmless ones, like a branded default background image) could now stand out from the crowd, and win friends.
Not doing this was where the original mini laptops went badly wrong. They had oddball Linux versions with no software repositories and no community, which required the manufacturer to do all the support, which they typically did not do well and soon dropped (my experience with an Asus Eee PC 901).
"None of the above" does not work. Not short-term, at any rate: maybe there's a place for a third party in the US system, medium-to-long term.
Won't happen until you switch to a proportional voting system. And that will never happen in the U.S. because both the GOP and the Democrats understand it would erode their power.
Windows is outselling Linux on the desktop 90 to 1, what conclusions would you draw out of that?
Almost nobody explicitly buys Windows. It comes with the PC or laptop, whether you want it or not. Nobody buys Linux either: Red Hat Enterprise Linux and the like are really support contracts (the GPL ensures this). Any consumers using Linux get it on a magazine cover DVD, or more commonly legally download it for free. So I really don't know what to conclude. Maybe it is that attempting to sell operating systems is a very bad business these days?
The keyboard is pretty good,
In WP7, you cannot turn off autocorrection in the keyboard. All Finnish users curse this, because the word-guessing method Windows uses is a poor match for the language, and if you don't check the text, you may wind up sending quite lunatic messages... The prediction is also useless in Finnish, because our words tend to be long and have varying inflections. By the time WP7 has a correct suggestion, there are seldom more than 2 letters left to type.
(Not sure if other smartphones do any better in this department).
True. Any competent programmer could learn Fortran in a day or two.
Especially if it is Fortran 77 or later; it is a quite approachable, conventional language. The earlier versions might be more challenging. For example, it is possible to write Fortran statements with no white space at all except for the mandatory leading indent (saves time when punching cards!), and anything after the 72nd column is ignored by classic compilers. I once used this feature to write a program that is both valid Fortran and ANSI C at the same time! I think I still have somewhere the Microsoft Fortran for CP/M with which I tested this hybrid program...
It's Lumia, not Lumina we are nostalgic about. About IoT, MS is not in the running because of licensing. Device makers these days want access to the source of the OS; and also zero cost for the OS per unit, and MS cannot compete with Open Source OS's in this game.
Now the ol' Nokia 710 is getting a bit tired, it's finally time for a change.
Same situation (as I wrote some time earlier). But now, given Nokia has just licensed the brand and IP to a Foxconn-backed phone company with HQ in Finland, I think I will wait and see if I can again get a Nokia smartphone...