I'll bet...
The Wintards will be all over this.
Oh, wait, all they know is point and click. Never mind...
A huge amount of Linux software can be hijacked by hackers from the other side of the internet, thanks to a serious vulnerability in the GNU C Library (glibc). Simply clicking on a link or connecting to a server can lead to remote code execution, allowing scumbags to steal passwords, spy on users, attempt to seize control of …
You bet, because the arrogance and ignorance shown toward professional companies has been rather relentless. While MSFT and, to a lesser extent, Apple have woken up over the years and actually done something about their code etc., Lintards hid behind 'we are better than anyone'. Well, guess what: when you have amateurs hacking at code, this is what you get. Reality finally strikes for the Linux arrogance that it is free of the risks faced by more popular software. The BS wagon must finally have overflowed on the 'FOSS lets us inspect the code, therefore it is better' nonsense.
The good news that might be jumped on by the Linux apologists is that the install base is finally large enough to be worth attacking. Since the mental shields are down, you all make a great target :-)
Yes, but I would say there's still a fair bit of crow for the free *nix crowd to eat on this one.
- The bug has been out in the open for more than a year.
- It seems they DID opt for obscurity while fixing it because it was too sensitive to do in public.
While I regard the second item as prudent, it's pretty much been an article of faith for the Penguinistas that work needs to be done publicly and ALL vulnerabilities disclosed publicly as soon as known. Hell, they've even criticized Google for giving a 90 day grace period on vulnerabilities.
All in all I still think the *nixes are more secure than the commercial offerings. But the Wintards aren't the only fanatics in the flame wars.
Even though it is obvious you are a troll and cannot read, I will bite:
"While MSFT and to a lesser extent Apple have woken up over the years and actually done something about their code" - they are still shit; what are you talking about?
"you have amateurs hacking at code" - Google and Red Hat are amateurs? And BTW, Microsoft have their own Linux distro too.
"Reality finally strikes for Linux arrogance that it is free of the risks faced by more popular sw" - good, lol - Linux is on more computers in the world than Windows or Mac; it just is not on as many desktop PCs.
Microsoft have lots of bugs which they admit they will never fix, and this issue is easily mitigated by a Linux administrator. Further, glibc will be patched to fix this, so yes, FOSS wins again.
"Even though it is obvious you are a troll and cannot read, but I will bite:-"
""While MSFT and to a lesser extent Apple have woken up over the years and actually done something about their code " - they are still shit what are you talking about?"
Which will be why a lot of hackers have turned their attention to applications and plug-ins such as Adobe Reader, Flash Player and Oracle's Java.
""Reality finally strikes for Linux arrogance that it is free of the risks faced by more popular sw" - good lol - Linux is on more computers in the world than Windows or MAC, it just is not on as many desktop PC's.
Microsoft have lots of bugs which they admit they will never fix and this issue is easily mitigated by a Linux administrator. Further, glibc will be patched to fix this, so yes FOSS wins again."
If Linux is on more computers than Windows or OSX, that makes any Linux security hole potentially a FAR more serious concern than any Windows or OSX flaw. It's also likely that a lot of those computers are in places where a Linux administrator is not available, so there will be no one to mitigate the flaw or install patches.
The fact is that the Linux advocates missed a security flaw for the best part of a decade while sitting there and criticising Microsoft, Apple and other big businesses for their security.
Don't get me wrong. I am not a fan of a particular OS and will happily use whatever I need to do something (be it Linux, Windows, Unix or OSX), and I don't believe any OS is 100% secure.
"The fact is that the Linux advocates missed a security flaw for the best part of a decade while sitting there and criticising (sic) Microsoft, Apple and other big businesses for their security."
Not to mention that most administrators have missed the security flaw that Windows has been for the last 2+ decades...
I'll bite... I know I'm stupid for doing so, but I'll bite.
Look at the description of the bug. This is something which should never be able to happen in a proper code-review environment. So far as I know, there's no company or operating system which has a large number of highly skilled developers actively watching their repositories for this kind of stupid.
Linux, FreeBSD, Windows, and Mac OS X all suffer differing levels of stupid. This particular flavor of stupid is actually, as the troll suggests, a special kind of Linux stupid. Let me explain.
While the Linux kernel developers and to some extent the glibc developers have embraced within some constraints the use of data structures, their means of embracing them has always been weird and highly inconvenient.
See, where object-oriented languages make implementing data structures a breeze and can therefore centralize major fixes to the code where the failure exists, structured languages like C tend to rely on some interesting creative tricks to accomplish the same. The GNOME community, for example, implemented GObject, which is the most obscenely inconvenient mechanism ever devised to reproduce the entire C++ language in C... well, next to Microsoft's COM. They go so far as to manually implement vtables, which in a single-inheritance environment doesn't cause much harm, but in multiple inheritance can be a disaster. On top of that, they implemented some of the weirdest RTTI methods I've ever seen.
glibc doesn't use GObject. Instead it tends to borrow the Linux-kernel kind of stupid, which makes weird use of over-inflated monster structures that are REALLY REALLY REALLY efficient, but whose complexity is bonkers. I've seen so many poor uses of rbtree.h and rbtree.c that I shake in my boots whenever a header file includes rbtree.h. I also know that all it would take is one bad line of code in rbtree.c to completely destroy the security of the entire Linux kernel... and it has barely any unit tests at all.
Well... at least if the glibc guys had used a Linux-style data structure, this wouldn't be a problem... but they didn't... they decided it was too much work to use one of the simulated classes. Instead, they reinvented the wheel... with 4 sides on it: they made an array and chose to manage it themselves. That means any security holes or bugs in that code stay localized to it. So, while this class of bug has already been fixed in 5634543 different places in the kernel and glibc, this copy was apparently too much work to fix, so they just left it there. Funny thing is, I probably saw it a long time ago (1999) when I was writing a DNS resolver and peeking at glibc to see how it's done.
Let's be honest though... all operating systems have these problems. Only Lintards and Wintards and so forth are stupid enough to think it's unique to the other guy. If you were actually smarter than an amoeba, you'd realize that all code is insecure, and that Windows and Linux are both pretty decent for what they do but should never be trusted for security. That said, neither should any other code.
I regularly teach how to hack through Checkpoint, Cisco, Palo Alto, etc... firewalls. I show that finding a nifty problem in a kernel driver or better yet in the syscall interface of the kernel can give you a golden ticket without the firewall software ever seeing the malicious code. I've got a few in my toolbox at the moment for Linux if I need them. Darwin is a goldmine of them. Windows is a little trickier since you have to actually dig a bit because it's closed. But, pretty much all operating systems are written like shit.
If you want a personal opinion on which I think is cleanest at the moment, I actually have to give Microsoft the crown. Ever since the introduction of the Windows 8 kernel, it's been such a massive improvement that I like them best. They have some of the best coding practices at the moment and they seem to be taking process really seriously. There were a few shortcomings in retaining legacy driver support in Windows 10 which bit them, but at this time, they're quite good. Mac is pretty close to the bottom. Apple releases more half-finished code than even GNU does these days. Their unit testing is pathetic and I expect there are massive amounts of "fixed it... broke it again... fixed it... broke it again" in the Darwin kernel.
LLVM is maybe the most important project ever in open source, but its quality has been decreasing far too rapidly. The errors and warnings generated by the compiler are generally terrible for identifying the root cause or even the general error location. As such, the quality of the Mac kernel is only as high as it is because of duct tape and crazy glue... possibly some bubble gum as well.
Oh... ummm I forgot...
RedHat generates absolutely massive amounts of "it kinda works, it must be done" code.
Google does pretty well when they're focused. I'm actually often amazed at how much good code comes from them. That said, there's a good bit of slop as well. But would you seriously believe you can employ that many programmers and have nothing but good code?
If RedHat were out of the game, there would be far less new bad code in Linux.... that said... there would be far fewer bug fixes as well. So I'm not sure if it would be a good or a bad thing.
I'm hoping there will be a new small and simple OS which could make a run for being the new "Let's try it" platform.
Aww, the poor Lintard babies had to downvote your post. A little bit too much reality for them. When the revolution comes, you little Lintards will be the first against the wall. Actually, a firing squad is too good for you. We'll just stick you in an elevator whose microcontroller has had its software written to exacting GLoonix Open Sores standards (i.e. code written by twisted sycophantic knob-polishers running around like headless chickens avoiding the retarded vituperation of that quadruple-chinned Finnish bloatwagon named Loonis).
Well - except someone DID inspect the code - Red Hat and Google - and flagged a bug *before* it was exploited (as far as anyone knows).
So, whilst there might be a notion that FOSS is perfect because Granny checks the apache source while Gentoo is installing, the fact that normal people rarely read source does not mean no-one reads the source.
Sure, Google - hehe - bypassed its "90 days and then you're dead" policy in this case - it was afraid that if it was made public, a lot of its own infrastructure would have been at risk... as usual, different standards for you and for your competitors, right?
And how do you know nobody exploited this bug? It has been sitting there unseen for eight years... and this is not the first time I've seen DNS resolving code fail on uncommon, longer yet fully compliant answers (usually because they carry more valid data than most DNS servers return). I have a router whose longer DNS answers made many BusyBox-based devices fail.
"Lintards hid behind the 'we are better than anyone'. Well, guess what: when you have amateurs hacking at code, this is what you get. Reality finally strikes for Linux arrogance that it is free of the risks faced by more popular sw. The BS wagon must finally have overflowed on the 'FOSS, we get to inspect the code, therefore it is better' nonsense."
Can't tell if serious, but assuming you are (posting as AC suggests you might be), you're not nearly as smart as you think you are, for an extremely long list of reasons, not least that you'll note it's Google who dug this one out. Just throwing that out there.
You think you've made some kind of snarky point with your remark. All you've done is highlighted open source working as intended. The problem was identified and is being worked on. No code is perfect, and this highlights the importance of open source.
Now can you tell me how many vulnerabilities are in your closed-source OS, I wonder, from a company that fired nearly all its QA? Oh yeah - you can't.
I am so not looking forward to recreating my image files (virtual HDs as well as ISOs). It doesn't matter which ecosystem gets hit, since I do them all. And I'll be seeing trickle-down from each upstream package.
I couldn't care less about comparing security track records. More eyes, fewer evangelists, please.
The Wintards will be all over this.
To which you only need remind them of the recent bug that left many Windows anti-virus packages with serious holes in them, amongst other things.
I'll say it again. There is no such thing as a completely safe operating system. If you want to avoid being hacked, stay offline!
I get Stratus VOS and VMS confused, but I certainly did zero downtime patching on one or both of them. The OS let you replace a running executable and the runtime migrated all the threads as they terminated, you could even migrate threads between nodes on a cluster thereby enabling zero downtime firmware upgrades.
That's more likely VOS, but I'm no expert with Stratus kit.
VMS still needed reboots for certain library updates (yes, I'm looking at you, C RTL - you were usually the worst offender), and if you had to AUTOGEN the system to update certain system parameters. Clusters might achieve uptime measured in years (if you could reboot individual nodes to apply updates) but standalone boxes, not so much.
Autogen mostly (maybe not entirely) went away when VMS systems with sensible amounts of memory arrived. Much of Autogen was about tuning the allocation of limited real physical resources in the most appropriate way for a given system's workload, in a way which widely used OSes don't bother with. When the system has multiple GB of memory, that's not always a big issue, and that now includes VMS too. Autogen's still there if you want it.
VMS itself is still with us, the port to x86-64 is announced and timetabled, and VMS development and support is now being done by people with clue outside HP (with HP's agreement). Many of those people are well known from previous roles when VMS was a DEC product.
http://www.vmssoftware.com/
(no connection except as an observer)
"You'd have to jump through a lot of hoops to build one these days if for some odd reason you wanted to."
What's so odd about it?
For example, FreeBSD has had the /rescue directory for quite some time now; it's basically a directory packed with statically linked binaries (from bzip2 to mount, sed and tar and a whole lot more), and the reasoning behind it is quite simple: if for some reason your libraries become unavailable (for example because the /usr filesystem crashed, an installation went wrong, or a human error removed the wrong file(s)), you can always fall back to these tools.
I've never needed it myself so far, but I still think that there's nothing odd about the underlying philosophy.
If interested, the rescue(8) manual page has more information on this.
No one will own up to that. So much for the 'lots of eyes on the code' BS. Since there is no payback for actually reviewing code, it doesn't get done. Commercial companies OTOH have a vested interest in improving their products, hence the focus from MS and Apple, and even a bit from Google, on proactively finding holes and fixing them.
It seems there's some history starting in 2000 with a vuln which was apparently fixed in 2013 for version 2.18. I just checked a freshly updated Debian system and it is running a much older version. I guess there is some good reason to keep using the older versions if that's what Debian has been doing. Can someone here explain this?