Linus calls Linux 'bloated and huge'

Linux creator Linus Torvalds says the open source kernel has become "bloated and huge," with no midriff-slimming diet plan in sight. During a roundtable discussion at LinuxCon in Portland, Oregon this afternoon, moderator and Novell distinguished engineer James Bottomley asked Torvalds whether Linux kernel features were being …

COMMENTS

This topic is closed for new posts.

  1. Lou Gosselin

    No surprises here.

    Maybe Linus will finally swallow his pride and give up his stubborn, decade-long argument for feature-bloated monolithic kernels.

    Today, with so many drivers and so many different Linux kernel flavors, it makes less and less sense to stick with a monolithic kernel, where code glitches become impossible to isolate and recover from. I'm sick of having to recompile drivers over and over again against each new kernel version.

    Instead, an extensible microkernel approach should eventually be adopted, such that the base kernel is as simple as possible. Also, drivers should be compiled once and remain usable on any kernel with the same major version number.

    Of course, Linus' real reason for pushing a monolithic kernel with no binary compatibility is that it fits his philosophy of making it as difficult as possible to release binary-only drivers. Linus has himself to blame that the source tree has gotten out of control.

  2. Anonymous Coward
    Gates Horns

    I Wonder...

    I wonder when the first Microsoft-commissioned white papers that quote Linus will be issued? I wonder if they are stupid enough to use the performance gains of Windows 7 over Vista as proof of the superiority of Windows development, while ignoring the gigantic performance degradation of Vista over XP?

  3. Victor 2

    comments...

    "Linux creator Linus Torvalds says the open source OS has become "bloated and huge," with no midriff-slimming diet plan in sight."

    Linux is not an OS, it's a kernel.

    "Asked what the community is doing to solve this, he balked. "Uh, I'd love to say we have a plan," Torvalds replied to applause and chuckles from the audience."

    Yes, that has always been Linux's problem... no plan, no direction, no engineering, no thinking... only people adding more and more stuff, then replacing some of it with new stuff, changing the whole ABI from one release to the next... I wouldn't applaud that; it's kind of sad, actually.

    "He maintains, however, that stability is not a problem. "I think we've been pretty stable," he said."

    I beg to differ.

    The plan... it's simple: Kill the Batman.

  4. Anonymous Coward
    Anonymous Coward

    what problem?

    If the kernel is 2% slower per year, but the hardware is 2-10% faster per year... then there is no net problem, is there?

  5. Steen Hive
    FAIL

    @Lou Gosselin

    "Maybe Linus will finally overlook his pride and give up his stubborn decade long argument for feature-bloated monolithic kernels."

    Running drivers in user-space would help code-bloat and performance how, exactly? The drivers aren't going to get any smaller - probably the reverse given the relaxed requirements - and running drivers in user-space is probably the best way to kill kernel performance known to man. Case in point - xnu has to be hybrid to avoid being unusable, and it runs like a mangy dog nonetheless.

  6. Phil Koenig
    FAIL

    Code bloat vs Moore's law etc.

    AC wrote: "If the kernel is 2% slower per year, but the hardware is 2-10% faster per year... then there is no net problem, is there?"

    Why yes, yes there is.

    Personally, I think it's a damn shame that with today's fire-breathing CPUs, there are many tasks I could do far quicker on my 20-year-old Commodore Amiga than on some over-bloated modern monster with an OS that takes 1-2GB of RAM just to boot.

    In my personal version of utopia, designing products like that would result in jail time for the coders.

  7. Seán

    Where is the journalism?

    What does RMS have to say? Linus may think he's Jesus, but RMS is YHWH.

  8. Adam Williamson 1
    FAIL

    lou:

    you mean, we have Linus to thank for the fact that all our hardware isn't run by black box, binary-only drivers?

    Thanks, Linus!

  9. Anonymous Coward
    FAIL

    Slowlaris

    So, the Slowlaris Kernel gets faster all the time, while the Linux kernel gets slower.

    We quickly need to find a new name for Linux, like Snailux.

  10. Ramon Cahenzli

    Windows comparison unfair?

    I wonder if it's fair to compare kernel sizes when the Windows kernel doesn't support nearly as much hardware and as many exotic devices as the Linux one?

  11. Wortel
    Grenade

    So,

    Linus answers honestly. I don't see a problem there. Some of you negative commenters forget this man enabled us to have a very flexible OS in every corner of our lives, with all freedoms, in as short a timeframe as 15 years. Have you forgotten how long ago it was that Microsoft started putting out an OS? What are its limits? Yeah, I'll stop there.

  12. Mectron
    Coat

    Wow....

    As a kernel gains more features, it becomes bloated (and full of bugs).

    Maybe now the Linuxoids will think twice before bashing Windows?

  13. This post has been deleted by its author

  14. Anton Ivanov
    FAIL

    Re: No surprises here.

    First of all, Linus is right. I have had at least two machines which were perfectly usable as media center clients tip over into unusability. They are now too slow (going from 2.6.18 to 2.6.26).

    The last 10 releases roughly cover the period since NFSv4 fully went in. The ghastly thing has brought a few regressions with not 2%, but 92% performance drops. Even with most of the problems fixed, there is a boatload of places screaming for optimisation as of the last time I looked (2.6.28): iteration across all elements is used instead of hashing, and so on. Add to that slowdowns from moving portions of USB, parallel, etc. to userland (hello, microkernel fans) and the picture is more or less complete. It definitely needs a feature freeze for at least a year in many areas until the code is sped up and optimised properly. Microkernel has nothing to do with it.

    Using iteration to walk an ever-expanding permissions cache will be slow in a microkernel, same as in a monolithic one.
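
    (To make that concrete, here is a toy sketch in C of the difference; the "permissions cache" and its fields are invented for illustration and are not taken from the actual kernel code.)

    ```c
    /* Toy illustration: looking an entry up by walking one long list is O(n),
     * so it keeps getting slower as the cache grows; hashing into buckets
     * first keeps each chain short.  Names and structure are made up. */
    #include <stddef.h>

    #define NBUCKETS 256

    struct perm_entry {
        unsigned int uid;
        unsigned int perms;
        struct perm_entry *next;
    };

    /* Iteration across all elements: cost grows with the whole cache. */
    struct perm_entry *lookup_linear(struct perm_entry *head, unsigned int uid)
    {
        for (struct perm_entry *e = head; e; e = e->next)
            if (e->uid == uid)
                return e;
        return NULL;
    }

    /* Hash first, then walk one short chain: cost stays roughly constant. */
    struct perm_entry *lookup_hashed(struct perm_entry *buckets[NBUCKETS],
                                     unsigned int uid)
    {
        for (struct perm_entry *e = buckets[uid % NBUCKETS]; e; e = e->next)
            if (e->uid == uid)
                return e;
        return NULL;
    }
    ```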

  15. Doug Glass
    Go

    Bloat's The Thing

    "....streamlined, small, hyper-efficient kernel..." Looks like Windows envy to me spawned by a perceived need to "catch up" to Windows and win the hearts of the unwashed masses.

    Before the age of indoor plumbing, the only water leaks you had were in the roof. With the advent of in-wall water piping came leaks, corrosion, and clogs which mandated a greater need for maintenance. Whenever you add features, you add problems; that's just the nature of the beast.

    If Linux actually expects to compete with Microsoft Windows, then bloat is the way to go. Well, unless you could convince the common user to accept less, which isn't likely.

    Linus T. is still living in dreamland.

  16. ratfox
    Thumb Up

    Good

    I like it when people do not spam the world with marketing.

  17. Sam Liddicott
    Linux

    it's only bloated if you build and load it all

    The kernel source is bloated, but it is only a template for a kernel.

    It's not a requirement to build and load it all.

    Many small and unbloated kernels are built from the bloated source.

  18. A J Stiles
    Boffin

    What everyone is missing

    The Linux kernel comes in Source Code form. If you're really desperate to squeeze every last trace of performance out of it, you can trim it right down to just the bits you need. And as recently as five years ago, that's exactly what people were doing. With 2.2 and 2.4, it was entirely normal to compile your own kernel: you compiled the filesystem and chipset drivers hard into the kernel, and built modules only for the hardware you actually had (or thought you might acquire).
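
    (For instance, a hand-trimmed .config might contain entries along these lines - the symbols below are only illustrative of the idea, and the exact names vary between kernel versions:)

    ```
    # Built hard into the kernel: just what this particular box boots from
    CONFIG_EXT3_FS=y
    CONFIG_BLK_DEV_IDE=y

    # Built as modules: hardware we have, or think we might acquire
    CONFIG_USB_STORAGE=m

    # Everything else left out entirely
    # CONFIG_ISDN is not set
    ```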

    Now that Linux is mainstream, and now that processors are ridiculously fast (probably due to the demands made by other operating systems), we've simply got past the point where anybody can be bothered to strip it down any more: you can spend more time deciding what not to include in the kernel than you actually save by leaving it out. It probably isn't helping that hardware manufacturers all insist on making products that are largely incompatible with one another, thus requiring separate kernel modules, either.

    But one thing is certain: If there's a gap in the market for a stripped-down Linux, it *will* be filled, one way or another.

    This is one thing Apple actually got right. By controlling the hardware on which OSX runs, they at least know what they need to put in the kernel and what they can leave out.

  19. northern monkey
    Megaphone

    @Anonymous Coward 2:20

    Aaaah - I hate that argument! Just because computers are getting faster does not mean we should write our code less well, less efficiently, 'because we have room to'. Write the same streamlined, efficient code you wrote for old, "slow", memory-challenged machines and it'll run like s**t off a stick. Write bloated, inefficient code because there's no need to bother putting the effort into good programming and it will run OK, but give it to a colleague with an older, slower machine and you're stuck.

    They should put a contract on every copy of every programming language tutorial book, every computer science course, every training course, that requires the owner/attendee to promise to endeavour to write efficient code. If they can't promise that, or don't see why they should, then they should be shown the door.

  20. seanj
    Stop

    Re: Funny.

    Not a fanboy of either MS or Linux, use them both at work and have no real preference (maybe I'm just not techie enough!), but:

    "Linux getting slower and bloated while Windows (7) getting faster and leaner."

    Seriously? That's your argument?

    Faster and leaner than what? Vista? An OS that Microsoft wants condemned to the recycle bin of history? That's like gaining six stone last week but boasting "I've lost 2lb this week, aren't I amazing!" It just doesn't fly...

  21. Crazy Operations Guy
    Unhappy

    Why closed source software works.

    Projects this large, especially OSes, require someone there to kill certain ideas before they become a problem. It's this sort of feel-good attitude infesting the open-source community that is killing it. What we have are developers who try to contribute by adding functions or extra features to things, but who aren't good enough to make them lean and responsive. But no one wants to tell them that their code sucks and that they should do more practice and studying before it can be included. They don't want to do this because it will make them look like the bad guy, look like they're against the community, and be flamed to hell and back.

    The community reminds me of where society as a whole is going: this whole "we are all winners and we should accept our differences", warm-fuzzy-feeling political correctness. This sort of bland, non-offensive, culture-neutral crap. Yes, people should be treated fairly, based on their merits, but everyone should be told when they make mistakes. We only get better if we know we made a mistake. I mean, we are all adults; we should be able to handle such minor things. If someone tells me "your work sucks", I am not going to take it personally, I am going to work and try to prove him wrong.

    Really, it's the community that is destroying itself. I was once a zealot myself, but then I saw the dark side of the community. I saw the arrogance of the senior community members, constantly believing that they were always right and 'correcting' other people's work without giving them any information on how they could improve their code. I saw the constant in-fighting and power struggles within projects, each person believing that they should be in charge. I've seen completely new programmers (usually a fresh CS graduate, or sometimes a high-school student) who've dived right into projects, messing up code, trying to apply every rule they learned in school (usually rules that only apply to writing in BASIC or Java) and destroying some of the most elegant code I've ever seen, especially by not documenting their code (or sometimes over-documenting it; take a look at the config file for Lynx if you want an example of what I'm talking about). I've seen hard-working, highly skilled developers sidelined because they just don't have the courage to say what they think. But the worst thing I saw was the near unlimited army of users constantly white-washing everything, trying to paint everything with puppies and kittens, completely ignoring the elephants in the room. These are the ones that pushed me to leave open source, and programming in general, behind, and become the cynical, miserable bastard I turned into when I turned 21.

    I congratulate Linus on coming out and addressing what has been ignored for the last several years, and I hope more people start to speak out so that open source may once again be respectable in my eyes. He's just a few years too late. I wholly agree that the kernel has become too bloated; there is so much that doesn't need to be there. There is far too much support for far too many devices built in. Sure, it will support some obscure system bus that was made by some manufacturer for only two years, but who the fuck cares? When was it said that the needs of the few have to outweigh the needs of everyone else? Why has society done this too? Why must I censor myself because it may offend someone? When did we become slaves to lawsuits and fines, afraid to say even the smallest thing for fear of alienating a small group of people? When did we move from 'rule of the majority, protection for the minority' to 'rule by special interest, fuck the majority, screw the other 98% of society, they don't know what it means to be oppressed'? When?

  22. Anonymous Coward
    Anonymous Coward

    All Aboard the Minix Train

    The Linux train will be pulling into the station soon; passengers wishing to continue their journey should proceed to platform 3, where the Minix bullet train awaits.

  23. tiggertaebo
    Grenade

    Always a trade off

    Microsoft learned the hard way with Vista that OS efficiency is becoming important to the "average user" again - Vista felt like treacle on hardware that XP felt like lightning on, without giving the user that all-important sense that it was really doing more for them. Linux needs to be careful not to cross this line, if indeed it hasn't already - these days, when selecting an OS for my older boxes, I generally find XP gives me a better experience for the resources.

    I know there are some nice skinny distros out there that will run quite nicely on the older hardware, but often this involves compromising either the user experience or the ease of access to the software I want - sure, these are generally things you can work around, but when XP is going to do the job for a fraction of the effort, why bother?

  24. Ken Hagan Gold badge
    Troll

    Re: Bloat's the thing

    ' "....streamlined, small, hyper-efficient kernel..." Looks like Windows envy to me'

    Hahahahahahahahahahahahahaha!

    And as for the earlier "Linux getting slower and bloated while Windows (7) getting faster and leaner.", have you actually *used* Windows recently, Mosh? As a software developer I need to regularly flip between XP, Vista and Win7 on the same hardware, and whether you are at the low or high end, XP is *way* faster every time.

    Linus is just being honest. *All* operating systems are getting bloated, even his.

  25. Anonymous Coward
    Anonymous Coward

    It's not all doom and gloom.

    When I upgraded my Lenovo laptop from Debian 3.1 to Debian 4.0 it got noticeably faster at booting.

    On the other hand, the keyboard and touchpad occasionally stop working now and I don't know how to bring them back to life without a reboot. It might be a hardware problem, and if it's software then I would guess it's more likely a problem in the X.Org driver than the kernel. It doesn't happen often enough for me to be sufficiently motivated to investigate further, and you can blame the speed of rebooting for that. :-)

    I see Debian 5.0 is out now. Do I risk it?

  26. windywoo
    Thumb Up

    If this had been MS

    It would have been marketed as feature-rich and compatible. If this were Apple, we wouldn't have heard anything, and the fans would make excuses that it's feature-rich and compatible.

    Linux may lack a bit of direction, but I'd rather have people who create the OS for the love of doing it than because it makes them big bucks.

  27. Anonymous Coward
    Anonymous Coward

    Re: Funny

    Yeah, if the trend continues, somewhere around 2099 we'll all be switching to Windows.

    Quickfix: stick a pretty front end on the kernel compiler that automagically does all the complex stuff depending on your choices for the simple stuff; add some fancy hardware analysis bits; slap the whole lot in a bootable CD image; bingo - custom lean kernels.

  28. Anonymous Coward
    Anonymous Coward

    interesting..

    I haven't noticed it myself, but if Linus says so, I'm inclined to take his word for it. However, his "slow and bloated" kernel is still noticeably slimmer and quicker than any other kernel that I run - and hell, my netbook flies under Linux and crawls under Windows (though of course, some of this is down to Windows userspace bloat, too).

  29. Dr. Mouse

    @Sam Liddicott

    Yes and No.

    Compiling your own kernel with only the features you require will always be quicker than running the 'catch-all' kernels supplied by the distros.

    However, the problem comes when core parts are modified to support new features, and that code is not fully optimised. You need that chunk of the kernel in your own, hand-rolled kernel, but it is slower than the code in the previous release due to modifications, so your new kernel is more bloated and slower.

  30. Kebabbert

    Linux should aim for quality

    instead of quantity. The Linux code base is 10 million lines of code. ONE SINGLE KERNEL. The entire Windows NT was 10 million LOC. I think Linus should reconsider and have a plan, instead of redesigning everything all the time.

    When he states that they fix bugs faster than they add code, so what? The code they bug-fixed will soon be swapped out for new code that contains new bugs. It doesn't matter if they fix bugs, because that code will soon be swapped out, again and again. This is the reason Linux has no stable ABI, and this is one of the prime reasons Linux is unstable.

    Even Linux kernel developer Andrew Morton complains about the declining quality of the Linux kernel. His words:

    http://lwn.net/Articles/285088/

    Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem. Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?

    A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.

  31. Owen Williams
    Linux

    Wot he said (Sam Liddicott)

    Otherwise the Acer Aspire One wouldn't have you logged in in 12 seconds. And the kernel is stable. And the kernel is faster than Windows and MacOSX. Windows 7 and Snow Leopard are only now trying to speed things up. And how? By dropping features. With Linux the user can choose the features he wants to run with.

  32. Ally G

    Whatever Happened to....

    Recompiling the kernel for specific machines, for speed?

  33. Jason Bloomberg Silver badge
    Linux

    The bigger picture

    Any distro (Linux, Windows or other) consists of a number of core essentials: kernel, drivers, protocol stacks, file system handlers and applications. The problem seems to be in how that is divvied up - what's installable at runtime and what's compiled into the kernel - and it stands to reason that the more that is pre-compiled into the kernel, or essential to load at runtime, the more bloated the OS as a whole becomes.
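
    (To be concrete about "installable at runtime": a driver that lives outside the kernel image is just a loadable module. A minimal sketch of the standard skeleton is below - it is built out-of-tree against the kernel headers rather than as a normal userspace program, and it is purely illustrative, not a real driver.)

    ```c
    /* Minimal loadable-module skeleton (illustrative only).  Inserted at
     * runtime with insmod/modprobe and removed with rmmod; nothing here is
     * compiled into the kernel image itself. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Trivial example of a runtime-loadable module");
    ```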

    As Sam Liddicott says, it's entirely possible to build lean Linux kernels, and distros for that matter, and XP Embedded does the same for Windows. But that approach creates OSes for specific machines or a subset of them, not a fit-for-purpose OS for anything and everything a user may have or want in the future, which is what a desktop environment is.

    Disk footprint is entirely different from run-time memory footprint and execution speed, so the approach to bloat should be towards lean and mean kernels, with the user able to add or remove drivers, protocol stacks, file systems and apps at runtime. Drivers in user-space may not be acceptable, as Steen Hive notes, but that doesn't mean they have to be pre-compiled into the kernel. Lou Gosselin is right to put the blame on "feature-bloated monolithic kernels", and it's time to fix that; Linus has dug his own hole and needs to get out of it.

    Bloat is anything a particular user doesn't want or need but cannot get rid of, so come on, Linux guys and gals - and Linus in particular - show us how it should be done; don't just accept the "unacceptable but unavoidable" shrug of the shoulders and resignation to it.

  34. Tom 7

    At least with linux you can cut out the bloat

    and compile your own kernel if you so wish.

    Something Mosh Jahan will wish he could do when W7 has had a couple of service packs and is back to being Vista.

  35. TeeCee Gold badge
    Grenade

    Re: what problem? (AC 02:20)

    I see. Who are you astroturfing for, Microsoft or Intel?

  36. Doug Glass
    FAIL

    @Wortel

    Yeah, and it's a shame the marketing is so dismal too.

  37. Anonymous Coward
    Anonymous Coward

    How many psychiatrists...

    ...does it take to change a light bulb?

    Only one, but the light bulb must "want" to be changed.

    At least Linus admits the problem and states it openly; that's the first step towards a solution.

    Microsoft, along with many other commercial organisations, is unable to admit to problems like this, not because they are "evil", but because it becomes a financial issue with stock prices falling and investors and internal politics getting in the way of engineering solutions.

    Fortunately the FOSS world is able to say and do things that commercial organisations can't, and that at least puts the commercial organisations under some pressure to improve products.

    If Windows 7 is better than Vista, it will be in some part due to competition from Linux (remember the first Netbooks?); would Windows 7 have even been delivered without the threat from Linux, or would Microsoft still be trying to push Vista onto us?

  38. Rod MacLean
    Joke

    RE: Funny!

    Mosh Jahan wrote: "Linux getting slower and bloated while Windows (7) getting faster and leaner."

    Yeah, that's like saying "I saw Kate Moss eating a few chips - but Meat Loaf is on a diet"

  39. gerryg
    Linux

    Would that be the same performance-lite kernel...

    ...that runs on 19 of the 20 fastest supercomputers and the vast majority of the next 480?

    Oh, it's customisable, you say? You don't need to use everything it comes with?

    Who'd have thought it?

  40. Edwin
    Linux

    PEBKAC

    For years, we have known that compiling your own kernel is cool and results in a faster kernel.

    For years, we have complained that ordinary users won't use Linux.

    For years, ordinary users have feared Linux because it's not easy to install or use.

    More recent distros are much more user-friendly precisely because the kernel is so bloated.

    So what do we want - a universal kernel that will run on pretty much anything, or some form of hideous hardware scanning autocompilers that recompile the kernel as soon as you plug in a new USB device?

    It's a little like Apple's iPhone business model: it sucks, but 90% of the planet likes it, so we happy few will have to live with it.

  41. James Penketh
    Linux

    @Crazy Operations Guy

    "But no one wants to tell them that their code sucks and that they should do more practice and studying before it can be included."

    Try submitting some sucky code to the Linux kernel devs, and someone will definitely tell you that your code sucks.

    "They don't want to do this because it will make them look like the bad guy, look like he is against the community and be flamed to hell and back."

    When you get a major kernel dev, or even Linus himself, saying this, people tend to side with them. ;)

    Not that I've submitted code to the kernel; I'm not good at C. But I have made a fool of myself on LKML and the replies were a little scathing. Certainly taught me to double-check things before clicking "Post".

  42. fishman

    In the past

    In the past, someone would come up with some metric showing how the Linux kernel had slowed down from previous releases. The next efforts were then spent on fixing whatever had slowed the kernel down.

    So all they need to do is develop a set of metrics that demonstrate and quantify the problem, determine where the problems lie in the code, and then fix it. Easy :).
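
    (One crude sketch of such a metric - not how the kernel developers actually benchmark, just an illustration of the idea: time a cheap syscall in a tight loop, run the same binary under two kernel versions, and compare the per-call cost.)

    ```c
    /* Crude syscall-latency probe: time N getpid() round trips and report the
     * average cost per call.  syscall(SYS_getpid) is used so glibc cannot
     * cache the result in userspace.  Compile with: gcc -O2 probe.c -lrt */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const long iters = 1000000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);        /* forces a real kernel entry each time */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("%.1f ns per getpid() syscall\n", ns / iters);
        return 0;
    }
    ```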

  43. Anonymous Coward
    Flame

    @Mectron

    Yeah, maybe so, but at least we can have an honest and open discussion about the problems in Linux, with a possibility of some action. Windows, what have you got? A lot of whingeing and a hell of a lot of praying and hoping Billy and the boys will fix your problem! If not, never mind, there'll be a new version of the OS along in, oh, let's say seven years' time?!

  44. The First Dave

    @Ken Hagan

    Correct me if I am wrong, but part of the advertising for Snow Leopard is that it is smaller and faster than the previous version. I don't suppose Linus really wants to get into a comparison with OSX, though?

  45. Sooty

    @Northern Monkey & Others

    "Just because computers are getting faster does not mean we should write our code less well, less efficiently 'because we have room too'. Write the same streamlined efficient code you wrote for old, "slow", memory challenged machines "

    This sort of thing shows you up as 'not a software developer' or if you are, please god don't let you be one that i have to work with.

    Old "streamlined" software was streamlined because it had to be: it sacrificed stability and maintainability in order to maintain execution speed and a small memory footprint. People don't write code that streamlined any more "on purpose", as it leads to problems; it's often easier to re-write large chunks of it than to make the smallest updates. People wrote it like that because they absolutely had to, not because they wanted to!

    Would you rather your software crashed constantly, or whenever it did just gave you an unhandled exception, like in the good old days? Or would you rather the coder used a few extremely inefficient checks on responses to make sure it either continued working or gave meaningful errors?

    As a developer, I know it's much slower to read things dynamically from config files than to hard-code them, but I still read them dynamically, as it means I can update them in seconds, rather than searching for masses of hard-coded values and recompiling the whole lot.
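
    (A trivial C sketch of that trade-off, with a made-up file name and key: the hard-coded constant costs nothing at run time, while the lookup pays for file I/O and parsing, but can be changed without recompiling.)

    ```c
    /* Made-up example: fetch "max_connections" from a key=value config file
     * instead of hard-coding it.  Slower, but changeable without a recompile. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int config_get_int(const char *path, const char *key, int fallback)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return fallback;

        char line[256];
        int value = fallback;
        size_t keylen = strlen(key);

        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, key, keylen) == 0 && line[keylen] == '=') {
                value = atoi(line + keylen + 1);
                break;
            }
        }
        fclose(f);
        return value;
    }

    int main(void)
    {
        /* enum { MAX_CONNECTIONS = 64 };   <- the hard-coded alternative */
        int max_connections = config_get_int("app.conf", "max_connections", 64);
        printf("max_connections = %d\n", max_connections);
        return 0;
    }
    ```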

    Remember the Y2K bug: two-digit dates were used to streamline memory usage and processing power, as both were expensive back then! If that happened again with modern code (a significant change to the date format), I would hope it would just be a small update to the date type/class and a recompile.

    Perhaps kernel code is different, but any halfway competent developer of most other software will be continually sacrificing execution speed and memory usage for maintainability and stability, not just because they feel like it. Functions? No chance - all that mucking about swapping values on and off the stack is inefficient! Try/catch blocks are horrifically inefficient, but they're the core of most error trapping!

    There are reasons that companies/banks hire armies of assembler developers to make the smallest changes to their batch processing, as small inefficiencies can make hours of difference at those volumes, but it takes a long time to make negligibly small changes, and the result is mostly indecipherable to another person without taking a lot of time to investigate. Not to mention the smallest error causes the whole thing to fall over.

    Yes, some software is inefficient without any need, and that should be eliminated, but don't just assume that because newer software has more overhead and runs slower than older stuff, there is necessarily anything wrong with it!

  46. Anonymous Coward
    Anonymous Coward

    So ...

    Who here actually thinks that Linux is slower and less efficient than Windows? Even with a full-bloat kernel, any Linux distro will match or exceed Windows' performance every time, even on inferior hardware.

    Try file serving, database access, high-throughput processes, processor-intensive loops and such like - Linux tends to seriously outperform Windows every time.

    Now, since most real-world Linux custom installs have a custom-compiled kernel anyhow, this is a bit of a non-story.

  47. AndrewG

    The first thing I always do

    Is recompile the kernel to be modular and only support the hardware I've got installed... Mind you, the source is now HUUUUGE, but the source is supposed to cover everything it's installed on, and most (not all) of the main distros run a big monolithic kernel to make sure you don't have any problems at install.

  48. A J Stiles
    Thumb Up

    @ Edwin

    "So what do we want - a universal kernel that will run on pretty much anything, or some form of hideous hardware scanning autocompilers that recompile the kernel as soon as you plug in a new USB device?"

    Actually, that's got legs.

    Put a bloated catch-all kernel on the install disc, but also provide an advanced "super racing tune-up" installation option that will compile a brand new kernel with support for the auto-detected hardware and any more that the user selected (either from a menu, or just by having the user plug in their USB devices one at a time and auto-detecting them). After all, we know which modules we loaded in the first place, and which ones go with the new devices ..... well, they're obviously the ones we need to compile. Display a warning that this will take a long time and this is the last chance to bail out. Use a bootloader that supports multiple kernels, so you can start up in "super fast" mode (with your custom kernel) or "failsafe but slower" mode (with the stock one).

    Now, if the user later acquires a new piece of hardware for which they didn't compile a driver module but the Source is in the Tree, the required module can always be built at a later date. Even if it is some device that needs its driver to be compiled "hard" into the kernel, or requires a new Kernel Source Tree to be downloaded, it will only be necessary to boot failsafe and rebuild the custom kernel.

    This whole process can of course be almost fully automated, perhaps with a progress bar or even an amusing slideshow, for the sake of people who presumably have difficulty remembering how to spell "make".

  49. HarryCambs

    He should have thought about it to begin with

    Having every single driver in the universe shoehorned into the kernel definitely accounts for the vast majority of the bloat.

    The Linux community is still extremely hostile towards drivers that are not bolted inside the kernel.

    So the problem was there from the very conception.

  50. Jim T

    @Crazy Operations Guy

    Seriously, fact check. Linus and his lieutenants(sp?) have absolutely no problem tearing your patch to shreds, rejecting it because it's useless or just plain doesn't fit with the kernel.

    They have no problem with being seen as the bad guys.
