Hardware has never been better, but it isn't a licence for code bloat

My iPhone 6 recently upgraded itself to iOS 11. And guess what – it's become noticeably slower. This is no surprise, of course, as it's the same on every platform known to man. The new version is slower than the old. It's tempting to scream "code bloat" but that's not necessarily fair because new stuff usually has extra …

  1. Anonymous Coward
    Anonymous Coward

    It is what you get with bolt on development

    "Hey the system it fast enough, we don't need to bother with optimization that will take time, lets just bolt some more crap on and call it a new system"

    On a par with the one about how a carpenter also deserves to profit from his work, even when every other carpenter makes work that doesn't break when someone sits on it

    1. fobobob

      Re: It is what you get with bolt on development

      Needs more JQuery. I've got a fever, and the only cure is more JQuery.

    2. Anonymous Coward
      Anonymous Coward

      One of my favorite languages

      One of my favorite programming languages, PowerBASIC, dealt with this issue quite a while ago. They added a BLOAT metacommand to their compiler. From the compiler & development environment help file:

      Purpose

      Artificially inflate the disk image size of a compiled program.

      Syntax

      #BLOAT size_expression

      Remarks

      #BLOAT allows the creation of artificially bloated program files on disk, in order to match or exceed that generated by competing "BloatWare" compilers. #BLOAT does not affect the memory image size (running size) of a compiled program.

      size_expression

      The size_expression parameter is a simple Long-integer expression that specifies the total desired size of the compiled program's disk image, but is ignored if it is smaller than the actual program size. #BLOAT uses sections of the actual compiled code to fill and obfuscate the portion added to the file.

      While #BLOAT adds no true merit to the technical efficiency of the compiled code, there are a number of reasons for its use, including:

      1. To allow "BloatWare" programmers to feel more comfortable when using PowerBASIC.

      2. To impress project leaders/managers with the volume of executable code created.

      3. To allay the fears of uninformed customers who may mistakenly infer that "such tiny programs couldn't possibly do everything that..."

      4. To make certain versions of a program more readily identifiable simply by examining the size of the file on disk.

      5. To improve convolution of the contents of the executable disk image, because the bloat region appears to contain executable code.

    3. Oh Homer
      Flame

      Re: It is what you get with bolt on development

      That's one cause. There are many.

      The most common affliction is commercial projects that aim for maximum profit at minimal effort, typically by utilising high level abstraction (minimising programming skill requirements and time) in vast, highly generic, shared third-party resources, with a huge redundancy overhead because the project only actually uses a tiny proportion of those shared resources.

      Sometimes those resources only occupy disk space (but hey, "disks are cheap"), but often the entire resource(s) need(s) to be loaded at runtime too, eating memory (but hey, "RAM is ... oh") and sometimes even CPU cycles ("who cares, today's processors are fast").

      We don't even need to consider the subtler aspects of assembler optimisation, optimal array sorting, and other speed tricks, etc., or rather the lack thereof, because right there you're already looking at 99% of the problem.

      In summary, modern software development is more like self-assembly furniture than carpentry. All the actual engineering was done once, as a template, then mass produced, and the end result is a vast warehouse full of junk that is barely fit for purpose.

      But that's OK though, because it's "cheap". Oh, and the vendor makes lots of money. Mission accomplished. The fact that you and I have to endure longer and longer loading times, cripplingly slow execution, and an endless upgrade treadmill to compensate, is simply irrelevant to the one and only objective of today's software development ... money.

  2. MacroRodent
    Unhappy

    Very good points!

    I would just like to add another: Nowadays it is rare that your program is the only one running on your computer, especially on interactive systems. This means all those gigabytes of memory and gigahertz of CPU are not all for your code. If you code as if they were, the user will be very annoyed when switching to another task, finding the machine grinds to a halt for a while. Sadly, most of the stuff on a typical Windows (or Linux!) desktop behaves like this. Frankly, the performance experience of using a 2017 Windows desktop is very much like using a 1997 Windows desktop, except for some added chrome and glitz...

    1. Anonymous Coward
      Anonymous Coward

      Re: Very good points!

      It is the cost of allowing a developer to load their own libraries with the OS but forcing any other developer to use their compilers if they want to use anything from the libraries. Code re-usability goes out the window but is good for selling your own devkit.

      Personally I prefer a good monolith that runs even when they randomly decide to change their libraries, so as to prevent anyone piggybacking off their code without buying their devkit

      1. Charles 9

        Re: Very good points!

        But now you see the tradeoff. Monoliths take up more memory due to code duplication, so it's less space-efficient, which can be an issue if you're running a bunch of them at once. It's basically a situation where there is no one answer for everything. After all, not everyone needs a compact all-in-one job like busybox, but if you had to work with a relatively tiny footprint, you'd see the point.

      2. Anonymous Coward
        Anonymous Coward

        Re: Very good points!

        Today, the space occupied by code in memory is often a tiny fraction of the memory allocated by the code for data. Shared libraries were very important when your machine had just a few KB or MB of memory, and disk space was small too, but far less important today - code size hasn't increased linearly with the amount of memory used for data.

        Today the issue has shifted more to how you keep a system up to date and vulnerability-free. If you have just one copy of, for example, OpenSSL, it's far easier to keep it up to date than having n applications each with their own copy of the code which needs to be updated separately (hoping an update is available).

        1. MacroRodent
          Boffin

          Re: Very good points!

          > Shared libraries were very important when your machine had just a few KB or MB of memory, and disk space was small too, but far less important today

          There also is another issue here: memory speed has not kept up with CPU speed, even if the amount of memory has grown. This makes fast CPU caches important for getting any kind of performance, but they have not grown as much. With shared libraries, it is more likely the library code is in the cache than if you have N copies of the library. This also applies to other sources of code bloat. So code size still matters, but for slightly different reasons than before.

          1. This post has been deleted by its author

            1. Yet Another Anonymous coward Silver badge

              Re: Very good points!

              Shared libraries were very important when your machine had just a few KB or MB of memory, and disk space was small too, but far less important today

              Although ironically we now have to run apps inside containers inside VMs to cope with different apps needing different versions of some shared system component

          2. fobobob

            Re: Very good points!

            At least the viability of embedding discrete memory modules into the CPU package has been proven, e.g. eDRAM. SRAM operating at core speed is just too costly (in terms of die real estate, which translates directly into costs). I believe some *Lake stuff had as much as 128MB of the stuff. While nowhere near as fast as core-speed SRAM (I'm seeing numbers in the 10s of GB/s), it's still a huge improvement over accessing external memory; it keeps the latency penalty for cache misses far more reasonable (perhaps half the latency), as can be seen from the graphs here:

            https://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/3

    2. big_D Silver badge

      Re: Very good points!

      The problem is, most younger programmers today don't have the first clue about optimization, at least not for optimizing the code to run better on a certain architecture.

      They are taught to write elegant, readable and maintainable code... A processor doesn't care about how elegant the code is to read, it only cares about how it is executed.

      A case in point, I was working for an Internet ad-slinger and online shop creator. One of their shops was causing real problems. When the PayPal newsletter went out and the client was in the newsletter, you could guarantee that the poor DB admin would spend the next two days restarting the MySQL server every couple of minutes, because it had seized up. And that was on, for the time, large servers running over a load balancer. When the rush started, the 4 servers would collapse when they reached 250 simultaneous visitors.

      A quick look at the code and the SQL, then a quick analysis of the database and indexes, found the problem. The programmers had organized the code to be human readable and, for a human, logical, without even bothering to look at how efficiently it went through the database... Rearranging the query to use the indexes properly, starting at the highest common denominator and working down through the stack, meant the query went from over 60 seconds under load to around 500 milliseconds! That meant the DBA didn't have to restart the MySQL service once during the next newsletter.

      Likewise, re-arranging the code to test positive instead of test negative improved the load on the 4 front-end servers, so that, instead of 4 servers servicing 250 users between them, each of those servers could service 250 users with ease.

      The resultant code wasn't really any less elegant and it wasn't any harder to follow; it just didn't do things in the order a human would naturally write them, but for the machine it made much more sense.
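
      One way to read the "test positive" point: check for the common good case first and short-circuit, rather than eliminating every failure case one by one. A minimal C sketch of that ordering idea, with made-up names (is_in_stock and passes_full_audit are stand-ins, not the original shop code, which was PHP and SQL):

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical stand-ins for the shop's checks - not the original code. */
      static bool is_in_stock(int item_id)       { return item_id % 10 != 0; } /* cheap, true ~90% of the time */
      static bool passes_full_audit(int item_id) { return item_id % 7 != 0; }  /* expensive: imagine a DB round trip */

      int main(void)
      {
          int shown = 0;
          for (int item_id = 1; item_id <= 1000; item_id++) {
              /* The cheap, usually-true test goes first; && short-circuits, so the
                 expensive audit only runs for items that are actually in stock,
                 instead of auditing everything and filtering afterwards. */
              if (is_in_stock(item_id) && passes_full_audit(item_id))
                  shown++;
          }
          printf("%d items shown\n", shown);
          return 0;
      }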

      1. Charles 9

        Re: Very good points!

        "They are taught to write elegant, readable and maintainable code... A processor doesn't care about how elegant the code is to read, it only cares about how it is executed."

        But then, the NEXT person to maintain the code may not be them, which is why the emphasis on readable code. Kind of a Golden Rule thing. The next time you enter a coding project in media res, you would prefer being able to pick up on the details quickly.

        Incidentally, that doesn't mean you're not able to write relatively tight code, but it would help out a lot if you explain what you're doing so the next person can pick up where you left off.

        1. big_D Silver badge

          Re: Very good points!

          My point was, the optimized code doesn't have to be unreadable or unfathomable, but the programmer needs to understand how the hardware and software stack in the background works in order to optimize, not just know how to indent and use camelCase.

          Likewise, the last project I worked on at that place was a warehouse tracking system for photographing requisites. The phpDox-generated documentation ran to over a thousand pages, and that for a 3-month project with just one programmer. The code was elegant AND optimized and very well documented.

          The shop code on the other hand was elegant, NOT optimized and NOT documented...

          1. DJSpuddyLizard

            Re: Very good points!

            but the programmer needs to understand how the hardware and software stack in the background works in order to optimize, not just know how to indent and use camelCase.

            This has always been the case. For example - my first real programming job, 1993, IBM 4680 Basic compiler. String parameters to functions passed on the stack. Not a reference, the whole string on the stack. Throughout the application code there were comments like "if you remove the following line, program will break, but I don't know why". Well, if you'd read and understood the compiler manual, you'd know why.

  3. Anonymous Coward
    Anonymous Coward

    The important thing is to have a way to measure which parts of the code are taking the time - then optimise those parts. What looks efficient in the design stage - may not be in the execution.

    Total re-implementations of legacy systems often lead to protracted developments to cover all the often unsuspected angles - and can end up as "blistered" as the original.

    1. Anonymous Coward
      Anonymous Coward

      "The important thing is to have a way to measure"

      Still, the number of developers who can't use a profiler, and sometimes not even a proper debugger (littering the code with debugging and tracing/logging code which just slows everything down), is staggering.

      Now that they've been sold "telemetry" features, so they can make paying customers perform the real beta testing, it's even worse.

      1. Korev Silver badge
        Joke

        Re: "The important thing is to have a way to measure"

        You mean there's something wrong with print statements printing the contents of a variable to the terminal?

        1. Baldrickk

          Re: "The important thing is to have a way to measure"

          The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.

          Brian Kernighan - "Unix for Beginners" (1979).

          1. Charles 9

            Re: "The important thing is to have a way to measure"

            I will agree on the print statements or some equivalent, like putting the results in a status or debug window. Any time I'm not quite sure about how something will turn out, I will go straight to debug outputs, pauses, and even the occasional early termination to make sure I iron everything out.

            1. Anonymous Coward
              Anonymous Coward

              "some equivalent like putting the results in a status or debug window."

              Sometimes tracing is necessary, but it should be implemented in a smart way, and there must be a way to disable it to avoid impacting performance. I/O to the screen or to a simple file can take a lot of time, especially if multiple threads or processes write to the same output and need to be synchronized. A smarter way is, for example, to send async messages to a separate process that handles the output and minimizes the required I/O by buffering it and using smart I/O techniques.

              Languages that allow for IFDEFs at least let you remove debug code from production releases; those lacking them soon become littered with IF..THEN... or have to go through a function call anyway that checks the debug level and acts accordingly. Add too many, and performance will suffer.
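
              A minimal sketch of the IFDEF approach in C (TRACE is a hypothetical macro name, and the ## token-pasting form is a GCC/Clang extension): build with -DDEBUG and the tracing is compiled in; leave it off and the calls disappear entirely, costing nothing in production.

              #include <stdio.h>

              #ifdef DEBUG
              #define TRACE(fmt, ...) fprintf(stderr, "[%s:%d] " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__)
              #else
              #define TRACE(fmt, ...) ((void)0)   /* vanishes in release builds */
              #endif

              static long sum_to(long n)
              {
                  long total = 0;
                  for (long i = 1; i <= n; i++)
                      total += i;
                  TRACE("sum_to(%ld) = %ld", n, total);   /* no runtime cost unless built with -DDEBUG */
                  return total;
              }

              int main(void)
              {
                  printf("%ld\n", sum_to(1000000));
                  return 0;
              }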

              Of course, the effort you put into your tracing code may depend on the application's needs - but when performance is important, debugging code can slow things down a lot.

          2. Yet Another Anonymous coward Silver badge

            Re: "The important thing is to have a way to measure"

            The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.

            That was perhaps true when you had the source to the operating system in front of you.

            There are several APIs I use where the only way to get the format of the input data correct is to send something random and step into the call in a debugger. The code is so obfuscated with macros and inline assembler that you can't even look at the source and work out the format

          3. Anonymous Coward
            Anonymous Coward

            "Brian Kernighan - "Unix for Beginners" (1979)"

            Exactly. It was 1979 and it was for beginners - a different era, dinosaurs still dominated the IT landscape.

            Careful thought stays; print statements have to go away, even for beginners, who should learn how to use a debugger ASAP, and a profiler not much later. Especially when they could print out sensitive data "for debugging purposes".

            The problem is exactly those who still believe things should be done as if nothing has changed, and that whatever Kernighan & Ritchie said are holy words from the Great Zero & One themselves.... turn technology and science into a religion, and you'll litter your program with ranks of evil bugs - after all, that's what happened with most C programs once people understood they could exploit them as vulnerabilities to p0wn systems...

        2. Yet Another Anonymous coward Silver badge

          Re: "The important thing is to have a way to measure"

          printing the contents of a variable to the terminal?

          A terminal ! It's not debugging unless you are blinking LEDs with morse code

  4. James 51

    When your logic is more exceptions than rules it can be difficult to get a smooth algorithm to cover everything.

  5. Anonymous Coward
    Anonymous Coward

    A remote terminal system suddenly started to crawl. A network protocol analysis showed that the X-Windows traffic was several orders of magnitude different between the old and new releases.

    It was nothing to do with the latest application code. A new X-Window library was sending considerably more primitive drawing commands to effect the application's graphics.

  6. Anonymous Coward
    Anonymous Coward

    Android phones still hang

    It's 2017, we have phones with Gigahertz processors, and yet we really cannot make an Operating System that keeps the UI functioning when apps misbehave ?!

    1. Malcolm 1

      Re: Android phones still hang

      Of course we can. We (by which I mean the market) have decided that it's not as important as other concerns. Symbian required the developer to do more work to ensure memory and performance efficiency - the end result was a dearth of apps, as the development experience was more difficult (and therefore more expensive) than on alternative platforms.

      Features are easier to sell than performance and efficiency, sadly. You can also see this approach in mobile hardware specs, where unnecessarily large core counts are the flavour of the month when in fact you'd probably be better off spending that transistor budget on L2/L3 cache and higher clock speeds (like the iPhone in fact).

      1. Dave 126 Silver badge

        Re: Android phones still hang

        > Features are easier to sell than performance and efficiency, sadly. You can also see this approach in mobile hardware specs, where unnecessarily large core counts are the flavour of the month when in fact you'd probably be better off spending that transistor budget on L2/L3 cache and higher clock speeds (like the iPhone in fact).

        It's an interesting topic: how do you communicate (sell) the virtues of your computer? If computer X has a 1TB HD for the same price as computer Y's 500GB disk, it's easy. But communicating sound quality, camera quality etc is harder to do in print. Interestingly Apple don't emphasise the technical details of their chips, preferring to stick to a simplistic message of 'twice as fast as our previous model' or somesuch... and hey, that approach has some merit. As an approach, it only works if you have a track record or reputation. If your company doesn't have much of a public reputation, you can buy it in - that is, stick a Leica sticker on your phone to advertise the camera, or a Harman Kardon or B&O sticker to communicate that the sound quality should be better than normal.

        Apple used to put Harman Kardon stickers on iMacs, but after a generation or two the market knew what an iMac sounded like, so Apple had no need of the partnership any more.

  7. Anonymous Coward
    Anonymous Coward

    Developers too often ignore the effects of network latency if an application communicates with another device. Many small interactions add up.

    A customer complained about an in-house development crawling due to "the network being slow". Analysis of the traffic showed that the network and server round trip response time was 1ms. The client was issuing 30,000 requests for each screen update - 30 seconds of round trips per refresh before the server did any real work at all.

    1. Anonymous Coward
      Anonymous Coward

      Yup can testify to that one.

      We had the same several years ago, where it was doing a complete screen refresh every time they typed something. Each refresh was about 2 Mb, and these were the days when the LAN was 100 Mb/s and the WAN was 2 Mb/s.

    2. Korev Silver badge

      There was a slide doing the rounds on Twitter showing how many network calls Slack* made over a WAN while starting up from Australia to the US; totalling them up came to the best part of 10 seconds.

      I think it was Slack, please jump in if I misremembered the software.

      1. Anonymous Coward
        Anonymous Coward

        I've seen an application struggle when the routing had it going from Edinburgh to London & back (don't ask - legacy of a network security design decision before my time). Latencies don't have to be big if you have enough of them on a critical path.

        Similarly, a badly mounted filesystem using direct IO vs buffered caused a batch job writing millions of records to a text file to slow to a crawl (>1 hr vs ~5 minutes in test). Remounting buffered took it to about 8 minutes (a symptom of production replicating to DR, which wasn't done in test).

    3. Sammy Smalls

      Years ago, I was a network admin who had a running battle with the developers when the file I/O moved from local disk to the network. 'But it's so slow' they cried. I tried to convince them that reading one byte at a time wasn't perhaps the best approach.

      Eventually one dev relented and wrote a program to test file I/O with variable length requests.

      Guess what? A 10 byte request was 10 times faster.....
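
      In the same spirit, a minimal sketch of that kind of test program in C (the path and names are hypothetical): time it with time(1) at different request sizes and the request size dominates the wall-clock time, because every read() is a syscall and, on a network filesystem, potentially a round trip to the server.

      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      /* Read a whole file using a given request size. */
      static long slurp(const char *path, size_t request)
      {
          int fd = open(path, O_RDONLY);
          if (fd < 0) { perror(path); exit(EXIT_FAILURE); }

          char *buf = malloc(request);
          long total = 0;
          ssize_t n;
          while ((n = read(fd, buf, request)) > 0)   /* one syscall per request */
              total += n;

          free(buf);
          close(fd);
          return total;
      }

      int main(void)
      {
          const char *path = "/mnt/netshare/bigfile.dat";   /* hypothetical network path */
          printf("1-byte requests: %ld bytes\n", slurp(path, 1));
          printf("64KB requests:   %ld bytes\n", slurp(path, 64 * 1024));
          return 0;
      }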

  8. Anonymous Coward
    Anonymous Coward

    I think a lot of the developers could easily learn from the past

    I remember the philosophy behind MDK.

    If it doesn't work smoothly, go back and find out why and fix it. Don't move on until it's fixed.

    This is a nice summary, but I remember there being a whole detailed article (Edge magazine?) about them running on lower spec machines while designing, so it would work perfectly on "normal" spec boxes.

    https://en.wikipedia.org/wiki/MDK_(video_game)#Technology

    1. Charles 9

      Re: I think a lot of the developers could easily learn from the past

      All fine and dandy if you can take your time on it. Flies right out the window, though, when you have a deadline.

      1. Dave 126 Silver badge

        Re: I think a lot of the developers could easily learn from the past

        As Charles 9 says. A *perfect* product can still fail in the market if it is released months or years after a mostly good enough product.

        1. Anonymous Coward
          Anonymous Coward

          Re: I think a lot of the developers could easily learn from the past

          A *perfect* product can still fail in the market if it is released months or years after a mostly good enough product.

          Yeah, look at Duke Nukem. ;)

      2. Anonymous Coward
        Anonymous Coward

        Re: I think a lot of the developers could easily learn from the past

        "All fine and dandy if you can take your time on it. Flies right out the window, though, when you have a deadline."

        And who agreed this deadline? If the dev team wasn't part of agreeing it, then you shouldn't be encouraging the monkey to speak for the organ grinder.

        1. Charles 9

          Re: I think a lot of the developers could easily learn from the past

          Who says you had to agree? It's not like the dev team can dictate terms to the board.

  9. Fihart

    Ah, when I were a lad.....

    On my first computer, a 1985 Apricot with twin floppy drives, you could run a word processor from a 720k floppy and still have room for documents on the disk.

    The program was Superwriter from Computer Associates, which I think had been ported from CP/M to DOS -- as indicated by the restriction on document length that Superwriter would support as a legacy of CP/M's 8bit (?) origins.

    1. big_D Silver badge

      Re: Ah, when I were a lad.....

      I had the Apricot Xi, with a 10MB hard drive.

      It had a GUI, dBase, WordStar, EasyCalc, EasyWord, C compiler, C interpreter(!!), BASIC compiler, BASIC interpreter, the source code for a VT100 terminal emulator, several databases and documents and the drive still had a couple of MB free!

      Try getting a GUI to run in 640KB RAM and 10MB of disk space these days, let alone all the apps!

    2. John Styles

      Re: Ah, when I were a lad.....

      And you probably could still run it if you wanted to. And it would be fast. But.

      1. Charles 9

        Re: Ah, when I were a lad.....

        Makes me think it may be time to take a page from the old school, when software was stored a little more permanently. Not in ROM exactly, like old computers or the old Macintosh Toolbox, but with the move toward compact solid-state storage, perhaps it would be a good move to start designing motherboards with a dedicated position for a modestly-sized (say 64-128GB) SSD where the ONLY thing on it would be the basic OS. It could have a physical provision where the write pin can ONLY be engaged by way of a switch (keyed or not, up to you), so that the OS at least cannot be overwritten without someone's physical intervention. Everything else is fair game, and OS-fungible stuff can still go elsewhere, but dedicate a space (that can be physically write-protected) for the base OS. Heck, with 64-bit addressing, a lot of that OS can be memory-mapped as well. Another throwback.

    3. ThaumaTechnician

      Re: Ah, when I were a lad.....

      The old Flight Simulator for Apple ][ used look-up tables for 'solving' calculations because it was faster than doing the math on the, er, fly.

      1. Charles 9

        Re: Ah, when I were a lad.....

        Understandable given the CPUs we're talking about here (1MHz 6502s). I think some of these computers kept sine or other complex-computation lookup tables for the same reason.
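
        A minimal C sketch of the trick (the table size and scaling are illustrative, and real 8-bit code would store scaled integers rather than doubles): pay for the maths once at start-up, then every "sine" is just an array index.

        #include <math.h>
        #include <stdio.h>

        #define PI    3.14159265358979323846
        #define STEPS 256   /* angles measured in 1/256ths of a full circle */

        static double sine_table[STEPS];

        static void init_table(void)
        {
            /* Done once at start-up: the only place the real sin() is called. */
            for (int i = 0; i < STEPS; i++)
                sine_table[i] = sin(2.0 * PI * i / STEPS);
        }

        /* Each lookup is an array index instead of a floating-point computation -
           the trade a 1MHz 6502 with no FPU had to make. */
        static double fast_sin(unsigned angle)
        {
            return sine_table[angle % STEPS];
        }

        int main(void)                     /* build with: cc lookup.c -lm */
        {
            init_table();
            printf("sin(90 deg) ~ %f\n", fast_sin(64));   /* 64/256 of a circle */
            printf("sin(45 deg) ~ %f\n", fast_sin(32));   /* 32/256 of a circle */
            return 0;
        }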

        1. big_D Silver badge

          Re: Ah, when I were a lad.....

          I remember seeing the source code for a game on the C64 and the Amiga, ISTR that it was Goldrunner... The programmer on the Amiga found the copy and paste option in his code editor... On the C64, the delay loop was done in about 4 bytes; on the Amiga, there were just pages and pages of NOP instructions (No Operation) and, depending on the level, he would jump into the list at a different point!

  10. Malcolm 1

    Relevant Article

    On a similar subject - this article is an interesting read about two developers more used to modern development practices porting a relatively simple solitaire game from Windows to MS-DOS, particularly the performance issues and what was necessary to address them.

  11. Doctor Syntax Silver badge

    It ain't necessarily so

    At least, not as far as program size is concerned.

    This morning I've just installed upgrades for 3 graphicsmagic packages, 2 tzdata packages and wpasupplicant. Overall it reported freeing 116kB of disk.

  12. Bronek Kozicki

    Learning

    Dave, I loved your article, but there is so much more that could be said on the subject. There is general confusion between code and asset (code is documentation - the binary is the asset). Unsurprisingly this is both on the enterprise side and on the developers' side. As a result, little learning happens and when it does happen, it is rarely applied. Similarly, unnecessary code is often added while old code is rarely optimized or removed - or tested (automatically). I could go on .... but not now.

  13. bencurthoys

    I'm with you all the way to the conclusion.

    It's easy to write slim, elegant code that works when the users are doing what the developer expects.

    It's easy to wish you could start again and throw the legacy away.

    But once your product is out in the real world, by the time you've fixed all the edge cases and made it do all the things that real people need, you'll be bloated right back to where you started having wasted a few years.

    https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

    1. Bronek Kozicki

      This is why the most valuable lesson a programmer can receive is not on programming techniques, or libraries, or methodologies. It is what the users actually want and will do with your piece of software. Writing a nice piece of code has no value, if users are not using it. Code has no inherent value in itself. It is what users do with your program which brings the value, and what we (programmers) are paid for.

      1. Charles 9

        But then you run into the classic problem: a lack of communication between what the client wants and what they actually need (which aren't necessarily the same thing). To really score with the client, you practically have to be a mind reader.

  14. Alan Sharkey

    Overlays

    Who remembers overlays in MS-DOS programming? A way of getting a load more code into less than 640KB of space. Worked for me when I wrote a decent text editor back in 1989 (EasyEdit II - reviewed by Byte magazine - another faller by the wayside of progress)

    Alan

    1. Anonymous Coward
      Anonymous Coward

      Re: Overlays

      In some ways they still exist; overlays were a precursor of virtual memory. Later, swapping code (and data) in and out of physical memory became a task for the OS, and applications no longer needed to bother about what was happening under the hood. But, like all "magic" things, it lets unaware developers write less optimized applications.

      1. Dave 126 Silver badge

        Re: Overlays

        I seem to recall image editors like Aldus Photostyler had the ability to load and work on only a piece of an image. It was possible in those days for a scanned image to be bigger than the available RAM. Today, CAD software still has a 'Load Lightweight' option.

  15. Anonymous Coward
    Anonymous Coward

    Java less buggy?

    Java, like other similar languages, was designed to "sandbox" programmers. The rationale was that you could now hire less skilled, cheaper ones, because they couldn't do too much damage even with sloppy programming. It turned out Java itself was a source of huge vulnerabilities, and even with automatic memory management you can still shoot yourself in the foot in any relatively complex application when resources are not released correctly.

    Moreover, beware of applications with huge memory needs just because of lame memory management - on systems where multiple applications are used at the same time, they will end up pushing other applications out of memory to the swap space, slowing everything down, even when you have a lot of RAM (and not everybody really has a lot of RAM).

    The less the developer understands about how the underlying OS works, the higher the risk they will write code that not only uses resources needlessly, but also isn't a well-behaved application able to work without issues on users' systems.

    Moreover, more features doesn't necessarily mean a slower system, because many features may not be active all the time, old code may be better optimized, and old compatibility cruft removed.

    But I'm afraid many of the newer features are broader, always-on telemetry - which surely has no small impact on system performance, as data are gathered, stored, and transmitted.

    1. DCFusor

      Re: Java less buggy?

      Yes, the advent of tools making it easier for the less skilled to code leaves us with code written by the less-skilled. I've been watching attempts at this since before the '70s and they always end the same. Not that anyone ever learns from this. And each newest (in my mind) fad has the fans that will viciously downvote anyone not on their bandwagon about how their new toy is gonna fix everything, when it's obvious it cannot.

      As an experiment I'll just mention Rust here in that context. Really, you guys think you can define everything unsafe in the compiler up front, and when it finally compiles without warnings it's gonna be good and secure? Vanity much? How about side-channel timing attacks on crypto (or worse, TEMPEST), for just one example no compiler or rule-set easily catches - or ever will, short of real AI, which we're pretty far away from.

      How about the halting problem? Guys who think they can trivially solve it with their magic/special/new insight - I'm all ears, but extraordinary claims require extraordinary proof, and always will. Even if you don't overwrite the other guy or crash from a bad pointer, you can eat the whole machine unless some other program - the opsys - stops you from doing so.

      Coding at the screen and editing till it stops complaining isn't the way... and for most, super-duper unit and system testing automation is too much work. But a skilled architect can avoid the need for either, or at least a couple of 9's worth of that need. If we want our profession to be, well, professional, let's stop calling beginning low-value coders software engineers for a start. "Code monkey" might make management think a little harder about whether it's good to have a few really good people - at cost X - vs a ton of crappy ones, which may or may not be cheaper per time period, but will almost always cost more over time. Sadly, those morons can't seem to think beyond the next quarter themselves.

  16. jms222

    Doesn't help that gcc's output for ARM is, and always has been, poor - for example not making good use of conditional instructions, using conditional jumps instead.

    I can't say I have looked at Clang's output.

  17. Alexander Giochalas

    Niklaus Wirth talked about it...

    ("A Plea for Lean Software", Computer, Feb 1995), googling it returns the article in PDF format.

    The discussion had started somewhere in the 1970s.

    Bloated s/w always existed, but now it is much easier to miss, just because it runs on machines with much better specs.

    1. Naselus

      Re: Niklaus Wirth talked about it...

      Yeah, this is a very, very old discussion (older even than Cartwright's previous article on virtualization, which really belonged in 2005). I seem to recall a Worstall article about deliberately hiring Eastern Bloc programmers back in the 70s because the inferior Soviet computing infrastructure had trained them to be much more efficient on the hardware.

  18. J.G.Harston Silver badge

    I grew up in the era of having to code your transient utilities into 512 bytes of memory. ;)

  19. Charlie Clark Silver badge

    Software will always expand to fit the available hardware

    Now, I think it's fair to believe that companies such as Apple and Microsoft do try their damnedest to make their products perform as fast and efficiently as they can

    Have to take issue with this. Both companies have reliably been able to expect customers to buy faster processors and, especially, more memory. As a result, they've targeted such environments when developing.

    The clearest example for this is probably the shift in web browsers from memory efficient single-process programs to multi-process JS runtimes.

    Even so, it's not right simply to blame the developers. There's no doubt that modern hardware, especially the expanded memory space, makes things possible that were previously inconceivable: Excel's spreadsheet limit used to be 256 cols x 65,536 rows; it's now 16,384 cols x 1,048,576 rows, and even the row limit is somewhat arbitrary.

    Modern software projects are now probably too big to be able to be optimised by people, which is why compilers are ever more important: static and JIT compilers now produce better low level code than people can. But, of course, they can't optimise away stupidity.

    But when it comes to assessing code performance, CPU cycles and memory use are much more important than SLoC, which is why zero-copy memory will always be your friend.

    1. DJSpuddyLizard

      Re: Software will always expand to fit the available hardware

      Now, I think it's fair to believe that companies such as Apple and Microsoft do try their damnedest to make their products perform as fast and efficiently as they can

      Have to take issue with this. Both companies have reliably been able to expect customers to buy faster processors and, especially, more memory. As a result, they've targeted such environments when developing.

      There's already an incentive to buy a newer phone (faster hardware), but bloat also punishes iPhone users who don't upgrade. Granted, at some point a particular iPhone generation has had its last update (can't put iOS 11 on my daughter's iPhone 5, for example), but Apple is all about encouraging consumers to buy new hardware every few years; they make $0 from the software.

    2. BinkyTheMagicPaperclip Silver badge

      Re: Software will always expand to fit the available hardware

      @charlie

      'Even so, it's not right simply to blame the developers. There's no doubt that modern hardware, especially the expanded memory space, makes things possible that were previously inconceivable: Excel's spreadsheet limit used to be 256 cols x 65,536 rows; it's now 16,384 cols x 1,048,576 rows, and even the row limit is somewhat arbitrary.'

      Rubbish. The Mesa spreadsheet was happily handling well over 64,000 rows on OS/2 in 1994. That many rows of data can be held on systems with small amounts of memory.

  20. stu 4

    Unacceptable

    None of this addresses the strange postulate you make at the beginning - that you almost EXPECT an OS upgrade to be slower, and accept that ?

    why is that sensible? I certainly don't think it is - in fact about the ONLY OS upgrades I'm interested in are ones that make things QUICKER, not ones adding 100 more pieces of social media shite I have no interest in.

    To stay with Apple - Snow Leopard was the last good OS upgrade - 64-bitting things and speeding things up considerably with GPU stuff, etc. Since then it's all been about adding more crap I don't want.

    iOS is of course even worse. I made the mistake of 'upgrading' my ipad air to ios11 a few weeks ago..

    it ran like a dog and changed or broke things all over the place - moving bits of UI around for no reason, for example (in iOS 10, watch full-screen video in the browser and the scrub bar is at the top, where I hold the screen; iOS 11 moved it to the BOTTOM of the screen). These changes for no good reason get right on my tits.

    So of course then I had to spend hours working out how to restore iOS 10 back on there, because fuck me, Apple don't want you doing that.... finally got there, reinstalled all my apps, and made sure I have insufficient free space for it ever to download iOS 11 and bug me to upgrade again.

    imho there has been a dramatic change in what 'upgrades' mean since the advent of 'apps' and it seems to affect OSs and 'real computers' too.

    In the past, application upgrades could pretty much be relied on to add features, never take any away, and 9 times out of 10 optimise and make things faster, using GPUs, multithreading, etc - maybe not major jumps, but not things where you had to ask yourself 'what is this going to fuck up', just 'is it worth the money'.

    Now, starting on mobile app upgrades started to mean:

    - taking away key features you relied on

    - changing the UI radically

    - adding need for accounts and logins where none existed before

    - breaking compatibility with older devices

    and of course all while making it as difficult as possible to uninstall or revert the upgrade.

    It only took this to happen a few times for me to switch off auto upgrades on my mobile devices now - I look at what the upgrade says it offers, and only if I really really need it do I upgrade.

    Which of course, assuming others are doing the same thing, means we are all less safe, because app makers have destroyed any confidence users had in upgrades, so they don't get bug fixes or security fixes either.

    Now that attitude has expanded to OS upgrades (mobile and real computers) and real computer applications too - again led by Apple. FCPX is a good example of this: a point-release upgrade sometimes only being available WITH an upgraded OS, and then you find that it totally changes the UI, stops various plugins working, etc, etc.

    It gets right on my tits.

    1. Fihart

      Re: Unacceptable @ stu4

      To take the example of Windows, MS got into the habit of launching new versions mostly because shareholders came to expect the windfall that followed. Windows 7 was replaced to facilitate a world (phone, tablet, PC) domination strategy. So, Win 8 and 10 had less than ever to do with users' wishes.

      Fortunately, the world domination strategy was doomed -- but can we expect a slimmer, more user oriented replacement for 10 ? Can we hell !

  21. Jim 59

    It has always been a license for code bloat

    Moore's law: your new laptop is 1000 times faster than that Pentium in the loft.

    Bloat: They take the same time to boot up. Also to run MS Word.

  22. handleoclast

    EPNS Bullets

    A large part of the problem is all the EPNS (electroplated nickel silver - i.e. fake silver) bullets weighing things down. All of them aimed at one or more of the following laudable targets, and all of them intended to be silver bullets:

    1) Make all programmers more productive.

    2) Make the productivity of the least-productive programmers closer to that of the most-productive (not necessarily increasing the productivity of the most-productive).

    3) Make the code less buggy/more reliable.

    4) Prevent big projects from falling over in a smelly heap and having to be abandoned.

    And so new languages/frameworks/paradigms keep appearing. The idea behind all of them is that it's cheaper to buy a faster computer than to hire a better programmer or use a language that produces tight, fast code. So make the language easier for idiots to write in, even if that makes it less efficient. Make the language more bondage-and-discipline (and, incidentally, far less efficient) to stop people shooting themselves in the foot (except they then beat their feet to a pulp using the gun as a blunt instrument). Come up with all sorts of new ideas pulled out of your arse and insist that they will magically fix all the problems and continue to ram them down people's throats when there is no statistical evidence that they do any good whatsoever (and happen to require much faster hardware).

    To some extent this thinking has worked in the past. You can write tighter, more efficient code in assembler or you can spend a lot less time writing the same thing in a high-level language (but it's less efficient and larger). Increasingly it produces EPNS bullets that are worse than what they replace. And occasionally it produced not EPNS bullets but turds wrapped in kitchen foil (see Kernighan's politely scathing essay).

    I don't see it ever changing. Because there's always the promise that if you just adopt this new language/framework/paradigm all your programmers will fart rainbows and shit gold. It may require faster hardware, but it's worth it. Occasionally, very rarely, we may see a genuine silver bullet. Most of the time we'll get EPNS bullets. And sometimes we'll get turds wrapped in kitchen foil. All will be touted equally enthusiastically and most will result in buggy bloatware that needs a supercomputer in your phone.

  23. Zot

    What Apple and Microsoft seem to teach us is..."throw more resources at it!!!!"

    But I started out writing assembler in Hex code on A4 paper, for the ZX81, so what do I know?

  24. Adrian 4

    code bloat is not necessarily slow

    Code can get bigger as it gets faster: the size doesn't necessarily slow it down. If it's handling a lot of alternatives, having separate code for each might be a lot faster than having a small, highly parameterised piece of code that does everything.

    What causes bloat is the use of libraries and frameworks to speed development. Yes, they do speed it - providing they do what you want and you know how to use them. But very often, you're only using a small part of their functionality yet you get a large part of the baggage.

    1. Adam 1

      Re: code bloat is not necessarily slow

      A simple example of this is inlining. For example

      foreach (var thing in myThings)
      {
          this.ValidateThing(thing);
          this.ProcessThing(thing);
      }

      Without inlining, every iteration of the loop needs to jump to and back from each of the methods. If the validation is pretty simple (say, checking something != null) then the time the CPU spends jumping in and out of those methods is going to be relatively significant. Inlining copies the method implementation so no jumps are required within that loop. You could do that manually, but your code would be unmaintainable. The cost of the inlining done by the compiler or JIT is that your application will be bigger. And that is just one example.

      We could consider the trade-off between binary size and boxing/unboxing operations. For example

      List<Animal> pets = new List<Animal>();
      pets.Add(new Dog { Name = "Fido" });
      Console.WriteLine(pets[0].Name);

      If I didn't use generics then I would have just a plain List and the last line would be the much less performant

      Console.WriteLine(((Animal)pets[0]).Name);

      Plus all the other fun bugs that come from accidentally casting something to something it is not. But again, this costs file size because I need a separate definition for List<Animal> vs List<Commentards> Vs ....

      1. tiggity Silver badge

        Re: code bloat is not necessarily slow

        Compilers sometimes matter too.

        It's not just how you write code, it's how "clever" the compiler is in optimizing what you have written (e.g. a half-decent compiler should mean that using the register keyword in your code should no longer be needed).

        I remember back in the day, the fun to be had compiling C code with MS and then with WATCOM; the WATCOM code was far smaller and faster.

    2. Jim 59

      Re: code bloat is not necessarily slow

      @Adrian 4

      What causes bloat is the use of libraries and frameworks to speed development. Yes, they do speed it - providing they do what you want and you know how to use them. But very often, you're only using a small part of their functionality yet you get a large part of the baggage.

      Exactly. "Hello world.c" might contain 2 lines, but how many lines after pre-processing ? 5000 ?

  25. Palpy

    And speaking of hardware speed versus user experience:

    For my money, Windows antivirus applications have more impact on application speed than anything else on my machine. At work I have a generic-plain-vanilla i3-3320, 4 cores, 3.30 GHz with 8 GB RAM. When the antivirus is scanning -- which seems to be all the time, lately -- it can be outrun by my old 32-bit single-core Toshiba from 2008. The diffy, of course, is that the Toshiba runs Linux and ClamAV is only used (in this case) for user-initiated scans of downloads.

    Software optimization? We've heard of that.

    But seriously, can antivirus for Windows be optimized in any meaningful way? Or are we doomed to an inverse-Moore law which mandates that as security threats diversify, software countermeasures become more and more intrusive, ending in a return to manual typewriters and adding machines?

  26. Hans 1
    Boffin

    Now, I think it's fair to believe that companies such as Apple and Microsoft do try their damnedest to make their products perform as fast and efficiently as they can

    Wwwwwwwwwwwhat ? This is beyond funny, silly or ....

    I am pretty sure code is littered with something along the lines of:

    if (iPhone.version() <= lastGen - 1)
    {
        /* Cheap kid, let's compute pi with enough decimals to make the phone crawl */
        compute_pi(1000000000000000000000000000000000000000000000000000000000000000000);
    }
    else if (iPhone.version() <= lastGen)
    {
        /* come on, upgrade to a new phone! */
        compute_pi(10000000000000000000000000000);
    }

    Note that I was once servicing an XP box, must have been in 2008, and the computer's fans were running like mad, causing an incredible racket; the lady who was using it was about to throw the box out and get a new one.

    I backed up the documents (docs, photos, vids etc), nuked the disk and reinstalled XP SP2 (NON-OEM)... fans are silent. I run Windows Update and, after a few hours and its 15 reboots (lol), the fans go wild again ... I had not even gotten to the point of installing anything else or restoring the documents, so I go and download the latest drivers from Dell, reboot x times, same thing ... the box got a Debian treatment, I restored the documents, all fine and everybody happy ... a little hand-holding, and she kept using that box for years ...

    So, Windows was intentionally making the fans go wild to get the lady to purchase a new computer. I cannot prove anything, I cannot see the code, but ... why else would this happen ? Faulty driver ?

    1. Hans 1

      Ohhh, and I can prove it:

      benchmarks:

      5S, 6S, 7S at a non-Apple task, with OS as originally shipped ... perf increase of mere percentage points from one to the next.

      With latest iOS, 5S is unusable, 6S is substantially slower than 7S.

      There, what more do you need ... they are proprietary for a reason, you know ... to shaft you!

      Note that this is not limited to Apple, MS do the EXACT SAME THING, have done for decades ...

  27. paulc

    "This is no surprise, of course, as it's the same on every platform known to man. The new version is slower than the old"

    nope... not with Linux. My Mint LXDE has been getting faster and uses less RAM with each update...

    Now Firefox... that gets slower and uses more RAM though with each update...

  28. BinkyTheMagicPaperclip Silver badge

    Responsiveness may only benefit the user

    The thing to remember is that for commercial success the product only has to be 'good enough'. The top priority for consumer systems is making money; below that is low support costs; making the user happy is a distant third.

    Although, to defend Windows, it's become faster and leaner with each post Vista release (note I did not say 'better'). Even with Vista Microsoft had their arm twisted by suppliers wanting to sell systems clearly below the requirements Microsoft wished to use. Admittedly Vista was rather buggy on release, and it took time to sort out display drivers in particular.

    Definitely agree with not glossing over Wonder APIs. If an API makes a hard task easy, always ask what it's doing under the hood, and what assumptions are being made.

  29. Nick L

    A Mind is Born...

    Any fans of tight coding want to see what's possible in 256 bytes on a Commodore 64? Thought so...

    https://linusakesson.net/scene/a-mind-is-born/

  30. GrumpenKraut
    Devil

    "a lack of understanding about how the underlying systems work"

    My agree-o-meter almost exploded upon reading this. I recently started asking people with shiny toys what processor architecture it is built upon. A fun hobby for those with morbid inclinations.

  31. Starace

    I blame Python

    I also blame Java and all the other toys that people have since used to build 'proper' software.

    Big footprint, crap performance. It might be easier for you to write (though I doubt it going by the mess that makes up a lot of project source) but at what point did people decide that light fast code was less important than using their favorite collection of bits? Especially as most of it still isn't particularly portable and it certainly isn't readable. The only benefit seems to be for lazy types who didn't understand what they were actually trying to implement and who can get away with their sloppy code.

    I know it still annoys me every time I need to have a reasonably hefty machine to run something that does very little, especially when you drill down and try to work out how such a small amount of function takes so much CPU time to work.

  32. This post has been deleted by its author

  33. John Riddoch

    Prices

    It's now cheaper to throw an octo-core 3GHz CPU with 32GB of RAM at a problem than pay a programmer to code it on a single core 1GHz CPU with 2GB of RAM. It's perfectly plausible in many cases to do the latter, but why pay your expensive developer to do that when you can get a bigger server relatively cheaply?

    1. Charles 9

      Re: Prices

      Does that include the ongoing costs in wasted electricity and heat (meaning you also pay additional electricity for the A/C)?

  34. Version 1.0 Silver badge
    Boffin

    Hardware vs Software

    When I first got into this business, I was told in the first week that one good hardware engineer could keep ten programmers busy ... after a few months, as I moved over to code writing, I was told that one good programmer could keep ten hardware designers busy.

    OK - it's been nearly 40 years at this game but I think both statements are true.

    1. Charles 9

      Re: Hardware vs Software

      So what happens when you have a good software engineer up against a good hardware engineer? Who needs help first, and what if it cascades?

  35. doug_bostrom

    As we're all aware, elegantly efficient code is created by a limited number of people, a proportion very difficult or perhaps impossible to expand. We can all be trained to write musical notation but few of us will ever be brilliant composers and it's not at all obvious this limitation can be remedied by training.

    As well, our plethora of half-baked commercial tools with their designs centered on extraction of money as opposed to more fundamental concerns does not help.

  36. Kevin McMurtrie Silver badge
    Trollface

    This is too slow

    Fixing it looks really hard but I think I can add some code to work around it being slow.

  37. Anonymous Coward
    Anonymous Coward

    Try upgrading to iOS 11.1

    The .1 release is always faster on old hardware. I think they don't optimize for the older hardware in the .0 because they are more concerned with getting the new features stable and everything working on the brand new iPhone models. All the beta users have been reporting better battery life in 11.1, so it should probably perform better as well, since the CPU would seem to be the only thing you could improve - a software update isn't likely to make the radios or the screen use less power.

  38. Adam 1

    a couple of observations

    Firstly, jumping to 64 bit doubles your pointer sizes. Every array of pointers now takes up double the memory of its 32-bit cousin, pointer-heavy structures grow accordingly, and instructions that embed full addresses get longer too (the small sizeof sketch at the end of this comment shows the effect).

    Secondly, time is a finite resource in a development team. Optimisation takes time to both profile, figure out whether it is CPU/disk/memory bound and try alternatives. That is time that cannot be spent on other shiny shiny features and digging out other bugs. So the feature of having it work faster or having it work on older hardware gets weighed up. This is true in both open and closed source worlds.

    Thirdly, optimisation changes with hardware evolution. 25 years ago you were probably trying to optimise to some maths coprocessor. Today, you are probably trying to parallelize loads and get GPUs or cloud load balancers to improve throughput.

    Fourthly, developers fix what they see and experience. That's the reason why software can suck on low-resolution laptops; the team writing it has a 4K dual monitor setup on their i7s with at least 8 cores and somewhere north of 16GB RAM and an SSD. They simply haven't had to tolerate it on a 5 year old netbook, so "spend half a day making that feature quicker" never gets prioritised.
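
    A minimal C illustration of the first point (struct node is a made-up example, not from any particular code base): only pointer-sized fields double, but pointer-heavy structures still grow a lot.

    #include <stdio.h>

    /* A hypothetical list node: one payload int plus two pointers.
       Build the same file as 32-bit and 64-bit (e.g. -m32 vs -m64) to compare. */
    struct node {
        int          value;
        struct node *prev;
        struct node *next;
    };

    int main(void)
    {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));       /* 4 on 32-bit, 8 on 64-bit */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));  /* ~12 vs 24 (with padding) */
        printf("1M pointers         = %zu bytes\n", (size_t)1000000 * sizeof(void *));
        return 0;
    }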

    1. Anonymous Coward
      Anonymous Coward

      Re: a couple of observations

      The code density is actually better with ARM64, thanks to instructions that 32-bit ARM lacks, such as conditional select. iOS 11 should require less RAM than iOS 10 did, since they've dropped all support for 32-bit code. Previously the kernel had to support 32-bit APIs because you were allowed to run 32-bit apps in iOS 10, and if you actually ran 32-bit apps (which would have been common when the 6 came out, but pretty uncommon by the time of iOS 10) a bunch of 32-bit libraries had to be loaded as well, which used up even more RAM.

  39. Anonymous Coward
    Anonymous Coward

    I am glad we're not using assembler much these days. Fortunately, many languages have abstracted away the need to know that much about a machine's hardware.

    The ideas and concepts behind OOP and relational databases are great, and have resulted in some great tools. But the discrepancy between the user requirements and the knowledge of the correct tools and implementation still means a lot of sticky-tape is used to create applications with lots and lots of code.

    I think the tools will increasingly become network-based, improved and standardized, but that will take time. Currently it is still more common to want to program on stand-alone hardware and optimize for that architecture. With processor and networking speeds increasing, that will slowly change. With more devices requiring connectivity, the need to abstract the hardware away, to the level where auto-updating, security patching and the like are lifted out of the developer's hands, will only grow. But there will be gazillions of lines coded before such changes and their adoption are widespread.

  40. martinusher Silver badge

    Glad you've finally noticed

    Bloat has been the bane of computer systems for many decades, certainly since the early days of MS-DOS (MSFT being an early offender). When you complained about it you just got people babbling on about "Moore's Law", plus the usual disdain that application programmers had for anyone not part of their club of true believers. The result has been the systematic degradation of computer systems, to the point that there is now an expectation among users that any machine will automatically run slower and slower over time and so will have to be replaced with 'the latest'.

    I have to program too, but because of the specifics of my work and my work history I actually know what a crock this is. I can recognize appalling heap management and poor task design; I am annoyed that it takes gigabytes of memory and a multicore processor to manage email or display a web page. I have never had the luxury of being part of the "hose the stuff at the barn wall and see what sticks" school of programming, and it's frustrating that so much stuff that's out there -- especially for web applications and portable units -- is built like that. (...and don't get me started on testing methodologies that appear to be along the lines of "sling the stuff together and wait for the user to complain".)

    Time for a rethink. (...or maybe just time to retire....leave the whole edifice to crumble under its own weight).

  41. kneedragon

    I don't know how it's taught now, but 20 years ago when I studied this, first at TAFE and then again at uni (QUT), they told us a lot of things. We went over the theory of complexity, and we went over testing & debugging. We went over hand-optimising and changing the instructions slightly; we even looked at inline assembler, and used it... They threw a lot of information at us, and then stood back and told us they had faith we'd do a good job. The lesson to take home: there is a remarkable amount of information & knowledge in your diploma or degree course which will have some bearing on this, but exactly what the right answer is remains a bit of a puzzle... it depends on the situation.

    It sounds like nothing much has changed.

    It would be nice to have some kind of rule of thumb about whether to keep hacking about with the code you have, or to dummy-spit and start again. Begin with your concept and write pseudo-code, with a big black mark every time you copy (or paraphrase) the code which is already there. Go back to first principles and describe the problem again, using different phrases, in plain English... if your plain-English pseudo-code starts to resemble real programming code, then it isn't pseudo. Don't get caught halfway between PASCAL and pseudo-PASCAL. That's not English a normal muggle could read and it's not code a compiler could read, so it's shit!

    ...

    ... and that doesn't look at maintainability. You can write code which is clever and small and fast and elegant, and the first time someone has to look at it, ten years later (it may even be you), they'll have NFI how that smart-arse routine works... One advantage of modern computers is that you can write larger code with more comments, and make the structure of the code reflect your mental model of the problem, and then working with it is easy. The smallest + fastest + simplest + most elegant code may be an absolute beast to maintain & work with later, because even for you (who wrote it), trying to figure out how it actually works is a nightmare...

    Bloat is one thing that should be avoided. There are others.

  42. William Towle
    Boffin

    Sudoku: how to?

    > A fiver says you don't simply try every possible permutation of digits in each box until you get to the right answer – the number of Sudoku solution grids has been calculated as 6,670,903,752,021,072,936,960. No, you apply logic and deduction to identify which numbers go where, and it takes just a few minutes to solve the puzzle.

    Strictly speaking, having that many solution grids doesn't relate to the complexity of solving any given puzzle; it's the reason you don't attempt to store all the possible puzzles and do a straightforward lookup.

    ...Perhaps pedantically (humour me; from here on in I'm addressing fellow puzzle fanatics rather than the direction of the article), hoping to find it obvious "where the numbers go" throughout doesn't technically suffice either; only when you have in effect determined a search tree, pruned it until it is [relatively] sparse, and then walked it can you argue you have properly iterated and eliminated all impossible situations (thereby including "where the numbers *don't* go"), as per Occam's razor, at every step from start to finish.

    One solution, and only one, always results for me when solving by hand, as required. I've written some of my methods as code, but not all ... however, at this point I am suspicious of the possibility that typical "brute force" solvers may be at risk of (mis)identifying a puzzle as having multiple solutions when that's not necessarily the case...
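
    On that last point, a carefully written brute-force counter cannot over-count: it either completes a second full grid or it doesn't. Here is a minimal C sketch of the idea (my own illustration, not the commenter's code); it assumes a 9x9 int grid with 0 for empty cells and stops as soon as it has found two solutions, which is all a uniqueness test needs:

    #include <stdio.h>

    /* Returns 1 if value v can legally go at row r, column c of grid g. */
    static int legal(int g[9][9], int r, int c, int v)
    {
        for (int i = 0; i < 9; i++)
            if (g[r][i] == v || g[i][c] == v)
                return 0;
        int br = (r / 3) * 3, bc = (c / 3) * 3;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (g[br + i][bc + j] == v)
                    return 0;
        return 1;
    }

    /* Counts solutions by backtracking, but never counts past 'limit';
       limit = 2 is enough to decide whether a puzzle is unique. */
    static int count_solutions(int g[9][9], int limit)
    {
        for (int r = 0; r < 9; r++)
            for (int c = 0; c < 9; c++)
                if (g[r][c] == 0) {
                    int found = 0;
                    for (int v = 1; v <= 9 && found < limit; v++)
                        if (legal(g, r, c, v)) {
                            g[r][c] = v;
                            found += count_solutions(g, limit - found);
                            g[r][c] = 0;
                        }
                    return found;   /* branch on the first empty cell only */
                }
        return 1;                   /* no empty cells left: one full grid */
    }

    int main(void)
    {
        /* Stand-in input: the empty grid, which obviously has many fills.
           Replace with a real puzzle (0 = empty cell) to test uniqueness. */
        int grid[9][9] = {{0}};
        printf("unique? %s\n", count_solutions(grid, 2) == 1 ? "yes" : "no");
        return 0;
    }

    With a genuine puzzle in place of the empty grid, a result of 1 confirms exactly the "one solution, and only one" property the commenter relies on when solving by hand.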
