Language bugs infest downstream software, fuzzer finds

Developers following secure development guidelines can still be bitten by upstream bugs in the languages they use. That's the conclusion of research presented last week at Black Hat Europe by IOActive's Fernando Arnaboldi. As Arnaboldi wrote in his Black Hat Europe paper [PDF]: “software developers may unknowingly include …

  1. Steve Davies 3 Silver badge
    Facepalm

    but... will anyone learn from this?

    and include all these tests (and more) into the regression testing of these tools before they are released?

    What? They don't do regression testing!

    Facepalm.

    TBH, it seems like the pressure on developers to release before it is ready is having an impact on all of us.

    Once upon a time I was able to properly test my code before it got anywhere near the users.

    Now with all these automated testing tools and test cases and all the rest, have we actually improved the quality of what gets shoved out the door?

    There are days (like today when I look out at the snow covered landscape) when I really have my doubts.

    On the bright side, adopting the mythical DevOps will fix it all and we will be happy bunnies :wink:

    1. Charlie Clark Silver badge

      Re: but... will anyone learn from this?

      How would regression tests help here?

Automated testing just automates your tests; it won't write them for you. That said, correctly set up, it does help you test your code properly before you release. What would suggest otherwise?

    2. Dave Mitchell

      Re: but... will anyone learn from this?

The standard perl interpreter build and install process runs 1,157,593 regression tests from 2,629 test scripts. Any bug fix is almost always accompanied by a new regression test.

      Also, the bug they supposedly found in the perl interpreter was actually a bug in an obscure library that is extremely unlikely to be used in a way that would be exploitable.

      Also, the perl development team already regularly run fuzzers.

      The whole paper appears to be hype.

      1. Primus Secundus Tertius

        Re: but... will anyone learn from this?

        @Dave M

        It does look as though there are too many "security consultants" in this business, all scraping the barrel for edge cases they can use to make publicity and boost their business.

        Perhaps they should return to writing real programs.

      2. Charlie Clark Silver badge

        Re: but... will anyone learn from this?

        The whole paper appears to be hype.

        Not quite: in some environments you should be prepared to run more extensive penetration tests, especially on stuff you don't control directly.

        1. Vendicar Decarian1

          Re: but... will anyone learn from this?

          Why should anyone care if their apps are insecure when every app and OS is insecure?

    3. HmmmYes

      Re: but... will anyone learn from this?

      Well ..... this fuzzing thang is pretty new-ish.

I'm sure well-run projects will incorporate the tests into their release tests. I know I'm looking at using fuzzing - anything a computer can do testing-wise rather than me is 'A good thing (c)'.

I note the XDIFF tool is written in Python, so Python can't be that bad ...

      1. Charlie Clark Silver badge

        Re: but... will anyone learn from this?

I note the XDIFF tool is written in Python, so Python can't be that bad ...

        It isn't. I think the main point is code using libraries without a thorough security review (where appropriate). For example, the main Python XML libraries do have known security issues and should, therefore, not really be used with untrusted data.
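As a hedged, stdlib-only sketch of one common mitigation (in practice you'd usually reach for the defusedxml package instead): entity-expansion attacks such as billion laughs all need a DTD, so refusing any DOCTYPE up front closes that avenue before the parser ever sees it:

```python
import xml.etree.ElementTree as ET

def parse_untrusted(xml_text: str) -> ET.Element:
    # Entity-expansion attacks (billion laughs, external entities)
    # all require a DTD, so reject any DOCTYPE outright.
    if "<!DOCTYPE" in xml_text:
        raise ValueError("DTDs not allowed in untrusted input")
    return ET.fromstring(xml_text)

tree = parse_untrusted("<root><item/></root>")  # harmless input parses fine
```

It's a blunt check (a legitimate DTD is also rejected), which is exactly the trade-off you accept for untrusted data.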

        1. HmmmYes

          Re: but... will anyone learn from this?

          Ah, not a problem. I have a fix for that - I never use XML.

    4. Vendicar Decarian1

      Re: but... will anyone learn from this?

      Regression testing?

      Oh please. If your buffer can overflow, then just make it bigger.

      That is the solution to every problem isn't it?

    5. This post has been deleted by its author

  2. cantankerous swineherd

    an oldie but a goodie: https://dl.acm.org/citation.cfm?id=358210

there is a way round this, but not sure the average programmer would bother. I certainly haven't.

    1. Destroy All Monsters Silver badge

You don't need to go that far (i.e. hiding injected code in the compiler and regenerating it from there).

      It's about making sure that "undocumented features" don't exist.

It's one of the major points in any high-assurance software project: an audit will kick your arse sideways if you can't show code review, coverage metrics and signed-off justifications for any feature suddenly uncovered.

      1. Dazed and Confused
        Facepalm

        Re:"undocumented features"

        It's about making sure that "undocumented features" don't exist.

You don't always have to go that far. Sometimes developers simply don't trace a library function's call path all the way down, and so never learn about its documented features.

I remember years ago coming across a feature of the chsh command. Because it needs to modify the passwd file, it ran SUID root. The command was quite simple and just relied on the standard password file functions of the time, and those functions relied on the stdio functions.

chsh had a kind of sibling command, chfn. The developer of chfn had read all the manual pages, knew that the underlying functions wouldn't handle a line of over 1024 characters, and had included code to ensure they would never be asked to. The developer of chsh hadn't been so diligent.

So you could chsh and give yourself a shell with a short name, such as the Bourne shell had, then use chfn to max out the length of your passwd file entry. Then, going back to chsh, you switched to a shell with a longer name - csh or ksh would do - and the command, which neither sanity-checked the data nor understood the lower-level functions, would write out a really dumb entry in the file.

        The feature was known, but only to people who had RTFM'd.
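The missing check is easy to sketch. This is a hypothetical Python reconstruction of the guard chfn had and chsh lacked - the 1024-byte limit comes from the story above, but the field layout and function name are illustrative assumptions, not the historical code:

```python
MAX_LINE = 1024  # limit the old stdio-based passwd routines could handle

def build_passwd_entry(user: str, gecos: str, shell: str) -> str:
    entry = f"{user}:x:1000:1000:{gecos}:/home/{user}:{shell}"
    # The check chfn's author added after reading the manual pages,
    # and chsh's author skipped:
    if len(entry) >= MAX_LINE:
        raise ValueError("passwd entry would exceed what the library handles")
    return entry
```

Without that final `if`, a long GECOS field set elsewhere plus a longer shell name silently produces a line the lower-level functions can't cope with - exactly the chsh failure mode.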

  3. Starace
    Flame

    Real languages don't add bugs

    Begone with your toytown stuff; you wanted something easy to use and you're shocked that it might be flakey.

    This is the cost you pay for code that hides the underlying mechanics and tries to do everything for you.

    1. Charlie Clark Silver badge

      Re: Real languages don't add bugs

Try writing something on modern hardware with one of your bug-free languages and not introducing your own bugs. It's worth noting that many of the security issues that have popped up recently aren't necessarily bugs but can arise from the context in which the software is running.

      1. yoganmahew

        Re: Real languages don't add bugs

        @Charlie C

        "Try writing something on modern hardware with one of your bug free languages and not introducing your own bugs."

        I write in Assembler on z/TPF on z13 mainframes every day. You cannot escalate privileges, the privileged sets are hard-wired in. You cannot inject SQL, it is not supported (not yet anyway :|). You cannot buffer overflow, a CTL-3 or opr-4 provides no opportunity to intercept.

        There are some disadvantages, but security bugs are not one of them...

    2. Anonymous Coward
      Anonymous Coward

      Re: Real languages don't add bugs

      So, you manually write all the machine opcodes and do register allocation and memory management in a spreadsheet?

Every library will have bugs. Bugs in standard libraries will have more widespread effect.

  4. Michael H.F. Wilkinson Silver badge

    This is a perennial problem

I wonder why this is news, as this issue has been known under many guises for a long time. My own code might be provably correct, but what about the compiler, libraries or even the OS it is running on? For that matter, what about the hardware (Pentium floating point bug, anyone)?

I remember having to create quite a few workarounds for compiler and run-time library errors in image processing software I developed in the past. The problem with workarounds is that they might actually bork the code when the error in compiler or library is fixed.

Luckily these were DOS systems not connected to the internet, so the attack surface consisted mainly of floppies thoughtless users inserted into the system, but the fundamental issue remains: the security of your code depends on that of many others.

    1. Charlie Clark Silver badge
      Thumb Up

      Re: This is a perennial problem

      IOW: Anyone who thinks their system is unbreakable is delusional.

    2. GrapeBunch

      Re: This is a perennial problem

      Original provable output:

      I am having one wonderful day.

      Real world compile:

      I am having 0.9999999998 wonderful day.

      Real world compile with work-around:

      I am having one wonderful day.

      Real world compile with work-around, after compiler bug fixed:

      I am having 1.0000000002 wonderful day.
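The joke reproduces in any language with binary floating point; a quick Python sketch, with the usual tolerance-based work-around:

```python
a = 0.1 + 0.2
print(a)         # 0.30000000000000004 - the "0.9999999998 wonderful day" effect
print(a == 0.3)  # False: exact comparison is the bug-prone "provable output"

from math import isclose
print(isclose(a, 0.3))  # True: compare with a tolerance instead
```

Unlike the compiler work-around in the joke, `isclose` keeps working after any "fix", because it never assumed exactness in the first place.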

  5. Christian Berger

    That's why one should reduce the total complexity

    Just outsourcing complexity to libraries won't make it go away. That's why movements to increase complexity, from HTTP/2 to ME are so problematic.

The Internet was created as a way to reduce complexity; Internet-Email is so much simpler than X.400. HTTP/HTML used to be simple protocols you could implement easily.

So to summarize: complexity is bad. There may be good reasons to add some, but without a _really_ good justification you shouldn't add more complexity to any system.

    1. Anonymous Coward
      Anonymous Coward

      Re: Internet-Email is so much simpler than X.400.

      Reducing complexity (and reducing the size of the attack surface) is generally a Good Thing from a robustness and security point of view.

      That said, Internet email may have been simpler than X.400 back in the day when X.400 was the Next Big Thing, but in the real world the "simplicity" of Internet email from that era has long gone, to be replaced by a nightmare collection of unarchitected bandaids and patches trying to build security on something that fundamentally isn't trustworthy and is barely fit for purpose in today's environment.

      X.400 and friends were architected, designed, and built to do exciting 'modern' stuff like compound documents, to support a character set with more than USASCII characters, to do stuff like trustworthy documents and trustworthy identities and trustworthy proof of delivery etc - these things were fundamental to the design, not bolted on as afterthoughts. So X.400 etc was inevitably more complicated than the teletype-era protocols POP/SMTP (and their dependencies) were back then. That extra complexity also required more compute power, which back then wasn't readily available at low cost like it has been for many years.

      Nowadays the need for trustworthy email has massively increased, while the cost of the necessary compute power has massively decreased. But courtesy of the commodity internet "service" providers and others, most of us are stuck with the email architecture of the 1970s, plus the band aids and elastoplasts that attempt to make it workable in today's untrustworthy world.

      Further reading: see e.g.

      https://www.isode.com/whitepapers/x400-messaging.html

Alternatively: to fail to plan is to plan to fail. Internet email wasn't planned to be the way it has ended up, and it shows. X.400 was planned, designed, and implemented; internet email was merely simpler *at the time*. Sometimes it's best to accept that something is no longer fit for wide usage.

      Complexity is bad in general. There may be good reasons to add some complexity, and X.400 and friends address many of those from the ground up.

      Without a _really_ good justification you shouldn't attempt to force-fit a toolset into an environment which would be better served by a different toolset, even if it means abandoning the 'cheap' legacy toolset.

      1. Christian Berger

        Re: Internet-Email is so much simpler than X.400.

Well, actually, integrity is provided in E-Mail by PGP... which essentially encodes your UTF-8 text into ASCII text, at minimal complexity.

        1. Charlie Clark Silver badge

          Re: Internet-Email is so much simpler than X.400.

          Well actually integrity is provided in E-Mail by PGP

Nope, PGP is an add-on that, while very useful, suffers from a lack of universal adoption and the key distribution problem.

Systems that do provide secure end-to-end encryption, such as Signal, suffer from poor portability and interoperability instead.

    2. Anonymous Coward
      Anonymous Coward

      Re: That's why one should reduce the total complexity

      The Internet was not created "to reduce complexity".

      Neither SMTP nor HTTP are trivial to implement correctly with all features and edge-cases considered.

  6. Frank Bitterlich
    Holmes

    This is ridiculous.

I just read the section about the PHP "bug" that guy claims to have discovered. In reality, it is known, documented behaviour that occurs with buggy PHP code (i.e. accessing an undefined constant). The described case (it is nowhere near a PoC) actually relies on at least two coding mistakes in the PHP code.

    It only proves two already well-known things: 1. Bugs in your code can cause vulnerabilities, and 2. PHP is designed in a way that makes it easy to mess up.

I can only assume that the other so-called "language bugs" also come from the department of the bleeding obvious.
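The undefined-constant foot-gun is easy to contrast with how most languages behave. A minimal Python sketch (the names here are made up for illustration): where older PHP, pre-8.0, silently coerced an undefined constant to the string of its own name with only a notice, Python refuses to guess and fails loudly:

```python
ADMIN = "admin"

def role_ok(role):
    # Comparing against a constant that actually exists...
    return role == ADMIN

try:
    role_ok(ADMN)  # typo: this name was never defined
except NameError as e:
    # ...fails loudly here, instead of silently comparing
    # against the string "ADMN" as old PHP would.
    print("caught:", e)
```

That design difference is exactly why the "bug" in the paper needs two coding mistakes in the PHP to become interesting at all.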

  7. Adrian 4

    unhelpful errors

    "information disclosure in NodeJS via error messages"

    That would be a problem, I can see. I always thought unhelpful error messages were just the vendor's laziness, but it seems they're actually a security requirement.
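It genuinely is a requirement; the usual compromise is to keep the helpful detail in the server log and hand the user only an opaque reference. A sketch, with all names invented for illustration:

```python
import logging
import uuid

log = logging.getLogger("app")

def safe_error(exc: Exception) -> str:
    # Full detail (exception type, paths, messages) stays server-side...
    ref = uuid.uuid4().hex[:8]
    log.error("error %s: %r", ref, exc)
    # ...while the client only learns an opaque correlation id,
    # so nothing about internals leaks in the response.
    return f"Something went wrong (ref {ref})"

msg = safe_error(FileNotFoundError("/etc/secret/config"))
print(msg)
```

Support can still look up `ref` in the logs, so the message is unhelpful to an attacker without being unhelpful to you.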
