Here's a fab idea: Get crypto libs to warn devs when they screw up

Building warnings into crypto libraries that alert developers to unsafe coding practices turns out to be an effective way to improve the security of applications. At the USENIX Symposium on Usable Privacy and Security (SOUPS) 2018 this week, a group of researchers from several universities in Germany reported findings to this …

  1. Ken Moorhouse Silver badge

    OK. Own up. How many coders...

    Put code into their apps which silently discard unhandled errors?

    (Whatever other programming misdemeanours I'm guilty of, that is one I can put hand on heart and say "not me").

    >Seventy-three per cent of the participants who received the security advice fixed their insecure code.

    Surely that's 27% who either put up with the embarrassment of that warning message or silently suppress it.

    1. A Non e-mouse Silver badge

      Re: OK. Own up. How many coders...

      Put code into their apps which silently discard unhandled errors?

      I suspect lots, as far too many examples on Stack Overflow have code that looks like:

      try {
          ...
      } catch (Exception e) {
          // Do something
      }
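
      A minimal sketch of why that pattern bites, and a safer alternative. The class and method names here (`SafeHandling`, `decryptSwallowing`, `decryptOrFail`, `doDecrypt`) are hypothetical stand-ins, not from any real library:

```java
import java.security.GeneralSecurityException;
import java.util.logging.Level;
import java.util.logging.Logger;

class SafeHandling {
    private static final Logger LOG = Logger.getLogger(SafeHandling.class.getName());

    // The anti-pattern: the failure vanishes and the caller carries on
    // as if the crypto operation had succeeded.
    static byte[] decryptSwallowing(byte[] ciphertext) {
        try {
            return doDecrypt(ciphertext);
        } catch (Exception e) {
            // "Do something" -- i.e. nothing. Caller just gets null.
            return null;
        }
    }

    // A safer shape: log with context, then rethrow so the failure
    // is impossible to miss.
    static byte[] decryptOrFail(byte[] ciphertext) throws GeneralSecurityException {
        try {
            return doDecrypt(ciphertext);
        } catch (GeneralSecurityException e) {
            LOG.log(Level.SEVERE, "decryption failed", e);
            throw e;
        }
    }

    // Hypothetical stand-in for the real crypto call; always fails here
    // so both handlers can be exercised.
    private static byte[] doDecrypt(byte[] ciphertext) throws GeneralSecurityException {
        throw new GeneralSecurityException("bad padding");
    }

    public static void main(String[] args) {
        System.out.println(decryptSwallowing(new byte[0]) == null); // true
        try {
            decryptOrFail(new byte[0]);
        } catch (GeneralSecurityException e) {
            System.out.println("caught: " + e.getMessage()); // caught: bad padding
        }
    }
}
```

      With the first shape, a padding-oracle or tampered-ciphertext failure looks identical to success-with-no-data; with the second, it at least surfaces somewhere.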

      1. Gene Cash Silver badge

        Re: OK. Own up. How many coders...

        > Stackoverflow

        Ugh. When I was trying to do SSL client auth, I saw dozens of "it won't take my SSL certificate" questions answered by "here's how to accept **ANY** certificate", followed by "thanks, I put that into production".
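
        For the record, the "accept ANY certificate" pattern those answers hand out typically looks like the sketch below (the class name `TrustEverything` is made up for illustration): both check methods are empty, so no chain is ever rejected and any man-in-the-middle is silently trusted. The fix is usually to not write a `TrustManager` at all and let the platform defaults validate against the system trust store:

```java
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import java.security.KeyStore;
import java.security.cert.X509Certificate;

// The anti-pattern: every certificate chain passes. Never ship this.
class TrustEverything implements X509TrustManager {
    public void checkClientTrusted(X509Certificate[] chain, String authType) {}
    public void checkServerTrusted(X509Certificate[] chain, String authType) {}
    public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
}

class ProperTrust {
    public static void main(String[] args) throws Exception {
        // The sane default: let the JVM validate against its own trust store
        // instead of rolling a custom TrustManager.
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null = the JVM's default cacerts store
        System.out.println(tmf.getTrustManagers().length > 0); // true
    }
}
```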

        I'd hope this API would simply segfault when asked to do something that retarded.

        1. JohnFen

          Re: OK. Own up. How many coders...

          Personally, I think that using code you find on Stack Overflow (and similar sites) should be a straight-up firing offense.

          Using Stack Overflow to deepen your understanding of how to do something? Great!

          Just copy-pasting "solutions" you find there? That's abandoning your duty as a software engineer.

    2. JohnFen

      Re: OK. Own up. How many coders...

      Like all programmers, I've developed my share of bad habits -- but I can honestly say that isn't one of them.

    3. Anonymous C0ward

      Re: OK. Own up. How many coders...

      I have the Sisyphean task of trying to purge this behaviour in our codebase at work. The outsourcers keep making more of it.

  2. DCFusor
    Headmaster

    resentment of Clippy

    You mean he resented us in return? I had no idea AI had advanced so far.

    1. cream wobbly

      Re: resentment of Clippy

      Clearly that's Soviet Russia Clippy. Клипливо?

  3. GIRZiM

    Security is a process that requires hitting people over the head with their errors

    It's just gagging for it really, isn't it?

    Obligatory xkcd linkout.

  4. Daggerchild Silver badge
    Terminator

    Take it to the next level

    I had some code that mailed me if someone was using my stuff in a dangerously wrong way.

    Alas, it never got into the released version, but it would have been amusing for an error handler to physically manifest a bat-wielding human behind the developer.

  5. Phil Endecott

    This doesn’t seem all that smart to me; it’s easy to add code to check if the caller has asked for DES rather than AES, but much harder to check if they have handled exceptions or error return values correctly. That really needs some sort of static analysis tool.
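
    The "easy" half of that is simple string inspection at the API boundary. A hypothetical sketch of the kind of runtime nudge the paper describes (the class `WarningCipherFactory` and its weak-algorithm list are my invention, not a real library API):

```java
import java.io.PrintStream;
import java.util.Set;

class WarningCipherFactory {
    // Known-weak algorithms to warn about; list is illustrative, not exhaustive.
    private static final Set<String> WEAK = Set.of("DES", "DESede", "RC4");

    // Inspect the requested transformation and warn on weak choices,
    // returning the algorithm name so a real factory could delegate onward.
    static String check(String transformation, PrintStream warnings) {
        String algorithm = transformation.split("/")[0];
        if (WEAK.contains(algorithm)) {
            warnings.println("WARNING: " + algorithm
                + " is considered broken; use AES/GCM/NoPadding instead");
        }
        return algorithm;
    }

    public static void main(String[] args) {
        check("DES/ECB/PKCS5Padding", System.err); // emits a warning
        check("AES/GCM/NoPadding", System.err);    // silent
    }
}
```

    Checking whether the caller then handles exceptions or return values correctly is, as the comment says, a job for static analysis rather than a runtime check like this one.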

    1. Anonymous Coward
      Anonymous Coward

      I prefer the LibreSSL strategy of simply removing all the bad, broken or exploited algorithms. Then your code simply doesn't compile/run if you try to use a garbage encryption algorithm or weak parameters. While I'm sure devs appreciate the hints on how to use a good algorithm correctly, it still won't stop them from doing stupid stuff when they're apathetic or pressed for time.

      Ideally, all the crypto--including exception handling--should happen in the crypto library. Asking a regular dev to understand how all the fiddly crypto details work is a recipe for disaster. Getting to that ideal is not easy though.

    2. Charlie Clark Silver badge

      Agreed, but that's not what this is about. The warnings aren't catchable exceptions (though the display level can be configured); they're shown when the code is run. The effect is different from going through a report generated by static analysis or tracing: it's more immediate and directly relevant to what you're doing. They're generally used to inform rather than complain. Here's one that I've just started getting from NumPy:

      RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88

      Of course, static checkers should pick them up as well, but I think the nudge approach is a good idea.

  6. John Geek

    My experience is that crypto APIs are incredibly complicated, hard to use, and poorly documented. This especially applies to OpenSSL.

    1. dajames

      My experience is that crypto APIs are incredibly complicated, hard to use, and poorly documented. This especially applies to OpenSSL.

      Designing and securely implementing cryptographic interfaces is a complex process that's really hard to do well ... but using them is much easier ... or would be if there were any documentation, which all too often there isn't (yes, OpenSSL, I'm looking at you, too).

      Adding a few carefully-worded Doxygen comments would make all the difference ...
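
      A sketch of the sort of call-site documentation being asked for, here as Javadoc on a hypothetical interface (`AeadBox`, its parameters and its nonce||ciphertext||tag layout are all invented for illustration): the contract, the dangerous parameters and the failure modes are spelled out where the developer will actually read them.

```java
import java.security.GeneralSecurityException;

interface AeadBox {
    /**
     * Encrypts {@code plaintext} with AES-256-GCM under {@code key}.
     *
     * @param key       exactly 32 random bytes from a CSPRNG; reusing a key
     *                  with a repeated nonce destroys the security guarantees
     * @param plaintext the data to protect
     * @return a fresh random nonce, the ciphertext and the 16-byte tag,
     *         concatenated in that order
     * @throws GeneralSecurityException if {@code key} has the wrong length
     */
    byte[] encrypt(byte[] key, byte[] plaintext) throws GeneralSecurityException;
}
```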

  7. Nick Kew

    Wood for the trees

    This looks like classic deprecation warnings. And yes, warnings are a useful tool: don't you always use at least -Wall with gcc and insist the build is clean? That's a policy that can make life more difficult when faced with a third-party or legacy codebase that generates reams of warnings, especially when combined with a PHB who insists you treat it as a 'black box'.

    But it's a narrow focus. And when it makes a programmer's life more difficult, it risks being counterproductive, by causing the programmer to take his eye off the ball and risk introducing other errors that should be obvious. Perhaps the next experiment should test whether the warnings are productive when the programmers are presented with a legacy codebase that generates a gigabyte of them?

  8. Stephen Horne

    Possible risk of risk compensation?

    Increased safety measures are sometimes claimed to lead to more accidents and injuries, because the perceived reduction in risk can be greater than the actual reduction - a particularly problematic case of risk compensation. I wouldn't be surprised if people who see these warnings learn to rely on them, and that might make them overconfident, so they overlook other problems - ones the library can't detect - that they might otherwise have checked for.

    Which is not to claim this is a bad idea - it's just that forcing idiots like me to write secure code needs a more all-inclusive approach (yet one that's still checked by default and more convenient than working out how to turn it off).

    Otherwise, just about the only thing we can do is make everyone who's responsible for the security of software responsible for the security of that software - LOL!
