You can resurrect any deleted GitHub account name. And this is why we have trust issues

The sudden departure of a developer from GitHub, along with the Go code packages he maintained, has underscored a potential security issue with the way some developers rely on code distributed through the community site. The individual identifying himself as Jim Teeuwen, who maintained the GitHub repository for a tool called go- …

  1. Jamie Jones Silver badge

    "The current owner had no way to directly redirect the repo, so he made such work-around so that he could safely go home without being blamed by his supervisor," he explained. "And of course, hoped this would also save someone else trapped in similar situation."

    Although it doesn't necessarily imply he's to blame, if the current owner is responsible for code that relies on some third party developer, on some third party site, with no contract agreements in place, he deserves to be blamed, fired, and shot by his supervisor.

    I mean... WTF? If you use a third party module in live code, you surely don't link it to a live repository. What am I missing here?

    1. Andrew Commons

      What am I missing here?

      Nothing.

      If you use a third party module you download it, put it under your source code control, and be prepared to maintain it yourself.

      1. mrtom84

        Re: What am I missing here?

        I would say a bare minimum would be forking the repository to your own organisation's GitHub account and linking to that.

        1. Anonymous Coward

          'a bare minimum would be forking'

          But this way you would need to merge back upstream changes.... isn't github just a repo, not a VCS? Merging is difficuuuuuuult, that's what I've been told, why risk to make a mistake? Just milk someone else's work, with minimum effort....

          1. mrtom84

            Re: 'a bare minimum would be forking'

            Github makes it pretty trivial to keep your fork synced with upstream changes and if you’re not contributing there won’t be anything to merge.

            1. Anonymous Coward

              Re: 'a bare minimum would be forking'

              > Github makes it pretty trivial to keep your fork synced with upstream changes

              What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

              1. Jon 37

                Re: 'a bare minimum would be forking'

                > What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

                Yes.

                If you're working in a responsible manner, you need to do a license review of every dependency anyway, so you will be making a list of all dependencies anyway (including dependencies of dependencies etc) and can just fork all of them.

                That way:

                a) You don't have problems due to a server being down

                b) You don't have problems due to someone pushing a bug or non-backward compatible change

                c) You can check the licenses of all the software you're using, in case some dependency adds a new dependency with an unacceptable license

                d) If something breaks, it's possible to answer the question "what changed".
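
                The "fork everything" policy above amounts to taking the transitive closure of the dependency graph. A minimal sketch in Python; the dependency map is invented for illustration, and in practice it would be read from your manifests (go.mod, package.json, and so on):

```python
def transitive_deps(root, dep_map):
    """Return every package reachable from root, i.e. the full set to mirror."""
    seen = set()
    stack = [root]
    while stack:
        for dep in dep_map.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical dependency graph for illustration only:
deps = {
    "app":   ["lib-a", "lib-b"],
    "lib-a": ["lib-c"],
    "lib-b": ["lib-c", "lib-d"],
    "lib-c": [],
    "lib-d": [],
}

print(sorted(transitive_deps("app", deps)))
# ['lib-a', 'lib-b', 'lib-c', 'lib-d'] -- each of these needs a fork/mirror
```

                The same walk also gives you the list Jon 37 mentions for license review: one pass over the closure covers every dependency of every dependency.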

                1. Anonymous Coward

                  Re: 'a bare minimum would be forking'

                  > If you're working in a responsible manner

                  Mind giving an actual example from your own experience? Cheers

                2. ibmalone

                  Re: 'a bare minimum would be forking'

                  While this is the ideal, it also has costs. You're taking on maintenance of everything you fork too. Security flaw in one of your dependencies? Now you're responsible for backporting the fix, or upgrading to the fixed version and checking compatibility anyway, though what you've gained is a bit more control of that process. And how far down do dependencies go? On Linux do you fork glibc, on Windows do you need the OS source? Unless you are developing the whole stack it's a question of how you handle the parts that are out of your control.

                  1. Claptrap314 Silver badge

                    Re: 'a bare minimum would be forking'

                    What you do here depends entirely on whether or not you can trust your infrastructure. If you have good test coverage on your code with signed updates from a reputable source, you in fact DO update your dependencies to latest by default, in many cases. (And if the build fails, rollback the dependency & create a ticket.) Otherwise, you poll your sources regularly daily (or more), and create a ticket when a new version becomes available. And by "you", I mean your integration pipeline.

                    Security is neither cheap nor easy, but we don't have to make life miserable for ourselves, either.

              2. Anonymous Coward

                Re: 'a bare minimum would be forking'

                What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

                That's precisely why you keep local copies of them all, how else do you avoid falling into dependency hell?

                Just to take one small example, imagine that you have some components which depend on OpenSSL 1.0.2. One day one of them gets updated, and now needs OpenSSL 1.1. You don't actually need any of the changes in the new version, but it gets pulled down automatically, and with that it upgrades OpenSSL to 1.1 to meet its dependencies.

                Unfortunately 1.0.2 and 1.1 are not compatible, the APIs changed, so every other component in your build that requires OpenSSL will break. Keeping your own fork, and only upgrading if and when you need to, avoids such issues and is only common sense. Something severely lacking in these Agile DevOps days, it would seem.
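
                The breakage described above is easy to sketch as a conflict check over what each component demands. The component names and the OpenSSL 1.0.2/1.1 versions are just this hypothetical scenario, not a real manifest:

```python
def version_conflicts(components):
    """Return libraries for which components demand more than one version
    (fatal when, as with OpenSSL 1.0.2 vs 1.1, the APIs are incompatible)."""
    demands = {}
    for reqs in components.values():
        for lib, ver in reqs.items():
            demands.setdefault(lib, set()).add(ver)
    return {lib: vers for lib, vers in demands.items() if len(vers) > 1}

components = {
    "comp-a": {"openssl": "1.0.2"},
    "comp-b": {"openssl": "1.0.2"},
}
print(version_conflicts(components))  # {} -- pinned forks, everyone agrees

# comp-b blindly pulls its upstream update, which now wants the 1.1 API:
components["comp-b"] = {"openssl": "1.1"}
print(sorted(version_conflicts(components)["openssl"]))  # ['1.0.2', '1.1']
```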

                1. bombastic bob Silver badge
                  Devil

                  Re: 'a bare minimum would be forking'

                  "Something severely lacking in these Agile DevOps days, it would seem."

                  ya think?

              3. find users who cut cat tail

                Re: 'a bare minimum would be forking'

                > What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

                By avoiding dependencies that bring such dependency hell with them.

              4. David Nash Silver badge

                Re: 'a bare minimum would be forking'

                "What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And..."

                Yes. That's the point that is being made.

          2. bombastic bob Silver badge
            Devil

            Re: 'a bare minimum would be forking'

            "Merging is difficuuuuuuult"

            no, it's not. do a snapshot and merge to the master branch (on the original) with a pull request from time to time. Easy.

            Or you can fork the repo, fix things on your end, and THEN do a pull request into the original repo [which I did a while back with the Arduino project, as an example]. It's the best way to contribute. It's all well documented on github.

            1. Anonymous Coward

              "no, it's not. do a snapshot and merge to the master branch"

              It looks that for some people sarcasm is hard to understand, despite the hints, just like merging for others, despite the tools.

              1. Nick Ryan Silver badge

                What's with combining the build process steps?

                Pretty much every project has dependencies and (depending on the target of course) generally a system will only build if all the dependencies are available locally anyway.

                This does not necessitate downloading the latest version of every dependency and mindlessly using it every time a system is built.

                Updating dependencies is a good thing. Updating dependencies with no thought or control over the process is a bad thing and, in fact, a very naive and foolish thing to do. Without control over dependencies there is no reproducibility and no genuine testing is possible, and therefore it is not possible to truly support a system much beyond hope and guesses.
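
                One way to get that control is to refuse any dependency that isn't pinned to an exact version and content hash. A sketch, with a manifest format invented for illustration:

```python
def unpinned(manifest):
    """Return dependencies that float: 'latest', version ranges, or no hash."""
    bad = []
    for name, spec in manifest.items():
        version = spec.get("version", "latest")
        floating = version == "latest" or any(c in version for c in "^~*<>")
        if floating or "sha256" not in spec:
            bad.append(name)
    return sorted(bad)

manifest = {
    "left-pad": {"version": "1.3.0", "sha256": "ab" * 32},
    "liveware": {"version": "^2.0"},            # range, and no hash recorded
    "unstable": {"version": "latest", "sha256": "cd" * 32},
}
print(unpinned(manifest))  # ['liveware', 'unstable'] -- build not reproducible
```

                A build gate like this is what makes "what changed?" answerable after the fact.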

      2. Anonymous Coward

        Re: What am I missing here?

        Especially since git is designed to support exactly that model: you just clone the repo locally and use that copy.

        1. Anonymous Coward

          Re: What am I missing here?

          > Especially since git is designed to support exactly that model

          You are aware that GitHub, as the name may suggest, is nothing more than a host of Git repositories with various bells and whistles tied into it?

          Local cloning is exactly what you do.

          1. Anonymous Coward
            Meh

            Re: What am I missing here?

            Yes, I am quite aware of that, thanks. The problem is that a lot of people, and the infrastructures they build, aren't: rather than cloning (even cloning into GH) and relying on that clone to be a stable source tree *which they control*, they rely on cloning from the upstream GH repo *for each build or installation*, which is a catastrophe, even if GH did not allow account names to be reused.

            None of this makes GH allowing account name reuse excusable: it's terrible design because it directly supports the construction of compromised versions of tools (wait for the original person to delete their account, create a new account with the same name, clone your secreted copy of the repo (so all the commit hashes are fine), complete with new commits adding badness, into that account and wait).

      3. a_yank_lurker

        Re: What am I missing here?

        To make matters worse, it is trivially easy to download the source code from github and create your own local fork.

      4. bombastic bob Silver badge
        Happy

        Re: What am I missing here?

        "If you use a third party module you download it, put it under your source code control, and be prepared to maintain it yourself."

        Or, just fork it.

    2. bazza Silver badge

      What am I missing here?

      Not a lot so far as I can see. Just another symptom of the lack of rigour within the software dev community these days.

      Internet repos have made programmers lazy. Perhaps there ought to be a quarterly outage (on some random day within the quarter) on all of these online repositories, just to remind people. Bit like Netflix have a piece of software that deliberately randomly crashes bits of their infrastructure to make sure that their devs have built a resilient system.

      As for GitHub permitting a new account to be stood up with a previously used name - terrible.

      1. Doctor Syntax Silver badge

        "As for GitHub permitting a new account to be stood up with a previously used name - terrible."

        Agreed. But is there a need to allow the original owner to re-open the account?

    3. Lysenko

      I mean... WTF? If you use a third party module in live code, you surely don't link it to a live repository. What am I missing here?

      You're missing the fact that almost all modern JavaScript development is based on exactly this model (via node/npm/yarn) and it is even considered to be "bad practice" to bring code into your own source tree rather than download on demand. Given that so many developers start with JS these days, this becomes standard practice and they don't even think about it.

      1. Anonymous Coward

        And this is why many implementations of "DevOps" that are driven by Developers are a really bad thing in practice - most Developers seem to be completely naive when it comes to licensing implications, version control, and even covering their arses when someone removes a package that their code depends on...

        When their answer is "oh we always use the latest from NPM", you know that they're complete idiots...

        1. Anonymous Coward

          > And this is why many implementations of "DevOps"

          If you are anywhere that says they do "DevOps" (why do they always come up with such stupid names?) you know the place is staffed by losers playing at being cool guys.

          1. Jonathan 27

            "DevOps" is what everyone is doing now, outside of your standard 20-years-behind-the-times bloated IT departments. I know this site is particularly filled with older neckbeards stuck in dead-end IT support jobs, but that's just reality now.

            I'm a developer and when I went looking for a job last year there wasn't a single job description that didn't contain the phrase "agile" or "DevOps". I'm not saying this is a good thing, just that it is now the reality we live in.

            And yes, if NPM or Nuget falls over, so does our build process. We're also totally dependent on a big cloud service provider for everything. No I don't think this is good, but it's not up to me.

            1. Anonymous Coward

              "DevOps" is fine if it's designed by people who actually know and understand how to keep a mission-critical system up and running. Which is typically not developers...

              Relying on third-party points of failure with fuck-all SLAs as part of your build/test/deploy process is idiotic at best, utterly negligent at worst.

              Most "developers" aren't systems engineers and never will be. They're hack-and-bash, copy-and-paste, trial-and-error Google/StackOverflow-searching code monkeys who don't really understand what they're doing. And you can't "develop" a mission-critical system, it needs to be engineered. The devil is always in the detail, and assuming you can ignore the abstracted specifics of how things work under the hood is fatal when you're building something that actually needs to work 99.99% of the time.

              None of this precludes DevOps and Agile if it's done right - however 90+% of the time, it's not...

              1. Anonymous Coward

                > Most "developers" aren't systems engineers and never will be. They're hack-and-bash, copy-and-paste, trial-and-error Google/StackOverflow-searching code monkeys who don't really understand what they're doing.

                True that, although I have seen a lot of regional variation in this respect.

                In my experience, the above applies guilty-as-charged to your "typical" programmer in the UK and Norway. In France, Germany and Italy on the other hand, you do not have programmers pretending to be system analysts, but rather proper systems analysts, with relevant qualifications and experience. In Austria, Czech Republic, Croatia and Ukraine, it is a strange mixture of both: they generally know what they're doing but if not they'll just make it up.

                That's just my anecdotal experience of the places I have some experience with.

              2. Anonymous Coward

                A bit harsh, but fair IMHO

                And made me laugh so +1.

                In my shop there seems to be an assumption that Agile and JFDI are the same methodology too!

            2. Anonymous Coward

              > I know this site is particularly filled with older neckbeards stuck in dead-end IT support jobs, but that's just reality now.

              Not everyone, although definitely at least 10 of them as of this writing. :-)

              It is true though that the busiest posting times here seem to coincide with UK office hours, and a second, smaller wave at US hours.

              1. Solmyr ibn Wali Barad

                "Not everyone, although definitely at least 10 of them as of this writing. :-)"

                Not quite. That guy did not offer solid arguments in defence of the practice being disputed. Instead he proclaimed it to be 'a modern way of development', loosely paraphrased of course, and resorted to quite unnecessary name-calling towards his imaginary opponents.

                I'm fairly tolerant towards other commenters and tend to upvote comments if they contain at least one good point (which sometimes means ignoring insults and other not-so-good bits), but in this case I did not find any redeeming features worthy of upvote.

                1. Doctor Syntax Silver badge

                  "but in this case I did not find any redeeming features worthy of upvote."

                  Likewise. He did say he didn't necessarily approve but that's not good enough.

                  Some of us have been around long enough to look at DevOps and realise that that's what we were doing long ago - one team developing, managing the system and supporting our users. We were also aware that what the users were doing was what brought in the money to pay our salaries so not only did we take care to provide what they needed but were also paranoid about protecting the integrity of the data. As we were running systems which weren't internet connected it was second nature to us to keep control of source ourselves. Had the opportunity been given to us to store source elsewhere that sense of responsibility would have precluded it anyway. That's how you build a business to last decades - assuming the manglement doesn't have other ideas.

            3. Phil O'Sophical Silver badge

              And yes, if NPM or Nuget falls over, so does our build process. We're also totally dependent on a big cloud service provider for everything. No I don't think this is good, but it's not up to me.

              So your build process automatically pulls in unverified and untested code, that anyone could have inserted malware into, and you're OK with that? You are, frankly, a fool.

              Oh, and when some personal data escapes and the GDPR guys come around with the €20m fine I think you'll find it is up to you. Your employer will hang you out to dry.

            4. nijam Silver badge

              > when I went looking for a job last year there wasn't a single job description that didn't contain the phrase "agile" or "DevOps".

              Fair enough, and you may be right. But remember, "job description" != "job".

      2. Anonymous Coward

        > Given that so many developers start with JS these days, this becomes standard practice and they don't even think about it.

        Bit of a logic jump there. The model that you accurately describe, while far from perfect, has the advantage of significantly reducing cost and complexity, speeds up prototyping, and ties in well with continuous delivery approaches which from an economic point of view are far more efficient than delivery management or waterfall models.

        Yes, there are also disadvantages which mean this approach is not suitable in every case, one has to know and assess the risks.

        In particular, if your organisation is middle-management heavy this is not going to work. Then again, if you're in such an organisation you're pretty well fucked anyway.

        1. Lysenko

          The model that you accurately describe, while far from perfect, has the advantage of significantly reducing cost and complexity, speeds up prototyping, and ties in well with continuous delivery approaches ...

          Shortcuts are almost always quicker and insurance policies/backups are almost always more expensive. Therefore, it follows that any naive, short-term, "efficiency" calculation will favour risk-taking and dispense with "overhead" like backups, documentation, specifications, quality assurance, redundancy and reliable idempotence. There's nothing new about any of that, all that's changed is that (some) developers think "dynamic risk-taker" ("chancer", in old money) is a compliment.

          As you say, it fits in well with the "Rachman" methodology, where you move the tenants into a building that is half finished (or demolished), and try to shore it up around them before getting bored and moving on to the next (dynamic, innovative, cutting edge) development, leaving the roof still leaking and bare wires poking out of the walls. Nothing wrong with that for a disposable web ad campaign of course, but it's a disaster if you're handling medical records or benefit payments or train bookings or - well, anything that actually matters.

          1. Anonymous Coward

            > Shortcuts are almost always quicker and insurance policies/backups are almost always more expensive.

            Why do you equate efficiency with shortcuts? The problem with the audience here is that they seem to think that projects and businesses relying on, let us call it crowd-sourced technology, do not know what they are doing.

            My direct experience however is diametrically opposite: of three projects that I can think of right now that rely heavily on things like NPM and GitSomething repositories, one is in the area of weather visualisation, another is groundbreaking research into computer vision and AI, and a third is in the Earth sciences. They are staffed by some of the most brilliant and successful people I have ever met, including two billionaires who made their fortunes in IT.

            You *are* quite welcome to stick to your Borland libraries or whatever it is that the so-called corporate crowd use these days, but please do not assume that everyone else is a bumbling idiot.

            1. Lysenko

              Why do you equate efficiency with shortcuts?

              I don't. I was replying to a previous comment which implied that faster/cheaper were the only metrics by which efficiency is calculated. Personally, I believe in the iron triad - "Fast, cheap, right - pick any two".

              Also, I don't necessarily think project leaders are unaware of what they are doing. Some are, but not all. Anyone focussing on "mean time to remediate" metrics (for example) clearly knows they are shipping something buggy, feature deficient or both. That may well be perfectly rational if your strategy relies on "beta test in production" and you are dealing with something based on machine learning where substantial inaccuracies are axiomatic.

              You *are* quite welcome to stick to your Borland libraries or whatever it is that the so-called corporate crowd use these days, but please do not assume that everyone else is a bumbling idiot.

              It's been a long time since I fired up a Delphi IDE. Most of what I work on is Python, C, Go, TypeScript (Angular) and ES6. That's why I'm very well aware of the hilarity that ensues when Github (as they did last month) goes offline for a few minutes to update some SSL certificates.

              I don't assume that chancers are necessarily idiots. Gambling can be a very effective strategy and is a key characteristic of highly successful people. What is idiotic is making npm/github integral to your build system and then getting agitated when it goes down and blows up your deadlines. Competent gamblers know the odds and bet based on calculations, not on magical thinking and cognitive dissonance.

        2. Wulfhaven

          The advantages you list are all paid for with an utter lack of security and stability. There is always a price to pay, and if you are building something with an SLA connected to it, pulling code straight from some random repo is just horribly, horribly inappropriate.

          It has fuck all to do with middle management, and everything to do with accountability. If you do not manage the risks, you rightly pay for the eventual fallout when shit goes boom.

          But, developers rarely think about such things.

    4. Anonymous Coward

      Another aspect is that if Github goes down so does your stuff.

    5. geekguy

      Re: Good.

      Agree,

      Besides if you're going to use open source in your own stuff, you should be working on a fork and possibly contributing to the main projects with pull requests. Never rely on someone else's repo always being there. That's just madness.

      1. heyrick Silver badge

        Re: Good.

        "Never rely on someone else's repo always being there. That's just madness."

        And even when it is there, never rely on it being what it is supposed to be.

        This bloke had good intentions, to revive the repo and restore the code so his stuff worked (and others too). What if the person reviving the repo had a working copy that also punted malware or crypto mining? The question now (and one GitHub need to think about) is how many projects might unknowingly be affected by this because the developers put blind faith in the contents of a third party repository? [that's madness as well]

        1. Richard 12 Silver badge

          Re: Good.

          Buildroot and similar build systems have integrity checks that require the downloaded package to match the expected hash.

          So if someone did change it by any means, the download would fail and the new build machine would need to get its copy by another path.

          I had presumed everybody did that.
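
          That kind of check is easy to reproduce. A minimal sketch of a Buildroot-style hash gate, stdlib only, with a temporary file standing in for the downloaded tarball:

```python
import hashlib
import os
import tempfile

def verify_download(path, expected_sha256):
    """True only if the file's SHA-256 matches the hash recorded at pin time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Stand-in for a downloaded package:
fd, path = tempfile.mkstemp()
os.write(fd, b"package contents")
os.close(fd)

expected = hashlib.sha256(b"package contents").hexdigest()
print(verify_download(path, expected))   # True: untouched upload
print(verify_download(path, "0" * 64))   # False: reject, fetch another way
os.remove(path)
```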

          Aside from that, it is almost impossible to create a new git commit with the same hash as an existing commit.

          What kind of idiot uses the tip, or a named label without checking it is the same commit?
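
          For reference, a commit id is just a SHA-1 over the commit's content, which is why "same hash, different contents" is not something an account-squatter can arrange in practice. A sketch of the computation; the author details are invented, and the tree id is git's well-known empty-tree hash:

```python
import hashlib

def git_commit_id(body: bytes) -> str:
    """Compute a git commit id: SHA-1 over the object header plus the body."""
    # git's object header is: b"commit <size in bytes>" followed by a NUL
    header = b"commit " + str(len(body)).encode() + b"\x00"
    return hashlib.sha1(header + body).hexdigest()

body = (
    b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"  # git's empty tree
    b"author A U Thor <author@example.com> 1500000000 +0000\n"
    b"committer A U Thor <author@example.com> 1500000000 +0000\n"
    b"\n"
    b"Initial commit\n"
)

original = git_commit_id(body)
tampered = git_commit_id(body.replace(b"Initial", b"Malicious"))
print(original != tampered)  # True: any change gives a different id
```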

    6. Anonymous Coward

      Universities

      I really do wonder what the Universities think they're teaching these days. Once upon a time graduates knew something of software engineering; now they seem to produce a bunch of people who think that hacking a load of code together is a "professional" way to do it, "never mind if it's wrong to begin with that's what Agile is for..". As for writing anything that isn't web based, oh dearie me.

      Well, this is great for those of us who like to spend an iota or two thinking about what the hell it is we're writing. We're more likely to get it right, sooner. I've got nothing against Agile development methods as such (if appropriate and done well it's fine), but one does see the label being attached to projects that are nothing but massive hack-fests that never actually produce a useful output.

      1. Anonymous Coward

        Re: Universities

        > I really do wonder what the Universities think they're teaching these days. Once upon a time graduates knew something of software engineering;

        For the purpose of providing context to your comment, in particular your familiarity with what is and isn't taught in higher education, or ever has been, could you please state whether you are a CS graduate and if not, what exactly your academic qualifications are? Cheers.

        1. Anonymous Coward

          Re: Universities

          For the purpose of providing context to your comment, in particular your familiarity with what is and isn't taught in higher education, or ever has been, could you please state whether you are a CS graduate and if not, what exactly your academic qualifications are? Cheers.

          OK. Degree: Electronic Systems Engineering, which translates as IC / CPU design, electronics engineering, requirements engineering, systems specification (including formal specification languages), computer science, and on top of all that there was software engineering too. The RF, power electronics, communications, mathematics, politics and mechanical engineering were just extras for fun. It was one hell of a degree. The Chartered Engineer and EurEng qualifications are merely things that I roll out on occasions like this.

      2. nijam Silver badge

        Re: Universities

        > I really do wonder what the Universities think they're teaching these days.

        Well, maybe look into that, instead of wondering about it.

        I'm pretty sure (working at one) that what you are criticising isn't part of any curriculum I've seen.

    7. ee_cc

      Or the supervisor

      Might be the case that the developer in question - or a predecessor, whose history set precedent in the company - had indeed argued for a privately maintained repository, or to verify dependency hashes and whatnot, but was overruled by managerial resolution.
