You can resurrect any deleted GitHub account name. And this is why we have trust issues

The sudden departure of a developer from GitHub, along with the Go code packages he maintained, has underscored a potential security issue with the way some developers rely on code distributed through the community site. The individual identifying himself as Jim Teeuwen, who maintained the GitHub repository for a tool called go- …

  1. Jamie Jones Silver badge

    "The current owner had no way to directly redirect the repo, so he made such work-around so that he could safely go home without being blamed by his supervisor," he explained. "And of course, hoped this would also save someone else trapped in similar situation."

    Although that doesn't necessarily imply he's to blame: if the current owner is responsible for code that relies on some third-party developer, on some third-party site, with no contractual agreement in place, he deserves to be blamed, fired, and shot by his supervisor.

    I mean... WTF? If you use a third party module in live code, you surely don't link it to a live repository. What am I missing here?

    1. Andrew Commons

      What am I missing here?

      Nothing.

      If you use a third party module you download it, put it under your source code control, and be prepared to maintain it yourself.

      1. mrtom84

        Re: What am I missing here?

        I would say a bare minimum would be forking the repository to your own organisation's GitHub account and linking to that.

        1. Anonymous Coward
          Anonymous Coward

          'a bare minimum would be forking'

          But this way you would need to merge back upstream changes.... isn't GitHub just a repo host, not a VCS? Merging is difficuuuuuuult, that's what I've been told, so why risk making a mistake? Just milk someone else's work, with minimum effort....

          1. mrtom84

            Re: 'a bare minimum would be forking'

            GitHub makes it pretty trivial to keep your fork synced with upstream changes, and if you're not contributing there won't be anything to merge.
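
            For the record, the whole sync dance is only a handful of git commands (a sketch; the URLs and branch name are placeholders):

                git remote add upstream https://github.com/someuser/someproject.git
                git fetch upstream
                git checkout master
                git merge upstream/master    # fast-forwards if you have no commits of your own
                git push origin master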

            1. Anonymous Coward
              Anonymous Coward

              Re: 'a bare minimum would be forking'

              > Github makes it pretty trivial to keep your fork synced with upstream changes

              What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

              1. Jon 37

                Re: 'a bare minimum would be forking'

                > What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

                Yes.

                If you're working in a responsible manner, you need to do a license review of every dependency anyway, so you will already be making a list of all dependencies (including dependencies of dependencies, etc.) and can just fork all of them; a command-line sketch follows the list below.

                That way:

                a) You don't have problems due to a server being down

                b) You don't have problems due to someone pushing a bug or non-backward compatible change

                c) You can check the licenses of all the software you're using, in case some dependency adds a new dependency with an unacceptable license

                d) If something breaks, it's possible to answer the question "what changed".
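
                In practice the wholesale forking can be as cheap as mirroring each repo into an account you control (a sketch; "yourorg" and the URLs are placeholders):

                    git clone --mirror https://github.com/upstream/somedep.git
                    cd somedep.git
                    git push --mirror https://github.com/yourorg/somedep.git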

                1. Anonymous Coward
                  Anonymous Coward

                  Re: 'a bare minimum would be forking'

                  > If you're working in a responsible manner

                  Mind giving an actual example from your own experience? Cheers

                2. ibmalone

                  Re: 'a bare minimum would be forking'

                  While this is the ideal, it also has costs. You're taking on maintenance of everything you fork, too. Security flaw in one of your dependencies? Now you're responsible for backporting the fix, or for upgrading to the fixed version and checking compatibility anyway, though what you've gained is a bit more control of that process. And how far down do dependencies go? On Linux do you fork glibc; on Windows do you need the OS source? Unless you are developing the whole stack, it's a question of how you handle the parts that are out of your control.

                  1. Claptrap314 Silver badge

                    Re: 'a bare minimum would be forking'

                    What you do here depends entirely on whether or not you can trust your infrastructure. If you have good test coverage on your code, with signed updates from a reputable source, you in fact DO update your dependencies to latest by default, in many cases. (And if the build fails, roll back the dependency & create a ticket.) Otherwise, you poll your sources daily (or more often), and create a ticket when a new version becomes available. And by "you", I mean your integration pipeline.
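
                    A rough sketch of what that nightly pipeline job might look like (assumes an npm project; "file-ticket" is a stand-in for whatever CLI your tracker has):

                        npm update                 # try the newest versions of everything
                        if npm test; then
                            git commit -am "bump dependencies"
                        else
                            git checkout -- package.json package-lock.json    # roll back
                            file-ticket "dependency update broke the build"
                        fi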

                    Security is neither cheap nor easy, but we don't have to make life miserable for ourselves, either.

              2. Anonymous Coward
                Anonymous Coward

                Re: 'a bare minimum would be forking'

                What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

                That's precisely why you keep local copies of them all; how else do you avoid falling into dependency hell?

                Just to take one small example, imagine that you have some components which depend on OpenSSL 1.0.2. One day one of them gets updated, and now needs OpenSSL 1.1. You don't actually need any of the changes in the new version, but it gets pulled down automatically, and with that it upgrades OpenSSL to 1.1 to meet its dependencies.

                Unfortunately 1.0.2 and 1.1 are not compatible, the APIs changed, so every other component in your build that requires OpenSSL will break. Keeping your own fork, and only upgrading if & when you need to, so you can avoid such issues, is only common sense. Something severely lacking in these Agile DevOps days, it would seem.
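
                With your own mirror, staying put until you choose otherwise is one command (a sketch; the path is a placeholder, the tag follows OpenSSL's release-tag naming):

                    cd third_party/openssl
                    git fetch origin
                    git checkout OpenSSL_1_0_2o    # stay on 1.0.2 until *you* decide to move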

                1. bombastic bob Silver badge
                  Devil

                  Re: 'a bare minimum would be forking'

                  "Something severely lacking in these Agile DevOps days, it would seem."

                  ya think?

              3. find users who cut cat tail

                Re: 'a bare minimum would be forking'

                > What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies' dependencies? And...

                By avoiding dependencies that bring such dependency hell with them.

              4. David Nash Silver badge

                Re: 'a bare minimum would be forking'

                "What do you do with your forked dependency's dependencies? You fork them too? And *their* dependencies? And their dependencies dependencies? And..."

                Yes. That's the point that is being made.

          2. bombastic bob Silver badge
            Devil

            Re: 'a bare minimum would be forking'

            "Merging is difficuuuuuuult"

            no, it's not. do a snapshot and merge to the master branch (on the original) with a pull request from time to time. Easy.

            Or you can fork the repo, fix things on your end, and THEN do a pull request into the original repo [which I did a while back with the Arduino project, as an example]. It's the best way to contribute. It's all well documented on github.

            1. Anonymous Coward
              Anonymous Coward

              "no, it's not. do a snapshot and merge to the master branch"

              It looks like, for some people, sarcasm is hard to understand despite the hints; just like merging is for others, despite the tools.

              1. Nick Ryan Silver badge

                What's with combining the build process steps?

                Pretty much every project has dependencies and (depending on the target of course) generally a system will only build if all the dependencies are available locally anyway.

                This does not necessitate downloading the latest version of every dependency and mindlessly using it every time a system is built.

                Updating dependencies is a good thing. Updating dependencies with no thought or control over the process is a bad thing and, in fact, a very naive and foolish thing to do. Without control over dependencies there is no reproducibility and no genuine testing is possible, and therefore it is not possible to truly support a system much beyond hope and guesses.

      2. Anonymous Coward
        Anonymous Coward

        Re: What am I missing here?

        Especially since git is designed to support exactly that model: you just clone the repo locally and use that copy.
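
        Literally two commands, and you own a stable copy (a sketch; the URL is a placeholder):

            git clone https://github.com/someuser/importantproject.git
            git -C importantproject remote rename origin upstream
            # from now on, build from this copy; pull from upstream only when you choose to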

        1. Anonymous Coward
          Anonymous Coward

          Re: What am I missing here?

          > Especially since git is designed to support exactly that model

          You are aware that GitHub, as the name may suggest, is nothing more than a host of Git repositories with various bells and whistles tied into it?

          Local cloning is exactly what you do.

          1. Anonymous Coward
            Meh

            Re: What am I missing here?

            Yes, I am quite aware of that, thanks. The problem is that a lot of people, and the infrastructures they build, aren't: rather than cloning (even cloning into GH) and relying on that clone to be a stable source tree *which they control*, they rely on cloning from the upstream GH repo *for each build or installation*, which is a catastrophe, even if GH did not allow account names to be reused.

            None of this makes GH allowing account name reuse excusable: it's terrible design, because it directly supports the construction of compromised versions of tools (wait for the original person to delete their account, create a new account with the same name, push your squirrelled-away copy of the repo into it (so all the existing commit hashes check out), complete with new commits adding badness, and wait).

      3. a_yank_lurker

        Re: What am I missing here?

        To make matters worse, it is trivially easy to download the source code from GitHub and create your own local fork.

      4. bombastic bob Silver badge
        Happy

        Re: What am I missing here?

        "If you use a third party module you download it, put it under your source code control, and be prepared to maintain it yourself."

        Or, just fork it.

    2. bazza Silver badge

      What am I missing here?

      Not a lot so far as I can see. Just another symptom of the lack of rigour within the software dev community these days.

      Internet repos have made programmers lazy. Perhaps there ought to be a quarterly outage (on some random day within the quarter) on all of these online repositories, just to remind people. A bit like Netflix, who have a piece of software (Chaos Monkey) that deliberately, randomly crashes bits of their infrastructure to make sure that their devs have built a resilient system.

      As for GitHub permitting a new account to be stood up with a previously used name - terrible.

      1. Doctor Syntax Silver badge

        "As for GitHub permitting a new account to be stood up with a previously used name - terrible."

        Agreed. But is there a need to allow the original owner to re-open the account?

    3. Lysenko

      I mean... WTF? If you use a third party module in live code, you surely don't link it to a live repository. What am I missing here?

      You're missing the fact that almost all modern JavaScript development is based on exactly this model (via node/npm/yarn) and it is even considered to be "bad practice" to bring code into your own source tree rather than download on demand. Given that so many developers start with JS these days, this becomes standard practice and they don't even think about it.

      1. Anonymous Coward
        Anonymous Coward

        And this is why many implementations of "DevOps" that are driven by Developers are a really bad thing in practice - most Developers seem to be completely naive when it comes to licensing implications, version control, and even covering their arses when someone removes a package that their code depends on...

        When their answer is "oh we always use the latest from NPM", you know that they're complete idiots...

        1. Anonymous Coward
          Anonymous Coward

          > And this is why many implementations of "DevOps"

          If you are anywhere that says they do "DevOps" (why do they always come up with such stupid names?) you know the place is staffed by losers playing at being cool guys.

          1. Jonathan 27

            "DevOps" is what everyone is doing now, outside of your standard 20-years behind the times bloated IT departments. I know this site is particularly filled with older neckbeards stuck in dead-end IT support jobs, but that's just reality now.

            I'm a developer and when I went looking for a job last year there wasn't a single job description that didn't contain the phrase "agile" or "DevOps". I'm not saying this is a good thing, just that it is now the reality we live in.

            And yes, if NPM or Nuget falls over, so does our build process. We're also totally dependent on a big cloud service provider for everything. No I don't think this is good, but it's not up to me.

            1. Anonymous Coward
              Anonymous Coward

              "DevOps" is fine if it's designed by people who actually know and understand how to keep a mission-critical system up and running. Which is typically not developers...

              Relying on third-party points of failure with fuck-all SLAs as part of your build/test/deploy process is idiotic at best, utterly negligent at worst.

              Most "developers" aren't systems engineers and never will be. They're hack-and-bash, copy-and-paste, trial-and-error Google/StackOverflow-searching code monkeys who don't really understand what they're doing. And you can't "develop" a mission-critical system, it needs to be engineered. The devil is always in the detail, and assuming you can ignore the abstracted specifics of how things work under the hood is fatal when you're building something that actually needs to work 99.99% of the time.

              None of this precludes DevOps and Agile if it's done right - however 90+% of the time, it's not...

              1. Anonymous Coward
                Anonymous Coward

                > Most "developers" aren't systems engineers and never will be. They're hack-and-bash, copy-and-paste, trial-and-error Google/StackOverflow-searching code monkeys who don't really understand what they're doing.

                True that, although I have seen a lot of regional variation in this respect.

                In my experience, the above applies guilty-as-charged to your "typical" programmer in the UK and Norway. In France, Germany and Italy on the other hand, you do not have programmers pretending to be system analysts, but rather proper systems analysts, with relevant qualifications and experience. In Austria, Czech Republic, Croatia and Ukraine, it is a strange mixture of both: they generally know what they're doing but if not they'll just make it up.

                That's just my anecdotal experience of the places I have some experience with.

              2. Anonymous Coward
                Anonymous Coward

                A bit harsh, but fair IMHO

                And made me laugh so +1.

                In my shop there seems to be an assumption that Agile and JFDI are the same methodology, too!

            2. Anonymous Coward
              Anonymous Coward

              > I know this site is particularly filled with older neckbeards stuck in dead-end IT support jobs, but that's just reality now.

              Not everyone, although definitely at least 10 of them as of this writing. :-)

              It is true though that the busiest posting times here seem to coincide with UK office hours, and a second, smaller wave at US hours.

              1. Solmyr ibn Wali Barad

                "Not everyone, although definitely at least 10 of them as of this writing. :-)"

                Not quite. That guy did not offer solid arguments in defence of the practice being disputed. Instead he proclaimed it to be 'a modern way of development', loosely paraphrased of course, and resorted to quite unnecessary name-calling towards his imaginary opponents.

                I'm fairly tolerant towards other commenters and tend to upvote comments if they contain at least one good point (which sometimes means ignoring insults and other not-so-good bits), but in this case I did not find any redeeming features worthy of upvote.

                1. Doctor Syntax Silver badge

                  "but in this case I did not find any redeeming features worthy of upvote."

                  Likewise. He did say he didn't necessarily approve but that's not good enough.

                  Some of us have been around long enough to look at DevOps and realise that that's what we were doing long ago - one team developing, managing the system and supporting our users. We were also aware that what the users were doing was what brought in the money to pay our salaries so not only did we take care to provide what they needed but were also paranoid about protecting the integrity of the data. As we were running systems which weren't internet connected it was second nature to us to keep control of source ourselves. Had the opportunity been given to us to store source elsewhere that sense of responsibility would have precluded it anyway. That's how you build a business to last decades - assuming the manglement doesn't have other ideas.

            3. Phil O'Sophical Silver badge

              And yes, if NPM or Nuget falls over, so does our build process. We're also totally dependent on a big cloud service provider for everything. No I don't think this is good, but it's not up to me.

              So your build process automatically pulls in unverified and untested code, that anyone could have inserted malware into, and you're OK with that? You are, frankly, a fool.

              Oh, and when some personal data escapes and the GDPR guys come around with the €20m fine I think you'll find it is up to you. Your employer will hang you out to dry.

            4. nijam Silver badge

              > when I went looking for a job last year there wasn't a single job description that didn't contain the phrase "agile" or "DevOps".

              Fair enough, and you may be right. But remember, "job description" != "job".

      2. Anonymous Coward
        Anonymous Coward

        > Given that so many developers start with JS these days, this becomes standard practice and they don't even think about it.

        Bit of a logic jump there. The model that you accurately describe, while far from perfect, has the advantage of significantly reducing cost and complexity, speeding up prototyping, and tying in well with continuous delivery approaches, which from an economic point of view are far more efficient than delivery management or waterfall models.

        Yes, there are also disadvantages which mean this approach is not suitable in every case, one has to know and assess the risks.

        In particular, if your organisation is middle-management heavy this is not going to work. Then again, if you're in such an organisation you're pretty well fucked anyway.

        1. Lysenko

          The model that you accurately describe, while far from perfect, has the advantage of significantly reducing cost and complexity, speeds up prototyping, and ties in well with continuous delivery approaches ...

          Shortcuts are almost always quicker and insurance policies/backups are almost always more expensive. Therefore, it follows that any naive, short-term, "efficiency" calculation will favour risk-taking and dispense with "overhead" like backups, documentation, specifications, quality assurance, redundancy and reliable idempotence. There's nothing new about any of that, all that's changed is that (some) developers think "dynamic risk-taker" ("chancer", in old money) is a compliment.

          As you say, it fits in well with the "Rachman" methodology, where you move the tenants into a building that is half finished (or demolished), and try to shore it up around them before getting bored and moving on to the next (dynamic, innovative, cutting edge) development, leaving the roof still leaking and bare wires poking out of the walls. Nothing wrong with that for a disposable web ad campaign of course, but it's a disaster if you're handling medical records or benefit payments or train bookings or - well, anything that actually matters.

          1. Anonymous Coward
            Anonymous Coward

            > Shortcuts are almost always quicker and insurance policies/backups are almost always more expensive.

            Why do you equate efficiency with shortcuts? The problem with the audience here is that they seem to think that projects and businesses relying on, let us call it crowd-sourced technology, do not know what they are doing.

            My direct experience, however, is diametrically opposite. Of three projects I can think of right now that rely heavily on things like NPM and GitSomething repositories, one is in the area of weather visualisation, another is groundbreaking research into computer vision and AI, and another is in the Earth sciences. They are staffed by some of the most brilliant and successful people I have ever met, including two billionaires who made their fortunes in IT.

            You *are* quite welcome to stick to your Borland libraries or whatever it is that the so-called corporate crowd use these days, but please do not assume that everyone else is a bumbling idiot.

            1. Lysenko

              Why do you equate efficiency with shortcuts?

              I don't. I was replying to a previous comment which implied that faster/cheaper were the only metrics by which efficiency is calculated. Personally, I believe in the iron triad - "Fast, cheap, right - pick any two".

              Also, I don't necessarily think project leaders are unaware of what they are doing. Some are, but not all. Anyone focussing on "mean time to remediate" metrics (for example) clearly knows they are shipping something buggy, feature deficient or both. That may well be perfectly rational if your strategy relies on "beta test in production" and you are dealing with something based on machine learning where substantial inaccuracies are axiomatic.

              You *are* quite welcome to stick to your Borland libraries or whatever it is that the so-called corporate crowd use these days, but please do not assume that everyone else is a bumbling idiot.

              It's been a long time since I fired up a Delphi IDE. Most of what I work on is Python, C, Go, TypeScript (Angular) and ES6. That's why I'm very well aware of the hilarity that ensues when Github (as they did last month) goes offline for a few minutes to update some SSL certificates.

              I don't assume that chancers are necessarily idiots. Gambling can be a very effective strategy and is a key characteristic of highly successful people. What is idiotic is making npm/github integral to your build system and then getting agitated when it goes down and blows up your deadlines. Competent gamblers know the odds and bet based on calculations, not on magical thinking and cognitive dissonance.

        2. Wulfhaven

          The advantages you list are all paid for with an utter lack of security and stability. There is always a price to pay, and if you are building something with an SLA connected to it, pulling code straight from some random repo is just horribly, horribly inappropriate.

          It has fuck all to do with middle management, and everything to do with accountability. If you do not manage the risks, you rightly pay for the eventual fallout when shit go boom.

          But, developers rarely think about such things.

    4. Anonymous Coward
      Anonymous Coward

      Another aspect is that if Github goes down so does your stuff.

    5. geekguy

      Re: Good.

      Agreed.

      Besides if you're going to use open source in your own stuff, you should be working on a fork and possibly contributing to the main projects with pull requests. Never rely on someone else's repo always being there. That's just madness.

      1. heyrick Silver badge

        Re: Good.

        "Never rely on someone else's repo always being there. That's just madness."

        And even when it is there, never rely on it being what it is supposed to be.

        This bloke had good intentions: to revive the repo and restore the code so his stuff worked (and others' too). What if the person reviving the repo had a working copy that also punted malware or crypto mining? The question now (and one GitHub needs to think about) is how many projects might unknowingly be affected by this, because the developers put blind faith in the contents of a third-party repository? [that's madness as well]

        1. Richard 12 Silver badge

          Re: Good.

          Buildroot and similar build systems have integrity checks that require the downloaded package to match the expected hash.

          So if someone did change it by any means, the download would fail and the new build machine would need to get its copy by another path.

          I had presumed everybody did that.

          Aside from that, it is almost impossible to create a new git commit with the same hash as an existing commit.

          What kind of idiot uses the tip, or a named label, without checking it is the same commit?
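
          For anyone who isn't doing that, the idea boils down to something like this (a generic sketch, not Buildroot's actual syntax; the file names are placeholders):

              # record known-good checksums once, and commit the file
              sha256sum downloads/*.tar.gz > deps.sha256
              # every build verifies before unpacking; any upstream change fails loudly
              sha256sum -c deps.sha256 || { echo "dependency changed upstream" >&2; exit 1; }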

    6. Anonymous Coward
      Anonymous Coward

      Universities

      I really do wonder what the Universities think they're teaching these days. Once upon a time graduates knew something of software engineering; now the universities seem to produce a bunch of people who think that hacking a load of code together is a "professional" way to do it, "never mind if it's wrong to begin with, that's what Agile is for..". As for writing anything that isn't web based, oh dearie me.

      Well, this is great for those of us who like to spend an iota or two thinking about what the hell it is we're writing. We're more likely to get it right, sooner. I've got nothing against Agile development methods as such (if appropriate and done well it's fine), but one does see the label being attached to projects that are nothing but massive hack-fests that never actually produce a useful output.

      1. Anonymous Coward
        Anonymous Coward

        Re: Universities

        > I really do wonder what the Universities think they're teaching these days. Once upon a time graduates knew something of software engineering;

        For the purpose of providing context to your comment, in particular your familiarity with what is and isn't taught in higher education, or ever has been, could you please state whether you are a CS graduate and, if not, what exactly your academic qualifications are? Cheers.

        1. Anonymous Coward
          Anonymous Coward

          Re: Universities

          For the purpose of providing context to your comment, in particular your familiarity with what is and isn't taught in higher education, or ever has been, could you please state whether you are a CS graduate and, if not, what exactly your academic qualifications are? Cheers.

          OK. Degree: Electronic Systems Engineering, which translates as IC / CPU design, electronics engineering, requirements engineering, systems specification (including formal specification languages), computer science, and on top of all that there was software engineering too. The RF, power electronics, communications, mathematics, politics and mechanical engineering were just extras for fun. It was one hell of a degree. The Chartered Engineer and EurIng qualifications are merely things that I roll out on occasions like this.

      2. nijam Silver badge

        Re: Universities

        > I really do wonder what the Universities think they're teaching these days.

        Well, maybe look into that, instead of wondering about it.

        I'm pretty sure (working at one) that what you are criticising isn't part of any curriculum I've seen.

    7. ee_cc

      Or the supervisor

      Might be the case that the developer in question - or a predecessor, whose history set precedent in the company - had indeed argued for a privately maintained repository, or for verifying dependency hashes and whatnot, but was overruled by managerial resolution.

    8. Claptrap314 Silver badge
      Facepalm

      What you are missing here is management. I previously worked at a top-1000 website that was doing this, only with rubygems.org. Then rubygems.org got hacked. Even after the fact, I was unable to convince senior management that we needed to bring all of our dependencies in house. Speaking with principal-level engineers today, this continues to be an area of substantial pushback from management.

  2. steelpillow Silver badge

    Design flaw?

    Seems to me a better design would be to freeze an account rather than delete it, and archive its code assets so they can still be accessed via the old path but not updated or deleted. That buys time to create a new fork.

    (armchair idea, I'm not a GitHub user myself).

    1. Richard 12 Silver badge

      Re: Design flaw?

      That would be fine, except for copyright and *spit* software bloody patents.

      GitHub and GitHub customers must be able to remove things when asked, or lawyers will close them.

      E.g. a company is bought and revokes a patent right, or loses a lawsuit that some troll brought against them.

      1. steelpillow Silver badge

        Re: Design flaw?

        "That would be fine, except..."

        I don't see that. When you create a GitHub account you sign up to its Terms and Conditions. They can be amended to set out the revised withdrawal process, specifically that material will remain for say a week or a month before being purged. Posting of copyrighted material constitutes release of the material under said terms. Any file altered under the new terms will constitute acceptance of them for the whole file. A little bit of coding to implement different algorithms for files with different datestamps, and bingo! Job's a good 'un.

        If a third party is demanding takedown, there must always be a reasonable time period for this to be implemented. Just make sure that the GitHub delay squeaks within such a reasonable period.

  3. John Miles
    Stop

    Seems we never learn

    One big issue for the internet has been that the protocols it originally ran on weren't designed for an untrusted environment, leaving it open to abuse by spammers and other criminals. Now we have the same problem in software: we have moved some of it into an untrusted environment with public repositories, and we ignore anything that would help keep the trust in place as being too difficult, not to mention the extra fragility that adds.

    So how long before something like "I'm harvesting credit card numbers and passwords from your site" is real?

    1. David Roberts
      Devil

      Re: Seems we never learn

      Thanks. Fascinating read.

      Although I might be tempted (if I could code) to invest some time as a maintainer for popular packages to ensure the highest distribution rate once I decided to slip my little gift into an obscure corner.

  4. Romløk

    Source code, source code, source code

    I've been saying for a while that GitHub (and Bitbucket, and GitLab, etc.) are doing open source a disservice, by making the primary route to accessing a project's source code be tied to a particular identity - either that of a single person, or a single organisation.

    As this news exemplifies, this is a short-sighted position to take when dealing with open source software. People come and go; companies end support; organisations get disbanded, but none of these are necessarily enough to kill a popular project.

    A platform which truly understood the nature of free and open-source software, and that things change over time, should make the project - the source code itself - have primacy. Instead of https://github.com/someuser/importantproject/, we should be visiting https://git.example/importantproject/.

    I've seen a few projects on GitHub which appeared dead on the surface, but if you delved into the "network graph" you'd see this amazing explosion of forks and branches lasting long after the original repo died. Sadly, they're mostly all fixing the same problems over and over, because most people don't look at the network graph and assume all work on the project ceased when the original developer lost interest.

    1. Doctor Syntax Silver badge

      Re: Source code, source code, source code

      "I've been saying for a while that GitHub (and Bitbucket, and GitLab, etc.) are doing open source a disservice, by making the primary route to accessing a project's source code be tied to a particular identity - either that of a single person, or a single organisation."

      What do you suggest? A project site has the same issue: somebody has to own the registration of the domain, arrange for hosting, etc. A distributed model without its own domain has the problem of keeping all the copies in sync in the absence of an agreed master.

      1. find users who cut cat tail

        Re: Source code, source code, source code

        > somebody has to own the registration of the domain, arrange for hosting, etc.

        In contrast to personal accounts, all these things can be transferred (and have been in practice). Moreover, when they exist there is strong incentive to continue *the existing project*, as opposed to creating an endless forking mess and leaving users confused.

      2. Roml0k

        Re: Source code, source code, source code

        > Doctor Syntax: What do you suggest?

        All that's required, at minimum, is for the code-hosting sites to create a project-overview page, which allows the user to see the current status of all forks of a codebase. The importance of forks (eg. which comes top of the list) should be determined either democratically (eg. stars) or algorithmically, based on metrics such as commit quantity, frequency and recency, bug reports, pull activity, etc.

        > AC: I disagree that, on the balance of things, they are doing a disservice.

        I disagree that you disagree!

        I do indeed believe that GitHub & co. have been a net positive to development and collaboration in the free/open-source software community. I was only meaning that this particular issue of developer primacy was sub-optimal, and possibly harmful to sustained development.

    2. Anonymous Coward
      Anonymous Coward

      Re: Source code, source code, source code

      > I've been saying for a while that GitHub (and Bitbucket, and GitLab, etc.) are doing open source a disservice, by making the primary route to accessing a project's source code be tied to a particular identity

      You are correct about the "primary route" bit and the rest of your analysis is very well thought out, but I disagree that, on the balance of things, they are doing a disservice.

      With that said, I am not a big fan of GitHub at all and I acknowledge the problem you mention. Not sure it is fair to tar GitLab with the same brush though, seeing as you can self-host it quite easily and affordably.

    3. Anonymous Coward
      Anonymous Coward

      "A platform which truly understood the nature of free and open-source"

      You mean, like the Apache Foundation? But they are looked at like dinosaurs today; all the cool and fashionable guys open a GitHub repo instead, and having their name on it makes them more visible to the "community". It's all about the ego.

      1. Adam 52 Silver badge

        Re: "A platform which truly understood the nature of free and open-source"

        "it's all about the ego."

        Cannot upvote this enough.

        You should have seen the uproar when we suggested that (a) code paid for by the company should be licensed by the company, (b) kept in private repos by default, and (c) committed under corporate-friendly usernames.

        Management caved in, and as a result we got hacked using information gleaned from our public repo, and nobody knows who's committing what.

  5. Anonymous Coward
    Anonymous Coward

    Err... no

    "Usernames, once deleted, should never be allowed to be valid again."

    That is poor man's security, not a real fix. One thing we could do in these instances is use integrity checking and/or PKI.

  6. cantankerous swineherd

    as stated above, if the code is that important to you then create a fork, update as necessary. if it disappears upstream ¯\_(ツ)_/¯

    1. Anonymous Coward
      Anonymous Coward

      > as stated above, if the code is that important to you then create a fork, update as necessary.

      I do not know about this specific case, relating to a Go user.

      For Node projects however, which tend to be quite heavy users of web-based dependencies, what you describe is sort of how it works.

      During a (pristine) build, npm downloads the relevant version of every leaf in your dependency tree, and those copies then reside on the build machine. Unless you are doing something exotic (and in production you probably won't be), whatever happens to your dependencies on GitHub or elsewhere does not affect your deployments.

      If a dependency breaks for whatever reason, such as the developer pulling the repository, then your next pristine build is going to fail; you will spot the problem and then do something about it. I have seen this in a personal project once, not long ago. I can't remember what it was, but it was just a transient issue, which meant my continuous integration server (which always does pristine builds) was paused for an hour or so until everything came back online. In the event of a more permanent issue you would eventually replace the dependency with an alternative, find a fork, host it yourself, or rewrite as appropriate. It seldom happens, it does not affect production deployments, and frankly it is not the end of the world.
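
      And if you want a stronger guarantee, npm already provides it (assuming a reasonably recent npm): commit the lock file and do pristine installs from it.

          # package-lock.json pins every transitive dependency by version and integrity hash
          git add package-lock.json && git commit -m "pin the dependency tree"
          # on the build machine: install exactly what the lock file says, or fail
          npm ci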

      It is totally a bazaar (if not a brothel) rather than a cathedral approach, but it does the job very nicely, much as it may shock a few parishioners.

      1. Lusty

        @anon

        Lol are you saying npm never has issues? Like the global news story about left-pad breaking the Internet linked to in the story?

      2. Ken Hagan Gold badge

        "If a dependency breaks for whatever reason, such as the developer pulling the repository, then your next pristine build is going to fail, you will spot the problem and then do something about it."

        Either you or I have misunderstood the person you were replying to. I took the suggestion to be that you make yourself dependent on your local copy (fork, or whatever) of the third-party code. That dependency cannot break. (If the original source disappears, you lose the ability to update your local copy, but since there cannot now be any new fixes being posted to that original source, this isn't actually a problem.)

        You seem to be advocating just linking to the remote source and only taking a local copy *after* the remote one goes titsup. Sorry, but to me that sounds like being lazy and having to face the consequences at an inconvenient moment, possibly after everyone who understands exactly what to do has left the company.

  7. JavaJester
    FAIL

    It's Not GitHub's fault

    The fault is dynamically loading code from random folks' accounts on GitHub, rather than from a proper repository, and then hosting it either in a CDN you control or within the application itself. The Maven/Gradle model, where the code's VCS is divorced from the artifact repository, is a much more grown-up way of doing things. I don't see why JavaScript libraries can't either use the central repository, or come up with something like it. With this model, if my project states that it uses version 1.1, then that's what it will use until I update my dependencies. My site won't suddenly go batshit or start mining cryptocurrency because of some change in a library. I won't get the new version until I ask for it. To me, this is a much better way of doing things than to rely on a third-party repo that could change and bork my application. It boggles my mind that people would want to always get the latest changes from third-party sites they don't even know, let alone control.
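
    That model in action, more or less (assumes the version is pinned in pom.xml and the artifacts are already cached locally):

        # -o = offline: resolve everything from the local ~/.m2 repository,
        # so nothing gets fetched behind your back at build time
        mvn -o package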

    1. bombastic bob Silver badge
      Devil

      Re: It's Not GitHub's fault

      "The fault is dynamically loading code from random folks accounts on GitHub rather than from a proper repository and then hosting either in a CDN you control, or within the application itself"

      people who do that probably [ab]use ginormous libraries that they don't understand, doing a zillion things they do not need, and polluting the dependencies with unnecessary download requirements, both on the client end AND on the server end. My $.10 anyway.

      I prefer copy/pasta the relevant parts and then maintain it as part of my OWN repo. Proper acknowledgements and licensing as needed.

      (why do I need to load BLOATWARE when all I want is what 'left-thingy' does...)

    2. Adam 52 Silver badge

      Re: It's Not GitHub's fault

      It also means that your site won't get the latest patches, so unless you've got a vulnerability scanner in your pipeline somewhere you may end up with your site mining crypto currency anyway.

      I can't help feeling that there's a halfway house somewhere between "go get" and the nightmare that is Maven.

  8. Karlis 1

    GO is a special snowflake case here...

    To be fair, this has more to do with the fact that Go's current toolchain not only encourages, but literally presents as the only option, dependency management that points at live repos. That's the only reason I refuse to invest any time and effort in learning more of what otherwise looks like a really nifty cross of good, pragmatic ideas and computer science.
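
    For anyone who hasn't used it, this is the whole problem in two lines (classic pre-vendoring behaviour; the import path is a placeholder):

        # resolves the import path to a live VCS URL and fetches the latest code
        go get github.com/someuser/sometool
        # no version or hash is recorded anywhere; the next machine may fetch different code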

    Now, the bigger picture can be analysed on and on and on, and things like the Node repo model are braindead too, but at least they pretend. In the case of Go it is the official way.

    (which might work for google where everything is one big repo and they are so far ahead of everyone else that they rarely need to worry about _external_ dependencies, but is like giving a loaded gauge #4 of buckshot to a depressed teenager and expecting it to end well)

    1. Claptrap314 Silver badge

      Re: GO is a special snowflake case here...

      Docker has the same issue (and, I believe, Jenkins). I already mentioned rubygems being the default in the ruby environment. But you know what? The three I know of all have the same easy fix--repoint the root of the search space to something you control.

      The hard part is maintaining the almost-a-mirror running on a server you control. Hard, but at least possible.
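
      In the ruby case, for instance, the repointing is a couple of commands per machine (a sketch; the internal URL is a placeholder):

          gem sources --remove https://rubygems.org/
          gem sources --add https://gems.internal.example/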

  9. Mage Silver badge
    Big Brother

    Names

    Only the original owner(s) should be able to reactivate an account name or a lapsed domain name or email account etc.

  10. Claptrap314 Silver badge

    DevOps--for good?

    I'm starting to feel unease about the term, but let's stick with it. The problem is that in even mid-sized companies, the requirements of operations are exceeding the capacity of people whose expertise lies outside software engineering. I'm no systems engineer, and I don't expect ever to be. And I would bet that if you lined up 100 senior systems engineers, 95 of them would be much better programmers than I am an admin. But if you need to manage 1000 service-server pairs, you really don't want to handle it with scripts.

    Software engineering expertise is a set of skills and a worldview that has evolved for generations to solve problems. Systems engineering expertise is a set of skills and a worldview that has evolved for generations to solve problems. I am completely reliant on the SEs to teach me enough for me to address the modern problems of operations with my toolset.

    My goal is that the SEs not be so grumpy. That won't happen unless they can see that I am addressing their concerns with the tools and libraries that I'm providing.

  11. vasuarv2

    Use go-dep

    Golang has a very useful utility, "dep" (go-dep), which takes care of downloading pinned versions of the dependencies for any project. That takes care of making sure that you don't "accidentally" upgrade to a later version of any package.
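
    The basic workflow, for anyone curious (a sketch; the constraints live in Gopkg.toml, the exact revisions in Gopkg.lock):

        dep init      # analyses your imports, writes Gopkg.toml and Gopkg.lock
        dep ensure    # populates vendor/ with exactly the revisions recorded in the lock file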
