Scary code of the week: Valve Steam CLEANS Linux PCs (if you're not careful)

Linux desktop gamers should know of a bug in Valve's Steam client that will, if you're not careful, delete all files on your PC belonging to your regular user account. According to a bug report filed on GitHub, moving Steam's per-user folder to another location in the filesystem, and then attempting to launch the client may …

  1. Aoyagi Aichou
    Gimp

    Halftruth?

    Apparently that was already fixed.

    https://github.com/SteamDatabase/SteamTracking/commit/155b1f799bc68f081cd6c70b9af47266d89b099d#diff-1

    1. Steve Crook

      Re: Halftruth?

      If you've not got the latest version it's still a whole truth.

      When I read the code I was wondering if I'd fallen into a wormhole and it was 1980. Again. Anyone seen a groundhog recently?

      1. Aoyagi Aichou

        Re: Halftruth?

        See, there's an "if" in that full truth. And I thought that in the IT world, it's assumed everyone has software up to date, especially when it comes to an online service's client.

    2. Teiwaz

      Truth is in the eye of the beholder

      Just checked my Steam version (Ubuntu 15.04, updated this fine morning): the code is still there in ~/.steam/steam.sh, line 468 of 756.

      A quick look at the fix shows the addition of an if $STEAMROOT != "" check encapsulating the rm -rf command.
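
      For reference, a minimal sketch of what the before/after looks like (paraphrased from the description of the fix, not the verbatim steam.sh or diff):

      # Before (paraphrased): runs unconditionally, even if $STEAMROOT is empty
      rm -rf "$STEAMROOT/"*

      # After (paraphrased): only runs the rm if the variable is non-empty
      if [ "$STEAMROOT" != "" ]; then
          rm -rf "$STEAMROOT/"*
      fi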

      1. e^iπ+1=0

        Crap fix

        addition of an if $STEAMROOT != "" encapsulating the rm -rf command.

        Hmm, I wonder how well this would cope with something like

        STEAMROOT=/

        or

        STEAMROOT=/.///./

        or such like.
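
        For instance, if the guard really is nothing more than a non-empty test, a value like this (purely hypothetical) sails straight through it:

        STEAMROOT=/                                            # non-empty, so the check passes
        [ "$STEAMROOT" != "" ] && echo rm -rf "$STEAMROOT/"*   # echo used here so nothing actually gets deleted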

        1. Blitterbug
          Facepalm

          Re: Hmm, I wonder how well this would cope with...

          Exactly. It's still utter rubbish until a comparison is made against permitted path strings. Some detailed logic is needed, not some numbnuts single-line comparison operation.

  2. John Sanders
    Holmes

    If my memory serves me right...

    SteamOS is still in beta, isn't it?

    So, unfortunate as this is (someone got his drives completely wiped of his user-accessible data), and the person who wrote the script should know a bit better, in the end you're not supposed to run beta stuff on your production data.

    1. streaky

      Re: If my memory serves me right...

      This isn't SteamOS, it's the Linux Steam client; they aren't the same thing.

      1. John Sanders

        Re: If my memory serves me right...

        DOH! Clearly I wasn't paying any attention to the article.

        What always gets my attention about the Steam client on Linux is not so much that it has lots and lots of scripts (which obviously leads to bugs like this one), it's the fact that it does not use the package system for updates, only for bootstrapping the install.

        1. Charles 9

          Re: If my memory serves me right...

          Two reasons. First, Steam has its own content distribution system, separate from any Linux package manager (and it well predates Steam on Linux, for that matter). Second, game updates can be very piecemeal, particularly when the update concerns game content rather than program code, so Valve recently updated its content system to reflect this. It reduces update package sizes most of the time: a kind consideration for people with limited bandwidth allotments.

  3. Anonymous Coward
    Anonymous Coward

    I feel the need...

    ...of a Torvalds hairdryer to that coder! Sheesh!

    1. Khaptain Silver badge

      Re: I feel the need...

      Forget the hairdryer and borrow his hammer...

      1. Anonymous Coward
        Anonymous Coward

        Re: I feel the need...

        Instructions for use:

        1. Take hammer

        2. Take a clue

        3. Place clue on head

        4. Repeatedly hit clue with hammer until the knowledge enters the skull

        Note: in the event of the skull already being full, some existing knowledge may be lost as the clue is inserted.

  4. Anonymous Coward
    Linux

    It's shit programming

    As per subject.

    Running a command like that without testing to see what $STEAMROOT actually points at beforehand is pretty irresponsible.

    I've never heard of this sort of nonsense before and certainly never done anything like it myself 8)

    As for running beta on a "production" machine: I doubt many people can afford a separate testing gaming rig. The bloke even had a backup and this wiped that out too. Shame his backups weren't owned by another user - say "backup" - but even so it's not really his fault.

    Cheers

    Jon

    1. BongoJoe

      Re: It's shit programming

      I think it's going to take some time before someone finds a better howler than that one...

      ...unless, of course, they find my source code.

    2. Bloakey1

      Re: It's shit programming

      Agreed, and what is worse in my view is that the programmer knew it.

      1. TimeMaster T

        Re: It's shit programming

        ".. the programmer knew it."

        Just because someone can create a script does not mean they really know what they are doing.

        Last place I contracted for, the "trained and qualified programmers" were doing some really stupid $%#& in the scripts they wrote. Like deleting a directory that was being mounted by /etc/fstab onto another part of the filesystem at startup. When the system booted it would hang due to the missing mount point. Not a big deal, unless it is a headless system in an embedded environment. So about twice a week I would have to spend 40 minutes pulling the device apart to get at the CPU's VGA connector so I could fix it.

        I pointed out to the author of the script, described as a "Skilled programmer", exactly what the issue was and how to fix it (I have 15 years working with *NIX OSs, and he knew it). He told me he had effectively zero experience with Linux and had just cut and pasted stuff from the Internet to make the script.

        He didn't change the script; his excuse was that since they were setting up some new update manager, the directory at issue wouldn't get deleted any more, as it was being deleted by a different script during updates.

        Due to that and many other similar issues I didn't renew my contract and moved on.

      2. regadpellagru

        Re: It's shit programming

        Yep, and the timing is even worse:

        http://www.gamespot.com/articles/valve-steam-machines-will-be-front-and-center-at-g/1100-6424591/

        So basically, they're gonna bring the Steam Machines out of stasis, running on SteamOS, which is a modified Debian.

        And they stupidly screw up in the Steam Linux client!

        I'm sure the dev's butt has already been nicely kicked.

    3. Turtle

      Re: "I've never heard of this sort of nonsense before"

      "I've never heard of this sort of nonsense before"

      I have. If memory serves, an early version of 12 Tone Systems' Cakewalk music/audio production software would, after finishing an installation, delete C:\Temp and all files in it - which was not a problem except if there was no C:\Temp folder, in which case it would delete C:\. And that *was* a problem...

      1. J.G.Harston Silver badge

        Re: "I've never heard of this sort of nonsense before"

        Back in Win3 days... I installed something on a friend's DOS/Win system. It ended by wiping %TEMP%. But... his system had %TEMP%=C:\DOS

  5. Anonymous Coward
    Anonymous Coward

    I'll bet that

    Steam was coming out of his ears.

    There was no way of him blowing off steam

    because he couldn't get up a head of steam.

    etc etc

  6. Destroy All Monsters Silver badge
    Thumb Up

    Scumbag Steve Meme goes here

    1) Writing "# Scary!" in the code

    2) Not performing any checks anyway

    ...probably because "no risk, no fun!"

    1. Steven Raith

      Re: Scumbag Steve Meme goes here

      To be fair, I'm a bit inexperienced at *nix admin - if I wrote something and thought it was 'scary', I'd probably get someone else to have a look at the code and see if it can be done in a less scary way.

      The most potentially dangerous, buggy code out there is written by people who don't think anyone knows better than them, and don't bother checking.

      Steven R

      1. Dan 55 Silver badge
        Windows

        Re: Scumbag Steve Meme goes here

        I thought one of the first things everyone learnt with the shell is that if it's an environment variable then it's just as unreliable as user input, but it looks like they skip that lesson these days.

        1. MJB7

          Re: Environment variable

          It isn't an environment variable. STEAMROOT is a shell variable (which isn't the same thing at all, although they are accessed with the same syntax). It's calculated from $0, another shell parameter.
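
          Roughly how that can go wrong (a paraphrase from memory, not the verbatim steam.sh): the value comes from a command substitution, and if the cd inside it fails the result is simply an empty string.

          STEAMROOT="$(cd "${0%/*}" && echo $PWD)"   # empty if the cd fails, e.g. the directory has been moved
          rm -rf "$STEAMROOT/"*                      # with STEAMROOT empty, this expands to rm -rf /*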

    2. OliverJ
      FAIL

      Re: Scumbag Steve Meme goes here

      IANAL, but doesn't the comment provide enough legal ground for a class action lawsuit against Steam on the basis of gross negligence?

      1. Anonymous Coward
        Anonymous Coward

        Re: Scumbag Steve Meme goes here

        Let me guess... American?

        Not everything requires a lawsuit.

      2. Steven Raith

        Re: Scumbag Steve Meme goes here

        OliverJ - it's your responsibility, as the computer owner/user, to do backups and verify their integrity and recoverability.

        This goes double if it's a machine you earn money on.

        Otherwise, every tech support shop out there would be sued out of existence for that *one* time they acci-nuke an HDD.

        Steven R

        1. MrDamage Silver badge

          Re: Scumbag Steve Meme goes here

          Accidentally nuking one drive by a repair shop is one thing, having all drives nuked by poorly written code is quite another.

          The term "fit for purpose" springs to mind, which this software clearly isn't.

          1. Steven Raith

            Re: Scumbag Steve Meme goes here

            Which is why you take a backup before doing anything precarious, like moving a folder and symlinking it back, not knowing if it'll mount or be seen correctly.

            All code is, in some respect, shit. Backups aren't hard to do, but everyone learns that only just *after* they needed to know it...

            HTH.

            Steven R

            1. Anonymous Coward
              Anonymous Coward

              Re: Scumbag Steve Meme goes here

              Which is why you take a backup before doing anything precarious, like moving a folder and symlinking it back, not knowing if it'll mount or be seen correctly.

              Did this user not mention that his files were deleted from a backup drive mounted elsewhere on the system?

              Ergo: he was taking backups. Then Steam's client decided to delete those too.

              I'd agree with others: gross negligence. It's not like the user was keeping his files in /tmp and there was a copy stored on an external drive. (Like some keep theirs in the "Recycle Bin" / "Trash")

        2. OliverJ

          Re: Scumbag Steve Meme goes here

          AC, Steven R - I respectfully disagree. The programmer knew he was doing it wrong ("scary"), but obviously didn't act on it. More importantly, this issue raises the question of how this code got through quality assurance in the first place.

          This takes the case from the "accidental" into the "gross negligence" domain. IT firms need to learn that they have to take responsibility for the code they dump on their customers.

          And regarding your argument about making backups - that's quite true. It's good practice to make backups, as it is good practice to wear seat-belts in your car. I do both all the time. But this does not mean that the manufacturer of my car is allowed to do sloppy quality assurance on the grounds that I'm required to wear a seat belt to minimize the consequences of an accident - as GM is now learning...

    3. Vic

      Re: Scumbag Steve Meme goes here

      1) Writing "# Scary!" in the code

      You missed :-

      0) Not using the right command in the first place (e.g. readlink)
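
      A sketch of the kind of thing readlink gets you (not Valve's actual code):

      # Resolve the script's real location instead of trusting a cd-based guess
      STEAMROOT="$(dirname "$(readlink -f "$0")")"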

      Vic.

  7. Anonymous Coward
    Anonymous Coward

    i think the moral of the story is...

    ...no kind of structural security can protect you from a program you trust going bad.

    And don't keep your backup volumes attached for longer than is necessary to run a backup.

    1. John H Woods Silver badge

      What is the best practice here?

      Apart from expressing the fact that the script writer was a total jerk - I could forgive it if it weren't so clear they realized it was dangerous and couldn't be arsed to do a 10-second Google to see how to phrase it - I'd like to know what people recommend here. Removal of backup devices or media is obviously good, but what are the additional strategies to defend against executables you want to trust, but not completely?

      Back up to tar files (preserves permissions and owners), which themselves are owned by 'backup' and/or not writeable? Run such executables as a different user? Chroot them?

      1. Boothy

        Re: What is the best practice here?

        Mount backup drive

        Back up as a different user, i.e. 'backup'

        Unmount backup drive

        Don't mount the drive again till you need it.

        Preferably physically remove the drive until needed, or use a separate NAS.
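
        As a rough sketch of that routine (the device, paths and 'backup' user are illustrative only):

        sudo mount /dev/sdb1 /mnt/backup                           # attach the backup drive
        sudo -u backup rsync -a ~/Documents/ /mnt/backup/docs/     # write the copy as the 'backup' user
        sudo umount /mnt/backup                                    # detach it again until next time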

        1. This post has been deleted by its author

          1. Peter Gathercole Silver badge
            Linux

            Re: What is the best practice here?

            Traditionally, in the UNIX world where you normally have more than one user on the system, you backup the system as root. Tools like tar, cpio and pax then record the ownership and permissions as they create the backup, and put them back when restoring files as root. This also allowed filesystems to be mounted and unmounted in the days before mechanisms to allow user-mounts were created.
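
            For example (a sketch; the archive path is illustrative):

            # Create the backup as root so tar records ownership and permissions
            sudo tar -cpzf /mnt/backup/home.tar.gz /home
            # Restore as root so tar puts ownership and permissions back
            sudo tar -xpzf /mnt/backup/home.tar.gz -C /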

            The problem is that too many people do not understand the inherent multi-user nature of UNIX-like operating systems, and use them like PCs (as in single-user personal computers). To my horror, this includes many of the people developing applications and even distro maintainers!

            There is nothing in UNIX or Linux that will prevent a process from damaging files owned by the user executing the process. But that is not too different from any common OS unless you take extraordinary measures (like carefully crafted ACLs). But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.

            1. DropBear

              Re: What is the best practice here?

              But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.

              Not much of a consolation these days when a desktop Linux is likely used as a single-user machine where all the valued bits likely belong to said user, while the system itself could probably be reinstalled fairly easily...

              Anyway, I know one guy who'll be doing all his Steam gaming on Linux with a separate user that isn't even allowed to flush the toilet on the system...

            2. John Doe 6

              Re: What is the best practice here?

              Best practice is to check what $STEAMROOT is and whether it is sane,

              change to $STEAMROOT,

              then remove files from $STEAMROOT.

              If $STEAMROOT is not sane (/ or ~), throw an error telling the user that $STEAMROOT can't be located.

              This IS NOT rocket science; almost everybody has been doing this for 30 years on UNIX.
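
              Something like this, say (a hypothetical sketch of that flow, not Valve's code):

              # Refuse to run the cleanup unless STEAMROOT looks like a sane install dir
              case "$STEAMROOT" in
                  ""|"/"|"$HOME")
                      echo "Error: the Steam install directory can't be located." >&2
                      exit 1
                      ;;
              esac
              cd "$STEAMROOT" || exit 1
              rm -rf ./*   # only ever deletes relative to the directory we just verified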

            3. Anonymous Coward
              Anonymous Coward

              Re: What is the best practice here?

              But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.

              You think so?

              A year or so ago our sysadmins started getting calls from users (this in an office with 100+ Unix developers) about missing files. The calls quickly turned into a queue outside the admins' offices. Running a "df" on the main home directory server showed the free space rapidly climbing...

              Some hasty network analysis eventually led to a test system running a QA test script with pretty much the bug described here. It was running as a test user, so could only delete the files that had suitable "other" permissions, but it was starting at a root that encompassed the whole NFS-automounted user home directory tree. The script was effectively working its way through the users in alphabetical order, deleting every world-accessible file in each user's home directory tree.

              Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done. Fortunately our admins are competent at keeping offline backups.

              1. Peter Gathercole Silver badge
                FAIL

                Re: What is the best practice here? @AC

                And the problem here is typified by your statement 'could only delete the files that had suitable "other" permissions'.

                Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".

                With regard to running the script as root. You're not that familiar with NFS are you?

                If you are using it properly, you will have the NFS export options set so that remote root access is not treated as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root. Anybody who sets up a test server to have root permissions over any mounted production filesystem deserves every problem that they get!

                There are people who have been using NFS in enterprise environments for in excess of a quarter of a century. Do you not think that these problems have been addressed before now?

                1. Anonymous Coward
                  Anonymous Coward

                  Re: What is the best practice here? @AC

                  Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".

                  They're not my users, they (and I) are, for the most part, senior developers who are well aware of how umask works, and may (or may not) choose to share files.

                  With regard to running the script as root. You're not that familiar with NFS are you?

                  I am, as it happens. At the kernel level.

                  If you are using it properly, you will have the NFS export options set to prevent root access as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root.

                  And that is, of course, exactly how our systems are configured.

                  It is also why I said that running the script as root would have been less serious: not only would it have been potentially less damaging to the NFS-mounted files, it would also have permitted the test server to destroy itself fairly quickly as it wiped out /usr and /etc. Instead the faulty script (running as a QA user) didn't destroy the critical system files, it only destroyed those files that people had left accessible. The server remained up.

                  There are people who have been using NFS in enterprise environments for in excess of a quarter of a century

                  True. I'm one of them.

                  Do you not think that these problems have been addressed before now?

                  Indeed they have, and fortunately by people who read and understood the problem before making comments.

                  1. Peter Gathercole Silver badge

                    Re: What is the best practice here? @AC

                    I stand by every word I said. I do not think that your post is as clear as you think it is.

                    You cannot protect from stupidity, and setting world write on both the files and the directories (necessary to delete a file) is something that you only do if you can accept the scenario you outlined. Just because you have "experienced" developers does not mean that they don't follow bad practice ("developers" often play fast and loose with both good practice and security, claiming that both "get in the way" of being productive). And giving world write permissions to files and directories is in almost all cases overkill. Restrict the access by group if you want to share files, and give all the users appropriate group membership. It's been good practice for decades.

                    You did say "Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done", but this is probably not true. You did not actually point out that root would not traverse the mount point of the NFS mounted files, but you did say "starting at a root that encompassed the whole NFS-automounted user home directory", implying that it was not the root directory of the system that was being deleted, but just the NFS mounted filesystems.

                    From personal experience, I have actually seen UNIX systems continue to run damaging processes even after significant parts of their filesystems have been deleted. This is especially true if the command that is doing the damage is running as a monolithic process (like being written in a compiled language or an inclusive interpreted one like Perl, Python or many others) and using direct calls to the OS rather than calling the external utilities with "system".

                    Many sites have home directories mounted somewhere under /home, so if it were doing an ftw (file tree walk) in collating sequence order from the system root, it would come across and traverse /home before it reached /usr (the most likely place for missing files to affect a system), so even if it did run from the system root, enough of the system would continue to run whilst /home was traversed. Not so safe.

      2. JulieM Silver badge

        Re: What is the best practice here?

        Run all executables which are supplied without Source Code in a chroot environment which is on its own disk partition. Such a location is secure against a program misbehaving, because no file anywhere else on the filesystem can be linked into or out of it. Hard links cannot transcend disk partitions, and symbolic links cannot transcend chroot.
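
        The hard-link half of that is easy to see for yourself (hypothetical paths; assumes /mnt/other sits on a different partition to the current directory):

        ln /mnt/other/somefile ./link-attempt   # fails with EXDEV ("Invalid cross-device link")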

        1. Destroy All Monsters Silver badge
          Holmes

          Re: What is the best practice here?

          Run all executables which are supplied without Source Code in a chroot environment

          Whether you have the source code is only relevant if your theorem prover can prove that shit won't hit the fan (however defined) when the program corresponding to said source is run. This is unlikely to be in the realm of attainable feats even under the best of conditions, and even if said open-sourced program has been written in Mercury.

    2. waldo kitty
      Megaphone

      Re: i think the moral of the story is...

      And don't keep your backup volumes attached for longer than is necessary to run a backup.

      exactly! plug the media in or otherwise make the connection to it available first, then do the backup, finally disconnect from that media... and no, rsync can also kill ya when it sees the files it should be keeping updated are gone and removes them from the remote...
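
      i.e. the classic rsync --delete foot-gun (paths purely illustrative):

      # if ~/Documents has just been emptied by a buggy script, --delete will
      # happily remove the copies on the backup volume to keep the two "in sync"
      rsync -a --delete ~/Documents/ /mnt/backup/Documents/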

  8. Anonymous Coward
    Thumb Down

    Sheesh

    If you're running Steam on Linux, it's probably best to make sure you have your files backed up and avoid moving your Steam directory, even if you symlink to the new location, for the time being.

    Better advice might be to hold off on using Steam until the programmer responsible has been hunted down and re-educated.

    1. fearnothing

      Re: Sheesh

      In that context, 're-educated' has a deliciously violent undertone to it. Room 101 anyone?

  9. jrd

    I once wrote a shellscript which, during testing, removed *itself* due to a malformed 'rm' command. I had to rewrite the script from scratch but at least I learned a valuable lesson!
