Re: In campus AI groups didn't these use to be called "Baby killer" contracts?
Apparently, they did call them "baby killer" contracts. I'm not sure whether they were being ironic, given that the contracts are explicitly for developing technology that either helps identify the right window to put a superbly accurate (and therefore small) laser-guided bomb into precisely the right place, or makes the weapon even more accurate so that a bomb going astray doesn't kill babies.
Not having accurate (and preferably somewhat affordable) guided weapons leads to the opposite approach, the one Russia used in Syria: simply accept that your weapons are going to be inaccurate, go for cheap, big explosions, and use lots of them.
In terms of effect, this means that instead of a laser-guided bomb going in the right window, killing that person, and blowing out a few windows elsewhere, you get the brute-force approach: you know the target is within a city block or so, so you cluster-bomb the general area until nobody in it is left alive, leaving large chunks of the countries involved looking like this.
One of these approaches is intended to limit the number of people killed to those targeted; the other doesn't care if everybody within 500 yards is blown to bits. Apparently, protesting the development (or sale) of the former is good. I think this form of moral absolutism is almost childish when it ignores the result, which is that the entire city gets carpet-bombed instead.