Some Thoughts on the Killer ‘Bots


    Mantis1 and Global Hawk,2 IMI Mastiff;3
    Reapers4 that shoot Hellfires5 up Afghan asses;
    Ghods6 using lasers and GNATs reconning,7
    These are a few of life’s scarier things.8


    Ordinarily, in this space, I try to give some answers. I’m going to try again, in an area in which I am, at least at a technological level, admittedly inexpert. Feel free to argue.

    Question 1: Are unmanned aerial drones going to take over from manned combat aircraft?

I am assuming here that at some point the total situational awareness package of the drone operator will be sufficient for him to compete with, or even prevail against, a manned aircraft in aerial combat. In other words, the drone operator is going to climb into a cockpit far below ground, and the only way he’ll be able to tell he’s not in an aircraft is that he’ll feel no inertia beyond the bare minimum for a touch of realism, to improve his situational awareness, but with no chance of blacking out from high-G maneuvers.

Still, I think the answer to the question is “no,” at least as long as the drones remain under the control of an operator, usually far, far to the rear. Why not? Because to the extent the things are effective, they will invite a proportional, or even more than proportional, response to defeat or at least mitigate their effectiveness. That’s just in the nature of war. This is exacerbated by there being at least three or four routes to attack the remote-controlled drone. One is by attacking the operator or the base; if the drone is effective enough, it will justify the effort of making those attacks. Yes, he may be bunkered or hidden or both, but he has a signal and a signature, which can probably be found. To the extent the drone is similar in size and support needs to a manned aircraft, that runway and base will be obvious.
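To put a little math behind “he has a signal and a signature”: two listening posts that each take a line of bearing on the control uplink can fix the emitter by simple intersection. A toy sketch of that bearing-intersection arithmetic; the sensor positions and bearings are invented for illustration.

```python
import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Fix an emitter from two lines of bearing.

    p1, p2: (x, y) sensor positions in km (x east, y north).
    brg1_deg, brg2_deg: bearings to the emitter, degrees clockwise from north.
    """
    # Compass bearing -> unit direction vector.
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule).
    det = d1[0] * -d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * -d2[1] + d2[0] * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two posts 40 km apart both hear the uplink.
print(intersect_bearings((0, 0), 45.0, (40, 0), 315.0))
# -> (20.0, 20.0). A fixed emitter is a targetable emitter.
```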

The second target of attack is the drone itself. Both of these targets, base/operator and aircraft, are replicated in the vulnerabilities of the manned aircraft, itself and its base. However, the remote-controlled drone has an additional vulnerability: the linkage between itself and its operator. Yes, signals can be encrypted. But almost any signal, to include the encryption, can be captured, stored, delayed, amplified, and repeated, while there are practical limits on how frequently the codes can be changed. Almost anything can be jammed. To the extent the drone is dependent on one or another, or all, of the global positioning systems around the world, that signal, too, can be jammed or captured, stored, delayed, amplified, and repeated. Moreover, EMP, electro-magnetic pulse, can be generated with devices well short of the nuclear.9 EMP may not bother people directly, but a purely electronic, remote-controlled device will tend to be at least somewhat vulnerable, even if it’s been hardened.
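For the technically inclined, here is a toy sketch of the capture-and-repeat problem: a made-up control link that authenticates commands with a shared key but never checks freshness. The key, the command string, and the whole “protocol” are invented for illustration.

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)  # shared link key (encryption omitted for brevity)

def transmit(command: bytes) -> bytes:
    """Operator side: append an authentication tag to the command."""
    return command + hmac.new(KEY, command, hashlib.sha256).digest()

def accept(packet: bytes) -> bool:
    """Drone side: verify the tag. Note: no nonce, timestamp, or counter."""
    command, tag = packet[:-32], packet[-32:]
    return hmac.compare_digest(tag, hmac.new(KEY, command, hashlib.sha256).digest())

packet = transmit(b"ORBIT waypoint=7")
assert accept(packet)        # the legitimate transmission is accepted...

captured = bytes(packet)     # ...an adversary records the burst off the air...
assert accept(captured)      # ...and replays it later: still accepted verbatim.
# The tag stays valid forever: the capture-store-delay-repeat problem. The fix,
# a counter or timestamp inside the authenticated payload, runs straight into
# the practical limits on how frequently codes and state can be changed.
```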

    Question 2: Will unmanned aircraft, flown by Artificial Intelligences, take over from manned combat aircraft?

The advantages of the unmanned combat aircraft, however, still argue for the eventual, at least partial, triumph of the self-directing, unmanned aerial combat aircraft. Those advantages range from immunity to high G forces, to less airframe being required once there is no need for life support (or, alternatively, room for a greater fuel or ordnance load), to expendability, because Unit 278-B356 is no one’s precious little darling back home, to that same Unit’s invulnerability, so far as I can conceive, to torture-induced propaganda confessions.10

Even so, I’m going to go out on a limb and go with my instincts and one reason. The reason is that I have never yet met an AI for a wargame I couldn’t beat the digital snot out of, while even fairly dumb human opponents can present problems. Coupled with that, my instincts tell me that the better arrangement is going to be a mix of manned and unmanned, possibly with the manned retaining control of the unmanned until the last second before action.
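For what it’s worth, the digital-snot-beating usually comes down to the game AI being a fixed, greedy rule that a human can bait. A toy example, with the map, units, and movement rule all invented for illustration: an AI that always closes with the nearest enemy can be dragged off its objective by a sacrificial unit, turn after turn, forever.

```python
def greedy_ai_move(ai_pos, enemy_positions):
    """A typical wargame AI rule: step toward the nearest visible enemy."""
    target = min(enemy_positions, key=lambda p: abs(p - ai_pos))
    return ai_pos + (1 if target > ai_pos else -1)

# One-dimensional map: the human's main force sits on the objective at 0.
# The human dangles a cheap bait unit near the AI and retreats it every turn.
ai, bait, main_force = 10, 12, 0
for turn in range(5):
    ai = greedy_ai_move(ai, [bait, main_force])
    bait += 1  # the bait withdraws; the AI keeps chasing it the wrong way
    print(f"turn {turn}: AI at {ai}, main force at {main_force} untouched")
```

The rule never changes, so neither does the exploit; a human opponent would stop falling for it after the first turn.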

    This presupposes, of course, that we don’t come up with something – quite powerful lasers and/or renunciation of the ban on blinding lasers11 – to sweep all aircraft from the sky.12

    Question 3: Will AI-controlled combat systems be restricted to the air or the sea?

    Not just the air, no; there are some nautical drones in existence.13 For that matter, I think there’s a very good case to be made that certain torpedoes and some of the more sophisticated mines are already AI-controlled drones and have been for decades.

    However, there are two truths about technology in war that are not often understood. One is that much technology is intended to be and serves as a substitute for training. It may be more effective than what it replaces, as with the heavy Anti-Tank Guided Missile taking over from towed anti-tank guns and recoilless rifles, but that’s not necessarily why the replacement took place. Rather, despite the cost of an individual round, to say nothing of the launch and guidance systems, it’s a lot quicker, easier, and cheaper to train a TOW14 crew – especially one just off the street – to get a hit at X meters than it is with a recoilless rifle crew, because you don’t have to program a human brain to do anything an unusually clever monkey couldn’t.15

The second truth is that high technology seems to be most required and to work best – I suspect those two feed each other – in comparatively simple environments: space, air, and sea.16 Consider, for example, that we already have drones that can do quite a bit of maneuvering, but DARPA’s only been able to get ground vehicles through fairly simple courses in fairly simple deserts, with no anti-vehicular mines, no city blocks, no other traffic, no trees, no rivers, no bridges. I mentioned in footnote 15 that the human brain is a fantastic fire control computer. Well, it doesn’t even need to be programmed much to make life tough for enemy autonomous vehicles; the malice and cleverness are already there.
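A crude way to see the difference in kind between open desert and anything more cluttered: the same A* pathfinding search that beelines across empty ground has to flood most of the map once even a single wall folds the space. A toy sketch on invented grids (no claim that this resembles DARPA’s actual courses):

```python
import heapq

def astar_expansions(grid, start, goal):
    """Count the nodes A* expands finding a path; '#' cells are blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier, best, expanded = [(h(start), 0, start)], {start: 0}, 0
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        expanded += 1
        if (r, c) == goal:
            return expanded
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                if g + 1 < best.get((nr, nc), 1 << 30):
                    best[(nr, nc)] = g + 1
                    heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None  # unreachable

desert = ["." * 20] * 20                                # open ground
walled = ["." * 10 + "#" + "." * 9] * 19 + ["." * 20]   # one wall, one gap
print(astar_expansions(desert, (0, 0), (0, 19)))  # ~20 expansions: a beeline
print(astar_expansions(walled, (0, 0), (0, 19)))  # hundreds: the search floods
```

And that is one static wall on a perfectly known map, with no mines, no traffic, no trees, and no one shooting back.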

It’s interesting, too, I think, that when science fiction writers imagine autonomous, AI-driven ground combatants, they tend to be huge. Why? One reason is that to a 40,000-ton combat vehicle, the Potomac and the White House aren’t really complex obstacles… if it noticed them at all.

    Short version: maybe someday we’ll have effective, independent ground combat drones, but that day is much further off than it is for air and sea, largely because the problem is much less tractable.

    Question 4: Is it moral to use machines for war?

We’ve always used machines for war; a bow is a machine. A sling is a machine. Arguably, maybe so is the rock. But what the question is really getting at is the lack of restraint or conscience on the part of an AI-driven war machine. Couple of thoughts on that.

    One is that, as our civilization weakens, as the ruthlessness necessary for it to survive dissipates, we may have no choice – if those trends continue – but to turn our defense over to machines. No, that day’s not that close yet. Even so, people – ICOTESCAS, say – sucking away and spitting out onto the dirt our civilizational spinal fluid might want to think about whether they’d a) like that any better or b) really think they can control it.

    Another is that we tend to delude ourselves to the extent we think that having “a man in the loop” will really bring human conscience into play to limit the damage the machines will do.17 It won’t, or not much.

    Consider this instance cited in GQ, a couple of years ago, from an enlisted Air Force Predator pilot, on his first shot and kill:

    “The smoke clears, and there’s pieces of the two guys around the crater. And there’s this guy over here, and he’s missing his right leg above his knee. He’s holding it, and he’s rolling around, and the blood is squirting out of his leg … It took him a long time to die. I just watched him.”
    ~ Airman First Class Brandon Bryant18

    GQ in that article is, unsurprisingly, missing the point. Focused on the poor, ultimately upset airman, they neglect to note sufficiently that, whatever psychic price he ultimately paid, he did his job pretty much without any excess restraint or remorse for years. “I just watched him [die].”

    Distance does that. Mass murderers in Einsatzgruppen – murderers of the innocent at close range – went insane in large numbers. A fair number shot or otherwise disposed of themselves.19 There’s at least one instance of a German committing suicide beforehand, to spare himself from becoming a murderer.20

    That’s not to say that the men dropping bombs on Hamburg or Tokyo were in the same moral class as the Einsatzgruppen, but to the women and children turned to crispy critters, below, the distinction probably wouldn’t mean much. And those firebombers continued to function well enough, long enough, just as did Airman Bryant. Why? Distance; they were removed from and didn’t have to see the details of what they’d done in real life.

    Oh, and Asimov’s Three Laws of Robotics, if one seeks solace in those?

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.21

    Utter nonsense. Now who can tell us why?
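A hint, as a toy formalization: assume a gunman raising a rifle to kill a bystander, and a robot whose every available option at a distance, including inaction, has the consequences listed below. The scenario and the outcome table are invented for illustration.

```python
# Each option's consequences in the gunman-versus-bystander scenario.
options = {
    "shoot_gunman":  {"robot_injures_human": True,  "human_comes_to_harm": False},
    "rubber_bullet": {"robot_injures_human": True,  "human_comes_to_harm": False},
    "do_nothing":    {"robot_injures_human": False, "human_comes_to_harm": True},
}

def first_law_permits(outcome):
    """Law One: may not injure a human, nor through inaction allow harm."""
    return not outcome["robot_injures_human"] and not outcome["human_comes_to_harm"]

legal = [name for name, out in options.items() if first_law_permits(out)]
print(legal)  # [] -- every action, and inaction too, violates the First Law.
```

The moment force at a distance enters the picture, Law One contradicts itself.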

    ___________

    1 https://en.wikipedia.org/wiki/BAE_Systems_Mantis
    2 https://en.wikipedia.org/wiki/Northrop_Grumman_RQ-4_Global_Hawk
    3 https://en.wikipedia.org/wiki/Tadiran_Mastiff
    4 https://en.wikipedia.org/wiki/General_Atomics_MQ-9_Reaper
    5 https://en.wikipedia.org/wiki/AGM-114_Hellfire
    6 https://en.wikipedia.org/wiki/Ghods_Mohajer
    7 https://en.wikipedia.org/wiki/General_Atomics_GNAT
    8 With apologies to the shades of Rodgers and Hammerstein, as well as the entire cast of The Sound of Music
    9 http://www.fas.org/sgp/othergov/doe/lanl/pubs/00326620.pdf
    10 Though Ralph Peters, I think it was, did suggest in one of his novels that AIs could possibly be tortured.
    11 https://www.icrc.org/applic/ihl/ihl.nsf/0/49de65e1b0a201a7c125641f002d57af?OpenDocument
    12 I consider this not entirely likely. There are almost always technical solutions to technical problems, and tactical solutions to technical problems. What those will be for aircraft I can’t say. I can say, however, that Man is a clever beast. Consider how every new anti-tank weapon has been heralded as the end of the tank. Note that tanks are still going strong.
    13 http://makezine.com/magazine/transatlantic-drone-takes-to-the-sea/ and http://www.dailymail.co.uk/sciencetech/article-2326138/The-stealth-drone-boat-set-hunt-pirates-undercover-world.html
    14 https://en.wikipedia.org/wiki/BGM-71_TOW
    15 Personal opinion: the finest fire control computer in the world is the human brain, but it is bitching hard to program.
    16 Hat tip, Martin van Creveld’s Technology and War
    17 http://www.bbc.com/news/technology-33686581
    18 http://www.gq.com/story/drone-uav-pilot-assassination
    19 http://www.deathcamps.org/occupation/einsatzgruppen.html
    20 http://www.yadvashem.org/odot_pdf/Microsoft%20Word%20-%203848.pdf
    21 I, Robot, Isaac Asimov.

    Tom Kratman is a retired infantry lieutenant colonel, recovering attorney, and science fiction and military fiction writer. His latest novel, The Rods and the Axe, is available from Amazon.com for $9.99 for the Kindle version, or $25 for the hardback. A political refugee and defector from the People’s Republic of Massachusetts, he makes his home in Blacksburg, Virginia. He holds the non-exclusive military and foreign affairs portfolio for EveryJoe. Tom’s books can be ordered through baen.com.



  Comments

      • Jack Withrow

Drones scare the crap out of me. Not because of what they can do, but because they are far too easy to use, especially if a terrorist organization gets their hands on some armed UAVs. Politicians look at drones as some type of silver bullet that can do anything they want, yet they refuse to try to come up with any defense against them for when the other side starts using them. Like you, I suspect they will be very vulnerable to EW, but that is not a realistic defense for the US to use currently, as it would severely disrupt our economy.

There are times I firmly believe UAVs were rushed into service with little thought about what the ramifications of their use would be. The genie has been freed from its bottle and there is no putting it back.

        • Tom Kratman

Yeah, and, while I don’t want to encourage the Tin Foil Hat Brigade, Luddite Battalion, too much, and while I don’t think the lefties running the country at the moment are by and large bright enough to actually plan anything that doesn’t involve sticking blunt object A into willing orifice B, drones _are_ a tool for a government to use against its people without having to worry all that much about a military mutiny, since the numbers involved are small and can be bribed or terrorized into cooperation.

        • Jack Withrow

Yes, there is that also. I didn’t want to open that can of worms on a public forum, but that is also a very important consideration in whether they should be used or not.

        • Tom Kratman

          See question 4; it’s not clear that we CAN stop developing them, given our domestic enemy.

        • James

          Pandora’s box.

The scary part is to imagine the amazing accuracy of present robotics when applied to guns, and then add in guided rounds.

        • Ori Pomerantz

          Even if we could, all it would mean would be that whoever wanted to use them would buy them from somebody else. There’s no way to stop China, for example, from making them.

        • Tom Kratman

          Right, that’s part of the intellectual fallacy of presumption of either isolation or ability to impose universality in the absence of a credible plan to do so.

        • PavePusher

          “they are far too easy to use”

          Uh, no, not really. Huge logistical tail, and they are actually rather difficult to apply accurately. Which is why mistakes are made with some frequency.

        • James

          The biggest killer of Air Force drones?…Mountains.

        • Ming the Merciless

          No. Runways. Something like half of them have crashed on take-off or landing.

        • James

Yeah, from what I have heard the AF crashed a lot into mountains in A-stan, and yeah, a lot on takeoffs and landings. The Army just put in auto-landing software.

          People love to talk about drones but the reality is that situational awareness is shit.

        • Jack Withrow

Politically, armed military drones are far too easy to use. Politicians don’t have to put a human being in danger for any of their ill-thought-out schemes, and unless for some reason the drone is brought down, it has a far higher degree of deniability than using manned aircraft. Historically a lot of wars have been fought because of a stupid mistake, and there is no way I can believe that drones do not make those stupid mistakes easier to make. And I would ask you, since when are politicians concerned about how easy or hard it is for service members to employ a weapons system?

          And with the explosion of civilian hobby drones it is also far too easy to use them for terrorism. It does not take a very large drone to dust an area with radioactives. And it will not be long before someone figures out how to mount a gun on some of these smaller civilian hobby drones.

        • PeaceMaker

Already done: a civvie drone mounting a pistol, with a remote-operated trigger.

        • Tom Kratman

          Oh, I doubt a wing of Predators or Reapers requires any more, if even as much, support as a wing of F16s.

        • PavePusher

          Ah, I see what you mean. Not sure how I misunderstood you earlier, sorry.

      • Daric Wade

        Drones can do some impressive things that manned aircraft can’t, such as extended loitering. The capability for a “persistent stare” is very handy. But, they’ll never replace the human mind’s capacity for unpredictability and ingenuity, and I’m in disagreement with the futurists that AI is possible in the short-term. We might have something that sounds human, but that isn’t quite the same as being human.

        As to Asimov’s three laws, the best I can do is point to the film adaptation, in which an AI concluded that humans must be controlled and subjugated in order to keep from harming themselves, as an ultimate conclusion based on the three laws.

        Having not read Asimov, I could be, and probably am, off-base from where you’re going with it.

        • Tom Kratman

No, that’s true, but there’s another truth. Give the robot a physical means of destruction at a distance. Imagine I am raising a gun to shoot you at close range. What does the robot do, consistent with the First Law?

          That’s one reason why I consider intellectuals to be, usually, profoundly unintelligent: They have fantasies they refuse to apply rational thought to.

        • Daric Wade

          Since there’s no such thing as a non-lethal distance weapon (that I know of), and since any less-lethal tool he can use would also probably injure you, he faces an impasse. Either he hurts you or lets you hurt me. Either way, he breaks the first law.

          Alternatively, assuming he has a real weapon, he either kills you or lets you kill me. Same impasse.

          Priority of life? How do you judge that on the fly? Maybe I have a knife and was the original aggressor, and you’re drawing on me to defend yourself.

        • Tom Kratman

          Rubber bullets, but they can sometimes be lethal, even if designed not to be. Plus that ultrasonic thing that hurts the nerves in your skin. CS projectors.

        • Daric Wade

          Very true. I’ve heard of some emerging sound-based tech that looks to be truly non-lethal, but until then, robots essentially have to deal with the dilemma that to obey the 1st Law, they’d have to break it.

        • Daric Wade

          Not much room in Asimov’s laws for the inherent right to self-defense.

        • Duffy

Even Asimov explained the problem with the 3 Laws, with the 4th Law, or Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” You should be able to extrapolate the problem with that from your average Social Progressive Politician. What is the operable definition of harm? A robot grabbing that fat, greasy, unhealthy hamburger from your mitts before you can ingest it and harm yourself. By extension, in the end, the robots will, like the Social Progressive, have to control everything. For our own good, and the Children, of course.

        • Daric Wade

          Also true, and essentially what the film adaptation of “I, Robot” alluded to. Since we’re generally creatures of violence, the only way to keep us from hurting each other is to impose external control.

      • Ray

        If robots actually followed Asimov’s laws we’d all live in comfortable, cotton lined boxes. Because even being outside carries some risk.

        • Tom Kratman

Yeah, kinda. If Asimov hadn’t been something of a prog I’d think he was satirizing them. But he was, so I doubt satire entered into it too much.

        • Matthew

          To be fair, he pointed that exact issue out himself in one of his short stories. “Little Lost Robot” (1947) had a weakened first law robot specifically because the standard ones prevented humans from working in a dangerous environment (a factory floor)

      • Mavwreck

This is probably nitpicking… but you pointed out that a drone’s base/operator is a potential target of anti-drone action. I’d actually make those two separate targets – they may well be in completely different locations. Of course, this doesn’t make the drones any less vulnerable; if anything it makes them more vulnerable, since there are more potential links in the drone management chain.

        • Tom Kratman

          Thought about that. Also thought about retransing the transmission site. Let it go because…well…we’ve done that kind of thing for decades and operators still get found and destroyed; it’s just harder.

      • Lawrence F. Greenwood

40,000-ton combat vehicle? Yep, send in the Bolos! An issue I see with drones is one I doubt many think about: what happens when those delicate electronics get wet or damaged? They break down and are hard to replace, or even to figure out how to fix, in the field. The other is what happens when someone designs a limited, cheap EMP bomb that knocks out all those wonderful drones.

        • Tom Kratman

Similar to the attack-the-drone vice attack-the-manned-plane point, the electronics are always vulnerable to something. Thing about EMP, though, is the radius of action. That said, a lot of the hype is just hype, and it isn’t that hard to harden against, accepting that sometimes hardening won’t be enough.

      • TBR

        Second the mention about torpedoes.

In general, unmanned underwater vehicles are much more difficult to do as a technology. For one, the available methods to communicate with them are limited and/or unreliable, so they need to be really autonomous. For another, while the underwater environment is less complicated as to obstacles and such, underwater sensing is vastly more complicated than in-air and in-vacuum sensing. Especially if you want to do it covertly.

But on the gripping hand, this is exactly the tactical and technological environment that stimulated very early development of ever more sophisticated autonomy in “robotic” vehicles, even if some go by more traditional names like “torpedo.” In regards to true autonomy, the underwater technology world is far more advanced than the aerospace and ground worlds.

      • James

        So here is a question. Where does abortion come into the three laws?

• J. C. Salomon

          Definition of “human”. Loopholes in that definition are discussed in many stories. (E.g., one xenophobic culture had its robots only recognize as human those who spoke with their accent.)

• J. C. Salomon

        “Oh, and Asimov’s Three Laws of Robotics […]? Utter nonsense. Now who can tell us why?”

Isaac Asimov did, in great detail: Many (most?) of his Robot stories explore flaws in the Three Laws, or loopholes through them. IIRC, the most obvious flaw—harm to one person only preventable through harm to another—was dealt with in his very first story that dealt with the Three Laws. (Later stories allowed the robots to make a moral decision about whom to protect at whose expense.)

        (The real hole in the premise is the high level of abstraction needed to define these laws. We can build fairly complex behavior and voice recognition and [almost] natural-language commands into machines now, and we’ll be able to do far better, long before we can design anything into which the Three Laws can be programmed. And the Laws are supposed to be at an unalterably low level? But that’s a technological objection, not a logical one, and an objection Asimov & Campbell could not have foreseen.)

        • Tom Kratman

          Loopholes through them do not necessarily mean he didn’t believe in them when first written. I suspect very strongly that he did; they’re _just_ the kinds of things progs love.

        • Duffy

I think Asimov did believe in them, or that something like them would be needed as AI developed. But he also invented them as a plot device, and in using them as plot devices, I think he started identifying all of the problems with them. In the end, one Robot, or Artificial Intelligence, R. Daneel, was in fact manipulating the whole human race and civilization behind the scenes. I finished “Foundation and Earth” with the feeling Asimov might like R. Daneel Olivaw and viewed him as the protagonist; I think he also understood the danger, and had, in fact, sort of recreated God… in the form of R. Daneel. You know, someone to look over us all. What is rather astounding is not the fact that Asimov created a Social Progressive meme that rapidly became popular. It is that somehow the whole concept of the Three Laws was ignored so much in Science Fiction. Thank you, Fred Saberhagen and Glen Larsen, for not taking the Three Laws as some kind of Gospel. And one wonders how those who love the Three Laws so much never really comment on what goes on in Star Wars.

        • Ori Pomerantz

          I think he believed in them initially, and as a good scientist set out to test them to destruction. By the end of his life, he had destroyed them.

        • Anonymous

Exactly so. If you read Asimov’s robot stories, just about all of them are about circumstances under which the Three Laws break down. The message that I came away with, at least, was: “this isn’t good enough. Try harder.”

      • PeaceMaker

“it’s a lot quicker, easier, and cheaper to train a TOW14 crew – especially one just off the street – to get a hit at X meters than it is with a recoilless rifle crew, because you don’t have to program a human brain to do anything an unusually clever monkey couldn’t” – Do not forget graft, corruption, and govt. contracts, but your point is taken. To a point. A recoilless system could probably train 2 or 3 times as many crews for the cost of the TOW, servicing, depot maintenance, product improvement programs, etc., etc. … that put $$$$$ back into OPP (other politicians’ pockets) instead of money into training troops.

        • Tom Kratman

Well… a TOW crew can be trained via crew drill, lecture, a bit of field maneuvering, and some pre-printed “Do Svidanya Rodina” letters to next of kin. Firing practice isn’t, strictly speaking, needed at all, even if it’s nice. But the 106… I’d guess that one TOW round costs about 14 times more than a 106 round, were we still producing them. 14 x 106 rounds do not a crew train.

        • Anonymous

          I sometimes wonder whether something like the old 106mm recoilless rifle might still have certain utility as a crew-served antiarmor system. Laser rangefinders, ballistic computers, and thermal sights are dirt cheap nowadays. And rumor has it that the 106mm RR is still in reserve stocks in Israel–and, further, that the Israelis got to examine some captured tandem-charge RPG29 warheads and have copied the design, adapting it to, among other systems, the 106mm RR.

I grasp that a decision was made at the highest levels in the early 1970s to go away from recoilless rifles and move to precision guided munitions as the heavy antiarmor support systems for infantry units. But it seems to me–especially in an era in which counterinsurgency missions seem more likely in the near term than orders to die in place delaying the Eighth Guards Tank Army in the Fulda Gap–that recoilless rifles have certain virtues. The ammunition is cheap, and they can do certain things that antitank missiles can’t do quite so well, like throw old-fashioned HE and flechette rounds for infantry support. Large caliber direct-fire HE support is still nice if you can get it, especially from a weapon that can be mounted on a HMMWV or jeep instead of requiring a large AFV to cross several rickety Third World bridges between Point A and Point B. It’s just a thought.

        • Tom Kratman

Check out the Italian Folgore, which is already very close to what you’re suggesting.

• Randy Beck

        You can’t really use a current-day A.I.’s inability to win a wargame as an indicator that they’ll never replace human pilots.

Computers are getting smarter with each passing year. A supercomputer is now a Jeopardy champion. A PC-sized computer may be able to do that in 10-15 years. And computers don’t need to be Jeopardy champions to kill jihadis.

        BTW: they’re already driving in traffic. There are limitations, but they’ll be beaten.

        • Tom Kratman

          It’s not smarter, exactly. It may count on its fingers a little quicker. It may have some odd randomness programmed in but even that’s going to depend on an outside human programmer.

          Intelligence may happen, mind you, but I don’t think it’s going to be that soon.

• Randy Beck

          I could be wrong about 10-15 years, but it’ll happen sooner than you think. I’m relying, of course, on Ray Kurzweil, who could be too optimistic, but even radically discounting his optimism would only mean an error of a decade or two. The point is, it’ll happen — unless you think a real soul is required for this.

          Keep in mind that they don’t need human-level “thinking” to be effective in combat. A computer can play chess pretty well without more brain power than a wasp.

        • James

Part of the problem is it’s still a program. Missiles can be just as smart; same for targeting systems. They only have to work once and have a very narrow focus.

        • Daric Wade

          What I think we’re talking about here are “expert systems” more than true AI. If war is formulaic to a degree (i.e. lines of battle or “in case of breakout, commit reserve”), then an expert system can analyze an opponent’s actions and formulate a course of action. However, I think that an expert system would likely reach for the shortest-distance solution, and it would be a relatively unimaginative/uncreative solution at best.

          And, if the computer’s opponent is fairly intelligent, he can do things that make no sense, or trick the system into committing a fatal error.

• Randy Beck

          Quite true. In the case of IBM’s Jeopardy challenge, the producers had to agree that the questions would not be written in a way that would intentionally exploit defects in the algorithm.

          But that’s still only a problem in the near term. To steal a thought from Keynes, in the long run, the jihadis are all dead.

Or, to steal from Clausewitz, all war supposes human weakness, and against that it is directed — until now.

        • Tom Kratman

          Part of the problem with an expert system, I think, is that you cannot explain to the programmer just how you arrived at a decision that is instinctive rather than thoughtful.

      • Iron Spartan

With multi-million-dollar budgets, millions of dollars in prize money, and the best minds in the world working on it, you get results like this.

        https://www.youtube.com/watch?v=NeFkrwagYfc

        • Iron Spartan

Pathing and collision avoidance are tough even in simplified and controlled spaces. Add in active jamming and spoofing and it becomes worse.

There are applications that could work for area denial with non-recoverable assets using current software. But it would have to be in an area where there are no noncombatants, or where those losses are acceptable.

          I think the next revolution will be using drones as dirt cheap, short range smart munitions.

        • KenWats

          Like the Hornet WAM, only cooler? They gotta have better stuff now.

          https://en.wikipedia.org/wiki/M93_Hornet_mine

      • Ming the Merciless

“Comms” are not a problem unique to unmanned aircraft. The manned aircraft also requires comms to survive, find, and kill the enemy (that’s the point of all that network-centric stuff). EMP will disable manned aircraft just as much as unmanned. Also, increased autonomy mitigates the need for the drone to talk to the operator.

“I have never yet met an AI for a wargame I couldn’t beat the digital snot out of”

        How many times did you die learning how to do that? You don’t get to respawn in real life. How many human pilots can we lose learning how to beat the robot?

        If we don’t field the killer robots, the Chinese will. They have plenty of ruthlessness and will to survive.

        One problem with the Three Laws — men will program robots to kill other men.

        • Tom Kratman

          Attribute it to whichever fluke you like, but as near as I can recall, never. The peculiarities of a given AI program become obvious very quickly, and the digital-snot-beating-out-of invariably follows. Note, though, that I am talking about collective combat, not 1st person shooter games, where speed of reaction gives them an edge. On the other hand, I lack the hand-eye coordination and reactions of a fighter pilot, too.

There’s a difference between minimum essential survival electronics for the drone and for the man. The man can cut commo, get in the weeds, and use a map; the drone, if controlled from the rear, inherently cannot.

        • Ming the Merciless

          We don’t fight that way (“down in the weeds with a map”) any more. We are rapidly approaching the point where we can’t fight that way any more, if we’re not there already. Do we still train to find targets with eyeballs, using no off-board information at all, and then drop dumb bombs on them?

          Over North Vietnam, our pilots did not fight “comms denied”, and off-board comms enabled a much higher survivability rate than would otherwise have been the case. If you want to fight “comms denied” today with manned aircraft, you’d have to buy many, many more aircraft (unaffordable) and be willing to accept far higher losses and collateral damage (unacceptable).

          A lot depends on what you want the plane to do. If you want to hit a fixed target, there is no reason a drone can’t do that in a “comms denied” environment. (You just have to be willing to accept the occasional “oops” when the bridge gets hit with a busload of children on it.) If you want to find and kill moving targets, you will need comms whether you are manned or unmanned. This is all the more true if you have to fight at distance; an enemy that can deny comms can probably also deny close-in bases.

        • Tom Kratman

You are perhaps confusing do and can – what one can do, with the proper training, if one must, versus what one does not do because one is spoiled by tech.

Actually, there are excellent reasons drones can’t do some things, _especially_ if they’re dependent on GPS.

        • Ming the Merciless

          Cannot.

          Manned aircraft cannot survive modern air defenses without off-board connectivity.

          The nation cannot afford to buy the number of aircraft and munitions needed to fight without off-board connectivity.

        • Anonymous

          As a wargame hobbyist, I observe that maybe a rephrasing or clarification is in order.

          I do not speak for Col. Kratman but when he says “wargame” I do not think he means Doom or Quake, which is what the term “respawn” implies. There is a relatively small area of computer gaming, and prior to that board gaming, in which on the screen we see a map, and objects representing map counters, which in turn represent military units, are moved about by the players, or, in some instances, by an artificial intelligence program written into the game code so that a single player can play the game.

          These game AIs are very nearly without exception abysmally stupid and do not do much more than “move units in general approximate direction of objective,” not infrequently failing to calculate paths properly so that, for example, on a map with an oxbow shaped lake, a non-trivial number of the AI’s units will march into the “peninsula” in the middle of the oxbow. And stay there, apparently simulating just looking around stupidly, unable to figure out that they’re stuck and need to back out and go around.

Two wargame AIs I’ve found that are brighter are for Gary Grigsby’s old “War in Russia” and Holdridge’s “TacOps.” The former has better path-finding ability than most and contains much code simulating things like the abstracted logistical planning that allows it to do things like bring the shattered remnant of an army to a supply center, then wait several turns for it to be replenished and absorb replacements rather than throwing it back into the line immediately to complete its destruction. The latter, according to Holdridge himself, has OPFOR selecting randomly at the start of the game from a number of pre-written, pre-planned scenarios that Holdridge created himself and replaying them, as if they were recorded on tape, so to speak – the disadvantage, of course, being that in each instance the pre-written plan is specific to one particular scenario, one particular order of battle, one particular map. Looking at a map, doing what the Army used to call “tactical reconnaissance,” saying “these are the high points, do we know what the fields of view and fields of fire are from these places?”, “these are the choke points for a unit moving westward through this area,” and so on – these are tasks that so far, at least to the best of my knowledge, no one has yet written an AI to do very well.
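In toy code, the “move units in the general approximate direction of the objective” rule and its oxbow failure look something like this – a sketch, with the map, positions, and movement rule all invented:

```python
# An oxbow-shaped lake: a "C" of water cells, open only to the south.
LAKE = ({(x, 3) for x in range(2, 8)}
        | {(2, y) for y in range(3, 8)}
        | {(7, y) for y in range(3, 8)})

def greedy_step(pos, goal):
    """Move to whichever adjacent dry cell (or stay put) is nearest the goal."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), pos]
    dry = [c for c in candidates if c not in LAKE]
    return min(dry, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))

pos, goal = (4, 9), (4, 0)   # unit south of the lake, objective due north
for _ in range(10):
    nxt = greedy_step(pos, goal)
    if nxt == pos:
        print(f"stuck at {pos}: every distance-reducing move is water")
        break
    pos = nxt
```

The unit marches straight into the pocket, reaches the inside of the bend, and stops – exactly the “looking around stupidly” behavior described above – because no single greedy step can make things better.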

          Likewise, as a very long-time computer flight simulator hobbyist, with a particular interest in combat flight simulators? No one yet in the games business, at least, has written an AI that can manage an aircraft in realtime and consistently win or even be a challenge 1v1 or 2v2 with even halfway competent–and that’s by the standards of game hobbyists, not Air Force fighter jocks–human opposition. AIs are stupid. They can take off and fly from A to B at this altitude and do a landing circuit and land, yes. Ask them to do basic fighter maneuvers against an eight-year-old kid who plays a lot of video games? They lose. Badly. As one example you can beat 90% of them by diving for the ground and pulling out at the last minute, then flying away from them at ground-skimming altitude. Those AIs that don’t slam into the ground a moment after you pull out will follow at a higher altitude, seemingly puzzled, then try to do one of the canned basic fighter maneuvers written into them for “enemy lower and at lower speed,” like the one the fighter jocks call the Barrel Roll or the High Speed Yoyo, and paint themselves all over a mountainside because they don’t seem able to do “terrain avoidance” and “BFM” simultaneously, and most don’t even seem to check whether their paths will intersect with the Earth during the attempt to line up guns. If anyone knows someone who’s really, really good at writing AI code and who would like to try to tackle the problem, there’s a lot of money in the gaming industry for anyone who can create a challenging virtual opponent.

      • Christopher DiNote

        One correction Tom, the A1C isn’t the pilot, he’s the sensor operator. The rated officer is the pilot and the trigger puller on the Pred and the Reaper.

        • Tom Kratman

          Good to know, thanks. Hence that article is playing a little fast and loose?

        • Christopher R. DiNote

Not really, because the sensor operator is really seeing all of that – RPAs (drones), at least the larger ones, are flown by a crew – pilot, sensor operator, and usually some additional analysts and people coordinating with other agencies (ground forces, the C2 node, etc.). And that’s just the crew flying the aircraft – the “back end,” the analysts looking at all those hours and hours of feed, are geographically separated elsewhere. The AF does drones a lot differently than the Army. When it comes to weapons employment, though, it’s a rated guy who’s going to fire. I’ll send you a personal message.

        • Tom Kratman

          Yes, but the article is trying to present the boy as the shooter, rather than the witness. Got the email. Still mulling.

      • Emilio Desalvo

        Samuel Clemens had something to say about Dumb Human Opponents: “There are some things that can beat smartness and foresight? Awkwardness and stupidity can. The best swordsman in the world doesn’t need to fear the second best swordsman in the world; no, the person for him to be afraid of is some ignorant antagonist who has never had a sword in his hand before; he doesn’t do the thing he ought to do, and so the expert isn’t prepared for him; he does the thing he ought not to do; and often it catches the expert out and ends him on the spot.”

      • Josiah Humphries

        It seems like combat drones will become reality incrementally with more and more tasks and decisions being taken over by automated systems.
