Gulliver Travels To the Pentagon

The Houyhnhnms

by Willy B

What if Jonathan Swift were to reappear today to add a fifth book to Gulliver’s account of Travels Into Several Remote Nations of the World? And what if he were to make the Pentagon the scene of that fifth book? What would Gulliver find there? Recent news reports might suggest that he would find that the Houyhnhnms have truly taken over the Department of Defense. After years of tireless effort, they have finally trained themselves to believe that computers can predict human behavior.

US Indo-Pacific Command claims to have developed an artificial intelligence tool that will predict for it what Chinese reactions to particular US provocations will be. It was shown to Deputy Secretary of Defense Kathleen Hicks during a stop in Hawaii on Dec. 15. “With the spectrum of conflict and the challenge sets spanning down into the grey zone, what you see is the need to be looking at a far broader set of indicators, weaving that together and then understanding the threat interaction,” Hicks told a Reuters reporter traveling with her. The tool calculates “strategic friction,” a defense official said. It looks at data from early 2020 onward and evaluates significant activities that have impacted U.S.-China relations. The computer-based system will help the Pentagon predict whether certain actions will provoke an outsized Chinese reaction, the official claimed. It seems that the brains behind this “tool” have no understanding of human cognition, and so they have programmed a machine, which cannot possibly be cognitive, to do their thinking for them.

The unnamed official further claimed that the tool provides visibility into when a variety of activities, such as congressional visits to Taiwan, arms sales to allies in the region, or several U.S. ships sailing through the Taiwan Strait, could provoke an outsized or unintended Chinese reaction.

On second thought, maybe Gulliver has already been to the Pentagon. Willy B

This entry was posted in government, Humor.

41 Responses to Gulliver Travels To the Pentagon

  1. Pat Lang says:

    I have watched people try to predict human behavior with math, formulae, etc. all my life. This always fails because there are so many variables in human behavior.

    • blue peacock says:

      Col. Lang,

      Machine Learning neural networks are good at distinguishing between images of dogs & cats when repeatedly trained with images of dogs & cats. As we see with self-driving software stacks, the perception engines used to fuse sensor data (radar, camera, lidar) to detect & classify objects on the road are extremely brittle right now, despite immense amounts of training data & tens of billions of dollars of R&D, because edge cases keep popping up that the perception engine has not previously seen.

      So, at best what this machine learning system of the Pentagon’s can predict is what would happen based on its training data set. If there’s a pattern it hasn’t seen before, the prediction could be wildly off. As you note, since human behavior is non-linear, these systems will by definition be extremely brittle.
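      A toy sketch of that limitation (my own illustration, with invented data; the Pentagon tool's internals are not public): a nearest-neighbor classifier can only ever emit labels that appear in its training set, so a genuinely novel input is still forced into the closest known class.

```python
import math

def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        d = math.dist(features, point)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Training set: two tight clusters, "dog" near (0, 0) and "cat" near (10, 10).
train = [((0, 0), "dog"), ((1, 0), "dog"), ((10, 10), "cat"), ((10, 11), "cat")]

print(nearest_neighbor(train, (0.5, 0.5)))    # in-distribution -> "dog"
print(nearest_neighbor(train, (500, -500)))   # wildly out-of-distribution,
                                              # yet still classified "dog"
```

      The second query resembles nothing the classifier was trained on, but it has no way to say "I have never seen anything like this"; it just answers with the nearest known label.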

      • Christian J. Chuba says:

        So all we have to do is go to war with China over and over again and watch how it started 🙂

        ML is actually quite good, but as you mentioned, it needs good historical data, and in this case that would be data for something that has not happened yet. ML is used a lot by Amazon for ‘you might be interested in these products’, as well as by streaming services, ‘movies we think you will like’, and, as you mentioned, pattern recognition.

        ML is good to process data sets that are not economical for humans to bother looking at.

        • blue peacock says:


          Yes, a popular application of ML is recommendation engines, which learn the behavior of your online activity. There’s some validity to the fact that we are creatures of habit; however, the recommendation engines that Amazon & Netflix use don’t have disastrous consequences if they’re wrong. At worst you don’t watch the movie they recommend. That then becomes another behavioral input. Google’s & Facebook’s business is prediction products, as that is what they sell advertisers and what accounts for 95+% of their revenues. Search, YouTube, etc. are just bait to continuously learn your online engagement behavior.

          The big problem with ML is that it is a black box. You don’t know how it arrived at its prediction. All you can do is retrain if its predictions are completely off and hope that over time it gets pretty good.

    • Babeltuap says:

      Indeed Pat. Too much “spooky action” with human behavior. Imagine asking AI a year ago what three words conservatives would latch onto to mock Joe Biden. It would not have gotten anywhere close to “Let’s Go Brandon.” That’s why I like Coen Brothers movies. The plot looks like it’s absolutely going somewhere predictable, but then it does not. It actually goes nowhere.

    • YT says:

      Col. Lang,

      Sir, that’s not what this gent claims.

      Even tho I am skeptical due to his family name (i.e. noted globalists and technocrats), he appears to be telling us – insinuating – that israeli eggheads have “perfected” it unlike those of Thomas Schelling or the “wizkids” under [hated] McNamara.

  2. Schmuckatelli says:

    We need AI because apparently there’s so little of the real McCoy. Before we apply it to the CCP maybe we should see if we can use it to keep our ships from running into each other first.

    • Razor says:

      As a late dear friend of mine used to say: “Common sense is a wonderful thing; pity it’s so uncommon.”

  3. Barbara Ann says:

    Now a few years ago I would have treated such reports of the infinite complexity of human behavior being reduced to mere algorithms as Swift-like satire. Yet in the Wonderland world of 2021, where trust in science has reached dogmatic heights indistinguishable from unquestionable religious orthodoxy, such reports need to be taken seriously. I recall the 1967 movie Billion Dollar Brain starring Michael Caine featured a similar, seemingly omniscient computer system. The finale included the appropriately spectacular demise of the hubristic megalomaniac whose project it was.

    Colonel, I would add another factor to the many-variables issue which demonstrates the futility of such a project replacing quality HUMINT – of the kind you spent a career perfecting: Human behavior, like the weather, is subject to the Butterfly Effect. One seemingly trivial interaction can be magnified to affect a geopolitically significant event. Maybe Xi had an argument with his SWMBO on the morning of a day when otherwise he wouldn’t have gone to war. The “For Want of a Nail” proverb expresses this issue well. It seems to me unbelievably dangerous to rely on machines to tell us the limits above which to expect an “outsized or unintended Chinese reaction”.

    As an aside, I would highly recommend James Gleick’s book Chaos to anyone interested in the field of Chaos Theory (non-linear dynamics is the more prosaic scientific name). Anyone having read Gleick’s book would understand that complex phenomena like weather & human behavior (e.g. the stock market) are fundamentally unpredictable over anything other than the shortest timescales. Perhaps apredictable is a better word.

  4. sbin says:

    MIC, 17 secret police agencies and DC in general have lost all connections to reality.
    Those lunatics should have their funding drastically reduced.

  5. Leith says:

    It’s been 50 years or more since I read Swift. But I seem to remember that he portrayed the Houyhnhnms as intelligent and reasoning horses. If Gulliver ever went to the puzzle palace on the Potomac, he would find lots of stupidity, but more like the Yahoos than a race of educated equines.

  6. TTG says:

    My first introduction to the field of AI was The Inventive Machine, an automated decision support system based on the problem solving theory (TRIZ) of Genrich Altshuller, a Soviet inventor and science fiction writer. It applied rules to a massive database of patents to come up with innovative possible solutions. It didn’t make decisions. It assisted or supported the operator in making the decisions. As I told Babeltuap earlier today, AI is nothing more than a tool.

    In the 1990s, I did a lot of collecting on various military decision support systems (DSS). They all relied on sensors that picked up specific signals that the DSS rules then interpreted as specific events. Knowing those signals allowed us to spoof the DSS. Any adversary that used these DSS to make decisions or automatically trigger a response without human intervention was setting itself up for a royal spoofing.

    This AI system being peddled to the Pentagon is surely far more sophisticated than those early DSS, but it is still limited. It is just the latest in a long line of “miracle machines” being peddled to the Pentagon. If anyone could come up with an AI that could predict human behavior with any degree of accuracy, it would be the Chinese. They have been hiring the best in AI and algebraic geometry for well over 20 years. They’ve also collected a hell of a data set in the hack of the OPM’s databases, including all those SF-86s and security clearance investigative files. But even the Chinese would be stupid to rely on such an AI system to make their decisions for them.

    • Pat Lang says:

      You can’t predict human behavior that way. Surely you know that. Only Hari Seldon could do that.

      • TTG says:


        The best these things could do is present COAs. Some may be decent. Some may be off the wall. And they’ll miss others altogether. You’re right. Predicting human behavior is guesswork.

        • Pat Lang says:

          Not guesswork, the work of genius informed by vast amounts of well-absorbed data. I spent a lot of time with other strategic analysts trying to inform an idiot retired BG that this was the truth. He actually had a contract to create what he thought was how to do strategic analysis. The task should be to find the tiny number of people who can actually do that kind of work.

    • Christian J. Chuba says:

      FWIW I took about 3 hours of training classes on Machine Learning just to see what all the fuss was about. I’m not claiming to be an expert, but I got the gist of it.

      Short answer, for something as serious as prodding the Chinese military?
      No, no, and no. The same person will make a different decision based on what happened to him that morning.

      I did read that the Chinese were using it to develop auto-piloted aircraft that outperformed human pilots. Did they actually succeed? I don’t know, but I believe that is plausible. ML needs to study data, make generalizations from it, and then test them, and here you are talking more about fast mechanical actions.
      Also, we are already applying this to self-driving cars.

      • TTG says:

        Christian J. Chuba,

        Back in 1990 I picked up a copy of Borland’s Turbo Prolog in an effort to learn something about what I was collecting as a case officer. I learned to program a rudimentary expert system in DOS. Expert systems were the state of the art in AI back then. Current AI is well beyond that now, even though it is still being oversold, as with the efficacy of Tesla’s self-driving cars.

        Years later, when my AI genius was demonstrating his software, I noticed a piece of Prolog III code as he was scrolling through his code. I mentioned this to him and we have been friends for well over a decade now.

  7. Fred says:

    I’m sure our man in Havana, ah, Peking, will keep such an accurate flow of information coming that this system will work just as promised.

    • Pat Lang says:

      Humint of all kinds can provide a small but sometimes vital element of the whole. A lifetime of learning is far more important. I had analysts so learned they frightened me, but they tolerated me well. Clapper fired them all as director or drove them into retirement. A true idiot.

      • Fred says:

        Yes sir. Clapper is another one of those people frightened by those who are truly competent. I can’t imagine how badly damaged our Intel capabilities were after he was done.

  8. What are the machines saying about the economic meltdown going on in Turkey?
    Erdogan has been straddling the fence between East and West for a while now, but this could make him have to pick one side or the other. It’s doubtful he wants to go to the IMF again, while Russia and China would love to take him under their wing.
    Given Turkey is at the center of that geopolitical fault line, the implications should be getting more attention than they appear to be.

    • jim ticehurst says:

      The machines are saying That There are Means and Methods to Cook a Turkey..and Ruin The the Cook Gets Blamed..Make The Food so Expensive No One can afford to Buy It…Or Produce..It..Because of Hyper Inflation..which makes the Currency weaker..Even the Tea Producers in His Home Town are mad at Mr Erdogan..they cant afford to produce any More…Fertilizer is too Expensive..and Things arent going well In Syria..I Believe..All this Currency and Economic Manipulation Reminds me of Germany and Japan..being Forced to React..The Food on THAT Plate…Smells BAD..

  9. jim ticehurst says:

    You cant Beat Humint..Humans Have Instincts..Combined With Learning how to Analyze based on Gathered Data..Education..Association..Communication..and Emotion…And Experience…machines are Machined. Fallible…How many Disasters. Like Nuclear Responses. Use of Deadly Force.. etc been Avoided Because the Big Screens and Machines Provided False Data. Machines Can Be Manipulated…A Human Picks Up The Red Phone..Has The Codes..Turns The Keys…..And Someone with Experience Pat Indicated..Is The Most Important Element…To Analuze and Respond..NOT a Machine.Sometimes Times I Think For Alien Intelligence..And It May Not Be So Friendly..To Humans..Machines do Not Creat LIFE..They Creat Death…

  10. Deap says:

    DH and I are terrorized by every new home appliance we are forced to buy, since they are all now smarter than we are.

    Biggest problem is they are programmed to meet someone else’s standards for energy and water savings, but do not really get our clothes cleaned or our dishes dry. No one should need five remotes just to turn on a TV.

    Bring back an analogue switch and a dial – on/off will suit me just fine.

  11. Eric Newhill says:

    The Chinese know we have this AI.

    I don’t think AI is very good at the intended task in the first place, having seen some of it demonstrated (albeit in a corporate setting) and looked under the hood a little. But even if it is good at the intended task initially, all the Chinese have to do is feed it adversarial inputs. The AI will then start “learning” all the wrong info and incorporating it into its analysis, the old garbage in/garbage out problem.

    The adversarial inputs could be random; just to screw with the AI and inhibit its ability to develop an “Understanding”, or the inputs could be designed to cause the AI to learn an incorrect, but intentionally directed misunderstanding.

    With the random adversarial inputs, the best the AI could do is recognize it’s being screwed with and raise that red flag, IMO.
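    To make that concrete, here is a hypothetical sketch (all data and numbers invented): a simple centroid classifier whose training stream an adversary can feed. A handful of deliberately mislabeled examples drags the decision boundary until hostile activity reads as routine.

```python
def centroid(points):
    """Average of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(point, centroids):
    # Pick the class whose centroid is closest (squared distance).
    return min(centroids, key=lambda c: (point[0] - centroids[c][0]) ** 2
                                        + (point[1] - centroids[c][1]) ** 2)

# Honest training data: "routine" activity near (0,0), "hostile" near (10,10).
data = {"routine": [(0, 0), (1, 1)], "hostile": [(10, 10), (11, 11)]}
cents = {k: centroid(v) for k, v in data.items()}
print(classify((9, 9), cents))            # correctly flagged "hostile"

# Adversarial inputs: hostile-looking behavior deliberately labeled "routine".
data["routine"] += [(10, 10)] * 8         # poisoned examples
cents = {k: centroid(v) for k, v in data.items()}
print(classify((9, 9), cents))            # now misread as "routine"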

    People are always dazzled by short cuts and sorcery. AI is modern sorcery.

    • YT says:


      I recently stumbled upon the writings of this here fella living in Montana.

      Some years back, one of the readers of this blog highlighted his posts as well.

      “Well-known globalist Bertrand Russell worked tirelessly to show that the entirety of the universe could be broken down into numbers, writing a three-volume monstrosity called the Principia Mathematica.

      Russell’s efforts were fruitless and Godel’s proof later crushed his theory.

      Russell railed against Godel’s proof, but to no avail.

      Now, why was an elitist like Russell who openly championed scientific dictatorship so concerned by Godel?

      Well, because Godel, in mathematical terms, destroyed the very core of the globalist ideology.

      He proved that the globalist aspirations of godhood would never be realized.

      There are limits to the knowledge of man, and limits to what he can control.

      This is not something globalists can ever accept, for if they did, every effort they have made for decades if not centuries would be pointless.”

      “There are limits to the knowledge of man, and limits to what he can control.”

      But of course, my [unsavory] mainland ‘cousins’ are probably counting on dumb Americans jeopardizing themselves via “feminism” (i.e. abortion: “my body, my choice”), more ‘diversity’ (an army can fight foreign devils with trans-freaks, queers and butches?!) and placing all their bets on their own quantum or supercomputers R&D to trounce the “white monkeys” of the West.

      It’s now a matter of who breeds fastest rather than who develops better A.I.

      • jld says:

        “There are limits to the knowledge of man, and limits to what he can control.”

        Most certainly, but that is NOT what Gödel proved.
        His incompleteness theorem states that a formalized system (i.e. any AI implementation) cannot prove its own consistency (a.k.a. correctness), but THIS is a proof enunciated by a human… Gödel himself.
        So this says nothing about human limits per se.

  12. Deap says:

    Peter Navarro’s new book tells tales from inside the White House:

    ……..”His book is a rock-and-roll narrative and exposes Anthony Fauci as an audacious, mendacious, sociopathic dissembler — a malefactor who did “more damage to this nation, President Trump and the world than anyone else this side of the Bat Lady of Wuhan.”

    That’s just a little bit of the benefit of reading the book: since Navarro was with the president from the beginning, he gives a reader a sense of the problems confronted by the Trump administration, which attempted to recruit reliable supporters of the policy positions but got traitors instead and worse: weaklings, leakers, and saboteurs. …….”

  13. Leith says:

    Pat & TTG –

    What are your thoughts on the use of AI by NSA to accumulate insights on SIGINT? And I understand that they intend to start using AI in the cybersecurity realm now also.

    And what about IMINT use of AI within NGA?

    • TTG says:


      We’re already there. A lot of the implementations are just enhanced search and pattern recognition tools. With the immense size, disparity and sometimes fleeting nature of data sets being examined, these AI tools are indispensable.

      Towards the end of my career at DIA, I worked with a true genius in this field. He actually developed a new field of mathematics in order to further his programming in AI. His work wasn’t just an AI beast sitting in a sub-basement being fed data sets. His baby consisted of independent and interconnected “agents” that worked in the wild, creating and testing hypotheses on what they discovered. These agents kept the operator informed of what they discovered, what they hypothesized and what they recommended. They would respond to the operator’s direction in natural language and real time. It was like having a swarm of bold, ingenious and tireless scouts at your command. I can’t be more specific, but it was definitely the stuff of William Gibson novels from “Neuromancer” to “Pattern Recognition.” My AI genius buddy’s creations are now doing amazing, cutting edge stuff in medical, intelligence and cybersecurity applications. They’re probably doing things I can’t imagine… but they’re still tools.

    • Leith says:

      Instead of IMINT I should have said the new buzz, GEOINT. Showing my age, I guess. Found that USGIF (Geospatial Intelligence Foundation) just held a symposium in October where AI was deeply discussed, mostly for GEOINT for NGA contractors.

      Looks like there has been a JAIC (Joint AI Center) in the Pentagon for three years or more. And there was an NSCAI (National Security Commission on Artificial Intelligence) that had been around for three years but just shut down in October after their final report.

  14. TV says:

    “strategic friction,”
    Given the current state of the US military, it should be “strategic FICTION.”

  15. walrus says:

    If anyone ever develops a workable AI theory and a computer system to give effect to it, its first use is not going to be military. Human nature indicates it will be stock markets, closely followed by other forms of gambling.

    Furthermore, such use will trigger red alerts in prudential security systems the world over since a successful AI behavior, by definition, will break the laws of probability. One can only go on proclaiming “dumb luck” as a reason for winning for so long.

    In a military setting, how many troops are willing to risk their lives on the basis of AI predictions?

    • TV says:

      “how many troops are willing to risk their lives on the basis of AI predictions?”
      As opposed to the troops willing to risk their lives at the hand of the current empty suit “leadership?”

  16. Christian J. Chuba says:

    I am assuming that they are using AI based on Machine Learning because that is all I see being taught in the Universities and that is where our STEM graduates come from. If someone else knows different, please tell me.

    Let me start out: this is a bad use of ML. ML requires a feedback cycle. You feed meaningful data into the program and run it. You get results and then compare them to your desired outcome. You repeat this process but customize parameters to see if you can get better results.
    This requires a very clear way to measure success.
    Examples of ML:
    1. Call centers, how many times you had to transfer a call to satisfy the request (or the victim hung up.) This is a hard problem for decentralized orgs like Amazon. Easy to measure, the best answer is ZERO.

    2. Customer recommendations for products or streaming services. Again, easy to measure, how many times did they watch (or finish watching) the movie.

    3. Pattern recognition, easy to measure, how many times did it identify the dog, cat, or neither in different types of pictures within 3 sec’s. I hate when they throw in a Griffin.

    4. Speech recognition, how many times did it identify the correct word from the audio.

    So what are we measuring in the Pentagon application, how many times we deterred aggression? This seems a bit subjective and nebulous.

    Now maybe we could get something meaningful to measure if all of the Chinese commanders involved in the incident agreed to fill out a questionnaire afterward and return the bio-monitors we gave them. Then we could tell if they were frightened or pissed off.

    I could see potentially trying ML on our own troops to analyze military exercises. If we collected heart rate info or performance data that we could feed into a program, it might be able to come up with an observation that human analysts overlooked. I would not replace humans, just have one running alongside them as an extra observer. Call him Krazy Kat. Krazy Kat has no authority, he just writes a report for the other analysts.
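    The feedback cycle I described can be sketched in a few lines. This is a made-up toy (the “model” is a single threshold and the data are invented), but it shows the essential ingredient the Pentagon application lacks: a success metric you can actually compute.

```python
# Labeled historical data: (score, label) pairs, label 1 = event occurred.
data = [(0.1, 0), (0.2, 0), (0.35, 0), (0.4, 1), (0.7, 1), (0.9, 1)]

def accuracy(threshold):
    """Fraction of examples where predicting (score >= threshold) matches the label."""
    return sum((score >= threshold) == bool(label)
               for score, label in data) / len(data)

# Feedback cycle: try candidate parameters, measure, keep the best one.
best = max((t / 100 for t in range(101)), key=accuracy)
print(round(accuracy(best), 2))   # -> 1.0 on this toy data
```

    With no measurable outcome (how do you score “deterred aggression”?), this loop has nothing to optimize, which is Christian’s point.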

  17. scott s. says:

    I studied computer science with emphasis in AI in the early 80s (thanks to the USN), when there was an AI boom based on symbolic AI. There was a lot of hype then which never came to fruition. Now deep learning has replaced symbolic AI in the hype machine. I guess it would help if all the crypto miners redirected their hardware to neural nets. There was an excellent series of articles in IEEE Spectrum a couple months ago on the state of play with AI.

  18. aka says:

    Well, AI and Machine Learning have become sort of a mystic art for the general population. Throw in a neural network (NN) or (god forbid) deep learning (DL), and things sound like special relativity.
    Most of ML is statistics. With faster computers and the cloud, previously time-consuming computations have become faster.
    Simple ML is just regression: a single equation which optimizes itself based on previous data to predict something. With an NN, you are using multiple equations instead of one for more complex predictions. DL is just a more complex NN, which has even more equations for even more complex predictions.
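    The single-equation case can be written out in a dozen lines of plain Python (my own toy example, with invented data, fitting y = a*x + b by ordinary least squares):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx   # slope and intercept

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]   # data generated by y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                 # -> 2.0 1.0
print(a * 10 + b)           # "predict" for x = 10: 21.0
```

    Everything fancier in ML is, loosely speaking, this same idea with many more equations and parameters fitted at once.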

    • Christian J. Chuba says:

      My take away from the limited classes I took was that everything you mentioned comes under the umbrella of ML.

      I would chart it like this …
      1. A) Data => B) ML voodoo of your choice (DL, NN, regression …) => C) analyze results. Repeat, modifying step B based on the results in step C.
      During development you do this many times with the same data to optimize performance; after deployment, you keep monitoring the results and collecting data.

      BTW I took the course eval test, which was pretty hard, with short time limits per answer to prevent ‘googled’ A’s. I was basically graded ‘convincing sycophant.’ Not trying to oversell myself.
      The Pentagon proposal, as stated in the yahoo link, is insane.
      There is no meaningful data collection or results to analyze. Since we have not used this to evaluate our own military performance, where we have full access to personnel, why should brinkmanship with a real military be our first application?

      • Christian J. Chuba says:

        I meant to say ‘convincing dilettante (not sycophant)’

        Also, there are about 30hrs of online courses offered through my company as electives and I don’t use it on my job. This was out of curiosity. So by definition, I suppose I am a dilettante. The test was not based on what I took, it was a full proficiency test. Kind of hurts, I should take more and try again.
