Suppose you're a student at the Catholic Franciscan University of Steubenville, who is -- somehow -- unfamiliar with the contents of Catholicism.
One day over lunch, it comes out that you basically have no idea what the Church of Rome is.
"I don't know why I should bother learning more," you say. "The idea that there's this person who created the world, sent intermittent imperious messages to one tribe in the Middle East for millenia, then eventually decided to visit and started the Catholic Church? It sounds like a Brandon Sanderson fantasy novel -- it's just not reasonable."
"Yeah, I get why you would say that," says your interlocutor, Francis, patiently. "But people in the Church actually care very much about reason. John Paul II called faith and reason the two wings by which man rises to God."
"There are actually," Francis continues, "really good, reasonable arguments showing that God exists."
He hands you a book by St. Augustine, and points out a specific argument in it. You say that you'll read it.
When you meet up next week he asks you what you thought of Augustine.
"Uh," you say. "It didn't seem like a very good argument? Like Augustine seemed to have somewhat-self contradictory beliefs about the immutability of God and the Incarnation?" You spell out your objections in a little more detail.
"Hrrm," says your interlocutor. "Well, not everyone thinks that Augustine's argument works, even all Catholics. How about instead you read the Five Ways by Aquinas?"
You sigh, and agree to read it.
Once again, next week, you meet up. Francis asks what you thought of Aquinas.
"Uh," you say. "He also seemed pretty bad. Like, it took me a few more hours to figure out why, but I think pretty much all his reasoning involves these -- fake categories?" You try to spell out your objections in a little more detail.
"Hrm," says your interlocutor. "Well, not everyone thinks that Aquinas' argument works, not even all Catholics. How about instead you read one of C.S. Lewis' arguments --"
"Wait wait wait," you say. "People have been thinking up arguments for God's existence for a thousand years. Even if every single one of these arguments is unsound and invalid, if I keep reading them I'll probably find one whose flaw I cannot locate, because I'll eventually just run into my own blind spot."
"But," you continue, "it would be unreasonable to convert to Catholicism just because my attention inevitably lapses at some point."
"Instead," you continue, "How about you just show me an argument that the Catholic Church has definitively judged to be good, instead? And I'll evaluate that argument, and just that argument."
"Well," Francis says a little awkwardly. "The Catholic Church has ruled that there is a valid argument for God's existence from natural reason alone. But it hasn't said any particular argument for God's existence is any good."
"Really," I say. "So the Catholic Church has ruled that some argument for God is valid, but not that any particular argument is."
"Yeah," he says.
"Does that seem... problematic to you?"
"Why... why would it be bad?"
"Well," you say. "Let's back up. We have this belief, Catholicism, that is held by many millions of people. Let's say there are two different hypotheses for why these people hold it."
"On one hand, lots of people could hold to Catholicism because it's true. The evidence in favor of Catholicism might be so good that lots of people investigated to find out if it was true -- and they all found that it was true! There were enough miracles that were definitely miracles, and not people misreporting the evidence. Or there were enough valid arguments for God's existence that really made sense, and were sound and valid. Catholicism might be the kind of thing that spreads because it is true, like belief in Newtonian physics or in evolution or in the existence of New Zealand."
"On the other hand, lots of people might hold to Catholicism for reasons unrelated to the truth of Catholicism. The reason that people are Catholic might be because Catholicism does a good job turning other people into Catholics. Maybe the belief that you'll go to Hell if you don't spread Catholicism helps make other people be Catholic, independent of the truth of that belief. Or maybe the belief that you should have a dozen kids helps make more Catholics. Catholicism might be the kind of thing that spreads itself entirely independent of whether it is true, like belief in astrology or the belief that the greed of billionaires causes inflation."
"So," you conclude, "this disagreement among Catholic philosophers about which arguments work seems like some degree of evidence in favor of the later position. Of course, Catholicism could still be true even if most people are Catholics for reasons independent of its truth. But it would be some level of evidence against the truth of Catholicism."
"Would it be?" Francis says. "Maybe? I'm not sure. I have to go to class."
Suppose you're a student at the University of California, Berkeley, who is -- somehow -- unfamiliar with the concern over AI safety.
One day over lunch, it comes out that you basically have no idea what AI "notkilleveryoneism" is.
"I don't know why I should bother learning more," you say. "The idea that there's going to be this superintelligence, who can outsmart all humanity put together, and who will be by-default omnicidal and want to kill everyone? It sounds like an Aladair Reynolds science-fiction novel -- it's just not reasonable."
"Yeah, I get why you would say that," says your interlocutor, Sarah, quickly. "But the AI-risk people actually care a lot about reason," she says "The community really got going because Yudkowsky wrote essays and a novel specifically designed to help people reason well."
"There are actually," she continues, "really good arguments showing that AI is going to be super hard to steer."
She hands you a book by Nick Bostrom, and points out a specific argument in it. You say that you'll read it.
When you meet up next week she asks you what you thought of Bostrom.
"Uh," you say. "It actually seemed really bad. Bostrom seems to be operating off mistaken notions of what reinforcement learning is? And a deeply pre-deep-learning notion of how these things would be trained." You go on to spell out some objections to what he says.
"Hrrm," says Sarah, "Well, not everyone thinks that Bostrom's argument works, not even all AI-doom pessimists. How about instead you read the List of Lethalities from Yudkowsky?"
You sigh, and agree to read it.
Once again, next week, you meet up. She asks what you thought of Yudkowsky.
"He... seemed pretty bad," you say quickly. "Like it seemed like he was working with fundamentally confused categories." You spell out some objections in more detail.
"Hrm," says your interlocutor. "Well, not everyone thinks that Yudkowky's perspective is right, not even all pessimists about alignment. How about instead you read one of Christiano's arguments -- "
"Wait wait wait," you say. "Folks in California have been thinking up arguments that AI is going to kill us for decades. Even if all of their arguments are bad, if I keep reading them eventually I'll probably find one that I can't find the flaw in. I'm not obliged to run an adversarial attack on my own cognition."
"Instead," you continue, "How about you just show me the argument that is universally-agreed-upon to be good -- or even mostly agreed-upon-to-be-good -- by people worried about AI risk? Like, you say that everyone cares about truth in this community -- wouldn't it be weird for them not to have worked out what kind of argument actually works? Or have at least some kind of a notion of which arguments were most important?"
"Well," she says a little awkwardly. "You have to understand, this is a really hard thing to reason about. It's a pre-paradigmatic kind of a thing."
"So there's no agreement about which arguments are valid, or what model is correct, or even which matter the most?" you ask. "A bunch of people in this community all think AI is super dangerous, but they cannot point to a particular line of reasoning they all agree with as to why?"
"No... I mean... I think there is agreement about the essentials of the story." she says. "Just not agreement on the details of the sketch."
"But isn't that some evidence that the belief in AI doom spreads for reasons unrelated to its truth?" you say.
You continue: "Like -- of course -- we all think that some religions spread because they tell their members to spread the religion. Isn't disagreement among people worried about AI like this, at least some level of evidence that AI-worry spreads for similar reasons unrelated to its truth? In general, when one encounters a compelling narrative of doom with obscure arguments shouldn't a red flag go off in your mind somewhere?"
"Frankly," Sarah says coldly, "I think that comparing it to religion is manipulative and bad-faith of you. Look, I have to go to class."
A week later, you're walking down the street when you see a protest. Everyone is holding up signs saying things like:
"STOP AI"
"DON'T BUILT THE DOOMSDAY MACHINE"
"SHUT DOWN TSMC NOW"
And you notice Sarah. She's speaking into a microphone and saying things like this:
"No one has any responses to the arguments for AI Doom! Yann Lecun just ignores them! They all just ignore them! They mock them, because they have no possible response!"
Something about this strikes you as odd. So you wait until the protest is over.
Afterwards, you approach Sarah and say: "Hey, during the protest, I heard you say no one has any response to the arguments for AI doom. Which arguments does no one have a response for?"
"I mean, you know," she says. "The arguments."
"Which arguments," you say. "I read some arguments! There seemed to be pretty good responses to them."
"You know, the basic arguments." she responds.
"But there aren't any agreed-upon arguments for AI doom," you say. "If you talk to 10 different AI safety researchers, you'll get like 10 different stories of why AI alignment is hard. Which arguments do you find most convincing?"
"Well..." says Sarah, in a somewhat different tone of voice, "I thought that... there was the argument about AI as a species... look, I'm more of an activist than a theoretician."
She cuts off that line of thought. "Look," she says, "there's a consensus of experts who agree that it's a risk. It's normal to simply defer to experts in such a case."
"It's totally normal to defer to experts," you agree. "That's absolutely a wise thing to do under many circumstances. But it's important to notice when you're doing so, rather than pretend that one's own belief rests on understanding the matter."
You continue: "So, you, personally, think AI is a risk because you trust the relevant experts, who have come to a consensus, rather than because of any particular argument?"
"Well," Sarah returns hotly, "the relevant experts do understand the arguments, that's all you need to know."
"Huh," you you say. You fold your arms.
"So when I look into the 'experts' there appears to be no consensus among even those convinced of AI doom about what model -- what argument -- is correct."
"So -- for example -- you can find people on EA forum who ask for the most representative arguments for AI doom. And the general response is that there is no unified perspective. There were maybe 5 different perspectives people pointed to. And each of these are very loose clusters with lots of people in them with separate stories of their own."
"Like you can read literally thousands of words of diffuse disagreement between Yudkowsky or Christiano and others. Or you could try to like, follow the bread-crumb-trail of maybe two dozen winding and inconclusive blogposts on VNM consistency on LessWrong. But there's no central argument that everyone would sign onto about why AIs are likely to kill everyone. Stories about VNM consistency are enormously important to some, but don't matter all to others!"
"So what!" says Sarah. "Just because there's disagreement, it doesn't mean we're not right."
"Of course," you say patiently. "If you had a dozen bad arguments for the existence of Europe, Europe wouldn't therefore fall into the void."
"So why are you on my case?" she returns.
"Because," you respond, "You're blaming people for not responding to your arguments. You're acting like there's some determinate, specific argument that they could respond to. But there is -- very clearly -- is no such argument. There isn't even like, an agreed-upon hierarchy of the most important arguments."
"Instead, it looks to me like there's this enormous cloud of vague arguments. You have giant Yudkowsky blog-posts, which are themselves pointers to other blog posts from ten years ago, which themselves made some dubious predictions; you have big Open-Philanthropy-funded examinations from Carlsmith, which might or might not actually be the real reason that anyone believes anything; you have long 100-comment threads on LessWrong; you have a literal multi-million-word BDSM Pathfinder fanfic from Yudkowsky. The situation is so bad that people you can find blogposts about why it's pointless to try to hash out these disagreements."
"Asking someone to respond to the arguments under these circumstances is like -- hrm, an analogy."
"Throughout the history of the Catholic Church, a lot of people have made arguments that God must exist or Catholicism. Anselm, Aquinas, Augustine; Bonaventure, Bossuet, Bloy; Chesterton, Cajetan, Christoph. But it's unreasonable to say, 'Please respond to the arguments for God's existence' without saying which ones you actually think are good. There are just too many, too spread out, with too vague lines between them."
"Of course, saying 'RESPOND TO THE ARGUMENTS' might give you points in a debate, because it's unanswerable. But it's unanswerable for reasons entirely unrelated to the truth of the arguments. If you actually care about talking to someone, rather than about scoring points in a debate, you would say, 'Well, I think that God must exist because of Chesterton's argument in Heretics,' or something like that. And notably, this would be true even if God actually did exist."
"The 'RESPOND TO THE ARGUMENTS' construction in your movement is also essentially trying to turn the disorganization and lack of clarity in your own movement into a virtue."
"Look," says Sarah, "that doesn't mean you're right! I bet that people who don't think that AI is likely to kill everyone don't have agreement among themselves as to why it isn't likely to kill everyone."
"I'm sure that's the case," you respond. "This 100% does not mean that the anti-doom side is right. Maybe AI will by default be omnicidal, and the only way to save is to start an international consortium that's the only group allowed to do AI research."
You continue: "If you find someone who doesn't believe in AI doom repeatedly pestering someone with 'YOU CAN'T RESPOND TO OUR ARGUMENTS AGAINST AI DOOM' without specifying the argument, you're welcome to direct them to me. Such a construction would be equally propaganda -- that is, it's unanswerable regardless of the true state of affairs."
"But so is your construction. My chief point is that this particular rhetorical construction is about propaganda rather than truthseeking -- although, to be fair, I tend to see it much, much more among people who believe in AI doom."