I don't want to talk about ai
intellectual honesty doesn't feel safe when people ask me about the tech apocalypse
like any good ea, I try to have a scout mindset. I don’t like to lie to myself or to other people. I try to be open to changing my mind.
but that kind of intellectual honesty only works if you don’t get punished for being honest. in order to think clearly about a topic, all of the potential outcomes have to be okay - otherwise you just end up doing mental gymnastics to land on the answer you want.
and in some cases - like existential risks from ai - none of the potential outcomes of thinking deeply about it are especially attractive.
if I look into ai safety and come to believe the world’s going to end in ten years, that would be super depressing, especially if I can’t do much to contribute. so that outcome, while tolerable, isn’t especially attractive.
you might think, “but maybe you’ll figure out that ai isn’t really a risk after all! wouldn’t that be reassuring?”
let’s think through what’s going to happen if I investigate ai safety and realize it’s not that important:
a lot of arguments. people in ea want to hear ideas that are different from their own - it’s a credit to their epistemic humility! I encountered this a lot when I first got into ea in 2017. I remember being cornered at a new year’s eve party and interrogated in detail about my views on ai safety. it convinced me that the guy I was talking to cared deeply about being correct about ai, but it wasn’t a lot of fun.
a lot of re-education. after hearing my views, I’m guessing people will want to share their own. at church, when my views deviated from the norm, I would usually get a lot of interested questions followed by the same two or three recommended readings. I’d expect the same thing here.
a lot of judgment. I’m not a scientist. if I tried to form my own view on ai safety, it might be really stupid. or it might just be kind of weird. either way, there would probably be at least a few people who would think less of me.
and maybe - MAYBE - I could convince other people to shift their resources to something else. if I were right, that would be very positive; if I were wrong, it would be very negative. but as a social sciences major, my chances of being both right and persuasive on ai safety seem astronomically low.
if I wanted the best possible outcome for me personally, I’d just memorize two or three sentences from one of Ajeya’s or Holden’s blog posts and quote them when I’m asked about my views. “I agree with Ajeya that in the coming 15-30 years, the world could plausibly develop ‘transformative AI’: AI powerful enough to bring us into a new, qualitatively different future, via an explosion in science and technology R&D” sounds pretty good, and I think it would impress most of the people I talk to.
so in summary, while forming an inside view on ai might be very altruistic of me, I just can’t bring myself to do it. it would take a long time and it’s hard for me to imagine any good coming from it. the next time someone asks me what I think about ai safety at a new year’s eve party, I plan to blithely respond, “I’ve never really thought about it. Would you like another drink?”