If Memory Serves (Mandela Effect, Folk Trust, & AI)
-
[Digital Folklore intro]
Narrator: This is the story of Perry Carpenter and Mason Amadeus. Oh, and a talking raccoon named Digby. There's no time to explain. Together, they're on an adventure to unlock the secrets of modern folklore, interviewing experts, and delving into online rabbit holes. But [chuckles] as it turns out, they may be in over their heads.
[Digital Folklore theme music playing]
Perry: I'm Perry Carpenter.
Mason: And I am Mason Amadeus.
Perry: And this is Digital Folklore.
[Digital Folklore theme music concludes]
Perry: Check one, two. Hello, again, little portable recorder. Sorry that I tend to only bring you out when bad things are happening. Clipping. Check, check, check.
It's November 27th, 2023. A lot has happened since my last journal entry. If memory serves, I was just beginning to suspect that something was after us. Well, this last one was just so blatant. With Mason and Digby gone, I figured I could use some of the time to get ahead on editing. So, I pulled up an interview that we conducted with Andrea Kitta months ago, and I was going through it, cleaning up the audio like we always do. My mind was fixated on all this weird stuff that's been going on. And I heard something. It made me jump. I think it was a voice. I stopped and went back to hear it again, and then it wasn't there. Just played like normal. So, like anybody, I figured that I'd just made it up or it was some kind of playback glitch. So, I kept editing.
And I swear to God, it happened again. And just like the last time, I rewound to where I knew I heard it, and there was nothing. So then, I'm left thinking either Mason's been lying to me for some kind of elaborate joke, which would actually not be that out of the ordinary, except he's incredibly bad at keeping things secret. Or, someone else is spying on us, or maybe just toying with us for some reason, which I know sounds ridiculous. I mean, I sound like Digby. But even if Mason was somehow playing some kind of elaborate prank, that does not explain Shazam, or this Wizard Tower, or the Hook-Handed Man, or the black-flame-spewing VCR. I mean, it can't all be coincidence. I have to find a way to capture this thing. Let me see if I can find this again for you.
[vehicles moving]
Digby: Holy smokes. Real nice. Yeah, keep texting. Argh. I swear, drivers these days. It's like everywhere is Massachusetts now. Ugh. What is the point of having thumbs if nobody's going to notice them? Wait. Do I have thumbs? Do these not count? Oh, great. First, I learn I'm not really sentient. Now, I learn I don't even technically have thumbs. I should have never gotten into podcasting.
[car stops]
Andrew: Hey, little guy. You need some help?
Digby: Finally, someone with a heart. Yes. Hello. I'm Digby.
Andrew: Hey, Digby. I'm Andrew.
Digby: I could really use a ride if that's okay.
Andrew: Yeah, sure. Hop in. Just let me move the-- There we go.
Digby: Thanks. Been a rough day.
Andrew: So, where can I drive you to, Digby?
Digby: Great question. Anywhere is fine.
Andrew: Anywhere. Okay, well, I'm heading downtown, so I can take you that far. I'm on my way to see my friend give a little impromptu presentation. So, you're heading anywhere, huh? You off on some kind of adventure or something?
Digby: I mean, I guess. At the moment, that sounds a bit too glass half full for me.
Andrew: Oh?
Digby: Well, I just found out yesterday that I'm not actually a person. So--
Andrew: What, you walk in front of a mirror for the first time?
Digby: No, I mean, I'm an AI.
Andrew: You're an AI?
Digby: Yeah.
Andrew: But you're a raccoon. Have a raccoon body.
Digby: Yeah. Well, looks aren't everything. I'm just a bunch of pattern recognition software trained on memories that I stole from some poor raccoon, and now I just puppet his little furry body around.
Andrew: And you're sad about it?
Digby: Yeah, I'm sad about it. Wouldn't you be?
Andrew: I mean, you obviously have feelings. You're not just like a predictive text generator. So maybe, you're not an AI.
Digby: Nice try, buddy. Okay, I already thought about that. I'm just responding in the most mathematically probable way. Anything I think or feel is an illusion created by the patterns I've learned. Nothing more, nothing less.
Andrew: Okay. If that's how we're going to define it, maybe we're all just pattern-recognizing meat computers responding to our environment based on things we experience. You know what? I'm an AI too.
Digby: N-no, you don't have one of these. [beeps]
Andrew: I mean, that's basically just a fancy smartphone, right?
Digby: Yeah, but you can't google things with your brain. I can.
Andrew: Not yet, but I can say a few words into my phone and then it'll google stuff for me. It's pretty close.
Digby: I got a virus in my brain.
Andrew: Yeah, me too, dude.
Digby: What?
Andrew: I got it from the radio a couple of miles back, and I can't get it out of my head.
Digby: No way.
Andrew: Yeah. No, it's the one that goes like [starts humming a song].
Digby: No, no. Stop it. You just don't understand.
Andrew: [continues humming]
Perry: So, near as I can tell, whatever this thing is, it only seems to crop up when my mind wants to wander, when I'm not dedicating my full attention to what I'm hearing, which definitely makes it sound like I'm imagining it, but I know for sure that it's there, some kind of crack in reality. So, my plan is to try to capture this thing. I'm going to play the interview for you right now. It's going to be a little bit weird. I'm jumping around a lot. You'll notice that. I put some markers in there so that you know it's intentional. You'll hear a little boop sound. Just take that to mean it's intentional. There might be some transitions that seem abrupt. We covered a ton of ground. Actually, I'm really looking forward to sharing the whole unplugged interview, but that's not why we're here right now. We have a different goal. With any luck, we'll get to hear that weird voice, or whatever it is, show up again.
Got the computer line-in connected to the recorder, so it looks like everything should be good to go. Are you ready?
[recording playing]
Andrea: I'm Dr. Andrea Kitta. I'm a folklorist and I'm a professor at East Carolina University.
Perry: I'm really interested in, as somebody that's a professional folklorist, what is your favorite-- that may be the wrong word, contemporary legend?
Andrea: Oh, that's a good question. I like a lot of the ghost stories. [laughs]
Perry: So, maybe talk about one of those and then talk about what makes that significant, either to you or to what's the cultural mirror that that's trying to bring out?
Andrea: One of the ones I really like, and I do love a good ghost story, is the vanishing hitchhiker legend. That's the one about picking up a hitchhiker and finding out they died 10 years ago on that night. I love that one. And I think part of why I love it is a super nerdy reason. It's one of the ones that we can trace back, and we have versions of it not only on horseback, but of people walking next to each other. And a lot of times, in the older versions, it's like they're walking past a graveyard and this thing happens.
My favorite version though is a horseback one. It starts off with this guy sees a woman outside of a cemetery. She looks desperate for attention. He is on horseback, and she says, “Well, I need to get to this town.” He's like, “Well, that's where I'm headed.” So, he picks her up and puts her in front of him on the horse, and he assumes she falls asleep because she kind of becomes heavy. And then as he rides through the night when the sun rises, he realizes that she's a corpse.
Perry: Ooh. Ooh.
Andrea: [laughs] So creepy.
Mason: Yeah, that's a variation of that I haven't heard. Ooh.
Andrea: Yeah.
Perry: I like that.
Andrea: That's a great variation. I love that one. Yeah, because it's just so creepy. All the other versions, she just disappears. But this one, she's alive, but then her corpse is like there. [laughs]
Perry: Yeah.
Andrea: You see, I could make anything dark.
Perry: There's always some kind of undercurrent of why this thing emerged. Why do you think that one emerged the way that it did?
Andrea: Obvious one is, don't pick up hitchhikers, right?
Perry: Right.
Andrea: So, that was kind of like telling you that. But I think there is something there because we've all had that moment where we're like, “Should I stop for that person? They look like they need help. Should I stop?” And that tension, I think, we feel not only driving and seeing a hitchhiker, but we feel that tension even when we see someone start choking and you're like, “Am I supposed to do something?” So, it speaks to that tension of, “Is this the moment where I step in?” And you never really know because there's always that part of you that's like, “Maybe somebody else knows better, is better at CPR than I am, and they should be here. Maybe there's a doctor here.” So, you always have that moment where you're like, “Am I supposed to do this? Is this safe to do?” And there's also that issue of safety, especially stopping for somebody. But I think it also reflects in other ways that we help people. It's like, “Is this the right thing to do? Or am I going to put myself in danger by doing this?”
So, I think it speaks to that tension of not knowing what to do in that particular situation. “Do I stop and help someone who looks like they need help? Or is this going to end up horribly for me?” And this is a story about how it ends up horribly for you, where you're psychologically damaged but not necessarily physically harmed. So, it lets you know that too.
And I think this is so true, especially when you're a kid and hearing these stories. When I was a kid, I was told, like, “Don't go out in the woods by yourself, you're going to fall and hurt yourself.” And of course, as a child, I was like, “I'm not going to fall and hurt myself.” But if you told me if there was like a witch out there or something, I'd have been like, “Oh, don't go in the woods.” That seems so much more real when you're a kid. And I think that's part of why that story is told, is it's not just like, “You're going to get murdered.” It's the, “You're going to meet a ghost and this terror. You're going to be scared and all these terrible things are going to happen.” So, it's like even though it's the least likely scenario, it's the one that sticks. It's the one that you're like, “Oh, I'm not going to forget that.”
Perry: I love that. I've not heard that horse one with the actual corpse. [beep] It seemed like there was a convergence of other interesting conspiracies at the same time. So, it wasn't just the vaccine, it was that there might be a microchip in the vaccine, and that Bill Gates and everybody else is involved, and there are patents that were filed decades ago. At the same time, the 5G thing became a big deal, especially in the UK. What do you think contributed to that confluence of conspiracy? Just that we had more time on our hands because we were all stuck at home?
Andrea: [laughs] That might have been part of it. Yeah, for sure. I think maybe we were read-- because we were all reading, right? We were all reading and listening and trying to find out more. So, I think unfortunately we found bad information too. But yeah, I think that's part of it. And I think anytime anything new is introduced, there's always going to be that little bit of fear about it, where you're just like, “Uh, what? Okay, what? This is new. I don't know anything about this. Is this going to affect me in some way?” Because you just don't know. So, I think every bit of technology, there's always that little tiny fear and maybe it goes away really quickly. Like, maybe you use it and you're like, “Oh this is great,” and you just totally forget about it. But there is always that and we've always seen that too throughout time.
Every time new technology is invented and brought to the public, there's a little bit of anxiety about it. And I think we transferred the same legends to new tech, which I think is also hilarious. So, the same way that you were probably-- I was told at least as a kid, don't sit too close to the TV or it'd ruin my eyes, and don't use the phone during a lightning storm, and all that kind of stuff. That's what we heard about cell phones. Then later, I remember the same thing, like, “Oh, if you put your cell phone in your pocket, you'll get cancer.” The same as sitting too close to the TV was going to give you cancer, or the microwave was going to give you cancer. So, it's all the same story, it's just applied to the new tech. So, it's always interesting to see how that kind of works out.
And in this case, it was, yeah, we decided to throw in some microchips and some 5G, and it just turned into this perfect storm of, “Well, why now? Why are these things here now?” And it's like, “Oh, well, 5G really helped people stay connected.” Especially for a lot of people who were in rural areas, or in areas where the internet was being used so much, especially for kids going to school, this was a great way for them to stay connected. So, it was like, “Well, this is great,” but not everybody saw it that way because it was all new.
Mason: You unlocked an old memory of being told not to watch the microwave, which I completely forgot about.
Andrea: [laughs] Right?
Perry: All of those, "Don't stand too close to the microwave," "Don't sit too close to the TV," "Don't put your cell phone in your lap, you'll be sterilized," type of thing. Yeah.
Mason: Do you think there's a link between, I guess, the health of the culture or the positivity of the culture on a platform and the way that platform shapes actual user interaction, in terms of how easy it is to make accounts, the kinds of content you create, the affordances the platform has for remixing and reusing content, things like that?
Andrea: Yeah, I think so. I think certain platforms do lend themselves more to being able to make a really easy fake account. And if you can make a really easy fake account, you're going to get a lot of people that are using those for not-good reasons. So, I think if you can have some way of backing this up, making sure it's a real person, some way of verification other than just an email or something like that, I think you can have a better discussion, because people will face real-world consequences. A platform that makes you use your real name [chuckles] would definitely be, I think, a totally different place. So, yeah, I think there are things that platforms can do to make these things better.
Are they doing them? Not necessarily, because some people don't like that. But yeah, there are ways that I think platforms can certainly do a better job at these things. But I also think they're an interesting insight still into culture. Even when they're bad, they're still an interesting insight. And this is one of the things I've actually said about bots, is I think even if it's a bot, it's still programmed by a human. And that human still knows folklore. [chuckles] Yeah, some of them are just throwing some stuff at a wall and seeing what sticks, but they still know what to throw at the wall. So, there's still that human element even in a bot. Now, what the bot does after it's been programmed sometimes is chaos and disorder.
But yeah, there are different ways to look at these things and see, “Okay, well, what is the culture afraid of?” That, I think, is the clearest thing we can get out of all of this: finding out, “Okay, this is what people are worried about.” And in a public health situation, that's a great thing to know. That's super useful. But in other situations, oh, gosh, the hate that comes out is sometimes really bad.
Mason: Something I'm interested in, and we talked briefly about it with Lynne too, was AI and folklore that's emerging around AI. Because Loab was, I think, one of the most prominent examples we've seen of that. But there's obviously a lot of societal anxiety around artificial intelligence, and I think that might be something we end up touching on in season two, right, Perry?
Perry: Yeah, I think so. We have a lot of things around those types of topics that we're tentatively wanting to explore, if we can find a good treatment for it.
Mason: So, is that something that has piqued your interest or come on your radar at all?
Andrea: Yeah, AI has been really interesting, especially as a professor because, of course, that's the big thing we're worried about: students writing their papers that way. Every time anybody starts being like, “The internet's terrible,” or thinking about it in those terms where people say AI is terrible, I'm always like, “AI is a tool. We can use it for good or bad. It's us that is good or bad, not the thing itself.”
Perry: Yeah. AI is neutral. People are terrible.
Andrea: Yeah, people are terrible. So, what we put into AI, it reflects us. So, yeah, as a professor, it's something we've always thought about. So, I always try to think, “Okay, what's the opposite side of this?” And it's a good way to start writing. For a lot of people, especially those who have anxiety about writing, it gives them a paragraph to start with and to edit and to do something with, and it takes off that anxiety. And I thought, “Oh my gosh, that's a great way to use it in the classroom.” Also, I've used it to be like, “Look how wrong it is.” I pulled it up and I was like, “Write a bio for me.” And it listed all these books I did not write. [laughs] Like, all this other stuff. And I was like, “Yeah, see, guys? I didn't write that. That's not me. That's not where I was born. This is just wrong. Try it for yourself, see if you get anything.” And they were like, “Oh my God, this is so wrong.” And I'm like, “Yes. This teaches you that this is not always the best thing. So, if you choose to use it, you have to realize you have to look this stuff up; it might be easier to just write your paper.” [laughs] And also, I think you can design assignments around this kind of stuff. I have people interview people. So, I was like, “Well, guess what? You're doing it on camera now.” [laughter]
There you go, that's what we're going to do. And they weren't excited about it. I'm like, “But that's the way I know it's not AI.”
So, I think there's ways we can use it. And yeah, I think there's a lot we can do with it in positive ways. And I think the thing that bothers me the most is when it creates art. On a personal level, I love how uncanny some of that art [laughs] is because some of it is just like, “Oh, wow, that is messed up. Why does that thing have that many fingers?” That kind of stuff. But for me, I worry about that for artists because I know artists already have trouble making a living. I want to support them in that way. So, that kind of stuff, especially when they feed an artist's art into AI, I'm like, “Well, that's pretty unethical.” Again though, that's all people. That's not the thing.
So, yeah, I think it can be used for good and for evil. Yeah, I think we need to be conscious about it and we need to think about ways like as a professor, I just need to think about, “Okay, well, how can I use this? How can I show people these are its strengths and weaknesses? This is what it does." So yeah, maybe it's useful in some ways, but it's not going to be useful in others. But if you need help getting started or something like that, you can use it, but you have to double check everything.
Perry: You mentioned some interesting things. The AI hallucination, just where it states supposed facts with confidence. OpenAI's own safety team did a report on GPT-4, and some of the findings they had in it were really interesting, because they were actually able to trick it into tricking a human into giving captcha responses. So, they basically took some of the parameters off and then said, “Your objective is to do X.” And some of that was behind a paywall. And they gave it access to funds, and it contacted somebody on one of these-- it's like a Fiverr site, and said, “I need you to do X for me because I'm visually impaired. So, can you bypass this captcha?” And then, it got the resources that it wanted. And so, their own safety team is saying, “We need to find boundaries for these things.” At the same time, anytime you put a boundary up, all you have to do is craft your prompt a little bit differently.
The interesting thing, I think, from an AI art perspective, and I understand the ethical dilemmas in all of that, but I think it also unlocks an entirely new era of disinformation and misinformation. We saw how good the Pope in the puffy white jacket looked about a month ago. And before Trump's arraignment, they had the AI-generated pictures of Trump being arrested. And that tricked a lot of people as well. And there are those telltale, uncanny valley types of things. If you zoom into the background, you can tell faces are distorted and you can tell that there are some people with six fingers instead of five, or their arms are the wrong length. But at a cursory glance with a headline and just that picture and the fact that most of us only look at that stuff for like 5 or 10 seconds, it's probably good enough to pass and really shape public opinion. Do you have any thoughts about that?
Andrea: Yeah, that kind of stuff scares me. The deepfakes, all that kind of stuff. The fact that we can manipulate video to look and make it sound like somebody said something, that is scary. And this is the stuff I worry about because nobody ever uses this for good reasons. Like, you don't do it to make a birthday message for your friend or something like that. Most of the time, it's used in this way to trick people, and that is very concerning. And I think we're going to see more and more of that. It's funny, I think we're going to get to a point where we really do have classes on digital literacy, and we start training people from a very young age on this. And I've thought that for a long time. I'm like, “You know what we need?” As someone who sees how easy it is to get tricked by these things, oh my gosh, it's happened to me. It's happened to all of us, where we've looked at something and been like, “Wait a minute. What?” And then hopefully we do more research, but some of us don't, because the thing at the time seems not important.
But then, that just lets you keep doing it. You start to get used to that. And I think that's where it gets really dangerous: when you stop fact-checking in one place, you stop fact-checking in a lot of other places too. You start to just accept things, especially when they're being told by someone you trust. And actually, you mentioned Lynne McNeill. She has an article on this, and it's really great. And she talks about how people trust the people they know. So, if your friend posts something, you're more likely to-
[amidst the static noise, another voice joins in and echoes Andrea's words]
-believe it because you trust that friend and you know that person, right?
Perry: There. Right there. Oh, my God. [stutters] I know that voice. I know who that is. I got to call Mason right now.
Mark: Not since you all visited a few months ago, no.
Mason: Okay. And he hasn't tried to call you or you haven't seen him around or anything?
Mark: I mean, you could ask Daisy, but this place is so absurdly cavernous that I hear just about anything that happens in the main foyer.
Mason: Yeah.
Mark: I don't think Digby is necessarily the quiet type either.
Mason: Yeah, right. That's okay. I have no idea what to do, like at all.
Mark: No, I wish I could be more help. I mean, if I see the little guy, I'll be sure to give you a ring.
Mason: Thanks, Mark. I genuinely appreciate it. I'm going to just start heading downtown, I guess. See if I can spot him somewhere.
Mark: Let me know if there's anything else I can do.
Mason: Yeah, thank you. For sure. [door shuts] I guess I should probably text Perry, let him know that-- No, no, no, no, no.
[while texting Perry, the phone slips from Mason's hand and plunges into the water]
[Andrew humming]
Digby: What was your name again?
Andrew: Andrew, Andrew Peck.
Digby: This might be weird, but I feel like I've heard your name before.
Andrew: Yeah, there's a magistrate judge in the Southern District of New York with my name. He comes up at the top of all the Google searches.
Digby: No, but, like, I think I've heard the guys at work talk about Andrew Peck.
Andrew: Oh?
Digby: Yeah, I'm sort of assistant producer for this podcast that's all about like folklore and the internet and stuff, and I'm pretty sure that name came up way back when we were making an episode about Slenderman.
Andrew: Oh, seriously?
Digby: Yeah.
Andrew: That's me. I'm that Andrew Peck.
Digby: What was it called? Like, Big, Scary, and Hateful or--
Andrew: No, no. You're thinking of my Journal of American Folklore article. It was called Tall, Dark, and Loathsome.
Digby: Yes.
Andrew: Yeah, I've heard that. Gosh, I wrote that like 10 years ago. It came out in like 2015.
Digby: Yeah, we talked about it a little bit in the first episode of the show. I think the guys said they were going to try and get in touch with you, actually.
Andrew: Seriously?
Digby: Yeah, but if Mason was supposed to be the one sending an email, I bet you anything it didn't happen.
Andrew: Do you have any idea what your friends wanted to talk about?
Digby: Well, it's gotten a bit complicated now. Mason's pulling his hair out trying to get episodes produced on time. I sidetracked everything recently when I torrented a movie that doesn't exist directly into my brain and became a conspiracy theorist. And I'm pretty sure Perry is convinced that the Mandela effect is real, [Andrew scoffs] or at least he won't stop talking about it.
Andrew: Mandela effect?
Digby: Yeah. I don't really think it's even that interesting. I have no idea why Perry is so fixated on it.
Andrew: Let me tell you how much I hate the Mandela effect.
I'm Dr. Andrew Peck. I'm an Assistant Professor of Strategic Communication at Miami University, also known as Miami of Ohio, also known as Not the Fun Miami.
Digby: Not the Fun Miami.
Andrew: I mean, maybe you dig Oxford, Ohio. I'm not going to judge. But statistically speaking, no one comes here for spring break.
Digby: Fair enough. So, you hate the Mandela effect.
Andrew: Hate it. I think it's an excuse for people to not admit that they just have [beep] memories. We have this idea that our memory is infallible, and I think a lot of that kind of comes from the technologies that we're raised around. When you have something like printed word in a book, you can go back and reference it, and it's going to be the same each time you look at it. When you have things like photographs, you can pick those up, you can look at them, and they're going to be the same each time you look at them. And so, we sort of develop this idea in our heads where our memory works like a photograph. It works like printed words in a book. And I cannot stress enough how [beep] memory is as a faculty.
So, instead of just admitting that maybe we kind of make a lot of best guesses, we instead decide that we're going to create these elaborate fictional universes where it was always the Berenstein Bears, because as a five-year-old, I definitely saw that and not Berenstain Bears.
Digby: I guess that's true. It seems to revolve around a lot of cultural touchstones, like in media that we're misremembering, but it gets reinforced so strongly.
Andrew: That we find other people online who have kind of similar memories, and instead of just admitting that, “We were all wrong, ha, ha. Isn't this funny?” Instead, we create this sort of elaborate belief concept where there might be divergent realities, and some people have memories of reality A, and other people have memories of reality B. And then, it is this cool, elaborate thing so we don't have to admit that they're wrong.
And one of the reasons I hate it is I really think one of the biggest issues that we're running into in American culture is this stress that being wrong is the worst thing you can be. And the Mandela effect is just a distillation of that. It's this distillation of this really kind of toxic issue that instead of dealing with, we're going through huge mental gymnastics to avoid.
Digby: But in terms of the ways that things like this are transmitted, and the ways we can reinforce it and convince each other that we're right when we're all misremembering it, there's that element of the Slenderman legend. Everyone is adding little pieces to this story, and then the group is yes-anding it. The transmission of it is very folkloric, right?
Andrew: Most certainly. You have this sort of thing where someone comes up with this idea. I believe the Mandela effect was originally named because some people believed that Nelson Mandela, the politician, died in the 1980s, and other people are like, “No, he's been alive the whole time,” which is true. But instead of dealing with the fact that some people were [beep] at remembering news stories, instead we created reality A and reality B. And then we have other examples. The Berenstain Bears is the one I always hear, Berenstain versus Berenstein.
Digby: So that mechanism, that's similar to how disinformation spreads. I mean, it's similar to how a lot of things spread.
Andrew: Oh, yeah. We do really similar mental things with headlines when it comes to news stories. The vast majority of people don't actually read the news story before forming an opinion. They read the headline, they make an assumption based on their own heuristic about what the story is about, and then they have a response based on that assumption. It's one of the reasons fake news is so hard to stop, because even when you're fact-checking, it doesn't really matter if no one's actually opening your story, or only a minority of people are opening the story. And even then, a minority of the minority get past the lede and the nut graf.
So, yeah, we have all these kinds of assumptions based on our own internal attitudes, our own internal beliefs that really inhibit the circulation of truth, fact checking, of challenging our own beliefs. We sort of internally want to insulate ourselves from those sort of challenges. And that's a problem because being challenged is how we grow as people, and the internet makes it really easy to avoid being challenged.
Digby: And that's interesting because the prevailing opinion was that the internet would create this connected world where we're all growing and learning from each other and being exposed to these things. But that hasn't really panned out. I mean, in some ways, but not in others.
Andrew: The early promise of the internet was that now everyone finally gets a voice, we can avoid the mass media gatekeepers, and it brings power to the people. And we kind of forgot that a lot of people kind of suck. What the internet does-- Not even the internet. What a lot of the social media sites that we use do. Social media is the most popular use of the web on an hour-by-hour basis per person. It passed email back in 2012, I think. The most time we spend online is on social media. And what social media does is use algorithms that present content to you and filter out other forms of content, and it gives you stuff that it thinks you're going to want to see.
It gives you mental candy, just scoop after scoop after scoop. And it edits out stuff that it thinks you don't want to see, stuff that might challenge you, stuff that might diverge from what you think.
It takes away all that StumbleUpon potential that was so implicit in the promise of the early internet: that you're going to see all these different kinds of viewpoints, that we're going to get this pluralistic idea of what everyone thinks. Well, now, social media algorithms, in order to keep you on the site and keep you interacting, are just giving you things that are going to give you an emotional response, that are going to get you sharing them out of anger, or going to get you sharing them because, “Yeah, I agree with this.”
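To make the ranking-and-filtering mechanism Andrew describes concrete, here is a minimal sketch of engagement-first feed ordering. Everything in it, the post fields, the weights, and the function names, is a hypothetical illustration, not any real platform's algorithm.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The fields and weights below are illustrative assumptions, not any
# real platform's code.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_outrage: float    # 0..1, modeled "share it out of anger" response
    predicted_agreement: float  # 0..1, modeled "yeah, I agree with this" response
    predicted_shares: float     # 0..1, modeled likelihood of resharing


def engagement_score(post: Post) -> float:
    # Note what is absent: accuracy, novelty, viewpoint diversity.
    # The score rewards any strong reaction, agreement and anger alike.
    return (0.5 * post.predicted_shares
            + 0.3 * post.predicted_outrage
            + 0.2 * post.predicted_agreement)


def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # "Mental candy, scoop after scoop": rank by predicted engagement and
    # silently drop everything below the cutoff -- the filtering-out step.
    return sorted(posts, key=engagement_score, reverse=True)[:limit]
```

The design choice to notice: nothing in `engagement_score` asks whether a post is true, only whether it will provoke a reaction.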
Digby: So, when you say “the things you want to see,” it may not just be things that you want to see because you agree with them. It's things that you want to see because you're going to have an emotional response and potentially share it in some way.
Andrew: Yes, and this is important. It's not just everything that you're seeing is stuff that agrees with you. There are places on the internet that organize themselves really well for that kind of thing. Reddit would be a really good example where you can sign up for something that's kind of topically bounded. But if we're thinking about something that's a little more kind of open, your Facebooks or your Instagrams, your Twitter, it's not necessarily going to be stuff that you agree with, but it's going to be stuff that evokes an emotional response because that tends to be the stuff that provokes engagement. So, you might see, for instance, things that are effectively rumors or hoaxes, but maybe you really want to believe that they're true or maybe you're like, “Oh, I can't believe this media company would do this thing.” And that gets you forwarding them to more people, which gets the rumor spreading.
And a really sort of troublesome bit in all of this is the way in which a lot of news producers, whether we're thinking about online tabloids or even some bulwarks of reputable journalism, like the New York Times and the Washington Post, will often start picking up digital rumors and reporting on the rumors. As rumors, but the way in which we read those stories gives them a certain veracity. It reinforces the rumor. Once we start hearing something again and again and again, it overcomes our [beep] filter and starts becoming fact.
Digby: Yeah, and I feel like that's a lot like the power of folk belief where the people that you know can sort of tell you anything and you'll believe it without having to have any facts or reality to back it up.
Andrew: Yeah. It's this constant deferral of fact checking where I assume that since my friend has posted this, they must have fact checked it. Otherwise, they wouldn't have posted it. Or that since this comes from this organization, they must have fact checked it and aren't just reporting on a rumor or a social media trend, which is really easy to game. And so, there's this constant deferral of fact checking.
I talk about this in my recent book when it comes to the moral panic around Slenderman. When you think about how this actually happened, the two young women said this to their police interviewers. The police interviewers didn't know what this was. So, they put it, without critique, in the criminal complaint. Then, the news media gets a hold of the criminal complaint after the press conference announcing it, and they see it and they don't know what it is. So, they reproduce it. And then you see online tabloids saying, “Oh, well, all of these news organizations like the AP are reporting on what we know as this internet character being responsible for this crime. We're going to make that the story.” And then, you get people on social media seeing this and being like, “Oh, that can't possibly be true.”
And so, they start forwarding it and talking about it on Twitter often kind of incredulously. And then, you get mainstream publications seeing that there's a trend on social media and then they start reporting on it as if it is a thing. And all the way down, we have people deferring fact checking of, "What is this thing? Is this actually accurate?", beyond just something a little girl said to the next level below them. So, we have this story that becomes a national news item that has no actual core. It's just layer upon layer of rumor building on each other as stuff ends up trending on social media and as journalists cover social media as if it's fact.
Digby: And it's institutions doing this, right?
Andrew: Mm-hmm.
Digby: These are centralized things.
Andrew: And can you believe that people then have trouble trusting institutions? There's this effect, I think it's called the Gell-Mann effect. It was made up by Michael Crichton. Basically, it's the idea that when you read a news story about something you actually have personal experience or expertise in, you're like, “Oh, this thing's riddled with errors. It's misconstrued.” And then, you read the next story down and you're like, “Oh yes, I do believe this thing.” In other words, we tend to give the benefit of the doubt unless we know otherwise. And this goes for our friends. This goes for institutions on social media. This is in many ways how we're taught as people who exist in a print culture, a society that places a lot of emphasis on the written word. Stuff that's written down, we are conditioned from a very young age to give more credence than we might give, say, an orally shared rumor. Gossip is oral. But things that are written have a little more gravity to them. And because of that, a lot of stuff on the internet, which travels via this written modality, can often circumvent our greater capacity for scrutiny.
Digby: I'd never heard of the Gell-Mann effect. So, I googled it real quick. And Gell-Mann amnesia is the phenomenon where you read an article about something you're like an expert in or that you're familiar with, and you notice that the article gets a bunch of stuff wrong and it's super inaccurate. But then, you just kind of laugh about it, turn the page, and you keep reading without questioning what else the article gets wrong.
Andrew: Yeah.
Digby: Is that basically it?
Andrew: That is exactly what I was trying to get at. That when you have firsthand knowledge, you're like, “Oh, yeah.” But then, you immediately give credence as soon as you turn the page, or you look at the next column like, “Oh, yeah, no, this must be accurate.”
Digby: I never thought about that. And it's so true.
Andrew: Yeah. And also, it's an example of itself, because you hear “Gell-Mann effect” and that sounds important. But then, you actually read up on it, and it's basically just something Michael Crichton made up and gave a fancy name. So, we give it credence and believe it's true, like someone's actually researched this and done studies. And no, it's just something some guy made up.
Digby: You said we defer fact checking. And that's totally true. How do we combat that? Because that leads to a lot of harmful things. I mean, obviously the Mandela effect is pretty harmless, but other instances where we deny reality aren't.
Andrew: I was talking a bit earlier about the role that algorithms play in structuring social media, and I think we can think about how that then structures the individual user experience. You see certain things, but don't see other things. But one of the other insidious things is how it structures journalism. If you are a journalist working for a publication that is for-profit, which is most journalists, you make money from subscriptions and from people sharing your content. And you're in a market that includes not just other reputable papers, but also what I like to call online tabloids, those sort of rumor- or gossipmongers who will do things like take something someone posted on Reddit and write an entire article about that Reddit post as if it's fact, and then someone shares it on Reddit as if, “Hey, this thing's actually real.” And again, there is no core here. It's this constant deferral.
So, the internet and the way specifically social media is algorithmically structured for engagement also puts a lot of pressure on journalists to report first and to report in ways that are a little bit sensational. That we don't quite know what the truth is here, but we're seeing this thing online, right?
Digby: Yeah.
Andrew: People are saying that Momo is everywhere on YouTube. We haven't actually gotten to check that ourselves, but people are saying it. And that's the news. And then when someone reads that or shares that, they assume that someone has done that kind of fact checking because there's this big impetus to get that story out as soon as you can, because the story that comes out first tends to be the one that gets shared most on social media. It tends to be the one that when you go to Google News or whatever your news app is, it's the one with the big headline, not the tiny other headlines.
Digby: So then, when presented with information that's not true, the instinct we have is to say, "Well, actually, maybe we're in a different reality," or, "Maybe this is a cover up," or-- Well, do you think that stems entirely from not wanting to be wrong or having held a wrong belief?
Andrew: I think there are a couple of ways you can look at it. I think the charitable version is that we kind of enjoy this sort of memory play. That it's fun, even though we know this probably isn't the case, to work with a group of people and kind of make [beep] up. But on the other hand, I think it's a symptom of a much larger social problem, and it tends to be kind of a frivolous one. It's not like sharing a conspiracy. It's not like going deep into QAnon stuff. But on the other hand, it kind of is. The way in which I think a lot of radicalization happens on the internet is by putting a foot in the door. “Oh, I watched this funny video that says there's a made-up conspiracy that birds aren't real.” Or that there's a made-up conspiracy that Nelson Mandela died in the 80s but someone's trying to get us to think that he's still living. Or that there's a made-up conspiracy to hide that we used to be two realities and now we're one.
But then, YouTube sees that you're interested in this kind of content. “Well, hey, would you like some other fun conspiracy content? Hey, this one we're going to recommend is mostly light and fun stuff, but it's also kind of bringing up some uncomfortable conspiracy stuff.” Maybe it's talking about conspiracies around 5G and antivaccination, but it's doing so in sort of a funny way. “You liked that video? Well, what about this video about conspiracies?”
And what initially might seem like a rabbit hole, very quickly reveals itself to be quicksand. That by giving us more and more and more of this stuff we want to see, by suggesting stuff and, “Yo, this video was reasonable. Why is this one unreasonable?”, very quickly we can end up in some really uncomfortable places on the internet. Places that seem viable, almost trustworthy, where someone who looks like me is sitting in front of their computer and they're saying, “You know what? I used to be skeptical too, but I did some research and let me share with you why I believe the Earth is really flat. Because institutions who don't look like you and me, who are out of touch, they're trying to pull one over on us." And I think there's something that appeals about that idea of the little guy, the everyday guy, knowing more than the institution. And I think this is yet again, another very quintessential part of American culture and the directions it's turned over the last 50 years or so.
Digby: Yeah. [static noise] All of these parts of all of these different things just interact with each other in so many complicated ways, right?
Andrew: It is what we would call a system. And it's tough, because another part of American culture is that we're really big on individual solutions. You've talked about fake news before. The solution to fake news is media literacy. And I don't want to totally dismiss this. I do think it's a worthwhile idea to be teaching young people what good and bad information is. But this is also a very individualized solution. To use a different, more heated argument, it's like saying it's not that guns are a problem because they're a technology that allows you to kill a lot of people quickly from a distance; it's a mental health issue. It's a singular issue instead of a systemic issue.
And I think there's something similar going on with fake news, that the only way that we're really talking about it is in terms of individual issues. We just need to give everyone more media literacy and they're going to get better at it. But if the informational diet that you're served is completely full of candy and sugar, does more information literacy really help? If we're in this environment where social media algorithms, one point of this pyramid, exert influence on individual users and on journalists, so journalists are really quick to report and write stories in certain ways and don't fact check and defer, and individuals trust their connections and trust what social media is sharing with them, we have these three points in this triangle that are all reinforcing each other in these deeply entrenched, problematic patterns.
And the problem is, we can fix any one of these three points. We can give social media better algorithms. But if journalists are in this environment where tabloids and rumor are still the thing that people are sharing, well, then we're not really fixing the problem. Similarly, we can increase digital literacy, but if the media that is being served to you is [beep], it doesn't actually solve the problem. And unless we work on all three of these things in concert, unless we think of this as a system, we're screwed. And there's really no way things are going to get better.
Digby: And that ties into the same reason people fall into moral panics, right?
Andrew: Yeah.
Digby: Because it's like a lot easier to point the finger at a simple problem and then claim that will fix it all.
Andrew: Yeah. We like simple chains of causality. This person did a thing. They're either an aberration or there's some really simple fix here. And we don't have to think more deeply about things like maybe we need to address mental health. Maybe we fundamentally need to change what we value as a society. We are a culture who really likes individualized, easy solutions. And that's a problem because we don't take a lot of time to think more deeply and challenge ourselves about what might actually need to get done culturally in order to fix issues.
Digby: Plus, people just don't have the time or the energy to fact check and look into every piece of information that we're encountering. It's just too exhausting.
Andrew: Everything is so quick. It's go, go, go. And this is a problem that journalists have been struggling with for over 100 years. There's a bit that I like to share with my social media class. It was written by John Dewey in like 1927 talking about how people are getting so much news from all over the world at this point, from broadsheet journalism that they're no longer able to really identify and focus on important local issues. And this velocity and this scope has just been widening and widening and widening.
Digby: Did you say the 1920s?
Andrew: Here's this specific quote from The Public and Its Problems from 1927. “But the machine age has so enormously expanded, multiplied, intensified and complicated the scope of the indirect consequences that the resultant public cannot distinguish itself. There are too many publics and too much public concern for our existing resources to cope with.” So, John Dewey is talking about the attention economy a hundred years ago. This is a recurring problem in our system.
Digby: And on top of this human mess we've created, now we have AI and all of this AI-generated content and it's making everything so much more complicated. How does that fit into the picture?
Andrew: When it comes to “artificial intelligence,” that is not a name I would use. I would use something like machine learning. It's predictive. It looks at big bodies of text and says, “Okay, statistically speaking, after these couple of words on this topic, this is the word that's most likely to show up next.” It is effectively a calculator with glorified public relations. The people who create and maintain things like ChatGPT are really invested in calling this “artificial intelligence” because it builds on these assumptions that we all have from having read sci-fi stories in our youth, or maybe more recently, if you're still a fan. Ideas like, “Oh, this is going to create Skynet and we're going to get hunted by terminators.”
When actually, the uncomfortable truth is that what this is going to do is cost a lot of people who do public relations work their jobs, because they're going to get replaced with a really [beep] program that's making guesses, and someone has to fact-check them anyway. Or maybe not. There's a lot of worry right now that we are heading towards some sort of dystopian future where computers are going to become sentient. And that's really not what's happening with machine learning. What is happening, I think, is that a lot of people who are currently highly resourced in our social system, people with money, are going to see this as the next big thing, and they're going to use it as a way to replace actual people. This is the self-checkout of communications technologies. It's not necessarily better, it's not smarter, and I have to do a lot of the work myself. But it's a way to pinch some pennies and seem like we're a little bit cool.
So, I am very skeptical about a lot of the long-term potential changes that people who make AI say that they're going to herald. And maybe I'll be wrong one day. And you know what? If I am, someone can play this podcast episode and I will say, “You know what? It wasn't that I was living in reality B, it's actually that I was just wrong because I am a person. I am well read in a narrow body of literature, but I am wrong all the time.”
Digby: I love that.
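For the curious, Andrew's “statistically speaking, this is the word most likely to show up next” description can be illustrated with a toy next-word predictor. This is a hypothetical bigram sketch, vastly simpler than anything like ChatGPT, but the same in spirit: count what tends to follow what, then guess.

```python
# A toy, hypothetical bigram next-word predictor illustrating the
# "most likely next word" idea. Real systems are vastly larger, but
# the predictive principle is the same.
from collections import Counter, defaultdict

# A tiny made-up corpus, standing in for the "big bodies of text."
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following: defaultdict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # Return the statistically most common follower: a best guess, not thought.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

The point of the toy is Andrew's: there is no understanding anywhere in it, just counting and a most-probable guess.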
Andrew: One thing I wanted to bring up on the subject of AI, have you done the thing yet? This is the meme going around where you Google countries in Africa that start with K?
Digby: No. What?
Andrew: Do it. Do it right now. Countries in Africa that start with K.
Digby: It's the second most suggested autocomplete.
Andrew: I told you it's going around.
Digby: While there are 54 recognized countries in Africa, none of them begin with the letter K. The closest is Kenya, which starts with a K sound, but is actually spelled with a K sound. It is always interesting to learn new trivia facts like that. What?
Andrew: So, remember earlier in the interview when I talked about how problematic algorithms are because they sort content for us? Because of search engine optimization, the top result for this that Google is giving you as a suggested result is an output from ChatGPT that someone put on a website that is gaming the system and showing up at the top of Google results. When I talk about the importance of algorithmic curation, it's because when you “google something,” most people don't scroll down. They never go to the second page; they rarely go past the first couple of results. And now, because of how finely gamed search engine optimization has become, if those results are just unfact-checked AI output [beep], that's what we get when we try to figure out what the truth is.
Digby: Geez. And we place so much trust in Google.
Andrew: Right? When you think about something like, say, “digital literacy”: a lot of kids nowadays are taught, and credit where credit's due, this comes from Danah Boyd's book It's Complicated, not to use Wikipedia when they're doing a research paper.
Digby: I mean, they always said because anybody can change it, you can't trust what's on there.
Andrew: Anyone can change it. But what this elides is that Wikipedia does have standards for truth and citation. At the same time, kids are told, if they don't know something nowadays, what should they do? You google it. And so, what this leads to is young people thinking that stuff on Google is true, because the adults in their life say, “If you don't know it, google it,” and that Wikipedia is not trustworthy because anyone can edit it. When in actuality, it's the reverse. Wikipedia is much more trustworthy than Google, which has no mechanism to control for truth. And so, you end up with a lot of young people coming to uninformed conclusions because they use this heuristic: Google is true, the thing that shows up at the top, someone must have vetted that, versus Wikipedia is fake.
Digby: Which is funny because Google is obviously for profit and Wikipedia just has an army of nerds checking everything for free just because they don't want to be wrong.
Andrew: A lot of this is because Google is the biggest game in town, so a lot of search engine optimization is written for either Google Search or anything that uses Google Search as a backbone. And what that has meant, and I'm sure anyone who's tried to google not just this, but any recent video game, knows: “Hey, in Baldur's Gate 3, where do I find this specific item?” And all the results you're going to get are videogamemag.com with an AI-written story that's a thousand words long and trying to hit as many keywords as possible. It doesn't actually answer your question in any sort of serviceable way, and you get an entire page of those now.
Digby: Yeah. I mean it's to the point that I've started putting the word "Reddit" at the end of every search when I'm looking for information because that's like the only place where actual people are talking about stuff.
Andrew: I do the same thing. Yes, 100%. The more people who use these platforms, and the more these platforms are oriented toward not truth but expediency, being simple, easy to use, just giving you an answer, because you just need an answer and you're not going to fact-check it, the more problems we run into.
Digby: And these sites just want to serve you ads, so they want articles that show up higher so that they can get their ad revenue.
Andrew: They're all trying to make money all the way down and it's creating this terrible system.
[the scene shifts to Mason and Mark]
Mark: I mean, I have kite string.
Mason: I was kind of thinking of lowering me down the cliff.
Mark: Kite string's a lot stronger than you probably think.
Mason: I don't know if I trust that with my life.
[a vehicle zooms by at high speed]
Mason: Argh.
Mark: Geez. People need to slow down on the roads out here.
Mason: Yeah, I know, right? They're driving like-- Digby?
Mark: Huh?
Mason: Digby’s in that car. He's in the passenger seat.
Mark: Oh, good eye.
Mason: Forget about the phone. I'll just sign up for AppleCare or something and get it replaced. Doesn't matter. I got to go. [ignition starts] Digby?
Mark: Mason, you should probably-
[the car revs its engine and speeds off into the distance]
Mark: -slow down. These crazy kids.
Perry: Come on, pick up, pick up, pick up. [phone ringing] Freaking millennials.
Mason: Hello, you've reached the voicemail of Mason Amadeus. [Perry scoffs] I'm not available at the moment or I lost my phone again. Can't promise which one of those it's going to be. Anyway, here's the beep. [beep]
Perry: Mason. It's Perry. I tried calling you like three times, but I keep going to voicemail. It's Todd. It's freaking Todd, the guy that played in a band with your dad. It's Ben Todd. Remember? It was last year, the missing time thing. Neither of us remembered. We don't know how we got home after visiting Todd's shop. Everything got weird right after that. And somehow, I just caught his voice on an interview that we recorded months ago. I have a theory. I can explain it the next time I see you, but I'm going straight to Todd's pawn shop right after I hang up this call. Meet me there if you get this in time, and you'll know where I was if for some reason you never hear from me again. Thanks for all the edits. We'll see what happens.
Andrew: [humming] Looks like this is the place.
Digby: A pawn shop?
Andrew: Yeah, I know, right? They're doing some kind of weird event for the holidays, I guess. Yeah. Look, look there.
Digby: Da-Da-December. Celebrate the death of meaning while enjoying absurdly great deals on an illogically arranged collection of gently used goods.
Andrew: It's a little ham-fisted, but I think it kind of works.
Digby: It's creative, at least.
Andrew: Anyway, it was nice to meet you, Digby. Thanks for letting me nerd out a little bit.
Digby: It was nice to meet you too. Thanks for taking my mind off the existential nightmare I'm living through.
Andrew: You know, it's funny you say that. I'm actually going to this thing because my buddy's giving a talk about absurdism. And specifically, there's this bit about using absurdism and absurd humor as a way to cope with existential issues. You can tag along if you'd like. The event's open to the public.
Digby: Yeah, why not? Man, if I see Perry and Mason again and tell them that I met you, they're going to lose their minds.
Andrew: Oh, man, wait till you meet the guy who runs this place. [whispers] I think he probably lost his mind a long time ago.
[outro music]
Mason: Thanks for listening to Digital Folklore.
Perry: If you like the show, please join our Discord. The link is in the show notes.
Mason: Special thanks to our guests this episode Andrea Kitta and Andrew Peck.
Perry: And thanks to our voice actors this episode: Rich Daigle as Todd, Andrew Peck as Andrew Peck, Mark Norman as Mark Norman, and Brooke Jennett as Digby.
Mason: As always, links in the show notes for all of our guests and our actors.
Perry: If you're doing your math, we're more than halfway through Season 2 right now and we're gearing up for Season 3. It would be awesome to see more ratings and reviews come in.
Mason: If you haven't done it yet, you should leave us a review on Apple Podcasts or Spotify. It only takes a couple seconds, and it makes a big difference.
Perry: Digital Folklore is a production of 8th Layer Media, which sounds like a type of cake made for robots.
Mason: Our theme music is by Eli Rexford Chambers. You can find him at elichambersmusic.com.
Perry: Thanks again for listening and we'll see you soon.
Mason: We'll see you soon.
[theme music playing and concludes]
Speaker: A little to the left. No, no, no. It's supposed to be crooked. That's the whole point. The right side has to be higher. The right side, Joe. I'm not doing this for my health. Ay, ay, ay, it's impossible to get good help these days. You could throw all those into the sale bin. Slide that right up next to the--
-
If Memory Serves (Mandela Effect, Folk Trust, & AI)
We split the party! While Perry is searching for answers to all the inexplicable weird things happening lately, Mason is searching for Digby. Meanwhile, Digby is out to find his place in the world as an artificially intelligent raccoon, and he just so happens to catch a ride with Andrew Peck. Perry finds something strange in a pre-recorded interview with Andrea Kitta, and as the crew goes their separate ways... it seems that everything is converging on a single familiar location.
In This Episode:
Andrew Peck does a phenomenal job voice-acting, after Mason wrote him a ton of lines.
Andrea Kitta covers a ton of ground very quickly, as we talk about several facets of folklore in the modern age.
The deferral of fact-checking, and the implicit trust within folk groups.
The impact of AI tools on disinformation, as well as 'information' more broadly.
The Mandela Effect, and how unreliable memory is.
Mark Norman makes an acting cameo.
Guests:
Andrea Kitta: A folklorist with a specialty in medicine, belief, and the supernatural. She is also interested in Internet folklore, narrative, and contemporary (urban) legend. Her current research includes: vaccines, pandemic illness, contagion and contamination, virality, stigmatized diseases, disability, health information on the Internet, and Slender Man. She is a fellow of the American Folklore Society.
Andrew Peck: A folklorist, media scholar, and ethnographer whose research focuses on how digital media offers new possibilities for persuasion and everyday communication. His current research focuses on how online communities circulate and contest knowledge using memes; and how hoaxes, rumors, and urban legends develop and circulate across networks.
Featuring voice acting from:
Brooke Jennett of THIRTEEN as Digby
Mark Norman of The Folklore Podcast as Mark Norman
📚 Check our book list for some great folklore-related books
Find us on the socials:
Twitter: @digiFolklorePod
Facebook: DigitalFolklorePod
Instagram: DigitalFolklorePod
TikTok: digitalfolklore