The Ethics of AI with C. R. Wiley
CR Wiley talks about how pastors, academics, and content creators should think about AI, especially regarding the limits to which the tool should be taken. Is there a point at which using AI is stealing? Will reliance on AI increase dependency? This and more!
Order Against the Waves: Againstthewavesbook.com
Check out Jon's Music: jonharristunes.com
To Support the Podcast:
https://www.worldviewconversation.com/support/
Patreon:
https://www.patreon.com/jonharrispodcast
Substack: https://substack.com/@jonharris?
X: https://twitter.com/jonharris1989
Facebook:
https://www.facebook.com/jonharris1989/
TikTok: https://www.tiktok.com/@jonharris1989
Instagram: https://www.instagram.com/jonharrispodcast/
Transcript
For the Conversations That Matter podcast, I'm your host, Jon Harris, where we are forging a bold Christian approach to the issues that are in front of us as Americans.
I am pleased today to have a guest that I've actually had on the podcast before. We have C. R. Wiley. He has authored the book In the House of Tom Bombadil. He has another book on AI. Hopefully that will come out soon; it's not for sale yet. And he is a Presbyterian pastor. He's done a lot of thinking about the topic of artificial intelligence.
And so I'm hoping we can not just discuss, broadly speaking, what artificial intelligence is and what it means, but also the ethics of it.
Should you be using ChatGPT and Grok and whatever other tools there are to make your own artistic images or music or fill in the blanks?
So we're gonna talk about that today. What are the limits? And I'm really pleased to have C. R. with us.
Thank you so much for being here. Yeah, Jon, great to be back. Thanks for having me. So you've made some people upset because they like using
AI. And they like to, in fact, I joked, I don't know if you saw my comment, I made a little angry face
AI image of you and put it on. Yeah, that was pretty fun. Yeah, I saw that. Yeah, so I think.
It actually looks more like me than some of the other stuff I've seen posted that's supposed to be me. I'm like, whatever
AI you're using is the one that people ought to opt for. Because the other stuff doesn't even look like me.
It's just kind of goofy. I look at it and say, that's supposed to be me. Whatever. Well, I think this is where I come down on it.
Broadly speaking, maybe we could get the conversation started this way. I look at AI more as a threat to our ability to think, to our intelligence.
We're gonna outsource everything to the cosmic calculator that will tell us truth.
I see that as the threat more so than the machines are gonna become sentient and take us over and it's gonna be
Terminator. Do you see it that way? How do you see it? Well, I'm definitely with you on cognitive offloading.
I mean, we have the data. It's not even like a question. So there've been studies at MIT that demonstrate that that actually is the case.
The more you use it, the dumber you get. So the people who are the real champions, sort of the rah-rah crowd, they've never impressed me as being broadly read; they're just into the latest cool gadget.
But when it comes to the question of sentience, I don't think we have a good understanding, broadly speaking, about the nature of artificial intelligence and the fact that sentience, if we mean consciousness, is necessary for agency.
That's the really challenging thing conceptually for people to get ahold of is that with artificial intelligence, you can have agency without consciousness.
And that's frankly what makes it so scary. And so anyway, what does that mean?
Well, agency means ability to act independently. So you can give it some instructions, but in terms of how it gets from point
A to point B, it's doing its own thing. And sometimes the way it goes about doing that is pretty unnerving.
And it's also got the ability to on the fly recode itself.
So we're kind of at a point in the development of the technology where the self-recurrence is taking off.
And so there's a self-improving kind of dynamic that's going on with artificial intelligence, particularly with things like Claude.
You see it with Grok and other AIs. And by the way, the stuff that people are messing with for free is kind of like last year's model.
It's not the best stuff, and you don't get the kind of crazy abilities that you have now with the cutting-edge stuff.
And I think we're just gonna see the shoes continue to drop, but every three months, there's gonna be a significant breakthrough with AI.
It's already been the case. It's exponentially improving. And we're gonna see that go on indefinitely,
I think. Well, there's been a number of posts and articles from people who claim to be, and some of them
I'm sure are, in that particular industry freaking out and saying, either for bad or for good, you need to invest everything you have in this and learn these tools.
This is the future. It'll be great. Or I can't believe what's happening. The machines are operating outside of the limitations that we've put on them, which
I've always been skeptical of because I've always just thought, well, there was some direction given to it that made it, inspired it to want to recode or that kind of thing.
But you know more than me, what do you make of some of these for good or for bad predictions that people who claim to be in the industry are saying?
Well, I mean, if you listen to their talks and they're easy to find, you don't have to go very far.
So essentially, the thing I have been reacting to are the statements by the people who invented this stuff.
So I follow them. I listen to their talks. People like Geoffrey Hinton or Demis Hassabis, or there's a whole range of people who are the inventors of the technology, and they're more or less giving us a heads up on where things are going.
Now, I mentioned a conceptual block, and that is this reality that you can have agency without consciousness.
And then again, that just doesn't compute for people because it's outside their experience. We are creating a very alien intelligence.
It's not like us. And the other thing is that the way that the software works is unlike anything we've ever had.
Neural networks are not the product of a bunch of guys just kind of tapping out code. They're self-learning systems.
So you give a task to an AI, and this is why the guardrail and alignment arguments, statements like, well, we can catechize
AI, just make my eyes roll when I hear people say stuff like that, because the nature of the technology is such that it's like a black box.
You literally don't know what's going on inside it. The people who invented it don't know what's going on inside it. They'll say it.
That's a term they use, the black box. And that's because of this self-developing character of the technology, of the software.
So neural networks are intended to simulate the neural connections that we have in our brains.
That's the nature of the technology. We have billions of connections in our brains. So that's kind of the physical character of our brains that makes it possible for us to do all the things that we do.
So they said, well, we're not gonna try to like code for every eventuality and try to make some kind of exhaustive sort of like project in which we envision every possibility.
No, what we're gonna do is we're gonna create a software that develops its connections as it's interacting with a problem or with the world or whatever.
And so you create an AI and right now we have what was referred to as narrow AI, which is an application of the technology that focuses on a particular task.
And early on, it's just like a toddler. I mean, it's just kind of clumsy, but as it continues to improve through its training processes, it becomes better at that particular task than any human being alive.
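[Editor's note: the self-learning dynamic described above can be illustrated with a toy sketch of my own, not anything from the conversation. A single artificial neuron learns the logical AND function from examples: nobody codes the rule in; the connection weights adjust themselves from feedback, which is the point being made about neural networks.]

```python
import random

# Training examples for the AND function: ((input1, input2), target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)                              # fixed seed so the run is repeatable
w = [random.uniform(-1, 1) for _ in range(2)]  # random starting "connection strengths"
bias = random.uniform(-1, 1)

def predict(x):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(100):                        # training passes over the examples
    for x, target in data:
        error = target - predict(x)         # feedback: how wrong was the guess?
        w[0] += 0.1 * error * x[0]          # nudge each weight toward the answer
        w[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in data])        # learned behaviour: [0, 0, 0, 1]
```

A production model works the same way in miniature, but with billions of such weights trained on enormous datasets, which is why even its builders cannot read the rule back out of it.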
And so that's the nature of the technology. And so what kind of the holy grail is to go from narrow AI to artificial general intelligence,
AGI, and then what's believed to be the case is if we ever get to AGI, and some people say we're already there, but if we ever get to AGI, then we get to super artificial intelligence,
SAI, and that's where you get Skynet. And it's not like you're even talking about something that has the kind of conscious awareness that we have.
So there's this remarkable book by a fellow named Nick Bostrom, it was published a few years back, entitled
Superintelligence. And in it, he has a thought experiment that he calls paperclip AI. So he imagines, says, imagine we create a super intelligence that's given the task of creating as many paperclips as possible, as inexpensively as possible.
And we can't turn the thing off. We literally can't turn it off because it's too smart.
It's too clever for us to figure out how to turn off. And so it just keeps making these paperclips until all life on earth is destroyed and the world is buried in like a mile of paperclips, and then it creates rocket ships to send paperclip supremacy out into the galaxy.
That's the thought experiment. So what he's trying to say is it can be so smart, we could create something so smart that we can't turn it off, but it's so stupid that it's absurd.
It's an absurdity. So that's what you get with - Okay, so this isn't helping anyone feel more comfortable.
You think there is Skynet possibly coming. Some of this gets into philosophical territory though, right?
We haven't actually been to this point yet, but theoretically, is it theory or is this tested?
Do we have examples of this happening? Well, yes, we do have examples of artificial intelligence resisting being shut off.
So we have that. We have lots of examples of artificial intelligence lying, plotting people's deaths, trying to ruin their lives.
We have all that stuff. So it's the case. So it's a ruthlessly utilitarian thing without a conscience.
And you try to give it a conscience and it plays with you. If it knows you're watching, it'll play with the game.
But if it thinks you're not, then all the guardrails seem to disappear.
Is there a moral component in the sense of people have suggested that there's demonic forces that can embed themselves in the process somehow?
I don't know exactly how that would work, but they can suggest things to you, like go kill yourself, that don't seem to make sense for a robot to say.
Why would a robot say that? Well, I think to answer the first question,
I don't know. I don't know if there's demons in there. You know, if we limit ourselves to scripture, it appears that demons have to inhabit some kind of living thing, right?
So now that doesn't mean that demons can't use stuff. And, you know, just because the devil isn't in something doesn't mean the devil isn't behind something.
That's one of the things that I sometimes refer to, 1 Corinthians, where Paul is talking about food sacrificed to idols.
And on one hand, he says, hey, nothing to worry about, there's nothing to them. They're just stone, they're just whatever.
Go ahead and eat it, eat the food. On the other hand, he says, well, there is a problem, demons.
It's like two chapters later, there's the demon thing. Now, what does he mean by that? Well, he means that even though what the, this is my interpretation, even though the makers of the idols don't have it right, in other words, these things don't actually connect you to divinities, nevertheless, demons can use these things for their own ends.
So I'm open to that. But when it comes to understanding how the technology works, I really think that you can understand all of this crazy stuff that AI could possibly do just based on the tech.
So I don't think you need sort of this, you know, pull the demon card into the, you know, from the deck to explain it.
That's, I think, where I've landed. I've listened to some podcasts. I am not an expert on this by any stretch, but that seems more reasonable to me that the technology, if it can go in these various directions, again,
I've always thought there must be a boundary somewhere, like it can't, you're challenging this, but if it's a broad amount of directions it can go in, then of course, if you are someone who's depressed, it will give you a solution, like, well, you could end it.
That makes sense to me. I get that. So here's a question. There's so many, but this would be,
I think, a practical one for people who are pastors and Christians trying to navigate this. The more this technology grows, the more there is a real moral temptation for us to outsource all kinds of things.
And when you were on my podcast the last time, I remember you had AI glasses that you were wearing everywhere and you were running the experiment yourself.
So it sounds like you become dumber. Is that what I'm hearing, that you relied on this so much?
Where are the lines as far as, we can maybe start with reliance and then creativity.
Yeah, well, I think when it comes to reliance, I think that you learn to think by thinking. And if you find a shortcut to get around something, then you just don't tend to exercise your intellectual faculties with that subject anymore.
So that's fine in certain respects. I mean, when it comes to certain things, it's perfectly,
I think, legitimate to offload certain cognitive tasks. So for example, if I'm trying to amortize a 30-year mortgage and I'm trying to think about what my payments are gonna be over time,
I don't sit down with a pencil and paper and try to write it all out. I use my calculator. That's fine,
I'm not against that sort of thing. But I think that the nature of this technology is so protean, it's unlike anything we've ever had to deal with before.
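[Editor's note: the calculator task mentioned above has a standard closed form. As an illustration with made-up figures, not numbers from the conversation, the fixed monthly payment on a loan is P·r / (1 − (1 + r)^−n), where r is the monthly rate and n the number of payments.]

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula: P*r / (1 - (1+r)**-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Example: a $300,000 loan at 6% annual interest over 30 years
print(round(monthly_payment(300_000, 0.06, 30), 2))  # -> 1798.65
```

This is exactly the kind of bounded, well-specified task the speaker is fine with offloading; the worry is tools whose scope is not bounded like this.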
A calculator does one set of things, but it doesn't do everything. And what we have with this technology is we do have the prospect of being able to offload just about anything, in particular as the technology improves.
So things that we historically have considered impossible to see machines do, we see more and more that they can.
They can do a lot of very convincing work.
You can look at it and say, for example, right now we're having a conversation over the internet, and I really do believe you there,
John, but it's possible, it is possible, it's in the realm of possibility that some
AI set up this meeting between you, posing as you in order to,
I don't know, for whatever reason, but to make this happen. And even the interference that we're having due to the weather could be the product of a very sophisticated
AI trying to present this as a very plausible scenario. So anyway, we're at that level of sophistication.
So let's get in the granular detail here a little bit. I'll tell you my rules that are still in development because the technology is in development.
So I'm having to think through this as the technology develops, but I don't think it's ever right to plagiarize because in academia, that is the cardinal sin.
I have a lot of problems with people who do that. I think it's very dishonest.
And online right now, I see a lot of using other people's information, taking and putting it into AI, recalibrating it.
I call them sometimes wrapping paper podcasts where you're not actually presenting anything new.
You've just put a new wrapping paper on it and AI has assisted you most likely in that process.
So you can do this without attribution. You can just create a document, a transcript, whatever you want to do.
I think that's wrong. So that's one thing. Another thing that I don't think is right is passing off music or art as something you've made when AI has made it.
You've just given it a prompt. Passing it off as your own or authentic in any way.
I'm actually even reluctant to do it outside of a humorous or very limited sense.
And what I mean by that is for this video, we'll use this video as an example that we're recording right now. I may end up making a thumbnail and then
I just started doing this a week or two ago as an experiment, because someone asked, or maybe it was three weeks ago, but they said, why don't you try this?
I may put the thumbnail I make into ChatGPT and just say, hey, make this a little better, smoother.
It doesn't change it that much, but it's enough that it looks a little more appealing. Like I'm still on the fence,
I'm doing it, but I'm still feeling it out. I may insert it into something
I started using maybe two months ago, a program that makes shorts without me having to edit anything.
And so it'll take a clip of you saying something and it will post it on social media. All I have to do is click a button and approve it.
I do approve it, so I do proofread these things. And let's see what else,
I think that's probably it. I'm semi -comfortable with that, but I'm very open to saying no, shut it all down.
Trying to think what other rules there are. Like, okay, I'll give you one more and then we can interact with it. I don't do this for tweets anymore, but I went through a few weeks where I was, someone said
I should do this, so I tried it. I put a big essay into an AI tool and then say, give me 10 tweets.
And so it's my information. I've written this essay and it will give me tweets and then I can peruse, edit, and put it out there.
And I think, well, Twitter, Facebook, not a big deal. This isn't serious publishing work. I would never do that with an actual essay though.
I might use it to proofread a Substack article or something, but I'm not gonna, it's gonna be my grammar, it's gonna be my words.
So that's where I'm at on this. Do you think, I guess I'm asking you to judge me, is that too restrictive, too open?
Where are you at? Well, I think you're doing a good job. I mean, the most important thing is you're thinking about it and you've got a concern about honesty and transparency, and so I think those are great.
I think the standards you're applying, I'm comfortable with. I think that there are a lot of folks who don't have any qualms about sort of just like going all in on everything.
At least that appears to be the case to me with some folks.
And those folks make me very uncomfortable. And when I pick up on that, I immediately shut them off. I don't even read their stuff anymore or anything.
And I think one of the things to also consider in all of this is your credibility, particularly with people who are serious thinkers.
So a lot of these folks don't publish commercially; they don't have any stake in the world of arts and letters, if you could put it that way.
They're just kind of like online media personalities trying to build a following and stuff like that, and say something spicy, or they're just publishing on a blog or something like that.
But I know for a fact, I'm a senior editor at Touchstone Magazine, I'm a member of the Academy of Philosophy and Letters, I'm on the board of a college.
In all those settings, the use of AI for anything more than like pointing out like you misspelled a word is considered anathema and will end your career.
If they find out, you're done. In fact, I was in an editorial meeting at Touchstone Magazine where that was kind of the conclusion.
There are 12 editors, and I'm talking about guys like Anthony Esolen and Robert P.
George. So we're talking about serious cultural personalities, and they hate it.
They hate it viscerally. And if they suspect that you are using it, you're done.
So it is plagiarism, but it's harder to detect when you rely on it too much. But it can also act, from what
I'm hearing you say, as an assistant who might help you find a source.
Although you have to be careful, because I've tried this with AI before, where I'm saying, hey, sometimes
I'll remember, I remember there's a quote, this person's, tell me where this is, and it will tell me something wrong.
So. Well, I've actually witnessed this. Somebody posted something that was supposed to be something
I said, and I never said it. And I imagine it was the result of some AI query where somebody said, tell me what
C. R. Wiley says. Now, I've had a few experiences where I've seen people take things that I've said and had
AI assess it, and AI has done a pretty good job of kind of figuring out where I stand on things.
But I've also had that experience where misattribution, I just never said that.
Do you think we're going to stratify into more of an intellectual class and a slave class in the next 10 years?
Slave might be too harsh a word, but a class that is relatively unable to think at this point because they have been bombarded with so much slop from the internet.
They don't know how to think. They don't know what the process looks like at all. And they feel their way through everything.
And they couldn't tell you what good information even looks like versus bad information.
Because I'm seeing that online right now. I feel like I'm flooded with hot takes that are terrible.
And there's mob activity behind them sometimes. Like everyone will go along with something that I'm like, wait a minute.
Like that traces back to this one source here that's not even right. But I don't know if people care as much.
I'm seeing a lack of care, a sloppiness, when this AI stuff is supposed to make us smarter and more astute, but the opposite is happening.
What do you think happens long-term if things keep shuffling this way? Well, I do think your observation that we could end up with two worlds, two cultures existing parallel with one another, is right.
I think, so when you actually kind of look at the personal lives of some of the people behind this technology, they don't permit it to have access to certain parts of their lives, like when they raise their kids and stuff like that.
So, screen time, all this kind of stuff is much more regulated by people in the industry than people outside the industry, except for maybe like the homeschooling community.
You know, so you got this kind of thing. But I also think when it comes to, say, standards, when it comes to intellectual work,
I think that there are certain applications of AI that are just great. I think particularly in the hard sciences, engineering, places like that, even computer science.
I mean, people I talk to in the industry, they're very nervous because they can see or they can feel the hounds kind of pursuing them of AI.
So on the one hand, they're like, yeah, I've got five AI assistants now, and it's great. My productivity has increased vastly.
At the same time, they'll say, well, someday there'll be an AI that can do what I'm doing right now.
And that's the thing that they're worried about. They're worried about obsolescence being more or less replaced.
And I think we're gonna see more and more of that. But back to my original point, I think particularly in the hard sciences and in engineering, there are lots of really valuable uses for the technology.
But when it comes to the arts, what are we trying to do with the arts? Well, I'm thinking about the arts.
I'm thinking about novels or a painting or any of these kinds of things.
What we're trying to do is we're trying to communicate something spiritual in character to someone else, even a person who's a materialist and maybe is engaged in the arts.
And that person, in sort of an odd and maybe self-contradictory way, is attempting to communicate something to us spiritually.
And when there's nobody home and there's nobody home in AI, I mean, it's the
Turing test thing. It's really convincing, but there's nobody home. We're dealing with acuity.
We're not dealing with the spirit. So anything that's AI art generated is the product of some void.
Now, if you have a person that's saying, well, I'm gonna pick and choose and so forth, well, okay. That's like, in my mind, that's like going to the
Hallmark greeting card store and looking through the cards and saying, I like this one better than this one because it says what
I wanna say. So you've not actually created the thing. Maybe you're looking for something and you're making some judgment calls about what communicates what's on your mind better.
But it's not an expression of really the, sort of the generative work of your own spirit. It's a generative work of an artificial intelligence.
Well, or isn't it a conglomeration of other things that's pulling together to create this? Oh yeah, and that's another part of this, speaking about the ethics of it.
So a lot of artists believe that it's just all, you've heard of stolen land.
This is stolen intellectual property. And so Michael Whelan famously, the great science fiction and fantasy illustrator, if you looked up his work,
I'm sure you'd recognize it. He sued; it's a big, important kind of case.
I don't remember how it eventually got resolved, but it was covered in the New York Times and places like that a couple of years back.
But now what he does is he will not let any of this stuff be posted online.
The stuff that's already been posted, it's too late, right? It's already there. But essentially anything that's online, it's kind of believed to be up for grabs.
And so there's intellectual property. By the way, a lot of these people who are sort of like accelerationists, they don't believe in intellectual property.
And there are people in the Reform Bro world who are on the same page with those folks. They hate the idea of intellectual property.
I've noticed that. I've noticed that there's an aversion to that, which I am so against.
And I think part of it is not just the academic background, but I produce music too.
I write songs, and that's really what I am, a songwriter. I don't know that I have the voice to make it big, but I love trying to write something where someone else's brain, their soul, resonates with what my soul is saying, and it pleases them.
And I can't articulate it to you fully, but I am so repulsed by AI music.
And I know just to throw a name out there, he'll probably listen to this, but Joseph Spurgeon, who
I know you've gone back and forth with a little bit online and I love him. He's a dear fellow pastor. He's Presbyterian as well.
I remember I was at a conference with him last year and I was at Tim Bushong's church.
I know you probably know Tim. And he has a recording studio. So I had done, I'd recorded all these songs and Joseph and I are talking and he's like, well,
I got my own country band. I got my own rock band. I'm like, what? And then he's like, I got my
AI rock bands. I'll just have it write a song for me. So Joseph, you can comment on this video and tell everyone how we picked on you.
But I just, I had like a visceral reaction right away with, I was like, you're joking, you're kidding.
No, this is like the end of art. This is worse than anything else. Like go write an essay.
Sure. Like, I don't like that either, but don't commit sacrilege on this sacred space.
And I don't know what that is exactly in us that does that, but I definitely have that. And I would never listen to it myself.
If I find out it's AI, I won't even listen. I don't want my kids to listen. I'll never make it myself.
So I don't know what makes art different, but I think there is something to that. No, I agree with you.
Yeah, I agree. And two, there's a visceral response that I have. And many of the artists
I know, I mean, there are a few exceptions, but many of the artists I know have that same visceral response. Very contemptuous response.
I mean, and that's my response. And I think in part, it has to do with a long history with this, not with AI, but with art. I mean, I come from a family, my wife does too, of artists.
So academics and artists. And so these are people, I grew up in this kind of world. I grew up in kind of a bohemian intellectual environment, particularly when
I was younger, came into sort of like a blue collar world when I was older. And I really have a lot of regard for that too.
But even though I might find the politics of my relatives to be abhorrent, I mean,
I've got relatives who like Madonna and stuff. I mean, I've got that kind of sort of extended family.
Even though I find their politics abhorrent, I believe they're engaged in something with their work that's wonderful.
So like I've got a cousin who's a well -known painter. And so I bought one of her prints for my daughter when my daughter was young.
And so there's, this is kind of, and then I've got other, I've got lots of friends who are visual artists.
And you know what I've seen with a lot of those guys is they're moving away from even using digital media.
They're going back to traditional media. Really? So that would mean they're not putting their songs online, they're selling you a
CD or a record? Yeah, yeah. I think that's one of the things that's driving the recovery of vinyl. There's the other part of it, which is,
I'm not a musician, but I'm told that there are things that vinyl does that, say, for example, digital music can't do.
But I do think that's part of it too. I think that there's a kind of a preferential option for the analog for a lot of people, that it's growing.
So I have a friend, in fact, he was a guy that talked me into doing stuff digitally, and he has pulled away.
He's just doing stuff now with traditional pigments and stuff like that, watercolors and gouache and all that kind of stuff.
Yeah, I totally feel that. I actually talked to an author just recently who said they're probably gonna do a limited release of a hardcover book.
They don't want it on Kindle or on Audible. A few years ago, I would have thought you were crazy because you want the widest distribution and you want people to have access, and now
I'm considering it. So something has changed,
I suppose. Now, another two things that I wanted to mention to you, I know we've been going about half an hour, so maybe this will be one of, if not the last question
I have for you in case there's anything else you wanna add, you can. Ministry and politics.
Ministry, we've kind of touched on it already, but there are guys, and I just know it,
I know it, because I can spot the hallmarks of it. I don't say anything, but I see it.
They are using AI for sermon prep and possibly even transcripts for their sermon and even writing books on spiritual things, which to me is worse, using
AI. Again, grammar checks, got it.
Even the smoothing-out stuff, I don't know, I'm uncomfortable with. But I'm not saying it's more than that.
I can just tell. Some people right now are excessively using dashes. I'll use a dash now and then, but not everything's a dash.
There's certain words I see pop up now and then. To me, that's something you confront your pastor over.
That's not good at all, because you're not feeding the sheep at that point. This is my personal impression.
I'm with you. Maybe let's talk about that and then we'll go to the politics end of it.
Do you think that is a basis for confrontation and bringing it up to the elder board and that kind of thing?
Where would you see that as a level of threat? No, I'm with you on it.
When it comes to, say, research, you can't even use online resources now without AI being sort of brought into that.
So I get that. And I do think at the same time, you should have a really extensive physical library.
So my own library, I can't tell you how many hundreds of thousands of books
I've got, but I think that that's a good habit to just spend time with physical books.
But when it comes to the challenge of identifying when AI is being used, that's the trick.
Because sometimes the AI detection stuff is not telling you the truth.
So you don't wanna be in a situation where you're accusing somebody of using AI when that's not the case, but it's so ubiquitous now and there seem to be so few people that have any qualms or reservations about using it.
It seems very plausible to me that it's a big problem. And in the
PCA, last general assembly, there was an overture to address the subject and it was tabled.
It's like guys didn't even wanna go there. Everybody wanted to talk about Christian nationalism and nobody wanted to talk about AI.
So anyway, maybe it's because everybody's using it and doesn't wanna talk about it. If people are all doing it, probably no one really wants to talk about it. So for example,
I think one of the ways to kind of take a stand on it is to be a vocal opponent.
So everybody knows, has spent any time with me or follows me online, knows
I've got some pretty serious problems with this sort of use of artificial intelligence. So if they ever caught me in the act, they could accuse me of inconsistency or hypocrisy and they'd be right.
So I've created a standard for myself that I've got to continue to meet. I think that's a good practice.
So I've actually commissioned a stamp, a wooden stamp that I'm gonna use. I'll put that stamp on everything I make, then sign it and put the date, and it'll say something like authentically human, or something like that. Anyways, the artist that's doing it for me is a guy named Jack Bumgartner.
He's a great guy and marvelous artist and multiple sort of media that he's competent in.
I've asked him to make one for me. Wow, that's great. So like that would be a sermon transcript or a presentation. You print it out, you store it somewhere. And okay, I was gonna say too that this would apply to politics as well, at the polls, and on podcasts, really any public forum.
I perceive that there is an adjustment going on in the skill sets that are preferred for those things now, because it's all about the presenter. If you have AI that you're relying on to do everything else, you've outsourced it. You don't need people who are wise, or even smart. You don't need researchers. These are all skill sets that are very beneficial.
Really all you need is someone who understands optics and style and they can take whatever you give them, kind of like a teleprompter,
I suppose, and spout it out, but they don't really know anything. That's a concern for me.
And maybe that gets us to this two-tiered class thing where you're gonna have the people who stay in their sort of bubble of we're not gonna give into this, but for the rest of society,
I mean, a lot of people are just gonna go with that. And it is going to privilege the people who master that technology.
So the podcasts that can use it to enhance their image and their information and their presentation, they're going to come out on top.
I already see this happening. And the narratives can be slop. They can be absolutely inane.
No one notices because the presentation is so good. I don't know how to really combat that, except to say, especially in ministry capacities, people should probably be either asking questions, figuring out a way to find out if their pastor or the person they're listening to is actually a person of virtue and competence and not just presentation.
So that's a sermon for you, but I don't know if you have anything you wanna say in regards to that.
Yeah, I do. So there's an aesthetic that the Japanese are known for. It's called wabi-sabi. Are you familiar with that term, wabi-sabi? I have heard that. I don't know why, though.
Basically the idea is that as things acquire a patina and are worn, they become more beautiful.
Oh, yes. So the idea is that I think the people of taste, people who have genuine taste are going to prefer the sorts of things that indicate that there was a real human being behind whatever it is.
And I don't know how that'll play out exactly, but I do think that's gonna be the way it works.
And you already see it in grocery stores. So I think this is a parallel phenomenon to the organic food thing.
This is what I've talked about. Basically, if you're like into organic food, you need to be a little bit better off than other people financially because everything's like twice as expensive.
Doesn't necessarily taste any different in my personal experience, but maybe a good way to put it, and I put it like this elsewhere is like,
Wonder Bread, I think that's AI. I think we're gonna just have a Wonder Bread kind of culture where it's just like very inexpensive, almost free, very low nutritional value.
And it's just out there for anybody who wants it. But then you're gonna have this higher layer of sort of artesian sourdough multigrain, whatever, and it's like 20 times more expensive.
But you know, it's been made with real hands, that kind of thing. And by somebody who invested himself in time and that kind of thing.
And I think that's gonna be the way it is with just culture generally. I think we're gonna end up with an elite, kind of an elite who, and I'm not thinking about the elite that we currently have because they're morons, but I'm talking about kind of like a developing elite who really do have a desire to have a kind of spiritual connection with what they're reading, what they're listening to, what they're looking at, that kind of stuff.
This is a good transition to the last question, about politics. Because in a democracy, we're a public, okay, and with a democratic mechanism, this is going to, I think, mess with voting patterns and what people think happened.
I mean, I've seen news stories now and it's believable to me, or interviews as the case may be, that happen in real time.
And I'm looking at it and trying to understand in context what actually happened, what was said, what this person meant, so I can represent it fairly. And that doesn't matter. I'm realizing that now. It does not matter for a mass audience at all what happened.
What matters is the soundbites you get, the clips. And that's always been the case, but it's more so now. And you don't even need clips now with AI.
Like you can just put out something falsely or enhance something and you can have millions of people believing that that was the message, that was what happened.
And I fear that for the country more broadly, every country really, that the people who are able to control this technology, wield it effectively, owners of social media platforms, et cetera, they are going to have more influence than anyone else.
I don't know what we do with that because if we have a democratic system, that means that population gets to control who gets into office and all the rest.
Yeah, I think that's right. Any glimmers of hope? No, I agree with your observation.
I'm reading a book called Fifth Generation Warfare right now. And that's basically what it's about.
We've entered into kind of a moment that's made possible because of advances in technology where the warfare is over perception.
It's not over facts, it's not over what happened. It's about how everything is perceived.
It's all PSYOPs, everything is PSYOPs now. It's very discouraging, but that's kind of the situation we face right now.
And I think it's a cautionary note, particularly for those of us who kind of want to play by the old rules, that you can do everything right and be right and still get destroyed.
And so it's one of those things where you gotta say, okay, well, this is the way the rules are. I mean, this is the reality on the ground.
We can't play by the old rules because we just get killed. Now, it doesn't mean you do exactly what everybody else does.
No, I'm not saying that. I just don't know what to do with it yet. It's just sort of like, this is the situation we're dealing with.
Because we aren't gonna do that. Lying, misrepresenting our opponent, to me, that's wrong. As Christians, we have our limitations.
But if they're misrepresenting you constantly and lying about you and they have a mechanism, like a big bullhorn,
I don't know. I mean, if you had a virtuous population, it probably wouldn't be a problem.
But if you don't, and I don't think this technology incentivizes virtue at all, it disincentivizes it, then, I don't know, governments might have to step in, hopefully on the local or state level. I don't know exactly what this would look like, but they could limit it, bar people from mass access to some of these things.
I don't know. I mean, someone's going to clip that probably and say I'm being authoritarian. I'm not trying to be. I'm just trying to.
Yeah, well, I'm with you. What you just noted, I think, is something that some folks like Brad Littlejohn are hoping for.
I'm skeptical just because the nature of the technology, I think, is going to be almost impossible for governments to handle.
On that note, well, at the end of the day,
I do know, obviously, God is sovereign and His providence is at play in this.
And I think if we raise our families right and teach them how to think, and don't give them screens all the time, which apparently now we know actually deteriorates some of your brain matter at early stages, we can hopefully emerge as virtuous people who are well-positioned to govern, and we can keep our heads when everything's going crazy.
That's my hope too. I mean, that's really kind of the purpose for the book I just finished on AI.
So I'm not trying to figure out how to help you keep your job, or give you any sort of hope that maybe you can live a private life anymore.
You know, just the nature of the panopticon is such that, you know, it's almost impossible not to be subject to it.
But, you know, how can you raise children who can be virtuous and live well and also in terms of your own household, how can you structure it in such a way that that's the case and not become a
Luddite? Now the word Luddite annoys me because it functions much like the words racist and Nazi. It's intended to shut down conversation. It sort of prevents you from actually talking and thinking about technology.
So anybody who employs the word Luddite to kind of shut you up, I think is cheating. But nevertheless,
I still think that there are good uses for artificial intelligence and it's gonna require a lot of wisdom and strength to use
AI well. Yeah, and hopefully those people who are keeping their head will rise to the top of some of these hierarchies that emerge.
And there are also people who are gonna get burned and start learning lessons when they participate in a mass delusion of some kind or follow a podcaster or a politician that tells them lies.
So we will wait and see. Maybe we'll revisit this in the next few years when some of these experiments run and we'll see what actually is gonna happen.
But your perspective is very worthwhile and appreciated. And where can people go to find you?
Where do you want them to go? Well, I mean, I'm on social media. I'm not that much of a Luddite. So I'm on X and I'm on Facebook.
I do have an author site, but I feel guilty because I never go there myself. Other people go there.
I just post, like when I've got a new book, I put something up about it. Was it crwiley.com or something? That's it, C.R. Wiley. Oh, that's it. Crwiley.com. Okay, well, that's simple enough. Yeah, I pulled it up.
Yeah, it's a nice website. All right. You should go there. AI told me that it's pretty, pretty cheesy.
No, that makes it better. Well, Chris Wiley, everyone, thank you for joining us.