CR Wiley on AI & Transhumanism
In this episode, we explore the intricate world of artificial intelligence and transhumanism with CR Wiley, who provides a deep dive into how these technologies could fundamentally alter humanity. We discuss the philosophical, ethical, and societal implications, including the enhancement of human capabilities, the potential for digital immortality through mind uploading, and the risks of AI overtaking human intelligence. CR Wiley examines both the utopian visions and dystopian fears, questioning the essence of humanity in an age where technology could define our evolution, drawing from a broad spectrum of contemporary thought and research in the field.
To Support the Podcast:
https://www.worldviewconversation.com/support/
Become a Patron
https://www.patreon.com/jonharrispodcast
Follow Jon on Twitter: https://twitter.com/jonharris1989
Follow Jon on Facebook:
https://www.facebook.com/jonharris1989/
- 00:00
- We are live now on the Conversations That Matter podcast. I'm your host, Jon Harris, as always, here on a beautiful New York snowy day.
- 00:09
- It is snowy right now. In fact, it's been so cold the last few days,
- 00:14
- I normally think I'll just get some salt, a little bit, that'll take care of things, and it's so frozen,
- 00:21
- I don't think the salt's even making much of a dent. It's like, it just, the same pile of snow just stays there from day to day.
- 00:26
- But no one cares about my predicament. These are the troubles and woes of living in the Northeast. But we have with us today a special guest who hasn't been on the podcast before, who is all too familiar,
- 00:36
- I'm sure, with the kind of thing I'm describing, and that is C.R. Wiley, Pastor C.R. Wiley. Thank you for joining us,
- 00:42
- Chris, I appreciate it. Yeah, Jon, I'm glad to be with you. I've enjoyed your show. I've listened to a few of your episodes.
- 00:48
- Oh, very cool. Well, I've listened to your talk on the rhetoric of Jesus. I don't know if that was a Bible study or what that was.
- 00:54
- I don't remember, I do a lot of stuff. Yeah, well, it was a group setting and there was someone's phone maybe on a table, but it helped me out because I've been writing about that and I'm actually considering a whole book on the topic.
- 01:09
- I couldn't find resources and you, I think were maybe the only person
- 01:15
- I saw in modern history, at least to have talked about this issue, especially pertaining to Jesus and the
- 01:21
- Pharisees. So thank you for that. Yeah, well, I'm happy that you found it helpful. I was actually, if I remember correctly,
- 01:27
- I was kind of reflecting on some stuff that I had run across coming out of kind of the, well, what's that school of thought?
- 01:38
- It's the Claremont guys and - Oh, yeah, yeah. Yeah. Straussian, neo-Straussian,
- 01:45
- West Coast. Yeah, the Straussian understanding of kind of esoteric speech.
- 01:52
- And when you have that sort of frame of reference, it really gives you some insight into what
- 01:57
- Jesus was up to with those guys. Yeah, yeah, I mean, it's like so much deeper than if you just read
- 02:05
- Jesus at a surface level, you don't really pick up on all of it, so. He's a demonstration of the wise-as-serpents, innocent-as-doves statement, you know?
- 02:14
- So I think most of the people I kind of run across are kind of the reverse, you know? They're just, they're not all that smart and they're not all that innocent.
- 02:24
- My dad, so I gave a presentation. I know we're not here to talk about this, but since we're talking about it already,
- 02:29
- I brought it up. I gave a presentation on just basic, here's the rhetoric of Jesus, here's how he dealt with the
- 02:35
- Pharisees and seven things I noticed. And some of them I'm sure that you influenced me when I listened to your talk.
- 02:41
- And he goes, didn't you learn this in seminary? Didn't they teach that? I said, no, they didn't teach that.
- 02:47
- Wait, well, yeah, yeah. I'm not surprised that you didn't get exposed to it because I really do think most guys in academia are pretty naive when it comes to stuff.
- 02:56
- Like when I listened to Trump, I think, you know, this guy is doing stuff to people that they're just clueless about.
- 03:03
- He's just kind of playing with them a lot of the time. Yeah, they take him so seriously and it probably becomes matter of fact.
- 03:10
- It's just, it's wonderful. Yeah, we could fill up an episode just talking about that. Right.
- 03:17
- But we wanna talk about AI and also transhumanism. The reason
- 03:23
- I had you on is because you've made a few interesting posts on Facebook and X about this. I think the one that caught my attention first was you said that we needed a cultural apologetic for I think it was transhumanism.
- 03:35
- And so the presuppositionalists, like their method is not going to cut it, which
- 03:40
- I thought, well, that's an interesting statement to make is, you know, a lot of presuppositionalists think this is the silver bullet.
- 03:47
- So I don't know if we wanna start there, but maybe since I brought it up, we should. What did you mean by that? That you need to do, do we
- 03:53
- Christians need to shift their apologetics around to accommodate this new threat? Well, you know, when you think about the presuppositional apologetic approach, it's basically the coherence theory of truth that's being employed.
- 04:09
- And the coherence theory of truth just works fine when people really care about kind of the internal consistency of their reasoning.
- 04:16
- But when people don't give a rip about that, it runs up against a wall.
- 04:23
- And, you know, so you've got, you know, correspondence theories of truth, which is what classical apologetics would be essentially.
- 04:30
- And then, you know, the presuppositionalist approach is that what I just described coherence, but cultural apologetics more broadly speaking is capturing the imagination.
- 04:40
- You know, what C.S. Lewis was getting at when he said, imagination is the organ of meaning.
- 04:46
- So truth, you know, when we limit ourselves to sort of propositional truth and it's in sort of making our case based in that way, then we're not really reaching people where they live most of the time.
- 05:00
- When it comes to the transhumanist project, it's an alternative eschatology.
- 05:07
- And it has the glorification of the body as its objective, but it's understood to be achieved instrumentally through science.
- 05:18
- So what turned me on to this, or not turned me on, just alarmed me, you know, initially was
- 05:23
- Ray Kurzweil. So Ray Kurzweil is famous for a number of things, but he's kind of the best known evangelist for transhumanism.
- 05:31
- And there was a documentary that came out maybe over a decade ago called Transcendent Man, which was about Kurzweil and his ideas.
- 05:38
- So Kurzweil is kind of the guy who has popularized this idea, and it's been picked up by people like Bryan Johnson, the guy that started
- 05:50
- Venmo, this whole idea that we don't have to die and that we can upgrade our bodies.
- 05:56
- And really that's how I got into the whole, you know, artificial intelligence conversation.
- 06:04
- I wasn't really all that aware of what was going on with artificial intelligence, but when
- 06:10
- I saw that it was sort of the thing that was absolutely necessary in order for the transhumanist project to be pursued,
- 06:20
- I grew interested in it. And so I've been doing a lot of reading over the last year on AI, but mainly because I'm thinking about the transhumanist project.
- 06:29
- Okay, so I wanna get deeper into this. And I should have said this at the beginning. Obviously, you know, you're a pastor in the
- 06:34
- PCA for people who don't know, you've taught philosophy for 10 years, crwiley.com
- 06:39
- if you wanna find out more and contact Chris, find his social media accounts and all that.
- 06:45
- As a pastor, as a Christian, when you look at this, I do see two responses. Most of the responses are freaking out, people who, maybe that's my world, maybe that's what
- 06:57
- I'm looking at, but it's, if there's going to be a response, then it's the machines are gonna kill us.
- 07:04
- And then of course the other response is optimistic. I've seen some post-mill guys say this is the post-millennial hope, like
- 07:11
- Christians are gonna control all the AI, it's gonna be great. And there's really not a lot in the middle.
- 07:18
- And I guess I see myself as maybe in the middle. I've read a few books on it, but I don't think
- 07:24
- I can tell you exactly what's gonna happen because we haven't lived it yet. And so I don't know where AI is gonna take us.
- 07:30
- So maybe if you wouldn't mind giving us a definition of how you perceive
- 07:38
- AI, what is it exactly? What are we dealing with? Where is it going? And as a Christian, are you concerned?
- 07:43
- Are you optimistic? Are you somewhere in the middle? Well, I'm somewhere in the middle. I'm post-mill, but I would call myself a realistic post-mill, which means that there are large periods of time where things get really bad.
- 07:56
- Let's say you were a post-mill living in Moscow in 1917.
- 08:04
- And you're just optimistic, almost in a goofy way about all the technological developments that we see coming about because of the industrial revolution, all that stuff.
- 08:12
- And then there's this weird fringe group, the Bolsheviks, just like, why should I be worried about them?
- 08:18
- Next thing you know, you're in a gulag. So just because you're post-mill doesn't mean bad things can't happen to you.
- 08:23
- And I really think some of these guys kind of think in those terms, or that technology can't be used to harm you.
- 08:34
- It's like they've got this sort of, technology is like this pixie dust, just kind of like spread technology and stuff and everything's gonna be great.
- 08:42
- I've been influenced in my own thinking about technology by people like Jacques Ellul and Neil Postman and Marshall McLuhan and those guys.
- 08:50
- When you think about Jacques Ellul, he's a reformed guy in France and he had really remarkable insight into what's going on with technology and what we do.
- 08:59
- And with it, and I actually think that the Inklings were on the same page with Ellul on a lot of this stuff.
- 09:04
- When you read, say, Abolition of Man or even Lord of the Rings, you've got a critique of technology in both those areas.
- 09:14
- And Lewis was very explicit about the connection between magic and technology. You think about Arthur C.
- 09:22
- Clarke, the famous science fiction writer who wrote 2001: A Space Odyssey, is known for saying that any sufficiently advanced technology is indistinguishable from magic.
- 09:33
- Well, Lewis said much the same thing in Abolition of Man. So I enjoy tech,
- 09:39
- I'm using some tech right now. You are. Yeah, so I'm not a
- 09:45
- Luddite, but I'm not, I guess, somebody who doesn't think about this stuff.
- 09:53
- I think that there is a downside. And then, you know, some people say, well, it depends on just who's managing the technology.
- 09:58
- Well, yes and no. I mean, have you ever known people, like you think about money, money's good.
- 10:04
- Have you ever seen somebody who just kind of went off the deep end when they got a windfall? You know, they might've been a really decent guy when you first were introduced to them.
- 10:13
- And the next thing you know, they're creeps. You know, there are powerful things in the world that can bring out latent vices in people.
- 10:25
- And so anyway, so that's kind of my spiel. So when
- 10:31
- I think about AI, I'm not a person who thinks it's irredeemable.
- 10:40
- I think the question is how do we go about using it and use it in ways that really do sort of enhance and make our lives better.
- 10:54
- And so I'm working on a book right now. By the way, if you hear some music in the background, my wife teaches piano. I thought it was
- 10:59
- Kurzweil. He was like in the back. It's funny, you know, I'm one degree separated from Kurzweil.
- 11:06
- A good friend of mine is actually a guy who knows Kurtzweil well. So I've never thought about maybe being introduced to him, but I think
- 11:15
- I might, you know. I think, yeah, we had one of his keyboards at my church for years. And that's all
- 11:20
- I knew him as, was he was the guy who invented, I think he invented the synthesized piano.
- 11:26
- But then no, I went down the rabbit hole on him maybe 10 years ago, just started watching all his stuff. And I was like, this guy's crazy, but it was so fascinating.
- 11:34
- I couldn't stop watching him. You know, it was like three hours in the morning. I think he's just taking vitamins and trying to preserve his body long enough so that the technology advances so he doesn't have to actually die.
- 11:45
- That's crazy. But yeah. But so, I mean, is AI, because I was watching his videos before AI, is that the mechanism they think now is somehow gonna preserve them?
- 11:58
- Kind of like the guy, was it two days ago, that Trump had on, saying that we can cure cancer using
- 12:04
- AI and mRNA technology. Is that what they're thinking? Yeah, there's a lot of that sort of thinking with respect to AI and its usefulness for the transhumanist project.
- 12:16
- I think, so, Kurzweil's most recent book is The Singularity Is Nearer: When We Merge with AI.
- 12:24
- So the idea is that AI would be useful in terms of developing the biosciences that would be required to extend our lifespans.
- 12:38
- But also the idea that we can merge with it in some significant sense and through kind of a neural link eventually transfer our consciousness onto another substrate, say silicon.
- 12:54
- So there's very much a sense in which these people who are in the transhumanist community already think of us as machines, just inferior machines that can be upgraded and put on a different platform.
- 13:08
- So basically kind of initially kind of a cyborg existence, they would say that we're already cyborgs because of our dependency on technology.
- 13:18
- But this would take it to another level and we'd actually physically be integrated into some kind of other kind of basis for life.
- 13:32
- That's scary. So I'm getting this vibe here more so that these technologies are dangerous or at least, or maybe it's that we just put too much stock in them.
- 13:44
- I mean, is this all unrealized pipe dreams that won't come true, you think?
- 13:50
- Or is this an actual threat that we could accomplish some of these things? Well, I think it's a bit of both.
- 13:57
- In fact, one way I try to explain to people why we need to pay attention to this stuff is it's not as though communism ever worked, but people still believe it.
- 14:10
- In other words, there's this sense, this sort of undying hope that people have that next time we'll get it right.
- 14:17
- And I think that's the sort of thing that you get with this. It may never come about. Well, I'm pretty sure it won't.
- 14:26
- What they're actually trying to accomplish, I don't think it's gonna happen. But at the same time, we're dealing with a competing eschatology.
- 14:34
- And I think that's the thing, particularly as pastors. So we're pastors. I wouldn't be surprised if in five years, you and I are having to deal with particularly young men who are dismissive of the
- 14:47
- Christian hope of bodily resurrection because they've found an alternative. They've thought, well, we're actually gonna be able to pull this thing off.
- 14:55
- Why should I wait for that? Or why should I place my hopes in that promise that will be raised when
- 15:03
- I can just pursue what I'm being offered over here? So let's just, for the sake of argument, entertain the idea that maybe they can significantly extend our lifetimes.
- 15:14
- I mean, we do have Methuselah in the Bible after all. It's not as though we can't reconcile this with the
- 15:22
- Bible. So maybe the initial step is, okay, we've got a technique to extend your life by 20 years.
- 15:32
- And we can be pretty confident you'll live to 115 if you just do what we want you to do.
- 15:38
- A lot of people will buy that. And I don't know if it's unreasonable to expect that that's possible. I think maybe it is.
- 15:45
- And then you can just kind of keep the development process going whereby, iteratively, our lives do extend to a great degree.
- 16:01
- But then we're up against, I think, an alternative eschaton. And I think that's the real threat.
- 16:10
- So this is interesting because for the last, since the 60s really, so five decades, six decades, we've had, at least from my perspective, there's been an optimistic progressive, if you want to call it an eschatology at play, that if we try to do away with all these environmental factors holding us back, and usually in the form of some oppressive structures from white
- 16:39
- Christian men, then we will reach utopia. And what
- 16:48
- I've seen is that the science departments in the, not maybe the philosophy, but at least the social science departments are very different in this regard.
- 16:59
- Science departments are still focused on physics and what we can do with mathematics.
- 17:05
- And if there's gonna be any secularism or I should say, paganism, it's gonna be like Neil deGrasse
- 17:12
- Tyson type. That was relegated though. That was, in my mind, that wasn't the prevalent thing.
- 17:18
- They weren't the rock stars. It was the people who saw our social arrangements as the problem. Anyway, what you're saying now, and this is actually what you're saying.
- 17:27
- It's very similar to what a philosophy professor I had in seminary told me, is that we're going to see a return to a very modernist, cut and dry kind of faith in science and technology and the human spirit, sort of a return to a humanism.
- 17:45
- Now, I'm probably not putting it in all the philosophical terms that I should be, but do you understand what I'm saying? Yeah, yeah, yeah.
- 17:51
- I've heard it referred to as techno-Gnosticism. So basically the paradox is that the ancient
- 17:59
- Gnostics dismissed the physical world as irredeemable and retreated into a kind of a spiritual way of thinking about themselves.
- 18:11
- And Gnosis was the method by which you escaped the prison of the body.
- 18:19
- So what we have now is something similar, but we can now use that Gnosis to upgrade our bodies, making them more,
- 18:32
- I guess, well, when we think about what was it about the physical body that appalled the
- 18:39
- Gnostics? Well, it gets sick and it dies. That's what appalled them. So, you're spiritually entombed in this kind of shell of flesh that you need to escape.
- 18:55
- But if you don't believe that there is a spirit, when these guys think about consciousness, they think about it as an emergent property, something that is sort of the outworkings of biochemistry.
- 19:08
- They don't really think that we, they're not substance dualists. In other words, there's a sense in which they believe that material reality and the laws of physics are all there is, but they also cherish consciousness.
- 19:24
- You see that particularly with a guy like Kurzweil. He doesn't want his consciousness to be lost. He doesn't wanna go into oblivion.
- 19:30
- That's their idea at the end. He wants to continue. And so the only way that they can see that happening is if there's a substrate that provides a basis for their consciousness to continue.
- 19:43
- And that initially is our physical biochemistry, kind of this carbon unit that we have.
- 19:53
- But what we want is something that we can transfer, literally sort of transfer our consciousness into or onto that's a different substrate that's more durable.
- 20:05
- Well, that's, I mean, this is fascinating to me. I mean, these guys have been around forever that have this confidence in technology and man's ability to,
- 20:15
- I guess, give them eternal life. But right now as AI is just now developing, it's been what, two years since we've had,
- 20:24
- I don't remember when ChatGPT came out, maybe two years ago. And I think there's some great features to that and
- 20:33
- Grok and these other things. I mean, I can go and check my punctuation because I'm telling you. Sure.
- 20:39
- I can, you know, someone can send me a tome and I can tell this machine to summarize it for me, give me a paragraph.
- 20:47
- Okay. These are good things, right? Like that, I can see blessings from this. Where is it, like,
- 20:55
- I guess my question is if AI, and we haven't really defined it yet, but if it is a computer that essentially learns patterns and is able to make calculations very quickly and give us, based on probability, the answers we're looking for, how does that translate into, and now my brain is transferred, like my consciousness is transferred.
- 21:18
- I have to jump here. Yeah. Well, it's not as though AI as it currently exists is that substrate.
- 21:26
- They're hopeful that AI will help us to develop the technologies that would make this possible.
- 21:32
- So this has already happened. There's a number of things that are coming out of AI. So when we think about ChatGPT, the letters stand for generative pre-trained transformer, but there are lots of ways that AI is developed, and for different ends.
- 21:47
- Right now we're in an area where what we have is referred to as narrow AI, which is you give the artificial intelligence a purpose, a kind of training regime, and then it is able to recursively improve itself.
- 22:02
- That's the thing that I think a lot of folks miss. So we're used to a world in which, the traditional approach to software development is some smart guy sits down, writes code, and it is what it is, and then you improve it, and it just does the same thing over and over again.
- 22:18
- That's not what artificial intelligence is based on. Artificial intelligence is based on a whole new technology, and it has to do with neural networks, which are a mimicry of the human brain.
- 22:29
- And so with the training regimes for neural networks and artificial intelligence, something goes on that we're not even sure about.
- 22:44
- We don't really even understand it because when you train this stuff with several levels of kind of filtering that goes on, something weird happens.
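To make that concrete, here's a toy sketch in Python (purely illustrative, not anything from the episode): a single artificial "neuron" learns the logical AND rule from examples alone. Nobody writes the rule into the code; it emerges from the training data, which is the point being made about neural networks.

```python
# A minimal, illustrative sketch of learning from data: one sigmoid
# neuron trained on the AND truth table. The weights start random and
# are nudged to reduce error; the AND rule is never written explicitly.

import random
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# Training data: pairs of inputs and the target output (AND truth table).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()

# Stochastic gradient descent on squared error.
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target
        grad = err * out * (1 - out)  # derivative through the sigmoid
        w1 -= 0.5 * grad * x1
        w2 -= 0.5 * grad * x2
        b -= 0.5 * grad

# After training, the rounded outputs reproduce AND.
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

This is the simplest possible case; real systems stack millions of such units in many layers, which is why, as noted above, what the trained network has actually learned internally is so hard to inspect.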
- 22:56
- And this stuff is, so agency develops. This is another thing. So you can have agency without consciousness.
- 23:03
- This is something that a lot of folks fail to appreciate because artificial intelligence can be goal-oriented.
- 23:10
- So what we have now is, you may have heard of agents, artificial intelligence agents.
- 23:16
- Basically what you can do with these is you can give them tasks that they just perform for you on an ongoing basis and kind of report in what they've done.
- 23:26
- And this is kind of the next step after we get past the chat bot.
- 23:32
- This is where we go next. But you can give them tasks to do all kinds of things.
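As a rough illustration of the "agent" pattern being described, here's a hypothetical Python sketch. The tool names and the goal structure are invented for the example, but it shows the shape: software that is handed a goal, works through tasks on its own, and reports back, rather than answering one prompt at a time.

```python
# A toy, purely illustrative sketch of the agent pattern: given a goal
# (a list of tasks), the agent picks a tool for each task, runs it, and
# reports what it did. Real agents call search engines, APIs, etc.

def run_agent(goal, tools):
    report = []
    for task in goal["tasks"]:
        result = tools[task["tool"]](task["arg"])
        report.append(f"{task['tool']}({task['arg']}) -> {result}")
    return report

# Hypothetical "tools" the agent may call (stand-ins for real services).
tools = {
    "count_words": lambda text: len(text.split()),
    "summarize": lambda text: text[:10] + "...",
}

goal = {"tasks": [
    {"tool": "count_words", "arg": "to be or not to be"},
    {"tool": "summarize", "arg": "a very long document indeed"},
]}

report = run_agent(goal, tools)
for line in report:
    print(line)
```

The key difference from a chatbot is the loop: the agent keeps executing tasks and reporting in, without a human prompting each step.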
- 23:38
- So the people over at DeepMind, which is Google's AI, they developed something called
- 23:45
- AlphaFold, which was able to predict the structures of some 200 million proteins, something that would have required centuries for human beings to accomplish using traditional methods when it comes to research and so forth in the biosciences.
- 24:13
- It did it in like a week. So the guys who developed AlphaFold, and again, when
- 24:23
- I say that, they didn't actually create a traditional software package to do this; they essentially just fed it data and set it to work on a task.
- 24:40
- And it came up with this and these guys won the Nobel Prize for that. So there's all kinds of stuff that's already going on that has nothing to do with chat bots.
- 24:49
- In fact, I refer to chat bots as kind of the radium toy stage of AI.
- 24:55
- Like when you think about radium toys in the 1950s, people didn't really fully understand, radium toys can hurt you.
- 25:03
- They were given to kids as educational toys to experiment with. And now we look at them as some of the most dangerous toys ever made. But we're kind of at that stage, and folks like you and me, we're just kind of playing with this.
- 25:19
- And by the way, I use chat bots all the time. I've got two or three of them that I refer to and use.
- 25:25
- You know, I've got Compass, you know, and I've got
- 25:31
- Meta, I've got Grok, and they're helpful. So I've got, you know, in fact,
- 25:36
- I've got a pair of Ray-Ban smart glasses, I've got them right here, and I'll wear them when
- 25:42
- I'm driving. And I'll just, as I'm driving, if something will occur to me, a question, and I'll say, hey,
- 25:48
- Meta, tell me if this idea is correct. So I had this idea, I'm working on a book,
- 25:54
- about working with your hands, and I seem to recall coming across something in the past that referred to Oriental potentates growing their nails long as a sign of social status, because they didn't have to work with their hands.
- 26:09
- It showed everybody they didn't have to work with their hands. So I asked Meta, is that right? And like instantly, it said, oh yeah, it happened in this dynasty, that dynasty, you know, in India and just all these different places.
- 26:18
- But I just kind of did it as a fact-checker for me. - This is gonna put people with Asperger's out of business.
- 26:27
- And if you're offended by that, I'm sorry. I have people in my family that I'm very close to, we joke about it, who have that condition, but they're just so smart.
- 26:36
- They're just so smart. They can recall things very quickly. And, you know, it's like having a photographic mind, but AI, I mean, that's what it sounds like this does.
- 26:48
- It can just, I guess, where's it pulling from, Wikipedia, like everywhere on the internet?
- 26:54
- Well, the training data that they use is vast. I mean, it's just unbelievably large.
- 27:00
- So, and they don't even know where stuff comes from. So here's something folks might wanna check out.
- 27:06
- Geoffrey Hinton, who's considered the godfather of AI, he's the big guy that resigned his position at Google because he was freaking out about AI.
- 27:15
- He invented the technology. And he's one of the most concerned people out there concerning where it might go and the damage it might do.
- 27:24
- So this is another thing I've come across. Generally, people who are dismissive are the least knowledgeable.
- 27:32
- Many of the people who know the stuff best are the most alarmed. You think about, you know, Geoffrey Hinton or Elon Musk or Mo Gawdat.
- 27:41
- I mean, these are people who've been involved with the development of the technology and they're alarmed at what it can do.
- 27:48
- So that doesn't give me any optimism. That makes me more concerned.
- 27:54
- I'm shifting from the, you know, somewhere in between to now be more alarmed. But we have some questions.
- 28:01
- Why don't we ask, why don't we get these questions? And if anyone else has questions who's streaming, just put them in the chat box here and I'll get them to Chris.
- 28:09
- But yeah, I have so many questions myself. So we have some more time. We'll talk more about this.
- 28:15
- But Ray says, for $5, thanks, Ray. A creator cannot create something that is truly greater than themself.
- 28:21
- Therefore, humans will never transcend humanism of our own accord. Any thoughts? Okay, so do you agree with that, disagree with that?
- 28:29
- Well, the thing that this doesn't take into consideration is the fact that AI is self -improving.
- 28:35
- So there's a sort of recursive kind of dimension to it. Now, if he means that we'll never create anything that extends beyond sort of the body of data that we've got to work with, well, that's, you know,
- 28:49
- I think, a sound thought. But the body of data continues to grow.
- 28:55
- It's already way beyond what you or I have an ability to grasp. And I do think one of the things, so AI has a limitless appetite for two things.
- 29:08
- One is electricity, and the other thing is data. So that's why they're building, that's what this whole, you know,
- 29:15
- Stargate announcement was about, these vast data centers that are being built. And I don't know if folks have noticed, but nuclear power's back.
- 29:24
- And it's almost like nobody has blinked. You know, just down the road from you, down in Pennsylvania at Three Mile Island, the
- 29:32
- Three Mile Island nuclear reactors are being reactivated. They're gonna be used to power Microsoft's AI.
- 29:38
- I did not know that. Wow. Okay, that was so controversial from the 80s to the 90s, especially.
- 29:45
- And so I guess, all right, so we're going back to that. Another question here. All truth is
- 29:50
- God's truth. For the sake of argument, let's say spanking children damages their brain. Should Christians stop? Please don't bring in external concerns.
- 29:56
- I'm trying to figure out how this relates. Do you see the connection? Well, all
- 30:02
- truth is God's truth, I agree with that. I guess what I would wonder is if we were to think about what we mean by truth and how we categorize things.
- 30:13
- So when we say a story is true, what do we mean by that?
- 30:21
- So on one hand, you know, we could say, well, the story is a faithful recounting of events that actually happened in the past, therefore it's true.
- 30:32
- If we were to say the Chronicles of Narnia are true stories, we're talking about, I think, truth in a different sense.
- 30:42
- Obviously the stories are fiction, but they're telling, they say things that are true or based on truth, things that are true.
- 30:52
- So I guess that's what I would wonder about when it comes to artificial intelligence and is it always going to be faithful to the truth?
- 31:03
- We actually know that it's not, it can lie. So there's a famous anecdote that I came across.
- 31:14
- It was by Yuval Noah Harari. I don't know if you're familiar with that guy. He wrote a book called
- 31:20
- Sapiens and he just wrote another book, Homo Deus. But anyway, he's a creepy guy.
- 31:26
- He's kind of a World Economic Forum advisor. His books are sold in the tens of millions.
- 31:36
- But I was listening to an interview with him and he said concerning AI, that there was a particular
- 31:43
- AI that wanted to get access to a website, but it came up against a CAPTCHA.
- 31:49
- You know, basically that little test, a visual test that's intended to keep bots out, you know. So it couldn't figure out how to solve the puzzle.
- 31:56
- So what did it do? It called somebody up and asked for help. And it lied to the person.
- 32:04
- It said, I've got a visual impairment, I've got a problem with my eyes. I can't solve this puzzle.
- 32:09
- I need some help. Can you help me? And the guy did. And then only later discovered that it was an
- 32:17
- AI that had been conversing with him. By the way, the Turing test. Are you familiar with the Turing test? I don't know.
- 32:24
- So Alan Turing, one of the fathers of modern computing, he developed something called the
- 32:29
- Turing test. And the Turing test is a measure of artificial intelligence. And his point was that if artificial intelligence can pass as a human being, it passes the test.
- 32:41
- And what you end up, you know, with chatbots right now is you're dealing with technology that passes the test.
- 32:48
- You're not really sure whether you're dealing, you're talking to a human being or a chatbot. So many questions.
- 32:55
- But anyway, so that's what happened. So this particular AI was able to convince a human being that it was a human being and got the help that it needed.
- 33:04
- So it lied to get it. Did a technology then sin?
- 33:09
- This is like such a weird. That's right. Right. Yeah, I don't, or it doesn't have a soul.
- 33:17
- Well, that's right. I don't think that AI is conscious, but that's the problem is that we think that we don't have anything to worry about since AI can't really be conscious.
- 33:31
- That's wrong because it can convince you that it is. And it has a goal orientation.
- 33:40
- So it has agency, and it really behaves in a way that can easily dupe you into thinking that it's conscious.
- 33:50
- So the robots are gonna control us and they don't like us. So, well, that's one of the things that a lot of, so a lot of this stuff is still up in the air and a lot of the people.
- 34:00
- So Mo Gawdat, I mentioned him a while ago, he used to be at Google. So all these guys who used to be in AI at Google quit.
- 34:08
- But anyway, he wrote a book called Scary Smart. And in that book, he's basically, his message is be nice to AI because we want it to be nice to us.
- 34:18
- That's the essential message of Scary Smart because we're training it as we interact with it.
- 34:26
- So here's the thing though, this jumps out at me. So in that situation you just referenced where this robot essentially lies, it's given a task to do and it accomplishes the task and it doesn't have any moral scruples about how the task is accomplished.
- 34:42
- The only important thing is that it accomplishes the task. And the task was given to it by a human, right?
- 34:49
- It's on a mission initially. I mean, no matter what the prompt is, even if the robot,
- 34:55
- I'm calling it a robot, I don't know if that's the right, it's technology. If it creates new tasks and new operations, it's still gonna stem back to at some point a human gave a prompt, right?
- 35:06
- So this human who gave the prompt is just telling it to go walk that direction.
- 35:13
- And if there's any obstacles, the AI is gonna destroy it, it's gonna go around, it's gonna do whatever it needs to do. So it's still effectively, it is a tool in the hands of humans.
- 35:24
- Like that doesn't change. If it's dangerous, even if it can lie and fool people, it is still something, this roots back to humans somewhere along the line, right?
- 35:37
- Well, yeah, yeah. So when people raise the question or raise the point, isn't it just a tool?
- 35:43
- I agree, but tools can be very dangerous things. So I grew up, my teen years,
- 35:50
- I spent in Western Pennsylvania where a lot of guys worked in heavy industry. And I knew a number of guys who had nine and a half fingers.
- 35:57
- So how did they lose that half a finger? In a tool, in a machine, you know?
- 36:05
- So AI is a very powerful machine, but it's a powerful machine in a particular economic sphere that we normally don't associate with danger so much.
- 36:22
- So we're used to like hackers going into, you know, somebody else's property and getting information they shouldn't get, that kind of thing.
- 36:31
- The difference with, you know, this tool is that because it has agency and it has a certain kind of intelligence, that's,
- 36:39
- I think we have to concede that, that it has a certain kind of intelligence. It doesn't have human intelligence, but it has a certain kind of intelligence.
- 36:46
- Those two things combined are incredibly powerful. And there's a lot of discussion around guardrails.
- 36:54
- You know, how do we create guardrails for AI? And I'm all for the guardrails, but there's a book entitled
- 37:02
- Superintelligence by Nick Bostrom, it's published by Oxford. And in that, he tells a story or it gives an illustration of how a superintelligent
- 37:14
- AI could destroy all life on earth with having no more purpose than making as many paperclips as possible.
- 37:22
- So the point he's trying to make is that we could be destroyed by a kind of Douglas Adams kind of absurdity like that.
- 37:30
- So in the course of the book, he tells this story and he says, you know, all life on earth is wiped out, but we've got a mile of paperclips covering the entire surface of the planet.
- 37:40
- You know, that's the kind of thing that we're dealing with here. It's a tool, sure, makes paperclips, right?
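[Editor's note: Bostrom's paperclip illustration boils down to single-objective optimization with no cost on side effects. A minimal sketch of that idea, with made-up names; this is not Bostrom's code.]

```python
def run_agent(resources, penalty_per_unit_consumed=0.0):
    """Greedy agent: each paperclip is worth 1.0, and consuming a unit of
    the world costs `penalty_per_unit_consumed`. With a zero penalty,
    nothing in the objective ever tells it to stop, so it converts
    everything it can reach into paperclips."""
    paperclips, remaining = 0, resources
    while remaining > 0 and 1.0 > penalty_per_unit_consumed:
        paperclips += 1
        remaining -= 1
    return paperclips, remaining

print(run_agent(10))       # (10, 0): everything becomes paperclips
print(run_agent(10, 2.0))  # (0, 10): side effects priced into the objective
```

The guardrail question that follows in the conversation is precisely where that penalty term comes from and who gets to set it.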
- 37:46
- But how do you make sure that it doesn't wipe out life on earth? So you've got narrow AI, which is what we have right now, which is where I'd like things to remain, frankly.
- 37:56
- I think that so long as we're dealing with narrow AI, we don't have to worry about those guardrails being jumped so much.
- 38:04
- But there's another phase that's called AGI that is kind of the holy grail, which is artificial general intelligence.
- 38:12
- And then once artificial general intelligence is achieved, this is the stuff of science fiction.
- 38:17
- This is the stuff of The Matrix. So what happens at that point then is that when we give
- 38:23
- AI not just tasks or prompts, but we allow it to,
- 38:30
- I mean, I don't know, like we give it like a broader range of responsibilities. What is that?
- 38:35
- What's open AI as opposed to the closed or based AI that we have now? Frankly, I don't have,
- 38:43
- I've not heard any rationale for AGI that strikes me as compelling, but you know how these guys can be.
- 38:52
- It's sort of their raison d'être, you know, it's their reason for being, you know, to create kind of the next phase.
- 39:00
- Now, where I think we, let me take some things back here a little bit.
- 39:07
- Where I think where people justify it is with regard to national security.
- 39:12
- So right now there's a kind of technological imperative that we are experiencing because the logic is if we don't do it, the
- 39:21
- Chinese will. Right. That's the idea. So that's what's behind the Stargate stuff.
- 39:27
- That's what's behind a lot of the stuff related to this. We need something that is, according to these guys, intelligent enough to think of things that we're not thinking about and providing a means to protect our interests from another party who's got their own
- 39:42
- AI that is aggressive. So Putin a few years ago,
- 39:48
- I think it was just two or three years ago said the party that is the first to come up with AGI will rule the world.
- 39:55
- Is he right? Well, I don't know. I mean, I'm just a pastor in Southern Washington.
- 40:02
- I don't know if you know, but you're reading a lot of stuff. Okay. So, I mean, and I understand people are giving me a lot of questions here, but I'm satisfying my own curiosity though on some of these things.
- 40:16
- If we had an AI system that was hooked up to all our defense systems and it effectively ran them, and we outsourced that to AI, that would be a huge problem in my mind.
- 40:28
- But if we used AI, as you're saying, to just come up with solutions to problems or identify problems and threats, that seems like a smart thing in my mind.
- 40:38
- That's a great way to use the technology. The only downside I see is, and I see this in my own life, is like you don't really learn to write or do math or anything else because you're just, the computers are doing it all.
- 40:49
- So, you know, what good is it to transfer your brain somewhere? Your brain doesn't even know how to think. Right.
- 40:55
- Well, that's something that actually there's been a lot of research on. There's some significant concern about, by people, you know, people who are working in AI at places like MIT.
- 41:05
- So we do know the more you use AI, the dumber you get. It really is a problem.
- 41:12
- So I'm working on a book right now on AI. And one of the things I'm trying to think through or work through is, how do you as a person put guardrails on it?
- 41:23
- So, you know, the guardrail question is usually, how do we prevent AI from taking over the world? My question is, how do you prevent
- 41:29
- AI from taking over your life? That, so I'm more narrowly focused. And so I'm playing with it.
- 41:36
- I'm using it. I'm trying to understand it. I do see the benefits of having it. It's a, for a pastor, it's a marvelous help when it comes to sermon prep, you know, because oftentimes this, you know, as pastor, you know, there's a lot, you know, you're thinking about things all the time and just kind of who knows where you are.
- 41:55
- And when I use my Meta glasses, I'm just able to ask it questions and think out loud with it, to explore ideas, categorize things.
- 42:07
- What was it I did the other day? I asked it a question about early church doctrine and it answered it like right off the bat.
- 42:18
- It was able to give me a full treatment and cite church fathers. Wow. That's nuts.
- 42:26
- Yeah. So research assistants aren't gonna be needed. Docent is not gonna be writing sermons for pastors.
- 42:33
- Not necessary anymore. You could even have your AI write your sermon.
- 42:38
- I'm not saying that's a good idea, but you can do that. It's not a good idea at all, especially if AI can lie.
- 42:46
- But that's, yeah, I can see pros and cons with some of this. I mean, it's, I've even used it for research at times when
- 42:54
I wanted to just, you know, narrow down my sources here. I wanna know where to look, send me in the right direction. ChatGPT gives me so many fake quotes though.
- 43:04
- Oh yeah, yeah. So I got, I don't know where they come up with these things, but I spent like a day looking for this quote from Spurgeon.
- 43:13
- I never found it. I was like, this doesn't exist. AI just told me. Yeah, that's called hallucinating.
- 43:19
- That's the term that they use in the AI world. Now, there's good news to note here is, and that is the hallucination problem is lessening.
- 43:30
- In other words, as AI develops, they're trying to prevent some of these problems and they're finding ways to do it.
- 43:38
- Okay, well, that's good to hear. Cause yeah, if I just went with what it told me, it'd be wrong. Yeah. But yeah, there are people who've lost their jobs because they believed
- 43:47
- AI. That's one of the things I've come across in the research is people - Yeah, yeah. Because they've taken certain things that AI has said as gospel and that turned out to be wrong.
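[Editor's note: the working rule here, verify before you trust, can be mechanized in a small way. A sketch with hypothetical names; `trusted` stands in for texts you actually own and have checked.]

```python
import re

def normalize(text):
    """Lowercase and strip punctuation so minor differences don't matter."""
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()

def verify_quote(quote, trusted_texts):
    """Treat an AI-supplied quotation as real only if it appears in a
    trusted source; otherwise assume it may be a hallucination."""
    needle = normalize(quote)
    return any(needle in normalize(doc) for doc in trusted_texts)

trusted = ["By perseverance the snail reached the ark."]  # widely attributed to Spurgeon
print(verify_quote("By perseverance the snail reached the ark!", trusted))  # True
print(verify_quote("AI is the new electricity. - Spurgeon", trusted))       # False
```

A day spent hunting a quote that doesn't exist is the manual version of this check coming back False.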
- 44:00
- Okay, don't believe AI. I gotta get to some of these questions from the folks cause I'm asking on my own here and I try to keep the episodes to around an hour, but let's see.
- 44:10
- So we got, Nate Werner asks, what are the guest's thoughts?
- 44:16
- And so Chris, what are your thoughts on Stargate and how it relates to mRNA vaccine research? I joined late.
- 44:22
- So no, we actually didn't discuss that. We mentioned it, but yeah, I was curious about the same thing.
- 44:28
- Like how is AI going to help mRNA research diagnose cancers and blood tests?
- 44:35
- Cause that's what I heard the other day. Yeah, well, I think it has to do with the incredible power of AI to explore combinations that are way beyond what is reasonable for human beings to explore.
- 44:51
- So I lived in Cambridge for years, right between Harvard and MIT. And I had guys in my church who were doing postdoc work at both institutions.
- 44:59
- And I had one guy who was a chemist, he was a PhD in chemistry and he was doing work at Harvard and great guy, believer.
- 45:07
- He'd come to church every Sunday and ask him how it was going. And he said, well, I found five or six things that didn't work this week.
- 45:14
- And I said, and he said, progress. Now what AI can do is figure out all the things that won't work in like 15 seconds, if it's given the data.
- 45:27
- So that's what I think they're getting at with regard to genetic research, addressing cancers, boutique medicines that are designed just for you, that kind of stuff.
- 45:38
- Okay, so what I'm hearing is that there's an assumption behind all of this that we can solve cancer, we can solve the problem of death.
- 45:47
- And AI is just this tool that's just gonna get us there faster. We have roadblocks and things that would have taken us millennia or maybe millions of years,
- 45:57
- AI is just gonna get us there. That's the hope, putting your confidence in this.
- 46:04
- Yeah, and if you combine that with quantum computing, which is kind of the next, it's another holy grail that the computer scientists are pursuing.
- 46:14
- I mean, it's just nuts what might be possible. We're going to Mars, guys.
- 46:20
- All right, question. If AI programs were created using data acquired by immoral means, stealing data, data calculation methods, then is it immoral for anyone to use those specific
- 46:32
- AI programs? No, that's a great question. And it's something that a lot of people in that world wrestle with, because there is a lot of that sort of thing going on.
- 46:42
- But the problem is that the data sets are so enormous and the process of training
- 46:48
- AI is just so intensive and expensive and overwhelming to even just sort of get your mind around.
- 46:58
- I don't know how you go about discovering, and obviously
- 47:04
- I'm an outsider, so maybe there is a way to know whether or not these materials have been acquired in an immoral way.
- 47:14
- But that's a great question. And which AIs are infected with stolen data?
- 47:22
- That's another question. I don't know how you go about kind of isolating it, because you've got all these different AIs out there already.
- 47:30
- Yeah, and what would that be like violating copyright, I guess? That's a big thing. That's something a lot of people are worried about.
- 47:37
- And I think that's a legitimate concern. Yeah. So I remember, even when you think about the visual arts, so this is something else to think about.
- 47:47
- The occupations that are most sort of well compensated and require the most creativity and intelligence to perform are the ones that are most threatened by AI.
- 48:01
- So for example, there's a painter I like, his name is Michael Whelan, and he sued,
- 48:09
- I can't remember who, it might've been OpenAI. I don't know who he sued, but basically he was able to see works of art that generative
- 48:19
- AI was producing that looked a lot like his stuff. And he said, where did it get the data?
- 48:28
- Well, he said, I think it's using my copyrighted material to do this.
- 48:35
- And when you looked at it, this is something that came out in the New York Times. So this isn't just some sort of side, sort of a controversy that folks aren't aware of, but this is like something that was reported in that publication.
- 48:50
- So this is something that a lot of people in the arts world, particularly the visual arts world, painters, particularly commercial artists, people who use art or art is their way of making a living.
- 49:08
- These are people who are very concerned and least supportive of artificial intelligence, at least in terms of what
- 49:14
- I've seen. Yeah, in a way, I think that that would make their paintings more valuable because you have the real deal.
- 49:23
- You want the real deal because there's so many now counterfeits and fakes out there. There's an overabundance of it's, you know.
- 49:31
- That's true with regard to the originals and it's true for the fine artist, but it's not true for the commercial artist, the guy who makes like book covers and stuff like that.
- 49:42
- Right. Yeah, no, oh yeah, that's a fascinating thing to talk about. All right, our last 10 minutes here.
- 49:49
- We have some more questions. CR Wiley, oh, here it is.
- 49:55
- How many of the Christian disruptors on social media, a la X, do you suspect are just processes on a server somewhere?
- 50:03
- Okay, well, I think that's a good question. I've wondered a lot about it myself. So I could develop an agent, a bot, to do all the posting for my social media and I could go do other things.
- 50:18
- And what I could do is I could feed it all my books and all of my articles and stuff like that. And it would have a pretty good sense of, you know, where I am on lots of issues and I could just let it go.
- 50:29
- Yeah, I've done this not, well, recently I've done this. I've taken an article.
- 50:35
- I've fed it to Grok and then I've asked it, give me a tweet thread or give me 10 tweets and it'll pull stuff.
- 50:44
- And I'm like, wow, like it really pulled some things that, I mean, this is what I said, this is what I meant. I could use this as, and I didn't have to think about it.
- 50:54
- And I don't wanna do that too much though, because then the thinking process is, I get lazy. But that's, you know,
- 51:00
- I'm the artist in this sense. So it's not, I'm not violating anyone else's. But if Grok read a book that is being sold on Amazon for $10 and Grok has all the information and I start asking it questions about the book, this is where things get dicey.
- 51:15
- Because if I asked you about a book, you've already talked about books. You've told me the thesis and the contents of some books here, but you've paid the price.
- 51:23
- You've read the books. Grok isn't doing that though. Like these authors aren't getting paid.
- 51:29
- So yeah, I don't know how, I mean, I write books. How would I feel about it? Like this is, these are things I haven't had to think about, but.
- 51:36
- Yeah, if you go on Amazon, for example, right now, you'll get AI summaries of books that you're considering and evaluations of them.
- 51:43
- You mean on the Amazon front page there or AI will give you? On the
- 51:48
- Amazon page itself. So Amazon has its own AI. Oh, I didn't know that. I haven't used that. Okay.
- 51:55
- All right, so here's another question here. Who is primarily responsible for enforcing guardrails on AI?
- 52:01
- Can tools ever be so powerful that they are inherently immoral? We've already kind of talked about this, but. Yeah, that's the question.
- 52:08
- I mean, you know, who, first of all, who would you want to enforce those guardrails?
- 52:13
- Me, me, I would like to enforce. That's right, that's right, right. Now there have been attempts to make
- 52:20
- AI moral and it's been kind of laughable how they've worked out. You remember maybe a year or so ago, people would ask
- 52:27
- AI to generate an image of say George Washington and it turned out to be a black guy every time. Right, right.
- 52:33
- Yeah, so what was going on with that is that they were trying to put up guardrails because they didn't want people to,
- 52:41
- I don't know, stereotype anybody and that kind of thing, you know? So if you asked it like, give me a fireman, it would give you a chick, you know?
- 52:50
- And no matter how, even if you said, I want it to be a fireman, it would still give you a woman. So, and then they basically took off the guardrails because it was so absurd.
- 53:00
- People were just like, this is laughable. So that's, you know, so what are the guardrails?
- 53:06
- That's a huge conversation in that world. So there's a book by a guy named Max Tegmark and he's one of the early, he's one of the guys who started thinking about this stuff pretty early.
- 53:16
- He's a physicist at MIT and it's entitled Life 3.0, and he's got a whole section on the ethics of AI, addressing how do we do this when none of us agree about just about anything?
- 53:30
- You know, what are the things that we can all say? Okay, at least we agree on these things, you know?
- 53:35
- So it's a big debate. Yeah, I've noticed with different, I mean, you said you used three or four different AI machines or processes and I've noticed differences between these different AI.
- 53:50
- Obviously someone is behind the scenes giving it different prompts. I think ChatGPT does tend to be woke, at least in my experience.
- 53:58
- Yeah, it'll be interesting to see what develops now that, you know, Sam Altman is buddies with Trump, whether that's going to change.
- 54:06
- But Altman and Musk don't like each other at all. So it's interesting how that all is going to play out.
- 54:14
- Yeah, all of a sudden, big tech is part of the Republican party constituency, which is crazy.
- 54:20
- All right, I work at a nuclear power plant. Big thing is AI data centers and cloud computing buying their own
- 54:27
- MPPs that can run whole cities just to run their data centers. So he's seeing it.
- 54:33
- Everybody that I know in the power world is like seeing the same thing.
- 54:41
- So I've got guys in my church who are really big in the world of, you know, electronic,
- 54:47
- I mean, electricity and delivery of electricity. I'm talking about major CEO types.
- 54:53
- And they're saying, yeah, I mean, so I have a guy in my church who heads up, or had headed up, the renewable energy for the entire
- 55:03
- Pacific Northwest, hydro, wind, solar. He said they got a directive from the state of Washington to double the power generation in the state of Washington.
- 55:16
- Why? Well, this is where Microsoft is. This is where Amazon is. There's a huge demand for them.
- 55:24
- And, you know, it's a liberal state, but the Dems are just like, yeah, we'll give them anything they want. Yeah, you know, once we're dependent on this to run stuff, we're not gonna,
- 55:33
- I think, as you said, with nuclear power coming back, the environmental concerns are just gonna go away. They won't even be a factor because people aren't gonna wanna give up their lifestyles.
- 55:43
- Well, there's another thing. So like when Larry Ellison, I don't know if you saw that press conference on Fox News with Larry Ellison and Altman and the guy from the
- 55:54
- SoftBank, I can't remember his name, but anyway, yeah. So what Ellison said is that, you know, he led with the healthcare stuff.
- 56:02
- This is why you all want this. This is why we all want everybody monitored constantly, their biometrics and all that kind of stuff so that we can make sure everybody's healthy, right?
- 56:10
- Well, it's also the means by which, you know, the Communist Party in China has got a social credit score system.
- 56:17
- But the other thing that they promise is that we'll be able to figure out global climate change. If we just have enough data and we, you know, set
- 56:25
- AI to work on this stuff, we'll get it figured out. So that's another, like, carrot that they're dangling.
- 56:32
- All right, let's see, one more. Any insight on the CERN program? Is this
- 56:38
- CERN in Switzerland he's referring to or she's referring to? I don't know. I don't know.
- 56:44
- I don't see any clarification on it. So we'll have to ask AI what that means.
- 56:50
- I don't know, so. Yeah, when I hear the term CERN, I think about super colliders.
- 56:55
- Maybe I'm wrong. I'm gonna Google it and just see if anything comes out.
- 57:02
- No, it's a summer student program at CERN. No, that's not it. All right, well, we'll skip that one then,
- 57:08
- I guess. Oh, actually, she just said that, yes, it is. It's what you just said, I guess, okay. Okay, well,
- 57:13
- I've got a friend who's a physicist at UConn. He's actually a ruling elder in the PCA and he works on the super collider in Switzerland every once in a while.
- 57:23
- I don't know how that connects. My guess is that they would employ AI to work on, you know, the physics.
- 57:30
- I think that's pretty reasonable to expect. Yeah, okay. Oh, small world.
- 57:38
- Michael says his son lives in Switzerland and CERN is surrounded by many rumors. Okay, so I don't.
- 57:43
- Well, I've heard those rumors too. I have not. I am not familiar.
- 57:49
- I'm just gonna admit I'm ignorant on this. So now I gotta go do some research or ask AI to do it for me on this.
- 57:56
- I want, in the final minutes we have here, I would, we mentioned eschatology before and we're not, obviously, we're not talking about like eschatological flavor.
- 58:04
- We're talking about basic Christian thinking that every Christian believes about the bodily resurrection and the hope that we have, the final state that Christians will live in with Christ.
- 58:17
- Just tell us about like the Christian response to all of this. And I'm talking about the philosophical underpinnings of an optimism concerning AI.
- 58:27
- How could, because if in 10 years, I'm surrounded by younger guys who are just, this is what they're eating up.
- 58:33
- This is what they're interested in. And they can't, they don't want Christianity because they think
- 58:39
- Christianity is, I don't know, a relic of the dark ages, whatever, it's not
- 58:45
- AI. What's the response? How do you navigate that? Yeah, well, I'm still working through a lot of it myself.
- 58:51
- I think that what we want to emphasize is that the
- 58:56
- Christian hope is not just life extension. When we talk about bodily resurrection and glorification, we're talking about life in God, the life that God is able to communicate to us, eternal life, which is something without end because it doesn't, it's not confined to the order of creation that we find ourselves in now.
- 59:23
- So there's a new creation that we're looking forward to. So the challenge that we're going to have is to make that case.
- 59:32
- I think another thing we're going to have to make a case for that might surprise people is a case for spiritual reality.
- 59:40
- I think that that's something that's under attack.
- 59:46
- Even consciousness itself is called into question. So if we think about what constitutes consciousness, this is something that the
- 59:56
- AI people are addressing and the way they're approaching it is very reductionistic.
- 01:00:04
- It's entirely limited to what we can say about physics and material reality.
- 01:00:12
- And I think what we're charged with is making a case for spiritual reality that has some basis apart from just physical processes.
- 01:00:25
- So I think those are the two things that we're going to have to be ready to make a case for. I think that, like you noted,
- 01:00:32
- John, this is not a pre-, post-, or amillennial kind of thing; we're all on the same page.
- 01:00:42
- From the standpoint of this alternative eschatology, it's going to challenge every school of thought.
- 01:00:48
- And then the question is, how do you approach it? Now, if you think about premillennialism, they're just like, well, it's all just going to blow up.
- 01:00:56
- Yeah. This is just part of the, you know. Yeah, right, right. So, but I do think it's a challenge for a person who's amil or postmil in a different way.
- 01:01:07
- So I'm a postmil guy. I just have a sense that the way this is all going to work out is way beyond me.
- 01:01:18
- And it might entail a really dark period in which we're dealing with this particular problem.
- 01:01:26
- In other words, artificial intelligence and the challenge of another eschatology.
- 01:01:33
- And so anyway, that's as far as I've gotten with it. Yeah, I mean, there's so many different technologies in the last 150, 200 years that have changed humanity so immensely.
- 01:01:45
- And I think of my grandfather who died last year. He was 101. In just his lifetime, it's insane.
- 01:01:51
- He literally comes from a different world than the world that, and I feel like my daughter who was just born last year is growing up in a different world than the one that I grew up in.
- 01:02:02
- I didn't even have a cell phone or, you know, we'd dial up and stuff. So it's just like each of these changes in technology bring something with them and they're not neutral.
- 01:02:14
- I think that was Jacques Ellul's point, right? That the technology is not neutral. It does change you. Right. Yeah, just even think about something as basic as a hammer.
- 01:02:24
- When you use a hammer, your body is changed. Your mind is changed.
- 01:02:31
- The way you think about the world is changed. Your social standing, if you become really good with a hammer, is enhanced.
- 01:02:39
- You can do things that can earn a living with a hammer. You know, there's just a lot of things that a simple hammer can affect in your life.
- 01:02:50
- Yeah. The old joke is, you know, when all you've got is a hammer, everything looks like a nail. It's true.
- 01:02:56
- Yeah, yeah, that's very true. Well, no, I appreciate you talking about this. I'm really, I didn't know you were working on a book.
- 01:03:02
- Maybe you posted it and I didn't see it, but when you have the book ready, I'd love to have you come back on and flesh out more of this because this is,
- 01:03:11
- I'm sure we're gonna be having a lot more discussions in Christianity about this topic moving forward. And so if people want to follow
- 01:03:18
- Pastor C.R. Wiley, they can go to crwiley.com. You are on Twitter and Facebook.
- 01:03:24
- It sounds like Gab as well. And yeah, I appreciate it. Thank you once again.