By Philip Larrey
Philosophy (Johan Seibers)
Although I am obviously interested in how new technology is shaping our times, my normal occupation is teaching philosophy. Fortunately, I was able to meet up with a Dutch philosopher who visited the Pontifical Lateran University last year. Professor Johan Seibers is Associate Professor of Philosophy and Religion at Middlesex University in the UK. Before going to Middlesex, he was Reader in Philosophy and Critical Theory in the Department of English Language, Linguistics, Literature and Cultures at the University of Central Lancashire, where he continues to hold an honorary readership, and he is a member of the School of Advanced Study, University of London, Senate House. He also worked for Shell Oil Company, developing future scenarios that the company could study in order to assess the risks it needed to take.
Even though I showed up an hour late for our scheduled meeting in central London, Johan was kind enough to sit down with me at the Senate House and talk about the philosophical aspects of the new digital age. His reflections are personal and profound, and although at times the content of our conversation can be challenging for those unfamiliar with philosophy, a persistent reader will be well rewarded.
What are your thoughts about artificial intelligence? I am going to talk with people at the DeepMind project tomorrow, yet they are very secretive.
I don’t know what their goal is at DeepMind, but I think the ultimate goal of AI is to create an artificial intelligence that would be indistinguishable from a human being. There are still huge obstacles to overcome as far as that is concerned. Take natural language understanding: this has eluded us up until now, and I cannot see that they would have access to any insight into natural language understanding that we don’t already have, which is not enough to reproduce it artificially. The only real difference today is that we can throw more computing power at it than before. This means we can undoubtedly achieve a greater similarity between human language use and the language processing of machines. I don’t quite know, and I’m speaking a little bit out of turn, but in linguistics there has been a statistical turn: using enormous computing power and big data to develop a more statistical understanding of language structure, rather than a rule‐based understanding of syntax. This is a very recent development in linguistics.
That is very profound.
And I imagine that they could apply something like that and attain an impressive artificial intelligence, or at least something that is artificial and looks a lot like intelligence and conscious use of language, which is more than structured symbolic behaviour; it means to be a speaker of a language.
Okay. We can come back to this theme in a minute. May I ask if you agree with the philosophical distinction between syntax and semantics [in other words, is the structure of a language (such as grammar) different from the meaning of a language, which represents a classical theme in linguistics]?
Well, I’m not a professional linguist although I did study these things. In terms of my background, I was taught by the famous Dutch linguist, Pieter Seuren, who was one of the founders of the generative semantics movement. He always stressed the organic unity of semantics and syntax.
The original generative semantics movement came out of the first phase of transformational generative grammar, where the idea was that the syntactic surface structure was a transformation of a deep‐level semantic representation to a syntactic representation. Thus, the function of syntax was nothing other than to take a semantic dependency tree and translate it into a linear structure that you can actually voice. This approach was called generative semantics and a few people around Chomsky initially started it. From what I understand, it was quite aggressively cut down by the Chomskians, who soon dismissed the idea of a strong rule‐based relation between syntax and semantics. They were supportive of a minimalist programme, as you know.
I think that Chomsky has since repented.
I haven’t actually followed recent developments. It could well be, but there was certainly a phase where he tried to make the syntactical component completely independent from semantics, from any semantic content.
I recently read a phrase from Kurt Gödel, referring to his incompleteness theorems, in which he states that the irreducible relation between syntax and semantics is a demonstration of the uniqueness of human intelligence. Personally, I think he means the presence of the soul; he doesn’t say that, but that is my interpretation.
I would agree with that. I think all of this is very important. For me, what is important in understanding language and in understanding thought is to understand that there is something that cannot be reduced to a rule‐based operation, nor to a statistical pattern.
Okay. Let’s talk about that. I’m very curious to hear what you think about meaning or meanings.
Meanings represent a very difficult area to think about, but I would like to say that in meaning there is always something that is not generated by us that comes from without. There is something given and meaning is infinite, it is endless. And the fact that these two things are there implies that meaning can never be reduced to an operation that we carry out, and that seems to me to be important. For me, artificial intelligence is, in a way, uninteresting, because it never touches on the question of meaning. It never gets there and, furthermore, it seems to say we don’t need it. First of all, I don’t see how you can do without it; and I also don’t see why you would want to do without it. It would be terrible if meaning were to disappear.
If what were to disappear?
This infinity of meaning, its being beyond itself. Heraclitus says that no matter how deeply you look into the soul, you will not find the boundary.[1] I think that begins to express what meaning is. Heraclitus is ahead of Aristotle here. His statement does not only imply that ‘the human mind is in a certain way all things’; it also implies that the structural principle, the measure, of the soul, if I can call it that, is boundlessness.
I think from a historical point of view you are absolutely right in terms of the apparent incompatibility between a very strong artificial intelligence research project and meaning. Therefore, the initial interest was to deny the importance of meaning. I think as we have become more sophisticated and have begun to attempt to simulate natural language in artificial intelligence through some sort of interface, meanings are now very, very important. What you said before, that this is extremely complicated, is exactly right; so I’m not exactly sure where I am on this either. However, I do think that the strongest AI projects today, like DeepMind from Google or Watson from IBM, are trying to simulate meaning. I agree with you in that the machine will never possess meaning, but I think it will eventually simulate what we consider having meaning to be.
Exactly: simulation is the proper term. So, the simulation is based on producing something that, to somebody, looks like something else. I think we can place this idea in very simple terms; for example, in the book The Meaning of Meaning, by Ogden and Richards, from the 1920s[2] . . . Ogden was the translator of Wittgenstein’s Tractatus, and I. A. Richards is the great rhetoric scholar who wrote a wonderful little book called The Philosophy of Rhetoric, containing his lectures from the 1920s and 1930s, in which he says (I’ve always thought that he was right about this): no matter what anybody tells you, don’t believe that anybody can explain how it is that we can have one thing standing in for another. So, the basic symbolic operation, that one thing stands for another, is a function of consciousness, if it is not simply what consciousness is. How is it that there can be an identity between something and something else, which remains another thing? Here we can also see what it means to be a speaker of a language as opposed to a language processing unit. The speaker of the language occupies this ambiguous symbolic space: to speak a word that is not the thing, but somehow the thing and the word belong together and can stand in for each other. For a language processing unit, there are no real words, you might say. There is no verbum, no act of speaking.
Are you talking about a token? Is this what logicians refer to as a token?
I think it is a completely general problem and I would call it the problem of symbolization. A symbol is something that stands for something else. Without it, we cannot have consciousness. We do it all the time yet we cannot explain how it is that we do this. We can only say we do it all the time; we can understand that it’s always happening and it underpins everything. It may well be that you cannot make a machine that does that, but it is an open question and one whose answer will depend more on trying to make one anyway, rather than on speculation. We don’t know what reality has yet in store.
But the machines manipulate symbols, don’t they?
Yes, they are seen as manipulating symbols by someone who already understands the symbols.
Okay. You’re right. Now, backtracking just a little bit, when you use the word ‘consciousness’, what are you referring to?
I didn’t want it to be an embarrassing question. Personally, I do not know how I would answer it.
Consciousness is knowing that you know, experiencing that you experience.
That is a pretty classic understanding of what consciousness is.
Consciousness is any first‐person, subjective state; on this I am with John Searle. I don’t see how anybody has ever given a credible suggestion as to how these states could be explained only on the basis of third‐person states. It is perhaps even the other way round (something Kant would have said, I think).
I don’t think it can be, even by definition. Searle gives the example of feeling pain as a first-person experience: you know what feeling pain is, and you can also describe that objectively, but those are two different things.
Yes, exactly. Although AI is concerned with intelligence, not consciousness, let alone self‐consciousness, I think AI is based on the presupposition that one day it will be possible that, on the basis of third‐person facts or processes, we will be able to create something that is a first‐person state. I do not see how that could be done. I think philosophically it raises interesting questions because, at that point, either you become a Cartesian dualist and you say, ‘There are two fundamental substances’ or you say, ‘No, somehow this is possible’ and you become a panpsychist, in the sense that everything has consciousness. Actually, that is the line that I would defend, to a very far extent. Consciousness is already there. But intelligence and consciousness, as I said, are not the same thing. Intelligent behaviour, even learning behaviour and the management of ambiguity, fuzziness and ambivalence, may well exist without self‐consciousness. There are many examples of it already, and it is an open question what might yet turn out to be possible.
I would not have thought you would defend that position, but explain it to me further.
Because it seems to me this would be the only logical option; that somehow this consciousness must be everywhere, latent, or in the nucleus. Consciousness must be there, even in the most basic excitations of the real. There must already be a kind of referral back to itself, in some form or another, in the very act of being itself. I would say that gets us out of a lot of problems that otherwise seem to be unsolvable. Perhaps, many ontologists might say, the price you have to pay for that is far too high, because now suddenly even electrons are at least potentially conscious. But it is just another way of stating the identity of being and thinking, a good old metaphysical principle. Being is just that which is thought, or known; thinking is thinking of being. The two are the same and yet different. We can understand this in terms of a divine consciousness or in terms of subjectivity, and I think we can also make it amenable to a broadly materialist ontology.
You’re absolutely right: that’s very consistent.
Some philosophers have come very far trying to think in this way; for example, Alfred N. Whitehead in his great contribution, Process and Reality, inasmuch as he takes experience as an ontological process, or reality as a self‐actualizing event, an occasion right there in the constitution of the event: it is in the basic form of the real.
That is incredibly interesting. I don’t know if I would want to go there. Let me throw this at you: could consciousness be an attribute of a being that has a certain complexity? So, in this way, we would reserve consciousness to some, but not to all.
Yes, and also Whitehead would certainly say that only what he calls hybrid prehensions can have the kind of consciousness that we would consider consciousness and that is probably true. It’s almost a statement of fact. This does not mean that the principle of consciousness (which in my view is not something complex but simple, namely, something stands for something else) cannot be present in a much more primitive way.
Can I ask you where you think consciousness comes from? In a way, you’ve already answered this using panpsychism, latent in all of reality.
Yes, then the question becomes: ‘Where did all of reality come from?’ Why is there something rather than nothing? A question often misunderstood today by limiting its scope to the presence of a material universe. It is a confrontation with nothingness, perhaps as an impossibility, or at least a ‘rather not’. Leibniz formulated this question in this way for the first time. In his words the ‘rather’ is already there. There is a will‐intensity to existence; it is not mere presence. Understanding why it may not be possible for us to give a definitive answer to these questions is more important than translating their meaning or offering an opinion about them. I don’t think it is helpful to say, ‘I believe this or I stand here’; it is important to understand what the reasoning behind these positions is. I have, personally, come to a view where I accept that there is a very strong argument to be made in philosophy for the existence of necessary Being, which must be seen as the cause of the rest. There is obviously contingent being; contingent being cannot be fully understood without a reference to necessary Being, because if all being were contingent, there could be a situation in which there were nothing, but that is a contradiction; or rather, when we try to think of that, we discover that our concept of ‘nothing’ is a limit‐notion, perhaps even a pseudo‐concept. And so affirming a necessary dimension in being seems reasonable: it can’t be such that there is nothing at all.
That sounds like Thomas Aquinas.
Yes, this is a paraphrase of the Third Way. I may change my view on this again, but at this point in time I think that from many viewpoints that is a very sound way to think about this problem. What do you think?
Obviously, I teach at the Pontifical Lateran University, and it is nearly impossible to be a Catholic philosopher and not agree with Thomas Aquinas. If you’re speaking specifically about consciousness, I’m not sure where I am at the present moment. I may have an idea which might sound contrary to Catholic doctrine, so let me just test the waters with it. It is very controversial, and if you prefer not to comment, that’s fine. I’m taking a large risk by mentioning this, but here goes. It refers to artificial intelligence. If consciousness is the result of a sufficiently complex computational process, then AI will achieve it. This is a big ‘if’, and I think there are people working in AI who believe that as long as we get the power and the complexity going, it will achieve consciousness. I disagree with that point of view, but I do think there is a reason that people hold it. The controversial position I want to take now is the following: if we managed to create a sufficiently complex machine, God would endow it with a soul and it would then become conscious. Hence my provocation to you.
That is a very interesting idea. Let’s take a few steps back, and thank you for placing this question in front of us. Karl Rahner had a similar idea, and Church officials often did not like him, at least in the beginning; looking back, such ideas were too controversial before the Second Vatican Council and even after it. Rahner, I believe, had the idea that there might be extra‐terrestrial intelligence and there might be Christ events on other planets — that God would also have sent his son to whomever these intelligent species are. I always thought that that was a wonderfully open‐minded idea, and that it does not take away anything from the unicity of the Christ event, but that singularity is not in conflict with the idea that something happens more than once.
That is correct. That is quite a tenable proposition, although we would have to consult a theologian to be sure.
Something similar might be the case with the creation of beings with souls. There might be, even if we stay strictly within a theological or even Catholic perspective, a latency in nature, the machine with a soul, which is provided for in God’s plan but whose factual existence depends on us making a machine that can house a soul. It is an inverted Frankenstein story in a way, or an ‘homme machine’ in a sense in which La Mettrie had not conceived of it.
Well, I don’t know what that being will be.
It could be something that we don’t fully understand yet.
Which is precisely my fear. The philosophical or scientific rationale behind a position like that (and I know that these are unexplored waters) stems from considerations concerning in vitro fertilization: we create matter that is offered to God for ‘ensoulment’ and this usually occurs within the womb of the mother.
Yes, but it does not have to be only there.
Now, with the technology we have, it does not have to occur in the womb but can also happen in a petri dish. This is what Aristotle called the predisposition of matter (which usually occurs between a father and a mother). We can now do this independently of them, and we still obtain a human being. No one doubts that a baby born this way is human, though I think the question may have arisen at the beginning; by now it is obvious that they are completely human. Let me take it one step further: what happens when we create artificial sperm (with which they are now experimenting)? Now we have something we have created, and by using a donor egg, we can fertilize it and we have a human being. So what happens when we create an artificial egg and artificial sperm?
Ultimately, you could build it up from as deep as you can go. It would involve a more complete science, as you take a bit of matter and build it up. You would not even say that you are playing God, because you are simply manipulating matter in the way that has been given to us to find out how to do it. But it is perhaps even more interesting to ask why we are driven to these types of fantasy: what does it say about the human beings we know exist that they are so interested in the question of mastery over nature, so much so that, if we are not careful, they create their own destruction in the process? I think it is fundamental to the human relationship to being as a whole that we can experience it as a gift, as something that we have not made but that reaches out or relates to us, and vice versa. Maritain spoke of a basic ontological generosity that defines what existence is. Even in our fantasy of mastery it comes up, in the idea that God will endow matter with a soul. What is that other than saying that, in the light of the absolute, something is made part of a totality without being reduced to it? In a metaphorical sense, everything we make is endowed with a soul. Now, if you are right and God does his part at that point in time, assuming that it be possible, you could then ask the question: does he want to do it? Does he have to do it?
Well, that’s a very good question. In the Middle Ages, this question was actually raised concerning a child conceived from a sinful act (as in the case of rape, or of a couple out of wedlock), because such a child was conceived in violation, in a sense, of God’s law. Well, no one holds that position any more, because those babies were just as human as everyone else; they were just as human as babies conceived in love (which is the way God wants it). So God himself is bound to the laws of nature because he has chosen to be.
He has chosen out of freedom and in a way can no longer interfere with it and thus follows the laws of nature. However, I also believe the laws of nature can change and we can see that they are slowly changing or we just don’t know.
We would have to ask Him.
There is no rational reason for us to say in advance that the laws of nature are unchangeable.
I’m not sure I follow you on that last notion.
It is really an inductive inference that the laws of nature are the way they are, as opposed to different; as, for example, the gravitational constant could be different and because it could be different, there is no reason not to suppose that it might have been different at one point.
I agree, but I don’t know what the point of that is.
The point is that you said God is bound to the natural laws of nature, so I would say maybe not. A ‘law of nature’ might be just another statistical generalization. I think we should not forget to hear the oxymoron in ‘natural law’; it is almost an ironical concept, a physis which is thesis at the same time. Nature does not need the law, and the law is not natural. The concept, and the easy use we make of it, hides a metaphysical embarrassment and, perhaps, more of a peek into contingency and groundlessness than we often care to admit.
Okay, I see how those two arguments can be made.
But what we are speaking about now is not necessarily the same point. You can hold that position and it does not interfere with your view on the possibility that God would endow a machine with a soul.
Here is my dilemma: if this does happen, we might witness an atheist AI developer who would claim that he created consciousness. So the question you asked thirty minutes ago is the most important: where did this consciousness come from? I think, as Catholics, we need to start addressing this. Instead of saying the machine will never be conscious, we need to examine all possibilities.
I think that is a very important and a very true argument. It also links to a point made by Pope John Paul II concerning evolution, when he said that Catholic thinkers should not try to find problems with evolution theory, that we should not repeat the errors of the Galileo case.[3] We should learn from the errors of the past. The Pope insisted that the soul is created each and every time, and that is something which cannot be accounted for in evolutionary terms, but we need to make it work while embracing evolution as a biological principle wholeheartedly. This is the official position of the Church, and I think there is a parallel between that way of thinking about evolution and your idea about sufficiently developed artificial intelligence, such that consciousness may arise.
From a hypothetical point of view, how far away do you think we are from something like this?
As I said before, I don’t see it happening. It would be so miraculous to my understanding if an artificial intelligence engineer would be able to make a machine like that; yet, if it happens, I would be happy to take your view and say God must have put something in there!
I see on the one hand you are open to the idea but on the other hand yours is a strong criticism of AI.
At a philosophical level, I don’t understand how further complexity can lead to the light going on, but I am open. I do not think there are conclusive a priori reasons that you can adduce and say ‘It is not possible’.
I think Nick Bostrom[4] refers to this in his book, where he talks about a missing link. I think he is on the same page as you are. Just because you augment complexity to the nth degree, this does not give you consciousness, and he agrees. But if we find the missing link, it may happen.
I am open to that, and yet I cannot see what it would look like. It is not a link; the metaphor is misleading because it does not incorporate the main point here: that complexity and consciousness are on different levels. It’s not a broken chain. Perhaps bridge is a better metaphor. We need bridge builders for this task, and maybe more human creativity needs to go into it than we think. Maybe we have to sensitize ourselves more to the life in all things before we can see it also in machines. AI is interesting also because of this: we are putting ourselves on the line, in more ways than one.
Precisely: we don’t know.
In my view, it would have to accomplish a kind of creatio ex nihilo in a way, and in that sense, I can understand why you say what you say.
Okay, can we talk about the future?
Yes, let’s talk about the future.
Can you tell me more about the future, and what you did at Shell? I understand that you were a member of their original team that projected ‘future scenarios’ in order to help plan effective risk management.
I’ve always been interested in the future from a philosophical point of view. I tried to understand temporality and how it relates to our own existence. To me the future, the fact that there is newness, seems to be central to what it means to exist, and yet the discussion of the future arrived relatively late. People don’t usually think about being and thought in terms of the new, in terms of the future. If you look at most views of knowledge, they have some sort of relationship to the past. In Plato, knowledge is remembrance, and that goes all the way through to Hegel: knowledge is of what was already there; or essence in Aristotle pertains to that which something ‘always already was’. The idea that the openness of the future might be a basic, real dimension of being, that being in other words is to some extent radically undetermined, is not very old.
That is quite metaphysical.
It is the idea that there is a structure; there is a reality and thinking is only finding your way back to it, which in a very strange way contrasts with an experience of life that leads into an unknown future, in which something new might come about or in which we create things that were not there before; the realm of creativity and surprise, of the radically new.
In previous conversations, you mentioned that you have worked with other institutes and on future planning. My point is that it would seem obvious that we need to reflect on this. Let’s just take one example: investment. You invest in something and you expect to have returns in three years. You obviously need to calculate what the future holds, and it seems obvious that we should be much more involved in this kind of reflection.
We sometimes make a distinction about what ‘investment’ actually means. Where does the word come from? Today, we use the word investment only to indicate a potential return, an outcome, an output, rather than its more original meaning, where we ‘invest’ something with a particular quality or a particular purpose or meaning. We have an instrumentalization of investment, a term originally referring to the act of clothing someone in the robes of an office. Now when we hear the word we first think of financial returns, and then we think of money, and only then do we realize that actually we invest a specific object with meaning. So this word carries within it human agency: the possibility to create, change and make new, and the human capacity to take things into one’s own hands. This human activity of investment is something we need to think about in new ways, to understand what it really is: how it is about having a nurturing, agentive relation to a potential future, rather than simply one of calculating what my return will be and managing the uncertainty of a future experienced as fate.
These are two fundamentally different attitudes: the first has to do with the future as an open horizon of possibility. How can I relate to the fact that the future is open? The other is a colonizing attitude towards the future: how can I close down the future sufficiently, already now, so that I know what will happen in three years’ time and can use this knowledge to my benefit? That attitude of closing down the future is really a denial of futurity, unlike the first, which is an opening to what the future has in store, taking up an attitude of responsibility for the future rather than one of exploitation.
I guess it depends on what your priority is.
Of course, there is a place for both of these things.
You won’t make a lot of money unless you in some way direct the course of the future.
Yes, but even in the context of business planning it has become very important to make people aware; this is one of the first things that we do in workshops for scenario planning. The first difference we talk about is that between scenario thinking and predicting the future.
I’m not sure I understand that notion because I see those two as very close.
Yes, but they are actually different things because predicting the future means reducing uncertainty: based on what is happening now, we can extrapolate that such and such is going to happen in the future (like predicting the weather), whereas scenario planning is thinking in these terms: based on the drivers that are present today, what are the alternative futures that might be and what are the knowns and unknowns?
I was going to ask in terms of your experience: aren’t corporations doing that kind of thinking?
They are doing that kind of thinking and they are doing it to get a better handle on their environment; you might say the temporal environment in which they exist.
I was thinking about Facebook, for example. I’m sure they have teams working on these kinds of things.
And of course we did the same in Shell. Actually, Shell invented long‐term scenario thinking along with the RAND Corporation in the 1950s and 1960s. They invented scenario thinking and the goal was always to come up with alternative ways of looking at what the future might look like and to use that whole set of alternative scenarios to calibrate or to judge business ventures. They asked, ‘Will this particular business investment or this business decision that we want to make hold up in each of the scenarios?’ The key phrase was scenario robustness: will your business decision be a good one regardless of the scenario that materializes with respect to the business environment?
I guess you could always be wrong too; did you come across that?
Yes, of course. I was at Shell Headquarters when 9/11 happened and we had been working on scenarios for two or three years and no one had come up with that specific scenario.
That’s amazing, but who could have foreseen such an event? How would you explain the rise of the Islamic State, for example?
That is a difficult question.
This is like a pseudo-nation; it’s a caliphate.
It is not a nation state by any means. It uses the demise of the nation state to describe itself in what is in fact a geopolitical region.
They receive taxes from their people; they have state education and a state hospital system.
If it goes on, there will come a point at which it will be recognized as a state, and like many other states the bloodthirstiness of its founders will become a matter of history, diplomatic relations will be set up, and so on. I can see that happening.
What can you tell me about the future of humanity with regard to digital technology?
I’ve been thinking about this theme and I’ve recently been doing a lot of work on the future of education, collaborating with people who are working in that field. There we see the impact of technology in a dramatic way. We can see that the whole idea of a curriculum is changing. The idea of a classroom is changing and what the teacher does is truly changing under the influence of technology, in ways that are atypical. When presented with a new medium, people understand it in terms of the old media they already know. For example, when we started to write emails they looked like letters. It took several years for people to realize that emails were not letters. Marshall McLuhan wrote a lot about this; for example, with his analogy of the rear‐view mirror: it’s as if we were driving into the future looking into the rear‐view mirror — we understand what is coming in terms of the past, with which we are, after all, familiar.
But is there any other way we can do it?
It appears there are two modes or mentalities of knowing: tying something back to what you already know, or letting go of what you already know to understand what is new. McLuhan held that artists do the latter. They live right on the shooting line and that is why people say artists are ahead of their time, because they are the only ones who live in the present. And so today educators can learn from artists. They have a great opportunity to give new meaning to Freire’s remark that education is at once ‘conscientization’, politics and art. As education is emancipated from the dominance of certain media, new forms of education can arise with new media. But we can also focus again on those aspects of education that have to do with encountering others, and so in a way with an emancipation of media themselves. We can become aware in new ways of education as an act. But predicting technological development is a very tricky business. We just don’t know what lies five years ahead. We really have no idea.
I think today we can identify several of what you referred to as ‘drivers’ and then we can, to a certain extent, predict where those may take us.
Yes, we try, but there is also an agency involved, isn’t there? There is something we can do (like education): we may be tempted to succumb to an ever greater instrumentalization of the educational process, in which education becomes more and more just the transferal of skills or the training of skills, assessed by measurements and tests. Or we can search for a more diversified approach in order to get away from the idea that one size fits all. Look at the school as a production belt of diplomas, the industrial model of education: you take the raw material in; you put it through a standardized process; you do a quality check at the end and get the specification out. Even though that model is dying out, it has not disappeared. It’s dying out because our technology allows us to be much more diversified in how we manage that process; but education still happens within the parameters of an instrumental view of what education is. There is also the opportunity to use technology to develop new ways of educating that actually move away from that old idea, to explore and experiment with new ways in which education can be personal formation, holistic and humanizing.
I’m so glad you mentioned that because at my university we have just hosted a convention on the concept of ‘Bildung’ and formation, like ‘paideia’ in ancient Greece. The ironic thing for me was that the only avenue we did not discuss was the digitalization of formation. We talked about Renaissance man, we talked about the ancient Greeks . . .
These historical examples can inspire and we can learn from them. But if we get stuck singing the reactionary blues, we’ve lost it, especially in education, which is so centrally about youth, change, blossoming and opening up, a lifelong youth. We can and should salute Renaissance man, but we cannot now go back to him.
I guess I didn’t have the strength to say that to my colleagues.
Why not? Because we tend to think about technology or the digital revolution as something that is an enemy of humanization. This is where I would say future studies can help: to find ways to think about digitalization not as a threat to humanity but as an opportunity to explore avenues that we may not have even known about. This is what an open mind to the future can actually show you.
So would that be your personal position?
That would be my personal position and if I may add one thing to that, I think this has ramifications far beyond education or artificial intelligence; I think it touches on our understanding of religion as well. I think it is necessary today to take up the idea that Whitehead began to explore in Religion in the Making, which was first published in 1926. His idea was that change and development are much more intrinsic to religion than we tend to think. I think in most institutionalized religions there is a kind of harking back to a past, a fetishization of origins and tradition, whereas that runs completely counter to the message of Christ that God is a God of the living and not of the dead. Christianity has a forward‐looking direction embedded into its very essence. It seems to me that clinging to the past, to a fixed identity, is often related to fear and security, but embracing the new can only be done with courage and hope. Religion as a human reality has both elements in it, because they are both part of what it means to be human. But I think the genuine life‐giving newness in being that all religions (not just Christianity) have always sensed and known about is especially important today. Religion has something to offer there that is unique. This openness is much more important than the closed, all‐encompassing meaning that religion sometimes wants to pour out over everything, causing unspeakable suffering in the act. And this openness belongs to religion, by right and from the start. Sometimes the word religion is explained by reference to the Latin root religare, to tie back to the divine, or also to past tradition. We could think of the human orientation on the future and on the full, unknown gamut of its potentialities perhaps as ‘proligion’. Our ties forward to the future are much deeper than our usual notion of ‘progress’ implies, and there is a place for the heart as much as for the head in them.
I think that is because we tend to be control freaks, we want to be in control.
The attitude of wanting to be in control is at the root of the technological attitude.
In what way?
Heidegger wrote a lot about technology as a means of control. I think he missed an important point, which one of my professors pointed out to me once. He said that the thing we often overlook when we think about technology, Heidegger included, is that if you don’t get it right, it doesn’t work. If you are fixing a car and you don’t do it right, then the car doesn’t work; it doesn’t drive. So, doing it right is not up to us; it is not up to us to specify how it is to work, because matter has properties of its own. The idea that with technology we are in complete control over a completely plastic reality is not true. We have to be attentive to what reality shows us of itself, especially in our technological relation to it.
One of the issues that people are bringing up now is nanotechnology.
Nanotechnology is a good example of what Ernst Bloch called ‘alliance’: there is a possible alliance between nature, the human sphere and technology, or at least we can think of the relation between technology, humanity and nature as a relation of alliance rather than as a relation of exploitation and control.
Actually, that is a good point on several levels, not just in macroscopic terms.
Yes, it also applies to the real nuts and bolts of what technology is. There is more to be said about technology than ‘it is just control’. I do take the idea, which really goes back to the Frankfurt School, that Western culture has been marked by a controlling attitude towards nature and also towards our own natures. The orientation on the future has something to say about this. It wants to free us from this old, oppressive aspect of our own psyche and our own behaviour. I think it is deeply ingrained in almost all cultural institutions. The root of it is fear. Today, now that fear is becoming ever less obvious in some ways and is seen as something to be managed and got rid of, the control to which we subject nature, others and ourselves has become extreme. We have not liberated ourselves from this basic feeling; instead we repress it more and more, and as a result our attempts at control become ever more intense and unaware. It is as if we do not dare to practise alliance instead of control, out of fear that there will not be enough for everyone.
I think it is so deeply ingrained that I am more cynical than you are. It seems much more plausible to me that the course we are going to take will be indicated by the ‘drivers’, or specific interests, especially the interests of the powerful, rather than by open-mindedness and an almost deferential attitude towards the future and what it may hold.
Deference or gratitude — an attitude that pulls us into the direction of the religious sphere. And not just that. It also brings us in the vicinity of what justice means. Justice does not close one off from the future, but keeps open the possibility that the future might be different, it keeps the future itself open. That is a core aspect of justice. It is what the heart does when it forgives, for example; there is no forgiveness without a future or without hope.
I think the future will be much more humane if that is the course that we take.
But I see your point: there is little indication that this is the course we will take. Look at Nick Bostrom and the world of the transhumanists: if anything, they are creating a new discourse of privilege; they are creating a paradigm of the haves and have‐nots. How much can you afford to enhance yourself?
Do you think that this has already begun?
In a way it’s as old as the world itself and so nothing new, but the scale of this age‐old drive to improve and distinguish ourselves is something we really need to get our hands around. But what do you propose then?
I’m trying to speak with intelligent people like you and see if we can’t perhaps defy the status quo and the historical trends that have marked humanity. We did not speak about war, but we could discuss that. Pope Francis has said that World War III is already underway, but in a piecemeal way. Look at all the violence in the world. Look at what happened several months ago in Tunisia.5 The city that was attacked in Tunisia, Sousse, was built by the Italians. It is on the Mediterranean Sea, a beautiful town with five-star hotels and discotheques. It is a very Western city, which is why so many British and other Western vacationers were there, and why it was chosen as a site to attack. As the Minister of the Interior stated, ‘No country is at zero risk’; he basically said this could happen anywhere. I think he’s absolutely right.
I think the Pope was very right when he said we are already in a state of global war. This is something we’ve been speaking about for many years: the spread of war and of warfare tactics used by actors who are not nation states, like a pandemic; a state of total war, not as permanent military violence, but as a smouldering everyday condition that erupts unexpectedly, now here, now there; a state of permanent terror which no longer has an outside.
I am somewhat cynical but also hopeful because I am a priest; I am hopeful that we can harness the powers that would lead us to a better future and a more humane one.
Obviously, the Church has a role to play in this, and I hope that the Church sees itself as a shepherd leading people into an open, free future, let us say to ‘greener pastures’. But I think the role of all churches, of religion, is diminishing, even though there has been something of a return of religion in quite a remarkable way. If you look at the 1970s or the 1980s, a lot of people in Western Europe (maybe more than in America) would have thought that religion as an influence in public life was about to disappear completely; yet that has not happened.
But is religion synonymous with spirituality?
No, it is not. There is faith, there is spirituality, and there is religion.
Unfortunately, what I see is a trend towards conventionality in religion, especially in the West. The people who are still religious are so, I think, to a large extent because they are conventional. There is a value to that, even though I personally am not that way. I understand that many people are religiously conventional and they value that. I think there is an increase (especially among the young) in living a spirituality that has no part in organized religion.
And that is very strong. I see it among my students. I see it everywhere. But it is an open question whether so much spirituality can sustain itself, or whether, as Scholem teaches us so beautifully with the example of Jewish mysticism, the spiritual life needs a framework within which it can flourish, which it also always transgresses and disrupts, but without which it is almost too vulnerable. Religion has to open its doors widely for the most exotic forms of spirituality. It risks excluding the most beautiful flowering of our religious existence if it is any less than welcoming and willing to be a student of spiritual teachers. The rise of spirituality in Western culture is, at least in part, a call to change addressed at the churches. Here Rahner, whom we’ve already mentioned, proved well aware of the future himself. He held that mysticism would be the core of future religiousness.
In terms of digital technology I am also consoled by a tendency I see among my students (tell me if this is your experience too): as the technology becomes much more pervasive (they’ve grown up with smartphones, with Facebook), they tend to use it less.
Yes, it becomes less of a thing to do. Yes, that’s right. I have begun to notice that too. There have been years of being so overexcited with the new, yet now that is wearing off a little bit. I think that applies also to the internet at large. For a time, it was such an explosive change in our information and the way to view information that it took us a while to find our bearings. But I think we are doing that more and more, and that gives hope. The fact that technology wants to disappear into the background is hopeful.
I think people are more interested now in the quality of their communication and of the information they transfer than in simply communicating. I have seen that, for example, on YouTube with videos that go viral and receive millions of hits in a short amount of time: those videos are really well done, and they are really well thought out too; there is something in them that excites the imagination. People are not just clicking on anything, whereas before perhaps they did. People are much more demanding.
Yes, we are finding out that we have to manoeuvre in intelligent ways when publishing things on the internet, but also that our understanding of what it means to be an author, a publisher or a member of an audience or public is changing. It’s interesting to think about, and a good example of a scenarios question: what will internet use look like in ten years’ time? I suspect that a lot of it will recede into the background of our lives; at least I hope that it does. I think it is sad that in public spaces everyone is obsessed with their little machines, and eventually people will have had enough of that.
I think sooner rather than later. I think ten years is too long, although I do not know. I was waiting for the train at Gatwick and on the platform everyone was looking at their smartphone. And there is something psychological to that: the need to feel connected. Yet I am optimistic. Let the machine do what it does well and let me live as a human being. So, if my smartphone can take care of a lot of things for me, like mundane issues or things that must be processed, the exchange of information . . . I don’t need to be involved in that, as long as it is reliable.
These are the lines along which we should think about the future, about technology and its implications for the future, because in this way it becomes a creative task; it becomes a question of how we actually shape the future that is ours. To shape it, to a large extent, rather than sitting speechless in awe of an unknown and unknowable reality, or gazing at signs in the sky trying to figure out what is going to happen. The hopeful attitude becomes something more than the question ‘Do the indicators favour pessimism or optimism?’ Hope becomes moral, an active commitment to the proposition that change for the better is possible. Bloch says we do not have the right to be pessimists. We certainly do not need philosophy if we are pessimists, for in that case the situation takes care of itself. Bloch’s remark may be a rhetorical formulation but it points out something important when thinking about optimism and hope: they are not things that you can base only on the evidence.
Hope is always in the face of the hopeless, otherwise there is no hope. And that is why hope is such an important category when we think about the future. You cannot understand what futurity is if you don’t talk about things like hope and despair. They are not secondary effects that come once you realize there is a future. They are the ways in which the future presents itself. That is why these things are so important to discuss.
Kant uses a wonderful analogy in Dreams of a Spirit-Seer:
I find no attachment nor any other inclination to have crept in before examination, so as to deprive my mind of a readiness to be guided by any kind of reason pro or con, except one. The scale of reason after all is not quite impartial, and one of its arms, bearing the inscription Hope of the Future, has a constructive advantage, causing even those light reasons which fall into its scale to outweigh the speculations of greater weight on the other side. This is the only inaccuracy which I cannot easily remove, and which, in fact, I never want to remove.
This ‘inaccuracy’ is ingrained in us in a way that appears as a mistake from a certain perspective. Do you have sufficient grounds to have hope for a better world? Then you have to say ‘no’. But there is a kind of mistake in us, a brokenness, and that brokenness, that lack or absence, is what provides the hope of the future. For it, even lightweight reasons will do. Hope for the future is the inaccuracy that makes us whole. We cannot accept the way the world is: we have to try to make it better. This attitude towards humanity, history and futurity does not mean that the angel of history sees no catastrophe. But it can help us to understand a bit better the depth of our investment in the future, which goes as deep as human existence itself. I think this is the way we need to think about the future. We have to do away with the technological and colonizing attitudes towards the future that seek simple solutions to big questions. (But this does not mean that we should do away with technology, far from it: there is also a future for technology.) For these solutions in the end mistake means for ends, and make ends — our desire for immortality, for love, for happiness — into means, means for more colonizing technology. They serve to repress our awareness of the meaning those questions have: questions of the alliance of human, nature and technology, questions of being at home in the world with others, questions, ultimately, of what Bloch called ‘the strongest anti‐utopia’, death. When we think about the future, we have to think about what hope is, what anticipation is, and how these may open up a future to us.
Originally published at medium.com