What are Diverse Intelligences? with Pranab Das

Host Richard Sergay and Professor Pranab Das discuss the definition of intelligence and look at some of the DI projects we’ll be talking about.

S2E1: Transcript

Tavia Gilbert: Welcome to Stories of Impact. I’m producer Tavia Gilbert, and in every episode of this podcast, journalist Richard Sergay and I bring you a conversation about the newest scientific research on human flourishing, and how those discoveries can be translated into practical tools.

This season, we’re diving into the question: What are diverse intelligences? If you’ve ever wondered…if a dolphin is smarter than a chimp, if honeybees can learn even faster than rats do, and if there’s any way for us to learn about how aliens might communicate, this season of conversations is for you.

Throughout the coming weeks, we’ll be in conversation with some of the world’s most cutting-edge scientists leading research projects focusing on intelligence — not just human intelligence, but animal and machine intelligence as well.

We begin our series speaking with Pranab Das, who has, for 25 years, been a professor of physics at Elon University in North Carolina. He has long been interested in interdisciplinary studies, particularly around the relationship between science and spirituality, and for the last several years he’s been the Principal Advisor to the Diverse Intelligences Initiative from Templeton World Charity Foundation, which has a three-part mission: to map the contours of intelligence found in the natural world; to nourish the dimensions of human intelligence, including social, moral, and spiritual intelligence; and to encourage the practical and positive applications of artificial intelligence and machine learning. Richard will be in conversation with researchers exploring questions around all these intelligences. Let’s start at the very beginning:

Richard Sergay: I want to throw out the basic term intelligence and within a few minutes, although I know we could take the rest of your career to try and define it, tell me how you define intelligence.

Pranab Das: It’s a great question. You mentioned to me in advance that you would ask this, and I have been sleepless because of it. I think the first thing to remember is that our project has that “s” on the end: diverse intelligences. Implicit, or I guess explicit, in that is the idea that there is no one unitary thing that is intelligence; there are many, many different instantiations of it. So I begin with the idea that there is a universe, and that apprehending that universe in some successful way is a goal of life. You could say that our job as thinkers, our job as intelligent beings, is to come into interaction with the world in a way that does good things. So the simplest formulation of intelligence, and you’ll see this a lot, has to do with problem solving.

In fact, more than a few people would claim that all intelligences refer to some kind of problem solving. Therein lies peril, because if you take that route and begin to ask what problems are, it’s very easy to reduce everything to the kinds of problems that we’re good at, or I should say some of us are good at. The most prominent result of that kind of thinking is something called psychometrics. The psychometric movement was a movement to measure our capacity to solve problems, and it resulted in the IQ test, among other things. The problem there is you do identify people who are very good at solving particular problems. And you like that sort of thing if you’re the kind of person who likes that sort of thing. In other words, if you like solving math problems, if you like these sorts of social and political and economic and mathematical worlds in which the dominant power sits, those are important to you. But what’s been terrible about that is it has deeply de-privileged empathy, the capacity to get along with each other, which is every bit as important for successfully apprehending the world, especially the world of other people.

Pranab Das: And it’s only really recently that a fellow named Howard Gardner published his theory of multiple intelligences, proposing that there might be six or seven or more intelligences that people have: emotional intelligence, kinesthetic intelligence, the capacity to move and dance beautifully. So that’s a kind of jumping-off point for the idea that if even humans have a diversity of intelligences within us, the world of animals, the world of AI, must have even broader diversities. So just to wrap up, I guess what I would say is that for us, the mission first and foremost is to recognize that apprehending the world comes in many forms, and to appreciate as many of those forms as we can.

Richard Sergay: So you were very clear about adding the “s” to intelligence, so it’s diverse intelligences. And the reason for that is what, exactly?

Pranab Das: You might say that there are different problems in the world and that we can solve them using different tools. So that would be one way to imagine an intelligence. For example, the intelligence I mentioned of IQ is very good at solving word association and mathematical and shape association problems. A different way of thinking underlies people who are wonderful around other people, who get things done in social and political circumstances. Those two things don’t necessarily interpenetrate, they don’t necessarily interrelate. So I would say that those are two different intelligences.

Richard Sergay: So you had mentioned the word empathy a little bit earlier. Can we define that under the rubric of intelligence? Is that accurate?

Pranab Das: That’s a great question. There are people who would say that being empathetic or being joyful, being happy, having what we’d describe as positive affect, might just be helpers, modulating our capacity to make good decisions. So maybe there’s an underlying decision-making ability, and all these other things are just fluid modulations, ways of changing or refocusing our attention as we go about solving problems. I think that’s probably mistaken. There’s great work from Brian Hare, whom you’ve spoken with in one of your episodes, and from other people who have studied bonobos. Bonobos get a bad rap. It’s often emphasized that they use bonding through sex as one of their biggest problem-solving strategies. Well, the fact is, they have many ways of comforting one another, and they’ve built a society based on empathy, based on comfort. I say society loosely; their species culture, if you want, is based on that. They’re the only great ape that doesn’t murder, which is a really profound observation. Gorillas, orangutans, certainly chimpanzees, and humans do. So they’ve solved a problem, a way of being with other bonobos, that we haven’t solved. That’s not decision-making, not solving IQ-test stuff. That is the practical result of having tools for empathy, a specific kind of intelligence that allows them to be with others in a meaningful and productive way.

Richard Sergay: So tell me about the genesis of the diverse intelligences project at Templeton World Charity Foundation. How did it come about and why did it come about?

Pranab Das: That’s really exciting. Yeah, that’s a great question. In 2016, the Foundation hired a new president, Andrew Serazin. He pulled together a team, and on that team were folk who were interested in these questions, and they held a variety of workshops. From those workshops emerged an idea of emphasizing the different aspects of diverse intelligences, largely inspired by the writing of the foundation’s founder and donor, Sir John Templeton. He asked: is it possible that there are many diverse intelligences extant in the world that we haven’t taken the time to find, to notice, to recognize? So it was kind of a charge from the donor to inquire whether there might be other things that are apprehending the world, the created world, in a useful, meaningful, productive way.

Richard Sergay: So you have an idea that there are diverse intelligences in three big categories: human, nonhuman, and the technology space.

Pranab Das: So that’s one of the challenges of any initiative in science. The fact is that the best-laid plans are only starting points. They’re just the beginning from which great things grow. So while we had a variety of initial blueprints, we were very careful not to let ourselves become overly stuck in any particular way of thinking. By the same token, we didn’t want to fall prey to our own enthusiasms, a kind of higgledy-piggledy, oh-my-goodness-this-is-amazing excitement as new people came on board, as we discovered new researchers. So we used a variety of mechanisms, the first of which was what we call the champions mechanism. This is something that was invented by Andrew Serazin, and I think it was really brilliant. He said, look, we don’t know everything. In fact, we probably don’t know very much. Let’s talk to a bunch of people who do.

So we reached out to a couple dozen impressive scientists and philosophers and said, look, we’re interested in finding people who are doing cool stuff. Could you recommend a few possible grantees? They did. And then a process of peer review and down-selection resulted in the first couple of batches of really exciting grantees. While all that was happening, we continued to learn more about the contours of the field. We did a kind of scoping exercise of the various fields involved and settled on a few challenge areas. Again, looking under the streetlights, trying to find things that we thought could be productively researched, and acknowledging that there were a lot of areas we would have to exclude or not concern ourselves with. And so we built out three challenges, and each of those then became an area in which we looked for potential grantees.

Now we’re past that point, and we’re moving to something called synthesis. Having developed a very large cadre of grantees, we’re now in the range of 75 total grants, I think. We’ve built a community of grantees and other experts, we bring them together annually, and they have begun to synthesize across their different areas of expertise, across questions, across domains of analysis. What we’re looking to do now is to produce richer, deeper analyses that are synthetic in nature. We just completed a round of synthesis grants, and we’re now in the process of developing ideas for another set of syntheses, and these are very exciting. They’re going to be what we call frameworks. That means they will be theoretical structures that have a strong empirical, that is experimental, basis, that can be falsified, that can be tested against real-world experiments, but that provide a kind of overarching story of a set of intelligences: how they relate, how they interconnect, how they’re different, how you might get from one to another, what the evolutionary trajectory might be, what the narrative space is that links them all together. So we’re in the delightful phase now of listening to the experts and hearing what sorts of frameworks they think might meaningfully be applied across many domains of intelligence.

Richard Sergay: So among the 75 grantees that you have funded so far, pick out a couple for me to represent what diverse intelligences mean. Give me some live examples and why they’re important.

Pranab Das: I would say to listeners: go to the Templeton World Charity Foundation’s website, poke around, and you’ll see many different, amazing researchers doing amazing things. Each of them has something truly exceptional to offer. These are the best of the best.

Richard Sergay: I will say, as a journalist, one of my favorites is the story of Lawrence Doyle and Fred Sharpe, connecting whale signaling and the possibility of one day understanding an alien signal, if that ever happens.

Pranab Das: Maybe you could think of it as an android Doctor Doolittle: what would it take to be able to speak across species, and eventually to speak with an alien species that didn’t originate on earth? Those guys are trying to make the first steps in that direction. What they suggest is that if we encounter alien signals, for example, they will come across great distances. They will lack context. They will be pure in some sense: a simple set of codes, words, some kind of clicks, who knows. It’s hard to find anything disembodied like that in our experience of other species, other individuals.

Mostly we have a lot of backstory. We have a lot of cues. We have a lot of what’s sometimes called metadata. Whom am I speaking to? What do I know about them? What’s their body language? So these guys said, what about whales? Whales interact over thousands of miles, and while they may have some recollection of whom they’re speaking to, based on idiosyncrasies of the voices, almost no other information is carried along with those packages. That may be an analog to what it would be like to talk to aliens. And boy howdy, if we can’t figure out what a whale is saying, when we all come from the same genetic stock and live on the same planet, we’re in big trouble when we try to talk to the hexapods, you know, who come down in their spaceship.

Richard Sergay: So help me, Pranab, understand in terms of diverse intelligences, what does that tell you about what you’re trying to get at, what the foundation is trying to get at and what its impact could potentially be?

Pranab Das: If you take it as possible that the universe has within it tremendous richness, and that that richness isn’t necessarily reaching its peak in present-day humans, then you have to also grant that that richness may reach a more interesting, fulfilling level extraterrestrially. By the same token, the richness that exists on the earth has not been fully plumbed. Here’s an ideal opportunity to contextualize two different undertakings, both of which take it as axiomatic that the world consists of others beyond humans. If we can learn more about other minds, communication with other minds, then we’re really coming closer to a better understanding of the created world, the world that exists beyond ourselves.

Richard Sergay: Which brings me to another terrific example, which is professor Andrew Barron’s work on the honeybee brain. So he’s studying the honeybee brain as a way of potentially understanding the human brain.

Pranab Das: That’s right. Andrew is an exceptional scientist. He has a really deep knowledge of the ways of bees, but he’s also a polymath, extremely skilled in a number of areas, including the theories of cognition and evolution. What interests him in that particular area of his work is that an exquisitely simple, or at least relatively simple, organism like the honeybee must get all of its behavioral complexity from what seems to us to be just a tiny, tiny little brain. It’s not easy to study; it’s still hundreds of thousands of neurons, but it’s easier. So what he’d like to do, what he is doing, basically, is creating a connectivity map of the brain of a bee. And from that he hopes to ascertain the minimal structure necessary to get rich, complex, surprisingly humanlike behavior.

Tavia Gilbert: And here’s that researcher himself, Andrew Barron, who we’ll get to know in the full episode featuring his exploration of the honeybee brain.

Andrew Barron: So if we can model the bee brain, we can take insights from those models and translate them directly into technological applications. If we can model the bee brain, all of this intelligence, all this dynamic autonomous behavior that we get out of bees, we should be able to capture that in the model. There’ll be things that we can learn from that that we could translate into robotics.

Pranab Das: If he can do that, then we’ll know a lot about the modules that might be present in humans, which we then deploy to do even more complicated things, like having language and building supple societies instead of being locked into the kind of social dynamics that honeybees are.

Richard Sergay: So you’re actually telling me that by understanding a “simple brain” like the honeybee’s, scientists could help unlock a much more complex thing like the human brain.

Pranab Das: I think unlock is a tricky word. There is a group in Seattle, for example, that works on the human brain, and they would say you have to have the whole thing to really unlock what’s going on in a human brain. I think what you can say with reasonable comfort is that there are aspects of human behavior that seem to be paralleled in something like a honeybee. Because we all come from a very common genetic and developmental lineage, it would be surprising if there weren’t some kind of lesson to be learned.

Richard Sergay: Help me understand in the diverse intelligences world, how AI is impacting the world around us.

Pranab Das: People must be the arbiters of morality. There is no morality outside of humans. It’s something that we have come to together, that we continue to struggle with. So by no means are these researchers suggesting that we could program a machine to give us morality or to teach us how to be better morally. What they suggest is that we could teach ourselves to be better if we gave ourselves new tools. They have a variety of approaches, one of which is to poll a lot of people and ask, what’s the right decision? You mentioned organ transplants, and that’s what their grant was about. You poll a bunch of people and ask, who should receive this organ, under what circumstances? No one is completely right or unbiased, and you have to be very careful about whom you’re asking, but over the collective, some kind of zeitgeist, some kind of moral compass, emerges.

That’s what morality is in a community: we come to an agreement about what’s the right thing to do. Unfortunately, it’s extremely hard for humans to take on board what hundreds or thousands of other humans are thinking. We’re just not built for that. We’re built for dyadic interactions, for small-group interactions. So if I really want to know what thousands of people are thinking, I fall back on something simple, like opinion polls or political predictions. That’s not a particularly rich way to ascertain what the right thing is, what the good thing is, what the deep moral thing is. AI, on the other hand, is exquisitely suited to extracting patterns from large data sets. So if you give a machine tremendous access to human morality, the machine could extract, in some very cryptic, mysterious way, the essence of the contours of the collective moral mind.

The team at Duke would not suggest you then simply allow the AI to act based on its analysis of the contours of our morality. Instead, it can be our interlocutor. Say you come to a decision and you feel pretty good about it, maybe as a committee at a hospital, but then you could ask the AI what it thought the world at large would have done. Nine tenths of the time, it’s going to say pretty much the same thing you came up with. But if, ten percent of the time, your decision conflicts with its understanding of our whole community’s sense of morality, then you can have a much deeper conversation. You go back to your committee, you bring in new experts. You ponder, you ask religious figures: where did we go wrong? Is the machine misunderstanding people? Were there a bunch of people who made bad decisions and told them to the computer? Or is it a blind spot of our own? So you could use technology, in other words, to help us help ourselves: not outsourcing our moral minds, not outsourcing our intelligences, but enriching them through a self-learning process.

Richard Sergay: One of the interesting aspects of that particular project, which we will dive into in a later episode, is the definition of morality, and whether morality might change depending, for example, on geography. Are those who live in California different from those who live in Georgia? And if so, could a machine that is imbued with whatever we call morality have a different impact on transplant decision-making in California versus Georgia?

Pranab Das: This is one of the biggest problems associated with machine learning in general. If you rely on humans, well, humans have biases, humans have blind spots, humans have culture, humans have regional differences. And unless you’re very self-aware about how you build your data set, you can end up building terrible, mistaken biases into the way the machine apprehends what humans want. Trying to teach machines to know what we want is probably the hardest undertaking presently being seriously worked on in AI. It’s a little bit like the old line: we want to make a machine that gives a damn. If you could get a machine to understand what it is we want, that’s the first step to making sure that its actions are congruent with our ambitions and our hopes for the future.

Tavia Gilbert: Let’s hear from another of the researchers we’ll meet later this season, Jana Schaich Borg, whose work focuses on AI and morality.

Jana Schaich Borg: I think one of the biggest contributions is not just in how to imbue morality in a machine, but in the second step: how do you use that information to impact our moral judgment? The past 15 years that I’ve spent trying to understand how we make our own decisions is really impacting how we think about how the information that comes from the AI should be presented to humans, to actually influence their behavior in a way that will be effective and useful. And in one part of my life, I think about things in terms of: what would I tickle in the brain to change a moral judgment?

Tavia Gilbert: Back to the conversation with Richard and Pranab Das.

Richard Sergay: Do you ever worry, as a research scientist, that technologists could potentially build a HAL?

Pranab Das: So one of the decisions we made early on was not to dive into that part of the world of AI. It’s very highly resourced, so despite the fact that this foundation has substantial resources allocated to the initiative, they’re dwarfed by the budgets of the Googles and the Facebooks and the Apples of the world. While we work closely with a number of artificial intelligence scholars, we don’t invest in folk working on questions like that. The way that’s generally framed is that there’s going to be something called artificial general intelligence, that is, intelligences that can come to new problems unexpectedly and learn how to solve them in the same way that we do. If you’re asking me personally, I don’t think that’s likely anytime soon, for a variety of reasons. The most salient, I think, is that we build AI around the idea of goals. Those goals are not entirely static, but they are certainly more rigid than the goal structures of humans. I think we often confuse ourselves by imagining that we’re always acting in accordance with some rational or well-defined goal set. In fact, our goals are always changing, and that’s a ferment that is chemical in nature; in many cases, our endocrine system helps retune our goals. Our attention span changes, our visual focus changes depending on how stressed we are, our wants and needs vary with our hunger and other appetites. The human being doesn’t live in a mathematicized system of goals. So until AI can develop that sort of suppleness, it seems unlikely that the things that come out of artificial intelligence will look very much like those that come out of biological intelligences. And that’s a good thing. The fact is that AIs are fantastic at a bunch of stuff we aren’t, and we’re really good at a bunch of stuff they aren’t. What a richer world it is when you have expert capacities that coexist and mutually reinforce, and what a drab world it is when we all think the same way.

Richard Sergay: I’m curious why this push toward cross-disciplinary work is now so important to Templeton. What does it say about the future of science?

Pranab Das: Vanessa Woods and Brian Hare are rock stars. They are incredible scientists, incredible people, and have contributed greatly both to science and to the public discourse. Before I go any further, I have to give a shout-out to their newest book, which just dropped on July 14th. It’s called Survival of the Friendliest, and it’s an interrogation of what makes humans so successful. They argue that it isn’t our tool use or our language. Those are important, but it’s our capacity to be friendly, to have productive, enriching social interactions with one another and, in the case of dogs, with other animals. Not only is this great science, really a novel paradigm in scientific thought, but it also has tremendous implications for human flourishing. So one of the things that matters the most to the Templeton World Charity Foundation is betterment: humans doing well, being Good in a kind of classical, capital-G sense.

If the work that goes on between disciplines can advance our understanding of what makes certain kinds of intelligences successful, and can help us magnify those in humans, we stand a real chance of improving human flourishing. And I think there’s really no better example than these scientists. As I said, they worked with bonobos. You mentioned their work with dogs; they wrote a New York Times bestseller called The Genius of Dogs. They’re taking what they’ve learned from the capacities, natures, and evolution of dogs and bonobos, inflecting those through rich, careful science, and inviting philosophers, comparative psychologists, and ethologists all in on a conversation about how these things meet and create a bigger, richer framework under which we might understand humans as well.

Tavia Gilbert: Here’s one of those research rockstars, Brian Hare, talking about his study of diverse species:

Brian Hare: Many of the differences we see between wolves and dogs, we see between bonobos and chimpanzees. And this is a perfect example of what our project would be all about: trying to understand why that is. Why is it that you have these two distantly related pairs of species that have become so similar in the way they’ve changed from one to another? What was the process that drove it? We think the same evolutionary force has shaped dogs from wolves and shaped bonobos from a chimpanzee-like ancestor. And we think that force is selection for friendliness.

Pranab Das: So what a brilliant undertaking. The project you’re referring to is ambitious: it seeks to create a kind of common platform across which researchers working in a variety of different systems, with a variety of different species, can interrelate. They can put their data together in ways that are meaningfully comparable. That’s a big undertaking. It was very successful, and I think it’ll now serve as a tool for researchers across several species and domains of analysis to compare their work with one another.

Richard Sergay: Is anyone else in the foundation world doing this sort of work?

Pranab Das: Yes and no. Foundations are interesting in their idiosyncrasies. There are certainly many foundations doing rich work in the artificial intelligence space. The most comparable explorations, I think, have to do with the morality of artificial intelligence and its use, the applications and misapplications of AI. Interestingly, that has become a field in and of itself, and it means something a little different from what we think of when we use these words. There’s a field emerging called the ethics of AI. In most cases, that means: how do we ethically use AI? How do we ethically develop AI? What are the outcomes of our artificial intelligences? That’s really quite different from something like AI’s impact on the development and enrichment of our own ethics. So what I would say is that the Templeton World Charity Foundation is unique in the world of foundations for its willingness to boldly state that there is such a thing as the good, that human intelligences are powerful ways of apprehending the world so we can be better, while humbly positing that there are other ways to be good, that there are other things we can learn, both from animals, from our own creations and constructs, and from ourselves, and that each of those things will contribute to human flourishing and development forward in a way that is really very exciting. It makes me very proud to be associated with that group.

Richard Sergay: The initiative as a whole, what do you hope when you look back on it, um, what its impact will be on the world?

Pranab Das: So we have a couple of ideas, and these are still formative. It is to be hoped that an interdisciplinary community will have formed that is robust and can clearly state theoretical and empirical ideas that have implications for areas that have not yet been plumbed. One of the most successful sciences, physics (you know, I’m a physicist, so I always see things through that lens) has made many assertions over the centuries about things that have not yet been studied; or rather, the theories of physics have implications for things that have not yet been studied. And when those studies take place, the theory can be tested against their results. That is not as often the case in some other fields, largely because biology, psychology, and philosophy work in such big spaces, with so many moving parts, that their results are not easily interoperable.

If we’ve done a good job, by the end of all of this, studies of intelligences will be able to hang different results on different theoretical frameworks and to challenge those frameworks in a way that should make them better, more robust, and more predictive going forward. So if we’ve done our job, this community will have produced a set of frameworks that subsequent researchers will be stimulated by, taken with, and will find in them ways of cross-fertilizing and cross-communicating their work, thereby building, strengthening, and making more durable the frameworks themselves.

That’s part and parcel of a strategy to try to keep the thing going when the resource faucet is turned off. It’s one of the great sadnesses of many funders that they can induce excitement, they can induce research, by a flow of resources, but when those resources shift, many researchers seek other topics or areas of study. So it’s our hope that we can help our researchers find other sources of funding, help other funders get excited about this sort of thing, and, importantly, keep the community itself alive, so that, resources notwithstanding, they still feel there are productive things they have to speak about with each other.

Richard Sergay: I’m curious along this journey that you have helped Templeton lead, what have been some of the biggest surprises in terms of your learning, or what the term diverse intelligences means?

Pranab Das: I’m always surprised by the depths of my own ignorance. I think we, as a team, arrived with a lot of excitement, a lot of enthusiasm, and some expertise, but as we meet each of these brilliant researchers, we uncover how much there is yet to know. There’s a great quotation, again from the founder of the foundation, the donor, Sir John Templeton: “How little we know, how much to learn.” So while one can come at something like this with a crude sense of, I think I know what intelligences are, or, we have a sense of what the blueprint should be, the biggest and most happy surprise is how much else there is. And every conversation elucidates that depth of excitement, of how much there is yet to learn.

Richard Sergay: And how little we know.

Pranab Das: How little we know.

Richard Sergay: Pranab, best of luck as you continue this amazing project.

Pranab Das: Thank you, Richard, it’s been a pleasure.

Tavia Gilbert: We’re excited to dive deeper into the question of diverse intelligences in our next episode, when we return with the full conversation with Lawrence Doyle and Fred Sharpe, and their exploration of their research into what humpback whale communication can tell us about the potential for interstellar conversation with alien intelligence.

Fred Sharpe: Well, the sounds in the ocean in some ways make amazing interstellar analogues. Oceans have this amazing acoustical conductivity, so whales can probably be communicating with each other fairly efficiently. They’ve essentially had the ocean internet for millions of years.

Tavia Gilbert: We look forward to bringing you more from that conversation next week. In the meantime, we hope you enjoyed today’s Story of Impact, and that you’re looking forward to hearing more about honeybees, dogs, AI, and more. If you liked this episode, we’d be grateful if you would take a moment to subscribe, rate and review us on Apple podcasts. Your support helps us reach new audiences. And for more stories and videos, please visit storiesofimpact.org.

This has been the Stories of Impact podcast, with Richard Sergay and Tavia Gilbert. This episode produced by Talkbox and Tavia Gilbert. Assistant producer Katie Flood. Music by Aleksander Filipiak. Mix and master by Kayla Elrod. Executive Producer Michele Cobb.

The Stories of Impact podcast is supported by Templeton World Charity Foundation.
