David Pring-Mill: “Spend more time away from your devices”

Thrive invites voices from many spheres to share their perspectives on our Community platform. Community stories are not commissioned by our editorial team, and opinions expressed by Community contributors do not reflect the opinions of Thrive or its employees. More information on our Community guidelines is available here.

Spend more time away from your devices. A healthier balance of offline and online could actually make online better.

Remind yourself that you’re dealing with another person, maybe even a whole group of people. And if there’s anger present, there’s probably pain buried beneath it. Lead with empathy, not exclusion.

As a part of my interview series about the things we can each do to make social media and the internet a kinder and more tolerant place, I had the pleasure to interview David Pring-Mill, a consultant to tech startups and NGOs. David is also an active writer. As a technology journalist and opinion columnist, he regularly explores the sociopolitical implications of emerging technologies.

Thank you so much for doing this with us! Our readers would love to “get to know you” a bit better. Can you share your “backstory” with us?

I was born in Ottawa, the Canadian capital, to an American mother and a British father. I’m obsessed with analysis and communication, partly because of innate tendencies, but also because there were times in my life when I felt unheard. I learned the ropes of mass media by working on indie films and in the post-production departments of reality TV shows. Then I started to work with tech startups and NGOs to help them get their message out there. I’m a writer, a consultant, and an advocate for more ethical and eco-friendly technologies. Right now, I’m trying to creatively engage my readership in the business community so that we can collaboratively build better companies, technologies, and social systems from a shared foundation of values.

I seem to be able to articulate things in a way that certain readers find compelling and I think there’s a responsibility that comes with that. If you look at the way that society is set up, and to the point of this interview, the way that incentives are engineered into social media, we are being drawn away from responsible communication. We are all encouraged to use incendiary language, to construct facades, and to primarily engage through opposition and exclusion, not empathy. We are becoming superficially contrarian, while avoiding the risk of real change.

I’m incomplete like everyone else, with my own flaws and wounds. But professionally, I always try my best to get it right, to get the ideas, the facts, the feelings, and the words just right, because I think that many of the loudest voices sensationalize trivial things and sanitize important truths.

You have a system that is validating counterproductive impulses, readily exchanging offense and outrage for attention and truncating solutions for the sake of shares. We cheapen the value and integrity of our human emotions and we miscalibrate our moral compasses when we shout into the digital world over nothing. And this is happening at a time when there are very complex problems and very real offenses being made against our republic and defining values.

So we need to exercise more care in our responses, even when our devices work against that discernment. We can do this by disregarding the temptations of instant gratification, considering things contextually and charitably, mindfully scoping out the pathways for change, reserving our anger for the deep and systemic injustices, and withdrawing it from the misunderstandings. Then, there is the age-old challenge of transforming righteous anger into strategic action, instead of corrosive hate.

By leveraging my background in and beyond tech, I’d like to counteract some of the social damage caused by social media. We’re living during a particularly critical time for technology and policy; it’s comparable to the Industrial Revolution. I don’t think that we’re understanding AI correctly, but I would say that the capability of technology is being massively upgraded and our new tools are actually creating new risks. I’ve explored these topics, somewhat exhaustively, in my published writing, which can be read in outlets such as TechHQ, The National Interest, SingularityHub, and DMN.

Can you share the most interesting story that happened to you since you started your career?

I’m someone who hypothesizes and strategizes but sometimes I’m still surprised by what resonates most. For example, I investigated vulnerabilities in our election infrastructure. I found highly problematic procedures, backdoors, oversights, funding issues, and conflicts of interest, and I tried to spread awareness. The response felt like crickets chirping. People didn’t seem to care that we’d broken our republic by trying to expedite it with machines that are sold and serviced by an oligopoly. But there have also been pleasantly surprising reactions to other articles.

I still get emails about something that I wrote in early 2018, titled “Why Hasn’t AI Mastered Language Translation?” I noticed that the article, ironically, has been translated, both by people and by machines, and it inspired some online debates, too. For that piece, I interviewed several data scientists and businesspeople, but I also included an associate professor of Spanish who talked about the fragility of interpretation. Readers were really curious about this intersection of technology and language and I think that reveals the enduring, cultural vitality of language. Even setting aside the broader relevance for social media, commerce, and politics, there seemed to be a respect for the diversity and intricacies of language. I find that interesting, satisfying, and hopeful.

In my career, I’ve also found that there isn’t really a connection between the amount of time and effort that I spend working on a piece and the response that it gets. That directly contradicts an industrial sense of work ethic. If there is any relationship, it’s inverted. The less comprehensive pieces sometimes align better with the modern attention span of readers. I think that is interesting. It’s also kind of sad. It would be great if earnestness wasn’t, at times, self-defeating, and if everyone craved the whole truth.

However, I’m sympathetic to this state of information overload. Every day, I sort through information and introductions and try to figure out what things mean and why they matter. And then it’s about connecting my ambitions with specific actions, and connecting my actions with time and resources. This is an imperfect process. It requires some heuristics. I make peace with the impossibilities by reminding myself that knowledge work is always a collective effort. We can be both small and impactful.

Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

Oh, you know, when I was first beginning to put pen to paper and express myself publicly, I think there were times when I cycled between radical honesty and unnecessary guardedness. And both are ways of sort of shooting yourself in the foot.

I think there were times when I wrote or said things that were shocking and I found that somewhat funny at the time. And now I find it funny that I even thought these things were funny, because even though shock value can sometimes get at an underlying truth or observation, sometimes it’s just miscalculated.

This is not really about restriction, it’s about intention. Today, the cultural edict basically says: “Joke responsibly.” Sure, but that’s kind of contradictory to humor, which is all about conflict and violating taboos and logically building up to illogical conclusions, sometimes to show the absurdity of other people, or yourself, or life. I’d say, “Joke intentionally.”

Are you working on any exciting new projects now? How do you think that will help people?

I have creative projects in the works and there are some opportunities to further elevate the type of work and advocacy that I’m already doing. I think that this social media conversation is really important for both politics and mental health and I’m grateful to be able to participate.

I’m also working on transportation technologies. If companies, policymakers, and academic-industry partnerships approach this right, we could dramatically improve safety, cut emissions, and turn mobility into a service, which would spare consumers from the costs and complications of depreciating assets and potentially improve access to jobs, nutrition, and education. Approximately 1.35 million people die in road crashes every year. This isn’t just a market opportunity, it’s a societal one. There are some skeptics out there, but I think when historians are looking back on this time period, they’re going to be skeptical that there was so much skepticism. By then, these currently hypothetical solutions will be self-evident. And hopefully, by then, we won’t be bickering on social media anymore… But I don’t know, maybe we’ll just be bickering in VR worlds instead.

Ok, thank you for that. Let’s now jump to the main focus of our interview. Have you ever been publicly shamed or embarrassed on social media? Can you share with our readers what that experience felt like?

Firstly, I’d say that the problem with social media isn’t just the shaming, it’s also some of the validation.

Facebook calls itself a “community.” I’ve taken a public stance against that characterization. It’s actually an addictive process of identity construction and validation on the user side, and it’s mass data collection and monetization on the corporate side. Is that a community?

To be clear, there’s value in sharing experiences. But the platforms are primarily designed to capture your attention. That’s how the business model works. So you might feel socially validated when you share something, but some of that validation could be wrong. You might think that you’re learning when you comply with a recommendation engine, but it’s not trying to teach you, it’s a feedback loop trying to get you to click. So you have people who are increasingly alienated from broader social circles and truths, and they conflate that with connection and a sense of self. There’s actually an intimacy to the delusion.

However, you can’t implement a meaningful, equitable solution in this world without first acknowledging and navigating diverse interests and complex systems. I would say that the current setup encourages a form of disconnection masquerading as connection, which in turn becomes inaction or the wrong type of action.

So that is pernicious, the systematic validation that directs minds down these subtle pathways… And it’s doing damage alongside all the invalidation, the direct attacks and public shaming.

But as for my own experiences, yes, absolutely, I’ve been attacked on countless occasions in direct response to my writing. And it definitely hurt when it first started happening but it doesn’t produce the same effect anymore. Sometimes, it’s frustrating, and sometimes, it’s actually funny. However, it never deters what I feel I must do. I’m not going to cede progress or a significant part of my mind to a troll.

I have been attacked over both significant and insignificant things. And that actually makes it easier. Because now I know that they’re going to attack me no matter what. Altering my tactics or scaling back my ambition makes no difference. The only way to avoid the attacks is to do nothing. And I’m not willing to do nothing. That would be a waste of life.

In some opinion pieces, I said that President Trump and his supporters are deliberately ignoring violations of our national security and rationalizing the heinous acts of foreign powers. I called out Putin’s illegal actions and human rights abuses. On social media, Russian bots threatened to murder me. And in the comments sections, I found out that I’m actually part of a conspiracy that is bankrolled by George Soros, which is an interesting theory because I never received my paycheck. When I suggested that moderate Republicans such as Senator Romney should actively challenge Trump, I witnessed anti-Mormon bigotry in some of the responses.

These things involve geopolitical interests and rigid partisan identities, so I can partly understand why the conversations get so charged. But I’ve also been an advocate and consultant for better transportation technologies. I’m on the board of a transportation think tank and I’ve promoted hyperloop as an alternative, sustainable form of infrastructure. I’ve also made very clear caveats, pointed out vulnerabilities and best use cases, and contextualized that technological solution. And there have been some very snarky and extreme reactions to that. Which is fascinating!

I’m essentially saying: we could put freight in a depressurized tube, the system could be entirely powered by solar and privately financed, the stuff would probably get from point A to point B a lot faster, and it’s probably worth considering new transportation ideas because the current methods are literally altering the chemical composition of our atmosphere… And they find a reason to be dismissive of that. I think that it shows how entrenched people are in the status quo, how numb they’ve become to the failings of that status quo, and how eager they are to line up and take preemptive shots against idealism. And then when it becomes clear that my position is actually reasonable and moderate, they mischaracterize it or take it out of context.

Some people seem to operate under a framework where they try to fit the world to their expectations. Instead of integrating contradictory or new information and hypothesizing, this is an alternative method of managing unpredictability. They want to be well-balanced with their environment, so there’s a bias toward the present system. It just becomes very ironic when the present system is literally jeopardizing the livability of that environment.

And then even on a lower rung, I published a humorous personal essay in The Los Angeles Times in 2016. And guess what? Some people in the comments section viciously attacked that, too! And one of them even cyberstalked me a bit. Because of comedy. Not foreign policy, not reduced-pressure tubes… just jokes.

Sometimes, there’s no connection between the level of negativity you’re receiving and what you’re actually doing. There are just people who don’t want you to do things that they can’t do, or tried to do, or were afraid to do.

And I would be remiss if I didn’t acknowledge that different people are able to move throughout this world differently. It’s very likely that some of the negative responses would have been more intense and threatening if I looked different, if I wasn’t a straight, white man. Maybe some people wouldn’t have even paid attention, they’d just delegitimize the work in their minds. Though quite honestly, in general, the harshest commenters don’t fully read or consider the thing that they’re commenting on. And there are some obvious giveaways.

What did you do to shake off that negative feeling?

I think that some of these social media metrics can be legitimately valuable when you’re trying to chart a trend or sell a product, but they’re not particularly relevant metrics for living a life or orchestrating change. If everyone reacts positively, you believe their assessment and you feel great. If they react negatively, then it follows that you need to believe that, too, and then you feel horrible. And yet, if you dismiss the way that people respond to you completely, you’re kind of verging on being a sociopath because we’re meant to work together as a tribe. So I think that the right approach is to take a passing glance at the reactions, and to say, “Okay, that’s interesting, is there anything valuable I can take from this?” But even as you do that, and after you do that, you have to rely on some internal barometer of what’s right and wrong, not just morally, but sometimes, creatively. And that’s what you ultimately go by, not the taunts and jeers and applause of a largely anonymized coliseum.

But let me circle back to that LA Times essay because there’s a funny story and a great lesson there. And I think it’s important that this happened to me when it did.

So, I’d written these jokes about dating and people were commenting, “Take back the money you paid him!” And one woman researched things that I’d written elsewhere just so that she could craft a more cutting insult. And for a little while, I thought wow, maybe I’d taken the wrong approach here. But then I decided to research this woman right back. I clicked on her user profile and it showed her comment history across the online newspaper articles. And I found out that this woman hated dogs.

Every time there was a news story that included a dog, she objected to it in the comments section. She’d write, “It’s too bad this news story had a dog in it, they’re such nasty and overrated animals.” What?! Firstly, I don’t know how you can hate dogs, but secondly, it’s a news article. That means it’s about something that happened. It’s not a story that the writer dreamt up. Something happened and a dog was simply part of the events that took place.

Doesn’t that tell you everything you need to know about interactions on social media and the internet? Here I was, stewing over something that this woman had said about me, and yet, her perspective on the world had extremely limited credibility. A very sizable portion of her online life was driven by an anti-dog agenda.

Have you ever posted a comment on social media that you regretted because you felt it was too harsh or mean?

I have written some very sharp criticisms of political actors, while trying to counteract the reckless wielding of power. I don’t use social media very often these days. It’s just not a productive use of my time.

But in the past, I engaged in the occasional online debate, in comment sections and in tweets. And when you do that, there’s a certain point where you need to make a calculation, based on a really insufficient understanding. You have to ask: Is this someone who could be a partner in the search for truth or compromise? Or is this person trying to hurt other people? And if I see someone who is trying to hurt other people, then I’m much more likely to use the strength of my words and logic to shut them down.

Do I ever regret that? Well, it’s hard to know the appropriate tactic when you can’t look the person in the eyes and see what they’re about, what they’re after. I think it’s important to remind yourself that you are interacting with another person. Not always, because of the bots. But probably. They had parents and a childhood. They care about someone or something. And they had pain. And because people make meaning out of pain, when you’re disputing someone’s meaning, you’re interacting not just with that person, not just with the issue at hand, but potentially the pain underneath it.

I regret any time that I’ve forgotten that.

Can you describe the evolution of your decisions? Why did you initially write the comment, and why did you eventually regret it?

In some of these debates, I was conversing with an individual who wanted to deny essential rights to others. I think that is relevant. But I have been stunned by the power of respect. There have been instances where people came at me viciously, and I responded to them with respect. It depends on the context, but it’s just basically saying, I see you, I hear you, I get it. Or at least, I’ll try to understand where you’re coming from. This is my view or experience… And when you respond sincerely in that way, something beautiful happens. Sometimes, you can actually transition from volatility and even violence to mutual respect, in just a few sentences. But when you haven’t established that foundation, then fears and negative assumptions rush to fill in that void. And you can’t build off those things.

When one reads the comments on YouTube or Instagram, or the trending topics on Twitter, a great percentage of them are critical, harsh, and hurtful. The people writing the comments may feel like they are simply tapping buttons on a keyboard, but to the one on the receiving end of the comment, it is very different. This may be intuitive, but I feel that it will be instructive to spell it out. Can you help illustrate to our readers what the recipient of a public online critique might be feeling?

I’m mostly concerned about the young people. A lot of actions and emotional states originate from self-image. And I’m not sure that the digital world is the best place for something so vital, so essential, to be developed. Researchers have found links between social media usage and increased rates of anxiety, depression, attention and impulse control problems, and poor sleep. A social media report from the Royal Society for Public Health and the Young Health Movement noted that adolescence and early adulthood both serve as a critical and potentially vulnerable time for social and emotional development. I think that the film “Eighth Grade” did a good job of expressing that struggle, that awkwardness, and the complications from these phones and devices.

So I would say that if someone is tapping those buttons and stringing together hurtful words from a place of detachment, they should pause and reflect. And they should seriously consider the possibility that they’re contributing to the deterioration of someone else’s mental health. And if they’re doing this, if they’re inflicting wounds without awareness or empathy, it’s likely that they’re not in a good frame of mind, either. To disconnect, go for a walk, get some air. Practice gratitude and love.

Do you think verbal online attacks feel worse or less than a verbal argument in “real life”? How are the two different?

I think that really depends on the person, their support systems, and the way they’re being attacked. Some online attacks can have real-life effects. Or in the case of school bullying, sometimes real life and digital attacks are intermixed.

What long term effects can happen to someone who was shamed online?

The data clearly shows that depression and suicide rates are sharply rising, especially among adolescents, and social media appears to be one of the contributing factors.

Many people who troll others online, or who leave harsh comments, can likely be kind and sweet people in “real life”. These people would likely never publicly shout at someone in a room filled with 100 people. Yet, on social media, when you embarrass someone, you are doing it in front of thousands or even millions of people, and it is out there forever. Can you give 3 or 4 reasons why social media tends to bring out the worst in people; why people are meaner online than they are in person?

Firstly, the platform prompts them to say something. Facebook says, “What’s on your mind?” Twitter says, “What’s happening?” When you’re continually prodded to say something, you say anything. You’re satisfying the platform’s need for content and engagement, but you’re not participating in a kind, patient, and fruitful conversation. “What’s on your mind?” is pretty low-hanging fruit. And that fruit is sometimes rotten.

Secondly, people sometimes try to build themselves up by tearing others down. And social media is a place where users can readily construct a public image for themselves and engage in those power dynamics. I also think that spectators are perversely captivated by the upswings and downfalls, which is why we’ve always seen this process happen in traditional media, too. However, people weren’t participating as directly.

Thirdly, the commenters don’t see the other person. Social media introduces some efficiencies to information-sharing but it also introduces distortions. It holds back nonverbal communication. It can be difficult to discern someone else’s feelings, intent, and tone.

Fourthly, the social media platform is working against the better angels of our nature on an algorithmic level. It’s designed to stimulate attention and engagement. However, attention can be easily won by horrific or obscene things. Something isn’t worthwhile just because it captured fleeting attention. And you can solicit engagement by pushing all the anger buttons. Like a mischievous kid in an elevator, you just light everything up. So that’s what the platform is doing and that’s what the users are doing.

There’s also an atmosphere of insincerity as people feign beliefs and lifestyles, in order to chase status. And all of that insincerity creates a craving for raw authenticity. Voters around the world are gravitating toward leaders who seem to be “authentic.” Sometimes, these political figures are candidly expressing ideas that are fundamentally dangerous or misguided, but this is overlooked, rationalized, and defended.

The algorithmically-encouraged echo chambers divide people further. And the quantifiable popularity metrics make people feel alienated and ostracized. Meaningful conversation is sacred and it’s undergone this digital gamification. If you want the internet to be a kinder and more tolerant place, this simply isn’t the right way to structure it.

If you had the power to influence thousands of people about how to best comment and interact online, what would you suggest to them? What are your “5 things we should each do to help make social media and the internet, a kinder and more tolerant place”? Can you give a story or an example for each?

Spend more time away from your devices. A healthier balance of offline and online could actually make online better.

Remind yourself that you’re dealing with another person, maybe even a whole group of people. And if there’s anger present, there’s probably pain buried beneath it. Lead with empathy, not exclusion.

Actively follow people who you disagree with, if only to ensure that you’re getting all the relevant information and perspectives. I’m not suggesting that anyone should do this if it would jeopardize their mental health, and I’m certainly not suggesting that they should needlessly incite conflict or follow hatemongers. But if there’s someone who you find to be intelligent or honest or admirable in some way, and you just can’t figure out why they hold an opinion that is contrary to your own, then you should probably follow them and be open to where that leads. Maybe there are blind spots, maybe there’s a middle ground, maybe there’s a friendship.

Be patient. Consider things patiently, contribute patiently. Social media draws people in with the promise of instant gratification. But a lot of those dopamine spikes are meaningless.

Pressure the companies to make changes in the public interest. Support worthy, more principled, alternative technologies as they try to enter the market and grow. The existing players in this space have reached a critical mass, both in terms of social dynamics and software engineering. So that works against a user exodus, and it works against some of these on-platform modifications. But innovation and funding are responsive to clear demand.

Freedom of speech prohibits censorship in the public square. Do you think that applies to social media? Do American citizens have a right to say whatever they want within the confines of a social media platform owned by a private enterprise?

I think that internet service providers are common carriers. And if we can protect net neutrality, then alternatives will eventually arise to meet strong consumer demand. If a social media platform chooses to control the conversation too much, too little, or unfairly, then a new startup can pop up and differentiate itself and the people will be able to flock there. In this particular instance, I believe in market forces. But I also believe in the marketplace of ideas and John Milton’s suggestion that we should let truth and falsehood grapple in a free and open encounter. Morally, I think that the platforms should only intervene with a light touch and without bias, and we should still favor well-trained human moderators over wonky, content-policing algorithms. Legally, this is a very big question — but it wouldn’t be a big question, had we not allowed these platforms to grow so big.

That being said, I think that looking for ways to “break up big tech” is, in some respects, the easy way out. We, as users, have given too much. We generated both the content and the data about how we interact with our own content. We bought into the illusion of a “free” service and wound up on the wrong side of a business deal. It shouldn’t be that way. We should pay for good services, instead of huddling under the umbrella of false economy. If you switch to a subscription service model, then suddenly it’s all about facilitating a high quality of engagement, not angry and indiscriminate engagement.

I’ve said this in other interviews, but I think that the problem with government regulation is not an existential one, to have or not have, despite that recurrent ideological framing. The drag on industry actually comes from the lack of clarity and consistency. That is what makes it harder to invest and innovate, so for me, it’s about how do you solve for that, how do you get everyone in sync and above board? And how do you get the right kind of competition? Some of this traces back to the fundamental business model, and the funding strategies and consumer behaviors that are driving those business models.

I’m respectful of entrepreneurs and risk-taking at a time when it’s increasingly popular to abstractly, and perhaps disingenuously, blame capitalism for everything. The business works by meeting needs and wants. Sometimes, a “want” is actually an emotional need in disguise.

Obviously, not every need can or should be met through commerce. And there is an array of instances where things become dysfunctional. Some businesses might address short-term needs while also jeopardizing long-term needs through unsustainable practices. Some private equity firms have found ways to gut the life out of perfectly good businesses. And sometimes, the incentives are fundamentally misaligned… If the purpose of health insurance is to collectively manage risk and mitigate your own financial loss in the event of a tragedy, why would you structure that under a for-profit entity that tries to mitigate its own financial loss whenever tragedy strikes?

I’m not blind to any of this. But I do think there’s something redemptive about entrepreneurs absorbing risk while trying to solve a problem for someone else. And I think that we underestimate the extent to which we can profoundly change the nature of economic activity just by demanding more ethical products and switching our consumer preferences from consumption to experiences.

The relationship between the consumer and the business is dynamic, and with business methodologies across industries becoming more agile and responsive, we can really leverage that. We can still demand a better version of Facebook. We can influence the terms of service.

If you had full control over Facebook or Twitter, which specific changes would you make to limit harmful or hurtful attacks?

Firstly, I think that tech companies should create browser extensions and plug-ins to proactively identify and flag deepfaked content. This would allow them to reclaim the banner of the “trust economy.” I’ve already published that proposal online.

Secondly, social media platforms, streaming services, and other tech companies should creatively orchestrate more real-world experiences. These events could be sponsored or they could even have tie-ins with digitally delivered IP. This is about protecting the health of users and encouraging face-to-face interaction, instead of trying to keep people trapped in what I often refer to as “the digital rabbit hole.” I’ve written about this, too.

My third recommendation is the most significant… For two years, I wrote about marketing technologies in a trade publication. I think that marketing can be extremely creative and critical to a startup’s success. It’s how people find out about possible solutions to their problems. I’ve worked very hard to supply marketers with relevant knowledge and strategies, while also identifying areas of ethical and financial risk. With that being said, I believe that social media platforms should transition away from the ad revenue model to a subscription business model. And I think that would actually improve the lives of both users and marketers.

The business focus would shift from dysfunctional retargeting algorithms and click funnels to the quality of conversation. In order to participate in the platform discussion, marketers would have to invest more fully in creative content marketing and earned media. Businesses would need to focus on making better products that fit the market and are worthy of natural conversation. That is so much more satisfying than just pouring money into the digital advertising duopoly of Facebook and Google and staring at the metrics. For more organic discoverability, we’d have to rethink search, too, but that is a separate topic.

And in terms of user satisfaction, everyone is going to value the service more. Because they paid for it. The platforms could also adjust their algorithms in order to identify and highlight contributions that constructively affect the conversation. Right now, they primarily measure attention and engagement. And they actually hide things from your feed. You can choose to follow someone and it doesn’t guarantee that you’ll see their posts. In some instances, I believe this is part of a tactical effort to get influencers to share their revenue with the platform because they wind up sponsoring posts in order to retain the attention and loyalty of their own fans.

I think that the social media platforms have already run the numbers on this alternative subscription business model and it was rejected because it would be less profitable, though the platform executives wouldn’t put it like that. They’d say that they’re doing it this way to keep the platform accessible to all. It’s a seemingly nice idea but the internet service itself is not free, the devices are not free. And the ad revenue model is not free for users either because the companies are taking your data and pressurizing the conversational forums.

They’re deliberately engineering properties of addiction. This represents the gamification of social reinforcement. And this so-called “black mirror” has encouraged mass anxiety. If there’s a sincere desire to keep things accessible, then you could just have a capped-profit or nonprofit structure. You could sincerely treat social media as a social enterprise. You could solicit donations to support any server, legal, or tech costs related to expansion into less economically developed parts of the world… There are ways to keep those doors open.

Can you please give us your favorite “Life Lesson Quote”? Can you share how that was relevant to you in your life?

The Serenity Prayer. It’s popular among recovery groups but applicable to everyone. For me, it’s about putting things in perspective and managing work.

We are blessed that some of the biggest names in Business, VC funding, Sports, and Entertainment read this column. Is there a person in the world, or in the US with whom you would love to have a private breakfast or lunch with, and why? He or she might just see this if we tag them 🙂

Before he passed away, I would have answered John McCain without hesitation.

I find Ralph Nader to be fascinating, for much the same reason. How can you influence change while still maintaining an outsider status and core honesty? I’d like to take that approach, as well.

How can our readers follow you on social media?

Given that I’ve just railed against the more destructive aspects of social media, maybe they shouldn’t. But to the extent that I’m active on social media, I largely favor my Twitter account, which is @davesaidso. I’d honestly prefer it if they just checked in on my current, weekly column and shared some of the writing, whatever they like most. Those articles can be found at https://techhq.com/author/davidpringmill/

Thank you so much for these insights! This was so inspiring!
