Women Leading The AI Industry: “Part of the solution will be to create standards for AI” with Briana Brownell and Tyler Gallagher


Part of the solution will be to create standards for AI. I often hear the claim that regulations and standards stifle innovation. However, I think that we’re at a point where we need to be serious about which regulations and standards actually stifle innovation and which ones are successful at protecting the public. I’m involved with several standardization efforts at both the national and international level, and we are working to make AI trustworthy, robust, and safe for everyone.

As part of my series about the women leading the Artificial Intelligence industry, I had the pleasure of interviewing Briana Brownell. Briana is a data scientist who took the leap into entrepreneurship. She is Founder & CEO of Pure Strategy Inc., an AI company that helps employees make faster, data-driven decisions. She believes that within 10 years, most people will have AI coworkers.

Thank you so much for doing this with us! Can you share with us the ‘backstory’ of how you decided to pursue this career path?

In high school I was fascinated with physics — quarks, especially — and I started out wanting to be a theoretical physicist. I worked as a research assistant at the Subatomic Physics Institute in my undergraduate years, matching experimental results to the theory in quantum chromodynamics. It was there that I discovered ELIZA, the classic chatterbot from the 1960s. ELIZA was one of the first conversational artificial intelligences ever built. Since my computations for the lab took a while to run, I used that down time to talk with the AI. That’s when I became intrigued by artificial intelligence.

I took a rather winding road back to AI. After graduating, I worked as a proprietary equities trader on the NYSE leading up to the global financial crisis. It was fascinating and terrifying in equal measure. But I became interested in these dynamic systems — ones at the mercy of human decision-making.

From there I moved to data science and worked at a boutique consultancy where I had the opportunity to work with many top-tier researchers, influential organizations and successful companies. The breadth of projects was huge: everything from gamification and product adoption to climate change, customer loyalty, advertising effectiveness, forecasting, user experience, and pricing optimization.

But I always knew I wanted to start a company. I finally took the leap right at the end of 2015 and haven’t looked back. Starting Pure Strategy has been such a fascinating experience, and has brought me many new challenges and successes.

What lessons can others learn from your story?

Take time to explore! I have explored many, many different things in my career, all driven by curiosity and the constant desire to learn. That driving inquisitiveness is what I credit for a large part of my success. The ability to cross-pollinate ideas from very different fields gives you a unique perspective that accelerates your skill development in a huge way. So many people in history discovered this, like scientist and artist Leonardo da Vinci, and many contemporaries have too, like engineer and ambassador Winnie Byanyima. It’s something that can give you energy time and time again.

Can you tell our readers about the most interesting projects you are working on now?

A friend once said to me “you’re not happy unless you’re living at least two lives at once.” I’m loving the work I get to do to grow my company, Pure Strategy. We are creating a cognitive technology that understands unstructured text and images in context, which is notoriously challenging to do. We’ve had to create our own datasets, almost a billion rows, so it has enough context to work from. I was able to delve deeply into neural networks and created a unique unsupervised learning methodology modeled after human decision-making. We’ve used it in such different ways; it’s amazing the breadth that this technology can give us. For instance, we used it to understand physicians’ decision-making for treating rare blood diseases, and we used it to help parents choose a daycare given the complaints that health inspectors had found. Talk about a variety of use cases!

I’m also working on a larger collaborative project called Uncanny Learning. It’s a play on the idea of the “uncanny valley” in robotics where the visual appearance or movement of a robot gives people a feeling of unease. Uncanny Learning is the same sense of unease when we see AI do human-like cognitive tasks like compose music, write a story, or interact with its environment. We feel uneasy that a machine could do something that seems so fundamentally human.

Uncanny Learning challenges us to use science and technology, especially artificial intelligence and machine learning, to give us a greater understanding of ourselves and our own human experiences. My recent TEDx Talk “Can an artificial mind see the Man in the Moon?” is controversial — it challenges us to rethink our relationship with bias in AI. Although bias in AI can have huge detrimental consequences, like exacerbating prejudice and inequality, biases in the human cognitive system are responsible for creating truly meaningful pieces of art. With our current way of thinking about artificial intelligence these two facts are hard to reconcile.

This fall, I’m launching Uncanny Learning as a quarterly, interactive magazine and so I’m pretty excited about planning that.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

I’ve definitely been lucky that throughout my career I’ve had the opportunity to work with many brilliant minds both in industry and academia. I’ve learned so much from them! I’ve also been involved with a lot of research papers — I think I’ve published over a dozen now, despite not working in academia and not having a PhD. I absolutely love the act of exploring possibilities on the frontier of knowledge and seem to have a knack for attracting others who share that philosophy.

One of the most formative experiences early in my career was working with a research group in Australia. We were trying to understand why primary producers adopted technology — or didn’t, as was more often the case — to mitigate business risk due to climate variation that caused floods, fires, and droughts. We used unsupervised learning to find emergent groups with similar attitudes and I developed a method to follow them over time so that we could see whether their attitudes shifted.
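The approach described above — finding emergent attitude groups with unsupervised learning, then following them across survey waves — can be sketched in a minimal form. Everything below is an illustrative assumption, not the actual method from the project: a plain k-means routine, a hypothetical two-group attitude survey, and one simple way to make group membership comparable over time (scoring a later wave against the frozen first-wave centroids).

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: returns (centroids, labels) for the rows of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each respondent to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned respondents.
        updated = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
        if np.allclose(updated, centroids):
            break
        centroids = updated
    return centroids, labels

# Hypothetical survey: rows are producers, columns are attitude scores,
# built so two clearly separated attitude groups exist.
rng = np.random.default_rng(1)
wave1 = np.vstack([np.full((5, 3), 1.0), np.full((5, 3), 5.0)])
wave1 += rng.normal(0.0, 0.2, wave1.shape)

centroids, labels1 = kmeans(wave1, k=2)

# To follow the groups over time, score a later survey wave against the
# *frozen* wave-1 centroids, so group membership stays comparable.
wave2 = wave1 + 0.1  # hypothetical second wave with slightly shifted attitudes
dists2 = np.linalg.norm(wave2[:, None, :] - centroids[None, :, :], axis=2)
labels2 = dists2.argmin(axis=1)
moved = int((labels1 != labels2).sum())  # how many producers changed group
```

Freezing the centroids is only one of several ways to keep clusters comparable across waves; it shows why attitude shifts become visible as respondents crossing a fixed group boundary.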

It was all very abstract on my end, just numbers and algorithms and calculations. But when I came to work in the lab in Australia, the region was experiencing a very severe drought, the kind of climate event that we were talking about in our research.

One day, my fellow scientists took me out for a drive in the countryside. We arrived at what had previously been a large natural lake, but it was completely empty and dried up. It looked like a valley of dust. Around the former lake was a row of hundreds of skeletal trees, all dead.

The lake, which once had been home to a plethora of plant and animal life, was barren and lifeless. It was at that moment that I felt viscerally what I was doing as a data scientist. All of those numbers that I dealt with everyday on my computer meant something deeply significant in the real world. It was an experience that has stayed with me to this day.

What are the 5 things that most excite you about the AI industry? Why?

There’s a lot to be excited about in AI today. First, I love how closely industry works alongside academia in pushing the boundaries of knowledge in the field. Many companies are contributing to the field publicly. This has really helped the sophistication of the tools that are available now, like PyTorch and TensorFlow.

I also really believe in the potential of unsupervised learning to reveal patterns in data that no one thought to look for. The insights unsupervised learning can unlock make the technology truly remarkable.

Generative methods, like generative adversarial networks (GANs), allow us to create new things, and I think that creation is an important possibility within AI. This is going to become ever more important in the future, and I can see many possibilities, like having custom music created to make the listening experience more immersive, generating a completely unique piece of artwork for one person, or writing a story uniquely for you.

I also think that the methods in cognitive technology are shedding light on some of the most fundamental questions that humans have asked themselves for millennia: about creativity, meaning and inspiration. AI is gaining the ability to understand human language and the meaning of what we say, beyond strict logical predicates.

Finally, I love the general interest in the field and the huge number of people asking smart questions about what it means for our shared future. It’s a really exciting time to be in AI. The rate of discovery is breathtakingly fast. There are new methods and AI paradigms being explored all the time. It’s a wild ride.

What are the 5 things that concern you about the AI industry? Why?

Thanks to all the interest and the tools we’re developing, there’s always the risk of over-hyping results. History has seen several AI winters, where expectations for the technology surpassed what it was able to deliver. There’s always a risk of that happening again. Often data scientists can’t predict how well the modeling will work on a data set beforehand, so there’s an inherent risk in promising too much. Yet investors and customers want sure things. It’s hard to reconcile.

We’re also losing touch with some of the foundational methods. I can’t tell you how many conversations I’ve had with people who want to use some of the latest AI methods when actually what they want was created right at the birth of the field in the 1950s. There’s nothing wrong with using some of these historical methods — in fact they have many advantages!

Another challenge that we’re facing in AI right now is how unintended effects of AI can affect people’s lives. There is a quote from Mark Zuckerberg: “move fast and break things.” But the problem is that some of the things you break are really, really important. Breaking them has real, significant consequences for people’s livelihoods. I mentioned my experience seeing the dead lake in Australia. When the drought was severe, there were many suicides among those who had not mitigated the risks that they faced. That’s something that affects whole communities and families through generations. It’s serious and should be taken with the gravitas it deserves.

As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI has the future potential to pose a danger to humanity. What is your position about this? What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?

There’s no question that AI has the potential to be dangerous. And I think that the most dangerous thing is how insidious it might be. I recently wrote a feature article in Towards Data Science about the discussion between two insightful experts in AI, technologist Fei-Fei Li and historian Yuval Noah Harari, about what we should be thinking about as AI gains the ability to manipulate humans in a massive way.

Part of the solution will be to create standards for AI. I often hear the claim that regulations and standards stifle innovation. However, I think that we’re at a point where we need to be serious about which regulations and standards actually stifle innovation and which ones are successful at protecting the public. I’m involved with several standardization efforts at both the national and international level, and we are working to make AI trustworthy, robust, and safe for everyone.

How have you used your success to bring goodness to the world? Can you share a story?

Last year I did a talk at a TEDx event about artificial intelligence, and a really important part of it was the story of a 3-million-year-old artifact — a pebble that had been picked up for a purely aesthetic reason by our hominid ancestors because it looked like a face in each orientation. This “Pebble of Many Faces” is widely considered to be the roots of art. I looked all over for a picture of all of the sides of this pebble, but it was impossible to find! After a LOT of running around, I finally got in contact with the Natural History Museum in London, which was able to find archival photos of the replica they had.

When I went to London this spring, the curators at the museum were generous enough to let me handle the replica artifact, as well as other important artifacts from early hominids — some of them genuine! It was absolutely amazing. To be able to hold a tool that was held by someone in your evolutionary family tree — before humans even existed — left me with such a feeling of awe.

Whenever I share this story, I can’t believe how many people tell me that they have never heard of the pebble. This is one of the most fascinating pieces of our human history and nobody knows about it! I was really excited that I could share the story of how this pebble is relevant in our present-day lives so that many more people would hear about this fascinating artifact.

As you know, there are not that many women in your industry. Can you share 3 things that you would advise to other women in the AI space to help them thrive?

There are a lot of women leading the charge in AI in Canada, and we’ve already seen the benefits of it. That’s not to say there isn’t a challenge, but I do feel that we are making some good progress. I think that one of the most important things is to make the women leaders in AI visible to young women who may be interested in the field but don’t have a family member or family friend involved in the industry.

Can you advise what is needed to engage more women into the AI industry?

I was lucky to be in the audience when Gwynne Shotwell, one of the most famous engineers in the world and COO of SpaceX, talked about her start in engineering with Chris Anderson. She became interested in engineering because a female engineer came to her school wearing a really stylish suit.

Why is that important? It sounds on the surface to be very frivolous, but it is quite a deep comment. When she saw this engineer, she could imagine that she would want to be like her. And that’s what we need young women to be able to do — see themselves in this field. If they don’t see anyone like them, it’s really hard for them to do that. That’s why I think it’s important to celebrate and make visible the incredible contributions that women have made to the field.

What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?

My favorite quote that has got me through difficult times is something that sounds dark but is actually very hopeful: “The night is darkest just before dawn.” This is true not only in entrepreneurship but everywhere. At some point you’re going to come to a situation where you start to lose hope. Something will always be going wrong: your best employee quit, your top prospect decided not to buy, your current customer is delinquent on paying their invoice and you have a payroll coming up. We all go through these slumps but it’s something that, when you’re in it, no one talks about. So the quote is a realization that there are going to be dark nights, sleepless nights where you think that there is no hope left, but those darkest times are always followed by something brighter.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I want people to think about how technology, especially artificial intelligence, can help us understand ourselves as humans. How can we use this to deepen our own knowledge and add to the conversation about our shared future? We all have an important voice.

I’d love for interested people to get involved with Uncanny Learning. I’m looking for great ideas, fascinating people to interview, and talented writers and designers to help put it together.

How can our readers follow you on social media?

Connect with me on LinkedIn https://www.linkedin.com/in/briana-brownell-08067921/

Visit my website at http://www.leopardless.com

Sign up to be a part of Uncanny Learning at http://www.uncannylearning.com

Follow Uncanny Learning at https://www.facebook.com/uncannylearning/

Thank you for all of these great insights!

