Lessons From a Tech Titan: “Run for the hills the moment someone displays a lack of ethics and integrity” With Dr. Salvatore Stolfo and Fotis Georgiadis

Never work with unscrupulous individuals. Period.

The Thrive Global Community welcomes voices from many spheres on our open platform. We publish pieces as written by outside contributors with a wide range of opinions, which don’t necessarily reflect our own. Community stories are not commissioned by our editorial team and must meet our guidelines prior to being published.

Run for the hills the moment someone displays a lack of ethics and integrity. I will never ever work with or do business with unscrupulous individuals. Period.

As part of my series about “Bleeding edge” technological breakthroughs that seem copied from science fiction, I had the pleasure of interviewing Dr. Salvatore Stolfo, a tenured professor, researcher and entrepreneur. He is also a people person, which makes him unique in a field where the focus is on making machines act more like humans. As professor of Artificial Intelligence and Computer Science at Columbia University since 1979, Dr. Stolfo has spent a career figuring out how people think and how to make computers and systems think like people. Early in his career, he realized that the best technology adapts to how humans work, not the other way around. Dr. Stolfo has been granted over 80 patents and has published over 230 papers and books in the areas of parallel computing, AI knowledge-based systems, data mining, computer security and intrusion detection systems. His research has been supported by numerous government agencies, including DARPA, NSF, ONR, NSA, CIA, IARPA, AFOSR, ARO, NIST, and DHS. He is the founder and chief technology officer of Allure Security, an award-winning data loss detection and response startup based in Boston, Massachusetts.

Thank you so much for joining us! Can you tell us a story about what brought you to this specific career path?

As a young professor of Computer Science nearly 40 years ago, I had to choose between eating and having a roof over my head. To make ends meet, I consulted for Citibank and worked with them on researching and developing new ways to detect credit card fraud. I applied advanced machine learning techniques — at that time, these were practically unheard of — and showed how to improve fraud detection and decrease losses. That consulting work led me to think of new ways of applying the same detection techniques to computer security and how to reduce data loss.

Can you share the most interesting story that happened to you since you began your career?

The Citibank project helped me understand how clever fraudsters could be. Building on that work, I had the idea of applying machine learning to computer security. Using that research, I pitched the Defense Advanced Research Projects Agency (DARPA) on my idea to use machine learning to detect zero-day attacks. Later, post-Edward Snowden, DARPA was very interested in solving this persistent problem of data leaks. DARPA funded my Intrusion Detection Lab at Columbia University, where my research group developed a broad range of machine learning-based and deception-based security technologies that are now commonly used across many security products widely deployed in large and small enterprises. This is where we started to drill down into deception technology and find new ways of applying it beyond the standard honeypot and honeynet applications, especially for nation-state attacks. My goal with this technology is for data to be protected in any format, structured or unstructured, regardless of where it travels.

Can you tell us about the “Bleeding edge” technological breakthroughs that you are working on? How do you think that will help people?

Based on my DARPA research, my company currently offers patented technology — we call it Beacons — to improve early breach detection, inform and initiate response, and identify hackers and leakers. My research shows that the earlier you can detect bad behavior, the sooner you can investigate and shut down access to data that hackers and leakers are attempting to steal. The Beacons can be embedded in a real, operational environment. They use patented telemetry and geofencing to sense when documents are opened and to alert defenders. Some documents are real, legitimate files; others are what we call decoy documents. Decoys are highly convincing fake documents that contain nothing of real value, but the hacker or leaker can’t tell that until they’ve downloaded or opened one. The idea is to create a sense of confusion or frustration that leaves the hacker questioning whether there is anything worth stealing. The Beacons allow us to track the geographic location of the hacker or leaker and conduct incident response and forensics to reveal their identity.
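To make the idea concrete, here is a minimal sketch of how a beaconed decoy document could work in principle. This is an illustrative toy, not Allure Security's actual implementation: the endpoint `beacon.example.com`, the `make_decoy_document` and `record_open` helpers, and the in-memory event log are all hypothetical stand-ins.

```python
import secrets
from datetime import datetime, timezone

# Hypothetical beacon endpoint -- a stand-in, not a real service.
BEACON_HOST = "https://beacon.example.com/open"

# token -> list of (timestamp, source_ip) open events
open_events: dict[str, list[tuple[str, str]]] = {}

def make_decoy_document(title: str) -> tuple[str, str]:
    """Create an HTML decoy with an embedded beacon.

    The invisible 1x1 <img> fetches a unique URL when the file is
    rendered, revealing when and from where it was opened.
    Returns (token, html).
    """
    token = secrets.token_hex(16)  # unique per document
    html = f"""<html><body>
<h1>{title}</h1>
<p>Q3 acquisition targets (CONFIDENTIAL)</p>
<img src="{BEACON_HOST}?t={token}" width="1" height="1" alt="">
</body></html>"""
    return token, html

def record_open(token: str, source_ip: str) -> None:
    """Server-side handler: log an open event for later
    geolocation and incident response."""
    open_events.setdefault(token, []).append(
        (datetime.now(timezone.utc).isoformat(), source_ip))
```

In use, a defender would seed `make_decoy_document` output among real files; any hit on the beacon URL is, by construction, suspicious, and the source IP gives a starting point for geolocation and attribution.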

I hold more than 80 patents in this area. My work will continue to expand on features, capabilities and analytics around deception and detection. I am also developing AI-based, simulated user bots embedded in situ in corporate networks. These bots are designed to behave badly in order to find holes in detection systems, so that we may improve upon our advanced strategic deployment of decoy documents.
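The "bots that behave badly" idea can be sketched in a few lines. The following toy is entirely hypothetical (the `DetectionSystem` class, file paths, and `red_team_bot` function are illustrative stand-ins, not the research code): a simulated insider randomly touches sensitive files and reports which accesses the monitoring system failed to flag.

```python
import random

# Hypothetical sensitive files an insider might touch.
SENSITIVE_PATHS = [
    "finance/q3_forecast.xlsx",
    "hr/salaries.csv",
    "legal/merger_draft.docx",
]

class DetectionSystem:
    """Toy stand-in for a data-loss detection product under test."""
    def __init__(self, monitored: set[str]):
        self.monitored = monitored
        self.alerts: list[str] = []

    def observe(self, path: str) -> None:
        # Fires an alert only for paths it happens to monitor.
        if path in self.monitored:
            self.alerts.append(path)

def red_team_bot(system: DetectionSystem, paths: list[str],
                 n_actions: int, seed: int = 0) -> list[str]:
    """Simulated user that 'behaves badly': randomly accesses
    sensitive files and returns the accesses that went undetected,
    i.e. the coverage holes the defenders should close."""
    rng = random.Random(seed)
    missed = []
    for _ in range(n_actions):
        path = rng.choice(paths)
        before = len(system.alerts)
        system.observe(path)
        if len(system.alerts) == before:  # no alert fired: gap found
            missed.append(path)
    return missed
```

Running the bot against a system that monitors only two of the three paths would surface the unmonitored file as a hole, which is exactly the feedback one would use to redeploy decoys and tighten controls.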

These new technologies not only reveal which security controls fail to detect unsafe behavior by insiders who share and view documents they shouldn’t, but also track how bad actors spread sensitive documents once they are downloaded: who they are shared with, who those people share them with, and so on.

The whole goal of developing these bleeding-edge technologies is to finally shift the advantage in favor of data defenders within a business or government agency, by developing automated means of generating deceptive data to force attackers to pay a price for stealing. Hackers and insiders have been getting away with it, consequence-free, for far too long.

How do you think this might change the world?

Securing sensitive data and stopping or slowing the flood of large-scale breaches and leaks is in everyone’s best interest. Imagine a world where a hacker or leaker thinks twice about penetrating a system and stealing documents because deception and tracking technology are so widespread that they cannot be sure they will be able to steal anything of value.

Keeping “Black Mirror” in mind, can you see any potential drawbacks about this technology that people should think more deeply about?

Deception has been an effective tool for thousands of years in one way or another. Think about animals that have special markings that fool a predator into thinking they’re a threat, so that predator moves on. It’s the same principle, just applied to data security. A poorly architected deception technology strategy runs a modest risk of interfering with normal operations. If used incorrectly, deception could “fool” the wrong people and interrupt productivity at a company. Deception is not just a honeypot. There is considerable specialized knowledge necessary to optimize its use and effectiveness.

Was there a “tipping point” that led you to this breakthrough? Can you tell us that story?

My early work with the U.S. Government taught me many lessons about the sophistication of nation-state attackers. Deceiving them is hard, but I have seen first-hand that it is doable. I was convinced that deception and tracking technology could be broadly deployed when designed with scientific principles to substantially improve security at large enterprises.

What do you need to lead this technology to widespread adoption?

For deception and tracking technology to move from “bleeding edge” to mainstream, we need more examples of successful early detection and specific attacker attribution. I have been personally involved in many, but we need more. Right now, many users of deception tech are reluctant to tell their stories because they consider the technology a competitive advantage. But if we have any chance of putting an end to data loss, we need to start talking more about what’s working and what isn’t. A few examples of actual attributed attacks are a great start, but more are needed to demonstrate how well the technology actually works.

What have you been doing to publicize this idea? Have you been using any innovative marketing strategies?

I’ve been speaking at large enterprise security events, such as the RSA Conference, to share my viewpoint and scientific research in the deception and tracking technology fields. I’ve been writing extensively about my research and approach to deception and tracking tech in large, national cybersecurity publications. I want to reach those who are cybersecurity practitioners to let them know there’s a better way to approach data loss. I also brief government organizations, as needed.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

There are so many people I am grateful to who have helped me, including program managers at DARPA, business professionals and CEOs of security companies. Singling out any one person would detract from the many others I should also mention. I have also mentored many students who have gone on to achieve great things in the field.

How have you used your success to bring goodness to the world?

I am committed to the idea of never doing harm with my research and technology. My work has always focused on defending systems so all users can create and enjoy the full advantages that computing and the internet provide. I’m not an advocate of so-called “hacking back” under its current definition because I don’t believe that counter-measures in which the defenders seek out and destroy a hacker’s systems are a productive way to protect data. If anything, this approach could cause cyberwarfare to escalate to a place we don’t want to be. But I do believe in taking proactive security actions, and I like to imagine a world where hackers and cybercriminals decide that nefarious behaviors simply aren’t worth the effort. We have to make it harder for them to carry out their plans and easier to hold them accountable when they do.

What are your “5 Things I Wish Someone Told Me Before I Started” and why. (Please share a story or example for each.)

1- I wish someone had advised me, when I founded my first start-up company, to focus on a single idea: have a customer first. Building a solution or product and then finding someone who wants to use it is not a good strategy.

2- Always build an organization holding to the principle of hiring new people that “raise the average.”

3- Understand the selling cycle for different organizations. Many large enterprises share a mindset I have inferred from their behavior: everyone wants to be the very first second mover. Work hard to find the early adopters and risk takers, the very first, first movers.

4- The power of relationships is real. The right people can connect you to the right thinkers and risk takers willing to be helpful.

5- Run for the hills the moment someone displays a lack of ethics and integrity. I will never ever work with or do business with unscrupulous individuals. Period.

You are a person of great influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

Ethical computing is already a movement I wholeheartedly subscribe to. In my role as a professor, I always talk with my students about ethics. Learning about security can be dangerous, but I implore them to consider the good their knowledge can do for everyone if they channel their creativity and abilities to do good, and protect those who need protection from the vast array of harm on the internet. Cyber Civics!
