Professor Sandra Matz Explores the Intersection of Psychology, Data Science, and Technology for Business Leaders
Masterclass
April 10, 2025
Sandra Matz is the David W. Zalaznick Associate Professor of Business at Columbia Business School and co-director of the Center for Advanced Technology and Human Performance.

As a computational social scientist with a background in psychology and computer science, Matz investigates the hidden relationships between our digital lives and our psychology, with the goal of helping individuals and businesses make better and more ethical decisions. Her work is frequently covered in major news outlets, including the Economist, the New York Times, the BBC, and the Wall Street Journal.

At a high level, Sandra Matz's talk explores the intersection of psychology, data science, and technology, focusing on how our digital behaviors provide insights into our psychological traits and preferences. She discusses psychological targeting, in which algorithms use digital footprints to both understand and influence human behavior, and weighs the opportunities—personalized education, mental health advances, and business applications—against the risks, including privacy concerns, manipulation, and loss of agency.


Every like, purchase, and search leaves behind a digital trace. Algorithms can now interpret these traces to understand our psychological traits and, perhaps more importantly, influence our behavior. Sandra Matz, Associate Professor of Business at Columbia Business School, argues that this transformative ability, known as psychological targeting, represents both a powerful opportunity and a profound ethical challenge.

How Our Digital Footprints Speak for Us

The idea that our digital lives mirror who we are is not new, but Matz, in a recent talk, paints a vivid picture of how comprehensive this mirroring has become. She compares our current reality to growing up in a small village, where everyone knows everything about you. Today, she explains, "we all now live in this digital form of a village," where our actions online—whether liking a Facebook post or swiping a credit card—reveal deeply personal aspects of our psychology.

These revelations, according to Matz, come from two types of digital traces: identity claims, like deliberate social media posts, and behavioral residues, such as patterns in spending or movement. Together, they create an extraordinarily detailed psychological profile. “Algorithms can predict personality traits, personal values, and even mental health,” Matz states, highlighting the vast scope of what digital data can uncover. Remarkably, a study she cites found that just 300 Facebook likes could enable a computer to predict personality traits more accurately than someone’s spouse.

Personalization: A Double-Edged Sword

With this knowledge, businesses are finding new ways to connect with consumers. Matz describes how psychological targeting has been used to craft tailored advertising campaigns. One such campaign for a beauty retailer segmented audiences into extroverts and introverts, designing separate ads that resonated with each group. The results were striking: a 50% increase in purchases.

“The more I know about you, the better positioned I am to influence your behavior,” Matz explains. While this level of personalization can enhance user experience and drive business outcomes, it also raises critical ethical questions. “It’s about privacy, agency, and the fundamental human desire for self-determination,” she warns. When algorithms influence decisions invisibly, individuals lose a measure of control over their lives.

Generative AI: Accelerating the Pace of Change

Generative AI technologies like ChatGPT take psychological targeting to the next level, automating the creation of personalized content at scale. Matz emphasizes how these tools empower even small businesses to craft ads, identify customer personas, and simulate advisory boards. “Think of it as an extremely good intern with unlimited time and resources,” she says.

However, this convenience comes with risks. Matz cautions businesses to safeguard proprietary data when using AI. “If you use generative AI, ensure that your proprietary data is protected,” she advises, noting the potential for sensitive information to inadvertently enter broader training models.

Rethinking Data Governance

The conversation doesn’t end with what technology can do; it also requires addressing how it should be used. Matz critiques existing data governance frameworks, which she likens to giving people “a small boat in a stormy sea” and expecting them to navigate. Transparency and control, while necessary, are not sufficient for ensuring ethical data use, she argues.

Instead, she advocates for systemic changes, such as implementing federated learning, which allows companies to create personalized experiences without storing raw user data. Another promising approach is the formation of data co-ops—member-owned organizations that pool data for collective benefit. A standout example is a Swiss co-op that uses aggregated patient data to improve treatment options for multiple sclerosis while ensuring that contributors retain ownership of their information.
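The core idea behind federated learning—fitting models locally so raw data never leaves its owner—can be illustrated with a minimal sketch. This is not code from the talk; it is a hypothetical simulation of federated averaging, where each simulated client fits a simple linear model on its own private data and the server aggregates only the resulting weights:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # underlying relationship all clients share

def local_fit(n):
    """Simulate one client: private data is generated and used only here;
    just the fitted weights (and sample count) are returned to the server."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # local least-squares fit
    return w, n

# Server side: aggregate client weights, weighted by each client's data size.
# No raw records (X, y) ever reach this step.
updates = [local_fit(n) for n in (50, 80, 120)]
total = sum(n for _, n in updates)
global_w = sum(w * (n / total) for w, n in updates)

print(np.round(global_w, 2))  # recovers something close to true_w
```

The design choice this illustrates is the one Matz highlights: personalization quality comes from the aggregated model, while the sensitive underlying records stay with the people who generated them.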

Shaping the Future of Data Use

Despite the challenges, Matz remains optimistic. She envisions a future where psychological targeting is leveraged to address societal challenges, from personalized education to improved mental health care.

“It’s not just about avoiding harm; it’s about redirecting attention to what these technologies can achieve,” she asserts.

The ultimate challenge, she concludes, is balancing innovation with ethics.

“The goal is not just to navigate the storm but to shape the sea itself,” Matz declares. By building safeguards and fostering innovation, we can unlock the transformative potential of psychological targeting while protecting the values that make us human.
