DIGITAL TRUST WHITE PAPER

Digital Trust Does Not Exist

A guided tour through online deception, deepfakes, and why “never trust, always verify” may be the only sane posture in an age of synthetic everything.

By Ross A. McIntyre with Petrina Harper. Originally published on Matter.StellarElements.com, March 2023, and updated here as a stand‑alone white paper.

Trust as future currency
“In the future, the most valuable currency will be trust. Once it's lost, it's very hard to restore.”
— Satya Nadella, CEO of Microsoft
OVERVIEW

The illusion of safety in a synthetic world.

As we navigate the complexities of the future, trust will become more than a value—it will be the foundation upon which all successful and sustainable technologies are built. In a world where change is the only constant, trust will be the anchor that keeps us grounded and connected. It will also be the currency of personal and professional success.

As technology evolves at an ever‑increasing pace, we humans must adapt to maintain integrity. By examining the evolution of online deception, what to anticipate in the future, and the human impact, we can foster a greater breadth of understanding regarding the past, present, and future of a trend we can never entirely ignore.

For those of us working in technology, the need to be security‑savvy has always been critically important—perhaps a little easier than for some of our non‑tech friends and family. From ISO certifications to mandatory security training, it’s part of our jobs to keep data safe and recognize attempts to steal it. But the ever‑increasing sophistication of scams, combined with the influx of AI technologies, means that everyone—even the savviest—needs to be more vigilant.

Over time, scams have evolved, and some older ones are now well‑known, perhaps even laughable – think of your “boss” asking for iTunes vouchers from a random email address or phone number that isn’t theirs, or the troubled Nigerian prince promising vast riches in exchange for help with his financial situation. Sadly, these ploys can still be convincing, and modern iterations continue to dupe new and unsuspecting targets.

In social situations, it can be useful to share scams that recently almost fooled you, just to raise awareness. It is often said that “safety is an illusion; danger is the reality,” and it pays to stay alert for bad actors.

THE AGE OF MANIPULATION

Deepfakes, data breaches, and synthetic media.

Cons have undeniably grown more sophisticated over the past few years, and many couple technological exploitation with human psychology. Spear‑phishing is an evolution of phishing that tailors seemingly legitimate messages to individual targets; couple this with generative AI and you have a system that can dynamically respond in a manner appropriate to the situation and the person being imitated. Business Email Compromise (BEC) features attackers posing as company executives to initiate fraudulent wire transfers; the FBI has reported billions of dollars lost to BEC scams.

The prevalence of data breaches is a large contributing factor. High‑profile breaches at Equifax, Marriott, and Facebook reveal that even well‑established companies struggle to protect user data. Another factor is the lack of transparency and control around personal data – companies might share or sell user data without explicit consent (or consent is buried in a 15‑page End User License Agreement). Finally, inadequate company security practices explain why some of these scams make it to your company laptop. When users experience such incidents, their trust in digital experiences diminishes.

For good or ill, we are firmly inside the borders of an Age of Manipulation, Misinformation, or Disinformation – maybe all three. The ascendance of deepfake technology – synthetic media that has been manipulated using AI to create convincing fake audio and video clips – poses a new threat to digital trust. Such technology could allow greater transmission of misinformation, more extensive fraud, and a media environment in which it is difficult to distinguish between what is real and what is fabricated.

As digital technologies evolve, so do methods for exploiting them. Digital trust violations are apt to increase due to the growing sophistication of cybercriminals. Likely growth areas include:

  • Cryptocurrency scams
  • Ransomware attacks
  • Internet‑of‑Things (IoT) exploits
  • Quantum‑enabled attacks
  • Supply chain compromises
  • Biometric data theft

To mitigate these existing and emerging threats, robust cybersecurity practices, user education, and regulatory and ethical frameworks are essential. Systems capable of dynamic response to threats could more effectively safeguard data and maintain digital trust.

EVERYDAY DECEPTION

Scams you’ve probably seen—and how to respond.

A few scams you may have seen in the past year include:

  • Worldwide, trusted celebrity figures used as unwitting front people for ad swindles, or to tell targets they’ve “won” phony competitions.
  • Facebook/Instagram ad posts with emotive calls‑to‑action (e.g., “We have to close our business down!”). Often, if you dig a little deeper, you will find they are not legitimate, and may have comments from other users calling out the fraud.
  • Text messages from the “courier,” the “bank,” or the “tax department” featuring near‑perfect fake URLs. These often combine urgent wording with off‑brand URLs, and they warrant extra scrutiny.

Consider these recommendations:

  • Be wary of urgency. A call to action with a limited time window is the number one way people get into trouble; scammers exploit that pressure to make individuals act without thinking.
  • Carefully check email addresses both before and after the “@” sign, to confirm they do not have any red flags such as random numbers or weird domains.
  • Ask whether friends, family, and colleagues usually sign off their text messages with their names. If not, and suddenly a message from a “new number” does, it might pay to be suspicious.
  • Never click in‑message links; instead, use search engines or official apps to get the right URLs for investigation.
  • Take a moment to check social media profiles, groups, or vendors and look for anything out of the ordinary—few likes, circular networks of suspicious profiles, or other red flags.
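Some of the link checks above can be automated as crude heuristics. The sketch below is a minimal illustration only; the allowlist, domain names, and heuristics are hypothetical examples, and passing every check proves nothing about a link’s safety – it can only surface obvious red flags.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains the user has verified out-of-band
# (e.g., typed from a bank statement, never copied from a message).
KNOWN_GOOD_DOMAINS = {"example-bank.com", "irs.gov"}

def red_flags(raw_url: str) -> list[str]:
    """Return simple heuristic red flags for a URL.

    These checks catch only crude fakes; an empty result does NOT
    mean a link is safe -- "never trust, always verify".
    """
    flags = []
    parsed = urlparse(raw_url if "://" in raw_url else "http://" + raw_url)
    host = (parsed.hostname or "").lower()

    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host not in KNOWN_GOOD_DOMAINS:
        flags.append("domain not on your verified list")
    if any(ch.isdigit() for ch in host.split(".")[0]):
        flags.append("digits in domain name (possible look-alike)")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode (possible look-alike characters)")
    if host.count(".") >= 3:
        flags.append("deeply nested subdomains (e.g., bank.com.evil.tld)")
    return flags

print(red_flags("http://examp1e-bank.secure-login123.com/verify"))
```

Running this against a typical phishing URL surfaces several flags at once (no HTTPS, unknown domain, digit substitution), which mirrors the manual advice: scam links rarely fail just one test.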
THE ALLURE OF CYNICISM

Social façades, deepfakes, and why “seeing is believing” breaks down.

What about trust concerns that are less grave: people you know who are just chasing likes and reposts to build a following? Misrepresentation of self on social media has several angles: people who build a façade that paints their life in strictly rosy tones, and people who weaponize deepfakes for revenge porn or similar harms. These are not the same type of person, but it is productive to look at how untruthfulness of either kind can drive negative psychological effects.

We all know that person – the one who presents a self on social media that is radically misaligned with their true situation. At first, the dichotomy is astonishing, but one quickly realizes that the deception (often viewed as harmless) can stem from psychological distress or mental illness. If we want to protect that person in real life, we should not take everything they post at face value.

Many young people experience inadequacy and low self‑esteem due to the discrepancy between their real selves and their online personas. In a 2012 study, researchers Hui‑Tzu Grace Chou and Nicholas Edge found that many users perceive others as happier and more successful than themselves, based on what they see on social media. This has been happening almost as long as social media has existed, and we cannot expect it to stop anytime soon.

While some teens are sanguine about the negative impact AI may have on them, educators have a different perspective: 69% predict that AI will have a negative impact over the next decade and 24% expect it to be “very negative.” Can advancing technology make things worse? Unfortunately, of course it can. We cannot ever assume that what we see on social media is objectively accurate.

Deepfakes—any image, video, audio, or text that purports to represent an individual but is created as a knowing misrepresentation using a combination of AI and machine learning—are a distinct threat, not because of the technology alone, but because of human psychology. People have a natural tendency to believe what they see. As such, deepfakes do not need to be perfect or even particularly believable to be effective in promulgating mis‑ and disinformation.

We have already seen deepfakes in the run‑up to the 2024 U.S. election. Given the harm this technology is likely to cause, it is hard to see where legitimate applications exist at scale, and harder still to see how the benefits outweigh the potential damage. All of it makes cynicism extremely attractive – and potentially well‑reasoned.

ZERO TRUST MINDSET

“Never trust, always verify” beyond the network diagram.

What’s on the other side?

So, how can we wade through the various forms of online deception without becoming round‑the‑clock cynics? Should we even try to resist? For the people you care about, education is most important – warn those who are unfamiliar with digital chicanery about some of the more contemporary schemes and go over the various methods scammers and bad actors use.

In the future, we can expect digital trickery to advance rapidly. There will always be a portion of society that is of ill intent, and it is reasonable to expect they will utilize the scale and quality of AI tools to reach new targets with different approaches. The more people that can be targeted, the more likely scammers will find that one person who will fall for it – just make sure that person is not you, or someone you care about.

As we head into tumultuous elections across the world, we can expect to see both traditional scammers and bad‑actor governments utilizing the latest in tech to spread false information and influence opinion.

Zero Trust as personal protocol

Perhaps the only sane response is to adopt protocols like Zero Trust Architecture (ZTA). Zero Trust is a framework adopted by IT security groups that amounts to “never trust, always verify.” In its original manifestation, it is oriented primarily toward authenticating users: nothing about a user’s identity is assumed, even when a request originates from a connected, permissioned network.
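The “never trust, always verify” idea can be sketched as a per‑request check rather than a one‑time login. The sketch below is illustrative only; the policy values, user names, and resource names are hypothetical, and a real ZTA deployment involves identity providers, device management, and policy engines far beyond this.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    user: str
    token_issued_at: float   # when the credential was last verified
    device_compliant: bool   # device posture, re-checked every time
    resource: str

# Hypothetical policy values for illustration only.
TOKEN_MAX_AGE_SECONDS = 300                      # short-lived credentials
AUTHORIZED = {("alice", "payroll"), ("bob", "wiki")}

def authorize(req: Request, now: Optional[float] = None) -> bool:
    """Zero Trust style check: every request is evaluated on its own.

    Network location is deliberately absent from the decision --
    being 'inside' the corporate network grants nothing.
    """
    now = time.time() if now is None else now
    if now - req.token_issued_at > TOKEN_MAX_AGE_SECONDS:
        return False      # stale credential: force re-authentication
    if not req.device_compliant:
        return False      # non-compliant device is denied every time
    return (req.user, req.resource) in AUTHORIZED

req = Request("alice", token_issued_at=time.time(),
              device_compliant=True, resource="payroll")
print(authorize(req))  # True while the token is fresh
```

The design point is that trust is never cached: the same user on the same device is re‑evaluated on every request, so a stolen but expired credential, or a device that drifts out of compliance, is denied automatically.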

Applied to more abstract, informational pursuits, the same posture requires the user to check the veracity of anything important they consume, especially if they intend to share that content onward. Believing something false without sharing it largely contains the damage to the individual; sharing it amplifies the harm.

The “never trust, always verify” mantra also has relevance to information offered by generative AI tools. Such systems are known to “hallucinate” and offer content that is not true or accurate. As these tools gain traction, verify anything you plan to rely on, act on, or share.

While digital technologies offer immense benefits and conveniences, they also introduce risks that must be actively managed. Building digital trust requires more than advanced technology; it also demands robust legal frameworks, transparent practices, and a commitment to ethical standards. If vulnerabilities exist and are exploited by scammers and bad actors, complete digital trust remains out of reach.

In the future—as well as the here and now—learn to pause, don’t take things at face value, and do a little research.

RECOMMENDATIONS

Designing for digital trust when digital trust doesn’t exist.

If digital trust never fully exists in a stable, guaranteed form, the job of leaders, designers, and technologists is to continually earn and re‑earn enough trust to keep people safe and able to participate.