
Agentic AI and the Curious Case of Commander Data


There’s a curious linguistic twist happening in the way we talk about artificial intelligence. As the hype cycles roar on, we’ve started labeling our tools and technologies with increasingly human-sounding traits. One of the buzziest terms in this transformation is “agentic” — a word that sounds freshly coined but actually has deep roots in psychology and philosophy. And the strange thing is, in describing AI as agentic, we might just be fulfilling the dream of a certain yellow-eyed android who served aboard the USS Enterprise-D.

In “Star Trek: The Next Generation,” Commander Data’s story arc is essentially a futuristic retelling of Pinocchio. He’s a machine who yearns to be more than his programming — to feel, to choose, to make mistakes, and ultimately, to be human. It’s a story we’ve long loved: machines becoming people. But here in the real world of 2025, we’re watching the reverse play out. People — more to the point, marketers, researchers and technologists — are describing machines with increasingly human terms.

AI doesn’t just “respond” anymore. It “understands.” It’s not just a program — it’s an “agent.” And if it has agency, then it must be… agentic?

So What Does ‘Agentic’ Actually Mean?

The word agentic comes from the root word agency, which itself stems from the Latin agere, meaning to act or to do. Psychologist Albert Bandura popularized the term in the 1980s to describe the capacity of individuals to act intentionally, to self-regulate and to reflect. In this context, an agentic person isn’t just drifting along the current of life; they’re steering their own ship. Bandura framed agency as central to human development and self-efficacy.

And that’s where it gets weird. Because now we’re hearing phrases like “agentic AI” or “agentic systems,” describing programs with this same capacity for self-direction. As one recent blog post from Moveworks put it:

“The word ‘agentic’ means capable of acting independently and making choices. In psychology, it refers to the capacity of individuals to act independently and make their own free choices. But in artificial intelligence, the term has been adapted to describe systems that can make decisions or take actions with minimal human intervention.”

Which sounds suspiciously like we’re trying to give Data a promotion — from fiction to function.
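
To see what the industry usually means by the term, stripped of the philosophy, here is a minimal sketch of an “agentic” loop: a model repeatedly decides on an action, executes it through a tool, and feeds the result back in, with no human in between until a stop condition is met. Everything in it is hypothetical and illustrative (the stand-in decide function, the toy tools); it is not any vendor’s actual API.

```python
# A minimal, hypothetical sketch of what "agentic AI" usually cashes out to:
# an observe-decide-act loop that runs with minimal human intervention.
# The "model" below is a hard-coded stand-in, not a real LLM call.

def fake_model_decide(goal: str, history: list[str]) -> dict:
    """Stand-in for a model call: pick the next action toward the goal."""
    if not history:
        return {"action": "search", "input": goal}
    if len(history) < 2:
        return {"action": "summarize", "input": history[-1]}
    return {"action": "finish", "input": history[-1]}

# Toy "tools" the agent can invoke; real systems wire these to live APIs.
TOOLS = {
    "search": lambda query: f"top result for {query!r}",
    "summarize": lambda text: f"summary of {text!r}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """The 'agentic' part: the loop itself chooses and executes actions."""
    history: list[str] = []
    for _ in range(max_steps):
        decision = fake_model_decide(goal, history)
        if decision["action"] == "finish":
            return decision["input"]
        result = TOOLS[decision["action"]](decision["input"])
        history.append(result)
    return history[-1]  # step budget exhausted: agency, but on a leash

print(run_agent("what does 'agentic' mean?"))
```

Strip away the vocabulary, and the “agent” is a loop with some conditionals, which is rather the point of what follows.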

The Linguistic Sleight of Hand

In her blog post “Word Nerd Wednesday – Agentic,” Megan Burns writes:

“We’re seeing the word ‘agentic’ surface more and more, especially in AI and machine learning circles, but often without much discussion of its origin. It’s not a new word, but it’s being freshly applied to things that… well, aren’t people.”

That’s the sleight of hand. We’re applying a deeply human term — one that carries implications of will, morality and choice — to mathematical models running on GPUs.

There’s a temptation to let the word “agentic” do too much work. When applied to AI, it can imply sentience or consciousness where there is none. It blurs the line between tool and entity. This isn’t a semantic nitpick — it’s a philosophical dilemma. If we describe our systems as if they have agency, we may start to treat them (and trust them) as if they do — even when they’re simply approximating intelligence through patterns.

The 180-Degree Honesty

If we’re being honest — and let’s try — the word was probably chosen and embraced by company leaders to suggest that, once the software is finally “good enough,” it can replace the agents we’ve had to hire for call centers and customer support. It doesn’t matter what the word actually means — it matters what it sounds like it means.

Replacing living beings with machines is one of those concepts that’s done a full 180.

Plastic is a great example. When DuPont introduced nylon, an early synthetic plastic (one of the names floated in its labs was “Duparooh,” an acronym for “DuPont Pulls A Rabbit Out Of A Hat”), it was seen as a miracle substance. Back then, calling something plastic meant it was modern, flexible, even futuristic. Today, it implies artificial, cheap and disposable.

The idea of replacing humans with machines has long made people uneasy. Stories across cultures have warned against it. Hans Christian Andersen wrote about a mechanical nightingale that replaced a real one, until the fake wore out and only the real bird’s song could save the dying emperor. In Greek mythology, there was Talos, a bronze giant built to protect Crete, undone when Medea found his single weak point and drained the ichor that powered him. In “The Twilight Zone,” it was “The Brain Center at Whipple’s,” where the manager who replaces all his workers is, in the end, replaced himself.

Even HAL 9000, from Stanley Kubrick and Arthur C. Clarke’s “2001: A Space Odyssey,” showed how fallible smart machines can be, at least until the sequel, “2010: The Year We Make Contact,” awkwardly retconned the whole thing.

Now, in 2025, tech leaders are in an arms race to help drooling CEOs automate away the “messy,” unpredictable humans with seemingly perfect programs. A full 180.

From Data to DALL·E

On “Star Trek,” Data’s quest to be human was noble, deliberate and painfully self-aware. Today’s models, the GPTs, diffusion models and multimodal AIs, don’t even know they exist. And yet, we’re increasingly anthropomorphizing them. We want our AI to “take initiative,” to “understand nuance,” to be more “agentic.”

But do we want that because it’s useful? Or because it’s comforting?

The irony is, Commander Data spent years striving to become more human. Our AI tools aren’t striving at all — we’re just projecting those aspirations onto them. We’re the Geppettos of modern tech, carving models from code and then wishing them into consciousness with our language.

The Danger of Pretending

This shift in language matters. When we call AI agentic, we may be masking the true risks: systems that can be wrong, biased, unaccountable, manipulable or, most distressingly, utterly lacking in common sense.

Calling an algorithm “agentic” makes it sound competent and trustworthy when it may be anything but.

It also raises thorny questions about responsibility. If something has agency, can it be held accountable? Or do we still blame the humans behind the curtain?

In the end, Commander Data wanted to be more than a machine. He wanted meaning. And maybe that’s the difference.

Today’s tools aren’t striving. We are. And we’ve decided “agentic” is a good enough label — for now. But let’s not forget: the word once meant something very human.

Maybe it still should.
