A recent Newsweek article reported that a majority of American workers believe artificial intelligence is increasing bias, not reducing it. Nearly 60 percent of respondents said AI makes bias worse, and more than half said they only want humans reviewing job applications.
The conversation has largely focused on hiring, interviews, and HR systems. That makes sense. Those are high-stakes decisions with real human consequences.
But the same pattern is quietly showing up somewhere else.
Marketing.
AI has been sold to marketing teams as a shortcut to efficiency:
- Faster content
- Smarter targeting
- Better optimization
- More consistent output
In theory, this should reduce bias by removing emotion and guesswork. In practice, it often does the opposite.
AI doesn’t remove assumptions. It scales them.
Just like hiring tools trained on biased data can reinforce inequities, marketing AI trained on existing content, performance metrics, and historical norms tends to reproduce what already exists. The result isn’t better thinking. It’s faster repetition.
This isn’t about political or ideological bias. It’s practical bias.
Marketing AI tends to favor:
- The loudest and most saturated markets
- The most common pain points
- The safest language
- The highest-volume audiences
- The shortest path to engagement
What gets lost?
- Nuance
- Context
- Brand personality
- Regional and cultural differences
- Long-term trust
Over time, brands begin to sound alike. Messaging flattens. Differentiation erodes. And teams mistake consistency for clarity.
This mirrors the concern raised in the Newsweek article. Workers aren’t rejecting AI outright. They’re reacting to what happens when AI is used without transparency, oversight, or human judgment.
One of the most telling insights from the article is that people are open to AI improving efficiency, but they still want humans involved in decisions that affect them.
Marketing decisions affect people.
- How a company positions itself
- Who it speaks to and who it ignores
- What it promises and what it avoids
- How it sounds when things go wrong
When AI drives those decisions unchecked, brands begin to feel efficient but impersonal. Optimized but forgettable. Innovative on paper, hollow in practice.
That perception gap doesn’t stay contained in marketing metrics. It shows up in sales conversations, recruiting, retention, and trust.
The Newsweek article includes a blunt assessment from HR consultant Bryan Driscoll: many organizations are effectively outsourcing judgment to machines. They buy a tool, trust the output, and move on.
The same thing happens in marketing.
AI becomes a substitute for strategy instead of a support for it. Messaging gets generated before positioning is clarified. Content gets produced before audiences are understood. Optimization happens before anyone agrees on what “good” actually means.
That’s not innovation. That’s abdication.
The marketing teams using AI well tend to approach it differently:
- Strategy comes before automation
- AI output is treated as a draft, not a decision
- Humans remain accountable for what goes out the door
- Performance is evaluated beyond short-term clicks
- Brand voice and trust are protected intentionally
In other words, AI is a tool, not an authority.
This aligns closely with what workers are asking for in hiring: transparency, oversight, and human involvement where it matters most.
AI isn’t ruining marketing.
It’s revealing where leadership is absent.
If a brand lacks clarity, AI will amplify the confusion.
If a team lacks direction, AI will scale inconsistency faster.
If leadership avoids hard decisions, AI will quietly make them instead.
At GC Strategies, we don’t see AI as a replacement for marketing leadership. We see it as a stress test. It exposes weak positioning, unclear ownership, and systems built for activity instead of insight.
Used well, AI can speed execution.
Used carelessly, it flattens brands and erodes trust.
The difference isn’t the tool.
It’s who’s still thinking.