First, AI flooded the internet with slop; now it’s destroying work, too – here’s how to use AI and still be a stellar employee

If there’s one thing we can depend on AI for, it’s to prove time and time again that you can’t simply replace human effort with technology. A new Harvard Business Review and Stanford Media Lab study found that “workslop” is overrunning business and, in the process, ruining work and reputations.

If workslop sounds familiar, that’s because it’s a cousin to AI slop. The latter is all over the internet and characterized by bad art, poor writing, six-fingered videos, and auto-tuned-sounding music.

Workslop, according to HBR, is “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”

Because we’re living on AI time, where everything in technology (and life) seems, thanks to generative AI, to be happening at three times its normal pace, we suddenly have large language model (LLM)-driven AI in every corner of our lives.

Generative platforms like Gemini, Copilot, Claude, and ChatGPT live on our phones, and while Google search still far outstrips ChatGPT as a tool for basic search results, more and more people are turning to ChatGPT when they want deeper, richer, and theoretically more useful answers.

That trend continues in the workplace, where, seemingly overnight, tools like Gemini and Copilot are embedded in productivity apps like Gmail and Microsoft Word.

They’re capable of generating:

  • Summaries
  • Reports
  • Presentations
  • Research
  • Coding
  • Graphics

And it’s clear from this report that there has been a quick and broad embrace of these tools for these and many other office tasks. In fact, workers might be squeezing a little too tight.

In the study, 40% of respondents reported receiving workslop, and they’re none too happy about it. They report being confused and even offended.

Even worse, workslop is changing how they view coworkers.

The problem with workslop is that while it appears to be complete, high-quality work, it often isn’t. AI can still produce errors and hallucinations. OpenAI’s GPT-5 model is the first major update to directly tackle the hallucination issue, making ChatGPT less likely to fill in the blanks with guesswork when it doesn’t know something. Still, it and other AIs are far from perfect.

The work is also often weirdly cookie-cutter; these are still programs (highly complex ones) that lean on a handful of go-to terms like “delve”, “pivotal”, “realm”, and “underscore.”

It’s not clear whether the workers using AI to build reports and projects recognize this, but their coworkers and managers appear to, and let’s just say the workers’ next performance evaluations may not recognize them for “originality.”

A bad look

According to the report, peers perceive coworkers who deliver AI-generated work product as less capable, reliable, and trustworthy. They also see them as less creative and intelligent.

Now, that seems a bit unfair. After all, it does take some effort to create a prompt or series of prompts that will result in a finished project.

Still, the reaction to this workslop indicates that people are not necessarily curating the work. Instead of a series of prompts delivered to the AI to create some output, they might be plugging in one prompt, seeing the results, and deciding, “That’s good enough.”

The cycle of unhappiness continues when peers report this workslop up the chain to their managers. It’s a bad look all around, especially if the workslop makes it out of a company and into a client’s hands.

Our AI coworker

What’s been lost in this rush to use generative AI as a workplace tool is that it was never intended to replace us or, more specifically, our brains. The best work comes from our creative spark and deep knowledge of context, two things AI decidedly lacks.

When I asked ChatGPT, “Do you think it’s a good idea for me to ask you to do work for me and then for me to present it to my boss?” it did a decent job of putting the issue in perspective.

Mostly, ChatGPT discussed how it can help with research, outlining the first version of a project, saving time on repetitive tasks, and generating fresh ideas.

It warned me, however, about:

  • Originality & Attribution
  • Accuracy
  • Ethics and Expectations

It was almost as if ChatGPT had already read the HBR study. Even it knows workslop is bad.

How do we avoid workslop?

HBR had some ideas, and I think the solution is pretty simple: remind everyone that AI is not the answer to every problem.

Ensure that everyone knows when it’s best to use AI and understands what should happen to the AI output, i.e., editing, fact-checking, shaping, or rewriting.

Start viewing AI as a very smart assistant, not as another, smarter version of yourself.

Insist on more in-person meetings and direct collaboration. Reembrace the beauty of a brainstorm.

Workslop, like AI slop before it, will surely get worse before it gets better. There is a real chance that we may soon no longer be able to tell the difference between original human work and AI-generated projects, but I hope that day never comes. We can figure this out. Even ChatGPT knows the answer:

“Think of me as your co-writer or research assistant, not a ghostwriter. Take what I give you, refine it, make sure it’s in your voice, and add your personal expertise. That way, you’re delivering something polished but still authentically yours.”
