Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is generated by a large language model, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Responses

Baldur Bjarnason

“Adactio: Journal—Disclosure” Not disclosing that something is AI-generated is so obviously unethical that I expect the tech industry to fight any and every attempt to mandate disclosure tooth and nail. adactio.com/journal/20033

Related posts

Changing

I’m trying to be open to changing my mind when presented with new evidence.

The meaning of “AI”

Naming things is hard, and sometimes harmful.

Unsaid

I listened to a day of talks on AI at UX Brighton, and I came away disappointed by what wasn’t mentioned.

Mismatch

It’s almost as though humans prefer to use post-hoc justifications rather than being rational actors.

What price?

Using generative large-language model tools? Sleeping well at night?

Related links

The Gist: AI, a talking dog for the 21st Century.

My main problem with AI is not that it creates ugly, immoral, boring slop (which it does). Nor even that it disenfranchises artists and impoverishes workers (though it does that too).

No, my main problem with AI is that its current pitch to the public is suffused with so much unsubstantiated bullshit, that I cannot banish from my thoughts the sight of a well-dressed man peddling a miraculous talking dog.

Also, trust:

They’ve also managed to muddy the waters of online information gathering to the point that even if we scrubbed every trace of those hallucinations from the internet – a likely impossible task – the resulting lack of trust could never quite be purged. Imagine, if you will, the release of a car which was not only dangerous and unusable in and of itself, but which made people think twice before ever entering any car again, by any manufacturer, so long as they lived. How certain were you, five years ago, that an odd ingredient in an online recipe was merely an idiosyncratic choice by a quirky, or incompetent, chef, rather than a fatal addition by a robot? How certain are you now?

The Generative AI Con

I Feel Like I’m Going Insane

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.

Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.

We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.

Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.

Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.

AI is Stifling Tech Adoption | Vale.Rocks

Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!

Well, your coding co-pilot is not going to be of any help.

Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.

Once it has finally been released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap: a period between the present and the AI’s training cutoff. This gap creates a time between when a new technology emerges and when AI systems can effectively support user needs regarding its adoption, meaning that models will not be able to service users requesting assistance with new technologies, thus disincentivising their use.

So we get this instead:

I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.

Tech continues to be political | Miriam Eric Suzanne

Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.

This. A million times, this.

I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.

I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?

AI wants to rule the World, but it can’t handle dairy.

AI has the same problem that I saw ten years ago at IBM. And remember that IBM has been at this AI game for a very long time. Much longer than OpenAI or any of the new kids on the block. All of the shit we’re seeing today? Anyone who worked on or near Watson saw or experienced the same problems long ago.

Previously on this day

8 years ago I wrote Writing on the web

Thank you, writers.

10 years ago I wrote Codebar Brighton

Ongoing events in Brighton.

14 years ago I wrote Have Kindle, will travel

I want to love my Kindle, I really do.

14 years ago I wrote The medium is the short message

The limits of Twitter.

19 years ago I wrote Talking about microformats

How a harmless mashup landed me a place on a panel at SXSW.

21 years ago I wrote Airline madness

I’ve been comparing air fares recently in anticipation of a possible trip to Ireland.

22 years ago I wrote Bush Demands Recount

Now, this is funny:

23 years ago I wrote Apple - Bluetooth

It wasn’t all bad news from Apple this week. This USB Bluetooth adapter looks very interesting.