14 Comments
Will Turman

Great article!

Error handling and exposure are challenging to even implement, much less *get right*, in a piece of software, especially recently, as software increasingly exists as productivity-increasing glue between applications and services.

(Perhaps overly?) Defensive programming, with consistency in delivering actionable information to users not only at a level they understand but at a level at which *they care*, is crucial. If you’re maintaining cURL, you’re probably going to want to expose corresponding HTTP error codes; if you’re building a website, your users probably don’t care about an HTTP 401 or 403, they should just be shepherded to a login page.
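A minimal sketch of that split, in Python. All names and routes here are illustrative assumptions, not from any real codebase: a cURL-like tool surfaces the raw status code, while a website quietly shepherds auth failures to a login page.

```python
def user_facing_action(status: int, audience: str) -> str:
    """Translate an HTTP status into what this audience should see.

    `audience` is either "cli" (cURL-like: expose the code) or
    "web" (end users: shepherd them, don't lecture them).
    """
    if audience == "cli":
        # Power users want the raw protocol detail.
        return f"HTTP {status}"
    # Web users don't care whether it was 401 or 403.
    if status in (401, 403):
        return "redirect:/login"
    if status == 404:
        return "show:page-not-found"
    if 500 <= status < 600:
        return "show:try-again-later"
    return "ok"
```

For example, `user_facing_action(403, "web")` yields `"redirect:/login"`, while `user_facing_action(403, "cli")` yields `"HTTP 403"`: same error, two audiences.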

Microcopy is a term used to capture those small pieces of text that guide users through an application or software - including error messages. It’s hard to get right, and it’s great to think about them across the various levels of technical understanding of a piece of software.

I’ve tried to build software to the point where I don’t get phone calls: basically aiming to deliver decipherable, predictable, decently documented tools with an error implementation that makes them supportable by the people with an intermediate understanding of the space between users and myself.

Defender

"Microcopy" never heard this term, I love that it exists, thank you for sharing this!!

I also 💯 love your goal of creating software that helps people help themselves. I think startups tend to do this far better than big co (thinking about internal tools specifically) because they don't have tons of engineers just sitting around to fix stuff. People need to unblock themselves, and to do that they need enough information about what happened, what went wrong, so they can maybe come up with their own workarounds.

if we treat external users this way, I think everything gets better

Ljubomir Josifovski

Very good - thanks for sharing this, like it.

This recalls to my mind "everyone should be treated the same, even if they are not the same." People are different - yet we should treat them all the same, as if they are the same.

Likewise in your case - we should explain to people as if they can understand, even if maybe they can't. I subscribe to this principle. For one thing, we may be surprised - one never knows. For another, how are we to learn new things, if we are only told as much as we already know, but not more.

Yes, the teller runs the risk of being overly detailed and ultimately boring. For if the interlocutor doesn't understand, they may get bored and even frustrated. That's fine. My ego can take a hit; I'm fine risking it. When I notice, I wrap up quickly in a sentence and shut up. Not a biggie.

Amusingly, people find something similar when teaching computers new things they have never seen before (check Jeff Clune's lectures, talks, and podcast interviews). Teaching them things that are too easy, that they already know how to solve, is a waste of time: they already know, so they learn nothing new. Teaching them things that are too hard is a waste of time too, because they fail to get to the solution. But we want them to learn to discover a solution on their own, independently, not just memorise an answer and pattern-match it in the future. The aim of that research is to teach the model how to learn on its own: not just what the right answer is, but the process by which we humans find the answer.

There is a Goldilocks zone, where the system is at A and we want it to get to B on its own. If B is about 20% away, 20% more difficult, but no more than that, then the model stands a non-trivial chance of discovering, on its own, the stepping stones that allow it to get from A to B successfully. And discovering that on its own is the crucial part. They are training the model how to learn on its own, so laying the stepping stones for it is no good; it is counterproductive. The aim of the exercise is for the model to learn how to go about discovering the stepping stones on its own.

Elias Griffin

How about a compromise between computer terms they don't understand *enough* and actual meaning? Logic! After that, here is my conundrum: what perspective to use? Usually, it's objective and impersonal.

"This message has been deleted"

But some person, some programmer or developer, DID write that. The error is NOT an impersonal error; it was human-reviewed and caught. Is it actually disingenuous to make the human speech robot speech?

"We've deleted this message."

Hmm, responsibility was taken there; that's new. The message did not delete itself. You [user] asked us [service] to delete it with a button that says "DELETE" and we did.

"We deleted this message at your request, but you have a temporary copy on this device. Last chance to copy the message and save it somewhere before exiting this interface."

Something like that. I wrote that out quickly, but I added some perceptual *impact* and a recommended *action* for general users, who will now understand.

So I created a personal interaction, I showed the user I am responsible for my system's behavior and know it well, and I gave impact and action.
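A quick sketch of that pattern (the function and wording are hypothetical, mine, not the commenter's): build the message from three parts, what we did, what it means for the user, and what they can still do.

```python
def deletion_message(has_local_copy: bool) -> str:
    """Compose a deletion notice: responsibility, then impact, then action."""
    # Responsibility: the service, not the void, deleted the message.
    parts = ["We deleted this message at your request."]
    if has_local_copy:
        # Impact: what state the user is actually in.
        parts.append("A temporary copy is still on this device.")
        # Action: what they can do about it, right now.
        parts.append("Copy it somewhere safe before leaving this screen.")
    return " ".join(parts)
```

The point of structuring it this way is that the "impact" and "action" parts are conditional on real state, so the message never promises a recovery option that does not exist.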

Defender

this is interesting because I initially saw the passive "has been deleted" as appropriate! My thinking was, this is like a 404 situation, maybe the flow of the code here _expects_ something to exist, but it does not seem to. I guess in that case you should just articulate the simplest true thing, which is: the message could not be found on the server.

I wonder if "We deleted this message at your request" isn't quite right because, in the "correct" flow the local copy would be deleted along with the server copy. My interpretation (which could be wrong) was that this is a state where a message exists locally but not remotely, and the software wants to resolve this out of sync situation

> I showed the user I am responsible for my system's behavior and know it well

this made me think about what it's like at a big tech company, where that isn't always true! I think individual engineers rarely feel that amount of ownership & responsibility over the whole system. I think they default to "given this input, we are in an error state, I don't know how we got here" (because a lot of it is outside their team's control and they generally are kind of silo'd)

thank you for sharing your thoughts 🙏

baol

"If it does not need to be correct, it can be arbitrarily simple"

David Vandervort

Bad error messages are the root of all evil. People who do not study bad error messages will be forced to endure them, as will everyone else.

Eric

There is a typo in the 1st quote. It says: “And there are not sixty two kinds of particles, [...].” But is should, instead, say: “And there are now sixty two kinds of particles, [...].”

Defender

thank you, fixed!!

Chris de Ville

There is a typo in your comment about a typo.

You wrote:

“But is should, instead, say:…”

But it should, instead, say:

“But it should, instead, say…”

Will Turman

“There is a typo in your comment about a typo.”

Also known as Muphry’s Law

Sinclair

there's a whole genre of "quantum mechanics for babies" books that are both infuriating to the professional and not good pedagogy, nor particularly engaging, because they are just cartoony physics diagrams.

I think if Feynman was brought back from the dead to write such a book, it would be like a very pretty and colorful book that taught you colors and had a little to say about color itself. or maybe a Cat-in-the-Hat-esque nonsense adventure where faeries build machines in a magical floaty faerie world where everything is more sticky, which the astute 12-yr-old babysitter may recognize is actually about modern semiconductor manufacturing

ai-plans

Fantastic stuff. This is the kind of mindset we're having when making the alignment guides.

- kabir

Frecka

@WillTurman

You cite Murphy's Law but, rather, I think O'Flarrety's Law fits better.

O'Flarrety's law states: Murphy was an optimist.
