Neural networks need Socrateses

We may not need more folks who code, after all.

Here's what we're realising instead: we're beginning to need folks who ask good questions about the code.

The article I linked to above admits as much between the lines. Many machine learning professionals won't offer you an explanation for why their product or prototype did what it did. Or they'll offer a bullshit explanation.

We're about to have lots more conversations about AI, whether we like it or not. Journalists will talk to tech "experts". DevOps will talk to product/sales teams. Humans will talk to bots. Academics will talk to companies and clients. Regulatory bodies will talk to industry reps. And, sooner or later, lawyers will talk to many, many subpoenaed folks, too.

It's no good if these conversations just get stuck in the loop of black box and bullshit-calling. To get inside the black box, to move beyond the bullshit, you'd need people who can carry these conversations - and change them. Yes, they'll need to read some code; but crucially, they'll need to read the room as well. They won't let the bullshitters off the hook, but they'll know what the reasonable ask is at any given moment - and what the appropriate tests or experiments could look like. They'll have the tech skills and the social skills; the patience and the know-how.

Three things to note here:

  • None of this will happen if you wait until it's convenient for the company. They'll be in a mad rush to ship the product. They won't ever slow down to understand all of it - any of it. This will happen after hours. After the stuff has shipped. After the "lessons learned", even. Maybe after the people who built it have left. You'll always be playing catch-up.
  • None of this is new. Socrates was there to remind the men of ancient Athens that they were bullshitting themselves and each other - and to break things down, for anyone who cared, until some of the bullshit fell away. We don't need to re-invent any of this. Our agile coaches, our therapists, our psychologists, journalists, QA testers, lawyers - they've been doing this for ages, if they've been any good.
  • From the above, it follows: none of this is popular. Flashy, glossy and agile sells; old, slow and obscure doesn't. The execs will not like it. Your teams will not like it. The bottom line will suffer. Didn't end well for Socrates, did it? Maybe bring your own water bottle.

And yet, it will be done. Someone will start calling the bullshit on AI. Someone will sit down, open the black box and dig through all of it. Someone will start working things out - from the smallest certainties to the larger regularities - until they've dispelled some of the bullshit and understood the thing a bit better. Someone will keep reading the right bits of code, asking the right questions - the helpful ones, the achievable and testable ones - and keep running the right experiments to get the right answers, which will then lead to more right questions.

For the tech companies, the choice is cynically simple: that "someone" will either be their own researcher before the bullshit hits the fan, or someone else's lawyer afterwards.

