The opposite of The Suspicion Machine

This week's story in Wired is worth thinking about.

"The Suspicion Machine" is an algorithm used by the city of Rotterdam to determine whether a person poses a risk of benefit fraud.

To simplify: it's an automated decision-making process that takes several facts about a person into account and assigns a score to each of those facts. If the combined score reaches a certain level, that person may become the target of a fraud investigation.
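Purely to illustrate the mechanism - not Rotterdam's actual model; every feature name, weight, and threshold below is invented - a score-and-threshold process has roughly this shape:

```python
# Invented illustration of a weighted-score-and-threshold process.
# None of these features, weights, or the cutoff come from Rotterdam's
# actual system; they only show the general mechanism.

RISK_WEIGHTS = {
    "recently_moved": 0.2,         # hypothetical feature
    "irregular_income": 0.4,       # hypothetical feature
    "missed_an_appointment": 0.3,  # hypothetical feature
}
INVESTIGATION_THRESHOLD = 0.6      # hypothetical cutoff

def risk_score(facts: set[str]) -> float:
    """Sum the weights of whichever facts apply to a person."""
    return sum(RISK_WEIGHTS.get(fact, 0.0) for fact in facts)

def flag_for_investigation(facts: set[str]) -> bool:
    """A person whose combined score crosses the threshold becomes a target."""
    return risk_score(facts) >= INVESTIGATION_THRESHOLD

# Example: two applicable facts push this person over the line.
print(flag_for_investigation({"irregular_income", "missed_an_appointment"}))  # True
```

The person being scored never sees the weights, never chose to share the facts, and often can't change them. That's the part worth inverting.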

Earlier in the week, the story made me furious. But it isn't surprising. And Rotterdam's "suspicion machine" isn't the only one out there. We've been building these for ages: to process refugees, to screen job applicants, to decide on college admissions. And we'll keep building them. Our history is full of suspicion machines - built with stone and mortar, or stakes and fire, or pen and paper, or good people following orders.

Instead, I want to think about what we could be building - especially right now, with all the processing power at our fingertips.

What would be the opposite of Rotterdam's Suspicion Machine?

  • It would serve to amplify and ground the good hunches and "good feelings" we may have about someone - where the original served to entrench our biases and put a veneer of credibility on prejudice.
  • It would rely on facts and data we share knowingly and with consent - where the original collected events and data debris without its subjects' knowledge or approval.
  • It would focus on areas which people can act on and change - what they know, how they act, what they learn, where they work - instead of the original's focus on characteristics we have little control over, like being female, or younger, or of a particular background.
  • It would work in the open, and its methods would be clear and easy for everyone to see (see the sketch after this list) - in contrast to the "mystery black box" approach of the suspicion machine.
  • As such, it would be open-source and ready for others to inspect, copy, adapt, and understand its workings - again, in contrast to the original's closely guarded proprietary secrets.
  • Its ultimate goal would be to enhance and improve the life of every human who worked with it - where the original's end result was to get humans in trouble.
  • It would be 100% voluntary, and informed + enthusiastic consent would be required before it started working with anyone - in contrast to the original, which gathered data about its subjects in secret.
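To make those principles a bit more concrete, here's a rough, hypothetical sketch - every name, factor, and plain-language explanation in it is invented - of how a few of them might surface in code: consent as a hard gate, openly published weights, actionable factors, and readable explanations instead of a bare score.

```python
# Hypothetical sketch of an "opposite machine": consent-gated,
# transparent, and built only on factors a person can act on.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str          # something the person can change, e.g. a skill
    weight: float      # published openly, open to inspection and debate
    explanation: str   # plain-language reason shown to the person

# Published alongside the open-source code, not hidden in a black box.
FACTORS = [
    Factor("completed_training", 0.5, "You finished a training course."),
    Factor("mentored_a_colleague", 0.3, "You helped someone else learn."),
]

def assess(person_facts: set[str], consent_given: bool) -> list[str] | None:
    """Run only with informed consent; return readable explanations,
    never a bare score, so the person sees exactly why."""
    if not consent_given:
        return None  # 100% voluntary: no consent, no processing
    return [f.explanation for f in FACTORS if f.name in person_facts]

# Example: the person opted in, and sees every reason in plain words.
print(assess({"completed_training"}, consent_given=True))
```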

You may say I'm a dreamer. Say that's lofty and unachievable. Well, here's the punchline.

Rotterdam spent years and millions building its Suspicion Machine. It didn't exist, and then the city worked long and hard on it, and now it's there, messing up people's lives. AND IT DOESN'T EVEN DO ITS JOB. It's no better at detecting benefit fraud than random chance would be.

So if someone were to build the opposite of the suspicion machine - spend years and millions going from first principles like the ones above to a prototype, and then to a working version - the bar, for me personally, wouldn't be 100% effectiveness. It would simply be to make every human involved 1% better, or feel 1% better, or be 1% better off, every time the machine is applied. That would already be an improvement over Rotterdam's stuff, no?

