So Apple’s going to stop listening in on your Siri requests. Now what?

A week after a report in The Guardian revealed that humans in Apple’s Siri “grading” program were overhearing private and even illegal activity, Apple has suspended the program to conduct a review. It’s also working on a software update that will give users the ability to opt out (or maybe opt in).

Apple issued a simple statement: “We are committed to delivering a great Siri experience while protecting user privacy. While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

That’s the right thing to do, but it makes me wonder what the path forward is supposed to be. Because, while most people don’t realize it, machine learning (ML) and AI are built on a foundation of human “grading,” and there’s no good alternative in sight. And with Siri frequently criticized for being a year or two behind its rivals, it’s not going to be easy for Apple to catch up while protecting our privacy.

Everybody does it

What’s this Siri grading program all about? Basically, every time you say “Hey Siri…”, the command you utter gets processed on your device, but it’s also semi-anonymized and sent up to the cloud. A small percentage of these requests are used to help train the neural network that allows Siri (and Apple’s Dictation feature) to accurately understand what you’re saying. Somebody, somewhere in the world, is listening to some of those “Hey Siri” commands and noting whether Siri understood the person correctly or not.
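To make that concrete, here’s a minimal sketch in Python of what that sampling step might look like. The record fields, the random-ID anonymization, and the 2 percent sample rate are my assumptions for illustration, not Apple’s actual pipeline.

```python
import random
import uuid

def prepare_for_grading(utterances, sample_rate=0.02):
    """Semi-anonymize voice requests and pick a small fraction for human review.

    `utterances` is assumed to be a list of (audio, transcript) pairs;
    the 2% sample rate is a made-up figure for illustration.
    """
    graded_queue = []
    for audio, transcript in utterances:
        record = {
            "request_id": str(uuid.uuid4()),   # random ID stands in for any user identifier
            "audio": audio,                    # the "Hey Siri" clip itself
            "machine_transcript": transcript,  # what the assistant thought it heard
        }
        if random.random() < sample_rate:      # only a small percentage gets human-graded
            graded_queue.append(record)
    return graded_queue
```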

Then the machine-learning network is adjusted, and re-adjusted, and re-adjusted, through millions of permutations. The changes are automatically tested against these “graded” samples until a new ML algorithm produces more accurate results. Then that neural network becomes the new baseline, and the process repeats.
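To picture that loop, here’s a minimal sketch in Python. The model interface, the accuracy metric, and the stopping rule are illustrative assumptions; Apple hasn’t published how its pipeline actually works.

```python
def accuracy(model, graded_samples):
    """Fraction of human-graded clips the model transcribes correctly."""
    correct = sum(1 for audio, human_label in graded_samples
                  if model(audio) == human_label)
    return correct / len(graded_samples)

def retrain_until_better(baseline, train_fn, graded_samples, max_rounds=100):
    """Repeatedly adjust the network, keeping any version that scores better
    against the human-graded samples; the winner becomes the new baseline."""
    best, best_score = baseline, accuracy(baseline, graded_samples)
    for _ in range(max_rounds):
        candidate = train_fn(best)                   # adjust and re-adjust the network
        score = accuracy(candidate, graded_samples)  # test against the "graded" samples
        if score > best_score:
            best, best_score = candidate, score      # new baseline; the process repeats
    return best
```

The key point is that the human-graded samples serve as the yardstick: without them, there’s nothing to measure the new network against.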

There’s just no way to train ML algorithms (for speech recognition, photo recognition, or determining whether your security camera saw a person or a car) without a human training them in this way. If there were a computer algorithm that could always accurately determine whether the AI was right or wrong, it would be the AI algorithm!

Apple, Google, Amazon, Microsoft, and anyone else producing AI assistants that use machine-learning algorithms to recognize speech, detect objects in photos or video, or do almost anything else are doing this. They’re listening in on your assistant queries, they’re looking at your photos, they’re watching your security cameras.

Sort of.