While we can find the term AI in every investor’s deck, or company ‘About Page’ on the web, it’s actually quite rare to witness real AI—it’s a very complicated thing to do. There is a world of difference between Machine Learning (ML), Deep Learning (DL), and Bullshit (BS).
That said, even true AI can go wrong. Recently, there was a trend on TikTok where people used the phrase “I had pasta tonight,” not to talk about what they’d had for dinner, but as a code word to signal a suicidal call for help.
It wasn’t TikTok’s fault that the algorithm didn’t catch the trend quickly enough to stop promoting these posts, because artificial intelligence (AI) requires ample historical data in order to work. In computer science, this is referred to as “garbage in, garbage out.” This is why AI can beat humans at chess or Mahjong, but it would never have invented the game.
This was also something I discussed with Harvard’s Professor Steven Pinker last year when he referred to the “art of asking questions,” something that’s still reserved for humans. While AI will get better and better at computing things, it will never fall in love, or ask a question.
AI is really important, and as of now, it plays a big part in content moderation online. It decides what’s okay for us to see and what’s not, what’s harmful, what’s hateful, what’s fake, what gets boosted, what goes “viral” and what gets buried. But as we’ve seen from the big tech platforms over the past few years, or from the examples above, it has fundamental issues and, more than that, poses a fundamental question—is AI enough to moderate content, and to moderate ads? Or do we need humans?
AI is an incredible revolution, probably as big as the invention of electricity or the internet, and it will be a huge part of our lives, forever. But there are two important things to know about AI.
- AI only works when there is sufficient data to train the AI model. For example, AI failed to predict the spread and impact of COVID-19 because there was no existing data to model the scale of its actual impact effectively. Or when Face ID was first introduced as a way to unlock your iPhone, it didn’t account for people’s “morning face,” so the iPhone didn’t unlock. There was not enough data to suggest that people might look different when they wake up in the morning versus the rest of the day.
- Some mistakes are too big to bear. As an example, if Alexa made a mistake and suggested, based on my behavior, that I buy coffee beans I don’t really want, it’s not a big deal. It’s annoying, but not a big deal. If YouTube tagged a video as a “pet video” thinking there were dogs in it, but there weren’t, it’s not a big deal. It’s annoying, but it’s not a big deal. I could go on. But if we decide to use AI in more serious matters, like whether or not we should take the beginning of a virus spread seriously, or when it comes to topics related to democracy, depression, racism, or human rights, it raises a bigger question: is AI enough?
When it comes to serious matters, such as moderating content, we must also recognize the limitations of humans. People get fatigued, whereas a computer has endless stamina whether it’s reviewing 100 or 1,000 articles. People have biases, they have good days and bad days, and so forth. If we are to consider a more human approach to moderating content, it’s important that those content review teams are incredibly diverse, and well supported.
When it finally became clear that “eating pasta” was not about eating pasta, and that it was a code word for suicide, it was humans who caught it. When COVID-19 happened, humans saw it spread, not machines. And when an image recognition AI labeled Black people as gorillas, it was humans who picked it up, not AI.
Humans + Machines
The future will be over-indexed toward machines that help us live better lives across countless daily interactions. But I’m convinced that in serious matters, there are human problems that require humans to solve them, with AI in a supporting role.
After the Facebook boycott, I suggested that they hire 50,000 content reviewers to manually review content side by side with AI. I think it is important for every tech platform with meaningful distribution to take responsibility for the content on its platform.
The limitations of human review don’t outweigh the risks we’re taking without having it.
I vote for humans (with AI).
Authored Article by Adam Singolda, Founder & CEO, Taboola.