Facebook has faced sustained criticism over content that has appeared on its platform in recent years.
Mark Zuckerberg has been trying to address these issues, and in a recent post he shared more details about the company's approach:
Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence — and because of the multi-billion dollar annual investments we can now fund. To be clear, the state of the art in AI is still not sufficient to handle these challenges on its own. So we use computers for what they’re good at — making basic judgements on large amounts of content quickly — and we rely on people for making more complex and nuanced judgements that require deeper expertise.
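The division of labour described above — automated systems handling clear-cut judgements at scale, humans handling the nuanced cases — can be sketched as a simple confidence-threshold triage. The thresholds, names, and scores below are illustrative assumptions, not details from the post:

```python
# Hypothetical sketch of the hybrid triage described above: a classifier
# scores each piece of content, confident calls are acted on automatically,
# and borderline cases are routed to human reviewers. All thresholds and
# identifiers here are made up for illustration.

def triage(scored_items, remove_threshold=0.95, review_threshold=0.60):
    """Split (item_id, harm_score) pairs into auto-remove, human review, keep."""
    auto_remove, human_review, keep = [], [], []
    for item_id, harm_score in scored_items:
        if harm_score >= remove_threshold:
            auto_remove.append(item_id)   # machine is confident: act automatically
        elif harm_score >= review_threshold:
            human_review.append(item_id)  # uncertain: needs nuanced human judgement
        else:
            keep.append(item_id)          # likely benign: leave it up
    return auto_remove, human_review, keep

# Example with fabricated scores:
scored = [("post-1", 0.99), ("post-2", 0.72), ("post-3", 0.10)]
removed, queued, kept = triage(scored)
```

The design choice this sketch reflects is that raising `review_threshold` trades reviewer workload against the risk of leaving borderline content up, while `remove_threshold` controls how much the system trusts the machine to act without a person in the loop.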
In training our AI systems, we’ve generally prioritized proactively detecting content tied to the most real-world harm. For example, we prioritized removing terrorist content — and now 99% of the terrorist content we remove is flagged by our systems before anyone on our services reports it to us. We currently have a team of more than 200 people working on counter-terrorism specifically.
You can find more details about Facebook's plans to monitor the content on its platform at the link below.