Facebook Says It Is Investing in ‘Inclusive AI’ to Curb Harmful Content
Facebook has announced long-term investments in the field of artificial intelligence (AI) to proactively detect content that violates its policies.
“To help us catch more of this problematic content, we’re working to make sure our AI systems can understand content with as little supervision as possible,” Manohar Paluri from Facebook’s AI team said during the company’s F8 conference in San Jose, California, on Thursday.
“Advances in natural language processing (NLP) have helped us create a digital common language for translation, so we can catch harmful content across more languages,” Paluri added.
Facebook has developed a new approach to object recognition called “Panoptic FPN” that has helped AI-powered systems understand context from the backgrounds of photos.
“Training models that combine visual and audio signals further improves results,” said Paluri.
Facebook is currently facing several probes across the world for privacy violations and the spread of harmful and biased content on its platforms, including WhatsApp.
Joaquin Quinonero Candela from Facebook’s AI team said the company is building best practices for fairness into every step of product development to ensure AI protects people and does not discriminate against them.
“When AI models are trained by humans on datasets involving people, there is an inherent representational risk.
“If the datasets contain limitations, flaws or other issues, the resulting models may perform differently for different people,” said Candela.
To manage that risk, Facebook said it has developed a new process for inclusive AI.
This process provides guidelines to help researchers and programmers design datasets, measure product performance, and test new systems through the lens of inclusivity.
“For vision, those dimensions include skin tone, age and gender presentation, and for voice, they include dialect, age and gender,” said the company.