Meta plans to automate risk assessment for many of its products.

Date: 02/06/2025
Internal documents show that an artificial intelligence-driven system may soon be responsible for evaluating up to 90% of the potential harms and privacy risks involved in updates to Meta apps such as Instagram and WhatsApp. NPR reports that Facebook, under an agreement reached with the Federal Trade Commission in 2012, is required to conduct privacy reviews of its products and assess the potential risks of updates. Until now, these reviews have mainly been conducted by human evaluators. Under the new system, Meta says product teams will be required to fill out a questionnaire about their work and will then typically receive an "instant decision" from the artificial intelligence identifying risks, along with requirements that must be met before updates or features are released.