Latest News

Date: 02/06/2025
Internal documents show that an artificial intelligence-driven system may soon be responsible for evaluating the potential harms and privacy risks of up to 90% of updates to Meta applications such as Instagram and WhatsApp. NPR reports that Facebook (now Meta) reached an agreement with the Federal Trade Commission in 2012 requiring the company to conduct privacy reviews that assess the potential risks of any update to its products. Until now, those reviews have been conducted primarily by human evaluators. Under the new system, Meta says, product teams will be required to fill out a questionnaire about their work and will then typically receive an "instant decision" identifying the risks flagged by the artificial intelligence, along with requirements that must be met before the update or feature is released.