Instagram DM Encryption Ends As Platforms Shift Privacy Strategy
Meta will regain the ability to read Instagram users’ direct messages from May 8, as the company ends support for end-to-end encrypted DMs introduced only two years ago. The decision marks a major reversal in Meta’s privacy strategy and reflects wider changes across social media platforms.
At the same time, TikTok confirmed it has never offered end-to-end encryption for private messages. Together, the developments suggest social media companies are moving away from treating privacy as an unconditional promise.
Meta Reopens Access To Direct Messages
Meta’s decision will restore its technical ability to scan and moderate Instagram direct messages. Under the current opt-in encrypted system, even Meta’s servers cannot access message content.
That will change on May 8. As a result, Meta will once again be able to use automated content scanning, AI-powered moderation systems and expanded scam detection tools within Instagram DMs.
The move also simplifies compliance with law enforcement requests and regulatory obligations.
In recent weeks, two of the world’s largest social media companies have signalled a broader shift in how they balance privacy and platform safety. The debate centres on whether strict encryption protects users or shields harmful activity from detection.
TikTok Defends Its Existing Messaging System
A TikTok spokesperson said the company’s messaging approach remains unchanged. According to the company, TikTok messages are protected with industry-standard encryption in transit and at rest, meaning data is encrypted as it moves across networks and while it sits on TikTok’s servers.
The spokesperson added that access to message content remains tightly restricted and available only to authorised personnel involved in safety investigations, legal compliance or limited operational needs.
However, TikTok confirmed that its messaging system is not end-to-end encrypted. The company argued that this design helps discourage the spread of illegal material on the platform.
Meta did not immediately respond to requests for comment regarding its decision.
Safety Concerns Drive Policy Changes
Brian Long, chief executive and co-founder of Adaptive Security, said social media companies are reassessing the risks linked to fully encrypted messaging services.
Long said encrypted systems can give scammers and bad actors greater freedom to operate without platform oversight. He argued that companies increasingly recognise the downside of limiting their own ability to monitor harmful activity.
The regulatory environment is also accelerating the shift. The Take It Down Act, passed last year, requires online platforms to remove non-consensual intimate imagery, including AI-generated deepfakes, within 48 hours of a valid request.
Enforcement of the law begins on May 19, just days after Instagram’s encryption changes take effect.
Long said full encryption can make compliance significantly harder because platforms cannot view message content. He also argued that internal safety teams provide the first layer of defence against scams and abuse before cases ever reach law enforcement agencies.
AI-Powered Fraud Continues To Rise
Fraud involving artificial intelligence is growing rapidly across online platforms. According to an FTC report, more than one million senior citizens fell victim to fraud last year, with estimated losses exceeding $81 billion.
AI-driven scams now include deepfakes, cloned voices and long-running romance fraud schemes. Long said encrypted messaging channels have become particularly attractive environments for such activity.
Privacy advocates continue to warn that reducing encryption increases platform surveillance and weakens user protections. However, supporters of the policy shift argue that stronger moderation tools are necessary to combat sophisticated online threats.
Long said companies are increasingly concluding that unrestricted privacy protections can create serious risks for users. He added that people seeking maximum privacy still have access to dedicated encrypted communication applications.