Fraudulent ChatGPT Malware Prevalent, Says Meta Security Team

In its Q1 security report, Meta's security team acknowledged the prevalence of malware disguised as ChatGPT. Cybercriminals are exploiting interest in high-profile AI chatbots such as ChatGPT, Bing, and Bard to trick users into installing counterfeit versions of these tools. According to Meta, this lure has now overtaken crypto-related scams in popularity.

Meta's security analysts say they have discovered roughly ten malware families posing as ChatGPT and other AI chatbot tools since March. Some of these fake tools are distributed as browser extensions and toolbars, including through unofficial web stores. The Washington Post reported last month that the scams have also spread via Facebook ads.

Some fraudulent ChatGPT tools even include working AI features to make them look like a legitimate chatbot. Meta says it has blocked more than 1,000 unique links to these malicious impostors from being shared on its platforms.

Meta has also shared technical details on how the attackers take over accounts: the malware steals active login sessions, letting hackers retain access without ever needing the password, the same tactic used in the hijacking of the Linus Tech Tips YouTube channel.
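To illustrate why session theft is so effective, here is a minimal, purely hypothetical Python sketch of a session replay. The endpoint and cookie name are placeholders, not Meta's actual API: the point is that a server treats a valid session cookie as proof of an already-completed login, so replaying a stolen one bypasses the password and any two-factor prompt.

```python
# Illustrative sketch only: why a stolen session cookie grants account access.
# The URL and cookie name below are hypothetical, not any real Meta endpoint.
import requests

# Value exfiltrated from the victim's browser by the malware.
STOLEN_COOKIE = {"session_id": "abc123..."}

# Presenting the cookie makes the server treat this request as part of the
# victim's existing, already-authenticated session -- no password, no 2FA.
resp = requests.get(
    "https://example-social-site.test/me",  # hypothetical account endpoint
    cookies=STOLEN_COOKIE,
    timeout=10,
)
print(resp.status_code)  # a 200 here would mean the hijacked session is live
```

This is also why "retaining access" works: until the platform invalidates the stolen session server-side, changing the password alone does not evict the attacker.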

To address the issue, Meta is rolling out new work accounts that support existing single sign-on (SSO) credential services, which are typically more secure and aren't tied to a personal Facebook account. It is also introducing a new support process that lets companies whose Facebook accounts have been compromised or deactivated recover access. Business pages are frequent targets because the malware goes after the Facebook users who manage them; once a company migrates to the new setup, it should be substantially harder for hackers to mount such an attack.
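For a sense of what SSO changes, here is a minimal sketch assuming an OIDC-style identity provider; the issuer URL and audience are invented placeholders, not Meta's actual work-account API. Instead of trusting a long-lived cookie on a personal account, the app verifies a short-lived, cryptographically signed token from the company's identity provider.

```python
# Minimal SSO sketch (OIDC-style) using PyJWT. The issuer and audience
# values are hypothetical placeholders, not Meta's real work-account service.
import jwt  # pip install pyjwt[crypto]
from jwt import PyJWKClient

ISSUER = "https://sso.example-corp.test"  # hypothetical identity provider
AUDIENCE = "business-manager"             # hypothetical app identifier

def verify_sso_token(id_token: str) -> dict:
    # Fetch the provider's public signing keys, then validate the token's
    # signature, issuer, audience, and expiry. A stolen personal-account
    # session cookie is useless here: access hinges on the provider's
    # signed, short-lived assertion.
    jwks = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")
    signing_key = jwks.get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```

Because the token expires quickly and the provider can revoke a compromised employee account centrally, a hijacked browser session yields far less persistent access than it does with a personal Facebook login.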
