ChatGPT: A Surge and a Solution for Fraud Detection

Phishing emails continue to be the bane of enterprise security teams and individual email users. With the arrival of ChatGPT, the problem was destined to worsen over time.

While ChatGPT's popularity has grown, cybercriminals' use of it to craft more convincing identity management scams has accelerated at an equal pace. Their targets are often unsuspecting, with the elderly particularly easy prey for phishing scams due to their vulnerability and limited technical awareness.

Convincing Phishing

In a notable instance, a grandmother was scammed out of £21k through a convincing phishing email. Fortunately, a younger family member helped her identify the scam before it was too late. By running the scam emails through ChatGPT, it was quickly revealed that they were malicious and linked to criminal attempts to gain identity and access management credentials.

This discovery led to the creation of Catch, an artificial intelligence tool designed specifically to detect scam emails and prevent the identity and access management hacks that stem from them. Catch rapidly became available and compatible with Google's Gmail, so any phishing email could be flagged and highlighted to prevent fraudulent activity.
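Tools of this kind work by scoring an email's content for common phishing signals before it reaches the reader. As a minimal illustrative sketch only (Catch's actual implementation has not been published, and every name below is hypothetical), a simple rule-based scorer might look like this:

```python
import re

# Hypothetical signal lists -- a real detector would use far richer
# features or a trained model, not hand-picked keywords.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "act now"}
CREDENTIAL_WORDS = {"password", "login", "account number"}

def phishing_score(email_text: str) -> int:
    """Return a crude risk score: higher means more phishing signals."""
    text = email_text.lower()
    score = 0
    # Urgency language is a classic phishing pressure tactic.
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Requests for credentials are a stronger signal, so weight them higher.
    score += 2 * sum(1 for word in CREDENTIAL_WORDS if word in text)
    # Links pointing at a raw IP address instead of a domain are suspicious.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 3
    return score

def is_suspicious(email_text: str, threshold: int = 3) -> bool:
    """Flag the email if its combined score crosses the threshold."""
    return phishing_score(email_text) >= threshold
```

A Gmail-integrated tool would run a check like this (or, more likely, a language-model classifier) on each incoming message and highlight those that cross the threshold.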

A Powerful Tool for Cybercrime

Generative AI has become one of the best tools for cybercriminals, especially in the world of phishing scams targeting identity and access management. A 967% rise in credential phishing has been reported since the technology became available. ChatGPT helps attackers scale their operations by writing sophisticated, targeted emails that appear legitimate to the unsuspecting.

The number of phishing emails sent daily has become astronomical thanks to generative AI, with ChatGPT, Google Gemini, Claude, and Microsoft Copilot all able to generate fresh content, including realistic images and voice content.

With AI features being used to enhance multiple processes in the enterprise, it was only a matter of time before tools like ChatGPT would be corrupted for more nefarious activity, such as harvesting identity and access management information. While businesses may have protocols and training in place to help employees identify phishing attacks, individuals at home are far less familiar with the attack patterns.

Patterns of Growth

It is no coincidence that ChatGPT's launch coincided with a period of substantial growth in phishing activity. Generative AI has effectively lowered the barrier to entry for novice threat actors and given more experienced hackers the tools to perform spear-phishing attacks at scale.

The emergence of FraudGPT provided fraudsters, hackers, spammers, and other malicious actors with an exclusive tool full of extensive features. Another emerging threat is 'AI jailbreaking', where hackers remove the guardrails governing legitimate use of generative AI chatbots, turning ChatGPT into a weapon that fools users into giving up their personal data or login credentials.

Gone are the days of Nigerian princes looking to bestow their fortunes through badly scripted emails. Today's ChatGPT-generated phishing emails are extremely convincing and legitimate in appearance, even going so far as to seem written by a family member or fellow employee.

Need for Information

Sadly, tools like Catch have not caught on to the level they should, due to victims' reluctance to act and a seeming indifference to being hacked and having cybercriminals gain their identity and access management credentials.

To learn more about identity management solutions and how to protect them against phishing attacks, keep tabs on the identity and access management events being held in the UK in 2024.
