ChatGPT Used by Scammers to Target Banking Logins

amynicole – Researchers have uncovered a new risk involving ChatGPT and similar large language models (LLMs). These AI tools can unintentionally assist phishing scammers by directing users to fake login pages. Phishing is a common cybercrime in which attackers trick people into handing over sensitive information, often by creating fake websites that mimic legitimate bank or service portals.

Cybersecurity firm Netcraft tested this risk by asking GPT-4.1 models, the family behind ChatGPT, Microsoft Bing AI, and Perplexity, to provide login URLs for 50 well-known brands. The brands spanned various sectors, including finance, retail, technology, and utilities. Netcraft found that the models returned the correct website address only 66% of the time.

Worryingly, 29% of the provided links led to dead or suspended domains, while 5% directed users to legitimate sites different from the requested brand. Hackers can buy these unclaimed domains and set up phishing sites to steal login credentials. The AI’s ability to suggest these incorrect or outdated URLs could enable large-scale phishing campaigns.
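Because an AI-suggested URL may point to an unregistered or unrelated domain, one basic defense is to check any suggested link against the brand's known domains before trusting it. The sketch below is a minimal illustration of that idea; the `VERIFIED_DOMAINS` allowlist is a hypothetical example, not a real product list, and a production system would source verified domains from an authoritative feed.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of a brand's verified domains (illustration only;
# a real deployment would pull these from an authoritative source).
VERIFIED_DOMAINS = {
    "wellsfargo.com",
    "chase.com",
}

def is_verified(url: str) -> bool:
    """Return True only if the URL's host is a verified domain
    or a subdomain of one (exact suffix match on a dot boundary)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in VERIFIED_DOMAINS)

print(is_verified("https://connect.secure.wellsfargo.com/login"))  # True
print(is_verified("https://wellsfargo-login.example.net/login"))   # False
```

Note the dot-boundary check: matching on a bare suffix would let a lookalike host such as `evilwellsfargo.com` slip through, which is exactly the class of trick phishing domains rely on.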

Netcraft researchers warned that these AI tools might unintentionally “endorse” phishing efforts by suggesting misleading or fake URLs to users who trust the AI’s guidance. The threat isn’t hypothetical; the team found a real-world case involving the AI search engine Perplexity, which directed users to a fraudulent Wells Fargo website. This instance showed how AI-powered search can be exploited to lead users into phishing traps.

Implications and Future Risks of AI-Assisted Phishing

The discovery highlights a growing security challenge as AI becomes more integrated into everyday online interactions. While ChatGPT and similar tools offer many benefits, their misuse can amplify cyber threats like phishing. Cybercriminals could exploit these AI models to scale attacks and reach more victims.

Users must stay vigilant and verify URLs carefully, especially when prompted by AI tools. Organizations and developers also need to implement stronger safeguards to prevent AI from generating or promoting malicious content. Researchers suggest improving AI training data and algorithms to reduce errors in URL generation.
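One form such a safeguard could take is an output filter that scans an AI assistant's reply for URLs and flags any that are not on a verified-domain list before the text reaches the user. The sketch below assumes a hypothetical allowlist and a simple regex for URL extraction; real guardrails would be considerably more thorough.

```python
import re
from urllib.parse import urlparse

VERIFIED = {"wellsfargo.com"}  # hypothetical allowlist for illustration

URL_RE = re.compile(r"https?://[^\s)\"']+")

def flag_links(text: str) -> list[str]:
    """Return the URLs in the text whose host is not a verified
    domain (or subdomain of one), so they can be blocked or warned on."""
    flagged = []
    for url in URL_RE.findall(text):
        host = (urlparse(url).hostname or "").lower()
        if not any(host == d or host.endswith("." + d) for d in VERIFIED):
            flagged.append(url)
    return flagged

reply = "Log in at https://wellsfargo.com/login or https://wells-fargo-secure.net/"
print(flag_links(reply))  # ['https://wells-fargo-secure.net/']
```

A filter like this addresses the failure mode Netcraft describes: it does not make the model produce better URLs, but it stops unverified ones from being presented to the user as trustworthy.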

This emerging threat underscores the importance of combining AI advancement with robust cybersecurity measures. As AI gains influence in digital navigation, protecting users from deceptive links must remain a priority. Without proper controls, AI-driven misinformation could increase risks in online banking and other sensitive areas.

In conclusion, the intersection of AI models like ChatGPT and phishing scams calls for coordinated effort. Users, cybersecurity experts, and AI developers should collaborate to prevent AI from becoming an unwitting tool for fraud. Future updates to AI models must address these vulnerabilities to maintain trust and safety in digital environments.