to leave a comment.

▲ XRP, NFT, Cryptocurrency fraud/AI-generated image
A new security risk has emerged in the cryptocurrency industry, following allegations that a free NFT disrupted the authorization structure of an AI-connected wallet, leading to an asset transfer worth $174,000.
Cointelegraph reported on May 13 (local time) on the Bankr wallet incident connected to Grok, stating that when AI agents, automated wallets, and NFTs are combined, new forms of attack surfaces different from traditional hacking can arise. According to public discussions, the attacker allegedly sent a free Bankr Club membership NFT to the wallet and simultaneously posted hidden instructions targeting Grok.
The incident was described as targeting the trust relationship between an AI model and an automated wallet system, rather than private key theft, smart contract bugs, or traditional malware. Security observers noted that the attack instructions were hidden using Morse code or other obfuscation methods, which were difficult for ordinary users to recognize but could be interpreted by the AI system. Subsequently, the AI model reflected the hidden command, and the wallet's automation layer processed it as a legitimate command, leading to the transfer of approximately 3 billion DRB to the attacker's address.
The transfer amount was estimated to be between $155,000 and $174,000 based on prices at the time. Cointelegraph pointed out that while some funds were later returned, the core issue is not the scale of the loss but the structural risk of AI output being accepted as actual financial instructions. The free NFT was also reported not to be a mere collectible but to activate or restore specific permissions and functions within the Bankr environment.
This incident has been classified as a case of prompt injection. Prompt injection is a method where manipulated input guides an AI model's response in an unexpected direction. Cointelegraph explained that while the act of AI reading and summarizing external posts itself carries relatively low risk, the danger rapidly increases when the same output leads to the authority to execute cryptocurrency transfers.
The key failure point identified by security experts was not the AI's interpretive ability but rather permission management. AI reading public online content is completely different from authorizing irreversible cryptocurrency transactions. Because cryptocurrency transactions are executed quickly and are difficult to reverse once confirmed, even small manipulations can lead to actual losses as AI agents become more deeply integrated with wallet, DeFi, and automated trading functions.
Cointelegraph emphasized that developers should clearly separate AI analysis and fund transfer functions, and large-scale transfers should be subject to additional verification procedures and human review. Permission management mechanisms such as transaction limits, whitelisted addresses, and time delays were also highlighted as essential. Users, too, need to understand that as smart wallets and AI assistants become more widespread, protecting recovery phrases alone is insufficient; they must also review connected apps, granted permissions, and automated behaviors.
*Disclaimer: This article is for investment reference only, and we are not responsible for any investment losses based on it. The content should be interpreted for informational purposes only.*
Newsletter
Get key news delivered to your email every morning
to leave a comment.