The AI agent OpenClaw, distinctive for its lobster logo, has rapidly become a global sensation. Its widespread adoption has prompted governments worldwide to take notice, with China already issuing a “Lobster Safety Manual” to highlight the associated risks. In Taiwan, Legislator Lai Shyh-bao has observed a growing trend of domestic brokerage firms and stock analysts integrating OpenClaw into their operations, which raises a critical question: who bears responsibility if an AI agent executes a flawed transaction? Lai has consequently urged the Financial Supervisory Commission (FSC) to develop a specialized “Lobster Safety Manual” tailored to the financial sector.
In response to the legislator’s concerns, FSC Chairman Peng Jin-lung acknowledged that while he personally does not use OpenClaw, its prevalence is undeniable. He gave assurances that the relevant internal FSC departments are actively researching countermeasures and assessing the extent of its use within financial institutions.
Chairman Peng further noted that the FSC had previously issued the “Guidelines for Financial Industry’s Use of AI.” He emphasized that the financial sector inherently possesses robust cybersecurity and internal control mechanisms for new technology applications. Should such novel tools compromise operational security, they will be subject to thorough review, and specific safety manuals will be developed accordingly.
FSC’s Six Key Guidelines for AI Adoption in the Financial Sector
Taiwan’s Financial Supervisory Commission has outlined a comprehensive framework to guide the responsible deployment of AI within its financial industry. These guidelines prioritize governance, ethics, security, and sustainability:
- Establish Governance and Accountability Mechanisms: Financial institutions are mandated to assume responsibility for their AI systems. This includes designating senior executives for oversight and establishing a clear framework for risk management and personnel training.
- Prioritize Fairness and Human-Centric Values: Institutions must actively prevent algorithmic biases that could lead to unfair outcomes, ensuring alignment with human-centric principles. Outputs from generative AI must be subject to human review so that their risks are managed objectively.
- Protect Privacy and Customer Rights: Stringent measures must be in place to protect customer privacy and manage data securely. When offering AI-driven services, customer autonomy must be respected, and alternative options should be clearly communicated.
- Ensure System Robustness and Security: AI systems must be designed for robustness and security to prevent harm. When utilizing third-party AI solutions, appropriate risk management and oversight are essential.
- Implement Transparency and Explainability: The operations of AI systems should be transparent and explainable. Institutions must proactively disclose relevant information when AI directly interacts with consumers.
- Promote Sustainable Development: AI applications should align with sustainability principles, contributing to the reduction of inequality and environmental protection. Furthermore, employee training should be provided to help staff adapt to technological changes and safeguard their employment rights.

Taiwan’s Digital Ministry Addresses AI Agent Cybersecurity
Addressing the cybersecurity implications of AI agents, Minister Lin I-ching of Taiwan’s Ministry of Digital Affairs (MoDA) stated on the same day that Taiwan is actively promoting “Sovereign AI” to enhance cybersecurity and technological autonomy. This initiative aims to ensure that AI models used by the government and critical infrastructure operate domestically and are subject to local legal supervision. Minister Lin also highlighted Nvidia’s recent launch of the NemoClaw platform, which specifically focuses on bolstering the cybersecurity defenses of AI agents.
To build up Taiwan’s dedicated computing capacity, MoDA has received an investment application from Foxconn for a computing center. Discussions are also underway with the Ministry of Finance and the FSC to explore allowing insurance industry funds to invest, thereby reducing Taiwan’s reliance on overseas AI models.
China’s “Lobster Safety Manual”: A Precautionary Tale
OpenClaw, developed by Austrian engineer Peter Steinberger, first gained traction within tech circles. The act of installing and using OpenClaw became colloquially known as “raising lobsters.” Recently, this “lobster-rearing” craze spread to the Chinese market, with ordinary citizens eagerly adopting the AI agent and even paid services emerging for installation assistance.
In response, China’s Ministry of State Security published a “Lobster Safety Manual.” The manual warns of inherent risks associated with OpenClaw, including potential host takeover, data theft, and content tampering. Users are strongly advised to strictly limit the agent’s operational scope and to run it in isolated environments, such as dedicated virtual machines or sandboxes, whenever possible.
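To make the manual’s advice concrete, below is a minimal sketch of launching an agent inside an isolated Docker container so it sees only a dedicated workspace rather than the whole host. The image name openclaw-sandbox:latest, the workspace path, and the assumption that the agent ships as a container image are hypothetical placeholders introduced for illustration; the Docker flags themselves are standard.

```python
import subprocess

# Minimal sketch: launch an AI agent in an isolated Docker container.
# The image name and workspace path below are hypothetical placeholders;
# substitute whatever image you actually build for the agent.
docker_cmd = [
    "docker", "run",
    "--rm",                 # delete the container once it exits
    "--network", "none",    # no network access from inside the sandbox
    "--read-only",          # read-only root filesystem
    "--cap-drop", "ALL",    # drop all Linux capabilities
    "--memory", "2g",       # cap memory usage
    "--pids-limit", "256",  # cap the number of processes
    # Mount only a dedicated workspace; nothing else on the host is visible.
    "-v", "/srv/agent-workspace:/workspace",
    "openclaw-sandbox:latest",
]

# Run with a scrubbed environment so credentials and tokens exported in
# the parent shell are not inherited by the invocation.
subprocess.run(docker_cmd, env={"PATH": "/usr/bin:/bin"}, check=True)
```

Note that an agent which calls remote AI models needs some network egress, so `--network none` is realistic only for offline testing; a dedicated bridge network with egress filtering is the less restrictive alternative.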
The Tide Turns: From “Raising” to “Uninstalling” Lobsters
Despite the initial fervor for “raising lobsters,” a shift is now observable in Chinese online communities, from enthusiastic installation to paying for OpenClaw’s removal. As reported by the BBC, citing expert analysis, this reversal stems primarily from OpenClaw’s steep technical barriers and high operating costs, since every action the agent takes incurs fees for the underlying AI model calls.
Furthermore, security risks remain a major concern. China’s National Internet Emergency Center has cautioned that improper use could lead to the theft of sensitive information, such as photos and payment account details. There are also reported instances of AI misinterpreting commands, resulting in accidental data deletion. For the average user, the practical utility of the software currently appears limited, contributing significantly to this wave of uninstallation.
(The above content is excerpted and reprinted with authorization from our partner CryptoCity.)
Disclaimer: This article is provided for market information only. All content and views are for reference purposes and do not constitute investment advice. They do not represent the views or positions of BlockTempo. Investors should make their own decisions and transactions. The author and BlockTempo will not bear any responsibility for direct or indirect losses resulting from investor transactions.