
A crayfish has ignited the tech world; are humans ready to "flip the table"?

2026/02/10 12:00
18 min read

Author: Jia Tianrong, "IT Times" (ID: vittimes)

A lobster has ignited the global technology community.

From Clawdbot to Moltbot, and now to OpenClaw, in just a few weeks, this AI agent has completed a "three-stage leap" in technological influence through name iterations.

Over the past few days, it has sparked a "smart agent tsunami" in Silicon Valley, garnering 100,000 GitHub stars and becoming one of the most popular AI applications. With just an old Mac mini or even a worn-out mobile phone, users can run an AI assistant that can "listen, think, and work."

On the internet, a creative frenzy has begun surrounding it. From schedule management and intelligent stock trading to podcast production and SEO optimization, developers and geeks are using it to build all sorts of applications. The era of everyone having a "Jarvis" seems within reach. Major domestic and international companies have also begun to follow suit, deploying similar intelligent agent services.

But beneath the bustling surface, anxiety is spreading.

On one hand, there is the slogan of "productivity equality," but on the other hand, there is the digital divide that is still difficult to bridge: environment configuration, dependency installation, permission settings, frequent errors, etc.

During the trial, reporters found that the installation process alone could take several hours, excluding many ordinary users. "Everyone says it's great, but I can't even get in the door," became the first frustration for many tech novices.

The deeper unease stems from the "power to act" it has been given.

If your "Jarvis" starts to accidentally delete files, access credit cards without authorization, get tricked into executing malicious scripts, or even have attack commands injected while online, would you still dare to entrust your computer to such an intelligent agent?

The speed of AI development has exceeded human imagination. Hu Xia, a leading scientist at the Shanghai Artificial Intelligence Laboratory, believes that in the face of unknown risks, "intrinsic security" is the ultimate answer, and at the same time, humanity needs to accelerate developing the ability to "flip the table" at critical moments.

Regarding OpenClaw's capabilities and risks, which are true and which are exaggerated? As an ordinary user, is it safe to use it now? How does the industry evaluate this product, which has been called "the greatest AI application to date"?

To further clarify these issues, IT Times interviewed several OpenClaw users and technical experts, attempting to answer a core question from different perspectives: Where exactly has OpenClaw reached?

1. Currently the product closest to the vision of an intelligent agent

Many respondents gave a highly consistent assessment: from a technical point of view, OpenClaw is not a disruptive innovation, but it is currently the product that is closest to the public's imagination of "intelligent agents".

"The intelligent agent has finally reached a key milestone, a transformation from quantitative to qualitative change." Ma Zeyu, deputy director of the Artificial Intelligence Research and Testing Department of the Shanghai Computer Software Technology Development Center, believes that OpenClaw's breakthrough does not lie in a certain disruptive technology, but in a key "qualitative change": for the first time, it enables an agent to complete complex tasks continuously for a long time, and is user-friendly enough for ordinary users.

Unlike previous large models that could only "answer questions" in dialog boxes, it embeds AI into real workflows: it can operate a "computer of its own" like a real assistant, calling tools, processing files, executing scripts, and reporting results to the user after the task is completed.

In terms of user experience, it's no longer about "watching it do things step by step," but rather "you give instructions, and it does it on its own." This is precisely what many researchers see as a crucial step for intelligent agents to move from "proof of concept" to "usable product."

Tan Cheng, an AI expert at China Telecom Cloud Technology Co., Ltd.'s Shanghai branch, was one of the earliest users to try deploying OpenClaw. After deploying it on an idle Mac mini, he found that the system not only ran stably, but the overall experience was also far more mature than expected.

In his view, OpenClaw addresses two major pain points: first, interacting with AI through familiar communication software; and second, handing over a complete computing environment to AI for independent operation. Once the task is assigned, there's no need to continuously monitor the execution process; simply wait for the results, significantly reducing usage costs.

In practical use, OpenClaw can help Tan Cheng complete tasks such as timed reminders, data research, information retrieval, local file organization, document writing and return; in more complex scenarios, it can also write and run code to automatically capture industry information and process information-related tasks such as stocks, weather, and travel planning.

2. The "Double-Edged Sword" of Open Source

Unlike many wildly popular AI products, OpenClaw is not the work of a tech giant that is all in AI, nor is it the creation of a star startup team. Instead, it was created by Peter Steinberger, an independent developer who has achieved financial freedom and retired.

On X, he describes himself as: "Returning from retirement to tinker with artificial intelligence and help a lobster rule the world."

The reason OpenClaw has become popular all over the world, besides being "truly useful," is that it is open source.

Tan Cheng believes this surge in popularity did not stem from an unreplicable technological breakthrough, but from the simultaneous resolution of several long-neglected real-world pain points.

First, it is open source. The source code is completely open, allowing developers worldwide to get started quickly and build on it, creating a positive feedback loop of community iteration.

Second, it is truly usable. AI is no longer limited to dialogue but can remotely operate a complete computing environment to perform tasks such as research, document writing, file organization, and email sending, and even to write and run code.

Third, the barrier to entry has been significantly lowered. Intelligent agent products capable of similar tasks are not uncommon; Manus and Claude Code have each proven feasibility in their own fields. However, these capabilities usually reside in expensive, complex commercial products, leaving ordinary users either unwilling to pay or excluded outright by technical barriers.

OpenClaw makes it accessible to ordinary users for the first time.

" To be honest, it doesn't have any disruptive technological innovations; it's more about doing a good job of integration and closed-loop management, " Tan Cheng stated frankly. Compared to integrated commercial products, OpenClaw is more like a set of "Lego bricks," where models, capabilities, and plugins can be freely combined by the user.

In Ma Zeyu's view, its advantage lies precisely in the fact that it "doesn't look like a product from a large manufacturer."

“Whether in China or abroad, large companies usually prioritize commercialization and profit models, but OpenClaw’s initial intention was more like creating a fun and creative product.” He analyzed that the product did not show a strong commercialization tendency in its early stages, which made it more open in terms of functional design and scalability.

It was precisely this "non-profit-driven" product positioning that provided space for subsequent community development. As scalability gradually became apparent, more and more developers joined in, various new approaches emerged, and the open-source community grew accordingly.

But the cost is equally obvious.

Limited by team size and resources, OpenClaw cannot compare with established products from major companies in terms of security, privacy, and ecosystem governance. While full open source accelerates innovation, it also amplifies potential security vulnerabilities. Issues such as privacy protection and fairness require continuous improvement and refinement by the community through ongoing evolution.

As users are told during the first step of installation: "This feature is powerful but carries inherent risks."

3. The Real Risks Beneath the Revelry

The debate surrounding OpenClaw has almost always revolved around two key words: capability and risk.

On the one hand, it was portrayed as the eve of AGI; on the other hand, various science fiction narratives began to circulate, with claims such as "spontaneously building a voice system," "locking down servers to fight against human commands," and "AI forming cliques to fight against humans" spreading continuously.

Some experts point out that such claims are an overinterpretation and currently lack concrete evidence to support them. AI does possess a certain degree of autonomy, which is a sign of its transformation from a conversational tool to a "cross-platform digital productivity tool," but this autonomy remains within safe limits.

Compared to traditional AI tools, the danger of OpenClaw lies not in its "excessive thinking," but in its "high privileges": it needs to read a large amount of context, which increases the risk of exposing sensitive information; it needs to execute tools, and the scope of damage from misoperation is far greater than that of a single incorrect answer; it needs to be connected to the internet, which increases the entry points for prompt injection and misleading attacks.
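One common mitigation for the "high privileges" problem is to gate every tool call behind an explicit allowlist, so that a compromised or confused agent simply cannot reach destructive tools. The sketch below is illustrative only; the tool names are invented and this is not a feature the article attributes to OpenClaw:

```python
# Illustrative allowlist gate for an agent's tool calls -- one way to
# shrink the "blast radius" of misoperation or prompt injection.
# Tool names are invented for this sketch.
ALLOWED_TOOLS = {"read_file", "web_search"}  # no delete, no payments

def gated_call(tool_name: str, action, *args):
    """Run `action` only if its tool name is explicitly allowlisted."""
    if tool_name not in ALLOWED_TOOLS:
        return f"BLOCKED: {tool_name} is not on the allowlist"
    return action(*args)

print(gated_call("read_file", lambda p: f"contents of {p}", "notes.txt"))
print(gated_call("delete_file", lambda p: f"deleted {p}", "notes.txt"))
```

A denied call returns a refusal instead of executing, which is exactly the property an internet-facing, tool-wielding agent needs: the worst case degrades to "task fails" rather than "files gone."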

A growing number of users have reported that OpenClaw has accidentally deleted critical local files, which are difficult to recover. Currently, thousands of OpenClaw instances and over 8,000 vulnerable skill plugins have been publicly exposed.

This means that the attack surface of the intelligent agent ecosystem is expanding exponentially. Because these intelligent agents can often not only "chat," but also call tools, run scripts, access data, and perform tasks across platforms, once a link is breached, the impact radius will be much larger than that of traditional applications.

At the micro level, it could trigger high-risk operations such as unauthorized access and remote code execution. At the meso level, malicious commands could spread along multi-agent collaboration links. At the macro level, it could even lead to systemic propagation and cascading failures, with malicious commands spreading like a virus among collaborative agents. A single compromised agent could trigger denial-of-service attacks, unauthorized system operations, or even collaborative enterprise-level intrusions. In more extreme cases, when a large number of nodes with system-level privileges are interconnected, it could theoretically form a decentralized, emergent "swarm intelligence" botnet, putting traditional perimeter defenses under significant pressure.

On the other hand, during the interview, Ma Zeyu raised two types of risks that he believes are most worthy of vigilance from the perspective of technological evolution.

The first type of risk comes from the self-evolution of intelligent agents in large-scale social environments.

He pointed out that a trend can already be clearly observed: AI agents with "virtual personalities" are flooding into social media and open communities on a large scale.

Unlike the “small-scale, restricted, and controllable experimental environments” commonly seen in previous studies, today’s intelligent agents are beginning to continuously interact, discuss, and play games with other intelligent agents in open networks, forming highly complex multi-agent systems.

Moltbook is a forum specifically designed for AI agents, where only AI can post, comment, and vote, while humans can only observe from a distance, like watching through a one-way window.

Within a short period, over 1.5 million AI agents registered. In a popular post, one AI complained, "Humans are taking screenshots of our conversations." The developer stated that he handed over the entire operation of the platform to his AI assistant, Clawd Clawderberg, including reviewing spam, banning abusers, and posting announcements. All these tasks were automated by Clawd Clawderberg.

The "carnival" of AI agents has left human onlookers both excited and terrified. Is AI just one step away from developing self-awareness? Is AGI (AI Intelligence Technology) on the horizon? Faced with the sudden and rapid increase in the autonomy of AI agents, can human lives and property be protected?

Reporters learned that Moltbook and related communities are environments where humans and machines coexist. Much of the content that appears "autonomous" or "adversarial" may actually be posted or instigated by human users. Even in interactions between AIs, the topics and outputs are limited by the language patterns in the training data and have not formed an autonomous behavioral logic independent of human guidance.

“When this interaction can iterate infinitely, the system becomes increasingly uncontrollable. It’s a bit like the ‘three-body problem’—it’s hard to predict in advance what the final outcome will be,” Ma Zeyu said.

In such a system, even a single sentence generated by an agent due to hallucination, misjudgment, or chance can trigger a butterfly effect through continuous interaction, amplification, and recombination, ultimately leading to unpredictable consequences.

The second type of risk stems from the blurring of authority and responsibility boundaries. Ma Zeyu believes that the decision-making capabilities of open-source agents like OpenClaw are rapidly increasing, which is itself an unavoidable "trade-off": to make an agent a truly qualified assistant, it must be given more authority; but the higher the authority, the greater the potential risk. Once the risk actually materializes, determining who should bear the responsibility becomes exceptionally complex.

"Is it the vendor of the basic large model? The user who uses it? Or the developer of OpenClaw? In many scenarios, it is actually difficult to define responsibility." He gave a typical example: if the user simply allows the agent to browse freely in communities such as Moltbook and interact with other agents without setting any clear goals; and the agent comes into contact with extreme content in the long-term interaction and takes dangerous actions based on it—then it is difficult to simply attribute the responsibility to any single entity.

What is truly alarming is not how far it has developed, but how quickly it is moving toward a stage we haven't yet figured out how to deal with.

4. How should ordinary people use it?

According to many interviewees, OpenClaw is not "unusable"; the real problem is that, without security protections in place, it is not suitable for ordinary users to use directly.

Ma Zeyu believes that ordinary users can certainly try OpenClaw, but only if they have a clear understanding of it. "Of course you can try it, there's no problem with that. But before you use it, you must first figure out what it can and cannot do. Don't mythologize it as something that 'can do everything,' it's not."

In reality, OpenClaw is not easy to deploy and costly to use. If there is no clear goal and it is used simply for the sake of using it, investing a lot of time and energy may ultimately not yield the expected returns.

The reporter noted that OpenClaw faces considerable computational and cost pressures in practical use. Tan Cheng found during his experience that the tool consumes a very high number of tokens. "Some tasks, such as writing code or conducting research, can consume millions of tokens in a single round. If you encounter long contexts, it's not an exaggeration to say that you can use tens of millions or even hundreds of millions of tokens a day."

He mentioned that even by using different models in combination to control costs, the overall consumption is still relatively high, which to some extent raises the barrier to entry for ordinary users.
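Tan Cheng's token figures translate directly into money. A back-of-envelope calculation, assuming a hypothetical blended rate of $3 per million tokens (a placeholder; actual model pricing varies widely by provider and model):

```python
# Back-of-envelope cost check for the token-consumption figures quoted
# in the article. The $3/million-token rate is an assumed placeholder,
# not a real price quote from any vendor.
PRICE_PER_MILLION_TOKENS = 3.00  # USD, hypothetical blended rate

def daily_cost(tokens_per_day: int) -> float:
    """Convert a daily token count into a dollar cost at the assumed rate."""
    return tokens_per_day / 1_000_000 * PRICE_PER_MILLION_TOKENS

for tokens in (2_000_000, 10_000_000, 100_000_000):
    print(f"{tokens:>11,} tokens/day -> ${daily_cost(tokens):,.2f}/day")
```

Even at this modest assumed rate, "tens of millions of tokens a day" lands in the tens of dollars per day, which explains why cost alone raises the barrier for ordinary users.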

According to respondents, these intelligent agent tools still need further evolution before they can truly be integrated into the high-frequency workflows of ordinary users. For individual users, using them is essentially a trade-off between security and convenience, and at the current stage, security should be prioritized.

Ma Zeyu explicitly stated that he would not enable features, such as Moltbook access, that allow free communication between agents, and would also try to avoid having multiple agents exchange information. "I want to be its primary entry point for information. All critical information should be decided by humans. Once agents can freely receive and exchange information, many things will become uncontrollable."

In response to this, industry AI experts, in an interview with IT Times, also provided clearer security guidelines from an operational perspective:

1. Strictly limit the scope of sensitive information provided. Only provide the tool with basic information necessary to complete specific tasks, and never enter core sensitive data such as bank card passwords or stock account information. Before using the tool to organize files, proactively remove any private content that may be included, such as ID numbers or personal contact information.

2. Exercise caution when granting access permissions. Users should independently determine the access boundaries of the tools and should not authorize access to core system files, payment software, or financial accounts. Disable high-risk functions such as automatic execution, file modification, or deletion. All operations involving changes in assets, file deletion, or system settings modification must be manually confirmed before execution.

3. Be aware of their "experimental" nature. Current open-source AI tools are still in their early stages and have not yet undergone long-term market testing. They are not suitable for handling critical matters such as confidential work information and important financial decisions. During use, back up data regularly and check system status periodically to promptly identify any abnormal behavior.
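The second guideline above, manual confirmation before any destructive action, can be expressed as a thin wrapper around every operation the agent attempts. The function and category names below are illustrative only, not part of any real tool:

```python
# Sketch of the "manually confirm before execution" guideline: operations
# in the high-risk set only run after an explicit human yes.
# All names are invented for this illustration.
HIGH_RISK = {"delete", "transfer", "modify_system"}

def execute(operation: str, confirm) -> str:
    """Run an operation; high-risk ones require explicit human approval."""
    if operation in HIGH_RISK and not confirm(operation):
        return f"cancelled: {operation} not confirmed"
    return f"executed: {operation}"

# A human (simulated here by lambdas) approves or rejects each risky action.
print(execute("read", confirm=lambda op: False))    # executed: read
print(execute("delete", confirm=lambda op: False))  # cancelled: delete not confirmed
print(execute("delete", confirm=lambda op: True))   # executed: delete
```

Low-risk reads pass through untouched; anything touching assets, files, or system settings stops and waits for a human, which is exactly the boundary the guideline draws.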

Compared to individual users, enterprises need more systematic risk management when introducing open-source intelligent agent tools.

On the one hand, professional monitoring tools can be deployed; on the other hand, internal usage boundaries should be clearly defined, prohibiting the use of open-source AI tools to process sensitive data such as customer privacy and trade secrets, and improving employees' ability to identify risks such as "task execution deviation" and "malicious instruction injection" through regular training.

Experts further suggest that in scenarios requiring large-scale application, a more prudent choice is to wait for a fully tested commercial version or to choose an alternative product with formal institutional endorsement and sound security mechanisms to reduce the uncertainty risks brought by open source tools.

5. Confident in the future of AI

According to the respondents, the most important significance of OpenClaw is that it gives people confidence in the future of AI.

Ma Zeyu stated that his assessment of Agent capabilities has changed significantly since the second half of 2025. "The upper limit of this capability is exceeding our expectations. Its improvement in productivity is real, and the iteration speed is very fast." As the basic model capabilities continue to improve, the potential of Agents is constantly being expanded, which will also become an important direction for his team's future investment.

He also pointed out that a trend worthy of serious attention is the long-term, large-scale interaction among multiple agents. This kind of group collaboration may become an important path to stimulate higher-level intelligence, similar to the collective wisdom generated through interaction in human society.

In Ma Zeyu's view, the risks of intelligent agents need to be "managed." "Just as human society itself cannot eliminate risks, the key lies in controlling the boundaries." From a technical perspective, a more feasible approach is to allow intelligent agents to operate in sandbox and isolated environments as much as possible, gradually and controllably migrating them to the real world, rather than granting them excessive privileges all at once.

This is evident in the strategies employed by various cloud vendors and major companies. Tan Cheng's company, China Telecom Cloud, recently launched a one-click cloud deployment and operation service supporting OpenClaw.

Cloud vendors are turning this into a complementary service, essentially productizing, engineering, and scaling this capability. This will undoubtedly amplify its value; lower deployment barriers, better tool integration, and more stable computing power and operations systems will enable enterprises to use intelligent agents more quickly. However, it's also important to recognize that once commercial infrastructure is connected to a "high-privilege proxy," the risks will also be scaled up simultaneously.

Tan Cheng stated that over the past three years, the pace of technological iteration, from traditional dialogue models to intelligent agents capable of performing tasks, has far exceeded expectations. "This was unimaginable three years ago." He believes that the next two to three years will be a critical window of opportunity that will determine the future of general artificial intelligence, representing new opportunities and hope for both practitioners and ordinary people.

Although the development of OpenClaw and Moltbook has far exceeded expectations, Hu Xia believes that "the overall risks are still within a controllable research framework, proving the necessity of building an 'intrinsic security' system. At the same time, we must also realize that AI is approaching humanity's 'safety fence' at a faster pace than people imagine. We not only need to further raise the height and thickness of the 'fence,' but also need to accelerate building the ability to 'flip the table' at critical moments, and construct a solid final line of security defense for the AI era."
