This article benchmarks GPT-3.5 and GPT-4 as formal simulators, testing their ability to model state spaces in common-sense and early scientific reasoning tasks. While these models show promise, they achieve only modest accuracy and raise ethical concerns, particularly around misinformation and unsafe outputs. The study highlights both the potential and the risks of using LLMs for simulations, framing the work as an early step toward more capable and responsible AI simulators.

AI Models Can't Be Trusted in High-Stakes Simulations Just Yet

Abstract and 1. Introduction and Related Work

  2. Methodology

    2.1 LLM-Sim Task

    2.2 Data

    2.3 Evaluation

  3. Experiments

  4. Results

  5. Conclusion

  6. Limitations and Ethical Concerns, Acknowledgements, and References

A. Model details

B. Game transition examples

C. Game rules generation

D. Prompts

E. GPT-3.5 results

F. Histograms

5 Conclusion

6 Limitations and Ethical Concerns

6.1 Limitations

This work evaluates two strong in-context learning LLMs, GPT-3.5 and GPT-4, on their ability to act as explicit formal simulators. We adopt these models because they are generally the most performant off-the-shelf models across a variety of benchmarks. While we observe that even GPT-3.5 and GPT-4 achieve only modest scores on the proposed task, we acknowledge that we did not exhaustively evaluate a large selection of large language models, and other models may perform better. We provide this work as a benchmark for evaluating the performance of existing and future models on the task of accurately simulating state-space transitions.
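To make the benchmark framing concrete, here is a minimal sketch of what a harness for this kind of task could look like: a candidate simulator receives the current state and an action, predicts the next state, and is scored against gold simulator output. The function names (`predict_next_state`, `gold_transitions`) and the exact-match metric are illustrative assumptions, not the paper's actual implementation.

```python
import json
from typing import Callable, Dict, List, Tuple

State = Dict   # a game state serialized as a JSON-style dict
Action = str   # a natural-language action, e.g. "open fridge"

def evaluate_simulator(
    predict_next_state: Callable[[State, Action], State],
    gold_transitions: List[Tuple[State, Action, State]],
) -> float:
    """Score a candidate simulator by exact match against gold transitions.

    `predict_next_state` would wrap the LLM (prompting it with the serialized
    state and action, then parsing its JSON reply); `gold_transitions` would
    come from the ground-truth game engine.
    """
    correct = 0
    for state, action, gold_next in gold_transitions:
        predicted = predict_next_state(state, action)
        # Compare canonical JSON serializations so key order is irrelevant.
        if json.dumps(predicted, sort_keys=True) == json.dumps(gold_next, sort_keys=True):
            correct += 1
    return correct / len(gold_transitions)
```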

In this work, we propose two formalisms for representing state spaces: one that encodes the full state space and one that encodes only the state difference, both expressed as JSON objects. We chose these representations for their popularity and compatibility with the input and output formats of most LLM pretraining data (e.g., Fakhoury et al., 2023), and because they can be compared directly against gold-standard simulator output for evaluation, though other representational formats may prove more performant at the simulation task.
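As a rough illustration of how the two formalisms differ, the snippet below shows a hypothetical pre-state, the full post-state after an "open fridge" action, and the corresponding state difference. The object and property names are invented for illustration and do not come from the paper's dataset.

```python
# Hypothetical pre-state: every object with all of its properties.
pre_state = {
    "fridge": {"is_open": False, "is_activated": True, "contains": ["milk"]},
    "agent": {"location": "kitchen"},
}

# Full-state formalism: the complete post-state after "open fridge".
post_state_full = {
    "fridge": {"is_open": True, "is_activated": True, "contains": ["milk"]},
    "agent": {"location": "kitchen"},
}

# State-difference formalism: only the properties that changed.
post_state_diff = {
    "fridge": {"is_open": True},
}
```

The difference formalism keeps the model's output short but requires it to decide what changed; the full-state formalism is more verbose but simpler to compare against gold output field by field.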

Finally, the state spaces produced in this work are focused on the domain of common-sense and early (elementary) scientific reasoning. These tasks, such as opening containers or activating devices, were chosen because the results of these actions are common knowledge, and models are likely to be most performant in simulating them. While this work does address a selection of less frequent actions and properties, it does not address using LLMs as simulators for highly domain-specific areas, such as physical or medical simulation. A long-term goal of this work is to facilitate using language models as simulators for high-impact domains, and we view this work as a stepping-stone toward developing progressively more capable language model simulators.

6.2 Ethical Concerns

We do not foresee an immediate ethical or societal impact resulting from our work. However, we acknowledge that, as an LLM application, the proposed LLM-Sim task could be affected by misinformation and hallucinations introduced by the specific LLM a user selects. Our work highlights these risks of using LLMs as text-based world simulators. In downstream tasks, such as game simulation, LLMs may generate misleading or non-factual information. For example, if the simulator suggests burning down a house to boil water, our work does not prevent this, nor do we evaluate the ethical implications of such potentially dangerous suggestions. As a result, we believe such applications are neither suitable nor safe to deploy in settings where they directly interact with humans, especially children, e.g., in an educational setting. We urge researchers and practitioners to use our proposed task and dataset in a mindful manner.

Acknowledgements

We wish to thank the three anonymous reviewers for their helpful comments on an earlier draft of this paper.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, and Will Hamilton. 2020. Learning dynamic belief graphs to generalize on text-based games. Advances in Neural Information Processing Systems, 33:3045–3057.

Prithviraj Ammanabrolu and Matthew Hausknecht. 2020. Graph constrained reinforcement learning for natural language action spaces. arXiv preprint arXiv:2001.08837.

Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Ruo Yu Tao, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. 2018. TextWorld: A learning environment for text-based games. CoRR, abs/1806.11532.

Sarah Fakhoury, Saikat Chakraborty, Madan Musuvathi, and Shuvendu K. Lahiri. 2023. Towards generating functionally correct code edits from natural language issue descriptions. arXiv preprint arXiv:2304.03816.

Angela Fan, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, Siddharth Karamcheti, Shrimai Prabhumoye, Douwe Kiela, Tim Rocktäschel, Arthur Szlam, and Jason Weston. 2020. Generating interactive worlds with text. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02):1693–1700.

Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8154–8173.

Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, and Xingdi Yuan. 2020. Interactive fiction games: A colossal adventure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7903–7910.

Peter Jansen. 2022. A systematic survey of text worlds as embodied natural language environments. In Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022), pages 1–15.

Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99–134.

Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477.

Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, and Roy Fox. 2023. Do embodied agents dream of pixelated sheep: Embodied decision making using language guided world modelling. In International Conference on Machine Learning, pages 26311–26325. PMLR.

Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2020. ALFWorld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768.

Hao Tang, Darren Key, and Kevin Ellis. 2024. WorldCoder, a model-based LLM agent: Building world models by writing code and interacting with the environment. arXiv preprint arXiv:2402.12275.

Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).

Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. 2023. On the planning abilities of large language models - a critical investigation. Advances in Neural Information Processing Systems, 36:75993–76005.

Nick Walton. 2020. How we scaled AI Dungeon 2 to support over 1,000,000 users.

Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. 2022. ScienceWorld: Is your agent smarter than a 5th grader? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11279–11298.

Ruoyao Wang, Graham Todd, Xingdi Yuan, Ziang Xiao, Marc-Alexandre Côté, and Peter Jansen. 2023. ByteSized32: A corpus and challenge task for generating task-specific world models expressed as text games. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13455–13471, Singapore. Association for Computational Linguistics.

Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, and Joshua B. Tenenbaum. 2023. From word models to world models: Translating from natural language to the probabilistic language of thought. arXiv preprint arXiv:2306.12672.


:::info Authors:

(1) Ruoyao Wang, University of Arizona ([email protected]);

(2) Graham Todd, New York University ([email protected]);

(3) Ziang Xiao, Johns Hopkins University ([email protected]);

(4) Xingdi Yuan, Microsoft Research Montréal ([email protected]);

(5) Marc-Alexandre Côté, Microsoft Research Montréal ([email protected]);

(6) Peter Clark, Allen Institute for AI ([email protected]);

(7) Peter Jansen, University of Arizona and Allen Institute for AI ([email protected]).

:::


:::info This paper is available on arXiv under a CC BY 4.0 license.

:::
