
AI Models Might Be Able to Predict What You’ll Buy Better Than You Can

In brief

  • A new study shows LLMs can mimic human purchase intent by mapping free-text answers to Likert ratings through semantic similarity.
  • The method achieved 90% of human test–retest reliability on 9,300 real survey responses.
  • The study raises questions about bias, generalization, and how far “synthetic consumers” can stand in for real people.

Forget focus groups: A new study found that large language models can forecast whether you want to buy something with striking accuracy, at a fraction of the cost of traditional marketing tools.

Researchers at the University of Mannheim and ETH Zürich have found that large language models can replicate human purchase intent—the “How likely are you to buy this?” metric beloved by marketers—by transforming free-form text into structured survey data.

In a paper published last week, the team introduced a method called “Semantic Similarity Rating” (SSR), which converts the model’s open-ended responses into numerical ratings on a Likert scale, the five-point scale used in traditional consumer research.

Rather than asking a model to pick a number between one and five, the researchers had it respond naturally—“I’d definitely buy this,” or “Maybe if it were on sale”—and then measured how semantically close those statements were to canonical answers like “I would definitely buy this” or “I would not buy this.”

Each answer was mapped in embedding space to the nearest reference statement, effectively turning LLM text into statistical ratings. “We show that optimizing for semantic similarity rather than numeric labels yields purchase-intent distributions that closely match human survey data,” the authors wrote. “LLM-generated responses achieved 90% of the reliability of repeated human surveys while preserving natural variation in attitudes.”
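The mapping step can be sketched in a few lines of Python. Everything below is illustrative, not the paper’s implementation: the `embed` function is a toy bag-of-words stand-in for a real sentence-embedding model, and the reference statements are paraphrases of a standard five-point purchase-intent scale rather than the authors’ exact wording.

```python
import math
from collections import Counter

# Illustrative five-point purchase-intent anchors (not the paper's exact text).
REFERENCES = {
    1: "I would definitely not buy this",
    2: "I would probably not buy this",
    3: "I might or might not buy this",
    4: "I would probably buy this",
    5: "I would definitely buy this",
}

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ssr_rating(free_text):
    """Map a free-text answer to the Likert point of the most similar reference."""
    emb = embed(free_text)
    return max(REFERENCES, key=lambda k: cosine(emb, embed(REFERENCES[k])))

print(ssr_rating("I would definitely buy this product"))  # nearest anchor: 5
```

The real method works in a learned embedding space, where “Maybe if it were on sale” lands near the middle of the scale even with no word overlap; the toy version only captures the nearest-reference mechanism.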

In tests across 9,300 real human survey responses about personal-care products, the SSR method produced synthetic respondents whose Likert distributions nearly mirrored the originals. In other words: when asked to “think like consumers,” the models did.
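“Nearly mirrored” can be made concrete with a simple overlap measure between the two answer distributions. The numbers below are invented for illustration; the paper reports its own fit statistics.

```python
# Compare a human and a synthetic five-point Likert distribution with
# total variation distance (0 = identical, 1 = completely disjoint).
human     = [0.10, 0.15, 0.30, 0.25, 0.20]   # shares picking 1..5 (illustrative)
synthetic = [0.08, 0.17, 0.28, 0.27, 0.20]

tvd = 0.5 * sum(abs(h - s) for h, s in zip(human, synthetic))
print(f"total variation distance: {tvd:.3f}")  # small value = close match
```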

Why it matters

The finding could reshape how companies conduct product testing and market research. Consumer surveys are notoriously expensive, slow, and vulnerable to bias. Synthetic respondents—if they behave like real ones—could let companies screen thousands of products or messages for a fraction of the cost.

It also validates a deeper claim: that the geometry of an LLM’s semantic space encodes not just language understanding but attitudinal reasoning. By comparing answers in embedding space rather than treating them as literal text, the study demonstrates that model semantics can stand in for human judgment with surprising fidelity.

At the same time, it raises familiar ethical and methodological risks. The researchers tested only one product category, leaving open whether the same approach would hold for financial decisions or politically charged topics. And synthetic “consumers” could easily become synthetic targets: the same modeling techniques could help optimize political persuasion, advertising, or behavioral nudges.

As the authors put it, “market-driven optimization pressures can systematically erode alignment”—a phrase that resonates far beyond marketing.

A note of skepticism

The authors acknowledge that their test domain—personal-care products—is narrow and may not generalize to high-stakes or emotionally charged purchases. The SSR mapping also depends on carefully chosen reference statements: small wording changes can skew results. Moreover, the study relies on human survey data as “ground truth,” even though such data is notoriously noisy and culturally biased.

Critics point out that embedding-based similarity assumes that language vectors map neatly onto human attitudes, an assumption that may fail when context or irony enters the mix. The paper’s own reliability numbers—90% of human test-retest consistency—sound impressive but still leave room for significant drift. In short, the method works on average, but it’s not yet clear whether those averages capture real human diversity or simply reflect the model’s training priors.
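One way to read “90% of human test–retest consistency” is as a ratio: how well the model’s ratings agree with one human survey wave, relative to how well two human waves agree with each other. The sketch below uses that interpretation with invented per-product mean ratings; it is a plausible reading of the headline number, not the paper’s actual metric or data.

```python
import math

def pearson(x, y):
    # Plain Pearson correlation between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented per-product mean ratings (1-5), for illustration only.
human_wave1 = [3.2, 4.1, 2.5, 3.8, 2.9]
human_wave2 = [3.0, 4.3, 2.6, 3.7, 3.1]
llm_ratings = [3.4, 3.8, 3.0, 3.5, 2.7]

human_reliability = pearson(human_wave1, human_wave2)  # test-retest baseline
llm_agreement = pearson(llm_ratings, human_wave1)      # model vs. one wave
print(f"relative reliability: {llm_agreement / human_reliability:.2f}")
```

Because the baseline itself is noisy, a relative score near 0.9 can hide real per-item drift, which is exactly the critics’ point.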

The bigger picture

Academic interest in “synthetic consumer modeling” has surged in 2025 as companies experiment with AI-based focus groups and predictive polling. Similar efforts at MIT and the University of Cambridge have shown that LLMs can mimic demographic and psychometric segments with moderate reliability, but none had previously demonstrated a close statistical match to real purchase-intent data.

For now, the SSR method remains a research prototype, but it hints at a future where LLMs might not just answer questions—but represent the public itself.

Whether that’s an advance or a hallucination in the making is still up for debate.


Source: https://decrypt.co/343838/ai-models-might-be-able-to-predict-what-youll-buy-better-than-you-can

