
Crypto for Advisors: banks and digital money


Banks are embracing stablecoins and tokenized deposits as a means to upgrade their financial infrastructure, but they are approaching the two technologies differently.

By Sam Boboev | Edited by Sarah Morton
Jan 29, 2026, 4:00 p.m.

What to know:

You’re reading Crypto for Advisors, CoinDesk’s weekly newsletter that unpacks digital assets for financial advisors. Subscribe here to get it every Thursday.

In today’s newsletter, Sam Boboev, founder of Fintech Wrap Up, looks at how banks are embracing stablecoins and tokenization to upgrade banking rails.

Then, Xin Yan, co-founder and CEO at Sign, answers questions about banks and stablecoins in Ask an Expert.


-Sarah Morton


From stablecoins to tokenized deposits: why banks are reclaiming the narrative

Stablecoins dominated early digital money discourse because they solved a narrow technical failure: moving value natively on digital rails when banks could not. Speed, programmability and cross-platform settlement exposed the limits of correspondent banking and batch-based systems. That phase is ending. Banks are now advancing tokenized deposits to reassert control over money creation, liability structure and regulatory alignment.

This is not a reversal of innovation. It is a containment strategy.

Stablecoins expanded capability outside the banking perimeter

Stablecoins function as privately issued settlement assets. They are typically liabilities of non-bank entities, backed by reserve portfolios whose composition, custody and liquidity treatment vary by issuer. Even when fully reserved, they sit outside deposit insurance frameworks and outside direct prudential supervision applied to banks.

The technical gain was real. The structural consequence was material. Value transfer began to migrate beyond regulated balance sheets. Liquidity that once reinforced the banking system started pooling in parallel structures governed by disclosure regimes rather than capital rules.

That shift is incompatible with how banks, regulators and central banks define monetary stability.

Tokenized deposits preserve the deposit, change the rail

Tokenized deposits do not introduce new money. They repackage existing deposits using distributed ledger infrastructure. The asset remains a bank liability. The claim structure remains unchanged. Only the settlement and programmability layer evolves.

This distinction is decisive.

A tokenized deposit sits on a regulated bank balance sheet. It remains subject to capital requirements, liquidity coverage rules, resolution regimes and — where applicable — deposit insurance. There is no ambiguity about seniority in insolvency. There is no reserve opacity problem. There is no new issuer risk to underwrite.
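
To make the balance-sheet distinction concrete, here is a minimal, hypothetical Python sketch of the two liability structures. The class and field names (StablecoinIssuer, TokenizedDeposit, deposit_insured and so on) are invented for illustration and do not describe any real issuer's or bank's implementation.

from dataclasses import dataclass, field

@dataclass
class StablecoinIssuer:
    """Non-bank issuer: the token is the issuer's liability, backed by a reserve portfolio."""
    reserves: dict = field(default_factory=dict)  # e.g. {"t_bills": 0.8, "repo": 0.2}
    tokens_outstanding: float = 0.0
    deposit_insured: bool = False                 # sits outside deposit insurance frameworks

    def mint(self, amount: float) -> None:
        # A new token liability is created against the issuer's reserves,
        # not against any bank's balance sheet.
        self.tokens_outstanding += amount

@dataclass
class TokenizedDeposit:
    """Bank-issued token that mirrors an existing deposit claim one-for-one."""
    bank: str
    account_id: str
    deposit_balance: float
    deposit_insured: bool = True                  # inherits the deposit's legal treatment

    def tokenize(self, amount: float) -> float:
        # No new money is created: the token represents a slice of the deposit,
        # which stays on the bank's regulated balance sheet.
        if amount > self.deposit_balance:
            raise ValueError("cannot tokenize more than the underlying deposit")
        return amount

The point of the sketch is the location of the liability: the stablecoin holder's claim is on the issuer and its reserves, while the tokenized deposit never leaves the bank's balance sheet and so inherits capital, liquidity and insurance treatment.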

Banks are not competing with stablecoins on speed alone. They are competing on legal certainty.

Balance-sheet control is the core issue

The real fault line is balance-sheet location.

Stablecoins externalize settlement liquidity. Even when reserves are held at regulated institutions, the liability itself does not belong to the bank. Monetary transmission weakens. Supervisory visibility fragments. Stress propagates through structures that are not designed for systemic loads.

Tokenized deposits keep settlement liquidity inside the regulated perimeter. Faster movement does not equal balance-sheet escape. Capital stays measurable. Liquidity remains supervisable. Risk remains allocable.

This is why banks support tokenization while resisting stablecoin substitution. The technology is acceptable. The disintermediation is not.

Consumer protection is not a feature, it is a constraint

Stablecoins require users to assess issuer credibility, reserve quality, legal enforceability and operational resilience. These are institutional-level risk judgments pushed onto end users.

Tokenized deposits remove that burden. Consumer protection is inherited, not reconstructed. Dispute resolution, insolvency treatment and legal recourse follow existing banking law. The user does not become a credit analyst by necessity.

For advisors, this difference defines suitability. Digital form does not override liability quality.

Narrative reclamation is strategic, not cosmetic

Banks are repositioning digital money as an evolution of deposits, not a replacement. This reframing recenters authority over money within licensed institutions while absorbing the functional gains demonstrated by stablecoins.

The outcome is convergence: blockchain rails carrying bank money, not private substitutes.

Stablecoins forced the system to confront its architectural limits. Tokenized deposits are how incumbents address them without surrendering control.

Digital money will persist. The unresolved variable is issuer primacy. Banks are moving to close that gap now.

-Sam Boboev, founder, Fintech Wrap Up


Ask an Expert

Q. Banks are increasingly framing stablecoins not as speculative crypto assets, but as infrastructure for settlement, collateral and programmable money. From your perspective, working on blockchain infrastructure, what’s driving this shift inside large financial institutions, and how different is this moment from earlier stablecoin cycles?

A. The meaningful distinction between a stablecoin and traditional fiat is that the stablecoin exists on-chain.

That on-chain nature is precisely what makes stablecoins interesting to financial institutions. Once money is natively digital and programmable, it can be used directly for settlement, payments, collateralization and atomic execution across systems, without relying on fragmented legacy rails.
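
To illustrate what "atomic execution" means in practice, here is a minimal, hypothetical sketch of delivery-versus-payment on a shared ledger: the cash leg and the security leg either both settle or both revert. The Ledger class, the atomic_dvp function and the asset names are invented for the example and are not any specific platform's API.

class Ledger:
    """Toy shared ledger: balances keyed by (account, asset)."""
    def __init__(self):
        self.balances = {}

    def credit(self, account, asset, amount):
        key = (account, asset)
        self.balances[key] = self.balances.get(key, 0) + amount

    def debit(self, account, asset, amount):
        key = (account, asset)
        if self.balances.get(key, 0) < amount:
            raise ValueError(f"{account} lacks {amount} of {asset}")
        self.balances[key] -= amount

def atomic_dvp(ledger, buyer, seller, cash_amount, security, quantity):
    """Move the cash leg and the security leg together; any failure reverts both."""
    snapshot = dict(ledger.balances)
    try:
        ledger.debit(buyer, "tokenized_cash", cash_amount)
        ledger.credit(seller, "tokenized_cash", cash_amount)
        ledger.debit(seller, security, quantity)
        ledger.credit(buyer, security, quantity)
    except ValueError:
        ledger.balances = snapshot  # roll back: neither leg settles alone
        raise

# Usage: both legs settle in one step, or neither does.
ledger = Ledger()
ledger.credit("buyer", "tokenized_cash", 1_000_000)
ledger.credit("seller", "bond_xyz", 10)
atomic_dvp(ledger, "buyer", "seller", 1_000_000, "bond_xyz", 10)

On batch-based correspondent rails, the two legs settle on separate systems and timelines; the atomic pattern is what removes that settlement gap.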

Historically, concerns around stablecoins focused on technical and operational risk, such as smart contract failure or insufficient resilience. Those concerns have largely faded. Core stablecoin infrastructure has been battle-tested across multiple cycles and sustained real-world usage.

Technically, the risk profile is now well understood and often lower than commonly assumed. The remaining uncertainty is predominantly legal and regulatory rather than technological. Many jurisdictions still lack a clear framework that fully recognizes stablecoins or CBDCs as first-class representations of sovereign currency. This ambiguity limits their adoption at scale within regulated financial systems, even when the underlying technology is mature.

That said, this moment feels structurally different from earlier cycles. The conversation has shifted from “should this exist?” to “how do we integrate it safely into the monetary system?”

I expect 2026 to bring significant regulatory clarification and formal adoption pathways across multiple countries, driven by the recognition that on-chain money is not a competing asset class, but an upgrade to financial infrastructure.

Q. As banks move toward tokenized deposits and on-chain settlement, identity, compliance and verifiable credentials become central. From your work with institutions, what infrastructure gaps still need to be solved before banks can scale these systems safely?

A. For these systems to run naturally, we have to match the speed of compliance and identity with the speed of the assets themselves. Right now, settlement happens in seconds, but verification still relies on manual work. The first step to solving this isn't decentralization. It is simply getting these records digitized so they can be accessed on-chain. We are already seeing many countries actively working to move their core identity and compliance data onto the blockchain.

In my view, there is no single "gap" that, once closed, will suddenly allow everything to scale perfectly. Instead, it is a process of fixing one bottleneck at a time. It is like a "left hand pushing the right hand" forward. Based on our discussions with various governments and institutions, the immediate priority is turning identity and entity proofs into electronic formats that can be stored and retrieved across different systems.

Currently, we rely too much on manual verification, which is slow and prone to errors. We need to move toward a model where identity is a verifiable digital credential. Once you can pull this data instantly without a human having to manually check and verify a document, the system can actually keep up with the speed of a stablecoin. We are building the bridge between the old way of filing paperwork and the new way of instant digital proof. It is a gradual improvement where we fix each short plank in the barrel until the whole system can hold water.
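
As a rough illustration of what a verifiable digital credential replaces, here is a minimal Python sketch using an off-the-shelf signature library: the issuer signs a set of claims once, and any relying party can check them in milliseconds without re-reviewing documents. The credential format and function names are invented for illustration; this is not the W3C Verifiable Credentials data model or Sign's implementation.

import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The issuer (for example, a registry or a bank) holds the signing key;
# relying parties only need the public key.
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

def issue_credential(subject: str, claims: dict) -> dict:
    """Issuer signs a set of claims about a subject once."""
    payload = json.dumps({"subject": subject, "claims": claims}, sort_keys=True).encode()
    return {"payload": payload, "signature": issuer_key.sign(payload)}

def verify_credential(credential: dict) -> bool:
    """Any relying party verifies instantly, with no manual document review."""
    try:
        issuer_public.verify(credential["signature"], credential["payload"])
        return True
    except InvalidSignature:
        return False

cred = issue_credential("ACME Ltd", {"kyc_passed": True, "jurisdiction": "GB"})
print(verify_credential(cred))  # True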

Q. Many policymakers now talk about stablecoins and tokenized deposits as payment infrastructure rather than investment products. How does that reframe the long-term role of stablecoins as banks increasingly place them alongside traditional payment rails?

A. The future of the world is going to be completely digitized. It does not matter if you are talking about dollar-backed stablecoins, tokenized deposits or central bank digital currencies. In the end, they are all part of the same thing. This is a massive upgrade to the entire global financial system. Reframing stablecoins as infrastructure is a very positive move because it focuses on removing the friction that slows down the movement of assets today.

When we work on digital identity systems or nation-level blockchain networks, we see it as a necessary technical evolution. In fact, if we do our jobs well, the general public should not even know that the underlying system has changed. They will not care about the "blockchain" or the "token." They will simply notice that their businesses run faster and their money moves instantly.

The real goal of this reframe is to speed up the turnover of capital across the entire economy. When money can move at the speed of the internet, the whole engine of global trade starts to run more efficiently. We are not just creating a new investment product. We are building a smoother road for everything else to travel on. This long-term role is about making the global economy more fluid and removing the old barriers that keep value trapped in slow, manual processes.

-Xin Yan, co-founder and CEO, Sign


Keep Reading

  • Fidelity Investments is launching the Fidelity Digital Dollar (FIDD), a U.S. dollar-backed stablecoin, in early February 2026 to support 24/7 institutional and retail on-chain settlements.
  • The U.K. government says it expects banks to treat crypto businesses fairly as part of its push to make the country a global hub for digital assets.
  • The U.S. Senate Agriculture Committee has scheduled its markup on crypto market structure for January 29.


