
xAI’s Grok 2.5: Open-Sourced, But Does It Pass the EU AI Act Test?

2025/08/25 23:38
5 min read

In the fast-evolving world of AI, open-sourcing models has become a battleground for innovation, ethics, and regulation. Just recently, on August 25, 2025 (yes, that’s today!), Elon Musk announced that xAI has open-sourced Grok 2.5, its flagship model from last year, making the weights available on Hugging Face. This move echoes OpenAI’s earlier release on August 5, 2025, of two open-source models: gpt-oss-120b (120 billion parameters) and gpt-oss-20b (20 billion parameters), under the permissive Apache 2.0 license with an added usage policy.

Both companies are pushing the boundaries of general-purpose AI (GPAI) models — those versatile systems capable of tackling reasoning, coding, math, and more. But with great power comes great scrutiny, especially under the EU AI Act (Regulation (EU) 2024/1689), which sets strict rules for transparency, risk management, and open-source claims.

Inspired by a recent Medium article analyzing OpenAI’s models (check it out here), I’ll conduct a similar compliance check for Grok 2.5. Using publicly available info like model cards and announcements, we’ll evaluate its alignment with the Act’s requirements for open-source GPAI models. Spoiler: It’s not as straightforward as it seems. Note that this is a high-level analysis — true compliance needs official regulatory review.


Breaking Down the Models: Grok 2.5 vs. OpenAI’s Duo

Let’s start with the basics to set the stage.

  • Grok 2.5 (xAI): This beast clocks in at around 270 billion parameters, trained back in 2024 on text-based tasks like reasoning. What’s released? The model weights (a hefty ~500 GB across 42 files) and the tokenizer. But details on the full architecture, training code, or datasets? Slim to none — only hints like a June 2024 knowledge cutoff. The license is a custom “Grok 2 Community License Agreement”: revocable, allows commercial and non-commercial use, but slaps on restrictions like banning its use to train or improve other AI models. No separate usage policy, just the license terms.
  • gpt-oss-120b and gpt-oss-20b (OpenAI): Smaller siblings at 120B and 20B parameters, trained on trillions of filtered tokens with chain-of-thought tweaks. Releases include weights, architecture details, the tokenizer (via tiktoken), and reference inference code. Licensed under Apache 2.0 — super permissive — with a usage policy encouraging responsible AI without heavy restrictions.

Training compute (measured in FLOPs) isn’t explicitly shared for Grok 2.5, but given its size, it’s probably under the 10²⁵ FLOPs mark that triggers “systemic risk” status — similar to GPT-3’s estimates (around 3.14 x 10²³ FLOPs). OpenAI’s models are in the same boat, as noted in the original article.
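The back-of-the-envelope reasoning here can be made concrete with the standard ~6 × parameters × tokens training-compute heuristic. This is a sketch, not an official figure: Grok 2.5's training token count is undisclosed, so the 5-trillion-token input below is purely my assumption for illustration.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

# Sanity check against the cited GPT-3 figure: 175B params on ~300B tokens
gpt3_estimate = training_flops(175e9, 300e9)  # ≈ 3.15e23 FLOPs

# Grok 2.5: ~270B params; token count undisclosed, so assume a generous 5T
grok_guess = training_flops(270e9, 5e12)

# EU AI Act presumption threshold for "systemic risk" GPAI models
SYSTEMIC_RISK_THRESHOLD = 1e25

print(grok_guess < SYSTEMIC_RISK_THRESHOLD)  # even this generous guess stays under
```

Even with a generous token count, the estimate lands below 10²⁵ FLOPs — consistent with the assumption that Grok 2.5 escapes systemic-risk status, though only xAI's actual numbers could settle it.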

The EU AI Act: What Open-Source GPAI Models Need to Nail

The EU AI Act classifies GPAI as AI that handles diverse tasks without a narrow focus (Article 3, point 63). For “open-source” ones (Recital 102, Article 3 point 12), the bar is high: They must use a “free and open license” allowing unrestricted access, use, study, modification, and sharing — including derivatives — with no commercial bans or field limits.
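To make the "free and open license" bar tangible, here is a minimal sketch encoding those criteria as a checklist and applying it to the two licenses in question. The field names and boolean summaries are my own simplification of the license terms as described above, not language from the Act.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LicenseTerms:
    """Boolean summary of the license properties the Act cares about."""
    allows_commercial_use: bool
    allows_modification: bool
    allows_redistribution: bool
    revocable: bool
    field_of_use_restrictions: bool  # e.g. a ban on training other AI models

def meets_open_source_bar(t: LicenseTerms) -> bool:
    """True only if the terms clear the Act's 'free and open license' criteria."""
    return (t.allows_commercial_use
            and t.allows_modification
            and t.allows_redistribution
            and not t.revocable
            and not t.field_of_use_restrictions)

apache_2_0 = LicenseTerms(True, True, True, revocable=False,
                          field_of_use_restrictions=False)
grok_2_community = LicenseTerms(True, True, True, revocable=True,
                                field_of_use_restrictions=True)

print(meets_open_source_bar(apache_2_0))        # passes the bar
print(meets_open_source_bar(grok_2_community))  # fails on revocability and restrictions
```

The point of the checklist: one failing criterion is enough — Grok 2.5's license trips two.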

Key obligations include:

  • Article 53 (Baseline for All GPAI Providers): Technical docs on training/testing, copyright respect (like honoring opt-outs from Directive 2019/790), and usage info.
  • Article 55 (For Systemic Risks): If over 10²⁵ FLOPs or otherwise designated high-impact, add risk assessments, adversarial testing, and incident reporting. Open-source models can snag exemptions from some transparency rules if their license is truly open and they’re not monetized (e.g., no subscription fees).
  • Exemptions and the Code of Practice: Genuine open-source (non-systemic) skips some hurdles if the license is barrier-free. The EU’s upcoming Code of Practice (rolling out in 2025) offers a voluntary roadmap for safety, transparency, and copyright.
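The obligation tiers above can be sketched as a small decision function. This is my simplified reading of Articles 53 and 55, not the Act's own logic; the string labels are shorthand for illustration only.

```python
SYSTEMIC_THRESHOLD_FLOPS = 1e25

def gpai_obligations(train_flops: float,
                     truly_open_license: bool,
                     monetized: bool) -> set:
    """Simplified mapping from a model's situation to its EU AI Act duties."""
    # A copyright policy (incl. honoring Directive 2019/790 opt-outs)
    # applies to every GPAI provider, exemption or not.
    obligations = {"art53_copyright_policy"}
    if train_flops > SYSTEMIC_THRESHOLD_FLOPS:
        # Systemic-risk models get Article 55 duties; no open-source exemption.
        obligations |= {"art53_technical_docs",
                        "art55_risk_assessment",
                        "art55_adversarial_testing",
                        "art55_incident_reporting"}
    elif not (truly_open_license and not monetized):
        # Below the threshold, only genuinely open, non-monetized models
        # skip the documentation duties.
        obligations.add("art53_technical_docs")
    return obligations

# Non-systemic + genuinely open + free: lightest tier
print(gpai_obligations(8e24, truly_open_license=True, monetized=False))
# Non-systemic but restrictive license (the Grok 2.5 situation): docs required
print(gpai_obligations(8e24, truly_open_license=False, monetized=False))
```

Under this reading, Grok 2.5's restrictive license pushes it into the documentation tier even if it stays under the compute threshold.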

The Act loves open-source for sparking innovation but calls out licenses with sneaky restrictions.

The Compliance Breakdown: Where Grok 2.5 Stands vs. OpenAI

Mirroring the original article’s style, here’s a table assessing compliance based on public data. Ratings: “Likely Compliant” (checks out), “Partial/Questionable” (iffy spots), or “Potential Non-Compliance” (red flags). I’ve included OpenAI for direct comparison.

[Embedded compliance comparison table — https://medium.com/media/8218eea850b241bff8bd2a4f09d44233/href]

Wrapping It Up: Lessons for xAI and the AI World

Grok 2.5 ticks some boxes as a GPAI release but stumbles on open-source purity thanks to its custom license’s revocability and restrictions — potentially stripping away exemptions and inviting deeper EU scrutiny under Articles 53 and 55. OpenAI’s gpt-oss models, with their straightforward Apache 2.0 setup and better docs, seem to sail through more smoothly, qualifying for those sweet exemptions while hitting baselines.

If Grok 2.5’s FLOPs secretly top 10²⁵ (doubtful, but possible), the gaps widen. xAI could level up by switching to a standard open license and beefing up transparency. For anyone in AI, this highlights the Act’s push: Open-source is great, but only if it’s truly open.

Curious about the EU’s full guidelines? Dive into them or chat with regulators for the real deal. What do you think — will more companies follow suit, or tighten up? Drop your thoughts below!


xAI’s Grok 2.5: Open-Sourced, But Does It Pass the EU AI Act Test? was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.
