
MCP Is Here to Save Developers From the AI Agent Glue-Code Nightmare

2025/12/12 01:09

I have spent the last eighteen months working as a digital janitor.

My job title says "Software Engineer," but my day-to-day reality has been far less glamorous. I’ve been building glue. Not the industrial-strength adhesive that holds skyscrapers together. I’m talking about the digital equivalent of duct tape, used to bind one proprietary AI API to another proprietary data source, wrapped in a framework that changes its syntax every three weeks.

It has been exhausting. It has been wasteful. It has been the primary bottleneck for anyone trying to ship actual production AI systems rather than just cool demos on Twitter.

We have been building agents in a fragmented reality. If you wanted your agent to talk to a Postgres database, you wrote a custom integration. If you wanted it to switch to Notion, you wrote another. If you wanted to swap the underlying model from GPT-4 to Claude because the pricing made more sense, you practically had to rewrite the orchestration layer because the function calling schemas were slightly different.

We were not building agents. We were building translators.

That changed on December 9th. The announcement was dry. It involved foundations and governance boards. It lacked the cinematic flair of a demo video showing a robot folding laundry.

But for those of us who actually ship code? It was the most exciting news of the year.

The Agentic AI Foundation has arrived. And with it, the era of proprietary agent glue is effectively over.

Why Are We Still Hardcoding API Calls?

To understand the magnitude of this shift, we have to look at the mess we are currently wading through.

The prevailing orthodoxy in GenAI development has been one of "walled garden innovation." Every major model provider, every framework builder, and every cloud giant looked at the problem of Agentic AI and decided they needed to own the entire stack.

The logic seemed sound on paper. If you own the model, the orchestration, and the tool definitions, you can optimise performance. You can ensure security. You can charge for every step of the chain.

So we ended up with a landscape that looked like the Tower of Babel.

OpenAI had its specific way of defining tools and functions. Anthropic had a slightly different JSON schema. LangChain had its abstractions. LlamaIndex had theirs. Microsoft’s Semantic Kernel did it another way.
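For illustration, here is roughly what that fragmentation looks like for a single tool. The field names follow the two providers' published tool formats, trimmed to the essentials; the point is that the same capability needs two incompatible envelopes.

```typescript
// The same "execute_sql" tool, described twice: once in the OpenAI
// chat-completions tool format, once in Anthropic's tool-use format.
const openaiTool = {
  type: "function",
  function: {
    name: "execute_sql",
    description: "Run a SQL query against the prod DB",
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
};

const anthropicTool = {
  name: "execute_sql",
  description: "Run a SQL query against the prod DB",
  input_schema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
};
```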

Here is what this looks like in practice. This is code I wrote six months ago. It is "legacy" now.

The Old Way: The Integration Trap

# The "Glue" Anti-Pattern # We are tightly coupling the model provider (OpenAI) # to the specific tool implementation (Database) import openai import json from pydantic import BaseModel # 1. Define the schema specifically for OpenAI's format class SQLQuery(BaseModel): query: str rationale: str openai_tool_definition = { "type": "function", "function": { "name": "execute_sql", "description": "Run a SQL query against the prod DB", "parameters": SQLQuery.model_json_schema() } } # 2. Hardcode the logic def execute_sql(query): # Imagine 50 lines of connection logic here pass # 3. The Orchestration Loop # If we want to switch to Anthropic later, we have to rewrite # this entire block because their tool-use syntax differs. response = openai.chat.completions.create( model="gpt-4", messages=[...], tools=[openai_tool_definition] )

If you are a developer, looking at that code should make you itch.

It works. It ships. But it is brittle.

If I want to change the database, I have to update the tool definition. If I want to change the model provider, I have to rewrite the schema generation. If the model provider changes their API version (which happens constantly), my production system breaks.

This is technical debt the moment you write it.

We were building distinct silos of intelligence that could not speak to one another. We were creating a world where an agent built on the OpenAI stack would look at a tool definition written for an Anthropic agent and see nothing but gibberish.

The result was friction. Massive, expensive, soul-crushing friction.

The Model Context Protocol (MCP): A USB Port for Intelligence

The launch of the Agentic AI Foundation (AAIF) is the industry admitting that the "land grab" phase of infrastructure is over. Hosted by the Linux Foundation, it includes Amazon, Google, Microsoft, OpenAI, and Anthropic.

These companies usually agree on nothing. They fight over talent. They fight over market share. Yet here they are, aligning on a single set of open standards.

Why? Because they realised that if the plumbing doesn't work, nobody buys the water.

The crown jewel of this foundation is the Model Context Protocol (MCP), donated by Anthropic.

I have been skeptical of "universal protocols" in the past. Usually, they are abstract academic exercises. But MCP is different. It is practical. It works right now.

Think of MCP as a "Language Server Protocol" (LSP) for AI.

Before LSP, if you wanted to build a code editor (like VS Code or Vim) that supported Python, you had to write your own Python parser and autocomplete engine. LSP standardised the communication. Now, the Python team writes a "Language Server," and any editor that speaks LSP can instantly understand Python.

MCP does the exact same thing for AI tools.

In the MCP world, I don't write a "Google Drive integration for Claude." I write an "MCP Server for Google Drive."

The New Way: The MCP Server

Here is what the code looks like now. This is a TypeScript example using the MCP SDK. Notice the difference in philosophy.

```typescript
/**
 * The MCP Pattern
 * We are not writing for a specific model.
 * We are writing a server that broadcasts capability.
 */
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// 1. Create the server
const server = new McpServer({
  name: "database-server",
  version: "1.0.0",
});

// 2. Register the tool resource
// We define the tool ONCE.
// Any MCP-compliant client (Claude, Goose, Custom) can use this.
server.tool(
  "execute_sql",
  {
    query: z.string().describe("The SQL query to run"),
    rationale: z.string().describe("Why you are running this"),
  },
  async ({ query, rationale }) => {
    // The implementation logic lives here, isolated from the LLM.
    console.error(`Executing query: ${query} because ${rationale}`);
    return {
      content: [{ type: "text", text: "Query Result: [Row 1, Row 2...]" }],
    };
  },
);

// 3. Connect via Standard IO
// The AI connects to this process via stdio.
// No HTTP overhead required for local tools.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Why is this better?

  1. Decoupling: The server doesn't know it's talking to Claude, GPT-4, or a local Llama model. It just exposes a capability.
  2. Standardisation: The schema (zod definition) is automatically converted into the format the model understands by the MCP Client.
  3. Portability: I can take this database-server, wrap it in a Docker container, and deploy it anywhere. Any agent with access to that container can use the database.

The model asks: "What tools do you have?" The MCP server replies: "I can list files, read files, and execute SQL." The model says: "Execute SQL."

The protocol handles the handshake. The protocol handles the security context. I don't have to rewrite the glue code ever again.
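
To make the handshake concrete, here is a minimal sketch of the client side, using the same MCP TypeScript SDK. The file path and tool arguments are placeholders, and the exact SDK surface may vary slightly between versions.

```typescript
// A minimal MCP client sketch: spawn the "database-server" above as a
// child process and talk to it over stdio.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["./database-server.js"], // hypothetical path to the server above
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} },
);
await client.connect(transport);

// "What tools do you have?"
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // e.g. ["execute_sql"]

// "Execute SQL."
const result = await client.callTool({
  name: "execute_sql",
  arguments: { query: "SELECT 1", rationale: "connectivity check" },
});
console.log(result.content);
```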

Is AGENTS.md Just a Glorified README?

Alongside MCP, OpenAI contributed AGENTS.md.

At first glance, this looks trivial. It is effectively a markdown file in your repository that describes the codebase.

You might ask: "Isn't this just a README?"

No. A README is for humans. AGENTS.md is for robots.

When you point an AI agent at a codebase today, how does it know what the code does? How does it know the conventions? How does it know that we use snake_case for variables but CamelCase for classes?

Until now, we stuffed this into the system prompt. We pasted giant blocks of text into the chat window. We engaged in "Prompt Engineering," which is often a polite term for guessing which magic words will make the stochastic parrot behave.

AGENTS.md standardises this context injection.

Context as Code

This allows us to move from "Prompt Engineering" to Context Architecture.

Instead of treating agent instructions as ephemeral text pasted into a chat UI, we treat them as code. They live in the repo. They are version controlled. They are subject to Pull Requests.

Example: AGENTS.md Structure

```markdown
# Project Context

This repository contains the backend for the Payments Service.

## Architectural Constraints

- ALL database access must go through the Repository pattern.
- NEVER write raw SQL in the controllers.
- We use feature flags for all new payment gateways.

## Common Tasks

### Add a New Payment Gateway

1. Create a class in `src/gateways` implementing `IGateway`.
2. Register the key in `config/services.yaml`.
3. Add a migration for the transaction log.

## Known Issues

- The `Refund` class is deprecated. Use `TransactionReversal` instead.
```

When an MCP-compliant agent (like the newly open-sourced goose from Block) enters this repository, it reads this file first.

It effectively "onboards" itself.

If I change the architectural constraints, I update AGENTS.md. The agent's behaviour changes immediately. No prompt tuning required. This creates a direct, version-controlled link between your documentation and the agent's behaviour.

This seems trivial until you try to automate code maintenance at scale. Without a standard, every agent guesses. With a standard, the agent knows.

The Architecture of Trust

We need to talk about security. This is where the skeptics usually check out, and rightly so.

Giving an autonomous agent access to your production database or your Slack workspace is terrifying. In the "Old Way" (the custom integration), security was often an afterthought. You pasted an API key into an .env file and prayed that the model didn't get hit with a prompt injection attack.

MCP introduces a structural change to security. It moves us to a Client-Host-Server model.

  • The Host: The application running the agent (e.g., the Claude Desktop App, or a local IDE).
  • The Client: The agent logic itself.
  • The Server: The tool (e.g., the Postgres MCP server).

The Host acts as the firewall. It controls the consent layer.

When the Client tries to call execute_sql on the Server, the Host intercepts the call. It can prompt the user: "The Agent wants to run 'DROP TABLE users'. Allow?"

This is not possible when the tool logic is embedded inside the agent's code. By separating the tool (Server) from the brain (Client) via a protocol, we create a distinct boundary where we can enforce policy.
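
To make that concrete, here is a hypothetical host-side consent gate sketched against the same client API. MCP itself does not prescribe this logic; it simply creates the boundary where a host can enforce it. The policy and helper names are invented for illustration.

```typescript
// A hypothetical host-side consent gate. Assumes `client` is a connected
// MCP Client, as in the earlier sketch.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

type Approve = (tool: string, args: Record<string, unknown>) => Promise<boolean>;

async function guardedCall(
  client: Client,
  approve: Approve,
  tool: string,
  args: Record<string, unknown>,
) {
  // The host sees the full request before the server ever does.
  if (!(await approve(tool, args))) {
    throw new Error(`User denied tool call: ${tool}`);
  }
  return client.callTool({ name: tool, arguments: args });
}

// Example policy: flag destructive SQL for explicit confirmation.
const approveSql: Approve = async (tool, args) => {
  const query = String(args.query ?? "");
  if (tool === "execute_sql" && /\b(drop|truncate|delete)\b/i.test(query)) {
    // In a real host this would be a UI prompt, not a hardcoded answer.
    console.warn(`The agent wants to run: "${query}". Allow?`);
    return false; // default-deny until a human approves
  }
  return true;
};

// Usage (with a connected client):
// await guardedCall(client, approveSql, "execute_sql",
//   { query: "DROP TABLE users", rationale: "cleanup" });
```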

It turns the "Human in the Loop" from a buzzword into a distinct architectural layer.

What This Actually Means

The launch of the AAIF is the industry signaling the end of the "wild west" phase of GenAI.

We are moving from hand-crafted, artisanal agents to industrial-grade infrastructure. We are trading the excitement of the "new release" for the comfort of the "stable protocol."

For the theorists, this might be boring. Standardisation is never as sexy as revolution. They want to talk about AGI and consciousness.

But for the builders?

This means we can stop reinventing the wheel every Tuesday.

This creates a new ecosystem. We are going to see a shift in how SaaS products market themselves. Previously, they touted their "API." Soon, they will tout their "MCP Server."

"Does it integrate with Claude?" will be replaced by "Is it MCP compliant?"

This opens up a massive market for developers. There is now a clear, standard way to build plugins for the entire AI ecosystem. You write the MCP server once, and it works with every agent framework that adopts the standard.

If you are currently writing a complex, custom integration framework for your internal tools that is tightly coupled to a specific LLM provider's SDK… stop.

You are building technical debt.

Investigate MCP. Look at how you can expose your internal APIs as MCP servers. This future-proofs your work. Today you might be using GPT-4 via Azure. Tomorrow you might be using a fine-tuned Llama 4 on-prem. If your tools speak MCP, the migration is trivial.
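
As a starting point, wrapping an existing internal REST endpoint can be as small as the sketch below. The endpoint, tool name, and payload are invented for illustration; the pattern is the same as the database server above.

```typescript
// A minimal sketch: expose one internal HTTP endpoint as an MCP tool.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "internal-api-server", version: "0.1.0" });

server.tool(
  "lookup_customer",
  { customerId: z.string().describe("Internal customer identifier") },
  async ({ customerId }) => {
    // Delegate to the existing internal service; the agent never sees the HTTP details.
    const res = await fetch(`https://internal.example.com/customers/${customerId}`);
    const body = await res.text();
    return { content: [{ type: "text", text: body }] };
  },
);

await server.connect(new StdioServerTransport());
```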

TL;DR For The Scrollers

  • Proprietary glue is dead. Stop writing custom integrations for specific LLMs. It’s a waste of time.
  • MCP is the USB port for AI. It standardises how models talk to tools (databases, file systems, APIs).
  • Write servers, not scripts. Build "MCP Servers" that expose capabilities, rather than scripts that wrap API calls.
  • Context is Code. Use AGENTS.md to document your codebase for robots, treating instructions like version-controlled code.
  • Security needs boundaries. MCP separates the tool from the model, allowing for a proper consent layer (Host) in the middle.



Edward Burton ships production AI systems and writes about the stuff that actually works. Skeptic of hype. Builder of things.

Production > Demos. Always.

More at tyingshoelaces.com

What proprietary integration are you most excited to delete from your codebase?
