
Why web3 needs decentralized infrastructure before it’s too late

2025/10/23 20:46

Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

The October 20, 2025 AWS outage exposed the fragility of centralized systems, underscoring the urgent need for web3 to embrace truly distributed, resilient infrastructure.

Summary
  • A routine AWS update caused widespread failures across apps, gaming platforms, banking services, and parts of the crypto ecosystem, revealing a single point of failure in critical infrastructure.
  • Web3’s reliance on a few centralized cloud providers threatens the very decentralization it champions, risking downtime and trust whenever a region goes offline.
  • Distributed, redundant hardware and proactive engineering, as implemented by NOWNodes, can absorb failures and maintain service continuity, making infrastructure itself a foundation of trust.

On October 20, 2025, the internet faltered. For hours, countless apps, platforms, and services simply stopped working. Fortnite froze. Snapchat crashed. Alexa went silent. Even major banking and trading apps were down.

The cause wasn’t a cyberattack or a hack; it was a routine software update gone wrong in Amazon Web Services’ US-EAST-1 region, one of the most relied-upon pieces of digital infrastructure on the planet.

A small configuration change led to DNS failures that rippled across the global web, breaking everything from gaming to financial services to parts of the crypto ecosystem.

It was a sobering reminder that the “cloud” isn’t some ethereal, distributed network. It’s a collection of data centers owned by a handful of companies. And when one of them sneezes, the internet catches a cold.

For most of the world, the outage was an inconvenience. For web3, it was an existential warning.

Centralized convenience, decentralized illusions

The modern internet runs on convenience. Platforms like AWS, Google Cloud, and Azure have made it much easier for companies to scale. Startups don’t need to buy racks of servers or run their own data centers anymore. Businesses just pay for what they use and can focus on building their product rather than dealing with hardware.

But this convenience comes with a catch: reliance. When the infrastructure lives somewhere else, companies are putting a lot of trust and control into someone else’s hands. 

The web3 space, the world’s loudest advocate for decentralization, still leans heavily on centralized infrastructure. Many DApps, RPC endpoints, wallets, and validator nodes run on the same few providers, often in the same regions.

If a single cloud region fails, entire “decentralized” ecosystems grind to a halt. The irony is painful: we’re decentralizing finance and governance but centralizing the servers that keep them alive. When AWS goes down, it’s not just a matter of downtime; it also damages trust. If a decentralized system can’t withstand a single point of failure, can it really be called decentralized?
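One practical mitigation is for a client to keep a list of independently hosted RPC endpoints and fall back through them instead of depending on a single provider. A minimal sketch, using only the Python standard library; the endpoint URLs are placeholders, not real services:

```python
# Minimal sketch: a dApp client that falls back across independently hosted
# RPC endpoints instead of depending on a single provider or region.
# The URLs below are placeholders, not real services.
import json
import urllib.request

RPC_ENDPOINTS = [
    "https://rpc.provider-a.example",   # e.g. hosted on cloud A
    "https://rpc.provider-b.example",   # e.g. hosted on cloud B, other region
    "https://rpc.self-hosted.example",  # e.g. your own hardware
]

def rpc_call(method: str, params: list, timeout: float = 3.0):
    """Try each endpoint in order; return the first successful JSON-RPC result."""
    payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": method, "params": params}).encode()
    last_error = None
    for url in RPC_ENDPOINTS:
        try:
            req = urllib.request.Request(
                url, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.load(resp)["result"]
        except Exception as exc:   # timeout, DNS failure, HTTP error...
            last_error = exc       # remember it and try the next endpoint
    raise RuntimeError(f"all RPC endpoints failed: {last_error}")
```

The point of the sketch is the ordering: an outage in one provider’s region degrades to a retry rather than an outage for the application.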

The hidden cost of centralization

Centralized infrastructure concentrates not only risk but also control. Cloud providers can, and do, throttle, suspend, or reprice services at will. They operate as invisible intermediaries with the power to affect everything from latency to liquidity.

For years, cloud computing was cheaper and more flexible than owning hardware. But as the “Big Three” clouds consolidated dominance, the market began to look less like innovation and more like oligopoly.

In 2024-2025, AWS compute costs increased by over 20%, with nearly 40% of companies reporting bill spikes exceeding 25%. The same services that once enabled startup agility now punish success with unpredictable scaling fees.

And when a product’s uptime and financial runway depend on a single provider’s business model, the company is not in control; it’s a tenant.

Hardware returns, not as nostalgia, but necessity

Owning servers might sound outdated, but in 2025, it’s becoming a strategic advantage.

The math isn’t complicated. A physical server costing about $1,100, amortized over ten years, works out to roughly $9 a month; even generous allowances for power, colocation, and maintenance leave a wide gap next to cloud computing at scale, which can easily run $2,000 to $7,000 a month. But the real benefit isn’t just the money.
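The amortization is simple arithmetic. A quick sketch using the round figures from the text (illustrative numbers only; a real total cost of ownership would add power, colocation, and staff time):

```python
# Illustrative amortization math using the round figures from the text.
server_cost = 1_100            # one-time hardware purchase (USD)
lifespan_months = 10 * 12      # ten-year useful life

monthly_hardware = server_cost / lifespan_months
print(f"Amortized hardware cost: ${monthly_hardware:.2f}/month")  # ~ $9.17/month

# Typical at-scale cloud bill range quoted in the text (USD/month).
cloud_low, cloud_high = 2_000, 7_000
print(f"Cloud bill is {cloud_low / monthly_hardware:.0f}x to "
      f"{cloud_high / monthly_hardware:.0f}x the hardware line item")
```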

When businesses run their own hardware, they are truly in charge. The company decides where the data lives, how redundancy works, and how to tune for speed, security, or compliance. There’s no waiting for a cloud provider to roll out a feature, and no API limits to design around. And importantly, the service doesn’t vanish because one cloud region had a bad morning.

Owning infrastructure doesn’t mean going fully offline or building bunkers full of machines. It means designing for distribution: spreading systems across providers, geographies, and hardware models so no single failure can take everything down.

So it’s not “cloud vs. metal.” It’s about control vs. fragility. Clouds will fail. Hardware can fail. But when systems are distributed, redundant, and supported by real engineers who understand failure as an inevitability, the overall architecture becomes antifragile.

Designing for failure: the distributed model

This is why building resilient, decentralized infrastructure is no longer optional; it’s essential. NOWNodes is a good example of this approach. It has been designed with one assumption: failure will happen. That’s the reason its architecture is globally distributed across the European Union, the United States, and Asia, with data centers in Germany, Finland, the Netherlands, the U.S., and Singapore. Each of NOWNodes’ locations is selected not just for network performance but for political stability and operational safety.

Every critical system follows a 2N+1 redundancy model: for each component required, at least three run in parallel. If one fails, another takes over instantly; if two fail, traffic still routes through the third. Downtime isn’t “avoided”; it’s absorbed.
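For N=1, 2N+1 means three live replicas, and traffic keeps flowing as long as any one of them passes its health check. A toy sketch of that routing decision (replica names are illustrative, not NOWNodes’ actual topology):

```python
# Toy sketch of 2N+1 routing: with N=1 there are three replicas, and a request
# is served as long as at least one is healthy. Names are illustrative.
REPLICAS = {
    "eu-frankfurt":   True,   # healthy
    "us-virginia":    True,
    "asia-singapore": True,
}

def route(replicas: dict) -> str:
    """Return the first healthy replica, or raise if the whole set is down."""
    for name, healthy in replicas.items():
        if healthy:
            return name
    raise RuntimeError("total outage: all replicas failed")

print(route(REPLICAS))                    # normal operation -> eu-frankfurt
REPLICAS["eu-frankfurt"] = False          # one failure: absorbed
REPLICAS["us-virginia"] = False           # two failures: still serving
print(route(REPLICAS))                    # -> asia-singapore
```

The interesting property is the failure budget: the system tolerates 2N simultaneous failures before users notice anything at all.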

The NOWNodes team also tests failure on purpose: its engineers run controlled outages in mirrored environments to identify weak points before they break in production. And the team doesn’t hide behind chatbots or long ticket queues. Engineers are on call around the clock via Slack, Telegram, and live chat, usually responding within minutes; most issues are resolved in hours, not days.

Infrastructure is trust

Infrastructure is rarely visible until it fails. Users don’t think about where their transactions are processed or how their wallet connects to a blockchain until it stops working. And when that happens, trust erodes instantly.

The AWS outage was a reminder of that invisible trust relationship between platforms and users. Even “smart” devices couldn’t escape it. There was a viral post about a smart bed that stopped working because AWS went down. Sounds ridiculous, but it’s actually a perfect metaphor. The more connected and “smart” our world gets, the bigger the mess when centralized systems fail.

That’s why decentralizing infrastructure isn’t just about ideology; it’s about functionality. It’s about ensuring the product, the blockchain, or the “smart bed” keeps working when the internet’s biggest provider takes a nap.

Disclosure: This content is provided by a third party. Neither crypto.news nor the author of this article endorses any product mentioned on this page. Users should conduct their own research before taking any action related to the company.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.