In today's edition: Flutterwave acquires Mono || CNN and 11 other channels will remain on DStv || iOCO’s share buyback programme || OADC to acquire data centre

👨🏿‍🚀TechCabal Daily – Mono joins the Wave

2026/01/06 14:28
9 min read

Happy New Year. ☀

The year already feels like decades are happening in a few days. Fun fact: 2026 is also the start of the second quarter of the 21st century; historically, some of the most important technology and infrastructure advances have happened in this quarter. The 20th century gave us mass electrification. AI in this quarter? Fingers crossed.

If you’re dragging yourself back to work this week, you’re not alone. Judging by the memes flying around WhatsApp, the return-from-holiday struggle has been very real and honestly hilarious.

Either way, welcome back. We’re excited to be back and can’t wait to build more engagement with you this year. Write to us anytime with feedback, thoughts, or ideas for TC Daily and look out for our newsletter in your inbox by 7 AM WAT. Quick shout-out to Martha AI, a startup creating customer support that senses emotion and responds with empathy. (This is not a sponsored shout-out; Martha AI seems like an AI wrapper that might crack this customer support automation vibe check startups keep failing.)

Let’s get started.

—Emmanuel

  • Flutterwave acquires Mono
  • CNN and 11 other channels will remain on DStv
  • iOCO’s share buyback programme
  • OADC to acquire data centre operator
  • World Wide Web 3
  • Job Openings

M&A

Flutterwave acquires Mono in all-stock deal

Image: Justice Flutterwave & Mono

On Monday, Africa’s tech ecosystem woke up to major news: Africa’s largest payments startup, Flutterwave, has bought Mono, a Nigerian open-banking startup, in an all-stock deal reportedly worth between $25 million and $40 million. Mono will retain its CEO and operate independently.

Between the lines: Mono’s open-finance rails give Flutterwave deeper visibility into the data behind the payments—customer accounts, their cash flows and financial behaviour—it processes, which is strategic. Flutterwave can evolve beyond a payment processor into a financial institution capable of offering credit-related services for merchants, lenders, and enterprises, while also strengthening its core payments stack through account-to-account transfers.

The deal is reminiscent of Mastercard’s $825 million acquisition of Finicity, which integrated Finicity’s open‑banking APIs and real‑time financial data access into Mastercard’s own open banking platform. Since that acquisition, Mastercard has been able to support lending, risk scoring, identity verification, and bank payments.

The all-stock structure also tells its own story. For Flutterwave, conserving cash while using equity to absorb a complementary platform reduces balance-sheet strain and aligns long-term incentives (read: its search for profitability).

Some of the online chatter has fixated on the deal valuation, noting that even the upper end of the reported price represents little more than a 2x multiple on Mono’s $15 million Series A raised in 2021. A 2x return falls short of what venture investors typically look for, but early backers could still see higher returns if and when Flutterwave lists on a stock exchange.

Mono CEO Abdulhamid Hassan told TechCrunch the business was stable and that this was not a distress sale, but rather the outcome of a strong working relationship between the two startups. The deal allows Hassan, a product guy to the core, to focus on building without the distractions of solo fundraising, while Flutterwave gains a specialised lead to scale its infrastructure as its CEO, Olugbenga “GB” Agboola, continues to navigate the global market.

Importantly, the acquisition reinforces that growth-stage startups are becoming comfortable bringing bigger, complementary players in-house rather than operating in silos, as they double down on their strengths. We saw it last year with Chowdeck and Mira—albeit at a much smaller scale—and this new deal carries on that trend.

Powering African Businesses

Your 2026 demands disciplined financial operations. Fincra powers the payments infrastructure businesses rely on to collect, pay, and settle across local and major African currencies with confidence. Get started.

Streaming

CNN, Cartoon Network, others to stay on DStv after new Canal+ deal

Image Source: MultiChoice

While everybody was anticipating the end of the year as a break from work, one uncertainty hung over pay-TV, particularly for DStv subscribers in Africa. In December, about 12 channels, including the global news channel CNN, were reportedly on their way out of DStv’s catalogue by January 1, 2026, after contract negotiations between MultiChoice, the owner of DStv and GOtv, and Warner Bros. Discovery stalled over a new deal.

However, an eleventh-hour deal changed that. 

What happened? On December 31, 2025, Canal+, the French media giant and MultiChoice’s new owner, struck a new deal with CNN owner Warner Bros. Discovery, the global media conglomerate currently the subject of an acquisition tussle, to keep the twelve channels, including CNN and Cartoon Network, on DStv.

The last-minute turnaround: The new deal now gives Canal+, and by extension MultiChoice, a multi-year, multi-territory agreement to stream content on those channels across all markets where the French and South African pay-TV giants operate. The deal will allow MultiChoice to continue streaming HBO Max content—also owned by Warner Bros. Discovery—and expand its distribution. Showmax, the nimble streaming platform owned by MultiChoice, already streams HBO Max content, including popular titles like Game of Thrones.

With the renewed deal and ‘wider distribution,’ Showmax is well-positioned to deepen its HBO Max catalogue over time, and other MultiChoice-owned platforms, like DStv, could see more Warner Bros.–branded content flow through their bundles.

The agreement also brings short-term clarity amid uncertainty at Warner Bros. Discovery, which has been considering a structural reorganisation that could see its global networks business placed under a proposed Discovery Global spin-off, including CNN. While no such move has been finalised, the new deal ensures continuity for DStv subscribers regardless of how Warner Bros. Discovery ultimately restructures its business.

Companies

South Africa’s iOCO ramps up share repurchase

Image Source: iOCO

iOCO, a South African public company that provides software platforms for telecommunications and digital payments, has completed its share repurchase programme, buying back a further 2.34 million ordinary shares between November 29 and December 31, 2025. The shares were acquired on the open market at prices of up to R4 ($0.24), for a total outlay of R9.4 million ($574,000).

What is this ‘buy-back’ programme? A share buyback allows a company to use surplus cash to repurchase its own stock. Since launching the programme in August 2025, iOCO has bought back more than 4 million shares, which are being held as treasury shares rather than permanently cancelled. The JSE-listed company believes the shares are undervalued and that buying them back is a better use of capital than holding excess cash.

Why did it do this? The board has stressed that the repurchases do not compromise the group’s financial position. iOCO said it continues to meet solvency and liquidity requirements and retains sufficient working capital to fund operations and meet its debt obligations.

Why it matters: The buybacks come as iOCO’s turnaround gathers momentum following years of restructuring under its former EOH identity. The group returned to profitability in its 2025 financial year and is now generating strong cash flows. For retail investors, the buybacks do not deliver immediate cash returns, but they can improve earnings per share and signal growing confidence in the sustainability of the recovery.
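To see that EPS mechanic concretely, here is a quick sketch with entirely made-up figures (these are illustrative numbers, not iOCO’s actual financials): if earnings stay flat while treasury buybacks shrink the share count, earnings per share rises mechanically.

```javascript
// Hypothetical figures for illustration only — not iOCO's financials.
const netIncome = 100_000_000;     // annual earnings, assumed unchanged
const sharesBefore = 500_000_000;  // shares outstanding pre-buyback
const sharesBought = 4_000_000;    // repurchased and held as treasury shares

// EPS = net income / shares outstanding; buybacks shrink the denominator.
const epsBefore = netIncome / sharesBefore;
const epsAfter = netIncome / (sharesBefore - sharesBought);

console.log(epsBefore.toFixed(4)); // 0.2000
console.log(epsAfter.toFixed(4));  // 0.2016
```

The same earnings spread over fewer shares is why buybacks can lift EPS without any operational improvement.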

M&A

South African data centre operator OADC secures regulatory approval to acquire NTT Data facilities

Image Source: OADC

The South African Competition Commission has greenlit a deal for Open Access Data Centres (OADC) to acquire a portfolio of seven data centre facilities from NTT Data (formerly Dimension Data).

The acquisition includes facilities in Bloemfontein, Cape Town, East London, Gqeberha, Umhlanga, and two sites in Johannesburg (Bryanston and Parklands). Along with the physical buildings, OADC—a subsidiary of the WIOCC Group—will take over the associated infrastructure, equipment, and existing supplier agreements.

The context: OADC is already a heavy hitter in the African digital infrastructure space, with existing hubs in Johannesburg, Durban, and Cape Town. By absorbing seven new sites, OADC significantly increases its “edge” data centre capacity, allowing it to offer colocation and connectivity services in several cities where it previously lacked a physical footprint.

Between the lines: While the Commission found no competition concerns, the deal comes with a catch. To meet “public interest” requirements, OADC has committed to implementing a Historically Disadvantaged Persons (HDP) transaction as a condition of the approval.

Zoom out: This move, in addition to the group’s securing R1.1 billion ($65 million) in debt financing, reinforces the WIOCC Group’s strategy of building a converged open-access digital ecosystem. By linking these new data centres to its existing subsea and terrestrial fibre networks, OADC is positioning itself to capture the growing demand for local data storage and faster processing across sub-Saharan Africa.

CRYPTO TRACKER

The World Wide Web3

Source: CoinMarketCap

Coin       Current Value    Day        Month
Bitcoin    $95,523          +1.39%     +4.53%
Ether      $2,921           +3.41%     –3.06%
Yooldo     $0.4046          –1.27%     +19.84%
Solana     $122.63          –3.70%     –9.65%

* Data as of 06.45 AM WAT, January 6, 2026.

Job Openings

  • Deel —Senior Risk Analyst — Remote (Nigeria)
  • Migo —Growth & Product Marketing Analyst — Lagos, Nigeria
  • Piggyvest —Product Marketing & Communications Lead — Lagos, Nigeria

There are more jobs on TechCabal’s job board. If you have job opportunities to share, please submit them at bit.ly/tcxjobs.

  • $670m Sango Capital on exporting African tech to global markets
  • How Nigeria plans to use banks and fintechs to recover tax debt

Written by: Muktar Oladunmade, Opeyemi Kareem, Emmanuel Nwosu, and Zia Yusuf

Edited by: Ganiu Oloruntade

Want more of TechCabal?

Sign up for our insightful newsletters on the business and economy of tech in Africa.

  • The Next Wave: futuristic analysis of the business of tech in Africa.
  • Francophone Weekly by TechCabal: insider insights and analysis of Francophone Africa’s tech ecosystem.

P.S. If you’re often missing TC Daily in your inbox, check your Promotions folder and move any edition of TC Daily from “Promotions” to your “Main” or “Primary” folder, and TC Daily will always come to you.

Email Us
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

MySQL Single Leader Replication with Node.js and Docker

Modern applications demand high availability and the ability to scale reads without compromising performance. One of the most common strategies for achieving this is replication. In this setup, a single database acts as the leader (master) and handles all write operations, while three replicas handle read operations.

In this article, we’ll walk through how to set up MySQL single-leader replication on your local machine using Docker. Once the replication is working, we’ll connect it to a Node.js application using the Sequelize ORM, so that reads are routed to the replicas and writes go to the master. By the end, you’ll have a working environment where you can see replication in real time.

Prerequisites

  • Knowledge of database replication
  • Background knowledge of Docker and Docker Compose
  • Background knowledge of Node.js and how to run a Node.js server

Setup

First, set up the database servers with Docker Compose. In the root of the project directory, create a file named docker-compose.yml with the following content to set up the MySQL primary and replica databases.
```yaml
name: "learn-replica"

volumes:
  mysqlMasterDatabase:
  mysqlSlaveDatabase:
  mysqlSlaveDatabaseII:
  mysqlSlaveDatabaseIII:

networks:
  mysql-replication-network:

services:
  mysql-master:
    image: mysql:latest
    container_name: mysql-master
    command: --server-id=1 --log-bin=ON
    environment:
      MYSQL_ROOT_PASSWORD: master
      MYSQL_DATABASE: replicaDb
    ports:
      - "3306:3306"
    volumes:
      - mysqlMasterDatabase:/var/lib/mysql
    networks:
      - mysql-replication-network

  mysql-slave:
    image: mysql:latest
    container_name: mysql-slave
    command: --server-id=2 --log-bin=ON
    environment:
      MYSQL_ROOT_PASSWORD: slave
      MYSQL_DATABASE: replicaDb
      MYSQL_ROOT_HOST: "%"
    ports:
      - "3307:3306"
    volumes:
      - mysqlSlaveDatabase:/var/lib/mysql
    depends_on:
      - mysql-master
    networks:
      - mysql-replication-network

  mysql-slaveII:
    image: mysql:latest
    container_name: mysql-slaveII
    # server IDs must be unique across the cluster
    command: --server-id=3 --log-bin=ON
    environment:
      MYSQL_ROOT_PASSWORD: slave
      MYSQL_DATABASE: replicaDb
      MYSQL_ROOT_HOST: "%"
    ports:
      - "3308:3306"
    volumes:
      - mysqlSlaveDatabaseII:/var/lib/mysql
    depends_on:
      - mysql-master
    networks:
      - mysql-replication-network

  mysql-slaveIII:
    image: mysql:latest
    container_name: mysql-slaveIII
    command: --server-id=4 --log-bin=ON
    environment:
      MYSQL_ROOT_PASSWORD: slave
      MYSQL_DATABASE: replicaDb
      MYSQL_ROOT_HOST: "%"
    ports:
      - "3309:3306"
    volumes:
      - mysqlSlaveDatabaseIII:/var/lib/mysql
    depends_on:
      - mysql-master
    networks:
      - mysql-replication-network
```

In this setup, I’m creating a master database container called mysql-master and three replica containers called mysql-slave, mysql-slaveII, and mysql-slaveIII. Note that every service gets its own --server-id (1 through 4); duplicate IDs would break replication. I won’t go too deep into the docker-compose.yml file since it’s just a basic setup, but I do want to walk you through the command line instructions used in all four services, because that’s where things get interesting.
```yaml
command: --server-id=1 --log-bin=ON
```

The --server-id option gives each MySQL server in your replication setup its own name tag. Each one has to be unique; without it, replication won’t work at all. Another useful option not included here is binlog_format=ROW. This tells MySQL how to record changes before passing them along to the replicas. By default, MySQL already uses row-based replication, but you can explicitly set it to ROW to be sure, or switch it to STATEMENT if you’d rather log the actual SQL statements instead of row-by-row changes.

Run our containers on Docker

Now, in the terminal, run the following command to spin up the database containers:

```shell
docker-compose up -d
```

Setting Up Our Master (Primary) Server

To configure the master server, first access the running instance on Docker using the following command:

```shell
docker exec -it mysql-master bash
```

This command opens an interactive Bash shell inside the running Docker container named mysql-master, allowing us to run commands directly inside that container.

Now that we’re inside the container, we can access the MySQL server and start running commands. Type:

```shell
mysql -uroot -p
```

This will log you into MySQL as the root user. You’ll be prompted to enter the password you set in your docker-compose.yml file.

Next, we need to create a special user that the replicas will use to connect to the master server and pull data. Inside the MySQL prompt, run the following commands:

```sql
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'replication_pass';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
FLUSH PRIVILEGES;
```

Here’s what’s happening:

  • CREATE USER makes a new MySQL user called repl_user with the password replication_pass.
  • GRANT REPLICATION SLAVE gives this user permission to act as a replication client.
  • FLUSH PRIVILEGES tells MySQL to reload the user permissions so they take effect immediately.

Time to Configure the Replica (Secondary) Servers
a. First, let’s access the replica containers the same way we did with the master. Run these commands in your terminal for each of the replica containers:

```shell
docker exec -it <replica_container_name> bash
mysql -uroot -p
```

<replica_container_name> should be replaced with the name of the replica container you are setting up.

b. Now it’s time to tell the replica where to get its data from. While inside the replica’s MySQL shell, run the following command to configure replication using the master’s details:

```sql
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST='mysql-master',
  SOURCE_USER='repl_user',
  SOURCE_PASSWORD='replication_pass',
  GET_SOURCE_PUBLIC_KEY=1;
```

With the replication settings in place, let’s fire up the replica and get it syncing with the master. Still inside the MySQL shell on the replica, run:

```sql
START REPLICA;
```

This starts the replication process. To make sure everything is working, check the replica’s status with:
```sql
SHOW REPLICA STATUS\G
```

(The \G terminator sends the statement itself, so no trailing semicolon is needed.) Look for Replica_IO_Running and Replica_SQL_Running — if both say Yes, congratulations! 🎉 Your replica is now successfully connected to the master and replicating data in real time.
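If you want to script this health check rather than eyeball it, the \G output is line-oriented and easy to parse. A small sketch in plain Node.js (the sample text below is abbreviated, illustrative output, and the helper name is ours):

```javascript
// Parse the key fields out of `SHOW REPLICA STATUS\G` output and decide
// whether the replica is healthy. Lines look like "   Field_Name: value".
function replicaHealthy(statusText) {
  const fields = {};
  for (const line of statusText.split("\n")) {
    const m = line.match(/^\s*(\w+):\s*(.*)$/);
    if (m) fields[m[1]] = m[2];
  }
  // Both threads must be running for replication to be working.
  return (
    fields.Replica_IO_Running === "Yes" &&
    fields.Replica_SQL_Running === "Yes"
  );
}

const sample = `
*************************** 1. row ***************************
             Source_Host: mysql-master
      Replica_IO_Running: Yes
     Replica_SQL_Running: Yes
`;

console.log(replicaHealthy(sample)); // true
```

A cron job or container healthcheck could run the SQL through the mysql client and feed the text to a check like this.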
Testing Our Replication Setup from the Node.js App Now that our replication is successfully set up, we can configure our Node.js server to observe the real-time effect of data being replicated from the master server to the replica server whenever we write to it. We start by installing the following dependencies:
```shell
npm i express mysql2 sequelize
```

Now create a folder called src in the root directory and add three files inside it: connection.js, index.js, and model.js. We can then set up the connections to our master and replica servers in the connection.js file as shown below.
```javascript
const Sequelize = require("sequelize");

const sequelize = new Sequelize({
  dialect: "mysql",
  replication: {
    write: {
      host: "127.0.0.1",
      username: "root",
      password: "master",
      database: "replicaDb",
    },
    read: [
      { host: "127.0.0.1", username: "root", password: "slave", database: "replicaDb", port: 3307 },
      { host: "127.0.0.1", username: "root", password: "slave", database: "replicaDb", port: 3308 },
      { host: "127.0.0.1", username: "root", password: "slave", database: "replicaDb", port: 3309 },
    ],
  },
});

async function connectdb() {
  try {
    await sequelize.authenticate();
  } catch (error) {
    console.error("❌ unable to connect to the follower database", error);
  }
}

connectdb();

module.exports = { sequelize };
```

We can now create a User table in the model.js file:
```javascript
const { DataTypes } = require("sequelize");
const { sequelize } = require("./connection");

const User = sequelize.define("User", {
  name: {
    type: DataTypes.STRING,
    allowNull: false,
  },
  email: {
    type: DataTypes.STRING,
    unique: true,
    allowNull: false,
  },
});

module.exports = User;
```

Finally, in our index.js file we can start the server and listen for connections on port 3000. In the code sample below, all inserts and updates are routed by Sequelize to the master server, while all read queries are routed to the read replicas.
```javascript
const express = require("express");
const { sequelize } = require("./connection");
const User = require("./model");

const app = express();
app.use(express.json());

async function main() {
  await sequelize.sync({ alter: true });

  app.get("/", (req, res) => {
    res.status(200).json({
      message: "first step to setting server up",
    });
  });

  app.post("/user", async (req, res) => {
    const { email, name } = req.body;
    const newUser = User.build({ name, email });
    // This INSERT will go to the write (master) connection
    await newUser.save();
    res.status(201).json({
      message: "User successfully created",
    });
  });

  app.get("/user", async (req, res) => {
    // This SELECT query will go to one of the read replicas
    const users = await User.findAll();
    res.status(200).json(users);
  });

  app.listen(3000, () => {
    console.log("server has connected");
  });
}

main();
```

When you make a POST request to the /user endpoint, take a moment to check both the master and replica servers to observe how data is replicated in real time. Right now, we are relying on Sequelize to automatically route requests, which works for development but isn’t robust enough for a production environment. In particular, if the master node goes down, Sequelize cannot automatically redirect requests to a newly elected leader. In the next part of this series, we’ll explore strategies to handle these challenges.
Hackernoon, 2025/09/18 14:44