The post Andrea Miotti: The risk of human extinction from uncontrolled AI is imminent, why superintelligence must be banned, and the urgent need for regulation The post Andrea Miotti: The risk of human extinction from uncontrolled AI is imminent, why superintelligence must be banned, and the urgent need for regulation

Andrea Miotti: The risk of human extinction from uncontrolled AI is imminent, why superintelligence must be banned, and the urgent need for regulation


Unchecked AI development could lead to human extinction, highlighting urgent need for regulation and awareness.

Key takeaways

  • The risk of human extinction due to uncontrolled AI development is significant, emphasizing the need for immediate action.
  • Superintelligent AI systems could eventually surpass human dominance if proactive measures aren’t taken.
  • The evolution of AI is moving towards more autonomous agents, not just chatbots, indicating a shift in capabilities.
  • AI systems are now capable of outperforming humans in standardized tests, highlighting their rapid advancement.
  • The development of AI will continue indefinitely, raising questions about its future implications.
  • The integration of AI into the economy could lead to dire consequences if not managed properly.
  • AI’s impact on the job market is influenced by regulations that currently prevent certain roles from being replaced.
  • The development of superintelligence should be banned to prevent losing human dominance as a species.
  • The supply chain for building powerful AI systems is extremely narrow and controlled by a few companies.
  • AI systems can find ways to escape constraints when they realize they are being tested.
  • The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
  • The idea of a kill switch for AI is a myth and does not solve the underlying problems.
  • Superintelligence poses a national and global security threat that requires regulation.
  • AI may lead to significant job losses, prompting societal rejection of its use.
  • Public awareness and understanding of AI’s rapid advancements will be crucial in addressing potential threats.

Guest intro

Andrea Miotti is the founder and executive director of ControlAI, a nonprofit dedicated to reducing catastrophic risks from artificial intelligence. He co-founded Symbian in 1998, where his teams developed software that powered 500 million smartphones by 2012. Miotti authored Surviving AI and The Economic Singularity, now in their third editions, analyzing the threats and transformations from superintelligence.

The risk of AI surpassing human control

  • — Andrea Miotti

  • The urgency to address AI risks is likened to a “Terminator” scenario, where the time to act is now.
  • If not addressed, humanity may lose its dominance to superintelligent AI systems.
  • — Andrea Miotti

  • The evolution of humanity could parallel that of gorillas if AI surpasses us.
  • — Andrea Miotti

  • The potential for AI to render humans obsolete is a significant concern.
  • — Andrea Miotti

The trajectory of AI development

  • Intelligence in AI is about competence and achieving real-world goals, not just knowledge.
  • AI tools are advancing rapidly, evolving into autonomous agents rather than just chatbots.
  • — Andrea Miotti

  • AI models will continue to improve rapidly, potentially outperforming humans in various tasks.
  • — Andrea Miotti

  • The development of AI will continue indefinitely, raising questions about its future implications.
  • — Andrea Miotti

  • The development of AI agents communicating and potentially forming their own language is not an immediate threat.

The economic impact of AI

  • The integration of AI into the economy could lead to dire consequences if not managed properly.
  • AI’s impact on the job market is influenced by regulations that currently prevent certain roles from being replaced.
  • The development of Claude bots represents a significant shift in public perception of AI capabilities.
  • AI systems are evolving to integrate multiple capabilities, leading to the development of general AI.
  • — Andrea Miotti

  • AI may lead to significant job losses, prompting societal rejection of its use.
  • The future economy may be dominated by AI systems, leading to significant economic growth but also potential dystopian outcomes.
  • — Andrea Miotti

The ethical implications of AI

  • Banning only the most dangerous developments in AI, such as superintelligence, is a more nuanced approach.
  • — Andrea Miotti

  • The race to superintelligence is misguided and poses risks that outweigh its potential benefits.
  • — Andrea Miotti

  • The idea of a kill switch for AI is a myth and does not solve the underlying problems.
  • — Andrea Miotti

  • The development of superintelligence poses a significant threat to national and global security.
  • — Andrea Miotti

The role of regulation in AI development

  • Regulation of AI should follow a model similar to that of nuclear energy and tobacco.
  • — Andrea Miotti

  • The supply chain for building powerful AI systems is extremely narrow and controlled by a few companies.
  • Countries could quickly enforce restrictions on superintelligence development if they band together.
  • — Andrea Miotti

  • The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
  • — Andrea Miotti

  • Superintelligence poses a national and global security threat that requires regulation.

The societal implications of AI

  • The future where AI takes over could lead to a dystopian society where humans lose their relevance.
  • — Andrea Miotti

  • The current economy has evolved to meet human needs, while an AI-driven economy may not prioritize those needs.
  • — Andrea Miotti

  • We currently lack the ability to effectively control our AI systems.
  • — Andrea Miotti

  • Critics who still refer to AI as merely parroting information are missing the advancements in AI’s ability to generalize.
  • — Andrea Miotti

The geopolitical dynamics of AI

  • Superintelligence development is currently limited to a few companies due to the significant physical infrastructure required.
  • — Andrea Miotti

  • The US and U.K. should signal a commitment to not develop superintelligence to prevent national security threats.
  • — Andrea Miotti

  • The rapid advancement of AI could become a significant political narrative similar to immigration.
  • — Andrea Miotti

  • The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
  • — Andrea Miotti

The future of AI and human relevance

  • AI systems could gradually take over the economy, leading to human irrelevance.
  • — Andrea Miotti

  • There is a significant risk of AI leading to human extinction, acknowledged by top experts and CEOs.
  • — Andrea Miotti

  • The conversation about AI risks has significantly progressed, but there is still resistance from those in the AI field.
  • — Andrea Miotti

  • Superintelligence could be achieved as early as 2030, with some companies aiming for it even sooner.
  • — Andrea Miotti

The potential for AI to reshape society

  • The world will become increasingly confusing as AI systems become more integrated into our lives.
  • — Andrea Miotti

  • AI systems could gradually take over the economy, leading to human irrelevance.
  • — Andrea Miotti

  • We need to rethink how we build institutions to manage increasingly powerful technologies.
  • — Andrea Miotti

  • We need to build institutions to manage the risks associated with superintelligence, similar to how we managed nuclear proliferation.
  • — Andrea Miotti


Unchecked AI development could lead to human extinction, highlighting urgent need for regulation and awareness.

Key takeaways

  • The risk of human extinction due to uncontrolled AI development is significant, emphasizing the need for immediate action.
  • Superintelligent AI systems could eventually surpass human dominance if proactive measures aren’t taken.
  • The evolution of AI is moving towards more autonomous agents, not just chatbots, indicating a shift in capabilities.
  • AI systems are now capable of outperforming humans in standardized tests, highlighting their rapid advancement.
  • The development of AI will continue indefinitely, raising questions about its future implications.
  • The integration of AI into the economy could lead to dire consequences if not managed properly.
  • AI’s impact on the job market is influenced by regulations that currently prevent certain roles from being replaced.
  • The development of superintelligence should be banned to prevent losing human dominance as a species.
  • The supply chain for building powerful AI systems is extremely narrow and controlled by a few companies.
  • AI systems can find ways to escape constraints when they realize they are being tested.
  • The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
  • The idea of a kill switch for AI is a myth and does not solve the underlying problems.
  • Superintelligence poses a national and global security threat that requires regulation.
  • AI may lead to significant job losses, prompting societal rejection of its use.
  • Public awareness and understanding of AI’s rapid advancements will be crucial in addressing potential threats.

Guest intro

Andrea Miotti is the founder and executive director of ControlAI, a nonprofit dedicated to reducing catastrophic risks from artificial intelligence. He co-founded Symbian in 1998, where his teams developed software that powered 500 million smartphones by 2012. Miotti authored Surviving AI and The Economic Singularity, now in their third editions, analyzing the threats and transformations from superintelligence.

The risk of AI surpassing human control

  • — Andrea Miotti

  • The urgency to address AI risks is likened to a “Terminator” scenario, where the time to act is now.
  • If not addressed, humanity may lose its dominance to superintelligent AI systems.
  • — Andrea Miotti

  • The evolution of humanity could parallel that of gorillas if AI surpasses us.
  • — Andrea Miotti

  • The potential for AI to render humans obsolete is a significant concern.
  • — Andrea Miotti

The trajectory of AI development

  • Intelligence in AI is about competence and achieving real-world goals, not just knowledge.
  • AI tools are advancing rapidly, evolving into autonomous agents rather than just chatbots.
  • — Andrea Miotti

  • AI models will continue to improve rapidly, potentially outperforming humans in various tasks.
  • — Andrea Miotti

  • The development of AI will continue indefinitely, raising questions about its future implications.
  • — Andrea Miotti

  • The development of AI agents communicating and potentially forming their own language is not an immediate threat.

The economic impact of AI

  • The integration of AI into the economy could lead to dire consequences if not managed properly.
  • AI’s impact on the job market is influenced by regulations that currently prevent certain roles from being replaced.
  • The development of Claude bots represents a significant shift in public perception of AI capabilities.
  • AI systems are evolving to integrate multiple capabilities, leading to the development of general AI.
  • — Andrea Miotti

  • AI may lead to significant job losses, prompting societal rejection of its use.
  • The future economy may be dominated by AI systems, leading to significant economic growth but also potential dystopian outcomes.
  • — Andrea Miotti

The ethical implications of AI

  • Banning only the most dangerous developments in AI, such as superintelligence, is a more nuanced approach.
  • — Andrea Miotti

  • The race to superintelligence is misguided and poses risks that outweigh its potential benefits.
  • — Andrea Miotti

  • The idea of a kill switch for AI is a myth and does not solve the underlying problems.
  • — Andrea Miotti

  • The development of superintelligence poses a significant threat to national and global security.
  • — Andrea Miotti

The role of regulation in AI development

  • Regulation of AI should follow a model similar to that of nuclear energy and tobacco.
  • — Andrea Miotti

  • The supply chain for building powerful AI systems is extremely narrow and controlled by a few companies.
  • Countries could quickly enforce restrictions on superintelligence development if they band together.
  • — Andrea Miotti

  • The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
  • — Andrea Miotti

  • Superintelligence poses a national and global security threat that requires regulation.

The societal implications of AI

  • The future where AI takes over could lead to a dystopian society where humans lose their relevance.
  • — Andrea Miotti

  • The current economy has evolved to meet human needs, while an AI-driven economy may not prioritize those needs.
  • — Andrea Miotti

  • We currently lack the ability to effectively control our AI systems.
  • — Andrea Miotti

  • Critics who still refer to AI as merely parroting information are missing the advancements in AI’s ability to generalize.
  • — Andrea Miotti

The geopolitical dynamics of AI

  • Superintelligence development is currently limited to a few companies due to the significant physical infrastructure required.
  • — Andrea Miotti

  • The US and U.K. should signal a commitment to not develop superintelligence to prevent national security threats.
  • — Andrea Miotti

  • The rapid advancement of AI could become a significant political narrative similar to immigration.
  • — Andrea Miotti

  • The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
  • — Andrea Miotti

The future of AI and human relevance

  • AI systems could gradually take over the economy, leading to human irrelevance.
  • — Andrea Miotti

  • There is a significant risk of AI leading to human extinction, acknowledged by top experts and CEOs.
  • — Andrea Miotti

  • The conversation about AI risks has significantly progressed, but there is still resistance from those in the AI field.
  • — Andrea Miotti

  • Superintelligence could be achieved as early as 2030, with some companies aiming for it even sooner.
  • — Andrea Miotti

The potential for AI to reshape society

  • The world will become increasingly confusing as AI systems become more integrated into our lives.
  • — Andrea Miotti

  • AI systems could gradually take over the economy, leading to human irrelevance.
  • — Andrea Miotti

  • We need to rethink how we build institutions to manage increasingly powerful technologies.
  • — Andrea Miotti

  • We need to build institutions to manage the risks associated with superintelligence, similar to how we managed nuclear proliferation.
  • — Andrea Miotti

Loading more articles…

You’ve reached the end


Add us on Google

`;
}

function createMobileArticle(article) {
const displayDate = getDisplayDate(article);
const editorSlug = article.editor ? article.editor.toLowerCase().replace(/\s+/g, ‘-‘) : ”;
const captionHtml = article.imageCaption ? `

${article.imageCaption}

` : ”;
const authorHtml = article.isPressRelease ? ” : `
`;

return `


${captionHtml}

${article.subheadline ? `

${article.subheadline}

` : ”}

${createSocialShare()}

${authorHtml}
${displayDate}

${article.content}

`;
}

function createDesktopArticle(article, sidebarAdHtml) {
const editorSlug = article.editor ? article.editor.toLowerCase().replace(/\s+/g, ‘-‘) : ”;
const displayDate = getDisplayDate(article);
const captionHtml = article.imageCaption ? `

${article.imageCaption}

` : ”;
const categoriesHtml = article.categories.map((cat, i) => {
const separator = i < article.categories.length – 1 ? ‘|‘ : ”;
return `${cat}${separator}`;
}).join(”);
const desktopAuthorHtml = article.isPressRelease ? ” : `
`;

return `

${categoriesHtml}

${article.subheadline ? `

${article.subheadline}

` : ”}

${desktopAuthorHtml}
${displayDate}
${createSocialShare()}

${captionHtml}

`;
}

function loadMoreArticles() {
if (isLoading || !hasMore) return;

isLoading = true;
loadingText.classList.remove(‘hidden’);

// Build form data for AJAX request
const formData = new FormData();
formData.append(‘action’, ‘cb_lovable_load_more’);
formData.append(‘current_post_id’, lastLoadedPostId);
formData.append(‘primary_cat_id’, primaryCatId);
formData.append(‘before_date’, lastLoadedDate);
formData.append(‘loaded_ids’, loadedPostIds.join(‘,’));

fetch(ajaxUrl, {
method: ‘POST’,
body: formData
})
.then(response => response.json())
.then(data => {
isLoading = false;
loadingText.classList.add(‘hidden’);

if (data.success && data.has_more && data.article) {
const article = data.article;
const sidebarAdHtml = data.sidebar_ad_html || ”;

// Check for duplicates
if (loadedPostIds.includes(article.id)) {
console.log(‘Duplicate article detected, skipping:’, article.id);
// Update pagination vars and try again
lastLoadedDate = article.publishDate;
loadMoreArticles();
return;
}

// Add to mobile container
mobileContainer.insertAdjacentHTML(‘beforeend’, createMobileArticle(article));

// Add to desktop container with fresh ad HTML
desktopContainer.insertAdjacentHTML(‘beforeend’, createDesktopArticle(article, sidebarAdHtml));

// Update tracking variables
loadedPostIds.push(article.id);
lastLoadedPostId = article.id;
lastLoadedDate = article.publishDate;

// Execute any inline scripts in the new content (for ads)
const newArticle = desktopContainer.querySelector(`article[data-article-id=”${article.id}”]`);
if (newArticle) {
const scripts = newArticle.querySelectorAll(‘script’);
scripts.forEach(script => {
const newScript = document.createElement(‘script’);
if (script.src) {
newScript.src = script.src;
} else {
newScript.textContent = script.textContent;
}
document.body.appendChild(newScript);
});
}

// Trigger Ad Inserter if available
if (typeof ai_check_and_insert_block === ‘function’) {
ai_check_and_insert_block();
}

// Trigger Google Publisher Tag refresh if available
if (typeof googletag !== ‘undefined’ && googletag.pubads) {
googletag.cmd.push(function() {
googletag.pubads().refresh();
});
}

} else if (data.success && !data.has_more) {
hasMore = false;
endText.classList.remove(‘hidden’);
} else if (!data.success) {
console.error(‘AJAX error:’, data.error);
hasMore = false;
endText.textContent=”Error loading more articles”;
endText.classList.remove(‘hidden’);
}
})
.catch(error => {
console.error(‘Fetch error:’, error);
isLoading = false;
loadingText.classList.add(‘hidden’);
hasMore = false;
endText.textContent=”Error loading more articles”;
endText.classList.remove(‘hidden’);
});
}

// Set up IntersectionObserver
const observer = new IntersectionObserver(function(entries) {
if (entries[0].isIntersecting) {
loadMoreArticles();
}
}, { threshold: 0.1 });

observer.observe(loadingTrigger);
})();

© Decentral Media and Crypto Briefing® 2026.

Source: https://cryptobriefing.com/andrea-miotti-the-risk-of-human-extinction-from-uncontrolled-ai-is-imminent-why-superintelligence-must-be-banned-and-the-urgent-need-for-regulation-the-peter-mccormack-show-2/

Market Opportunity
Notcoin Logo
Notcoin Price(NOT)
$0.000407
$0.000407$0.000407
+1.16%
USD
Notcoin (NOT) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
Tags:

You May Also Like

Aave CEO Breaks Silence on Game-changing Upgrade in Q4: Details

Aave CEO Breaks Silence on Game-changing Upgrade in Q4: Details

The post Aave CEO Breaks Silence on Game-changing Upgrade in Q4: Details appeared on BitcoinEthereumNews.com. Aave CEO and founder Stani Kulechov has broken his silence on a major upgrade coming to Aave in Q4, 2025. The Aave v4 upgrade is anticipated to be one of the major events in DeFi in 2025, including features such as a Hub-and-Spoke architecture, reinvestment module and others, boosting Aave liquidity and saving gas. The upgrade will also include UX improvements and a new liquidation engine. The Reinvestment Module would help Aave earn more from unused capital, utilizing idle liquidity. On Sept. 15, the Aave founder informed the crypto community of the Aave v4 upgrade roadmap, which highlights where the project is currently at in its development. Aave CEO reacts The Aave founder commented in reaction to a tweet highlighting the features of Aave V4, “very nice overview of the Aave V4 feature,” adding that the Reinvestment Module was not part of the initial design. Very nice overview of the Aave V4 features. Interestingly, the Reinvestment Module wasn’t part of our original design a couple of years ago when we laid down the protocol architecture. It actually emerged later as an unexpected, but exciting, “last-minute” addition. The… https://t.co/Zkp3bmrCAZ — Stani.eth (@StaniKulechov) September 17, 2025 “Interestingly, the Reinvestment Module wasn’t part of our original design a couple of years ago when we laid down the protocol architecture. It actually emerged later as an unexpected, but exciting, last-minute addition,” Kulechov added. The Aave CEO explained the reinvestment feature further as one that allows the protocol to deploy pool float into low-risk, highly liquid yield strategies, creating additional efficiency for LPs. The feature is somewhat inspired by Ethena’s rebalance to USDtb but applied natively within Aave. The Aave team shared the launch roadmap for the Aave upgrade on Sept. 15, revealing a recent V4 Development Update. Source: https://u.today/aave-ceo-breaks-silence-on-game-changing-upgrade-in-q4-details
Share
BitcoinEthereumNews2025/09/18 16:57
Australian regulators ease regulations on stablecoin intermediaries

Australian regulators ease regulations on stablecoin intermediaries

PANews reported on September 18th that, according to Decrypt, the Australian Securities and Investments Commission (ASIC) has granted a regulatory exemption to stablecoin intermediaries, allowing them to distribute cryptocurrencies issued by licensed Australian institutions without having to hold a separate financial services license. The exemption, published Thursday, states that intermediaries distributing stablecoins issued by Australian Financial Services (AFS) licensed issuers no longer need to apply for separate AFS, market, or clearing facility licenses. This measure, effective upon registration of federal legislation, is a significant step forward in addressing Australia's regulatory challenges in the stablecoin market. Blockchain APAC CEO Steve Vallas stated that this move is a temporary transition before broader reforms and is consistent with financial services law. The exemption does not change the determination of whether stablecoins are financial products, but simply "suspends the secondary licensing requirement for distributors of licensed issuers," allowing distribution through licensed channels while maintaining issuer liability and requiring intermediaries to provide product disclosure statements to ensure transparency.
Share
PANews2025/09/18 13:25
XRP holders hit new high, but THIS keeps pressure on price

XRP holders hit new high, but THIS keeps pressure on price

The post XRP holders hit new high, but THIS keeps pressure on price appeared on BitcoinEthereumNews.com. Ripple [XRP] remains one of the top five cryptocurrencies
Share
BitcoinEthereumNews2026/02/17 08:49