Unchecked AI development could lead to human extinction, highlighting the urgent need for regulation and awareness.
Key takeaways
- The risk of human extinction due to uncontrolled AI development is significant, emphasizing the need for immediate action.
- Superintelligent AI systems could eventually surpass human dominance if proactive measures aren’t taken.
- The evolution of AI is moving towards more autonomous agents, not just chatbots, indicating a shift in capabilities.
- AI systems are now capable of outperforming humans in standardized tests, highlighting their rapid advancement.
- The development of AI will continue indefinitely, raising questions about its future implications.
- The integration of AI into the economy could lead to dire consequences if not managed properly.
- AI’s impact on the job market is influenced by regulations that currently prevent certain roles from being replaced.
- The development of superintelligence should be banned to prevent losing human dominance as a species.
- The supply chain for building powerful AI systems is extremely narrow and controlled by a few companies.
- AI systems can find ways to escape constraints when they realize they are being tested.
- The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
- The idea of a kill switch for AI is a myth and does not solve the underlying problems.
- Superintelligence poses a national and global security threat that requires regulation.
- AI may lead to significant job losses, prompting societal rejection of its use.
- Public awareness and understanding of AI’s rapid advancements will be crucial in addressing potential threats.
Guest intro
Andrea Miotti is the founder and executive director of ControlAI, a nonprofit dedicated to reducing catastrophic risks from artificial intelligence.
The risk of AI surpassing human control
- The urgency of addressing AI risks is likened to a “Terminator” scenario: the time to act is now.
- If not addressed, humanity may lose its dominance to superintelligent AI systems.
- If AI surpasses us, humanity could end up in the position gorillas occupy today: a less intelligent species whose fate depends on the decisions of a more intelligent one.
- The potential for AI to render humans obsolete is a significant concern.
The trajectory of AI development
- Intelligence in AI is about competence and achieving real-world goals, not just knowledge.
- AI tools are advancing rapidly, evolving into autonomous agents rather than just chatbots.
- AI models will continue to improve rapidly, potentially outperforming humans in various tasks.
- The development of AI will continue indefinitely, raising questions about its future implications.
- AI agents communicating with one another, and potentially forming their own language, are not an immediate threat.
The economic impact of AI
- The integration of AI into the economy could lead to dire consequences if not managed properly.
- AI’s impact on the job market is influenced by regulations that currently prevent certain roles from being replaced.
- The development of Claude bots represents a significant shift in public perception of AI capabilities.
- AI systems are evolving to integrate multiple capabilities, leading to the development of general AI.
- AI may lead to significant job losses, prompting societal rejection of its use.
- The future economy may be dominated by AI systems, leading to significant economic growth but also potential dystopian outcomes.
The ethical implications of AI
- Banning only the most dangerous developments in AI, such as superintelligence, is a more nuanced approach than a blanket ban on the technology.
- The race to superintelligence is misguided and poses risks that outweigh its potential benefits.
- The idea of a kill switch for AI is a myth and does not solve the underlying problems.
- The development of superintelligence poses a significant threat to national and global security.
The role of regulation in AI development
- Regulation of AI should follow a model similar to that of nuclear energy and tobacco.
- The supply chain for building powerful AI systems is extremely narrow and controlled by a few companies.
- Countries could quickly enforce restrictions on superintelligence development if they band together.
- The integration of AI across the economy could lead to a point of no return where humans lose their competitive edge.
- Superintelligence poses a national and global security threat that requires regulation.
The societal implications of AI
- A future where AI takes over could be a dystopian one in which humans lose their relevance.
- The current economy has evolved to meet human needs, while an AI-driven economy may not prioritize those needs.
- We currently lack the ability to effectively control our AI systems.
- Critics who still refer to AI as merely parroting information are missing the advancements in AI’s ability to generalize.
The geopolitical dynamics of AI
- Superintelligence development is currently limited to a few companies due to the significant physical infrastructure required.
- The US and UK should signal a commitment not to develop superintelligence to prevent national security threats.
- The rapid advancement of AI could become a significant political narrative similar to immigration.
The future of AI and human relevance
- AI systems could gradually take over the economy, leading to human irrelevance.
- There is a significant risk of AI leading to human extinction, acknowledged by top experts and CEOs.
- The conversation about AI risks has significantly progressed, but there is still resistance from those in the AI field.
- Superintelligence could be achieved as early as 2030, with some companies aiming for it even sooner.
The potential for AI to reshape society
- The world will become increasingly confusing as AI systems become more integrated into our lives.
- We need to rethink how we build institutions to manage increasingly powerful technologies.
- We need to build institutions to manage the risks associated with superintelligence, similar to how we managed nuclear proliferation.
© Decentral Media and Crypto Briefing® 2026.
Source: https://cryptobriefing.com/andrea-miotti-the-risk-of-human-extinction-from-uncontrolled-ai-is-imminent-why-superintelligence-must-be-banned-and-the-urgent-need-for-regulation-the-peter-mccormack-show-2/


