Agentic artificial intelligence (AI) promises to transform how organizations operate. Unlike earlier AI tools designed to summarize documents or generate contentAgentic artificial intelligence (AI) promises to transform how organizations operate. Unlike earlier AI tools designed to summarize documents or generate content

Laura I. Harder: How to Prepare Boards for the Security Risks of Agentic AI

2026/03/19 13:28
Okuma süresi: 6 dk
Bu içerikle ilgili geri bildirim veya endişeleriniz için lütfen [email protected] üzerinden bizimle iletişime geçin.

Agentic artificial intelligence (AI) promises to transform how organizations operate. Unlike earlier AI tools designed to summarize documents or generate content, these systems can act autonomously, execute tasks and interact with enterprise systems. For boards overseeing technology risk, that shift introduces a fundamentally different category of security concern. Laura I. Harder, Vice President of the Information Systems Security Association (ISSA) International and an offensive cyber officer in the U.S. Air Force Reserves, believes many leaders underestimate how quickly those risks can materialize. “The risk to organizations really comes down to having too much agency,” Harder says. “Agents can change permissions, change functionality and create actions that you maybe weren’t expecting.” As organizations move from experimenting with AI to operationalizing autonomous agents, boards must move just as quickly to establish governance structures, guardrails and oversight mechanisms capable of managing systems that can make decisions and take action without human intervention.

Agentic AI Changes the Security Equation

For the past several years, most corporate AI deployments have centered on tools that analyze information or generate outputs. Those capabilities introduced privacy and data integrity concerns, but the systems themselves rarely executed actions inside enterprise environments. Agentic AI changes that dynamic. Instead of simply offering recommendations or filtering resumes, agents can trigger workflows, access databases and interact with software systems across an organization. “It’s now not just giving us advice. It’s taking action and it acts on its own,” Harder says.

Laura I. Harder: How to Prepare Boards for the Security Risks of Agentic AI

That autonomy creates new security challenges because the systems can be manipulated. Just as humans can fall for social engineering, AI agents can be tricked into executing unintended tasks through techniques such as prompt injection. Harder points to real-world examples where hidden instructions embedded in inputs alter how AI behaves. “The AI is going to behave based off of the instructions it’s given,” she says. These threats are compounded by the opaque nature of many AI models. Organizations often rely on third-party tools without full visibility into how decisions are made. The result is a system capable of executing actions while operating in ways that are difficult to predict.

The Hidden Risk Boards Often Overlook

When boards begin evaluating agentic AI, Harder says the most underestimated vulnerability is permissions. Every AI agent operates within a network of systems, data sources and applications. The level of access granted to those systems determines the potential damage if something goes wrong. Harder describes this as the system’s “blast radius.” An agent that is given broad permissions may be able to interact with far more data and infrastructure than leaders realize.

A common example occurs when AI systems are connected to internal collaboration tools or document repositories. If a widely shared folder contains sensitive information, an agent operating in that environment will be able to access and use that data within the permissions granted to the user, service account, or integration it runs under. In practice, that means the agent can surface or act on information that may have been broadly accessible but not actively monitored.

Third-party AI services introduce an additional layer of risk. “If you’re using a model, what information does that model have access to, and can your information be used to train that model?” Harder asks. Without clear controls, proprietary information, intellectual property or sensitive customer data could unintentionally leave the organization through AI interactions.

Building Governance That Can Keep Up With AI

AI governance must be treated as a structured program rather than a technology add-on. Organizations should begin by establishing a dedicated AI governance board, often modeled after existing privacy or risk governance committees. That group should adopt established frameworks such as the NIST AI Risk Management Framework or international standards like ISO 42001. “Having AI governance and AI protections is not just a product that you can purchase,” she says.

These frameworks provide guidance on policies, risk assessments and operational controls. But they still require organizations to define how AI will function within their environment and what data it will be allowed to access. “You need policies, procedures and inventories,” Harder says. “Those pieces will help build the infrastructure that your teams can work from.” One emerging practice is the creation of an “AI bill of materials” that inventories every AI tool used inside the organization, what systems it connects to and what data it can access. Without that visibility, organizations cannot fully understand the exposure created by autonomous systems interacting with enterprise infrastructure.

Guardrails That Prevent AI From Going Rogue

Even with governance structures in place, agentic systems require technical safeguards that limit how they operate. The most effective strategy is to design security controls from the beginning. Systems should initially be developed inside closed, controlled sandbox environments using test data (not production data) and limited privileges. “As you are building your agentic system, you should do so in a sandbox,” she says. “It’s a controlled environment where synthetic systems can operate with low risk and no privilege.”

Testing must also include red teaming, where security professionals attempt to break the system or manipulate its behavior. These exercises expose vulnerabilities before systems are deployed into production environments. “Having a human in the loop ensures that if and when your AI tool decides to make a decision that maybe you didn’t want it to, there’s some sort of restriction,” Harder says. Isolation techniques can also limit risk. In some architectures, agents are contained inside virtual machines where policies restrict what commands they can execute and what systems they can access.

Board Oversight Ultimately Matters

For boards, the rise of agentic AI is a governance and accountability challenge and Harder stresses that organizations remain responsible for the actions their AI systems take. “You cannot go back and say, ‘I didn’t know it could do this,'” she says. “You have to do your due diligence.” That responsibility carries both legal and fiduciary implications. Boards must ensure that autonomous technologies are implemented with clear oversight, constrained authority and continuous monitoring. “Do not connect agents to privileged tools until you can prove that it has constrained authority, human checkpoints and monitoring,” Harder says. As agentic AI continues to move from experimentation into core operations, the organizations that succeed will be those that treat governance and security as foundational requirements rather than afterthoughts.

Follow Laura I. Harder on LinkedIn for more insights.

Comments
Sorumluluk Reddi: Bu sitede yeniden yayınlanan makaleler, halka açık platformlardan alınmıştır ve yalnızca bilgilendirme amaçlıdır. MEXC'nin görüşlerini yansıtmayabilir. Tüm hakları telif sahiplerine aittir. Herhangi bir içeriğin üçüncü taraf haklarını ihlal ettiğini düşünüyorsanız, kaldırılması için lütfen [email protected] ile iletişime geçin. MEXC, içeriğin doğruluğu, eksiksizliği veya güncelliği konusunda hiçbir garanti vermez ve sağlanan bilgilere dayalı olarak alınan herhangi bir eylemden sorumlu değildir. İçerik, finansal, yasal veya diğer profesyonel tavsiye niteliğinde değildir ve MEXC tarafından bir tavsiye veya onay olarak değerlendirilmemelidir.

Ayrıca Şunları da Beğenebilirsiniz

Why African countries are using data protection laws as backdoor to regulate AI

Why African countries are using data protection laws as backdoor to regulate AI

Rather than waiting for comprehensive AI frameworks, which are often complex and slow to develop, governments across the continent are embedding AI-related rules
Paylaş
Techcabal2026/03/19 18:46
YieldMax Funds Explained: How These ETFs Work, What They Pay & The Hidden Risks

YieldMax Funds Explained: How These ETFs Work, What They Pay & The Hidden Risks

If you have spent any time in income-investing circles recently, you have almost certainly come across YieldMax funds the ETFs promising yields of 30%, 50%, or
Paylaş
Fintechzoom2026/03/19 18:14
Canada Canadian Portfolio Investment in Foreign Securities rose from previous $9.04B to $17.41B in July

Canada Canadian Portfolio Investment in Foreign Securities rose from previous $9.04B to $17.41B in July

The post Canada Canadian Portfolio Investment in Foreign Securities rose from previous $9.04B to $17.41B in July appeared on BitcoinEthereumNews.com. Information on these pages contains forward-looking statements that involve risks and uncertainties. Markets and instruments profiled on this page are for informational purposes only and should not in any way come across as a recommendation to buy or sell in these assets. You should do your own thorough research before making any investment decisions. FXStreet does not in any way guarantee that this information is free from mistakes, errors, or material misstatements. It also does not guarantee that this information is of a timely nature. Investing in Open Markets involves a great deal of risk, including the loss of all or a portion of your investment, as well as emotional distress. All risks, losses and costs associated with investing, including total loss of principal, are your responsibility. The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of FXStreet nor its advertisers. The author will not be held responsible for information that is found at the end of links posted on this page. If not otherwise explicitly mentioned in the body of the article, at the time of writing, the author has no position in any stock mentioned in this article and no business relationship with any company mentioned. The author has not received compensation for writing this article, other than from FXStreet. FXStreet and the author do not provide personalized recommendations. The author makes no representations as to the accuracy, completeness, or suitability of this information. FXStreet and the author will not be liable for any errors, omissions or any losses, injuries or damages arising from this information and its display or use. Errors and omissions excepted. The author and FXStreet are not registered investment advisors and nothing in this article is intended…
Paylaş
BitcoinEthereumNews2025/09/18 02:38