October 15, 2024 | Author: Nicholas Friedman
While we all grew up with AI movies like The Terminator and I, Robot, the future has finally arrived. As Morpheus said in The Matrix, “Welcome to the desert of the real.” Today, I welcome you to the era where artificial intelligence (AI) isn’t just a futuristic concept: it’s sitting at the table, eating your lunch, and probably taking notes on how you could’ve done it better… just like my children. But don’t worry, AI isn’t here to replace you (yet); it needs time to grow up. And hopefully, it’s here to enhance the way we do business. The catch? If we’re going to let AI in, we’ve got to make sure it behaves—and that’s where AI Governance comes in.
Think of AI governance like babysitting a really, really smart kid. Sure, it’s impressive, but if you leave it alone too long, it might color on the walls, make some questionable decisions, or even sneak off with sensitive information. So how do we prevent AI from running amok while also reaping its massive benefits? Spoiler alert: We create a governance program.
Here’s your practical guide to taming the AI toddler. If you can’t tell, I have two toddlers at home. Recently, my wife and I took them on a cross-country trip for my sister’s wedding, and it might as well have been a safari… which is exactly what the AI landscape feels like right now. Picture the AI toddler on a safari…
Imagine the AI landscape as a high-tech jungle: lush with innovation, teeming with potential, but also home to lurking dangers like biased algorithms and privacy pitfalls. Sure, it sounds adventurous, but should you really be taking toddlers on a safari? You might end up in a swamp instead of spotting majestic lions.
Fear not! This guide will take you through the essentials of navigating the AI safari with a governance program that keeps your business safe, ethical, and ahead of the competition, while keeping your AI toddlers out of trouble. Pro Tip: Bring your mother-in-law along, aka your AI Governance program.
Ready? Let’s walk through this, action by action, adding some practical advice as we go.
Section 1: Establishing an AI Governance Program First—Yes, Before Anything Else
Quick reality check on what you adventurous AI parents/readers are likely dealing with… your CEO wants AI. Now. Yesterday, actually (not too dissimilar from your toddlers). But here’s the thing: AI can be a handful if not managed properly. We’re talking privacy violations, ethical missteps, and potential lawsuits (lions and tigers and bears, oh my!). Fun, right? That’s why you need a plan. And that plan is called AI Governance.
Side note: When we were kids, do you remember classmates coming in with casts that we all signed? Misuse AI, and your company will be the one wearing the cast on the front page of the news.
Without a proper framework, letting AI loose is like giving a toddler the keys to your safari jeep. Sure, they’ll move fast—but don’t expect them to follow the rules of the trail. With governance, you set clear boundaries for AI, ensuring it behaves like a responsible family member instead of a rogue toddler who’s had too much sugar. Launching an AI Governance Program is also crucial for steering your organization through the growing ecosystem of AI vendors.
Before you roll out any shiny AI-powered safari vehicles, you need a governance framework. Think of it as the fortified safari jeep to keep the AI toddlers safely inside and on track, rather than running toward chaos. By integrating AI Governance into your Enterprise Risk Management (ERM) framework, you ensure that your AI endeavors are both innovative and compliant, avoiding the quicksand of legal and ethical risks.
Section 2: The Key Ingredients of AI Governance
Now that you’ve bought into governance, what does it actually look like?
2.1 What is AI Governance? – The Rules for AI Safari Safety
AI Governance is the set of rules and practices that make sure your AI isn’t out there making decisions like a sleep-deprived toddler. It ensures AI operates responsibly, ethically, and within the law. Think of it as AI’s legal guardian or your toddler’s grandparent, making sure it doesn’t overstep its boundaries (no ice cream and cookies before dinner)—especially in sectors like healthcare, finance, and law.
2.2 Why AI Governance is Necessary – Avoiding AI’s Wild Side
Without governance, AI is like a toddler off their routine sleep schedule (which is exactly what happened when I was traveling with my toddlers). One minute, it’s providing useful insights, the next it’s accidentally making biased decisions or sharing too much data. Governance ensures that AI plays by the rules, doesn’t cause chaos, and stays in line with compliance standards.
2.3 Key Questions to Keep in Mind – Your AI Checklist
Before you unleash AI into your organization, you need answers:
- What’s our plan?
- Who’s in charge of AI’s behavior?
- Are we following the rules?
- Is our AI making decisions we can stand behind? (Think: No robot rebellions).
- We have a larger set of AI Governance questions that we share with clients on actual AI Governance engagements.
2.4: AI Policies, Controls, and Compliance: The AI Rulebook
When it comes to AI, policies and controls are like the rules of the game. Without them, AI could be out there making decisions like toddlers choosing movies before bedtime. It could be Little Bear or Jurassic Park. To keep everything running smoothly and legally, every organization needs strong AI policies, controls, and compliance frameworks.
- AI Governance Policies: This is your AI rearing plan—detailing the ethical and compliant use of AI. Whether it’s how AI handles sensitive data or how it makes decisions, having a clear set of policies is essential. Think of it as the “terms and conditions” for your AI, making sure it knows what’s allowed and what’s not.
- Controls: Just like the safari jeep has brakes and steering, AI needs controls. These guide AI’s actions and ensure compliance with internal and external rules. Whether it’s data usage, decision-making, or security, these controls make sure AI doesn’t go off the rails.
- Compliance: Staying compliant isn’t just a nice-to-have; it’s a must-have. With AI regulation changing rapidly, your policies and controls need to align with the latest legal standards. From the EU AI Act 2021/0106(COD), the NIST AI Risk Management Framework, Canada’s AIDA, ISO/IEC 23894, and the Algorithmic Accountability Act, to the U.S. Executive Order on Promoting the Use of Trustworthy AI in Government, keeping up with AI regulations and acceptable use policies will be key to avoiding fines and legal headaches. (A rough “policy as code” sketch follows this list.)
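To make the rulebook a little more concrete, here’s a minimal, purely illustrative sketch of what “policy as code” could look like: controls written as structured data so coverage gaps can be checked automatically. The control IDs, statements, framework mappings, and the system name are all hypothetical, and this is just one way to approach it, not a prescribed implementation.

```python
# Illustrative only: a toy "policy as code" structure for AI controls.
# The control IDs, statements, framework mappings, and systems are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str          # internal identifier
    statement: str           # what the control requires
    frameworks: list[str]    # external frameworks it maps to

@dataclass
class AISystem:
    name: str
    implemented_controls: set[str] = field(default_factory=set)

# A few made-up controls, mapped to real framework names for illustration.
CONTROLS = [
    Control("AI-POL-01", "Document intended and prohibited uses of the model",
            ["EU AI Act", "NIST AI RMF"]),
    Control("AI-POL-02", "Log and review training data sources for consent and bias",
            ["NIST AI RMF", "ISO/IEC 23894"]),
    Control("AI-POL-03", "Require human review of high-impact automated decisions",
            ["EU AI Act"]),
]

def gaps(system: AISystem) -> list[Control]:
    """Return the controls this AI system has not yet implemented."""
    return [c for c in CONTROLS if c.control_id not in system.implemented_controls]

if __name__ == "__main__":
    chatbot = AISystem("customer-support-bot", {"AI-POL-01"})
    for control in gaps(chatbot):
        print(f"{chatbot.name} is missing {control.control_id}: {control.statement}")
```

The takeaway isn’t the code itself; it’s the discipline. When controls live as data, you can ask “which AI systems are missing which controls?” any day of the week instead of discovering the answer at audit time.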
2.5: AI Regulatory Change Management: Staying Ahead of the AI Lawmakers
Regulations are popping up faster than your toddler’s taste buds change. From AI ethics to data privacy and acceptable-use regulations, staying compliant with the latest rules is a full-time job. That’s why AI regulatory change management is so critical. We are seeing these added by the FTC, OSTP, FCC, DoD, NERC, EEOC, CFPB, FDA, HUD, NHTSA, DOE, OCC, EPA, FAA, FINRA… the list goes on and on.
- The Importance of Regulatory Content and Intelligence: You can’t follow the rules if you don’t know them. Keeping up with regulatory content is like having a good news feed—it’s essential for knowing when and how the rules have changed. From new AI laws to updated compliance frameworks, staying in the know helps you avoid nasty surprises.
- We offer an AI policy pack curated and maintained by UCF (Unified Compliance Framework) to integrate into your ServiceNow AI Governance solution.
Think of regulatory content and intelligence as your AI compliance GPS. Without it, you’d be lost in a jungle of legal jargon, unsure if you’re still on the right path.
- Change Management: AI regulatory changes don’t come with an off switch, so you’ll need a process for constantly updating your governance framework (a rough tracking sketch follows this list). This includes:
- Monitoring regulatory bodies for new or upcoming changes.
- Assessing how these changes affect your AI policies and controls.
- Implementing the necessary updates without disrupting operations.
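As promised, here’s a small, hypothetical sketch of how that change pipeline could be tracked. The regulator, the change summary, the dates, and the control IDs are invented for illustration; think of it as a thinking aid, not a compliance tool.

```python
# Illustrative only: a toy tracker for regulatory changes and the controls they touch.
# The regulator, summaries, dates, and control IDs below are made up.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Status(Enum):
    MONITORING = "monitoring"      # spotted, not yet analyzed
    ASSESSING = "assessing"        # impact analysis in progress
    IMPLEMENTING = "implementing"  # policy/control updates underway
    CLOSED = "closed"              # governance framework updated

@dataclass
class RegulatoryChange:
    source: str                  # which body issued it
    summary: str
    effective: date
    impacted_controls: list[str]
    status: Status = Status.MONITORING

def overdue(changes: list[RegulatoryChange], today: date) -> list[RegulatoryChange]:
    """Flag changes that take effect soon but are not yet implemented."""
    return [c for c in changes
            if c.status in (Status.MONITORING, Status.ASSESSING)
            and (c.effective - today).days <= 90]

if __name__ == "__main__":
    pipeline = [
        RegulatoryChange("EU", "New transparency duties for general-purpose AI",
                         date(2025, 8, 1), ["AI-POL-01", "AI-POL-03"]),
    ]
    for change in overdue(pipeline, date(2025, 6, 1)):
        print(f"Act now: '{change.summary}' hits controls {change.impacted_controls}")
```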
2.6: Identity and Access Management for AI Systems
Not all users are human—especially when AI is involved. Think of identity and access management (IAM) as the nursery and schoolteachers of your organization: only certain people—and non-human users like bots and large language models (LLMs)—get a hall pass.
You’ve got to make sure that humans stay in control of AI; humans are the parents of the AI toddlers.
- Humans and AI need the correct access levels: Give AI too much access, and it could raid your fridge or worse—your customer data. When my kids learned to open the fridge, we added a child lock.
- Periodic reviews: Double-check those guest lists. No uninvited bots, please. If you skip this, your AI toddlers will bring home as many germs as your kids do from preschool. (A rough IAM sketch follows this list.)
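Here’s a rough, assumption-laden sketch of what least-privilege roles and a periodic access review might look like for a mix of human and non-human identities. The role names, permissions, accounts, and the 90-day review window are all made up for the example.

```python
# Illustrative only: least-privilege roles for human and non-human identities,
# plus a toy periodic access review. Names, roles, and dates are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

ROLE_PERMISSIONS = {
    "model-consumer": {"invoke_model"},
    "model-operator": {"invoke_model", "view_logs", "deploy_model"},
    "data-steward":   {"view_training_data", "approve_dataset"},
}

@dataclass
class Identity:
    name: str
    is_human: bool
    role: str
    last_reviewed: date

def can(identity: Identity, action: str) -> bool:
    """Least privilege: an identity may do only what its single role allows."""
    return action in ROLE_PERMISSIONS.get(identity.role, set())

def needs_review(identities: list[Identity], today: date,
                 max_age_days: int = 90) -> list[Identity]:
    """Periodic review: flag anyone (human or bot) not re-certified recently."""
    return [i for i in identities
            if today - i.last_reviewed > timedelta(days=max_age_days)]

if __name__ == "__main__":
    accounts = [
        Identity("jane.doe", True, "model-operator", date(2024, 9, 1)),
        Identity("support-llm-bot", False, "model-consumer", date(2024, 3, 15)),
    ]
    print(can(accounts[1], "deploy_model"))  # False: the bot cannot deploy itself
    for stale in needs_review(accounts, date(2024, 10, 15)):
        print(f"Re-certify access for: {stale.name}")
```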
2.7: Risk Assessments for AI Assets (Bots, LLMs, etc.) – Scouting for AI Weaknesses
AI systems are assets, but they come with risks—kind of like giving your toddler free rein at Thanksgiving dinner. You want to make sure your bots and LLMs don’t cause more problems than they solve. This involves:
- Identifying risks: Are your AI tools prone to bad behavior like biased decisions? (My toddlers? Never!)
- Risk mitigation: Put controls in place to avoid AI going rogue at the worst possible time (see the sketch after this list).
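A simple likelihood-times-impact score is one common way to sort the toddlers from the tantrums. The sketch below is illustrative only; the assets, scores, and the treatment threshold are invented, and your own risk methodology may look quite different.

```python
# Illustrative only: a toy likelihood-times-impact risk score for AI assets.
# The assets, scores, and threshold are invented for the example.
from dataclasses import dataclass

@dataclass
class AIRisk:
    asset: str        # the bot, LLM, or model in question
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def prioritize(risks: list[AIRisk], treat_threshold: int = 12) -> list[AIRisk]:
    """Sort risks by score and keep the ones that need a mitigation plan."""
    return sorted((r for r in risks if r.score >= treat_threshold),
                  key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    register = [
        AIRisk("resume-screening-llm", "Biased shortlisting of candidates", 4, 5),
        AIRisk("internal-faq-bot", "Occasionally outdated answers", 3, 2),
    ]
    for risk in prioritize(register):
        print(f"Mitigate: {risk.asset} (score {risk.score}) - {risk.description}")
```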
2.8: Vulnerability Scanning and Patch Management for AI Systems – Regular Health Checks for Your AI Toddlers
If AI were a toddler, you wouldn’t let them walk around forever without checking their diaper, right? Vulnerability scanning and patch management are essential to keeping your AI running smoothly and securely:
- Regular scans: Make sure no one’s hacked into your AI system or planted bad data.
- Threat intel: Stay ahead of potential threats by monitoring trusted sources.
And don’t forget those patches—just like your kid’s school shots, AI needs updates.
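For flavor, here’s a toy sketch of what a recurring health check might look like: compare your AI stack’s components against an internal advisory list and flag anything that hasn’t been scanned recently. The component names, versions, advisory entries, and the 30-day scan interval are all hypothetical.

```python
# Illustrative only: a toy check that AI system components are patched and
# recently scanned. Component names, versions, and dates are made up.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Component:
    name: str
    version: str
    last_scanned: date

# Hypothetical internal advisory list: versions your threat-intel process flagged.
KNOWN_BAD = {("vector-db", "1.2.0"), ("llm-gateway", "0.9.1")}

def findings(components: list[Component], today: date, scan_interval_days: int = 30):
    """Yield human-readable findings: vulnerable versions and overdue scans."""
    for c in components:
        if (c.name, c.version) in KNOWN_BAD:
            yield f"{c.name} {c.version} is on the advisory list: patch it"
        if today - c.last_scanned > timedelta(days=scan_interval_days):
            yield f"{c.name} has not been scanned since {c.last_scanned}: rescan"

if __name__ == "__main__":
    stack = [
        Component("llm-gateway", "0.9.1", date(2024, 10, 1)),
        Component("vector-db", "1.3.0", date(2024, 8, 20)),
    ]
    for finding in findings(stack, date(2024, 10, 15)):
        print(finding)
```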
2.9: Security Incident Management for AI Systems – AI Emergency Response Team
Let’s face it: No matter how many precautions you take, things can still go wrong. That’s why you need an AI-specific security incident management plan. Think of it as the fire drill for your AI system:
- Detect issues early: Like smoke detectors, your AI tools should flag problems before they escalate. Your kids have never started a fire, right?
- Have a response plan: You wouldn’t face a fire without an extinguisher. The same goes for AI security breaches (a rough smoke-detector sketch follows).
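Here’s a minimal sketch of one such smoke detector: watch a simple health signal (an error rate, in this hypothetical example) and open an incident when it crosses a threshold. The service name and thresholds are invented; a real program would also page the response team and kick off the playbook.

```python
# Illustrative only: a toy "smoke detector" that opens an incident when an AI
# system's error rate spikes. The service name and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Incident:
    opened_at: datetime
    system: str
    summary: str
    severity: str

def check_error_rate(system: str, errors: int, requests: int,
                     threshold: float = 0.05) -> Optional[Incident]:
    """Open an incident if the observed error rate exceeds the threshold."""
    if requests == 0:
        return None
    rate = errors / requests
    if rate > threshold:
        severity = "high" if rate > 2 * threshold else "medium"
        return Incident(datetime.now(), system,
                        f"Error rate {rate:.1%} exceeds the {threshold:.0%} threshold",
                        severity)
    return None

if __name__ == "__main__":
    incident = check_error_rate("claims-triage-llm", errors=42, requests=500)
    if incident:
        # In a real program this is where you would page the response team.
        print(f"[{incident.severity}] {incident.system}: {incident.summary}")
```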
Section 3: Managing Your AI Vendors – Choosing Your AI Babysitters Wisely (Because Outsourcing Doesn’t Mean No Oversight)
Just because you’re using someone else’s AI doesn’t mean you get a free pass on governance. Your AI vendors are like babysitters—you don’t just hand them the baby and walk out the door. You’ve got to do your due diligence: Are they CPR certified? Do they come with references? Do any of your friends use them to watch their kids?
- Assess the babysitter’s credentials: Make sure the vendor knows what they’re doing and isn’t prone to ethical slip-ups.
- Regular check-ins: Don’t just assume everything’s fine. Keep tabs on how the vendor’s AI is performing.
Remember, if things go sideways, it’s still your house that’s a mess.
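If it helps, here’s a toy due-diligence scorecard for sizing up an AI babysitter. The questions, weights, and pass mark are purely illustrative; your vendor risk team will have its own list.

```python
# Illustrative only: a toy due-diligence scorecard for AI vendors.
# Questions, weights, and the pass mark are invented for the example.
VENDOR_QUESTIONS = {
    "Has a documented responsible-AI / AI ethics policy": 3,
    "Discloses training data sources on request": 2,
    "Supports audit logs for model decisions": 3,
    "Holds an independent security certification": 2,
}

def score_vendor(answers: dict[str, bool]) -> tuple[int, int]:
    """Return (earned, possible) weighted points for a vendor's answers."""
    earned = sum(w for q, w in VENDOR_QUESTIONS.items() if answers.get(q))
    return earned, sum(VENDOR_QUESTIONS.values())

if __name__ == "__main__":
    answers = {
        "Has a documented responsible-AI / AI ethics policy": True,
        "Discloses training data sources on request": False,
        "Supports audit logs for model decisions": True,
        "Holds an independent security certification": True,
    }
    earned, possible = score_vendor(answers)
    verdict = ("proceed with standard monitoring" if earned / possible >= 0.7
               else "escalate for deeper review")
    print(f"Vendor score: {earned}/{possible} -> {verdict}")
```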
Section 4: Business Continuity for AI Systems – Your AI Backup Plan
What happens if your AI suddenly crashes? Or worse—turns on you? (Just kidding. Sort of.) You need a business continuity plan that covers how to:
- Fall back on non-AI systems: When AI fails, you may have to dust off some good old-fashioned manual processes, especially in critical infrastructure. If your kids get sick, you can’t take them to preschool, and if your in-laws are sick, you have to “work from home.” 😉
- Disaster recovery: Quickly restore your AI without turning your whole operation upside down. We all know where the closest hospitals and emergency centers are to our homes. The same principle applies.
Think of it as your AI’s safety net—ready to catch it (and your business) when things go sideways.
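Here’s a small sketch of the classic “try the AI path, fall back to the humans” pattern. The AI call and the manual queue below are stand-ins invented for the example, not real services or APIs.

```python
# Illustrative only: a "fall back to manual" pattern for when an AI service
# is unavailable. The AI call and the manual queue are stand-ins, not real APIs.
import random

def ai_triage(ticket: str) -> str:
    """Stand-in for a call to an AI service that sometimes fails."""
    if random.random() < 0.3:
        raise TimeoutError("AI service unavailable")
    return f"AI-routed: {ticket}"

def manual_triage(ticket: str) -> str:
    """The good old-fashioned process: route to a human queue."""
    return f"Queued for human review: {ticket}"

def triage(ticket: str) -> str:
    """Try the AI path first, and degrade gracefully if it fails."""
    try:
        return ai_triage(ticket)
    except Exception:
        # Record the failure for the disaster-recovery review, then fall back.
        return manual_triage(ticket)

if __name__ == "__main__":
    for t in ["Password reset", "Outage report", "Billing question"]:
        print(triage(t))
```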
Section 5: AI Internal Audit Management: Keeping the AI in Check
If AI is your retirement plan, then internal audits are the regular check-ups that keep your future-billionaire toddler healthy. An AI internal audit management process is crucial for ensuring your AI systems don’t secretly veer off course.
- Audit Programs: Implement regular audits to assess your AI system’s compliance with internal policies and external regulations. These audits ensure that the AI does what it’s supposed to—no cutting corners, no biased decisions, no rogue behavior.
- Audit Scope: Focus on areas like data integrity, model transparency, decision-making accuracy, and compliance with regulatory standards. Audits should dig deep into both the technical and ethical performance of your AI systems.
- Risk-Based Approach: Prioritize audits based on risk. The more critical the AI application (think healthcare, finance, or legal), the more frequently it should be audited. This helps focus resources on the highest impact areas, ensuring that risks are mitigated before they snowball (see the scheduling sketch after this list).
- Post-Audit Action: An audit’s job isn’t done once the report is written. You need a process for acting on audit findings, whether that’s refining policies, tightening security, or retraining AI models. Audits should be part of a continuous improvement loop, keeping your AI system efficient, compliant, and low-risk.
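To illustrate the risk-based scheduling idea, here’s a toy audit calendar where higher-criticality AI systems get audited more often. The tiers, intervals, systems, and dates are invented; set them to match your own risk appetite.

```python
# Illustrative only: a toy risk-based audit calendar. Criticality tiers and
# audit intervals are invented; adjust to your own risk appetite.
from datetime import date, timedelta

# Higher-impact AI applications get audited more often.
AUDIT_INTERVAL_DAYS = {"critical": 90, "high": 180, "moderate": 365}

def next_audit(last_audit: date, criticality: str) -> date:
    """Compute when an AI system is next due, based on its criticality tier."""
    return last_audit + timedelta(days=AUDIT_INTERVAL_DAYS[criticality])

if __name__ == "__main__":
    systems = [
        ("loan-underwriting-model", "critical", date(2024, 7, 1)),
        ("marketing-copy-assistant", "moderate", date(2024, 2, 1)),
    ]
    today = date(2024, 10, 15)
    for name, tier, last in systems:
        due = next_audit(last, tier)
        status = "OVERDUE" if due < today else f"due {due}"
        print(f"{name} ({tier}): {status}")
```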
Think of it as AI’s three-step wellness plan: govern it, watch the laws, and check up on it regularly. Keep your AI in shape, and it’ll do wonders for your organization. Ignore it, and—well, good luck dealing with the chaos! It’s exactly the same with toddlers. If they are too quiet… something is definitely going on that you don’t know about.
Section 6: Execution, Monitoring, and Remediation – Keeping Your AI Ecosystem Healthy
Now that you’ve set everything up, it’s time to stay vigilant. Just like you wouldn’t let a toddler run wild unsupervised, your AI systems need regular monitoring:
- Process execution: Make sure you’re following all the steps outlined: policy, controls, compliance, vendor management, security assessments, risk assessments, IAM, etc.
- Ongoing monitoring: Keep an eye on things. AI likes to act up when you’re not looking. Use a baby monitor to spy, I mean keep an eye on those children.
- Remediation: Fix issues as they come up and keep everything documented. That way, when the AI inspector comes knocking, you’ve got the paper trail ready. (A rough monitoring sketch follows this list.)
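Finally, here’s a hypothetical sketch of ongoing monitoring with a built-in paper trail: check a model’s accuracy against its baseline, and log a remediation entry when it drifts too far. The system names, metrics, and tolerance are made up for the example.

```python
# Illustrative only: a toy monitoring check that flags model drift and records
# the remediation trail. Metrics, thresholds, and system names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MonitoringLog:
    entries: list[str] = field(default_factory=list)

    def record(self, message: str) -> None:
        """Keep a timestamped trail so you can show your work later."""
        self.entries.append(f"{datetime.now().isoformat(timespec='seconds')} {message}")

def check_drift(system: str, baseline_accuracy: float, current_accuracy: float,
                log: MonitoringLog, tolerance: float = 0.05) -> bool:
    """Return True (and log a remediation task) if accuracy drifted too far."""
    drifted = baseline_accuracy - current_accuracy > tolerance
    if drifted:
        log.record(f"{system}: accuracy fell from {baseline_accuracy:.2f} "
                   f"to {current_accuracy:.2f}; remediation ticket opened")
    else:
        log.record(f"{system}: within tolerance")
    return drifted

if __name__ == "__main__":
    audit_trail = MonitoringLog()
    check_drift("fraud-scoring-model", baseline_accuracy=0.91,
                current_accuracy=0.83, log=audit_trail)
    print("\n".join(audit_trail.entries))
```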
Conclusion: Wrangling AI, Without Losing Your Sanity
AI is like the smartest, most unpredictable employee you’ll ever hire. It can drive innovation, efficiency, and profitability—but only if you set clear rules and keep a close eye on it. AI Governance ensures you don’t end up with a rogue system making decisions you can’t explain (or defend). So, grab the reins, build that governance framework, and watch your AI systems flourish—without coloring outside the lines. And remember, no one ever said babysitting the future was easy, but with the right safeguards, it can be a lot more fun.
About Author:
Nicholas Friedman – CEO & Managing Partner, Denver, CO
Nic is an experienced ERM strategist and advisory lead with over 24 years of enterprise experience in information security, risk, and compliance domains. He works with CISOs, CROs, and CCOs to mature and automate IT and OT ERM programs. At Templar Shield, Nic oversees company strategy, partnerships, IP development, and executive client relationships for many of Templar Shield’s key clients across various industries, including energy, utilities, petrochemical, manufacturing, public sector, telco, and banking.