
A Super Bowl Blitz of AI Ads


By J.R. Maddox and Mark S. Landauer

Can you get the benefits while managing the risks?

Super Bowl ads this year were overwhelmingly focused on Artificial Intelligence (“AI”) tools promising efficiency, creativity, and a competitive edge.  So, what’s next?  As a business leader, how do you first identify, then weigh, the risks and rewards?  Which tools are best for your business?  How do you deploy them?

Generative AI tools like ChatGPT are already prolific in day-to-day business operations, and most of us are familiar with them.  Then came Agentic AI: systems that can plan, take multi-step actions, and interact with other AI tools or data sources, all with limited human oversight.  These new tools can increase efficiency by automating repeatable tasks, process vast amounts of data in minutes and provide actionable outputs, draft documents, create custom images or audio files, and more.

But life is full of little trade-offs.  AI tools can also introduce new or increased risks related to unauthorized actions, data exfiltration, financial loss, breach of confidentiality, data privacy, intellectual property loss or infringement, cybersecurity, bias and discrimination, misinformation, non-compliance, breach of contract, and even safety.  Shadow AI use (employees using AI tools without the business’s knowledge or authorization) can cause severe problems.

Fortunately, an AI governance policy can help you capture the value of AI tools that best fit your business, while reducing legal, operational, financial and ethical risks.  Such a policy should set clear rules for carefully evaluating, selecting, authorizing, deploying, and monitoring AI tools.  Strong guardrails, human-in-the-loop checkpoints, whitelist approval, and auditability are also important parts of the process.

Existing Laws Apply to AI Use

The existing laws that apply to your business also apply to your use of AI.  These include data privacy laws (including industry-specific laws for healthcare, finance, etc., and state comprehensive laws, such as the Minnesota Consumer Data Privacy Act), non-discrimination laws (in employment, credit, lending, housing, insurance, healthcare, etc.), intellectual property, consumer protection, and product safety laws, and even antitrust laws (price fixing and manipulation).

A good example is AI note-taking tools, which you may have seen pop up in business videoconferences.  Such tools are subject to privacy laws (personal, medical, or privileged information), and to wiretapping, consent to record, and copyright laws.  What happens with the recorded information (and who has access to it) can further implicate laws or rights. It’s critical to understand how any new AI tool works in detail, to ensure continued compliance with existing laws.

New AI Laws Pending

Hundreds of state and federal bills have been introduced to regulate AI in the U.S.  Most proposed AI regulation shares certain common elements, many of which originate from AI laws already passed in other countries, such as the EU AI Act.  These common elements preview the governance and risk-management practices likely to be required as new AI laws are enacted.

Primary Elements of AI Governance

Building your AI governance policy means tailoring the primary elements of AI governance to your unique business operations, risk tolerance, and industry needs.  This process can vary dramatically depending on your business.  For example, a business that develops AI tools for others to use differs from one that uses AI tools for certain critical/sensitive functions like healthcare, which differs from using AI tools for basic tasks like drafting emails or pivot tables.

Regardless of the differences, investing in this process up front can help you capture the value of AI tools appropriate for your business, while reducing legal, operational, financial and ethical risks. Below is a select preview of certain foundational AI governance elements.

Governance, Accountability and Recordkeeping

  • Record what the AI tool is used for, who it affects, and where it’s used.
  • Rank AI use cases by impact on customers, employees, operations, compliance, and reputation.
  • Maintain a living register of AI tools, with capabilities, limits, typical failure modes, and appropriate uses.
  • Assign ownership for each AI tool, with clear decision rights and escalation paths.
  • Retain assessment reports, test results, version histories, training data summaries, and key decisions.

Risk Assessment and Impact Testing

  • Conduct pre-deployment risk assessments covering intended use, foreseeable misuse, data, fairness, security, and legal considerations.
  • Establish a baseline and monitor performance over time for accuracy, fairness, bias, and/or discrimination.
  • Determine when and how humans review, override, or audit AI tool outputs.
  • Know what data the AI uses, why it’s needed, and how long you keep it.

Use, Training, Oversight, and Appeal

  • Run limited pilots with clear success metrics before scaling.
  • Gather feedback from end users and stakeholders to refine the tool and the guardrails.
  • Curate approved tools, prohibited inputs (e.g., confidential or personal data), and review requirements.
  • For any fully or partially automated systems, determine what oversight is necessary, and ensure decisions are auditable, appealable, and reversible.
  • Ensure staff know when and how to override such decisions.
  • Emphasize that AI assists judgment; it does not replace it.

Transparency

  • Provide just-in-time disclosure of AI use in concise, plain language.
  • Track and label AI-generated content.

Security

  • Treat Agentic AI like an employee: implement practical access controls, encryption, and environment segregation for training and inference.
  • Log interactions for audit and incident response with retention aligned to your policies.
  • Define what counts as an “AI incident” for your business (e.g., a significant accuracy drop, discriminatory impact, or data leakage) and set remediation protocols.

We Can Help

We help clients create business-forward, tailored AI governance policies aligned with best practices for responsible AI adoption and risk mitigation.  We have multiple attorneys with Artificial Intelligence Governance Professional (AIGP) certifications from the IAPP (the world’s largest information privacy association with a focus on promoting and improving privacy, AI governance and digital responsibility globally).

We master the minutiae while staying outcome-focused, so compliance can unlock scale, not obstruct it.

The purpose of this article is merely to provide general information and should not be construed as legal advice.
