By: Ethan Chen | May 5, 2026
Artificial intelligence has quickly moved from a novelty to a core business tool. Companies are using it for hiring, customer interaction, internal workflows, content generation, and decision support.
In 2026, the question is no longer whether AI can create efficiency, but whether the business can explain where AI is being used, what risks it creates, and who is responsible when the tool produces a biased, misleading, or poorly governed result. The legal exposure is no longer theoretical; it is already being framed through discrimination law, consumer protection law, privacy obligations, intellectual property rules, and contract risk.
That shift is easiest to see in Europe. The EU AI Act has been rolled out in phases, establishing a structured regulatory framework for AI. Prohibited practices and AI literacy obligations have applied since February 2, 2025; rules for general-purpose AI models became applicable on August 2, 2025; and most of the remaining rules, including enforcement at national and EU levels, are scheduled to apply on August 2, 2026. The European Commission's current guidance also makes clear that businesses must ensure the people using AI on their behalf have a level of knowledge and training appropriate to the system and its risks.
The United States is taking a different path, but not an easier one. Rather than one uniform AI code, businesses are navigating existing agency enforcement and emerging state requirements. The Federal Trade Commission (FTC) has repeatedly emphasized that there is no AI exemption from existing law, and its enforcement activity reflects that position. The agency's public AI docket includes matters involving deceptive AI marketing, supposed "AI lawyer" services, AI content-detection accuracy claims, and AI-driven business-opportunity schemes. Meanwhile, the Equal Employment Opportunity Commission (EEOC) has expressly stated that federal employment discrimination laws apply to AI and other new technologies just as they apply to other employment practices.
State law is beginning to translate those principles into operational obligations. Colorado's AI law, as amended in 2025, is currently scheduled to take effect on June 30, 2026. It requires developers and deployers of certain high-risk systems to use reasonable care to protect consumers from algorithmic discrimination. The statute also contemplates risk-management programs, impact assessments, public-facing summaries, notice when AI is a substantial factor in a consequential decision, opportunities to correct data, and, where technically feasible, human review of adverse decisions. Even for companies with limited Colorado exposure, the law is an important signal: lawmakers are increasingly focusing not just on what AI can do, but on whether a company can document how it governs that use.
For businesses, that means liability is most likely to arise in familiar places. Employment is one obvious pressure point. If a company uses AI to screen applicants, rank candidates, evaluate performance, or influence promotion decisions, the underlying anti-discrimination rules still apply. A system that cannot be explained, tested, or meaningfully reviewed can become a litigation problem very quickly. The same is true for customer-facing uses. If a chatbot, recommendation engine, or automated sales tool overpromises, obscures that a consumer is interacting with AI, or becomes a substantial factor in a consequential decision, the issue is no longer merely technical; it becomes a consumer protection, disclosure, and governance issue.
Intellectual property and data governance create a second fault line. The U.S. Copyright Office has been actively studying AI and copyright, including digital replicas, the copyrightability of AI-assisted outputs, and the use of copyrighted materials in generative AI training. At the same time, regulators have warned that AI providers can face legal exposure if they misrepresent how customer data is collected, retained, or used. Companies adopting third-party tools should therefore ask practical questions before deployment: What data is being entered into the system? Can that data be used to train the vendor's model? Who owns the outputs? What indemnities exist? What happens to confidential or competitively sensitive information once it leaves the business?
The most common mistake is to treat AI risk as a technology issue alone. In practice, it is a governance issue. If AI is influencing hiring, consumer interactions, pricing, eligibility, compliance, or other high-impact functions, the matter belongs in enterprise risk management, not just in the IT department. That means clear internal ownership, escalation paths, human oversight, and documentation that matches reality. A glossy AI policy is not enough if day-to-day practices are inconsistent with it. The better approach is to treat AI the same way sophisticated companies treat other meaningful risks: identify the use case, assess the consequences, assign accountability, and preserve evidence of review.
These new laws and regulations have prompted many businesses to ask a practical question: what should companies do now?
The current regulatory direction in both the United States and Europe makes one point clear: businesses are expected to take AI governance seriously at an operational level, not merely as a written policy.
AI can deliver real value. But in 2026, value without governance is an increasingly fragile strategy. The companies in the strongest position will not necessarily be the ones using AI the fastest. They will be the ones that can show, clearly and credibly, where AI is being used, what controls are in place, and who remains accountable when a challenged decision must be explained to a regulator, counterparty, employee, or court. A careful legal review now is usually far less expensive than defending a preventable dispute later. Companies navigating these evolving requirements should consider consulting experienced counsel, including the team at Chugh, LLP, to assess risk, exposure, and governance practices.
© 2026 Chugh LLP Affiliate Network. All Rights Reserved