
As Restaurants Roll Out AI, Cyber Risks are on the Menu

In this article for Cybersecurity Law & Strategy, David Cummings discusses steps businesses should consider when navigating AI-related data privacy concerns.

A quick-service restaurant holding company’s plans to use artificial intelligence to enhance customers’ ordering experience is highlighting a new era of cyber liability risks.

Businesses in all industries are finding new ways to harness AI, whether for internal efficiency, in customer-facing processes, or both. As they look to take advantage of AI-enabled systems, organizations should be mindful of the risks inherent in collecting and storing personal data. Data privacy concerns continue to drive lawsuits, and plaintiffs’ attorneys are seeking creative ways to litigate privacy violations alongside rapidly evolving AI technologies, often bringing claims under laws that predate the internet itself.

AI IN RESTAURANTS

In March, Yum! Brands, Inc. announced a partnership with NVIDIA to accelerate the development of AI for Yum! Brands’ 61,000 locations in more than 155 countries, which include KFC, Taco Bell and Pizza Hut restaurants. The company envisions using AI in three areas:

  • Voice automated order-taking AI agents. These agents will deploy conversational AI in drive-throughs and call centers to understand customers’ ordering preferences as well as complex menus.
  • Computer vision-enhanced operations. Real-time analytics and alerts using computer vision are expected to improve drive-through efficiency and labor management in the restaurants’ back-of-house functions.
  • Accelerated restaurant intelligence. AI-driven analytics and agents will assess restaurant performance and generate recommendations for managers based on top-performing locations in the restaurant system.

Yum! Brands’ goals resemble those of many other businesses rolling out AI: enhance operational efficiency and maximize customer engagement. Although these goals appear straightforward, achieving them while complying with a patchwork of privacy laws and regulations is poised to be a far more complex undertaking.

Specifically, these proposed uses of AI involve capturing consumer and employee data to train AI systems to do a number of things, such as remember customers’ prior orders and analyze employee behaviors. This raises numerous questions under the various data privacy laws that regulate the collection of such data. Businesses looking to implement similar technologies must understand their obligations under these laws and how their use of AI might implicate them.

COLLECT AND PROTECT

AI tools generally rely on large language models trained on enormous amounts of data to run effectively. For example, for AI agents to conduct “conversations” that humans perceive as natural, the agents must ingest significant volumes of human speech. To derive the maximum benefit from gathering data to fuel AI and inform analytics, and to minimize the liability risk arising from that data, businesses must balance the business purposes of collecting consumer and employee data against the protection, notification, and consent requirements imposed by potentially applicable privacy laws and regulations.

Illinois, for example, has particularly robust legal guidance surrounding data collection through multiple laws, including the Illinois Biometric Information Privacy Act (BIPA) and the Illinois Genetic Information Privacy Act (GIPA). The California Consumer Privacy Act (CCPA), which took effect in 2020, is a sweeping law that gives residents of the state broad protections, including the rights to know about, opt out of, and limit the sharing of personal data. Similar data privacy laws are in effect in other states, including Connecticut, Massachusetts and Virginia. While privacy laws are on the books in many states, there is no unifying or overarching federal data privacy legislation. This complicates compliance efforts for businesses that serve customers in multiple states or nationwide.

Moreover, hundreds of class-action lawsuits filed in California, Massachusetts and elsewhere rely on often-novel legal theories to challenge businesses’ use of third-party software or technologies that collect and analyze consumer data, including causes of action based on decades-old laws enacted well before AI technologies became a reality.

For example, class actions in California have cited the California Invasion of Privacy Act, a 1967 law that prohibits the interception of communications by eavesdropping or wiretapping without the consent of all parties. California courts are not yet unified on how to apply this law in these contexts, but some have held that it applies (see, e.g., Levings v. Choice Hotels, No. 23STCV28359, 2024 WL 1481189 (Cal. Super. Ct. L.A. Cnty. Apr. 3, 2024)) and that plaintiffs need only allege that a third-party software provider has the capability to intercept information, not that the provider used the information for its own purposes. The California Supreme Court has not yet decided which burden of proof should apply.

On the other hand, the Massachusetts Supreme Judicial Court, ruling for the defendants in Vita v. New England Baptist Hospital (No. SJC-13542, 2024 WL 4558621), found that the Massachusetts Wiretap Act of 1968 did not apply to the collection of data on individuals’ browsing and interactions with websites. The plaintiff had argued that the sharing of browsing information without consent was an interception of her communications with the websites. In its decision, the state’s highest court noted that the 1968 law was ambiguous as to the meaning of “communications” and that it is the job of the legislature, not the courts, to resolve that ambiguity.

In the absence of unifying federal legislation and greater uniformity across states and the courts, businesses deploying AI technologies will need to remain well-versed in the robust legal landscape governing (or potentially governing) data collection and data privacy. This requires an intimate, and constantly evolving, understanding of a patchwork of laws, regulations, and court precedent, particularly for businesses that operate across multiple jurisdictions.

MITIGATING AI DATA PRIVACY RISKS

If AI-related data privacy litigation were a football game, plaintiffs and defendants would still be in the first quarter. The legal landscape is still developing, but the potential for large verdicts in privacy class-action lawsuits is high. In 2022, for example, the first BIPA class action to go to trial resulted in a jury award of $228 million (Richard Rogers v. BNSF Railway Company, Case No. 19-C-3083 (N.D. Ill.)); in 2024, the court approved a $75 million settlement of the case. Additional thermonuclear verdicts of $100 million or more remain a possibility in other, similar matters. Businesses should remain vigilant and attentive to mitigating data privacy risks.

Risk mitigation steps to consider include:

  • Transparency on tracking and AI technologies. Where possible, businesses should notify consumers when data is being collected and when they can expect to interact with AI tools such as chatbots.
  • Terms of use. Technology companies have experience with contractual risk transfer in direct-to-consumer technologies. Their terms of use spell out the provider’s limitations of liability, and often stipulate that use of the product constitutes agreement with the terms, which the company can amend at any time without notice.
  • Consent at download. Restaurants and other businesses that offer applications, such as for mobile ordering, might be able to capture consent when a customer downloads the app.
  • Holistic risk management. Cyber liability insurance is a useful tool to mitigate financial losses arising from data breaches and network failures. Technology errors and omissions liability insurance is a complementary form of coverage that also may respond to AI risks. It’s prudent for organizations to look at their whole insurance and risk management programs in the context of their planned use of AI and other customer-facing technologies.

Insurance companies are not yet besieged with AI claims, but underwriters are asking questions to gauge the emerging risks. For example: What technology does an organization have in its pipeline? How does the organization comply with applicable privacy laws? What is the organization doing to manage third-party liability? Risk managers should understand their organizations’ uses of emerging technologies and, when asked about those technologies during applications and underwriting, look to the appropriate people with knowledge (the CISO, members of the technology support team, etc.) to help provide a current and accurate response.

Whether in the restaurant industry or other sectors, businesses can benefit from discussing their technology strategies and data operations with qualified risk and legal advisors who are monitoring the AI space.

Reproduced with permission. Published July 1, 2025. This article originally appeared in Law Journal Newsletters.