Lawyer Thomas Nietsch: The impact of the AI Act on the hotel industry

Posted by Antonia Quell on June 16, 2025

The EU is getting serious about regulating artificial intelligence: starting in 2025, and with most provisions applying from August 2026, companies that use or develop AI will have to comply with numerous new rules. What does this mean in concrete terms for users and providers? And why should hoteliers start thinking about the topic now?

A robot hand symbolizing AI gently touches a human hand

Who better to shed light on the topic than attorney Thomas Nietsch? In our written interview with him, you’ll learn what the so-called AI Act means for the hotel industry, which challenges hoteliers are likely to face, and how they can prepare in time.

What is the AI Act and who does this new law apply to?

The AI Act, considered the world’s first comprehensive regulation on artificial intelligence, is a flagship EU project that was finalized and adopted under significant time pressure ahead of the 2024 EU elections. As an EU regulation, it applies directly in all EU member states and does not require national implementation.

The goal of the AI Act is to promote trust in AI, while also protecting fundamental rights, safety, and democracy from the potential impact of AI. In principle, the law applies regardless of the area of application, but it follows a risk-based approach: the obligations it imposes depend on the risk level of a given AI system. The entire life cycle of AI is regulated—from development to integration into other systems, distribution, and eventual operation by the end user. Accordingly, both economic and public actors along this value chain are subject to a range of obligations, depending on the specific role they play in the use of AI. The law distinguishes primarily between “providers” who develop AI applications and “deployers” who use third-party applications. It also regulates providers of “general-purpose AI models,” particularly large language models (LLMs), whose possible applications are open-ended and depend on how the deployer uses them.

In addition to outright bans on certain AI uses deemed excessively risky—such as subliminal manipulation, profiling that causes social disadvantage, predicting the likelihood of criminal behavior, or real-time biometric identification in public spaces—the AI Act establishes a layered framework for transparency, safety, and risk prevention to ensure lawful deployment. Naturally, this framework overlaps with other EU and national regulations, particularly those related to data protection, privacy, intellectual property rights, and trade secret protection.

Why should hoteliers be concerned with the AI Act?

AI is now used for many different purposes across almost every business sector, and the hospitality industry is no exception. If a hotel decides to implement, for example, an AI-based booking system or a chatbot for guest communication, it will have obligations under the AI Act—even if the system is purchased as a ready-made solution from a third-party vendor. This applies even more so if a hotel develops or commissions a custom-built AI solution, in which case it is considered a provider rather than merely a deployer.

Even if AI is only used internally—for instance, for staff management—the hotel operator still falls under the scope of the AI Act and must meet its requirements. The Act does not differentiate between internal and external use cases. As with the GDPR, non-compliance with the AI Act can result in heavy fines: up to €35 million or 7% of global annual turnover, whichever is higher, for particularly serious violations, and otherwise up to €15 million or 3% of turnover.

The AI Act’s obligations take effect in stages: the ban on certain prohibited AI practices has been in effect since February 2, 2025. Obligations for general-purpose AI models and the associated penalty provisions follow on August 2, 2025. Most of the remaining provisions apply from August 2, 2026.

What obligations could apply to hoteliers using AI?

The fundamental requirement for all providers and deployers of AI systems is AI literacy: any personnel involved in operating AI must be appropriately trained and understand how the system works and what risks it may entail.

Additional obligations depend on the risk classification of the AI system and whether the hotel is considered a provider or a deployer. Assuming that prohibited uses of AI will not typically occur in normal commercial activity, the first question is whether a system poses a high risk to protected legal interests under the AI Act.

This may be the case if the AI performs safety-critical functions in regulated products like vehicles, machinery, elevators, protective gear, or medical devices. While this may be less relevant for hotels, it’s not entirely out of the question—for instance, in the case of elevators. The high-risk category also covers biometric identification systems, systems used for hiring or managing staff (e.g., evaluation, promotion, or termination), and systems used for credit scoring. Such systems may well be used in hospitality, for instance in HR planning or guest credit assessments. In these cases, the regulatory requirements are high.

The provider—the developer placing the system on the market—must implement a risk and quality management system, ensure high-quality training and validation data, produce technical documentation, and design the system in a way that allows usage logging and safe operation by the deployer. The system must support human oversight and be safeguarded against faulty outputs, failure, and cyberattacks. High-risk systems must undergo EU conformity assessment by an authorized body, carry CE marking, and be registered in an EU-wide database.

Deployers of high-risk systems are also subject to obligations: they must ensure the AI system is operated properly and monitored by trained staff, that input data is accurate and appropriate for the AI’s purpose, and that a fundamental rights impact assessment is carried out for specific use cases.

If a system does not fall into the high-risk category, only lighter obligations apply. Mainly, providers or deployers must inform users that they are interacting with AI or that the content they are seeing was generated by AI.

For general-purpose AI models or LLMs integrated into larger systems, additional obligations apply regardless of the risk level of the individual system. Providers must produce adequate technical documentation and share it with customers integrating the model into their systems. They must also have a strategy for complying with copyright law. If the model poses systemic risks because of its scale and capabilities, further measures to mitigate those risks are required.

How can hoteliers determine whether their AI systems are compliant?

In any case, hoteliers—even if they are only deploying a third-party AI solution—must fulfill their own obligations under the AI Act. At a minimum, they need to inform users that content has been AI-generated. To determine their exact responsibilities, they must assess whether a system falls into the high-risk category. If it does, significantly more extensive compliance measures will be required, which will also entail additional costs.

Hoteliers purchasing an AI system from a vendor should also ensure that the vendor complies with the relevant legal requirements. While this isn’t explicitly mandated by the law, using a non-compliant AI system could result in civil liability if the system causes harm or malfunctions. It is therefore strongly recommended that deployers thoroughly review the documentation provided by the vendor. If high-risk systems are used, conformity assessments, CE markings, and database registrations should be checked as well.

If a hotelier integrates an LLM into a proprietary application, they must ensure that the model provider meets their obligations, particularly in the case of systemic risk. The documentation prepared by the provider should be reviewed carefully.

Given the complexity of these obligations—especially in companies where AI is not a core activity—it may be difficult to keep track. It would be helpful to develop trust marks or similar mechanisms to help deployers determine whether a purchased solution meets legal requirements.

The AI Act provides for such trust mechanisms through the development of codes of conduct, which allow AI providers and deployers to certify their systems and demonstrate compliance. These codes must first be created, monitored, and approved by regulatory authorities. As such, no quality certification will be available immediately after the AI Act comes into force. Until then, it will be up to deployers to work closely with their providers to ensure lawful use.

What steps should businesses take now to be prepared for 2026?

Every company—whether in hospitality or not—should begin by taking stock of all their existing applications and systems and determining whether they contain AI components. Then, they must assess the risk category of each system. Where necessary, companies should work with their service providers to clarify who is responsible for what and how compliance will be ensured. Delineating responsibilities can be complex and should be addressed cooperatively.

In addition to the compliance obligations imposed by the AI Act, hoteliers must also consider other legal frameworks when using AI. For instance, they must ensure that user input data is processed in compliance with the GDPR. This often involves determining where the data is stored—locally or in the cloud—and whether it is reused for future AI training.

It can also be problematic for a hotelier if it turns out that the data used to train the model was obtained in violation of copyright or data protection laws. This risk applies not only to proprietary systems developed in-house, but also to third-party models or LLMs. To reduce this risk, contractual agreements with the provider should be considered to secure indemnification in the event of liability.


Dr Thomas Nietsch is a lawyer and partner in the Berlin office of the international law firm K&L Gates LLP. He advises a broad portfolio of clients, from start-ups to international corporations, on issues of compliance with digital regulation and the market launch of new digital business models. He regularly focuses on issues of data law, AI and platform regulation, IT security, (open source) software licensing and contract drafting in the B2B and B2C sectors. Thomas Nietsch is also co-head of the working group for third country transfers and a member of the complaints committee for the EU Cloud Code of Conduct, the first EU-wide code of conduct for data protection in the cloud industry.
