Artificial intelligence is transforming advertising by enabling hyper-personalised campaigns, real-time adjustments, and automated content generation. Yet this progress raises significant legal and ethical questions. Legislators in the European Union, the United States, and the United Kingdom are trying to keep pace, but regulation remains fragmented. The boundaries between permissible innovation and prohibited manipulation are not always clear, leaving businesses uncertain about how far they can push AI-driven advertising strategies.
The European Union has taken a proactive stance by adopting the Artificial Intelligence Act in 2024. Under this framework, AI systems used for advertising must comply with transparency requirements, ensuring that consumers are aware when content is AI-generated. Practices such as subliminal manipulation or targeting vulnerable groups with deceptive techniques are explicitly banned. For example, an AI-driven campaign designed to exploit children’s lack of judgement in online games would fall into the category of prohibited practices.
At the same time, the EU permits AI to be used in recommendation systems, dynamic pricing, and programmatic advertising, provided the systems meet strict standards for fairness and accountability. However, grey zones remain. The use of AI for “emotion recognition” to tailor ads is under scrutiny: some argue that it constitutes manipulation, while others see it as advanced consumer research. This lack of clarity forces advertisers to tread carefully.
Importantly, data protection laws such as the General Data Protection Regulation (GDPR) intersect with AI advertising. Any campaign relying on profiling must demonstrate a lawful basis for processing and avoid excessive data collection. This means that compliance is not only about the AI tool itself but also about how it interacts with personal data.
Companies operating in the EU must navigate a regulatory environment where penalties for violations can reach millions of euros. For instance, a retailer deploying AI-generated product descriptions without disclosing their automated origin risks facing enforcement action. At the same time, the grey areas around creative AI use—such as using generative models for storytelling in advertising—leave room for interpretation. Businesses often rely on legal counsel to assess whether their campaigns meet both the letter and spirit of EU law.
There is also the risk of cross-border conflict. An advertising strategy considered acceptable in one EU member state may trigger stricter enforcement in another. This creates uncertainty for multinational brands, which must constantly adapt campaigns to local regulatory expectations. AI developers face additional compliance costs, particularly for transparency documentation, to ensure their tools meet pan-European expectations.
Despite challenges, the EU continues to encourage responsible innovation. Public consultations show that regulators recognise the need to balance consumer protection with economic competitiveness, meaning the legal environment will likely evolve further in the coming years.
In contrast to the EU, the United States has not enacted a comprehensive AI law. Instead, the Federal Trade Commission (FTC) applies existing advertising rules to AI-generated campaigns. The FTC has made clear that deceptive or misleading claims are illegal regardless of whether they are created by humans or machines. For example, an AI tool producing fake customer reviews would directly violate truth-in-advertising laws.
However, beyond clear-cut cases of fraud, the U.S. regulatory approach leans heavily on industry self-regulation. Major tech companies have introduced voluntary codes of conduct for AI advertising, such as labelling AI-generated content or restricting the use of deepfake technology in political campaigns. Yet, these measures are unevenly applied and leave room for companies to experiment with practices that might be considered questionable elsewhere.
One particularly grey area in the U.S. involves behavioural targeting using AI. While tracking consumer behaviour for personalised ads is legal, the use of predictive algorithms that infer sensitive characteristics—such as health conditions or political views—raises ethical and legal concerns. Without federal legislation, enforcement often falls to state-level privacy laws, such as the California Consumer Privacy Act (CCPA).
American brands have already tested the limits of AI advertising. In 2023, a beverage company used generative AI to create thousands of personalised video ads based on local sports results. While legal, this approach sparked debate about consumer consent and the psychological impact of hyper-targeting. Similarly, political campaigns using AI-generated avatars to address voters individually have prompted calls for stricter transparency rules at the federal level.
Retailers have also experimented with chatbots that recommend products and simulate personalised shopping assistants. These tools, while innovative, sometimes cross into manipulative design, nudging customers towards higher spending without clear disclosure. Regulators have not yet set boundaries for such practices, so enforcement is likely to remain inconsistent.
As discussions continue, some U.S. states are considering AI-specific ad transparency laws. For example, the New York and California legislatures are reviewing bills that would require labelling of AI-generated political content. Whether these state efforts will eventually lead to federal alignment remains uncertain.
The UK, following Brexit, has sought a “pro-innovation” approach to AI regulation while maintaining high standards of consumer protection. The Advertising Standards Authority (ASA) and the Information Commissioner’s Office (ICO) share responsibility for overseeing AI-driven marketing. The ASA requires that AI-generated ads comply with truthfulness and non-deception principles, while the ICO enforces data protection compliance under the UK GDPR.
A notable feature of the UK’s approach is its reliance on case-by-case assessments rather than blanket prohibitions. For example, an AI chatbot used to promote financial services must ensure that it does not mislead consumers. If the chatbot implies guaranteed returns without proper disclaimers, the ASA could impose sanctions. At the same time, innovative uses of AI, such as interactive storytelling campaigns, are welcomed as long as they remain transparent.
However, the UK also faces uncertainties. The government has not yet defined clear limits on the use of biometric or emotional data by AI systems in advertising. As with the EU, this creates a regulatory grey zone in which advertisers must balance creativity against potential accusations of manipulation. The Digital Markets, Competition and Consumers Act, which received Royal Assent in 2024 and is being brought into force in stages, is expected to provide further guidance, but as of 2025 the situation remains fluid.
British retailers and media companies have begun experimenting with AI to personalise advertising in streaming platforms and online shopping. One well-known case involved an AI system that adjusted ad placement in real time based on facial expressions captured by smart TVs. While the project was suspended due to privacy concerns, it demonstrated both the potential and the risks of AI-driven campaigns. Another case saw a cosmetics brand deploy AI chatbots to recommend products; the ASA ruled the practice acceptable, provided it was clear that the advice came from an automated system.
Smaller companies are also exploring AI, particularly in influencer marketing. Tools that generate synthetic influencers or virtual brand ambassadors are becoming popular. Yet, the ASA has emphasised that these characters must be clearly disclosed as artificial entities. Lack of transparency could trigger penalties for misleading consumers.
As the UK moves towards defining its own regulatory model, businesses are encouraged to follow principles of fairness, honesty, and consumer awareness. The expectation is not simply legal compliance but active responsibility in shaping trust between brands and the public.