When AI Becomes Brand Safe: Rights & Trust in Fashion
eCommerce
August 27, 2025

Alex Rodukov
CEO & eCom Strategist

I remember a time not long ago when using AI in marketing felt like a daring experiment. Fast forward to today, and AI-generated content is everywhere – writing product descriptions, designing ad images, even coding websites. As the CEO of Fourmeta, I’ve seen firsthand the excitement and the anxiety this shift brings.

On one hand, generative AI promises unbelievable creative speed and scale. On the other hand, it raises tough questions about brand safety, intellectual property rights, and whether customers will trust AI-assisted brands. In this post, I want to share my perspective – with data and real examples – on how we can make AI brand safe for direct-to-consumer (DTC) and mid-market brands alike.


The New Creative Revolution (Without the Fluff)

AI isn’t sci-fi in the boardroom anymore; it’s an everyday marketing tool. By 2024, more than half of businesses were eager to use AI for content creation (55%) and even image generation (53%) as part of their marketing strategy.

Marketers are largely optimistic about this revolution – a whopping 85% of marketers believe generative AI will transform content creation. We’ve already started using tools like ChatGPT for copywriting and Midjourney or DALL-E for visuals in our campaigns. This isn’t about hype; it’s about doing more with less. And for DTC and mid-market brands that often run on lean teams and budgets, AI can be the competitive edge to produce content at the volume consumers expect.

Yet, let’s be clear: embracing AI doesn’t mean abandoning human creativity or good judgment. Take the example of Shapermint, a DTC fashion brand. They built an AI engine named “Altair” to help script influencer videos for TikTok and Instagram. Nine months in, it cut their influencer content production time by around 70%. Seventy percent!

That kind of efficiency gain is gold for a marketing team under pressure. But interestingly, Shapermint still has human creators in the loop to ensure the videos stay authentic and show real products (they deliberately avoided letting AI generate the actual video footage). In other words, smart brands use AI’s speed – but without sacrificing quality or authenticity.

Other brands are experimenting too. Sephora and Mint Mobile tried using ChatGPT to write ad copy in 2023, and even smaller startups like Juliet Wine have used AI to brainstorm social media captions in the brand’s tone.

Allison Luvera, Juliet’s co-founder, said modern brands “need to be content powerhouses” – producing huge volumes of posts, ads, and emails – especially when resources are limited. AI is helping meet that demand. She even tested AI image generators to create product photos on a budget.

The results? Mixed. The AI-generated images looked okay at first glance, but on close inspection they lacked realistic details and polish, so Juliet Wine hasn’t actually used them in marketing yet. This mirrors what I hear from many mid-market brand leaders: AI can accelerate our content, but we won’t put it out there if it’s off-brand or low quality. Efficiency is great, but brand integrity comes first.

Brand Safety Challenges: Why We Can’t “Set and Forget” the AI

For all its power, generative AI can be a double-edged sword for brand safety. The very speed and autonomy that make AI attractive can also produce embarrassing or even harmful content if left unchecked. In marketing, “brand safety” means protecting a brand’s reputation – avoiding things like offensive, off-brand, or factually incorrect content. With AI, those risks are front and center.

One big concern is that AI sometimes “goes rogue” in its output. It can generate text or images that are biased, inaccurate, or out of line with a brand’s values.

In fact, 60% of marketers using generative AI are concerned it could harm their brand’s reputation through bias, plagiarism, or tone misalignment.

These aren’t hypothetical worries – we’ve seen it happen. Think about an AI writing social media posts that accidentally use insensitive language, or an AI image generator producing a photo with subtle stereotypes. Without careful oversight, a well-meaning brand could post something that alienates or offends their audience in seconds.

Even accuracy can be a minefield. ChatGPT-style tools are known to sometimes fabricate “facts” confidently. If a bank’s marketing AI advertises an interest rate that doesn’t exist, or a health brand’s AI-written article gives incorrect medical information, you have a major trust problem.

No wonder more than half of workers (54%) worry that generative AI outputs can be inaccurate, and 59% are concerned about biased outputs. I share their concern – a brand’s content is only as good as it is true and fair.

We’ve also learned that AI can clash with brand values in very public ways. A cautionary tale I often cite is Levi’s attempt to use AI-generated fashion models. The idea was to show more “diverse” body types and skin tones with AI instead of hiring more human models. The backlash was swift and brutal.

Critics called it “lazy” and “problematic” – essentially accusing Levi’s of using “fake diversity” as a cheap shortcut. Some even warned it felt like a digital form of blackface, co-opting the image of minorities instead of actually supporting them. Levi’s had to quickly clarify that AI models wouldn’t replace real models or its commitment to diversity. But the damage was done – it became a lesson that just because AI can do something doesn’t mean it should. Brand trust is hard-earned and easily lost, especially on sensitive issues like inclusivity.

And then there’s the wild west of deepfakes and misinformation. In the last couple of years, we’ve seen ultra-realistic AI-generated images and videos sweep the internet – from a fake image of the Pope in a stylish puffer jacket (which fooled millions briefly) to AI-generated ads that look real.

Consumers are growing wary of what they see online. Over 60% of consumers say they fear AI will lead to more fake news, scams, and overall deceptive content online.

Imagine your brand falls victim to a prank where an AI deepfake mimics your CEO saying something inappropriate – that’s a brand nightmare. Or imagine customers doubting your real content just because it might be AI.

This erosion of trust is the last thing DTC and mid-sized brands – who live on customer relationships – can afford. In short, generative AI can create content at scale, but it can also create mistakes at scale. We can’t afford to “set and forget” the AI and hope for the best.

Governance: Putting Guardrails on AI Creativity

How do we reap the benefits of AI while avoiding the pitfalls? The answer starts with governance – essentially, having the right rules, oversight, and quality control for AI usage. In my experience, governance is the bedrock of brand-safe AI. It’s not the most glamorous topic, but it’s absolutely crucial.

What does AI governance look like in practice? For one, it means establishing clear guidelines for your team on how to use (and not use) AI in content creation.

At Fourmeta, we created an AI Content Policy that spells out things like: what types of content are okay to automate, what requires human review, and what data can or cannot be fed into an AI tool. This echoes what I’m hearing across the industry – companies large and small are drafting “AI playbooks” for their staff.

It’s needed, because right now only 37% of organizations are even tracking how employees use AI tools. The other two-thirds are living in a Wild West, where a well-meaning employee might unknowingly break brand rules or privacy laws using AI. That’s a huge risk.
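To make that kind of policy concrete, here is a minimal sketch of how the rules could be encoded as data so they can be checked automatically before anyone fires up an AI tool. The content categories, rules, and the check_request helper are hypothetical illustrations, not Fourmeta’s actual policy.

```python
# Illustrative sketch of an AI content policy encoded as data.
# The categories and rules below are hypothetical examples, not a real policy.

POLICY = {
    "product_description": {"ai_allowed": True,  "human_review": True,  "customer_data": False},
    "blog_post":           {"ai_allowed": True,  "human_review": True,  "customer_data": False},
    "legal_copy":          {"ai_allowed": False, "human_review": True,  "customer_data": False},
    "support_reply":       {"ai_allowed": True,  "human_review": True,  "customer_data": True},
}

def check_request(content_type: str, uses_customer_data: bool) -> list[str]:
    """Return a list of policy violations for a proposed AI content task."""
    rule = POLICY.get(content_type)
    if rule is None:
        return [f"unknown content type: {content_type}"]
    issues = []
    if not rule["ai_allowed"]:
        issues.append(f"{content_type} may not be AI-generated")
    if uses_customer_data and not rule["customer_data"]:
        issues.append(f"{content_type} may not be generated from customer data")
    return issues

print(check_request("legal_copy", uses_customer_data=False))
# -> ['legal_copy may not be AI-generated']
```

Even a simple table like this forces the conversation about what is and isn’t fair game for automation, which is most of the battle.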

Governance also means human oversight is non-negotiable. Remember those early factory machines that still needed a person watching the dials?

Think of AI the same way. In a recent Salesforce survey, employees said that to make generative AI a trusted part of their work, the top requirements were human oversight (cited by 60% of respondents) and clear ethical use guidelines (58%).

In other words, your team wants guardrails too! In practice, this could be having an editor review every AI-written article before publishing, or a designer touching up every AI-generated image.

At Fourmeta, we’ve made it standard that no AI content goes live without a human eyeball on it. This “human in the loop” approach catches the biases, errors, or tone issues that the AI might miss. It’s like having a safety net for quality.

Technology can help enforce governance as well. New AI content platforms are emerging that let you bake in brand rules and compliance checks by default.

For example, there are enterprise tools that you can feed your brand style guide into – logo usage rules, color palettes, forbidden phrases, etc. – and the AI will auto-police those in any generated content. If the AI is about to create something off-brand, it can flag it or correct it on the fly. Some systems even have a “brand guardian” AI that remembers everything about your brand voice and visual identity, acting like an automated content editor.

I find this promising, because it combines AI’s efficiency with a measure of brand control. It’s the equivalent of having an always-alert assistant who never gets tired of checking compliance.
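As a simplified illustration of that “auto-policing” idea, here is a sketch of a check that scans AI-generated copy against a brand rule list before it goes anywhere near publication. The forbidden phrases and required disclaimer are made-up examples; real platforms apply far richer checks covering tone, imagery, and logo usage.

```python
import re

# Hypothetical brand rules: in practice these would come from your style guide.
FORBIDDEN_PHRASES = ["cheap", "best in the world", "guaranteed results"]
REQUIRED_DISCLAIMER = "Results may vary."

def brand_check(copy: str) -> list[str]:
    """Flag obvious brand-safety issues in a piece of generated copy."""
    issues = []
    for phrase in FORBIDDEN_PHRASES:
        if re.search(rf"\b{re.escape(phrase)}\b", copy, flags=re.IGNORECASE):
            issues.append(f"forbidden phrase: '{phrase}'")
    if REQUIRED_DISCLAIMER.lower() not in copy.lower():
        issues.append("missing required disclaimer")
    return issues

draft = "Guaranteed results with our new shapewear line!"
print(brand_check(draft))
# -> ["forbidden phrase: 'guaranteed results'", 'missing required disclaimer']
```

A human editor still reviews whatever passes the check; the point is simply to catch the obvious misses early and cheaply.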

Finally, governance extends to data and security as well. Brand-safe AI means making sure you’re using trusted data sources and not exposing sensitive info.

Three out of four workers believe generative AI introduces new security risks – and they’re right. If an employee copy-pastes a customer list into a third-party AI tool to “analyze it,” that could be a data breach in the making. So a good governance policy also covers data: which AI tools are approved (e.g., an on-premises solution vs. a public tool), what data can be used, and what anonymization standards apply.

We treat our clients’ data like gold, so any AI we use must meet our security checklist (encryption, compliance with GDPR, etc.). Governance isn’t just internal-facing; it’s about being accountable to your customers and their privacy too.
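On the data side, one practical safeguard is to strip obvious personal identifiers before any text leaves your environment for a third-party tool. The snippet below is a minimal sketch of that idea using basic regex patterns; it is not a substitute for proper anonymization or legal review, and the patterns are deliberately simple.

```python
import re

# Very rough redaction patterns (illustrative only, not exhaustive).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace obvious emails and phone numbers before sending text to an external AI tool."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# Note: names and other identifiers still slip through; real anonymization needs more.
print(redact("Contact Jane Doe at jane.doe@example.com or +44 20 7946 0958."))
# -> Contact Jane Doe at [EMAIL] or [PHONE].
```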

The bottom line on governance is this: AI without guardrails will eventually crash your brand. But AI with the right guardrails can drive safely and get you to your destination faster. As an industry, we’re learning and sharing best practices on this. In fact, 99% of IT leaders recently said their business must take measures to use generative AI responsibly. We’re essentially building the airplane while flying it – but the emphasis on responsible AI use is a very good sign for the future of brand trust.

Rights and Ownership: Navigating the Legal Gray Areas

One of the thorniest issues in this new AI-powered content world is rights. If your brand is using AI-generated text, images, or music, you have to ask: who owns that content? Are we sure we have the rights to use it? And whose work was scraped to create it in the first place? These questions have kept our legal team plenty busy, and it’s something every brand needs to consider.

A high-profile example can illustrate the stakes. In 2023, Getty Images – one of the world’s biggest stock photo companies – sued Stability AI, the company behind a popular AI image generator (Stable Diffusion). Getty accused them of scraping millions of Getty’s copyrighted photos to train the AI, without permission or payment. That case is now in court and is seen as a landmark for AI and copyright law.

Essentially, the courts will decide if using someone’s creative work to train an AI is fair use or if it violates the creator’s rights. As a brand executive, I’m watching this closely. The outcome will affect whether we can safely use certain AI tools or datasets without ending up on the wrong side of a lawsuit.

Beyond images, we’ve seen a wave of writers, artists, and even actors pushing back on AI using their work without credit or compensation. Remember the Hollywood writers’ and actors’ strikes in 2023? One of their big issues was AI. Actors demanded protections so studios couldn’t, say, scan their likeness and generate new performances without pay. Voice actors worried about AI cloning their voices.

These are rights conversations at heart – the right to one’s own creative output or identity. Your brand might not be Hollywood, but the principle carries over. If you use AI to generate a jingle, did the AI “learn” that from some musician’s catalog? If you have AI write a blog post, did it lift phrasing from an unattributed author online?

Brands have been called out for AI plagiarism in content. The last thing you want is a PR crisis because your AI-written blog post unknowingly copied a paragraph from the New York Times.

So how do we navigate this? First, insist on transparency from your AI vendors. When we partner with an AI platform, I ask: what data was this trained on? Is it licensed? If they can’t answer clearly, that’s a red flag.

Some companies are taking a proactive approach – for example, Getty (the same company in that lawsuit) responded by launching its own AI image generator that’s trained only on Getty’s fully licensed library. That means any image you make with it is cleared for rights. I expect we’ll see more “ethical AI” offerings like this, where the training data is permissioned and the outputs are indemnified.

Second, as a brand, you can protect yourself by staying on the right side of copyright. For AI-generated text, tools now exist to check for plagiarism or too-close similarities to existing material. Use them as part of your content QA.
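If you want a feel for what such a check does under the hood, here is a toy sketch that measures word n-gram overlap between an AI draft and a known source text. Commercial plagiarism tools compare against huge indexed corpora and are far more sophisticated; this only illustrates the principle, and the threshold shown is arbitrary.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Lower-cased word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft: str, source: str, n: int = 5) -> float:
    """Share of the draft's n-grams that also appear verbatim in the source."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(source, n)) / len(draft_grams)

ai_draft = "Our new linen shirt is cut for an easy, relaxed fit and made to last."
known_source = "The shirt is cut for an easy, relaxed fit and made from durable linen."
score = overlap_ratio(ai_draft, known_source)
if score > 0.3:  # threshold is arbitrary; tune it for your own QA process
    print(f"Review for possible copying: {score:.0%} of 5-grams match a known source")
else:
    print(f"No significant verbatim overlap detected ({score:.0%})")
```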

For images, consider using content that’s either generated from your own assets or from platforms that offer usage rights. And always keep usage licenses in mind – for instance, if an AI art tool says “for non-commercial use only,” don’t sneak it into a campaign ad. It’s not worth the risk.

There’s also the question of who owns the AI outputs your team creates. Some AI tools have terms that grant you full ownership of the output; others claim a license to use whatever you generate. Read those terms. We made sure that any AI platform we use for client work explicitly states that our client owns the final output. It’s an evolving area of law, but as a rule of thumb: treat AI outputs like you would any outsourced creative work – ensure you have clear rights to use, modify, and redistribute them as needed.

And let’s not forget data rights and privacy. If you’re feeding customer data or proprietary info into AI, do you have consent? Are you complying with regulations? For example, an AI that generates personalized product recommendations is great – but not if it violates GDPR by using personal data without proper basis.

Responsible AI governance (as discussed) should intersect with your privacy policies. One comforting stat: 64% of business owners believe AI will improve customer relationships – but that will only hold true if we respect customer rights and expectations while using AI.

In short, the legal landscape around AI and content is still a bit like the Wild West in 2025. My advice is to stay cautious, consult your legal team, and when in doubt, lean towards protecting the human creators’ rights. That approach isn’t just ethical; it’s part of earning trust. Brands that show respect for creators (whether they’re employees, freelancers, or the public whose data might train AI) will fare better in the court of public opinion. We all want innovation, but it has to be innovation that respects rights.

Coca-Cola’s “Create Real Magic” platform let fans generate festive digital cards using the brand’s iconic assets, blending personal creativity with AI. Such controlled campaigns show that AI content can engage customers while staying true to brand identity.

The Future of Trust: Transparency and Authenticity

As we look ahead, one thing is obvious to me: trust will be the currency of the future, especially as AI becomes more embedded in brands’ interactions with customers. How do we maintain trust when content might be created by an algorithm? I believe the answer lies in transparency and authenticity.

Firstly, brands should be open about their use of AI when it matters. I’m not suggesting every social post needs a disclaimer, but if AI is heavily involved in something like customer service or a curated experience, it’s wise to let users know. Interestingly, consumers don’t automatically distrust AI – about 65% of consumers say they do trust businesses that use AI technology today.

Many see the benefits outweighing the risks. However, nearly half of consumers also feel that the AI realm isn’t regulated enough yet. That tells me people are cautiously optimistic but expect businesses to act responsibly. A bit of transparency can go a long way in reinforcing that trust. For example, we might add a note like “This image was created using AI” or use an AI-generated label in metadata. It shows we have nothing to hide and are confident in the quality of our AI-assisted content.

In fact, transparency might soon be law, not just a nice-to-have. Regulators are waking up to generative AI’s impact.

The European Union’s AI Act, which was adopted in 2024, includes a rule that any AI-generated or AI-manipulated content must be clearly identified to users. Providers of generative AI systems will be required to watermark or label their outputs in a detectable way. The goal is to prevent deepfake deception and ensure people know when they’re seeing AI-created media.

While some argue about the feasibility of enforcing this, the direction is clear – the future is moving towards more transparency. I won’t be surprised if other regions implement similar regulations or industry standards (the U.S. FTC, for instance, has hinted it’s watching for deceptive AI practices in advertising).

For brands, embracing transparency proactively can be a competitive advantage. It could be as simple as a statement on your website: “We use AI and human creativity hand-in-hand to serve you better – here’s how…”. Or implementing content credentials (a concept pioneered by the Content Authenticity Initiative) on images, which cryptographically attest to an image’s origin and any edits. Adobe has been rolling out such features, where an AI-generated image can carry a hidden watermark or metadata tag indicating it’s AI-made.
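As a simplified stand-in for that idea, here is a sketch that records basic provenance for an AI-generated image: a content hash plus an “AI-generated” flag written to a sidecar file. Real Content Credentials (C2PA) embed a cryptographically signed manifest in the asset itself, so treat this only as an illustration of what gets recorded; the file names and fields are made up.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance(image_path: str, generator: str) -> Path:
    """Write a simple provenance sidecar next to an image, flagging it as AI-generated.

    Toy illustration: real Content Credentials (C2PA) embed a signed manifest
    inside the asset rather than an unsigned JSON file sitting next to it.
    """
    image = Path(image_path)
    digest = hashlib.sha256(image.read_bytes()).hexdigest()
    record = {
        "file": image.name,
        "sha256": digest,
        "ai_generated": True,
        "generator": generator,  # e.g. the tool or model used
        "created": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = image.with_suffix(image.suffix + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Hypothetical usage:
# write_provenance("hero-banner.png", generator="internal image model")
```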

I foresee a time when consumers will gravitate to brands that can prove the authenticity and source of their content, especially in e-commerce. No one wants to buy from a site where the product photos are AI-generated and don’t reflect reality. But if you do use AI in your product imagery, being upfront and showing that it’s for visualization only can uphold trust.

Authenticity is the other side of the coin. By authenticity, I mean content that genuinely reflects your brand’s values and keeps a human touch. AI or not, people crave authentic stories and connections. The best AI deployments I’ve seen are those that augment human creativity, not replace it.

Coca-Cola’s recent holiday campaign is a great case: they opened up their treasured brand assets (like the classic Santa Claus illustrations and the Coke polar bears) for consumers to remix with AI and create custom greeting cards. It was still their iconic imagery, and the consumers’ own imagination, with AI as the facilitator. The campaign reportedly generated 120,000 images in 11 days and tons of engagement.

The key was Coca-Cola stayed true to its brand – nostalgic holiday imagery – and simply gave people a new, AI-powered way to interact with it. That builds trust and affinity because it feels real and on-brand, even though AI is under the hood.

Contrast that with a hypothetical scenario where Coke had let an AI just randomly generate ads without guidance – we might have seen bizarre or off-message results, which would hurt trust. The lesson: use AI to amplify your authentic brand voice, not to generate a new one. Consistency matters. Research shows that strong governance and consistency in AI-generated content can actually boost brand reputation and stakeholder trust. When people see that a brand’s AI is still aligned with its human values and quality standards, it reinforces the idea that the brand knows what it’s doing.

Looking ahead, I also anticipate new trust mechanisms specifically for AI content. Perhaps industry “AI Responsibility” certifications for brands, or browser tools that can flag AI content (in a good way: e.g., a badge that says “Verified AI – source XYZ”). We may even get to a point where consumers can choose whether they want AI-personalized experiences or not, giving them control. The brands that thrive will be those that earn that opt-in by showing they use AI to genuinely help customers, not trick them.

In summary, the future of trust in an AI-driven brand world will boil down to this: Be honest, be ethical, and keep it human-centric. If you let those principles guide your AI adoption, your customers will reward you with loyalty.

As a CEO, I don’t lose sight of the basics – trust is built slowly through honesty and consistency, and lost in an instant through deceit. AI doesn’t change that. If anything, it raises the stakes, because the scale and speed are greater. But with transparency and authenticity as our north stars, we can navigate the AI era and strengthen the bond between our brands and our customers.

Conclusion: A Balanced Path Forward

Bringing it all together, I remain a cautious optimist about AI in brand communications. Yes, generative AI is disruptive – it’s transforming how we create content, engage with consumers, and operate as marketing teams. It offers incredible opportunities for those of us willing to innovate. But it also comes with new responsibilities. We can’t just unleash AI and hope for the best. We need to govern it, respect the creative and legal rights involved, and double down on transparency and trust.

My journey as Fourmeta’s CEO navigating this space has taught me that “brand safe” AI is achievable. It’s not an oxymoron. It happens when you combine AI’s speed with human judgment. When you pair automation with governance. When you embrace innovation, but within an ethical framework. The brands that get this balance right are already seeing wins – higher efficiency, more personalized customer experiences, and intact (even boosted) trust. Those that get it wrong, well, they make headlines for the wrong reasons and face setbacks in trust and reputation.

To my fellow marketers and brand leaders, especially in the DTC and mid-market arena: I encourage you to engage deeply with these issues. Don’t sit on the sidelines of the AI wave, but don’t dive in blind either. Test and learn with AI, but set your guardrails. Educate your teams about both the capabilities and the pitfalls. Share stories and examples (the good and the cautionary) with your organization, so everyone understands why brand safety matters more than ever in the age of AI.

Ultimately, when I think of the “future of trust,” I see it as a collaborative effort – between companies, consumers, regulators, and even AI developers – to create an ecosystem where AI enhances our lives without undermining our values. We’re not quite there yet, but we’re making progress. And I’m excited for Fourmeta and our peers to be part of that journey, crafting a future where AI is not just powerful and smart, but also respectful, responsible, and worthy of the trust our customers place in us.