Insights from Future IP UK Experts Vol.9: Sarah Jarman on AI, Ownership, and the Future of IP
- Min Nguyen
- Sep 8
- 9 min read

The intellectual property landscape is entering uncharted territory, where AI, data, and regulation collide. For businesses, the challenge is no longer just protection - it’s about building IP strategies that can adapt, scale, and stay ahead of constant change.
We’re delighted to speak with our Conference Chair, Sarah Jarman, VP Legal Compliance & Group Data Protection Officer at Emplifi.
With nearly 20 years of leadership in legal and compliance across technology, media and defence, Sarah brings a wealth of experience in IP enforcement, commercial law, and global AI compliance initiatives.
In this exclusive interview, she shares her perspective on ownership as the new battleground in IP, the litigation and policy developments she’s watching closely, and how IP professionals can adapt their strategies to stay at the forefront of a rapidly shifting landscape.
What’s one big trend in AI and IP that you think everyone in the industry should be paying attention to right now?
Ownership. That’s the battleground that’s going to shape not just IP enforcement, but also how companies think about product strategy, M&A, and even valuation. AI is no longer just about technology; it’s now a board-level IP and risk issue. And the reason we all need to pay attention is that there’s no one-size-fits-all answer right now; we are watching case law evolve rapidly. My advice? Bring popcorn.
On one side, businesses are racing to embed AI into their products, unlocking huge gains in efficiency and innovation. On the other, the law is still scrambling to keep up: Who owns AI-generated output? How far can you enforce it? And how do you avoid the embarrassment of discovering your shiny new algorithm was trained on someone else’s copyright portfolio? These are not just law-school hypotheticals; they’re becoming billion-dollar questions.
Are there any recent cases, policies, or market shifts that you think will spark lively discussion at the event?
I suspect this year’s event will be especially engaging because we now have concrete litigation and policy developments rather than just hypotheticals.
In the UK, the Getty Images v. Stability AI case has become one to watch. Getty recently dropped its primary copyright and database claims, but the High Court is still considering secondary infringement, trademark, and passing-off claims. That narrowing illustrates just how hard it is to apply traditional copyright doctrines to training data, but also how courts may pivot towards branding, watermarks, and unfair competition. A judgment is expected later this year, and it could reshape the way AI companies think about cross-border liability.
In the U.S., the New York Times v. OpenAI and Microsoft litigation is equally important. The court has already issued a preservation order requiring OpenAI to retain ChatGPT output logs. That may sound procedural, but in practice it’s a shot across the bows: courts are treating AI outputs as discoverable evidence in copyright disputes, which will ripple into corporate data governance policies worldwide.
We’re also seeing the news sector fighting back: Dow Jones and the New York Post v. Perplexity AI survived an early motion to dismiss this summer. That signals publishers are unlikely to accept unlicensed scraping as business-as-usual.
And let’s not forget the creative industries. Artists continue to push forward with claims in Andersen v. Stability AI/Midjourney, while media giants like Disney and NBCUniversal have filed suits against Midjourney for unauthorized use of their content. Together, these cases test both sides of the equation: the legality of training and the liability of outputs.
On the policy front, the EU AI Act is now live, requiring transparency around training data for foundation models, while in the UK the IPO has reopened the question of text-and-data-mining exceptions after shelving its broad exemption. Boards and investors are watching this closely, because it directly affects valuation, risk disclosures, and M&A due diligence.
The combination of live cases, regulatory shifts, and market pushback therefore means we’re no longer speculating about ‘what if.’ The guardrails are being built right now—and that makes it a fascinating moment to be working in IP.
Do you think IP professionals should adapt their skills and strategies to keep pace with technology-driven change?
I think IP professionals already have the core skills to keep pace with technology-driven change; it’s practically in the job description. The role already demands an eye for detail, the ability to get up to speed quickly on new concepts, and the capacity to live in a world where the law is so dynamic.
So the skills aren’t the problem.
What really needs to change is the strategy. It’s less about learning to read another hundred-page statute, and more about how we translate that into business action—when we communicate, how we engage stakeholders, and how we make sure we’re part of the strategic conversations, certainly before the deal is signed or the AI tool is launched.
In other words, the challenge isn’t that IP professionals can’t keep up; it’s that sometimes we’re still showing up to the race in a three-piece suit when everyone else is on an e-scooter. We don’t need new muscles; we just need a better playbook for using the ones we’ve got.
What practical approaches have you found most effective for training teams on AI-related IP and legal risks?
I’ve experimented with just about every type of training you can imagine, from the big, company-wide AI 101 sessions and formal policies, right through to live functional workshops where we either keep things high-level with clear mitigation processes, or go very granular, like dropping a patent on the table and saying, ‘Alright, what do you think?’
What I’ve found is that the most effective approach isn’t casting the widest net, but rather building the why. In a business driven by revenue, it’s easy to see an IP portfolio as just a cost centre and to treat risk discussions as slightly academic. The turning point comes when you connect the dots: competitor analysis, what happens in M&A due diligence, the very real reputational hit if something goes wrong. Those are the examples that get people leaning in.
I’ll be the first to admit, I haven’t cracked the code. AI-related IP risks evolve faster than anyone’s training slides, and interest, understanding, and capacity can vary, so it’s less a ‘one-off programme’ and more an ongoing conversation grounded in risk principles.
The key, I think, is staying in the loop with the business, keeping the dialogue open, and not being afraid to refresh the message when yesterday’s risk suddenly looks very different today.
With the rise of generative AI, what are your thoughts on the legal and ethical implications of AI-generated inventions? In your opinion, how should questions of authorship and ownership be addressed?
With the rise of generative AI, the questions of authorship and ownership aren’t just technicalities of copyright law, they strike at the heart of what it means to be a creator and who can bear responsibility.
From an ethical standpoint, the starting point must be clear: AI systems are not people. They lack human nature, moral responsibility, and the capacity for virtue. As Professors John Tasioulas and Josiah Ober argued in the Aristotle and AI White Paper at Oxford, ‘AI systems should be conceived primarily as intelligent tools… to enhance the prospects of human flourishing.’ In other words, any claim of authorship or ownership should remain anchored in the human beings who design, deploy, or direct these systems, not the systems themselves.
The White Paper also warns against anthropomorphizing AI. Aristotle made the same distinction two millennia ago: being clever at producing outcomes is not the same as exercising practical wisdom. As the paper notes, ‘even if an AI system could simulate virtuous activity, it lacks the settled disposition to act for the right reasons.’ If we begin treating AI outputs as self-authored, we risk hollowing out the ethical responsibility that comes with authorship.
UNESCO takes a similar line in its Recommendation on the Ethics of AI, which has been endorsed by nearly 200 states. It emphasizes that ‘it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems… to physical persons or existing legal entities.’ AI may inform decisions, but ‘an AI system can never replace ultimate human responsibility and accountability.’ That principle dovetails perfectly with the Aristotelian view: machines can assist, but they cannot shoulder human dignity, accountability, or moral agency.
So, how do I think we should respond?
First, authorship must remain human, linked either to the individuals who direct the AI or, where appropriate, to the organizations accountable for its training and deployment.
Second, ownership frameworks should reinforce this while being transparent about AI’s role as a tool.
Finally, ethics must lead the way. Accountability is key and we have a responsibility to our non-IP professional humans (!) to help shape that by engaging in discussions like this and helping to drive forward a regulatory framework that reflects the above.
Looking ahead, what role do you see AI playing in the next years in IP law and innovation management?
I see AI embedding itself across the entire IP lifecycle, from development through to protection, identification, and even how we learn as professionals.
On the development side, AI is already accelerating innovation by helping teams design, test, and refine products at speed. When it comes to protection, we’re going to see AI used to draft, analyze, and even stress-test filings, contracts, and enforcement strategies. In identification, AI’s ability to scan vast data sets means it can surface prior art, detect infringement, or spot emerging competitors faster than any human could.
In learning, AI will reshape how practitioners themselves stay current: curating updates, tailoring analysis, and helping us digest a regulatory landscape that’s shifting week by week.
Of course, different practitioners will lean on it in different ways: patent attorneys may use it for prior-art searches, copyright lawyers for monitoring generative outputs, and in-house teams like mine for scenario planning and risk management. There will also be individual preferences and specialisms.
The common thread however is that AI is no longer on the periphery of IP—it’s becoming the connective tissue that runs through innovation management itself, which is a very sobering thought if we don’t ensure the regulatory landscape reflects that.
From a portfolio management perspective, how has AI influenced strategic decision-making, such as filing and licensing?
On the filing side, AI equips us with far richer intelligence about what to protect, where, and when. We can now scan global patent landscapes at scale, distinguish between oversaturated fields and genuine white-space opportunities, and even model examiner behaviour or grant likelihoods. That means we’re no longer filing reactively but making informed choices, whether to patent a new software feature, hold it as a trade secret, or accelerate it to market before competitors catch up (provided we’ve been kept in the loop early enough!).
On the licensing side, AI-driven analytics are transforming how we extract value and mitigate risk. In the SaaS context, that might mean identifying APIs or algorithms that generate more value through licensing than exclusivity, or using predictive models to weigh the commercial upside of exclusive versus non-exclusive rights. It also sharpens risk assessment, flagging where a licence could create dependencies on third-party data sets, raise interoperability issues, or inadvertently allow data to be repurposed for model training within the supply chain.
AI is enabling IP portfolio management to evolve from a compliance-driven function into a strategic lever, one that balances innovation, monetization, and risk in ways that were simply not possible before, and I expect it to keep changing and influencing at pace.
When building and managing large, global IP portfolios, what are the biggest jurisdictional challenges you encounter?
The most challenging issues, I find, aren’t simply about comparing legal systems or calculating filing costs. The real difficulty lies in aligning the geo-strategic direction of the business with a nuanced understanding of enforcement trends and litigation cultures across jurisdictions.
For example, the United States tends to be a highly litigious environment with the constant risk of patent assertion entities (or, my preferred term, ‘patent trolls’). That drives a very different defensive filing and licensing strategy than, for example, Europe, where the Unified Patent Court offers the prospect of pan-European enforcement but also the risk of pan-European revocation in a single proceeding.
In Asia, the challenges are different again. China has moved rapidly from being seen as a weak IP jurisdiction to one where enforcement is both sophisticated and aggressive, particularly in technology sectors. That forces companies to weigh not only the value of protection but also the strategic cost of potentially defending against counter-claims in a jurisdiction with fast-moving courts.
By contrast, in jurisdictions like India or Brazil, enforcement can be slow and unpredictable, so the strategic calculus is often about whether the cost of maintaining rights outweighs the practical benefit of enforcement.
What this means in practice is that portfolio decisions are no longer purely legal—they’re deeply connected to business strategy.
Whether we double down on filings in the U.S., take a cautious approach in the UPC, or use selective filings in emerging markets isn’t simply about law, but about anticipating litigation risk, competitor behaviour, and even geopolitical exposure.
That again means being part of the business and commercial conversations early, including those that aren’t centred purely on IP.
It often involves convincing your stakeholders to invite you into discussions around where sales will be focused, where labour is going to be distributed, what the new product roadmap looks like, and to give you sight of features or developments that aren’t yet available to the wider business.
Not just so you can make informed decisions, but so that you have time: time to impact-assess, time to protect if needed, time to understand and communicate accordingly.
In the future, however, I expect AI to reduce such time pressures!
When you attend events like Future IP UK, what kinds of conversations or connections are most valuable to you?
The organic ones. One of the best things about attending these events is the literal ‘human in the loop’.
You know you aren’t interacting with an AI; these are genuine discussions with industry-leading professionals that aren’t scripted and come with real opinions. That’s where lessons are learnt and value is created for me.
Sarah’s insights highlight that ownership, ethics, and strategy are now central to the future of intellectual property. As Conference Chair of Future IP UK, she will lead the dialogue on how professionals can navigate these challenges, seize new opportunities, and shape IP strategies that are both resilient and forward-looking.
Join Sarah and other leading experts at Future IP UK for discussions that will define the next chapter of innovation and protection.
Written by Min Nguyen, Content Executive
