Artificial Intelligence and Intellectual Property Rights

Posted on September 13th, 2024

Authors

  • Cherop Cherono

As far back as 2016, a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt – a 3D-printed painting generated from data on Rembrandt’s entire body of work, with 346 known paintings by the Dutch master scanned over a period of 18 months. The portrait consists of 148 million pixels and is based on 168,263 fragments from Rembrandt’s works stored in a purpose-built database. The project was sponsored by the Dutch banking group ING, in collaboration with Microsoft, the marketing consultancy J. Walter Thompson, and advisors from TU Delft, the Mauritshuis and the Rembrandt House Museum.

In 2023, a Chinese-language work entitled The Land of Machine Memories won second prize at the 5th Jiangsu Popular Science and Science Fiction Competition. It took only three hours for Shen Yang, a professor at the School of Journalism and Communication at Beijing-based Tsinghua University, to generate the award-winning submission using AI tools.

Jason Allen, a video game designer in Pueblo, Colorado, spent roughly 80 hours working on his entry to the Colorado State Fair’s digital arts competition. Judges awarded him first place, which came with a USD 300 prize. But when Allen posted about his win on social media in late August 2022, his artwork went viral for all the wrong reasons. Allen’s victory took a turn when he revealed online that he had created his prize-winning art using Midjourney, an artificial intelligence program that can turn text descriptions into images.

Allen’s win created a storm of controversy, with artists expressing fear over what AI-generated artworks could mean for them:

“We’re watching the death of artistry unfold right before our eyes — if creative jobs aren’t safe from machines, then even high-skilled jobs are in danger of becoming obsolete. What will we have then?”

— OmniMorpho (@OmniMorpho) August 31, 2022

German photographer Boris Eldagsen turned down the first prize he was awarded at the Sony World Photography Awards after revealing that his winning image was AI-generated, stating that AI-generated images and traditional photography should not compete in the same contests.

Eldagsen said in a statement on his website that he entered the competition as a “cheeky monkey” to test whether such events were prepared to handle AI-generated content and called for a debate on AI’s role in photography.

AI-generated artwork has undoubtedly arrived. But is it friend or foe?

AI and the creative process

Robots, machines and AI have been part of the creative process in some form since the 1970s. It’s nothing new. The works of art may have been crude, but they were works of art nonetheless. Most of these computer-generated works relied heavily on the creative input of the programmer – the machine was at most an instrument or a tool, very much like a brush or canvas.

But today, we are in the throes of a technological revolution that requires us to rethink how computers and AI are involved in the creative process. This revolution is characterised by machine learning software that produces autonomous systems capable of learning without being specifically programmed by a human.

A computer program developed for machine learning purposes has a built-in algorithm that allows it to learn from data input, and to evolve and make future decisions that may be either directed or independent. When applied to art, music and literary works, machine learning algorithms learn from the input provided by programmers and then generate a new piece of work, making independent decisions throughout the process that determine what the new work looks like. An important feature of this type of artificial intelligence is that, while programmers can set parameters, the work is actually generated by the computer program itself – often referred to as a neural network – in a process akin to the thought processes of humans.
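To make the idea of “learning from data input” concrete, here is a deliberately simple sketch in Python. It is not one of the neural networks described above, but a much simpler stand-in: a toy character-level Markov model that learns which characters tend to follow which short contexts in a training text and then generates new text from that learned table. The training text, the order parameter and the seed are invented purely for illustration.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Learn, from the training text, which characters were observed
    to follow each sequence of `order` characters."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, order=2, length=80):
    """Generate new text by repeatedly sampling a next character
    from what was observed after the current context."""
    out = seed
    for _ in range(length):
        context = out[-order:]
        choices = model.get(context)
        if not choices:  # context never seen during training: stop early
            break
        out += random.choice(choices)
    return out

# Invented miniature "corpus", purely for illustration.
corpus = "the brush obeys the painter but the program paints on its own and the painter watches "
model = train(corpus, order=2)
print(generate(model, seed="th", order=2, length=80))
```

The programmer sets the parameters (the training text, the context length, the seed), but the particular output string is produced by the program’s own sampling decisions, which is the point made above, albeit at a vastly smaller scale.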

Perhaps the most familiar example of this is ChatGPT, the chatbot from Microsoft-backed OpenAI, which attracted millions of active users within the first few months of its launch. ChatGPT answers questions in natural human language, drawing on training data that, at launch, extended only to 2021. But ChatGPT can also be used for more creative pursuits, such as writing journalistic articles and even authoring novels – as was seen with The Land of Machine Memories, the prize-winning entry at the 5th Jiangsu Popular Science and Science Fiction Competition mentioned above.
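For readers curious what using such a tool for creative pursuits looks like in practice, the minimal sketch below sends a single creative prompt through OpenAI’s Python client. It assumes the openai package (version 1 or later) is installed and an API key is available in the OPENAI_API_KEY environment variable; the model name and the prompt are illustrative choices only, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Write the opening paragraph of a short story "
                   "about a city that slowly forgets its own history.",
    }],
)

print(response.choices[0].message.content)
```

A few lines of prompt in, a few paragraphs of prose out; that is precisely what raises the ownership question discussed next.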

When it comes down to it, who owns the creative work?

The Prompt Engineering & AI Institute holds the position that:

“AI’s capabilities are predominantly dependent on the information it has been sent. AI doesn’t discern between factual or false data – it merely processes what it’s given.”

And if that premise is correct – “AI merely processes what it’s given” – then who owns the creative work produced by AI? Or put more formally – who owns the copyright over the intellectual property?

Traditionally, the ownership of the copyright in computer-generated or AI-generated works wasn’t in question because the AI was merely a tool that supported the creative process, very much like a pen and paper or paintbrush and canvas.

Strictly speaking, it is a widely accepted legal premise that creative works qualify for copyright protection only if they are original, with most definitions of originality requiring a human author.

This was illustrated in the 2023 matter of Thaler v. Perlmutter, in which a U.S. federal court found that a work of art created by artificial intelligence without any human input cannot be copyrighted under U.S. law.

Only works with human authors can receive copyright protection, the court held, affirming the Copyright Office’s rejection of an application filed by computer scientist Stephen Thaler. Thaler has also sought patents in other countries, including the United Kingdom, South Africa, Australia and Saudi Arabia, naming his Device for the Autonomous Bootstrapping of Unified Sentience (DABUS) system as the inventor, with limited success.

Thaler attempted to copyright a two-dimensional AI-authored image titled “A Recent Entrance to Paradise,” created by an image-generating AI tool he developed called the “Creativity Machine.” The Copyright Office rejected the application, stating that the work lacked “the human authorship necessary to support a copyright claim.” In response, Thaler sued the Copyright Office, arguing that he is the owner of the work and that it was a work made for hire by the AI tool.

The above premise around human authorship is echoed by jurisdictions around the globe, and most certainly in Kenya, where the Copyright Act, Cap 130 (the Copyright Act) states at section 2(1) that the author, in relation to an artistic work or a computer program which is computer-generated, means the person by whom the arrangements necessary for the creation of the work were undertaken. As in other jurisdictions, a person in this context can be assumed to mean a human or natural person. The Copyright Act goes on to state that the author, in relation to a computer program, means the person who exercised control over the making of the program.

However, it must be noted that the European Parliament passed the Artificial Intelligence Act on 13 March 2024 by a vote of 523 in favour, 46 against and 49 abstentions, and the EU Council approved it on 21 May 2024. It is the first comprehensive regulation of AI by a major regulator anywhere.

The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated. By classifying applications and systems according to risk, the Act sets standards and legal requirements for how providers (developers) bring them to market and how users deploy them. Having a governance standard in place in one jurisdiction enables other jurisdictions to apply similar standards to how they categorise, market and use AI, which may in turn feed into how creative work produced by AI is treated.

The Kenya Robotics and Artificial Intelligence Society Bill, 2023 is still in the works. Its primary function is the establishment of a society mandated to oversee and regulate robotics and AI professionals, promoting innovation, collaboration and ethical standards while ensuring public safety. Given that the Bill is not yet law, there is room to enrich it by borrowing from the European Union AI Act.

The way forward

It’s clear that the full impact of the creative use of AI has not yet been felt and it seems as if all creatives wait with bated breath as they confront a future that is – at least currently – hard to predict.

But there may be some light at the end of what is a very long, dark tunnel. In 2023, the U.S. Copyright Office took some initial steps to issue guidance, going so far as to launch a formal AI initiative. While it hasn’t issued any formal rules, the Office has tried to clarify the human authorship requirement while affording AI-assisted creative content some copyright protection.

The Office’s March 2023 statement of policy addressed whether a work that consists “of both human-authored and AI-generated material” can be registered and what information an applicant must provide to the Copyright Office to register such works. It states that where AI-generated material is subsequently edited or manipulated by a human in a “sufficiently creative” manner, the resulting work constitutes an original work that can be copyrighted. So, although the underlying AI-generated material is not protected, the manipulated work can be. This seeming “compromise” may be the answer to situations such as Thaler’s, where he argued that he was the owner of the work and that it was a work made for hire by his AI tool.

Whether other countries around the globe will apply this thinking to their own matters remains to be seen. But what is clear is that guidelines have recently emerged that can be drawn on when dealing with matters of a similar nature, both in Kenya and in other countries worldwide.

This will very much be a case of watch and see. Rest assured, we will keep a close eye on developments and provide updates as necessary.

If you have any questions about the information we have set out above or need assistance with a legal matter that we have the experience and expertise to assist with, please don’t hesitate to contact us at TripleOKLaw LLP.

(Sources used and to whom we give thanks: The Next Rembrandt; Cyber News; Smithsonian Magazine; Lexology; CBC; The Conversation; Civilian AI Is Already Being Misused by the Bad Guys; Prompt Engineering & AI Institute; Reuters; EU Artificial Intelligence Act.)