It’s not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI – do we award a patent to a machine?
This is the quandary facing lawmakers around the world, with a live test case in the works that its supporters say is the first true example of an AI system being named as the sole inventor.
In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.
Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents – laws that have been operating under the same assumptions for hundreds of years.
The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products – a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.
For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler’s appeal against IP Australia’s rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn’t be named the inventor.
A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws, which have only ever considered humans, or entities comprised of humans, as inventors and patent-holders.
“Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person,” she says.
Ownership is crucial to IP law. Without it there would be little incentive for others to invest in new inventions to make them a reality.
“Another problem with ownership when it comes to AI-conceived inventions is, even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?” asks A/Prof. George.
For obvious reasons
Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store far more information than any expert ever could. One of the requirements for a patentable invention is that the product or idea is novel, not obvious and useful.
“There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field,” Prof. Walsh says.
“Well, what might be obvious to an AI won’t be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes.”
Prof. Walsh says this isn’t the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic, Halicin, was discovered in 2019 when deep learning was used to find a chemical compound effective against drug-resistant strains of bacteria.
“Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there’s a mixture of human and machine coming into this discovery.”
Prof. Walsh says in the case of DABUS, it’s not entirely clear whether the system is truly responsible for the inventions, since Dr Thaler had provided it with parameters to work within.
“There’s lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result,” Prof. Walsh says.
“But it's certainly the case that without the system, you wouldn't have come up with the inventions.”
Change the laws
Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether AI systems can be awarded IP protection. They recommend the introduction of a new ‘sui generis’ form of IP law – which they’ve dubbed ‘AI-IP’ – specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.
Looking forward, having examined the legal questions around AI and patent law, the authors are now working on the technical question of how AI will invent in the future.
Dr Thaler has sought ‘special leave to appeal’ the case concerning DABUS to the High Court of Australia. It remains to be seen whether the High Court will agree to hear it. Meanwhile, the case continues to be fought in multiple other jurisdictions around the world.