By AMANDA-JANE GEORGE

In a world first, the Australian Federal Court last week gave the green light for artificial intelligence (AI) systems to be patent "inventors". What does this mean for the HE sector?

AI is being touted as the next technology paradigm shift, and the HE sector is riding the wave – with at least 20 institutions offering undergraduate and postgraduate courses related to AI. And while only eight institutions were ranked at or above world standard in AI research in the 2018 Excellence in Research for Australia round, it's safe to say that number will likely rise in the coming ERA round.

But with all the talk on academics becoming entrepreneurs and improving the commercial hit-rate of HE sector research (CMM Nov 12 2020, 1 March, 15 April), just how commercially attractive are AI-generated inventions?

Up until last week, autonomously-generated AI inventions were not patentable. Since patents are the key to commercial success, things didn’t look good for the nascent yet rapidly evolving AI sector.

Incredibly, the problem was (partly) paperwork: patent applications need to disclose a person or company as owner, and name an “inventor.”

This doesn’t cause a problem if the AI system was essentially used as a tool to assist the human team that did the inventing. In those cases, the human inventors are named on the application. The paperwork is compliant, and by virtue of various well-established laws (such as employer ownership) the applicant owner legally “derives” title to the invention, as required, from the human inventors.

But what if the AI generated the invention itself, without human input? There was a real question about whether patent law allowed a non-human entity to be recognised as an “inventor” on the patent application, particularly given that inanimate objects cannot hold title in property (or pass it to the named owner).

Enter the Artificial Inventor Project (AIP).

Stephen Thaler, the (human) brains behind the AIP, invented an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). DABUS combines two types of artificial neural network – a generator of “potential memories,” and a network that perceives value in the output stream. The result is an AI system that is said to independently produce new ideas, mimicking the human brain’s major cognitive circuit.

DABUS made a couple of patentable inventions – and Thaler filed applications in 16 different jurisdictions via the Patent Cooperation Treaty (PCT), where the paperwork was accepted by the World Intellectual Property Organisation (WIPO). The applications were then sent to the various jurisdictions for examination and, potentially, grant.

In South Africa, a patent for a DABUS invention was granted last month. However, that jurisdiction has no definition of “inventor,” and no substantive examination process, so the patent was simply issued on the basis that the PCT paperwork had previously been accepted by WIPO.

In the UK, a court has considered whether DABUS could be an inventor under its legislation, and concluded it could not. Given the importance of the case, the appeal was live-streamed last week, but the decision will not be delivered until October … the Chief Justice apologised for keeping everyone in suspense.

The consequences of the narrow view are far-reaching: while an AI system might devise an otherwise perfectly patentable invention, grant is denied because the owner of the AI machine cannot name any legally recognised “inventor,” from whom the owner derives title. The invention, as submitted by the UK Comptroller of Patents, just “doesn’t belong to anybody.”

On Friday, Justice Beach of the Federal Court handed down his AI-friendly decision, while wisely declining to define artificial intelligence, given its thorny issues of awareness, consciousness, or sense of self.

Unlike the UK, our Act doesn’t define “inventor.” So Justice Beach had to work out whether the word, when used in our Act, could encompass an AI system. He found that “inventor” is an agent-noun: the agent is simply the person or thing that invents. This interpretation was said to simply reflect reality in terms of many otherwise patentable inventions where it cannot sensibly be said a human was the inventor. Importantly, this interpretation was consistent with the new “object clause” in the Patents Act, which – since 2020 – requires the patent system to promote “economic wellbeing through technological innovation and the transfer and dissemination of technology.”

But while the AI sector will rejoice, the decision will no doubt raise more issues than it settles.

For example, to be patentable, an invention must make an “inventive step.” Currently, this is tested by reference to whether the hypothetical (natural) skilled person would think the invention obvious (or not) in light of common general knowledge. Will the test now need to accommodate AI skill or knowledge?

And how will we determine ownership where the AI system was not created by an individual like Thaler, but using one or more open-source licences?

Others worry that there will be an influx of foreign AI patent applications.

These kinds of issues are fundamentally important – which raises the question of whether they should be a matter for Parliament. So far, the issue of intellectual property (IP) is conspicuously absent from the government's AI Action Plan, although other jurisdictions, like the UK, have held consultations on the interface between IP and AI.

While Justice Beach ventured that his decision would not open the floodgates and cause Patent Office headaches, it remains to be seen whether the same will hold true for the courts.

Dr Amanda-Jane George is a Postgraduate Research Coordinator, teaching and researching in Innovation & IP Law at the CQUniversity School of Business and Law
