How the USPTO Solved AI Inventorship by Looking the Other Way

For a while there, it looked like we had some rules.

Back in 2022, the Federal Circuit decided Thaler v. Vidal and told us, pretty clearly, that inventors under U.S. patent law have to be human beings. Not corporations. Not algorithms. Not Stephen Thaler’s AI system, DABUS. Humans only. Full stop.

But that decision never really answered the question everyone actually cares about: what happens when humans and AI work together?

Because that’s the real world. Engineers don’t lock an AI in a room and come back later to see what it invented. Humans prompt the system, steer it, pick the good results, discard the junk, and turn outputs into working products. So how much human involvement is “enough” to count as inventorship?

Former USPTO Director Kathi Vidal at least tried to answer that. Her 2024 guidance said, in essence: if a human made a significant contribution to the invention, that human could be named as an inventor. It wasn’t perfect, and it raised some awkward conceptual issues, but at least it gave practitioners something to work with. There was a line, even if it was a fuzzy one.

Fast forward to November 2025, and new Director John Squires has taken a very different approach. Instead of drawing lines, the USPTO has decided not to look at the map at all.

Under the new guidance, the Office isn’t going to ask how much AI was involved. It isn’t going to ask what the human actually did. If a natural person is willing to sign the inventor’s oath, the USPTO will presume human inventorship and move on. In other words: don’t ask, don’t tell.

Practically speaking, this takes most of the bite out of Thaler. You still can’t list an AI as an inventor — that rule technically survives. But as long as you can find a human anywhere near the project who’s willing to raise their hand and say “Yep, that was me,” you’re good to go.

And let’s be honest: finding that person is rarely hard.

Anyone who has spent time around corporate R&D knows the drill. When inventorship gets murky, someone will always step forward. The project manager. The senior engineer. The person who approved the budget. The intern who typed the prompts. It’s usually pretty easy to identify a human with some relationship to the product and prop them up as “the inventor,” even if the actual inventive heavy lifting was done elsewhere.

From the USPTO’s perspective, this policy is efficient. No investigations. No philosophical debates about machine cognition. No uncomfortable questions about whether the Patent Act is built on assumptions from a pre-AI world. But efficiency comes at a cost.

What the Office is really doing here is choosing a legal fiction and sticking with it. We all pretend that the invention is human, because the paperwork says so, even when everyone involved knows the reality is more complicated. It’s inventorship theater: the forms get signed, the boxes get checked, and nobody looks too closely behind the curtain.

Stephen Thaler’s DABUS cases forced courts to confront the issue head-on, but in an oddly artificial way. Thaler insisted that no human invented anything at all, which made the legal question easy and the facts unrealistic. Real innovation doesn’t look like that. Humans and machines work together, and the hard question isn’t whether AI can be the sole inventor — it’s how we should allocate credit when conception is shared.

The new USPTO guidance avoids that question entirely. It preserves the appearance of human inventorship while quietly allowing AI-generated inventions to be patented, as long as everyone agrees not to talk too much about how the sausage was made.

That may feel pragmatic today. But legal fictions have a shelf life. When they stop reflecting how innovation actually happens, they don’t just simplify administration — they start to erode trust in the system itself.