Jason Fiorillo, Chief Legal Officer and Secretary of Boston Dynamics,
is a recognized leader in the robotics and AI industries and brings deep expertise in navigating the rapidly evolving intersections of technology, regulation, and innovation. With a strong background in intellectual property law and a keen focus on the societal implications of advanced robotics, Jason’s insights illuminate the challenges and opportunities that define this transformative era. In this Q&A, Foley Hoag partner John Lanza talks with Jason about critical topics such as regulatory adaptation, workforce evolution, and the ethical dilemmas arising from cutting-edge technologies. His forward-thinking perspective offers a compelling roadmap for understanding the future of robotics and artificial intelligence.
Q: John Lanza - As the robotics industry continues to advance rapidly, what new regulatory challenges do you anticipate in the next year, especially in areas such as safety, data privacy, and autonomous decision-making?
A: Jason Fiorillo - The regulation of robotics is a very dynamic space for governmental action. In 2025, there are a number of areas where I expect healthy growth.
First, consider what it means to be “safe.” Under traditional tort law, a product, like a robot, may be considered “safe” if it employs commercially reasonable measures to minimize harm to people or property that comport with what may be expected of a reasonably prudent person. However, where an industry-specific or regulatory standard applies, that standard can become the expected level of care owed to a user or bystander exposed to the product.
As robots become more physically capable and present in spaces that did not traditionally house robots, safety standards and frameworks will need to evolve beyond those designed for fixed industrial robots. For example, the European Machinery Directive (adopted in 2006) requires the use of fixed physical barriers and emergency stops (think big red buttons) to halt a malfunctioning piece of equipment. This works well when your robot weighs three tons and is bolted to the floor. But it doesn’t work quite as well when your robot can move about a facility or work autonomously without close human supervision. An updated European Machinery Regulation will come into force in 2027 and should address some shortfalls of the 2006 Machinery Directive, but given the pace of growth in the space, even the new regulation may quickly become obsolete.
In addition, the challenges we’ve seen in regulating autonomous vehicles will rapidly invade the fertile ground where collaborative robots (or cobots) will thrive. Any time a split-second decision is made that affects human safety, legal and regulatory frameworks will struggle to apportion responsibility. Consider a healthcare robot assisting in diagnosis or surgery: if it makes an autonomous decision that leads to an adverse outcome, the chain of responsibility among the manufacturer, programmer, healthcare provider, and the A.I. system itself becomes murky. Regulators will need to develop frameworks that balance innovation and accountability.
Data privacy adds additional complexity. Modern robots are essentially mobile sensor platforms that constantly collect environmental data, which can include personally identifiable information. A cleaning robot in an office, for instance, might capture images of documents on desks or record snippets of conversations while navigating. This raises questions about data ownership, storage requirements, and cross-border data flows that go beyond existing privacy regulations like the GDPR, the CCPA, and similar statutes.
The intersection of these challenges creates particularly thorny situations. Imagine a security robot that must balance privacy rights (avoiding unnecessary surveillance) with safety requirements (constant monitoring for threats) while making autonomous decisions about when to alert human operators. Each aspect influences the others, requiring regulators to think holistically rather than addressing each challenge in isolation.
Finally, none of these challenges exists in a vacuum. As a society, how do we balance the need for safety, accountability, privacy, and responsibility regulations against political administrations that may favor deregulation? The answer is not clear. No matter how you slice it, the next few years are going to be very exciting for those involved in regulating A.I., robotics, and other cutting-edge technologies.
Q: With the pace of innovation in robotics, how do you see the industry resolving the tension between intellectual property protection and open, collaborative innovation?
A: I’m not certain that the robotics or A.I. industries will chart a path that diverges from electronics technology more broadly when it comes to the tension between I.P. protection and open innovation. What I think will be very interesting, however, is the evolution of I.P. ownership as A.I.-enabled tools permit more aspects of “innovation” to be completed by machines.
Starting with patents: in Diamond v. Chakrabarty (1980), the U.S. Supreme Court espoused the view that Congress intended patentable subject matter to include “anything under the sun that is made by man.” Over the years, that simple principle has been expanded, contracted, revisited, and revised as courts, lawmakers, and public opinion have alternately favored stronger or weaker patents that are easier or harder to obtain, enforce, and defend. Still, I feel the most interesting question remains, “What portion of an invention can be made by the use of a tool (for example, an LLM) before it is no longer ‘made by man’?” In contrast to other precedent, the USPTO’s guidance from last year made clear that A.I.-assisted inventions are not categorically unpatentable, and provided specific examples to guide inventors and examiners.
However, in my opinion, in the future, A.I. will be as ubiquitous as oxygen, and will surround and envelop us at all times. It may not come as a surprise that by entering the query “How can I increase the vertical jump of a legged robot?” into my A.I. chatbot of choice, I received not only half a dozen interesting suggestions for algorithmic and mechanical improvement, but also the source code for a “Legged Robot Jump Controller.” If I used one of the suggestions as a starting point for an improvement, did I still “invent” it? And if I chose not to disclose my use when filing a patent application, would my use of A.I. ever truly be detected? Clearly, reasonable minds will differ and outcomes to these questions will vary along the gradient between “yes” and “no.”
Indeed, the efficacy of A.I. tools will likely force their adoption in R&D across all industries. If a cancer researcher can generate and digitally test a billion compounds algorithmically, why would anyone choose not to use the tool? We do not today begrudge inventors their use of oxygen during the inventing process. In the future, we should also not begrudge them their use of A.I.
Q: How do you see the robotics industry impacting workforce dynamics in the coming year, and what can be done to address potential disruptions?
A: In the U.S., we currently have a shortage of skilled and unskilled manual labor. I don’t foresee robotics making a material difference in that domain in 2025. However, I think it’s possible that in 2025 we could see some significant changes in the tech or white-collar sectors, as the use of A.I. leads to significant efficiency gains in those professions.
Q: Do you foresee any challenges in securing necessary components due to global supply chain constraints, and how can companies ensure consistent production and delivery?
A: I think it’s obvious that the relationship between the U.S. and China is becoming increasingly fraught. It’s very difficult for companies to cost-effectively eliminate trade risk, as economies of scale rapidly diminish when you split your demand among multiple sources. I do not have a magic solution to this problem, but companies would be well advised to increase their component inventories, diversify their supply bases, and gain maximum control over (or make themselves) any key components that are essential to their products. Personally, I’d also favor suppliers located in nations that are strongly allied with our own.
Q: If robots were to develop their own version of ‘robot holidays,’ what kind of celebrations or traditions do you think they’d have, and how might humans get involved?
A: At Boston Dynamics, we “revel in robotics.” Most days, we focus on commercial applications of our technology. But as a reward for accomplishment, our teams are able to have some fun with the amazing technology that surrounds us each day. That’s one reason why you’ve seen our robots party during the Super Bowl, dance in the New Year and backflip at Christmas.
I love to imagine a future where robots could be capable of expressing their “thoughts” or “feelings.” In this hypothetical future, although it would be lovely if sentient robots also enjoyed traditional human holidays, they might be more inclined to celebrate their own key milestones. I might imagine:
"New Version Day" where they receive major software updates and celebrate by competing in feats of processing strength and endurance. Winners would get fancy splash screens during their boot up.
“Natural Intelligence Day,” where they don period-specific costumes and celebrate the “good old days” before the era of Artificial Intelligence. They would perform reenactments of humans completing long division with pencils and paper, writing in cursive, and painting by hand.
And of course, what robot wouldn’t love to party with humans during National Robotics Week!