Decoded - Technology Law Insights, Volume 5, Issue 6, July 2024


Welcome

Welcome to our sixth 2024 issue of Decoded - our technology law insights e-newsletter. We have a few events to pass along to those interested in technology, as well as other areas of law and business.

Thank you for reading.

AI Lays Foundation for Construction’s Financial Transformation

“As builders grapple with razor-thin margins and complex cash flows, AI offers to reshape the industry’s financial landscape, one algorithm at a time.”

Why this is important: When AI is discussed in reference to construction, people think of robots and drones assisting in the completion of a project. But AI can also be used in a more fundamental fashion that does not require additional heavy equipment. AI has already been used in the design of projects and to maximize worker safety. Its power is now being focused on some of the most difficult parts of any construction project: finance management and project scheduling. A new FinTech start-up, Adaptive, seeks to use AI to smooth out the ups and downs of the construction cycle and to manage complicated payment requirements. The use of AI in finance management is anticipated to alleviate cash flow issues by streamlining processes, reducing errors, and optimizing resource allocation. This should allow smaller construction companies to compete with their larger rivals.

The use of AI to help maintain a project’s schedule is a more revolutionary application in the construction industry. Experts say AI would work as an “enhanced” project manager, “supporting smarter scheduling, improved budget management, and resource efficiency.” Using historical and current data, the AI could identify possible project bottlenecks in real time and propose efficient solutions. It would accomplish this through enhanced proactive construction planning and control, including predicting possible issues in the pre-construction phase and providing solutions, thereby delivering considerable cost savings to the project. The use of AI in the construction industry to assist with finance management and scheduling will be a game changer for this entire sector of the economy. --- Alexander L. Turner

Congress Must Update FDA Regulations for Medical AI

“Legislation currently before Congress (Senate Bill 2209 and House Bill 4128), the Verifying Accurate Leading-edge IVCT Development Act (VALID Act), codifies this firm-based approach to regulation.”

Why this is important: Legislation like the VALID Act aims to modernize FDA oversight by focusing on the development methods and reliability validation of technologies rather than their construction. This approach is essential for AI medical devices, which require continual updates and improvements.

AI medical devices, such as those using large language models, present regulatory challenges due to their complexity and potential for inaccurate outputs if trained on unreliable data. Developers are cautious, often classifying AI tools as non-device clinical decision support software to avoid stringent FDA scrutiny.

The proposed VALID Act would enable the FDA to oversee and regulate in vitro clinical tests (IVCTs), which include in vitro diagnostic devices like wearable health trackers and the software that supports them. The challenge, however, is the constantly evolving nature of AI-based diagnostic tools, which inherently modify themselves as new information is collected. The VALID Act seeks to balance the need for regulatory oversight against the need for rapid innovation in the diagnostic space. --- Shane P. Riley

FDA’s Lab-Developed Test Rule could be First Test of Agency’s Power Post-Chevron

“The Supreme Court’s decision to overturn the Chevron doctrine would make it easier to challenge agency regulations, such as the LDT final rule.”

Why this is important: For those unfamiliar or in need of a refresher, the Supreme Court has now put tight restrictions on the regulatory controls and rules a federal agency may adopt without congressional approval. This includes the FDA. So, in a world where Chevron has been overturned, what are we to make of the FDA’s regulatory authority, particularly for lab-developed tests (LDTs)? For decades, the FDA treated LDTs with enforcement discretion, meaning most tests developed in a laboratory were not subject to the regulations that govern medical devices, such as premarket review, device registration, labeling standards, and adverse event reporting. This article dives into the details of the fight over LDTs and how the Supreme Court’s ruling could set the FDA on a new course as to its regulatory powers. --- Matthew W. Georgitis

Department of Homeland Security Proposes Rule for Reporting of Cyber Incidents

“Under the new law, covered entities are also subject to supplemental reporting requirements and data preservation obligations.”

Why this is important: This article discusses the Cybersecurity and Infrastructure Security Agency’s (CISA) issuance of a Notice of Proposed Rulemaking and publication of a proposed rule to implement the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Among other things, CIRCIA “requires covered entities to report significant cyber incidents within 72 hours and ransomware payments within 24 hours.” The proposed rule places reporting requirements on “covered entities,” so companies need to determine whether they fall within one of the 16 “critical infrastructure sectors” whose members may qualify as covered entities. Companies subject to the proposed rule would be required to report “significant” cyber incidents, which are defined as involving one of the following four scenarios:

  • a substantial loss of confidentiality, integrity, or availability of a covered entity’s information system or network;
  • a serious impact on the safety and resiliency of a covered entity’s operational systems and processes;
  • a disruption of a covered entity’s ability to engage in business or industrial operations or deliver goods or services; or
  • unauthorized access to a covered entity’s information system or network, or any nonpublic information contained therein, that is facilitated through or caused by a compromise of a cloud service provider, managed service provider, or other third-party data hosting provider.

The proposed rule also contains exceptions to the reporting obligations, requirements to preserve information about an incident, confidentiality and other protections for reported information, and penalties for failing to report.
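
For companies sketching internal triage procedures, the rule’s structure maps naturally onto a simple decision model. The Python sketch below is purely illustrative: the class names, fields, and deadline logic are our own assumptions, not anything prescribed by CISA. It simply mirrors the four trigger scenarios and the 72-hour/24-hour reporting windows described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum, auto

class Trigger(Enum):
    """The four scenarios that can make a cyber incident 'significant'."""
    LOSS_OF_CIA = auto()             # substantial loss of confidentiality, integrity, or availability
    OPERATIONAL_IMPACT = auto()      # serious impact on safety/resiliency of operational systems
    BUSINESS_DISRUPTION = auto()     # disruption of operations or delivery of goods/services
    THIRD_PARTY_COMPROMISE = auto()  # unauthorized access via a third-party provider compromise

@dataclass
class Incident:
    detected_at: datetime                                # when the incident was identified
    triggers: set[Trigger] = field(default_factory=set)  # which scenarios apply, if any
    ransom_paid_at: datetime | None = None               # when a ransomware payment was made, if any

def reporting_deadlines(incident: Incident) -> dict[str, datetime]:
    """Compute hypothetical deadlines: 72 hours for a significant incident,
    24 hours for a ransomware payment."""
    deadlines: dict[str, datetime] = {}
    if incident.triggers:  # any one of the four scenarios makes the incident reportable
        deadlines["incident_report_due"] = incident.detected_at + timedelta(hours=72)
    if incident.ransom_paid_at is not None:
        deadlines["ransom_report_due"] = incident.ransom_paid_at + timedelta(hours=24)
    return deadlines

# Example: a ransomware event that disrupted operations and was paid the same day
now = datetime.now()
print(reporting_deadlines(Incident(now, {Trigger.BUSINESS_DISRUPTION}, ransom_paid_at=now)))
```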

The public comment period for the proposed rule recently ended. Accordingly, we should soon see CISA take action on this proposed rule. Once a final rule is published, companies need to confirm whether they are among the entities governed by the rule. If they are, they need to put procedures in place before a cyber incident occurs so they can fully comply with the final rule. If you have any questions about how to comply with the rule or other cybersecurity obligations, or if you would like to discuss the proposed rule, contact a member of Spilman’s data privacy and cybersecurity team. --- Nicholas P. Mooney II

2024 Guidance Update on Patent Subject Matter Eligibility, Including Artificial Intelligence

"This guidance update will assist USPTO personnel and stakeholders in evaluating the subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology (AI inventions)."

Why this is important: In line with Executive Order 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (October 30, 2023), the USPTO is updating its guidance on patent subject matter eligibility to address innovations in critical and emerging technologies, particularly AI. This update will help USPTO personnel and stakeholders assess the eligibility of claims in AI-related patent applications and patents. It introduces new examples to aid in applying eligibility guidance during patent examination, appeals, and post-grant proceedings, incorporates stakeholder feedback, and discusses recent Federal Circuit decisions on eligibility. This update, along with the Manual of Patent Examining Procedure (MPEP), will guide USPTO personnel in applying subject matter eligibility law.

Under U.S. patent law, there are four categories of eligible inventions: processes, machines, manufactures, and compositions of matter. In contrast, courts have found that abstract ideas, laws of nature, and natural phenomena are outside or exceptions to the appropriate subject matter for patents. With the rise of AI tools assisting inventors in the discovery, conception, and reduction to practice of new inventions, the lines between what is and is not patentable subject matter have become less clear.

The Alice/Mayo test for determining subject matter eligibility remains unchanged, but the MPEP has been updated to consolidate and include all prior USPTO guidance, with future updates to reflect recent court decisions. Stakeholder feedback highlights two specific concerns for AI inventions: evaluating whether a claim recites an abstract idea, and assessing improvements recited in the claim.

For the subject matter eligibility analysis under 35 U.S.C. 101, the involvement of AI in creating an invention is not considered in applying the Alice/Mayo test and USPTO eligibility guidance. The focus is on whether the claimed invention itself is eligible for patenting, not how it was developed.

In contrast, the USPTO recently issued guidance on inventorship for AI-assisted inventions, created by natural persons using AI systems. Current statutes, such as 35 U.S.C. 101 and 115, do not recognize contributions by AI systems for inventorship purposes, even if they were instrumental. However, AI-assisted inventions can still be patented if one or more persons significantly contributed to the claimed invention.

The guidance also includes new examples to help examiners and practitioners apply the USPTO subject matter eligibility requirements. Example 47 demonstrates the eligibility analysis for claims involving AI, specifically using an artificial neural network to detect anomalies. Example 48 applies the analysis to AI-based methods for analyzing speech signals and separating desired speech from background noise. Example 49 analyzes method claims for an AI model that personalizes medical treatment based on individual patient characteristics.

In sum, the latest guidance builds upon the USPTO’s prior actions addressing the rise of AI tools and their effect on both inventorship and subject matter eligibility, and it provides more clarity for those working in these spaces. The full guidance update is available for review and formal public comment. --- Shane P. Riley

Senators Introduce Bipartisan Healthcare Cybersecurity Legislation

“The bill would create a special liaison within the Cybersecurity and Infrastructure Security Agency to help coordinate the government’s response during cyber incidents.”

Why this is important: The healthcare industry is one of the largest targets for cybercriminals. U.S. Senators Jacky Rosen (D-Nev.), Todd Young (R-Ind.), and Angus King (I-Maine) have introduced a bill, the Healthcare Cybersecurity Act, that seeks to boost the implementation of stronger cybersecurity in the healthcare industry. The proposed bill would create a special liaison to the Department of Health and Human Services (HHS) within the federal Cybersecurity and Infrastructure Security Agency (CISA). This is intended to accelerate the sharing of intelligence and information so that cyberattacks in the healthcare industry can be averted. The bill was proposed in the wake of the recent cyberattack that targeted medical claims processor Change Healthcare. Additionally, in an attempt to avoid future interruptions in the delivery of healthcare as the result of a cyberattack, the bill is intended to work in conjunction with HHS’s voluntary cybersecurity goals geared toward raising cybersecurity standards in the industry. --- Alexander L. Turner

Independent and Private Schools Address Unique Cybersecurity Threats

“Cybercriminals might find independent and private institutions more attractive than public schools.”

Why this is important: Unlike public K-12 schools, private and independent schools charge tuition and may offer financial aid. This means they also store greater quantities of sensitive financial information about their students compared to their public counterparts, making them prime targets for hackers. This threat is compounded by the digital nature of today’s classrooms. Laptops have replaced textbooks in many schools. Students use phones and tablets to complete assignments or access school-related apps and platforms. The opportunities for hackers to gain access to schools’ networks are endless.

The goal, of course, is to prevent cyberattacks before they occur and to avoid the costly lawsuits and remedial measures that follow. But those preventative measures come with their own hefty price tags. While some private and independent schools have the donations and tuition revenue to support a robust information technology budget, others do not have the funds to pay for the equipment, software, and updates needed to maintain defenses against a cyberattack. And it is not just about hardware and software: those investments may be ineffective if a school does not also spend time and resources training staff, faculty, and students on how to use them. Preventing a member of the school community from clicking a suspicious link or forwarding a malicious email is among the most powerful defenses against a cyberattack.

So, what should school administrators do, especially if they are working with limited budgets? Experts suggest they start by implementing basic security tools, including multifactor authentication, firewalls, and data backup. Cyber liability insurance will also help offset the costs of remedying an attack. Regular security audits could help identify weaknesses or changes that occur over time, like when software updates are installed.
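
Where budgets are tight, even the data backup piece can start small. The following is a minimal illustrative sketch, not a product recommendation, and every name in it is our own hypothetical choice: it copies a file into a backup folder and stores a SHA-256 digest alongside it, so a later audit can verify the copy has not been corrupted or tampered with.

```python
import hashlib
import shutil
from pathlib import Path

def backup_with_checksum(src: Path, backup_dir: Path) -> str:
    """Copy src into backup_dir and store its SHA-256 digest next to the copy."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves file metadata such as timestamps
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    (backup_dir / (src.name + ".sha256")).write_text(digest)
    return digest

def verify_backup(backup_dir: Path, name: str) -> bool:
    """Re-hash a backed-up file and compare it to the stored digest (a simple audit check)."""
    stored = (backup_dir / (name + ".sha256")).read_text().strip()
    current = hashlib.sha256((backup_dir / name).read_bytes()).hexdigest()
    return current == stored
```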

At the top of the list: communication. Administrators should know how and why their school community is using technology and where the community might need support or oversight, and they should take the time to educate staff, faculty, and students about the risks of using technology in the classroom and how to prevent misuse and cyberattacks. --- Jamie L. Martines, Summer Associate

How to Craft a Generative AI Use Policy in Higher Education

“With students, faculty and staff already using tools such as ChatGPT, universities need to set guardrails for how and why generative artificial intelligence is implemented.”

Why this is important: Higher education institutions across the nation are racing to establish generative artificial intelligence (AI) use policies before the upcoming fall semester. Although only 23 percent of institutions currently have such policies, there are numerous examples to follow. The EDUCAUSE AI Landscape Study reveals a significant need for AI policy guidance, with nearly half of the respondents feeling their institutions lack appropriate guidelines.

Developing these policies can be daunting due to concerns like cheating, fairness, and data security. Starting with basic guidelines and iteratively refining them is recommended. Policies should be flexible to adapt to rapid technological changes, as standardized policies are unsustainable long-term.

Addressing academic integrity is also crucial, as generative AI can facilitate cheating. Redesigning assignments to reduce the likelihood of cheating and using AI-generated text as a learning tool are potential solutions. Policies should help students become responsible digital citizens without being overly restrictive.

Three key areas of focus for AI policies are:

  • Governance: Address ethical, equitable, and accurate AI use.
  • Pedagogy: Allow professors to define AI use in their courses.
  • Operations: Ensure technical training and support for AI infrastructure.

Policy development should involve engagement at individual, departmental, institutional, and multi-institutional levels. Measuring the success of AI policies through surveys and maintaining stakeholder engagement are also crucial to ensure that the policies have their intended effect and to allow institutions to pivot where needed along the way. --- Shane P. Riley

How to Mitigate Institutional Inequities Involving AI

“When it comes to adopting healthcare AI, large, well-off hospitals are likely to frequently homer while smaller, struggling institutions go down looking.”

Why this is important: AI is projected to be a revolutionary tool in medicine. However, not everyone may benefit equally from this new advancement. Better-off medical institutions, and their patients, will benefit more from the introduction of AI than smaller, less well-off institutions, creating a disparity of care based on pure economics. There are steps that can be taken to avoid this outcome. The first is government policy that provides funding, training grants, and partnership mandates to help smaller community hospitals access AI tools. Another is to increase AI-related education in both academic and continuing education settings. Finally, larger hospitals can be encouraged to collaborate with smaller ones and share AI tools through regional AI hubs, which would permit less advantaged medical institutions and their patients to access those tools. Taking these steps can help guarantee that all patients have access to advanced AI tools. --- Alexander L. Turner

What does Your CEO Need to Know About Cybersecurity?

“CEOs don’t necessarily have to become experts in the technical aspects of cybersecurity to be prepared in case of an attack or — hopefully — stop one before it starts.”

Why this is important: This article provides a significant warning to CEOs through the example of UnitedHealth Group. Earlier this year, its CEO testified before Congress about a cyberattack at UnitedHealth Group’s subsidiary, Change Healthcare. The CEO testified that the attack occurred because Change Healthcare had not enabled multi-factor authentication (which was characterized as “cybersecurity 101”) and that it resulted in UnitedHealth Group paying a $22 million ransom in Bitcoin. The article notes that, while cybersecurity breaches are common, a CEO testifying about a breach before Congress has not been. It warns that the CEO’s role in cybersecurity has changed. Traditionally, cybersecurity was seen as an IT issue. Now, CEOs should play a role in their companies’ overall cybersecurity strategy, including analyzing cybersecurity risks and overseeing incident response and disaster recovery plans. They need to get into the weeds and lead the charge to make cybersecurity part of their companies’ overall culture. The risks are too great to ignore: interruption of business operations, reputational harm, shareholder lawsuits, charges from regulators, and (as seen with Change Healthcare’s breach) testifying before Congress. --- Nicholas P. Mooney II
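
For readers wondering what “cybersecurity 101” multi-factor authentication actually involves, below is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which many authenticator apps implement. It uses only the Python standard library; the function name and parameters are our own illustrative choices, and a real deployment should rely on a vetted MFA product rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """Derive the current one-time password from a shared Base32 secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)                    # counter as an 8-byte big-endian integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app compute the same rolling code,
# so possession of the enrolled device becomes the second factor.
print(totp("JBSWY3DPEHPK3PXP"))  # a widely used demo secret, not a real credential
```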

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Spilman Thomas & Battle, PLLC
