Digital Transformation Notes | May 2024


April saw further regulatory and policy developments across AI and digital assets. The continued high level of activity among so many legislators is fully in line with the findings of the newly published AI Index Report 2024, the annual magnum opus of all things AI by Stanford's Institute for Human-Centered Artificial Intelligence. The note below looks at this report as well as the latest iteration of the EU AI Act, Singapore's important new rules for digital assets, and the draft American Privacy Rights Act.


AI: Open Source Models under the AI Act of the European Union 

On 17 April 2024, the European Parliament released a corrigendum to the AI Act, the result of a final check by lawyer-linguists designed to polish the text, correct errors, and adjust numberings and references. As a result of this exercise, the Act is now ready for publication. Following its publication, the AI Act will enter into force at some point in June. The first wave of its mandatory obligations, relevant for any business, includes:

  • Six months after entry into force: any prohibited practices must be discontinued. This prohibition is particularly important as it may apply quite broadly and covers, among others, AI-based social scoring, the untargeted scraping of facial images for facial recognition databases, and manipulative workplace techniques;
  • Nine months after entry into force: codes of practice for general-purpose AI models, such as ChatGPT or Gemini, must be finalized. Providers of general-purpose AI models will need to provide comprehensive information about the interoperability of their models for the benefit of any downstream application. This is particularly relevant at a time when significant business attention rests on building new ecosystems of such downstream applications for just about any type of use.

Against the backdrop of these important next steps on the AI Act's route to full application, it is a real surprise that the final updates to the text as now released contain a material change with regard to open-source AI models.

The previous version of Article 2(12) provided that the AI Act applied to "AI systems released under free and open-source licences" unless certain conditions were met. The new, corrected wording reads almost like the opposite: "This Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Art. 5 or 50."

Apparently, the legislator had taken note of the fact that the previous wording created significant confusion about which obligations apply to these very popular and commercially highly relevant open-source models. The latest correction of the text seems to resolve this confusion, at least to some degree. It essentially provides that the AI Act only regulates open-source models to the extent that:

  • they are marketed or used as high-risk AI systems;

  • they fall under the prohibited AI practices of Article 5; or

  • they are explicitly made subject to the transparency obligations of Article 50.

Yet, further difficulties and a certain degree of confusion regarding the treatment of open-source models remain. For example, Recital 103 of the AI Act seems to state that the exemption for open-source models does not apply if AI components need to be paid for or are otherwise monetized. Much could be read into this exception to the detriment of open-source models, as they will typically be accompanied by paid-for services.

Moreover, the new definition of open-source AI models seems to have rendered obsolete the explicit exemptions for certain open-source general-purpose AI models (i.e. open-source large content models, which make up much of the market). Further guidelines on reconciling the various provisions for open-source AI models will be needed going forward.

Further information about the status and content of the EU AI Act is provided in our Data Chronicles podcast with Eduardo Ustaran and Scott Loughlin.


Digital Assets and Blockchain: Singapore’s new rules for digital payment token service providers

On 2 April 2024, in a move to bolster consumer protection and financial stability in digital finance, the Monetary Authority of Singapore introduced important amendments to the Payment Services Act (PS Act) and surrounding legislation. The changes focus on digital payment tokens and the new services related to them. This takes place in the broader context of further cementing Singapore's position as a global hub for financial services and the corresponding need to double down on anti-money laundering (AML) and countering the financing of terrorism (CFT). In particular, the amendments address:

  • the provision of custodial services for digital payment tokens;

  • the facilitation of transmitting digital payment tokens; and

  • the facilitation of cross-border money transfer of such tokens, even for transactions and/or settlements taking place outside Singapore.

The rollout of the new rules for these services has already started and will progress in stages over the coming months. Entities currently engaged in any of the above activities must notify the Monetary Authority of Singapore within 30 days and submit a license application within six months of 4 April 2024.

Obtaining the license and achieving full compliance with the new rules will require not only comprehensive AML/CFT measures but also a clear segregation of customers' assets: assets must be placed in a trust account, detailed records must be maintained, and effective systems and controls must be in place to protect the integrity and security of customers' assets.

Explore our DAB Hub for further insights and updates on digital assets and blockchain.

For more information on AML regulation, in particular on the EU's Anti-Money Laundering Authority (AMLA), see our video series here.


Notable publication of the month: the AI Index Report 2024

Aside from new laws and regulations, a highlight of April was the publication of the AI Index Report 2024, the most comprehensive review of the current status of AI developments, outlining its leading trends, major challenges, and business and societal implications.

As in previous years, the report was put together by Stanford University's Institute for Human-Centered Artificial Intelligence (HAI). Among the report's many relevant findings (all of which are worth reading), here are some key takeaways of particular relevance from a legal perspective:

  • AI's impact on labor: research in 2023 shows that AI enhances the speed and quality of work, potentially narrowing the skills gap. However, warnings of performance declines in the absence of proper oversight suggest the need for a balanced integration of AI into the workplace.

  • AI propelling science: AI's contribution to scientific discovery has accelerated, with new applications enhancing research efficiency and material discovery, pointing to AI's growing role in scientific innovation.

  • Increasing AI regulations: there has been a sharp increase in AI regulations in the U.S., reflecting growing governmental oversight as AI becomes more integral to societal functions.

  • Growing public awareness and concern: Public awareness of the impacts of AI is growing globally, with nervousness about AI products on the rise. In the U.S., concern outweighs excitement, suggesting a shift in public sentiment towards caution about the future role of AI.

Reading through the report's findings makes it ever clearer that regulators and legislators will need to focus keenly on developing ever more refined and attuned rules to deal, ethically and practically, with the risks and opportunities of this newly emerging reality. They will also need to work hard to find common ground with one another and across country lines to adequately address a phenomenon that is truly global in nature.


Also interesting: the draft American Privacy Rights Act

In a development that may seem surprising, members of the U.S. Congress released a draft of the American Privacy Rights Act. The trigger for this proposal seems to be a growing urge to streamline the current patchwork of state and sectoral data protection laws into a national privacy law.

Stakeholders and the legal community have already weighed in, and the chances of the bill passing now hinge on Congress's ability to smoothly navigate the interests of the affected parties.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising

Written by:

Hogan Lovells