AI News Roundup – EU announces further AI initiatives, IEA releases report on AI, Energy, and Climate, DOGE uses AI to monitor federal workers, and more

McDonnell Boehnen Hulbert & Berghoff LLP

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

  • CNBC reports on the European Commission’s “AI Continent Action Plan,” which aims to boost the European Union’s AI industry in the face of competition from the United States and China. The Commission, which acts as the executive branch of the EU, identified several key pillars upon which the bloc will act to support AI in Europe. Among these are the development of AI Gigafactories (large-scale AI datacenters), a comprehensive “Data Union Strategy” to simplify the rules surrounding data usage for AI training, and measures to ease compliance with the EU’s AI Act regulations. The AI Act, and the bloc’s approach to AI in general, have been criticized by the tech industry (as well as U.S. Vice President J.D. Vance) as allegedly too burdensome on innovation, and the EU has focused its AI efforts on cutting red tape. European Commission President Ursula von der Leyen said at the Paris AI Summit in February that “AI needs competition, but AI also needs collaboration, and AI needs the confidence of the people, and has to be safe.”
  • The International Energy Agency released its first comprehensive report on the energy use and corresponding climate implications of the AI boom. According to Axios, fears that AI will rapidly accelerate the pace of climate change appear “overstated,” though predictions that AI alone will help solve the climate crisis were deemed equally overstated. Regardless, the report details how intertwined energy and AI have become: a typical AI datacenter consumes as much electricity as 100,000 households, and some of the largest centers currently under construction could consume over 20 times that amount. Overall, the report concludes that the future of energy policy in this area is as uncertain as the industry itself. AI may help reduce energy usage through efficiency gains in some instances, but it also carries security risks and risks of overreliance on materials and energy sources from particular countries or volatile regions of the world.
  • Reuters reports that Elon Musk’s Department of Government Efficiency (DOGE) is using AI to surveil the communications of federal workers. DOGE is reportedly using AI to search for “hostility to President Donald Trump and his agenda,” particularly at the Environmental Protection Agency (EPA), where Microsoft Teams communications were searched for “anti-Trump or anti-Musk language.” In response to Reuters’ reporting, the EPA said that AI has been used to “better optimize agency functions and administrative efficiencies,” but was not being used for personnel decisions. DOGE has made AI a focus of its efforts to reduce the size of the federal government, including by creating an AI chatbot for federal workers and by feeding government data into AI systems for further analysis.
  • Bloomberg reports on the development of AI-powered nautical drones by several military tech startups. Blue Water, a Boston-based startup, aims to create an autonomous ship that can travel the open ocean without a single human crew member, while several tech-focused venture capital funds, including Peter Thiel’s Founders Fund and Andreessen Horowitz, have also invested in companies creating autonomous naval equipment. Anduril Industries, another Thiel-backed company, recently unveiled a submersible drone that uses AI to navigate underwater environments. The startups generally claim to be addressing a perceived gap in naval construction between the U.S. and its geopolitical rival China, which allegedly builds over 1,700 commercial ships a year compared to the U.S.’s five. Oceangoing vessels face many engineering challenges, including saltwater corrosion and major storms, which make the industry a difficult area to break into. However, rising tensions with global powers such as China and Russia have also sparked investment in such military and weapons systems, with these startups applying technology developed in the AI boom primarily for navigation and sensing.
  • The Financial Times reports that OpenAI has reduced the amount of time its AI models are subject to safety testing. According to sources within the company, the budget and time allotted for testing OpenAI’s models have been drastically cut in recent months. Whereas safety testers had over six months to evaluate GPT-4, OpenAI plans to release its next model, o3, as soon as next week, meaning some testers will have access to the model for less than a week. The sources say the rush is driven by competition in the AI space, especially against tech giants Google and Meta as well as startups such as Anthropic and Elon Musk’s xAI. The tests are often expensive, involving the hiring of external experts and the creation of safety-focused data sets, which could also explain the desire to slim down the process. However, such moves do not come without risks: Steven Adler, a former OpenAI safety researcher, told the FT that “not doing such tests could mean OpenAI and the other AI companies are underestimating the worst risks of their models.”
