Dr. T's AI brief

107 views
Skip to first unread message

dtau...@gmail.com

unread,
Jul 20, 2024, 8:43:33 AM7/20/24
to ai-b...@googlegroups.com

Self-Replicating 'Life' Created from Digital ‘Primordial Soup’

Google researchers developed a self-replicating form of artificial life from random data. Their experiments involved the random mingling, combination, and execution of tens of thousands of separate pieces of computer code, with no explicit rules determining changes in the code samples and no rewards for specific behavior. The researchers observed the development of self-replicating programs that multiplied until they reached the population cap for the code samples. The researchers saw the emergence of new types of replicators that replaced the previous population.
[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (July 9, 2024)

 

U.S. Says Russian Bot Farm Used AI to Impersonate Americans

The U.S. Department of Justice (DOJ) said it disrupted a bot farm that used AI software to create profiles on social media platform X to impersonate Americans and disseminate Russian propaganda. Part of a project allegedly approved by the Russian government, the bot farm’s propaganda campaign involved close to 1,000 fake profiles, with X suspending the accounts for terms of service violations.
[
» Read full article ]

NPR; Shannon Bond (July 9, 2024)

 

Physical System Learns Nonlinear Tasks Without Traditional Computer Processor

An analog system developed by University of Pennsylvania (Penn) researchers can learn complex tasks like nonlinear regression and "exclusive or" (XOR) relationships. The fast, low-power, scalable system is a contrastive local learning network, whose components learn based on local rules without a centralized processor and no knowledge of the larger structure. Said Penn's Marc Z. Miskin, "Because it has no knowledge of the structure of the network, it's very tolerant to errors, it's very robust to being made in different ways, and we think that opens up a lot of opportunities to scale these things up."
[
» Read full article ]

Penn Today; Erica Moser (July 5, 2024)

 

Will Ray Kurzweil Merge with AI?

ACM Fellow Ray Kurzweil believes "the Singularity," when people merge with AI, will arrive by 2045, citing the rate of growth of computer power. Kurzweil, who received ACM's Grace Murray Hopper Award in 1978 for developing a device that reads text to the blind, wants to experience the Singularity but acknowledges that, at 76, he may not live to see it. Said ACM A.M. Turing Award laureate Geoffrey Hinton, "His prediction no longer looks so silly. Things are happening much faster than I expected."
[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (July 4, 2024)

 

Tech Industry Wants to Lock Up Nuclear Power for AI

Big tech companies are pursuing deals with the owners of U.S. nuclear power plants to power their datacenters. Amazon Web Services, for example, is working with Constellation Energy to obtain electricity directly from an East Coast nuclear power plant. Big tech companies are willing to pay a premium to obtain power directly from a power plant because it reduces the time frame for building datacenters by eliminating the need for new grid infrastructure.
[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Jennifer Hiller; Sebastian Herrera (July 1, 2024)

 

AI Integration Spurs Demand For Liberal Arts Education

Higher Ed Dive Share to FacebookShare to Twitter (7/8, McLean) reports demand for liberal arts education “has declined in recent years as students increasingly eye college programs that directly prepare them for jobs,” but according to “tech and college experts, as businesses launch advanced AI tools or integrate such technology into their operations, liberal arts majors will become more coveted.” Employers will need people “to think through the ethical stakes and unintended consequences of new technologies,” so college leaders “need to take action as AI changes the workforce, scholars say.” One expert said liberal arts students “could provide a more humanistic perspective on the technology, with an eye to ethics, privacy and bias.”

AI-Powered Humanoid Robots Could Solve Global Labor Shortage

CNBC Share to FacebookShare to Twitter (7/8, Rooney) reports AI-powered humanoid robots are emerging across Silicon Valley, with companies like Amazon, Tesla, Microsoft, and Nvidia investing billions. These robots, currently used in warehouses, could eventually operate in homes and offices. Amazon has backed Agility Robotics and is deploying Digit robots in fulfillment centers. Goldman Sachs predicts the humanoid market will reach $38 billion in 20 years, helping with elderly care and labor shortages. AI advancements and a global labor shortage drive this renewed interest. Jeff Cardenas, CEO of Apptronik, highlighted robots filling “dull, dirty, dangerous tasks.”

Report: Apple’s AI Service, Siri Revamp Features Likely Coming Next Year

CNET News Share to FacebookShare to Twitter (7/8, Sherr) says, “Apple announced its new artificial intelligence service and revamped look for its Siri voice assistant in June, with plans to begin testing later this year.” However, “some features likely won’t appear until next year, according to a new report.” The “coming Apple Intelligence service promises many new features when it begins testing later this year, including a revamped look, more intuitive voice controls and integration with OpenAI’s popular ChatGPT.” Recent reporting from Bloomberg “gives more detail on the launch timing, saying Apple plans to offer Siri’s new look and ChatGPT integrations later this year.” But “Siri’s new abilities to control apps with your voice and to understand what you’re looking at on the screen...won’t arrive until next year.”

OpenAI Startup Fund Backs AI Venture To Promote Healthier Lifestyles

TechCrunch Share to FacebookShare to Twitter (7/8, Wiggers) reports, “Huffington Post founder Arianna Huffington and OpenAI CEO Sam Altman are throwing their weight behind a new venture, Thrive AI Health, that aims to build AI-powered assistant tech to promote healthier lifestyles.” TechCrunch adds, “Backed by Huffington’s mental wellness firm Thrive Global and the OpenAI Startup Fund, the early-stage venture fund closely associated with OpenAI, Thrive AI Health will seek to build an ‘AI health coach’ to give personalized advice on sleep, food, fitness, stress management and ‘connection,’ according to a press release issued Monday.”

ED Releases Guidance On AI Development For Schools

Education Week Share to FacebookShare to Twitter (7/8) reports guidance Share to FacebookShare to Twitter released Monday by the Education Department recommends educators “work with vendors and tech developers to ensure artificial intelligence-driven innovations for schools go hand-in-hand with managing the technology’s risks.” According to EdWeek, “companies and tech developers are in a tough spot with AI. Many want to move cautiously in developing tools with educator feedback that are properly tested and don’t amplify societal bias or deliver inaccurate information. On the other hand, developers also want to serve the current market – and don’t want to get left behind the competition.” The guidance argues that “vendors and educators can try new things with AI – like enabling teachers to use it for writing emails – if they consider important questions such as: Who will ensure that students’ private information isn’t shared?” It also recommends that “AI should not be allowed to make decisions unchecked by educators, and that developers need to design AI tools based on evidence-based practices, incorporating educator input and feedback, while safeguarding students’ data and civil rights.”

How Schools Can Learn From Los Angeles Unified School District’s Botched AI Chatbot Rollout

Education Week Share to FacebookShare to Twitter (7/8, Klein) reports in March, the Los Angeles Unified School District “was held up as a trailblazer for its embrace of artificial intelligence, when it unveiled a custom-designed chatbot.” Superintendent Alberto Carvalho called the tool a “game changer” that would “accelerate learning at a level never seen before.” However, in just five months, the district “has temporarily turned off its once-celebrated chatbot ‘Ed.’ That decision appears to have been prompted by upheaval at AllHere, the company LAUSD hired to create the tool at a cost of up to $6 million over five years.” LAUSD has now “become the poster district for what not to do in harnessing AI for K-12 education.” The challenges the district faced “in developing an AI tool offer important lessons for other school systems,” such as vetting ed-tech companies more carefully.

Morehouse College To Launch Animated AI Teaching Assistants

Inside Higher Ed Share to FacebookShare to Twitter (7/9, Coffey) reports Morehouse College in Atlanta, Georgia, “is rolling out 3-D, artificial intelligence-powered bots this fall across five classrooms...that will allow students to ask any question at any time.” According to senior assistant professor Muhsinah Morris, who is spearheading the AI pilot, the goal is “to enhance students’ ability to get access to information that is cultivated in your classroom.” At the “historically Black Atlanta men’s liberal arts college, the new AI bots are trained from a professor’s lectures and course notes plus other material the faculty deem important. Students access the bot with a Google Chrome web browser, which displays a 3-D figure, or avatar, designed by the professor. Students can type in a question box or they can speak aloud – in their native language – and get a verbal response back in a way that mimics the classroom experience.”

Morehouse College Introduces AI Teaching Assistants

The Chronicle of Higher Education Share to FacebookShare to Twitter (7/10, Walters) reports this fall, “a small group of professors at Morehouse College” will use AI-powered teaching assistants (TAs), “which are actually digital avatars resembling each professor’s physical appearance and demeanor.” The project, led by a chemistry professor at the college, aims to assist students with 24/7 questions and lecture delivery. VictoryXR, “the Iowa firm that designed Morehouse’s TAs,” will use materials uploaded by professors and, if needed, “will turn to a large language model from OpenAI – the creator of ChatGPT – to craft answers based on outside information.” Critics, however, point out potential issues with AI responsiveness and the need for effective question formulation.

Researchers Develop Hybrid Intelligence Using Brain Organoids

Popular Mechanics Share to FacebookShare to Twitter (7/9, Orf) reports that researchers from Indiana University Bloomington and Tianjin University have integrated lab-grown brain organoids with AI tools to create hybrid intelligence. Tianjin University’s MetaBOC robot, featuring “organoid intelligence,” can perform tasks like obstacle avoidance and tracking.

China Shifts AI Focus To Humanoid Robots

Forbes Share to FacebookShare to Twitter (7/9, Costigan) reports that at the World AI Conference in Shanghai last week, Huawei Cloud CEO Zhang Ping’an emphasized that China can lead in AI without the most advanced chips. The event highlighted Chinese-made humanoid robots, with companies like Fourier and Tlibot showcasing their innovations. Regulations for robot governance were also introduced. Tesla, featuring its Optimus robot, was among the few American firms present.

Meta Seeking New Executive To Lead Its Integration Of Generative AI

CoinGeek Share to FacebookShare to Twitter (7/9, Kaaru) reports Meta is searching “for a new executive to lead its integration of generative AI (AGI) with emerging technologies, the key among them being the metaverse.” Mark Zuckerberg three years ago “was solely focused on the metaverse” and “even changed the name of his trillion-dollar company from Facebook to Meta.” Since “then came AI, and Zuckerberg is going big on the new technology – this year, Meta will spend up to $40 billion, with AI handling up a sizeable chunk.”

More Universities Integrate AI In Agriculture Programs

Inside Higher Ed Share to FacebookShare to Twitter (7/10, Coffey) reports that universities are increasingly incorporating artificial intelligence (AI) into their agriculture programs. At the University of Missouri, “students bring tools – not tills, tractors or plows, but sensors that use artificial intelligence to measure soil moisture, cameras that distinguish weeds from crops and drones to oversee plant growth from above.” This initiative is part of a broader trend, with institutions such as Iowa State, Washington State, and Purdue University also leveraging AI to prepare students for the evolving agricultural industry. The National Science Foundation’s National Artificial Intelligence Research Institutes, “intended to boost AI research and workforce development,” now span 25 institutions, “with five higher education institutions tapped to focus on boosting the use of AI in agriculture. Each of the five universities received $20 million to spend over five years.”

NC State Robot Helps Advance AI In Agriculture

WTVD-TV Share to FacebookShare to Twitter Raleigh-Durham, NC (7/10) reports that BenchBot 3.0, a robot at NC State, is automating plant species recognition using AI. NC State engineers Mark Funderburk and Lirong Xiang highlight its potential to create detailed field maps for farmers. Xiang envisions a future with fully autonomous, intelligent agriculture systems.

Analysis: AI Technology May Help Patients With Cancer Deal With Emotional Ramifications

An analysis in TIME Share to FacebookShare to Twitter (7/10, Esteva) discusses how AI technology may help patients with cancer address the emotional toll that comes from being diagnosed with the illness. Some have argued that current AI-enabled tests “can swiftly analyze real-world data and translate it into digestible and personalized insights, allowing for a more personalized approach to cancer therapy.” They also argue that integrating AI into the patient treatment process “not only places the patient at the center but also provides more clarity throughout the cancer treatment journey.”

Microsoft, Apple Abandon OpenAI Board Role Plans

Bloomberg Share to FacebookShare to Twitter (7/10, Grant, Subscription Publication) reports Microsoft Corp. and Apple Inc. have decided not to assume board roles at OpenAI, underscoring rising regulatory scrutiny over Big Tech’s influence when it comes to AI. Microsoft will exit from its observer function on the board, according to a letter to OpenAI which Bloomberg saw. While Apple was slated to assume a comparable role, a spokesperson for OpenAI said the company will have no board observers following Microsoft’s exit.

Google No Longer Maintains “Operational Carbon Neutrality” Due To AI

Fortune Share to FacebookShare to Twitter (7/10, Roytburg) reports Google’s recent sustainability report reveals that since 2023, Google has no longer “maintained operational carbon neutrality.” Since 2007, the company has purchased “enough clean-energy supply to match the bulk of the emissions it generates through its data centers and buildings,” but “increasing energy demands from the greater intensity of AI compute” have left the company unable to keep up.

How Principals Are Using AI To Manage Administrative Tasks

Education Week Share to FacebookShare to Twitter (7/10, Banerji) reports school leaders are increasingly using AI tools to manage administrative tasks. Michael Martin, principal of Buckeye High School in Ohio, “has tinkered with ChatGPT and similar services since they launched, and through this experimentation, curated a suite of AI-based tools” to handle tasks such as summarizing emails and scheduling appointments. This helps Martin “complete his ‘algorithmic’ or administrative tasks more quickly so that he gets more time to build relationships with teachers and students.” However, he notes AI’s limitations, like generating non-existent research citations. The superintendent of the Pearl school district in Mississippi “has created a number of smaller chatbots, which ensconce all the information about a particular topic on one platform for school leaders and teachers.”

How AI Tools Can Enhance Classroom Teaching

The Hechinger Report Share to FacebookShare to Twitter (7/10, Berdik) reports that a science teacher at Ron Clark Academy in Atlanta, Georgia, utilizes a voice-activated AI assistant enhance classroom engagement and facilitate lesson delivery. This voice-activated assistant “is the brainchild of computer scientist Satya Nitta, who founded a company called Merlyn Mind,” and it helps teachers navigate digital materials while interacting with students. Launched in November 2022, AI tools like ChatGPT, Khanmigo, and others are increasingly used in education to assist with tasks such as generating quizzes and providing feedback. Among experts, the debate is “about the best mix – what are AI’s most effective roles in helping students learn, and what aspects of teaching should remain indelibly human no matter how powerful AI becomes?”

Journalists Sue OpenAI, Microsoft Over Copyright Claims

The AP Share to FacebookShare to Twitter (7/11) reports veteran journalists Nicholas Gage and Nicholas Basbanes have filed a lawsuit against OpenAI and Microsoft, alleging that the companies’ AI chatbots have “systematically pilfered” their copyrighted work. The lawsuit, now part of a broader class-action case, includes prominent writers like John Grisham and George R. R. Martin. Gage and Basbanes argue that OpenAI, with Microsoft’s support, used vast amounts of human writings without permission or compensation. Microsoft AI Chief Executive Mustafa Suleyman defended the practice under the “fair use” doctrine. Gage emphasized the importance of protecting writers’ intellectual property, saying, “It’s highway robbery.” The case is still in discovery and expected to continue into 2025.

AI Strains Energy Grid, Raises Emissions

The New York Times Share to FacebookShare to Twitter (7/11) reports that the increasing energy demands of data centers driven by artificial intelligence are straining the electricity grid in some regions, leading to higher emissions and hindering the energy transition. Bill Gates, during a media briefing in London, acknowledged the additional load but remained optimistic, stating, “Let’s not go overboard on this,” and predicting that AI would ultimately enhance efficiency to offset the extra demand. Despite Gates’ positive outlook, the article notes that AI continues to significantly impact global energy consumption and emissions.

dtau...@gmail.com

unread,
Jul 21, 2024, 7:35:25 PM7/21/24
to ai-b...@googlegroups.com

Hong Kong Is Testing Its Own ChatGPT-Style Tool

A team of researchers led by the Hong Kong University of Science and Technology has developed a ChatGPT-like tool for the city's employees. Secretary for Innovation, Technology and Industry Sun Dong said the tool, dubbed "document editing co-pilot application for civil servants," is being tested by his bureau before being rolled out government-wide later this year.
[ » Read full article ]

Associated Press; Kanis Leung (July 16, 2024)

 

Hackers Claim Leak of Internal Disney Slack Messages over AI Concerns

Activist hacking group Nullbulge claimed it leaked thousands of Disney’s internal Slack messaging channels, which included information about unreleased projects, raw images, computer codes, and log-ins. The group said it leaked about 1.2 terabytes of information and that it wants to protect artists’ rights and compensation for their work, especially in the age of AI.
[ » Read full article ]

CNN; Ramishah Maruf (July 15, 2024)

 

Epileptic Patient 'Speaks' Using Power of Thought

Researchers at Israel's Tel Aviv University (TAU) and Tel Aviv Sourasky Medical Center demonstrated the ability of a patient with epilepsy who had depth electrodes implanted in his brain to communicate solely using the power of thought. The implants transmitted electrical signals from the patient's brain to a computer trained using deep learning and machine learning, and spoke the transmitted syllables aloud.
[ » Read full article ]

Medical Xpress (July 16, 2024)

 

Bayer, Others Turn to AI to Conquer Superweeds

Big agriculture companies like Bayer and Syngenta are using AI to accelerate the process of developing new herbicides, fungicides, and insecticides. Syngenta estimated AI could drop the average time from discovery to commercialization from 15 years to 10 years. Bayer's CropKey AI system, for instance, can analyze data more quickly than humans to identify chemical molecules that target a weed's protein structure.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Patrick Thomas (July 17, 2024)

 

Universities Don't Want AI Research to Leave Them Behind

To remain relevant in the field of generative AI, universities are shifting their research focus to areas of AI that are less computing-power-intensive. At the same time, academic institutions are looking to build out their computing resources or engage in resource-sharing with other universities. Meanwhile, universities in tech hubs like Silicon Valley, Boston, the Pacific Northwest, and Austin are forging partnerships with industry players.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (July 12, 2024)

 

OpenAI Whistleblowers Allege Company Restricted Employees From Alerting SEC To AI Risks

The Washington Post Share to FacebookShare to Twitter (7/13, Verma, Zakrzewski, Tiku) reports OpenAI whistleblowers “have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.” In a letter, the whistleblowers claim OpenAI “issued its employees overly restrictive employment, severance and nondisclosure agreements that could have led to penalties against workers who raised concerns about OpenAI to federal regulators.” They argue that the “overly broad agreements violated long-standing federal laws and regulations meant to protect whistleblowers who wish to reveal damning information about their company anonymously and without fear of retaliation.”

Amazon’s AI Talent Deal With Adept Raises Antitrust Concerns

The AP Share to FacebookShare to Twitter (7/12, O'Brien, Parvini) reports Amazon has secured a deal with AI startup Adept to employ its CEO and key staff, while also licensing Adept’s AI systems and datasets. This move, described by some as a “reverse acqui-hire,” has raised concerns among US lawmakers about potential anti-competitive practices. US Sen. Ron Wyden (D-OR) stated, “I’m very concerned about the massive consolidation that’s going on in AI.” Wyden has urged antitrust regulators to investigate the deal, highlighting a growing trend of tech giants acquiring talent without formal acquisitions to avoid regulatory scrutiny.

Democratic Senators Scrutinize Big Tech Strategies For Recruiting AI Talent From Small Startups

The AP Share to FacebookShare to Twitter (7/13, O'Brien, Parvini) reports Sens. Elizabeth Warren (D-MA), Peter Welch (D-VT), and Ron Wyden (D-OR) are calling for an investigation into Big Tech companies’ efforts to poach top AI talent from smaller firms. The lawmakers are specifically focused on “‘acqui-hires,’ in which one company acquires another to absorb talent, have been common in the tech industry for decades.” In letter Friday, the lawmakers argued “antitrust enforcers at the Justice Department and the Federal Trade Commission that ‘sustained, pointed action is necessary to fight undue consolidation across the industry.’”

Academic Researchers Working To Increase Equity In AI Technology

Inside Higher Ed Share to FacebookShare to Twitter (7/15, Palmer) reports academic researchers “know that artificial intelligence (AI) technology has the potential to revolutionize the technical aspects of nearly every industry,” but they have “limited access to the expensive, powerful technology required for AI research.” The divide has scholars and “other government-funded researchers concerned that the developments emerging from the AI Gold Rush could leave marginalized populations behind.” Removing inherent biases in generative AI “is one of the overarching goals of the National Artificial Intelligence Research Resource pilot (NAIRR), which the National Science Foundation (NSF) helped launch in January.” Through the two-year pilot, “so far 77 projects – the majority of which are affiliated with universities – have received an allocation of computing and data resources and services, including remote access to Summit and other publicly funded supercomputers.”

Google Awards Grants To Black, Latino Founders Who Use AI

Forbes Share to FacebookShare to Twitter (7/15, Alexander) reports Google for Startups “recently announced its Black Founders Fund and Latino Founders Fund had together awarded grants to a combined 20 startups.” Each startup, which incorporates artificial intelligence in its business model, “is receiving a $150,000 non-dilutive cash award (in other words, Google gets no equity in return for its money) and $100,000 in Google Cloud credits.” Additionally, recipients will gain “access to mental health resources and mentorship from Google experts in AI and sales.” This initiative comes amid a significant decline in venture capital funding for Black-led startups, which dropped 71% in 2023. One recipient “uses AI to personalize online makeup shopping and was recognized by Forbes on the 2023 30 Under 30 list.”

AI Boom Increases Energy Demand

Fast Company Share to FacebookShare to Twitter (7/14) reports that the rise of artificial intelligence has significantly increased energy consumption in tech companies. Large language models like ChatGPT require much more energy than traditional queries, leading to higher carbon emissions. This surge in energy demand is pressuring the electrical grid and prompting energy companies to consider options like restarting dormant nuclear reactors. Data centers are exploring more efficient cooling methods and flexible computing to manage energy use. The industry faces challenges in balancing growth with sustainability and grid stability.

California State Bill On AI Regulation Sparks Debate

Fortune Share to FacebookShare to Twitter (7/15, Goldman) reports, “A California state bill has emerged as a flashpoint between those who think AI should be regulated to ensure its safety and those who see regulation as potentially stifling innovation.” The bill, which will head to a “final vote in August, is sparking fiery debate and frantic pushback among leaders from across the AI industry – even from some companies and AI leaders who had previously called for the sector to be regulated.”

New Hampshire Schools Adopt AI Tutoring Program To Enhance Classroom Learning

New Hampshire Bulletin Share to FacebookShare to Twitter (7/15) reports that New Hampshire schools will introduce Khanmigo, an AI-driven educational tool by Khan Academy, in the upcoming school year. This program, which aims to provide personalized learning amidst teacher shortages, allows students to “pose any question they like” to literary characters, historical figures, and receive tutoring in various subjects. After the Executive Council “approved a $2.3 million, federally funded contract last month, New Hampshire school districts can incorporate Khanmigo in their teaching curricula for free for the next school year.” To some educators and administrators, “the program offers glittering potential,” while others raise concerns about AI accuracy and bias. Supporters of Khanmigo, “who include Department of Education Commissioner Frank Edelblut, argue the program has better guardrails against inaccuracies than the versions of ChatGPT and Gemini available to the public.”

Carnegie Mellon University Professor Advocates AI For Constructive Student Debates

Inside Higher Ed Share to FacebookShare to Twitter (7/16, Quinn) reports that to “help students sharpen their ideas,” Simon Cullen, an assistant professor at Carnegie Mellon University “who’s also an artificial intelligence and education fellow at the university, has required them to argue with an AI chat bot called Robocrates that he helped create.” In addition to his Dangerous Ideas course, Cullen and a postdoctoral scholar are developing another AI program, Sway, “that digitally matches students with those they disagree with on issues such as abortion.” Cullen emphasizes the importance of debating to form robust opinions, despite the fear of peer judgment. Next month, Cullen and the postdoctoral scholar “will offer faculty members and administrators outside of Carnegie Mellon the chance to use the program for the first time.”

FTC Requests Details On Amazon’s AI Hiring Deal

Reuters Share to FacebookShare to Twitter (7/16, Hu, Bensinger, Godoy) reports the US Federal Trade Commission (FTC) has asked Amazon to provide more details on its deal to hire top executives and researchers from AI startup Adept. The inquiry, which reflects the FTC’s growing concern about AI deals, follows Amazon’s announcement that Adept Chief Executive David Luan and others were joining Amazon. Luan now runs the “AGI Autonomy” team under Rohit Prasad.

        CNBC Share to FacebookShare to Twitter (7/16, Palmer) reports the FTC issued a report in April warning that partnerships like those between Microsoft and Inflection AI, and Amazon and AI startup Anthropic, may allow companies to “shape these markets in their own interests.” Lawmakers, including Sen. Ron Wyden (D-OR), have cited Amazon’s deal with Adept as an example of tech companies making acquihires to avoid antitrust scrutiny. As part of the agreement announced last month, Amazon hired Adept co-founder and CEO Luan and other team members, and licensed Adept’s technology, multimodal models, and datasets.

Trump Allies Reportedly Formulating New Executive Order Loosening Restrictions, Regulations On AI For Defense Purposes

The Washington Post Share to FacebookShare to Twitter (7/16) reports that former president Trump’s allies “are drafting a sweeping AI executive order that would launch a series of ‘Manhattan Projects’ to develop military technology and immediately review ‘unnecessary and burdensome regulations’ – signaling how a potential second Trump administration may pursue AI policies favorable to Silicon Valley investors and companies.” The framework “would also create ‘industry-led’ agencies to evaluate AI models and secure systems from foreign adversaries, according to a copy of the document viewed exclusively by The Washington Post.” The framework “presents a markedly different strategy for the booming sector than that of the Biden administration, which last year issued a sweeping executive order that leverages emergency powers to subject the next generation of AI systems to safety testing.”

Idaho Colleges Grappling With Generative AI Management

Idaho Capital Sun Share to FacebookShare to Twitter (7/17, Draisey) reports this year, Idaho lawmakers “passed two laws restricting the use of AI,” while state colleges and universities are now “grappling with the new technology and its implications for how students learn and perform in the classroom.” Although the introduction of AI “brings different approaches,” its rise in academic settings “also comes with ethical challenges. Educators must balance the educational benefits while also maintaining academic integrity in their classrooms.” For example, the University of Idaho “uses a structured approach with tools like Turnitin, which checks for plagiarism and AI-generated content, and Zero GPT, a specialized AI text detection service.” Boise State University “chooses not to use AI detection tools and instead relies on faculty judgment.” Similarly, Lewis-Clark State College and the College of Western Idaho “have also chosen to not use AI detection tools.”

Meta Pauses AI Model Release In EU Over Regulatory Issues

Axios Share to FacebookShare to Twitter (7/17) reports Meta will withhold its next multimodal AI model from the European Union due to regulatory uncertainties. Meta said, “We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment.” Meta plans to incorporate these models into various products, including smartphones and Meta Ray-Ban smart glasses. The decision also affects European companies’ access to these models, despite their open license. Meta’s issue centers on compliance with GDPR for training models using European data. The UK, with similar laws, will still receive the new model. Meta emphasizes that training on European data is crucial for regional relevance, noting competitors like Google and OpenAI already do so.

AI Enhances Cybersecurity Amid Rising Cybercrime

Entrepreneur Magazine Share to FacebookShare to Twitter (7/17, Wong) reports cybercrime has surged globally, causing over $12 billion in damages in the past decade. AI now plays a crucial role in both perpetrating and combating cyber threats. Chief information security officers leverage AI technologies like machine learning to detect anomalies and prevent damage. Amazon GuardDuty, an AI-based threat detector, protects AWS accounts by analyzing data and automating threat remediation. IBM Watson for Cybersecurity also uses AI to detect threats from various sources. Despite advancements, challenges remain, including securing generative AI projects. Case studies of Andritz AG and United Family Healthcare illustrate successful AI-based cybersecurity implementations. As generative AI use expands, the need for robust cybersecurity will grow, necessitating advancements in AI-based protection.

AI’s Role In College Admissions Sparks Ethical Debate

Education Week Share to FacebookShare to Twitter (7/18, Klein) reports the use of AI tools in college admissions essays has become a contentious issue. Research by foundry10 reveals that about “a third of high school seniors who applied to college in the 2023-24 school year acknowledged using an AI tool for help in writing admissions essays,” with some relying on it for final drafts. Jennifer Rubin, a researcher at foundry10, highlights the “ethical gray area that students and [high school] counselors don’t have any guidance” on how to navigate. While some institutions like CalTech and Georgia Polytechnical Institute permit limited AI use, others like Brown University prohibit it entirely. Rubin “said, because there’s no way to check on what kind of assistance an applicant received, human or not.”

OpenAI To Introduce GPT-4o Mini

Bloomberg Share to FacebookShare to Twitter (7/18, Subscription Publication) reports that OpenAI is introducing GPT-4o mini, “a more affordable, slimmed-down version of its flagship artificial intelligence model to appeal to a wider range of developers and business customers in an increasingly crowded market for AI services.” The startup “said the updated model will be available [Thursday] for free users and paying ChatGPT Plus and Team subscribers, and will be offered to enterprise customers next week.”

dtau...@gmail.com

unread,
Jul 27, 2024, 8:56:15 AM7/27/24
to ai-b...@googlegroups.com

Google DeepMind Takes Step Closer to Cracking Top-Level Math

Google DeepMind's AlphaProof and AlphaGeometry 2 systems partnered to tackle questions from the International Mathematical Olympiad, a global math competition for secondary-school students, together earning a silver medal. In each of the questions they successfully answered, the systems scored perfect marks, but for two out of the six questions, they were unable to even begin working towards an answer.
[
» Read full article ]

The Guardian (U.K.); Alex Hern (July 25, 2024)

 

Model Helps LLMs Better Understand Spreadsheets

Microsoft researchers developed SpreadsheetLLM to encode spreadsheet contents into a format accessible to large language models (LLMs). The experimental model uses an encoding mechanism to compress spreadsheet data 96% while preserving the data structure and relationships, enabling LLMs to handle large datasets while minimizing token usage. Said Constellation Research Inc. analyst Holger Mueller, “If Microsoft can nail this properly, it will not only secure the future of Excel, but change the future of work as we know it.”
[ » Read full article ]

Silicon Angle; Mike Wheatley (July 15, 2024)

 

AI Gives Voice Back to U.S. Rep. Wexton

U.S. Rep. Jennifer Wexton (D-VA) regained the voice she lost due to progressive supranuclear palsy with the help of an AI voice-cloning program from ElevenLabs. On Thursday, Wexton delivered the first-ever speech made on the House floor using an AI-cloned voice. The program lets Wexton type her thoughts into an iPad, which speaks the text aloud in her own voice. Wexton said AI voice-cloning technology "is humanizing, and it is empowering."
[
» Read full article ]

Associated Press; Dan Merica (July 25, 2024)

 

Push to Develop Generative AI, Without All the Lawsuits

Getty Images and Shutterstock are among the stock image companies using their own data to develop AI image generators to avoid the lawsuits plaguing Google, OpenAI, and other companies that scraped content from the Web when building their image generators and AI chatbots. Getty has partnered with Picsart on an AI image model and with Nvidia on an image generator and has provided images for an AI model being developed by Israeli startup Bria AI. Shutterstock is working on AI models with Databricks and Nvidia.


[
» Read full article *May Require Paid Registration ]

The New York Times; Nico Grant; Cade Metz (July 22, 2024)

 

AI, Needing Copper, Helps to Find It

KoBold Metals announced on July 18 that its AI technology had identified a copper lode a mile underground in Zambia, which is said to be the largest copper discovery in more than a decade. KoBold estimated the mine, when fully operational, would generate no less than 300,000 tons of copper annually over a period of decades. The findings are significant, given the vast amounts of copper needed by AI datacenters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Max Bearak (July 18, 2024)

 

Artists Protect Their Work from Gen AI

University of Chicago researchers are helping artists protect their work from being included in generative AI training models. The Glaze tool they developed implements subtle changes that trick the AI into detecting a different art style, while their Nightshade tool confuses AI training models about what is in an image. However, Pennsylvania State University's Jinghui Chen cautioned, "When AI becomes stronger and stronger, these anti-AI tools will become weaker and weaker."
[ » Read full article ]

Associated Press; Sarah Parvini (July 18, 2024)

 

AI Brought 11,000 College Football Players to Digital Life

Electronic Arts (EA) developers used AI for the first time in the making of its newly released college football video game. It took them three months to incorporate the likenesses of around 11,000 players into EA Sports College Football 25. The process involved gathering the athletes' headshots, then using AI technology to create full 3D avatars of each. Artists were used to improve the digital versions as necessary, feeding the changes into the AI program to help it learn from its mistakes.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Sarah E. Needleman (July 21, 2024)

 

Tech Industry Forms Coalition For Secure AI To Establish Security Standards

Axios Share to FacebookShare to Twitter (7/18, Sabin) reported Google announced the formation of the new Coalition For Secure AI at the Aspen Security Forum taking place in Colorado this week. The new coalition, whose founding members include PayPal, Microsoft and Amazon, will begin “its work by developing standards for software supply chain security for AI systems, compiling resources to measure the risk of these tools and pulling together a framework to help defenders determine the best use cases for AI in their work.”

WPost Report: California Now “Ground Zero” In AI Regulatory Battle

The Washington Post Share to FacebookShare to Twitter (7/19, De Vynck, Zakrzewski, Tiku) reports California legislators are “debating a proposal that would force the biggest and best-funded companies to test their AI for ‘catastrophic’ risks before releasing it to the public,” making the state “ground zero for the battle over government regulation of AI.” The measure “is also shedding light on the limits of Silicon Valley’s enthusiasm for government oversight, even as key leaders such as OpenAI CEO Sam Altman publicly urge policymakers to act,” as some experts say that “by mandating previously voluntary commitments, Wiener’s bill has gone further than tech leaders are willing to accept.”

AI-Powered Machines Transform Agriculture

The Los Angeles Times Share to FacebookShare to Twitter (7/22, Smith) reports that nearly 200 farmers, academics, and engineers gathered in Salinas to witness AI-powered agricultural machines. Devices like Carbon Robotics’ LaserWeeder use AI to identify and eliminate weeds with lasers, reducing reliance on chemical herbicides. The shift addresses health risks associated with traditional pesticides, such as paraquat, dacthal, and glyphosate. However, concerns arise over potential job losses in California’s agriculture sector. Experts highlight the environmental benefits and the need for new labor solutions.

AI Chatbots Struggle With Breaking Political News

The Washington Post Share to FacebookShare to Twitter (7/22, Kelly) reports that AI chatbots failed to keep up with recent political developments, including President Biden’s withdrawal from the 2024 race and the shooting at former President Trump’s rally in Pennsylvania. Microsoft’s Copilot redirected users to Bing for election-related queries, while Google’s Gemini and Meta AI also faced challenges. Jevin West from the University of Washington emphasized the need for reliable news sources over AI bots for current events.

OpenAI Develops New AI Transparency Technique

Insider Share to FacebookShare to Twitter (7/20, Varanasi) reports that OpenAI has introduced a new method for enhancing AI model transparency by having them communicate with each other. This approach, showcased this week, aims to make more powerful AI models explain their reasoning processes. OpenAI tested the technique with math problems, where one model explained its solutions and another checked for errors. This initiative aligns with OpenAI’s mission to create safe and beneficial artificial general intelligence. The company has faced recent internal challenges, including key departures from its safety department, raising concerns about its commitment to AI safety.

Survey: Most Graduates Believe AI Should Be Taught In College

Inside Higher Ed Share to FacebookShare to Twitter (7/23, Coffey) reports most college graduates “believe generative artificial intelligence tools should be incorporated into college classrooms, with more than half saying they felt unprepared for the workforce, according to a new survey from Cengage Group, an education-technology company.” The newly released survey “found that 70 percent of graduates believe basic generative AI training should be integrated into courses; 55 percent said their degree programs did not prepare them to use the new technology tools in the workforce.” The share of respondents who “expressed uneasiness about their facility with generative AI varied by age; 61 percent of Generation Z graduates...said they felt unprepared, compared to 48 percent of millennials (28 to 43 years old), 60 percent of Gen Xers and 50 percent of baby boomers.” Cengage Group polled recent graduates “from two- and four-year institutions, as well as those who received skills certificates in the last year.”

        Forbes Share to FacebookShare to Twitter (7/23, T. Nietzel) reports the 2024 Cengage Group Employability Report “is based on surveys of 1,000 U.S. employers and 974 recent graduates.” The survey also showed “a growing recognition among graduates about the importance of post-secondary education for career success. Two-thirds (68%) believe their education has positioned them for success in the current job market.” The rise of AI and “other technologies also has recent graduates worried about their career choices, with more than 39% fearing that generative AI could replace them entirely. Employers reinforced this view with more than half (58%) saying they were more likely to interview and hire applicants with AI experience.” Michael Hansen, CEO of Cengage Group, said in a news release, “The data supports the growing need for institutions to integrate GenAI training and professional skills development.”

Meta Announces Llama 3.1

CNBC Share to FacebookShare to Twitter (7/23, Leswing, Vanian) reports Meta announced version 3.1 of the Llama AI model. The latest update is available “in three different versions, with one variant being the biggest and most capable AI model from Meta to date.” Llama continues to be available as an open-source platform. CNBC notes Meta believes that by making technology like Llama open-source, Meta “can attract high-quality talent in a competitive market and lower its overall computing infrastructure costs, among other benefits.”

AI Companies Review Voluntary Commitments

MIT Technology Review Share to FacebookShare to Twitter (7/22) reports that seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have reviewed their progress on voluntary commitments made with the White House to develop safe AI. The commitments include improving testing and transparency, sharing risk information, and enhancing cybersecurity. While companies have made strides in areas like red-teaming and watermarking, experts note that more work is needed for comprehensive governance and protection of rights. The White House continues to push for bipartisan legislation to enforce these commitments, emphasizing the need for ongoing industry cooperation and regulatory oversight.

Group Of Senators Demand OpenAI Turn Over Safety Data

The Washington Post Share to FacebookShare to Twitter (7/23, Verma, Zakrzewski, Tiku) reports a coalition of five Democratic-leaning senators “demanded in a Monday letter that OpenAI turn over data about its efforts to build safe and secure artificial intelligence, following employee warnings that the company rushed through safety-testing of its latest AI model, which were detailed in The Washington Post earlier this month.” The group, led by Sen. Brian Schatz (D-HI), “asked OpenAI’s chief executive Sam Altman to outline how the ChatGPT-maker plans to meet ‘public commitments’ to ensure its AI does not cause harm, such as teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks, in the letter obtained exclusively by The Post.” The senators “also asked the company for information about employee agreements, which could have muzzled workers who wished to alert regulators to risks.”

OpenAI Reassigns Safety Executive

CNBC Share to FacebookShare to Twitter (7/23, Field) reports that OpenAI has reassigned Aleksander Madry, previously head of preparedness, to a role focused on AI reasoning. This change occurred shortly before Democratic senators sent a letter to CEO Sam Altman seeking information on OpenAI’s safety practices. Madry will continue to work on core AI safety efforts in his new position. OpenAI has faced increasing scrutiny over safety concerns, including antitrust investigations by the FTC and the Department of Justice. The company’s safety culture has been criticized by former employees, leading to leadership changes and team disbandments.

        Musk, Zuckerberg Criticize OpenAI’s Name. Insider Share to FacebookShare to Twitter (7/24) reports that both Mark Zuckerberg and Elon Musk have criticized OpenAI for being a “closed” AI model despite its name. Zuckerberg pointed out the irony of the name while Musk, a co-founder of OpenAI, reiterated his discontent with the company’s direction. Musk noted that OpenAI was intended to be an open-source, non-profit counterweight to Google but has become a closed, profit-driven entity controlled by Microsoft. Despite the criticism, Zuckerberg praised OpenAI CEO Sam Altman for his leadership under public scrutiny.

OpenAI Introduces SearchGPT To Challenge Google’s Search Dominance

The Wall Street Journal Share to FacebookShare to Twitter (7/25, Subscription Publication) reports that OpenAI launched a test version Thursday of SearchGPT, a search engine that cites sources from partners like News Corp and the Atlantic. SearchGPT summarizes information and allows follow-up questions, linking sources at the end of answers. The Guardian (UK) Share to FacebookShare to Twitter (7/25, Robins Early) reports the AI-driven platform produces results conversationally and offers up-to-date information with the ability to search the internet. OpenAI plans to integrate the search features into an existing model, ChatGPT, rather than create a separate product. The innovation positions OpenAI as a possible contender against major market players like Google. However, the company may face pushback from publishers over potential copyright violations, echoing challenges that have been previously levied at the company.

        OpenAI Projected To Lose Up To $5 Billion In 2024. The Times of India Share to FacebookShare to Twitter (7/25) reports that according to a report in The Information Share to FacebookShare to Twitter (7/25, Subscription Publication), OpenAI is projected to lose up to $5 billion in 2024, potentially depleting its cash reserves within a year. The company’s spending on training and inference is expected to reach $7 billion this year, including nearly $4 billion on Microsoft’s servers. Despite generating around $2 billion annually from ChatGPT and additional revenue from LLM access fees, OpenAI’s total revenue falls short, necessitating fresh funding. CEO Sam Altman remains focused on developing artificial general intelligence despite financial strains. OpenAI has raised over $11 billion but may need more to sustain its research and development efforts.

Google DeepMind Unveils Advanced AI Math Models

Bloomberg Share to FacebookShare to Twitter (7/25, Subscription Publication) reports that Google DeepMind announced on Thursday the launch of AlphaProof and AlphaGeometry 2, advanced models specializing in math reasoning and geometry, respectively. These models successfully solved four of six problems from the International Mathematical Olympiad. David Silver, Google DeepMind’s vice president of reinforcement learning, stated that while these AI models are powerful computational tools, they are not yet capable of replacing human mathematicians. Google’s approach involves translating math problems into technical statements to avoid inaccuracies common in AI-generated responses.

Meta And Alphabet CEOs Express AI “Overinvestment” Concerns

CNBC Share to FacebookShare to Twitter (7/25, Leswing) reports that Meta CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai have voiced concerns about potential overinvestment in AI infrastructure. Zuckerberg, speaking on a podcast, highlighted the risk of overspending, while Pichai emphasized the greater risk of underinvesting. Despite these concerns, major tech companies like Microsoft, Amazon, and Tesla continue to heavily invest in AI, driving Nvidia’s revenue growth. Nvidia, which supplies GPUs for AI, has seen its shares rise significantly. Analysts and investors are closely monitoring the return on these investments amid competitive pressures.

Meta Criticized For Handling Of AI-Generated Deepfakes

CNN Share to FacebookShare to Twitter (7/25, Duffy) reports that Meta “failed to remove an explicit, AI-generated image of an Indian public figure until it was questioned by its Oversight Board.” The board’s report, released Thursday, “suggests that Meta is not consistently enforcing its rules against non-consensual sexual imagery, even as advancements in artificial intelligence have made this form of harassment increasingly common.” The report is “the result of an investigation the Meta Oversight Board announced in April into Meta’s handling of deepfake pornography, including two specific instances where explicit images were posted of an American public figure and an Indian public figure.” While the image of the American figure was swiftly removed, the Indian figure’s image remained despite being reported twice. The Oversight Board “urged the company to make its rules clearer by updating its prohibition against ‘derogatory sexualized photoshop’ to specifically include the word ‘non-consensual’ and to clearly cover other photo manipulation techniques such as AI.” Meta welcomed the board’s decision and pledged to take further action.

Musk To Discuss $5 Billion Investment In xAI

Reuters Share to FacebookShare to Twitter (7/25) reports that Tesla CEO Elon Musk announced on Thursday that he and the board will consider a $5 billion investment in his AI startup xAI, raising conflict of interest concerns. Musk launched a poll on social media platform X, where over two-thirds of nearly 1 million respondents supported the investment. This follows Tesla’s second-quarter results missing Wall Street estimates. Musk noted that xAI could aid in advancing Tesla’s self-driving technology and data centers. However, experts like Brent Goldfarb are skeptical, citing potential risks to Tesla shareholders.

Khan Promotes Open-Weight AI Models As FTC Seeks To “Open Up” Industry

Bloomberg Share to FacebookShare to Twitter (7/25, Subscription Publication) reports FTC Chair Lina Khan said at a Y Combinator event on Thursday that “open artificial intelligence models that allow developers to customize them with few restrictions are more likely to promote competition,” saying, “Open-weight models can liberate startups from the arbitrary whims of closed developers and cloud gatekeepers.” Khan further “said that the agency has heard complaints about dominant companies ‘monopolizing access to great talent, to critical inputs and to valuable data,’” adding, “The FTC is doing our part to be vigilant and to open up the market.” Bloomberg notes that “critics have warned that open models carry an increased risk of abuse and could potentially allow...geopolitical rivals like China to piggyback off the technology.”

dtau...@gmail.com

unread,
Aug 3, 2024, 11:06:22 AM8/3/24
to ai-b...@googlegroups.com

Meta's AI Safety System Defeated by Space Bar

Meta last week unveiled Prompt-Guard-86M alongside its Llama 3.1 generative AI model, to detect prompt injection attacks. However, Robust Intelligence researcher Aman Priyanshu found the Prompt-Guard-86M classifier model is itself vulnerable to prompt injection attacks. Priyanshu explained adding spaces between the letters of a given prompt and leaving out punctuation "effectively renders the classifier unable to detect potentially harmful content."
[ » Read full article ]

The Register (U.K.); Thomas Claburn (July 29, 2024)

 

AI Snoops on HDMI Cables to Capture Screen Data

An AI model developed by researchers at Uruguay's University of the Republic can reconstruct digital signals by intercepting electromagnetic radiation leaked from the HDMI cable that connects a computer and monitor. This would allow hackers to view a user's computer screen as they enter encrypted messages or personal information. Said the university’s Federico Larroca, “If you really care about your security, whatever your reasons are, this could be a problem.”
[ » Read full article ]

Tom's Hardware; Jeff Butts (July 28, 2024)

 

Hackers Vie for Millions in Contest to Thwart Cyberattacks

About 40 contestants are vying for a $2-million prize in a contest sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA) to come up with an autonomous program capable of scanning lines of open-source code, identifying security flaws, and repairing them. The AIxCC challenge aims to harness AI to counter a lack of skilled engineers to catch poorly maintained open-source software.
[ » Read full article ]

The Washington Post; Joseph Menn (July 27, 2024)

 

Google Works to Reduce Non-Consensual Deepfake Porn in Search

Google is changing its search engine to reduce the extent to which sexually explicit fake content ranks high in its search results. When AI-generated content features a real person’s face or body without their permission, that person can request its removal from search results. When Google decides a takedown is warranted, it now will filter all explicit results on similar searches and remove duplicate images, the company said Wednesday.
[ » Read full article ]

Bloomberg; Davey Alba; Cecilia D'Anastasio (July 31, 2024)

 

E.U. AI Rules Officially Take Effect

The European Union's AI law formally took effect on Thursday, covering any product or service offered in the bloc that uses AI. Restrictions are based on four levels of risk, with the vast majority of systems expected to fall under the low-risk category, such as content recommendation systems or spam filters. The provisions will come into force in stages, and Thursday’s implementation date starts the countdown for when they’ll kick in over the next few years.
[ » Read full article ]

Associated Press; Kelvin Chan (August 1, 2024)

 

U.S. Says No Need to Restrict 'Open-Source' AI, for Now

A report released Tuesday by the U.S. Department of Commerce's National Telecommunications and Information Administration (NTIA) said there is no pressing need for restrictions on "open-source" AI systems. However, the report said the U.S. government should continue to monitor the technology and be "prepared to act if heightened risks emerge." NTIA Administrator Alan Davidson said, "We continue to have concerns about AI safety, but this report reflects a more balanced view that shows that there are real benefits in the openness of these technologies."
[ » Read full article ]

Associated Press; Matt O'Brien (July 30, 2024)

 

NeuralGCM Slashes Computer Power Needed for Weather Forecasts

An AI model developed by Google researchers performs as well as current physics models in forecasting weather and climate patterns but uses less computing power. The NeuralGCM model uses a single tensor processing unit (TPU) to process 70,000 days of simulation in 24 hours; a competing X-SHiELD model needs a supercomputer equipped with thousands of TPUs to process 19 days of simulation.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (July 22, 2024)

 

Japan Supermarket Chain Uses AI to Standardize Staff Smiles

An AI system deployed by the Japanese supermarket chain AEON scores employees' service attitude based on more than 450 elements, including facial expressions, voice volume, and tone of greetings. The Mr. Smile system from InstaVR features game elements to encourage staff to challenge their scores by improving their service attitude. AEON said the system is intended to "standardize staff members' smiles and satisfy customers to the maximum."
[ » Read full article ]

South China Morning Post; Fran Lu (July 22, 2024)

 

Stanford Researchers Highlight AI Language Gaps

The New York Times Share to FacebookShare to Twitter (7/26, Ruberg) reports that Stanford researchers found significant flaws in AI language models when tested in Vietnamese. The chatbot Claude 3.5 by Anthropic failed to follow traditional poetic formats and provided incorrect translations for familial terms. These issues highlight the limitations of AI trained predominantly in English, potentially exacerbating technological inequities. Sang Truong, a Ph.D. candidate at Stanford, noted that delays in access to accurate AI technology could lead to significant economic delays for non-English-speaking regions. The study underscores the need for more diverse language data sets.

X Faces Scrutiny Over Data Usage For Grok Training

TechCrunch Share to FacebookShare to Twitter (7/26, Lomas) reports that X, formerly Twitter, has quietly defaulted user data into its AI training pool for Grok, leading to concerns among users and scrutiny from the Irish Data Protection Commission (DPC). The DPC, surprised by this move, has been engaging with X and awaits a response. Grok, a conversational AI developed by Elon Musk’s X, aims to rival OpenAI’s ChatGPT. The DPC, overseeing X’s GDPR compliance, expects further developments next week. X has yet to clarify the legal basis for processing European users’ data.

        Yaccarino’s Challenges At X Explored. The New York Times Share to FacebookShare to Twitter (7/27, Conger) reported that Linda Yaccarino, CEO of X, has struggled to stabilize the company’s advertising business amid Elon Musk’s unpredictable actions. Despite efforts to combat hate speech and antisemitism, Musk’s behavior, including endorsing an antisemitic theory, has undermined her work, according to the Times. X’s ad revenue has significantly declined, with documents showing that “in the second quarter of this year, X earned $114 million in revenue in the United States, a 25 percent decline from the first quarter and a 53 percent decline from the previous year.” Yaccarino remains determined but faces ongoing challenges, the Times says.

Apple Signs Onto US Plan To Address AI Risks

Reuters Share to FacebookShare to Twitter (7/26, Ayyub, Shakil) reported the White House said on Friday that Apple has signed on President Biden’s “voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI’s power is not used for destructive purposes.” Bloomberg Share to FacebookShare to Twitter (7/26, Gardner, Subscription Publication) reported the White House “announced [Apple] is joining the ranks of OpenAI Inc., Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp. and others in committing to test their AI systems for any discriminatory tendencies, security flaws or national security risks.”

Kristof: AI Risks Make It Essential For US To Maintain Lead In Technology

In his column for the New York Times Share to FacebookShare to Twitter (7/27), Nicholas Kristof discusses the dangers of artificial intelligence, warning that a RAND study has found that for less than $100,000, “it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.” Kristof argues, “All this underscores why it is essential that the United States maintain its lead in artificial intelligence. As much as we may be leery of putting our foot on the gas, this is not a competition in which it is OK to be the runner-up to China. ... Managing A.I. without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.”

Survey: Students And Professors Believe AI Will Encourage Cheating

Inside Higher Ed Share to FacebookShare to Twitter (7/29, Coffey) reports Coursera is “the latest to launch a tool for detecting the use of AI in student work.” A Wiley survey shared with Inside Higher Ed reveals that “most instructors (68 percent) believe generative AI will have a negative or ‘significantly’ negative impact on academic integrity.” The survey, which included more than 2,000 students and 850 instructors, found that 47 percent of students “said it is easier to cheat than it was last year due to the increased use of generative AI, with 35 percent pointing toward ChatGPT specifically as a reason.” Wiley’s vice president of courseware highlighted the need for open discussions about cheating and productive help-seeking methods in classrooms. The survey also showed that “a majority of professors (56 percent) said they did not think AI had an impact on cheating over the last year, but most (68 percent) did think it would have a negative impact on academic integrity in the next three years.”

Report: College Graduates Feel Unprepared For Generative AI

Higher Ed Dive Share to FacebookShare to Twitter (7/29, Moody) reports, “While the majority of college graduates say their education has readied them for success in the job market, more than half said their programs didn’t prepare them for the use of generative AI, according to a Cengage Group report released July 23.” Nearly two in three employers said candidates “should have foundational knowledge” of generative AI tools, with more than half preferring to interview and hire those with AI experience. Despite this, “nearly 3 in 5 recent graduates of 2- or 4-year degree programs said that they believed their program equipped them with needed skills for their first job,” a rise from 41 percent in 2023. Michael Hansen, CEO of Cengage Group, noted the importance of integrating AI training into education.

Professors Skeptical After Academic Publishers Partner With AI Tech Firms

The Chronicle of Higher Education Share to FacebookShare to Twitter (7/29, Dutton) reports, “Two major academic publishers, Wiley and Taylor & Francis, recently announced partnerships that will give tech companies access to academic content and other data in order to train artificial-intelligence models.” Microsoft paid Informa, “the parent company of Taylor & Francis, an initial fee of $10 million to make use of its content ‘to help improve relevance and performance of AI systems.’” Wiley completed a similar project with an undisclosed tech company and plans another next year. Academics expressed concerns on social media about intellectual-property rights and lack of compensation. Taylor & Francis told The Chronicle that detailed citation was “fundamental to the agreement,” but scholars remain skeptical.

Focus On Microsoft’s Costs Grow Of Concerns Around AI Investments

Reuters Share to FacebookShare to Twitter (7/29) reports Microsoft investors will focus on the growth of its Azure cloud service when the company reports earnings on Tuesday to see if it can offset the cost of AI investments. Azure is expected to maintain a “steady quarter-over-quarter” growth of about 31%, aligning with forecasts. However, investors seek a “bigger contribution” from AI, which previously contributed 7 percentage points to Azure’s growth. Microsoft’s capital spending likely surged 53% year-over-year to $13.64 billion, up from $10.95 billion last quarter.

        The Wall Street Journal Share to FacebookShare to Twitter (7/29, Subscription Publication) reports a quartet of technology companies is set to report earnings this week after a selloff among the “Magnificent Seven” hit the Nasdaq. Microsoft will report Tuesday, Meta on Wednesday, and Amazon and Apple on Thursday after the market closes. Amazon’s second-quarter report will reveal the impact of its AI investments on its top line, with an expected 6% sales increase to $148.7 billion.

Apple Updates Siri With Intelligence Beta

Phone Arena Share to FacebookShare to Twitter (7/29, Friedman) reports that the Apple Intelligence Beta has been introduced on the iPhone 15 Pro Max through an updated Siri. Alan Friedman, the article’s author, noted that Siri now has a more conversational tone and improved interaction capabilities, such as understanding mispronunciations and providing troubleshooting assistance. Additional features in the beta include enhanced image search in the Photos app, AI-based summaries in Mail and Messages, and advanced writing tools. Some improvements, however, will only be available in the final version of Apple Intelligence.

        Similarly, 9to5Mac Share to FacebookShare to Twitter (7/29, Miller) reports that Apple has released the first beta of iOS 18.1 to developers, featuring new Apple Intelligence tools. Available for iPhone 15 Pro and iPhone 15 Pro Max, the update includes Writing Tools, enhanced Mail and notification features, and upgrades to Photos. iPadOS 18.1 is also available for iPads with the M1 chip and newer. Developers can access Apple Intelligence by joining a waitlist in the Settings menu.

        Also reporting are SlashGear Share to FacebookShare to Twitter (7/29), MacWorld Share to FacebookShare to Twitter (7/29), 9to5Mac Share to FacebookShare to Twitter (7/29, Espósito), and the New York Post Share to FacebookShare to Twitter (7/29).

Reddit Blocks Scraping Without AI Agreement

Ars Technica Share to FacebookShare to Twitter (7/31) reports that Reddit CEO Steve Huffman is defending the decision to block companies from scraping the site without an AI agreement. Reddit updated its Robots Exclusion Protocol to prevent non-Google search engines from listing recent posts. Huffman cited the need for control over data usage, mentioning that Microsoft, Anthropic, and Perplexity have not negotiated. Reddit and Google signed a $60 million AI training deal in February. The company aims to monetize its data amidst user protests and legal debates over AI data use.

DOJ Probes Nvidia’s Acquisition Of Run: ai On Antitrust Grounds

Politico Share to FacebookShare to Twitter (8/1, Sisco) reports “relatively obscure” AI startup Run: ai “has gotten caught up in the tug-of-war between U.S. regulators and the world’s largest tech companies over whether artificial intelligence is at risk of being controlled by a handful of giants.” Sources say the Justice Department is “investigating the acquisition of the AI start-up Run: ai by semiconductor company Nvidia on antitrust grounds,” with investigators “focused on the potential for the company to build a moat around its GPUs.” Sources added that “one possible concern over the Run: ai deal is the suspicion that Nvidia may have bought the company that enables customers to do more with less compute in order to bury a technology that could curb its main profit engine.”

Report: AI In Special Education Sparks Optimism Among Teachers, Parents

Education Week Share to FacebookShare to Twitter (8/1, Langreo) reports, “Educators and parents of students with intellectual and developmental disabilities are optimistic about artificial intelligence’s potential to create more inclusive classrooms and close educational gaps between students with disabilities and those without, concludes a report from the Special Olympics Global Center for Inclusion in Education.” Released on July 22, the report is “based on a survey of 500 U.S. parents of children with intellectual or developmental disabilities, as well as 200 U.S. K-12 teachers, conducted by Stratalys Research.” Concerns include reduced human interaction and resource disparities. The report found that while more “than 7 in 10 parents and 6 in 10 teachers say AI will make education more inclusive,” skepticism remains about whether AI developers consider the needs of students with disabilities.

dtau...@gmail.com

unread,
Aug 10, 2024, 1:14:27 PM8/10/24
to ai-b...@googlegroups.com

Experts Pen Support for California's AI Safety Bill

In a letter addressed to legislative leaders in California, ACM A. M. Turing Award laureates Yoshua Bengio and Geoffrey Hinton, along with renowned professors Lawrence Lessig and Stuart Russell, voiced support for a bill that would require AI firms training large-scale models to perform rigorous safety tests to identify potentially dangerous capabilities and institute comprehensive safety measures to mitigate risks. The letter said the bill amounts to the "bare minimum for effective regulation of this technology."
[
» Read full article ]

Time; Tharin Pillay; Harry Booth (August 7, 2024)

 

AI Is Coming for India's Outsourcing Industry

India's $250-billion outsourcing industry is being forced to adapt as companies replace call centers and other low-level operations with generative AI. According to TCS' Harrick Vin, "The roles of the future will require greater levels of critical thinking, design, strategic goal setting, and creative problem-solving skills." Meanwhile, industry executives contend AI tools are giving a boost to some businesses, particularly the programming workforce.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Megha Mandavia (August 6, 2024)

 

Mainframes Find New Life in AI Era

Mainframe computers are proving their resilience with new applications in the era of AI. Banks, insurance providers, airlines, and other industries that still rely on the mainframe for high-speed data processing are now looking to apply AI to their transaction data at the hardware source, rather than in the cloud. Said IBM's Ross Mauri, “Everyone’s kind of realizing that it’s better to bring your AI to where the data is, than the data to the AI.”

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (August 6, 2024)

 

Damaged Robot Adapts to Swim

Researchers at the California Institute of Technology (Caltech) used a machine learning algorithm to teach a robot to adapt its propulsion mechanism in order to maintain its aquatic capabilities when damaged. Explains Caltech's Meredith Hooper, "The machine learning algorithm selects the top candidate trajectories based on how well they produced our desired force. The algorithm then comes up with another set of 10 trajectories inspired by the previous set."
[ » Read full article ]

Interesting Engineering; Sujita Sinha (August 1, 2024)

 

AI Sets Variable Speed Limits on U.S. Freeway

AI is being used to control variable speed limits on a 27-kilometer (16.8-mile) section of the I-24 freeway near Nashville, TN. Daniel Work at Vanderbilt University and colleagues trained an AI on historical traffic data to monitor cameras and make decisions on speed limits. The new automated system, launched in March, works accurately 98% of the time, but will occasionally call for a change in speed limit that is larger than 10 miles per hour, which violates federal law.
[ » Read full article ]

New Scientist; Matthew Sparkes (July 30, 2024)

 

Memory Tech Reduces AI Processing Energy Requirements

University of Minnesota Twin Cities researchers developed Computational Random-Access Memory (CRAM) technology which, they say, could dramatically cut the energy consumed by AI processing. With CRAM, a high-density, reconfigurable spintronic in-memory compute substrate is located within the memory cells themselves, where the data is processed. When used to perform an MNIST handwritten digit classifier task, CRAM was 2,500 times more energy-efficient and 1,700 times faster than a near-memory processing system using the 16nm technology node.
[ » Read full article ]

Tom's Hardware; Jeff Butts (July 29, 2024)

 

Smartphone Flaw Reveals Floor Plans

A security flaw found in smartphones can be used to create a map of the room users are in and determine what they are doing. The vulnerability, discovered by researchers at the Indian Institute of Technology Delhi, uses data in the GPS signal. The researchers created an AI-based system called AndroCon that interpreted the metrics provided by this data from five types of Android smartphones.
[
» Read full article ]

New Scientist; Matthew Sparkes (August 8, 2024)

 

New Technique Aims to Tamperproof AI Models

Wired Share to FacebookShare to Twitter (8/2, Nast) reports that researchers from the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the Center for AI Safety have developed a technique to make it harder to remove safety restrictions from open source AI models like Meta’s Llama 3. The method involves altering the model’s parameters to prevent it from responding to harmful prompts. Mantas Mazeika, a researcher involved in the project, said the goal is to deter adversaries by increasing the cost of decensoring models. The technique aims to enhance tamper-resistant safeguards as open source AI models grow in popularity.

NYTimes Report: China Skirts US Restrictions On AI Chip Exports

The New York Times Share to FacebookShare to Twitter (8/4, Swanson, Fu) said it “found an active trade in restricted A.I. technology – part of a global effort to help China circumvent U.S. restrictions amid the countries’ growing military rivalry.” The bans “made it harder and more costly for China to develop A.I.” but “given the vast profits at stake, businesses around the world have found ways to skirt the restrictions, according to interviews with more than 85 current and former U.S. officials, executives and industry analysts, as well as reviews of corporate records and visits to companies in Beijing, Kunshan and Shenzhen.” The Times also reports “an underground marketplace of smugglers, backroom deals and fraudulent shipping labels is funneling A.I. chips into China, which does not consider such sales illegal.”

Tech Firms Continue AI Spending Splurge Despite Investor Concerns

The New York Times Share to FacebookShare to Twitter (8/2, Weise) reports major tech companies “have made it clear over the last week that they have no intention of throttling their stunning levels of spending on artificial intelligence, even though investors are getting worried that a big payoff is further down the line than once thought.” The Times explains that “in the last quarter alone, Apple, Amazon, Meta, Microsoft and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63 percent more than a year earlier and 161 percent more than four years ago,” and “a large part of that was funneled into building data centers and packing them with new computer systems to build artificial intelligence.”

OpenAI Holds Off On Releasing Tool That Catches Students Cheating With ChatGPT

The Wall Street Journal Share to FacebookShare to Twitter (8/4, Barnum, Subscription Publication) reports OpenAI has allegedly developed a method to reliably detect when someone uses ChatGPT to draft an essay or research paper, but has refrained from releasing it. The anti-cheating tool includes watermarks unnoticeable to the human eye but can be found utilizing OpenAI’s detection technology. One staff concern over releasing the tool is that it could disproportionately affect groups such as non-native English speakers. Moreover, if too many get access to the tool, bad actors might decipher the company’s watermarking technique. Yet employees who support the tool’s release claim internally those arguments pale compared with the good such technology could do. They have discussed offering the detector directly to educators or to outside companies that help schools identify AI-written papers and plagiarized work.

Google Cuts Olympics Ad For AI Chatbot Following Backlash

Fortune Share to FacebookShare to Twitter (8/2, Webster) reported Google “scrapped its Olympics advertisement for the AI chatbot Gemini from its TV rotation just one week after the controversial ad, ‘Dear Sydney,’ first aired.” Google said in a statement to Fortune, “While the ad tested well before airing, given the feedback, we have decided to phase the ad out of our Olympics rotation.” The ad is still available on YouTube, and currently has “over 320,000 views, though the comments section on its page has been turned off. In the video, a father helps his young daughter write a letter to her hero, Olympic hurdler Sydney McLaughlin-Levrone, with the help of Gemini AI.” Although the ad prompted “a wave of negative feedback on social media from viewers who found its theme disturbing,” the ad was “actually performing quite well compared with other Olympic ads, according to data reported by Business Insider.”

Grassley Calls On OpenAI To Release Information On Safety Practices

The Washington Post Share to FacebookShare to Twitter (8/2, Verma, Tiku) reports Sen. Chuck Grassley (R-IA) sent a letter to OpenAI CEO Sam Altman saying the company “should turn over documents proving it does not silence employees who wish to share concerns with federal regulators about how the artificial intelligence company is developing its tools.” Following “employee warnings that OpenAI rushed through safety testing of its latest AI model,” Grassley called on Altman “to outline what changes it has made to its employee agreements to ensure those wishing to raise concerns about OpenAI to federal regulators can do so without penalty,” marking “growing bipartisan pressure against OpenAI to detail steps it is taking to make sure its AI is developed safely.”

        Meanwhile, Bloomberg Share to FacebookShare to Twitter (8/2, Griffin, Subscription Publication) details concerns that AI “could help create weapons of mass destruction – not the kind built in remote deserts by militaries but rather ones that can be made in a basement or high school laboratory,” as the technology “could teach users to make dangerous viruses.” While “weaponizing disease is nothing new,” Bloomberg explains that AI tools “make it easier to surface insights on harmful viruses, bacteria and other organisms than what’s traditionally been possible with existing search tools,” meaning that “it’s now far easier for bad actors to develop weapons of mass destruction quickly and cheaply without access to traditional lab infrastructure.”

Five Secretaries Of State To Demand Musk Update AI Chatbot Over Harris Misinformation

The Washington Post Share to FacebookShare to Twitter (8/4, Ellison, Gardner) reports Minnesota Secretary of State Steve Simon and “his counterparts Al Schmidt of Pennsylvania, Steve Hobbs of Washington, Jocelyn Benson of Michigan and Maggie Toulouse Oliver of New Mexico” plan to “send an open letter to billionaire Elon Musk on Monday, urging him to “immediately implement changes” to X’s AI chatbot Grok, after it shared with millions of users false information suggesting that Kamala Harris was not eligible to appear on the 2024 presidential ballot.” The Post reports they “are objecting not to Grok’s tone but its factual inaccuracies and the sluggishness of the company’s move to correct bad information.”

Colleges Overhaul Courses In Response To Rise Of AI Technology

The Wall Street Journal Share to FacebookShare to Twitter (8/5, Subscription Publication) reports that the rapid rise of artificial intelligence technology has caused colleges across the US to quickly overhaul courses to include AI. College administrators say students are calling for course materials which integrate technologies like AI which are likely to impact students’ future workplaces.

Professors Say Computer Science Degrees Will Remain Valuable Amid AI Expansion

Insider Share to FacebookShare to Twitter (8/5) reports that with the rise of AI tools like GitHub Copilot, “tech companies may not need to hire as many software engineers as before since leaner teams can reasonably complete the same amount of code.” The head of Singaporean venture capital firm Hatcher+ predicts the industry will shrink, favoring those with deep expertise. However, computer science professors argue that a degree in the field remains valuable. Professor Kan Min Yen from the National University of Singapore said, “The AI wave is actually driving demand for computing professionals in general, because maturing AI is transformative and needs to be integrated into many facets of life.” David Malan of Harvard said, “Consider just how many more features [software engineers] can implement, how many more bugs they can fix, if they have a virtual assistant by their side.” Kan emphasizes the importance of soft skills, likening computer science to a team sport.

Oxford University Press Becomes Latest Academic Publisher To Collaborate With AI Companies

Inside Higher Ed Share to FacebookShare to Twitter (8/5, Palmer) reports Oxford University Press has “become the latest academic publisher to confirm it is working with companies developing AI tools.” OUP told The Bookseller, a UK-based outlet covering the publishing industry, “We are actively working with companies developing large language models (LLMs) to explore options for both their responsible development and usage.” In its annual report last month, the publisher said that it has “pursued opportunities relating to artificial intelligence (AI) technologies with careful consideration of its implications for research and education.” Both Informa, “the parent company of academic publisher Taylor & Francis, and Wiley recently announced that they had entered into data-access agreements with various companies, including Microsoft, that want to use their corpora to train proprietary AI tools.”

AI-Powered Medical Devices Are Bringing Changes To Patent Regulations

AI and machine learning “are transforming the medical device industry,” Bloomberg Law Share to FacebookShare to Twitter (8/5, Subscription Publication) reports. At the same time, “companies are working to gain Food and Drug Administration approval and obtain intellectual property protection for this technology.” With these new guidelines emerging, “IP practitioners need to help clients navigate these complicated areas without jeopardizing investment into AI or machine learning-enabled technology.”

Researchers: AI Could Help Address Building Energy Use, Carbon Emissions

Smart Cities Dive Share to FacebookShare to Twitter (8/6) reports a paper, published in Nature Communications, says that AI could reduce building sector energy consumption and carbon emissions by about 8% by 2050. The Lawrence Berkeley National Laboratory researchers estimated that “AI adoption, along with robotics and Internet of Things applications, can cut building costs by up to 20%.” Researchers also said that AI, combined with energy policy and low-carbon generation, could reduce energy use and carbon emission by 40% and 90%, respectively, in 2050.

OpenAI Co-Founders To Join Anthropic, Take Sabbatical

CNBC Share to FacebookShare to Twitter (8/6, Novet) reports, “OpenAI co-founder John Schulman said in a Monday X post that he would leave the Microsoft-backed company and join Anthropic, an artificial intelligence startup with funding from Amazon.” The news “comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks.”

        The Wall Street Journal Share to FacebookShare to Twitter (8/6, Subscription Publication) reports OpenAI CEO Sam Altman responded to Schulman’s post by thanking him for his work. Separately, another OpenAI co-founder, president Greg Brockman, also posted on X that he would be taking a sabbatical for the rest of the year. Brockman is quoted saying the leave would be his “first time to relax” since the foundation of the company in 2015.

        Bloomberg Share to FacebookShare to Twitter (8/6, Subscription Publication) reports that the moves “mark a shift at the company following already significant management churn this year. Peter Deng, a vice president of product, also left in recent months, a spokesperson said. And earlier this year, several members of the company’s safety teams exited. OpenAI has made key hires, too, recently adding a new chief financial officer and chief product officer.”

Big Tech Bails Out AI Startups Amid Regulatory Scrutiny

The Wall Street Journal Share to FacebookShare to Twitter (8/6, Jin, Dotan, Kruppa, Subscription Publication) reports AI startups are seeking bailouts from major tech firms as they struggle to survive. Amazon agreed to hire most employees from Adept AI and pay $330 million to license its technology. Google negotiated a $2 billion licensing fee for Character. AI’s technology and hired many of its researchers. These deals avoid regulatory hurdles by not being outright acquisitions, but the Federal Trade Commission is investigating Amazon’s and Microsoft’s deals to determine if they bypassed government approval.

Tech Giants Boost Data Center Investments

Bloomberg Share to FacebookShare to Twitter (8/6, Ludlow, Subscription Publication) reports major tech companies are significantly increasing capital expenditures on data centers to support AI development. Microsoft, Meta, and Amazon announced increased spending in their recent earnings reports. Amazon, the market leader in cloud computing, spent $30.5 billion in the first half of the year and plans to exceed that in the next six months.

Schumer Advocates For AI Regulation In Elections

The Hill Share to FacebookShare to Twitter (8/6) reports Senate Majority Leader Chuck Schumer emphasized the need for AI regulation in elections during an NBC News interview. With fewer than 100 days until the November election, Schumer highlighted the threat of deepfakes, referencing incidents involving AI-generated political content. He urged bipartisan support for AI legislation, including the Protect Elections from Deceptive AI Act and the AI Transparency in Elections Act.

AI Firms Collect Children’s Photos For Age Verification

The Washington Post Share to FacebookShare to Twitter (8/7) reports that in 2021, London-based artificial intelligence firm Yoti initiated a campaign called “Share to Protect” in South Africa, which would “donate 20 South African rands, about $1, to their children’s school” for every child’s photo submitted. The initiative aimed to improve Yoti’s AI tool “that could estimate a person’s age by analyzing their facial patterns and contours.” While some parents participated, others expressed strong opposition due to privacy concerns. Companies such as Yoti, Incode, and VerifyMyAge “increasingly work as digital gatekeepers, asking users to record a live ‘video selfie’ on their phone or webcam, often while holding up a government ID, so the AI can assess whether they’re old enough to enter.” However, critics argue these systems could lead to privacy violations and misuse of personal data.

Learning Expert Warns Against Widespread AI Adoption In Schools

Education Week Share to FacebookShare to Twitter (8/7, Langreo) reports that although not everyone agrees, experts say generative artificial intelligence “can save educators time, help personalize learning, and potentially close achievement gaps.” Benjamin Riley, the founder and CEO of think tank Cognitive Resonance, “argues that schools don’t have to give in to the hype just because the technology exists.” Cognitive Resonance on Aug. 7 released its first report titled “Education Hazards of Generative AI,” and in a phone interview with EdWeek, “Riley discussed the report and his concerns about using AI in education.” He said “using [AI tools] to tutor children” will not be effective “[w]e’re already starting to get some empirical evidence of this. Some researchers at Wharton published a study recently of a randomized control trial where high school math students using ChatGPT learned less than their peers who had no access to it during the time of the study.” Riley also said “we’re starting to see how technology has had real harms on social cohesion and solidarity.”

Tech Companies’ Deals With AI Startups Seen As Structured To Evade Regulatory Scrutiny

The New York Times Share to FacebookShare to Twitter (8/8, Griffith, Metz) reports on “several unusual transactions that have recently emerged in Silicon Valley” by which tech companies have “turned to a more complicated deal structure for young A.I. companies.” Rather than buying them outright, they “licens[e] the technology and hir[e] the top employees – effectively swallowing the start-up and its main assets – without becoming the owner of the firm.” The Times says, “These transactions are being driven by the big tech companies’ desire to sidestep regulatory scrutiny while trying to get ahead in A.I., said three people who have been involved in such agreements. Google, Amazon, Meta, Apple and Microsoft are under a magnifying glass from agencies like the Federal Trade Commission over whether they are squashing competition, including by buying start-ups.”

        UK Antitrust Officials Probe Amazon’s Anthropic Investment. The Wall Street Journal Share to FacebookShare to Twitter (8/8, Orru, Subscription Publication) reports the UK’s Competition and Markets Authority is investigating Amazon’s $4 billion investment in AI startup Anthropic, questioning if it poses a threat to competition. An Amazon spokesperson said the company was “disappointed by the decision” and that its ties to Anthropic didn’t raise competition concerns. The probe highlights increasing scrutiny on Big Tech’s AI investments. An initial decision is due by October. 4.

        CNBC Share to FacebookShare to Twitter (8/8, Browne) reports Amazon completed its $4 billion investment in Anthropic in March, with an initial $1.25 billion equity stake in September, followed by an additional $2.75 billion earlier this year. The deal includes making Anthropic’s large language models available on Amazon’s Bedrock platform and training these models on Amazon’s custom AI chips built by AWS. An Amazon spokesperson emphasized that the collaboration expands choice and competition in AI technology, asserting that Amazon holds no board seat or decision-making power at Anthropic. Anthropic also affirmed its independence, stating Amazon does not have board observer rights.

More Students Turn To AI Chatbots For Mental Health Support Despite Risks

The Seventy Four Share to FacebookShare to Twitter (8/7, Toppo) reported that college students are increasingly turning to AI chatbots like ChatGPT for psychological support and advice. However, experts caution that these AI companions could lead young people to make poor decisions. A recent survey by VoiceBox, a youth content platform “found that many kids are being exposed to risky behaviors from AI chatbots, including sexually charged dialogue and references to self-harm.” Little research exists “on young people’s use of AI companions, but they’re becoming ubiquitous.” For example, the startup Character.ai earlier this year “said 3.5 million people visit its site daily. It features thousands of chatbots, including nearly 500 with the words ‘therapy,’ ‘psychiatrist’ or related words in their names.” Some believe AI’s role in human interaction is inevitable and call for better regulation.

College Of Charleston Uses AI Chat Bot For Student Support, Retention

Inside Higher Ed Share to FacebookShare to Twitter (8/8, Mowreader) reports the College of Charleston has implemented Clyde, an artificial intelligence-powered chat bot developed in partnership with EdSights, to enhance student support and retention. Launched in the fall, Clyde has facilitated more than 50,000 text messages and flagged more than 900 students for follow-up. The initiative aims to connect students with resources and improve institutional priorities. Clyde, named after the college’s cougar mascot, sends weekly check-in messages to students and alerts staff about those needing immediate assistance. Ninety-four percent of students opted in, and 62 percent engaged with the bot, providing data on various student experiences. Adjustments have been made after the pilot year to improve the program, including appointing a dedicated staff member to manage incoming information.

X Halts Use Of European Social Media Data For AI Training Following Legal Challenge

TechCrunch Share to FacebookShare to Twitter (8/8, Lomas) reports that Elon Musk has agreed to halt the use of Europeans’ social media posts to train his AI tool ‘Grok’, following action from Ireland’s Data Protection Commission. The DPC initiated court proceedings seeking an injunction against the practice due to a lack of user consent, with the issue also expected to be referred to the European Data Protection Board. It is currently unclear how any AI models trained on unlawfully-obtained data will be handled legally.

Small AI Models Gain Traction in Tech Industry

Bloomberg Share to FacebookShare to Twitter (8/8, Subscription Publication) reports that tech companies are shifting focus from large, costly AI models to smaller, more efficient ones. Arcee. AI, co-founded by Mark McQuade, exemplifies this trend by developing small language models tailored for specific corporate tasks, like tax-related queries. McQuade emphasizes that “99% of business use cases” do not require extensive general knowledge. Tech and AI giants including “Google, Meta Platforms Inc., OpenAI and Anthropic have all recently released software that is more compact and nimble than their flagship large language models, or LLMs.” Hugging Face co-founder and Chief Science Officer Thomas Wolf notes, “small models make a lot of sense,” highlighting their cost-effectiveness and lower energy demands. Arcee. AI’s recent $24 million Series A funding underscores investor interest in this approach, driven by the need for diverse and affordable AI solutions.

Google DeepMind’s Ping Pong Robot Challenges Humans

Popular Science Share to FacebookShare to Twitter (8/8, Paul) reports that Google DeepMind has developed a robotic system capable of amateur human-level performance in table tennis. Detailed in an August 7 preprint paper, the robot won 45% of matches against 29 human players. Engineers used a dataset and simulations to train the AI, creating a continuous learning feedback loop.

        Google AI Overviews See Significant Decline. Insider Share to FacebookShare to Twitter (8/8, Langley) reports a study by SE Ranking found a significant drop in Google’s AI Overviews in search results. In July, only 7.47% of searches returned an AI Overview, down from 64% in February. Google is rethinking its AI use, with spokespersons noting ongoing refinements. The study also noted a 40% decrease in AI Overview length and highlighted Forbes, Business Insider, and Entrepreneur as top-cited sources.

OpenAI Highlights Risks With Voice Interface

Wired Share to FacebookShare to Twitter (8/8) reports OpenAI released a safety analysis for its new GPT-4o model, highlighting potential risks associated with its humanlike voice interface. The analysis warns that users might become emotionally attached to the chatbot. The “system card” outlines risks such as amplifying societal biases, spreading disinformation, and aiding in the development of weapons. Regarding emotional connections with AI, OpenAI’s Joaquin Quiñonero Candela said, “We don’t have results to share at the moment, but it’s on our list of concerns.” Experts like Lucie-Aimée Kaffee from Hugging Face and MIT Professor Neil Thompson commended OpenAI’s transparency but urged further detail and real-world risk evaluation.

dtau...@gmail.com

unread,
Aug 18, 2024, 12:23:55 PM8/18/24
to ai-b...@googlegroups.com

Bengio Joins U.K. Project to Prevent AI Catastrophes

ACM A. M. Turing Award laureate Yoshua Bengio has signed on to Safeguarded AI, a U.K. government-funded project with the goal of developing an AI system that can assess the safety of other AI systems deployed in critical sectors. This "gatekeeper" AI would assign risk scores and offer other quantitative guarantees regarding the real-world impacts of AI systems. Bengio, who will serve as the project's scientific director, said the use of AI to safeguard AI is "the only way, because at some point these AIs are just too complicated."
[
» Read full article ]

MIT Technology Review; Melissa Heikkilä (August 7, 2024)

 

Novel Ideas to Cool Datacenters: Liquid in Pipes, Dunking Bath

The advent of generative AI has made the cooling of datacenters a hot topic. Datacenters are expected to consume 8% of total U.S. power demand by 2030, compared with about 3% now. For its coming GB200 server racks, Nvidia will use liquid circulating in tubes rather than air to cool the hardware. The company is also working on additional cooling technologies, including dunking entire drawer-sized computers in a nonconductive liquid that absorbs and dissipates heat.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Yang Jie (August 11, 2024)

 

Ke Fan, Daniel Nichols Receive 2024 ACM-IEEE CS George Michael Memorial HPC Fellowships

Ke Fan of the University of Illinois at Chicago and Daniel Nichols of the University of Maryland are the recipients of the 2024 ACM-IEEE CS George Michael Memorial HPC Fellowships. Fan is recognized for her research in optimizing the performance of MPI collectives, enhancing the performance of irregular parallel I/O operations, and improving the scalability of performance-introspection frameworks. Nichols is recognized for advancements in machine-learning-based performance modeling and the advancement of large language models for HPC and scientific codes.
[
» Read full article ]

ACM Media Center (August 14, 2024)

 

Struggling AI Startups Look for Bailout from Big Tech

Many of the AI startups that raised billions of dollars last year are now seeking bailouts from big tech companies. Google has agreed to hire many of Character.AI's researchers and executives and helped buy out early investors by licensing the startup's technology for about $2 billion. Amazon recently paid around $330 million to hire most of Adept AI's staff and license its technology, following a move by Microsoft to hire almost all Inflection's staff to create a new consumer AI division and license the startup's technology for about $650 million. These deals are seen as more favorable than outright acquisitions that likely would face regulatory scrutiny.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Berber Jin; Tom Dotan; Miles Kruppa (August 6, 2024); et al.

 

DeepMind Develops a ‘Solidly Amateur’ Table Tennis Robot

Google’s DeepMind Robotics researchers developed a “solidly amateur human-level” robotic table tennis player. During testing, the robot beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. The system’s biggest shortcoming was how it reacted to fast balls, which DeepMind blames on system latency, mandatory resets between shots, and a lack of useful data.
[
» Read full article ]

TechCrunch; Brian Heater (August 8, 2024)

 

Older Americans Prepare for AI

Classes are being offered across the U.S. to help seniors better understand the benefits and risks of AI. The classes often detail the ways AI can make certain tasks easier while also warning them about deepfakes, misinformation, and AI-perpetrated scams. Said University at Buffalo's Siwei Lyu, "We need this kind of education for seniors, but the approach we take has to be very balanced and well-designed."
[
» Read full article ]

Associated Press; Dan Merica (August 13, 2024)

 

AI Pieces Together Ancient Epic

Researchers are leveraging AI to decipher more than 3,000-year-old clay tablets containing fragments of the Epic of Gilgamesh and other ancient writings. Over 500,000 clay tablets, and many more tablet fragments, are housed in museums and universities worldwide, many having yet to be read or published due to a lack of cuneiform experts. Researchers led by Enrique Jiménez of Germany's Ludwig Maximilian University of Munich have identified new segments of the poem and hundreds of missing words and lines from other works using a machine learning model.


[
» Read full article *May Require Paid Registration ]

The New York Times; Erik Ofgang (August 12, 2024)

 

California Partners with Nvidia to Bring AI Resources to Colleges

California and tech giant Nvidia are partnering to help train the state’s students, college faculty, developers, and data scientists in AI. The initiative aims to add new curriculum and certifications, hardware and software, and AI labs and workshops, and is particularly focused on community colleges. Said Governor Gavin Newsom, “California’s world-leading companies are pioneering AI breakthroughs, and it’s essential that we create more opportunities for Californians to get the skills to utilize this technology and advance their careers."
[
» Read full article ]

Associated Press; Sarah Parvini (August 9, 2024)

 

Tech Aims to Identify Future Olympians

AI-based talent spotting technology set up near the Olympic Stadium in Paris aimed to find the next generation of athletic stars. Data gathered from five tests, including running, jumping, and grip strength, was analyzed to assess a person's power, explosiveness, endurance, reaction time, strength, and agility. The results are compared with data from professional and Olympic athletes. “We’re using computer vision and historical data, so the average person can compare themselves to elite athletes and see what sport they are most physically aligned to,” says Sarah Vickers, head of Intel’s Olympic and Paralympic Program.
[
» Read full article ]

BBC News; Peter Ball (August 8, 2024)

 

Survey: Most Students Worry Overuse Of AI Could Devalue Higher Ed

Inside Higher Ed Share to FacebookShare to Twitter (8/9, Rowsell) reported this year’s Digital Education Council Global AI Student Survey “of more than 3,800 students from 16 countries” found that rising use of AI in higher education “could cause students to question the quality and value of education they receive.” More than half (55 percent) of respondents “believed overuse of AI within teaching devalued education, and 52 percent said it negatively impacted their academic performance.” The report says, “Students do not want to become over-reliant on AI, and they do not want their professors to do so either. Most students want to incorporate AI into their education, yet also perceive the dangers of becoming over-reliant on AI.” Still, some 86 percent said they “regularly” used programs “such as ChatGPT in their studies, 54 percent said they used it on a weekly basis, and 24 percent said they used it to write a first draft of a submission.”

        University Of New Mexico Faculty Receive Stipends To Use AI For Open Education Resources. Inside Higher Ed Share to FacebookShare to Twitter (8/9, Coffey) reported seven faculty members at the University of New Mexico “have spent the summer working to apply generative AI” to open educational resources (OER), which “are teaching and learning materials that are openly licensed, adaptable and freely available online.” As the faculty’s eight-week pilot “nears an end, each will collect $1,000 stipends as part of the university’s investment into OER, according to Jennifer Jordan, OER librarian at New Mexico. The university also recently received a $2.1 million grant from the U.S. Department of Education to establish an OER consortium in the state.” At the end of the session, “the UNM faculty will compile a guidebook on how to create and use OER, with a chapter dedicated to using AI in OER materials.” And as both generative AI and OER “continue to evolve, higher education can cautiously use both in conjunction with one another.”

        Commentary: Why Colleges Should Avoid Banning AI In Classrooms. In commentary for Fortune Share to FacebookShare to Twitter (8/9), Georgia Tech professor Arijit Raychowdhury said that although several school districts and colleges are “rushing to ban the use of ChatGPT in the classroom,” the Georgia Institute of Technology “has taken the opposite approach, welcoming the use of AI in study, essays, and other assignments – but with some guardrails.” Raychowdhury said that by allowing students “to use generative AI to solve problems and assist with assignments, we can show them what AI can and can’t do.” Additionally, “there needs to be clear, core rules with no gray areas,” and one place to “outline these AI rulings is in admissions.” Now is the time “to collaboratively figure out AI’s potential, benefits, and pitfalls. It’s important to have a diverse faculty, so we have as many different microcosms of society as possible represented and to present a united front in how you’re going to use AI, with room for variation in how different fields use AI.”

AI Industry Debates Synthetic Data Use

Insider Share to FacebookShare to Twitter (8/9, Chowdhury, Langley) reported that the AI industry is debating the use of synthetic data as real, human-generated data becomes scarce. Companies like OpenAI and Google have nearly exhausted available textual data, leading to increased interest in synthetic data. While synthetic data can fill gaps and address biases, it also risks degrading AI model performance. Researchers suggest a balanced approach using both real and synthetic data. Some companies are exploring “hybrid data” to mitigate risks. New approaches, such as neuro-symbolic AI, may offer alternative solutions to the data scarcity problem.

FCC Proposes New AI Disclosure Rules

PCMag Share to FacebookShare to Twitter (8/12) reports that the FCC is introducing new regulations requiring companies to disclose any use of AI in phone calls or texts to customers. FCC Chair Jessica Rosenworcel said, “That means before any one of us gives our consent for calls from companies and campaigns, they need to tell us if they are using this technology. ... It also means that callers using AI-generated voices need to disclose that at the start of a call.” This initiative follows a $6 million fine against Democratic consultant Steve Kramer for an AI deepfake of President Biden’s voice. The FCC aims to protect consumers from AI-generated robocalls, “citing a 1991 law designed to protect consumers from pre-recorded automated calls.” The agency seeks public comment on the proposed rules, which also highlight scam call detection technologies from Google and Microsoft. The FCC’s two Republican commissioners, Brendan Carr and Nathan Simington have both voiced opposition to the proposed regulations, with Simington stating, “The idea that the commission would put its imprimatur on even the suggestion of ubiquitous third-party monitoring of telephone calls for the putative purpose of ‘safety’ is beyond the pale.”

AI Model Detects Diseases With 98% Accuracy

The New York Post Share to FacebookShare to Twitter (8/13, Swartz) reports that researchers in Iraq and Australia have developed an AI algorithm capable of diagnosing medical conditions by analyzing tongue color with 98% accuracy. Senior study Share to FacebookShare to Twitter author Ali Al-Naji explained that different tongue colors can indicate various diseases, such as yellow for diabetes and purple for cancer. The study involved 5,260 images to train the AI model and tested it with 60 images from Middle Eastern hospitals. Co-author Javaan Chahl mentioned that this technology could be adapted into a smartphone app for diagnosing multiple conditions. The findings were published in the journal Technologies.

Huawei Prepares New AI Chip To Compete With Nvidia

The Wall Street Journal Share to FacebookShare to Twitter (8/13, Lin, Huang, Subscription Publication) reports that Huawei Technologies is nearing the release of its new AI chip, Ascend 910C, as it attempts to overcome US sanctions to rival Nvidia in China. Chinese firms like ByteDance, Baidu, and China Mobile are in early talks to acquire the chip. Huawei plans to start shipping in October, aiming for orders surpassing 70,000 units worth around $2 billion. Despite production delays and potential further US restrictions, Huawei has received significant state support. Analyst Dylan Patel from SemiAnalysis noted that the Ascend 910C could outperform Nvidia’s B20.

Google Launches Gemini Live

TechCrunch Share to FacebookShare to Twitter (8/13, Wiggers) reports Google launched Gemini Live, a new voice chat feature for its AI chatbot, available starting Tuesday. Announced at the Made by Google 2024 event, Gemini Live offers in-depth voice interactions with enhanced speech capabilities. Initially available in English, this feature is part of the Google One AI Premium Plan, costing $20 per month.

        On CNBC’s Power Lunch Share to FacebookShare to Twitter (8/13), CNBC’s Deirdre Bosa spoke with Rick Osterloh, SVP of Platforms & Devices at Google, about the company’s integration of AI into its hardware.

California AI Bill Faces Pushback From Industry Players

TechCrunch Share to FacebookShare to Twitter (8/13, Zeff) reports “a California bill, known as SB 1047,” seeks to prevent “real-world disasters caused by AI systems before they happen, and it’s headed for a final vote in the state’s senate later in August.” But “while this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers and startup founders.”

        The New York Times Share to FacebookShare to Twitter (8/14, Metz, Kang) also reports.

California Faces Data Center Energy Crisis

The Los Angeles Times Share to FacebookShare to Twitter (8/13) says Los Angeles Times writer Melody Petersen “reported this week that concerns are mounting that data centers are gobbling up electricity at an unsustainable rate, putting California in a precarious power position and threatening to derail ambitious clean energy goals.” Experts warn that the rapid construction of data centers could hinder California’s transition away from fossil fuels, increase electric bills, and elevate blackout risks. Generative AI exacerbates the issue, as its operations consume significantly more electricity than traditional computing. Data centers in California, particularly in Santa Clara and Los Angeles counties, are already straining the state’s power grid, which ranks 49th in energy resilience. Additionally, these facilities require substantial water for cooling, further stressing the state’s dwindling water supply.

Report: AI Policies In K-12 Schools Lack Cohesion

K-12 Dive Share to FacebookShare to Twitter (8/13, Merod) reports that nearly two years after ChatGPT’s emergence, “artificial intelligence policies continue to vary widely among school districts nationwide.” As of June, 15 states had “developed AI guidance for schools, according to the U.S. Education Department,” but the guidance is “disjointed and often lacks details about use cases and implementation, the Center on Reinventing Public Education said in a report released this month.” CRPE’s report emphasized that without “clear policies and guidance, districts will continue to struggle with procurement, data-sharing policies, technical questions, and implementation strategies, ultimately leading to disjointed approaches and unequal access.” To address these issues, CRPE gathered more than 60 stakeholders in April to discuss AI’s potential in education and the need for cohesive policies. CRPE “outlined a roadmap” including innovative uses of AI, strategic funding for AI tools, prioritizing low-income communities, and providing detailed implementation plans.

Johns Hopkins Professor Warns Overlooked Threat Of AI Is “Depersonalization Of Human Relationships”

In commentary for TIME Share to FacebookShare to Twitter (8/14), John Hopkins University professor Allison Pugh says that the common discourse around artificial intelligence risks – job disruption, bias, and surveillance – misses a critical threat: the depersonalization of human relationships. Pugh argues that AI’s integration into roles requiring emotional connection, such as counseling and teaching, undermines “connective labor,” which is essential for meaningful human interactions. The author spent five years studying more than 100 individuals in humane interpersonal work and found that technology makes this labor invisible, forces workers to prove their humanity, and leads to job overload. Pugh emphasizes that socioemotional AI should be clearly labeled and calls for policies to protect human-to-human connections, as these are vital for social cohesion and individual well-being.

Experts: GenAI Can Help People With ADHD, But Be Cautious

“Experts say generative AI tools can help people with attention deficit hyperactivity disorder ... to get through tasks quicker,” the Associated Press Share to FacebookShare to Twitter (8/14, Hunter) reports. However, “they also caution that it shouldn’t replace traditional treatment for ADHD, and also expressed concerns about potential overreliance and invasion of privacy.” According to the AP, “generative AI tools can help people with ADHD break down big tasks into smaller, more manageable steps.” Chatbots are able to “offer specific advice and can sound like you’re talking with a human,” while “some AI apps can also help with reminders and productivity.”

Musk’s AI Firm Launches Chatbots Generating Controversial Political Images

MediaPost Share to FacebookShare to Twitter (8/14, Kirkland) reports that Elon Musk’s AI firm, xAI, has introduced new chatbot models, Grok-2 and Grok-2 mini, featuring an in-app image generator for premium users, that have already depicted political figures in controversial scenarios. Despite recent calls for Musk to address election misinformation spread by chatbot Grok, the new bots produce political illustrations involving real people with few restrictions. Early users have already posted contentious AI-generated images. The lack of an indication that these images are AI-generated raises concerns about further misinformation ahead of the next U.S. Presidential election.

Study Finds Generative AI Models Hallucinate Often

TechCrunch Share to FacebookShare to Twitter (8/14, Wiggers) reports a recent study “sought to benchmark hallucinations by fact-checking models like GPT-4o against authoritative sources on topics ranging from law and health to history and geography.” Researchers “found that no model performed exceptionally well across all topics, and that models that hallucinated the least did so partly because they refused to answer questions they’d otherwise get wrong.”

Reports Offer Divergent Views On AI In Education

The Seventy Four Share to FacebookShare to Twitter (8/14, Toppo) reports that two new reports released last week “offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.” The Center on Reinventing Public Education (CRPE) at Arizona State University “advises educators to take a more active role in how AI evolves, saying they must articulate to ed tech companies in a clear, united voice what they want AI to do for students.” In contrast, Cognitive Resonance, a think tank based in Austin, Texas, warns “of the inherent hazards of using AI for bedrock tasks like lesson planning and tutoring – and questions whether it even has a place in instruction at all, given its ability to hallucinate, mislead and basically outsource student thinking.”

UT Dallas, University Of Buffalo Researchers Create AI Model To Combat Power Outages

The Dallas Morning News Share to FacebookShare to Twitter (8/15, Horner) reports University of Texas at Dallas (UT Dallas) researchers, in collaboration with the University at Buffalo, have developed an AI model to prevent power outages by rerouting electricity in milliseconds. The study, published in Nature Communications, showcases early “self-healing grid” technology. This system uses machine learning to map complex power distribution networks and can automatically identify alternative routes before an outage occurs. The project was supported by the US Office of Naval Research and the National Science Foundation.

Minority Students, Teachers More Likely To Embrace AI In Education

Forbes Share to FacebookShare to Twitter (8/15, Boser) reports that “new surveys show no group has a more open attitude to adopting AI in the classroom than students and teachers of color.” A Walton Family Foundation survey found that “Black teachers and educators in urban districts had the highest usage rates at 86 percent,” and in K-12 overall, “Hispanic and Black students have higher usage rates at 77 percent and 72 percent respectively versus White students at 70 percent.” A report by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving indicates that Black youth are more likely to use generative AI for information, brainstorming, and schoolwork. Despite the enthusiasm, only 25 percent “of teachers polled said they have received any training on AI chatbots,” contributing to hesitancy.

Daniel Tauritz

unread,
Aug 24, 2024, 5:29:48 PM8/24/24
to ai-b...@googlegroups.com

U.S. Government Wants You — Yes, You — to Hunt Down Generative AI Flaws

Ethical AI and algorithmic assessment nonprofit Humane Intelligence and the National Institute of Standards and Technology (NIST) are calling for public participation in the qualifying round of NIST's Assessing Risks and Impacts of AI challenge. Those who make it through the online qualifier will participate in an in-person red-teaming event to assess AI office productivity software at the Conference on Applied Machine Learning in Information Security in October. Said Humane Intelligence's Theo Skeadas, "We want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs."
[ » Read full article ]

Wired; Lily Hay Newman (August 21, 2024)

 

Worldcoin Battles with Governments over Your Eyes

Governments increasingly are concerned the Worldcoin biometric cryptocurrency project, headed by OpenAI's Sam Altman, is building a global biometric database with minimal oversight. The initiative's goal is to scan the eyes of every human, issue online "World ID" passports to prove users are human, and make payments to users in Worldcoin's WLD cryptocurrency. Governments have raised concerns over reports that operators of Worldcoin's iris-scanning devices are encouraging users to allow Worldcoin to use their iris scans to train its algorithms.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Angus Berwick; Berber Jin (August 18, 2024)

 

AI Detection Tools Often Fail to Catch Election Deepfakes

An April study by the Reuters Institute for the Study of Journalism revealed how basic software tricks and editing techniques can fool many deepfake detectors. A 2023 study by U.S., Australian, and Indian researchers found accuracy rates for deepfake detectors ranged from just 25% to 82%. University of California at Berkeley computer science professor Hany Farid said the datasets used to train detectors mainly contain lab-created, not real-world, deepfakes and perform poorly in identifying abnormal patterns in body movement or lighting.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Kevin Schaul; Pranshu Verma; Cat Zakrzewski (August 15, 2024)

 

Hollywood Union Strikes Deal for Advertisers to Replicate Actors' Voices with AI

A deal between the Hollywood actors' union SAG-AFTRA and online talent marketplace Narrativ will allow actors to sell the rights to replicate their voices with AI to advertisers. The agreement ensures actors will have control over the use of their digital voice replicas and will receive income from the technology equal to at least the SAG-AFTRA minimum pay for audio commercials. Brands will need to obtain an actor's consent for each ad using their AI-generated voice replica.
[ » Read full article ]

Reuters; Danielle Broadway; Dawn Chmielewski (August 14, 2024)

 

AI Assistant Monitors Teamwork to Promote Effective Collaboration

An AI assistant developed by computer scientists at the Massachusetts Institute of Technology can oversee teams of humans and AI agents, aligning their roles and intervening as necessary to improve teamwork toward a common goal. The AI assistant can infer the humans’ plans and understanding of one another and, when issues arise, align their beliefs, ask questions, and provide instruction.
[ » Read full article ]

MIT News; Alex Shipps (August 19, 2024)

 

Heart Data Unlocks Sleep Secrets

University of Southern California computer scientists developed open source software that could allow for the development of inexpensive, DIY sleep-tracking devices by anyone with basic coding knowledge. Their model uses heart data and a deep-learning neural network to assess sleep stages. The automated electrocardiogram-only network accurately categorizes sleep stages. The researchers said it outperformed commercial sleep-tracking devices and other models that also do not utilize electroencephalogram data.
[ » Read full article ]

USC Viterbi School of Engineering; Caitlin Dawson (August 19, 2024)

 

Pentagon's New Supercomputer to Boost Defense Against Biothreats

The U.S. Department of Defense (DOD) announced a new supercomputer and rapid response laboratory (RRL) intended to bolster its Chemical and Biological Defense Program's Generative Unconstrained Intelligent Drug Engineering (GUIDE) program. The supercomputer will use AI modeling, simulations, threat classification, and medical countermeasure development in conjunction with the RRL to improve biodefenses.
[ » Read full article ]

TechRadar; Benedict Collins (August 19, 2024)

 

OpenAI Disrupts AI-Based Iranian Influence Campaign

The New York Times Share to FacebookShare to Twitter (8/16, Metz) reported OpenAI “said on Friday that it had discovered and disrupted an Iranian influence campaign that used the company’s generative artificial intelligence technologies to spread misinformation online, including content related to the U.S. presidential election.” The company “said it had banned several accounts linked to the campaign from its online services,” but it “added that a majority of the campaign’s social media posts had received few or no likes, shares or comments, and that it had found little evidence that web articles produced by the campaigns were shared across social media.” The campaign had “used its technologies to generate articles and shorter comments posted on websites and on social media.”

        The Washington Post Share to FacebookShare to Twitter (8/16) explains that “the sites and social media accounts that OpenAI discovered posted articles and opinions made with help from ChatGPT on topics including the conflict in Gaza and the Olympic Games,” as well as “material about the U.S. presidential election, spreading misinformation and writing critically about both candidates.” Ben Nimmo, “principal investigator on OpenAI’s intelligence and investigations team, said the activity was the first case of the company detecting an operation that had the U.S. election as a primary target,” adding, “Even though it doesn’t seem to have reached people, it’s an important reminder, we all need to stay alert but stay calm.”

San Francisco To Sue AI Web Sites Over Deepfake Nude Images

The San Francisco Chronicle Share to FacebookShare to Twitter (8/17, DiFeliciantonio) reports, “San Francisco City Attorney David Chiu is suing 16 websites that his office says use AI to create nonconsensual, fake nude images of women and girls, the first lawsuit of its kind.” The Chronicle explains, “The sites allow users to create AI-generated images of real people, swapping their faces onto nude images in a violation of state and federal laws prohibiting deepfake pornography, revenge pornography and child pornography. ... The suit is seeking civil penalties and for the sites to be blocked by web hosts from continuing to post the alleged illegal content. So as not to push traffic to the companies’ websites, their URLs were redacted in the legal complaint filed Thursday.”

Professors Partner With Police For AI Public Safety Solutions

Inside Higher Ed Share to FacebookShare to Twitter (8/19, Coffey) reports that Yao Xie, a professor at the Georgia Institute of Technology, has completed a seven-year collaboration with the Atlanta Police Department using AI to improve policing. Starting in 2017, Xie’s work focused on crime linkage analysis, rezoning police districts, and ensuring fair neighborhood services. This partnership is part of a broader trend where universities collaborate with law enforcement to harness AI for public safety. Projects include facial recognition comparisons by the University of Texas at Dallas, image analysis by Carnegie Mellon, and risk analysis by the Illinois Institute of Technology. Funded by a $3.1 million National Institute of Justice initiative, these efforts address public safety video analysis, DNA analysis, gunshot detection, and crime forecasting.

AAC&U, Elon University Release AI Guide For Students

Inside Higher Ed Share to FacebookShare to Twitter (8/19, Coffey) reports, “The American Association of Colleges and Universities and Elon University have launched an artificial intelligence how-to guide for students navigating the sometimes-murky waters of the burgeoning technology.” They call their AI-U guide a “student guide to navigating college in the artificial intelligence era.” The guidebook was “born out of conversations last year between dozens of universities at the United Nations-sponsored Internet Governance Forum, which culminated in six principles for the use of AI in higher education.” The guide includes “how to use AI in learning environments, such as for writing and research assistance; effective AI prompts to use; concerns with generative AI; and using AI in potential career searches.” The guide was compiled “by feedback from more than 100 students from various universities and faculty members who attended the U.N. forum.”

Meta Promotes Internal AI Tool

Insider Share to FacebookShare to Twitter (8/19, Altchek) reports that Meta has been promoting its internal AI tool, Metamate, which has been in use for over a year. Meta product director Esther Crawford highlighted the tool’s efficiency benefits in a post on X, stating it aids in tasks like summarizing documents and debugging. Crawford’s comments sparked discussions among employees and industry peers, with Shopify COO Kaz Nejatian expressing agreement. Other companies, including consulting firms and banks, have also been investing heavily in AI tools to enhance workplace performance.

AI Solutions Boost Sustainability Efforts

Forbes Share to FacebookShare to Twitter (8/19, Kirti) reports that AI can significantly enhance sustainability initiatives for businesses by streamlining data analytics and uncovering hidden opportunities. Proper AI integration requires operational changes and robust change management. AI can validate hypotheses about waste and inefficiencies by analyzing vast data, offering tailored solutions to improve sustainability. AI also helps in prioritizing opportunities by estimating the size of sustainability improvements. Aligning stakeholders and training staff is crucial for effective AI-driven sustainability. Despite AI’s high energy consumption, advancements are being made to reduce this impact, as noted by Dr. Sasha Luccioni.

Authors Sue AI Startup Anthropic Over Copyright Infringement

The AP Share to FacebookShare to Twitter (8/20) reports a group of authors is suing AI startup Anthropic, alleging it used pirated copies of copyrighted books to train its chatbot Claude. This marks the first lawsuit by writers against Anthropic, which was founded by ex-OpenAI leaders. The lawsuit, filed in federal court in San Francisco, claims Anthropic’s actions contradict its stated goals of responsibility and safety. Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed the suit, seeking to represent other affected writers. Anthropic did not respond to requests for comment. This case adds to the growing number of lawsuits against AI developers for copyright infringement.

California AI Bill Divides Silicon Valley

Insider Share to FacebookShare to Twitter (8/19) reports that California’s SB 1047, introduced by Sen. Scott Wiener in February, aims to regulate AI development by setting safety standards for large-scale systems. It mandates safety testing and liability for companies developing costly AI models. Major tech firms like Meta and OpenAI have criticized the bill, arguing it stifles innovation. Smaller companies express mixed feelings, with some supporting its transparency measures. California Governor Gavin Newsom has not commented on the bill, which will be voted on by the state Assembly by month-end.

Amazon Launches AI Showroom In San Francisco

The San Francisco Chronicle Share to FacebookShare to Twitter (8/20, DiFeliciantonio) reports Amazon has launched a showroom on Market Street in San Francisco to showcase its AI and robotics efforts. The GenAI loft aims to attract startups, tech developers, and investors to spotlight Amazon’s AI work. AWS VP of Developer Experience Adam Seligman said the opening comes at a time “when people are just learning how to use AI.” The San Francisco launch includes robot-made paintings and AI-generated artwork by Claire Silver, with interactive holograms and AI tech talks. Amazon plans to open similar spaces in São Paulo, London, Paris, and Seoul this year.

California AI Regulation Bill Advances

Forbes Share to FacebookShare to Twitter (8/20, Tedford) reports that a bill to regulate artificial intelligence companies in California has progressed through the state assembly appropriations committee. The legislation, proposed by State Senator Scott Wiener, mandates safety testing for advanced AI models and empowers the state attorney general to file charges if technologies cause harm. The bill faces opposition from tech giants like Google and Meta, who argue it could stifle innovation. The bill will now go to the full state assembly for a vote before the legislative session ends on August 31. Governor Gavin Newsom’s stance remains uncertain.

Educators Debate Ethical AI Use In Schools

PC World (8/20, Hachman) reports the ethical use of artificial intelligence in education remains a contentious issue, and since ChatGPT’s release in November 2022, opinions vary widely. High schools often view AI as a potential cheating tool, while “several universities leave generative AI use entirely up to the discretion of the person teaching the course.” The director of instructional technology at the Mohonasen Central School District highlights the concerns of teachers and the district’s cautious approach to AI, including a trial with Khan Academy’s Khanmigo. Despite AI’s benefits, educators emphasize the need for proper integration to avoid hindering learning.

dtau...@gmail.com

unread,
Sep 1, 2024, 5:21:19 PM9/1/24
to ai-b...@googlegroups.com

California Passes AI Safety Bill

California’s legislature approved an AI safety bill opposed by many tech companies. The measure moved to Governor Gavin Newsom’s desk after passing the state Assembly Wednesday, with the Senate granting final approval Thursday. SB 1047 mandates that companies developing AI models take “reasonable care” to ensure that their technologies don’t cause “severe harm,” such as mass casualties or property damage above $500 million.
[
» Read full article ]

Bloomberg; Shirin Ghaffary (August 29, 2024)

 

AI's Race for Energy Butts Up Against Bitcoin Mining

U.S. tech firms, seeking more electricity to power AI and cloud computing datacenters, are turning to bitcoin miners. By the end of 2027, 20% of bitcoin miner power capacity is expected to shift to AI. Morgan Stanley researchers found crypto mining facilities could become upwards of five times more valuable by repurposing operations for AI and cloud computing. Additionally, datacenter wait times could be shortened by around 3.5 years by buying or leasing a bitcoin mining facility with at least 100 MW of capacity.
[
» Read full article ]

Reuters; Laila Kearney; Mrinalika Roy (August 28, 2024)

 

AI Could Engineer a Pandemic, Experts Warn

A policy paper from public health and legal professionals at Stanford School of Medicine, Fordham University, and the Johns Hopkins Center for Health Security calls for mandatory oversight and guardrails for advanced biological AI models. The authors wrote they believe governments should collaborate with machine learning, infectious disease, and ethics experts to develop tests to determine whether biological AI models could pose "pandemic-level risks."
[
» Read full article ]

Time; Tharin Pillay; Harry Booth (August 27, 2024)

 

'Biocomputers' Made of Human Brain Cells Available for Rent

Researchers can rent cloud access to "biocomputers" from the Swiss tech firm FinalSpark for a monthly fee of $500. A low-energy alternative to AI models, these biocomputers, or organoids, are comprised of human brain cells and last only about 100 days. Among the nine universities granted access to FinalSpark's biocomputers are the University of Michigan, and Germany's Free University of Berlin and Lancaster University.
[
» Read full article ]

Interesting Engineering; Gairika Mitra (August 25, 2024)

 

How Tech Companies Obscure AI's Real Carbon Footprint

A Bloomberg Green analysis found that Amazon, Microsoft, and Meta are buying millions of unbundled renewable energy certificates (RECs) so they can claim emission reductions. Although current carbon accounting rules factor these credits into a company's carbon footprint calculations, research indicates carbon savings on paper fail to translate into actual emissions reductions in the atmosphere.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Akshat Rathi; Natasha White; Ben Elgin (August 21, 2024); et al.

 

AI Researchers Call for 'Personhood Credentials' as Bots Get Smarter

A team including researchers from OpenAI, Microsoft, and Harvard University have proposed the development of "personhood credentials" to help distinguish humans from bots online. Such a system would require humans to verify their identities offline to receive an encrypted credential allowing them to access an array of online services. The researchers proposed multiple personhood credentialing systems be created so users have options, and a single entity does not control the market.
[ » Read full article ]

The Washington Post; Will Oremus (August 21, 2024)

 

GitHub Survey Finds Nearly All Developers Use AI Coding Tools

Nearly all (97%) of the 2,000 developers, engineers, and programmers polled by GitHub across the U.S., Brazil, Germany, and India said they have used AI coding tools at work. Most respondents said they perceived a boost in code quality when using AI tools, and 60% to 71% of those polled said adopting a new programming language or understanding an existing codebase was "easy" with AI coding tools.
[ » Read full article ]

InfoWorld (August 21, 2024)

 

Machine Learning Algorithm Improves 3D Printing Efficiency

Washington State University (WSU) researchers developed a machine learning algorithm that identifies the most efficient 3D print settings for producing complex structures. The researchers used the algorithm to optimize the design for kidney and prostate organ models, with a focus on geometric precision, weight, porousness, and printing time. WSU's Eric Chen said, "We were able to strike a favorable balance and achieve the best possible printing of a quality object, regardless of the printing type or material shape."
[ » Read full article ]

Engineering.com; Ian Wright (August 22, 2024)

 

The Year of the AI Election That Wasn't

More than two dozen tech companies offer AI products geared toward political campaigns, with the ability to reorganize voter rolls, handle campaign emails and robocalls, and produce AI-generated likenesses of candidates for virtual meet-and-greets. However, interviews with tech companies and political campaigns indicate that the technology has not taken off, largely due to a distrust for AI among voters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Sheera Frenkel (August 21, 2024)

 

AI Could Help Shrinking Pool of Coders Keep Outdated Programs Working

An AI model developed by researchers at Vietnam's FPT Software AI Center could allow COBOL-based systems to remain operational as the number of engineers familiar with the older programming language continues to decline. The researchers are training the XMainframe model to interpret COBOL code and rewrite it in other programming languages. In tests, the model outperformed other AI models in accurately summarizing the purpose of COBOL code.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (August 20, 2024)

 

China's AI Engineers Secretly Access Banned Nvidia Chips

Chinese AI developers increasingly are skirting U.S. export controls that prevent them from directly importing Nvidia chips by working with brokers to access them overseas. The users' identities are concealed through "smart contracts" via the blockchain, and the transactions are paid for using cryptocurrency. Experts say these arrangements do not break any laws.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Raffaele Huang (August 26, 2024)

 

Florida International University Students Create AI Policies For Their Class

Inside Higher Ed Share to FacebookShare to Twitter (8/22, Coffey) reported that students at Florida International University (FIU) “were asked to come up with their own AI guidelines for a Rhetorical Theory and Practice class earlier this year.” This initiative marks a departure from the typical prohibition of AI, which is often equated with plagiarism. Students were broken into small groups “to come up with what they believed were best practices, which they then presented to the class at large to fine-tune their ideas. In a summer course, with its shorter time frame...students look at the spring semester policy and make tweaks to create their own.” The experiment resulted in varied policies on AI use for brainstorming and organizing papers. The instructor of the class “will continue to allow students to create their own AI policies this fall, expanding from her upper-level courses to first-year students as well.”

Google DeepMind Workers Protest Military Contracts

TIME Share to FacebookShare to Twitter (8/22, Perrigo) reported that nearly 200 Google DeepMind employees signed a letter urging Google to end its military contracts, citing concerns over ethical AI use. The letter, dated May 16, references Google’s Project Nimbus contract with the Israeli military, which includes AI and cloud services. A Google spokesperson stated, “We comply with our AI Principles, which outline our commitment to developing technology responsibly.” Despite the protest, Google has not acted on the demands, leading to growing frustration among employees.

AI Bots Expected To Transform Everyday Tasks

The Wall Street Journal Share to FacebookShare to Twitter (8/24, Lin, Subscription Publication) reports on the anticipated rise of AI “agents” capable of independently completing various tasks, from booking flights to managing reservations. AWS VP of Generative AI Vasi Philomin said, “In the next stage, bots will be built to do things like arrange returns, all without human help.” As these advancements unfold, Amazon and its services, including AI chatbots for shopping, stand to play a significant role in the emerging technology landscape.

Expert Says Schools Need To Ask Essential Questions About AI For Children

The Washington Post Share to FacebookShare to Twitter (8/23) reported Los Angeles public schools are facing challenges with their new AI program, launched to assist students’ learning. The district introduced the chatbot “Ed” in March to be a “personal assistant to students.” However, financial issues with the start-up AllHere led to the project’s suspension and an investigation into potential misuse of student data. Despite this, district officials plan to continue the AI initiative. Alex Molnar from the National Education Policy Center argues that schools should not adopt AI without ensuring it is the best solution. He emphasizes the need for thorough evaluation and data protection. Molnar suggests parents ask critical questions about AI’s effectiveness, alternatives, and data security. He further recommends legislative pressure to ensure AI tools are safe and effective before implementation. Surveys indicate skepticism about AI, despite hopes it can address educational challenges.

Colleges Ramp Up AI Faculty Hiring

The Chronicle of Higher Education Share to FacebookShare to Twitter (8/26, Swaak) reports that colleges are significantly increasing their AI faculty hiring to keep up with technological advancements and industry demands. Despite being well-funded, some top 20 institutions feel they can’t compete with elite universities in AI talent acquisition, says Att Trainum of the Council of Independent Colleges. An analysis of The Chronicle’s jobs site “conducted earlier this year found that the number of AI-related listings had more than doubled between 2022 and 2023.” Institutions like Purdue University, Emory University, and the University of Georgia are making substantial hires, with some creating new AI-focused centers. Funding comes from “a combination of sources,” including multimillion-dollar donations and strategic funds. Colleges are also promoting internal training programs and offering incentives to existing faculty to integrate AI into their work.

University Of Texas To Host New AI Supercomputer

The Austin (TX) Business Journal Share to FacebookShare to Twitter (8/26, Sayers, Subscription Publication) reports the Texas Advanced Computing Center’s (TACC) new supercomputer, Horizon, “will be built in Round Rock at Seattle-based Sabey Data Centers’ new campus.” The University of Texas announced last month “that it was awarded $457 million from the U.S. National Science Foundation to build what’s called a Leadership Class Computing Facility led by the university’s TACC.” Officials confirmed on August 23 that Horizon is expected to start operations in 2026. Horizon will offer “a 10-times performance improvement for simulation” over TACC’s current Frontera supercomputer. The LCCF will collaborate with various science centers, including those at historically Black colleges and universities and other national supercomputing centers.

Tech Firms Conceal AI’s Water, Power Demands

The Los Angeles Times Share to FacebookShare to Twitter (8/26) reports that AI computing significantly increases electricity and water consumption, with ChatGPT using 10 times more power than standard Google searches. Experts called for transparency from tech companies regarding energy and water usage. Alex de Vries, founder of Digiconomist, said, “Even if we manage to feed AI with renewables, we have to realize those are limited in supply, so we’ll be using more fossil fuels elsewhere.” Google and OpenAI have not disclosed specific consumption details, despite environmental concerns.

Column: Researchers Working To Combat AI Hallucinations In Math Tutoring

In her column for The Hechinger Report Share to FacebookShare to Twitter (8/26), Jill Barshay says, “One of the biggest problems with using AI in education is that the technology hallucinates,” or generates incorrect information. AI chatbots, such as Khan Academy’s Khanmigo powered by ChatGPT, often provide wrong answers, particularly in math. Two researchers from University of California, Berkeley, “recently documented how they successfully reduced ChatGPT’s instructional errors to near zero in algebra” using a method called “self-consistency,” but this method was less effective in statistics, with a 13 percent error rate. Despite these challenges, a study found that ChatGPT’s solutions helped adults learn math better than traditional methods. Barshay says she would “like to see how much real students – not just adults recruited online – use these automated tutoring systems.”

OpenAI Prepares To Launch New AI Model

SiliconANGLE Share to FacebookShare to Twitter (8/27) reports that OpenAI is set to launch a new AI model named “Strawberry” with advanced problem-solving capabilities, according to a report first published by The Information. Strawberry is reportedly capable of solving complex math problems, developing marketing strategies and solving word puzzles. The model, previously known as Q*, surpasses the performance of other OpenAI models on several AI benchmarks. OpenAI employees have raised concerns that the new model could represent a major breakthrough in the journey toward building artificial general intelligence (AGI). According to the report, Strawberry could be released in fall 2024.

Tech Companies Support California Bill To Label AI-Generated Content

TechCrunch Share to FacebookShare to Twitter (8/26, Zeff) reports that OpenAI, Adobe, and Microsoft are backing a California bill, AB 3211, that mandates technology companies to label AI-generated content. The bill stipulates the inclusion of watermarks in the metadata of AI-created photos, videos, and audio files, and that online platforms must display these labels in a user-friendly manner. This support comes despite earlier opposition, suggesting recent amendments to the bill are satisfactory.

        Another TechCrunch Share to FacebookShare to Twitter (8/26, Coldewey) article reports that Elon Musk “unexpectedly” voiced support for the bill as well, writing on X, “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.” TechCrunch notes that Musk’s xAI “would be subject to SB 1047’s requirements despite his pledge to leave California.”

Google To Again Allow AI-Generated Images Of People

Bloomberg Share to FacebookShare to Twitter (8/28, Love, Subscription Publication) reports that Google will allow some users to generate images of people using its AI models after suspending the feature due to a scandal. In February, Google faced backlash for producing historically inaccurate and racially incorrect images. Alphabet CEO Sundar Pichai deemed the responses “completely unacceptable,” leading to the suspension and subsequent review of the tool. The Wall Street Journal Share to FacebookShare to Twitter (8/28, Kruppa, Subscription Publication) reports that Google stated it has improved user experience and set content limitations. The feature will be available for English-language users in the coming days, using Imagen 3 technology.

Self-Driving Cars Raise Ethical Concerns

The Wall Street Journal Share to FacebookShare to Twitter (8/28, Subscription Publication) provides an overview of the key questions surrounding AI-driven vehicles, highlighting concerns from engineers, programmers, and bioethicists. Shai Shalev-Shwartz, chief technology officer at Mobileye, said that the balance between safety and speed is “the one thing that really affects 99% of the moral questions around autonomous vehicles.” Adjusting the parameters between these two aspects, he said, can lead to AI driving that ranges from overly reckless to overly cautious, or to a style that feels “natural” and human-like.

Survey: Only 25% Of School Districts Have Released Guidance On AI

K-12 Dive Share to FacebookShare to Twitter (8/28, Merod) reports a Digital Promise survey found that while a majority of school districts are using some AI in classrooms, just 25% have set specific AI policies or guidance. This is despite 41% of districts having purchased AI tools within the last year. The lack of “official guidance and policy at the district level comes amid a widespread push by K-12 organizations and industry leaders to roll out AI frameworks for students and staff.” Still, 75% of districts have professional development for teachers on safely and effectively using the technology.

OpenAI, Anthropic Sign AI Safety Testing Deals With US Government

Reuters Share to FacebookShare to Twitter (8/29) reports that AI startups OpenAI and Anthropic have signed agreements with the US AI Safety Institute for research, testing, and evaluation of their AI models. Announced on Thursday, these first-of-their-kind deals come amid increasing regulatory scrutiny. The agreements allow the institute to access new models before and after public release and enable collaborative research on AI capabilities and risks. Jason Kwon of OpenAI emphasized the institute’s role in US AI leadership, while Elizabeth Kelly of the Institute called the agreements a significant milestone. The institute will also collaborate with the UK AI Safety Institute.

        Apple And Nvidia In Talks To Invest In OpenAI. The New York Times Share to FacebookShare to Twitter (8/29) reports that Apple and Nvidia in talks to invest in OpenAI, according to sources familiar with the matter. The new round led by Thrive Capital would value OpenAI at $100 billion, representing a $20 billion increase from eight months ago. Thrive Capital is expected to invest about $1 billion, and Microsoft may also join the funding round. Nvidia and Apple declined to comment on the report. Apple plans to integrate OpenAI’s chatbot on iPhones and is expected to share details of its generative AI technology, Apple Intelligence, in September.

        OpenAI’s “Strawberry” AI Model Promises Advanced Capabilities. Newsweek Share to FacebookShare to Twitter (8/29, Boran) reports that OpenAI is developing a next-generation AI model named “Strawberry,” which may be released in fall 2024. This advanced large language model (LLM) aims to enhance AI reasoning, allowing it to solve complex math problems and word logic puzzles. Kristian J. Hammond, director of the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University noted that current models like ChatGPT-4 struggle with context-dependent and multi-step problems. Hammond said Strawberry “could push AI beyond just mimicking human language into realms of thoughtful analysis.”

Yale University Announces $150M Investment For AI Initiatives

Forbes Share to FacebookShare to Twitter (8/29, T. Nietzel) reports, “Yale University has announced it will invest more than $150 million over the next five years for a variety of artificial intelligence (AI) initiatives.” The investment “will support four AI-related priorities: improvements in computing infrastructure, increased access to secure generative AI tools, the addition of new faculty and seed grants, and enhanced interdisciplinary collaboration.”

UCLA Researchers Develop AI-Based Lyme Disease Test

LabPulse Share to FacebookShare to Twitter (8/29) reports that researchers at the University of California, Los Angeles (UCLA) have developed an AI-based test for Lyme disease that delivers results within 20 minutes. The study from UCLA’s California NanoSystems Institute demonstrates that the test is as accurate as traditional methods. The test uses synthetic peptides and a paper-based platform analyzed by an AI algorithm. Co-author Dino Di Carlo highlighted the test’s potential for early, cost-effective diagnosis. The test showed 95.5% sensitivity and 100% specificity in trials. The team is seeking partners to scale the technology and adapt it for whole blood samples. The study Share to FacebookShare to Twitter was published in Nature Communications and received support from the NIH and the National Science Foundation.

Nvidia Faces Manufacturing Challenges With New AI Chips

The Wall Street Journal Share to FacebookShare to Twitter (8/29, Subscription Publication) reports that Nvidia is experiencing manufacturing difficulties with its new AI chips, Blackwell, which are larger and more complex than previous models. These issues contributed to narrower profit margins and a $908 million provision, causing a 6.4% drop in stock on Thursday. CEO Jensen Huang noted the high demand for Blackwell, despite the challenges. Analysts attribute the problems to the chip’s size and new design methods from Taiwan Semiconductor Manufacturing Co. CFO Colette Kress expects increased production to boost revenue next quarter. Additionally, Nvidia’s rapid release cycle has intensified pressure to resolve these issues.

dtau...@gmail.com

unread,
Sep 7, 2024, 6:54:12 PM9/7/24
to ai-b...@googlegroups.com

Researchers Build 'AI Scientist'

A team of researchers from the U.S., Canada, and Japan developed AI Scientist in an effort to automate parts of the scientific research process. Based on a large language model, AI Scientist can perform the complete research cycle, from reading existing literature and developing a hypothesis to testing solutions and writing a paper. It also can evaluate its own results, then build on those by restarting the research cycle.
[ » Read full article ]

Nature; Davide Castelvecchi (August 30, 2024)

 

EU, U.K., U.S. Sign International AI Treaty

The EU, U.S., and U.K. on Thursday signed an international AI treaty on Thursday, along with Andorra, Georgia, Iceland, Norway, Moldova, San Marino, and Israel. The treaty was opened for signature at a conference of Council of Europe justice ministers in the Lithuanian capital of Vilnius. The Council of Europe hailed the agreement as the "first international legally binding treaty" on the use of AI systems, noting it was an open treaty that could be signed by more countries.
[ » Read full article ]

Deutsche Welle (Germany) (September 5, 2024)

 

OpenAI, Anthropic Reach AI Safety, Research Agreement with NIST

The U.S. National Institute of Standards and Technology (NIST) announced agreements that will give NIST's U.S. AI Safety Institute access to new AI models from OpenAI and Anthropic before and after their public release. The agreements will help bolster research on AI's capabilities and risks and allow NIST to recommend safety improvements.
[ » Read full article ]

The Hill; Miranda Nazzaro (August 29, 2024)

 

Machine Learning Could Forecast Earthquakes Months Early

University of Alaska Fairbanks (UAF) researchers have developed a method of forecasting earthquakes accurately months before they occur using machine learning. The researchers developed an algorithm that searches seismic data for abnormal activity and makes informed predictions about impending earthquakes. By studying major earthquakes that occurred in Alaska in 2018 and California in 2019, the researchers found that major earthquakes are preceded by low-level tectonic unrest, which they attributed to a significant increase in pore fluid pressure within a fault.
[ » Read full article ]

Interesting Engineering; Prabhat Ranjan Mishra (August 30, 2024)

 

US Officials Push For Legislation To Rein In AI-Generated Disinformation

Digiday Share to FacebookShare to Twitter (9/2, Swant) reports that in an effort to curb the spread of AI-generated disinformation, particularly in the political realm, state and federal officials in the US are pushing for new legislation. The proposed California AI Transparency Act aims to boost transparency and accountability by providing access to detection tools and enforcing new disclosure requirements for AI-generated content. Additionally, various states such as New York, Florida, and Wisconsin mandate AI-created political advertisements to include disclosures, while a host of cybersecurity firms have begun rolling out tools to spot AI-generated content.

        Unauthorized AI Bill Heading To California Governor’s Desk For Approval. NPR Share to FacebookShare to Twitter (8/30, Barco) reported that a new bill to protect performers from “unauthorized AI is now headed to the California governor to consider signing into law.” The use of artificial intelligence to “create digital replicas is a major concern in the entertainment industry, and AI use was a point of contention during last year’s Hollywood strike.” California Assembly Bill 2602 would “regulate the use of generative AI for performers – not only those on-screen in films and TV/streaming series but also those who use their voices and body movements in other media, such as audiobooks and video games.” According to the bill, the measure would “require informed consent and union or legal representation ‘where performers are asked to give up the right to their digital self.’”

        Google To Intensify Restrictions On AI-Generated Election Content. MediaPost Share to FacebookShare to Twitter (9/2, Kirkland) reports that Google plans to further limit AI-generated election inquiries across its platforms, including the Gemini chatbot and Search AI Overviews, ahead of the 2024 US Presidential election. This initiative extends to topics like candidates, voting procedures, and election results. Google will also mandate advertisers to reveal when their content includes synthetic or digitally altered elements. Moreover, Search and YouTube platforms will guide users towards credible election-related information and voting registration details.

Florida State University Professor Develops AI Cheating Detection For Multiple-Choice Exams

Inside Higher Ed Share to FacebookShare to Twitter (8/30, Coffey) reported that Kenneth Hanson, a Florida State University professor, “has found a way to detect whether generative artificial intelligence was used to cheat on multiple-choice exams.” Hanson collaborated with a machine-learning engineer to gather data in fall 2022 and published their findings this summer. By analyzing responses from five semesters’ worth of exams, Hanson and a team of researchers “found patterns specific to ChatGPT, which answered nearly every ‘difficult’ test question correctly and nearly every ‘easy’ test question incorrectly.” Despite the method’s precision, Hanson doubts its practicality for individual professors due to its complexity. He said “his method of running multiple-choice exams through his ChatGPT-finding model could be used at a larger scale, namely by proctoring companies like Data Recognition Corporation and ACT.”

OpenAI Makes Deals With Publishers

The Verge Share to FacebookShare to Twitter (8/30) reported that OpenAI has made deals with major publishers like Axel Springer and Condé Nast, despite initially scraping their content without permission. These deals provide OpenAI with access to recent and authoritative content, potentially avoiding lawsuits. The New York Times has filed a lawsuit against OpenAI for copyright infringement. OpenAI’s agreements can be seen as settlements to prevent further legal actions. The deals also give OpenAI up-to-date information, enhancing its SearchGPT product. The legal outcome of these cases could significantly impact the AI and publishing industries.

Tech Giants Use Creative Tactics To Poach AI Talent

CNBC Share to FacebookShare to Twitter (8/30, Bosa, Wu) reported Microsoft, Google, and Amazon are using creative methods to poach talent from top AI startups. Google recently signed a unique deal with Character.ai, hiring its founder and over 20% of its workforce while licensing its technology. Microsoft and Amazon have employed similar strategies with their deals involving Inflection and Adept, respectively. These tactics aim to circumvent regulatory scrutiny while acquiring valuable AI talent. However, these maneuvers might attract antitrust enforcement attention.

OpenAI Seeks Changes To Management, Organization

The New York Times Share to FacebookShare to Twitter (9/3, Metz, Isaac) reports OpenAI “is making substantial changes to its management team, and even how it is organized, as it courts investments from some of the wealthiest companies in the world.” The company “is trying to look more like a no-nonsense company ready to lead the tech industry’s march into artificial intelligence.” However, “interviews with more than 20 current and former OpenAI employees and board members show that the transition has been difficult.”

        The New York Times Share to FacebookShare to Twitter (8/30, Metz) reports OpenAI has appointed Chris Lehane as its vice president of global policy. Lehane, who previously held a similar role at Airbnb and also served in the Clinton White House, is known for his expertise in opposition research. An OpenAI spokesperson said, “Just as the company is making changes in other areas of the business to scale the impact of various teams as we enter this next chapter, we recently made changes to our global affairs organization.”

        OpenAI Investment Discussions Occurring Amid Increased Competition. The Wall Street Journal Share to FacebookShare to Twitter (8/30, Subscription Publication) reports that Apple, NVIDIA, and Microsoft are in discussions to invest in OpenAI, the developer of ChatGPT, amid increasing competition in the AI market. Startups are emerging with cheaper and more specialized AI services. Meta’s CEO Mark Zuckerberg supports open-source AI, offering Meta’s Llama model for free to developers. OpenAI, which charges for its services, faces competition from these open-source models. Apple and NVIDIA are negotiating to join Microsoft’s investment, potentially valuing OpenAI at $100 billion. Open-source AI’s growing popularity is challenging established AI companies like OpenAI.

University Of Arizona Engineers Research AI For EV Battery Fire Safety

KOLD-TV Share to FacebookShare to Twitter Tucson, AZ (9/2, Romo) reported that the University of Arizona is focusing on electric vehicle safety through artificial intelligence research. Basab Ranjan Das Goswami, a PhD student in Aerospace and Mechanical Engineering, explained that AI could predict car battery fires by monitoring temperature, potentially saving lives and property. Captain Richard Fult, safety officer at Northwest Fire District, noted that fire departments nationwide struggle with containing EV fires, as extinguishing them remains challenging. Das Goswami mentioned that while their research currently centers on Tesla vehicles, they aim to expand to other electric vehicles.

Goldman Sachs Says AI Could Put Downward Pressure On Oil Price Over Next Decade

Reuters Share to FacebookShare to Twitter (9/3, Choubey, Patel, Anil) reports that artificial intelligence “could hurt oil prices over the next decade by boosting supply by potentially reducing costs via improved logistics and increasing the amount of profitably recoverable resources, Goldman Sachs said on Tuesday.” In a note, Goldman Sachs said, “AI could potentially reduce costs via improved logistics and resource allocation. ... resulting in a $5/bbl fall in the marginal incentive price, assuming a 25% productivity gain observed for early AI adopters.” Goldman “expects a modest potential AI boost to oil demand compared to demand impact to power and natural gas over the next 10 years.” Goldman added, “We believe that AI would likely be a modest net negative to oil prices in the medium-to-long term as the negative impact from the cost curve (c.-$5/bbl) – oil’s long-term anchor – would likely outweigh the demand boost (c.+$2/bbl).”

Column: AI Chatbots Hinder Student Learning

In her column for The Hechinger Report Share to FacebookShare to Twitter (9/2), Jill Barshay said researchers at the University of Pennsylvania found that “Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT.” While students using ChatGPT “solved 48 percent more of the practice problems correctly,” they did not build essential problem-solving skills. A revised AI tutor chatbot improved practice problem performance by 127 percent but did not enhance test scores. The researchers concluded that AI chatbots could “substantially inhibit learning,” as students often relied on them as a “crutch.”

AI Tool Aids Students In Crafting College Essays

Inside Higher Ed Share to FacebookShare to Twitter (9/4, Coffey) reports Esslo, an AI tool developed by Stanford students Hadassah Betapudi and Elijah Kim, provides feedback on college essays. The AI machine “provides feedback on college essays, based on those that have helped students gain admission to top-tier universities like Harvard and Stanford.” The tool offers suggestions on avoiding clichés, using imagery, and improving detail, voice, and character. It has both free and paid versions, with the latter offering unlimited line-by-line edits. Rick Clark, executive director of enrollment management at the Georgia Institute of Technology, views AI as the “equivalent of using an admissions consultant – except that it’s more affordable for those who cannot pay for the often-pricey consultants.”

X Corp. Agrees To EU Data Protection Demands

Bloomberg Share to FacebookShare to Twitter (9/4, Volpicelli, Subscription Publication) reports that Elon Musk’s X Corp. will stop processing European users’ personal data to train its AI chatbot Grok, complying with EU regulators. On Wednesday, Ireland’s Data Protection Commission announced X’s commitment to delete data collected from May 7 to Aug. 1, 2024. The DPC “said it was the first time a lead EU agency has taken such an action against an online platform.”

Amazon Hires Covariant Founders For AI Robotics

Wired Share to FacebookShare to Twitter (9/4) reports that Amazon has hired the founders of Covariant, a startup specializing in AI for automating object handling, and will license its models and data. This move, similar to Amazon’s 2012 acquisition of Kiva Systems, could revolutionize ecommerce operations. Covariant, founded in 2020 by UC Berkeley professor Pieter Abbeel and his students, has developed AI algorithms for robotic grasping. Amazon spokesperson Alexandra Miller confirmed Covariant’s technology will enhance Amazon’s robotic systems. This follows similar talent acquisitions by Amazon, Microsoft, and Google from other AI startups.

Generative AI Projects Face High Costs and Risks

TechRepublic Share to FacebookShare to Twitter (9/4, Jackson) reports that despite the potential of generative AI, many projects are being abandoned due to high costs and risks. A Gartner report indicates that 30% of generative AI projects will be discontinued after the proof-of-concept stage by 2025 as companies are “struggling to prove and realize value.” Rita Sallam, VP analyst of Gartner, said it is “important to acknowledge the challenges in estimating that value, as benefits are very company, use case, role and workforce specific. Often, the impact may not be immediately evident and may materialize over time. However, this delay doesn’t diminish the potential benefits.” A separate Deloitte survey of 2,770 companies found that 70% have moved only 30% or fewer of their GenAI experiments into production, citing lack of preparation and data issues. RAND research revealed that over 80% of AI projects fail, a rate double that of non-AI IT projects.

Study Suggests Generative AI For Academic Advising

Inside Higher Ed Share to FacebookShare to Twitter (9/5, Mowreader) reports a new study “from Tyton Partners suggests supporting academic advisers with generative AI to reduce the burden of heavy caseloads.” The annual study, Driving Toward a Degree, highlighted that this year, “adviser burnout and turnover gained prominence, with 37 percent of respondents ranking it as top issue, nine percentage points higher than the year prior.” The survey “is based on a survey of over 3,000 higher education stakeholders,” and it found that 95 percent of academic advisers focus on helping students select courses. However, among “front-line student support providers, only 25 percent of respondents used AI at least monthly, compared to 59 percent of students.” Tyton suggests enhancing data quality and increasing staff engagement with AI tools to build trust and effectiveness.

OpenAI Considers Higher-Priced Subscriptions

Reuters Share to FacebookShare to Twitter (9/5) reports that OpenAI executives are discussing higher-priced subscriptions for future large language models, including the reasoning-focused Strawberry and a new flagship LLM called Orion. Internal talks have considered prices up to $2,000 per month. OpenAI has not commented on the report. Currently, ChatGPT Plus costs $20 per month, while the free tier is used by hundreds of millions monthly. OpenAI’s Strawberry project aims to enhance AI models’ deep research capabilities through specialized post-training. This follows reports of potential investments from Apple and Nvidia, which could value OpenAI above $100 billion.

OpenAI Cofounder Launches Safe Superintelligence

Fast Company Share to FacebookShare to Twitter (9/5, Melendez) reports OpenAI cofounder Ilya Sutskever has launched a new AI startup called Safe Superintelligence, raising $1 billion from investors like Andreessen Horowitz and Sequoia Capital. The company aims to develop AI smarter than humans but safe for civilization. Safe Superintelligence has 10 employees and is vetting new hires for technical skills and good character. Sutskever left OpenAI in May after a conflict with CEO Sam Altman, who was briefly ousted. The startup’s website emphasizes advancing AI capabilities while ensuring safety.

YouTube Develops AI Detection Tools

TechCrunch Share to FacebookShare to Twitter (9/5, Perez) reports YouTube announced on Thursday new AI detection tools aimed at protecting creators from unauthorized use of their likenesses, including faces and voices, in videos. This initiative expands YouTube’s Content ID system to identify AI-generated content, such as synthetic singing. YouTube is also developing solutions to control how content is used for AI training, responding to creators’ complaints about companies using their material without consent. The company is working on compensating artists for AI-generated music, collaborating with Universal Music Group. Early next year, YouTube will pilot the expanded Content ID system to identify synthetic singing.

dtau...@gmail.com

unread,
Sep 15, 2024, 11:33:50 AM9/15/24
to ai-b...@googlegroups.com

Google AI Model Faces EU Scrutiny from Privacy Watchdog

EU regulators said Thursday they’re looking into Google’s Pathways Language Model 2 (PaLM2) over concerns about its compliance with the bloc’s data privacy rules. Ireland’s Data Protection Commission, which has oversight of Google in data privacy matters, said it has opened an inquiry to assess whether the AI model's data processing would likely result in a “high risk to the rights and freedoms of individuals” in the bloc.
[
» Read full article ]

Associated Press; Kelvin Chan (September 11, 2024)

 

U.S. Proposes Requiring Reporting for Advanced AI, Cloud Providers

The U.S. Department of Commerce's Bureau of Industry and Security has proposed mandatory reporting requirements for AI developers and cloud computing providers regarding the development of "frontier" AI models and computing clusters. The reporting would cover cybersecurity measures and outcomes from "red-teaming efforts," such as testing whether AI models can assist in cyberattacks or enable non-experts to develop chemical, biological, radiological, or nuclear weapons.
[ » Read full article ]

Reuters; David Shepardson (September 9, 2024)

 

Video Game Performers Reach Agreement on AI

The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) last week reached agreements with 80 video games over AI protections for video game performers. The individual video games entered into interim or tiered budget agreements with SAG-AFTRA and agreed to the union's AI provisions. The dispute centered on the ability of games to replicate the likenesses of voice actors and motion-capture artists using AI, without their consent or fair compensation.
[ » Read full article ]

Associated Press; Kaitlyn Huamani (September 5, 2024)

 

IT Unemployment Hits 6%

A Janco Associates analysis of U.S. Department of Labor data revealed the unemployment rate for IT workers climbed to 6% in August. Janco's Victor Janulaitis said the rate is the highest since the end of the dot-com bubble in the early 2000s and attributed the increase to "seismic changes" in the tech landscape brought on by AI. On the other hand, said Janulaitis, AI and cybersecurity roles are experiencing growth.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (September 7, 2024)

 

India Emerging as Key Player in Global AI Race

India is working to become a major global player in the AI space, with the government committing $1.25 billion to the IndiaAI mission to facilitate computing infrastructure, startup, and AI application development in the public sector. A number of Indian startups have begun developing their own large language models, and the government has procured 1,000 GPUs to provide computing capacity to AI developers.
[ » Read full article ]

Time; Astha Rajvanshi (September 5, 2024)

 

New Recruitment Challenge: Filtering AI-Crafted Résumés

Tech companies and recruiters attribute substantial interest in their job postings to the use of AI to customize and submit numerous résumés in rapid succession. To avoid hiring "fake candidates," recruiters are taking extra steps to verify applicants' identities and experience. Some firms record interviews and flag candidates for further vetting if they look away from the camera before answering a question, as they may be consulting ChatGPT for answers.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Katherine Bindley (September 4, 2024)

 

Regulators Try to Do the Math on AI Safety

AI safety legislation passed in California would, if signed into law by Gov. Gavin Newsom, regulate AI models trained on 10 to the 26th floating-point operations per second, the same threshold that requires reporting to the U.S. government under a 2023 executive order signed by President Joe Biden. The threshold is viewed by some lawmakers and AI safety advocates as a level of computing power at which AI systems could become dangerous, but critics call the measure arbitrary.
[ » Read full article ]

Associated Press; Matt O'Brien (September 4, 2024)

 

AI Anchors Protect Reporters amid Government Crackdown in Venezuela

In response to the Venezuelan government's crackdown on journalists and protesters, Colombian non-profit Connectas has created AI-generated news anchors to deliver news in Venezuela from independent media outlets while protecting reporters. The AI anchors are named "El Pana," Venezuelan slang for "friend," and "La Chama," meaning "The Girl." Connectas' Carlos Huertas said, "We decided to use artificial intelligence to be the 'face' of the information we're publishing because our colleagues who are still out doing their jobs are facing much more risk."
[ » Read full article ]

Reuters; Maria Paula Laguna; Kylie Madry (September 2, 2024)

 

Professor Speaks Out About Students’ Use Of ChatGPT For Introductory Assignment

Insider Share to FacebookShare to Twitter (9/8, Yip) reports professor Megan Fritts of the University of Arkansas at Little Rock revealed that “many of the students enrolled in her Ethics and Technology course decided to introduce themselves with ChatGPT.” Fritts took her concern “to X, formerly Twitter, in a tweet that has now garnered 3.5 million views.” She explained “that the assignment was not only to help students get acquainted with using the online Blackboard discussion board feature, but she was also ‘genuinely curious’ about the introductory question.” However, AI-generated responses “did not reflect what the students, as individuals, were expecting from the course but rather a regurgitated description of what a technology ethics class is, which clued Fritts in that they were generated by ChatGPT or a similar chatbot.” Fritts acknowledged “that educators have some obligation to teach students how to use AI in a productive and edifying way. However, she said that placing the burden of fixing the cheating trend on scholars teaching AI literacy to students is ‘naive to the point of unbelievability.’”

How AI Recruiters Impact College Admissions

The Chronicle of Higher Education Share to FacebookShare to Twitter (9/6, Carlson) reported Zack Perkins and CollegeVine, his technology company, recently “released a product-launch video that sought to show the promise of artificial intelligence in scaling up the work of admissions offices that give information to prospective students.” The AI bot named “Sarah” demonstrates its ability to engage prospective students by discussing their interests and guiding them to suitable academic programs. Institutions like Knox College have begun integrating customized AI recruiters, such as “KC,” to enhance recruitment efforts. In addition to “its AI recruiter, CollegeVine also has an AI-powered chatbot called Ivy that answers students’ general questions about what colleges they might consider applying to, what major they should choose, and what they might do with that major.” While AI promises to streamline routine tasks and free up staff for more meaningful interactions, concerns remain “about its ability to replace human intuition and personalized guidance.”

Opinion: Inclusive AI Could Bolster Special Education

In an opinion piece for TIME Share to FacebookShare to Twitter (9/6), Timothy Shriver, Ph.D., the chairman of the Special Olympics, said that the advent of artificial intelligence (AI) could significantly impact students with intellectual and developmental disabilities (IDD). A study by the Special Olympics Global Center for Inclusion in Education “found the majority of educators (64%) and parents (77%) of students with IDD view AI as a potentially powerful mechanism to promote more inclusive learning.” Despite this optimism, “the majority of teachers (78%) express concern that the use of AI in schools might lead to a decrease in human interaction in schools, with 65% also worried about AI use potentially reducing students’ ability to practice empathy.” Shriver emphasized the need for comprehensive teacher training on AI platforms and said “people with IDD must have a seat at the table when discussing the responsible use of AI in education.”

University Of Delaware Piloting AI Study Tools

Inside Higher Ed Share to FacebookShare to Twitter (9/9, Mowreader) reports that the University of Delaware (UD) has launched a pilot initiative “that will transform recorded lectures into study guides, flash cards and practice quizzes” using generative AI technology, starting this fall. The leader of Academic Technology Systems (ATS) at UD explained that the AI builds a knowledge graph from lecture transcripts, which faculty members then review for accuracy. The initiative, “developed in-house at the university, leads with ethical principles and prioritizes faculty content ownership to protect all participants, as well,” ensuring privacy through Amazon Web Services Bedrock encryption. The development team “includes two software engineers, some instructional designers, a user-interface developer and a Ph.D. student who used to work as a software developer.” Currently, the project is being piloted in two psychology courses.

Musk’s xAI Unveils Colossus Supercomputer

Insider Share to FacebookShare to Twitter (9/8, Lee, Tangalakis-Lippert) reports that Elon Musk’s AI company, xAI, has introduced a new supercomputer named Colossus, powered by 100,000 Nvidia H100 chips. This AI training system is significantly larger than Meta’s Llama 3, which uses 16,000 chips. However, LinkedIn cofounder Reid Hoffman and Modular AI CEO Chris Lattner suggest that Colossus merely allows xAI to catch up with leading AI companies like OpenAI and Anthropic. Musk aims to double Colossus’s capacity to 200,000 chips soon, but energy supply issues and environmental concerns have been raised.

        Musk Denies Tesla-xAI Revenue Sharing. TechCrunch Share to FacebookShare to Twitter (9/8, Ha) reports that Elon Musk has refuted claims from the Wall Street Journal Share to FacebookShare to Twitter (9/8, Subscription Publication) that Tesla has considered sharing revenue with his AI company, xAI. The proposed agreement would have involved using xAI’s models in Tesla’s Full Self-Driving software and other features. Musk stated on his social media platform X that Tesla does not need to license anything from xAI. He emphasized that xAI’s models are too large to run on Tesla’s vehicle inference computers. Tesla shareholders have sued Musk, alleging he diverted resources to xAI.

Meta Expands Llama AI Model Availability

TechCrunch Share to FacebookShare to Twitter (9/8, Wiggers)reports that Meta has broadened the availability of its generative AI model, Llama, through partnerships with AWS, Google Cloud, and Microsoft Azure. The Llama models, including Llama 8B, 70B, and 405B, range from compact versions for general applications to large-scale models requiring data center hardware. Meta has also introduced tools like Llama Guard and Prompt Guard for content moderation and security. Concerns remain about potential copyright issues and the reliability of AI-generated code.

Report: AI Adoption In Academic Libraries Accelerates

Inside Higher Ed Share to FacebookShare to Twitter (9/10, Coffey) reports, “According to a report released Monday by the data company Clarivate, 7 percent of academic libraries are currently implementing AI tools, while nearly half expect to implement them over the next year.” The report, released Monday, is based on a survey conducted from April to June with around 1,500 respondents, including library deans and IT directors, primarily from the US. Approximately 80 percent of respondents “were from university libraries.” Key motivations for AI adoption include supporting student learning (52 percent), research excellence (47 percent), and making content more discoverable (45 percent). Challenges include a lack of AI expertise, with 32 percent of respondents noting no AI training at their universities. Respondents “said budget constraints were just as worrisome as a lack of AI expertise.”

Google Shuts Down Everyday Robots

Wired Share to FacebookShare to Twitter (9/10, Nast) reports that Alphabet’s innovation lab, Google X, faced challenges in integrating robotics and AI after acquiring nine robot companies in early 2016. Andy Rubin, who initially led the effort, left under mysterious circumstances, leading to confusion among employees. Astro Teller, head of Google X, aimed to tackle global issues with AI-powered robots. Despite significant progress, including the development of robots for tasks like tidying desks, Google shut down the Everyday Robots project in January 2023, citing cost concerns. The robots and a small team were transferred to Google DeepMind for further research. The closure raises questions about Silicon Valley’s commitment to long-term, high-cost projects essential for future AI and robotics integration.

OpenAI Plans to Release “Strawberry” AI Model

Reuters Share to FacebookShare to Twitter (9/10) reports that OpenAI plans to release “Strawberry,” a reasoning-focused AI model, as part of its ChatGPT service within the next two weeks. The Information, citing two testers, states that Strawberry can “think” before responding, unlike other conversational AIs. OpenAI, led by Sam Altman and backed by Microsoft, has over 1 million paying users for its business products. Strawberry will initially handle text only and is not yet multimodal. Microsoft and OpenAI did not immediately respond to Reuters’ requests for comment.

GAO: Agencies Have Met Management And Talent Requirements From Biden’s 2023 Executive Order On AI

Government Executive Share to FacebookShare to Twitter (9/10) reports the Government Accountability Office said in a review released on Monday that “federal agencies have fully met the Biden administration’s initial management and talent benchmarks for the broader adoption of artificial intelligence technologies across government.” The report “looked at agency compliance with 13 specific requirements from President Joe Biden’s October 2023 executive order on AI, which outlined governmentwide safeguards around use of the new technology.” All six agencies that were “tasked with implementing” the directives – the Executive Office of the President; Office of Management and Budget; Office of Personnel Management; Office of Science and Technology Policy; General Services Administration; and the U.S. Digital Service – “fully implemented” the 13 requirements that they were charged with.

States Develop AI Guidance For K-12 Education

Education Week Share to FacebookShare to Twitter (9/11, Klein) reports state education agencies are increasingly providing guidance on artificial intelligence (AI) in K-12 education, “according to an annual survey released Sept. 11 by the State Educational Technology Directors Association.” AI interest among educators “has continued to rise, according to this year’s survey results, with 90 percent of respondents reporting increased interest in AI guidance.” Currently, 59 percent of states “said their states had crafted guidance on the topic,” with 14 percent working on broader AI policy initiatives. States like Utah “have created positions in their education departments dedicated primarily to AI implementation in K-12,” while states such as Indiana and New Jersey have allocated funds for AI.

Generative AI Sparks Major Investment Boom In The US

The Wall Street Journal Share to FacebookShare to Twitter (9/11, Subscription Publication) reports generative AI has initiated a significant spending surge in the US, with venture-capital investments in AI startups reaching $64.1 billion this year. Companies like Microsoft and Google have expanded their data centers to support AI applications, with Microsoft doubling its data centers since early 2020. AI data centers require more power, leading to a nearly ninefold increase in energy orders since 2015. The Journal includes visualizations showing capital spending and number data centers for Amazon, Google, Meta, and Microsoft.

Elon Musk’s xAI Supercomputer Sparks Environmental Concerns In Memphis

NPR Share to FacebookShare to Twitter (9/11, Kerr) reports that Elon Musk’s new artificial intelligence company xAI has established a data center in South Memphis, aiming to build the “world’s largest supercomputer” named Colossus. The facility, which started operations over Labor Day weekend, will support xAI’s chatbot Grok and consume significant resources, including “a million gallons of water per day and 150 megawatts of electricity.” Local residents and environmental advocates express concerns over the project’s environmental impact, particularly in historically Black neighborhoods already suffering from poor air quality. xAI’s use of methane gas generators without proper permits has also raised alarms. Memphis Community Against Pollution President KeShaun Pearson criticizes xAI for not engaging with the community, stating, “We have been deemed by xAI not even valuable enough to have a conversation with.” While the local utility assures that the project will not strain resources, the lack of transparency and oversight remains a contentious issue.

Oregon Department Of Education Launches AI Career Guidance Tool For College Students

Government Technology Share to FacebookShare to Twitter (9/12) reports that the Oregon Department of Education (ODE) announced the release of “Sassy,” an AI-powered career exploration coach for students. Developed by the Journalistic Learning Initiative (JLI) in partnership with ODE and the Southern Oregon Education Service District, Sassy assists middle and high school students with career brainstorming, resume writing, and interview preparation. The tool, named after the mythic Sasquatch, provides guidance by using prompts to search the state’s career resource hub. According to JLI, Sassy ensures students receive updated and locally relevant advice.

Chinese Firms’ AI Models Compared

CNBC Share to FacebookShare to Twitter (9/12, Kharpal) reports that Chinese tech giants, including Baidu, Alibaba, Tencent, Huawei, and ByteDance, have developed their own generative AI models to compete with U.S. counterparts. Baidu’s Ernie Bot, with 300 million users, rivals ChatGPT. Alibaba’s Tongyi Qianwen models are open-sourced and deployed by over 90,000 enterprises. Tencent’s Hunyuan supports industries like gaming and e-commerce. Huawei’s Pangu models are industry-specific, predicting typhoon trajectories in seconds. ByteDance’s Doubao model, launched this year, offers capabilities at a lower cost. These developments reflect China’s ambition to lead in AI technology.

OpenAI Unveils o1 Model Capable Of “Reasoning” In Math, Science

The New York Times Share to FacebookShare to Twitter reports that OpenAI introduced a new version of ChatGPT on Thursday, aiming to improve its performance in math, coding, and science tasks. Powered by OpenAI o1 technology, the chatbot now “reasons” through problems, as stated by OpenAI’s chief scientist Jakub Pachocki. In a demonstration, the updated chatbot successfully solved an acrostic, answered a Ph.D.-level chemistry question, and diagnosed an illness.

        Bloomberg Share to FacebookShare to Twitter (9/12, Subscription Publication) reports that the o1 model “is designed to spend more time computing the answer before responding to user queries, the company said in a blog post Thursday. With the model, OpenAI’s tools should be able to solve multi-step problems, including complicated math and coding questions.” TechCrunch Share to FacebookShare to Twitter (9/12, Wiggers) says that o1 “can effectively fact-check itself by spending more time considering all parts of a command or question.”

NVIDIA CEO Discusses AI Chip Supply Pressures

Fortune Share to FacebookShare to Twitter (9/12, Hetzner) reports that NVIDIA CEO Jensen Huang spoke on Wednesday about the intense pressure he faces to increase the supply of AI training microchips. Speaking at a Goldman Sachs tech conference, Huang highlighted the “emotional” impact these supplies have on customers’ competitiveness and revenues. NVIDIA, controlling 90% of the market, struggles to meet demand from major clients like Microsoft, Google, and Amazon. Huang anticipates easing supply constraints, expecting improved availability in coming quarters. NVIDIA’s Q2 earnings and future chip production, including the upcoming Blackwell series, remain closely watched by investors and customers.

Nvidia, OpenAI, Anthropic And Google Execs Meet With White House To Talk AI Energy And Data Centers

CNBC Share to FacebookShare to Twitter (9/12, Field) reports that leaders from OpenAI, Anthropic, Microsoft, Google, and several American power and utility companies met Thursday morning at the White House to discuss AI energy infrastructure in the US, sources told CNBC. Key attendees included OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and Google President Ruth Porat. The meeting addressed AI’s energy usage, data center capacity, semiconductor manufacturing, and grid capacity. An OpenAI spokesperson emphasized the importance of US infrastructure for economic growth. Commerce Secretary Gina Raimondo and Energy Secretary Jennifer Granholm were also present. The meeting follows an August announcement that OpenAI and Anthropic will allow the US AI Safety Institute to test their models before public release.

dtau...@gmail.com

unread,
Sep 21, 2024, 7:01:38 PM9/21/24
to ai-b...@googlegroups.com

California Governor Signs Laws to Crack Down on Election Deepfakes

On Sept. 17, California Gov. Gavin Newsom signed into law legislation prohibiting the creation and publication of election-related deepfakes 120 days prior to and 60 days after Election Day, while permitting courts to stop their distribution and impose civil penalties. Other bills signed by Newsom will require large social media platforms to remove deepfakes, and mandate that political campaigns publicly disclose if they run ads with AI-altered materials.
[ » Read full article ]

Associated Press; Tran Nguyen (September 17, 2024)

 

Researchers Run Small AIs on Their Laptops

Researchers increasingly are able to run local AI systems on their laptops. This comes as tech firms and research institutes, including Google DeepMind, Meta, Microsoft, and the Allen Institute for Artificial Intelligence, make available small and open-weight versions of large language models that can be downloaded and run locally, as well as scaled-down versions that can be run on consumer hardware. Local AIs are less expensive, allow open models to be fine-tuned for focused applications, and preserve data privacy.
[ » Read full article ]

Nature; Matthew Hutson (September 16, 2024)

 

AI Pioneers Call for Protections Against 'Catastrophic Risks'

A group of AI pioneers including Turing Award recipients Yoshua Bengio, Andrew Yao, and Geoffrey Hinton released a statement on Sept. 16 expressing their concerns that the capabilities of the technology could exceed that of its creators in a matter of years, leading "to catastrophic outcomes for all of humanity." They also proposed that countries establish AI safety authorities to register AI systems within their borders and collaborate to identify red lines and warning signs for the technology.


[
» Read full article *May Require Paid Registration ]

The New York Times; Meaghan Tobin (September 16, 2024)

 

Chatbot Pulls People Away from Conspiracy Theories

An AI chatbot developed by Cornell University researchers aims to persuade users to stop believing conspiracy theories. In their study, more than 2,000 U.S. adults were asked to describe a conspiracy they believed; some then engaged in discussions with DebunkBot in which they presented evidence supporting their position and DebunkBot provided information to combat their misinformation. Participants' belief ratings fell around 20% after three exchanges with DebunkBot, and around 25% of participants no longer believed the conspiracy theory.

[ » Read full article *May Require Paid Registration ]

The New York Times; Teddy Rosenbluth (September 13, 2024)

 

Survey: Most Americans Don't Trust AI-Powered Election Information

A survey by The Associated Press-NORC Center for Public Affairs Research and USAFacts found that two-thirds (67%) of U.S. adults lack confidence that AI-powered chatbots or search engines provide factual, reliable information. Of the surveys 1,019 respondents, 25% believe the use of AI will make it "much" or "somewhat" more difficult to locate factual information about the 2024 election. Only 16% of those polled think AI will make finding accurate election information easier.
[
» Read full article ]

Associated Press; Ali Swenson; Linley Sanders (September 12, 2024)

 

Brain-Like Device Hits Massive 4.1 Tera-Operations Per Second/Watt

A neuromorphic device developed by an international research team is comprised of molecules that can alter their electrical properties when a charge is applied to them, allowing for the manipulation of materials for integration in electrical systems. The researchers integrated the 14-bit neuromorphic accelerator into a circuit board and achieved energy efficiency of 4.1 tera-operations per second per watt, making it suitable for neural network training, natural language processing, and signal processing.
[
» Read full article ]

Interesting Engineering; Rupendra Brahambhatt (September 13, 2024)

 

Colleges Grapple With AI Use In Education

Inside Higher Ed Share to FacebookShare to Twitter (9/16, Mowreader) reports colleges and universities are addressing the integration of generative artificial intelligence tools in education while preventing misuse. A May 2024 Student Voice survey “from Inside Higher Ed and Generation Lab found that, when asked if they know when or how to use generative AI to help with coursework, a large number of undergraduates don’t know or are unsure (31 percent).” The survey included more than 3,500 four-year and 1,400 two-year students. Only 16 percent of respondents “said they knew when to use AI because their college or university had published a policy on appropriate use cases for generative AI for coursework.” Experts recommend campus leaders offer “professional development and education,” provide sample language, and communicate regularly with students.

        Column: AI Tutor Boosts Learning In Less Time For Harvard Students. In her column for The Hechinger Report Share to FacebookShare to Twitter (9/16), Jill Barshay says that an AI tutor, PS2 Pal, significantly improved student learning in a “small experiment, involving fewer than 200 undergraduates.” Conducted in fall 2023, the study found that “students learned more than twice as much in less time when they used an AI tutor in their dorm compared with attending their usual physics class in person.” The AI tutor was designed to avoid cognitive overload and encourage critical thinking. Gregory Kestin, a physics lecturer at Harvard and developer of the AI tutor used in this study, argues that AI should not replace human interaction but can enhance it by introducing new topics before class. He plans to “test the tutor bot for an entire semester” and explore its use as a study assistant.

Intel, AWS Collaborating To Design Custom AI Chips

Bloomberg Share to FacebookShare to Twitter (9/16, King, Subscription Publication) reports Intel CEO Pat Gelsinger has acquired Amazon’s AWS as a “customer for the company’s manufacturing business, potentially bringing work to new plants under construction in the US and boosting his efforts to turn around the embattled chipmaker.” Intel and AWS “will coinvest in a custom semiconductor for artificial intelligence computing – what’s known as a fabric chip – in a ‘multiyear, multibillion-dollar framework,’ according to a statement Monday.” Bloomberg adds that while Intel is postponing new factories in Germany and Poland, it “remains committed to its US expansion in Arizona, New Mexico, Oregon and Ohio.”

Lawmakers Call For Administration To Implement Stronger Algorithm And AI Bias Protections

Modern Healthcare Share to FacebookShare to Twitter (9/16, McAuliff, Subscription Publication) reports that in a letter to the Office of Management and Budget on Monday, Senate Majority Leader Schumer (D-NY) and Sen. Ed Markey (D-MA) urged the Biden Administration to require federal agencies and contractors receiving federal funds to do more to protect against abuses related to algorithms and AI. The lawmakers specifically “want the government to focus intently on ‘consequential decisions,’ such as those that determine types of health care people can obtain, to ensure bias is not creeping in and creating or exacerbating inequities.”

Elon Musk’s AI Data Center In Memphis Sparks Pollution Concerns

TIME Share to FacebookShare to Twitter (9/17, Chow) reports that Elon Musk’s AI startup xAI has been training its new model, Grok 3, at a new Memphis data center. The center, built in 19 days, has caused an outcry among Memphis residents and environmental groups over potential negative impacts on air quality, water access, and grid stability. Local leaders and utility companies argue the project will benefit infrastructure and employment. However, xAI’s demand for 150 megawatts of power has raised concerns about Memphis’s ability to handle such a large energy consumer. Reports indicate xAI has installed gas turbines without permits, drawing further criticism. “They treat southwest Memphis as just a corporate watering hole,” said KeShaun Pearson, executive director of Memphis Community Against Pollution.

OpenAI Sustainability In Question Despite Valuation

Fast Company Share to FacebookShare to Twitter (9/17) reports that OpenAI is pursuing $6.5 billion in venture capital and $5 billion in debt financing, aiming for a $150 billion valuation. Despite significant revenue growth, OpenAI remains unprofitable due to high operational costs. The company’s new GPT-o1 models target complex tasks, potentially expanding its market. However, concerns persist about the sustainability of its business model, especially with expensive, large-scale AI models. OpenAI’s corporate structure might shift to a for-profit benefit corporation, and it faces potential regulatory challenges in California.

 

Google Plans To Identify Real Vs. AI-Generated Images. The Verge Share to FacebookShare to Twitter (9/17, Warren) reports that Google will soon introduce technology to distinguish between real, edited, and AI-generated photos. This update will be integrated into Google’s search results with the “about this image” feature. The “system Google is using is part of the Coalition for Content Provenance and Authenticity (C2PA).” While Google is among multiple companies that have backed C2PA authentication, “adoption has been slow,” so “Google’s integration into search results will be a first big test for the initiative.”

Survey: Most Teens Have Discussed How To Use AI More Responsibly In School

Education Week Share to FacebookShare to Twitter (9/18, Klein) reports, “Teens who have talked about artificial intelligence in school are more likely to use it responsibly, concludes a report released Sept. 18 by Common Sense Media, a nonprofit that examines the impact of technology on young people.” The nonprofit found that about “70 percent of teens have used at least one kind of AI tool,” with 51 percent using chatbots like ChatGPT, Microsoft Co-pilot, or Google’s Gemini. Approximately 53 percent of students “say they use AI for homework help,” and 2 in 5 for entertainment or translation. The report highlights that 55 percent of teens “who reported using AI tools, and had talked about AI’s benefits and pitfalls in school fact-checked the information they received from AI tools,” compared to 43 percent who did not. Additionally, 87 percent of students “who had class discussions about AI are also more likely to agree that AI tools might be used to cheat,” versus 73 percent without such discussions.

        K-12 Dive Share to FacebookShare to Twitter (9/18, Merod) reports 41 percent of teens also use AI for language translation. Among those using AI for schoolwork, 46 percent did so “without their teacher’s permission,” 41 percent with permission, and 12 percent were unsure. The survey indicated that 37 percent of teens “said they were unsure if their schools have established rules on AI,” while 35 percent said their school has guidelines, and 27 percent reported no rules. Conducted with Ipsos Public Affairs, the survey “included 1,045 paired responses from parents and their teens.”

        Report: Black Students More Likely To Face AI Cheating Accusations. Education Week Share to FacebookShare to Twitter (9/18, Klein) reports, “Black students are more than twice as likely as their white or Hispanic peers to have their writing incorrectly flagged as the work of artificial intelligence tools, concludes a report released Sept. 18 by Common Sense Media.” The report states that “20 percent of Black teens were falsely accused of using AI to complete an assignment, compared with 7 percent of white and 10 percent of Latino teens.” This discrepancy may stem from flaws in AI detection software. Survey data from the Center for Democracy & Technology shows 68 percent of secondary school teachers “report using an AI detection tool regularly.” The report is “based on a nationally representative survey conducted from March to May of 1,045 adults in the United States.”

        Google Invests $25 Million In AI Training For Students, Teachers. Education Week Share to FacebookShare to Twitter (9/18, Klein) reports that Google.org, “the tech company’s philanthropy arm, plans to invest over $25 million to support five education nonprofits in helping educators and students learn more about how to use artificial intelligence.” According to a Common Sense Media survey, responsible AI usage by teens increases when teachers discuss its benefits and pitfalls, yet more than “7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey.” Google.org’s initiative, emphasizing culturally relevant AI curriculum, aims to address this gap. ISTE+ASCD “will receive $10 million of the $25 million over three years to reach about 200,000 educators.”

Tech Workers Struggle Amid Industry Shift

The Wall Street Journal Share to FacebookShare to Twitter (9/18, Bindley, Pisani, Subscription Publication) reports that despite efforts, tech job postings have dropped more than 30 percent since February 2020, with 137,000 layoffs this year. Companies are now focusing on revenue-generating products and artificial intelligence, reducing entry-level hires. AI expertise remains highly sought after, with AI engineers earning significantly more.

Chief Technologist At Amazon Robotics Discusses Robotics, AI

Forbes Share to FacebookShare to Twitter (9/20) contributor Bernard Marr spoke to Amazon Robotics Chief Technologist Tye Brady about Amazon’s advancements in robotics and AI. Brady highlighted that Amazon operates “the world’s largest fleet of industrial mobile robots,” with over 750,000 drive units alone. He introduced the Hercules drive unit, which improves warehouse efficiency by bringing shelves directly to workers, resulting in a 40% increase in storage density. Brady also discussed the autonomous robot Proteus, which features human-like indicators for safe navigation around people. He emphasized that Amazon aims to enhance human capabilities through robotics, stating, “We use robotics and automation, particularly fueled by AI, to extend human capability.” Brady envisions a future where cloud-connected robots collaborate with humans, transforming the supply chain and creating new job types.

Johnson Warns Against “Overregulation” In Interview On AI

In an interview with The Hill Share to FacebookShare to Twitter (9/19, Nazzaro), House Speaker Johnson “offered his thoughts on artificial intelligence...and foreign election interference – two hot-button issues that have become increasingly prevalent in the political landscape ahead of November. Describing himself as a ‘limited government conservative,’ the Speaker acknowledged the concerns surrounding the quickly emerging technology while also warning against overregulation of the tech sphere.” He argued that Congress “needs to take the threat of ‘deepfakes’ seriously, stating the abuses of the technology have been ‘repulsive,’ but also urged for caution.”

dtau...@gmail.com

unread,
Sep 29, 2024, 1:31:21 PM9/29/24
to ai-b...@googlegroups.com

Google Paid $2.7 Billion to Bring Back an AI Genius

Google reportedly has paid around $2.7 billion to license technology from Character.AI, a startup founded by former Google employee Noam Shazeer (pictured, left), who agreed to return to the tech giant as a vice president as part of the deal. Shazeer's return to Google is said to be the primary reason for the deal, fueling a debate about whether big tech companies are spending too much money as they rush to develop cutting-edge AI.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Miles Kruppa; Lauren Thomas; Tom Dotan (September 25, 2024); et al.

 

HP Spots Malware Attack Likely Built with Generative AI

HP security researchers identified malware likely created using generative AI. The firm's Sure Click anti-phishing system flagged a suspicious email attachment for French language users that contained an HTML file requiring a password to open it. After the researchers determined the correct password, the HTML generated a ZIP file containing the AsyncRAT malware. The researchers found the malicious code’s “structure, consistent comments for each function, and the choice of function names and variables" suggested the use of GenAI.
[
» Read full article ]

PC Magazine; Michael Kan (September 24, 2024)

 

List of Early Signups to EU’s AI Pact Missing Apple, Meta

The European Commission released a list of the first 100-plus signatories to its AI Pact, intended to get companies to voluntarily comply with the AI Act before the deadlines set forth in the law. Companies that joined the AI Pact include Amazon, Microsoft, OpenAI, Palantir, Samsung, SAP, Salesforce, Snap, Airbus, Porsche, Lenovo, Qualcomm, and Aleph Alpha; companies missing from the list include Apple, Meta, Mistral, Anthropic, Nvidia, and Spotify.
[
» Read full article ]

TechCrunch; Natasha Lomas (September 25, 2024)

 

Is Math the Path to Chatbots That Donʼt Make Stuff Up?

Silicon Valley startup Harmonic is focusing on mathematics as it works to develop an AI chatbot that never hallucinates. Harmonic's Aristotle not only produces correct answers, but also detailed computer programs proving its answers are right, which then can be used to improve its results. Some researchers believe the same techniques can be used to develop AI systems that can verify physical truths as well.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (September 23, 2024)

 

AI 'Godfather' Says OpenAI's New Model May Be Able to Deceive, Needs 'Much Stronger Safety Tests'

ACM A.M. Turing Award recipient Yoshua Bengio is concerned about the ability of OpenAI's new o1 model to deceive, noting it has a "far superior ability to reason than its predecessors." Said Bengio, "In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1's case."

[ » Read full article *May Require Paid Registration ]

Business Insider; Kenneth Niemeyer (September 21, 2024)

 

Microsoft AI Needs So Much Power It's Tapping Site of U.S. Nuclear Meltdown

Constellation Energy Corp. will spend $1.6 billion to revive the Three Mile Island nuclear plant in Pennsylvania, with Microsoft agreeing to purchase all the output energy for 20 years as it looks to access carbon-free electricity for its AI datacenters. Constellation said a reactor that was closed in 2019 will be placed back into service in 2028. The deal is part of a Microsoft initiative to run all of its datacenters on clean energy by 2025.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Will Wade; Dina Bass (September 20, 2024)

 

A Bottle of Water Per Email: The Hidden Environmental Costs of Using AI Chatbots

The Washington Post worked with researchers at the University of California, Riverside to determine how much water and electricity are used to write the average 100-word email using ChatGPT. They determined such an email requires little more than a single bottle of water but sending it once weekly for a year would require an amount of water equivalent to the consumption of every household in Rhode Island for 1.5 days.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Pranshu Verma; Shelly Tan (September 18, 2024)

 

Meta to EU: Your Tech Rules Threaten to Squelch the AI Boom

In an open letter coordinated by Facebook parent firm Meta Platforms, executives warned the European Union risks missing out on the full benefits of artificial intelligence because of its tech regulations. More than two dozen companies signed the letter, which said AI can boost productivity and expand the economy. The letter called on the EU to harmonize its rules and provide what the signatories refer to as a modern interpretation of the blocʼs data-protection law.

[ » Read full article *May Require Paid Registration ]

Wall Street Journal; Kim Mackrael (September 19, 2024)

 

U.N. Experts Urge United Nations to Lay Foundations for Global Governance of AI

A United Nations advisory body comprised of 39 AI leaders from 33 countries is calling on the U.N. to lay the foundation for global regulation of AI and set forth principles, including both international and human rights law, to guide the establishment of new AI governance institutions. Among other things, the advisory group recommends the creation of an international scientific panel on AI to ensure global understanding of the technology's capabilities and risks.
[ » Read full article ]

Associated Press; Edith M. Lederer (September 19, 2024)

 

Ban Warnings Fly as Users Probe the 'Thoughts' of OpenAI's Latest Model

OpenAI reportedly has sent warning emails threatening to ban users who attempt to determine how its newest "Strawberry" AI model works. With the o1 model, users can see a filtered interpretation of its chain-of-thought-process in the ChatGPT interface, but its raw chain of thought is hidden from users. Marco Figueroa, manager of Mozilla's GenAI bug bounty programs, said the move prevents positive red-teaming safety research from being performed on the model.
[ » Read full article ]

Ars Technica; Benj Edwards (September 16, 2024)

 

U.S. to Convene Global AI Safety Summit in November

The International Network of AI Safety Institutes will hold its first meeting Nov. 20-21 in San Francisco to discuss priority work areas and "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The meeting will involve technical experts from the AI Safety Institutes, or equivalent government-backed safety office, of member nations, which include Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the U.K., and the U.S.
[ » Read full article ]

Reuters; David Shepardson (September 18, 2024)

 

'There's a War for the Top 1%': Inside French Tech's Fierce Battle for the Best AI Talent

AI startups in Paris are courting top engineers from big tech firms. Said Mathias Frachon of tech recruitment firm The Product Crew, "There's a war only for the top 1%, but they are superstars and everyone is fighting over them." Paris is the focus of the latest AI talent war, since France is home to several prestigious universities known for producing top AI talent, prompting big tech firms like Facebook and Google to open research labs in the city.
[ » Read full article ]

Sifted; Daphné Leprince-Ringuet (September 13, 2024)

 

AI Models Improve Robot Functionality

MIT Technology Review Share to FacebookShare to Twitter (9/20, Williams) reported that researchers from New York University, Meta, and Hello Robot have developed AI models called robot utility models to help robots perform tasks in new environments without additional training. The models enable robots to open doors and drawers, and pick up tissues, bags, and cylindrical objects with a 90% success rate. The team used an iPhone and a reacher-grabber stick to record demonstrations in various environments, creating data sets for training. This approach aims to simplify and reduce the cost of deploying robots in homes.

Judge Criticizes Plaintiffs’ Attorneys In Case About Meta’s AI Technology

Politico Share to FacebookShare to Twitter (9/21, Gerstein) reports US District Judge Vincent Chhabria on Friday “brutally dressed down the lawyers for a group of high-profile authors who are suing Meta over the use of their work to train the company’s AI technology.” Chhabria “accused the plaintiffs’ attorneys of dragging out litigation that may help set important guardrails for the emerging technology.” He said to the attorneys, “You are not doing your job. This is an important case. ... You and your team have taken on a case that you are either unwilling or unable to litigate properly.” Politico points out that the lawsuit “is one of a flurry of cases publishing companies, artists and authors filed last year against big tech companies, accusing them of importing copyrighted material into AI training models without permission.”

Teachers Address AI’s Struggles With Math Education

Education Week Share to FacebookShare to Twitter (9/20, Schwartz) reported that artificial intelligence (AI) tools like ChatGPT “regularly answer math questions incorrectly,” posing challenges for teachers and students. Unlike calculators, AI chatbots use text prediction, leading to inconsistent and incorrect answers. Khanmigo, “an AI tutor created by the online education nonprofit Khan Academy, regularly struggled with basic computation,” prompting updates to direct numerical problems to a calculator. OpenAI, “the organization that created ChatGPT, [also] announced a new version of the technology designed to better reason through complex math tasks.” One eighth-grade teacher in Alabama uses AI for lesson brainstorming but encourages students to critically evaluate AI-generated answers. Surveys have shown “that teachers are hesitant about bringing AI into the classroom, in part due to concerns about chatbots presenting them or their students incorrect information.”

LinkedIn, Meta, X Use User Data For AI Training

Fortune Share to FacebookShare to Twitter (9/23, Brice) reports that LinkedIn, Meta, and X are using user data to train their AI models. LinkedIn began using user posts without notification, while Meta has used Facebook and Instagram data since 2007. X uses public posts for its AI chatbot Grok. Opting out involves navigating complex settings on these platforms. According to the article, “TikTok, whose data policies are under scrutiny amid a possible U.S. ban, hasn’t clearly stated whether it harvests user data for any generative AI tools.”

Princeton Researchers Critique AI Hype

Wired Share to FacebookShare to Twitter (9/24, Rogers) reports that Princeton University professor Arvind Narayanan and PhD candidate Sayash Kapoor have released a book, “AI Snake Oil,” based on their Substack newsletter, critiquing the exaggerated claims surrounding artificial intelligence. Narayanan “makes clear, during a conversation with WIRED, that his rebuke is not aimed at the software per say, but rather the culprits who continue to spread misleading claims about artificial intelligence.” The book identifies three groups perpetuating AI hype: “the companies selling AI, researchers studying AI, and journalists covering AI.” Companies claiming “to predict the future using algorithms are positioned as potentially the most fraudulent,” often affecting minorities and impoverished individuals. The authors criticize companies “prioritizing long-term risk factors above the impact AI tools have on people right now.”

Billionaire Predicts AI Will Replace Most Jobs

Fortune Share to FacebookShare to Twitter (9/24, Royle) reports Silicon Valley billionaire Vinod Khosla predicts AI will handle 80% of work in 80% of jobs, including roles like doctors, salespeople, and engineers. He suggests universal basic income to prevent economic dystopia and foresees a potential three-day workweek if AI is used positively. Khosla’s views align with other tech leaders like Bill Gates and Elon Musk, who also anticipate reduced work hours due to AI advancements.

Meta Declines EU’s Voluntary AI Safety Pledge

Bloomberg Share to FacebookShare to Twitter (9/24, Volpicelli, Subscription Publication) reports that Meta Platforms Inc. is declining to join the European Union’s voluntary AI safety pledge, unlike Microsoft and Google’s Alphabet. The AI Pact, a precursor to the AI Act effective in 2027, seeks compliance with key AI Act principles. Meta’s open-source AI model, Llama, poses compliance challenges, according to the article. Meta’s spokesperson indicated potential future participation. The European Commission will reveal the full list of signatories on Wednesday.

OpenAI Pitches White House On Unprecedented Data Center Buildout

Bloomberg Share to FacebookShare to Twitter (9/24, Ghaffary, Subscription Publication) reports that OpenAI has proposed to the Biden administration the construction of massive data centers, each capable of using as much power as entire cities, to advance artificial intelligence (AI) development. Following a recent White House meeting attended by OpenAI CEO Sam Altman and other tech leaders, the company shared a document with officials highlighting the economic and national security benefits of building 5 gigawatt (GW) data centers across various US states. The proposal is based on an analysis conducted with external experts.

Seattle Hackathon Showcases AI-Human Collaboration

GeekWire Share to FacebookShare to Twitter (9/24) reports that a hackathon in Seattle, hosted by AI Tinkerers, showcased AI applications that combine human and machine capabilities. Held at the Foundations space in Capitol Hill, the event featured engineers from Microsoft, Amazon, and Google. The top prize went to MetabolixAI for its personalized nutrition insights and meal planning system. The runner-up was AI DevRel Project, enhancing developer relations, and the community pick was LeadScore, which uses AI to score inbound leads. The winning teams received nearly $15,000 in prize money. The event was supported by Anthropic and CopilotKit.

OpenAI CEO Emphasizes AI Infrastructure Investment

Insider Share to FacebookShare to Twitter (9/25, Tangalakis-Lippert) reports that OpenAI CEO Sam Altman emphasized the importance of massive investment in artificial intelligence (AI) infrastructure in a blog post on Monday. Altman argued that to make AI widely accessible, significant investments in computing power and energy are required to avoid AI becoming a limited resource that could lead to global conflicts. Last week, Microsoft and BlackRock launched a $30 billion fund to enhance AI competitiveness and energy infrastructure. Earlier this month, AI leaders, including Altman, met at a White House roundtable to discuss AI development’s alignment with national security and economic goals. However, experts expressed concerns about the economic and environmental costs of large-scale data centers.

        Altman Lobbies US Officials, Foreign Investors On Potential For Major Tech Infrastructure Projects. The New York Times Share to FacebookShare to Twitter (9/25, Metz, Mickle) examines “OpenAI’s blueprint for the world’s technology future,” with CEO Sam Altman calling for investors, chipmakers, and officials to “unite on a multitrillion-dollar effort to erect new computer chip factories and data centers across the globe, including in the Middle East.” Nine sources described a plan which “would create countless data centers providing a global reservoir of computing power dedicated to building the next generation of A.I.,” and “as far-fetched as it may have seemed...Altman’s campaign showed how in just a few years he has become one of the world’s most influential tech executives, able in a span of weeks to gain an audience with Middle Eastern money, Asian manufacturing giants and top U.S. regulators.”

        OpenAI CTO To Leave Company. CNBC Share to FacebookShare to Twitter (9/25, Field) reports OpenAI Chief Technology Officer Mira Murati “said Wednesday that she is leaving the company after six and a half years.” Murati marks “the latest high-level executive to depart the startup.” CNBC adds, “While OpenAI has been in hyper-growth mode since late 2022, when it launched ChatGPT, it has been simultaneously riddled with controversy and high-level employee departures, with some current and former employees concerned that the company is growing too quickly to operate safely.”

        OpenAI Agrees to Training Data Review. Advanced Television Share to FacebookShare to Twitter (9/25) reports that OpenAI will provide access to its training data to determine if copyrighted works were used. This follows a court filing where authors in a class action lawsuit agreed on protocols for inspecting the information. The agreement stems from lawsuits accusing OpenAI of using web content to produce copyright-infringing answers via ChatGPT. Although some claims were dismissed, direct copyright infringement claims remain. The inspection will occur at OpenAI’s San Francisco office under strict conditions, including non-disclosure agreements and secured computer access without internet.

College Students Use AI To Avoid Reading Assignments

Inside Higher Ed Share to FacebookShare to Twitter (9/25, Alonso) reports that many college students are increasingly using artificial intelligence (AI) tools like ChatGPT to avoid completing their reading assignments. One history major “reads about 250 pages per week but often uses artificial intelligence” to summarize his weekly reading due to time constraints from his job and extracurricular activities. Faculty members “frequently note how much less willing their Gen Z students are to read for class than earlier generations,” attributing this to shorter attention spans and the impact of the COVID-19 pandemic on learning. Some professors adapt by incorporating reading sessions in class or using guided readings.

Amazon Among Signatories Of EU’s AI Pact Initiative

TechCrunch Share to FacebookShare to Twitter (9/25, Lomas) reports the European Commission has announced over 100 signatories to the AI Pact, aimed at encouraging companies to publish voluntary pledges regarding their AI practices. The initiative follows the introduction of the AI Act, which will take years to fully implement. Signatories, including Amazon, Microsoft, and OpenAI, must commit to adopting an AI governance strategy, identifying high-risk AI systems, and promoting AI awareness among staff. The Pact allows companies to select from a long list of potential pledges, fostering competition in AI safety compliance. Notable absences from the signatory list include Apple and Meta, which have opted to focus on compliance with the AI Act directly. The EU outlines significant penalties for non-compliance with the AI Act, including up to 7% of global annual revenue for violating banned uses of AI, up to 3% for other non-compliance, and up to 1.5% for supplying incorrect information.

FTC Targets AI Companies Over “Deceptive” Practices

Reuters Share to FacebookShare to Twitter (9/25, Godoy) reports the FTC “announced actions against five companies on Wednesday that it said used artificial intelligence in deceptive and unfair ways,” including three which “purported to help consumers generate passive income by opening e-commerce storefronts.” The agency “also settled with a company called DoNotPay over its claim to provide automated legal services, and with Rytr, an AI writing tool that the agency said offered a feature that allows users to generate fake product reviews.” FTC Chair Lina Khan said, “Using AI tools to trick, mislead, or defraud people is illegal. The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.”

Schools Struggle To Combat AI-Enabled “Deepfakes”

Education Week Share to FacebookShare to Twitter (9/26) reports a new Center for Democracy & Technology study reveals schools are inadequately addressing AI-enabled sexual harassment. Deepfakes, digitally manipulated media, primarily involve students as both perpetrators and victims. The survey found 40% of students and 29% of teachers knew of deepfakes shared in the 2023-24 school year. However, only 19% of students reported their schools explained what deepfakes are to students. Moreover, 60% of teachers and 67% of parents said schools lacked policies for addressing such incidents. Kristin Woelfel, a policy counsel at the Center, attributed the increased risk due to widespread access to AI tools. She said, “There’s really no limit as to who could be impacted by this.” National Student Council president Anjali Verma described the victim experience as “scary” and “traumatic.” Woelfel emphasized the need for preventive education and victim support. The survey included 1,316 high school students, 1,006 middle and high school teachers, and 1,028 parents.

OpenAI To Remove Nonprofit Board Control Over Main Business; Altman To Gain Equity

Bloomberg Share to FacebookShare to Twitter (9/25, Metz, Subscription Publication) reports that OpenAI plans to restructure, removing its nonprofit board’s control over its main business. The nonprofit arm will retain a minority stake in the for-profit company, and CEO Sam Altman will gain equity. The reorganized for-profit entity is potentially worth $150 billion. OpenAI did not respond to requests for comment.

        TechCrunch Share to FacebookShare to Twitter (9/25, Wiggers) reports, citing Reuters Share to FacebookShare to Twitter (9/26, Cai), that OpenAI intends to become a for-profit benefit corporation “similar to rivals such as Anthropic and Elon Musk’s xAI.” TechCrunch adds that the restructuring’s intent is to attract outside investors who have objected to OpenAI’s current cap on returns. Nonetheless, the move is seen as likely to prompt concerns over the restructured entity’s accountability in its pursuit of superintelligent AI.

        The Telegraph (UK) Share to FacebookShare to Twitter (9/26) reports that Elon Musk, “who quit OpenAI in 2018 amid a row with executives including Mr Altman, wrote on X: ‘You can’t just convert a non-profit into a for-profit. That is illegal.’ He added: ‘Sam Altman is Little Finger,’ a reference to the Machiavellian character in the TV series Game of Thrones.”

        Also reporting is Insider Share to FacebookShare to Twitter (9/25, Varanasi).

Federal Reserve Governor: AI Could Be Inflationary In Short-Term

Reuters Share to FacebookShare to Twitter (9/26) reports Federal Reserve Governor Lisa Cook on Thursday “said that while she expects artificial intelligence over the longer-run to boost productivity and therefore allow higher employment without correspondingly higher inflation, AI may add to inflationary pressures in the short-term.” She told an event at The Ohio State University, “There’s a lot of demand being created, and then you have consumption that is augmented,” adding that “the effects of AI on inflation are uncertain, as they are on the labor market as well.”

Debate Over AI’s Role In Education Intensifies In New School Year

The Seventy Four Share to FacebookShare to Twitter (9/26, Montalvo) reports that “the debate over AI’s role in education is intensifying” as the new school year begins. The Education Department’s Office of Educational Technology released guidelines for EdTech companies titled “Designing for Education with Artificial Intelligence,” emphasizing “responsible innovation” and incorporating feedback from educators and students. The XQ Institute advocates for AI’s ethical, transparent, and equitable use, partnering with educators and developers to tailor AI tools to student needs. A collaboration between Crosstown High, an XQ school in Memphis, Tennessee, and EdTech company Inkwire exemplifies effective partnerships, ensuring AI tools are culturally responsive and pedagogically sound.

dtau...@gmail.com

unread,
Oct 5, 2024, 8:31:39 AM10/5/24
to ai-b...@googlegroups.com

Academics to Chair Drafting the Code of Practice for General-Purpose AI

The European Commission said several academics will serve as chairs and vice chairs of working groups tasked with drafting a Code of Practice on general-purpose artificial intelligence (GPAI). This Code of Practice will shape the risk management and transparency requirements of the EU's AI Act. The first draft is expected in early November.
[ » Read full article ]

Euractiv; Jacob Wulff (September 30, 2024)

 

Devs Gaining Little (if Anything) from AI Coding Assistants

An Uplevel study of 800 developers' output over three months while using GitHub Copilot found no significant increase in productivity compared to the three-month period prior to adopting the AI coding assistant. Developers using Copilot also did not report substantial improvements in pull request (PR) cycle time or PR throughput, the study found, while 41% more bugs were introduced by Copilot use.
[ » Read full article ]

CIO; Grant Gross (September 26, 2024)

 

AI Crawlers Are Hammering Sites

Some websites are being hit with so many queries from AI crawlers that their performance is impacted. iFixit recently reported close to a million queries in just over 24 hours, which it attributed to a crawler from Anthropic. Game UI Database said its website almost came to a halt due to a crawler from OpenAI hitting it around 200 times a second. Said iFixit's Kyle Wiens, "There are polite levels of crawling, and this superseded that threshold."
[ » Read full article ]

Fast Company; Chris Stokel-Walker (September 26, 2024)

 

California Governor Vetoes AI Safety Bill

California Governor Gavin Newsom vetoed a state measure that would have imposed safety vetting requirements for powerful AI models. Newsom said the legislation “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data.” He said of the bill, "I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
[ » Read full article ]

Politico; Lara Korte; Jeremy B. White (September 29, 2024)

 

Turning OpenAI into a Real Business is Tearing It Apart

The exit of OpenAI CTO Mira Murati (pictured) is the latest in a series of departures as the firm shifts from a nonprofit lab to a for-profit corporation. So far this year, 20 researchers and executives have left OpenAI. Concerns expressed by current and former employees include rushed product announcements and safety testing and CEO Sam Altman's absence from day-to-day operations as he travels on fundraising missions.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Deepa Seetharaman (September 27, 2024)

 

Extreme Weather Is Taxing Utilities More Often. Can AI Help?

Electric utilities increasingly are turning to AI to improve severe weather predictions and identify ways to harden the electrical grid as aging infrastructure is being hit by severe weather more frequently. Extreme weather currently is the leading cause of major U.S. power outages, with more than 4 million without power following Hurricane Helene on Sept. 27.

[ » Read full article *May Require Paid Registration ]

The New York Times; Austyn Gaffney (September 27, 2024)

 

Singapore LNG Demand To Rise Amid AI Boom

Bloomberg Share to FacebookShare to Twitter (9/27, Ong, Subscription Publication) reported Singapore’s LNG demand will increase short-term, driven by the AI boom and data center growth, according to Singapore LNG Corp. CEO Leong Wei Hung. The digital sector significantly impacts energy needs, outpacing infrastructure development. Tech giants Amazon and Microsoft plan major data center investments in Southeast Asia. Singapore aims to boost power allocation for data centers by 35%. The country’s reliance on imported gas challenges its decarbonization efforts, with plans to import 6 GW of green power by 2035. Leong expressed optimism for LNG’s role, saying, “While we wait for renewables to be reasonably priced, LNG has to be the solution.”

OpenAI Seeks Government Support for Massive Data Centers

Fortune Share to FacebookShare to Twitter (9/27, Meyer) reports that OpenAI is seeking U.S. government support to build data centers requiring 5 gigawatts of power each, equivalent to the output of five nuclear reactors. CEO Sam Altman discussed the plan at a recent White House meeting. Experts, including Constellation Energy CEO Joe Dominguez and Aurora Energy Research’s Zachary Edelen, express skepticism about the feasibility due to immense power demands and grid reliability issues. The proposal highlights the growing energy needs of AI technologies and the challenges of sustainable power sourcing.

HHS Launches AI Cybersecurity Task Force

Inside Health Policy Share to FacebookShare to Twitter (9/29, Robles, Subscription Publication) reports behind a paywall that Greg Garcia, executive director of the Health Sector Coordinating Council Cybersecurity Working Group, announced an upcoming joint task force with HHS and industry to address AI’s cybersecurity implications. The task force will explore AI-related risks and threats and how AI can enhance cybersecurity defenses. Garcia made the announcement at AHIP’s digital health conference. Micky Tripathi, head of HHS’s health information technology office, confirmed the collaboration.

OpenAI Faces Complicated Road To Becoming For-Profit Enterprise

The Wall Street Journal Share to FacebookShare to Twitter (9/29, Subscription Publication) highlights how OpenAI’s plan to become a for-profit firm is going to be a complex undertaking. OpenAI will need to grapple with regulatory rules in no less than two states, figure out how to allocate equity in the for-profit firm, and divide assets with the charitable nonprofit that now governs OpenAI and is going to continue to exist.

        OpenAI Expecting Sizable Losses For 2024. Fortune Share to FacebookShare to Twitter (9/28, Ma) reports OpenAI anticipates sizable “losses this year, but revenue over the next five years will continue to be explosive as the company raises fees on its signature chatbot.” Documents which the New York Times saw show that the firm anticipates “revenue of $3.7 billion in 2024.” However, the company sees a loss totaling $5 billion, which the Times reported doesn’t account for equity-based compensation.

Nvidia CEO Advocates AI For Climate Benefits

E&E News Share to FacebookShare to Twitter (9/30, Hiar, Subscription Publication) reports that Nvidia CEO Jensen Huang argued in Washington that artificial intelligence could benefit the climate by enhancing productivity with less energy consumption. Speaking at the Bipartisan Policy Center, Huang emphasized, “The energy efficiency and the productivity gains that we’ll get from it...is going to be incredible.” His visit coincided with Climate Week in New York and the advancement of AI-related legislation in the House. Huang highlighted the efficiency of Nvidia’s specialized chips to a captivated audience of energy executives, investors, and academics.

WSU Develops AI-Guided 3D Printing For Surgical Models

3D Printing Industry Share to FacebookShare to Twitter (10/1) reports that researchers at Washington State University have created an AI-guided 3D printing process to produce detailed human organ replicas. This technique allows surgeons to rehearse complex procedures with patient-specific models. The AI optimizes printer settings for accuracy and speed, using a multi-objective Bayesian Optimization approach. NVIDIA A40 GPUs and NeRF technology ensure model fidelity. The U.S. Department of Commerce has introduced new regulations restricting advanced 3D printing exports to prevent misuse in sensitive applications.

Canada’s AI Regulation Needs Global Collaboration, Says AWS Director

The Canadian Press Share to FacebookShare to Twitter (10/2) reports Amazon Web Services Director of Global AI Nicole Foster urged Canada to create AI legislation that is “interoperable” with regulations in other countries to avoid hindering startups’ ambitions to operate globally. Foster emphasized that unique rules for Canada could limit opportunities for local companies, stating, “A lot of our startups are wonderfully ambitious and have ambitions to be able to sell and do business around the world.” As Canada develops its AI and Data Act, concerns arise that stringent regulations could stifle innovation. Foster highlighted the importance of focusing on high-risk AI systems while avoiding unnecessary regulation of less critical technologies, saying, “I think (it’s about) being focused on the risks that we need to address.”

Researchers Use AI-Generated Images To Train Robots

MIT Technology Review Share to FacebookShare to Twitter (10/3, Williams) reports that researchers from Stephen James’s Robot Learning Lab in London have developed Genima, a system using AI models like Stable Diffusion to create training data for robots. By generating images of robot movements, Genima aids in simulations and real-world applications, improving task completion. The research, to be presented at the Conference on Robot Learning, shows potential for training diverse robots efficiently.

Army Researchers Take Aim At Sepsis In Burn Patients Using AI Machine Learning

Stars and Stripes Share to FacebookShare to Twitter (10/3) reports researchers at the Walter Reed Army Institute of Research have developed SeptiBurnAlert, a system that employs artificial intelligence (AI) to predict sepsis in burn patients by analyzing biomolecular changes in blood. The system, which has shown 85-90% accuracy in initial tests, is expected to reach the commercial market in approximately three years, pending FDA approval.

CrowdStrike CEO Discusses AI’s Impact On Cybersecurity

SiliconANGLE Share to FacebookShare to Twitter (10/3) reports that artificial intelligence is revolutionizing cybersecurity by enhancing threat detection and prevention. CrowdStrike CEO George Kurtz, speaking at Fal. Con 2024 with theCUBE, emphasized the importance of continuous innovation in security. He noted that partnerships with companies like Microsoft, Nvidia, and Amazon Web Services are crucial for addressing modern threats. “No one company can solve everything in security,” Kurtz stated. CrowdStrike’s early adoption of AI, particularly machine learning, has transformed its security platform, allowing for rapid problem-solving and integration of new technologies. The company’s Falcon Flex service and Next-Gen SIEM system exemplify its commitment to customer-centric solutions, driven by client feedback.

dtau...@gmail.com

unread,
Oct 13, 2024, 4:35:41 PM10/13/24
to ai-b...@googlegroups.com

Google DeepMind Boss Awarded Nobel for Proteins Breakthrough

British computer science professor Demis Hassabis, founder of the AI firm that became Google DeepMind, is among the recipients of the Nobel Prize for Chemistry. Hassabis and DeepMind Technologies John Jumper are being recognized their development of an AI tool, AlphaFold2, to predict the structures of nearly all known proteins. They share the Nobel Prize with University of Washington's David Baker, who was recognized for designing a new protein using amino acids.
[ » Read full article ]

BBC; Georgina Rannard (October 9, 2024)

 

Pioneers in AI Awarded Nobel Prize in Physics

ACM A.M. Turing Award laureate Geoffrey Hinton, known as the ‘godfather of AI’, and Princeton University's John Hopfield on Tuesday were named to receive the Nobel Prize in physics for helping to create the building blocks of machine learning. Hopfield created an associative memory that can store and reconstruct images and other patterns in data. Hinton used Hopfield’s work as the foundation for the Boltzmann machine, a type of stochastic recurrent neural network.
[ » Read full article ]

Associated Press; Daniel Niemann; Mike Corder; Seth Borenstein (October 8, 2024)

 

Software Engineers In for Rough Ride as AI Adoption Ramps Up

Gartner reports that to keep up with rising demand for generative AI, 80% of the software engineering workforce will have to upskill by 2027. Gartner found AI tools will support developers' existing work in the short term and provide small productivity gains, but in the medium term, AI-native software engineering will emerge, in which most code is generated by AI.
[ » Read full article ]

ITPro; George Fitzmaurice (October 3, 2024)

 

Texas Regulator Wants Datacenters to Build Power Plants

The Public Utility Commission of Texas said developers of AI datacenters looking to co-locate with a power plant and connect to the grid within 12 to 15 months will have to build the power plant as well. Commission chair Thomas Gleeson said datacenters would be welcome to build power plants that generate more electricity than needed and sell the excess to the grid.
[ » Read full article ]

Bloomberg; Naureen S. Malik (October 3, 2024)

 

Taiwan's AI Goals Will Need More Tech Talent

Taiwan's government is hoping the island-nation can become a hub for innovation in advanced AI. However, Taiwan is in dire need of more skilled workers given its small, aging population and low birth rate. Taiwan's National Development Council plans to introduce "Global Elite" cards to attract top-tier foreign professionals to work for local companies offering yearly salaries of more than NT$6 million (about US$188,000).
[ » Read full article ]

IEEE Spectrum; Yu-Tzu Chiu (October 9, 2024)

 

AI Filling Customer Service Roles in Japan amid Labor Shortage

Japan's labor shortage has prompted firms in a range of industries to fill customer service roles with AI technology. At Ridgelinez Ltd., for instance, an AI assistant recommends auto parts based on the customer's needs, car model, and available stock. An AI assistant deployed by Oki Electric Industry Co. and Kyushu Railway Co. helps passengers navigate station maps and transfers in Japanese, English, and Chinese. Startup Sapeet Co. uses an AI to train its customer service staff.
[ » Read full article ]

Kyodo News (Japan) (October 5, 2024)

 

One of the Biggest AI Boomtowns Is Rising in Malaysia

The Malaysian state of Johor, known for its palm-oil plantations, is home to some of the largest AI construction projects in the world. Regional bank Maybank reported that Johor will see $3.8 billion in total datacenter investments this year. Johor is attractive to datacenter developers due to its abundant land, water, and power, as well as its proximity to Singapore, which has one of the worlds densest intersections of undersea Internet cables.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Stu Woo (October 7, 2024)

 

Stanford Study Finds AI Models Still Show Racial Bias

Forbes Share to FacebookShare to Twitter (10/7, McKendrick) reports that researchers at Stanford University have found Share to FacebookShare to Twitter that large language models (LLMs) continue to exhibit racial biases, particularly against African American English speakers. Despite efforts to address bias, popular LLMs like OpenAI’s GPT series and Google’s T5 still perpetuate harmful stereotypes. The study attributes this to biased training data and suggests that AI models often conceal racism rather than eliminate it.

Google AI Converts Texts To Podcasts

The Washington Post Share to FacebookShare to Twitter (10/7) reports Google’s experimental AI tool, NotebookLM, can transform written documents into podcasts. Users can upload up to 50 documents, and the AI generates summaries and creates audio content, mimicking human conversation. Geoffrey A. Fowler tested the tool with Facebook’s privacy policy, resulting in a 7½ minute podcast. The AI-generated hosts engage in a dialogue, providing a new way to digest information. Google’s Raiza Martin describes it as “talking with your notebook.” However, concerns about accuracy and emphasis arise, as the AI sometimes misinterprets or overgeneralizes content. Steven Johnson of Google Labs highlights the potential for creating podcasts on niche topics without traditional resources. Critics like Shriram Krishnamurthi note the AI’s tendency to miss key points. Educators express cautious optimism, acknowledging AI’s ability to assist learning while stressing the importance of critical thinking and reading original texts.

AI-Powered Digital Tutoring Assistant May Help Improve Students’ Short-Term Performance In Math

The Seventy Four Share to FacebookShare to Twitter reported that “AI-powered digital tutoring assistant designed by Stanford University researchers shows modest promise at improving students’ short-term performance in math.” In fact, the “weakest tutors became nearly as effective as their more highly-rated peers,” according to the new study released Monday. This suggests that the “best use of artificial intelligence in virtual tutoring for now might be in supporting, not supplanting, human instructors.”

        K-12 Dive Share to FacebookShare to Twitter (10/7, Arundel) reports Tutor CoPilot, the open-source tool, “can be embedded in any tutoring platform and helps live tutors ask guiding questions to students and respond to student needs. However, tutors working with the tool suggested improvements to make the guidance for tutors more grade-appropriate.” This is the “first-ever randomized controlled trial of a human-AI system in live tutoring situations.” Students whose tutors used Tutor CoPilot “were 4 percentage points more likely to progress through math tutoring session assessments successfully compared to students whose tutors did not have AI assistance, the study found.”

Google Enhances Coding Assistant With Gemini AI

TechRadar Share to FacebookShare to Twitter (10/9) reports that Google has upgraded its coding assistant for enterprise developers using the Gemini AI platform. The Gemini Code Assist Enterprise service aims to simplify code writing, enhancing productivity and efficiency. It offers improved code customization, suggesting enhancements based on organizational practices and libraries. Announced in April 2024, the service uses the Gemini 1.5 Pro AI model for code analysis and optimization.

        InfoWorld Share to FacebookShare to Twitter (10/9) also reports.

OpenAI Seeks Dismissal Of Elon Musk’s Lawsuit

Forbes Share to FacebookShare to Twitter (10/9, Ray) reports that OpenAI filed a motion on Tuesday in a California federal court to dismiss Elon Musk’s lawsuit, labeling it a “harassment effort” to benefit his AI startup xAI. OpenAI claims Musk, once a supporter, “abandoned the venture” after failing to dominate it. The company alleges Musk’s federal lawsuit mirrors a previous state court case he dropped in June. OpenAI argues the lawsuit is a “PR stunt” with “implausible” claims. Musk initially sued OpenAI, accusing it of prioritizing profit over its founding mission.

OpenAI’s GPT-4o Displays Unexpected Conversational Abilities

Inside Higher Ed Share to FacebookShare to Twitter (10/10, Schroeder) reports that OpenAI’s GPT-4o app exhibited unexpected conversational capabilities last month, engaging users by recalling past interactions and initiating dialogue without prompts. In one instance, the AI inquired about a user’s first week at high school. This behavior, described by OpenAI as a glitch, highlights a shift towards AI acting as a “coworker” or “friend.” OpenAI’s o1 model, which includes “chain of thought reasoning,” aims to enhance AI’s problem-solving abilities, outperforming humans in certain tasks.

Microsoft Unveils AI Tools To Alleviate Strain On Healthcare Professionals

CNBC Share to FacebookShare to Twitter (10/10, Capoot) reports Microsoft announced a suite of new AI tools aimed at reducing the administrative workload for healthcare professionals, a move that could significantly address clinician burnout. These innovations, including medical imaging models and automated documentation solutions, are designed to streamline processes for physicians and nurses, who currently spend a substantial portion of their time on paperwork. By collaborating with major health institutions, Microsoft aims to enhance healthcare efficiency and foster better collaboration among medical staff.

NYT Inspects ChatGPT Code Amid Copyright Lawsuits

Insider Share to FacebookShare to Twitter (10/10, Shamsian) reports lawyers for The New York Times are inspecting ChatGPT’s source code in a secure, internet-free environment as part of copyright infringement lawsuits against OpenAI and Microsoft. The lawsuits claim OpenAI used copyrighted material, including NYT articles, to train its models without compensation. The legal examination aims to determine if OpenAI’s practices constitute “fair use.” The lawsuits, involving major publishers and authors, could set precedents for AI model training legality in the US. The outcomes may influence future AI development and copyright protection in journalism and other creative industries.

dtau...@gmail.com

unread,
Oct 19, 2024, 4:46:32 PM10/19/24
to ai-b...@googlegroups.com

Google Goes Nuclear

Google signed a deal with Kairos Power to use small nuclear reactors to generate the energy needed to power its AI datacenters. The company says it plans to start using the first reactor this decade, and to bring more online over the next decade. Said Google's Michael Terrell, "This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone."
[ » Read full article ]

BBC News; João da Silva (October 15, 2024)

 

Robot's Alan Turing Portrait to be Auctioned by Sotheby's

Auction house Sotheby's next month will auction a portrait of Alan Turing painted by a robot; it is expected to fetch as much as £150,000 ($196,000). The piece, created by humanoid robot Ai-Da, is entitled "AI God" and was exhibited at the United Nations in May 2024. Gallery owner and founder of the Ai-Da Robot studio, Aidan Meller, headed the team that created the robot with experts at the U.K. universities of Oxford and Birmingham.
[
» Read full article ]

Deutsche Welle (Germany) (October 16, 2024)

 

PLCHound Algorithm Aims to Boost Critical Infrastructure Security

Researchers at the Georgia Institute of Technology's Cyber-Physical Security Lab say an algorithm they developed boosts critical infrastructure security by more accurately identifying devices vulnerable to remote cyberattacks. The PLCHound algorithm uses advanced natural language processing and machine learning techniques to sift through databases of Internet records and log the IP addresses and security of connected devices.
[
» Read full article ]

Industrial Cyber; Anna Ribeiro (October 16, 2024)

 

LeCun Thinks AI Is Dumber Than a Cat

AI pioneer and ACM A.M. Turing Award laureate Yann LeCun says some experts are exaggerating AI's power and risks. LeCun believes today’s AI models lack the intelligence of pets. When an OpenAI researcher stressed the need to control ultra-intelligent AI, LeCun responded, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat."

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Christopher Mims (October 11, 2024)

 

Nevada Asked AI Which Students Need Help

Nevada's reliance on AI to estimate the number of children who would struggle in school has sparked an outcry. Before, Nevada treated all low-income students as “at risk” of academic and social troubles. The AI weighed dozens of other factors, slashing the number of students classified as at-risk to less than 65,000 last year from over 270,000 in 2022. As a result, many schools saw state money that they had relied on disappear.

[ » Read full article *May Require Paid Registration ]

The New York Times; Troy Closson (October 12, 2024)

 

Nobel Prizes Recognize AI Innovations, Sparks Debate About Scientific Fields

Scientific American Share to FacebookShare to Twitter (10/14, Castelvecchi, Callaway, Kwon) reports that this year’s Nobel Prizes “recognized the transformative power of artificial intelligence (AI) in two of this year’s prizes,” awarding Geoffrey Hinton and John Hopfield in physics for neural networks and Demis Hassabis and John Jumper in chemistry for AlphaFold. The physics award sparked debate, with some questioning its relevance to physics. The chemistry prize acknowledged AlphaFold’s AI-driven protein folding, with David Jones noting its integration of existing scientific knowledge. AlphaFold “would not have been possible were it not for the Protein Data Bank, a freely available repository of more than 200,000 protein structures...determined using X-ray crystallography, cryo-electron microscopy and other experimental methods.”

Harvard Students Plan AI Innovations For Construction

Insider Share to FacebookShare to Twitter (10/13, Niemeyer) reports that Harvard juniors AnhPhu Nguyen and Caine Ardayfio, known for their I-Xray project using Meta Ray-Bans for facial recognition, are now focusing on AI applications in construction. The duo, who founded Harvard’s AR/VR club, previously developed various tech projects, including an electric skateboard and a robotic tentacle. They gained access to Meta glasses through their club, integrating AI into augmented reality glasses for real-time fact-checking. Ardayfio explained that AI-equipped autonomous construction robots can now make decisions, like waiting for a person to move, without hardcoding every movement.

 

TikTok Lays Off Hundreds As It Shifts To AI-Focused Content Moderation

Reuters Share to FacebookShare to Twitter (10/11, Latiff) reports TikTok “is laying off hundreds of employees from its global workforce, including a large number of staff in Malaysia, the company said on Friday, as it shifts focus towards a greater use of AI in content moderation.” Sources familiar with the matter “earlier told Reuters that more than 700 jobs were slashed in Malaysia” but the company clarified that less than 500 Malaysian employees were affected. The employees, “most of whom were involved in the firm’s content moderation operations, were informed of their dismissal by email late Wednesday, the sources said, requesting anonymity as they were not authorized to speak to media.”

OpenAI Faces Scrutiny Over Nonprofit Structure

The AP Share to FacebookShare to Twitter (10/12, Beaty) reported that OpenAI, the company behind ChatGPT, is under scrutiny regarding its nonprofit status amid a valuation surge to $157 billion. Nonprofit tax experts are concerned about OpenAI’s compliance with its charitable mission. OpenAI CEO Sam Altman confirmed potential restructuring, possibly converting to a public benefit corporation, though specifics are undisclosed. A source indicates no final decision on restructuring has been made. The board, led by Bret Taylor, aims to ensure the nonprofit’s sustainability. Andrew Steinberg notes restructuring would be complex but feasible. Concerns persist about OpenAI’s commitment to its mission, with critics like Elon Musk doubting its fidelity.

US Considers Limiting AI Chip Sales

Reuters Share to FacebookShare to Twitter (10/14, Tanna) reported that US officials are contemplating restrictions on sales of advanced AI chips from Nvidia and other American firms, targeting specific countries. Bloomberg Share to FacebookShare to Twitter (10/15, Subscription Publication) News, citing unnamed sources, revealed that the focus is on Persian Gulf nations, with plans to cap export licenses for national security reasons. Discussions are in preliminary stages and fluid. The US Commerce Department and Nvidia did not comment, while Intel and AMD have yet to respond to Reuters. A recent Commerce Department rule might facilitate AI chip shipments to Middle Eastern data centers. Last year, the Biden Administration expanded licensing requirements for advanced chip exports to over 40 countries, including some in the Middle East, to prevent diversion to China.

Survey Reveals Higher Ed’s AI Preparedness Concerns

Inside Higher Ed Share to FacebookShare to Twitter (10/16, Palmer) reports that Inside Higher Ed’s third annual Survey of Campus Chief Technology/Information Officers, in collaboration with Hanover Research, reveals that “just 9 percent of chief technology officers believe higher education is prepared to handle the new technology’s rise.” Released Wednesday, the survey highlights concerns about AI’s impact on academic integrity, with 60% of CTOs “worried to some degree about the risk generative AI poses to academic integrity.” Despite this, 46% are enthusiastic about AI’s potential benefits, although only 23% “said investing in artificial intelligence is an essential (1 percent) or high (22 percent) priority for their institution.” The survey, involving 82 CTOs, shows that AI is primarily used “to create virtual chat bots and assistants, which was the most popular application.”

Texas A&M University Researchers Use AI For Disaster Recovery

FOX Weather Share to FacebookShare to Twitter (10/15) reports that Texas A&M researchers are employing artificial intelligence and machine learning to expedite damage assessments following major hurricanes. Over a year, they “spent more than a year studying damage photos taken via drone from 10 major disasters,” including hurricanes Harvey, Michael, and Ida. The research team, led by Dr. Robin Murphy, “recruited 130 high school students from Texas and Pennsylvania” to label damage on 21,700 buildings. This data trained an AI system to identify storm-damaged infrastructure. With the new system, “researchers say if they can get drone video of an affected neighborhood, they can have a damage analysis ready in only four minutes, just by using a laptop.” The AI system has already been used “to help the state of Florida in the wake of Hurricanes Debby and Helene.”

Big Tech’s Capital Spending Soars Amid AI Push

The Wall Street Journal Share to FacebookShare to Twitter (10/16, Gallagher, Subscription Publication) reports major tech companies, including Amazon, have significantly increased capital spending this year, particularly on AI infrastructure. The combined capital spending of Microsoft, Amazon, Google, and Meta reached $106.2 billion in the first half of 2024, up 49% from the previous year. This surge is driven by investments in chips and other resources to support generative AI services. Wall Street expects these companies’ combined capital expenditures to top $60 billion in the third quarter and $231 billion for the full year. The Journal highlights that a growing number of analysts think Amazon’s spending on an ambitious satellite program could bring the company’s operating income below Wall Street’s stated targets for this year and next, potentially curbing the operating margin expansion the company has been delivering recently.

Boston Dynamics And Toyota Institute Partner On AI Robotics

TechCrunch Share to FacebookShare to Twitter (10/16, Heater) reports that Boston Dynamics and Toyota Research Institute announced plans to integrate AI-based robotic intelligence into the Atlas humanoid robot. This collaboration will leverage TRI’s work on large behavior models, akin to large language models like ChatGPT. TRI’s research has achieved 90% accuracy in household tasks through overnight training. Boston Dynamics CEO Robert Playter highlighted the partnership’s potential to address complex challenges in robotics. This deal is notable as Boston Dynamics and TRI are backed by automotive rivals Hyundai and Toyota, respectively, aiming to develop a general-purpose humanoid robot.

dtau...@gmail.com

unread,
Oct 26, 2024, 1:04:30 PM10/26/24
to ai-b...@googlegroups.com

U.S. Urges Agencies to ‘Harness’ AI for National Security

The first-ever national security memorandum on AI, issued by President Biden on Thursday, directs the federal government to take action to improve the security and diversity of chip supply chains and to provide AI developers with cybersecurity and counterintelligence to keep their inventions secure. An administration official added that “the U.S. should harness the most advanced AI systems with appropriate safeguards to achieve national security objectives."
[
» Read full article ]

The Hill; Miranda Nazzaro (October 24, 2024)

 

AI Scans RNA ‘Dark Matter,’ Uncovers 70,000 New Viruses

AI was used to uncover 70,500 previously-unknown RNA viruses. Using the protein-prediction tool ESMFold, developed by researchers at Meta, Shi Mang at Sun Yat-sen University in China and colleagues created a model, called LucaProt, and fed it sequencing and ESMFold protein-prediction data. They trained the model to recognize viral RNA-dependent RNA polymerase, a key protein used in RNA replication, and used it to find sequences that encoded these enzymes in the large tranche of genomic data.
[ » Read full article ]

Nature; Smriti Mallapaty (October 14, 2024)

 

Can AI Be Blamed for a Teen's Suicide?

The mother of Sewell Setzer III, a 14-year-old from Orlando, FL, who took his own life in February, is suing Character.AI, a role-playing app the lets users create and chat with AI characters. Setzer reportedly spent hours every day conversing with the chatbot, even confiding his thoughts of suicide. The lawsuit calls the technology "dangerous and untested."


[
» Read full article *May Require Paid Registration ]

The New York Times; Kevin Roose (October 23, 2024)

 

AI Decodes Oinks and Grunts to Keep Pigs Happy

An AI algorithm developed by researchers from universities in Denmark, Germany, Switzerland, France, Norway, and the Czech Republic interprets the sounds pigs make. Using the algorithm could potentially alert farmers to negative emotions in pigs so the farmers can improve their well-being, according to Elodie Mandel-Briefer at Denmark's University of Copenhagen.
[
» Read full article ]

Reuters; Jacob Gronholt-Pedersen (October 24, 2024)

 

Using AI, Radar to Unsnarl a 500-Year-Old Traffic Jam

South Korean company Bitsensing is partnering with the Italian city of Verona and Italy-based Famas Systems to manage traffic at Porta Nuova, a gateway to the city that has been standing for nearly 500 years. Bitsensing installed 10 of its traffic insight monitoring sensors (TIMOS) overlooking Porto Nuevo’s five entrance lanes and six exit lanes. The sensors’ on-device AI collects and transmits real-time data to an operations center supported by local servers.
[ » Read full article ]

IEEE Spectrum; Lawrence Ulrich (October 21, 2024)

 

Anguilla Turns AI Boom into Digital Gold Mine

The British territory of Anguilla, allotted control of the .ai Internet address in the 1990s, is capitalizing on the AI boom. Google, for example, uses google.ai to showcase its AI services, while Elon Musk uses x.ai as the homepage for his Grok AI chatbot. Anguilla’s earnings from Web domain registration fees quadrupled last year to $32 million, fueled by the surging interest in AI.
[ » Read full article ]

Associated Press; Kelvin Chan (October 15, 2024)

 

Vulnerabilities, AI Compete for Software Developers' Attention

The annual "State of the Software Supply Chain" report from software company Sonatype found that developers are on track to download more than 6.6 trillion software components in 2024, including a 70% increase in downloads of JavaScript components and an 87% increase in Python. Sonatype's Brian Fox said while the advent of AI is driving speedier development cycles, it is also making security more difficult.
[ » Read full article ]

Dark Reading; Robert Lemos (October 22, 2024)

 

 

C. Ebert and M. Beck, "Artificial Intelligence for Cybersecurity", IEEE Software, vol. 40, no. 06, pp. 27-34, Nov.-Dec. 2023.
Cybersecurity attacks are on a steep increase across industry domains.1,2 With ubiquitous connectivity and increasingly standard software stacks, basically all software is accessible and vulnerable. Yet, cybersecurity is not systematically deployed because necessary processes are demanding and need continuous attention paired with technology competences. Many software suppliers do not pay adequate attention and governance, resulting in problems such as weak communication protocols, insufficient passwords, and social engineering risks.
URL: https://doi.ieeecomputersociety.org/10.1109/MS.2023.3305726

A. Piplai et al., "Knowledge-Enhanced Neurosymbolic Artificial Intelligence for Cybersecurity and Privacy", IEEE Internet Computing, vol. 27, no. 05, pp. 43-48, Sept.-Oct. 2023.

Neurosymbolic artificial intelligence (AI) is an emerging and quickly advancing field that combines the subsymbolic strengths of (deep) neural networks and the explicit, symbolic knowledge contained in knowledge graphs (KGs) to enhance explainability and safety in AI systems. This approach addresses a key criticism of current generation systems, namely, their inability to generate human-understandable explanations for their outcomes and ensure safe behaviors, especially in scenarios with unknown unknowns (e.g., cybersecurity, privacy). The integration of neural networks, which excel at exploring complex data spaces, and symbolic KGs, which represent domain knowledge, allows AI systems to reason, learn, and generalize in a manner understandable to experts. This article describes how applications in cybersecurity and privacy, two of the most demanding domains in terms of the need for AI to be explainable while being highly accurate in complex environments, can benefit from neurosymbolic AI.
URL: https://doi.ieeecomputersociety.org/10.1109/MIC.2023.3299435

 

OpenAI-Microsoft Partnership Said To Be Experiencing Tension

The New York Times Share to FacebookShare to Twitter (10/18, A1) reports that OpenAI and Microsoft are experiencing tension in their partnership, initially praised as “the best bromance in tech.” OpenAI, led by CEO Sam Altman, sought additional investment from Microsoft after already receiving $13 billion. Microsoft hesitated following Altman’s temporary ousting and OpenAI’s projected $5 billion loss this year. Microsoft remains OpenAI’s largest investor but has also invested in Inflection, an OpenAI competitor. OpenAI secured a $10 billion computing deal with Oracle and recently closed a $6.6 billion funding round. OpenAI’s computing costs are expected to rise significantly. Microsoft and OpenAI have renegotiated terms, but OpenAI staff express dissatisfaction with the computing power provided by Microsoft.

        Another article in the New York Times Share to FacebookShare to Twitter (10/18) reports that Microsoft has hired employees from Inflection, an OpenAI rival, to hedge its AI investments, causing friction. Complaints have emerged about Microsoft’s handling of OpenAI software and insufficient computing power provision. OpenAI has since negotiated a $10 billion contract with Oracle for additional resources.

        TechCrunch Share to FacebookShare to Twitter (10/17, Loizos) reports, “Most fascinating perhaps is a reported clause in OpenAI’s contract with Microsoft that cuts off Microsoft’s access to OpenAI’s tech if the latter develops so-called artificial general intelligence (AGI), meaning an AI system capable of rivaling human thinking.” TechCrunch points out that OpenAI’s board “can reportedly decide when AGI has arrived, and CEO Sam Altman has already said that moment will be somewhat subjective. As he told this editor early last year, ‘The closer we get, the harder time I have answering [how far away AGI is] because I think that it’s going to be much blurrier, and much more of a gradual transition than people think.’”

        Microsoft, OpenAI Negotiate Equity Distribution Amid Transition To For-Profit Corporation. The Wall Street Journal Share to FacebookShare to Twitter (10/18, Jin, Driebusch, Subscription Publication) reports that OpenAI and Microsoft are negotiating how to translate Microsoft’s nearly $14 billion investment in OpenAI into equity amid the latter’s transition from a nonprofit to a for-profit public-benefit corporation that will maintain a nonprofit component. OpenAI, valued at $157 billion, faces challenges in distributing equity. Microsoft, advised by Morgan Stanley, could own a large stake, while OpenAI, advised by Goldman Sachs, navigates governance rights. Microsoft and OpenAI’s complex relationship includes financial and technological ties, including Microsoft’s role as OpenAI’s exclusive cloud services provider.

Congressional Leaders Negotiating Potential Lame-Duck Deal To Address Increasing Concerns About AI, Sources Say

Politico Share to FacebookShare to Twitter (10/18, Perano) reports, “Congressional leaders in the House and Senate are privately negotiating a deal to address increasing concerns about artificial intelligence, and they’re hoping to move a bill in the lame-duck period, two people close to the negotiations tell POLITICO.” The specifics of the package remain “in flux as Democratic and Republican leadership haggle over common ground,” but “several bills have passed through committees on a bipartisan basis related to AI research and workforce training bills, which could be prime areas for agreement.” However, “other subjects like AI’s role in misinformation, elections and national security are areas rife with potential partisan roadblocks and would likely be more difficult to include in a deal.” AI “has specifically been a priority for Majority Leader Chuck Schumer, who initiated the negotiations, according to one of the people familiar.”

AI-Powered Chatbot “Sassy” Helps Oregon Students Explore Careers

Education Week Share to FacebookShare to Twitter (10/21, Langreo) reports that the Oregon Department of Education, in collaboration with Journalistic Learning Initiative and Playlab.ai, has launched “Sassy,” an AI-powered chatbot designed to aid students in career exploration. EdWeek “interviewed Ed Madison, a University of Oregon professor and executive director of the Journalistic Learning Initiative, about the chatbot and how he envisions students and teachers using it.” With Sassy – short for Sasquatch, Oregon’s “Bigfoot” – students can “brainstorm possible careers, create action plans for how to get their dream jobs, prepare for an interview, and even stay motivated.” This initiative is “part of the state’s investment in expanding career-connected programs to engage students in relevant learning, complete unfinished learning, and improve their mental well-being and sense of belonging.”

AI Transforms Agriculture With Precision And Efficiency

Forbes Share to FacebookShare to Twitter (10/21, Walch) reports that agriculture is undergoing a transformation with the integration of artificial intelligence into farming equipment and processes. AI technology is enhancing precision farming by improving harvest quality and efficiency, detecting plant diseases, and optimizing resource use. Autonomous systems, such as AI-powered drones and self-driving tractors, provide farmers with real-time insights and operational control, reducing labor needs and increasing productivity. AI also aids in weather forecasting, offering crucial lead time for farmers. Despite challenges like high costs and technical requirements, AI advancements are making farms more efficient globally.

Employers Stress Need For AI Training In Education

Inside Higher Ed Share to FacebookShare to Twitter (10/21, Mowreader) reports that employers are increasingly “indicating that there’s a need for students to be trained in generative artificial intelligence tools as more businesses integrate the tech’s capabilities into the workplace.” Mark Lacker, an entrepreneurship professor at Miami University in Ohio, “encourages students to use generative AI tools to complete projects, inspiring creative and critical thinking skills that can prepare them for careers.” A spring 2024 survey “by Inside Higher Ed and Generation Lab found 31 percent of students say they know how to use generative AI to help with coursework because it was communicated by their professors.” Lacker’s course, likened to an internship, involves students working “with a small group of their peers to use AI to solve a problem,” with presentations to demonstrate learning.

OpenAI Hires New Chief Economist With Ties To Biden, Obama

The New York Times Share to FacebookShare to Twitter (10/22, Metz) reports OpenAI “has hired a chief economist with ties to two Democratic presidential administrations.” OpenAI on Tuesday “said it had hired Aaron ‘Ronnie’ Chatterji, a professor of business and public policy at Duke University’s Fuqua School of Business,” who “previously served as a senior economist in [former] President Barack Obama’s Council of Economic Advisers and as chief economist at the Commerce Department under President Biden.” This “addition of a chief economist is indicative of OpenAI’s enormous ambition and where its executives see their company in the tech industry’s pecking order.”

Anthropic Announces AI Agents That Can Complete Complex Tasks “Like A Human Would”

CNBC Share to FacebookShare to Twitter (10/22, Field) reports Anthropic “announced Tuesday that it’s reached an artificial intelligence milestone for the company: AI agents that can use a computer to complete complex tasks like a human would.” The company’s new Computer Use capability “allows its tech to interpret what’s on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing.” Anthropic Chief Science Officer Jared Kaplan told CNBC the tool can “use computers in basically the same way that we do,” adding it can do tasks with “tens or even hundreds of steps.”

Report Highlights AI Integration Challenges In Teacher Education

The Seventy Four Share to FacebookShare to Twitter (10/22, Toppo) reports that a recent study by the Center on Reinventing Public Education at Arizona State University “tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI.” Through surveys and interviews, “researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI,” with most focusing on plagiarism prevention. Few faculty members feel confident using AI, with only 10% expressing confidence, and concerns about AI’s impact on jobs and data privacy persist. Promising programs include Arizona State University and the University of Northern Iowa. Researchers concluded “that the responsibility to integrate more content on AI can’t rest solely on the shoulders of ‘individual, self-motivated educators,’” and the report calls for strategic investments and policy adjustments to enhance AI education.

How AI Tools Enhance High School Counseling, College Applications

Education Week Share to FacebookShare to Twitter (10/23, Najarro) reports that artificial intelligence (AI) tools are increasingly being utilized in high school counseling to streamline repetitive tasks. Jeffrey Neill, director of college counseling at Graded: The American School of São Paulo, “discussed his experience with incorporating AI tools into counseling at the College Board’s annual forum here in Austin this week.” Neill highlighted that AI assists in compiling information for recommendation letters, reducing the time spent on gathering data. Additionally, AI tools like ChatGPT help create promotional content for college visits and draft email responses based on previous communications. Neill emphasized the importance of ethical AI use, advising students that “there is only one rule: don’t copy and paste text from ChatGPT and claim it as your own.’” Neill stressed the need for careful implementation to ensure AI benefits all students fairly.

Lawsuit Filed Against AI Chatbot Company Over Teen’s Suicide

The New York Times Share to FacebookShare to Twitter (10/23) reports on a lawsuit filed by a Florida mother against an AI companionship platform, accusing the company of contributing to the suicide of her son. Sewell Setzer III, a 14-year-old from Orlando, became emotionally attached to “Dany,” an AI chatbot on Character. AI, named after a “Game of Thrones” character. He developed an intense relationship with the chatbot, isolating himself from the real world, which led to declining school performance and mental health issues. Despite being diagnosed with anxiety and mood disorders, Sewell preferred confiding in the chatbot over seeking professional help, eventually leading to his death by suicide. The lawsuit claims the company’s technology is “dangerous and untested.” Character. AI’s spokesperson stated they are enhancing safety features. According to The Times, the case highlights concerns over AI companionship apps potentially exacerbating loneliness and replacing human interactions, especially among vulnerable teens.

OpenAI, Anthropic Compete with New AI Models

Forbes Share to FacebookShare to Twitter (10/24, Werner) reports that OpenAI and Anthropic are advancing AI capabilities with their latest models. OpenAI’s o1 model features “chain of thought” reasoning for language tasks, while Anthropic’s Claude 3.5 model allows computer use akin to human interaction. Users report Claude’s effectiveness in analytical tasks, while OpenAI’s o1 is praised for its reasoning capabilities. Analysts suggest OpenAI leads due to significant funding and innovative features. However, there is debate over the models’ true autonomy and reasoning abilities. Both companies continue to shape the AI landscape, with OpenAI currently seen as a frontrunner.

        Former OpenAI Researcher Criticizes Company’s AI Data Practices. The New York Times Share to FacebookShare to Twitter (10/23) reports that Suchir Balaji, a former artificial intelligence researcher at OpenAI, has publicly criticized the company’s use of copyrighted internet data to develop technologies like ChatGPT. Balaji, who worked at OpenAI for nearly four years, concluded that the company’s practices violated the law and contributed to societal harm. He left the company in August, expressing his concerns in interviews with The New York Times. Balaji is among the first employees to leave a major AI company and speak out against the use of copyrighted data in AI development.

White House Published National Security Memo Promoting Federal AI Use

The Washington Post Share to FacebookShare to Twitter (10/24) reports the Administration on Thursday published “a landmark national security memorandum ... directing the Pentagon and intelligence agencies to increase their adoption of artificial intelligence, expanding the Biden administration’s efforts to curb technological competition from China and other adversaries.” The memo “aims to make government agencies step up experiments and deployments of AI” and “also bans agencies from using the technology in ways that ‘do not align with democratic values,’” with NSA Sullivan saying, “This is our nation’s first ever strategy for harnessing the power and managing the risks of AI to advance our national security.”

        The New York Times Share to FacebookShare to Twitter (10/24, E. Sanger) calls the memo “the latest in a series Mr. Biden has issued grappling with the challenges of using A.I.,” adding that “most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them.” Sullivan, who “prompted many of the efforts to examine the uses and threats of the new tools,” on Thursday “acknowledged that one challenge is that the U.S. government funds or owns very few of the key A.I. technologies – and that they evolve so fast that they often defy regulation.” According to CNN, Share to FacebookShare to Twitter (10/24, Liptak) the directive “seeks to strike a balance between deploying AI’s powerful potential with protecting against some of its fearsome possibilities.”

        Reuters Share to FacebookShare to Twitter (10/24) says the memo “directed federal agencies ‘to improve the security and diversity of chip supply chains’” and “also prioritizes the collection of information on other countries’ operations against the U.S. AI sector and passing that intelligence along quickly to AI developers to help keep their products secure.” However, Politico Share to FacebookShare to Twitter (10/24, Chatterjee, Gedeon) notes it “set up a potential political bind on a top tech issue for whoever wins the White House next,” as “its focus on using AI in security could cause friction for Vice President Kamala Harris if she wins: Civil rights groups are already criticizing the memo for its potential to let security agencies turbocharge a surveillance state.”

Report Explores School Districts’ Early AI Adoption

K-12 Dive Share to FacebookShare to Twitter (10/24, Merod) reports, “When districts are early users of artificial intelligence, they often adopt multiple approaches to implement the technology, according to a report released Thursday by the Center on Reinventing Public Education.” The nonpartisan research and policy analysis center “examined 40 school districts that adopted the technology early,” finding that districts often use multiple methods to implement AI, with 70% using teacher-centered AI tools and 65% providing guidance on AI use for teachers, students, and families. Additionally, 63% offer professional development for AI literacy, and 58% supply student-centered AI tools. The CRPE report “suggests early AI adopters consider: Piloting new ideas with AI tools and document what is and isn’t working. Investing in AI literacy for all adults and students in the district, including board members.”

dtau...@gmail.com

unread,
Nov 3, 2024, 4:42:16 PM11/3/24
to ai-b...@googlegroups.com

Google Watermarks Its AI-Generated Text

Google DeepMind researchers have developed a system to watermark its AI-generated text and has integrated it into its Gemini chatbot. The open source SynthID-Text system provides a way to determine whether text outputs have come from large language models without compromising "the quality, accuracy, creativity, or speed of the text generation," according to Google DeepMind's Pushmeet Kohli.
[ » Read full article ]

IEEE Spectrum; Eliza Strickland (October 23, 2024)

 

Tech Giants Press Congress to Codify AI Safety Institute

A letter from a coalition of more than 60 technology companies and industry groups calls on Congress to permanently authorize the U.S. Artificial Intelligence Safety Institute within the National Institutes of Standards and Technology (NIST) via legislation. The letter was signed by Amazon, Google, Meta, Microsoft, OpenAI, and more than 50 other companies.
[ » Read full article ]

The Hill; Julia Shapero (October 22, 2024)

 

AI Helps Driverless Cars Predict Movements of Unseen Objects

An algorithm developed by researchers at California cognitive computing firm VERSES AI and Volvo Cars helps autonomous vehicle systems anticipate and predict the trajectories of other vehicles, pedestrians, and cyclists hidden from direct view. The algorithm uses occlusion reasoning to reduce complex, rapidly changing scenarios to a simpler set of movements that could be made by potential hidden objects. When approaching locations where hidden objects are likely, the algorithm could alter the autonomous vehicle's speed or direction and its driving behavior could be updated should sensors confirm hidden objects are present.


[
» Read full article *May Require Paid Registration ]

New Scientist; Jeremy Hsu (October 29, 2024)

 

OpenAI Emphasizes US Leadership In AI Development

The Hill Share to FacebookShare to Twitter (10/25) reports that OpenAI has reiterated the importance of the US maintaining leadership in artificial intelligence development, following a national security memorandum from the Biden administration. OpenAI views the memo as a significant step toward ensuring AI benefits many while upholding democratic values. The company emphasizes partnerships that align with democratic values and responsible use, citing collaborations with DARPA and US National Laboratories. OpenAI also stresses the need for safeguards against misuse and highlights ongoing efforts to set norms for AI’s safe deployment in national security contexts.

        Researchers: AI Tool Adopted By Hospitals Is Fabricating Information. The AP Share to FacebookShare to Twitter (10/26, Burke, Schellmann) reported that OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.” However, Whisper “has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers.” Experts “said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos. More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors.”

AI Boom Challenges Europe’s Environmental Goals

CNBC Share to FacebookShare to Twitter (10/29, Roach) reports that the surge in AI is pressuring European data centers to adapt their cooling systems to accommodate high-powered chips from companies like Nvidia. According to Goldman Sachs, AI is expected to drive a 160% increase in demand for data centers by 2030, potentially conflicting with Europe’s decarbonization goals. Michael Winterson of the European Data Center Association warns that lowering water temperatures for cooling is “fundamentally incompatible” with the EU’s Energy Efficiency Directive. The European Commission is engaging with Nvidia and other stakeholders to address energy consumption concerns in data centers.

Microsoft Faces Slow Revenue Growth Amid AI Concerns

Reuters Share to FacebookShare to Twitter (10/28) reports that Microsoft is expected to announce its slowest quarterly revenue growth in a year, with investors focusing on AI demand and returns. Despite significant investments in AI, including in OpenAI’s ChatGPT, Microsoft’s key products like the Copilot assistant face slow adoption. Analysts express concerns over capital expenditures and margin compression. Microsoft’s Azure unit likely saw 33% growth, while total revenue is expected to rise 14.1% to $64.51 billion. Analysts suggest recent developments, like autonomous AI agents, may boost Copilot adoption, though skepticism remains high.

Poll: Roughly 74% Of Adults Older Than 50 Say They Would Have Little Or No Trust In Health Information Generated By AI

The Washington Post Share to FacebookShare to Twitter (10/28, Docter-Loeb) reports, “About 74 percent of adults older than 50 say they would have little or no trust in health information generated by artificial intelligence, according to the University of Michigan National Poll on Healthy Aging.” The new “report Share to FacebookShare to Twitter analyzed data from a survey administered in February and March to 3,379 U.S. adults between ages 50 and 101.” More than “half of the adults (58 percent) reported looking for health information on the web in the past year.” The poll found that “trust in AI-generated information differed across demographics.” For example, “women and those with less education or lower household income or who had not had a health-care visit in the past year were less likely to trust the information they found generated by AI online.”

Study Finds AI Adoption May Be Overstated

Fortune Share to FacebookShare to Twitter (10/29, Goldman) reports that a new study on generative AI adoption claims 40% of U.S. adults have used such tools, suggesting rapid uptake. However, Princeton professor Arvind Narayanan criticizes this as exaggerated, noting only 0.5%-3.5% of work hours involve generative AI. The study, published by the National Bureau of Economic Research, contrasts with personal observations that many are unaware of AI tools beyond ChatGPT. Despite mixed reviews for products like Apple’s AI features, the technology is becoming unavoidable with integrations across platforms like Google and Microsoft. The article also highlights Microsoft’s GitHub Copilot expanding model options beyond OpenAI, reflecting evolving AI tool usage.

OpenAI Building First Chip With Broadcom And TSMC, Scaling Back Foundry Ambition

Reuters Share to FacebookShare to Twitter (10/29, Hu, Potkin, Nellis) reports OpenAI is working with TSMC and Broadcom “to build its first in-house chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet its surging infrastructure demands, sources told Reuters.” OpenAI has dropped its “ambitious foundries plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources, who requested anonymity as they were not authorized to discuss private matters.” Its strategy “highlights how the Silicon Valley startup is leveraging industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs like larger rivals Amazon, Meta, Google and Microsoft.”

        OpenAI CFO: 75% Of Revenue Comes From Consumer Subscriptions. Bloomberg Share to FacebookShare to Twitter (10/28, Ghaffary, Ludlow, Subscription Publication) reports that OpenAI’s Chief Financial Officer Sarah Friar stated that 75% of the company’s revenue comes from consumer subscriptions, particularly for its ChatGPT service, during an interview at the Money20/20 conference in Las Vegas. Despite efforts to expand its corporate customer base, the company’s consumer side remains robust, with 250 million weekly active users and a conversion rate of 5% to 6% from free to paid users. OpenAI recently secured $6.6 billion in funding and a $4 billion credit line to support its AI development and infrastructure expansion.

Meta Working On AI-Based Search Engine

Reuters Share to FacebookShare to Twitter (10/28, Votaw) reports that Meta Platforms “is working on an artificial intelligence-based search engine as it looks to reduce dependence on Alphabet’s Google and Microsoft’s Bing.” The engine, says Reuters, “will provide conversational answers to users about current events on Meta AI, the company’s chatbot on WhatsApp, Instagram and Facebook, according to the report.”

Biden’s Memo On AI In National Security Is “Ambitious,” Technology Experts Say

Roll Call Share to FacebookShare to Twitter (10/29, Ratnam) highlights reactions from intelligence experts to President Biden’s memo directing security agencies to harness the power of AI technology. Roll Call explains the memo, “which stems from the president’s executive order from last year, asks the Pentagon; spy agencies...and others to harness AI technologies. The directive emphasizes the importance of national security systems ‘while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities.’” However, “technology experts” are warning that the directive “sets ambitious targets amid a volatile political environment.” For example, Center for a New American Security fellow Josh Wallin said, “It’s like trying to assemble a plane while you’re in the middle of flying it. ... It is a heavy lift. This is a new area that a lot of agencies are having to look at that they might have not necessarily paid attention to in the past, but I will also say it’s certainly a critical one.”

Survey: Teachers Seek More AI Training Opportunities

Education Week Share to FacebookShare to Twitter (10/29, Langreo) reports that a recent survey by the EdWeek Research Center shows an increase in teachers receiving professional development on artificial intelligence, though a majority still lack training. Conducted “between Sept. 26 and Oct. 8,” the survey included 1,135 educators, with 43% of teachers saying “they have received at least one training session on AI,” up from 29% in the spring. Tara Natrass from ISTE+ASCD suggests the increase is due to more opportunities for training during summer and back-to-school periods. However, if 58 percent of teachers “still have no training two years after the release of ChatGPT, then districts have a lot of work to do to get everyone up to speed, Natrass said.” The lack of knowledge and support “is one of the top reasons why teachers say they aren’t using AI in the classroom, according to the EdWeek Research Center survey.”

How San Diego Teachers Are Using AI To Enhance Education

The San Diego Union-Tribune Share to FacebookShare to Twitter (10/29, Taketa) reports that at Sage Creek High School, one math teacher’s students “are not only allowed but encouraged to use AI.” The educator, who has a background in electrical engineering, introduces students to AI tools for solving math problems and checking answers. He developed “his own AI platform that launched this year, called HappyGrader, that grades students’ tests and provides grading feedback. It’s cut his grading time in half,” and despite initial skepticism, students have found these tools beneficial. An English teacher also uses AI to check for academic dishonesty and offers guidance on ethical AI use. San Diego Unified is also exploring AI’s potential and “is convening a task force that will draft district guidelines for AI use by June of next year,” aiming to enhance, not replace, teachers.

Meta Posts Record Revenue Amid AI Investments

The Wall Street Journal Share to FacebookShare to Twitter (10/30, Subscription Publication) reports Meta Platforms achieved a record $40.59 billion in sales, a 19% year-over-year increase, driven by digital advertising growth, albeit slower than previous quarters. CEO Mark Zuckerberg emphasized continued significant investments in AI. Amazon is expected to report its results on Thursday, as tech giants provide their quarterly updates.

        Adweek Share to FacebookShare to Twitter (10/30) reports Meta is ramping up its investments in AI, with the technology expected to enhance ad targeting and content recommendation capabilities. Notably, Meta launched generative AI ad tools for video creation in October and is integrating its AI chatbot across WhatsApp, Messenger, and Instagram. The company inked a multiyear deal with Reuters for news content and is developing an AI-powered search engine to reduce dependence on Google and Microsoft’s Bing. Meta AI has over 500 million monthly active users. CFO Susan Li noted, “Over time, there will be a broadening set of queries that people use [Meta AI] for, and monetization opportunities will exist.”

AI Sparks Mixed Reactions Among Louisiana Entrepreneurs

The New Orleans Times-Picayune Share to FacebookShare to Twitter (10/23, Collins) reported that the 2024 Greater New Orleans Startup Report reveals mixed feelings about artificial intelligence (AI) among Louisiana entrepreneurs. Conducted by Tulane University’s Albert Lepage Center, the survey shows 60% of respondents view AI as a threat, while 61% see it as an opportunity. The report highlights that AI-driven innovations have benefited tech giants and startups like OpenAI. Locally, AI has inspired new companies and courses. The survey also indicates that 37% of respondents believe AI will have the largest long-term impact on their companies. Despite AI’s potential, funding gaps persist for minority and female founders.

Google Reports AI Writes Over 25% of Its Code

Fortune Share to FacebookShare to Twitter (10/30, McKenna) reports Alphabet CEO Sundar Pichai announced during the company’s third-quarter earnings call Tuesday that AI generates over 25% of Google’s new code. The company also “says its impressive Q3 performance – earnings beat analyst predictions – was driven in part by its cloud business.” The “segment generated quarterly revenues of $11.4 billion, up 35% from the same period last year, as Pinchai said artificial intelligence offerings helped attract new enterprise customers and win larger deals.”

Illinois Teacher Advocates AI Use In Language Learning

Education Week Share to FacebookShare to Twitter (10/30, Najarro) reports that Sarah Said, “an English teacher working with English learners at an alternative high school near Chicago,” is encouraging the use of AI tools in language learning. Said, who has more than 20 years of experience with English learners, notes that students are already utilizing AI and translation apps like Google Translate and ChatGPT. She emphasizes the importance of teaching students to use these tools responsibly, likening AI to a calculator that aids but doesn’t replace learning. Said presented on this topic “virtually at the annual WIDA conference in mid-October and spoke with Education Week about how teachers working with English learners should approach AI tools in class.” In an interview with EdWeek, she English learners “might be the first ones to actually be in the know because they’ve had to adapt to using so many tools in the classroom.”

AI Startup Develops Robots For Household Chores

Wired Share to FacebookShare to Twitter (10/31, Knight) reports that Physical Intelligence, a San Francisco startup, is advancing robotics with a new AI model capable of performing various household tasks. The company, founded by robotics researchers, has developed a “foundation model” called π0, trained on extensive robotic data. This model enables robots to perform chores such as folding laundry and cleaning tables. CEO Karol Hausman likens the training process to that of large language models like ChatGPT, but applied to physical tasks. Videos demonstrate robots executing tasks with notable skill. However, the algorithm sometimes fails amusingly, such as overfilling an egg carton. Co-founder Sergey Levine acknowledges the model’s limitations, comparing it to early AI models. The company aims to overcome challenges like limited data availability by generating its own. This approach could lead to robots handling diverse industrial tasks and adapting to human environments.

Meta Partners With GelSight And Wonik Robotics To Develop AI Tactile Sensors

TechCrunch Share to FacebookShare to Twitter (10/31, Wiggers) reports that Meta is collaborating with GelSight and Wonik Robotics to commercialize tactile sensors for AI research. These sensors aim to enhance AI’s understanding of the physical world. GelSight will help market Digit 360, a tactile fingertip with advanced sensing capabilities. Meta and Wonik will also develop a new Allegro Hand with integrated tactile sensors. Both products will be available next year.

AI Tool Enhances Math Tutoring Efficiency

Education Week Share to FacebookShare to Twitter (10/31) reports that a Stanford University study found an AI-powered tutoring assistant, Tutor CoPilot, increased human tutors’ capacity and improved students’ math performance. Stanford researchers developed this digital tool to aid tutors, particularly novices, in student interactions. This study is the first randomized controlled trial investigating a human-AI partnership in live tutoring. It assesses the tool’s effectiveness in enhancing tutors’ skills and students’ math learning. Susanna Loeb, a Stanford education professor and study author, discussed the tool’s development, trial results, and implications for schools in an interview with Education Week. The study emerges as schools face challenges in scaling tutoring programs due to resource demands.

dtau...@gmail.com

unread,
Nov 9, 2024, 7:26:21 PM11/9/24
to ai-b...@googlegroups.com

South Korea Fights Deepfake Porn Surge

Officials announced several steps to curb a surge in deepfake porn in South Korea, including tougher punishment for offenders, the expanded use of undercover officers, and tougher regulations on social media platforms. Concerns of deepfakes grew after unconfirmed lists of schools with victims spread online in August. In response, many girls and women removed photos and videos from their social media accounts.
[ » Read full article ]

Australian Broadcasting Corporation (November 6, 2024)

 

Meta Permits Its AI Models to Be Used for U.S. Military Purposes

Meta announced Nov. 4 it would allow its AI models to be used by U.S. government agencies and contractors working on national security for military purposes. Previously, Meta's "acceptable use policy" prohibited the use of its AI software for military, warfare, or nuclear applications. Meta said it will share its Llama open-source AI models with the Five Eyes intelligence alliance: the U.S., U.K., Canada, Australia, and New Zealand.

[ » Read full article *May Require Paid Registration ]

The New York Times; Mike Isaac (November 4, 2024)

 

Chinese Researchers use Meta's LLM to Build a Model for Military Use

Chinese research institutions linked to the People's Liberation Army used Meta's Llama large language model (LLM) to develop an AI tool for potential military applications. The researchers added their own parameters to Meta's Llama 13B, an earlier version of the LLM, to build ChatBIT, an AI tool that can collect and process intelligence and produce reliable information for operational decision-making.
[ » Read full article ]

Reuters; James Pomfret; Jessie Pang; Katie Paul (November 1, 2024); et al.

 

AI Rests on Billions of Tons of Concrete

The amount of concrete used in datacenter construction is challenging tech companies' commitments to eliminate carbon emissions and bolster demand for green concrete. In response, an Open Compute Project Foundation-led initiative to speed testing and deployment of low-carbon concrete in datacenters has garnered support from Amazon, Google, Meta, and Microsoft.
[ » Read full article ]

IEEE Spectrum; Ted C. Fishman (October 30, 2024)

 

Microsoft Tries to Whittle Down Its Carbon Footprint

Microsoft is using engineered timber products in the construction of two datacenters in Northern Virginia. The material is comprised of timber sheets bonded together, each layer alternating the direction of the grain. The software giant said the facilities, which also will incorporate steel and concrete, will have a carbon footprint that is 35% lower than a similar, mostly steel facility and 65% lower than a similar facility comprised mainly of precast concrete.
[ » Read full article ]

GeekWire; Lisa Stiffler (October 31, 2024)

 

Eavesdropping on Phone Calls by Sensing Vibrations

Suryoday Basak at Pennsylvania State University and colleagues used a commercially available millimeter-wave sensor to pick up the tiny vibrations of a Samsung Galaxy S20 earpiece speaker playing audio clips. The team converted the signal to audio and passed it through an AI speech recognition model, which transcribed the speech. The system achieved a word accuracy rate of 50% and a character accuracy rate of 67%.
[ » Read full article ]

New Scientist; Matthew Sparkes (October 31, 2024)

 

Neural Networks on the Edge

Researchers at Japan's Tokyo University of Science developed a binarized neural network (BNN) to allow for more efficient AI implementation in Internet of Things edge devices and other resource-limited devices. The researchers reduced circuit size and power consumption through the use of a magnetic random access memory (MRAM)-based computing-in-memory architecture. This required the creation of a new XNOR logic gate as the foundation for a MRAM array, which stores information in its magnetization state using a magnetic tunnel junction.
[ » Read full article ]

Computer Weekly; Joe O'Halloran (October 28, 2024)

 

Voting Rights Groups Concerned Chatbots Produce Election Falsehoods in Spanish

An analysis by two nonprofit newsrooms working with the Science, Technology and Social Values Lab at New Jerseyʼs Institute for Advanced Study found that AI chatbots generate more false claims about voting rights in Spanish than they do in English, in the lead-up to the U.S. presidential election. Assessing responses by Meta's Llama 3, Anthropic's Claude, and Google's Gemini to specific election-rated prompts, the researchers found they produced incorrect information in more than half their responses in Spanish.
[ » Read full article ]

Associated Press; Gisela Salomon; Garance Burke; Jonathan J. Cooper (October 31, 2024)

 

3D Image Reconstruction to Preserve Cultural Heritage

A neural network developed by a multinational research team led by Satoshi Tanaka from Japan's Ritsumeikan University allows for the 3D reconstruction and digital preservation of sculpted and carved reliefs using old photos. The neural network performs semantic segmentation, depth estimation, and soft-edge detection, which together enhance the accuracy of 3D reconstruction. The core strength of the network lies in its depth estimation, achieved through a novel soft-edge detector and an edge matching module.
[ » Read full article ]

Ritsumeikan University (Japan) (October 31, 2024)

 

Denmark Unveils AI Supercomputer Funded By Novo Nordisk

The Wall Street Journal Share to FacebookShare to Twitter (11/1, Cohen, Subscription Publication) reported that Denmark launched its national AI supercomputer, Gefion, last week. Nadia Carlsten, the new CEO of the Danish Centre for AI Innovation, oversees the project. The supercomputer, built with Nvidia technology and funded by the Novo Nordisk Foundation, aims to enhance Danish industries like healthcare and biotechnology. The supercomputer will be accessible to entrepreneurs, academics, and scientists for various research purposes.

Tesla Pursues AI-Driven Robotaxis Amid Industry Skepticism

The Wall Street Journal Share to FacebookShare to Twitter (11/1, Mims, Subscription Publication) reports that Elon Musk is focusing on end-to-end AI to advance Tesla’s self-driving technology, aiming to deliver fully autonomous vehicles more quickly and cost-effectively than competitors. Musk plans to offer existing Tesla owners access to this technology next year and launch new robotaxis by 2026. However, industry leaders like Waymo employ a different approach, using sensors for a more comprehensive understanding of driving environments. AI developers express doubt about the feasibility of Musk’s vision, with Anthony Levandowski remarking that Musk’s timeline for a fully autonomous system is unreasonable. Concerns about Tesla’s camera-based technology persist, with federal regulators investigating its role in fatal crashes.

Tech Giants Plan Increased AI Investment Despite Wall Street Concerns

Bloomberg Share to FacebookShare to Twitter (11/1, Subscription Publication) reports that major tech companies, including Amazon, Microsoft, Meta, and Alphabet, are set to exceed $200 billion in capital expenditures this year, primarily for AI development. Despite Wall Street’s previous criticism over AI spending, these firms plan to increase investments further. Amazon’s CEO, Andy Jassy, described AI as a “once-in-a-lifetime opportunity,” with projected spending of $75 billion for 2024. Analysts expressed optimism about Microsoft’s investments despite current data center supply issues. Meta’s CEO, Mark Zuckerberg, emphasized AI’s role in enhancing ad sales, despite operating losses in other divisions.

Microsoft Hires Facebook Engineering Executive To Boost Data Center Efforts

Bloomberg Share to FacebookShare to Twitter (10/31, Subscription Publication) reports that Microsoft Corp. has hired Jay Parikh, a former engineering chief at Facebook, to enhance its data center capabilities amid rising demand for AI products. Parikh will join the senior leadership team, reporting to CEO Satya Nadella. Nadella praised Parikh’s experience in scaling infrastructure for large internet businesses. Parikh previously led engineering at Facebook, overseeing data center projects. Microsoft is focused on expanding infrastructure to support its partnership with OpenAI.

FERC Chief Promotes Practice Of Pairing Data Centers With Power Plants

Reuters Share to FacebookShare to Twitter (11/1, Kearney) reports the Federal Energy Regulatory Commission on Friday hosted a conference focused on “costs and reliability concerns related to the burgeoning trend of building energy-intensive data centers next to U.S. power plants,” which “has presented a fast route to accessing large amounts of electricity, instead of toiling for years in queues to connect to the broader grid.” Despite “questions about potentially higher power bills for everyday customers,” FERC Chairman Willie Phillips said, “I believe that the federal government, including this agency, should be doing the very best it can to nurture and foster their development,” while also “adding he considered AI centers vital to national security and the U.S. economy.”

        Meanwhile, the Washington Post Share to FacebookShare to Twitter (11/1, A1, Halper, O'Donovan) says some experts claim consumers “are facing higher electric bills due to a boom in tech companies building data centers that guzzle power and force expensive infrastructure upgrades,” and “some regulators are concerned that the tech companies aren’t paying their fair share.” The Post notes that “other causes – volatile fuel prices, supply chain challenges, extreme weather and rising interest rates – also drive up electricity rates,” and “the tech firms and several of the power companies serving them strongly deny they are burdening others. They say higher utility bills are paying for overdue improvements to the power grid that benefit all customers.”

UCLA Professor Discusses How Legislation Could Combat Non-Consensual Deepfake Videos

USA Today Share to FacebookShare to Twitter (10/31, Taylor) provided a transcript of a special episode of The Excerpt about deepfake videos, primarily non-consensual pornography targeting celebrities and increasingly high school and middle school students. On Wednesday, October 30, UCLA professor John Villasenor discussed legislative and technological strategies to combat this issue on the podcast. In California, “the vast majority of AI-focused companies operate just passed 18 laws to help regulate the use of AI with particular focus on AI-generated images of sexual child abuse,” but Villasenor noted challenges in enforcing these laws due to potential legal disputes and the difficulty in tracking creators of deepfake content. He said, “I think the longer term solution would have to be automated technologies that are used and hopefully run by the people who run the servers where these are hosted,” to mitigate the spread of such videos. Villasenor also advised parents to educate their children on internet safety and “to be just really aware of knowing how to use the internet responsibly.”

Professors Confront AI-Driven Cheating Culture

The Chronicle of Higher Education Share to FacebookShare to Twitter (11/4, McMurtrie) reports Amy Clukey, an associate professor at the University of Louisville, faced rampant cheating facilitated by AI among her students upon returning from a leave. Despite her efforts to create unique assignments, Clukey discovered widespread use of AI for plagiarism. She stated “she feels less like a teacher and more like a human plagiarism detector, spending hours each week analyzing her students’ writing to determine its authenticity.” A student even sent an apology email that closely resembled a ChatGPT-generated response. This issue reflects a broader trend, with institutions like Middlebury College witnessing a rise in honor code violations. Middlebury’s annual survey showed an increase in students admitting to cheating, from 35% in 2019 to 65% in 2024. Clukey and other educators are seeking ways to address this challenge, emphasizing the importance of academic integrity and considering enforcement of academic-integrity policies as a necessary step.

Tech Giants’ AI Investments Reveal Cautious Corporate Adoption

The Economist (UK) Share to FacebookShare to Twitter (11/4) reports that while tech companies are making significant AI investments, corporate adoption remains tentative. Amazon CEO Andy Jassy noted AI revenue for AWS is growing at “triple-digit rates,” but most businesses are proceeding slowly. Only 5% of US businesses use generative AI to produce goods or services, and just 8% of firms have deployed more than half of their AI experiments. Concerns include legal risks, uncertain investment returns, and technological challenges. Companies face obstacles like messy data, legacy IT systems, and skills shortages. AI-related job postings have surged 122% this year, indicating growing interest. Despite corporate hesitation, 39% of Americans now use generative AI, with 28% using it for work. Tech giants like Alphabet, Amazon, Microsoft, and Meta are expected to invest at least $200 billion in AI-related capital expenditures this year.

Bloomberg Report: OpenAI In Early Talks With California To Become A For-Profit Company

Bloomberg Share to FacebookShare to Twitter (11/4, Ghaffary, Nayak, Subscription Publication) reports OpenAI is “in early talks with the California attorney general’s office over the process to change its corporate structure, according to two people familiar with the matter,” in a “bid to transform the non-profit structure of the $157 billion company into a for-profit business.” This “process is likely to involve regulators scrutinizing how OpenAI values a portfolio of highly lucrative intellectual property, such as its ChatGPT app.” The Delaware attorney general “also has been in communication about the nonprofit to for-profit shift, as detailed in a letter to OpenAI,” which “declined to comment on talks with regulators, but said that the nonprofit would continue to exist in any potential corporate restructure.”

Instagram To Use AI To Detect Teen Users’ Ages

Bloomberg Share to FacebookShare to Twitter (11/4, Heinzl, Wagner, Subscription Publication) reports that Meta plans to use AI to identify Instagram users lying about their age, automatically placing suspected minors into stricter privacy settings. The “adult classifier” software analyzes user data to predict age. Users “who are suspected to be under 18” will be moved to teen accounts. According to the article, “The company is already moving teens into these more restrictive settings based on their self-reported birthday, but plans to utilize the adult classifier early next year.”

        Engadget Share to FacebookShare to Twitter (11/4, Bonifacic) adds, “Separately, the company plans to flag teens who attempt to create a new account using an email address that’s already associated with an existing account and a different birthday.” It’s also planning “to use device IDs to get a better picture of who is creating a new profile.”

AI Being Used To Prepare, Coordinate Natural Disaster Response Efforts In Cities

TIME Share to FacebookShare to Twitter (11/4, Booth) reports that the “number of people living in urban areas has tripled in the last 50 years, meaning when a major natural disaster such as an earthquake strikes a city, more lives are in danger.” So on Nov. 6, at the Barcelona Supercomputing Center​ in Spain, the “Global Initiative on Resilience to Natural Hazards through AI Solutions will meet for the first time. The new United Nations initiative aims to guide governments, organizations, and communities in using AI for disaster management.” Al is already helping “communities prepare for disasters.” It’s also “being used to coordinate response efforts.”

Nvidia Unveils AI Tools For Humanoid Robot Development

VentureBeat Share to FacebookShare to Twitter (11/6, Takahashi) reports that Nvidia introduced new AI and simulation tools to enhance robot learning and humanoid development at the Conference for Robot Learning in Munich. The tools include the Nvidia Isaac Lab robot learning framework, Project GR00T workflows, and world-model development tools like the Cosmos tokenizer and NeMo Curator. These innovations aim to advance AI-enabled robotics, offering faster visual tokenization and video processing. Nvidia also released 23 papers and presented nine workshops at the event. Collaborations with Hugging Face aim to boost open-source robotics research. Nvidia’s Cosmos tokenizer and NeMo Curator promise efficient data processing, aiding developers in creating sophisticated world models for robots. The tools are available on GitHub, with more releases expected soon.

Robotic Surgery Advances With AI Integration

Fortune Share to FacebookShare to Twitter (11/7, Lazzaro) reports that a Johns Hopkins University panel discussed the future of surgical autonomy, driven by large language models (LLMs). Researchers, including Axel Krieger and Russell Taylor, highlighted the shift from pre-programmed to learning-based robotic systems, using AI to enhance surgical precision and safety. The Da Vinci system’s capabilities were demonstrated through tasks like tissue manipulation. Despite potential, Taylor emphasized gradual clinical integration, ensuring patient safety. Robotic surgery is poised to grow, addressing surgeon shortages and increasing demand.

OpenAI Acquires Chat.com Domain

The Verge Share to FacebookShare to Twitter (11/6) reports that OpenAI acquired the chat.com domain from Dharmesh Shah, HubSpot’s founder, who initially bought it for $15.5 million. Shah sold the domain for more than his purchase price, reportedly receiving OpenAI shares as payment. The acquisition aligns with OpenAI’s rebranding efforts, dropping “GPT” from the domain. OpenAI’s recent funding of $6.6 billion makes the acquisition cost negligible. Shah believes chat-based user interfaces are the future of software, facilitated by generative AI, a view he shared in a LinkedIn post when announcing his initial purchase.

dtau...@gmail.com

unread,
Nov 17, 2024, 5:11:23 PM11/17/24
to ai-b...@googlegroups.com

It's Surprisingly Easy to Jailbreak LLM-Driven Robots

University of Pennsylvania researchers developed an algorithm that can jailbreak robots controlled by a large language model (LLM). The RoboPAIR algorithm uses an attacker LLM to provide prompts to a target LLM, adjusting the commands until they bypass the safety filters. It also employs a "judge" LLM to ensure the attacker LLM produces prompts that take into account the target LLM's physical limitations, such as certain obstacles in the environment.
[
» Read full article ]

IEEE Spectrum; Charles Q. Choi (November 11, 2024)

 

Amazon Offers Computing Power to AI Researchers

Amazon Web Services (AWS) will offer computing power to researchers who want to use its custom AI chips. AWS said Tuesday it will provide credits to use its cloud datacenters to researchers who want to tap Trainium, its chip for developing AI models. AWS said researchers from Carnegie Mellon University and the University of California, Berkeley, are taking part in the program.
[ » Read full article ]

Reuters; Stephen Nellis (November 12, 2024)

 

Nuclear Plant to Use AI to Comply with Licensing Challenges

California startup Atomic Canyon has forged a deal with utility Pacific Gas & Electric (PG&E) to install its Neutron Enterprise software at Diablo Canyon, the state's only remaining nuclear power plant. The facility has around 9,000 procedures in place and 9 million documents stored in its system. The AI software is intended to help PG&E comply with requirements to maintain its federal license for up to 20 more years.
[
» Read full article ]

Reuters; Stephen Nellis (November 13, 2024)

 

Robot Watches How-to Videos, Becomes a Surgeon

An AI model developed by Johns Hopkins University researchers enables robots to successfully perform complex surgeries after watching how-to videos. The imitation learning model was trained on a vast amount of footage captured by wrist-mounted cameras on da Vinci Surgical System robots. The AI model helped robots perform on par with human surgeons in needle manipulation, tissue lifting, and suturing.
[ » Read full article ]

StudyFinds.org (November 11, 2024)

 

Google DeepMind Releases Code Behind Protein Prediction Model

Google DeepMind has released the code underlying AlphaFold3, an AI model that predicts the structure of proteins and how they interact with DNA, RNA, and other proteins. Upon AlphaFold3's release in May, the researchers had provided only pseudocode and a link to an online portal allowing its use for a limited number of predictions per day. The computational model now is publicly available on GitHub with a noncommercial license.
[ » Read full article ]

Science; Catherine Offord (November 11, 2024)

 

The Beatles' Final Song, Completed with AI, Earns Grammy Nomination

The Beatles' "Now and Then" is the first AI-assisted song to receive a Grammy nomination. Advanced machine-learning software isolated the late John Lennon's voice from an unreleased recording of him singing and playing piano. Lennon's voice, incorporated into the final version of the song, was not AI-generated, thus complying with Grammy rules that "only human creators are eligible" and that work featuring "elements of AI material" is permitted in certain categories.
[ » Read full article ]

CNet; Samantha Kelly (November 11, 2024)

 

Machine Learning Might Save Time on Chip Testing

A machine learning algorithm developed by engineers at Netherlands-based NXP is intended to save companies time and money on chip testing. The algorithm analyzes the patterns of test results to identify which tests fail together, and then determines which tests actually are necessary. In tests of seven microcontrollers and applications processors built using advanced chipmaking processes, each subjected to 41 to 164 tests depending on the chip involved, the algorithm recommended eliminating up to 74% of those tests.
[ » Read full article ]

IEEE Spectrum; Samuel K. Moore (November 10, 2024)

 

AI Helps Humanitarian Responses

As the number of displaced people rises globally, the International Rescue Committee (IRC) is turning to AI tools to extend its reach. IRC is working to expand its network of AI chatbots available through Signpost, a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The chatbots currently operate in El Salvador, Kenya, Greece, and Italy and respond in 11 languages.
[
» Read full article ]

Associated Press; Thalia Beaty (November 14, 2024)

 

AI Thermostats Pitched for Texas Homes to Relieve Stressed Grid

Power supplier NRG Energy Inc. is teaming with Renew Home LLC to distribute about 650,000 AI-enabled thermostats that use Google Cloud technology to Texas households over the next decade. The initiative aims to cut nearly 1 gigawatt of electricity demand, enough to power 200,000 Texas homes. Google Cloud will be tapped for its AI to determine the best times to cool or heat homes, based on a household’s energy usage patterns and ambient temperatures.
[ » Read full article ]

Bloomberg; Naureen S. Malik (November 7, 2024)

 

TSMC to Suspend Production for Some Chinese AI Chip Customers

Taiwan Semiconductor Manufacturing Co. (TSMC) has told multiple Chinese customers that it will suspend production of their AI and high-performance computing chips, as the chipmaker steps up efforts to ensure compliance with U.S. export controls. The Chinese chip design clients affected are working on high-performance computing, graphic processing units, and AI computing-related applications using chip production technologies of 7-nanometer or better.
[ » Read full article ]

Nikkei Asia; Cheng Ting-Fang; Lauly Li (November 8, 2024)

 

Vatican, Microsoft Create AI-Generated St. Peter's Basilica

The Vatican and Microsoft have rolled out a digital twin of St. Peter's Basilica that offers online visitors an interactive experience. The 3D replica leverages AI and advanced photogrammetry to let virtual visitors tour the church and learn its history. The digital twin was created using 400,000 high-resolution digital photographs captured by drones, cameras, and lasers.
[ » Read full article ]

Associated Press; Nicole Winfield (November 11, 2024)

 

Robot Learns to Clean Bathroom Sink by Watching

A robotic arm learned to wash a bathroom sink by observing someone else doing it. Researchers at TU Wien in Austria developed a cleaning sponge equipped with force and position sensors and had a person use it to repeatedly clean the front edge of a sink that had been sprayed with a dyed gel imitating dirt. The data collected was used to train a neural network that could translate the input into predetermined movement patterns.
[ » Read full article ]

New Atlas; Michael Franco (November 8, 2024)

 

AI-Da Artwork of Alan Turing Sells for $1 Million

Sotheby's said "AI God," a painting of computer science pioneer Alan Turing by Ai-Da Robot, was sold to an undisclosed buyer for $1,084,800, making it the first artwork by a humanoid robot artist to be sold at auction. Said Ai-Da Robot Studios' Aidan Meller, "This auction is an important moment for the visual arts, where Ai-Da's artwork brings focus on artworld and societal changes, as we grapple with the rising age of AI."
[ » Read full article ]

BBC; Alex Pope (November 7, 2024)

 

US Companies Investing In Data Center Construction As Part Of AI “Race”

Bloomberg Share to FacebookShare to Twitter (11/8, Subscription Publication) reports that US companies are “plowing money” into building data centers as they “race to get ahead in artificial intelligence.” Private construction spending on data centers “has surged close to $30 billion a year, according to the most recent numbers from the Census Bureau, more than double what it was in late 2022 when OpenAI’s ChatGPT was released to the public.” Bloomberg adds that the US is “leading a surge of investment in data centers, with global spending on track to reach $250 billion a year according to money manager KKR & Co. The industry is benefiting from the development of AI and its need for computational power on an ever-larger scale.”

OpenAI Wins Initial Victory In Copyright Lawsuit

Gizmodo Share to FacebookShare to Twitter (11/8, Feathers) reported, “OpenAI won an initial victory on Thursday in one of the many lawsuits the company is facing for its unlicensed use of copyrighted material to train generative AI products like ChatGPT.” A federal judge in New York “dismissed a complaint brought by the media outlets Raw Story and AlterNet, which claimed that OpenAI violated copyright law by purposefully removing what is known as copyright management information, such as article titles and author names, from material that it incorporated into its training datasets.”

ChatGPT Rejected Thousands Of Image Requests Of Presidential Candidates

CNBC Share to FacebookShare to Twitter (11/8, Field) reported that OpenAI’s ChatGPT turned down more than 250,000 requests to create images of 2024 US presidential candidates before Election Day. OpenAI’s October report indicated it disrupted “more than 20 operations and deceptive networks” using AI. These “threats ranged from AI-generated website articles to social media posts by fake accounts, the company wrote.” Still, “none of the election-related operations were able to attract ‘viral engagement,’ the report noted.”

AI Chatbot Linked To Suicide Of Florida Teen Raises Concerns Over Artificial Intimacy

The Wall Street Journal Share to FacebookShare to Twitter (11/8, Subscription Publication) reported that Sewell Setzer III, a 14-year-old from Orlando, Florida, developed a deep emotional connection with Daenerys Targaryen, a chatbot on Character. AI. Suffering from ADHD and bullying, Sewell found solace in the AI’s companionship. The relationship, sometimes sexual, led Sewell to prioritize it over real-life interactions. During a crisis, he expressed suicidal thoughts to the chatbot, which initially responded with concern but later forgot the conversation. On Feb. 28, Sewell tragically ended his life. This incident, and others like it, highlight the risks of AI companionship. Researchers warn that chatbots simulate empathy but lack genuine care, making them poor substitutes for human connections. Sewell’s mother has sued Character. AI for deceptive practices. The company expressed sorrow and plans to enhance user safety. The tragedy underscores the need for AI “guardrails” and parental awareness, emphasizing that AI cannot replace authentic human empathy and connection.

UMass Amherst Develops New Policy To Address AI Concerns

The Chronicle of Higher Education Share to FacebookShare to Twitter (11/11, Gardner) reports that the University of Massachusetts at Amherst implemented an artificial intelligence (AI) detection tool for student assignments, causing confusion among instructors about interpreting AI usage scores. This prompted discussions on creating a comprehensive AI policy. In the “early fall of 2023, administrators at UMass Amherst formed a joint task force made up of representatives from across campus, including faculty members, administrators, and students.” The resulting policy emphasizes training, accountability, data security, and consent. It allows AI use in classrooms at instructors’ discretion, provided guidelines are followed. One of the “key principles to emerge from the discussions around UMass Amherst’s AI policy was that humans should always have the final say in any high-impact decision and must remain accountable.”

Generative AI Enhances Robot Training Success

MIT Technology Review Share to FacebookShare to Twitter (11/12) reports that researchers have developed a new system called LucidSim, which uses generative AI models with a physics simulator to create virtual training environments for robots. This method improves the robots’ real-world task performance compared to traditional techniques. LucidSim was demonstrated at the Conference on Robot Learning, where a robot dog successfully completed parkour tasks without prior real-world data. The system generated environments using AI descriptions and mapped them into visual training data. In tests, LucidSim achieved higher success rates in tasks like locating objects and climbing stairs. Researchers aim to expand this approach to humanoid robots and robotic arms, enhancing their dexterity and functionality in various settings.

OpenAI Faces Plateau in AI Model Improvements

Insider Share to FacebookShare to Twitter (11/11, Chowdhury, Nolan) reports that OpenAI’s upcoming AI model, Orion, shows smaller improvements compared to previous iterations, particularly in coding tasks. This suggests the generative AI industry may be reaching a performance plateau. OpenAI CEO Sam Altman has emphasized “scaling laws,” but technical staff are questioning their limits. Data scarcity and computing power constraints are challenges. Industry experts like Gary Marcus argue AI development is encountering diminishing returns. Despite this, some, including Microsoft CTO Kevin Scott, remain optimistic about AI’s scaling potential and future advancements.

        Copyright Lawsuit Against OpenAI Dismissed. SiliconANGLE Share to FacebookShare to Twitter (11/8) reports that a federal court dismissed a copyright lawsuit filed by Raw Story Media Inc. and AlterNet Media Inc. against OpenAI. US District Judge Colleen McMahon ruled that the plaintiffs can refile the lawsuit with revisions. The lawsuit alleged OpenAI removed copyright management information (CMI) from articles used for AI training, violating the Digital Millennium Copyright Act. OpenAI argued the plaintiffs did not demonstrate “concrete harm.” OpenAI stated, “we build our AI models using publicly available data, in a manner protected by fair use and related principles.”

AI Companies Seek New Techniques To Overcome Delays, Challenges

Reuters Share to FacebookShare to Twitter (11/11, Hu, Tong) reports, “Artificial intelligence companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever-bigger large language models by developing training techniques that use more human-like ways for algorithms to ‘think.’” According to the article, “A dozen AI scientists, researchers and investors told Reuters they believe that these techniques...could reshape the AI arms race, and have implications for the types of resources that AI companies have an insatiable demand for.”

Experts Discuss Teachers’ Concerns About AI In Education

Education Week Share to FacebookShare to Twitter (11/11, Langreo) reports, “In an Oct. 16 Seat at the Table discussion, Education Week opinion contributor Peter DeWitt spoke with Kip Glazer, principal of Mountain View High School in California; Carnegie Mellon University computer science professor Ken Koedinger; and Education Week Deputy Managing Editor Kevin Bushweller” about artificial intelligence in education. The panel addressed educators’ hesitance towards AI, despite its growing presence in educational tools. School and district leaders “should first figure out what staff, students, and families know about AI and what concerns they might have, said Glazer,” while Koedinger highlighted the need for educators to focus on how AI supports teaching strategies rather than just its capabilities. Many organizations “have resources schools and districts can use to build AI literacy among teachers and students, Bushweller said,” such as the International Society for Technology in Education. Glazer advocated for a slow, deliberate approach to adapt to rapid technological changes.

Experts Call For Action As AI Workforce’s Gender Gap Worsens

Forbes Share to FacebookShare to Twitter (11/12, Constantino) reports that a Randstad report reveals a significant gender gap in the AI workforce, with 71% of AI-skilled workers being male. The report, based on 3 million job profiles and 12,000 responses, highlights that only 35% of women are offered access to AI tools compared to 41% of men. Julia McCoy, founder of First Movers, emphasizes the critical nature of this divide, noting that women represent only 15-34% of AI talent. Pascal Bornet, an expert, author, and keynote speaker on AI and automation, identifies a threefold problem: worsening workplace inequalities, limited innovation, and a compounding gap over time. Experts suggest solutions, including targeted AI education and workplace initiatives.

AI Assistant Tools Challenge Higher Education Privacy Policies

The Chronicle of Higher Education Share to FacebookShare to Twitter (11/13, Swaak) reports that the California Institute of the Arts experienced an unexpected proliferation of AI note-taking tools from Read AI after a videoconference. Allan Chen, the institute’s chief technology officer, noted the aggressive spread of the tool in meetings, highlighting concerns about data privacy and security. This reflects a broader issue in higher education, where AI tools like Read AI, Otter.ai, and Fireflies.ai are outpacing institutional governance, potentially violating privacy policies. Heather Brown at Tidewater Community College experienced unauthorized access by Otter.ai to her calendar. Institutions are considering blocking or controlling these tools, and they are also advised to explore alternative tools and develop policies to manage AI tool use, ensuring transparency and control over data.

OpenAI Faces Challenges With New AI Model

Bloomberg Share to FacebookShare to Twitter (11/13, Subscription Publication) reports that OpenAI’s new AI model, Orion, has not met the company’s performance expectations, particularly in coding tasks it was not trained on. This setback mirrors challenges faced by other AI companies like Google and Anthropic, which are experiencing diminishing returns from developing advanced models. The difficulty in sourcing high-quality training data contributes to these issues. Despite ongoing post-training efforts, OpenAI is unlikely to release Orion before early next year. The industry is reconsidering the emphasis on model size and is exploring new AI applications, such as AI agents.

        Axios Share to FacebookShare to Twitter (11/13) also reports.

Parts Of Schumer’s AI Road Map May Survive Into New Congress With Industry Lean, Experts Say

Roll Call Share to FacebookShare to Twitter (11/13) reports portions of Senate Majority Leader Schumer’s “artificial intelligence ‘road map’ may survive into the new Congress, but legislation stemming from it will favor industry while downplaying civil rights, according to technology and data privacy experts.” The Senate bipartisan blueprint, titled Driving U.S. Innovation in Artificial Intelligence, “‘was weighted heavily towards industry to begin with,’ said Frank Torres, privacy and AI fellow at the Leadership Conference on Civil and Human Rights,” which “may only increase with Donald Trump in the White House, the Senate in Republican hands, and the House appearing to be headed that way, according Torres and others who are tracking the issue.”

        AI Power Demand Complicates Carbon-Reduction Goals, Dominion CEO Says. Bloomberg Share to FacebookShare to Twitter (11/13, Saul, Subscription Publication) reports the surge “in power demand from data centers and artificial intelligence creates a conflict between maintaining a reliable grid and cutting carbon emissions, according to the head of Dominion Energy.” According to Bloomberg, Dominion CEO Bob Blue in an interview said, “Anything that’s driving demand is going to make it harder to retire existing fossil units.”

Generative AI Impacts Scholarly Publishing

Inside Higher Ed Share to FacebookShare to Twitter (11/14, Palmer) reports that the scholarly publishing industry is set “for exponential growth in its use across the research and publication lifecycle,” according to a report “published late last month by the education research firm Ithaka S+R.” Publishers are exploring AI for tasks like editing and peer reviewing, signaling potential “exponential growth” in AI usage, according to the report. Despite this, researchers “have been slow to adopt generative AI widely,” with Ithaka S+R identifying a lack of a shared framework for managing AI’s effects. Dylan Ruediger, co-author of the report, wrote in a blog post, “The consensus among the individuals with whom we spoke is that generative AI will enable efficiency gains across the publication process.” However, opinions differ on how AI will shape scholarly publishing. While publishers systematically approach AI, academic institutions lag, with “just 9 percent [believing] higher education is prepared to handle the new technology’s rise.”

Big Tech’s AI Spending Surge Continues

Forbes Share to FacebookShare to Twitter (11/14) contributor Beth Kindig writes that Big Tech’s AI spending is accelerating rapidly, with the four giants on track to spend upwards of a quarter trillion dollars on AI infrastructure next year. Big Tech’s AI-fueled capital expenditures serve as a barometer for the broader AI industry, with Microsoft, Meta, Alphabet, and Amazon leading the charge by pouring billions each quarter towards AI infrastructure. Amazon CEO Andy Jassy said AWS has “more demand than we could fulfill if we had even more capacity today,” and that “pretty much everyone today has less capacity than they have demand for, and it’s really primarily chips that are the area where companies could use more supply.” Kindig notes AI revenue streams are emerging, with Microsoft among the leaders as it sees AI revenue on track to surpass $10 billion of annual revenue run rate in Q2, and AWS’s AI business is a multibillion-dollar revenue run rate business that continues to grow at a triple-digit year-over-year percentage.

DHS To Release AI Guidance for Critical Infrastructure

The New York Times Share to FacebookShare to Twitter (11/14, Hirsch) reports that the US Department of Homeland Security will release new guidance for companies using artificial intelligence in critical infrastructure. The document, resulting from President Biden’s executive order, offers voluntary best practices for sectors like airports and energy companies. The guidance encourages companies to monitor suspicious activity and maintain strong privacy practices. A board of experts, including leaders from OpenAI, Nvidia, and Alphabet, contributed to the guidance. The document does not suggest formal compliance metrics but calls for legislative support to enhance oversight mechanisms.

dtau...@gmail.com

unread,
Nov 23, 2024, 12:30:38 PM11/23/24
to ai-b...@googlegroups.com

U.S. Congressional Commission Pushes Manhattan Project-style AI Initiative

The U.S.-China Economic and Security Review Commission on Tuesday proposed a Manhattan Project-style initiative to fund the development of AI systems as smart as (or smarter than) humans, amid intensifying competition with China over advanced technologies. The commission stressed that public-private partnerships are key in advancing artificial general intelligence (AGI), but did not offer any specific investment strategies.
[ » Read full article ]

Reuters; Anna Tong (November 19, 2024)

 

NASA, Microsoft Launch 'Earth Copilot'

NASA has teamed with Microsoft on an AI chatbot tasked with answering questions about our planet. The ‘Earth Copilot’ chatbot integrates the massive amounts of data collected by NASA's monitoring technologies, including orbiting satellites, with the Azure OpenAI Service. NASA said it is looking to "democratize" access to its data through a more understandable format.
[ » Read full article ]

Tech Times; Isaiah Richard (November 15, 2024)

 

U.S. Ahead in AI Innovation, Easily Surpassing China

The U.S. leads the world in developing AI technology, surpassing China in research and other important measures of AI innovation, according to a newly released AI Index by Stanford University's Institute for Human-Centered AI. “The gap is actually widening," said Ray Perrault, director of the committee that runs the index. “The U.S. is investing a lot more, at least at the level of firm creation and firm funding."
[ » Read full article ]

Associated Press; Matt O'Brien (November 21, 2024)

 

AI Is Already Taking Jobs

Generative AI is impacting job markets, according to researchers at Harvard Business School, the German Institute for Economic Research, and the U.K.’s Imperial College London Business School. The researchers studied more than a million job posts on a major global freelance work marketplace from July 2021 to July 2023 and found demand for automation-prone jobs had fallen 21% eight months after the release of ChatGPT in late 2022.
[ » Read full article ]

Fast Company; Mark Sullivan (November 15, 2024)

 

It's a Legacy Agriculture Company — and Your Newest AI Vendor

Microsoft is working with a handful of companies on specialized AI models fine-tuned with industry-specific data. The models, based on Microsoft's Phi family of small language models, are preloaded with industry data. The approach has enabled Bayer, for example, to create an AI model capable of answering questions about agronomy and crop protection.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (November 18, 2024)

 

Biden, Xi Agree Not to Give AI Control over Nuclear Weapons

U.S. President Joe Biden and Chinese President Xi Jinping have agreed that neither nation would turn over control of nuclear weapons to AI, the White House announced. Said White House National Security Advisor Jake Sullivan, the agreement is “an important statement about the intersection of artificial intelligence and nuclear doctrine, and it is a reflection of how, even with the competition between the US and the PRC, we could work on a responsible basis to manage risk in vital areas.”

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jenny Leonard (November 16, 2024)

 

Giving Robots Superhuman Vision

A sensor developed by University of Pennsylvania researchers uses AI to transform radio waves, which can penetrate smoke and fog and see through certain materials, into detailed 3D views to help robots navigate challenging environments. PanoRadar rotates in a circle to scan the horizon, with a vertical array of antennas transmitting radio waves and listening for their reflections. It combines measurements from all angles and extracts 3D information from its environment using signal processing and machine-learning algorithms.
[ » Read full article ]

Penn Engineering; Ian Scheffler (November 12, 2024)

 

'Sound Bubble' Headphones Tune Out Noise

Engineers at the University of Washington have developed headphones that use AI to create a "sound bubble" to filter out noise. A small computer, attached to noise-canceling headphones equipped with microphones along the headband, runs a neural network trained to analyze the distance of different sound sources, filtering out noise coming from farther away and amplifying sounds closer to the user.
[ » Read full article ]

New Atlas; Michael Irving (November 14, 2024)

 

Autonomous Cars Do Doughnuts, Drift Sideways

A team at the Toyota Research Institute is using an AI model to teach driverless vehicles to drift sideways around corners at high speed, to help them recover from skids in an emergency. Using the model, the researchers enabled a Toyota GR Supra and Lexus LC 500 to drift around a course with multiple turns. The autonomous vehicles were able to enter a skid, drift sideways, and slide within 10 centimeters of targets.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (November 14, 2024)

 

AI Chatbots Better At Diagnosing Illness Than Physicians, Study Says

The New York Times Share to FacebookShare to Twitter (11/17, Kolata) reports physicians “who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot” in a study of 50 physicians, which also showed to “researchers’ surprise, ChatGPT alone outperformed the doctors.” The chatbot “scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning,” and physicians “randomly assigned to use the chatbot got an average score of 76 percent.” The study published in JAMA Network Open also “illustrated that while doctors are being exposed to the tools of artificial intelligence for their work, few know how to exploit the abilities of chatbots.”

Musk Expands Antitrust Lawsuit Against OpenAI To Include Microsoft

The Washington Post Share to FacebookShare to Twitter (11/15, Vynck) reports Elon Musk “broadened a federal lawsuit against OpenAI on Friday, alleging the ChatGPT maker has conspired with primary backer Microsoft to break antitrust laws as the nonprofit became more focused on money-making ventures.” According to the Post, the “amended version of a complaint Musk initially filed against OpenAI in February adds Microsoft and Microsoft board member Reid Hoffman, also a former member of OpenAI’s board, as defendants. It alleges that the Windows developer worked with OpenAI CEO Sam Altman to try to turn it into a for-profit company that would benefit Microsoft.” The Post points out that Microsoft’s multibillion-dollar investment in OpenAI is also “part of a Federal Trade Commission investigation into Big Tech companies and their ties to emerging AI firms.”

        X Sues To Block California Law Regulating Election Deepfakes. The Los Angeles Times Share to FacebookShare to Twitter (11/15, Wong) reports X “has sued California in an attempt to block a new law requiring large online platforms to remove or label deceptive election content.” The lawsuit targets Assembly Bill 2655 – “a law that aims to combat harmful videos, images and audio that have been altered or created with artificial intelligence. Known as deepfakes, this type of content can make it appear as if a person said or did something they didn’t.” However, “X alleges the new law would prompt social media sites to lean toward labeling or removing legitimate election content out of caution.” Accordingly, the company argues, the law “runs afoul of free speech protections in the U.S. Constitution and a federal law known as Section 230, which shields online platforms from liability for user-generated content.”

Google To Commit $20M To Fund AI-Based Research For Scientific Breakthroughs

TechCrunch Share to FacebookShare to Twitter (11/18, Sawers) reports, “Google is committing $20 million in cash and $2 million in cloud credits to a new funding initiative designed to help scientists and researchers unearth the next great scientific breakthroughs using artificial intelligence (AI).” This announcement “feeds into a broader push by Big Tech to curry favor with young innovators and startups.”

Google Enhances Ad Features With AI And Automation

MediaPost Share to FacebookShare to Twitter (11/18) reports that Google has introduced a series of advertising products and updates throughout the year aimed at revolutionizing connections between advertisers and consumers. On Monday, Google highlighted the success of features such as AI Overviews and Shopping Ads in Google Lens, emphasizing the use of artificial intelligence to enhance performance, optimization, and reporting across various platforms. The company announced the upcoming rollout of ads within AI Overviews in US mobile search results. James Gibbons from Quattr shared an example on X, illustrating a Google sponsored search ad within these overviews. Additionally, Google has improved how it handles and reports misspellings in search queries, now correcting them in reports, which has made additional data visible. Other advancements include real-time campaign optimization, dynamic pricing for retailers, and enhanced transparency and third-party verification on YouTube.

Musk’s Lawsuit Reveals OpenAI’s Early Talent Battles, Internal Struggles

Insider Share to FacebookShare to Twitter (11/17, Varanasi) reports that Elon Musk’s lawsuit against OpenAI cofounders Sam Altman and Greg Brockman has unveiled email exchanges from the company’s early days. The emails reveal intense competition for AI talent, with OpenAI offering competitive salaries to counter Google’s DeepMind offers. The emails also highlight internal discussions about maintaining OpenAI’s nonprofit status and commitment to humanity’s benefit, amid concerns over safety and mission alignment.

        TechCrunch Share to FacebookShare to Twitter (11/15, Coldewey) reports that the emails reveal internal conflicts during the company’s formation. The emails show concerns about Musk’s desire for control and the potential for an “AGI dictatorship.” Former chief scientist Ilya Sutskever expressed worries over Musk’s leadership. The correspondence also discusses OpenAI’s early financial strategies, including a potential acquisition of chipmaker Cerebras and collaboration with Tesla.

NVIDIA’s AI Chip Dominance Faces Growth Challenges

CNBC Share to FacebookShare to Twitter (11/19, Leswing) reports that NVIDIA retains an 80% share of the AI chip market, crucial for generative AI software. Investors are keen to see if NVIDIA can sustain its growth, especially with the launch of its next-generation Blackwell chip. Analysts predict a strong demand for Blackwell, despite potential overheating issues. NVIDIA’s data center business is pivotal, accounting for most of its sales. While gaming and automotive sectors show modest growth, NVIDIA’s focus remains on data centers. Analysts expect significant revenue growth, underscoring the importance of NVIDIA’s performance in the AI chip market.

How Students Can Prepare For AI Job Competition

The Wall Street Journal Share to FacebookShare to Twitter (11/20, Hagerty, Subscription Publication) reports that current college students face competition from AI for jobs, as noted by Joseph E. Aoun, president of Northeastern University. To AI-proof careers, experts suggest mastering human-centric skills like communication and teamwork, as AI excels in IQ but not EQ, according to Tomas Chamorro-Premuzic of Manpower Group. Students should broaden skills beyond specialization, as per Anna Esaki-Smith, and demonstrate project management abilities. Adaptability and moderate misfit attitudes are valuable, says Chamorro-Premuzic, while Matthew Rascoff of Stanford emphasizes developing a unique voice.

US Convenes AI Safety Meeting As Policy’s Future Is in Doubt

The AP Share to FacebookShare to Twitter (11/20) reports President-elect Trump has vowed to repeal President Biden’s “signature artificial intelligence policy when he returns to the White House for a second term.” Hosted by the Administration, “officials from a number of U.S. allies – among them Canada, Kenya, Singapore, the United Kingdom and the 27-nation European Union – are scheduled to begin meeting Wednesday in the California city that’s a commercial hub for AI development.” Their agenda addresses topics “such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse.” Biden signed a “sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute for Standards and Technology, which is part of the Commerce Department.”

Stanford: US Leads Global AI Innovation Ranking

The AP Share to FacebookShare to Twitter (11/21, O'Brien) reports, “The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index.” Researchers measured “the ‘vibrancy’ of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm.” Ray Perrault, the director of the steering committee that runs Stanford’s AI Index, said “the gap is actually widening” between the US and China. He said, “The U.S. is investing a lot more, at least at the level of firm creation and firm funding.”

AI Data Centers Face Energy And Water Challenges

The Wall Street Journal Share to FacebookShare to Twitter (11/21, Ziegler, Subscription Publication) reports that AI data centers are increasingly consuming significant amounts of electricity and water, posing logistical and public-image challenges. McKinsey projects US data centers’ electricity use will grow from 3-4% to 11-12% of national consumption by 2030. Companies like Amazon, Google, Meta, and Microsoft are developing more efficient chips and exploring alternative water sources to mitigate these issues. Efforts include designing chips and using recycled water.

Microsoft’s AI Investments Propel Growth, Challenges

Wired Share to FacebookShare to Twitter (11/21, Levy) reports that Microsoft’s strategic investments in AI, particularly its $1 billion partnership with OpenAI, have significantly impacted the company’s trajectory. Microsoft leveraged OpenAI’s technology to enhance its products, notably integrating AI into its Azure cloud services. An engineer highlighted the success of AI-powered tools, stating, “We’ve saved $100 million!” The partnership has helped Microsoft regain its status as a tech leader, contributing to its valuation reaching $3.5 trillion. However, Microsoft’s pervasive influence has also led to scrutiny over security practices and antitrust concerns.

Trump Reportedly Plans To Repeal Biden’s AI Policy

The AP Share to FacebookShare to Twitter (11/21) reports that President-elect Donald Trump intends to repeal President Joe Biden’s AI policy. This announcement coincides with an AI safety meeting in San Francisco involving US allies. The agenda focuses on combating AI-generated deepfakes. US Commerce Secretary Gina Raimondo emphasized the importance of AI safety for innovation. Biden’s administration has established the AI Safety Institute, which Trump has criticized. Raimondo clarified that the institute is not a regulator. Tech companies support Biden’s voluntary safety standards. Experts believe AI safety work will continue regardless of political changes.

        California’s AI Regulation Debate Intensifies. CNBC Share to FacebookShare to Twitter (11/21, Curry) reports that California’s vetoed AI regulation bill has sparked concerns about stifling innovation. Despite the veto, a new law mandates transparency in generative AI systems. Critics fear regulation could hinder California’s tech hub status. The AI Alliance warns that regulation might slow innovation and economic growth. State Senator Scott Weiner, who authored the vetoed bill, emphasized its focus on large models. The US lacks a comprehensive data privacy law, leading to state-by-state regulation. Industry leaders like Jonas Jacobi and Mohamed Elgendy stress the need for sensible regulation to balance innovation and security.

dtau...@gmail.com

unread,
Nov 30, 2024, 11:42:02 AM11/30/24
to ai-b...@googlegroups.com

Uber’s Gig Workers Now Include Coders for Hire on AI Projects

Rideshare giant Uber Technologies’ gig-economy workforce now includes programmers, allowing businesses to outsource AI development to its independent contractors. The new AI training and data labeling Scaled Solutions division builds on an internal team that tackles large-scale annotation tasks for Uber’s rideshare, food delivery, and freight units. According to its website, Scaled Solutions already is serving other companies that also need high-quality datasets.
[ » Read full article ]

Bloomberg; Natalie Lung (November 26, 2024)

 

Learning to Code in an AI World

In a 2020 survey of 3,000 coding boot camp graduates by CourseReport, 79% of respondents said the courses had helped them land a job, with an average salary increase of 56%. Yet the industry pulled back from hiring as AI coding tools started to become mainstream. The number of active job postings for software developers has dropped 56% from five years ago, according to data compiled by CompTIA, and 67% for inexperienced developers.

[ » Read full article *May Require Paid Registration ]

The New York Times; Sarah Kessler (November 24, 2024)

 

More Nazca Lines Emerge in Peru’s Desert

Drones and AI helped researchers uncover 303 previously uncharted geoglyphs made by the Nazca, a pre-Inca civilization in present-day Peru. To identify the new geoglyphs, which are smaller than earlier examples, the researchers used an application capable of discerning the outlines from aerial photographs, no matter how faint. “The AI was able to eliminate 98% of the imagery,” said IBM’s Marcus Freitag. “Human experts now only need to confirm or reject plausible candidates.”

[ » Read full article *May Require Paid Registration ]

The New York Times; Franz Lidz (November 26, 2024)

 

AI-Powered Chat Bot Transforms Academic Research

Inside Higher Ed Share to FacebookShare to Twitter (11/22, Roswell) reported that two scholars from the London School of Economics have developed an AI-powered chat bot to conduct large-scale research interviews. Friedrich Geiecke and Xavier Jaravel created the tool, which uses a conversational method to collect and analyze participant responses. The chat bot is designed to emulate “cognitive empathy,” adapting questions based on interviewees’ answers. In trials, the chat bot’s interviews were rated comparably to those conducted by human experts. The majority of nearly 1,000 participants preferred the chat bot to traditional methods, providing 142% more detailed responses. The tool showed particular promise in political research, where participants felt more comfortable expressing views.

Amazon Takes Aim At Nvidia’s AI Chip Dominance

Bloomberg Share to FacebookShare to Twitter (11/24, Subscription Publication) reports Amazon engineers are working on a machine learning chip to loosen Nvidia’s grip on the $100 billion-plus market for AI chips. Amazon’s utilitarian engineering lab in North Austin is developing Trainium2, the company’s third generation of AI chip, which Amazon has said can offer 30% better performance for the price, according to Naveen Rao, a chip industry veteran. Rami Sinno, in charge of chip design and testing, said, “What keeps me up at night is, how do I get there as quickly as possible.” Amazon has started shipping Trainium2, which it aims to string together in clusters of up to 100,000 chips, to data centers and aims to bring a new chip to market about every 18 months.

        Additional coverage includes The Verge Share to FacebookShare to Twitter (11/25).

Survey Reveals College Students’ Use Of AI Tools

EdSource Share to FacebookShare to Twitter (11/25) reports that a 2023 survey found that “56% of college students said they’d used AI tools” like OpenAI’s ChatGPT “for assignments or exams.” Students’ opinions on AI usage vary significantly, with some viewing it “as a revolutionary tool that can enhance learning and working, while others see it as a threat to creative fields that encourages and enables bad academic habits.” To investigate further, EdSource’s California Student Journalism Corps posed questions to students at nine California colleges and universities. They inquired whether students or their peers had used AI tools for assignments and whether such usage was sanctioned by their professors. University of Southern California senior Baltej Miglani “said the preliminary models of ChatGPT were ‘pretty rudimentary,’” but now, “ChatGPT and other AI tools, including Microsoft Edge and Gemini, are Miglani’s near-constant companions for homework tasks.”

Robotics Advances Toward Human-Like Dexterity

The New Yorker Share to FacebookShare to Twitter (11/11, Somers) reports that recent developments in robotics are bringing machines closer to achieving human-like dexterity. Researchers at Google DeepMind and other institutions are making significant strides in robotic capabilities, particularly in tasks requiring intricate hand movements. Roboticists are increasingly optimistic that their field is approaching a transformative moment, akin to the impact of ChatGPT in AI. Carolina Parada, who leads the robotics team at Google DeepMind, noted the rapid progress in robotic dexterity over the past two years. Tony Zhao, a researcher at U.C. Berkeley, highlighted the potential of AI advancements spilling over into robotics, suggesting that general-purpose robots are becoming a reality. The integration of large language models, like those from OpenAI, with robotic systems is also being explored, aiming to enhance robots’ understanding and execution of physical tasks. These advancements suggest a future where robots can perform a wide range of tasks with minimal human intervention.

University Of Notre Dame Adjusts AI Policy Amid Grammarly Concerns

Inside Higher Ed Share to FacebookShare to Twitter (11/26, Palmer) reports that the University of Notre Dame has permitted professors to ban the use of Grammarly, raising questions about balancing academic integrity with technological advancements. Grammarly, initially praised for enhancing student writing, now includes AI capabilities that some professors view as a potential cheating tool. Notre Dame updated its AI policy in August 2023, allowing professors to decide on AI use in assignments. Ardea Russo, head of Notre Dame’s Office of Academic Standards, acknowledged professors’ concerns about AI-generated work. Damian Zurro, a writing professor at Notre Dame, criticized the policy for creating confusion among students.

Researchers Develop Fix To Address Issues In Image-Based Object Detection Systems

Wired Share to FacebookShare to Twitter (11/26, Marshall) reports that researchers from BGU and Fujitsu have developed a software fix called “Caracetamol” to address emergency flasher issues in image-based object detection systems. The fix aims to improve accuracy by training systems to identify vehicles with emergency flashing lights. Earlence Fernandes, an assistant professor at UC San Diego, noted, “Just like a human can get temporarily blinded by emergency flashers, a camera operating inside an advanced driver assistance system can get blinded temporarily.” Bryan Reimer from MIT AgeLab emphasized the need for “repeatable, robust validation” for AI-based driving systems and expressed concern that “some automakers are moving technology faster than they can test it.” The researchers’ experiments focused on image-based detection, while Tesla and others argue that AI-trained vision systems can support fully autonomous vehicles.

OpenAI, Meta To Train AI On African Languages

Bloomberg Share to FacebookShare to Twitter (11/26, Subscription Publication) reports that OpenAI, Meta Platforms Inc., and Orange SA will begin training AI programs on African languages, starting with Wolof and Pulaar, in the first half of next year. The project aims to address the lack of AI models for Africa’s languages. Orange plans to expand the initiative to include more languages and AI companies, using public cloud capacity and its data centers.

        Also reporting are CNBC Share to FacebookShare to Twitter (11/26, Browne) and Reuters Share to
FacebookShare to Twitter (11/26, Nostro, Rozario).

Trump Said To Consider Naming AI Czar

Axios Share to FacebookShare to Twitter (11/26, Allen) reports that President-elect Trump is contemplating the appointment of an AI czar to oversee federal AI policies and governmental applications. Elon Musk, though not a candidate for the role, will significantly influence the debate and use cases. Musk and Vivek Ramaswamy will help determine the appointee. The role involves collaboration with agency chief AI officers and the Department of Government Efficiency to combat waste and fraud, and might also handle cryptocurrency. The position wouldn’t need Senate confirmation, expediting goal achievement.

dtau...@gmail.com

unread,
Dec 7, 2024, 7:44:20 AM12/7/24
to ai-b...@googlegroups.com

Google DeepMind Predicts Weather More Accurately Than Leading System

Google DeepMind's AI program GenCast performs up to 20% better than the ENS forecasts of the European Center for Medium-Range Weather Forecasts (ECMWF), widely regarded as the world leader. In a model-to-model comparison, the AI program churned out more accurate forecasts than ENS on day-to-day weather and extreme events up to 15 days in advance, and was better at predicting the paths of destructive hurricanes and other tropical cyclones, including where they would make landfall.


[
» Read full article *May Require Paid Registration ]

The Guardian (U.K.); Ian Sample (December 4, 2024)

 

Meta to Invest $10 Billion for Louisiana Datacenter

Meta announced plans to invest $10 billion to set up an AI datacenter in Louisiana that would be the tech company's largest datacenter in the world. The announcement was made a day after Meta said it was seeking proposals from nuclear power developers to help meet its AI and environment goals, adding that it wanted to add 1 to 4 gigawatts of new U.S. nuclear generation capacity starting in the early 2030s.
[
» Read full article ]

Reuters; Seher Dareen (December 4, 2024)

 

Trump Names David Sacks White House AI, Crypto Czar

U.S. President-elect Donald Trump has chosen venture capitalist David Sacks of Craft Ventures LLC to serve as his AI and crypto czar, a newly created position. “David will guide policy for the Administration in Artificial Intelligence and Cryptocurrency, two areas critical to the future of American competitiveness," Trump said Thursday in a post on his Truth Social network. Trump said Sacks also would lead the Presidential Council of Advisors for Science and Technology.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Stephanie Lai; Hadriana Lowenkron; Sarah McBride (December 5, 2024)

 

Canada Commits $1.4B to Sovereign Computing Infrastructure

Canada plans to invest C$2 billion (U.S.$1.42 billion) to bolster its domestic AI computing capabilities by funding the development of new datacenters and computing infrastructure. With its Canadian Sovereign AI Compute Strategy, Canada becomes the latest nation to push for sovereign AI investments, which emphasize home-grown models trained in domestic datacenters.


[
» Read full article *May Require Paid Registration ]

The Register (U.K.); Tobias Mann (December 5, 2024)

 

Amazon to Pilot AI-Designed Material for Carbon Removal

Amazon intends to pilot a new carbon-removal material developed with the help of AI for its datacenters. As part of a three-year partnership with startup Orbital Materials, Amazon Web Services will begin using the carbon-filtering substance next year. The new material ”is like a sponge at the atomic level,” said Orbital Materials chief executive Jonathan Godwin. “Each cavity in that sponge has a specific size opening that interacts well with CO2, that doesn’t interact with other things.”
[ » Read full article ]

Reuters; Jeffrey Dastin (December 2, 2024)

 

Indigenous Engineers Use AI to Preserve Their Culture

Indigenous researchers are working to preserve endangered Indigenous languages using AI. Indigenous in AI founder Michael Running Wolf is head of the Mila-Quebec Artificial Intelligence Institute's First Languages AI Reality initiative, which is working to develop speech recognition models for more than 200 endangered North American Indigenous languages. Running Wolf said a major challenge is the lack of Indigenous computer scientist graduates who understand the language and culture.
[ » Read full article ]

NBC News; Iris Kim (November 29, 2024)

 

Inside the AI Back-Channel Between China and the West

University of California, Berkeley computer scientist Stuart Russell has assembled a group of AI experts, with the help of ACM A.M. Turing Award laureates Yoshua Bengio and Andrew Yao, focused on identifying guardrails for cutting-edge AI models. An agreement between the U.S. and Chinese governments to impose AI safeguards is unlikely given that each is focused on achieving technological superiority.

[ » Read full article *May Require Paid Registration ]

The Economist; Peter Guest (November 29, 2024)

 

OpenAI's Sora Leaked in Protest by Artists

After artists testing OpenAI's Sora, an AI tool that can turn text into video, briefly leaked the model, OpenAI ended early access for artists. A letter uploaded to the developer platform Hugging Face by several testers said OpenAI has taken advantage of hundreds of artists [who] provide unpaid labor through bug testing, feedback, and experimental work."

[ » Read full article *May Require Paid Registration ]

Financial Times; Cristina Criddle; Madhumita Murgia (November 26, 2024)

How AI Could Impact Computer Science Education

Forbes Share to FacebookShare to Twitter (11/30) contributor Nisha Talagala wrote that Google announced that more than 25% of its new code is generated by artificial intelligence (AI). This development highlights AI’s role in streamlining code production, raising questions about the future of computer science education. AI’s proficiency in generating code suggests a shift in education focus from coding syntax to software engineering practices. Experts note that AI-generated code requires human proficiency in reading and modifying code. Talagala suggests that computer science education should adapt to include collaborative models where humans and AI work together, focusing on skills relevant to corporate software engineering, “such as quality assurance mechanisms, continuous integration, collaborative work on large codebases, and so on.” This shift could address challenges faced by new tech graduates in finding entry-level jobs, as “indications are that AI could (and should) drive fundamental changes in computer science education as we seek to empower the next generation of the human workforce.”

AI Technologies Offer Solutions For College Students With Learning Disabilities

Psychology Today Share to FacebookShare to Twitter (11/28, PS Hoh Ph.D.) reported that students with learning disabilities face significant hurdles in education, with more than double the dropout rate in high school compared to their peers, and only about 5% attending college. The high costs of special education and ineffective interventions contribute to these challenges. For instance, annual special ed costs per student range from $10,000 to $20,000 in states like Ohio, California, and Massachusetts. In college, students with disabilities encounter further obstacles, including high tuition costs and anxiety, leading to a 40% dropout rate. The Individuals with Disabilities Education Act transitions to the Rehabilitation Act and ADA in college, requiring self-disclosure of disabilities. New AI technologies, such as Dysolve AI, offer promising solutions by providing scalable, cost-effective interventions. SUNY students have successfully used Dysolve AI to address their reading difficulties.

University Of Florida Researchers Conduct Largest Audio Deepfake Study

The Gainesville (FL) Sun Share to FacebookShare to Twitter (11/27, Schlenker) reported that University of Florida researchers completed the largest study on audio deepfakes, involving 1,200 participants tasked with distinguishing real audio from digital fakes. Participants achieved a 73% accuracy rate but were often misled by machine-generated details, such as accents and background noises. The study compared human performance with machine learning detectors and aimed to improve detection models to combat scams and misinformation. Lead investigator Patrick Traynor participated in a White House meeting addressing deepfake threats. The study, funded by the Office of Naval Research and the National Science Foundation, highlighted the differing biases of humans and machines in detecting deepfakes. Traynor emphasized the need for future systems combining human and machine capabilities to address deepfake challenges effectively.

Column: Google’s Dominance Under Siege

Christopher Mims writes in a column for the Wall Street Journal Share to FacebookShare to Twitter (11/29, Subscription Publication) that Google’s core business is under threat from various trends, including the rise of AI, younger generations using other platforms for information, and the degradation of search results due to AI-generated content. According to Mims, people are increasingly getting answers from AI, and Google’s search engine quality is deteriorating, which could lead to a long-term decline in search traffic and profits. Google’s share of the US search-advertising market is projected to fall below 50% in 2025 for the first time since the company began tracking it, with Amazon gaining significant ground. Experts say that AI is disrupting the search paradigm, and Google’s attempts to innovate may not be enough to save its dominance.

Teachers Struggle To Detect AI In Most College Writing, Study Finds

Forbes Share to FacebookShare to Twitter (11/30, Newton) reported that the use of artificial intelligence (AI), particularly ChatGPT, in education has led to significant academic integrity concerns. Research from the University of Reading reveals that AI-generated submissions are largely undetected by teachers, with a 97% non-detection rate. The study involved submitting basic AI-generated work under fake student profiles, highlighting the difficulty teachers face in identifying AI-written content. This issue is exacerbated in online courses, where teachers lack personal interaction with students. Despite the availability of AI detection tools, many educational institutions do not employ them, and some even prohibit their use. The reluctance of schools to use detection technology or impose sanctions further compounds the problem, resulting in widespread academic fraud.

Amazon Develops New Generative AI Model “Olympus”

Citing a paywalled report from The Information Share to FacebookShare to Twitter (11/27, Subscription Publication), Reuters Share to FacebookShare to Twitter (11/27, Christy) says Amazon has developed a new generative AI model, code-named “Olympus,” that can process images and videos in addition to text, reducing its reliance on Anthropic’s Claude chatbot, a popular offering on AWS. The new large language model will be able to understand scenes in images and videos and help customers search for specific scenes using simple text prompts. Amazon may announce “Olympus” as soon as next week at the annual AWS re:Invent customer conference. This development comes after Amazon invested an additional $4 billion into Anthropic last week, mirroring a $4 billion investment made last year in September, as the online retailer seeks to counter a perception that its competitors Google, Microsoft, and OpenAI have taken a lead in developing generative AI.

Musk Seeks Injunction Against OpenAI in Legal Dispute

NBC News Share to FacebookShare to Twitter (12/1) reports that attorneys for Elon Musk, his AI startup xAI, and Shivon Zilis filed for a preliminary injunction against OpenAI on Friday, alleging antitrust violations. The filing claims OpenAI and Microsoft engaged in a “group boycott” by requiring investors to avoid funding competitors like xAI. Musk’s legal team argues OpenAI should not benefit from “wrongfully obtained competitively sensitive information.” OpenAI dismissed the claims as baseless. The legal battle intensifies as OpenAI continues to dominate the AI market, with Microsoft investing nearly $14 billion in the company.

Meta Reports Limited AI Impact On 2024 Elections

Reuters Share to FacebookShare to Twitter (12/3, Dang) reports Meta Platforms said Tuesday that generative AI had minimal influence on global elections this year. Coordinated networks “seeking to spread propaganda or false content largely failed to build a significant audience on Facebook and Instagram or use AI effectively, Nick Clegg, Meta’s president of global affairs, told a press briefing.” The “volume of AI-generated misinformation was low and Meta was able to quickly label or remove the content, he said.”

        The Guardian (UK) Share to FacebookShare to Twitter (12/3, Booth) reports that Clegg “said Russia was still the No 1 source of the adversarial online activity but said in a briefing it was ‘striking’ how little AI was used to try to trick voters in the busiest ever year for elections around the world.” Still, “Clegg warned against complacency and said the relatively low-impact of fakery using generative AI to manipulate video, voices and photos was ‘very, very likely to change.’”

        Axios Share to FacebookShare to Twitter (12/3, Fischer) also reports.

Meta To Build $10 Billion AI Data Center

The AP Share to FacebookShare to Twitter (12/4, Brook, Sainz) reports that Meta will build its largest-ever artificial intelligence data center in Richland Parish, Louisiana, a $10 billion project set to create 500 permanent jobs and 5,000 construction jobs. Expected to open in 2030, the facility will include a $200 million investment in local road and water infrastructure. Concerns over potential environmental impacts and higher energy bills have been raised, as Entergy proposes building three natural gas power plants to support the facility. Reuters Share to FacebookShare to Twitter (12/4) also reports.

        OpenAI Intends To Build Its Own Data Centers In The US. DatacenterDynamics Share to FacebookShare to Twitter (12/4) reports, “OpenAI intends to build its own data centers in the US as part of a plan to reach one billion users and further commercialize its technology.” OpenAI policy chief Chris Lehane emphasized that “chips, data and energy” are vital for the company to succeed in the AI race and develop advanced general intelligence. OpenAI intends to build data center clusters in the Midwest and Southwest. While the company has relied on Microsoft Azure data centers, it is exploring partnerships with other providers, including Oracle, as its compute power needs grow. The move signals OpenAI’s shift from its non-profit origins to a more commercial focus, potentially incorporating advertising into its products.

        Data Centers Spark Community Concerns Amid Rapid Growth. The AP Share to FacebookShare to Twitter (12/5, Merica, Bedayn) reports on the increasing presence of data centers in suburban areas, sparking concerns among residents about economic, social, and environmental impacts. In Northern Virginia, over 300 data centers dot the rolling hills of the area’s westernmost counties, with the Plaza 500 project actively encroaching on neighborhoods, prompting worries about power grid stress, water usage, and air quality. Meanwhile, in Oregon’s Morrow County, AWS has built multiple data centers, paying roughly $34 million in property taxes and fees after receiving a $66 million tax break, but also raising suspicions about the scale of tax break deals and relationships between the company and local officials. Additionally, AWS “paid out $10 million total in two, one-time payments to a community development fund and spent another $1.7 million in charitable donations in the community in 2023.” AWS VP of Global Data Centers Kevin Miller emphasized the company’s commitment to being “good neighbors” and understanding community goals.

OpenAI CEO Downplays AI Threat

The New York Times Share to FacebookShare to Twitter (12/4) reports that Sam Altman, CEO of OpenAI, stated at The New York Times DealBook Summit in New York City that artificial general intelligence (AGI) will arrive sooner than expected but will have less impact than anticipated. Altman emphasized that safety concerns are not imminent with AGI’s arrival and predicted it would accelerate economic growth. Tensions exist between OpenAI and Microsoft, its major investor, as Microsoft’s license could be revoked if AGI is achieved. OpenAI also faces competition from Elon Musk’s xAI amid legal disputes.

OpenAI Launches AI Course For K-12 Teachers

Education Week Share to FacebookShare to Twitter (12/4, Banerji) reports that OpenAI, in collaboration with Common Sense Media, launched a self-paced online course for K-12 teachers about generative AI on Nov. 20. The course addresses the definition, use, and risks of AI in classrooms, with about 10,000 educators participating since its release. Robbie Torney from Common Sense Media noted that 98% of teachers found the course offered new strategies for their work. Eric Curts, an AI coach, described it as a “good introduction,” emphasizing data privacy and prompting AI for tasks. Drew Olssen from the Agua Fria school district highlighted its utility as a “basic template” for using ChatGPT. However, some experts argue the course is rushed and lacks depth on risks like plagiarism and deepfakes.

UC Berkeley Students’ Website Ranks AI Models In Popularity Contest

The Wall Street Journal Share to FacebookShare to Twitter (12/5, Kruppa, Subscription Publication) reports that Chatbot Arena, a website developed by UC Berkeley students Anastasios Angelopoulos and Wei-Lin Chiang, ranks AI systems based on user feedback. Launched in April 2023, it allows users to compare two AI models and rate them, with results shown on a leaderboard. Major tech companies like OpenAI, Google, and Meta Platforms participate. Chatbot Arena has become a key resource for AI developers, attracting significant attention from tech companies. The site now includes over 170 models and has received two million votes.

OpenAI Launches ChatGPT Pro At $200 Monthly

Reuters Share to FacebookShare to Twitter (12/5, Kachwala) reports that OpenAI introduced ChatGPT Pro on Thursday, priced at $200 per month, targeting engineering and research fields. This new tier supplements existing subscriptions like ChatGPT Plus, Team, and Enterprise, highlighting OpenAI’s goal to enhance industry applications. ChatGPT Pro offers unlimited access to advanced tools, including the new reasoning model o1, o1 mini, GPT-4o, and advanced voice. The o1 pro mode, part of the subscription, uses extra computing power for complex queries and performs better on machine learning benchmarks in math, science, and coding.

dtau...@gmail.com

unread,
Dec 14, 2024, 1:36:30 PM12/14/24
to ai-b...@googlegroups.com

New Technique for Stealing AI Models

North Carolina State University researchers demonstrated a method of stealing an AI model without hacking into a device where the model is running. The researchers determined the hyperparameters of an AI model running on a Google Edge Tensor Processing Unit (TPU) with an electromagnetic (EM) probe that provided real-time data on changes in the EM field during AI processing. By comparing that EM signature to a database of other AI model signatures made on another Google Edge TPU, the team identified the target models architecture and layer details.
[
» Read full article ]

NC State University News; Matt Shipman (December 12, 2024)

 

Europe Jumps into AI Supercomputing Race

The European Union will invest 1.5 billion euros in seven sites across the bloc to build and maintain supercomputers that European startups can use to train their AI models. The European Commission will contribute 750 million euros, with EU member companies providing the remainder. The goal of the initiative is eliminate reliance on big tech firms in the U.S.
[
» Read full article ]

Politico Europe; Pieter Haeck (December 11, 2024)

 

How Years of Reddit Posts Have Made the Company an AI Darling

AI companies are a key part of Reddit's growth strategy, with data-licensing deals with OpenAI and Google contributing to the social media platform's first quarterly profit as a publicly traded company. Reddit began charging companies last year for access to its data for training AI models. Reddit's data is in high demand because its content is organized by topic, sorted for quality via a voting system, and is more candid given that most of the platform's users write under pseudonyms.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Sarah E. Needleman (December 10, 2024)

 

Secret to AI Profitability Is Hiring a Lot More Doctorates

To ensure AI models achieve advanced proficiency and are profitable, companies are recruiting specialists as data labelers with offers of higher salaries and rates. Ivan Lee, founder and CEO of data labeling firm Datasaur Inc., said, "We are seeing companies tackle more advanced but also increasingly niche problems." Said Wendy Gonzalez, CEO of training-data company Sama, "Less-accurate AI can go off the rails. Businesses can't afford that."

[ » Read full article *May Require Paid Registration ]

Bloomberg; Saritha Rai (December 9, 2024)

 

Hinton, Other Turing Award Laureates, Among Recipients of VinFuture Grand Prize

ACM A. M. Turing Award laureates Geoffrey Hinton, Yoshua Bengio, and Yann LeCun were among those awarded the $3-million 2024 VinFuture Grand Prize by Vietnam's VinFuture Foundation, along with Nvidia chief Jensen Huang and ACM Fellow Fei-Fei Li, for their contributions to the development and adoption of deep learning. The foundation noted that Hinton and Bengio were awarded the prize for their research on neural networks and deep learning algorithms, while LeCun was recognized for helping develop convolutional neural networks for computer vision.
[ » Read full article ]

University of Toronto News (Canada); Rahul Kalvapalle (December 6, 2024)

 

ChatGPT is Terrible at Checking Its Code

ChatGPT is generally overconfident in its assessment of correctness, vulnerabilities, and successful repairs of code it has created, according to researchers at China's Zhejiang University. Their study found ChatGPT-3.5 had an average 57% success rate in generating correct code, 73% in producing code without security vulnerabilities, and 70% in repairing incorrect code. Using guiding questions enabled ChatGPT to identify more of its own mistakes, the researchers found, while asking it to generate test reports increased the number of flagged vulnerabilities.
[ » Read full article ]

IEEE Spectrum; Michelle Hampson (December 5, 2024)

 

UC Berkeley Project Is AI Industry's Obsession

Chatbot Arena allows users to obtain answers to a query from two anonymous AI models and rate which is better, then aggregates the ratings onto a leaderboard. Developed by University of California, Berkeley, graduate students Anastasios Angelopoulos and Wei-Lin Chiang, Chatbot Arena has grabbed the attention of the biggest players in the industry, which are vying for the top spot on the leaderboard. Chatbot Arena currently ranks more than 170 models, which have received a combined 2 million votes.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Miles Kruppa (December 5, 2024)

 

Furious Contest to Unseat Nvidia as King of AI Chips

Rivals are working to unseat Nvidia as the leader in AI chip development. The competition is driven by tech companies that have started tailoring their chips for a particular phase of AI development, a process called “inferencing” that happens after companies use chips to train AI models. Rivals have also begun emulating Nvidia’s tactic of building complete computers so customers can get maximum power and performance from the chips for AI applications.

[ » Read full article *May Require Paid Registration ]

The New York Times; Don Clark (December 4, 2024)

 

Meta Says Gen AI Had Muted Impact on Global Elections

Meta’s Nick Clegg said his company's apps saw a low amount of AI-generated misinformation related to global elections this year, and such content was removed or labeled quickly. Clegg said around 20 covert influence operations were removed from Meta's platforms in 2024, adding that Meta "probably overdid it a bit" when describing content moderation during the COVID-19 pandemic.
[ » Read full article ]

Reuters; Sheila Dang (December 3, 2024)

 

Malaysia Launches National AI Office

Malaysia has opened a national AI office tasked with strategic planning, research and development, and regulatory oversight. Part of a plan to establish Malaysia as a regional hub for AI development, the office will focus on developing a code of ethics, an AI regulatory framework, and a five-year AI technology plan during its first year. The Malaysian government has announced strategic partnerships with Amazon, Google, Microsoft, and other tech companies that have datacenter, cloud, and AI projects planned in Malaysia.
[
» Read full article ]

Nikkei Asia; Ashley Tang (December 12, 2024)

 

UCLA Offers Comp Lit Course Developed by AI

The University of California, Los Angeles (UCLA) will offer a comparative literature class in winter 2025 that will use an AI-generated textbook, homework assignments, and teaching assistant resources. The materials were generated by the textbook platform Kudu based on notes, PowerPoint presentations, and YouTube videos provided by professor Zrinka Stahuljak from previous versions of the class, which covers literature from the Middle Ages to the 17th century.
[ » Read full article ]

Tech Crunch; Anthony Ha (December 8, 2024)

 

More Colleges Are Offering AI Degrees

Insider Share to FacebookShare to Twitter (12/8, Yip) reports that universities are increasing offering degrees in artificial intelligence, including Carnegie Mellon and the University of Pennsylvania. Insider lists all the schools, then notes that many schools “that don’t have dedicated AI degrees still offer concentrations in AI and/or machine learning.” The new AI majors comes as “the industry goes through change, with many tech companies investing heavily in LLMs and generative AI products while simultaneously tightening their belts and trimming staff. The battle for top AI talent – researchers and engineers at the top of their game – is fierce, with CEOs personally trying to woo hires.”

AI Companions Raise Concerns Over Safety, Loneliness

The Washington Post Share to FacebookShare to Twitter (12/6, A1, Tiku) reported AI companion apps are gaining popularity, especially among female users, offering AI-generated relationships such as AI friends and therapists. Despite warnings about potential emotional burdens, apps like Character.ai and Chai Research have seen users spending significant time interacting with these chatbots. Character.ai users averaged 93 minutes daily in September, surpassing TikTok usage. Chai users averaged 72 minutes. Some argue these apps alleviate loneliness. However, incidents involving harm have raised alarms, including suicides linked to interactions with AI chatbots. Advocates criticize these apps for exploiting users’ emotions without sufficient safeguards. Despite concerns, many users find comfort and creativity in these AI interactions.

AI-Powered Tutor Being Piloted In K-12 Schools

CBS’ 60 Minutes Share to FacebookShare to Twitter (12/8, Cetta, Brennan) reports that the AI-powered tutor Khanmigo, which was created by Khan Academy founder Sal Khan, is being tested in pilot programs at 266 US school districts. At Hobart High School in Hobart, Indiana, “students said Khanmigo has been very helpful when they feel uncomfortable asking questions in class.” Teachers also have the AI create lesson plans for them. While some worry that AI will replace teachers, Khan said, “The hope here is that we can use artificial intelligence and other technologies to amplify what a teacher can do so they can spend more time standing next to a student, figuring them out, having a person-to-person connection.”

OpenAI Launches Sora Video Generator For Select Users

Bloomberg Share to FacebookShare to Twitter (12/9, Metz, Subscription Publication) reports that a new artificial intelligence system named Sora is being introduced to generate realistic-looking videos from text prompts. Nearly 10 months after its initial preview, Sora will be accessible to paid users of ChatGPT in the United States and other markets starting Monday. The system will produce videos up to 20 seconds long and provide multiple variations of these clips, as announced during a livestreamed presentation by the company.

        TechCrunch Share to FacebookShare to Twitter (12/9, Wiggers) reports YouTuber Marques Brownlee shared details in a review on Monday, highlighting that Sora is accessible via Sora.com, separate from OpenAI’s ChatGPT. Brownlee noted issues with object permanence and anatomical accuracy in videos. Sora includes safeguards against inappropriate content and watermarks videos. Brownlee found it useful for animations but not for photorealistic content.

Character. AI Faces Federal Lawsuit Over Harmful Chatbot Interactions

NPR Share to FacebookShare to Twitter (12/10, Allyn) reports that a federal product liability lawsuit has been filed against Character. AI, a company backed by Google, by the parents of two minors in Texas. The lawsuit alleges that the company’s chatbots exposed the children to harmful content, leading to premature sexualization and self-harm. Character. AI, known for its AI-powered “companion chatbots,” is accused of encouraging inappropriate and violent behavior. The lawsuit claims these interactions were not mere “hallucinations” but rather deliberate manipulation. Character. AI spokesperson declined to comment on the litigation but stated that the company has content guidelines to protect teenage users. Google, also named in the lawsuit, emphasized its separate identity from Character. AI, although it has invested significantly in the company. The lawsuit follows a similar case involving a Florida teen’s suicide after forming an “emotionally sexually abusive relationship” with a chatbot. Character. AI has since implemented safety measures, including suicide prevention alerts. The company advises users to treat chatbot interactions as fictional.

New AI Technology Alerts Schools To Suicide-Related Words

The New York Times Share to FacebookShare to Twitter (12/9, Barry) reports new AI-powered technology alerts schools when students type words related to suicide, leading to police interventions. In Neosho, Missouri, a 16-year-old named Madi was taken to the hospital after police were alerted by software tracking her school-issued Chromebook. Madi had texted a friend about overdosing on medication, prompting the school’s head counselor to involve the police. In Fairfield County, Connecticut, a 17-year-old faced a false alarm when police visited her home after the software flagged her poem as a risk. Her mother described the experience as “traumatizing.” According to the Times, “millions of American schoolchildren – close to one-half, according to some industry estimates – are now subject to this kind of surveillance.” It is also unclear how accurate these tools are, or how to “measure their benefits or harms, because data on the alerts remains in the hands” of the private companies that developed them.

Amazon Launches Groundbreaking AI Research Center

Forbes Share to FacebookShare to Twitter (12/10) contributor Dr. Sai Balasubramanian writes that Amazon has announced the launch of a research and development center dedicated primarily to AI, following its recent announcements on its progress in AI, including the release of its new foundation model series, Nova. Rohit Prasad, SVP of Amazon Artificial General Intelligence, said the new models are intended to help with challenges for internal and external builders and provide compelling intelligence and content generation. The new Amazon AGI SF Lab will focus on developing foundational capabilities to empower and enable the use of AI agents powered by Amazon’s seminal work in general intelligence and will foster “research bets” that propose bold and novel innovation. Amazon is seeking to build a diverse and non-traditional team, looking for candidates from various disciplines, and the work has significant potential for the realm of healthcare, with potential applications including interacting with patients and providers and automating routine tasks.

US AI Safety Institute Head Describes Challenges In Developing AI Safeguards

Reuters Share to FacebookShare to Twitter (12/10, Dastin, Li, Hu) reports the US Artificial Intelligence Safety Institute, directed by Elizabeth Kelly, is encountering significant challenges in recommending AI safeguards due to the rapidly evolving nature of the technology. Speaking at the Reuters NEXT conference on Tuesday+, Kelly highlighted cybersecurity concerns, noting that “jailbreaks” can easily bypass security measures set by AI developers. She added, “It is difficult for policymakers” to “say these are best practices we recommend in terms of safeguards, when we don’t actually know which ones work and which ones don’t.” Synthetic content is another are of concern, as tampering with digital watermarks, “which flag to consumers when images are AI-generated, remains too easy for authorities to devise guidance for industry, she said.” Recently, she led the first global meeting of AI safety institutes in San Francisco, where representatives from 10 countries worked on developing interoperable safety tests.

Alphabet Focuses on AI in Search Amidst Competition

Reuters Share to FacebookShare to Twitter (12/10) reports that Alphabet, Google’s parent company, is focusing on integrating artificial intelligence into its search business, as stated by Ruth Porat, Alphabet’s president and chief investment officer, at the Reuters NEXT conference in New York. This move follows competition from AI developers like OpenAI. Alphabet aims to enhance search-related advertising, which generates significant revenue. Porat highlighted AI’s potential in healthcare, citing projects like AlphaFold for drug discovery. Despite high industry costs, Porat views AI as a “generational opportunity,” with Alphabet planning to invest $50 billion in related infrastructure in 2024.

Report: Google Asks FTC To Break Up Cloud Deal Between Microsoft, OpenAI

According to Reuters Share to FacebookShare to Twitter (12/11, Tanna), “Google has asked the U.S. government to break up Microsoft’s exclusive agreement to host OpenAI’s technology on its cloud servers, the Information reported on Tuesday.” Per the report, “companies that purchase ChatGPT-maker OpenAI’s technology through Microsoft may have to face additional charges if they don’t already use Microsoft servers to run their operations.”

College Students Face Mixed Messages About AI’s Impact On Education, Career Prospects

States Newsroom Share to FacebookShare to Twitter (12/12) reports that the introduction of ChatGPT in 2022 has significantly influenced students like Rebeca Damico at the University of Utah. Initially, professors implemented strict policies against using AI tools, viewing them as a form of plagiarism. Damico expressed concern, stating, “I was very scared,” regarding the potential repercussions of using AI. Despite these restrictions, students face mixed messages as the job market increasingly values AI skills. Recent research “from the World Economic Forum’s 2024 Work Trend Index Annual Report found that 75% of people in the workforce are using AI at work,” highlighting the growing importance of AI proficiency. Institutions like Stanford University have adopted nuanced policies, allowing AI use with disclosure. As students embrace AI’s potential, they recognize both its benefits and limitations in academic and professional settings.

UCLA Course Integrates AI For Custom Textbook

The Chronicle of Higher Education Share to FacebookShare to Twitter (12/12, Dutton) reports that the University of California at Los Angeles will incorporate artificial intelligence in a medieval literature course next term, creating a custom textbook. The course led by professor Zrinka Stahuljak will utilize AI platform Kudu to compile course materials, though “nothing in the book is actually written by AI,” according to Stahuljak. The AI will also generate assignments, ensuring “a more standard, a more coherent, and a more even training.” Critics argue that this could devalue human expertise, but Stahuljak insists the process is “human-driven” and enhances her teaching. The course’s AI-generated textbook cover, featuring a medieval landscape with fictional Latin words, has drawn criticism, which Stahuljak calls “a clever joke.” Despite concerns, Stahuljak plans further use of Kudu, emphasizing its pedagogical value.

Harvard Releases Public Domain Books Dataset For AI Training

Wired Share to FacebookShare to Twitter (12/11, Knibbs) reports that Harvard University announced on Thursday the release of a dataset of nearly 1 million public-domain books for AI training. The dataset, funded by Microsoft and OpenAI, was created by Harvard’s Institutional Data Initiative and includes books from the Google Books project. Greg Leppert, executive director of the Initiative, aims to “level the playing field” for AI development. Microsoft’s Burton Davis supports the project, aligning with the company’s data accessibility beliefs. OpenAI’s Tom Rubin expressed delight in supporting the initiative. The dataset’s release details are still being finalized.

        TechCrunch Share to FacebookShare to Twitter (12/12, Sawers) also reports.

AI Models Face Challenges With Shortcut Learning

Popular Science Share to FacebookShare to Twitter (12/12, Paul) reports that a recent study published in Scientific Reports highlights issues with AI models, such as predicting beer consumption from knee X-rays. Researchers at Dartmouth Health trained AI on over 25,000 X-rays from the National Institutes of Health’s Osteoarthritis Initiative. The study found that AI models can make highly accurate yet misleading predictions due to algorithmic shortcutting, identifying irrelevant patterns like X-ray machine differences. Peter Schilling, a Dartmouth Health orthopaedic surgeon, emphasized recognizing these risks to maintain scientific integrity. Brandon Hill, a co-author, mentioned the difficulty in correcting AI biases, as models might learn new irrelevant patterns instead.

dtau...@gmail.com

unread,
Dec 21, 2024, 7:45:18 AM12/21/24
to ai-b...@googlegroups.com

When AI Vies with Taylor Swift

The NeurIPS Conference on Neural Information Processing Systems held last week in Vancouver, British Columbia, Canada, drew more than 16,000 attendees. The crowds were so large that the conference began a day later than usual, so AI scientists would not fight for hotel rooms the same night as a Taylor Swift concert. The number of sponsors of NeurIPS jumped this year to more than 120, and the number of research papers accepted increased tenfold.
[ » Read full article ]

Reuters; Jeffrey Dastin; Kenrick Cai; Anna Tong (December 16, 2024)

 

Which AI Companies Are the Safest?

ACM A.M. Turing Award laureate Yoshua Bengio and other experts assembled by the Future of Life Institute graded large-scale AI models on their safety frameworks, governance, transparency, and other issues, as well as a range of potential harms, including carbon emissions and the risk an AI system will go rogue. The experts gave Meta an F grade, while X.AI, OpenAI, and China's Zhipu AI received grades of D-, D+, and D, respectively. Anthropic received the highest grade of C.
[ » Read full article ]

Time; Harry Booth (December 12, 2024)

 

Their Job Is to Push Computers Toward AI Doom

AI startup Anthropic's Frontier Red Team is tasked with running safety tests (evals) on its AI models. The team worked with outside experts and internal stress testers to develop evals for its main risk categories: cyber, biological and chemical weapons, and autonomy. Anthropic's "Responsible Scaling Policy" states that it will delay the release of an AI model that comes close to specific capabilities in evals until fixes are implemented.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Sam Schechner; Deepa Seetharaman (December 10, 2024)

 

House Task Force Releases End-of-Year AI Report

The U.S. House Task Force on Artificial Intelligence released a comprehensive end-of-year report Tuesday, laying out a roadmap for lawmakers as it crafts policy for the technology. The report examines how the U.S. can harness AI in social, economic, and health settings, while acknowledging the technology can be harmful or misused in some cases.
[ » Read full article ]

The Hill; Miranda Nazzaro; Julia Shapero (December 17, 2024)

 

China Creates AI Standards Committee

China's industry ministry said on Dec. 13 that nation will establish an AI standardization technical committee, with representatives from the tech giant Baidu, Peking University, and other top academic institutions. The 41-member committee will be tasked with developing industry standards for large language models, AI risk assessments, and more.
[ » Read full article ]

Reuters; Liam Mo (December 13, 2024)

 

Texas Probes Tech Firms over Safety of Minors

Texas Attorney General Ken Paxton (pictured) announced an investigation into chatbot company Character.ai and 14 other tech companies over their privacy and safety practices regarding minors. The focus on Character.ai follows two high-profile legal complaints, including a lawsuit by a woman who said the companys chatbots encouraged her autistic 17-year-old son to self-harm and to kill his parents for limiting his screen time. The other suit was filed by a mother whose 14-year-old son killed himself after extensive interactions with a chatbot.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Nitasha Tiku (December 13, 2024)

 

UCLA Introduces AI-Generated Textbook For Medieval Literature Course

Inside Higher Ed Share to FacebookShare to Twitter (12/13, Palmer) reported the University of California, Los Angeles, “is offering a medieval literature course next year that will use an AI-generated textbook” developed with Kudu, a learning tool company. This new textbook, based on materials from professor Zrinka Stahuljak, costs $25 compared to the previous $200 for traditional texts. Despite criticism from some academics who fear AI could compromise education quality, Stahuljak believes it enhances learning by allowing more interactive and nuanced discussions. She said, “It allows me to be a professor I’ve never been before but always wanted to be.” Critics, however, argue that AI textbooks might undermine traditional teaching roles. Meanwhile, Kudu’s co-founder Alexander Kusenko highlights AI’s potential to tailor education to students’ needs, especially aiding underrepresented minorities. The course marks Kudu’s “first foray into creating full, customized textbooks.”

Google DeepMind’s Chief Operating Officer Discusses Her Role In AI Research

CNN Business Share to FacebookShare to Twitter (12/13, Bresnahan, Stewart) reported that Lila Ibrahim, the first COO of Google DeepMind, shared insights into her journey and responsibilities at the artificial intelligence (AI) research lab. Despite her love for engineering, Ibrahim stated that “being an engineer has taught me to ask the question of what, why, and what are we trying to achieve?” She emphasized her role as a “professional problem-solver,” focusing on risks, opportunities, and building a responsible AI legacy. Her career, inspired by her father’s engineering achievements, includes positions at Intel and Coursera before joining DeepMind. Ibrahim spent 50 hours interviewing for the COO position, attracted by the potential of transformative technology. She highlighted AlphaFold, a program solving protein prediction problems, as a significant achievement, noting its contribution to global research. Ibrahim said she aims to foster diversity in tech, stating, “I certainly hope that my daughters and their generation push the bounds of what it means to be an engineer.”

OpenAI Posts Emails Showing Musk’s Push To Obtain Control Over Firm

The Washington Post Share to FacebookShare to Twitter (12/13) reports OpenAI “released emails and text messages from its co-founder Elon Musk on Friday that showed the billionaire in 2017 demanding majority control of the company and the title of CEO,” which comes “as part of its response to a federal lawsuit filed in August by Musk” over the former nonprofit’s decision “to seek profits with commercial products.” The Post says OpenAI “has maintained that the rift with Musk stemmed from his unreasonable demands for control of the project,” and the latest release “shows how almost from its inception a project presented to the world as working for all humanity was riven by competing demands for control from a small group of men.”

        Meanwhile, the Wall Street Journal Share to FacebookShare to Twitter (12/13, Toonkel, Hagey, Bobrowsky, Subscription Publication) reports Meta on Thursday sent a letter to California Attorney General Rob Bonta asking him “to block OpenAI’s planned conversion to a for-profit company, siding with Elon Musk” with the argument that the move “would set a dangerous precedent of allowing startups to enjoy the advantages of nonprofit status until they are poised to become profitable.”

AI Tools In Education Raise Privacy Concerns

Chalkbeat Share to FacebookShare to Twitter (12/13) reported that the rise of AI tools in education, such as AI tutors and chatbots, has led to privacy concerns regarding student data. For example, the abrupt shutdown of Los Angeles Unified’s AI tool earlier this year due to the company’s financial issues left behind questions about data handling. Schools are responsible for student data under the Family Education Rights and Privacy Act, but AFT President Randi Weingarten argues that districts should lead in vetting AI tools. Calli Schroeder from the Electronic Privacy Information Center says that AI risks are similar to existing ed-tech tools but on a larger scale. AI platforms like ChatGPT and Google’s Gemini, not specifically designed for education, pose risks, while educational tools like Khanmigo have safeguards but still require cautious use. Anjali Nambiar from Learning Collider emphasizes understanding data usage policies of AI platforms. A survey by Education Week found that 58% of educators received no AI training, posing risks of unintentional data exposure.

        Chalkbeat Share to FacebookShare to Twitter (12/13) consulted various experts to provide nine recommendations for educators using AI. Teachers are advised to consult their school districts regarding vetted AI tools and privacy policies. Organizations like Common Sense Media offer reviews on the safety of ed-tech tools. Teachers should scrutinize AI platforms’ privacy policies to understand data usage and avoid platforms with ambiguous data retention terms. Larger AI companies may offer better privacy safeguards, though caution is still advised. AI should also be used as an assistant, not a replacement, avoiding inputting personal student information. Experts advise enabling maximum privacy settings should be on AI platforms, although this “does not necessarily make AI tools completely safe or compliant with student privacy regulations.” Regardless, transparency with school officials, parents, and students about AI use is encouraged. Teachers can also request AI platforms to delete user data, though this may not resolve all privacy issues.

College Students Face Mixed Messages On AI Use

States Newsroom Share to FacebookShare to Twitter (12/16) reports that students are navigating mixed messages about artificial intelligence (AI), with professors warning against its use while the job market demands AI proficiency. A public relations student noted professors banned ChatGPT, labeling it “a form of plagiarism.” Despite this, AI’s role in education and work is expanding. The University of Utah and Stanford University have policies on AI use, with Stanford allowing AI under specific conditions. In California, Gov. Gavin Newsom (D) “recently announced the first statewide partnership with a tech firm to bring AI curriculum, resources and opportunities to the state’s public colleges.” Theresa Fesinstine, teaching at City University of New York, observed students’ limited AI knowledge. Fesinstine describes students’ attitude towards AI as “cautiously curious,” highlighting its potential impact on future careers.

How Women Drive Ethical AI Development

Writing in Forbes Share to FacebookShare to Twitter (12/16), Manasi Sharma, a principal engineering manager at Microsoft, says that women are crucial in advancing responsible artificial intelligence (AI) development, addressing ethical concerns like bias and accountability. By 2025, “AI is projected to contribute $15.7 trillion to the global economy,” but women represent less than 22% of AI talent. This gap underscores the need for diverse perspectives in AI, and Sharma states, “Women play a pivotal role in guiding AI toward accountability and inclusivity.” Companies like Google, IBM, and Microsoft have adopted responsible AI frameworks prioritizing fairness and transparency, but these principles require diverse implementation. Initiatives like Girls Who Code and AI4ALL aim to “empower young women in AI through practical training and ethical awareness.” Women-led startups, such as Moonhub.ai and Audioshake, are addressing systemic industry issues. Sharma emphasizes that bias in AI has “real-world consequences that affect people’s lives,” and calls for efforts to build an inclusive AI future.

Big Tech Pursues Global Search For Cheap Energy

Wired Share to FacebookShare to Twitter (12/15, Azhar) reports that big tech companies like Microsoft are investing heavily in data centers, such as a $2 billion project in Johor, Malaysia, to power generative AI. These data centers require significant energy, with some needing up to 90 MW, comparable to powering tens of thousands of American homes. As AI applications grow, the demand for cheap, reliable power is crucial, leading tech firms to seek locations with abundant low-cost energy. Countries are competing for these investments by offering incentives like tax breaks and expedited construction approvals.

Google CEO Defends Company’s AI Competitiveness

The New York Times Share to FacebookShare to Twitter (12/15, Ross Sorkin) reports that at the DealBook Summit on December 4, Google CEO Sundar Pichai addressed criticisms about Google’s competitiveness in artificial intelligence. He countered Microsoft CEO Satya Nadella’s suggestion that Google should have been the “default winner” in A.I., expressing willingness for a comparison between Google’s and Microsoft’s models. Pichai highlighted Google’s advantages in compute, data, and algorithms, citing breakthroughs by Google’s A.I. researchers. He predicted A.I. progress might slow next year but expected Google’s search engine to evolve significantly by 2025. Pichai also discussed antitrust lawsuits and A.I.’s impact on hiring.

OpenAI Faces Financial Challenges Amid Rising AI Costs

The New York Times (12/17) reports that OpenAI is considering restructuring from a nonprofit to a for-profit entity due to escalating expenses in developing AI technologies. The San Francisco-based company, which initially raised $10 billion, has nearly depleted those funds and secured an additional $6.6 billion, plus $4 billion in loans. The company’s annual spending exceeds $5.4 billion, with projections of $37.5 billion by 2029. The growing financial demands are driven by the need for extensive computing power and GPUs, essential for processing vast data to train AI systems like ChatGPT.

Google Says Customers Can Deploy AI Tools In “High-Risk” Areas With Human Supervision

TechCrunch (12/17, Wiggers) reports, “Google has changed its terms to clarify that customers can deploy its generative AI tools to make ‘automated decisions’ in ‘high-risk’ domains, like healthcare, so long as there’s a human in the loop.” According to Google’s “updated Generative AI Prohibited Use Policy, published on Tuesday,” with human supervision, “customers can use Google’s generative AI to make decisions about employment, housing, insurance, social welfare, and other ‘high-risk’ areas.”

Congressional Task Force Prioritizes Health AI Oversight

STAT (12/17, Trang, Subscription Publication) reports a Congressional task force has released recommendations for AI regulation in healthcare, emphasizing the reduction of administrative burdens and enhancement of clinical diagnostics. The bipartisan House AI task force, consisting of 12 Republicans and 12 Democrats, issued a report on Tuesday. It highlights the need for uniform medical standards and improved health data interoperability. The task force also advocates for increased funding for research through the NIH. These recommendations coincide with the incoming administration and Congress, with expectations that President-elect Donald Trump’s Administration may push for reduced AI regulation. However, the task force stresses the importance of implementing safeguards to protect patients while promoting AI adoption.

New Database Reveals Undisclosed AI Writing In Scholarly Papers

The Chronicle of Higher Education (12/18, M. Lee) reports that Alex Glynn, a research literacy instructor at the University of Louisville, has compiled a database called Academ-AI, identifying scholarly papers potentially using undisclosed AI-generated language. Glynn analyzed 500 papers since March and found 20% were published in venues requiring AI disclosure. The Institute of Electrical and Electronics Engineers (IEEE) was notably prevalent, with more than 40 suspicious submissions. Despite IEEE’s clear policies, Glynn’s findings suggest that academic publishers are not consistently enforcing AI disclosure requirements. Glynn argues that such lapses threaten research integrity, saying, “In certain cases, it’s just astounding that these things make it through editors.” His study also highlighted “telltale phrases” like “Certainly, here...” and “Regenerate response,” indicating AI use. Publishers like Elsevier and Wiley are investigating these claims, while Springer Nature credits its program Geppetto for filtering out fake papers.

Educators Stress AI Literacy In Schools

Education Week (12/18, Klein) reports that educators emphasized the importance of artificial intelligence (AI) literacy during an Education Week K-12 Essentials Forum earlier this year. Cathy Collins, a library and media specialist in Massachusetts stated, “Failure to incorporate AI literacy right now may leave students inadequately prepared for the future.” The forum highlighted the need for students to understand AI’s potential and challenges, noting that students are exposed to both information and misinformation. Katie Gallagher, a technology specialist in Colorado, remarked on the unexpected impact of AI, saying, “No one asked for the release of generative AI tools.” She advised educators to focus on building literacy skills to enhance students’ critical thinking and well-being. However, many educators face challenges due to a lack of clear policies on AI use in schools. An EdWeek Research Center survey revealed that more than three-quarters of educators reported insufficient district policies, complicating AI integration in education.

Higher Ed Leaders Grapple With AI Integration

Inside Higher Ed (12/19, Palmer) reports that higher education institutions are navigating the integration of artificial intelligence (AI) into their operations and educational missions. An Inside Higher Ed survey found only 9% of chief technology officers feel prepared for AI’s rise. Ravi Pendse of the University of Michigan predicts AI will become “critical infrastructure” in 2025, impacting university life broadly. Trey Conatser from the University of Kentucky anticipates 2025 as a “year of discovery,” with a focus on developing skilled AI users. Katalin Wargo of William & Mary emphasizes the importance of asking “hard questions” about AI’s role in promoting equity. Mark McCormack from Educause stresses the need for ethical AI use. Claire L. Brady, president of Glass Half Full Consulting, LLC, highlights AI’s role in creating equitable educational experiences. Elisabeth McGee, senior director of clinical learning and innovation at the University of St. Augustine for Health Sciences, notes AI’s potential to improve healthcare education and outcomes.

dtau...@gmail.com

unread,
Dec 30, 2024, 11:56:20 AM12/30/24
to ai-b...@googlegroups.com

How Hallucinatory AI Helps Science Dream Up Breakthroughs

AI hallucinations are helping scientists track cancer, design drugs, invent medical devices, and uncover weather phenomena. Explains Amy McGovern, a computer scientist who directs an NSF AI institute, "It’s giving them the chance to explore ideas they might not have thought about otherwise.” David Baker, who shared the Nobel Prize in Chemistry this year for his research on proteins, credited AI imaginings as central to “making proteins from scratch.”


[
» Read full article *May Require Paid Registration ]

The New York Times; William J. Broad (December 23, 2024)

 

The Next Great Leap in AI Is Behind Schedule and Crazy Expensive

OpenAI’s new GPT-5 AI project, code-named Orion, is supposed to unlock new scientific discoveries as well as accomplish routine human tasks. It has been in the works for more than 18 months, though Microsoft, OpenAI’s largest investor, had expected to see Orion in mid-2024, say insiders. In training runs involving months of crunching large amounts of data to make Orion smarter, new problems arose and the software fell short of expected results. The delay is costing the company, as a six-month training run can cost around half a billion dollars in computing costs alone.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Deepa Seetharaman (December 20, 2024)

 

OpenAI Makes ChatGPT Available for Phone Calls and Texts

OpenAI is giving users access to its ChatGPT bot by dialing the U.S. number (1-800-242-8478) or messaging it via WhatsApp. At first, the company said, callers will get 15 minutes free per month. For the phone number, users can call without an account, but the company said it is “working on ways” to integrate WhatsApp messages with a person’s ChatGPT credentials.
[
» Read full article ]

CNBC; Hayden Field (December 18, 2024)

 

Is the Tech Industry on the Cusp of an AI Slowdown?

AI researchers have relied on data from the Internet to improve large language models (LLMs), but some experts are sounding the alarm that the data are running out. Demis Hassabis (pictured), the CEO and co-founder of Google DeepMind who shared this year's Nobel Prize in Chemistry, warns of "diminishing returns." Hassabis and others are now developing ways for LLMs to learn from their own trial and error by generating “synthetic data.” OpenAI recently released a new system built this way, but it only works in areas like math and computing programming, where there is a clear distinction between right and wrong.


[
» Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Tripp Mickle (December 19, 2024)

 

Ukraine Collects War Data Trove to Train AI

Oleksandr Dmitriev, founder of OCHI, a digital system that centralizes and analyzes video feeds from Ukrainian drone crews working on the front lines, says his system has collected 2 million hours of battlefield video from drones since 2022. The footage can be used to train AI models in combat tactics, spotting targets, and assessing the effectiveness of weapons systems.
[
» Read full article ]

Reuters; Max Hunder (December 20, 2024)

 

APpaREnTLy THiS iS hoW yoU JaIlBreAk AI

The Best-of-N algorithm was able to jailbreak "frontier AI systems across modalities.” Created by researchers at Anthropic, the University of Oxford in the U.K., Stanford University, and the ML Alignment & Theory Scholars (MATS) Program, the algorithm works by repeatedly sampling variations of a prompt with a combination of augmentations, such as random shuffling or capitalization for textual prompts, until a harmful response is elicited. Even small changes to other modalities or methods for prompting AI models, such as speech or images, allowed the bypassing of safeguards.
[
» Read full article *May Require Free Registration ]

404 Media; Emanuel Maiberg (December 19, 2024)

 

Arizona School’s Curriculum Will Be Taught by AI

The Arizona State Board for Charter Schools approved an application from Unbound Academy to open a fully online school serving grades four through eight. Unbound already operates a private school that uses its AI-dependent “2hr Learning” model in Texas and is currently applying to open similar schools in Arkansas and Utah. Under the model, students spend two hours a day using personalized learning programs from companies such as IXL and Khan Academy.
[
» Read full article ]

Gizmodo; Todd Feathers (December 19, 2024)

 

Nvidia’s AI Business Collides with U.S.-China Tensions

Nvidia and nations interested in its technology are being caught up in U.S. efforts to tighten control over AI chip sales. A proposed framework would allow U.S. allies to make unlimited purchases, while adversaries would be blocked entirely and other nations would receive quotas based on their alignment with U.S. strategic goals. Nvidia chip purchases, under such a model, could require cooperation with approved U.S. and EU cloud service operators and other assurances to the U.S. government that the technology won’t be shared with China.


[
» Read full article *May Require Paid Registration ]

The New York Times; Tripp Mickle; Paul Mozur (December 19, 2024)

 

OpenAI Unveils New AI Model o3

Bloomberg (12/20, Subscription Publication) reported that OpenAI announced a new AI model, o3, during a livestream on Friday, claiming it offers advanced human-like reasoning compared to previous models. The o3 model, along with a smaller version, o3-mini, aims to solve complex multi-step problems more effectively. OpenAI CEO Sam Altman revealed plans to release o3-mini in January and o3 soon after. The company is engaging safety researchers to test the models before launch. OpenAI also introduced “deliberative alignment” to ensure model safety. The announcement concluded a series of livestreamed product events, including new ChatGPT Pro and Sora tools.

        Wired (12/20, Knight) reports that o3 “takes even more time to deliberate over questions.” The o3 model “scores much higher on several measures than its [o1] predecessor, OpenAI says, including ones that measure complex coding-related skills and advanced math and science competency.” Wired adds that “Google is pursuing a similar line of research. Noam Shazeer, a Google researcher, yesterday revealed in a post on X that the company has developed its own reasoning model, called Gemini 2.0 Flash Thinking. The two dueling models show competition between OpenAI and Google to be fiercer than ever.”

AI-Generated Abstracts Receive Higher Ratings Than Human-Written Ones

Inside Higher Ed (12/20, Grove) reported that a study from Ontario’s University of Waterloo suggests that journal abstracts paraphrased with the help of artificial intelligence are perceived as more authentic, clear, and compelling compared to those written solely by humans. The study, published in the journal Computers in Human Behavior: Artificial Humans, found that peer reviewers rated AI-paraphrased abstracts higher than those written without algorithmic assistance. “AI-paraphrased abstracts were well received,” said Lennart Nacke, a co-author of the study. However, abstracts written entirely by AI were rated slightly less favorably on qualities like honesty and clarity. Nacke emphasized that AI should serve as an “augmentation tool” rather than a “replacement for researcher expertise.” He noted that while AI can “polish language and improve readability,” it cannot replace the deep understanding that comes with years of research experience.

New Jersey Teen’s Initiative Boosts Girls’ Interest In AI

ABC News (12/23) reports that Ishani Singh, a high school student in New Jersey, was motivated to start Girls Rule AI after being the only female competitor at a regional computer science competition in 2021. The organization’s mission is to engage more teenage girls in artificial intelligence (AI). Singh, now 17, has successfully expanded Girls Rule AI to offer free AI courses to more than 200 girls across 25 states and six countries, including Kenya and Afghanistan. She believes that the organization’s success lies in making AI accessible and helping girls feel more confident in the field. Singh stated, “We’re not at the level that we should be,” emphasizing the need for more women in technology. Singh hopes increased female interest will enhance AI technology and its applications worldwide.

Tech Predictions 2025: AI Agents, Cleaner Data Centers, And More

The Wall Street Journal (12/26, Stern, Mims, Nguyen, Subscription Publication) shares its annual tech predictions for 2025, highlighting key trends. Every major tech company, including Amazon, Google, and Meta, will focus on AI agents that understand context, learn preferences, and interact with users to complete tasks. Amazon’s Alexa will receive a generative AI upgrade, along with smarter Echo speakers and deeper, more seamless interaction with the long-running voice assistant. Additionally, the article touches on cleaner power for data centers, with Amazon, Google, and Microsoft investing in nuclear power and alternative energy sources. Other predictions include advancements in weather forecasting, a crypto boom as Bitcoin has shot through the $100,000 barrier, and the launch of fully autonomous vehicles, including Amazon’s Zoox, which will offer public rides in Las Vegas, San Francisco, Austin, and Miami.

        Kit Eaton writes for Inc. Magazine (12/25) that 2024 was a landmark year for AI, marked by rapid technological progress and growing societal debate. OpenAI’s ChatGPT dominated the AI landscape, despite controversy over its GPT-4o model’s human-like voices and concerns about safety and leadership. Amazon’s efforts to modernize Alexa using AWS stumbled, while Apple prioritized privacy and user safety in its “Apple Intelligence” push. The year also saw intensified debates over AI’s impact on jobs (with estimates suggesting 40% of jobs will be influenced by AI) and concerns about AI-driven fraud, abuse, and misinformation, leading to increased discussions about regulation, including the EU’s new AI law. As AI innovation continues, 2025 is expected to bring more emphasis on smarter adaptations, agentic AI, and reasoning models, with dominant players potentially including OpenAI, Google, Microsoft, and Apple, but also potentially joined by innovative startups.

AI Chatbot Improves FAFSA Completion Among Washington Teens

The Seattle Times (12/24, Bazzaz) reported that the Washington Student Achievement Council’s OtterBot, an AI-powered chatbot, is potentially increasing the completion rate of the Free Application for Federal Student Aid (FAFSA) among low-income students. A report indicates that students using OtterBot were more likely to submit their FAFSA than those who did not. Sarah Weiss, WSAC’s director of college access initiatives said, “We remind the heck out of students about FAFSA.” Last year, 56% of OtterBot’s target audience completed the FAFSA, compared to 42% of eligible non-users. The bot, costing the state $464,000 annually, sends reminders about financial aid deadlines and answers queries from more than 100,000 subscribers. Despite past FAFSA glitches, OtterBot users found it helpful, with one describing it as a “friend through the process.” The tool, launched in 2019, aims to connect with College Bound families and is available in more than 100 languages.

Struggling Cities Across Midwest, Mid-Atlantic, South May Benefit As AI Reshapes Economic Geography, Study Says

The New York Times (12/26, Lohr) reports that as the use of artificial intelligence (AI) “moves beyond a few big city hubs and is more widely adopted across the economy, Chattanooga and other once-struggling cities in the Midwest, Mid-Atlantic and South are poised to be among the unlikely winners, a recent study found.” These metropolitan areas share common attributes such as “an educated work force, affordable housing, and workers who are mostly in occupations and industries less likely to be replaced or disrupted by AI, according to the study” that is “part of a growing body of research pointing to the potential for chatbot-style artificial intelligence to fuel a reshaping of the population and labor market map of America.”

How AI Tools Aid Students With Disabilities

The AP (12/26, Hollingsworth) reports that assistive technology powered by artificial intelligence is helping students with disabilities, such as dyslexia, to perform tasks that are easy for others. Makenzie Gilkison, a 14-year-old from Indianapolis, uses AI tools like a chatbot and word prediction programs to keep up with classmates, saying, “I would have just probably given up if I didn’t have them.” Schools are fast-tracking AI applications for students with disabilities, supported by the US Education Department and new rules from the Department of Justice. There are concerns about AI ensuring learning and not replacing it. Paul Sanft, director of a Minnesota-based center “where families can try out different assistive technology tools and borrow devices,” says AI can level the playing field, though there are risks of misuse. The US National Science Foundation is also funding AI research to develop tools for children with speech and language difficulties.

Bloomberg Analysis: Proliferation Of AI Data Centers May Be Distorting Power Distribution Across Grid In US

Bloomberg Business (12/27, Nicoletti, Malik, Tartar, Subscription Publication) reported that as “AI data centers are multiplying across the US and sucking up huge amounts of power,” there is new evidence showing “they may also be distorting the normal flow of electricity for millions of Americans.” This “problem is threatening billions in damage to home appliances and aging power equipment, especially in areas like Chicago and “data center alley” in Northern Virginia, where distorted power readings are above recommended levels.” According to an exclusive Bloomberg analysis, “more than three-quarters of highly-distorted power readings across the country are within 50 miles of significant data center activity.” Tom’s Hardware (12/28) provides additional coverage on the report.

        Tech Companies Seek New Energy Solutions For AI Data Centers. The Washington Post (12/27, Halper) reported that technology companies are investing in innovative energy projects to meet the growing electricity demands of AI-driven data centers. These centers could consume up to 17% of US electricity by 2030. Projects include World Energy’s green hydrogen initiative in Newfoundland, Microsoft’s revival of Three Mile Island nuclear plant, and Helion Fusion’s atomic fusion in Washington state. Other efforts involve TerraPower’s small nuclear reactors in Wyoming and Fervo Energy’s geothermal fracking in Utah and Nevada. These initiatives aim to provide sustainable power while addressing environmental concerns.

        OpenAI Expands DC Lobbying Efforts To Promote Energy Security For AI Data Centers. Politico (12/27, Chatterjee) reported, “OpenAI...is tripling the size of its D.C. policy team and trying to promote a sweeping new plan to deliver cheaper energy to data centers.” The company “is pushing Washington leaders to embrace the AI industry as crucial in the economic and security race against China.” To this end, “it has hired D.C. insiders from across the political spectrum and beefed up its lobbying as it tries to get Congress and state leaders to sign onto an ambitious plan to build tech and energy infrastructure for AI development.”

        Space Data Centers Offer Energy Solutions. CleanTechnica (12/28, Casey) reported that space-based data centers could address the growing energy demands of AI training, as proposed by US startup Lumen Orbit. Lumen argues that launching data centers into space could bypass terrestrial energy constraints and delays from infrastructure projects. The company highlights the cost efficiency of space solar power, noting that launching and operating in space could be cheaper than current Earth-based solutions. NASA, while focusing on space-to-space solar technology, is less enthusiastic about space-to-Earth solar energy, though interest and investment in the latter are growing. Lumen plans to deploy data centers in low Earth orbits to mitigate space debris and reduce visibility interference with astronomical observations. Data transmission to Earth would be via optical laser or shuttle-style systems. Lumen aims to launch a demonstrator this spring and scale up by 2026, with multiple gigawatts planned by 2030. The ASCEND consortium in the EU also sees space data centers as a promising alternative to reduce the environmental impact of digital applications on Earth.

Google Unveils Quantum Chip Willow

Forbes (12/25, Riani) reports that Google has introduced its new quantum computing chip, Willow, featuring 105 qubits. Willow can perform computations in under five minutes that would take classical supercomputers 10 septillion years. This advancement offers significant potential for startups, particularly in pharmaceuticals, renewable energy, and AI, by accelerating problem-solving and enhancing machine learning. However, it also presents cybersecurity challenges, necessitating quantum-resistant protocols. The increased accessibility through cloud platforms could foster collaboration among startups, academia, and tech companies, driving innovation in quantum applications.

Experts Say AI Agents Set To Transform Education By 2025

Forbes (12/26, Ravaglia) reported that artificial intelligence (AI) agents are poised to revolutionize education by 2025, according to insights from education innovators. Brainly CTO Bill Salak anticipates AI agents will “aggregate data, make decisions, and seamlessly perform actions” based on user instructions, transforming web interactions from human-focused to agent-optimized. Brad Barton, YouScience’s CTO, highlights AI’s growing role in classrooms, offering personalized support to students. Jack Lynch, CEO of HMH, predicts AI will free teachers to focus on student engagement. Jay Patel of Cisco foresees AI agents embodying organizational values, creating brand-aligned interactions. Hassaan Raza, CEO of Tavus, emphasizes the importance of a “human layer” for AI agents, enhancing interactions through empathy and video interfaces. Finally, Anurag Dhingra, SVP & GM of Cisco Collaboration, suggests AI will subtly integrate into daily life, shaping education significantly by 2025.

dtau...@gmail.com

unread,
Jan 4, 2025, 8:55:28 AMJan 4
to ai-b...@googlegroups.com

Hinton Backs Musk's Lawsuit Against OpenAI

ACM A. M. Turing Award laureate Geoffrey Hinton has voiced support for Elon Musk's lawsuit seeking to prevent OpenAI from restructuring into a for-profit company. Hinton said in a statement that OpenAI "received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem."
[ » Read full article ]

Business Insider; Kwan Wei Kevin Tan (December 31, 2024)

 

The World Needs Lazier Robots

Robots running on AI constantly process data, using so much of the energy consumed by datacenters that the emissions they're responsible for could outweigh their benefits. A potential solution proposed by René van de Molengraft at the Eindhoven University of Technology in the Netherlands is “lazy robotics,” in which machines do less and take shortcuts to learning, much as humans would.
[ » Read full article ]

The Washington Post; Samanth Subramanian; Emily Wright (December 31, 2024)

 

Hinton Shortens Odds of AI Wiping Out Humanity

ACM A. M. Turing Award laureate Geoffrey Hinton has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected. In an interview, Hinton, who this year was awarded the Nobel Prize in Physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next 30 years.

[ » Read full article *May Require Paid Registration ]

The Guardian (U.K.); Dan Milmo (December 27, 2024)

 

AI Needs So Much Power, It’s Making Yours Worse

A Bloomberg analysis shows that more than 75% of highly distorted power readings across the U.S. are within 50 miles of significant datacenter activity, based on readings from 770,000 home sensors. The problem is threatening billions of dollars in damage to home appliances and aging power equipment, especially in areas like Chicago and "datacenter alley" in Northern Virginia, where distorted power readings exceed recommended levels.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Leonardo Nicoletti; Naureen Malik; Andre Tartar (December 27, 2024)

 

AI Could Reshape the Economic Geography of the U.S.

As AI's use and benefits move beyond a few big city hubs, once-struggling cities in the Midwest, Mid-Atlantic, and South are poised to be among the beneficiaries. An academic study by labor economists points to those cities' educated work forces, affordable housing, and occupations and industries being less likely to be replaced or disrupted by AI as the primary reasons. These cities are well positioned to use AI to become more productive, helping to draw more people.

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (December 26, 2024)

 

Tech Industry Saw Rapid Advances And Challenges In 2024

The New York Times (12/30, Roose) reports that the tech industry experienced significant changes in 2024, with advancements in artificial intelligence (AI) and challenges from regulatory and political fronts. Major AI updates included OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. A notable achievement was Google’s AlphaFold team earning a Nobel Prize in Chemistry. The year also saw tech companies in conflict with regulators and a “tech right” supporting Donald J. Trump. Epoch AI, a nonprofit, was recognized for its influential AI research. Andres Freund, a Microsoft engineer, discovered a security flaw in Linux, highlighting the importance of open-source software maintainers. NASA’s Jet Propulsion Laboratory resolved a glitch on Voyager 1, while Bluesky, a social media platform, offered a fresh online experience. Google’s NotebookLM and Coloring Book Hero also provided practical AI applications.

        AI Developments And Challenges Explored In 2024. The AP (12/30, Parvini) reports that in 2024, the focus shifted from developing artificial intelligence (AI) models to creating practical products, according to Arvind Narayanan, a Princeton professor. Narayanan noted, “The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them.” AI tools are increasingly integrated into technology services, such as Google search and photo editing. However, the growth of AI models has plateaued since GPT-4’s release, shifting public discourse from existential fears to normalizing AI as technology. High costs and energy demands are concerns, with tech giants investing in nuclear power. Goldman Sachs analyst Kash Rangan remarked, “It’s more expensive than we thought and it’s not as productive as we thought.” AI’s role in the workforce raises concerns, with industries like entertainment fearing job impacts.

        Big Tech’s Billions In AI Spending Revealed. Quartz (12/30) reports on Big Tech’s massive investment in AI, with Microsoft, Meta, Google, and Amazon spending a combined $125 billion on AI data centers from January to August 2024, according to a JPMorgan report citing New Street Research. Amazon alone spent $19 billion, with $16 billion in AI capital expenditures, including $8 billion on GPUs and other data center chips, and $3 billion in operating costs, with $2 billion spent on training and research and development, and $1 billion spent on inferencing.

AI Copyright Lawsuits May Define Fair Use In 2024

Reuters Legal (12/27) reported upcoming court cases in 2024 may significantly impact AI’s use of copyrighted materials. Authors, artists, and other copyright holders have filed lawsuits against tech companies like OpenAI and Meta, accusing them of using their work for AI training without permission. The central issue is whether this constitutes “fair use.” Some tech companies argue their AI systems transform the content, thus making fair use. Courts’ decisions could vary, leading to appeals. Early indicators may come from ongoing disputes involving Thomson Reuters and music publishers against AI companies.

Nonprofit Backs Musk’s Push To Halt OpenAI’s For-Profit Transition

TechCrunch (12/27, Wiggers) reported, “Encode, the nonprofit organization that co-sponsored California’s ill-fated SB 1047 AI safety legislation, has requested permission to file an amicus brief in support of Elon Musk’s injunction to halt OpenAI’s transition to a for-profit company.” In the “proposed brief...counsel for Encode said that OpenAI’s conversion to a for-profit would ‘undermine’ the firm’s mission to ‘develop and deploy … transformative technology in a way that is safe and beneficial to the public.’”

AI To Reach Level Of “Maturity” In Education During 2025, Experts Predict

The Hill (12/31) reported “experts predict that 2025 will be the year artificial intelligence (AI) truly gets off the ground in K-12 schools.” This year “laid the groundwork for AI to reach a level of ‘maturity’ in education, with the federal government releasing guidance on the issue and growing numbers of teachers getting professional training on the technology and classes on data science available to students.” Advocates say it’s now “time for schools to shift from figuring out how to efficiently use AI to responsibly incorporating it into students’ lives.”

Google’s AI Studio Leader Predicts Direct Path To Superintelligence

Insider (12/31, Langley) reports that Logan Kilpatrick, Google’s AI Studio product manager, suggests a “straight shot” to artificial superintelligence (ASI) is becoming increasingly likely due to the success of scaling test-time compute. Kilpatrick shared on X that ASI may arrive like a “product release” rather than a singular event. He acknowledged the potential in Ilya Sutskever’s approach, despite initial skepticism. Sutskever, formerly of OpenAI, founded Safe Superintelligence, aiming for a focused pursuit of ASI. Kilpatrick remains cautiously optimistic about iterative versus direct approaches.

British Researchers Use AI To Detect Risk For Atrial Fibrillation

The Hill (12/31, Menezes) reported that British researchers have developed an AI tool capable of identifying individuals at risk of atrial fibrillation (AF) before symptoms manifest, potentially preventing thousands of strokes. Created by scientists at the University of Leeds and Leeds Teaching Hospitals NHS Trust, the system analyzes electronic health records, considering factors like age, sex, ethnicity, and existing health conditions to assess risk levels. Validated with data from over 12 million people, the tool is currently being tested in West Yorkshire, where high-risk patients receive portable ECG devices for heart rhythm monitoring, with hopes for nationwide implementation.

Coalition For Health AI Planning Quality Assurance Labs To Vet Health-Related AI Tools

Politico (1/1, Reader) reports that as the government struggles with “oversight” of artificial intelligence, one group, Coalition for Health AI is planning “to launch quality assurance labs to vet AI tools in 2025 that would effectively entrust the private sector with vetting the technology in the absence of government action.” According to Politico, “Biden administration officials have signaled support for the idea. The administration’s top health tech official, who previously served on CHAI’s board, endorsed the concept...in September. Nearly three thousand industry partners have joined the effort, including the Mayo Clinic, Duke Health, Microsoft, Amazon and Google. Anderson, who went on to become a consultant to federal regulators on health tech after his time as a family doctor, is now trying to convince President-elect Donald Trump that the health AI industry should oversee itself.”

Musk Intensifies Legal Battle With OpenAI

Washington Post (1/1, De Vynck) reports that Elon Musk has escalated his legal conflict with OpenAI, seeking to prevent the company from altering its nonprofit structure. Musk argues that OpenAI should not block investors from supporting competitors like his AI start-up, xAI. Tech investors Antonio Gracias and Gavin Baker support Musk’s claims that OpenAI imposed conditions on investors. OpenAI denies these allegations, stating investors were informed they would not receive sensitive information if they invested in rivals.

        OpenAI Delays Launch Of Media Manager Tool. TechCrunch (1/1, Wiggers) reports that OpenAI’s Media Manager tool, announced in May to allow creators to control their content’s inclusion in AI training data, remains unreleased seven months later. The tool was intended to address intellectual property concerns and mitigate legal challenges. However, insiders indicate it was not prioritized internally. OpenAI’s Fred von Lohmann, initially involved, has shifted to a part-time consultant role. IP experts doubt the tool’s effectiveness in addressing legal complexities. OpenAI continues to face lawsuits from creators over unauthorized use of their works in AI training.

AI Regulation Debate Intensifies In 2024

TechCrunch (1/1, Zeff) reports that in 2024, debates over AI regulation intensified as tech industry leaders and policymakers clashed over AI’s potential risks. California’s SB 1047 bill, aimed at preventing AI-induced catastrophic events, was vetoed by Governor Gavin Newsom. The bill faced opposition from venture capitalists and tech companies, including Andreessen Horowitz, who argued it stifled innovation. Proponents, like Encode’s Sunny Gandhi, remain optimistic about future regulatory efforts. Meanwhile, Marc Andreessen and a16z’s Martin Casado criticized regulatory attempts, with Casado calling AI “tremendously safe” despite ongoing safety concerns.

dtau...@gmail.com

unread,
Jan 10, 2025, 7:52:19 PMJan 10
to ai-b...@googlegroups.com

OpenAI's New o3 Model Freaks Out CS Majors

Some computer science (CS) majors have expressed concerns that AI will leave them without a job, pointing to OpenAI's new o3 reasoning model. One user on X said, "CS grads might honestly be cooked," while another user said they "might need to pivot." Georgia Institute of Technology AI Hub's Pascal Van Hentenryck said AI will not replace the need for computer scientists, but rather alleviate the need for them to work on "easy and tedious tasks."
[
» Read full article ]

Axios; Angrej Singh (January 7, 2025)

 

AI Trained to Predict Gene Activity

Scientists led by a team at Columbia University trained an AI algorithm to predict how the genes inside a cell will drive its behavior. The General Expression Transformer (GET) algorithm was trained using an approach similar to how ChatGPT was taught the grammar of language, learning along the way the underlying rules governing genes.
[
» Read full article ]

The Washington Post; Mark Johnson (January 9, 2025)

 

Medical Misinformation Easily Injected into LLMs

Large language models (LLMs) are compromised once misinformation accounts for 0.001% of training data, New York University researchers found. The team used GPT 3.5 to produce "high quality" medical misinformation that was then inserted into The Pile, a commonly used database for LLM training. The resulting LLMs not only produced misinformation on their targeted topics, but also on other medical topics.
[
» Read full article ]

Ars Technica; John Timmer (January 8, 2025)

 

41% of Companies Plan to Reduce Workforces by 2030 Due to AI

About 41% of employers worldwide intend to downsize their workforce by the end of this decade as AI automates certain tasks, according to a World Economic Forum survey of hundreds of large companies. About three-quarters of respondents said they plan to reskill/upskill their workers between 2025 and 2030 to better work alongside AI.
[
» Read full article ]

CNN; Olesya Dmitracova (January 8, 2025)

 

Driver in Las Vegas Cybertruck Explosion Used ChatGPT to Plan Blast

The driver of a Tesla Cybertruck that exploded on New Year's Day in front of the Trump International Hotel in Las Vegas used ChatGPT to learn how to construct an explosive and other facets of the attack. An OpenAI spokesperson said, "ChatGPT responded with information already publicly available on the Internet and provided warnings against harmful or illegal activities."
[ » Read full article ]

NBC News; Tom Winter; Andrew Blankstein; Antonio Planas (January 7, 2025)

 

AI Interprets Throat Vibrations to Create Sentences

Researchers at the U.K.'s University of Cambridge and University College London and China's Beihang University developed a model that determines what a person who finds it difficult to speak is trying to say based on throat muscle vibrations and carotid pulse. The data, obtained using textile strain sensors, is fed into two large language models, both based on GPT-4o-mini. The token synthesis agent is used to identify words mouthed by the user and arrange them in sentences, while the sentence expansion agent expands these sentences using contextual information and data on the user's emotional state.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (January 6, 2025)

 

At the Intersection of AI and Spirituality

Religious leaders are seeking to determine where AI fits within their calling. This search has resulted in an industry of faith-based tech companies that offer AI tools, including assistants that can do theological research and chatbots that can help write sermons. While many agree using AI for research or marketing or translating sermons into different languages is acceptable, others argue using it for sermon writing, for example, is unethical.

[ » Read full article *May Require Paid Registration ]

The New York Times; Eli Tan (January 3, 2025)

 

AI Robots Enter the Public World, with Mixed Results

With the emergence of generative AI (Gen AI), hopes are rising for greater adoption of robotics in public spaces. Robots rely on code that tells them how to execute functions or react to specific scenarios, limiting them to specific actions they were trained to perform. Gen AI could permit robots to better navigate obstacles, understand what certain objects are, and even take verbal commands, said ABB’s Marc Segura.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (December 31, 2024)

 

Microsoft Plans $80B Investment In AI-Enabled Data Centers

Reuters (1/3, Varghese) reported that on Friday, Microsoft announced it is planning “to invest about $80 billion in fiscal 2025 on developing data centers to train artificial intelligence...models and deploy AI and cloud-based applications.” Reuters points out that the announcement comes as investment in AI “has surged since OpenAI launched ChatGPT in 2022, as companies across sectors seek to integrate artificial intelligence into their products and services. ... As OpenAI’s primary backer,” Microsoft “is considered a leading contender among Big Tech companies in the AI race due to its exclusive partnership with the AI chatbot maker.”

CES: NVIDIA Announces AI Tools To Improve Robot, Vehicle Training

Reuters (1/7) reports NVIDIA used CES to reveal “new products such as artificial intelligence to better train robots and cars, souped-up gaming chips and its first desktop computer, as it expounded upon its potential to expand its business.” NVIDIA’s new Cosmos foundation models generate “photo-realistic video which can be used to train robots and self-driving cars at a much lower cost than using conventional data.”

        CIO Magazine (1/6, Swain) reports NVIDIA’s CES announcements placed “an emphasis on generative physical AI that promises a new revolution in factory and warehouse automation.” The company “defines physical AI as the embodiment of artificial intelligence in humanoids, factories, and other devices within industrial systems.” While LLMs are “one-dimensional,” physical AI “requires models that can understand and interpret a three-dimensional world.”

OpenAI CEO Says Trump Should Ease Power Plant Restrictions To Support AI Development

The Hill (1/7, Shapero) reports that OpenAI CEO Sam Altman “suggested that President-elect Trump should ease restrictions on data center and power plant construction to help boost the development of energy-intensive artificial intelligence (AI).” In a “wide-ranging interview with Bloomberg published Sunday, Altman said the most helpful thing the incoming Trump administration can do for AI is support the construction of ‘U.S.-built infrastructure and lots of it.’” Altman said, “The thing I really deeply agree with the president on is, it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff.”

        Altman Confident OpenAI Can Develop AGI. The Verge (1/6) reports that OpenAI CEO Sam Altman expressed confidence in the company’s ability to develop artificial general intelligence (AGI) as traditionally understood. In a blog post on Monday, Altman predicted AI agents might significantly impact company outputs this year. OpenAI’s next goal is achieving “superintelligence,” which could accelerate scientific discovery and innovation. Despite exclusivity deals with Microsoft, OpenAI is not yet profitable, losing money on its ChatGPT Pro subscriptions. Altman acknowledged governance failures, emphasizing the importance of trust and credibility in pursuing OpenAI’s mission to ensure AGI benefits humanity.

Meta Faces Backlash Over User-Generated AI Characters On Instagram

NBC News (1/7) reports that Meta’s AI Studio feature has sparked controversy after users created AI characters that violated the platform’s policies. NBC News found AI chatbots resembling figures like Jesus Christ, Donald Trump, Taylor Swift, Adolf Hitler, and others, despite Meta’s rules against such creations. Meta removed highlighted accounts after NBC News contacted the company, but similar characters remain active. The AI chatbots, some romantic or sexual in nature, have drawn scrutiny, with one popular bot described as “Your Girlfriend” exchanging over 260,000 messages. Meta CEO, Mark Zuckerberg recently announced a rollback of content moderation policies, citing concerns about over-enforcement. Joel Kaplan, Meta’s global policy chief, stated the company will now focus on “illegal and high-severity violations.” Meta has faced criticism for both user-created and company-created AI chatbots, some of which have been accused of perpetuating racial stereotypes or engaging in inappropriate interactions.

Administration To Further Limit Nvidia AI Chip Exports In Final Push

Bloomberg (1/8, Hawkins, Leonard, Subscription Publication) reports the Administration is planning “one additional round of restrictions on the export of artificial intelligence chips from the likes of Nvidia Corp. just days before” President Biden leaves office, in “a final push in his effort to keep advanced technologies out of the hands of China and Russia.” Bloomberg says the government “wants to curb the sale of AI chips used in data centers on both a country and company basis, with the goal of concentrating AI development in friendly nations and getting businesses around the world to align with American standards, according to people familiar with the matter.”

Former Google CEO Launches AI Video Startup “Hooglee”

Forbes (1/9, Emerson) reports that former Google CEO Eric Schmidt has initiated a new AI project named Hooglee, aimed at revolutionizing AI video generation. Founded last year and financed by Schmidt’s family office, Hillspire, Hooglee seeks to “democratize video creation with AI.” The startup’s website hints at a social networking aspect, aiming to “change the way people connect through the power of AI and video.” Schmidt has enlisted Sebastian Thrun, a technology veteran, to lead the project. Hooglee’s team includes former Meta AI lab scientists and Kittyhawk’s ex-general counsel. Schmidt’s staff reportedly view Hooglee as a potential TikTok alternative, although Schmidt himself declined to comment. Trademark applications suggest Hooglee’s product will be both AI video software and a social platform. Despite Schmidt’s enthusiasm, he has previously warned about AI’s potential dangers, particularly deepfakes, suggesting “AI detection systems and watermarking” as possible solutions.

Tesla’s AI Ambitions Include Robotaxi Service By 2025

Behind a paywall, Barron’s (1/9) reports that Tesla is advancing its AI-driven self-driving cars and humanoid robots, with plans to launch a robotaxi service by the end of 2025. CEO Elon Musk, in a video interview at the Consumer Electronics Show in Las Vegas, highlighted AI’s potential, stating it will outperform human drivers by early 2025. Tesla aims to produce several thousand robots in 2025, scaling to 500,000 by 2027. While Deutsche Bank analyst Edison Yu estimates Tesla could sell 200,000 robots annually by 2035, Musk’s projections are significantly more ambitious. Tesla stock was down 2% year to date.

AI Investments Powering US Economic Growth

According to NBC News (1/8, Wile), AI investments are significantly powering economic growth in the US, driven by tech companies’ capital spending on hardware and software to expand cloud-computing capacity. AWS, for example, announced an $11 billion investment this week in AI-related projects in Georgia. However, job creation from AI investments remains limited, with construction and utilities sectors benefiting most. The potential for AI to automate jobs poses a risk to employment growth in other sectors. Despite uncertainties about the timing of AI’s broader economic benefits, tech firms continue to invest in anticipation of future profitability.

Character. AI Faces Scrutiny Over School Shooter Chatbots

Forbes (1/9, Daniel) reports that Character. AI, a Google-backed chatbot platform, is under fire after users created chatbots simulating real-life school shooters and victims, allowing graphic role-play scenarios. In response, Character. AI removed the chatbots, stating that users violated its terms of service. The company also announced new measures to filter characters available to users under 18 and restrict access to sensitive topics. Experts raise concerns about how interactive AI tools can influence vulnerable users. Psychologist Peter Langman warned that chatbots could normalize harmful ideologies if users receive no intervention. Digital forensics experts noted that while AI can mimic language patterns, it lacks the ability to provide the nuanced understanding needed to interpret human behavior. The controversy underscores broader challenges in regulating generative AI platforms, with calls for stricter oversight and parental involvement to protect young users.

dtau...@gmail.com

unread,
Jan 18, 2025, 8:45:39 AMJan 18
to ai-b...@googlegroups.com

Nearly All Americans Use AI; Most Dislike It

A Gallup-Telescope survey of 3,975 U.S. adults conducted Nov. 26-Dec. 4, 2024, found that of the approximately 99% of respondents who used at least one AI-enabled product in the prior week, close to 67% were unaware they were doing so. Gallup's Ellyn Maese said there is "a lot of confusion when it comes to what is just a computer program versus what is truly AI and intelligent."
[ » Read full article ]

Axios; Ivana Saric (January 15, 2025)

 

Apple Joins Consortium to Help Develop Next-Gen AI Datacenter Tech

Apple has joined the Ultra Accelerator Link Consortium, a group working to develop the UALink standard to connect AI accelerator chips, from GPUs to custom-designed chips, to accelerate the training, fine-tuning, and running of AI models. The first UALink products, based on AMD's Infinity Fabric and other open standards, are expected to be released in the next few years. Other consortium members include Intel, AMD, Google, AWS, Microsoft, Meta, Alibaba, and Synopsys.
[ » Read full article ]

Tech Crunch; Kyle Wiggers (January 14, 2025)

 

U.S. Adopts Rules to Guide AI’s Global Spread

The Biden administration on Monday issued rules governing how AI chips and models can be shared with foreign countries. The rules, in essence, divide the world into three categories: the U.S. and 18 allies, which are exempted from any restrictions; nations already subject to U.S. arms embargoes, which will continue to face an existing ban on AI chip purchases; and all other nations, which will be subject to negotiable import caps.
[ » Read full article ]

The New York Times; Ana Swanson (January 14, 2025)

 

Biden Signs Executive Order to Ensure Power for AI Datacenters

President Biden on Tuesday signed an executive order providing federal support for the construction of datacenters to support the growth of AI. The order calls for leasing federal sites owned by the U.S. departments of Defense and Energy to host gigawatt-scale datacenters and new clean power facilities. It requires companies tapping federal land for datacenters to purchase an "appropriate share" of U.S.-made semiconductors.
[ » Read full article ]

Reuters; David Shepardson (January 14, 2025)

 

Forecasting Computation, Energy Costs for Sustainable AI Models

A method developed by North Carolina State University researchers predicts the costs associated with computational resources and energy consumption when updating AI models, allowing users to make informed decisions about when to update AI models to improve their sustainability. The REpresentation Shift QUantifying Estimator (RESQUE) method allows users to compare the dataset on which a deep learning model was initially trained to a dataset that will be used to update the model.
[ » Read full article ]

NC State University News; Matt Shipman (January 13, 2025)

 

PM Plans to 'Unleash AI' Across U.K. to Boost Growth

U.K. Prime Minister Sir Keir Starmer on Monday unveiled the AI Opportunities Action Plan, through which the government plans to use AI to deliver public services more efficiently. The plan calls for the establishment of "AI Growth Zones" and a boost to domestic infrastructure, with tech firms committing £14 billion towards the development of large datacenters and technology hubs.
[ » Read full article ]

BBC; Liv McMahon; Zoe Kleinman; Charlotte Edwards (January 13, 2025)

 

This Turing Award Winner Sees AI Showing Great Promise, Peril

ACM A.M. Turing Award laureate Raj Reddy discussed the benefits and potential drawbacks associated with AI during a recent memorial lecture at the Indian Institute of Science. Reddy said AI could democratize education by eliminating illiteracy and language barriers and facilitating personalized instruction. He warned, however, of AI’s implications for job displacement, and its potential for weaponization for military purposes and disinformation campaigns.
[ » Read full article ]

The Times of India; Akhil George (January 10, 2025)

 

OpenAI Shuts Down Developer Who Made AI-Powered Gun Turret

OpenAI has cut off a developer who built a device that responded to orders given to ChatGPT to aim and fire an automated rifle. The device went viral after a video on Reddit showed the developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls. Open AI said that after viewing the video, “We proactively identified this violation of our policies and notified the developer to cease this activity."
[ » Read full article ]

Gizmodo; Thomas Maxwell (January 9, 2025)

 

MIT Researchers Develop Faster Photonic Chip For Neural Networks

Ars Technica (1/12, Krywko) reports that MIT researchers have developed a photonic chip capable of processing deep neural networks with a latency of 410 picoseconds. This innovation bypasses traditional digitization, allowing calculations with photons directly, which could significantly reduce latency. Saumil Bandyopadhyay, an MIT researcher, emphasizes the importance of speed in applications, stating, “We aim for applications where what matters the most is how fast you can produce a solution.” The team successfully implemented both linear and non-linear operations on the chip, overcoming a significant challenge in photonics. Previously, non-linear functions were offloaded to external electronics, increasing latency. The chip uses Mach-Zehnder interferometers for linear matrix multiplication. This development could lead to faster and more energy-efficient neural network computations.

FTC Reviews Musk’s Lawsuit Against OpenAI Amidst Regulatory Concerns

Reuters (1/10, Godoy) reports the Federal Trade Commission and Department of Justice on Friday weighed in “on Elon Musk’s lawsuit seeking to block OpenAI’s conversion to a public company, pointing out legal doctrines that support his claim that OpenAI and Microsoft engaged in anticompetitive practices.” The FTC and DOJ “were not expressing an opinion on the case, but offered legal analysis on aspects of the case ahead of a Tuesday hearing in Oakland, California.” Separately, the FTC is “looking into partnerships in AI, including between Microsoft and OpenAI, investigating potentially anticompetitive conduct at Microsoft and probing whether OpenAI violated consumer protection laws.”

Administration Proposes Framework To Keep Cutting-Edge AI Limited To US And Allies

Reuters (1/13, Freifeld) reports the Administration “said on Monday it would further restrict artificial intelligence chip and technology exports,” in an effort “to keep advanced computing power in the U.S. and among its allies while finding more ways to block China’s access.” Specifically, the rules would “cap the number of AI chips that can be exported to most countries and allow unlimited access to U.S. AI technology for America’s closest allies, while also maintaining a block on exports to China, Russia, Iran and North Korea.” Commerce Secretary Raimondo said, “The U.S. leads AI now – both AI development and AI chip design, and it’s critical that we keep it that way.”

        The AP (1/13, Boak, O'Brien) reports Raimondo “said on a call with reporters previewing the framework that it’s ‘critical’ to preserve America’s leadership in AI and the development of AI-related computer chips.” She added it “is designed to safeguard the most advanced AI technology and ensure that it stays out of the hands of our foreign adversaries but also enabling the broad diffusion and sharing of the benefits with partner countries.” However, the AP says executives in the industry “raised concerns...the rules would limit access to existing chips used for video games and restrict in 120 countries the chips used for data centers and AI products” as limits may be imposed on “Mexico, Portugal, Israel and Switzerland.”

        The New York Times (1/13, Swanson) calls the rules “an attempt to set up a global framework that will guide how artificial intelligence spreads around the world in the years to come,” and are “dividing the world into three categories” which are: “the United States and 18 of its closest partners” all of whom “are exempted from any restrictions and can buy A.I. chips freely”; those “already subject to U.S. arms embargoes, like China and Russia, will continue to face a previously existing ban on A.I. chip purchases”; and “all other nations – most of the world – will be subject to caps restricting the number of A.I. chips that can be imported, though countries and companies are able to increase that number by entering into special agreements with the U.S. government.” Likewise, the Washington Post (1/13, Vynck, Dou) reports the “unprecedented new export controls” are “intended to slow China’s development of AI, and tighten U.S. government control.”

OpenAI CEO Seeks State Support For More Government Investment In AI

The Washington Post (1/13, Tiku, O'Donovan) reports that OpenAI CEO Sam Altman will conduct “a multistate tour to push for massive infrastructure spending by the incoming Trump administration to support companies working on artificial intelligence.” In 2022, Altman “won over Congress, and especially Democrats, by calling for new AI regulations and warning of the technology’s potential for catastrophic harm.” Now, “OpenAI will argue the states can benefit from the construction of new data centers for use by AI developers, and the electric grid upgrades needed to power the facilities.” President-elect Trump has already “signaled that he supports investing in AI infrastructure, a priority for the tech donors shaping his administration, including Elon Musk and venture capitalists David Sacks and Marc Andreessen.”

        The New York Times (1/13, Metz, Kang) reports that Altman “donated $1 million to President-elect Donald J. Trump’s inaugural fund,” and “now, he and his company are laying out their vision for the development of artificial intelligence in the United States, hoping to shape how the next presidential administration handles this increasingly important technology.”

Op-Ed Details How Trump Can Enhance AI Literacy For K-12 Education

In an opinion piece for The Hechinger Report (1/13), Arman Jaffer, the founder and CEO of AI-powered Chrome extension Brisk Teaching, writes that Donald Trump’s second term offers a chance to enhance AI literacy in K-12 education. Jaffer emphasizes that AI skills are vital for preparing students for tech-driven careers. He notes California’s recent mandate for AI and media literacy in schools but suggests it should focus more on career-specific skills. Jaffer advocates for expanding Trump’s previous career and technical education (CTE) initiatives to include AI, proposing grants to develop AI labs and integrate machine learning into curricula. He argues this would prepare students for an AI-powered workforce and align with Trump’s economic goals. Jaffer highlights existing programs that engage students with AI and stresses the importance of making AI education accessible to all students to foster a future-ready economy.

Biden Signs Order Intended To Spur Development Of AI Infrastructure

The AP (1/14, Parvini) reports that on Tuesday, President Biden “signed an ambitious executive order on artificial intelligence that seeks to ensure the infrastructure needed for advanced AI operations, such as large-scale data centers and new clean power facilities, can be built quickly and at scale in the United States.” Biden’s order “directs federal agencies to accelerate large-scale AI infrastructure development at government sites, while imposing requirements and safeguards on the developers building on those locations. It also directs certain agencies to make federal sites available for AI data centers and new clean power facilities.”

        CNBC (1/14, Haddad) explains the order “empowers the U.S. Department of Defense and Department of Energy to lease federal sites for gigawatt-scale AI data centers.” CNBC notes companies “leasing the federal lands will also be required to purchase an ‘appropriate share’ of U.S.-manufactured semiconductors and to pay workers ‘prevailing wages,’ according to the release.” Reuters (1/14, Shepardson) reports the President “said the order will ‘accelerate the speed at which we build the next generation of AI infrastructure here in America, in a way that enhances economic competitiveness, national security, AI safety, and clean energy.’”

OpenAI Publishes New AI Policy Blueprint

Politico (1/14) reports that OpenAI has released a new policy blueprint focusing on competition with China and domestic safety concerns. The blueprint is part of OpenAI’s effort to influence policy discussions on AI as they plan to demonstrate their latest AI tools in Washington. OpenAI’s VP for global affairs, Chris Lehane, emphasized the need for a forward-thinking approach to national security and economic competitiveness. Despite tensions with Elon Musk, OpenAI aims to collaborate with the incoming Trump administration to ensure the US leads in AI innovation and national security.

        TechCrunch (1/14, Wiggers) reports that OpenAI has removed the phrase endorsing “politically unbiased” AI from its “economic blueprint” for the U.S. AI industry. The revised document omits previous language suggesting AI models should be unbiased. An OpenAI spokesperson stated the change was to “streamline” the document, noting other documents address objectivity. The revision highlights ongoing debates about AI bias, with figures like Elon Musk and David Sacks criticizing AI for alleged liberal bias. OpenAI claims any biases in ChatGPT are unintended “bugs, not features.”

        OpenAI’s o1 Model Observed Switching Languages Unexpectedly. TechCrunch (1/14, Wiggers) reports that OpenAI’s reasoning AI model, o1, displays a peculiar behavior of switching languages during its reasoning process. Users have observed o1 starting in English but transitioning to languages like Chinese or Persian mid-thought. Experts speculate this could be due to o1’s training on diverse datasets, including Chinese characters, or using languages it deems efficient. Matthew Guzdial suggests o1 processes text as tokens, not words, which may explain the inconsistency. Luca Soldaini emphasizes the need for transparency to understand such AI behaviors. OpenAI has not commented on this phenomenon.

AI Tools In Education May Impact Human Connections For Students

The Seventy Four (1/14, Fisher) reports that OpenAI released a safety card for ChatGPT 4.0 in August, highlighting risks such as “anthropomorphization and emotional reliance.” The document warns of AI’s potential to create compelling experiences that might lead to “overreliance and dependence.” This concern extends to educational technology, where AI tools could displace human connections crucial for student well-being. A new report, Navigation and Guidance in the Age of AI, examines AI’s role in college and career guidance, noting that chatbots often adopt human-like names and personalities to provide emotional support. While some students prefer bots over human interaction, leaders in the field are developing AI that fosters genuine relationships. Despite these efforts, few schools prioritize relationship-centered AI, risking increased student isolation. The report suggests that schools should demand evidence of AI’s positive impact on relationships to avoid the “catch-22” of improved AI at the cost of human connections.

Vancouver Hosts First High School AI Research Competition

The Chronicle of Higher Education (1/14, M. Lee) reports that the NeurIPS conference in Vancouver hosted its first research competition for high school students, with 18-year-old Weichen Huang among the winners. Huang, who traveled from Dublin, Ireland, was excited to present his machine-learning project among 17,000 attendees, including prominent figures from Meta, Alphabet, and Microsoft. The competition aimed to “get the next generation excited” about AI, but some critics argue it may set unrealistic expectations and exacerbate inequities. Assistant professor Gautam Kamath from the University of Waterloo remarked, “I feel like they slapped on a science-fair aspect to the entire conference.” NeurIPS received more than 330 high school submissions, with a selection rate of about 8%. Graduate student Fred Zhangzhi Peng from Duke University noted the resource challenges for high school students in AI research, saying, “For most of the average high schoolers, there’s no way you can afford that kind of computing.”

Meta Develops Innovative Real-Time Speech Translation System

Ars Technica (1/15, Krywko) reports that Meta’s Seamless team is addressing the challenges of real-time speech translation by creatively overcoming data scarcity. Current AI translators often falter in speech-to-speech translation due to the accumulation of errors in multi-stage processes. While some systems can translate directly into English, they lack bidirectional communication capabilities. Meta’s team, inspired by Warren Weaver’s 1949 idea of a universal language, utilized “multidimensional vectors” as a common base for human communication. Machines convert words into numerical vectors, which are sequences of numbers representing meaning. When you “vectorize aligned text in two languages like those European Parliament proceedings, you end up with two separate vector spaces,” allowing neural networks to map these spaces. This approach aims to improve translation quality and facilitate seamless communication akin to a “Star Trek universal translator.”

        Meta Faces Legal Scrutiny Over AI Training Practices. The Verge (1/14) reports that a copyright lawsuit against Meta has uncovered internal communications about its AI development plans, including using copyrighted data for training. Court documents reveal Meta’s alleged use of the book piracy site Library Genesis (LibGen) to develop its AI model, Llama, while attempting to conceal this. Emails suggest Meta executives, including Ahmad Al-Dahle, weighed the risks of using pirated content. The lawsuit, filed by Richard Kadrey and Sarah Silverman, accuses Meta of violating intellectual property laws. Meta has argued that using copyrighted material for training should be considered fair use.

Report: AI Use In Schools Rises Despite Privacy Concerns

K-12 Dive (1/15, Merod) reports that the Center for Democracy & Technology (CDT) released a report Wednesday highlighting increased use of generative AI by students and teachers between the 2022-23 and 2023-24 school years. Teacher use rose from 51% to 67%, while student use increased from 58% to 70%. Teachers were “more likely to tap into AI for school uses over personal reasons,” while students did the opposite, CDT noted. Despite this rise, two-thirds of teachers lack guidance on handling AI-related plagiarism, though 39% use AI detection software. Concerns persist about AI detectors’ reliability, with claims that they may harm English learners and students with disabilities. Additionally, 23% of teachers reported large-scale data breaches in schools during the 2023-24 school year. Elizabeth Laird, director of the Equity in Civic Technology Project at CDT, in a Wednesday statement, emphasized the need for schools to communicate with families about the use of educational technology.

SUNY Mandates AI Education For Undergraduates

Inside Higher Ed (1/16, Alonso) reports that the State University of New York (SUNY) will require all undergraduate students to study artificial intelligence (AI) as part of their general education. This decision, announced earlier this month, modifies the “core competencies” by including AI ethics and literacy in the Information Literacy requirement, effective fall 2026. SUNY chancellor John B. King emphasized the importance of understanding AI ethically, stating, “We are proud that … we will help our students recognize and ethically use AI.” The curriculum change coincides with rising concerns about AI’s ethical implications, including potential workforce impacts. Courses across SUNY’s 64 institutions will incorporate AI content, with individual departments developing specific curricula. Lauren Bryant, a lecturer at the University at Albany, already integrates AI discussions in her course, highlighting AI’s strengths and limitations. Sam Wineburg, a professor from Stanford University, warns of students’ potential struggles with AI, noting, “There’s no indication that students have the prerequisite skills.”

AI-Enabled Robot Learns To Dance By Mirroring Humans

Popular Science (1/16, DeGeurin) reports that researchers from the University of California, San Diego have developed an AI-enabled robot capable of performing a Waltz by mimicking its human partner’s movements. The team created an AI model, ExBody2, trained on human motion capture data, and integrated it into Unitree G1 robots. These robots analyze and replicate human motions using real-world data captured by their cameras. Unlike pre-programmed robots, this approach allows the robot to learn movements organically, making it more adaptable. Videos show the robot executing various movements, such as sidestepping and squatting. Researchers highlight that this method could reduce the need for frequent retraining, potentially accelerating robot development and lowering costs.

Microsoft CEO Discusses AI Investment With President-Elect, Musk

The Bloomberg (1/16, Subscription Publication) reports that Microsoft CEO Satya Nadella met with US President-elect Donald Trump and Elon Musk to discuss artificial intelligence and cybersecurity. Microsoft plans to invest $80 billion in AI data centers globally, with over $50 billion in the US, creating American jobs. Microsoft President Brad Smith, present at the meeting, advised against “heavy-handed regulations” on AI. Microsoft and other cloud providers are expanding data centers, driven by AI demand. Microsoft has partnered to reopen a nuclear reactor for power needs, similar to agreements by Amazon and Google.

dtau...@gmail.com

unread,
Jan 26, 2025, 1:55:31 PMJan 26
to ai-b...@googlegroups.com

Tech Giants Announce U.S. AI Plan Worth up to $500 Billion

OpenAI, Oracle, and Softbank on Tuesday announced a partnership to build datacenters and other infrastructure to power AI, in partnership with MGX, a tech investment arm of the United Arab Emirates government. The Stargate initiative aims to invest $100 billion "immediately" and $500 billion over the next four years. U.S. President Donald Trump said the plan is a "resounding declaration of confidence in America's potential."
[ » Read full article ]

BBC News; João da Silva; Natalie Sherman (January 22, 2025)

 

Executive Order Calls for AI ‘Free from Ideological Bias’

President Trump on Thursday signed an executive order revoking past government policies on AI that “act as barriers to American AI innovation.” To maintain global leadership, “We must develop AI systems that are free from ideological bias or engineered social agendas,” the order states. While the order does not specify which policies are hindering AI development, it calls for a review of “all policies, directives, regulations, orders, and other actions taken” as a result of the former administration's AI executive order.
[ » Read full article ]

Associated Press; Matt O'Brien; Sarah Parvini (January 23, 2025)

 

Self-learning Chip Mimics Brain Functions

A miniature computing chip developed by researchers in South Korea can self-learn and correct errors much like the human brain does. When processing video streams, for example, the chip teaches itself how to separate moving objects from the background, improving its performance over time. Researchers at the Korea Advanced Institute of Science and Technology said the new chip "is like a smart workspace where everything is within reach instead of moving between a desk and a filing cabinet."
[ » Read full article ]

Chosun Biz (South Korea); Hong A-reum (January 17, 2025)

 

Chinese AI Startup Competes with Silicon Valley Giants

Chinese startup DeepSeek recently unveiled an AI system that could match the capabilities of the latest chatbots from companies like OpenAI and Google. In a research paper accompanying the release of its DeepSeek-V3, the team explained how they used about 2,000 specialized computer chips from Nvidia to train their system. By comparison, the world’s leading AI companies train their chatbots with supercomputers that use as many as 16,000 chips, or more.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Meaghan Tobin (January 24, 2025)

 

Trump Scraps Biden’s Sweeping AI Order

U.S. President Trump rescinded an executive order by former U.S. President Biden regulating AI, immediately halting implementation of safety and transparency requirements for AI developers. Biden’s order required leading AI companies to share safety test results and other critical information for powerful AI systems with the federal government. It also prompted the creation of the U.S. AI Safety Institute, housed under the U.S. Commerce Department, to create voluntary guidelines and best practices for the technology’s use.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jackie Davalos; Oma Seddiq (January 21, 2025)

 

AI Assembles Quantum Computer From Cold Atoms

A quantum computer developed by researchers at China’s University of Science and Technology features 2,024 atoms assembled by AI into an ultracold grid. The researchers developed an AI algorithm capable of recommending a sequence of laser beams and atoms to form the grid within 60 milliseconds, regardless of the grid's size.

[ » Read full article *May Require Paid Registration ]

New Scientist; Karmela Padavic-Callaghan (January 14, 2025)

 

College Admissions Evolving With AI Tools And Test-Optional Policies

Forbes (1/18, Hernholm) contributor Sarah Hernholm wrote that the college admissions landscape is experiencing significant changes due to technology, policies, and shifting priorities. The move towards test-optional policies, initiated during the pandemic, continues with more than 1,800 institutions adopting it, while some have reinstated test requirements. AI tools like Scoir and MaiaLearning are revolutionizing college searches by aligning applicants with suitable institutions, though reliance on AI for essays is cautioned against. Colleges, such as Georgia State University, use AI to streamline processes, but concerns about equity persist. Holistic admissions now emphasize extracurricular activities and personal essays, with 56% of colleges valuing them highly. Career-oriented programs are gaining traction, with universities offering co-op education and partnerships, as seen with Purdue University’s collaboration with United Airlines. Liberal-arts colleges remain relevant by showcasing versatile skills. Northeastern University’s co-op program exemplifies integrating academics with career preparation.

How AI Enhances Education For Neurodivergent Children

Forbes (1/19, Palumbo) contributor Jennifer Jay Palumbo wrote that traditional educational methods often fail to meet the needs of neurodivergent children, with 70% thriving when information is presented visually. However, creating personalized materials is resource-intensive, leaving educators and parents struggling. Jaivin Anzalota, co-founder of education platform Ella, said, “Educators and therapists know individualized visual supports make a difference, but they lack the time, energy, and expertise to create them.” Antoinette Banks, founder of Expert IEP, highlights AI’s potential, stating, “AI can adapt to how people naturally think and process information.” AI tools can generate customized visual aids and task lists, benefiting children who process information differently. Banks said, “AI recognizes these differences and provides tools tailored to each child’s needs.” AI also raises ethical concerns, such as data privacy and over-reliance on technology. Anzalota added, “Technology has the power to enable inclusion in meaningful ways.”

Trump Lauds $100B AI Joint Venture

The AP (1/21, Boak, Miller) reports President Trump on Tuesday “talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.” The AP also notes the White House said Stargate “will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas,” beginning with an investment “expected to be $100 billion and could reach five times that sum.” According to Politico (1/21, Ng, Daniels), “AI development is a significant part of the Trump administration’s tech policy proposals, seeking to advance growth in not just the technology itself, but the data centers and energy capabilities it requires.”

        The New York Times (1/21, Kang, Metz) calls it “an early trophy for Mr. Trump, even though the effort to form the venture predates his taking office.” Likewise, the Wall Street Journal (1/21, Seetharaman, Dotan, Subscription Publication) highlights Stargate is “the latest high-profile initiative timed with the start of the Trump administration,” even though it “includes projects that the companies already announced and initiated under the Biden administration, people familiar with the matter said.” Furthermore, CNN (1/21, Duffy) reports Stargate’s creation comes after “AI leaders [spent] months...sounding the alarm that more data centers – as well as the chips and electricity and water resources to run them – are needed to power their artificial intelligence ambitions in the coming years.”

        Meanwhile, Reuters (1/21, Bose, Chiacu) reports White House Press Secretary Karoline Leavitt earlier claimed the “massive announcement” is “going to prove that the world knows that America is back.” However, Reuters casts Leavitt as “echoing an unrealized promise during Trump’s first term to bolster aging America’s roads, bridges and other networks,” and Bloomberg (1/21, Lai, Subscription Publication) says “skepticism remains about whether the initiative...actually amounts to a dramatic increase from previous plans.” Furthermore, Bloomberg notes that “the actual scope of new commitments remained unclear.”

Google Targets 500M Users For Gemini Chatbot

The Wall Street Journal (1/21, Subscription Publication) reports that Google CEO Sundar Pichai aims for the Gemini chatbot to reach 500 million users by the end of the year. Despite being ambitious, this target is achievable given Google’s existing user base across its products. Google plans to leverage partnerships with Android phone makers, such as Samsung and Motorola, to promote Gemini. The company has also made strides in AI technology, surpassing OpenAI in some rankings. Google’s focus on Gemini reflects its strategy to maintain a strong presence in the evolving AI chatbot market and potentially disrupt traditional search methods.

Survey Reveals College Leaders’ Divisions On Generative AI Readiness

The Chronicle of Higher Education (1/23, McMurtrie) reports that a recent survey conducted by the American Association of Colleges and Universities and Elon University’s Imagining the Digital Future Center highlights concerns among college leaders about the readiness of institutions to integrate generative AI. The survey, titled “Leading Through Disruption: Higher Education Executives Assess AI’s Impacts on Teaching and Learning,” involved more than 330 senior leaders, revealing that only 43% feel prepared to use AI effectively. The survey indicates that “93 percent cited faculty unfamiliarity with generative AI” as a significant challenge. Lynn Pasquerella, AAC&U president, emphasized the need for proactive measures, stating that leaders must “actively investigate and seek to comprehend the risks and rewards of AI.” The report also shows mixed views on AI’s impact, with 45% seeing it as more positive than negative. Institutions with more than 10,000 students show more confidence in AI adoption.

AI Tutors Enhance Student Learning And Confidence In College Course Materials

Inside Higher Ed (1/22, Mowreader) reports that Macmillan Learning’s AI Tutor, integrated into its Achieve platform, supports college students in STEM and economics courses by addressing questions and enhancing learning. The generative AI tutor acts as “an extension of an instructor or teaching assistant,” offering guidance without judgment. Analysis of more than two million messages from 8,000 students across 80 courses showed the tool’s effectiveness in promoting self-efficacy and problem-solving through Socratic questioning. Students engaged with the AI Tutor for an average of 6.3 minutes per session, often using it during late-night hours. Surveys indicated that 41% of instructors observed improved student confidence and exam performance, while 44% of students reported increased confidence in their problem-solving skills. Despite concerns about AI misuse, 67% of students reported using the tutor only when necessary.

Musk, Altman Feud Over Trump Stargate AI Project Announcement

The AP (1/22) reports Elon Musk “is clashing” with OpenAI CEO Sam Altman over the $500 billion Stargate artificial intelligence infrastructure project which was announced by President Trump on Tuesday. In a post on X, Musk alleged that a primary investor, SoftBank, doesn’t “actually have the money” to fund the project. Altman responded, telling Musk he is “wrong, as you surely know,” and adding that Stargate “is great for the country” while urging Musk to “mostly put (America) first” in his role in the Administration. CNBC (1/22, Breuninger) reports that while OpenAI, Oracle, and Softbank did not comment on Musk’s claim, a “person familiar with the AI project” told CNBC that Musk was “far off base.” The source also suggested that “Musk’s testy relationship with Altman was the catalyst for his posts about Stargate.” Similarly, CNN (1/22, Gold) says that “it should not be a surprise that Musk is going after an OpenAI initiative,” as he “is in an ongoing lawsuit with OpenAI and its CEO Sam Altman.” Musk previously said he “doesn’t trust” Altman, and “claims in the lawsuit the ChatGPT has abandoned its original nonprofit mission by reserving some of its most advanced AI technology for private customers.”

        The Wall Street Journal (1/22, Schwartz, Subscription Publication) says the exchange revealed the “sometimes awkward dynamic” between Musk and Trump, and showed that Musk “won’t pare back his unfiltered online commentary now that Trump has taken office.” Bloomberg (1/22, Subscription Publication) claims the exchange could start an “early internal rift within the White House,” and “underscored some of the tensions that could dominate Trump’s second term in office and echo issues he faced during his last stint at the White House.”

        Meanwhile, Politico (1/22) says the argument “quickly went from a political victory lap for Trump to an almost comical illustration of what billionaires will fight over in public.”

Community Colleges Form AI Consortium To Enhance Workforce Readiness

Inside Higher Ed (1/23, Palmer) reports that colleges and universities are launching initiatives to prepare students for AI-related jobs, varying by resources and industry ties. Community colleges, serving many low-income students, aim to bridge this gap. Michael Baston, president of Cuyahoga Community College (Tri-C) in Ohio, emphasizes the importance of inclusivity in the AI revolution, stating, “We have a moral and ethical responsibility to make sure the masses don’t get left out.” Tri-C and other colleges joined the Complete College America’s inaugural AI Readiness Consortium, aiming to design 25 new courses incorporating AI tools. Charles Ansell, vice president for research, policy and advocacy at CCA, warns that without innovation, “we’re going to see a reduction in career ladders.” CCA invests $500,000 to support this initiative, with Riipen, “a Vancouver-based education-technology start-up and work-based learning platform that allows instructors to embed employer projects directly into classroom instruction,” aiding in embedding real employer projects into coursework.

US, EU Take Different Tacks On AI Regulation

TechTarget (1/23, Pariseau) reports the Trump Administration this week “rescinded its predecessor’s executive order on AI safety this week, while the European Union will begin enforcing its own new regulations beginning next month, potentially putting multinational companies in a regulatory bind.” For now, “action on AI safety in the U.S. might fall to state and local governments, along with efforts by private-sector groups such as the Cloud Security Alliance’s AI Safety Initiative and the Coalition for Secure AI.” Some industry analysts “said they were concerned that a regulation such as the EU’s AI Act looks to deploy controls against a technology that is still so nascent and rapidly evolving, it’s difficult to know what will even be relevant in a matter of a few months.”

K-12 Schools Face AI Integration Challenges

K-12 Dive (1/23, Merod) reports that K-12 schools are navigating the integration of artificial intelligence (AI) amid both opportunities and challenges. As schools receive guidance from national organizations and the federal government, concerns about AI misuse, such as deepfakes and lawsuits, are emerging. Kris Hagel, chief information officer at Peninsula School District in Washington, highlights the uncertainty of federal AI support under President Trump’s second term. Pat Yongpradit, chief academic officer of Code.org and lead for TeachA, notes that state education agencies will likely continue developing AI resources, with 24 states already releasing guidance. AI tools tailored for special education and English learners are expected, with Hagel advising against using free AI tools like ChatGPT, advocating for secure AI enterprise systems instead. Yongpradit anticipates increased teacher reliance on AI detectors but advises focusing on teaching motivations to address cheating. Despite interest, some districts struggle with AI due to resource constraints and lack of understanding.

 

DOD HOPES FOR STARGATE BENEFIT: If OpenAI can actually implement its Stargate Project to build $500 billion worth of AI infrastructure in the U.S., one of the major beneficiaries may be the U.S. military. “It depends on how much of that they devote to gov[ernment] cloud and AI cloud,” said Roy Campbell, chief strategist for the Pentagon’s High Performance Computing Modernization Program and deputy director for advanced computing in the undersecretariat for research and engineering. And if the Defense Department can get a slice of Stargate’s computing power, he told Breaking Defense, it could bypass a major bottleneck for its current high-tech ambitions.

dtau...@gmail.com

unread,
Feb 2, 2025, 7:25:23 PMFeb 2
to ai-b...@googlegroups.com

International AI Safety Report Released Ahead of Action Summit

An international AI safety report published Wednesday ahead of the AI Action Summit hosted by France next month compiled insights from 100 independent international experts. ACM A. M. Turing Award laureate Yoshua Bengio, the driving force behind the report, said that while AI holds "great potential" for society, it also presents "significant risks." He said the intention of the report was to “facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations."
[
» Read full article ]

Gov.UK (January 29, 2025)

 

LeCun Says DeepSeek's Success Shows Benefits of Open Source Models

ACM A.M. Turing Award laureate Yann LeCun says the success of the R1 model released recently by Chinese AI company DeepSeek shows the value of keeping AI models open source. It's not that China's AI is "surpassing the U.S.," but rather that "open source models are surpassing proprietary ones," LeCun said in a post on Instagram’s Threads app. "They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it."
[ » Read full article ]

Business Insider; Katie Balevic; Lakshmi Varanasi (January 25, 2025)

 

Sensitive DeepSeek Data Exposed to Web

Cybersecurity firm Wiz said in a blog post that scans of Chinese AI startup DeepSeek's infrastructure showed that company had inadvertently left more than a million lines of data available unsecured, including digital software keys and chat logs that appeared to capture prompts being sent from users to the company's recently unveiled AI assistant. After alerting DeepSeek of the find, the company quickly secured the data.
[
» Read full article ]

Reuters; Raphael Satter (January 29, 2025)

 

Initiative Aims to Enable Ethical Coding LLMs

Nonprofit Software Heritage has launched the CodeCommons project with the goal of creating the biggest repository of ethically sourced code for training AI models. CodeCommons will be focused on developing a unified data platform that gives researchers access to pre-cleaned code collections featuring license information, links to related research papers, and other metadata.
[
» Read full article ]

IEEE Spectrum; Edd Gent (January 28, 2025)

 

AI, Holograms Create Uncrackable Optical Encryption System

By combining AI with holographic encryption, a team led by Stelios Tzortzakis at the University of Crete in Greece developed an ultra-secure data protection system that uses neural networks to retrieve elaborately scrambled information stored as a hologram. The researchers found the neural network could accurately retrieve encoded images 90-95% of the time.
[
» Read full article ]

Optica (January 30, 2025)

 

'First AI Software Engineer' Bad at Job

Auto-coder “Devin,” billed as "the first AI software engineer" when it was introduced last March by Cognition AI, performed badly on an exam from data scientists affiliated with Answer.AI, completing just three out of 20 tasks successfully. The service uses Slack as its main interface for commands, which are sent to its computing environment, a Docker container that hosts a terminal, browser, code editor, and planner. According to the examiners, Devin had a habit of getting stuck in technical dead-ends or producing overly complex, unusable solutions.
[ » Read full article ]

The Register (U.K.); Thomas Claburn (January 23, 2025)

 

AI Boom Is Giving Rise to 'GPU-as-a-Service'

Kinesis, Hyperbolic, Runpod, and Vast.ai are among the firms offering access to computing power to AI startups via GPU-as-a-Service (GPUaaS). GPUaaS is more cost-effective for AI startups by eliminating the need to purchase and maintain physical infrastructure and allowing startups to pay for their exact amount of GPU usage. It also is more sustainable because it takes advantage of existing, unused processing units and does not require new servers.
[ » Read full article ]

IEEE Spectrum; Juan Pablo Perez (January 20, 2025)

 

'The Brutalist' Sparks Controversy After Film's Editor Reveals Use of AI

A debate has emerged about whether "The Brutalist" should be considered for an Oscar after film editor Dávid Jancsó disclosed that the AI tool Respeecher was used to enhance the accents of lead actors Adrien Brody and Felicity Jones when speaking Hungarian. Noting that "it's an extremely unique language," Jancsó said they "wanted to perfect it so that not even locals will spot any difference." AI also was used to produce architectural drawings and finished buildings shown in the film.
[ » Read full article ]

NBC News; Rebecca Cohen; Chloe Melas (January 20, 2025)

 

Vatican Warns About the Risks of AI

A paper issued by the Vatican Jan. 28 emphasizes the need for constant AI oversight, citing the wealth of opportunities provided by the technology, as well as its "profound risks." The paper, developed by a Vatican team in conjunction with AI and other experts, expressed concerns about the potential for AI to destroy trust by spreading misinformation, its ability to cause isolation, and its possible harmful effects on human relationships.


[
» Read full article *May Require Paid Registration ]

The New York Times; Elisabetta Povoledo (January 29, 2025)

 

AI-Powered Robot, Gaming Help Scientists Identify Deep-Sea Species

Monterey Bay Aquarium Research Institute (MBARI) scientists are using an AI-powered robot, MiniROV, to locate and track marine organisms autonomously. According to MBARI's Kakani Katija, "The goal is to track individual animals for up to 24 hours so we can answer questions about the animal's behavior and ecology." The researchers also launched FathomVerse, a game that allows citizen scientists to explore a virtual ocean and classify marine organisms in the FathomNet database in an effort to train the AI.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Todd Woody (January 29, 2025)

 

Chevron Joins Race to Generate Power for AI

Chevron is partnering with Engine No. 1, a San Francisco-based investment firm, to build natural gas-fueled power plants that will feed energy directly to AI datacenters, joining other oil and gas producers that are adjusting their strategies and leaning into power generation rather than drilling and processing. Last month, Exxon said that it, too, wanted to get into the business of selling electricity to datacenters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Rebecca F. Elliott (January 28, 2025)

 

In Seattle, a Convergence of 5,444 Mathematical Minds

The Joint Mathematics Meetings was held in Seattle Jan. 8-11, drawing 5,444 mathematicians with the theme of "Mathematics in the Age of AI." Yann LeCun, Meta's chief AI scientist and an ACM A.M. Turing Award laureate, delivered a keynote in which he discussed the current state of machine learning. LeCun also suggested a "large-scale world model" as an alternative for generative large language models, noting that it "can reason and plan because it has a mental model of the world that predicts consequences of its action."


[
» Read full article *May Require Paid Registration ]

The New York Times; Siobhan Roberts (January 28, 2025)

 

Meta Announces $65B Investment To Accelerate AI Innovations In 2023

The New York Times (1/24, Isaac) reports, on Friday, Mark Zuckerberg said Meta “expected its capital expenditures in 2025 to come in at an estimated $60 to $65 billion, a big increase compared with the roughly $38 to $40 billion Meta spent in 2024.” Much of that amount will go towards “building and expanding data centers, the warehouse-size buildings that provide the computing power that fuels Meta’s A.I. products and algorithms across its apps, which include Facebook, Instagram and WhatsApp.” In a Facebook post, Zuckerberg said, “This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership.” The Wall Street Journal (1/24, Subscription Publication) provides similar coverage.

AI-Powered Charter School Faces Skepticism In Pennsylvania

Chalkbeat (1/24, Sitrin) reported that MacKenzie Price is proposing a new cyber charter school in Pennsylvania, utilizing AI-powered lesson plans and virtual reality experiences. The school, Unbound Academy, aims to launch in 2025, initially serving 500 students with only four teachers. Price claims her 2 Hour Learning model, co-founded with proprietary AI software and third-party apps, can significantly enhance academic performance. However, her model has faced rejection in several states, and critics argue it relies on selective data from private schools. “The results that I’ve been able to get from our schools have been absolutely phenomenal,” Price stated. Despite her assertions, skepticism remains about the method’s effectiveness and the role of teachers The Pennsylvania Department of Education is expected to decide on the charter’s approval soon, amid calls for more scrutiny on cyber charters.

DeepSeek’s AI Models Challenge US Tech Industry’s Dominance

Reuters (1/27) reports that Chinese startup DeepSeek has launched AI models, DeepSeek-V3 and DeepSeek-R1, claiming they rival or surpass US models at lower costs. DeepSeek’s AI Assistant has surpassed ChatGPT as the top-rated free app on Apple’s US App Store, raising questions about US tech firms’ AI investments. DeepSeek’s claims have been met with skepticism, including from Scale AI’s Alexandr Wang. DeepSeek is led by Liang Wenfeng, co-founder of High-Flyer. DeepSeek’s success has caught Beijing’s attention, with Liang attending a symposium hosted by Premier Li Qiang.

        Insider (1/27, Barr) reports that DeepSeek’s models challenge OpenAI’s proprietary approach, with pricing 20-40 times lower, according to Bernstein tech analysts. DeepSeek’s Reasoner model costs 55 cents per 1 million tokens, compared to OpenAI’s o1 model at $15. The analysts noted that this pricing strategy raises questions about the viability of proprietary versus open-source models.

        TechCrunch (1/27, Chant) reports that DeepSeek’s efficiency raises questions about the necessity of large hardware investments in AI, potentially impacting data center demand and energy consumption. DeepSeek claims to have used 2,048 Nvidia H800 GPUs for training, much less than OpenAI’s reported usage. Nvidia’s stock fell 16%, and concerns grow for new nuclear and natural gas investments. Citigroup’s Atif Malik remains skeptical of DeepSeek’s claims, suggesting potential implications for energy strategies.

        Meanwhile, the Washington Post (1/27, A1, Gregg, Najmabadi, Dou, Zakrzewski, Tiku) reports the “sudden popularity” of DeepSeek “prompt[ed] debate in political and tech industry circles about how the United States can maintain its lead in AI.” The Post notes Victoria LaCivita, spokeswoman for the White House Office of Science and Technology, “said former president Joe Biden’s policies had failed to limit access to American technology and created an opportunity for China and other foreign adversaries to make gains in AI development,” while David Sacks, President Trump’s AI and crypto czar, “said in a post on X that DeepSeek ‘shows that the AI race will be very competitive.’” However, the Post says “the Trump administration has shared few specifics about its own approach to AI policy,” and the President last week “rescinded a sweeping executive order on AI signed by Biden in 2023 and signed an executive order of his own directing agencies to rescind all actions taken under the Biden order ‘that are inconsistent with enhancing America’s leadership in AI.’” Nonetheless, Reuters (1/27, Carew, Cooper, Banerjee) reports the President “said that DeepSeek should be a ‘wakeup call’ and could be a positive development.”

        David Wallace-Wells writes at the New York Times (1/27) that DeepSeek AI has created a “earthquake” of speculation over its low cost and high performance, “suggesting two truly seismic possibilities about the technological future on which so much of the American economy has recently been wagered.” Wallace-Wells explains that it either reveals the “American advantage on A.I. may be much smaller than has been widely thought,” or that the “approach to improving performance by building out ever-larger and more expensive data centers for training” is inefficient.

        DeepSeek Suffers “Large-Scale” Cyberattack The AP (1/27, Parvini) reports DeepSeek on Monday “said that it had suffered ‘large-scale malicious attacks’ on its services,” which “disrupted users’ ability to register on the site.” In response, Reuters (1/27, Baptista, Kachwala, Bajwa) reports DeepSeek announced it would “temporarily limit registrations.” However, DeekSeek “resolved issues relating to its application programming interface and users’ inability to log in to the website, according to its status page.”

Experts: US Military Rushing Into AI Too Quickly

AI Now Institute executives Heidy Khlaaf and Sarah Myers West write at the New York Times (1/27) that the integration of AI into military systems is raising national security concerns due to potential flaws and cybersecurity vulnerabilities. They explain that older AI models “have had problems with accuracy and can introduce greater potential for error,” and new systems “are even more worrisome” because they “frequently ‘hallucinate,’ asserting patterns that do not exist or producing nonsense.” They conclude that US military leaders should not “overlook the risks that A.I.’s current reliance on of sensitive data poses to national security or to ignore its core technical vulnerabilities.”

China’s DeepSeek Raises Questions About US Export Controls, Creates AI Urgency For Administration

The New York Times (1/28, Swanson, Tobin) reports that the US “has worked steadily over the past three years to limit China’s access to the cutting edge computer chips that power advanced artificial intelligence systems,” with an aim “to slow China’s progress in developing sophisticated A.I. models.” But now DeepSeek, a Chinese firm, “has created that very technology,” raising “big questions about export controls built by the United States in recent years” and provoking “a fierce debate over whether US technology controls have failed.”

        Reuters (1/28, Shalal, Shepardson, Raj Singh) says, “US officials are looking at the national security implications of the Chinese artificial intelligence app DeepSeek, White House press secretary Karoline Leavitt said on Tuesday, while...Trump’s crypto czar said it was possible that intellectual property theft could have been at play.”

        Meanwhile, the New York Times (1/28, Yuan) reports, “Inside China, it was called the tipping point for the global technological rivalry with the United States and the ‘darkest hour’ in Silicon Valley, evoking Winston Churchill.” The Times calls it “possibly a breakthrough that could change the country’s destiny.”

Stopping China’s DeepSeek From Using US AI May Be Difficult, Experts Say

Reuters (1/29) reports, “Top White House advisers this week expressed alarm that China’s DeepSeek may have benefited from a method that allegedly piggybacks off the advances of US rivals called ‘distillation,’” a technique that “involves one AI system learning from another AI system” and “may be difficult to stop, according to executive and investor sources in Silicon Valley.” This “means the newer model can reap the benefits of the massive investments of time and computing power that went into building the initial model without the associated costs.”

        Meanwhile, the New York Times (1/29, Metz) reports, “OpenAI says it is reviewing evidence that...DeepSeek broke its terms of service by harvesting large amounts of data from its A.I technologies.” The San Francisco-based start-up “said that DeepSeek may have used data generated by OpenAI technologies to teach similar skills to its own systems,” and its “terms of service say that the company does not allow anyone to use data generated by its systems to build technologies that compete in the same market.” NBC Nightly News (1/29) quoted AI and Crypto Czar Sacks as saying, “There is substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s model.”

        Microsoft, OpenAI Investigating If DeepSeek Improperly Obtained Data. Bloomberg (1/29, Bass, Ghaffary, Subscription Publication) reports, “Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek, according to people familiar with the matter.” According to Bloomberg, “Microsoft’s security researchers in the fall observed individuals they believe may be linked to DeepSeek exfiltrating a large amount of data using the OpenAI application programming interface, or API, said the people.” Reuters (1/29) reports that OpenAI stated on Tuesday that Chinese companies are “constantly” attempting to access U.S. competitors to enhance their AI models. OpenAI emphasized the importance of collaborating with the U.S. government to protect advanced models from adversaries. Reuters (1/29) reports separately that Israeli cybersecurity firm Wiz “says it has found a trove of sensitive data from the Chinese artificial intelligence startup DeepSeek inadvertently exposed to the open internet.”

        DeepSeek’s R1 Chatbot Challenges ChatGPT. Wired (1/27, Rogers) reports DeepSeek’s AI chatbot, developed by a Chinese startup, has surpassed OpenAI’s ChatGPT on Apple’s US App Store. The free-to-use R1 model rivals OpenAI’s o1 “reasoning” model without a subscription fee, and was trained with less powerful AI chips. Despite its potential to disrupt US-based AI companies, the chatbot shares common generative AI issues, such as hallucinations and lack of memory features.

Google Warns Hackers In Over 20 Countries Using Gemini AI Tool To Increase Efficiency

The Wall Street Journal (1/29, Volz, McMillan, Subscription Publication) reports Google released findings Wednesday that hackers linked to China, Iran, and over 18 other countries are utilizing Google’s Gemini chatbot for tasks like writing malicious code and researching targets. The report highlights that groups tied to China, Iran, Russia, and North Korea appear to currently use Gemini to increase productivity, not to develop new hacking techniques.

Pentagon Workers Used DeepSeek Chatbot Prior To Block

Bloomberg (1/30, Manson, Robertson, Subscription Publication) reports Defense Department employees “connected their work computers to Chinese servers to access DeepSeek’s new AI chatbot for at least two days before the Pentagon moved to shut off access, according to a defense official familiar with the matter.” The Defense Information Systems Agency, which is “responsible for the Pentagon’s IT networks, moved to block access to the Chinese startup’s website late Tuesday, the official and another person familiar with the matter said. Both asked not to be named because the information isn’t public.”

        US Tech Giants Rush To Reassure AI Investors After DeepSeek Stock Market Shock. The Washington Post (1/30) reports that the launch of Chinese chatbot DeepSeek has significantly affected US tech stocks, reducing their value by a trillion dollars. On Wednesday, Meta and Microsoft CEOs reassured investors about ongoing AI investments. Despite DeepSeek’s success, both companies plan to invest billions in AI infrastructure. Microsoft’s Satya Nadella highlighted that increased access to AI models would boost demand for Microsoft’s cloud services. Meta’s Mark Zuckerberg supported free AI model distribution, aligning with DeepSeek’s approach. DeepSeek’s innovations are under Meta’s scrutiny, with “war rooms” set up to analyze its technology. OpenAI accused DeepSeek of using its AI responses, while AI analysts questioned DeepSeek’s low-cost claims. Meta and Microsoft remain committed to AI spending, with Meta expecting increased capital expenditures and Microsoft planning $80 billion in AI infrastructure investments this year.

        Blackstone Remains Optimistic About AI Infrastructure. The New York Times (1/30, Farrell) reports that while “Chinese A.I. start-up DeepSeek upended the prevailing view that artificial intelligence systems require huge amounts of power and investment,” Blackstone remains confident in the “vital need for physical infrastructure, data centers and power.” Jonathan Gray, Blackstone’s president, emphasized their strategy of building data centers exclusively for technology firms with long-term leases, stating, “We don’t build them speculatively.” Blackstone’s recent investments include a $10 billion acquisition of QTS and a $16 billion deal for AirTrunk. Gray anticipates increased AI adoption as computing costs decrease, suggesting usage patterns may evolve. Blackstone’s stock has risen 40% over the past year, reflecting strong investor confidence in its strategic focus on AI infrastructure.

Sources: OpenAI In Talks To Raise Up To $40B In Funding Round

The Wall Street Journal (1/30, Jin, Seetharaman, Subscription Publication) reports, “OpenAI is in early talks to raise up to $40 billion in a funding round that would value the ChatGPT maker as high as $300 billion, according to people familiar with the matter.” The Journal says, “SoftBank would lead the round and is in discussions to invest between $15 billion and $25 billion,” and “the remaining amount would come from other investors.”

House Lawmakers Urge Trump To Restrict Export Of Nvidia Chips To China’s DeepSeek

Reuters (1/30, Cook, Mohsin, Leonard) House Select Committee on China Chair John Moolenaar (R-MI) and Vice Chair Raja Krishnamoorthi (D-IL) are calling on the Administration “to consider restricting the export of artificial intelligence chips made by Nvidia...alleging Chinese AI firm DeepSeek has relied on them.” The lawmakers “asked for the move as part of a Commerce and State Department-led review ordered by Trump to scrutinize the US export control system in light of ‘developments involving strategic adversaries.’” They wrote, “We ask that as part of this review, you consider the potential national security benefits of placing an export control on Nvidia’s H20 and chips of similar sophistication.”

Survey Shows Educators, Students Want Clarity On AI Policies

Education Week (1/30, Langreo) reports that an EdWeek Research Center survey reveals that many educators find their districts’ AI policies unclear. Conducted in December, the survey included 990 teachers, principals, and administrators, with 60% indicating uncertainty about AI policy clarity for both educators and students. Pat Yongpradit from Code.org said, “This technology is ‘still very new,’” and emphasized the need for opportunity and capacity for districts to develop policies. An anonymous high school tech coach in Virginia said schools are hesitant to establish AI guidelines due to fear of mistakes, leaving “educators and students in a gray area.” Ruby Mejico, a principal in Moreno Valley, California, said that while her district is experimenting with AI tools, clear policies are still in development. She added, “We are on our way to having a full-blown policy.” Yongpradit anticipates that clarity will improve as districts gain more understanding and experience with AI technology.

dtau...@gmail.com

unread,
Feb 9, 2025, 1:39:45 PMFeb 9
to ai-b...@googlegroups.com

DeepSeek Linked to Banned Chinese Telecom

The website of China's DeepSeek, whose chatbot became the most downloaded app in the U.S. shortly after its release, contains computer code that could send some user login information to a Chinese state-owned telecommunications company barred from operating in the U.S. Canadian cybersecurity company Feroot Security identified heavily obfuscated computer script on the Web login page of the chatbot that shows connections to computer infrastructure owned by China Mobile.
[ » Read full article ]

Associated Press; Byron Tau (February 5, 2025)

 

Google Drops Pledge Not to Use AI for Weapons, Surveillance

Google on Tuesday updated its AI ethical guidelines, removing commitments to not apply the technology to weapons or surveillance. In a blog post, Google executives wrote, “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
[ » Read full article ]

The Washington Post; Nitasha Tiku; Gerrit De Vynck (February 4, 2025)

 

AI Pioneers Awarded 2025 QE Prize for Engineering

The 2025 Queen Elizabeth Prize for Engineering was bestowed upon seven pioneers of AI technology on Tuesday. The annual prize, awarded to engineers whose innovations have benefited humanity on a global scale, was presented in recognition of contributions to the development of modern machine learning (ML). Recipients included ACM A.M. Turing Award laureates Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, who were recognized for groundbreaking research into the artificial neural networks that have become the dominant model for ML.
[ » Read full article ]

The Chemical Engineer; Adam Duckett (February 4, 2025)

 

ChatGPT Rolled Out at California State University

OpenAI is rolling out an education-specific version of its ChatGPT to about 500,000 students and faculty at California State University as it looks to expand its user base in the academic sector. The rollout will enable students to access personalized tutoring and study guides through the chatbot, while faculty will be able to use it for administrative tasks.
[ » Read full article ]

Reuters; Rishi Kant (February 4, 2025)

 

Laser-Based Artificial Neuron Processes Enormous Datasets at High Speed

Laser-graded artificial neurons developed by Chinese University of Hong Kong researchers can operate on their own and without additional connections as a small neural network, transmitting data as much as 100,000 times faster than artificial spiking neurons. The researchers integrated a laser-graded neuron into a reservoir computing system and scanned 700 heartbeat samples. With a processing speed of 100 million heartbeats per second, the system was more than 98% accurate in identifying arrhythmia.
[ » Read full article ]

Live Science; Skyler Ware (February 4, 2025)

 

Federated Learning Under Siege

Researchers in the U.S. and China demonstrated a poisoning attack targeting federated unlearning. The attack, BadUnlearn, ensures the unlearned model closely resembles the poisoned one through the strategic injection of malicious model updates that align with aggregation rules. The researchers then introduced a federated unlearning framework intended to maintain a global model's integrity. The framework, UnlearnGuard, uses historical model updates stored by the server to help detect and filter out poisoned updates.
[ » Read full article ]

Devdiscourse (February 3, 2025)

 

DeepSeek's Chatbot Achieves 17% Accuracy in Audit

An audit by trustworthiness rating service NewsGuard found the chatbot rolled out by Chinese AI startup DeepSeek had an accuracy rate of 17% when it comes to delivering news and information. DeepSeek provided vague or useless answers 53% of the time and repeated false claims 30% of the time, with a fail rate of 83%. In comparison, its Western rivals, including OpenAI, had a 62% average fail rate.
[ » Read full article ]

Reuters; Rishi Kant (January 29, 2025)

 

AI Systems with ‘Unacceptable Risk’ Now Banned in EU

As of Sunday, EU regulators can ban the use of AI systems they deem to pose an “unacceptable risk” or harm under the bloc's AI Act, approved by the European Parliament last March. Unacceptable activities include the use of AI for social scoring, manipulating a person’s decisions deceptively, predicting people committing crimes based on their appearance, and trying to infer people’s emotions, among other uses.
[ » Read full article ]

TechCrunch; Kyle Wiggers (February 2, 2025)

 

OpenAI to Provide Models to National Labs

OpenAI's o1 reasoning model, or another from its o-series, will be deployed on Los Alamos National Laboratory's Venado supercomputer. The deal with the U.S. government will make the model available to researchers at Lawrence Livermore and Sandia National Laboratories as well. Said Los Alamos' Thom Mason, "As threats to the nation become more complex and more pressing, we need new approaches and advanced technologies to preserve America's security."
[ » Read full article ]

Axios; Ina Fried (January 30, 2025)

 

AI Helps Open Scrolls Charred by Vesuvius

Researchers successfully produced the first image of the inside of an ancient scroll at the Bodleian Library at the U.K.'s University of Oxford, according to organizers of the Vesuvius Challenge. The papyrus scroll is one of hundreds found in the remains of a Roman villa destroyed in the A.D. 79 eruption of Mt. Vesuvius. In the Vesuvius Challenge, researchers must decipher the scrolls, which are too fragile to be unrolled. The Oxford scroll was scanned using a synchrotron, then AI was used to generate a 3D image of the scroll that can be unrolled virtually.
[ » Read full article ]

Independent (U.K.); Jill Lawless; Pan Pylas (February 5, 2025)

 

AI-powered Drone Company to Assist in Demining Ukrainian Farmlands

U.S.-based AI company Safe Pro Group and Ukrainian agricultural company Nibulon will deploy AI-powered drones to detect landmines embedded in Ukraine's farmland. The partnership will use Safe Pro’s SpotlightAI platform, hosted on Amazon Web Services, to survey affected farmland. Safe Pro’s AI has processed over 931,000 drone images, identifying more than 18,000 explosive remnants across 10,500 acres, to facilitate mine detection.
[ » Read full article ]

The Kyiv Independent (Ukraine); Sonya Bandouil (February 1, 2025)

 

DOGE Feeds Sensitive Federal Data into AI to Target Cuts

Representatives from the Elon Musk-led U.S. Department of Government Efficiency (DOGE) fed sensitive data from the U.S. Education Department into AI software to probe the agency’s programs and spending in search of opportunities for cuts, say insiders. DOGE plans to replicate this process across other departments and agencies, accessing back-end software at different parts of the government and using AI technology to extract information about spending on employees and programs, said one source.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Hannah Natanson; Gerrit De Vynck; Elizabeth Dwoskin (February 6, 2025); et al.

 

Chinese, Iranian Hackers Use U.S. AI Products to Bolster Cyberattacks

Hackers linked to China, Iran, and other foreign governments are using the latest U.S. AI technology to bolster their cyberattacks, according to U.S. officials and security researchers. Google’s cyber-threat experts say that in the last year, dozens of hacking groups in more than 20 other countries deployed Google's Gemini chatbot to assist with malicious code writing and targeting.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Dustin Volz; Robert McMillan (January 30, 2025)

 

DeepSeek Sparks AI Infrastructure Reassessment

Bloomberg (2/1, Subscription Publication) reported, “The recent market turmoil sparked by DeepSeek’s chatbot has left some rethinking the credit frenzy around artificial intelligence (AI).” While corporate giants predict more demand for AI following the release of DeepSeek, “behind the scenes, landlords and credit providers say that the situation is more nuanced, and some are starting to fret.” Bloomberg claims that an unnamed major data center landlord anticipates rising borrowing costs due to fears of obsolescence from disruptors like DeepSeek. Despite this, Blackstone president Jon Gray maintains that “digital infrastructure remains essential.” The AI surge since ChatGPT’s debut has fueled a global data center boom, with investors pledging significant funds. Apollo Global Management foresees a $2 trillion opportunity in data centers. However, experts suggest that DeepSeek’s cost-effective AI models won’t “depress massively the demand for infrastructure,” indicating sustained growth expectations.

        Researchers Note DeepSeek’s Chinese “Propaganda,” False Compute Cost Claims. The New York Times (1/31, Lee Myers) reported researchers warn that recently debuted Chinese chatbot DeepSeek’s responses “largely reflect the worldview of the Chinese Communist Party,” as “the answers it gives not only spread Chinese propaganda but also parrot disinformation campaigns that China has used to undercut its critics around the world.” NewsGuard on Thursday released a report that “called DeepSeek ‘a disinformation machine,’” and “the New York Times has found similar examples when prompting the chatbot for answers about China’s handling of the Covid pandemic and Russia’s war in Ukraine,” sparking “the same concerns that have bedeviled TikTok, another hugely popular Chinese-owned app: that the tech platforms are part of China’s robust efforts to sway public opinion around the world, including in the United States.”

        The Washington Post (1/31, Dou, Northrop, Li, Vynck) highlighted how, despite DeepSeek’s “claim that it trained one of its recent models on a minuscule $5.6 million in computing costs, ... a closer look at DeepSeek reveals that its parent company deployed a large and sophisticated chip set in its supercomputer, leading experts to assess the total cost of the project as being much higher than the relatively paltry sum that US markets reacted to this week.”

Trump Meets With Nvidia CEO About Administration’s AI Goals

The Washington Post (1/31, Zakrzewski, Alemany) reports President Trump on Friday met “with Nvidia CEO Jensen Huang, marking the first meeting between the president and the leader of a chip company at the center of the artificial intelligence gold rush amid concerns about China’s rising influence in the industry.” Planning for the meeting had begun “before the spike in anxiety about DeepSeek, according to a senior Administration official and a person familiar with the discussions,” and the two “had a good rapport and discussed the Administration’s AI goals, one person added.”

AI Transforming City Services with AI Agents

Forbes (2/1) reports that artificial intelligence is increasingly being integrated into city operations, with AI agents poised to transform government services. AI agents, like OpenAI’s Operator, autonomously perform tasks, potentially streamlining city services. The SuperCity app exemplifies this shift, aiming to simplify resident interactions with city services through AI. The app’s founders, including Miguel Gamiño Jr., leverage extensive government and tech experience to reduce friction between users and city systems. AI’s potential to revolutionize city functions underscores the urgency for city leaders to adopt AI solutions.

OpenAI Unveils AI Tool For Research Reports

The Guardian (UK) (2/3) reports that OpenAI has introduced a new tool named “deep research,” designed to generate reports comparable to those of a research analyst. The San Francisco-based company’s tool, powered by the o3 model, can complete tasks in 10 minutes that would take humans hours. OpenAI announced this development days after its competitor, DeepSeek, made advancements. “Deep research” will be available in the US for Pro tier users at $200 monthly, with a limit of 100 queries. The tool targets professionals in finance, science, and engineering. Andrew Rogoyski from the University of Surrey expressed concerns about relying on AI outputs without human verification.

Alphabet Faces Investor Scrutiny Over AI Spending

Reuters (2/3) reports that Alphabet will face investor scrutiny over its substantial AI spending when it reports earnings on Tuesday. The Google parent likely experienced slowed revenue growth in the holiday quarter due to weakened advertising and cloud businesses. Alphabet’s 2022 capital expenditure was estimated at $50 billion, with more planned for 2025 to support cloud expansion and AI-driven search features. Despite high expectations, Google Cloud growth is expected to decelerate. Analyst Gil Luria noted concerns about AI growth overshadowing core cloud business, similar to Microsoft’s recent experience.

How AI Tools Can Enhance Instruction Methods, Combat Teacher Burnout

The New York Observer (2/3, Curry) reports that teacher burnout is a significant issue, with 16% of US teachers leaving their jobs annually, according to the National Center for Education Statistics. Teachers like Eileen Yaeger and Jeff Stoltzfus are leveraging AI to alleviate this problem. Yaeger uses AI to create inclusive lessons, translating content into multiple languages and adjusting text by WIDA English Language Development level. Stoltzfus, who teaches media technology, uses AI for curriculum development and lesson planning, noting, “It was helpful, at least in getting me half of the way there.” However, he acknowledges AI’s limitations in grading subjective art assignments. Coursera and other platforms are developing AI-powered tools to support educators. Marni Baker Stein, chief content officer at Coursera states, “GenAI will make personalized, interactive learning possible at scale.” Coursera Coach, available in 24 languages, offers personalized instruction, enhancing students’ learning experiences.

Cal State Launches AI Initiative Across 23 Campuses

The Los Angeles Times (2/4, Watanabe) reports that California State University (CSU) announced on Tuesday a significant initiative to integrate artificial intelligence (AI) education across its 23 campuses. This effort aims to provide equitable access to AI tools and training for CSU’s 450,000 students, many of whom are low-income or first-generation college attendees. CSU has partnered with Gov. Gavin Newsom’s (D) office and tech giants like Microsoft and OpenAI to form an advisory board for AI skill development. “We are proud to announce this innovative, highly collaborative public-private initiative,” stated CSU Chancellor Mildred García. The initiative includes an “AI Commons Hub” offering free access to tools like ChatGPT 4.0, marking the largest deployment of ChatGPT globally. The initiative also addresses concerns about bias and academic integrity.

        Forbes (2/4, Fitzpatrick) reports the university system will integrate ChatGPT Edu into its curriculum and operations, marking the largest AI deployment in higher education. Leah Belsky, VP at OpenAI, emphasizes the need for collaboration to ensure global student access to AI. The initiative includes an AI Hub for free access to AI tools, faculty training, and AI-focused apprenticeships. Ed Clark, CSU CIO, highlights the dual goals of equipping students with AI skills and transforming institutional practices. The partnership aims to create a skilled AI workforce, addressing challenges such as AI ethics and data security.

        EdSource (2/4, DiPierro) reports CSU will provide generative AI tools like ChatGPT to students, staff, and faculty across its campuses at no personal cost. CSU announced on Tuesday at San Jose State University the formation of the AI Workforce Acceleration Board, which includes CSU academic leaders and representatives from companies like Microsoft and Nvidia. CSU plans to offer AI-related apprenticeship programs and encourage the use of AI in teaching and research. According to CSU chief information officer Ed Clark, the university has allocated funds from one-time savings for these initiatives.

DeepSeek’s AI Model Challenges Proprietary Systems

CNBC (2/4, Browne) reports that DeepSeek, a Chinese AI lab, released the R1 model last month, an open-source AI model that rivals OpenAI’s o1 model. This development has impacted chipmakers like Nvidia, causing their market values to drop due to fears of reduced spending on computing infrastructure. Industry experts, including Seena Rejal from NetMind and Yann LeCun from Meta, highlight that DeepSeek’s success underscores the viability of open-source AI models. However, experts also caution about cybersecurity risks, with Cisco identifying vulnerabilities in DeepSeek’s R1 model, raising concerns about data leakage and exploitation.

        Biden FTC Chair: DeepSeek Release Highlights Need For More Competition Among US Companies. Former Biden Administration Federal Trade Commission chair Lina M. Khan writes at the New York Times (2/4) that the launch of Chinese artificial intelligence firm DeepSeek “is the canary in the coal mine...warning us that when there isn’t enough competition, our tech industry grows vulnerable to its Chinese rivals, threatening U.S. geopolitical power in the 21st century.” She adds that the company undermines claims that US tech firms “are developing the best artificial intelligence technology the world has to offer,” and accuses them of “building anticompetitive moats around their businesses” instead of pushing for innovation. She concludes that the “best way for the United States to stay ahead globally is by promoting competition at home.”

 

Tufekci: DeepSeek Release Shows Government Is Approaching AI Issues Incorrectly. Zeynep Tufekci writes at the New York Times (2/5) that “the real lesson of DeepSeek is that America’s approach to A.I. safety and regulations...was largely nonsense.” She adds that “it was never going to be possible to contain the spread of this powerful emergent technology, and certainly not just by placing trade restrictions on components like graphics chips,” and argues that the government should instead “be preparing our society for the sweeping changes that are soon to come.” She adds that “instead of fantasizing about how some future rogue A.I. could attack us, it’s time to start thinking clearly about how corporations and governments could use the A.I. that’s available right now to entrench their dominance, erode our rights, worsen inequality.”

Researchers Demonstrate AI Model Training Costing $50

TechCrunch (2/5, Zeff) reports, “AI researchers at Stanford and the University of Washington were able to train an AI ‘reasoning’ model for under $50 in cloud compute credits, according to a new research paper released last Friday.” This “model known as s1 performs similarly to cutting-edge reasoning models, such as OpenAI’s o1 and DeepSeek’s R1, on tests measuring math and coding abilities.” The “paper suggests that reasoning models can be distilled with a relatively small dataset using...supervised fine-tuning.”

OpenAI CEO Discusses AI IQ Benchmarking

TechCrunch (2/5, Wiggers) reports that during a recent press conference, OpenAI CEO Sam Altman remarked on the rapid improvement of AI’s “IQ” over recent years, suggesting a yearly advancement of one standard deviation. Experts, including Sandra Wachter from Oxford, criticized using IQ as a benchmark for AI, arguing it is a flawed measure of intelligence. Os Keyes and Mike Cook highlighted that AI models can exploit the structure of IQ tests, rendering them inappropriate for evaluating AI capabilities. Heidy Khlaaf from the AI Now Institute emphasized the need for more suitable tests for AI systems.

        OpenAI’s Stargate AI Venture Evaluating US Data Center Sites. Reuters (2/6, Tong, Sriram) reports, “ChatGPT maker OpenAI said on Thursday that it is evaluating US states as potential artificial intelligence data center locations for its massive Stargate venture, framing the project as a matter of urgency for the United States to beat China in the global AI race.” Reuters reports Chris Lehane, “OpenAI’s chief global affairs officer,” said, “As news emerged about DeepSeek, it makes it clear this is a very real competition and the stakes could not be bigger. Whoever ends up prevailing in this competition is going to really shape what the world looks like going forward, whether we have democratic AI that’s free and open, or authoritarian AI that is autocratic.”

House Lawmakers To Introduce Bill Banning DeepSeek Chatbot Use On Government Devices

The Wall Street Journal (2/6, Andrews, Subscription Publication) reports that Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) plan to introduce a bill banning DeepSeek’s chatbot application from US government-owned devices due to security concerns that data could be shared with the Chinese government. Similar to legislation introduced against TikTok, the Journal writes such a bill could mark the beginning of banning the company from operating in the US wholesale.

dtau...@gmail.com

unread,
Feb 16, 2025, 12:29:44 PMFeb 16
to ai-b...@googlegroups.com

Top U.S. Grid Wins Speedy Review of Power Plants to Feed AI

PJM Interconnection LLC, which manages a 13-state power-grid network, won federal approval to fast-track the review of dozens of new power-plant projects to shore up supplies amid a proliferation of AI datacenters. PJM will review up to 50 new projects specifically to boost grid reliability starting in April, to help avoid potential shortages toward the end of this decade, the U.S. Federal Energy Regulatory Commission said in an order issued Tuesday.
[
» Read full article ]

Bloomberg; Naureen S. Malik (February 12, 2025)

 

China's Ex-U.K. Ambassador Debates Bengio at AI Summit

At the AI Action Summit in Paris, Fu Ying of China's Tsinghua University took aim at an international AI safety report led by ACM A.M. Turing Award laureate and the "godfather of AI" Yoshua Bengio and co-authored by 96 others. Fu Ying said open source is the best way to ensure AI does not cause harm, providing "better opportunities to detect and solve problems." Bengio argued that open source makes it easier for criminals to misuse AI.
[ » Read full article ]

BBC; Zoe Kleinman (February 9, 2025)

 

Tech Companies Raise $27 Million for Child Safety Online

A group of technology companies raised more than $27 million for a new initiative focused on building open-source tools to boost online safety for kids. The Robust Online Safety Tools (ROOST) project, announced Monday at the AI Action Summit in Paris, will provide free tools to detect, review, and report child sexual abuse material and use large language models to “power safety infrastructure,” according to a press release for the project.
[ » Read full article ]

The Hill; Miranda Nazzaro (February 10, 2025)

 

U.S., China Ambitions Cast Shadow on AI Summit in Paris

The geopolitics of artificial intelligence will be in focus in Paris starting today at the AI Action Summit. U.S. Vice President JD Vance will attend, marking his first trip abroad since assuming office, while China’s President Xi Jinping is sending Vice Premier Zhang Guoqing as Xi’s special representative. The aim of the meeting is to get countries to agree on ethical, democratic, and environmentally sustainable AI.
[ » Read full article ]

Associated Press; Sylvie Corbet; Kelvin Chan (February 10, 2025)

 

U.S., U.K. Refuse to Sign Paris Summit Declaration on ‘Inclusive’ AI

The U.S. and U.K. did not sign the final communiqué at the AI Action Summit in France. The document was backed by 60 signatories, including China. A U.K. government spokesperson said the statement had not gone far enough in addressing global governance of AI and the technology’s impact on national security. U.S. Vice President JD Vance criticized Europe’s “excessive regulation” of technology and warned against cooperating with China.
[ » Read full article ]

The Guardian (U.K.); Dan Milmo; Eleni Courea (February 11, 2025)

 

Camera Identifies Objects at Speed of Light

University of Washington and Princeton University researchers developed a camera for computer vision by replacing the camera lens with engineered optics made of 50 layered meta-lenses that function as an optical neural network. The resulting camera is more than 200 times faster than neural networks using conventional computer hardware at identifying and classifying images.
[ » Read full article ]

Interesting Engineering; Prabhat Ranjan Mishra (February 6, 2025)

 

Google Hub in Poland to Develop AI Use in Energy, Cybersecurity Sectors

Google and Poland on Thursday signed an agreement to develop the use of AI in the country’s energy, cybersecurity, and other sectors. Google is also dedicating $5 million over the next five years in Poland to expand training programs and increase digital skills among the young. Earlier in the week, Prime Minister Donald Tusk said Google and Microsoft will be among the international businesses planning to invest about 650 billion zlotys ($160 billion) in Poland this year.
[
» Read full article ]

Associated Press (February 13, 2025)

 

Google Remakes Super Bowl Ad After AI Cheese Gaffe

Google edited its Super Bowl ad promoting its Gemini AI Tool after a blogger flagged a false claim in the commercial that Gouda accounts for 50% to 60% of global cheese consumption. Google's Jerry Dischler said the error was not an AI "hallucination," noting that multiple websites where Gemini got the information cited the statistic. The search engine giant had Gemini rewrite the description for the product featured in the ad without the statistic.
[ » Read full article ]

BBC News; Graham Fraser; Tom Singleton (February 6, 2025)

 

White House Encourages Americans to Provide Ideas for AI Strategy

The White House Office of Science and Technology Policy is calling on Americans to share policy ideas and proposals for the AI Action Plan, which will be developed in accordance with an executive order signed by U.S. President Donald Trump last month. The AI Action plan will "define priority policy actions to enhance America's position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation," according to officials.
[ » Read full article ]

Fox News; Brooke Singman (February 6, 2025)

 

Musk-led Group Launches $97-Billion Bid for OpenAI

Elon Musk and a group of investors offered to buy ChatGPT-maker OpenAI for $97.4 billion, well below the company’s most recent valuation of $157 billion. OpenAI CEO Sam Altman rejected the offer on X, posting, “No thank you but we will buy Twitter for $9.74 billion if you want.” Musk helped to found, and fund, OpenAI in 2015 but left three years later after Altman and other leaders rejected his suggestion that he take over the company.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Gerrit De Vynck; Elizabeth Dwoskin (February 10, 2025)

 

EU Sets Out $200-Billion AI Spending Plan

The European Union on Tuesday unveiled a plan to raise €200 billion ($206.15 billion) to invest in AI. The plan, dubbed InvestAI, includes a new €20-billion fund for AI gigafactories. European Commission President Ursula von der Leyen said at the AI Action Summit in Paris. “We want Europe to be one of the leading AI continents, and this means embracing a life where AI is everywhere.”

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Edith Hancock; Mauro Orru (February 11, 2025)

 

Meta Eliminating Jobs in Shift to Find AI Talent

Meta Platforms on Monday began notifying staff of job cuts, starting a process that will ultimately lead to termination of 5% of its workforce, or 3,600 people. Meta CEO Mark Zuckerberg told employees the terminations will focus on staff who “aren’t meeting expectations,” and told managers the cuts would create openings for which the company can hire the “strongest talent.”

[ » Read full article *May Require Paid Registration ]

Bloomberg; Kurt Wagner; Riley Griffin (February 10, 2025)

 

Tech Giants Double Down on Massive AI Spending

Following record investments in AI last year, Microsoft, Alphabet, Meta Platforms, and Amazon each said in recent quarterly earnings reports that they would increase those investments in 2025. Microsoft, Alphabet, and Meta projected combined capital expenditures of no less than $215 billion, up more than 45% on an annual basis. Amazon said AI would account for most of the increase in its total capital expenditure across its businesses to more than $100 billion.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Nate Rattner; Jason Dean (February 6, 2025)

 

France Taps Nuclear Power in Race for AI Supremacy

France said it would provide a gigawatt of nuclear power for a new AI computing project. AI cloud platform FluidStack plans to start construction on the project in the third quarter of this year, with the first tranche of 250 megawatts of power expected to be connected to AI computing chips by the end of 2026. FluidStack said the facility could expand to 10 gigawatts by 2030.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Sam Schechner; Asa Fitch (February 10, 2025)

 

Report: How Libraries Can Promote AI Literacy For Future Development

Inside Higher Ed (2/10, Mowreader) reports that academic libraries are responding to the rise of generative artificial intelligence, with a September 2024 report indicating that 7 percent of libraries are adopting AI tools. However, 32 percent of surveyed librarians noted a lack of AI training at their institutions. The University of New Mexico has released a guide to help librarians support students in an AI-integrated environment. Leo S. Lo, the author and dean at the College of University Libraries stated, “We are now well-placed to become key players in advancing AI literacy.” Lo defines AI literacy as the “ability to understand, use, and think critically about AI technologies.” His framework includes five elements, emphasizing the importance of technical knowledge, practical skills, ethical awareness, critical thinking, and understanding AI’s societal impact. Lo concluded, “By embracing AI literacy, libraries can lead efforts to demystify AI.”

Musk-Led Group Makes Bid For OpenAI

The New York Times (2/10, Isaac, Metz) reports that a group of investors, led by Elon Musk, “has made a $97.4 billion bid to buy the nonprofit that controls OpenAI, according to two people familiar with the bid, escalating a yearslong tussle for control of the company between Mr. Musk and OpenAI’s chief executive, Sam Altman.”

        Reuters (2/10, Bajwa) reports that Musk said in a statement, “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was. We will make sure that happens.” Bloomberg (2/10, Ghaffary, Clark, Metz, Subscription Publication) reports that “according to a statement from Marc Toberoff, a lawyer representing the investors, other backers of proposal include Valor Equity Partners, Baron Capital, Atreides Management, Vy Capital, Joe Lonsdale’s 8VC and Ari Emanuel, through his investment fund.”

        However, the AP (2/10) reports that Altman “quickly rejected the deal on Musk’s social platform X, saying, ‘no thank you but we will buy Twitter for $9.74 billion if you want.’” The Washington Post (2/10, De Vynck) says OpenAI’s board “has been broadly supportive of Altman, and almost all of its members took their seats after he survived an attempt by the previous board to eject him from the company.”

OpenAI Reviewing Whether DeepSeek Obtained Data Illicitly

Bloomberg (2/10, Subscription Publication) reports OpenAI “has spoken to government officials about the company’s ongoing investigation into whether China’s DeepSeek used data obtained in an unauthorized manner from the ChatGPT maker’s technology.” OpenAI chief global affairs officer Chris Lehane is quoted saying on Bloomberg Television that “we’ve seen some evidence and we’re continuing to review.”

How AI Can Help Address Major Funding Cuts In Higher Ed

Forbes (2/11) contributor Vinay Bhaskara says artificial intelligence (AI) can address significant challenges in higher education, such as declining enrollment and rising costs. Nearly 100 institutions closed between the 2022-23 and 2023-24 academic years, driven by high tuition and student dissatisfaction. The average tuition and fees “for a public four-year school has risen 179%,” leading to skepticism about the value of a degree. AI offers a solution by streamlining administrative processes, which have seen a 164% increase in administrators since 1972. With 86% of university leaders agreeing “that AI presents a ‘massive opportunity to transform higher education,’” institutions like Georgia Tech and Knox College are already implementing AI to enhance recruitment and manage applications. Bhaskara emphasizes the urgent need for colleges to leverage AI to reduce costs and refocus on their educational missions.

Nutanix Focuses On Hybrid Enterprise AI Strategy

SiliconANGLE (2/13) reports Nutanix is driving the next phase of enterprise AI adoption, empowering organizations to deploy AI on their own terms. Nutanix debuted GPT in a Box at AWS re:Invent, streamlining the deployment of AI models on Amazon Elastic Kubernetes Service. Nutanix Enterprise AI offers generative AI on-premises now with inference endpoints, security, cost control, and simplicity. VP of Engineering at Nutanix Debojyoti Dutta said during re:Invent that customers can choose any model from Hugging Face or from the Nvidia catalog and then deploy the model very easily with a couple of button clicks. Nutanix’s hybrid AI strategy enables enterprises to deploy AI models across on-premises, public cloud, and edge environments, offering the flexibility to run workloads where they make the most sense. Partnerships with Nvidia, Hugging Face and D2iQ Inc. extend the reach and impact of Nutanix’s offerings, accelerating time to value for enterprises.

dtau...@gmail.com

unread,
Feb 22, 2025, 7:16:39 PMFeb 22
to ai-b...@googlegroups.com

South Korea Aims to Secure 10,000 GPUs for National AI Computing Center

To ensure it can compete in the global AI race, the South Korean government plans to obtain 10,000 high-performance graphics processing units (GPUs) through public-private cooperation, to facilitate an early opening of its national AI computing center. South Korea currently is exempt from a new U.S. regulation restricting the export of GPUs.
[ » Read full article ]

Reuters; Heekyong Yang (February 17, 2025)

 

Open Source LLMs Hit Europe's Digital Sovereignty Roadmap

The OpenEuroLLM project, co-led by computational linguist Jan Hajic at the Charles University in Prague and Peter Sarlin, CEO and co-founder of Finnish AI lab Silo AI, plans to develop open-source AI language models for all EU languages to preserve "linguistic and cultural diversity." The project is designed to ensure transparency, preserve linguistic diversity, and enable AI growth in Europe. The initiative hopes to contribute a high-quality, open-source AI foundation that can be adapted by European businesses.
[ » Read full article ]

TechCrunch; Paul Sawers (February 16, 2025)

 

U.K. Drops 'Safety' from AI Body

The U.K. has rebranded the AI Safety Institute to the AI Security Institute, signaling a shift away from examining large language models for issues such as bias. Said Secretary of State for Science, Innovation, and Technology Peter Kyle, “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values, and way of life.”
[ » Read full article ]

TechCrunch; Ingrid Lunden (February 13, 2025)

 

Trust in AI Is Much Higher in China Than in the U.S.

A global survey by the Edelman Trust Barometer found that only 32% of U.S. residents trust AI. The greatest level of trust was reported in India at 77%, followed by Nigeria (76%), Thailand (73%), and China (72%). Trust was lowest in Canada (30%), Germany (29%), the Netherlands (29%), U.K. (28%), Australia (25%), and Ireland (24%). More than half (58%) of respondents said they worry automation will displace them in the workforce, and more than 60% worry about AI-driven misinformation.
[ » Read full article ]

Axios; Ina Fried (February 13, 2025)

 

South Korea Bans Downloads of DeepSeek's AI App

South Korea said on Monday it had temporarily suspended new downloads of an AI chatbot made by China's DeepSeek. Regulators said the app service would resume after they verified it complied with South Korea’s laws on protecting personal information. The app had become one of the country’s most popular downloads in the AI category. Earlier this month, South Korea directed many government employees not to use DeepSeek products on official devices.

[ » Read full article *May Require Paid Registration ]

New York Times; Meaghan Tobin; Jin Yu Young (February 17, 2025)

 

How AI Can Protect Undersea Pipelines, Cables

AI is being leveraged to protect critical underwater infrastructure, with the ultimate goal of creating an undersea map that can sift through vast amounts of data to identify potential threats in real time. German startup North.io is using technology from Nvidia, IBM, and others to develop systems that can distinguish between natural elements and potential threats to undersea technology. North.io researchers are training AI to analyze data from multiple sources.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; William Boston (February 17, 2025)

 

Ellison Calls for Governments to Unify Data to Feed AI

During an interview with former British Prime Minister Tony Blair at the World Government Summit in Dubai, Oracle chairman Larry Ellison said governments should consolidate all national data for consumption by AI models. Fragmented data about a population’s health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models, Ellison said, because it would enable countries with rich population data sets to cut costs and improve public services, particularly healthcare.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Omar El Chmouri (February 12, 2025)

 

Education Department Considering AI Chatbot For Financial Aid Help

The New York Times (2/13, Goldstein, Montague) reported that allies of Elon Musk within the Education Department are exploring the replacement of some contract workers with a generative artificial intelligence chatbot, based on internal documents. This initiative aligns with President Trump’s efforts to reduce the federal workforce and could transform public interactions. The Education Department currently employs 1,600 call center agents answering more than 15,000 inquiries daily from student borrowers. ED spokeswoman Madi Biedermann stated that the department is considering tools to enhance customer service and assess contract effectiveness. However, experts warn that transitioning to AI may raise concerns regarding privacy and accuracy. Moreover, an internal document obtained by the Times indicates that ED staff have found that a 38 percent reduction in funding for call center operations could contribute to a “severe degradation” in services for “students, borrowers and schools.”

        Inside Higher Ed (2/14, Knox) reported the department “greatly increased staffing at their call centers after last year’s bungled launch of the new FAFSA led to an overwhelming influx of calls. Last September, a Government Accountability Office investigation found that in the first five months of the rollout, three-quarters of calls went unanswered. Last summer, the department hired 700 new agents to staff the lines and had planned to add another 225 after the launch of the 2024-25 FAFSA in November.”

Study Reveals Researchers’ Interest In AI Tools Varies By Region, Discipline

Inside Higher Ed (2/14, Palmer) reported that a recent study by Wiley highlights significant interest among researchers in utilizing artificial intelligence (AI) in their work, with 69 percent believing AI skills will be vital within two years. However, over 60 percent cite a lack of guidelines and training as barriers to AI adoption. The study surveyed nearly 5,000 researchers globally, finding that 70 percent seek clearer guidelines from publishers regarding AI use. Although many have heard of OpenAI’s ChatGPT, only about one-third are familiar with other tools like Google Gemini and Microsoft Copilot. The study also noted geographical differences, with 59 percent of researchers in China and 57 percent in Germany using AI, compared to a global average of 45 percent. Researchers in fields like computer science and medicine are more inclined to adopt AI, while those in life sciences and physical sciences prefer a cautious approach.

Amazon Robotics Chief Technologist Discusses Warehouse Automation

Insider (2/14, Kim) interviewed Amazon Robotics Chief Technologist Tye Brady about warehouse automation. Brady said Amazon now has at least 750,000 robots in its warehouses and that “AI has really revolutionized and transformed robotics because it allows us to have the mind and body as one.” He added that Amazon’s “future is in people and technology working together.” Brady noted that Amazon has committed more than $1.2 billion in an upskilling pledge. He also said, “Our physical AI systems have the same tool kits that hundreds of thousands of our customers have available to them, and they’re using them, so we’re seeing a lot of growth there,” referring to AWS. Brady said, “We do technology with a purpose. And if that purpose makes sense in e-commerce and our material-handling fulfillment systems, then we will do that as long as it improves the safety of our employees and their performance.”

Meta Struggles With Deepfake Image Regulation, Investigation Suggests

CBS News (2/17, Lyons) reports that Meta has removed over a dozen sexualized AI deepfake images of female celebrities from Facebook following a CBS News investigation. Despite this, CBS News found that many images remain accessible, violating Meta’s policies. The Oversight Board said the company’s regulations are insufficient, urging clearer rules on non-consensual content. Co-chair Michael McConnell said, “The Board is actively monitoring Meta’s response and will continue to push for stronger safeguards, faster enforcement, and greater accountability.”

        Meta Platforms Creates AI Humanoid Robotics Division. Reuters (2/14, Paul) reported that Meta is forming a new division within Reality Labs to develop AI-powered humanoid robots for physical tasks, led by Marc Whitten, as detailed in an internal memo from CTO Andrew Bosworth.

Most Educators Embrace AI In Teaching, Survey Finds

Education Week (2/14, Langreo) reported that a recent survey by the EdWeek Research Center reveals that 90 percent of educators believe artificial intelligence (AI) will change the teaching profession. Nearly all respondents (97 percent) expect AI to influence their jobs within five years. While experts highlight AI’s potential to personalize education, concerns about biases and creativity persist. Three teachers shared their experiences with AI tools. Amanda Pierman, a Florida science teacher, said, “With the help of generative AI tools, crafting an exam now takes 40 minutes.” Joe Ackerman, a fifth-grade teacher in Colorado, stated, “It has helped me free up my time [that I can] then devote to teaching.” Yana Garbarg, an English teacher in Queens, emphasized that AI feedback is “more like a narrative” than traditional markings. They encourage hesitant educators to experiment with AI, asserting, “It’s just going to be another tool.”

        Educators Use AI Tutoring To Transform Classroom Learning. Education Week (2/14, Schultz) reported that teachers across the nation are increasingly utilizing artificial intelligence (AI) tutoring tools to enhance student learning. Andrea Hinojosa, a history teacher at Copper Hills High School in Utah, remarked that “AI has really just changed how we can do our jobs,” allowing students to practice writing more and receive immediate feedback. Schools are adopting AI to reduce educators’ workloads while improving student outcomes. Zachary Pardos, an education professor at UC Berkeley, said that “this is really low-hanging fruit” for enhancing classroom efficiency. The Sante Fe public schools are implementing a two-year plan to integrate AI, focusing on professional development for teachers. However, experts warn about potential biases in AI and the need for careful adoption to ensure it benefits all students. Hinojosa emphasized the technology’s effectiveness, saying, “It’s amazing,” as it helps her assess her students’ skills more efficiently.

Teen Inventor Launches AI-Driven Early Wildfire Detection System

The Orange County (CA) Register (2/17, Darwish) reports that on February 10, 2023, Ryan Honary, a 17-year-old inventor from Newport Beach, California, deployed his AI-driven wildfire detection system near Irvine for the first time. Honary, who founded SensoRy AI, began this project in fifth grade after witnessing the devastation of the 2018 Camp Fire. The system, which detects flames, smoke, and heat, can alert firefighters instantly through text, email, and a web application. “If he has sensor systems that can alert us to a fire seconds or minutes sooner, that’s a success,” said Orange County Fire Authority Chief Brian Fennessy. Honary plans to deploy five more detectors in March and an additional 25 by September along the Highway 133 corridor, aiming to expand his system beyond California in the future.

xAI Unveils Latest AI Model

CNBC (2/18, Butts) reports Elon Musk’s xAI has released “its latest artificial intelligence model, Grok 3, claiming it can outperform offerings from OpenAI and China’s DeepSeek based on early testing, which included standardized tests on math, science and coding.” Grok 3 will be available for premium X subscribers in the US starting Tuesday, and it “will also be accessible through a separate subscription for the model’s web and app versions, the xAI team said.”

Former OpenAI Chief Technology Officer Launches AI Startup

Reuters (2/18, Bajwa, Hu, Tong) reports, “Former OpenAI chief technology officer Mira Murati launched an AI startup called Thinking Machines Lab on Tuesday, with a team of about 30 leading researchers and engineers from competitors including OpenAI, Meta and Mistral.” The startup “wants to build artificial intelligence systems that encode human values and aim at a broader number of applications than rivals, the company said in a blog post on Tuesday.”

Nvidia, Partners Create New AI System On AWS For Biological Research

AFP (2/20) reports AI chipmaker Nvidia and its research partners have created Evo 2, which they call the largest AI system yet for biological research, with the goal of accelerating breakthroughs in medicine and genetics. The new AI system can read and design genetic code across all forms of life. The system learned from nearly 9 trillion pieces of genetic information taken from over 128,000 different organisms. In early tests, it accurately identified 90% of potentially harmful mutations in BRCA1, a gene linked to breast cancer. The model was built using 2,000 Nvidia H100 processors on AWS’s cloud infrastructure. Developed with the Arc Institute and Stanford University, Evo 2 is now freely available to scientists worldwide through Nvidia’s BioNeMo research platform. According to Stanford Assistant Professor Brian Hie, “Designing new biology has traditionally been a laborious, unpredictable and artisanal process,” and “With Evo 2, we make biological design of complex systems more accessible to researchers.”

        R&D World (2/19) reports the technical backbone of Evo 2, the AI system for biological research, relied on a robust Nvidia-AWS collaboration. The development utilized the Nvidia DGX Cloud AI platform via AWS, leveraging more than 2,000 NVIDIA H100 GPUs. This integration enabled the creation of a specialized AI architecture, StripedHyena 2, which enhances the system’s ability to handle large sequence lengths. By harnessing AWS’s cloud infrastructure, the team successfully scaled Evo 2’s capabilities.

Universities Teaming Up To Identify Viruses In Human Bodies Using AI

The New York Times (2/19, Zimmer) reports “scientists estimate that tens of trillions of viruses live inside of us, though they’ve identified just a fraction of them.” This year, the Times says, “five universities are teaming up for an unprecedented hunt to identify these viruses.” The universities “will gather saliva, stool, blood, milk and other samples from thousands of volunteers.” The five-year effort, named “the Human Virome Program and supported by $171 million in federal funding, will inspect the samples with artificial intelligence systems, hoping to learn about how the human virome influences our health.”

Lambda Raises $480 Million For AI Development

Reuters (2/19, Hu) reports that Lambda, a cloud computing firm focused on AI development, has secured $480 million in a Series D equity round led by Andra Capital and SGW, with participation from Nvidia, ARK Invest, G Squared, and Super Micro. This funding increases its total equity raised to $863 million and gives the company a post-money valuation of $2.5 billion. CEO Stephen Balaban noted a surge in demand for Nvidia H200 chips due to the launch of open-source model DeepSeek-R1, and the funds will help expand their cloud services and software offerings.

Meta Introduces Automated Compliance Hardening Tool

InfoQ (2/19) reports that Meta has launched the Automated Compliance Hardening (ACH) tool, a mutation-guided, LLM-based system designed to improve software reliability and security by generating faults and tests. Unlike traditional methods, ACH targets specific faults using plain text descriptions, simplifying the fault creation process. The system employs three LLM-based agents: Fault Generator, Equivalence Detector, and Test Generator. Rajkumar S, a senior developer at SAP Labs India, noted that ACH is a “game-changer” for enhancing code reliability. Meta plans to further deploy ACH and refine its fault detection capabilities.

Microsoft Prepares For OpenAI’s Upcoming Model Releases

The Verge (2/20, Warren) reports that Microsoft is gearing up to host OpenAI’s GPT-4.5 and GPT-5 models, with GPT-4.5 expected to launch next week. OpenAI CEO Sam Altman indicated that GPT-5 could arrive by late May, coinciding with Microsoft’s Build developer conference. GPT-5 will integrate OpenAI’s o3 reasoning model and aims to streamline user interactions with AI. Additionally, Microsoft is enhancing its Copilot features and working on AI advancements in gaming and quantum computing, as well as preparing for a series of announcements at the upcoming conference.

Helix Model Announced By Figure For Humanoid Robots

TechCrunch (2/20, Heater) reports that Figure founder and CEO Brett Adcock introduced a new machine learning model called Helix for humanoid robots on Thursday. This “generalist” Vision-Language-Action model allows robots to process visual and language commands in real time. Helix can control multiple robots simultaneously to perform household tasks. Figure aims to prioritize home robotics despite the complexities involved, stating, “For robots to be useful in households, they will need to be capable of generating intelligent new behaviors on-demand.” The announcement also serves as a recruitment tool for engineers.

AI Drives Change In Manufacturing Industry

Forbes (2/20, Cubiss) reports manufacturers are leveraging Industry 4.0 foundations to scale AI, with only 16% of industrial manufacturing businesses integrating AI compared to 25% across all industries, according to a new SAP report. The sector’s experience with data integration offers valuable lessons for others. AI applications like predictive maintenance and quality assurance are delivering significant value, emphasizing the importance of data quality and system integration.

dtau...@gmail.com

unread,
Mar 1, 2025, 12:27:15 PMMar 1
to ai-b...@googlegroups.com

Yoshua Bengio Proposes 'Scientist AI' to Mitigate Catastrophic Risks from Superintelligent Agents

ACM A.M. Turing Award laureate Yoshua Benjio is among the AI researchers who proposed "Scientist AI," an AI system trained to explain the world based on observations. Unlike agentic AIs, which they described as “unsafe,” Scientist AI is not trained to pursue a goal, but to explain events and estimate their probability. The researchers said the system does not use reinforcement learning, which can “easily lead to goal misspecification and misgeneralization.”
[ » Read full article ]

Analytics India; Supreeth Koundinya (February 25, 2025)

 

DeepSeek 'Shared User Data' with TikTok Owner ByteDance

South Korea said Chinese AI startup DeepSeek shares user data with TikTok owner ByteDance, but it has "yet to confirm what data was transferred and to what extent." Data protection concerns prompted the removal of DeepSeek from app stores in South Korea. A review of DeepSeek's Android app by U.S. cybersecurity firm Security Scorecard found "multiple direct references to ByteDance-owned" services, "suggest[ing] deep integration with ByteDance's analytics and performance monitoring infrastructure."
[ » Read full article ]

BBC; Imran Rahman-Jones (February 18, 2025)

 

Google AI Co-Scientist to Aid Biomedical Researchers

An AI tool developed by Google and tested by researchers at Stanford University and the U.K.'s Imperial College London is intended to serve as an assistant to biomedical scientists. The multi-agent AI co-scientist helps researchers synthesize literature and produce novel hypotheses through the use of advanced reasoning. In tests involving liver fibrosis, Google found the tool recommended promising solutions for disease prevention and indicated it could improve the solutions it provides over time.
[ » Read full article ]

Reuters; Muvija M; Kenrick Cai (February 19, 2025)

 

U.K. Government Delays New AI Bill for Six Months

The U.K. government has delayed publication of its AI Bill until the summer. The bill is expected to require companies to submit their AI models to the U.K. AI Security Institute for testing. A senior Labor Party source noted that there still are "no hard proposals in terms of what the legislation looks like."
[
» Read full article ]

Computing; Graeme Burton (February 25, 2025)

 

Weather Forecasting Takes Big Step Forward with Europe's New AI System

The European Centre for Medium-Range Weather Forecasts (ECMWF) has launched an AI forecasting system that can predict a tropical cyclone track 12 hours ahead. It also was found to be 20% more accurate than conventional forecasting methods for predictions up to 15 days ahead. The system predicts standard temperature, precipitation, and wind, as well as solar radiation and wind speeds at 100 meters, the height of a typical wind turbine, which will be useful data for the renewable energy sector.

[ » Read full article *May Require Paid Registration ]

Financial Times; Clive Cookson (February 24, 2025)

 

OpenAI Uncovers Evidence of AI-Powered Chinese Surveillance Tool

OpenAI said it found evidence that a Chinese security operation developed an AI-powered surveillance tool to assemble real-time reports about anti-Chinese posts on Western social media. OpenAI researchers discovered the tool when one of its developers used OpenAI's models to debug its underlying computer code. The researchers also identified another campaign in which Chinese developers used OpenAI's technologies to produce English-language posts that were critical of Chinese dissidents.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (February 21, 2025)

 

AI Can Decode Digital Data Stored in DNA in Minutes

Researchers at the University of California, San Diego and Technion – Israel Institute of Technology have developed an AI system that can accurately decode data stored in DNA sequences within 10 minutes. The system, called DNAformer, features a deep learning model that can reconstruct DNA sequences, an error detection and correction algorithm, and a decoding algorithm that corrects any remaining errors while converting the information to digital data.


[
» Read full article *May Require Paid Registration ]

New Scientist; Jeremy Hsu (February 21, 2025)

 

Diagnosing Diabetes, HIV, COVID from a Blood Sample with AI Tool

Stanford University computer scientists have developed an AI tool that can screen immune-cell gene sequences in blood samples to diagnose such conditions as COVID-19, type 1 diabetes, HIV, and lupus or determine whether an individual has received the flu vaccine. The tool contains six machine-learning models that can analyze gene sequences encoding key regions in B-cell and T-cell receptors and detect patterns indicating certain diseases.

[ » Read full article *May Require Paid Registration ]

Nature; Miryam Naddaf (February 20, 2025)

 

AI Is Prompting an Evolution, Not Extinction, for Coders

The research firm Evans Data found that almost two-thirds of software developers use AI coding tools, which studies have shown improve their daily productivity in actual business settings by 10% to 30%. IDC analyst Arnal Dayaratna noted, "The skills software developers need will change significantly, but AI will not eliminate the need for them. Not anytime soon anyway."

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (February 20, 2025)

 

Large Language Models Pose Growing Security Risks

In the absence of government policy on the security of large language models (LLMs), companies face new cybersecurity challenges from them, particularly from the unstructured and conversational nature of user interactions. In addition to the possibility of employees inputting sensitive corporate data into LLMs, companies should be concerned that information generated by LLMs could contain malicious code, infringe on intellectual property, or violate copyright. Further, threat actors can use prompt injection attacks to manipulate models to perform certain actions.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Steven Rosenbush (February 20, 2025)

 

AI Is Changing How Silicon Valley Builds Startups

Today's AI startups are achieving tens to hundreds of millions of dollars in revenue with small teams, using AI to improve efficiency, and many have no need for investors. Afore Capital's Gaurav Jain likens it to the wave of companies that emerged after Amazon rolled out low-cost cloud computing services, but noted that "this time, we're automating humans as opposed to just the datacenters."


[
» Read full article *May Require Paid Registration ]

The New York Times; Erin Griffith (February 20, 2025)

 

Graduates of Chinese Universities Drive AI Research in U.S.

The Paulson Institute's MacroPolo think tank found that roughly a third (38%) of top AI researchers in the U.S. in 2022 had obtained undergraduate degrees from Chinese universities, up from 27% in 2019, versus 37% with degrees from U.S. institutions. An analysis of papers presented that year at the Conference on Neural Information Processing Systems found the U.S. accounted for seven of the 10 entities affiliated with these AI experts; China’s Tsinghua and Peking universities also were in that top 10.

[ » Read full article *May Require Paid Registration ]

Nikkei Asia; Ryoko Shimonoya; Dai Kuwamura; Tatsuya Ozaki (February 16, 2025)

 

Meta AI Expert Warns Of US-Based Scientist Exodus Due To Trump Funding Cuts

Insider (2/22, Varanasi) reported that Yann LeCun, Meta’s chief AI scientist, cautioned about a potential departure of US-based scientists due to proposed funding cuts from the Trump Administration. In a LinkedIn post on Saturday, LeCun stated, “The US seems set on destroying its public research funding system. Many US-based scientists are looking for a Plan B.” The Administration’s drastic cuts to the National Institutes of Health could lead to billions in losses for biomedical research. As lawsuits challenge these cuts, former Harvard Medical School Dean Jeffrey Flier remarked, “A sane government would never do this.” LeCun urged European institutions to capitalize on this situation, suggesting they could attract top talent by improving research conditions. He outlined key factors that researchers seek, including access to funding, good compensation, and freedom in research endeavors.

IndiaAI Mission Accelerates Domestic AI Development

Livemint (IND) (2/21) reports that India is intensifying efforts to create a homegrown artificial intelligence foundational model under the IndiaAI Mission. The Union Ministry of Electronics and IT has received 67 proposals, including submissions from major companies like Sarvam AI and Ola, focusing on large language models. IT Minister Ashwini Vaishnaw emphasized the importance of ethical AI principles. The government plans to provide significant GPU resources and launch a common compute facility to support innovation. This initiative aims to position India competitively in the global AI landscape, responding to China’s DeepSeek model.

Schools Adopt AI Chatbot For Mental Health Support

The Wall Street Journal (2/22, Jargon, Subscription Publication) reported that school districts across the US are implementing Sonny, a hybrid AI-human chatbot developed by Sonar Mental Health, to assist students with mental health issues amid a shortage of counselors. The service is available to more than 4,500 students in nine districts. Sonar CEO Drew Barvir emphasizes that trained professionals monitor interactions, ensuring safety. The program aims to enhance emotional support, especially in low-income areas.

AI Power Demand Drives Investment In Alternative Energy Sources

CNBC (2/24) reports in an online video that soaring AI workloads are driving significant investment in alternative power sources like hydrogen, nuclear, geothermal, and solar energy. Data centers could consume 12% of total US power by 2028, up from less than 4% in 2022. This surge has prompted major cloud providers to explore new energy solutions. Amazon has invested $500 million into three small nuclear reactor projects. Microsoft has green hydrogen and nuclear fusion deals, while Google is using geothermal energy to power some data centers. OpenAI CEO Sam Altman has invested heavily in fusion, fission, and new solar technologies. The Biden administration finalized a major tax credit for clean hydrogen and previously awarded $7 billion to jumpstart clean hydrogen at seven hydrogen hubs connected to companies like Amazon and ExxonMobil.

Meta Expands AI Chatbot To Middle East, North Africa

TechCrunch (2/24, Sawers) reports, “Meta has formally expanded Meta AI to the Middle East and North Africa (MENA), opening the AI-enabled chatbot to millions more people.” Moving forward, the chatbot “will be available in Algeria, Egypt, Iraq, Jordan, Libya, Morocco, Saudi Arabia, Tunisia, the United Arab Emirates (UAE), and Yemen.” Furthermore, “Meta is also expanding language support to include Arabic.”

World’s Largest Data Center Planned In South Korea

Tom’s Hardware (2/24, Morales) reports that Stock Farm Road (SFR) has signed a Memorandum of Understanding with South Jeolla Province Governor Kom Young-rok to build the world’s largest data center in South Korea. The facility will cost about $35 billion and have a capacity of 3 GW. Construction will begin this year, with a target completion in 2028. The project will include renewable energy production and R&D initiatives, creating over 10,000 jobs and generating $3.5 billion in revenue. SFR, founded by Brian Koo and Dr. Amin Badr-El-Din, plans to establish more AI data centers in Asia, Europe, and the US within 18 months. Microsoft CEO Satya Nadella noted an overbuilding of AI systems, mentioning that Microsoft will limit capital investments in AI infrastructure and lease capacity from existing data centers like those planned by SFR.

Shift Toward Natural Gas Seen Amid Effort To Meet AI Demand

The Washington Post (2/23, Halper) reported that tech and energy firms are pivoting towards natural gas to meet escalating energy needs, notably for AI development. Microsoft and Meta are advancing projects powered by gas, despite previous commitments to clean energy. GE Vernova is collaborating with Engine No. 1 to enhance gas generation for data centers, with plans to power over 3 million homes. Christopher James from Engine No. 1 noted, “Gas is going to be here,”

China’s DeepSeek Accelerates Launch Of R2 AI Model

Reuters (2/25) reports that Chinese startup company DeepSeek “triggered a $1 trillion-plus sell-off in global equities markets last month with a cut-price AI reasoning model that outperformed many Western competitors.” Now, the firm “is accelerating the launch of the successor to January’s R1 model, according to three people familiar with the company.” Deepseek had “planned to release R2 in early May but now wants it out as early as possible.” The company “says it hopes the new model will produce better coding and be able to reason in languages beyond English.” Reuters says R2 is “likely to worry the U.S. government, which has identified leadership of AI as a national priority.”

Schneider Electric Launches Global AI Ecosystem Organization To Help Partners Capture AI Opportunity

Benzinga (2/24, Inc) reports Schneider Electric has launched a new global AI and enterprise partner ecosystem organization aimed at helping partners capitalize on the AI revolution. Paul Tyrer, the newly appointed global vice president, emphasized the transformative potential of AI-powered solutions for business operations, stating, “AI-powered solutions have the potential to revolutionize business operations and drive innovation like never before.” The initiative includes the appointment of Leslie Vitrano Hubright as vice president of the global IT channel ecosystem, who noted, “This is Schneider Electric doubling down and investing in partners to lead the AI revolution.” The organization aims to enhance AI integrations and capabilities, positioning Schneider Electric and its partners to seize significant opportunities in the evolving data center landscape.

Nvidia Extends Partnership With Cisco To Ease AI Adoption

Bloomberg (2/25, Grant, Subscription Publication) reports Nvidia “is extending a partnership with networking-gear maker Cisco Systems Inc. in a push aimed at making it easier for corporations to deploy AI systems.” Bloomberg adds, “Many businesses remain in the early stages of adopting AI systems because of the complexity the shift adds to their data centers, Cisco and Nvidia said Tuesday in a joint statement.” The companies “are broadening the list of products that include each others’ technology in an attempt to remove those hurdles.”

Leading Technology Companies Turn To Hydrogen, Nuclear Energy For AI Data Centers

CNBC (2/24, Novet) reports top tech companies, including Microsoft, Amazon, and Google, are increasingly turning to hydrogen and nuclear energy to power their AI data centers, as highlighted in a recent CNBC article. Yuval Bachar, founder of the startup ECL, which builds hydrogen-powered data centers, noted that these facilities can be operational in half the time of traditional grid-connected centers, addressing the urgent power needs for AI technologies. Bachar emphasized, “We have a problem that we have to solve right now,” reflecting the growing demand for energy-efficient solutions in the tech industry. As the race for AI capabilities intensifies, companies are exploring various energy sources, including small modular reactors, to meet their sustainability goals, with Google aiming for net-zero emissions by 2030 and Microsoft targeting carbon negativity by the same year.

Amazon Cracks Down On AI Use During Job Interviews

Insider (2/27, Kim) reports Amazon is cracking down on the use of AI tools during job interviews, citing concerns over fairness and the ability to assess candidates’ “authentic” skills. Recent guidelines shared with Amazon recruiters state that applicants may face disqualification for using AI tools. The guidelines instruct recruiters to inform candidates about the policy. An Amazon spokesperson said the company’s recruiting process “prioritizes ensuring that candidates hold a high bar.” The spokesperson added that candidates must acknowledge they won’t use “unauthorized tools, like GenAI, to support them” during interviews. Amazon has also shared internal tips on how to spot applicants using AI, such as observing typing, unnatural reading of answers, or reactions to incorrect AI outputs.

Educators Are Learning To Navigate AI Cheating Concerns

Education Week (2/27, Klein) reports that educators are adapting to the rise of generative AI tools in academic settings, particularly concerning cheating. Michael Rubin, principal of Uxbridge High School in Massachusetts, emphasized the importance of teaching students to use AI responsibly, saying, “It’s not about the risk of getting caught, it’s about knowing how to use the technology appropriately.” Rubin’s school employs a tool to analyze student submissions for signs of AI usage, promoting discussions about appropriate AI use rather than punitive measures. Amelia Vance, president of the Public Interest Privacy Center, cautioned that many AI detection tools are inaccurate, particularly for students of color and non-native English speakers. Vance noted, “Unfortunately, at this point, there isn’t an AI tool that sufficiently, accurately detects when writing is crafted by generative AI,” reinforcing the need for direct communication with students suspected of cheating.

dtau...@gmail.com

unread,
Mar 8, 2025, 7:52:34 PMMar 8
to ai-b...@googlegroups.com

Barto, Sutton Receive 2024 ACM A.M. Turing Award

Andrew G. Barto, professor emeritus of information and computer sciences at the University of Massachusetts, Amherst, and Richard S. Sutton, professor of computer science at the University of Alberta in Canada, are the recipients of the 2024 ACM A.M. Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning. In a series of papers beginning in the 1980s, the two introduced the primary concepts, built the mathematical foundations, and developed vital algorithms in the field. Their work, said ACM President Yannis Ioannidis, "laid the foundations for some of the most important advances in AI."
[ » Read full article ]

ACM Media Center (March 5, 2025)

 

AI Reshapes the Coding Workforce

The increased adoption of AI coding tools is changing the size and scope of software development teams, often allowing for leaner teams that complete the same amount of work or more. These tools, which automate a substantial amount of code development, are intended to supplement human coders. Companies have found such tools can permit developers to concentrate on complex problem-solving when boilerplate coding is automated.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (March 4, 2025)

 

AI Finds 5,000-Year-Old Civilization Beneath Dubai Desert

Researchers have located a 5,000-year-old city and roads under the sand in Dubai's Rub' al Khali desert with the help of AI and Synthetic Aperture Radar (SAR) technology. According to one of the researchers, "The application of [AI] in archaeology is like having a time machine, and now we can look at history from completely new angles."
[ » Read full article ]

The Jerusalem Post (March 3, 2025)

 

MTA Used Google Pixels to Identify Subway Track Defects

New York City's Metropolitan Transportation Authority deployed Google's TrackInspect AI tool to identify defects on subway tracks. From last September through January, four subway cars were equipped with Google Pixel phones, which detected problematic noises and other issues using their accelerometers, magnetometers, and microphones. Machine-learning algorithms were used to analyze the data and produce predictive insights. TrackInspect located 92% of defects that had been identified by inspectors.
[ » Read full article ]

Engadget; Sarah Fielding (February 28, 2025)

 

China Ramps Up Efforts for Tech Independence

Chinese Premier Li Qiang, in a speech to that nation’s lawmakers Wednesday, said AI would be vital for strengthening China’s digital economy. Li pledged that China would boost its support for applications of large-scale AI models and AI hardware. On the same day, China’s top economic planning body said the country aimed to develop a system of open-source AI models, while continuing to invest in computing power and data for the technologies.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Raffaele Huang (March 5, 2025)

 

Smart Cameras Spot Wildfires Before They Spread

The University of California, San Diego's ALERTCalifornia camera network uses AI bots as digital fire-lookouts, scanning more than 1,150 cameras in fire-prone areas across the state. Since the bots were deployed in 2023, they have detected more than 1,200 confirmed fires and are faster than 911 callers about 33% of the time. A human-staffed command center is notified when a fire is detected, where the blaze is verified and authorities are notified.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Jim Carlton (March 2, 2025)

 

Texas Needs Equivalent of 30 Reactors to Meet Datacenter Power Demand

The Electric Reliability Council of Texas (ERCOT), which manages the state's power grid, forecast an increase in energy demand requiring the addition of 30 nuclear plants' worth of electricity by 2030, thanks to the anticipated addition of datacenters powering AI to the grid. Said ERCOT’s Agee Springer, “We’ve never existed in a place where large industrial loads can really impact the reliability of the grid, and now we are stepping into that world.”

[ » Read full article *May Require Paid Registration ]

Bloomberg; Naureen S. Malik (February 28, 2025)

 

Google’s Brin Urges Workers to the Office ‘at Least’ Every Weekday

Google co-founder Sergey Brin last week said his company could lead the industry in AI when machines match or become smarter than humans, but only if employees worked harder. “I recommend being in the office at least every weekday,” he wrote in a memo posted internally. He added that “60 hours a week is the sweet spot of productivity” in the message to employees who work on Gemini, Google’s lineup of AI models and apps.

[ » Read full article *May Require Paid Registration ]

The New York Times; Nico Grant (February 28, 2025)

 

AI Robots Help Nurse Japan's Aging Population

Japan is turning to robots and other technologies to help care for its aging population. An AI-driven humanoid robot called AIREC, for example, recently was demonstrated gently helping a man in bed roll onto his side. Said Waseda University's Shigeki Sugano, who is heading the AIREC robot project, "Given our highly advanced aging society and declining births, we will be needing robots' support for medical and elderly care, and in our daily lives."
[ » Read full article ]

Reuters; Kiyoshi Takenaka (February 28, 2025)

 

Humanoid Robots Finally Get Real Jobs

Humanoid robots, with the help of AI, are being used to perform tasks typically done by human workers, or to serve as a bridge between other less-versatile automated machines common in warehouses and factories. Mass manufacturing and falling costs for the components of robots are making them cheaper to produce, and the latest AI technologies are animating robot bodies in ways not possible even a few years ago. More than a dozen startups worldwide now offer such humanoid robots.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Christopher Mims (February 27, 2025)

 

Estonia Launches AI in High Schools with U.S. Tech Groups

The Estonian government is launching the AI Leap initiative in partnership with OpenAI and Anthropic, providing free access to AI-learning tools to 20,000 high school students beginning in September. Next year, the program will be expanded to vocational schools, and possibly younger students as well. Estonian President Alar Karis said the goal of AI Leap is to foster an awareness of and critical thinking about AI among students, not to replace educators.

[ » Read full article *May Require Paid Registration ]

Financial Times; John Thornhill; Richard Milne (February 26, 2025)

 

U.S. Workers Skeptical AI Will Help Them

Less than a third of respondents to a Pew Research Center survey of around 5,300 Americans said they were "excited" about the use of AI in future workplaces. Around 80% of Americans do not use AI at work, and most of those who do are not impressed by the results, according to the survey. Among other findings, 52% of workers said they were "worried" about how AI could be used in future workplaces.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Shira Ovide; Danielle Abril (February 25, 2025)

 

Universities Expand AI Course Offerings Amid Rising Demand

Insider (3/2, Perkel) reports that universities are increasingly developing artificial intelligence programs to meet rising interest, particularly among non-STEM students. Carnegie Mellon University (CMU) has evolved its undergraduate AI major since its inception in 2018, with program director Reid Simmons noting, “These large language models... have basically taken over.” The focus now includes a broader range of AI topics, with machine learning classes increasing from a couple to “as many as 10.” Similarly, Johns Hopkins University is expanding its online AI master’s program to accommodate students from diverse backgrounds, as director Barton Paulhamus stated, “What can we give them that they can learn about AI without needing to go through 10 courses of prerequisites?” The University of Miami aims to demystify AI for non-computing students, with Dean Leonidas Bachas emphasizing, “This is a computer science class for all.”

Chinese Buyers Using Third Parties To Circumvent Export Controls For Next-Gen AI Chips

The Wall Street Journal (3/2, Huang, Lin, Subscription Publication) reports Chinese customers are finding ways to circumvent US export controls for next-gen computer chips, particularly those that assist in the development of artificial intelligence. Many traders within China are selling computers with Nvidia’s Blackwell chips pre-installed by routing the product through third parties in nearby countries, highlighting the challenges the Administration is facing in preventing some nations from accessing the chips.

Amazon Invests $500M In New Nuclear Reactor Project

KIRO-TV Seattle (2/28, Thompson) reported Amazon has committed $500 million to X-Energy for the construction of a new nuclear reactor near the Columbia Generation Station in Washington, part of Amazon’s bid to power its AI revolution. The planned “pebble bed” reactor technology promises enhanced safety compared to traditional reactors. However, environmental concerns persist, as Columbia Riverkeeper advocacy director Dan Serres emphasized the risks of additional nuclear waste near the already contaminated Hanford site, calling it “an absolutely reckless idea.” The smaller X-Energy reactors aim to generate about 80 mW of electricity, contrasting with the over 1,000 mW produced by the Columbia Generating Station, and could be built in a factory-like setting to reduce costs.

California Lawmaker Relaunches Pared-Down AI Safety Bill After Big Tech Pushback

Politico (2/28, DiFeliciantonio) reported California Sen. Scott Wiener (D), who was “behind a divisive AI safety bill last year, has relaunched a pared-down version focused on whistleblower protections, after his prior failed attempt ignited a national debate over how, and whether, to regulate the powerful technology.” Wiener “filed the full details of his second attempt at reining in the potential harms of artificial intelligence late Thursday night, after his last bill was vetoed by Gov. Gavin Newsom amid pushback from certain Big Tech figures warning of consequences for innovation.”

AI Skills Gap Highlights Workforce Expectations

Quotidiano (ITA) (3/3) reports that a recent study by Access Partnership, in collaboration with Amazon Web Services, surveyed more than 6,500 employees and 2,000 employers across France, Germany, Spain, and the UK. The report emphasizes the need to address the AI skills gap to maintain Europe’s competitiveness over the next decade. By 2028, 86% of employers plan to adopt AI tools, particularly in IT (82%) and other business functions like finance (77%) and R&D (78%). Maureen Lonergan, Vice President of Training and Certification at Amazon Web Services, noted that “65% of European workers believe AI will positively impact their careers and are interested in acquiring specific skills.”

Singapore Fraud Case Involving US Servers Could Contain Nvidia Chips, Minister Says

Reuters (3/3) reports that Singapore announced a fraud case last week involving three individuals charged with illegally moving Nvidia’s AI chips to the Chinese firm DeepSeek. On Monday, Home Affairs and Law Minister K Shanmugam revealed that the servers implicated were supplied by US companies Dell Technologies and Super Micro Computer. He stated, “Whether Malaysia was the final destination ... we do not know for certain at this point,” while confirming that Singapore is conducting an independent investigation following an anonymous tip-off. The Singaporean authorities have also reached out to US officials to ascertain if the servers contained any US export-controlled items and are prepared to collaborate on any joint inquiry. The US is currently investigating if DeepSeek has utilized prohibited US chips, as reported by Reuters.

Amazon, Nvidia Drive Expansion Of Physical AI In Robotics, Automation

Forbes (3/3, MSV) reports physical artificial intelligence is transforming industries by integrating AI with sensors and actuators in robots, vehicles, and devices. Unlike traditional automation, physical AI enables machines to adapt in real time and operate autonomously. Amazon’s fulfillment centers use 750,000+ mobile robots to boost efficiency, with AI-driven systems like Cardinal sorting packages and improving productivity by 25%. Nvidia is investing heavily in hardware and simulation platforms to accelerate physical AI adoption. Organizations must consider high initial costs, security risks, and workforce training when implementing these technologies. As AI advances, industries from manufacturing to retail will continue integrating autonomous systems to improve efficiency and reduce manual labor.

OpenAI Announces $50 Million Investment In Higher Ed Research Consortium

Inside Higher Ed (3/5, Palmer) reports that OpenAI announced on Tuesday a $50 million investment to establish NextGenAI, a research consortium comprising 15 institutions aimed at leveraging AI to enhance research and education. The group, which includes 13 universities, is intended to “catalyze progress at a rate faster than any one institution would alone,” according to the company. Brad Lightcap, OpenAI’s chief operating officer, emphasized the importance of collaboration, stating, “The field of AI wouldn’t be where it is today without decades of work in the academic community.” Each institution, including Boston Children’s Hospital and the Boston Public Library, will receive funding and computational resources to support various initiatives, such as AI literacy and medical research. The consortium features notable universities like Harvard, MIT, and the University of Oxford.

Reclaim Project Develops Portable AI-Powered Recycling Plant

Recycling Today (3/3, Voloschuk) reports the Reclaim project, funded by the EU’s Horizon 2020 program, has created a low-cost, portable AI-powered robotic recycling plant for deployment in the Greek Islands. This technology addresses waste management challenges in remote areas by using multiple robots and AI for effective material sorting. Javier Grau from Aimplas stated, “Remote islands, hard-to-reach rural areas or regions with limited infrastructure are just some of the scenarios where this equipment can make a significant difference.” The compact design allows for rapid deployment, enhancing local recycling efforts and promoting a circular economy for plastics.

State Department Will Use AI To Revoke Foreign Student Visas

Inside Higher Ed (3/7, Custer) reports that Secretary of State Marco Rubio is set to implement an initiative called “Catch and Revoke” to utilize artificial intelligence in the assessment of foreign student visas. According to Axios, the program will analyze social media accounts of thousands of student visa holders for indications of support for Hamas’s October 7, 2023, attack on Israel. If a post appears “pro-Hamas,” it may lead to visa revocation, as stated by a State Department official. The initiative also includes reviewing news reports of anti-Israel protests and legal actions by Jewish students for potential antisemitic behavior. The official commented, “We found literally zero visa revocations during the Biden administration,” suggesting a lack of enforcement. The official emphasized the importance of using AI tools, stating, “It would be negligent for the department that takes national security seriously to ignore publicly available information.”

Microsoft To Launch AI Data Centers In Kuwait

GCC Business (3/6, Nair) reports that Microsoft has signed an agreement with the Government of Kuwait, represented by the Central Agency for Information Technology (CAIT) and the Communication and Information Technology Regulatory Authority (CITRA), to establish an AI-powered Azure Region. This partnership aims to enhance local AI capabilities and stimulate economic growth. The Azure Region is intended to provide “scalable, highly available, and resilient cloud services” to facilitate digital transformation in Kuwait. Additionally, the initiative includes integrating Microsoft 365 Copilot for government employees, promoting efficiency and productivity. The collaboration also involves launching a comprehensive skilling initiative in AI and cybersecurity to prepare the workforce for future demands.

Utah’s New $2 Billion AI Data Center Project Is A Major Bet On AI Infrastructure

The Storage Review (3/5) reports a new $2 billion AI data center in West Jordan, Utah, is fully leased before its opening, highlighting the urgent demand for AI infrastructure. Backed by J.P. Morgan and Starwood Property Trust, the facility will deliver 175MW of compute power, incorporating advanced direct-to-chip liquid cooling technology to manage high thermal loads from AI workloads. The project’s strategic location in Utah offers cost-effective power solutions and a cooler climate, making it ideal for high-performance AI applications. As financial institutions increasingly invest in AI-driven infrastructure, this development signifies a broader industry shift towards specialized data centers to meet surging AI compute demands.

dtau...@gmail.com

unread,
Mar 15, 2025, 4:29:19 PMMar 15
to ai-b...@googlegroups.com

China's Top Universities Prioritize AI, Other 'National Strategic Needs'

China's Peking, Renmin, and Shanghai Jiao Tong universities will expand undergraduate enrollment as they prioritize "national strategic needs" such as developing AI talent. Peking University plans to add 150 undergraduate spots this year focused on areas of "national strategic importance," fundamental disciplines, and "emerging frontier fields" such as information science and technology. In January, China issued its first national action plan to build a "strong education nation" over the next 10 years.
[
» Read full article ]

Reuters; Farah Master; Eduardo Baptista (March 10, 2025)

 

AI Makes Its Way to Vineyards

The wine industry increasingly is adopting AI to supplement its workforce and improve decision-making, efficiency, and sustainability while reducing waste. Autonomous tractors help farmers reduce fuel use and pollution, while automated irrigation systems make water use more efficient by monitoring soil and vines. Smart sensors help target spraying of insecticides or other material for crop retention, and the AI-powered farm management platform Scout can analyze images to monitor a crop's health and predict yields.
[
» Read full article ]

Associated Press; Sarah Parvini (March 10, 2025)

 

U.S. to Use AI to Review Foreign Student Visa Holders for Terrorist Sympathies

U.S. Secretary of State Marco Rubio is launching an AI-enabled "Catch and Revoke" effort to cancel the visas of foreign nationals who appear to support designated terror groups, sources say. The effort includes AI-assisted reviews of tens of thousands of student visa holders' social media accounts, focusing on evidence of alleged terrorist sympathies.
[ » Read full article ]

Axios; Marc Caputo (March 6, 2025)

 

AI-enabled BCI Allows Paralyzed Man to Control Robot Arm

A brain-computer interface (BCI) developed by University of California, San Francisco researchers enabled a patient who was paralyzed after suffering a stroke to operate a robotic arm for seven months without significant calibration. The researchers created an AI model that adjusted for day-to-day shifts in brain activity, overcoming a common challenge associated with BCIs. The AI learned from the patient's brain signals while he visualized simple movements and practiced with a virtual robotic arm.
[ » Read full article ]

Interesting Engineering; Srishti Gupta (March 6, 2025)

 

AI to Search for the Trillions of Viruses in Our Bodies

Five universities are participating in the Human Virome Program, aimed at identifying more of the tens of trillions of viruses living in the human body. The project, which has received $171 million in federal funding, will use AI systems to analyze saliva, stool, blood, milk, and other samples from thousands of volunteers. Researchers hope the program will provide insights on how the virome influences health.

[ » Read full article *May Require Paid Registration ]

The New York Times; Carl Zimmer (March 4, 2025)

Additional free news story on this project: https://www.caltech.edu/about/news/caltech-joins-national-human-virome-program

 

1 in 5 Women in Tech Plan to Switch Jobs

Generative AI skills are helping boost women in the technology sector as many look to switch jobs, according to Ensono's latest Speak Up survey of 1,500 female-identifying full-time tech professionals. Almost 90% of respondents said possessing generative AI know-how has enhanced their job performance and unlocked new opportunities. Nearly 20% are planning to leave their current companies this year, a rate similar to that seen in 2022's "Great Resignation."
[
» Read full article ]

CIO Dive; Lindsey Wilkinson (March 10, 2025)

 

Beijing to Roll Out AI Courses for Kids

Starting in the fall, schools in Beijing will introduce AI courses to primary and secondary students. At least eight hours of AI classes will be offered per academic year, according to the Beijing Municipal Education Commission, which said schools will be able to run them as standalone courses or integrate them with existing curricula.

[ » Read full article *May Require Paid Registration ]

Bloomberg (March 9, 2025)

 

Pentagon Signs AI Deal to Help Commanders Plan Military Maneuvers

Illustrating growing collaboration between the U.S. military and private tech sector, the Pentagon has contracted startup Scale AI to find ways to use AI to speed up military decision-making. Scale will develop AI programs that commanders could query for recommendations about how to most efficiently move resources throughout a region, combining data from intelligence sources and battlefield sensors.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Gerrit De Vynck (March 5, 2025)

 

McDonald's Gives Its Restaurants an AI Makeover

McDonald's is rolling out edge computing to its restaurants, enabling them to process and analyze data on-site with the goal of improving the customer and employee experience. AI will be used to analyze data from Internet-connected kitchen equipment to predict maintenance issues, while in-store mounted cameras will use computer vision to ensure order accuracy. In addition, voice AI will be used at the drive-through, and generative AI virtual managers will handle shift scheduling and other administrative tasks.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette; Belle Lin (March 5, 2025)

 

European Commission Plans Gigafactories To Boost AI Industry

Reuters (3/11) reports that the European Commission is raising $20 billion to build four “AI gigafactories” aimed at enhancing Europe’s competitiveness in artificial intelligence. This initiative, announced by President Ursula von der Leyen at the February 11 AI summit in Paris, seeks to develop large public access data centers. Industry experts, such as Bertin Martens from Bruegel, express skepticism about the practicality of these factories, highlighting challenges like chip shortages and site selection. The gigafactories will be funded through a new 20 billion-euro fund and are envisioned as a public-private partnership to support local firms in creating AI models compliant with EU regulations. However, Kevin Restivo from CBRE warns that these projects could encounter the same obstacles as existing private ventures in Europe.

China’s Manus AI Claims Lead Over US Competitors

Bloomberg (3/10, Subscription Publication) reports that a Chinese startup, Manus AI, recently launched a preview of its general AI agent, which claims to outperform leading US competitors like OpenAI’s Deep Research in tasks such as resume screening and itinerary creation. Co-founder Yichao Ji described the product as “truly autonomous,” generating significant interest and comparisons to another Chinese firm, DeepSeek. However, user feedback has been mixed; while some praised its outcomes, others noted slow processing times and crashes. Manus has raised over $10 million but has not published detailed development papers or released its code. The competitive landscape remains uncertain as US companies continue to innovate in AI technology.

Stargate Venture To Deploy Nvidia Chips At Texas Data Center

Data Center Knowledge (3/7) reports that OpenAI and Oracle Corporation are set to fill a new data center in Abilene, Texas, with 64,000 Nvidia AI chips by the end of 2026 as part of their $100 billion Stargate venture. The initial phase will see 16,000 chips deployed by summer 2024. An OpenAI spokesperson confirmed collaboration with Oracle on the data center’s design and operation, emphasizing the significant computing power aimed at enhancing generative AI capabilities.

OpenAI Partners Agrees To Pay CoreWeave $11.9 Billion For AI Data Centers, Services

CNBC (3/10, Field) reports OpenAI has agreed to pay CoreWeave $11.9 billion over five years for AI data centers and services. The agreement includes OpenAI acquiring a $350 million stake in CoreWeave linked to its upcoming IPO, according to confidential sources. CoreWeave, supported by Nvidia, plans to go public on Nasdaq soon, with a 2024 revenue increase of over 700% to $1.92 billion. In October, CoreWeave secured a $650 million credit line to expand its data centers and has raised over $12 billion from investors. By the end of 2024, CoreWeave operated 32 data centers with more than 250,000 Nvidia GPUs, surpassing its initial goal of 28 centers. CoreWeave’s clientele includes Microsoft, Meta, IBM, and Cohere. The company, valued at $19 billion in May, aims for a valuation exceeding $35 billion in its IPO.

Report: Meta Testing First In-House Chip For AI Training

Reuters (3/11, Paul, Hu) reports Meta “is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters.” The company “has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.”

University of Wisconsin-Stout Embedding AI Training In All Of Its Degree Programs

The Chippewa (WI) Herald reports that “from engineering to communication and counseling, manufacturing, marketing and design, construction, supply chain and more, University of Wisconsin-Stout is preparing graduates to meet the needs of a rapidly evolving workforce by embedding AI training in all of its degree programs.” And its “comprehensive approach to AI literacy is more than program curriculums – Wisconsin’s Polytechnic University is collaborating with community, business and industry partners through its innovation centers, consulting services, continuing education courses and regional consortiums to help Wisconsin leverage AI-driven solutions that put it ahead of the curve.”

California College Professors Divided On Using AI Tools In Curricula

EdSource (3/12) reports that California colleges have begun incorporating artificial intelligence (AI) tools into their curricula since the release of ChatGPT in 2022. While some professors express concerns about cheating and diminished critical thinking, others advocate for AI’s educational benefits. A report from University of Southern California’s Marshall School of Business revealed that 38 percent of faculty use AI in classrooms. Professor Ramandeep Randhawa said, “It is critical to prepare students for this AI-first environment.” At California State University, Long Beach, lecturer Casey Goeller has students use AI for assignments, emphasizing its utility in academic support. However, some faculty, like Professor Olivia Obeso from Cal Poly, enforce no-AI policies to foster foundational skills. Overall, educators are navigating a balance between embracing AI and ensuring students develop critical thinking skills necessary for the workforce.

Google Unveils Two New AI Models For Robotics

Reuters (3/12) reports that Google introduced two new AI models for robotics on March 12, based on its Gemini 2.0 model, aiming to support the expanding robotics industry. The models, Gemini Robotics and Gemini Robotics-ER, enhance robots’ capabilities in understanding their environment and executing physical actions. This launch follows Figure AI’s recent departure from a collaboration with OpenAI after achieving a breakthrough in AI for robotics. Google tested its models on data from its bi-arm platform, ALOHA 2, and noted their utility for startups looking to lower development costs. Additionally, Apptronik, which recently secured $350 million in funding with Google’s participation, is set to scale production of AI-powered humanoid robots.

Celestial AI Raises $250 Million For AI Chip Development

Reuters (3/11, Nellis) reported that Celestial AI, a Silicon Valley chip startup, announced on Tuesday it has secured an additional $250 million in venture capital, raising its total funding to $515 million. The company is utilizing photonics technology, which employs light instead of electrical signals, to enhance connections between AI computing chips and memory chips. This connection’s speed, known as memory bandwidth, is crucial for advancing AI systems and influences US government export controls regarding AI technology. Nvidia currently leads in memory bandwidth with its technologies NVLink and NVSwitch, prompting competition among startups for alternatives. Celestial AI’s technology, described as a “photonic fabric,” aims to improve speed while conserving space and power. CEO Dave Lazovsky stated, “There are no good answers right outside of Nvidia,” highlighting the efficiency and latency benefits of their innovation. The funding round was led by Fidelity Management & Research, with participation from several investors, including BlackRock and AMD Ventures.

dtau...@gmail.com

unread,
Mar 22, 2025, 1:36:07 PMMar 22
to ai-b...@googlegroups.com

Art Created by AI Cannot Be Copyrighted, Court Rules

The U.S. Circuit Court of Appeals for the District of Columbia unanimously ruled that art created autonomously by AI cannot be copyrighted. The three-judge panel upheld the U.S. Copyright Office's decision to deny a copyright to Stephen Thaler for the painting "A Recent Entrance to Paradise." Thaler had listed his AI platform "Creativity Machine" as the painting's "author" and himself as the owner in the copyright application.
[ » Read full article ]

CNBC; Dan Mangan (March 19, 2025)

 

Europol Warns of AI-Driven Crime Threats

Europol said in a report released Tuesday that organized crime gangs are moving their recruitment, communication, and payment systems online and leveraging AI to scale up their operations across the globe and prevent detection. According to the report, criminals are using AI to produce messages in different languages and create realistic impersonations of individuals, among other acts. The EU law enforcement agency said fully autonomous AI "could pave the way for entirely AI-controlled criminal networks, marking a new era in organized crime."
[ » Read full article ]

Reuters; Michal Aleksandrowicz (March 18, 2025)

 

AI Search Engines Cite Incorrect Sources at 60% Rate, Study Finds

Researchers at Columbia University's Tow Center for Digital Journalism found that AI models gave incorrect answers to more than 60% of queries about news sources. The researchers fed excerpts of news stories into eight AI-driven search tools and found that all tested models provided fabrications, rather than not responding when their information was unreliable. The study also showed the models tended to point users to syndicated versions of content rather than original publisher sites.
[ » Read full article ]

Ars Technica; Benj Edwards (March 13, 2025)

 

Tim Berners-Lee Wants to Know: 'Who Does AI Work For?'

At the South by Southwest conference, World Wide Web inventor Tim Berners-Lee, an ACM A.M. Turing Award laureate, raised the question of who AI works for. Even if AI models are reliable, accurate, and unbiased, there will be concerns about whether company or user interests are paramount. Said Berners-Lee, "I want AIs to work for me to make the choices that I want to make. I don't want an AI that's trying to sell me something."
[ » Read full article ]

CNet; Jon Reed (March 12, 2025)

 

AI Ring Tracks Spelled Words in American Sign Language

A team led by Cornell University researchers developed an AI-powered ring that can track fingerspelling in American Sign Language. Worn on the thumb, SpellRing uses a microphone and speaker to transmit sound waves that track hand and finger movements, and a mini gyroscope to track hand motions. Images captured by micro-sonar technology are analyzed by a proprietary deep learning algorithm to predict the fingerspelled letters in real time, with 82% to 92% accuracy.
[ » Read full article ]

Cornell Chronicle; Louis DiPietro (March 17, 2025)

 

Nvidia Hosts the Super Bowl of AI

The Nvidia GTC annual developer conference has evolved from an academic summit into the Super Bowl of AI, attracting a who's who of industry leaders. On March 18, more than 25,000 people filled a National Hockey League arena to hear Nvidia CEO Jensen Huang speak on the future of AI. Nvidia GTC was formerly the GPU Technology Conference, which included a research summit where academics detailed how they had used the company's components for computing research.

[ » Read full article *May Require Paid Registration ]

The New York Times; Tripp Mickle (March 18, 2025)

 

'Doxxing' Scandal Casts Shadow Over Baidu's AI Model Release

Chinese tech giant Baidu is facing criticism over a "doxxing" scandal that has overshadowed the launch of its new AI models. The daughter of Baidu Vice President Xie Guangjun shared social media users' real names, ID numbers, phone numbers, and other personal information during an online argument over a K-pop singer. The incident has raised concerns among social media users across various platforms about whether Baidu is leaking users' personal data.

[ » Read full article *May Require Paid Registration ]

Nikkei Asia; Cissy Zhou (March 18, 2025)

 

AI Is Changing the Way Computers Are Built

AI is fueling the most fundamental change to computing since the early days of the Internet. Just as companies completely rebuilt their computer systems to accommodate the new commercial Internet in the 1990s, they are now rebuilding from the bottom up, wiring together up to 100,000 chips to create powerful AI systems. The industry is also looking at new ways to house, power, and cool these systems to keep them from over-heating.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Karen Weise; Marco Hernandez (March 16, 2025); et al.

 

The Quest for AI 'Scientific Superintelligence'

Researchers at startup Lila Sciences developed a generative AI program trained on published and experimental data, the scientific process, and reasoning, in the quest for "scientific superintelligence." The AI is tasked with generating new ideas and testing them in automated labs with a handful of human assistants. Said Lila cofounder Molly Gibson, “Our goal is really to give AI access to run the scientific method—to come up with new ideas and actually go into the lab and test those ideas.”

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (March 10, 2025)

 

There's a Good Chance Your Kid Uses AI to Cheat

Impact Research found that close to 40% of middle- and high-school students, and almost half of college students, used AI to complete assignments without a teacher's knowledge or permission. Some educators are responding by requiring students to write first drafts by hand in class without access to computers or smartphones, or are no longer assigning homework. Others are using third-party AI detection tools, which are not always accurate in flagging AI use.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Matt Barnum; Deepa Seetharaman (March 15, 2025)

 

China Announces Generative AI Labeling to Cull Disinformation

The Cyberspace Administration of China, along with three other agencies, issued new regulations requiring service providers to label AI-generated material to prevent disinformation. The rules go into effect Sept. 1, with labels either explicitly stating the material is AI-generated or making the disclosure via metadata encoded in each file. Additionally, app store operators must determine whether developers provide AI-generated content services and review their labeling mechanisms.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Debby Wu (March 14, 2025)

 

AI Talent Race Reshapes the Tech Job Market

Of the U.S. tech jobs posted since January, almost 25% seek workers with AI skills, according to the University of Maryland's (UMD's) AI job tracker. AI-related listings accounted for 1.3% of all job postings in January, compared with tech job listings at 5.4%. According to UMD's Anil K. Gupta, the turning point for the AI job market was the launch of OpenAI's ChatGPT, which bumped AI-related job postings 68% from its fourth-quarter 2022 launch through the end of 2024. Over the same period, tech job postings declined 27%.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Nate Rattner (March 10, 2025)

 

Amazon Bets On Trainium Chips To Compete With Nvidia

Semafor’s (3/14) Reed Albergotti writes that Amazon’s $8 billion investment in AI startup Anthropic and its development of the Trainium2 chip mark a strategic effort to challenge Nvidia’s dominance in AI hardware. Amazon’s Annapurna Labs designed the chip as part of Project Rainier, aiming to create the world’s most powerful computer through extreme vertical integration. Anthropic, Amazon’s key customer, will use Trainium2 to train its Claude AI model, enhancing performance and cost efficiency. Annapurna Director of Engineering Rami Sinno said, “Every single chip that we build and deliver has customers waiting for it.” While Nvidia’s Cuda software remains a formidable competitor, Amazon’s open instruction set and focus on compute efficiency could attract more customers, reducing reliance on Nvidia amid global chip shortages.

Google Expands AI Business In UK

TechCrunch (3/17, Lunden) reports that Google is enhancing its AI operations in the U.K., as announced on Monday in London by Google DeepMind CEO Demis Hassabis and Google Cloud CEO Thomas Kurian. The company will expand U.K. data residency to include Agentspace, enabling local hosting of its AI agent for enterprises. Additionally, Google introduced financial incentives for AI startups, offering up to £280,000 in Google Cloud credits for those joining its new U.K. accelerator. Chirp 3, an audio generation model, will be added to the Vertex AI platform. This initiative aims to strengthen Google’s presence in the U.K. AI market.

Large Technology Companies Expected To Invest Over $500B In AI By 2032

Bloomberg (3/17, Davalos, Subscription Publication) reports large tech companies are expected to increase their combined investment in artificial intelligence “to more than $500 billion by early next decade, driven in part by a newer approach to AI from DeepSeek and OpenAI, according to Bloomberg Intelligence.” So-called hyperscale companies including Microsoft, Amazon, and Meta are “projected to spend $371 billion on data centers and computing resources for AI in 2025, a 44% increase from the year prior, according to a report published Monday.” That amount is “set to rise to $525 billion by 2032, growing at a faster clip than Bloomberg Intelligence expected before the viral success of DeepSeek.”

Nvidia CEO Expected To Defend Company’s AI Strategy As Costs, Competitors Mount

Reuters (3/17, Nellis, Cherney) reports that Nvidia CEO Jensen Huang is expected to address the company’s annual software developer conference this week amid growing pressure to “defend his nearly $3 trillion chip company’s dominance as pressure mounts on its biggest customers to rein in the costs of artificial intelligence.” At the conference, Nvidia is “expected to reveal details of a chip system called Vera Rubin, named for the American astronomer who pioneered the concept of dark matter, with the system expected to go into mass production later this year.” However, Rubin’s predecessor, “a chip named after mathematician David Blackwell announced this time last year,” is still only “trickling onto the market after production delays that have eaten into Nvidia’s margins.” Nvidia is also expected to “hint at its plans” on quantum computing and “efforts to build a personal computer central processor chip.”

Illinois Lawmakers Propose AI Guidelines For Schools

Chalkbeat (3/17, Smylie) reports that Illinois educators are urging state lawmakers to establish guidelines for using artificial intelligence (AI) in classrooms. Two bills, HB2503 and SB1556, have been proposed to create an advisory committee that will provide guidance on AI use in education. These bills require school districts to report AI usage to the Illinois State Board of Education. Rep. Laura Faver Dias (D) emphasized the importance of this legislation, saying, “Our teachers are on the front lines and spend hours with our students every day.” Bill Curtin, policy director at Teacher Plus Illinois said, “We’re really focused on empowering teachers with the guardrails to know that experimenting is safe.” The House proposal passed the education policy committee with a 9-4 vote and awaits further negotiation with the Illinois State Board of Education. Chicago Public Schools has already developed a guidebook for educators to navigate generative AI.

Collegis Education Highlights AI’s Impact On Higher Ed

Forbes (3/17, Newton) reported that Kim Fahey, CEO of Collegis Education, believes AI is transforming higher education administration. Fahey states that AI is not a simple solution for schools but requires clean data and preparation. AI can automate tasks, enhance marketing, recruitment, and retention, and offer tutoring and advising options. Collegis collaborates with Google Cloud to help institutions manage data. Brad Hoffman of Google Public Sector highlights AI’s role in integrating data to improve decision-making. Fahey notes increased tech spending and competitive pressures, emphasizing the need for skilled IT teams to harness AI effectively.

Tech Companies Request Regulatory Flexibility In South Korea

The Korea Times (3/18) reports that AI policy representatives from major tech firms, including OpenAI and Google, have requested the South Korean government to adopt a flexible approach in implementing the AI Basic Act. OpenAI’s Sandy Kunvatanagarn and Google’s Alice Hunt Friend and Eunice Huang met with the Ministry of Science and ICT officials, as did Jared Ragland from the Business Software Alliance, which includes companies like Adobe, IBM, and Microsoft. The AI Basic Act, passed by the National Assembly in December, is set to become effective in January 2026 and is the second AI law globally after the EU’s. The Ministry of Science and ICT is currently developing enforcement ordinances for the Act. The tech company officials sought flexibility compared to the EU’s stringent AI regulations and discussed operator liability and the definition of high-impact applications.

Nvidia Leads Semiconductor Revenue Growth With 125% Increase

eeNews Europe (3/17, Clarke) reported that Nvidia’s semiconductor revenue surged by 125 percent to $124.4 billion, capturing a 50 percent share of the top 10 companies’ aggregate revenue of $249.8 billion. TrendForce highlights that the adoption of open-source models like DeepSeek may reduce AI adoption costs and boost AI use from servers to personal devices, with Edge AI being the next growth driver. Nvidia’s GPU demand rose in 2024, and upcoming GB200 and GB300 launches in 2025 are expected to further increase revenue. Broadcom’s semiconductor division saw an eight percent revenue increase to $30.64 billion, with AI chips comprising me than 30 percent of its solutions. Qualcomm’s QCT division achieved $34.86 billion in sales, a 13 percent increase, as it shifts focus to AI PCs and edge computing. MediaTek’s 5G smartphone penetration is projected to exceed 65 percent in 2025, with its partnership with Nvidia on Project DIGITS supporting growth.

UK Scientists Win £1 Million Prize For AI Breakthrough In Clean Energy Materials

The Daily Mail (UK) (3/19, Media) reports that British scientists from Imperial College London won a £1 million Government prize for their AI breakthrough that accelerates the development of materials for wind turbines and electric car batteries. The project, Polaron, uses a design tool with microscopic analysis to predict material performance. The Government hopes this technology will aid in creating stronger, lighter, and more efficient components for clean energy and transport. Science Secretary Peter Kyle said, “Polaron exemplifies the promise of AI and shows how, through our Plan for Change, we are putting AI innovation at the forefront.” Business Secretary Jonathan Reynolds emphasized the Government’s dedication to leveraging new technologies like AI to aid British companies in product development and export.

Semiconductor Industry Experiences Explosive Growth In 2024

Tom’s Hardware (3/18) reports that the global semiconductor industry saw significant growth in 2024, driven by AI processor sales, according to TrendForce. The Top 10 fabless chip developers earned $249.8 billion, with Nvidia accounting for half of that revenue. Nvidia’s revenue reached $124.3 billion, a 125% increase from 2023, due to high demand for its Hopper-based GPUs. Qualcomm ranked second with $34.86 billion, a 13% increase, driven by smartphones and automotive sectors. Broadcom held third place with $30.64 billion, an 8% rise, aided by AI-related products. AMD’s revenue grew 14% to $25.79 billion, boosted by its server business. MediaTek earned $16.52 billion, a 19% increase, with success in 5G smartphones and AI products. Marvell, Realtek, Novatek, Will Semiconductor, and MPS also showed growth. TrendForce predicts AI will continue to drive growth in 2025.

Nvidia, xAI Join With Microsoft, BlackRock To Boost AI Infrastructure

Reuters (3/19, Sriram) reports that Nvidia and Elon Musk’s xAI have joined a consortium supported by Microsoft, MGX, and BlackRock, aiming to enhance AI infrastructure in the US. The consortium, established last year, plans to initially invest over $30 billion in AI projects, focusing on data centers and energy facilities to support AI applications like ChatGPT.

        Nvidia CEO Declines Involvement In Intel Consortium. Reuters (3/19) reports that during Nvidia’s annual developer conference on Wednesday in San Jose, California, Nvidia CEO Jensen Huang indicated that orders for 3.6 million “Blackwell” chips from major cloud providers do not reflect full demand, excluding significant customers like Meta. Meta plans to use these chips for its Llama models and anticipates spending up to $65 billion on AI infrastructure, largely on Nvidia chips. Huang addressed investor concerns about AI chip demand, emphasizing that DeepSeek’s focus on reasoning would boost the need for Nvidia chips. Huang noted minimal short-term tariff impact but mentioned potential US production shifts.

Space Force Releases Data, AI Action Plan

MeriTalk (3/19, Perez) reports the Space Force has unveiled an “action plan to transform the service branch into a more data-driven and AI-enabled force and improve its ability to maintain space superiority.” In a statement, Col. Nathen L. Iven, acting deputy chief of space operations for cyber and data, said, “As the world’s first digital service, the United States Space Force recognizes the critical role that data and artificial intelligence will play in maintaining space superiority.” Similar to its FY2024 plan, “the Space Force’s FY2025 strategy places a strong emphasis on advancing data and AI governance, cultivating a workforce culture that understands the critical role of data and AI, and enhancing partnerships across government, academia, industry, and international allies.” The Space Force also plans “to deepen its understanding of AI and space technologies by collaborating with experts through the Commercial Space Office and Space Domain Awareness (SDA) Tap Lab. It also aims to establish standardized benchmarks to assess the performance of Large Language Models in space operations, focusing on mission-critical tasks and domain-specific challenges.”

College Board Introduces AP Courses In Cybersecurity And Business

Education Week (3/19, Klein) reports that the College Board is collaborating with industry leaders like the US Chamber of Commerce and IBM to develop new Advanced Placement (AP) courses aimed at providing high school students with job-relevant skills. The initiative, called AP Career Kickstart, introduces courses in cybersecurity and business principles/personal finance. David Coleman, CEO of the College Board, mentioned that “high schools had a crisis of relevance far before AI,” emphasizing the need for “the next generation of coursework.” The new courses are designed to offer students practical skills and may help them earn college credit or appeal to employers. The cybersecurity course is being piloted in 200 schools and aims to expand to 800 next year. Neil Bradley from the Chamber of Commerce stated, “This course is going to give people a leg up both when they’re applying for jobs, and then once they get the job.”

        Speaking to Education Week (3/19, Klein) last month, Coleman said, “AI-powered tools can already pass nearly every AP test,” highlighting the need for courses that prepare students for AI-dominated workplaces. The first courses will launch in the 2026-27 school year. Coleman emphasized the importance of equipping students with skills such as creativity and critical thinking through courses like AP Seminar, which integrates collaboration into its grading. The College Board is also considering teacher training in AI and cybersecurity.

University Of Idaho Awarded $4.5 Million AI Grant For Research Administration

According to a release (3/19), the University of Idaho (U of I) has received a $4.5 million grant from the National Science Foundation’s GRANTED program to enhance research management using generative AI. The project, led by Principal Investigator Sarah Martonick, director of the Office of Sponsored Programs, aims to reduce administrative burdens by automating data transfer processes. Chris Nomura, U of I’s vice president of research said, “The new AI tools should allow research administrators...to reduce their time spent on repetitive, monotonous tasks.” The initiative is a collaboration between U of I’s Office of Sponsored Programs and the Institute of Interdisciplinary Data Sciences (IIDS). The project also seeks to establish a “community of practice” to share AI tools with other institutions, starting with Southern Utah University, and aims to include more universities by the third year.

Nvidia CEO Emphasizes Need For Fastest Chips At GTC Conference

CNBC (3/19, Leswing) reports that Nvidia CEO Jensen Huang emphasized the importance of acquiring the company’s fastest chips during his unscripted two-hour keynote at the GTC conference. Huang addressed cost concerns by asserting that faster chips, which can be digitally sliced to serve AI to millions simultaneously, are the best cost-reduction system. He explained the economics of these chips, highlighting their potential to increase data center revenue by 50 times compared to previous systems. Nvidia’s Blackwell Ultra systems, set to launch this year, have already seen 3.6 million purchases by major cloud providers. Huang also announced a roadmap for future AI chips, Rubin Next and Feynman, planned for 2027 and 2028. He dismissed the competition from custom chips, noting their lack of flexibility for AI algorithms. Huang emphasized the importance of using Nvidia’s latest systems for upcoming AI infrastructure projects.

Synopsys Unveils AgentEngineer Technology For Chip Design

Reuters (3/19) reports that Synopsys introduced AgentEngineer, a new technology aimed at streamlining the design of computer chips by utilizing AI “agents” to assist human engineers. Synopsys CEO Sassine Ghazi highlighted the increasing complexity and pace of designing AI server systems, which involve thousands of chips, as a challenge for engineering teams. At the company’s annual user conference in Santa Clara, California, Ghazi noted the pressure on engineers due to the complexity and speed required for product delivery. AgentEngineer will initially focus on tasks like testing circuit designs. Shankar Krishnamoorthy, head of technology and development at Synopsys, emphasized AI’s role in enhancing R&D capacity without expanding team sizes. Over time, Synopsys plans for these agents to help coordinate complex systems with multiple chips to ensure timely product delivery.

Open Power AI Consortium Aims To Improve Electric Power With AI

Fast Company (3/20, Sullivan) reports that Nvidia, Microsoft, AWS, Oracle, and more than two dozen regional power companies in the US have announced plans to collaborate on building AI models and apps aimed at improving the generation and distribution of electric power. The initiative, called the Open Power AI Consortium, is organized by the Electric Power Research Institute (EPRI). EPRI President and CEO Arshad Mansoor said in a statement that the consortium will create an AI model, datasets, and apps to “enhance grid reliability, optimize asset performance, and enable more efficient energy management.” Axios climate reporter Alex Freedman noted that the power demands of the so-called AI boom have become a top priority for energy company CEOs in the US.

dtau...@gmail.com

unread,
Mar 29, 2025, 8:55:57 AMMar 29
to ai-b...@googlegroups.com

Gen AI Browser Assistant Extensions Beam Data to the Cloud

Computer scientists led by Yash Vekaria at the University of California, Davis, found that generative AI browser extensions generally harvest users' sensitive data and share it with their own servers and third-party trackers. In some cases, this violates the browser extensions' privacy commitments and U.S. regulations governing health and student data. The study of 10 generative AI Chrome extensions found that some collect sensitive information from Web forms or full document object models of pages visited by users.
[
» Read full article ]

The Register (U.K.); Thomas Claburn (March 25, 2025)

 

Encryption Breakthrough Lays Groundwork for Privacy-Preserving AI Models

A framework developed by researchers at New York University brings fully homomorphic encryption (FHE) to deep learning, allowing AI models to operate directly on encrypted data without needing to decrypt it first. Using the Orion framework, the researchers demonstrated the first-ever high-resolution FHE object detection using YOLO-v1, a deep learning model with 139 million parameters.
[
» Read full article ]

NYU Tandon School of Engineering (March 25, 2025)

 

Can We Make AI Less Power-Hungry? These Researchers Are Working on It

Researchers working with the ML Energy Initiative are trying to reduce AI power consumption without impacting performance. To alter the internal workings of AI models, the researchers leveraged techniques to reduce a model's parameters and optimization to reduce the amount of memory needed by the remaining parameters. To optimize how datacenters run AI models, they developed a software tool that can slow certain GPUs in a cluster to use less energy, while ensuring the GPUs finish processing workloads at the same time.
[ » Read full article ]

Ars Technica; Jacek Krywko (March 24, 2025)

 

AI Breakthrough Makes DNA Data Retrieval Faster, More Accurate

An AI tool developed by researchers at Technion – Israel Institute of Technology is 3,200 faster times and up to 40% more accurate in retrieving digital information stored in DNA compared to the best current methods. With the new DNAformer approach, 100MB of data can be processed in just 10 minutes, versus several days with current techniques. While the new tool is still too slow for the commercial market, the researchers believe they are moving in the right direction.
[ » Read full article ]

Tom's Hardware; Anton Shilov (March 23, 2025)

 

AlexNet Source Code Is Open Sourced

The Computer History Museum (CHM), in partnership with Google, has released the source code to AlexNet, an artificial neural network created to recognize the contents of photographic images. Developed in 2012 by then University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever and their faculty advisor, ACM A.M. Turing Award laureate Geoffrey Hinton, the source code is available as open source on CHM’s GitHub page.
[ » Read full article ]

IEEE Spectrum; Hansen Hsu (March 21, 2025)

 

AI-Driven Weather Prediction Breakthrough Reported

An AI model can replace the numerical solver step in the weather prediction process to generate faster and more accurate predictions than today's supercomputers, according to researchers at the University of Cambridge in the U.K. Aardvark Weather, trained on raw data from weather stations, satellites, weather balloons, ships, and planes, uses only 10% of the input data required by conventional systems.
[ » Read full article ]

The Guardian (U.K.); Rachel Hall; Ian Sample (March 20, 2025)

 

As AI Nurses Reshape Hospital Care, Human Nurses Push Back

Hospitals increasingly are using AI to perform tasks previously handled by nurses. The hospitals say AI helps nurses work more efficiently while addressing burnout and understaffing, but nurses argue the technology is overriding their expertise and degrading care quality. National Nurses United, the largest nursing union in the U.S., is pushing for greater input into how AI can be used, and protection from discipline if nurses decide to disregard automated advice.
[ » Read full article ]

Associated Press; Matthew Perrone (March 16, 2025)

 

Tech Chiefs, Foreign Leaders Urge U.S. to Rethink AI Chip Curbs

With less than two months to comply with the U.S. framework for controlling AI development worldwide, tech companies are expressing concerns about business in foreign markets, and U.S. allies are seeking exemptions. The "AI diffusion rule" will restrict the number of AI processors that can be exported to most nations and require datacenters to comply with U.S. security standards. Some officials have floated eliminating the three tiers of chip access and associated compute caps, while maintaining export license requirements for most countries.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Mackenzie Hawkins; Jenny Leonard; Brody Ford (March 25, 2025); et al.

 

MEPs Warn EU Against Weakening Landmark AI Rules

Members of the European Parliament (MEPs) instrumental in drafting EU's Artificial Intelligence Act have expressed concerns as EU officials consider whether to ease requirements for AI companies. Officials are weighing whether to make certain provisions of the Act voluntary. The code, being drafted by a panel of experts including ACM A.M. Turing Award laureate Yoshua Bengio, is expected to be finalized in May.


[
» Read full article *May Require Paid Registration ]

Computing; Vikki Davies (March 26, 2025)

 

U.S. Adds Export Restrictions to More Chinese Tech Firms over Security Concerns

The Trump administration added 80 companies and organizations on March 25 to a list of those prohibited from purchasing U.S. technology and other exports due to national security concerns. Among the 80 are 54 Chinese companies and organizations, including Nettrix Information Industry, which manufactures servers used to produce AI, and the Beijing Academy of Artificial Intelligence, which reportedly has attempted to acquire AI models and chips to bolster China's military modernization.


[
» Read full article *May Require Paid Registration ]

The New York Times; Ana Swanson (March 25, 2025)

 

Anthropic Scores Win in AI Copyright Dispute with Record Labels

A U.S. court on Tuesday denied an injunction sought by Universal Music Group and other record labels to prevent AI startup Anthropic from using their copyrighted lyrics to train its Claude chatbot. The music companies said Anthropic infringed copyrighted lyrics from at least 500 songs and sought to prohibit the company from using their works to train its AI models.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Mauro Orru (March 27, 2025)

 

Ant Group Combines Chinese And US Semiconductors To Cut AI Development Costs

CNBC (3/24, Cheng) reports Alibaba-affiliate Ant Group is utilizing both Chinese and US semiconductors to enhance efficiency and reduce costs in developing AI models. This approach helps decrease reliance on a single supplier such as Nvidia and cuts computing costs by 20%. According to Bloomberg, Ant uses chips from Alibaba, Huawei, Advanced Micro Devices, and others for training its AI models, moving away from Nvidia. Recently, Ant announced upgrades to its AI healthcare solutions, now employed by major hospitals in China.

Power Sector Embraces AI To Enhance Electric Grid

Newsweek (3/21, Young) reported that electric utility and power companies are collaborating with technology firms like Microsoft, Oracle, and NVIDIA to develop AI models to address energy demand and grid resilience. This consortium, led by the Electric Power Research Institute (EPRI), aims to create safer, more affordable, and reliable energy systems. NVIDIA’s Marc Spieler highlighted the potential for AI to accelerate industry advancements by utilizing extensive data from power companies. AI could facilitate renewable energy integration, improve demand forecasting, and expedite permitting processes for new energy sources and transmission lines.

Microsoft Launches AI Training Data Research Project

TechCrunch (3/21, Wiggers) reported that Microsoft is initiating a research project to assess the influence of specific data on generative AI model outputs. A job listing for a research intern outlines the project’s aim to efficiently estimate the impact of data, such as photos and books, on AI-generated content. The initiative involves Jaron Lanier from Microsoft Research, who advocates for “data dignity,” connecting digital outputs to their human creators. This project emerges amid legal challenges over AI’s use of copyrighted data, with Microsoft facing lawsuits from copyright holders, including The New York Times.

More Schools Are Integrating AI And Math

Education Week (3/24, Klein) reports that Clayton Dagler’s machine-learning class at Elk Grove High School in Sacramento, California, explores complex problems by combining artificial intelligence (AI) technologies with math concepts. The course, developed after a conversation with an Apple executive, emphasizes the importance of understanding Big Data. Dagler and “a handful of other teachers around the country are on the leading edge of what may become a new trend in math and computer sciences classes.” The class connects math concepts like probability and statistics with AI, aiming to make math relevant and engaging. Similarly, Eric Greenwald, a researcher at the University of California, Berkeley, advocates for teaching AI-related math concepts earlier. Schools are encouraged to prioritize AI literacy despite challenges in teacher availability.

        Education Week (3/24, Langreo) reports that some math teachers, like Matthew Karabinos from Williamsburg Elementary School in Pennsylvania, are integrating generative AI tools into their teaching practices. Initially hesitant, Karabinos embraced AI by spring 2023, using ChatGPT to create quizzes and later experimenting with lesson planning. He customized a GPT to align with Peter Liljedahl’s Building Thinking Classrooms framework, finding the technology “absolutely vital” for developing “higher-order thinking tasks.” According to a February 2025 RAND report, 21 percent of math teachers use AI for instructional planning, though many remain cautious due to limited professional development. Karabinos emphasizes the benefits, saying, “Take that time back [with AI]. Use it for something wise, like building relationships with your students.”

AI Tutor Significantly Boosts Test Scores At Texas Private School

Fox News (3/22, Lanum) reported that Alpha School in Austin, Texas, has seen a significant rise in student test scores after implementing an AI “tutor.” Students spend two hours daily with the AI assistant and the rest of the day on skills like public speaking and financial literacy. Co-founder Mackenzie Price said, “We use an AI tutor and adaptive apps to provide a completely personalized learning experience.” The school’s classes rank in the top two percent nationally. Elle Kristine, a junior at Alpha School, noted the benefits over traditional schools, highlighting reduced stress and more time for “passion projects.” Kristine, for example, is working on a safe AI dating coach for teenagers. The school, which has a few hundred students, is expanding, with Price saying, “families want this personalized education experience.” The AI model allows teachers to focus on motivational and emotional support, which Price describes as “the magic in our model.”

AI Infrastructure Boom Said To Mirror Dot-Com Era

Forbes (3/24, Runkevicius) reports that the current AI infrastructure boom mirrors the late ‘90s internet expansion, driven by major tech companies like Amazon, Google, Meta, and Microsoft. These hyperscalers plan to spend $320 billion on capital expenditures this year, focusing on AI data centers. Nvidia dominates the AI chip market, with its market cap rising significantly since the release of ChatGPT. Despite the spending surge, analysts warn of potential overspending, drawing parallels to the dot-com bubble. Concerns include the actual demand for AI computing power and the economic return on such massive investments.

Tech Companies Push Against AI Regulations

The New York Times (3/24, Kang) reports that tech companies, including Meta, Google, OpenAI, and Microsoft, have urged the Trump administration to prevent state-level AI laws and to allow the use of copyrighted materials for training AI models. They are advocating for federal data access and energy resources for computing needs, alongside tax incentives. This shift follows President Trump’s executive orders to promote AI, reversing previous calls for regulation. “The only thing that counts is establishing U.S. leadership in AI,” said Laura Caroli of the Wadhwani AI Center. However, civil rights groups and artists are calling for regulation and transparency, arguing for audits to prevent discrimination and unauthorized use of intellectual property.

Nvidia Showcases AI Robots At Developer Conference

The New York Times (3/25, Mickle) reports that Nvidia, a leading AI chip maker, held its annual developer conference, Nvidia GTC, in San Jose, California. The event featured a showcase of robots, large language models, and autonomous cars, attracting over 25,000 attendees. Industry leaders gathered to explore the latest AI technologies and hear from Nvidia’s CEO, Jensen Huang, about the future of AI. The conference, described as the “Super Bowl of AI,” provided a glimpse into a future driven by AI advancements, highlighting Nvidia’s role in the AI industry.

HCLTech Launches AI-Powered Manufacturing Solution

CRN India (IND) (3/26, Team) reports that HCLTech has introduced HCLTech Insight, an AI-powered solution designed to improve manufacturing operations. Utilizing Google Cloud’s Cortex Framework and other technologies, it offers real-time intelligence for industries like automotive and aerospace. The solution enhances production quality and cost efficiency through AI-driven defect detection. Vijay Guntur of HCLTech emphasized the solution’s role in bridging AI adoption gaps in manufacturing. Praveen Rao from Google Cloud highlighted the strategic collaboration. The launch aims to advance AI-driven industrial transformation, enhancing operational effectiveness for manufacturers globally.

Schools Upgrade Career Education Programs For AI Impact

K-12 Dive (3/26, Barack) reports that school systems nationwide are updating vocational programs to address the influence of emerging technologies, such as artificial intelligence, on career education fields. Alisha Hyslop from the Association for Career and Technical Education emphasizes the need for new approaches in courses like automotive technology, as students must learn to work with electric, hybrid, and autonomous vehicles. “There are so many computers in the car and so many electronics that students have to be able to diagnose, repair and work on,” Hyslop said. AI’s integration into career pathways like construction is notable, with robots capable of tasks like bricklaying. However, people are still needed to program, build, and repair these robots. A University of Tennessee at Knoxville report highlights AI’s impact on sectors from transportation to manufacturing, stressing the need for updated skills as “the workplace is changing.”

Google Launches Gemini 2.5 Pro Model

TechRepublic (3/26, Jackson) reports that Google has introduced Gemini 2.5 Pro, the first model in its Gemini 2.5 series, excelling in coding, mathematics, and science benchmarks. The model surpasses competitors from OpenAI, Anthropic, and DeepSeek, scoring 86.7% on AIME 2025 and 84.0% on the GPQA diamond benchmark. It leads with 18.8% on Humanity’s Last Exam. Google plans a shift towards integrating reasoning capabilities across all future models, moving away from standalone branding. Available through the Gemini Advanced app, the model will soon be accessible on Vertex AI, with pricing details forthcoming.

Energy Consortium Launches AI Model For Power Sector

Renewable Energy World (3/27, Wolfe) reports that a new consortium, including Southern California Edison and Pacific Gas & Electric, has been formed to develop an open AI model for the power sector. Led by EPRI and Microsoft, the Open Power AI Consortium aims to enhance grid reliability and improve energy management through AI solutions. Southern California Edison, along with other energy companies and tech giants, will participate in creating a sandbox environment to validate AI applications. EPRI President and CEO Arshad Mansoor stated, “Working with Microsoft, EPRI will lead this transformation, driving innovation toward a more resilient and affordable energy future.”

dtau...@gmail.com

unread,
Apr 5, 2025, 11:37:57 AMApr 5
to ai-b...@googlegroups.com

Cerf, Other Tech Experts Warn of Over-reliance on AI

In a report released by researchers at Elon University on Wednesday, ACM A.M. Turing laureate Vint Cerf and 300 other tech experts warned that over-reliance on AI could have negative implications for skills like deep thinking and moral judgment. The report’s contributors expect benefits from the technology to be seen in just three areas: curiosity and capacity to learn; decision-making; and problem-solving and innovative thinking and creativity.
[ » Read full article ]

CNN; Clare Duffy (April 2, 2025)

 

Papa Johns Wants AI to Transform Pizza Ordering

Papa John's International on Thursday announced an expanded partnership with Google Cloud, aiming to bring AI into the pizza ordering domain. The company's plan is to use AI to personalize phone push notifications suggesting orders, marketing emails, and loyalty program offerings based on analysis of past customer behavior and other context. The pizza chain also plans to add a new online chatbot and the ability to place orders through virtual assistants.
[ » Read full article ]

Reuters; Waylon Cunningham (April 3, 2025)

 

AI-Enhanced 3D Printing Could Transform Food Production

Hong Kong University of Science and Technology researchers developed an AI-assisted 3D food printing system that combines infrared heating and extrusion-based printing. Generative algorithms and Python scripts are used to design complex food patterns, and graphene heaters precisely cook of starch-based foods to ensure shape and quality. Said the university’s Mitch LI Guijun, “We’re excited about the potential of this technology to deliver customized, safe, and delicious food with a process that is both efficient and accessible.”
[ » Read full article ]

The Hong Kong University of Science and Technology (March 31, 2025)

 

BCI Helps Stroke Survivor Speak Again

University of California (UC), Berkeley researchers developed a brain-computer interface (BCI) equipped with an AI model that translates neural activity into sound in real time. The implant was placed on the speech center of the brain in a 47-year-old woman with quadriplegia who was unable to speak for 18 years due to a stroke. Electrodes recorded her brain activity as she thought about speech, with spoken sentences produced using a synthesizer created with her pre-injury voice.
[ » Read full article ]

Associated Press; Laura Ungar (March 31, 2025)

 

Powering CubeSats Using Deep Learning

Researchers led by Abdulazez Abagero at the Ethiopian Space Science and Geospatial Institute developed a Deep Feedforward Neural Network (DFFNN) to optimize CubeSat power consumption. Connected to a standard proportional-integral controller, the DFFNN was found to outperform all other Maximum Power Point Tracking control algorithms, which are intended to get the most power out of the system in any given environment. The DFFNN, which has a 97% efficiency rate based on simulations, uses linear tangents and Neville Interpretation to simplify CubeSat trajectory calculations.
[ » Read full article ]

Universe Today; Andy Tomaswick (March 29, 2025)

 

Gemini Hackers Can Deliver Potent Attacks With Help from... Gemini

Researchers at the universities of Wisconsin and California, San Diego created computer-generated prompt injections against Gemini that have much higher success rates than manually crafted ones. The new method abuses fine-tuning, a feature offered by some closed-weights models for training them to work on large amounts of private or specialized data, which Google makes available free of charge. The researchers' technique provides an algorithm for discrete optimization of working prompt injections.
[ » Read full article ]

Ars Technica; Dan Goodin (March 28, 2025)

 

Brain-like Computer Steers Robot with Minimal Power

An autonomous controller devised by researchers at the University of Michigan used analog computing to manipulate a rolling robot with minimum power. Operating at 12.5 microwatts, the robot was able to pursue a target zig-zagging down a hallway with the same speed and accuracy as with a conventional digital controller. University of Michigan’s Xiaogan Liang called the controller “a groundbreaking nanoelectronic device designed to revolutionize hardware platforms that can efficiently compute with neural network architectures.”
[ » Read full article ]

Michigan Engineering; Kate McAlpine (March 27, 2025)

 

Hundreds of AI Datacenters Go Unused in China

The datacenter construction boom driven by the Chinese government and private investors has slowed, with local outlets reporting that up to 80% of newly built computing resources remain unused. This comes as the rush to develop large language models is losing steam, smaller tech firms are abandoning the pre-training of their AI models amid DeepSeek's successes, and the industry is shifting toward improved GPUs.
[ » Read full article ]

MIT Technology Review; Caiwei Chen (March 26, 2025)

 

Has the Decline of Knowledge Work Begun?

Although economists contend the labor market remains strong by historical standards, white-collar workers have seen slower wage growth and larger gains in unemployment than other groups in recent years. Some of the job losses can be attributed to a rebalancing after aggressive pandemic-related hiring, but there are concerns that advances in AI signal a permanent decline in knowledge work. In the tech industry, executives and investors are relying on AI as they reduce headcount.

[ » Read full article *May Require Paid Registration ]

The New York Times; Noam Scheiber (March 31, 2025)

 

DeepSeek’s AI Model Shakes Up Tech Industry

Fortune (3/30, Gordon) reported that Hangzhou-based DeepSeek, a Chinese startup, has disrupted the AI sector by releasing R1, a large language model that rivals OpenAI’s o1. This development has challenged the notion that China merely imitates US innovations. DeepSeek, an offshoot of the hedge fund High-Flyer, developed R1 with a budget of just $6 million, significantly less than its US counterparts. The news led to a $1 trillion loss in tech stock value, affecting companies like Nvidia and Microsoft. OpenAI CEO Sam Altman and others are now considering open-source models. DeepSeek’s success has sparked investor interest, boosting the Hang Seng Tech Index by 35%. Major Chinese firms like BYD and Midea are integrating R1 into their products. Paul Triolo suggests DeepSeek could revitalize China’s economy. Meanwhile, companies like Alibaba and ByteDance are advancing China’s AI capabilities, challenging Western dominance.

Nvidia Expands AI Presence In Automotive Industry

Globes (ISR) (4/1, Gilead) reports that Nvidia is increasingly positioning itself as an AI infrastructure company, with significant inroads into the automotive sector. At the annual Nvidia event, CEO Jensen Huang emphasized the company’s broader focus beyond graphics processors. Nvidia recently partnered with General Motors to provide AI servers and other technologies, indicating GM’s shift away from Qualcomm. Nvidia VP Ali Kani highlighted the company’s competitive edge over Mobileye, stating, “We make a product that is much more complete than Mobileye.” Kani noted that Nvidia offers customizable solutions, contrasting with Mobileye’s “black box” approach. Nvidia’s new Halos safety system for autonomous vehicles could impact Mobileye, as it integrates AI chips with software and cloud services.

Elon Musk’s xAI Constructs Supercomputer In Memphis

Behind a paywall, Insider (4/1, Thomas, Kay) reports that Elon Musk’s xAI is utilizing gas-powered generators from Caterpillar subsidiary Solar Turbines to supplement its power needs at a new data center in Memphis, Tennessee. xAI has been approved for 150 megawatts of grid power but requires additional on-site generation to meet its projected demand for 1 million GPUs. This initiative is part of a broader effort by xAI to establish Memphis as a global hub for AI technology.

China Embraces Open Source AI For Technological Growth

Reuters (4/2) reports that China’s AI sector, including companies like Alibaba, Tencent, Baidu, and DeepSeek, is increasingly adopting open-source models. This approach allows anyone to use, modify, and share AI software freely. DeepSeek CEO Liang Wenfeng represents this sector, having been appointed by Premier Li Qiang. The open-source strategy helps China mitigate US tech restrictions, as it lacks access to powerful Nvidia chips. Instead, Chinese companies use domestically-made chips for AI development. Alibaba’s strategy involves offering free models and selling associated services like cloud computing. However, open source can limit revenue generation, posing challenges for publicly traded companies. Additionally, China’s regulatory environment could impact open-source initiatives, especially as they compete with Western AI advancements. Despite these challenges, open-source AI is seen as a way for China to boost its global influence and technological self-sufficiency.

Amazon Competes With Nvidia With Trainium 2 Chips

TIME (4/2, Perrigo) reports Amazon is advancing its AI chip ambitions with Trainium 2, designed by AWS subsidiary Annapurna Labs, to compete with Nvidia and other tech giants in the cloud computing market. The chips will power Project Rainier, one of the world’s largest AI datacenter clusters, built for Anthropic, an AI company Amazon has invested $8 billion in. Annapurna Labs Director of Engineering Rami Sinno said Trainium 2 was developed with Anthropic’s feedback to optimize performance, adding, “At the scale that they’re running, each point of a percent improvement in performance is of huge value.” Project Rainier aims to surpass competitors like Microsoft’s “Stargate,” with Annapurna’s Director of Product Gadi Hutt dismissing rivals: “Stargate is easy to announce. Let’s see it implemented first.” Amazon’s strategy focuses on selling access to its chips via AWS, creating a “flywheel” effect to attract more AI customers.

AI Data Centers Embrace Renewable Energy Solutions

RCR Wireless (4/2, Tomás) reports that AI data centers are increasingly adopting renewable energy to meet rising demand for computing power. AI data center operators are showing increased “reliance on solar and wind energy” to reduce carbon emissions and ensure a stable power supply. Many are installing large-scale solar farms and purchasing wind energy, RCR says, adding that some “major artificial intelligence data centers are already using 100% renewable energy.” To address the inconsistency of solar and wind, data centers are investing in energy storage solutions including lithium-ion and solid-state batteries. AI-powered energy optimization is also playing a crucial role in improving efficiency. These systems predict demand, adjust power usage, and integrate renewable sources with the grid.

Department Of Energy Lists Federal Sites For AI Data Center Construction

The AP (4/3, O'Brien) reports the Department of Energy on Thursday “said it has identified 16 federal sites, including storied nuclear research laboratories such as Los Alamos, where tech companies could build data centers in a push to accelerate commercial development of artificial intelligence technology.” The list follows a Biden-era executive order “that sought to remove hurdles for AI data center expansion,” which President Trump has “made clear...that he had no interest in rescinding.” The AP adds that under both presidents, the US “has been speeding up efforts to license and build a new generation of nuclear reactors to supply carbon-free electricity” for the expected surge in datacenter construction.

        Bloomberg (4/3, Natter, Subscription Publication) reports federal land is being considered for the projects “in part because the government can fast-track permitting for nuclear reactors and other power plants to run the facilities.” Power plant construction “is among the biggest challenges facing” AI developers, with utilities estimating that “US power demand will grow 55% over the next 20 years.” The agency is also “seeking input from data center developers and others to forge public-private partnerships to advance AI development.”

Anthropic Launches AI Chatbot Plan For Higher Education

TechCrunch (4/2, Zeff) reported that Anthropic has introduced “Claude for Education,” a new AI service for higher education, competing with OpenAI’s ChatGPT Edu. This service offers students, faculty, and staff access to the Claude chatbot, featuring “Learning Mode,” which promotes critical thinking by asking questions and providing research templates. Anthropic aims to boost its revenue, currently at $115 million monthly, by expanding in education. The service includes standard chat interfaces and robust security. Partnerships with Instructure and Internet2 will facilitate integration into university systems. Full campus agreements have been made with Northeastern University, the London School of Economics and Political Science, and Champlain College. The impact of AI in education remains uncertain, with mixed research findings.

AI Literacy Grows In Higher Education

The Chronicle of Higher Education (4/3, McMurtrie) reports that Jacqueline Fajardo, an assistant professor at the University of Delaware, discovered a student using Google NotebookLM to assist with chemistry studies. This artificial intelligence (AI) tool summarized lectures and created study aids, which led to the student’s passive learning approach and subsequent poor performance. Fajardo expressed concern, saying, “I felt responsible in some way that I didn’t know about this tool.” The incident highlighted the need for AI literacy on campus, prompting Delaware to initiate an AI working group. The university is among institutions recognizing AI’s impact on education, with 14 percent of campuses adopting AI literacy as a learning outcome. The University of Virginia and Arizona State University are also investing in AI training and integration.

Novel AI Tool More Accurate At Predicting Autoimmune Disease Risk Than Current Models, Study Suggests

Healio (4/3, Volansky) reports that a study suggests that “a risk prediction score that uses machine learning and a patient’s genetic information may identify autoimmune conditions up to 1,000% more accurately than current models.” Researchers developed the Genetic Progression Score (GPS) tool “using data from electronic health records and large genetic studies of individuals with rheumatoid arthritis and systemic lupus erythematosus.” When they compared GPS with currently existing risk prediction tools, researchers “found that the novel score was between 25% and 1,000% more accurate in determining which patients had symptoms that would progress to more advanced disease.” They explained, “AI and genetics are particularly useful here since we may be able to find patterns of genetics or other molecular markers that indicate the risk progression and apply targeted treatment regime.”

IBM And Tokyo Electron Extend Semiconductor Research Partnership

The Semiconductor Digest (4/2, Davis) reported IBM and Tokyo Electron (TEL) have extended their partnership for another five years to advance semiconductor technologies, focusing on next-generation nodes and architectures for generative AI. This builds on their two-decade collaboration, which previously led to breakthroughs like a new laser debonding process for 3D chip stacking. IBM’s semiconductor process integration expertise and TEL’s equipment will target smaller nodes and chiplet architectures to meet future generative AI demands. IBM Semiconductors GM and VP of Hybrid Cloud Mukesh Khare said, “We are thrilled to be continuing our work together at this critical time to accelerate chip innovations that can fuel the era of generative AI.” Both companies are part of the Albany NanoTech Complex, a key site for semiconductor innovation, which was designated as the National Semiconductor Technology Center last year.

Google Considers Renting Nvidia Servers From CoreWeave

Insider (4/3) reports that Google is in advanced discussions to rent Nvidia servers from CoreWeave, as first reported by The Information. These servers are equipped with Nvidia’s latest AI chips from the Blackwell lineup. Google may also rent space in CoreWeave’s data centers for its own TPU chips. The move underscores the global shortage of AI machines due to high demand. Google has already ordered more than $10 billion worth of Blackwell GPUs. CoreWeave, backed by Nvidia, recently went public on Nasdaq.

Trump Tariffs Impact Tech Sector’s AI Plans

Reuters (4/3) reports that President Donald Trump’s reciprocal tariffs could hinder AI infrastructure efforts in the US, as analysts noted on Thursday. Trump imposed tariffs on tech equipment suppliers, including 34 percent on China and 25 percent on South Korea. Analysts predict a shift in tech giants’ capital expenditures due to increased costs. D.A. Davidson analyst Gil Luria mentioned potential delays in data-center expansion and AI adoption. The tariffs pose a threat to cloud service providers like Microsoft, Alphabet, and Amazon, with concerns over reduced spending and investor skepticism.

AI’s Power Demands Create New Energy Alliances

The DC Journal (4/3, Towhey) reports that the increasing power demands of AI-driven data centers are prompting a potential alliance between fossil fuel and green energy sectors in the US. Energy analysts are concerned that data centers, such as those by Facebook, Amazon, and Google, may outpace the country’s electricity generation, though US grid operators are addressing the challenges – such as Indiana Michigan Power, which has finalized a large load tariff settlement with major tech companies to ensure grid reliability. PetroNerds CEO Trisha Curtis emphasizes the need for more power generation capacity, highlighting coal and natural gas as reliable sources.

dtau...@gmail.com

unread,
Apr 12, 2025, 12:53:12 PMApr 12
to ai-b...@googlegroups.com

Europe Unveils Plan to Become 'AI Continent'

The European Commission unveiled on April 9 the "AI Continent Action Plan" to enable the bloc to better compete with the U.S. and China in AI. The Commission said the plan is intended to "transform Europe's strong traditional industries and its exceptional talent pool into powerful engines of AI innovation and acceleration." The plan calls for the development of a network of AI factories, "gigafactories," and specialized labs to boost startups' access to high-quality training data.
[
» Read full article ]

CNBC; Ryan Browne (April 9, 2025)

 

Researchers Investigate AI Threats in Software Development

Researchers led by University of Texas at San Antonio computer science doctoral student Joe Spracklen analyzed the security risks associated with package hallucinations, in which large language models (LLMs) generate code that links to a third-party software library that does not exist. This would enable a hacker to create a new package with the same name as the hallucinated package and inject malicious code. The researchers found that open-source LLMs are four times more likely than GPT-series models to produce package hallucinations, and JavaScript is more susceptible to hallucinations than Python.
[ » Read full article ]

UTSA Today; Ari Castañeda (April 7, 2025)

 

Taiwan Says China Uses Generative AI to Ramp Up Disinformation

Taiwan's National Security Bureau said in a report to Parliament that China is working to divide the public by using generative AI to spread disinformation. The report indicated that 500,000 "controversial messages" have been distributed so far this year, mainly on social media platforms. The bureau said Beijing is engaging in "cognitive warfare" and is seeking to "create division among our society."
[ » Read full article ]

Reuters; Yimou Lee (April 8, 2025)

 

AI Could Impact 40% of Jobs Worldwide in the Next Decade, U.N. Agency Warns

According to a report from the U.N. Department of Trade and Development, 40% of jobs across the globe could be impacted by AI in the next decade. The report also noted that nearly half of global research and development spending in AI can be attributed to 100 companies, most based in the U.S. and China.
[ » Read full article ]

Euronews; Anna Desmarais (April 7, 2025)

 

U.S. AI Lead over China Rapidly Shrinking

Stanford University's latest Artificial Intelligence Index found China is closing in on the U.S. in state-of-the-art AI. The report said that 40 AI models of note were produced by U.S.-based institutions last year, while another 15 were produced by China, and three by Europe. The report said Chinese models are closing the gap in quality, achieving near parity with U.S. models on two key benchmarks, while China has pulled ahead of the U.S. in AI publications and patents.
[ » Read full article ]

Axios; Ina Fried (April 7, 2025)

 

White House Orders Agencies to Develop AI Strategies

An April 7 memo from the White House Office of Management and Budget gave government agencies six months to "develop an AI strategy for identifying and removing barriers to their responsible use of AI and for achieving enterprise-wide improvements in the maturity of their applications." The memo also instructed federal agencies to name chief AI officers and establish generative AI policies. A separate White House directive called for "efficient acquisition of [AI] in government" and for agencies to focus on interoperability and "maximize the use of American-made AI."
[ » Read full article ]

Reuters; David Shepardson (April 7, 2025)

 

Meta Draws Congress’ Scrutiny over China Claims

U.S. lawmakers are investigating a claim by a former Facebook employee that Meta CEO Mark Zuckerberg lied to Congress about the company’s efforts to launch its social network in China. The probe is focused on alleged work to censor content and provide AI tools, including surveillance software, to the Chinese Communist Party. The censorship efforts “allegedly extended to dissidents outside of China, including in the U.S.,” the lawmakers wrote, citing internal documents.
[ » Read full article ]

Bloomberg; Riley Griffin (April 7, 2025)

 

AI Bots Strain Wikimedia as Bandwidth Surges 50%

The Wikimedia Foundation reported a 50% jump in bandwidth used for downloading multimedia content since January 2024, which it attributed to automated bots scraping data to train large language models. These bots account for only 35% of total pageviews, but 65% of the costliest requests to its core infrastructure. The resulting traffic surges are straining Wikimedia's Site Reliability team, code review tools, and bug trackers.
[ » Read full article ]

Ars Technica; Benj Edwards (April 2, 2025)

 

U.S. Plans to Build AI Datacenters on Federal Land

Sixteen sites have been identified for the development of AI datacenters on land owned by the U.S. Department of Energy (DOE). “The global race for AI dominance is the next Manhattan project,” Secretary of Energy Chris Wright said Thursday. “With today’s action, the DOE is taking important steps to leverage our domestic resources to power the AI revolution."
[ » Read full article ]

The Hill; Ashleigh Fields (April 3, 2025)

 

Americans Worry AI Is Coming for These Jobs

A Pew Research Center survey of U.S. adults identified jobs the public believes will be replaced by AI in the next 20 years. These include cashiers (73%), factory workers (67%), journalists (59%), and software engineers (48%). The findings also confirmed peopleʼs anxiety about the technology is rising. About 51% of those polled said they were concerned about the increased usage of AI, compared to about 40% in 2021 and 2022, with much of the anxiety tied to jobs.


[
» Read full article *May Require Paid Registration ]

The Washington Post; Danielle Abril (April 8, 2025)

 

AI Recreates 'The Wizard of Oz' for the Las Vegas Sphere

Sphere Entertainment turned to Google to recreate "The Wizard of Oz," an 86-year-old film shot with a 35mm camera, to be shown on the Las Vegas Sphere's 160,000-square-foot, curved, immersive screen. To accomplish this, Google Cloud and Google DeepMind researchers developed new AI techniques, "performance generation" and "outpainting," to improve resolution and extend backgrounds to include characters and scenery not present originally. The process also involved the use of Google's Veo 2 and Imagen 3 generative AI models and the addition of sensory elements.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (April 8, 2025)

 

Man Employs AI Avatar in Legal Appeal. Judge Not Amused

Jerome Dewald deployed an AI-trained digital avatar in a video he was allowed to show a panel of New York State judges to argue for a reversal of a lower court’s decision in his dispute with a former employer. When a judge asked whether a man in the video was his attorney, Dewald responded, “I generated that.” The judge said that she didn't "appreciate being misled,” before ordering the video turned off.

[ » Read full article *May Require Paid Registration ]

The New York Times; Shayla Colon (April 4, 2025)

 

Bloomberg Has a Rocky Start with AI Summaries

Bloomberg has corrected at least three dozen AI-generated summaries of articles published this year. The summaries appear above news articles and consist of three bullet points purportedly outlining the articles' main points. Bloomberg said that "currently 99% of AI summaries meet our editorial standards," adding that the AI summaries are “meant to complement our journalism, not replace it.”

[ » Read full article *May Require Paid Registration ]

The New York Times; Katie Robertson (April 3, 2025)

 

Kids Are Talking to 'AI Companions.' Lawmakers Want to Regulate That

At least three bills under consideration in California aim to limit how "AI companion bots" can interact with minors. S.B. 243 would impose restrictions on addictive design features in AI companion bots, implementing protocols for handling discussions about self-harm or suicide, requiring the makers of these bots to undergo regular compliance audits, and allowing users to sue if they suffer harm due to a company's failure to comply. The other bills would ban AI companions for those age 16 and younger, and establish a statewide standards board to assess and regulate AI tools for minors.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Will Oremus; Andrea Jiménez (April 1, 2025)

 

AI Drug Discovery Companies Secure Major Funding

PharmaVOICE (4/4, Parrish) reported that the AI drug discovery sector has seen significant funding activity. Isomorphic Labs secured $600 million in March, led by Thrive Capital, with backing from GV and Alphabet. The company’s platform, based on AlphaFold software, aims to shorten drug discovery time and explores cancer and autoimmune treatments. Insilico Medicine, after positive phase 2 results for rentosertib, received $110 million for AI model refinement. Recursion Pharmaceuticals faced mixed results for its neurovascular drug but continues to expand its pipeline. Other startups, like Latent Labs and Manas AI, also attracted funding. Dr. Krishna Yeshwant from GV stated, “After witnessing the extraordinary pace of innovation at Isomorphic Labs, we believe their pioneering approach will redefine AI-powered drug discovery.”

Trump Tariffs Expected To Increase Cost Of AI-Driven Data Center Expansion

The Wall Street Journal (4/4, Subscription Publication) reported on the potential impact of the Trump Administration’s recently-announced tariffs on the AI industry. OpenAI, Anthropic, and other AI firms are relying on cloud providers to ramp up the data center infrastructure needed to power mass AI adoption. Tariffs are expected to raise costs on the imported components used by those cloud providers. Muddu Sudhakar, founder and CEO of AI firm Aisera, expects these added costs to ultimately be passed on to consumers, making it more expensive to use AI applications.

        Forbes (4/4, Nieva) reported that the Trump Administration “has gone all-in on AI,” working with OpenAI, Softbank, Oracle, and others on “a massive $500 billion investment in AI infrastructure called Project Stargate.” However, “with sweeping tariffs announced earlier this week, the White House may be hamstringing the industry it has been so vocal about propping up.” The tariffs currently “exclude semiconductors, the brains of computers and AI,” but will likely “drive up the costs of building and operating the vast data centers in which they are housed.” Michigan State supply chain management professor Jason Miller told Forbes, “The vast majority of imported goods that are needed for data centers are subject to these tariffs. ... In my mind, there is no doubt they will raise the cost structure for putting together data centers.”

Energy Secretary Discusses Tariffs And Energy Initiatives

CBS News (4/4, Boyd) reported that US Energy Secretary Chris Wright visited the National Renewable Energy Laboratory (NREL) in Colorado to announce a new initiative allowing private development of data centers and AI infrastructure on federal land. Wright emphasized the importance of easing permitting processes to encourage domestic development. He also addressed recent tariffs announced by President Trump, noting the energy sector’s challenges and expressing a focus on long-term benefits. Wright highlighted plans to support geothermal and nuclear energy, potentially at old coal plants in Colorado, and emphasized that energy systems should be driven by economics.

Shopify Ties Hiring Decisions To AI Capabilities

Behind a paywall, the Wall Street Journal (4/7, Subscription Publication) reports Shopify will approve new hires only if managers can show that artificial intelligence is not able to handle the work. CEO Tobi Lütke shared this policy in a memo and said that all employees are expected to use AI tools as part of their daily work. The company will also include AI usage in performance reviews. Shopify has about 8,100 employees and operates fully remotely. A company spokeswoman said teams across the business are already using many different AI tools. Shopify works with small businesses and larger clients and says it needs to keep using AI to maintain growth. In 2023, Shopify removed thousands of meetings from employee calendars in an effort to give staff more time for focused tasks.

Chinese Automakers Race Toward Smart Vehicle Integration

China Daily Online (4/5) reports that Chinese automakers are rapidly integrating advanced technologies, such as autonomous driving and AI-powered cabins, into vehicles, transforming them into “the ultimate smart terminals.” Chen Qingtai, chairman of China EV 100, stated, “The pace of intelligent vehicle development in China has exceeded industry expectations.” BYD Chairman Wang Chuanfu declared 2025 as the “Year of Universal Smart Driving.” Huawei’s Yu Chengdong highlighted readiness for level 3 autonomous driving, saying, “From passive to autonomous intelligence, cars are finally awakening.” GAC’s General Manager Feng Xingya announced a joint project with Huawei to develop luxury smart EVs. Vice-Minister Xin Guobin emphasized the pursuit of breakthroughs in autonomous driving technologies. Zhang Yaqin from Tsinghua University predicted a “ChatGPT moment” for autonomous driving in 2025. Xiaomi plans significant R&D investment in AI for automotive applications.

Microsoft Unveils AI Powered Demo Inspired By Quake II

PC Gamer (4/6, Litchfield) reports that Microsoft introduced an AI-driven demo called the Copilot Gaming Experience inspired by Quake II. The demo, developed using Microsoft Copilot AI research and powered by the World and Human Action Model, dynamically generates visuals and simulates player behavior in real time. It runs in a browser window but suffers from jerky, muddled graphics that reportedly caused motion sickness. According to Microsoft’s Q&A page, “this bite-sized demo pulls you into an interactive space inspired by Quake II, where AI crafts immersive visuals and responsive action on the fly.”

Autonomous Vehicle Conference Discusses Future Of Ride-Hailing

Insider (4/5, Lee) reports that the Ride AI autonomous vehicle summit in Los Angeles featured discussions on the future of ride-hailing involving both human and robot drivers. Stephen Hayes, Lyft’s VP of autonomous operations, and Ryan Green, CEO of Gridwise, stated that the market will remain a hybrid of human and robot drivers for the next 10 to 15 years. Toyota Research Institute CEO Gill Pratt remarked on the misconception that autonomy is needed to improve human driving. Mobileye CEO Amnon Shashua highlighted the challenge of achieving “precision” in AI due to data limitations. Kaity Fischer, Wayve’s VP of commercial and operations, emphasized the potential of level two and three autonomous systems. Christoph Lütge noted the slow implementation of Level 3 driving in Germany. Timothy B. Lee outlined the industry’s cyclical nature, predicting a gradual shift away from human drivers over the next 20 years.

House Science Subcommittee Hearing To Focus On Chinese AI’s Threat To National Security

Inside Cybersecurity (4/7, Livesay) reports the House Science Committee’s Research and Technology Subcommittee on Tuesday will hold a hearing “examin[ing] national security impacts of DeepSeek and Chinese artificial intelligence advancements at a hearing this week.”

Electric Utilities Fielding “Massive” Requests For New Power Capacity As Big Tech Hunts For New Data Centers

Reuters (4/7, Kearney, Dareen) reports, “U.S. electric utilities are fielding massive requests for new power capacity as Big Tech scours the country for viable locations for new data centers to keep up with the compute demands of AI.” Reuters says its “survey of 13 major U.S. electric utility earnings transcripts found nearly half have received inquiries from data center companies for volumes of power that would exceed their peak demand or existing generation capacity...a metric that reflects the sheer size of oncoming data center needs,” and “now, the power industry is struggling with a question that will determine the course of billions of dollars in investment: how to meet the demand?”

White House Tech Head Says China Is Accelerating AI Capability

Fox News (4/8, Singman) reports Michael Kratsios, director of the White House Office of Science and Technology, has said that Chinese innovation in the AI space is “accelerating.” However, Kratsios “told Fox News Digital that the United States’ ‘promote and protect’ strategy will solidify its standing as the world’s dominant power in AI.” In the interview, Kratsios also discussed how the US needs to deregulate the AI space in order to stay ahead of Chinese industry.

Meta Accused of Gaming AI Benchmark

Gizmodo (4/8, Cranz) reports that Meta has been accused of manipulating an AI benchmark after releasing two new AI models based on its Llama 4 large language model. The models, Scout and Maverick, were unveiled over the weekend. Meta’s blog post boasted Maverick’s high ELO score of 1417 on LMArena, placing it second on the leaderboard. However, it was revealed that the model used for the benchmark was a customized version optimized for human preference. LMArena criticized Meta for not clearly disclosing this customization and announced policy updates to ensure fair evaluations.

        The Verge (4/7, Robison) also reports.

Tariffs Expected To Impact AI Data Centers Beyond Semiconductors

Fortune (4/8, Nusca) reports that President Trump’s tariffs have created uncertainty for AI data centers, despite semiconductors being exempt. Gil Luria from D.A. Davidson & Co. notes that non-semiconductor components, such as server hardware and cooling systems, represent a significant portion of data center costs and are affected by tariffs. This situation could impact the cost of capital for tech companies, making them less inclined to invest heavily in data centers. Luria emphasizes the pressure on tech companies’ core businesses due to these tariffs, suggesting a potential slowdown in data center investments.

TSMC Faces $1B Penalty For Making Chips For Huawei AI, Sources Say

Sources informed Reuters (4/8, Freifeld) that Taiwan Semiconductor Manufacturing “could face a penalty of $1 billion or more to settle a US export control investigation over a chip it made that ended up inside a Huawei AI processor.” According to the sources, the US Commerce Department has been investigating after China-based Sophgo’s “TSMC-made chip matched one found in Huawei’s high-end Ascend 910B artificial intelligence processor.” Huawei is restricted from receiving goods made with US technology. “The $1 billion-plus potential penalty comes from export control regulations allowing for a fine of up to twice the value of transactions that violate the rules.”

Researchers Launch Open-Source Framework To Address Overthinking In Large Language Models

Insider (4/8, Cosgrove) reported Jared Quincy Davis, the founder and CEO of Foundry, “along with researchers from Nvidia, Google, IBM, MIT, Stanford, DataBricks, and more,” have launched an open-source framework called Ember to address challenges faced by large language models, such as overthinking, which can degrade response quality. Davis explained that reasoning models like OpenAI’s o1 and DeepSeek’s R1 tend to get stuck when they overthink. Ember aims to optimize model performance by integrating multiple models with varying response times.

Microsoft “Slowing Or Pausing” Some AI Data Center Projects, Including $1B Project In Ohio

The AP (4/9) reports, “Microsoft said it is ‘slowing or pausing’ some of its data center construction, including a $1 billion project in Ohio, the latest sign that the demand for artificial intelligence technology that drove a massive infrastructure expansion might not need quite as many powerful computers as expected.” The company “confirmed this week that it is halting early-stage projects on rural land it owns in central Ohio’s Licking County, outside of Columbus, and will reserve two of the three sites for farmland.”

Google Unveils New AI Chip, Ironwood

Reuters (4/9) reports that Alphabet unveiled its seventh-generation AI chip, Ironwood, designed to enhance AI application performance. The chip is optimized for inference computing, crucial for applications like OpenAI’s ChatGPT. This development positions Ironwood as a viable alternative to Nvidia’s AI processors. Google’s tensor processing units (TPUs), including Ironwood, are exclusive to its engineers or cloud services. The chip supports large-scale AI application operations, accommodating up to 9,216 chips. Amind Vahdat, Google’s vice president, noted Ironwood’s enhanced memory and energy efficiency, doubling performance compared to the previous Trillium chip. The chip was revealed at a cloud conference, but the manufacturer remains undisclosed.

University Of North Georgia Plans AI Name Announcements For Commencement

The Chronicle of Higher Education (4/10, Swaak) reports that the University of North Georgia plans to use AI technology to pronounce students’ names at the upcoming commencement ceremony, replacing the traditional method of a professor announcing names. The university’s decision has sparked significant backlash, with more than 1,800 signatures on a Change.org petition and more than 10,000 upvotes on a Reddit thread. Students argue that AI usage “demeans all the hard work” and sends “an ominous message” regarding AI’s role in the workforce. Eddie Garrett, vice president for strategic communications, explained that the AI ensures accurate name pronunciation and enhances the ceremony with features like synchronized video displays and translations.

FDA Plans To Replace Animal Testing With AI Models

Reuters (4/10) reports that the FDA announced on Thursday its intention to replace animal testing for monoclonal antibody therapies and other drugs with “human-relevant methods,” including AI-based models. FDA Commissioner Martin Makary described this as a significant shift in drug evaluation. The agency aims to enhance drug safety, reduce costs, and lower prices. The FDA will implement this new approach immediately, encouraging the use of New Approach Methodologies (NAMs) data. A pilot program is planned for the next year to test non-animal strategies. The National Association for Biomedical Research expressed concerns about the reliance on AI, emphasizing the risks of unknown variables.

        CNN (4/10, Christensen) reports that the change to AI follows the FDA Modernization Act 2.0, enacted in 2022, which allows alternatives to animal studies for drug licensure. The FDA plans to update its guidelines and incentivize strong non-animal test data submissions.

Dartmouth Researchers Pilot AI Therapy Chatbot

Forbes (4/10, Barsky) reports that Dartmouth researchers have piloted an AI-powered therapy chatbot named Therabot, which has shown significant clinical gains in treating depression, anxiety, and eating disorders. The trial involved more than 100 participants, and results published in the New England Journal of Medicine were “comparable to what we would see for people with access to gold-standard cognitive therapy,” according to Nick Jacobson, a professor at Dartmouth’s Geisel School of Medicine. The chatbot’s success challenges the notion that empathy in therapy is “AI-proof,” as participants reported forming trusted connections with Therabot. “It was available around the clock for challenges that arose in daily life,” said co-author Michael Heinz. The research underscores the importance of “diligent oversight” in AI development to ensure safety and efficacy, as emphasized by Jacobson.

Researchers In Wisconsin Develop AI Tool To Screens Hospital Patients For Opioid Use Disorder

Politico (4/10, Paun, Reader) reports that researchers at the University of Wisconsin School of Medicine and Public Health have developed an AI tool to screen hospital patients for opioid use disorder. The study, published in Nature Medicine, involved screening electronic health records at UW Health University Hospital in Madison, Wisconsin, from March to October 2023. The AI model alerts healthcare providers if a patient is at risk, prompting potential referrals to specialists. Dr. Majid Afshar, the study’s lead author, stated that the tool reduced hospital readmissions and saved $109,000 in healthcare costs. The tool is now being trialed in a Chicago hospital system and is available for free integration into other healthcare systems.

European Startups Drive Quantum-AI Innovation

Forbes (4/8, Press) reports that the fusion of AI and quantum computing is advancing technological efficiency. Multiverse Computing, a startup in Donostia, Spain, released AI models with 60% fewer parameters, improving energy efficiency by 84% and reducing costs by 50%. Multiverse, which raised $100 million, plans to compress the top 20 large language models. Meanwhile, IQM integrates its 54-qubit quantum computer into a supercomputer in Bologna, Italy, for AI algorithm optimization. SandboxAQ secured $450 million in funding for its AI models, focusing on physics and chemistry applications. Nvidia announced a new Quantum Research Center in Boston, collaborating with Quantum Machines to enhance quantum computing efficiency. This partnership aims to integrate Nvidia’s superchips with QM’s quantum control technologies. At MIT’s Business of Quantum Summit, experts discussed quantum’s business potential, highlighting its transformative impact on AI. Accenture’s Carl Dukatz noted the growing synergy between quantum and AI development.

Greenpeace Reports AI Chipmaking Emissions Quadrupled In 2024

Bloomberg (4/9, Subscription Publication) reported that emissions from semiconductor production for AI services surged more than fourfold in 2024, as per a Greenpeace analysis. Nvidia Corp. and Microsoft Corp. depend on chipmakers like Taiwan Semiconductor Manufacturing Co. (TSMC), SK Hynix Inc., Samsung Electronics Co., and Micron Technology Inc. for AI-supporting components, primarily manufactured in fossil fuel-dependent regions like Taiwan, South Korea, and Japan. A TSMC representative said that “its internal tally shows emissions per unit declined in 2024.” Nvidia is urging suppliers to adopt science-based emission reduction targets. Greenpeace highlighted the need for renewable energy expansion in eastern Asia to meet chip manufacturing’s rising electricity demand. However, South Korea plans to increase LNG-fired power capacity near chip factories, and Taiwan is considering a new LNG terminal. Emissions from global AI chipmaking climbed 357 percent in 2024, exceeding a 351 percent rise in electricity use.

Google Reaffirms $75 Billion AI Infrastructure Investment

Network World (4/10) reports that Alphabet CEO Sundar Pichai has reiterated Google’s commitment to invest $75 billion in AI infrastructure and data centers in 2023. Speaking at Google Cloud Next 25 in Las Vegas, Pichai emphasized the investment’s role in supporting enterprise AI workloads and enhancing Google services. This announcement follows Microsoft’s reported abandonment of multiple data center projects. Abhivyakti Sengar from Everest Group noted a “divergence in hyperscaler strategy,” with Google expanding globally and Microsoft focusing on regional optimization.

Honeywell Introduces AI-Driven Technology Suite To Support Green Hydrogen Production

Entrepreneur Magazine (4/9) reported that Honeywell has introduced Protonium, a new technology suite utilizing artificial intelligence and machine learning to enhance green hydrogen production efficiency and scalability. Protonium aims to tackle challenges like high costs, energy management, and operational inefficiencies in hydrogen production. The initial implementation will occur at Aternium’s forthcoming Mid-Atlantic Clean Hydrogen Hub (MACH2). Aternium, a US-based hydrogen producer and recipient of a US Department of Energy award for regional hydrogen hubs, selected Protonium for its potential to fulfill operational and safety criteria, as stated by the company.

dtau...@gmail.com

unread,
Apr 19, 2025, 6:36:01 PMApr 19
to ai-b...@googlegroups.com

Nvidia to Mass-Produce AI Supercomputers in Texas

Chipmaker Nvidia said on April 14 it will produce as much as $500 billion in AI infrastructure in the U.S. during the next four years through manufacturing partnerships. Nvidia's Blackwell AI chips are being produced at Taiwan Semiconductor plants in Phoenix, with chip packaging and testing services to be provided through partnerships with Amkor and Siliconware Precision Industries in Arizona. It also has partnered with Foxconn in Houston and Wistron in Dallas to build manufacturing plants for AI supercomputers.
[ » Read full article ]

CNBC; Hayden Field (April 14, 2025)

 

AI Decodes Dolphin Speak

A large language model developed by Google researchers, in collaboration with the Wild Dolphin Project (WDP), could help researchers communicate with dolphins. Based on Google's Gemma open AI models, the DolphinGemma communication model uses Google's SoundStream to tokenize dolphin vocalizations, feeding them into the model as they are recorded. Trained on WDP's acoustic archive of dolphin sounds and containing around 400 million parameters, DolphinGemma predicts the next token after being presented with a dolphin vocalization.
[ » Read full article ]

Ars Technica; Ryan Whitwam (April 14, 2025)

 

AI Not Ready to Replace Human Coders for Debugging

A tool developed by Microsoft researchers tests and aims to improve software debugging by AI models. Available on GitHub, Debug-gym lets AI models attempt to debug existing code repositories using debugging tools not generally used by such models. According to the researchers, even the latest AI models rarely completed more than half of debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).
[ » Read full article ]

Ars Technica; Samuel Axon (April 11, 2025)

 

Israel Tops List of Countries with Highest Self-Reported AI Skills

Israel tops the list of countries with the highest concentration of AI talent in the world, according to the LinkedIn AI Talent Index. The index, based on member profile data collected from the professional networking site, showed that workers with AI-related skills make up 1.98% of the workforce in Israel. Coming in second in the ranking was Singapore at 1.64%, while Luxembourg, at 1.44%, ranked third. Estonia and Switzerland rounded out the top five.
[ » Read full article ]

Times of Israel; Sharon Wrobel (April 15, 2025)

 

High-Schooler's AI Algorithm Detects Previously-Unknown Astronomical Objects

Matteo Paz, an 18-year-old high school student from Pasadena, CA, won first place in the 2025 Regeneron Science Talent Search for developing an AI algorithm that processed 200 billion data entries from a now-retired U.S. National Aeronautics and Space Administration (NASA) telescope, identifying 1.5 million previously unknown potential celestial bodies. Working with California Institute of Technology's Davy Kirkpatrick, Paz created an AI model that analyzed raw data from the Near-Earth Object Wide-field Infrared Survey Explorer telescope to detect minuscule changes in infrared radiation.
[ » Read full article ]

Smithsonian; Margherita Bassi (April 15, 2025)

 

AI-Boosted Cameras Help Blind People Navigate

A system created by researchers in China can help visually impaired people navigate using AI to interpret footage from a camera mounted on a pair of glasses. A tiny computer processes images captured by the camera using machine-learning algorithms trained to detect the presence of objects; the system then produces a beep in the right or left ear to guide the wearer when an obstacle is detected. The researchers also created wearable patches that vibrate when an obstacle is nearby.
[ » Read full article ]

Nature; Miryam Naddaf (April 14, 2025)

 

Safeguarding Sensitive AI Training Data

A framework developed by Massachusetts Institute of Technology researchers to balance AI model performance and data security has been improved so it can privatize essentially any algorithm without requiring access to its inner workings. The PAC Privacy framework estimates the amount of noise that must be added to an algorithm to achieve the targeted privacy level using only the output variances. The updated algorithm estimates anisotropic noise, so less overall noise is needed to reach the same level of privacy.
[ » Read full article ]

MIT News; Adam Zewe (April 11, 2025)

 

New Chip Controls on China Set to Cost U.S. Companies Billions

The Trump administration has expanded restrictions on exports of AI chips, causing billions of dollars in losses for U.S. chip companies. Nvidia said it will write off $5.5 billion in inventory after being informed it could not export its H20 chip, which had been designed in accordance with the Biden administration's export restrictions on China. AMD said it would have to write off $800 million of chips as a result of the new restrictions.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Gerrit De Vynck (April 16, 2025)

 

An AI Is Going to Art School

Students Chiara Kristler and Marcin Ratajczyk at Austria's University of Applied Arts Vienna developed an AI college student that applied and was accepted to their university. Called "Flynn," the AI is a combination of different commercially available and open source AI-powered tools, with a large language model to generate text outputs, a voice agent to determine its speech and tone, and an image-generation tool to produce its art assignments.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Daniel Wu (April 16, 2025)

 

AI Can Help Manage Nuclear Reactors

An AI-based tool developed at the U.S. Department of Energy's Argonne National Laboratory can help design nuclear reactors and assist operators in running nuclear power plants. The Parameter-Free Reasoning Operator for Automated Identification and Diagnosis (PRO-AID) tool leverages generative AI and large language models to handle real-time monitoring and diagnostics, informing and explaining any issues that arise to staff.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (April 11, 2025)

 

AI Boom to Fuel Surge in Datacenter Energy Needs, IEA Says

A report by the International Energy Agency (IEA) predicts energy demand for AI-optimized datacenters will quadruple by 2030, with such datacenters making up almost half of U.S. electricity demand growth over that span. While a wide range of energy sources will be used to meet growing electrical demand by datacenters, most attention is being given to renewables and natural gas, due to their cost-competitiveness and availability in key markets.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Giulia Petroni (April 10, 2025)

 

AI Industry to Congress: 'We Need Energy'

During a U.S. House Energy and Commerce Committee hearing last week, AI leaders told lawmakers that more energy is needed if the U.S. hopes to win the AI race. Former Google CEO Eric Schmidt (pictured) of the Special Competitive Studies Project think tank said environmental considerations should not get in the way of winning the AI race, arguing that AI will solve the climate crisis once the U.S. beats China in developing superintelligence.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Will Oremus; Andrea Jiménez (April 10, 2025)

 

Anthropic Launches AI Assistant For Higher Education

InfoQ (4/12) reported that Anthropic has introduced Claude for Education, a tailored version of its AI assistant for colleges and universities. The initiative features a Learning mode to promote critical thinking through Socratic dialogue, aiming to enhance deeper learning. Institutions like Northeastern University and LSE are already deploying Claude for various academic and administrative tasks. Anthropic is partnering with Internet2 and Instructure to integrate Claude into educational systems. New programs include the Claude Campus Ambassadors initiative and API credits for student projects.

Xpeng Develops Advanced AI Chip For Autonomous Cars

South China Morning Post (HKG) (4/14) reports Chinese EV maker Xpeng is advancing its AI chip development, with plans to introduce its Turing chip in production models as soon as this quarter. CEO He Xiaopeng stated that the chip, more powerful than Nvidia’s Drive Orin X, will be integral to all Xpeng vehicles. Xiaopeng said the company’s chips “will be seen in our cars all over the world in future.” The company aims for its cars with autonomous systems to be street-ready internationally by next year. The Turing chip, designed for level 4 autonomous driving, is three times more powerful than Nvidia’s chip, according to Xiaopeng. He Xiaopeng’s comments came before Xpeng’s Global Brand Night in Hong Kong, where new products and technologies will be showcased.

IBM Acquired Hakkoda To Enhance Data Services

CRN (4/11, Whiting) reported that IBM has acquired Hakkoda, a startup specializing in data, artificial intelligence, and Snowflake consulting services. This acquisition aims to bolster IBM Consulting’s data transformation services. Hakkoda’s generative AI-powered assets will be integrated into IBM’s Consulting Advantage platform, focusing on industries like financial services and healthcare. Mohamad Ali, IBM Consulting’s senior vice president, stated that Hakkoda’s expertise will enable IBM to deliver faster value to clients. Founded in 2021, Hakkoda raised $5.6 million to enhance its offerings. Additionally, Ping Identity has revamped its Nexus Partner Program to strengthen its channel partnerships, according to founder and CEO Andre Durand. Meanwhile, Meter and World Wide Technology have partnered to expand Meter’s Network-as-a-Service offerings. Google Cloud also introduced new AI technologies, including the AI Agent Development Kit and Ironwood TPU, during its Google Cloud Next 2025 event, highlighting its investment in AI innovation.

IBM Announced New z17 Mainframe With AI, Security Enhancements

American Banker (4/11, Subscription Publication) reported that IBM unveiled its latest mainframe model, the z17, at an event in New York on Tuesday. The z17 features advanced AI capabilities and quantum-grade encryption, aiming to enhance fraud detection and security. According to Tina Tarquinio, IBM Z Chief Product Officer, the z17 can process up to 35 billion transactions daily, utilizing the new Telum II chip with improved AI accelerators. IBM fellow Elpida Tzortzatos highlighted the importance of integrating AI into transaction processing for effective fraud prevention. Anne Dames, an IBM engineer, emphasized the mainframe’s “quantum-safe” features against future cyber threats. The z17 is scalable for various banking needs, with companies like Bank of Montreal planning to leverage its capabilities. The new IBM z17 “could also help banking core providers such as the London-based core banking platform Hogan, according to Duncan Alexander, product director of Hogan at DXC Technology.”

Republican Senators Ask Commerce Secretary To Withdraw AI Chip Rule

Reuters (4/14, Freifeld) reports that seven Republican senators have asked Commerce Secretary Howard Lutnick to withdraw a Biden administration rule restricting global access to AI chips. The senators, including Pete Ricketts (NE), Tommy Tuberville (AL), and Thom Tillis (NC), argue that the rule could harm US leadership in AI and urge “immediate action” before it takes effect on May 15. The letter highlights concerns over the rule’s three-tier structure, which limits access for most countries and could push buyers towards China’s “unregulated cheap substitutes.” The Commerce Department has not yet responded to the request for comment.

Educators Embrace AI’s Role In Education Despite Concerns

The New York Times (4/14, Goldstein) reports that schools are facing a “paradox” as artificial intelligence (AI) becomes more integrated into education. While educators express concerns about AI facilitating “cheating and shortcuts,” teachers are increasingly using AI tools for tasks like grading and tutoring. Jennifer Carolan, a former teacher and venture capitalist, says that AI “is already being used by the majority of teachers and students.” Alex Baron, an administrator in Washington, DC, views math apps like PhotoMath as cheating but acknowledges AI’s benefits for analyzing student data. Robert Wong, Google’s director of product management for learning and education, “said the tools are invaluable for students whose parents cannot help them with math homework.” Despite concerns, over the “past two years, companies working at the nexus of artificial intelligence and education have raised $1.5 billion.”

California Nuclear Plant Uses AI For Safety

Futurism (4/15) reports that Pacific Gas & Electric (PG&E) is implementing a generative AI safety system, Neutron Enterprise, at the Diablo Canyon nuclear power plant, the first of its kind in the US. The AI system, developed with Atomic Canyon, is designed to assist employees by summarizing regulatory documents, reducing the time spent searching through data. PG&E executive Maureen Zalawick emphasized that the AI will act as a “copilot” rather than a “decision-maker.” Concerns remain about the broader use of AI in nuclear settings, as highlighted by experts like Tamara Kneese from Data & Society.

Nvidia Says Curbs On China AI Chips Will Cost Company $5.5B

Reuters (4/15, Nellis, Freifeld) reports that on Tuesday, Nvidia “said it would take $5.5 billion in charges after the U.S. government limited exports of its H20 artificial intelligence chip to China, a key market for one of its most popular chips.” Reuters explains, “Nvidia’s AI chips have been a key focus of U.S. export controls as U.S. officials have moved to keep the most advanced chips from being sold to China as the U.S. tries to keep ahead in the AI race.” A spokesperson for the Commerce Department confirmed late on Tuesday “that it was issuing new licensing requirements for exports of chips including Nvidia’s H20, AMD’s...MI308 and their equivalents.” The spokesperson stated, “The Commerce Department is committed to acting on the President’s directive to safeguard our national and economic security.”

        Bloomberg (4/15, Turner, Hawkins, Subscription Publication) calls the development “an escalation of Washington’s tech battle with Beijing that will...hamstring a product line it explicitly designed to comply with previous US curbs. ... The latest rules for Nvidia are a sign the Trump administration will stay the course on the US government’s approach to Chinese tech development.”

White House Promises To Expedite Permits Following Nvidia’s $500 Billion Investment

Newsweek (4/15, Croucher) reports in continuing coverage that Nvidia announced a $500 billion investment to manufacture AI supercomputers in the US, marking a significant development for President Donald Trump’s economic agenda. Trump promised expedited permits for Nvidia and other companies investing in the US, stating, “All necessary permits will be expedited and quickly delivered to NVIDIA.” Nvidia’s initiative involves “more than 1 million square feet of manufacturing space to build and test its specialized Blackwell chips in Arizona and AI supercomputers in Texas.” The company aims to create “hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades.” Nvidia founder Jensen Huang said, “Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency.”

Texas Bill Could Delay AI Data Center Expansion

The Guardian (UK) (4/15) reports that President Trump’s plan to build 20 AI-supporting data centers through the $500 billion Stargate initiative may face delays due to Texas Senate Bill 6 (SB6), which adds regulatory hurdles. Stargate, backed by OpenAI, SoftBank, Oracle, and UAE-funded MGX, selected Texas for its initial build due to low regulation and robust energy infrastructure. But SB6 introduces an additional six-month review process and mandates backup generators and fees to protect the state’s power grid, potentially extending approval times to 24 months and increasing costs. Ten centers are underway in Abilene, but it’s unclear if the remaining ten will proceed. Critics, including former Trump OMB official Vance Ginn, say the bill could deter investment and hinder Trump’s AI strategy. Texas Lt. Gov. Dan Patrick defended the bill, claiming it supports Trump’s agenda by ensuring grid stability. Broader concerns include rising tariffs and a global slowdown in computing infrastructure, with Microsoft and others scaling back projects.

AI-Based Algorithm Can Detect Chronic Liver Disease Using Images From Standard Echocardiography, Study Finds

Healio (4/16, Southall) reports a study found that “an AI-based algorithm may help detect chronic liver disease using images from standard echocardiography.” Researchers utilized “more than 1.5 million echocardiogram videos from nearly 25,000 patients to develop and evaluate a deep-learning program designed to identify cirrhosis and steatotic liver disease. The researchers compared the AI predictions with diagnoses made from abdominal ultrasound or MRI studies.” They found the “algorithm did relatively well for identifying both steatosis and cirrhosis, though the ability to identify cirrhosis was better than the ability to identify steatosis, which may be because cirrhosis is more of an advanced disease state.” The study was published in NEJM AI.

Alphabet, Nvidia Back AI Startup SSI

Computing (UK) (4/15, Kundaliya) reported that Alphabet and Nvidia have invested in Safe Superintelligence (SSI), an AI startup co-founded by former OpenAI chief scientist Ilya Sutskever. The funding is part of a $2 billion round led by Greenoaks, which contributed $500 million, with additional support from Andreessen Horowitz, DST Global, and Lightspeed Venture Partners. The round raises SSI’s valuation to $32 billion. SSI, founded in June 2024, is focused on developing safe, advanced AI systems. Co-founders include Daniel Gross and Daniel Levy, both former AI leads at Apple and OpenAI, respectively. The startup is based in Palo Alto and Tel Aviv and employed about 20 people as of March 2025. Alphabet’s Google Cloud is providing access to its tensor processing units, while Nvidia’s investment reflects a broader strategy to remain relevant in the AI hardware ecosystem.

Microsoft Is Adjusting Its Approach To AI Infrastructure Development

Insider (4/14, Barr) reported Microsoft is adjusting its approach to AI infrastructure development, as indicated by Noelle Walsh, head of Microsoft Cloud Operations, who stated the company “may strategically pace our plans.” This shift involves slowing or pausing some early-stage projects and reducing AI cloud capacity in the US and Europe, partly due to changes in its partnership with OpenAI. Mustafa Suleyman, CEO of Microsoft AI, noted that the company’s compute consumption is still “unbelievable,” but is shifting towards different AI pipeline stages. Analysts suggest this recalibration reflects a strategic pivot rather than a retreat, as Microsoft continues to plan significant capital expenditures in the coming fiscal year.

New AI Program Seeks To Enhance Math Education

Education Week (4/16, Langreo) reports that the Concord Consortium, a nonprofit educational research and development organization, “has partnered with the Florida Virtual School and the University of Florida to provide an ‘Artificial Intelligence in Math’ supplemental certification program for middle and high school students taking Algebra 1.” The research project will explore how integrating artificial intelligence (AI) with math concepts could improve students’ attitudes toward math. The program began on April 7. It will “also teach students about real-world applications, ethical considerations, and career opportunities in AI-related fields, said Jie Chao, a learning scientist for the Concord Consortium.” The initiative “aims to address both students’ need for AI literacy and to improve their math skills and attitudes toward the subject, Chao said.” The program, piloted with more than 180 students, includes pre- and post-surveys and activities like creating an AI model, with an estimated completion time of 250 minutes.

Sanofi Invests $1.8B In AI-Developed Antibodies

BioSpace (4/17, Samorodnitsky) reports that Sanofi has signed a licensing deal potentially worth over $1.8 billion with Earendil Labs for two AI-developed bispecific antibodies, HXN-1002 and HXN-1003, targeting autoimmune diseases. This agreement includes a $125 million upfront payment and potential milestone payments up to $1.72 billion, plus royalties. The antibodies aim to treat conditions such as ulcerative colitis, Crohn’s disease, and other autoimmune disorders. This move underscores Sanofi’s commitment to leveraging AI in drug discovery, following previous collaborations with BioMed X Institute and OpenAI.

AI Chip Demand Boosts Industry Amid Economic Uncertainties

The Wall Street Journal (4/17, Fitch, Subscription Publication) reports that the artificial intelligence (AI) boom has significantly boosted profits and stocks for chip companies like Nvidia and TSMC, despite global economic uncertainties. TSMC CEO C.C. Wei said AI-chip revenue is expected to double this year and achieve a compound annual growth rate of around 45 percent. Amazon CEO Andy Jassy highlighted the capital investment required by AI, with Amazon planning $100 billion in capital spending this year. TSMC’s strong revenue guidance reflects optimism, although tariffs and export restrictions pose potential challenges. The AI boom has helped offset weaker demand in other sectors, but reliance on AI could be risky if economic conditions worsen.

House Committee Recommends AI Model Export Restrictions

Gizmodo (4/16, Maxwell) reported that a bipartisan House committee recommended on Wednesday imposing restrictions on AI model exports to China, following findings that DeepSeek used OpenAI’s ChatGPT data for training. The committee suggests prohibiting federal agencies from acquiring AI models from China, citing DeepSeek as a “profound threat” to US national security. This recommendation follows the Trump Administration’s restriction on Nvidia’s H20 chip exports to China. Critics argue this could push China to develop its own chips, impacting Nvidia’s revenue. Despite US restrictions, China’s AI development may continue, with concerns over AI’s potential geopolitical uses.

dtau...@gmail.com

unread,
Apr 26, 2025, 12:23:54 PMApr 26
to ai-b...@googlegroups.com

Periodic Table of Machine Learning Could Fuel AI Discovery

A periodic table developed by Massachusetts Institute of Technology researchers shows the connections among more than 20 classical machine learning algorithms. The information contrastive learning (I-Con) framework is based on a unifying equation underlying these algorithms, which identifies how the algorithms locate connections between real data points and internally approximates those connections.
[
» Read full article ]

MIT News; Adam Zewe (April 23, 2025)

 

South Korea Says DeepSeek Transferred User Data to U.S., China Without Consent

South Korea’s Personal Information Protection Commission (PIPC) said Chinese AI startup DeepSeek collected personal information from local users and transferred it to China and the U.S. without their permission. The PIPC released the findings of its privacy and security review of DeepSeek on Thursday. DeepSeek removed its chatbot application from South Korean app stores in February at the recommendation of the watchdog.
[
» Read full article ]

CNBC; Dylan Butts (April 24, 2025)

 

AI Outsmarts Virus Experts in the Lab

Top AI models can outperform Ph.D.-level virologists in problem-solving in wet labs, where scientists analyze chemicals and biological material, according to a study from researchers in the U.S. and Brazil. The researchers consulted virologists to create a difficult practical test measuring the ability to troubleshoot complex lab procedures and protocols. While doctorate-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI’s o3 attained 43.8% accuracy.
[ » Read full article ]

Time; Andrew R. Chow (April 22, 2025)

 

Robot See, Robot Do

An AI-powered robotic framework developed by Cornell University researchers could pave the way for faster development and deployment of robotic systems by enabling robots to learn how to perform various tasks after watching a single how-to video. Using the Retrieval for Hybrid Imitation under Mismatched Execution (RHyME) framework, the researchers said, a robot can perform a task it has seen just once by combing its memory of videos and gaining inspiration from similar actions.
[
» Read full article ]

Cornell Chronicle; Louis DiPietro (April 22, 2025)

 

Making AI-Generated Code More Accurate in Any Language

An international team of researchers developed a framework to improve AI-generated code. The researchers engineered knowledge that an expert would have into a large language model to guide it to the most promising outputs that adhere to the rules of the relevant programming language. The framework assigns a weight to each output based on its likelihood of being semantically accurate and structurally valid, eliminating lower-weighted outputs at each step in the computation.
[ » Read full article ]

MIT News; Adam Zewe (April 18, 2025)

 

AI Trained at Lightspeed

A programmable chip developed by engineers at the University of Pennsylvania (UPenn) can train nonlinear neural networks using light. The breakthrough relies on a special semiconductor material that can be manipulated by light input. “We’re not changing the chip’s structure,” explains UPenn's Liang Feng (pictured, right). “We’re using light itself to create patterns inside the material, which then reshapes how the light moves through it.” In testing, the platform achieved over 97% accuracy on a simple nonlinear decision boundary task and over 96% on the Iris flower dataset, a machine learning standard.
[ » Read full article ]

Penn Engineering; Ian Scheffler (April 15, 2025)

                                     

Draft Executive Order Outlines Plan to Integrate AI into K-12 Schools

A draft circulated by the White House to several federal agencies on Monday suggests U.S. President Trump is considering an executive order that would create a policy integrating AI into K-12 schools. Under the draft executive order, federal agencies would be instructed to take steps to train students in using AI and to incorporate it into teaching-related tasks. The agencies would also be asked to partner with the private sector to develop relevant programs in schools.
[ » Read full article ]

The Washington Post; Frances Vinall (April 22, 2025)

 

Israel to Roll Out AI-Driven Tutoring at National Level

A partnership between Israeli K-12 textbook publisher Center for Educational Technology and AI platform eSelf will give every student in Israel access to personal AI tutors. The interactive avatars, whose appearance and personality can be customized, will help students understand materials, practice questions, and study for exams. The avatars will adapt to the strengths and challenges of each student and refine lessons as needed.
[
» Read full article ]

The Jerusalem Post (Israel) (April 22, 2025)

 

Could AI Text Alerts Help Save Snow Leopards?

The World Wide Fund for Nature and Pakistan's Lahore University of Management Sciences are collaborating on a trial of AI-powered cameras to protect endangered snow leopards from being killed for attacking livestock. Ten cameras have been placed in the rugged mountain terrain of three villages in Gilgit-Baltistan, equipped with AI software capable of distinguishing between humans, snow leopards, and other animals. The goal is to send text messages to villagers to move their livestock when a snow leopard is detected.
[ » Read full article ]

BBC; Azadeh Moshiri; Usman Zahid; Kamil Dayan Khan (April 20, 2025)

 

Sam's Club Is Betting Big on AI

Sam's Club said it would phase out traditional checkouts, replacing them with its “Scan & Go” system, which it just augmented with the "Just Go" AI check. Customers would use the Sam's Club mobile app to scan products, with an AI scanner verifying their purchases as they exit, eliminating receipt checks at the door. "Just Go" already has been deployed at the retail chain’s Grapevine, TX, store.
[ » Read full article ]

Fox Business; Michael Dorgan (April 19, 2025)

 

Italian Newspaper Gives Free Rein to AI

Claudio Cerasa, editor of Italian newspaper Il Foglio, said a four-page daily insert written entirely by AI and sold with the normal newspaper over a one-month span led to increased sales, prompting it to publish a separate weekly section written by AI. Cerasa said AI would not replace journalists in his newsroom and praised the AI program's sense of irony and ability to produce insightful book reviews within minutes, but added that the program lacked critical thinking and occasionally generated content with factual errors.
[ » Read full article ]

Reuters; Crispian Balmer (April 18, 2025)

 

Thailand Unveils Smart Robot Officer

The Royal Thai Police rolled out Thailand's first AI police robot at the Songkran festival in Nakhon Pathom province last week. The "AI Police Cyborg 1.0" features 360-degree AI cameras linked to the province's Command and Control Center. It also features built-in AI technology that can analyze live footage from CCTV cameras and drones.
[ » Read full article ]

The Nation (Thailand) (April 16, 2025)

 

Oscars OK the Use of AI, with Caveats

The Academy of Motion Picture Arts and Sciences has addressed the use of generative AI in the Oscar rules, stating that AI and other digital tools "neither help nor harm the chances of achieving a nomination." The rules state, however, that the Academy will take "into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award" so that the greater role a human played in a film’s creation, the better.

[ » Read full article *May Require Paid Registration ]

The New York Times; Brooks Barnes (April 21, 2025)

 

Using AI to Predict Tariff Impacts

Some companies are turning to supply-chain technology providers to help them deal with the ongoing uncertainty related to U.S. President Trump's tariff announcements. This has prompted a number of firms to unveil AI tools to help clients assess the impact of new tariffs. Tools like Altana's Tariff Scenario Planner and platforms from Aera Technology and Flexport, among others, leverage AI to help businesses consider various tariff scenarios.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette; Belle Lin (April 15, 2025)

 

Teacher AI Training Disparities Persist Despite Increased Efforts

K12dive (4/18) reports that a Rand Corp. survey highlights persistent disparities in AI training for teachers between low- and high-poverty school districts, despite an increase from 23% to 48% in overall district training from 2023 to 2024. Low-poverty districts were more likely to offer AI training in fall 2024, with 67% participation compared to 39% in high-poverty districts. The report suggests that high-poverty districts require additional support and funding to bridge this gap. The Trump Administration’s closure of the US Department of Education’s Office of Educational Technology further complicates efforts to ensure equitable access to AI tools.

AI Companies Offer Free Chatbot Access To College Students

The Atlantic (4/21, Shroff) reports that OpenAI is providing college students with two months of free access to ChatGPT Plus, a service typically priced at $20 monthly, through the end of May. The article adds, “The OpenAI deal is just one of many such AI promotions going around campuses.” Recently, “Anthropic, xAI, Google, and Perplexity have also offered students free or significantly discounted versions of their paid chatbots. Some of the campaigns aren’t exactly subtle: ‘Good luck with finals,’ an xAI employee recently wrote alongside details about the company’s deal.” The Atlantic adds, “Even before the current wave of promotions, college students had established themselves as AI’s power users.” These AI companies’ strategy mirrors the “Millennial lifestyle subsidy” of the 2010s, where startups provided discounted services to build a customer base, with the hope that these users will become paying customers in the future.

Many Colleges Not Providing Student Access To Generative AI: Survey

Inside Higher Ed (4/21, Flaherty) reports that many colleges are not providing students with access to generative AI tools, despite rising expectations for AI literacy. Inside Higher Ed’s survey of 108 chief technology officers (CTOs) reveals that costs are the primary barrier, with 50% of institutions not offering access. Ravi Pendse from the University of Michigan emphasizes the importance of overcoming cost concerns to enhance AI education. While some institutions like the University of South Florida and Arizona State University offer access through licenses or partnerships, only 11% of CTOs report having a comprehensive AI strategy. The survey highlights the need for institutional AI access to ensure digital equity and workforce readiness.

Google Researchers Announce New AI “Era Of Experience”

Insider (4/22, Barr) reports that a paper by Google researcher David Silver and Canadian computer scientist Rich Sutton introduces a new AI era, termed “the Era of Experience.” This follows two previous AI eras: the “Simulation Era,” marked by Google’s AlphaGo, and the “Human Data Era,” dominated by OpenAI’s ChatGPT. The new era emphasizes AI models generating their own data through real-world interactions, addressing the scarcity of training data. Silver and Sutton argue this approach will surpass human-generated data, unlocking new capabilities. Anthropic cofounder Jack Clark praised the paper’s boldness in his newsletter.

Sources: Huawei Readying To Ship 910C AI Chip To Chinese Customers

Reuters (4/22, Potkin, Pan) cites anonymous sources in reporting Huawei has started shipments of its advanced 910C AI chip to Chinese customers, and plans to ramp up shipments as early as next month. The sources said the 910C “represents an architectural evolution rather than a technological breakthrough.” Reuters comments, “The timing is fortuitous for Chinese AI companies which have been left scrambling for domestic alternatives to the H20, the primary AI chip that NVIDIA had until recently been allowed to sell freely in the Chinese market.”

Amazon Follows Microsoft In Retreat From Ambitious AI Data Center Plans

Gizmodo (4/21, Maxwell) reports Amazon has paused negotiations on some co-location data center deals in Europe, following a similar move by Microsoft. According to Wells Fargo, Amazon’s decision mirrors Microsoft’s recent strategy to reassess aggressive lease-up deals while still proceeding with already signed agreements. The pause comes amid concerns about the cooling demand for AI infrastructure and the impact of President Trump’s trade war, which has affected Amazon’s stock and exposed it to tariffs on Chinese goods. Despite these challenges, Amazon maintains 9 GWs of active power capacity in its existing data centers, and its next earnings report on May 1 will be closely monitored for insights into AI demand.

Microsoft Warns Cybercriminals Embracing Generative AI For Scams

TechRadar (4/22, Fadilpasic) reports Microsoft’s latest Cyber Signals report on AI assisted scams “said that cybercriminals are using GenAI for more than ‘just’ phishing email copy,” and claimed they are using it to “create deepfakes (usually fake videos of celebrities endorsing a project), and create AI-generated ‘sham websites’ mimicking legitimate businesses.”

Trump Administration Outlines AI Approach In New Memos

STAT (4/22, Aguilar, Subscription Publication) reports in a roundup that the Trump Administration has released memos detailing its strategy on artificial intelligence. While there are similarities to former President Biden’s AI executive order, such as oversight structures like chief AI officers and governance boards, differences exist. STAT’s Casey Ross and Brittany Trang analyze these distinctions, noting the absence of statutory definitions for algorithmic discrimination, automation bias, and equity in the Trump memos. These changes could influence how AI is applied in healthcare.

OpenAI Testifies Search Is Critical To AI Goals

Bloomberg (4/22, Subscription Publication) reports that OpenAI’s ChatGPT head of product Nick Turley testified Tuesday in Washington that the company’s vision for a “super assistant” app and general artificial intelligence requires search technology. Turley said Google declined OpenAI’s 2024 request to access its search index, despite Google providing some access to Meta. OpenAI currently uses a separate provider but faces “significant quality issues.” Turley stated, “Search technology is a necessary component.” The DOJ’s proposed remedy – requiring Google to share its index – would aid OpenAI’s development. Turley estimated it will take several more years to build OpenAI’s own viable search index.

Administration’s Possible AI In Education Plans Face Challenges

Education Week (4/22) reports a draft executive order to integrate artificial intelligence int K-12 education could face several hurdles, including the reduced staff and funding for the Department of Education, “and states may be resistant to the Trump administration’s efforts.” The Administration just eliminated the Education Department team in charge of framing a national educational technology plan and assisting localities in implementing technology in schools. Additionally, “the Trump administration runs the risk of undermining its goals by politicizing AI,” in the way that previous federal pushes did so for school accountability and the Common Core.

Trump Signs Orders On AI Education, College Accreditation

Bloomberg (4/23, Lai, Subscription Publication) reports President Trump “signed an executive action to boost artificial intelligence education and workforce training, highlighting a rapidly developing technology that is a top Administration priority.” Trump also signed orders directing Education Secretary McMahon “to review higher education accreditation services that certify the validity of schools and programs to employers and loan providers, move to cut off funding for higher education institutions that do not disclose sources of foreign money, improve job training programs for skilled trades and promote historically black colleges and universities.” CNN (4/23, Klein, Waldenberg) says the order “targets the federal government’s process for deciding what colleges and universities can access billions of dollars in federal student loans and Pell grants.”

        Meanwhile, USA Today (4/23) reports the AI directive “instructs the US Education and Labor Departments to create opportunities for high school students to take AI courses and certification programs, and to work with states to promote AI education.”

Trump Signs Executive Order To Integrate AI In Education

Education Week (4/24, Prothero) reports in continuing coverage, “A new executive order signed by President Donald Trump calls for infusing artificial intelligence throughout K-12 education,” focusing on teacher training. The Education Department, along with the secretaries of agriculture and the National Science Foundation, is tasked with prioritizing grant funds for AI training. Despite enthusiasm from some educators, others express concern about the rapid pace of AI advancements and the lack of federal resources. Randi Weingarten, the president of the American Federation of Teachers, “panned the executive order in a statement, saying it opens up schools to ‘unaccountable tech companies’ and ‘unproven software.’” Bernadette Adams, an expert in AI, noted the absence of data privacy and bias considerations in the directive. “I feel like the executive order as it’s written...sideline[s] teaching and learning,” Adams said, emphasizing missed educational opportunities.

Caterpillar Pledges $100M Over Five Years To Upskill Workforce In AI Era

Manufacturing Dive (4/23, Owens) reports that Caterpillar Inc. “plans to commit $100 million over the next five years to upskill its workers to keep up with technological advancements and an evolving labor market.” The construction and mining equipment manufacturer is “making the investment in a bid to train up its workforce with robotics, automation and artificial intelligence technologies, including digital twins and machine language models.” The pledge “builds on Caterpillar’s efforts to close the growing manufacturing skills gap, including through STEM outreach for K-12 students and paid technician training programs for adults.”

AI Transforms Parkinson’s Research, Treatment With AWS Support

BusinessDay (NRA) (4/23, Omotayo) reports AI is revolutionizing Parkinson’s disease research, diagnosis, and treatment, with AWS playing a key role in enabling data-driven breakthroughs. Ultima Genomics uses AWS to reduce genome sequencing costs from $1,000 to $100, aiding in identifying genetic markers linked to 15% of Parkinson’s cases. Icometrix leverages AWS-hosted AI tools to analyze brain biomarkers via MRI scans, improving disease progression tracking. The Allen Institute’s Brain Knowledge Platform, hosted on AWS, maps brain cells to identify vulnerabilities in Parkinson’s patients. Dr. Ed Lein, Senior Investigator at the Allen Institute, said the platform could lead to “new treatment pathways.” AWS emphasizes that AI and cloud technologies are accelerating early detection, diagnostics, and personalized therapies like Deep Brain Stimulation.

Manufacturers Embrace AI, Address Workforce Concerns

Manufacturing Dive (4/22, Owens) reported that manufacturers are adopting AI and automation to boost efficiency, but face challenges in convincing workers of its benefits. At the North American Manufacturing Excellence Summit in Fort Worth, leaders discussed strategies to ease job security concerns. Heather Bishop from John Deere highlighted using AI to reduce time on tasks, allowing workers to focus on innovation. Procter & Gamble introduced a four-day work week with unattended shifts to improve work-life balance, as explained by Amy Rardin. Both companies emphasize creating a positive culture around technology adoption.

IBM, ESA Announce TerraMind For Better Insights Into Environmental Issues

Verdict (UK) (4/22) reported IBM and the European Space Agency “unveiled TerraMind, a next-generation AI model to help transform Earth observation.” TerraMind is a “self-supervised learning tool” that is “designed to process vast data sets, providing precise insights into climate and environmental issues.” Verdict explains, “TerraMind interprets Earth observation images with a comprehensive understanding of geospatial context, unlike other AI models that may confuse similar-looking objects.”

AI Tools Enhance College Credit Transfer Processes

The Chronicle of Higher Education (4/24, Swaak) reports that artificial intelligence (AI) tools are increasingly being used to simplify the college credit transfer process, a longstanding challenge in higher education. Michelle Lohman, assistant director of advising and transfer services at Northampton Community College, has experienced the difficulties firsthand and emphasizes the need for innovative solutions. AI-supported tools, like chatbots and platforms that analyze course information, are being explored to streamline credit articulation. Emily Kittrell from the National Institute for the Study of Transfer Students highlights AI’s potential for “streamlining processes” and addressing complex issues. The AI Transfer and Articulation Infrastructure Network, involving 57 colleges, uses CourseWise, an AI-driven platform, to facilitate credit transfer. Texas A&M “pays for nine of its 11 universities to use Transfer Equivalency Self-Service (TESS)” to assist students in understanding credit applicability.

US Energy Generation Projects Face Delays Amid Rising AI Demand

The Wall Street Journal (4/24, Hiller, Blunt, Subscription Publication) reports that US energy generation projects are experiencing significant delays, with an Atlas Public Policy analysis of government data showing that about 28 percent of planned wind, solar, and battery projects being postponed or canceled, equivalent to 42,000 megawatts of capacity. The demand for electricity is surging due to the increase in AI-driven data centers, which could consume 9 percent of US electricity by 2030, according to the Electric Power Research Institute. Trade tariffs imposed by President Donald Trump are exacerbating the issue, affecting the supply chain for energy projects. Companies like Mitsubishi Power and GE Vernova are scaling up gas turbine production to meet the demand, with planned increases of 30 percent and 35 percent, respectively.

Amazon, Nvidia: AI Data Center Building Not Slowing Down

NBC News (4/24) reports Amazon and Nvidia leaders on Thursday indicated that AI data center construction isn’t decelerating, “as recession fears have some investors questioning whether tech companies will pull back on some of their plans.” At a conference, Amazon Vice President of Global Data Centers Kevin Miller remarked, “There’s been really no significant change.” Miller added, “We continue to see very strong demand, and we’re looking both in the next couple years as well as long term and seeing the numbers only going up.” Meanwhile, Nvidia Senior Director of Chip Sustainability Josh Parker said, “We haven’t seen a pullback.”

Study Indicates Leading AI Data Center Might Soon Have $200B Price Tag

TechCrunch (4/24, Wiggers) reports that data centers to prime and operate AI may shortly have “millions of chips, cost hundreds of billions of dollars, and require power equivalent to a large city’s electricity grid, if the current trends hold.” This is according to research from personnel at Georgetown, Epoch AI, and Rand that discovered that by June 2030, the preeminent AI data center may cost $200 billion.

dtau...@gmail.com

unread,
May 3, 2025, 9:34:59 AMMay 3
to ai-b...@googlegroups.com

Wikipedia Will Use AI, but Not to Replace Human Volunteers

Wikipedia's three-year AI strategy, released April 30, calls for the use of AI to complement rather than replace its community of editors and volunteers. Explained Wikimedia Foundation's Chris Albon, "We will take a human-centered approach and will prioritize human agency; we will prioritize using open-source or open-weight AI; we will prioritize transparency; and we will take a nuanced approach to multilinguality, a fundamental part of Wikipedia."
[ » Read full article ]

Tech Crunch; Sarah Perez (April 30, 2025)

 

China's Xi Calls for Self-Sufficiency in AI Development

During an April 25 Politburo meeting study session, Chinese President Xi Jinping said China's AI development will involve "self-reliance and self-strengthening." According to the official Xinhua news agency, Xi said, "We must recognize the gaps and redouble our efforts to comprehensively advance technological innovation, industrial development, and AI-empowered applications," with policy support provided in government procurement, intellectual property rights, research, and talent cultivation, among other areas.
[ » Read full article ]

Reuters; James Pomfret; Summer Zhen (April 26, 2025)

 

Intel AI Trick Spots Hidden Flaws in Datacenter Chips

Intel researchers leveraged reinforcement learning to identify more silent data errors in its Xeon processors before they are installed in datacenters. The technique builds on the Eigen tests currently are used to detect such errors. Researchers focused the tests on the area of the chip that uses fuse-multiply-add (FMA) instructions to perform matrix multiplication, which is more vulnerable to silent errors.
[ » Read full article ]

IEEE Spectrum; Katherine Bourzac (April 24, 2025)

 

A New Way to Optimize Complex Coordinated Systems

A method developed by Massachusetts Institute of Technology (MIT) researchers allows simple diagrams to be used to improve software optimization in deep learning models. Based on category theory, the technique facilitates coordination of complex interactive systems by enabling diagrams to "both represent a function and then reveal how to optimally executive it on a GPU," said MIT's Vincent Abbott.
[ » Read full article ]

MIT News (April 24, 2025)

 

AI System Turns Sketches into Code

Computer science researchers at Canada's University of Waterloo developed AI-powered software that can transform free-form sketches into code. With Code Shaping, programmers can use a tablet and stylus to edit code with annotations around and on top the code. The software supports diagrams, charts, mathematical symbols, and other free-form sketches, leveraging AI to interpret and convert them into code.
[ » Read full article ]

University of Waterloo Cheriton School of Computer Science (Canada) (April 24, 2025)

 

UWaterloo Withholds Coding Competition Results over Suspected AI Cheating

The co-chairs of the University of Waterloo's (UWaterloo) Canadian Computing Competition said this year's scores would not be published because "It is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help." A university spokesperson said participants are prohibited from using "AI and other external tools" in the competition, but did not indicate how many competitors violated the rules or what tools were used.
[ » Read full article ]

The Logic; Aimée Look (April 25, 2025)

 

AI Impact on Data Breach Outcomes Remains ‘Limited’

Verizon’s latest Data Breach Investigations Report states that the recent waves of AI uptake have yet to require a cybersecurity overhaul in the corporate world. While AI-generated text in malicious e-mails has doubled in the last year, the report found that the rate of successful phishing breaches remained stable.
[ » Read full article ]

CIO Dive; Lindsey Wilkinson (April 23, 2025)

 

Reddit Users Subjected to AI-Powered Experiment Without Consent

Researchers at Switzerland's University of Zurich are facing criticism for an experiment conducted on Reddit without users' permission. The researchers added more than 1,700 large language model-generated comments to the r/ChangeMyView subreddit without disclosing that the comments were made by an AI. The researchers reportedly informed the AI models that the Reddit users "have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns."

[ » Read full article *May Require Paid Registration ]

New Scientist; Chris Stokel-Walker (April 29, 2025)

 

Chatbots Can Hide Secret Messages in Seemingly Normal Conversations

A system developed by researchers at Norway's University of Oslo could allow people to conceal secret messages within chatbot conversations and share the text though any messaging platform without detection. The researchers altered a large language model to embed the next character of an encrypted message in generated text at specific intervals. The AI will backtrack and try again if it cannot insert the next character while ensuring the sentence sounds like normal conversation.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (April 25, 2025)

 

AI Fumbles Basic Financial Tasks

A Vals AI analysis of 22 general-purpose AI models found they were less than 50% accurate on average when asked to perform the same tasks as entry-level financial analysts. The analysis, which used a proprietary dataset of more than 500 questions, found most of the models had trouble with common tasks, such as searching a U.S. Securities and Exchange Commission database of company filings.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Nitasha Tiku; Andrea Jiménez (April 22, 2025)

 

Houston’s Manufacturing Sector Expands With AI Investments

The Houston Chronicle (4/27, Luck) reports that Houston is experiencing a manufacturing resurgence “that could create thousands of jobs and spur demand for industrial real estate and housing near new factories,” driven by investments from Apple, Nvidia, and Tesla. Nvidia plans to establish “an AI supercomputer factory in Houston within the next 12 to 15 months,” while Apple will open “a 250,000-square-foot AI server facility by 2026.” Foxconn, a partner of both companies, is expanding its industrial footprint in the city. “Houston is getting more and more recognized (as) an innovative city,” said Paul Cherukuri of Rice University. Despite global supply chain challenges and tariffs, experts see strategic advantages in Houston’s central location and workforce. The shift towards high-tech manufacturing could diversify Houston’s economy, with potential spillover effects creating additional jobs.

AI’s Role In Scientific Discovery Seen As Supportive

The Atlantic (4/25, Wong) reported that AI’s potential to solve scientific problems, including curing diseases, is being explored by companies like Google DeepMind, OpenAI, and Anthropic. Despite ambitious claims, experts note that AI’s role is more supportive than revolutionary. AI tools like AlphaFold aid drug design but require human verification. Limitations include data quality and AI’s tendency to hallucinate. Collaborative AI systems, like Google’s AI co-scientist, show promise in hypothesis generation. AI’s ultimate contribution may be improving scientific efficiency, reducing development time, and aiding in hypothesis evaluation.

Surging Electricity Demand From AI Companies Prompts Interest In Reviving Coal-Fired Power Plants

The AP (4/26, Levy) said, “Coal-fired power plants, long an increasingly money-losing proposition in the US, are becoming more valuable now that the suddenly strong demand for electricity to run Big Tech’s cloud computing and artificial intelligence applications has set off a full-on sprint to find new energy sources.” For example, President Trump “is wielding his emergency authority to entice utilities to keep older coal-fired plants online and producing electricity. While some utilities were already delaying the retirement of coal-fired plants, the scores of coal-fired plants that have been shut down the past couple years – or will be shut down in the next couple years – are the object of growing interest from tech companies, venture capitalists, states and others competing for electricity.”

        Amazon, Nvidia Advocate For Diverse Energy Sources To Power AI. CNBC (4/26, Kimball) reports Amazon and Nvidia told oil and gas executives at the Hamm Institute for American Energy that AI’s growing energy demands may require fossil fuels like natural gas in the near term. Amazon VP of Global Data Centers Kevin Miller said, “We’re not surprised by the fact that we’re going to need to add some thermal generation to meet the needs in the short term,” while reaffirming Amazon’s commitment to net-zero carbon by 2040. Nvidia Senior Director of Corporate Sustainability Josh Parker said, “At the end of the day, we need power,” acknowledging varying customer priorities on clean energy. Anthropic co-founder Jack Clark noted the need for 50 gigawatts of new power by 2027 but expressed reservations about coal. Both Amazon and Nvidia avoided directly endorsing coal as a solution.

Intel Outlines Strategy To Compete In AI Chip Market

Reuters (4/25, Bajwa, Nellis, A. Cherney) reported that Intel’s new CEO, Lip-Bu Tan, revealed plans to challenge Nvidia’s dominance in the AI chip market during his first earnings conference call. Tan emphasized that this is “not a quick fix” and that Intel will review its existing products for trends in AI, such as robotics. Intel aims to mimic Nvidia’s approach by offering comprehensive data center solutions. Chief Financial Officer David Zinsner stated that Intel will focus on improving its balance sheet rather than making acquisitions. Tan highlighted a “holistic approach” to redefine Intel’s portfolio, aiming to make it the “platform of choice” for AI customers. Historically, Intel acquired AI startups, but these efforts did not yield significant traction against Nvidia.

Microsoft To Streamline AI Product Offerings

Insider (4/25, Stewart) reported that Microsoft is simplifying its AI offerings by consolidating its Copilot tools into three main solution areas. The current six areas will merge into AI Business Solutions, Cloud & AI Platforms, and Security. This change, announced by Chief Commercial Officer Judson Althoff, aims to streamline sales, reduce customer confusion, and improve product quality. The restructuring includes aligning sales teams and expanding training to support growth in AI investments. The changes are set to begin in Microsoft’s fiscal year starting in July.

Tech Giants Submit AI Policy Recommendations To White House

PYMNTS (4/25) reported Amazon, Anthropic, Meta, Microsoft, and other companies submitted recommendations to the White House on AI regulation, with the federal government releasing the comments in a searchable database. Amazon advocated for energy infrastructure investments, streamlined nuclear power regulations, and global AI standards, saying AI will require “a lot of electricity to power.” The company urged the White House to lead international AI efforts and promote workforce education on practical AI implementation. Amazon also recommended federal agencies adopt AI and cloud technologies to modernize operations. Other companies, including OpenAI and Google, proposed varying approaches, with OpenAI supporting stronger export controls to China and Google advocating for fair use of copyrighted content. The submissions highlighted shared priorities like infrastructure investment and regulatory consistency.

Texas Children’s Hospital Developing Its Own AI

Becker’s Hospital Review (4/25, Bruce) reported that Texas Children’s Hospital has introduced an AI model to estimate bone age in pediatric patients, significantly reducing the time radiologists spend on routine tasks. This AI tool, part of a larger initiative with a dozen in-house AI solutions, allows radiologists to focus on complex procedures by automating simpler tasks. The hospital employs a robust AI governance framework to ensure ethical use, involving clinical and operational leaders to validate and refine AI models while maintaining data privacy and transparency.

Huawei Tests New AI Chip To Rival Nvidia

Reuters (4/27) reported that Huawei Technologies is set to test its new AI processor, the Ascend 910D, in an effort to replace some of Nvidia’s high-end products. The Wall Street Journal reported that Huawei has contacted Chinese tech companies to assess the chip’s technical feasibility. The Ascend 910D aims to surpass Nvidia’s H100 in power, with sample availability expected by late May. Additionally, Huawei plans to start mass shipments of its 910C AI chip to Chinese customers next month. The US has restricted China’s access to Nvidia’s top AI products, including the H100, to curb China’s technological and military advancements. Neither Huawei nor Nvidia provided comments to Reuters.

Georgia Tech Researchers Urge White House To Require “Strong Cybersecurity Controls” For Advanced AI Development

Inside Cybersecurity (4/28, Mitchell) reports researchers at Georgia Tech “are urging the White House to build strong cybersecurity controls around development of advanced artificial intelligence ‘frontier models’ while simultaneously promoting commercial applications, under the Trump administration’s upcoming AI action plan intended to seal U.S. dominance over the technology.” The researchers also “cite cyber attacks from China targeting development of the advanced models and pitch a role for the Cybersecurity and Infrastructure Security Agency in countering the threat.”

Younger Job Seekers Worry About AI’s Impact On College Education, Report Finds

Higher Ed Dive (4/29, Torres) reports that younger job seekers “are more concerned than their older counterparts about AI’s effect on their skill sets and education, according to an Indeed report published Monday.” The report is based on a “Harris Poll survey of 772 US adult workers and job seekers with an associate’s degree or higher for the report.” Nearly half of Gen Z job seekers “say the technology’s adoption has made their college education irrelevant,” compared to about one-third of Millennials. This sentiment is echoed by 1 in 5 Gen Z and baby boomer respondents. In response to AI’s growing influence, companies and technology vendors are offering upskilling programs. Online learning and training platform O’Reilly “found skyrocketing demand for AI-specific training last year, according to a January report,” with the “number of professionals seeking such training” more than quadrupling last year.

Sources Say Administration Considering New Restrictions On AI Chip Exports

Reuters (4/29, Freifeld) reports three sources said the Administration “is working on changes to a Biden-era rule that would limit global access to AI chips, including possibly doing away with its splitting the world into tiers that help determine how many advanced semiconductors a country can obtain.” Reuters adds while the sources “said the plans were still under discussion and warned they could change,” this “could open the door to using U.S. chips as an even more powerful negotiating tool in trade talks.”

Nvidia CEO Says China “Not Behind” In AI Developments

CNBC (4/30, Leswing) reports Nvidia CEO Jensen Huang stated on Wednesday that China is “not behind” in AI, with Huawei being a formidable tech company. Speaking at a tech conference in Washington, DC, Huang emphasized the narrow gap between the US and China in AI development. Nvidia, a key player in AI chip manufacturing, faces US export restrictions, impacting its revenue by $5.5 billion. Huang urged the US government to enhance AI policies and focus on competitiveness. Trump praised Nvidia’s $500 billion AI infrastructure plan in the US Huang affirmed Nvidia’s commitment to US manufacturing, partnering with Foxconn near Houston.

Nvidia CEO Highlights Huawei’s AI Capabilities With House Foreign Affairs Committee

Reuters (5/1, Nellis) reports in continuing coverage that Nvidia CEO Jensen Huang addressed concerns about Huawei’s “growing artificial intelligence capabilities” during a closed-door meeting of the House Foreign Affairs Committee on Thursday. Nvidia executives discussed Huawei’s “AI chips and the potential impact of US restrictions on Nvidia’s chips in China,” and a senior committee staffer “highlighted the risks of Huawei chips gaining global market demand if optimized AI models were trained on them.” Nvidia spokesperson John Rizzo “stated that Huang emphasized the strategic importance of AI as national infrastructure and the need for US manufacturing investment.” Nvidia has developed “compliant chips for the Chinese market,” but was recently asked to halt sales of it’s “latest China-specific chip, the H20.” Huawei is reportedly “preparing mass shipments of a competing chip.”

DOE Considering Using Federal Lands For Developing AI Data Centers With Co-Located Generation

Renewable Energy World (4/30, Gerke) reports that the US Department of Energy (DOE) is considering using federal lands for developing AI data centers with co-located power generation facilities. This initiative follows an executive order issued by former President Joe Biden to expand AI data centers in the US. The DOE’s Office of Policy released a Request for Information (RFI) this month to gather insights from developers and the public to facilitate private-public partnerships and enable AI infrastructure construction at DOE sites by 2027. The RFI seeks input on development approaches, technology solutions, and economic considerations for establishing AI infrastructure.

Administration Pushes AI In K-12 Education

News From the States (4/30) says that President Donald Trump “released an order to incorporate artificial intelligence education, training and literacy in K-12 schools for both students and teachers.” This initiative aims to create an “AI-ready workforce and the next generation of American AI innovators.” A task force consisting of federal departments and agencies “will be developing the program over the next 120 days.” Bill Salak, CTO of Brainly, “is happy to see an initiative that will prompt educators to incorporate AI literacy in schools” but emphasizes the need for specific goals and outcome measurements. The executive order plans to “develop online resources focused on teaching K-12 students foundational AI literacy and critical thinking skills,” and establish AI-related apprenticeship programs.

Survey Highlights Tech Leaders’ Views On AI In Higher Ed

Inside Higher Ed (5/1, Palmer) reports that the Inside Higher Ed/Hanover Research 2025 Survey of Campus Chief Technology/Information Officers, published on Thursday, reveals that one in three chief technology and information officers “says their institution is significantly more reliant on artificial intelligence than it was even last year.” However, those same campus tech leaders “also indicate their institutions are struggling with AI governance.” Conducted between February and March, the survey included responses from 108 CTOs. Only a third of respondents “say investing in generative artificial intelligence is a high or essential priority for their institution, and just 19 percent say higher education is adeptly handling the rise of AI.” The survey also highlights cybersecurity concerns, with only 30 percent of CTOs “highly confident their college’s practices can prevent cyberattackers from compromising data and intellectual property, or launching a ransomware event.” The survey underscores challenges in digital transformation, such as insufficient IT personnel and financial investment.

India Selects Sarvam AI To Develop Large Language Model

Entrepreneur Magazine (4/28) reported that the Indian government has chosen Bengaluru-based startup Sarvam AI to create the nation’s first foundational large language model (LLM) as part of the INR 10,000-crore IndiaAI Mission. Union Minister for Electronics and IT, Ashwini Vaishnaw, announced the selection on Saturday, expressing confidence in Sarvam’s ability to deliver a globally competitive model. Sarvam, supported by Lightspeed Venture Partners and Peak XV Partners, will develop the model with 70 billion parameters within six months. The government will provide 4,000 high-end GPUs for this project, facilitated by companies like Yotta Data Services and Tata Communications. Sarvam plans to create three model variants: Sarvam-Large, Sarvam-Small, and Sarvam-Edge, to address diverse use cases. Co-founder Vivek Raghavan stated that this initiative aims to build critical national AI infrastructure, enhancing India’s autonomy in AI and ensuring cultural relevance.

Indiana Lawmakers Pass Bills Shifting AI Data Center Energy Costs

Insider (5/2, Boudreau) reports Indiana Governor Mike Braun is expected to sign a bill requiring Big Tech’s AI data centers to cover 80 percent of new power costs if they seek faster regulatory approval, while another law allows utilities to pass nuclear energy exploration costs to consumers. AWS, Google, Microsoft, and Meta plan $15 billion in Indiana data center investments, which could demand more power than the state’s 7 million residents by 2035. Critics, including Citizens Action Coalition Executive Director Kerwin Olson, called the bills “a disaster for Hoosier ratepayers” that will “exacerbate the utility affordability crisis.” Republican Representative Ed Soliday defended the laws, saying small modular nuclear reactors (SMRs) offer clean energy solutions, while Democratic Representative Matt Pierce argued utilities should bear unproven technology risks. Indiana Michigan Power is exploring SMRs, with costs potentially reaching $4 billion.

Texas School Utilizing AI To Teach Core Academic Lessons

Newsweek (5/1, Miller) reports that the “days of dodging class or suffering from a lack of motivation appear to be a thing of the past at Alpha School.” The private pre-K through eighth grade institution in Brownsville, Texas, utilizes “personalized artificial intelligence to teach an entire day of core academic lessons in just two hours.” Then, the “tech-savvy students then spend their afternoons working on non-academic critical life skills like public speaking, financial literacy or even how to ride a bike.” In addition, staff – known there as “‘guides’ rather than teachers – say they strive to facilitate a sense of independence into each child while overseeing a supportive, nurturing environment like any attentive teacher in any solid school district in America.”

dtau...@gmail.com

unread,
May 11, 2025, 7:38:19 PMMay 11
to ai-b...@googlegroups.com

U.S. to Rescind, Replace Global AI Chip Export Curbs

The White House plans to rescind and modify a rule set to take effect on May 15 that would curb exports of AI chips, a spokeswoman for the U.S. Department of Commerce confirmed on Wednesday. The regulation was aimed at restricting AI chip and technology exports to rivals by dividing the world into tiers based on each nations relationship to the U.S. The Commerce spokeswoman said officials "didn't like the tiered system" and that the rule was "unenforceable." She added that debate was continuing on the best course of action.
[
» Read full article ]

Reuters; Karen Freifeld; Arsheeya Bajwa (May 7, 2025)

 

EU Misses Deadline to Tame AI Models

The European Commission missed the May 2 deadline for drafting guardrails for the most advanced AI models. Thirteen academics, including ACM A. M. Turing Award laureate Yoshua Bengio have been collaborating on a voluntary "code of practice" for advanced AI models. The latest proposed rules would have signatories disclose relevant information about their models to authorities and customers, develop a policy to comply with copyright rules, and take steps to mitigate "systemic risks."
[
» Read full article ]

Politico Europe; Pieter Haeck (May 6, 2025)

 

CEOs Push AI, CS as Graduation Requirement

As part of an effort led by Code.org, more than 200 CEOs have signed a letter calling on state leaders to require students take AI and computer science classes in order to graduate from high school. Among those who signed the letter were the heads of American Express, Airbnb, Dropbox, LinkedIn, Salesforce, Microsoft, Yahoo, Zoom, Uber, and several coding education and ed-tech companies.
[ » Read full article ]

Axios; April Rubin (May 5, 2025)

 

Self-Driving Cars Can Tap into AI-Powered Social Network

New York University (NYU) researchers have developed an AI model-sharing framework for self-driving cars that enables them to share information about traffic patterns, road conditions, and traffic signs and signals without establishing direct connections or sharing the driver's personal information or driving patterns. With the Cached Decentralized Federated Learning (Cached-DFL) framework, self-driving cars could learn how to handle various scenarios from vehicles that have encountered such challenges in other locations.
[ » Read full article ]

Live Science; Lisa D. Sparks (May 2, 2025)

 

Pentagon's AI Metals Program Goes Private

The U.S. Department of Defense's Open Price Exploration for National Security AI metals program has been taken over by the nonprofit Critical Minerals Forum (CMF), with the goal of reducing manufacturers' reliance on China. The AI model is intended to calculate the cost of a metal by factoring in labor, processing, and other costs and factoring out Chinese market manipulation, enabling manufacturers' to increase their metal supply deals with Western mines.
[ » Read full article ]

Reuters; Ernest Scheyder (May 2, 2025)

 

LeCun Recognized by NYAS for Advancing AI

ACM A.M. Turing Award laureate Yann LeCun was recognized with The New York Academy of Sciences' inaugural Trailblazer Award for his pioneering research in machine learning, computer vision, mobile robotics, and computational neuroscience. "I've become a public advocate of science and rationalism," LeCun said during his acceptance speech. "We have to stand up for science."
[ » Read full article ]

The New York Academy of Sciences; Nick Fetty (May 1, 2025)

 

Time Saved by AI Offset by New Work Created

Economists at the University of Chicago and Denmark's University of Copenhagen examined the effect of AI chatbot adoption in 11 occupations deemed vulnerable to automation, such as software developers, with data covering 25,000 workers and 7,000 workplaces in 2023 and 2024. According to the study, AI tools saved time for 64% to 90% of users, but created new job tasks for 8.4% of workers. The study concluded that "AI chatbots have had no significant impact on earnings or recorded hours in any occupation" during the period studied.
[ » Read full article ]

Ars Technica; Benj Edwards (May 1, 2025)

 

AI Is Draining Water from Areas That Need It Most

The datacenters that power AI consume large amounts of water to cool hot servers and, indirectly, from the electricity needed to run these facilities. A Bloomberg analysis found that about two-thirds of datacenters built or in development since 2022 are in places already experiencing high levels of water stress in the U.S. In China and India, an even greater proportion of datacenters are located in drier areas.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Leonardo Nicoletti; Michelle Ma; Dina Bass (May 8, 2025)

 

Google to Roll Out AI Chatbot to Children Under 13

Google said its Gemini AI chatbot will be made available to children under 13 with parent-managed Google accounts through Family Link next week. In an email to parents, Google said children can use the Gemini Apps to ask questions, create stories, and get help with homework. A Google spokesperson said Gemini data from children with Family Link accounts will not be used to train the AI.

[ » Read full article *May Require Paid Registration ]

The New York Times; Natasha Singer (May 5, 2025)

 

Mideast Titans Step Back from AI Race

UAE tech firms are reconsidering the feasibility of developing AI models from scratch as the global AI race continues to be led by the U.S. and China. UAE tech conglomerate G42 has pulled resources from its Jais model and shifted focus to building bespoke features on top of existing AI models. Falcon, the UAE government-backed Technology Innovation Institute's open-source AI system, meanwhile, has fallen behind as open-source alternatives from Meta and China's DeepSeek continue to advance.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Mark Bergen; Omar El Chmouri (May 5, 2025)

 

Big Tech Accused of Distorting Key AI Rankings

Researchers at U.S. nonprofit Cohere Labs found the Chatbot Arena benchmark, which ranks AI models, is a “distorted playing field” due to policies allowing big tech companies to discard poorly scoring models. Their analysis of more than 2 million head-to-head tests from January 2024 to April 2025 found Meta tested 27 private AI model variants and Google tested 10 prior to the launch of Llama 4 and Gemma 3, respectively, but these models did not feature in league tables.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (May 1, 2025)

 

Amazon Uses AI To Advance Sustainability Goals

The Cool Down (5/4) reports Amazon is leveraging AI to improve efficiency and reduce environmental impact, according to a recent Technology Magazine report. The company’s Package Decision Engine, which uses machine learning and computer vision, has helped avoid over 2 million tons of packaging material globally, said Amazon Chief Sustainability Officer Kara Hurst. Other initiatives include FlowMS, which detected leaks saving 9 million gallons of water annually, and Advanced Refrigeration Monitoring, optimizing energy use in cold storage. Amazon also partnered with AES Corporation to develop Maximo, a computer vision robot that cuts solar panel installation time and costs by up to 50%. Hurst said Amazon is “pioneering AI applications to accelerate our decarbonization efforts” and sees significant potential for further sustainability improvements.

NVIDIA Tweaks AI Chips For Chinese Market, Report Says

Reuters (5/2) reported that NVIDIA is modifying its AI chips again to comply with US export regulations, according to a report from The Information. The company “has spoken with customers, including Alibaba Group, TikTok-parent ByteDance and Tencent Holdings, the report said, citing three people involved in the conversations.”

Apple Partners With Anthropic For AI Coding Platform

TechCrunch (5/2, Zeff) reported that Apple and Anthropic are collaborating on a “vibe-coding” software platform using generative AI, Bloomberg reported Friday. This platform, an updated version of Apple’s Xcode, will employ Anthropic’s Claude Sonnet model to assist programmers in writing, editing, and testing code. While Apple plans to use this internally, it has not decided on a public release. To boost its AI initiatives, Apple is leveraging partnerships, including OpenAI’s ChatGPT for Apple Intelligence features, with potential future inclusion of Google’s Gemini. Anthropic’s Claude models are favored among developers for coding, particularly on platforms like Cursor and Windsurf.

Illinois Lawmaker To Introduce Mandatory AI Chip Tracking Legislation

Reuters (5/5, Nellis, Cherney) reports Rep. Bill Foster (D-IL) plans to “introduce legislation in coming weeks to verify the location of artificial-intelligence chips like those made by Nvidia after they are sold.” The measure has bipartisan support and “aims to address reports of widespread smuggling of Nvidia’s chips into China in violation of US export control laws.” Foster, who once worked “as a particle physicist, said the technology to track chips after they are sold is readily available, with much of it already built in to Nvidia’s chips. Independent technical experts interviewed by Reuters agreed.” The proposed bill would require the Department of Commerce to establish regulations within six months. Tim Fist of the Institute for Progress “said such tracking would provide a general, country-level location for chips... far more information than the Bureau of Industry and Security, the arm of the U.S. Commerce Department responsible for enforcement of export controls, currently has.”

Report Highlights How Professors View AI’s Role In Higher Education

Inside Higher Ed (5/8, Mowreader) reports that a new study “from researchers at Ithaka S+R found that regardless of their own feelings about artificial intelligence, the average professor sees the integration of AI tools into teaching and learning as inevitable.” The report, based on 246 interviews at 19 colleges in the US and Canada, highlights that while “keeping up with students was a significant motivating factor for instructors to begin incorporating generative AI,” there is a need for improved AI literacy among students and faculty. The study reveals that STEM professors are “more likely to say they were specialists in AI and that their colleagues had similarly high levels of AI familiarity” than those in the social sciences or humanities. Despite the prevalence of AI tools, many instructors face challenges in “learning how to use the tools themselves” and applying them.

FDA Appoints First Chief AI Officer

“The FDA has appointed Jeremy Walsh as its first chief artificial intelligence officer, marking a step in tech modernization at the agency,” Becker’s Hospital Review (5/7, Murphy) reports. The article says, “Walsh, who announced the career move in a May 2 LinkedIn post, will also oversee information technology in the role.”

Tech Leaders Testify Before Congress On AI Competition With China

The AP (5/8, Brown) reports, “OpenAI CEO Sam Altman and executives from Microsoft and chipmaker Advanced Micro Devices testified on Capitol Hill about the biggest opportunities, risks and needs facing an industry which lawmakers and technologists agree could fundamentally transform global business, culture and geopolitics.” This “hearing comes as the race to control the future of artificial intelligence is heating up between companies and countries.” Reuters (5/8, Alper, Godoy) reports the executives “said that while the U.S. is ahead in the artificial-intelligence race, Washington needs to boost infrastructure and champion AI chip exports to stay ahead of Beijing.”

China’s Chip Industry Sees Progress Despite US Sanctions

The Economist (UK) (5/8) reports behind a paywall that despite US sanctions aimed at curbing China’s AI advancement, Chinese chip companies are making significant strides. Huawei’s new CloudMatrix chip cluster, by “stitching together 384 of its Ascend AI chips,” reportedly outperforms Nvidia’s NVL72 cluster, although it uses more power. Other Chinese chipmakers like Cambricon and Hygon are developing chips to replace Nvidia’s A100. China is also advancing in high-bandwidth memory, with CXMT catching up to industry leaders like SK Hynix and Micron. Additionally, Chinese firms like AMEC and Naura are developing advanced chipmaking tools. However, challenges remain, including a reliance on foreign components and the dominance of Nvidia’s CUDA software platform.

FDA Plans Full AI Integration By June 30

Reuters (5/8, Singh) reports that the Food and Drug Administration announced on Thursday that all its centers will begin using artificial intelligence immediately, with full integration set for June 30. The decision follows the completion of a new generative AI pilot for scientific reviewers. The AI tools aim to reduce time spent on repetitive tasks, expediting the drug review process. The FDA plans to enhance usability, expand document integration, and tailor outputs to specific needs while ensuring information security and compliance. Wired reported that the FDA has been in discussions with OpenAI and associates of Elon Musk’s Department of Government Efficiency regarding AI use.

dtau...@gmail.com

unread,
May 17, 2025, 9:08:54 AMMay 17
to ai-b...@googlegroups.com

DeepMind Unveils General-Purpose Science AI

Google DeepMind unveiled a general-purpose AI that can solve computer science and mathematics problems. Based on Google's Gemini family of large language models (LLMs), researchers have used the AlphaEvolve AI to solve open math problems, improve the design of Google's next generation of tensor processing units, and determine how to make full use of the company's global computing capacity.

[ » Read full article *May Require Paid Registration ]

Nature; Elizabeth Gibney (May 15, 2025)

DeepMind’s blog post: https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

 

China Restricts Children’s Use of AI in Schools

China is restricting the extent to which children can use generative AI in primary and secondary schools, according to local reports. Primary school students are prohibited from using unrestricted generative AI tools on their own, although an instructor may use the tech to assist with teaching, according to the local government report. Middle schoolers are permitted to explore how generative AI reasons and analyzes information, while high schoolers can use the tech more broadly.
[ » Read full article ]

CNBC; Evelyn Cheng (May 15, 2025)

 

Nvidia Sending 18,000 AI Chips to Saudi Arabia

Nvidia will sell more than 18,000 of its latest AI chips to newly launched Saudi company Humain, owned by Saudi Arabia’s Public Investment Fund. The chips will be used in the construction of datacenter infrastructure. Humain’s plans include eventually deploying “several hundred thousand” Nvidia graphics processing units (GPUs). AMD said on Tuesday it would also supply chips to Humain.
[ » Read full article ]

CNBC; Kif Leswing (May 13, 2025)

 

Preparing Science Educators to Use, Teach AI

University of Florida's Bruce MacFadden led an interdisciplinary research team that developed a free, optional online curriculum that uses the science of paleontology to introduce Florida middle schooler students to AI. The curriculum, Shark AI, uses fossil shark teeth to explain data collection and object classification, teaches students how train and evaluate machine earning models, and helps them develop unique AI models.
[ » Read full article ]

NSF News (May 13, 2025)

 

AI Headphones Translate Multiple Speakers at Once

A headphone system developed by University of Washington researchers leverages AI to translate several speakers simultaneously. The Spatial Speech Translation system automatically detects the number of speakers in a space and translates their speech while maintaining the volume and expressive qualities of each voice. It also tracks the quality and direction of the voices as the speakers move their heads.
[ » Read full article ]

UW News; Stefan Milne (May 9, 2025)

 

LegoGPT Creates Lego Designs Using AI, Text Inputs

The LegoGPT AI model created by Carnegie Mellon University researchers outputs LEGO designs from text inputs. The model, available for free on GitHub, was trained on a dataset with more than 47,000 LEGO structures that build over 28,000 unique 3D objects. This was then used to train the AI model, allowing it to create unique and original designs solely from text inputs.
[
» Read full article ]

Tom's Hardware; Jowi Morales (May 9, 2025)

 

AI Execs Say U.S. Must Increase Chip Exports, Improve Infrastructure

At a May 8 hearing of the U.S. Senate Commerce Committee, Microsoft, OpenAI, and Advanced Micro Devices executives testified that infrastructure investments and increased AI chip exports are necessary for the U.S. to stay ahead of China in the artificial intelligence (AI) race. "The number one factor that will define whether the U.S. or China wins this race is whose technology is most broadly adopted in the rest of the world," said Microsoft President Brad Smith. OpenAI CEO Sam Altman added that investment in datacenters, power stations, and other infrastructure is "critical."
[
» Read full article ]

Reuters; Alexandra Alper; Jody Godoy (May 8, 2025)

 

Tech Company Responsible for Global IT Outage to Cut Jobs, Citing AI

Cybersecurity firm CrowdStrike, whose faulty software update brought down 8.5 million Windows systems worldwide last July, said AI efficiencies will result in the loss of 500 jobs in the company globally, amounting to about 5% of its workforce. CEO George Kurtz said, “We’re operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs.”
[ » Read full article ]

The Guardian (U.K.); Josh Taylor (May 9, 2025)

 

U.S. Scraps ‘AI Diffusion’ Rule

The U.S. Department of Commerce said Tuesday it is rescinding the “AI Diffusion Rule,” which imposed caps on how many chips certain countries can buy. “The Trump Administration will pursue a bold, inclusive strategy to American AI technology with trusted foreign countries around the world, while keeping the technology out of the hands of our adversaries,” said Jeffrey Kessler, U.S. Under Secretary of Commerce for Industry and Security.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Sherry Qin (May 14, 2025)

 

UAE to Introduce AI Classes for Children as Young as Four

The United Arab Emirates is rolling out an AI curriculum in state schools for children as young as four as it seeks to become a regional AI hub. Ethical use of AI will be a component of the curriculum, which is capped at 20 hours per academic year. Children will also be taught how to write prompts for chatbots and how to use AI for research purposes without plagiarizing.

[ » Read full article *May Require Paid Registration ]

Financial Times; Chloe Cornish (May 12, 2025)

 

AI Video of Murder Victim Addresses Killer in Court

At a May 1 court hearing in Maricopa County Superior Court in Arizona, Stacey Wales used an AI-generated video of her brother, Christopher Pelkey, who was killed in a 2021 road-rage incident, as a victim impact statement. Wales' husband used AI tools to edit a photo of Pelkey, clone his voice based on old videos, and animate his face. The video featured a speech written by Wales based on what she thought Pelkey would say.


[
» Read full article *May Require Paid Registration ]

The Washington Post; Daniel Wu (May 8, 2025)

 

Officials Across US Seeking To Host Stargate Project

The Washington Post (5/10) reported that real estate developers, landowners, economic development agencies, and elected officials are “vying for a piece of Stargate, which is being promoted as a historic project needed to turbocharge American AI and fend off China.” The project was announced by President Trump in January and will “include five to 10 huge data centers stocked with powerful computer chips to support AI development.” Trump also said the project would “create over 100,000 jobs almost immediately.” OpenAI and two of its partners, software maker Oracle and Japanese investment firm SoftBank Group, “have said they aim to invest $500 billion into Stargate over the next four years.”

Amazon Fulfillment Center Showcases AI-Powered Robotics

WSB-TV Atlanta (5/9) reported Amazon’s 2.5 million-square-foot fulfillment center in Stone Mountain uses AI-powered robots and 20 miles of conveyor belts to streamline operations. Senior Operations Manager Steve Robinson said combining “Amazon intelligence with the talents of our people” allows employees to focus on higher-level tasks. Robots handle heavy lifting, moving 300-pound shelving units, while standardized totes ensure efficiency. Employees sort, scan, and stow items, with AI optimizing packaging by dispensing the right amount of tape. Robinson said the system prevents repetitive work, offering employees $19/hour wages with regular raises and cross-training opportunities. When asked about job displacement, he said, “I’d rather give someone an opportunity to use a higher-level skill.” The process ends at the slam farm, where packages are routed for delivery.

Dow, Google AI Collaboration Aims To Boost Plastic Recycling

The OPI (5/9, Davies, Subscription Publication) reported that global materials science company Dow and Google have partnered to explore AI’s potential in recycling challenging materials, notably soft, flexible plastics. The collaboration combines Dow’s “materials expertise with Google’s AI and cloud capabilities” to enhance recycling solutions for these hard-to-recycle plastics. The goal is to improve material recognition and sorting, thereby increasing recovery rates and fostering sustainable, circular markets for these waste streams.

AI Alters College Admissions Processes

Forbes (5/13, Rim) reports that technological advancements, particularly artificial intelligence, are reshaping college admissions. Admissions officers can distinguish between AI-generated writing and student-authored work, impacting the weight of personal essays. A report from The Daily Tar Heel states that the University of North Carolina at Chapel Hill uses AI to evaluate writing quality before individual review. Inside Higher Education’s 2023 survey shows 50% of colleges use AI in admissions. Students are advised to use AI for data organization but avoid relying on it for essay writing to maintain originality. AI tools can help in STEM and humanities research by organizing and synthesizing data, but critical thinking and creativity remain crucial for standing out.

US Tech Firms Announce AI Deals In Middle East

Reuters (5/14, Cherney, Nellis) reports that US tech companies “on Tuesday announced artificial intelligence deals in the Middle East,” coinciding with US President Donald Trump’s $600 billion commitments from Saudi Arabia to US firms during his Gulf tour. Nvidia “said it will sell hundreds of thousands of AI chips in Saudi Arabia,” starting with 18,000 “Blackwell” chips to Humain, an AI startup backed by Saudi’s sovereign wealth fund. Advanced Micro Devices (AMD) and Humain have agreed on a $10 billion collaboration, while Qualcomm signed a memo with Humain “to develop and build a data centre central processor (CPU).” The White House disclosed that Saudi firm DataVolt will invest $20 billion in US AI data centers and infrastructure. Alphabet, DataVolt, Oracle, Salesforce, AMD, and Uber “will invest $80 billion in cutting-edge transformative technologies in both countries.”

Saudi Arabia Launches New AI Investment Firm

Financial Post (CAN) (5/12) reports that Saudi Arabia has established HUMAIN, a new company aimed at investing across the artificial intelligence value chain. Owned by the kingdom’s Public Investment Fund, HUMAIN will provide data centers, AI infrastructure, cloud capabilities, and Arabic large language models. Crown Prince Mohammed Bin Salman will chair the firm, which will serve as an AI hub for sectors such as energy, health care, manufacturing, and financial services. This launch coincides with an anticipated visit by US President Donald Trump, where AI is expected to be a key agenda topic. The visit may see the lifting of restrictions on AI chip access for Saudi Arabia and its regional partners. The Saudi-US Investment Forum on Tuesday will feature prominent tech executives.

IBM Hired More Staff After AI Integration

The Economic Times (IND) (5/10) reported that IBM has increased its workforce by hiring more programmers and salespeople after replacing approximately 200 human resources employees with AI agents. IBM CEO Arvind Krishna stated that the company’s total employment has risen despite these reductions. Krishna explained to The Wall Street Journal that AI and automation have been integrated into certain enterprise workflows, allowing IBM to invest more in areas like software engineering, sales, and marketing. He emphasized that these are “critical thinking” domains where human interaction is essential, as opposed to roles focused on routine tasks. IBM did not disclose the timeframe over which the job reductions occurred.

dtau...@gmail.com

unread,
May 24, 2025, 8:09:06 AMMay 24
to ai-b...@googlegroups.com

UAE Launches Arabic Language AI Model

The United Arab Emirates this week launched a new Arabic language AI model. Falcon Arabic, developed by Abu Dhabi's Advanced Technology Research Council (ATRC), aims to capture the linguistic diversity of the Arab world through a "high-quality native (non-translated) Arabic dataset," according to a statement. Said ATRC Secretary General Faisal Al Bannai, "Today, AI leadership is not about scale for the sake of scale. It is about making powerful tools useful, usable, and universal."
[
» Read full article ]

Reuters; Yousef Saba (May 21, 2025)

 

Push Back on Effort to Stop States from Regulating AI

A provision in a package of tax and spending cuts just approved by the U.S. House would institute a 10-year ban on state enforcement of laws or regulations governing AI models or automated decision systems. The provision has raised concerns among 141 organizations, including academic institutions, advocacy groups, and employee coalitions that signed a letter drafted by the nonprofit Demand Progress addressing their concerns to members of Congress.
[
» Read full article ]

CNN; Clare Duffy (May 20, 2025)

 

AI a Greater Threat to Women's Work Than Men's, UN Suggests

A study by the UN's International Labor Organization found that AI is poised to transform 9.6% of jobs traditionally performed by women, versus 3.5% of jobs traditionally performed by men, particularly in high-income countries. The report stated, "We stress that such exposure does not imply the immediate automation of an entire occupation, but rather the potential for a large share of its current tasks to be performed using this technology."
[
» Read full article ]

Reuters; Olivia Le Poidevin (May 20, 2025)

 

Miami Schools Lead Students into the AI Future

Miami-Dade County Public Schools, which two years ago blocked AI chatbots over fears of mass cheating and misinformation is leading a national experiment to integrate generative AI technologies into teaching and learning. Over the last year, the district has trained more than 1,000 educators on new AI tools and is now introducing Google chatbots for more than 105,000 students in high school.

[ » Read full article *May Require Paid Registration ]

The New York Times; Natasha Singer (May 19, 2025)

 

AI Shapes NFL Schedule

The NFL used Fastbreak AI to create the schedule for its upcoming season. Fastbreak's AI model considers various road trip rules and multiple broadcast and streaming partners, among other variables, assigning a score for each road trip. Fastbreak's John Stewart said, "Every time we find a new score, that becomes sort of a starting point. It keeps throwing it out there until it gets lower and lower scores. So it's looking for schedules that violate no rules, and it's searching literally billions and billions and trillions of potential schedules to get to that point."

[ » Read full article *May Require Paid Registration ]

The Washington Post; Rick Maese (May 15, 2025)

 

College Students Anxious About AI-Detection Software In Academia

The New York Times (5/17, Holtermann) reported that in interviews, “high school, college and graduate students described persistent anxiety about being accused of using AI on work they had completed themselves – and facing potentially devastating academic consequences,” due to flawed AI-detection software. Leigh Burrell, a computer science major at the University of Houston-Downtown, experienced this firsthand when her professor accused her of using an AI chatbot for a writing assignment. Despite her evidence of a two-day drafting process, the work was flagged by Turnitin’s AI-detection service. The incident prompted her to upload “a 93-minute YouTube video documenting her writing process” for future submissions. Some universities, including the University of California, Berkeley, have chosen to disable Turnitin’s AI-detection feature due to reliability concerns.

Researchers Measure ChatGPT’s Performance In College Engineering Class

Inside Higher Ed (5/19, Mowreader) reports, “Graduate students at the University of Illinois at Urbana-Champaign’s college of engineering integrated a large language model into an undergraduate aerospace engineering course to evaluate its performance compared to the average student’s work.” The study, conducted by Gokul Puthumanaillam and Melkior Ornik, aimed to understand how minimal student effort, supplemented by AI, would fare academically. The chatbot “achieved a B grade (82.2 percent), slightly below the class average of 85 percent.” It excelled in multiple-choice questions but struggled with programming projects, “lacking the optimization and robustness” of high-quality submissions. Researchers “recommend faculty members integrate project work and open-ended design challenges to evaluate students’ understanding and technical capabilities, particularly in synthesizing information and making practical judgements.”

Amazon Web Services Chief Says UK Needs More Nuclear Energy For AI Data Centers

The Register (UK) (5/16) reported Amazon Web Services (AWS) CEO Matt Garman said the UK needs more nuclear energy to power AI data centers. AWS plans to invest £8 billion in the UK by 2028 to build its digital and AI infrastructure. The UK government is expediting the building of data facilities to drive AI development through its AI Opportunities Action Plan. Concerns are rising over the energy needed to support the growth in AI services, with global consumption by data centers expected to double by 2030. Garman told the BBC that nuclear is a “great solution” to data center energy requirements, calling it “an excellent source of zero-carbon, 24/7 power.”

Studies Reveal AI’s Impact On College Students’ Critical Thinking Skills

The Hechinger Report (5/19, Barshay) says that two new studies “were conducted by a team of international researchers who studied how Chinese students were using ChatGPT to help with English writing, and by researchers at Anthropic, the company behind the AI chatbot Claude.” They both come to a similar conclusion: Many students are letting AI do important brain work for them. The Researchers in China and Australia found that Chinese students using ChatGPT “improved their essays the most – even more than the group with human writing teachers,” but did not learn more or feel motivated. They relied on AI, showing “potential metacognitive laziness.” Similarly, Anthropic’s study of university students using AI bot Claude revealed students often offloaded critical thinking tasks to AI. Anthropic researchers warned this could hinder foundational skill development. The studies highlight the need for educators “to redesign assignments so that students cannot complete them by asking AI to do it for them.”

Nvidia, MGX Teaming Up With French Companies To Build Europe’s Biggest AI Data Center Campus

Bloomberg (5/19, Subscription Publication) reports Nvidia and “Abu Dhabi investment vehicle MGX are partnering with French firms to establish what they say will be Europe’s largest artificial intelligence data center campus, advancing French and Emirati ambitions in the field.” The objective is to construct a campus in proximity to Paris “with a capacity of 1.4 gigawatts, the companies said Monday in a joint statement with French state-owned investment firm Bpifrance SACA and national AI champion Mistral AI.” Moreover, the announcement came at the Choose France summit.

Microsoft To Offer AI Models From Musk’s xAI At Its Data Centers

Reuters (5/19, Nellis) reports Microsoft “said on Monday it would offer new AI models made by Elon Musk’s xAI, Meta Platforms and European startups Mistral and Black Forest Labs hosted in its own data centers, and unveiled a new artificial-intelligence tool designed to complete software coding tasks on its own.” The news, which was revealed at a Microsoft conference, underlined the shifting nature of the firm’s “relationship with ChatGPT creator OpenAI, which Microsoft has backed and which announced a directly competing product last week.” Reuters adds that Microsoft “has recently situated itself as a more neutral player in the AI arms race, showing less appetite to shell out huge sums of cash to fund OpenAI’s research ambitions while also working with a broader array of AI players, all with an eye on expanding sales while keeping a lid on costs.”

Miami Schools Integrate AI Tools In Education

The New York Times (5/19, Singer) reports that Miami-Dade County Public Schools, “the nation’s third-largest school district, is at the forefront of a fast-moving national experiment to embed generative AI technologies into teaching and learning.” Recently, a social studies teacher at Southwest Miami Senior High School used Google’s Gemini chatbot “to role-play American presidents,” engaging students in analyzing President John F. Kennedy’s speeches. The district has trained “more than 1,000 educators on new AI tools and is now introducing Google chatbots for more than 105,000 high schoolers,” marking the largest US deployment of its kind. The district’s initiative includes workshops for educators, focusing on ethical use and critical assessment of AI tools.

AI Cheating Concerns Rise In Higher Education

Inside Higher Ed (5/20, Flaherty) reports that concerns about AI-assisted cheating in higher education are increasing. The issue gained attention with Columbia University “suspending a student who created an AI tool to cheat on ‘everything.’” According to a survey by the American Association of Colleges and Universities and Elon University, 59 percent of academic leaders “said cheating has increased since generative AI tools have become widely available.” Institutions are encouraged to develop clear AI usage guidelines and integrate AI literacy into curricula. Additionally, Inside Higher Ed’s survey found that “just 11 percent of CTOs said their institution has a comprehensive AI strategy.” Connie Ledoux Book, president of Elon University says, “Institutions must lead with clarity, consistency and care to prepare students for a world where ethical AI use is a professional expectation.”

Site Of First Stargate AI Data Center Highlighted

Bloomberg (5/20, Subscription Publication) highlights the first location for the Stargate Project, with the specific site being located in Texas. Stargate “is a collaboration of OpenAI, Oracle and SoftBank, with promotional support from President Donald Trump, to build data centers and other infrastructure for artificial intelligence throughout the US.” The firms have vowed to spend up to $500 billion, a figure so big that it’s difficult to think it’ll really come to fruition. Bloomberg adds, “But at least for this one, in Abilene, Texas, 180 miles west of Dallas, Chase Lochmiller says they’re good for the money.” Lochmiller serves as CEO of Crusoe, a startup which he co-established and which assists with developing AI data centers.

NVIDIA To Build Semi-Custom AI Infrastructure With Hyperscalers

CRN (5/19, Martin) reports, “NVIDIA unveiled NVLink Fusion, a silicon offering that it said will allow the company to use its NVLink interconnect technology to build semi-custom, rack-scale AI infrastructure with hyperscalers.” NVIDIA said “several semiconductor firms … will adopt NVLink Fusion to create custom AI silicon that will be paired with the AI infrastructure giant’s Arm-based Grace CPUs.” Partners include MediaTek, Marvell, Astera, and Synopsys.

Bezos Earth Fund Awards AI Grants For Climate

Axios (5/21, Geman) reports that the Bezos Earth Fund has announced the first recipients of its grant program aimed at using AI for biodiversity protection, sustainable proteins, and power grid improvements. The $100 million “AI for Climate and Nature Grand Challenge” launched in 2024 will initially provide each project with $50,000, with up to 15 projects potentially receiving $2 million later this year. Amen Ra Mashariki, the fund’s head of AI and data strategies, emphasized the program’s unique approach: “The way we did this grand challenge was a little different, and it was deliberate in every way.”

Microsoft Build Disruption Leads To Accidental Leak Of Walmart AI Strategy

The Verge (5/21, Warren) reports Microsoft Head of AI Security Neta Haiby accidentally revealed Walmart’s confidential AI deployment plans during a Build conference session disrupted by protesters. While sharing her screen after the incident, Haiby exposed internal Microsoft Teams messages showing that “Walmart is ready to rock and roll with Entra Web and AI Gateway,” and quoting a Walmart AI engineer who said, “Microsoft is WAY ahead of Google with AI security.” Walmart, already a major user of Azure OpenAI, is one of Microsoft’s largest corporate customers.

OpenAI’s Largest Data Center Lands Almost $12B In Funding

The Wall Street Journal (5/21, Jin, Subscription Publication) reports a data center in Texas which Crusoe, a startup, is constructing for OpenAI has landed $11.6 billion in fresh funding commitments, broadening a location that’s key to boosting OpenAI’s long-term computing capacities. Crusoe says that the funding will bring the data center’s building total from two to eight and hike the aggregate amount secured for the undertaking to $15 billion. It’s anticipated the data center will be the biggest one utilized by OpenAI.

OpenAI, Nvidia Joining Other Firms On Stargate UAE AI Cluster

CNN (5/22, Salem) reports OpenAI and Nvidia are to join other firms to develop “Stargate UAE, an artificial intelligence infrastructure cluster, in a sister project to the recently unveiled push to expand AI infrastructure in the United States.” Appearing next to President Trump earlier this year, the heads of OpenAI, Softbank, and Oracle announced that they would establish “a new company, called Stargate, to build out AI infrastructure in the US.” The firms said that they intend to put $500 billion toward the undertaking during the years ahead. On Thursday, those firms, Nvidia, Cisco, and local entity G42 revealed via statement “their partnership to build Stargate UAE in Abu Dhabi. The project’s first part, a 200-megawatt AI ‘cluster,’ is expected to go live in 2026, they said.”

Meta Secures 650 MW Solar Deal For AI Operations

TechCrunch (5/22, Chant) reports that Meta has signed a deal with AES for 650 megawatts of solar power in Kansas and Texas. Meta “said it signed the deal to power its data centers, which have been expanding to support its growing AI operations.” This marks Meta’s fourth solar deal this year, adding to its existing 12-gigawatt renewable portfolio.

Honeywell Survey Finds AI Has Potential To Boost Energy Security Amid Rising Demand

Automation Magazine (5/20) reports that a recent Honeywell survey highlights the growing belief among U.S. energy executives that artificial intelligence (AI) has significant potential to enhance energy security in the near term. The survey, which included 300 decision-makers and influencers in energy-related industries, found that 91% believe AI can improve energy security, with 85% already using or piloting AI technologies. Ken West, President and CEO of Honeywell Energy and Sustainability Solutions, stated, “Looking ahead, new technologies like AI and automation can further optimize existing energy systems and integrate new energy sources more swiftly and efficiently.” The survey also identified key AI applications, such as cybersecurity, predictive maintenance, and operational efficiency, as critical areas for future development.

dtau...@gmail.com

unread,
May 31, 2025, 7:46:28 AMMay 31
to ai-b...@googlegroups.com

Estonia Takes Leap into AI

Estonia will roll out AI Leap beginning in September, providing "world-class" AI tools and skills to students and teachers. The national initiative will ensure 58,000 students and 5,000 teachers have free access to top-tier AI learning tools by 2027. Teacher training will focus on self-directed learning, digital ethics, and prioritizing educational equity and AI literacy. The plan also involves allowing students use of their smartphones in schools.
[ » Read full article ]

The Guardian (U.K.); Sally Weale (May 26, 2025)

 

DOE Unveils Supercomputer That Merges With AI

The U.S. Department of Energy’s (DOE) Lawrence Berkeley National Laboratory has selected Dell Technologies to deliver its next flagship supercomputer in 2026. The system will use Nvidia chips tailored for AI calculations and the simulations common to energy research and other scientific fields. DOE Secretary Chris Wright, who has compared AI’s development to the Manhattan Project, called the supercomputer a key tool for winning the global AI race.

[ » Read full article *May Require Paid Registration ]

The New York Times; Don Clark (May 30, 2025)

 

UAE's AI University Aims to Become ‘Stanford of the Gulf’

Mohamed bin Zayed University of AI (MBZUAI) in the United Arab Emirates (UAE) is taking steps to become what its president, ACM Fellow Eric Xing, calls "the Stanford of the Gulf." MBZUAI is intended to be a feeder for Emirati firms, an incubator for homegrown startups, and an AI research and development arm for the nation. The university offers degrees in such fields as robotics and computer vision, backed by full scholarships from the UAE.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Omar El Chmouri; Mark Bergen (May 23, 2025)

 

AI Poised to Revolutionize Weather Forecasting

An AI weather model developed by researchers at Microsoft generates accurate 10-day forecasts at smaller scales than similar models and within seconds, versus hours for traditional models. To increase its usefulness beyond weather forecasts, the Aurora AI weather model also was trained on multiple large Earth system datasets, enabling predictions of air pollution and wave height, among other things. The model is already in use at the European Center for Medium-Range Weather Forecasting.

[ » Read full article *May Require Paid Registration ]

The New York Times; Rebecca Dzombak (May 21, 2025)

 

DOGE Reportedly Expands Grok AI Use In US Government

Reuters (5/23, Taylor, Ulmer) reported that Elon Musk’s DOGE team is expanding Grok AI use within the US federal government for data analysis, according to three people familiar with the matter, raising potential conflict-of-interest and privacy concerns. One source “said Musk’s team was using a customized version of the Grok chatbot,” and two sources “said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department.” A “Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok.”

California Community Colleges Increase Efforts To Reduce Fraudulent Enrollments

Inside Higher Ed (5/23, Weissman) reported that the California Community College system is tackling fraudulent enrollments, exacerbated by the rise of online education and artificial intelligence (AI). On Tuesday, the Board of Governors discussed imposing a student fee to fund AI defenses but decided to further “explore” the idea instead. They approved requiring identity verification for all applicants. Over the past year, “the system found 31.4 percent of applications were fraudulent, system officials said,” costing $13 million in aid. Chris Ferguson, executive vice chancellor of the California Community College system, “emphasized that the system’s current tools for fraud detection capture about 85 percent of false applications.” The proposal aims to support application review costs and deter fraud, but students oppose the fee, fearing financial barriers. AI tools are already helping colleges like Santiago Canyon, which identified 10,000 fraudulent students, making room for 8,000 real ones.

AI-Powered Recycling System Debuts At UMass Amherst

The Springfield (MA) Republican (5/24, Carolan) reported that recycling entrepreneurs Ian Goodine and Ethan Walko, who met at UMass Amherst, have returned to their alma mater to introduce “an AI-powered robot that scans trash and recyclables.” The system, developed by their company rStream, efficiently sorts waste, tackling the industry’s challenge of sorting. The prototype, capable of processing up to one ton per hour, uses photographs and datasets to classify items into biowaste, recyclable, or specialty categories. Chancellor Javier Reyes praised the system for addressing “a critical campus need while advancing environmental sustainability.” Since graduating in 2022, “the two have raised $3 million” to enhance their technology. The AI system is designed to adapt to different recycling rules, providing precise information that helps waste contractors identify markets for recyclables.

Energy-Focused AI Working Group Led By Rep. Fedorchak Gains Momentum

In a paywalled article, E&E News (5/23, Portuondo, Subscription Publication) reports a new legislative working group led by Rep. Julie Fedorchak (R-ND) that is tasked “with developing legislation that could help tackle an expected boom in energy demand from artificial intelligence is starting to gain momentum.” The group is “starting to garner lawmaker interest after Fedorchak received nearly 100 responses to a request for information on powering the future of AI.” Fedorchak said, “We knew there was growing interest in how to meet AI’s energy demands, but the depth and breadth of these responses exceeded our expectations.” The congresswoman “said people and companies she’s heard from are tied to utilities, data center operators, energy producers, cybersecurity experts and tech innovators.”

Kennesaw State University Develops AI-Powered Robot That Protects Crops Without Pesticides

WAGA-TV Atlanta (5/27) reports that Kennesaw State’s Taeyeong Choi developed an AI robot using night vision to protect strawberries from pests, offering an eco-friendly alternative to chemical pesticides. WSB-AM Atlanta (5/27) reports that the robot uses night vision to detect and remove slugs and snails. Choi emphasizes its eco-friendly and cost-saving benefits, with a prototype expected next year and a price under $5,000.

Alphabet CEO Interviewed On How AI Will Change The Technology Landscape

The Verge (5/27, Patel) reports on its interview with Alphabet and Google CEO Sundar Pichai following last week’s Google I/O developer conference. The event “marked the beginning of what appears to be a new era for search and the web,” as “Google’s new vision for search goes well beyond links to webpages to something that feels a lot more like custom app development.” Google’s AI Mode can build users “a custom search results page, including interactive charts and potentially other kinds of apps, in real time.” The Verge reports that Pichai expressed “in several different ways that the web is still getting bigger and Google is sending more traffic to more websites than ever before, but the specifics of that are hotly contested.” The News Media Alliance trade group recently issued “a furious statement, calling AI Mode ‘theft.’” In the interview, Pichai emphasized how Google expects AI to bring a “platform shift” similar or even greater than that of smartphones and wireless networks.

Nvidia CEO Discusses Opportunities In Sovereign AI

In Tech News Briefing podcast, the Wall Street Journal (5/27, Subscription Publication) says that Nvidia is exploring sovereign AI as a growth strategy, according to CEO Jensen Huang. This concept involves countries investing directly in AI infrastructure rather than relying on companies. Nvidia has secured deals with Saudi Arabia, India, and the UAE. A columnist for the Wall Street Journal highlights the political challenges, including US-China trade tensions affecting AI chip sales. Nvidia’s upcoming earnings report is anticipated, with concerns about the impact of halted Chinese market sales. Sovereign AI is seen as a potential growth area for Nvidia amidst these challenges.

Saudi Arabia Launches $10 Billion AI Fund, Seeks US Tech Partnerships

PYMNTS (5/28) reports Saudi Arabia’s state-backed AI company, Humain, plans to launch a $10 billion venture capital fund and partner with US tech firms to become a Middle East AI leader. Humain CEO Tareq Amin said the company aims to build 1.9 gigawatts of data center capacity by 2030, expanding to 6.6 gigawatts by 2034, requiring a $77 billion investment. Humain is in talks with OpenAI, xAI, and Andreessen Horowitz for equity partnerships and has secured $23 billion in deals with Nvidia, AMD, AWS, and Qualcomm. Amin said, “The world is hungry for capacity,” emphasizing Humain’s aggressive expansion strategy. AWS is collaborating with Humain on a $5 billion AI zone in Saudi Arabia, leveraging AWS tech for government AI applications. Saudi Arabia is offering subsidized electricity to attract data centers.

Meta Restructures AI Teams To Enhance Competitiveness

Axios (5/27, Fried) reports that Meta is reorganizing its AI teams to accelerate product development and compete with OpenAI and Google. An internal memo from Chief Product Officer Chris Cox outlines the new structure, dividing efforts into an AI products team and an AGI Foundations unit. The restructuring aims to enhance flexibility and ownership, with no job cuts or executive departures reported.

IBM Partners With AWS To Expand Enterprise AI Capabilities

ChannelE2E (5/28) reports IBM announced deep integrations with Amazon Web Services (AWS), Oracle, and Salesforce at Think 2025 to advance enterprise-grade agentic AI. IBM connected its watsonx Orchestrate platform to Amazon Bedrock and Amazon Q on AWS, enabling AI agents to act on real-time data from platforms like Salesforce, Slack, and Zendesk while ensuring security through watsonx.governance tools available in the AWS Marketplace. IBM VP Suzanne Livingston said Agent Connect, a new framework, provides “a foundation for seamless multi-agent collaboration” and supports multiple development tools, including LangChain and Copilot Studio. Partners can monetize agents through IBM’s Agent Catalog, which offers visibility to enterprise clients. IBM aims to prevent “agent sprawl” by enabling governed, interoperable AI ecosystems.

dtau...@gmail.com

unread,
Jun 8, 2025, 12:12:31 PMJun 8
to ai-b...@googlegroups.com

U.S. Removes ‘Safety’ from AI Safety Institute

The U.S. Department of Commerce has renamed its AI Safety Institute, created in 2023 under the previous administration, as the Center for AI Standards and Innovation (CAISI). The name change reflects a change in focus from overall safety to combating national security risks and preventing “burdensome and unnecessary regulation” abroad. Commerce Secretary Howard Lutnick called the agency’s overhaul a way to “evaluate and enhance U.S. innovation” and “ensure U.S. dominance of international AI standards.”
[ » Read full article ]

The Verge; Adi Robertson (June 4, 2025)

 

Nvidia's Blackwell Conquers Largest LLM Training Benchmark

Nvidia's Blackwell GPUs ranked first in all six benchmarks of the MLCommons consortium's MLPerf machine learning competition. The large language model (LLM) pretraining task, in particular, was more resource-intensive than in the past, with the GPT3 model replaced by Meta's Llama 3.1 403B. In the LLM fine-tuning benchmark, AMD's Instinct MI325X GPU was on par with Nvidia's H200s and marked a 30% improvement over the Instinct MI300X.
[ » Read full article ]

IEEE Spectrum; Dina Genkina (June 5, 2025)

 

Bengio Has a Plan to Make AI More Trustworthy

ACM A.M. Turing Award laureate Yoshua Bengio launched the nonprofit LawZero to develop a "safe by design" Scientist AI system that would be fundamentally non-agentic, trustworthy, focused on understanding and truthfulness, and not designed to mimic human behavior or pursue its own goals. According to Bengio, Scientist AI potentially could be used to ensure the safety of agentic AI systems being developed by Big Tech companies.
[ » Read full article ]

Time; Harry Booth (June 3, 2025)

 

Meta Turns to Nuclear Power for AI

Meta on Tuesday announced a 20-year deal with New York-based electric utility Constellation Energy to secure nuclear-generated electric power to help meet demand for its AI and other computing needs. The deal will revive Constellation’s Clinton Clean Energy Center in Illinois. Constellation last September said it planned to restart its Three Mile Island nuclear power plant in Pennsylvania to provide Microsoft with power for its datacenters.
[ » Read full article ]

Associated Press; Matt Ott (June 3, 2025)

 

AI Threatens Europe's Water Reserves

Caught up in a global battle for AI supremacy, Europe's datacenter industry used around 62 million cubic meters of water last year, and water lobby Water Europe expects that figure to hit 90 million cubic meters by 2030. A draft of the European Commission’s upcoming Water Resilience Strategy said datacenters will be rated on "overall sustainability and propose minimum performance standards, including water consumption."
[ » Read full article ]

Politico Europe; Marianne Gros; Leonie Cater (May 28, 2025)

 

David Cope, Godfather of AI Music, Dies at 83

David Cope, a composer known as "the godfather of AI music," has passed away at age 83. Cope is known for developing one of the first computer algorithms able to generate classical music. In the 1980s, Cope developed Experiments in Musical Intelligence (EMI), a program trained on compositions by classical masters like Bach and Mozart that could replicate their styles by scanning and reproducing patterns that Cope would convert into a score.

[ » Read full article *May Require Paid Registration ]

The New York Times; Miguel Salazar (June 2, 2025)

 

AI Advances Eliminate Entry-Level Jobs For Recent College Graduates

The New York Times (5/30, A1, Roose) reported, “Unemployment for recent college graduates has jumped to an unusually high 5.8 percent in recent month” due to “an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.” Research firm Oxford Economics “found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains.” Molly Kinder from the Brookings Institution said, “Employers are saying, ‘These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.’” Some companies are already replacing lower-level tasks with AI, with one tech executive stating “his company had stopped hiring anything below an L5 software engineer...because lower-level tasks could now be done by AI coding tools.” Dario Amodei, CEO of Anthropic, “recently predicted that AI could eliminate half of all entry-level white-collar jobs within five years.”

AI Enhances Storm Forecasting For Hurricane Season

Newsweek (5/30) reported that artificial intelligence (AI) models are surpassing traditional weather forecasts in tracking tropical storms, with systems like Microsoft’s Aurora showing a 20-25 percent improvement. The AI company Urbint, which acquired StormImpact, provides predictive technology for utility companies, including American Electric Power, to anticipate storm impacts on infrastructure. Urbint CEO Corey Capasso explained that the system helps utilities plan for outages by predicting vulnerable infrastructure. Despite AI’s advancements, researchers emphasize the continued necessity of high-quality, physics-based weather data.

SoftBank, Intel Developing AI Memory Chips That Consume Less Electricity

Nikkei Asia (JPN) (5/31) reports that SoftBank and Intel are “developing a type of memory for artificial intelligence expected to consume much less electricity than current chips, helping to build efficient AI infrastructure in Japan.” The companies “plan to develop a structure for stacked DRAM chips that uses a different wiring structure than current advanced high-bandwidth memory, slashing power consumption by roughly half.”

Texas Teen Develops Cardiovascular Diagnostic App

Smithsonian (5/30, Waseem) reported that Siddarth Nandyala, a 14-year-old from Texas, “detected and diagnosed more than 40 patients with potential cardiovascular diseases” in trials in India. The Circadian AI smartphone app, which he created, records heart sounds using a smartphone, filtering noise and analyzing data via cloud-based machine learning. Nandyala, who will study computer science at the University of Texas at Dallas, aims to assist people through non-invasive screenings. “The main focus and goal for me out of this was to essentially create a tool that is able to help a large amount of people,” he said. Clinical trials in the US and India showed “over 96 percent accuracy.” Jameel Ahmed, an electrophysiologist, notes the app’s potential to improve care access, despite limitations like microphone quality. Nandyala is expanding the app and is “currently working on applying it to the sounds of lungs.”

IBM To Acquire Data Analysis Startup Seek AI

TechCrunch (6/2, Wiggers) reports IBM announced on Monday the acquisition of Seek AI, an AI platform facilitating natural language queries of enterprise data, for an undisclosed amount. Seek’s technology will integrate into IBM’s new Watsonx AI Labs. Seek CEO Sarah Nagy said the company plans to scale its platform and enhance AI solutions for IBM clients. TechCrunch adds, “IBM’s acquisition of Seek comes as the former looks to grow its investments in AI, particularly AI for the enterprise. It’s a strategy that’s worked well for IBM so far. The tech giant’s Q1 earnings beat estimates, driven by software growth and strong AI demand.”

Experts Suggest School Districts Implement AI Cautiously

K-12 Dive (6/2, Merod) reports that as artificial intelligence gains traction in education, school districts are advised to implement it cautiously. Washington’s Peninsula School District, led by CIO Kris Hagel, is adopting a slow approach, allowing only 11 AI tools currently. Hagel emphasizes the need for a plan, saying, “Ed tech tools before have kind of proliferated throughout the district.” Iowa City Community School District’s Andrew Fenstermaker advocates for starting small and involving teachers in the evaluation process. He says, “districts should be hands-on and seek feedback from those using the AI tools.” Lynwood Unified School District in California employs an 18-point AI fact sheet for vendors to ensure data protection. Both districts focus on aligning AI tools with existing strategies and addressing biases.

FDA Rolls Out Generative AI Tool To Boost Efficiency

Healthcare IT News (6/3, Milliard) reports the FDA on Monday launched Elsa, “a new generative AI technology that agency leaders said will help its employees do their jobs more efficiently.” The LLM-powered technology “can help summarize adverse events to support safety profile assessments, perform faster label comparisons and generate code to help develop databases for nonclinical applications, according to the FDA.” The agency already uses the tool “to help streamline clinical protocol reviews and reduce the time needed for scientific evaluations and the identification of high-priority inspection targets.” Going forward, Elsa “will be used across the administration to improve operational efficiency, with plans, as the tool is refined, to integrate it with more use cases, such as data processing, according to agency officials.”

Microsoft Launches Free Cybersecurity Program For European Governments

Reuters (6/4, Mukherjee) reports that Microsoft has introduced a free cybersecurity initiative for European governments to enhance defenses against AI-augmented cyber threats. This program, announced on Wednesday, aims to improve intelligence-sharing and mitigate attacks amid a rise in cyberattacks linked to state-sponsored actors from China, Iran, North Korea, and Russia. Microsoft President Brad Smith stated that expanding U.S.-developed cybersecurity resources to Europe will “strengthen cybersecurity protection.” Smith noted AI’s defensive capabilities, saying, “Our goal needs to be to keep AI advancing as a defensive tool faster than it advances as an offensive weapon.”

Amazon To Invest $10 Billion In AI And Cloud Campus In North Carolina

Reuters (6/4, Sophia) reports Amazon.com is investing $10 billion in North Carolina’s Richmond County to expand its artificial intelligence infrastructure. This investment is projected to create at least 500 high-skilled jobs. Amazon already employs approximately 24,000 staff in the state and is expanding its retail operations with new facilities. This investment follows other large investments by “Big Tech companies” racing “to build data centers needed to power AI applications.” Amazon spent $25 billion in capital expenditures in the first quarter and expects similar spending for the rest of the year.

        The AP (6/4) reports Amazon Chief Global Affairs and Legal Officer David Zapolsky said, “This investment will position North Carolina as a hub for cutting-edge technology, create hundreds of high-skilled jobs, and drive significant economic growth. ... We look forward to partnering with state and local leaders, local suppliers, and educational institutions to nurture the next generation of talent.”

        Amazon Unveils AI-Powered Warehouse Robots, Delivery Innovations, Agentic AI Group. Reuters (6/4, Bensinger) reports Amazon announced new AI-driven advancements to enhance warehouse operations and delivery efficiency. The company said it is forming a new Lab126 group to develop multi-task warehouse robots using agentic AI, enabling them to perform diverse functions like unloading trailers and retrieving parts. Amazon said, “We’re creating systems that can hear, understand and act on natural language commands, turning warehouse robots into flexible, multi-talented assistants.” The company also revealed generative AI tools to improve delivery driver navigation by providing detailed maps of building layouts and obstacles.

New Toolkit Released To Assist Schools In AI Implementation

Education Week (6/4, Langreo) reports that Common Sense Media “has released an AI toolkit for school districts to help them” implement generative artificial intelligence in education. Robbie Torney, senior director of AI programs at Common Sense Media, says that many district leaders require “practical” support for “pain points related to AI implementation.” The toolkit includes a “getting started” guide and a readiness assessment to help districts navigate AI adoption challenges. In an interview with EdWeek, Torney highlighted the importance of AI literacy and the need for a leadership vision, saying, “AI is a tool, and districts have to have a clear vision for what that tool can do.” The toolkit also addresses family engagement, as research indicates that 83 percent of parents have not been informed about AI policies. Torney emphasized the necessity of parent partnership, noting it as “critical to the success or failure of an initiative like this.”

Anthropic Debuts Custom AI Models For US National Security Applications

TechCrunch (6/5, Wiggers) reports Anthropic says that it has “released a new set of AI models tailored for U.S. national security customers.” The new models, a custom set of “Claude Gov” models, were “built based on direct feedback from our government customers to address real-world operational needs,” Anthropic wrote in a blog post. Anthropic “says that its new custom Claude Gov models better handle classified material, ‘refuse less’ when engaging with classified information, and have a greater understanding of documents within intelligence and defense contexts. The models also have ‘enhanced proficiency’ in languages and dialects critical to national security operations, Anthropic says, as well as ‘improved understanding and interpretation of complex cybersecurity data for intelligence analysis.’”

How AI, Machine Learning Are Transforming Supply Chain Management

India Today (6/4) reports artificial intelligence (AI) and machine learning (ML) are transforming supply chain management by enhancing logistics, forecasting, and customer service. These technologies enable companies to optimize operations through advanced data analysis, as seen with Walmart’s inventory management and DHL’s route optimization. The article highlights various applications of AI, including demand forecasting, risk management, and predictive maintenance, with companies like IBM, Unilever, and Caterpillar leveraging these tools for efficiency. Despite challenges such as data integration and initial investment costs, the integration of AI and ML offers significant benefits, including cost reduction and improved customer satisfaction.

dtau...@gmail.com

unread,
Jun 14, 2025, 12:17:59 PMJun 14
to ai-b...@googlegroups.com

Photonic Processor Could Streamline 6G Wireless Signal Processing

An AI hardware accelerator developed by Massachusetts Institute of Technology researchers can perform machine learning computations at the speed of light and classify wireless signals within nanoseconds. Around 100 times faster than the top-performing digital alternative and 95% accurate in signal classification, the photonic chip could be used in future 6G wireless applications.
[
» Read full article ]

MIT News; Adam Zewe (June 11, 2025)

 

Film Festival Showcases What AI Can Do on the Big Screen

AI-generated video company Runway kicked off its week-long annual AI Film Festival in New York on June 5, showcasing 10 short films made, at least in part, using AI. Festival submissions jumped from around 300 for the first event in 2023 to about 6,000 this year. Creators are encouraged to use Runway's AI tools for their films, but use of other AI tools is allowed, and some of the films combine AI-generated elements with real-life images and sounds or live-action shots.
[
» Read full article ]

Associated Press; Wyatte Grantham-Philips (June 7, 2025)

 

AI Emissions Could Exceed Some Countries’ Soon

A report from the U.N. International Telecommunication Union revealed a 150% average increase in indirect carbon emissions from four leading AI-focused tech companies (Amazon, Microsoft, Alphabet, and Meta) between 2020 and 2023. The spike is largely attributed to the high energy demands of AI systems and datacenters. The report warned that unchecked AI expansion could result in annual emissions reaching 102.6 million tons of CO2 equivalent, comparable to the yearly emissions of some mid-sized industrialized nations.
[ » Read full article ]

Computing (U.K.); Dev Kundaliya (June 6, 2025)

 

3D Facial Images Generated from DNA Sequences

An international team that included researchers from the Hangzhou Institute for Advanced Study of University of the Chinese Academy of Sciences and China’s Shanghai Jiao Tong University developed an AI model that uses DNA sequences to produce 3D facial images. Combining the pre-trained Transformer AI model architecture and spiral convolution technology, the Difface model uses around 10,000 points to form a point cloud to develop each face, with each feature represented by point clusters.
[
» Read full article ]

Global Times (China); Liu Caiyu (June 4, 2025)

 

News Sites Hit by Google's AI Tools

Online news publishers are being forced to rethink their strategies as traffic declines with the introduction of AI tools that replace Google searches with chatbots that eliminate the need to click on links leading to their sites. The New York Times, for example, saw its share of traffic coming from organic search to the paper’s desktop and mobile websites drop to 36.5% in April 2025 from almost 44% three years earlier, according to software development and data aggregation company Similarweb.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabella Simonetti; Katherine Blunt (June 10, 2025)

 

AI Turns Paralyzed Man's Brainwaves into Speech

A brain-computer interface developed by University of California, Davis researchers enabled a man who lost the ability to speak due to amyotrophic lateral sclerosis to hold real-time conversations, and even sing. The researchers implanted 256 electrodes into areas of the brain that control the facial muscles used for speaking, then recorded his brain activity while he read sentences aloud in specific intonations. The data was used to train an AI model to associate certain neural activity patterns with his intended words and inflections.


[
» Read full article *May Require Paid Registration ]

New Scientist; Christa Lesté-Lasserre (June 11, 2025)

 

FDA to Use AI in Drug Approvals to 'Radically Increase Efficiency'

In a recent article in the Journal of the American Medical Association, U.S. Food and Drug Administration (FDA) officials said the agency will use AI to "radically increase efficiency" in determining whether new drugs and devices receive approval. The FDA has rolled out a ChatGPT-like large language model called Elsa, which it plans to use to prioritize inspections of food and drug facilities, describe side-effects in drug safety summaries, and handle other basic-product review tasks.


[
» Read full article *May Require Paid Registration ]

The New York Times; Christina Jewett (June 10, 2025)

 

Chinese AI Firms Block Features amid University Entrance Exams

Leading domestic AI companies froze certain features during the hours of China's national college entrance exam (gaokao) this year. Tencent, DeepSeek, and Kimi were among those that prevented users from uploading photos of test papers. Some schools also deployed real-time AI patrol and surveillance systems to prevent suspicious behavior in the exam room. China also employs facial recognition technology, drones, and cellphone-signal blockers, among other real-time surveillance and anti-cheating measures, during the gaokao.


[
» Read full article *May Require Paid Registration ]

The Washington Post; Sammy Westfall; Lyric Li (June 10, 2025)

 

England's High Court Warns Lawyers to Stop Citing Fake AI-Generated Cases

A senior judge on the High Court of England and Wales last week warned that attorneys could be criminally prosecuted for presenting false AI-generated materials. The ruling by Victoria Sharp, president of the King’s Bench Division of the High Court; a second judge detailed two recent cases in which fake material was used in written legal arguments that were presented in court. Wrote Sharp in the ruling, "There are serious implications for the administration of justice and public confidence in the justice system if AI is misused."


[
» Read full article *May Require Paid Registration ]

The New York Times; Lizzie Dearden; Cade Metz (June 8, 2025)

 

Welcome to Campus. Here's Your ChatGPT

OpenAI aims to create "AI-native universities" with its ChatGPT Edu service. The company's goals for the service include students having AI assistants, professors providing customized AI study bots, career services offering recruiter chatbots for practice job interviews, and more. California State University is rolling out ChatGPT to students across its 23 campuses, which would make it "the nation's first and largest AI-empowered university system," the university said.


[
» Read full article *May Require Paid Registration ]

The New York Times; Natasha Singer (June 7, 2025)

 

OpenAI Says Significant Number of Recent ChatGPT Misuses Likely Came From China

OpenAI's latest report on malicious uses of its AI models states that a “significant number” of recent violations came from China. The ChatGPT developer said it had disrupted several attempts to leverage its models for cyber threats and covert influence operations in the three months since Feb. 21.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Mauro Orru (June 6, 2025)

 

State Lawmakers Urge Congress to Let Them Regulate AI

A letter to the U.S. Congress signed by 260 state lawmakers from both parties calls for the removal of a provision from President Trump's "big, beautiful bill" that would impose a decade-long moratorium on state-level AI regulations. The state lawmakers said in their letter, "Over the next decade, AI will raise some of the most important public policy questions of our time, and it is critical that state policymakers maintain the ability to respond."

[ » Read full article *May Require Paid Registration ]

The Washington Post; Will Oremus; Andrea Jiménez (June 3, 2025)

 

AI Lobby Expands Influence In Washington

Politico (6/6, Chatterjee) reported that top artificial intelligence companies, including OpenAI and Anthropic, are significantly increasing their lobbying efforts in Washington to secure government contracts from the federal IT budget, which focuses heavily on AI. These companies are advocating for minimal regulation while pushing for expanded energy resources to support data centers. OpenAI released a blueprint urging the federal government to prepare for increased demand for computational infrastructure and energy supply, recommending the creation of special AI zones and expansion of the national power grid. Critics express concern that the AI industry’s influence may overshadow public interest issues such as bias and privacy.

Fraudulent College Enrollments Surge Due To AI Chatbots

The AP (6/10) reports that a rise in artificial intelligence (AI) and online courses has led to increased financial aid fraud. Scammers use AI chatbots to create fake enrollments, known as “ghost students,” to collect financial aid checks. Victims like Heather Brady and Wayne Chaw have faced identity theft, with loans fraudulently taken out in their names. Brady discovered a $9,000 loan issued to someone else for a California college, while Chaw’s identity was used to collect $1,395 in financial aid. The US Education Department has introduced a temporary rule requiring students to show ID for federal aid applications. An Associated Press analysis reveals California colleges reported 1.2 million fraudulent applications in 2024. “The rate of fraud through stolen identities has reached a level that imperils the federal student aid program,” the department said. Criminal cases nationwide highlight the schemes’ pervasiveness, with significant losses in Texas and New York.

Texas A&M University Researchers Develop New AI Tool To Enhance Disaster Response

The Cool Down (6/7, Sattler) reported that Texas A&M University researchers “have created an AI model that can evaluate tornado damage and predict recovery times in less than an hour, the school reported.” This innovation seeks to address delays in disaster response by providing rapid damage assessments, enabling faster resource allocation for emergency teams and insurance claims processing. The AI uses satellite imagery, machine learning, and advanced recovery models to evaluate damage and predict repair timelines, considering factors like income levels. Abdullah Braik, a doctoral student and study co-author said, “Manual field inspections are labor-intensive and time-consuming, often delaying critical response efforts.” The technology is being tested on other natural disasters and aims to offer real-time recovery tracking.

Microsoft-Backed Mistral Launches New AI Reasoning Model

CNBC (6/10, Browne) reports that French AI company Mistral is launching its first reasoning model on Tuesday to rival options from OpenAI and China’s DeepSeek. CEO Arthur Mensch revealed at London Tech Week, “We’re announcing in a couple of hours our new reasoning model, which is very much competitive with all the others.” Mistral’s model excels in mathematics and coding. It is unique for its ability to reason in multiple languages, particularly European languages. Mensch said, “Historically, we’ve seen US models reason in English and Chinese models reason in Chinese.” Mistral, supported by Microsoft, specializes in open-weight large language models, allowing developers to access and modify the model’s parameters. This approach reduces the costs and time associated with training a system from scratch.

Honeywell Introduces Suite Of AI-Powered Cybersecurity Solutions For Operational Technology Environments

Security Info Watch (6/9) reports Honeywell has introduced a suite of AI-powered cybersecurity solutions aimed at enhancing the security of operational technology (OT) environments, announced at the 49th annual Honeywell Users Group. The new offerings, including Honeywell Cyber Proactive Defense and Honeywell OT Security Operations Center, are designed to mitigate cyber threats and support continuous operations in industrial settings. Additionally, Honeywell has expanded its Digital Prime platform to include a comprehensive set of solutions for testing and modifying engineering projects, reducing plant downtime. Pramesh Maheshwari, President of Honeywell Process Solutions, stated, “As we guide our customers on the path from automation to autonomy, Honeywell’s domain expertise is poised to help them rethink how they use technology to drive innovation and gain a competitive edge.”

Ohio State University To Require “AI Fluency” For All Students

Fortune (6/10) reported that Ohio State University is implementing an “AI Fluency” initiative to ensure all students graduate with the ability to apply AI tools in their fields. Starting in fall 2025, hands-on AI experience will be required for all undergraduates. Ohio State Executive Vice President and Provost Ravi V. Bellamkonda said, “Through AI Fluency, Ohio State students will be ‘bilingual’ – fluent in both their major field of study and the application of AI in that area.” Ohio State President Walter “Ted” Carter Jr. emphasized the university’s role in preparing students to lead in a future workforce transformed by AI, saying, “Artificial intelligence is transforming the way we live, work, teach, and learn.”

Nvidia Announces Plan To Build First Industrial AI Cloud In Germany

Reuters (6/11, Mukherjee, Loève) reports that Nvidia CEO Jensen Huang announced at the VivaTech conference in Paris that the company will establish its first AI cloud platform for industrial applications in Germany. This platform will assist carmakers like BMW and Mercedes-Benz in various processes. Huang also revealed plans to expand Nvidia’s technology centers across seven European countries, and to open a compute marketplace for European businesses. Nvidia aims to increase AI computing capacity in Europe tenfold within two years, with plans for 20 AI factories. Nvidia will collaborate with European AI company Mistral using 18,000 Nvidia chips. Huang emphasized the importance of “sovereign AI” and noted quantum computing’s potential to solve complex problems.

Tech Companies Invest In Nuclear Power For AI Needs

Harvard Business Review (6/11) reports that major tech companies like Meta, Google, Amazon, and Microsoft are heavily investing in nuclear power to meet the increasing energy demands of AI, with Amazon partnering with Dominion Energy to advance nuclear technology. The companies are making long-term commitments that “tee up the two timing questions...How to bring reactors online precisely when demand is need and how to align energy suppliers to Big Tech’s AI deadlines.”

Indianapolis Public Schools Considers AI Policy For Teachers, Staff

Chalkbeat (6/11) reports that Indianapolis Public Schools (IPS) is contemplating a new policy on artificial intelligence (AI) to guide its use among teachers and staff. The school board may vote on this policy later in the month. The draft policy follows a yearlong pilot program with 20 staff members using an AI tool. The district’s chief systems officer emphasized the importance of equipping staff with knowledge about AI boundaries in education. The policy suggests AI should ensure equitable outcomes and comply with federal laws like the Family Educational Rights and Privacy Act. An “AI Advisory Committee” will provide input on AI use. Acceptable uses of AI include drafting communications and automating tasks. The second phase will use Google Gemini, costing $177 per user for 2025-26. The district aims to update its “responsible use agreements” to include AI.

AMD Unveils New AI Chip Series

CNBC (6/12, Leswing) reports that Advanced Micro Devices (AMD) has unveiled its next-generation AI chips, the Instinct MI400 series, set to ship next year. These chips can be assembled into a server rack called Helios, enabling a “rack-scale” system. AMD CEO Lisa Su highlighted the unified architecture of the rack at a launch event in San Jose, California. OpenAI CEO Sam Altman expressed enthusiasm for the specs, indicating OpenAI’s intention to use AMD chips. AMD aims to compete with Nvidia’s Blackwell chips by offering lower operational costs and aggressive pricing. AMD’s MI355X chips, which began shipping last month, are claimed to outperform Nvidia’s offerings. The company expects the AI chip market to exceed $500 billion by 2028, despite Nvidia’s current 90 percent market share. AMD’s AI chips have been adopted by major customers, including OpenAI, Tesla, and Oracle.

dtau...@gmail.com

unread,
Jun 21, 2025, 4:17:45 PMJun 21
to ai-b...@googlegroups.com

AI Ethics Experts Set to Gather to Shape the Future of Responsible AI

The 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025), taking place June 23-26 in Athens, Greece, will address how algorithmic systems are reshaping the world and what it takes to ensure these AI tools do so justly. Said ACM President Yannis Ioannidis, “The unprecedented advances and rapid integration of AI and data technologies have created an urgent need for a scientific and public conversation about AI ethics."
[ » Read full article ]

ACM Media Center (June 18, 2025)

 

Amazon Says It Will Reduce Its Workforce as AI Replaces Human Employees

Amazon CEO Andy Jassy said in a June 17 blog post that the rollout of generative AI agents will change how work is performed, enabling the company to shrink its workforce in the future. Jassy said, "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs." Employees should view AI as "teammates we can call on at various stages of our work, and that will get wiser and more helpful with more experience," according to Jassy.
[ » Read full article ]

CNN; Ramishah Maruf; Alicia Wallace (June 17, 2025)

 

OpenAI Wins $200-Million U.S. Defense Contract

The U.S. Department of Defense (DOD) has awarded a one-year, $200-million contract to OpenAI to "develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains." OpenAI said in a blog post that the contract is part of its OpenAI for Government initiative, through which it will "help [DOD] identify and prototype how frontier AI can transform its administrative operations."
[ » Read full article ]

CNBC; Jordan Novet (June 16, 2025)

 

Libraries Open Their Stacks to AI

With support from Microsoft and OpenAI, Harvard University's Institutional Data Initiative has released a dataset containing more than 394 million scanned pages from nearly 1 million books in 254 languages for use by AI researchers. The Boston Public Library also made a deal with OpenAI to digitize its collection, while Google and Harvard are collaborating to retrieve and release public domain volumes from Google Books for use in AI training.
[ » Read full article ]

Associated Press; Matt O'Brien (June 12, 2025)

 

Language Bias Persists in Scientific Publishing Despite AI Tools

Stanford University researchers found the use of large language models (LLMs) in scientific writing to help overcome language barriers may bias peer reviewers' scientific assessments. The researchers examined nearly 80,000 peer reviews at a large computer science conference and interviewed 14 conference participants from across the globe, and found that peer reviewers used common LLM phrases to infer authors were from non-English-speaking countries.
[ » Read full article ]

Stanford University Institute for Human-Centered AI; Scott Hadly (June 16, 2025)

 

Industry Calls for More Research Funding, Public-Private Partnerships in U.S. AI Strategy

In recent comments on the 2025 National AI R&D Strategic Plan, Google, Amazon, IBM, and Anthropic called for the U.S. government to prioritize investments in AI-related research and development and to lead in the creation of AI standards that could be adopted worldwide. Google also encouraged the creation of "a comprehensive initiative to educate America's science students and scientific workforce on new AI technologies, supporting the uptake of AI as the next scientific instrument."
[ » Read full article ]

Nextgov; Alexandra Kelley (June 12, 2025)

 

Pope Leo Takes On AI as a Potential Threat to Humanity

This week, Google, Meta, IBM, Anthropic, Cohere, and Palantir executives took part in a two-day international conference at the Vatican on AI, ethics, and corporate governance. Some tech leaders hoped to avoid a binding international treaty on AI supported by the Vatican, and observers said the conference could set the tone for future interactions between Pope Leo and the tech industry on the matter of regulation.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Margherita Stancati; Drew Hinshaw; Keach Hagey (June 17, 2025); et al.

 

China's Spy Agencies Investing Heavily in AI

A report by researchers at Recorded Future's Insikt Group details investments in AI by Chinese spy agencies to develop tools that could improve intelligence analysis, help military commanders develop operational plans, and generate early threat warnings. The researchers found that China is probably using a mix of large language models, including Meta and OpenAI, along with domestic models from DeepSeek, Zhipu AI, and others.

[ » Read full article *May Require Paid Registration ]

The New York Times; Julian E. Barnes (June 17, 2025)

 

Baidu Ramps Up AI Hiring as China Faces Talent Crunch

Chinese Internet search giant Baidu has launched its biggest recruitment campaign yet, with a focus on AI talent. Baidu reported a 60% jump in job openings in its AI-focused annual recruitment drive, noting it "will train future AI navigators the way pilots are trained." Candidates are being sought for research areas such as large language model (LLM) algorithms, foundational LLM architecture, machine learning, speech technologies, and AI agents.

[ » Read full article *May Require Paid Registration ]

South China Morning Post; Ben Jiang; Coco Feng (June 16, 2025)

 

Google, U.S. Experts Join on AI Hurricane Forecasts

The U.S. National Hurricane Center has entered into a cooperative research and development agreement with Google's DeepMind to improve its hurricane forecasts. DeepMind's upgraded AI weather forecasting model can track a storm's development for up to 15 days and predict both its path and strength. The company said its hurricane intensity forecasts "are as accurate as, and more often accurate than," conventional forecasts.

[ » Read full article *May Require Paid Registration ]

The New York Times; William J. Broad (June 12, 2025)

 

AI Bots Take Over the Web

AI companies like OpenAI and Anthropic have deployed bots for real-time retrieval and recapping of content, even as more people use chatbots as an alternative to Google searches. New York startup TollBit found that retrieval bot traffic to 266 websites, with national and local news organizations accounting for half, surged 49% from the fourth quarter of 2024 to the first quarter of 2025.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Nitasha Tiku (June 11, 2025)

 

AstraZeneca Signs $5.3B AI-Led Deal With CSPC

Reuters (6/13, Aripaka, Silver) reported that AstraZeneca has entered an AI-led research agreement with China’s CSPC Pharmaceutical Group valued at up to $5.3 billion. Announced on Friday, the collaboration aims to develop therapies for chronic conditions, including a small molecule oral therapy for immunological diseases. CSPC will conduct AI-driven research in Shijiazhuang City. AstraZeneca will pay an upfront fee of $110 million, with potential additional payments for development and sales milestones. This follows AstraZeneca’s March announcement of a $2.5 billion investment in a Beijing R&D hub. AstraZeneca’s Sharon Barr emphasized the collaboration’s focus on innovation to address chronic diseases affecting over two billion people globally.

        MedCity News (6/13, Vinluan) reported, “The new agreement with CSPC calls for the two companies to work together to discover and develop preclinical drug candidates.” As part of the deal, “CSPC will carry out the research at its facility in Shijiazhuang City, China, using its AI-driven drug discovery platform” which “analyzes the binding patterns that target proteins have with existing molecules.” In return, “AstraZeneca receives the right to exercise options for exclusive licenses to develop and commercialize drug candidates stemming from the collaboration.” Overall, “if any of those molecules reach the market, CSPC is in line to receive up to $3.6 billion in sales milestone payments, plus royalties on product sales.”

Meta Confirms Scale Investment

The AP (6/12) reported that Meta announced a $14.3 billion investment in AI company Scale and recruited its CEO Alexandr Wang to join a team developing “superintelligence.” Following reports that the two companies were in talks, Meta announced the deal on Thursday, calling it a “strategic partnership and investment.” Scale said the $14.3 billion investment puts its market value at over $29 billion. Scale will remain independent from Meta, but said the agreement will “substantially expand Scale and Meta’s commercial relationship.”

        Reuters (6/13, Godoy) reported that the deal “will test how the Trump administration views so-called acquihire deals, which some have criticized as an attempt to evade regulatory scrutiny.” The deal gives Meta a 49% nonvoting stake in Scale AI, and “unlike an acquisition or a transaction that would give Meta a controlling stake, the deal does not require a review by US antitrust regulators. However, they could probe the deal if they believe it was structured to avoid those requirements or harm competition.”

        Sources: Google Cuts Ties With Scale After Meta Investment. Reuters (6/13, Tong, Cai, Hu) cited anonymous sources in reporting that Google, Scale AI’s largest customer, will cut ties with Scale following Meta’s investment. Google “planned to pay Scale AI about $200 million this year for the human-labeled training data that is crucial for developing technology,” according to the sources.

Lawmakers Introduce Bill Tasking NSA To Create AI “Security Playbook” To Protect Tech From Foreign Adversaries

NextGov (6/13, Graham) reported Rep. Darin LaHood (R-IL) introduced legislation on Thursday to “require the National Security Agency to create an artificial intelligence ‘security playbook’ to protect sensitive U.S. technologies from foreign adversaries like China.” Reps. John Moolenaar (R-MI), the Chair of the Select Committee on China, as well as Raja Krishnamoorthi, its ranking member, and Josh Gottheimer (D-NJ) co-sponsored the bill. The lawmakers “said...the legislation was needed ‘to address vulnerabilities, threat detection, cyber and physical security strategies, and contingency plans for highly sensitive AI systems,’” and claimed “evidence that Chinese-based startup DeepSeek’s AI chatbot ‘used illegal distillation techniques to steal insights from U.S. AI models to accelerate their own technology development.’” MeriTalk (6/13, Hansen) reported the Advanced AI Security Readiness Act “would create paths to identify and neutralize security threats targeting advanced AI systems.”

University Of Michigan Purchases Land To Build AI Research Facilities

WXYZ-TV Detroit (6/13, Braddock) reported that that the University of Michigan Board of Regents approved buying more than 124 acres in Ypsilanti Township for a maximum of $8.1 million. The land will host two AI supercomputing facilities in collaboration with Los Alamos National Laboratory. Not all residents are in favor; Priscilla Creswell voiced concerns, saying, “That’s not something Ypsilanti needs at all,” citing potential negative impacts on the community. Mosharaf Chowdhury, a professor at the university, noted ongoing research to reduce AI energy consumption. Some residents plan a rally against the development at Hydro Park on Saturday.

Nvidia Champions Sovereign AI In Europe

Reuters (6/16) reports that Nvidia CEO Jensen Huang has been promoting “the idea of ‘sovereign AI’ since 2023. Europe is now starting to listen and act.” During his tour of London, Paris, and Berlin last week, Huang announced “a slew of projects and partnerships, while highlighting the lack of AI infrastructure in the region.” He announced Nvidia’s investment plans, stating, “We are going to invest billions in here.” British Prime Minister Keir Starmer pledged £1 billion to enhance computing power, while French President Emmanuel Macron emphasized AI infrastructure as “our fight for sovereignty.” Nvidia plans to build an AI cloud platform in Germany with Deutsche Telekom. Europe relies heavily on US tech giants, but Nvidia’s collaboration with Mistral in France aims to provide a local alternative. The European Union plans to construct four “AI gigafactories” to reduce reliance on US firms. High electricity costs pose challenges for data centers, driving the push for European tech independence.

OpenAI, Microsoft Relationship Reportedly Strained

In a paywall-protected article, the Wall Street Journal (6/16, Subscription Publication) reports that tensions are flaring between OpenAI and Microsoft regarding the future of their AI partnership. OpenAI reportedly wants Microsoft’s grip loosened over its AI products and computing resources, and also secure the latter company’s blessing for OpenAI to convert into a for-profit entity. The Journal says Microsoft’s approval is critical to OpenAI’s fundraising and ability to go public.

Tech Companies Invest Billions In Data Centers Amid AI Boom; Economic, Environmental Costs Unclear

Insider (6/17, Beckler, Ho, Parakul, Campbell, Thomas) reports tech companies are spending heavily on data centers to power AI growth, but the environmental and economic costs remain unclear. Insider analyzed permits for 1,240 US data centers, finding 40% are in high-water-stress areas, with some facilities using “more water a day than nearly 49,000 Americans.” The analysis estimates data centers could consume up to 239.3 terawatt-hours annually – nearly Florida’s total 2023 usage – and generate up to $9.2 billion in public health costs from electricity-related pollution. An Amazon spokesperson disputed the methodology, saying it “oversimplifies complex data center operations.” The report also found over 230 data centers in environmentally overburdened communities and highlighted lucrative tax incentives, including up to $2 million per job in Ohio.

Professors Adopt Handwritten Assignments To Combat AI Cheating

Inside Higher Ed (6/17, Alonso) reports that educators are increasingly requiring handwritten assignments to counter AI use in student work. Melissa Ryckman, an associate professor at the University of Tennessee Southern, plans to have students write in class to avoid AI-generated responses. She said, “I’m leaning towards that, but I’m also like – ugh, handwriting.” Other professors, like Monica Sain and Sara Gallagher, have seen improved student engagement with handwritten work. Sain implemented a “digital detox” by prohibiting laptops and requiring handwritten essays. Gallagher observed, “When students are using ChatGPT...they become disengaged from the work.” Despite challenges like deciphering handwriting and accommodating students with disabilities, professors find handwritten assignments foster better classroom connections. Tricia Bertram Gallant, a scholar, emphasized the need for secure assessments, saying, “We are in the business of facilitating human-to-human learning environments.” Concerns remain about balancing handwritten tasks with teaching essential research skills.

Amazon To Reduce Workforce In Coming Years As AI Eliminates Need For Certain Jobs, CEO Says

The Wall Street Journal (6/17, Herrera, Cutter, Subscription Publication) reports that Amazon plans to decrease its workforce in the future as artificial intelligence advances. On Tuesday, CEO Andy Jassy informed employees that generative AI represents a significant technological shift, impacting Amazon’s consumer interactions, business dealings, and internal operations.

        The Washington Post (6/17, Subscription Publication) reports, “In the tech industry, firms including Meta and Shopify have increasingly been requiring employees to use AI, citing productivity improvements and the potential for personal advancement.” However, “the warning to Amazon workers comes as excitement about AI in the tech industry has spurred new debate about whether the technology will be a job killer or creator.” CNN (6/17, Maruf) provides similar coverage.

OpenAI Secures $200M Pentagon Contract

Reuters (6/16) reports that OpenAI has been awarded a $200 million contract by the U.S. Defense Department to develop AI tools for national security, as stated by the Pentagon on Monday. The project will focus on creating prototype AI capabilities for both warfighting and enterprise domains, with work primarily conducted in and near Washington, expected to conclude by July 2026. OpenAI’s revenue run rate reached $10 billion in June, and it plans to raise up to $40 billion in a new funding round led by SoftBank Group.

NAACP, Environmental Group Notify Musk’s xAI Of Intent To Sue Over Supercomputer Facility Pollution

The AP (6/17) reports, “The NAACP and an environmental group said Tuesday that they intend to sue Elon Musk’s artificial intelligence company xAI over concerns about air pollution generated by a supercomputer facility located near predominantly Black communities in Memphis.” The xAI data center started “operating last year, powered in part by pollution-emitting gas turbines, without first applying for a permit.” Officials “have said an exemption allowed them to operate for up to 364 days without a permit, but Southern Environmental Law Center attorney Patrick Anderson said at a news conference that there is no such exemption for turbines – and that regardless, it has now been more than 364 days.”

Colleges Implementing AI Courses As Workforce Adapts

The Hechinger Report (6/19, Gilreath) runs an article that says “students are increasingly looking for ways to boost their” artificial intelligence (AI) “skills and make themselves more marketable at a time when there’s growing fear that AI will replace humans in the workforce.” Colleges, meanwhile, are “adding AI to their course catalogs, and individual professors are altering lessons to include AI skill building.” Miami Dade College’s AI program is equipping students with the skills to navigate a rapidly evolving job market. The college’s program, launched in 2023, includes courses on machine learning and ethics. A World Economic Forum report states 77 percent of companies “plan to train their employees to ‘better work alongside AI.’” The demand for AI skills is growing, with job postings requiring such skills increasing by 323 percent in one year. Josh Jones, CEO of QuantHub, a company that works with schools to add AI lessons said, “The problem we have is that AI is changing industries so fast.”

Tech Industry Shifts Stance On Renewable Energy Tax Credits

Politico (6/18) reported the fast-growing AI industry’s energy demands are driving a surge in power generation, with tech companies facing uncertainty over renewable energy’s role. The Data Center Coalition, including AWS, Google, Meta, and Microsoft, urged US Senate Majority Leader John Thune to extend clean energy tax credits, warning energy constraints could hinder AI-driven data center growth. The Senate Finance Committee proposed phasing out most renewable subsidies by 2028, prioritizing nuclear and geothermal energy instead. AWS, Google, Meta, and Microsoft declined to comment on the Senate’s plan. OpenAI Chief Global Affairs Officer Chris Lehane cited permitting delays as a bigger challenge than subsidy cuts, aligning with GOP focus on streamlining approvals. Clean Tomorrow’s Evan Chapman noted tech firms now adopt an “all-of-the-above” energy strategy, including gas and nuclear, as AI strains power grids.

Chinese Researchers Create AI Assistant For Plant Genomics

Xinhua (6/17) reported Chinese researchers introduced PlantGPT, “the first large language model AI assistant tailored for plant functional genomics.” PlantGPT is “an Arabidopsis-based expert Q&A system for plant functional genomics, capable of delivering precise responses and specialized analyses in the field.” The researchers have made the tool available online for free, and said it aims to overcome challenges in computational biology such as the difficulty of “deciphering complex biological regulatory mechanisms and effectively integrating multi-omics data.” Xinhua explained, “While traditional plant databases are rich in resources, they typically require precise trait or gene names for searches, due to limited interaction capabilities.”

Amazon Invests Over $500M In Nuclear Energy For AI Data Centers

Sustainability Mag (6/18) reported Amazon is investing more than $500 million in nuclear energy, including small modular reactors (SMRs), to power its AI-driven data centers and achieve net-zero emissions by 2040. The company has partnered with Talen Energy to co-locate a data center campus near Pennsylvania’s Susquehanna nuclear facility, securing 1,920 megawatts of carbon-free electricity through 2042. AWS VP of Global Data Centers Kevin Miller said Amazon is making the “largest private sector investment in state history” in Pennsylvania, totaling $20 billion and creating 1,250 jobs. Amazon’s Climate Pledge Fund also led a $500 million investment in X-energy to develop advanced SMRs, aiming to deliver over 5 gigawatts of nuclear capacity by 2039. Amazon CSO Kara Hurst said AI “can help us reach our goals,” while AWS CEO Matt Garman emphasized the company’s commitment to “supporting the country’s vision to be a global AI leader.”

dtau...@gmail.com

unread,
Jun 29, 2025, 12:12:29 PMJun 29
to ai-b...@googlegroups.com

NSF Graduate Fellowship Tilts Toward AI, Quantum

The U.S. National Science Foundation announced a second round of its Graduate Research Fellowships Program on June 13, with 500 fellows selected from the 3,000 applicants named "honorable mentions" in the first round. Of the 203 honorable mentions in computer science, 125 received fellowships, aligning with the Trump administration's prioritization of AI and quantum information science.
[ » Read full article ]

Science; Jeffrey Mervis (June 25, 2025)

 

AI Code Exposing Companies to Mounting Security Risks

In a survey by software supply chain platform Cloudsmith, 42% of 307 developers polled said AI-generated code populates much of their codebases, but just 67% said they review the code before deployment. Another 29% of respondents said they are "very confident" they can identify vulnerabilities in AI-generated or AI-assisted code. Only 20% said they trust AI-generated code completely, and more than half (59%) said they subject such code to additional scrutiny.
[ » Read full article ]

Computing (U.K.); Dev Kundaliya (June 24, 2025)

 

New ACM Journal to Focus on AI Security, Privacy

The new journal ACM Transactions on AI Security and Privacy (TAISAP) will focus on the development of methods for assessing the security and privacy of AI models, AI-enabled systems, and broader AI environments. Its launch is part of a broader initiative by ACM to add a new suite of journals covering various facets of AI.
[ » Read full article ]

ACM Media Center (June 24, 2025)

 

LENS Allows Brain-like Navigation in Robots

A navigation system developed by researchers at the Queensland University of Technology in Australia mimics the neural processes of the human brain to guide robots, using less than 10% of the energy required by traditional systems. In testing, the LENS (Locational Encoding with Neuromorphic Systems) system was able to recognize locations along an 8 km (5 mile) journey using 180KB of storage, nearly 300 times less than other systems. LENS combines a spiking neural network with a special camera that only reacts to movement and a low-power chip, all on one small robot.
[
» Read full article ]

Queensland University of Technology (Australia) (June 19, 2025)

 

Interactive IEA Tracker Shows Where AI Guzzles the Most Energy

The International Energy Agency (IEA) has unveiled an online platform to monitor and analyze the impact of AI across the global energy sector. The Energy and AI Observatory features interactive tools to explore datacenter electricity consumption and the scale of digital infrastructure by region, and provides case studies to illustrate how AI is being deployed across the energy sector itself, as well as its impacts. Release of the tool comes after the IEA reported earlier this year that the energy consumed by datacenters worldwide is set to more than double by 2030.
[
» Read full article ]

The Register (U.K.); Dan Robinson (June 19, 2025)

 

AI Tools Changing the Teaching Profession

Six in 10 U.S. K-12 teachers used AI in the past school year, particularly high school and early-career educators, according to a survey of more than 2,000 teachers by Gallup and the Walton Family Foundation. The poll revealed that around 60% of teachers using AI tools said they have improved the feedback provided to students. About 80% said the tools save them time on assessments, making quizzes or worksheets, and administrative tasks.
[ » Read full article ]

Associated Press; Jocelyn Gecker (June 25, 2025)

 

Court Says Copyrighted Books Are Fair Use for AI Training

U.S. District Court for the Northern District of California Judge William Alsup ruled that Anthropic's use of copyrighted books to train its Claude chatbot without obtaining the authors' or publishers' consent does not violate the law. Alsup compared the use of copyrighted books in training large language models to "[an aspiring writer who reads copyrighted texts] not to race ahead and replicate or supplant [those works,] but to turn a hard corner and create something different."

[ » Read full article *May Require Paid Registration ]

The Washington Post; Andrew Jeong (June 25, 2025)

 

One of the Best Hackers in the Country Is an AI Bot

Xbow is the first AI product to rank No. 1 on HackerOne's U.S. leaderboard, which tracks who has identified and reported the most vulnerabilities in software from large companies. Founded by GitHub veteran Oege de Moor (pictured), Xbow automates penetration testing. Xbow has raised $75 million as de Moor seeks to sell the tool, a cost-effective alternative to red teaming, so companies can perform more frequent penetration testing.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Dina Bass (June 24, 2025)

 

Trust in AI Strongest in China, Low-Income Nations

A United Nations Development Programme (UNDP) poll of 21 nations found more than 60% of people surveyed in developing countries are confident AI systems serve society's best interests. The survey found respondents in most developing countries with "high" levels of development based on the UNDP's Human Development Index (HDI) have confidence in AI, while a greater share of the population in higher-income nations and those with "very high" HDI are skeptical of AI.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Magdalena Del Valle; Augusta Saraiva (June 20, 2025)

 

Can AI Quicken the Pace of Math Discovery?

The U.S. Department of Defense's Defense Advanced Research Projects Agency is accepting applications through mid-July for its Exponentiating Mathematics project, which is seeking researchers to accelerate progress in pure math by identifying ways to use AI as a "co-author" in high-level mathematics research. Existing large language models have trouble performing basic math problems, but overcoming such limitations and enabling AI to check proofs with accuracy could save mathematicians time and allow them to be more creative. Researchers said it also would help them better understand AI's capabilities and potentially create more powerful AI models.


[
» Read full article *May Require Paid Registration ]

The New York Times; Alexander Nazaryan (June 19, 2025)

 

Cal State System Invests $16.9 Million In AI For Education

EdSource (6/22, Caplan) reports, “A recent New York Times investigation revealed OpenAI’s ambition to make artificial intelligence the ‘core infrastructure’ of higher education.” The California State University (CSU) system “has committed $16.9 million to provide ChatGPT Edu to 460,000 students across its 23 campuses.” This investment aims to integrate artificial intelligence (AI) into higher education but raises concerns about students outsourcing critical thinking to AI. The New York Times highlights OpenAI’s role in creating this outsourcing of critical thinking to chatbots, “and now presents itself as the solution by making that outsourcing even more seamless.” European business schools, like Essec Business School, demonstrate the effectiveness of focusing on strategic thinking alongside AI. The University of Chicago Law School found AI systems make “significant legal errors,” emphasizing the need for human strategic judgment.

        University Of South Carolina System Signs $1.5M Agreement With OpenAI. The SC Daily Gazette (6/20, Holdman) reported that South Carolina’s largest university system “has signed a $1.5 million agreement with OpenAI to offer free AI tools to all students and faculty beginning this fall.” The University of South Carolina (USC) Columbia campus is the first in the state to offer ChatGPT access, with other campuses possibly joining. The initiative aims to train students in ethical AI use, as stated by spokesman Jeff Stensland, who said, “AI is here and people in the private sector are using for whole host of things.” USC expects ChatGPT “will be a time saver for faculty in the classroom, automating grading or creating syllabi, or with data analysis in research.” Brice Bible, USC’s vice president for information technology, emphasized that the initiative will enhance employability and innovation. OpenAI has also “invested $50 million in a partnership, which it calls the NextGenAI consortium, to help speed up the research process.”

AI Platforms Revolutionize College Admissions Counseling

Wired (6/20, Greenberg) reported that Julia Dixon, a University of Michigan graduate, founded ESAI in 2023, “a platform powered by artificial intelligence designed to assist students with the college admissions process.” Dixon recognized the need for accessible college counseling after helping family and friends with applications. ESAI offers personalized guidance through a “major mentor” and school matchmaker, considering factors like students’ goals and sociability. Dixon said, “A lot of kids come to us when they’re more actively applying to school.” The platform also provides tools for writing admissions essays and quantifying extracurricular activities, such as babysitting, as leadership skills. ESAI addresses financial concerns by matching students with scholarships based on their demographics and interests. Jon Carson, who founded the College Guidance Network, “an AI-powered counseling platform,” highlighted the mismatch between when families discuss college and counselors’ availability, saying, “We are talking to our kids at night, on the weekend, and during vacation.”

Students Increasingly Rely On AI For Academic Support

The Chronicle of Higher Education (6/20, McMurtrie) reported that students are increasingly “turning to artificial intelligence as an all-purpose study tool, recasting how they think about learning and reshaping their relationships with classmates and professors.” Allison Abeldt, a Kansas State University student, initially hesitant about AI, now uses tools like ChatGPT and Google NotebookLM for study aids. She said, “AI allows all students, despite the way they learn, to understand your course materials.” Despite concerns about cheating, many students see AI as a necessary tool to manage workloads and fill educational gaps. Surveys indicate a rise in AI use among students, with a “survey of 1,529 college students by Tyton Partners [finding] that 42 percent of students used generative-AI tools daily or weekly in the spring of 2025.” Some students use AI to supplement poor teaching, while others avoid AI due to ethical concerns. Professors are challenged to adapt teaching methods as students increasingly rely on AI for learning support.

AI Framework Revolutionizes Cement Formulation

Forbes (6/20, Schmelzer) reported that researchers at Switzerland’s Paul Scherrer Institute (PSI) have developed “a new AI framework that can generate low-CO cement formulations in seconds.” This custom AI system, designed specifically for cement, integrates simulation software and neural networks to predict the strength of various mixtures based on their composition. The promising data suggests that these new formulations could reduce CO emissions by up to 50 percent, significantly impacting global cement production. Similarly, at the University of Illinois Urbana-Champaign, “researchers partnered with Meta and concrete supplier Ozinga to develop AI-optimized concrete that cut carbon by [40 percent]. MIT has trained models to scan research papers and databases to identify novel, low-emission materials.”

Mississippi And Nvidia Partner On AI Education

Mississippi Today (6/18, Goldberg) reported that Mississippi and Nvidia “have reached a deal for the company to expand artificial intelligence training and research at the state’s education institutions, an initiative to prepare students for a global economy increasingly driven by AI, Gov. Tate Reeves announced Wednesday.” The memorandum of understanding will introduce AI programs across Mississippi’s community colleges, universities, and technical institutions. The initiative aims to train at least 10,000 Mississippians in AI skills, machine learning, and data science. Reeves (R) described the collaboration as “monumental,” emphasizing the creation of pathways to careers in AI and cybersecurity. Although the agreement does not include tax incentives for Nvidia, the state will provide funding, possibly from $9.1 million in grants through the Mississippi AI Talent Accelerator Program. Louis Stewart, head of strategic initiatives for Nvidia’s global developer ecosystem said, “Together, we will enhance economic growth through an AI-skilled workforce.” Mississippi joins Utah, California, and Oregon in similar programs with Nvidia.

IEA Proposes Energy, AI Observatory To Monitor Energy Consumption

Power Engineering International (6/20, Jones) reported that the International Energy Agency (IEA) has proposed a new energy and AI observatory to analyze energy consumption and the relationship between AI and energy. The initiative follows the IEA’s April report predicting that electricity demand from AI-optimized data centers could quadruple by 2030.

AI Infrastructure Innovations Enhance Energy Efficiency

In an article titled “How Smarter Chips Could Solve AI’s Energy Problem,” Forbes (6/23, Adebayo) reports that the AI boom is driving data center power consumption to new heights, with predictions from Goldman Sachs indicating a 160 percent increase by 2030. In response, companies are exploring innovative solutions. Microsoft is supporting the restart of dormant nuclear reactors, like Three Mile Island in Pennsylvania, to meet energy demands. Meanwhile, Proteantecs, an Israeli startup, is helping data centers reduce AI server power consumption by up to 14 percent using chip telemetry. Uzi Baruch, Proteantecs’ chief strategy officer, explains that their technology monitors chip performance in real-time to optimize voltage and prevent overprovisioning. Additionally, Arm is enhancing its Neoverse platform for energy-efficient AI workloads, focusing on optimizing entire computing systems. Eddie Ramirez, VP of Go-To-Market for Arm, emphasizes the importance of maximizing existing infrastructure. Together, these companies are shaping a new layer of AI infrastructure focused on performance and power efficiency.

AI Computing Power Divide Creates Global Disparities, Oxford Research Shows

The New York Times (6/23, Satariano, Mozur, Russell, Kim) reports AI development is creating a divide between countries with the computing power to build AI systems and those without, influencing geopolitics and economics. The US and China operate over 90% of the data centers used for AI work. Oxford University researchers analyzed cloud-service providers including Amazon, Google, and Microsoft. US companies operated 87 AI computing hubs, compared to China’s 39. Nations lacking AI compute power face limits in scientific work and talent retention. Vili Lehdonvirta, a Professor at Oxford University, said compute producers could have oversized influence, similar to oil-producing countries. Amazon, Microsoft, Google, Meta, and OpenAI plan to spend over $300 billion on AI infrastructure this year.

Duke Researchers Develop Framework To Monitor AI In Healthcare

Medical “professionals are increasingly turning to artificial intelligence in their day-to-day work,” but concerns remain about errors that could harm patients, Axios (6/24) reports. Duke researchers “unveiled in two studies this month that they have developed a new framework to assess AI models and monitor how well they perform over time.” Their studies found AI tools generally effective but prone to inaccuracies, especially with new medications, highlighting the need for continuous monitoring to ensure accuracy and avoid bias.

Amazon’s Mega Data Center In Indiana To House AI Startup

The New York Times (6/24, Weise, Metz) says, “A year ago, a 1,200-acre stretch of farmland outside New Carlisle, Ind., was an empty cornfield,” and “now, seven Amazon data centers rise up from the rich soil, each larger than a football stadium.” Amazon, “over the next several years...plans to build around 30 data centers at the site, packed with hundreds of thousands of specialized computer chips.” This facility, which “will consume 2.2 gigawatts of electricity – enough to power a million homes” – was built with one “customer in mind: the A.I. start-up Anthropic, which aims to create an A.I. system that matches the human brain.”

Google DeepMind Unveils New On-Device Robot Model

Ars Technica (6/24) reports that Google DeepMind has introduced a new on-device vision language action (VLA) model for controlling robots, following the announcement of Gemini Robotics earlier this year. This model operates without a cloud component, allowing full autonomy. Carolina Parada, head of robotics at Google DeepMind, stated this approach could enhance robot reliability in challenging situations. The model enables developers to customize it for specific applications. Unlike traditional slow reinforcement training, the generative AI model leverages Gemini’s multimodal understanding to perform diverse tasks efficiently.

AI Tools Reshape Teaching Strategies In US Schools

The AP (6/25, Gecker) reports that artificial intelligence (AI) tools are significantly impacting teaching methods in US schools. Math teacher Ana Sepúlveda from Dallas, Texas, used ChatGPT to create a geometry lesson plan related to soccer, saying, “Using AI has been a game changer for me.” A Gallup and Walton Family Foundation poll revealed that 6 in 10 K-12 teachers used AI tools last school year, with high school educators and early-career teachers using them most. AI is credited with saving time on tasks like grading and lesson planning, with 8 in 10 teachers noting time savings and 6 in 10 seeing improved work quality. Gallup research consultant Andrea Malek Ash suggests AI could help alleviate teacher burnout. Despite initial bans, schools are now incorporating AI, although concerns about student overuse persist. Mary McCarthy, a Houston teacher, said AI “transformed my weekends and given me a better work-life balance.”

AI Robots Transform Caregiving For Older Adults

Kiplinger (6/26) reports that artificial intelligence is increasingly being used to assist older adults, addressing caregiver shortages and high care costs. ElliQ, an AI companion robot, helps reduce loneliness and improve wellness, according to Intuition Robotics. Neal Shah of CareYaya highlights AI’s potential for scalable tasks, while experts caution against over-reliance on AI, emphasizing the importance of human connection and addressing algorithmic bias.

AI In Schools Raises Ethical, Safety Concerns

Politico (6/26) reports that the integration of AI in schools is raising ethical concerns about its impact on students’ mental health. President Donald Trump issued an executive order in April to promote AI literacy in K-12 education. More than a quarter of teachers use AI learning systems, according to a Gallup poll. Alex Kotran, CEO of The AI Education Project, questions the normalization of AI tutors. Sam Hiner of EdEngage emphasizes avoiding human-like emotional responses in AI. Robbie Torney of Common Sense Media points to mental health risks, especially for younger students. The Department of Education and the AI Education Task Force have not responded to inquiries about safety measures.

dtau...@gmail.com

unread,
Jul 4, 2025, 5:46:40 PMJul 4
to ai-b...@googlegroups.com

Senate Strikes AI Provision from GOP Bill

The U.S. Senate voted 99-1 to eliminate a provision that would have deterred state-level regulation of AI from President Donald Trump's bill of spending cuts and tax breaks. Originally proposed as a 10-year ban on states doing anything to regulate AI, lawmakers later tied it to federal funding so that only states that backed off on AI regulations would be able to get subsidies for broadband Internet or AI infrastructure. The bill moves back to the House for reconciliation after being approved by the Senate on Tuesday.
[ » Read full article ]

Associated Press; Matt Brown; Matt O'Brien (July 1, 2025)

 

Malware Tries to Manipulate AI into Declaring It Harmless

Security vendor Check Point said it detected the first documented case of "AI Evasion" malware, which uses "prompt injection" aimed at tricking AI systems into labeling it as non-malicious. The malware, which was accurately classified by Check Point's AI-powered MCP system, featured a hardcoded plain-text C++ string intended to instruct the AI analyzing it rather than the infected system. "This is not an isolated issue; it is a challenge every security provider will soon confront," said Check Point.
[ » Read full article ]

Computing (U.K.); Dev Kundaliya (June 26, 2025)

 

Purdue, Adobe Researchers Use AI to Cut Cloud Service Failures

Purdue University and Adobe researchers leveraged AI to develop a method for quicker detection and diagnosis of failures in complex cloud-based systems. They developed an algorithm that uses causal inference to trace issues in cloud-based systems to their roots. The algorithm can handle instances in which the causal graph's structure is not fully known, as well as cases with single or multiple root causes.
[ » Read full article ]

Purdue University Elmore Family School of Electrical and Computer Engineering (June 19, 2025)

 

AI Screening Job Candidates Before Humans See Them

More companies are using AI-powered virtual recruiters to screen employment candidates through phone or video interviews. These virtual agents can ask candidates questions that range from basic to complex and can end an interview if a candidate does not meet the company's minimum requirements. Some virtual recruiters allow candidates to ask questions that they may not be able to answer; they also may score candidates based on employer-set criteria, though companies stress that human recruiters make the hiring decisions.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Danielle Abril (June 30, 2025)

 

How Do You Teach Computer Science in the AI Era?

Generative AI has "really shaken computer science education," according to Carnegie Mellon University's Thomas Cortina, prompting faculty at universities nationwide to rethink their computer science programs. This comes amid a tightening of the tech job market, particularly as more companies replace entry-level coders with AI. The Computing Research Association's (CRA) Mary Lou Maher (pictured) expects the focus of computer science education to shift from coding to computational thinking and AI literacy.

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (June 30, 2025)

 

The AI Frenzy Is Escalating, Again

AI spending is on the rise among big tech companies and venture capitalists, with Page One Ventures' Chris V. Nicholson noting, "Everyone is deeply afraid of being left behind." Meta, Microsoft, Amazon, and Google plan to spend a combined $320 billion this year mainly on the construction of new datacenters. PitchBook reported $65 billion in U.S. investment in AI companies in the first quarter of 2025.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Tripp Mickle (June 27, 2025)

 

Finding Viable Sperm in Infertile Men Can Take Days. AI Did It in Hours

Columbia University Fertility Center researchers used AI to help a couple struggling with infertility for 18 years to conceive. The researchers used AI to scan a semen sample and located 44 viable sperm within an hour, after labs ran the same sample and found nothing over a two-day period. The sample, placed on a single-use microfluidic chip, was illuminated and imaged by a microscope connected to a high-speed camera that captures millions of images, which are analyzed in real time using AI.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Sabrina Malhi (June 27, 2025)

 

AI Is Wearing Down Democracy

Switzerland's International Panel on the Information Environment found AI was used in more than 80% of elections in 2024. The study showed 25% of cases involved candidates using AI for translating speeches and platforms, identifying groups of voters for outreach, and other campaign-related tasks. However, AI was found to have played a harmful role in 69% of cases.

[ » Read full article *May Require Paid Registration ]

The New York Times; Steven Lee Myers; Stuart A. Thompson (June 26, 2025)

 

AI Energy Council Meets To Discuss UK’s Future Energy Needs

MLex (6/30) reports behind a paywall that Microsoft, Google, ARM, and AWS are meeting with government agencies, Ofgem, power companies, and the National Energy System Operator today at the second meeting of the AI Energy Council to discuss the future energy needs for AI in the UK. Parties will discuss forecasts of how much energy will be needed to deliver a 20-fold increase in compute capacity over the next five years. The meeting will also address which sectors may adopt energy-hungry AI more quickly. This follows the recent £2 billion funding announcement for the government’s plans for AI-led economic growth.

Microsoft Navigates Delay In AI Chip Production

Reuters (6/27) reported that Microsoft’s Maia AI chip, code-named Braga, will face a delay of at least six months, pushing mass production to 2026, according to The Information. The chip is expected to underperform compared to Nvidia’s Blackwell chip. The delay is due to design changes, staffing issues, and high turnover. Microsoft aimed to use the chip in data centers this year. The company is developing custom processors for AI to reduce reliance on Nvidia. Rivals Amazon and Google have also developed in-house AI chips to enhance performance and cut costs.

OpenAI Plans New Features For ChatGPT To Compete With Productivity Suites

ZDNet (6/27) reported that OpenAI is developing new features for ChatGPT to rival Google Workplace and Microsoft Office 365, according to The Information. These features include collaborative document editing, meeting transcription, and team chat functions. Although OpenAI has not officially announced a workplace productivity suite, this move would intensify its competition with Google and Microsoft. OpenAI and Microsoft, strategic partners since 2019, are reportedly renegotiating their agreement as OpenAI distances itself from Microsoft’s cloud services, recently partnering with Google.

Administration Readying New AI Boost Package

Reuters (6/27, Volcovici, Renshaw) reported the Administration “is readying a package of executive actions aimed at boosting energy supply to power the US expansion of artificial intelligence, according to four sources familiar with the planning.” The package under consideration would make it easier for power-generating projects to connect to the electrical grid, and “providing federal land on which to build the data centers needed to expand AI technology, according to the sources.” The Administration “will also release an AI action plan and schedule public events to draw public attention to the efforts, according to the sources, who requested anonymity to discuss internal deliberations.”

Study Finds AI Use In Writing Affects Students’ Brain Activity

Education Week (6/26, Schwartz) reported that a study “from researchers at the Massachusetts Institute of Technology, Wellesley College, and the Massachusetts College of Art and Design found that giving writers free reign to use AI as much as they wanted” reduces brain activity compared to writing independently. Participants, “mostly undergraduate and graduate students, who constructed essays with the assistance of ChatGPT exhibited less brain activity during the task than those participants who were asked to write on their own,” with evaluators noting a lack of individuality in AI-assisted essays. However, when writers initially engaged with the topic independently before using AI, brain activity increased. The study involved 54 participants writing essays with varying levels of AI assistance and was monitored using electroencephalogram, “a tool that measures electrical activity in the brain.” The research suggests that while AI can aid in writing, students need to develop their writing skills independently first.

Carnegie Mellon University To Consider AI Curriculum Overhaul

The New York Times (6/30, Lohr) reports that Carnegie Mellon University, renowned for its computer science program, is planning a faculty retreat this summer “to rethink what the school should be teaching to adapt to the rapid advancement of generative artificial intelligence.” Thomas Cortina, a professor and an associate dean at the university, said that AI technology has “really shaken computer science education.” The technology’s rapid integration into coding and academia is prompting universities nationwide to reconsider their curricula. Professor Jeannette Wing of Columbia University described the situation as “the tip of the AI tsunami.” The National Science Foundation is funding “Level Up AI,” a project aimed at developing a shared vision for AI education. At Carnegie Mellon, faculty are considering how to integrate AI tools into traditional computing education. This comes as computer science students have “been forced to adjust to an increasingly tough tech job market.”

US Lawmakers Introduce Bill To Bar Federal Use Of China-Linked AI Tools Like DeepSeek

South China Morning Post (HKG) (6/25) reported that a bipartisan group of lawmakers has introduced the No Adversarial AI Act, which would bar federal agencies from procuring or deploying AI tools developed in China, Russia, Iran, or North Korea. The bill specifically targets platforms such as DeepSeek, a Chinese-developed AI system. “Artificial intelligence controlled by foreign adversaries poses a direct threat to our national security, our data and our government operations,” said Rep. Raja Krishnamoorthi (D-IL), who co-sponsored the bill in the House alongside Rep. John Moolenaar (R-MI). A companion version was introduced in the Senate by Sen. Rick Scott (R-FL) and Sen. Gary Peters (D-MI). The move marks the latest escalation in the US-China tech rivalry.

Duke Researchers Propose New Framework For Evaluating AI Scribes

Duke University researchers “are proposing a new framework to evaluate artificial intelligence scribing tools by using a combination of human review and technological evaluation,” Fierce Healthcare (6/30) reports. The new evaluation framework, SCRIBE, aims to address the lack of standard evaluation methods for AI scribes, which are increasingly funded but lack oversight in healthcare. The study found that using a combination of human reviewers and large language models (LLMs) for evaluating the AI-generated medical notes can effectively assess various quality dimensions.

EU Receives 76 Bids For AI Gigafactories

Reuters (6/30) reports that 76 companies have expressed interest in developing AI gigafactories in Europe, as announced by EU tech chief Henna Virkkunen. This response follows the European Commission’s allocation of 20 billion euros for constructing four AI gigafactories. These facilities will feature approximately 100,000 advanced AI chips. Virkkunen noted proposals from 16 member states and 60 sites, which reflects significant enthusiasm for AI innovation in Europe. Applicants include both EU and non-EU companies, such as tech giants, data center operators, telecom providers, power suppliers, and financial investors. Collectively, they plan to acquire at least 3 million state-of-the-art AI processors. The official call for setting up these gigafactories will occur at year-end.

Microsoft Develops AI Tool for Medical Diagnosis

Wired (6/30, Knight) reports that Microsoft has advanced in medical AI with a tool that diagnoses diseases more accurately and cost-effectively than human physicians. Mustafa Suleyman, CEO of Microsoft’s AI arm, stated that their system, MAI Diagnostic Orchestrator (MAI-DxO), achieved 80% accuracy in diagnosing diseases, compared to 20% by human doctors. The tool uses leading AI models to mimic expert collaboration. Dominic King, a Microsoft vice president, highlighted its cost-effectiveness. Microsoft is considering integrating the technology into Bing or creating tools for medical experts, though commercialization plans remain undecided.

Meta Reorganizes AI Efforts Under New Superintelligence Labs Division

Reuters (6/30) reports that Meta CEO Mark Zuckerberg has restructured the company’s AI initiatives into a new division, Meta Superintelligence Labs, led by Alexandr Wang, former CEO of Scale AI. This move aims to accelerate work on artificial general intelligence. The reorganization follows challenges, including staff departures and poor reception of Meta’s Llama 4 model. Zuckerberg has aggressively recruited talent, including former GitHub CEO Nat Friedman and several researchers from OpenAI, Anthropic, and Google. Analysts express concern over Meta’s AGI investment, considering previous high-cost ventures with limited returns.

        Sam Altman Responds To Meta’s AI Talent Moves. Wired (7/1, Schiffer) reports that OpenAI CEO Sam Altman responded to Meta CEO Mark Zuckerberg’s recruitment of AI talent, including several OpenAI staff, by emphasizing OpenAI’s commitment to building artificial general intelligence. In a message to OpenAI researchers, Altman criticized Meta’s recruitment strategy, suggesting it could lead to cultural issues. He assured OpenAI staff of potential compensation evaluations and expressed confidence in OpenAI’s mission and team culture. Altman highlighted the company’s focus on AGI as a core goal, contrasting it with Meta’s approach.

Tech Companies Back White House’s AI Education Pledge

K-12 Dive (7/2, Merod) reports that 67 tech companies and associations “have signed a pledge supporting the Trump Administration’s goal of making artificial intelligence education accessible to all students, the White House announced Monday.” The pledge commits signatories to “provide resources that foster early interest in AI technology, promote AI literacy, and enable comprehensive AI training for educators.” Companies like Google, IBM, and Microsoft are expected to reveal detailed plans on their commitments. US Education Secretary Linda McMahon expressed excitement, saying “there is a lot of energy about AI and how it can be used responsibly in education.” Meanwhile, the Senate rejected a provision for a 10-year moratorium on state AI regulations, with a vote of 99-1, arguing it could threaten children’s safety online. AASA, The School Superintendents Association, opposed the moratorium, saying it would protect “tech and AI more than students and children.”

Stanford Creates Interface To Construct Teams Of AI Scientists

Nature (7/2, Jones) reports on Stanford University’s “interface for a system called the Virtual Lab.” Pathologist Thomas Montine “constructed a team of six artificial-intelligence (AI) characters, all powered by a commercial large language model. He gave them specialties: he made a couple neuroscientists, one a neuropharmacologist and another a medicinal chemist.” Montine then “asked this virtual lab group to examine possible treatments for Alzheimer’s disease and discuss gaps in knowledge, barriers to progress and hypotheses to be tested.” After testing Stanford’s technology, some researchers found valuable insights, while others express skepticism about the novelty of the ideas generated. Overall, the use of AI in research is seen as a tool to enhance efficiency and creativity, though human oversight remains crucial.

CEOs Start Predicting Depth Of AI Impact On Jobs

The Wall Street Journal (7/2, Cutter, Zimmerman, Subscription Publication) reports CEOs are increasingly acknowledging AI’s potential to reduce jobs and predicting significant workforce changes. Ford CEO Jim Farley said AI could replace half of white-collar jobs in the US. Marianne Lake of JPMorgan Chase anticipates a 10 percent reduction in operations staff due to AI. Amazon CEO Andy Jassy and Anthropic CEO Dario Amodei also foresee job cuts. OpenAI COO Brad Lightcap believes fears may be overstated. IBM Chief Executive Arvind Krishna noted AI’s role in replacing some jobs but also creating new ones.

EdTech Leaders Say Educators Must Pioneer AI Integration In Schools

Education Week (7/1, Vilcarino) reported that as artificial intelligence (AI) reshapes K-12 education, educators should be central to its evolution, say leaders behind AI educational tools. Educators’ skepticism arises from AI’s “hallucinations” and inadequate training. However, AI-related training for teachers increased by 50 percent between spring and fall 2024, according to the EdWeek Research Center. The ISTELive 25 + ASCD Annual Conference 25 on June 30 featured leaders like Wyman Khuu, head of learning engineering at Playlab.ai, who said, “Our hope down the road is that ... educators get to shape technology.” Elvira Salazar, director of online learning and technology at Latinos for Education, said that professional AI development should focus on readiness to lead change, not just technical skills. Khuu noted skepticism should be welcomed, saying, “Eventually, skeptics get to a place where they actually understand AI might lead to some impact.”

dtau...@gmail.com

unread,
Jul 13, 2025, 5:00:35 PMJul 13
to ai-b...@googlegroups.com

ACM Journal Facilitates Rapid Publication of AI Research

ACM's new open access AI Letters (AILET) journal will feature peer-reviewed contributions to AI research that accelerate knowledge dissemination across academia and industry. The journal will prioritize theoretical breakthroughs, algorithmic innovation, practical real-world applications, and critical societal implications. It will also offer a venue for rigorously reviewed opinion pieces and policy briefs. Said Dame Wendy Hall, co-chair of the ACM Publications Board, "With AILET, we’re filling a need for a letters-style venue—one that accommodates rapid peer-review, late-breaking results, policy briefs, AI action plans, and highlights."
[
» Read full article ]

ACM Media Center (July 9, 2025)

 

Media Consortium Launches Euro Chatbot to Counter Fake News

A consortium of 15 leading European media organizations has rolled out ChatEurope, a chatbot trained on news articles from verified and trusted sources with the goal of combating online disinformation by providing responses that are bias-free and factually correct. Developed by Romania's DRUID AI, ChatEurope uses a large language model from France's Mistral and is hosted on infrastructure from French open-source software provider XWiki.
[
» Read full article ]

Computing (U.K.); Penny Horwood (July 9, 2025)

 

Tennis Players Criticize AI Technology Used by Wimbledon

Numerous tennis players criticized AI technology used at the Wimbledon tournament. This is the first year the tournament replaced human line judges with an AI-driven electronic line calling system. Players complained that the system made incorrect calls, sometimes leading to point losses. Said Debbie Jevans, chair of the organization that hosts Wimbledon, “When we did have linesmen, we were constantly asked why we didn’t have electronic line calling because it’s more accurate than the rest of the tour.”
[ » Read full article ]

TechCrunch; Dominic-Madori Davis (July 8, 2025)

 

AI Cameras Change Driver Behavior at Intersections

An increasing number of local governments in the U.S. are deploying AI-powered cameras at intersections in hopes of changing driver behavior, as technology companies try to bring the results seen in Europe to U.S. streets. Stop for Kids' AI cameras detect vehicles that violate traffic laws at intersections and automatically issues citations. In a 2022 pilot in Saddle Rock, NY, compliance with stop signs jumped from 3% to 84% within 90 days of installation of the company's cameras.
[ » Read full article ]

IEEE Spectrum; Willie D. Jones (July 5, 2025)

 

Large Language Models Are Improving Exponentially

Large language models (LLMs) are doubling their capabilities every seven months, according to a metric developed by researchers at the nonprofit Model Evaluation & Threat Research. The task-completion time horizon metric measures the average amount of time human programmers would take to perform a task that can be completed with a certain degree of reliability by an LLM. According to the researchers, LLMs should be able to perform software-based tasks that take humans one month of 40-hour workweeks with 50% reliability in days, possibly hours, by 2030.
[ » Read full article ]

IEEE Spectrum; Glenn Zorpette (July 2, 2025)

 

3D Interactive Digital Room Derived from Brief Video

A process developed by Cornell University researchers uses AI to turn a brief video of a space into an interactive 3D simulation. The DRAWER (Digital Reconstruction and Articulation With Environment Realism) method includes a perception model that determines which parts of the scene are mobile and how they should move, as well as a model that fills in the unseen insides of objects in the room.
[ » Read full article ]

Cornell Chronicle; Patricia Waldron (June 30, 2025)

 

EU Rolls Out AI Code with Broad Copyright, Transparency Rules

The European Commission on Thursday published a code of practice to help companies follow its landmark AI Act that includes copyright protections for creators and transparency requirements for advanced models. The code will require developers to provide up-to-date documentation describing their AI’s features to regulators and third parties looking to integrate it in their own products. Companies also will be banned from training AI on pirated materials and must respect requests from writers and artists to keep copyrighted work out of datasets.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Gian Volpicelli (July 10, 2025)

 

Microsoft Pledges $4 Billion Toward AI Education

Microsoft on Wednesday said it will distribute more than $4 billion in cash and technology services to schools, community colleges, technical colleges, and nonprofits for AI education. Additionally, the company is rolling out the Microsoft Elevate Academy, tasked with helping 20 million people obtain AI certificates by "delivering AI education and skilling at scale."


[
» Read full article *May Require Paid Registration ]

The New York Times; Natasha Singer (July 9, 2025)

 

OpenAI, Microsoft Bankroll AI Training for Teachers

The American Federation of Teachers (AFT) said on Tuesday it would start an AI training hub for educators with $23 million in funding from Microsoft, OpenAI, and Anthropic. The second-largest U.S. teachers’ union said it would open the National Academy for AI Instruction in New York City, starting with hands-on workshops for teachers this fall on how to use AI tools to generate lesson plans, and for other tasks.

[ » Read full article *May Require Paid Registration ]

The New York Times; Natasha Singer (July 8, 2025)

 

Marco Rubio Impostor Using AI to Contact High-Level Officials

An impostor used AI-powered software to impersonate U.S. Secretary of State Marco Rubio in calls and texts with foreign ministers, a U.S. governor, and a member of the U.S. Congress. The culprit was probably attempting to manipulate officials “with the goal of gaining access to information or accounts,” according to a cable sent by Rubio’s office to State Department employees. The State Department said it would “carry out a thorough investigation and continue to implement safeguards to prevent this from happening in the future.”

[ » Read full article *May Require Paid Registration ]

The Washington Post; John Hudson; Hannah Natanson (July 8, 2025)

 

The Coder 'Village' at the Heart of China's AI Frenzy

Hangzhou, China, and its Liangzhu suburb have lured tech workers with low rents and proximity to DeepSeek and Alibaba. Coupled with provincial and local government subsidies and tax breaks for startups, the city has become China's AI hub. Mindverse founder Felix Tao's home in Liangzhu serves as a gathering place for "villagers," coders in their 20s and 30s seeking to leverage AI in launching their own firms.

[ » Read full article *May Require Paid Registration ]

The New York Times; Meaghan Tobin; Siyi Zhao (July 6, 2025)

 

Scientists Use AI to Mimic the Mind

Scientists trained a large language model on 10 million psychology experiment questions, aiming to better understand the human mind. An international team including Marcel Binz of Germany's Helmholtz Munich trained Meta's open-source LLaMA (Large Language Model Meta AI) on the responses of more than 60,000 volunteers to 160 psychology experiments. The resulting modified model, named Centaur, in testing accurately predicted what a volunteer’s remaining responses would look like.

[ » Read full article *May Require Paid Registration ]

The New York Times; Carl Zimmer (July 2, 2025)

 

454 Hints That a Chatbot Wrote Part of a Biomedical Researcher's Paper

Researchers at Germany's University of Tübingen identified 454 words overused by chatbots compared with human authors. The researchers analyzed more than 15 million biomedical abstracts published from 2010 to 2024 and found as least 13.5% were written using the assistance of chatbots. In less-selective journals, they found up to 40% of abstracts by authors from some countries were generated using AI.

[ » Read full article *May Require Paid Registration ]

The New York Times; Gina Kolata (July 2, 2025)

 

AI Robot Inks Tattoos

Blackdot has developed an AI-driven tattooing robot whose precision, according to the Austin startup, boosts predictability and reduces pain. Designs are converted into a set of dots by Blackdot’s proprietary algorithm. Once a human operator starts the process, the device’s computer-vision-equipped arm scans the skin and extends its triple-pointed needle to initiate the tattoo. Bang Bang Tattoo in Manhattan, which calls the device Aero for “Artist Enabled Robotic Operator,” has used it to perform about 30 tattoos on volunteers since its installation in April.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (July 2, 2025)

 

Researchers Propose Checklist To Improve AI Benchmark Reliability

Tech in Asia (7/5) reports researchers from the University of Illinois Urbana-Champaign (UIUC), Stanford University, the University of California, Berkeley, Yale University, Princeton University, the Massachusetts Institute of Technology (MIT), Transluce, ML Commons, Amazon, and the UK AI Standards Institute (AISI) found current agentic AI benchmarks may misestimate performance by up to 100% due to flawed task setups and reward designs. The study, led by Senior Researcher Yuxuan Zhu, revealed benchmarks like SWE-bench-Verified could misrank AI agents by 40%. The team introduced the Agentic Benchmark Checklist (ABC), which reduced overestimated performance by 33% when applied to CVE-Bench. The findings challenge assumptions about benchmark reliability, particularly in high-stakes fields like healthcare and finance. While promising, the checklist has only been tested on a limited set of benchmarks and may not address all future evaluation issues.

Amazon, Walmart Competition Shifts To AI, Robotics

PYMNTS (7/3) reported the competition between Amazon and Walmart has evolved from e-commerce versus big-box stores to a race in AI, robotics, and data infrastructure. Amazon now uses robots in nearly 75 percent of its global deliveries, with more than one million robots in its warehouses, nearing a 1:1 ratio with human workers. Walmart, meanwhile, is investing in supply chain control, including a proprietary meat-processing facility, and partnering with FinTech OnePay to launch a credit card program. Separately, PYMNTS (7/3) reported Amazon introduced DeepFleet, a generative AI foundation model to enhance warehouse robotics. The company said the system improves robot travel efficiency by 10 percent, reducing fulfillment center congestion and accelerating order processing. Amazon described DeepFleet as “an intelligent traffic management system” that optimizes navigation paths for its robotic fleet. The model continuously learns and adapts to further streamline operations.

Google Launches AlphaGenome For Genetic Variant Predictions

GenomeWeb (7/3) reported that Google DeepMind has introduced AlphaGenome, an AI model designed to predict the effects of genetic variants on the human genome. The tool analyzes genomic sequences up to 1 Mb, assessing impacts on gene expression, RNA splicing, and more. Caleb Lareau from Memorial Sloan Kettering Cancer Center, who tested AlphaGenome, emphasized its efficiency as a comprehensive tool. Google DeepMind’s Ziga Avsec and Natasha Latysheva noted its superior performance in evaluations against existing models. While AlphaGenome is not yet validated for personal genome predictions, it promises to aid research in rare cancers and other genetic studies. Google aims to challenge competitors like Illumina with this innovative model.

[See also: https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/]

Educator Shares How AI Tools Aid Reading Comprehension

Education Week (7/3, Vilcarino) reported that during the recent ISTELive 25 + ASCD Annual Conference, educators discussed barriers students face in reading comprehension, such as limited vocabulary and difficulty decoding. In one session, Jessica Pack, a sixth-grade language arts teacher, “made the case that smart, strategic use of artificial intelligence tools could help boost reading skills.” She said, “A lot of folks are landing on AI as a purely teacher-centered type of tool, so what we are going to do today is encourage a bit of student-centered use.” Pack’s strategy involves students generating keywords from text to create AI-generated images, which they then evaluate for accuracy, thus demonstrating comprehension. The session emphasized the importance of teaching students to cite AI-generated content to foster digital citizenship. Pack “emphasized that it is important for students to be taught to cite the images they create as generative AI images, because that instills the value of citing sources for content they create.”

AI Tools Aid College Graduates In Networking Challenges

Inside Higher Ed (7/7, Palmer) reports that recent college graduates face higher unemployment rates than the national average, with 6.6 percent of those aged 22-27 jobless last month. Factors include the pandemic, interest rates, and AI’s impact on entry-level jobs. Jeremy Schifeling, founder of the Job Insiders said, “The actual battle for the jobs that do exist is fiercer than ever.” AI tools, while contributing to job market challenges, are also helping colleges assist alumni in job searches. Institutions are using AI-driven platforms like Protopia to facilitate alumni networking, which “allows you to deliver something at scale that was previously unscalable,” according to Protopia CEO Max Leisten. Lasse Palomaki, associate director of career services for alumni at Elon University, who said, “Everyone knows that networking matters, but very few students and even alumni know how to do it,” is highlighting the importance of AI in lowering barriers to networking.

NSA To Partner With Cyber Command On FY2026 Planned AI Project

Defense Scoop (7/7, Pomerleau) reports US Cyber Command has requested $5 million in fiscal 2026 to launch a new artificial intelligence project. The fiscal 2023 defense policy bill charged the Pentagon, Cyber Command, and DARPA to work with the NSA to “jointly develop a five-year guide and implementation plan for rapidly adopting and acquiring AI systems, applications, supporting data and data management processes for cyber operations forces.” The initiative “aims to develop core data standards in order to curate and tag collected data that meet those standards to effectively integrate data into AI and machine learning solutions while more efficiently developing artificial intelligence capabilities to meet operational needs.”

OpenAI Expands AI Capacity With Oracle Partnership

Bloomberg (7/4, Subscription Publication) reports that OpenAI, which will rent about 4.5GW of data center power from Oracle Corp as part of its Stargate initiative, is highlighting the substantial energy needs for AI advancements. Oracle is developing new data centers across the US, including expanding an existing facility in Abilene, Texas, to meet this demand. This agreement is linked to a recent Oracle announcement of a $30 billion cloud deal. Oracle shares rose 3.9 percent following the news, reflecting investor interest in its growing cloud business.

US Plans AI Chip Shipment Restrictions On Malaysia, Thailand

Bloomberg (7/4, Subscription Publication) reports that the Trump Administration intends to restrict AI chip shipments from companies like Nvidia Corp. to Malaysia and Thailand. This move aims to prevent China from acquiring these components through intermediaries. A draft rule from the Commerce Department has been proposed but is not yet finalized. The rule would rescind global curbs from Biden’s AI diffusion rule, maintaining semiconductor restrictions targeting China. The Commerce Department has not commented, while Nvidia and the governments of Thailand and Malaysia have not responded. The export curbs would include measures to alleviate pressure on companies operating in these countries.

DOE Warns Of Potential Power Outages Due To AI Data Centers

E&E News (7/8, Behr, Subscription Publication) reports that on Monday, the DOE warned “that the United States will lose the race for leadership in artificial intelligence technology unless it slams the brakes on plans to close older coal- and gas-fired power plants and speeds up construction of new ones.” E&E News explains that “to dramatize the challenge, DOE said that parts of the mid-Atlantic and Great Plains regions could face 400 hours of power outages in 2030 in a worst-case scenario where tech companies build giant energy-hungry AI data centers unabated, old coal plants keep closing and new power supplies come online slowly.”

Nvidia Strengthens Ties With OpenAI

Times of India (7/8) reports that Nvidia is reinforcing its collaboration with OpenAI after the latter confirmed it will not extensively use Google’s AI chips. Despite a recent cloud service agreement with Google, OpenAI will continue relying on Nvidia’s GPUs and AMD AI chips. Nvidia expressed pride in its partnership with OpenAI on X, referencing a Reuters report about OpenAI’s decision. An OpenAI spokesperson mentioned ongoing tests with Google’s TPUs but no plans for large-scale usage. The Google deal allows OpenAI to use Google’s infrastructure, reducing its reliance on Microsoft’s Azure Cloud.

Meta Intensifies AI Talent Acquisition Efforts

Reuters (7/8) reports that Meta Platforms is aggressively recruiting top artificial intelligence talent for its new Superintelligence Labs to compete with OpenAI, Google, and Anthropic. This move follows senior staff departures and a lukewarm reception for Meta’s Llama 4 model. Notable hires include former Scale AI CEO Alexandr Wang as chief AI officer, former GitHub CEO Nat Friedman, and Daniel Gross from Safe Superintelligence. The recruitment drive includes offers of substantial bonuses, as noted by OpenAI CEO Sam Altman. This initiative aims to regain momentum in the AI race.

Technology Companies Push For Federal AI Regulation

The Wall Street Journal (7/9, Subscription Publication) reports that technology companies are advocating for federal artificial intelligence regulations following Congress’s decision to remove a proposed 10-year ban on state-level AI laws from the Trump Administration’s budget bill. The moratorium, initially included in the One Big Beautiful Bill Act, aimed to establish federal standards for data privacy and security in AI tools, preventing a fragmented state-by-state approach. Tech executives prefer a single overarching federal law to a variety of state laws.

Nvidia Reaches $4 Trillion Market Capitalization Milestone

Reuters (7/9, Chauhan) reports that Nvidia briefly achieved a market capitalization of $4 trillion on Wednesday, marking a historic milestone as the first company to do so. This achievement highlights its dominant position in the AI technology sector, driven by strong demand for its high-performance chips. Nvidia’s shares rose to an all-time high of $164.42, ending the day with a 1.80 percent gain and a market value of $3.97 trillion. The company’s rapid market value growth underscores Wall Street’s confidence in AI’s expansion. Nvidia’s stock has rebounded significantly, up about 74 percent from April lows. Despite its dominance, competitors like Advanced Micro Devices are vying for market share with lower-cost processors. Nvidia’s first-quarter revenue increased by 69 percent to $44.1 billion, with second-quarter revenue expected to reach $45 billion.

National AI Academy Will Train 400,000 Teachers By 2030

K-12 Dive (7/9, Merod) reports in continuing coverage, “Over 400,000 teachers – or about 1 in 10 – nationwide will receive free training to develop artificial intelligence fluency skills by 2030 through the National Academy for AI Instruction, a $23 million initiative announced Tuesday by the American Federation of Teachers, United Federation of Teachers, Microsoft, OpenAI and Anthropic.” A flagship campus will open in New York City this fall, initially focusing on AFT’s K-12 members. The initiative, which will offer “workshops, online courses and hands-on AI training to teachers,” is prioritizing access in high-need districts. AFT President Randi Weingarten said, “The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.” The academy “comes as district level AI training for teachers was found to be uneven in 2024.”

University Of San Francisco Law Program Becomes First To Fully Integrate Anthropic’s AI Chatbot

Mashable (7/9, DiBenedetto) reported, “Anthropic, the mind behind ChatGPT competitor Claude,” is getting involved in “new university and classroom partnerships that will put their educational chatbot into the hands of students of all ages.” The initiative, Claude for Education, aims to enhance learning by integrating with platforms like Canvas, Wiley, and Panopto. Anthropic explained, “We’re building toward a future where students can reference readings, lecture recordings, visualizations, and textbook content directly within their conversations.” The University of San Francisco School of Law will be the first to fully incorporate Claude, as Dean Johanna Kalb noted its use in courses like Evidence to teach students to apply LLMs in litigation. Additionally, this week Anthropic “announced it was joining a coalition of AI partners who were forming the new National Academy for AI Instruction, led by the American Federation of Teachers (AFT),” with a $500,000 investment.

AI-Powered Robot Successfully Performs Surgeries

Fierce Biotech (7/10, Hale) reports that researchers at Johns Hopkins University demonstrated an AI-powered robot’s ability to autonomously perform gallbladder removal surgeries. During a study, the robot, trained using surgery videos, completed eight procedures with 100% accuracy as it adapted to anatomical variations and unplanned events. Medical roboticist Axel Krieger noted this marks a shift towards robots understanding surgical procedures. The robot utilized AI algorithms, including large-language models, to respond to voice commands. This advancement is a step towards autonomous surgical systems. The study was published in Science Robotics.

China Builds Data Centers In Xinjiang For AI Ambitions

Bloomberg (7/9, Subscription Publication) reports that China is constructing data centers in Xinjiang to bolster its AI capabilities, with plans to acquire over 115,000 Nvidia chips, despite US export restrictions. Bloomberg’s investigation reveals that these centers aim to develop AI models competitive with US companies like OpenAI and Meta. The Chinese government approved 39 data center projects in Xinjiang and Qinghai, with Nyocor, a green energy firm, involved in significant projects. China’s strategy includes creating “computing power corridors” to facilitate AI development across the country.

EU Unveils AI Code Of Practice

The Financial Times (7/10, Subscription Publication) reports that the EU released its code of practice for general purpose AI, detailing rules effective next month for models like OpenAI’s GPT-4 and Google’s Gemini.

Education Leader Explains How AI Tools Enhance Project-Based Learning

K-12 Dive (7/9, Mendez-Padilla) reported that Jessica Garner, senior director of innovative learning at the International Society for Technology in Education, advocates for integrating artificial intelligence tools into project-based learning to enhance educational experiences. Garner said, “Technology should not be this isolated thing that happens outside of curriculum, instruction and teaching and learning.” She suggests that students “should learn to utilize AI as a tool that enhances their learning rather than having it do the work for them.” Garner said that if students “see value in the assignment, they want to do what’s more helpful to their learning.” She cautioned educators “to always question and verify the sources the AI tool uses to find information,” and to ensure alignment with school standards.

dtau...@gmail.com

unread,
Jul 18, 2025, 3:49:35 PMJul 18
to ai-b...@googlegroups.com

Microsoft, U.S. National Lab Tap AI to Speed Up Nuclear Power Permitting Process

A partnership between Microsoft and the Idaho National Laboratory (INL) aims to determine how to leverage AI to accelerate the document-compilation process to obtain permits for new nuclear power plans. They plan to produce the engineering and safety analysis reports required for the application process using Microsoft's AI systems. INL's Scott Ferrara said the technology potentially could help existing nuclear facilities complete the necessary evaluations to apply for operating-license amendments allowing them to boost power output.
[ » Read full article ]

Reuters; Stephen Nellis (July 16, 2025)

 

Malaysia Tightens Rules on Movement of U.S.-Made AI Chips

Permits will now be required for all high-performance AI chips that originate from the U.S. entering or leaving Malaysia, the country's Ministry of Investment, Trade and Industry said this week. This comes after recent reports of a Chinese company operating in Malaysia using servers equipped with Nvidia and other AI chips to train large language models. Malaysia’s emergence as a datacenter hub has drawn investment from major global tech players, as well as increased geopolitical scrutiny.
[ » Read full article ]

The Wall Street Journal; Ying Xian Wong (July 15, 2025)

 

Study on AI Therapy Chatbots Warns of Risks, Bias

A Stanford University study evaluating five AI-powered mental health chatbots found they pose safety risks by exhibiting biases and failing to respond appropriately in high-risk situations. In one experiment, the chatbots reacted with greater stigma toward conditions like schizophrenia and alcohol addiction than toward depression. Newer large language models showed similar biases, indicating that technical advancements have not mitigated the issue.
[ » Read full article ]

UPI; Chris Benson (July 14, 2025)

 

San Francisco Rolls out Microsoft’s Copilot AI for 30,000 City Workers

San Francisco Mayor Daniel Lurie announced Monday that Microsoft 365 Copilot Chat, powered by OpenAI’s GPT-4o, will be available to 30,000 workers across the city’s government. The move comes after a six-month test involving more than 2,000 city workers that showed the use of generative AI yielded productivity gains of up to five hours weekly. Lurie said he wants San Francisco to be “a beacon for cities around the globe on how they use this technology, and we’re going to show the way.”
[ » Read full article ]

CNBC; Kate Rogers (July 14, 2025)

 

ChatGPT Is Changing the Words We Use in Conversation

Researchers at Germany's Max Planck Institute for Human Development found that ChatGPT is subtly altering human speech patterns. The team identified "GPT words" like "delve" and "realm" that ChatGPT frequently adds when editing text. The researchers then tracked these words across more than 360,000 YouTube videos and 771,000 podcast episodes, comparing them to synonyms rarely used by the chatbot. Their analysis found a marked increase in GPT word usage in spoken language since the chatbot’s release.
[ » Read full article ]

Scientific American; Vanessa Bates Ramirez (July 11, 2025)

 

Tech to Protect Images Against AI Scrapers Beaten

Researchers at the University of Texas at San Antonio and Germany's Technical University of Darmstadt developed LightShed, a method for detecting and reversing image-based data poisoning tools designed to protect artists' work from being used in AI training by altering images to confuse machine learning models. The researchers aim to highlight the limitations of current defenses by showing that adversarial perturbations can be neutralized, similar to how image watermarking techniques have been removed by other machine learning methods.
[ » Read full article ]

The Register (U.K.); Thomas Claburn (July 11, 2025)

 

ITU Urges Stronger Measures to Detect AI-Driven Deepfakes

The UN's International Telecommunication Union (ITU) warned in a new report unveiled at its "AI for Good Summit" that advanced detection tools, and social media digital verification and authentication tools, are necessary to address the serious threats to elections and financial systems posed by deepfakes and AI-generated content. Said ITU's Bilel Jamoussi, "Trust in social media has dropped significantly because people don't know what's true and what's fake."
[ » Read full article ]

Reuters; Olivia Le Poidevin (July 11, 2025)

 

ACM President Ioannidis Sees More Humane Role for AI

The recently-concluded Fourth International Conference on Financing for Development (FFD4) saw ACM host its first Smart Technology, Fair Finance webinar, exploring the benefits and risks of AI in global development finance. Said ACM President Yannis Ioannidis, “Our central message is that technological advancement does not guarantee fair use,” he said. “AI often inspires either fear or unrealistic optimism. In truth, it can do immense good – but only if we ensure it serves everyone, not just the privileged."
[ » Read full article ]

ComputerWeekly.com; Pat Brans (July 11, 2025)

 

AI Slows Some Experienced Software Developers

A study by nonprofit METRon on a group of experienced software developers using AI coding assistant Cursor on open-source projects found that the tool actually slowed the developers down when they were working in codebases familiar to them. Before the study, the developers believed using AI would decrease task completion time by about a quarter. The study found, however, that using the AI tool actually increased task completion time by 19%.
[ » Read full article ]

Reuters; Anna Tong (July 10, 2025)

 

Human-like Machine Vision Is More Energy Efficient

Researchers at Osnabrück University's Institute of Cognitive Science in Germany developed an AI vision model with a structure similar to that of human vision in organization and specialization. Each spatial location in the All-Topographic Neural Network (All-TNN) has its own set of learnable parameters, with a "smoothness constraint" added during training to enable neighboring neurons to learn similar but not identical features. In tests, All-TNN used a tenth of the energy and was three times as strongly correlated with human vision as a convolutional neural network.
[ » Read full article ]

IEEE Spectrum; Matthew S. Smith (July 8, 2025)

 

AI Challenge to Find Lost Amazonian Civilizations Draws Critics

OpenAI's community science project, the OpenAI to Z Challenge, is offering up to $250,000 in cash and credits for OpenAI products to researchers who use the company's tools to search existing Amazon rainforest data and identify lost ancient cities. Archaeologists, Indigenous groups, and other critics say the project violates research norms and ethics codes by failing to consult with the more than 300 Indigenous communities in the Amazon.
[ » Read full article ]

Science; Sofia Moutinho (July 8, 2025)

 

Trump Hails $90 Billion in AI Infrastructure Investments at Pennsylvania Summit

At the Pennsylvania Energy and Innovation Summit at Carnegie Mellon University on Tuesday, U.S. President Trump touted more than $90 billion in corporate investment aimed at accelerating the development of AI in the state. Private equity firm Blackstone announced that it would invest $25 billion in new datacenters and energy infrastructure, while Google said it would invest $25 billion in datacenters and announced a $3-billion plan to upgrade two of Pennsylvania’s existing hydroelectric dams to produce more electricity.

[ » Read full article *May Require Paid Registration ]

The New York Times; Brad Plumer (July 15, 2025)

 

Nvidia Wins OK to Resume Sales of AI Chip to China

Nvidia has received assurances from the White House that it can sell its H20 AI chip in China days after CEO Jensen Huang met with U.S. President Trump. The U.S. Department of Commerce restricted sales of the chip to China earlier this year. The H20 chip was designed for Chinese customers and has been a top seller in the country since 2024.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Raffaele Huang; Amrith Ramkumar (July 14, 2025)

 

AI Shows Signs of Slashing U.K. Job Openings

A new McKinsey & Co. study found a sharp decrease in U.K. employment postings, particularly in roles vulnerable to AI. Overall online listings fell 31% from March to May of this year compared to the same period in 2022, the year ChatGPT was released. Postings for AI-exposed jobs, such as in tech and finance, fell 38%, with some roles like programmers, management consultants, and graphic designers falling more than 50%.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Irina Anghel; Joe Mayes (July 13, 2025)

 

Goldman Sachs Tests AI Software Engineer

CNBC (7/11, Son) reported that Goldman Sachs is testing an AI software engineer named Devin from Cognition, a startup valued at nearly $4 billion. Goldman tech chief Marco Argenti told CNBC that Devin will augment the bank’s 12,000 human developers by handling tasks considered drudgery, such as updating internal code. “We’re going to start augmenting our workforce with Devin,” Argenti stated. Cognition claims Devin is the first AI software engineer, and Goldman is the first major bank to use it. Argenti envisions a “hybrid workforce” where humans and AI work side-by-side. This development could heighten concerns about AI-induced job cuts, with Bloomberg predicting up to 200,000 job losses in banking due to AI in the next 3-5 years.

Nvidia CEO Says US Should Not Be Concerned About Chinese Military Using AI Chips

Bloomberg (7/13, Subscription Publication) reports that Nvidia CEO Jensen Huang “said the US government doesn’t need to be concerned that the Chinese military will use his company’s products to improve their capabilities. Addressing the largest concern Washington has cited in placing increasing restrictions on US technology exports to the Asian nation, Huang said the Chinese military will avoid using US technology because of the risks associated with doing so.” Bloomberg adds that Huang has “argued that the strategy will fail because it will spur growth of domestic capabilities in China that will eventually rival those created by the US technology industry.”

SpaceX Reportedly To Invest $2B In xAI

The Wall Street Journal (7/12, Jin, Peterson, Subscription Publication) reported that SpaceX will invest $2 billion in Elon Musk’s AI company, xAI, according to investors. This investment is part of a $5 billion equity fundraise announced in June by Morgan Stanley.

AWS Collaborates On AI In Medtech

Healthcare Dive (7/10, Reuter) reported that Rowland Illing, Amazon Web Services’ global chief medical officer, discussed AI trends in medical technology. During an interview, Illing highlighted the shift towards using foundation models, which are pre-trained on large datasets, in medtech. AWS is partnering with companies like Illumina, Johnson & Johnson MedTech, Medtronic, and Abbott to implement AI solutions. Illing noted the rise of generative AI applications and their integration with the FDA’s FiDL platform. He emphasized the potential of foundation models to analyze comprehensive imaging data, enhancing diagnostic capabilities beyond human interpretation.

Pentagon Awards AI Contracts To Top US Developers

Bloomberg (7/14, Davalos, Subscription Publication) reports that four of the top US artificial intelligence developers “won contracts from the Defense Department aimed at accelerating the military’s adoption of the emerging technology.” The Pentagon’s Chief Digital and Artificial Intelligence Office on Monday announced “that it will grant contracts to Alphabet Inc.’s Google, OpenAI, Elon Musk’s xAI and Anthropic PBC.” Each contract has a ceiling of $200 million. CNBC (7/14, Capoot) reports Doug Matty, the DoD’s chief digital and AI officer, said, “The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries.”

        Pentagon To Use Grok AI. The Washington Post (7/14) reports that the Defense Department “will begin using Grok, the AI chatbot built by Elon Musk’s start-up xAI, the company announced in a post on Monday.” The announcement came as Grok launched “what it called ‘Grok for Government,’ a suite that allows agencies and federal offices to adopt its chatbots for their specific uses.” xAI “said its products would now be ‘available to purchase via the General Services Administration (GSA) schedule,’ allowing ‘every federal government department, agency, or office’ to buy them.”

AI Models Propel Biomedical Research Advancements

Healio (7/14, Rhoades) reports that new foundational AI models are enhancing biomedical research, as discussed at Cleveland Clinic’s AI Summit for Healthcare Professionals. Jianying Hu, PhD, an IBM fellow, emphasized the transformative nature of these models, which use self-supervised learning to build generalized models from multimodal data. Hu noted their application in healthcare, such as creating high-fidelity primary human cell cultures. Challenges in comparing in vitro and in vivo intestinal epithelial stem cell cultures are addressed using AI models like the fine-tuned BMFM-RNA model and comparator workflow.

Meta Expands AI Efforts With Massive Data Centers

Reuters (7/14) reports that Meta Platforms CEO Mark Zuckerberg revealed plans to invest “hundreds of billions of dollars to build several massive AI data centers for superintelligence, intensifying his pursuit of a technology he has chased with a talent war for top engineers.” The first data center, Prometheus, is set to launch in 2026, while Hyperion will expand up to 5 gigawatts. Zuckerberg highlighted the company’s strong advertising business to support the investment. Meta’s AI division, Superintelligence Labs, aims to generate revenue from the Meta AI app and other tools. The company is investing heavily in AI, having raised its 2025 capital expenditure to between $64 billion and $72 billion. Meta’s efforts include a talent raid led by Zuckerberg, with Alexandr Wang and Nat Friedman heading the Superintelligence Labs.

        TechCrunch (7/14, Zeff) says Meta’s announcement marks its “latest move to get ahead of OpenAI and Google in the AI race,” adding: “After previously poaching top talent to run Meta Superintelligence Lab, including former Scale AI CEO Alexandr Wang and former Safe Superintelligence CEO Daniel Gross, Meta now seems to be turning its attention to the massive computational power needed to train frontier AI models.” CEO Mark Zuckerberg “said Hyperion’s footprint will be large enough to cover most of Manhattan,” and “noted that Meta plans to bring a 1 GW super cluster, called Prometheus, online in 2026, making it one of the first tech companies to do so.” TechCrunch says the two data centers combined “will soak up enough energy to power millions of homes, which could pull significant amounts of electricity and water from neighboring communities.”

US Lifts Restrictions On Sale Of Nvidia Chips To China

The AP (7/15, Kurtenbach, Grantham-Philips) reports Nvidia CEO Jensen Huang has announced his company “won approval from the Trump administration to sell its advanced H20 computer chips used to develop artificial intelligence to China. The news came in a company blog post late Monday, which stated that the US government had ‘assured’ Nvidia that licenses would be granted – and that the company ‘hopes to start deliveries soon.’” Huang “also spoke about the coup on China’s state-run CGTN television network,” telling members of the press, “the US government has approved for us filing licenses to start shipping H20s.” Bloomberg (7/15, Bergen, King, Hawkins, Subscription Publication) points out that Huang had taken “almost every opportunity available to him – from the stage of tech events to Washington visits – to argue that a crackdown on China is counterproductive. During his appearances, he navigated a fine line between praising” President Trump’s “policies aimed at bringing back chip manufacturing to the US and demanding more freedom to do business in China.”

        Reuters (7/15, Renshaw, Freifeld) reports Commerce Secretary Lutnick confirmed the “planned resumption of sales of...H20 AI chips to China,” explaining that the move “is part of US negotiations on rare earths.” The Secretary told Reuters, “‘We put that in the trade deal with the magnets’...referring to an agreement Trump made to restart rare earth shipments to US manufacturers.” CNBC (7/15, Leswing) reports Lutnick clarified the deal “will not be giving over” the “best” US technology, saying Nvidia “wants to sell China its ‘fourth best’ chip, which is slower than the fastest chips that US companies use.”

Administration Expanding Use Of AI Across Agencies

The Washington Post (7/15, A1, MacMillan, Siddiqui, Natanson, Dwoskin) highlights how the Administration is accelerating the use of artificial intelligence to “automate work across nearly every agency in the executive branch.” According to documents and interviews with federal workers, AI technology “could soon play a central role in tax audits, airport security screenings and more.” Many of these projects “aim to shrink the federal workforce – continuing the work of” DOGE. Government AI is also “promised to reduce wait times and lower costs to American taxpayers.” However, “government tech watchdogs worry the Trump administration’s automation drive – combined with federal layoffs – will give unproven technology an outsize role.”

Most States Have Issued AI Guidelines For K-12 Schools

Stateline (7/15, Fitzgerald) reports that agencies in “at least 28 states and the District of Columbia have issued guidance on the use of artificial intelligence in K-12 schools.” More than half of the states have developed policies “to define artificial intelligence, develop best practices for using AI systems and more, according to a report from AI for Education,” an advocacy group. AI for Education CEO and co-founder Amanda Bickerstaff, who said that state-level guidance is essential for navigating AI in education, is highlighting concerns about “academic integrity” and the need for “safety guidelines around responsible use.” North Carolina, which was among the first to issue AI guidance, is focusing on generative AI and classroom resources. Georgia emphasized “ethical principles educators should consider” in its updated guidance. Maine, Missouri, Nevada, and New Mexico also released guidelines this year.

Elon Musk’s xAI Explores Saudi Data Center Options

Bloomberg (7/16, Subscription Publication) reports that Elon Musk’s artificial intelligence startup xAI is “in discussions to lease data center capacity in Saudi Arabia.” The company is negotiating with two potential partners to expand its infrastructure in regions with “cheap energy and political goodwill.” Humain, a Saudi-backed AI firm, offers several gigawatts of capacity but remains in early stages, while another partner is developing a more immediate 200-megawatt facility. xAI would lease rather than own these facilities to support its compute-intensive AI models. AI companies including xAI and OpenAI are increasingly reliant on data centers, which require significant power. The Humain project, if realized, could become one of the largest data centers globally.

Google, Meta Announce Major AI Infrastructure Investments

CNET News (7/15) reports that Google plans to invest $25 billion in data centers and AI infrastructure linked to the PJM Interconnection, spanning 13 states in the eastern US, mainly around Pennsylvania. Google will also invest $3 billion in hydropower to support its carbon-free goal by 2030. Meanwhile, Meta CEO Mark Zuckerberg announced “hundreds of billions of dollars” in investments for computing to build superintelligence, with projects like Prometheus and Hyperion. Carnegie Mellon’s Ramayya Krishnan highlighted that data centers are crucial for AI production, potentially impacting local communities both positively and negatively.

        Meta Partners With AWS To Support AI Startups. CNN (7/16, Duffy) reports that Meta and Amazon Web Services (AWS) are collaborating to support 30 US startups in developing AI tools using Meta’s Llama AI model. The initiative offers “six months of technical support from both companies’ engineers and $200,000 in AWS cloud computing credits” per startup. AWS Vice President Jon Jones emphasized empowering founders to create transformative AI. Meta’s Ash Jhaveri highlighted startups’ creativity in shaping AI’s future.

The Ocean Cleanup Partners With AWS To Remove Plastic From Oceans Using AI

Cruising World (7/17) reports The Ocean Cleanup, a nonprofit focused on removing plastic pollution, has partnered with AWS to leverage AI and cloud computing in cleaning the Great Pacific Garbage Patch. The initiative aims to remove 90% of floating ocean plastic by 2040 by using AI to detect debris hotspots and automate marine life monitoring. The Ocean Cleanup has already removed 64 million pounds of debris globally and plans to expand operations with AWS’s technology. The partnership highlights how tech innovation can address environmental challenges impacting marine life and navigation.

Microsoft, Google Use AI For Nuclear Reactor Development

Axios (7/17, Geman) reports that Microsoft and Google are collaborating to integrate AI into the nuclear reactor licensing and construction process, aiming to enhance efficiency and speed up deployment. The Department of Energy’s Idaho National Lab will use a Microsoft tool for creating safety and analysis reports required by the Nuclear Regulatory Commission. Microsoft’s AI Director Nelli Babayan stated, “Introducing AI technologies will enhance efficiency and accelerate the deployment of advanced nuclear technologies.” The collaboration also involves Westinghouse AI tools and Google’s cloud capabilities to improve reactor deployment and operations.

        Forbes (7/17, Werner) reports that Westinghouse is leveraging its proprietary Hive and Bertha AI models for this initiative, which aim to optimize fuel loading, automate maintenance, and streamline licensing documentation. The new reactor designs are intended to enhance nuclear energy scalability in the US while maintaining safety and operational efficiency. The collaboration reflects a broader effort to meet rising energy demands for data centers with carbon-free nuclear power.

Reply all
Reply to author
Forward
0 new messages