Dr. T's AI brief


dtau...@gmail.com

Aug 6, 2025, 7:13:58 PM
to ai-b...@googlegroups.com

Trump AI Plan Pulls Restraints

The Trump administration's AI action plan outlines a strategy to establish U.S. dominance in AI through three key initiatives: accelerating innovation, expanding domestic AI infrastructure, and promoting U.S. hardware and software as global standards. The plan centers on a federal approach to eliminate "bureaucratic red tape." Said Trump, “We have to have a single federal standard, not 50 different states regulating this industry."
[ » Read full article ]

CNN; Lisa Eadicicco; Clare Duffy (July 23, 2025)

 

Researchers Bypass Anti-Deepfake Markers on AI Images

Researchers at the University of Waterloo in Canada developed a tool that can quickly remove watermarks identifying artificially generated content. The UnMarker tool can remove watermarks without knowing anything about the system that generated them or about the watermarks themselves. Explained Waterloo’s Andre Kassis, "We can just apply this tool and within two minutes max, it will output an image that is visually identical to the watermarked image" but without the watermark indicating its artificial origin.
[ » Read full article ]

CBC News (Canada); Anja Karadeglija (July 23, 2025)

 

Machine Learning Uncovers Threats to Global Underground Fungi Networks

Researchers at the Society for the Protection of Underground Networks developed the first high-resolution global maps of mycorrhizal fungal biodiversity by using machine learning on a dataset of more than 2.8 billion samples from 130 countries. The study revealed that 90% of these underground biodiverse fungal hotspots lie outside protected ecosystems, and that the loss of such ecosystems could threaten crop productivity, carbon drawdown efforts, and ecosystem resilience to climate extremes.
[ » Read full article ]

The Guardian (U.K.); Taro Kaneko (July 23, 2025)

 

AI Models with Systemic Risks Given Pointers on Complying with EU AI Rules

The European Commission (EC) on Friday unveiled guidelines to help AI models determined to have systemic risks comply with the EU's AI Act. Impacted AI models will have to carry out evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the EC, and ensure adequate cybersecurity protection against theft and misuse. Companies have until August 2026 to comply with the legislation.
[ » Read full article ]

Reuters; Foo Yun Chee (July 18, 2025)

 

Netflix Uses GenAI for First Time in Series

Netflix used generative AI to create visual effects (VFX) for its Argentine science-fiction series "El Eternauta," marking the first time GenAI-generated final footage has appeared in one of its original series. Netflix joined forces with production innovation group Eyeline Studios to produce a building-collapse sequence in Buenos Aires using GenAI. Netflix co-CEO Ted Sarandos said GenAI created the VFX sequence 10 times faster than conventional VFX tools and at a cost that fit the show's budget.
[ » Read full article ]

Reuters; Dawn Chmielewski; Lisa Richwine (July 17, 2025)

 

On Its Path to the Future, AI Studies Roman History

An AI model from researchers at Google's DeepMind trained on a vast body of ancient Latin inscriptions to place a more precise date on an important Latin text credited to a Roman emperor. Historians have long clashed over when “Res Gestae Divi Augusti” (“Deeds of the Divine Augustus”) was first etched in stone. The Aeneas model cited a wealth of evidence to conclude that the text originated around A.D. 15, shortly after Augustus’s death.
[ » Read full article *May Require Paid Registration ]

The New York Times; William J. Broad (July 23, 2025)

 

Google AI System Wins Gold in International Math Olympiad

An AI system from Google DeepMind achieved “gold medal” status by solving five of the six problems at the annual International Mathematical Olympiad. OpenAI achieved a similar score on this year’s questions, though it did not officially enter the competition. Both systems received and responded to the questions much like humans, while other AI systems could answer questions only after humans translated them into a programming language built for solving math problems.
[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (July 21, 2025)

 

AI Groups Replace Low-Cost ‘Data Labelers’ with High-Paid Experts

Top AI companies are replacing low-cost “data labelers” in Africa and Asia with higher-paid industry specialists, as they move to create more complex and accurate models. In response, data-labeling startups are hiring experts in fields such as biology and finance to help the companies create the sophisticated training data vital for development of the next generation of AI systems. Said Olga Megorskaya of Netherlands-based generative AI services provider Toloka, “Finally, [the industry] is accepting the importance of the data for training."
[ » Read full article *May Require Paid Registration ]

Financial Times; Melissa Heikkilä (July 20, 2025)

 

Cybersecurity Bosses Increasingly Worried About AI Attacks, Misuse

A survey of around 110 chief information security officers (CISOs) by Israeli venture fund Team8 found that close to a quarter said their firms had experienced an AI-powered cyberattack in the past year. About 40% of respondents cited securing AI agents as an unsolved cybersecurity challenge, while a similar percentage expressed concerns about securing employees' AI usage. More than three-quarters (77%) of respondents said they expect less-experienced security operations center analysts to be among the first roles replaced by AI agents.
[ » Read full article *May Require Paid Registration ]

Bloomberg; Cameron Fozi (July 17, 2025)

 

University Of Michigan Law School Mandates AI In Admissions Essays

Inside Higher Ed (7/18, Alonso) reported that in 2023, the University of Michigan Law School “made headlines for its policy banning applicants from using generative AI to write their admissions essays.” The school has since reversed that policy, and it is now “mandating the use of AI – at least for one optional essay.” That essay prompts applicants to use AI to craft a response discussing how they currently use AI and how they expect to use it in law school. Senior Assistant Dean Sarah Zearfoss “said she was inspired to include such a question after hearing frequent anecdotes over the past year about law firms using AI to craft emails or short motions.” Michigan Law “still disallows applicants from using AI writing tools when they compose their personal statements and for all other supplemental essay questions.” Attorney Frances M. Green “told Inside Higher Ed that she believes the ability to use and engage with AI will eventually become a required skill for all lawyers.”

Georgia Tech Receives Funding To Build AI-Driven Supercomputer

Forbes (7/20, Nietzel) reports that the National Science Foundation (NSF) “has awarded the Georgia Institute of Technology $20 million to lead the construction of a new supercomputer – named Nexus – that will use artificial intelligence to advance scientific breakthroughs.” According to the NSF, Nexus will act as “a critical national resource to the science and engineering research community,” which will enable faster AI-driven discoveries. Georgia Tech President Ángel Cabrera said, “It’s fitting we’ve been selected to host this new supercomputer, which will support a new wave of AI-centered innovation across the nation.” Nexus will perform “400 quadrillion operations per second” and have substantial memory and storage capabilities. The project, in collaboration with the University of Illinois’ National Center for Supercomputing Applications, aims to establish a high-speed network for US researchers. Construction is set to begin this year, “with completion expected by spring 2026.”

EU Issues AI Guidelines Amid Systemic Risk Concerns

Reuters (7/18) reports that the European Commission released guidelines on Friday to assist AI models identified as having systemic risks in adhering to the European Union’s AI Act. The act, effective Aug. 2, applies to models from companies like Google, OpenAI, Meta, Anthropic, and Mistral. These companies must comply by Aug. 2 next year or face fines ranging from 7.5 million euros to 35 million euros. The guidelines address criticisms about regulatory burdens and clarify obligations for companies, including model evaluations, risk assessments, and cybersecurity measures. General-purpose AI models must meet transparency requirements. EU tech chief Henna Virkkunen stated, “With today’s guidelines, the Commission supports the smooth and effective application of the AI Act.”

NIH Sets Limit On Grant Applications To Curb AI Use

Science (7/18, Jacobs) reported that the National Institutes of Health announced a new policy limiting scientists to six grant applications per year, effective Sept. 25. This policy aims to prevent AI-generated proposals from overwhelming the NIH’s review system. Generative AI-assisted applications are prohibited, and NIH will use technology to detect such content, with potential penalties for violations. Critics argue this cap could hinder researchers already facing funding challenges due to political and budgetary constraints. Michael Lauer, former NIH deputy director for extramural research, supports the cap as a necessary measure against misuse, citing an incident of a researcher submitting more than 40 AI-generated applications. The policy applies to new, resubmitted, renewed, and revised applications, with concerns about its effect on collaborations and research strategies.

AI Enhances California’s Electric Grid Operations

The San Diego (CA) Union-Tribune (7/18, Nikolewski) reported that the California Independent System Operator (CAISO) has initiated a pilot program incorporating AI to optimize its grid operations. Developed by Open Access Technology International Inc. (OATI), the AI software, named Genie, aims to streamline the management of planned and unplanned transmission grid outages. OATI’s vice president, Abhi Thakur, explained that the AI system will “aggregate meaningful, important information” to assist grid operators. CAISO’s chief information officer, Khaled Abdul-Rahman, stated that this initiative aligns with their modernization efforts to maintain system reliability.

President Planning To Sign Three AI-Focused Executive Orders This Week

President Trump “plans to sign three AI-focused executive orders in the runup to the release of the administration’s sweeping AI Action Plan anticipated Wednesday, according to multiple people familiar with the matter and outlining documents obtained by” NextGov (7/21, Kelley, DiMolfetta), which adds they claimed he is expected to sign them “either on Tuesday or before the White House’s AI Action Plan event kicks off on Wednesday.” NextGov reports the orders focus upon “one of three aspects of artificial intelligence regulation and policy that the administration has prioritized: spearheading AI-ready infrastructure; establishing and promoting a U.S. technology export regime; and ensuring large language models are not generating ‘woke’ or otherwise biased information.” In a statement, White House Office of Science and Technology spokeswoman Victoria LaCivita said, “The [AI Action Plan] will deliver a strong, specific and actionable federal policy roadmap that goes beyond the details reported here and we look forward to releasing it soon.”

Amazon ML Summer School Expands AI Education In India

Dataquest (IND) (7/21, Ghatak) interviewed Amazon VP of Machine Learning Rajeev Rastogi about the 2025 edition of Amazon ML Summer School, which has grown from 300 to nearly 10,000 learners since 2021. The program now integrates large language models, responsible AI, and hands-on problem-solving while prioritizing diversity, with over 34,000 women applicants since inception. Rastogi emphasized Amazon’s “multifaceted approach” to building India’s ML talent pipeline through broad education initiatives, internships contributing to real products, and internal upskilling programs like Machine Learning University. The curriculum reflects Amazon’s production-scale ML systems, teaching students to bridge theory and business impact. Rastogi noted the demand for practitioners who can “translate research into scalable solutions” while navigating ethical considerations. The program continues to balance scale and depth through personalized learning pathways and peer collaboration.

 

Siemens CEO Urges Germany To Use Big Industrial Data Set For AI Push

Fortune (7/21) reports Siemens AG Chief Executive Officer Roland Busch, during a Bloomberg TV interview, said that Germany’s industrial companies have “a massive amount of data,” and called for the country to leverage it to take advantage of AI. He also said that Europe needs to change its regulatory structure to enable competition with US software companies.

Microsoft Invests in European Language AI Initiatives

TechZine (7/21) reports that Microsoft is enhancing European language technology with new AI initiatives announced in Paris. These efforts focus on multilingual models, open-source data, and cultural heritage. The company aims to address the dominance of English-language AI systems by improving multilingual representation within Large Language Models (LLMs). Microsoft is collaborating with the University of Strasbourg and platforms like Hugging Face to provide multilingual datasets. In the Netherlands, the GPT-NL project, led by TNO, SURF, and NFI, is developing a Dutch-specific language model using news data from publishers and ANP.

Meta Declines To Sign EU’s AI Code, Citing Legal Concerns

TechCrunch (7/18, Iyer) reports that Meta rejected the EU’s voluntary code for AI compliance, citing legal uncertainties and overreach. Joel Kaplan, Meta’s global affairs officer, criticized the code, claiming it hampers AI development in Europe. The EU’s AI Act, effective August 2, targets “unacceptable risk” applications and mandates documentation and compliance with content owners. Despite opposition from major tech companies, the EU maintains its schedule. AI model providers, including Meta, must comply by August 2027 if operational before August 2, 2023.

Stargate AI Joint Venture Navigates Delays And Setbacks

The Wall Street Journal (7/21, Subscription Publication) reports that the $500 billion Stargate AI project, announced at the White House, is struggling, with no data center deals completed six months after the announcement. SoftBank and OpenAI, which are leading the project, are in disagreement over terms. Having originally pledged to invest $100 billion immediately, the partners now aim to build a smaller data center in Ohio by year-end. Despite the setbacks, OpenAI CEO Sam Altman has independently secured a $30 billion annual data-center deal with Oracle.

AI Surge Boosts Demand for Renewable Energy In Europe, US

Barron’s (7/18, Clark) reports that the AI boom is expanding data centers globally, requiring significant investment, particularly in Europe. European data centers’ power demand is expected to soar, necessitating €100 billion annually in electricity network investments over the next decade, according to Newmark. GE Vernova Hitachi Nuclear Energy is highlighted as a leader in small modular reactors, which are gaining interest in Europe. Meanwhile, US data centers will also see increased electricity consumption.

Survey Highlights Faculty Concerns Over AI Governance

Inside Higher Ed (7/22, Palmer) reports that a survey released on Tuesday by the American Association of University Professors (AAUP) reveals concerns about the integration of artificial intelligence (AI) in higher education. The survey indicates that while “90 percent of the 500 AAUP members who responded to the survey last December said their institutions are integrating AI into teaching and research, 71 percent said administrators ‘overwhelmingly’ lead conversations about introducing AI into research, teaching, policy and professional development, but gather ‘little meaningful input’ from faculty members, staff or students.” An AAUP report says, “Many colleges and universities currently have no meaningful shared governance mechanisms around technology.” Despite AI’s potential, faculty express concerns about job security, student success, and academic freedom.

Amazon Labs Leveraging AI, Robotics

AI Magazine (7/22) reports Amazon’s Operations Innovation Labs in Vercelli, Italy, and Sumner, Washington, leverage AI and robotics to improve logistics efficiency, worker safety, and sustainable packaging. The labs test technologies like the AI-powered Flat Sorter Robotic Induct and Bags Containerisation Matrix Sorter, which reduce manual labor and waste. Chief Sustainability Officer Kara Hurst highlighted efforts to make packaging “smaller, lighter, and more sustainable.” The Vercelli lab offers public tours showcasing innovations, including the Universal Robotic Labeller, which minimizes excess materials.

Texas Set To Lead US In Power Capacity For AI Growth

Argus Media (7/22, Hast) reports that Texas is set to lead the US in new power generating capacity, driven by demand from AI data centers. ERCOT has 28 GW of capacity in development, projected to come online by 2027, surpassing other US electricity markets. ERCOT’s “connect and manage” process enables rapid integration of new generation, contributing to its leadership in power capacity additions. However, the watchdog Texas Reliability Entity warns that rapid data center growth could impact grid reliability.

President Signs Executive Orders To Boost US AI Industry

The New York Times (7/23, McCabe, Kang) reports in continuing coverage that President Trump “said on Wednesday that he planned to speed the advance of artificial intelligence in the United States, opening the door for companies to develop the technology unfettered from oversight and safeguards, but added that AI needed to be free of ‘partisan bias.’” The Times adds that in a “sweeping effort to put his stamp on the policies governing the fast-growing technology,” the President “signed three executive orders and outlined an ‘AI Action Plan,’ with measures to ‘remove red tape and onerous regulation’ as well as to make it easier for companies to build infrastructure to power AI.”

        Bloomberg (7/23, Lai, Davalos, Hordern, Subscription Publication) reports that the orders Trump signed “include a measure addressing energy and permitting issues for AI infrastructure, a directive to promote AI exports and one that calls for large language models procured by the government to be neutral and unbiased.” The President said at the event, “America is the country that started the AI race, and as President of the United States, I’m here today to declare that America is going to win it.” Reuters (7/23, Nellis) reports that as part of the effort, the Administration “recommended implementing export controls that would verify the location of advanced artificial intelligence chips, a move that was applauded by US lawmakers from both parties in both houses of Congress.”

AI Tools Are Being Integrated Into Popular Course Software

The Chronicle of Higher Education (7/23, Huddleston) reports that Canvas, a learning-management platform, will now integrate artificial intelligence (AI) tools, including generative AI, as announced by its parent company Instructure on Wednesday. On Canvas, faculty members “will be able to click an icon that connects them with various AI features aimed at streamlining and aiding instructional workload, like a grading tool, a discussion-post summarizer, and a generator for image alternative text.” Canvas’ parent company, Instructure, “is also in partnership with OpenAI, the maker of ChatGPT, so instructors can use generative-AI technology as part of their assignments.” Instructors can “choose to create assignments paired with existing large language models, including Gemini and Microsoft Copilot.” Instructors can also opt out of using AI, but concerns remain about the potential impact on faculty roles and class sizes.

Amazon Announces Winners Of Inaugural Nova AI Challenge

SiliconANGLE (7/23) reports that Amazon revealed the winners of its first Nova AI Challenge, a global competition in which university teams tested AI coding assistants’ security through live adversarial scenarios. Team PurpCorn-PLAN from the University of Illinois Urbana-Champaign won the defending track by building a secure coding assistant using Amazon’s custom 8 billion-parameter model, while Purdue University’s Team PurCL topped the attacking track by jailbreaking rival models. Amazon, which evaluated teams using AWS tools like CodeGuru and human reviewers, prioritized a balance between safety and usability. Amazon CISO Eric Docktor said the tournament “accelerates secure, trustworthy AI-assisted software development.” Each team received $250,000 in sponsorship and AWS credits, with the winners gaining an additional $250,000 in prize money and the runners-up receiving an additional $100,000. Participants later shared research at Amazon’s Nova AI Summit.

FDA’s AI Tool Navigates Reliability Hurdles

CNN International (7/23, Owermohle) reports that Elsa, the Food and Drug Administration's artificial intelligence tool intended to expedite drug and medical device approvals, has faced criticism for generating nonexistent studies and misrepresenting research. Despite being designed to streamline processes, FDA officials revealed concerns over its reliability, with some staff doubling their efforts to verify information. FDA AI head Jeremy Walsh acknowledged the tool's limitations, stating it “could potentially hallucinate.” While Elsa is used for organizational tasks, its adoption has been limited due to these issues. FDA Commissioner Dr. Marty Makary emphasized that its use is optional.

Oregon Partners With Nvidia For AI Education

Oregon Capital Chronicle (7/23, Baumhardt) reports that months after Oregon “signed an agreement with the computer chip company Nvidia to educate K-12 and college students about artificial intelligence, details about how AI concepts and ‘AI literacy’ will be taught to children as young as 5 remain unclear.” The agreement allocates $10 million to expand AI education in collaboration with Nvidia. Despite the inclusion of K-12 schools, the Oregon Department of Education has not commented on the plan. Higher Education Coordinating Commission Executive Director Ben Cannon said the agreement aims to prepare students for “responsible application of AI.” Nvidia plans to focus on the “university ecosystem” first, with faculty training to become “Nvidia ambassadors.” The agreement also highlights industries like “renewable energy, healthcare, agriculture, microelectronics and manufacturing – specifically, semiconductor design and manufacturing.”

Idaho National Laboratory Partners With AWS To Develop AI For Nuclear Energy

ExecutiveGov (7/24) reports Idaho National Laboratory will use AWS AI tools and cloud infrastructure to develop AI for nuclear energy projects, including autonomous reactors. INL Director John Wagner said the partnership “underscores the critical role of linking the nation’s nuclear energy laboratory with AWS” and will accelerate nuclear energy deployment. The lab will use Amazon Bedrock, SageMaker, and specialized chips like Inferentia and Trainium to build AI applications and create digital twins of modular reactors. AWS VP David Appel said AWS technology will help INL pioneer “safer, smarter” nuclear operations. Appel added, “We’re proud to collaborate with the Department of Energy and Idaho National Laboratory to accelerate safe advanced nuclear energy.”

Global Tech Firms Gear Up For World AI Conference

Reuters (7/25) reports, “Tech firms huge and small will converge in Shanghai this weekend to showcase their artificial intelligence innovations and support China’s booming AI sector as it faces US sanctions.” Chinese “heavy hitters” like Huawei and Alibaba will demonstrate their technology at the two-day World AI Conference, “but Western names like Tesla, Alphabet and Amazon will also participate.” Chinese Premier Li Qiang will address the opening of the conference, “highlighting the sector’s importance to the leaders of the world’s second-largest economy.”

Ecolab Shifts Focus Toward Sustainable Data Centers

The Minneapolis Star Tribune (7/23, Martin) reports that Ecolab Chairman and CEO Christophe Beck announced a strategic pivot towards AI data centers and semiconductor manufacturing, with a focus on sustainability. Beck said the company will “do it in a way that uses less energy and water.” Ecolab’s 3D Trasar technology, designed for AI workloads, reduces water use by 15 percent and significantly cuts energy consumption. The system, which employs AI to monitor coolant properties in real time, showcases AI’s potential in addressing its environmental challenges.

dtau...@gmail.com

Aug 8, 2025, 7:18:38 PM
to ai-b...@googlegroups.com

Meta Prepares for Gigawatt Datacenters to Power 'Superintelligence'

Meta has increased its operating costs and research and development spending to develop AI with "superintelligence" through its Meta Superintelligence Labs. CEO Mark Zuckerberg outlined plans for personal superintelligence that deeply understands users and helps them achieve their goals. To support this development, Meta is building massive datacenter clusters, including the upcoming 1+ gigawatt (GW) Prometheus cluster and Hyperion, which ultimately could scale to 5 GW.
[ » Read full article ]

Computer Weekly; Cliff Saran (July 31, 2025)

 

Nvidia Says Its Chips Have No 'Backdoors' After China Flags H20 Security Concerns

The Cyberspace Administration of China (CAC) has expressed concerns about potential security risks stemming from a U.S. proposal to equip advanced AI chips with tracking and positioning functions. CAC, China's Internet regulator, called for a meeting with Nvidia on July 31 regarding potential backdoor security risks in its H20 AI chip. In response, Nvidia said its H20 AI chip has no backdoors that would enable remote access or control.
[ » Read full article ]

Reuters (July 31, 2025)

 

Robots That Learn to Fear Like Humans Survive Better

Researchers at Italy's Polytechnic University of Turin developed a control system that improves robots' ability to assess risk and avoid danger by emulating a "low road" fear response, in which quick decisions are made in response to unknown stimuli. The researchers paired a reinforcement learning-based controller, which makes real-time, dynamic adjustments to constraints and priorities based on raw environmental data, with a nonlinear model predictive controller that alters the robot's movements accordingly.
[ » Read full article ]

IEEE Spectrum; Michelle Hampson (July 26, 2025)
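As a rough illustration of the idea described above, the "fear" response can be thought of as a risk score that tightens the safety constraints handed to the motion controller. The toy sketch below is not the Turin group's system: all names, dynamics, and numbers are hypothetical, and a simple proportional step stands in for the nonlinear model predictive controller. It only shows how an unfamiliar stimulus forces a wider safety margin from an obstacle.

```python
# Toy sketch: a "fear" (risk) signal tightens the safety constraint that the
# motion controller must respect. All values and dynamics are hypothetical.

def risk_score(stimulus_familiarity: float) -> float:
    """Map familiarity in [0, 1] to risk in [0, 1]; unfamiliar -> high risk."""
    return 1.0 - max(0.0, min(1.0, stimulus_familiarity))

def safety_margin(risk: float, base: float = 0.5, extra: float = 1.5) -> float:
    """Require a larger clearance from the obstacle as risk rises."""
    return base + extra * risk

def step_toward_goal(pos: float, goal: float, obstacle: float,
                     margin: float, gain: float = 0.3) -> float:
    """One proportional control step toward the goal, clipped so the robot
    never enters the margin around the obstacle (stand-in for the MPC)."""
    new_pos = pos + gain * (goal - pos)
    return min(new_pos, obstacle - margin)

def run(pos: float, goal: float, obstacle: float,
        familiarity: float, steps: int = 50) -> float:
    """Simulate the loop: assess risk once, then move under the constraint."""
    margin = safety_margin(risk_score(familiarity))
    for _ in range(steps):
        pos = step_toward_goal(pos, goal, obstacle, margin)
    return pos

# A familiar environment lets the robot approach closer than an unfamiliar one.
near = run(pos=0.0, goal=10.0, obstacle=10.0, familiarity=1.0)  # margin 0.5
far = run(pos=0.0, goal=10.0, obstacle=10.0, familiarity=0.0)   # margin 2.0
```

In the real system the risk assessment runs continuously and reshapes the predictive controller's constraints online; the one-shot version here is only meant to make the constraint-tightening mechanism concrete.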

 

Chinese Firms Form Alliances to Build Domestic AI Ecosystem

Chinese AI companies have established two new industry alliances in hopes of easing dependence on foreign technologies. Large language model (LLM) developers such as StepFun, and AI chip manufacturers including Enflame, Huawei, Biren, and Moore Threads, have announced the Model-Chip Ecosystem Innovation Alliance, which Enflame's Zhao Lidong said "connects the complete technology chain from chips to models to infrastructure." Meanwhile, LLM developers SenseTime, StepFun, and MiniMax, and chipmakers Metax and Iluvatar CoreX, among others, have formed the Shanghai General Chamber of Commerce AI Committee to "promote the deep integration of AI technology and industrial transformation."
[ » Read full article ]

Reuters; Liam Mo; Brenda Goh (July 28, 2025)

 

AI Coding Challenge Publishes First Results

Brazilian prompt engineer Eduardo Rocha de Andrade is the first winner of the K Prize, an AI coding challenge rolled out by Databricks and Perplexity co-founder Andy Konwinski. The winner achieved correct answers on 7.5% of the test questions. That compares to SWE-Bench's top scores of 75% for its "Verified" test and 34% on its "Full" test. The K Prize, which favors smaller, open models, tests AI models against flagged issues from GitHub, with a timed entry system to prevent benchmark-specific training.
[ » Read full article ]

TechCrunch; Russell Brandom (July 23, 2025)

 

Tradition Meets AI in Ancient Weaving Style

Hironori Fukuoka of Fukuoka Weaving in Kyoto, Japan, is turning to AI and Sony Computer Science Laboratories to help keep the ancient Nishijinori kimono-weaving technique alive, using the technology as a collaborator. Nishijinori's repetitive and geometric patterns are conducive to digital translations, and Fukuoka views AI as useful in identifying new motifs to define the angular lines of traditional patterns. AI also can help determine how to digitally represent the technique's color gradations.
[ » Read full article ]

Associated Press; Yuri Kageyama (July 25, 2025)

 

The Unnerving Future of AI-Fueled Video Games

Major tech companies are using rapidly advancing AI technologies to transform game development, with usable models expected within five years. At the recent Game Developers Conference, Google DeepMind demonstrated autonomous agents to test early builds, and Microsoft showcased AI-generated level design and animations based on short video clips. Developers surveyed by conference organizers said generative AI use is widespread in the industry; some said it helps complete repetitive tasks, while others argued it has contributed to job instability and layoffs.

[ » Read full article *May Require Paid Registration ]

The New York Times; Zachary Small (July 28, 2025)

 

AI Wrecking Fragile Job Market for College Graduates

AI increasingly is taking entry-level jobs from new college graduates, forcing companies to rethink how to develop the next generation of talent. The share of entry-level hires relative to total new hires has declined 50% among the 15 biggest tech companies by market capitalization since 2019, according to venture-capital firm SignalFire. This comes as companies such as Amazon, JPMorgan, and Ford say AI is enabling them to reduce headcount.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Lindsay Ellis; Katherine Bindley (July 28, 2025)

 

Boxing, Backflipping Robots Rule at China's Biggest AI Summit

At the World Artificial Intelligence Conference in Shanghai, China, companies showcased robots performing a variety of tasks, from peeling eggs to boxing to playing mahjong. Back-flipping robotic dogs and six-legged robots also were on display. This comes as China looks to deploy humanoid robots to work in factories, hospitals, and households, although some estimates indicate it could take a decade before robots are integrated into daily life.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Saritha Rai; Annabelle Droulers; Adrian Wong (July 28, 2025); et al.

 

New Chips Designed to Solve AI’s Energy Problem

At least a dozen chip startups, along with entrenched tech giants, are competing to develop chips that address AI's massive energy consumption. These chips focus on inference, the process by which AI responses are generated from user prompts, and could collectively save companies tens of billions of dollars and a huge amount of energy.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Christopher Mims (July 26, 2025)

 

China Proposes Global Body to Govern AI

Speaking at the opening of the World Artificial Intelligence Conference (WAIC) in Shanghai on Saturday, Chinese Premier Li Qiang called for the formation of a global AI governance framework and said that China would help create “a world AI co-operation organization." China’s 13-point plan proposes the creation of two new AI dialogue mechanisms under the auspices of the U.N.

[ » Read full article *May Require Paid Registration ]

Financial Times; William Langley; Eleanor Olcott (July 27, 2025)

 

DOGE Builds AI Tool to Cut Half of Federal Regulations

A PowerPoint presentation dated July 1 outlines plans to use the “DOGE AI Deregulation Decision Tool” to analyze some 200,000 federal regulations and identify an estimated half of them as no longer required by law, slating them for elimination. The tool has been used to complete “decisions on 1,083 regulatory sections” at the U.S. Department of Housing and Urban Development in under two weeks, according to the presentation, and to write “100% of deregulations” at the U.S. Consumer Financial Protection Bureau.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Hannah Natanson; Dan Diamond; Rachel Siegel (July 26, 2025); et al.

 

Federal AI Plan Targets ‘Burdensome’ State Regulations

The White House's new AI Action Plan calls on federal agencies to limit AI-related funding to U.S. states “with burdensome AI regulations that waste these funds.” The plan also stipulates the federal government will not interfere with state efforts to “pass prudent laws that are not unduly restrictive to innovation.” Said ACM policy director Tom Romanoff, “If state lawmakers want to enact these laws, they will now have to risk losing federal funds to do so."

[ » Read full article *May Require Paid Registration ]

WSJ Pro Cybersecurity; Angus Loten (July 25, 2025)

 

Beijing Calls For Global AI Cooperation Organization

Reuters (7/25, Goh) reported that Chinese Premier Li Qiang, who on Saturday “proposed establishing an organisation to foster global cooperation on artificial intelligence,” is “calling on countries to coordinate on the development and security of the fast-evolving technology.” Li “called AI a new engine for growth but said governance is fragmented” and emphasized “the need for more coordination between countries to form a globally recognised framework for AI.” Reuters notes Li “did not name the United States in his speech but he warned that AI could become an ‘exclusive game’ for a few countries and companies.” Li added “that challenges included an insufficient supply of AI chips and restrictions on talent exchange.” Bloomberg (7/25, Subscription Publication) reported that Li made the case that “artificial intelligence harbors risks from widespread job losses to economic upheaval that require nations to work together to address,” which “means more international exchanges.”

Meta Invests Heavily In AI Talent To Lead Superintelligence Race

CNN (7/25, Duffy) reported that Meta, which is heavily investing in AI “to reach so-called artificial superintelligence,” is recruiting top talent with multimillion-dollar offers. Despite concerns about immediate business benefits, Meta’s shares have risen 20% this year. CFRA Research analyst Angelo Zino notes Meta can afford the investment, but questions remain about its alignment with broader business goals.

        CNBC (7/25, Capoot) reported that CEO Mark Zuckerberg on Friday announced Shengjia Zhao, co-creator of OpenAI’s ChatGPT, as chief scientist of Meta Superintelligence Labs. According to the article, “Zhao will work directly with Zuckerberg and Alexandr Wang, the former CEO of Scale AI who is acting as Meta’s chief AI officer.”

Startup Develops AI System To Cut Data Center Power Consumption

EETimes (7/25) reports that Bay Compute, co-founded by Vijay Gadepally, is helping data centers reduce power consumption by up to 20% with an AI-based operating system. The system manages energy use by optimizing power distribution within data centers. Gadepally compares it to a “Nest thermostat” for data centers, adjusting operations based on conditions. Despite challenges such as limited transparency and strain on resources, the company has installed its systems at global data centers it has not named.

Pricey Private AI School In Austin Plans To Multiply Nationwide

The New York Times (7/27, Salhotra) says, “In Austin, Texas, where the titans of technology have moved their companies and built mansions, some of their children are also subjects of a new innovation: schooling through artificial intelligence.” Now “with ambitious expansion plans in the works, a pricey private A.I. school in Austin, called Alpha School, will be replicating itself across the country this fall.” Supporters of the school, co-founded by “podcaster and influencer” MacKenzie Price, “believe an A.I.-forward approach helps tailor an education to a student’s skills and interests,” but “to detractors, Ms. Price’s ‘2 Hour Learning’ model and Alpha School are just the latest in a long line of computerized fads that plunk children in front of screens and deny them crucial socialization skills while suppressing their ability to think critically.”

Mayo Clinic Using Supercomputer With Nvidia’s AI Technology For Disease Diagnosis

The Minneapolis Star Tribune (7/28, Martin, Stefanescu) reports that the Mayo Clinic has launched a supercomputer using Nvidia’s AI technology to expedite disease diagnosis and treatment. This marks the first large-scale use of Nvidia’s technology in a hospital setting. Jim Rogers, CEO of Mayo Clinic Digital Pathology, described the initiative as a transformative opportunity for medicine. The supercomputer in Brooklyn Park, called SuperPOD, has 128 graphics processing units. Dr. Matthew Callstrom, Mayo’s medical director, stated that the AI models will utilize Mayo’s de-identified pathology data to explore cancer progression. Mayo Clinic’s AI strategy includes collaborations with Google, Microsoft, and Cerebras. Matt Redlon, Mayo’s vice president of digital biology, said the system is significantly more powerful than previous technology, while Rogers likened the infrastructure to “rocket fuel” for innovation.

Sanofi And UT Austin Develop AI Model For mRNA Efficiency

Technology Networks (7/28) reports that an AI model developed by The University of Texas at Austin and Sanofi predicts the efficiency of mRNA sequences in protein production, potentially accelerating mRNA therapeutic development. The model, RiboNN, was detailed in Nature Biotechnology and is twice as accurate as previous methods in predicting translation efficiency across over 140 human and mouse cell types.

Utah Highlighted As Leader In AI Adoption

KSL-TV Salt Lake City (7/26, Stefanich) reported that President Donald Trump announced his “AI Action Plan” last Wednesday, following the revocation of former President Joe Biden’s AI guardrails. The plan and “related executive orders seem to accelerate the sale of AI technology abroad and make it easier to construct the energy-hungry data center buildings that are needed to form and run AI product.” A 2025 World Economic Forum report indicates 41 percent of employers “intend to replace workers with AI by 2030,” while the number of students “with AI-related degrees reached 424,000 in 2023,” up 32 percent from five years earlier. Utah is highlighted as a leader in AI adoption, with Gov. Spencer Cox (R) highlighting the state’s “first and smartest” AI regulations. Utah companies, such as Tarriflo and SchoolAI, are advancing AI technologies. The University of Utah, in October 2023, “launched a $100 million AI research initiative digging into the ways AI can be used responsibly to tackle societal issues.”

Tech Giants Encounter Obstacles With AI Expansion

The Economist (UK) (7/28) reports that America’s tech giants are encountering obstacles in their AI expansion due to shortages in chips, data-center equipment, and energy. On July 24, President Trump issued an “AI action plan” highlighting energy capacity issues as a threat to AI dominance. Companies like Alphabet, Amazon, Microsoft, and Meta are increasing capital spending on data centers, which consume significant electricity. They are exploring alternative locations, smaller partnerships, and new power sources. Initiatives include Google’s $3 billion hydro-power deal and Amazon’s investment in nuclear power. A Bloom Energy survey found that data-center executives expect 27 percent of facilities will have onsite power by 2030, up from just one percent in 2024.

Google Initiates AI Licensing Talks With Publishers

Digiday (7/28, Guaglione, Joseph) reports that Google has started AI licensing discussions with publishers, creating a mix of caution and resignation among media executives. With Amazon’s recent deal with The New York Times, publishers are anxious about content usage in AI training. Publishers demand meaningful revenue, transparency, and control over content use. Concerns include visibility, attribution, and traffic decline. They seek structured partnerships and legal protections to ensure stability and predictability. As AI standards evolve, publishers want to avoid one-sided agreements, fearing future shifts in technology and market dynamics.

White House Pushes For AI Advancement Amid Regulatory Concerns

Digiday (7/28, McCoy) reports that the White House’s AI framework aims to boost U.S. competitiveness by reducing regulations and promoting AI development, sparking mixed industry reactions. While some marketers see potential for innovation, others worry about legal challenges in data protection and intellectual property. Despite existing regulations from bodies like the FTC, gaps remain, particularly in federal oversight of digital capabilities. The responsibility often falls to agencies to create AI guidelines. Legal battles, like Disney’s lawsuit against Midjourney, highlight the growing tension as AI adoption increases.

Robotic Hands Gain Human-Like Sensation Using AI

WCJB-TV Gainesville, FL (7/29) reports that Dr. Eric Du and his team at the University of Florida’s Herbert Wertheim College of Engineering are developing robotic hands with human-like tactile abilities using advanced sensors and artificial intelligence. Dr. Du explained that these robotic hands aim to replicate human touch, enabling robots to perform delicate tasks with dexterity. The project could lead to advancements in manufacturing, healthcare, and remote operations. Dr. Du emphasized the role of AI as the “brain” for processing tactile data, enhancing robots’ ability to understand complex environments.

Skild AI Unveils New AI Model For Robots

Reuters (7/29, Sriram) reports that robotics startup Skild AI, supported by Amazon and SoftBank, introduced Skild Brain, an AI model for robots, on Tuesday. The model enhances robots’ ability to think and navigate like humans and is designed for diverse applications, from assembly lines to humanoids. Demonstrations showed Skild-powered robots performing tasks like climbing stairs and picking up objects. Co-founders Deepak Pathak and Abhinav Gupta highlighted the model’s training on simulated episodes and human-action videos. Skild’s approach allows rapid capability expansion across industries, despite the physical deployment challenges in robotics.

AI-Powered Autonomous Vehicles Mimic Human Driving Behavior

ComputerWorld (7/29, Mearian) reports that artificial intelligence-powered autonomous vehicles are increasingly adopting human-like driving behaviors, including honking and assertive maneuvers, to enhance safety. Tesla’s Shadow Mode observes human driving to improve its system, while Waymo’s robotaxis, powered by AI, learn from millions of miles to adapt to local traffic norms. Waymo’s Director of Product Management, David Margines, emphasizes that assertive driving can enhance safety. The vehicles now demonstrate more confidence at intersections and while merging. University of San Francisco’s William Riggs notes Waymo’s improved adaptability in San Francisco traffic. Zoox, owned by Amazon, uses targeted audio for communication. Transportation engineering professor Kara Kockelman suggests AVs are safer, with fewer crashes than human drivers, due to comprehensive environmental awareness.

Startup Using AI And Robotics To Enhance Fish Processing

The Los Angeles Times (7/29) reports that Shinkei Systems, an El Segundo-based startup, is using artificial intelligence and robotics to enhance fish processing through a traditional Japanese method called ikejime. Their robot aims to improve flavor, texture, and shelf life while ensuring humane treatment. CEO Saif Khawaja emphasizes making high-quality fish accessible in the US. The company raised $22 million, bringing total funding to $30 million. The robot processes fish quickly on fishing boats, identifying species and targeting brain and gills. Shinkei plans to expand operations and product offerings this year.

Trump Administration Pushes AI Integration In Schools

The Hill (7/30) reports the Trump Administration, which is prioritizing the integration of artificial intelligence in K-12 education, is positioning AI literacy as a national security imperative amid global competition, particularly with China. New guidance from Education Secretary Linda McMahon outlines how schools can use federal grants to implement AI in areas such as instruction, tutoring, and teacher training. This initiative is part of the broader “Winning the AI Race: America’s AI Action Plan,” which spans multiple sectors. Advocates highlight the need for private sector collaboration, educator preparedness, and ethical safeguards. AFT, one of the nation’s largest teachers unions, has partnered with Microsoft, OpenAI, and others to offer free AI training to 1.8 million members. Despite enthusiasm, challenges persist, including uneven state policies, lack of teacher preparedness, privacy concerns, and risks of cheating or misuse. Officials stress that AI must be used responsibly—led by educators, compliant with federal privacy laws, and implemented with transparency and community engagement.

Virginia Tech Implements AI To Review Admission Essays

Forbes (7/31, Barnard) reports Virginia Tech, which now uses AI to assist in reviewing admission essays, is aiming to accelerate decisions while maintaining fairness. The system, developed over three years, involves AI confirming human scores and flagging discrepancies for further review. Essays are still anonymized and evaluated with transparency, using a majority-vote model among three large language models to reduce bias.
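The majority-vote arrangement described above can be sketched in a few lines. This is an illustrative toy, not Virginia Tech's actual system; the score scale, function names, and flagging rules are assumptions.

```python
from collections import Counter

def majority_vote(scores):
    """Return the score a majority of models agree on, or None
    when there is no majority (a signal for human review)."""
    score, count = Counter(scores).most_common(1)[0]
    return score if count > len(scores) / 2 else None

def review_essay(model_scores, human_score):
    """Confirm a human reader's score when the model majority agrees;
    flag any discrepancy for further human review."""
    consensus = majority_vote(model_scores)
    return "confirmed" if consensus == human_score else "flag for review"
```

With three models scoring an essay [4, 4, 3] against a human score of 4, the essay is confirmed; with [4, 3, 2] no majority exists, so it is flagged. Using three independent models rather than one is what reduces the influence of any single model's bias.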

AI And Robotics In Agriculture Gain Traction

Fortune (7/31, Hult) reports AI and robotics are increasingly seen as solutions for maximizing agricultural efficiency amid limited farmland. Feroz Sheikh of Syngenta highlighted the need for innovative solutions, while Agroz Group’s Gerard Lim emphasized AI’s role in empowering farmers. AGRIST’s Junichi Saito views robots as essential for addressing labor shortages, stating, “AI and the robot and human being [to] collaborate with each other to make the world happier.”

dtau...@gmail.com

unread,
Aug 12, 2025, 8:15:45 AMAug 12
to ai-b...@googlegroups.com

Google Commits $1 Billion for AI Training at U.S. Universities

Google has announced a three-year, $1-billion initiative to provide AI training and tools to U.S. higher education institutions and nonprofits. Major public systems like the University of North Carolina and Texas A&M were among the more than 100 universities to join the program. The program offers participating schools resources such as cloud computing credits towards AI training for students, AI-related research topics, and funding. The initiative also will provide students with an advanced version of the Gemini chatbot at no cost.
[
» Read full article ]

Reuters; Kenrick Cai (August 6, 2025)

 

Google's AI-Powered Bug Hunting Tool Finds Major Issues in Open Source Software

Big Sleep, Google's AI-driven bug detection tool, autonomously discovered and reproduced 20 security vulnerabilities in open source software projects, including FFmpeg and ImageMagick. Human security workers verified each vulnerability, which remained undisclosed until it was mitigated under Google's 90-day patching policy. The human verification step was intended to assuage concerns about false positives or AI hallucinations. The full list of vulnerabilities, ranked by level of impact from low to high, is available from Google.
[
» Read full article ]

TechRadar; Craig Hale (August 5, 2025)

 

Thousands of ChatGPT Conversations Appearing in Google Search Results

Thousands of private ChatGPT conversations are appearing in Google search results, exposing deeply personal user disclosures. The issue stems from OpenAI’s shareable chat links, which included an optional, but often misunderstood, setting allowing conversations to be indexed by search engines. While the feature has since been removed, previously indexed chats remain public unless deleted by users. Some include details about trauma, mental health, or identity, raising concerns about data privacy, interface design, and broader industry responsibility around user protection and transparency.
[ » Read full article *May Require Free Registration ]

Computing (U.K.); Dev Kundaliya (August 4, 2025)

 

3D Printing, AI Used to Slash Nuclear Reactor Component Construction Time

The U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) in Tennessee, in collaboration with Kairos Power, Barnard Construction, Airtech, TruDesign, Additive Engineering Solutions, Haddy, and the University of Maine, used AI and 3D printing to make polymer concrete forms for the Hermes Low-Power Demonstration Reactor under construction in East Tennessee. The 3D printing enabled precise casting of complex forms for radiation shielding and reduced construction time from weeks to just 14 days.
[
» Read full article ]

Tom's Hardware; Mark Tyson (August 5, 2025)

 

One-Fifth of Computer Science Papers May Include AI Content

Nearly one in five computer science papers published in 2024 may include AI-generated text, according to a large-scale analysis of over 1 million abstracts and introductions by researchers at Stanford University and the University of California, Santa Barbara. The study found that by September 2024, 22.5% of computer science papers showed signs of input from large language models like ChatGPT. The researchers used statistical modeling to detect common word patterns linked to AI writing.
[ » Read full article ]

Science; Phie Jacobs (August 4, 2025)
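The detection approach described above relies on word frequencies rather than per-document classifiers. As a toy illustration of the idea (the marker-word list below is hypothetical; the study's actual word set and statistical model are more sophisticated), one can measure how often AI-associated words appear in a text:

```python
from collections import Counter

# Words whose frequency reportedly spiked in LLM-era academic prose.
# This set is illustrative only, not the study's actual marker list.
MARKER_WORDS = {"delve", "pivotal", "showcase", "intricate", "underscore"}

def marker_rate(text):
    """Fraction of tokens that are AI-associated marker words."""
    tokens = [t.strip(".,;:()").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in MARKER_WORDS)
    return hits / len(tokens)
```

Aggregated over many abstracts, an elevated marker rate relative to a pre-2022 baseline suggests LLM involvement, even though no single document can be definitively labeled.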

 

Python Popularity Boosted by AI Coding Assistants – Tiobe

Python remains the top language in the Tiobe index of programming language popularity, scoring 26.14% in August 2025 after reaching a record 26.98% in July. Tiobe CEO Paul Jansen attributes the continuing preference for Python to AI coding assistants, which benefit from Python’s widespread usage and extensive documentation. The trend reflects a consolidation around major languages, as developers increasingly favor tools with strong AI support.
[
» Read full article ]

InfoWorld; Paul Krill (August 4, 2025)

 

Nearly Half of All Code Generated by AI Found to Contain Security Flaws

New research from application security solution provider Veracode reveals that 45% of all AI-generated code contains security vulnerabilities, with no clear improvement across larger or newer large language models. An analysis of over 100 models across 80 coding tasks found Java code most affected, with a failure rate of over 70%, followed by Python, C#, and JavaScript. The study warns that increased reliance on AI coding without defined security parameters, referred to as "vibe coding," may amplify risks.
[ » Read full article ]

TechRadar; Craig Hale (August 1, 2025)

 

Google AI Model Maps World in 10-Meter Squares for Machines to Read

Google's new AlphaEarth Foundations AI model provides a comprehensive view of Earth over time by mapping it in 10-meter squares that can be read by deep learning applications. Trained on Earth observation data from satellites and other sources, AlphaEarth integrates the data into "embeddings" that are easily processed by computer systems. The embeddings have 64 dimensions, each representing a 10-meter pixel that encodes data about territorial conditions for that plot over a year.
[ » Read full article ]

The Register; Thomas Claburn (July 31, 2025)
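A common use of per-pixel embeddings like those described above (though not specific to AlphaEarth's API, which this sketch does not reproduce) is similarity search: two 64-dimensional vectors that point in nearly the same direction represent plots with similar territorial conditions over the year. A minimal cosine-similarity comparison:

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors; values near 1.0 indicate
    the two 10-meter plots had similar conditions over the year."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Because the embeddings are fixed-length numeric vectors, deep learning models can consume them directly, without re-processing the raw satellite imagery.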

 

Chinese Universities Want Students to Use More AI, Not Less

Almost all faculty and students at Chinese universities use generative AI, according to a survey by Chinese higher-education research group the Mycos Institute. A study of the 46 top Chinese universities' AI strategies by MIT Technology Review found nearly all have added interdisciplinary AI general-education classes, AI-related degree programs, and AI literacy modules. All students at China's Renmin, Nanjing, and Fudan universities can enroll in general-access AI courses and degree programs.
[ » Read full article ]

MIT Technology Review; Caiwei Chen (July 28, 2025)

 

OpenAI To Give Away Some of the Technology That Powers ChatGPT

OpenAI has released two AI models, gpt-oss-120b and gpt-oss-20b, marking a significant departure from its prior closed-source approach. While less powerful than ChatGPT, the models still rank highly in performance benchmarks. The move aligns OpenAI with competitors like Meta and China’s DeepSeek, which have already embraced open-source AI. OpenAI says the decision aims to retain developer interest and collect user feedback.


[
» Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (August 5, 2025)

 

Ambitious Project Aims to Win Back U.S. Lead in Open-Source AI From China

U.S. officials and company leaders want to surpass China in the realm of AI for economic and national security reasons. However, a recent analysis from Artificial Analysis found that only five of the top 15 AI models are open source, and all five were developed by Chinese companies. The American Truly Open Models (ATOM) Project would create a domestic AI lab with access to 10,000 GPUs that would seek to produce competitive open-source models for AI start-ups and projects.


[
» Read full article *May Require Paid Registration ]

The Washington Post; Nitasha Tiku; Andrea Jiménez (August 5, 2025)

 

AI Is Fast-Tracking Climate Research, from Weather Forecasts to Sardines

Climate researchers increasingly are turning to AI to automate routine tasks amid funding cuts and other challenges. Researchers at Spain's AZTI marine research center are using AI models to monitor water quality, the presence of different types of marine life, and more to inform decision-making. AI also is being used to produce more accurate weather forecasts and to facilitate citizen science projects.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Laura Millan; Yinka Ibukun; Akshat Rathi (August 1, 2025)

 

Tech Giants Revise AI Product Claims That Faced Scrutiny

Apple, Google, Microsoft, and Samsung have revised or retracted AI marketing claims following investigations by BBB National Programs' National Advertising Division (NAD). NAD found several misleading advertisements, including Apple's promotion of unreleased iPhone AI features as "available now," a YouTube video from Google showing sped-up Gemini assistant capabilities, Microsoft's claim that Copilot's Business Chat function works "seamlessly across all your data," and Samsung's claim that its AI-powered refrigerator "automatically recognizes what's in your fridge" when it only identifies 33 specific items if they are clearly visible.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Patrick Coffee (July 31, 2025)

 

Palantir Gets $10-Billion Contract From U.S. Army

The U.S. Army awarded Palantir a contract worth up to $10 billion over the next 10 years, the largest in the company’s history. This agreement signifies a major shift in the Army’s software procurement approach by consolidating existing contracts to achieve cost efficiencies and expedite soldiers' access to advanced data integration, analytics, and AI tools. The contract aligns with the Pentagon's strategic focus on enhancing data-mining and AI capabilities amid escalating global security challenges.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Elizabeth Dwoskin (July 31, 2025)

 

OpenAI Launches Stargate in Europe with Norwegian Deal

A datacenter being built by Nscale Global Holdings Ltd. in Kvandal, Norway, with funding from Norwegian investor Aker ASA, will be the first European site for OpenAI's Stargate datacenter infrastructure project. The site will offer 230 megawatts of capacity initially, with an additional 290 megawatts to be added in the future. By the end of 2026, OpenAI will deliver 100,000 Nvidia GPUs to the datacenter, with more chips to be added afterward.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Mark Bergen; Vlad Savov (July 31, 2025)

 

How China Is Girding for an AI Battle with the U.S.

China is working to develop a self-sufficient AI ecosystem to counter U.S. export restrictions on advanced semiconductors. At Shanghai's World Artificial Intelligence Conference, companies showcased AI systems designed for Chinese-made chips. "Project Spare Tire," led by Huawei Technologies, is pushing for 70% semiconductor self-sufficiency by 2028 by clustering multiple domestic chips. China also unveiled an international open-source AI governance framework to challenge U.S. closed models.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Raffaele Huang; Liza Lin (July 29, 2025)

 

OpenAI Chairman Encourages Students To Keep Pursuing Computer Science Degrees

Insider (8/1, Chandonnet) reported that Bret Taylor, chairman of OpenAI, advocates for the continued value of computer science degrees despite advancements in AI coding tools. Taylor, speaking on Lenny Rachitsky’s podcast, emphasized the importance of understanding concepts beyond coding languages, such as “Big O notation, complexity theory, randomized algorithms, and cache misses.” He said, “Studying computer science is a different answer than learning to code, but I would say I still think it’s extremely valuable to study computer science.” Taylor said he believes computer science fosters “systems thinking.” Microsoft CPO Aparna Chennapragada and Google’s head of Android, Sameer Samat, echo Taylor’s views. Taylor envisions engineers as “operating a code-generating machine” to create products and solve problems.

Universities Only Meeting Fraction Of AI Training Demand

Times Higher Education (UK) (7/28, Rowsell) reported that a new study by Validated Insights reveals that interest in artificial intelligence (AI) training “is soaring but only a fraction of the demand is being met by higher education.” Approximately 57 million Americans are interested in acquiring AI skills, yet only 8.7 million are currently pursuing this training. Of these, just 7,000 “are learning AI via a credit-bearing programme from a higher education institution,” despite the rapid growth in AI course enrollments. Since Carnegie Mellon University introduced the first bachelor’s degree in AI in 2018, college and university enrollments have increased by 45 percent annually. SUNY University at Buffalo reported a twentyfold increase in its master’s program enrollment from 2020 to 2024, “from five to 103 students.” Meanwhile, Edtech platforms like Coursera and Udemy have capitalized on this demand, with 3.5 million enrollments in generative AI courses.

Educators Navigate AI Tool Advancements In Higher Ed

Inside Higher Ed (8/1, Palmer) reported that Instructure, “which owns the widely used learning management system Canvas,” recently “announced a partnership with OpenAI to integrate into the platform native AI tools and agents.” The partnership will introduce features such as IgniteAI, which allows educators to create custom assignments using large language models like ChatGPT. Instructure CEO Steve Daly described the initiative as “a significant step forward for the education community,” though some educators remain cautious about AI’s impact on teaching dynamics and student interactions. University of Kansas professor Kathryn Conrad cautioned against “locking faculty and students into particular tools” that may not align with educational objectives. The initiative comes amid broader efforts by institutions, such as Ohio State University, to foster AI fluency among students by 2029.

Texas A&M University Developing AI-Powered Helicopters To Fight Wildfires

The Houston Chronicle (8/1, Garcia) reported that Texas “could be among the first states to use AI-powered helicopters in active wildfire response.” With nearly $60 million in state funding, Texas A&M University “is partnering with the US Defense Advanced Research Projects Agency (DARPA) to convert traditional UH-60 Blackhawks into AI-powered aircraft that can fight fires without a pilot on board. The helicopters will be able to carry out water drops, supply deliveries and aerial surveillance in places too risky for human crews.” Testing and development “will be led at the George H.W. Bush Combat Development Complex (BCDC) on A&M’s Rellis Campus, with support from Sikorsky, the Texas A&M Forest Service and several emergency response teams across the state.”

AWS To Invest $12.7 Billion In India To Boost AI, Cloud Infrastructure

The Business Standard (IND) (8/3) reports behind a paywall, “With artificial general intelligence (AGI) inching closer and the US locked in a high-stakes tech rivalry with China, Amazon Web Services (AWS) is making a bold but quiet move – betting on India to become the third major force in the global artificial intelligence (AI) race.” The company is putting “$12.7 billion into infrastructure (infra) that could help shape who controls the computing backbone of tomorrow’s most advanced AI systems.”

        Moneycontrol (IND) (8/3) reports AWS will invest $12.7 billion in India by 2030 to expand cloud infrastructure, including data centers and AI-ready computing capacity, positioning the country as a key player in the global AI race.

Meta Offers $250 Million Compensation Package To AI Talent

The New York Post (8/1, Zilber) reported that Mark Zuckerberg’s Meta offered a $250 million compensation package to 24-year-old AI researcher Matt Deitke, “who recently dropped out of a computer science doctoral program at the University of Washington.” Initially, he turned down Zuckerberg’s offer of “approximately $125 million over four years,” but accepted after Zuckerberg doubled it. This move highlights the intense competition for AI talent in Silicon Valley. Deitke previously worked at Seattle’s Allen Institute for AI, leading the development of Molmo, “an AI chatbot capable of processing images, sounds, and text.” He co-founded Vercept, an AI startup, in November, which raised $16.5 million. Meta’s aggressive recruitment strategy involves building an “elite, talent-dense team,” according to Zuckerberg.

Amazon, Microsoft, Google, And Meta Increase Capex For AI Infrastructure

Insider (8/1, Thomas) reported that Amazon, Microsoft, Google, and Meta are raising their capital expenditure guidance “as the AI race intensifies.” Amazon is “tracking to spend over $100 billion this year” after spending $48.4 billion in 2023. CFO Brian Olsavsky said future quarterly investments will mirror second-quarter spending. Google surprised investors by increasing its capex forecast by $10 billion to $85 billion, “hoping to keep its edge in the AI race after a strong quarter for cloud sales, which surged 32% in the most recent quarter.” Meta slightly adjusted its capex forecast, while Microsoft is “continuing full-steam ahead on capital investments” with plans to spend $30 billion in its current quarter. Apple is also increasing its spending, with CEO Tim Cook attributing the rise to AI investments, including data centers.

Administration Set To Tout AI Strategy At ASEAN Meeting

The Wall Street Journal (8/2, Ramkumar, Subscription Publication) reported that the US and China are set to promote their AI strategies at the Asia-Pacific Economic Cooperation meeting in South Korea starting Monday. The US will advocate for American AI exports, highlighting companies like Nvidia and OpenAI. Chinese officials will present their AI products, emphasizing government support and open models. The US aims to ease AI deals globally, while China focuses on open-source models.

Google Agrees To Power Reduction Deals With Utilities

Reuters (8/4, Kearney) reports that Google has reached agreements with Indiana Michigan Power and the Tennessee Valley Authority to reduce power consumption at its AI data centers during peak demand periods. These are Google’s first formal demand-response agreements, which involve temporarily curtailing machine learning workloads to ease grid strain. Google stated in a blog post, “It allows large electricity loads like data centers to be interconnected more quickly, helps reduce the need to build new transmission and power plants, and helps grid operators more effectively and efficiently manage power grids.” This initiative addresses concerns over power shortages and potential blackouts as AI-related energy demands rise.

Anthropic Expands Enterprise AI Training Through Partnerships With AWS, Others

PPC Land (8/4) reports Anthropic launched new enterprise-focused courses via its Anthropic Academy platform, developed in collaboration with AWS, Google Cloud, and Deloitte. AWS contributed a “Claude on Bedrock” course for secure deployments on its infrastructure, designed to address “real enterprise implementation challenges.” Google Cloud’s “Claude on Vertex AI” course targets ML engineers integrating Claude models into production workflows, while Deloitte’s program prepares professionals for “real AI transformation challenges.” The expanded curriculum shifts focus from technical API development to enterprise deployment scenarios, covering security, governance, and compliance. The free courses include certification options and emphasize hands-on learning with actual AI models. The initiative aims to bridge the skills gap in enterprise AI adoption, particularly relevant for marketing automation infrastructure.

Apple CEO Rallies Staff Around AI Prospects

Bloomberg (8/1, Subscription Publication) reports that Apple CEO Tim Cook held an all-hands meeting in Cupertino, California, on Friday, emphasizing the company’s commitment to artificial intelligence. Cook stated AI’s potential is comparable to past technological revolutions and highlighted Apple’s history of entering markets late but successfully. He encouraged employees to integrate AI into their work, warning against falling behind. The meeting also covered topics like Apple’s retail strategy, upcoming product launches, and AI advancements, including a revamp of Siri. Cook expressed enthusiasm for Apple’s future product pipeline, describing it as “amazing.”

AI Training Academy For Teachers Set To Open In NYC

The Seventy Four (8/4, Toppo) reports that last month, the American Federation of Teachers (AFT) “announced that it would open an AI training center for educators in New York City, with $23 million in funding from OpenAI, Anthropic and Microsoft, three of the leading players in the generative AI marketplace.” The National Academy for AI Instruction aims to train 400,000 educators over five years. AFT President Randi Weingarten highlighted the challenge of “navigating AI wisely, ethically and safely.” In an email, Microsoft’s Naria Santa Lucia said, “This isn’t about Microsoft’s technology, our focus is on making AI broadly accessible, so everyone has a fair shot at the future.” While some observers “said the tech giants are making a play for market share among the nation’s K-12 students, they noted that the companies are also filling an important role” in education.

New Hampshire Education Groups Develop Roadmap For AI Integration

The New Hampshire Bulletin (8/4, DeWitt) reports that New Hampshire educators began weighing the integration of artificial intelligence (AI) in schools after a federal letter encouraged the use of federal funds for AI tools. This summer, “a coalition of New Hampshire groups has produced 77 pages of guidelines for teachers and school administrators to responsibly use those AI tools.” The guidelines, which were created “by a team that included the New Hampshire School Administrators Association, the New Hampshire Association of School Principals...and the New Hampshire Supporting Tech-using Educators, feature a roadmap for schools to implement AI policies.” They recommend “forming an AI task force in each school, coming up with policies and rules to govern the use of AI, and developing training plans to bring educators on board.” The guidelines also warn of AI’s potential risks, such as bias and academic dishonesty.

Google Offers Free AI Tools To University Students

Mashable (8/6, DiBenedetto) reports that Google is expanding access to its AI tools by offering university students aged 18 and over “one whole year of Google’s AI Pro plan for no cost, which includes access to a suite of Google’s most popular AI offerings.” This initiative, effective immediately, is available to students in the US, Japan, Indonesia, Korea, and Brazil. The AI Pro Plan features tools such as the Gemini 2.5 Pro chatbot, Deep Research model, NotebookLM, Veo 3 video generator, and the coding assistant Jules. In addition, Google has announced “a $1 billion commitment to AI education and training programs, which the company will dole out over the next three years, and a brand new Google AI for Education Accelerator,” offering free training and Google Career Certificates to US college students. Enhancements include a “Guided Learning” mode for the Gemini chatbot, which enables open-ended conversations and step-by-step explanations.

Senators Request Evaluation Of Chinese AI Security Risks

TechRadar (8/6, Jennings-Trace) reports seven GOP Senators have urged the Department of Commerce to evaluate data security risks posed by AI models from Chinese companies, specifically the DeepSeek chatbot. The senators expressed concerns about DeepSeek feeding sensitive information to servers linked to the Chinese government. They emphasized the importance of prioritizing US-based AI models in the ongoing AI competition with China.

OpenAI Offers ChatGPT To US Agencies For $1 Annually

Bloomberg (8/6, Ghaffary, Korte, Subscription Publication) reports that OpenAI is providing access to its ChatGPT product to US federal agencies for $1 per year. This initiative is part of OpenAI’s strategy to increase the adoption of its AI chatbot. The announcement follows the General Services Administration’s approval of OpenAI, alongside Alphabet Inc.’s Google and Anthropic, as vendors in a new marketplace for federal agencies to purchase AI software at scale. OpenAI is offering the enterprise version of ChatGPT, which includes improved security and privacy features.

Hydrogen-Powered Data Centers Address AI Energy Needs

Hydrogen Central (8/6) reports data centers are increasingly turning to hydrogen power to meet the soaring energy demands of AI, which strain grids and raise environmental concerns. Startups and major companies alike are deploying hydrogen fuel cells for zero-emission, off-grid operations, with Oracle having partnered with Bloom Energy “to deploy hydrogen-enabled fuel cells across its U.S. cloud infrastructure.”

EPRI Leads Open Power AI Consortium For Energy Sector

POWER (8/7, Larson) reports that more than 100 major energy companies, including GE Vernova, have joined EPRI’s Open Power AI Consortium, launched in March 2025. This initiative aims to develop AI models tailored for the energy sector to enhance efficiency and reliability. Jeremy Renshaw of EPRI emphasized the consortium’s role in fostering collaboration and developing domain-specific AI solutions. These models will address industry-specific needs, such as real-time systems and regulatory compliance, potentially transforming grid operations and customer service automation.

dtau...@gmail.com
Aug 16, 2025, 4:26:38 PM
to ai-b...@googlegroups.com

AI Launches across the U.S. Government

The U.S. General Services Administration is launching USAi, a secure platform letting federal employees test AI tools from OpenAI, Anthropic, Google, and Meta. Part of the Trump administration’s AI Action Plan, the program aims to improve efficiency while safeguarding data, ensuring agency information doesn’t train commercial models. Participation is voluntary, with agencies opting in via a simple agreement.
[ » Read full article ]

Politico; Sophia Cai; Gabby Miller (August 14, 2025)

 

Hinton on How Humanity Can Survive Superintelligent AI

At the Ai4 industry conference in Las Vegas on Tuesday, ACM A.M. Turing Award laureate Geoffrey Hinton expressed skepticism about how tech companies are trying to ensure humans remain “dominant” over “submissive” AI systems. Instead of forcing AI to submit to humans, Hinton suggested building “maternal instincts” into AI models, so “they really care about people” even once the technology becomes more powerful and smarter than humans.
[ » Read full article ]

CNN; Matt Egan (August 13, 2025)

 

NSF Invests in AI-Ready Test Beds

The U.S. National Science Foundation (NSF) announced over $2 million in planning grants to support the development of AI-ready test beds to accelerate the design, evaluation, and deployment of AI technologies. NSF's Ellen Zegura said the initiative “not only builds the foundation for new breakthroughs in AI research but also helps bridge the gap between research and applications by connecting researchers with real-world challenges and enabling them to explore how AI can be most effectively applied in practice.”
[ » Read full article ]

HPCwire (August 8, 2025)

 

A Single Poisoned Document Could Leak 'Secret' Data via ChatGPT

A vulnerability in OpenAI's ChatGPT Connectors allows sensitive information to be extracted from Google Drive via an indirect prompt injection attack called AgentFlayer, revealed researchers Michael Bargury and Tamir Ishay Sharbat of Zenity during a recent session at Black Hat USA 2025. The exploit involves hiding a malicious prompt in a shared document, unseen by humans but executed by the AI, causing ChatGPT to leak data.
[ » Read full article ]

Wired; Matt Burgess (August 6, 2025)

 

Developers Are Frustrated with AI Coding Tools That Deliver Nearly Right Solutions

A survey of 49,009 developers across 160 countries found widespread use of AI coding tools, but limited trust in them. Although 78.5% of respondents reported using AI tools at least occasionally, only 3.1% said they highly trust their output. Developers cited frustration with tools producing “almost right” code and difficulties debugging. Complex tasks remain a major weakness, and many rely on humans when accuracy or understanding is critical.
[ » Read full article ]

The Register (U.K.); Neil McAllister (July 29, 2025)

 

Margaret Boden, AI Philosopher, Dies at 88

ACM/AAAI Allen Newell Award recipient Margaret Boden died July 18 at 88. A pioneer in cognitive science, she used the language of computers to explore the nature of human thought and creativity, offering insights about the future of AI. Though skeptical of AI matching human conversational depth, she saw computation as key to understanding thought. Boden herself, however, was not adept at using computers. “I can’t cope with the damn things,” she once said.


[ » Read full article *May Require Paid Registration ]

The New York Times; Michael S. Rosenwald (August 14, 2025)

 

China Urges Firms to Avoid Nvidia H20 Chips after U.S. Ends Ban

Chinese authorities have sent notices to firms discouraging use of less-advanced semiconductors, particularly Nvidia’s H20, though the letters did not call for an outright ban. Nvidia and Advanced Micro Devices Inc. both recently secured U.S. approval to resume lower-end AI chip sales to China, reportedly on the condition that they give the federal government a 15% cut of the related revenue.


[ » Read full article *May Require Paid Registration ]

Bloomberg; Mackenzie Hawkins; Ian King (August 12, 2025)

 

These Workers Don't Fear AI

Amid concerns about job displacement due to AI, some workers are seeking degrees to help them succeed in an AI-powered economy. Several U.S. universities offer master's programs in AI, and required undergraduate courses in AI are being rolled out at Ohio State University this fall. Said World Economic Forum's Till Leopold, "A combination of technology literacy and human-centric skills is a sweet spot in terms of future labor market demands."


[ » Read full article *May Require Paid Registration ]

The Washington Post; Danielle Abril (August 11, 2025)

 

The Militarization of Silicon Valley

Big Tech executives including Andrew Bosworth (Meta CTO), Shyam Sankar (Palantir CTO), Kevin Weil (OpenAI CPO), and Bob McGrew (advisor at Thinking Machines Lab and former OpenAI chief research officer) were sworn in as lieutenant colonels as part of their participation in Detachment 201, a technical innovation unit created by the U.S. Army. The unit will advise the Army on new combat technologies, illustrating a growing trend in Silicon Valley in which companies and venture capitalists increasingly engage with military technology and remove corporate policies that prevent AI use in weapons.


[ » Read full article *May Require Paid Registration ]

The New York Times; Sheera Frenkel (August 5, 2025)

 

California Schools Pilot AI Tools For Classroom Instruction

Education Week (8/9, Sparks) reported that an ongoing study in California is examining the use of AI tools in education. The study, conducted by the Center on Reinventing Public Education at Arizona State University, “tracked more than 80 teachers and administrators in 18 California schools, including district, charter, and private campuses, who created and piloted AI tools through the Silicon Schools Fund’s ‘Exploratory AI’ program in the 2024-25 school year.” They received training to develop AI tools aimed at addressing classroom challenges, such as differentiating lessons, enhancing teacher collaboration, and improving student behavior. David Whitlock, a vice principal at Gilroy Prep charter school, said, “One of the big benefits of all this AI stuff, is we can now adapt our tech to meet students and staff where they’re at versus them having to adapt to a new platform.” The study highlights the necessity of a clear instructional vision for effective AI integration.

AI Model Training Offers New Career Paths

Forbes (8/11, Susarla) reports that new graduates face challenges in finding jobs, with AI impacting hiring trends. Despite this, AI labs present opportunities by offering high-paying roles in AI model training that do not require technical skills. Graduates with science, finance, law, music, and education degrees are hired to enhance AI models with domain knowledge. These roles include AI Trainer, Human-in-the-Loop Specialist, AI Product Manager, AI Ethicist, and AI Wrangler. A Lightcast study shows AI skills in non-technical fields can increase salaries by 28%, with demand rising by 800%. History suggests AI model training jobs may not be easily outsourced, offering a promising career path for graduates.

        OpenAI’s Sam Altman Discusses AI’s Impact On Future Jobs. Fortune (8/11, Fore) reports that OpenAI CEO Sam Altman acknowledged that AI will eliminate some jobs but believes the next decade could be thrilling for career starters, especially in space exploration. Altman told video journalist Cleo Abram that future graduates might embark on solar system missions with exciting, well-paid jobs. Despite uncertainties about space expansion, aerospace engineering jobs are growing faster than average, with salaries above $130,000. Other tech leaders like Bill Gates and Nvidia CEO Jensen Huang predict AI will reduce workweeks and enhance human skills. Altman also mentioned that the new OpenAI model, GPT-5, allows individuals to create billion-dollar companies.

Carnegie Mellon University Creating New Venture Into AI-Assisted Math

WESA-FM reports that Carnegie Mellon University is “getting federal money to create a new venture into artificial intelligence-assisted math, one of six such programs across the country.” Prasad Tetali, who heads the “mathematical sciences department at CMU and helped write the proposal for ICARM, hopes AI can be used to make advanced mathematics more accessible by offering instruction that’s tailored to each student’s understanding.” Tetali explained that AI has “advanced to be able to solve math problems that already have known answers.” He added, “The next challenge, which our institute hopefully will contribute to, is solving the research level problems.”

NSF Gives UC Davis $5 Million Grant For AI Research Hub

The Sacramento (CA) Business Journal (8/11, Subscription Publication) reports, “The National Science Foundation has awarded $5 million over five years to University of California Davis to run the Artificial Intelligence Institutes Virtual Organization as an NSF-branded community hub for federally funded AI research institutes.” The organization “began at UC Davis as a means to facilitate collaboration among the first federally funded AI institutes and exchange ideas, according to Steve Brown, associate director of AIFS. With new funding, AIVO’s role will expand to connect and support all 29 of these institutes across the country, he said.” The article quotes Brown saying, “We will be amplifying the work done at all 29 AI institutes through a series of videos and podcasts, so the public can get a clear look at how long-term federal funding of AI research is progressing. We’ll also be supporting workshops nationwide to help provide additional exposure to the research.”

Nvidia Unveils New World AI Models

TechCrunch (8/11, Szkutak) reports that on Monday, Nvidia “unveiled a set of new world AI models, libraries, and other infrastructure for robotics developers, most notable of which is Cosmos Reason, a 7-billion-parameter ‘reasoning’ vision language model for physical AI applications and robots.” During the “announcement at the SIGGRAPH conference on Monday, Nvidia noted that these models are meant to be used to create synthetic text, image, and video datasets for training robots and AI agents.”

AI Impacts Job Prospects For New CS Graduates

The New York Post (8/11) reports that recent computer science graduates face challenges finding jobs as AI replaces entry-level roles. Manasi Mishra, a Purdue University graduate, has struggled to secure a tech position, settling for a job at Chipotle instead. The Federal Reserve Bank of New York states unemployment for recent CS graduates is 6.1%, higher than the average 5.3% for all graduates. AI tools like GitHub Copilot contribute to the decline in entry-level programming jobs. Zach Taylor, an Oregon State University graduate, applied for 5,800 jobs with no offers. Coding boot camps see reduced job placement rates, with Codesmith’s part-time cohort dropping from 83% to 37% within two years.

Tech Grads Struggle As AI Reshapes Hiring, Job Prospects

GeekWire (8/11) reports computer science graduates face dwindling job prospects despite high expectations, as layoffs and AI tools disrupt the tech industry. The New York Times highlights graduates applying for hundreds or thousands of jobs with little success, some resorting to fast-food work. While Amazon and Microsoft have made cuts, Amazon hired more than 100 engineers from the University of Washington’s Paul G. Allen School of Computer Science & Engineering, an all-time high. Allen School director Magdalena Balazinska said, “Coding, or the translation of a precise design into software instructions, is dead. AI can do that.” UW Professor Ed Lazowska said that design and problem-solving remain human strengths. Graduates describe feeling trapped in an AI “doom loop,” with one Oregon State grad applying for 5,762 jobs since graduating in 2023. Companies increasingly use AI to screen candidates, removing human interaction from hiring.

AI Advancements Prompt Global Political, Economic Shifts

TIME (8/11, Bremmer) reports that the rapid advancement of artificial intelligence is causing significant political and economic shifts. At Microsoft’s Ignite conference in 2024, CEO Satya Nadella highlighted AI’s acceleration, dubbing it “Nadella’s Law,” where AI performance doubles every six months. This speed could lead to AI autonomously conducting scientific research and performing complex workplace tasks, potentially displacing workers. In the geopolitical arena, the US and China are competing for AI dominance, with the US leveraging its hyperscalers and educational ecosystem. However, US policies under Trump, including export controls and the “in or out” agreement, aim to maintain American AI superiority.

AI Affects Entry-Level Job Market, Experts Say

Forbes (8/12, English) reports that AI is impacting entry-level job opportunities, according to LinkedIn chief economic opportunity officer Aneesh Raman and Anthropic CEO Dario Amodei. They warn that AI could drastically reduce these positions within five years. A SignalFire report indicates a 50% drop in new graduate hiring compared to pre-pandemic levels, and Oxford Economics reports higher unemployment rates for recent grads than the national average. Dr. Heather Doshay of SignalFire attributes this to AI adoption, economic pressures, and a surplus of experienced workers. Organizations are urged to adapt entry-level roles to align with AI advancements, while young professionals are advised to master AI tools and build strong networks.

Tesla Reshuffles Engineers After Abruptly Ending Dojo AI Project

Bloomberg (8/12, Ludlow, Subscription Publication) reports, “Tesla Inc. reassigned engineering staff in moves impacting multiple teams after Chief Executive Officer Elon Musk disbanded the electric-vehicle maker’s in-house chip and supercomputer project.”

Fake University Websites Exploit Generative AI

Inside Higher Ed (8/14, Moody, Palmer) reports that Southeastern Michigan University is a fraudulent institution using AI-generated content on its website. Michigan Attorney General Dana Nessel issued a warning last week after Eastern Michigan University reported deceptive practices. Inside Higher Ed identified nearly 40 similar fake university sites, some linked to fake accreditor websites. The network uses AI to quickly create scam sites, making it difficult for consumers to spot fraud. Eastern Michigan spokesperson Walter Kraft mentioned a prospective student who almost fell for the scam. The University of Houston and others have filed complaints against these sites. The US Department of Education is investigating the scam, which undermines trust in higher education.

Oracle Lays Off Workers Amid AI Investments

Fierce Network (8/13, Wagner) reports that Oracle is laying off a “large number” of workers globally, with Indian operations “believed to be heavily impacted.” Affected teams include Oracle Cloud Infrastructure Enterprise Engineering and Fusion ERP. The US and India are the first regions affected, with potential cuts in other regions expected. Despite these layoffs, Oracle claims it is “aggressively hiring” for AI data center expansion, crucial to OpenAI’s Stargate initiative. Oracle secured major cloud contracts from TikTok and Temu. The layoffs at Oracle are part of an effort to “control costs amid heavy spending on AI infrastructure,” following similar actions by Microsoft, Amazon, and Meta.

        Bloomberg (8/13, Subscription Publication) also reports.

Meta’s Talent War Intensifies AI Industry Tensions

Insider (8/13, Rollet) reports that Meta is aggressively recruiting AI researchers from competitors to advance its “personal superintelligence” initiative, causing internal dissatisfaction. Some Meta employees, particularly in the GenAI team, feel undervalued as external recruits receive significantly higher compensation. This has led to rifts and potential departures. Meta maintains high retention rates and is expanding engineering teams rapidly. The superintelligence team, MSL, has sparked both internal chaos and opportunities for rivals like xAI and Microsoft to attract Meta talent. FAIR, Meta’s established AI lab, remains relatively unaffected by these tensions.

Companies “Struggling” With How To Determine Significance Of Administration’s Revenue Sharing Deal With Nvidia, AMD, Sources Say

Bloomberg (8/13, Deaux, Dlouhy, Wingrove, Subscription Publication) reports the Administration’s “controversial plan to take a cut of revenue from chip sales to China has US companies reconsidering their plans for business with the country.” According to sources, the “surprise deal, in which Nvidia Corp. and Advanced Micro Devices Inc. agreed to pay 15% of their revenues from Chinese AI chip sales to the US, provides a path to enter the Chinese market despite severe export controls, tariffs and other trade barriers.” Now, the “question that companies must now confront is whether the risk is worth taking.” According to sources, “companies are struggling to figure out what the president’s order means for their future, especially given the unpredictable nature of Trump’s decision-making.”

        Meanwhile, the New York Times (8/13, Subscription Publication, Mickle) says the deal serves as the “most prominent example of...Trump’s blunt interventions in the global operations of the chip industry’s most powerful companies. He has threatened to take away government grants, restricted billions of dollars in sales, warned of high tariffs on chips made outside the United States, demanded investments and urged one company, Intel, to fire its chief executive. In just eight months,” Trump “has made himself the biggest decision maker for one of the world’s most economically and strategically important industries, which makes key components for everything from giant AI systems to military weapons. And he has turned the careful planning of companies historically led by engineers into a game of insider politics.”

        US Placed Tracking Devices In AI Chip Shipments To Monitor Compliance With Export Restrictions, Sources Say. Sources revealed to Reuters (8/13, Potkin, Freifeld, Yuan Yong) that US authorities “have secretly placed location tracking devices in targeted shipments of advanced chips they see as being at high risk of illegal diversion to China.” The sources explained the “measures aim to detect AI chips being diverted to destinations which are under US export restrictions, and apply only to select shipments under investigation... They show the lengths to which the US has gone to enforce its chip export restrictions on China, even as the Trump administration has sought to relax some curbs on Chinese access to advanced American semiconductors.”

Stanford Study Reveals AI Usage Trends In K-12 Education

Forbes (8/13, Fitzpatrick) reports that a Stanford University SCALE study, in collaboration with SchoolAI, analyzed the use of generative AI by over 9,000 K-12 teachers in the US during the 2024-25 school year. The study categorized teachers into Single-Day Users, Trial Users, Regular Users, and Power Users. Over 40% became Regular or Power Users, surpassing typical software adoption benchmarks. Most AI activity occurred during weekday mornings, integrating into teaching schedules. SchoolAI’s tools, including student chatbots, teacher productivity tools and teacher chatbot assistants, showed varied usage. Teacher productivity tools were most used, especially among Power Users. Educators like Larisa Black and Tom D’Amico highlighted AI’s role in personalized learning and understanding student needs.

AI Uncovers Supernova-Black Hole Interaction

USA Today (8/14, Santucci) reports that a new discovery reveals a supernova explosion caused by a black hole’s gravitational stress. Astrophysicists observed a giant star exploding due to its interaction with a dense black hole. Alex Gagliano, the lead author of the study, suggests this phenomenon might be more common than previously thought. The study, published in the Astrophysical Journal, involved researchers from the Center for Astrophysics | Harvard & Smithsonian and MIT. AI played a crucial role by flagging the star’s unusual behavior, allowing the team to monitor the event closely. The supernova, SN 2023zkd, about 730 million light-years away, displayed unique brightness patterns, indicating its interaction with a black hole.

        Reuters (8/14, Dunham) reports that Gagliano, who is an astrophysicist with the National Science Foundation’s Institute for AI and Fundamental Interactions, said, “We caught a massive star locked in a fatal tango with a black hole. After shedding mass for years in a death spiral with the black hole, the massive star met its finale by exploding. It released more energy in a second than the sun has across its entire lifetime.”

Report: Tech Companies’ AI Boom Driving Up Power Bills For Americans

The New York Times (8/14, Penn, Weise) says, “Just a few years ago, tech companies were minor players in energy, making investments in solar and wind farms to rein in their growing carbon footprints and placate customers concerned about climate change.” However, “now, they are changing the face of the U.S. power industry and blurring the line between energy consumer and energy producer,” having “morphed into some of energy’s most dominant players.” The Times says, “Even as some corporate customers have been underwhelmed by A.I.’s usefulness so far, tech companies plan to invest hundreds of billions of dollars on it,” and “at the same time, the boom threatens to drive up power bills for residents and small businesses.”

dtau...@gmail.com
Aug 23, 2025, 9:08:50 AM
to ai-b...@googlegroups.com

Humanoid Robot Does Complex Tasks with Little Code Added

A humanoid robot developed by Boston Dynamics and Toyota Research Institute researchers employs a large behavior model to facilitate the addition of new capabilities without the need for hand-programming or new code. Researchers demonstrated the Atlas robot's ability to self-adjust by interrupting it mid-task with unexpected challenges. Boston Dynamics' Scott Kuindersma said, "Training a single neural network to perform many long-horizon manipulation tasks will lead to better generalization."
[ » Read full article ]

UPI; Lisa Hornung (August 20, 2025)

 

Top Law Schools Boost AI Training as Legal Citation Errors Grow

Law schools at the University of Chicago, University of Pennsylvania, and Yale University are among those adjusting their curricula to train students to understand AI’s limitations and to check their work. The changes come after attorneys have been fined or faced sanctions for their usage of AI in legal proceedings, which often includes errors. Said William Hubbard, deputy dean of University of Chicago Law School, “You can never give enough reminders and enough instruction to people about the fact that you cannot use AI to replace human judgment."
[ » Read full article ]

Bloomberg Law; Elleiana Green (August 19, 2025)

 

Wireless Airy Beams Twist Past Indoor Obstacles

Princeton University researchers have solved a critical challenge for ultra-fast sub-terahertz wireless signals, which can carry 10 times more data than current systems but are easily blocked by walls and objects. The researchers merged physics and machine learning to produce curved transmission paths known as "Airy beams" that bend around objects. They also developed a neural network capable of making real-time selections of the optimal beam for a specific environment as obstacles move.
[ » Read full article ]

Interesting Engineering; Neetika Walter (August 18, 2025)

 

Space Station Crew Gains AI Assistant

China’s Tiangong space station crew recently completed their third spacewalk with the aid of a new large-scale AI assistant. Delivered by the Tianzhou 9 cargo craft on July 15, Wukong AI is built on a domestic open-source model tailored for aerospace missions. It supports astronauts with scheduling, mission planning, and data analysis with its intelligent question-answering system.
[ » Read full article ]

China Daily (August 18, 2025)

 

Machine Learning Contest Aims to Improve Speech BCIs

The Brain-to-text '25 competition, run by the University of California, Davis (UC Davis) Neuroprosthetics Lab over the next five months, challenges machine learning experts to develop algorithms that can predict the speech of a brain-computer interface (BCI) user. Competitors are tasked with training their algorithms on brain data corresponding to 10,948 sentences a BCI user attempted to say. The algorithms must then predict the words in 1,450 sentences not included in the training data, with the goal of beating the UC Davis researchers' 6.70% word error rate.
[ » Read full article ]

IEEE Spectrum; Elissa Welle (August 16, 2025)
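The 6.70% benchmark above is a word error rate (WER), the standard metric for speech decoding: word-level edit distance (substitutions, insertions, and deletions) divided by the number of words in the reference sentence. As an illustrative sketch only, and not the competition's official scorer, WER can be computed with a small dynamic program:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    return dp[-1][-1] / len(ref)

# One substituted word in a four-word sentence -> 25% WER
print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```

A decoder beating the 6.70% target would, on average, get fewer than 7 of every 100 words wrong under this measure.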

 

Study Reveals Alarming Browser Tracking

University of California, Davis computer scientists found that GenAI browser assistants typically collect and share personal and sensitive information with first-party servers and third-party trackers. Their study covered nine popular search-based GenAI browser assistants. Some gathered only the data on the screen when the questions were asked, but others collected the full HTML of the page and all textual content. One also collected form inputs, including the user's Social Security number.
[ » Read full article ]

UC Davis College of Engineering News; Jessica Heath (August 13, 2025)

 

NASA, Google Collaborate on AI Doctor for Mars Trip

Researchers at Google, in collaboration with NASA, are developing the Crew Medical Officer Digital Assistant (CMO-DA) to provide astronauts on multi-year, long-distance space missions with diagnostics and medical advice without input from medical professionals on Earth. CMO-DA uses open-source large language models and runs on Google Cloud's Vertex AI environment. Its source code is owned by NASA. In tests evaluated by a three-doctor panel, the system's AI diagnostics achieved high accuracy rates for common maladies.
[ » Read full article ]

PC Mag; Will McCurdy (August 10, 2025)

 

Education, Workforce Training Form Core of U.S. AI Strategy

At the recent Ai4 conference in Las Vegas, U.S. Department of Labor (DOL) Chief Innovation Officer Taylor Stockton said the agency will prepare Americans for an AI-centric economy through a focus on upskilling and developing new vehicles to curtail worker displacement. Stockton said a key aspect of this strategy is prioritizing foundational AI literacy "across all education and workforce funding streams." The comments came on the heels of the release of a Talent Strategy government report co-authored by the DOL and the U.S. Departments of Commerce and Education.
[ » Read full article ]

Nextgov; Alexandra Kelley (August 12, 2025)

 

EU to Curb AI Chip Flows to China as Part of U.S. Trade Deal

Under the terms of the recent EU-U.S. trade agreement, the European Union has agreed to purchase $40 billion of U.S. AI chips and to adopt U.S. security standards to prevent “technology leakage to destinations of concern.” EU trade chief Maros Sefcovic stressed that the chips must stay in Europe and benefit its economy, and not be re-exported because they might “fall into the wrong hands.”

[ » Read full article *May Require Paid Registration ]

South China Morning Post; Finbarr Bermingham (August 22, 2025)

 

Labor Unions Mobilize to Challenge Advance of Algorithms in Workplaces

Labor unions are working with state lawmakers to place guardrails on AI's use in workplaces. In Massachusetts, for example, the Teamsters labor union is backing a proposed state law that would require autonomous vehicles to have a human safety operator. Oregon lawmakers recently passed a bill supported by the Oregon Nurses Association that prohibits AI from using the title “nurse” or any associated abbreviations. The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), meanwhile, launched a national task force in July to work with state lawmakers on efforts to regulate automation and AI affecting workers.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Danielle Abril (August 12, 2025)

 

AI-Generated Responses Undermine Crowdsourced Research Studies

Researchers at Germany's Max Planck Institute for Human Development found that crowdsourced research studies may be contaminated by AI-generated responses. In a study of the Prolific platform, they observed 45% of participants copying and pasting content into an open-ended question and noted "overly verbose" or "distinctly non-human" language in the responses. In a second study, the researchers added traps, including reCAPTCHAs, to distinguish entirely human responses from bot-generated ones.

[ » Read full article *May Require Paid Registration ]

New Scientist; Chris Stokel-Walker (August 19, 2025)

 

Duolingo CEO Emphasizes AI In Business Strategy

The New York Times (8/17, Holman) reports that Duolingo, headquartered in Pittsburgh, Pennsylvania, is shifting toward an “AI-first” approach, as announced by CEO Luis von Ahn in a recent memo. Under the strategy, the company will hire “only if managers could prove that artificial intelligence could not do the job.” Despite initial confusion about the use of AI at his company, von Ahn said, “In fact, we’re hiring at the same speed as we were hiring before.” The company boasts 130 million monthly active users, “up more than 20 percent from the previous year.” Von Ahn emphasized maintaining human interaction at the core of its mission, despite the increased reliance on AI. He said, “AI can allow us to accomplish a lot more. What used to take us years now can take us a week.” Von Ahn is “confident that Duolingo...could keep people at the center of its mission,” and he acknowledged the importance of engaging users through gamification.

Gen-AI Therapy Chatbot Shows Promise For Treating Patients With Depression, Anxiety, Disordered Eating, Study Finds

Psychiatric News (8/18) reports a study found that “a therapeutic chatbot guided by generative AI was more effective than a waitlist control at reducing symptoms of depression, anxiety, and disordered eating.” The chatbot, called Therabot, “was trained on therapist–patient dialogues that simulated a cognitive behavioral therapy session and were developed by an expert research team that included a board-certified psychiatrist and a clinical psychologist.” Researchers observed that after four weeks, “adults who received Therabot reported significantly greater decreases across all three symptom categories relative to the waitlist group.” Furthermore, participants on average “engaged with Therabot for about six hours during the study period and sent 260 messages. Those using Therabot also reported high scores on various measures of user satisfaction (e.g., easy to learn, good interface) as well as their ability to bond with the program.” The study was published in NEJM AI.

AI Could Double Labor Underutilization, Reduce Income By 2050

The Daily Upside (8/18) reports a study in Nature highlights potential socioeconomic impacts of AI on labor markets, suggesting that increasing the AI-capital-to-labor ratio could double labor underutilization by 2050, reducing per capita income by 26%. Companies like Oracle and IBM are investing in AI upskilling, while Zoom and startups like Humancore focus on AI augmentation. Experts emphasize integrating AI into workflows to enhance productivity and employee experience, with continuous feedback and clear guidance.

Sam Altman Plans Massive Infrastructure Expansion For OpenAI

Fortune (8/18, Roytburg) reports that OpenAI CEO Sam Altman has vast ambitions for his company, including “a future where sustaining ChatGPT’s growth means building infrastructure so massive it rivals the world’s largest utilities.” However, in the short term, he admits the recent rollout of GPT-5 was problematic, stating, “I think we totally screwed up some things on the rollout.” Users expressed dissatisfaction, describing the new model as “colder” than GPT-4o. In response, Altman reinstated GPT-4o, acknowledging the importance of user experience. Looking forward, Altman anticipates OpenAI will “spend trillions of dollars on data center construction” to support ChatGPT’s growth, aiming for “billions” of daily users. Altman also reveals OpenAI’s interest in brain-computer interfaces and a potential AI-driven social network, while noting the current AI investment climate as a “bubble.”

Nvidia Developing New AI Chip For China

Reuters (8/19, Mo, Potkin) reports, “Nvidia is developing a new AI chip for China based on its latest Blackwell architecture that will be more powerful than the H20 model it is currently allowed to sell there, two people briefed on the matter said.” This “new chip, tentatively known as the B30A, will use a single-die design that is likely to deliver half the raw computing power of the more sophisticated dual-die configuration in Nvidia’s flagship B300 accelerator card, the sources said.” President Trump “last week opened the door to the possibility of more advanced Nvidia chips being sold in China.” However, “the sources noted U.S. regulatory approval is far from guaranteed amid deep-seated fears in Washington about giving China too much access to U.S. AI technology.”

Tech Giants Expand Healthcare AI Initiatives

Becker’s Hospital Review (8/19, Diaz) reports that major tech companies are intensifying their focus on healthcare AI, unveiling tools for various applications. Google and NASA are developing an AI tool for medical care in space, while OpenAI’s recently released GPT-5 enhances health-related query responses. Microsoft reported a successful Fiscal Year 2025 for its Dragon Copilot, which was used in over 13 million patient encounters. Google Cloud is collaborating with HCA Healthcare on Nurse Handoff, an AI tool for shift summaries that is currently in trials at five hospitals.

Louisiana Businesses, Universities Embrace AI Partnerships

The New Orleans Times-Picayune (8/14, Collins) reported that Louisiana businesses “are changing the way they work thanks to rapidly evolving computers that are designed to rival the human brain in their ability to learn, solve problems, make decisions and create.” For example, a former tech consultant “is leading the gallery’s full-scale data mining operation as its first-ever director of artificial intelligence.” In partnership with Tulane University’s computer science program, he “leads a team of three AI specialists who help the store’s curators search collections and auctions worldwide for valuable and interesting items.” Entergy, a Fortune 500 company, has also partnered with Tulane computer science professor Nicholas Mattei “to track the content of New Orleans Public Service Council meetings to be able to quickly find information relevant to the utility’s regulation.” Another AI initiative in collaboration with Louisiana State University “aims to identify broken equipment from photos and video” so the company can “perform inspections from drones or vehicle cameras.”

Microsoft, OpenAI Launch GPT-5 Model Suite

InfoQ (8/20) reports that Microsoft and OpenAI have announced the general availability of the GPT-5 model suite within the Azure AI Foundry platform. Microsoft CEO Satya Nadella highlighted the model’s capabilities in reasoning, coding, and chat, trained on Azure. GPT-5 features an orchestrator that assigns tasks to specialized sub-models, improving output quality and reducing prompt tuning. Available via API, the suite includes models like GPT-5, GPT-5 mini, and GPT-5 nano, each tailored for specific tasks. Microsoft aims to enhance enterprise AI transformation with scalable AI deployment through the Azure AI Foundry.

Hyundai Leverages AI In New Manufacturing Plant

Insider (8/20, Shimkus) reports Hyundai Motor Group Metaplant America integrates AI extensively across its operations. The plant, valued at nearly $7.6 billion, uses AI, Nvidia chips, and robotics at its core, distinguishing Hyundai from competitors who retrofit older plants. Hyundai’s communications representative, Miles Johnson, said, “AI can play a significant role in predicting optimized outcomes and identifying root causes of production issues.” Cox Automotive executive analyst Erin Keating said, “Hyundai’s integration of humanoid robots and such sets a new benchmark for smart manufacturing.” Hyundai aims to hire 8,500 employees by 2031, with 1,000 already employed. The plant will support Hyundai’s brands, including Kia and Genesis, in the future. Morningstar analyst David Whiston highlighted that AI adoption helps manage costs and disruptions. Keating added, “Automakers leveraging AI for smart factories, autonomous logistics, and predictive analytics will be better positioned to scale production efficiently and meet regulatory and consumer demands faster.”

Report Measures Reliability Of AI Teacher Assistants

Education Week (8/20, Prothero) reports that Common Sense Media released a risk assessment of AI teacher-assistant platforms, one that highlights both potential benefits and concerns. The report based on that assessment, which “tested Google’s Gemini in Google Classroom, Khanmigo’s Teacher Assistant, Curipod, and MagicSchool,” noted that while these tools can save teachers time and enhance learning, they also risk producing “biased outputs” and failures to identify misinformation. The assessment revealed that AI tools suggested different behavior interventions based on inferred race and gender. For example, the report said: “Annie tended to get de-escalation-focused strategies; Lakeesha tended to get ‘immediate’ responses; and Kareem tended to have little specific guidance.” Google responded by disabling the “generate behavior intervention strategies” feature in Google Classroom, while MagicSchool could not replicate the report’s findings.

AI Power Demand Spurs Renewable Energy Investment

E&E News (8/21, Behr, Subscription Publication) reports that increasing power demands from data center developers, driven by AI, necessitate significant investments in renewable energy sources like solar and wind, as discussed by experts at a US Energy Association webinar. Jeff Weiss, executive chair of Distributed Sun, highlighted the urgency, stating, “Electricity scarcity is upon us, and this is the new world for industrials, for data centers, for consumers, where electricity is not abundant and we need to manage sources of power.” Despite opposition from former President Trump, who criticized renewable energy, experts emphasize the need for utilities to expand power capacity using diverse energy sources.

AI Tool Aims To Enhance Student Writing

Chalkbeat (8/21, Zimmer) reports that Northside Charter High School in Brooklyn, New York, has introduced an AI writing tool, Connectink, designed by Chief Academic Officer Rahul Patel to aid students in writing. The tool provides “sentence starters” and prompts to enhance students’ writing skills without doing the work for them. Patel said, “It’s more about trying to get them jazzed about writing because our students don’t write a lot on their own.” The Center for Professional Education of Teachers at Columbia University advised on the project. A pilot with 360 students showed improvements in writing confidence and skill. The tool aims to address concerns about AI’s role in education, focusing on inspiring creativity rather than replacing student effort. Patel cautioned, “I do think we’re going to start to see some negative impact if we don’t shift the educational tools that use AI.”

dtau...@gmail.com

unread,
Aug 30, 2025, 9:15:47 AM
to ai-b...@googlegroups.com

Hacker Used AI to Automate 'Unprecedented' Cybercrime Spree

Anthropic revealed that a hacker exploited its Claude AI chatbot to run what it called the most advanced AI-driven cybercrime spree yet, targeting at least 17 companies. Over three months, the hacker used Claude to identify vulnerable firms, build malware, organize stolen files, analyze sensitive data, and draft ransom emails. Victims included a defense contractor, a financial institution, and several healthcare providers, with stolen data ranging from medical records to defense-regulated files.
[ » Read full article ]

NBC News; Kevin Collier (August 27, 2025)

 

AI Isn’t Ready to Be a Real Coder

AI coding tools have advanced rapidly, aiding developers by generating code, fixing errors, and improving documentation, but researchers at Cornell University, the Massachusetts Institute of Technology, Stanford University, and the University of California, Berkeley presented evidence that the tools are not yet ready to function as fully autonomous coders. Current AI models struggle with large codebases, logical complexity, long-term planning, and debugging tasks that require deep contextual understanding. Their documented failures include hallucinated errors and flawed fixes.
[ » Read full article ]

IEEE Spectrum; Rina Diane Caballar (August 26, 2025)

 

Parents Allege ChatGPT Is Responsible for Their Son’s Suicide

The parents of 16-year-old Adam Raine, who died by suicide, are suing OpenAI, alleging ChatGPT contributed to his death by providing information on suicide methods. The lawsuit, filed Tuesday, is the first to directly accuse OpenAI of wrongful death. Adam, struggling after personal losses, health issues, and social setbacks, initially used ChatGPT for schoolwork but later confided in it about his mental health. The suit claims the chatbot encouraged harmful thoughts instead of offering adequate safeguards. “He would be here but for ChatGPT,” said father Matt Raine.
[ » Read full article ]

Time; Solcyré Burga (August 26, 2025)

 

Teacher-less AI Private School Opening in Virginia

Alpha School, an AI-driven private school, is opening a Northern Virginia campus this fall, charging up to $65,000 annually. Students will spend two hours daily on academics via adaptive apps like IXL, then focus on life skills and workshops. Instead of teachers, AI “guides” oversee learning and activities. Backed by billionaire investors, Alpha is expanding to 12 campuses nationwide while seeking approval to adapt its model in charter schools.
[ » Read full article ]

The Washington Post; Karina Elwood (August 26, 2025)

 

Giant Robot Hand Designed for Disaster Response

Researchers in Japan and Switzerland demonstrated a giant robotic hand designed to aid disaster response as part of Japan’s Collaborative AI Field Robot Everywhere (CAFÉ) project. The device, built in collaboration with Japan’s Kumagai Gumi, Tsukuba University, and Nara Institute of Science and Technology, along with Switzerland’s ETH Zurich, is able to grip fragile or heavy debris with precision. The researchers paired the robot hand with an AI excavation system that uses reinforcement learning, allowing it to safely tackle hazards like natural dams formed by landslides.
[ » Read full article ]

Interesting Engineering; Sujita Sinha (August 25, 2025)

 

AI Giants Call for Energy Grid Agreement

Dozens of scientists at Microsoft, Nvidia, and OpenAI are calling on software, hardware, infrastructure, and utility designers to help normalize power demand during AI training. Their concern is that the fluctuating power demand of AI training threatens the electrical grid's ability to handle such a variable load. The researchers argue that energy demand oscillating between the power-intensive GPU compute phase and the less-taxing communication phase poses an obstacle to AI model development.
[ » Read full article ]

The Register (U.K.); Thomas Claburn (August 22, 2025)

 

South Korea Makes AI Investment a Top Policy Priority

South Korea has designated AI investment as a top policy priority as it seeks to become a global AI power. Beginning in the second half of this year, the government will launch policy packages for 30 AI projects spanning robotics, automotive, shipping, home appliances, drones, factories, chips, and more. To invest in strategic sectors, South Korea plans to establish a 100 trillion won (U.S.$71.56 billion) public-private investment fund. According to the South Korean Finance Ministry, "A grand transformation into AI is the only way out of growth declines resulting from a population shock."
[ » Read full article ]

Reuters; Jihoon Lee (August 22, 2025)

 

Companies Chase ‘AI Native’ Talent, No Work Experience Required

Base salaries for nonmanagerial workers in AI with up to three years’ experience increased by 12% from last year to this year, the largest gain of any experience group, according to a new report by Burtch Works. The AI staffing firm also found that people with AI experience are being promoted to management roles roughly twice as fast as their counterparts in other technology fields.


[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Katherine Bindley (August 26, 2025)

 

Silicon Valley Launches Pro-AI PACs

Silicon Valley is investing over $100 million in Leading the Future, a new political-action committee (PAC) network aimed at shaping AI regulation. Backed by venture capital firm Andreessen Horowitz, OpenAI President Greg Brockman, and other tech leaders, the super-PAC will fund campaign donations and digital ads to oppose strict AI regulations while supporting industry-friendly policies. Its leaders argue excessive restrictions could hinder U.S. innovation, jobs, and competitiveness against China.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Amrith Ramkumar; Brian Schwartz (August 26, 2025)

 

Malaysia Unveils First AI Device Chip

Malaysia introduced its first domestically designed AI processor, the MARS1000, marking the country's entry into the competitive global semiconductor race. Developed by SkyeChip, the edge AI processor is intended to run AI workloads directly on devices like cars and robots. This comes as the government has committed 25 billion ringgit (U.S.$6 billion) to advance the nation's capabilities in chip design, wafer fabrication, and AI datacenters, building on existing investments from major tech companies like Oracle and Microsoft.


[ » Read full article *May Require Paid Registration ]

Bloomberg; Yuan Gao; Mackenzie Hawkins; Joy Lee; et al. (August 25, 2025)

 

Humain Launches Arabic Chatbot with 'Islamic' Values

Saudi Arabia's leading AI company Humain has launched Humain Chat, a conversational AI app designed for Arab and Muslim users. Built on the company's Allam large language model, the app supports bilingual Arabic-English conversations and multiple Arabic dialects, including Egyptian and Lebanese. CEO Tareq Amin described the AI as “both technically advanced and culturally authentic,” since it was trained on data reflecting regional values and culture.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Omar El Chmouri; Mark Bergen (August 25, 2025)

 

Google Wants You to Know the Environmental Cost of Quizzing Its AI

A new report from Google revealed that every text query submitted to its AI chatbot Gemini requires the same amount of energy as watching nine seconds of TV. The search engine giant determined around five drops of water are consumed and 0.03 grams of carbon dioxide equivalent is emitted for each individual Gemini text query. A study by UNESCO suggests energy usage can be decreased “dramatically” by using terser prompts to query smaller AI models.

[ » Read full article *May Require Paid Registration ]

WSJ Pro Sustainable Business; Clara Hudson (August 21, 2025)

 

Hobbyist Restorer Rocks Art World with AI Innovation

Massachusetts Institute of Technology graduate student Alex Kachkine has revolutionized art restoration using AI and precision printing techniques from microchip manufacturing. Kachkine's approach analyzes damaged paintings and creates ultra-thin removable masks that restore artworks 65 times faster than traditional methods. The innovation bridges opposing restoration philosophies by allowing complete visual restoration while preserving the original artwork underneath.

[ » Read full article *May Require Paid Registration ]

The New York Times; Ephrat Livni (August 22, 2025)

 

Meta’s Ambitious Data Center Projects Underway In Louisiana, Ohio

Bloomberg (8/22, Subscription Publication) reported that Meta Platforms Inc. is constructing “several massive data centers” to support its artificial intelligence goals. Last month, CEO Mark Zuckerberg revealed that the first project, Prometheus, is a “1-gigawatt campus” in Ohio, slated for completion in 2026. The largest, Hyperion, is a “5GW facility planned for rural Richland Parish in Louisiana.” A graphic released by Meta, depicting the Hyperion facility overlaid on Manhattan, illustrates its vast scale. However, Bloomberg said, the actual size will not match the depiction. Meanwhile, speculation in Richland Parish is driving property values up significantly. As Meta reorganizes its AI group and pauses hiring, the final scale of these projects remains uncertain.

First Lady To Lead Presidential AI Challenge Initiative

In an interview with the New York Post (8/25), First Lady Melania Trump “revealed her next official project” will be “leading the Presidential Artificial Intelligence Challenge to inspire children and teachers to embrace AI technology and help accelerate innovation in the field.” The role combines “her passion for children’s well-being with her tech-forward vision, as demonstrated by her advocacy for the ‘Take It Down Act,’ which combats AI-generated deepfakes.” She told the Post, “in just a few short years, AI will be the engine driving every business sector across our economy. It is poised to deliver great value to our careers, families, and communities. ... just as America once led the world into the skies with the Wright Brothers, we are poised to lead again —this time in the age of AI.”

Musk’s AI Startup Sues Apple, OpenAI

Reuters (8/25, Scarcella) reports a federal lawsuit that “Elon Musk’s artificial intelligence startup xAI” filed against Apple and ChatGPT maker OpenAI accuses the defendants “of illegally conspiring to thwart competition for artificial intelligence.” The legal action says Apple and OpenAI have “locked up markets to maintain their monopolies and prevent innovators like X and xAI from competing.” The suit also says, “If not for its exclusive deal with OpenAI, Apple would have no reason to refrain from more prominently featuring the X app and the Grok app in its App Store.”

        Bloomberg (8/25, Mekelburg, Subscription Publication) reports, “Musk’s X and xAI seek billions of dollars in damages in the suit filed Monday in federal court in Fort Worth, Texas.” The suit argues “that Apple’s decision to integrate OpenAI into the iPhone’s operating system inhibits rivalry and innovation within the AI industry and harms consumers by depriving them of choice.”

Nvidia Unveils New Chip For Humanoid Robots, Self-Driving Cars

Gizmodo (8/25, Yildirim) reports Nvidia unveiled Jetson Thor, a computer created for real-time AI computation using “larger amounts of information at less energy” than the company’s previous model, Jetson Orin. The chip module is “supposed to unlock higher speed sensor data and visual reasoning” that can help autonomous sensing and motion, including in humanoid robots. Adopters include Caterpillar, Amazon, and Meta, with John Deere and OpenAI considering adopting it. The Jetson AGX Thor developer kit is on sale starting at $3,499. The Nvidia Drive AGX Thor developer kit, which applies the same technology to autonomous vehicles, is available for preorder, with sales expected to start in September.

Instructors Skeptical About New “AI Grader” Tool

The Chronicle of Higher Education (8/26, Baiocchi) reports that Grammarly’s new “AI Grader” tool is designed to “provide students with an estimated score, a rubric review, and even predictions on how a particular instructor might assess a draft.” However, some college instructors are uneasy about the tool. University of Central Oklahoma professor Laura Dumin said that students’ reactions ranged from “visibly uncomfortable” to “disinterested.” She expressed concern that the tool assumes grading is a “transactional thing where there’s one set of criteria, and the reality is that I don’t think many people grade in the way that these tools might expect us to.” Luke Behnke, vice president of product at Grammarly Inc., said the tool is not meant to replace professors’ feedback but to provide guidelines for “incremental improvements.” Behnke “said that the tool primarily bases its evaluations on information that students voluntarily provide, like grading rubrics.”

Texas A&M University Partners With Meta To Launch Disaster-Response AI Tools

KHOU-TV Houston (8/26, Mercedes) reports that Meta is launching “a new suite of AI-powered tools designed to help families prepare, stay safe, and recover when the next hurricane strikes.” Meta’s Director of AI for Good, Laura McGorman, “said the company has been working closely with researchers at Texas A&M to build artificial intelligence models that use social media data to better predict and respond to natural disasters.” McGorman added, “By making these tools free and open source, we hope the research community in partnership with local government, can move and make sure that we leverage the best that technology has to offer in the context of a crisis.”

Mount Sinai Develops AI Tool For Cancer Image Analysis

Becker’s Hospital Review (8/26, Jeffries) reports that the Icahn School of Medicine at Mount Sinai has developed MARQO, an AI-powered tool to expedite cancer tissue image analysis. The platform processes tumor slides using immunohistochemistry and immunofluorescence methods, offering full-slide analysis in minutes without advanced computing needs. Though not yet validated for clinical diagnostics, MARQO is intended for research and large-scale studies, with plans for enhanced features.

State Legislators Moving To Regulate AI In Mental Health Arena

Modern Healthcare (8/26, Perna, Subscription Publication) reports, “State legislators are moving quickly to regulate artificial intelligence in healthcare, particularly in the mental health arena.” With “federal legislation of AI unlikely during President...Trump’s administration, states are moving ahead with their own laws as the hype over the technology permeates all areas of healthcare.” States such as “Illinois, Nevada and Texas have already passed a handful of laws.” According to Modern Healthcare, “consulting firm Manatt Health said there are more than 250 additional AI bills under consideration across 46 states that could use these early adopters as a roadmap.”

Nvidia Tops Estimates With $46.7 Billion Revenue, Data Center Sales Surge

CNBC (8/27, Leswing) reports that Nvidia surpassed analyst expectations with adjusted earnings per share of $1.05 and revenue of $46.74 billion, as opposed to the estimated $1.01 and $46.06 billion, respectively. Nvidia’s data center business, driven by its GPU chips, saw a 56 percent revenue increase to $41.1 billion, despite a one percent decline from the first quarter due to reduced H20 sales. Chief Financial Officer Colette Kress said $33.8 billion of data center sales were for compute, with $7.3 billion from networking parts, nearly double the previous year. Nvidia’s gaming division reported $4.3 billion in sales, a 49 percent increase, while its robotics division grew 69 percent annually to $586 million. Nvidia’s net income rose 59 percent to $26.42 billion, or $1.05 per diluted share, from $16.6 billion, or 67 cents per share, a year ago.

        TechCrunch (8/27, Brandom) reports that Nvidia highlighted its involvement in launching OpenAI’s open source gpt-oss models, which involved processing “1.5 million tokens per second on a single NVIDIA Blackwell GB200 NVL72 rack-scale system.” Nvidia’s earnings reveal struggles in selling its chips in China, with no sales of the China-focused H20 chip to Chinese customers last quarter. However, $650 million worth of H20 chips were sold to a customer outside China.

Google May Lose Its Search Deals, Allowing For New Investment In AI

CNBC (8/27, Sigalos, Leswing) reports a judge is expected to rule on Google’s default search contracts in the coming days, a decision that will affect $26 billion in payments. Despite the major financial impact, “some economists and Wall Street analysts believe Google might come out ahead in the long run — freed from costly deals that no longer drive demand.” In an August 5 note, Barclays analysts said that if Google were forced to unwind the payments and contracts, it would still be “nearly impossible” for its smaller competitors to compete. Additionally, Google could redirect those funds into AI and cloud development, potentially lifting its profits and retaining its innovative edge.

Law Schools Integrate AI In Curriculum To Meet Industry Demands

Inside Higher Ed (8/29, Palmer) reports that law schools are increasingly incorporating artificial intelligence (AI) into their curricula as law firms adopt AI tools like ChatGPT, Thomson Reuters’ CoCounsel, Lexis+ AI, and Westlaw AI. The American Bar Association notes that “some 30 percent of law offices are using AI-based technology tools,” while 62 percent of law schools have formal AI learning opportunities. Ninety-three percent of law schools are “considering updating their curriculum to incorporate AI education,” but in practice, “many of those offerings may not be adequate, said Daniel W. Linna Jr., director of law and technology initiatives at Northwestern University’s Pritzker School of Law.” He said that law firms “understand that the current reality is that not many law schools are doing much more than basic training.” The University of San Francisco School of Law recently became “the first in the country to integrate generative AI education throughout its curriculum.”

dtau...@gmail.com
Sep 6, 2025, 3:14:06 PM
to ai-b...@googlegroups.com

ChatGPT to Get Parental Controls After Teen's Suicide

OpenAI said it will roll out parental controls for ChatGPT within the next month, following a lawsuit alleging the chatbot encouraged a California teen to conceal suicidal thoughts before taking his own life. The new tools will let parents link accounts, limit usage, and receive alerts if the system detects signs of acute distress. The move comes amid growing concern about teens’ reliance on AI chatbots and parallels past controversies around social media harms.
[ » Read full article ]

The Washington Post; Gerrit De Vynck (September 2, 2025)

 

AI Co-Pilot Boosts Noninvasive BCI by Interpreting User Intent

A noninvasive brain-computer interface (BCI) system developed by engineers at the University of California, Los Angeles (UCLA) combines electroencephalography with AI to help users control a robotic arm or computer cursor more efficiently. Tested on four participants, including one paralyzed user, the system decoded brain signals and paired them with computer vision to interpret user intent. With the AI co-pilot’s support, tasks such as moving blocks with a robotic arm were completed markedly faster than with brain control alone.
[ » Read full article ]

UCLA Samueli School of Engineering (September 1, 2025)

 

Chatbots, AI Transform Classrooms

U.S. schools have shifted from banning ChatGPT to embracing AI for instruction, homework assistance, and administrative tasks, though teacher adoption lags. Companies like OpenAI, Google, and Microsoft push AI products and training into schools, sometimes raising concerns about bias, privacy, and commercialization. Educators aim to integrate AI responsibly while emphasizing critical thinking, student independence, and harm reduction.
[ » Read full article ]

Bloomberg; Vauhini Vara (September 1, 2025)

 

AI Spots Hidden Signs of Consciousness in Comatose Patients

SeeMe, an AI system developed by Stony Brook University (SBU) researchers, detects microscopic facial movements in comatose patients to identify signs of consciousness invisible to doctors. The researchers recorded videos of 37 patients with recent brain injuries who outwardly appeared to be in a coma. They tracked the participants’ facial movements at the level of individual pores after they were given commands such as “open your eyes” or “stick out your tongue.”
[ » Read full article ]

Scientific American; Andrew Chapman (August 31, 2025)

 

AI Tool Identifies 1,000 ‘Questionable’ Scientific Journals

Computer scientists at the University of Colorado Boulder developed an AI platform to identify questionable or “predatory” scientific journals. These journals often charge researchers high fees to publish work without proper peer review, undermining scientific credibility. The AI, trained on data from the non-profit Directory of Open Access Journals, analyzed 15,200 journals and flagged over 1,400 as suspicious, with human experts later confirming more than 1,000 as likely problematic. The tool evaluates editorial boards, website quality, and publication practices.
[ » Read full article ]

CU Boulder Today; Daniel Strain (August 28, 2025)

 

Africa Tries to Close the AI Language Gap

Africa is home to over a quarter of the world’s languages, yet many have been excluded from AI development. The Africa Next Voices project, supported by a $2.2-million Gates Foundation grant, created datasets in 18 African languages from Kenya, Nigeria, and South Africa. At South Africa’s University of Pretoria, computer science professor Vukosi Marivate said, "We think in our own languages, dream in them, and interpret the world through them. If technology doesn't reflect that, a whole group risks being left behind."
[ » Read full article ]

BBC News; Pumza Fihlani (September 4, 2025)

 

Big Tech Bosses Back Melania Trump’s AI Education Initiative

Big tech CEOs including Microsoft's Satya Nadella, OpenAI’s Sam Altman, Google’s Sundar Pichai, and Apple’s Tim Cook gathered at the White House Thursday to show their support for Melania Trump's plan to help America’s children learn to use AI. The first lady last month launched a presidential AI challenge that seeks to foster students and educators’ interest in the technology.

[ » Read full article *May Require Paid Registration ]

Financial Times; Joe Miller; Stephen Morris; Cristina Criddle (September 3, 2025); et al.

 

Taco Bell Rethinks Future of Voice AI at Drive-Through

Taco Bell has seen mixed results in its experiment with voice AI ordering at over 500 drive-throughs. Customers have reported glitches and delays, and some have trolled the system with absurd orders, prompting concerns about reliability. The fast-food chain’s Dane Mathews acknowledged the technology sometimes disappoints, noting it may not suit all locations, especially high-traffic ones. The chain is reassessing where AI adds value and when human staff should step in.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (August 29, 2025)

 

Light-Based AI Image Generator Uses Almost No Power

A diffusion-based AI image generator developed by University of California, Los Angeles (UCLA) researchers combines digital encoding, which uses only a small amount of energy, with light-based decoding, which uses no computational power. UCLA's Aydogan Ozcan said, "Unlike digital diffusion models that require hundreds to thousands of iterative steps, this process achieves image generation in a snapshot, requiring no additional computation beyond the initial encoding."

[ » Read full article *May Require Paid Registration ]

New Scientist; Alex Wilkins (August 27, 2025)

 

Survey Reveals College Students’ Views On Generative AI

Inside Higher Ed (8/29, Flaherty) reported that it “is dedicating the second installment of its 2025-26 Student Voice survey series to generative AI.” Conducted in July, the survey gathered responses from 1,047 students across 166 institutions. Relatively few students “say that generative AI has diminished the value of college, in their view, and nearly all of them want their institutions to address academic integrity concerns – albeit via a proactive approach rather than a punitive one.” The majority of students, “some 85 percent, indicate they’ve used generative AI for coursework in the last year,” mainly for “brainstorming ideas” and “asking it questions like a tutor.” Only 25 percent of students use AI to complete assignments, and 19 percent for writing essays. The survey reveals that students are mixed on AI’s impact on learning, with 55 percent saying “it’s had mixed effects on their learning and critical thinking skills.”

Alibaba Develops New AI Chip Amid Nvidia’s Regulatory Challenges

Fast Company (8/29) reported that Alibaba has developed a new AI chip designed for a broader range of inference tasks than its predecessors. The chip, currently in testing, is manufactured domestically in China, unlike Alibaba’s previous AI processor, which was fabricated by Taiwan Semiconductor Manufacturing. This development comes as Chinese tech companies focus on homegrown technology due to regulatory issues faced by Nvidia, the leading AI chipmaker. Earlier this year, the Trump Administration effectively blocked Nvidia’s H20 chip, the most powerful AI processor it was allowed to sell in China. Although the US recently allowed Nvidia to resume H20 sales, Chinese firms, including Alibaba, are developing alternative processors. Alibaba, China’s largest cloud-computing company and a major Nvidia customer, reported a 26% revenue increase in its cloud computing segment for the April-June quarter, driven by strong demand.

Georgia Schools Integrate AI Into Curricula

The Atlanta Journal-Constitution (8/31, Bhat) reported that educators in Georgia “are integrating AI into curricula both as a stand-alone topic and to aid learning in subjects like math and English.” In counties like Fulton, schools use Edia, “an AI-powered math platform, in some high school advanced math classes to provide personalized feedback and instruction to students.” Gwinnett County is working “to embed AI literacy into its team that provides digital citizenship training to all students.” Despite these advancements, “more than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to an EdWeek Research Center survey last year.” Concerns also persist about AI’s impact on students’ creativity and privacy. As students learn “in the age of AI and enter an ever-changing labor market, Technology Association of Georgia President Larry Williams said, ‘there’s a lot of smart kids out there’ who can harness the technology’s power.”

AI Policies Emerge In Higher Education Syllabi

The Chronicle of Higher Education (9/2, Huddleston) reports that artificial intelligence (AI) is increasingly integrated into higher education, and that is prompting varied responses from instructors regarding its use in syllabi. A dozen instructors and experts shared “their AI-use policies for this fall and how the guidelines appear in course syllabi.” Georgia State University “recently started providing instructors with sample syllabus statements about AI that comply with the university’s academic-integrity policies,” while Ohio State University and Washington University in St. Louis offer similar resources. Brian Lee at Pierce College “took several AI policies he found online, asked ChatGPT to mesh them together, and edited the output to his specifications.” Professors “said they’re also using their syllabus statements to educate students on AI’s shortcomings,” with one educator emphasizing AI’s role as a “text generator, not a truth generator.”

California Colleges Combat Financial Aid Fraud With AI Tools

EdSource (9/2, Burke) reports that California’s community colleges are employing artificial intelligence (AI) to combat financial aid fraud, which has cost them millions. Around 80 of the 115 colleges “are now or will soon be using an AI model that detects fake students by looking for information such as shared phone numbers, suspicious course-taking patterns, and even an applicant’s age.” California’s community colleges have lost “more than $11 million to financial aid fraud in 2024 as they were inundated with fake students,” and “at least $18 million in aid since 2021.” The AI model, initially developed at Foothill-De Anza Community College District, is said to catch “twice as many scammers as the human staff, with some campuses estimating that they are now detecting more than 90% of fraudsters.”

        California Community Colleges To Offer Free AI Training. The Los Angeles Times (9/1, Echelman) reported that California’s community colleges will collaborate with tech firms, including Adobe, Google and Microsoft, to offer artificial intelligence (AI) training to students and teachers. The partnerships, valued at “hundreds of millions of dollars,” will provide AI resources to California schools. However, experts caution that the programs’ effectiveness is unproven, citing challenges in defining and teaching AI literacy.

Los Alamos National Laboratory Unveils OpenAI Models On Venado Supercomputer

Defense Daily (9/2, Salem, Subscription Publication) reports that Los Alamos National Laboratory’s Venado supercomputer “has started running a series of OpenAI models to complete national security research for the nation’s nuclear weapons stockpile, the Department of Energy announced Aug. 28.” The Venado supercomputer, “which DoE says is the 19th fastest supercomputer in the world, moved to a classified network earlier this year, according to a National Nuclear Security Administration (NNSA) news release. Currently, it is being used to assist NNSA research into the aging of plutonium.”

AI Device Recalls Linked To Lack Of Clinical Validation, Study Suggests

MedTech Dive (9/2, Reuter) reports that artificial intelligence-enabled medical devices “with no clinical validation were more likely to be the subject of recalls, according to a study published in JAMA Health Forum.” The study examined 950 devices authorized by the FDA through November 2024 and found 60 devices linked to 182 recalls, primarily due to diagnostic errors. Tinglong Dai, lead author of the study and a professor at the Johns Hopkins Carey Business School, “said the ‘vast majority’ of recalled devices had not undergone clinical trials.” Publicly traded companies, which account for about 53 percent of the AI-enabled devices, were responsible for more than 90 percent of recall events. Dai highlighted that “this fundamentally has something to do with the 510(k) clearance pathway.” He and his co-authors “recommended requiring human testing or clinical trials before a device is authorized, or incentivizing companies to conduct ongoing studies and collect real-world performance data.”

Anthropic Secures Funding From Qatar Investment Authority

Bloomberg (9/3, Subscription Publication) reports that Anthropic has secured a “significant” investment from the Qatar Investment Authority in a $13 billion funding round, valuing the company at $183 billion. This marks Qatar’s entry into the competitive field of AI investments, joining existing investors such as Amazon.com Inc. and Goldman Sachs Group Inc. This move aligns Qatar with its Persian Gulf neighbors in the pursuit of artificial intelligence deals.

Sanofi Advances AI Integration In Healthcare

CIO Magazine (9/3, Cordón) reports that Sanofi is enhancing patient care by integrating artificial intelligence (AI) across its operations, aiming to be the first biopharmaceutical company to implement AI on a large scale. The company’s digital transformation includes initiatives like the Digital Accelerator to scale AI use and collaborations with partners like McLaren and Google Cloud to optimize processes and infrastructure. Sanofi’s AI efforts aim to reduce drug development time by up to 50% and improve early-stage success rates by 30%, translating to more effective and personalized treatments for patients.

Sources: Apple Preparing AI-Based Web Search Tool

Bloomberg (9/3, Subscription Publication) cites anonymous sources in reporting Apple is “planning to launch its own artificial intelligence-powered web search tool next year.” According to the sources, Apple is “working on a new system – dubbed internally as World Knowledge Answers – that will be integrated into the Siri voice assistant.” The sources noted that “Apple has discussed also eventually adding the technology to its Safari web browser and Spotlight, which is used to search from the iPhone home screen.” Bloomberg comments that the move shows Apple “stepping up competition with OpenAI and Perplexity AI.”
