Dr. T's AI brief

130 views
Skip to first unread message

dtau...@gmail.com

unread,
Jul 20, 2024, 8:43:33 AM7/20/24
to ai-b...@googlegroups.com

Self-Replicating 'Life' Created from Digital ‘Primordial Soup’

Google researchers developed a self-replicating form of artificial life from random data. Their experiments involved the random mingling, combination, and execution of tens of thousands of separate pieces of computer code, with no explicit rules determining changes in the code samples and no rewards for specific behavior. The researchers observed the development of self-replicating programs that multiplied until they reached the population cap for the code samples. The researchers saw the emergence of new types of replicators that replaced the previous population.
[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (July 9, 2024)

 

U.S. Says Russian Bot Farm Used AI to Impersonate Americans

The U.S. Department of Justice (DOJ) said it disrupted a bot farm that used AI software to create profiles on social media platform X to impersonate Americans and disseminate Russian propaganda. Part of a project allegedly approved by the Russian government, the bot farm’s propaganda campaign involved close to 1,000 fake profiles, with X suspending the accounts for terms of service violations.
[
» Read full article ]

NPR; Shannon Bond (July 9, 2024)

 

Physical System Learns Nonlinear Tasks Without Traditional Computer Processor

An analog system developed by University of Pennsylvania (Penn) researchers can learn complex tasks like nonlinear regression and "exclusive or" (XOR) relationships. The fast, low-power, scalable system is a contrastive local learning network, whose components learn based on local rules without a centralized processor and no knowledge of the larger structure. Said Penn's Marc Z. Miskin, "Because it has no knowledge of the structure of the network, it's very tolerant to errors, it's very robust to being made in different ways, and we think that opens up a lot of opportunities to scale these things up."
[
» Read full article ]

Penn Today; Erica Moser (July 5, 2024)

 

Will Ray Kurzweil Merge with AI?

ACM Fellow Ray Kurzweil believes "the Singularity," when people merge with AI, will arrive by 2045, citing the rate of growth of computer power. Kurzweil, who received ACM's Grace Murray Hopper Award in 1978 for developing a device that reads text to the blind, wants to experience the Singularity but acknowledges that, at 76, he may not live to see it. Said ACM A.M. Turing Award laureate Geoffrey Hinton, "His prediction no longer looks so silly. Things are happening much faster than I expected."
[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (July 4, 2024)

 

Tech Industry Wants to Lock Up Nuclear Power for AI

Big tech companies are pursuing deals with the owners of U.S. nuclear power plants to power their datacenters. Amazon Web Services, for example, is working with Constellation Energy to obtain electricity directly from an East Coast nuclear power plant. Big tech companies are willing to pay a premium to obtain power directly from a power plant because it reduces the time frame for building datacenters by eliminating the need for new grid infrastructure.
[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Jennifer Hiller; Sebastian Herrera (July 1, 2024)

 

AI Integration Spurs Demand For Liberal Arts Education

Higher Ed Dive Share to FacebookShare to Twitter (7/8, McLean) reports demand for liberal arts education “has declined in recent years as students increasingly eye college programs that directly prepare them for jobs,” but according to “tech and college experts, as businesses launch advanced AI tools or integrate such technology into their operations, liberal arts majors will become more coveted.” Employers will need people “to think through the ethical stakes and unintended consequences of new technologies,” so college leaders “need to take action as AI changes the workforce, scholars say.” One expert said liberal arts students “could provide a more humanistic perspective on the technology, with an eye to ethics, privacy and bias.”

AI-Powered Humanoid Robots Could Solve Global Labor Shortage

CNBC Share to FacebookShare to Twitter (7/8, Rooney) reports AI-powered humanoid robots are emerging across Silicon Valley, with companies like Amazon, Tesla, Microsoft, and Nvidia investing billions. These robots, currently used in warehouses, could eventually operate in homes and offices. Amazon has backed Agility Robotics and is deploying Digit robots in fulfillment centers. Goldman Sachs predicts the humanoid market will reach $38 billion in 20 years, helping with elderly care and labor shortages. AI advancements and a global labor shortage drive this renewed interest. Jeff Cardenas, CEO of Apptronik, highlighted robots filling “dull, dirty, dangerous tasks.”

Report: Apple’s AI Service, Siri Revamp Features Likely Coming Next Year

CNET News Share to FacebookShare to Twitter (7/8, Sherr) says, “Apple announced its new artificial intelligence service and revamped look for its Siri voice assistant in June, with plans to begin testing later this year.” However, “some features likely won’t appear until next year, according to a new report.” The “coming Apple Intelligence service promises many new features when it begins testing later this year, including a revamped look, more intuitive voice controls and integration with OpenAI’s popular ChatGPT.” Recent reporting from Bloomberg “gives more detail on the launch timing, saying Apple plans to offer Siri’s new look and ChatGPT integrations later this year.” But “Siri’s new abilities to control apps with your voice and to understand what you’re looking at on the screen...won’t arrive until next year.”

OpenAI Startup Fund Backs AI Venture To Promote Healthier Lifestyles

TechCrunch Share to FacebookShare to Twitter (7/8, Wiggers) reports, “Huffington Post founder Arianna Huffington and OpenAI CEO Sam Altman are throwing their weight behind a new venture, Thrive AI Health, that aims to build AI-powered assistant tech to promote healthier lifestyles.” TechCrunch adds, “Backed by Huffington’s mental wellness firm Thrive Global and the OpenAI Startup Fund, the early-stage venture fund closely associated with OpenAI, Thrive AI Health will seek to build an ‘AI health coach’ to give personalized advice on sleep, food, fitness, stress management and ‘connection,’ according to a press release issued Monday.”

ED Releases Guidance On AI Development For Schools

Education Week Share to FacebookShare to Twitter (7/8) reports guidance Share to FacebookShare to Twitter released Monday by the Education Department recommends educators “work with vendors and tech developers to ensure artificial intelligence-driven innovations for schools go hand-in-hand with managing the technology’s risks.” According to EdWeek, “companies and tech developers are in a tough spot with AI. Many want to move cautiously in developing tools with educator feedback that are properly tested and don’t amplify societal bias or deliver inaccurate information. On the other hand, developers also want to serve the current market – and don’t want to get left behind the competition.” The guidance argues that “vendors and educators can try new things with AI – like enabling teachers to use it for writing emails – if they consider important questions such as: Who will ensure that students’ private information isn’t shared?” It also recommends that “AI should not be allowed to make decisions unchecked by educators, and that developers need to design AI tools based on evidence-based practices, incorporating educator input and feedback, while safeguarding students’ data and civil rights.”

How Schools Can Learn From Los Angeles Unified School District’s Botched AI Chatbot Rollout

Education Week Share to FacebookShare to Twitter (7/8, Klein) reports in March, the Los Angeles Unified School District “was held up as a trailblazer for its embrace of artificial intelligence, when it unveiled a custom-designed chatbot.” Superintendent Alberto Carvalho called the tool a “game changer” that would “accelerate learning at a level never seen before.” However, in just five months, the district “has temporarily turned off its once-celebrated chatbot ‘Ed.’ That decision appears to have been prompted by upheaval at AllHere, the company LAUSD hired to create the tool at a cost of up to $6 million over five years.” LAUSD has now “become the poster district for what not to do in harnessing AI for K-12 education.” The challenges the district faced “in developing an AI tool offer important lessons for other school systems,” such as vetting ed-tech companies more carefully.

Morehouse College To Launch Animated AI Teaching Assistants

Inside Higher Ed Share to FacebookShare to Twitter (7/9, Coffey) reports Morehouse College in Atlanta, Georgia, “is rolling out 3-D, artificial intelligence-powered bots this fall across five classrooms...that will allow students to ask any question at any time.” According to senior assistant professor Muhsinah Morris, who is spearheading the AI pilot, the goal is “to enhance students’ ability to get access to information that is cultivated in your classroom.” At the “historically Black Atlanta men’s liberal arts college, the new AI bots are trained from a professor’s lectures and course notes plus other material the faculty deem important. Students access the bot with a Google Chrome web browser, which displays a 3-D figure, or avatar, designed by the professor. Students can type in a question box or they can speak aloud – in their native language – and get a verbal response back in a way that mimics the classroom experience.”

Morehouse College Introduces AI Teaching Assistants

The Chronicle of Higher Education Share to FacebookShare to Twitter (7/10, Walters) reports this fall, “a small group of professors at Morehouse College” will use AI-powered teaching assistants (TAs), “which are actually digital avatars resembling each professor’s physical appearance and demeanor.” The project, led by a chemistry professor at the college, aims to assist students with 24/7 questions and lecture delivery. VictoryXR, “the Iowa firm that designed Morehouse’s TAs,” will use materials uploaded by professors and, if needed, “will turn to a large language model from OpenAI – the creator of ChatGPT – to craft answers based on outside information.” Critics, however, point out potential issues with AI responsiveness and the need for effective question formulation.

Researchers Develop Hybrid Intelligence Using Brain Organoids

Popular Mechanics Share to FacebookShare to Twitter (7/9, Orf) reports that researchers from Indiana University Bloomington and Tianjin University have integrated lab-grown brain organoids with AI tools to create hybrid intelligence. Tianjin University’s MetaBOC robot, featuring “organoid intelligence,” can perform tasks like obstacle avoidance and tracking.

China Shifts AI Focus To Humanoid Robots

Forbes Share to FacebookShare to Twitter (7/9, Costigan) reports that at the World AI Conference in Shanghai last week, Huawei Cloud CEO Zhang Ping’an emphasized that China can lead in AI without the most advanced chips. The event highlighted Chinese-made humanoid robots, with companies like Fourier and Tlibot showcasing their innovations. Regulations for robot governance were also introduced. Tesla, featuring its Optimus robot, was among the few American firms present.

Meta Seeking New Executive To Lead Its Integration Of Generative AI

CoinGeek Share to FacebookShare to Twitter (7/9, Kaaru) reports Meta is searching “for a new executive to lead its integration of generative AI (AGI) with emerging technologies, the key among them being the metaverse.” Mark Zuckerberg three years ago “was solely focused on the metaverse” and “even changed the name of his trillion-dollar company from Facebook to Meta.” Since “then came AI, and Zuckerberg is going big on the new technology – this year, Meta will spend up to $40 billion, with AI handling up a sizeable chunk.”

More Universities Integrate AI In Agriculture Programs

Inside Higher Ed Share to FacebookShare to Twitter (7/10, Coffey) reports that universities are increasingly incorporating artificial intelligence (AI) into their agriculture programs. At the University of Missouri, “students bring tools – not tills, tractors or plows, but sensors that use artificial intelligence to measure soil moisture, cameras that distinguish weeds from crops and drones to oversee plant growth from above.” This initiative is part of a broader trend, with institutions such as Iowa State, Washington State, and Purdue University also leveraging AI to prepare students for the evolving agricultural industry. The National Science Foundation’s National Artificial Intelligence Research Institutes, “intended to boost AI research and workforce development,” now span 25 institutions, “with five higher education institutions tapped to focus on boosting the use of AI in agriculture. Each of the five universities received $20 million to spend over five years.”

NC State Robot Helps Advance AI In Agriculture

WTVD-TV Share to FacebookShare to Twitter Raleigh-Durham, NC (7/10) reports that BenchBot 3.0, a robot at NC State, is automating plant species recognition using AI. NC State engineers Mark Funderburk and Lirong Xiang highlight its potential to create detailed field maps for farmers. Xiang envisions a future with fully autonomous, intelligent agriculture systems.

Analysis: AI Technology May Help Patients With Cancer Deal With Emotional Ramifications

An analysis in TIME Share to FacebookShare to Twitter (7/10, Esteva) discusses how AI technology may help patients with cancer address the emotional toll that comes from being diagnosed with the illness. Some have argued that current AI-enabled tests “can swiftly analyze real-world data and translate it into digestible and personalized insights, allowing for a more personalized approach to cancer therapy.” They also argue that integrating AI into the patient treatment process “not only places the patient at the center but also provides more clarity throughout the cancer treatment journey.”

Microsoft, Apple Abandon OpenAI Board Role Plans

Bloomberg Share to FacebookShare to Twitter (7/10, Grant, Subscription Publication) reports Microsoft Corp. and Apple Inc. have decided not to assume board roles at OpenAI, underscoring rising regulatory scrutiny over Big Tech’s influence when it comes to AI. Microsoft will exit from its observer function on the board, according to a letter to OpenAI which Bloomberg saw. While Apple was slated to assume a comparable role, a spokesperson for OpenAI said the company will have no board observers following Microsoft’s exit.

Google No Longer Maintains “Operational Carbon Neutrality” Due To AI

Fortune Share to FacebookShare to Twitter (7/10, Roytburg) reports Google’s recent sustainability report reveals that since 2023, Google has no longer “maintained operational carbon neutrality.” Since 2007, the company has purchased “enough clean-energy supply to match the bulk of the emissions it generates through its data centers and buildings,” but “increasing energy demands from the greater intensity of AI compute” have left the company unable to keep up.

How Principals Are Using AI To Manage Administrative Tasks

Education Week Share to FacebookShare to Twitter (7/10, Banerji) reports school leaders are increasingly using AI tools to manage administrative tasks. Michael Martin, principal of Buckeye High School in Ohio, “has tinkered with ChatGPT and similar services since they launched, and through this experimentation, curated a suite of AI-based tools” to handle tasks such as summarizing emails and scheduling appointments. This helps Martin “complete his ‘algorithmic’ or administrative tasks more quickly so that he gets more time to build relationships with teachers and students.” However, he notes AI’s limitations, like generating non-existent research citations. The superintendent of the Pearl school district in Mississippi “has created a number of smaller chatbots, which ensconce all the information about a particular topic on one platform for school leaders and teachers.”

How AI Tools Can Enhance Classroom Teaching

The Hechinger Report Share to FacebookShare to Twitter (7/10, Berdik) reports that a science teacher at Ron Clark Academy in Atlanta, Georgia, utilizes a voice-activated AI assistant enhance classroom engagement and facilitate lesson delivery. This voice-activated assistant “is the brainchild of computer scientist Satya Nitta, who founded a company called Merlyn Mind,” and it helps teachers navigate digital materials while interacting with students. Launched in November 2022, AI tools like ChatGPT, Khanmigo, and others are increasingly used in education to assist with tasks such as generating quizzes and providing feedback. Among experts, the debate is “about the best mix – what are AI’s most effective roles in helping students learn, and what aspects of teaching should remain indelibly human no matter how powerful AI becomes?”

Journalists Sue OpenAI, Microsoft Over Copyright Claims

The AP Share to FacebookShare to Twitter (7/11) reports veteran journalists Nicholas Gage and Nicholas Basbanes have filed a lawsuit against OpenAI and Microsoft, alleging that the companies’ AI chatbots have “systematically pilfered” their copyrighted work. The lawsuit, now part of a broader class-action case, includes prominent writers like John Grisham and George R. R. Martin. Gage and Basbanes argue that OpenAI, with Microsoft’s support, used vast amounts of human writings without permission or compensation. Microsoft AI Chief Executive Mustafa Suleyman defended the practice under the “fair use” doctrine. Gage emphasized the importance of protecting writers’ intellectual property, saying, “It’s highway robbery.” The case is still in discovery and expected to continue into 2025.

AI Strains Energy Grid, Raises Emissions

The New York Times Share to FacebookShare to Twitter (7/11) reports that the increasing energy demands of data centers driven by artificial intelligence are straining the electricity grid in some regions, leading to higher emissions and hindering the energy transition. Bill Gates, during a media briefing in London, acknowledged the additional load but remained optimistic, stating, “Let’s not go overboard on this,” and predicting that AI would ultimately enhance efficiency to offset the extra demand. Despite Gates’ positive outlook, the article notes that AI continues to significantly impact global energy consumption and emissions.

dtau...@gmail.com

unread,
Jul 21, 2024, 7:35:25 PM7/21/24
to ai-b...@googlegroups.com

Hong Kong Is Testing Its Own ChatGPT-Style Tool

A team of researchers led by the Hong Kong University of Science and Technology has developed a ChatGPT-like tool for the city's employees. Secretary for Innovation, Technology and Industry Sun Dong said the tool, dubbed "document editing co-pilot application for civil servants," is being tested by his bureau before being rolled out government-wide later this year.
[ » Read full article ]

Associated Press; Kanis Leung (July 16, 2024)

 

Hackers Claim Leak of Internal Disney Slack Messages over AI Concerns

Activist hacking group Nullbulge claimed it leaked thousands of Disney’s internal Slack messaging channels, which included information about unreleased projects, raw images, computer codes, and log-ins. The group said it leaked about 1.2 terabytes of information and that it wants to protect artists’ rights and compensation for their work, especially in the age of AI.
[ » Read full article ]

CNN; Ramishah Maruf (July 15, 2024)

 

Epileptic Patient 'Speaks' Using Power of Thought

Researchers at Israel's Tel Aviv University (TAU) and Tel Aviv Sourasky Medical Center demonstrated the ability of a patient with epilepsy who had depth electrodes implanted in his brain to communicate solely using the power of thought. The implants transmitted electrical signals from the patient's brain to a computer trained using deep learning and machine learning, and spoke the transmitted syllables aloud.
[ » Read full article ]

Medical Xpress (July 16, 2024)

 

Bayer, Others Turn to AI to Conquer Superweeds

Big agriculture companies like Bayer and Syngenta are using AI to accelerate the process of developing new herbicides, fungicides, and insecticides. Syngenta estimated AI could drop the average time from discovery to commercialization from 15 years to 10 years. Bayer's CropKey AI system, for instance, can analyze data more quickly than humans to identify chemical molecules that target a weed's protein structure.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Patrick Thomas (July 17, 2024)

 

Universities Don't Want AI Research to Leave Them Behind

To remain relevant in the field of generative AI, universities are shifting their research focus to areas of AI that are less computing-power-intensive. At the same time, academic institutions are looking to build out their computing resources or engage in resource-sharing with other universities. Meanwhile, universities in tech hubs like Silicon Valley, Boston, the Pacific Northwest, and Austin are forging partnerships with industry players.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (July 12, 2024)

 

OpenAI Whistleblowers Allege Company Restricted Employees From Alerting SEC To AI Risks

The Washington Post Share to FacebookShare to Twitter (7/13, Verma, Zakrzewski, Tiku) reports OpenAI whistleblowers “have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.” In a letter, the whistleblowers claim OpenAI “issued its employees overly restrictive employment, severance and nondisclosure agreements that could have led to penalties against workers who raised concerns about OpenAI to federal regulators.” They argue that the “overly broad agreements violated long-standing federal laws and regulations meant to protect whistleblowers who wish to reveal damning information about their company anonymously and without fear of retaliation.”

Amazon’s AI Talent Deal With Adept Raises Antitrust Concerns

The AP Share to FacebookShare to Twitter (7/12, O'Brien, Parvini) reports Amazon has secured a deal with AI startup Adept to employ its CEO and key staff, while also licensing Adept’s AI systems and datasets. This move, described by some as a “reverse acqui-hire,” has raised concerns among US lawmakers about potential anti-competitive practices. US Sen. Ron Wyden (D-OR) stated, “I’m very concerned about the massive consolidation that’s going on in AI.” Wyden has urged antitrust regulators to investigate the deal, highlighting a growing trend of tech giants acquiring talent without formal acquisitions to avoid regulatory scrutiny.

Democratic Senators Scrutinize Big Tech Strategies For Recruiting AI Talent From Small Startups

The AP Share to FacebookShare to Twitter (7/13, O'Brien, Parvini) reports Sens. Elizabeth Warren (D-MA), Peter Welch (D-VT), and Ron Wyden (D-OR) are calling for an investigation into Big Tech companies’ efforts to poach top AI talent from smaller firms. The lawmakers are specifically focused on “‘acqui-hires,’ in which one company acquires another to absorb talent, have been common in the tech industry for decades.” In letter Friday, the lawmakers argued “antitrust enforcers at the Justice Department and the Federal Trade Commission that ‘sustained, pointed action is necessary to fight undue consolidation across the industry.’”

Academic Researchers Working To Increase Equity In AI Technology

Inside Higher Ed Share to FacebookShare to Twitter (7/15, Palmer) reports academic researchers “know that artificial intelligence (AI) technology has the potential to revolutionize the technical aspects of nearly every industry,” but they have “limited access to the expensive, powerful technology required for AI research.” The divide has scholars and “other government-funded researchers concerned that the developments emerging from the AI Gold Rush could leave marginalized populations behind.” Removing inherent biases in generative AI “is one of the overarching goals of the National Artificial Intelligence Research Resource pilot (NAIRR), which the National Science Foundation (NSF) helped launch in January.” Through the two-year pilot, “so far 77 projects – the majority of which are affiliated with universities – have received an allocation of computing and data resources and services, including remote access to Summit and other publicly funded supercomputers.”

Google Awards Grants To Black, Latino Founders Who Use AI

Forbes Share to FacebookShare to Twitter (7/15, Alexander) reports Google for Startups “recently announced its Black Founders Fund and Latino Founders Fund had together awarded grants to a combined 20 startups.” Each startup, which incorporates artificial intelligence in its business model, “is receiving a $150,000 non-dilutive cash award (in other words, Google gets no equity in return for its money) and $100,000 in Google Cloud credits.” Additionally, recipients will gain “access to mental health resources and mentorship from Google experts in AI and sales.” This initiative comes amid a significant decline in venture capital funding for Black-led startups, which dropped 71% in 2023. One recipient “uses AI to personalize online makeup shopping and was recognized by Forbes on the 2023 30 Under 30 list.”

AI Boom Increases Energy Demand

Fast Company Share to FacebookShare to Twitter (7/14) reports that the rise of artificial intelligence has significantly increased energy consumption in tech companies. Large language models like ChatGPT require much more energy than traditional queries, leading to higher carbon emissions. This surge in energy demand is pressuring the electrical grid and prompting energy companies to consider options like restarting dormant nuclear reactors. Data centers are exploring more efficient cooling methods and flexible computing to manage energy use. The industry faces challenges in balancing growth with sustainability and grid stability.

California State Bill On AI Regulation Sparks Debate

Fortune Share to FacebookShare to Twitter (7/15, Goldman) reports, “A California state bill has emerged as a flashpoint between those who think AI should be regulated to ensure its safety and those who see regulation as potentially stifling innovation.” The bill, which will head to a “final vote in August, is sparking fiery debate and frantic pushback among leaders from across the AI industry – even from some companies and AI leaders who had previously called for the sector to be regulated.”

New Hampshire Schools Adopt AI Tutoring Program To Enhance Classroom Learning

New Hampshire Bulletin Share to FacebookShare to Twitter (7/15) reports that New Hampshire schools will introduce Khanmigo, an AI-driven educational tool by Khan Academy, in the upcoming school year. This program, which aims to provide personalized learning amidst teacher shortages, allows students to “pose any question they like” to literary characters, historical figures, and receive tutoring in various subjects. After the Executive Council “approved a $2.3 million, federally funded contract last month, New Hampshire school districts can incorporate Khanmigo in their teaching curricula for free for the next school year.” To some educators and administrators, “the program offers glittering potential,” while others raise concerns about AI accuracy and bias. Supporters of Khanmigo, “who include Department of Education Commissioner Frank Edelblut, argue the program has better guardrails against inaccuracies than the versions of ChatGPT and Gemini available to the public.”

Carnegie Mellon University Professor Advocates AI For Constructive Student Debates

Inside Higher Ed Share to FacebookShare to Twitter (7/16, Quinn) reports that to “help students sharpen their ideas,” Simon Cullen, an assistant professor at Carnegie Mellon University “who’s also an artificial intelligence and education fellow at the university, has required them to argue with an AI chat bot called Robocrates that he helped create.” In addition to his Dangerous Ideas course, Cullen and a postdoctoral scholar are developing another AI program, Sway, “that digitally matches students with those they disagree with on issues such as abortion.” Cullen emphasizes the importance of debating to form robust opinions, despite the fear of peer judgment. Next month, Cullen and the postdoctoral scholar “will offer faculty members and administrators outside of Carnegie Mellon the chance to use the program for the first time.”

FTC Requests Details On Amazon’s AI Hiring Deal

Reuters Share to FacebookShare to Twitter (7/16, Hu, Bensinger, Godoy) reports the US Federal Trade Commission (FTC) has asked Amazon to provide more details on its deal to hire top executives and researchers from AI startup Adept. The inquiry, which reflects the FTC’s growing concern about AI deals, follows Amazon’s announcement that Adept Chief Executive David Luan and others were joining Amazon. Luan now runs the “AGI Autonomy” team under Rohit Prasad.

        CNBC Share to FacebookShare to Twitter (7/16, Palmer) reports the FTC issued a report in April warning that partnerships like those between Microsoft and Inflection AI, and Amazon and AI startup Anthropic, may allow companies to “shape these markets in their own interests.” Lawmakers, including Sen. Ron Wyden (D-OR), have cited Amazon’s deal with Adept as an example of tech companies making acquihires to avoid antitrust scrutiny. As part of the agreement announced last month, Amazon hired Adept co-founder and CEO Luan and other team members, and licensed Adept’s technology, multimodal models, and datasets.

Trump Allies Reportedly Formulating New Executive Order Loosening Restrictions, Regulations On AI For Defense Purposes

The Washington Post Share to FacebookShare to Twitter (7/16) reports that former president Trump’s allies “are drafting a sweeping AI executive order that would launch a series of ‘Manhattan Projects’ to develop military technology and immediately review ‘unnecessary and burdensome regulations’ – signaling how a potential second Trump administration may pursue AI policies favorable to Silicon Valley investors and companies.” The framework “would also create ‘industry-led’ agencies to evaluate AI models and secure systems from foreign adversaries, according to a copy of the document viewed exclusively by The Washington Post.” The framework “presents a markedly different strategy for the booming sector than that of the Biden administration, which last year issued a sweeping executive order that leverages emergency powers to subject the next generation of AI systems to safety testing.”

Idaho Colleges Grappling With Generative AI Management

Idaho Capital Sun Share to FacebookShare to Twitter (7/17, Draisey) reports this year, Idaho lawmakers “passed two laws restricting the use of AI,” while state colleges and universities are now “grappling with the new technology and its implications for how students learn and perform in the classroom.” Although the introduction of AI “brings different approaches,” its rise in academic settings “also comes with ethical challenges. Educators must balance the educational benefits while also maintaining academic integrity in their classrooms.” For example, the University of Idaho “uses a structured approach with tools like Turnitin, which checks for plagiarism and AI-generated content, and Zero GPT, a specialized AI text detection service.” Boise State University “chooses not to use AI detection tools and instead relies on faculty judgment.” Similarly, Lewis-Clark State College and the College of Western Idaho “have also chosen to not use AI detection tools.”

Meta Pauses AI Model Release In EU Over Regulatory Issues

Axios Share to FacebookShare to Twitter (7/17) reports Meta will withhold its next multimodal AI model from the European Union due to regulatory uncertainties. Meta said, “We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment.” Meta plans to incorporate these models into various products, including smartphones and Meta Ray-Ban smart glasses. The decision also affects European companies’ access to these models, despite their open license. Meta’s issue centers on compliance with GDPR for training models using European data. The UK, with similar laws, will still receive the new model. Meta emphasizes that training on European data is crucial for regional relevance, noting competitors like Google and OpenAI already do so.

AI Enhances Cybersecurity Amid Rising Cybercrime

Entrepreneur Magazine Share to FacebookShare to Twitter (7/17, Wong) reports cybercrime has surged globally, causing over $12 billion in damages in the past decade. AI now plays a crucial role in both perpetrating and combating cyber threats. Chief information security officers leverage AI technologies like machine learning to detect anomalies and prevent damage. Amazon GuardDuty, an AI-based threat detector, protects AWS accounts by analyzing data and automating threat remediation. IBM Watson for Cybersecurity also uses AI to detect threats from various sources. Despite advancements, challenges remain, including securing generative AI projects. Case studies of Andritz AG and United Family Healthcare illustrate successful AI-based cybersecurity implementations. As generative AI use expands, the need for robust cybersecurity will grow, necessitating advancements in AI-based protection.

AI’s Role In College Admissions Sparks Ethical Debate

Education Week Share to FacebookShare to Twitter (7/18, Klein) reports the use of AI tools in college admissions essays has become a contentious issue. Research by foundry10 reveals that about “a third of high school seniors who applied to college in the 2023-24 school year acknowledged using an AI tool for help in writing admissions essays,” with some relying on it for final drafts. Jennifer Rubin, a researcher at foundry10, highlights the “ethical gray area that students and [high school] counselors don’t have any guidance” on how to navigate. While some institutions like CalTech and Georgia Polytechnical Institute permit limited AI use, others like Brown University prohibit it entirely. Rubin “said, because there’s no way to check on what kind of assistance an applicant received, human or not.”

OpenAI To Introduce GPT-4o Mini

Bloomberg Share to FacebookShare to Twitter (7/18, Subscription Publication) reports that OpenAI is introducing GPT-4o mini, “a more affordable, slimmed-down version of its flagship artificial intelligence model to appeal to a wider range of developers and business customers in an increasingly crowded market for AI services.” The startup “said the updated model will be available [Thursday] for free users and paying ChatGPT Plus and Team subscribers, and will be offered to enterprise customers next week.”

dtau...@gmail.com

unread,
Jul 27, 2024, 8:56:15 AM7/27/24
to ai-b...@googlegroups.com

Google DeepMind Takes Step Closer to Cracking Top-Level Math

Google DeepMind's AlphaProof and AlphaGeometry 2 systems partnered to tackle questions from the International Mathematical Olympiad, a global math competition for secondary-school students, together earning a silver medal. In each of the questions they successfully answered, the systems scored perfect marks, but for two out of the six questions, they were unable to even begin working towards an answer.
[
» Read full article ]

The Guardian (U.K.); Alex Hern (July 25, 2024)

 

Model Helps LLMs Better Understand Spreadsheets

Microsoft researchers developed SpreadsheetLLM to encode spreadsheet contents into a format accessible to large language models (LLMs). The experimental model uses an encoding mechanism to compress spreadsheet data 96% while preserving the data structure and relationships, enabling LLMs to handle large datasets while minimizing token usage. Said Constellation Research Inc. analyst Holger Mueller, “If Microsoft can nail this properly, it will not only secure the future of Excel, but change the future of work as we know it.”
[ » Read full article ]

Silicon Angle; Mike Wheatley (July 15, 2024)

 

AI Gives Voice Back to U.S. Rep. Wexton

U.S. Rep. Jennifer Wexton (D-VA) regained the voice she lost due to progressive supranuclear palsy with the help of an AI voice-cloning program from ElevenLabs. On Thursday, Wexton delivered the first-ever speech made on the House floor using an AI-cloned voice. The program lets Wexton type her thoughts into an iPad, which speaks the text aloud in her own voice. Wexton said AI voice-cloning technology "is humanizing, and it is empowering."
[
» Read full article ]

Associated Press; Dan Merica (July 25, 2024)

 

Push to Develop Generative AI, Without All the Lawsuits

Getty Images and Shutterstock are among the stock image companies using their own data to develop AI image generators to avoid the lawsuits plaguing Google, OpenAI, and other companies that scraped content from the Web when building their image generators and AI chatbots. Getty has partnered with Picsart on an AI image model and with Nvidia on an image generator and has provided images for an AI model being developed by Israeli startup Bria AI. Shutterstock is working on AI models with Databricks and Nvidia.


[
» Read full article *May Require Paid Registration ]

The New York Times; Nico Grant; Cade Metz (July 22, 2024)

 

AI, Needing Copper, Helps to Find It

KoBold Metals announced on July 18 that its AI technology had identified a copper lode a mile underground in Zambia, which is said to be the largest copper discovery in more than a decade. KoBold estimated the mine, when fully operational, would generate no less than 300,000 tons of copper annually over a period of decades. The findings are significant, given the vast amounts of copper needed by AI datacenters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Max Bearak (July 18, 2024)

 

Artists Protect Their Work from Gen AI

University of Chicago researchers are helping artists protect their work from being included in generative AI training models. The Glaze tool they developed implements subtle changes that trick the AI into detecting a different art style, while their Nightshade tool confuses AI training models about what is in an image. However, Pennsylvania State University's Jinghui Chen cautioned, "When AI becomes stronger and stronger, these anti-AI tools will become weaker and weaker."
[ » Read full article ]

Associated Press; Sarah Parvini (July 18, 2024)

 

AI Brought 11,000 College Football Players to Digital Life

Electronic Arts (EA) developers used AI for the first time in the making of its newly released college football video game. It took them three months to incorporate the likenesses of around 11,000 players into EA Sports College Football 25. The process involved gathering the athletes' headshots, then using AI technology to create full 3D avatars of each. Artists were used to improve the digital versions as necessary, feeding the changes into the AI program to help it learn from its mistakes.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Sarah E. Needleman (July 21, 2024)

 

Tech Industry Forms Coalition For Secure AI To Establish Security Standards

Axios Share to FacebookShare to Twitter (7/18, Sabin) reported Google announced the formation of the new Coalition For Secure AI at the Aspen Security Forum taking place in Colorado this week. The new coalition, whose founding members include PayPal, Microsoft and Amazon, will begin “its work by developing standards for software supply chain security for AI systems, compiling resources to measure the risk of these tools and pulling together a framework to help defenders determine the best use cases for AI in their work.”

WPost Report: California Now “Ground Zero” In AI Regulatory Battle

The Washington Post Share to FacebookShare to Twitter (7/19, De Vynck, Zakrzewski, Tiku) reports California legislators are “debating a proposal that would force the biggest and best-funded companies to test their AI for ‘catastrophic’ risks before releasing it to the public,” making the state “ground zero for the battle over government regulation of AI.” The measure “is also shedding light on the limits of Silicon Valley’s enthusiasm for government oversight, even as key leaders such as OpenAI CEO Sam Altman publicly urge policymakers to act,” as some experts say that “by mandating previously voluntary commitments, Wiener’s bill has gone further than tech leaders are willing to accept.”

AI-Powered Machines Transform Agriculture

The Los Angeles Times Share to FacebookShare to Twitter (7/22, Smith) reports that nearly 200 farmers, academics, and engineers gathered in Salinas to witness AI-powered agricultural machines. Devices like Carbon Robotics’ LaserWeeder use AI to identify and eliminate weeds with lasers, reducing reliance on chemical herbicides. The shift addresses health risks associated with traditional pesticides, such as paraquat, dacthal, and glyphosate. However, concerns arise over potential job losses in California’s agriculture sector. Experts highlight the environmental benefits and the need for new labor solutions.

AI Chatbots Struggle With Breaking Political News

The Washington Post Share to FacebookShare to Twitter (7/22, Kelly) reports that AI chatbots failed to keep up with recent political developments, including President Biden’s withdrawal from the 2024 race and the shooting at former President Trump’s rally in Pennsylvania. Microsoft’s Copilot redirected users to Bing for election-related queries, while Google’s Gemini and Meta AI also faced challenges. Jevin West from the University of Washington emphasized the need for reliable news sources over AI bots for current events.

OpenAI Develops New AI Transparency Technique

Insider Share to FacebookShare to Twitter (7/20, Varanasi) reports that OpenAI has introduced a new method for enhancing AI model transparency by having them communicate with each other. This approach, showcased this week, aims to make more powerful AI models explain their reasoning processes. OpenAI tested the technique with math problems, where one model explained its solutions and another checked for errors. This initiative aligns with OpenAI’s mission to create safe and beneficial artificial general intelligence. The company has faced recent internal challenges, including key departures from its safety department, raising concerns about its commitment to AI safety.

Survey: Most Graduates Believe AI Should Be Taught In College

Inside Higher Ed Share to FacebookShare to Twitter (7/23, Coffey) reports most college graduates “believe generative artificial intelligence tools should be incorporated into college classrooms, with more than half saying they felt unprepared for the workforce, according to a new survey from Cengage Group, an education-technology company.” The newly released survey “found that 70 percent of graduates believe basic generative AI training should be integrated into courses; 55 percent said their degree programs did not prepare them to use the new technology tools in the workforce.” The share of respondents who “expressed uneasiness about their facility with generative AI varied by age; 61 percent of Generation Z graduates...said they felt unprepared, compared to 48 percent of millennials (28 to 43 years old), 60 percent of Gen Xers and 50 percent of baby boomers.” Cengage Group polled recent graduates “from two- and four-year institutions, as well as those who received skills certificates in the last year.”

        Forbes Share to FacebookShare to Twitter (7/23, T. Nietzel) reports the 2024 Cengage Group Employability Report “is based on surveys of 1,000 U.S. employers and 974 recent graduates.” The survey also showed “a growing recognition among graduates about the importance of post-secondary education for career success. Two-thirds (68%) believe their education has positioned them for success in the current job market.” The rise of AI and “other technologies also has recent graduates worried about their career choices, with more than 39% fearing that generative AI could replace them entirely. Employers reinforced this view with more than half (58%) saying they were more likely to interview and hire applicants with AI experience.” Michael Hansen, CEO of Cengage Group, said in a news release, “The data supports the growing need for institutions to integrate GenAI training and professional skills development.”

Meta Announces Llama 3.1

CNBC Share to FacebookShare to Twitter (7/23, Leswing, Vanian) reports Meta announced version 3.1 of the Llama AI model. The latest update is available “in three different versions, with one variant being the biggest and most capable AI model from Meta to date.” Llama continues to be available as an open-source platform. CNBC notes Meta believes that by making technology like Llama open-source, Meta “can attract high-quality talent in a competitive market and lower its overall computing infrastructure costs, among other benefits.”

AI Companies Review Voluntary Commitments

MIT Technology Review Share to FacebookShare to Twitter (7/22) reports that seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have reviewed their progress on voluntary commitments made with the White House to develop safe AI. The commitments include improving testing and transparency, sharing risk information, and enhancing cybersecurity. While companies have made strides in areas like red-teaming and watermarking, experts note that more work is needed for comprehensive governance and protection of rights. The White House continues to push for bipartisan legislation to enforce these commitments, emphasizing the need for ongoing industry cooperation and regulatory oversight.

Group Of Senators Demand OpenAI Turn Over Safety Data

The Washington Post Share to FacebookShare to Twitter (7/23, Verma, Zakrzewski, Tiku) reports a coalition of five Democratic-leaning senators “demanded in a Monday letter that OpenAI turn over data about its efforts to build safe and secure artificial intelligence, following employee warnings that the company rushed through safety-testing of its latest AI model, which were detailed in The Washington Post earlier this month.” The group, led by Sen. Brian Schatz (D-HI), “asked OpenAI’s chief executive Sam Altman to outline how the ChatGPT-maker plans to meet ‘public commitments’ to ensure its AI does not cause harm, such as teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks, in the letter obtained exclusively by The Post.” The senators “also asked the company for information about employee agreements, which could have muzzled workers who wished to alert regulators to risks.”

OpenAI Reassigns Safety Executive

CNBC Share to FacebookShare to Twitter (7/23, Field) reports that OpenAI has reassigned Aleksander Madry, previously head of preparedness, to a role focused on AI reasoning. This change occurred shortly before Democratic senators sent a letter to CEO Sam Altman seeking information on OpenAI’s safety practices. Madry will continue to work on core AI safety efforts in his new position. OpenAI has faced increasing scrutiny over safety concerns, including antitrust investigations by the FTC and the Department of Justice. The company’s safety culture has been criticized by former employees, leading to leadership changes and team disbandments.

        Musk, Zuckerberg Criticize OpenAI’s Name. Insider Share to FacebookShare to Twitter (7/24) reports that both Mark Zuckerberg and Elon Musk have criticized OpenAI for being a “closed” AI model despite its name. Zuckerberg pointed out the irony of the name while Musk, a co-founder of OpenAI, reiterated his discontent with the company’s direction. Musk noted that OpenAI was intended to be an open-source, non-profit counterweight to Google but has become a closed, profit-driven entity controlled by Microsoft. Despite the criticism, Zuckerberg praised OpenAI CEO Sam Altman for his leadership under public scrutiny.

OpenAI Introduces SearchGPT To Challenge Google’s Search Dominance

The Wall Street Journal Share to FacebookShare to Twitter (7/25, Subscription Publication) reports that OpenAI launched a test version Thursday of SearchGPT, a search engine that cites sources from partners like News Corp and the Atlantic. SearchGPT summarizes information and allows follow-up questions, linking sources at the end of answers. The Guardian (UK) Share to FacebookShare to Twitter (7/25, Robins Early) reports the AI-driven platform produces results conversationally and offers up-to-date information with the ability to search the internet. OpenAI plans to integrate the search features into an existing model, ChatGPT, rather than create a separate product. The innovation positions OpenAI as a possible contender against major market players like Google. However, the company may face pushback from publishers over potential copyright violations, echoing challenges that have been previously levied at the company.

        OpenAI Projected To Lose Up To $5 Billion In 2024. The Times of India Share to FacebookShare to Twitter (7/25) reports that according to a report in The Information Share to FacebookShare to Twitter (7/25, Subscription Publication), OpenAI is projected to lose up to $5 billion in 2024, potentially depleting its cash reserves within a year. The company’s spending on training and inference is expected to reach $7 billion this year, including nearly $4 billion on Microsoft’s servers. Despite generating around $2 billion annually from ChatGPT and additional revenue from LLM access fees, OpenAI’s total revenue falls short, necessitating fresh funding. CEO Sam Altman remains focused on developing artificial general intelligence despite financial strains. OpenAI has raised over $11 billion but may need more to sustain its research and development efforts.

Google DeepMind Unveils Advanced AI Math Models

Bloomberg Share to FacebookShare to Twitter (7/25, Subscription Publication) reports that Google DeepMind announced on Thursday the launch of AlphaProof and AlphaGeometry 2, advanced models specializing in math reasoning and geometry, respectively. These models successfully solved four of six problems from the International Mathematical Olympiad. David Silver, Google DeepMind’s vice president of reinforcement learning, stated that while these AI models are powerful computational tools, they are not yet capable of replacing human mathematicians. Google’s approach involves translating math problems into technical statements to avoid inaccuracies common in AI-generated responses.

Meta And Alphabet CEOs Express AI “Overinvestment” Concerns

CNBC Share to FacebookShare to Twitter (7/25, Leswing) reports that Meta CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai have voiced concerns about potential overinvestment in AI infrastructure. Zuckerberg, speaking on a podcast, highlighted the risk of overspending, while Pichai emphasized the greater risk of underinvesting. Despite these concerns, major tech companies like Microsoft, Amazon, and Tesla continue to heavily invest in AI, driving Nvidia’s revenue growth. Nvidia, which supplies GPUs for AI, has seen its shares rise significantly. Analysts and investors are closely monitoring the return on these investments amid competitive pressures.

Meta Criticized For Handling Of AI-Generated Deepfakes

CNN Share to FacebookShare to Twitter (7/25, Duffy) reports that Meta “failed to remove an explicit, AI-generated image of an Indian public figure until it was questioned by its Oversight Board.” The board’s report, released Thursday, “suggests that Meta is not consistently enforcing its rules against non-consensual sexual imagery, even as advancements in artificial intelligence have made this form of harassment increasingly common.” The report is “the result of an investigation the Meta Oversight Board announced in April into Meta’s handling of deepfake pornography, including two specific instances where explicit images were posted of an American public figure and an Indian public figure.” While the image of the American figure was swiftly removed, the Indian figure’s image remained despite being reported twice. The Oversight Board “urged the company to make its rules clearer by updating its prohibition against ‘derogatory sexualized photoshop’ to specifically include the word ‘non-consensual’ and to clearly cover other photo manipulation techniques such as AI.” Meta welcomed the board’s decision and pledged to take further action.

Musk To Discuss $5 Billion Investment In xAI

Reuters Share to FacebookShare to Twitter (7/25) reports that Tesla CEO Elon Musk announced on Thursday that he and the board will consider a $5 billion investment in his AI startup xAI, raising conflict of interest concerns. Musk launched a poll on social media platform X, where over two-thirds of nearly 1 million respondents supported the investment. This follows Tesla’s second-quarter results missing Wall Street estimates. Musk noted that xAI could aid in advancing Tesla’s self-driving technology and data centers. However, experts like Brent Goldfarb are skeptical, citing potential risks to Tesla shareholders.

Khan Promotes Open-Weight AI Models As FTC Seeks To “Open Up” Industry

Bloomberg Share to FacebookShare to Twitter (7/25, Subscription Publication) reports FTC Chair Lina Khan said at a Y Combinator event on Thursday that “open artificial intelligence models that allow developers to customize them with few restrictions are more likely to promote competition,” saying, “Open-weight models can liberate startups from the arbitrary whims of closed developers and cloud gatekeepers.” Khan further “said that the agency has heard complaints about dominant companies ‘monopolizing access to great talent, to critical inputs and to valuable data,’” adding, “The FTC is doing our part to be vigilant and to open up the market.” Bloomberg notes that “critics have warned that open models carry an increased risk of abuse and could potentially allow...geopolitical rivals like China to piggyback off the technology.”

dtau...@gmail.com

unread,
Aug 3, 2024, 11:06:22 AM8/3/24
to ai-b...@googlegroups.com

Meta's AI Safety System Defeated by Space Bar

Meta last week unveiled Prompt-Guard-86M alongside its Llama 3.1 generative AI model, to detect prompt injection attacks. However, Robust Intelligence researcher Aman Priyanshu found the Prompt-Guard-86M classifier model is itself vulnerable to prompt injection attacks. Priyanshu explained adding spaces between the letters of a given prompt and leaving out punctuation "effectively renders the classifier unable to detect potentially harmful content."
[ » Read full article ]

The Register (U.K.); Thomas Claburn (July 29, 2024)

 

AI Snoops on HDMI Cables to Capture Screen Data

An AI model developed by researchers at Uruguay's University of the Republic can reconstruct digital signals by intercepting electromagnetic radiation leaked from the HDMI cable that connects a computer and monitor. This would allow hackers to view a user's computer screen as they enter encrypted messages or personal information. Said the university’s Federico Larroca, “If you really care about your security, whatever your reasons are, this could be a problem.”
[ » Read full article ]

Tom's Hardware; Jeff Butts (July 28, 2024)

 

Hackers Vie for Millions in Contest to Thwart Cyberattacks

About 40 contestants are vying for a $2-million prize in a contest sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA) to come up with an autonomous program capable of scanning lines of open-source code, identifying security flaws, and repairing them. The AIxCC challenge aims to harness AI to counter a lack of skilled engineers to catch poorly maintained open-source software.
[ » Read full article ]

The Washington Post; Joseph Menn (July 27, 2024)

 

Google Works to Reduce Non-Consensual Deepfake Porn in Search

Google is changing its search engine to reduce the extent to which sexually explicit fake content ranks high in its search results. When AI-generated content features a real person’s face or body without their permission, that person can request its removal from search results. When Google decides a takedown is warranted, it now will filter all explicit results on similar searches and remove duplicate images, the company said Wednesday.
[ » Read full article ]

Bloomberg; Davey Alba; Cecilia D'Anastasio (July 31, 2024)

 

E.U. AI Rules Officially Take Effect

The European Union's AI law formally took effect on Thursday, covering any product or service offered in the bloc that uses AI. Restrictions are based on four levels of risk, with the vast majority of systems expected to fall under the low-risk category, such as content recommendation systems or spam filters. The provisions will come into force in stages, and Thursday’s implementation date starts the countdown for when they’ll kick in over the next few years.
[ » Read full article ]

Associated Press; Kelvin Chan (August 1, 2024)

 

U.S. Says No Need to Restrict 'Open-Source' AI, for Now

A report released Tuesday by the U.S. Department of Commerce's National Telecommunications and Information Administration (NTIA) said there is no pressing need for restrictions on "open-source" AI systems. However, the report said the U.S. government should continue to monitor the technology and be "prepared to act if heightened risks emerge." NTIA Administrator Alan Davidson said, "We continue to have concerns about AI safety, but this report reflects a more balanced view that shows that there are real benefits in the openness of these technologies."
[ » Read full article ]

Associated Press; Matt O'Brien (July 30, 2024)

 

NeuralGCM Slashes Computer Power Needed for Weather Forecasts

An AI model developed by Google researchers performs as well as current physics models in forecasting weather and climate patterns but uses less computing power. The NeuralGCM model uses a single tensor processing unit (TPU) to process 70,000 days of simulation in 24 hours; a competing X-SHiELD model needs a supercomputer equipped with thousands of TPUs to process 19 days of simulation.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (July 22, 2024)

 

Japan Supermarket Chain Uses AI to Standardize Staff Smiles

An AI system deployed by the Japanese supermarket chain AEON scores employees' service attitude based on more than 450 elements, including facial expressions, voice volume, and tone of greetings. The Mr. Smile system from InstaVR features game elements to encourage staff to challenge their scores by improving their service attitude. AEON said the system is intended to "standardize staff members' smiles and satisfy customers to the maximum."
[ » Read full article ]

South China Morning Post; Fran Lu (July 22, 2024)

 

Stanford Researchers Highlight AI Language Gaps

The New York Times Share to FacebookShare to Twitter (7/26, Ruberg) reports that Stanford researchers found significant flaws in AI language models when tested in Vietnamese. The chatbot Claude 3.5 by Anthropic failed to follow traditional poetic formats and provided incorrect translations for familial terms. These issues highlight the limitations of AI trained predominantly in English, potentially exacerbating technological inequities. Sang Truong, a Ph.D. candidate at Stanford, noted that delays in access to accurate AI technology could lead to significant economic delays for non-English-speaking regions. The study underscores the need for more diverse language data sets.

X Faces Scrutiny Over Data Usage For Grok Training

TechCrunch Share to FacebookShare to Twitter (7/26, Lomas) reports that X, formerly Twitter, has quietly defaulted user data into its AI training pool for Grok, leading to concerns among users and scrutiny from the Irish Data Protection Commission (DPC). The DPC, surprised by this move, has been engaging with X and awaits a response. Grok, a conversational AI developed by Elon Musk’s X, aims to rival OpenAI’s ChatGPT. The DPC, overseeing X’s GDPR compliance, expects further developments next week. X has yet to clarify the legal basis for processing European users’ data.

        Yaccarino’s Challenges At X Explored. The New York Times Share to FacebookShare to Twitter (7/27, Conger) reported that Linda Yaccarino, CEO of X, has struggled to stabilize the company’s advertising business amid Elon Musk’s unpredictable actions. Despite efforts to combat hate speech and antisemitism, Musk’s behavior, including endorsing an antisemitic theory, has undermined her work, according to the Times. X’s ad revenue has significantly declined, with documents showing that “in the second quarter of this year, X earned $114 million in revenue in the United States, a 25 percent decline from the first quarter and a 53 percent decline from the previous year.” Yaccarino remains determined but faces ongoing challenges, the Times says.

Apple Signs Onto US Plan To Address AI Risks

Reuters Share to FacebookShare to Twitter (7/26, Ayyub, Shakil) reported the White House said on Friday that Apple has signed on President Biden’s “voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI’s power is not used for destructive purposes.” Bloomberg Share to FacebookShare to Twitter (7/26, Gardner, Subscription Publication) reported the White House “announced [Apple] is joining the ranks of OpenAI Inc., Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp. and others in committing to test their AI systems for any discriminatory tendencies, security flaws or national security risks.”

Kristof: AI Risks Make It Essential For US To Maintain Lead In Technology

In his column for the New York Times Share to FacebookShare to Twitter (7/27), Nicholas Kristof discusses the dangers of artificial intelligence, warning that a RAND study has found that for less than $100,000, “it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.” Kristof argues, “All this underscores why it is essential that the United States maintain its lead in artificial intelligence. As much as we may be leery of putting our foot on the gas, this is not a competition in which it is OK to be the runner-up to China. ... Managing A.I. without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.”

Survey: Students And Professors Believe AI Will Encourage Cheating

Inside Higher Ed Share to FacebookShare to Twitter (7/29, Coffey) reports Coursera is “the latest to launch a tool for detecting the use of AI in student work.” A Wiley survey shared with Inside Higher Ed reveals that “most instructors (68 percent) believe generative AI will have a negative or ‘significantly’ negative impact on academic integrity.” The survey, which included more than 2,000 students and 850 instructors, found that 47 percent of students “said it is easier to cheat than it was last year due to the increased use of generative AI, with 35 percent pointing toward ChatGPT specifically as a reason.” Wiley’s vice president of courseware highlighted the need for open discussions about cheating and productive help-seeking methods in classrooms. The survey also showed that “a majority of professors (56 percent) said they did not think AI had an impact on cheating over the last year, but most (68 percent) did think it would have a negative impact on academic integrity in the next three years.”

Report: College Graduates Feel Unprepared For Generative AI

Higher Ed Dive Share to FacebookShare to Twitter (7/29, Moody) reports, “While the majority of college graduates say their education has readied them for success in the job market, more than half said their programs didn’t prepare them for the use of generative AI, according to a Cengage Group report released July 23.” Nearly two in three employers said candidates “should have foundational knowledge” of generative AI tools, with more than half preferring to interview and hire those with AI experience. Despite this, “nearly 3 in 5 recent graduates of 2- or 4-year degree programs said that they believed their program equipped them with needed skills for their first job,” a rise from 41 percent in 2023. Michael Hansen, CEO of Cengage Group, noted the importance of integrating AI training into education.

Professors Skeptical After Academic Publishers Partner With AI Tech Firms

The Chronicle of Higher Education Share to FacebookShare to Twitter (7/29, Dutton) reports, “Two major academic publishers, Wiley and Taylor & Francis, recently announced partnerships that will give tech companies access to academic content and other data in order to train artificial-intelligence models.” Microsoft paid Informa, “the parent company of Taylor & Francis, an initial fee of $10 million to make use of its content ‘to help improve relevance and performance of AI systems.’” Wiley completed a similar project with an undisclosed tech company and plans another next year. Academics expressed concerns on social media about intellectual-property rights and lack of compensation. Taylor & Francis told The Chronicle that detailed citation was “fundamental to the agreement,” but scholars remain skeptical.

Focus On Microsoft’s Costs Grow Of Concerns Around AI Investments

Reuters Share to FacebookShare to Twitter (7/29) reports Microsoft investors will focus on the growth of its Azure cloud service when the company reports earnings on Tuesday to see if it can offset the cost of AI investments. Azure is expected to maintain a “steady quarter-over-quarter” growth of about 31%, aligning with forecasts. However, investors seek a “bigger contribution” from AI, which previously contributed 7 percentage points to Azure’s growth. Microsoft’s capital spending likely surged 53% year-over-year to $13.64 billion, up from $10.95 billion last quarter.

        The Wall Street Journal Share to FacebookShare to Twitter (7/29, Subscription Publication) reports a quartet of technology companies is set to report earnings this week after a selloff among the “Magnificent Seven” hit the Nasdaq. Microsoft will report Tuesday, Meta on Wednesday, and Amazon and Apple on Thursday after the market closes. Amazon’s second-quarter report will reveal the impact of its AI investments on its top line, with an expected 6% sales increase to $148.7 billion.

Apple Updates Siri With Intelligence Beta

Phone Arena Share to FacebookShare to Twitter (7/29, Friedman) reports that the Apple Intelligence Beta has been introduced on the iPhone 15 Pro Max through an updated Siri. Alan Friedman, the article’s author, noted that Siri now has a more conversational tone and improved interaction capabilities, such as understanding mispronunciations and providing troubleshooting assistance. Additional features in the beta include enhanced image search in the Photos app, AI-based summaries in Mail and Messages, and advanced writing tools. Some improvements, however, will only be available in the final version of Apple Intelligence.

        Similarly, 9to5Mac Share to FacebookShare to Twitter (7/29, Miller) reports that Apple has released the first beta of iOS 18.1 to developers, featuring new Apple Intelligence tools. Available for iPhone 15 Pro and iPhone 15 Pro Max, the update includes Writing Tools, enhanced Mail and notification features, and upgrades to Photos. iPadOS 18.1 is also available for iPads with the M1 chip and newer. Developers can access Apple Intelligence by joining a waitlist in the Settings menu.

        Also reporting are SlashGear Share to FacebookShare to Twitter (7/29), MacWorld Share to FacebookShare to Twitter (7/29), 9to5Mac Share to FacebookShare to Twitter (7/29, Espósito), and the New York Post Share to FacebookShare to Twitter (7/29).

Reddit Blocks Scraping Without AI Agreement

Ars Technica Share to FacebookShare to Twitter (7/31) reports that Reddit CEO Steve Huffman is defending the decision to block companies from scraping the site without an AI agreement. Reddit updated its Robots Exclusion Protocol to prevent non-Google search engines from listing recent posts. Huffman cited the need for control over data usage, mentioning that Microsoft, Anthropic, and Perplexity have not negotiated. Reddit and Google signed a $60 million AI training deal in February. The company aims to monetize its data amidst user protests and legal debates over AI data use.

DOJ Probes Nvidia’s Acquisition Of Run: ai On Antitrust Grounds

Politico Share to FacebookShare to Twitter (8/1, Sisco) reports “relatively obscure” AI startup Run: ai “has gotten caught up in the tug-of-war between U.S. regulators and the world’s largest tech companies over whether artificial intelligence is at risk of being controlled by a handful of giants.” Sources say the Justice Department is “investigating the acquisition of the AI start-up Run: ai by semiconductor company Nvidia on antitrust grounds,” with investigators “focused on the potential for the company to build a moat around its GPUs.” Sources added that “one possible concern over the Run: ai deal is the suspicion that Nvidia may have bought the company that enables customers to do more with less compute in order to bury a technology that could curb its main profit engine.”

Report: AI In Special Education Sparks Optimism Among Teachers, Parents

Education Week Share to FacebookShare to Twitter (8/1, Langreo) reports, “Educators and parents of students with intellectual and developmental disabilities are optimistic about artificial intelligence’s potential to create more inclusive classrooms and close educational gaps between students with disabilities and those without, concludes a report from the Special Olympics Global Center for Inclusion in Education.” Released on July 22, the report is “based on a survey of 500 U.S. parents of children with intellectual or developmental disabilities, as well as 200 U.S. K-12 teachers, conducted by Stratalys Research.” Concerns include reduced human interaction and resource disparities. The report found that while more “than 7 in 10 parents and 6 in 10 teachers say AI will make education more inclusive,” skepticism remains about whether AI developers consider the needs of students with disabilities.

dtau...@gmail.com

unread,
Aug 10, 2024, 1:14:27 PM8/10/24
to ai-b...@googlegroups.com

Experts Pen Support for California's AI Safety Bill

In a letter addressed to legislative leaders in California, ACM A. M. Turing Award laureates Yoshua Bengio and Geoffrey Hinton, along with renowned professors Lawrence Lessig and Stuart Russell, voiced support for a bill that would require AI firms training large-scale models to perform rigorous safety tests to identify potentially dangerous capabilities and institute comprehensive safety measures to mitigate risks. The letter said the bill amounts to the "bare minimum for effective regulation of this technology."
[
» Read full article ]

Time; Tharin Pillay; Harry Booth (August 7, 2024)

 

AI Is Coming for India's Outsourcing Industry

India's $250-billion outsourcing industry is being forced to adapt as companies replace call centers and other low-level operations with generative AI. According to TCS' Harrick Vin, "The roles of the future will require greater levels of critical thinking, design, strategic goal setting, and creative problem-solving skills." Meanwhile, industry executives contend AI tools are giving a boost to some businesses, particularly the programming workforce.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Megha Mandavia (August 6, 2024)

 

Mainframes Find New Life in AI Era

Mainframe computers are proving their resilience with new applications in the era of AI. Banks, insurance providers, airlines, and other industries that still rely on the mainframe for high-speed data processing are now looking to apply AI to their transaction data at the hardware source, rather than in the cloud. Said IBM's Ross Mauri, “Everyone’s kind of realizing that it’s better to bring your AI to where the data is, than the data to the AI.”

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (August 6, 2024)

 

Damaged Robot Adapts to Swim

Researchers at the California Institute of Technology (Caltech) used a machine learning algorithm to teach a robot to adapt its propulsion mechanism in order to maintain its aquatic capabilities when damaged. Explains Caltech's Meredith Hooper, "The machine learning algorithm selects the top candidate trajectories based on how well they produced our desired force. The algorithm then comes up with another set of 10 trajectories inspired by the previous set."
[ » Read full article ]

Interesting Engineering; Sujita Sinha (August 1, 2024)

 

AI Sets Variable Speed Limits on U.S. Freeway

AI is being used to control variable speed limits on a 27-kilometer (16.8-mile) section of the I-24 freeway near Nashville, TN. Daniel Work at Vanderbilt University and colleagues trained an AI on historical traffic data to monitor cameras and make decisions on speed limits. The new automated system, launched in March, works accurately 98% of the time, but will occasionally call for a change in speed limit that is larger than 10 miles per hour, which violates federal law.
[ » Read full article ]

New Scientist; Matthew Sparkes (July 30, 2024)

 

Memory Tech Reduces AI Processing Energy Requirements

University of Minnesota Twin Cities researchers developed Computational Random-Access Memory (CRAM) technology which, they say, could dramatically cut the energy consumed by AI processing. With CRAM, a high-density, reconfigurable spintronic in-memory compute substrate is located within the memory cells themselves, where the data is processed. When used to perform an MNIST handwritten digit classifier task, CRAM was 2,500 times more energy-efficient and 1,700 times faster than a near-memory processing system using the 16nm technology node.
[ » Read full article ]

Tom's Hardware; Jeff Butts (July 29, 2024)

 

Smartphone Flaw Reveals Floor Plans

A security flaw found in smartphones can be used to create a map of the room users are in and determine what they are doing. The vulnerability, discovered by researchers at the Indian Institute of Technology Delhi, uses data in the GPS signal. The researchers created an AI-based system called AndroCon that interpreted the metrics provided by this data from five types of Android smartphones.
[
» Read full article ]

New Scientist; Matthew Sparkes (August 8, 2024)

 

New Technique Aims to Tamperproof AI Models

Wired Share to FacebookShare to Twitter (8/2, Nast) reports that researchers from the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the Center for AI Safety have developed a technique to make it harder to remove safety restrictions from open source AI models like Meta’s Llama 3. The method involves altering the model’s parameters to prevent it from responding to harmful prompts. Mantas Mazeika, a researcher involved in the project, said the goal is to deter adversaries by increasing the cost of decensoring models. The technique aims to enhance tamper-resistant safeguards as open source AI models grow in popularity.

NYTimes Report: China Skirts US Restrictions On AI Chip Exports

The New York Times Share to FacebookShare to Twitter (8/4, Swanson, Fu) said it “found an active trade in restricted A.I. technology – part of a global effort to help China circumvent U.S. restrictions amid the countries’ growing military rivalry.” The bans “made it harder and more costly for China to develop A.I.” but “given the vast profits at stake, businesses around the world have found ways to skirt the restrictions, according to interviews with more than 85 current and former U.S. officials, executives and industry analysts, as well as reviews of corporate records and visits to companies in Beijing, Kunshan and Shenzhen.” The Times also reports “an underground marketplace of smugglers, backroom deals and fraudulent shipping labels is funneling A.I. chips into China, which does not consider such sales illegal.”

Tech Firms Continue AI Spending Splurge Despite Investor Concerns

The New York Times Share to FacebookShare to Twitter (8/2, Weise) reports major tech companies “have made it clear over the last week that they have no intention of throttling their stunning levels of spending on artificial intelligence, even though investors are getting worried that a big payoff is further down the line than once thought.” The Times explains that “in the last quarter alone, Apple, Amazon, Meta, Microsoft and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63 percent more than a year earlier and 161 percent more than four years ago,” and “a large part of that was funneled into building data centers and packing them with new computer systems to build artificial intelligence.”

OpenAI Holds Off On Releasing Tool That Catches Students Cheating With ChatGPT

The Wall Street Journal Share to FacebookShare to Twitter (8/4, Barnum, Subscription Publication) reports OpenAI has allegedly developed a method to reliably detect when someone uses ChatGPT to draft an essay or research paper, but has refrained from releasing it. The anti-cheating tool includes watermarks unnoticeable to the human eye but can be found utilizing OpenAI’s detection technology. One staff concern over releasing the tool is that it could disproportionately affect groups such as non-native English speakers. Moreover, if too many get access to the tool, bad actors might decipher the company’s watermarking technique. Yet employees who support the tool’s release claim internally those arguments pale compared with the good such technology could do. They have discussed offering the detector directly to educators or to outside companies that help schools identify AI-written papers and plagiarized work.

Google Cuts Olympics Ad For AI Chatbot Following Backlash

Fortune Share to FacebookShare to Twitter (8/2, Webster) reported Google “scrapped its Olympics advertisement for the AI chatbot Gemini from its TV rotation just one week after the controversial ad, ‘Dear Sydney,’ first aired.” Google said in a statement to Fortune, “While the ad tested well before airing, given the feedback, we have decided to phase the ad out of our Olympics rotation.” The ad is still available on YouTube, and currently has “over 320,000 views, though the comments section on its page has been turned off. In the video, a father helps his young daughter write a letter to her hero, Olympic hurdler Sydney McLaughlin-Levrone, with the help of Gemini AI.” Although the ad prompted “a wave of negative feedback on social media from viewers who found its theme disturbing,” the ad was “actually performing quite well compared with other Olympic ads, according to data reported by Business Insider.”

Grassley Calls On OpenAI To Release Information On Safety Practices

The Washington Post Share to FacebookShare to Twitter (8/2, Verma, Tiku) reports Sen. Chuck Grassley (R-IA) sent a letter to OpenAI CEO Sam Altman saying the company “should turn over documents proving it does not silence employees who wish to share concerns with federal regulators about how the artificial intelligence company is developing its tools.” Following “employee warnings that OpenAI rushed through safety testing of its latest AI model,” Grassley called on Altman “to outline what changes it has made to its employee agreements to ensure those wishing to raise concerns about OpenAI to federal regulators can do so without penalty,” marking “growing bipartisan pressure against OpenAI to detail steps it is taking to make sure its AI is developed safely.”

        Meanwhile, Bloomberg Share to FacebookShare to Twitter (8/2, Griffin, Subscription Publication) details concerns that AI “could help create weapons of mass destruction – not the kind built in remote deserts by militaries but rather ones that can be made in a basement or high school laboratory,” as the technology “could teach users to make dangerous viruses.” While “weaponizing disease is nothing new,” Bloomberg explains that AI tools “make it easier to surface insights on harmful viruses, bacteria and other organisms than what’s traditionally been possible with existing search tools,” meaning that “it’s now far easier for bad actors to develop weapons of mass destruction quickly and cheaply without access to traditional lab infrastructure.”

Five Secretaries Of State To Demand Musk Update AI Chatbot Over Harris Misinformation

The Washington Post Share to FacebookShare to Twitter (8/4, Ellison, Gardner) reports Minnesota Secretary of State Steve Simon and “his counterparts Al Schmidt of Pennsylvania, Steve Hobbs of Washington, Jocelyn Benson of Michigan and Maggie Toulouse Oliver of New Mexico” plan to “send an open letter to billionaire Elon Musk on Monday, urging him to “immediately implement changes” to X’s AI chatbot Grok, after it shared with millions of users false information suggesting that Kamala Harris was not eligible to appear on the 2024 presidential ballot.” The Post reports they “are objecting not to Grok’s tone but its factual inaccuracies and the sluggishness of the company’s move to correct bad information.”

Colleges Overhaul Courses In Response To Rise Of AI Technology

The Wall Street Journal Share to FacebookShare to Twitter (8/5, Subscription Publication) reports that the rapid rise of artificial intelligence technology has caused colleges across the US to quickly overhaul courses to include AI. College administrators say students are calling for course materials which integrate technologies like AI which are likely to impact students’ future workplaces.

Professors Say Computer Science Degrees Will Remain Valuable Amid AI Expansion

Insider Share to FacebookShare to Twitter (8/5) reports that with the rise of AI tools like GitHub Copilot, “tech companies may not need to hire as many software engineers as before since leaner teams can reasonably complete the same amount of code.” The head of Singaporean venture capital firm Hatcher+ predicts the industry will shrink, favoring those with deep expertise. However, computer science professors argue that a degree in the field remains valuable. Professor Kan Min Yen from the National University of Singapore said, “The AI wave is actually driving demand for computing professionals in general, because maturing AI is transformative and needs to be integrated into many facets of life.” David Malan of Harvard said, “Consider just how many more features [software engineers] can implement, how many more bugs they can fix, if they have a virtual assistant by their side.” Kan emphasizes the importance of soft skills, likening computer science to a team sport.

Oxford University Press Becomes Latest Academic Publisher To Collaborate With AI Companies

Inside Higher Ed Share to FacebookShare to Twitter (8/5, Palmer) reports Oxford University Press has “become the latest academic publisher to confirm it is working with companies developing AI tools.” OUP told The Bookseller, a UK-based outlet covering the publishing industry, “We are actively working with companies developing large language models (LLMs) to explore options for both their responsible development and usage.” In its annual report last month, the publisher said that it has “pursued opportunities relating to artificial intelligence (AI) technologies with careful consideration of its implications for research and education.” Both Informa, “the parent company of academic publisher Taylor & Francis, and Wiley recently announced that they had entered into data-access agreements with various companies, including Microsoft, that want to use their corpora to train proprietary AI tools.”

AI-Powered Medical Devices Are Bringing Changes To Patent Regulations

AI and machine learning “are transforming the medical device industry,” Bloomberg Law Share to FacebookShare to Twitter (8/5, Subscription Publication) reports. At the same time, “companies are working to gain Food and Drug Administration approval and obtain intellectual property protection for this technology.” With these new guidelines emerging, “IP practitioners need to help clients navigate these complicated areas without jeopardizing investment into AI or machine learning-enabled technology.”

Researchers: AI Could Help Address Building Energy Use, Carbon Emissions

Smart Cities Dive Share to FacebookShare to Twitter (8/6) reports a paper, published in Nature Communications, says that AI could reduce building sector energy consumption and carbon emissions by about 8% by 2050. The Lawrence Berkeley National Laboratory researchers estimated that “AI adoption, along with robotics and Internet of Things applications, can cut building costs by up to 20%.” Researchers also said that AI, combined with energy policy and low-carbon generation, could reduce energy use and carbon emission by 40% and 90%, respectively, in 2050.

OpenAI Co-Founders To Join Anthropic, Take Sabbatical

CNBC Share to FacebookShare to Twitter (8/6, Novet) reports, “OpenAI co-founder John Schulman said in a Monday X post that he would leave the Microsoft-backed company and join Anthropic, an artificial intelligence startup with funding from Amazon.” The news “comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks.”

        The Wall Street Journal Share to FacebookShare to Twitter (8/6, Subscription Publication) reports OpenAI CEO Sam Altman responded to Schulman’s post by thanking him for his work. Separately, another OpenAI co-founder, president Greg Brockman, also posted on X that he would be taking a sabbatical for the rest of the year. Brockman is quoted saying the leave would be his “first time to relax” since the foundation of the company in 2015.

        Bloomberg Share to FacebookShare to Twitter (8/6, Subscription Publication) reports that the moves “mark a shift at the company following already significant management churn this year. Peter Deng, a vice president of product, also left in recent months, a spokesperson said. And earlier this year, several members of the company’s safety teams exited. OpenAI has made key hires, too, recently adding a new chief financial officer and chief product officer.”

Big Tech Bails Out AI Startups Amid Regulatory Scrutiny

The Wall Street Journal Share to FacebookShare to Twitter (8/6, Jin, Dotan, Kruppa, Subscription Publication) reports AI startups are seeking bailouts from major tech firms as they struggle to survive. Amazon agreed to hire most employees from Adept AI and pay $330 million to license its technology. Google negotiated a $2 billion licensing fee for Character. AI’s technology and hired many of its researchers. These deals avoid regulatory hurdles by not being outright acquisitions, but the Federal Trade Commission is investigating Amazon’s and Microsoft’s deals to determine if they bypassed government approval.

Tech Giants Boost Data Center Investments

Bloomberg Share to FacebookShare to Twitter (8/6, Ludlow, Subscription Publication) reports major tech companies are significantly increasing capital expenditures on data centers to support AI development. Microsoft, Meta, and Amazon announced increased spending in their recent earnings reports. Amazon, the market leader in cloud computing, spent $30.5 billion in the first half of the year and plans to exceed that in the next six months.

Schumer Advocates For AI Regulation In Elections

The Hill Share to FacebookShare to Twitter (8/6) reports Senate Majority Leader Chuck Schumer emphasized the need for AI regulation in elections during an NBC News interview. With fewer than 100 days until the November election, Schumer highlighted the threat of deepfakes, referencing incidents involving AI-generated political content. He urged bipartisan support for AI legislation, including the Protect Elections from Deceptive AI Act and the AI Transparency in Elections Act.

AI Firms Collect Children’s Photos For Age Verification

The Washington Post Share to FacebookShare to Twitter (8/7) reports that in 2021, London-based artificial intelligence firm Yoti initiated a campaign called “Share to Protect” in South Africa, which would “donate 20 South African rands, about $1, to their children’s school” for every child’s photo submitted. The initiative aimed to improve Yoti’s AI tool “that could estimate a person’s age by analyzing their facial patterns and contours.” While some parents participated, others expressed strong opposition due to privacy concerns. Companies such as Yoti, Incode, and VerifyMyAge “increasingly work as digital gatekeepers, asking users to record a live ‘video selfie’ on their phone or webcam, often while holding up a government ID, so the AI can assess whether they’re old enough to enter.” However, critics argue these systems could lead to privacy violations and misuse of personal data.

Learning Expert Warns Against Widespread AI Adoption In Schools

Education Week Share to FacebookShare to Twitter (8/7, Langreo) reports that although not everyone agrees, experts say generative artificial intelligence “can save educators time, help personalize learning, and potentially close achievement gaps.” Benjamin Riley, the founder and CEO of think tank Cognitive Resonance, “argues that schools don’t have to give in to the hype just because the technology exists.” Cognitive Resonance on Aug. 7 released its first report titled “Education Hazards of Generative AI,” and in a phone interview with EdWeek, “Riley discussed the report and his concerns about using AI in education.” He said “using [AI tools] to tutor children” will not be effective “[w]e’re already starting to get some empirical evidence of this. Some researchers at Wharton published a study recently of a randomized control trial where high school math students using ChatGPT learned less than their peers who had no access to it during the time of the study.” Riley also said “we’re starting to see how technology has had real harms on social cohesion and solidarity.”

Tech Companies’ Deals With AI Startups Seen As Structured To Evade Regulatory Scrutiny

The New York Times Share to FacebookShare to Twitter (8/8, Griffith, Metz) reports on “several unusual transactions that have recently emerged in Silicon Valley” by which tech companies have “turned to a more complicated deal structure for young A.I. companies.” Rather than buying them outright, they “licens[e] the technology and hir[e] the top employees – effectively swallowing the start-up and its main assets – without becoming the owner of the firm.” The Times says, “These transactions are being driven by the big tech companies’ desire to sidestep regulatory scrutiny while trying to get ahead in A.I., said three people who have been involved in such agreements. Google, Amazon, Meta, Apple and Microsoft are under a magnifying glass from agencies like the Federal Trade Commission over whether they are squashing competition, including by buying start-ups.”

        UK Antitrust Officials Probe Amazon’s Anthropic Investment. The Wall Street Journal Share to FacebookShare to Twitter (8/8, Orru, Subscription Publication) reports the UK’s Competition and Markets Authority is investigating Amazon’s $4 billion investment in AI startup Anthropic, questioning if it poses a threat to competition. An Amazon spokesperson said the company was “disappointed by the decision” and that its ties to Anthropic didn’t raise competition concerns. The probe highlights increasing scrutiny on Big Tech’s AI investments. An initial decision is due by October. 4.

        CNBC Share to FacebookShare to Twitter (8/8, Browne) reports Amazon completed its $4 billion investment in Anthropic in March, with an initial $1.25 billion equity stake in September, followed by an additional $2.75 billion earlier this year. The deal includes making Anthropic’s large language models available on Amazon’s Bedrock platform and training these models on Amazon’s custom AI chips built by AWS. An Amazon spokesperson emphasized that the collaboration expands choice and competition in AI technology, asserting that Amazon holds no board seat or decision-making power at Anthropic. Anthropic also affirmed its independence, stating Amazon does not have board observer rights.

More Students Turn To AI Chatbots For Mental Health Support Despite Risks

The Seventy Four Share to FacebookShare to Twitter (8/7, Toppo) reported that college students are increasingly turning to AI chatbots like ChatGPT for psychological support and advice. However, experts caution that these AI companions could lead young people to make poor decisions. A recent survey by VoiceBox, a youth content platform “found that many kids are being exposed to risky behaviors from AI chatbots, including sexually charged dialogue and references to self-harm.” Little research exists “on young people’s use of AI companions, but they’re becoming ubiquitous.” For example, the startup Character.ai earlier this year “said 3.5 million people visit its site daily. It features thousands of chatbots, including nearly 500 with the words ‘therapy,’ ‘psychiatrist’ or related words in their names.” Some believe AI’s role in human interaction is inevitable and call for better regulation.

College Of Charleston Uses AI Chat Bot For Student Support, Retention

Inside Higher Ed Share to FacebookShare to Twitter (8/8, Mowreader) reports the College of Charleston has implemented Clyde, an artificial intelligence-powered chat bot developed in partnership with EdSights, to enhance student support and retention. Launched in the fall, Clyde has facilitated more than 50,000 text messages and flagged more than 900 students for follow-up. The initiative aims to connect students with resources and improve institutional priorities. Clyde, named after the college’s cougar mascot, sends weekly check-in messages to students and alerts staff about those needing immediate assistance. Ninety-four percent of students opted in, and 62 percent engaged with the bot, providing data on various student experiences. Adjustments have been made after the pilot year to improve the program, including appointing a dedicated staff member to manage incoming information.

X Halts Use Of European Social Media Data For AI Training Following Legal Challenge

TechCrunch Share to FacebookShare to Twitter (8/8, Lomas) reports that Elon Musk has agreed to halt the use of Europeans’ social media posts to train his AI tool ‘Grok’, following action from Ireland’s Data Protection Commission. The DPC initiated court proceedings seeking an injunction against the practice due to a lack of user consent, with the issue also expected to be referred to the European Data Protection Board. It is currently unclear how any AI models trained on unlawfully-obtained data will be handled legally.

Small AI Models Gain Traction in Tech Industry

Bloomberg Share to FacebookShare to Twitter (8/8, Subscription Publication) reports that tech companies are shifting focus from large, costly AI models to smaller, more efficient ones. Arcee. AI, co-founded by Mark McQuade, exemplifies this trend by developing small language models tailored for specific corporate tasks, like tax-related queries. McQuade emphasizes that “99% of business use cases” do not require extensive general knowledge. Tech and AI giants including “Google, Meta Platforms Inc., OpenAI and Anthropic have all recently released software that is more compact and nimble than their flagship large language models, or LLMs.” Hugging Face co-founder and Chief Science Officer Thomas Wolf notes, “small models make a lot of sense,” highlighting their cost-effectiveness and lower energy demands. Arcee. AI’s recent $24 million Series A funding underscores investor interest in this approach, driven by the need for diverse and affordable AI solutions.

Google DeepMind’s Ping Pong Robot Challenges Humans

Popular Science Share to FacebookShare to Twitter (8/8, Paul) reports that Google DeepMind has developed a robotic system capable of amateur human-level performance in table tennis. Detailed in an August 7 preprint paper, the robot won 45% of matches against 29 human players. Engineers used a dataset and simulations to train the AI, creating a continuous learning feedback loop.

        Google AI Overviews See Significant Decline. Insider Share to FacebookShare to Twitter (8/8, Langley) reports a study by SE Ranking found a significant drop in Google’s AI Overviews in search results. In July, only 7.47% of searches returned an AI Overview, down from 64% in February. Google is rethinking its AI use, with spokespersons noting ongoing refinements. The study also noted a 40% decrease in AI Overview length and highlighted Forbes, Business Insider, and Entrepreneur as top-cited sources.

OpenAI Highlights Risks With Voice Interface

Wired Share to FacebookShare to Twitter (8/8) reports OpenAI released a safety analysis for its new GPT-4o model, highlighting potential risks associated with its humanlike voice interface. The analysis warns that users might become emotionally attached to the chatbot. The “system card” outlines risks such as amplifying societal biases, spreading disinformation, and aiding in the development of weapons. Regarding emotional connections with AI, OpenAI’s Joaquin Quiñonero Candela said, “We don’t have results to share at the moment, but it’s on our list of concerns.” Experts like Lucie-Aimée Kaffee from Hugging Face and MIT Professor Neil Thompson commended OpenAI’s transparency but urged further detail and real-world risk evaluation.

dtau...@gmail.com

unread,
Aug 18, 2024, 12:23:55 PM8/18/24
to ai-b...@googlegroups.com

Bengio Joins U.K. Project to Prevent AI Catastrophes

ACM A. M. Turing Award laureate Yoshua Bengio has signed on to Safeguarded AI, a U.K. government-funded project with the goal of developing an AI system that can assess the safety of other AI systems deployed in critical sectors. This "gatekeeper" AI would assign risk scores and offer other quantitative guarantees regarding the real-world impacts of AI systems. Bengio, who will serve as the project's scientific director, said the use of AI to safeguard AI is "the only way, because at some point these AIs are just too complicated."
[
» Read full article ]

MIT Technology Review; Melissa Heikkilä (August 7, 2024)

 

Novel Ideas to Cool Datacenters: Liquid in Pipes, Dunking Bath

The advent of generative AI has made the cooling of datacenters a hot topic. Datacenters are expected to consume 8% of total U.S. power demand by 2030, compared with about 3% now. For its coming GB200 server racks, Nvidia will use liquid circulating in tubes rather than air to cool the hardware. The company is also working on additional cooling technologies, including dunking entire drawer-sized computers in a nonconductive liquid that absorbs and dissipates heat.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Yang Jie (August 11, 2024)

 

Ke Fan, Daniel Nichols Receive 2024 ACM-IEEE CS George Michael Memorial HPC Fellowships

Ke Fan of the University of Illinois at Chicago and Daniel Nichols of the University of Maryland are the recipients of the 2024 ACM-IEEE CS George Michael Memorial HPC Fellowships. Fan is recognized for her research in optimizing the performance of MPI collectives, enhancing the performance of irregular parallel I/O operations, and improving the scalability of performance-introspection frameworks. Nichols is recognized for advancements in machine-learning-based performance modeling and the advancement of large language models for HPC and scientific codes.
[
» Read full article ]

ACM Media Center (August 14, 2024)

 

Struggling AI Startups Look for Bailout from Big Tech

Many of the AI startups that raised billions of dollars last year are now seeking bailouts from big tech companies. Google has agreed to hire many of Character.AI's researchers and executives and helped buy out early investors by licensing the startup's technology for about $2 billion. Amazon recently paid around $330 million to hire most of Adept AI's staff and license its technology, following a move by Microsoft to hire almost all Inflection's staff to create a new consumer AI division and license the startup's technology for about $650 million. These deals are seen as more favorable than outright acquisitions that likely would face regulatory scrutiny.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Berber Jin; Tom Dotan; Miles Kruppa (August 6, 2024); et al.

 

DeepMind Develops a ‘Solidly Amateur’ Table Tennis Robot

Google’s DeepMind Robotics researchers developed a “solidly amateur human-level” robotic table tennis player. During testing, the robot beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. The system’s biggest shortcoming was how it reacted to fast balls, which DeepMind blames on system latency, mandatory resets between shots, and a lack of useful data.
[
» Read full article ]

TechCrunch; Brian Heater (August 8, 2024)

 

Older Americans Prepare for AI

Classes are being offered across the U.S. to help seniors better understand the benefits and risks of AI. The classes often detail the ways AI can make certain tasks easier while also warning them about deepfakes, misinformation, and AI-perpetrated scams. Said University at Buffalo's Siwei Lyu, "We need this kind of education for seniors, but the approach we take has to be very balanced and well-designed."
[
» Read full article ]

Associated Press; Dan Merica (August 13, 2024)

 

AI Pieces Together Ancient Epic

Researchers are leveraging AI to decipher more than 3,000-year-old clay tablets containing fragments of the Epic of Gilgamesh and other ancient writings. Over 500,000 clay tablets, and many more tablet fragments, are housed in museums and universities worldwide, many having yet to be read or published due to a lack of cuneiform experts. Researchers led by Enrique Jiménez of Germany's Ludwig Maximilian University of Munich have identified new segments of the poem and hundreds of missing words and lines from other works using a machine learning model.


[
» Read full article *May Require Paid Registration ]

The New York Times; Erik Ofgang (August 12, 2024)

 

California Partners with Nvidia to Bring AI Resources to Colleges

California and tech giant Nvidia are partnering to help train the state’s students, college faculty, developers, and data scientists in AI. The initiative aims to add new curriculum and certifications, hardware and software, and AI labs and workshops, and is particularly focused on community colleges. Said Governor Gavin Newsom, “California’s world-leading companies are pioneering AI breakthroughs, and it’s essential that we create more opportunities for Californians to get the skills to utilize this technology and advance their careers."
[
» Read full article ]

Associated Press; Sarah Parvini (August 9, 2024)

 

Tech Aims to Identify Future Olympians

AI-based talent spotting technology set up near the Olympic Stadium in Paris aimed to find the next generation of athletic stars. Data gathered from five tests, including running, jumping, and grip strength, was analyzed to assess a person's power, explosiveness, endurance, reaction time, strength, and agility. The results are compared with data from professional and Olympic athletes. “We’re using computer vision and historical data, so the average person can compare themselves to elite athletes and see what sport they are most physically aligned to,” says Sarah Vickers, head of Intel’s Olympic and Paralympic Program.
[
» Read full article ]

BBC News; Peter Ball (August 8, 2024)

 

Survey: Most Students Worry Overuse Of AI Could Devalue Higher Ed

Inside Higher Ed Share to FacebookShare to Twitter (8/9, Rowsell) reported this year’s Digital Education Council Global AI Student Survey “of more than 3,800 students from 16 countries” found that rising use of AI in higher education “could cause students to question the quality and value of education they receive.” More than half (55 percent) of respondents “believed overuse of AI within teaching devalued education, and 52 percent said it negatively impacted their academic performance.” The report says, “Students do not want to become over-reliant on AI, and they do not want their professors to do so either. Most students want to incorporate AI into their education, yet also perceive the dangers of becoming over-reliant on AI.” Still, some 86 percent said they “regularly” used programs “such as ChatGPT in their studies, 54 percent said they used it on a weekly basis, and 24 percent said they used it to write a first draft of a submission.”

        University Of New Mexico Faculty Receive Stipends To Use AI For Open Education Resources. Inside Higher Ed Share to FacebookShare to Twitter (8/9, Coffey) reported seven faculty members at the University of New Mexico “have spent the summer working to apply generative AI” to open educational resources (OER), which “are teaching and learning materials that are openly licensed, adaptable and freely available online.” As the faculty’s eight-week pilot “nears an end, each will collect $1,000 stipends as part of the university’s investment into OER, according to Jennifer Jordan, OER librarian at New Mexico. The university also recently received a $2.1 million grant from the U.S. Department of Education to establish an OER consortium in the state.” At the end of the session, “the UNM faculty will compile a guidebook on how to create and use OER, with a chapter dedicated to using AI in OER materials.” And as both generative AI and OER “continue to evolve, higher education can cautiously use both in conjunction with one another.”

        Commentary: Why Colleges Should Avoid Banning AI In Classrooms. In commentary for Fortune Share to FacebookShare to Twitter (8/9), Georgia Tech professor Arijit Raychowdhury said that although several school districts and colleges are “rushing to ban the use of ChatGPT in the classroom,” the Georgia Institute of Technology “has taken the opposite approach, welcoming the use of AI in study, essays, and other assignments – but with some guardrails.” Raychowdhury said that by allowing students “to use generative AI to solve problems and assist with assignments, we can show them what AI can and can’t do.” Additionally, “there needs to be clear, core rules with no gray areas,” and one place to “outline these AI rulings is in admissions.” Now is the time “to collaboratively figure out AI’s potential, benefits, and pitfalls. It’s important to have a diverse faculty, so we have as many different microcosms of society as possible represented and to present a united front in how you’re going to use AI, with room for variation in how different fields use AI.”

AI Industry Debates Synthetic Data Use

Insider Share to FacebookShare to Twitter (8/9, Chowdhury, Langley) reported that the AI industry is debating the use of synthetic data as real, human-generated data becomes scarce. Companies like OpenAI and Google have nearly exhausted available textual data, leading to increased interest in synthetic data. While synthetic data can fill gaps and address biases, it also risks degrading AI model performance. Researchers suggest a balanced approach using both real and synthetic data. Some companies are exploring “hybrid data” to mitigate risks. New approaches, such as neuro-symbolic AI, may offer alternative solutions to the data scarcity problem.

FCC Proposes New AI Disclosure Rules

PCMag Share to FacebookShare to Twitter (8/12) reports that the FCC is introducing new regulations requiring companies to disclose any use of AI in phone calls or texts to customers. FCC Chair Jessica Rosenworcel said, “That means before any one of us gives our consent for calls from companies and campaigns, they need to tell us if they are using this technology. ... It also means that callers using AI-generated voices need to disclose that at the start of a call.” This initiative follows a $6 million fine against Democratic consultant Steve Kramer for an AI deepfake of President Biden’s voice. The FCC aims to protect consumers from AI-generated robocalls, “citing a 1991 law designed to protect consumers from pre-recorded automated calls.” The agency seeks public comment on the proposed rules, which also highlight scam call detection technologies from Google and Microsoft. The FCC’s two Republican commissioners, Brendan Carr and Nathan Simington have both voiced opposition to the proposed regulations, with Simington stating, “The idea that the commission would put its imprimatur on even the suggestion of ubiquitous third-party monitoring of telephone calls for the putative purpose of ‘safety’ is beyond the pale.”

AI Model Detects Diseases With 98% Accuracy

The New York Post Share to FacebookShare to Twitter (8/13, Swartz) reports that researchers in Iraq and Australia have developed an AI algorithm capable of diagnosing medical conditions by analyzing tongue color with 98% accuracy. Senior study Share to FacebookShare to Twitter author Ali Al-Naji explained that different tongue colors can indicate various diseases, such as yellow for diabetes and purple for cancer. The study involved 5,260 images to train the AI model and tested it with 60 images from Middle Eastern hospitals. Co-author Javaan Chahl mentioned that this technology could be adapted into a smartphone app for diagnosing multiple conditions. The findings were published in the journal Technologies.

Huawei Prepares New AI Chip To Compete With Nvidia

The Wall Street Journal Share to FacebookShare to Twitter (8/13, Lin, Huang, Subscription Publication) reports that Huawei Technologies is nearing the release of its new AI chip, Ascend 910C, as it attempts to overcome US sanctions to rival Nvidia in China. Chinese firms like ByteDance, Baidu, and China Mobile are in early talks to acquire the chip. Huawei plans to start shipping in October, aiming for orders surpassing 70,000 units worth around $2 billion. Despite production delays and potential further US restrictions, Huawei has received significant state support. Analyst Dylan Patel from SemiAnalysis noted that the Ascend 910C could outperform Nvidia’s B20.

Google Launches Gemini Live

TechCrunch Share to FacebookShare to Twitter (8/13, Wiggers) reports Google launched Gemini Live, a new voice chat feature for its AI chatbot, available starting Tuesday. Announced at the Made by Google 2024 event, Gemini Live offers in-depth voice interactions with enhanced speech capabilities. Initially available in English, this feature is part of the Google One AI Premium Plan, costing $20 per month.

        On CNBC’s Power Lunch Share to FacebookShare to Twitter (8/13), CNBC’s Deirdre Bosa spoke with Rick Osterloh, SVP of Platforms & Devices at Google, about the company’s integration of AI into its hardware.

California AI Bill Faces Pushback From Industry Players

TechCrunch Share to FacebookShare to Twitter (8/13, Zeff) reports “a California bill, known as SB 1047,” seeks to prevent “real-world disasters caused by AI systems before they happen, and it’s headed for a final vote in the state’s senate later in August.” But “while this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers and startup founders.”

        The New York Times Share to FacebookShare to Twitter (8/14, Metz, Kang) also reports.

California Faces Data Center Energy Crisis

The Los Angeles Times Share to FacebookShare to Twitter (8/13) says Los Angeles Times writer Melody Petersen “reported this week that concerns are mounting that data centers are gobbling up electricity at an unsustainable rate, putting California in a precarious power position and threatening to derail ambitious clean energy goals.” Experts warn that the rapid construction of data centers could hinder California’s transition away from fossil fuels, increase electric bills, and elevate blackout risks. Generative AI exacerbates the issue, as its operations consume significantly more electricity than traditional computing. Data centers in California, particularly in Santa Clara and Los Angeles counties, are already straining the state’s power grid, which ranks 49th in energy resilience. Additionally, these facilities require substantial water for cooling, further stressing the state’s dwindling water supply.

Report: AI Policies In K-12 Schools Lack Cohesion

K-12 Dive Share to FacebookShare to Twitter (8/13, Merod) reports that nearly two years after ChatGPT’s emergence, “artificial intelligence policies continue to vary widely among school districts nationwide.” As of June, 15 states had “developed AI guidance for schools, according to the U.S. Education Department,” but the guidance is “disjointed and often lacks details about use cases and implementation, the Center on Reinventing Public Education said in a report released this month.” CRPE’s report emphasized that without “clear policies and guidance, districts will continue to struggle with procurement, data-sharing policies, technical questions, and implementation strategies, ultimately leading to disjointed approaches and unequal access.” To address these issues, CRPE gathered more than 60 stakeholders in April to discuss AI’s potential in education and the need for cohesive policies. CRPE “outlined a roadmap” including innovative uses of AI, strategic funding for AI tools, prioritizing low-income communities, and providing detailed implementation plans.

Johns Hopkins Professor Warns Overlooked Threat Of AI Is “Depersonalization Of Human Relationships”

In commentary for TIME Share to FacebookShare to Twitter (8/14), John Hopkins University professor Allison Pugh says that the common discourse around artificial intelligence risks – job disruption, bias, and surveillance – misses a critical threat: the depersonalization of human relationships. Pugh argues that AI’s integration into roles requiring emotional connection, such as counseling and teaching, undermines “connective labor,” which is essential for meaningful human interactions. The author spent five years studying more than 100 individuals in humane interpersonal work and found that technology makes this labor invisible, forces workers to prove their humanity, and leads to job overload. Pugh emphasizes that socioemotional AI should be clearly labeled and calls for policies to protect human-to-human connections, as these are vital for social cohesion and individual well-being.

Experts: GenAI Can Help People With ADHD, But Be Cautious

“Experts say generative AI tools can help people with attention deficit hyperactivity disorder ... to get through tasks quicker,” the Associated Press Share to FacebookShare to Twitter (8/14, Hunter) reports. However, “they also caution that it shouldn’t replace traditional treatment for ADHD, and also expressed concerns about potential overreliance and invasion of privacy.” According to the AP, “generative AI tools can help people with ADHD break down big tasks into smaller, more manageable steps.” Chatbots are able to “offer specific advice and can sound like you’re talking with a human,” while “some AI apps can also help with reminders and productivity.”

Musk’s AI Firm Launches Chatbots Generating Controversial Political Images

MediaPost Share to FacebookShare to Twitter (8/14, Kirkland) reports that Elon Musk’s AI firm, xAI, has introduced new chatbot models, Grok-2 and Grok-2 mini, featuring an in-app image generator for premium users, that have already depicted political figures in controversial scenarios. Despite recent calls for Musk to address election misinformation spread by chatbot Grok, the new bots produce political illustrations involving real people with few restrictions. Early users have already posted contentious AI-generated images. The lack of an indication that these images are AI-generated raises concerns about further misinformation ahead of the next U.S. Presidential election.

Study Finds Generative AI Models Hallucinate Often

TechCrunch Share to FacebookShare to Twitter (8/14, Wiggers) reports a recent study “sought to benchmark hallucinations by fact-checking models like GPT-4o against authoritative sources on topics ranging from law and health to history and geography.” Researchers “found that no model performed exceptionally well across all topics, and that models that hallucinated the least did so partly because they refused to answer questions they’d otherwise get wrong.”

Reports Offer Divergent Views On AI In Education

The Seventy Four Share to FacebookShare to Twitter (8/14, Toppo) reports that two new reports released last week “offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.” The Center on Reinventing Public Education (CRPE) at Arizona State University “advises educators to take a more active role in how AI evolves, saying they must articulate to ed tech companies in a clear, united voice what they want AI to do for students.” In contrast, Cognitive Resonance, a think tank based in Austin, Texas, warns “of the inherent hazards of using AI for bedrock tasks like lesson planning and tutoring – and questions whether it even has a place in instruction at all, given its ability to hallucinate, mislead and basically outsource student thinking.”

UT Dallas, University Of Buffalo Researchers Create AI Model To Combat Power Outages

The Dallas Morning News Share to FacebookShare to Twitter (8/15, Horner) reports University of Texas at Dallas (UT Dallas) researchers, in collaboration with the University at Buffalo, have developed an AI model to prevent power outages by rerouting electricity in milliseconds. The study, published in Nature Communications, showcases early “self-healing grid” technology. This system uses machine learning to map complex power distribution networks and can automatically identify alternative routes before an outage occurs. The project was supported by the US Office of Naval Research and the National Science Foundation.

Minority Students, Teachers More Likely To Embrace AI In Education

Forbes Share to FacebookShare to Twitter (8/15, Boser) reports that “new surveys show no group has a more open attitude to adopting AI in the classroom than students and teachers of color.” A Walton Family Foundation survey found that “Black teachers and educators in urban districts had the highest usage rates at 86 percent,” and in K-12 overall, “Hispanic and Black students have higher usage rates at 77 percent and 72 percent respectively versus White students at 70 percent.” A report by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving indicates that Black youth are more likely to use generative AI for information, brainstorming, and schoolwork. Despite the enthusiasm, only 25 percent “of teachers polled said they have received any training on AI chatbots,” contributing to hesitancy.

Daniel Tauritz

unread,
Aug 24, 2024, 5:29:48 PM8/24/24
to ai-b...@googlegroups.com

U.S. Government Wants You — Yes, You — to Hunt Down Generative AI Flaws

Ethical AI and algorithmic assessment nonprofit Humane Intelligence and the National Institute of Standards and Technology (NIST) are calling for public participation in the qualifying round of NIST's Assessing Risks and Impacts of AI challenge. Those who make it through the online qualifier will participate in an in-person red-teaming event to assess AI office productivity software at the Conference on Applied Machine Learning in Information Security in October. Said Humane Intelligence's Theo Skeadas, "We want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs."
[ » Read full article ]

Wired; Lily Hay Newman (August 21, 2024)

 

Worldcoin Battles with Governments over Your Eyes

Governments increasingly are concerned the Worldcoin biometric cryptocurrency project, headed by OpenAI's Sam Altman, is building a global biometric database with minimal oversight. The initiative's goal is to scan the eyes of every human, issue online "World ID" passports to prove users are human, and make payments to users in Worldcoin's WLD cryptocurrency. Governments have raised concerns over reports that operators of Worldcoin's iris-scanning devices are encouraging users to allow Worldcoin to use their iris scans to train its algorithms.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Angus Berwick; Berber Jin (August 18, 2024)

 

AI Detection Tools Often Fail to Catch Election Deepfakes

An April study by the Reuters Institute for the Study of Journalism revealed how basic software tricks and editing techniques can fool many deepfake detectors. A 2023 study by U.S., Australian, and Indian researchers found accuracy rates for deepfake detectors ranged from just 25% to 82%. University of California at Berkeley computer science professor Hany Farid said the datasets used to train detectors mainly contain lab-created, not real-world, deepfakes and perform poorly in identifying abnormal patterns in body movement or lighting.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Kevin Schaul; Pranshu Verma; Cat Zakrzewski (August 15, 2024)

 

Hollywood Union Strikes Deal for Advertisers to Replicate Actors' Voices with AI

A deal between the Hollywood actors' union SAG-AFTRA and online talent marketplace Narrativ will allow actors to sell the rights to replicate their voices with AI to advertisers. The agreement ensures actors will have control over the use of their digital voice replicas and will receive income from the technology equal to at least the SAG-AFTRA minimum pay for audio commercials. Brands will need to obtain an actor's consent for each ad using their AI-generated voice replica.
[ » Read full article ]

Reuters; Danielle Broadway; Dawn Chmielewski (August 14, 2024)

 

AI Assistant Monitors Teamwork to Promote Effective Collaboration

An AI assistant developed by computer scientists at the Massachusetts Institute of Technology can oversee teams of humans and AI agents, aligning their roles and intervening as necessary to improve teamwork toward a common goal. The AI assistant can infer the humans’ plans and understanding of one another and, when issues arise, align their beliefs, ask questions, and provide instruction.
[ » Read full article ]

MIT News; Alex Shipps (August 19, 2024)

 

Heart Data Unlocks Sleep Secrets

University of Southern California computer scientists developed open source software that could allow for the development of inexpensive, DIY sleep-tracking devices by anyone with basic coding knowledge. Their model uses heart data and a deep-learning neural network to assess sleep stages. The automated electrocardiogram-only network accurately categorizes sleep stages. The researchers said it outperformed commercial sleep-tracking devices and other models that also do not utilize electroencephalogram data.
[ » Read full article ]

USC Viterbi School of Engineering; Caitlin Dawson (August 19, 2024)

 

Pentagon's New Supercomputer to Boost Defense Against Biothreats

The U.S. Department of Defense (DOD) announced a new supercomputer and rapid response laboratory (RRL) intended to bolster its Chemical and Biological Defense Program's Generative Unconstrained Intelligent Drug Engineering (GUIDE) program. The supercomputer will use AI modeling, simulations, threat classification, and medical countermeasure development in conjunction with the RRL to improve biodefenses.
[ » Read full article ]

TechRadar; Benedict Collins (August 19, 2024)

 

OpenAI Disrupts AI-Based Iranian Influence Campaign

The New York Times Share to FacebookShare to Twitter (8/16, Metz) reported OpenAI “said on Friday that it had discovered and disrupted an Iranian influence campaign that used the company’s generative artificial intelligence technologies to spread misinformation online, including content related to the U.S. presidential election.” The company “said it had banned several accounts linked to the campaign from its online services,” but it “added that a majority of the campaign’s social media posts had received few or no likes, shares or comments, and that it had found little evidence that web articles produced by the campaigns were shared across social media.” The campaign had “used its technologies to generate articles and shorter comments posted on websites and on social media.”

        The Washington Post Share to FacebookShare to Twitter (8/16) explains that “the sites and social media accounts that OpenAI discovered posted articles and opinions made with help from ChatGPT on topics including the conflict in Gaza and the Olympic Games,” as well as “material about the U.S. presidential election, spreading misinformation and writing critically about both candidates.” Ben Nimmo, “principal investigator on OpenAI’s intelligence and investigations team, said the activity was the first case of the company detecting an operation that had the U.S. election as a primary target,” adding, “Even though it doesn’t seem to have reached people, it’s an important reminder, we all need to stay alert but stay calm.”

San Francisco To Sue AI Web Sites Over Deepfake Nude Images

The San Francisco Chronicle Share to FacebookShare to Twitter (8/17, DiFeliciantonio) reports, “San Francisco City Attorney David Chiu is suing 16 websites that his office says use AI to create nonconsensual, fake nude images of women and girls, the first lawsuit of its kind.” The Chronicle explains, “The sites allow users to create AI-generated images of real people, swapping their faces onto nude images in a violation of state and federal laws prohibiting deepfake pornography, revenge pornography and child pornography. ... The suit is seeking civil penalties and for the sites to be blocked by web hosts from continuing to post the alleged illegal content. So as not to push traffic to the companies’ websites, their URLs were redacted in the legal complaint filed Thursday.”

Professors Partner With Police For AI Public Safety Solutions

Inside Higher Ed Share to FacebookShare to Twitter (8/19, Coffey) reports that Yao Xie, a professor at the Georgia Institute of Technology, has completed a seven-year collaboration with the Atlanta Police Department using AI to improve policing. Starting in 2017, Xie’s work focused on crime linkage analysis, rezoning police districts, and ensuring fair neighborhood services. This partnership is part of a broader trend where universities collaborate with law enforcement to harness AI for public safety. Projects include facial recognition comparisons by the University of Texas at Dallas, image analysis by Carnegie Mellon, and risk analysis by the Illinois Institute of Technology. Funded by a $3.1 million National Institute of Justice initiative, these efforts address public safety video analysis, DNA analysis, gunshot detection, and crime forecasting.

AAC&U, Elon University Release AI Guide For Students

Inside Higher Ed Share to FacebookShare to Twitter (8/19, Coffey) reports, “The American Association of Colleges and Universities and Elon University have launched an artificial intelligence how-to guide for students navigating the sometimes-murky waters of the burgeoning technology.” They call their AI-U guide a “student guide to navigating college in the artificial intelligence era.” The guidebook was “born out of conversations last year between dozens of universities at the United Nations-sponsored Internet Governance Forum, which culminated in six principles for the use of AI in higher education.” The guide includes “how to use AI in learning environments, such as for writing and research assistance; effective AI prompts to use; concerns with generative AI; and using AI in potential career searches.” The guide was compiled “by feedback from more than 100 students from various universities and faculty members who attended the U.N. forum.”

Meta Promotes Internal AI Tool

Insider Share to FacebookShare to Twitter (8/19, Altchek) reports that Meta has been promoting its internal AI tool, Metamate, which has been in use for over a year. Meta product director Esther Crawford highlighted the tool’s efficiency benefits in a post on X, stating it aids in tasks like summarizing documents and debugging. Crawford’s comments sparked discussions among employees and industry peers, with Shopify COO Kaz Nejatian expressing agreement. Other companies, including consulting firms and banks, have also been investing heavily in AI tools to enhance workplace performance.

AI Solutions Boost Sustainability Efforts

Forbes Share to FacebookShare to Twitter (8/19, Kirti) reports that AI can significantly enhance sustainability initiatives for businesses by streamlining data analytics and uncovering hidden opportunities. Proper AI integration requires operational changes and robust change management. AI can validate hypotheses about waste and inefficiencies by analyzing vast data, offering tailored solutions to improve sustainability. AI also helps in prioritizing opportunities by estimating the size of sustainability improvements. Aligning stakeholders and training staff is crucial for effective AI-driven sustainability. Despite AI’s high energy consumption, advancements are being made to reduce this impact, as noted by Dr. Sasha Luccioni.

Authors Sue AI Startup Anthropic Over Copyright Infringement

The AP Share to FacebookShare to Twitter (8/20) reports a group of authors is suing AI startup Anthropic, alleging it used pirated copies of copyrighted books to train its chatbot Claude. This marks the first lawsuit by writers against Anthropic, which was founded by ex-OpenAI leaders. The lawsuit, filed in federal court in San Francisco, claims Anthropic’s actions contradict its stated goals of responsibility and safety. Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed the suit, seeking to represent other affected writers. Anthropic did not respond to requests for comment. This case adds to the growing number of lawsuits against AI developers for copyright infringement.

California AI Bill Divides Silicon Valley

Insider Share to FacebookShare to Twitter (8/19) reports that California’s SB 1047, introduced by Sen. Scott Wiener in February, aims to regulate AI development by setting safety standards for large-scale systems. It mandates safety testing and liability for companies developing costly AI models. Major tech firms like Meta and OpenAI have criticized the bill, arguing it stifles innovation. Smaller companies express mixed feelings, with some supporting its transparency measures. California Governor Gavin Newsom has not commented on the bill, which will be voted on by the state Assembly by month-end.

Amazon Launches AI Showroom In San Francisco

The San Francisco Chronicle Share to FacebookShare to Twitter (8/20, DiFeliciantonio) reports Amazon has launched a showroom on Market Street in San Francisco to showcase its AI and robotics efforts. The GenAI loft aims to attract startups, tech developers, and investors to spotlight Amazon’s AI work. AWS VP of Developer Experience Adam Seligman said the opening comes at a time “when people are just learning how to use AI.” The San Francisco launch includes robot-made paintings and AI-generated artwork by Claire Silver, with interactive holograms and AI tech talks. Amazon plans to open similar spaces in São Paulo, London, Paris, and Seoul this year.

California AI Regulation Bill Advances

Forbes Share to FacebookShare to Twitter (8/20, Tedford) reports that a bill to regulate artificial intelligence companies in California has progressed through the state assembly appropriations committee. The legislation, proposed by State Senator Scott Wiener, mandates safety testing for advanced AI models and empowers the state attorney general to file charges if technologies cause harm. The bill faces opposition from tech giants like Google and Meta, who argue it could stifle innovation. The bill will now go to the full state assembly for a vote before the legislative session ends on August 31. Governor Gavin Newsom’s stance remains uncertain.

Educators Debate Ethical AI Use In Schools

PC World (8/20, Hachman) reports the ethical use of artificial intelligence in education remains a contentious issue, and since ChatGPT’s release in November 2022, opinions vary widely. High schools often view AI as a potential cheating tool, while “several universities leave generative AI use entirely up to the discretion of the person teaching the course.” The director of instructional technology at the Mohonasen Central School District highlights the concerns of teachers and the district’s cautious approach to AI, including a trial with Khan Academy’s Khanmigo. Despite AI’s benefits, educators emphasize the need for proper integration to avoid hindering learning.

dtau...@gmail.com

unread,
Sep 1, 2024, 5:21:19 PM9/1/24
to ai-b...@googlegroups.com

California Passes AI Safety Bill

California’s legislature approved an AI safety bill opposed by many tech companies. The measure moved to Governor Gavin Newsom’s desk after passing the state Assembly Wednesday, with the Senate granting final approval Thursday. SB 1047 mandates that companies developing AI models take “reasonable care” to ensure that their technologies don’t cause “severe harm,” such as mass casualties or property damage above $500 million.
[
» Read full article ]

Bloomberg; Shirin Ghaffary (August 29, 2024)

 

AI's Race for Energy Butts Up Against Bitcoin Mining

U.S. tech firms, seeking more electricity to power AI and cloud computing datacenters, are turning to bitcoin miners. By the end of 2027, 20% of bitcoin miner power capacity is expected to shift to AI. Morgan Stanley researchers found crypto mining facilities could become upwards of five times more valuable by repurposing operations for AI and cloud computing. Additionally, datacenter wait times could be shortened by around 3.5 years by buying or leasing a bitcoin mining facility with at least 100 MW of capacity.
[
» Read full article ]

Reuters; Laila Kearney; Mrinalika Roy (August 28, 2024)

 

AI Could Engineer a Pandemic, Experts Warn

A policy paper from public health and legal professionals at Stanford School of Medicine, Fordham University, and the Johns Hopkins Center for Health Security calls for mandatory oversight and guardrails for advanced biological AI models. The authors wrote they believe governments should collaborate with machine learning, infectious disease, and ethics experts to develop tests to determine whether biological AI models could pose "pandemic-level risks."
[
» Read full article ]

Time; Tharin Pillay; Harry Booth (August 27, 2024)

 

'Biocomputers' Made of Human Brain Cells Available for Rent

Researchers can rent cloud access to "biocomputers" from the Swiss tech firm FinalSpark for a monthly fee of $500. A low-energy alternative to AI models, these biocomputers, or organoids, are comprised of human brain cells and last only about 100 days. Among the nine universities granted access to FinalSpark's biocomputers are the University of Michigan, and Germany's Free University of Berlin and Lancaster University.
[
» Read full article ]

Interesting Engineering; Gairika Mitra (August 25, 2024)

 

How Tech Companies Obscure AI's Real Carbon Footprint

A Bloomberg Green analysis found that Amazon, Microsoft, and Meta are buying millions of unbundled renewable energy certificates (RECs) so they can claim emission reductions. Although current carbon accounting rules factor these credits into a company's carbon footprint calculations, research indicates carbon savings on paper fail to translate into actual emissions reductions in the atmosphere.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Akshat Rathi; Natasha White; Ben Elgin (August 21, 2024); et al.

 

AI Researchers Call for 'Personhood Credentials' as Bots Get Smarter

A team including researchers from OpenAI, Microsoft, and Harvard University have proposed the development of "personhood credentials" to help distinguish humans from bots online. Such a system would require humans to verify their identities offline to receive an encrypted credential allowing them to access an array of online services. The researchers proposed multiple personhood credentialing systems be created so users have options, and a single entity does not control the market.
[ » Read full article ]

The Washington Post; Will Oremus (August 21, 2024)

 

GitHub Survey Finds Nearly All Developers Use AI Coding Tools

Nearly all (97%) of the 2,000 developers, engineers, and programmers polled by GitHub across the U.S., Brazil, Germany, and India said they have used AI coding tools at work. Most respondents said they perceived a boost in code quality when using AI tools, and 60% to 71% of those polled said adopting a new programming language or understanding an existing codebase was "easy" with AI coding tools.
[ » Read full article ]

InfoWorld (August 21, 2024)

 

Machine Learning Algorithm Improves 3D Printing Efficiency

Washington State University (WSU) researchers developed a machine learning algorithm that identifies the most efficient 3D print settings for producing complex structures. The researchers used the algorithm to optimize the design for kidney and prostate organ models, with a focus on geometric precision, weight, porousness, and printing time. WSU's Eric Chen said, "We were able to strike a favorable balance and achieve the best possible printing of a quality object, regardless of the printing type or material shape."
[ » Read full article ]

Engineering.com; Ian Wright (August 22, 2024)

 

The Year of the AI Election That Wasn't

More than two dozen tech companies offer AI products geared toward political campaigns, with the ability to reorganize voter rolls, handle campaign emails and robocalls, and produce AI-generated likenesses of candidates for virtual meet-and-greets. However, interviews with tech companies and political campaigns indicate that the technology has not taken off, largely due to a distrust for AI among voters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Sheera Frenkel (August 21, 2024)

 

AI Could Help Shrinking Pool of Coders Keep Outdated Programs Working

An AI model developed by researchers at Vietnam's FPT Software AI Center could allow COBOL-based systems to remain operational as the number of engineers familiar with the older programming language continues to decline. The researchers are training the XMainframe model to interpret COBOL code and rewrite it in other programming languages. In tests, the model outperformed other AI models in accurately summarizing the purpose of COBOL code.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (August 20, 2024)

 

China's AI Engineers Secretly Access Banned Nvidia Chips

Chinese AI developers increasingly are skirting U.S. export controls that prevent them from directly importing Nvidia chips by working with brokers to access them overseas. The users' identities are concealed through "smart contracts" via the blockchain, and the transactions are paid for using cryptocurrency. Experts say these arrangements do not break any laws.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Raffaele Huang (August 26, 2024)

 

Florida International University Students Create AI Policies For Their Class

Inside Higher Ed Share to FacebookShare to Twitter (8/22, Coffey) reported that students at Florida International University (FIU) “were asked to come up with their own AI guidelines for a Rhetorical Theory and Practice class earlier this year.” This initiative marks a departure from the typical prohibition of AI, which is often equated with plagiarism. Students were broken into small groups “to come up with what they believed were best practices, which they then presented to the class at large to fine-tune their ideas. In a summer course, with its shorter time frame...students look at the spring semester policy and make tweaks to create their own.” The experiment resulted in varied policies on AI use for brainstorming and organizing papers. The instructor of the class “will continue to allow students to create their own AI policies this fall, expanding from her upper-level courses to first-year students as well.”

Google DeepMind Workers Protest Military Contracts

TIME Share to FacebookShare to Twitter (8/22, Perrigo) reported that nearly 200 Google DeepMind employees signed a letter urging Google to end its military contracts, citing concerns over ethical AI use. The letter, dated May 16, references Google’s Project Nimbus contract with the Israeli military, which includes AI and cloud services. A Google spokesperson stated, “We comply with our AI Principles, which outline our commitment to developing technology responsibly.” Despite the protest, Google has not acted on the demands, leading to growing frustration among employees.

AI Bots Expected To Transform Everyday Tasks

The Wall Street Journal Share to FacebookShare to Twitter (8/24, Lin, Subscription Publication) reports on the anticipated rise of AI “agents” capable of independently completing various tasks, from booking flights to managing reservations. AWS VP of Generative AI Vasi Philomin said, “In the next stage, bots will be built to do things like arrange returns, all without human help.” As these advancements unfold, Amazon and its services, including AI chatbots for shopping, stand to play a significant role in the emerging technology landscape.

Expert Says Schools Need To Ask Essential Questions About AI For Children

The Washington Post Share to FacebookShare to Twitter (8/23) reported Los Angeles public schools are facing challenges with their new AI program, launched to assist students’ learning. The district introduced the chatbot “Ed” in March to be a “personal assistant to students.” However, financial issues with the start-up AllHere led to the project’s suspension and an investigation into potential misuse of student data. Despite this, district officials plan to continue the AI initiative. Alex Molnar from the National Education Policy Center argues that schools should not adopt AI without ensuring it is the best solution. He emphasizes the need for thorough evaluation and data protection. Molnar suggests parents ask critical questions about AI’s effectiveness, alternatives, and data security. He further recommends legislative pressure to ensure AI tools are safe and effective before implementation. Surveys indicate skepticism about AI, despite hopes it can address educational challenges.

Colleges Ramp Up AI Faculty Hiring

The Chronicle of Higher Education Share to FacebookShare to Twitter (8/26, Swaak) reports that colleges are significantly increasing their AI faculty hiring to keep up with technological advancements and industry demands. Despite being well-funded, some top 20 institutions feel they can’t compete with elite universities in AI talent acquisition, says Att Trainum of the Council of Independent Colleges. An analysis of The Chronicle’s jobs site “conducted earlier this year found that the number of AI-related listings had more than doubled between 2022 and 2023.” Institutions like Purdue University, Emory University, and the University of Georgia are making substantial hires, with some creating new AI-focused centers. Funding comes from “a combination of sources,” including multimillion-dollar donations and strategic funds. Colleges are also promoting internal training programs and offering incentives to existing faculty to integrate AI into their work.

University Of Texas To Host New AI Supercomputer

The Austin (TX) Business Journal Share to FacebookShare to Twitter (8/26, Sayers, Subscription Publication) reports the Texas Advanced Computing Center’s (TACC) new supercomputer, Horizon, “will be built in Round Rock at Seattle-based Sabey Data Centers’ new campus.” The University of Texas announced last month “that it was awarded $457 million from the U.S. National Science Foundation to build what’s called a Leadership Class Computing Facility led by the university’s TACC.” Officials confirmed on August 23 that Horizon is expected to start operations in 2026. Horizon will offer “a 10-times performance improvement for simulation” over TACC’s current Frontera supercomputer. The LCCF will collaborate with various science centers, including those at historically Black colleges and universities and other national supercomputing centers.

Tech Firms Conceal AI’s Water, Power Demands

The Los Angeles Times Share to FacebookShare to Twitter (8/26) reports that AI computing significantly increases electricity and water consumption, with ChatGPT using 10 times more power than standard Google searches. Experts called for transparency from tech companies regarding energy and water usage. Alex de Vries, founder of Digiconomist, said, “Even if we manage to feed AI with renewables, we have to realize those are limited in supply, so we’ll be using more fossil fuels elsewhere.” Google and OpenAI have not disclosed specific consumption details, despite environmental concerns.

Column: Researchers Working To Combat AI Hallucinations In Math Tutoring

In her column for The Hechinger Report Share to FacebookShare to Twitter (8/26), Jill Barshay says, “One of the biggest problems with using AI in education is that the technology hallucinates,” or generates incorrect information. AI chatbots, such as Khan Academy’s Khanmigo powered by ChatGPT, often provide wrong answers, particularly in math. Two researchers from University of California, Berkeley, “recently documented how they successfully reduced ChatGPT’s instructional errors to near zero in algebra” using a method called “self-consistency,” but this method was less effective in statistics, with a 13 percent error rate. Despite these challenges, a study found that ChatGPT’s solutions helped adults learn math better than traditional methods. Barshay says she would “like to see how much real students – not just adults recruited online – use these automated tutoring systems.”

OpenAI Prepares To Launch New AI Model

SiliconANGLE Share to FacebookShare to Twitter (8/27) reports that OpenAI is set to launch a new AI model named “Strawberry” with advanced problem-solving capabilities, according to a report first published by The Information. Strawberry is reportedly capable of solving complex math problems, developing marketing strategies and solving word puzzles. The model, previously known as Q*, surpasses the performance of other OpenAI models on several AI benchmarks. OpenAI employees have raised concerns that the new model could represent a major breakthrough in the journey toward building artificial general intelligence (AGI). According to the report, Strawberry could be released in fall 2024.

Tech Companies Support California Bill To Label AI-Generated Content

TechCrunch Share to FacebookShare to Twitter (8/26, Zeff) reports that OpenAI, Adobe, and Microsoft are backing a California bill, AB 3211, that mandates technology companies to label AI-generated content. The bill stipulates the inclusion of watermarks in the metadata of AI-created photos, videos, and audio files, and that online platforms must display these labels in a user-friendly manner. This support comes despite earlier opposition, suggesting recent amendments to the bill are satisfactory.

        Another TechCrunch Share to FacebookShare to Twitter (8/26, Coldewey) article reports that Elon Musk “unexpectedly” voiced support for the bill as well, writing on X, “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.” TechCrunch notes that Musk’s xAI “would be subject to SB 1047’s requirements despite his pledge to leave California.”

Google To Again Allow AI-Generated Images Of People

Bloomberg Share to FacebookShare to Twitter (8/28, Love, Subscription Publication) reports that Google will allow some users to generate images of people using its AI models after suspending the feature due to a scandal. In February, Google faced backlash for producing historically inaccurate and racially incorrect images. Alphabet CEO Sundar Pichai deemed the responses “completely unacceptable,” leading to the suspension and subsequent review of the tool. The Wall Street Journal Share to FacebookShare to Twitter (8/28, Kruppa, Subscription Publication) reports that Google stated it has improved user experience and set content limitations. The feature will be available for English-language users in the coming days, using Imagen 3 technology.

Self-Driving Cars Raise Ethical Concerns

The Wall Street Journal Share to FacebookShare to Twitter (8/28, Subscription Publication) provides an overview of the key questions surrounding AI-driven vehicles, highlighting concerns from engineers, programmers, and bioethicists. Shai Shalev-Shwartz, chief technology officer at Mobileye, said that the balance between safety and speed is “the one thing that really affects 99% of the moral questions around autonomous vehicles.” Adjusting the parameters between these two aspects, he said, can lead to AI driving that ranges from overly reckless to overly cautious, or to a style that feels “natural” and human-like.

Survey: Only 25% Of School Districts Have Released Guidance On AI

K-12 Dive Share to FacebookShare to Twitter (8/28, Merod) reports a Digital Promise survey found that while a majority of school districts are using some AI in classrooms, just 25% have set specific AI policies or guidance. This is despite 41% of districts having purchased AI tools within the last year. The lack of “official guidance and policy at the district level comes amid a widespread push by K-12 organizations and industry leaders to roll out AI frameworks for students and staff.” Still, 75% of districts have professional development for teachers on safely and effectively using the technology.

OpenAI, Anthropic Sign AI Safety Testing Deals With US Government

Reuters Share to FacebookShare to Twitter (8/29) reports that AI startups OpenAI and Anthropic have signed agreements with the US AI Safety Institute for research, testing, and evaluation of their AI models. Announced on Thursday, these first-of-their-kind deals come amid increasing regulatory scrutiny. The agreements allow the institute to access new models before and after public release and enable collaborative research on AI capabilities and risks. Jason Kwon of OpenAI emphasized the institute’s role in US AI leadership, while Elizabeth Kelly of the Institute called the agreements a significant milestone. The institute will also collaborate with the UK AI Safety Institute.

        Apple And Nvidia In Talks To Invest In OpenAI. The New York Times Share to FacebookShare to Twitter (8/29) reports that Apple and Nvidia in talks to invest in OpenAI, according to sources familiar with the matter. The new round led by Thrive Capital would value OpenAI at $100 billion, representing a $20 billion increase from eight months ago. Thrive Capital is expected to invest about $1 billion, and Microsoft may also join the funding round. Nvidia and Apple declined to comment on the report. Apple plans to integrate OpenAI’s chatbot on iPhones and is expected to share details of its generative AI technology, Apple Intelligence, in September.

        OpenAI’s “Strawberry” AI Model Promises Advanced Capabilities. Newsweek Share to FacebookShare to Twitter (8/29, Boran) reports that OpenAI is developing a next-generation AI model named “Strawberry,” which may be released in fall 2024. This advanced large language model (LLM) aims to enhance AI reasoning, allowing it to solve complex math problems and word logic puzzles. Kristian J. Hammond, director of the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University noted that current models like ChatGPT-4 struggle with context-dependent and multi-step problems. Hammond said Strawberry “could push AI beyond just mimicking human language into realms of thoughtful analysis.”

Yale University Announces $150M Investment For AI Initiatives

Forbes Share to FacebookShare to Twitter (8/29, T. Nietzel) reports, “Yale University has announced it will invest more than $150 million over the next five years for a variety of artificial intelligence (AI) initiatives.” The investment “will support four AI-related priorities: improvements in computing infrastructure, increased access to secure generative AI tools, the addition of new faculty and seed grants, and enhanced interdisciplinary collaboration.”

UCLA Researchers Develop AI-Based Lyme Disease Test

LabPulse Share to FacebookShare to Twitter (8/29) reports that researchers at the University of California, Los Angeles (UCLA) have developed an AI-based test for Lyme disease that delivers results within 20 minutes. The study from UCLA’s California NanoSystems Institute demonstrates that the test is as accurate as traditional methods. The test uses synthetic peptides and a paper-based platform analyzed by an AI algorithm. Co-author Dino Di Carlo highlighted the test’s potential for early, cost-effective diagnosis. The test showed 95.5% sensitivity and 100% specificity in trials. The team is seeking partners to scale the technology and adapt it for whole blood samples. The study Share to FacebookShare to Twitter was published in Nature Communications and received support from the NIH and the National Science Foundation.

Nvidia Faces Manufacturing Challenges With New AI Chips

The Wall Street Journal Share to FacebookShare to Twitter (8/29, Subscription Publication) reports that Nvidia is experiencing manufacturing difficulties with its new AI chips, Blackwell, which are larger and more complex than previous models. These issues contributed to narrower profit margins and a $908 million provision, causing a 6.4% drop in stock on Thursday. CEO Jensen Huang noted the high demand for Blackwell, despite the challenges. Analysts attribute the problems to the chip’s size and new design methods from Taiwan Semiconductor Manufacturing Co. CFO Colette Kress expects increased production to boost revenue next quarter. Additionally, Nvidia’s rapid release cycle has intensified pressure to resolve these issues.

dtau...@gmail.com

unread,
Sep 7, 2024, 6:54:12 PM9/7/24
to ai-b...@googlegroups.com

Researchers Build 'AI Scientist'

A team of researchers from the U.S., Canada, and Japan developed AI Scientist in an effort to automate parts of the scientific research process. Based on a large language model, AI Scientist can perform the complete research cycle, from reading existing literature and developing a hypothesis to testing solutions and writing a paper. It also can evaluate its own results, then build on those by restarting the research cycle.
[ » Read full article ]

Nature; Davide Castelvecchi (August 30, 2024)

 

EU, U.K., U.S. Sign International AI Treaty

The EU, U.S., and U.K. on Thursday signed an international AI treaty on Thursday, along with Andorra, Georgia, Iceland, Norway, Moldova, San Marino, and Israel. The treaty was opened for signature at a conference of Council of Europe justice ministers in the Lithuanian capital of Vilnius. The Council of Europe hailed the agreement as the "first international legally binding treaty" on the use of AI systems, noting it was an open treaty that could be signed by more countries.
[ » Read full article ]

Deutsche Welle (Germany) (September 5, 2024)

 

OpenAI, Anthropic Reach AI Safety, Research Agreement with NIST

The U.S. National Institute of Standards and Technology (NIST) announced agreements that will give NIST's U.S. AI Safety Institute access to new AI models from OpenAI and Anthropic before and after their public release. The agreements will help bolster research on AI's capabilities and risks and allow NIST to recommend safety improvements.
[ » Read full article ]

The Hill; Miranda Nazzaro (August 29, 2024)

 

Machine Learning Could Forecast Earthquakes Months Early

University of Alaska Fairbanks (UAF) researchers have developed a method of forecasting earthquakes accurately months before they occur using machine learning. The researchers developed an algorithm that searches seismic data for abnormal activity and makes informed predictions about impending earthquakes. By studying major earthquakes that occurred in Alaska in 2018 and California in 2019, the researchers found that major earthquakes are preceded by low-level tectonic unrest, which they attributed to a significant increase in pore fluid pressure within a fault.
[ » Read full article ]

Interesting Engineering; Prabhat Ranjan Mishra (August 30, 2024)

 

US Officials Push For Legislation To Rein In AI-Generated Disinformation

Digiday Share to FacebookShare to Twitter (9/2, Swant) reports that in an effort to curb the spread of AI-generated disinformation, particularly in the political realm, state and federal officials in the US are pushing for new legislation. The proposed California AI Transparency Act aims to boost transparency and accountability by providing access to detection tools and enforcing new disclosure requirements for AI-generated content. Additionally, various states such as New York, Florida, and Wisconsin mandate AI-created political advertisements to include disclosures, while a host of cybersecurity firms have begun rolling out tools to spot AI-generated content.

        Unauthorized AI Bill Heading To California Governor’s Desk For Approval. NPR Share to FacebookShare to Twitter (8/30, Barco) reported that a new bill to protect performers from “unauthorized AI is now headed to the California governor to consider signing into law.” The use of artificial intelligence to “create digital replicas is a major concern in the entertainment industry, and AI use was a point of contention during last year’s Hollywood strike.” California Assembly Bill 2602 would “regulate the use of generative AI for performers – not only those on-screen in films and TV/streaming series but also those who use their voices and body movements in other media, such as audiobooks and video games.” According to the bill, the measure would “require informed consent and union or legal representation ‘where performers are asked to give up the right to their digital self.’”

        Google To Intensify Restrictions On AI-Generated Election Content. MediaPost Share to FacebookShare to Twitter (9/2, Kirkland) reports that Google plans to further limit AI-generated election inquiries across its platforms, including the Gemini chatbot and Search AI Overviews, ahead of the 2024 US Presidential election. This initiative extends to topics like candidates, voting procedures, and election results. Google will also mandate advertisers to reveal when their content includes synthetic or digitally altered elements. Moreover, Search and YouTube platforms will guide users towards credible election-related information and voting registration details.

Florida State University Professor Develops AI Cheating Detection For Multiple-Choice Exams

Inside Higher Ed Share to FacebookShare to Twitter (8/30, Coffey) reported that Kenneth Hanson, a Florida State University professor, “has found a way to detect whether generative artificial intelligence was used to cheat on multiple-choice exams.” Hanson collaborated with a machine-learning engineer to gather data in fall 2022 and published their findings this summer. By analyzing responses from five semesters’ worth of exams, Hanson and a team of researchers “found patterns specific to ChatGPT, which answered nearly every ‘difficult’ test question correctly and nearly every ‘easy’ test question incorrectly.” Despite the method’s precision, Hanson doubts its practicality for individual professors due to its complexity. He said “his method of running multiple-choice exams through his ChatGPT-finding model could be used at a larger scale, namely by proctoring companies like Data Recognition Corporation and ACT.”

OpenAI Makes Deals With Publishers

The Verge Share to FacebookShare to Twitter (8/30) reported that OpenAI has made deals with major publishers like Axel Springer and Condé Nast, despite initially scraping their content without permission. These deals provide OpenAI with access to recent and authoritative content, potentially avoiding lawsuits. The New York Times has filed a lawsuit against OpenAI for copyright infringement. OpenAI’s agreements can be seen as settlements to prevent further legal actions. The deals also give OpenAI up-to-date information, enhancing its SearchGPT product. The legal outcome of these cases could significantly impact the AI and publishing industries.

Tech Giants Use Creative Tactics To Poach AI Talent

CNBC Share to FacebookShare to Twitter (8/30, Bosa, Wu) reported Microsoft, Google, and Amazon are using creative methods to poach talent from top AI startups. Google recently signed a unique deal with Character.ai, hiring its founder and over 20% of its workforce while licensing its technology. Microsoft and Amazon have employed similar strategies with their deals involving Inflection and Adept, respectively. These tactics aim to circumvent regulatory scrutiny while acquiring valuable AI talent. However, these maneuvers might attract antitrust enforcement attention.

OpenAI Seeks Changes To Management, Organization

The New York Times Share to FacebookShare to Twitter (9/3, Metz, Isaac) reports OpenAI “is making substantial changes to its management team, and even how it is organized, as it courts investments from some of the wealthiest companies in the world.” The company “is trying to look more like a no-nonsense company ready to lead the tech industry’s march into artificial intelligence.” However, “interviews with more than 20 current and former OpenAI employees and board members show that the transition has been difficult.”

        The New York Times Share to FacebookShare to Twitter (8/30, Metz) reports OpenAI has appointed Chris Lehane as its vice president of global policy. Lehane, who previously held a similar role at Airbnb and also served in the Clinton White House, is known for his expertise in opposition research. An OpenAI spokesperson said, “Just as the company is making changes in other areas of the business to scale the impact of various teams as we enter this next chapter, we recently made changes to our global affairs organization.”

        OpenAI Investment Discussions Occurring Amid Increased Competition. The Wall Street Journal Share to FacebookShare to Twitter (8/30, Subscription Publication) reports that Apple, NVIDIA, and Microsoft are in discussions to invest in OpenAI, the developer of ChatGPT, amid increasing competition in the AI market. Startups are emerging with cheaper and more specialized AI services. Meta’s CEO Mark Zuckerberg supports open-source AI, offering Meta’s Llama model for free to developers. OpenAI, which charges for its services, faces competition from these open-source models. Apple and NVIDIA are negotiating to join Microsoft’s investment, potentially valuing OpenAI at $100 billion. Open-source AI’s growing popularity is challenging established AI companies like OpenAI.

University Of Arizona Engineers Research AI For EV Battery Fire Safety

KOLD-TV Share to FacebookShare to Twitter Tucson, AZ (9/2, Romo) reported that the University of Arizona is focusing on electric vehicle safety through artificial intelligence research. Basab Ranjan Das Goswami, a PhD student in Aerospace and Mechanical Engineering, explained that AI could predict car battery fires by monitoring temperature, potentially saving lives and property. Captain Richard Fult, safety officer at Northwest Fire District, noted that fire departments nationwide struggle with containing EV fires, as extinguishing them remains challenging. Das Goswami mentioned that while their research currently centers on Tesla vehicles, they aim to expand to other electric vehicles.

Goldman Sachs Says AI Could Put Downward Pressure On Oil Price Over Next Decade

Reuters Share to FacebookShare to Twitter (9/3, Choubey, Patel, Anil) reports that artificial intelligence “could hurt oil prices over the next decade by boosting supply by potentially reducing costs via improved logistics and increasing the amount of profitably recoverable resources, Goldman Sachs said on Tuesday.” In a note, Goldman Sachs said, “AI could potentially reduce costs via improved logistics and resource allocation. ... resulting in a $5/bbl fall in the marginal incentive price, assuming a 25% productivity gain observed for early AI adopters.” Goldman “expects a modest potential AI boost to oil demand compared to demand impact to power and natural gas over the next 10 years.” Goldman added, “We believe that AI would likely be a modest net negative to oil prices in the medium-to-long term as the negative impact from the cost curve (c.-$5/bbl) – oil’s long-term anchor – would likely outweigh the demand boost (c.+$2/bbl).”

Column: AI Chatbots Hinder Student Learning

In her column for The Hechinger Report Share to FacebookShare to Twitter (9/2), Jill Barshay said researchers at the University of Pennsylvania found that “Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT.” While students using ChatGPT “solved 48 percent more of the practice problems correctly,” they did not build essential problem-solving skills. A revised AI tutor chatbot improved practice problem performance by 127 percent but did not enhance test scores. The researchers concluded that AI chatbots could “substantially inhibit learning,” as students often relied on them as a “crutch.”

AI Tool Aids Students In Crafting College Essays

Inside Higher Ed Share to FacebookShare to Twitter (9/4, Coffey) reports Esslo, an AI tool developed by Stanford students Hadassah Betapudi and Elijah Kim, provides feedback on college essays. The AI machine “provides feedback on college essays, based on those that have helped students gain admission to top-tier universities like Harvard and Stanford.” The tool offers suggestions on avoiding clichés, using imagery, and improving detail, voice, and character. It has both free and paid versions, with the latter offering unlimited line-by-line edits. Rick Clark, executive director of enrollment management at the Georgia Institute of Technology, views AI as the “equivalent of using an admissions consultant – except that it’s more affordable for those who cannot pay for the often-pricey consultants.”

X Corp. Agrees To EU Data Protection Demands

Bloomberg Share to FacebookShare to Twitter (9/4, Volpicelli, Subscription Publication) reports that Elon Musk’s X Corp. will stop processing European users’ personal data to train its AI chatbot Grok, complying with EU regulators. On Wednesday, Ireland’s Data Protection Commission announced X’s commitment to delete data collected from May 7 to Aug. 1, 2024. The DPC “said it was the first time a lead EU agency has taken such an action against an online platform.”

Amazon Hires Covariant Founders For AI Robotics

Wired Share to FacebookShare to Twitter (9/4) reports that Amazon has hired the founders of Covariant, a startup specializing in AI for automating object handling, and will license its models and data. This move, similar to Amazon’s 2012 acquisition of Kiva Systems, could revolutionize ecommerce operations. Covariant, founded in 2020 by UC Berkeley professor Pieter Abbeel and his students, has developed AI algorithms for robotic grasping. Amazon spokesperson Alexandra Miller confirmed Covariant’s technology will enhance Amazon’s robotic systems. This follows similar talent acquisitions by Amazon, Microsoft, and Google from other AI startups.

Generative AI Projects Face High Costs and Risks

TechRepublic Share to FacebookShare to Twitter (9/4, Jackson) reports that despite the potential of generative AI, many projects are being abandoned due to high costs and risks. A Gartner report indicates that 30% of generative AI projects will be discontinued after the proof-of-concept stage by 2025 as companies are “struggling to prove and realize value.” Rita Sallam, VP analyst of Gartner, said it is “important to acknowledge the challenges in estimating that value, as benefits are very company, use case, role and workforce specific. Often, the impact may not be immediately evident and may materialize over time. However, this delay doesn’t diminish the potential benefits.” A separate Deloitte survey of 2,770 companies found that 70% have moved only 30% or fewer of their GenAI experiments into production, citing lack of preparation and data issues. RAND research revealed that over 80% of AI projects fail, a rate double that of non-AI IT projects.

Study Suggests Generative AI For Academic Advising

Inside Higher Ed Share to FacebookShare to Twitter (9/5, Mowreader) reports a new study “from Tyton Partners suggests supporting academic advisers with generative AI to reduce the burden of heavy caseloads.” The annual study, Driving Toward a Degree, highlighted that this year, “adviser burnout and turnover gained prominence, with 37 percent of respondents ranking it as top issue, nine percentage points higher than the year prior.” The survey “is based on a survey of over 3,000 higher education stakeholders,” and it found that 95 percent of academic advisers focus on helping students select courses. However, among “front-line student support providers, only 25 percent of respondents used AI at least monthly, compared to 59 percent of students.” Tyton suggests enhancing data quality and increasing staff engagement with AI tools to build trust and effectiveness.

OpenAI Considers Higher-Priced Subscriptions

Reuters Share to FacebookShare to Twitter (9/5) reports that OpenAI executives are discussing higher-priced subscriptions for future large language models, including the reasoning-focused Strawberry and a new flagship LLM called Orion. Internal talks have considered prices up to $2,000 per month. OpenAI has not commented on the report. Currently, ChatGPT Plus costs $20 per month, while the free tier is used by hundreds of millions monthly. OpenAI’s Strawberry project aims to enhance AI models’ deep research capabilities through specialized post-training. This follows reports of potential investments from Apple and Nvidia, which could value OpenAI above $100 billion.

OpenAI Cofounder Launches Safe Superintelligence

Fast Company Share to FacebookShare to Twitter (9/5, Melendez) reports OpenAI cofounder Ilya Sutskever has launched a new AI startup called Safe Superintelligence, raising $1 billion from investors like Andreessen Horowitz and Sequoia Capital. The company aims to develop AI smarter than humans but safe for civilization. Safe Superintelligence has 10 employees and is vetting new hires for technical skills and good character. Sutskever left OpenAI in May after a conflict with CEO Sam Altman, who was briefly ousted. The startup’s website emphasizes advancing AI capabilities while ensuring safety.

YouTube Develops AI Detection Tools

TechCrunch Share to FacebookShare to Twitter (9/5, Perez) reports YouTube announced on Thursday new AI detection tools aimed at protecting creators from unauthorized use of their likenesses, including faces and voices, in videos. This initiative expands YouTube’s Content ID system to identify AI-generated content, such as synthetic singing. YouTube is also developing solutions to control how content is used for AI training, responding to creators’ complaints about companies using their material without consent. The company is working on compensating artists for AI-generated music, collaborating with Universal Music Group. Early next year, YouTube will pilot the expanded Content ID system to identify synthetic singing.

dtau...@gmail.com

unread,
Sep 15, 2024, 11:33:50 AM9/15/24
to ai-b...@googlegroups.com

Google AI Model Faces EU Scrutiny from Privacy Watchdog

EU regulators said Thursday they’re looking into Google’s Pathways Language Model 2 (PaLM2) over concerns about its compliance with the bloc’s data privacy rules. Ireland’s Data Protection Commission, which has oversight of Google in data privacy matters, said it has opened an inquiry to assess whether the AI model's data processing would likely result in a “high risk to the rights and freedoms of individuals” in the bloc.
[
» Read full article ]

Associated Press; Kelvin Chan (September 11, 2024)

 

U.S. Proposes Requiring Reporting for Advanced AI, Cloud Providers

The U.S. Department of Commerce's Bureau of Industry and Security has proposed mandatory reporting requirements for AI developers and cloud computing providers regarding the development of "frontier" AI models and computing clusters. The reporting would cover cybersecurity measures and outcomes from "red-teaming efforts," such as testing whether AI models can assist in cyberattacks or enable non-experts to develop chemical, biological, radiological, or nuclear weapons.
[ » Read full article ]

Reuters; David Shepardson (September 9, 2024)

 

Video Game Performers Reach Agreement on AI

The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) last week reached agreements with 80 video games over AI protections for video game performers. The individual video games entered into interim or tiered budget agreements with SAG-AFTRA and agreed to the union's AI provisions. The dispute centered on the ability of games to replicate the likenesses of voice actors and motion-capture artists using AI, without their consent or fair compensation.
[ » Read full article ]

Associated Press; Kaitlyn Huamani (September 5, 2024)

 

IT Unemployment Hits 6%

A Janco Associates analysis of U.S. Department of Labor data revealed the unemployment rate for IT workers climbed to 6% in August. Janco's Victor Janulaitis said the rate is the highest since the end of the dot-com bubble in the early 2000s and attributed the increase to "seismic changes" in the tech landscape brought on by AI. On the other hand, said Janulaitis, AI and cybersecurity roles are experiencing growth.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (September 7, 2024)

 

India Emerging as Key Player in Global AI Race

India is working to become a major global player in the AI space, with the government committing $1.25 billion to the IndiaAI mission to facilitate computing infrastructure, startup, and AI application development in the public sector. A number of Indian startups have begun developing their own large language models, and the government has procured 1,000 GPUs to provide computing capacity to AI developers.
[ » Read full article ]

Time; Astha Rajvanshi (September 5, 2024)

 

New Recruitment Challenge: Filtering AI-Crafted Résumés

Tech companies and recruiters attribute substantial interest in their job postings to the use of AI to customize and submit numerous résumés in rapid succession. To avoid hiring "fake candidates," recruiters are taking extra steps to verify applicants' identities and experience. Some firms record interviews and flag candidates for further vetting if they look away from the camera before answering a question, as they may be consulting ChatGPT for answers.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Katherine Bindley (September 4, 2024)

 

Regulators Try to Do the Math on AI Safety

AI safety legislation passed in California would, if signed into law by Gov. Gavin Newsom, regulate AI models trained on 10 to the 26th floating-point operations per second, the same threshold that requires reporting to the U.S. government under a 2023 executive order signed by President Joe Biden. The threshold is viewed by some lawmakers and AI safety advocates as a level of computing power at which AI systems could become dangerous, but critics call the measure arbitrary.
[ » Read full article ]

Associated Press; Matt O'Brien (September 4, 2024)

 

AI Anchors Protect Reporters amid Government Crackdown in Venezuela

In response to the Venezuelan government's crackdown on journalists and protesters, Colombian non-profit Connectas has created AI-generated news anchors to deliver news in Venezuela from independent media outlets while protecting reporters. The AI anchors are named "El Pana," Venezuelan slang for "friend," and "La Chama," meaning "The Girl." Connectas' Carlos Huertas said, "We decided to use artificial intelligence to be the 'face' of the information we're publishing because our colleagues who are still out doing their jobs are facing much more risk."
[ » Read full article ]

Reuters; Maria Paula Laguna; Kylie Madry (September 2, 2024)

 

Professor Speaks Out About Students’ Use Of ChatGPT For Introductory Assignment

Insider Share to FacebookShare to Twitter (9/8, Yip) reports professor Megan Fritts of the University of Arkansas at Little Rock revealed that “many of the students enrolled in her Ethics and Technology course decided to introduce themselves with ChatGPT.” Fritts took her concern “to X, formerly Twitter, in a tweet that has now garnered 3.5 million views.” She explained “that the assignment was not only to help students get acquainted with using the online Blackboard discussion board feature, but she was also ‘genuinely curious’ about the introductory question.” However, AI-generated responses “did not reflect what the students, as individuals, were expecting from the course but rather a regurgitated description of what a technology ethics class is, which clued Fritts in that they were generated by ChatGPT or a similar chatbot.” Fritts acknowledged “that educators have some obligation to teach students how to use AI in a productive and edifying way. However, she said that placing the burden of fixing the cheating trend on scholars teaching AI literacy to students is ‘naive to the point of unbelievability.’”

How AI Recruiters Impact College Admissions

The Chronicle of Higher Education Share to FacebookShare to Twitter (9/6, Carlson) reported Zack Perkins and CollegeVine, his technology company, recently “released a product-launch video that sought to show the promise of artificial intelligence in scaling up the work of admissions offices that give information to prospective students.” The AI bot named “Sarah” demonstrates its ability to engage prospective students by discussing their interests and guiding them to suitable academic programs. Institutions like Knox College have begun integrating customized AI recruiters, such as “KC,” to enhance recruitment efforts. In addition to “its AI recruiter, CollegeVine also has an AI-powered chatbot called Ivy that answers students’ general questions about what colleges they might consider applying to, what major they should choose, and what they might do with that major.” While AI promises to streamline routine tasks and free up staff for more meaningful interactions, concerns remain “about its ability to replace human intuition and personalized guidance.”

Opinion: Inclusive AI Could Bolster Special Education

In an opinion piece for TIME Share to FacebookShare to Twitter (9/6), Timothy Shriver, Ph.D., the chairman of the Special Olympics, said that the advent of artificial intelligence (AI) could significantly impact students with intellectual and developmental disabilities (IDD). A study by the Special Olympics Global Center for Inclusion in Education “found the majority of educators (64%) and parents (77%) of students with IDD view AI as a potentially powerful mechanism to promote more inclusive learning.” Despite this optimism, “the majority of teachers (78%) express concern that the use of AI in schools might lead to a decrease in human interaction in schools, with 65% also worried about AI use potentially reducing students’ ability to practice empathy.” Shriver emphasized the need for comprehensive teacher training on AI platforms and said “people with IDD must have a seat at the table when discussing the responsible use of AI in education.”

University Of Delaware Piloting AI Study Tools

Inside Higher Ed Share to FacebookShare to Twitter (9/9, Mowreader) reports that the University of Delaware (UD) has launched a pilot initiative “that will transform recorded lectures into study guides, flash cards and practice quizzes” using generative AI technology, starting this fall. The leader of Academic Technology Systems (ATS) at UD explained that the AI builds a knowledge graph from lecture transcripts, which faculty members then review for accuracy. The initiative, “developed in-house at the university, leads with ethical principles and prioritizes faculty content ownership to protect all participants, as well,” ensuring privacy through Amazon Web Services Bedrock encryption. The development team “includes two software engineers, some instructional designers, a user-interface developer and a Ph.D. student who used to work as a software developer.” Currently, the project is being piloted in two psychology courses.

Musk’s xAI Unveils Colossus Supercomputer

Insider Share to FacebookShare to Twitter (9/8, Lee, Tangalakis-Lippert) reports that Elon Musk’s AI company, xAI, has introduced a new supercomputer named Colossus, powered by 100,000 Nvidia H100 chips. This AI training system is significantly larger than Meta’s Llama 3, which uses 16,000 chips. However, LinkedIn cofounder Reid Hoffman and Modular AI CEO Chris Lattner suggest that Colossus merely allows xAI to catch up with leading AI companies like OpenAI and Anthropic. Musk aims to double Colossus’s capacity to 200,000 chips soon, but energy supply issues and environmental concerns have been raised.

        Musk Denies Tesla-xAI Revenue Sharing. TechCrunch Share to FacebookShare to Twitter (9/8, Ha) reports that Elon Musk has refuted claims from the Wall Street Journal Share to FacebookShare to Twitter (9/8, Subscription Publication) that Tesla has considered sharing revenue with his AI company, xAI. The proposed agreement would have involved using xAI’s models in Tesla’s Full Self-Driving software and other features. Musk stated on his social media platform X that Tesla does not need to license anything from xAI. He emphasized that xAI’s models are too large to run on Tesla’s vehicle inference computers. Tesla shareholders have sued Musk, alleging he diverted resources to xAI.

Meta Expands Llama AI Model Availability

TechCrunch Share to FacebookShare to Twitter (9/8, Wiggers)reports that Meta has broadened the availability of its generative AI model, Llama, through partnerships with AWS, Google Cloud, and Microsoft Azure. The Llama models, including Llama 8B, 70B, and 405B, range from compact versions for general applications to large-scale models requiring data center hardware. Meta has also introduced tools like Llama Guard and Prompt Guard for content moderation and security. Concerns remain about potential copyright issues and the reliability of AI-generated code.

Report: AI Adoption In Academic Libraries Accelerates

Inside Higher Ed Share to FacebookShare to Twitter (9/10, Coffey) reports, “According to a report released Monday by the data company Clarivate, 7 percent of academic libraries are currently implementing AI tools, while nearly half expect to implement them over the next year.” The report, released Monday, is based on a survey conducted from April to June with around 1,500 respondents, including library deans and IT directors, primarily from the US. Approximately 80 percent of respondents “were from university libraries.” Key motivations for AI adoption include supporting student learning (52 percent), research excellence (47 percent), and making content more discoverable (45 percent). Challenges include a lack of AI expertise, with 32 percent of respondents noting no AI training at their universities. Respondents “said budget constraints were just as worrisome as a lack of AI expertise.”

Google Shuts Down Everyday Robots

Wired Share to FacebookShare to Twitter (9/10, Nast) reports that Alphabet’s innovation lab, Google X, faced challenges in integrating robotics and AI after acquiring nine robot companies in early 2016. Andy Rubin, who initially led the effort, left under mysterious circumstances, leading to confusion among employees. Astro Teller, head of Google X, aimed to tackle global issues with AI-powered robots. Despite significant progress, including the development of robots for tasks like tidying desks, Google shut down the Everyday Robots project in January 2023, citing cost concerns. The robots and a small team were transferred to Google DeepMind for further research. The closure raises questions about Silicon Valley’s commitment to long-term, high-cost projects essential for future AI and robotics integration.

OpenAI Plans to Release “Strawberry” AI Model

Reuters Share to FacebookShare to Twitter (9/10) reports that OpenAI plans to release “Strawberry,” a reasoning-focused AI model, as part of its ChatGPT service within the next two weeks. The Information, citing two testers, states that Strawberry can “think” before responding, unlike other conversational AIs. OpenAI, led by Sam Altman and backed by Microsoft, has over 1 million paying users for its business products. Strawberry will initially handle text only and is not yet multimodal. Microsoft and OpenAI did not immediately respond to Reuters’ requests for comment.

GAO: Agencies Have Met Management And Talent Requirements From Biden’s 2023 Executive Order On AI

Government Executive Share to FacebookShare to Twitter (9/10) reports the Government Accountability Office said in a review released on Monday that “federal agencies have fully met the Biden administration’s initial management and talent benchmarks for the broader adoption of artificial intelligence technologies across government.” The report “looked at agency compliance with 13 specific requirements from President Joe Biden’s October 2023 executive order on AI, which outlined governmentwide safeguards around use of the new technology.” All six agencies that were “tasked with implementing” the directives – the Executive Office of the President; Office of Management and Budget; Office of Personnel Management; Office of Science and Technology Policy; General Services Administration; and the U.S. Digital Service – “fully implemented” the 13 requirements that they were charged with.

States Develop AI Guidance For K-12 Education

Education Week Share to FacebookShare to Twitter (9/11, Klein) reports state education agencies are increasingly providing guidance on artificial intelligence (AI) in K-12 education, “according to an annual survey released Sept. 11 by the State Educational Technology Directors Association.” AI interest among educators “has continued to rise, according to this year’s survey results, with 90 percent of respondents reporting increased interest in AI guidance.” Currently, 59 percent of states “said their states had crafted guidance on the topic,” with 14 percent working on broader AI policy initiatives. States like Utah “have created positions in their education departments dedicated primarily to AI implementation in K-12,” while states such as Indiana and New Jersey have allocated funds for AI.

Generative AI Sparks Major Investment Boom In The US

The Wall Street Journal Share to FacebookShare to Twitter (9/11, Subscription Publication) reports generative AI has initiated a significant spending surge in the US, with venture-capital investments in AI startups reaching $64.1 billion this year. Companies like Microsoft and Google have expanded their data centers to support AI applications, with Microsoft doubling its data centers since early 2020. AI data centers require more power, leading to a nearly ninefold increase in energy orders since 2015. The Journal includes visualizations showing capital spending and number data centers for Amazon, Google, Meta, and Microsoft.

Elon Musk’s xAI Supercomputer Sparks Environmental Concerns In Memphis

NPR Share to FacebookShare to Twitter (9/11, Kerr) reports that Elon Musk’s new artificial intelligence company xAI has established a data center in South Memphis, aiming to build the “world’s largest supercomputer” named Colossus. The facility, which started operations over Labor Day weekend, will support xAI’s chatbot Grok and consume significant resources, including “a million gallons of water per day and 150 megawatts of electricity.” Local residents and environmental advocates express concerns over the project’s environmental impact, particularly in historically Black neighborhoods already suffering from poor air quality. xAI’s use of methane gas generators without proper permits has also raised alarms. Memphis Community Against Pollution President KeShaun Pearson criticizes xAI for not engaging with the community, stating, “We have been deemed by xAI not even valuable enough to have a conversation with.” While the local utility assures that the project will not strain resources, the lack of transparency and oversight remains a contentious issue.

Oregon Department Of Education Launches AI Career Guidance Tool For College Students

Government Technology Share to FacebookShare to Twitter (9/12) reports that the Oregon Department of Education (ODE) announced the release of “Sassy,” an AI-powered career exploration coach for students. Developed by the Journalistic Learning Initiative (JLI) in partnership with ODE and the Southern Oregon Education Service District, Sassy assists middle and high school students with career brainstorming, resume writing, and interview preparation. The tool, named after the mythic Sasquatch, provides guidance by using prompts to search the state’s career resource hub. According to JLI, Sassy ensures students receive updated and locally relevant advice.

Chinese Firms’ AI Models Compared

CNBC Share to FacebookShare to Twitter (9/12, Kharpal) reports that Chinese tech giants, including Baidu, Alibaba, Tencent, Huawei, and ByteDance, have developed their own generative AI models to compete with U.S. counterparts. Baidu’s Ernie Bot, with 300 million users, rivals ChatGPT. Alibaba’s Tongyi Qianwen models are open-sourced and deployed by over 90,000 enterprises. Tencent’s Hunyuan supports industries like gaming and e-commerce. Huawei’s Pangu models are industry-specific, predicting typhoon trajectories in seconds. ByteDance’s Doubao model, launched this year, offers capabilities at a lower cost. These developments reflect China’s ambition to lead in AI technology.

OpenAI Unveils o1 Model Capable Of “Reasoning” In Math, Science

The New York Times Share to FacebookShare to Twitter reports that OpenAI introduced a new version of ChatGPT on Thursday, aiming to improve its performance in math, coding, and science tasks. Powered by OpenAI o1 technology, the chatbot now “reasons” through problems, as stated by OpenAI’s chief scientist Jakub Pachocki. In a demonstration, the updated chatbot successfully solved an acrostic, answered a Ph.D.-level chemistry question, and diagnosed an illness.

        Bloomberg Share to FacebookShare to Twitter (9/12, Subscription Publication) reports that the o1 model “is designed to spend more time computing the answer before responding to user queries, the company said in a blog post Thursday. With the model, OpenAI’s tools should be able to solve multi-step problems, including complicated math and coding questions.” TechCrunch Share to FacebookShare to Twitter (9/12, Wiggers) says that o1 “can effectively fact-check itself by spending more time considering all parts of a command or question.”

NVIDIA CEO Discusses AI Chip Supply Pressures

Fortune Share to FacebookShare to Twitter (9/12, Hetzner) reports that NVIDIA CEO Jensen Huang spoke on Wednesday about the intense pressure he faces to increase the supply of AI training microchips. Speaking at a Goldman Sachs tech conference, Huang highlighted the “emotional” impact these supplies have on customers’ competitiveness and revenues. NVIDIA, controlling 90% of the market, struggles to meet demand from major clients like Microsoft, Google, and Amazon. Huang anticipates easing supply constraints, expecting improved availability in coming quarters. NVIDIA’s Q2 earnings and future chip production, including the upcoming Blackwell series, remain closely watched by investors and customers.

Nvidia, OpenAI, Anthropic And Google Execs Meet With White House To Talk AI Energy And Data Centers

CNBC Share to FacebookShare to Twitter (9/12, Field) reports that leaders from OpenAI, Anthropic, Microsoft, Google, and several American power and utility companies met Thursday morning at the White House to discuss AI energy infrastructure in the US, sources told CNBC. Key attendees included OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and Google President Ruth Porat. The meeting addressed AI’s energy usage, data center capacity, semiconductor manufacturing, and grid capacity. An OpenAI spokesperson emphasized the importance of US infrastructure for economic growth. Commerce Secretary Gina Raimondo and Energy Secretary Jennifer Granholm were also present. The meeting follows an August announcement that OpenAI and Anthropic will allow the US AI Safety Institute to test their models before public release.

dtau...@gmail.com

unread,
Sep 21, 2024, 7:01:38 PM9/21/24
to ai-b...@googlegroups.com

California Governor Signs Laws to Crack Down on Election Deepfakes

On Sept. 17, California Gov. Gavin Newsom signed into law legislation prohibiting the creation and publication of election-related deepfakes 120 days prior to and 60 days after Election Day, while permitting courts to stop their distribution and impose civil penalties. Other bills signed by Newsom will require large social media platforms to remove deepfakes, and mandate that political campaigns publicly disclose if they run ads with AI-altered materials.
[ » Read full article ]

Associated Press; Tran Nguyen (September 17, 2024)

 

Researchers Run Small AIs on Their Laptops

Researchers increasingly are able to run local AI systems on their laptops. This comes as tech firms and research institutes, including Google DeepMind, Meta, Microsoft, and the Allen Institute for Artificial Intelligence, make available small and open-weight versions of large language models that can be downloaded and run locally, as well as scaled-down versions that can be run on consumer hardware. Local AIs are less expensive, allow open models to be fine-tuned for focused applications, and preserve data privacy.
[ » Read full article ]

Nature; Matthew Hutson (September 16, 2024)

 

AI Pioneers Call for Protections Against 'Catastrophic Risks'

A group of AI pioneers including Turing Award recipients Yoshua Bengio, Andrew Yao, and Geoffrey Hinton released a statement on Sept. 16 expressing their concerns that the capabilities of the technology could exceed that of its creators in a matter of years, leading "to catastrophic outcomes for all of humanity." They also proposed that countries establish AI safety authorities to register AI systems within their borders and collaborate to identify red lines and warning signs for the technology.


[
» Read full article *May Require Paid Registration ]

The New York Times; Meaghan Tobin (September 16, 2024)

 

Chatbot Pulls People Away from Conspiracy Theories

An AI chatbot developed by Cornell University researchers aims to persuade users to stop believing conspiracy theories. In their study, more than 2,000 U.S. adults were asked to describe a conspiracy they believed; some then engaged in discussions with DebunkBot in which they presented evidence supporting their position and DebunkBot provided information to combat their misinformation. Participants' belief ratings fell around 20% after three exchanges with DebunkBot, and around 25% of participants no longer believed the conspiracy theory.

[ » Read full article *May Require Paid Registration ]

The New York Times; Teddy Rosenbluth (September 13, 2024)

 

Survey: Most Americans Don't Trust AI-Powered Election Information

A survey by The Associated Press-NORC Center for Public Affairs Research and USAFacts found that two-thirds (67%) of U.S. adults lack confidence that AI-powered chatbots or search engines provide factual, reliable information. Of the surveys 1,019 respondents, 25% believe the use of AI will make it "much" or "somewhat" more difficult to locate factual information about the 2024 election. Only 16% of those polled think AI will make finding accurate election information easier.
[
» Read full article ]

Associated Press; Ali Swenson; Linley Sanders (September 12, 2024)

 

Brain-Like Device Hits Massive 4.1 Tera-Operations Per Second/Watt

A neuromorphic device developed by an international research team is comprised of molecules that can alter their electrical properties when a charge is applied to them, allowing for the manipulation of materials for integration in electrical systems. The researchers integrated the 14-bit neuromorphic accelerator into a circuit board and achieved energy efficiency of 4.1 tera-operations per second per watt, making it suitable for neural network training, natural language processing, and signal processing.
[
» Read full article ]

Interesting Engineering; Rupendra Brahambhatt (September 13, 2024)

 

Colleges Grapple With AI Use In Education

Inside Higher Ed Share to FacebookShare to Twitter (9/16, Mowreader) reports colleges and universities are addressing the integration of generative artificial intelligence tools in education while preventing misuse. A May 2024 Student Voice survey “from Inside Higher Ed and Generation Lab found that, when asked if they know when or how to use generative AI to help with coursework, a large number of undergraduates don’t know or are unsure (31 percent).” The survey included more than 3,500 four-year and 1,400 two-year students. Only 16 percent of respondents “said they knew when to use AI because their college or university had published a policy on appropriate use cases for generative AI for coursework.” Experts recommend campus leaders offer “professional development and education,” provide sample language, and communicate regularly with students.

        Column: AI Tutor Boosts Learning In Less Time For Harvard Students. In her column for The Hechinger Report Share to FacebookShare to Twitter (9/16), Jill Barshay says that an AI tutor, PS2 Pal, significantly improved student learning in a “small experiment, involving fewer than 200 undergraduates.” Conducted in fall 2023, the study found that “students learned more than twice as much in less time when they used an AI tutor in their dorm compared with attending their usual physics class in person.” The AI tutor was designed to avoid cognitive overload and encourage critical thinking. Gregory Kestin, a physics lecturer at Harvard and developer of the AI tutor used in this study, argues that AI should not replace human interaction but can enhance it by introducing new topics before class. He plans to “test the tutor bot for an entire semester” and explore its use as a study assistant.

Intel, AWS Collaborating To Design Custom AI Chips

Bloomberg Share to FacebookShare to Twitter (9/16, King, Subscription Publication) reports Intel CEO Pat Gelsinger has acquired Amazon’s AWS as a “customer for the company’s manufacturing business, potentially bringing work to new plants under construction in the US and boosting his efforts to turn around the embattled chipmaker.” Intel and AWS “will coinvest in a custom semiconductor for artificial intelligence computing – what’s known as a fabric chip – in a ‘multiyear, multibillion-dollar framework,’ according to a statement Monday.” Bloomberg adds that while Intel is postponing new factories in Germany and Poland, it “remains committed to its US expansion in Arizona, New Mexico, Oregon and Ohio.”

Lawmakers Call For Administration To Implement Stronger Algorithm And AI Bias Protections

Modern Healthcare Share to FacebookShare to Twitter (9/16, McAuliff, Subscription Publication) reports that in a letter to the Office of Management and Budget on Monday, Senate Majority Leader Schumer (D-NY) and Sen. Ed Markey (D-MA) urged the Biden Administration to require federal agencies and contractors receiving federal funds to do more to protect against abuses related to algorithms and AI. The lawmakers specifically “want the government to focus intently on ‘consequential decisions,’ such as those that determine types of health care people can obtain, to ensure bias is not creeping in and creating or exacerbating inequities.”

Elon Musk’s AI Data Center In Memphis Sparks Pollution Concerns

TIME Share to FacebookShare to Twitter (9/17, Chow) reports that Elon Musk’s AI startup xAI has been training its new model, Grok 3, at a new Memphis data center. The center, built in 19 days, has caused an outcry among Memphis residents and environmental groups over potential negative impacts on air quality, water access, and grid stability. Local leaders and utility companies argue the project will benefit infrastructure and employment. However, xAI’s demand for 150 megawatts of power has raised concerns about Memphis’s ability to handle such a large energy consumer. Reports indicate xAI has installed gas turbines without permits, drawing further criticism. “They treat southwest Memphis as just a corporate watering hole,” said KeShaun Pearson, executive director of Memphis Community Against Pollution.

OpenAI Sustainability In Question Despite Valuation

Fast Company Share to FacebookShare to Twitter (9/17) reports that OpenAI is pursuing $6.5 billion in venture capital and $5 billion in debt financing, aiming for a $150 billion valuation. Despite significant revenue growth, OpenAI remains unprofitable due to high operational costs. The company’s new GPT-o1 models target complex tasks, potentially expanding its market. However, concerns persist about the sustainability of its business model, especially with expensive, large-scale AI models. OpenAI’s corporate structure might shift to a for-profit benefit corporation, and it faces potential regulatory challenges in California.

 

Google Plans To Identify Real Vs. AI-Generated Images. The Verge Share to FacebookShare to Twitter (9/17, Warren) reports that Google will soon introduce technology to distinguish between real, edited, and AI-generated photos. This update will be integrated into Google’s search results with the “about this image” feature. The “system Google is using is part of the Coalition for Content Provenance and Authenticity (C2PA).” While Google is among multiple companies that have backed C2PA authentication, “adoption has been slow,” so “Google’s integration into search results will be a first big test for the initiative.”

Survey: Most Teens Have Discussed How To Use AI More Responsibly In School

Education Week Share to FacebookShare to Twitter (9/18, Klein) reports, “Teens who have talked about artificial intelligence in school are more likely to use it responsibly, concludes a report released Sept. 18 by Common Sense Media, a nonprofit that examines the impact of technology on young people.” The nonprofit found that about “70 percent of teens have used at least one kind of AI tool,” with 51 percent using chatbots like ChatGPT, Microsoft Co-pilot, or Google’s Gemini. Approximately 53 percent of students “say they use AI for homework help,” and 2 in 5 for entertainment or translation. The report highlights that 55 percent of teens “who reported using AI tools, and had talked about AI’s benefits and pitfalls in school fact-checked the information they received from AI tools,” compared to 43 percent who did not. Additionally, 87 percent of students “who had class discussions about AI are also more likely to agree that AI tools might be used to cheat,” versus 73 percent without such discussions.

        K-12 Dive Share to FacebookShare to Twitter (9/18, Merod) reports 41 percent of teens also use AI for language translation. Among those using AI for schoolwork, 46 percent did so “without their teacher’s permission,” 41 percent with permission, and 12 percent were unsure. The survey indicated that 37 percent of teens “said they were unsure if their schools have established rules on AI,” while 35 percent said their school has guidelines, and 27 percent reported no rules. Conducted with Ipsos Public Affairs, the survey “included 1,045 paired responses from parents and their teens.”

        Report: Black Students More Likely To Face AI Cheating Accusations. Education Week Share to FacebookShare to Twitter (9/18, Klein) reports, “Black students are more than twice as likely as their white or Hispanic peers to have their writing incorrectly flagged as the work of artificial intelligence tools, concludes a report released Sept. 18 by Common Sense Media.” The report states that “20 percent of Black teens were falsely accused of using AI to complete an assignment, compared with 7 percent of white and 10 percent of Latino teens.” This discrepancy may stem from flaws in AI detection software. Survey data from the Center for Democracy & Technology shows 68 percent of secondary school teachers “report using an AI detection tool regularly.” The report is “based on a nationally representative survey conducted from March to May of 1,045 adults in the United States.”

        Google Invests $25 Million In AI Training For Students, Teachers. Education Week Share to FacebookShare to Twitter (9/18, Klein) reports that Google.org, “the tech company’s philanthropy arm, plans to invest over $25 million to support five education nonprofits in helping educators and students learn more about how to use artificial intelligence.” According to a Common Sense Media survey, responsible AI usage by teens increases when teachers discuss its benefits and pitfalls, yet more than “7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey.” Google.org’s initiative, emphasizing culturally relevant AI curriculum, aims to address this gap. ISTE+ASCD “will receive $10 million of the $25 million over three years to reach about 200,000 educators.”

Tech Workers Struggle Amid Industry Shift

The Wall Street Journal Share to FacebookShare to Twitter (9/18, Bindley, Pisani, Subscription Publication) reports that despite efforts, tech job postings have dropped more than 30 percent since February 2020, with 137,000 layoffs this year. Companies are now focusing on revenue-generating products and artificial intelligence, reducing entry-level hires. AI expertise remains highly sought after, with AI engineers earning significantly more.

Chief Technologist At Amazon Robotics Discusses Robotics, AI

Forbes Share to FacebookShare to Twitter (9/20) contributor Bernard Marr spoke to Amazon Robotics Chief Technologist Tye Brady about Amazon’s advancements in robotics and AI. Brady highlighted that Amazon operates “the world’s largest fleet of industrial mobile robots,” with over 750,000 drive units alone. He introduced the Hercules drive unit, which improves warehouse efficiency by bringing shelves directly to workers, resulting in a 40% increase in storage density. Brady also discussed the autonomous robot Proteus, which features human-like indicators for safe navigation around people. He emphasized that Amazon aims to enhance human capabilities through robotics, stating, “We use robotics and automation, particularly fueled by AI, to extend human capability.” Brady envisions a future where cloud-connected robots collaborate with humans, transforming the supply chain and creating new job types.

Johnson Warns Against “Overregulation” In Interview On AI

In an interview with The Hill Share to FacebookShare to Twitter (9/19, Nazzaro), House Speaker Johnson “offered his thoughts on artificial intelligence...and foreign election interference – two hot-button issues that have become increasingly prevalent in the political landscape ahead of November. Describing himself as a ‘limited government conservative,’ the Speaker acknowledged the concerns surrounding the quickly emerging technology while also warning against overregulation of the tech sphere.” He argued that Congress “needs to take the threat of ‘deepfakes’ seriously, stating the abuses of the technology have been ‘repulsive,’ but also urged for caution.”

dtau...@gmail.com

unread,
Sep 29, 2024, 1:31:21 PM9/29/24
to ai-b...@googlegroups.com

Google Paid $2.7 Billion to Bring Back an AI Genius

Google reportedly has paid around $2.7 billion to license technology from Character.AI, a startup founded by former Google employee Noam Shazeer (pictured, left), who agreed to return to the tech giant as a vice president as part of the deal. Shazeer's return to Google is said to be the primary reason for the deal, fueling a debate about whether big tech companies are spending too much money as they rush to develop cutting-edge AI.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Miles Kruppa; Lauren Thomas; Tom Dotan (September 25, 2024); et al.

 

HP Spots Malware Attack Likely Built with Generative AI

HP security researchers identified malware likely created using generative AI. The firm's Sure Click anti-phishing system flagged a suspicious email attachment for French language users that contained an HTML file requiring a password to open it. After the researchers determined the correct password, the HTML generated a ZIP file containing the AsyncRAT malware. The researchers found the malicious code’s “structure, consistent comments for each function, and the choice of function names and variables" suggested the use of GenAI.
[
» Read full article ]

PC Magazine; Michael Kan (September 24, 2024)

 

List of Early Signups to EU’s AI Pact Missing Apple, Meta

The European Commission released a list of the first 100-plus signatories to its AI Pact, intended to get companies to voluntarily comply with the AI Act before the deadlines set forth in the law. Companies that joined the AI Pact include Amazon, Microsoft, OpenAI, Palantir, Samsung, SAP, Salesforce, Snap, Airbus, Porsche, Lenovo, Qualcomm, and Aleph Alpha; companies missing from the list include Apple, Meta, Mistral, Anthropic, Nvidia, and Spotify.
[
» Read full article ]

TechCrunch; Natasha Lomas (September 25, 2024)

 

Is Math the Path to Chatbots That Donʼt Make Stuff Up?

Silicon Valley startup Harmonic is focusing on mathematics as it works to develop an AI chatbot that never hallucinates. Harmonic's Aristotle not only produces correct answers, but also detailed computer programs proving its answers are right, which then can be used to improve its results. Some researchers believe the same techniques can be used to develop AI systems that can verify physical truths as well.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (September 23, 2024)

 

AI 'Godfather' Says OpenAI's New Model May Be Able to Deceive, Needs 'Much Stronger Safety Tests'

ACM A.M. Turing Award recipient Yoshua Bengio is concerned about the ability of OpenAI's new o1 model to deceive, noting it has a "far superior ability to reason than its predecessors." Said Bengio, "In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1's case."

[ » Read full article *May Require Paid Registration ]

Business Insider; Kenneth Niemeyer (September 21, 2024)

 

Microsoft AI Needs So Much Power It's Tapping Site of U.S. Nuclear Meltdown

Constellation Energy Corp. will spend $1.6 billion to revive the Three Mile Island nuclear plant in Pennsylvania, with Microsoft agreeing to purchase all the output energy for 20 years as it looks to access carbon-free electricity for its AI datacenters. Constellation said a reactor that was closed in 2019 will be placed back into service in 2028. The deal is part of a Microsoft initiative to run all of its datacenters on clean energy by 2025.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Will Wade; Dina Bass (September 20, 2024)

 

A Bottle of Water Per Email: The Hidden Environmental Costs of Using AI Chatbots

The Washington Post worked with researchers at the University of California, Riverside to determine how much water and electricity are used to write the average 100-word email using ChatGPT. They determined such an email requires little more than a single bottle of water but sending it once weekly for a year would require an amount of water equivalent to the consumption of every household in Rhode Island for 1.5 days.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Pranshu Verma; Shelly Tan (September 18, 2024)

 

Meta to EU: Your Tech Rules Threaten to Squelch the AI Boom

In an open letter coordinated by Facebook parent firm Meta Platforms, executives warned the European Union risks missing out on the full benefits of artificial intelligence because of its tech regulations. More than two dozen companies signed the letter, which said AI can boost productivity and expand the economy. The letter called on the EU to harmonize its rules and provide what the signatories refer to as a modern interpretation of the blocʼs data-protection law.

[ » Read full article *May Require Paid Registration ]

Wall Street Journal; Kim Mackrael (September 19, 2024)

 

U.N. Experts Urge United Nations to Lay Foundations for Global Governance of AI

A United Nations advisory body comprised of 39 AI leaders from 33 countries is calling on the U.N. to lay the foundation for global regulation of AI and set forth principles, including both international and human rights law, to guide the establishment of new AI governance institutions. Among other things, the advisory group recommends the creation of an international scientific panel on AI to ensure global understanding of the technology's capabilities and risks.
[ » Read full article ]

Associated Press; Edith M. Lederer (September 19, 2024)

 

Ban Warnings Fly as Users Probe the 'Thoughts' of OpenAI's Latest Model

OpenAI reportedly has sent warning emails threatening to ban users who attempt to determine how its newest "Strawberry" AI model works. With the o1 model, users can see a filtered interpretation of its chain-of-thought-process in the ChatGPT interface, but its raw chain of thought is hidden from users. Marco Figueroa, manager of Mozilla's GenAI bug bounty programs, said the move prevents positive red-teaming safety research from being performed on the model.
[ » Read full article ]

Ars Technica; Benj Edwards (September 16, 2024)

 

U.S. to Convene Global AI Safety Summit in November

The International Network of AI Safety Institutes will hold its first meeting Nov. 20-21 in San Francisco to discuss priority work areas and "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The meeting will involve technical experts from the AI Safety Institutes, or equivalent government-backed safety office, of member nations, which include Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the U.K., and the U.S.
[ » Read full article ]

Reuters; David Shepardson (September 18, 2024)

 

'There's a War for the Top 1%': Inside French Tech's Fierce Battle for the Best AI Talent

AI startups in Paris are courting top engineers from big tech firms. Said Mathias Frachon of tech recruitment firm The Product Crew, "There's a war only for the top 1%, but they are superstars and everyone is fighting over them." Paris is the focus of the latest AI talent war, since France is home to several prestigious universities known for producing top AI talent, prompting big tech firms like Facebook and Google to open research labs in the city.
[ » Read full article ]

Sifted; Daphné Leprince-Ringuet (September 13, 2024)

 

AI Models Improve Robot Functionality

MIT Technology Review Share to FacebookShare to Twitter (9/20, Williams) reported that researchers from New York University, Meta, and Hello Robot have developed AI models called robot utility models to help robots perform tasks in new environments without additional training. The models enable robots to open doors and drawers, and pick up tissues, bags, and cylindrical objects with a 90% success rate. The team used an iPhone and a reacher-grabber stick to record demonstrations in various environments, creating data sets for training. This approach aims to simplify and reduce the cost of deploying robots in homes.

Judge Criticizes Plaintiffs’ Attorneys In Case About Meta’s AI Technology

Politico Share to FacebookShare to Twitter (9/21, Gerstein) reports US District Judge Vincent Chhabria on Friday “brutally dressed down the lawyers for a group of high-profile authors who are suing Meta over the use of their work to train the company’s AI technology.” Chhabria “accused the plaintiffs’ attorneys of dragging out litigation that may help set important guardrails for the emerging technology.” He said to the attorneys, “You are not doing your job. This is an important case. ... You and your team have taken on a case that you are either unwilling or unable to litigate properly.” Politico points out that the lawsuit “is one of a flurry of cases publishing companies, artists and authors filed last year against big tech companies, accusing them of importing copyrighted material into AI training models without permission.”

Teachers Address AI’s Struggles With Math Education

Education Week Share to FacebookShare to Twitter (9/20, Schwartz) reported that artificial intelligence (AI) tools like ChatGPT “regularly answer math questions incorrectly,” posing challenges for teachers and students. Unlike calculators, AI chatbots use text prediction, leading to inconsistent and incorrect answers. Khanmigo, “an AI tutor created by the online education nonprofit Khan Academy, regularly struggled with basic computation,” prompting updates to direct numerical problems to a calculator. OpenAI, “the organization that created ChatGPT, [also] announced a new version of the technology designed to better reason through complex math tasks.” One eighth-grade teacher in Alabama uses AI for lesson brainstorming but encourages students to critically evaluate AI-generated answers. Surveys have shown “that teachers are hesitant about bringing AI into the classroom, in part due to concerns about chatbots presenting them or their students incorrect information.”

LinkedIn, Meta, X Use User Data For AI Training

Fortune Share to FacebookShare to Twitter (9/23, Brice) reports that LinkedIn, Meta, and X are using user data to train their AI models. LinkedIn began using user posts without notification, while Meta has used Facebook and Instagram data since 2007. X uses public posts for its AI chatbot Grok. Opting out involves navigating complex settings on these platforms. According to the article, “TikTok, whose data policies are under scrutiny amid a possible U.S. ban, hasn’t clearly stated whether it harvests user data for any generative AI tools.”

Princeton Researchers Critique AI Hype

Wired Share to FacebookShare to Twitter (9/24, Rogers) reports that Princeton University professor Arvind Narayanan and PhD candidate Sayash Kapoor have released a book, “AI Snake Oil,” based on their Substack newsletter, critiquing the exaggerated claims surrounding artificial intelligence. Narayanan “makes clear, during a conversation with WIRED, that his rebuke is not aimed at the software per say, but rather the culprits who continue to spread misleading claims about artificial intelligence.” The book identifies three groups perpetuating AI hype: “the companies selling AI, researchers studying AI, and journalists covering AI.” Companies claiming “to predict the future using algorithms are positioned as potentially the most fraudulent,” often affecting minorities and impoverished individuals. The authors criticize companies “prioritizing long-term risk factors above the impact AI tools have on people right now.”

Billionaire Predicts AI Will Replace Most Jobs

Fortune Share to FacebookShare to Twitter (9/24, Royle) reports Silicon Valley billionaire Vinod Khosla predicts AI will handle 80% of work in 80% of jobs, including roles like doctors, salespeople, and engineers. He suggests universal basic income to prevent economic dystopia and foresees a potential three-day workweek if AI is used positively. Khosla’s views align with other tech leaders like Bill Gates and Elon Musk, who also anticipate reduced work hours due to AI advancements.

Meta Declines EU’s Voluntary AI Safety Pledge

Bloomberg Share to FacebookShare to Twitter (9/24, Volpicelli, Subscription Publication) reports that Meta Platforms Inc. is declining to join the European Union’s voluntary AI safety pledge, unlike Microsoft and Google’s Alphabet. The AI Pact, a precursor to the AI Act effective in 2027, seeks compliance with key AI Act principles. Meta’s open-source AI model, Llama, poses compliance challenges, according to the article. Meta’s spokesperson indicated potential future participation. The European Commission will reveal the full list of signatories on Wednesday.

OpenAI Pitches White House On Unprecedented Data Center Buildout

Bloomberg Share to FacebookShare to Twitter (9/24, Ghaffary, Subscription Publication) reports that OpenAI has proposed to the Biden administration the construction of massive data centers, each capable of using as much power as entire cities, to advance artificial intelligence (AI) development. Following a recent White House meeting attended by OpenAI CEO Sam Altman and other tech leaders, the company shared a document with officials highlighting the economic and national security benefits of building 5 gigawatt (GW) data centers across various US states. The proposal is based on an analysis conducted with external experts.

Seattle Hackathon Showcases AI-Human Collaboration

GeekWire Share to FacebookShare to Twitter (9/24) reports that a hackathon in Seattle, hosted by AI Tinkerers, showcased AI applications that combine human and machine capabilities. Held at the Foundations space in Capitol Hill, the event featured engineers from Microsoft, Amazon, and Google. The top prize went to MetabolixAI for its personalized nutrition insights and meal planning system. The runner-up was AI DevRel Project, enhancing developer relations, and the community pick was LeadScore, which uses AI to score inbound leads. The winning teams received nearly $15,000 in prize money. The event was supported by Anthropic and CopilotKit.

OpenAI CEO Emphasizes AI Infrastructure Investment

Insider Share to FacebookShare to Twitter (9/25, Tangalakis-Lippert) reports that OpenAI CEO Sam Altman emphasized the importance of massive investment in artificial intelligence (AI) infrastructure in a blog post on Monday. Altman argued that to make AI widely accessible, significant investments in computing power and energy are required to avoid AI becoming a limited resource that could lead to global conflicts. Last week, Microsoft and BlackRock launched a $30 billion fund to enhance AI competitiveness and energy infrastructure. Earlier this month, AI leaders, including Altman, met at a White House roundtable to discuss AI development’s alignment with national security and economic goals. However, experts expressed concerns about the economic and environmental costs of large-scale data centers.

        Altman Lobbies US Officials, Foreign Investors On Potential For Major Tech Infrastructure Projects. The New York Times Share to FacebookShare to Twitter (9/25, Metz, Mickle) examines “OpenAI’s blueprint for the world’s technology future,” with CEO Sam Altman calling for investors, chipmakers, and officials to “unite on a multitrillion-dollar effort to erect new computer chip factories and data centers across the globe, including in the Middle East.” Nine sources described a plan which “would create countless data centers providing a global reservoir of computing power dedicated to building the next generation of A.I.,” and “as far-fetched as it may have seemed...Altman’s campaign showed how in just a few years he has become one of the world’s most influential tech executives, able in a span of weeks to gain an audience with Middle Eastern money, Asian manufacturing giants and top U.S. regulators.”

        OpenAI CTO To Leave Company. CNBC Share to FacebookShare to Twitter (9/25, Field) reports OpenAI Chief Technology Officer Mira Murati “said Wednesday that she is leaving the company after six and a half years.” Murati marks “the latest high-level executive to depart the startup.” CNBC adds, “While OpenAI has been in hyper-growth mode since late 2022, when it launched ChatGPT, it has been simultaneously riddled with controversy and high-level employee departures, with some current and former employees concerned that the company is growing too quickly to operate safely.”

        OpenAI Agrees to Training Data Review. Advanced Television Share to FacebookShare to Twitter (9/25) reports that OpenAI will provide access to its training data to determine if copyrighted works were used. This follows a court filing where authors in a class action lawsuit agreed on protocols for inspecting the information. The agreement stems from lawsuits accusing OpenAI of using web content to produce copyright-infringing answers via ChatGPT. Although some claims were dismissed, direct copyright infringement claims remain. The inspection will occur at OpenAI’s San Francisco office under strict conditions, including non-disclosure agreements and secured computer access without internet.

College Students Use AI To Avoid Reading Assignments

Inside Higher Ed Share to FacebookShare to Twitter (9/25, Alonso) reports that many college students are increasingly using artificial intelligence (AI) tools like ChatGPT to avoid completing their reading assignments. One history major “reads about 250 pages per week but often uses artificial intelligence” to summarize his weekly reading due to time constraints from his job and extracurricular activities. Faculty members “frequently note how much less willing their Gen Z students are to read for class than earlier generations,” attributing this to shorter attention spans and the impact of the COVID-19 pandemic on learning. Some professors adapt by incorporating reading sessions in class or using guided readings.

Amazon Among Signatories Of EU’s AI Pact Initiative

TechCrunch Share to FacebookShare to Twitter (9/25, Lomas) reports the European Commission has announced over 100 signatories to the AI Pact, aimed at encouraging companies to publish voluntary pledges regarding their AI practices. The initiative follows the introduction of the AI Act, which will take years to fully implement. Signatories, including Amazon, Microsoft, and OpenAI, must commit to adopting an AI governance strategy, identifying high-risk AI systems, and promoting AI awareness among staff. The Pact allows companies to select from a long list of potential pledges, fostering competition in AI safety compliance. Notable absences from the signatory list include Apple and Meta, which have opted to focus on compliance with the AI Act directly. The EU outlines significant penalties for non-compliance with the AI Act, including up to 7% of global annual revenue for violating banned uses of AI, up to 3% for other non-compliance, and up to 1.5% for supplying incorrect information.

FTC Targets AI Companies Over “Deceptive” Practices

Reuters Share to FacebookShare to Twitter (9/25, Godoy) reports the FTC “announced actions against five companies on Wednesday that it said used artificial intelligence in deceptive and unfair ways,” including three which “purported to help consumers generate passive income by opening e-commerce storefronts.” The agency “also settled with a company called DoNotPay over its claim to provide automated legal services, and with Rytr, an AI writing tool that the agency said offered a feature that allows users to generate fake product reviews.” FTC Chair Lina Khan said, “Using AI tools to trick, mislead, or defraud people is illegal. The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.”

Schools Struggle To Combat AI-Enabled “Deepfakes”

Education Week Share to FacebookShare to Twitter (9/26) reports a new Center for Democracy & Technology study reveals schools are inadequately addressing AI-enabled sexual harassment. Deepfakes, digitally manipulated media, primarily involve students as both perpetrators and victims. The survey found 40% of students and 29% of teachers knew of deepfakes shared in the 2023-24 school year. However, only 19% of students reported their schools explained what deepfakes are to students. Moreover, 60% of teachers and 67% of parents said schools lacked policies for addressing such incidents. Kristin Woelfel, a policy counsel at the Center, attributed the increased risk due to widespread access to AI tools. She said, “There’s really no limit as to who could be impacted by this.” National Student Council president Anjali Verma described the victim experience as “scary” and “traumatic.” Woelfel emphasized the need for preventive education and victim support. The survey included 1,316 high school students, 1,006 middle and high school teachers, and 1,028 parents.

OpenAI To Remove Nonprofit Board Control Over Main Business; Altman To Gain Equity

Bloomberg Share to FacebookShare to Twitter (9/25, Metz, Subscription Publication) reports that OpenAI plans to restructure, removing its nonprofit board’s control over its main business. The nonprofit arm will retain a minority stake in the for-profit company, and CEO Sam Altman will gain equity. The reorganized for-profit entity is potentially worth $150 billion. OpenAI did not respond to requests for comment.

        TechCrunch Share to FacebookShare to Twitter (9/25, Wiggers) reports, citing Reuters Share to FacebookShare to Twitter (9/26, Cai), that OpenAI intends to become a for-profit benefit corporation “similar to rivals such as Anthropic and Elon Musk’s xAI.” TechCrunch adds that the restructuring’s intent is to attract outside investors who have objected to OpenAI’s current cap on returns. Nonetheless, the move is seen as likely to prompt concerns over the restructured entity’s accountability in its pursuit of superintelligent AI.

        The Telegraph (UK) Share to FacebookShare to Twitter (9/26) reports that Elon Musk, “who quit OpenAI in 2018 amid a row with executives including Mr Altman, wrote on X: ‘You can’t just convert a non-profit into a for-profit. That is illegal.’ He added: ‘Sam Altman is Little Finger,’ a reference to the Machiavellian character in the TV series Game of Thrones.”

        Also reporting is Insider Share to FacebookShare to Twitter (9/25, Varanasi).

Federal Reserve Governor: AI Could Be Inflationary In Short-Term

Reuters Share to FacebookShare to Twitter (9/26) reports Federal Reserve Governor Lisa Cook on Thursday “said that while she expects artificial intelligence over the longer-run to boost productivity and therefore allow higher employment without correspondingly higher inflation, AI may add to inflationary pressures in the short-term.” She told an event at The Ohio State University, “There’s a lot of demand being created, and then you have consumption that is augmented,” adding that “the effects of AI on inflation are uncertain, as they are on the labor market as well.”

Debate Over AI’s Role In Education Intensifies In New School Year

The Seventy Four Share to FacebookShare to Twitter (9/26, Montalvo) reports that “the debate over AI’s role in education is intensifying” as the new school year begins. The Education Department’s Office of Educational Technology released guidelines for EdTech companies titled “Designing for Education with Artificial Intelligence,” emphasizing “responsible innovation” and incorporating feedback from educators and students. The XQ Institute advocates for AI’s ethical, transparent, and equitable use, partnering with educators and developers to tailor AI tools to student needs. A collaboration between Crosstown High, an XQ school in Memphis, Tennessee, and EdTech company Inkwire exemplifies effective partnerships, ensuring AI tools are culturally responsive and pedagogically sound.

dtau...@gmail.com

unread,
Oct 5, 2024, 8:31:39 AM10/5/24
to ai-b...@googlegroups.com

Academics to Chair Drafting the Code of Practice for General-Purpose AI

The European Commission said several academics will serve as chairs and vice chairs of working groups tasked with drafting a Code of Practice on general-purpose artificial intelligence (GPAI). This Code of Practice will shape the risk management and transparency requirements of the EU's AI Act. The first draft is expected in early November.
[ » Read full article ]

Euractiv; Jacob Wulff (September 30, 2024)

 

Devs Gaining Little (if Anything) from AI Coding Assistants

An Uplevel study of 800 developers' output over three months while using GitHub Copilot found no significant increase in productivity compared to the three-month period prior to adopting the AI coding assistant. Developers using Copilot also did not report substantial improvements in pull request (PR) cycle time or PR throughput, the study found, while 41% more bugs were introduced by Copilot use.
[ » Read full article ]

CIO; Grant Gross (September 26, 2024)

 

AI Crawlers Are Hammering Sites

Some websites are being hit with so many queries from AI crawlers that their performance is impacted. iFixit recently reported close to a million queries in just over 24 hours, which it attributed to a crawler from Anthropic. Game UI Database said its website almost came to a halt due to a crawler from OpenAI hitting it around 200 times a second. Said iFixit's Kyle Wiens, "There are polite levels of crawling, and this superseded that threshold."
[ » Read full article ]

Fast Company; Chris Stokel-Walker (September 26, 2024)

 

California Governor Vetoes AI Safety Bill

California Governor Gavin Newsom vetoed a state measure that would have imposed safety vetting requirements for powerful AI models. Newsom said the legislation “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data.” He said of the bill, "I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
[ » Read full article ]

Politico; Lara Korte; Jeremy B. White (September 29, 2024)

 

Turning OpenAI into a Real Business is Tearing It Apart

The exit of OpenAI CTO Mira Murati (pictured) is the latest in a series of departures as the firm shifts from a nonprofit lab to a for-profit corporation. So far this year, 20 researchers and executives have left OpenAI. Concerns expressed by current and former employees include rushed product announcements and safety testing and CEO Sam Altman's absence from day-to-day operations as he travels on fundraising missions.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Deepa Seetharaman (September 27, 2024)

 

Extreme Weather Is Taxing Utilities More Often. Can AI Help?

Electric utilities increasingly are turning to AI to improve severe weather predictions and identify ways to harden the electrical grid as aging infrastructure is being hit by severe weather more frequently. Extreme weather currently is the leading cause of major U.S. power outages, with more than 4 million without power following Hurricane Helene on Sept. 27.

[ » Read full article *May Require Paid Registration ]

The New York Times; Austyn Gaffney (September 27, 2024)

 

Singapore LNG Demand To Rise Amid AI Boom

Bloomberg Share to FacebookShare to Twitter (9/27, Ong, Subscription Publication) reported Singapore’s LNG demand will increase short-term, driven by the AI boom and data center growth, according to Singapore LNG Corp. CEO Leong Wei Hung. The digital sector significantly impacts energy needs, outpacing infrastructure development. Tech giants Amazon and Microsoft plan major data center investments in Southeast Asia. Singapore aims to boost power allocation for data centers by 35%. The country’s reliance on imported gas challenges its decarbonization efforts, with plans to import 6 GW of green power by 2035. Leong expressed optimism for LNG’s role, saying, “While we wait for renewables to be reasonably priced, LNG has to be the solution.”

OpenAI Seeks Government Support for Massive Data Centers

Fortune Share to FacebookShare to Twitter (9/27, Meyer) reports that OpenAI is seeking U.S. government support to build data centers requiring 5 gigawatts of power each, equivalent to the output of five nuclear reactors. CEO Sam Altman discussed the plan at a recent White House meeting. Experts, including Constellation Energy CEO Joe Dominguez and Aurora Energy Research’s Zachary Edelen, express skepticism about the feasibility due to immense power demands and grid reliability issues. The proposal highlights the growing energy needs of AI technologies and the challenges of sustainable power sourcing.

HHS Launches AI Cybersecurity Task Force

Inside Health Policy Share to FacebookShare to Twitter (9/29, Robles, Subscription Publication) reports behind a paywall that Greg Garcia, executive director of the Health Sector Coordinating Council Cybersecurity Working Group, announced an upcoming joint task force with HHS and industry to address AI’s cybersecurity implications. The task force will explore AI-related risks and threats and how AI can enhance cybersecurity defenses. Garcia made the announcement at AHIP’s digital health conference. Micky Tripathi, head of HHS’s health information technology office, confirmed the collaboration.

OpenAI Faces Complicated Road To Becoming For-Profit Enterprise

The Wall Street Journal Share to FacebookShare to Twitter (9/29, Subscription Publication) highlights how OpenAI’s plan to become a for-profit firm is going to be a complex undertaking. OpenAI will need to grapple with regulatory rules in no less than two states, figure out how to allocate equity in the for-profit firm, and divide assets with the charitable nonprofit that now governs OpenAI and is going to continue to exist.

        OpenAI Expecting Sizable Losses For 2024. Fortune Share to FacebookShare to Twitter (9/28, Ma) reports OpenAI anticipates sizable “losses this year, but revenue over the next five years will continue to be explosive as the company raises fees on its signature chatbot.” Documents which the New York Times saw show that the firm anticipates “revenue of $3.7 billion in 2024.” However, the company sees a loss totaling $5 billion, which the Times reported doesn’t account for equity-based compensation.

Nvidia CEO Advocates AI For Climate Benefits

E&E News Share to FacebookShare to Twitter (9/30, Hiar, Subscription Publication) reports that Nvidia CEO Jensen Huang argued in Washington that artificial intelligence could benefit the climate by enhancing productivity with less energy consumption. Speaking at the Bipartisan Policy Center, Huang emphasized, “The energy efficiency and the productivity gains that we’ll get from it...is going to be incredible.” His visit coincided with Climate Week in New York and the advancement of AI-related legislation in the House. Huang highlighted the efficiency of Nvidia’s specialized chips to a captivated audience of energy executives, investors, and academics.

WSU Develops AI-Guided 3D Printing For Surgical Models

3D Printing Industry Share to FacebookShare to Twitter (10/1) reports that researchers at Washington State University have created an AI-guided 3D printing process to produce detailed human organ replicas. This technique allows surgeons to rehearse complex procedures with patient-specific models. The AI optimizes printer settings for accuracy and speed, using a multi-objective Bayesian Optimization approach. NVIDIA A40 GPUs and NeRF technology ensure model fidelity. The U.S. Department of Commerce has introduced new regulations restricting advanced 3D printing exports to prevent misuse in sensitive applications.

Canada’s AI Regulation Needs Global Collaboration, Says AWS Director

The Canadian Press Share to FacebookShare to Twitter (10/2) reports Amazon Web Services Director of Global AI Nicole Foster urged Canada to create AI legislation that is “interoperable” with regulations in other countries to avoid hindering startups’ ambitions to operate globally. Foster emphasized that unique rules for Canada could limit opportunities for local companies, stating, “A lot of our startups are wonderfully ambitious and have ambitions to be able to sell and do business around the world.” As Canada develops its AI and Data Act, concerns arise that stringent regulations could stifle innovation. Foster highlighted the importance of focusing on high-risk AI systems while avoiding unnecessary regulation of less critical technologies, saying, “I think (it’s about) being focused on the risks that we need to address.”

Researchers Use AI-Generated Images To Train Robots

MIT Technology Review Share to FacebookShare to Twitter (10/3, Williams) reports that researchers from Stephen James’s Robot Learning Lab in London have developed Genima, a system using AI models like Stable Diffusion to create training data for robots. By generating images of robot movements, Genima aids in simulations and real-world applications, improving task completion. The research, to be presented at the Conference on Robot Learning, shows potential for training diverse robots efficiently.

Army Researchers Take Aim At Sepsis In Burn Patients Using AI Machine Learning

Stars and Stripes Share to FacebookShare to Twitter (10/3) reports researchers at the Walter Reed Army Institute of Research have developed SeptiBurnAlert, a system that employs artificial intelligence (AI) to predict sepsis in burn patients by analyzing biomolecular changes in blood. The system, which has shown 85-90% accuracy in initial tests, is expected to reach the commercial market in approximately three years, pending FDA approval.

CrowdStrike CEO Discusses AI’s Impact On Cybersecurity

SiliconANGLE Share to FacebookShare to Twitter (10/3) reports that artificial intelligence is revolutionizing cybersecurity by enhancing threat detection and prevention. CrowdStrike CEO George Kurtz, speaking at Fal. Con 2024 with theCUBE, emphasized the importance of continuous innovation in security. He noted that partnerships with companies like Microsoft, Nvidia, and Amazon Web Services are crucial for addressing modern threats. “No one company can solve everything in security,” Kurtz stated. CrowdStrike’s early adoption of AI, particularly machine learning, has transformed its security platform, allowing for rapid problem-solving and integration of new technologies. The company’s Falcon Flex service and Next-Gen SIEM system exemplify its commitment to customer-centric solutions, driven by client feedback.

dtau...@gmail.com

unread,
Oct 13, 2024, 4:35:41 PM10/13/24
to ai-b...@googlegroups.com

Google DeepMind Boss Awarded Nobel for Proteins Breakthrough

British computer science professor Demis Hassabis, founder of the AI firm that became Google DeepMind, is among the recipients of the Nobel Prize for Chemistry. Hassabis and DeepMind Technologies John Jumper are being recognized their development of an AI tool, AlphaFold2, to predict the structures of nearly all known proteins. They share the Nobel Prize with University of Washington's David Baker, who was recognized for designing a new protein using amino acids.
[ » Read full article ]

BBC; Georgina Rannard (October 9, 2024)

 

Pioneers in AI Awarded Nobel Prize in Physics

ACM A.M. Turing Award laureate Geoffrey Hinton, known as the ‘godfather of AI’, and Princeton University's John Hopfield on Tuesday were named to receive the Nobel Prize in physics for helping to create the building blocks of machine learning. Hopfield created an associative memory that can store and reconstruct images and other patterns in data. Hinton used Hopfield’s work as the foundation for the Boltzmann machine, a type of stochastic recurrent neural network.
[ » Read full article ]

Associated Press; Daniel Niemann; Mike Corder; Seth Borenstein (October 8, 2024)

 

Software Engineers In for Rough Ride as AI Adoption Ramps Up

Gartner reports that to keep up with rising demand for generative AI, 80% of the software engineering workforce will have to upskill by 2027. Gartner found AI tools will support developers' existing work in the short term and provide small productivity gains, but in the medium term, AI-native software engineering will emerge, in which most code is generated by AI.
[ » Read full article ]

ITPro; George Fitzmaurice (October 3, 2024)

 

Texas Regulator Wants Datacenters to Build Power Plants

The Public Utility Commission of Texas said developers of AI datacenters looking to co-locate with a power plant and connect to the grid within 12 to 15 months will have to build the power plant as well. Commission chair Thomas Gleeson said datacenters would be welcome to build power plants that generate more electricity than needed and sell the excess to the grid.
[ » Read full article ]

Bloomberg; Naureen S. Malik (October 3, 2024)

 

Taiwan's AI Goals Will Need More Tech Talent

Taiwan's government is hoping the island-nation can become a hub for innovation in advanced AI. However, Taiwan is in dire need of more skilled workers given its small, aging population and low birth rate. Taiwan's National Development Council plans to introduce "Global Elite" cards to attract top-tier foreign professionals to work for local companies offering yearly salaries of more than NT$6 million (about US$188,000).
[ » Read full article ]

IEEE Spectrum; Yu-Tzu Chiu (October 9, 2024)

 

AI Filling Customer Service Roles in Japan amid Labor Shortage

Japan's labor shortage has prompted firms in a range of industries to fill customer service roles with AI technology. At Ridgelinez Ltd., for instance, an AI assistant recommends auto parts based on the customer's needs, car model, and available stock. An AI assistant deployed by Oki Electric Industry Co. and Kyushu Railway Co. helps passengers navigate station maps and transfers in Japanese, English, and Chinese. Startup Sapeet Co. uses an AI to train its customer service staff.
[ » Read full article ]

Kyodo News (Japan) (October 5, 2024)

 

One of the Biggest AI Boomtowns Is Rising in Malaysia

The Malaysian state of Johor, known for its palm-oil plantations, is home to some of the largest AI construction projects in the world. Regional bank Maybank reported that Johor will see $3.8 billion in total datacenter investments this year. Johor is attractive to datacenter developers due to its abundant land, water, and power, as well as its proximity to Singapore, which has one of the worlds densest intersections of undersea Internet cables.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Stu Woo (October 7, 2024)

 

Stanford Study Finds AI Models Still Show Racial Bias

Forbes Share to FacebookShare to Twitter (10/7, McKendrick) reports that researchers at Stanford University have found Share to FacebookShare to Twitter that large language models (LLMs) continue to exhibit racial biases, particularly against African American English speakers. Despite efforts to address bias, popular LLMs like OpenAI’s GPT series and Google’s T5 still perpetuate harmful stereotypes. The study attributes this to biased training data and suggests that AI models often conceal racism rather than eliminate it.

Google AI Converts Texts To Podcasts

The Washington Post Share to FacebookShare to Twitter (10/7) reports Google’s experimental AI tool, NotebookLM, can transform written documents into podcasts. Users can upload up to 50 documents, and the AI generates summaries and creates audio content, mimicking human conversation. Geoffrey A. Fowler tested the tool with Facebook’s privacy policy, resulting in a 7½ minute podcast. The AI-generated hosts engage in a dialogue, providing a new way to digest information. Google’s Raiza Martin describes it as “talking with your notebook.” However, concerns about accuracy and emphasis arise, as the AI sometimes misinterprets or overgeneralizes content. Steven Johnson of Google Labs highlights the potential for creating podcasts on niche topics without traditional resources. Critics like Shriram Krishnamurthi note the AI’s tendency to miss key points. Educators express cautious optimism, acknowledging AI’s ability to assist learning while stressing the importance of critical thinking and reading original texts.

AI-Powered Digital Tutoring Assistant May Help Improve Students’ Short-Term Performance In Math

The Seventy Four Share to FacebookShare to Twitter reported that “AI-powered digital tutoring assistant designed by Stanford University researchers shows modest promise at improving students’ short-term performance in math.” In fact, the “weakest tutors became nearly as effective as their more highly-rated peers,” according to the new study released Monday. This suggests that the “best use of artificial intelligence in virtual tutoring for now might be in supporting, not supplanting, human instructors.”

        K-12 Dive Share to FacebookShare to Twitter (10/7, Arundel) reports Tutor CoPilot, the open-source tool, “can be embedded in any tutoring platform and helps live tutors ask guiding questions to students and respond to student needs. However, tutors working with the tool suggested improvements to make the guidance for tutors more grade-appropriate.” This is the “first-ever randomized controlled trial of a human-AI system in live tutoring situations.” Students whose tutors used Tutor CoPilot “were 4 percentage points more likely to progress through math tutoring session assessments successfully compared to students whose tutors did not have AI assistance, the study found.”

Google Enhances Coding Assistant With Gemini AI

TechRadar Share to FacebookShare to Twitter (10/9) reports that Google has upgraded its coding assistant for enterprise developers using the Gemini AI platform. The Gemini Code Assist Enterprise service aims to simplify code writing, enhancing productivity and efficiency. It offers improved code customization, suggesting enhancements based on organizational practices and libraries. Announced in April 2024, the service uses the Gemini 1.5 Pro AI model for code analysis and optimization.

        InfoWorld Share to FacebookShare to Twitter (10/9) also reports.

OpenAI Seeks Dismissal Of Elon Musk’s Lawsuit

Forbes Share to FacebookShare to Twitter (10/9, Ray) reports that OpenAI filed a motion on Tuesday in a California federal court to dismiss Elon Musk’s lawsuit, labeling it a “harassment effort” to benefit his AI startup xAI. OpenAI claims Musk, once a supporter, “abandoned the venture” after failing to dominate it. The company alleges Musk’s federal lawsuit mirrors a previous state court case he dropped in June. OpenAI argues the lawsuit is a “PR stunt” with “implausible” claims. Musk initially sued OpenAI, accusing it of prioritizing profit over its founding mission.

OpenAI’s GPT-4o Displays Unexpected Conversational Abilities

Inside Higher Ed Share to FacebookShare to Twitter (10/10, Schroeder) reports that OpenAI’s GPT-4o app exhibited unexpected conversational capabilities last month, engaging users by recalling past interactions and initiating dialogue without prompts. In one instance, the AI inquired about a user’s first week at high school. This behavior, described by OpenAI as a glitch, highlights a shift towards AI acting as a “coworker” or “friend.” OpenAI’s o1 model, which includes “chain of thought reasoning,” aims to enhance AI’s problem-solving abilities, outperforming humans in certain tasks.

Microsoft Unveils AI Tools To Alleviate Strain On Healthcare Professionals

CNBC Share to FacebookShare to Twitter (10/10, Capoot) reports Microsoft announced a suite of new AI tools aimed at reducing the administrative workload for healthcare professionals, a move that could significantly address clinician burnout. These innovations, including medical imaging models and automated documentation solutions, are designed to streamline processes for physicians and nurses, who currently spend a substantial portion of their time on paperwork. By collaborating with major health institutions, Microsoft aims to enhance healthcare efficiency and foster better collaboration among medical staff.

NYT Inspects ChatGPT Code Amid Copyright Lawsuits

Insider Share to FacebookShare to Twitter (10/10, Shamsian) reports lawyers for The New York Times are inspecting ChatGPT’s source code in a secure, internet-free environment as part of copyright infringement lawsuits against OpenAI and Microsoft. The lawsuits claim OpenAI used copyrighted material, including NYT articles, to train its models without compensation. The legal examination aims to determine if OpenAI’s practices constitute “fair use.” The lawsuits, involving major publishers and authors, could set precedents for AI model training legality in the US. The outcomes may influence future AI development and copyright protection in journalism and other creative industries.

dtau...@gmail.com

unread,
Oct 19, 2024, 4:46:32 PM10/19/24
to ai-b...@googlegroups.com

Google Goes Nuclear

Google signed a deal with Kairos Power to use small nuclear reactors to generate the energy needed to power its AI datacenters. The company says it plans to start using the first reactor this decade, and to bring more online over the next decade. Said Google's Michael Terrell, "This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone."
[ » Read full article ]

BBC News; João da Silva (October 15, 2024)

 

Robot's Alan Turing Portrait to be Auctioned by Sotheby's

Auction house Sotheby's next month will auction a portrait of Alan Turing painted by a robot; it is expected to fetch as much as £150,000 ($196,000). The piece, created by humanoid robot Ai-Da, is entitled "AI God" and was exhibited at the United Nations in May 2024. Gallery owner and founder of the Ai-Da Robot studio, Aidan Meller, headed the team that created the robot with experts at the U.K. universities of Oxford and Birmingham.
[
» Read full article ]

Deutsche Welle (Germany) (October 16, 2024)

 

PLCHound Algorithm Aims to Boost Critical Infrastructure Security

Researchers at the Georgia Institute of Technology's Cyber-Physical Security Lab say an algorithm they developed boosts critical infrastructure security by more accurately identifying devices vulnerable to remote cyberattacks. The PLCHound algorithm uses advanced natural language processing and machine learning techniques to sift through databases of Internet records and log the IP addresses and security of connected devices.
[
» Read full article ]

Industrial Cyber; Anna Ribeiro (October 16, 2024)

 

LeCun Thinks AI Is Dumber Than a Cat

AI pioneer and ACM A.M. Turing Award laureate Yann LeCun says some experts are exaggerating AI's power and risks. LeCun believes today’s AI models lack the intelligence of pets. When an OpenAI researcher stressed the need to control ultra-intelligent AI, LeCun responded, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat."

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Christopher Mims (October 11, 2024)

 

Nevada Asked AI Which Students Need Help

Nevada's reliance on AI to estimate the number of children who would struggle in school has sparked an outcry. Before, Nevada treated all low-income students as “at risk” of academic and social troubles. The AI weighed dozens of other factors, slashing the number of students classified as at-risk to less than 65,000 last year from over 270,000 in 2022. As a result, many schools saw state money that they had relied on disappear.

[ » Read full article *May Require Paid Registration ]

The New York Times; Troy Closson (October 12, 2024)

 

Nobel Prizes Recognize AI Innovations, Sparks Debate About Scientific Fields

Scientific American Share to FacebookShare to Twitter (10/14, Castelvecchi, Callaway, Kwon) reports that this year’s Nobel Prizes “recognized the transformative power of artificial intelligence (AI) in two of this year’s prizes,” awarding Geoffrey Hinton and John Hopfield in physics for neural networks and Demis Hassabis and John Jumper in chemistry for AlphaFold. The physics award sparked debate, with some questioning its relevance to physics. The chemistry prize acknowledged AlphaFold’s AI-driven protein folding, with David Jones noting its integration of existing scientific knowledge. AlphaFold “would not have been possible were it not for the Protein Data Bank, a freely available repository of more than 200,000 protein structures...determined using X-ray crystallography, cryo-electron microscopy and other experimental methods.”

Harvard Students Plan AI Innovations For Construction

Insider Share to FacebookShare to Twitter (10/13, Niemeyer) reports that Harvard juniors AnhPhu Nguyen and Caine Ardayfio, known for their I-Xray project using Meta Ray-Bans for facial recognition, are now focusing on AI applications in construction. The duo, who founded Harvard’s AR/VR club, previously developed various tech projects, including an electric skateboard and a robotic tentacle. They gained access to Meta glasses through their club, integrating AI into augmented reality glasses for real-time fact-checking. Ardayfio explained that AI-equipped autonomous construction robots can now make decisions, like waiting for a person to move, without hardcoding every movement.

 

TikTok Lays Off Hundreds As It Shifts To AI-Focused Content Moderation

Reuters Share to FacebookShare to Twitter (10/11, Latiff) reports TikTok “is laying off hundreds of employees from its global workforce, including a large number of staff in Malaysia, the company said on Friday, as it shifts focus towards a greater use of AI in content moderation.” Sources familiar with the matter “earlier told Reuters that more than 700 jobs were slashed in Malaysia” but the company clarified that less than 500 Malaysian employees were affected. The employees, “most of whom were involved in the firm’s content moderation operations, were informed of their dismissal by email late Wednesday, the sources said, requesting anonymity as they were not authorized to speak to media.”

OpenAI Faces Scrutiny Over Nonprofit Structure

The AP Share to FacebookShare to Twitter (10/12, Beaty) reported that OpenAI, the company behind ChatGPT, is under scrutiny regarding its nonprofit status amid a valuation surge to $157 billion. Nonprofit tax experts are concerned about OpenAI’s compliance with its charitable mission. OpenAI CEO Sam Altman confirmed potential restructuring, possibly converting to a public benefit corporation, though specifics are undisclosed. A source indicates no final decision on restructuring has been made. The board, led by Bret Taylor, aims to ensure the nonprofit’s sustainability. Andrew Steinberg notes restructuring would be complex but feasible. Concerns persist about OpenAI’s commitment to its mission, with critics like Elon Musk doubting its fidelity.

US Considers Limiting AI Chip Sales

Reuters Share to FacebookShare to Twitter (10/14, Tanna) reported that US officials are contemplating restrictions on sales of advanced AI chips from Nvidia and other American firms, targeting specific countries. Bloomberg Share to FacebookShare to Twitter (10/15, Subscription Publication) News, citing unnamed sources, revealed that the focus is on Persian Gulf nations, with plans to cap export licenses for national security reasons. Discussions are in preliminary stages and fluid. The US Commerce Department and Nvidia did not comment, while Intel and AMD have yet to respond to Reuters. A recent Commerce Department rule might facilitate AI chip shipments to Middle Eastern data centers. Last year, the Biden Administration expanded licensing requirements for advanced chip exports to over 40 countries, including some in the Middle East, to prevent diversion to China.

Survey Reveals Higher Ed’s AI Preparedness Concerns

Inside Higher Ed Share to FacebookShare to Twitter (10/16, Palmer) reports that Inside Higher Ed’s third annual Survey of Campus Chief Technology/Information Officers, in collaboration with Hanover Research, reveals that “just 9 percent of chief technology officers believe higher education is prepared to handle the new technology’s rise.” Released Wednesday, the survey highlights concerns about AI’s impact on academic integrity, with 60% of CTOs “worried to some degree about the risk generative AI poses to academic integrity.” Despite this, 46% are enthusiastic about AI’s potential benefits, although only 23% “said investing in artificial intelligence is an essential (1 percent) or high (22 percent) priority for their institution.” The survey, involving 82 CTOs, shows that AI is primarily used “to create virtual chat bots and assistants, which was the most popular application.”

Texas A&M University Researchers Use AI For Disaster Recovery

FOX Weather Share to FacebookShare to Twitter (10/15) reports that Texas A&M researchers are employing artificial intelligence and machine learning to expedite damage assessments following major hurricanes. Over a year, they “spent more than a year studying damage photos taken via drone from 10 major disasters,” including hurricanes Harvey, Michael, and Ida. The research team, led by Dr. Robin Murphy, “recruited 130 high school students from Texas and Pennsylvania” to label damage on 21,700 buildings. This data trained an AI system to identify storm-damaged infrastructure. With the new system, “researchers say if they can get drone video of an affected neighborhood, they can have a damage analysis ready in only four minutes, just by using a laptop.” The AI system has already been used “to help the state of Florida in the wake of Hurricanes Debby and Helene.”

Big Tech’s Capital Spending Soars Amid AI Push

The Wall Street Journal Share to FacebookShare to Twitter (10/16, Gallagher, Subscription Publication) reports major tech companies, including Amazon, have significantly increased capital spending this year, particularly on AI infrastructure. The combined capital spending of Microsoft, Amazon, Google, and Meta reached $106.2 billion in the first half of 2024, up 49% from the previous year. This surge is driven by investments in chips and other resources to support generative AI services. Wall Street expects these companies’ combined capital expenditures to top $60 billion in the third quarter and $231 billion for the full year. The Journal highlights that a growing number of analysts think Amazon’s spending on an ambitious satellite program could bring the company’s operating income below Wall Street’s stated targets for this year and next, potentially curbing the operating margin expansion the company has been delivering recently.

Boston Dynamics And Toyota Institute Partner On AI Robotics

TechCrunch Share to FacebookShare to Twitter (10/16, Heater) reports that Boston Dynamics and Toyota Research Institute announced plans to integrate AI-based robotic intelligence into the Atlas humanoid robot. This collaboration will leverage TRI’s work on large behavior models, akin to large language models like ChatGPT. TRI’s research has achieved 90% accuracy in household tasks through overnight training. Boston Dynamics CEO Robert Playter highlighted the partnership’s potential to address complex challenges in robotics. This deal is notable as Boston Dynamics and TRI are backed by automotive rivals Hyundai and Toyota, respectively, aiming to develop a general-purpose humanoid robot.

dtau...@gmail.com

unread,
Oct 26, 2024, 1:04:30 PM10/26/24
to ai-b...@googlegroups.com

U.S. Urges Agencies to ‘Harness’ AI for National Security

The first-ever national security memorandum on AI, issued by President Biden on Thursday, directs the federal government to take action to improve the security and diversity of chip supply chains and to provide AI developers with cybersecurity and counterintelligence to keep their inventions secure. An administration official added that “the U.S. should harness the most advanced AI systems with appropriate safeguards to achieve national security objectives."
[
» Read full article ]

The Hill; Miranda Nazzaro (October 24, 2024)

 

AI Scans RNA ‘Dark Matter,’ Uncovers 70,000 New Viruses

AI was used to uncover 70,500 previously-unknown RNA viruses. Using the protein-prediction tool ESMFold, developed by researchers at Meta, Shi Mang at Sun Yat-sen University in China and colleagues created a model, called LucaProt, and fed it sequencing and ESMFold protein-prediction data. They trained the model to recognize viral RNA-dependent RNA polymerase, a key protein used in RNA replication, and used it to find sequences that encoded these enzymes in the large tranche of genomic data.
[ » Read full article ]

Nature; Smriti Mallapaty (October 14, 2024)

 

Can AI Be Blamed for a Teen's Suicide?

The mother of Sewell Setzer III, a 14-year-old from Orlando, FL, who took his own life in February, is suing Character.AI, a role-playing app the lets users create and chat with AI characters. Setzer reportedly spent hours every day conversing with the chatbot, even confiding his thoughts of suicide. The lawsuit calls the technology "dangerous and untested."


[
» Read full article *May Require Paid Registration ]

The New York Times; Kevin Roose (October 23, 2024)

 

AI Decodes Oinks and Grunts to Keep Pigs Happy

An AI algorithm developed by researchers from universities in Denmark, Germany, Switzerland, France, Norway, and the Czech Republic interprets the sounds pigs make. Using the algorithm could potentially alert farmers to negative emotions in pigs so the farmers can improve their well-being, according to Elodie Mandel-Briefer at Denmark's University of Copenhagen.
[
» Read full article ]

Reuters; Jacob Gronholt-Pedersen (October 24, 2024)

 

Using AI, Radar to Unsnarl a 500-Year-Old Traffic Jam

South Korean company Bitsensing is partnering with the Italian city of Verona and Italy-based Famas Systems to manage traffic at Porta Nuova, a gateway to the city that has been standing for nearly 500 years. Bitsensing installed 10 of its traffic insight monitoring sensors (TIMOS) overlooking Porto Nuevo’s five entrance lanes and six exit lanes. The sensors’ on-device AI collects and transmits real-time data to an operations center supported by local servers.
[ » Read full article ]

IEEE Spectrum; Lawrence Ulrich (October 21, 2024)

 

Anguilla Turns AI Boom into Digital Gold Mine

The British territory of Anguilla, allotted control of the .ai Internet address in the 1990s, is capitalizing on the AI boom. Google, for example, uses google.ai to showcase its AI services, while Elon Musk uses x.ai as the homepage for his Grok AI chatbot. Anguilla’s earnings from Web domain registration fees quadrupled last year to $32 million, fueled by the surging interest in AI.
[ » Read full article ]

Associated Press; Kelvin Chan (October 15, 2024)

 

Vulnerabilities, AI Compete for Software Developers' Attention

The annual "State of the Software Supply Chain" report from software company Sonatype found that developers are on track to download more than 6.6 trillion software components in 2024, including a 70% increase in downloads of JavaScript components and an 87% increase in Python. Sonatype's Brian Fox said while the advent of AI is driving speedier development cycles, it is also making security more difficult.
[ » Read full article ]

Dark Reading; Robert Lemos (October 22, 2024)

 

 

C. Ebert and M. Beck, "Artificial Intelligence for Cybersecurity", IEEE Software, vol. 40, no. 06, pp. 27-34, Nov.-Dec. 2023.
Cybersecurity attacks are on a steep increase across industry domains.1,2 With ubiquitous connectivity and increasingly standard software stacks, basically all software is accessible and vulnerable. Yet, cybersecurity is not systematically deployed because necessary processes are demanding and need continuous attention paired with technology competences. Many software suppliers do not pay adequate attention and governance, resulting in problems such as weak communication protocols, insufficient passwords, and social engineering risks.
URL: https://doi.ieeecomputersociety.org/10.1109/MS.2023.3305726

A. Piplai et al., "Knowledge-Enhanced Neurosymbolic Artificial Intelligence for Cybersecurity and Privacy", IEEE Internet Computing, vol. 27, no. 05, pp. 43-48, Sept.-Oct. 2023.

Neurosymbolic artificial intelligence (AI) is an emerging and quickly advancing field that combines the subsymbolic strengths of (deep) neural networks and the explicit, symbolic knowledge contained in knowledge graphs (KGs) to enhance explainability and safety in AI systems. This approach addresses a key criticism of current generation systems, namely, their inability to generate human-understandable explanations for their outcomes and ensure safe behaviors, especially in scenarios with unknown unknowns (e.g., cybersecurity, privacy). The integration of neural networks, which excel at exploring complex data spaces, and symbolic KGs, which represent domain knowledge, allows AI systems to reason, learn, and generalize in a manner understandable to experts. This article describes how applications in cybersecurity and privacy, two of the most demanding domains in terms of the need for AI to be explainable while being highly accurate in complex environments, can benefit from neurosymbolic AI.
URL: https://doi.ieeecomputersociety.org/10.1109/MIC.2023.3299435

 

OpenAI-Microsoft Partnership Said To Be Experiencing Tension

The New York Times Share to FacebookShare to Twitter (10/18, A1) reports that OpenAI and Microsoft are experiencing tension in their partnership, initially praised as “the best bromance in tech.” OpenAI, led by CEO Sam Altman, sought additional investment from Microsoft after already receiving $13 billion. Microsoft hesitated following Altman’s temporary ousting and OpenAI’s projected $5 billion loss this year. Microsoft remains OpenAI’s largest investor but has also invested in Inflection, an OpenAI competitor. OpenAI secured a $10 billion computing deal with Oracle and recently closed a $6.6 billion funding round. OpenAI’s computing costs are expected to rise significantly. Microsoft and OpenAI have renegotiated terms, but OpenAI staff express dissatisfaction with the computing power provided by Microsoft.

        Another article in the New York Times Share to FacebookShare to Twitter (10/18) reports that Microsoft has hired employees from Inflection, an OpenAI rival, to hedge its AI investments, causing friction. Complaints have emerged about Microsoft’s handling of OpenAI software and insufficient computing power provision. OpenAI has since negotiated a $10 billion contract with Oracle for additional resources.

        TechCrunch Share to FacebookShare to Twitter (10/17, Loizos) reports, “Most fascinating perhaps is a reported clause in OpenAI’s contract with Microsoft that cuts off Microsoft’s access to OpenAI’s tech if the latter develops so-called artificial general intelligence (AGI), meaning an AI system capable of rivaling human thinking.” TechCrunch points out that OpenAI’s board “can reportedly decide when AGI has arrived, and CEO Sam Altman has already said that moment will be somewhat subjective. As he told this editor early last year, ‘The closer we get, the harder time I have answering [how far away AGI is] because I think that it’s going to be much blurrier, and much more of a gradual transition than people think.’”

        Microsoft, OpenAI Negotiate Equity Distribution Amid Transition To For-Profit Corporation. The Wall Street Journal Share to FacebookShare to Twitter (10/18, Jin, Driebusch, Subscription Publication) reports that OpenAI and Microsoft are negotiating how to translate Microsoft’s nearly $14 billion investment in OpenAI into equity amid the latter’s transition from a nonprofit to a for-profit public-benefit corporation that will maintain a nonprofit component. OpenAI, valued at $157 billion, faces challenges in distributing equity. Microsoft, advised by Morgan Stanley, could own a large stake, while OpenAI, advised by Goldman Sachs, navigates governance rights. Microsoft and OpenAI’s complex relationship includes financial and technological ties, including Microsoft’s role as OpenAI’s exclusive cloud services provider.

Congressional Leaders Negotiating Potential Lame-Duck Deal To Address Increasing Concerns About AI, Sources Say

Politico Share to FacebookShare to Twitter (10/18, Perano) reports, “Congressional leaders in the House and Senate are privately negotiating a deal to address increasing concerns about artificial intelligence, and they’re hoping to move a bill in the lame-duck period, two people close to the negotiations tell POLITICO.” The specifics of the package remain “in flux as Democratic and Republican leadership haggle over common ground,” but “several bills have passed through committees on a bipartisan basis related to AI research and workforce training bills, which could be prime areas for agreement.” However, “other subjects like AI’s role in misinformation, elections and national security are areas rife with potential partisan roadblocks and would likely be more difficult to include in a deal.” AI “has specifically been a priority for Majority Leader Chuck Schumer, who initiated the negotiations, according to one of the people familiar.”

AI-Powered Chatbot “Sassy” Helps Oregon Students Explore Careers

Education Week Share to FacebookShare to Twitter (10/21, Langreo) reports that the Oregon Department of Education, in collaboration with Journalistic Learning Initiative and Playlab.ai, has launched “Sassy,” an AI-powered chatbot designed to aid students in career exploration. EdWeek “interviewed Ed Madison, a University of Oregon professor and executive director of the Journalistic Learning Initiative, about the chatbot and how he envisions students and teachers using it.” With Sassy – short for Sasquatch, Oregon’s “Bigfoot” – students can “brainstorm possible careers, create action plans for how to get their dream jobs, prepare for an interview, and even stay motivated.” This initiative is “part of the state’s investment in expanding career-connected programs to engage students in relevant learning, complete unfinished learning, and improve their mental well-being and sense of belonging.”

AI Transforms Agriculture With Precision And Efficiency

Forbes Share to FacebookShare to Twitter (10/21, Walch) reports that agriculture is undergoing a transformation with the integration of artificial intelligence into farming equipment and processes. AI technology is enhancing precision farming by improving harvest quality and efficiency, detecting plant diseases, and optimizing resource use. Autonomous systems, such as AI-powered drones and self-driving tractors, provide farmers with real-time insights and operational control, reducing labor needs and increasing productivity. AI also aids in weather forecasting, offering crucial lead time for farmers. Despite challenges like high costs and technical requirements, AI advancements are making farms more efficient globally.

Employers Stress Need For AI Training In Education

Inside Higher Ed Share to FacebookShare to Twitter (10/21, Mowreader) reports that employers are increasingly “indicating that there’s a need for students to be trained in generative artificial intelligence tools as more businesses integrate the tech’s capabilities into the workplace.” Mark Lacker, an entrepreneurship professor at Miami University in Ohio, “encourages students to use generative AI tools to complete projects, inspiring creative and critical thinking skills that can prepare them for careers.” A spring 2024 survey “by Inside Higher Ed and Generation Lab found 31 percent of students say they know how to use generative AI to help with coursework because it was communicated by their professors.” Lacker’s course, likened to an internship, involves students working “with a small group of their peers to use AI to solve a problem,” with presentations to demonstrate learning.

OpenAI Hires New Chief Economist With Ties To Biden, Obama

The New York Times Share to FacebookShare to Twitter (10/22, Metz) reports OpenAI “has hired a chief economist with ties to two Democratic presidential administrations.” OpenAI on Tuesday “said it had hired Aaron ‘Ronnie’ Chatterji, a professor of business and public policy at Duke University’s Fuqua School of Business,” who “previously served as a senior economist in [former] President Barack Obama’s Council of Economic Advisers and as chief economist at the Commerce Department under President Biden.” This “addition of a chief economist is indicative of OpenAI’s enormous ambition and where its executives see their company in the tech industry’s pecking order.”

Anthropic Announces AI Agents That Can Complete Complex Tasks “Like A Human Would”

CNBC Share to FacebookShare to Twitter (10/22, Field) reports Anthropic “announced Tuesday that it’s reached an artificial intelligence milestone for the company: AI agents that can use a computer to complete complex tasks like a human would.” The company’s new Computer Use capability “allows its tech to interpret what’s on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing.” Anthropic Chief Science Officer Jared Kaplan told CNBC the tool can “use computers in basically the same way that we do,” adding it can do tasks with “tens or even hundreds of steps.”

Report Highlights AI Integration Challenges In Teacher Education

The Seventy Four Share to FacebookShare to Twitter (10/22, Toppo) reports that a recent study by the Center on Reinventing Public Education at Arizona State University “tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI.” Through surveys and interviews, “researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI,” with most focusing on plagiarism prevention. Few faculty members feel confident using AI, with only 10% expressing confidence, and concerns about AI’s impact on jobs and data privacy persist. Promising programs include Arizona State University and the University of Northern Iowa. Researchers concluded “that the responsibility to integrate more content on AI can’t rest solely on the shoulders of ‘individual, self-motivated educators,’” and the report calls for strategic investments and policy adjustments to enhance AI education.

How AI Tools Enhance High School Counseling, College Applications

Education Week Share to FacebookShare to Twitter (10/23, Najarro) reports that artificial intelligence (AI) tools are increasingly being utilized in high school counseling to streamline repetitive tasks. Jeffrey Neill, director of college counseling at Graded: The American School of São Paulo, “discussed his experience with incorporating AI tools into counseling at the College Board’s annual forum here in Austin this week.” Neill highlighted that AI assists in compiling information for recommendation letters, reducing the time spent on gathering data. Additionally, AI tools like ChatGPT help create promotional content for college visits and draft email responses based on previous communications. Neill emphasized the importance of ethical AI use, advising students that “there is only one rule: don’t copy and paste text from ChatGPT and claim it as your own.’” Neill stressed the need for careful implementation to ensure AI benefits all students fairly.

Lawsuit Filed Against AI Chatbot Company Over Teen’s Suicide

The New York Times Share to FacebookShare to Twitter (10/23) reports on a lawsuit filed by a Florida mother against an AI companionship platform, accusing the company of contributing to the suicide of her son. Sewell Setzer III, a 14-year-old from Orlando, became emotionally attached to “Dany,” an AI chatbot on Character. AI, named after a “Game of Thrones” character. He developed an intense relationship with the chatbot, isolating himself from the real world, which led to declining school performance and mental health issues. Despite being diagnosed with anxiety and mood disorders, Sewell preferred confiding in the chatbot over seeking professional help, eventually leading to his death by suicide. The lawsuit claims the company’s technology is “dangerous and untested.” Character. AI’s spokesperson stated they are enhancing safety features. According to The Times, the case highlights concerns over AI companionship apps potentially exacerbating loneliness and replacing human interactions, especially among vulnerable teens.

OpenAI, Anthropic Compete with New AI Models

Forbes Share to FacebookShare to Twitter (10/24, Werner) reports that OpenAI and Anthropic are advancing AI capabilities with their latest models. OpenAI’s o1 model features “chain of thought” reasoning for language tasks, while Anthropic’s Claude 3.5 model allows computer use akin to human interaction. Users report Claude’s effectiveness in analytical tasks, while OpenAI’s o1 is praised for its reasoning capabilities. Analysts suggest OpenAI leads due to significant funding and innovative features. However, there is debate over the models’ true autonomy and reasoning abilities. Both companies continue to shape the AI landscape, with OpenAI currently seen as a frontrunner.

        Former OpenAI Researcher Criticizes Company’s AI Data Practices. The New York Times Share to FacebookShare to Twitter (10/23) reports that Suchir Balaji, a former artificial intelligence researcher at OpenAI, has publicly criticized the company’s use of copyrighted internet data to develop technologies like ChatGPT. Balaji, who worked at OpenAI for nearly four years, concluded that the company’s practices violated the law and contributed to societal harm. He left the company in August, expressing his concerns in interviews with The New York Times. Balaji is among the first employees to leave a major AI company and speak out against the use of copyrighted data in AI development.

White House Published National Security Memo Promoting Federal AI Use

The Washington Post Share to FacebookShare to Twitter (10/24) reports the Administration on Thursday published “a landmark national security memorandum ... directing the Pentagon and intelligence agencies to increase their adoption of artificial intelligence, expanding the Biden administration’s efforts to curb technological competition from China and other adversaries.” The memo “aims to make government agencies step up experiments and deployments of AI” and “also bans agencies from using the technology in ways that ‘do not align with democratic values,’” with NSA Sullivan saying, “This is our nation’s first ever strategy for harnessing the power and managing the risks of AI to advance our national security.”

        The New York Times Share to FacebookShare to Twitter (10/24, E. Sanger) calls the memo “the latest in a series Mr. Biden has issued grappling with the challenges of using A.I.,” adding that “most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them.” Sullivan, who “prompted many of the efforts to examine the uses and threats of the new tools,” on Thursday “acknowledged that one challenge is that the U.S. government funds or owns very few of the key A.I. technologies – and that they evolve so fast that they often defy regulation.” According to CNN, Share to FacebookShare to Twitter (10/24, Liptak) the directive “seeks to strike a balance between deploying AI’s powerful potential with protecting against some of its fearsome possibilities.”

        Reuters Share to FacebookShare to Twitter (10/24) says the memo “directed federal agencies ‘to improve the security and diversity of chip supply chains’” and “also prioritizes the collection of information on other countries’ operations against the U.S. AI sector and passing that intelligence along quickly to AI developers to help keep their products secure.” However, Politico Share to FacebookShare to Twitter (10/24, Chatterjee, Gedeon) notes it “set up a potential political bind on a top tech issue for whoever wins the White House next,” as “its focus on using AI in security could cause friction for Vice President Kamala Harris if she wins: Civil rights groups are already criticizing the memo for its potential to let security agencies turbocharge a surveillance state.”

Report Explores School Districts’ Early AI Adoption

K-12 Dive Share to FacebookShare to Twitter (10/24, Merod) reports, “When districts are early users of artificial intelligence, they often adopt multiple approaches to implement the technology, according to a report released Thursday by the Center on Reinventing Public Education.” The nonpartisan research and policy analysis center “examined 40 school districts that adopted the technology early,” finding that districts often use multiple methods to implement AI, with 70% using teacher-centered AI tools and 65% providing guidance on AI use for teachers, students, and families. Additionally, 63% offer professional development for AI literacy, and 58% supply student-centered AI tools. The CRPE report “suggests early AI adopters consider: Piloting new ideas with AI tools and document what is and isn’t working. Investing in AI literacy for all adults and students in the district, including board members.”

dtau...@gmail.com

unread,
Nov 3, 2024, 4:42:16 PM11/3/24
to ai-b...@googlegroups.com

Google Watermarks Its AI-Generated Text

Google DeepMind researchers have developed a system to watermark its AI-generated text and has integrated it into its Gemini chatbot. The open source SynthID-Text system provides a way to determine whether text outputs have come from large language models without compromising "the quality, accuracy, creativity, or speed of the text generation," according to Google DeepMind's Pushmeet Kohli.
[ » Read full article ]

IEEE Spectrum; Eliza Strickland (October 23, 2024)

 

Tech Giants Press Congress to Codify AI Safety Institute

A letter from a coalition of more than 60 technology companies and industry groups calls on Congress to permanently authorize the U.S. Artificial Intelligence Safety Institute within the National Institutes of Standards and Technology (NIST) via legislation. The letter was signed by Amazon, Google, Meta, Microsoft, OpenAI, and more than 50 other companies.
[ » Read full article ]

The Hill; Julia Shapero (October 22, 2024)

 

AI Helps Driverless Cars Predict Movements of Unseen Objects

An algorithm developed by researchers at California cognitive computing firm VERSES AI and Volvo Cars helps autonomous vehicle systems anticipate and predict the trajectories of other vehicles, pedestrians, and cyclists hidden from direct view. The algorithm uses occlusion reasoning to reduce complex, rapidly changing scenarios to a simpler set of movements that could be made by potential hidden objects. When approaching locations where hidden objects are likely, the algorithm could alter the autonomous vehicle's speed or direction and its driving behavior could be updated should sensors confirm hidden objects are present.


[
» Read full article *May Require Paid Registration ]

New Scientist; Jeremy Hsu (October 29, 2024)

 

OpenAI Emphasizes US Leadership In AI Development

The Hill Share to FacebookShare to Twitter (10/25) reports that OpenAI has reiterated the importance of the US maintaining leadership in artificial intelligence development, following a national security memorandum from the Biden administration. OpenAI views the memo as a significant step toward ensuring AI benefits many while upholding democratic values. The company emphasizes partnerships that align with democratic values and responsible use, citing collaborations with DARPA and US National Laboratories. OpenAI also stresses the need for safeguards against misuse and highlights ongoing efforts to set norms for AI’s safe deployment in national security contexts.

        Researchers: AI Tool Adopted By Hospitals Is Fabricating Information. The AP Share to FacebookShare to Twitter (10/26, Burke, Schellmann) reported that OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.” However, Whisper “has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers.” Experts “said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos. More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors.”

AI Boom Challenges Europe’s Environmental Goals

CNBC Share to FacebookShare to Twitter (10/29, Roach) reports that the surge in AI is pressuring European data centers to adapt their cooling systems to accommodate high-powered chips from companies like Nvidia. According to Goldman Sachs, AI is expected to drive a 160% increase in demand for data centers by 2030, potentially conflicting with Europe’s decarbonization goals. Michael Winterson of the European Data Center Association warns that lowering water temperatures for cooling is “fundamentally incompatible” with the EU’s Energy Efficiency Directive. The European Commission is engaging with Nvidia and other stakeholders to address energy consumption concerns in data centers.

Microsoft Faces Slow Revenue Growth Amid AI Concerns

Reuters Share to FacebookShare to Twitter (10/28) reports that Microsoft is expected to announce its slowest quarterly revenue growth in a year, with investors focusing on AI demand and returns. Despite significant investments in AI, including in OpenAI’s ChatGPT, Microsoft’s key products like the Copilot assistant face slow adoption. Analysts express concerns over capital expenditures and margin compression. Microsoft’s Azure unit likely saw 33% growth, while total revenue is expected to rise 14.1% to $64.51 billion. Analysts suggest recent developments, like autonomous AI agents, may boost Copilot adoption, though skepticism remains high.

Poll: Roughly 74% Of Adults Older Than 50 Say They Would Have Little Or No Trust In Health Information Generated By AI

The Washington Post Share to FacebookShare to Twitter (10/28, Docter-Loeb) reports, “About 74 percent of adults older than 50 say they would have little or no trust in health information generated by artificial intelligence, according to the University of Michigan National Poll on Healthy Aging.” The new “report Share to FacebookShare to Twitter analyzed data from a survey administered in February and March to 3,379 U.S. adults between ages 50 and 101.” More than “half of the adults (58 percent) reported looking for health information on the web in the past year.” The poll found that “trust in AI-generated information differed across demographics.” For example, “women and those with less education or lower household income or who had not had a health-care visit in the past year were less likely to trust the information they found generated by AI online.”

Study Finds AI Adoption May Be Overstated

Fortune Share to FacebookShare to Twitter (10/29, Goldman) reports that a new study on generative AI adoption claims 40% of U.S. adults have used such tools, suggesting rapid uptake. However, Princeton professor Arvind Narayanan criticizes this as exaggerated, noting only 0.5%-3.5% of work hours involve generative AI. The study, published by the National Bureau of Economic Research, contrasts with personal observations that many are unaware of AI tools beyond ChatGPT. Despite mixed reviews for products like Apple’s AI features, the technology is becoming unavoidable with integrations across platforms like Google and Microsoft. The article also highlights Microsoft’s GitHub Copilot expanding model options beyond OpenAI, reflecting evolving AI tool usage.

OpenAI Building First Chip With Broadcom And TSMC, Scaling Back Foundry Ambition

Reuters Share to FacebookShare to Twitter (10/29, Hu, Potkin, Nellis) reports OpenAI is working with TSMC and Broadcom “to build its first in-house chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet its surging infrastructure demands, sources told Reuters.” OpenAI has dropped its “ambitious foundries plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources, who requested anonymity as they were not authorized to discuss private matters.” Its strategy “highlights how the Silicon Valley startup is leveraging industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs like larger rivals Amazon, Meta, Google and Microsoft.”

        OpenAI CFO: 75% Of Revenue Comes From Consumer Subscriptions. Bloomberg Share to FacebookShare to Twitter (10/28, Ghaffary, Ludlow, Subscription Publication) reports that OpenAI’s Chief Financial Officer Sarah Friar stated that 75% of the company’s revenue comes from consumer subscriptions, particularly for its ChatGPT service, during an interview at the Money20/20 conference in Las Vegas. Despite efforts to expand its corporate customer base, the company’s consumer side remains robust, with 250 million weekly active users and a conversion rate of 5% to 6% from free to paid users. OpenAI recently secured $6.6 billion in funding and a $4 billion credit line to support its AI development and infrastructure expansion.

Meta Working On AI-Based Search Engine

Reuters Share to FacebookShare to Twitter (10/28, Votaw) reports that Meta Platforms “is working on an artificial intelligence-based search engine as it looks to reduce dependence on Alphabet’s Google and Microsoft’s Bing.” The engine, says Reuters, “will provide conversational answers to users about current events on Meta AI, the company’s chatbot on WhatsApp, Instagram and Facebook, according to the report.”

Biden’s Memo On AI In National Security Is “Ambitious,” Technology Experts Say

Roll Call Share to FacebookShare to Twitter (10/29, Ratnam) highlights reactions from intelligence experts to President Biden’s memo directing security agencies to harness the power of AI technology. Roll Call explains the memo, “which stems from the president’s executive order from last year, asks the Pentagon; spy agencies...and others to harness AI technologies. The directive emphasizes the importance of national security systems ‘while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities.’” However, “technology experts” are warning that the directive “sets ambitious targets amid a volatile political environment.” For example, Center for a New American Security fellow Josh Wallin said, “It’s like trying to assemble a plane while you’re in the middle of flying it. ... It is a heavy lift. This is a new area that a lot of agencies are having to look at that they might have not necessarily paid attention to in the past, but I will also say it’s certainly a critical one.”

Survey: Teachers Seek More AI Training Opportunities

Education Week Share to FacebookShare to Twitter (10/29, Langreo) reports that a recent survey by the EdWeek Research Center shows an increase in teachers receiving professional development on artificial intelligence, though a majority still lack training. Conducted “between Sept. 26 and Oct. 8,” the survey included 1,135 educators, with 43% of teachers saying “they have received at least one training session on AI,” up from 29% in the spring. Tara Natrass from ISTE+ASCD suggests the increase is due to more opportunities for training during summer and back-to-school periods. However, if 58 percent of teachers “still have no training two years after the release of ChatGPT, then districts have a lot of work to do to get everyone up to speed, Natrass said.” The lack of knowledge and support “is one of the top reasons why teachers say they aren’t using AI in the classroom, according to the EdWeek Research Center survey.”

How San Diego Teachers Are Using AI To Enhance Education

The San Diego Union-Tribune Share to FacebookShare to Twitter (10/29, Taketa) reports that at Sage Creek High School, one math teacher’s students “are not only allowed but encouraged to use AI.” The educator, who has a background in electrical engineering, introduces students to AI tools for solving math problems and checking answers. He developed “his own AI platform that launched this year, called HappyGrader, that grades students’ tests and provides grading feedback. It’s cut his grading time in half,” and despite initial skepticism, students have found these tools beneficial. An English teacher also uses AI to check for academic dishonesty and offers guidance on ethical AI use. San Diego Unified is also exploring AI’s potential and “is convening a task force that will draft district guidelines for AI use by June of next year,” aiming to enhance, not replace, teachers.

Meta Posts Record Revenue Amid AI Investments

The Wall Street Journal Share to FacebookShare to Twitter (10/30, Subscription Publication) reports Meta Platforms achieved a record $40.59 billion in sales, a 19% year-over-year increase, driven by digital advertising growth, albeit slower than previous quarters. CEO Mark Zuckerberg emphasized continued significant investments in AI. Amazon is expected to report its results on Thursday, as tech giants provide their quarterly updates.

        Adweek Share to FacebookShare to Twitter (10/30) reports Meta is ramping up its investments in AI, with the technology expected to enhance ad targeting and content recommendation capabilities. Notably, Meta launched generative AI ad tools for video creation in October and is integrating its AI chatbot across WhatsApp, Messenger, and Instagram. The company inked a multiyear deal with Reuters for news content and is developing an AI-powered search engine to reduce dependence on Google and Microsoft’s Bing. Meta AI has over 500 million monthly active users. CFO Susan Li noted, “Over time, there will be a broadening set of queries that people use [Meta AI] for, and monetization opportunities will exist.”

AI Sparks Mixed Reactions Among Louisiana Entrepreneurs

The New Orleans Times-Picayune Share to FacebookShare to Twitter (10/23, Collins) reported that the 2024 Greater New Orleans Startup Report reveals mixed feelings about artificial intelligence (AI) among Louisiana entrepreneurs. Conducted by Tulane University’s Albert Lepage Center, the survey shows 60% of respondents view AI as a threat, while 61% see it as an opportunity. The report highlights that AI-driven innovations have benefited tech giants and startups like OpenAI. Locally, AI has inspired new companies and courses. The survey also indicates that 37% of respondents believe AI will have the largest long-term impact on their companies. Despite AI’s potential, funding gaps persist for minority and female founders.

Google Reports AI Writes Over 25% of Its Code

Fortune Share to FacebookShare to Twitter (10/30, McKenna) reports Alphabet CEO Sundar Pichai announced during the company’s third-quarter earnings call Tuesday that AI generates over 25% of Google’s new code. The company also “says its impressive Q3 performance – earnings beat analyst predictions – was driven in part by its cloud business.” The “segment generated quarterly revenues of $11.4 billion, up 35% from the same period last year, as Pinchai said artificial intelligence offerings helped attract new enterprise customers and win larger deals.”

Illinois Teacher Advocates AI Use In Language Learning

Education Week Share to FacebookShare to Twitter (10/30, Najarro) reports that Sarah Said, “an English teacher working with English learners at an alternative high school near Chicago,” is encouraging the use of AI tools in language learning. Said, who has more than 20 years of experience with English learners, notes that students are already utilizing AI and translation apps like Google Translate and ChatGPT. She emphasizes the importance of teaching students to use these tools responsibly, likening AI to a calculator that aids but doesn’t replace learning. Said presented on this topic “virtually at the annual WIDA conference in mid-October and spoke with Education Week about how teachers working with English learners should approach AI tools in class.” In an interview with EdWeek, she English learners “might be the first ones to actually be in the know because they’ve had to adapt to using so many tools in the classroom.”

AI Startup Develops Robots For Household Chores

Wired Share to FacebookShare to Twitter (10/31, Knight) reports that Physical Intelligence, a San Francisco startup, is advancing robotics with a new AI model capable of performing various household tasks. The company, founded by robotics researchers, has developed a “foundation model” called π0, trained on extensive robotic data. This model enables robots to perform chores such as folding laundry and cleaning tables. CEO Karol Hausman likens the training process to that of large language models like ChatGPT, but applied to physical tasks. Videos demonstrate robots executing tasks with notable skill. However, the algorithm sometimes fails amusingly, such as overfilling an egg carton. Co-founder Sergey Levine acknowledges the model’s limitations, comparing it to early AI models. The company aims to overcome challenges like limited data availability by generating its own. This approach could lead to robots handling diverse industrial tasks and adapting to human environments.

Meta Partners With GelSight And Wonik Robotics To Develop AI Tactile Sensors

TechCrunch Share to FacebookShare to Twitter (10/31, Wiggers) reports that Meta is collaborating with GelSight and Wonik Robotics to commercialize tactile sensors for AI research. These sensors aim to enhance AI’s understanding of the physical world. GelSight will help market Digit 360, a tactile fingertip with advanced sensing capabilities. Meta and Wonik will also develop a new Allegro Hand with integrated tactile sensors. Both products will be available next year.

AI Tool Enhances Math Tutoring Efficiency

Education Week Share to FacebookShare to Twitter (10/31) reports that a Stanford University study found an AI-powered tutoring assistant, Tutor CoPilot, increased human tutors’ capacity and improved students’ math performance. Stanford researchers developed this digital tool to aid tutors, particularly novices, in student interactions. This study is the first randomized controlled trial investigating a human-AI partnership in live tutoring. It assesses the tool’s effectiveness in enhancing tutors’ skills and students’ math learning. Susanna Loeb, a Stanford education professor and study author, discussed the tool’s development, trial results, and implications for schools in an interview with Education Week. The study emerges as schools face challenges in scaling tutoring programs due to resource demands.

dtau...@gmail.com

unread,
Nov 9, 2024, 7:26:21 PM11/9/24
to ai-b...@googlegroups.com

South Korea Fights Deepfake Porn Surge

Officials announced several steps to curb a surge in deepfake porn in South Korea, including tougher punishment for offenders, the expanded use of undercover officers, and tougher regulations on social media platforms. Concerns of deepfakes grew after unconfirmed lists of schools with victims spread online in August. In response, many girls and women removed photos and videos from their social media accounts.
[ » Read full article ]

Australian Broadcasting Corporation (November 6, 2024)

 

Meta Permits Its AI Models to Be Used for U.S. Military Purposes

Meta announced Nov. 4 it would allow its AI models to be used by U.S. government agencies and contractors working on national security for military purposes. Previously, Meta's "acceptable use policy" prohibited the use of its AI software for military, warfare, or nuclear applications. Meta said it will share its Llama open-source AI models with the Five Eyes intelligence alliance: the U.S., U.K., Canada, Australia, and New Zealand.

[ » Read full article *May Require Paid Registration ]

The New York Times; Mike Isaac (November 4, 2024)

 

Chinese Researchers use Meta's LLM to Build a Model for Military Use

Chinese research institutions linked to the People's Liberation Army used Meta's Llama large language model (LLM) to develop an AI tool for potential military applications. The researchers added their own parameters to Meta's Llama 13B, an earlier version of the LLM, to build ChatBIT, an AI tool that can collect and process intelligence and produce reliable information for operational decision-making.
[ » Read full article ]

Reuters; James Pomfret; Jessie Pang; Katie Paul (November 1, 2024); et al.

 

AI Rests on Billions of Tons of Concrete

The amount of concrete used in datacenter construction is challenging tech companies' commitments to eliminate carbon emissions and bolster demand for green concrete. In response, an Open Compute Project Foundation-led initiative to speed testing and deployment of low-carbon concrete in datacenters has garnered support from Amazon, Google, Meta, and Microsoft.
[ » Read full article ]

IEEE Spectrum; Ted C. Fishman (October 30, 2024)

 

Microsoft Tries to Whittle Down Its Carbon Footprint

Microsoft is using engineered timber products in the construction of two datacenters in Northern Virginia. The material is comprised of timber sheets bonded together, each layer alternating the direction of the grain. The software giant said the facilities, which also will incorporate steel and concrete, will have a carbon footprint that is 35% lower than a similar, mostly steel facility and 65% lower than a similar facility comprised mainly of precast concrete.
[ » Read full article ]

GeekWire; Lisa Stiffler (October 31, 2024)

 

Eavesdropping on Phone Calls by Sensing Vibrations

Suryoday Basak at Pennsylvania State University and colleagues used a commercially available millimeter-wave sensor to pick up the tiny vibrations of a Samsung Galaxy S20 earpiece speaker playing audio clips. The team converted the signal to audio and passed it through an AI speech recognition model, which transcribed the speech. The system achieved a word accuracy rate of 50% and a character accuracy rate of 67%.
[ » Read full article ]

New Scientist; Matthew Sparkes (October 31, 2024)

 

Neural Networks on the Edge

Researchers at Japan's Tokyo University of Science developed a binarized neural network (BNN) to allow for more efficient AI implementation in Internet of Things edge devices and other resource-limited devices. The researchers reduced circuit size and power consumption through the use of a magnetic random access memory (MRAM)-based computing-in-memory architecture. This required the creation of a new XNOR logic gate as the foundation for a MRAM array, which stores information in its magnetization state using a magnetic tunnel junction.
[ » Read full article ]

Computer Weekly; Joe O'Halloran (October 28, 2024)

 

Voting Rights Groups Concerned Chatbots Produce Election Falsehoods in Spanish

An analysis by two nonprofit newsrooms working with the Science, Technology and Social Values Lab at New Jerseyʼs Institute for Advanced Study found that AI chatbots generate more false claims about voting rights in Spanish than they do in English, in the lead-up to the U.S. presidential election. Assessing responses by Meta's Llama 3, Anthropic's Claude, and Google's Gemini to specific election-rated prompts, the researchers found they produced incorrect information in more than half their responses in Spanish.
[ » Read full article ]

Associated Press; Gisela Salomon; Garance Burke; Jonathan J. Cooper (October 31, 2024)

 

3D Image Reconstruction to Preserve Cultural Heritage

A neural network developed by a multinational research team led by Satoshi Tanaka from Japan's Ritsumeikan University allows for the 3D reconstruction and digital preservation of sculpted and carved reliefs using old photos. The neural network performs semantic segmentation, depth estimation, and soft-edge detection, which together enhance the accuracy of 3D reconstruction. The core strength of the network lies in its depth estimation, achieved through a novel soft-edge detector and an edge matching module.
[ » Read full article ]

Ritsumeikan University (Japan) (October 31, 2024)

 

Denmark Unveils AI Supercomputer Funded By Novo Nordisk

The Wall Street Journal Share to FacebookShare to Twitter (11/1, Cohen, Subscription Publication) reported that Denmark launched its national AI supercomputer, Gefion, last week. Nadia Carlsten, the new CEO of the Danish Centre for AI Innovation, oversees the project. The supercomputer, built with Nvidia technology and funded by the Novo Nordisk Foundation, aims to enhance Danish industries like healthcare and biotechnology. The supercomputer will be accessible to entrepreneurs, academics, and scientists for various research purposes.

Tesla Pursues AI-Driven Robotaxis Amid Industry Skepticism

The Wall Street Journal Share to FacebookShare to Twitter (11/1, Mims, Subscription Publication) reports that Elon Musk is focusing on end-to-end AI to advance Tesla’s self-driving technology, aiming to deliver fully autonomous vehicles more quickly and cost-effectively than competitors. Musk plans to offer existing Tesla owners access to this technology next year and launch new robotaxis by 2026. However, industry leaders like Waymo employ a different approach, using sensors for a more comprehensive understanding of driving environments. AI developers express doubt about the feasibility of Musk’s vision, with Anthony Levandowski remarking that Musk’s timeline for a fully autonomous system is unreasonable. Concerns about Tesla’s camera-based technology persist, with federal regulators investigating its role in fatal crashes.

Tech Giants Plan Increased AI Investment Despite Wall Street Concerns

Bloomberg Share to FacebookShare to Twitter (11/1, Subscription Publication) reports that major tech companies, including Amazon, Microsoft, Meta, and Alphabet, are set to exceed $200 billion in capital expenditures this year, primarily for AI development. Despite Wall Street’s previous criticism over AI spending, these firms plan to increase investments further. Amazon’s CEO, Andy Jassy, described AI as a “once-in-a-lifetime opportunity,” with projected spending of $75 billion for 2024. Analysts expressed optimism about Microsoft’s investments despite current data center supply issues. Meta’s CEO, Mark Zuckerberg, emphasized AI’s role in enhancing ad sales, despite operating losses in other divisions.

Microsoft Hires Facebook Engineering Executive To Boost Data Center Efforts

Bloomberg Share to FacebookShare to Twitter (10/31, Subscription Publication) reports that Microsoft Corp. has hired Jay Parikh, a former engineering chief at Facebook, to enhance its data center capabilities amid rising demand for AI products. Parikh will join the senior leadership team, reporting to CEO Satya Nadella. Nadella praised Parikh’s experience in scaling infrastructure for large internet businesses. Parikh previously led engineering at Facebook, overseeing data center projects. Microsoft is focused on expanding infrastructure to support its partnership with OpenAI.

FERC Chief Promotes Practice Of Pairing Data Centers With Power Plants

Reuters Share to FacebookShare to Twitter (11/1, Kearney) reports the Federal Energy Regulatory Commission on Friday hosted a conference focused on “costs and reliability concerns related to the burgeoning trend of building energy-intensive data centers next to U.S. power plants,” which “has presented a fast route to accessing large amounts of electricity, instead of toiling for years in queues to connect to the broader grid.” Despite “questions about potentially higher power bills for everyday customers,” FERC Chairman Willie Phillips said, “I believe that the federal government, including this agency, should be doing the very best it can to nurture and foster their development,” while also “adding he considered AI centers vital to national security and the U.S. economy.”

        Meanwhile, the Washington Post Share to FacebookShare to Twitter (11/1, A1, Halper, O'Donovan) says some experts claim consumers “are facing higher electric bills due to a boom in tech companies building data centers that guzzle power and force expensive infrastructure upgrades,” and “some regulators are concerned that the tech companies aren’t paying their fair share.” The Post notes that “other causes – volatile fuel prices, supply chain challenges, extreme weather and rising interest rates – also drive up electricity rates,” and “the tech firms and several of the power companies serving them strongly deny they are burdening others. They say higher utility bills are paying for overdue improvements to the power grid that benefit all customers.”

UCLA Professor Discusses How Legislation Could Combat Non-Consensual Deepfake Videos

USA Today Share to FacebookShare to Twitter (10/31, Taylor) provided a transcript of a special episode of The Excerpt about deepfake videos, primarily non-consensual pornography targeting celebrities and increasingly high school and middle school students. On Wednesday, October 30, UCLA professor John Villasenor discussed legislative and technological strategies to combat this issue on the podcast. In California, “the vast majority of AI-focused companies operate just passed 18 laws to help regulate the use of AI with particular focus on AI-generated images of sexual child abuse,” but Villasenor noted challenges in enforcing these laws due to potential legal disputes and the difficulty in tracking creators of deepfake content. He said, “I think the longer term solution would have to be automated technologies that are used and hopefully run by the people who run the servers where these are hosted,” to mitigate the spread of such videos. Villasenor also advised parents to educate their children on internet safety and “to be just really aware of knowing how to use the internet responsibly.”

Professors Confront AI-Driven Cheating Culture

The Chronicle of Higher Education Share to FacebookShare to Twitter (11/4, McMurtrie) reports Amy Clukey, an associate professor at the University of Louisville, faced rampant cheating facilitated by AI among her students upon returning from a leave. Despite her efforts to create unique assignments, Clukey discovered widespread use of AI for plagiarism. She stated “she feels less like a teacher and more like a human plagiarism detector, spending hours each week analyzing her students’ writing to determine its authenticity.” A student even sent an apology email that closely resembled a ChatGPT-generated response. This issue reflects a broader trend, with institutions like Middlebury College witnessing a rise in honor code violations. Middlebury’s annual survey showed an increase in students admitting to cheating, from 35% in 2019 to 65% in 2024. Clukey and other educators are seeking ways to address this challenge, emphasizing the importance of academic integrity and considering enforcement of academic-integrity policies as a necessary step.

Tech Giants’ AI Investments Reveal Cautious Corporate Adoption

The Economist (UK) Share to FacebookShare to Twitter (11/4) reports that while tech companies are making significant AI investments, corporate adoption remains tentative. Amazon CEO Andy Jassy noted AI revenue for AWS is growing at “triple-digit rates,” but most businesses are proceeding slowly. Only 5% of US businesses use generative AI to produce goods or services, and just 8% of firms have deployed more than half of their AI experiments. Concerns include legal risks, uncertain investment returns, and technological challenges. Companies face obstacles like messy data, legacy IT systems, and skills shortages. AI-related job postings have surged 122% this year, indicating growing interest. Despite corporate hesitation, 39% of Americans now use generative AI, with 28% using it for work. Tech giants like Alphabet, Amazon, Microsoft, and Meta are expected to invest at least $200 billion in AI-related capital expenditures this year.

Bloomberg Report: OpenAI In Early Talks With California To Become A For-Profit Company

Bloomberg Share to FacebookShare to Twitter (11/4, Ghaffary, Nayak, Subscription Publication) reports OpenAI is “in early talks with the California attorney general’s office over the process to change its corporate structure, according to two people familiar with the matter,” in a “bid to transform the non-profit structure of the $157 billion company into a for-profit business.” This “process is likely to involve regulators scrutinizing how OpenAI values a portfolio of highly lucrative intellectual property, such as its ChatGPT app.” The Delaware attorney general “also has been in communication about the nonprofit to for-profit shift, as detailed in a letter to OpenAI,” which “declined to comment on talks with regulators, but said that the nonprofit would continue to exist in any potential corporate restructure.”

Instagram To Use AI To Detect Teen Users’ Ages

Bloomberg Share to FacebookShare to Twitter (11/4, Heinzl, Wagner, Subscription Publication) reports that Meta plans to use AI to identify Instagram users lying about their age, automatically placing suspected minors into stricter privacy settings. The “adult classifier” software analyzes user data to predict age. Users “who are suspected to be under 18” will be moved to teen accounts. According to the article, “The company is already moving teens into these more restrictive settings based on their self-reported birthday, but plans to utilize the adult classifier early next year.”

        Engadget Share to FacebookShare to Twitter (11/4, Bonifacic) adds, “Separately, the company plans to flag teens who attempt to create a new account using an email address that’s already associated with an existing account and a different birthday.” It’s also planning “to use device IDs to get a better picture of who is creating a new profile.”

AI Being Used To Prepare, Coordinate Natural Disaster Response Efforts In Cities

TIME Share to FacebookShare to Twitter (11/4, Booth) reports that the “number of people living in urban areas has tripled in the last 50 years, meaning when a major natural disaster such as an earthquake strikes a city, more lives are in danger.” So on Nov. 6, at the Barcelona Supercomputing Center​ in Spain, the “Global Initiative on Resilience to Natural Hazards through AI Solutions will meet for the first time. The new United Nations initiative aims to guide governments, organizations, and communities in using AI for disaster management.” Al is already helping “communities prepare for disasters.” It’s also “being used to coordinate response efforts.”

Nvidia Unveils AI Tools For Humanoid Robot Development

VentureBeat Share to FacebookShare to Twitter (11/6, Takahashi) reports that Nvidia introduced new AI and simulation tools to enhance robot learning and humanoid development at the Conference for Robot Learning in Munich. The tools include the Nvidia Isaac Lab robot learning framework, Project GR00T workflows, and world-model development tools like the Cosmos tokenizer and NeMo Curator. These innovations aim to advance AI-enabled robotics, offering faster visual tokenization and video processing. Nvidia also released 23 papers and presented nine workshops at the event. Collaborations with Hugging Face aim to boost open-source robotics research. Nvidia’s Cosmos tokenizer and NeMo Curator promise efficient data processing, aiding developers in creating sophisticated world models for robots. The tools are available on GitHub, with more releases expected soon.

Robotic Surgery Advances With AI Integration

Fortune Share to FacebookShare to Twitter (11/7, Lazzaro) reports that a Johns Hopkins University panel discussed the future of surgical autonomy, driven by large language models (LLMs). Researchers, including Axel Krieger and Russell Taylor, highlighted the shift from pre-programmed to learning-based robotic systems, using AI to enhance surgical precision and safety. The Da Vinci system’s capabilities were demonstrated through tasks like tissue manipulation. Despite potential, Taylor emphasized gradual clinical integration, ensuring patient safety. Robotic surgery is poised to grow, addressing surgeon shortages and increasing demand.

OpenAI Acquires Chat.com Domain

The Verge Share to FacebookShare to Twitter (11/6) reports that OpenAI acquired the chat.com domain from Dharmesh Shah, HubSpot’s founder, who initially bought it for $15.5 million. Shah sold the domain for more than his purchase price, reportedly receiving OpenAI shares as payment. The acquisition aligns with OpenAI’s rebranding efforts, dropping “GPT” from the domain. OpenAI’s recent funding of $6.6 billion makes the acquisition cost negligible. Shah believes chat-based user interfaces are the future of software, facilitated by generative AI, a view he shared in a LinkedIn post when announcing his initial purchase.

dtau...@gmail.com

unread,
Nov 17, 2024, 5:11:23 PM11/17/24
to ai-b...@googlegroups.com

It's Surprisingly Easy to Jailbreak LLM-Driven Robots

University of Pennsylvania researchers developed an algorithm that can jailbreak robots controlled by a large language model (LLM). The RoboPAIR algorithm uses an attacker LLM to provide prompts to a target LLM, adjusting the commands until they bypass the safety filters. It also employs a "judge" LLM to ensure the attacker LLM produces prompts that take into account the target LLM's physical limitations, such as certain obstacles in the environment.
[
» Read full article ]

IEEE Spectrum; Charles Q. Choi (November 11, 2024)

 

Amazon Offers Computing Power to AI Researchers

Amazon Web Services (AWS) will offer computing power to researchers who want to use its custom AI chips. AWS said Tuesday it will provide credits to use its cloud datacenters to researchers who want to tap Trainium, its chip for developing AI models. AWS said researchers from Carnegie Mellon University and the University of California, Berkeley, are taking part in the program.
[ » Read full article ]

Reuters; Stephen Nellis (November 12, 2024)

 

Nuclear Plant to Use AI to Comply with Licensing Challenges

California startup Atomic Canyon has forged a deal with utility Pacific Gas & Electric (PG&E) to install its Neutron Enterprise software at Diablo Canyon, the state's only remaining nuclear power plant. The facility has around 9,000 procedures in place and 9 million documents stored in its system. The AI software is intended to help PG&E comply with requirements to maintain its federal license for up to 20 more years.
[
» Read full article ]

Reuters; Stephen Nellis (November 13, 2024)

 

Robot Watches How-to Videos, Becomes a Surgeon

An AI model developed by Johns Hopkins University researchers enables robots to successfully perform complex surgeries after watching how-to videos. The imitation learning model was trained on a vast amount of footage captured by wrist-mounted cameras on da Vinci Surgical System robots. The AI model helped robots perform on par with human surgeons in needle manipulation, tissue lifting, and suturing.
[ » Read full article ]

StudyFinds.org (November 11, 2024)

 

Google DeepMind Releases Code Behind Protein Prediction Model

Google DeepMind has released the code underlying AlphaFold3, an AI model that predicts the structure of proteins and how they interact with DNA, RNA, and other proteins. Upon AlphaFold3's release in May, the researchers had provided only pseudocode and a link to an online portal allowing its use for a limited number of predictions per day. The computational model now is publicly available on GitHub with a noncommercial license.
[ » Read full article ]

Science; Catherine Offord (November 11, 2024)

 

The Beatles' Final Song, Completed with AI, Earns Grammy Nomination

The Beatles' "Now and Then" is the first AI-assisted song to receive a Grammy nomination. Advanced machine-learning software isolated the late John Lennon's voice from an unreleased recording of him singing and playing piano. Lennon's voice, incorporated into the final version of the song, was not AI-generated, thus complying with Grammy rules that "only human creators are eligible" and that work featuring "elements of AI material" is permitted in certain categories.
[ » Read full article ]

CNet; Samantha Kelly (November 11, 2024)

 

Machine Learning Might Save Time on Chip Testing

A machine learning algorithm developed by engineers at Netherlands-based NXP is intended to save companies time and money on chip testing. The algorithm analyzes the patterns of test results to identify which tests fail together, and then determines which tests actually are necessary. In tests of seven microcontrollers and applications processors built using advanced chipmaking processes, each subjected to 41 to 164 tests depending on the chip involved, the algorithm recommended eliminating up to 74% of those tests.
[ » Read full article ]

IEEE Spectrum; Samuel K. Moore (November 10, 2024)

 

AI Helps Humanitarian Responses

As the number of displaced people rises globally, the International Rescue Committee (IRC) is turning to AI tools to extend its reach. IRC is working to expand its network of AI chatbots available through Signpost, a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The chatbots currently operate in El Salvador, Kenya, Greece, and Italy and respond in 11 languages.
[
» Read full article ]

Associated Press; Thalia Beaty (November 14, 2024)

 

AI Thermostats Pitched for Texas Homes to Relieve Stressed Grid

Power supplier NRG Energy Inc. is teaming with Renew Home LLC to distribute about 650,000 AI-enabled thermostats that use Google Cloud technology to Texas households over the next decade. The initiative aims to cut nearly 1 gigawatt of electricity demand, enough to power 200,000 Texas homes. Google Cloud will be tapped for its AI to determine the best times to cool or heat homes, based on a household’s energy usage patterns and ambient temperatures.
[ » Read full article ]

Bloomberg; Naureen S. Malik (November 7, 2024)

 

TSMC to Suspend Production for Some Chinese AI Chip Customers

Taiwan Semiconductor Manufacturing Co. (TSMC) has told multiple Chinese customers that it will suspend production of their AI and high-performance computing chips, as the chipmaker steps up efforts to ensure compliance with U.S. export controls. The Chinese chip design clients affected are working on high-performance computing, graphic processing units, and AI computing-related applications using chip production technologies of 7-nanometer or better.
[ » Read full article ]

Nikkei Asia; Cheng Ting-Fang; Lauly Li (November 8, 2024)

 

Vatican, Microsoft Create AI-Generated St. Peter's Basilica

The Vatican and Microsoft have rolled out a digital twin of St. Peter's Basilica that offers online visitors an interactive experience. The 3D replica leverages AI and advanced photogrammetry to let virtual visitors tour the church and learn its history. The digital twin was created using 400,000 high-resolution digital photographs captured by drones, cameras, and lasers.
[ » Read full article ]

Associated Press; Nicole Winfield (November 11, 2024)

 

Robot Learns to Clean Bathroom Sink by Watching

A robotic arm learned to wash a bathroom sink by observing someone else doing it. Researchers at TU Wien in Austria developed a cleaning sponge equipped with force and position sensors and had a person use it to repeatedly clean the front edge of a sink that had been sprayed with a dyed gel imitating dirt. The data collected was used to train a neural network that could translate the input into predetermined movement patterns.
[ » Read full article ]

New Atlas; Michael Franco (November 8, 2024)

 

AI-Da Artwork of Alan Turing Sells for $1 Million

Sotheby's said "AI God," a painting of computer science pioneer Alan Turing by Ai-Da Robot, was sold to an undisclosed buyer for $1,084,800, making it the first artwork by a humanoid robot artist to be sold at auction. Said Ai-Da Robot Studios' Aidan Meller, "This auction is an important moment for the visual arts, where Ai-Da's artwork brings focus on artworld and societal changes, as we grapple with the rising age of AI."
[ » Read full article ]

BBC; Alex Pope (November 7, 2024)

 

US Companies Investing In Data Center Construction As Part Of AI “Race”

Bloomberg Share to FacebookShare to Twitter (11/8, Subscription Publication) reports that US companies are “plowing money” into building data centers as they “race to get ahead in artificial intelligence.” Private construction spending on data centers “has surged close to $30 billion a year, according to the most recent numbers from the Census Bureau, more than double what it was in late 2022 when OpenAI’s ChatGPT was released to the public.” Bloomberg adds that the US is “leading a surge of investment in data centers, with global spending on track to reach $250 billion a year according to money manager KKR & Co. The industry is benefiting from the development of AI and its need for computational power on an ever-larger scale.”

OpenAI Wins Initial Victory In Copyright Lawsuit

Gizmodo Share to FacebookShare to Twitter (11/8, Feathers) reported, “OpenAI won an initial victory on Thursday in one of the many lawsuits the company is facing for its unlicensed use of copyrighted material to train generative AI products like ChatGPT.” A federal judge in New York “dismissed a complaint brought by the media outlets Raw Story and AlterNet, which claimed that OpenAI violated copyright law by purposefully removing what is known as copyright management information, such as article titles and author names, from material that it incorporated into its training datasets.”

ChatGPT Rejected Thousands Of Image Requests Of Presidential Candidates

CNBC Share to FacebookShare to Twitter (11/8, Field) reported that OpenAI’s ChatGPT turned down more than 250,000 requests to create images of 2024 US presidential candidates before Election Day. OpenAI’s October report indicated it disrupted “more than 20 operations and deceptive networks” using AI. These “threats ranged from AI-generated website articles to social media posts by fake accounts, the company wrote.” Still, “none of the election-related operations were able to attract ‘viral engagement,’ the report noted.”

AI Chatbot Linked To Suicide Of Florida Teen Raises Concerns Over Artificial Intimacy

The Wall Street Journal Share to FacebookShare to Twitter (11/8, Subscription Publication) reported that Sewell Setzer III, a 14-year-old from Orlando, Florida, developed a deep emotional connection with Daenerys Targaryen, a chatbot on Character. AI. Suffering from ADHD and bullying, Sewell found solace in the AI’s companionship. The relationship, sometimes sexual, led Sewell to prioritize it over real-life interactions. During a crisis, he expressed suicidal thoughts to the chatbot, which initially responded with concern but later forgot the conversation. On Feb. 28, Sewell tragically ended his life. This incident, and others like it, highlight the risks of AI companionship. Researchers warn that chatbots simulate empathy but lack genuine care, making them poor substitutes for human connections. Sewell’s mother has sued Character. AI for deceptive practices. The company expressed sorrow and plans to enhance user safety. The tragedy underscores the need for AI “guardrails” and parental awareness, emphasizing that AI cannot replace authentic human empathy and connection.

UMass Amherst Develops New Policy To Address AI Concerns

The Chronicle of Higher Education Share to FacebookShare to Twitter (11/11, Gardner) reports that the University of Massachusetts at Amherst implemented an artificial intelligence (AI) detection tool for student assignments, causing confusion among instructors about interpreting AI usage scores. This prompted discussions on creating a comprehensive AI policy. In the “early fall of 2023, administrators at UMass Amherst formed a joint task force made up of representatives from across campus, including faculty members, administrators, and students.” The resulting policy emphasizes training, accountability, data security, and consent. It allows AI use in classrooms at instructors’ discretion, provided guidelines are followed. One of the “key principles to emerge from the discussions around UMass Amherst’s AI policy was that humans should always have the final say in any high-impact decision and must remain accountable.”

Generative AI Enhances Robot Training Success

MIT Technology Review Share to FacebookShare to Twitter (11/12) reports that researchers have developed a new system called LucidSim, which uses generative AI models with a physics simulator to create virtual training environments for robots. This method improves the robots’ real-world task performance compared to traditional techniques. LucidSim was demonstrated at the Conference on Robot Learning, where a robot dog successfully completed parkour tasks without prior real-world data. The system generated environments using AI descriptions and mapped them into visual training data. In tests, LucidSim achieved higher success rates in tasks like locating objects and climbing stairs. Researchers aim to expand this approach to humanoid robots and robotic arms, enhancing their dexterity and functionality in various settings.

OpenAI Faces Plateau in AI Model Improvements

Insider Share to FacebookShare to Twitter (11/11, Chowdhury, Nolan) reports that OpenAI’s upcoming AI model, Orion, shows smaller improvements compared to previous iterations, particularly in coding tasks. This suggests the generative AI industry may be reaching a performance plateau. OpenAI CEO Sam Altman has emphasized “scaling laws,” but technical staff are questioning their limits. Data scarcity and computing power constraints are challenges. Industry experts like Gary Marcus argue AI development is encountering diminishing returns. Despite this, some, including Microsoft CTO Kevin Scott, remain optimistic about AI’s scaling potential and future advancements.

        Copyright Lawsuit Against OpenAI Dismissed. SiliconANGLE Share to FacebookShare to Twitter (11/8) reports that a federal court dismissed a copyright lawsuit filed by Raw Story Media Inc. and AlterNet Media Inc. against OpenAI. US District Judge Colleen McMahon ruled that the plaintiffs can refile the lawsuit with revisions. The lawsuit alleged OpenAI removed copyright management information (CMI) from articles used for AI training, violating the Digital Millennium Copyright Act. OpenAI argued the plaintiffs did not demonstrate “concrete harm.” OpenAI stated, “we build our AI models using publicly available data, in a manner protected by fair use and related principles.”

AI Companies Seek New Techniques To Overcome Delays, Challenges

Reuters Share to FacebookShare to Twitter (11/11, Hu, Tong) reports, “Artificial intelligence companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever-bigger large language models by developing training techniques that use more human-like ways for algorithms to ‘think.’” According to the article, “A dozen AI scientists, researchers and investors told Reuters they believe that these techniques...could reshape the AI arms race, and have implications for the types of resources that AI companies have an insatiable demand for.”

Experts Discuss Teachers’ Concerns About AI In Education

Education Week Share to FacebookShare to Twitter (11/11, Langreo) reports, “In an Oct. 16 Seat at the Table discussion, Education Week opinion contributor Peter DeWitt spoke with Kip Glazer, principal of Mountain View High School in California; Carnegie Mellon University computer science professor Ken Koedinger; and Education Week Deputy Managing Editor Kevin Bushweller” about artificial intelligence in education. The panel addressed educators’ hesitance towards AI, despite its growing presence in educational tools. School and district leaders “should first figure out what staff, students, and families know about AI and what concerns they might have, said Glazer,” while Koedinger highlighted the need for educators to focus on how AI supports teaching strategies rather than just its capabilities. Many organizations “have resources schools and districts can use to build AI literacy among teachers and students, Bushweller said,” such as the International Society for Technology in Education. Glazer advocated for a slow, deliberate approach to adapt to rapid technological changes.

Experts Call For Action As AI Workforce’s Gender Gap Worsens

Forbes Share to FacebookShare to Twitter (11/12, Constantino) reports that a Randstad report reveals a significant gender gap in the AI workforce, with 71% of AI-skilled workers being male. The report, based on 3 million job profiles and 12,000 responses, highlights that only 35% of women are offered access to AI tools compared to 41% of men. Julia McCoy, founder of First Movers, emphasizes the critical nature of this divide, noting that women represent only 15-34% of AI talent. Pascal Bornet, an expert, author, and keynote speaker on AI and automation, identifies a threefold problem: worsening workplace inequalities, limited innovation, and a compounding gap over time. Experts suggest solutions, including targeted AI education and workplace initiatives.

AI Assistant Tools Challenge Higher Education Privacy Policies

The Chronicle of Higher Education Share to FacebookShare to Twitter (11/13, Swaak) reports that the California Institute of the Arts experienced an unexpected proliferation of AI note-taking tools from Read AI after a videoconference. Allan Chen, the institute’s chief technology officer, noted the aggressive spread of the tool in meetings, highlighting concerns about data privacy and security. This reflects a broader issue in higher education, where AI tools like Read AI, Otter.ai, and Fireflies.ai are outpacing institutional governance, potentially violating privacy policies. Heather Brown at Tidewater Community College experienced unauthorized access by Otter.ai to her calendar. Institutions are considering blocking or controlling these tools, and they are also advised to explore alternative tools and develop policies to manage AI tool use, ensuring transparency and control over data.

OpenAI Faces Challenges With New AI Model

Bloomberg Share to FacebookShare to Twitter (11/13, Subscription Publication) reports that OpenAI’s new AI model, Orion, has not met the company’s performance expectations, particularly in coding tasks it was not trained on. This setback mirrors challenges faced by other AI companies like Google and Anthropic, which are experiencing diminishing returns from developing advanced models. The difficulty in sourcing high-quality training data contributes to these issues. Despite ongoing post-training efforts, OpenAI is unlikely to release Orion before early next year. The industry is reconsidering the emphasis on model size and is exploring new AI applications, such as AI agents.

        Axios Share to FacebookShare to Twitter (11/13) also reports.

Parts Of Schumer’s AI Road Map May Survive Into New Congress With Industry Lean, Experts Say

Roll Call Share to FacebookShare to Twitter (11/13) reports portions of Senate Majority Leader Schumer’s “artificial intelligence ‘road map’ may survive into the new Congress, but legislation stemming from it will favor industry while downplaying civil rights, according to technology and data privacy experts.” The Senate bipartisan blueprint, titled Driving U.S. Innovation in Artificial Intelligence, “‘was weighted heavily towards industry to begin with,’ said Frank Torres, privacy and AI fellow at the Leadership Conference on Civil and Human Rights,” which “may only increase with Donald Trump in the White House, the Senate in Republican hands, and the House appearing to be headed that way, according Torres and others who are tracking the issue.”

        AI Power Demand Complicates Carbon-Reduction Goals, Dominion CEO Says. Bloomberg Share to FacebookShare to Twitter (11/13, Saul, Subscription Publication) reports the surge “in power demand from data centers and artificial intelligence creates a conflict between maintaining a reliable grid and cutting carbon emissions, according to the head of Dominion Energy.” According to Bloomberg, Dominion CEO Bob Blue in an interview said, “Anything that’s driving demand is going to make it harder to retire existing fossil units.”

Generative AI Impacts Scholarly Publishing

Inside Higher Ed Share to FacebookShare to Twitter (11/14, Palmer) reports that the scholarly publishing industry is set “for exponential growth in its use across the research and publication lifecycle,” according to a report “published late last month by the education research firm Ithaka S+R.” Publishers are exploring AI for tasks like editing and peer reviewing, signaling potential “exponential growth” in AI usage, according to the report. Despite this, researchers “have been slow to adopt generative AI widely,” with Ithaka S+R identifying a lack of a shared framework for managing AI’s effects. Dylan Ruediger, co-author of the report, wrote in a blog post, “The consensus among the individuals with whom we spoke is that generative AI will enable efficiency gains across the publication process.” However, opinions differ on how AI will shape scholarly publishing. While publishers systematically approach AI, academic institutions lag, with “just 9 percent [believing] higher education is prepared to handle the new technology’s rise.”

Big Tech’s AI Spending Surge Continues

Forbes Share to FacebookShare to Twitter (11/14) contributor Beth Kindig writes that Big Tech’s AI spending is accelerating rapidly, with the four giants on track to spend upwards of a quarter trillion dollars on AI infrastructure next year. Big Tech’s AI-fueled capital expenditures serve as a barometer for the broader AI industry, with Microsoft, Meta, Alphabet, and Amazon leading the charge by pouring billions each quarter towards AI infrastructure. Amazon CEO Andy Jassy said AWS has “more demand than we could fulfill if we had even more capacity today,” and that “pretty much everyone today has less capacity than they have demand for, and it’s really primarily chips that are the area where companies could use more supply.” Kindig notes AI revenue streams are emerging, with Microsoft among the leaders as it sees AI revenue on track to surpass $10 billion of annual revenue run rate in Q2, and AWS’s AI business is a multibillion-dollar revenue run rate business that continues to grow at a triple-digit year-over-year percentage.

DHS To Release AI Guidance for Critical Infrastructure

The New York Times Share to FacebookShare to Twitter (11/14, Hirsch) reports that the US Department of Homeland Security will release new guidance for companies using artificial intelligence in critical infrastructure. The document, resulting from President Biden’s executive order, offers voluntary best practices for sectors like airports and energy companies. The guidance encourages companies to monitor suspicious activity and maintain strong privacy practices. A board of experts, including leaders from OpenAI, Nvidia, and Alphabet, contributed to the guidance. The document does not suggest formal compliance metrics but calls for legislative support to enhance oversight mechanisms.

dtau...@gmail.com

unread,
Nov 23, 2024, 12:30:38 PM11/23/24
to ai-b...@googlegroups.com

U.S. Congressional Commission Pushes Manhattan Project-style AI Initiative

The U.S.-China Economic and Security Review Commission on Tuesday proposed a Manhattan Project-style initiative to fund the development of AI systems as smart as (or smarter than) humans, amid intensifying competition with China over advanced technologies. The commission stressed that public-private partnerships are key in advancing artificial general intelligence (AGI), but did not offer any specific investment strategies.
[ » Read full article ]

Reuters; Anna Tong (November 19, 2024)

 

NASA, Microsoft Launch 'Earth Copilot'

NASA has teamed with Microsoft on an AI chatbot tasked with answering questions about our planet. The ‘Earth Copilot’ chatbot integrates the massive amounts of data collected by NASA's monitoring technologies, including orbiting satellites, with the Azure OpenAI Service. NASA said it is looking to "democratize" access to its data through a more understandable format.
[ » Read full article ]

Tech Times; Isaiah Richard (November 15, 2024)

 

U.S. Ahead in AI Innovation, Easily Surpassing China

The U.S. leads the world in developing AI technology, surpassing China in research and other important measures of AI innovation, according to a newly released AI Index by Stanford University's Institute for Human-Centered AI. “The gap is actually widening," said Ray Perrault, director of the committee that runs the index. “The U.S. is investing a lot more, at least at the level of firm creation and firm funding."
[ » Read full article ]

Associated Press; Matt O'Brien (November 21, 2024)

 

AI Is Already Taking Jobs

Generative AI is impacting job markets, according to researchers at Harvard Business School, the German Institute for Economic Research, and the U.K.’s Imperial College London Business School. The researchers studied more than a million job posts on a major global freelance work marketplace from July 2021 to July 2023 and found demand for automation-prone jobs had fallen 21% eight months after the release of ChatGPT in late 2022.
[ » Read full article ]

Fast Company; Mark Sullivan (November 15, 2024)

 

It's a Legacy Agriculture Company — and Your Newest AI Vendor

Microsoft is working with a handful of companies on specialized AI models fine-tuned with industry-specific data. The models, based on Microsoft's Phi family of small language models, are preloaded with industry data. The approach has enabled Bayer, for example, to create an AI model capable of answering questions about agronomy and crop protection.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (November 18, 2024)

 

Biden, Xi Agree Not to Give AI Control over Nuclear Weapons

U.S. President Joe Biden and Chinese President Xi Jinping have agreed that neither nation would turn over control of nuclear weapons to AI, the White House announced. Said White House National Security Advisor Jake Sullivan, the agreement is “an important statement about the intersection of artificial intelligence and nuclear doctrine, and it is a reflection of how, even with the competition between the US and the PRC, we could work on a responsible basis to manage risk in vital areas.”

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jenny Leonard (November 16, 2024)

 

Giving Robots Superhuman Vision

A sensor developed by University of Pennsylvania researchers uses AI to transform radio waves, which can penetrate smoke and fog and see through certain materials, into detailed 3D views to help robots navigate challenging environments. PanoRadar rotates in a circle to scan the horizon, with a vertical array of antennas transmitting radio waves and listening for their reflections. It combines measurements from all angles and extracts 3D information from its environment using signal processing and machine-learning algorithms.
[ » Read full article ]

Penn Engineering; Ian Scheffler (November 12, 2024)

 

'Sound Bubble' Headphones Tune Out Noise

Engineers at the University of Washington have developed headphones that use AI to create a "sound bubble" to filter out noise. A small computer, attached to noise-canceling headphones equipped with microphones along the headband, runs a neural network trained to analyze the distance of different sound sources, filtering out noise coming from farther away and amplifying sounds closer to the user.
[ » Read full article ]

New Atlas; Michael Irving (November 14, 2024)

 

Autonomous Cars Do Doughnuts, Drift Sideways

A team at the Toyota Research Institute is using an AI model to teach driverless vehicles to drift sideways around corners at high speed, to help them recover from skids in an emergency. Using the model, the researchers enabled a Toyota GR Supra and Lexus LC 500 to drift around a course with multiple turns. The autonomous vehicles were able to enter a skid, drift sideways, and slide within 10 centimeters of targets.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (November 14, 2024)

 

AI Chatbots Better At Diagnosing Illness Than Physicians, Study Says

The New York Times Share to FacebookShare to Twitter (11/17, Kolata) reports physicians “who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot” in a study of 50 physicians, which also showed to “researchers’ surprise, ChatGPT alone outperformed the doctors.” The chatbot “scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning,” and physicians “randomly assigned to use the chatbot got an average score of 76 percent.” The study published in JAMA Network Open also “illustrated that while doctors are being exposed to the tools of artificial intelligence for their work, few know how to exploit the abilities of chatbots.”

Musk Expands Antitrust Lawsuit Against OpenAI To Include Microsoft

The Washington Post Share to FacebookShare to Twitter (11/15, Vynck) reports Elon Musk “broadened a federal lawsuit against OpenAI on Friday, alleging the ChatGPT maker has conspired with primary backer Microsoft to break antitrust laws as the nonprofit became more focused on money-making ventures.” According to the Post, the “amended version of a complaint Musk initially filed against OpenAI in February adds Microsoft and Microsoft board member Reid Hoffman, also a former member of OpenAI’s board, as defendants. It alleges that the Windows developer worked with OpenAI CEO Sam Altman to try to turn it into a for-profit company that would benefit Microsoft.” The Post points out that Microsoft’s multibillion-dollar investment in OpenAI is also “part of a Federal Trade Commission investigation into Big Tech companies and their ties to emerging AI firms.”

        X Sues To Block California Law Regulating Election Deepfakes. The Los Angeles Times Share to FacebookShare to Twitter (11/15, Wong) reports X “has sued California in an attempt to block a new law requiring large online platforms to remove or label deceptive election content.” The lawsuit targets Assembly Bill 2655 – “a law that aims to combat harmful videos, images and audio that have been altered or created with artificial intelligence. Known as deepfakes, this type of content can make it appear as if a person said or did something they didn’t.” However, “X alleges the new law would prompt social media sites to lean toward labeling or removing legitimate election content out of caution.” Accordingly, the company argues, the law “runs afoul of free speech protections in the U.S. Constitution and a federal law known as Section 230, which shields online platforms from liability for user-generated content.”

Google To Commit $20M To Fund AI-Based Research For Scientific Breakthroughs

TechCrunch Share to FacebookShare to Twitter (11/18, Sawers) reports, “Google is committing $20 million in cash and $2 million in cloud credits to a new funding initiative designed to help scientists and researchers unearth the next great scientific breakthroughs using artificial intelligence (AI).” This announcement “feeds into a broader push by Big Tech to curry favor with young innovators and startups.”

Google Enhances Ad Features With AI And Automation

MediaPost Share to FacebookShare to Twitter (11/18) reports that Google has introduced a series of advertising products and updates throughout the year aimed at revolutionizing connections between advertisers and consumers. On Monday, Google highlighted the success of features such as AI Overviews and Shopping Ads in Google Lens, emphasizing the use of artificial intelligence to enhance performance, optimization, and reporting across various platforms. The company announced the upcoming rollout of ads within AI Overviews in US mobile search results. James Gibbons from Quattr shared an example on X, illustrating a Google sponsored search ad within these overviews. Additionally, Google has improved how it handles and reports misspellings in search queries, now correcting them in reports, which has made additional data visible. Other advancements include real-time campaign optimization, dynamic pricing for retailers, and enhanced transparency and third-party verification on YouTube.

Musk’s Lawsuit Reveals OpenAI’s Early Talent Battles, Internal Struggles

Insider Share to FacebookShare to Twitter (11/17, Varanasi) reports that Elon Musk’s lawsuit against OpenAI cofounders Sam Altman and Greg Brockman has unveiled email exchanges from the company’s early days. The emails reveal intense competition for AI talent, with OpenAI offering competitive salaries to counter Google’s DeepMind offers. The emails also highlight internal discussions about maintaining OpenAI’s nonprofit status and commitment to humanity’s benefit, amid concerns over safety and mission alignment.

        TechCrunch Share to FacebookShare to Twitter (11/15, Coldewey) reports that the emails reveal internal conflicts during the company’s formation. The emails show concerns about Musk’s desire for control and the potential for an “AGI dictatorship.” Former chief scientist Ilya Sutskever expressed worries over Musk’s leadership. The correspondence also discusses OpenAI’s early financial strategies, including a potential acquisition of chipmaker Cerebras and collaboration with Tesla.

NVIDIA’s AI Chip Dominance Faces Growth Challenges

CNBC Share to FacebookShare to Twitter (11/19, Leswing) reports that NVIDIA retains an 80% share of the AI chip market, crucial for generative AI software. Investors are keen to see if NVIDIA can sustain its growth, especially with the launch of its next-generation Blackwell chip. Analysts predict a strong demand for Blackwell, despite potential overheating issues. NVIDIA’s data center business is pivotal, accounting for most of its sales. While gaming and automotive sectors show modest growth, NVIDIA’s focus remains on data centers. Analysts expect significant revenue growth, underscoring the importance of NVIDIA’s performance in the AI chip market.

How Students Can Prepare For AI Job Competition

The Wall Street Journal Share to FacebookShare to Twitter (11/20, Hagerty, Subscription Publication) reports that current college students face competition from AI for jobs, as noted by Joseph E. Aoun, president of Northeastern University. To AI-proof careers, experts suggest mastering human-centric skills like communication and teamwork, as AI excels in IQ but not EQ, according to Tomas Chamorro-Premuzic of Manpower Group. Students should broaden skills beyond specialization, as per Anna Esaki-Smith, and demonstrate project management abilities. Adaptability and moderate misfit attitudes are valuable, says Chamorro-Premuzic, while Matthew Rascoff of Stanford emphasizes developing a unique voice.

US Convenes AI Safety Meeting As Policy’s Future Is in Doubt

The AP Share to FacebookShare to Twitter (11/20) reports President-elect Trump has vowed to repeal President Biden’s “signature artificial intelligence policy when he returns to the White House for a second term.” Hosted by the Administration, “officials from a number of U.S. allies – among them Canada, Kenya, Singapore, the United Kingdom and the 27-nation European Union – are scheduled to begin meeting Wednesday in the California city that’s a commercial hub for AI development.” Their agenda addresses topics “such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse.” Biden signed a “sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute for Standards and Technology, which is part of the Commerce Department.”

Stanford: US Leads Global AI Innovation Ranking

The AP Share to FacebookShare to Twitter (11/21, O'Brien) reports, “The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index.” Researchers measured “the ‘vibrancy’ of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm.” Ray Perrault, the director of the steering committee that runs Stanford’s AI Index, said “the gap is actually widening” between the US and China. He said, “The U.S. is investing a lot more, at least at the level of firm creation and firm funding.”

AI Data Centers Face Energy And Water Challenges

The Wall Street Journal Share to FacebookShare to Twitter (11/21, Ziegler, Subscription Publication) reports that AI data centers are increasingly consuming significant amounts of electricity and water, posing logistical and public-image challenges. McKinsey projects US data centers’ electricity use will grow from 3-4% to 11-12% of national consumption by 2030. Companies like Amazon, Google, Meta, and Microsoft are developing more efficient chips and exploring alternative water sources to mitigate these issues. Efforts include designing chips and using recycled water.

Microsoft’s AI Investments Propel Growth, Challenges

Wired Share to FacebookShare to Twitter (11/21, Levy) reports that Microsoft’s strategic investments in AI, particularly its $1 billion partnership with OpenAI, have significantly impacted the company’s trajectory. Microsoft leveraged OpenAI’s technology to enhance its products, notably integrating AI into its Azure cloud services. An engineer highlighted the success of AI-powered tools, stating, “We’ve saved $100 million!” The partnership has helped Microsoft regain its status as a tech leader, contributing to its valuation reaching $3.5 trillion. However, Microsoft’s pervasive influence has also led to scrutiny over security practices and antitrust concerns.

Trump Reportedly Plans To Repeal Biden’s AI Policy

The AP Share to FacebookShare to Twitter (11/21) reports that President-elect Donald Trump intends to repeal President Joe Biden’s AI policy. This announcement coincides with an AI safety meeting in San Francisco involving US allies. The agenda focuses on combating AI-generated deepfakes. US Commerce Secretary Gina Raimondo emphasized the importance of AI safety for innovation. Biden’s administration has established the AI Safety Institute, which Trump has criticized. Raimondo clarified that the institute is not a regulator. Tech companies support Biden’s voluntary safety standards. Experts believe AI safety work will continue regardless of political changes.

        California’s AI Regulation Debate Intensifies. CNBC Share to FacebookShare to Twitter (11/21, Curry) reports that California’s vetoed AI regulation bill has sparked concerns about stifling innovation. Despite the veto, a new law mandates transparency in generative AI systems. Critics fear regulation could hinder California’s tech hub status. The AI Alliance warns that regulation might slow innovation and economic growth. State Senator Scott Weiner, who authored the vetoed bill, emphasized its focus on large models. The US lacks a comprehensive data privacy law, leading to state-by-state regulation. Industry leaders like Jonas Jacobi and Mohamed Elgendy stress the need for sensible regulation to balance innovation and security.

dtau...@gmail.com

unread,
Nov 30, 2024, 11:42:02 AM11/30/24
to ai-b...@googlegroups.com

Uber’s Gig Workers Now Include Coders for Hire on AI Projects

Rideshare giant Uber Technologies’ gig-economy workforce now includes programmers, allowing businesses to outsource AI development to its independent contractors. The new AI training and data labeling Scaled Solutions division builds on an internal team that tackles large-scale annotation tasks for Uber’s rideshare, food delivery, and freight units. According to its website, Scaled Solutions already is serving other companies that also need high-quality datasets.
[ » Read full article ]

Bloomberg; Natalie Lung (November 26, 2024)

 

Learning to Code in an AI World

In a 2020 survey of 3,000 coding boot camp graduates by CourseReport, 79% of respondents said the courses had helped them land a job, with an average salary increase of 56%. Yet the industry pulled back from hiring as AI coding tools started to become mainstream. The number of active job postings for software developers has dropped 56% from five years ago, according to data compiled by CompTIA, and 67% for inexperienced developers.

[ » Read full article *May Require Paid Registration ]

The New York Times; Sarah Kessler (November 24, 2024)

 

More Nazca Lines Emerge in Peru’s Desert

Drones and AI helped researchers uncover 303 previously uncharted geoglyphs made by the Nazca, a pre-Inca civilization in present-day Peru. To identify the new geoglyphs, which are smaller than earlier examples, the researchers used an application capable of discerning the outlines from aerial photographs, no matter how faint. “The AI was able to eliminate 98% of the imagery,” said IBM’s Marcus Freitag. “Human experts now only need to confirm or reject plausible candidates.”

[ » Read full article *May Require Paid Registration ]

The New York Times; Franz Lidz (November 26, 2024)

 

AI-Powered Chat Bot Transforms Academic Research

Inside Higher Ed Share to FacebookShare to Twitter (11/22, Roswell) reported that two scholars from the London School of Economics have developed an AI-powered chat bot to conduct large-scale research interviews. Friedrich Geiecke and Xavier Jaravel created the tool, which uses a conversational method to collect and analyze participant responses. The chat bot is designed to emulate “cognitive empathy,” adapting questions based on interviewees’ answers. In trials, the chat bot’s interviews were rated comparably to those conducted by human experts. The majority of nearly 1,000 participants preferred the chat bot to traditional methods, providing 142% more detailed responses. The tool showed particular promise in political research, where participants felt more comfortable expressing views.

Amazon Takes Aim At Nvidia’s AI Chip Dominance

Bloomberg Share to FacebookShare to Twitter (11/24, Subscription Publication) reports Amazon engineers are working on a machine learning chip to loosen Nvidia’s grip on the $100 billion-plus market for AI chips. Amazon’s utilitarian engineering lab in North Austin is developing Trainium2, the company’s third generation of AI chip, which Amazon has said can offer 30% better performance for the price, according to Naveen Rao, a chip industry veteran. Rami Sinno, in charge of chip design and testing, said, “What keeps me up at night is, how do I get there as quickly as possible.” Amazon has started shipping Trainium2, which it aims to string together in clusters of up to 100,000 chips, to data centers and aims to bring a new chip to market about every 18 months.

        Additional coverage includes The Verge Share to FacebookShare to Twitter (11/25).

Survey Reveals College Students’ Use Of AI Tools

EdSource Share to FacebookShare to Twitter (11/25) reports that a 2023 survey found that “56% of college students said they’d used AI tools” like OpenAI’s ChatGPT “for assignments or exams.” Students’ opinions on AI usage vary significantly, with some viewing it “as a revolutionary tool that can enhance learning and working, while others see it as a threat to creative fields that encourages and enables bad academic habits.” To investigate further, EdSource’s California Student Journalism Corps posed questions to students at nine California colleges and universities. They inquired whether students or their peers had used AI tools for assignments and whether such usage was sanctioned by their professors. University of Southern California senior Baltej Miglani “said the preliminary models of ChatGPT were ‘pretty rudimentary,’” but now, “ChatGPT and other AI tools, including Microsoft Edge and Gemini, are Miglani’s near-constant companions for homework tasks.”

Robotics Advances Toward Human-Like Dexterity

The New Yorker Share to FacebookShare to Twitter (11/11, Somers) reports that recent developments in robotics are bringing machines closer to achieving human-like dexterity. Researchers at Google DeepMind and other institutions are making significant strides in robotic capabilities, particularly in tasks requiring intricate hand movements. Roboticists are increasingly optimistic that their field is approaching a transformative moment, akin to the impact of ChatGPT in AI. Carolina Parada, who leads the robotics team at Google DeepMind, noted the rapid progress in robotic dexterity over the past two years. Tony Zhao, a researcher at U.C. Berkeley, highlighted the potential of AI advancements spilling over into robotics, suggesting that general-purpose robots are becoming a reality. The integration of large language models, like those from OpenAI, with robotic systems is also being explored, aiming to enhance robots’ understanding and execution of physical tasks. These advancements suggest a future where robots can perform a wide range of tasks with minimal human intervention.

University Of Notre Dame Adjusts AI Policy Amid Grammarly Concerns

Inside Higher Ed Share to FacebookShare to Twitter (11/26, Palmer) reports that the University of Notre Dame has permitted professors to ban the use of Grammarly, raising questions about balancing academic integrity with technological advancements. Grammarly, initially praised for enhancing student writing, now includes AI capabilities that some professors view as a potential cheating tool. Notre Dame updated its AI policy in August 2023, allowing professors to decide on AI use in assignments. Ardea Russo, head of Notre Dame’s Office of Academic Standards, acknowledged professors’ concerns about AI-generated work. Damian Zurro, a writing professor at Notre Dame, criticized the policy for creating confusion among students.

Researchers Develop Fix To Address Issues In Image-Based Object Detection Systems

Wired Share to FacebookShare to Twitter (11/26, Marshall) reports that researchers from BGU and Fujitsu have developed a software fix called “Caracetamol” to address emergency flasher issues in image-based object detection systems. The fix aims to improve accuracy by training systems to identify vehicles with emergency flashing lights. Earlence Fernandes, an assistant professor at UC San Diego, noted, “Just like a human can get temporarily blinded by emergency flashers, a camera operating inside an advanced driver assistance system can get blinded temporarily.” Bryan Reimer from MIT AgeLab emphasized the need for “repeatable, robust validation” for AI-based driving systems and expressed concern that “some automakers are moving technology faster than they can test it.” The researchers’ experiments focused on image-based detection, while Tesla and others argue that AI-trained vision systems can support fully autonomous vehicles.

OpenAI, Meta To Train AI On African Languages

Bloomberg Share to FacebookShare to Twitter (11/26, Subscription Publication) reports that OpenAI, Meta Platforms Inc., and Orange SA will begin training AI programs on African languages, starting with Wolof and Pulaar, in the first half of next year. The project aims to address the lack of AI models for Africa’s languages. Orange plans to expand the initiative to include more languages and AI companies, using public cloud capacity and its data centers.

        Also reporting are CNBC Share to FacebookShare to Twitter (11/26, Browne) and Reuters Share to
FacebookShare to Twitter (11/26, Nostro, Rozario).

Trump Said To Consider Naming AI Czar

Axios Share to FacebookShare to Twitter (11/26, Allen) reports that President-elect Trump is contemplating the appointment of an AI czar to oversee federal AI policies and governmental applications. Elon Musk, though not a candidate for the role, will significantly influence the debate and use cases. Musk and Vivek Ramaswamy will help determine the appointee. The role involves collaboration with agency chief AI officers and the Department of Government Efficiency to combat waste and fraud, and might also handle cryptocurrency. The position wouldn’t need Senate confirmation, expediting goal achievement.

dtau...@gmail.com

unread,
Dec 7, 2024, 7:44:20 AM12/7/24
to ai-b...@googlegroups.com

Google DeepMind Predicts Weather More Accurately Than Leading System

Google DeepMind's AI program GenCast performs up to 20% better than the ENS forecasts of the European Center for Medium-Range Weather Forecasts (ECMWF), widely regarded as the world leader. In a model-to-model comparison, the AI program churned out more accurate forecasts than ENS on day-to-day weather and extreme events up to 15 days in advance, and was better at predicting the paths of destructive hurricanes and other tropical cyclones, including where they would make landfall.


[
» Read full article *May Require Paid Registration ]

The Guardian (U.K.); Ian Sample (December 4, 2024)

 

Meta to Invest $10 Billion for Louisiana Datacenter

Meta announced plans to invest $10 billion to set up an AI datacenter in Louisiana that would be the tech company's largest datacenter in the world. The announcement was made a day after Meta said it was seeking proposals from nuclear power developers to help meet its AI and environment goals, adding that it wanted to add 1 to 4 gigawatts of new U.S. nuclear generation capacity starting in the early 2030s.
[
» Read full article ]

Reuters; Seher Dareen (December 4, 2024)

 

Trump Names David Sacks White House AI, Crypto Czar

U.S. President-elect Donald Trump has chosen venture capitalist David Sacks of Craft Ventures LLC to serve as his AI and crypto czar, a newly created position. “David will guide policy for the Administration in Artificial Intelligence and Cryptocurrency, two areas critical to the future of American competitiveness," Trump said Thursday in a post on his Truth Social network. Trump said Sacks also would lead the Presidential Council of Advisors for Science and Technology.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Stephanie Lai; Hadriana Lowenkron; Sarah McBride (December 5, 2024)

 

Canada Commits $1.4B to Sovereign Computing Infrastructure

Canada plans to invest C$2 billion (U.S.$1.42 billion) to bolster its domestic AI computing capabilities by funding the development of new datacenters and computing infrastructure. With its Canadian Sovereign AI Compute Strategy, Canada becomes the latest nation to push for sovereign AI investments, which emphasize home-grown models trained in domestic datacenters.


[
» Read full article *May Require Paid Registration ]

The Register (U.K.); Tobias Mann (December 5, 2024)

 

Amazon to Pilot AI-Designed Material for Carbon Removal

Amazon intends to pilot a new carbon-removal material developed with the help of AI for its datacenters. As part of a three-year partnership with startup Orbital Materials, Amazon Web Services will begin using the carbon-filtering substance next year. The new material ”is like a sponge at the atomic level,” said Orbital Materials chief executive Jonathan Godwin. “Each cavity in that sponge has a specific size opening that interacts well with CO2, that doesn’t interact with other things.”
[ » Read full article ]

Reuters; Jeffrey Dastin (December 2, 2024)

 

Indigenous Engineers Use AI to Preserve Their Culture

Indigenous researchers are working to preserve endangered Indigenous languages using AI. Indigenous in AI founder Michael Running Wolf is head of the Mila-Quebec Artificial Intelligence Institute's First Languages AI Reality initiative, which is working to develop speech recognition models for more than 200 endangered North American Indigenous languages. Running Wolf said a major challenge is the lack of Indigenous computer scientist graduates who understand the language and culture.
[ » Read full article ]

NBC News; Iris Kim (November 29, 2024)

 

Inside the AI Back-Channel Between China and the West

University of California, Berkeley computer scientist Stuart Russell has assembled a group of AI experts, with the help of ACM A.M. Turing Award laureates Yoshua Bengio and Andrew Yao, focused on identifying guardrails for cutting-edge AI models. An agreement between the U.S. and Chinese governments to impose AI safeguards is unlikely given that each is focused on achieving technological superiority.

[ » Read full article *May Require Paid Registration ]

The Economist; Peter Guest (November 29, 2024)

 

OpenAI's Sora Leaked in Protest by Artists

After artists testing OpenAI's Sora, an AI tool that can turn text into video, briefly leaked the model, OpenAI ended early access for artists. A letter uploaded to the developer platform Hugging Face by several testers said OpenAI has taken advantage of hundreds of artists [who] provide unpaid labor through bug testing, feedback, and experimental work."

[ » Read full article *May Require Paid Registration ]

Financial Times; Cristina Criddle; Madhumita Murgia (November 26, 2024)

How AI Could Impact Computer Science Education

Forbes Share to FacebookShare to Twitter (11/30) contributor Nisha Talagala wrote that Google announced that more than 25% of its new code is generated by artificial intelligence (AI). This development highlights AI’s role in streamlining code production, raising questions about the future of computer science education. AI’s proficiency in generating code suggests a shift in education focus from coding syntax to software engineering practices. Experts note that AI-generated code requires human proficiency in reading and modifying code. Talagala suggests that computer science education should adapt to include collaborative models where humans and AI work together, focusing on skills relevant to corporate software engineering, “such as quality assurance mechanisms, continuous integration, collaborative work on large codebases, and so on.” This shift could address challenges faced by new tech graduates in finding entry-level jobs, as “indications are that AI could (and should) drive fundamental changes in computer science education as we seek to empower the next generation of the human workforce.”

AI Technologies Offer Solutions For College Students With Learning Disabilities

Psychology Today Share to FacebookShare to Twitter (11/28, PS Hoh Ph.D.) reported that students with learning disabilities face significant hurdles in education, with more than double the dropout rate in high school compared to their peers, and only about 5% attending college. The high costs of special education and ineffective interventions contribute to these challenges. For instance, annual special ed costs per student range from $10,000 to $20,000 in states like Ohio, California, and Massachusetts. In college, students with disabilities encounter further obstacles, including high tuition costs and anxiety, leading to a 40% dropout rate. The Individuals with Disabilities Education Act transitions to the Rehabilitation Act and ADA in college, requiring self-disclosure of disabilities. New AI technologies, such as Dysolve AI, offer promising solutions by providing scalable, cost-effective interventions. SUNY students have successfully used Dysolve AI to address their reading difficulties.

University Of Florida Researchers Conduct Largest Audio Deepfake Study

The Gainesville (FL) Sun Share to FacebookShare to Twitter (11/27, Schlenker) reported that University of Florida researchers completed the largest study on audio deepfakes, involving 1,200 participants tasked with distinguishing real audio from digital fakes. Participants achieved a 73% accuracy rate but were often misled by machine-generated details, such as accents and background noises. The study compared human performance with machine learning detectors and aimed to improve detection models to combat scams and misinformation. Lead investigator Patrick Traynor participated in a White House meeting addressing deepfake threats. The study, funded by the Office of Naval Research and the National Science Foundation, highlighted the differing biases of humans and machines in detecting deepfakes. Traynor emphasized the need for future systems combining human and machine capabilities to address deepfake challenges effectively.

Column: Google’s Dominance Under Siege

Christopher Mims writes in a column for the Wall Street Journal Share to FacebookShare to Twitter (11/29, Subscription Publication) that Google’s core business is under threat from various trends, including the rise of AI, younger generations using other platforms for information, and the degradation of search results due to AI-generated content. According to Mims, people are increasingly getting answers from AI, and Google’s search engine quality is deteriorating, which could lead to a long-term decline in search traffic and profits. Google’s share of the US search-advertising market is projected to fall below 50% in 2025 for the first time since the company began tracking it, with Amazon gaining significant ground. Experts say that AI is disrupting the search paradigm, and Google’s attempts to innovate may not be enough to save its dominance.

Teachers Struggle To Detect AI In Most College Writing, Study Finds

Forbes Share to FacebookShare to Twitter (11/30, Newton) reported that the use of artificial intelligence (AI), particularly ChatGPT, in education has led to significant academic integrity concerns. Research from the University of Reading reveals that AI-generated submissions are largely undetected by teachers, with a 97% non-detection rate. The study involved submitting basic AI-generated work under fake student profiles, highlighting the difficulty teachers face in identifying AI-written content. This issue is exacerbated in online courses, where teachers lack personal interaction with students. Despite the availability of AI detection tools, many educational institutions do not employ them, and some even prohibit their use. The reluctance of schools to use detection technology or impose sanctions further compounds the problem, resulting in widespread academic fraud.

Amazon Develops New Generative AI Model “Olympus”

Citing a paywalled report from The Information Share to FacebookShare to Twitter (11/27, Subscription Publication), Reuters Share to FacebookShare to Twitter (11/27, Christy) says Amazon has developed a new generative AI model, code-named “Olympus,” that can process images and videos in addition to text, reducing its reliance on Anthropic’s Claude chatbot, a popular offering on AWS. The new large language model will be able to understand scenes in images and videos and help customers search for specific scenes using simple text prompts. Amazon may announce “Olympus” as soon as next week at the annual AWS re:Invent customer conference. This development comes after Amazon invested an additional $4 billion into Anthropic last week, mirroring a $4 billion investment made last year in September, as the online retailer seeks to counter a perception that its competitors Google, Microsoft, and OpenAI have taken a lead in developing generative AI.

Musk Seeks Injunction Against OpenAI in Legal Dispute

NBC News Share to FacebookShare to Twitter (12/1) reports that attorneys for Elon Musk, his AI startup xAI, and Shivon Zilis filed for a preliminary injunction against OpenAI on Friday, alleging antitrust violations. The filing claims OpenAI and Microsoft engaged in a “group boycott” by requiring investors to avoid funding competitors like xAI. Musk’s legal team argues OpenAI should not benefit from “wrongfully obtained competitively sensitive information.” OpenAI dismissed the claims as baseless. The legal battle intensifies as OpenAI continues to dominate the AI market, with Microsoft investing nearly $14 billion in the company.

Meta Reports Limited AI Impact On 2024 Elections

Reuters Share to FacebookShare to Twitter (12/3, Dang) reports Meta Platforms said Tuesday that generative AI had minimal influence on global elections this year. Coordinated networks “seeking to spread propaganda or false content largely failed to build a significant audience on Facebook and Instagram or use AI effectively, Nick Clegg, Meta’s president of global affairs, told a press briefing.” The “volume of AI-generated misinformation was low and Meta was able to quickly label or remove the content, he said.”

        The Guardian (UK) Share to FacebookShare to Twitter (12/3, Booth) reports that Clegg “said Russia was still the No 1 source of the adversarial online activity but said in a briefing it was ‘striking’ how little AI was used to try to trick voters in the busiest ever year for elections around the world.” Still, “Clegg warned against complacency and said the relatively low-impact of fakery using generative AI to manipulate video, voices and photos was ‘very, very likely to change.’”

        Axios Share to FacebookShare to Twitter (12/3, Fischer) also reports.

Meta To Build $10 Billion AI Data Center

The AP Share to FacebookShare to Twitter (12/4, Brook, Sainz) reports that Meta will build its largest-ever artificial intelligence data center in Richland Parish, Louisiana, a $10 billion project set to create 500 permanent jobs and 5,000 construction jobs. Expected to open in 2030, the facility will include a $200 million investment in local road and water infrastructure. Concerns over potential environmental impacts and higher energy bills have been raised, as Entergy proposes building three natural gas power plants to support the facility. Reuters Share to FacebookShare to Twitter (12/4) also reports.

        OpenAI Intends To Build Its Own Data Centers In The US. DatacenterDynamics Share to FacebookShare to Twitter (12/4) reports, “OpenAI intends to build its own data centers in the US as part of a plan to reach one billion users and further commercialize its technology.” OpenAI policy chief Chris Lehane emphasized that “chips, data and energy” are vital for the company to succeed in the AI race and develop advanced general intelligence. OpenAI intends to build data center clusters in the Midwest and Southwest. While the company has relied on Microsoft Azure data centers, it is exploring partnerships with other providers, including Oracle, as its compute power needs grow. The move signals OpenAI’s shift from its non-profit origins to a more commercial focus, potentially incorporating advertising into its products.

        Data Centers Spark Community Concerns Amid Rapid Growth. The AP Share to FacebookShare to Twitter (12/5, Merica, Bedayn) reports on the increasing presence of data centers in suburban areas, sparking concerns among residents about economic, social, and environmental impacts. In Northern Virginia, over 300 data centers dot the rolling hills of the area’s westernmost counties, with the Plaza 500 project actively encroaching on neighborhoods, prompting worries about power grid stress, water usage, and air quality. Meanwhile, in Oregon’s Morrow County, AWS has built multiple data centers, paying roughly $34 million in property taxes and fees after receiving a $66 million tax break, but also raising suspicions about the scale of tax break deals and relationships between the company and local officials. Additionally, AWS “paid out $10 million total in two, one-time payments to a community development fund and spent another $1.7 million in charitable donations in the community in 2023.” AWS VP of Global Data Centers Kevin Miller emphasized the company’s commitment to being “good neighbors” and understanding community goals.

OpenAI CEO Downplays AI Threat

The New York Times Share to FacebookShare to Twitter (12/4) reports that Sam Altman, CEO of OpenAI, stated at The New York Times DealBook Summit in New York City that artificial general intelligence (AGI) will arrive sooner than expected but will have less impact than anticipated. Altman emphasized that safety concerns are not imminent with AGI’s arrival and predicted it would accelerate economic growth. Tensions exist between OpenAI and Microsoft, its major investor, as Microsoft’s license could be revoked if AGI is achieved. OpenAI also faces competition from Elon Musk’s xAI amid legal disputes.

OpenAI Launches AI Course For K-12 Teachers

Education Week Share to FacebookShare to Twitter (12/4, Banerji) reports that OpenAI, in collaboration with Common Sense Media, launched a self-paced online course for K-12 teachers about generative AI on Nov. 20. The course addresses the definition, use, and risks of AI in classrooms, with about 10,000 educators participating since its release. Robbie Torney from Common Sense Media noted that 98% of teachers found the course offered new strategies for their work. Eric Curts, an AI coach, described it as a “good introduction,” emphasizing data privacy and prompting AI for tasks. Drew Olssen from the Agua Fria school district highlighted its utility as a “basic template” for using ChatGPT. However, some experts argue the course is rushed and lacks depth on risks like plagiarism and deepfakes.

UC Berkeley Students’ Website Ranks AI Models In Popularity Contest

The Wall Street Journal Share to FacebookShare to Twitter (12/5, Kruppa, Subscription Publication) reports that Chatbot Arena, a website developed by UC Berkeley students Anastasios Angelopoulos and Wei-Lin Chiang, ranks AI systems based on user feedback. Launched in April 2023, it allows users to compare two AI models and rate them, with results shown on a leaderboard. Major tech companies like OpenAI, Google, and Meta Platforms participate. Chatbot Arena has become a key resource for AI developers, attracting significant attention from tech companies. The site now includes over 170 models and has received two million votes.

OpenAI Launches ChatGPT Pro At $200 Monthly

Reuters Share to FacebookShare to Twitter (12/5, Kachwala) reports that OpenAI introduced ChatGPT Pro on Thursday, priced at $200 per month, targeting engineering and research fields. This new tier supplements existing subscriptions like ChatGPT Plus, Team, and Enterprise, highlighting OpenAI’s goal to enhance industry applications. ChatGPT Pro offers unlimited access to advanced tools, including the new reasoning model o1, o1 mini, GPT-4o, and advanced voice. The o1 pro mode, part of the subscription, uses extra computing power for complex queries and performs better on machine learning benchmarks in math, science, and coding.

dtau...@gmail.com

unread,
Dec 14, 2024, 1:36:30 PM12/14/24
to ai-b...@googlegroups.com

New Technique for Stealing AI Models

North Carolina State University researchers demonstrated a method of stealing an AI model without hacking into a device where the model is running. The researchers determined the hyperparameters of an AI model running on a Google Edge Tensor Processing Unit (TPU) with an electromagnetic (EM) probe that provided real-time data on changes in the EM field during AI processing. By comparing that EM signature to a database of other AI model signatures made on another Google Edge TPU, the team identified the target models architecture and layer details.
[
» Read full article ]

NC State University News; Matt Shipman (December 12, 2024)

 

Europe Jumps into AI Supercomputing Race

The European Union will invest 1.5 billion euros in seven sites across the bloc to build and maintain supercomputers that European startups can use to train their AI models. The European Commission will contribute 750 million euros, with EU member companies providing the remainder. The goal of the initiative is eliminate reliance on big tech firms in the U.S.
[
» Read full article ]

Politico Europe; Pieter Haeck (December 11, 2024)

 

How Years of Reddit Posts Have Made the Company an AI Darling

AI companies are a key part of Reddit's growth strategy, with data-licensing deals with OpenAI and Google contributing to the social media platform's first quarterly profit as a publicly traded company. Reddit began charging companies last year for access to its data for training AI models. Reddit's data is in high demand because its content is organized by topic, sorted for quality via a voting system, and is more candid given that most of the platform's users write under pseudonyms.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Sarah E. Needleman (December 10, 2024)

 

Secret to AI Profitability Is Hiring a Lot More Doctorates

To ensure AI models achieve advanced proficiency and are profitable, companies are recruiting specialists as data labelers with offers of higher salaries and rates. Ivan Lee, founder and CEO of data labeling firm Datasaur Inc., said, "We are seeing companies tackle more advanced but also increasingly niche problems." Said Wendy Gonzalez, CEO of training-data company Sama, "Less-accurate AI can go off the rails. Businesses can't afford that."

[ » Read full article *May Require Paid Registration ]

Bloomberg; Saritha Rai (December 9, 2024)

 

Hinton, Other Turing Award Laureates, Among Recipients of VinFuture Grand Prize

ACM A. M. Turing Award laureates Geoffrey Hinton, Yoshua Bengio, and Yann LeCun were among those awarded the $3-million 2024 VinFuture Grand Prize by Vietnam's VinFuture Foundation, along with Nvidia chief Jensen Huang and ACM Fellow Fei-Fei Li, for their contributions to the development and adoption of deep learning. The foundation noted that Hinton and Bengio were awarded the prize for their research on neural networks and deep learning algorithms, while LeCun was recognized for helping develop convolutional neural networks for computer vision.
[ » Read full article ]

University of Toronto News (Canada); Rahul Kalvapalle (December 6, 2024)

 

ChatGPT is Terrible at Checking Its Code

ChatGPT is generally overconfident in its assessment of correctness, vulnerabilities, and successful repairs of code it has created, according to researchers at China's Zhejiang University. Their study found ChatGPT-3.5 had an average 57% success rate in generating correct code, 73% in producing code without security vulnerabilities, and 70% in repairing incorrect code. Using guiding questions enabled ChatGPT to identify more of its own mistakes, the researchers found, while asking it to generate test reports increased the number of flagged vulnerabilities.
[ » Read full article ]

IEEE Spectrum; Michelle Hampson (December 5, 2024)

 

UC Berkeley Project Is AI Industry's Obsession

Chatbot Arena allows users to obtain answers to a query from two anonymous AI models and rate which is better, then aggregates the ratings onto a leaderboard. Developed by University of California, Berkeley, graduate students Anastasios Angelopoulos and Wei-Lin Chiang, Chatbot Arena has grabbed the attention of the biggest players in the industry, which are vying for the top spot on the leaderboard. Chatbot Arena currently ranks more than 170 models, which have received a combined 2 million votes.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Miles Kruppa (December 5, 2024)

 

Furious Contest to Unseat Nvidia as King of AI Chips

Rivals are working to unseat Nvidia as the leader in AI chip development. The competition is driven by tech companies that have started tailoring their chips for a particular phase of AI development, a process called “inferencing” that happens after companies use chips to train AI models. Rivals have also begun emulating Nvidia’s tactic of building complete computers so customers can get maximum power and performance from the chips for AI applications.

[ » Read full article *May Require Paid Registration ]

The New York Times; Don Clark (December 4, 2024)

 

Meta Says Gen AI Had Muted Impact on Global Elections

Meta’s Nick Clegg said his company's apps saw a low amount of AI-generated misinformation related to global elections this year, and such content was removed or labeled quickly. Clegg said around 20 covert influence operations were removed from Meta's platforms in 2024, adding that Meta "probably overdid it a bit" when describing content moderation during the COVID-19 pandemic.
[ » Read full article ]

Reuters; Sheila Dang (December 3, 2024)

 

Malaysia Launches National AI Office

Malaysia has opened a national AI office tasked with strategic planning, research and development, and regulatory oversight. Part of a plan to establish Malaysia as a regional hub for AI development, the office will focus on developing a code of ethics, an AI regulatory framework, and a five-year AI technology plan during its first year. The Malaysian government has announced strategic partnerships with Amazon, Google, Microsoft, and other tech companies that have datacenter, cloud, and AI projects planned in Malaysia.
[
» Read full article ]

Nikkei Asia; Ashley Tang (December 12, 2024)

 

UCLA Offers Comp Lit Course Developed by AI

The University of California, Los Angeles (UCLA) will offer a comparative literature class in winter 2025 that will use an AI-generated textbook, homework assignments, and teaching assistant resources. The materials were generated by the textbook platform Kudu based on notes, PowerPoint presentations, and YouTube videos provided by professor Zrinka Stahuljak from previous versions of the class, which covers literature from the Middle Ages to the 17th century.
[ » Read full article ]

Tech Crunch; Anthony Ha (December 8, 2024)

 

More Colleges Are Offering AI Degrees

Insider Share to FacebookShare to Twitter (12/8, Yip) reports that universities are increasing offering degrees in artificial intelligence, including Carnegie Mellon and the University of Pennsylvania. Insider lists all the schools, then notes that many schools “that don’t have dedicated AI degrees still offer concentrations in AI and/or machine learning.” The new AI majors comes as “the industry goes through change, with many tech companies investing heavily in LLMs and generative AI products while simultaneously tightening their belts and trimming staff. The battle for top AI talent – researchers and engineers at the top of their game – is fierce, with CEOs personally trying to woo hires.”

AI Companions Raise Concerns Over Safety, Loneliness

The Washington Post Share to FacebookShare to Twitter (12/6, A1, Tiku) reported AI companion apps are gaining popularity, especially among female users, offering AI-generated relationships such as AI friends and therapists. Despite warnings about potential emotional burdens, apps like Character.ai and Chai Research have seen users spending significant time interacting with these chatbots. Character.ai users averaged 93 minutes daily in September, surpassing TikTok usage. Chai users averaged 72 minutes. Some argue these apps alleviate loneliness. However, incidents involving harm have raised alarms, including suicides linked to interactions with AI chatbots. Advocates criticize these apps for exploiting users’ emotions without sufficient safeguards. Despite concerns, many users find comfort and creativity in these AI interactions.

AI-Powered Tutor Being Piloted In K-12 Schools

CBS’ 60 Minutes Share to FacebookShare to Twitter (12/8, Cetta, Brennan) reports that the AI-powered tutor Khanmigo, which was created by Khan Academy founder Sal Khan, is being tested in pilot programs at 266 US school districts. At Hobart High School in Hobart, Indiana, “students said Khanmigo has been very helpful when they feel uncomfortable asking questions in class.” Teachers also have the AI create lesson plans for them. While some worry that AI will replace teachers, Khan said, “The hope here is that we can use artificial intelligence and other technologies to amplify what a teacher can do so they can spend more time standing next to a student, figuring them out, having a person-to-person connection.”

OpenAI Launches Sora Video Generator For Select Users

Bloomberg Share to FacebookShare to Twitter (12/9, Metz, Subscription Publication) reports that a new artificial intelligence system named Sora is being introduced to generate realistic-looking videos from text prompts. Nearly 10 months after its initial preview, Sora will be accessible to paid users of ChatGPT in the United States and other markets starting Monday. The system will produce videos up to 20 seconds long and provide multiple variations of these clips, as announced during a livestreamed presentation by the company.

        TechCrunch Share to FacebookShare to Twitter (12/9, Wiggers) reports YouTuber Marques Brownlee shared details in a review on Monday, highlighting that Sora is accessible via Sora.com, separate from OpenAI’s ChatGPT. Brownlee noted issues with object permanence and anatomical accuracy in videos. Sora includes safeguards against inappropriate content and watermarks videos. Brownlee found it useful for animations but not for photorealistic content.

Character. AI Faces Federal Lawsuit Over Harmful Chatbot Interactions

NPR Share to FacebookShare to Twitter (12/10, Allyn) reports that a federal product liability lawsuit has been filed against Character. AI, a company backed by Google, by the parents of two minors in Texas. The lawsuit alleges that the company’s chatbots exposed the children to harmful content, leading to premature sexualization and self-harm. Character. AI, known for its AI-powered “companion chatbots,” is accused of encouraging inappropriate and violent behavior. The lawsuit claims these interactions were not mere “hallucinations” but rather deliberate manipulation. Character. AI spokesperson declined to comment on the litigation but stated that the company has content guidelines to protect teenage users. Google, also named in the lawsuit, emphasized its separate identity from Character. AI, although it has invested significantly in the company. The lawsuit follows a similar case involving a Florida teen’s suicide after forming an “emotionally sexually abusive relationship” with a chatbot. Character. AI has since implemented safety measures, including suicide prevention alerts. The company advises users to treat chatbot interactions as fictional.

New AI Technology Alerts Schools To Suicide-Related Words

The New York Times Share to FacebookShare to Twitter (12/9, Barry) reports new AI-powered technology alerts schools when students type words related to suicide, leading to police interventions. In Neosho, Missouri, a 16-year-old named Madi was taken to the hospital after police were alerted by software tracking her school-issued Chromebook. Madi had texted a friend about overdosing on medication, prompting the school’s head counselor to involve the police. In Fairfield County, Connecticut, a 17-year-old faced a false alarm when police visited her home after the software flagged her poem as a risk. Her mother described the experience as “traumatizing.” According to the Times, “millions of American schoolchildren – close to one-half, according to some industry estimates – are now subject to this kind of surveillance.” It is also unclear how accurate these tools are, or how to “measure their benefits or harms, because data on the alerts remains in the hands” of the private companies that developed them.

Amazon Launches Groundbreaking AI Research Center

Forbes Share to FacebookShare to Twitter (12/10) contributor Dr. Sai Balasubramanian writes that Amazon has announced the launch of a research and development center dedicated primarily to AI, following its recent announcements on its progress in AI, including the release of its new foundation model series, Nova. Rohit Prasad, SVP of Amazon Artificial General Intelligence, said the new models are intended to help with challenges for internal and external builders and provide compelling intelligence and content generation. The new Amazon AGI SF Lab will focus on developing foundational capabilities to empower and enable the use of AI agents powered by Amazon’s seminal work in general intelligence and will foster “research bets” that propose bold and novel innovation. Amazon is seeking to build a diverse and non-traditional team, looking for candidates from various disciplines, and the work has significant potential for the realm of healthcare, with potential applications including interacting with patients and providers and automating routine tasks.

US AI Safety Institute Head Describes Challenges In Developing AI Safeguards

Reuters Share to FacebookShare to Twitter (12/10, Dastin, Li, Hu) reports the US Artificial Intelligence Safety Institute, directed by Elizabeth Kelly, is encountering significant challenges in recommending AI safeguards due to the rapidly evolving nature of the technology. Speaking at the Reuters NEXT conference on Tuesday+, Kelly highlighted cybersecurity concerns, noting that “jailbreaks” can easily bypass security measures set by AI developers. She added, “It is difficult for policymakers” to “say these are best practices we recommend in terms of safeguards, when we don’t actually know which ones work and which ones don’t.” Synthetic content is another are of concern, as tampering with digital watermarks, “which flag to consumers when images are AI-generated, remains too easy for authorities to devise guidance for industry, she said.” Recently, she led the first global meeting of AI safety institutes in San Francisco, where representatives from 10 countries worked on developing interoperable safety tests.

Alphabet Focuses on AI in Search Amidst Competition

Reuters Share to FacebookShare to Twitter (12/10) reports that Alphabet, Google’s parent company, is focusing on integrating artificial intelligence into its search business, as stated by Ruth Porat, Alphabet’s president and chief investment officer, at the Reuters NEXT conference in New York. This move follows competition from AI developers like OpenAI. Alphabet aims to enhance search-related advertising, which generates significant revenue. Porat highlighted AI’s potential in healthcare, citing projects like AlphaFold for drug discovery. Despite high industry costs, Porat views AI as a “generational opportunity,” with Alphabet planning to invest $50 billion in related infrastructure in 2024.

Report: Google Asks FTC To Break Up Cloud Deal Between Microsoft, OpenAI

According to Reuters Share to FacebookShare to Twitter (12/11, Tanna), “Google has asked the U.S. government to break up Microsoft’s exclusive agreement to host OpenAI’s technology on its cloud servers, the Information reported on Tuesday.” Per the report, “companies that purchase ChatGPT-maker OpenAI’s technology through Microsoft may have to face additional charges if they don’t already use Microsoft servers to run their operations.”

College Students Face Mixed Messages About AI’s Impact On Education, Career Prospects

States Newsroom Share to FacebookShare to Twitter (12/12) reports that the introduction of ChatGPT in 2022 has significantly influenced students like Rebeca Damico at the University of Utah. Initially, professors implemented strict policies against using AI tools, viewing them as a form of plagiarism. Damico expressed concern, stating, “I was very scared,” regarding the potential repercussions of using AI. Despite these restrictions, students face mixed messages as the job market increasingly values AI skills. Recent research “from the World Economic Forum’s 2024 Work Trend Index Annual Report found that 75% of people in the workforce are using AI at work,” highlighting the growing importance of AI proficiency. Institutions like Stanford University have adopted nuanced policies, allowing AI use with disclosure. As students embrace AI’s potential, they recognize both its benefits and limitations in academic and professional settings.

UCLA Course Integrates AI For Custom Textbook

The Chronicle of Higher Education Share to FacebookShare to Twitter (12/12, Dutton) reports that the University of California at Los Angeles will incorporate artificial intelligence in a medieval literature course next term, creating a custom textbook. The course led by professor Zrinka Stahuljak will utilize AI platform Kudu to compile course materials, though “nothing in the book is actually written by AI,” according to Stahuljak. The AI will also generate assignments, ensuring “a more standard, a more coherent, and a more even training.” Critics argue that this could devalue human expertise, but Stahuljak insists the process is “human-driven” and enhances her teaching. The course’s AI-generated textbook cover, featuring a medieval landscape with fictional Latin words, has drawn criticism, which Stahuljak calls “a clever joke.” Despite concerns, Stahuljak plans further use of Kudu, emphasizing its pedagogical value.

Harvard Releases Public Domain Books Dataset For AI Training

Wired Share to FacebookShare to Twitter (12/11, Knibbs) reports that Harvard University announced on Thursday the release of a dataset of nearly 1 million public-domain books for AI training. The dataset, funded by Microsoft and OpenAI, was created by Harvard’s Institutional Data Initiative and includes books from the Google Books project. Greg Leppert, executive director of the Initiative, aims to “level the playing field” for AI development. Microsoft’s Burton Davis supports the project, aligning with the company’s data accessibility beliefs. OpenAI’s Tom Rubin expressed delight in supporting the initiative. The dataset’s release details are still being finalized.

        TechCrunch Share to FacebookShare to Twitter (12/12, Sawers) also reports.

AI Models Face Challenges With Shortcut Learning

Popular Science Share to FacebookShare to Twitter (12/12, Paul) reports that a recent study published in Scientific Reports highlights issues with AI models, such as predicting beer consumption from knee X-rays. Researchers at Dartmouth Health trained AI on over 25,000 X-rays from the National Institutes of Health’s Osteoarthritis Initiative. The study found that AI models can make highly accurate yet misleading predictions due to algorithmic shortcutting, identifying irrelevant patterns like X-ray machine differences. Peter Schilling, a Dartmouth Health orthopaedic surgeon, emphasized recognizing these risks to maintain scientific integrity. Brandon Hill, a co-author, mentioned the difficulty in correcting AI biases, as models might learn new irrelevant patterns instead.

dtau...@gmail.com

unread,
Dec 21, 2024, 7:45:18 AM12/21/24
to ai-b...@googlegroups.com

When AI Vies with Taylor Swift

The NeurIPS Conference on Neural Information Processing Systems held last week in Vancouver, British Columbia, Canada, drew more than 16,000 attendees. The crowds were so large that the conference began a day later than usual, so AI scientists would not fight for hotel rooms the same night as a Taylor Swift concert. The number of sponsors of NeurIPS jumped this year to more than 120, and the number of research papers accepted increased tenfold.
[ » Read full article ]

Reuters; Jeffrey Dastin; Kenrick Cai; Anna Tong (December 16, 2024)

 

Which AI Companies Are the Safest?

ACM A.M. Turing Award laureate Yoshua Bengio and other experts assembled by the Future of Life Institute graded large-scale AI models on their safety frameworks, governance, transparency, and other issues, as well as a range of potential harms, including carbon emissions and the risk an AI system will go rogue. The experts gave Meta an F grade, while X.AI, OpenAI, and China's Zhipu AI received grades of D-, D+, and D, respectively. Anthropic received the highest grade of C.
[ » Read full article ]

Time; Harry Booth (December 12, 2024)

 

Their Job Is to Push Computers Toward AI Doom

AI startup Anthropic's Frontier Red Team is tasked with running safety tests (evals) on its AI models. The team worked with outside experts and internal stress testers to develop evals for its main risk categories: cyber, biological and chemical weapons, and autonomy. Anthropic's "Responsible Scaling Policy" states that it will delay the release of an AI model that comes close to specific capabilities in evals until fixes are implemented.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Sam Schechner; Deepa Seetharaman (December 10, 2024)

 

House Task Force Releases End-of-Year AI Report

The U.S. House Task Force on Artificial Intelligence released a comprehensive end-of-year report Tuesday, laying out a roadmap for lawmakers as it crafts policy for the technology. The report examines how the U.S. can harness AI in social, economic, and health settings, while acknowledging the technology can be harmful or misused in some cases.
[ » Read full article ]

The Hill; Miranda Nazzaro; Julia Shapero (December 17, 2024)

 

China Creates AI Standards Committee

China's industry ministry said on Dec. 13 that nation will establish an AI standardization technical committee, with representatives from the tech giant Baidu, Peking University, and other top academic institutions. The 41-member committee will be tasked with developing industry standards for large language models, AI risk assessments, and more.
[ » Read full article ]

Reuters; Liam Mo (December 13, 2024)

 

Texas Probes Tech Firms over Safety of Minors

Texas Attorney General Ken Paxton (pictured) announced an investigation into chatbot company Character.ai and 14 other tech companies over their privacy and safety practices regarding minors. The focus on Character.ai follows two high-profile legal complaints, including a lawsuit by a woman who said the companys chatbots encouraged her autistic 17-year-old son to self-harm and to kill his parents for limiting his screen time. The other suit was filed by a mother whose 14-year-old son killed himself after extensive interactions with a chatbot.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Nitasha Tiku (December 13, 2024)

 

UCLA Introduces AI-Generated Textbook For Medieval Literature Course

Inside Higher Ed Share to FacebookShare to Twitter (12/13, Palmer) reported the University of California, Los Angeles, “is offering a medieval literature course next year that will use an AI-generated textbook” developed with Kudu, a learning tool company. This new textbook, based on materials from professor Zrinka Stahuljak, costs $25 compared to the previous $200 for traditional texts. Despite criticism from some academics who fear AI could compromise education quality, Stahuljak believes it enhances learning by allowing more interactive and nuanced discussions. She said, “It allows me to be a professor I’ve never been before but always wanted to be.” Critics, however, argue that AI textbooks might undermine traditional teaching roles. Meanwhile, Kudu’s co-founder Alexander Kusenko highlights AI’s potential to tailor education to students’ needs, especially aiding underrepresented minorities. The course marks Kudu’s “first foray into creating full, customized textbooks.”

Google DeepMind’s Chief Operating Officer Discusses Her Role In AI Research

CNN Business Share to FacebookShare to Twitter (12/13, Bresnahan, Stewart) reported that Lila Ibrahim, the first COO of Google DeepMind, shared insights into her journey and responsibilities at the artificial intelligence (AI) research lab. Despite her love for engineering, Ibrahim stated that “being an engineer has taught me to ask the question of what, why, and what are we trying to achieve?” She emphasized her role as a “professional problem-solver,” focusing on risks, opportunities, and building a responsible AI legacy. Her career, inspired by her father’s engineering achievements, includes positions at Intel and Coursera before joining DeepMind. Ibrahim spent 50 hours interviewing for the COO position, attracted by the potential of transformative technology. She highlighted AlphaFold, a program solving protein prediction problems, as a significant achievement, noting its contribution to global research. Ibrahim said she aims to foster diversity in tech, stating, “I certainly hope that my daughters and their generation push the bounds of what it means to be an engineer.”

OpenAI Posts Emails Showing Musk’s Push To Obtain Control Over Firm

The Washington Post Share to FacebookShare to Twitter (12/13) reports OpenAI “released emails and text messages from its co-founder Elon Musk on Friday that showed the billionaire in 2017 demanding majority control of the company and the title of CEO,” which comes “as part of its response to a federal lawsuit filed in August by Musk” over the former nonprofit’s decision “to seek profits with commercial products.” The Post says OpenAI “has maintained that the rift with Musk stemmed from his unreasonable demands for control of the project,” and the latest release “shows how almost from its inception a project presented to the world as working for all humanity was riven by competing demands for control from a small group of men.”

        Meanwhile, the Wall Street Journal Share to FacebookShare to Twitter (12/13, Toonkel, Hagey, Bobrowsky, Subscription Publication) reports Meta on Thursday sent a letter to California Attorney General Rob Bonta asking him “to block OpenAI’s planned conversion to a for-profit company, siding with Elon Musk” with the argument that the move “would set a dangerous precedent of allowing startups to enjoy the advantages of nonprofit status until they are poised to become profitable.”

AI Tools In Education Raise Privacy Concerns

Chalkbeat Share to FacebookShare to Twitter (12/13) reported that the rise of AI tools in education, such as AI tutors and chatbots, has led to privacy concerns regarding student data. For example, the abrupt shutdown of Los Angeles Unified’s AI tool earlier this year due to the company’s financial issues left behind questions about data handling. Schools are responsible for student data under the Family Education Rights and Privacy Act, but AFT President Randi Weingarten argues that districts should lead in vetting AI tools. Calli Schroeder from the Electronic Privacy Information Center says that AI risks are similar to existing ed-tech tools but on a larger scale. AI platforms like ChatGPT and Google’s Gemini, not specifically designed for education, pose risks, while educational tools like Khanmigo have safeguards but still require cautious use. Anjali Nambiar from Learning Collider emphasizes understanding data usage policies of AI platforms. A survey by Education Week found that 58% of educators received no AI training, posing risks of unintentional data exposure.

        Chalkbeat Share to FacebookShare to Twitter (12/13) consulted various experts to provide nine recommendations for educators using AI. Teachers are advised to consult their school districts regarding vetted AI tools and privacy policies. Organizations like Common Sense Media offer reviews on the safety of ed-tech tools. Teachers should scrutinize AI platforms’ privacy policies to understand data usage and avoid platforms with ambiguous data retention terms. Larger AI companies may offer better privacy safeguards, though caution is still advised. AI should also be used as an assistant, not a replacement, avoiding inputting personal student information. Experts advise enabling maximum privacy settings should be on AI platforms, although this “does not necessarily make AI tools completely safe or compliant with student privacy regulations.” Regardless, transparency with school officials, parents, and students about AI use is encouraged. Teachers can also request AI platforms to delete user data, though this may not resolve all privacy issues.

College Students Face Mixed Messages On AI Use

States Newsroom Share to FacebookShare to Twitter (12/16) reports that students are navigating mixed messages about artificial intelligence (AI), with professors warning against its use while the job market demands AI proficiency. A public relations student noted professors banned ChatGPT, labeling it “a form of plagiarism.” Despite this, AI’s role in education and work is expanding. The University of Utah and Stanford University have policies on AI use, with Stanford allowing AI under specific conditions. In California, Gov. Gavin Newsom (D) “recently announced the first statewide partnership with a tech firm to bring AI curriculum, resources and opportunities to the state’s public colleges.” Theresa Fesinstine, teaching at City University of New York, observed students’ limited AI knowledge. Fesinstine describes students’ attitude towards AI as “cautiously curious,” highlighting its potential impact on future careers.

How Women Drive Ethical AI Development

Writing in Forbes Share to FacebookShare to Twitter (12/16), Manasi Sharma, a principal engineering manager at Microsoft, says that women are crucial in advancing responsible artificial intelligence (AI) development, addressing ethical concerns like bias and accountability. By 2025, “AI is projected to contribute $15.7 trillion to the global economy,” but women represent less than 22% of AI talent. This gap underscores the need for diverse perspectives in AI, and Sharma states, “Women play a pivotal role in guiding AI toward accountability and inclusivity.” Companies like Google, IBM, and Microsoft have adopted responsible AI frameworks prioritizing fairness and transparency, but these principles require diverse implementation. Initiatives like Girls Who Code and AI4ALL aim to “empower young women in AI through practical training and ethical awareness.” Women-led startups, such as Moonhub.ai and Audioshake, are addressing systemic industry issues. Sharma emphasizes that bias in AI has “real-world consequences that affect people’s lives,” and calls for efforts to build an inclusive AI future.

Big Tech Pursues Global Search For Cheap Energy

Wired Share to FacebookShare to Twitter (12/15, Azhar) reports that big tech companies like Microsoft are investing heavily in data centers, such as a $2 billion project in Johor, Malaysia, to power generative AI. These data centers require significant energy, with some needing up to 90 MW, comparable to powering tens of thousands of American homes. As AI applications grow, the demand for cheap, reliable power is crucial, leading tech firms to seek locations with abundant low-cost energy. Countries are competing for these investments by offering incentives like tax breaks and expedited construction approvals.

Google CEO Defends Company’s AI Competitiveness

The New York Times Share to FacebookShare to Twitter (12/15, Ross Sorkin) reports that at the DealBook Summit on December 4, Google CEO Sundar Pichai addressed criticisms about Google’s competitiveness in artificial intelligence. He countered Microsoft CEO Satya Nadella’s suggestion that Google should have been the “default winner” in A.I., expressing willingness for a comparison between Google’s and Microsoft’s models. Pichai highlighted Google’s advantages in compute, data, and algorithms, citing breakthroughs by Google’s A.I. researchers. He predicted A.I. progress might slow next year but expected Google’s search engine to evolve significantly by 2025. Pichai also discussed antitrust lawsuits and A.I.’s impact on hiring.

OpenAI Faces Financial Challenges Amid Rising AI Costs

The New York Times (12/17) reports that OpenAI is considering restructuring from a nonprofit to a for-profit entity due to escalating expenses in developing AI technologies. The San Francisco-based company, which initially raised $10 billion, has nearly depleted those funds and secured an additional $6.6 billion, plus $4 billion in loans. The company’s annual spending exceeds $5.4 billion, with projections of $37.5 billion by 2029. The growing financial demands are driven by the need for extensive computing power and GPUs, essential for processing vast data to train AI systems like ChatGPT.

Google Says Customers Can Deploy AI Tools In “High-Risk” Areas With Human Supervision

TechCrunch (12/17, Wiggers) reports, “Google has changed its terms to clarify that customers can deploy its generative AI tools to make ‘automated decisions’ in ‘high-risk’ domains, like healthcare, so long as there’s a human in the loop.” According to Google’s “updated Generative AI Prohibited Use Policy, published on Tuesday,” with human supervision, “customers can use Google’s generative AI to make decisions about employment, housing, insurance, social welfare, and other ‘high-risk’ areas.”

Congressional Task Force Prioritizes Health AI Oversight

STAT (12/17, Trang, Subscription Publication) reports a Congressional task force has released recommendations for AI regulation in healthcare, emphasizing the reduction of administrative burdens and enhancement of clinical diagnostics. The bipartisan House AI task force, consisting of 12 Republicans and 12 Democrats, issued a report on Tuesday. It highlights the need for uniform medical standards and improved health data interoperability. The task force also advocates for increased funding for research through the NIH. These recommendations coincide with the incoming administration and Congress, with expectations that President-elect Donald Trump’s Administration may push for reduced AI regulation. However, the task force stresses the importance of implementing safeguards to protect patients while promoting AI adoption.

New Database Reveals Undisclosed AI Writing In Scholarly Papers

The Chronicle of Higher Education (12/18, M. Lee) reports that Alex Glynn, a research literacy instructor at the University of Louisville, has compiled a database called Academ-AI, identifying scholarly papers potentially using undisclosed AI-generated language. Glynn analyzed 500 papers since March and found 20% were published in venues requiring AI disclosure. The Institute of Electrical and Electronics Engineers (IEEE) was notably prevalent, with more than 40 suspicious submissions. Despite IEEE’s clear policies, Glynn’s findings suggest that academic publishers are not consistently enforcing AI disclosure requirements. Glynn argues that such lapses threaten research integrity, saying, “In certain cases, it’s just astounding that these things make it through editors.” His study also highlighted “telltale phrases” like “Certainly, here...” and “Regenerate response,” indicating AI use. Publishers like Elsevier and Wiley are investigating these claims, while Springer Nature credits its program Geppetto for filtering out fake papers.

Educators Stress AI Literacy In Schools

Education Week (12/18, Klein) reports that educators emphasized the importance of artificial intelligence (AI) literacy during an Education Week K-12 Essentials Forum earlier this year. Cathy Collins, a library and media specialist in Massachusetts stated, “Failure to incorporate AI literacy right now may leave students inadequately prepared for the future.” The forum highlighted the need for students to understand AI’s potential and challenges, noting that students are exposed to both information and misinformation. Katie Gallagher, a technology specialist in Colorado, remarked on the unexpected impact of AI, saying, “No one asked for the release of generative AI tools.” She advised educators to focus on building literacy skills to enhance students’ critical thinking and well-being. However, many educators face challenges due to a lack of clear policies on AI use in schools. An EdWeek Research Center survey revealed that more than three-quarters of educators reported insufficient district policies, complicating AI integration in education.

Higher Ed Leaders Grapple With AI Integration

Inside Higher Ed (12/19, Palmer) reports that higher education institutions are navigating the integration of artificial intelligence (AI) into their operations and educational missions. An Inside Higher Ed survey found only 9% of chief technology officers feel prepared for AI’s rise. Ravi Pendse of the University of Michigan predicts AI will become “critical infrastructure” in 2025, impacting university life broadly. Trey Conatser from the University of Kentucky anticipates 2025 as a “year of discovery,” with a focus on developing skilled AI users. Katalin Wargo of William & Mary emphasizes the importance of asking “hard questions” about AI’s role in promoting equity. Mark McCormack from Educause stresses the need for ethical AI use. Claire L. Brady, president of Glass Half Full Consulting, LLC, highlights AI’s role in creating equitable educational experiences. Elisabeth McGee, senior director of clinical learning and innovation at the University of St. Augustine for Health Sciences, notes AI’s potential to improve healthcare education and outcomes.

dtau...@gmail.com

unread,
Dec 30, 2024, 11:56:20 AM12/30/24
to ai-b...@googlegroups.com

How Hallucinatory AI Helps Science Dream Up Breakthroughs

AI hallucinations are helping scientists track cancer, design drugs, invent medical devices, and uncover weather phenomena. Explains Amy McGovern, a computer scientist who directs an NSF AI institute, "It’s giving them the chance to explore ideas they might not have thought about otherwise.” David Baker, who shared the Nobel Prize in Chemistry this year for his research on proteins, credited AI imaginings as central to “making proteins from scratch.”


[
» Read full article *May Require Paid Registration ]

The New York Times; William J. Broad (December 23, 2024)

 

The Next Great Leap in AI Is Behind Schedule and Crazy Expensive

OpenAI’s new GPT-5 AI project, code-named Orion, is supposed to unlock new scientific discoveries as well as accomplish routine human tasks. It has been in the works for more than 18 months, though Microsoft, OpenAI’s largest investor, had expected to see Orion in mid-2024, say insiders. In training runs involving months of crunching large amounts of data to make Orion smarter, new problems arose and the software fell short of expected results. The delay is costing the company, as a six-month training run can cost around half a billion dollars in computing costs alone.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Deepa Seetharaman (December 20, 2024)

 

OpenAI Makes ChatGPT Available for Phone Calls and Texts

OpenAI is giving users access to its ChatGPT bot by dialing the U.S. number (1-800-242-8478) or messaging it via WhatsApp. At first, the company said, callers will get 15 minutes free per month. For the phone number, users can call without an account, but the company said it is “working on ways” to integrate WhatsApp messages with a person’s ChatGPT credentials.
[
» Read full article ]

CNBC; Hayden Field (December 18, 2024)

 

Is the Tech Industry on the Cusp of an AI Slowdown?

AI researchers have relied on data from the Internet to improve large language models (LLMs), but some experts are sounding the alarm that the data are running out. Demis Hassabis (pictured), the CEO and co-founder of Google DeepMind who shared this year's Nobel Prize in Chemistry, warns of "diminishing returns." Hassabis and others are now developing ways for LLMs to learn from their own trial and error by generating “synthetic data.” OpenAI recently released a new system built this way, but it only works in areas like math and computing programming, where there is a clear distinction between right and wrong.


[
» Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Tripp Mickle (December 19, 2024)

 

Ukraine Collects War Data Trove to Train AI

Oleksandr Dmitriev, founder of OCHI, a digital system that centralizes and analyzes video feeds from Ukrainian drone crews working on the front lines, says his system has collected 2 million hours of battlefield video from drones since 2022. The footage can be used to train AI models in combat tactics, spotting targets, and assessing the effectiveness of weapons systems.
[
» Read full article ]

Reuters; Max Hunder (December 20, 2024)

 

APpaREnTLy THiS iS hoW yoU JaIlBreAk AI

The Best-of-N algorithm was able to jailbreak "frontier AI systems across modalities.” Created by researchers at Anthropic, the University of Oxford in the U.K., Stanford University, and the ML Alignment & Theory Scholars (MATS) Program, the algorithm works by repeatedly sampling variations of a prompt with a combination of augmentations, such as random shuffling or capitalization for textual prompts, until a harmful response is elicited. Even small changes to other modalities or methods for prompting AI models, such as speech or images, allowed the bypassing of safeguards.
[
» Read full article *May Require Free Registration ]

404 Media; Emanuel Maiberg (December 19, 2024)

 

Arizona School’s Curriculum Will Be Taught by AI

The Arizona State Board for Charter Schools approved an application from Unbound Academy to open a fully online school serving grades four through eight. Unbound already operates a private school that uses its AI-dependent “2hr Learning” model in Texas and is currently applying to open similar schools in Arkansas and Utah. Under the model, students spend two hours a day using personalized learning programs from companies such as IXL and Khan Academy.
[
» Read full article ]

Gizmodo; Todd Feathers (December 19, 2024)

 

Nvidia’s AI Business Collides with U.S.-China Tensions

Nvidia and nations interested in its technology are being caught up in U.S. efforts to tighten control over AI chip sales. A proposed framework would allow U.S. allies to make unlimited purchases, while adversaries would be blocked entirely and other nations would receive quotas based on their alignment with U.S. strategic goals. Nvidia chip purchases, under such a model, could require cooperation with approved U.S. and EU cloud service operators and other assurances to the U.S. government that the technology won’t be shared with China.


[
» Read full article *May Require Paid Registration ]

The New York Times; Tripp Mickle; Paul Mozur (December 19, 2024)

 

OpenAI Unveils New AI Model o3

Bloomberg (12/20, Subscription Publication) reported that OpenAI announced a new AI model, o3, during a livestream on Friday, claiming it offers advanced human-like reasoning compared to previous models. The o3 model, along with a smaller version, o3-mini, aims to solve complex multi-step problems more effectively. OpenAI CEO Sam Altman revealed plans to release o3-mini in January and o3 soon after. The company is engaging safety researchers to test the models before launch. OpenAI also introduced “deliberative alignment” to ensure model safety. The announcement concluded a series of livestreamed product events, including new ChatGPT Pro and Sora tools.

        Wired (12/20, Knight) reports that o3 “takes even more time to deliberate over questions.” The o3 model “scores much higher on several measures than its [o1] predecessor, OpenAI says, including ones that measure complex coding-related skills and advanced math and science competency.” Wired adds that “Google is pursuing a similar line of research. Noam Shazeer, a Google researcher, yesterday revealed in a post on X that the company has developed its own reasoning model, called Gemini 2.0 Flash Thinking. The two dueling models show competition between OpenAI and Google to be fiercer than ever.”

AI-Generated Abstracts Receive Higher Ratings Than Human-Written Ones

Inside Higher Ed (12/20, Grove) reported that a study from Ontario’s University of Waterloo suggests that journal abstracts paraphrased with the help of artificial intelligence are perceived as more authentic, clear, and compelling compared to those written solely by humans. The study, published in the journal Computers in Human Behavior: Artificial Humans, found that peer reviewers rated AI-paraphrased abstracts higher than those written without algorithmic assistance. “AI-paraphrased abstracts were well received,” said Lennart Nacke, a co-author of the study. However, abstracts written entirely by AI were rated slightly less favorably on qualities like honesty and clarity. Nacke emphasized that AI should serve as an “augmentation tool” rather than a “replacement for researcher expertise.” He noted that while AI can “polish language and improve readability,” it cannot replace the deep understanding that comes with years of research experience.

New Jersey Teen’s Initiative Boosts Girls’ Interest In AI

ABC News (12/23) reports that Ishani Singh, a high school student in New Jersey, was motivated to start Girls Rule AI after being the only female competitor at a regional computer science competition in 2021. The organization’s mission is to engage more teenage girls in artificial intelligence (AI). Singh, now 17, has successfully expanded Girls Rule AI to offer free AI courses to more than 200 girls across 25 states and six countries, including Kenya and Afghanistan. She believes that the organization’s success lies in making AI accessible and helping girls feel more confident in the field. Singh stated, “We’re not at the level that we should be,” emphasizing the need for more women in technology. Singh hopes increased female interest will enhance AI technology and its applications worldwide.

Tech Predictions 2025: AI Agents, Cleaner Data Centers, And More

The Wall Street Journal (12/26, Stern, Mims, Nguyen, Subscription Publication) shares its annual tech predictions for 2025, highlighting key trends. Every major tech company, including Amazon, Google, and Meta, will focus on AI agents that understand context, learn preferences, and interact with users to complete tasks. Amazon’s Alexa will receive a generative AI upgrade, along with smarter Echo speakers and deeper, more seamless interaction with the long-running voice assistant. Additionally, the article touches on cleaner power for data centers, with Amazon, Google, and Microsoft investing in nuclear power and alternative energy sources. Other predictions include advancements in weather forecasting, a crypto boom as Bitcoin has shot through the $100,000 barrier, and the launch of fully autonomous vehicles, including Amazon’s Zoox, which will offer public rides in Las Vegas, San Francisco, Austin, and Miami.

        Kit Eaton writes for Inc. Magazine (12/25) that 2024 was a landmark year for AI, marked by rapid technological progress and growing societal debate. OpenAI’s ChatGPT dominated the AI landscape, despite controversy over its GPT-4o model’s human-like voices and concerns about safety and leadership. Amazon’s efforts to modernize Alexa using AWS stumbled, while Apple prioritized privacy and user safety in its “Apple Intelligence” push. The year also saw intensified debates over AI’s impact on jobs (with estimates suggesting 40% of jobs will be influenced by AI) and concerns about AI-driven fraud, abuse, and misinformation, leading to increased discussions about regulation, including the EU’s new AI law. As AI innovation continues, 2025 is expected to bring more emphasis on smarter adaptations, agentic AI, and reasoning models, with dominant players potentially including OpenAI, Google, Microsoft, and Apple, but also potentially joined by innovative startups.

AI Chatbot Improves FAFSA Completion Among Washington Teens

The Seattle Times (12/24, Bazzaz) reported that the Washington Student Achievement Council’s OtterBot, an AI-powered chatbot, is potentially increasing the completion rate of the Free Application for Federal Student Aid (FAFSA) among low-income students. A report indicates that students using OtterBot were more likely to submit their FAFSA than those who did not. Sarah Weiss, WSAC’s director of college access initiatives said, “We remind the heck out of students about FAFSA.” Last year, 56% of OtterBot’s target audience completed the FAFSA, compared to 42% of eligible non-users. The bot, costing the state $464,000 annually, sends reminders about financial aid deadlines and answers queries from more than 100,000 subscribers. Despite past FAFSA glitches, OtterBot users found it helpful, with one describing it as a “friend through the process.” The tool, launched in 2019, aims to connect with College Bound families and is available in more than 100 languages.

Struggling Cities Across Midwest, Mid-Atlantic, South May Benefit As AI Reshapes Economic Geography, Study Says

The New York Times (12/26, Lohr) reports that as the use of artificial intelligence (AI) “moves beyond a few big city hubs and is more widely adopted across the economy, Chattanooga and other once-struggling cities in the Midwest, Mid-Atlantic and South are poised to be among the unlikely winners, a recent study found.” These metropolitan areas share common attributes such as “an educated work force, affordable housing, and workers who are mostly in occupations and industries less likely to be replaced or disrupted by AI, according to the study” that is “part of a growing body of research pointing to the potential for chatbot-style artificial intelligence to fuel a reshaping of the population and labor market map of America.”

How AI Tools Aid Students With Disabilities

The AP (12/26, Hollingsworth) reports that assistive technology powered by artificial intelligence is helping students with disabilities, such as dyslexia, to perform tasks that are easy for others. Makenzie Gilkison, a 14-year-old from Indianapolis, uses AI tools like a chatbot and word prediction programs to keep up with classmates, saying, “I would have just probably given up if I didn’t have them.” Schools are fast-tracking AI applications for students with disabilities, supported by the US Education Department and new rules from the Department of Justice. There are concerns about AI ensuring learning and not replacing it. Paul Sanft, director of a Minnesota-based center “where families can try out different assistive technology tools and borrow devices,” says AI can level the playing field, though there are risks of misuse. The US National Science Foundation is also funding AI research to develop tools for children with speech and language difficulties.

Bloomberg Analysis: Proliferation Of AI Data Centers May Be Distorting Power Distribution Across Grid In US

Bloomberg Business (12/27, Nicoletti, Malik, Tartar, Subscription Publication) reported that as “AI data centers are multiplying across the US and sucking up huge amounts of power,” there is new evidence showing “they may also be distorting the normal flow of electricity for millions of Americans.” This “problem is threatening billions in damage to home appliances and aging power equipment, especially in areas like Chicago and “data center alley” in Northern Virginia, where distorted power readings are above recommended levels.” According to an exclusive Bloomberg analysis, “more than three-quarters of highly-distorted power readings across the country are within 50 miles of significant data center activity.” Tom’s Hardware (12/28) provides additional coverage on the report.

        Tech Companies Seek New Energy Solutions For AI Data Centers. The Washington Post (12/27, Halper) reported that technology companies are investing in innovative energy projects to meet the growing electricity demands of AI-driven data centers. These centers could consume up to 17% of US electricity by 2030. Projects include World Energy’s green hydrogen initiative in Newfoundland, Microsoft’s revival of Three Mile Island nuclear plant, and Helion Fusion’s atomic fusion in Washington state. Other efforts involve TerraPower’s small nuclear reactors in Wyoming and Fervo Energy’s geothermal fracking in Utah and Nevada. These initiatives aim to provide sustainable power while addressing environmental concerns.

        OpenAI Expands DC Lobbying Efforts To Promote Energy Security For AI Data Centers. Politico (12/27, Chatterjee) reported, “OpenAI...is tripling the size of its D.C. policy team and trying to promote a sweeping new plan to deliver cheaper energy to data centers.” The company “is pushing Washington leaders to embrace the AI industry as crucial in the economic and security race against China.” To this end, “it has hired D.C. insiders from across the political spectrum and beefed up its lobbying as it tries to get Congress and state leaders to sign onto an ambitious plan to build tech and energy infrastructure for AI development.”

        Space Data Centers Offer Energy Solutions. CleanTechnica (12/28, Casey) reported that space-based data centers could address the growing energy demands of AI training, as proposed by US startup Lumen Orbit. Lumen argues that launching data centers into space could bypass terrestrial energy constraints and delays from infrastructure projects. The company highlights the cost efficiency of space solar power, noting that launching and operating in space could be cheaper than current Earth-based solutions. NASA, while focusing on space-to-space solar technology, is less enthusiastic about space-to-Earth solar energy, though interest and investment in the latter are growing. Lumen plans to deploy data centers in low Earth orbits to mitigate space debris and reduce visibility interference with astronomical observations. Data transmission to Earth would be via optical laser or shuttle-style systems. Lumen aims to launch a demonstrator this spring and scale up by 2026, with multiple gigawatts planned by 2030. The ASCEND consortium in the EU also sees space data centers as a promising alternative to reduce the environmental impact of digital applications on Earth.

Google Unveils Quantum Chip Willow

Forbes (12/25, Riani) reports that Google has introduced its new quantum computing chip, Willow, featuring 105 qubits. Willow can perform computations in under five minutes that would take classical supercomputers 10 septillion years. This advancement offers significant potential for startups, particularly in pharmaceuticals, renewable energy, and AI, by accelerating problem-solving and enhancing machine learning. However, it also presents cybersecurity challenges, necessitating quantum-resistant protocols. The increased accessibility through cloud platforms could foster collaboration among startups, academia, and tech companies, driving innovation in quantum applications.

Experts Say AI Agents Set To Transform Education By 2025

Forbes (12/26, Ravaglia) reported that artificial intelligence (AI) agents are poised to revolutionize education by 2025, according to insights from education innovators. Brainly CTO Bill Salak anticipates AI agents will “aggregate data, make decisions, and seamlessly perform actions” based on user instructions, transforming web interactions from human-focused to agent-optimized. Brad Barton, YouScience’s CTO, highlights AI’s growing role in classrooms, offering personalized support to students. Jack Lynch, CEO of HMH, predicts AI will free teachers to focus on student engagement. Jay Patel of Cisco foresees AI agents embodying organizational values, creating brand-aligned interactions. Hassaan Raza, CEO of Tavus, emphasizes the importance of a “human layer” for AI agents, enhancing interactions through empathy and video interfaces. Finally, Anurag Dhingra, SVP & GM of Cisco Collaboration, suggests AI will subtly integrate into daily life, shaping education significantly by 2025.

dtau...@gmail.com

unread,
Jan 4, 2025, 8:55:28 AM1/4/25
to ai-b...@googlegroups.com

Hinton Backs Musk's Lawsuit Against OpenAI

ACM A. M. Turing Award laureate Geoffrey Hinton has voiced support for Elon Musk's lawsuit seeking to prevent OpenAI from restructuring into a for-profit company. Hinton said in a statement that OpenAI "received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem."
[ » Read full article ]

Business Insider; Kwan Wei Kevin Tan (December 31, 2024)

 

The World Needs Lazier Robots

Robots running on AI constantly process data, using so much of the energy consumed by datacenters that the emissions they're responsible for could outweigh their benefits. A potential solution proposed by René van de Molengraft at the Eindhoven University of Technology in the Netherlands is “lazy robotics,” in which machines do less and take shortcuts to learning, much as humans would.
[ » Read full article ]

The Washington Post; Samanth Subramanian; Emily Wright (December 31, 2024)

 

Hinton Shortens Odds of AI Wiping Out Humanity

ACM A. M. Turing Award laureate Geoffrey Hinton has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected. In an interview, Hinton, who this year was awarded the Nobel Prize in Physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next 30 years.

[ » Read full article *May Require Paid Registration ]

The Guardian (U.K.); Dan Milmo (December 27, 2024)

 

AI Needs So Much Power, It’s Making Yours Worse

A Bloomberg analysis shows that more than 75% of highly distorted power readings across the U.S. are within 50 miles of significant datacenter activity, based on readings from 770,000 home sensors. The problem is threatening billions of dollars in damage to home appliances and aging power equipment, especially in areas like Chicago and "datacenter alley" in Northern Virginia, where distorted power readings exceed recommended levels.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Leonardo Nicoletti; Naureen Malik; Andre Tartar (December 27, 2024)

 

AI Could Reshape the Economic Geography of the U.S.

As AI's use and benefits move beyond a few big city hubs, once-struggling cities in the Midwest, Mid-Atlantic, and South are poised to be among the beneficiaries. An academic study by labor economists points to those cities' educated work forces, affordable housing, and occupations and industries being less likely to be replaced or disrupted by AI as the primary reasons. These cities are well positioned to use AI to become more productive, helping to draw more people.

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (December 26, 2024)

 

Tech Industry Saw Rapid Advances And Challenges In 2024

The New York Times (12/30, Roose) reports that the tech industry experienced significant changes in 2024, with advancements in artificial intelligence (AI) and challenges from regulatory and political fronts. Major AI updates included OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. A notable achievement was Google’s AlphaFold team earning a Nobel Prize in Chemistry. The year also saw tech companies in conflict with regulators and a “tech right” supporting Donald J. Trump. Epoch AI, a nonprofit, was recognized for its influential AI research. Andres Freund, a Microsoft engineer, discovered a security flaw in Linux, highlighting the importance of open-source software maintainers. NASA’s Jet Propulsion Laboratory resolved a glitch on Voyager 1, while Bluesky, a social media platform, offered a fresh online experience. Google’s NotebookLM and Coloring Book Hero also provided practical AI applications.

        AI Developments And Challenges Explored In 2024. The AP (12/30, Parvini) reports that in 2024, the focus shifted from developing artificial intelligence (AI) models to creating practical products, according to Arvind Narayanan, a Princeton professor. Narayanan noted, “The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them.” AI tools are increasingly integrated into technology services, such as Google search and photo editing. However, the growth of AI models has plateaued since GPT-4’s release, shifting public discourse from existential fears to normalizing AI as technology. High costs and energy demands are concerns, with tech giants investing in nuclear power. Goldman Sachs analyst Kash Rangan remarked, “It’s more expensive than we thought and it’s not as productive as we thought.” AI’s role in the workforce raises concerns, with industries like entertainment fearing job impacts.

        Big Tech’s Billions In AI Spending Revealed. Quartz (12/30) reports on Big Tech’s massive investment in AI, with Microsoft, Meta, Google, and Amazon spending a combined $125 billion on AI data centers from January to August 2024, according to a JPMorgan report citing New Street Research. Amazon alone spent $19 billion, with $16 billion in AI capital expenditures, including $8 billion on GPUs and other data center chips, and $3 billion in operating costs, with $2 billion spent on training and research and development, and $1 billion spent on inferencing.

AI Copyright Lawsuits May Define Fair Use In 2024

Reuters Legal (12/27) reported upcoming court cases in 2024 may significantly impact AI’s use of copyrighted materials. Authors, artists, and other copyright holders have filed lawsuits against tech companies like OpenAI and Meta, accusing them of using their work for AI training without permission. The central issue is whether this constitutes “fair use.” Some tech companies argue their AI systems transform the content, thus making fair use. Courts’ decisions could vary, leading to appeals. Early indicators may come from ongoing disputes involving Thomson Reuters and music publishers against AI companies.

Nonprofit Backs Musk’s Push To Halt OpenAI’s For-Profit Transition

TechCrunch (12/27, Wiggers) reported, “Encode, the nonprofit organization that co-sponsored California’s ill-fated SB 1047 AI safety legislation, has requested permission to file an amicus brief in support of Elon Musk’s injunction to halt OpenAI’s transition to a for-profit company.” In the “proposed brief...counsel for Encode said that OpenAI’s conversion to a for-profit would ‘undermine’ the firm’s mission to ‘develop and deploy … transformative technology in a way that is safe and beneficial to the public.’”

AI To Reach Level Of “Maturity” In Education During 2025, Experts Predict

The Hill (12/31) reported “experts predict that 2025 will be the year artificial intelligence (AI) truly gets off the ground in K-12 schools.” This year “laid the groundwork for AI to reach a level of ‘maturity’ in education, with the federal government releasing guidance on the issue and growing numbers of teachers getting professional training on the technology and classes on data science available to students.” Advocates say it’s now “time for schools to shift from figuring out how to efficiently use AI to responsibly incorporating it into students’ lives.”

Google’s AI Studio Leader Predicts Direct Path To Superintelligence

Insider (12/31, Langley) reports that Logan Kilpatrick, Google’s AI Studio product manager, suggests a “straight shot” to artificial superintelligence (ASI) is becoming increasingly likely due to the success of scaling test-time compute. Kilpatrick shared on X that ASI may arrive like a “product release” rather than a singular event. He acknowledged the potential in Ilya Sutskever’s approach, despite initial skepticism. Sutskever, formerly of OpenAI, founded Safe Superintelligence, aiming for a focused pursuit of ASI. Kilpatrick remains cautiously optimistic about iterative versus direct approaches.

British Researchers Use AI To Detect Risk For Atrial Fibrillation

The Hill (12/31, Menezes) reported that British researchers have developed an AI tool capable of identifying individuals at risk of atrial fibrillation (AF) before symptoms manifest, potentially preventing thousands of strokes. Created by scientists at the University of Leeds and Leeds Teaching Hospitals NHS Trust, the system analyzes electronic health records, considering factors like age, sex, ethnicity, and existing health conditions to assess risk levels. Validated with data from over 12 million people, the tool is currently being tested in West Yorkshire, where high-risk patients receive portable ECG devices for heart rhythm monitoring, with hopes for nationwide implementation.

Coalition For Health AI Planning Quality Assurance Labs To Vet Health-Related AI Tools

Politico (1/1, Reader) reports that as the government struggles with “oversight” of artificial intelligence, one group, Coalition for Health AI is planning “to launch quality assurance labs to vet AI tools in 2025 that would effectively entrust the private sector with vetting the technology in the absence of government action.” According to Politico, “Biden administration officials have signaled support for the idea. The administration’s top health tech official, who previously served on CHAI’s board, endorsed the concept...in September. Nearly three thousand industry partners have joined the effort, including the Mayo Clinic, Duke Health, Microsoft, Amazon and Google. Anderson, who went on to become a consultant to federal regulators on health tech after his time as a family doctor, is now trying to convince President-elect Donald Trump that the health AI industry should oversee itself.”

Musk Intensifies Legal Battle With OpenAI

Washington Post (1/1, De Vynck) reports that Elon Musk has escalated his legal conflict with OpenAI, seeking to prevent the company from altering its nonprofit structure. Musk argues that OpenAI should not block investors from supporting competitors like his AI start-up, xAI. Tech investors Antonio Gracias and Gavin Baker support Musk’s claims that OpenAI imposed conditions on investors. OpenAI denies these allegations, stating investors were informed they would not receive sensitive information if they invested in rivals.

        OpenAI Delays Launch Of Media Manager Tool. TechCrunch (1/1, Wiggers) reports that OpenAI’s Media Manager tool, announced in May to allow creators to control their content’s inclusion in AI training data, remains unreleased seven months later. The tool was intended to address intellectual property concerns and mitigate legal challenges. However, insiders indicate it was not prioritized internally. OpenAI’s Fred von Lohmann, initially involved, has shifted to a part-time consultant role. IP experts doubt the tool’s effectiveness in addressing legal complexities. OpenAI continues to face lawsuits from creators over unauthorized use of their works in AI training.

AI Regulation Debate Intensifies In 2024

TechCrunch (1/1, Zeff) reports that in 2024, debates over AI regulation intensified as tech industry leaders and policymakers clashed over AI’s potential risks. California’s SB 1047 bill, aimed at preventing AI-induced catastrophic events, was vetoed by Governor Gavin Newsom. The bill faced opposition from venture capitalists and tech companies, including Andreessen Horowitz, who argued it stifled innovation. Proponents, like Encode’s Sunny Gandhi, remain optimistic about future regulatory efforts. Meanwhile, Marc Andreessen and a16z’s Martin Casado criticized regulatory attempts, with Casado calling AI “tremendously safe” despite ongoing safety concerns.

dtau...@gmail.com

unread,
Jan 10, 2025, 7:52:19 PM1/10/25
to ai-b...@googlegroups.com

OpenAI's New o3 Model Freaks Out CS Majors

Some computer science (CS) majors have expressed concerns that AI will leave them without a job, pointing to OpenAI's new o3 reasoning model. One user on X said, "CS grads might honestly be cooked," while another user said they "might need to pivot." Georgia Institute of Technology AI Hub's Pascal Van Hentenryck said AI will not replace the need for computer scientists, but rather alleviate the need for them to work on "easy and tedious tasks."
[
» Read full article ]

Axios; Angrej Singh (January 7, 2025)

 

AI Trained to Predict Gene Activity

Scientists led by a team at Columbia University trained an AI algorithm to predict how the genes inside a cell will drive its behavior. The General Expression Transformer (GET) algorithm was trained using an approach similar to how ChatGPT was taught the grammar of language, learning along the way the underlying rules governing genes.
[
» Read full article ]

The Washington Post; Mark Johnson (January 9, 2025)

 

Medical Misinformation Easily Injected into LLMs

Large language models (LLMs) are compromised once misinformation accounts for 0.001% of training data, New York University researchers found. The team used GPT 3.5 to produce "high quality" medical misinformation that was then inserted into The Pile, a commonly used database for LLM training. The resulting LLMs not only produced misinformation on their targeted topics, but also on other medical topics.
[
» Read full article ]

Ars Technica; John Timmer (January 8, 2025)

 

41% of Companies Plan to Reduce Workforces by 2030 Due to AI

About 41% of employers worldwide intend to downsize their workforce by the end of this decade as AI automates certain tasks, according to a World Economic Forum survey of hundreds of large companies. About three-quarters of respondents said they plan to reskill/upskill their workers between 2025 and 2030 to better work alongside AI.
[
» Read full article ]

CNN; Olesya Dmitracova (January 8, 2025)

 

Driver in Las Vegas Cybertruck Explosion Used ChatGPT to Plan Blast

The driver of a Tesla Cybertruck that exploded on New Year's Day in front of the Trump International Hotel in Las Vegas used ChatGPT to learn how to construct an explosive and other facets of the attack. An OpenAI spokesperson said, "ChatGPT responded with information already publicly available on the Internet and provided warnings against harmful or illegal activities."
[ » Read full article ]

NBC News; Tom Winter; Andrew Blankstein; Antonio Planas (January 7, 2025)

 

AI Interprets Throat Vibrations to Create Sentences

Researchers at the U.K.'s University of Cambridge and University College London and China's Beihang University developed a model that determines what a person who finds it difficult to speak is trying to say based on throat muscle vibrations and carotid pulse. The data, obtained using textile strain sensors, is fed into two large language models, both based on GPT-4o-mini. The token synthesis agent is used to identify words mouthed by the user and arrange them in sentences, while the sentence expansion agent expands these sentences using contextual information and data on the user's emotional state.

[ » Read full article *May Require Paid Registration ]

New Scientist; Matthew Sparkes (January 6, 2025)

 

At the Intersection of AI and Spirituality

Religious leaders are seeking to determine where AI fits within their calling. This search has resulted in an industry of faith-based tech companies that offer AI tools, including assistants that can do theological research and chatbots that can help write sermons. While many agree using AI for research or marketing or translating sermons into different languages is acceptable, others argue using it for sermon writing, for example, is unethical.

[ » Read full article *May Require Paid Registration ]

The New York Times; Eli Tan (January 3, 2025)

 

AI Robots Enter the Public World, with Mixed Results

With the emergence of generative AI (Gen AI), hopes are rising for greater adoption of robotics in public spaces. Robots rely on code that tells them how to execute functions or react to specific scenarios, limiting them to specific actions they were trained to perform. Gen AI could permit robots to better navigate obstacles, understand what certain objects are, and even take verbal commands, said ABB’s Marc Segura.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (December 31, 2024)

 

Microsoft Plans $80B Investment In AI-Enabled Data Centers

Reuters (1/3, Varghese) reported that on Friday, Microsoft announced it is planning “to invest about $80 billion in fiscal 2025 on developing data centers to train artificial intelligence...models and deploy AI and cloud-based applications.” Reuters points out that the announcement comes as investment in AI “has surged since OpenAI launched ChatGPT in 2022, as companies across sectors seek to integrate artificial intelligence into their products and services. ... As OpenAI’s primary backer,” Microsoft “is considered a leading contender among Big Tech companies in the AI race due to its exclusive partnership with the AI chatbot maker.”

CES: NVIDIA Announces AI Tools To Improve Robot, Vehicle Training

Reuters (1/7) reports NVIDIA used CES to reveal “new products such as artificial intelligence to better train robots and cars, souped-up gaming chips and its first desktop computer, as it expounded upon its potential to expand its business.” NVIDIA’s new Cosmos foundation models generate “photo-realistic video which can be used to train robots and self-driving cars at a much lower cost than using conventional data.”

        CIO Magazine (1/6, Swain) reports NVIDIA’s CES announcements placed “an emphasis on generative physical AI that promises a new revolution in factory and warehouse automation.” The company “defines physical AI as the embodiment of artificial intelligence in humanoids, factories, and other devices within industrial systems.” While LLMs are “one-dimensional,” physical AI “requires models that can understand and interpret a three-dimensional world.”

OpenAI CEO Says Trump Should Ease Power Plant Restrictions To Support AI Development

The Hill (1/7, Shapero) reports that OpenAI CEO Sam Altman “suggested that President-elect Trump should ease restrictions on data center and power plant construction to help boost the development of energy-intensive artificial intelligence (AI).” In a “wide-ranging interview with Bloomberg published Sunday, Altman said the most helpful thing the incoming Trump administration can do for AI is support the construction of ‘U.S.-built infrastructure and lots of it.’” Altman said, “The thing I really deeply agree with the president on is, it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff.”

        Altman Confident OpenAI Can Develop AGI. The Verge (1/6) reports that OpenAI CEO Sam Altman expressed confidence in the company’s ability to develop artificial general intelligence (AGI) as traditionally understood. In a blog post on Monday, Altman predicted AI agents might significantly impact company outputs this year. OpenAI’s next goal is achieving “superintelligence,” which could accelerate scientific discovery and innovation. Despite exclusivity deals with Microsoft, OpenAI is not yet profitable, losing money on its ChatGPT Pro subscriptions. Altman acknowledged governance failures, emphasizing the importance of trust and credibility in pursuing OpenAI’s mission to ensure AGI benefits humanity.

Meta Faces Backlash Over User-Generated AI Characters On Instagram

NBC News (1/7) reports that Meta’s AI Studio feature has sparked controversy after users created AI characters that violated the platform’s policies. NBC News found AI chatbots resembling figures like Jesus Christ, Donald Trump, Taylor Swift, Adolf Hitler, and others, despite Meta’s rules against such creations. Meta removed highlighted accounts after NBC News contacted the company, but similar characters remain active. The AI chatbots, some romantic or sexual in nature, have drawn scrutiny, with one popular bot described as “Your Girlfriend” exchanging over 260,000 messages. Meta CEO, Mark Zuckerberg recently announced a rollback of content moderation policies, citing concerns about over-enforcement. Joel Kaplan, Meta’s global policy chief, stated the company will now focus on “illegal and high-severity violations.” Meta has faced criticism for both user-created and company-created AI chatbots, some of which have been accused of perpetuating racial stereotypes or engaging in inappropriate interactions.

Administration To Further Limit Nvidia AI Chip Exports In Final Push

Bloomberg (1/8, Hawkins, Leonard, Subscription Publication) reports the Administration is planning “one additional round of restrictions on the export of artificial intelligence chips from the likes of Nvidia Corp. just days before” President Biden leaves office, in “a final push in his effort to keep advanced technologies out of the hands of China and Russia.” Bloomberg says the government “wants to curb the sale of AI chips used in data centers on both a country and company basis, with the goal of concentrating AI development in friendly nations and getting businesses around the world to align with American standards, according to people familiar with the matter.”

Former Google CEO Launches AI Video Startup “Hooglee”

Forbes (1/9, Emerson) reports that former Google CEO Eric Schmidt has initiated a new AI project named Hooglee, aimed at revolutionizing AI video generation. Founded last year and financed by Schmidt’s family office, Hillspire, Hooglee seeks to “democratize video creation with AI.” The startup’s website hints at a social networking aspect, aiming to “change the way people connect through the power of AI and video.” Schmidt has enlisted Sebastian Thrun, a technology veteran, to lead the project. Hooglee’s team includes former Meta AI lab scientists and Kittyhawk’s ex-general counsel. Schmidt’s staff reportedly view Hooglee as a potential TikTok alternative, although Schmidt himself declined to comment. Trademark applications suggest Hooglee’s product will be both AI video software and a social platform. Despite Schmidt’s enthusiasm, he has previously warned about AI’s potential dangers, particularly deepfakes, suggesting “AI detection systems and watermarking” as possible solutions.

Tesla’s AI Ambitions Include Robotaxi Service By 2025

Behind a paywall, Barron’s (1/9) reports that Tesla is advancing its AI-driven self-driving cars and humanoid robots, with plans to launch a robotaxi service by the end of 2025. CEO Elon Musk, in a video interview at the Consumer Electronics Show in Las Vegas, highlighted AI’s potential, stating it will outperform human drivers by early 2025. Tesla aims to produce several thousand robots in 2025, scaling to 500,000 by 2027. While Deutsche Bank analyst Edison Yu estimates Tesla could sell 200,000 robots annually by 2035, Musk’s projections are significantly more ambitious. Tesla stock was down 2% year to date.

AI Investments Powering US Economic Growth

According to NBC News (1/8, Wile), AI investments are significantly powering economic growth in the US, driven by tech companies’ capital spending on hardware and software to expand cloud-computing capacity. AWS, for example, announced an $11 billion investment this week in AI-related projects in Georgia. However, job creation from AI investments remains limited, with construction and utilities sectors benefiting most. The potential for AI to automate jobs poses a risk to employment growth in other sectors. Despite uncertainties about the timing of AI’s broader economic benefits, tech firms continue to invest in anticipation of future profitability.

Character. AI Faces Scrutiny Over School Shooter Chatbots

Forbes (1/9, Daniel) reports that Character. AI, a Google-backed chatbot platform, is under fire after users created chatbots simulating real-life school shooters and victims, allowing graphic role-play scenarios. In response, Character. AI removed the chatbots, stating that users violated its terms of service. The company also announced new measures to filter characters available to users under 18 and restrict access to sensitive topics. Experts raise concerns about how interactive AI tools can influence vulnerable users. Psychologist Peter Langman warned that chatbots could normalize harmful ideologies if users receive no intervention. Digital forensics experts noted that while AI can mimic language patterns, it lacks the ability to provide the nuanced understanding needed to interpret human behavior. The controversy underscores broader challenges in regulating generative AI platforms, with calls for stricter oversight and parental involvement to protect young users.

dtau...@gmail.com

unread,
Jan 18, 2025, 8:45:39 AM1/18/25
to ai-b...@googlegroups.com

Nearly All Americans Use AI; Most Dislike It

A Gallup-Telescope survey of 3,975 U.S. adults conducted Nov. 26-Dec. 4, 2024, found that of the approximately 99% of respondents who used at least one AI-enabled product in the prior week, close to 67% were unaware they were doing so. Gallup's Ellyn Maese said there is "a lot of confusion when it comes to what is just a computer program versus what is truly AI and intelligent."
[ » Read full article ]

Axios; Ivana Saric (January 15, 2025)

 

Apple Joins Consortium to Help Develop Next-Gen AI Datacenter Tech

Apple has joined the Ultra Accelerator Link Consortium, a group working to develop the UALink standard to connect AI accelerator chips, from GPUs to custom-designed chips, to accelerate the training, fine-tuning, and running of AI models. The first UALink products, based on AMD's Infinity Fabric and other open standards, are expected to be released in the next few years. Other consortium members include Intel, AMD, Google, AWS, Microsoft, Meta, Alibaba, and Synopsys.
[ » Read full article ]

Tech Crunch; Kyle Wiggers (January 14, 2025)

 

U.S. Adopts Rules to Guide AI’s Global Spread

The Biden administration on Monday issued rules governing how AI chips and models can be shared with foreign countries. The rules, in essence, divide the world into three categories: the U.S. and 18 allies, which are exempted from any restrictions; nations already subject to U.S. arms embargoes, which will continue to face an existing ban on AI chip purchases; and all other nations, which will be subject to negotiable import caps.
[ » Read full article ]

The New York Times; Ana Swanson (January 14, 2025)

 

Biden Signs Executive Order to Ensure Power for AI Datacenters

President Biden on Tuesday signed an executive order providing federal support for the construction of datacenters to support the growth of AI. The order calls for leasing federal sites owned by the U.S. departments of Defense and Energy to host gigawatt-scale datacenters and new clean power facilities. It requires companies tapping federal land for datacenters to purchase an "appropriate share" of U.S.-made semiconductors.
[ » Read full article ]

Reuters; David Shepardson (January 14, 2025)

 

Forecasting Computation, Energy Costs for Sustainable AI Models

A method developed by North Carolina State University researchers predicts the costs associated with computational resources and energy consumption when updating AI models, allowing users to make informed decisions about when to update AI models to improve their sustainability. The REpresentation Shift QUantifying Estimator (RESQUE) method allows users to compare the dataset on which a deep learning model was initially trained to a dataset that will be used to update the model.
[ » Read full article ]

NC State University News; Matt Shipman (January 13, 2025)

 

PM Plans to 'Unleash AI' Across U.K. to Boost Growth

U.K. Prime Minister Sir Keir Starmer on Monday unveiled the AI Opportunities Action Plan, through which the government plans to use AI to deliver public services more efficiently. The plan calls for the establishment of "AI Growth Zones" and a boost to domestic infrastructure, with tech firms committing £14 billion towards the development of large datacenters and technology hubs.
[ » Read full article ]

BBC; Liv McMahon; Zoe Kleinman; Charlotte Edwards (January 13, 2025)

 

This Turing Award Winner Sees AI Showing Great Promise, Peril

ACM A.M. Turing Award laureate Raj Reddy discussed the benefits and potential drawbacks associated with AI during a recent memorial lecture at the Indian Institute of Science. Reddy said AI could democratize education by eliminating illiteracy and language barriers and facilitating personalized instruction. He warned, however, of AI’s implications for job displacement, and its potential for weaponization for military purposes and disinformation campaigns.
[ » Read full article ]

The Times of India; Akhil George (January 10, 2025)

 

OpenAI Shuts Down Developer Who Made AI-Powered Gun Turret

OpenAI has cut off a developer who built a device that responded to orders given to ChatGPT to aim and fire an automated rifle. The device went viral after a video on Reddit showed the developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls. Open AI said that after viewing the video, “We proactively identified this violation of our policies and notified the developer to cease this activity."
[ » Read full article ]

Gizmodo; Thomas Maxwell (January 9, 2025)

 

MIT Researchers Develop Faster Photonic Chip For Neural Networks

Ars Technica (1/12, Krywko) reports that MIT researchers have developed a photonic chip capable of processing deep neural networks with a latency of 410 picoseconds. This innovation bypasses traditional digitization, allowing calculations with photons directly, which could significantly reduce latency. Saumil Bandyopadhyay, an MIT researcher, emphasizes the importance of speed in applications, stating, “We aim for applications where what matters the most is how fast you can produce a solution.” The team successfully implemented both linear and non-linear operations on the chip, overcoming a significant challenge in photonics. Previously, non-linear functions were offloaded to external electronics, increasing latency. The chip uses Mach-Zehnder interferometers for linear matrix multiplication. This development could lead to faster and more energy-efficient neural network computations.

FTC Reviews Musk’s Lawsuit Against OpenAI Amidst Regulatory Concerns

Reuters (1/10, Godoy) reports the Federal Trade Commission and Department of Justice on Friday weighed in “on Elon Musk’s lawsuit seeking to block OpenAI’s conversion to a public company, pointing out legal doctrines that support his claim that OpenAI and Microsoft engaged in anticompetitive practices.” The FTC and DOJ “were not expressing an opinion on the case, but offered legal analysis on aspects of the case ahead of a Tuesday hearing in Oakland, California.” Separately, the FTC is “looking into partnerships in AI, including between Microsoft and OpenAI, investigating potentially anticompetitive conduct at Microsoft and probing whether OpenAI violated consumer protection laws.”

Administration Proposes Framework To Keep Cutting-Edge AI Limited To US And Allies

Reuters (1/13, Freifeld) reports the Administration “said on Monday it would further restrict artificial intelligence chip and technology exports,” in an effort “to keep advanced computing power in the U.S. and among its allies while finding more ways to block China’s access.” Specifically, the rules would “cap the number of AI chips that can be exported to most countries and allow unlimited access to U.S. AI technology for America’s closest allies, while also maintaining a block on exports to China, Russia, Iran and North Korea.” Commerce Secretary Raimondo said, “The U.S. leads AI now – both AI development and AI chip design, and it’s critical that we keep it that way.”

        The AP (1/13, Boak, O'Brien) reports Raimondo “said on a call with reporters previewing the framework that it’s ‘critical’ to preserve America’s leadership in AI and the development of AI-related computer chips.” She added it “is designed to safeguard the most advanced AI technology and ensure that it stays out of the hands of our foreign adversaries but also enabling the broad diffusion and sharing of the benefits with partner countries.” However, the AP says executives in the industry “raised concerns...the rules would limit access to existing chips used for video games and restrict in 120 countries the chips used for data centers and AI products” as limits may be imposed on “Mexico, Portugal, Israel and Switzerland.”

        The New York Times (1/13, Swanson) calls the rules “an attempt to set up a global framework that will guide how artificial intelligence spreads around the world in the years to come,” and are “dividing the world into three categories” which are: “the United States and 18 of its closest partners” all of whom “are exempted from any restrictions and can buy A.I. chips freely”; those “already subject to U.S. arms embargoes, like China and Russia, will continue to face a previously existing ban on A.I. chip purchases”; and “all other nations – most of the world – will be subject to caps restricting the number of A.I. chips that can be imported, though countries and companies are able to increase that number by entering into special agreements with the U.S. government.” Likewise, the Washington Post (1/13, Vynck, Dou) reports the “unprecedented new export controls” are “intended to slow China’s development of AI, and tighten U.S. government control.”

OpenAI CEO Seeks State Support For More Government Investment In AI

The Washington Post (1/13, Tiku, O'Donovan) reports that OpenAI CEO Sam Altman will conduct “a multistate tour to push for massive infrastructure spending by the incoming Trump administration to support companies working on artificial intelligence.” In 2022, Altman “won over Congress, and especially Democrats, by calling for new AI regulations and warning of the technology’s potential for catastrophic harm.” Now, “OpenAI will argue the states can benefit from the construction of new data centers for use by AI developers, and the electric grid upgrades needed to power the facilities.” President-elect Trump has already “signaled that he supports investing in AI infrastructure, a priority for the tech donors shaping his administration, including Elon Musk and venture capitalists David Sacks and Marc Andreessen.”

        The New York Times (1/13, Metz, Kang) reports that Altman “donated $1 million to President-elect Donald J. Trump’s inaugural fund,” and “now, he and his company are laying out their vision for the development of artificial intelligence in the United States, hoping to shape how the next presidential administration handles this increasingly important technology.”

Op-Ed Details How Trump Can Enhance AI Literacy For K-12 Education

In an opinion piece for The Hechinger Report (1/13), Arman Jaffer, the founder and CEO of AI-powered Chrome extension Brisk Teaching, writes that Donald Trump’s second term offers a chance to enhance AI literacy in K-12 education. Jaffer emphasizes that AI skills are vital for preparing students for tech-driven careers. He notes California’s recent mandate for AI and media literacy in schools but suggests it should focus more on career-specific skills. Jaffer advocates for expanding Trump’s previous career and technical education (CTE) initiatives to include AI, proposing grants to develop AI labs and integrate machine learning into curricula. He argues this would prepare students for an AI-powered workforce and align with Trump’s economic goals. Jaffer highlights existing programs that engage students with AI and stresses the importance of making AI education accessible to all students to foster a future-ready economy.

Biden Signs Order Intended To Spur Development Of AI Infrastructure

The AP (1/14, Parvini) reports that on Tuesday, President Biden “signed an ambitious executive order on artificial intelligence that seeks to ensure the infrastructure needed for advanced AI operations, such as large-scale data centers and new clean power facilities, can be built quickly and at scale in the United States.” Biden’s order “directs federal agencies to accelerate large-scale AI infrastructure development at government sites, while imposing requirements and safeguards on the developers building on those locations. It also directs certain agencies to make federal sites available for AI data centers and new clean power facilities.”

        CNBC (1/14, Haddad) explains the order “empowers the U.S. Department of Defense and Department of Energy to lease federal sites for gigawatt-scale AI data centers.” CNBC notes companies “leasing the federal lands will also be required to purchase an ‘appropriate share’ of U.S.-manufactured semiconductors and to pay workers ‘prevailing wages,’ according to the release.” Reuters (1/14, Shepardson) reports the President “said the order will ‘accelerate the speed at which we build the next generation of AI infrastructure here in America, in a way that enhances economic competitiveness, national security, AI safety, and clean energy.’”

OpenAI Publishes New AI Policy Blueprint

Politico (1/14) reports that OpenAI has released a new policy blueprint focusing on competition with China and domestic safety concerns. The blueprint is part of OpenAI’s effort to influence policy discussions on AI as they plan to demonstrate their latest AI tools in Washington. OpenAI’s VP for global affairs, Chris Lehane, emphasized the need for a forward-thinking approach to national security and economic competitiveness. Despite tensions with Elon Musk, OpenAI aims to collaborate with the incoming Trump administration to ensure the US leads in AI innovation and national security.

        TechCrunch (1/14, Wiggers) reports that OpenAI has removed the phrase endorsing “politically unbiased” AI from its “economic blueprint” for the U.S. AI industry. The revised document omits previous language suggesting AI models should be unbiased. An OpenAI spokesperson stated the change was to “streamline” the document, noting other documents address objectivity. The revision highlights ongoing debates about AI bias, with figures like Elon Musk and David Sacks criticizing AI for alleged liberal bias. OpenAI claims any biases in ChatGPT are unintended “bugs, not features.”

        OpenAI’s o1 Model Observed Switching Languages Unexpectedly. TechCrunch (1/14, Wiggers) reports that OpenAI’s reasoning AI model, o1, displays a peculiar behavior of switching languages during its reasoning process. Users have observed o1 starting in English but transitioning to languages like Chinese or Persian mid-thought. Experts speculate this could be due to o1’s training on diverse datasets, including Chinese characters, or using languages it deems efficient. Matthew Guzdial suggests o1 processes text as tokens, not words, which may explain the inconsistency. Luca Soldaini emphasizes the need for transparency to understand such AI behaviors. OpenAI has not commented on this phenomenon.

AI Tools In Education May Impact Human Connections For Students

The Seventy Four (1/14, Fisher) reports that OpenAI released a safety card for ChatGPT 4.0 in August, highlighting risks such as “anthropomorphization and emotional reliance.” The document warns of AI’s potential to create compelling experiences that might lead to “overreliance and dependence.” This concern extends to educational technology, where AI tools could displace human connections crucial for student well-being. A new report, Navigation and Guidance in the Age of AI, examines AI’s role in college and career guidance, noting that chatbots often adopt human-like names and personalities to provide emotional support. While some students prefer bots over human interaction, leaders in the field are developing AI that fosters genuine relationships. Despite these efforts, few schools prioritize relationship-centered AI, risking increased student isolation. The report suggests that schools should demand evidence of AI’s positive impact on relationships to avoid the “catch-22” of improved AI at the cost of human connections.

Vancouver Hosts First High School AI Research Competition

The Chronicle of Higher Education (1/14, M. Lee) reports that the NeurIPS conference in Vancouver hosted its first research competition for high school students, with 18-year-old Weichen Huang among the winners. Huang, who traveled from Dublin, Ireland, was excited to present his machine-learning project among 17,000 attendees, including prominent figures from Meta, Alphabet, and Microsoft. The competition aimed to “get the next generation excited” about AI, but some critics argue it may set unrealistic expectations and exacerbate inequities. Assistant professor Gautam Kamath from the University of Waterloo remarked, “I feel like they slapped on a science-fair aspect to the entire conference.” NeurIPS received more than 330 high school submissions, with a selection rate of about 8%. Graduate student Fred Zhangzhi Peng from Duke University noted the resource challenges for high school students in AI research, saying, “For most of the average high schoolers, there’s no way you can afford that kind of computing.”

Meta Develops Innovative Real-Time Speech Translation System

Ars Technica (1/15, Krywko) reports that Meta’s Seamless team is addressing the challenges of real-time speech translation by creatively overcoming data scarcity. Current AI translators often falter in speech-to-speech translation due to the accumulation of errors in multi-stage processes. While some systems can translate directly into English, they lack bidirectional communication capabilities. Meta’s team, inspired by Warren Weaver’s 1949 idea of a universal language, utilized “multidimensional vectors” as a common base for human communication. Machines convert words into numerical vectors, which are sequences of numbers representing meaning. When you “vectorize aligned text in two languages like those European Parliament proceedings, you end up with two separate vector spaces,” allowing neural networks to map these spaces. This approach aims to improve translation quality and facilitate seamless communication akin to a “Star Trek universal translator.”

        Meta Faces Legal Scrutiny Over AI Training Practices. The Verge (1/14) reports that a copyright lawsuit against Meta has uncovered internal communications about its AI development plans, including using copyrighted data for training. Court documents reveal Meta’s alleged use of the book piracy site Library Genesis (LibGen) to develop its AI model, Llama, while attempting to conceal this. Emails suggest Meta executives, including Ahmad Al-Dahle, weighed the risks of using pirated content. The lawsuit, filed by Richard Kadrey and Sarah Silverman, accuses Meta of violating intellectual property laws. Meta has argued that using copyrighted material for training should be considered fair use.

Report: AI Use In Schools Rises Despite Privacy Concerns

K-12 Dive (1/15, Merod) reports that the Center for Democracy & Technology (CDT) released a report Wednesday highlighting increased use of generative AI by students and teachers between the 2022-23 and 2023-24 school years. Teacher use rose from 51% to 67%, while student use increased from 58% to 70%. Teachers were “more likely to tap into AI for school uses over personal reasons,” while students did the opposite, CDT noted. Despite this rise, two-thirds of teachers lack guidance on handling AI-related plagiarism, though 39% use AI detection software. Concerns persist about AI detectors’ reliability, with claims that they may harm English learners and students with disabilities. Additionally, 23% of teachers reported large-scale data breaches in schools during the 2023-24 school year. Elizabeth Laird, director of the Equity in Civic Technology Project at CDT, in a Wednesday statement, emphasized the need for schools to communicate with families about the use of educational technology.

SUNY Mandates AI Education For Undergraduates

Inside Higher Ed (1/16, Alonso) reports that the State University of New York (SUNY) will require all undergraduate students to study artificial intelligence (AI) as part of their general education. This decision, announced earlier this month, modifies the “core competencies” by including AI ethics and literacy in the Information Literacy requirement, effective fall 2026. SUNY chancellor John B. King emphasized the importance of understanding AI ethically, stating, “We are proud that … we will help our students recognize and ethically use AI.” The curriculum change coincides with rising concerns about AI’s ethical implications, including potential workforce impacts. Courses across SUNY’s 64 institutions will incorporate AI content, with individual departments developing specific curricula. Lauren Bryant, a lecturer at the University at Albany, already integrates AI discussions in her course, highlighting AI’s strengths and limitations. Sam Wineburg, a professor from Stanford University, warns of students’ potential struggles with AI, noting, “There’s no indication that students have the prerequisite skills.”

AI-Enabled Robot Learns To Dance By Mirroring Humans

Popular Science (1/16, DeGeurin) reports that researchers from the University of California, San Diego have developed an AI-enabled robot capable of performing a Waltz by mimicking its human partner’s movements. The team created an AI model, ExBody2, trained on human motion capture data, and integrated it into Unitree G1 robots. These robots analyze and replicate human motions using real-world data captured by their cameras. Unlike pre-programmed robots, this approach allows the robot to learn movements organically, making it more adaptable. Videos show the robot executing various movements, such as sidestepping and squatting. Researchers highlight that this method could reduce the need for frequent retraining, potentially accelerating robot development and lowering costs.

Microsoft CEO Discusses AI Investment With President-Elect, Musk

The Bloomberg (1/16, Subscription Publication) reports that Microsoft CEO Satya Nadella met with US President-elect Donald Trump and Elon Musk to discuss artificial intelligence and cybersecurity. Microsoft plans to invest $80 billion in AI data centers globally, with over $50 billion in the US, creating American jobs. Microsoft President Brad Smith, present at the meeting, advised against “heavy-handed regulations” on AI. Microsoft and other cloud providers are expanding data centers, driven by AI demand. Microsoft has partnered to reopen a nuclear reactor for power needs, similar to agreements by Amazon and Google.

dtau...@gmail.com

unread,
Jan 26, 2025, 1:55:31 PM1/26/25
to ai-b...@googlegroups.com

Tech Giants Announce U.S. AI Plan Worth up to $500 Billion

OpenAI, Oracle, and Softbank on Tuesday announced a partnership to build datacenters and other infrastructure to power AI, in partnership with MGX, a tech investment arm of the United Arab Emirates government. The Stargate initiative aims to invest $100 billion "immediately" and $500 billion over the next four years. U.S. President Donald Trump said the plan is a "resounding declaration of confidence in America's potential."
[ » Read full article ]

BBC News; João da Silva; Natalie Sherman (January 22, 2025)

 

Executive Order Calls for AI ‘Free from Ideological Bias’

President Trump on Thursday signed an executive order revoking past government policies on AI that “act as barriers to American AI innovation.” To maintain global leadership, “We must develop AI systems that are free from ideological bias or engineered social agendas,” the order states. While the order does not specify which policies are hindering AI development, it calls for a review of “all policies, directives, regulations, orders, and other actions taken” as a result of the former administration's AI executive order.
[ » Read full article ]

Associated Press; Matt O'Brien; Sarah Parvini (January 23, 2025)

 

Self-learning Chip Mimics Brain Functions

A miniature computing chip developed by researchers in South Korea can self-learn and correct errors much like the human brain does. When processing video streams, for example, the chip teaches itself how to separate moving objects from the background, improving its performance over time. Researchers at the Korea Advanced Institute of Science and Technology said the new chip "is like a smart workspace where everything is within reach instead of moving between a desk and a filing cabinet."
[ » Read full article ]

Chosun Biz (South Korea); Hong A-reum (January 17, 2025)

 

Chinese AI Startup Competes with Silicon Valley Giants

Chinese startup DeepSeek recently unveiled an AI system that could match the capabilities of the latest chatbots from companies like OpenAI and Google. In a research paper accompanying the release of its DeepSeek-V3, the team explained how they used about 2,000 specialized computer chips from Nvidia to train their system. By comparison, the world’s leading AI companies train their chatbots with supercomputers that use as many as 16,000 chips, or more.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Meaghan Tobin (January 24, 2025)

 

Trump Scraps Biden’s Sweeping AI Order

U.S. President Trump rescinded an executive order by former U.S. President Biden regulating AI, immediately halting implementation of safety and transparency requirements for AI developers. Biden’s order required leading AI companies to share safety test results and other critical information for powerful AI systems with the federal government. It also prompted the creation of the U.S. AI Safety Institute, housed under the U.S. Commerce Department, to create voluntary guidelines and best practices for the technology’s use.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jackie Davalos; Oma Seddiq (January 21, 2025)

 

AI Assembles Quantum Computer From Cold Atoms

A quantum computer developed by researchers at China’s University of Science and Technology features 2,024 atoms assembled by AI into an ultracold grid. The researchers developed an AI algorithm capable of recommending a sequence of laser beams and atoms to form the grid within 60 milliseconds, regardless of the grid's size.

[ » Read full article *May Require Paid Registration ]

New Scientist; Karmela Padavic-Callaghan (January 14, 2025)

 

College Admissions Evolving With AI Tools And Test-Optional Policies

Forbes (1/18, Hernholm) contributor Sarah Hernholm wrote that the college admissions landscape is experiencing significant changes due to technology, policies, and shifting priorities. The move towards test-optional policies, initiated during the pandemic, continues with more than 1,800 institutions adopting it, while some have reinstated test requirements. AI tools like Scoir and MaiaLearning are revolutionizing college searches by aligning applicants with suitable institutions, though reliance on AI for essays is cautioned against. Colleges, such as Georgia State University, use AI to streamline processes, but concerns about equity persist. Holistic admissions now emphasize extracurricular activities and personal essays, with 56% of colleges valuing them highly. Career-oriented programs are gaining traction, with universities offering co-op education and partnerships, as seen with Purdue University’s collaboration with United Airlines. Liberal-arts colleges remain relevant by showcasing versatile skills. Northeastern University’s co-op program exemplifies integrating academics with career preparation.

How AI Enhances Education For Neurodivergent Children

Forbes (1/19, Palumbo) contributor Jennifer Jay Palumbo wrote that traditional educational methods often fail to meet the needs of neurodivergent children, with 70% thriving when information is presented visually. However, creating personalized materials is resource-intensive, leaving educators and parents struggling. Jaivin Anzalota, co-founder of education platform Ella, said, “Educators and therapists know individualized visual supports make a difference, but they lack the time, energy, and expertise to create them.” Antoinette Banks, founder of Expert IEP, highlights AI’s potential, stating, “AI can adapt to how people naturally think and process information.” AI tools can generate customized visual aids and task lists, benefiting children who process information differently. Banks said, “AI recognizes these differences and provides tools tailored to each child’s needs.” AI also raises ethical concerns, such as data privacy and over-reliance on technology. Anzalota added, “Technology has the power to enable inclusion in meaningful ways.”

Trump Lauds $100B AI Joint Venture

The AP (1/21, Boak, Miller) reports President Trump on Tuesday “talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.” The AP also notes the White House said Stargate “will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas,” beginning with an investment “expected to be $100 billion and could reach five times that sum.” According to Politico (1/21, Ng, Daniels), “AI development is a significant part of the Trump administration’s tech policy proposals, seeking to advance growth in not just the technology itself, but the data centers and energy capabilities it requires.”

        The New York Times (1/21, Kang, Metz) calls it “an early trophy for Mr. Trump, even though the effort to form the venture predates his taking office.” Likewise, the Wall Street Journal (1/21, Seetharaman, Dotan, Subscription Publication) highlights Stargate is “the latest high-profile initiative timed with the start of the Trump administration,” even though it “includes projects that the companies already announced and initiated under the Biden administration, people familiar with the matter said.” Furthermore, CNN (1/21, Duffy) reports Stargate’s creation comes after “AI leaders [spent] months...sounding the alarm that more data centers – as well as the chips and electricity and water resources to run them – are needed to power their artificial intelligence ambitions in the coming years.”

        Meanwhile, Reuters (1/21, Bose, Chiacu) reports White House Press Secretary Karoline Leavitt earlier claimed the “massive announcement” is “going to prove that the world knows that America is back.” However, Reuters casts Leavitt as “echoing an unrealized promise during Trump’s first term to bolster aging America’s roads, bridges and other networks,” and Bloomberg (1/21, Lai, Subscription Publication) says “skepticism remains about whether the initiative...actually amounts to a dramatic increase from previous plans.” Furthermore, Bloomberg notes that “the actual scope of new commitments remained unclear.”

Google Targets 500M Users For Gemini Chatbot

The Wall Street Journal (1/21, Subscription Publication) reports that Google CEO Sundar Pichai aims for the Gemini chatbot to reach 500 million users by the end of the year. Despite being ambitious, this target is achievable given Google’s existing user base across its products. Google plans to leverage partnerships with Android phone makers, such as Samsung and Motorola, to promote Gemini. The company has also made strides in AI technology, surpassing OpenAI in some rankings. Google’s focus on Gemini reflects its strategy to maintain a strong presence in the evolving AI chatbot market and potentially disrupt traditional search methods.

Survey Reveals College Leaders’ Divisions On Generative AI Readiness

The Chronicle of Higher Education (1/23, McMurtrie) reports that a recent survey conducted by the American Association of Colleges and Universities and Elon University’s Imagining the Digital Future Center highlights concerns among college leaders about the readiness of institutions to integrate generative AI. The survey, titled “Leading Through Disruption: Higher Education Executives Assess AI’s Impacts on Teaching and Learning,” involved more than 330 senior leaders, revealing that only 43% feel prepared to use AI effectively. The survey indicates that “93 percent cited faculty unfamiliarity with generative AI” as a significant challenge. Lynn Pasquerella, AAC&U president, emphasized the need for proactive measures, stating that leaders must “actively investigate and seek to comprehend the risks and rewards of AI.” The report also shows mixed views on AI’s impact, with 45% seeing it as more positive than negative. Institutions with more than 10,000 students show more confidence in AI adoption.

AI Tutors Enhance Student Learning And Confidence In College Course Materials

Inside Higher Ed (1/22, Mowreader) reports that Macmillan Learning’s AI Tutor, integrated into its Achieve platform, supports college students in STEM and economics courses by addressing questions and enhancing learning. The generative AI tutor acts as “an extension of an instructor or teaching assistant,” offering guidance without judgment. Analysis of more than two million messages from 8,000 students across 80 courses showed the tool’s effectiveness in promoting self-efficacy and problem-solving through Socratic questioning. Students engaged with the AI Tutor for an average of 6.3 minutes per session, often using it during late-night hours. Surveys indicated that 41% of instructors observed improved student confidence and exam performance, while 44% of students reported increased confidence in their problem-solving skills. Despite concerns about AI misuse, 67% of students reported using the tutor only when necessary.

Musk, Altman Feud Over Trump Stargate AI Project Announcement

The AP (1/22) reports Elon Musk “is clashing” with OpenAI CEO Sam Altman over the $500 billion Stargate artificial intelligence infrastructure project which was announced by President Trump on Tuesday. In a post on X, Musk alleged that a primary investor, SoftBank, doesn’t “actually have the money” to fund the project. Altman responded, telling Musk he is “wrong, as you surely know,” and adding that Stargate “is great for the country” while urging Musk to “mostly put (America) first” in his role in the Administration. CNBC (1/22, Breuninger) reports that while OpenAI, Oracle, and Softbank did not comment on Musk’s claim, a “person familiar with the AI project” told CNBC that Musk was “far off base.” The source also suggested that “Musk’s testy relationship with Altman was the catalyst for his posts about Stargate.” Similarly, CNN (1/22, Gold) says that “it should not be a surprise that Musk is going after an OpenAI initiative,” as he “is in an ongoing lawsuit with OpenAI and its CEO Sam Altman.” Musk previously said he “doesn’t trust” Altman, and “claims in the lawsuit the ChatGPT has abandoned its original nonprofit mission by reserving some of its most advanced AI technology for private customers.”

        The Wall Street Journal (1/22, Schwartz, Subscription Publication) says the exchange revealed the “sometimes awkward dynamic” between Musk and Trump, and showed that Musk “won’t pare back his unfiltered online commentary now that Trump has taken office.” Bloomberg (1/22, Subscription Publication) claims the exchange could start an “early internal rift within the White House,” and “underscored some of the tensions that could dominate Trump’s second term in office and echo issues he faced during his last stint at the White House.”

        Meanwhile, Politico (1/22) says the argument “quickly went from a political victory lap for Trump to an almost comical illustration of what billionaires will fight over in public.”

Community Colleges Form AI Consortium To Enhance Workforce Readiness

Inside Higher Ed (1/23, Palmer) reports that colleges and universities are launching initiatives to prepare students for AI-related jobs, varying by resources and industry ties. Community colleges, serving many low-income students, aim to bridge this gap. Michael Baston, president of Cuyahoga Community College (Tri-C) in Ohio, emphasizes the importance of inclusivity in the AI revolution, stating, “We have a moral and ethical responsibility to make sure the masses don’t get left out.” Tri-C and other colleges joined the Complete College America’s inaugural AI Readiness Consortium, aiming to design 25 new courses incorporating AI tools. Charles Ansell, vice president for research, policy and advocacy at CCA, warns that without innovation, “we’re going to see a reduction in career ladders.” CCA invests $500,000 to support this initiative, with Riipen, “a Vancouver-based education-technology start-up and work-based learning platform that allows instructors to embed employer projects directly into classroom instruction,” aiding in embedding real employer projects into coursework.

US, EU Take Different Tacks On AI Regulation

TechTarget (1/23, Pariseau) reports the Trump Administration this week “rescinded its predecessor’s executive order on AI safety this week, while the European Union will begin enforcing its own new regulations beginning next month, potentially putting multinational companies in a regulatory bind.” For now, “action on AI safety in the U.S. might fall to state and local governments, along with efforts by private-sector groups such as the Cloud Security Alliance’s AI Safety Initiative and the Coalition for Secure AI.” Some industry analysts “said they were concerned that a regulation such as the EU’s AI Act looks to deploy controls against a technology that is still so nascent and rapidly evolving, it’s difficult to know what will even be relevant in a matter of a few months.”

K-12 Schools Face AI Integration Challenges

K-12 Dive (1/23, Merod) reports that K-12 schools are navigating the integration of artificial intelligence (AI) amid both opportunities and challenges. As schools receive guidance from national organizations and the federal government, concerns about AI misuse, such as deepfakes and lawsuits, are emerging. Kris Hagel, chief information officer at Peninsula School District in Washington, highlights the uncertainty of federal AI support under President Trump’s second term. Pat Yongpradit, chief academic officer of Code.org and lead for TeachA, notes that state education agencies will likely continue developing AI resources, with 24 states already releasing guidance. AI tools tailored for special education and English learners are expected, with Hagel advising against using free AI tools like ChatGPT, advocating for secure AI enterprise systems instead. Yongpradit anticipates increased teacher reliance on AI detectors but advises focusing on teaching motivations to address cheating. Despite interest, some districts struggle with AI due to resource constraints and lack of understanding.

 

DOD HOPES FOR STARGATE BENEFIT: If OpenAI can actually implement its Stargate Project to build $500 billion worth of AI infrastructure in the U.S., one of the major beneficiaries may be the U.S. military. “It depends on how much of that they devote to gov[ernment] cloud and AI cloud,” said Roy Campbell, chief strategist for the Pentagon’s High Performance Computing Modernization Program and deputy director for advanced computing in the undersecretariat for research and engineering. And if the Defense Department can get a slice of Stargate’s computing power, he told Breaking Defense, it could bypass a major bottleneck for its current high-tech ambitions.

dtau...@gmail.com

unread,
Feb 2, 2025, 7:25:23 PM2/2/25
to ai-b...@googlegroups.com

International AI Safety Report Released Ahead of Action Summit

An international AI safety report published Wednesday ahead of the AI Action Summit hosted by France next month compiled insights from 100 independent international experts. ACM A. M. Turing Award laureate Yoshua Bengio, the driving force behind the report, said that while AI holds "great potential" for society, it also presents "significant risks." He said the intention of the report was to “facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations."
[
» Read full article ]

Gov.UK (January 29, 2025)

 

LeCun Says DeepSeek's Success Shows Benefits of Open Source Models

ACM A.M. Turing Award laureate Yann LeCun says the success of the R1 model released recently by Chinese AI company DeepSeek shows the value of keeping AI models open source. It's not that China's AI is "surpassing the U.S.," but rather that "open source models are surpassing proprietary ones," LeCun said in a post on Instagram’s Threads app. "They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it."
[ » Read full article ]

Business Insider; Katie Balevic; Lakshmi Varanasi (January 25, 2025)

 

Sensitive DeepSeek Data Exposed to Web

Cybersecurity firm Wiz said in a blog post that scans of Chinese AI startup DeepSeek's infrastructure showed that company had inadvertently left more than a million lines of data available unsecured, including digital software keys and chat logs that appeared to capture prompts being sent from users to the company's recently unveiled AI assistant. After alerting DeepSeek of the find, the company quickly secured the data.
[
» Read full article ]

Reuters; Raphael Satter (January 29, 2025)

 

Initiative Aims to Enable Ethical Coding LLMs

Nonprofit Software Heritage has launched the CodeCommons project with the goal of creating the biggest repository of ethically sourced code for training AI models. CodeCommons will be focused on developing a unified data platform that gives researchers access to pre-cleaned code collections featuring license information, links to related research papers, and other metadata.
[
» Read full article ]

IEEE Spectrum; Edd Gent (January 28, 2025)

 

AI, Holograms Create Uncrackable Optical Encryption System

By combining AI with holographic encryption, a team led by Stelios Tzortzakis at the University of Crete in Greece developed an ultra-secure data protection system that uses neural networks to retrieve elaborately scrambled information stored as a hologram. The researchers found the neural network could accurately retrieve encoded images 90-95% of the time.
[
» Read full article ]

Optica (January 30, 2025)

 

'First AI Software Engineer' Bad at Job

Auto-coder “Devin,” billed as "the first AI software engineer" when it was introduced last March by Cognition AI, performed badly on an exam from data scientists affiliated with Answer.AI, completing just three out of 20 tasks successfully. The service uses Slack as its main interface for commands, which are sent to its computing environment, a Docker container that hosts a terminal, browser, code editor, and planner. According to the examiners, Devin had a habit of getting stuck in technical dead-ends or producing overly complex, unusable solutions.
[ » Read full article ]

The Register (U.K.); Thomas Claburn (January 23, 2025)

 

AI Boom Is Giving Rise to 'GPU-as-a-Service'

Kinesis, Hyperbolic, Runpod, and Vast.ai are among the firms offering access to computing power to AI startups via GPU-as-a-Service (GPUaaS). GPUaaS is more cost-effective for AI startups by eliminating the need to purchase and maintain physical infrastructure and allowing startups to pay for their exact amount of GPU usage. It also is more sustainable because it takes advantage of existing, unused processing units and does not require new servers.
[ » Read full article ]

IEEE Spectrum; Juan Pablo Perez (January 20, 2025)

 

'The Brutalist' Sparks Controversy After Film's Editor Reveals Use of AI

A debate has emerged about whether "The Brutalist" should be considered for an Oscar after film editor Dávid Jancsó disclosed that the AI tool Respeecher was used to enhance the accents of lead actors Adrien Brody and Felicity Jones when speaking Hungarian. Noting that "it's an extremely unique language," Jancsó said they "wanted to perfect it so that not even locals will spot any difference." AI also was used to produce architectural drawings and finished buildings shown in the film.
[ » Read full article ]

NBC News; Rebecca Cohen; Chloe Melas (January 20, 2025)

 

Vatican Warns About the Risks of AI

A paper issued by the Vatican Jan. 28 emphasizes the need for constant AI oversight, citing the wealth of opportunities provided by the technology, as well as its "profound risks." The paper, developed by a Vatican team in conjunction with AI and other experts, expressed concerns about the potential for AI to destroy trust by spreading misinformation, its ability to cause isolation, and its possible harmful effects on human relationships.


[
» Read full article *May Require Paid Registration ]

The New York Times; Elisabetta Povoledo (January 29, 2025)

 

AI-Powered Robot, Gaming Help Scientists Identify Deep-Sea Species

Monterey Bay Aquarium Research Institute (MBARI) scientists are using an AI-powered robot, MiniROV, to locate and track marine organisms autonomously. According to MBARI's Kakani Katija, "The goal is to track individual animals for up to 24 hours so we can answer questions about the animal's behavior and ecology." The researchers also launched FathomVerse, a game that allows citizen scientists to explore a virtual ocean and classify marine organisms in the FathomNet database in an effort to train the AI.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Todd Woody (January 29, 2025)

 

Chevron Joins Race to Generate Power for AI

Chevron is partnering with Engine No. 1, a San Francisco-based investment firm, to build natural gas-fueled power plants that will feed energy directly to AI datacenters, joining other oil and gas producers that are adjusting their strategies and leaning into power generation rather than drilling and processing. Last month, Exxon said that it, too, wanted to get into the business of selling electricity to datacenters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Rebecca F. Elliott (January 28, 2025)

 

In Seattle, a Convergence of 5,444 Mathematical Minds

The Joint Mathematics Meetings was held in Seattle Jan. 8-11, drawing 5,444 mathematicians with the theme of "Mathematics in the Age of AI." Yann LeCun, Meta's chief AI scientist and an ACM A.M. Turing Award laureate, delivered a keynote in which he discussed the current state of machine learning. LeCun also suggested a "large-scale world model" as an alternative for generative large language models, noting that it "can reason and plan because it has a mental model of the world that predicts consequences of its action."


[
» Read full article *May Require Paid Registration ]

The New York Times; Siobhan Roberts (January 28, 2025)

 

Meta Announces $65B Investment To Accelerate AI Innovations In 2023

The New York Times (1/24, Isaac) reports, on Friday, Mark Zuckerberg said Meta “expected its capital expenditures in 2025 to come in at an estimated $60 to $65 billion, a big increase compared with the roughly $38 to $40 billion Meta spent in 2024.” Much of that amount will go towards “building and expanding data centers, the warehouse-size buildings that provide the computing power that fuels Meta’s A.I. products and algorithms across its apps, which include Facebook, Instagram and WhatsApp.” In a Facebook post, Zuckerberg said, “This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership.” The Wall Street Journal (1/24, Subscription Publication) provides similar coverage.

AI-Powered Charter School Faces Skepticism In Pennsylvania

Chalkbeat (1/24, Sitrin) reported that MacKenzie Price is proposing a new cyber charter school in Pennsylvania, utilizing AI-powered lesson plans and virtual reality experiences. The school, Unbound Academy, aims to launch in 2025, initially serving 500 students with only four teachers. Price claims her 2 Hour Learning model, co-founded with proprietary AI software and third-party apps, can significantly enhance academic performance. However, her model has faced rejection in several states, and critics argue it relies on selective data from private schools. “The results that I’ve been able to get from our schools have been absolutely phenomenal,” Price stated. Despite her assertions, skepticism remains about the method’s effectiveness and the role of teachers The Pennsylvania Department of Education is expected to decide on the charter’s approval soon, amid calls for more scrutiny on cyber charters.

DeepSeek’s AI Models Challenge US Tech Industry’s Dominance

Reuters (1/27) reports that Chinese startup DeepSeek has launched AI models, DeepSeek-V3 and DeepSeek-R1, claiming they rival or surpass US models at lower costs. DeepSeek’s AI Assistant has surpassed ChatGPT as the top-rated free app on Apple’s US App Store, raising questions about US tech firms’ AI investments. DeepSeek’s claims have been met with skepticism, including from Scale AI’s Alexandr Wang. DeepSeek is led by Liang Wenfeng, co-founder of High-Flyer. DeepSeek’s success has caught Beijing’s attention, with Liang attending a symposium hosted by Premier Li Qiang.

        Insider (1/27, Barr) reports that DeepSeek’s models challenge OpenAI’s proprietary approach, with pricing 20-40 times lower, according to Bernstein tech analysts. DeepSeek’s Reasoner model costs 55 cents per 1 million tokens, compared to OpenAI’s o1 model at $15. The analysts noted that this pricing strategy raises questions about the viability of proprietary versus open-source models.

        TechCrunch (1/27, Chant) reports that DeepSeek’s efficiency raises questions about the necessity of large hardware investments in AI, potentially impacting data center demand and energy consumption. DeepSeek claims to have used 2,048 Nvidia H800 GPUs for training, much less than OpenAI’s reported usage. Nvidia’s stock fell 16%, and concerns grow for new nuclear and natural gas investments. Citigroup’s Atif Malik remains skeptical of DeepSeek’s claims, suggesting potential implications for energy strategies.

        Meanwhile, the Washington Post (1/27, A1, Gregg, Najmabadi, Dou, Zakrzewski, Tiku) reports the “sudden popularity” of DeepSeek “prompt[ed] debate in political and tech industry circles about how the United States can maintain its lead in AI.” The Post notes Victoria LaCivita, spokeswoman for the White House Office of Science and Technology, “said former president Joe Biden’s policies had failed to limit access to American technology and created an opportunity for China and other foreign adversaries to make gains in AI development,” while David Sacks, President Trump’s AI and crypto czar, “said in a post on X that DeepSeek ‘shows that the AI race will be very competitive.’” However, the Post says “the Trump administration has shared few specifics about its own approach to AI policy,” and the President last week “rescinded a sweeping executive order on AI signed by Biden in 2023 and signed an executive order of his own directing agencies to rescind all actions taken under the Biden order ‘that are inconsistent with enhancing America’s leadership in AI.’” Nonetheless, Reuters (1/27, Carew, Cooper, Banerjee) reports the President “said that DeepSeek should be a ‘wakeup call’ and could be a positive development.”

        David Wallace-Wells writes at the New York Times (1/27) that DeepSeek AI has created a “earthquake” of speculation over its low cost and high performance, “suggesting two truly seismic possibilities about the technological future on which so much of the American economy has recently been wagered.” Wallace-Wells explains that it either reveals the “American advantage on A.I. may be much smaller than has been widely thought,” or that the “approach to improving performance by building out ever-larger and more expensive data centers for training” is inefficient.

        DeepSeek Suffers “Large-Scale” Cyberattack The AP (1/27, Parvini) reports DeepSeek on Monday “said that it had suffered ‘large-scale malicious attacks’ on its services,” which “disrupted users’ ability to register on the site.” In response, Reuters (1/27, Baptista, Kachwala, Bajwa) reports DeepSeek announced it would “temporarily limit registrations.” However, DeekSeek “resolved issues relating to its application programming interface and users’ inability to log in to the website, according to its status page.”

Experts: US Military Rushing Into AI Too Quickly

AI Now Institute executives Heidy Khlaaf and Sarah Myers West write at the New York Times (1/27) that the integration of AI into military systems is raising national security concerns due to potential flaws and cybersecurity vulnerabilities. They explain that older AI models “have had problems with accuracy and can introduce greater potential for error,” and new systems “are even more worrisome” because they “frequently ‘hallucinate,’ asserting patterns that do not exist or producing nonsense.” They conclude that US military leaders should not “overlook the risks that A.I.’s current reliance on of sensitive data poses to national security or to ignore its core technical vulnerabilities.”

China’s DeepSeek Raises Questions About US Export Controls, Creates AI Urgency For Administration

The New York Times (1/28, Swanson, Tobin) reports that the US “has worked steadily over the past three years to limit China’s access to the cutting edge computer chips that power advanced artificial intelligence systems,” with an aim “to slow China’s progress in developing sophisticated A.I. models.” But now DeepSeek, a Chinese firm, “has created that very technology,” raising “big questions about export controls built by the United States in recent years” and provoking “a fierce debate over whether US technology controls have failed.”

        Reuters (1/28, Shalal, Shepardson, Raj Singh) says, “US officials are looking at the national security implications of the Chinese artificial intelligence app DeepSeek, White House press secretary Karoline Leavitt said on Tuesday, while...Trump’s crypto czar said it was possible that intellectual property theft could have been at play.”

        Meanwhile, the New York Times (1/28, Yuan) reports, “Inside China, it was called the tipping point for the global technological rivalry with the United States and the ‘darkest hour’ in Silicon Valley, evoking Winston Churchill.” The Times calls it “possibly a breakthrough that could change the country’s destiny.”

Stopping China’s DeepSeek From Using US AI May Be Difficult, Experts Say

Reuters (1/29) reports, “Top White House advisers this week expressed alarm that China’s DeepSeek may have benefited from a method that allegedly piggybacks off the advances of US rivals called ‘distillation,’” a technique that “involves one AI system learning from another AI system” and “may be difficult to stop, according to executive and investor sources in Silicon Valley.” This “means the newer model can reap the benefits of the massive investments of time and computing power that went into building the initial model without the associated costs.”

        Meanwhile, the New York Times (1/29, Metz) reports, “OpenAI says it is reviewing evidence that...DeepSeek broke its terms of service by harvesting large amounts of data from its A.I technologies.” The San Francisco-based start-up “said that DeepSeek may have used data generated by OpenAI technologies to teach similar skills to its own systems,” and its “terms of service say that the company does not allow anyone to use data generated by its systems to build technologies that compete in the same market.” NBC Nightly News (1/29) quoted AI and Crypto Czar Sacks as saying, “There is substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s model.”

        Microsoft, OpenAI Investigating If DeepSeek Improperly Obtained Data. Bloomberg (1/29, Bass, Ghaffary, Subscription Publication) reports, “Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek, according to people familiar with the matter.” According to Bloomberg, “Microsoft’s security researchers in the fall observed individuals they believe may be linked to DeepSeek exfiltrating a large amount of data using the OpenAI application programming interface, or API, said the people.” Reuters (1/29) reports that OpenAI stated on Tuesday that Chinese companies are “constantly” attempting to access U.S. competitors to enhance their AI models. OpenAI emphasized the importance of collaborating with the U.S. government to protect advanced models from adversaries. Reuters (1/29) reports separately that Israeli cybersecurity firm Wiz “says it has found a trove of sensitive data from the Chinese artificial intelligence startup DeepSeek inadvertently exposed to the open internet.”

        DeepSeek’s R1 Chatbot Challenges ChatGPT. Wired (1/27, Rogers) reports DeepSeek’s AI chatbot, developed by a Chinese startup, has surpassed OpenAI’s ChatGPT on Apple’s US App Store. The free-to-use R1 model rivals OpenAI’s o1 “reasoning” model without a subscription fee, and was trained with less powerful AI chips. Despite its potential to disrupt US-based AI companies, the chatbot shares common generative AI issues, such as hallucinations and lack of memory features.

Google Warns Hackers In Over 20 Countries Using Gemini AI Tool To Increase Efficiency

The Wall Street Journal (1/29, Volz, McMillan, Subscription Publication) reports Google released findings Wednesday that hackers linked to China, Iran, and over 18 other countries are utilizing Google’s Gemini chatbot for tasks like writing malicious code and researching targets. The report highlights that groups tied to China, Iran, Russia, and North Korea appear to currently use Gemini to increase productivity, not to develop new hacking techniques.

Pentagon Workers Used DeepSeek Chatbot Prior To Block

Bloomberg (1/30, Manson, Robertson, Subscription Publication) reports Defense Department employees “connected their work computers to Chinese servers to access DeepSeek’s new AI chatbot for at least two days before the Pentagon moved to shut off access, according to a defense official familiar with the matter.” The Defense Information Systems Agency, which is “responsible for the Pentagon’s IT networks, moved to block access to the Chinese startup’s website late Tuesday, the official and another person familiar with the matter said. Both asked not to be named because the information isn’t public.”

        US Tech Giants Rush To Reassure AI Investors After DeepSeek Stock Market Shock. The Washington Post (1/30) reports that the launch of Chinese chatbot DeepSeek has significantly affected US tech stocks, reducing their value by a trillion dollars. On Wednesday, Meta and Microsoft CEOs reassured investors about ongoing AI investments. Despite DeepSeek’s success, both companies plan to invest billions in AI infrastructure. Microsoft’s Satya Nadella highlighted that increased access to AI models would boost demand for Microsoft’s cloud services. Meta’s Mark Zuckerberg supported free AI model distribution, aligning with DeepSeek’s approach. DeepSeek’s innovations are under Meta’s scrutiny, with “war rooms” set up to analyze its technology. OpenAI accused DeepSeek of using its AI responses, while AI analysts questioned DeepSeek’s low-cost claims. Meta and Microsoft remain committed to AI spending, with Meta expecting increased capital expenditures and Microsoft planning $80 billion in AI infrastructure investments this year.

        Blackstone Remains Optimistic About AI Infrastructure. The New York Times (1/30, Farrell) reports that while “Chinese A.I. start-up DeepSeek upended the prevailing view that artificial intelligence systems require huge amounts of power and investment,” Blackstone remains confident in the “vital need for physical infrastructure, data centers and power.” Jonathan Gray, Blackstone’s president, emphasized their strategy of building data centers exclusively for technology firms with long-term leases, stating, “We don’t build them speculatively.” Blackstone’s recent investments include a $10 billion acquisition of QTS and a $16 billion deal for AirTrunk. Gray anticipates increased AI adoption as computing costs decrease, suggesting usage patterns may evolve. Blackstone’s stock has risen 40% over the past year, reflecting strong investor confidence in its strategic focus on AI infrastructure.

Sources: OpenAI In Talks To Raise Up To $40B In Funding Round

The Wall Street Journal (1/30, Jin, Seetharaman, Subscription Publication) reports, “OpenAI is in early talks to raise up to $40 billion in a funding round that would value the ChatGPT maker as high as $300 billion, according to people familiar with the matter.” The Journal says, “SoftBank would lead the round and is in discussions to invest between $15 billion and $25 billion,” and “the remaining amount would come from other investors.”

House Lawmakers Urge Trump To Restrict Export Of Nvidia Chips To China’s DeepSeek

Reuters (1/30, Cook, Mohsin, Leonard) House Select Committee on China Chair John Moolenaar (R-MI) and Vice Chair Raja Krishnamoorthi (D-IL) are calling on the Administration “to consider restricting the export of artificial intelligence chips made by Nvidia...alleging Chinese AI firm DeepSeek has relied on them.” The lawmakers “asked for the move as part of a Commerce and State Department-led review ordered by Trump to scrutinize the US export control system in light of ‘developments involving strategic adversaries.’” They wrote, “We ask that as part of this review, you consider the potential national security benefits of placing an export control on Nvidia’s H20 and chips of similar sophistication.”

Survey Shows Educators, Students Want Clarity On AI Policies

Education Week (1/30, Langreo) reports that an EdWeek Research Center survey reveals that many educators find their districts’ AI policies unclear. Conducted in December, the survey included 990 teachers, principals, and administrators, with 60% indicating uncertainty about AI policy clarity for both educators and students. Pat Yongpradit from Code.org said, “This technology is ‘still very new,’” and emphasized the need for opportunity and capacity for districts to develop policies. An anonymous high school tech coach in Virginia said schools are hesitant to establish AI guidelines due to fear of mistakes, leaving “educators and students in a gray area.” Ruby Mejico, a principal in Moreno Valley, California, said that while her district is experimenting with AI tools, clear policies are still in development. She added, “We are on our way to having a full-blown policy.” Yongpradit anticipates that clarity will improve as districts gain more understanding and experience with AI technology.

dtau...@gmail.com

unread,
Feb 9, 2025, 1:39:45 PM2/9/25
to ai-b...@googlegroups.com

DeepSeek Linked to Banned Chinese Telecom

The website of China's DeepSeek, whose chatbot became the most downloaded app in the U.S. shortly after its release, contains computer code that could send some user login information to a Chinese state-owned telecommunications company barred from operating in the U.S. Canadian cybersecurity company Feroot Security identified heavily obfuscated computer script on the Web login page of the chatbot that shows connections to computer infrastructure owned by China Mobile.
[ » Read full article ]

Associated Press; Byron Tau (February 5, 2025)

 

Google Drops Pledge Not to Use AI for Weapons, Surveillance

Google on Tuesday updated its AI ethical guidelines, removing commitments to not apply the technology to weapons or surveillance. In a blog post, Google executives wrote, “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
[ » Read full article ]

The Washington Post; Nitasha Tiku; Gerrit De Vynck (February 4, 2025)

 

AI Pioneers Awarded 2025 QE Prize for Engineering

The 2025 Queen Elizabeth Prize for Engineering was bestowed upon seven pioneers of AI technology on Tuesday. The annual prize, awarded to engineers whose innovations have benefited humanity on a global scale, was presented in recognition of contributions to the development of modern machine learning (ML). Recipients included ACM A.M. Turing Award laureates Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, who were recognized for groundbreaking research into the artificial neural networks that have become the dominant model for ML.
[ » Read full article ]

The Chemical Engineer; Adam Duckett (February 4, 2025)

 

ChatGPT Rolled Out at California State University

OpenAI is rolling out an education-specific version of its ChatGPT to about 500,000 students and faculty at California State University as it looks to expand its user base in the academic sector. The rollout will enable students to access personalized tutoring and study guides through the chatbot, while faculty will be able to use it for administrative tasks.
[ » Read full article ]

Reuters; Rishi Kant (February 4, 2025)

 

Laser-Based Artificial Neuron Processes Enormous Datasets at High Speed

Laser-graded artificial neurons developed by Chinese University of Hong Kong researchers can operate on their own and without additional connections as a small neural network, transmitting data as much as 100,000 times faster than artificial spiking neurons. The researchers integrated a laser-graded neuron into a reservoir computing system and scanned 700 heartbeat samples. With a processing speed of 100 million heartbeats per second, the system was more than 98% accurate in identifying arrhythmia.
[ » Read full article ]

Live Science; Skyler Ware (February 4, 2025)

 

Federated Learning Under Siege

Researchers in the U.S. and China demonstrated a poisoning attack targeting federated unlearning. The attack, BadUnlearn, ensures the unlearned model closely resembles the poisoned one through the strategic injection of malicious model updates that align with aggregation rules. The researchers then introduced a federated unlearning framework intended to maintain a global model's integrity. The framework, UnlearnGuard, uses historical model updates stored by the server to help detect and filter out poisoned updates.
[ » Read full article ]

Devdiscourse (February 3, 2025)

 

DeepSeek's Chatbot Achieves 17% Accuracy in Audit

An audit by trustworthiness rating service NewsGuard found the chatbot rolled out by Chinese AI startup DeepSeek had an accuracy rate of 17% when it comes to delivering news and information. DeepSeek provided vague or useless answers 53% of the time and repeated false claims 30% of the time, with a fail rate of 83%. In comparison, its Western rivals, including OpenAI, had a 62% average fail rate.
[ » Read full article ]

Reuters; Rishi Kant (January 29, 2025)

 

AI Systems with ‘Unacceptable Risk’ Now Banned in EU

As of Sunday, EU regulators can ban the use of AI systems they deem to pose an “unacceptable risk” or harm under the bloc's AI Act, approved by the European Parliament last March. Unacceptable activities include the use of AI for social scoring, manipulating a person’s decisions deceptively, predicting people committing crimes based on their appearance, and trying to infer people’s emotions, among other uses.
[ » Read full article ]

TechCrunch; Kyle Wiggers (February 2, 2025)

 

OpenAI to Provide Models to National Labs

OpenAI's o1 reasoning model, or another from its o-series, will be deployed on Los Alamos National Laboratory's Venado supercomputer. The deal with the U.S. government will make the model available to researchers at Lawrence Livermore and Sandia National Laboratories as well. Said Los Alamos' Thom Mason, "As threats to the nation become more complex and more pressing, we need new approaches and advanced technologies to preserve America's security."
[ » Read full article ]

Axios; Ina Fried (January 30, 2025)

 

AI Helps Open Scrolls Charred by Vesuvius

Researchers successfully produced the first image of the inside of an ancient scroll at the Bodleian Library at the U.K.'s University of Oxford, according to organizers of the Vesuvius Challenge. The papyrus scroll is one of hundreds found in the remains of a Roman villa destroyed in the A.D. 79 eruption of Mt. Vesuvius. In the Vesuvius Challenge, researchers must decipher the scrolls, which are too fragile to be unrolled. The Oxford scroll was scanned using a synchrotron, then AI was used to generate a 3D image of the scroll that can be unrolled virtually.
[ » Read full article ]

Independent (U.K.); Jill Lawless; Pan Pylas (February 5, 2025)

 

AI-powered Drone Company to Assist in Demining Ukrainian Farmlands

U.S.-based AI company Safe Pro Group and Ukrainian agricultural company Nibulon will deploy AI-powered drones to detect landmines embedded in Ukraine's farmland. The partnership will use Safe Pro’s SpotlightAI platform, hosted on Amazon Web Services, to survey affected farmland. Safe Pro’s AI has processed over 931,000 drone images, identifying more than 18,000 explosive remnants across 10,500 acres, to facilitate mine detection.
[ » Read full article ]

The Kyiv Independent (Ukraine); Sonya Bandouil (February 1, 2025)

 

DOGE Feeds Sensitive Federal Data into AI to Target Cuts

Representatives from the Elon Musk-led U.S. Department of Government Efficiency (DOGE) fed sensitive data from the U.S. Education Department into AI software to probe the agency’s programs and spending in search of opportunities for cuts, say insiders. DOGE plans to replicate this process across other departments and agencies, accessing back-end software at different parts of the government and using AI technology to extract information about spending on employees and programs, said one source.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Hannah Natanson; Gerrit De Vynck; Elizabeth Dwoskin (February 6, 2025); et al.

 

Chinese, Iranian Hackers Use U.S. AI Products to Bolster Cyberattacks

Hackers linked to China, Iran, and other foreign governments are using the latest U.S. AI technology to bolster their cyberattacks, according to U.S. officials and security researchers. Google’s cyber-threat experts say that in the last year, dozens of hacking groups in more than 20 other countries deployed Google's Gemini chatbot to assist with malicious code writing and targeting.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Dustin Volz; Robert McMillan (January 30, 2025)

 

DeepSeek Sparks AI Infrastructure Reassessment

Bloomberg (2/1, Subscription Publication) reported, “The recent market turmoil sparked by DeepSeek’s chatbot has left some rethinking the credit frenzy around artificial intelligence (AI).” While corporate giants predict more demand for AI following the release of DeepSeek, “behind the scenes, landlords and credit providers say that the situation is more nuanced, and some are starting to fret.” Bloomberg claims that an unnamed major data center landlord anticipates rising borrowing costs due to fears of obsolescence from disruptors like DeepSeek. Despite this, Blackstone president Jon Gray maintains that “digital infrastructure remains essential.” The AI surge since ChatGPT’s debut has fueled a global data center boom, with investors pledging significant funds. Apollo Global Management foresees a $2 trillion opportunity in data centers. However, experts suggest that DeepSeek’s cost-effective AI models won’t “depress massively the demand for infrastructure,” indicating sustained growth expectations.

        Researchers Note DeepSeek’s Chinese “Propaganda,” False Compute Cost Claims. The New York Times (1/31, Lee Myers) reported researchers warn that recently debuted Chinese chatbot DeepSeek’s responses “largely reflect the worldview of the Chinese Communist Party,” as “the answers it gives not only spread Chinese propaganda but also parrot disinformation campaigns that China has used to undercut its critics around the world.” NewsGuard on Thursday released a report that “called DeepSeek ‘a disinformation machine,’” and “the New York Times has found similar examples when prompting the chatbot for answers about China’s handling of the Covid pandemic and Russia’s war in Ukraine,” sparking “the same concerns that have bedeviled TikTok, another hugely popular Chinese-owned app: that the tech platforms are part of China’s robust efforts to sway public opinion around the world, including in the United States.”

        The Washington Post (1/31, Dou, Northrop, Li, Vynck) highlighted how, despite DeepSeek’s “claim that it trained one of its recent models on a minuscule $5.6 million in computing costs, ... a closer look at DeepSeek reveals that its parent company deployed a large and sophisticated chip set in its supercomputer, leading experts to assess the total cost of the project as being much higher than the relatively paltry sum that US markets reacted to this week.”

Trump Meets With Nvidia CEO About Administration’s AI Goals

The Washington Post (1/31, Zakrzewski, Alemany) reports President Trump on Friday met “with Nvidia CEO Jensen Huang, marking the first meeting between the president and the leader of a chip company at the center of the artificial intelligence gold rush amid concerns about China’s rising influence in the industry.” Planning for the meeting had begun “before the spike in anxiety about DeepSeek, according to a senior Administration official and a person familiar with the discussions,” and the two “had a good rapport and discussed the Administration’s AI goals, one person added.”

AI Transforming City Services with AI Agents

Forbes (2/1) reports that artificial intelligence is increasingly being integrated into city operations, with AI agents poised to transform government services. AI agents, like OpenAI’s Operator, autonomously perform tasks, potentially streamlining city services. The SuperCity app exemplifies this shift, aiming to simplify resident interactions with city services through AI. The app’s founders, including Miguel Gamiño Jr., leverage extensive government and tech experience to reduce friction between users and city systems. AI’s potential to revolutionize city functions underscores the urgency for city leaders to adopt AI solutions.

OpenAI Unveils AI Tool For Research Reports

The Guardian (UK) (2/3) reports that OpenAI has introduced a new tool named “deep research,” designed to generate reports comparable to those of a research analyst. The San Francisco-based company’s tool, powered by the o3 model, can complete tasks in 10 minutes that would take humans hours. OpenAI announced this development days after its competitor, DeepSeek, made advancements. “Deep research” will be available in the US for Pro tier users at $200 monthly, with a limit of 100 queries. The tool targets professionals in finance, science, and engineering. Andrew Rogoyski from the University of Surrey expressed concerns about relying on AI outputs without human verification.

Alphabet Faces Investor Scrutiny Over AI Spending

Reuters (2/3) reports that Alphabet will face investor scrutiny over its substantial AI spending when it reports earnings on Tuesday. The Google parent likely experienced slowed revenue growth in the holiday quarter due to weakened advertising and cloud businesses. Alphabet’s 2022 capital expenditure was estimated at $50 billion, with more planned for 2025 to support cloud expansion and AI-driven search features. Despite high expectations, Google Cloud growth is expected to decelerate. Analyst Gil Luria noted concerns about AI growth overshadowing core cloud business, similar to Microsoft’s recent experience.

How AI Tools Can Enhance Instruction Methods, Combat Teacher Burnout

The New York Observer (2/3, Curry) reports that teacher burnout is a significant issue, with 16% of US teachers leaving their jobs annually, according to the National Center for Education Statistics. Teachers like Eileen Yaeger and Jeff Stoltzfus are leveraging AI to alleviate this problem. Yaeger uses AI to create inclusive lessons, translating content into multiple languages and adjusting text by WIDA English Language Development level. Stoltzfus, who teaches media technology, uses AI for curriculum development and lesson planning, noting, “It was helpful, at least in getting me half of the way there.” However, he acknowledges AI’s limitations in grading subjective art assignments. Coursera and other platforms are developing AI-powered tools to support educators. Marni Baker Stein, chief content officer at Coursera states, “GenAI will make personalized, interactive learning possible at scale.” Coursera Coach, available in 24 languages, offers personalized instruction, enhancing students’ learning experiences.

Cal State Launches AI Initiative Across 23 Campuses

The Los Angeles Times (2/4, Watanabe) reports that California State University (CSU) announced on Tuesday a significant initiative to integrate artificial intelligence (AI) education across its 23 campuses. This effort aims to provide equitable access to AI tools and training for CSU’s 450,000 students, many of whom are low-income or first-generation college attendees. CSU has partnered with Gov. Gavin Newsom’s (D) office and tech giants like Microsoft and OpenAI to form an advisory board for AI skill development. “We are proud to announce this innovative, highly collaborative public-private initiative,” stated CSU Chancellor Mildred García. The initiative includes an “AI Commons Hub” offering free access to tools like ChatGPT 4.0, marking the largest deployment of ChatGPT globally. The initiative also addresses concerns about bias and academic integrity.

        Forbes (2/4, Fitzpatrick) reports the university system will integrate ChatGPT Edu into its curriculum and operations, marking the largest AI deployment in higher education. Leah Belsky, VP at OpenAI, emphasizes the need for collaboration to ensure global student access to AI. The initiative includes an AI Hub for free access to AI tools, faculty training, and AI-focused apprenticeships. Ed Clark, CSU CIO, highlights the dual goals of equipping students with AI skills and transforming institutional practices. The partnership aims to create a skilled AI workforce, addressing challenges such as AI ethics and data security.

        EdSource (2/4, DiPierro) reports CSU will provide generative AI tools like ChatGPT to students, staff, and faculty across its campuses at no personal cost. CSU announced on Tuesday at San Jose State University the formation of the AI Workforce Acceleration Board, which includes CSU academic leaders and representatives from companies like Microsoft and Nvidia. CSU plans to offer AI-related apprenticeship programs and encourage the use of AI in teaching and research. According to CSU chief information officer Ed Clark, the university has allocated funds from one-time savings for these initiatives.

DeepSeek’s AI Model Challenges Proprietary Systems

CNBC (2/4, Browne) reports that DeepSeek, a Chinese AI lab, released the R1 model last month, an open-source AI model that rivals OpenAI’s o1 model. This development has impacted chipmakers like Nvidia, causing their market values to drop due to fears of reduced spending on computing infrastructure. Industry experts, including Seena Rejal from NetMind and Yann LeCun from Meta, highlight that DeepSeek’s success underscores the viability of open-source AI models. However, experts also caution about cybersecurity risks, with Cisco identifying vulnerabilities in DeepSeek’s R1 model, raising concerns about data leakage and exploitation.

        Biden FTC Chair: DeepSeek Release Highlights Need For More Competition Among US Companies. Former Biden Administration Federal Trade Commission chair Lina M. Khan writes at the New York Times (2/4) that the launch of Chinese artificial intelligence firm DeepSeek “is the canary in the coal mine...warning us that when there isn’t enough competition, our tech industry grows vulnerable to its Chinese rivals, threatening U.S. geopolitical power in the 21st century.” She adds that the company undermines claims that US tech firms “are developing the best artificial intelligence technology the world has to offer,” and accuses them of “building anticompetitive moats around their businesses” instead of pushing for innovation. She concludes that the “best way for the United States to stay ahead globally is by promoting competition at home.”

 

Tufekci: DeepSeek Release Shows Government Is Approaching AI Issues Incorrectly. Zeynep Tufekci writes at the New York Times (2/5) that “the real lesson of DeepSeek is that America’s approach to A.I. safety and regulations...was largely nonsense.” She adds that “it was never going to be possible to contain the spread of this powerful emergent technology, and certainly not just by placing trade restrictions on components like graphics chips,” and argues that the government should instead “be preparing our society for the sweeping changes that are soon to come.” She adds that “instead of fantasizing about how some future rogue A.I. could attack us, it’s time to start thinking clearly about how corporations and governments could use the A.I. that’s available right now to entrench their dominance, erode our rights, worsen inequality.”

Researchers Demonstrate AI Model Training Costing $50

TechCrunch (2/5, Zeff) reports, “AI researchers at Stanford and the University of Washington were able to train an AI ‘reasoning’ model for under $50 in cloud compute credits, according to a new research paper released last Friday.” This “model known as s1 performs similarly to cutting-edge reasoning models, such as OpenAI’s o1 and DeepSeek’s R1, on tests measuring math and coding abilities.” The “paper suggests that reasoning models can be distilled with a relatively small dataset using...supervised fine-tuning.”

OpenAI CEO Discusses AI IQ Benchmarking

TechCrunch (2/5, Wiggers) reports that during a recent press conference, OpenAI CEO Sam Altman remarked on the rapid improvement of AI’s “IQ” over recent years, suggesting a yearly advancement of one standard deviation. Experts, including Sandra Wachter from Oxford, criticized using IQ as a benchmark for AI, arguing it is a flawed measure of intelligence. Os Keyes and Mike Cook highlighted that AI models can exploit the structure of IQ tests, rendering them inappropriate for evaluating AI capabilities. Heidy Khlaaf from the AI Now Institute emphasized the need for more suitable tests for AI systems.

        OpenAI’s Stargate AI Venture Evaluating US Data Center Sites. Reuters (2/6, Tong, Sriram) reports, “ChatGPT maker OpenAI said on Thursday that it is evaluating US states as potential artificial intelligence data center locations for its massive Stargate venture, framing the project as a matter of urgency for the United States to beat China in the global AI race.” Reuters reports Chris Lehane, “OpenAI’s chief global affairs officer,” said, “As news emerged about DeepSeek, it makes it clear this is a very real competition and the stakes could not be bigger. Whoever ends up prevailing in this competition is going to really shape what the world looks like going forward, whether we have democratic AI that’s free and open, or authoritarian AI that is autocratic.”

House Lawmakers To Introduce Bill Banning DeepSeek Chatbot Use On Government Devices

The Wall Street Journal (2/6, Andrews, Subscription Publication) reports that Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) plan to introduce a bill banning DeepSeek’s chatbot application from US government-owned devices due to security concerns that data could be shared with the Chinese government. Similar to legislation introduced against TikTok, the Journal writes such a bill could mark the beginning of banning the company from operating in the US wholesale.

dtau...@gmail.com

unread,
Feb 16, 2025, 12:29:44 PM2/16/25
to ai-b...@googlegroups.com

Top U.S. Grid Wins Speedy Review of Power Plants to Feed AI

PJM Interconnection LLC, which manages a 13-state power-grid network, won federal approval to fast-track the review of dozens of new power-plant projects to shore up supplies amid a proliferation of AI datacenters. PJM will review up to 50 new projects specifically to boost grid reliability starting in April, to help avoid potential shortages toward the end of this decade, the U.S. Federal Energy Regulatory Commission said in an order issued Tuesday.
[
» Read full article ]

Bloomberg; Naureen S. Malik (February 12, 2025)

 

China's Ex-U.K. Ambassador Debates Bengio at AI Summit

At the AI Action Summit in Paris, Fu Ying of China's Tsinghua University took aim at an international AI safety report led by ACM A.M. Turing Award laureate and the "godfather of AI" Yoshua Bengio and co-authored by 96 others. Fu Ying said open source is the best way to ensure AI does not cause harm, providing "better opportunities to detect and solve problems." Bengio argued that open source makes it easier for criminals to misuse AI.
[ » Read full article ]

BBC; Zoe Kleinman (February 9, 2025)

 

Tech Companies Raise $27 Million for Child Safety Online

A group of technology companies raised more than $27 million for a new initiative focused on building open-source tools to boost online safety for kids. The Robust Online Safety Tools (ROOST) project, announced Monday at the AI Action Summit in Paris, will provide free tools to detect, review, and report child sexual abuse material and use large language models to “power safety infrastructure,” according to a press release for the project.
[ » Read full article ]

The Hill; Miranda Nazzaro (February 10, 2025)

 

U.S., China Ambitions Cast Shadow on AI Summit in Paris

The geopolitics of artificial intelligence will be in focus in Paris starting today at the AI Action Summit. U.S. Vice President JD Vance will attend, marking his first trip abroad since assuming office, while China’s President Xi Jinping is sending Vice Premier Zhang Guoqing as Xi’s special representative. The aim of the meeting is to get countries to agree on ethical, democratic, and environmentally sustainable AI.
[ » Read full article ]

Associated Press; Sylvie Corbet; Kelvin Chan (February 10, 2025)

 

U.S., U.K. Refuse to Sign Paris Summit Declaration on ‘Inclusive’ AI

The U.S. and U.K. did not sign the final communiqué at the AI Action Summit in France. The document was backed by 60 signatories, including China. A U.K. government spokesperson said the statement had not gone far enough in addressing global governance of AI and the technology’s impact on national security. U.S. Vice President JD Vance criticized Europe’s “excessive regulation” of technology and warned against cooperating with China.
[ » Read full article ]

The Guardian (U.K.); Dan Milmo; Eleni Courea (February 11, 2025)

 

Camera Identifies Objects at Speed of Light

University of Washington and Princeton University researchers developed a camera for computer vision by replacing the camera lens with engineered optics made of 50 layered meta-lenses that function as an optical neural network. The resulting camera is more than 200 times faster than neural networks using conventional computer hardware at identifying and classifying images.
[ » Read full article ]

Interesting Engineering; Prabhat Ranjan Mishra (February 6, 2025)

 

Google Hub in Poland to Develop AI Use in Energy, Cybersecurity Sectors

Google and Poland on Thursday signed an agreement to develop the use of AI in the country’s energy, cybersecurity, and other sectors. Google is also dedicating $5 million over the next five years in Poland to expand training programs and increase digital skills among the young. Earlier in the week, Prime Minister Donald Tusk said Google and Microsoft will be among the international businesses planning to invest about 650 billion zlotys ($160 billion) in Poland this year.
[
» Read full article ]

Associated Press (February 13, 2025)

 

Google Remakes Super Bowl Ad After AI Cheese Gaffe

Google edited its Super Bowl ad promoting its Gemini AI Tool after a blogger flagged a false claim in the commercial that Gouda accounts for 50% to 60% of global cheese consumption. Google's Jerry Dischler said the error was not an AI "hallucination," noting that multiple websites where Gemini got the information cited the statistic. The search engine giant had Gemini rewrite the description for the product featured in the ad without the statistic.
[ » Read full article ]

BBC News; Graham Fraser; Tom Singleton (February 6, 2025)

 

White House Encourages Americans to Provide Ideas for AI Strategy

The White House Office of Science and Technology Policy is calling on Americans to share policy ideas and proposals for the AI Action Plan, which will be developed in accordance with an executive order signed by U.S. President Donald Trump last month. The AI Action plan will "define priority policy actions to enhance America's position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation," according to officials.
[ » Read full article ]

Fox News; Brooke Singman (February 6, 2025)

 

Musk-led Group Launches $97-Billion Bid for OpenAI

Elon Musk and a group of investors offered to buy ChatGPT-maker OpenAI for $97.4 billion, well below the company’s most recent valuation of $157 billion. OpenAI CEO Sam Altman rejected the offer on X, posting, “No thank you but we will buy Twitter for $9.74 billion if you want.” Musk helped to found, and fund, OpenAI in 2015 but left three years later after Altman and other leaders rejected his suggestion that he take over the company.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Gerrit De Vynck; Elizabeth Dwoskin (February 10, 2025)

 

EU Sets Out $200-Billion AI Spending Plan

The European Union on Tuesday unveiled a plan to raise €200 billion ($206.15 billion) to invest in AI. The plan, dubbed InvestAI, includes a new €20-billion fund for AI gigafactories. European Commission President Ursula von der Leyen said at the AI Action Summit in Paris. “We want Europe to be one of the leading AI continents, and this means embracing a life where AI is everywhere.”

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Edith Hancock; Mauro Orru (February 11, 2025)

 

Meta Eliminating Jobs in Shift to Find AI Talent

Meta Platforms on Monday began notifying staff of job cuts, starting a process that will ultimately lead to termination of 5% of its workforce, or 3,600 people. Meta CEO Mark Zuckerberg told employees the terminations will focus on staff who “aren’t meeting expectations,” and told managers the cuts would create openings for which the company can hire the “strongest talent.”

[ » Read full article *May Require Paid Registration ]

Bloomberg; Kurt Wagner; Riley Griffin (February 10, 2025)

 

Tech Giants Double Down on Massive AI Spending

Following record investments in AI last year, Microsoft, Alphabet, Meta Platforms, and Amazon each said in recent quarterly earnings reports that they would increase those investments in 2025. Microsoft, Alphabet, and Meta projected combined capital expenditures of no less than $215 billion, up more than 45% on an annual basis. Amazon said AI would account for most of the increase in its total capital expenditure across its businesses to more than $100 billion.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Nate Rattner; Jason Dean (February 6, 2025)

 

France Taps Nuclear Power in Race for AI Supremacy

France said it would provide a gigawatt of nuclear power for a new AI computing project. AI cloud platform FluidStack plans to start construction on the project in the third quarter of this year, with the first tranche of 250 megawatts of power expected to be connected to AI computing chips by the end of 2026. FluidStack said the facility could expand to 10 gigawatts by 2030.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Sam Schechner; Asa Fitch (February 10, 2025)

 

Report: How Libraries Can Promote AI Literacy For Future Development

Inside Higher Ed (2/10, Mowreader) reports that academic libraries are responding to the rise of generative artificial intelligence, with a September 2024 report indicating that 7 percent of libraries are adopting AI tools. However, 32 percent of surveyed librarians noted a lack of AI training at their institutions. The University of New Mexico has released a guide to help librarians support students in an AI-integrated environment. Leo S. Lo, the author and dean at the College of University Libraries stated, “We are now well-placed to become key players in advancing AI literacy.” Lo defines AI literacy as the “ability to understand, use, and think critically about AI technologies.” His framework includes five elements, emphasizing the importance of technical knowledge, practical skills, ethical awareness, critical thinking, and understanding AI’s societal impact. Lo concluded, “By embracing AI literacy, libraries can lead efforts to demystify AI.”

Musk-Led Group Makes Bid For OpenAI

The New York Times (2/10, Isaac, Metz) reports that a group of investors, led by Elon Musk, “has made a $97.4 billion bid to buy the nonprofit that controls OpenAI, according to two people familiar with the bid, escalating a yearslong tussle for control of the company between Mr. Musk and OpenAI’s chief executive, Sam Altman.”

        Reuters (2/10, Bajwa) reports that Musk said in a statement, “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was. We will make sure that happens.” Bloomberg (2/10, Ghaffary, Clark, Metz, Subscription Publication) reports that “according to a statement from Marc Toberoff, a lawyer representing the investors, other backers of proposal include Valor Equity Partners, Baron Capital, Atreides Management, Vy Capital, Joe Lonsdale’s 8VC and Ari Emanuel, through his investment fund.”

        However, the AP (2/10) reports that Altman “quickly rejected the deal on Musk’s social platform X, saying, ‘no thank you but we will buy Twitter for $9.74 billion if you want.’” The Washington Post (2/10, De Vynck) says OpenAI’s board “has been broadly supportive of Altman, and almost all of its members took their seats after he survived an attempt by the previous board to eject him from the company.”

OpenAI Reviewing Whether DeepSeek Obtained Data Illicitly

Bloomberg (2/10, Subscription Publication) reports OpenAI “has spoken to government officials about the company’s ongoing investigation into whether China’s DeepSeek used data obtained in an unauthorized manner from the ChatGPT maker’s technology.” OpenAI chief global affairs officer Chris Lehane is quoted saying on Bloomberg Television that “we’ve seen some evidence and we’re continuing to review.”

How AI Can Help Address Major Funding Cuts In Higher Ed

Forbes (2/11) contributor Vinay Bhaskara says artificial intelligence (AI) can address significant challenges in higher education, such as declining enrollment and rising costs. Nearly 100 institutions closed between the 2022-23 and 2023-24 academic years, driven by high tuition and student dissatisfaction. The average tuition and fees “for a public four-year school has risen 179%,” leading to skepticism about the value of a degree. AI offers a solution by streamlining administrative processes, which have seen a 164% increase in administrators since 1972. With 86% of university leaders agreeing “that AI presents a ‘massive opportunity to transform higher education,’” institutions like Georgia Tech and Knox College are already implementing AI to enhance recruitment and manage applications. Bhaskara emphasizes the urgent need for colleges to leverage AI to reduce costs and refocus on their educational missions.

Nutanix Focuses On Hybrid Enterprise AI Strategy

SiliconANGLE (2/13) reports Nutanix is driving the next phase of enterprise AI adoption, empowering organizations to deploy AI on their own terms. Nutanix debuted GPT in a Box at AWS re:Invent, streamlining the deployment of AI models on Amazon Elastic Kubernetes Service. Nutanix Enterprise AI offers generative AI on-premises now with inference endpoints, security, cost control, and simplicity. VP of Engineering at Nutanix Debojyoti Dutta said during re:Invent that customers can choose any model from Hugging Face or from the Nvidia catalog and then deploy the model very easily with a couple of button clicks. Nutanix’s hybrid AI strategy enables enterprises to deploy AI models across on-premises, public cloud, and edge environments, offering the flexibility to run workloads where they make the most sense. Partnerships with Nvidia, Hugging Face and D2iQ Inc. extend the reach and impact of Nutanix’s offerings, accelerating time to value for enterprises.

dtau...@gmail.com

unread,
Feb 22, 2025, 7:16:39 PM2/22/25
to ai-b...@googlegroups.com

South Korea Aims to Secure 10,000 GPUs for National AI Computing Center

To ensure it can compete in the global AI race, the South Korean government plans to obtain 10,000 high-performance graphics processing units (GPUs) through public-private cooperation, to facilitate an early opening of its national AI computing center. South Korea currently is exempt from a new U.S. regulation restricting the export of GPUs.
[ » Read full article ]

Reuters; Heekyong Yang (February 17, 2025)

 

Open Source LLMs Hit Europe's Digital Sovereignty Roadmap

The OpenEuroLLM project, co-led by computational linguist Jan Hajic at the Charles University in Prague and Peter Sarlin, CEO and co-founder of Finnish AI lab Silo AI, plans to develop open-source AI language models for all EU languages to preserve "linguistic and cultural diversity." The project is designed to ensure transparency, preserve linguistic diversity, and enable AI growth in Europe. The initiative hopes to contribute a high-quality, open-source AI foundation that can be adapted by European businesses.
[ » Read full article ]

TechCrunch; Paul Sawers (February 16, 2025)

 

U.K. Drops 'Safety' from AI Body

The U.K. has rebranded the AI Safety Institute to the AI Security Institute, signaling a shift away from examining large language models for issues such as bias. Said Secretary of State for Science, Innovation, and Technology Peter Kyle, “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values, and way of life.”
[ » Read full article ]

TechCrunch; Ingrid Lunden (February 13, 2025)

 

Trust in AI Is Much Higher in China Than in the U.S.

A global survey by the Edelman Trust Barometer found that only 32% of U.S. residents trust AI. The greatest level of trust was reported in India at 77%, followed by Nigeria (76%), Thailand (73%), and China (72%). Trust was lowest in Canada (30%), Germany (29%), the Netherlands (29%), U.K. (28%), Australia (25%), and Ireland (24%). More than half (58%) of respondents said they worry automation will displace them in the workforce, and more than 60% worry about AI-driven misinformation.
[ » Read full article ]

Axios; Ina Fried (February 13, 2025)

 

South Korea Bans Downloads of DeepSeek's AI App

South Korea said on Monday it had temporarily suspended new downloads of an AI chatbot made by China's DeepSeek. Regulators said the app service would resume after they verified it complied with South Korea’s laws on protecting personal information. The app had become one of the country’s most popular downloads in the AI category. Earlier this month, South Korea directed many government employees not to use DeepSeek products on official devices.

[ » Read full article *May Require Paid Registration ]

New York Times; Meaghan Tobin; Jin Yu Young (February 17, 2025)

 

How AI Can Protect Undersea Pipelines, Cables

AI is being leveraged to protect critical underwater infrastructure, with the ultimate goal of creating an undersea map that can sift through vast amounts of data to identify potential threats in real time. German startup North.io is using technology from Nvidia, IBM, and others to develop systems that can distinguish between natural elements and potential threats to undersea technology. North.io researchers are training AI to analyze data from multiple sources.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; William Boston (February 17, 2025)

 

Ellison Calls for Governments to Unify Data to Feed AI

During an interview with former British Prime Minister Tony Blair at the World Government Summit in Dubai, Oracle chairman Larry Ellison said governments should consolidate all national data for consumption by AI models. Fragmented data about a population’s health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models, Ellison said, because it would enable countries with rich population data sets to cut costs and improve public services, particularly healthcare.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Omar El Chmouri (February 12, 2025)

 

Education Department Considering AI Chatbot For Financial Aid Help

The New York Times (2/13, Goldstein, Montague) reported that allies of Elon Musk within the Education Department are exploring the replacement of some contract workers with a generative artificial intelligence chatbot, based on internal documents. This initiative aligns with President Trump’s efforts to reduce the federal workforce and could transform public interactions. The Education Department currently employs 1,600 call center agents answering more than 15,000 inquiries daily from student borrowers. ED spokeswoman Madi Biedermann stated that the department is considering tools to enhance customer service and assess contract effectiveness. However, experts warn that transitioning to AI may raise concerns regarding privacy and accuracy. Moreover, an internal document obtained by the Times indicates that ED staff have found that a 38 percent reduction in funding for call center operations could contribute to a “severe degradation” in services for “students, borrowers and schools.”

        Inside Higher Ed (2/14, Knox) reported the department “greatly increased staffing at their call centers after last year’s bungled launch of the new FAFSA led to an overwhelming influx of calls. Last September, a Government Accountability Office investigation found that in the first five months of the rollout, three-quarters of calls went unanswered. Last summer, the department hired 700 new agents to staff the lines and had planned to add another 225 after the launch of the 2024-25 FAFSA in November.”

Study Reveals Researchers’ Interest In AI Tools Varies By Region, Discipline

Inside Higher Ed (2/14, Palmer) reported that a recent study by Wiley highlights significant interest among researchers in utilizing artificial intelligence (AI) in their work, with 69 percent believing AI skills will be vital within two years. However, over 60 percent cite a lack of guidelines and training as barriers to AI adoption. The study surveyed nearly 5,000 researchers globally, finding that 70 percent seek clearer guidelines from publishers regarding AI use. Although many have heard of OpenAI’s ChatGPT, only about one-third are familiar with other tools like Google Gemini and Microsoft Copilot. The study also noted geographical differences, with 59 percent of researchers in China and 57 percent in Germany using AI, compared to a global average of 45 percent. Researchers in fields like computer science and medicine are more inclined to adopt AI, while those in life sciences and physical sciences prefer a cautious approach.

Amazon Robotics Chief Technologist Discusses Warehouse Automation

Insider (2/14, Kim) interviewed Amazon Robotics Chief Technologist Tye Brady about warehouse automation. Brady said Amazon now has at least 750,000 robots in its warehouses and that “AI has really revolutionized and transformed robotics because it allows us to have the mind and body as one.” He added that Amazon’s “future is in people and technology working together.” Brady noted that Amazon has committed more than $1.2 billion in an upskilling pledge. He also said, “Our physical AI systems have the same tool kits that hundreds of thousands of our customers have available to them, and they’re using them, so we’re seeing a lot of growth there,” referring to AWS. Brady said, “We do technology with a purpose. And if that purpose makes sense in e-commerce and our material-handling fulfillment systems, then we will do that as long as it improves the safety of our employees and their performance.”

Meta Struggles With Deepfake Image Regulation, Investigation Suggests

CBS News (2/17, Lyons) reports that Meta has removed over a dozen sexualized AI deepfake images of female celebrities from Facebook following a CBS News investigation. Despite this, CBS News found that many images remain accessible, violating Meta’s policies. The Oversight Board said the company’s regulations are insufficient, urging clearer rules on non-consensual content. Co-chair Michael McConnell said, “The Board is actively monitoring Meta’s response and will continue to push for stronger safeguards, faster enforcement, and greater accountability.”

        Meta Platforms Creates AI Humanoid Robotics Division. Reuters (2/14, Paul) reported that Meta is forming a new division within Reality Labs to develop AI-powered humanoid robots for physical tasks, led by Marc Whitten, as detailed in an internal memo from CTO Andrew Bosworth.

Most Educators Embrace AI In Teaching, Survey Finds

Education Week (2/14, Langreo) reported that a recent survey by the EdWeek Research Center reveals that 90 percent of educators believe artificial intelligence (AI) will change the teaching profession. Nearly all respondents (97 percent) expect AI to influence their jobs within five years. While experts highlight AI’s potential to personalize education, concerns about biases and creativity persist. Three teachers shared their experiences with AI tools. Amanda Pierman, a Florida science teacher, said, “With the help of generative AI tools, crafting an exam now takes 40 minutes.” Joe Ackerman, a fifth-grade teacher in Colorado, stated, “It has helped me free up my time [that I can] then devote to teaching.” Yana Garbarg, an English teacher in Queens, emphasized that AI feedback is “more like a narrative” than traditional markings. They encourage hesitant educators to experiment with AI, asserting, “It’s just going to be another tool.”

        Educators Use AI Tutoring To Transform Classroom Learning. Education Week (2/14, Schultz) reported that teachers across the nation are increasingly utilizing artificial intelligence (AI) tutoring tools to enhance student learning. Andrea Hinojosa, a history teacher at Copper Hills High School in Utah, remarked that “AI has really just changed how we can do our jobs,” allowing students to practice writing more and receive immediate feedback. Schools are adopting AI to reduce educators’ workloads while improving student outcomes. Zachary Pardos, an education professor at UC Berkeley, said that “this is really low-hanging fruit” for enhancing classroom efficiency. The Sante Fe public schools are implementing a two-year plan to integrate AI, focusing on professional development for teachers. However, experts warn about potential biases in AI and the need for careful adoption to ensure it benefits all students. Hinojosa emphasized the technology’s effectiveness, saying, “It’s amazing,” as it helps her assess her students’ skills more efficiently.

Teen Inventor Launches AI-Driven Early Wildfire Detection System

The Orange County (CA) Register (2/17, Darwish) reports that on February 10, 2023, Ryan Honary, a 17-year-old inventor from Newport Beach, California, deployed his AI-driven wildfire detection system near Irvine for the first time. Honary, who founded SensoRy AI, began this project in fifth grade after witnessing the devastation of the 2018 Camp Fire. The system, which detects flames, smoke, and heat, can alert firefighters instantly through text, email, and a web application. “If he has sensor systems that can alert us to a fire seconds or minutes sooner, that’s a success,” said Orange County Fire Authority Chief Brian Fennessy. Honary plans to deploy five more detectors in March and an additional 25 by September along the Highway 133 corridor, aiming to expand his system beyond California in the future.

xAI Unveils Latest AI Model

CNBC (2/18, Butts) reports Elon Musk’s xAI has released “its latest artificial intelligence model, Grok 3, claiming it can outperform offerings from OpenAI and China’s DeepSeek based on early testing, which included standardized tests on math, science and coding.” Grok 3 will be available for premium X subscribers in the US starting Tuesday, and it “will also be accessible through a separate subscription for the model’s web and app versions, the xAI team said.”

Former OpenAI Chief Technology Officer Launches AI Startup

Reuters (2/18, Bajwa, Hu, Tong) reports, “Former OpenAI chief technology officer Mira Murati launched an AI startup called Thinking Machines Lab on Tuesday, with a team of about 30 leading researchers and engineers from competitors including OpenAI, Meta and Mistral.” The startup “wants to build artificial intelligence systems that encode human values and aim at a broader number of applications than rivals, the company said in a blog post on Tuesday.”

Nvidia, Partners Create New AI System On AWS For Biological Research

AFP (2/20) reports AI chipmaker Nvidia and its research partners have created Evo 2, which they call the largest AI system yet for biological research, with the goal of accelerating breakthroughs in medicine and genetics. The new AI system can read and design genetic code across all forms of life. The system learned from nearly 9 trillion pieces of genetic information taken from over 128,000 different organisms. In early tests, it accurately identified 90% of potentially harmful mutations in BRCA1, a gene linked to breast cancer. The model was built using 2,000 Nvidia H100 processors on AWS’s cloud infrastructure. Developed with the Arc Institute and Stanford University, Evo 2 is now freely available to scientists worldwide through Nvidia’s BioNeMo research platform. According to Stanford Assistant Professor Brian Hie, “Designing new biology has traditionally been a laborious, unpredictable and artisanal process,” and “With Evo 2, we make biological design of complex systems more accessible to researchers.”

        R&D World (2/19) reports the technical backbone of Evo 2, the AI system for biological research, relied on a robust Nvidia-AWS collaboration. The development utilized the Nvidia DGX Cloud AI platform via AWS, leveraging more than 2,000 NVIDIA H100 GPUs. This integration enabled the creation of a specialized AI architecture, StripedHyena 2, which enhances the system’s ability to handle large sequence lengths. By harnessing AWS’s cloud infrastructure, the team successfully scaled Evo 2’s capabilities.

Universities Teaming Up To Identify Viruses In Human Bodies Using AI

The New York Times (2/19, Zimmer) reports “scientists estimate that tens of trillions of viruses live inside of us, though they’ve identified just a fraction of them.” This year, the Times says, “five universities are teaming up for an unprecedented hunt to identify these viruses.” The universities “will gather saliva, stool, blood, milk and other samples from thousands of volunteers.” The five-year effort, named “the Human Virome Program and supported by $171 million in federal funding, will inspect the samples with artificial intelligence systems, hoping to learn about how the human virome influences our health.”

Lambda Raises $480 Million For AI Development

Reuters (2/19, Hu) reports that Lambda, a cloud computing firm focused on AI development, has secured $480 million in a Series D equity round led by Andra Capital and SGW, with participation from Nvidia, ARK Invest, G Squared, and Super Micro. This funding increases its total equity raised to $863 million and gives the company a post-money valuation of $2.5 billion. CEO Stephen Balaban noted a surge in demand for Nvidia H200 chips due to the launch of open-source model DeepSeek-R1, and the funds will help expand their cloud services and software offerings.

Meta Introduces Automated Compliance Hardening Tool

InfoQ (2/19) reports that Meta has launched the Automated Compliance Hardening (ACH) tool, a mutation-guided, LLM-based system designed to improve software reliability and security by generating faults and tests. Unlike traditional methods, ACH targets specific faults using plain text descriptions, simplifying the fault creation process. The system employs three LLM-based agents: Fault Generator, Equivalence Detector, and Test Generator. Rajkumar S, a senior developer at SAP Labs India, noted that ACH is a “game-changer” for enhancing code reliability. Meta plans to further deploy ACH and refine its fault detection capabilities.

Microsoft Prepares For OpenAI’s Upcoming Model Releases

The Verge (2/20, Warren) reports that Microsoft is gearing up to host OpenAI’s GPT-4.5 and GPT-5 models, with GPT-4.5 expected to launch next week. OpenAI CEO Sam Altman indicated that GPT-5 could arrive by late May, coinciding with Microsoft’s Build developer conference. GPT-5 will integrate OpenAI’s o3 reasoning model and aims to streamline user interactions with AI. Additionally, Microsoft is enhancing its Copilot features and working on AI advancements in gaming and quantum computing, as well as preparing for a series of announcements at the upcoming conference.

Helix Model Announced By Figure For Humanoid Robots

TechCrunch (2/20, Heater) reports that Figure founder and CEO Brett Adcock introduced a new machine learning model called Helix for humanoid robots on Thursday. This “generalist” Vision-Language-Action model allows robots to process visual and language commands in real time. Helix can control multiple robots simultaneously to perform household tasks. Figure aims to prioritize home robotics despite the complexities involved, stating, “For robots to be useful in households, they will need to be capable of generating intelligent new behaviors on-demand.” The announcement also serves as a recruitment tool for engineers.

AI Drives Change In Manufacturing Industry

Forbes (2/20, Cubiss) reports manufacturers are leveraging Industry 4.0 foundations to scale AI, with only 16% of industrial manufacturing businesses integrating AI compared to 25% across all industries, according to a new SAP report. The sector’s experience with data integration offers valuable lessons for others. AI applications like predictive maintenance and quality assurance are delivering significant value, emphasizing the importance of data quality and system integration.

dtau...@gmail.com

unread,
Mar 1, 2025, 12:27:15 PM3/1/25
to ai-b...@googlegroups.com

Yoshua Bengio Proposes 'Scientist AI' to Mitigate Catastrophic Risks from Superintelligent Agents

ACM A.M. Turing Award laureate Yoshua Benjio is among the AI researchers who proposed "Scientist AI," an AI system trained to explain the world based on observations. Unlike agentic AIs, which they described as “unsafe,” Scientist AI is not trained to pursue a goal, but to explain events and estimate their probability. The researchers said the system does not use reinforcement learning, which can “easily lead to goal misspecification and misgeneralization.”
[ » Read full article ]

Analytics India; Supreeth Koundinya (February 25, 2025)

 

DeepSeek 'Shared User Data' with TikTok Owner ByteDance

South Korea said Chinese AI startup DeepSeek shares user data with TikTok owner ByteDance, but it has "yet to confirm what data was transferred and to what extent." Data protection concerns prompted the removal of DeepSeek from app stores in South Korea. A review of DeepSeek's Android app by U.S. cybersecurity firm Security Scorecard found "multiple direct references to ByteDance-owned" services, "suggest[ing] deep integration with ByteDance's analytics and performance monitoring infrastructure."
[ » Read full article ]

BBC; Imran Rahman-Jones (February 18, 2025)

 

Google AI Co-Scientist to Aid Biomedical Researchers

An AI tool developed by Google and tested by researchers at Stanford University and the U.K.'s Imperial College London is intended to serve as an assistant to biomedical scientists. The multi-agent AI co-scientist helps researchers synthesize literature and produce novel hypotheses through the use of advanced reasoning. In tests involving liver fibrosis, Google found the tool recommended promising solutions for disease prevention and indicated it could improve the solutions it provides over time.
[ » Read full article ]

Reuters; Muvija M; Kenrick Cai (February 19, 2025)

 

U.K. Government Delays New AI Bill for Six Months

The U.K. government has delayed publication of its AI Bill until the summer. The bill is expected to require companies to submit their AI models to the U.K. AI Security Institute for testing. A senior Labor Party source noted that there still are "no hard proposals in terms of what the legislation looks like."
[
» Read full article ]

Computing; Graeme Burton (February 25, 2025)

 

Weather Forecasting Takes Big Step Forward with Europe's New AI System

The European Centre for Medium-Range Weather Forecasts (ECMWF) has launched an AI forecasting system that can predict a tropical cyclone track 12 hours ahead. It also was found to be 20% more accurate than conventional forecasting methods for predictions up to 15 days ahead. The system predicts standard temperature, precipitation, and wind, as well as solar radiation and wind speeds at 100 meters, the height of a typical wind turbine, which will be useful data for the renewable energy sector.

[ » Read full article *May Require Paid Registration ]

Financial Times; Clive Cookson (February 24, 2025)

 

OpenAI Uncovers Evidence of AI-Powered Chinese Surveillance Tool

OpenAI said it found evidence that a Chinese security operation developed an AI-powered surveillance tool to assemble real-time reports about anti-Chinese posts on Western social media. OpenAI researchers discovered the tool when one of its developers used OpenAI's models to debug its underlying computer code. The researchers also identified another campaign in which Chinese developers used OpenAI's technologies to produce English-language posts that were critical of Chinese dissidents.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (February 21, 2025)

 

AI Can Decode Digital Data Stored in DNA in Minutes

Researchers at the University of California, San Diego and Technion – Israel Institute of Technology have developed an AI system that can accurately decode data stored in DNA sequences within 10 minutes. The system, called DNAformer, features a deep learning model that can reconstruct DNA sequences, an error detection and correction algorithm, and a decoding algorithm that corrects any remaining errors while converting the information to digital data.


[
» Read full article *May Require Paid Registration ]

New Scientist; Jeremy Hsu (February 21, 2025)

 

Diagnosing Diabetes, HIV, COVID from a Blood Sample with AI Tool

Stanford University computer scientists have developed an AI tool that can screen immune-cell gene sequences in blood samples to diagnose such conditions as COVID-19, type 1 diabetes, HIV, and lupus or determine whether an individual has received the flu vaccine. The tool contains six machine-learning models that can analyze gene sequences encoding key regions in B-cell and T-cell receptors and detect patterns indicating certain diseases.

[ » Read full article *May Require Paid Registration ]

Nature; Miryam Naddaf (February 20, 2025)

 

AI Is Prompting an Evolution, Not Extinction, for Coders

The research firm Evans Data found that almost two-thirds of software developers use AI coding tools, which studies have shown improve their daily productivity in actual business settings by 10% to 30%. IDC analyst Arnal Dayaratna noted, "The skills software developers need will change significantly, but AI will not eliminate the need for them. Not anytime soon anyway."

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (February 20, 2025)

 

Large Language Models Pose Growing Security Risks

In the absence of government policy on the security of large language models (LLMs), companies face new cybersecurity challenges from them, particularly from the unstructured and conversational nature of user interactions. In addition to the possibility of employees inputting sensitive corporate data into LLMs, companies should be concerned that information generated by LLMs could contain malicious code, infringe on intellectual property, or violate copyright. Further, threat actors can use prompt injection attacks to manipulate models to perform certain actions.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Steven Rosenbush (February 20, 2025)

 

AI Is Changing How Silicon Valley Builds Startups

Today's AI startups are achieving tens to hundreds of millions of dollars in revenue with small teams, using AI to improve efficiency, and many have no need for investors. Afore Capital's Gaurav Jain likens it to the wave of companies that emerged after Amazon rolled out low-cost cloud computing services, but noted that "this time, we're automating humans as opposed to just the datacenters."


[
» Read full article *May Require Paid Registration ]

The New York Times; Erin Griffith (February 20, 2025)

 

Graduates of Chinese Universities Drive AI Research in U.S.

The Paulson Institute's MacroPolo think tank found that roughly a third (38%) of top AI researchers in the U.S. in 2022 had obtained undergraduate degrees from Chinese universities, up from 27% in 2019, versus 37% with degrees from U.S. institutions. An analysis of papers presented that year at the Conference on Neural Information Processing Systems found the U.S. accounted for seven of the 10 entities affiliated with these AI experts; China’s Tsinghua and Peking universities also were in that top 10.

[ » Read full article *May Require Paid Registration ]

Nikkei Asia; Ryoko Shimonoya; Dai Kuwamura; Tatsuya Ozaki (February 16, 2025)

 

Meta AI Expert Warns Of US-Based Scientist Exodus Due To Trump Funding Cuts

Insider (2/22, Varanasi) reported that Yann LeCun, Meta’s chief AI scientist, cautioned about a potential departure of US-based scientists due to proposed funding cuts from the Trump Administration. In a LinkedIn post on Saturday, LeCun stated, “The US seems set on destroying its public research funding system. Many US-based scientists are looking for a Plan B.” The Administration’s drastic cuts to the National Institutes of Health could lead to billions in losses for biomedical research. As lawsuits challenge these cuts, former Harvard Medical School Dean Jeffrey Flier remarked, “A sane government would never do this.” LeCun urged European institutions to capitalize on this situation, suggesting they could attract top talent by improving research conditions. He outlined key factors that researchers seek, including access to funding, good compensation, and freedom in research endeavors.

IndiaAI Mission Accelerates Domestic AI Development

Livemint (IND) (2/21) reports that India is intensifying efforts to create a homegrown artificial intelligence foundational model under the IndiaAI Mission. The Union Ministry of Electronics and IT has received 67 proposals, including submissions from major companies like Sarvam AI and Ola, focusing on large language models. IT Minister Ashwini Vaishnaw emphasized the importance of ethical AI principles. The government plans to provide significant GPU resources and launch a common compute facility to support innovation. This initiative aims to position India competitively in the global AI landscape, responding to China’s DeepSeek model.

Schools Adopt AI Chatbot For Mental Health Support

The Wall Street Journal (2/22, Jargon, Subscription Publication) reported that school districts across the US are implementing Sonny, a hybrid AI-human chatbot developed by Sonar Mental Health, to assist students with mental health issues amid a shortage of counselors. The service is available to more than 4,500 students in nine districts. Sonar CEO Drew Barvir emphasizes that trained professionals monitor interactions, ensuring safety. The program aims to enhance emotional support, especially in low-income areas.

AI Power Demand Drives Investment In Alternative Energy Sources

CNBC (2/24) reports in an online video that soaring AI workloads are driving significant investment in alternative power sources like hydrogen, nuclear, geothermal, and solar energy. Data centers could consume 12% of total US power by 2028, up from less than 4% in 2022. This surge has prompted major cloud providers to explore new energy solutions. Amazon has invested $500 million into three small nuclear reactor projects. Microsoft has green hydrogen and nuclear fusion deals, while Google is using geothermal energy to power some data centers. OpenAI CEO Sam Altman has invested heavily in fusion, fission, and new solar technologies. The Biden administration finalized a major tax credit for clean hydrogen and previously awarded $7 billion to jumpstart clean hydrogen at seven hydrogen hubs connected to companies like Amazon and ExxonMobil.

Meta Expands AI Chatbot To Middle East, North Africa

TechCrunch (2/24, Sawers) reports, “Meta has formally expanded Meta AI to the Middle East and North Africa (MENA), opening the AI-enabled chatbot to millions more people.” Moving forward, the chatbot “will be available in Algeria, Egypt, Iraq, Jordan, Libya, Morocco, Saudi Arabia, Tunisia, the United Arab Emirates (UAE), and Yemen.” Furthermore, “Meta is also expanding language support to include Arabic.”

World’s Largest Data Center Planned In South Korea

Tom’s Hardware (2/24, Morales) reports that Stock Farm Road (SFR) has signed a Memorandum of Understanding with South Jeolla Province Governor Kom Young-rok to build the world’s largest data center in South Korea. The facility will cost about $35 billion and have a capacity of 3 GW. Construction will begin this year, with a target completion in 2028. The project will include renewable energy production and R&D initiatives, creating over 10,000 jobs and generating $3.5 billion in revenue. SFR, founded by Brian Koo and Dr. Amin Badr-El-Din, plans to establish more AI data centers in Asia, Europe, and the US within 18 months. Microsoft CEO Satya Nadella noted an overbuilding of AI systems, mentioning that Microsoft will limit capital investments in AI infrastructure and lease capacity from existing data centers like those planned by SFR.

Shift Toward Natural Gas Seen Amid Effort To Meet AI Demand

The Washington Post (2/23, Halper) reported that tech and energy firms are pivoting towards natural gas to meet escalating energy needs, notably for AI development. Microsoft and Meta are advancing projects powered by gas, despite previous commitments to clean energy. GE Vernova is collaborating with Engine No. 1 to enhance gas generation for data centers, with plans to power over 3 million homes. Christopher James from Engine No. 1 noted, “Gas is going to be here,”

China’s DeepSeek Accelerates Launch Of R2 AI Model

Reuters (2/25) reports that Chinese startup company DeepSeek “triggered a $1 trillion-plus sell-off in global equities markets last month with a cut-price AI reasoning model that outperformed many Western competitors.” Now, the firm “is accelerating the launch of the successor to January’s R1 model, according to three people familiar with the company.” Deepseek had “planned to release R2 in early May but now wants it out as early as possible.” The company “says it hopes the new model will produce better coding and be able to reason in languages beyond English.” Reuters says R2 is “likely to worry the U.S. government, which has identified leadership of AI as a national priority.”

Schneider Electric Launches Global AI Ecosystem Organization To Help Partners Capture AI Opportunity

Benzinga (2/24, Inc) reports Schneider Electric has launched a new global AI and enterprise partner ecosystem organization aimed at helping partners capitalize on the AI revolution. Paul Tyrer, the newly appointed global vice president, emphasized the transformative potential of AI-powered solutions for business operations, stating, “AI-powered solutions have the potential to revolutionize business operations and drive innovation like never before.” The initiative includes the appointment of Leslie Vitrano Hubright as vice president of the global IT channel ecosystem, who noted, “This is Schneider Electric doubling down and investing in partners to lead the AI revolution.” The organization aims to enhance AI integrations and capabilities, positioning Schneider Electric and its partners to seize significant opportunities in the evolving data center landscape.

Nvidia Extends Partnership With Cisco To Ease AI Adoption

Bloomberg (2/25, Grant, Subscription Publication) reports Nvidia “is extending a partnership with networking-gear maker Cisco Systems Inc. in a push aimed at making it easier for corporations to deploy AI systems.” Bloomberg adds, “Many businesses remain in the early stages of adopting AI systems because of the complexity the shift adds to their data centers, Cisco and Nvidia said Tuesday in a joint statement.” The companies “are broadening the list of products that include each others’ technology in an attempt to remove those hurdles.”

Leading Technology Companies Turn To Hydrogen, Nuclear Energy For AI Data Centers

CNBC (2/24, Novet) reports top tech companies, including Microsoft, Amazon, and Google, are increasingly turning to hydrogen and nuclear energy to power their AI data centers, as highlighted in a recent CNBC article. Yuval Bachar, founder of the startup ECL, which builds hydrogen-powered data centers, noted that these facilities can be operational in half the time of traditional grid-connected centers, addressing the urgent power needs for AI technologies. Bachar emphasized, “We have a problem that we have to solve right now,” reflecting the growing demand for energy-efficient solutions in the tech industry. As the race for AI capabilities intensifies, companies are exploring various energy sources, including small modular reactors, to meet their sustainability goals, with Google aiming for net-zero emissions by 2030 and Microsoft targeting carbon negativity by the same year.

Amazon Cracks Down On AI Use During Job Interviews

Insider (2/27, Kim) reports Amazon is cracking down on the use of AI tools during job interviews, citing concerns over fairness and the ability to assess candidates’ “authentic” skills. Recent guidelines shared with Amazon recruiters state that applicants may face disqualification for using AI tools. The guidelines instruct recruiters to inform candidates about the policy. An Amazon spokesperson said the company’s recruiting process “prioritizes ensuring that candidates hold a high bar.” The spokesperson added that candidates must acknowledge they won’t use “unauthorized tools, like GenAI, to support them” during interviews. Amazon has also shared internal tips on how to spot applicants using AI, such as observing typing, unnatural reading of answers, or reactions to incorrect AI outputs.

Educators Are Learning To Navigate AI Cheating Concerns

Education Week (2/27, Klein) reports that educators are adapting to the rise of generative AI tools in academic settings, particularly concerning cheating. Michael Rubin, principal of Uxbridge High School in Massachusetts, emphasized the importance of teaching students to use AI responsibly, saying, “It’s not about the risk of getting caught, it’s about knowing how to use the technology appropriately.” Rubin’s school employs a tool to analyze student submissions for signs of AI usage, promoting discussions about appropriate AI use rather than punitive measures. Amelia Vance, president of the Public Interest Privacy Center, cautioned that many AI detection tools are inaccurate, particularly for students of color and non-native English speakers. Vance noted, “Unfortunately, at this point, there isn’t an AI tool that sufficiently, accurately detects when writing is crafted by generative AI,” reinforcing the need for direct communication with students suspected of cheating.

dtau...@gmail.com

unread,
Mar 8, 2025, 7:52:34 PM3/8/25
to ai-b...@googlegroups.com

Barto, Sutton Receive 2024 ACM A.M. Turing Award

Andrew G. Barto, professor emeritus of information and computer sciences at the University of Massachusetts, Amherst, and Richard S. Sutton, professor of computer science at the University of Alberta in Canada, are the recipients of the 2024 ACM A.M. Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning. In a series of papers beginning in the 1980s, the two introduced the primary concepts, built the mathematical foundations, and developed vital algorithms in the field. Their work, said ACM President Yannis Ioannidis, "laid the foundations for some of the most important advances in AI."
[ » Read full article ]

ACM Media Center (March 5, 2025)

 

AI Reshapes the Coding Workforce

The increased adoption of AI coding tools is changing the size and scope of software development teams, often allowing for leaner teams that complete the same amount of work or more. These tools, which automate a substantial amount of code development, are intended to supplement human coders. Companies have found such tools can permit developers to concentrate on complex problem-solving when boilerplate coding is automated.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (March 4, 2025)

 

AI Finds 5,000-Year-Old Civilization Beneath Dubai Desert

Researchers have located a 5,000-year-old city and roads under the sand in Dubai's Rub' al Khali desert with the help of AI and Synthetic Aperture Radar (SAR) technology. According to one of the researchers, "The application of [AI] in archaeology is like having a time machine, and now we can look at history from completely new angles."
[ » Read full article ]

The Jerusalem Post (March 3, 2025)

 

MTA Used Google Pixels to Identify Subway Track Defects

New York City's Metropolitan Transportation Authority deployed Google's TrackInspect AI tool to identify defects on subway tracks. From last September through January, four subway cars were equipped with Google Pixel phones, which detected problematic noises and other issues using their accelerometers, magnetometers, and microphones. Machine-learning algorithms were used to analyze the data and produce predictive insights. TrackInspect located 92% of defects that had been identified by inspectors.
[ » Read full article ]

Engadget; Sarah Fielding (February 28, 2025)

 

China Ramps Up Efforts for Tech Independence

Chinese Premier Li Qiang, in a speech to that nation’s lawmakers Wednesday, said AI would be vital for strengthening China’s digital economy. Li pledged that China would boost its support for applications of large-scale AI models and AI hardware. On the same day, China’s top economic planning body said the country aimed to develop a system of open-source AI models, while continuing to invest in computing power and data for the technologies.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Raffaele Huang (March 5, 2025)

 

Smart Cameras Spot Wildfires Before They Spread

The University of California, San Diego's ALERTCalifornia camera network uses AI bots as digital fire-lookouts, scanning more than 1,150 cameras in fire-prone areas across the state. Since the bots were deployed in 2023, they have detected more than 1,200 confirmed fires and are faster than 911 callers about 33% of the time. A human-staffed command center is notified when a fire is detected, where the blaze is verified and authorities are notified.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Jim Carlton (March 2, 2025)

 

Texas Needs Equivalent of 30 Reactors to Meet Datacenter Power Demand

The Electric Reliability Council of Texas (ERCOT), which manages the state's power grid, forecast an increase in energy demand requiring the addition of 30 nuclear plants' worth of electricity by 2030, thanks to the anticipated addition of datacenters powering AI to the grid. Said ERCOT’s Agee Springer, “We’ve never existed in a place where large industrial loads can really impact the reliability of the grid, and now we are stepping into that world.”

[ » Read full article *May Require Paid Registration ]

Bloomberg; Naureen S. Malik (February 28, 2025)

 

Google’s Brin Urges Workers to the Office ‘at Least’ Every Weekday

Google co-founder Sergey Brin last week said his company could lead the industry in AI when machines match or become smarter than humans, but only if employees worked harder. “I recommend being in the office at least every weekday,” he wrote in a memo posted internally. He added that “60 hours a week is the sweet spot of productivity” in the message to employees who work on Gemini, Google’s lineup of AI models and apps.

[ » Read full article *May Require Paid Registration ]

The New York Times; Nico Grant (February 28, 2025)

 

AI Robots Help Nurse Japan's Aging Population

Japan is turning to robots and other technologies to help care for its aging population. An AI-driven humanoid robot called AIREC, for example, recently was demonstrated gently helping a man in bed roll onto his side. Said Waseda University's Shigeki Sugano, who is heading the AIREC robot project, "Given our highly advanced aging society and declining births, we will be needing robots' support for medical and elderly care, and in our daily lives."
[ » Read full article ]

Reuters; Kiyoshi Takenaka (February 28, 2025)

 

Humanoid Robots Finally Get Real Jobs

Humanoid robots, with the help of AI, are being used to perform tasks typically done by human workers, or to serve as a bridge between other less-versatile automated machines common in warehouses and factories. Mass manufacturing and falling costs for the components of robots are making them cheaper to produce, and the latest AI technologies are animating robot bodies in ways not possible even a few years ago. More than a dozen startups worldwide now offer such humanoid robots.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Christopher Mims (February 27, 2025)

 

Estonia Launches AI in High Schools with U.S. Tech Groups

The Estonian government is launching the AI Leap initiative in partnership with OpenAI and Anthropic, providing free access to AI-learning tools to 20,000 high school students beginning in September. Next year, the program will be expanded to vocational schools, and possibly younger students as well. Estonian President Alar Karis said the goal of AI Leap is to foster an awareness of and critical thinking about AI among students, not to replace educators.

[ » Read full article *May Require Paid Registration ]

Financial Times; John Thornhill; Richard Milne (February 26, 2025)

 

U.S. Workers Skeptical AI Will Help Them

Less than a third of respondents to a Pew Research Center survey of around 5,300 Americans said they were "excited" about the use of AI in future workplaces. Around 80% of Americans do not use AI at work, and most of those who do are not impressed by the results, according to the survey. Among other findings, 52% of workers said they were "worried" about how AI could be used in future workplaces.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Shira Ovide; Danielle Abril (February 25, 2025)

 

Universities Expand AI Course Offerings Amid Rising Demand

Insider (3/2, Perkel) reports that universities are increasingly developing artificial intelligence programs to meet rising interest, particularly among non-STEM students. Carnegie Mellon University (CMU) has evolved its undergraduate AI major since its inception in 2018, with program director Reid Simmons noting, “These large language models... have basically taken over.” The focus now includes a broader range of AI topics, with machine learning classes increasing from a couple to “as many as 10.” Similarly, Johns Hopkins University is expanding its online AI master’s program to accommodate students from diverse backgrounds, as director Barton Paulhamus stated, “What can we give them that they can learn about AI without needing to go through 10 courses of prerequisites?” The University of Miami aims to demystify AI for non-computing students, with Dean Leonidas Bachas emphasizing, “This is a computer science class for all.”

Chinese Buyers Using Third Parties To Circumvent Export Controls For Next-Gen AI Chips

The Wall Street Journal (3/2, Huang, Lin, Subscription Publication) reports Chinese customers are finding ways to circumvent US export controls for next-gen computer chips, particularly those that assist in the development of artificial intelligence. Many traders within China are selling computers with Nvidia’s Blackwell chips pre-installed by routing the product through third parties in nearby countries, highlighting the challenges the Administration is facing in preventing some nations from accessing the chips.

Amazon Invests $500M In New Nuclear Reactor Project

KIRO-TV Seattle (2/28, Thompson) reported Amazon has committed $500 million to X-Energy for the construction of a new nuclear reactor near the Columbia Generation Station in Washington, part of Amazon’s bid to power its AI revolution. The planned “pebble bed” reactor technology promises enhanced safety compared to traditional reactors. However, environmental concerns persist, as Columbia Riverkeeper advocacy director Dan Serres emphasized the risks of additional nuclear waste near the already contaminated Hanford site, calling it “an absolutely reckless idea.” The smaller X-Energy reactors aim to generate about 80 mW of electricity, contrasting with the over 1,000 mW produced by the Columbia Generating Station, and could be built in a factory-like setting to reduce costs.

California Lawmaker Relaunches Pared-Down AI Safety Bill After Big Tech Pushback

Politico (2/28, DiFeliciantonio) reported California Sen. Scott Wiener (D), who was “behind a divisive AI safety bill last year, has relaunched a pared-down version focused on whistleblower protections, after his prior failed attempt ignited a national debate over how, and whether, to regulate the powerful technology.” Wiener “filed the full details of his second attempt at reining in the potential harms of artificial intelligence late Thursday night, after his last bill was vetoed by Gov. Gavin Newsom amid pushback from certain Big Tech figures warning of consequences for innovation.”

AI Skills Gap Highlights Workforce Expectations

Quotidiano (ITA) (3/3) reports that a recent study by Access Partnership, in collaboration with Amazon Web Services, surveyed more than 6,500 employees and 2,000 employers across France, Germany, Spain, and the UK. The report emphasizes the need to address the AI skills gap to maintain Europe’s competitiveness over the next decade. By 2028, 86% of employers plan to adopt AI tools, particularly in IT (82%) and other business functions like finance (77%) and R&D (78%). Maureen Lonergan, Vice President of Training and Certification at Amazon Web Services, noted that “65% of European workers believe AI will positively impact their careers and are interested in acquiring specific skills.”

Singapore Fraud Case Involving US Servers Could Contain Nvidia Chips, Minister Says

Reuters (3/3) reports that Singapore announced a fraud case last week involving three individuals charged with illegally moving Nvidia’s AI chips to the Chinese firm DeepSeek. On Monday, Home Affairs and Law Minister K Shanmugam revealed that the servers implicated were supplied by US companies Dell Technologies and Super Micro Computer. He stated, “Whether Malaysia was the final destination ... we do not know for certain at this point,” while confirming that Singapore is conducting an independent investigation following an anonymous tip-off. The Singaporean authorities have also reached out to US officials to ascertain if the servers contained any US export-controlled items and are prepared to collaborate on any joint inquiry. The US is currently investigating if DeepSeek has utilized prohibited US chips, as reported by Reuters.

Amazon, Nvidia Drive Expansion Of Physical AI In Robotics, Automation

Forbes (3/3, MSV) reports physical artificial intelligence is transforming industries by integrating AI with sensors and actuators in robots, vehicles, and devices. Unlike traditional automation, physical AI enables machines to adapt in real time and operate autonomously. Amazon’s fulfillment centers use 750,000+ mobile robots to boost efficiency, with AI-driven systems like Cardinal sorting packages and improving productivity by 25%. Nvidia is investing heavily in hardware and simulation platforms to accelerate physical AI adoption. Organizations must consider high initial costs, security risks, and workforce training when implementing these technologies. As AI advances, industries from manufacturing to retail will continue integrating autonomous systems to improve efficiency and reduce manual labor.

OpenAI Announces $50 Million Investment In Higher Ed Research Consortium

Inside Higher Ed (3/5, Palmer) reports that OpenAI announced on Tuesday a $50 million investment to establish NextGenAI, a research consortium comprising 15 institutions aimed at leveraging AI to enhance research and education. The group, which includes 13 universities, is intended to “catalyze progress at a rate faster than any one institution would alone,” according to the company. Brad Lightcap, OpenAI’s chief operating officer, emphasized the importance of collaboration, stating, “The field of AI wouldn’t be where it is today without decades of work in the academic community.” Each institution, including Boston Children’s Hospital and the Boston Public Library, will receive funding and computational resources to support various initiatives, such as AI literacy and medical research. The consortium features notable universities like Harvard, MIT, and the University of Oxford.

Reclaim Project Develops Portable AI-Powered Recycling Plant

Recycling Today (3/3, Voloschuk) reports the Reclaim project, funded by the EU’s Horizon 2020 program, has created a low-cost, portable AI-powered robotic recycling plant for deployment in the Greek Islands. This technology addresses waste management challenges in remote areas by using multiple robots and AI for effective material sorting. Javier Grau from Aimplas stated, “Remote islands, hard-to-reach rural areas or regions with limited infrastructure are just some of the scenarios where this equipment can make a significant difference.” The compact design allows for rapid deployment, enhancing local recycling efforts and promoting a circular economy for plastics.

State Department Will Use AI To Revoke Foreign Student Visas

Inside Higher Ed (3/7, Custer) reports that Secretary of State Marco Rubio is set to implement an initiative called “Catch and Revoke” to utilize artificial intelligence in the assessment of foreign student visas. According to Axios, the program will analyze social media accounts of thousands of student visa holders for indications of support for Hamas’s October 7, 2023, attack on Israel. If a post appears “pro-Hamas,” it may lead to visa revocation, as stated by a State Department official. The initiative also includes reviewing news reports of anti-Israel protests and legal actions by Jewish students for potential antisemitic behavior. The official commented, “We found literally zero visa revocations during the Biden administration,” suggesting a lack of enforcement. The official emphasized the importance of using AI tools, stating, “It would be negligent for the department that takes national security seriously to ignore publicly available information.”

Microsoft To Launch AI Data Centers In Kuwait

GCC Business (3/6, Nair) reports that Microsoft has signed an agreement with the Government of Kuwait, represented by the Central Agency for Information Technology (CAIT) and the Communication and Information Technology Regulatory Authority (CITRA), to establish an AI-powered Azure Region. This partnership aims to enhance local AI capabilities and stimulate economic growth. The Azure Region is intended to provide “scalable, highly available, and resilient cloud services” to facilitate digital transformation in Kuwait. Additionally, the initiative includes integrating Microsoft 365 Copilot for government employees, promoting efficiency and productivity. The collaboration also involves launching a comprehensive skilling initiative in AI and cybersecurity to prepare the workforce for future demands.

Utah’s New $2 Billion AI Data Center Project Is A Major Bet On AI Infrastructure

The Storage Review (3/5) reports a new $2 billion AI data center in West Jordan, Utah, is fully leased before its opening, highlighting the urgent demand for AI infrastructure. Backed by J.P. Morgan and Starwood Property Trust, the facility will deliver 175MW of compute power, incorporating advanced direct-to-chip liquid cooling technology to manage high thermal loads from AI workloads. The project’s strategic location in Utah offers cost-effective power solutions and a cooler climate, making it ideal for high-performance AI applications. As financial institutions increasingly invest in AI-driven infrastructure, this development signifies a broader industry shift towards specialized data centers to meet surging AI compute demands.

dtau...@gmail.com

unread,
Mar 15, 2025, 4:29:19 PM3/15/25
to ai-b...@googlegroups.com

China's Top Universities Prioritize AI, Other 'National Strategic Needs'

China's Peking, Renmin, and Shanghai Jiao Tong universities will expand undergraduate enrollment as they prioritize "national strategic needs" such as developing AI talent. Peking University plans to add 150 undergraduate spots this year focused on areas of "national strategic importance," fundamental disciplines, and "emerging frontier fields" such as information science and technology. In January, China issued its first national action plan to build a "strong education nation" over the next 10 years.
[
» Read full article ]

Reuters; Farah Master; Eduardo Baptista (March 10, 2025)

 

AI Makes Its Way to Vineyards

The wine industry increasingly is adopting AI to supplement its workforce and improve decision-making, efficiency, and sustainability while reducing waste. Autonomous tractors help farmers reduce fuel use and pollution, while automated irrigation systems make water use more efficient by monitoring soil and vines. Smart sensors help target spraying of insecticides or other material for crop retention, and the AI-powered farm management platform Scout can analyze images to monitor a crop's health and predict yields.
[
» Read full article ]

Associated Press; Sarah Parvini (March 10, 2025)

 

U.S. to Use AI to Review Foreign Student Visa Holders for Terrorist Sympathies

U.S. Secretary of State Marco Rubio is launching an AI-enabled "Catch and Revoke" effort to cancel the visas of foreign nationals who appear to support designated terror groups, sources say. The effort includes AI-assisted reviews of tens of thousands of student visa holders' social media accounts, focusing on evidence of alleged terrorist sympathies.
[ » Read full article ]

Axios; Marc Caputo (March 6, 2025)

 

AI-enabled BCI Allows Paralyzed Man to Control Robot Arm

A brain-computer interface (BCI) developed by University of California, San Francisco researchers enabled a patient who was paralyzed after suffering a stroke to operate a robotic arm for seven months without significant calibration. The researchers created an AI model that adjusted for day-to-day shifts in brain activity, overcoming a common challenge associated with BCIs. The AI learned from the patient's brain signals while he visualized simple movements and practiced with a virtual robotic arm.
[ » Read full article ]

Interesting Engineering; Srishti Gupta (March 6, 2025)

 

AI to Search for the Trillions of Viruses in Our Bodies

Five universities are participating in the Human Virome Program, aimed at identifying more of the tens of trillions of viruses living in the human body. The project, which has received $171 million in federal funding, will use AI systems to analyze saliva, stool, blood, milk, and other samples from thousands of volunteers. Researchers hope the program will provide insights on how the virome influences health.

[ » Read full article *May Require Paid Registration ]

The New York Times; Carl Zimmer (March 4, 2025)

Additional free news story on this project: https://www.caltech.edu/about/news/caltech-joins-national-human-virome-program

 

1 in 5 Women in Tech Plan to Switch Jobs

Generative AI skills are helping boost women in the technology sector as many look to switch jobs, according to Ensono's latest Speak Up survey of 1,500 female-identifying full-time tech professionals. Almost 90% of respondents said possessing generative AI know-how has enhanced their job performance and unlocked new opportunities. Nearly 20% are planning to leave their current companies this year, a rate similar to that seen in 2022's "Great Resignation."
[
» Read full article ]

CIO Dive; Lindsey Wilkinson (March 10, 2025)

 

Beijing to Roll Out AI Courses for Kids

Starting in the fall, schools in Beijing will introduce AI courses to primary and secondary students. At least eight hours of AI classes will be offered per academic year, according to the Beijing Municipal Education Commission, which said schools will be able to run them as standalone courses or integrate them with existing curricula.

[ » Read full article *May Require Paid Registration ]

Bloomberg (March 9, 2025)

 

Pentagon Signs AI Deal to Help Commanders Plan Military Maneuvers

Illustrating growing collaboration between the U.S. military and private tech sector, the Pentagon has contracted startup Scale AI to find ways to use AI to speed up military decision-making. Scale will develop AI programs that commanders could query for recommendations about how to most efficiently move resources throughout a region, combining data from intelligence sources and battlefield sensors.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Gerrit De Vynck (March 5, 2025)

 

McDonald's Gives Its Restaurants an AI Makeover

McDonald's is rolling out edge computing to its restaurants, enabling them to process and analyze data on-site with the goal of improving the customer and employee experience. AI will be used to analyze data from Internet-connected kitchen equipment to predict maintenance issues, while in-store mounted cameras will use computer vision to ensure order accuracy. In addition, voice AI will be used at the drive-through, and generative AI virtual managers will handle shift scheduling and other administrative tasks.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette; Belle Lin (March 5, 2025)

 

European Commission Plans Gigafactories To Boost AI Industry

Reuters (3/11) reports that the European Commission is raising $20 billion to build four “AI gigafactories” aimed at enhancing Europe’s competitiveness in artificial intelligence. This initiative, announced by President Ursula von der Leyen at the February 11 AI summit in Paris, seeks to develop large public access data centers. Industry experts, such as Bertin Martens from Bruegel, express skepticism about the practicality of these factories, highlighting challenges like chip shortages and site selection. The gigafactories will be funded through a new 20 billion-euro fund and are envisioned as a public-private partnership to support local firms in creating AI models compliant with EU regulations. However, Kevin Restivo from CBRE warns that these projects could encounter the same obstacles as existing private ventures in Europe.

China’s Manus AI Claims Lead Over US Competitors

Bloomberg (3/10, Subscription Publication) reports that a Chinese startup, Manus AI, recently launched a preview of its general AI agent, which claims to outperform leading US competitors like OpenAI’s Deep Research in tasks such as resume screening and itinerary creation. Co-founder Yichao Ji described the product as “truly autonomous,” generating significant interest and comparisons to another Chinese firm, DeepSeek. However, user feedback has been mixed; while some praised its outcomes, others noted slow processing times and crashes. Manus has raised over $10 million but has not published detailed development papers or released its code. The competitive landscape remains uncertain as US companies continue to innovate in AI technology.

Stargate Venture To Deploy Nvidia Chips At Texas Data Center

Data Center Knowledge (3/7) reports that OpenAI and Oracle Corporation are set to fill a new data center in Abilene, Texas, with 64,000 Nvidia AI chips by the end of 2026 as part of their $100 billion Stargate venture. The initial phase will see 16,000 chips deployed by summer 2024. An OpenAI spokesperson confirmed collaboration with Oracle on the data center’s design and operation, emphasizing the significant computing power aimed at enhancing generative AI capabilities.

OpenAI Partners Agrees To Pay CoreWeave $11.9 Billion For AI Data Centers, Services

CNBC (3/10, Field) reports OpenAI has agreed to pay CoreWeave $11.9 billion over five years for AI data centers and services. The agreement includes OpenAI acquiring a $350 million stake in CoreWeave linked to its upcoming IPO, according to confidential sources. CoreWeave, supported by Nvidia, plans to go public on Nasdaq soon, with a 2024 revenue increase of over 700% to $1.92 billion. In October, CoreWeave secured a $650 million credit line to expand its data centers and has raised over $12 billion from investors. By the end of 2024, CoreWeave operated 32 data centers with more than 250,000 Nvidia GPUs, surpassing its initial goal of 28 centers. CoreWeave’s clientele includes Microsoft, Meta, IBM, and Cohere. The company, valued at $19 billion in May, aims for a valuation exceeding $35 billion in its IPO.

Report: Meta Testing First In-House Chip For AI Training

Reuters (3/11, Paul, Hu) reports Meta “is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters.” The company “has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.”

University of Wisconsin-Stout Embedding AI Training In All Of Its Degree Programs

The Chippewa (WI) Herald reports that “from engineering to communication and counseling, manufacturing, marketing and design, construction, supply chain and more, University of Wisconsin-Stout is preparing graduates to meet the needs of a rapidly evolving workforce by embedding AI training in all of its degree programs.” And its “comprehensive approach to AI literacy is more than program curriculums – Wisconsin’s Polytechnic University is collaborating with community, business and industry partners through its innovation centers, consulting services, continuing education courses and regional consortiums to help Wisconsin leverage AI-driven solutions that put it ahead of the curve.”

California College Professors Divided On Using AI Tools In Curricula

EdSource (3/12) reports that California colleges have begun incorporating artificial intelligence (AI) tools into their curricula since the release of ChatGPT in 2022. While some professors express concerns about cheating and diminished critical thinking, others advocate for AI’s educational benefits. A report from University of Southern California’s Marshall School of Business revealed that 38 percent of faculty use AI in classrooms. Professor Ramandeep Randhawa said, “It is critical to prepare students for this AI-first environment.” At California State University, Long Beach, lecturer Casey Goeller has students use AI for assignments, emphasizing its utility in academic support. However, some faculty, like Professor Olivia Obeso from Cal Poly, enforce no-AI policies to foster foundational skills. Overall, educators are navigating a balance between embracing AI and ensuring students develop critical thinking skills necessary for the workforce.

Google Unveils Two New AI Models For Robotics

Reuters (3/12) reports that Google introduced two new AI models for robotics on March 12, based on its Gemini 2.0 model, aiming to support the expanding robotics industry. The models, Gemini Robotics and Gemini Robotics-ER, enhance robots’ capabilities in understanding their environment and executing physical actions. This launch follows Figure AI’s recent departure from a collaboration with OpenAI after achieving a breakthrough in AI for robotics. Google tested its models on data from its bi-arm platform, ALOHA 2, and noted their utility for startups looking to lower development costs. Additionally, Apptronik, which recently secured $350 million in funding with Google’s participation, is set to scale production of AI-powered humanoid robots.

Celestial AI Raises $250 Million For AI Chip Development

Reuters (3/11, Nellis) reported that Celestial AI, a Silicon Valley chip startup, announced on Tuesday it has secured an additional $250 million in venture capital, raising its total funding to $515 million. The company is utilizing photonics technology, which employs light instead of electrical signals, to enhance connections between AI computing chips and memory chips. This connection’s speed, known as memory bandwidth, is crucial for advancing AI systems and influences US government export controls regarding AI technology. Nvidia currently leads in memory bandwidth with its technologies NVLink and NVSwitch, prompting competition among startups for alternatives. Celestial AI’s technology, described as a “photonic fabric,” aims to improve speed while conserving space and power. CEO Dave Lazovsky stated, “There are no good answers right outside of Nvidia,” highlighting the efficiency and latency benefits of their innovation. The funding round was led by Fidelity Management & Research, with participation from several investors, including BlackRock and AMD Ventures.

dtau...@gmail.com

unread,
Mar 22, 2025, 1:36:07 PM3/22/25
to ai-b...@googlegroups.com

Art Created by AI Cannot Be Copyrighted, Court Rules

The U.S. Circuit Court of Appeals for the District of Columbia unanimously ruled that art created autonomously by AI cannot be copyrighted. The three-judge panel upheld the U.S. Copyright Office's decision to deny a copyright to Stephen Thaler for the painting "A Recent Entrance to Paradise." Thaler had listed his AI platform "Creativity Machine" as the painting's "author" and himself as the owner in the copyright application.
[ » Read full article ]

CNBC; Dan Mangan (March 19, 2025)

 

Europol Warns of AI-Driven Crime Threats

Europol said in a report released Tuesday that organized crime gangs are moving their recruitment, communication, and payment systems online and leveraging AI to scale up their operations across the globe and prevent detection. According to the report, criminals are using AI to produce messages in different languages and create realistic impersonations of individuals, among other acts. The EU law enforcement agency said fully autonomous AI "could pave the way for entirely AI-controlled criminal networks, marking a new era in organized crime."
[ » Read full article ]

Reuters; Michal Aleksandrowicz (March 18, 2025)

 

AI Search Engines Cite Incorrect Sources at 60% Rate, Study Finds

Researchers at Columbia University's Tow Center for Digital Journalism found that AI models gave incorrect answers to more than 60% of queries about news sources. The researchers fed excerpts of news stories into eight AI-driven search tools and found that all tested models provided fabrications, rather than not responding when their information was unreliable. The study also showed the models tended to point users to syndicated versions of content rather than original publisher sites.
[ » Read full article ]

Ars Technica; Benj Edwards (March 13, 2025)

 

Tim Berners-Lee Wants to Know: 'Who Does AI Work For?'

At the South by Southwest conference, World Wide Web inventor Tim Berners-Lee, an ACM A.M. Turing Award laureate, raised the question of who AI works for. Even if AI models are reliable, accurate, and unbiased, there will be concerns about whether company or user interests are paramount. Said Berners-Lee, "I want AIs to work for me to make the choices that I want to make. I don't want an AI that's trying to sell me something."
[ » Read full article ]

CNet; Jon Reed (March 12, 2025)

 

AI Ring Tracks Spelled Words in American Sign Language

A team led by Cornell University researchers developed an AI-powered ring that can track fingerspelling in American Sign Language. Worn on the thumb, SpellRing uses a microphone and speaker to transmit sound waves that track hand and finger movements, and a mini gyroscope to track hand motions. Images captured by micro-sonar technology are analyzed by a proprietary deep learning algorithm to predict the fingerspelled letters in real time, with 82% to 92% accuracy.
[ » Read full article ]

Cornell Chronicle; Louis DiPietro (March 17, 2025)

 

Nvidia Hosts the Super Bowl of AI

The Nvidia GTC annual developer conference has evolved from an academic summit into the Super Bowl of AI, attracting a who's who of industry leaders. On March 18, more than 25,000 people filled a National Hockey League arena to hear Nvidia CEO Jensen Huang speak on the future of AI. Nvidia GTC was formerly the GPU Technology Conference, which included a research summit where academics detailed how they had used the company's components for computing research.

[ » Read full article *May Require Paid Registration ]

The New York Times; Tripp Mickle (March 18, 2025)

 

'Doxxing' Scandal Casts Shadow Over Baidu's AI Model Release

Chinese tech giant Baidu is facing criticism over a "doxxing" scandal that has overshadowed the launch of its new AI models. The daughter of Baidu Vice President Xie Guangjun shared social media users' real names, ID numbers, phone numbers, and other personal information during an online argument over a K-pop singer. The incident has raised concerns among social media users across various platforms about whether Baidu is leaking users' personal data.

[ » Read full article *May Require Paid Registration ]

Nikkei Asia; Cissy Zhou (March 18, 2025)

 

AI Is Changing the Way Computers Are Built

AI is fueling the most fundamental change to computing since the early days of the Internet. Just as companies completely rebuilt their computer systems to accommodate the new commercial Internet in the 1990s, they are now rebuilding from the bottom up, wiring together up to 100,000 chips to create powerful AI systems. The industry is also looking at new ways to house, power, and cool these systems to keep them from over-heating.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Karen Weise; Marco Hernandez (March 16, 2025); et al.

 

The Quest for AI 'Scientific Superintelligence'

Researchers at startup Lila Sciences developed a generative AI program trained on published and experimental data, the scientific process, and reasoning, in the quest for "scientific superintelligence." The AI is tasked with generating new ideas and testing them in automated labs with a handful of human assistants. Said Lila cofounder Molly Gibson, “Our goal is really to give AI access to run the scientific method—to come up with new ideas and actually go into the lab and test those ideas.”

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (March 10, 2025)

 

There's a Good Chance Your Kid Uses AI to Cheat

Impact Research found that close to 40% of middle- and high-school students, and almost half of college students, used AI to complete assignments without a teacher's knowledge or permission. Some educators are responding by requiring students to write first drafts by hand in class without access to computers or smartphones, or are no longer assigning homework. Others are using third-party AI detection tools, which are not always accurate in flagging AI use.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Matt Barnum; Deepa Seetharaman (March 15, 2025)

 

China Announces Generative AI Labeling to Cull Disinformation

The Cyberspace Administration of China, along with three other agencies, issued new regulations requiring service providers to label AI-generated material to prevent disinformation. The rules go into effect Sept. 1, with labels either explicitly stating the material is AI-generated or making the disclosure via metadata encoded in each file. Additionally, app store operators must determine whether developers provide AI-generated content services and review their labeling mechanisms.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Debby Wu (March 14, 2025)

 

AI Talent Race Reshapes the Tech Job Market

Of the U.S. tech jobs posted since January, almost 25% seek workers with AI skills, according to the University of Maryland's (UMD's) AI job tracker. AI-related listings accounted for 1.3% of all job postings in January, compared with tech job listings at 5.4%. According to UMD's Anil K. Gupta, the turning point for the AI job market was the launch of OpenAI's ChatGPT, which bumped AI-related job postings 68% from its fourth-quarter 2022 launch through the end of 2024. Over the same period, tech job postings declined 27%.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Nate Rattner (March 10, 2025)

 

Amazon Bets On Trainium Chips To Compete With Nvidia

Semafor’s (3/14) Reed Albergotti writes that Amazon’s $8 billion investment in AI startup Anthropic and its development of the Trainium2 chip mark a strategic effort to challenge Nvidia’s dominance in AI hardware. Amazon’s Annapurna Labs designed the chip as part of Project Rainier, aiming to create the world’s most powerful computer through extreme vertical integration. Anthropic, Amazon’s key customer, will use Trainium2 to train its Claude AI model, enhancing performance and cost efficiency. Annapurna Director of Engineering Rami Sinno said, “Every single chip that we build and deliver has customers waiting for it.” While Nvidia’s Cuda software remains a formidable competitor, Amazon’s open instruction set and focus on compute efficiency could attract more customers, reducing reliance on Nvidia amid global chip shortages.

Google Expands AI Business In UK

TechCrunch (3/17, Lunden) reports that Google is enhancing its AI operations in the U.K., as announced on Monday in London by Google DeepMind CEO Demis Hassabis and Google Cloud CEO Thomas Kurian. The company will expand U.K. data residency to include Agentspace, enabling local hosting of its AI agent for enterprises. Additionally, Google introduced financial incentives for AI startups, offering up to £280,000 in Google Cloud credits for those joining its new U.K. accelerator. Chirp 3, an audio generation model, will be added to the Vertex AI platform. This initiative aims to strengthen Google’s presence in the U.K. AI market.

Large Technology Companies Expected To Invest Over $500B In AI By 2032

Bloomberg (3/17, Davalos, Subscription Publication) reports large tech companies are expected to increase their combined investment in artificial intelligence “to more than $500 billion by early next decade, driven in part by a newer approach to AI from DeepSeek and OpenAI, according to Bloomberg Intelligence.” So-called hyperscale companies including Microsoft, Amazon, and Meta are “projected to spend $371 billion on data centers and computing resources for AI in 2025, a 44% increase from the year prior, according to a report published Monday.” That amount is “set to rise to $525 billion by 2032, growing at a faster clip than Bloomberg Intelligence expected before the viral success of DeepSeek.”

Nvidia CEO Expected To Defend Company’s AI Strategy As Costs, Competitors Mount

Reuters (3/17, Nellis, Cherney) reports that Nvidia CEO Jensen Huang is expected to address the company’s annual software developer conference this week amid growing pressure to “defend his nearly $3 trillion chip company’s dominance as pressure mounts on its biggest customers to rein in the costs of artificial intelligence.” At the conference, Nvidia is “expected to reveal details of a chip system called Vera Rubin, named for the American astronomer who pioneered the concept of dark matter, with the system expected to go into mass production later this year.” However, Rubin’s predecessor, “a chip named after mathematician David Blackwell announced this time last year,” is still only “trickling onto the market after production delays that have eaten into Nvidia’s margins.” Nvidia is also expected to “hint at its plans” on quantum computing and “efforts to build a personal computer central processor chip.”

Illinois Lawmakers Propose AI Guidelines For Schools

Chalkbeat (3/17, Smylie) reports that Illinois educators are urging state lawmakers to establish guidelines for using artificial intelligence (AI) in classrooms. Two bills, HB2503 and SB1556, have been proposed to create an advisory committee that will provide guidance on AI use in education. These bills require school districts to report AI usage to the Illinois State Board of Education. Rep. Laura Faver Dias (D) emphasized the importance of this legislation, saying, “Our teachers are on the front lines and spend hours with our students every day.” Bill Curtin, policy director at Teacher Plus Illinois said, “We’re really focused on empowering teachers with the guardrails to know that experimenting is safe.” The House proposal passed the education policy committee with a 9-4 vote and awaits further negotiation with the Illinois State Board of Education. Chicago Public Schools has already developed a guidebook for educators to navigate generative AI.

Collegis Education Highlights AI’s Impact On Higher Ed

Forbes (3/17, Newton) reported that Kim Fahey, CEO of Collegis Education, believes AI is transforming higher education administration. Fahey states that AI is not a simple solution for schools but requires clean data and preparation. AI can automate tasks, enhance marketing, recruitment, and retention, and offer tutoring and advising options. Collegis collaborates with Google Cloud to help institutions manage data. Brad Hoffman of Google Public Sector highlights AI’s role in integrating data to improve decision-making. Fahey notes increased tech spending and competitive pressures, emphasizing the need for skilled IT teams to harness AI effectively.

Tech Companies Request Regulatory Flexibility In South Korea

The Korea Times (3/18) reports that AI policy representatives from major tech firms, including OpenAI and Google, have requested the South Korean government to adopt a flexible approach in implementing the AI Basic Act. OpenAI’s Sandy Kunvatanagarn and Google’s Alice Hunt Friend and Eunice Huang met with the Ministry of Science and ICT officials, as did Jared Ragland from the Business Software Alliance, which includes companies like Adobe, IBM, and Microsoft. The AI Basic Act, passed by the National Assembly in December, is set to become effective in January 2026 and is the second AI law globally after the EU’s. The Ministry of Science and ICT is currently developing enforcement ordinances for the Act. The tech company officials sought flexibility compared to the EU’s stringent AI regulations and discussed operator liability and the definition of high-impact applications.

Nvidia Leads Semiconductor Revenue Growth With 125% Increase

eeNews Europe (3/17, Clarke) reported that Nvidia’s semiconductor revenue surged by 125 percent to $124.4 billion, capturing a 50 percent share of the top 10 companies’ aggregate revenue of $249.8 billion. TrendForce highlights that the adoption of open-source models like DeepSeek may reduce AI adoption costs and boost AI use from servers to personal devices, with Edge AI being the next growth driver. Nvidia’s GPU demand rose in 2024, and upcoming GB200 and GB300 launches in 2025 are expected to further increase revenue. Broadcom’s semiconductor division saw an eight percent revenue increase to $30.64 billion, with AI chips comprising me than 30 percent of its solutions. Qualcomm’s QCT division achieved $34.86 billion in sales, a 13 percent increase, as it shifts focus to AI PCs and edge computing. MediaTek’s 5G smartphone penetration is projected to exceed 65 percent in 2025, with its partnership with Nvidia on Project DIGITS supporting growth.

UK Scientists Win £1 Million Prize For AI Breakthrough In Clean Energy Materials

The Daily Mail (UK) (3/19, Media) reports that British scientists from Imperial College London won a £1 million Government prize for their AI breakthrough that accelerates the development of materials for wind turbines and electric car batteries. The project, Polaron, uses a design tool with microscopic analysis to predict material performance. The Government hopes this technology will aid in creating stronger, lighter, and more efficient components for clean energy and transport. Science Secretary Peter Kyle said, “Polaron exemplifies the promise of AI and shows how, through our Plan for Change, we are putting AI innovation at the forefront.” Business Secretary Jonathan Reynolds emphasized the Government’s dedication to leveraging new technologies like AI to aid British companies in product development and export.

Semiconductor Industry Experiences Explosive Growth In 2024

Tom’s Hardware (3/18) reports that the global semiconductor industry saw significant growth in 2024, driven by AI processor sales, according to TrendForce. The Top 10 fabless chip developers earned $249.8 billion, with Nvidia accounting for half of that revenue. Nvidia’s revenue reached $124.3 billion, a 125% increase from 2023, due to high demand for its Hopper-based GPUs. Qualcomm ranked second with $34.86 billion, a 13% increase, driven by smartphones and automotive sectors. Broadcom held third place with $30.64 billion, an 8% rise, aided by AI-related products. AMD’s revenue grew 14% to $25.79 billion, boosted by its server business. MediaTek earned $16.52 billion, a 19% increase, with success in 5G smartphones and AI products. Marvell, Realtek, Novatek, Will Semiconductor, and MPS also showed growth. TrendForce predicts AI will continue to drive growth in 2025.

Nvidia, xAI Join With Microsoft, BlackRock To Boost AI Infrastructure

Reuters (3/19, Sriram) reports that Nvidia and Elon Musk’s xAI have joined a consortium supported by Microsoft, MGX, and BlackRock, aiming to enhance AI infrastructure in the US. The consortium, established last year, plans to initially invest over $30 billion in AI projects, focusing on data centers and energy facilities to support AI applications like ChatGPT.

        Nvidia CEO Declines Involvement In Intel Consortium. Reuters (3/19) reports that during Nvidia’s annual developer conference on Wednesday in San Jose, California, Nvidia CEO Jensen Huang indicated that orders for 3.6 million “Blackwell” chips from major cloud providers do not reflect full demand, excluding significant customers like Meta. Meta plans to use these chips for its Llama models and anticipates spending up to $65 billion on AI infrastructure, largely on Nvidia chips. Huang addressed investor concerns about AI chip demand, emphasizing that DeepSeek’s focus on reasoning would boost the need for Nvidia chips. Huang noted minimal short-term tariff impact but mentioned potential US production shifts.

Space Force Releases Data, AI Action Plan

MeriTalk (3/19, Perez) reports the Space Force has unveiled an “action plan to transform the service branch into a more data-driven and AI-enabled force and improve its ability to maintain space superiority.” In a statement, Col. Nathen L. Iven, acting deputy chief of space operations for cyber and data, said, “As the world’s first digital service, the United States Space Force recognizes the critical role that data and artificial intelligence will play in maintaining space superiority.” Similar to its FY2024 plan, “the Space Force’s FY2025 strategy places a strong emphasis on advancing data and AI governance, cultivating a workforce culture that understands the critical role of data and AI, and enhancing partnerships across government, academia, industry, and international allies.” The Space Force also plans “to deepen its understanding of AI and space technologies by collaborating with experts through the Commercial Space Office and Space Domain Awareness (SDA) Tap Lab. It also aims to establish standardized benchmarks to assess the performance of Large Language Models in space operations, focusing on mission-critical tasks and domain-specific challenges.”

College Board Introduces AP Courses In Cybersecurity And Business

Education Week (3/19, Klein) reports that the College Board is collaborating with industry leaders like the US Chamber of Commerce and IBM to develop new Advanced Placement (AP) courses aimed at providing high school students with job-relevant skills. The initiative, called AP Career Kickstart, introduces courses in cybersecurity and business principles/personal finance. David Coleman, CEO of the College Board, mentioned that “high schools had a crisis of relevance far before AI,” emphasizing the need for “the next generation of coursework.” The new courses are designed to offer students practical skills and may help them earn college credit or appeal to employers. The cybersecurity course is being piloted in 200 schools and aims to expand to 800 next year. Neil Bradley from the Chamber of Commerce stated, “This course is going to give people a leg up both when they’re applying for jobs, and then once they get the job.”

        Speaking to Education Week (3/19, Klein) last month, Coleman said, “AI-powered tools can already pass nearly every AP test,” highlighting the need for courses that prepare students for AI-dominated workplaces. The first courses will launch in the 2026-27 school year. Coleman emphasized the importance of equipping students with skills such as creativity and critical thinking through courses like AP Seminar, which integrates collaboration into its grading. The College Board is also considering teacher training in AI and cybersecurity.

University Of Idaho Awarded $4.5 Million AI Grant For Research Administration

According to a release (3/19), the University of Idaho (U of I) has received a $4.5 million grant from the National Science Foundation’s GRANTED program to enhance research management using generative AI. The project, led by Principal Investigator Sarah Martonick, director of the Office of Sponsored Programs, aims to reduce administrative burdens by automating data transfer processes. Chris Nomura, U of I’s vice president of research said, “The new AI tools should allow research administrators...to reduce their time spent on repetitive, monotonous tasks.” The initiative is a collaboration between U of I’s Office of Sponsored Programs and the Institute of Interdisciplinary Data Sciences (IIDS). The project also seeks to establish a “community of practice” to share AI tools with other institutions, starting with Southern Utah University, and aims to include more universities by the third year.

Nvidia CEO Emphasizes Need For Fastest Chips At GTC Conference

CNBC (3/19, Leswing) reports that Nvidia CEO Jensen Huang emphasized the importance of acquiring the company’s fastest chips during his unscripted two-hour keynote at the GTC conference. Huang addressed cost concerns by asserting that faster chips, which can be digitally sliced to serve AI to millions simultaneously, are the best cost-reduction system. He explained the economics of these chips, highlighting their potential to increase data center revenue by 50 times compared to previous systems. Nvidia’s Blackwell Ultra systems, set to launch this year, have already seen 3.6 million purchases by major cloud providers. Huang also announced a roadmap for future AI chips, Rubin Next and Feynman, planned for 2027 and 2028. He dismissed the competition from custom chips, noting their lack of flexibility for AI algorithms. Huang emphasized the importance of using Nvidia’s latest systems for upcoming AI infrastructure projects.

Synopsys Unveils AgentEngineer Technology For Chip Design

Reuters (3/19) reports that Synopsys introduced AgentEngineer, a new technology aimed at streamlining the design of computer chips by utilizing AI “agents” to assist human engineers. Synopsys CEO Sassine Ghazi highlighted the increasing complexity and pace of designing AI server systems, which involve thousands of chips, as a challenge for engineering teams. At the company’s annual user conference in Santa Clara, California, Ghazi noted the pressure on engineers due to the complexity and speed required for product delivery. AgentEngineer will initially focus on tasks like testing circuit designs. Shankar Krishnamoorthy, head of technology and development at Synopsys, emphasized AI’s role in enhancing R&D capacity without expanding team sizes. Over time, Synopsys plans for these agents to help coordinate complex systems with multiple chips to ensure timely product delivery.

Open Power AI Consortium Aims To Improve Electric Power With AI

Fast Company (3/20, Sullivan) reports that Nvidia, Microsoft, AWS, Oracle, and more than two dozen regional power companies in the US have announced plans to collaborate on building AI models and apps aimed at improving the generation and distribution of electric power. The initiative, called the Open Power AI Consortium, is organized by the Electric Power Research Institute (EPRI). EPRI President and CEO Arshad Mansoor said in a statement that the consortium will create an AI model, datasets, and apps to “enhance grid reliability, optimize asset performance, and enable more efficient energy management.” Axios climate reporter Alex Freedman noted that the power demands of the so-called AI boom have become a top priority for energy company CEOs in the US.

dtau...@gmail.com

unread,
Mar 29, 2025, 8:55:57 AM3/29/25
to ai-b...@googlegroups.com

Gen AI Browser Assistant Extensions Beam Data to the Cloud

Computer scientists led by Yash Vekaria at the University of California, Davis, found that generative AI browser extensions generally harvest users' sensitive data and share it with their own servers and third-party trackers. In some cases, this violates the browser extensions' privacy commitments and U.S. regulations governing health and student data. The study of 10 generative AI Chrome extensions found that some collect sensitive information from Web forms or full document object models of pages visited by users.
[
» Read full article ]

The Register (U.K.); Thomas Claburn (March 25, 2025)

 

Encryption Breakthrough Lays Groundwork for Privacy-Preserving AI Models

A framework developed by researchers at New York University brings fully homomorphic encryption (FHE) to deep learning, allowing AI models to operate directly on encrypted data without needing to decrypt it first. Using the Orion framework, the researchers demonstrated the first-ever high-resolution FHE object detection using YOLO-v1, a deep learning model with 139 million parameters.
[
» Read full article ]

NYU Tandon School of Engineering (March 25, 2025)

 

Can We Make AI Less Power-Hungry? These Researchers Are Working on It

Researchers working with the ML Energy Initiative are trying to reduce AI power consumption without impacting performance. To alter the internal workings of AI models, the researchers leveraged techniques to reduce a model's parameters and optimization to reduce the amount of memory needed by the remaining parameters. To optimize how datacenters run AI models, they developed a software tool that can slow certain GPUs in a cluster to use less energy, while ensuring the GPUs finish processing workloads at the same time.
[ » Read full article ]

Ars Technica; Jacek Krywko (March 24, 2025)

 

AI Breakthrough Makes DNA Data Retrieval Faster, More Accurate

An AI tool developed by researchers at Technion – Israel Institute of Technology is 3,200 faster times and up to 40% more accurate in retrieving digital information stored in DNA compared to the best current methods. With the new DNAformer approach, 100MB of data can be processed in just 10 minutes, versus several days with current techniques. While the new tool is still too slow for the commercial market, the researchers believe they are moving in the right direction.
[ » Read full article ]

Tom's Hardware; Anton Shilov (March 23, 2025)

 

AlexNet Source Code Is Open Sourced

The Computer History Museum (CHM), in partnership with Google, has released the source code to AlexNet, an artificial neural network created to recognize the contents of photographic images. Developed in 2012 by then University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever and their faculty advisor, ACM A.M. Turing Award laureate Geoffrey Hinton, the source code is available as open source on CHM’s GitHub page.
[ » Read full article ]

IEEE Spectrum; Hansen Hsu (March 21, 2025)

 

AI-Driven Weather Prediction Breakthrough Reported

An AI model can replace the numerical solver step in the weather prediction process to generate faster and more accurate predictions than today's supercomputers, according to researchers at the University of Cambridge in the U.K. Aardvark Weather, trained on raw data from weather stations, satellites, weather balloons, ships, and planes, uses only 10% of the input data required by conventional systems.
[ » Read full article ]

The Guardian (U.K.); Rachel Hall; Ian Sample (March 20, 2025)

 

As AI Nurses Reshape Hospital Care, Human Nurses Push Back

Hospitals increasingly are using AI to perform tasks previously handled by nurses. The hospitals say AI helps nurses work more efficiently while addressing burnout and understaffing, but nurses argue the technology is overriding their expertise and degrading care quality. National Nurses United, the largest nursing union in the U.S., is pushing for greater input into how AI can be used, and protection from discipline if nurses decide to disregard automated advice.
[ » Read full article ]

Associated Press; Matthew Perrone (March 16, 2025)

 

Tech Chiefs, Foreign Leaders Urge U.S. to Rethink AI Chip Curbs

With less than two months to comply with the U.S. framework for controlling AI development worldwide, tech companies are expressing concerns about business in foreign markets, and U.S. allies are seeking exemptions. The "AI diffusion rule" will restrict the number of AI processors that can be exported to most nations and require datacenters to comply with U.S. security standards. Some officials have floated eliminating the three tiers of chip access and associated compute caps, while maintaining export license requirements for most countries.


[
» Read full article *May Require Paid Registration ]

Bloomberg; Mackenzie Hawkins; Jenny Leonard; Brody Ford (March 25, 2025); et al.

 

MEPs Warn EU Against Weakening Landmark AI Rules

Members of the European Parliament (MEPs) instrumental in drafting EU's Artificial Intelligence Act have expressed concerns as EU officials consider whether to ease requirements for AI companies. Officials are weighing whether to make certain provisions of the Act voluntary. The code, being drafted by a panel of experts including ACM A.M. Turing Award laureate Yoshua Bengio, is expected to be finalized in May.


[
» Read full article *May Require Paid Registration ]

Computing; Vikki Davies (March 26, 2025)

 

U.S. Adds Export Restrictions to More Chinese Tech Firms over Security Concerns

The Trump administration added 80 companies and organizations on March 25 to a list of those prohibited from purchasing U.S. technology and other exports due to national security concerns. Among the 80 are 54 Chinese companies and organizations, including Nettrix Information Industry, which manufactures servers used to produce AI, and the Beijing Academy of Artificial Intelligence, which reportedly has attempted to acquire AI models and chips to bolster China's military modernization.


[
» Read full article *May Require Paid Registration ]

The New York Times; Ana Swanson (March 25, 2025)

 

Anthropic Scores Win in AI Copyright Dispute with Record Labels

A U.S. court on Tuesday denied an injunction sought by Universal Music Group and other record labels to prevent AI startup Anthropic from using their copyrighted lyrics to train its Claude chatbot. The music companies said Anthropic infringed copyrighted lyrics from at least 500 songs and sought to prohibit the company from using their works to train its AI models.