Groups keyboard shortcuts have been updated
Dismiss
See shortcuts

Dr. T's AI brief

12 views
Skip to first unread message

dtau...@gmail.com

unread,
Jun 1, 2024, 12:10:34 PM6/1/24
to ai-b...@googlegroups.com

Google DeepMind Unveils AI Model for Living Cells

Google DeepMind's AlphaFold 3 uses AI to provide a new view of living cells and the interactions among the different molecules within them. Developed in conjunction with Isomorphic Labs, DeepMind's drug discovery spin-off, the updated model could help scientists identify new drug molecules to treat cancer and other diseases.
[
» Read full article *May Require Paid Registration ]

Financial Times; Michael Peel; Ian Johnston (May 8, 2024)

 

OpenAI Releases 'Deepfake' Detector to Disinformation Researchers

OpenAI is allowing a small group of disinformation researchers to test a new deepfake detector tool in the hope they can help identify ways to improve it. The company said the tool can detect 98.8% of images created by OpenAI's DALL-E 3 image generator. OpenAI also is working to address the problem of deepfakes by joining a steering committee for the Coalition for Content Provenance and Authenticity, which is working to develop credentials for digital content.
[
» Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Tiffany Hsu (May 7, 2024)

 

Microsoft Creates Top Secret Generative AI Service for U.S. Spies

Microsoft has rolled out a generative AI platform that operates without an Internet connection, which U.S. intelligence agencies can use to analyze top secret information. The large language model is based on GPT-4 and operates in an "air-gapped" environment in the cloud. The model can read files but is unable to learn from them or from the open Internet.
[
» Read full article *May Require Paid Registration ]

Bloomberg; Katrina Manson (May 7, 2024)

 

AR Slims Down with AI, Holograms

An AR display developed by researchers at Stanford University, the University of Hong Kong, and Nvidia combines 3D holograms, AI, and optical metasurfaces which they say is as comfortable to wear as ordinary eyeglasses. The researchers used AI to optimize the metasurface structure, transform 3D images into high-quality holograms, and calibrate the optics, electronics, and lasers. Said Stanford's Gordon Wetzstein, "Our AI display is thinner than current AR displays and, importantly, it shows 3D images to each eye."
[
» Read full article ]

IEEE Spectrum; Charles Q. Choi (May 8, 2024)

 

New Diplomatic Strategy Emerges as AI Grows

U.S. and Chinese diplomats plan to meet later this month to begin what in essence would be the first arms control talks over the use of AI. The talks in Geneva are an attempt to find some common ground on how AI will be used and in which situations it could be prohibited. For the U.S., the conversation represents the first major foray into a new realm of diplomacy.
[ » Read full article *May Require Paid Registration ]

The New York Times; David E. Sanger (May 7, 2024)

 

'Video Games' Shed Light on How Flies Fly

California Institute of Technology researchers developed a neural network that predicts the wing motion of fruit flies based on muscle activity, in order to better understand the wing's complex hinge structure. This involved creating a "video game" for the flies, surrounding them with LED displays that simulated environmental cues and prompted them to change their flight patterns and speeds. The researchers collected terabytes of data on 72,000 wingbeats.
[ » Read full article ]

IEEE Spectrum; Gwendolyn Rak (May 5, 2024)

 

Fei-Fei Li Building 'Spatial Intelligence' Startup

The "godmother of AI," ACM Fellow Fei-Fei Li, is building a startup that uses human-like processing of visual data to make AI capable of advanced reasoning, according to multiple sources. In describing the startup, one source pointed to a talk Li gave at the TED conference in Vancouver last month, in which she spoke about algorithms that could plausibly extrapolate what images and text would look like in 3D environments and act upon those predictions, using "spatial intelligence."
[ » Read full article ]

Reuters; Katie Paul; Anna Tong; Krystal Hu (May 3, 2024)

 

AI Helps Automate Gruesome Jobs

U.S. meat processors increasingly are automating the processing of animal carcasses using recent advancements in AI, machine learning, and computer vision. Robots debone chicken more efficiently than the average worker, but it is has been more difficult to automate pork and beef butchery, given the size of the animals. Said Cargill's Hans Kabat, "You have to have sensors … on robots to be able to sense where things need to get done and then to actually move the product."
[ » Read full article *May Require Paid Registration ]

Bloomberg; Jr., Gerson Freitas; Isis Almeida (May 1, 2024)

 

AI Lobbying Frenzy in Washington Dominated by Big Tech

A report from nonprofit OpenSecrets revealed an almost threefold increase in the number of organizations lobbying the U.S. government on AI from 158 in 2022 to 451 in 2023. Among the 334 organizations that lobbied on AI for the first time last year were startups like OpenAI, big corporations like Visa and GSK, industry trade associations, and numerous civil society organizations. Meanwhile, OpenSecrets found that Amazon, Meta, Alphabet, and Microsoft each spent more than $10 million on lobbying.
[ » Read full article ]

Time; Will Henshall (April 30, 2024)

 

Georgetown Research: China Leads US In AI Research

Axios Share to FacebookShare to Twitter (5/3) reports that Georgetown University’s Center for Security and Emerging Technology has found that China is leading the U.S. in AI research, notably in fields like computer vision and robotics, despite the U.S. producing more highly cited papers. The study highlights significant growth in global AI research from 2017 to 2022, with notable advances in natural language processing and AI safety, though the latter remains underfunded. Zachary Arnold from CSET emphasized the rapid expansion and diversity of AI research, underscoring the urgent need for federal investment in scientific validation to ensure AI’s safety and reliability. The Chinese Academy of Sciences leads in the number of AI research papers published and in high-quality research, particularly in computer vision.

 

Big Tech Companies Race To Roll Out AI Despite Burnout Concerns

CNBC Share to FacebookShare to Twitter (5/3, Field) reports AI engineers at Amazon, Google, Microsoft, and other top tech firms feel immense pressure and burnout amid the rush to develop and launch generative AI tools. An anonymous Amazon AI engineer said thousands of code lines get written with “zero testing for mistakes” to meet tight deadlines, often for projects that get “deprioritized.” A Google staffer described similar accelerated timelines driven by competitive fears. Some workers raised concerns about oversights on real-world impacts in the pursuit of speed. An Amazon spokesperson said, “It’s inaccurate and misleading to use a single employee’s anecdote to characterize the experience of all Amazon employees working in AI.”

 

Researchers Plan To Investigate Benefits, Risks Of Generative AI In Scholarly Publishing

Inside Higher Ed Share to FacebookShare to Twitter (5/6, Palmer) reports, “The rapid rise of generative artificial intelligence (AI) has confronted the scholarly publishing world with the potential risks and benefits of using the new technology in the production of academic research and writing.” Ithaka S+R, an education research firm, “launched a new study last month to gain more insight into the implications of generative AI on scholarly publishing,” and researchers over the next several months will interview about 15 “decision makers from the publishing sector, and others with subject expertise, on generative AI’s opportunities and risks,” according to a blog post about the project. Ithaka’s inquiries “come as the entire higher education sector is grappling with how to approach generative AI.” As part of the new study, the team has “several critical questions they want to investigate about generative AI and scholarly publishing,” such as, “What are the most pressing ethical and market challenges around the tools?”

 

People Turning To AI As They Grieve

CNN Share to FacebookShare to Twitter (5/6, Kelly) reports that as part of a broader trend where individuals use AI to maintain connections with deceased loved ones, Amazon is developing an update to its Alexa system to allow the technology to mimic any voice, including that of deceased family members. Amazon Senior Vice President Rohit Prasad highlighted at the annual re:MARS conference in June 2022 that the updated system could generate a voice from less than a minute of audio, making it easier to preserve memories of loved ones. While these technologies offer comfort, they also raise ethical concerns and questions about their impact on the grieving process.

 

How College Faculty Are Embedding AI Into Course Offerings

Inside Higher Ed Share to FacebookShare to Twitter (5/7, Mowreader) reports that “to better prepare students for their careers after college, faculty members and campus leaders are prioritizing education on generative artificial intelligence tools and features.” Inside Higher Ed’s most recent survey “of chief academic officers and provosts found 14 percent have reviewed the curriculum to ensure it will prepare students for AI in the workforce and an additional 73 percent plan to do so.” Among the possibilities of “how generative AI can improve learning for students, faculty members have embedded AI in student supports, as course topics and as research tools.” One of the “most common ways faculty are utilizing AI is to enhance current learning outcomes.” Other faculty are engaging with technology “directly in the classroom, teaching students how to hone and develop their own AI tools and projects.”

 

AI Speeds Up Humanoid Robot Development In China

CNBC Share to FacebookShare to Twitter (5/8, Cheng) reports that advancements in generative AI are expediting humanoid robot production in China. Companies like LimX Dynamics aim to deploy robots in factories and households sooner than expected, reducing LimX’s timeline from 10 years to potentially five due to AI’s impact on R&D. Large tech firms and investors are recognizing these opportunities, propelling further development and applications in various sectors.

 

OpenAI Executive Predicts Advances In AI In Near Future

Insider Share to FacebookShare to Twitter (5/6, Mok) reports that OpenAI COO Brad Lightcap, shared insights at the Milken Institute Global Conference on the rapid evolution expected in AI technology. Lightcap predicted a significant enhancement in the capabilities of large language models within the next couple of years, suggesting that current generative AI technologies will seem “laughably bad” in comparison. He outlined a future where AI could serve as an integral system partner, capable of managing more complex tasks and serving as a reliable teammate or assistant.

 

White House Pilot Grants Healthcare Researchers Access To Advanced AI Systems

Politico Share to FacebookShare to Twitter reports the White House announced that a group of healthcare researchers will gain access to advanced artificial intelligence systems through a government pilot. This initiative, stemming from President Joe Biden’s executive order on AI, aims to democratize cutting-edge technology. Launched in January as a two-year pilot, the National AI Research Resource project is led by the National Science Foundation, involving partnerships with the National Institutes of Health and companies like OpenAI and Microsoft. Facilities involved in the program include several major universities and national laboratories.

 

New Robot Hand Enhances AI Training

New Scientist Share to FacebookShare to Twitter (5/9) reports that the UK’s Shadow Robot Company has developed a robotic hand that combines speed, flexibility, and durability, ideal for AI experiments in environments like Google DeepMind. This hand can fully close in 500 milliseconds, exert up to 10 newtons of force, and endure significant damage, essential for reinforcement learning where robots learn through trial and error. Its robust yet heavy design, featuring customizable fingers and in-built sensors for detailed tactile feedback, caters to complex robotics applications, though it might be costly compared to less sophisticated alternatives.

 

Google Looks To Help Scientists Better Understand Microscopic Mechanisms With New AI Tool

The New York Times Share to FacebookShare to Twitter (5/8, Metz) reports, “On Wednesday, Google DeepMind, the tech giant’s central artificial intelligence lab, and Isomorphic Labs, a sister company, unveiled a more powerful version of AlphaFold, an artificial intelligence technology that helps scientists understand the behavior of the microscopic mechanisms that drive the cells in the human body.” The new version, AlphaFold3, “extends the technology beyond protein folding. In addition to predicting the shapes of proteins, it can predict the behavior of other microscopic biological mechanisms, including DNA, where the body stores genetic information, and RNA, which transfers information from DNA to proteins.”

 

Higher Ed Leaders Discuss Academic Approaches To Evolving AI Opportunities

Inside Higher Ed Share to FacebookShare to Twitter (5/9, Coffey) reports, “Higher ed moving beyond initial artificial intelligence (AI) fears to focus on practical and specific opportunities for the technology was a recurring theme at the Digital Universities U.S. conference that concluded on Wednesday in St. Louis.” The conference, “co-hosted this week by Inside Higher Ed and Times Higher Education in collaboration with Washington University in St. Louis, brought together hundreds of college administrators and education technology company officials to explore the possibilities and challenges of digital transformation in higher ed.” Generative AI was a talking point “at many event sessions,” while university leaders also “touched on the importance of addressing the learning and emotional loss that came during and after the COVID-19 pandemic.”

 

TikTok To Begin Automatically Labeling AI-Generated Content

ABC News Share to FacebookShare to Twitter (5/9, Saliba) reports TikTok announced on Thursday that it will begin automatically labeling Artificial Intelligence-generated content when it is uploaded from certain platforms. TikTok will implement this capability by becoming the first video-sharing platform to implement Adobe’s “Content Credentials technology – an open technical standard providing publishers, creators, and consumers the ability to trace the origin of different types of media.” Adam Presser, Head of Operations & Trust and Safety at TikTok, told ABC News, “Our users and our creators are so excited about AI and what it can do for their creativity and their ability to connect with audiences. And at the same time, we want to make sure that people have that ability to understand what fact is and what is fiction.”

 

AI’s Impact Debated At CEO Summit

The Wall Street Journal Share to FacebookShare to Twitter (5/9, Subscription Publication) reports that the Wall Street Journal CEO Council Summit on Thursday discussed the dual impacts of artificial intelligence on industry transformation and potential risks. Aidan Gomez, CEO of Cohere, highlighted AI’s utility in sectors from consumer goods to healthcare, but expressed concerns about its misuse in activities like election manipulation. Meanwhile, John Cassidy of Kindred Capital warned against the risks of over-regulating AI, comparing the situation to the early phases of the space race. Contrarily, Andrew Balls from Pimco questioned AI’s long-term benefits to productivity, suggesting recent improvements may be temporary and not due to AI. Dale Whelehan from 4 Day Week Global also commented on the potential of AI to enable shorter workweeks, enhancing worker satisfaction and retention without impacting wages.

 

Legislators Reveal Bill To Ease Creation Of Export Controls On AI Models

Reuters Share to FacebookShare to Twitter (5/9) reports, “A bipartisan group of lawmakers unveiled a bill late Wednesday that would make it easier for the Biden administration to impose export controls on AI models, in a bid to safeguard the prized U.S. technology against foreign bad actors.” Reuters adds, “The bill, sponsored by House Republicans Michael McCaul, John Molenaar, Max Wise and Democrat Raja Krishnamoorthi, would also give the Commerce Department express authority to bar Americans from working with foreigners to develop AI systems that pose risks to U.S. national security.” The measure materializes as concerns “mount that U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons.”

 

New AI Vetting Checklist Details How Schools Can Protect Student Data

K-12 Dive Share to FacebookShare to Twitter (5/9, Merod) reports that a new artificial intelligence vetting checklist “from the nonprofit Future of Privacy Forum aims to help schools and districts ensure student data privacy is safeguarded as they navigate the creation of AI use policies for students and staff.” The checklist shares “how districts can ensure AI ed tech products comply with local, state and federal laws – similar to when vetting other ed tech offerings. Schools should also know how the AI tool will be used and ask service providers if the tool will require students’ personal information, and, if so, whether that use complies with existing law.” The Future of Privacy Forum said if the AI ed tech “does use student data, schools need to be prepared to explain to teachers, students and parents how so.” Additionally, schools should determine “if student data will be used to train the AI tool’s large language model. “

dtau...@gmail.com

unread,
Jun 2, 2024, 9:23:16 AM6/2/24
to ai-b...@googlegroups.com

Why Protesters Around the World Are Demanding a Pause on AI Development

Protesters who are part of the "Pause AI" movement took part in protests in 13 countries Monday, calling for government regulation of AI companies and a freeze on the development of new AI models until companies agree to thorough safety tests. Experts don’t understand the inner workings of AI systems like ChatGPT, and they worry that lack of knowledge could lead us to dramatically miscalculate how more powerful systems would act.
[ » Read full article ]

Time; Anna Gordon (May 13, 2024)

 

Illness Took Away Her Voice. AI Created a Replica She Carries in Her Phone

Alexis "Lexi" Bogan (pictured), whose speech remains impaired after a tumor near the back of her brain was removed, has regained her voice through a pilot version of OpenAI's Voice Engine at Rhode Island Hospital. Trained on a 15-second clip of her teenage voice, the AI reads aloud whatever she types into the phone app. Said Dr. Fatima Mirza, “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”
[ » Read full article ]

Associated Press; Matt O'Brien (May 13, 2024)

 

Manufacturing Optimized Designs for High Explosives

Lawrence Livermore National Laboratory researchers used AI and machine learning to develop computationally optimized designs for shaped explosive charges, which are used to manipulate metals, to control their hydrodynamic instabilities. The researchers conducted 14 high-explosive (HE) detonation experiments, with the results compared to a baseline design that did not include a buffer between the liner and the HE. Flash X-ray radiographs showed the silicone buffer reliably and consistently mitigated potential instabilities.
[ » Read full article ]

Lawrence Livermore National Laboratory; Shelby Conn (May 13, 2024)

 

UAE Releases AI Model to Compete with Big Tech

The Technology Innovation Institute, a government research center within Abu Dhabi's Advanced Technology Research Council, has released the Falcon 2 series of its open-source generative AI model. This includes Falcon 2 11B, a text-based model, and Falcon 2 11B VLM, a vision-to-language model able to generate a text description of an uploaded image.
[ » Read full article ]

Reuters; Alexander Cornwell (May 13, 2024)

 

Team in Japan Uses Supercomputer to Develop LLM

A team of researchers in Japan developed a large language model (LLM) using the Japanese supercomputer Fugaku jointly developed by Fujitsu and research institute Riken. Trained extensively on data in Japanese, the Fugaku-LLM model is expected to lead to research on generative AI tailored to domestic needs.
[ » Read full article ]

The Japan Times (May 11, 2024)

 

GhostStripe Attack Haunts Self-Driving Cars

Researchers in Singapore and the U.S. revealed an attack that exploits the complementary metal oxide semiconductor (CMOS) camera sensors in self-driving vehicles, preventing them from recognizing road signs. The GhostStripe attack uses LEDs to shine light patterns on road signs, which prevents the vehicles' machine-learning software from reading them. Rapidly flashing different colors onto the sign abuses the sensors' rolling digital shutters, distorting every frame captured by the cameras.
[ » Read full article ]

The Register (U.K.); Laura Dobberstein (May 10, 2024)

 

States Turn to AI to Spot Guns at Schools

Several states have introduced legislation to create grant programs for schools to support the installation and use of AI surveillance systems able to detect individuals carrying guns. Pending legislation in Kansas has raised eyebrows because it requires the AI software to be patented and in use in at least 30 states, among other criteria, and only ZeroEyes, the same company that touted the criteria to lawmakers, meets all the mandated specifications. On Friday, Missouri became the latest state to pass legislation geared toward ZeroEyes.
[ » Read full article ]

Associated Press; David A. Lieb; John Hanna (May 12, 2024)

 

Shadow Hand Withstands the Rigors of AI Research

The Shadow Hand, built by U.K.-based Shadow Robot for Google DeepMind, was designed to withstand the rigors of AI research. The three-fingered Hand includes easily swappable fingers and can withstand being struck by a hammer. The kinematics of each finger, containing 155 individual sensor channels and video from the distal tactile sensor, are similar to those of a human finger, with an ad-abduction joint at the base, and three flex/extend joints along its length.
[ » Read full article ]

The Engineer; Jason Ford (May 10, 2024)

 

Tiny Sample of Human Brain Reveals 57,000 Cells, 150 Million Neural Connections

An analysis of electron microscope images of more than 5,000 slices of a cubic-millimeter sample of human brain tissue by Harvard University and Google researchers found 57,000 individual cells, 150 million neural connections, and 23 centimeters of blood vessels. A machine-learning algorithm was used to map the paths of neurons and other cells through each section, a process that would have taken researchers years to complete. Harvard's Jeff Lichtman said, "We found many things in this dataset that are not in the textbooks."
[ » Read full article ]

The Guardian (U.K.); Ian Sample (May 9, 2024)

 

Will Chatbots Eat India’s IT Industry?

A paper last year by Alexander Copestake of the IMF and colleagues identified “near-exponential growth” in demand for AI-related skills in India’s service sector since 2016, yet there are concerns that generative AI technology could erode India's tech industry. Seven of India's IT companies collectively laid off 75,000 employees last year, equivalent to about 4% of their combined workforce. The companies say that reflects the broader slowdown in the tech sector.

[ » Read full article *May Require Paid Registration ]

The Economist (May 9, 2024)

 

Apple Plans To Release Updated Siri With GenAI

The New York Times Share to FacebookShare to Twitter (5/10, Mickle, Chen, Metz) reported that people familiar with the company’s work say Apple, at its June 10 developers conference, will release a “more conversational and versatile” Siri with underlying technology that includes a new generative AI system. Early last year, Apple executives realized “that new technology had leapfrogged Siri,” and they make generative AI “a tent pole project,” in the company’s “most significant reorganization in more than a decade.” People familiar with the thinking of Apple’s leadership say they are concerned that AI technology will become smartphones’ primary operating system, displacing iOS, and that an ecosystem of AI apps could undermine Apple’s App Store.

 

Politico Report: Lobbyists “Seem To Be Winning Over” Congress On AI

Politico Share to FacebookShare to Twitter reports lobbyists are now engaged in “an all-hands effort to block strict safety rules on advanced artificial intelligence and get lawmakers to worry about China instead – and so far, they seem to be winning over once-skeptical members of Congress.” Politico says their success “marks a change from months in which the AI debate was dominated by well-funded philanthropies warning about the long-term dangers of the technology.” Politico points out this “has already caused key lawmakers to back off some of their more worried rhetoric about the technology.”

 

California Partners With Firms To Explore AI

The AP Share to FacebookShare to Twitter (5/10) reports that California, under Governor Gavin Newsom’s initiative, is collaborating with five companies to develop generative AI tools aimed at improving public services like traffic management and tax guidance. Announced on Thursday, these partnerships involve major tech backers like Microsoft, Google, and Amazon. The state will conduct a six-month trial period for these tools, which are categorized as low risk. This effort is part of California’s broader goal to lead in AI technology, enhanced by a significant presence of AI firms and proactive state guidelines on AI usage.

 

Report: School District Tech Leaders Worry AI Use Could Increase Cyberattacks

Education Week Share to FacebookShare to Twitter (5/10, Langreo) reported that while teachers are “using artificial intelligence in all kinds of ways to help them do their jobs,” that expanding use “has school district tech leaders worried that it could prompt more cyberattacks against schools, concludes a new report.” The Consortium for School Networking’s annual State of EdTech District Leadership report, “released April 30, recognizes that AI has significant potential to improve education, but at the same time it poses huge cybersecurity risks for schools.” The report surveyed 981 district tech leaders, and found that “almost two-thirds (63 percent) of district tech leaders are ‘very’ or ‘extremely’ concerned that the emerging technology will enable new forms of cyberattacks.” The report also found that about half (49 percent) of district tech leaders are also “very” or “extremely” concerned “about the lack of teacher training for integrating AI into instruction.”

 

Opinion: How Educators Can Harness Generative AI’s Power In Classrooms

In an opinion piece for Inside Higher Ed Share to FacebookShare to Twitter (5/10), Ripsimé K. Bledsoe, a faculty member for the First-Year Experience program at Texas A&M University at San Antonio wrote, “The emergence of generative artificial intelligence (GenAI) presents an unprecedented opportunity.” GenAI invites “proactive engagement, offering a canvas for innovation and creativity. This shift is not merely technical but philosophical, challenging us to reimagine our roles not as consumers of technology but as creators and collaborators.” Educators must “take the helm of research and development, steering the integration of AI with intention and expertise.” Six areas around “which to focus action” include “leveraging the deep well of faculty expertise,” recognizing and “leveraging GenAI’s strengths and limitations,” and “integrating GenAI’s strengths into our curricula in small ways.”

 

OpenAI Launches Advanced AI Model GPT-4o

The Wall Street Journal Share to FacebookShare to Twitter (5/13, Subscription Publication) reports that OpenAI introduced an enhanced AI system named GPT-4o, which merges capabilities in text, image, and video processing with a real-time voice interaction feature. This new version facilitates interruptions during voice interactions, marking a significant improvement over existing voice assistants. During a demonstration, the model showcased its ability to translate languages, analyze code, and solve algebra problems instantly. GPT-4o will be available at half the price and double the speed of its predecessor, targeting both individual and corporate users. Moreover, the introduction coincides strategically a day before Google’s developer conference, highlighting the competitive landscape in the AI industry.

        TechCrunch Share to FacebookShare to Twitter (5/13, Wiggers) says, “GPT-4o greatly improves the experience in OpenAI’s AI-powered chatbot, ChatGPT. The platform has long offered a voice mode that transcribes the chatbot’s responses using a text-to-speech model, but GPT-4o supercharges this, allowing users to interact with ChatGPT more like an assistant.”

        OpenAI CEO Advocates For International AI Regulation. Insider Share to FacebookShare to Twitter (5/12, Varanasi) reports that OpenAI CEO Sam Altman has expressed a strong preference for establishing an international agency to oversee frontier AI systems due to their potential global impacts. In an interview on the All-In podcast, Altman highlighted the delicate balance necessary in regulatory measures to avoid stifling innovation while ensuring sufficient safety. He underscored the importance of flexibility in regulation, given the rapid advancement of AI technology. Altman’s call for oversight mirrors ongoing legal advancements, such as the EU’s Artificial Intelligence Act and related US initiatives, aimed at managing AI’s developments responsibly.

 

Report: Experts Expect AI To Reshape Student Experiences, Pedagogy

Inside Higher Ed Share to FacebookShare to Twitter reported that “artificial intelligence will reshape student experiences, pedagogy and how people communicate, according to dozens of higher ed and technology experts, sharing opinions” in a paper released Monday by Educause. The report, “which includes the opinions of 66 higher education and technology experts, also found AI will be used to address climate change, sustainability and politics.” Beyond AI, the respondents said they had “growing concerns about cybersecurity and privacy and pointed toward a growing digital divide with many low-income and rural students unable to access the Internet.”

 

US, Chinese Officials To Discuss AI Concerns On Tuesday

Bloomberg Share to FacebookShare to Twitter (5/13, Subscription Publication) reports that US officials “intend to highlight security concerns with China’s development of artificial intelligence when they meet representatives from that country to launch discussions over the emerging technology, according to senior administration officials.” Representatives from the US and China “will meet Tuesday in Geneva, according to the officials, who briefed reporters about the upcoming discussions on condition of anonymity. The meeting, which a senior official characterized as the first of its kind, initiates talks agreed to” last year by President Biden and his Chinese counterpart Xi Jinping to “address security and safety concerns over AI even as the two countries intensify their competition.”

        The Washington Post Share to FacebookShare to Twitter (5/13, Dou) reports an Administration official said Seth Center, the State Department deputy envoy for critical and emerging technology, and Tarun Chhabra, senior director for technology and national security at the National Security Council, will lead the US delegation. China will be “represented by officials from the Foreign Ministry and the National Development and Reform Commission, the nation’s central economic planning agency.” The AP Share to FacebookShare to Twitter (5/13) reports that China’s official Xinhua news agency, “citing the Foreign Ministry, said that the two sides would take up issues including the technological risks of AI and global governance.” Washington also “sees efforts undertaken on AI by China as possibly undermining the national security of the United States and its allies, and Washington has been vying to stay ahead of Beijing on the use of AI in weapons systems.”

 

Amazon, Microsoft Could Compete With AI Startups They Back

TIME Share to FacebookShare to Twitter (5/13) reports Amazon and Microsoft are intensifying their efforts in the AI sector, previously dominated by companies like Google and Meta. Microsoft has heavily invested in OpenAI, providing computational support for AI development in exchange for exclusive access to new models. Similarly, Amazon has entered into a significant agreement with Anthropic, investing billions to integrate Anthropic’s AI models into its cloud services. Both tech giants reported substantial benefits from these partnerships in their recent earnings, with a notable increase in revenue attributed to AI advancements. Meanwhile, Amazon reportedly “has tasked its AGI team with building a model that outperforms Anthopic’s most capable AI model, Claude 3, by the middle of this year.” These strategic moves suggest a shift towards developing proprietary AI technologies, potentially altering the competitive landscape and raising concerns about market power concentration and the safe development of AI technologies.

        AI Competition Heats Up With New Developments. Axios Share to FacebookShare to Twitter reports on the intensifying competition in the AI sector as major tech companies prepare to unveil new AI-driven features and strategies. Google plans to demonstrate its advancements in generative AI at its I/O developers’ conference, focusing on search improvements and hardware integration. Microsoft is set to highlight its AI enhancements for Bing and other products at the upcoming Build conference in Seattle. Apple, meanwhile, anticipates revealing its generative AI strategy at the Worldwide Developers Conference in June, potentially updating Siri’s capabilities. Amidst these developments, Amazon continues its focus on delivering goods and services, leveraging AI to enhance user experiences. This series of tech conferences will likely shape the AI landscape and influence future technology developments.

 

Meta Uses Public Available Instagram, Facebook Content For AI Training

Insider Share to FacebookShare to Twitter (5/11, Mann) reports that Meta is leveraging publicly available photos on Instagram and Facebook to advance its AI, as stated by Chief Product Officer Chris Cox at Bloomberg’s Tech Summit. Meta restricts its data to public posts, avoiding private user communications for training its AI models. This practice fuels their AI capability, notably their text-to-image model Emu, which generates high-quality images from prompts.

 

Economist Warns of Risks From Impending AI Revolution

CNN Share to FacebookShare to Twitter (5/13, Goodkind) interviews economist and former dean of Columbia Business School Glenn Hubbard who warns AI disruption will likely have a significant impact on the future US economy. Hubbard expresses concerns that the US isn’t prepared for the challenges that come with AI implementation and it could stifle economic growth. Hubbard highlights the need for government intervention and public policy to handle the disruption and prepare workers for changing job demands.

 

Intel Enhances AI Education And Workplace Integration

The AP Share to FacebookShare to Twitter (5/13, Grantham-Philips) reports that Intel is expanding its efforts in artificial intelligence (AI) education and workforce integration, according to Chief People Officer Christy Pambianchi. Intel’s “Digital Readiness” program collaborates internationally to spread AI awareness, and their “AI for Workforce” involves creating extensive AI educational content for U.S. community colleges. This initiative looks to embed AI skills into the curriculum and provide industry-recognized qualifications that can support employment. Pambianchi emphasized the importance of responsible AI usage and maintaining a human-centered approach in its application.

 

Newark Public Schools Considers AI Tutoring Tool Expansion

Chalkbeat Share to FacebookShare to Twitter (5/13, Gomez) reports Newark Public Schools is looking to expand the use of Khanmigo, “an AI program developed by online learning giant Khan Academy,” across the district following a successful pilot. The chatbot acts as a tutor for students and assists teachers with tasks such as planning lessons and assessing student performance. District officials and Superintendent Roger León “confirmed they are looking to expand the use of the program districtwide.” Khanmigo, “powered by ChatGPT technology, includes features meant to help students work through math and science problems, analyze text, navigate college admissions, and revise essays, among other features.”

 

Alphabet Unveils AI Innovations At Developer Conference

Reuters Share to FacebookShare to Twitter reports Google parent company Alphabet has unveiled a number of new artificial intelligence (AI) solutions as part of its efforts to compete with rivals in the AI space. At a developer on Tuesday, Google showcased “an addition to its family of Gemini 1.5 AI models known as Flash that is faster and cheaper to run; a prototype called Project Astra, which can talk to users about anything captured on their smartphone camera in real time; and search results categorized under AI-generated headlines.” Google also provided details on its efforts to power its AI solutions with new computing chips and overhaul its search engine at the event.

        Pro-Palestinian Protesters Disrupt Google I/O Conference. The Guardian (UK) Share to FacebookShare to Twitter (5/14) reports that hundreds of pro-Palestinian protesters chained themselves together at the entrance of Google’s annual developer conference on Tuesday, protesting the company’s involvement in Project Nimbus, a $1.2 billion AI and cloud computing project supported by Google and Amazon for the Israeli government. Groups including the No Tech for Genocide coalition demanded Google halt its military contracts. Despite the protest, the event started on time with attendees redirected to another entrance. Former Google employee Ariel Koren accused Google of enabling “history’s first AI-powered genocide” and urged employees to oppose the company’s military involvement.

 

Report: School District Leaders Grappling With Generative AI, Other Tech Challenges

Education Week Share to FacebookShare to Twitter (5/14, Solis) reports that while generative artificial intelligence “has taken up a lot of space in the minds of K-12 district technology leaders over the past two school years,” according to a report released by the Consortium for School Networking last month, “it is not the No. 1 priority for school district technology leaders.” Other priorities for district leaders include cybersecurity, which “continues to rank as the No. 1 priority for district tech leaders, according to the CoSN report, which surveyed 981 district tech leaders between Jan. 10 and Feb. 29.” Data privacy and security “ranked as the No. 2 priority for district tech leaders this year, one spot higher than in 2023.” Also on the list were staffing shortages in the technology department, professional AI training, and budget deficits.

 

Bipartisan “Senate AI Gang” Issues Report With Recommendations For AI Regulation

The Washington Post Share to FacebookShare to Twitter (5/15, Zakrzewski) says that on Wednesday, the Senate AI Gang – a bipartisan group of four senators, including Majority Leader Schumer – “unveiled the fruits” of their nearly year-and-a-half-long effort “to address the urgent threats posed by artificial intelligence,” releasing “a sprawling 31-page road map that calls for billions of new funding in AI research as the ‘deepest’ AI legislative document to date.” Bloomberg Share to FacebookShare to Twitter (5/15, Seddiq, Dennis, Subscription Publication) explains the “blueprint culminates more than a year’s worth of activity on Capitol Hill to familiarize senators with AI, a first step toward eventually writing legislation governing the rapidly evolving technology. The senators last year held a series of closed-door forums featuring labor and tech industry leaders, including OpenAI’s Sam Altman, Google’s Sundar Pichai, Meta’s Mark Zuckerberg and Elon Musk of Tesla Inc., to examine AI’s vast implications for everything from national security to jobs to individual privacy.”

        The AP Share to FacebookShare to Twitter (5/15, Jalonick, O'Brien) says the report “recommends...that Congress draft emergency spending legislation to boost U.S. investments in artificial intelligence, including new research and development and new testing standards to try to understand the potential harms of the technology. The group also recommended new requirements for transparency as artificial intelligence products are rolled out and that studies be conducted into the potential impact of AI on jobs and the U.S. workforce.” The New York Times Share to FacebookShare to Twitter (5/15, McCabe, Kang) says that the report also “recommended creating a federal data privacy law and said they supported legislation...that would prevent the use of realistic misleading technology known as deepfakes in election campaigns. But they said congressional committees and agencies should come up with regulations on A.I., including protections against health and financial discrimination, the elimination of jobs, and copyright violations caused by the technology.”

        Axios Share to FacebookShare to Twitter says Schumer “told reporters...he plans to meet with House Speaker Mike Johnson to start working together. ... Schumer said he does not plan to pursue a big AI legislative package, but rather individual bills as they gain momentum.”

 

University Of Nevada, Reno Students Compete Against ChatGPT To Improve Writing Skills

Inside Higher Ed Share to FacebookShare to Twitter (5/15, Coffey) reports “amid the swirl of concern about generative artificial intelligence in the classroom,” the University of Nevada at Reno is having students “compete against ChatGPT in writing assignments.” Students in two courses “are going head-to-head with ChatGPT by answering the same prompts as the AI and aiming to get a higher grade.” Two professors “began discussing in the summer of 2023 how to harness the then-new ChatGPT,” and settled on “a mashup of gamification, analysis and competition in two courses for education majors – Second Language Acquisition and a course focused on teaching methods for English learners. In the resulting assignments, students must complete a writing prompt and try to earn a higher grade than ChatGPT, which answers the same prompt.” Introducing ChatGPT in “such a large role in the classroom is still relatively new.”

 

OpenAI Co-Founder Ilya Sutskever Exits Company

The AP Share to FacebookShare to Twitter (5/15) reports that Ilya Sutskever, co-founder and chief scientist of OpenAI, announced his departure from the company on social media platform X on Tuesday. Sutskever, who played a pivotal role in developing the AI behind ChatGPT, did not disclose specifics about his future projects beyond their personal significance to him. He was involved in a notable incident last year where he initially voted to remove CEO Sam Altman, only to reinstate him shortly after. Sutskever will be succeeded by Jakub Pachocki.

 

Other States Monitoring Colorado, Connecticut Efforts To Regulate AI

Politico Share to FacebookShare to Twitter (5/15) reports, “Colorado and Connecticut this year launched the country’s most ambitious plays to become national models for regulating artificial intelligence.” Each of those states’ “bills took aim at the companies that develop and use AI systems, and would have prohibited them from causing discrimination in crucial services like health care, employment and housing.” But this month, “Connecticut’s effort” failed and the tech lobby is now “pushing Colorado’s governor to spike his own party’s bill, arguing that a state-by-state approach to regulating the technology is misguided.” Across the country, lawmakers “are watching to see whether Colorado’s bill, modeled on Connecticut’s, will withstand a tidal wave of pressure from industry.”

 

News Media Alliance Warns About Impact Of Google’s AI-Powered Search Engine.

The New York Post Share to FacebookShare to Twitter (5/16, Barrabi) reports the News Media Alliance – a nonprofit that represents more than 2,200 publishers – is warning that Google’s “controversial move to introduce AI-generated summaries to its search engine could cause ‘catastrophic’ damage to cash-strapped publishers and content creators who rely on the traffic to generate crucial revenue.” The Post explains the “AI Overviews” feature, which Google unveiled at its annual I/O conference this week, “will automatically generate answers to complex user queries such as ‘how to fix my toilet,’ with search results effectively demoting links to other web sites.” However, Alliance CEO Danielle Coffey “described Google’s plans as a ‘perverse twist on innovation’ that will be ‘catastrophic to our traffic.’”

 

California Lawmakers Advance AI Bills

Bloomberg Law Share to FacebookShare to Twitter (5/16, Subscription Publication) reports major bills “that would place guardrails on the potential harm caused by artificial intelligence passed a key procedural test in the California Legislature on Thursday, but only after some were trimmed to save the state money.” The bills “survived the suspense hearing process, where bills that have significant fiscal cost to the state can get killed swiftly and without debate by the Assembly and Senate appropriations committees. The most ambitious AI-related bills – several that would be among the most significant anywhere in the nation – were left largely intact.”

dtau...@gmail.com

unread,
Jun 3, 2024, 8:50:51 AM6/3/24
to ai-b...@googlegroups.com

More Than Half of ChatGPT Answers to Programming Questions Are Wrong

Purdue University researchers found 52% of the answers generated by ChatGPT to programming questions were incorrect. Of 517 questions in Stack Overflow included in the study, the researchers found 77% were more verbose and 78% exhibited different degrees of inconsistency compared to human answers. Meanwhile, a linguistic analysis of 2,000 randomly selected ChatGPT answers concluded they portrayed "less negative sentiment" in a "more formal and analytical" fashion. The researchers found ChatGPT's "polite language, articulated and text-book style answers, and comprehensiveness" contributed to some participants overlooking misinformation in its responses.
[ » Read full article ]

Yahoo! News; Sharon Adarlo (May 23, 2024)

 

China's Latest AI Chatbot Trained on President's Political Ideology

The China Institute of Cybersecurity Affairs announced that its latest AI chatbot, launched for internal use, was trained on seven databases, including one focused on President Xi Jinping's doctrine. The Xi Jinping Thought database is comprised of 14 principles, including ensuring the absolute power of the Chinese Communist Party, strengthening national security and socialist values, and improving people's livelihoods and well-being.
[ » Read full article ]

Associated Press (May 24, 2024)

 

States Move to Regulate AI

Colorado Gov. Jared Polis (pictured) signed legislation intended to prevent discrimination by AI systems in hiring, housing, and medical decisions. Companies using AI systems to make such decisions must assess the systems for potential bias on an annual basis. They also must establish an oversight program, inform the state attorney general if discrimination is found, and notify customers AI was used in the decision-making process and provide an option for appeal.
[ » Read full article ]

Associated Press; Jesse Bedayn (May 23, 2024)

 

Global AI Summit Secures Safety Commitments from Companies

At a global AI summit on May 21, 16 companies committed to prioritizing AI safety. The companies, which included Meta, Microsoft, OpenAI, Amazon, Samsung Electronics, and firms in China, South Korea, and the UAE, pledged to publish safety frameworks for measuring AI risks, avoid models when risks cannot be mitigated sufficiently, and ensure governance and transparency.
[ » Read full article ]

Reuters; Joyce Lee (May 21, 2024)

 

Can AI Make the PC Cool Again?

Microsoft has unveiled personal computers that can run AI systems. Copilot+ PC will be included in Microsoft Surface laptops and high-end laptops from other manufacturers that run on the Windows operating system. For the last two decades, the demand for the fastest laptops has diminished because software moved into cloud computing centers. Microsoft will run the AI systems directly on a personal computer to eliminate the lag time and costs related to running large AI models in datacenters.

[ » Read full article *May Require Paid Registration ]

The New York Times; Karen Weise; Brian X. Chen (May 20, 2024)

 

World Is Ill-Prepared for Breakthroughs in AI, Say Experts

A group of 25 AI experts have published a paper stating that the world is not prepared for AI breakthroughs and that governments must do more to regulate the technology. The experts, who include ACM A. M. Turing Award laureates Geoffrey Hinton and Yoshua Bengio, call for government safety regimes that trigger regulatory action when certain ability levels are reached by AI systems.
[ » Read full article ]

The Guardian (U.K.); Dan Milmo (May 20, 2024)

 

Chatbot Instructs Robots to Help with Surgery

A virtual assistant developed by researchers at Canada's University of Toronto could allow surgeons to instruct surgical robots to perform small tasks by inputting simple text prompts into an AI chatbot. The SuFIA virtual assistant can translate those prompts into commands for a surgical robot using OpenAI's GPT-4 large language model. It breaks down the surgeon's request into a sequence of smaller subtasks, triggering software to run in a surgical robot or camera.

[ » Read full article *May Require Paid Registration ]

New Scientist; Alex Wilkins (May 16, 2024)

 

Orangutans' Distinct Yells Decoded with Help from AI

Cornell University researchers used AI to decode the long call vocalizations of orangutans. The researchers used machine learning to analyze video and audio recordings of long calls from 13 orangutans, with the goal of determining how many pulse types they could find in the vocalizations, their distinguishing features, and their gradations. Cornell's Wendy Erb said, "Through a combination of supervised and unsupervised analytical methods, we identified three distinct pulse types that were well differentiated by both humans and machines."
[ » Read full article ]

Popular Science; Laura Baisas (May 14, 2024)

 

Open AI Dissolves Team Focused On AI Risks

CNBC Share to FacebookShare to Twitter (5/17, Field) reports OpenAI has dissolved its Superalignment team, which was focused on the long-term risks of AI, despite being announced just last year with a four-year commitment. The team’s disbandment follows the resignations of team leaders Ilya Sutskever and Jan Leike, the latter expressing concerns that OpenAI has prioritized product development over safety. This shift occurs amid broader leadership turbulence, including a crisis last November involving CEO Sam Altman’s temporary ousting. The news was first reported by Wired.

        Other coverage includes Insider Share to FacebookShare to Twitter (5/17, Altchek), Gizmodo Share to FacebookShare to Twitter (5/17), Bloomberg Share to FacebookShare to Twitter (5/17, Subscription Publication), Wired Share to FacebookShare to Twitter (5/17, Nast), TechCrunch Share to FacebookShare to Twitter (5/16, Wiggers), and Engadget Share to FacebookShare to Twitter (5/17).

 

Microsoft To Unveil AMD AI Chips At Build Conference

Reuters Share to FacebookShare to Twitter reports that Microsoft announced plans to offer a platform entailing AMD artificial intelligence chips through its Azure cloud computing service, offering an alternative to Nvidia GPUs. At its upcoming Build developer conference, Microsoft will also preview the new Cobalt 100 processors, which Snowflake as well as others have started to utilize.

        Other coverage includes TechZine Share to FacebookShare to Twitter (5/17).

 

Study Suggests AI Approaches Human Accuracy In Grading Essays

Jill Barshay writes in her column for The Hechinger Report Share to FacebookShare to Twitter (5/20) that research by Tamara Tate, a researcher at UC Irvine, found that generative AI “is approaching the accuracy of a human in scoring essays,” although Tate warned that “ChatGPT isn’t yet accurate enough to be used on a high-stakes test or on an essay that would affect a final grade in a class.” Tate still “expects ChatGPT’s grading accuracy to improve rapidly as new versions are released,” but she has noted observing “students’ incremental progress and common mistakes remain important for deciding what to teach next.” Additionally, researchers are unsure “whether student writing improves after having an essay graded by ChatGPT.”

 

Departing OpenAI Executive Says Safety Being Deprioritized

The New York Times Share to FacebookShare to Twitter (5/20) reports on the resignations of OpenAI executives Ilya Sutskever and Jan Leike, saying that comments by the latter have brought up questions regarding whether OpenAI is too loose when it comes to safety. On X, Leike last week remarked, “Safety culture and processes have taken a backseat to shiny products.”

        OpenAI Departing Staff Reportedly Had To Choose Between Equity, Being Able To Speak Openly Against Firm. Engadget Share to FacebookShare to Twitter (5/19) reports OpenAI reportedly required departing staff to select from either retaining their vested equity or having the ability to speak openly against the firm. Vox, which saw the document at issue, said that in the event personnel failed to ink a nondisclosure and non-disparagement agreement, they could “lose all vested equity they earned during their time at the company, which is likely worth millions of dollars,” with this being attributable to a provision featured in off-boarding documents. OpenAI Sam Altman confirmed via tweet that there is a provision of this sort, though remarked, “we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement).”

        OpenAI Removes ChatGPT Voice Resembling Scarlett Johansson. Bloomberg Share to FacebookShare to Twitter (5/20, Subscription Publication) reports OpenAI has removed the “Sky” voice option from ChatGPT after feedback that it closely resembled actress Scarlett Johansson. Users attempting to select Sky were redirected to a different voice named “Juniper.” OpenAI clarified in a blog post that the voice was provided by an actress and was not intended to mimic Johansson. This change follows the recent introduction of GPT-4o, which allows ChatGPT to respond to verbal queries with audio replies.

        Google CEO Suspects OpenAI Violated YouTube’s Terms. Insider Share to FacebookShare to Twitter (5/21, Tan) reports that Google CEO Sundar Pichai suspects OpenAI may have breached YouTube’s terms in training its Sora model, as discussed in a “The Verge” interview. Although the specifics remain unclear, YouTube is investigating the matter. OpenAI’s CTO remained ambiguous about the data sources utilized for Sora, despite earlier claiming usage of public and licensed data. Concurrently, YouTube’s CEO pointed out that using YouTube content without permission distinctly violates their terms.

 

Microsoft Touts AI At Build Conference

CNBC Share to FacebookShare to Twitter (5/19, Partsinevelos, Haselton) reports Microsoft’s Build conference commences on Tuesday, providing the firm with a chance to exhibit its newest AI undertakings. Microsoft CEO Satya Nadella during January asserted that this year is going be the one in which AI is going to emerge as the “first-class part of every PC,” with CNBC adding that PC users are now going to find out more regarding how AI is going to be implanted in Windows as well as what they’re able to do with it using fresh AI PCs.

 

Snap Follows Meta’s Lead In AI Spending

Reuters Share to FacebookShare to Twitter reports Snap CEO Evan Spiegel is following Meta Platforms CEO Mark Zuckerberg’s strategy by investing heavily in AI and courting advertisers. Snap’s revenue rose 21% year-over-year to $1.2 billion in Q1 2023, but it struggles to match Meta’s spending power. Snap expects to spend $1.5 billion on infrastructure and $1.3 billion on R&D in 2024. Meta’s free cash flow hit $12.5 billion in Q1, significantly higher than Snap’s $38 million. Snap’s shares surged 69% over the past year, yet Spiegel admits the company fell “behind the curve” on machine learning.

 

CIA Technologist Outlines “Cautious Embrace” Of Generative AI for Intelligence

The AP Share to FacebookShare to Twitter (5/20, Bajak) reports that the Central Intelligence Agency (CIA) is now adding generative AI to its traditional basic machine learning and algorithm tools. According to CIA Director William Burns, this technology will be used to augment humans, not replace them, and Nand Mulchandani, the agency’s first chief technology officer, is “marshaling the tools” with “considerable urgency: Adversaries are already spreading AI-generated deepfakes aimed at undermining U.S. interests.” Mulchandani believes that generative AI can enhance brainstorming and elevate productivity and insight while asserting that it will not replace human analysts. He also reiterated the importance of integrating more AI technologies and enlarging the capabilities of AI applications and systems.

 

EU Council Approves AI Regulations

TechCrunch Share to FacebookShare to Twitter (5/21, Lomas) reports the Council of the European Union confirmed the approval of the EU AI Act. The Council said the law is “ground-breaking” and that, “as the first of its kind in the world, it can set a global standard for AI regulation.” The law adopts a risk-based approach to regulating uses of AI. It bans “unacceptable risk” use-cases such as cognitive behavioral manipulation or social scoring, and also defines a set of “high risk” uses, such as biometrics, facial recognition, and use in areas such as education and employment. The law also establishes a new governance architecture for AI, including an enforcement body within the European Commission called the AI Office.

 

Critics Say Schumer’s Roadmap For AI Regulation Falls Short

Roll Call Share to FacebookShare to Twitter (5/21) reports critics of Senate Majority Leader Schumer’s roadmap for federal legislation on artificial intelligence “say it fails to adequately address the harms that AI systems may pose and lacks the specifics needed to develop strong federal policy.” The article notes that, according to data compiled by OpenSecrets, “the number of organizations lobbying Congress and the federal government on AI nearly tripled to 460 last year from 158 the year prior, ranging from AARP to Zillow Group.” Among them, the Leadership Conference on Civil and Human Rights warns the proposal is “scant on details, especially the guardrails needed to prevent and mitigate harm.”

 

Microsoft Partners With Khan Academy To Provide Educators With Free AI Assistant

CNBC Share to FacebookShare to Twitter (5/21, Rosenbaum) reports Microsoft and Khan Academy are partnering to provide the generative AI assistant Khanmigo for Teachers to all US teachers for free. The assistant can helps prepare lessons for class, including creating lessons, analyzing student performance, planning assignments, and providing opportunities for teachers to enhance their learning. Microsoft and Khan Academy also plan to provide math tutoring using “a new open-source small language model from Microsoft’s Phi-3 AI technology.”

 

Report: AI Has Become Most Popular Speciality For Computer Science Ph.D.s

Inside Higher Ed Share to FacebookShare to Twitter (5/23, Coffey) reports the Computing Research Association’s annual Taulbee survey finds “artificial intelligence (AI) and machine learning are the most popular Ph.D. specialities among graduates in the computer science, computer engineering and information fields.” The new report revealed that, “for the last academic year in North America, more than a quarter (28 percent) of awarded doctoral degrees in those computer-related fields had a speciality focus in machine learning or AI. Human computer interaction was the second most popular area of focus for doctoral degrees, followed by security/information assurance, software engineering and robotics/vision.” The report, which surveyed “176 higher education institutions from fall 2023 through Feb. 14 of this year, addressed areas including degree enrollment, degrees awarded and employment for those in the three areas (computer science, computer engineering and information).”

 

OpenAI, StartUps Face Growing Legal Threats From Copyright Holders

Fortune Share to FacebookShare to Twitter (5/23, Matthews) reports there is now an “ongoing, raging battle” between generative AI companies, including OpenAI, “who are on the hunt for all the data they can get their hands on to keep improving their models, and the creators and license holders on the other end, who have a vested interest in protecting their IP – or at least getting some of these companies to pay for it.” Fortune says, “Copyright issues have become central to the conversation around AI – mostly because we have no idea what nearly all of these companies are using to train their models. ... All of this may lead to even more litigation. “

 

US Intelligence Community “Cautiously Embracing Generative AI”

The AP Share to FacebookShare to Twitter (5/23, Bajak) reports US intelligence agencies are “scrambling to embrace the AI revolution, convinced they’ll otherwise be smothered in data as sensor-generated” surveillance technology is adopted, and also feel the “need to keep pace with competitors, who are already using AI to seed social media platforms with deepfakes.” However, CIA Chief Technology Officer Nand Mulchandani cautions that “because generative AI models ‘hallucinate’ they are best treated as a ‘crazy, drunk friend’ – capable of incredible insight but also bias-prone fibbers.” Still, thousand of intelligence analysts across the IC now use a “CIA-developed” generative AI called Osiris that “ingests unclassified and publicly or commercially available data – what’s known as open-source – and writes annotated summaries.”

 

Emory University Student Sues Over Suspension Related To AI Study Tool

The Wall Street Journal Share to FacebookShare to Twitter reported a student at Emory University, filed a lawsuit against the university after being suspended for developing an artificial intelligence (AI) tool named “Eightball.” The tool, which creates study flashcards from uploaded class materials, won a $10,000 prize at a university startup competition. However, the university later suspended Craver for potential academic dishonesty linked to this AI tool. Emory University argues in court documents that Eightball could enable cheating by spreading class information beyond the school.

 

Survey: Most Researchers Use AI Tools Despite Distrust

Inside Higher Ed Share to FacebookShare to Twitter (5/24, Coffey) reported that an Oxford University Press (OUP) survey released Thursday found that “researchers are using AI despite not trusting the companies behind the technology.” According to the report, “more than three-quarters of researchers use some form of artificial intelligence (AI) tool in their research, despite having concerns about data security, intellectual property rights and AI’s effectiveness.” The OUP survey found “that 76 percent of the 2,345 respondents use an AI tool when conducting their own research. Chatbots and translation machines were the most popular tools, according to the report, with 43 percent and 49 percent usage, respectively.” Despite the high usage, “a majority of the respondents – polled during March and April – said they did not trust AI companies.” Those concerns “balance with the widespread belief that AI is here to stay and will most likely change the world.”

 

Google Turns Off Some AI Search Results After Providing False Information

CNN Share to FacebookShare to Twitter (5/24, Duffy) reported Google was forced to walk back some of its new artificial intelligence search tools just days post launch after the tools returned factually incorrect results. Earlier this month, Google “introduced an AI-generated search results overview tool, which summarizes search results so that users don’t have to click through multiple links to get quick answers to their questions. But the feature came under fire this week after it provided false or misleading information to some users’ questions.” Google confirmed to CNN that the incorrect results have been removed from its search. Google spokesperson Colette Garcia also said in a statement that “the vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” adding that some viral examples of Google AI mistakes appear to have been manipulated images.

        Also reporting are TechCrunch Share to FacebookShare to Twitter (5/26, Ha), The Atlantic Share to FacebookShare to Twitter (5/24, Mimbs Nyce), and the Financial Times Share to FacebookShare to Twitter (5/24, Subscription Publication).

 

Johansson’s Fight With OpenAI Explained

The Wall Street Journal Share to FacebookShare to Twitter examines Scarlett Johansson’s fight with OpenAI, a development that materialized after the firm exhibited an AI system featuring fresh voice assistants for Chat GPT, one of which Johansson and others thought sounded like her. The Journal says that the advent of AI as a quickly progressing and maybe inexorable force has prompted a great deal of nervousness in creative sectors that have been administered by stringent rules regarding how creators get compensated. OpenAI claimed that the assistant was never meant to sound like Johansson.

 

Tech Workers Rush To Re-Skill For AI Positions Amid Growing Demand, Insufficient Labor Supply

The Wall Street Journal Share to FacebookShare to Twitter describes how an “unbalanced” tech labor market is rapidly pivoting towards AI, with workers “feverishly retooling their skill sets for a time when every company suddenly wants to be an artificial-intelligence company,” despite the shortage of workers specializing in the technology.

 

Congress Remains Divided Over Legislation To Crack Down On AI-Generated Nonconsensual Porn

Politico Share to FacebookShare to Twitter reports Congress is in agreement that “something should be done to rein in nonconsensual porn generated by AI,” but lawmakers “have struggled for more than a year to draft a solution, illustrating how ill-equipped Washington is to set limits on rapidly evolving technology with the power to disrupt people’s lives.” According to Politico, “Legislation has been mired in debate over who should be held accountable for the deepfakes – with tech lobbyists pushing back on any language that would ensnare the platforms that distribute them.” However, Senate Judiciary Chair Dick Durbin told Politico that as Congress delays action, “there are now hundreds of apps that can make non-consensual, sexually explicit deepfakes right on your phone,” and called for legislation to “address this growing crisis as quickly as possible.” Politico says the White House has “appeared to endorse” Durbin’s plan to amend the Violence Against Women Act, but has yet to officially back his DEFIANCE Act.

dtau...@gmail.com

unread,
Jun 8, 2024, 12:20:35 PM6/8/24
to ai-b...@googlegroups.com

Generative AI Job Postings Increase Tenfold in the Past Year

The job posting platform Indeed reported generative AI job postings grew tenfold in the last year. However, AI-related jobs accounted for only 0.12% of all global job postings and slightly less than 2% of all U.S. job postings as of the end of April. Last year, close to 30% of computer science jobs, the most in-demand field, were AI-related.
[ » Read full article ]

Fast Company; Chris Morris (June 6, 2024)

 

ACM, IEEE Publish Comprehensive Curricular Guidelines for Undergrad Computer Science

ACM partnered with the IEEE Computer Society and the Association for the Advancement of Artificial Intelligence (AAAI) to develop and release the Computer Science Curricula 2023 (CS2023). The Curricula, updated every 10 years, is a comprehensive guide to the knowledge and competencies students should attain to earn undergraduate degrees in computer science and related disciplines. The updated CS2023 features increased mathematical and statistical requirements in accordance with the disciplinary demands of AI and machine learning.
[ » Read full article ]

ACM Media Center (June 5, 2024)

 

Generative AI Scans Your Amazon Packages for Defects Before Shipment

Amazon has implemented an AI model called Project P.I., which uses generative AI and computer vision technology to ensure customers receive the correct order. Project P.I. can identify defective or damaged items, as well as detecting items that are the wrong size or color. Said Amazon’s Kara Hurst, the company “is using AI to reach our sustainability commitments with the urgency that climate change demands, while also improving the customer experience.”
[ » Read full article ]

Fast Company; Sam Becker (June 3, 2024)

 

Russian Bots Use Fake Tom Cruise for Olympic Disinformation

Microsoft researchers found a pro-Russia disinformation group used fake AI-generated audio to make it seem as though actor Tom Cruise narrated a video suggesting violence is likely at the upcoming Olympic Games in Paris. The video was presented as a Netflix documentary with falsified endorsements from well-known media outlets. A pro-Russia group also generated a video impersonating media outlet France24 to falsely report that nearly a quarter of Olympic ticket-buyers had sought refunds due to fears of terrorism in Paris.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jeff Stone; Daniel Zuidijk; Hugo Miller (June 3, 2024); et al.

 

1-Bit LLMs Could Solve AI's Energy Demands

Researchers at Switzerland's ETH Zurich, China's Beihang University, and the University of Hong Kong used post-training quantization to create a 1-bit large language model (LLM), which could help reduce the energy demands of AI systems. The BiLLM method uses a single bit to approximate most network parameters, and two bits for those most influential to performance. This approach was used to binarize a version of Meta's LLaMa LLM with 13 billion parameters. BiLLM outperformed its closest binarization competitor while using a tenth of the memory capacity of the original model.
[ » Read full article ]

IEEE Spectrum; Matthew Hutson (May 30, 2024)

 

Machine Learning Detects Defects in Additive Manufacturing

A machine learning technique developed by University of Illinois Urbana-Champaign researchers can identify defects in 3D-printed components. The technique is based on a model built using tens of thousands of defects generated via computer simulations, each with a different size, shape, and location. When tested on actual 3D-printed parts, the model accurately detected hundreds of defects it had not seen before.
[ » Read full article ]

University of Illinois Urbana-Champaign (June 3, 2024)

 

AI Tools Readily Create Election Lies from Voices of Political Leaders

Testing of six of the most popular AI voice-cloning tools by researchers at the Center for Countering Digital Hate, a D.C.-based digital civil rights group, found the tools could generate convincing voice clones of leading political figures. In 240 tests, the tools generated convincing voice clones in 193 cases, or 80% of the time, the group found. Some of the tools have rules or tech barriers in place to stop election disinformation from being generated, although the researchers found many of those obstacles were easy to avoid.
[ » Read full article ]

Associated Press; Ali Swenson (May 31, 2024)

 

New Techniques to Stop Audio Deepfakes

The U.S. Federal Trade Commission recently announced the three winners of its Voice Cloning Challenge, which involved developing strategies to prevent, monitor, and evaluate audio deepfakes. Researchers at Arizona State University won for OriginStory, a microphone with built-in sensors that detect and measure biosignals from human speakers to verify speech is human-generated. Software technology company OmniSpeech won for its AI Detect speech-processing software, which embeds machine learning algorithms into devices for real-time identification of AI-generated voices. Finally, researchers at Washington University in St. Louis were recognized for DeFake, which prevents cloning by adding tiny perturbations to human-voice recordings.
[ » Read full article ]

IEEE Spectrum; Rina Diane Caballar (May 30, 2024)

 

U.S. Slows AI Chip Exports to Middle East by Nvidia, AMD

The U.S. has slowed the issuance of licenses to Nvidia, Advanced Micro Devices, and other chipmakers for large-scale AI accelerator shipments to the Middle East as a national security review of the region's AI development is performed. Part of the concern is that Chinese companies, largely cut off from cutting-edge U.S. technology themselves, could access those chips through datacenters in the Middle East.
[ » Read full article ]

Bloomberg; Mackenzie Hawkins; Ian King; Nick Wadhams (May 30, 2024)

 

Landslide Forecasting System Can Save Lives

Scientists from Australia, Italy, and Nepal partnered with the Nepal government and Australia’s Department of Foreign Affairs and Trade to develop an AI system that can provide early warning of a potential landslide. The SAFE-RISCCS forecasting system uses AI to analyze satellite images, combining them with rain measurements and ground motion data to continuously monitor and forecast the risk of a landslide at any one time in any one place.
[ » Read full article ]

The Kathmandu Post (Nepal) (June 6, 2024)

 

OpenAI Training New Model

The New York Times Share to FacebookShare to Twitter (5/28, Metz) reports OpenAI on Tuesday announced it’s started training a fresh flagship AI model, with the company remarking via blog post that it anticipates that the model will “bring ‘the next level of capabilities’ as it strives to build ‘artificial general intelligence,’ or A.G.I., a machine that can do anything the human brain can do.” The fresh model would be a catalyst for various AI products such as chat bots as well as search engines.

        OpenAI Board Sets Up Safety Panel. The Wall Street Journal Share to FacebookShare to Twitter reports the board of OpenAI has established a safety as well as security panel, with OpenAI remarking via blog post that the panel will be headed by several individuals, including Bret Taylor, who’s chair, and CEO Sam Altman. Moreover, OpenAI said a first job for the panel is going to be to assess and additionally develop the firm’s processes and safety measures.

Bipartisan Senate Bill Aims To Increase AI Education For K-12 Teachers

Education Week Share to FacebookShare to Twitter (5/28) reports Sens. Maria Cantwell (D-WA) and Jerry Moran (R-KS) this month introduced the bipartisan NSF AI Education Act of 2023. The legislation “seeks to expand scholarship aid and professional development opportunities for K-12 educators interested in artificial intelligence and quantum computing, with support from the National Science Foundation.” It would establish a grant program at the NSF “to promote research on teaching AI at K-12 schools, with a focus on schools that serve low-income, rural, and tribal students.” Additionally, the bill “calls on NSF to award undergraduate and graduate scholarships for future educators, as well as students interested in farming and advanced manufacturing, to study AI.” Furthermore, it “directs NSF to develop publicly available “playbooks” for introducing AI in P-12 classrooms nationwide.” Moran said in a statement, “If we want to fully understand AI and remain globally competitive, we must invest in the future workforce today.”

Zuckerberg Boosts Popularity With Open-Source AI Model

The New York Times Share to FacebookShare to Twitter (5/29, Isaac) reports Mark Zuckerberg, CEO of Meta, has gained renewed popularity in Silicon Valley following the release of Meta’s fully open-source artificial intelligence model in July. This model, which has been downloaded over 180 million times, allows developers to freely modify and utilize the technology. This open-source approach starkly contrasts with the more guarded strategies of tech firms like Google and OpenAI. The move to open-source A.I. has not only improved Meta’s internal systems but has also increased developer engagement with Meta’s technology ecosystem. Despite some past controversies associated with Zuckerberg and Meta, the open-source initiative has been positively received, marking a significant shift in his reputation among technologists.

Ex-OpenAI Board Member Discusses Firing, Rehiring Of Altman

Reuters Share to FacebookShare to Twitter reports ex-OpenAI board member Helen Toner remarked during a “Ted AI Show” podcast interview broadcast on Tuesday that the board found out about ChatGPT’s existence when they spotted it on Twitter. Moreover, Toner discussed the backstory regarding the dismissal and rehiring of CEO Sam Altman during November 2023, saying that one impetus for the dismissal was a pair of OpenAI executives having reported “psychological abuse” occurrences to the board. Toner said, “They were really serious, to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about...” OpenAI, after being queried for comment, referenced a statement incumbent board chair Bret Taylor gave to “The Ted AI Show” podcast that indicated a review was carried out into what happened last November.

California Advances AI Regulation Legislation

The AP Share to FacebookShare to Twitter (5/29, Nguyen) reports that California lawmakers are stepping up AI regulations to build public trust and combat issues like algorithmic discrimination, especially in hiring practices and social media. The proposed measures, which await further approval, include transparency in AI decision-making, restrictions on deepfakes in politics and pornography, and enhanced protections for workers against AI-generated replacements. According to the article, “The efforts in California – home to many of the world’s biggest AI companies – could pave the way for AI regulations across the country.”

OpenAI Disrupts Russian, Chinese, Other Influence Campaigns Using Its Tech

The New York Times Share to FacebookShare to Twitter (5/30, Metz) reports OpenAI announced Thursday that it had “identified and disrupted five online campaigns that used its generative artificial intelligence technologies to deceptively manipulate public opinion around the world and influence geopolitics.” The efforts were run “by state actors and private companies in Russia, China, Iran and Israel” using OpenAI’s technology to “generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.” OpenAI’s report is the “first time that a major A.I. company has revealed how its specific tools were used for such online deception, social media researchers said.”

        The Washington Post Share to FacebookShare to Twitter (5/30) reports the groups used OpenAI’s “tech to write posts, translate them into various languages and build software that helped them automatically post to social media.” None of these groups “managed to get much traction; the social media accounts associated with them reached few users and had just a handful of followers, said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team.” Still, the Post writes, OpenAI’s report “shows that propagandists who’ve been active for years on social media are using AI tech to boost their campaigns.” The groups included Spamouflage, operating in China, the Iranian International Union of Virtual Media, Russia-linked Bad Grammar, and an Israeli political campaign firm called Stoic.

        OpenAI Revives Robotics Division. Forbes Share to FacebookShare to Twitter (5/30, Cai) reports that OpenAI is reestablishing its previously disbanded robotics team and has started hiring engineers for it. The revived team aims to collaborate with external partners while integrating OpenAI’s AI models into humanoid robots being developed by other companies, including Figure AI.

AI Career Coaches Show Promise

The Washington Post Share to FacebookShare to Twitter (5/29) reported artificial intelligence tools are now offering career coaching. A test of six AI bots revealed that while they provide decent advice, they sometimes complicate issues or give biased solutions. Experts, including Korn Ferry’s Vinay Menon, caution that AI should support rather than replace human decision-making due to its limitations in empathy and personal insight. Despite these shortcomings, AI career coaches can be useful as supplementary tools for generating ideas and perspectives.

Google’s AI Overviews Questioned Over Accuracy, Source Clarity In Health Responses

The New York Times Share to FacebookShare to Twitter (5/31, Minsberg) reports that Google’s new AI Overviews feature, which uses generative AI to find answers across the internet, raises concerns about the accuracy of health information. The feature has already proven faulty, delivering incorrect, potentially dangerous health advice based on questionable sources. Google says health searches have guardrails, but won’t explain specifics. The lack of clarity around source information is also an issue. Experts express concern that AI Overviews prioritizes its own responses over credible medical sites, and encourage users to approach AI-generated information with caution.

Meta AI Debuts Across Platforms

The AP Share to FacebookShare to Twitter (5/30, Ortutay) reports that Meta Platforms has introduced a new AI assistant, Meta AI, across Facebook, Instagram, and WhatsApp. Described by CEO Mark Zuckerberg as “the most intelligent AI assistant available for free use,” this tool offers functionalities such as suggesting local dining spots, providing information from posts, managing travel bookings, and creating visual content instantly. Users can activate it in chats by typing @MetaAI. The deployment of Meta AI has, however, experienced some glitches, including odd interactions in user groups.

Biden Administration Hosts Major AI Competition Event Without Big Tech

Bloomberg Share to FacebookShare to Twitter (5/30, Nylen, Subscription Publication) reports the Biden Administration’s antitrust enforcers hosted a major event on AI competition but excluded big players like Amazon, Google, Meta, and Microsoft. The workshop, co-hosted by Stanford University, included top venture capital firms, US and international enforcers, and smaller AI companies. Assistant Attorney General for Antitrust Jonathan Kanter emphasized the need for AI companies to “adequately compensate creators for their works.” The FTC is investigating Amazon’s partnerships with AI startups Anthropic and OpenAI. The Justice Department is also scrutinizing AI companies for potential antitrust violations. Kanter stated, “AI is a transformational technology that has the potential to fundamentally alter how markets work.”

Emory University Student Sues Over Suspension Related To AI Study Tool

The Wall Street Journal Share to FacebookShare to Twitter reported a student at Emory University, filed a lawsuit against the university after being suspended for developing an artificial intelligence (AI) tool named “Eightball.” The tool, which creates study flashcards from uploaded class materials, won a $10,000 prize at a university startup competition. However, the university later suspended Craver for potential academic dishonesty linked to this AI tool. Emory University argues in court documents that Eightball could enable cheating by spreading class information beyond the school.

NYTimes Analyses: Google Faces Challenges In Keeping Pace With Competitors In AI Race

In an analysis for the New York Times Share to FacebookShare to Twitter (6/1), Nico Grant discusses the significance of Google’s botched rollout of AI Overviews – a new feature that was supposed to “generate full and useful information summaries above traditional search results.” Grant explains Google appears to have launched a broad “rollback” of AI Overviews “after the new technology produced a litany of untruths and errors – including recommending glue as part of a pizza recipe and suggesting that people ingest rocks for nutrients. Users loudly complained on social media about the mistakes, in many cases outright making fun of Google.” The Times says, “The backtracking was a blow to Google’s efforts to keep up with its rivals Microsoft and OpenAI...in the frenzied race to lead A.I.”

        Meanwhile, a second New York Times Share to FacebookShare to Twitter (6/1, Grant, Robertson) analysis discusses how some media executives are frustrated with Google’s incorporation of AI summaries into its search results at all. The executives “said in interviews that Google had left them in a vexing position. They want their sites listed in Google’s search results, which for some outlets can generate more than half of their traffic. But doing that means Google can use their content in AI Overviews summaries. Publishers could also try to protect their content from Google by forbidding its web crawler from sharing any content snippets from their sites. But then their links would show up without any description, making people less likely to click. Another alternative – refusing to be indexed by Google, and not appearing on its search engine at all – could be fatal to their business.”

New Study By Princeton, Yale Professors Targets Risks Of AI In Science

Forbes Share to FacebookShare to Twitter (5/31, Damiani) reported, “In a paper published in Nature On March 6, Yale anthropologist Lisa Messeri and Princeton cognitive scientist M. J. Crockett put another domain under the risk microscope: scientific research, and by extension, the future of science itself.” In the article, Messeri and Crockett “illuminate the epistemic risks – that is, the risks associated with how knowledge is produced – AI might pose to the sciences.” The co-authors focus on what happens when the errors that AI makes “are no longer issues and AI tools work exactly as intended. They argue that such approaches might narrow the range of questions researchers ask, which then creates knock-on effects for future questions and the efforts that are published and receive funding.” The question is not “whether or not the AI tools are working properly, but rather if, when working properly, they jeopardize how scientific knowledge is developed, funded, and circulated among humans.”

AI To Be Leading Topic At Bilderberg Meeting

CNBC Share to FacebookShare to Twitter (5/30, Gilchrist) reports the “CEOs of artificial intelligence heavyweights Google DeepMind, Microsoft AI, Anthropic and Mistral AI are among the elite list of business and political leaders attending a secretive meeting kicking off in Madrid, Spain, on Thursday.” The piece explains that AI will “once again dominate discussions at the annual Bilderberg Meeting after catapulting onto the agenda last year following the meteoric rise of the burgeoning technology.” The meeting will also be attended by “business executives including Citigroup CEO Jane Fraser, former Google CEO and chair Eric Schmidt, Pfizer CEO Albert Bourla, Shell CEO Wael Sawan and investor Peter Thiel for wide-reaching talks spanning trade, finance and biology.”

OpenAI Introduces AI Education Tool For Higher Ed

Inside Higher Ed Share to FacebookShare to Twitter (5/31, Coffey) reported, “OpenAI unveiled a new version of ChatGPT focused on universities on Thursday.” The platform, called ChatGPT Edu, could be used for a variety of educational applications, from tutoring to reviewing resumes. ChatGPT Edu, “expected to start rolling out this summer,” allows for the creation of personalized large language models while ensuring users’ data privacy. Conversations and data from the platform are not used to enhance OpenAI models, addressing concerns relating to privacy. The product builds on OpenAI’s partnerships with universities including Arizona State University, the University of Pennsylvania, the University of Oxford, and the University of Texas at Austin, which have previously used an enterprise version of the AI tool.

Venture Capitalist Lobbying For “Little Tech” Amid Concerns AI Policy Could Harm Smaller Firms

Politico Share to FacebookShare to Twitter profiles venture capitalist and Y Combinator CEO Garry Tan, who “is working to create a new front in the influence battle over artificial intelligence and rally his startup allies into a force he hopes can challenge the tech industry’s biggest voices in Washington.” Tan, whose company has emerged as “influential” incubator of tech startups, “is trying to build a lobbying operation that fights on behalf of ‘Little Tech,’ the thousands of venture-backed firms competing for a place in the emerging AI economy.” Politico explains, “Y Combinator and many of the companies Tan works with are worried that federal regulations meant to head off potential harms of AI will instead help giants like Microsoft and Google maintain their lead on the technology, boxing out smaller players.”

Gender Divide Among AI Users Could Pose A Problem For Future Development

Forbes Share to FacebookShare to Twitter (6/3, McGregor) reports a new survey by Slack reveals a gender divide in AI usage and adoption, with Gen Z men 25% more likely to have tried AI tools in comparison to Gen Z women. 35% of male respondents of all ages reported having tried AI for work, as opposed to 29% of female respondents. The report also found that workers of color reported using AI more often than white workers. The findings suggest the need for corrective measures to ensure a representative number of people are engaging with AI, which will shape its future development.

Big Tech’s Growing Control Over Data Centers Raises Concerns

Insider Share to FacebookShare to Twitter (6/4, Rogers) reports Amazon, Google, and Microsoft lead a rush in construction for larger data centers globally, powered by the increasing demand for AI and cloud computing. The companies own about 65% of the global data center capacity, which enables them to provide more computing services, storage, and capacity as a one-stop solution for other companies. This trajectory raises the concern of “locking in,” where transitioning from one data ecosystem to another becomes nearly impossible and inhibits competition. Some startups manage to remain competitive by maintaining code compatibility with all three providers, thereby avoiding lock-in. The FTC is scrutinizing Big Tech’s AI investments, aiming to address any anticompetitive behaviors.

Column: AI Competes With Humans In Giving Writing Feedback

In her column for The Hechinger Report Share to FacebookShare to Twitter (6/3), Jill Barshay says researchers from the University of California, Irvine, and Arizona State University “found that human feedback was generally a bit better than AI feedback, but AI was surprisingly good.” The new study, “published in the June 2024 issue of the peer-reviewed journal Learning and Instruction,” evaluated “the quality of ChatGPT’s feedback on students’ writing.” Humans had a “particular advantage in advising students on something to work on that would be appropriate for where they are in their development as a writer.” Still, on a five-point scale “that the researchers used to rate feedback quality, with a 5 being the highest quality feedback, ChatGPT averaged a 3.6 compared with a 4.0 average from a team of 16 expert human evaluators.” ChatGPT was also “slightly better at giving feedback on students’ reasoning, argumentation and use of evidence from source materials – the features that the researchers had wanted the writing evaluators to focus on.”

US Seeks Tech Investment In Climate-Friendly Energy

Reuters Share to FacebookShare to Twitter (6/4) reports that the Biden Administration is engaging with major technology companies to encourage investment in climate-friendly power sources to meet their increasing energy demands, particularly from data centers. Energy Secretary Jennifer Granholm highlighted the importance of clean energy investments in light of the rising demand facilitated by technologies like generative AI. Discussions have included the potential for using small modular reactors for nuclear energy, with a focus on collective action to reduce costs. Granholm underlined the need for clear power purchase agreements, as exemplified by the cancellation of NuScale’s only licensed small modular reactor project due to a lack of committed buyers.

        Energy Dept. Official Says Next-Gen Nuclear Power “Can’t Fail” Amid Surging Electricity Demand For AI. Bloomberg Share to FacebookShare to Twitter (6/4, Malik, Subscription Publication) reports increasing electricity demand “for artificial intelligence and data centers means next-generation nuclear power ‘can’t fail,’ according to a top US Energy Department official.” Under Secretary of Energy for Infrastructure David Crane, previously skeptical of the technology, “said he’s now ‘very bullish’ on emerging designs for so-called small modular reactors.”

AI Accurately Detects Even Smallest Breast Cancers With Fewer False Positive Readings, Study Shows

HealthDay Share to FacebookShare to Twitter (6/4, Thompson) reports, “Artificial intelligence (AI) can improve doctors’ assessments of mammograms, accurately detecting even the smallest breast cancers with fewer scary false positive readings, a new study shows.” On June 4 in the journal Radiology, “AI-assisted mammography detected significantly more breast cancers, with a lower false-positive rate, than doctors assessing mammograms on their own, researchers reported.” Almost “21% fewer women had to come back for a follow-up mammogram when AI helped doctors analyze breast imaging, researchers found.”

AI Grading Gains Traction In California Schools

CALmatters Share to FacebookShare to Twitter (6/3, Johnson) reported California school districts “are signing more contracts for artificial intelligence tools, from automated grading in San Diego to chatbots in central California, Los Angeles, and the San Francisco Bay Area.” English teachers “say AI tools can help them grade papers faster, get students more feedback, and improve their learning experience.” However, “guidelines are vague and adoption by teachers and districts is spotty,” while the California Department of Education does not track AI utilization or its associated costs in schools. A report issued last fall “in response to an AI executive order by Gov. Gavin Newsom mentions opportunities to use AI for tutoring, summarization, and personalized content generation, but also labels education a risky use case.”

IBM Executive Highlights Prompt Engineering As Lucrative Skill For AI Users

CNBC Share to FacebookShare to Twitter (6/5) reports nearly all business executives – 96% – “feel an urgency to incorporate AI into their business operations,” yet more than two-thirds of desk workers “say they’ve never used AI, according to a March 2024 Slack Workforce Lab survey of more than 10,000 professionals.” The one AI skill that’s in “crazy demand,” according to Lydia Logan, IBM’s vice president of global education and workforce development, “and that she encourages everyone to learn, is prompt engineering.” She explains, “If you don’t give a good prompt to a generative AI tool, you’re not going to get a good answer. ... Using AI effectively starts with prompt engineering.” Professionals at “any stage of their career can benefit from learning how best to use AI, Logan says, but those with a high school degree or less might see the biggest gains,” because employers in the AI sector “are increasingly hiring for skills, not degrees.”

Bipartisan Senate Bill Would Authorize NSF To Develop Guidance For AI In Schools

K-12 Dive Share to FacebookShare to Twitter (6/5) reports the bipartisan NSF AI Education Act of 2024 would authorize the “National Science Foundation to develop guidance on artificial intelligence in pre-K-12 classrooms – particularly for low-income, rural and tribal students.” Under the bill introduced by Sens. Maria Cantwell (D-WA) and Jerry Moran (R-KS), the NSF “would establish scholarships for future teachers to study AI in addition to professional development opportunities for current educators.” The bill would also require the NSF “to create an award program spotlighting research on AI in K-12 settings, and to launch an outreach campaign promoting awareness of its AI education opportunities in public schools and colleges – especially in rural and underserved areas.”

Professors See Opportunity For Language Learning In Generative AI Tools

Inside Higher Ed Share to FacebookShare to Twitter (6/6, Coffey) reports many foreign language professors “across the nation view the emergence – and constant reiterations – of generative artificial intelligence (AI) tools as possible launch pads for their subjects, boosting interest from students, improving skills earlier on and advancing the evolution of language learning.” The hope comes “despite recent cuts in the field,” and many language professors “see AI as an opportunity amid a difficult time.” AI tools, “especially ChatGPT’s newest version called GPT4-o, can help students not just with writing, but with speaking – making them on-demand tutors.” However, even with AI’s usage “soaring among students, that recalibration will take time, training and evolving methods.”

AI Impacting Copper Demand For Green Transition

The Wall Street Journal Share to FacebookShare to Twitter (6/6, Subscription Publication) reports that the rise of artificial intelligence is intensifying global copper demand, critical for electrical systems and renewable energy technology. Data centers for AI are forecasted to significantly contribute to a projected copper deficit, exacerbated by increasing power consumption. The International Energy Agency stresses the need for investments in mining and improved recycling to meet growing demands from technologies like solar panels and electric vehicles.

Tesla Overhauls Self-Driving Software Ahead Of Robotaxi Launch

Freethink Share to FacebookShare to Twitter (6/6) reports that electric vehicle maker Tesla, led by Elon Musk, is revamping its Full Self-Driving software. The company is replacing existing rule-based algorithms with a system that leverages neural networks. Prior to this, FSD software relied on predefined rules to navigate vehicles. The new system, patterned on deep-learning techniques, allows the vehicle to learn from real-world driving experiences. However, the upgrade has attracted mixed reviews online, with concerns over transparency and interpretability of the system’s decision-making highlighted.

FTC, DOJ To Probe Big Tech’s Role In Development Of AI Market

CNBC Share to FacebookShare to Twitter (6/6, Field, Javers) reports that the Federal Trade Commission and the Justice Department are “set to open antitrust investigations into Microsoft, OpenAI and Nvidia, examining the powerful companies’ influence on the artificial intelligence industry.” According to CNBC, the FTC “will take the lead on looking into Microsoft and OpenAI, while the DOJ will focus on Nvidia, and the investigations will focus on the companies’ conduct, rather than mergers and acquisitions.” CNBC says “as startups like OpenAI and Anthropic...gain steam in the generative AI market, tech giants like Google, Microsoft, Amazon and Meta have been part of an AI arms race of sorts, racing to integrate the technology to ensure they don’t fall behind in a market that’s predicted to top $1 trillion in revenue within a decade.” Reuters Share to FacebookShare to Twitter also covers the pending probes.

        Yellen Says AI Could Pose “Significant Risks” To Financial System. The Hill Share to FacebookShare to Twitter (6/6) reports Treasury Secretary Yellen “warned Thursday that artificial intelligence (AI) could pose ‘significant risks’ to the financial system.” The “‘complexity and opacity’ of AI models, ‘inadequate’ risk management frameworks, and ‘interconnections’ that arise from using the same data and models can create vulnerabilities, Yellen said at a conference on AI and financial stability.”

dtau...@gmail.com

unread,
Jun 15, 2024, 3:55:41 PM6/15/24
to ai-b...@googlegroups.com

Apple Promises Not to Store, Allow Access to AI Data

At the 2024 Apple Worldwide Developers Conference, Apple’s Craig Federighi (pictured) introduced Private Cloud Compute (PCC). Part of what Apple calls "a brand new standard for privacy and AI," PCC achieves privacy through on-device processing. When a bigger, cloud-based model is needed to fulfill an AI request, it will "run on servers we've created especially using Apple silicon," said Federighi. PCC's server code will be publicly accessible, he said, so "independent experts can inspect the code that runs on these servers to verify this privacy promise."
[ » Read full article ]

Ars Technica; Kyle Orland (June 10, 2024)

 

Engineering a Better Way to Find Doctors

University of Southern California and Kaiser Permanente researchers used AI to improve the quality of results produced by the latest version of Kaiser's Find Doctors & Locations tool. The tool employs semantic search, using a knowledge graph to understand the context and relationship between words in a query. The researchers also leveraged ChatGPT to map body organ terms to related medical specialties. The tool produces results in milliseconds and increases the chances patients will identify a relevant doctor in their region by 20%.
[ » Read full article ]

USC Viterbi School of Engineering; Stephanie Lee (June 11, 2024)

 

Paris Olympics Crowd Scans Fuel AI Surveillance Fears

French authorities plan to use AI surveillance systems during the Olympics in Paris. The systems, already tested at train stations, concerts, and sporting events, will be used by police, fire and rescue services, and some French transport security agents when the games commence in late July. While the systems will scan for potential threats, they cannot be used for gait detection, facial recognition, and other processes used for identification.
[ » Read full article ]

The Japan Times; Adam Smith (June 13, 2024)

 

Meta Seeks to Train AI Model on European Data Despite Privacy Concerns

Meta wants to use data from EU users of its platforms to train its AI models but faces a strict data privacy regime. The company said that in order to better reflect the “languages, geography, and cultural references” of its users in Europe, it needs to use public data from those users to teach its Llama AI large language model. EU data privacy laws give people control over how their personal information is used.
[ » Read full article ]

Associated Press; Kelvin Chan (June 10, 2024)

 

AI Models Hold Opposing Views on Controversial Topics, Study Finds

Researchers at Carnegie Mellon University, the Netherlands' University of Amsterdam, and the AI startup Hugging Face found that generative AI models give inconsistent answers to questions on polarizing topics, such as LGBTQ+ rights, social welfare, and surrogacy. Their study of five models — Mistral's Mistral 7B, Cohere's Command-R, Alibaba's Qwen, Google's Gemma, and Meta's Llama 3 — indicated that the inconsistencies can be attributed to bias embedded in the training data. Said study co-author Giada Pistilli, "Our research shows significant variation in the values conveyed by model responses, depending on culture and language."
[
» Read full article ]

TechCrunch; Kyle Wiggers (June 6, 2024)

 

AI 'Gold Rush' for Chatbot Training Data Could Run Out of Human-Written Text

Epoch AI reported that the supply of publicly available data for training AI language models will run out between 2026 and 2032. Tech companies can forge deals for access to high-quality data sources in the short term, but they will have to tap into private data or depend on "synthetic data" produced by chatbots over the longer term. Said Epoch AI's Tamay Besiroglu, "If you start hitting those constraints about how much data you have, then you can't really scale up your models efficiently anymore."
[
» Read full article ]

Associated Press; Matt O'Brien (June 6, 2024)

 

Brazil Hires OpenAI to Cut Costs of Court Battles

The Brazilian government plans to use OpenAI's services to speed up the process of screening and analyzing lawsuits. The technology will be used to flag actions that must be taken before final decisions are issued, and map trends and possible action areas for the solicitor general's office. The goal is to prevent court-ordered debt payments from taking a toll on the federal budget.
[ » Read full article ]

Reuters; Marcela Ayres; Bernardo Caram (June 11, 2024)

 

EU Struggles to Counter Russian Election Disinformation

Stratcom, an EU team tasked with combating disinformation, struggled with a broad Russian disinformation campaign ahead of the European Parliament election, which ran June 6-9. While the EU's new Digital Services Act requires Big Tech to do more to counter illegal and harmful content, Generative AI has made it faster and easier for foreign actors to spread misinformation, EU officials say. The European Commission's Peter Stano said, "Before with trolls and bots, there was usually a person behind it. With AI, everything has multiplied."
[
» Read full article ]

Reuters; Julia Payne; Jan Lopatka; Anan Koper (June 3, 2024); et al.

Meta Expands AI Training Using Public Posts

The New York Times Share to FacebookShare to Twitter (6/7, Jiménez) reported in continuing coverage that Meta has begun expanding its AI services globally and notified European users that their public Facebook and Instagram posts would be used to train its AI beginning June 26. This has caused privacy concerns. In the US, “where online privacy laws are not as strict, Meta AI has already been using public posts to train its AI.” Despite “concerns about the data usage and a lack of specifics about what exactly Meta will do with people’s information,” Meta asserts compliance with privacy laws.

 Apple To Unveil New AI Products At WWDC Event

“Apple Inc.’s developers conference on Monday will show whether the iPhone maker can become a major player in the burgeoning field of artificial intelligence, marking a critical moment for a company forced to adapt to a new era,” Bloomberg Share to FacebookShare to Twitter (6/10, Gurman, Subscription Publication) reports. While “Apple was an early pioneer in AI, which it has used in photo processing, health features and the Siri digital assistant, it’s now seen as a laggard – especially since ChatGPT and other cutting-edge technology hit the scene in the past two years.” The company is focusing on “providing more information to users” like summarized meeting notes and transcriptions, but “Apple also has been working on its own large language models.” Apple is “not far enough along in that area, people familiar with the matter have said,” and “it’s planning to announce a partnership with OpenAI that will supply Apple with a chatbot.”

Apple Introduces New AI Feature Amid Ongoing Tech Race

The New York Times Share to FacebookShare to Twitter (6/10, Mickle) reports that nearly two years after OpenAI “ignited a race to add generative artificial intelligence into products,” Apple on Monday “revealed plans to bring the technology to more than a billion iPhone users around the world.” Apple “said that it would be using generative A.I. to power what it is calling Apple Intelligence,” a system that “will prioritize messages and notifications and will offer writing tools that are capable of proofreading and suggesting what users have written in emails, notes or text.”

        The Wall Street Journal Share to FacebookShare to Twitter (6/10, Tilley, Subscription Publication) reports that an updated Siri feature will be able to better understand natural language, process contextual information, and take action inside apps. For certain complex Siri requests, Apple will surface a prompt to ask if the user wants to connect with ChatGPT to get a better answer. Apple is also allowing ChatGPT to connect with additional areas of the operating system, such as using the AI to help with composing text. Apple also said it gives users the ability to control when and if they want to use ChatGPT.

        The Washington Post Share to FacebookShare to Twitter (6/10) says the announcements are “aimed at helping the tech giant keep up with competitors such as Google and Microsoft, which have boasted in recent months about why AI makes their phones, laptops and software better than Apple’s.” Apple’s jump into AI also “underscores the extent to which the tech industry has bet its future on the technology.” The company “has generally positioned itself over the years as charting its own way, focusing on a closed ecosystem centered on its expensive phones and computers, touting that model as better for users’ privacy. But the embrace of generative AI shows that the technology trend is too powerful for even Apple to ignore.”

        Musk Threatens To Ban Apple Devices From His Companies If OpenAI Is Integrated Into OS. Bloomberg Share to FacebookShare to Twitter (6/10, Turner, Subscription Publication) reports Elon Musk “said he would ban Apple Inc. devices from his companies if OpenAI’s artificial intelligence software is integrated at the operating system level, calling the tie-up a security risk.” His remarks follows Apple announcing Monday “that customers would have access to OpenAI’s ChatGPT chatbot through the Siri digital assistant.”

UT-Austin Professor Using Machine Learning To Translate Dog Sounds Into Words

CBS News Texas Share to FacebookShare to Twitter (6/7, Nielsen) reported Kenny Zhu, a professor of computer science at the University of Texas at Arlington, is “using machine learning to translate dog sounds into phonetic representations and eventually words.” By collecting a variety of sounds “and stripping away other noises, Zhu then catalogs the sounds and segments them into pieces like syllables, assigning each a symbol similar to our alphabet.” So far, the professor “has transcribed about 10 hours’ worth of barks into meaning and found that dogs in different parts of the world ‘speak’ differently too.”

States Make Moves To Regulate AI Amid Federal Standstill

The New York Times Share to FacebookShare to Twitter (6/10, Kang) reports that as federal lawmakers take their time “regulating A.I., state legislators have stepped into the vacuum with a flurry of bills poised to become de facto regulations for all Americans.” Tech laws created in the states “frequently set precedent for the nation, in large part because lawmakers across the country know it can be challenging for companies to comply with a patchwork across state lines.” State lawmakers have so far “proposed nearly 400 new laws on A.I. in recent months, according to the lobbying group TechNet. California leads the states with a total of 50 bills proposed, although that number has narrowed as the legislative session proceeds.”

Barnard College Using Pyramid-Style Framework To Improve AI Literacy

Inside Higher Ed Share to FacebookShare to Twitter (6/11, Coffey) reports, “Among Barnard College workshops on neurodiversity, academic integrity and environmental justice, a new offering debuted in the spring of 2023: a session dubbed ‘Who’s Afraid of ChatGPT?’” Barnard College “turned to a pyramid-style literacy framework first used by the University of Hong Kong and updated for Barnard’s students, faculty and staff.” Rather than “jumping headfirst into AI, as some institutions have done, the pyramid approach follows a gradual lean into the technology, ensuring a solid foundation before moving to the next step.” The college’s “current focus is on level one and two at the base of the pyramid, creating a solid understanding of AI.” Barnard College “hosts workshops and sessions for students and faculty to tackle these first two levels.”

OpenAI Makes Executive Hires

Axios Share to FacebookShare to Twitter reports OpenAI on Monday announced that hiring of Sarah Friar to be its first CFO and Kevin Weil to be chief product officer. Friar most recently served as CEO of Nextdoor, while Friar most recently occupied a president role at Planet Labs. OpenAI CEO Sam Altman remarked via statement, “Sarah and Kevin bring a depth of experience that will enable OpenAI to scale our operations, set a strategy for the next phase of growth, and ensure that our teams have the resources they need to continue to thrive.”

Apple Shares Rise To Record High After Unveiling AI Features

CNBC Share to FacebookShare to Twitter (6/11, Capoot) reports Apple’s shares rose 5% on Tuesday to reach a new record high of around $203 per share “after the company announced its long-awaited push into artificial intelligence at its annual developer conference on Monday.” During the conference, “Apple introduced a range of new AI features during the event, including an overhaul of its voice assistant Siri, integration with OpenAI’s ChatGPT, a range of writing assistance tools and new customizable emojis.”

Apple To Add Generative AI Features To Products, Raising Questions About Impact On Schools

Education Week Share to FacebookShare to Twitter (6/11, Langreo) reports, “Apple plans to add generative artificial intelligence features to its products – such as the iPhone – as early as this fall, the company announced June 10, raising questions about how those upgrades will affect schools.” Apple’s AI features “can proofread and rewrite documents, generate images and emojis, transcribe phone calls and voice memos, summarize emails and lectures, and solve math problems.” Many educators are concerned “about students using generative AI tools to cheat on assignments, though a couple survey findings have found that student cheating hasn’t skyrocketed over the past 18 months or so, when AI use expanded significantly after the release of ChatGPT.” Educators are also worried “about the overuse of cellphones by students.”

        District, School Leaders Scrambling To Address Questions About Emerging AI. K-12 Dive Share to FacebookShare to Twitter (6/11, Riddell) reports, “From school operations to the classroom, artificial intelligence is spreading in K-12. And as with any emerging technology, these tools are raising plenty of new questions.” District and school “leaders are scrambling to address these questions and more as they craft policies for AI’s use.”

Startup Discovers Numerous Vulnerabilities In Popular AI Tools

The Washington Post Share to FacebookShare to Twitter (6/12, Lorenz) reports Haize Labs, a startup specializing in AI safety, has discovered numerous vulnerabilities in well-known generative AI tools. The programs were found to generate violent content and provide instructions for creating weapons and conducting cyberattacks. Haize, founded by three recent college graduates, aims to expose and resolve these AI vulnerabilities, referring to itself as a “Moody’s for AI.” The AI industry, says Carnegie Mellon professor Graham Neubig, needs independent safety entities. Haize is open-sourcing discovered vulnerabilities on GitHub and is working with Anthropic to test an unreleased algorithmic product.

Musk Withdraws Suit Against OpenAI

Bloomberg Share to FacebookShare to Twitter (6/11, Subscription Publication) reports Elon Musk has withdrawn his lawsuit against OpenAI and CEO Sam Altman, alleging they breached a founding promise by prioritizing profits over humanity. Musk accused OpenAI of becoming a “de facto subsidiary” of Microsoft, violating its original non-profit agreement. The lawsuit was dropped a day before a California judge was set to hear OpenAI’s dismissal request. Musk’s legal action claimed OpenAI had committed to developing AI “for the benefit of humanity.” OpenAI argued that Musk had supported the company’s shift to for-profit status and insisted it raise significant funds. Additionally, Musk recently raised $6 billion for his own AI company. In a recent statement, Musk threatened to ban Apple Inc. devices from his companies if OpenAI’s AI software is integrated into their operating systems. The dismissal was filed “without prejudice,” allowing Musk the option to revive the case in the future.

Researchers Report Findings On AI Ethics After Conducting Focus Groups With Teachers, High School Students

Education Week Share to FacebookShare to Twitter (6/12, Klein) reports “overworked teachers and stressed-out high schoolers are turning to artificial intelligence to lighten their workloads,” but they are not “sure just how much they can trust the technology – and they see plenty of ethical gray areas and potential for long-term problems with AI.” Last year, researchers “conducted small focus groups on AI ethics with a total of 15 teachers nationwide as well as 33 high-school students.” The researchers found that “teachers see potential for generative AI tools to lighten their workload, but they also see big problems.” Researchers also discovered that “teachers and students need to understand the technology’s strengths and weaknesses,” and “students have a more nuanced perspective on AI than you might expect.”

AI Use For College Student Assignments Increasing As Educators Urge Administrators To Offer Better Support, Guidelines

The Chronicle of Higher Education Share to FacebookShare to Twitter (6/13, McMurtrie) reports that professor Jeff Wilson from the University of Waterloo estimates that 25% of his students have utilized generative AI complete their assignments. This growth has raised concerns about academic integrity and the authenticity of learning. Responses among faculty members have varied, with some focusing on the potential educational benefits of AI, while others, like Wilson, worry about its impact on genuine learning. Marc Watkins, a lecturer from the University of Mississippi, advocates for “curious skepticism” towards AI, recognizing both the potential benefits and risks it poses. With AI use likely to increase, educators are calling on administrators to provide better support and guidelines for faculty.

OpenAI Will Not Receive Payments From Apple For ChatGPT Use

Fortune Share to FacebookShare to Twitter (6/13) reports Apple is not paying OpenAI as part of their partnership which will see the integration of ChatGPT into the tech company’s products and services, citing people with knowledge of their agreement. Instead, Apple believes that extending OpenAI’s brand and technology to its millions of customers is of equal or greater value than any monetary compensation. Meanwhile, Apple will receive “the benefit of offering an advanced chatbot to consumers – potentially enticing users to spend more time on devices or even splash out on upgrades.”

Biden’s Cabinet To Meet With Tech Companies On Using AI To “Improve Public Welfare”

The Biden Administration “will call on the tech industry on Thursday to design artificial intelligence models to improve public welfare in the coming decades, including by reducing car crashes and accelerating medical research,” Bloomberg Law Share to FacebookShare to Twitter (6/13, Rozen, Subscription Publication) reports behind a paywall. White House science adviser Arati Prabhakar will lead cabinet officials to “meet with leaders in Washington from companies such as Microsoft Corp., Alphabet Inc.’s Google, and General Electric Co.” During the meetings, “they’ll discuss working together to train AI models to forecast dangerous weather and manage demand for electricity, among other topics, Prabhakar said.”

Texas Lt. Gov. Expresses Concern About Energy Usage Of AI Data Centers And Cryptocurrency Miners

KDFW-TV Share to FacebookShare to Twitter Dallas (6/13, Boyer) reports Texas Lt. Gov. Dan Patrick (R) expressed concern in a post on X on Wednesday about the energy usage of cryptocurrency mining operations and AI data centers after Electric Reliability Council of Texas CEO Pablo Vegas told a state Senate committee that the state’s power demand could double in the next six years due in part to demand from data centers and mining operations. Patrick said, “We need to take a close look at those two industries. They produce very few jobs compared to the incredible demands they place on our grid.” Patrick added, “We want data centers, but it can’t be the wild, wild west of data centers and crypto miners crashing our grid and turning the lights off.”

dtau...@gmail.com

unread,
Jun 22, 2024, 5:48:16 PM6/22/24
to ai-b...@googlegroups.com

The 'Godfather of AI' Emerges

2018 ACM A.M. Turing Award laureate Geoffrey Hinton, the "Godfather of AI," will serve as an advisor on the board of CuspAI more than a year after leaving Google. Said Hinton, "I've been very impressed by CuspAI and its mission to accelerate the design process of new materials using AI to curb one of humanity's most urgent challenges — climate change." Meta's Yann LeCun, who shared the 2018 Turing Award with Hinton and Yoshua Bengio, said Meta will collaborate with CuspAI to speed discovery of new materials for carbon capture.

[ » Read full article *May Require Paid Registration ]

Fortune; Ryan Hogg (June 18, 2024)

 

Microsoft's Nadella Building an AI Empire

Microsoft's partnership with ChatGPT creator OpenAI was just the beginning of the tech giant's strategy to become an AI powerhouse. Microsoft CEO Satya Nadella has since begun to aggressively acquire AI talent, tools, and technology. Nadella hired Mustafa Suleyman, a co-founder of DeepMind and Inflection AI, to head Microsoft's AI efforts. Most of Suleyman's Inflection team also joined Microsoft, and the startup's technology was used to develop an in-house AI model to rival OpenAI's.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Tom Dotan; Berber Jin (June 12, 2024)

 

AI-Equipped Underwater Drones Help U.S. Navy Scan for Threats

The U.S. Navy is boosting its deployment of AI-equipped underwater drones that rely on sonar sensors to detect objects and navigate the ocean after successful testing. The effort from the Pentagon’s Defense Innovation Unit has helped cut in half the time it takes to comb the ocean floor for underwater mines, said Alex Campbell, the unit’s Navy service lead. The AI models parse underwater drone footage to distinguish, for example, fish traps from explosives.
[ » Read full article ]

Bloomberg; Charles Gorrivan (June 17, 2024)

 

Pope Francis Tells G7 Humans Must Not Lose Control of AI

Pope Francis became the first pontiff to address a Group of Seven (G7) summit on Friday, when he warned world leaders that AI must never be allowed to get the upper hand over humanity. The pope said AI represented an "epochal transformation" for mankind, stressing the need for close oversight of the technology to preserve human life and dignity.
[ » Read full article ]

Reuters; Crispian Balmer (June 14, 2024)

 

McDonald's Ending Test of AI-Powered Drive-Thrus with IBM

McDonald's said its global partnership with IBM, through which it tested an AI ordering system at certain drive-thrus, has ended. Sources said the AI-powered drive-thru assistant faced challenges that hindered order accuracy, such as difficulty interpreting different accents and dialects. In a prepared statement, McDonald’s said, “Our work with IBM has given us the confidence that a voice ordering solution for drive-thru will be part of our restaurants’ future.”
[ » Read full article ]

Associated Press; Wyatte Grantham-Philips (June 18, 2024)

 

AI Steve for PM?

U.K. entrepreneur Steve Endacott created an AI avatar to run for election as a Minister of Parliament (MP) for the Brighton Pavillion constituency in the House of Commons. AI Steve's campaign Website is seeking "creators" to help craft new policies; visitors to the site can click on "Speak to AI Steve" to interact with the bot. If elected, Endacott will attend Parliament in person to vote on policies based on feedback gathered by the AI platform.
[ » Read full article ]

Euronews; Anna Desmarais (June 13, 2024)

 

From Wearables to Swallowables

University of Southern California researchers developed ingestible sensors that can detect stomach gases associated with gastritis and gastric cancer and allow for real-time location tracking. They also developed a wearable system to track the smart pills via a coil that generates a magnetic field on a t-shirt. The field coupled with a trained neural network allows for tracking of the capsule within the body.
[ » Read full article ]

USC Viterbi School of Engineering; Amy Blumenthal (June 12, 2024)

 

Chinese Media Uses Stanford AI Plagiarism Incident To Criticize US Research Culture

Inside Higher Ed Share to FacebookShare to Twitter (6/14) reported that Chinese state-run media “has used a plagiarism incident involving students from Stanford University to denigrate U.S. research culture, while analysts suggest the episode shows the increasing influence of Chinese science.” Two undergraduates from Stanford “admitted failing to credit Chinese researchers in the development of a new artificial intelligence (AI) model.” The students apologized on social media platform X “on June 3 after developers pointed out similarities” between their large language model (LLM), “Llama3V”, and “MiniCPM”, an LLM created by researchers from Tsinghua University. While the LLM “was not linked to Stanford University’s AI department, when reporting on the incident, Chinese government news agency Xinhua said it demonstrated that the U.S.’s technological strengths are ‘far from omnipotent’ and that Silicon Valley, where Stanford University is located, has ‘cultivated a negative culture.’”

AI Misinformation Concerns Rise In News Production

Reuters Share to FacebookShare to Twitter (6/17, Dang) reports on growing concerns about AI in news production, posing fresh challenges to newsrooms. The Reuters Institute for the Study of Journalism’s annual Digital News Report, based on surveys of nearly 100,000 people across 47 countries, reveals that 52% of US respondents and 63% of UK respondents are uncomfortable with AI-produced news, particularly on sensitive topics. Reuters Institute Senior Research Associate Nic Newman said, “It was surprising to see the level of suspicion.” Concerns about false news content online rose, with 59% of respondents worried. The report reveals the increasing role of news influencers on platforms like TikTok, where 57% of users pay attention to individual personalities over news brands.

AI Safety Bill Sparks Debate Among Tech Giants

Vox Share to FacebookShare to Twitter (6/14) reports California’s SB 1047, a bill mandating safety testing for AI systems exceeding $100 million in training costs, has ignited debate in the tech industry. The bill proposes liability for AI developers if their systems cause “mass casualty events” or more than “$500 million in damages in a single incident or set of closely linked incidents.” Meta’s Chief AI Scientist Yann LeCun criticized the bill, stating it could “destroy California’s fantastic history of technological innovation.” In contrast, prominent AI researchers Geoffrey Hinton and Yoshua Bengio support the legislation, citing potential AI risks. Critics fear excessive caution could “discourage companies from publicly releasing models,” while supporters argue for necessary safety precautions.

Survey Shows Generational Gap Among Teachers For AI-Powered Chatbot Use

Education Week Share to FacebookShare to Twitter (6/14) reported a new research study concluded “a growing number of teachers are using AI-powered chatbots for work, but there’s a gap opening up among younger and older teachers.” The survey of 1,003 teachers by Impact Research “found that large shares of educators also report that they are receiving little guidance from schools on how they should be using the technology.” It concluded that the “percent of teachers using ChatGPT for school-related work has climbed 9 percentage points since February 2023, even as teachers’ favorable feelings toward the technology have dipped 11 percentage points in the past year. Nearly half of teachers say they use ChatGPT at least weekly for work. Fifty-nine percent have a favorable view of AI chatbots overall.” Still, teachers who are 45 and older “were substantially less likely than their younger counterparts to say they felt confident in their ability to use chatbots effectively – 53 percent compared with 71 percent of teachers who are younger than 45.”

Report: Higher Ed Officials Believe AI Can Bolster Student Success With More Guidance

Inside Higher Ed Share to FacebookShare to Twitter (6/17, Coffey) reports according to a report by education consulting firm EAB, a majority of “student success directors, administrators and advisers say artificial intelligence (AI) can help identify students in need of support, but almost no institutions are creating streamlined approaches to use AI technology.” Most of those surveyed (69 percent) “said they had used AI over the last year, including for crafting student communications, answering questions faster, and helping students with career research. But the report found many of these student success professionals are doing it on their own.” The EAB report lists “several AI recommendations, including centralizing AI best practices, developing AI collaborative spaces, defining and addressing AI risk and making AI a strategic priority by investing financially and offering AI literacy resources.”

Column: Surveys Show High School, College-Age Students Are Embracing AI Tools

In her column for The Hechinger Report Share to FacebookShare to Twitter (6/17), Jill Barshay says two new surveys, “both released this month, show how high school and college-age students are embracing artificial intelligence.” What stands out is “how much teens are turning to AI for information and to ask questions, not just to do their homework for them.” Another big takeaway “is that there are different patterns by race and ethnicity with Black, Hispanic and Asian American students often adopting AI faster than white students.” The first report, released on June 3, “was conducted by three nonprofit organizations, Hopelab, Common Sense Media, and the Center for Digital Thriving at the Harvard Graduate School of Education.” The second report, released on June 11, “was conducted by Impact Research and commissioned by the Walton Family Foundation,” and it found that “Hispanic and Asian American students were sometimes more likely to use AI than white and Black students, especially for personal purposes.”

University Of Alabama At Birmingham To Launch New AI In Medicine Graduate Program

Alabama Reflector Share to FacebookShare to Twitter (6/17) reports on Friday, the Alabama Commission on Higher Education (ACHE) “approved a new graduate program on artificial intelligence (AI) in medicine at The University of Alabama at Birmingham (UAB).” The proposed Master of Science in Artificial Intelligence in Medicine “would be implemented by January 2027. The program will be the first in the state to specialize in the use of AI for medical purposes and aims to meet the needs of the health care industry in Birmingham and throughout Alabama.” The program aims to “train students for AI-focused medical roles,” and it will “train health care professionals in AI applications in medicine, including skills in deep learning, computer vision, and large language modeling for healthcare data.” The program is expected to begin in 2025.

Apple’s AI Decisions Exclude Older iPhones

Forbes Share to FacebookShare to Twitter (6/15, Spence) reports Apple’s recent AI software launch at WWDC revealed that Apple Intelligence, bundled with iOS 18, will only be available on the iPhone 15 Pro and iPhone 15 Pro Max. This excludes millions of iPhones from accessing the latest technology. While iOS 18 will support iPhones from the past six years, only the latest models with the Apple Silicon A17 Pro chip will run Apple Intelligence. In contrast, Samsung’s Galaxy AI, launched in January 2024, supports both new and older models. Apple’s decision is criticized for limiting AI capabilities due to insufficient memory in older iPhones, compelling users to buy new devices to access AI features.

Thousands Of Indiana Students Participate In State Department Of Education’s AI Tutoring Program

Chalkbeat Share to FacebookShare to Twitter (6/17, Appleton) reports the Indiana Department of Education’s “first formal foray into artificial intelligence led to thousands of students working with AI tutors and learning from lesson plans created by AI.” A total of 112 schools “across all grade levels in 36 districts participated in the department’s nearly $2 million AI pilot program grant, which allowed each district to purchase an AI platform that could plan lessons, differentiate content for students depending on their abilities, as well as offer tutoring and feedback to students.” Secretary of Education Katie Jenner said the goal of the grant was to “leverage AI for the good,” as schools in Indiana and nationwide “grapple with how AI can be used ethically in the classroom amid concerns about academic integrity.” The pilot was funded “through a one-time $1.8 million allocation in federal pandemic relief, but some districts have elected to continue funding the platforms via the department’s Digital Learning Grant.”

Nvidia Bypasses Microsoft, Becomes World’s Most Valuable Company

Reuters Share to FacebookShare to Twitter (6/18, Randewich, Biswas) reports Nvidia “became the world’s most valuable company on Tuesday, dethroning tech heavyweight Microsoft...as its high-end processors play a central role in a scramble to dominate artificial intelligence technology.” The AP Share to FacebookShare to Twitter (6/18, Choe) says the development came after a “staggering run for Nvidia’s stock carried it to the market’s mountaintop... The S&P 500 added 0.3% to set an all-time high for the 31st time this year. The Nasdaq composite edged up by less than 0.1% to set its own record, while the Dow Jones Industrial Average added 56 points, or 0.1%.” However, beneath “that calm market surface, Nvidia was the star again. It rose again, this time up 3.5%. It was the strongest force pushing the S&P 500 upward, again. And it lifted its total market value further above $3 trillion, again.”

        The New York Times Share to FacebookShare to Twitter (6/18, Mickle, Rennison) calls Nvidia’s ascent “a testament to how much artificial intelligence has upended the world’s biggest companies.” The Washington Post Share to FacebookShare to Twitter (6/18, De Vynck, Lerman) explains Nvidia’s “computer chips and software are crucial to training the AI algorithms behind image generators and chatbots like OpenAI’s ChatGPT. As the tech and business worlds throw themselves into the AI boom, demand for the chips has skyrocketed, pushing Nvidia’s revenue up to $26 billion in the first quarter of this year, up from just $7.2 billion a year ago.” The Post adds that the AI boom “has been reshuffling the world’s biggest companies in the past two years. In January,” For example, “Microsoft surged past Apple to become the world’s most valuable company as investors have poured money into the hot technology.” The Wall Street Journal Share to FacebookShare to Twitter (6/18, Subscription Publication) provides similar coverage.

More Than Half Of Workers Say They’re Ready For Reskilling Amid AI Advances

HR Dive Share to FacebookShare to Twitter (6/18) reports that despite uncertainty “about generative AI developments, 57 percent of workers globally say they’re ready for reskilling and retraining in new roles to remain ahead in their career, according to a June 13 report from Boston Consulting Group, The Network, and Stepstone Group.” Three-quarters of workers “said they believe generative AI will bring some level of disruption to the workplace. At the same time, many remain confident; about 64 percent said they still hold the upper hand when negotiating for jobs.” Jens Baier, managing director and senior partner at BCG and leader of the firm’s work in HR excellence, said, “We are seeing a rapid evolution and maturing of employee views toward AI, and a crucial recognition that a commitment to continuous reskilling will ensure long-term employability.”

Elon Musk Foresees AI-Driven Changes In Marketing, Creativity, Journalism

MediaPost Share to FacebookShare to Twitter (6/18, Mandese) reports that in a discussion with WPP CEO Mark Read, Elon Musk urged large brand marketers to leverage platform X, highlighting its brand safety and potential for B2B marketing. Musk also predicted significant changes due to AI advancements, suggesting that AI could outperform humans in creativity. Musk’s Neuralink brain chip startup aims to facilitate “human/AI symbiosis”. He foresees an “age of abundance” where AI and robotics could render human work obsolete, potentially leading to a “crisis of meaning”. Musk also suggested AI could replace traditional journalism by aggregating social media insights.

Opinion: OpenAI’s Deal With Apple Sparks Renewed Concerns Over Creative Rights

In an op-ed for the Los Angeles Times Share to FacebookShare to Twitter (6/18), Mary Rasenberger, the CEO of the Authors Guild, writes that amid Apple’s partnership with OpenAI, concerns persist over the company’s use of creative professionals’ work without consent. Despite OpenAI’s announcement of Media Manager for 2025, claiming to give creators control over their work, critics argue it fails to address past ethical lapses where foundational AI models were built using artists’ content. The ongoing legal battles and pleas from writers, artists, and journalists underscore the urgent need for AI companies to respect creators’ rights and compensate them fairly for their intellectual property. Rasenberger concludes, “It’s time for creative professionals to stand together, demand what we are owed and determine our own futures.”

California Legislature Passes Bill That Would Protect Community College Faculty From AI Replacement

Inside Higher Ed Share to FacebookShare to Twitter (6/20, Coffey) reports the California legislature “voted unanimously for a bill to stop artificial intelligence (AI) bots from replacing community college faculty in the state.” The two-page bill passed on Friday “and was sent to the desk of California governor Gavin Newsom, who can sign or veto the bill, or allow it to become law in September by taking no action.” The bill states that the “instructor of record” for a community college course “shall be a person who meets the minimum qualifications to serve as a faculty member.” The process to “meet those minimum qualifications is extensive, involving approval from the Academic Senate and Board of Governors, and would exclude AI bots from being instructors.” An association governance committee “hatched the idea for the bill last September.”

AI Takes On Human Tasks At Rapid Pace

CNN Share to FacebookShare to Twitter (6/20, Egan) reports more than 60% of large US firms plan to use artificial intelligence within the next year to automate tasks previously performed by employees, according to a survey by Duke University and the Federal Reserve Banks of Atlanta and Richmond. The tasks range from financial reporting to crafting job posts. The survey also found that nearly one in three firms of all sizes plan to use AI in the next year. However, experts believe that AI adoption will not cause mass job loss immediately.

Apple Struggles To Launch AI Features In China

Insider Share to FacebookShare to Twitter (6/20, Chowdhury) reports that Apple is struggling to launch its new artificial intelligence (AI) features, including the OpenAI chatbot ChatGPT, in China. This is due to regulatory restrictions that require companies to gain approval from Beijing before offering AI chatbots in the country. Apple has reportedly been in discussions with local companies, including Baidu, Alibaba, and Baichuan AI, about a possible partnership to facilitate the launch. The inability to launch these features could impact Apple’s competitiveness in China, where rivals have already introduced smartphones with AI features.

Tech Companies Pursue Renewable Energy Goals As AI Drives Electricity Demand

The Washington Post Share to FacebookShare to Twitter (6/21, Halper, O'Donovan) reports Microsoft plans to harness atomic fusion for power by 2028, aiming to support the AI revolution and transition to green energy. Critics doubt this timeline. The AI boom drives increased electricity demand, leading to a resurgence in fossil fuel use. Microsoft, Amazon, Google, and Meta aim to erase emissions as early as 2030 but face challenges. Microsoft stated, “If we work together, we can unlock AI’s game-changing abilities to help create the net zero, climate resilient and nature positive works that we so urgently need.” However, critics argue utilities backfill green energy purchases with fossil fuels. Amazon claims to be “the world’s largest corporate purchaser of renewable energy for four straight years.” Despite tech companies’ clean energy claims, the AI industry’s energy demands are causing delays in fossil fuel plant retirements and expansions in natural gas use.

OpenAI Co-Founder Announces Launch Of New AI Firm

CNBC Share to FacebookShare to Twitter (6/19, Haselton, Goswami) reports OpenAI co-founder Ilya Sutskever announced on Wednesday the launch of his new AI company, Safe Superintelligence (SSI), focusing exclusively on AI safety. Sutskever, who left OpenAI last month, will lead SSI alongside Daniel Gross and Daniel Levy. The company has offices in Palo Alto, California, and Tel Aviv. Sutskever previously co-led OpenAI’s Superalignment team, which was dissolved after his departure. He has expressed regret over his role in the attempted ouster of Sam Altman.

How Schools Can Address Societal Biases In AI Tools

Education Week Share to FacebookShare to Twitter (6/20, Klein) reports as artificial intelligence “transforms K-12 education – providing everything from lesson planning assistance for overworked teachers to chatbot tutors for students – educators must be aware of how societal biases reflected in the data that underpins AI can shape its responses, experts say.” The biases can include “whether a student is an English learner, has learning and thinking differences, or even whether they are performing on grade-level.” If they “blindly trust AI’s recommendations, educators risk using cutting edge technology to double down on the very types of discrimination schools are working to move past. And putting products that are untested for bias into classrooms could come at a high cost for schools and ed-tech developers, warned Nathan Kriha, a P-12 policy analyst for The Education Trust, a civil rights organization.” Often, the bias problems emerge “because the technology doesn’t have nearly as much information about one group as it does about another.”

New Jersey Department Of Education Unveils Resources To Support AI In Schools

Chalkbeat Share to FacebookShare to Twitter (6/20, Gomez) reports as part of Gov. Phil Murphy’s (D) call to create an “artificial intelligence moonshot” in New Jersey, “the state’s department of education unveiled a set of resources last week aimed at helping educators understand, implement, and manage artificial intelligence in schools, state education officials said.” The materials do not outline “strict regulations on how to use AI in education but they are New Jersey’s first guidance for school districts to ‘responsibly and effectively’ integrate AI-powered technology in the classroom, and incorporate tools to facilitate administrative tasks in schools, according to a state department of education press release.” Education experts continue to note “that safety and privacy concerns should remain a top priority as AI expands in schools,” and the state’s new artificial intelligence resources “come as Newark Public Schools takes steps to incorporate more AI in classrooms and surveillance systems.”

Professional Development Workshops Help Teachers, School District Leaders Understand AI Tools

The Hechinger Report Share to FacebookShare to Twitter (6/20, Salman) reports, “Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources.” The opportunity was designed by “technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.” Educators say “they want opportunities like this one: According to a recent report from nonprofit Educators for Excellence, many teachers say they are hesitant to use AI in the classroom but would feel more comfortable with training about it.” Playlab.ai co-founder Yusuf Ahmad “said school districts should provide professional development opportunities for teachers on AI, teaching them to ask tough questions about the technology in their classrooms.”

Daniel Tauritz

unread,
Jun 29, 2024, 9:48:56 AM6/29/24
to ai-b...@googlegroups.com

Record Labels Sue AI Song-Generators for Copyright Infringement

Major record companies are suing AI song-generators Suno and Udio for copyright infringement, alleging the startups are exploiting the recorded works of artists. The Recording Industry Association of America announced the lawsuits Monday, brought by labels including Sony Music Entertainment, Universal Music Group, and Warner Records.
[
» Read full article ]

Associated Press (June 24, 2024)

 

AI Companies Bypassing Web Standard to Scrape Publisher Sites

In a letter to publishers, content licensing startup TollBit said several AI companies are scraping content for use in generative AI systems by sidestepping the Robots Exclusion Protocol. The protocol was created in the mid-1990s to avoid overloading websites with Web crawlers and has become a key tool that publishers have used to block tech companies from using their content without permission in generative AI systems.
[ » Read full article ]

Reuters; Katie Paul (June 21, 2024)

 

U.N. Launches Global Principles to Combat Online Hate

United Nations Secretary-General António Guterres on Monday released global principles that call on tech companies, advertisers, media, and other organizations to avoid using, supporting, or amplifying disinformation and hate speech. The principles, created after consultations with 193 U.N. member nations, youth leaders, academia, the media, and civil society, also say AI applications should be designed, deployed, and used safely, securely, responsibly and ethically and that AI developers should uphold human rights.
[ » Read full article ]

Associated Press; Edith M. Lederer (June 24, 2024)

 

UNESCO Sounds Alarm Over AI-Fueled Holocaust Denial

Citing the use of chatbots by hackers to spread Nazi ideology and fuel Holocaust denial, the United Nations Educational, Scientific and Cultural Organization (UNESCO) is calling for ethical safeguards on AI, and for schools to highlight the risks of AI-generated content. The report cites the use of Google's Gemini to produce images of ethnically diverse Nazi soldiers, ChatGPT's invention of "Holocaust by drowning," and the use of Google's Bard to create witnesses to support falsehoods about Nazi massacres.
[ » Read full article ]

France 24 (June 18, 2024)

 

AI Is Revolutionizing Drug Development

Startups are increasingly leveraging AI in drug discovery and development using models that can identify potential drug candidates based on patterns detected in the specialized data on which they are trained. AI-designed drug molecules are transformed into physical molecules, whose interactions with target proteins are tested, with the results used by the AI model to improve its next design. A number of companies have built automated labs to produce data to train such AI models.

[ » Read full article *May Require Paid Registration ]

The New York Times; Steve Lohr (June 17, 2024)

 

U.S. Moves Ahead with Plan to Restrict Chinese Technology Investments

The U.S. Department of the Treasury proposed rules on June 21 that would prohibit U.S. investment in Chinese companies that develop semiconductors, quantum computers, and AI systems. Investors would be required to disclose certain types of investments, while others would be explicitly prohibited. The Treasury Department would have the authority to force divestments and could refer violators to the U.S. Department of Justice for criminal prosecution.
[ » Read full article ]

The New York Times; Alan Rappeport (June 21, 2024)

 

Now Narrating the Olympics: AI-Al Michaels

NBCUniversal and the streaming platform Peacock said subscribers will have access to customized, daily highlight reels from the Summer Olympics presented in the AI-generated voice of broadcaster Al Michaels, who gave permission for his voice to be used. Subscribers can choose which events will be included in the daily highlight reel and the type of highlights to be included, such as viral clips, gold medalists, or elimination events.


[
» Read full article *May Require Paid Registration ]

The New York Times; John Koblin (June 26, 2024)

 

Apple Explores Potential AI Partnership With Meta

TechCrunch Share to FacebookShare to Twitter (6/23, Ha) reports Apple is expanding its AI capabilities through partnerships, including with OpenAI to integrate ChatGPT into Siri. According to The Wall Street Journal, Apple is also in talks with Meta for a similar deal, though discussions remain tentative. Apple’s AI strategy focuses on practical enhancements, like writing suggestions and custom emojis, within existing products. The company also aims to “leverage partnerships to go beyond the capabilities of its own AI models.” Also reporting is the Wall Street Journal Share to FacebookShare to Twitter (6/23, Subscription Publication).

AI Recording Apps Stir Privacy, Efficacy Concerns In University Classrooms

Inside Higher Ed Share to FacebookShare to Twitter (6/24, Coffey) reports Georgetown University Law Center “announced last year it would be using Otter, an artificial intelligence-powered transcription service,” which is raising concerns about privacy, consent, and efficacy. The decision to replace human note-takers with Otter was met with resistance from students, including one who found the AI service “completely unworkable.” Professor Marc Watkins from the University of Mississippi highlighted that many faculty members are unaware of these AI devices being sold directly to students via social media. Questions are also being raised about the impact of AI transcription services on long-term learning. Despite these concerns, some students are embracing the technology as a helpful tool, while others are calling for universities to work with AI transcription companies to address these issues.

Amazon Developing AI Chatbot To Compete With ChatGPT

Insider Share to FacebookShare to Twitter (6/24, Kim) reports Amazon is developing an AI chatbot named Metis to compete with ChatGPT. Metis, powered by Amazon’s Olympus AI model, aims to provide text and image-based answers and perform complex tasks. Amazon CEO Andy Jassy is directly involved in the project, which is part of Amazon’s AGI team led by SVP Rohit Prasad. The project uses resources from Alexa’s upgraded version, “Remarkable Alexa.” Amazon plans to launch Metis in September, coinciding with a major Alexa event.

        SiliconANGLE Share to FacebookShare to Twitter (6/24, Riley) reports Metis will also function as an AI agent, autonomously performing tasks by analyzing data and making decisions. Amazon has faced criticism for lagging in the AI race, a claim echoed by founder Jeff Bezos, who questioned why more AI firms weren’t using AWS. Despite this, Amazon has been advancing its AI services. In November, Amazon previewed Amazon Q, a customizable generative AI assistant, and in May, it was reported that Amazon is developing a new version of Alexa powered by its Titan AI model. To attract more AI companies, AWS committed $230 million to free cloud credits for generative AI startups. AWS CEO Mark Garman emphasized in an interview that AI is a significant investment area for AWS, aimed at helping customers transform their businesses.

Apple Rejects Meta’s AI Chatbot Integration

PYMNTS Share to FacebookShare to Twitter (6/24) reports Apple declined Meta’s offer to integrate its AI chatbot into the iPhone after brief talks in March, citing a report from Bloomberg. Apple instead partnered with OpenAI and plans to offer Alphabet’s Gemini in the future. Apple criticized Meta’s privacy policies and viewed Meta as a competitor in AI and other tech fields, according to Bloomberg’s report. Apple announced its “Apple Intelligence” AI features on June 10, aiming to enhance the iPhone, Mac, and iPad experience. The partnership with OpenAI will integrate ChatGPT 4o into Apple’s operating systems, boosting Siri and writing tools.

AI Data Centers Strain Global Power Grids

Bloomberg Share to FacebookShare to Twitter (6/24, Zafra, Gura, Subscription Publication) features a transcript of a discussion between host David Gura and reporter Josh Saul about how the rapid expansion of AI data centers is significantly increasing global power consumption, straining local grids, and impacting energy prices. In the US, data centers are projected to use 8% of total power by 2030, up from 3% in 2022. Saul highlighted that in Loudoun County, Virginia, residents oppose the proliferation of data centers due to their massive energy demands. AI’s insatiable power needs are causing power companies to delay the retirement of coal and gas plants, challenging climate goals. Saul noted, “Ireland has attracted so many data centers from the big tech companies, you know, Microsoft, Amazon, and others, that the data centers are forecast to consume a third of the country’s energy by 2026.”

Google Expands Gemini AI Tool To Teens With School Accounts

TechCrunch Share to FacebookShare to Twitter (6/24, Malik) reports Google is expanding its AI technology, Gemini, to teen students using school accounts. The company will also give educators access to the tool. Google aims to prepare teens for a future with generative AI by providing “real-time feedback” and promoting information literacy. Google assures that student data won’t be used to train AI models and has implemented guardrails to prevent inappropriate responses. Gemini will be available to teen students through their Google Workspace for Education accounts in over 100 countries, though the functionality will be off by default until admins enable it.

Study Finds Teachers Embrace AI Despite Challenges

Education Week Share to FacebookShare to Twitter (6/24, Klein) reports that a study conducted by Zafer Unal, an education professor at the University of South Florida, revealed that teachers are generally optimistic about the use of artificial intelligence (AI) in education. Unal “asked 140 teachers for their thoughts on artificial intelligence,” and found that most are already using AI tools and do not fear its integration into their teaching methods. However, “at least half the teachers said they didn’t have the training or knowledge they needed to implement AI effectively with students.” Concerns were also raised about potential “privacy problems of generative AI’s thirst for data” and the “high cost of some AI tools.” In response to these challenges, “Unal and his research partner took those concerns as a challenge and decided to create a free, educator-friendly AI platform for schools: Teacherserve.com.”

Indiana Explores AI Integration In Classrooms

WFYI-FM Share to FacebookShare to Twitter Indianapolis (6/24, Fradette) reports, “Thousands of Indiana students and classroom educators took on an assignment to explore artificial intelligence, or AI, this last school year.” The state’s Department of Education “tasked school districts to apply for grant money to try out AI concepts.” The state targeted “around $2 million in pandemic relief funds to conduct the pilot,” which aimed to decrease teacher workloads and increase one-on-one tutoring for students. However, specific funding for the venture has now ended. Despite this, “some school districts and educators will be able to utilize a Digital Learning Grant through IDOE to continue AI programming and other digital learning opportunities.” Feedback from the pilot was somewhat positive, with more than half of the educators surveyed stating that “artificial intelligence had a positive influence on student learning.”

Canada Lagging Behind On AI Commercialization

The Toronto Star (CAN) Share to FacebookShare to Twitter (6/26) reports that when it “comes to turning knowledge of artificial intelligence into companies, products and investment, Canada is lagging behind – and, some experts argue, actively shooting itself in the foot.” While Prime Minister Justin Trudeau said Canada is “fighting to keep our skin in the game,” and that the nation has the ingredients it needs to allow AI to thrive, Council of Canadian Innovators President Benjamin Bergen says Canadians have “fallen far behind.” Ottawa “spent a ‘tremendous amount on the talent side of the equation,’ he said recently, but not on converting it ‘into building companies.” Bergen “said the government has ‘institutionalized the transfer of our AI intellectual property to foreign firms.’” He argued that the nation’s AI strategy must focus on Canada owning its own IP if it plans to benefit from commercialization.

OpenAI Delaying Roll Out Of Voice Featuring For ChatGPT

The Washington Post Share to FacebookShare to Twitter (6/25) reports OpenAI announced Tuesday it would delay the launch of voice and emotion-reading features for ChatGPT, citing the need for additional safety testing. Initially set for late June, the release is postponed by a month, with full availability to paying users in the fall. The delay aims to improve content moderation and reliability.

        OpenAI CTO Discusses AI’s Impact On Creativity. CNBC Share to FacebookShare to Twitter (6/26, DeVon) reports OpenAI CTO Mira Murati discussed AI’s potential to expand human creativity and its possible disruptive impact on creative jobs during a June 19 event at Dartmouth College’s school of engineering. Murati stated, “I expect that we will collaborate with it and it’s going to make our creativity expand.” However, she acknowledged that AI tools might eliminate some creative roles, noting, “Some creative jobs maybe will go away.” Murati emphasized that OpenAI “gives people a lot of control on how their data is used” and is developing tools to compensate data contributors.

California Weighs Safety Testing, Certification For Advanced AI Models

Axios Share to FacebookShare to Twitter (6/26) says California lawmakers are debating a bill that would mandate safety testing and certification for advanced AI models, with provisions for legal action against harmful AI technologies. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, authored by State Sen. Scott Wiener, will move to the state Senate Judiciary Committee next week. Wiener said California is weighing the legislation in the absence of federal rules, adding, “I hope I’m wrong, but the reality is that in the year 2024, there’s no federal data privacy law. In 2024, other than banning TikTok, Congress has done nothing on social media.”

How Educators Can Better Understand AI’s Impact On Black Students

Education Week Share to FacebookShare to Twitter (6/26, Langreo) reports, “As the use of artificial intelligence spreads in K-12 education, it’s critical to examine the implications of the technology for those who have been historically marginalized, according to a panel of tech leaders, educators, and mental health experts.” AI tools can “generate responses based on outdated information or fabricate facts,” and they can also “generate biased responses and amplify harmful stereotypes about people who are already disadvantaged.” During a June 26 panel discussion at the International Society for Technology in Education conference, experts said that to “become part of the solution of creating more inclusive tools, educators need to know first what the problem is.” Educators have a “responsibility to know about the effects of technology on the children they teach, the panelists said,” meaning that district leaders and policymakers “need to support teachers in learning more about AI.”

Google Executive Shares How AI Can Benefit Student Learning

Education Week Share to FacebookShare to Twitter (6/26, Langreo) reports last month, “Google announced that Gemini – its generative AI model – will be available as an add-on for educational institutions using its Workspace for Education product.” As other ed-tech companies “have also announced AI features for their education products,” the Google announcement and “tech developments at other companies are happening as more educators are trying out AI-driven tools.” During the International Society for Technology in Education conference, EdWeek spoke with Jennie Magiera, Google’s global head of education impact, “about the role of AI in education, the technology’s limitations, and educators’ concerns about it.” She said, “What we’re trying to do at Google is elevate educators and help them personalize learning for every student,” and now that AI technologies “are becoming more advanced and more prevalent through all of our products, that hope and that dream is becoming more real than ever.”

Amazon Expands Use Of Generative AI In Finance Teams

The Wall Street Journal Share to FacebookShare to Twitter (6/27, Maurer, Subscription Publication) reports Amazon is increasing its use of generative AI within its finance teams, aiding in areas like fraud detection, contract review, and financial forecasting. AWS CFO John Felton highlighted AI’s role in fraud detection, saying, “This enhanced fraud detection capability not only protects our bottom line but also helps us ensure compliance.” Amazon expects significant capital expenditures on generative AI this year, with AWS contributing 17.5% of Amazon’s $143.31 billion quarterly revenue.

dtau...@gmail.com

unread,
Jul 6, 2024, 8:16:34 PM7/6/24
to ai-b...@googlegroups.com

AI In an Age of Killer Robots

The widespread availability of off-the-shelf devices, easy-to-design software, powerful automation algorithms, and specialized AI microchips is fueling the potential for an era of killer robots. Playing out in Ukraine, the start of this era is characterized by weapons such as drone systems that use autonomous target tracking and machine guns that can use AI-powered targeting, making human judgment increasingly tangential.

[ » Read full article *May Require Paid Registration ]

The New York Times; Paul Mozur; Adam Satariano (July 2, 2024)

 

$1M Prize for AI That Can Solve Puzzles Simple for Humans

Google and software firm Zapier have launched a $1-million prize fund for AI that can complete the Abstraction and Reasoning Corpus (ARC) at human level or better. The ARC test, developed by Google's François Chollet, requires limited reasoning skills (object permanence, goal-directedness, counting, and basic geometry) to identify the pattern linking paired grids of pixelated shapes. A grand prize of $500,000 will be awarded to the developers of an AI system that can score 85% on the test, 1 percentage point more than the average human.

[ » Read full article *May Require Paid Registration ]

New Scientist; Alex Wilkins (June 25, 2024)

 

AI Accelerates Software Development to Breakneck Speeds

A GitLab survey of 5,315 executives and IT professionals revealed that 78% of respondents already are using AI in software development or plan to do so in the next two years, marking a year-over-year increase of 64%. Forty-seven percent said they used AI for code generation and code suggestion/completion, as well as code explanations (40%), summaries of code changes (38%), chatbots allowing users to ask documentation questions using natural language (35%), and summaries of code reviews (35%).
[ » Read full article ]

ZDNet; Joe McKendrick (June 26, 2024)

 

AI 'Friend' for Public School Students Falls Flat

The AI startup AllHere collapsed just months after the Los Angeles school district hired it to build a chatbot for students. The "Ed" chatbot was intended to help students obtain academic and mental health resources, provide attendance information and test scores to parents, and detect and respond to students’ various emotions. Los Angeles had agreed to pay AllHere up to $6 million to develop Ed.

[ » Read full article *May Require Paid Registration ]

The New York Times; Dana Goldstein (July 1, 2024)

 

Surveys Reveal How Students, Professors Are Using AI

Inside Higher Ed Share to FacebookShare to Twitter (6/28, Mowreader) reported June research from Tyton Partners “found three in five students say they are regular users of AI compared to 36 percent of instructors.” Research also points to a “growing acceptance of the technology among higher education practitioners both inside and outside the classroom. Tyton’s study and newly released data from EAB highlight where generative AI is being applied to benefit students’ academic and overall success.” Tyton surveyed students, instructors and administrators, while EAB’s research “pulls from 220 or so student success professionals and executive leadership.” Among those using AI, “faculty members are most likely to apply it for course design and content-related purposes, with 91 percent of faculty using generative AI at least monthly.” Half of the student respondents “say they would be likely or extremely likely to use generative AI tools, even if they were banned by their instructor.”

Studies Suggest Google’s Gemini Models Struggle With Large Amounts Of Data

TechCrunch Share to FacebookShare to Twitter (6/29, Wiggers) reported, “Two separate studies investigated how well Google’s Gemini models and others make sense out of an enormous amount of data” and “both find that Gemini 1.5 Pro and 1.5 Flash struggle to answer questions about large datasets correctly; in one series of document-based tests, the models gave the right answer only 40% 50% of the time.” Study co-author Marzena Karpinska said, “While models like Gemini 1.5 Pro can technically process long contexts, we have seen many cases indicating that the models don’t actually ‘understand’ the content.”

YouTube Updates AI Content Takedown Policy

TechCrunch Share to FacebookShare to Twitter (7/1, Perez) reports YouTube quietly updated its policy in June, allowing individuals to request the removal of AI-generated content that simulates their face or voice as a privacy violation. The policy requires first-party claims and includes exceptions. YouTube will review complaints based on several factors and give uploaders 48 hours to respond. The policy change, part of YouTube’s responsible AI agenda, was not widely advertised.

California Considers AI Regulation Bill

The AP Share to FacebookShare to Twitter (7/2) reports California lawmakers are considering a bill that would require AI companies to implement safety measures to prevent potential threats, such as wiping out the electric grid or aiding in chemical weapons development. The bill, authored by Democratic state Sen. Scott Wiener, aims to set safety standards for AI models costing over $100 million to train. Meta VP and Deputy Chief Privacy Officer Rob Sherman said, “The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation.” The proposal could also drive companies out of state and create a new state agency to oversee AI developers.

Los Angeles Unified’s AI Chatbot Project Faces Setbacks

The New York Times Share to FacebookShare to Twitter (7/1, Goldstein) reports AI platform Ed, developed by AllHere, was supposed to be an “educational friend” to “half a million students in Los Angeles public schools,” aimed to assist students with academic and mental health resources. Superintendent Alberto Carvalho had high hopes for Ed, promising that it would “democratize” and “transform education.” However, two months after Carvalho’s April speech promoting the software, AllHere’s founder left, and the company furloughed most staff due to financial issues. Despite the setbacks, a simplified version of Ed remains available in 100 priority schools. The district’s goal “is for the chatbot to be available in September,” pending AllHere’s acquisition. Anthony Aguilar, chief of special education for the district, noted Ed was part of Carvalho’s plan to address post-pandemic educational challenges.

        The Seventy Four Share to FacebookShare to Twitter (7/1, Keierleber) reports as the eight-year-old startup “rolled out Los Angeles Unified School District’s flashy new AI-driven chatbot,” a former company executive “was sending emails to the district and others that Ed’s workings violated bedrock student data privacy principles. Those emails were sent shortly before The 74 first reported last week that AllHere, with $12 million in investor capital, was in serious straits.” A former senior director of software engineering at AllHere “who was laid off in April” told “district officials, its independent inspector general’s office and state education officials that the tool processed student records in ways that likely ran afoul of L.A. Unified’s own data privacy rules and put sensitive information at risk of getting hacked.”

Google’s Emissions Rise, Despite Net Zero Goal By 2030, Due To AI Demands

The AP Share to FacebookShare to Twitter (7/2, St. John) reports that Google’s emissions rose 13% in 2023, hindering its goal of becoming net zero by 2030. Since its baseline year of 2019, emissions have surged by 48%. Google attributes this increase to the high electricity demands of artificial intelligence (AI) and data centers, which rely heavily on fossil fuels. Despite these challenges, Google remains committed to its net zero target and aims to use 100% clean energy by 2030. Experts suggest Google should invest more in renewable energy and collaborate with cleaner companies. The company achieved an average of 64% carbon-free energy for its data centers and offices last year.

Apple Gains Observer Role On OpenAI Board As Part Of AI Partnership

Bloomberg Share to FacebookShare to Twitter (7/2, Subscription Publication) reports Apple will gain an observer role on OpenAI’s board as part of a landmark agreement announced last month. Phil Schiller, head of Apple’s App Store, will take on this position. The role allows attendance at board meetings without voting rights, providing insights into decision-making. The board seat will put Apple on-par with Microsoft, a long-time supporter of OpenAI. However, “having Microsoft and Apple sit in on board meetings could create complications for the tech giants, which have been rivals and partners over the decades. Some OpenAI board meetings will likely discuss future AI initiatives between OpenAI and Microsoft – deliberations that the latter company may want Schiller excluded from.”

More Teachers Are Embracing AI Grading Tools

The Wall Street Journal Share to FacebookShare to Twitter (7/2, Randazzo, Subscription Publication) reports that teachers are increasingly using new AI grading tools to provide faster feedback and reduce bias in assessments. While some educators find these tools beneficial for improving student writing, others argue they are unreliable for high-stakes grading. AI startups offer grading in subjects like English, history, math, and science. Despite mixed reviews, teachers adjust AI feedback to suit their needs, ensuring human oversight in final grading.

AI Advances Enhance Early Disease Detection

Axios Share to FacebookShare to Twitter (7/3, Reed) reports that artificial intelligence is revolutionizing diagnostic tests by identifying diseases earlier. Advances in algorithms, large datasets, and cloud computing are personalizing, predicting, and prescribing diagnostic tests. However, issues around data transparency and representation must be addressed to ensure accurate AI algorithms. At the moment, “it’s difficult to assess the quality and accuracy of generative AI’s recommendations in particular, so it needs to be limited to lower-risk applications.” And, “while AI may reduce false positives, it also runs the risk of overdiagnosing a disease that may not have turned into a problem.”

Employees Fear AI Will Make Jobs Obsolete

LexBlog Share to FacebookShare to Twitter (7/2, Vogel) reports on how employers can combat their employees fears around AI. The author cites an EY study which found that 75 percent of employees said they are concerned AI will make certain jobs obsolete, and about two-thirds (65 percent) said they are anxious about AI replacing their job. About half (48 percent) of respondents “said they are more concerned about AI today than they were a year ago, and of those, 41 percent believe it is evolving too quickly, EY’s AI Anxiety in Business Survey report stated.” EY said in its report, “The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement.”

Undisclosed Hack Of OpenAI Sparked Internal Fears Of Vulnerability To China

The New York Times Share to FacebookShare to Twitter (7/4, Metz) reports on a previously undisclosed 2023 incident in which a hacker “gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s AI technologies.” The hacker “lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies... but did not get into the systems where the company houses and builds its artificial intelligence.” Executives declined to share the news publicly as they “did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government.” After the breach, Leopold Aschenbrenner, an OpenAI technical program manager “focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.”

Google’s AI Expansion Increases Emissions

CNN International (7/3, Duffy) reports that Google’s greenhouse gas emissions have surged by 48% since 2019 due to the energy demands of its AI systems. The company’s annual environment report attributes the increase to higher data center energy consumption and supply chain emissions. Google aims to achieve net-zero emissions by 2030, but acknowledges the challenge due to AI’s growing energy needs. The company is investing in clean energy and aims to replenish 120% of the freshwater used in its data centers by 2030.

Reply all
Reply to author
Forward
0 new messages