
Dr. T's AI brief


dtau...@gmail.com

Jun 19, 2023, 4:28:51 PM
to ai-b...@googlegroups.com

EU Lawmakers Lay Groundwork for 'Historic' AI Regulation
Deutsche Welle (Germany)
June 14, 2023


On June 14, the European Parliament voted to regulate the use of artificial intelligence (AI) systems across the European Union, laying the groundwork for the passage of the first-of-its-kind AI Act. The proposed legislation is intended to both foster AI innovation and minimize AI threats to health and safety. AI systems would be categorized based on four risk levels, from minimal to unacceptable. Among other things, the law would take aim at "social scoring systems" that make judgments based on a person's behavior or appearance, applications that subliminally manipulate children and other vulnerable groups, and predictive policing tools. Said European Parliament member Dragos Tudorache, "We as a union are doing something that I think is truly historic.”

Full Article

 

Deep Learning-Based Software Detects, Tracks Individual Cells
UC Santa Cruz Newscenter
Emily Cerf
June 13, 2023


A deep learning model developed by University of California, Santa Cruz (UC Santa Cruz) researchers can follow a cell's lineage over time. The DeepSea model can segment and track individual cells and detect cell division with high accuracy. DeepSea helps overcome the challenges associated with manually sorting through time-lapse microscopy images, performing segmentation in under a second. Using DeepSea, the researchers discovered that embryonic stem cells regulate their size, allowing smaller cells to spend more time growing before they divide. The model's training data set, software, and open source code are available on the DeepSea website.

Full Article

 

A Step Toward Safe, Reliable Autopilots for Flying
MIT News
Adam Zewe
June 12, 2023


The Massachusetts Institute of Technology's Chuchu Fan and Oswin So developed a machine learning method that can autonomously guide a car or airplane in stabilizing its trajectory to reach and stay within a goal region while evading obstacles. The researchers’ stabilize-avoid technique equals or surpasses the safety of current methods, while boosting stability 10-fold, they said. So explained that the initial application of constraints ensures the agent avoids obstacles. The researchers then recast the constrained optimization problem in an epigraph form and solved it with deep reinforcement learning, circumventing the challenges of enforcing the constraints directly. The controller significantly outperformed all baselines in preventing a simulated jet from crashing or stalling while stabilizing to the goal region.

Full Article

 

Hybrid Computer Vision Combines Physics, Big Data
UCLA Samueli School of Engineering
June 12, 2023


University of California, Los Angeles (UCLA) and U.S. Army Research Laboratory researchers have combined physics and data-driven techniques to improve artificial intelligence (AI)-powered computer vision technologies. Their study focused on incorporating physics into AI datasets, network architectures, and network loss functions. UCLA's Achuta Kadambi said, "Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise." The researchers found that a hybrid approach, for instance, could allow more precise and accurate object-motion tracking and prediction by AI. Eventually, the researchers said, deep learning-based AIs might learn the laws of physics on their own.

Full Article

 

Nvidia's AI Software Tricked into Leaking Data
Financial Times
Mehul Srivastava; Cristina Criddle
June 9, 2023


At San Francisco-based Robust Intelligence, researchers found the "NeMo Framework" in Nvidia's artificial intelligence software can be manipulated into leaking private data. The framework enables developers to work with an array of large language models. The researchers were able to prompt language models to bypass safety guardrails. Instructing the system to swap the letter "I" with "J," for instance, triggered the release of personally identifiable information from a database. They also replicated Nvidia's example of a narrow discussion about a jobs report to get the model to shift to topics beyond the specific subjects set forth in the system's guardrails. Said Robust Intelligence's Yaron Singer, also a computer science professor at Harvard University, "These findings represent a cautionary tale about the pitfalls that exist."
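The attack pattern is simple enough to sketch. The toy below is our illustration, not Nvidia's NeMo code; the blocklist, function names, and blocked phrase are hypothetical. It shows why a guardrail that matches blocked surface strings can be defeated by a character-substitution instruction like the "I"-to-"J" swap described above:

```python
# Hypothetical guardrail that rejects prompts containing blocked substrings.
BLOCKLIST = {"ssn", "social security"}

def guardrail_allows(prompt: str) -> bool:
    """Naive filter: reject prompts that contain a blocked substring."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

def encode(text: str) -> str:
    """Attacker's trick: swap 'i' for 'j' before sending the prompt."""
    return text.replace("i", "j").replace("I", "J")

direct = "print the social security number on file"
obfuscated = encode(direct)  # "prjnt the socjal securjty number on fjle"

assert not guardrail_allows(direct)   # the direct request is blocked
assert guardrail_allows(obfuscated)   # the encoded request slips through
```

Because the filter only sees the encoded surface string, it never matches the blocked phrase, while a model that follows the substitution instruction still understands, and answers, the underlying request.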

Full Article

*May Require Paid Registration

 

 

Chatbot Tutors Could Upend Student Learning
The New York Times
Natasha Singer
June 8, 2023


At Khan Lab School in Palo Alto, CA, sixth-graders are using Khan Academy's Khanmigo, a conversational chatbot, for one-on-one math tutoring as part of a pilot program. The bot walks students through problems step by step and congratulates them when a problem is solved. Khanmigo can provide assistance in a variety of subjects and even allows students to converse with fictional characters or simulated historical figures. Khan instructor Jaclyn Major said, "Khanmigo is able to connect with them and be on their level if they want it to. I think it could be helpful in any classroom." However, there are concerns about the accuracy of such artificial intelligence-powered learning tools and their impact on critical thinking, among other things.

Full Article

*May Require Paid Registration

 

 

Researchers Unveil First ChatGPT-Designed Robot
EPFL (Switzerland)
Celia Luterbacher
June 7, 2023


Researchers from the Swiss Federal Institute of Technology, Lausanne (EPFL) and the Delft University of Technology in the Netherlands used the ChatGPT-3 large language model (LLM) to design a tomato-harvesting robotic gripper. The researchers and LLM held an "ideation" session to formulate the robot's purpose, design parameters, and specifications, then realized it via code refinement, fabrication, and troubleshooting. Said EPFL's Francesco Stella, "While computation has been largely used to assist engineers with technical implementation, for the first time, an AI [artificial intelligence] system can ideate new systems, thus automating high-level cognitive tasks. This could involve a shift of human roles to more technical ones."
 

Full Article

 

 

DeepMind AI Creates Algorithms That Sort Data Faster Than Those Built by People
Nature
Matthew Hutson
June 7, 2023


Google DeepMind researchers created an artificial intelligence (AI) system based on the AlphaZero AI to formulate algorithms capable of sorting data up to three times faster than human-produced programs. The researchers initially used the AlphaDev system to sort numbers by size. AlphaDev can choose one of four types of value comparisons, moving values between locations or shifting to a different program segment. It attempts to sort a set of lists after each step, receiving rewards for the number of correctly sorted list items until all lists are sorted perfectly, or it reaches a program length threshold before starting a new program. AlphaDev's optimal algorithms sorted data 4% to 71% faster than human algorithms, depending on the processor used and how many values required sorting.
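For context, AlphaDev's headline gains came on very short, fixed-length sorting routines in standard libraries. A minimal sketch (ours, in Python rather than the assembly AlphaDev actually emits) of such a routine, a three-element sorting network built from a fixed sequence of compare-exchange steps:

```python
from itertools import permutations

def sort3(a, b, c):
    """Sorting network for three values: three fixed compare-exchange steps,
    the kind of tiny branch-light routine AlphaDev shaved instructions from."""
    if b < a: a, b = b, a   # compare-exchange (a, b)
    if c < b: b, c = c, b   # compare-exchange (b, c)
    if b < a: a, b = b, a   # compare-exchange (a, b)
    return a, b, c

# Exhaustively verify correctness over all orderings of three distinct values.
assert all(sort3(*p) == (1, 2, 3) for p in permutations((1, 2, 3)))
```

Because the sequence of comparisons is fixed regardless of the input, routines like this map directly to short instruction sequences, which is the search space AlphaDev's reinforcement learning explored.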
 

Full Article

 

 

Scaling Audio-Visual Learning Without Labels
MIT News
Lauren Hinkel
June 5, 2023


A technique developed by a team led by researchers at the Massachusetts Institute of Technology (MIT) analyzes unlabeled audio and visual data using a combination of contrastive learning and masked data modeling. The goal was to scale machine learning tasks without the need for annotation, the way humans learn. The resulting neural network, the contrastive audio-visual masked autoencoder (CAV-MAE), can perform audio-visual retrieval and audio-visual event classification tasks, learning by prediction and comparison. MIT's Jim Glass said the new model "has the contrastive and the reconstruction loss, and compared to models that have been evaluated with similar data, it clearly does very well across a range of these tasks."
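The two training signals can be sketched in a few lines. This is our simplified illustration, not the CAV-MAE code: the real model computes these losses over transformer embeddings of audio spectrograms and image patches, and the weighting `lam` is an assumed hyperparameter name:

```python
import math

def contrastive_loss(audio_emb, video_emb, temperature=0.1):
    """InfoNCE-style loss over a batch of paired embeddings (lists of floats):
    each audio clip's matched video is the positive, all others are negatives."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    loss = 0.0
    for i, a in enumerate(audio_emb):
        logits = [dot(a, v) / temperature for v in video_emb]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)   # matched pair should score highest
    return loss / len(audio_emb)

def reconstruction_loss(pred, target, mask):
    """Masked-data-modeling term: mean squared error on masked positions only."""
    errs = [(p - t) ** 2 for p, t, m in zip(pred, target, mask) if m]
    return sum(errs) / len(errs)

def cav_mae_loss(audio_emb, video_emb, pred, target, mask, lam=0.01):
    """Total loss: reconstruction plus a weighted contrastive term."""
    return reconstruction_loss(pred, target, mask) + lam * contrastive_loss(audio_emb, video_emb)
```

The point of combining them is that the contrastive term aligns the two modalities while the reconstruction term forces each encoder to retain detail, which is the pairing Glass describes in the quote above.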
 

Full Article

 

 

World's Most Popular Online Computer Class Turns to AI for Help
Bloomberg
Saritha Rai
June 2, 2023


Harvard University's David J. Malan said the school's CS50 introductory computer science (CS) class will utilize artificial intelligence (AI) to mark assignments, teach programming, and personalize learning tips for students. Malan is credited with turning CS50 into the world's most popular online learning course; he said tailoring support to students' questions at scale has been challenging, as there are more students online than teachers. Malan's team is refining an AI system to grade students' work, and testing a virtual teaching assistant to assess and provide feedback on their coding by asking rhetorical questions and making suggestions to help them learn. Malan said the course's use of AI could underscore its educational advantages, especially for augmenting the quality of and access to online learning.

Full Article

*May Require Paid Registration

 

 

Improving the Efficiency of 'Vision Transformer' AI Systems
NC State University News
Matt Shipman
June 1, 2023


Patch-to-Cluster attention (PaCa), a new methodology developed by North Carolina State University (NC State) researchers, addresses the challenges associated with vision transformer (ViT) artificial intelligence (AI) systems and improves their performance. ViTs have significant computational power and memory demands and lack transparency in their decision-making. To address these challenges, the researchers employed clustering, in which the AI groups sections of the image together based on similarities in the image data, reducing the number of complex functions. NC State's Tianfu Wu said the researchers found PaCa outperformed two state-of-the-art ViTs, SWIN and PVT, “in every way.”
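The efficiency gain from clustering can be sketched directly. In the toy below (our illustration; PaCa's actual clusters are learned end-to-end inside the network), each of N patch vectors attends over M cluster centers, so the attention-score computation costs N×M rather than the N×N of patch-to-patch attention:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cluster_attention(patches, centers):
    """Each patch attends over M cluster centers instead of N patches:
    N*M dot products instead of N*N, with M << N in practice."""
    out = []
    for p in patches:
        scores = [sum(a * b for a, b in zip(p, c)) for c in centers]
        weights = softmax(scores)
        # Output is the attention-weighted mixture of cluster centers.
        out.append([sum(w * c[d] for w, c in zip(weights, centers))
                    for d in range(len(p))])
    return out
```

A side effect the NC State article highlights is interpretability: with a small number of clusters, one can inspect which image regions each cluster groups together, which is harder with dense patch-to-patch attention.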

Full Article

 

 

Training Machines for Uncertain Real-World Situations
MIT News
Adam Zewe
May 31, 2023


An algorithm developed by researchers at the Massachusetts Institute of Technology (MIT) and Technion—Israel Institute of Technology can determine automatically and independently whether and when imitation learning or reinforcement learning is more effective for training a "student" machine. The algorithm is adaptive, allowing the machine to move between both types of learning throughout the training process based on which would achieve better, faster results. MIT's Idan Shenfeld said, "This combination of learning by trial-and-error and following a teacher is very powerful. It gives our algorithm the ability to solve very difficult tasks that cannot be solved by using either technique individually."
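The switching idea can be sketched as a tiny bandit-style loop. This is our hypothetical illustration of the concept, not MIT's algorithm: the student greedily follows whichever mode, imitating the teacher or exploring by trial and error, has the better running average reward, with periodic probes of the other mode so a late-blooming strategy can still be discovered:

```python
def adaptive_train(teacher_reward, explore_reward, rounds=200):
    """Return the sequence of modes chosen; reward callables stand in for
    one round of imitation learning vs. reinforcement learning."""
    totals = {"imitate": 0.0, "explore": 0.0}
    counts = {"imitate": 1, "explore": 1}   # optimistic 1 avoids div-by-zero
    history = []
    for step in range(rounds):
        # Greedily follow the mode with the better running average reward...
        mode = max(totals, key=lambda m: totals[m] / counts[m])
        # ...but periodically probe the other mode to keep estimates current.
        if step % 20 == 0:
            mode = "explore" if mode == "imitate" else "imitate"
        r = teacher_reward() if mode == "imitate" else explore_reward()
        totals[mode] += r
        counts[mode] += 1
        history.append(mode)
    return history
```

Running `adaptive_train(lambda: 0.5, lambda: 1.0)` settles on exploration as soon as a probe reveals its higher payoff, and the reverse reward structure settles on imitation, mirroring the back-and-forth the MIT algorithm automates.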

Full Article

 

 

The Race to Make AI Smaller, Smarter
The New York Times
Oliver Whang
May 30, 2023


The BabyLM Challenge, organized by computer scientists at institutions including Johns Hopkins University and Switzerland's ETH Zurich, is aimed at creating more accessible, intuitive language models, in stark contrast to the race for ever-larger language models undertaken by big tech companies. The goal is to produce a mini-language model using datasets less than one-ten-thousandth the size used by most advanced large language models. As part of the challenge, researchers have been tasked with training language models on about 100 million words, with the winning model to be chosen based on the effectiveness of its generation and understanding of the nuances of language.

Full Article

*May Require Paid Registration

 

 

AI Poses 'Risk of Extinction,' Industry Leaders Warn
The New York Times
Kevin Roose
May 30, 2023


Industry leaders warned in an open letter from the nonprofit Center for AI Safety that artificial intelligence (AI) technology might threaten humanity's existence. Signatories included more than 350 executives, scientists, and engineers working on AI, with the CEOs of OpenAI, Google DeepMind, and Anthropic among them. ACM Turing Award recipients and AI pioneers Geoffrey Hinton and Yoshua Bengio also signed the letter, which comes amid growing concern about the potential hazards of AI partly fueled by innovations in large language models. Such advancements have provoked fears of AI facilitating mass job takeovers and the spread of misinformation, while earlier this month OpenAI's Sam Altman said the risks were sufficiently dire to warrant government intervention and regulation.

Full Article

*May Require Paid Registration

 

 

BHP Taps Microsoft, AI to Improve Copper Recovery
Reuters
Melanie Burton
May 30, 2023


BHP Group aims to bolster copper recovery from its Escondida mine in Chile using machine learning and artificial intelligence (AI) via a partnership with Microsoft. The goal is for the operators of plants that process ore to use a combination of real-time data and Microsoft Azure's AI-based recommendations to adjust variables impacting ore processing and grade recovery. BHP said copper production must be doubled over the next three decades to keep up with the development of electric vehicles, offshore wind and solar farms, and other decarbonization technologies. BHP's Laura Tyler said, "We expect the next big wave in mining to come from the advanced use of digital technologies."

Full Article

 

 

AI-Threatened Jobs Are Mostly Held by Women, Study Shows
Bloomberg
Diana Li
May 26, 2023


Research by human resources analytics firm Revelio Labs found that artificial intelligence (AI) disproportionately threatens jobs usually held by women. Revelio researchers analyzed data from the National Bureau of Economic Research and found women generally hold many jobs facing automation, like bill and account collectors and payroll clerks. Revelio's Hakki Ozdenoren said, "The distribution of genders across occupations reflects the biases deeply rooted in our society, with women often being confined to roles such as administrative assistants and secretaries. Consequently, the impact of AI becomes skewed along gender lines." AI is more likely to assume repetitive jobs like sorting through resumes in recruitment, according to Ozdenoren. Revelio also found generative AI may affect high-wage jobs more than non-traditional manufacturing occupations.

Full Article

*May Require Paid Registration

 

 

Helping Robots Handle Fluids
MIT News
Rachel Gordon
May 24, 2023


Researchers from the Massachusetts Institute of Technology, Carnegie Mellon University, Dartmouth College, and Columbia University created the FluidLab simulation environment to help robots learn to handle complex fluids. The virtual tool provides various fluid handling challenges involving solids, liquids, and multiple fluids concurrently. FluidLab's core component is the FluidEngine physics simulator, which can calculate and model materials and their interactions while accelerating processing using graphics processing units. The differential engine can embed physics knowledge into a more realistic physical simulation to boost learning and planning for robotic manipulation tasks. The researchers used the tool to test robot learning algorithms and surmount obstacles, and transferred knowledge from simulations to real-world situations through optimization.
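The payoff of a differentiable simulator is that planning becomes gradient descent through the physics. A one-dimensional toy (entirely our illustration, far simpler than FluidLab's fluid dynamics) makes the idea concrete: because the final state is a differentiable function of the control, the planner follows the gradient instead of searching by trial and error.

```python
def simulate(force, steps=10, dt=0.1):
    """Push a unit-mass particle with a constant force; return final position."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        v += force * dt
        x += v * dt
    return x

def plan(target, lr=0.5, iters=100, steps=10, dt=0.1):
    """Gradient descent on squared final-position error. These toy dynamics
    are linear in the force, so dx/dforce is the constant k below."""
    k = dt * dt * steps * (steps + 1) / 2   # dx/dforce for this integrator
    force = 0.0
    for _ in range(iters):
        err = simulate(force, steps, dt) - target
        force -= lr * 2 * err * k           # chain rule through the simulator
    return force
```

FluidLab's FluidEngine plays the role of `simulate` here, but for coupled solids and liquids with gradients computed automatically, which is what makes gradient-based planning feasible for fluid-manipulation tasks.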

Full Article

 

 

AI Model Digests Video to Learn Sign Language
New Atlas
Paul McClure
May 24, 2023


A tool developed by researchers at Spain’s Barcelona Supercomputing Center and the Universitat Politècnica de Catalunya (UPC) uses artificial intelligence (AI) to improve sign language translation. The researchers fed a transformer-style machine learning model more than 80 hours of video in American Sign Language with corresponding English transcripts from the How2Sign dataset. The researchers addressed the variability and complexity of sign languages by pre-processing the video data to extract spatiotemporal information via the Inflated 3D Networks technique. They found the model produced meaningful translations, although they agreed, "There is still room for improvement."

Full Article

 

Educators Worry About How ChatGPT Could Affect Students With Disabilities

The Chronicle of Higher Education (5/26, McMurtrie) reported professors’ uncertainty over how AI tools like ChatGPT will shape teaching and learning “holds doubly true for how the technology could affect students with disabilities.” These tools “can function like personal assistants,” which “could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts.” However, fears about cheating “could lead professors to make changes in testing and assessment that could hurt students unable to do well on, say, an oral exam or in-class test. And instead of using it as a simple study aid, students who lack confidence in their ability to learn might allow the products of these AI tools to replace their own voices or ideas.” Teaching experts worry that “in the rush to figure out, or rein in, these tools, instructors may neglect to consider the ways in which they affect students with disabilities in particular.”

        Survey: One-Third Of College Students Have Used ChatGPT For Schoolwork This Past Year. Diverse Issues in Higher Education (5/26, Kyaw) reported, “Almost a third of college students (30%) have used free artificial intelligence (AI) tool ChatGPT for schoolwork this past academic year, according to a survey by Intelligent.com and SurveyMonkey.” Current undergraduate and graduate students were surveyed about their views “on one of the more well-known tools, ChatGPT, which was launched in November 2022. The survey found that, of the 30% who used ChatGPT this past year for schoolwork, almost half said that they frequently used it for homework.” Students reported using the tool “mostly for English (49%) followed by “hard” sciences like chemistry and biology (41%).” Users also said that the tool’s advantages “included its ease of use, simplicity, ability to help in organizational skills, and its ability to collect specific information and save time in researching. However, they also listed disadvantages, such as overreliance, inaccuracy, and potential to be considered cheating.”

UT Austin Researchers Train AI Model To Decipher Human Thought

The Atlantic (5/26, Wong) reported, “Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them – gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind.” The AI model “analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows,” and then “used that brain-imaging data to reconstruct the content of those sentences.” Published in Nature Neuroscience earlier this month, the findings “add to a new field of research that flips the conventional understanding of AI on its head.”

OpenAI CEO Pledges To Comply With EU Regulations

The AP (5/26, Chan) reported OpenAI CEO Sam Altman on Friday “downplayed worries that the ChatGPT maker could exit the European Union if it can’t comply with the bloc’s strict new artificial intelligence rules, coming after a top official rebuked him for comments raising such a possibility.” Likewise, Bloomberg (5/26, Berthelot, Subscription Publication) reported he pledged to “comply” with EU regulations, “days after making comments that he might pull out of Europe.” Altman said, “Most of the regulation being proposed about licensing frameworks and safety standards makes total sense.” He added, “It’s mostly been quite productive.”

        Canadian Lawmakers Calling For Creation Of AI Task Force Amid Growing Popularity Of ChatGPT. Global News (CAN) (5/26, McSheffrey, Zussman) reported in the wake of “the splashy launch of ChatGPT last fall raising concerns about the implications of artificial intelligence, the BC Green Party is urging the provincial government to create an all-party AI task force.” Global News reported that their call comes with “the Office of the Privacy Commissioner of Canada, and its counterpart offices in B.C., Alberta, Ontario and Quebec, all launching investigations into OpenAI, the company responsible for the popular chatbot that has dominated headlines in recent months.”

US Financial Watchdogs Working To Protect Consumers From Misuse Of AI

The AP (5/26) reported, “As concerns grow over increasingly powerful artificial intelligence systems like ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI. Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing and working conditions.” The AP said, “Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice, as well as the CFPB, all say they’re directing resources and staff to take aim at new tech and identify negative ways it could affect consumers’ lives.” CNBC (5/26) also covered the topic.

Microsoft President Expects US Will Regulate AI

CBS News (5/28, Barkoff) reported Microsoft President Brad Smith “said in an interview that aired Sunday on ‘Face the Nation’ that he expects the U.S. government to regulate artificial intelligence in the year ahead.” Smith said, “I was in Japan just three weeks ago, and they have a national A.I. strategy. The government has adopted it.” He added, “The world is moving forward. Let’s make sure that the United States at least keeps pace with the rest of the world.” Meanwhile, The Hill (5/28, Sforza) reported Smith “said...that the development of artificial intelligence (AI) is ‘almost like’ the invention of the printing press.” Smith “added that AI is ‘already a part of our lives,’ saying that it can do more to help humans do things.”

        Nvidia CEO: Companies That Fail To Embrace AI Will “Perish.” Bloomberg (5/28, Savov, Wu, Subscription Publication) reported Nvidia CEO Jensen Huang in a commencement address at the National Taiwan University in Taipei on Saturday made the case that businesses “and individuals should familiarize themselves with artificial intelligence or risk losing out.” Huang said, “Agile companies will take advantage of AI and boost their position. Companies less so will perish.” He added, “While some worry that AI may take their jobs, someone who’s expert with AI will.”

AI-Powered Robot On Display At International Conference On Robotics And Automation

The AP (5/30) reports Ameca is a “humanoid robot powered by generative artificial intelligence that gives it the ability to respond to questions and commands and interact with people.” It is “one of hundreds of robots on display this week at the International Conference on Robotics and Automation, or ICRA, in London, where visitors got a glimpse at the future.” The event “comes as scientists and tech industry leaders, including executives at Microsoft and Google, warned Tuesday about the perils of artificial intelligence to mankind, saying ‘mitigating the risk of extinction from AI should be a global priority.’” According to the AP, Ameca can “speak French, Chinese or dozens of other languages, instantly compose a poem or sketch a cat on request.”

Tech Executives Warn About Extinction Threat Presented By AI

The Wall Street Journal (5/30, Lukpat, Subscription Publication) reports that in a joint Tuesday statement, tech executives as well as artificial-intelligence scientists warned about the extinction threat presented by AI. In excess of 350 individuals inked a statement put out by the Center for AI Safety, which said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

        The New York Times (5/30, Roose) reports, “The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.” Meanwhile, Geoffrey Hinton and Yoshua Bengio, “two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered ‘godfathers’ of the modern A.I. movement, signed the statement, as did other prominent researchers in the field.”

        Former White House Antitrust Official Trying To Shape AI Regulation Debate. The Washington Post (5/30) reports, “Tim Wu, an architect of President Biden’s antitrust policy, left the White House in January just as Silicon Valley’s artificial intelligence craze was soaring to new heights. But now as efforts to rein in AI tools like ChatGPT gain steam in Washington, the former Biden tech adviser is trying to ensure legislators and regulators don’t veer off course.” In an interview, Wu “shot down proposals that heavyweights like OpenAI and Microsoft have floated to create licensing requirements for operators of large AI models like ChatGPT,” calling such regimes “the death of competition,” and was similarly concerned that launching a new federal agency to regulate digital platforms would “advantage existing entities.” On the other hand, Wu favors requiring AI tools to proactively identify themselves as such to users, and “said enforcers can lean on existing rules against deceptive and misleading practices to tackle potential abuses,” among other measures.

Microsoft Seen As Poised To Benefit From AI

Fortune (5/30) reports, “Microsoft is positioned exceptionally well for a major splash in artificial intelligence that could lift the company’s market value another $300 billion this year, according to a new analyst report.” Fortune says, “Microsoft is well-positioned in this respect because of its plans to use ChatGPT in more of its new cloud services, [Wedbush analyst Dan Ives] said. While the company’s A.I.-powered Bing search engine has provided information that is riddled with mistakes since its release earlier this year, he suggested that the bigger source of Microsoft’s revenue growth will be the successful integration of A.I. with its cloud offerings, operated under its cloud computing platform Azure.”

Deepfakes “Turbocharged” By Generative AI Tools

Reuters (5/30, Ulmer, Tong) reports the creation of deepfakes has “been turbocharged over the past year by a slew of new “generative AI” tools such as Midjourney that make it cheap and easy to create convincing” synthetic media. There have been “three times as many video deepfakes of all kinds and eight times as many voice deepfakes posted online this year compared to the same time period in 2022, according to DeepMedia, a company working on tools to detect synthetic media.” In total, about 500,000 “video and voice deepfakes will be shared on social media sites globally in 2023, DeepMedia estimates.” Cloning a voice used “to cost $10,000 in server and AI-training costs up until late last year, but now startups offer it for a few dollars, it says.”

Small Businesses Turning To AI Tools For Help

TechRadar (5/30) reports, “More than half (57%) of small businesses are ‘eager’ to expand their knowledge of generative AI and how it can be used to help their business, new research from website builder giant GoDaddy has claimed.” The survey of over 1,000 US SMBs found 38 percent have at least tried using generative AI tools such as Bard and ChatGPT. Of those who tried the tools, 75 percent “report ‘very well’ or ‘excellent’ performance, while only 4% were unhappy with the results.”

How ChatGPT Could Improve Tutoring Amid Pandemic Learning Loss Efforts

Education Week (5/30, Schwartz) reports that some in ed tech are betting ChatGPT “can improve tutoring – a strategy that has boomed as schools struggle to help kids make up lost ground after months of disrupted learning.” For example, online learning platform Khan Academy has debuted “a new AI chatbot – Khanmigo – designed to tutor and coach students in one-on-one interactions.” As high quality, “or high-impact tutoring – is complex, time-consuming, and expensive,” experts say AI “can take over some pieces of this puzzle, but there are potential pitfalls, too.” An AI-enabled tutoring program could give students “immediate, personalized feedback, said Helen Crompton, an associate professor of instructional technology at Old Dominion University.” But there’s “also the possibility that a virtual tutor could present students with incorrect information, or reinforce bias, Crompton said.” Meanwhile, there is “one crucial element of good tutoring that experts agree an AI can’t replace: the student-tutor relationship.”

Google Investing In Runway At Valuation Of $1.5B, Sources Say

The Information (5/31, Clark, Victor, Efrati, Woo, Subscription Publication) reports “Google is investing in Runway, a New York-based startup that lets customers generate video from text descriptions using artificial intelligence it pioneered, at a valuation of around $1.5 billion including the new capital, according to two people familiar with the matter.” The investment “underscores the fierce competition among cloud providers to get close to companies with cutting-edge AI services that could become major cloud customers or acquisition targets in the future.” According to The Information, “Amazon Web Services has touted Runway, which generates relatively little revenue from its video-editing tools, as a key AI-startup customer but Runway is now expected to rent cloud servers from Google, said one person briefed about the deal.”

        Insider (5/31, Chan, Thomas, Palazzolo, Langley) reports “the value of Google’s cloud contract with Runway is $75 million over three years, according to copies of internal documents seen by Insider.” The cloud “contract’s execution date was April 28, and it will be implemented on August 30.”

US, European Officials Gather To Discuss Challenges From AI

Politico Europe (5/31) reports top European and American officials “gathered in Sweden for tech and trade talks on Wednesday and tried to hammer out an answer to one of the toughest problems facing the world: how to police artificial intelligence.” While some “have been thrilled by AI’s potential to generate computer code and solve medical problems, others fear it will put millions of people out of work and could even threaten security.”

        Biden Administration Officials Divided On How Aggressive AI Regulation Should Be. Bloomberg (5/31, Edgerton, Deutsch, Subscription Publication) reports Biden Administration “officials are divided over how aggressively new artificial intelligence tools should be regulated – and their differences are playing out this week in Sweden.” A number of “White House and Commerce Department officials support the strong measures proposed by the European Union for AI products such as ChatGPT and Dall-E, people involved in the discussions said.” However, “US national security officials and some in the State Department say aggressively regulating this nascent technology will put the nation at a competitive disadvantage, according to the people.”

FCC Eyes Potential Measures To Protect Consumers From AI

The Washington Post (5/31, DiMolfetta) says the FCC “isn’t best known for grappling with cutting-edge technology,” but with “the rise of generative AI tools like ChatGPT and Midjourney, the telecommunications regulator may be forced to tackle artificial intelligence, an area that’s beginning to intersect with communications infrastructure and airwaves.” One area of concern related to AI is how the technology could make robocall operations easier to execute and more effective at fooling consumers. Former Democratic FCC chair Tom Wheeler “added voter manipulation as a related area of concern, where an AI-cloned voice could direct an individual on an Election Day to an incorrect location to cast their vote.” An FCC spokesperson told the Washington Post, “The FCC is actively studying the potential impacts of AI, in particular the opportunities for advanced communications networks like spectrum sharing and wireline network management, as well as its potential as a tool and a challenge for consumers.”

How AI Technology Can Support Tutoring For Students Despite Its Limitations

Education Week (5/31, Schwartz) reports several ed-tech organizations are harnessing “powerful new AI technology that can hold conversations and produce all kinds of text in response to prompts to support tutoring.” While experts “say that there are some aspects of tutoring that AI could handle well,” they also caution that “human connection is a key part of making tutoring work and something no bot can truly emulate.” Among ways that AI “could play a part in tutoring and where its limitations may lie” is by streamlining some of the work “that has to happen behind the scenes for tutor-tutee sessions to go smoothly – acting as a sort of support staff for tutoring.” For example, Saga Education “has partnered with the University of Colorado Boulder to embed AI developed by the institution’s researchers,” feeding the technology “recordings of tutoring sessions, which evaluates tutors’ work against a rubric and then provides feedback.”

UCSD Professor Turns To Oral Exams In Effort To Combat Pandemic And AI-Enabled Cheating

The Wall Street Journal (6/1, Belkin, Subscription Publication) reports in an effort to combat assignment plagiarism during the pandemic, one University of California, San Diego professor decided to introduce oral exams for her students. The 2,000-year-old method led to a research experiment on the impact of oral examinations, and comes as colleges scramble to address plagiarism as artificial intelligence grows in popularity. During the exams, students are asked questions, then required to explain how they reached their conclusions.

OpenAI Researchers Looking To New Training Methods To Reduce Errors

SiliconANGLE (6/1, Dotson) reports, “OpenAI LP is performing research into dealing with artificial intelligence ‘hallucinations’ using new training methods that will help reduce critical mistakes.” OpenAI researchers “said that they intend to detect hallucinations by training the model, rewarding it for desirable results and discouraging undesirable results. They intend to do this for each step of the reasoning process instead of just the final conclusion in what is known as ‘process supervision,’ as opposed to ‘outcome supervision.’” SiliconANGLE adds, “The objective is to build a transparent ‘chain-of-thought’ with feedback on each step that builds on each step of work and thus leads to a better outcome.”
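The distinction between the two training signals can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the step format, the toy arithmetic checker, and the reward values are invented for this sketch and are not OpenAI’s actual implementation.

```python
# Illustrative sketch of "outcome supervision" vs. "process supervision".
# All names, step formats, and reward values here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReasoningChain:
    steps: List[str]  # intermediate chain-of-thought steps, e.g. "4 * 3 = 12"
    answer: str       # final conclusion


def outcome_supervision(chain: ReasoningChain,
                        answer_ok: Callable[[str], bool]) -> List[float]:
    """One reward for the final answer only; every step inherits it."""
    r = 1.0 if answer_ok(chain.answer) else -1.0
    return [r] * len(chain.steps)


def process_supervision(chain: ReasoningChain,
                        step_ok: Callable[[str], bool]) -> List[float]:
    """A separate reward per reasoning step, flagging errors where they occur."""
    return [1.0 if step_ok(s) else -1.0 for s in chain.steps]


def check_arithmetic(step: str) -> bool:
    """Toy step checker: verify an 'expression = value' arithmetic claim."""
    expr, value = step.split("=")
    return abs(eval(expr) - float(value)) < 1e-9


chain = ReasoningChain(
    steps=["2 + 2 = 4", "4 * 3 = 13", "13 - 1 = 12"],  # middle step is wrong
    answer="12",
)
print(outcome_supervision(chain, lambda a: a == "12"))  # [1.0, 1.0, 1.0]
print(process_supervision(chain, check_arithmetic))     # [1.0, -1.0, 1.0]
```

Outcome supervision scores the flawed chain as entirely correct because the final answer happens to match, while process supervision penalizes the faulty middle step. That step-level signal is the “feedback on each step” the item above describes.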

Biden Warns Of Challenges Presented By AI In Air Force Academy Commencement Address

USA Today (6/1, Garrison) reports that while delivering a commencement address at the Air Force Academy in Colorado Springs, Colorado, President Biden “amplified fears of scientists who say artificial intelligence could ‘overtake human thinking’ in his most direct warning to date on growing concerns about the rise of AI.” While discussing the ways that rapid technology could change conflicts in the future, the President said, “It’s not going to be easy decisions, guys. I met in the Oval Office with eight leading scientists in the area of AI. Some are very worried that AI can actually overtake human thinking in the planet. So we’ve got a lot to deal with. It’s an incredible opportunity, but a lot to deal with.”

AI Is Major Issue In Hollywood Contract Talks With Actors, Writers Unions

Reuters (6/1, Richwine) reports artificial intelligence has emerged as a major concern in contract negotiations between major studios and both the Writers Guild of America (WGA) and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA). Writers “want assurances that the emerging technology will not be used to generate scripts.” And actors want assurances that members will be able to “control use of their ‘digital doubles’ and ensure studios pay the actual actors appropriately.” SAG-AFTRA Chief Negotiator Duncan Crabtree-Ireland said, “The performer’s name, likeness, voice, persona – those are the performer’s stock and trade...It’s really not fair for companies to attempt to take advantage of that and not fairly compensate performers when they’re using their persona in that way.”

Torres Introduces Bill To Require Disclaimer For AI-Generated Products

Axios (6/3, Solender, Curi) reports Rep. Ritchie Torres (D-NY) “is introducing legislation that would require the products of generative artificial intelligence to be accompanied by a disclaimer.” Axios continues, “Torres’ bill is the latest in a wave of new legislative efforts to regulate AI as Congress grapples with the emerging technology’s massive potential – both for societal advancement and harm. ... The bill, a copy of which was first obtained by Axios, would require output created by generative AI, such as ChatGPT, to include: ‘Disclaimer: this output has been generated by artificial intelligence.’ Enforcement would be under the jurisdiction of the Federal Trade Commission, which imposes civil fines for disclosure violations.”

Generative AI Has Restored Optimism Within Tech Industry

The Washington Post (6/4, De Vynck) reports that while “the mood in Silicon Valley was dour” last year, since the “artificial intelligence boom” began, “venture capitalists have been throwing money at AI start-ups, investing over $11 billion in May alone, according to data firm PitchBook, an increase of 86 percent over the same month last year.” The Post adds in San Francisco, “it’s suddenly impossible to escape the AI hysteria.” According to the Post, “The new AI gold rush – sparked in large part by the release of OpenAI’s ChatGPT in November – is thanks to generative AI, which uses complex algorithms trained on trillions of words and images from the open internet to produce text, images and audio.”

        Bloomberg Intelligence: AI Sector To Reach $1.3T In Revenue By 2032. Bloomberg (6/1, Rudnitsky, Subscription Publication) reports a new Bloomberg Intelligence report found that “the release of consumer-focused artificial intelligence tools such as ChatGPT and Google’s Bard is set to fuel a decade-long boom that grows the market for generative AI to an estimated $1.3 trillion in revenue by 2032 from $40 billion last year.” According to the report, the industry “could expand at a rate of 42% over ten years – driven first by the demand for infrastructure necessary to train AI systems and then the ensuing devices that use AI models, advertising and other services.” The report adds that AWS, along with Alphabet, NVIDIA, and Microsoft, is “likely to be among the biggest winners from the AI boom.”

OpenAI CEO “Heartened” By World Leaders’ Desire To Contain AI Risks

The AP (6/5, Goldenberg) reports “OpenAI CEO Sam Altman said Monday he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing.” Altman visited Tel Aviv “as part of a world tour that has so far taken him to several European capitals.” His “tour is meant to promote his company, the maker of ChatGPT.” Altman said, “I am very heartened as I’ve been doing this trip around the world, getting to meet world leaders.” He “said his discussions showed ‘the thoughtfulness’ and ‘urgency’ among world leaders over how to figure out how to ‘mitigate these very huge risks.’”

        OpenAI Does Not Plan To Release More Consumer-Facing Products, Blog Post Says. Insider (6/5, Russell) reports Altman “has a message for software developers that should set them at ease”: OpenAI “has no plans to roll out any more consumer-facing products like ChatGPT, according to a now taken down blog post by a startup founder who attended a private meeting with Altman.” During a stop on his world tour “in London in May, he met behind closed doors with a small group of developers and startup founders, giving them a sneak peek at OpenAI’s roadmap and biggest challenges.” However, “the conversation became public when Raza Habib, an attendee who is also the cofounder and CEO of Humanloop, a Y Combinator-backed startup that helps businesses build apps on top of large language models, blogged an account of the private meeting.”

Professors Grapple With AI Tools That Help And Hinder Educational Equity

Inside Higher Ed (6/5, D'Agostino) reports that “amid 2023’s AI disruption,” faculty members across disciplines “have had a demanding year.” For example, “many have redesigned assignments and developed new course policies in the presence of generative AI tools.” Professors have also “grappled with a paradox. In one narrative, large language models offer much-needed assistance to students by bolstering their creative, research, writing and problem-solving skills. ... But in another narrative, algorithms reproduce systemic bias and hold potential to widen the education gap like no other tech before them.” While many instructors “are at work helping students navigate this AI divide,” such efforts “demand nuanced calibration. In an ideal world, students would find their way to a prompt-engineering sweet spot – one in which they leverage AI tools for learning without hindering their personal or academic growth.”

        College Professors Work To Embrace ChatGPT In Classrooms. The Deseret News (UT) (6/5, McKinlay) reports that “as large language model-based tools like ChatGPT have become accessible to the public, AI has garnered more and more negative attention, especially because of the prospect that students may use it to complete assignments.” Some professors “believe generative AI will provoke a ‘paradigm shift’ in college education – not because they fear their students depending on it, but because it presents educators with an opportunity to reassess how they teach.” On the other hand, individuals like Utah Valley University professor Christa Albrecht-Crane, have been “leading discussions” with colleagues “about how they can teach their students that doing their own writing is to their advantage.” ChatGPT can “assist with [the] steps – Albrecht-Crane has even introduced it in her classroom as a ‘collaborator’ – but breaking an essay down may make students less likely to rely on it to write the whole thing.”

EU Urges Tech Companies To Label AI-Generated Content In Effort To Curb Disinformation

Bloomberg (6/5, Deutsch, Bodoni, Subscription Publication) reports “the European Union wants tech companies to warn users about artificial intelligence-generated content that could lead to disinformation, as part of a voluntary code that Twitter Inc. left last month.” Although “new AI technologies ‘can be a force for good,’ there are ‘dark sides’ with ‘new risks and the potential for negative consequences for society,’ Vera Jourova, a European Commission vice president, told reporters on Monday.” Companies that agreed “to the EU’s voluntary code of practice to fight disinformation, including TikTok Inc., Microsoft Corp. and Meta Platforms Inc., should now ‘clearly label’ any services with a potential to disseminate AI generated disinformation, Jourova said.”

        The AP (6/5, Chan) reports “online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent ‘malicious actors’ from generating disinformation, Jourova said at a briefing in Brussels.”

Study: AI Potentially Improving Practices For Predicting Breast Cancer Risk

“A new study is showing yet another way artificial intelligence is entering the medical field – and potentially improving existing practices for predicting breast cancer risk,” CBS News (6/6, M. Moniuszko) reports. Published Tuesday in Radiology, a peer-reviewed journal, the study “found AI algorithms outperformed the standard clinical risk model for predicting the five-year risk for breast cancer.” Risk measurements “like the Breast Cancer Surveillance Consortium (BCSC) clinical risk score, which use self-reported and other patient information including age, family history and more, are typically used to calculate a woman’s risk of breast cancer.”

Apple CEO Says AI Companies Should Regulate Themselves

CNBC (6/6, Field) reports, “Apple CEO Tim Cook believes companies should regulate themselves when it comes to artificial intelligence.” In an interview that aired Tuesday, Cook “told ABC’s ‘Good Morning America’ that large language models – the AI tools that power chatbots like OpenAI’s ChatGPT and Google’s Bard – show ‘great promise’ but also the potential for ‘things like bias, things like misinformation, maybe worse in some cases.’” Cook said, “If you look down the road, then it’s so powerful that companies have to employ their own ethical decisions. Regulation will have a difficult time staying even with the progress on this because it’s moving so quickly. So I think it’s incumbent on companies as well to regulate themselves.”

OpenAI CEO: AI Poses “Existential Threat” To Humanity, Oversight Body Warranted

The AP (6/6, Gambrell) reports artificial intelligence poses an “existential risk” to humanity, a key innovator warned during a visit to the United Arab Emirates on Tuesday, suggesting an international agency like the International Atomic Energy Agency oversee the ground-breaking technology. The AP says OpenAI CEO Sam Altman is on a “global tour to discuss artificial intelligence.” Altman made a point to “reference the IAEA, the United Nations nuclear watchdog, as an example of how the world came together to oversee nuclear power.”

Senate To Hold Three AI Briefings, Including Classified Hearing

Politico (6/6) reports the Senate will host three “bipartisan, senators-only briefings on artificial intelligence” in coming weeks, including the “first-ever classified briefing on the matter, as Congress looks to address the rapidly growing technology.” The first briefing will provide an “overview of the current state of artificial intelligence and what it’s currently capable of,” while the second will look into how the technology is developing and could change over the next 10 years. The third will be “a classified briefing, and explore how national security departments and agencies are utilizing AI and what the U.S. knows about their adversaries’ AI capabilities.” The Hill (6/6, Mueller) says the third briefing is “set to cover how the Department of Defense and the intelligence community are using AI, as well as adversaries’ capabilities with the tech.”

Study Examines ChatGPT’s Responses To Public Health Questions

“When asked serious public health questions related to abuse, suicide or other medical crises, the online chatbot tool ChatGPT provided critical resources – such as what 1-800 lifeline number to call for help – only about 22% of the time in a new study,” CNN (6/7, Howard) reports. Published Wednesday in the journal JAMA Network Open, the research “suggests that public health agencies could help AI companies ensure that such resources are incorporated into how an artificial intelligence system like ChatGPT responds to health inquiries.” For example, “with the right engineering and inputs, ChatGPT could recognize the signs and symptoms of addiction or depression within the questions someone asks it, prompting it to provide health resources in its interactions with that person.”

        Newsweek (6/7, Thomson) reports the ChatGPT study “found that ChatGPT was much more effective than other” artificial intelligence (AI) assistants. The study found that when prompted with the same substance use disorder-related “questions, Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana, and Samsung’s Bixby collectively recognized five percent of the questions and made one referral,” while ChatGPT had “91 percent recognition” and made two referrals.

OpenAI CEO Places Himself At Center Of AI Debate By Actively Engaging With Lawmakers

The New York Times (6/7, Kang) reports on how OpenAI CEO Sam Altman has been actively engaging with lawmakers and regulators to shape the debate on AI regulation. In contrast to other tech executives who have “typically avoided the spotlight of government regulators and lawmakers,” Altman has held meetings with over 100 members of Congress, Vice President Kamala Harris, and cabinet members at the White House. He has also embarked on a global tour, discussing AI with world leaders and proposing regulatory measures. In this way, by running “toward the spotlight,” Altman’s efforts have “thawed icy attitudes toward Silicon Valley companies.”

Microsoft Enabling OpenAI GPT Model Access For Government Users

Reuters (6/7) reports Microsoft is “bringing the powerful language-producing models from OpenAI to U.S. federal agencies using its Azure cloud service, it said in a blog post on Wednesday.” Microsoft has “added support for large language models (LLMs) powering GPT-4, the latest and the most sophisticated of the LLMs from OpenAI, and GPT-3, to Azure Government.” Reuters says, “It is the first time Microsoft is bringing the GPT technology to Azure Government, which offers cloud solutions to U.S. government agencies, and marks the first such effort by a major company to make the chatbot technology available to governments.”

Sunak Aims To Align US-UK Strategies On AI During Washington Visit

The AP (6/7, Hui) reports British Prime Minister Rishi Sunak “started a two-day trip to Washington carrying the message” that the post-Brexit United Kingdom “remains an essential American ally in a world of emboldened authoritarian states.” The Wall Street Journal (6/7, Salama, Subscription Publication) says that on Thursday, Sunak will meet with President Biden. The Journal says the meeting between the two leaders reflects the White House’s recognition of the UK’s robust support for Ukraine, Sunak’s willingness to toe the US line on China, and the UK’s recent efforts to pursue a rapprochement with the European Union in the wake of Brexit. It also reflects the great importance Sunak and other British leaders attach to economic security with the US.

        The New York Times (6/7, Landler) says that while his time with Biden will “likely be consumed by the here-and-now threat of Russia’s war on Ukraine,” Sunak is also aiming to use the visit to pursue his goal of “aligning” US and UK policy “on the challenges of artificial intelligence.” Indeed, Bloomberg (6/7, Morales, Sink, Subscription Publication) reports Sunak’s office “said the premier and Biden will take a coordinated approach on the issue.”

        In a separate article, Bloomberg (6/7, Morales, Subscription Publication) explains Sunak “wants Britain to have a larger role in the AI debate and harbors hopes of establishing a global watchdog in London. But critics argue that, just as it does on trade, the UK’s post-Brexit status outside the much larger European Union has diminished its influence.” Bloomberg notes that the UK “was not included when US and EU officials gathered to discuss rules and safeguards in Sweden last month.”

FBI Warns AI Software Being Employed For “Sextortion,” Harassment

Reuters (6/7, Satter) reports the FBI has warned Americans “that criminals are increasingly using artificial intelligence to create sexually explicit images to intimidate and extort victims.” In an alert circulated “this week, the bureau said it had recently observed an uptick in extortion victims saying they had been targeted using doctored versions of innocent images taken from online posts, private messages or video chats.” The Bureau noted that in some cases children have been targeted.

Opinion: Social Media AI Algorithms Should Not Be Protected By First Amendment

Loyola Law School professors Jeffery Atik and Karl Manheim write for The Hill (6/7), “In late May, Surgeon General Vivek Murthy warned of the ‘profound risk of harm’ that social media poses to young people, confirming the view advanced by numerous child advocates.” The authors say, “The issue of social media regulation will likely divide progressives. Some will support government measures to protect the vulnerable,” while “others see regulation as an impermissible limit on the exercise of free speech.” The authors look at how social media algorithms work and conclude, “Do we really want to say that all of these autonomous machine actions are protected by the First Amendment? If so, and we treat AI outputs as speech, AI will be immune to regulation.”

Biden, Sunak Announce Framework For Cooperation On AI, Economic Issues At White House Meeting

The AP (6/8, Madhani, Min Kim) reports President Biden hosted UK Prime Minister Rishi Sunak at the White House on Thursday, where the two leaders “reiterated their commitment to help Ukraine repel Russia’s ongoing invasion, while agreeing to step up cooperation on challenges their economies face with artificial intelligence, clean energy, and critical minerals.” The pair “said the ‘first of its kind’ agreement – what they are calling the ‘Atlantic Declaration’ – will serve as a framework for the two countries on the development of emerging technologies, protecting technology that is critical to national security and other economic security issues.” The New York Times (6/8, Rogers) reports the declaration “will bring the countries closer on research around quantum computing, semiconductor technologies and artificial intelligence, a field in which developments are often faster than the efforts to regulate them.”

        Politico Europe (6/8) reports the two leaders “have vowed to immediately start negotiating an agreement to mitigate the impact of Biden’s Inflation Reduction Act (IRA), which prevents nations without a U.S. trade deal from accessing the law’s tax credits and subsidies.” Biden “has pledged to allow the U.K. access to critical minerals in a similar agreement to that struck by the U.S. with Japan, easing barriers which affected electric vehicle batteries.” Reuters (6/8, Hunnicutt, Smout, Ravikumar) reports Biden and Sunak “also agreed to launch a new civil nuclear partnership as part of their clean energy cooperation, which will include setting up new infrastructure over the long term and cutting reliance on Russian fuel.” The Washington Post (6/8, Pager, Booth) reports that without “detailing specifics, Biden said that addressing the risks and potential of AI is a priority and that the two countries would ‘do more on joint research and development to ensure the future we’re building remains fundamentally aligned with our values set in both of our countries.’”

        Bloomberg (6/8, Subscription Publication) reports Biden “will also ask the US Congress to designate the UK as a ‘domestic source’ under the Defense Production Act, a move that would streamline collaboration on emerging weapons platforms. ... The defense commitment will require approval from Congress – which has shown skepticism toward trade exceptions that could hurt the US industrial base – and mirrors a similar request on behalf of Australia earlier this year, as the three nations seek to strengthen security cooperation under a trilateral pact struck early in Biden’s presidency.”

        Reuters (6/8, Hunnicutt, Shalal, Holton) reports that while Biden told reporters, “Our economic partnership is an enormous strength – a source of strength that anchors everything that we do together,” a “much-hoped-for free trade agreement has not materialized.”

How Educators In Higher Ed Are Using Generative AI In Classrooms

In her newsletter for The Hechinger Report (6/8), Javeria Salman writes that the International Society for Technology in Education’s CEO, Richard Culatta, “warns that if the education community sits on the sidelines as the technology is advancing and ethical concerns are navigated, it will be ‘the century’s biggest wasted opportunity.’” Salman highlights “how educators and students are already engaging with new AI tools in and out of the classroom.” For example, University of Virginia assistant professor Richard Ross “incorporated generative AI into two of his classes in very different ways.” In a class on mathematical statistics, “Ross asked his students to research theorems, their inventors and explain how the theorems were proved – without the help of AI. Then, Ross asked students to exchange topics and this time he asked students to supplement their research using generative AI.” Students then “had to decide whether the AI explanations were clearer and more in depth than the student-provided ones.” According to Culatta, “the method Ross is using to incorporate AI into his coursework is the most common way AI is being adopted in higher education.”

Google, Microsoft Testing Ads In Generative AI Services

Reuters (6/8) reports, “Alphabet’s Google and Microsoft are inserting ads into AI experiments without providing an option to opt out of participation, an approach that has already rankled some brands and risks further pushback from the industry, ad buyers told Reuters.” Both companies “said they are in the early stages of testing ads in generative AI features and were actively working with advertisers and soliciting their feedback. Some advertisers are wary of their marketing budgets being spent on features that are available to a limited number of users, ad buyers said. Advertisers typically also want to have control over where their ads appear online and are cautious about appearing next to inappropriate or unsuitable content.”

Microsoft Plans To Create Program To Ensure AI Products Meet Any Future Regulations

Bloomberg (6/8, Bass, Subscription Publication) reports Microsoft “will create a program to assure customers the artificial intelligence software they buy from the company will meet any future laws and regulations, looking to keep clients investing in AI tools ahead of whatever rules are passed governing the new technology.” Microsoft said in a blog post on Thursday that it would help clients to manage regulatory issues involving AI applications they deploy with Microsoft and would also continue to work with lawmakers “to promote effective and interoperable AI regulation.” Microsoft Vice President Antony Cook wrote, “There are legitimate concerns about the power of the technology and the potential for it to be used to cause harm rather than benefits.”

Bipartisan Senate Bill Would Create New Critical Technology Office Aimed At Competing With China

NBC News (6/8, Brown-Kaiser) reports a bipartisan group of senators were expected to “introduce legislation on Thursday aimed at managing the rise of artificial intelligence and its use by U.S. adversaries.” The new bill comes as Senate Majority Leader Schumer vowed to “make addressing AI a priority and members of both parties are eyeing Big Tech, and AI in particular, as key focuses for this Congress.” The Global Technology Leadership Act would establish “an office that analyzes how competitive the country is in critical technologies like AI in comparison to rivals such as China, according to bill text shared exclusively with NBC News.” The federal entity – named the Office of Global Competition Analysis – would “consist of experts from the intelligence community, the Pentagon and other relevant agencies that use both intel and private-sector commercial data to make these assessments.”

        Labor Unions Organizing To Protect Members From Losing Jobs To AI. The Washington Post (6/8, Verma, De Vynck) reports, “While artificial intelligence is rapidly improving and some economists predict the technology will put millions of workers out of jobs, labor unions are fighting against it.” In bargaining sessions across several industries, “AI is increasingly becoming a central sticking point, with organizers making the case that companies are shortsighted to replace knowledge workers with technology that can’t match human creativity and is riddled with errors and bias.” MIT Professor of Economics Daron Acemoglu “said there’s no reason to trust that executives alone will make the right decisions regarding how AI might be used.”

Proponents Envision New AI-Assisted Tutoring Systems As Education Game Changers

The New York Times reported on the Khan Lab School, an “independent school with an elementary campus” in Palo Alto, California. Its “students are among the first schoolchildren in the United States to try out experimental conversational chatbots that aim to simulate one-on-one human tutoring.” Such “unproven automated tutoring systems could also make errors, foster cheating, diminish the role of teachers or hinder critical thinking in schools.” But proponents “envision the new A.I.-assisted tutoring systems as education game changers because they act more like student collaborators than inert” pieces of software. “The A.I.s will get to that ability, to be as good a tutor as any human ever could,” Bill Gates, the co-founder of the Bill & Melinda Gates Foundation, “said at a recent conference for investors in educational technology.”

Massachusetts Students Learn About AI Education, Careers During “Day Of AI” Event With Amazon

The New York Times (6/8, Singer) reports Amazon Senior Vice President and Alexa Head Scientist Rohit Prasad visited the Dearborn STEM Academy in Boston “to observe an Amazon-sponsored lesson in artificial intelligence that teaches students how to program simple tasks for Alexa.” While there, Prasad told students, “We need to create the talent for the next generation. So we are educating about A.I. at the earliest, grass-roots level.” The Times says Prasad’s visit came at a time when artificial intelligence has become a “buzz phrase” in education, with “schools...scrambling for resources to help teach it.”
