Dr. T's AI brief


dtau...@gmail.com

unread,
Jan 7, 2024, 9:04:06 AM
to ai-b...@googlegroups.com

Microsoft Adding New PC Keyboard Button
CBS News
Aliza Chasan
January 4, 2024


Microsoft is adding an AI button to its Windows keyboards, the company's first significant keyboard change in nearly three decades. The new Copilot key will launch Microsoft's AI chatbot. Microsoft’s Yusuf Mehdi said the software giant sees the key’s addition as "the entry point into the world of AI on the PC." Copilot is integrated with Microsoft 365 and works alongside Word, Excel, PowerPoint, Outlook, and Teams. Users whose keyboards lack the Copilot key can launch Copilot with the keyboard shortcut Windows + C.

Full Article

 

 

Machine Learning Helps Fuzzing Find Hardware Bugs
IEEE Spectrum
Tammy Xu
January 3, 2024


Texas A&M University researchers used the "fuzzing" technique, which introduces incorrect commands and prompts, to automate chip testing on the assembly line and help identify hardware bugs early in the development process. The researchers used reinforcement learning to select inputs for fuzz testing, adapting an algorithm used to solve the multi-armed bandit (MAB) problem. They found the resulting MABFuzz algorithm significantly sped up the detection of vulnerabilities and the coverage of the testing space.
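The core idea, treating each class of fuzz inputs as a bandit "arm" and steering testing effort toward the classes that keep exposing new behavior, can be illustrated with a minimal epsilon-greedy sketch in Python (the arm names and the run_fuzz_case stub below are hypothetical placeholders, not the researchers' actual MABFuzz code):

    import random

    # Each "arm" is one class of fuzz inputs; reward = new behavior exposed.
    arms = ["malformed_opcode", "bad_operand", "illegal_jump", "random_bytes"]
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}  # running mean reward per arm

    def run_fuzz_case(arm):
        # Hypothetical stub: run one fuzz input against the design and report
        # whether it hit new coverage or tripped a bug.
        return 1.0 if random.random() < 0.1 else 0.0

    def select_arm(epsilon=0.1):
        # Explore occasionally; otherwise exploit the best-performing arm.
        if random.random() < epsilon:
            return random.choice(arms)
        return max(arms, key=lambda a: values[a])

    for _ in range(1000):
        arm = select_arm()
        reward = run_fuzz_case(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean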

Full Article

 

 

Large Fishing Boats Go Untracked as 'Dark Vessels'
New Scientist
Jeremy Hsu
January 3, 2024


An AI analysis of satellite images by researchers at the nonprofit Global Fishing Watch found that the locations of 75% of industrial fishing vessels and 25% of transport and energy ships are not publicly shared. The images, taken from 2017 to 2021 in regions accounting for most large-scale fishing and other industrial activities, were analyzed using AIs trained to identify and categorize boats and offshore structures. Comparing the global map of vessels with a database of those that broadcast their location publicly revealed that a majority turned their automatic identification systems (AIS) off, which could indicate their participation in illegal fishing and other activities.
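The matching step described above, cross-referencing vessels detected in imagery against public AIS position broadcasts, is essentially a spatial set difference. A minimal, self-contained Python sketch (toy coordinates, not Global Fishing Watch's actual pipeline):

    from math import radians, sin, cos, asin, sqrt

    def km_apart(p, q):
        # Haversine great-circle distance between (lat, lon) pairs, in km.
        lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def is_dark(detection, ais_positions, max_km=1.0):
        # A detection is "dark" if no AIS broadcast places a vessel nearby.
        return all(km_apart(detection, a) > max_km for a in ais_positions)

    detections = [(4.2, 73.5), (4.9, 73.1)]  # toy (lat, lon) vessel detections
    ais_positions = [(4.9, 73.1)]            # toy AIS broadcast positions
    dark = [d for d in detections if is_dark(d, ais_positions)]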

Full Article

*May Require Paid Registration

 

 

The Times Sues OpenAI and Microsoft over AI Use of Copyrighted Work
The New York Times
Michael M. Grynbaum; Ryan Mac
December 27, 2023


The New York Times has sued OpenAI and Microsoft over copyright issues associated with its written works. The lawsuit contends that millions of articles published by the newspaper were used to train automated chatbots that now compete with the news outlet as a source of reliable information. The complaint cites several examples when a chatbot provided users with near-verbatim excerpts from Times articles that would otherwise require a paid subscription to view. It also highlights the potential damage to The Times’ brand through so-called AI “hallucinations,” a phenomenon in which chatbots insert false information that is then wrongly attributed to a source.
 

Full Article

*May Require Paid Registration

 

 

Content Credentials Will Fight Deepfakes in the 2024 Elections
IEEE Spectrum
Eliza Strickland
December 27, 2023


With nearly 80 countries holding major elections in 2024, the deployment of content credentialing to fight deepfakes and other AI-generated disinformation is expected to gain ground. The Coalition for Content Provenance and Authenticity (C2PA), an organization that’s developing technical methods to document the origin and history of real and fake digital-media files, in 2021 released initial standards for attaching cryptographically secure metadata to image and video files. It has been further developing the open-source specifications and implementing them with leading media companies. Microsoft, meanwhile, recently launched an initiative to help political campaigns use content credentials.
 

Full Article

 

 

'Insect Eavesdropper' Helps Protect Crops
WisBusiness
Alex Moe
January 3, 2024


A machine learning algorithm developed by University of Wisconsin-Madison's Emily Bick can detect insect infestations in plants from audio signals. The algorithm interprets insect feeding sounds picked up by clip-on or stick-on contact microphones, and can distinguish between insect chewing sounds and weather-related noises by tracking vibrations in the plant rather than sound waves in the air. Said Bick, "From those sounds, we can pre-process it and train machine learning algorithms not just to detect presence and absence, but also to differentiate these species."
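As an illustration only (not Bick's actual pipeline), a detector of this kind can be sketched as frequency-band energies computed from the vibration signal and fed to an off-the-shelf classifier; the recordings and labels below are random stand-ins:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def band_energies(signal, bands=8):
        # Summarize one vibration clip as energy in a few frequency bands.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        return [float(chunk.sum()) for chunk in np.array_split(spectrum, bands)]

    rng = np.random.default_rng(0)
    clips = rng.standard_normal((20, 8000))  # stand-ins for real recordings
    X = np.array([band_energies(c) for c in clips])
    y = rng.integers(0, 2, size=20)          # stand-in labels: 1 = feeding, 0 = weather noise

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)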

Full Article

 

 

A Magic Tool to Understanding AI: Harry Potter
Bloomberg
Saritha Rai
December 26, 2023


J.K. Rowling's Harry Potter books are being used by researchers experimenting with generative AI, due to the series' long-lasting pop culture influence. In a recent paper, Microsoft researchers used the Harry Potter books to show that AI models can be edited to eliminate any knowledge of the series, without affecting their overall decision-making and analytical abilities. Microsoft's Mark Russinovich said the universal familiarity of the Harry Potter books would make it "easier for people in the research community to evaluate the model resulting from our technique and confirm for themselves that the content has indeed been 'unlearned.'"

Full Article

*May Require Paid Registration

 

 

Mental Images Extracted from Human Brain Activity
Interesting Engineering
Sejal Sharma
December 18, 2023


"Brain decoding" technology leveraging AI can translate human brain activity into mental images of objects and landscapes, say Japanese researchers led by a team from the National Institutes for Quantum Science and Technology (QST) and Osaka University. The approach produced vivid depictions, such as a distinct leopard with discernible features (ears, mouth, and spots), and objects such as an airplane with red-wing lights. The researchers exposed participants to about 1,200 images and then analyzed and quantified the correlation between their brain signals and the visual stimuli using functional magnetic resonance imaging. This mapping was then used to train a generative AI to decipher and replicate the mental imagery derived from brain activity.
 

Full Article

 

 

AI Glasses Unlock Independence for Some Blind, Low-Vision People
The Globe and Mail (Canada)
Joe Castaldo
December 27, 2023


AI, combined with language processing and computer vision, has led to advanced applications for people who are blind or visually impaired. These include Internet-connected glasses, such as those from Netherlands-based Envision, which use an AI model to respond to voice commands like “describe scene” by capturing an image of the person’s surroundings and composing a description that is read aloud through a tiny speaker behind the user's ear. The Be My AI app, provided by U.S.-based Be My Eyes, provides descriptions of photos taken by a smartphone.

Full Article

*May Require Paid Registration

 

 

Spying on Beavers from Space Could Help Drought-Ridden Areas
Wired
Ben Goldfarb
December 28, 2023


A group of scientists and Google engineers taught an algorithm to spot beaver infrastructure in satellite imagery, with the ultimate goal of helping drought-ridden areas recover. Beaver-created ponds and wetlands store water, filter out pollutants, furnish habitat for endangered species, and fight wildfires. The Earth Engine Automated Geospatial Elements Recognition, or EEAGER, convolutional neural network-based algorithm was fed with more than 13,000 landscape images with beaver dams from seven western U.S. states, along with some 56,000 dam-less locations. The model categorized the landscape accurately 98.5% of the time.
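For illustration, a dam/no-dam image classifier of the kind described can be sketched as a small convolutional network in PyTorch (a toy stand-in; EEAGER itself is a far larger model trained on the tens of thousands of labeled locations noted above):

    import torch
    import torch.nn as nn

    class TinyDamClassifier(nn.Module):
        # Toy two-class CNN: does a 64x64 RGB landscape tile contain a beaver dam?
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

        def forward(self, x):  # x: (batch, 3, 64, 64)
            return self.head(self.features(x))

    model = TinyDamClassifier()
    logits = model(torch.randn(4, 3, 64, 64))  # four random stand-in tiles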
 

Full Article

*May Require Paid Registration

 

 

AI-Assisted Piano Allows Disabled Musicians to Perform Beethoven
Japan Today
December 24, 2023

The "Anybody's Piano" tracks notes of music and augments players’ performances by adding whatever keystrokes are needed but not pressed. At a recent performance in Tokyo, Kiwa Usami, who has cerebral palsy, was one of three musicians with disabilities performing Symphony No. 9 with the AI-powered piano. Usami helped inspire the instrument. Her dedication to practicing with one finger prompted her teachers to work with Japanese music giant Yamaha. The result of the collaboration was a revised version of Yamaha's auto-playing piano, which was released in 2015.
 

Full Article

 

 

India Boosts AI in Weather Forecasts
Reuters
Kanjyik Ghosh
December 22, 2023


India is exploring the further use of AI to build climate models to improve weather forecasting as extreme weather events proliferate across the country. The India Meteorological Department provides forecasts based on mathematical models using supercomputers. Using AI with an expanded observation network could help generate higher-quality forecast data at lower cost.
 

Full Article

 

 

Seeking a Big Edge in AI, South Korean Firms Think Smaller
The New York Times
John Yoon
December 20, 2023


South Korean firms are taking advantage of AI's adaptability to create systems from the ground up to address local needs. Some have trained AI models with sets of data rich in Korean language and culture, while others are building AI for Thai, Vietnamese, and Malaysian audiences. Some companies are eyeing customers in Brazil, Saudi Arabia, and the Philippines, and in industries like medicine and pharmacy, fueling hopes that AI can become more diverse, work in more languages, be customized to more cultures, and be developed by more countries.
 

Full Article

*May Require Paid Registration

 

Policymakers Aim To Regulate AI-Generated Replications Of Real People

Politico Magazine (12/30, Chatterjee) discusses the policy challenges associated with the “wave of AI chatbots modeled on real humans.” Politico explains the technology uses “powerful new systems known as large language models to simulate their personalities online.” While some projects do so through license agreements, there are “a small handful of projects that have effectively replicated living people without their consent. ... In Washington, spurred mainly by actors and performers alarmed by AI’s capacity to mimic their image and voice, some members of Congress are already attempting to curb the rise of unauthorized digital replicas. In the Senate Judiciary Committee, a bipartisan group of senators – including the leaders of the intellectual property subcommittee – are circulating a draft bill titled the NO FAKES Act that would force the makers of AI-generated digital replicas to license their use from the original human.”

        Politico Analysis: Effective Altruists Gain Influence In Policy Debate Over AI. Brendan Bordelon writes in a nearly 4,700-word analysis for Politico (12/30) that the effective altruism movement is gaining influence in the debate over AI policy. Bordelon says a “small army of adherents to ‘effective altruism’ has descended on the nation’s capital and is dominating how the White House, Congress and think tanks approach the technology.” He adds while “the Silicon Valley-based movement is backed by tech billionaires and began as a rationalist approach to solving human suffering,” some critics “say it has morphed into a cult obsessed with the coming AI doomsday.” Bordelon points out that EA’s “most ardent advocates ... believe researchers are only months or years away from building an AI superintelligence able to outsmart the world’s collective efforts to control it,” while some adherents believe AI “could wipe out humanity,” and “nearly all ... believe AI poses an existential threat to the human race.”

 

Major Media Organizations In Confidential Talks With OpenAI Over Content Use

The New York Times (12/29, Mullin) reported major US media organizations have been conducting confidential talks with OpenAI on the issue of pricing and terms of licensing their content to the AI firm. These discussions were brought into the open after The New York Times filed a new lawsuit against OpenAI and Microsoft “alleging that the companies used its content without permission to build artificial intelligence products.” In its suit, The Times said that it had been in talks with both companies for months prior to suing. Other media companies, “including Gannett, the largest U.S. newspaper company; News Corp, the owner of The Wall Street Journal; and IAC, the digital colossus behind The Daily Beast and the magazine publisher Dotdash Meredith,” have also “been in talks with OpenAI, said three people familiar with the negotiations.”

        Artist To Start Residency At OpenAI. The New York Times (12/30, Katz) reported Alexander Reben, the M.I.T.-educated artist, “will become OpenAI’s first artist in residence” next month. Reben “steps in as generative A.I. advances at a head-spinning rate, with artists and writers trying to make sense of the possibilities and shifting implications.” While “some regard artificial intelligence as a powerful and innovative tool that can steer them in weird and wonderful directions,” others “express outrage that A.I. is scraping their work from the internet to train systems without permission, compensation or credit.”

 

Experts Discuss 2024 AI Predictions

The Los Angeles Times (1/2, Contreras) “asked a slate of experts and stakeholders to send in their 2024 artificial intelligence predictions. The results alternated between enthusiasm, curiosity and skepticism.” Future Today Institute CEO Amy Webb said, “We may require AI systems to get a professional license. While certain fields require professional licenses for humans, so far algorithms get to operate without passing a standardized test.” Senator Chris Coons (D-DE) said, “Creators, experts and the public are calling for federal safeguards to outline clear policies around the use of generative AI, and it’s imperative that Congress do so.” Tech investor Julie Fredrickson “said she envisions the new year bringing further tensions around regulation.” Pickaxe co-founder Mike Gioia “predicts Apple will launch a ‘Photographed on iPhone’ stamp next year that would certify AI-free photos.”

        Commentary Calls For Curbing Law Enforcement’s Use Of AI. In an op-ed in the New York Times (1/2, Buolamwini, Friedman), Joy Buolamwini, founder of the Algorithmic Justice League, and Barry Friedman, a professor at New York University’s School of Law, write, “One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter – the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.” Despite applauding the OMB’s work, the writers note that “shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights.”

 

Nobel Prize-Winning Economist Warns Against Sole Focus On STEM Education

Bloomberg (1/2, Subscription Publication) reports Christopher Pissarides, a Nobel Prize-winning economist, warns against focusing solely on STEM education due to AI advancements that might render such skills obsolete. He suggests that jobs requiring empathy and creativity, like those in hospitality and healthcare, will remain essential. Pissarides emphasizes the importance of diverse skills, including managerial and social abilities, which AI is less likely to replace.

 

Morgan State University Researching Ways To Reduce Bias In AI

The Baltimore Sun (1/3) reports that “with artificial intelligence assisting everyone from college admission directors to parole boards, a group of researchers at Morgan State University says the potential for racial, gender and other discrimination is amplified by magnitudes.” “You automate the bias, you multiply and expand the bias,” said researcher Gabriella Waters, a director at a Morgan State center seeking to prevent just that. “If you’re doing something wrong, it’s going to do it in a big way.” Bias also “cropped up in an algorithm used to assess the relative sickness of patients, and thus the level of treatment they should receive, because it was based on the amount of previous spending on health care — meaning Black people, who are more likely to have lower incomes and less access to care to begin with, were erroneously scored as healthier than they actually were.”

 

OpenAI Reopens ChatGPT Plus Registrations

Insider (1/3, Nolan) reports OpenAI “has reopened sign-ups for its subscription model, ChatGPT Plus.” CEO Sam Altman “announced the news on December 13, saying the company had ‘found more GPUs.’ OpenAI previously paused access to the paid subscription service in November after a surge in demand.”

 

OpenAI To Launch Its GPT App Store Next Week

TechCrunch (1/4, Wiggers) reports OpenAI plans to launch a digital store for custom apps based on its AI models sometime in the coming week. The company “said that developers building GPTs will have to review the company’s updated usage policies and GPT brand guidelines to ensure that their GPTs are compliant before they’re eligible for listing in the store — aptly called the GPT Store. They’ll also have to verify their user profile and ensure that their GPTs are published as ‘public.’”

 

Bipartisan Artificial Intelligence Literacy Bill Introduced In Congress

Higher Ed Dive (1/4, Crist) reports a bill “introduced in Congress – the Artificial Intelligence Literacy Act – aims to build AI skills and workforce preparedness as the emerging technology continues to change workplace dynamics.” The legislation, “introduced Dec. 15, 2023, has drawn bipartisan support and endorsements from major universities, education associations and workforce partners, including the Society for Human Resource Management.” The AI Literacy Act “would amend the Digital Equity Act of 2021 to include AI literacy and training opportunities, focusing on not only the basic principles and applications of AI but also the limitations and ethical considerations.” The legislation would “also highlight the importance of AI literacy for national competitiveness, workforce preparedness and the well-being and digital safety of Americans.”

        Lobbyists Capitalize On Creating Regulations For AI. Politico (1/4, Oprysko) reports AI’s potential “to disrupt virtually every industry means that the scramble to regulate it has turned into a gold rush for K Street.” But in the latest “signal that the political community itself hasn’t managed to escape the uncertainty over the technology, the National Institute for Lobbying & Ethics, a trade group for the government affairs industry, rolled out a new task force today focused on developing a code of ethics for the use of artificial intelligence in advocacy and PAC operations.”

        Khanna: US Needs To Address Rise Of AI More Robustly To Avoid Mistakes Of Globalization. Rep. Ro Khanna (D-CA) writes in the New York Times (1/4) that policymakers in the US need to address the rise of technology and AI systems more robustly and more strategically than how the center-left embraced globalization in the 1990s and 2000s. Khanna discusses how globalization did deliver some of its promises, but also “hollowed out the working class” with “shuttered factories and rural communities that never saw the promised [knowledge] jobs materialize.” Khanna acknowledges that there is an ever-present tension between business and labor interests in these discussions, but argues that Democrats need to advocate for policies that will allow the US to benefit from the rise of new technologies like AI without simply accepting the damaging consequences of poor planning.

dtau...@gmail.com

unread,
Jan 13, 2024, 7:21:57 PM
to ai-b...@googlegroups.com

AI Helps U.S. Intelligence Track Hackers Targeting Critical Infrastructure

At a Jan. 9 conference hosted by Fordham University, cybersecurity leaders said U.S. intelligence authorities are leveraging AI to detect hackers that increasingly are using the same technology to conceal their activities. National Security Agency's Rob Joyce explained that hackers are "using flaws in the architecture, implementation problems, and other things to get a foothold into accounts or create accounts that then appear like they should be part of the network." The FBI's Maggie Dugan noted that hackers are using open source models and their own datasets to develop and train their own generative AI tools, then selling them on the dark web.
[ » Read full article *May Require Paid Registration ]

WSJ Pro Cybersecurity; Catherine Stupp (January 10, 2024)

 

 

Bug-Free Software Advances
University of Massachusetts Amherst
January 4, 2024


University of Massachusetts Amherst computer scientists used a large language model (LLM) to create a tool to help prevent software bugs. In developing Baldur, the researchers fine-tuned the Minerva LLM on 118 GB of mathematical scientific papers and webpages containing mathematical expressions, with additional fine-tuning on the Isabelle/HOL language used to write mathematical proofs. Baldur can generate a whole proof and check its work using a theorem prover. Errors are fed back into the LLM along with the failed proof so the model can learn from its mistakes before regenerating the proof.
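The generate-check-repair loop can be sketched as follows; llm and isabelle_check are hypothetical stand-ins for the fine-tuned Minerva model and the Isabelle/HOL theorem prover, not Baldur's actual interfaces:

    def prove(theorem, llm, isabelle_check, max_attempts=5):
        # Ask the model for a whole proof, then verify it with the prover.
        proof = llm(f"Prove: {theorem}")
        for _ in range(max_attempts):
            ok, error = isabelle_check(theorem, proof)
            if ok:
                return proof
            # Feed the failed proof and the prover's error back to the model.
            proof = llm(
                f"Prove: {theorem}\nFailed proof: {proof}\nError: {error}\nFix it:"
            )
        return None  # no verified proof found within the attempt budget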

Full Article

 

 

Material Found by AI Could Reduce Lithium Use in Batteries

AI and supercomputing were leveraged by researchers at Microsoft and the Pacific Northwest National Laboratory to identify a new material with the potential to reduce the use of lithium in batteries by up to 70%. It took the researchers less than a week using these technologies to narrow down 32 million potential inorganic materials to 18 promising candidates, a task that would have taken over 20 years using standard methods. It took less than nine months from the discovery of N2116, a solid-state electrolyte, to develop a working battery prototype.
[ » Read full article ]

BBC; Shiona McCallum (January 9, 2024)

 

 

Scientists Say This Is the Probability AI Will Drive Humans to Extinction
Futurism
Victor Tangermann
January 4, 2024


A recent survey of 2,778 AI researchers found that slightly more than half believe there is a 5% chance AI could make humans extinct. In addition, 10% of respondents said they believe AI could outperform humans in all tasks by 2027, while 50% of those polled said that could occur by 2047. However, 68.3% of respondents believe good outcomes from AI will outnumber bad ones.

Full Article

 

 

E-Nose Sniffs Out Coffee Varieties Nearly Perfectly

Researchers at Taiwan's National Kaohsiung University of Science and Technology developed an e-nose device that can identify coffee varieties based on their aroma. The device, which assesses gases to identify the nature of the substance at hand, features eight metal oxide semiconductor sensors, each of which detects specific gases and transmits the resulting data to an AI algorithm. In tests of several algorithms on 16 coffee bean varieties, accuracy rates ranged from 81% to 98%, with a convolutional neural network algorithm achieving the greatest accuracy.
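Conceptually, each "sniff" is an eight-number feature vector (one reading per sensor) labeled with a bean variety. The sketch below trains a simple linear classifier on random stand-in data; the published work found a convolutional neural network most accurate:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.random((160, 8))           # 160 sniffs x 8 gas-sensor channels (stand-in)
    y = rng.integers(0, 16, size=160)  # 16 coffee bean varieties (stand-in labels)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    predicted_variety = clf.predict(X[:1])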
[ » Read full article ]

IEEE Spectrum; Michelle Hampson (January 10, 2024)

 

California Budget Deficit May Upend Plans For State To Lead On AI Policy

The San Francisco Chronicle (1/5, Bollag) reports California lawmakers have already announced “a flurry of AI bills, with more on the way. Their proposals include efforts to require the state to set new safety standards, create an AI research hub and develop protections against deepfake videos and photos that look real but have been digitally altered to mislead the viewer.” However, the state’s budget deficit could prompt Gov. Gavin Newsom (D) “to veto AI bills with high price tags. The newly introduced bills have not yet been given cost estimates, but any requirements to hire people to implement the legislation or other related costs will likely run up against the reality of a massive budget deficit. The nonpartisan Legislative Analyst’s Office has estimated the state faces a $68 billion shortfall this year, which will likely require Newsom to propose cuts when he releases his budget plan next week.”

 

Survey: One-Third Of Teachers Have Used AI Tools In Their Classrooms

Education Week (1/5, Langreo) reported that it’s been “a year since ChatGPT burst onto the K-12 scene, and teachers are slowly embracing the tool and others like it.” One-third “of K-12 teachers say they have used artificial intelligence-driven tools in their classroom, according to an EdWeek Research Center survey of educators conducted between Nov. 30 and Dec. 6, 2023.” Artificial intelligence experts have “touted the technology’s potential to transform K-12 into a more personalized learning experience for students, as well as for teachers through personalized professional development opportunities.” Beyond the classroom, “experts also believe that generative AI tools could help districts become more efficient and fiscally responsible.” Teachers have “used ChatGPT and other generative AI tools to create lesson plans, give students feedback on assignments, build rubrics, compose emails to parents, and write letters of recommendation.”

 

WPost Tests ChatGPT’s Admissions Essay For Harvard. The Washington Post (1/8, Verma) reports universities are “concerned that students might use” ChatGPT to “forge admissions essays.” To find out if a chatbot-created essay is “good enough to fool college admissions counselors,” The Post asked a prompt engineer to create two essays: “one responding to a question from the Common Application...and one answering a prompt used solely for applicants to Harvard.” The AI-generated essays were “readable and mostly free of grammatical errors.” But one admissions counselor says he “would’ve stopped reading.” The essay is “such a mediocre essay that it would not help the candidate’s application or chances,” he added.

 

New York Governor To Propose AI Research Center Using $275M In State Funds

The New York Times (1/8, Ashford) reports in her third State of the State address, Gov. Kathy Hochul (D) “will propose a first-of-its-kind statewide consortium that would bring together public and private resources to put New York at the forefront of the artificial intelligence landscape.” Under the plan, Hochul would “direct $275 million in state funds toward the building of a center to be jointly used by a handful of public and private research institutions, including the State University of New York and the City University of New York.” Columbia University, Cornell University, New York University and Rensselaer Polytechnic Institute “would each contribute $25 million to the project, known as ‘Empire A.I.’” Hochul described the plan “as an important investment that would strengthen the state’s economy for years, helping to offset the disparities between tech companies and academic institutions in the race to develop A.I.”

 

Researchers Make AI Models Reproduce Trademarked Content With Simple Prompts

Insider (1/7, Varanasi) reports, “Generating a copyright lawsuit could be as easy as typing something akin to a game show prompt into an AI. When researchers input the two-word prompt ‘videogame italian’ into OpenAI’s Dall-E 3, the model returned recognizable pictures of Mario from the iconic Nintendo franchise, and the phrase ‘animated sponge’ returned clear images of the hero of ‘Spongebob Squarepants.’” The results “were part of a two-week investigation by AI researcher Gary Marcus and digital artist Reid Southen that found that AI models,” specifically Midjourney and Dall-E 3, “can produce ‘near replicas of trademarked characters’ with a simple text prompt.”

 

Metz: AI To See “Remarkably Rapid Improvement” In 2024

Cade Metz writes in the New York Times (1/8, Metz), “The A.I. industry this year is set to be defined by one main characteristic: a remarkably rapid improvement of the technology as advancements build upon one another, enabling A.I. to generate new kinds of media, mimic human reasoning in new ways and seep into the physical world through a new breed of robot.” Chatbots “will expand well beyond digital text by handling photos, videos, diagrams, charts and other media. They will exhibit behavior that looks more like human reasoning, tackling increasingly complex tasks in fields like math and science. As the technology moves into robots, it will also help to solve problems beyond the digital world.” Metz adds, “Because the systems are also learning the relationships between different types of media, they will be able to understand one type of media and respond with another. In other words, someone may feed an image into [a] chatbot and it will respond with text.”

 

California Planning To Use AI Tools To Mitigate Traffic, Make Roads Safer

The Los Angeles Times (1/8) reports the California Department of Transportation “is asking technology companies by Jan. 25 to propose generative AI tools that could help California reduce traffic and make roads safer, especially for pedestrians, cyclists and scooter riders.” AI tools “such as ChatGPT can quickly produce text, images and other content, but the technology can also help workers brainstorm ideas.” The request “shows how California is trying to tap into AI to improve government services at a time when lawmakers seek to safeguard against the technology’s potential risks.” The state’s plan “to potentially use artificial intelligence to help alleviate traffic jams stems from an executive order that Gov. Gavin Newsom signed in September about generative AI.” As part “of the order, the state also released a report outlining the benefits and risks of using AI in state government.”

 

Maryland Governor Signs Executive Order Calling For State To Develop AI Guide Rails

The Washington Post (1/8) reports Maryland Gov. Wes Moore (D) “signed an executive order calling for the state to develop guide rails to protect residents from the risk of bias and discrimination as artificial intelligence becomes increasingly useful and common, though the order did not specify how the government intends to use AI in the future.” The order “acknowledged the potential for AI to be a ‘tremendous force for good’ if developed and deployed responsibly.” However, Moore’s order “also called out the risk that the technology could perpetuate harmful biases, invade citizens’ privacy, and expose sensitive data when used inappropriately or carelessly.”

 

Google Faces Multi-Billion Dollar AI Patent Trial

Reuters (1/9, Brittain, Raymond) reports Google appeared “before a federal jury in Boston on Tuesday to argue against a computer scientist’s claims that it should pay his company $1.67 billion for infringing patents that allegedly cover the processors used to power artificial intelligence technology in Google products.” A lawyer representing Singular Computing, founded by “computer scientist Joseph Bates, told jurors that Google copied Bates’ technology after repeatedly meeting with him to discuss his ideas to solve a problem central to developing AI.” The lawyer “said that after Bates shared his computer-processing innovations with Google from 2010 to 2014, the tech giant unbeknownst to him copied his patented technology rather than licensing it to develop its own AI-supporting chips.”

 

OpenAI Executive’s Lobbying Strategy Seen As Fostering Trust In Company, CEO

The Washington Post (1/9) profiles OpenAI vice president of global affairs Anna Makanju, who “has engineered [CEO Sam] Altman’s transformation from a start-up darling into the AI industry’s ambassador.” The Post says, “When global leaders were rattled during Altman’s dramatic five-day ouster in November,” Makanju “reassur[ed] them that the company would continue to exist.” The Post adds, “Tech companies traditionally shun Washington until trouble emerges, asking for forgiveness rather than permission. ... But Makanju, a veteran of SpaceX’s Starlink and Facebook” as well as having national security experience during the Obama Administration, “has turned the Silicon Valley lobbying blueprint on its head” by spending “years courting policymakers with a more solicitous message: Regulate us. Thanks to her strategy, Altman has emerged as a rare tech executive lawmakers from both parties appear to trust.”

 

Survey: How AI Is Impacting What College Students Study

Inside Higher Ed (1/10, Flaherty) reports some students “are already being exposed to how artificial intelligence can help them in the workforce,” but even beyond “specialized training, nearly three in four students say their institutions should be preparing them for AI in the workplace, at least somewhat. So finds a new flash survey of 1,250 students across 49 four- and two-year colleges from Inside Higher Ed and College Pulse’s Student Voice series.” Among other takeaways from the survey, “AI is impacting what students plan to study, especially newer students. Asked how much the rise of artificial intelligence has influenced what they’re studying or plan to study in college, 14 percent of students over all say it’s influenced them a lot.” Similar to how students “say AI is impacting their academic plans, 11 percent of students over all say that the rise of AI has significantly influenced their career plans.”

        Higher Ed Leaders, Scholars Discuss Benefits And Pitfalls Of AI Tool Usage. Diverse Issues in Higher Education (1/10, Kyaw) reports artificial intelligence (AI) tools “such as ChatGPT can prove very valuable and promising in the realm of higher education but come with their own suite of issues that need to be considered, according to higher ed leaders and faculty who participated in a panel discussion on Wednesday.” The panel, hosted by the American Association of Colleges and Universities, “invited a number of scholars in higher ed to weigh in on the potential and challenges that AI tools may bring to the field.” While AI “has promise in terms of recognizing student patterns and how they relate to student persistence and retention,” one scholar said, generative AI tools “also come with issues of factual inaccuracy and sourcing, according to the panelists.” For instance, given how these tools amass their data collections by pulling from numerous sources, “concerns over copyright are present as well,” said panelist Dr. Bryan Alexander, a senior scholar at Georgetown University.

 

New Study Reveals Thousands Of AI Experts Are Divided About What They’ve Created

Vox (1/10, Piper) reports researchers at the AI Impacts project recently followed up their groundbreaking 2016 survey with an updated one. The 2016 survey startled the field when the median respondent “gave a 5 percent chance of human-level AI leading to outcomes that were ‘extremely bad, e.g. human extinction.’” That means half of the researchers surveyed gave an estimate higher than 5 percent and half gave a lower one. The 2023 survey says “between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction,” a result that may not be as pessimistic as it first appears, since “The researchers surveyed don’t subdivide neatly into doomsaying pessimists and insistent optimists. Many people...who have high probabilities of bad outcomes also have high probabilities of good outcomes.”

 

Bipartisan Group Unveils Legislation To Require AI Guidelines For Federal Agencies, Government Vendors

Reuters (1/10, Alper) reports, “A bipartisan group of congressmen on Wednesday unveiled legislation that would require federal agencies and their artificial intelligence vendors to adopt best practices for handling the risks posed by AI, as the U.S. government slowly moves toward regulating the technology.” The proposed bill, “sponsored by Democrats Ted Lieu and Don Beyer alongside Republicans Zach Nunn and Marcus Molinaro, is modest in scope but has a chance of becoming law since a Senate version was introduced last November by Republican Jerry Moran and Democrat Mark Warner.” The bill, if approved, “would require federal agencies to adopt AI guidelines unveiled by the Commerce Department last year.”

 

EU Looking Into Microsoft’s Partnership With OpenAI

CNN (1/9, Fung) reports, “The European Union is looking into Microsoft’s partnership with OpenAI and whether it may warrant a formal merger investigation, EU officials said Tuesday.” The move “follows a similar announcement by UK antitrust officials last month, and a report by Bloomberg that the US Federal Trade Commission was conducting a preliminary probe,” and “highlights growing scrutiny of OpenAI after a high-profile leadership crisis last year resulted in the abrupt firing and reinstatement of” CEO Sam Altman, as well as Microsoft “gaining a non-voting seat on OpenAI’s board.” CNN adds, “The inquiry is part of a wider effort to assess competition in the AI field, and officials are also reviewing some of the business contracts that other AI startups have with large tech companies, the commission noted.”

 

AI Regulation Likely To Be Among Main Issues For State Legislatures During 2024

The Washington Post (1/11) reports that “workforce shortages, housing and artificial intelligence are likely to dominate state legislatures this year.” Some states are already “setting up task forces to research AI, while others, including South Carolina, are looking at restricting the use of deepfakes made in campaign advertising.” So far, about 15 states “have already adopted resolutions or enacted laws around AI.” Connecticut was “one of the first, establishing an office focused on AI while introducing initial restrictions on the industry. It plans to consider further limitations this year.” Meanwhile, Utah and Arkansas “are among the states that have passed digital privacy laws or bills of rights to limit the use of social media by minors or restrict social media companies’ use of customer information.”

 

West Virginia Releases AI Policy For School Districts

Education Week (1/11) reports West Virginia this week “became only the third state to release guidance on how districts and schools should use artificial intelligence.” West Virginia officials “sought to explain how existing laws and policies on issues like cheating and student data privacy apply to AI tools, said Erika Klose, the state’s director of P12 Academic support.” Klose said, “There are many AI products being developed that we know will be marketed to our county school districts. … We wanted to point out that AI is a technology. It’s a new technology. It’s kind of an amazing technology. But it’s a technology nonetheless.” So far, “only two other states – California and Oregon – have released AI guidance specifically for K-12 education,” and at least “11 others are in the process of developing it, according to a report by the Center on Reinventing Public Education at Arizona State University.”

 

Tech Executives, Education Researchers Divided On Future Of AI-Assisted Instruction

The New York Times (1/11, Singer) reports Sal Khan, the chief executive of Khan Academy, “gave a rousing TED Talk last spring in which he predicted that A.I. chatbots would soon revolutionize education,” and afterward, prominent tech executives “began issuing similar education predictions.” The spread of generative A.I. tools like ChatGPT, “which can give answers to biology questions and manufacture human-sounding book reports, is renewing enthusiasm for automated instruction – even as critics warn that there is not yet evidence to support the notion that tutoring bots will transform education for the better.” Some tech executives envision that, “over time, bot teachers will be able to respond to and inspire individual students just like beloved human teachers,” though some education researchers say schools “should be wary of the hype around A.I.-assisted instruction.”

dtau...@gmail.com

unread,
Jan 21, 2024, 7:33:43 PM
to ai-b...@googlegroups.com

AI's Latest Challenge: The Math Olympics

New York University computer scientist Trieu Trinh has developed an AI model that can solve geometry problems from the International Mathematical Olympiad at a level nearly on par with human gold medalists. Trinh served as a resident at Google while developing AlphaGeometry, now part of Google DeepMind's series of AI systems. In a test on 30 Olympiad geometry problems from 2000-2022, AlphaGeometry solved 25, versus an average of 25.9 for a human gold medalist during that same period.

[ » Read full article *May Require Paid Registration ]

The New York Times; Siobhan Roberts (January 17, 2024)

 

 

Australia Responds to Rapid Rise of AI

In response to the accelerated use of AI technologies, the Australian government has announced plans to establish an expert advisory committee to formulate mandatory "safeguards" for the highest-risk AI technologies, such as self-driving vehicle software, predictive technologies used by law enforcement, and hiring-related AI tools. Such safeguards could include independent testing requirements and ongoing audits. Additionally, organizations using high-risk AI could be required to appoint someone to be responsible for safe use of the technologies.
[ » Read full article ]

ABC News (Australia); Jake Evans (January 16, 2024)

 

 

AI Has a Trust Problem. Can Blockchain Help?

Researchers at the data-analytics firm FICO and the blockchain-focused startup Casper Labs are among those developing and training AI algorithms using blockchain technology. FICO's Scott Zoldi explained that blockchain can track the data used to train the algorithm and the various steps taken to vet and verify the data. Meanwhile, Casper is collaborating with IBM on a tool that would allow companies to revert to an earlier version of a model if bias or inaccuracies are identified.
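The audit-trail idea can be illustrated with a minimal append-only hash chain in Python, where each training event commits to the hash of the previous record so later tampering is detectable (a conceptual sketch, not FICO's or Casper Labs' actual systems):

    import hashlib
    import json

    chain = []  # append-only log of model-lifecycle events

    def record(event):
        # Each entry hashes its body plus the previous entry's hash,
        # so altering any earlier record breaks every later hash.
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
        chain.append({"prev": prev, "event": event,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})

    record({"step": "ingest", "dataset": "training_data_v1"})
    record({"step": "train", "model": "credit_model", "version": 7})
    record({"step": "audit", "result": "bias check passed"})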

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Isabelle Bousquette (January 11, 2024)

 

 

Will Chatbots Teach Your Children?

Khan Academy and Duolingo are among the online learning platforms that have rolled out AI chatbot tutors based on OpenAI's large language model GPT-4. The rise of generative AI tools has pushed the idea of automated instruction to the forefront, with some tech executives hopeful that bot teachers would be able to engage with individual students like human teachers while providing customized instruction. However, some education researchers stress that AI chatbots can be biased and provide false information, and there is little transparency when it comes to how they formulate answers.

[ » Read full article *May Require Paid Registration ]

The New York Times; Natasha Singer (January 11, 2024)

 

 

Our Fingerprints May Not Be Unique

Columbia University researchers developed an AI tool that can determine whether prints from different fingers came from a single person. The tool analyzed 60,000 fingerprints and was 75% to 90% accurate. Though uncertain how the AI makes its determinations, the researchers believe it concentrates on the orientation of the ridges in the center of a finger; traditional forensic methods look at how the individual ridges end and fork.
[ » Read full article ]

BBC; Zoe Kleinman (January 11, 2024)

 

 

AI-Driven Misinformation 'Biggest Short-Term Threat to Global Economy'

The World Economic Forum's annual risks report, based on a survey of 1,300 experts, revealed that respondents believe the biggest short-term threat to the global economy will come from AI-driven misinformation and disinformation. This is a major concern, given that elections will be held this year in countries accounting for 60% of global gross domestic product. Other short-term risks cited by respondents include extreme weather events, societal polarization, cyber insecurity, and interstate armed conflict.
[ » Read full article ]

The Guardian; Larry Elliott (January 10, 2024)

 

New Generation Of AI-Powered Tools Aim To Help Those With Disabilities

Axios (1/12, Heath) reported “AI is fueling a new generation of technologies to help people who live with disabilities,” technologies that will “be life-changing for people living with a disability and will be essential in supporting our aging population as health care costs skyrocket.” Companies featured such new products at this year’s CES 2024 in Las Vegas; “the new class of tech emerging is built on the experiences and data of people living with disabilities – and the hope is that it’s more affordable and scalable than existing services.” The most popular categories of AI-powered tools to assist those with disabilities include speech recognition and computer vision.

 

Researchers Develop AI-Powered Tool To Diagnose Rheumatic Heart Disease Early

The Washington Post (1/16, Johnson) reports “in an advance that shows the potential of artificial intelligence to aid medicine, researchers at Children’s National have developed a new AI-powered tool for diagnosing rheumatic heart disease long before a patient needs surgery.” In collaboration with “staff at the Uganda Heart Institute, the team designed a system that will allow trained nurses to screen and diagnose children early on, when they can still be treated with penicillin for less than $1 a year.” This “early treatment could save thousands from having to undergo surgery.” Use of AI in healthcare “has been exploding since 2018,” and now “there are almost 700 FDA-approved artificial intelligence and machine learning-enabled medical devices.”

 

Regulators, Organizations Increasingly Feeling “Anxiety” About Role Of AI In Global Environment

The Washington Post (1/13, De Vynck, J. Lynch) reported a growing number of regulators, organizations, and observers are experiencing “anxiety” about the role of AI across multiple industries. For example, the Financial Industry Regulatory Authority (FINRA) has “labeled AI an ‘emerging risk,’” while the World Economic Forum recently “released a survey that concluded AI-fueled misinformation poses the biggest near-term threat to the global economy.” The reports “came just weeks after the Financial Stability Oversight Council in Washington said AI could result in ‘direct consumer harm’ and Gary Gensler, the chairman of the Securities and Exchange Commission (SEC), warned publicly of the threat to financial stability from numerous investment firms relying on similar AI models to make buy and sell decisions.” Meanwhile, some observers have warned that the rise of AI technology has also been used by multiple governments – including China – to help seed propaganda in opposition regions and nation-states.

 

Elon Musk Wants Greater Control Of Tesla Before Building Its AI

Bloomberg (1/15, Chan, Subscription Publication) reports, “Elon Musk said he would rather build AI products outside of Tesla Inc. if he doesn’t have 25% voting control, suggesting the billionaire may prefer a bigger stake” in the company. Musk “currently owns more than 12% of the company according to data compiled by Bloomberg.”

 

Elon Musk Expresses Desire For More Control Of Tesla To Further AI Capabilities

Insider (1/16, Nolan) reports that Elon Musk, in a post on X, “said he was ‘uncomfortable’ about expanding [Tesla’s] AI and robotics capabilities without controlling 25% of the votes.” Musk is quoted saying in a follow-up post, “If I have 25%, it means I am influential, but can be overridden if twice as many shareholders vote against me vs for me. At 15% or lower, the for/against ratio to override me makes a takeover by dubious interests too easy. ... Unless that is the case, I would prefer to build products outside of Tesla.” The Wall Street Journal (1/16, Orru, Subscription Publication) reports Musk, in another post on X, expressed comfort with a dual-class voting structure to gain greater control of Tesla, but was told such an arrangement was impossible after its initial public offering.

 

CUNY To Use $75 Million Gift To Support New York Governor’s AI Project

The New York Times (1/16, Barron) reports New York Gov. Kathy Hochul last week “called for a statewide consortium on artificial intelligence. She outlined a public-private partnership that would be spurred on by $275 million in state money, with a center that would be used by half a dozen public and private universities. Each would contribute $25 million to the project, known as Empire A.I. Tomorrow, one of the six institutions, the City University of New York, will announce that it is receiving a $75 million gift and that $25 million will be CUNY’s contribution to Empire A.I.”

 

OpenAI CEO At Davos: Future AI Depends On Energy Breakthrough

Reuters (1/16, Dastin) reports, “OpenAI’s CEO Sam Altman on Tuesday said an energy breakthrough is necessary for future artificial intelligence, which will consume vastly more power than people have expected. Speaking at a Bloomberg event on the sidelines of the World Economic Forum’s annual meeting in Davos, Altman said the silver lining is that more climate-friendly sources of energy, particularly nuclear fusion or cheaper solar power and storage, are the way forward for AI. ‘There’s no way to get there without a breakthrough,’ he said. ‘It motivates us to go invest more in fusion.’ In 2021, Altman personally provided $375 million to private U.S. nuclear fusion company Helion Energy, which since has signed a deal to provide energy to Microsoft in future years. Microsoft is OpenAI’s biggest financial backer and provides it computing resources for AI. Altman said he wished the world would embrace nuclear fission as an energy source as well.”

        Altman Says AI Does Not Require Vast Quantities Of Data From Publishers. Bloomberg (1/16, Subscription Publication) reports, “Artificial intelligence doesn’t need vast quantities of training data from publishers like The New York Times Co., according to OpenAI Chief Executive Officer Sam Altman, in a response to allegations his startup is poaching copyrighted material.” At the World Economic Forum in Davos, Altman is quoted saying, “There is this belief held by some people that you need all my training data and my training data is so valuable. ... Actually, that is generally not the case. We do not want to train on the New York Times data, for example.”

        OpenAI Working With Pentagon On Cybersecurity Tools. Bloomberg (1/16, Subscription Publication) reports OpenAI is working “with the Pentagon on a number of projects including cybersecurity capabilities, a departure from the startup’s earlier ban on providing its artificial intelligence to militaries.” The ChatGPT developer is making tools “with the US Defense Department on open-source cybersecurity software, and has had initial talks with the US government about methods to assist with preventing veteran suicide, Anna Makanju, the company’s vice president of global affairs, said in an interview at Bloomberg House at the World Economic Forum in Davos on Tuesday.” OpenAI also “said that it is accelerating its work on election security, devoting resources to ensuring that its generative AI tools are not used to spread political disinformation.”

 

Alphabet CFO Touts Potential Of AI In Health Care

Alphabet CFO Ruth Porat “said her own experience of breast cancer helped her understand the ‘extraordinary’ potential of AI in health care,” Bloomberg Law (1/16, Seal, Subscription Publication) reports (paywall). Porat “said after learning about progress Google made in early metastatic breast cancer detection with AI, she called her own oncologist at Memorial Sloan Kettering Cancer Center and asked: ‘Is this really as important as I hope?’” Porat’s “oncologist told her it was the only technology that could democratize healthcare, she recalled, in an interview with David Rubenstein at Bloomberg House at the World Economic Forum in Davos on Tuesday.”

 

Lawmakers Propose Bipartisan Bill To Criminalize Deepfake Nudes Of Real People

The Wall Street Journal (1/16, Jargon, Subscription Publication) reports that on Tuesday, Reps. Joseph Morelle (D-NY) and Tom Kean (R-NJ) re-introduced the “Preventing Deepfakes of Intimate Images Act,” which would criminalize the nonconsensual sharing of digitally-altered intimate images. The Journal explains the bipartisan move comes in response to an incident at Westfield High School in New Jersey, where boys were sharing AI-generated nude images of female classmates without their consent.

        California Assemblymember Proposes Bill Cracking Down On Harmful AI-Generated Content. Politico (1/16, Korte) reports a California state lawmaker “wants to crack down on AI-generated depictions of child sexual abuse as tech companies face growing scrutiny nationally over their moderation of illicit content.” A new bill “from Democratic Assemblymember Marc Berman, first reported in California Playbook, would update the state’s penal code to criminalize the production, distribution or possession of such material, even if it’s fictitious.” Among the backers “is Common Sense Media, the nonprofit founded by Jim Steyer that for years has advocated for cyber protections for children and their privacy.” The legislation “has the potential to open up a new avenue of complaints against social media companies, who are already battling criticisms that they don’t do enough to eradicate harmful material from their websites.”

 

Teens Discuss Views Of AI’s Impact On Career Prospects

Education Week (1/16) reports on how high school students are thinking about the ways AI will impact their current and future lives. AI is “already changing how they interact with each other on social media, what and how they’re learning in school, and how they are thinking about careers. Surveys have shown that teens are concerned about how artificial intelligence will impact their future job prospects.” EdWeek interviews two Illinois high school seniors about “how they’ve used AI tools, their concerns about the technology, and how they see it affecting their career plans.” One student said, “I’m a little worried because I see how many jobs could be affected, especially potential jobs for our generation. If we want to get into jobs that AI can do, then that worries me.” The other student said, “It’s when the tools are used negatively, that’s when it becomes a problem. Using ChatGPT to cheat, that’s a problem. AI is a great resource, and as long as we use it correctly then it can be wonderful. It’s in the hands of its users.”

 

Survey: University Librarians Stress Need For AI Ethics

Inside Higher Ed (1/17, Coffey) reports according to a newly released survey conducted in May 2023 by Leo Lo, president-elect of the Association of College and Research Libraries, “nearly three-quarters of university librarians say there’s an urgent need to address artificial intelligence’s ethical and privacy concerns. ... Roughly half the librarians surveyed said they had a ‘moderate’ understanding of AI concepts and principles, according to the study released Friday.”

 

Ferris State University Enrolls AI Transfer Students In Courses To Better Understand Student Experiences

Inside Higher Ed (1/18, Coffey) reports Ferris State University’s newest transfer students “are AIs created by the Michigan-based university, which is enrolling them in courses. The project is a mix of researching artificial intelligence and online classrooms while getting a peek into a typical student’s experience.” However, some academics are “raising concerns about privacy, bias and the potential accuracy of garnering student experiences from a computer.” To help “build” the AI students, Ferris State students – human ones – answered a slew of questions, including about how they felt the first day on campus, anxieties they had and their experiences at the college. The pilot program has yet to kick off, as the AI students “will be enrolled in a general education course this semester.” The students will start by “listening to the class online, with the hope of eventually bringing them to ‘life’ as classroom robots that can speak with other students.”

 

OpenAI Announces Partnership With Arizona State University

Fortune