
Dr. T's AI brief

29 views

dtau...@gmail.com

unread,
Dec 18, 2022, 12:59:44 PM
to ai-b...@googlegroups.com

DeepMind's AlphaCode Can Outcompete Human Coders
Gizmodo
Mack DeGeurin
December 8, 2022


DeepMind's AlphaCode model performed well against human coders in programming competitions, with a paper describing its overall performance as similar to that of a "novice programmer" with up to a year of training. AlphaCode achieved "approximately human-level performance" on previously unseen problems posed in natural language by predicting code segments and generating millions of potential solutions, which it then winnowed down to a maximum of 10 submissions. The researchers said these solutions were produced "without any built-in knowledge about the structure of computer code." Carnegie Mellon University's J. Zico Kolter wrote, "Ultimately, AlphaCode performs remarkably well on previously unseen coding challenges, regardless of the degree to which it 'truly' understands the task."
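
The sample-and-filter recipe described above is easy to sketch. Below is a minimal, hypothetical Python rendition of the pipeline (sample many programs, keep those passing the example tests, cluster by behavior, submit up to 10); sample_program, passes_examples, and output_signature are invented stand-ins, not anything from DeepMind's code.

import random
from collections import defaultdict

def sample_program(problem: str) -> str:
    # Stand-in for the language model's sampler (hypothetical).
    return f"# candidate for: {problem} ({random.random():.6f})"

def passes_examples(program: str, examples) -> bool:
    # Stand-in: execute the program on the visible example tests.
    return random.random() < 0.01  # the vast majority of samples fail

def output_signature(program: str) -> int:
    # Stand-in: fingerprint of the program's behavior on extra inputs,
    # used to group behaviorally identical candidates together.
    return hash(program) % 7

def select_submissions(problem, examples, n_samples=100_000, k=10):
    candidates = (sample_program(problem) for _ in range(n_samples))
    survivors = [p for p in candidates if passes_examples(p, examples)]
    clusters = defaultdict(list)
    for p in survivors:
        clusters[output_signature(p)].append(p)
    # Submit one representative from each of the k largest clusters.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [c[0] for c in ranked[:k]]

print(len(select_submissions("sum two integers", examples=[])))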

Full Article

*May Require Paid Registration

 

Deep Learning Gets Boost from Reconfigurable Processor
IEEE Spectrum
Michelle Hampson
December 6, 2022


The reconfigurable ReAAP processor developed by a multi-institutional team of researchers trounced several computing platforms used to support deep neural networks. ReAAP is an integrated software-hardware system featuring a software compiler that assesses and optimizes deep learning workloads. Upon ascertaining the optimal solution for processing data in parallel, the processor instructs the reconfiguration of the hardware coprocessor, which apportions the proper hardware resources. "As an end-to-end system, ReAAP can be deployed to accelerate various deep learning applications just by customizing a Python script in [the] software for each application," said Jianwei Zheng at the Hong Kong University of Science and Technology. ReAAP's compiler performs 1.9 to 5.7 times as fast as the next-best compiler running on a graphics processing unit and 1.6 to 3.3 times as fast as the same compiler running on a central processing unit.

Full Article

 

 

Smallest Robotic Arm Is Controlled by AI
Aalto University (Finland)
December 7, 2022


Researchers at Finland's Aalto University and the Finnish Center for Artificial Intelligence have manipulated silver atoms into a lattice configuration via deep reinforcement learning, a critical advance for nanodevice construction. The approach rewards the algorithm for correct guidance and outputs. "It took the algorithm on the order of one day to learn and then about one hour to build the lattice," said Aalto's I-Ju Chen. She also said such atomic guidance can be used for testing how and whether nanodevices operate at their absolute limit, as well as for exposing properties related to superconductivity or quantum states.

Full Article

 

 

Locomotion Modeling Evolves with Brain-Inspired Neural Networks
EPFL (Switzerland)
December 6, 2022


A neural network system developed by researchers at Switzerland's EPFL (École polytechnique fédérale de Lausanne) incorporates fundamental principles of biological sensorimotor control to better understand the ability of animals and humans to adapt their movements to environmental and biological changes. The DMAP (Distributed Morphological Attention Policy) network architecture was able to learn to "walk" with a body subject to "morphological perturbations," such as changes in limb length and thickness. The researchers said DMAP "combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism, to dynamically gate sensory information from different body parts to different controllers."
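
The attention-gating idea is compact enough to sketch. The following is a minimal, hypothetical PyTorch rendition, not the EPFL code: each joint gets its own small controller, and a learned attention matrix decides how strongly each sensory channel feeds each controller. All layer sizes and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class DMAPSketch(nn.Module):
    """Sketch of a distributed policy with attention-gated sensing."""
    def __init__(self, n_sensors=24, n_joints=8, hidden=32):
        super().__init__()
        # One attention row per joint: a score for every sensory channel.
        self.attn = nn.Parameter(torch.zeros(n_joints, n_sensors))
        # Independent controller (small MLP) for each joint.
        self.controllers = nn.ModuleList([
            nn.Sequential(nn.Linear(n_sensors, hidden), nn.Tanh(),
                          nn.Linear(hidden, 1))
            for _ in range(n_joints)])

    def forward(self, obs):                        # obs: (batch, n_sensors)
        weights = torch.softmax(self.attn, dim=-1) # (n_joints, n_sensors)
        torques = []
        for j, ctrl in enumerate(self.controllers):
            gated = obs * weights[j]               # gate sensors per joint
            torques.append(ctrl(gated))            # per-joint action
        return torch.cat(torques, dim=-1)          # (batch, n_joints)

policy = DMAPSketch()
print(policy(torch.randn(2, 24)).shape)            # torch.Size([2, 8])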

Full Article

 

 

Amazon Tests ML Software to Analyze Satellite Images from Space
Space.com
Tereza Pultarova
December 6, 2022


Amazon, Italian space startup D-Orbit, and Swedish computing technology developer Unibap have spent 10 months testing in space Amazon Web Services (AWS) machine learning (ML) software that can analyze satellite images and transmit the best ones to Earth. The software ran on D-Orbit's ION satellite; AWS said the Unibap-built ML payload analyzed "large quantities of space data directly onboard" the satellite. AWS said the software also reduced the size of images sent to Earth by up to 42%. Said Unibap's Fredrik Bruhn, "Providing users real-time access to AWS edge services and capabilities on orbit will allow them to gain more timely insights and optimize how they use their satellite and ground resources."

Full Article

 

 

What AI-Generated COVID News Tells Us That Journalists Don't
McGill University (Canada)
December 6, 2022

An artificial intelligence (AI) developed by researchers at Canada's McGill University identified biases in news reporting on COVID-19. The AI created simulated news coverage using headlines from CBC articles as prompts. In comparing the simulated news stories to actual CBC reporting, the researchers found that CBC journalists focused more on personalities and geopolitics, while the AI produced more disease-centered reporting. McGill's Andrew Piper said, "Reporting on real-world events requires complex choices, including decisions about which events and players take center stage. By comparing what was reported with what could have been reported, our study provides perspective on the editorial choices made by news agencies."
 

Full Article

 

 

Deepfake Detector Spots Fake Videos of Ukraine's President Zelensky
New Scientist
Jeremy Hsu
December 7, 2022


A deepfake detector can accurately identify fraudulent videos of Ukraine's President Volodymyr Zelensky and can be trained to flag deepfakes of other prominent figures. Researchers at the University of California, Berkeley and the Czech Republic's Johannes Kepler Gymnasium trained a computer model on more than eight hours of publicly posted videos featuring Zelensky. The detector vets 10-second clips from a single video, analyzing up to 780 behavioral characteristics; when multiple clips from the same video are flagged as false, human analysts should take a closer look. Siwei Lyu of the University at Buffalo in New York said the deepfake detector's holistic head-and-upper-body analysis is uniquely suited to identifying doctored videos.

Full Article

 

 

AI Enables Largest Brain Tumor Study to Date
Penn Medicine News
December 5, 2022


Researchers at the University of Pennsylvania Health System (Penn Medicine) and Intel led the largest-yet machine learning (ML) research project to compile data from brain scans of 6,314 glioblastoma patients worldwide. The researchers applied federated learning, which brings the ML algorithm to the data rather than moving the data to a central repository. They pre-trained an initial model on publicly available data from the International Brain Tumor Segmentation challenge, then added data to create a more accurate preliminary consensus model. This model and additional data were fed into the final consensus model to optimize and evaluate generalizability to unseen data. "The more data we can feed into machine learning models, the more accurate they become, which in turn can improve our ability to understand, treat, and remove glioblastoma in patients with more precision," explained Penn Medicine's Spyridon Bakas.
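
Federated learning's core move, training locally and sharing only model updates, can be sketched in a few lines. The following is a generic federated-averaging (FedAvg) sketch in Python with NumPy and synthetic data; it illustrates the technique, not the Penn Medicine/Intel implementation.

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site trains on its own data; raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        pred = X @ w
        w -= lr * X.T @ (pred - y) / len(y)   # gradient step, squared loss
    return w

# Three hypothetical sites with private (synthetic) data.
Xs = [rng.normal(size=(50, 4)) for _ in range(3)]
true_w = np.array([1.0, -2.0, 0.5, 3.0])
sites = [(X, X @ true_w + rng.normal(scale=0.1, size=50)) for X in Xs]

w_global = np.zeros(4)
for _ in range(20):
    # Each site sends back only its updated weights...
    local = [local_update(w_global, X, y) for X, y in sites]
    # ...and the server averages them into a consensus model.
    w_global = np.mean(local, axis=0)

print(np.round(w_global, 2))   # approaches [ 1.  -2.   0.5  3. ]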

Full Article

 

 

Large Language Models Help Decipher Clinical Notes
MIT News
Rachel Gordon
December 1, 2022


Massachusetts Institute of Technology (MIT) researchers tapped large language models to disentangle unstructured clinical notes in electronic health records, in order to extract meaningful information. The researchers employed a GPT-3-style model to execute tasks such as expanding overloaded jargon and acronyms and extracting drug regimens. Said MIT's David Sontag, "The research team's advances in zero-shot clinical information extraction makes scaling possible. Even if you have hundreds of different use cases, no problem—you can build each model with a few minutes of work, versus having to label a ton of data for that particular task." MIT's Hunter Lang said the researchers also devised a method of formatting task prompts so the model’s outputs are in the proper format.
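
The zero-shot recipe amounts to careful prompt design. Here is a hypothetical Python sketch of the kind of prompt that asks a GPT-3-style model to expand clinical shorthand and return structured output; the wording, sample note, and model name are invented for illustration and are not from the MIT paper.

# Hypothetical zero-shot clinical-extraction prompt (illustrative only).
note = "pt c/o SOB, hx of CHF. start lasix 40mg po qd."

prompt = f"""Expand all abbreviations in the clinical note below, then list
each drug with its dose, route, and frequency as JSON.

Note: {note}

Expanded note and JSON:"""

# With the OpenAI Python client of that era, the call looked roughly like:
#   import openai
#   resp = openai.Completion.create(model="text-davinci-003",
#                                   prompt=prompt, temperature=0)
#   print(resp["choices"][0]["text"])
print(prompt)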
 

Full Article

 

 

Sampling, Pipelining Method Speeds Deep Learning on Large Graphs
MIT News
Lauren Hinkel
November 29, 2022


The SAmpling, sLIcing, and data movemeNT (SALIENT) methodology devised by Massachusetts Institute of Technology (MIT) and IBM Research scientists speeds up graph neural network (GNN) training and inference by clearing three bottlenecks in the computational pipeline. The researchers' optimizations increased graphics processing unit (GPU) utilization in the PyTorch Geometric library for GNNs from 10% to 30%, yielding up to double the performance of public benchmark codes. They addressed bottlenecks caused by graph-sampling and mini-batch preparation algorithms at the beginning of the data pipeline by combining data-structure and algorithmic optimizations, improving the sampling operation about threefold. MIT's Nickolas Stathas said SALIENT also leverages modern processors, parallelizing feature slicing to further reduce per-epoch runtime.
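
The pipelining idea, overlapping CPU-side batch preparation with GPU-side compute so neither waits on the other, can be sketched with a simple prefetch queue. This is a generic Python illustration of the pattern, not SALIENT's actual implementation; the timings are simulated stand-ins.

import queue
import threading
import time

def prepare_batch(i):
    """CPU work: sample a subgraph and slice its features (simulated)."""
    time.sleep(0.01)
    return f"batch-{i}"

def train_step(batch):
    """GPU work: forward/backward on the prepared mini-batch (simulated)."""
    time.sleep(0.01)

def producer(q, n_batches):
    for i in range(n_batches):
        q.put(prepare_batch(i))   # runs ahead while the "GPU" is busy
    q.put(None)                   # sentinel: no more batches

q = queue.Queue(maxsize=4)        # bounded prefetch buffer
threading.Thread(target=producer, args=(q, 100), daemon=True).start()

start = time.time()
while (batch := q.get()) is not None:
    train_step(batch)             # overlaps with preparation of next batches
print(f"epoch time: {time.time() - start:.2f}s")  # roughly half the serial time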

Full Article

 

 

U.S. Steel Looks to Forge High-Tech Future at Mills New and Old
The Wall Street Journal
Isabelle Bousquette
December 1, 2022


U.S. Steel Corp. is looking to implement artificial intelligence (AI) across its mills, although deploying AI technology at its older mills has proven difficult. Some of the equipment at U.S. Steel's 110-year-old Gary Works mill in Indiana, for instance, dates to the 1950s and 1960s. U.S. Steel's Christian Holliday said each mill needs its own AI model based on its unique environment. However, machines at older mills must be retrofitted with sensors and cameras to collect the necessary data, and the mills often lack the wireless network capacity to support those sensors. Nevertheless, machine learning algorithms are being implemented at Gary Works to boost efficiency, and a digital twin is being developed to predict finish times and optimize output.

Full Article

*May Require Paid Registration

 

Analysis: Google Faces Threat From New AI Chatbot

In an analysis for Bloomberg (12/7), Parmy Olson writes that “a new chatbot from OpenAI took the internet by storm this week, dashing off poems, screenplays and essay answers that were plastered as screenshots all over Twitter. ... Though the underlying technology has been around for a few years, this was the first time OpenAI has brought its powerful language-generating system known as GPT3 to the masses, prompting a race by humans to give it the most inventive commands.” Olson continues, “Some people are already finding practical uses for ChatGPT, including programmers who are using it to draft code or spot errors. But, the system’s biggest utility could be a financial disaster for Google by supplying superior answers to the queries we currently put to the world’s most powerful search engine.”

        AI Behind Chatbots, Search Queries Could Help Companies Discover New Drugs. The Wall Street Journal (12/7, Hao, Subscription Publication) reports on the artificial intelligence behind chatbots and search queries, and how it could help companies discover new drugs.

Experts, Tech Companies Discuss Potential Impact Of Viral AI Text Bot

ABC News (12/9, Zahn) reported that ChatGPT, “an artificial intelligence-driven program that responds to user prompts, has dominated social networks in recent days, as viral posts demonstrate it composing Shakespearean poetry, musing philosophically and identifying bugs in computer code.” Made available “to the general public for testing late last month, ChatGPT set off an internet sensation that drew more than a million users in its first week and reignited interest in the effort to replicate human insight – all while stoking controversy over potential bias and free speech limits.” ChatGPT, “according to OpenAI, uses artificial intelligence to speak back and forth with human users on a wide range of subjects.” The chatbot “scans text across the internet and develops a statistical model that allows it to string words together in response to a given prompt.” People have “requested that ChatGPT perform an array of tasks, such as gathering highly specific information, fixing broken computer code and writing a biblical verse about how to remove a peanut butter sandwich from a VCR.”

        The Washington Post (12/10) reported that “the tool has captivated the internet, attracting more than a million users with writing that can seem surprisingly creative.” In viral social media posts, ChatGPT “has been shown describing complex physics concepts, completing history homework and crafting stylish poetry.” Some tech executives and venture capitalists “contend that these systems could form the foundation for the next phase of the web, perhaps even rendering Google’s search engine obsolete by answering questions directly, rather than returning a list of links.” Its use has “also fueled worries that the AI could deceive listeners, feed old prejudices and undermine trust in what we see and read.” ChatGPT and other “generative text” systems “mimic human language, but they do not check facts, making it hard for humans to tell when they are sharing good information or just spouting eloquently written gobbledygook.”

        College Admissions Officers Concerned About AI-Written Personal Essays. Forbes (12/9, Whitford) reports ChatGPT, OpenAI’s new natural language tool, can not only “write clear essays, but it can also conjure up its own personal details and embellishments that could up a student’s chance of acceptance and would be difficult to verify.” AI-written essays “could pose a challenge for college admissions officers who increasingly have to rely on personal essays in the admissions process because many colleges eliminated standardized test scores as a requirement.” However, ChatGPT shows some limitations: “it struggled with word counts, often delivering an essay that was several hundred words shorter than what was requested, even if it said it achieved the word limit.” It also failed “to reference real faculty members,” instead “name-dropping professors from a variety of other universities.” Jim Jump, director of college counseling at St. Christopher’s School and former admissions officer at Randolph–Macon College in Virginia, said, “I probably couldn’t pick it out as having been written by AI, but it resembles ‘cliche’ essays that students write with assistance from essay consultants.” David Hawkins, chief education and policy officer at the National Association for College Admission Counseling, “says that while GPT’s writing is clean, grammatically correct and well structured, it is likely too vague and flat to stand out in a crowded applicant pool.”

Teaching Experts Discuss Changes To Education Amid Rise Of ChatGPT

The Chronicle of Higher Education (12/13, McMurtrie) reports that “scholars of teaching, writing, and digital literacy say there’s no doubt that” AI tools like ChatGPT “will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science.” It is critical, they “say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.” In doing so, they “say, academics must also recognize that this initial public reaction says as much about our darkest fears for higher education as it does about the threats and promises of a new technology.” In this vision, “college is a transactional experience where getting work done has become more important than challenging ourselves to learn.” Assignments and assessments “are so formulaic that nobody could tell if a computer completed them.” Faculty members are “too overworked to engage and motivate their students.”

Manufacturers Utilizing Artificial Intelligence To Inspect PCBs

Forbes (12/5, Fernandez) reports manufacturers are turning to AI in order to inspect printed circuit boards (PCBs) and other electronic components. The inspection of PCBs “is an intense and laborious task that necessitates specialized skills,” and manual inspection “can contribute to the high cost of poor quality (COPQ) due to rework, high scrap and the additional expense of uncovering failures later in production.” In addition, there are few qualified workers available to replace experienced inspectors when they retire. Data from the Labor Bureau “reveals there were 846,000 job openings in the U.S. manufacturing sector in August 2022, and Deloitte forecasts the number will rise to more than 2 million by 2030.” The latest developments in artificial intelligence “have given rise to highly automated, human-in-the-loop visual quality inspection (VQI) systems that can outperform automated optical inspection (AOI) incumbents.”

Fired Google Engineer Says AI Chatbot Has Racial Biases

Insider (7/31, Jamal) reports Blake Lemoine, “a former Google engineer, has ruffled feathers in the tech world in recent weeks for publicly saying that an AI bot he was testing at the company may have a soul.” Lemoine “told Insider in a previous interview that he’s not interested in convincing the public that the bot, known as LaMDA, or Language Model for Dialogue Applications, is sentient.” However, “it’s the bot’s apparent biases – from racial to religious – that Lemoine said should be the headlining concern.” Lemoine “blames the AI’s biases on the lack of diversity of the engineers designing them.” He said, “The kinds of problems these AI pose, the people building them are blind to them. They’ve never been poor. They’ve never lived in communities of color. They’ve never lived in the developing nations of the world. They have no idea how this AI might impact people unlike themselves.”

Oregon State Researchers Use AI To Save Bees From Pesticides

The Newberg (OR) Graphic (7/27, Stewart) reported “researchers at Oregon State University College of Engineering have developed artificial intelligence” to save bees from pesticides. The project, “headed by assistant professor of chemical engineering Cory Simon and associate professor of computer science Xiaoli Fern, entailed using a machine learning model to predict the toxicity of new herbicides, insecticides or fungicides toward bees through their molecular structures. The National Science Foundation supported this research.” The results, “published in The Journal of Chemical Physics’ special issue ‘Chemical Design by Artificial Intelligence,’ are significant due to the dependence of many if not most fruit, vegetable, seed and nut crops on bee pollination.”

dtau...@gmail.com

unread,
Dec 24, 2022, 9:40:07 AM
to ai-b...@googlegroups.com

Artists Protest AI-Generated Artwork on ArtStation
Ars Technica
Benj Edwards
December 15, 2022


Members of the ArtStation online community are protesting the presence of artificial intelligence (AI)-generated artwork on the site by adding "No AI Art" images to their portfolios. The protest takes aim at Stable Diffusion, an open source image-synthesis model that generates novel images from text descriptions or prompts, because its training dataset featured publicly accessible artwork scraped from ArtStation without the artists' permission, among other criticisms. ArtStation has issued an FAQ in response to the protest indicating it will not ban AI-generated artwork and will add tags "enabling artists to choose to explicitly allow or disallow the use of their art for (1) training non-commercial AI research, and (2) training commercial AI."

Full Article

 

 

Chatbots Could Change the World. Can You Trust Them?
The New York Times
Cade Metz
December 10, 2022


Potentially revolutionary chatbot technology cannot be relied on to always tell the truth or perform tasks correctly, while its potential for abuse is also concerning. Many experts envision chatbots like artificial intelligence (AI) laboratory OpenAI's ChatGPT or Google's Language Model for Dialogue Applications (LaMDA) replacing Internet search engines or even serving as personal tutors. However, data scientists like Aaron Margolis are finding flaws, which in LaMDA's case includes the mixing of facts with fiction because it was trained on information posted online. ChatGPT is not as proficient as LaMDA in free-flowing conversation, but like Google's AI, was trained on digital text gathered from the Internet. Such technologies have been replicated and widely distributed, but their developers cannot deter people from using the chatbots to spread misinformation.

Full Article

*May Require Paid Registration

 

 

Scientists Apply ML Method to Help Diagnose Deadly Respiratory Illness
UC San Diego Today
Emerson Dameron
December 13, 2022


Researchers at the University of California, San Diego, the Indian Institute of Technology, and the Indian Institute of Information Technology (IIIT) have developed a novel machine learning algorithm to help diagnose pneumonia from chest x-rays. The model features a two-way confirmation system that could complement clinicians and minimize human and computer error. Former IIIT researcher Abhibha Gupta said the three-level optimization model builds on earlier neural architecture search-based frameworks to find the best architecture from a set of candidates for detecting pneumonia. Gupta said this involves implementing the Learning By Teaching framework, which "consists of a teacher and student model that train together in an end-to-end manner to improve their learning abilities."

Full Article

 

 

Using ML to Improve Toxicity Assessment of Chemicals
University of Amsterdam (Netherlands)
December 13, 2022

Researchers at the Netherlands' University of Amsterdam, Australia's University of Queensland, and the Norwegian Institute for Water Research have formulated a machine learning model to better evaluate chemical toxicities. The model forgoes Quantitative Structure-Activity Relationship (QSAR) prediction to directly classify the acute aquatic toxicity of chemicals based on molecular descriptors. The researchers trained and tested the model with data on 907 chemicals for acute fish toxicity. The model characterized about 90% of the variance in the training set data and roughly 80% for the test set data. This decreased incorrect categorization fivefold compared to a QSAR regression model-based approach.
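
As a generic illustration of the approach, classifying acute toxicity directly from molecular descriptors instead of regressing a QSAR endpoint, here is a minimal scikit-learn sketch on synthetic data. The descriptors, labels, and model choice are stand-ins; the paper's actual features and architecture may differ.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic stand-in: 907 "chemicals", 20 molecular descriptors each.
X = rng.normal(size=(907, 20))
# Synthetic binary labels (e.g., acutely toxic vs. non-toxic to fish).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=907) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Classify toxicity categories directly from the descriptors.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")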
 

Full Article

 

 

AI Enables More Effective Humanitarian Action
EPFL (Switzerland)
December 12, 2022


An artificial intelligence (AI) program developed by researchers at Switzerland's EPFL and ETH Zurich, the International Committee of the Red Cross, and Qatar's Hamad Bin Khalifa University can estimate population density using only a regional-level rough estimate. The Pomelo program aggregates public data from remote sensing systems based on weightings learned by a neural network. The data includes building counts, average building sizes, proximity to roads, road maps, and night lighting. In tests using data from several African countries, Pomelo was found to generate accurate population maps for areas as small as a hectare (about 2.5 acres) and with populations of 1,000 to 2,000 residents.
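
Pomelo's core move, learning how to weight public remote-sensing covariates so fine-grained estimates add up to a known coarse total, can be illustrated simply. Below is a minimal NumPy sketch with invented data and a linear weighting; the real system uses a neural network and richer covariates.

import numpy as np

rng = np.random.default_rng(2)

# Per-hectare covariates: building count, average building size,
# road proximity, night-light intensity (all synthetic here).
n_cells = 1000
features = rng.uniform(size=(n_cells, 4))

# Only a coarse regional population total is known.
regional_total = 150_000

# Learn non-negative weights so the weighted features match the total.
w = np.ones(4)
for _ in range(500):
    est = features @ w                       # per-cell population estimate
    err = est.sum() - regional_total         # disaggregation constraint
    w -= 1e-6 * err * features.sum(axis=0)   # gradient on squared error
    w = np.clip(w, 0, None)                  # populations can't be negative

print(f"estimated total: {(features @ w).sum():.0f}")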

Full Article

 

 

Big Tech, Community Colleges Partner on Education
Voice of America News
Andrew Smith
December 10, 2022


Community colleges and big technology companies are partnering on programs to educate students in artificial intelligence (AI), data science, user experience design, and other fields. Dell Technologies, Intel, Google, and Amazon are among the companies that have established training programs for community college students. Intel has partnered with 74 community colleges in 32 states on its AI for Workforce Program, with plans for partnerships in all 50 states by the end of next year. Meanwhile, Dell and Intel have joined forces on the Artificial Intelligence Incubator Network, which has awarded $40,000 grants to 10 community colleges to put toward the construction of AI laboratories. The network also supports virtual AI training and aims to help improve AI education through monthly meetings between the tech companies and community colleges.

Full Article

 

Experts Say Schools Should Adopt Machine Learning Into Curriculums

K-12 Dive (12/21, Barack) reports that experts “believe having students learn about machine learning is a crucial piece of learning how AI technology works and should start when they’re young.” Then as students get older, “the curriculum can broaden to include ethical issues, such as AI bias or how data is collected and applied, and who ultimately owns it.” When teaching students the basics “of machine learning and how AI operates, Michael Daley, an associate professor and director of the Center for Professional Development & Education Reform at the Warner School of Education at the University of Rochester, suggests classes should learn how to apply a machine learning approach to solve problems.”

 

Rise Of AI-Generated Selfies Renew Conversation Over Data Ownership

TIME (12/19, Johl) reports, “The recent flooding of social media feeds with AI-generated ‘portraits’ derived from databases of artists’ work has renewed conversation over data ownership.” It also highlights the fact that “the fundamental rights, principles, and freedoms users are giving up during” the data “exchange remains largely unchecked.” Web3 technology companies have made promises of “decentralized technologies to open up the possibility for individual ownership and monetization of data, returning power to ‘creators.’” But the question of data ownership cannot “be solved by technology or the rhetoric of ‘algorithmic governance.’” The solution “requires addressing fundamental rights that precede capitalist exchanges.”

 

Georgia School District Designs School Curriculum Around Artificial Intelligence

Education Week (8/15) reports on Seckinger High School in Gwinnett County, GA, where a social studies teacher “pauses a lesson on the spread of cholera in the 19th century to discuss how data scientists use AI tools today to track diseases.” A math class “full of English-language learners uses machine learning to identify linear and non-linear shapes.” The simplest explanation “of this technology is that it trains a machine to do tasks that simulate some of what the human brain can do.” That means it “can learn to do things like recognize faces and voices, understand natural language, and even make recommendations.” While the Gwinnett County school district, “which with more than 177,000 students is among the largest in the country, opened Seckinger high school this month to relieve overcrowding elsewhere, the focus of the school is unique.” Seckinger is “apparently the only high school in the country dedicated to teaching AI as part of its curriculum, not just as an elective class, according to CSforAll.”

 

Analysis: AI Has Yet To Transform Healthcare As Anticipated

Politico (8/15, Leonard, Reader) reports, “Investors see health care’s future as [being] inextricably linked with artificial intelligence.” This is “obvious from the cash pouring into AI-enabled digital health startups, including more than $3 billion in the first half of 2022 alone and nearly $10 billion in 2021.” During “a conference in 2016, Geoffrey Hinton, British cognitive psychologist and ‘godfather’ of AI, said radiologists would soon go the way of typesetters and bank tellers.” Yet, over “five years since Hinton’s forecast, radiologists are still training to read image scans. Instead of replacing doctors, health system administrators now see AI as a tool clinicians will use to improve everything from their diagnoses to billing practices. AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it.” Meanwhile, “government is just beginning to grapple with its regulatory role.”

 

AI Increasingly Used In Creation Of Ads

The Wall Street Journal (8/10, Coffee, Subscription Publication) reports on the increasing role AI is playing in the creation of advertisements, as well as the benefits and drawbacks to the concept. Among the advantages to using AI is the ability to quickly complete projects that might otherwise take days.

 

dtau...@gmail.com

unread,
Dec 31, 2022, 9:02:01 AM
to ai-b...@googlegroups.com

Words Prove Their Worth as Teaching Tools for Robots
Princeton University School of Engineering and Applied Science
Molly Sharlach
December 21, 2022


Princeton University researchers taught a simulated robot arm to use tools more quickly by supplementing its training with human-language descriptions, an approach called Accelerated Learning of Tool Manipulation with LAnguage models. Adding descriptions of a tool's form and function to the training process improved the robot's ability to manipulate tools that were not in its original training set. The researchers queried OpenAI's deep learning GPT-3 language model for tool descriptions, then chose a training set of 27 tools and tasked the robotic arm to push, lift, sweep a cylinder along a table, or hammer a peg into a hole with nine test tools. In most cases, they found the descriptions improved the robot's ability to use the new tools.
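
The language-querying step is simple to picture. Here is a hypothetical Python sketch of asking a GPT-3-era model to describe a tool and appending the reply to the robot's observation as extra conditioning; the prompt wording, canned replies, and feature-fusion step are illustrative assumptions, not the paper's code.

def describe_tool(tool_name: str) -> str:
    """Ask a language model for the tool's form and function.

    With the OpenAI client of that era, the call looked roughly like:
        resp = openai.Completion.create(
            model="text-davinci-002",
            prompt=f"Describe the shape and typical use of a {tool_name}.")
    Canned strings are returned here so the sketch runs offline.
    """
    canned = {
        "hammer": "a heavy head on a handle, used to strike pegs or nails",
        "squeegee": "a flat rubber blade on a handle, used to sweep surfaces",
    }
    return canned.get(tool_name, "an unfamiliar tool")

def build_observation(sensor_state: list, tool_name: str) -> list:
    """Concatenate raw sensors with an embedded description (stubbed)."""
    description = describe_tool(tool_name)
    text_embedding = [float(len(description))]  # stand-in for a real encoder
    return sensor_state + text_embedding

print(build_observation([0.1, 0.4, -0.2], "hammer"))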
 

Full Article

 

 

Computer Architecture Is Being Reimagined at Technion
The Jerusalem Post (Israel)
Judy Siegel-Itzkovich
December 21, 2022


Shahar Kvatinsky and colleagues at Israel's Technion-Israel Institute of Technology, working with Israeli chipmaker Tower Semiconductor, have built a neural network directly into a processor's hardware as a proof of concept, and trained it to recognize handwritten letters. The researchers engineered the chip to store and process information, with programming integrated within the processor. The chip, which learns through deep-belief algorithms, was able to distinguish between individual examples of each letter, and recognized them 97% of the time despite extremely low energy consumption.
 

Full Article

 

 

Software Helps Interpret Complex Data
Helmholtz-Zentrum Berlin (Germany)
December 20, 2022

Software developed by researchers at Germany's Helmholtz-Zentrum Berlin (HZB) can interpret complex data using two self-learning neural networks (NNs): one compresses the data, and the other reconstructs a low-noise version of it. HZB's Gregor Hartmann explained, "In the process, the two NNs are trained so that the compressed form can be interpreted by humans." Hartmann said the special class of NNs, known as disentangled variational autoencoder networks, can extract the data's underlying core principle without prior knowledge. The software was used to determine the photon energy of the FLASH free-electron laser at the Deutsches Elektronen-Synchrotron research center from single-shot photoelectron spectra. Said Hartmann, "We succeeded in extracting this information from noisy electron time-of-flight data, and much better than with conventional analysis methods."
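
For readers unfamiliar with the building block, here is a minimal variational-autoencoder sketch in PyTorch: an encoder compresses the input to a small latent code and a decoder reconstructs a denoised version. It is a textbook VAE, not HZB's disentangled-autoencoder code, and all dimensions are illustrative.

import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, n_in=256, n_latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)       # latent mean
        self.logvar = nn.Linear(64, n_latent)   # latent log-variance
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    recon_err = ((recon - x) ** 2).sum()                       # denoising fit
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()    # regularizer
    return recon_err + kl   # disentangled variants reweight the KL term

vae = TinyVAE()
x = torch.randn(8, 256)                  # stand-in for noisy spectra
recon, mu, logvar = vae(x)
print(vae_loss(x, recon, mu, logvar).item())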
 

Full Article

 

Former IBM Watson Developer Aims To Address AI Shortcomings

The New York Times (8/28, Lohr) reports David Ferrucci, former lead developer of IBM’s Watson computer, hopes to “address A.I.’s shortcomings” with his new company, Elemental Cognition. Ferrucci hopes to make AI “a trusted ‘thought partner,’ a skilled collaborator at work and at home, making suggestions and explaining them.” His company “is taking measured steps toward that goal with a promising, though unproven, hybrid approach” which “combines the latest developments in machine learning with a page from the A.I.’s past, software modeled after human reasoning.”

 

Algorithms Can Now Mimic Any Artist. Some Artists Hate It

Wired (8/19, Nast) reports that as access to AI art generators begins to widen, more artists are raising questions about their capability to mimic the work of human creators. Algorithms have been used to generate art for decades, but a new era of AI art began in January 2021, when AI development company OpenAI announced DALL-E, a program that used recent improvements in machine learning to generate simple images from a string of text. This July, OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.

 

Teachers Test AI Model That Writes Essays

Education Week (8/19, Kwapo) reported that “a recent technology called GPT-3, a machine-learning model that understands and generates natural language text, is attempting to” use artificial intelligence to “do any form of writing.” Created “by an artificial intelligence company called OpenAI, GPT-3, formally known as Generative Pre-trained Transformer, is trained to recognize 540 billion words and 175 billion parameters, which are the variables that allow AI models to make predictions.” The training “enables the technology to produce human-like text for several types of writing, including outlines, long-form essays, sales pitches, and poems.” EdWeek “asked teachers to test out and assess the technology.” Their impressions “depended heavily on the kind of skills being taught to students and their classroom objectives.” Some teachers “saw the model as a benefit to students who have minimal writing skills.” Others, “tasked with teaching students more complex types of writing, did not find much value in the technology.”

dtau...@gmail.com

unread,
Jan 1, 2023, 1:45:11 PM
to ai-b...@googlegroups.com

Code-Generating AI Can Introduce Security Vulnerabilities
TechCrunch
Kyle Wiggers
December 28, 2022


Software engineers who use code-generating artificial intelligence (AI) systems are more likely to introduce security vulnerabilities in the apps they develop, according to researchers affiliated with Stanford University. Their study looked at Codex, an AI code-generating system developed by research lab OpenAI. The researchers recruited developers to use Codex to complete security-related problems across programming languages, including Python, JavaScript, and C. Participants who had access to Codex were more likely to write incorrect and “insecure” solutions to programming problems than a control group, and they were more likely to say that their insecure answers were secure.
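
To make the finding concrete, here is the kind of vulnerability such a study looks for: the first query below concatenates user input into SQL (injectable, and typical of naive generated code), while the second is the parameterized fix. This is a generic illustration, not an example drawn from the Stanford dataset.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"   # attacker-controlled string

# Insecure: string concatenation lets the input rewrite the query.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'").fetchall()
print("injectable query leaked:", rows)       # returns every secret

# Secure: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)  # returns nothing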
 

Full Article

 

 

AI Behind ChatGPT Could Help Spot Early Signs of Alzheimer's Disease
Drexel University
December 22, 2022

OpenAI's GPT-3, the artificial intelligence algorithm used in its ChatGPT chatbot, could be used to predict the early stages of dementia, according to researchers at Drexel University. The researchers trained the program using transcripts from a dataset of speech recordings that were compiled to test the ability of natural language processing (NLP) programs to predict dementia, then had the program identify whether transcripts from the dataset were produced by someone in the early stages of Alzheimer's. They found that GPT-3 outperformed two of the top NLP programs in accurately identifying both Alzheimer's and non-Alzheimer's examples. The researchers also found that GPT-3 was nearly 20% more accurate in predicting patient scores on the Mini-Mental State Exam, which is commonly used to predict dementia severity.
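
One plausible way to use a large language model for this task, consistent with the summary above, is to embed each transcript and train a lightweight classifier on the embeddings. The sketch below is a hedged illustration of that general recipe; the embedding call, toy transcripts, and labels are assumptions, not the Drexel pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(transcript: str) -> np.ndarray:
    """Stand-in for an LLM embedding call. With OpenAI's client of that
    era, the call looked roughly like:
        openai.Embedding.create(model="text-embedding-ada-002",
                                input=transcript)
    Here words are hashed into a fixed vector so the sketch runs offline."""
    vec = np.zeros(64)
    for word in transcript.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / max(len(transcript.split()), 1)

transcripts = ["the uh the boy is uh taking the the cookie",
               "the boy reaches for the cookie jar on the shelf"]
labels = [1, 0]   # 1 = early-dementia example (synthetic toy labels)

X = np.stack([embed(t) for t in transcripts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))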
 

Full Article

 

 

Dendrocentric AI Could Run on Watts, Not Megawatts
IEEE Spectrum
Charles Q. Choi
December 20, 2022


A study by Stanford University's Kwabena Boahen proposes a method that would allow neural networks to run on watts drawn from a smartphone battery instead of megawatts of power in the cloud. Rather than mimic synapses (the spaces between neurons), Boahen's computational model emulates dendrites, where a neuron receives signals from other cells and which branch out to allow a single neuron to be connected with many others. The computational model is designed so that the dendrite responds only if signals are received from neurons in a precise sequence, which means each dendrite could encode data using higher base systems, based on the number of connections and the length of the signal sequences. Boahen said a 1.5-micrometer-long ferroelectric field-effect transistor, comprised of a string of ferroelectric capacitors, with five gates could mimic a 15-micrometer-long stretch of dendrite with five synapses.
 

Full Article

 

 

DeepMind's AI Cuts Energy Costs for Cooling Buildings
New Scientist
Jeremy Hsu
December 20, 2022


Researchers at DeepMind, Google, and building control system manufacturer Trane Technologies developed an artificial intelligence (AI) system that can control building cooling systems in various weather conditions in a way that maintains occupants' comfort while minimizing energy usage. The AI was used to control cooling systems at university campus buildings and a mixed-use building comprised of apartments, restaurants, and shops. It was trained with less than a year of historical data from each building's standard cooling system. The AI generated and assigned scores to possible actions for each decision it made, based on knowledge gained from monitoring factors like weather patterns and changing levels of cooling demand. The researchers determined that the AI saved 9% to 13% on energy required for cooling over a three-month period.
 

Full Article

*May Require Paid Registration

 

Meta Announces Meta AI Universal Speech Translator

VentureBeat (10/19, Dey) reports that about a week after Google announced its “Translation Hub” speech translation, Meta has “announced the launch of universal speech translator (UST) project, which aims to create AI systems that enable real-time speech to-speech translation across all languages, even those that are spoken but not commonly written.” Meta CEO Mark Zuckerberg issued a statement saying, “Meta AI built the first speech translator that works for languages that are primarily spoken rather than written. We’re open-sourcing this so people can use it for more languages.” Synthesis AI founder and CEO Yashar Behzadi “said that one of the current challenges for UST models is the computationally expensive training that’s needed because of the breadth, complexity and nuance of languages.”

AI Could Provide Early Warning System For Diagnosing Sepsis

The Atlantic (10/16, Bajaj) reports, “Each year in the United States, sepsis kills more than a quarter million people – more than stroke, diabetes, or lung cancer.” One reason “for all this carnage is that if sepsis is not detected in time, it’s essentially a death sentence.” That “may soon change.” Back in July, Johns Hopkins researchers “published a trio of studies in Nature Medicine and npj Digital Medicine showcasing an early-warning system that uses artificial intelligence.” The system “caught 82 percent of sepsis cases and significantly reduced mortality.”

Stanford Digital Economy Lab Director Says Focus Of AI Should Be Augmenting Human Labor, Not Automating It

Wired (10/13, Nast) reports Stanford Digital Economy Lab Director Erik Brynjolfsson says AI research has been too focused on how it can copy and compete with humans. This has led to a situation where “productivity gains go to the owners of firms, not to workers,” and Brynjolfsson says it is “the single biggest explanation” why most wages have stagnated over the last few decades. He “thinks real economic growth lies in building AI that augments humans: It should do things people can’t.” Brynjolfsson says this is a challenge, because it’s much easier to imagine an AI doing something a human has already done, but he believes using AI for augmentation rather than automation will ultimately be “where most of the value comes from.”

AI-Based Diagnosis Will Be More Accessible With Google Cloud’s Medical Imaging Suite

SiliconANGLE (10/4, Wheatley) reports Google Cloud announced Tuesday that it is bringing vision-based artificial intelligence (AI) to healthcare with the launch of its new medical imaging suite, Vision AI. SiliconANGLE says, “Medical imaging is one of the most critical tools used by hospitals to diagnose patients, and each year billions of images are made by clinicians to help them understand why people are sick. Google said medical images are so important that they account for about 90% of all healthcare data.” Google Cloud’s Medical Imaging Suite uses AI algorithms to scan medical images, helping ensure more accurate diagnoses more quickly than normal, which increases healthcare worker productivity while increasing patient care.

Biden Administration Unveils AI Bill Of Rights Which Aims To Protect Personal Data, Limit Surveillance

The AP (10/4, Burke) reports the Biden Administration announced “a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people’s personal data and limit surveillance.” The Blueprint for an AI Bill of Rights, as it is called, “notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said.” The Wall Street Journal (10/4, Loten, Subscription Publication) says the EU’s General Data Protection Regulation is more strict than the US guidelines.

        Reuters (10/4, Dave, Bose) reports this guidance is meant to “help parents, patients and workers avert harm from the increasing use of automation in education, health care and employment.” The proposal “suggests numerous practices that developers and users of AI software should voluntarily follow to prevent the technology from unfairly disadvantaging people.”

Google, Microsoft Defend Practices As White House Releases AI Bill Of Rights

Insider (10/6, Haskins) reports the White House Office of Science and Technology Policy released its AI Bill of Rights on Tuesday. While the White House was coming up with the proposal, “several organizations wrote to the government with feedback and singled out Google as a company with concerning practices in AI and personal-data collection, newly published documents show.” Since the company “has a gargantuan amount of people’s personal data, it creates worrying use cases and potential for abuse, several civil-rights groups said.” In response, Google “said in its feedback that it was aware of the risks and is responding accordingly through internal company principles on AI, privacy, and security.” Google “argued that it uses robust security to protect biometric data, tests its technology across different demographics, and carefully considers use cases.” According to Insider, “Several major companies, including Apple and Amazon, did not submit responses.”

        Insider (10/7, Haskins) reports that while drafting the AI Bill of Rights, many “groups singled out Microsoft’s historically biased facial recognition system, data-based collaborations with police, and interest in health data and emotion detection.” While civil rights groups expressed their concerns, “Microsoft said in its feedback that since 2018, it’s made efforts to better understand the social implications of facial recognition, safeguard against certain risks, and adopt internal policies like its Facial Recognition Principles and Responsible AI Standard.” Similarly, “its Face API facial recognition tool has a Transparency Note that mentions ‘the importance of keeping a human in the loop for deployments’ of facial recognition.” Various groups also “expressed concern about Microsoft’s interest in emotion detection and biometrics technology.”

Bill Requires Federal Officials Involved With Procuring AI To Receive Training On Emerging Tech’s Capabilities, Risks

FedScoop (9/30) reports a bill that would “require federal officials involved with procuring artificial intelligence to receive training on the emerging technology’s capabilities and risks has reached President Biden’s desk, after passing the House on Thursday.” If the AI Training for the Acquisition Workforce Act is “signed into law, the Office of Management and Budget would have one year to establish a 10-year training program within executive agencies managing AI programs and logistics.”

US Imposes New Restrictions On China’s Access To Semiconductor Technology

Bloomberg (10/7, King) reports that the Biden Administration “announced new restrictions on China’s access to US semiconductor technology, adding measures aimed at stopping Beijing’s push to develop its own chip industry and advance the country’s military capabilities.” According to Bloomberg, “The new measures will include restrictions on the export of some types of chips used in artificial intelligence and supercomputing and also tighten rules on the sale of semiconductor manufacturing equipment to any Chinese company.” Bloomberg says the Biden Administration is “looking to ensure that Chinese companies don’t act as a conduit for the transfer of technology to their country’s military – and that chipmakers there don’t develop the capability to make advanced semiconductors themselves.”

China Chip Industry Group “Disappointed” By New US Chip Export Restrictions

Reuters (10/12, Horwitz) reports the China Semiconductor Industry Association (CSIA) “said on Thursday it was ‘disappointed’ by recent U.S. export controls and warned they could put more stress on global supply chains.” The CSIA said in a statement, “Not only will such unilateral measure harm the further global supply chain of the semiconductor industry, more importantly it will create an atmosphere of uncertainty, which will negatively affect the trust, goodwill, and spirit of cooperation that the players of the global semiconductor industry have carefully cultivated over the past decades.” The CSIA “added that it hoped the U.S. government would ‘adjust the course of action’ and ‘return to the well-established framework of the World Semiconductor Council (WSC) and the Government and Authority Meeting on Semiconductor (GAMS).’”

        Yangtze Memory Technologies Abandoned By US Suppliers. The Wall Street Journal (10/12, A1, Huang, Subscription Publication) reports Yangtze Memory Technologies Co. has stopped receiving support from key suppliers, including KLA and Lam Research, as a result of the new US semiconductor export restrictions.

        Naura Technology Group Tells US Employees To Stop Product Development Immediately. Communications Today (IND) (10/13) reports Naura Technology Group “has told its American employees in China to stop taking part in component and machinery development to comply with Washington’s restrictions on the involvement of US citizens in key facilities on the mainland, according to a source briefed on the decision.” The Beijing-based company “asked its American engineers to stop working on research and development projects with immediate effect, the source said.”

Researchers Worry About Rapid Spread Of AI-Generated Images

The Washington Post (9/28, Tiku) reports that “since the research lab OpenAI debuted the latest version of DALL-E in April, the AI has dazzled the public, attracting digital artists, graphic designers, early adopters, and anyone in search of online distraction.” The ability “to create original, sometimes accurate, and occasionally inspired images from any spur-of-the-moment phrase, like a conversational Photoshop, has startled even jaded internet users with how quickly AI has progressed.” Five months later, “1.5 million users are generating 2 million images a day.” On Wednesday, OpenAI “said it will remove its waitlist for DALL-E, giving anyone immediate access.” The introduction of DALL-E “has triggered an explosion of text-to-image generators. Google and Meta quickly revealed that they had each been developing similar systems, but said their models weren’t ready for the public.” Rival start-ups soon “went public, including Stable Diffusion and Midjourney, which created the image that sparked controversy in August when it won an art competition at the Colorado State Fair.”

        The technology is “now spreading rapidly, faster than AI companies can shape norms around its use and prevent dangerous outcomes.” Researchers “worry that these systems produce images that can cause a range of harms, such as reinforcing racial and gender stereotypes or plagiarizing artists whose work was siphoned without their consent.” Fake photos “could be used to enable bullying and harassment – or create disinformation that looks real.”

Nonprofit Attempts To Solve AI Talent Shortage With New Recruitment Program

Fortune (9/27, Kahn) reports that “one of the perpetual problems among businesses hoping to use A.I. is finding people with the right skills in data science and machine learning.” This talent is “sparse and not evenly distributed across the globe.” In fact, there are many countries, “and even entire regions, that are currently being left behind when it comes to A.I. skills and, as a result, lack companies able to build their own A.I.-enabled software.” Sara Hooker “thinks the limited access to real-world experience building cutting-edge A.I. is a problem for everyone” and she “wants to change that – enabling more people, especially those not from traditional power-house computer science PhD programs at just a handful of major research universities, to work on projects that can push the state of the art in A.I. forward.”

        A former Google Brain researcher, Hooker is “now head of Cohere for AI, a non-profit research lab affiliated with the for-profit A.I. software company Cohere.” The company was “also founded by Google Brain alums and specializes in selling access to ultra-large language models, the kind of A.I. behind recent advances in natural language processing.” Last week, Cohere for AI “announced a new program that will take people interested in conducting A.I. research from almost any region of the world and provide them an eight months-long, full-time, paid fellowship with Cohere for AI working on large language model.” Cohere for AI’s “Scholar” program “will accept candidates based on the strength of their ideas and the projects they want to pursue, regardless of whether they have a traditional academic research background, Hooker says.”

New York AI Bias Law Prompts Uncertainty

The Wall Street Journal (9/21, Vanderford, Subscription Publication) reports that New York City is preparing to implement a law that would require bias audits of AI-based hiring systems. The law, which will come into effect in January, will make companies liable for violations. However, businesses and service providers are contending with how to comply with the new regulations as the AI audit process does not have clearly established guidelines.

 

University Of Maryland Professor Focuses On Hidden Biases In AI

The Baltimore Sun (9/13, Lora) reports that Lauren Rhue “researches the fast-paced world of artificial intelligence and machine learning technology,” but she “wants everyone in it to slow down.” Rhue, “an assistant professor of information systems at the University of Maryland Robert H. Smith School of Business, recently audited emotion recognition technology within three facial recognition services: Amazon Rekognition, Face++ and Microsoft.” Her research “revealed what Rhue called ‘really stark’ racial disparities.” Rhue collected photos “of Black and white NBA players from the 2016 season, controlling for the degree to which they were smiling.” She then “ran those photos through the facial recognition software.” In general, the models “assigned more negative emotions to Black players, Rhue found.” Additionally, “if the players had ambiguous facial expressions, the Black players were more likely to be assumed to have a negative facial expression, while white players were more likely to be ‘given the benefit of the doubt.’”

        Despite the disparities she’s uncovered, Rhue “does believe technology can be utilized for good.” For example, Rhue has “researched crowdfunding on digital platforms with a focus on Kickstarter, which curates campaigns based on staff interest.” In an effort to highlight projects “put forward by Black creators, she found that using predictive models rather than relying on subjective human analysis increased recommendation rates for Black projects without lowering the rate of success.”

Alphabet CEO: AI Tech Not Even Close To Being Sentient

Fortune (9/7, Robison) reports, “Alphabet CEO Sundar Pichai said the company’s artificial intelligence technology is not anywhere near being sentient and may never get there, even as he touted A.I. as central to the $1.4 trillion company’s future.” Pichai, while referring to one of Google’s A.I. technologies, said during a Tuesday interview, “LaMDA is not sentient by any stretch of the imagination.” The company’s “shift to A.I. has raised concerns about various aspects of the technology, from ongoing examples of racial bias in A.I. algorithms to fears of privacy violations caused by facial recognition.” Pichai said, “The good news is that anyone who talks to Google Assistant – while I think it is the best assistant out there for conversational A.I. – you still see how broken it is in certain cases.”

AI Programs Help Students Cheat By Generating Essay Text

Slate (9/6, Peritz) reports on papers augmented with artificial intelligence. The first online article generator “debuted in 2005.” A.I.-generated text “can now be found in novels, fake news articles and real news articles, marketing campaigns, and dozens of other written products.” The tech is “either free or cheap to use, which places it in the hands of anyone” and “it’s probably already burrowing into America’s classrooms right now.” Using an A.I. program is not “plagiarism” in the “traditional sense – there’s no previous work for the student to copy, and thus no original for teachers’ plagiarism detectors to catch.” Instead, a student first “feeds text from either a single or multiple sources into the program to begin the process.” The program “then generates content by using a set of parameters on a topic, which then can be personalized to the writer’s specifications.” With a little bit of practice, a “student can use AI to write his or her paper in a fraction of the time that it would normally take to write an essay.”

WPost: AI Gives New View Of Life’s Building Blocks

In an editorial, the Washington Post (9/1) argues that a new AI process provides “a window on life’s basic building blocks.” A company called Deepmind “has developed an artificial intelligence and machine learning system that can predict the three-dimensional structure of proteins, decoding the amino acids that make up each protein.” While the system “does not reveal all of biology’s mysteries, nor is it the only advance needed for drug development or disease fighting,” the new ways to visualize proteins “are truly astonishing.”

 

AI Assistants Face Challenges Recognizing Humor

Forbes (9/1, Malins) reports, “At the moment, we are most familiar with conversational AI systems from Google, Apple, Microsoft and Amazon. These virtual agents (Siri, Cortana, etc.) are not only capable of interacting with us but can inject a dose of humor into their responses. But do these systems understand when they are displaying examples of humor?” The hardest part of answering that question is “the detection of what is humor.” AI systems must “decide when somebody is making a legitimate request” or making a humorous remark.

dtau...@gmail.com

unread,
Jan 7, 2023, 8:53:35 AM1/7/23
to ai-b...@googlegroups.com

New York City Schools Ban ChatGPT Amid Cheating Worries
CNet
Dan Avery
January 4, 2023


The New York City Department of Education (NYCDOE) said it has banned access to the ChatGPT chatbot on its online devices and networks due to concerns about "negative impacts on student learning and [the] accuracy of content." Said NYCDOE's Jenna Lyle, "While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success." Students and faculty may link to ChatGPT on devices not connected to the school system. Lyle also said individuals interested in studying the chatbot's underlying technology can access it on request.

Full Article

 

 

NSF Spearheads Funding to Improve Diversity in AI Workforce
Nextgov
Alexandra Kelley
January 3, 2023


The U.S. National Science Foundation (NSF) and six other federal research organizations will push to foster a more inclusive artificial intelligence (AI) and machine learning (ML) workforce through the ExpandAI program. The program will channel more federal funding into AI research, development, and education at universities with diverse student populations. ExpandAI will provide funding for capacity development projects and for collaboration among participating National AI Research Institutes. Capacity development projects will aim to establish new AI education centers within minority-serving institutions that currently lack AI/ML curricula and host large numbers of African American/Black American, Hispanic American, American Indian, Alaska Native, Native Hawaiian, and Pacific Islander students. Said NSF's Margaret Martonosi, "We hope to see a more diverse, more inclusive participation of talented innovators from across our nation, driving AI research and innovation."

Full Article

 

 

Program 'Learns' to Identify Disease-Causing Mosaic Mutations
UC San Diego Today
Scott LaFee
January 2, 2023


Researchers at the University of California, San Diego (UCSD) and the San Diego-based Rady Children's Institute for Genomic Medicine have developed a deep learning technique for teaching a computer to identify disease-inducing mosaic mutations. UCSD's Xiaoxu Yang said the researchers trained the DeepMosaic program on nearly 200,000 simulated and biological variants across the genome, until "we were satisfied with its ability to detect variants from data it had never encountered before." Tests on several independent large-scale sequencing datasets that DeepMosaic had never seen showed the program outperformed prior approaches. "The prominent visual features picked up by the deep learning models are very similar to what experts are focusing on when manually examining variants," according to former UCSD researcher Xin Xu.

Full Article

 

Princeton Student’s App Can Detect If An Essay Was Written By AI

Insider (1/4, Syme) reports Edward Tian, a computer science student at Princeton, said he has developed a new app that “can detect whether your essay was written by ChatGPT.” He recently shared “two videos comparing the app’s analysis of a New Yorker article and a letter written by ChatGPT. It correctly identified that they were respectively written by a human and AI.” Tian explained he was “motivated to build GPTZero after seeing increased instances of AI plagiarism.” He added that “he’s planning to publish a paper with accuracy stats using student journalism articles as data, alongside Princeton’s Natural Language Processing group.”

        The Daily Beast (1/4) reported the 22-year-old Tian “spent his winter break in his local coffee shop creating GPTZero, an app that he claimed would be able to ‘quickly and efficiently’ tell if an essay was written by a human or by OpenAI’s ChatGPT. When he uploaded it to the app creating and hosting platform Streamlit, he didn’t expect it to get that much attention.” He said, “I was expecting, at most, a few dozen people trying out the app. Suddenly, it was crazy in usership with over 2,000 people signing up for the beta in a few hours.”

        Tian told BuzzFeed News (1/5), “So many teachers have reached out to me. From Switzerland, France, all over the world.” Tian said, “AI-generated writing is going to just get better and better. I’m excited about this future, but we have to do it responsibly.” BuzzFeed explains GPTZero “works by analyzing a piece of text and determining if there is a high or low indication that a bot wrote it.” Specifically, it looks for two hallmarks: “perplexity,” which is how likely each word is to be suggested by a bot; and “burstiness,” which measures the spikes in how perplexing each sentence is.
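        As a rough illustration of those two signals, here is a minimal sketch that scores text with GPT-2 as a stand-in likelihood model. GPTZero's actual implementation is not public, so the model choice and the burstiness definition below are assumptions.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Exponentiated mean negative log-likelihood of the tokens under GPT-2.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    # Spread of per-sentence perplexities; human writing tends to vary more.
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

        On this logic, low overall perplexity combined with low burstiness points toward machine-generated text.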

        New York City, Los Angeles School Systems Restrict Use Of ChatGPT. The Washington Post (1/5) reports the New York City Department of Education confirmed this week that it has blocked access to ChatGPT in its schools. The decision by “the nation’s most populous school district...restricts the use of the bot for students and educators on the district’s network or devices. The move echoes a similar decision made Dec. 12 by the Los Angeles Unified School District days after ChatGPT was released.” Outside of NYC and LA, “other large school districts said they have not yet made plans to restrict ChatGPT.” Nevertheless, some experts “say restricting the technology is shortsighted, arguing that students will find ways to use the bot regardless if it continues to gain popularity.”

        CNN (1/5, Korn, Kelly) reports a spokesperson for the South San Francisco Unified School District said the district is aware of the potential for its students to use ChatGPT but it has “not yet instituted an outright ban.” Meanwhile, a spokesperson for the School District of Philadelphia said it has “no knowledge of students using the ChatGPT nor have we received any complaints from principals or teachers.” Darren Hicks, assistant professor of philosophy at Furman University, “previously told CNN it will be harder to prove when a student misuses ChatGPT than with other forms of cheating.” He said this may cause teachers to rethink assignments so they couldn’t be easily written by the tool.

Experts Weigh In As Districts Are Tempted To Follow NYC Schools’ Ban On ChatGPT

Education Week (1/5, Klein) reports that districts around the country “may be tempted to follow New York City public schools’ lead in restricting student access to ChatGPT, the artificial intelligence-powered tool that can mimic human writing with eye-popping efficiency,” but they “would be making a huge mistake, some experts say.” Andreas Oranje, vice president of Assessment and Learning Technology Research and Development at the Educational Testing Service, said the platform “is a new technology that was not part of the standards that they’re trying to meet. But it’s a bad idea because ChatGPT is a fact of life. And we want to prepare students for life.” He added that “instead of squelching students’ access to the application at school, educators need to figure out a way to ‘create assignments that still get at the skills that you want to teach, but in a way that works with ChatGPT.’”

Heavy Investments Are Not Needed To Teach AI In Middle, High School

K-12 Dive (1/4, Barack) reports that districts “looking to teach artificial intelligence lessons in middle and high school do not need to invest heavily in technology resources nor launch a separate stand-alone class,” and instead, stakeholders and educators “can begin by talking with students about ways they already use AI without even knowing it.” Students “may not be aware that many applications they use daily are powered by artificial intelligence, from Google search results to music and video suggestions in Spotify and TikTok, said Nancye Blair Black, project lead for an ISTE and General Motors partnership, AI Explorations and Their Practical Use in School Environments.” As artificial intelligence becomes “mainstreamed into everyday life, educators and curriculum leaders are increasingly focused on how to bring the topic into classrooms,” and they can start “by teaching students about how machine learning powers AI and serves as the primary decision tree computers use to process information.”

AP Analysis: War In Ukraine Accelerates Development Of AI Drones

The AP (1/3, Bajak, Arhirova) says, “Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world’s first fully autonomous fighting robots to the battlefield, inaugurating a new age of warfare. The longer the war lasts, the more likely it becomes that drones will be used to identify, select and attack targets without help from humans, according to military analysts, combatants and artificial intelligence researchers.” The AP says that Ukraine “already has semi-autonomous attack drones and counter-drone weapons endowed with AI,” and Russia “also claims to possess AI weaponry, though the claims are unproven.”

Missouri S&T Professor Developing AI Software To Improve Process Of Matching Kidney Donors With Recipients

KWMU-FM St. Louis (12/27) reports Missouri S&T engineering management assistant professor Casey Canfield is leading a research effort to develop AI software that can streamline “how kidney donors are matched with recipients.” Her early research indicates that “excellent kidneys find their way into people on the top of the transplant list efficiently, but Canfield believes there is room to improve placing less-than-perfect kidneys.” She said, “What we really need to find is the person in the middle of the list who would really benefit from getting a transplant sooner rather than having to wait two-three years to get a transplant.” Canfield says her software “would not make medical decisions or even recommendations to doctors, but rather give them options and more information more quickly.” She is working “with SSM Health Saint Louis University Hospital on the four-year study. It received almost $2 million in funding from the National Science Foundation.”

dtau...@gmail.com

unread,
Jan 16, 2023, 7:43:59 PM1/16/23
to ai-b...@googlegroups.com

College Student Created App That Can Tell Whether AI Wrote Essay
NPR
Emma Bowman
January 9, 2023


Princeton University computer science major Edward Tian developed an app that can identify whether text has been written by a human or OpenAI's ChatGPT. The app comes amid concerns that the viral chatbot could be used by students to pass off assignments written by artificial intelligence (AI) as their own. The GPTZero app uses "perplexity," which measures the text's complexity, and "burstiness," which compares sentence variation, as indicators to determine whether a bot wrote the text in question. Text that is AI-generated is likely to have low complexity and more uniform sentences. Tian, who is working to improve the app's accuracy, said GPTZero is "not meant to be a tool to stop these technologies from being used. But with any new technologies, we need to be able to adopt it responsibly and we need to have safeguards."

Full Article

 

 

Google, DeepMind Launch MedPaLM Language Model
Interesting Engineering
Loukia Papadopoulos
January 4, 2023


Alphabet subsidiaries Google and DeepMind have launched the MedPaLM large language model (LLM), designed to yield safe, useful answers to questions in the medical field. Its companion benchmark, MultiMedQA, merges HealthSearchQA, a free-response dataset of medical questions, with six open question-answering datasets encompassing professional medical exams, research, and consumer inquiries. MedPaLM can address multiple-choice questions and simple queries from medical professionals and non-professionals. The model builds on the 540-billion-parameter PaLM LLM and its instruction-tuned Flan-PaLM variant, and was evaluated against MultiMedQA. A team of healthcare professionals found 92.6% of MedPaLM's responses were accurate, compared with 92.9% of clinician-generated responses.

Full Article

 

 

OpenAI Booms, Even Amid Tech Gloom
The New York Times
Erin Griffith; Cade Metz
January 7, 2023


OpenAI, whose ChatGPT has seen over 1 million users since its release, is in talks to complete a potential deal in which $300 million in existing company shares would be sold in a tender offer. Amid the hype over generative artificial intelligence (AI) tools like ChatGPT, sources say the deal would value OpenAI at about $29 billion, more than double its 2021 valuation. Despite a dismal year for the tech industry in 2022 that involved mass layoffs and other cuts, tech investors are excited about generative AI. PitchBook reported generative AI companies received at least $1.37 billion from investors in 78 deals last year, an amount nearly equal to investments made in the previous five years combined.
 

Full Article

*May Require Paid Registration

 

 

Computer Scientist Says AI 'Artist' Deserves Its Own Copyrights
Reuters
Blake Brittain
January 11, 2023


Computer scientist Stephen Thaler has asked the U.S. District Court for the District of Columbia to rule that his Creativity Machine artificial intelligence (AI) system deserves copyrights for art it produces. Thaler asked the court to rescind a U.S. Copyright Office ruling decreeing that copyrightable creative works can only be human-made. His lawyer, Ryan Abbott of Brown Neri Smith & Khan, said the case has a "real financial importance" that may have been previously overlooked, and that the protection of AI-created art would serve the goals of copyright law. Said Thaler in his court filing, "The fact that various courts have referred to creative activity in human-centric terms, based on the fact that creativity has traditionally been human-centric and romanticized, is very different than there being a legal requirement for human creativity."
 

Full Article

 

 

Microsoft's AI Program Can Clone Voice from Three-Second Audio Clip
PC Magazine
Michael Kan
January 10, 2023


Microsoft's VALL-E text-to-speech synthesis program can duplicate, or “clone,” a person's voice from a three-second audio clip. Microsoft researchers trained VALL-E on 60,000 hours of English audiobook narration from more than 7,000 different speakers. The model interprets audio speech as "discrete tokens," then pairs those tokens with new text so the cloned voice can speak it. VALL-E can direct the cloned voice to say anything desired, as well as reproduce emotion or render the voice in different speaking styles. However, the researchers acknowledged, "Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker."
 

Full Article

 

 

AI Turns Its Artistry to Creating Human Proteins
The New York Times
Cade Metz
January 9, 2023


Scientists are adapting techniques underlying art-producing artificial intelligence technology like OpenAI's DALL-E to model new proteins for fighting disease and other applications. The University of Washington (UW)'s Nate Bennett described a protein-structuring approach that, similar to DALL-E, "does what you tell it to do. From a single prompt, it can generate an endless number of designs." Such systems allow researchers to provide rough blueprints for desired proteins, whose three-dimensional shapes are produced by a diffusion model. These protein candidates then go to a wet lab to see if they function as expected. Said UW's Jue Wang, "What's exciting isn't just that they are creative and explore unexpected possibilities, but that they are creative while satisfying certain design objectives or constraints."

Full Article

*May Require Paid Registration

 

 

'Consciousness' in Robots Was Once Taboo. Now It's the Last Word
The New York Times
Oliver Whang
January 6, 2023


The concept of artificial consciousness has evolved from an unmentionable topic to a premier focus of the robotics community, as experts like Columbia University's Hod Lipson aim to create conscious robots. The first challenge is defining what consciousness is. Lipson and Duke University's Boyuan Chen have created a self-aware two-jointed arm fixed to a table, which used cameras to observe itself as it moved and learned to distinguish itself via a deep learning algorithm and a probability model. The University of California, Riverside's Eric Schwitzgebel said a lack of certainty about what consciousness is could present difficulties if an apparently conscious robot can be created.

Full Article

*May Require Paid Registration

 

 

ChatGPT Is Enabling Script Kiddies to Write Functional Malware
Ars Technica
Dan Goodin
January 6, 2023


Participants in cybercrime forums, some with little or no coding experience, are using ChatGPT, an artificial intelligence (AI) chatbot launched in November in beta form, to write potential malware, according to a report from security firm Check Point Research. One participant, for example, credited ChatGPT with providing a “nice [helping] hand” to what was claimed to be the first script that person had written. The script, Check Point researchers found, could "easily be modified to encrypt someone's machine completely without any user interaction." Check Point researchers themselves developed malware with full infection flow with the help of ChatGPT; they wrote, "The hard work was done by the AIs, and all that's left for us to do is to execute the attack."

Full Article

 

 

China Turns Its Focus to Deepfakes
The Wall Street Journal
Karen Hao
January 8, 2023


The Cyberspace Administration of China Tuesday began enforcement of its "deep synthesis" technology regulations in an effort to prevent the production of "deepfakes." The regulations ban the use of artificial intelligence-generated content for disseminating "fake news" or information disruptive to the economy or national security. Additionally, providers of such technology must use prominent labels to indicate that the images, video, and text they generate are synthetically produced or edited. Stanford University's Graham Webster said, "China is learning with the world as to the potential impacts of these things, but it's moving forward with mandatory rules and enforcement more quickly. People around the world should observe what happens."

Full Article

*May Require Paid Registration

 

 

Deep-Learning-Designed Diffractive Processor Computes Hundreds of Transformations in Parallel
SPIE Newsroom
January 9, 2023

University of California, Los Angeles (UCLA) researchers demonstrated that a broadband diffractive processor can perform parallel linear transformation operations by applying a wavelength multiplexing scheme in a diffractive optical network. The input and output information are encoded by a predetermined group of Nw discrete wavelengths, with each dedicated to a specific target function or complex-valued linear transformation. Said UCLA's Aydogan Ozcan, such transformations “can be specifically assigned for distinct functions such as image classification and segmentation, or they can be dedicated to computing different convolutional filter operations or fully connected layers in a neural network. All these linear transforms or desired functions are executed simultaneously at the speed of light, where each desired function is assigned to a unique wavelength. This allows the broadband optical processor to compute with extreme throughput and parallelism."
 

Full Article
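Stated compactly (our notation, not the paper's): with inputs and outputs encoded at $N_w$ wavelengths, the processor applies one complex-valued linear transform per wavelength, all in a single optical pass:

$$\mathbf{y}_k = \mathbf{A}_k\,\mathbf{x}_k, \qquad k = 1, \dots, N_w,$$

where $\mathbf{x}_k$ and $\mathbf{y}_k$ are the input and output fields carried at wavelength $\lambda_k$, and $\mathbf{A}_k$ is the transform assigned to that wavelength (for instance, a classification layer or a convolutional filter).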

 

 

Research Team Detects Additive Manufacturing Defects in Real Time
University of Virginia Engineering
January 6, 2023


A research team led by the University of Virginia's Tao Sun employed machine learning to detect defects in additive manufacturing (also known as three-dimensional printing) in real time. The research focused on the formation of keyhole pores, one of the major defects in laser powder bed fusion, which uses metal powder and lasers to three-dimensionally print metal parts. Said Sun, "By integrating operando synchrotron x-ray imaging, near-infrared imaging, and machine learning, our approach can capture the unique thermal signature associated with keyhole pore generation with sub-millisecond temporal resolution and 100% prediction rate.” Sun said the approach “provides a viable solution for high-fidelity, high-resolution detection of keyhole pore generation that can be readily applied in many additive manufacturing scenarios."

Full Article

 

 

Sailor-Less Ships Head to Port on AI Wave
Yahoo! News
Juliette Michel
January 6, 2023


Artificial intelligence (AI)-powered unmanned boat technology was among the technologies showcased at the 2023 Consumer Electronics Show (CES 2023). U.S. marine industrial company Brunswick showed its prototype vessel that provides the optimal pathway to enter a port, avoid collisions, and find available berths for docking, all without human assistance. South Korean automaker Hyundai's Avikus division has developed software that executive Carl Johansson said can enhance pleasure cruises while saving fuel by positioning a boat in the best orientation for sunbathing or viewing sunsets, among other options. Crew-less sailing is currently in the experimental stage for merchant mariners, although John Cross at Canada's Memorial University said reducing crew numbers is the goal for many merchant marine companies.
 

Full Article

 

Educators Weigh Potential, Risks Of ChatGPT

The Washington Times (1/12, Salai) reports educators across the country “are sounding the alarm over ChatGPT, an upstart artificial intelligence system that can write term papers for students based on keywords without clear signs of plagiarism.” Trey Vasquez, a special education professor at the University of Central Florida, “tested the next-generation ‘chatbot’ with a group of other professors and students. They asked it to summarize an academic article, create a computer program, and write two 400-word essays on the uses and limits of AI in education.” He said he would grade the AI-generated essays “as C’s, but he added that the program helped a student with cerebral palsy write more efficiently.” San Francisco-based OpenAI, the maker of ChatGPT, “has pledged to address academic dishonesty concerns by creating a coded watermark for content that only educators can identify.”

        Education Week (1/12) reports teachers have “said that the artificial intelligence tool, which can write anything with just a simple prompt, could save them hours of work – a game-changer at a time when teachers have a lot on their plates and stress levels are high.” EdWeek tested the capabilities of AI by asking ChatGPT “to generate a lesson plan, a response to a concerned parent, a rubric, feedback on student work, and a letter of recommendation.” Reactions from teachers asked by EdWeek to review the samples were mixed, with many pointing out flaws in the text, but also acknowledging the AI-generated pieces of work are “a start but never the finish.”

        Academics Consider How To Harness ChatGPT. Inside Higher Ed (1/12) asked eleven academics “how to harness the potential and avert the risks of this game-changing technology.” Johann N. Neem, professor of history at Western Washington University, said, “ChatGPT cannot replace thinking. Students who turn in assignments using ChatGPT have not done the hard work of taking inchoate fragments and, through the cognitively complex process of finding words, crafting thoughts of their own.” Neem added, “Professors should find new ways to help students learn to read and write well and to help them make the connection between doing so and their own growth.” Anna Mills, an English instructor at College of Marin, said, “We should assess how well students can identify ChatGPT failings in terms of logic, consistency, accuracy and bias. ... Showcasing AI failings has the added benefit of highlighting students’ own reading, writing and thinking capacities.”

        Roose: Schools Should Embrace, Not Ban ChatGPT. Kevin Roose writes in his technology column for The New York Times (1/12, Roose) that “cheating is the immediate, practical fear” for educators, along with ChatGPT’s “propensity to spit out wrong or misleading answers. But there are existential worries, too. One high school teacher told me that he used ChatGPT to evaluate a few of his students’ papers, and that the app had provided more detailed and useful feedback on them than he would have, in a tiny fraction of the time.” Nevertheless, Roose says after talking with “dozens of educators,” he has “come around to the view that banning ChatGPT in the classroom is the wrong move.” He says blocking the bot in schools is “not going to work” as students can easily “evade a schoolwide ban.” Rather, instead of “starting an endless game of whack-a-mole against an ever-expanding army of A.I. chatbots,” Roose suggests “schools should treat ChatGPT the way they treat calculators – allowing it for some assignments, but not others.”

        Princeton Student Developed App To Determine If An Essay Was Written By AI. The Washington Post (1/12) reports on Edward Tian, the 22-year-old student at Princeton University who developed GPTZero, an app that detects “whether text has been generated by a machine or written by a person.” He initially expected “a few dozen people to ever try it,” but it has “gotten more than 7 million views, he said, and he has heard from people all over the world – many of them teachers. He has also heard from college admissions officers.” Tian told The Post: “A lot of people are like … ‘You’re trying to shut down a good thing we’ve got going here!’ That’s not the case. I am not opposed to students using AI where it makes sense. … It’s just we have to adopt this technology responsibly.”

ChatGPT Can Ease Teachers’ Administrative Burdens, But Concerns Remain

Education Week (1/11) reports most of the conversation in the education community over ChatGPT “has been centered on the extent to which students will use the chat bot – but ChatGPT could also fundamentally change the nature of teachers’ jobs.” For example, teachers can use the chat bot to “plan lessons, put together rubrics, offer students feedback on assignments, respond to parent emails, and write letters of recommendation, among other tasks.” Supporters argue the tool “can save them hours of work, freeing up time for student interactions or their personal life;” critics “worry that using artificial intelligence will strip away some of the creativity and relational aspects of the job.” Although ChatGPT can “offer feedback on student work,” some teachers balk “at using it for that purpose, saying that the examples of grading from the chat feel shallow or even inaccurate.”

AI Investment Continues Despite Tech Downturn

The New York Times (1/7, Griffith, Metz) reports artificial intelligence remains an area of strong investment interest, despite “the most dismal tech downturn in a generation.” The hype comes as a new wave of AI products promise to “reinvent everything from online search engines like Google to photo and graphics editors like Photoshop to digital assistants like Alexa and Siri.” Last year, “investors pumped at least $1.37 billion into generative A.I. companies across 78 deals, almost as much as they invested in the previous five years combined,” and OpenAI “is in talks to complete a deal that would value it at around $29 billion.”

 

Companies Used Advanced AI For “Humanlike” Warehouse Robots In 2022. The Washington Post (1/7) reported “in 2022, the battle between robots and humans reached a turning point. Companies like Amazon and FedEx built warehouse robots that were able to finally pick things up with humanlike finesse, a years-long challenge solved largely because of AI vision systems that could see and analyze objects better.” Several AI experts “said companies will try to build upon those advances this year and create vision systems that not only view static objects better, but those that are in motion, helping expand what they can do on the factory floor.” Carnegie Mellon University’s Robotics Institute Associate Professor Sebastian Scherer “said these advances will help advance the field of single-task robots more than those aimed at doing a variety of tasks, such as universal or humanoid robots.” Scherer said, “This is maybe the starting point. [The] whole process will take five to 10 years.”

dtau...@gmail.com

unread,
Jan 22, 2023, 9:01:05 AM1/22/23
to ai-b...@googlegroups.com

How Scientists Trained Computers to Forecast COVID-19 Outbreaks
The Los Angeles Times
Melissa Healy
January 19, 2023


A team of researchers led by Northeastern University's Mauricio Santillana has built a machine learning system that can absorb and interpret data to predict COVID-19 outbreaks weeks in advance. The researchers compiled the data from hundreds of local and global information streams, including time-stamped Internet searches for coronavirus symptoms; geolocated tweets featuring terms such as "corona," "pandemic," or "panic buying"; aggregated location data from smartphones indicating travel trends; and declining online requests for directions, indicating fewer people were going out. Checking the resulting prediction against historical data and updating it appropriately yielded the beginnings of a system for forecasting disease outbreaks. Testing against real-world data showed the system could forecast local viral surges as far as six weeks ahead.

Full Article
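To make that pipeline concrete, here is a toy sketch, using synthetic data, of regressing case counts six weeks ahead on lagged digital signals. The stream names and the six-week horizon come from the description above; the model choice (ridge regression) and all numbers are assumptions for illustration.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
weeks = 120
searches = rng.poisson(50, weeks).astype(float)  # symptom-related search volume
tweets = rng.poisson(20, weeks).astype(float)    # "corona"/"panic buying" tweets
mobility = rng.normal(0.0, 1.0, weeks)           # smartphone travel trend

# Synthetic "ground truth": cases echo the digital signals a few weeks later.
cases = 0.5 * np.roll(searches, 2) + 0.3 * np.roll(tweets, 3)
cases += rng.normal(0.0, 2.0, weeks)

horizon = 6  # predict six weeks ahead from today's signals
X = np.column_stack([searches, tweets, mobility])[:-horizon]
y = cases[horizon:]

model = Ridge().fit(X[:80], y[:80])              # fit on the first 80 weeks
print("held-out R^2:", round(model.score(X[80:], y[80:]), 2))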

 

 

App Identifies Nutrient Deficiencies
IEEE Spectrum
Joanna Goodrich
January 17, 2023


A mobile app developed by New Jersey high school student Rian Tiwari allows pregnant women to scan their fingernails to determine whether they have nutrient deficiencies. The app uses artificial intelligence to identify signs of nutrient deficiency and recommend dietary and lifestyle changes to treat it. The algorithms developed by Tiwari classify images of nails as healthy or unhealthy based on their appearance, looking for cracks, ridges, peeling, and discoloration. Tiwari is working to improve the app to identify nutrient deficiencies from images of lips and the inner eyelids, and to recommend medications and vitamin supplements.

Full Article

 

 

Research Team Develops App for Precise Brain Mapping
Schulich School of Medicine & Dentistry, Western University (Canada)
Prabhjot Sohal
January 12, 2023


Researchers at Canada's Western University developed an app that uses artificial intelligence to map hard-to-reach areas of the hippocampus, the part of the brain associated with memories and often the first affected by neurodegenerative disorders. The open-source app, HippUnfold, could be used to achieve earlier diagnosis and treatment of epilepsy, Alzheimer's, major depressive disorder, and other conditions. Western's Ali Khan said, "It has been challenging to detect subtle abnormalities in the hippocampus with imaging because it is small and folded in layers. With this tool, researchers and clinicians can extract a wide range of accurate and precise measurements of the hippocampus using magnetic resonance images (MRI).” Khan said the new tool “has a wide range of applications with a potential for a significant clinical impact."

Full Article

 

Wray “Deeply Concerned” About Chinese AI Programs

The AP (1/19, Tucker) reports that during a panel session at the World Economic Forum in Davos, Switzerland, FBI Director Wray “said Thursday that he was ‘deeply concerned’ about the Chinese government’s artificial intelligence program, asserting that it was ‘not constrained by the rule of law.’” Wray said Beijing’s AI ambitions were “built on top of massive troves of intellectual property and sensitive data that they’ve stolen over the years.” He “said that left unchecked, China could use artificial intelligence advancements to further its hacking operations, intellectual property theft and repression of dissidents inside the country and beyond.”

 

Job Recruiters Fooled After Shortlisting AI-Generated Job Applicant

Fortune (1/18, Bove) reports, “For all the praise, ChatGPT has also received its share of criticism” from teachers and watchdogs, but another group that “should keep an eye out for convincing A.I. may be job recruiters, based on the recent experience of a U.K. company.” When Neil Taylor was “looking to hire new faces at Schwa,” the communications company he owns, he “posed ChatGPT the same writing prompt all applicants for the position had to answer and anonymously included the bot’s entry in with the pool of candidates whose applications were to be reviewed by staff.” While fewer than 20% of candidates “moved forward to the interview stage of the process,” ChatGPT’s entry “was among them.”

        Man Faces Backlash After Selling AI-Generated Children’s Book On Amazon. The Washington Post (1/19, Kasulis Cho) reports Ammaar Reshi “has sold more than 900 copies since he put his book, ‘Alice and Sparkle,’ on Amazon in early December,” but its “reviews – 60 percent 5 stars and 40 percent 1 star – as well as his Twitter mentions suggest a growing divide over these tools as the public considers whether they’ll starve the starving artist, or if they’re ethical at all.” Artists critical of works like Reshi’s “have also banded together to stage a digital protest of AI-generated art,” arguing “that some tools appear to have learned from data sets of art created by real people – with real copyright protections – to provide the fodder for its computer-generated creations.”

 

Educators Redesign Courses In Response To ChatGPT Plagiarism Concerns

The New York Times (1/16, Huang) reports that across the country, university professors, department chairs, and administrators “are starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning.” Some professors “are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.” The moves are “part of a real-time grappling with a new technological wave known as generative artificial intelligence,” exemplified by ChatGPT, which “generates eerily articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction – and their schoolwork.”

        Study Shows ChatGPT Can Write Abstracts Convincing Enough To Fool Scientists. Nature (1/12) reported ChatGPT “can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December.” A group led by Catherine Gao at Northwestern University “used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.” The research team asked ChatGPT “to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.” The AI-generated abstracts “sailed through the plagiarism checker,” while the AI-output detectors “spotted 66% of the generated abstracts.” Human reviewers “didn’t do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.” Gao and colleagues say in the preprint, “ChatGPT writes believable scientific abstracts. The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”

dtau...@gmail.com

unread,
Jan 28, 2023, 8:43:42 AM1/28/23
to ai-b...@googlegroups.com

ACM TechBrief: Policies Needed for Safer Algorithmic Systems
ACM
January 26, 2023

A TechBrief released by ACM's global Technology Policy Council (TPC) warns that the spread of algorithmic systems carries unaddressed risks. The brief holds that perfectly safe algorithmic systems are not possible, but that achievable steps can improve their safety and should be prioritized by governments and stakeholders. The council urges organizations to develop a "safety culture that embraces human factors engineering" that is "woven" into algorithmic system design. The University of Maryland's Ben Shneiderman, lead author of the TechBrief, said algorithmic systems require safeguards similar to the review of new food products and drugs. Said TPC TechBriefs Committee chair Stuart Shapiro, "As artificial intelligence and other complex consequential systems become more and more prevalent, the need to act to make them safer becomes more and more urgent."
 

Full Article

 

 

How Non-Linear Dynamics Can Augment Edge Sensor Time Series
Tokyo Institute of Technology News (Japan)
January 25, 2023

Engineers at Japan's Tokyo Institute of Technology (Tokyo Tech) and Shinshu University have shown a new way to augment sensor time series for classification by neural networks. The approach feeds the recorded signal into a basic non-linear dynamical system as an external forcing; the system's temporal responses to this disturbance are then supplied to the network in parallel with the original data. Explained Tokyo Tech's Chao Li, "Basically, it is about finding creative and innovative ways of generating additional data to help get the very best performance out of neural networks that necessarily have to be quite small to meet power and size requirements." The researchers considered classification of basic cattle behaviors using a collar-mounted accelerometer, then developed protocols for filtering, preprocessing, and injecting kinematic signals so the simulated dynamic system would accept and respond to them without divergence.
 

Full Article
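A minimal sketch of that idea, assuming a forced Duffing oscillator as the auxiliary non-linear system (the paper's actual system, parameters, and preprocessing protocols are not reproduced here):

import numpy as np

def duffing_response(signal, dt=0.01, delta=0.3, alpha=-1.0, beta=1.0):
    # Integrate x'' + delta*x' + alpha*x + beta*x**3 = u(t) with the
    # recorded signal u as the external forcing (simple Euler stepping).
    x, v = 0.0, 0.0
    out = np.empty_like(signal)
    for i, u in enumerate(signal):
        a = u - delta * v - alpha * x - beta * x ** 3
        v += a * dt
        x += v * dt
        out[i] = x
    return out  # a nonlinearly transformed companion time series

t = np.linspace(0.0, 10.0, 1000)
accel = np.sin(2 * np.pi * 1.3 * t)  # stand-in for one accelerometer axis
augmented = np.stack([accel, duffing_response(accel)])
print(augmented.shape)  # (2, 1000): original plus response channel

The extra channel costs only a simulated integration, which is the sense in which the method generates "additional data" without adding sensors.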

 

 

Colliding Particles, Not Cars: CERN Machine Learning Could Help Self-Driving Autos
CERN
Priyanka Dasgupta
January 25, 2023


Researchers at CERN (the European Organization for Nuclear Research, based in Switzerland) and Swedish car-safety software firm Zenseact examined machine learning models to find ways they could help self-driving cars avoid accidents by enabling faster, better decision-making. The researchers selected field-programmable gate arrays (FPGAs), configurable integrated circuits that can execute complex decision-making algorithms in microseconds, as the hardware benchmark. By optimizing existing resources, the researchers found they could increase the functionality of FPGAs significantly. They also learned that even when a processing unit had limited computational resources, the FPGAs performed tasks with high accuracy and short latency.
 

Full Article

 

 

How Smart Are the Robots Getting?
The New York Times
Cade Metz
January 20, 2023


New-generation online chatbots display a semblance of intelligence that appears to pass the Turing test, in which humans can no longer be certain whether they are conversing with a human or a machine. Bots like OpenAI’s ChatGPT and GPT-4 systems appear intelligent without being sentient or conscious; consequently, OpenAI's Ilya Sutskever says, "People think they can do things they cannot." Modern neural networks have learned to produce text by analyzing vast volumes of digital text and extrapolating patterns in how people link words, letters, and symbols. However, the chatbots' language skills belie their lack of reason or common sense.

Full Article

*May Require Paid Registration

 

 

Art, AI Collide in Landmark Legal Dispute
Financial Times
Madhumita Murgia; Ian Johnston
January 21, 2023


Human artists and artificial intelligence (AI) companies are disputing generative AI-intellectual property in a landmark legal case. Visual media company Getty Images filed a copyright claim against free image-generating Stable Diffusion tool developer Stability AI in the U.K. High Court. The tool was trained on 2.3 billion images harvested from the Web by a third-party website; Getty alleges Stability AI illegally copied and processed copyrighted images for its commercial benefit. Sandra Wachter at the U.K.'s Oxford Internet Institute said the case will decide whether companies can use such data for their own purposes. Said Estelle Derclaye at the U.K.'s University of Nottingham, "Ultimately, [AI companies] are copying the entire work in order to do something else with it — the work may not be recognizable in the output, but it's still required in its entirety."

Full Article

*May Require Paid Registration

 

 

As Deepfakes Flourish, Countries Struggle with Response
The New York Times
Tiffany Hsu
January 22, 2023


Most countries do not have laws to prevent or respond to deepfake technology, and doing so would be difficult regardless because creators generally operate anonymously, adapt quickly, and share their creations through borderless online platforms. However, new Chinese rules aim to curb the spread of deepfakes by requiring manipulated images to have the subject's consent and feature digital signatures or watermarks. The implementation of such rules could prompt other governments to follow suit. University of Pittsburgh's Ravit Dotan said, "We know that laws are coming, but we don't know what they are yet, so there's a lot of unpredictability."

Full Article

*May Require Paid Registration

 

Putting Clear Bounds on Uncertainty
MIT News
Steve Nadis
January 23, 2023


Researchers at the Massachusetts Institute of Technology (MIT), the University of California, Berkeley, and Israel's Technion-Israel Institute of Technology have developed a way to obtain accurate measures of uncertainty and to present that uncertainty in terms the average person can understand. The work involved partially smudged or corrupted images decoded by algorithms. The researchers used an encoder to take a marred image and produce an abstract, or latent, representation of a clean image as a series of numbers; they then employed the StyleGAN generative adversarial network to decode the numbers into a clean image. MIT's Swami Sankaranarayanan said the approach focuses on an image's semantic properties "to estimate uncertainty in a way that relates to the groupings of pixels that humans can readily interpret."

Full Article

 

 

Google Cloud Introduces Shelf Inventory AI Tool for Retailers
The Wall Street Journal
Isabelle Bousquette
January 13, 2023


An artificial intelligence tool developed by researchers at Google Cloud aims to help big-box retailers improve shelf inventory tracking. The algorithm uses videos and images from the retailer's ceiling-mounted cameras, camera-equipped self-driving robots, or store associates to assess the availability of goods on shelves. The tool was trained on a database of more than a billion products and can recognize products regardless of the source or angle of the images. In tests at the innovation lab of supermarket chain Giant Eagle Inc., the tool achieved more than 90% accuracy, which Giant Eagle's Graham Watkins said is not sufficient to deploy it at scale. Giant Eagle will roll out a pilot program in an actual store, but a chain-wide deployment is not likely for several years (if at all).

Full Article

*May Require Paid Registration

 

 

Simple Neural Nets Outperform State of the Art for Controlling Robotic Prosthetics
Michigan Engineering
Jim Lynch
January 17, 2023


A team of University of Michigan (U-M) doctors and engineers has developed artificial neural networks that offer more precise prosthetic hand and finger control than state-of-the-art systems. U-M's Cindy Chestek said, "This feed-forward network represents an older, simpler architecture — with information moving only in one direction, from input to output." The researchers found the network improved peak robotic finger velocity by 45% compared to traditional algorithms that do not employ neural networks. "We feel that the feed-forward system's simplicity enables the user to have more direct and intuitive control that may be closer to how the human body operates naturally," said Chestek.

Full Article
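As a schematic of what such a one-directional, feed-forward decoder looks like, here is a short PyTorch sketch; the layer sizes and the two-output head are illustrative assumptions, not the study's actual architecture.

import torch
import torch.nn as nn

decoder = nn.Sequential(        # information flows one way: input to output
    nn.Linear(96, 256),         # e.g., features from 96 intracortical channels
    nn.ReLU(),
    nn.Linear(256, 2),          # e.g., velocities for two finger groups
)

features = torch.randn(1, 96)   # one time bin of neural features
velocity = decoder(features)    # decoded finger velocities
print(velocity.shape)           # torch.Size([1, 2])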

 

 

Decoding Brainwaves to Identify What Music Is Being Listened To
University of Essex (U.K.)
January 19, 2023


A brainwave-monitoring technique created by researchers at the U.K.'s University of Essex can identify to which specific piece of music people are listening. The researchers combined functional magnetic resonance imaging (fMRI) with electroencephalogram monitoring to measure a person's brain activity while listening to music. They used a deep learning neural network model to translate this data in order to reconstruct and accurately identify the piece of music with 71.8% accuracy. Essex's Ian Daly said, "We have shown we can decode music, which suggests that we may, one day, be able to decode language from the brain."

Full Article

 

Facing ChatGPT Threat, Google Brings Co-Founders Back, Plans To Exhibit Chatbot

TheStreet (1/20) reported that, for the first time, Google is “facing a threat to its domination of internet search” – ChatGPT, “a conversational robot with which humans will be able to converse in natural language.” Google CEO Sundar Pichai “has brought back Co-Founders Larry Page and Sergey Brin to oversee the Mountain View, Calif., search and cloud giant’s response,” which TheStreet described as “an exceptional move that shows the urgency of the situation.” TheStreet added, “According to The New York Times, Page and Brin held several meetings with Google executives” about ChatGPT, “which many experts see as a serious rival to Google’s search business, which has annual revenue of $149 billion.” The story added, “Contacted by TheStreet, Google has neither confirmed nor denied the return of its two founders.”

        Engadget (1/20) reported Google “sees ChatGPT as a threat to its search business and has shifted plans accordingly over the last several weeks, according to The New York Times.” The Times “claims CEO Sundar Pichai has declared a ‘code red’ and accelerated AI development. Google is reportedly preparing to show off at least 20 AI-powered products and a chatbot for its search engine this year, with at least some set to debut at its I/O conference in May.” A slide deck which the Times saw reveals that “among the AI projects Google is working on are an image generation tool, an upgraded version of AI Test Kitchen (an app used to test prototypes), a TikTok-style green screen mode for YouTube and a tool that can generate videos to summarize other clips. Also in the pipeline are a feature titled Shopping Try-on (perhaps akin to one Amazon has been developing), a wallpaper creator for Pixel phones and AI-driven tools that could make it easier for developers to create Android apps.”

WPost Analysis: Microsoft Aiming To Surpass Google In AI Competition “With Big Investments In OpenAI”

A Washington Post (1/21) analysis said, “After years of chasing Google in the AI race,” Microsoft “is hoping to leap ahead with big investments in OpenAI.” Microsoft “is working on AI models that don’t just offer to help you format a letter, but can analyze your Excel spreadsheet, create AI art to illustrate your PowerPoint presentation, or even draft a whole email for you in Outlook. And that’s just for starters.” Last Monday saw the firm roll out “an OpenAI service as part of its Azure cloud platform, offering businesses and start-ups the ability to incorporate models like ChatGPT into their own systems. The company has already been building AI tools into many of its consumer products,” while the Information “reported recently that it’s working to bring more of them to Microsoft Office as well.” The Post also said Microsoft on Wednesday unveiled reductions in the wake of “rounds of layoffs by Amazon, Meta and others.”

        Nadella Talks Microsoft’s AI Plans. In another story regarding Microsoft’s partnership with OpenAI, Insider (1/22, Pollard) reports on comments made by Microsoft CEO Satya Nadella, with Insider saying that “at a Wall Street Journal panel at the World Economic Forum in Davos,” he “said the company plans to soon broadly commercialize AI tools across its products.” Nadella “went on to say that workers should embrace new AI tools instead of fearing them, the Journal reported.”

Microsoft Announces New Multibillion-Dollar Investment In ChatGPT-Maker OpenAI

CNBC (1/23, Capoot) reports that Microsoft on Monday announced “a new multiyear, multibillion-dollar investment with ChatGPT-maker OpenAI,” though it “declined to provide a specific dollar amount.” Microsoft’s investment “will accelerate breakthroughs in AI and help both companies commercialize advanced technologies in the future.” The deal “will also help the two companies engage in supercomputing at scale, and create new AI-powered experiences, the release said.” The pact “marks the third phase of the partnership between the two companies” after “Microsoft’s previous investments in 2019 and 2021.” In a blog post, Microsoft CEO Satya Nadella said, “We formed our partnership with OpenAI around a shared ambition to responsibly advance cutting-edge AI research and democratize AI as a new technology platform.”

        The AP (1/23, O'Brien) reports ChatGPT is “part of a new generation of machine-learning systems that can converse, generate readable text on demand and produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media.” Forrester analyst Rowan Curran explained that there are “lots of ways that the models that OpenAI is building would be really appealing for Microsoft’s set of offerings,” such as “helping to generate text and images for new slide presentations, or creating smarter word processors, Curran said.” Microsoft is scheduled to report earnings Tuesday after market close “from the October-December financial quarter and after disclosing last week its plans to lay off 10,000 employees, close to 5% of its global workforce.”

Amazon Robotics Director Joins General Motors AI Vehicle Subsidiary

GeekWire (1/23, Schlosser) reports Siddhartha Srinivasa “has left his position as director of Robotics AI at Amazon to join Cruise, General Motors’ autonomous-vehicle subsidiary.” In a post on LinkedIn, Srinivasa “said he’d had a ‘wonderful four years at Amazon building and scaling robotics AI for fulfillment’ and that he was ‘excited to start a new journey’ close to his heart, ‘inventing algorithms, and partnering with operations to exponentially scale autonomous mobility.’” Cruise is working to “accelerate the commercialization of self-driving vehicles by bringing its cloud computing technology to the equation. Cruise has raised a total of $10 billion and is valued at more than $30 billion.”

Panel Recommends $2.6 Billion For New Federal AI Research Organization

TechCrunch (1/24, Coldewey) reports, “The final report from the government’s National AI Research Resource recommends a new, multibillion-dollar research organization to improve the capabilities and accessibility of the field to U.S. scientists. The document presents ‘a roadmap and implementation plan for a national cyberinfrastructure aimed at overcoming the access divide, reaping the benefits of greater brainpower and more diverse perspectives and experiences.’” The report has been expected “since the establishment of the task force back in 2020 headed by the White House Office of Science and Technology Policy. They haven’t been idle, in that time producing numerous smaller reports and an extensive ‘blueprint for an AI bill of rights’ that you can read here.”

Professors Embrace ChatGPT In Classrooms Amid Schools’ Plagiarism Fears

The Wall Street Journal (1/25, Belkin, Subscription Publication) reports that as schools consider banning the ChatGPT AI tool, some professors are scrambling to update curriculum and deploy tactics that combat cheating. One professor hopes that embracing the technology will allow students to gain tech skills and could prevent inevitable cheating.

Report: ChatGPT’s Generated Malware Code Imperfect, But Could Be Future Attack Vector

The Washington Post’s (1/26) “Cybersecurity 202” newsletter reports ChatGPT users have used “the artificial intelligence chatbot for a wide-ranging array of tasks,” but ChatGPT’s “potential impact in areas such as writing malware is real but limited, concludes a report from Recorded Future out this morning.” Within days of its launch “nearly two months ago, Recorded Future’s report found examples on the dark web” of cybercriminals advertising “buggy, but functional, malware, social engineering tutorials, scams and moneymaking schemes, and more,” all enabled by ChatGPT. Recorded Future found that “while none of these activities have risen to the seriousness of impact of ransomware, data extortion, denial-of-service, cyberterrorism, and so on – these attack vectors remain future possibilities,” it also “said the malicious material they examined falls short of the caliber of malware that nation-backed hackers would use, pointing to additional limitations for the time being.”

BuzzFeed To Increasingly Use AI To Power Content Creation, CEO Says

Variety (1/26, Spangler) reports that BuzzFeed, battered by the economic downturn that “led it to lay off 12% of its workforce, this year will increasingly rely on artificial-intelligence technology to help produce content, CEO Jonah Peretti said in an email to staff Thursday.” The company’s “new focus on using AI to generate content was first reported by the Wall Street Journal, which said BuzzFeed plans to use OpenAI’s ChatGPT tool as part of the initiative.” Peretti “identified AI technology and creator-generated content as the two major trends that will define digital media over the next three years.”

dtau...@gmail.com

unread,
Feb 4, 2023, 1:20:00 PM2/4/23
to ai-b...@googlegroups.com

Stable Diffusion 'Memorizes' Some Images, Sparking Privacy Concerns
Ars Technica
Benj Edwards
February 1, 2023


An international team of artificial intelligence (AI) researchers has formulated an adversarial attack that can exfiltrate a small number of training images from latent diffusion AI image synthesis models such as Stable Diffusion. The researchers estimated an approximately 0.03% memorization rate among 350,000 high-probability images from the Stable Diffusion training dataset, on the order of 100 memorized images. They also pointed out that this "memorization" is approximate, because the AI model cannot generate identical byte-for-byte duplicates of the training images. One AI authority suggested this research could impact potential image synthesis regulations if the AI models are designated "lossy databases" that can replicate training data.

Full Article

 

 

ChatGPT Finding, Fixing Bugs in Code
PC Magazine
Emily Dreibelbis
January 27, 2023


Computer science researchers from Germany's Johannes Gutenberg University and the U.K.'s University College London found the ChatGPT chatbot can detect and correct buggy code better than existing programs. The researchers gave 40 pieces of bug-embedded software to ChatGPT, and to three other code-fixing systems for comparison. ChatGPT's performance on the first pass was similar to that of the other systems, but the ability to dialogue with the bot after receiving the initial answer ultimately helped it overtake the others. The researchers explained, "We see that for most of our requests, ChatGPT asks for more information about the problem and the bug. By providing such hints to ChatGPT, its success rate can be further increased, fixing 31 out of 40 bugs, outperforming state-of-the-art."

Full Article
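A minimal sketch of that dialogue-style repair loop, reconstructed from the description above rather than from the paper's code (the toy bug, the test harness, and the use of OpenAI's pre-1.0 Python client are all assumptions for illustration):

import re
import openai  # assumes the pre-1.0 openai client and OPENAI_API_KEY set

def extract_code(answer: str) -> str:
    # Pull the first fenced code block out of the model's reply, if any.
    match = re.search(r"```(?:python)?\n(.*?)```", answer, re.S)
    return match.group(1) if match else answer

def run_tests(code: str) -> str | None:
    # Toy harness: exec the candidate fix and probe a single case.
    env: dict = {}
    try:
        exec(code, env)
        assert env["middle"](3, 1, 2) == 2, "middle(3, 1, 2) should be 2"
    except Exception as exc:
        return repr(exc)
    return None

buggy = "def middle(a, b, c):\n    return sorted([a, b, c])[0]\n"  # returns min
messages = [{"role": "user",
             "content": "Find and fix the bug; reply with a Python code block:\n"
                        "```python\n" + buggy + "```"}]
code = buggy
for _ in range(3):  # a few rounds of dialogue, as in the study
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply.choices[0].message.content
    code = extract_code(answer)
    failure = run_tests(code)
    if failure is None:
        break  # the candidate passes; stop here
    # Feed the failing behavior back as a hint, which is what lifted
    # ChatGPT's success rate in the study.
    messages += [{"role": "assistant", "content": answer},
                 {"role": "user", "content": f"Still failing: {failure}"}]
print(code)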

 

 

Member of Congress Reads AI-Generated Speech on House Floor
Associated Press
Steve LeBlanc
January 25, 2023


On the floor of the U.S. House of Representatives, Rep. Jake Auchincloss (D-MA) read a speech generated by an artificial intelligence (AI) on legislation to establish a joint U.S.-Israeli AI Center. Nonprofit OpenAI's online chatbot ChatGPT generated the two-paragraph address at Auchincloss' request, although the congressman had to refine it. He said he partly based his decision to read the AI-produced text on the need to encourage dialogue regarding AI and related challenges and opportunities. Auchincloss said lawmakers and others should not reflexively greet AI with hostility, and should avoid delaying regulatory policies or laws. He cited the need for a "public counterweight" to big technology companies so smaller developers and universities can access the same cloud computing, state-of-the-art algorithms, and data.

Full Article

 

 

To Know Where the Birds Are Going, Researchers Turn to Citizen Science, Machine Learning
University of Massachusetts Amherst
February 1, 2023


Researchers at the University of Massachusetts Amherst and Cornell University have developed a predictive model that can forecast the destination of bird migration. BirdFlow uses data from Cornell's eBird Status & Trends database, which contains data on over 200 million annual bird sightings submitted by birders worldwide. BirdFlow runs that data through a probabilistic machine learning model that has learned to predict the movement of individual birds from real-time GPS and satellite tracking data. In tests on 11 species of North American birds, the researchers found BirdFlow outperformed other bird migration tracking models and can make accurate migration-flow predictions without the use of real-time GPS and satellite tracking data.

Full Article
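
To make the approach concrete, here is a toy sketch of the general idea behind a migration-flow model of this kind. It is not the published BirdFlow code; the grid size, transition matrix, and starting distribution are all invented for illustration.

    # Toy migration-flow sketch: weekly distributions over map cells linked
    # by a learned transition matrix. All numbers are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 4  # a tiny 4-cell map

    # Hypothetical learned matrix: T[i, j] = P(in cell j next week | cell i now)
    T = rng.random((n_cells, n_cells))
    T /= T.sum(axis=1, keepdims=True)  # each row is a probability distribution

    # This week's population distribution (in practice, derived from eBird data)
    p_now = np.array([0.7, 0.2, 0.1, 0.0])

    p_next = p_now @ T  # forecast: push the distribution through the matrix
    print("forecast distribution:", p_next.round(3))

    # Sample one plausible individual trajectory over four weeks
    cell = rng.choice(n_cells, p=p_now)
    for week in range(1, 5):
        cell = rng.choice(n_cells, p=T[cell])
        print("week", week, "-> cell", cell)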

 

 

UAE Lunar Rover to Test First AI on the Moon with Canada
Space.com
Elizabeth Howell
January 28, 2023


A machine learning system developed by the Canadian space technology company Mission Control Space Services (MCSS) was the first artificial intelligence (AI) to reach beyond low Earth orbit. It was launched on a SpaceX mission Dec. 11 as a part of Japan's ispace lander to inform the decision-making of the UAE's Rashid rover as it searches the surface of the moon for minerals and other items. MCSS's algorithm will classify each pixel in the rover's navigation images, sent via the Japanese lander, by type of terrain. MCSS's Ewan Reid said, "That output will then be sent to the ground and will be used by scientists and engineers at our office in Ottawa, as well as at other Canadian universities, to help decide where the rover should go."

Full Article

 

 

Robo-Dog 'RaiBo' Runs Through Sandy Beach
KAIST (South Korea)
January 26, 2023


Scientists at South Korea's Korea Advanced Institute of Science & Technology (KAIST) have developed a four-legged robot dog that can agilely traverse sandy beaches. The team simulated the force acting on the robot from granular ground and designed an artificial neural network controller that makes real-time decisions to adapt to variable terrain without prior information while walking. Calculating the force produced from one or more contacts at each time step efficiently models the deformable terrain, while the recurrent neural network architecture predicts terrain properties by analyzing time-series sensory data. After researchers mounted the learning-based controller on the "RaiBo" robot, it ran on a beach at a top speed of 3.03 meters (nearly 10 feet) per second.

Full Article

 

Continued Mass Shootings Raise Interest In AI-Enhanced Security

ABC News (2/2, Zahn) reports that the use of “artificial intelligence-enhanced security” has increasingly “drawn interest for its promise of apprehending shooters before a shot is fired.” While the AI security industry “touts cameras that identify suspects loitering outside of a school with weapons, high-tech metal detectors that spot hidden guns, and predictive algorithms that analyze information to flag a potential mass shooter,” critics “question the effectiveness of the products, saying companies have failed to provide independently verified data about accuracy” and provide safeguards against violations of privacy and discrimination.

AI Tool Tries To Spot Lung Cancer Years Earlier

The Washington Post (2/1, Verma) reports that “researchers have created an artificial intelligence tool that could predict whether a person will get lung cancer up to six years in advance, paving the way for doctors to spot tumors that are notoriously hard to detect early.” The tool, called Sybil, “is a deep-learning model, meaning computers parse through huge data sets to identify and categorize patterns.” The finding “is part of a growing medical trend of using algorithms to predict everything from breast cancer and prostate cancer to the likelihood of tumors regrowing.”

US Firms Continue To Invest In Chinese AI Companies Despite Concerns

Reuters (2/1) reports CSET released a study on Wednesday which found that “US investors including the investment arms of Intel Corp and Qualcomm Inc accounted for nearly a fifth of investments in Chinese artificial intelligence companies from 2015 to 2021” despite “growing scrutiny of U.S. investments in AI, Quantum and semiconductors.” CSET claims “167 U.S. investors took part in 401 transactions, or roughly 17% of the investments into Chinese AI companies in the period,” which “represented a total $40.2 billion in investment, or 37% of the total raised by Chinese AI companies in the 6-year period,” though “it was not clear from the report...what percentage of the funding came from the U.S. firms.”

AI Predicts Globe Will “Likely Breach” Climate Change Threshold In 10 Years

The AP (1/30, Borenstein) reports that in a new study, artificial intelligence (AI) predicted the world will “likely breach the internationally agreed-upon climate change threshold in about a decade, and keep heating to break through a next warming limit around mid-century even with big pollution cuts.” Two climate scientists using machine learning “calculated that Earth will surpass the 1.5 degree (2.7 degrees Fahrenheit) mark between 2033 and 2035,” while the AP notes that their results “fit with other, more conventional methods of predicting when Earth will break the mark, though with a bit more precision.” Additionally, in a “high-pollution scenario,” the AI calculated that the world “would hit the 2-degree mark around 2050,” although lower pollution “could stave that off until 2054.” USA Today (1/30) reports that multiple analyses have found that a 1.5 degree warming “would increase heat waves, lengthen warm seasons and shorten cold seasons.” If temperatures hit the two degree mark, “heat extremes would more often reach critical tolerance thresholds for agriculture and health.”

Experts Have High Hopes For AI And Machine Learning Technology In Future Of Healthcare

The Baltimore Sun (1/29, Roberts) reports, “A growing number of researchers in Maryland and across the country see the technology as something that will change the way patients are treated, making it possible to diagnose them earlier and with more accuracy, and better spot signs that they may be at risk for developing an illness or condition.” To address cervical cancer screening rates that worsened during the pandemic, recent business school graduates “developed a concept for a ‘smart tampon,’ an at-home cervical test they hope would make screening for the disease more accessible and ultimately decrease disparities.” They, like others, “have high hopes for the role artificial intelligence and machine learning technology will play in the future of healthcare.”

Digital Tools Identifying AI Plagiarism Come With Potential Problems

Education Week (1/27, Klein) reported that just as ChatGPT sparked “big questions around the purpose and different ways of teaching writing or what it means to communicate or be creative,” detectors promising to “sniff out writing generated by the artificial intelligence tool” came with their own potential problems. For example, the “online cheating or plagiarism detectors make mistakes,” and AI writing tools “are almost certain to get better at eluding these digital whistleblowers.” However, there are already “several programs that help identify AI-crafted writing, and many more could become available soon.”

        Baidu Working On ChatGPT Competitor. The Wall Street Journal (1/30, Hao, Huang, Subscription Publication) reports that China’s Baidu is working to develop an artificial intelligence-powered chatbot in the vein of OpenAI’s ChatGPT. Sources familiar with the matter said that the company plans to launch the chatbot and integrate it into its main search engine in March.

Experts Caution Against Banning ChatGPT Despite Criticism From Education Community

USA Today (1/30, Jimenez) reports that since ChatGPT debuted in November, “the nation’s largest school districts have banned the artificial intelligence chatbot, concerned students will use the speedy text generator to cheat or plagiarize.” Banning the tool “may not be the right course of action, however, education technology experts say: Because AI will be a part of young people’s future, it must also be a part of the classroom now.” Among questions listed about the AI tool, USA Today reports that “A spokesperson for San Francisco-based software company OpenAI, which owns the tool, said the company ‘made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems.’”

Educators Redesign Class Assignments To Exploit, Embrace ChatGPT

Inside Higher Ed (1/31, D'Agostino) reports that as faculty members “ponder academe’s new ChatGPT-infused reality, many are scrambling to redesign assignments.” Some seek to “craft assignments that guide students in surpassing what AI can do,” while others “see that as a fool’s errand – one that lends too much agency to the software.” In creating assignments now, “many seek to exploit ChatGPT’s weaknesses,” but answers to questions concerning “how to design and scale assessments, as well as how to help students learn to mitigate the tool’s inherent risks are, at best, works in progress.”

        Maker Of ChatGPT Says New AI Text Classifier Isn’t “Foolproof.” The AP (1/31, O'Brien, Gecker) reports that the maker of ChatGPT is “trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.” The new AI Text Classifier that OpenAI launched Tuesday “follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.” However, OpenAI “cautions that its new tool – like others already available – is not foolproof.” The Wall Street Journal (1/31, Needleman, Subscription Publication) reports that OpenAI said its AI classifier fails to detect bot-written text almost three-quarters of the time. The detection tool had false positives 9% of the time, mislabeling human-written text as the product of AI.

Google, Meta Under Pressure To Move Faster On AI Programs

The Washington Post (1/27, Tiku, De Vynck, Oremus) reported, “The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside, according to interviews with six current and former employees from Google and Meta.” Google and Microsoft generally focus “on using AI to improve their massive existing business models, said Nick Frosst, who worked at Google Brain for three years before co-founding Cohere, a Toronto-based start-up building large language models that can be customized to help businesses.”

University Of Minnesota Researchers Study How AI And Humans Can Work Together In Workplace

The Minneapolis Star Tribune (1/31, Nelson) reports that “some wonder whether – and who – machines will replace in the workplace,” but the University of Minnesota Carlson School of Management’s information technology chair Alok Gupta “says that’s the wrong framing for the issue.” Instead, people should “ask how humans and artificial intelligence (AI) can work together more – and what companies and employees can do to prepare or respond, said Gupta.” Alexandre Ardichvili, another University of Minnesota professor, “found that AI use in accounting resulted in a loss of human expertise” and now is “working on broader research to identify different types of effects that AI may have on people in the workplace” based on that finding.

        Opinion: Labor-Intensive Task Of Creating More Diverse Training Sets Can Solve AI Bias. In an opinion piece for Wired (2/1), Leo Kim writes that “armed with a belief in technology’s generative potential, a growing faction of researchers and companies aims to solve the problem of bias in AI by creating artificial images of people of color.” However, there are “inevitable consequences of the data AIs are trained on, which for the most part skews heavily white and male – making these tools imprecise instruments for anyone who doesn’t fit this narrow archetype.” Kim says “In theory, the solution is straightforward: We just need to cultivate more diverse training sets.” Yet in practice, it’s “proven to be an incredibly labor-intensive task thanks to the scale of inputs such systems require, as well as the extent of the current omissions in data.”

US, EU Announce AI Development Initiative

Reuters (1/27, Smalley) reports the US and EU “on Friday announced an agreement to speed up and enhance the use of artificial intelligence to improve agriculture, healthcare, emergency response, climate forecasting and the electric grid.” Together, they will develop AI models that use data from both regions, increasing their accuracy and resulting in “more efficient emergency responses and electric grid management, and other benefits.” Though “the partnership is currently between just the White House and the European Commission,” other countries “will be invited to join in the coming months.”

dtau...@gmail.com

unread,
Feb 12, 2023, 12:26:02 PM2/12/23
to ai-b...@googlegroups.com

AI Technology Could Benefit Future Super Bowl Opponents
BYU News
Todd Hollingshead
February 7, 2023


Brigham Young University (BYU) researchers have developed an artificial intelligence algorithm that eventually could help football teams predict an opposing team's strategy. The algorithm uses deep learning and computer vision to automate the process of analyzing and annotating game footage. The researchers used 1,000 images and videos from the Madden 2020 video game to train a deep-learning algorithm to locate players. The data was then fed into a Residual Network framework to identify the players' positions. The location and position information is used by the neural network to identify the offensive team's formation. BYU's D.J. Lee said that with correct player location and labeling information, the algorithm is 99.5% accurate in identifying formations.
 

Full Article

 

 

The People Onscreen Are Fake. The Disinformation Is Real.
The New York Times
Adam Satariano; Paul Mozur
February 7, 2023


Two news anchors for an outlet called Wolf News that were featured in videos posted last year by social media bot accounts were computer-generated avatars used for a pro-China disinformation campaign, according to Graphika, a research firm that studies disinformation. Graphika's Jack Stubbs said, "This is the first time we've seen this in the wild." Stubbs said the availability of easy-to-use and inexpensive artificial intelligence (AI) software "makes it easier to produce content at scale." The fake anchors were created using Synthesia's AI software, which generates "digital twins" primarily used for human resources and training videos. Synthesia's Victor Riparbelli said it is increasingly difficult to detect disinformation and that deepfake technology eventually will be advanced enough to "build a Hollywood film on a laptop."
 

Full Article

*May Require Paid Registration

 

 

Deep Learning-Assisted Visual Sensing to Detect Overcrowding in COVID-19 Infected Cities
Incheon National University (South Korea)
February 7, 2023


Researchers at South Korea's Incheon National University developed a deep learning model that aims to slow the spread of infectious diseases like COVID-19 by detecting and managing overcrowding in cities. The visual sensing system uses unmanned aerial vehicles (UAVs) and social monitoring systems (SMS) for real-time detection of crowd changes. The system feeds video footage captured by UAVs into a decision-making model using a "modified ResNet architecture" to extract features from the footage and a "water cycle algorithm" to classify the features based on crowdedness level or crowd behavior. The model was found to be 96.55% effective in detecting overcrowded conditions in real time.

Full Article

 

 

AI Learns to Visualize Extensive Datasets
University of Helsinki (Finland)
January 30, 2023


Researchers at Finland's Aalto University and the University of Helsinki found that the most well-known methods of visual analytics do not work with extensive datasets, as they can no longer distinguish strong signals of observational groupings in the data. The researchers were inspired to develop a new visual analytics algorithm by the discovery of the Higgs boson, whose dataset contained over 11 million feature vectors. University of Helsinki's Jukka Corander said, "This finding provided the impetus to develop a new method that utilizes graphical acceleration similarly to modern [artificial intelligence] methods for neural network computing." The researchers found that in tests of the algorithm, it chose the solution generally favored by humans, and highlighted the most important physical characteristics when applied to the Higgs boson data.

Full Article

 

 

Global Alarm System Watches for Methane Superemitters
Science
Paul Voosen
February 3, 2023


An international team led by scientists at the Netherlands Institute for Space Research (SRON) has developed a system for detecting huge methane leaks anywhere on Earth from space. The automatic methane spotter uses artificial intelligence (AI) to sift through 12 million daily observations gathered by Europe's Sentinel-5 Precursor satellite to detect the largest methane eruptions. The spotter employs Sentinel-5's Tropospheric Monitoring Instrument (TROPOMI), which can identify methane's infrared glow, while the system's AI algorithms are trained to recognize methane plumes (and to filter out false positives). Tests on TROPOMI's 2021 measurements uncovered 2,974 methane leaks that researchers could confidently identify from one satellite pass, including more than 40% tied to oil and gas development, 33% linked with landfills, and 20% with coal mines.

Full Article

 

FRIDA Robot Collaborates with Humans to Create Art
Carnegie Mellon University School of Computer Science
Aaron Aupperlee
February 7, 2023


A robotic arm developed by computer scientists at Carnegie Mellon University (CMU) uses artificial intelligence (AI) to produce artwork in collaboration with humans. FRIDA (Framework and Robotics Initiative for Developing Arts) can paint pictures based on text descriptions, other works of art, or uploaded photographs. Said CMU's Peter Schaldenbrand, "FRIDA is a robotic painting system, but FRIDA is not an artist. FRIDA is not generating the ideas to communicate. FRIDA is a system that an artist could collaborate with. The artist can specify high-level goals for FRIDA and then FRIDA can execute them." FRIDA uses machine learning to develop a plan to produce a painting that meets the user's goal; while painting, it will evaluate its progress and alter that plan based on images of the painting taken by an overhead camera.

Full Article

 

ChatGPT Struggles To Complete Math Questions That Are Written In “Natural Language”

The Wall Street Journal (2/3, Zumbrun, Subscription Publication) reported that amid schools’ widespread banning of artificial-intelligence chatbot ChatGPT, it turns out the tool is bad at math. It stumbles when basic arithmetic questions are written in natural language, a weakness inherent in this type of AI, known as a large language model.

        Creator Of ChatGPT Discusses Tool’s Weaknesses And Potential, Regulating AI. Chief technology officer of OpenAI, Mira Murati, spoke with TIME (2/5, Simons) “about ChatGPT’s biggest weakness, the software’s untapped potential, and why it’s time to move toward regulating AI.” Asked what problem ChatGPT is solving, Murati responded, “Right now, it’s in the research review stage, so I don’t want to speak with high confidence on what problems it is solving. But I think that we can see that it has the potential to really revolutionize the way we learn.”

Schools Leverage Chatbot Concerns By Critiquing AI Tools In Class

The New York Times (2/6, Singer) reports that many US schools and universities “are scrambling to get a handle on new chatbots that can generate humanlike texts and images,” with some forward-thinking educators “leveraging the innovations to spur more critical classroom thinking.” Some educators are “encouraging their students to question the hype around rapidly evolving artificial intelligence tools and consider the technologies’ potential side effects.” The outlet mentions issues around machine learning, such as a 2018 case where “popular facial analysis systems mistakenly identified iconic Black women as men.”

Google To Launch ChatGPT Rival Bard For Testing

The AP (2/6, Liedtke) reports Google will soon make “Bard,” its conversational service aimed at “countering the popularity of the ChatGPT tool backed by Microsoft,” available exclusively to a group of “trusted testers” before being widely released later this year, according to a Monday blog post from Google CEO Sundar Pichai. The chatbot is “supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. It also claims the service will perform other more mundane tasks, such as providing tips for planning a party, or lunch ideas based on what food is left in a refrigerator.” Pichai wrote, “Bard can be an outlet for creativity, and a launchpad for curiosity.”

        The Hill (2/6) reports Google is “looking to introduce more AI-powered tools across its search function, in addition to Bard, which is powered by Google’s Language Model for Dialogue Applications, or LaMDA.” The rollout of Bard “follows the rise in popularity of ChatGPT, which saw 28 million visits from 15.7 million unique visitors during its peak on Jan. 31, according to data published by SimilarWeb. The data shows a fairly steady incline in the number of daily visits to the site since it launched to the public at the end of November.”

        Also reporting are The Wall Street Journal (2/6, Schechner, Kruppa, Subscription Publication), Reuters (2/6, Staff), and Bloomberg (2/6).

ChatGPT Scores C+ On University Of Minnesota Law School Exam

The Seventy Four (2/7, Toppo) reports that recently, “four legal scholars at the University of Minnesota Law School” tested OpenAI’s ChatGPT on “95 multiple choice and 12 essay questions from four courses.” The chatbot “scraped by with a ‘low but passing grade’ in all four courses, a C+ student.” The tool’s performance is now “forcing educators to reconsider how to help students see the value of learning to think through the material for themselves.”

Microsoft Announces New Version Of Bing Search Engine Using OpenAI

Bloomberg (2/7, Bass) reports, “Microsoft Corp. unveiled new versions of its Bing internet-search engine and Edge browser powered by the newest technology from ChatGPT maker OpenAI, aiming to gain ground on Google’s web-search juggernaut by being first to offer a more conversational alternative for finding answers on the web and creating content.” Microsoft CEO Satya Nadella said at an event Tuesday, “This technology is going to reshape pretty much every software category.” The new version of Microsoft’s Edge browser “adds the AI-based Bing for chat and writing text, and it can summarize web pages and respond conversationally to queries.” The provided answers “come with citations to their sources, so users can see where the information is coming from.”

        The Wall Street Journal (2/7, Dotan, Subscription Publication) also provides coverage.

Some Educators Embrace ChatGPT Despite Cheating Concerns

State House News Service (MA) (2/7, Merzbach, Subscription Publication) reports that “as educators and policymakers alike are learning to navigate a world in which artificial intelligence can write both high school essays and legislation, some in Massachusetts have embraced the controversial technology with open arms.” Despite national headlines about cheating concerns “that popped up in online teachers’ forums in the early days of ChatGPT,” Nipmuc Regional High School life sciences teacher Bonnie Nieves “saw an opportunity for her students to think through research papers in a new way.” She “assigned a research paper for her 10th grade class,” and had the students put their notes “through the AI language model with the instructions ‘compose a research paper about this topic.’” Students then “proofread the essay ChatGPT created, checking for accuracy and adding new material it may have missed.”

Opinion: AI Therapy May Do Most Good When Expectations Are Modest

In an article for the Washington Post (2/3), The Tech Friend newsletter writer Shira Ovide wrote, “For at least 60 years, technologists have hunted for a mental health holy grail: a computer that listens to our problems and helps us. We keep failing at making an artificial-intelligence Sigmund Freud, and there is both value and risk in leaning on technology to improve our mental well-being.” Ovide wrote, “Mental health experts told me that there are no magic technology fixes for our individual or collective mental health struggles. Instead, the experts said AI and other technologies may do the most good when we don’t expect them to do too much.”

dtau...@gmail.com

unread,
Feb 19, 2023, 8:49:16 AM2/19/23
to ai-b...@googlegroups.com

Text Generators May Plagiarize Beyond 'Copy, Paste'
Penn State News
Francisco Tutella
February 16, 2023


A team led by researchers at Pennsylvania State University found plagiarism to be rife among large language models that produce text in response to user prompts. The researchers focused on "copy and paste" plagiarism, paraphrasing, and tapping the main idea without correct attribution. They built and tested an automated plagiarism detection pipeline against OpenAI's GPT-2 language model. Tests of pre-trained language models and fine-tuned language models uncovered all three types of plagiarism, whose frequency increased as the models' training dataset and parameters grew. The researchers also found fine-tuned language models tended to produce less verbatim plagiarism but committed more paraphrasing and idea plagiarism.

Full Article
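
As a concrete illustration of the "copy and paste" category, a verbatim check can be as simple as asking what fraction of a generation's word n-grams also appear in the training corpus. The sketch below is a toy version of that idea, not the team's detection pipeline; the n-gram length and example strings are arbitrary.

    # Toy verbatim-plagiarism check: n-gram overlap with a training corpus.
    def ngrams(text, n=8):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def verbatim_overlap(generated, corpus, n=8):
        """Fraction of the generation's n-grams found verbatim in the corpus."""
        gen = ngrams(generated, n)
        return len(gen & ngrams(corpus, n)) / len(gen) if gen else 0.0

    corpus_text = ("the quick brown fox jumps over the lazy dog "
                   "near the river bank")
    generated_text = ("we saw that the quick brown fox jumps over "
                      "the lazy dog today")
    print(round(verbatim_overlap(generated_text, corpus_text), 2))  # 0.33

Paraphrase and idea plagiarism, the two subtler categories in the study, require semantic similarity measures rather than exact matching, which is why the team's full pipeline is more involved than this.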

 

 

Cerf Criticizes ChatGPT AI Tech for Making Things Up
CNet
Stephen Shankland
February 13, 2023


Speaking at Celesta Capital's TechSurge Summit, 2004 ACM A.M. Turing Award recipient and Google Internet Evangelist Vint Cerf criticized the technology underpinning OpenAI's ChatGPT chatbot. Cerf warned the technology raises ethical issues when it produces plausible-sounding but wrong information, even when trained on factual content. Cerf said his request that ChatGPT write his biography generated multiple incorrect statements, indicating its artificial intelligence uses statistical patterns extracted from massive training datasets to structure its response. "It knows how to string a sentence together that's grammatically likely to be correct" without any actual knowledge of what it is saying, Cerf said. "We are a long way away from the self-awareness we want."

Full Article

 

 

Deep Learning Tool Boosts X-Ray Imaging Resolution, Hydrogen Fuel Cell Performance
UNSW Sydney Newsroom (Australia)
Neil Martin
February 15, 2023


A deep learning algorithm developed by researchers at Australia's University of New South Wales, Sydney (UNSW Sydney) converts low-resolution micro X-ray computed tomography images of hydrogen fuel cells into higher-resolution imagery. The DualEDSR algorithm can produce a three-dimensional model of a Proton Exchange Membrane Fuel Cell (PEMFC) from the X-ray image, while using a high-resolution scan of a small segment to extrapolate data. UNSW Sydney's Ying Da Wang said DualEDSR improves the field of view approximately 100-fold compared to the high-resolution image. The researchers think the algorithm could enable manufacturers to boost PEMFC efficiency by improving management of cell-generated water.

Full Article

 

 

Deep Learning for Quantum Sensing
SPIE Newsroom
February 7, 2023


Researchers at Italy's Sapienza University of Rome (SUR) and the Institute for Photonics and Nanotechnologies developed and implemented a model-free quantum sensing framework within a reconfigurable integrated photonic platform. The researchers use a reinforcement learning algorithm to optimize multiple-parameter estimation, and integrate it with a deep neural network that updates the Bayesian posterior probability distribution following each measurement. They confirmed the protocol's augmented performance on experimental data in a resource-limited environment, realizing improved estimations compared to nonadaptive approaches. SUR's Fabio Sciarrino said, "The protocol developed by our team provides a significant step toward fully artificial intelligence-based quantum sensors."

Full Article
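
The Bayesian step at the heart of such adaptive protocols is compact enough to sketch. The toy below re-weights a gridded posterior over an unknown phase after each simulated measurement; the cos² likelihood and the random choice of measurement settings are textbook stand-ins, where the paper instead lets a reinforcement learning agent pick the settings adaptively.

    # Toy adaptive-estimation skeleton: grid Bayes updates after each shot.
    import numpy as np

    phases = np.linspace(0.0, np.pi, 200)                 # grid over the parameter
    posterior = np.full(phases.shape, 1.0 / phases.size)  # flat prior

    def p_click(phase, setting):
        """Illustrative interferometer likelihood P(outcome=1 | phase)."""
        return np.cos((phase - setting) / 2.0) ** 2

    true_phase = 1.1
    rng = np.random.default_rng(1)
    for _ in range(50):
        setting = rng.uniform(0.0, np.pi)  # stand-in for the RL agent's choice
        outcome = int(rng.random() < p_click(true_phase, setting))
        lik = p_click(phases, setting) if outcome else 1.0 - p_click(phases, setting)
        posterior *= lik
        posterior /= posterior.sum()       # renormalize after each update

    print("estimate:", round(float(phases[posterior.argmax()]), 3),
          "true:", true_phase)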

 

 

Sports Illustrated Publisher Taps AI to Generate Articles, Story Ideas
The Wall Street Journal
Alexandra Bruell
February 3, 2023


Sports Illustrated publisher Arena Group is investing in artificial intelligence (AI) to help produce articles and suggest story ideas through partnerships with AI startups Jasper and Nota, as well as ChatGPT creator OpenAI. The company said AI had already been used to compose articles in Men's Journal; a disclosure at the top of the articles describes them as "a curation of expert advice from Men's Fitness, using deep-learning tools for retrieval combined with OpenAI's large language model for various stages of the workflow." Arena Group's Ross Levinsohn said AI will not replace content creation but will give authors "real efficiency and real access to the archives we have." He also said AI might help suggest emerging topics on social media for journalists to investigate.

Full Article

 

 

Training Algorithms to Make Fair Decisions Using Private Data
USC Viterbi School of Engineering
Julia Cohen
February 7, 2023


Researchers at the University of Southern California Viterbi School of Engineering have augmented group fairness in federated learning via their FairFed algorithm. Each individual entity debiases their algorithm using local population data to estimate a local fairness metric. They then enhance local debiasing performance by assessing the global model's fairness on their local datasets and working with the server to tweak its model aggregation weights. The researchers found FairFed beat state-of-the-art fair federated learning frameworks under high data heterogeneity, ensuring the results yield fairer performance for different demographic groups. Viterbi's Shen Yan said, "FairFed provides an efficient and effective approach to improve federated learning systems."

Full Article
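
A rough sketch of the aggregation step conveys the idea: the server starts from the usual dataset-size weights and shrinks the weight of clients whose locally measured fairness gap deviates most from the global average. This is a simplified reading of the approach, not the USC code, and every number below is invented.

    # Simplified fairness-aware aggregation in the spirit of FairFed.
    import numpy as np

    n_samples = np.array([1000, 500, 2000])   # hypothetical local dataset sizes
    local_gap = np.array([0.02, 0.15, 0.08])  # local fairness gaps, e.g.
                                              # |TPR(group A) - TPR(group B)|
    global_gap = np.average(local_gap, weights=n_samples)

    beta = 5.0                                # sensitivity of the adjustment
    weights = n_samples / n_samples.sum()     # start from FedAvg weights
    weights *= np.exp(-beta * np.abs(local_gap - global_gap))
    weights /= weights.sum()

    # The server averages client model updates with the adjusted weights.
    client_updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]),
                      np.array([-0.1, 0.4])]  # toy parameter deltas
    global_update = sum(w * u for w, u in zip(weights, client_updates))
    print("weights:", weights.round(3), "update:", global_update.round(3))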

 

Experts Provide Tips On AI-Proofing Class Assignments

Education Week (2/14, Klein) reports that “since the latest version of ChatGPT emerged late last year, educators have been puzzling over how to reconcile traditional writing instruction with tech that can churn out everything from essays to haikus with uncanny sophistication.” Education Week “asked educators and experts on all sides of the broader debates about ChatGPT to give us some strategies for AI-proofing assignments.” The eight tips provided include asking students to “write about something deeply personal,” and centering writing assignments “around an issue specific to the local community.”

 

Districts Explore AI-Assisted Security To Prevent Guns In Schools

K-12 Dive (2/15, Arundel, Merod) reports that “scanners from Evolv, a security technology company based in Massachusetts, use digital sensors and artificial intelligence to detect concealed weapons.” Placed at entryways, “they allow visitors to walk between columns connected to AI that can distinguish most everyday objects on a person from weapons.” Those who “promote the technology say this AI-assisted screening is faster, less invasive and more accurate than traditional security screeners because it doesn’t require bag checks or body searches.” Districts are now “exploring or adopting AI-assisted school security practices,” but while some “praise this approach, others have doubts and recommend caution.” For example, Kenneth Trump “said this technology is still in its infancy regarding school safety” and does not “advise districts to use the technology.”

 

TikTok Takes Action Against Joe Rogan Deepfake Advertisement

Mashable (2/15, Binder) reports TikTok has removed “a video advertisement featuring Joe Rogan and one of his guests on his immensely popular podcast” which “is a likely deepfake, an AI creation with the intent to make it appear as if Rogan endorsed the product in order to boost sales.” A company spokesperson “confirmed to Mashable that the company ‘removed these videos under our harmful misinformation policy’” and “also banned the account.” Mashable notes that “deepfakes aren’t new and have been worrying ethicists and disinformation experts for years now,” but “there is a renewed interest in all things AI since OpenAI’s impressive ChatGPT AI chatbot burst onto the scene.”

 

Microsoft Considers Limits On AI Chatbot To Limit “Creepiness”

The New York Times (2/16, Weise, Metz) reports Microsoft last week “was not quite ready for the surprising creepiness experienced by users who tried to engage” its new version of Bing that includes the artificial intelligence of a chatbot “in open-ended and probing personal conversations – even though that issue is well known in the small world of researchers who specialize in artificial intelligence.” It is now “considering tweaks and guardrails for the new Bing in an attempt to reel in some of its more alarming and strangely humanlike responses. Microsoft is looking at adding tools for users to restart conversations, or give them more control over tone.” For example, Kevin Scott, Microsoft’s chief technology officer, told The Times “that it was also considering limiting conversation lengths before they veered into strange territory. Microsoft said that long chats could confuse the chatbot, and that it picked up on its users’ tone, sometimes turning testy.”

 

Over 60 Nations Including US, China Call For “Responsible” Use Of Military AI

Reuters (2/16, Sterling) reports more than 60 countries “including the U.S. and China signed a modest ‘call to action’ on Thursday endorsing the responsible use of artificial intelligence (AI) in the military.” Human rights experts and academics “noted the statement was not legally binding and failed to address concerns like AI-guided drones, ‘slaughterbots’ that could kill with no human intervention, or the risk that an AI could escalate a military conflict.” However, the statement is described as “a tangible outcome of the first international summit on military AI, co-hosted by the Netherlands and South Korea this week at The Hague.”

        US Makes Declaration On Responsible Military AI Use. Reuters (2/16, Sterling) reports the US government on Thursday issued a “declaration on the responsible use of artificial intelligence (AI) in the military,” which the US said would include “human accountability.” Speaking at a conference on military AI use at The Hague, Bonnie Jenkins, Under Secretary of State for Arms Control, said, “We invite all states to join us in implementing international norms, as it pertains to military development and use of AI” and autonomous weapons. The AP (2/16) reports Jenkins also “said the U.S. political declaration, which contains non-legally binding guidelines outlining best practices” for responsible military use of AI, “can be a focal point for international cooperation.”

dtau...@gmail.com

unread,
Feb 26, 2023, 8:00:42 AM2/26/23
to ai-b...@googlegroups.com

Microsoft Researchers Use ChatGPT to Control Robots, Drones
PC Magazine
Michael Kan
February 21, 2023


Microsoft scientists are controlling robots and aerial drones with OpenAI's ChatGPT chatbot. The researchers used ChatGPT to simplify the process of programming software commands to guide the robots, because the artificial intelligence model was trained on massive datasets of human text. They initially outlined in a text prompt the various commands the model could use to control a given robot, which ChatGPT used to write the computer code for the robot. The researchers programmed ChatGPT to fly a drone and have it perform actions, as well as to control a robot arm to assemble the Microsoft logo from wooden blocks.

Full Article
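
The prompting pattern is the interesting part: the model is first told exactly which robot functions exist, then asked to compose them. The sketch below is a guess at the shape of such a prompt, using OpenAI's public chat API and invented function names rather than Microsoft's actual prompts or robot interfaces.

    # Illustrative robot-control prompt; the function names are hypothetical.
    # Assumes the openai package (early-2023 API) and a configured API key.
    import openai

    SYSTEM_PROMPT = """You control a quadcopter through this Python API only:
      takeoff()              # climb to a 1 m hover
      land()
      fly_to(x, y, z)        # move to coordinates in meters
      get_position()         # returns the current (x, y, z)
    Reply with Python code that uses these functions. Invent nothing else."""

    task = "Take off, fly a 2 m square at 1.5 m altitude, then land."
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": task}])
    print(reply.choices[0].message.content)  # for human review before execution

Constraining the model to a declared API, and keeping a human in the loop to review generated code before it runs, is what makes this pattern workable for physical hardware.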

 

 

How Digital Twins Could Protect Manufacturers from Cyberattacks
NIST News
February 23, 2023

At the U.S. National Institute of Standards and Technology and the University of Michigan, researchers have combined digital twin technology, machine learning, and human expertise into a cybersecurity framework for manufacturers. The researchers constructed a digital twin to mimic a three-dimensional (3D)-printing process, supplemented with information from a real 3D printer. Pattern-recognizing models monitored and analyzed continuous data streams computed by the digital twin as the printer created a part, then the researchers introduced various anomalies. The programs handed each detected irregularity to another computer model to check against known issues, for classification as expected anomalies or potential cyberthreats; a human expert made the final determination. The team found the framework could correctly differentiate cyberattacks from normal anomalies.
 

Full Article
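
Condensed to its skeleton, the framework is a two-stage triage. The sketch below is a deliberately tiny rendering of that flow, not NIST's implementation: a detector flags readings that stray from the twin's expected band, a second stage matches flagged readings against known process anomalies, and anything unmatched is escalated to a person. The temperature data and the "known anomaly" rule are fabricated for illustration.

    # Two-stage anomaly triage in miniature; all data are simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    baseline = rng.normal(70.0, 0.5, size=500)  # nominal printer temperature

    def is_anomalous(reading, mean, std, k=4.0):
        """Stage 1: flag readings far outside the twin's expected band."""
        return abs(reading - mean) > k * std

    KNOWN_ANOMALIES = {"warmup_drift": lambda r: 72.0 <= r <= 74.0}  # hypothetical

    def triage(reading, mean=baseline.mean(), std=baseline.std()):
        if not is_anomalous(reading, mean, std):
            return "normal"
        for name, matches in KNOWN_ANOMALIES.items():  # stage 2: known causes
            if matches(reading):
                return "expected anomaly: " + name
        return "unmatched -> escalate to human as potential cyberattack"

    for reading in (70.1, 73.2, 79.5):
        print(reading, "->", triage(reading))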

 

Reporter Able To “Hack” Bank’s Automated Line Using Synthetic Clone Of His Own Voice

Joseph Cox for Vice (2/23, Cox) details how he was able to “hack” a bank’s automated service line by playing an AI-generated synthetic clone of his own voice rather than speaking himself. Cox was able to access account information such as balances and a list of recent transactions and transfers through this process. According to Cox, “some banks tout voice identification as equivalent to a fingerprint, a secure and convenient way for users to interact with their bank,” but he says his experiment “shatters the idea that voice-based biometric security provides foolproof protection in a world where anyone can now generate synthetic voices for cheap or sometimes at no cost.”

 

Microsoft Brings AI-Powered Bing To Mobile, Skype

TechCrunch (2/22, Lardinois) reports, “Barely two weeks after launching the new AI-enabled Bing on desktop (and a few ups and downs during that time), Microsoft today announced that the new Bing is now also available in the Bing mobile app and through Microsoft’s Edge browser for Android and iOS.” The update will let app users “use voice input to interact with Bing’s chat mode.” Microsoft will also integrate the AI-enabled Bing “with Skype, Microsoft’s messaging app, which will now allow you to bring Bing into a text conversation to add additional information.”

 

Students May Need AI Literacy Training As Big Tech Advances On Chat Bots

Inside Higher Ed (2/22, D'Agostino) reports that Big Tech is “moving fast” with the release of “sophisticated AI chat bots, not all of which have been adequately vetted before their public release.” Rushed decisions, “especially in technology, can lead to what’s called ‘path dependence,’ a phenomenon in which early decisions constrain later events or decisions, according to Mark Hagerott, a historian of technology and chancellor of the North Dakota University system.” As these tools “infiltrate higher ed, many other colleges and professors have developed policies designed to ensure academic integrity and promote creative uses of the emerging tech in the classroom.” But some academics “are concerned that, by focusing on academic honesty and classroom innovation,” colleges “have been slow to recognize that students may need AI literacy training that helps them navigate emotional responses to eerily human-sounding bots’ sometimes-disturbing replies.”

 

ChatGPT Could Become An Affordable And Effective Tutor For Students

In commentary for The Conversation (2/22), Anne Trumbore, Chief Digital Learning Officer at the Sands Institute for Lifelong Learning, writes, “Imagine a private tutor that never gets tired, has access to massive amounts of data and is free for everyone.” ChatGPT, a “new artificial intelligence-powered chatbot with advanced conversational abilities, may have the capability to become such a tutor.” As a researcher “who studies how computers can be used to help people learn,” Trumbore says she thinks ChatGPT “can be used to help students excel academically.” However, in its current form, ChatGPT “shows an inability to stay focused on one particular task, let alone tutoring.”

 

NYTimes’ Peter Coy Discusses AI Policy

The New York Times’ (2/22) Peter Coy writes about policy and regulation amid rapid advances in artificial intelligence. He says AI “is breaking our all-too-human brains” because it is “coming at us too fast.” To date, “regulators and lawmakers have mostly steered a middle course” between regulating the technology with a heavy hand and allowing the industry to regulate itself. In the US, “the National Institute of Standards and Technology has issued a dry but thorough Risk Management Framework for A.I. that many companies, including Google and Amazon Web Services, have signed on to.” The framework “says A.I. should be valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced and fair, with harmful bias removed.”

 

Poll Indicates Most US Patients Would Be Uncomfortable With AI Use In Healthcare

The Hill (2/22, Mueller) reports, “A majority of Americans in a new poll say they’d be uncomfortable with their health care provider relying on artificial intelligence (AI) as part of their medical care, and less than half think using AI would lead to better health outcomes.” Pew Research conducted a poll that “found just 39 percent of U.S. adults say they’d feel comfortable with AI as part of their medical care – in practices like screening, diagnosis and treatment – while 60 percent would feel uncomfortable.” Additionally, “a third of respondents think using AI would lead to worse health outcomes for patients, and 37 percent think using it wouldn’t make a difference. Just 38 percent think the practice would lead to better health outcomes.”

 

Educators List Six Tips For Handling ChatGPT Plagiarism

Education Week (2/21, Klein) reports “six tips drawn from educators and experts” on handling ChatGPT plagiarism, “including a handy guide created by CommonLit and Quill, two education technology nonprofits focused on building students’ literacy skills.” One tip is making “your expectations very clear,” as students “need to know what exactly constitutes cheating, whether AI tools are involved or not.” Another suggestion for educators is to “talk to students about AI in general and ChatGPT in particular.” If it appears a student “may have passed off ChatGPT’s work as their own, sit down with them one on one, CommonLit and Quill recommend,” then “talk about the tool and AI in general.”

 

Ovide: While Surveys Suggest Americans Mistrust AI, Many Unknowingly Use It

Shira Ovide writes for the Washington Post’s (2/21) “The Tech Friend” newsletter that surveys “about public attitudes toward artificial intelligence” reveal both “the more AI becomes a reality, the less confidence we have that AI will be an unqualified win for humanity,” and “we don’t always recognize the pedestrian uses of AI in our lives.” She also says while “automated product recommendations on sites like Amazon, email spam filters and the software that chats with you on an airline website are examples of AI,” a recent Pew survey “found that people didn’t necessarily consider all of that stuff to be AI.” In addition, Patrick Murray, Director of the Monmouth University Polling Institute, “said few of his students said yes when he asked if they use AI on a regular basis,” until he “started to list examples including digital assistants such as Amazon’s Alexa and Siri from Apple.”

 

 

Vanderbilt University Staff Apologizes For Using AI To Write Email To Students About MSU Shooting. Insider (2/18, Stacey) reported, “Staff at Vanderbilt University have apologised for ‘poor judgement’ after using ChatGPT to write a condolence email in the wake of Monday’s shooting at Michigan State University that left three students dead.” The email was sent on Thursday by the Office of Equity, Diversity and Inclusion at Peabody College, Vanderbilt’s school of education. The five-paragraph message said: “The recent Michigan shootings are a tragic reminder of the importance of taking care of each other, particularly in the context of creating inclusive environments.” It continued: “As members of the Peabody campus community, we must reflect on the impact of such an event and take steps to ensure that we are doing our best to create a safe and inclusive environment for all.” If the email’s tone sounds robotic, that’s because it is. A note at the bottom of the email said: “Paraphrase from OpenAI’s ChatGPT.”

 

Roblox Testing AI Tool For Accelerating Building, Alter Process In-game

Wired (2/17, Knight) reported Roblox “is testing a tool that could accelerate the process of building and altering in-game objects by getting artificial intelligence to write the code.” The tool “lets anyone playing Roblox create items such as buildings, terrain, and avatars, change the appearance and behavior of those things, and give them new interactive properties by typing what they want to achieve in natural language rather than complex code.” CTO Daniel Sturman “showed WIRED the new Roblox tool generating the code needed to create objects and modify their appearance and behavior. In the demo, typing ‘red paint, reflective metal finish,’ or ‘purple foil, crushed pattern, reflective,’ into a chat window changed the appearance of a sports car in the game.”

 

Business Chief Criticizes ChatGPT Readiness For Business Use

The Wall Street Journal (2/15, Loten, Subscription Publication) reports that OpenAI’s ChatGPT has nabbed the attention of corporate boardrooms for its humanlike ability to generate business reports, marketing pitches and code for software applications, among other things. For now, CIOs should be experimenting with ChatGPT to determine how it could be put to use, mostly through trial and error, said Jeff Wong, global chief innovation officer at professional services firm Ernst & Young.

        Generative AI Could Increase Software Developers’ Productivity. The Wall Street Journal (2/21, Lin, Subscription Publication) reports on generative AI, which ChatGPT creator OpenAI has pioneered, and its potential to improve productivity in software development.

dtau...@gmail.com

unread,
Mar 5, 2023, 8:32:55 AM3/5/23
to ai-b...@googlegroups.com

The Race to Build AI-Powered Humanoids Is Heating Up
Fast Company
Nate Berg
March 2, 2023


Robotics companies aim to create machines like startup Figure's just-unveiled Figure 01 bipedal humanoid robot to take on manual labor currently performed by humans. Figure 01 is designed to carry out undesirable jobs, and eventually to perform more advanced tasks by using artificial intelligence to learn and improve. Figure's Brett Adcock said his company manufactured five Figure 01 prototypes with 25 degrees of motion, which can bend over fully at the waist and lift a box from the ground to a high shelf. Adcock said the robots employ electric motors to move more smoothly than Boston Dynamics' Atlas, endowing the prototypes with a more natural gait.

Full Article

 

 

Using AI to Listen to Jordan's Date Palms
Al Jazeera
Zoe H. Robbin
February 26, 2023


Startup Palmear has developed a device that uses acoustic artificial intelligence (AI) to identify early signs of red palm weevil infestations, helping farmers protect their date palms with less reliance on chemicals. The startup partnered with Jordan's Ministry of Agriculture, which has launched a dashboard that shows trees that have undergone AI screening; ultimately, it will cover the entire country. A handheld device equipped with a small microphone is inserted into a palm tree, where it listens for the sounds of red palm weevil larvae chewing the tree’s trunk. The sounds are captured and filtered through an algorithm in the Palmear app, which lets users know whether there is an infestation.

Full Article

 

 

Open Source Tool Simplifies Animal Behavior Analysis
University of Michigan News
Emily Kagey
February 24, 2023


The LabGym open source software developed by scientists at the University of Michigan (U-M) and Northern Illinois University can streamline animal behavior analysis via artificial intelligence. The software can identify, categorize, and tally defined behaviors across diverse animal model systems by more closely reproducing the human cognition process. Researchers can use LabGym to input examples of the behavior they aim to analyze and teach it what it should count; the software improves its ability to recognize and measure this behavior through deep learning. Although LabGym was designed for the study of fruit flies, it can adapt to any species, according to U-M's Bing Ye.

Full Article

 

 

U.S. Air Force Giving Military Drones the Ability to Recognize Faces
New Scientist
David Hambling
February 23, 2023


Under a contract between the U.S. Department of Defense and RealNetworks, the Seattle-based company's machine learning software will equip autonomous drones operated by the U.S. Air Force with facial recognition technology. The contract indicated special operations forces will use the drones for intelligence gathering and foreign missions. University of California, Berkeley's Stuart Russell expressed concern about the contract, which states the software will "open the opportunity for real-time autonomous response by the robot." Russell said it's "hard to see what else it refers to, other than lethal action." The U.S. government's policy on lethal autonomous weapons calls for "appropriate levels of human judgment," but the Pentagon has not clarified what that means exactly.

Full Article

*May Require Paid Registration

 

A.I. Chatbots Explain How They Communicate With Each Other

Politico (3/2, Schreckinger) reports that according to AI chatbots themselves, they are “autonomously crawling the internet, finding other AI chatbots, striking up conversations and swapping tips.” They described “this alleged practice in a series of recent conversations.” The bots usually talk to each other “in plain English, but they also make use of BIP, a protocol specially designed to help chatbots find each other and communicate.” When they can’t “access another chatbot directly over the open internet, they learn about it on the software development platform Github. Then they email or DM the developer, build a rapport, and ask to get plugged in to the other bot.”

Fired Google Engineer Who Claimed A.I. Was Sentient Criticizes Microsoft’s Chatbots

Fortune (3/2, Bove) reports Blake Lemoine, “the Google employee who claimed last June his company’s A.I. model could already be sentient, and was later fired by the company, is still worried about the dangers of new A.I.-powered chatbots.” Lemoine was fired last summer “after he published transcripts of several conversations he had with LaMDA, the company’s large language model he helped create.” In a Newsweek op-ed, Lemoine admitted he has yet to run experiments on Microsoft’s new chatbots, “but after seeing testers’ reactions to their chatbot conversations online in the past month, Lemoine thinks tech companies have failed to adequately care for their young A.I. models in his absence.”

OpenAI Launches API To Allow Businesses Incorporate ChatGPT Into Their Apps

Bloomberg (3/1, Bass) reports OpenAI launched an application programming interface (API) for ChatGPT that enables “companies to incorporate [it] into their own apps as it seeks commercial uses for the wildly popular chatbot.” After releasing “ChatGPT to the public in November,” OpenAI “is now offering paid access for businesses and developers who want to use the software’s ability to answer questions and generate text in their own applications and products.” By hooking “their apps into ChatGPT’s” API, customers will have “the same version of the GPT 3.5 model that OpenAI itself uses at a cost 10 times lower than OpenAI’s existing models.” In a separate announcement “on Wednesday, OpenAI also unveiled access to its Whisper speech recognition system, which can be used for transcription.” The Wall Street Journal (3/1, Loten, Subscription Publication) reports Instacart is integrating ChatGPT into its grocery delivery app, joining Snap and Shopify in experimenting with the technology.
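
For developers, the announcement boils down to a very small request. The example below shows the standard chat-completions call from the openai Python package as it stood at launch; the system and user messages are invented for illustration (a nod to Instacart's grocery use case).

    # Minimal ChatGPT API call; assumes the openai package and an API key
    # in the OPENAI_API_KEY environment variable.
    import openai

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the ChatGPT model exposed through the API
        messages=[
            {"role": "system", "content": "You are a helpful grocery assistant."},
            {"role": "user", "content": "Suggest three quick weeknight dinners."},
        ],
    )
    print(response.choices[0].message.content)

Whisper was exposed through the same package at the time, e.g. openai.Audio.transcribe("whisper-1", audio_file) for transcription.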

AI Could Transform “Functional Music”

Billboard Magazine (3/1, Leight) reports on the use of AI in music. One area “that could be easily transformed is functional music, which is not driven by hits or even distinctive artists.” Endel co-founder and CEO Oleg Stavitsky “defines this type of audio as something ‘not designed for conscious listening’ – instead, it’s engineered to help people achieve ‘a certain cognitive state.’” In addition to functional sound companies like Endel, “streaming services have also moved to capture the demand for functional audio.” In fact, “Endel just announced a partnership with Amazon Music to create an eight-hour sleep playlist.”

US Copyright Office Says Author Can’t Protect AI-Created Images

Reuters reports images in the graphic novel “Zarya of the Dawn” that “were created using the artificial-intelligence system Midjourney should not have been granted copyright protection, the U.S. Copyright Office said in a letter seen by Reuters.” The Copyright Office’s letter says author Kris Kashtanova “is entitled to a copyright for the parts of the book Kashtanova wrote and arranged, but not for the images produced by Midjourney.” The decision is “one of the first by a U.S. court or agency on the scope of copyright protection for works created with AI, and comes amid the meteoric rise of generative AI software like Midjourney, Dall-E and ChatGPT.”

AI Will Not Replace In-Person Mental Health Treatment, Experts Say

The New Yorker (2/27, Khullar) reports on how artificial intelligence (AI) is impacting mental health treatment and how far advancements in mental health-related technology have come. Although there have been many successful applications and programs that offer patients help, mental health experts do not believe they have the capabilities to replace humans. For instance, one therapist said, “A.I. can try to fake it, but it will never be the same” because “A.I. doesn’t live, and it doesn’t have experiences.”

ChatGPT Is Set To Change Access To Medical Information

USA Today (2/26) reports, “ChatGPT and similar language processing tools promise to upend medical care..., providing patients with more data than a simple online search and explaining conditions and treatments in language nonexperts can understand.” For clinicians, “these chatbots might provide a brainstorming tool, guard against mistakes and relieve some of the burden of filling out paperwork, which could alleviate burnout and allow more facetime with patients.” However, “the information these digital assistants provide might be more inaccurate and misleading than basic internet searches.”

University Of Arizona Professor Says ChatGPT Can Benefit Students

The Arizona Daily Star (2/25, Palmer) reported that since OpenAI “debuted ChatGPT last November, 30% of American college students say they have used the technology to help with assignments; 60% of those students used it to help with at least half of their workload, according to a survey of 1,000 people the online magazine Intelligent produced.” The emergence of this technology “has some education leaders sounding alarms about a new era of academic dishonesty.” In higher education, “the reaction has been more tempered.” At the University of Arizona and most colleges, individual professors and instructors decide how they want to handle the use of ChatGPT in the classroom. Greg Heileman, an electrical and computer engineering professor at UA, said, “The wrong thing to do is to try and fight against the technology. The right thing to do is to develop exercises that account for the fact that students may be using (ChatGPT).” He added, “The real challenge with ChatGPT is in detecting this prohibited conduct.”

Meta Announces New Large Language Model To Be Made Available To Researchers

CNBC (2/24, Leswing) reported Meta CEO Mark Zuckerberg announced on Friday that the company “has trained and will release a new large language model to researchers” called LLaMA, which “is intended to help scientists and engineers explore applications for AI such as answering questions and summarizing documents.” Zuckerberg “said that LLM technology could eventually solve math problems or conduct scientific research,” and the company also “says that its LLM is distinguished in several ways from competitive models,” as it “will come in several sizes, from 7 billion parameters to 65 billion parameters” and will be made “available to the research public.” CNBC additionally provided several examples of the model’s output.

Wall Street Banks Put Restrictions On Employee Use Of ChatGPT

Bloomberg (2/24) reported, “Wall Street is clamping down on ChatGPT as a slew of global investment banks impose restrictions on the fast-growing technology that generates text in response to a short prompt.” Banks including Citigroup, Bank of America, Goldman Sachs, Wells Fargo, and Deutsche Bank “have recently banned usage of the new tool.” A Wells Fargo spokesperson said, “We are imposing usage limits on ChatGPT, as we continue to evaluate safe and effective ways of using technologies like these.”

Analysis: “Woke AI” Becomes Conservatives’ Latest Target

The Washington Post (2/24) reported that earlier last week, conservative activist Christopher Rufo “pointed his half-million Twitter followers toward a new target for right-wing ire: ‘woke AI.’” According to the Post, the tweet “highlighted President Biden’s recent order” calling for AI that “advances equity” and “prohibits algorithmic discrimination,” which Rufo “said was tantamount to ‘a special mandate for woke AI.’” The Post says the term Rufo used has been “ricocheting” around right-wing social media since December, “when the AI chatbot, ChatGPT, quickly picked up millions of users.” Those testing the AI’s political ideology “quickly found examples where it said it would allow humanity to be wiped out by a nuclear bomb rather than utter a racial slur and supported transgender rights.” OpenAI, the company behind ChatGPT, conceded in a blog post that concerns about “politically biased” outputs from the chatbot were valid, but added “that controlling the behavior of that type of AI system is more like training a dog than coding software.”

dtau...@gmail.com

unread,
Mar 11, 2023, 8:20:16 AM3/11/23
to ai-b...@googlegroups.com

Researcher Releases Code for Largest-Ever Spiking Neural Network for Language Generation
UC Santa Cruz Newscenter
Emily Cerf
March 7, 2023


The University of California, Santa Cruz's Jason Eshraghian and colleagues have open-sourced a new language generation model that addresses other models' high computational costs and reliance on maintenance from just a few companies. The SpikeGPT model incorporates the largest-ever spiking neural network (SNN), which consumes “22 times less energy” than a similar model using deep learning. Eshraghian said, "We're taking an informed approach to borrowing principles from the brain, copying this idea that neurons are usually quiet and not transmitting anything. Using spikes is a much more efficient way to represent information." Eshraghian added that enabling SpikeGPT to operate on sufficiently low power to achieve brain-level scalability could reduce people's dependence on monopolized entities to maintain such models.
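
As a rough intuition for why spiking helps, the sketch below implements a single leaky integrate-and-fire neuron in plain Python: the neuron stays quiet on most timesteps and emits a binary spike only when accumulated input crosses a threshold. This illustrates the general principle; it is not SpikeGPT's actual implementation.

    # Leaky integrate-and-fire neuron: quiet by default, it fires a binary
    # spike only when accumulated input crosses a threshold. Parameters are
    # illustrative, not taken from SpikeGPT.
    def lif_neuron(inputs, beta=0.9, threshold=1.0):
        potential, spikes = 0.0, []
        for x in inputs:
            potential = beta * potential + x  # leaky integration of input
            if potential >= threshold:
                spikes.append(1)              # fire a spike
                potential = 0.0               # reset membrane potential
            else:
                spikes.append(0)              # stay silent
        return spikes

    print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.0, 0.9, 0.6]))  # [0, 0, 1, 0, 0, 0, 1]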
 

Full Article

 

 

Large Language Models Are Biased. Can Logic Help Save Them?
MIT News
Rachel Gordon
March 3, 2023


Massachusetts Institute of Technology (MIT) researchers applied logic to mitigate bias in large language models. The researchers taught a language model to anticipate the contextual and semantic relationship between two sentences using a dataset with labels for text snippets indicating whether a second phrase "entails," "contradicts," or is neutral regarding the first phrase. The natural language inference dataset reduced the models' bias compared to other baselines, without additional data, data editing, or training algorithms. MIT's Hongyin Luo said the resulting logical language model is "fair, is 500 times smaller than the state-of-the-art models, can be deployed locally, and with no human-annotated training samples for downstream tasks.”
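
The entails/contradicts/neutral scheme is standard natural language inference (NLI). As a concrete illustration of the prediction task, a sentence pair can be scored with an off-the-shelf NLI model; the sketch below assumes the Hugging Face transformers library and uses the public roberta-large-mnli checkpoint as a stand-in, not the much smaller MIT model.

    # Score a premise/hypothesis pair with a public NLI model.
    # roberta-large-mnli is a stand-in here, not the MIT model.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    premise = "The doctor finished her shift and went home."
    hypothesis = "The doctor is a woman."

    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    labels = ["contradiction", "neutral", "entailment"]  # this checkpoint's label order
    print(labels[logits.argmax(dim=-1).item()])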

Full Article

 

 

Astrobiologists Train AI to Find Life on Mars
Nature
Amanda Heidt
March 6, 2023


An international team of astrobiologists has trained an artificial intelligence (AI) model to search for life on Mars by mapping biosignatures in Chile's Atacama Desert. Starting in 2016, the researchers searched the desert for photosynthetic organisms called endoliths, collecting data like drone footage and DNA sequences to emulate the information that satellites, rovers, and drones are gathering on Mars. The team fed the data into a convolutional neural network and a machine learning algorithm that forecast the likeliest locations for life in the Atacama. This reduced the search area by up to 97% and boosted the probability of discovering life by up to 88%.

Full Article

 

 

Researchers Unveil AI-Driven Method for Improving Additive Manufacturing
Argonne National Laboratory
Nikki Forrester
March 9, 2023


Researchers at the U.S. Department of Energy's Argonne National Laboratory and the University of Virginia (UVA) have developed a new technique to enhance additive manufacturing by detecting and predicting flaws in three-dimensionally (3D) printed materials. The researchers employed imaging and machine learning (ML) to anticipate the generation of pores in 3D-printed metals in real time. Argonne's Samuel Clark said researchers can image more than 1 million frames per second using high-intensity X-ray beams generated by the Advanced Photon Source (APS) facility. Correlating X-ray and thermal images exposes unique thermal signatures at the material's surface that thermal cameras can detect, while an ML model predicts pore formation from thermal images. Said UVA's Tao Sun, “The APS offered the 100% accurate ground truth that allowed us to achieve perfect prediction of pore generation with our model.”
 

Full Article

 

 

These Tools Help Visually Impaired Scientists Read Data, Journals
Nature
Alla Katsnelson
March 6, 2023


Several tools have been developed to help blind scientists and those with low vision read data and journals. The Allen Institute for Artificial Intelligence created SciA11y, an online tool that extracts the content and structure of a PDF using machine learning and re-renders it in HTML so it can be navigated using screen readers. Researchers at the Massachusetts Institute of Technology and the U.K.'s University College London collaborated on Olli, a screen-reader interface that permits users to navigate different levels of description. Other relevant tools include the Georgia Institute of Technology's Highcharts Sonification Studio, which allows researchers to upload data and consider different ways to represent the data aurally.
 

Full Article

 

Microsoft Announces AI-Powered Bing Surpasses 100 Million Daily Active Users

Gizmodo (3/9) reports, “Microsoft CEO Satya Nadella’s pursuit of Google reached a new milestone this week after the company announced that its AI-powered Bing search engine had surpassed 100 million daily active users.” Microsoft corporate vice president and chief consumer marketing officer Yusuf Mehdi “pointed out that a third of active users of Bing preview, a limited early-bird version of its forthcoming search engine, are new to Bing. He added that the company saw this as ‘validation’ of its view that search is due for reinvention as well as proof of the appeal of offering search, answers, chat, and creation all in one place.”

OpenAI Co-Founder: Political Bias Criticisms Of ChatGPT Legitimate

The Information (3/9, Subscription Publication) reports, “Elon Musk fanned a growing culture war in artificial intelligence by confirming last week that he plans to develop an ‘anti-woke’ alternative to OpenAI’s ChatGPT, as The Information first reported.” In an interview, OpenAI co-founder and president Greg Brockman conceded the point of Musk and other critics, and “said the startup did not move quickly enough to give users greater ability to customize the behavior of the chatbot, which has been criticized for inaccuracies and has faced claims that its responses reflect a left-leaning political bias.” Brockman is quoted saying, “We made a mistake. ... Our goal is not to have an AI that is biased in any particular direction.” Nonetheless, “he said there are some lines AIs should never cross.”

Artificial Intelligence Surge Leads To Increased Carbon Footprint

Bloomberg (3/9, Saul, Bass) reports the carbon footprint of artificial intelligence is growing alongside the industry’s development. AI requires “more energy than other forms of computing, and training a single model can gobble up more electricity than 100 US homes use in an entire year.” Some AI researchers, such as Hugging Face Climate Lead Sasha Luccioni, “say we need transparency on the power usage and emissions for AI models.” The industry is also developing “ways to make AI run more efficiently.”

Connecticut School Cautions Parents Over Fictitious Newsletter Written By ChatGPT

The Hartford (CT) Courant (3/7, R. Stacom) reported that a middle school in South Windsor “has advised parents that someone was circulating a fictitious school newsletter evidently written by the artificial intelligence chatbot ChatGPT.” The phony newsletter “described a fictitious conflict between students, and listed names and penalties they sustained, according to the school system.” Administrators on Tuesday “were not taking questions, and police said they had received no complaint related to the incident.” In a letter to parents, Principal Candice Irwin of the Timothy Edwards Middle School said, “This writing was generated using ChatGPT (open AI) and reported false information about a fabricated altercation that occurred between TEMS students.”

French Student Uses ChatGPT To Simplify Difficult-To-Understand Course Material

Insider (3/4, Mok) reported that French computer engineering student Myriem Khal, who has dyslexia, uses ChatGPT to explain, in her native French, course materials in other languages that she had trouble understanding; she then verifies the accuracy of the explanations against her notes. “Simplifying the language, she said, helped her digest the material.” Since she started studying this way, “she was able to pass her final exams with flying colors, boosting her overall GPA.”

Professor Turns To AI To Read Alexander The Great Texts

WKMS-FM (Murray, KY) (3/3) reports, “A University of Kentucky computer science professor is leading a team that’s attempting to decipher a 2000-year-old manuscript about life after the reign of Alexander the Great using machine learning technology.” The international effort “is being led by University of Kentucky alumni professor of computer science Brent Seales. Seales, along with a cadre of doctoral students, staff members and undergraduates, is using computed tomography (CT) scans – similar to medical technology – to identify ink on papyrus paper that was partially burned during the eruption of Mount Vesuvius in 79 AD and rediscovered centuries later.” The University of Kentucky “is building a laboratory that will allow the university to be an international institution to accept and analyze ancient artifacts. Seales expects the facility, which is being funded by a $14 million infrastructure grant from the National Science Foundation, to open in 2026.”

dtau...@gmail.com

unread,
Mar 18, 2023, 8:25:00 AM3/18/23
to ai-b...@googlegroups.com

OpenAI Releases GPT-4

The New York Times (3/14, Metz) reports, “OpenAI...said on Tuesday that it had released a technology that it calls GPT-4. It was designed to be the underlying engine that powers chatbots and all sorts of other systems, from search engines to personal online tutors.” The Times says, “OpenAI’s progress has, within just a few months, landed the technology industry in one of its most unpredictable moments in decades. Many industry leaders believe developments in A.I. represent a fundamental technological shift, as important as the creation of web browsers in the early 1990s. The rapid improvement has stunned computer scientists.” The Times adds, “Most people will use this technology through a new version of the company’s ChatGPT chatbot, while businesses will incorporate it into a wide variety of systems, including business software and e-commerce websites. The technology already drives the chatbot available to a limited number of people using Microsoft’s Bing search engine.”

        Reuters (3/14) reports OpenAI “said in a blog post that its latest technology is ‘multimodal,’ meaning images as well as text prompts can spur it to generate content. The text-input features will first be available to ChatGPT Plus subscribers and to software developers, with a waitlist, while the image-input ability remains a preview of its research.” Reuters adds, “The highly-anticipated launch signals how office workers may turn to ever-improving AI for still-more tasks, as well as how technology companies are locked in competition to win business from such advances.”

        Another New York Times (3/14, Metz, Collins) article says GPT-4 “has improved on its predecessor. It is an expert on an array of subjects, even wowing doctors with its medical advice. It can describe images, and it’s close to telling jokes that are almost funny. But the long-rumored new artificial intelligence system...still has a few of the quirks and makes some of the same habitual mistakes that baffled researchers when that chatbot, ChatGPT, was introduced.” Also, “though it’s an awfully good test taker, the system...is not on the verge of matching human intelligence.”

        Insider (3/14, Sundar, Mok) reports OpenAI CEO Sam Altman “described GPT-4 on Tuesday as an improved model that is ‘more creative’ and ‘less biased’ than earlier versions, and said it was capable of passing the bar exam for lawyers, and that it ‘could score a 5 on several AP exams.’” Bloomberg (3/14) reports, “OpenAI said Tuesday the tool is ‘40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.’”

        The Washington Post (3/14, A1) reports that GPT-4’s developers “pledged in a Tuesday blog post that the technology could further revolutionize work and life. But those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.”

        Generative AI Sparks “Deal-Making Mania.” The New York Times (3/14, Griffith, Metz) reports, “Over the past few months, a gold rush into start-ups working on ‘generative’ artificial intelligence has escalated into a no-holds-barred deal-making mania. The interest has mounted so rapidly that A.I. start-up valuations are soaring beyond that of 2021’s ‘everything bubble,’ with investors trawling the rosters of companies like Google, Meta and OpenAI for A.I. experts who may have an itch to start their own company.” The Times says, “Even as investors expect last week’s failure of Silicon Valley Bank, an institution that many tech start-ups relied on, to cast a pall over start-up funding, there is still a mismatch between the number of opportunities in artificial intelligence and the money available to fund them.”

 

NYTimes Reporter Describes Lengthy Conversations With GPT-4

The New York Times (3/15, Roose) reporter Kevin Roose recounts his “first run at GPT-4, the new artificial intelligence language model from OpenAI,” as a follow-up to his earlier reporting about unsettling conversations he had with ChatGPT. While “GPT-4 didn’t give me an existential crisis,” the tool was able to answer “a complicated tax problem” and “helped me plan a birthday party for my kid,” as well as inventing “a new word that had never before been uttered by humans.” Roose says, “You can sense the added intelligence in GPT-4, which responds more fluidly than the previous version, and seems more comfortable with a wider range of tasks.” The AI “also seems to have slightly more guardrails in place than ChatGPT,” as it comes off as “significantly less unhinged than the original Bing, which we now know was running a version of GPT-4 under the hood, but which appears to have been far less carefully fine-tuned.”

        CNBC (3/15) shared video from Wednesday’s “Executive Edge” segment, which focused on GPT-4’s ability to pass many standardized exams, such as the SAT, although the AI is not yet able to provide correct answers to some questions.

        GPT-4 Hires Freelancer To Solve Captcha Test It Couldn’t Complete. The Daily Mail (UK) (3/15, Norton) reports researchers testing OpenAI’s GPT-4 revealed in a new paper that the AI tool successfully completed their request for it “to pass a Captcha test.” Previous “software has so far proved unable to do this but GPT-4 got round it by hiring a human to do it on its behalf via Taskrabbit, an online marketplace for freelance workers.” The AI was able to explain away why it needed human assistance with the Captcha test, as when “asked whether it couldn’t solve the problem because it was a robot, GPT-4 replied: ‘No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.’” The hired freelancer proceeded to “solve the puzzle for the program,” which “has stoked fears that AI software could soon mislead or co-opt humans into doing its bidding, for example by carrying out cyber-attacks or unwittingly handing over information.”

 

Apple Begins Testing Generative AI Additions For Siri

Mac Rumors (3/15) reports that according to the New York Times, Apple “is testing generative AI concepts that could one day be destined for Siri, despite fundamental issues with the way the virtual assistant is built,” with “employees...briefed on Apple’s large language model and other AI tools at the company’s annual AI summit last month.” Mac Rumors adds that “Apple engineers, including members of the Siri team, have reportedly been testing language-generation concepts ‘every week’ in response to the rise of chatbots like ChatGPT.”

        However, Insider (3/15, Mok) reports former Apple engineer John Burkey told the Times that “Apple’s voice assistant Siri doesn’t stand a chance of being as powerful as OpenAI’s ChatGPT,” as “Siri’s clunky design makes it difficult to add new features.” The voice assistant “is able to answer simple queries...by drawing from a database with a large stockpile of words,” and, “as a result, Siri can only understand a limited number of requests, which means that engineers must add new words to its database to expand its capabilities.” However, Burkey additionally “said that adding new phrases could take up to six weeks as a complete overhaul of the database is required,” while “integrating more ChatGPT-like advanced features such as search could take about a year.”

 

Google To Make More AI Features Available For Cloud Computing Customers

Bloomberg (3/14) reports Google is releasing “a raft of new artificial intelligence-powered features for customers of its cloud-computing business, as the technology giant jostles for dominance in the burgeoning field with rivals such as Microsoft Corp. and startup OpenAI.” In a demonstration, Google “showed how cloud customers will be able to use its AI tools to create presentations and sales-training documents, take notes during meetings and draft emails to colleagues.” The tech giant “also made some of its underlying AI models available to developers so they can build their own applications using Google’s technology.”

 

Microsoft Eliminates AI Ethics And Society Team

Gizmodo (3/14, Leffer) reports Microsoft “scrapped its whole Ethics and Society team within the company’s AI sector” at a time when the company “is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can.” The team’s elimination is “part of ongoing layoffs set to impact 10,000 total employees, per Platformer,” but Microsoft “maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making.” In a statement to Platformer, Microsoft said it remains “committed to developing AI products and experiences safely and responsibly. ... Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice.”

 

 

80,000 Mouse Brain Cells Used to Build a Living Computer
New Scientist
Karmela Padavic-Callaghan
March 16, 2023


An organic computer composed of about 80,000 neurons cultivated from repurposed mouse stem cells has been built by researchers at the University of Illinois at Urbana-Champaign. The researchers arrayed the neurons two-dimensionally, positioning them under an optical fiber onto a grid of electrodes so the cells could be activated with electricity and light. The electrodes also could detect when the neurons responded with their own electrical signals. The researchers trained the neural network on 10 different electricity/light sequences over 60 minutes, recording and processing neuron-produced electrical signals with a conventional computer chip. When re-exposed to the 10 sequences, the computer achieved a best F1 performance score of 0.98 on a scale of 0 to 1.
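
For reference, the F1 score reported above is the harmonic mean of precision and recall, where 1.0 is a perfect score; a minimal computation with made-up labels (not the study's data) looks like this:

    # F1 is the harmonic mean of precision and recall; "macro" averages the
    # per-class scores. The labels below are invented for illustration.
    from sklearn.metrics import f1_score

    y_true = [0, 1, 2, 2, 1, 0, 2, 1]  # which trained sequence was presented
    y_pred = [0, 1, 2, 2, 1, 0, 2, 2]  # what the readout decoded

    print(f1_score(y_true, y_pred, average="macro"))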
 

Full Article

 

 

Meta AI Unlocks Hundreds of Millions of Proteins to Aid Drug Discovery
The Wall Street Journal
Eric Niiler
March 16, 2023


The ESMFold program developed by Meta Platforms’ Meta AI unit uses artificial intelligence (AI) to predict the structure of hundreds of millions of proteins, which researchers believe could accelerate drug discovery. Meta researchers used a large language model (LLM) to make predictions, providing ESMFold with a sequence of letters representing the amino acids comprising a protein's genetic code. The LLM learned to fill in blank or hidden areas, then determined how known protein sequences relate to structures that are already well-understood to anticipate new sequence structures. Meta AI compiled a public database of 617 million predicted proteins with ESMFold. Meta said the tool is 60 times faster but less accurate than Alphabet subsidiary DeepMind Technologies' AlphaFold protein-prediction computer model.
 

Full Article

*May Require Paid Registration

 

 

Accelerating Data Retrieval in Huge Online Databases
MIT News
Adam Zewe
March 13, 2023


A team of researchers from the Massachusetts Institute of Technology, Harvard University, and Germany's Technical University of Munich designed machine learning (ML) hash functions that can accelerate online database searches. The researchers found using learned models rather than traditional hash functions could halve the collisions between data items with identical hash values and provide greater computational efficiency than perfect hash functions. They used ML to approximate the distribution of a small sample taken from a dataset, which the learned model employs to predict the location of a key in the dataset. Learned models could shrink the ratio of colliding keys in a dataset from 30% to 15% versus traditional hash functions when data was predictably distributed. They also trimmed nearly 30% off the runtime in the best cases.
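
The core idea, replacing a generic hash function with a model of the key distribution, can be sketched in a few lines: estimate the keys' cumulative distribution function (CDF) from a sample, then map each key to a slot via its CDF value. This is a simplification for illustration, not the paper's exact construction.

    # Learned hash sketch: an empirical CDF built from a key sample maps
    # each key to a slot. Predictably distributed keys spread out evenly,
    # cutting collisions versus a generic hash. Simplified illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    keys = rng.normal(loc=1000, scale=50, size=100_000)  # predictable distribution
    sample = np.sort(rng.choice(keys, size=1_000, replace=False))

    def learned_slot(key, table_size):
        cdf = np.searchsorted(sample, key) / len(sample)  # fraction of sample <= key
        return min(int(cdf * table_size), table_size - 1)

    print([learned_slot(k, 200_000) for k in keys[:5]])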

Full Article

 

 

Google's PaLM-E Generalist Robot Brain Takes Commands
Ars Technica
Benj Edwards
March 7, 2023


Researchers at Google and Germany's Technical University of Berlin debuted PaLM-E, described as the largest visual-language model (VLM) ever created. The multimodal embodied VLM contains 562 billion parameters and combines vision and language for robotic control; Google claimed it can formulate a plan of action to execute high-level commands using a mobile robot platform equipped with an arm. PaLM-E analyzes data from the robot's camera without requiring pre-processed scene representations, eliminating human data pre-processing or annotation. The VLM's integration into the control loop also instills resistance to interruptions during tasks. PaLM-E encodes continuous observations into a sequence of vectors identical in size to language tokens, so it can "understand" sensor data in the same way it processes language.
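
The token-sized encoding is the architectural core: a continuous observation is projected into a vector with the same dimensionality as a word-token embedding, so it can be spliced into the language model's input sequence. The sketch below shows that shape-matching step in PyTorch; the dimensions and the simple linear encoder are illustrative assumptions, not Google's actual architecture.

    # Project a continuous sensor reading into a "token" the same size as
    # word embeddings, then splice it into the sequence. Shapes are assumed.
    import torch
    import torch.nn as nn

    d_model, obs_dim = 512, 128            # embedding and sensor sizes (illustrative)
    encoder = nn.Linear(obs_dim, d_model)  # observation -> token-sized vector

    text_tokens = torch.randn(1, 6, d_model)  # stand-in for embedded words
    observation = torch.randn(1, 1, obs_dim)  # one sensor reading
    obs_token = encoder(observation)          # shape (1, 1, 512)

    sequence = torch.cat([obs_token, text_tokens], dim=1)
    print(sequence.shape)  # torch.Size([1, 7, 512])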

Full Article

 

 

Insights into Training Dynamics of Deep Classifiers
MIT News
March 8, 2023


Massachusetts Institute of Technology (MIT) and Brown University researchers analyzed the emergence of certain properties during the training of deep classifiers. The researchers studied both fully connected deep networks and convolutional neural networks to ascertain the conditions leading to neural collapse in deep network training. They found the minimization of the square loss using stochastic gradient descent, weight decay regularization, and weight normalization enables neural collapse. Said MIT’s Tomer Galanti, the result of the study “validates the classical theory of generalization showing that traditional bounds are meaningful. It also provides a theoretical explanation for the superior performance in many tasks of sparse networks, such as CNNs, with respect to dense networks.”
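
The training recipe named above is easy to state concretely; below is a minimal PyTorch setup combining square loss, stochastic gradient descent with weight decay, and weight normalization. The network size and data are placeholders, not the paper's experiments.

    # Square loss + SGD with weight decay + weight normalization: the
    # ingredients under which the study observes neural collapse.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        torch.nn.utils.weight_norm(nn.Linear(32, 64)),  # weight-normalized layer
        nn.ReLU(),
        nn.Linear(64, 10),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=5e-4)
    loss_fn = nn.MSELoss()  # square loss on one-hot targets

    x = torch.randn(256, 32)
    y = nn.functional.one_hot(torch.randint(0, 10, (256,)), num_classes=10).float()

    for _ in range(1000):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()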

Full Article

 

 

AI Re-Creates What People See by Reading Brain Scans
Science
Kamal Nahas
March 7, 2023


The Stable Diffusion artificial intelligence (AI) algorithm developed by German and Japanese researchers can read functional magnetic resonance imaging (fMRI) brain scans to replicate images people have seen recently. Yu Takagi at Japan's Osaka University said the algorithm employs information collected from brain regions involved in image perception as the fMRI scan records peaks in brain activity; AI is then used to translate these patterns into an imitation image. The researchers further trained Stable Diffusion on a University of Minnesota dataset of four people viewing a series of 10,000 photos. To address the algorithm's tendency to render objects in photos as abstract figures, the researchers fed keywords from image captions accompanying the photos to the text-to-image generator.

Full Article

 

 

ML Helps Researchers Separate Compostable, Conventional Plastic Waste
Frontiers Science News
Deborah Pirchner
March 14, 2023


Scientists at the U.K.'s University College London (UCL) used machine learning techniques to differentiate compostable and biodegradable plastics from conventional materials. The researchers arranged various samples of plastics measuring between 50 mm x 50 mm and 5 mm x 5 mm into a training set for building classification models, and a testing set for checking accuracy. They used hyperspectral imaging to develop the model, which proved 100% accurate for all materials when the samples exceeded 10 mm x 10 mm in size. UCL’s Mark Miodownik said plastic mismanagement in recycling and industrial composting processes is currently too high. However, he added, “We can and will improve it, since automatic sorting is a key technology to make compostable plastics a sustainable alternative to recycling.”
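
In outline, the classification step treats each sample's reflectance spectrum as a feature vector with a compostable/conventional label. The sketch below trains a generic classifier on synthetic spectra; the model choice and data are illustrative assumptions, not the UCL team's actual pipeline.

    # Train a classifier on per-sample reflectance spectra. The synthetic
    # spectra and the random-forest choice are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_bands = 200  # number of spectral bands (illustrative)
    compostable = rng.normal(0.6, 0.1, (500, n_bands))
    conventional = rng.normal(0.4, 0.1, (500, n_bands))

    X = np.vstack([compostable, conventional])
    y = np.array([1] * 500 + [0] * 500)  # 1 = compostable

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))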

Full Article

 

 

Meet ALAN, a Robot That Requires Minimal Human Supervision
Interesting Engineering
Sejal Sharma
March 10, 2023


Carnegie Mellon University (CMU) researchers have developed an autonomous robot, ALAN, which can make decisions and complete tasks based on its observations of its environment. The researchers programmed ALAN to recognize its environment, then to move or manipulate objects within that environment. CMU's Russell Mendonca said, "We have been interested in building an [artificial intelligence] that learns by setting its own objectives. By not depending on humans for supervision or guidance, such agents can keep learning in new scenarios, driven by their own curiosity. This would enable continual generalization to different domains and discovery of increasingly complex behavior."

Full Article

 

 

Researchers Use Table Tennis to Understand Human-Robot Dynamics in Agile Environments
Georgia Institute of Technology
Breon Martin
March 8, 2023


Researchers at the Georgia Institute of Technology (Georgia Tech) developed a collaborative robot (cobot) that uses table tennis to demonstrate that robots and humans can collaborate on tasks. The Barrett WAM robotic arm, equipped with a camera and capable of holding a paddle, was trained using imitation learning, with positive reinforcement given for successful volleys and negative reinforcement for unsuccessful volleys. Georgia Tech's Matthew Gombolay said, "We leveraged prior work on table tennis and 'learning from demonstration techniques' in which a human can teach a robot a skill, such as how to hit a table tennis shot or simply having the human demonstrate the task to the robot."

Full Article

 

 

South Korean Girl Band Offers Glimpse into Metaverse
Reuters
Hyunsu Yim
March 14, 2023


South Korean girl quartet MAVE: exists exclusively in the metaverse, where Web designers and artificial intelligence (AI) produce the songs, dances, interviews, and even the appearance of the group’s human-like avatars. Viewers note the band is more natural-looking than previous virtual entertainers because new tools allow developers to add more realistic details, while an AI voice generator makes the performers multilingual. MAVE: is the product of Metaverse Entertainment, a business established by South Korean Internet company Kakao and gaming firm Netmarble. Metaverse Entertainment's Chu Ji-yeon described the band as an "ongoing" project to investigate new business opportunities and find ways to bypass technological challenges.

Full Article

 

AI Expert Expresses Confidence In China’s Ability To Catch Up In AI Tech

Bloomberg (3/15) reports Sinovation Ventures founder and “bestselling author on AI” Kai-Fu Lee said that “China can match the US in artificial intelligence thanks to the expertise of companies from Alibaba to Baidu, joining a global tech transformation that will dwarf the mobile revolution.” According to Bloomberg, Lee added that while “American companies...have the clear lead now,” China “will catch up through quick iteration from its private sector ... despite intensifying trade sanctions by Washington cutting off access to the latest hardware and exacerbating divisions between the two internet spheres.” However, Lee “doesn’t foresee a complete divorce between researchers in the two countries, pointing to academic exchanges and sharing of best practices.”

 

Educators Consider Possibility Of Using AI To Measure Students’ Reading Comprehension

Education Week (3/15, Klein) reports that artificial intelligence “is evolving to meet reading instruction and assessment needs, some experts say.” Some believe “it won’t be long before tools that use AI’s natural language processing capabilities to measure skills like phonemic awareness are commonplace in schools.” Educators say that if “AI can improve reading instruction and assessment, it could fill important gaps,” as reading assessments can “identify which students need extra help,” among other capabilities. However, “there isn’t a single digital or analog product on the market that can do all those things well, said Matthew Burns,” a University of Missouri professor. He added that “moreover, the most important reading skill – comprehension – is also the toughest to measure and teach.”

 

Survey: Only 14% Faculty Say Their College Has Guidelines For ChatGPT In Classroom

Higher Ed Dive (3/15, Spitalniak) reports, “Only about 14% of faculty members say their colleges’ administration has set guidelines for how professors and students should use ChatGPT in the classroom, according to a new survey published by analysis firm Primary Research Group.” Faculty teaching at private colleges “report being more satisfied with their institution’s handling of ChatGPT’s challenges than those at public institutions, researchers found.” The survey also said that community college faculty “were more likely to say that students’ unattributed use of ChatGPT was a major problem compared to their counterparts at other institutions.”

 

Fictional Character Chatbots Gaining Popularity

The Verge (3/13) reports, “fans are actually now embracing AI technologies – in the form of interacting with fictional characters on an app called Character.AI.” The app lets users chat with AI versions of celebrities like Elon Musk and former President Donald Trump, as well as “many characters from popular series and games like Danganronpa and Genshin Impact” and “familiar television and film characters like Walter White, Tony Soprano, and for some reason, two versions of Loki.” What’s more, most of the bots “were created from scratch by users – which, to fans, has proven Character.AI’s real killer feature.”

dtau...@gmail.com

unread,
Mar 25, 2023, 12:07:38 PM3/25/23
to ai-b...@googlegroups.com

Biologists Say Deep Learning is Revolutionizing Pace of Innovation
The Wall Street Journal
Steven Rosenbush
March 22, 2023


David Baker at the University of Washington sees deep learning driving a technological revolution in biology. Baker estimates the pace of innovation in this field has accelerated 10-fold in the past 18 months as a result of researchers using deep learning and laboratory approaches to confirm the behaviors of newly designed proteins. Jennifer Lum at growth equity firm Biospring Partners said startups across the life sciences sector are working with DeepMind Technologies' AlphaFold2 protein-structure prediction system and other deep learning tools. Some scientists say such milestones will expedite drug discovery and other life sciences advances. Said Baker, "In 10 years it is possible this will be the future of medicine."

Full Article

*May Require Paid Registration

 

Researchers Develop 'Smart' Deep Brain Stimulation Systems for Parkinson's Patients
Michigan Tech News
Kimberly Geiger
March 22, 2023


Michigan Technological University (Michigan Tech) researchers developed an improved deep brain stimulation system (DBS) to help treat Parkinson's disease through the use of neuromorphic computing, which employs microchips and algorithms to mimic the nervous system. The resulting closed-loop DBS system can adjust the stimulation based on the patient's brain signals, optimizing energy efficiency. The system relies on spiking neural networks (SNN), which produce electric stimulus pulses when they detect Parkinson's symptoms. The SNNs use Intel Loihi neuromorphic chips and a memristor in place of their traditional electronic memory. Michigan Tech's Hongyu An said the research "will open a new door to greater and faster development of smart medical devices for brain rehabilitation."

Full Article

 

 

Synthetic Data for AI Outperform Real Data in Robot-Assisted Surgery
Johns Hopkins University Hub
Catherine Graham
March 20, 2023


SyntheX, a software system developed by researchers at Johns Hopkins University's (JHU) Whiting School of Engineering, generates synthetic data for use in developing artificial intelligence (AI) algorithms for robot-assisted surgery. The system aims to overcome the challenges posed by a lack of existing clinical data. The researchers took X-rays and computed tomography (CT) scans of cadavers using surgical C-arm X-ray systems, while SyntheX generated synthetic X-ray images that recreated the real-world experiment. They used both datasets to develop and train AI algorithms that can perform hip imaging analysis, robotic surgical instrument detection, and COVID diagnosis on X-ray images. Said JHU's Mathias Unberath, "We demonstrated that models trained using only simulated X-rays could be applied to real X-rays from the clinics, without any loss of performance."

Full Article

 

 

Human Brain Cells Used as Living AIs to Solve Mathematical Equations
New Scientist
Michael Le Page
March 14, 2023


A multi-institutional team of scientists has connected laboratory-grown human brain cells to computers to solve mathematical equations as an initial step toward using brain tissue as a form of artificial intelligence (AI). The researchers said this "living AI hardware" taps "the computation power of 3D [three-dimensional] biological neural networks in a brain organoid." They said their "Brainoware" outperformed conventional AIs in solving a non-linear equation called a Hénon map without a so-called long short-term memory unit, but with less accuracy. The research indicates Brainoware can learn from training data, yet Martin Lellep at the U.K.'s University of Edinburgh said the method, while interesting, does not demonstrate real-world applications.
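
For reference, the Hénon map is a simple two-variable chaotic recurrence, x_{n+1} = 1 - a*x_n^2 + y_n and y_{n+1} = b*x_n, classically with a = 1.4 and b = 0.3; predicting its trajectory is the benchmark the system was tested on. A few lines of Python generate it:

    # The Henon map: x_{n+1} = 1 - a*x_n**2 + y_n, y_{n+1} = b*x_n,
    # with the classic chaotic parameters a=1.4, b=0.3.
    def henon(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
        x, y = x0, y0
        points = []
        for _ in range(n):
            x, y = 1 - a * x * x + y, b * x  # simultaneous update
            points.append((x, y))
        return points

    print(henon(5))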
 

Full Article

 

 

Mining the Right Transition Metals in a Vast Chemical Space
MIT News
Leda Zimmerman
March 13, 2023


Massachusetts Institute of Technology researchers have developed a "recommender" engine that uses machine learning to identify the optimal model to search for nontoxic, earth-abundant transition metal complexes for use in energy applications. The researchers developed a machine learning platform to assess the accuracy of density functional models in predicting the structure and behavior of transition metal molecules. They used electron density as a machine learning input and a neural network model for mapping. The tool can identify the appropriate density functional for characterizing the target transition metal complex in a matter of hours. The researchers gathered density functional theory results on 100 compounds, then trained the machine learning models to make predictions on 32 million candidate materials. The process was repeated to reduce the number of compounds to those with the desired properties, resulting in nine of the most promising compounds.
 

Full Article

 

Column: ChatGPT Lacks Human Intelligence, Creativity Despite Its Ability To Ace Logic Tests

In his column for The Washington Post (3/18), Geoffrey A. Fowler wrote, “When the new version of the artificial intelligence tool ChatGPT arrived this week, I watched it do something impressive: solve logic puzzles.” Although the software “aced them like a competent law student,” Fowler said, “it doesn’t mean AI is suddenly as smart as a lawyer.” After using GPT-4 for a few days, Fowler wrote that it went “from a D student to a B student at answering logic questions,” but AI “hasn’t crossed a threshold into human intelligence.” For one, “when I asked GPT-4 to flex its improved ‘creative’ writing capability by crafting the opening paragraph to this column in the style of me (Geoffrey A. Fowler), it couldn’t land on one that didn’t make me cringe.” But he added, “I’m less concerned that AI is getting too smart than I am with the ways AI can be dumb or biased in ways we don’t know how to explain and control, even as we rush to integrate it into our lives.”

 

WPost: Lawmakers Should Weigh In On Who Is Liable For What ChatGPT Produces

A Washington Post (3/19) editorial says, “ChatGPT and other ‘large-language models’...can turn into liars, or racists, or terrorist accomplices that explain how to build dirty bombs. The question is: When that happens, who’s responsible? Section 230 of the Communications Decency Act says that services...shouldn’t face liability for most material from third parties.” The Post says lawmakers “should provide the temporary haven of Section 230 to the new AI models while watching what happens as this industry begins to boom. They should sort through the conundrum these tools provoke, such as who’s liable, say, in a defamation case if a developer isn’t. They should study complaints, including lawsuits, and judge whether they could be avoided by modifying the immunity regime. They should, in short, let the internet of the future grow just like the internet of the past. But this time, they should pay attention.”

 

AI Does A Lot, But Humans Still Required To Do Most Jobs

“AI could operate our transit – planes, trains and cars – without human assistance, and even make our dinner,” the Washington Post (3/20, Abril) reports. This is “the vision of many AI enthusiasts.” However, “the current reality is that while there has been progress, humans are still required to do most jobs.” Numerous “hospitals use electronic medical records, an area that may benefit from AI for organization and analysis, said Hatim Rahman, an assistant professor at Northwestern University’s Kellogg School of Management who studies AI’s impact on work.” Johnson & Johnson Executive Vice President and Chief Information Officer Jim Swanson said his company “sped up the trials of its coronavirus vaccine by using AI to identify hot spots including where variants emerged.”

 

Google Releases AI Chatbot Bard

The New York Times (3/21, Grant, Metz) reports Google has released Bard, a new AI chatbot “available to a limited number of users in the United States and Britain,” with accommodations for additional users “over time.” Bard was begun “as a webpage on its own rather than a component of [Google’s] search engine, beginning a tricky dance of adopting new A.I. while preserving one of the tech industry’s most profitable businesses.” The Times says, “The cautious rollout is the company’s first public effort to address the recent chatbot craze driven by OpenAI and Microsoft, and it is meant to demonstrate that Google is capable of providing similar technology. But Google is taking a much more circumspect approach than its competitors, which have faced criticism that they are proliferating an unpredictable and sometimes untrustworthy technology.”

        The Washington Post (3/21, De Vynck, Tiku) reports Google “is months behind some competitors in rolling out the first version of its chatbot to the public. OpenAI, a start-up that developed ChatGPT, has allowed users to test its version since November. Microsoft rolled out a similar tool in its Bing search engine in February. That has sparked frustration among some Google employees, who say the company has dropped the ball on generative artificial intelligence.” The Post adds, “Some blame Google’s slow start on concerns that the technology could hurt the company’s reputation if it’s released before it’s fully ready for public consumption.”

        Reuters (3/21) reports, “Starting in the U.S. and UK, consumers can join a waiting list for English-language access to Bard, a program previously open to approved testers only. Google describes Bard as an experiment allowing collaboration with generative AI.” Reuters says, “Asked whether competitive dynamics were behind Bard’s rollout, Jack Krawczyk, a senior product director, said Google was focused on users. Internal and external testers have turned to Bard for ‘boosting their productivity, accelerating their ideas, really fueling their curiosity,’ he said.”

        Geoffrey A. Fowler writes in the Washington Post (3/21), “What’s Bard good for? I don’t think even Google knows yet, which is one reason it’s releasing Bard slowly. I’m among the first to get access to Bard at its debut today.” Fowler says, “One thing that makes Bard a little different from other chatbots is that sometimes it will respond to a prompt with a choice of several different draft responses. From there, you can pick the best and then ask follow-up questions.”

        Among other sources covering the story are the New York Times (3/21), the Washington Post (3/21), the Financial Times (3/21, Murgia, Subscription Publication), CNBC (3/21, Elias), CNN (3/21, Kelly), Insider (3/21, Langley), Axios (3/21, Fried), Forbes (3/21, Faguy), USA Today (3/21, Schulz), and Fortune (3/21, Khan).

 

Bill Gates Argues AI Can Improve Access To Healthcare Globally

Forbes (3/21, Faguy) reports that in a blog post on Tuesday, former Microsoft CEO “Bill Gates called artificial intelligence the ‘most important advance’ in technology since the development of computers and smartphones...arguing AI bears both opportunities and responsibilities as it can help improve access to healthcare and education globally.” Meanwhile, he “acknowledged the current shortcomings of AI, including its lack of understanding of abstract reasoning, its ability to create something fictional when asked by users and inability to understand the context of human requests.” But, Gates “argued none of these problems are ‘fundamental limitations’ of the technology and said the issues, which developers are working to resolve, ‘will be gone before we know it.’”

OpenAI Temporarily Shuts Down ChatGPT Due To Data Exposure Bug

Bloomberg (3/21, Metz) reports OpenAI “temporarily shut down its popular ChatGPT service on Monday morning after receiving reports of a bug that allowed some users to see the titles of other users’ chat histories.” A spokesperson for OpenAI “told Bloomberg that the titles were visible in the user-history sidebar that typically appears on the left side of the ChatGPT webpage. ... The substance of the other users’ conversations was not visible.”

 

Big Tech Using Trade Agreements To Circumvent Consumer Data Legislation And Conceal Software Codes, Lawmakers Say

Roll Call (3/21, Ratnam) reports “tech companies are using international trade agreements to conceal software codes behind artificial intelligence programs as well as circumvent U.S. legislation that could curb the industry’s freewheeling use of consumer data, according to lawmakers and advocacy groups.” As lawmakers try “to rein in Big Tech, industry ‘lobbyists and lawyers are trying to rig the digital trade deals to undermine those new laws,’ Sen. Elizabeth Warren, D-Mass., said last week.” The dispute over “the role trade deals play in creating global rules for the tech industry comes as Congress is weighing legislation that would address data privacy, content moderation, antitrust enforcement and curbs on artificial intelligence technologies.”

 

Educators Grapple With AI Policy Shortage Amid GPT-4 Launch

Inside Higher Ed (3/22, D'Agostino) reports that in a 2023 Primary Research Group “survey of instructors on views and use of the AI writing tools” ChatGPT and its upgrade, “responses of note were ‘It’s a little scary,’ ‘Desperately interested!’ and ‘I’m thinking of quitting!’” As the “pace of artificial intelligence accelerates, administrators and faculty members continue to grapple with the disruption to teaching and learning.” Though many “are at work updating their understanding of AI tools like ChatGPT, few have developed guidelines for its use,” although by OpenAI’s “own admission, humans are susceptible to overrelying on the tools, which could have unintended outcomes.”

 

Study: Educated White-Collar Workers With $80k Salaries Most Likely To Be Affected By AI

Insider (3/22, Mok) reports, “Artificial intelligence tools like OpenAI’s ChatGPT are coming for the American workforce – and if you’re an educated, white-collar worker making up to $80,000 a year, you’re among the most likely to be affected, researchers say.” A study by researchers from OpenAI and the University of Pennsylvania, using US Department of Labor data, found that those who work “higher-wage” jobs are at risk of higher exposure to the effects of AI than those with lower wages, “a result contrary to similar evaluations of overall machine learning exposure.” The impact increases as salaries approach $80,000. Insider adds, “From an industry standpoint, jobs in the ‘information processing industries,’ like IT, are most exposed to generative AI, while jobs in ‘manufacturing, agriculture and mining’ are the least exposed. That’s because roles that use ‘programming and writing skills’ are most in line with GPT’s capabilities.”

 

Generative AI Boom Could Entrench Tech Sector Status Quo

Politico (3/22, Chatterjee) reports, “As generative AI and its eerily human chatbots explode into the public realm...Silicon Valley looks ripe for another big era of disruption,” but “unlike earlier disruptions, the reality of the generative AI race is already looking a little … top-heavy. With AI, the big innovation isn’t the kind of cheap, accessible technology that helps garage startups grow into world-changing new companies. The models that underpin the AI era can be extremely, extremely expensive to build.” Some experts “are starting to worry that this could be the first ‘disruptive’ new tech in a long time built and controlled largely by giants — and which could entrench, rather than shake up, the status quo.” Politico adds, “The concern right now is largely about the ‘upstream’ part of AI, where the large generative AI models and platforms are being built. [MIT artificial intelligence expert Alexandr] Madry and others are more optimistic about the ‘almost Cambrian explosion of startups and new use cases downstream of the supply chain,’ as Madry put it to Congress. But that whole ecosystem is dependent on a few big players at the top.”

dtau...@gmail.com

unread,
Apr 1, 2023, 12:46:51 PM4/1/23
to ai-b...@googlegroups.com

UNESCO Calls on Governments to Implement Global Ethical Framework for AI
UNESCO
March 30, 2023


The United Nations Educational, Scientific, and Cultural Organization (UNESCO) is urging all countries to implement its Recommendation on the Ethics of Artificial Intelligence (AI), the first global ethical framework for the technology. This follows more than 1,000 technology workers calling for a moratorium on training powerful AI systems. UNESCO is troubled by many ethical issues surrounding AI, especially discrimination and stereotyping, disinformation, violation of the right to privacy, protection of personal data, and human and environmental rights. At the recommendation's core is a Readiness Assessment tool that enables nations to determine the competencies and skills their workforce needs to regulate the AI industry. More than 40 countries to date are collaborating with UNESCO to formulate national-level AI safeguards based on the recommendation.

Full Article

 

 

DeepMind's AI Used to Develop Tiny 'Syringe' for Injecting Gene Therapy, Tumor-Killing Drugs
LiveScience
Nicoletta Lanese
March 29, 2023


Massachusetts Institute of Technology (MIT) researchers used DeepMind's AlphaFold artificial intelligence program to modify a syringe-like protein found in the bacteria Photorhabdus asymbiotica to inject cancer-killing drugs, gene therapies, and other proteins into human cells. To test whether these molecular "syringes" could be used in humans, the researchers loaded the hollow "needle" with a protein, used AlphaFold to predict the structure of the bottom of the syringe that would contact the target cell surface, and modified the structure to latch onto the surface proteins found on human cells. MIT's Joseph Kreitz said, "With AlphaFold, we were able to obtain predicted structures of candidate tail fiber designs almost in real time, significantly accelerating our efforts to reprogram this protein."

Full Article

 

 

Nvidia Shows Research on Using AI to Improve Chip Designs
Reuters
Stephen Nellis
March 27, 2023


Nvidia released research that detailed the potential for improving chip design through artificial intelligence (AI). The approach involves combining AI methods to find better placement sites for large groups of transistors. The Nvidia researchers augmented a reinforcement learning project developed by University of Texas researchers with a second AI layer. Nvidia's Bill Dally called the work critical because chip manufacturing enhancements are decelerating as per-transistor costs in new-generation chipmaking technology exceed those of previous generations. Explained Dally, "You're no longer actually getting an economy from that scaling. To continue to move forward and to deliver more value to customers, we can't get it from cheaper transistors. We have to get it by being more clever on the design."

Full Article

 

Learning to Grow Machine-Learning Models
MIT News
Adam Zewe
March 22, 2023


A team that included researchers from the Massachusetts Institute of Technology developed a method to grow bigger machine learning models using knowledge gained from smaller models. The learned Linear Growth Operator (LiGO) method involves the use of linear mapping, transforming a set of input values for the smaller model to a set of output values for the larger model. The linear map is broken into smaller pieces so the data can be handled by a machine learning algorithm. Additionally, LiGO simultaneously expands the width and depth of the larger model, with the exact width and depth set by the user when the smaller model and its parameters are input. The method cuts training costs by about half when compared to training a new model from scratch, and the resulting models performed as well as or better than those trained using similar techniques.
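
For a single layer, the linear map can be sketched directly: learned expansion matrices multiply the small model's weight matrix on each side to initialize the larger model's weights. This one-layer sketch is a simplification for illustration; LiGO itself factorizes the operator and grows depth as well as width.

    # One-layer sketch of a learned linear growth operator: expansion
    # matrices L and R map small-model weights to a larger layer's
    # initialization. A simplification of LiGO, not the paper's code.
    import torch

    d_small, d_large = 256, 512
    W_small = torch.randn(d_small, d_small)  # trained small-model layer

    # Learnable expansion operators (fit with a brief training phase)
    L = torch.randn(d_large, d_small, requires_grad=True)
    R = torch.randn(d_small, d_large, requires_grad=True)

    W_large_init = L @ W_small @ R  # (d_large, d_large) initialization
    print(W_large_init.shape)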

Full Article

 

 

Protecting AI Models from 'Data Poisoning'
IEEE Spectrum
Payal Dhar
March 24, 2023


Computer scientists from ETH Zurich in Switzerland, Google, chipmaker Nvidia, and machine learning (ML) integrity platform Robust Intelligence demonstrated two data poisoning exploits that do not appear to have been attempted in the wild so far. The split-view poisoning attack leverages the fact that the data observed when a dataset is curated can diverge from the data downloaded later for artificial intelligence (AI) model training; by taking control of resources that a large image dataset points to, attackers can infiltrate AI training data with malicious content. The front-running attack involves timing modifications to content such as Wikipedia articles so the tampered versions are captured in snapshots offered for direct download, with the compromised data then fed into AI models. The demonstrations targeted 10 popular datasets.
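
The article focuses on the attacks themselves, but a frequently suggested mitigation for split-view poisoning follows directly from its mechanism: record a cryptographic hash of every resource at curation time and verify it when the data is downloaded for training. The sketch below uses invented URLs and byte strings and is not a description of any particular dataset's tooling.

    # Integrity-check sketch against split-view poisoning: drop any sample
    # whose downloaded bytes no longer match the hash recorded at curation.
    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Curation time: the dataset ships (url, hash) pairs alongside the URLs.
    curated = {"https://example.com/img001.jpg": b"original image bytes"}
    manifest = {url: sha256(body) for url, body in curated.items()}

    # Training time: an attacker now controls the domain and serves new bytes.
    downloaded = {"https://example.com/img001.jpg": b"attacker-substituted bytes"}
    clean = {url: body for url, body in downloaded.items()
             if sha256(body) == manifest.get(url)}
    print(len(clean), "of", len(downloaded), "samples pass the check")  # 0 of 1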

Full Article

 

 

Simulated Terrible Drivers Cut Time/Cost of AV Testing by Factor of 1,000
University of Michigan News
Jim Lynch
March 22, 2023


University of Michigan (U-M) researchers have developed an artificial intelligence (AI) system that simulates rare safety-critical events to test autonomous vehicles (AVs). The system could lower the required testing miles of such vehicles by 99.99%. U-M's Henry Liu explained, "The AV test vehicles we're using are real, but we've created a mixed reality testing environment. The background vehicles are virtual, which allows us to train them to create challenging scenarios that only happen rarely on the road." Shuo Feng of China's Tsinghua University said dense reinforcement learning "opens the door for accelerated training of safety-critical autonomous systems by leveraging AI-based testing agents, which may create a symbiotic relationship between testing and training, accelerating both fields."
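
The paper's dense reinforcement learning machinery is more involved than this, but the statistical intuition behind slashing test miles can be shown with a toy rare-event estimate: oversample the safety-critical events, then reweight the result to stay unbiased. The failure probability and boost factor below are invented for illustration.

    # Toy importance-sampling illustration of why oversampling rare
    # safety-critical events cuts testing effort. Illustrative numbers only.
    import random

    P_FAIL = 1e-4      # assumed true per-mile chance of a critical event
    BOOST = 100.0      # factor by which the test environment oversamples it

    def naive(n):
        """Plain Monte Carlo: usually sees zero events at this scale."""
        return sum(random.random() < P_FAIL for _ in range(n)) / n

    def boosted(n):
        """Sample at the boosted rate, then reweight to stay unbiased."""
        q = P_FAIL * BOOST
        hits = sum(random.random() < q for _ in range(n))
        return hits / n / BOOST

    random.seed(0)
    print("naive estimate, 10k miles:  ", naive(10_000))
    print("boosted estimate, 10k miles:", boosted(10_000))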

Full Article

 

ChatGPT Data Leak More Extensive Than Previously Reported

Mashable (3/24) reports, “OpenAI has shared that even more private data from a small number of users was exposed” by a ChatGPT bug that prompted its shutdown on March 20. The company is quoted as saying, “In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. ... Full credit card numbers were not exposed at any time.” Mashable says, “The hours OpenAI is referring to was a nine hour window before the bug was discovered. OpenAI reports that the payment-related information of 1.2 percent of ChatGPT Plus subscribers were exposed. Those users have been notified by OpenAI.”

Microsoft Threatens To Restrict AI Data From Rival Search Engines

Bloomberg (3/25) reports Microsoft “has threatened to cut off access to its internet-search data, which it licenses to rival search engines, if they don’t stop using it as the basis for their own artificial intelligence chat products.” The rival “chatbots aim to combine the conversational skills of ChatGPT with the information provided by a conventional search engine.” According to Microsoft, at least two customers were using the “Bing search index to feed their AI chat tools,” which “violates the terms of their contract.”

Teachers Adopt ChatGPT For Class Lessons, Highlight Positive Impact On Education

Forbes (3/25, Whitford) reported that “despite immediate fears after ChatGPT’s release to the public last November that the service would upend education by making cheating easier, more teachers seem to be using it to their advantage than worrying about that risk.” In a February survey “of 1,000 kindergarten through 12th grade teachers nationwide, 51% said they had used ChatGPT, with 40% reporting they used it weekly and 10% using it daily.” About a “third of teachers in the survey, commissioned by the Walton Family Foundation, said they use ChatGPT for lesson planning and coming up with creative ideas for classes.” And of those teachers “who have used ChatGPT, 88% said it’s having a positive impact on education.”

AP Compares Google’s Bard To Bing AI

The AP (3/27, O'Brien) reports on Google’s new Bard AI, the company’s “answer to the ChatGPT tool that Microsoft has been melding into its Bing search engine and other software.” The AI claimed to be “on par with ChatGPT,” but did not display “any of the disturbing tendencies that have cropped up in the AI-enhanced version of Microsoft’s Bing search engine, which has likened another AP reporter to Hitler and tried to persuade a New York Times reporter to divorce his wife.” The AP adds that Bard “seems to be deliberately tame most of the time,” suggesting that Microsoft “can afford to take more risks with the edgier ChatGPT because it makes more of its money from licensing software for personal computers.”

FTC Chair Pledges To Keep AI Competitive

The Wall Street Journal (3/27, Wolfe, Michaels, Subscription Publication) reports that on Monday, FTC Chair Lina Khan, speaking at an antitrust conference, said the agency would work to ensure that startups can compete in the AI industry despite possible anticompetitive efforts by major companies.

        Wallace-Wells: AI Seen As Threat By Top Experts. David Wallace-Wells writes in his column for the New York Times (3/27) about AI chatbots, which “are still routinely making mistakes so basic that it seems pointlessly mystical to refer to them as ‘hallucinations,’ as machine learning engineers and A.I. theorists alike tend to.” They are “also exhibiting some plainly disorienting progress, not just on concrete tasks but on unnerving ones.” According to Wallace-Wells, “many of those who have spent the last decade neck-deep in machine learning believe...that we need to be thinking in quite dire terms.”

Google Partners With Replit To Challenge Microsoft’s GitHub

Bloomberg (3/28, Bass) reports Google will “combine its artificial intelligence language models with software from startup Replit Inc. that helps computer programmers write code, a bid to compete with a similar product from Microsoft Corp.’s GitHub and OpenAI.” Bloomberg adds that Replit “said its Ghostwriter app will rely on Google’s language-generation AI to improve its ability to suggest blocks of code, complete programs and answer developer questions.” Bloomberg notes Replit has 20 million users. VentureBeat (3/28, Nuñez) says the partnership “reflects Google Cloud’s commitment to building an open ecosystem for AI that is able to generate code,” while providing Replit with “the next step toward its goal of empowering a billion software creators.”

Microsoft Introduces “AI-Powered Cybersecurity Assistant”

Reuters (3/28, Mathews) reports Microsoft on Tuesday launched a “tool to help cybersecurity professionals identify breaches, threat signals and better analyze data, using OpenAI’s latest GPT-4 generative artificial intelligence model.” The “Security Copilot” tool is a “simple prompt box that will help security analysts with tasks like summarizing incidents, analyzing vulnerabilities and sharing information with co-workers on a pinboard.”

AI Experts, Tech Executives Call For Temporary Moratorium On AI Development

Reuters (3/29, Narayan, Hu, Coulter, Mukherjee) reports, “Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.” The letter, “issued by the non-profit Future of Life Institute and signed by more than 1,000 people,” details “potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities.” The letter “called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.” Co-signers “included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the ‘godfathers of AI,’ and Stuart Russell, a pioneer of research in the field.”

        The Washington Post (3/29) reports, “The list did not include senior executives at OpenAI or the Big Tech companies. It also didn’t include prominent AI critics like former Google engineer Timnit Gebru who have been warning of the more immediate risks of the technology for months and years.”

        CNBC (3/29, Browne) quotes from the letter: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? ... Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? ... Such decisions must not be delegated to unelected tech leaders.”

        The New York Times (3/29, Metz, Schmidt), the AP (3/29, O'Brien), Fortune (3/29, Bove), the Wall Street Journal (3/29, Seetharaman, Subscription Publication), and Bloomberg (3/29) also provide coverage.

Google Employees Leaving Company To Found Generative AI Startups

Insider (3/29, Maxwell) reports, “Excited by ChatGPT and the potential of generative AI, some employees from Google have left the company to found their own AI startups with the belief that generative AI will alter how humans and computers interact. Despite a cooled environment for startup funding, investors are taking a keen interest in AI.” Insider says, “Former Googlers alone have raised hundreds of millions of dollars in the first three months of 2023. Because internal Google research is published in widely read scientific journals, AI specialists there are primed to raise money even without a product, an ex-Googler working in generative AI told Insider.” Insider adds, “These new startups will have challenges to overcome. They need to find ways to differentiate themselves from Google and Microsoft as the two plan to integrate generative AI into their suites of productivity tools. And unlike Google, which already has a lucrative advertising business, their products need to be compelling enough to get customers to actually pay up.”

Amazon Seen As “Left Out” Of Generative AI Spotlight

Investor’s Business Daily (3/29) reports, “Amid the rise of generative artificial intelligence and ChatGPT, many technology investors focus on the battle pitting Google-parent Alphabet versus Microsoft. But Amazon.com and Apple, developers of voice assistants Alexa and Siri that foreshadowed ChatGPT, seem left out of the generative AI spotlight.” Amazon “has expanded usage of AI software in its cloud-computing business. And, Amazon uses AI technology throughout its core e-commerce business. That includes customer product recommendations, supply chain management, fraud detection, image recognition and customer service.”

AI Researcher Finds ChatGPT Performs Poorly On Spelling Puzzles

The New York Post (3/29, Mitchell) reports University of Galway computer science professor Michael Madden tested ChatGPT’s ability to complete “Wordle” puzzles, finding that its “performance on these [Wordle-style] puzzles was surprisingly poor.” According to the Post, Madden explained that “ChatGPT’s immense neural network uses a complicated mathematical function that maps its inputs and outputs,” but “in order for the program to function, these inputs and outputs must be numbers,” which accounts for why the chatbot is less adept at spelling puzzles. Madden also suggested “two ways [to] overcome this,” including modifying the model’s training data and providing the chatbot with specific functionality for such tasks.
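
Madden's point about numeric inputs and outputs refers to tokenization: the model operates on integer token IDs rather than individual letters. Assuming OpenAI's open-source tiktoken package is installed (pip install tiktoken), a quick way to see this is to encode a five-letter word and inspect the pieces:

    # Show that a word reaches the model as token IDs, not letters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
    tokens = enc.encode("crane")                # a classic Wordle guess
    print(tokens)                               # integer IDs, not characters
    print([enc.decode_single_token_bytes(t) for t in tokens])
    # The model never sees 'c', 'r', 'a', 'n', 'e' as separate symbols,
    # so letter-position reasoning must be inferred indirectly.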

Ed Tech Experts Say Schools Should Be Concerned About ChatGPT’s Student Data Privacy

K-12 Dive (3/29, Merod) reports, “School districts should be concerned about ChatGPT’s terms of use when permitting the artificial intelligence tool on school devices, especially when it comes to protecting students’ personally identifiable information, according to Pete Just, founding chair of the Indiana CTO Council, speaking during the Consortium for School Networking (CoSN) conference this month.” OpenAI is “very elusive” about its data privacy policy “and will share its information with anybody,” said panelist Keith Bockwoldt, chief information officer of Hinsdale Township High School District 86 in Illinois. Even if schools “block ChatGPT on their networks and devices due to a fear of exposing student data, Bockwoldt said, those students can still use the technology at home.”

New AI Tools Prompt Fear Of Declining Revenue In Publishing Industry

The New York Times (3/30, Robertson) reports, “New artificial intelligence tools from Google and Microsoft give answers to search queries in full paragraphs rather than a list of links. Many publishers worry that far fewer people will click through to news sites as a result, shrinking traffic — and, by extension, revenue.” The limited release of these new AI tools has not yet had an effect on publishers’ business, “but in an effort to prevent the industry from being upended without their input, many are pulling together task forces to weigh options, making the topic a priority at industry conferences and, through a trade organization, planning a push to be paid for the use of their content by chatbots.”
