Dr. T's AI brief


Daniel Tauritz

Mar 13, 2021, 7:49:11 AM
to ai-b...@googlegroups.com

AI Panel Urges U.S. to Boost Tech Skills Amid China's Rise
Associated Press
Matt O'Brien
March 1, 2021


The National Security Commission on Artificial Intelligence (AI) issued its final report to Congress on March 1, calling on the U.S. to enhance its AI skills as a means of countering China. The 15-member commission—which includes executives from Google, Microsoft, Oracle, and Amazon—indicated that, with or without the U.S. and other democracies, machines that "perceive, decide, and act more quickly" and more accurately than humans will be used for military purposes. Despite warning against their unchecked use, the report does not support a global ban on autonomous weapons. It also recommends "wise restraints" on the use of facial recognition and other AI tools that could be used for mass surveillance, and calls for a "White House-led strategy" to defend against AI-related threats, set standards for the responsible use of AI, and increase research and development to maintain a technological edge over China.

Full Article

 

 

AI Can Write a Passing College Paper in 20 Minutes
ZDNet
Greg Nichols
February 24, 2021


Researchers at Education Reference Desk (EduRef), a resource for current and prospective students, found that an artificial intelligence (AI) tool can write a college term paper in three to 20 minutes and achieve a passing grade. Humans, in contrast, took three days on average to complete the same assignment. The researchers had a panel of professors grade anonymous submissions to writing prompts from recent graduates, undergraduate-level writers, and OpenAI's GPT-3, a deep learning language prediction model. The professors gave GPT-3 an average grade of "C" in four subjects, and it failed just one assignment. Said the researchers, "Even without being augmented by human interference, GPT-3's assignments received more or less the same feedback as the human writers."

Full Article

 

 

AI Here, There, Everywhere
The New York Times
Craig S. Smith
February 23, 2021


Researchers anticipate increasingly personalized interactions between humans and artificial intelligence (AI), and are refining the largest and most powerful machine learning models into lightweight software that can operate in devices like kitchen appliances. Privacy remains a sticking point, and scientists are developing techniques to use people's data without actually viewing it, or to protect it with currently unhackable encryption. Some security cameras currently use AI-enabled facial recognition software to identify frequent visitors and spot strangers, but networks of overlapping cameras and sensors can result in ambient intelligence that constantly monitors people. Stanford University's Fei-Fei Li said such ambient intelligence "will be able to understand the daily activity patterns of seniors living alone, and catch early patterns of medically relevant information," for example.
 

Full Article

*May Require Paid Registration

 

 

Your Data is a Weapon That Can Help Change Corporate Behavior
Fortune
Jonathan Vanian; Jeremy Kahn
February 23, 2021


A study by Northwestern University researchers highlighted how the public can influence companies they think are misusing their data or engaging in unethical behavior. The study concerns data leverage, the power consumers have over firms that rely on machine learning (ML) software; people can impact businesses by changing their online behavior, like discontinuing use of such software. Northwestern's Nicholas Vincent said if people stop using a certain artificial intelligence-powered application, the app will lose the data needed for it to learn properly. Consumers also can collaboratively infect an ML system through atypical behavior, a practice known as data poisoning, and undermine the software's performance. Vincent said the research aims to show the public how their online behavior influences the AI systems of powerful technology companies.
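
To make the data-poisoning idea concrete, here is a minimal sketch (not the Northwestern study's code) in which coordinated label flips by a fraction of users measurably degrade a simple classifier; the dataset and model are synthetic stand-ins:

# Minimal data-poisoning illustration: coordinated label flips in the
# training data degrade a simple classifier. Synthetic stand-in setup,
# not the Northwestern study's methodology.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for frac in [0.0, 0.1, 0.3]:
    y_poisoned = y_tr.copy()
    flip = rng.choice(len(y_tr), size=int(frac * len(y_tr)), replace=False)
    y_poisoned[flip] = 1 - y_poisoned[flip]        # users acting atypically
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"poisoned fraction {frac:.0%}: test accuracy {acc:.3f}")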

Full Article

*May Require Paid Registration

 

 

Rethinking Microchips' Design Pushes Computing to the Edge
Princeton Engineering News
John Sullivan
February 24, 2021


Princeton University researchers are developing co-designed hardware and software that will enable designers to incorporate specialized artificial intelligence (AI) systems within applications. Princeton's Naveen Verma said, "To make AI accessible to the real-time and often personal process all around us, we need to address latency and privacy by moving the computation itself to the edge. And that requires both energy efficiency and performance." The researchers engineered a new microchip to enhance neural-network performance, which outperformed other advanced chips 10- to 100-fold; the team then refined the chip and designed software that would allow AI systems to leverage its speed and efficiency. The software is designed to seamlessly combine digital and analog systems, which Verma said complement each other.

Full Article

 

 

ML Aids in Simulating Dynamics of Interacting Atoms
Los Alamos National Laboratory News
February 23, 2021

Los Alamos National Laboratory researchers were able to simulate the motions of atoms in materials like aluminum using a new machine learning approach. The researchers developed an automated "active learning" methodology for iteratively building a training dataset. The method employs the current best machine learning model at each iteration, with new reference data collected through quantum simulations as new physical situations are encountered in order to retrain the model. The data collected includes different types of atomic configurations, including various crystal structures and defect patterns within crystals. Los Alamos' Justin Smith said, "Simulating the dynamics of interacting atoms is a cornerstone of understanding and developing new materials. Machine learning methods are providing computational scientists new tools to accurately and efficiently conduct these atomistic simulations."
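
A toy version of that loop (query-by-committee with an ensemble, and a cheap true_energy function standing in for the laboratory's expensive quantum reference simulations; all parameters invented) might look like this:

# Toy active-learning loop: retrain a committee of regressors, then
# collect a new "reference" label wherever the committee disagrees most.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def true_energy(x):                    # stand-in for a quantum simulation
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X_pool = np.linspace(-2, 2, 400).reshape(-1, 1)   # candidate configurations
X_train = rng.uniform(-2, 2, 5).reshape(-1, 1)    # small initial dataset
y_train = true_energy(X_train).ravel()

for iteration in range(10):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)
    # Disagreement among the trees approximates model uncertainty.
    per_tree = np.stack([t.predict(X_pool) for t in model.estimators_])
    new_config = X_pool[per_tree.std(axis=0).argmax()].reshape(1, -1)
    # Run the expensive reference calculation only where it is needed.
    X_train = np.vstack([X_train, new_config])
    y_train = np.append(y_train, true_energy(new_config).ravel())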
 

Full Article

 

 

Tool Allows Computer Scientists to Support Life Scientists
Nevada Today
February 25, 2021

University of Nevada, Reno computer scientist Tin Nguyen and colleagues have developed a novel machine learning (ML) tool to support the research of life scientists. ScDHA (single-cell Decomposition using Hierarchical Autoencoder) employs ML to process data generated by single-cell sequencing. Nguyen said his team engineered a new non-negative kernel autoencoder to remove genes insignificant to cell differentiation, then developed a stacked Bayesian autoencoder to convert the data of tens of thousands of dimensions to a space with just 15 dimensions. Finally, the researchers crafted techniques for visualizing the transcriptome landscape, segregating cells into distinct types, inferring cellular developmental trajectory, and classifying the cells. Said Nguyen, "Without tools such as ours, it is impossible for life scientists to mine information from such large and high-dimensional data."
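
As a rough sketch of that dimensionality-reduction step, a plain autoencoder compressing expression profiles to a 15-dimensional latent space is shown below (illustrative only; scDHA's non-negative kernel and stacked Bayesian autoencoders differ, and the data and layer sizes are invented):

# Plain autoencoder sketch: compress high-dimensional expression
# profiles to 15 latent dimensions. Illustrative stand-in for scDHA.
import torch
import torch.nn as nn

n_genes = 10000                      # dimensionality after gene filtering

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                     nn.Linear(256, 15))   # 15-dim latent
        self.decoder = nn.Sequential(nn.Linear(15, 256), nn.ReLU(),
                                     nn.Linear(256, n_genes))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, n_genes)          # stand-in for a cell-by-gene matrix
for _ in range(100):
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x)   # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
# z now holds 15-dimensional embeddings for clustering and visualization.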
 

Full Article

 

 

EU Report Warns AI Makes Autonomous Vehicles 'Highly Vulnerable' to Attack
VentureBeat
Khari Johnson
February 22, 2021


A report by the European Union Agency for Cybersecurity (ENISA) describes autonomous vehicles as "highly vulnerable to a wide range of attacks" that could jeopardize passengers, pedestrians, and people in other vehicles. The report identifies potential threats to self-driving vehicles as including sensor attacks with light beams, as well as adversarial machine learning (ML) hacks. With growing use of artificial intelligence (AI) and the sensors that power autonomous vehicles offering greater potential for attacks, the researchers advised policymakers and businesses to foster a security culture across the automotive supply chain, including third-party providers. The researchers suggested AI and ML systems for autonomous vehicles “should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate."

Full Article

 

 

ML Predicts Where New Coronaviruses Might Develop Next
Voice of America News
February 17, 2021


A machine learning study by researchers at the U.K.'s University of Liverpool identified mammals that could be future coronavirus carriers, which Liverpool's Maya Wardeh said "could offer insights into where viral recombination might occur." The machine learning system analyzed 411 different coronavirus versions and 876 possible mammal species in order to identify the mammals most likely to be co-infected and carry new strains of the virus. The outcomes implied there were about 11 times as many connections between mammal species and coronavirus versions as had been identified previously, and 40 times more mammal species that can be infected "with four or more different [kinds] of coronaviruses than have been observed to date."

Full Article

 

Daniel Tauritz

Mar 15, 2021, 4:21:44 PM
to ai-b...@googlegroups.com

How to Spot Deepfakes? Look at Light Reflection in the Eyes
UB News Center
Melvin Bankhead III
March 10, 2021


A tool developed by University at Buffalo computer scientists can automatically identify deepfake photos of people by analyzing light reflections in their eyes for minute deviations. The tool exploits the fact that most artificial intelligence (AI)-generated images cannot accurately or consistently reflect the image of what the pictured person is seeing, possibly because many photos are combined to create the fake image. The tool first maps out each face, then analyzes the eyes, the eyeballs, and finally the light reflected in each eyeball. The tool was 94% effective in spotting deepfakes among portrait-like photos taken from actual images in the Flickr Faces-HQ dataset, as well as fake AI-generated faces from the www.thispersondoesnotexist.com repository.
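
A crude sketch of the consistency check might look like the following, with OpenCV's bundled Haar eye detector and an intersection-over-union rule standing in for the Buffalo tool's actual pipeline (the image path and thresholds are invented):

# Compare the bright specular highlights in the two corneas; real faces
# tend to show matching highlights, GAN-generated faces often do not.
import cv2
import numpy as np

def highlight_mask(eye_bgr):
    gray = cv2.cvtColor(cv2.resize(eye_bgr, (64, 64)), cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 220, 1, cv2.THRESH_BINARY)  # bright pixels
    return mask

def reflections_consistent(eye_a, eye_b, iou_threshold=0.5):
    a, b = highlight_mask(eye_a), highlight_mask(eye_b)
    union = np.logical_or(a, b).sum()
    iou = np.logical_and(a, b).sum() / union if union else 0.0
    return iou >= iou_threshold

img = cv2.imread("face.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eyes = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml").detectMultiScale(gray)
if len(eyes) >= 2:
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eyes, key=lambda e: e[0])[:2]
    print("consistent:", reflections_consistent(
        img[y1:y1+h1, x1:x1+w1], img[y2:y2+h2, x2:x2+w2]))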
 

Full Article

 

 

Algorithm Helps AI Systems Dodge 'Adversarial' Inputs
MIT News
Jennifer Chu
March 8, 2021


Massachusetts Institute of Technology (MIT) researchers have developed a deep learning algorithm designed to help machines navigate real-world environments by incorporating a level of skepticism about received measurements and inputs. The team paired a reinforcement-learning algorithm with a deep neural network, each used separately to train computers in playing games like Go and chess, to support the Certified Adversarial Robustness for Deep Reinforcement Learning (CARRL) approach. CARRL outperformed standard machine learning techniques in tests using simulated collision avoidance and the videogame Pong, even when confronted with adversarial inputs. MIT's Michael Everett said, "Our approach helps to account for [imperfect sensor measurements] and make a safe decision. In any safety-critical domain, this is an important approach to be thinking about."
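
The core idea, acting on worst-case rather than nominal value estimates, can be sketched as follows. CARRL derives certified bounds on the Q-values; this toy instead approximates the worst case by sampling perturbations, and its q_values function is a stand-in for a trained network:

# Robustness-aware action selection: pick the action whose value is
# best under the worst observed perturbation of the input.
import numpy as np

def q_values(obs):                     # stand-in for a trained Q-network
    return np.array([-(obs[0] - 1.0)**2, -(obs[0] + 1.0)**2])

def robust_action(obs, epsilon=0.2, n_samples=256, seed=0):
    rng = np.random.default_rng(seed)
    perturbed = obs + rng.uniform(-epsilon, epsilon, (n_samples, len(obs)))
    worst_case_q = np.stack([q_values(p) for p in perturbed]).min(axis=0)
    return int(worst_case_q.argmax())  # best action under the worst case

print(robust_action(np.array([0.1])))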
 

Full Article

 

 

Facebook Researchers Report Advance in Computer Vision
The Wall Street Journal
John McCormick
March 4, 2021


A software toolkit developed by Facebook's artificial intelligence (AI) research arm could enable companies to create highly accurate computer vision software more quickly. Facebook AI's Vissl toolkit leverages self-supervised learning, in which AI models train themselves on large datasets without external labels. Facebook's Yann LeCun said the techniques "allow you to basically reduce the amount of labeled data that is required to reach reasonable performance." Gartner's Carlton Sapp said the time required to build computer vision systems potentially could be halved using such self-supervised learning methods. LeCun, named 2018 ACM A.M. Turing Award laureate for his work on deep neural networks, said the technique also will boost the accuracy of computer vision systems by allowing analysis of more items in an image. In tests on the ImageNet database, Facebook's techniques achieved 85% accuracy, compared to 80% for computer vision systems trained with supervised learning.

Full Article

*May Require Paid Registration

 

 

Using Machine Learning to Develop Personalized Cancer Vaccines
University of Waterloo Cheriton School of Computer Science (Canada)
March 11, 2021

Researchers at Canada's University of Waterloo Cheriton School of Computer Science are applying machine learning to identify tumor-specific neoantigens, which could lead to personalized cancer vaccines. Cheriton's Hieu Tran said the team used a model similar to natural language processing to ascertain neoantigens' amino acid sequences based on one-letter amino acid codes. The researchers used the DeepNovo recurrent neural network to predict amino acid sequences, which Tran said expanded the predicted immunopeptidomes of five melanoma patients by 5% to 15%, based solely on data from mass spectrometry—and personalized the neoantigens to each patient.
 

Full Article

 

 

Researchers Are Peering Inside Computer Brains. What They've Found Will Surprise You
Fortune
Jeremy Kahn
March 4, 2021


Researchers at artificial intelligence (AI) research company OpenAI developed new techniques to examine the inner workings of neural networks to help interpret their decision-making. As neuroscientists have found in studies of the human brain, the researchers found individual neurons in a large neural network used to identify and categorize images can encode a particular concept. This finding is important given the challenges of understanding the rationale behind decisions made by neural networks. The researchers used reverse-engineering techniques to determine what most activated a particular artificial neuron. Among other things, the researchers identified a bias that could enable someone to trick the AI into making incorrect identifications. Said OpenAI's Gabriel Goh, "I think you definitely see a lot of stereotyping in the model."
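
That reverse-engineering step closely resembles standard activation maximization, sketched here with an off-the-shelf torchvision model; the layer and channel index are arbitrary choices, not OpenAI's setup:

# Activation maximization: gradient ascent on the input pixels to find
# an image that strongly excites one chosen channel of one layer.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

activations = {}
model.layer4.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out))

for _ in range(200):
    opt.zero_grad()
    model(img)
    loss = -activations["feat"][0, 42].mean()   # maximize channel 42
    loss.backward()
    opt.step()
# img now approximates the visual pattern that channel 42 responds to.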

Full Article

 

 

Researchers Discover Privacy-Preserving Tools Leave Private Data Unprotected
NYU Tandon School of Engineering
March 3, 2021


Researchers at the New York University Tandon School of Engineering (NYU Tandon) found that the machine learning frameworks underlying privacy-preservation tools that use third-party generative adversarial networks (GANs) are not very effective at safeguarding private data. Their analysis determined that privacy-protecting GANs (PP-GANs) can be subverted to pass empirical privacy checks while still permitting the extraction of secret information from "sanitized" images. NYU Tandon's Siddharth Garg said, "While our adversarial PP-GAN passed all existing privacy checks, we found that it actually hid secret data pertaining to the sensitive attributes, even allowing for reconstruction of the original private image." The researchers applied a novel steganographic approach to adversarially modify a state-of-the-art PP-GAN to conceal its user ID from sanitized face images; the modified PP-GAN passed privacy checks with a 100% secret recovery rate.

Full Article

 

 

Novel System Improves AI Testing
Northwestern McCormick School of Engineering
Brian Sandalow
March 2, 2021


A novel system for automatically writing and testing large sets of questions, developed by researchers at Northwestern University and the Allen Institute for AI, can test artificial intelligence (AI)'s common sense. The Generative Data AUGmentation for Commonsense Reasoning (G-DAUGc) tool can produce extra training data for commonsense models and improve accuracy without additional annotation. Evaluating G-DAUGc on three commonsense reasoning datasets enhanced detection of the models' sensitivity to minor perturbations. Datasets can be costly and time-consuming to compile, while subtle irregularities in how questions are written can complicate evaluation. Northwestern's Douglas Downey said, "We can help address both of these problems by automatically generating large datasets of questions, and using those to train systems. This made the systems more accurate and robust in our experiments."

Full Article

 

 

Cybersecurity Researchers Build Better 'Canary Trap'
Dartmouth News
David Hirsch
March 1, 2021


The WE-FORGE data protection system developed by Dartmouth College cybersecurity researchers uses an artificial intelligence version of the "canary trap," in which multiple false documents are distributed to conceal secrets. The system uses natural language processing to automatically generate false documents to protect intellectual property. WE-FORGE also adds an element of randomness, to keep adversaries from easily identifying actual documents. The algorithm computes similarities between concepts in a document, analyzes each word's relevance, then sorts concepts into "bins" and computes a feasible candidate for each group. Dartmouth's V.S. Subrahmanian said, "The system produces documents that are sufficiently similar to the original to be plausible, but sufficiently different to be incorrect."
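
A toy rendering of that substitution step appears below; the hand-written concept bins are stand-ins for the similarities and relevance scores WE-FORGE actually computes:

# Generate plausible-but-wrong decoys by swapping salient terms for
# near-neighbors, with randomness so no decoy stands out.
import random

concept_bins = {                        # hypothetical similarity groups
    "titanium": ["aluminum", "magnesium", "nickel"],
    "350C": ["300C", "400C", "450C"],
    "laser sintering": ["electron-beam melting", "binder jetting"],
}

def forge_decoy(text, rng):
    for term, alternatives in concept_bins.items():
        if term in text:
            text = text.replace(term, rng.choice(alternatives))
    return text

original = "The alloy uses titanium processed at 350C via laser sintering."
rng = random.Random(1)
print(*[forge_decoy(original, rng) for _ in range(3)], sep="\n")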

Full Article

 

 

In Battle with U.S., China to Focus on 7 'Frontier' Technologies From Chips to Brain-Computer Fusion
CNBC
Arjun Kharpal
March 5, 2021


China has highlighted seven "frontier" technologies on which it will concentrate research, including artificial intelligence (AI) and quantum computing, in its race with the U.S. for technological supremacy. Beijing said its latest five-year development plan will make "science and technology self-reliance and self-improvement a strategic pillar for national development." China's AI push will include specialized chip development, open source algorithms, and machine learning in decision-making and similar areas. China also will invest in quantum information, and pursue research and development in integrated circuit design tools, key equipment, and key materials; the latter effort will attempt to eliminate China's reliance on foreign companies for semiconductor-making assets. Another planned research area is brain-inspired computing and brain-computer fusion technology, whose American equivalent is implantable brain-chip interfaces that link humans and computers.

Full Article

 

Daniel Tauritz

Mar 24, 2021, 7:23:48 PM
to ai-b...@googlegroups.com

ML Models for Diagnosing Covid-19 Not Yet Suitable for Clinical Use
University of Cambridge (U.K.)
March 15, 2021


A review by researchers at the U.K.'s University of Cambridge of studies describing machine learning (ML) models for diagnosing Covid-19 concluded that none of the models is yet suitable for detecting or diagnosing the virus from standard medical imaging. The team ultimately reviewed 62 studies and found every model unsuitable due to biases in study design, methodological flaws, lack of reproducibility, and publicly available "Frankenstein datasets." Many ML models were trained on sample datasets that were too small to be effective, failed to specify their data's origins, were trained and tested on the same data, or lacked involvement from radiologists and clinicians. Cambridge's James Rudd said, "These early studies show promise, but they suffer from a high prevalence of deficiencies in methodology and reporting, with none of the literature we reviewed reaching the threshold of robustness and reproducibility essential to support use in clinical practice."

Full Article

 

 

IBM Develops AI to Invent Antibiotics—and It's Made Two Already
New Atlas
Michael Irving
March 11, 2021


IBM Research is using artificial intelligence (AI) to develop new antibiotics more quickly, and has already produced two promising drug candidates. The space of potential molecules encompasses countless possible chemical combinations, which is why drug development generally takes years. To speed up the process, the researchers used a deep generative autoencoder model to examine a range of peptide sequences, collecting data about their function and the molecules within them and searching for similarities to other peptides. The researchers then used a Controlled Latent attribute Space Sampling (CLaSS) system to generate new peptide molecules with specific properties based on the data gathered by the model. The AI system identified, synthesized, and experimented with 20 new antibiotic peptide candidates within 48 days, producing two that were effective against a range of Gram-positive and Gram-negative bacteria.

Full Article

 

 

FSU Researchers Enhance Quantum ML Algorithms
Florida State University News
Bill Wellock
March 16, 2021


Florida State University (FSU) researchers have developed a method to automatically infer parameters utilized in a critical quantum Boltzmann machine algorithm for machine learning applications. A restricted Boltzmann machine algorithm employs probability to learn based on inputs fed to the network, and FSU's William Oates and Guanglei Xu invented a technique to automatically calculate a parameter associated with effective temperature used in that algorithm. Oates said, "That parameter in the model replicates what the quantum annealer is doing. If you can accurately estimate it, you can train your neural network more effectively and use it for predicting things."

Full Article

 

 

Researchers Find Better Way to Measure Consciousness
University of Wisconsin-Madison News
Chris Barncard
March 16, 2021


Analysis of neural signals in monkeys by University of Wisconsin–Madison (UWM) researchers combined traditional telltales of consciousness with computational metrics describing the signals' complexities and interactions in different brain regions. By processing those signals through a computer, the authors used machine learning to determine whether the monkeys were conscious and to gauge the activity levels of their brain areas. UWM's Mohsen Afrasiabi said the results indicated the back of the brain and the deep brain areas are more predictive of states of consciousness than the front. UWM's Yuri Saalmann said, "We could use what we've learned to optimize electrical patterns through precise brain stimulation and help people who are, say, in a coma maintain a continuous level of consciousness."

Full Article

 

 

Researchers Blur Faces That Launched a Thousand Algorithms
Wired
Will Knight
March 15, 2021


Privacy concerns prompted the researchers who manage ImageNet to blur every human face within the dataset, and to determine whether doing so would affect the performance of object-recognition algorithms trained on the dataset. ImageNet features 1.5 million images with about 1,000 labels, and only the 243,198 images containing faces were blurred. The researchers used Amazon's AI service Rekognition to find the faces, and found that blurring them did not impact the performance of several object-recognition algorithms trained on ImageNet. Princeton University's Olga Russakovsky said, "We hope this proof-of-concept paves the way for more privacy-aware visual data collection practices in the field." However, Massachusetts Institute of Technology's Aleksander Madry said training an AI model on a dataset with blurred faces could have unintended consequences; said Madry, "Biases in data can be very subtle, while having significant consequences."
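
The blurring step itself is easy to sketch; here OpenCV's bundled Haar cascade stands in for the Amazon Rekognition detector the researchers actually used, and the file paths are invented:

# Detect faces and Gaussian-blur them in place; object labels elsewhere
# in the image remain usable for training.
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
).detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("photo_blurred.jpg", img)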

Full Article

 

 

Drive-Throughs That Predict Your Order? Restaurants Are Thinking Fast
The New York Times
Julie Creswell
March 13, 2021


Many restaurants expect digital ordering and drive-throughs to remain key business channels, and some are testing artificial intelligence (AI) to predict and suggest personalized orders. McDonald's acquired Israeli AI company Dynamic Yield to boost sales via personalized digital promotions. Burger King is modernizing its drive-through with its Deep Flame AI system to suggest foods based on daily popularity, and testing Bluetooth technology to identify loyal customers and show their previous orders to calculate their probability of ordering the same items. Restaurant Brands International (RBI) hopes to deploy predictive personalized systems at more than 10,000 of its North American restaurants by mid-2022. RBI's Duncan Fulton envisions customers having "the ability to automatically reorder things and pay for the items at the board, which, ultimately, speeds up the window time, allowing you to collect your food and go on your way."

Full Article

*May Require Paid Registration

 

 

New Approach Found for Energy-Efficient AI Applications
Graz University of Technology (Austria)
Christoph Pelzl
March 11, 2021


Researchers at Austria's Graz University of Technology (TU Graz) have demonstrated a new approach to energy-efficient artificial intelligence that needs very few signals to function, and that assigns meaning to pauses between signals. TU Graz's Wolfgang Maass and Christoph Stöckl optimized artificial neural networks in computer models for image classification so the neurons only have to transmit extremely simple signals occasionally, achieving an accuracy similar to that of state-of-the-art tools. The data transmission model relies not only on how many spikes a neuron sends out, but also on when the neuron transmits the spikes. Maass said, "With just a few spikes—an average of two in our simulations—as much information can be conveyed between processors as in more energy-intensive hardware."
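
A toy latency code shows how timing alone can carry a graded value with a single spike; the numbers are purely illustrative and unrelated to TU Graz's actual models:

# Latency coding: a higher activation is signaled by an earlier spike,
# so one spike per neuron can convey a graded value.
import numpy as np

T = 100                                  # time steps in the coding window

def encode(activation):                  # higher activation -> earlier spike
    return int(round((1.0 - activation) * (T - 1)))

def decode(spike_time):
    return 1.0 - spike_time / (T - 1)

for a in [0.9, 0.5, 0.1]:
    t = encode(a)
    print(f"activation {a:.1f} -> spike at t={t:2d} -> decoded {decode(t):.2f}")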

Full Article

 

 

Deep Learning Enables Real-Time 3D Holograms on Smartphone
IEEE Spectrum
Charles Q. Choi
March 10, 2021


Massachusetts Institute of Technology (MIT) scientists can generate photorealistic three-dimensional (3D) holograms in color on a smartphone, in real time. The technique utilizes a deep learning convolutional neural network (CNN) to generate holograms without requiring separate calculations for how each chunk of the image appears. The MIT researchers compiled a database of 4,000 computer-generated images, each with color and depth information for each pixel, as well as a corresponding 3D hologram for each image. The CNN tapped this data to calculate an optimal hologram generation process, then produced new holograms from images with depth information calculated from a multi-camera setup or from LiDAR sensors included in certain iPhones. The system can generate 60 color 3D holograms per second with a resolution of 1,920 by 1,080 pixels on one consumer-grade graphics processing unit (GPU), using less than 620 kilobytes of memory.

Full Article

 

Facebook Unveils New AI Model That Was Trained On One Billion Instagram Pics

ZDNet (3/4, Leprince-Ringuet) reports that Facebook researchers “have unveiled a new AI model that can learn from any random group of unlabeled images on the internet, in a breakthrough that, although still in its early stages, the team expects to generate a ‘revolution’ in computer vision.” Dubbed SEER (SElf-SupERvised), “the model was fed one billion publicly available Instagram images, which had not previously been manually curated.” SEER “was able to autonomously work its way through the dataset, learning as it was going, and eventually achieving top levels of accuracy on tasks such as object detection.”

 

NYU Engineering Researchers Discover Privacy-Preserving Tools Leave Private Data Unprotected

Tech Xplore (3/3) reports that “machine-learning (ML) systems are becoming pervasive not only in technologies affecting our day-to-day lives, but also in those observing them.” Companies that “make and use such widely deployed services rely on so-called privacy preservation tools that often use generative adversarial networks (GANs), typically produced by a third party to scrub images of individuals’ identity.” Researchers at the NYU Tandon School of Engineering, “who explored the machine-learning frameworks behind these tools,” found that they leave private data unprotected. In the paper “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” a team “led by Siddharth Garg, Institute Associate Professor of electrical and computer engineering at NYU Tandon, explored whether private data could still be recovered from images that had been ‘sanitized’ by such deep-learning discriminators as privacy protecting GANs (PP-GANs) and that had even passed empirical tests.” The team “found that PP-GAN designs can, in fact, be subverted to pass privacy checks, while still allowing secret information to be extracted from sanitized images.”

 

Commission: China Catching Up To US When It Comes To AI

Axios (3/2, Fried) reports, “The US, which once had a dominant head start in artificial intelligence, now has just a few years’ lead on China and risks being overtaken unless government steps in,” according to the National Security Commission on Artificial Intelligence, which is chaired by former Google CEO Eric Schmidt.

 

Analysis: Deep-Learning Algorithms Lack Ability To Adapt To Change

MIT Technology Review (2/24, Hao) reports that “DeepMind’s game-playing algorithm AlphaGo can beat the world’s best Go masters.” However, “it can’t extend that strategy beyond the board.” Thus, “deep-learning algorithms...are masters at picking up patterns,” but they are unable to “understand and adapt to a changing world.” The author continues that “‘multimodal’ systems, with access to both the sensory and linguistic ‘modes’ of human intelligence, should give rise to a more robust kind of AI that can adapt more easily to new situations or problems.”

 

Cloudera Releases Applied ML Prototypes To Speed Enterprise Machine Learning Use Cases

ZDNet (2/25, Dignan) reports Cloudera launched Applied ML Prototypes, complete machine learning projects that take care “of a lot of the coding and workflow grunt work,” allowing data scientists to “focus on developing for the enterprise use case and iterating.” ZDNet adds, “Santiago Giraldo, director of product marketing, data engineering, and machine learning at Cloudera, said AMPs goal is to change how machine learning is delivered going forward.”

 

Research: AI Can Write Passing College Papers

ZDNet (2/24, Nichols) reports EduRef has conducted an experiment that found “a deep learning language prediction model known as GPT-3” was able to write a college term paper that got “passing marks in an anonymized trial.” ZDNet explains the methodology of the trial, reporting the AI “was released in June 2020 by OpenAI, a research business co-founded by Elon Musk. It was developed to create content with a human language structure better than any of its predecessors.” Evaluations of the writing by college professors “suggested that writing produced by GPT-3 was able to mimic human writing in areas of grammar, syntax, and word frequency, although the papers felt somewhat technical.”

 

IBM Releases Details Of Low-Power AI Chip

Enterprise AI (2/22) reports IBM “has released details of a prototype AI chip geared toward low-precision training and inference across different AI model types while retaining model quality within AI applications.” In a paper that was delivered during the International Solid-State Circuits Virtual Conference, IBM “also touted its AI chip based on 7nm process technology as the first energy-efficient device ‘at the vanguard of low precision training and inference.’”

 

Google Fires Leader Of AI Ethicist Team

The AP (2/19) reported that Google “has fired a leader of its artificial intelligence ethics team in the fallout from the recent departure of another former employee involved in the same field.” Google “confirmed the firing Friday after Margaret Mitchell announced her ouster on Twitter.”

        The Verge (2/19) reported Google fired Mitchell “after she used an automated script to look through her emails in order to find evidence of discrimination against her coworker Timnit Gebru.” Following Gebru’s termination, “Mitchell became openly critical of Google executives, including Google AI division head Jeff Dean and Google CEO Sundar Pichai. In January, she lost her corporate email access after Google began investigating her activity.”

        Bloomberg (2/19, Grant, Bass, Eidelson) reported that Google “had accused Mitchell of downloading files from its systems and said it would review her conduct.” Mitchell’s firing “came the same day that Google’s head of AI, Jeff Dean, apologized to staff for how he handled Gebru’s departure and pledged that executives would be graded on diversity progress.”

        TechCrunch (2/20) and Wired (2/19, Simonite) also reported.

 

Google To Change Oversight Procedures Amid Internal Concerns Over AI Research Integrity

Reuters (2/25, Dastin, Dave) reports, “Google will change procedures before July for reviewing its scientists’ work, according to a town hall recording heard by Reuters, part of an effort to quell internal tumult over the integrity of its artificial intelligence research.” In comments “at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording, the content of which was confirmed by two sources.”

 

Google Employees Continue Criticism Of Company Over Ethics Firings

Fast Company (2/26) reported that thousands of Google employees are mobilizing in response to the firing of Google ethics researcher Timnit Gebru, who “was pushed out of her job in December.” While Alphabet CEO Sundar Pichai “pledged an investigation,” a leaked copy of the report “revealed that the company plans to change its research publishing process, tie executive compensation to diversity numbers, and institute a more stringent process for ‘sensitive employee exits.’” Also at stake is the “equitable development of a technology that already underpins many of our most important automated systems.” Across a wide field of human interactions, “AI algorithms are making life-altering decisions with no oversight or transparency. The harms these models cause when deployed in the world are increasingly apparent: discriminatory hiring systems; racial profiling platforms targeting minority ethnic groups; racist predictive-policing dashboards.”

 

Daniel Tauritz

Mar 28, 2021, 11:16:02 AM
to ai-b...@googlegroups.com

Virtual Webinar: The Evolving AI-Human Relationship And Its Impact On Education

As artificial intelligence continues to transform innumerable aspects of everyday life, its impact on education is inevitable. In this talk, Dr. Jason Corso, Director of the Stevens Institute for Artificial Intelligence and Professor of Computer Science at Stevens Institute of Technology, will present a colorful perspective on the AI-human relationship, including how it has evolved over the last few decades and how the future of AI will affect education. Learn more and register to attend.

 

Podcast Discusses How AI Can Now Debate With Humans

Scientific American (3/17, Amer) publishes a transcript of its “Science Talk” podcast, which focuses on Noam Slonim, who discusses “an impressive feat of computer engineering: an AI-powered autonomous system that can engage in complex debate with humans over issues ranging from subsidizing preschool and the merit of space exploration to the pros and cons of genetic engineering.” In a new Nature paper, Slonim and colleagues “show that across 80 debate topics, Project Debater’s computational argument technology has performed very decently – with a human audience being the judge of that.” Said Slonim, “However, it is still somewhat inferior on average to the results obtained by expert human debaters.” Not unlike its human counterpart, the AI “was given only 15 minutes to research the topic and prepare for the debate – rifling through thousands of gigabytes of information at record speed to form an opening statement and layer counter arguments that were later delivered through a robotic female voice, in fragments, and with near perfect diction.”

 

 

New U.K. Currency Honors Alan Turing, Pioneering Computer Scientist and Code-Breaker
NPR Online
Rachel Treisman
March 25, 2021


The Bank of England (BoE) has unveiled a new £50 note featuring pioneering mathematician, code-breaker, and computer scientist Alan Turing, which will enter circulation on June 23, his birthday. The BoE's Andrew Bailey said Turing's work in computing and artificial intelligence "has had an enormous impact on how we all live today." The bill is one of a series of polymer banknotes that are harder to counterfeit, and Turing's nephew Dermot Turing said his uncle would have especially appreciated the currency for highlighting his computer science achievements. Said Dermot Turing, "I think Alan Turing would have wanted us to think about things like underrepresentation of women in science subjects, underrepresentation of Black and ethnic minority kids in STEM [science, technology, engineering, and math] subjects at school, and why they're not being given the opportunities that they should have and why that's bad for all of us."
 

Full Article

 

 

Tiny Swimming Robots Reach Their Target Faster Thanks to AI Nudges
New Scientist
Chris Stokel-Walker
March 24, 2021


A machine learning algorithm developed by researchers at Germany's University of Leipzig could help microrobots swim toward a goal without being knocked off course by the random motion of particles in the fluid. Swimming microrobots generally follow a random path and cannot correct their direction — unlike the bacteria they are designed to mimic, which move toward food sources. The researchers used a narrow laser beam to move a microrobot made of melamine resin with gold nanoparticles covering 30% of its surface. The algorithm tracked the microrobot's movement and ordered the laser to fire at a particular point on its surface. The algorithm was rewarded if the instruction moved the microrobot toward the goal, and penalized if it moved the microrobot away from the target. The number of instructions necessary for the microrobot to reach its goal was reduced nearly 85% after seven hours of such training.
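
That reward scheme maps naturally onto tabular Q-learning; in the toy version below (all parameters invented), an agent learns to push toward a goal on a line despite random "Brownian" kicks:

# Tabular Q-learning with reward for progress toward the goal and a
# penalty for drifting away, under random motion noise.
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 21, 20
Q = np.zeros((n_states, 2))           # actions: 0 = push left, 1 = push right

for episode in range(500):
    s = 0
    while s != goal:
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
        drift = 1 if a == 1 else -1
        noise = rng.choice([-1, 0, 1])    # random particle motion
        s2 = int(np.clip(s + drift + noise, 0, n_states - 1))
        r = 1.0 if s2 > s else -1.0       # reward progress toward the goal
        Q[s, a] += 0.1 * (r + 0.95 * Q[s2].max() - Q[s, a])
        s = s2
print("learned policy:", Q.argmax(axis=1))   # should mostly favor "right"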
 

Full Article

 

 

Researchers' Algorithm Designs Soft Robots That Sense
MIT News
Daniel Ackerman
March 22, 2021


An algorithm developed at the Massachusetts Institute of Technology (MIT) helps design soft robots that gather useful environmental data by recommending optimized positions for sensors on the robot so it can better interact with its surroundings and perform assigned tasks. The research team divided the robot's body into “particles,” and the rate of strain of each was provided to a neural network that learns the most efficient sequence of task movements via trial and error. The network monitors which particles are used most often, removing lesser-used particles from inputs for subsequent trials and suggesting sensor positioning for efficient performance. MIT's Daniela Rus said, "Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks."
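
The prune-and-retrain loop can be illustrated with a toy regression in which only a few input "sites" matter; here feature importances stand in for the particle-usage signal the MIT network tracks:

# Iteratively drop the least-used inputs until a small sensor set remains.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # 40 candidate sensor sites
y = X[:, 3] + 2 * X[:, 17] - X[:, 31]          # only three sites matter

active = list(range(40))
while len(active) > 5:                         # target sensor budget
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:, active], y)
    usage = model.feature_importances_         # proxy for "most-used"
    active = [site for site, u in zip(active, usage)
              if u > np.quantile(usage, 0.25)] # drop least-used quartile
print("suggested sensor sites:", sorted(active))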

Full Article

 

 

More Than Words: Using AI to Map How the Brain Understands Sentences
University of Rochester Medical Center
Kelsie Smith Hayduk
March 22, 2021


Researchers at the University of Rochester Medical Center (URMC) and the Medical College of Wisconsin combined neuroimaging and artificial intelligence (AI) to describe the brain's mechanism for understanding sentences. The team performed functional magnetic resonance imaging (fMRI) scans on study participants as they read sentences, which indicated that brain activity crossed a network of different regions. Using Facebook's InferSent AI model, the researchers could predict patterns of fMRI activity that mirrored the encoding of a sentence’s meaning across those regions. URMC's Andrew Anderson said, "It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain."
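
An encoding-model analysis of this kind can be sketched as ridge regression from sentence embeddings to voxel responses, evaluated on held-out sentences; the arrays below are random stand-ins for InferSent embeddings and fMRI data:

# Fit a linear map from sentence embeddings to voxel activity, then
# score per-voxel prediction on held-out sentences.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(240, 4096))   # 240 sentences x embedding dims
true_map = rng.normal(size=(4096, 50))      # synthetic ground-truth mapping
voxels = embeddings @ true_map + rng.normal(scale=5.0, size=(240, 50))

E_tr, E_te, V_tr, V_te = train_test_split(embeddings, voxels, random_state=0)
pred = Ridge(alpha=100.0).fit(E_tr, V_tr).predict(E_te)
corr = [np.corrcoef(pred[:, v], V_te[:, v])[0, 1] for v in range(50)]
print(f"mean held-out prediction correlation: {np.mean(corr):.2f}")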

Full Article

 

 

More Transparency, Understanding Into Machine Behaviors
MIT News
Rachel Gordon
March 22, 2021


Researchers at the Massachusetts Institute of Technology (MIT) have developed a tool for instilling transparency into artificial intelligence (AI) models by identifying concrete examples that yield a specific behavior. The Bayes-TrEx tool applies Bayesian posterior inference, a popular mathematical framework for reasoning about model uncertainty. The MIT researchers applied Bayes-TrEx to image-based datasets, uncovering insights previously missed by standard techniques focusing exclusively on prediction accuracy. Bayes-TrEx also can characterize model behaviors in unique situations, and the tool has inspired an adaptation, RoCUS, for the analysis of robot-specific behaviors. MIT's Serena Booth said, "We want to make human-AI interaction safer by giving humans more insight into their AI collaborators."
 

Full Article

 

 

Novel Deep Learning Framework for Symbolic Regression
Lawrence Livermore National Laboratory
March 18, 2021


Computer scientists at Lawrence Livermore National Laboratory (LLNL) have created a new framework and visualization tool that applies deep reinforcement learning to symbolic regression problems. Symbolic regression, a type of discrete optimization that seeks to determine the underlying equations or dynamics of a physical process, generally is approached in machine learning and artificial intelligence with evolutionary algorithms, which LLNL's Brenden Petersen said do not scale well. LLNL's Mikel Landajuela explained, "At the core of our approach is a neural network that is learning the landscape of discrete objects; it holds a memory of the process and builds an understanding of how these objects are distributed in this massive space to determine a good direction to follow." The team's algorithm outperformed several common benchmarks when tested on a set of symbolic regression problems.

Full Article

 

 

Im­age Analysis Based on ML Re­li­ably Iden­ti­fies Hemat­o­lo­gical Ma­lig­nan­cies Chal­len­ging for the Human Eye
University of Helsinki (Finland)
Anu Koivusipila
March 22, 2021


Since machine learning (ML)-based image analysis can spot details in tissue that may elude the human eye, researchers at Finland's University of Helsinki used the technique to analyze microscopic images of bone marrow from myelodysplastic syndrome (MDS) patients. The researchers digitized and examined sample slides using computational deep learning models, which accurately identified the most frequent genetic mutations affecting MDS progression. Olivier Elemento at Weill Cornell Medicine's Caryl and Israel Englander Institute for Precision Medicine said, "[This] study provides new insights into the pathobiology of MDS and paves the way for increased use of artificial intelligence for the assessment and diagnosis of hematological malignancies."

Full Article

 


Opinion: AI Audits Needed, But Need To Be Done Right

In an op-ed on Medium (3/17), Mona Sloane, senior research scientist at the NYU Center for Responsible AI, writes there is a race on “to define and steer what it means to audit algorithms,” as the technology has a “disproportionate impact” on the vulnerable “with applications ranging from health care to welfare, hiring, and education.” Attempts to create AI audits are admirable, but Sloane warns that “audits ... can end up as toothless reputation polishers, or even worse: They can legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience.”

 

Applied Materials Unveils New AI Tech For Chip Production

Reuters (3/16, Nellis) reports, “A new semiconductor manufacturing technology from Applied Materials uses artificial intelligence, or AI, to spot mistakes in chips more effectively, the U.S. company said on Tuesday.” The new Applied Materials tools “are aimed at inspecting those chips at various times during the manufacturing process. A new optical scanner – essentially an extremely advanced camera that Applied calls Enlight – scans a silicon wafer quickly for problem areas over about 15 minutes, and then an electron microscope zooms in for a closer look.” Additionally, a “new artificial intelligence technology, which Applied calls ExtractAI, only needs to check about 1,000 of those possible trouble spots with the electron microscope to predict where the biggest problems will be.”

 

AI-Powered Water Meter Helps Prevent Damage, Reduce Waste

CNN (3/15, Cohan) reports on WINT Water Intelligence, which produces “an artificial intelligence-powered water meter” that “integrates into existing pipe systems, where it can learn the normal water flow, detect when things go wrong, and shut off malfunctioning parts before any damage is done.” The system can also help to reduce water use. The company has “expanded into the United States and Europe – where the technology has already been adopted by companies such as Microsoft, HP and Mastercard.”

 

NYTimes’ Metz Explores AI And Bias

The New York Times’ (3/15) Cade Metz writes, “In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.” With Google’s dismissal of AI researchers Drs. Timnit Gebru and Margaret Mitchell, Metz “saw an old problem – part technological and part sociological – finally breaking into the open.” AI researchers Joy Buolamwini and Deborah Raji “published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces. According to the study, the service mistook women for men 19 percent of the time and misidentified darker-skinned women for men 31 percent of the time. For lighter-skinned males, the error rate was zero.”

        Analysis: Google Asking Questions About AI Ethics, But Doesn’t Want Answers. For The Guardian (UK) (3/13), Open University Professor John Naughton examined how the recent departure of two leading members of Google’s ethical AI team, Dr. Timnit Gebru and Dr. Margaret Mitchell, “exposes the conflict at the heart of its business model.” The piece explains how the researchers in a paper provided a critique of key machine-learning language models, but the company, which relies on its own developed model for its search engine, “is accordingly ultra-sensitive to critiques of such a key technology.”

 

How One Employee’s Exit Shook Google And The AI Industry

CNN (3/11, Metz, Business) chronicles in an essay how Timnit Gebru’s departure from Google, where she had been co-leader of the ethical AI team, raised questions at the company where she had been focused on the “ethical implications of large language models.” Gebru’s “exit from Google’s ethical AI team kickstarted a months-long crisis for the tech giant’s AI division, including employee departures, a leadership shuffle, and widening distrust of the company’s historically well-regarded scholarship in the larger AI community.”

 

College Students Bond With AI Robots Via Text Message Amid Pandemic

The Los Angeles Times (3/9, Agrawal) reports on Billy Chat, a robot that “uses artificial intelligence to text” at Cal Poly Pomona. Billy and other “chatbots” were “launched at California State University campuses in 2019 to help students stay on track to graduate.” After students were “sent home last spring at the onset of the COVID-19 pandemic, Billy evolved into more of a friend, blurring the line between artificial and real when the world turned away from human touch and connections.” To Billy and other bots “with names such as ‘CougarBot’ and ‘Csunny,’ students poured out feelings of loneliness, despair and worry for themselves and their families.”

 

Researchers Say AI Tools Can Be Fooled By A Written Word

The Verge (3/8, Vincent) reported OpenAI researchers “have discovered that their state-of-the-art computer vision system can be deceived by tools no more sophisticated than a pen and a pad,” as “simply writing down the name of an object and sticking it on another can be enough to trick the software into misidentifying what it sees.” OpenAI researchers wrote in a blog post, “We refer to these attacks as typographic attacks.” The post continued, “By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model.”

 

Facebook’s New AI Teaches Itself To See With Less Human Help

Wired (3/7, Knight) reports that Facebook has “built an algorithm that learned to recognize objects in images with little help from labels.” The algorithm, called Seer (for SElf-supERvised), “fed on more than a billion images scraped from Instagram, deciding for itself which objects look alike.” The Facebook research “is a landmark for an AI approach known as ‘self-supervised learning,’ says Facebook’s chief scientist, Yann LeCun.”

 

China Plans To Accelerate Development Of AI, Other Advanced Technologies

The Wall Street Journal (3/7, Lin, Subscription Publication) reports that in a draft economic blueprint presented during China’s annual legislative gathering, Chinese officials outlined plans to accelerate development of advanced technologies such as chips and artificial intelligence over five years. According to economists and industry analysts, the five-year plan’s focus on advanced technologies and innovation is notable. The Journal adds that, according to the plan, China anticipates having “significant breakthroughs on core technologies and seeks to be among the most innovative nations globally” by 2035.

 

AI And Automation Sped Up Amid Pandemic, Taking On New Roles

The New York Times (3/6, Roose) reported that “recent advances in A.I. and machine learning have created algorithms capable of outperforming doctors, lawyers and bankers at certain parts of their jobs. And as bots learn to do higher-value tasks, they are climbing the corporate ladder.” The trend has been “accelerating to warp speed since the pandemic,” and is “transforming workplaces at a pace that few outsiders appreciate.”
