Spooky Action Could Help Boost Quantum Machine Learning
IEEE Spectrum
Charles Q. Choi
March 7, 2022
Researchers at Los Alamos National Laboratory and Louisiana State University suggest quantum entanglement, or "spooky action at a distance," might help clear a path toward quantum machine learning (ML) by overcoming the no-free-lunch theorem. The theorem posits that any ML algorithm is only as good as any other when their performance is averaged over many problems and training datasets. This implies that modeling a quantum system could require a volume of training data that must grow exponentially as the modeled system expands. Researchers suggested entangling extra or "ancilla" quantum bits with the quantum system that a quantum computer intends to model, enabling the quantum ML circuit to interact with many quantum states in the training data concurrently. They used quantum-hardware startup Rigetti's Aspen-4 quantum computer to confirm these findings.
Gender Bias in Recruitment: AI Hiring Tools Are Hindering Women's Careers
EuroNews
Natalie Huet
March 8, 2022
New research found artificial intelligence (AI) is hurting women's careers, because of algorithms built on biased data. The United Nations Educational, Scientific, and Cultural Organization (UNESCO), the Organization for Economic Co-operation and Development, and the Inter-American Development Bank determined that job platforms generate fewer ads for high-paying jobs for users who select the female gender. UNESCO's Marielza Oliveira said recruitment and HR recommendation algorithms are based on historical data with baked-in bias against women. AI also tends to pair women with jobs demanding soft skills, which are not compensated as highly as those requiring hard skills, maintaining the gender pay gap. New systems are trying to address this by reconsidering language used in job ads. "We cannot have innovation by depreciating and disengaging half of the brains of the world," said Oliveira. "If we want AI to work, we really need to make it inclusive."
These Campaigns Hope 'Deepfake' Candidates Help Get Out the Vote
The Wall Street Journal
Timothy W. Martin; Dasl Yoon
March 8, 2022
South Korean presidential candidate Yoon Suk-yeol is reaching out to younger voters in online videos featuring a digital avatar (AI Yoon) that is more personable and relatable. AI Yoon, based on hours of video and audio recorded by the candidate along with his expressions and facial mannerisms, is controlled by his campaign team; Yoon's political opponents have adopted similar tactics with an avatar of rival candidate Lee Jae-myung, produced from real footage of his actual comments. Seoul-based artificial intelligence company DeepBrain AI provides the technology behind AI Yoon by synthesizing voice and video into avatars that can carry on real-time dialogue. Some avatar videos can be produced in a half hour, while others take half a day, depending on what knowledge the avatar must express. Lee Jun-seok, head of Yoon's People Power Party, said Yoon has adapted his campaigning style based on the popularity of his avatar's videos.
*May Require Paid Registration
Scientists Translate Pig Grunts into Emotions for the First Time
The Daily Mail (U.K.)
Jonathan Chadwick
March 7, 2022
An international team of European scientists says it has translated pig grunts into emotions by training an artificial intelligence algorithm on 7,414 recordings of noises collected throughout the lives of pigs. The researchers claim the algorithm can decode whether an individual pig is experiencing positive emotions, negative emotions, or something in between. Closer analysis revealed clear differences in the pigs' calls across situations, indicating what the animals were experiencing in each. "By training an algorithm to recognize these sounds, we can classify 92% of the calls to the correct emotion," said Elodie Briefer, associate professor at the University of Copenhagen in Denmark. Experts suggest sufficient data could be applied to better understand the emotions of other animals, which Briefer said could lead to applications that farmers could use to improve their livestock's well-being.
'Fingerprint' ML Technique Identifies Bacteria in Seconds
KAIST (South Korea)
March 4, 2022
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) combined surface-enhanced Raman spectroscopy and a deep learning model to identify bacteria in seconds with up to 98% accuracy. Their model, named DualWKNet (dual-branch wide-kernel network), was trained to identify the "fingerprint" spectra of the molecular components of multiple bacteria. Said KAIST's Sungho Jo, "We demonstrated a markedly simple, fast, and effective route to classify the signals of two common bacteria and their resident media without any separation procedures." Jo added, "Ultimately, with the use of DualWKNet replacing the bacteria and media separation steps, our method dramatically reduces analysis time."
Preparing for the Next Pandemic
University of Waterloo News (Canada)
Wendy Philpott
February 28, 2022
An interdisciplinary research team at GoodLabs Studio and Canada's University of Waterloo is working on the Syndrome Anomaly Detection System (SADS) to provide real-time, data-driven insights that could help health officials prevent a future pandemic. The system uses machine learning (ML) and artificial intelligence to conduct widespread disease monitoring, with the goal of identifying patterns of atypical disease across communities to allow for quick action. SADS features an app that uses natural language processing (NLP) to capture symptoms described by patients in conversation with their doctors. The use of NLP within the app protects patients' personal information. Deep language ML is then used to aggregate and categorize the data—symptoms, age, gender, and location—to detect patterns. SADS tracks atypical symptoms over time to show how a novel disease could be spreading in a community.
Using AI to Find Anomalies Hiding in Massive Datasets
MIT News
Adam Zewe
February 25, 2022
A machine learning model developed by researchers at the MIT-IBM Watson AI Lab can identify anomalies in massive data streams automatically, in real time. The researchers used artificial intelligence (AI) to model the interconnectedness of the power grid to better detect glitches, without needing annotated data on power grid anomalies for training. This means the model could be applied in real-world situations where the availability of high-quality, labeled datasets is limited. The researchers indicated the model could be applied to other circumstances where a large number of interconnected sensors collect and report data. The framework was tested on power grid, traffic, and water system datasets that contained anomalies previously identified by humans, and outperformed the baselines by identifying a higher percentage of true anomalies in each dataset.
The Wall Street Journal (1/24, Bhattacharyya, Subscription Publication) reports Meta Platforms on Monday said its research team has built a new AI supercomputer, which the company maintains will soon be the fastest in the world. The supercomputer was the result of nearly two years of remote and in-person work.
Additional coverage includes CNN (1/24, Metz).
FierceBiotech (1/25, Hale) reports, “What if the information you need to gauge your risk of a heart attack within the next year could be found in your eyes?” A team of “researchers led by the University of Leeds believe that to be true, and they’ve trained an artificial intelligence program to spot the warning signs.” Their “algorithm analyzes retinal scans taken during a routine eye doctor visit and looks for changes in the tiny blood vessels feeding the back of the eye, including complex patterns that have been linked to broader cardiovascular disease.” Along with other data, “the AI predicted their chance of having a heart attack within the next 12 months with an accuracy between 70% and 80%, the researchers said.” The “findings were published in the journal Nature Machine Intelligence.”
The Chronicle of Higher Education (1/27) reports that “colleges nationwide are increasingly adopting artificial-intelligence tools such as chatbots to expand and streamline communication.” In an Educause Quick Poll “from June 2021, 36% of IT professionals who responded said chatbots and digital assistants were already in place on their campuses, while 17% reported they were in the works.” The poll “simultaneously revealed, however, that 68% of respondents saw ethical concerns as a barrier to adoption.” The University of California system, “including UC-Irvine, has been at the forefront of thinking about ethical AI, including and beyond chatbots.” The system “adopted recommendations from a nearly 80-page report in the fall of 2021 – among the first of its kind in higher education – that includes best practices for incorporating AI into different aspects of the ‘student experience,’ such as admissions and financial aid, advising, mental health, and remote proctoring.”
Government Technology (1/24, Maruri) reports legislators in Vermont proposed a bill recently that “looks to create a commission to oversee the ethical use of AI technology within state government.” The bill is a merger of two separate pieces of legislation that aim “to create an index of current AI technology used within the state” and “develop a code of ethics and support the responsible use of AI technology among state agencies.” National Association of State Chief Information Officers senior policy analyst Amy Glasscock said, “Legacy infrastructure is also another big challenge for states in using AI because there is so much of it,” adding that many “states haven’t been able to move forward with a lot of AI applications.”
'No-Code' Brings the Power of AI to the Masses
The New York Times
Craig S. Smith
March 15, 2022
"Citizen developers" are tapping products that allow anyone to use artificial intelligence (AI) without writing any computer code, as part of the "no-code" movement envisioned by advocates as revolutionary. "We are trying to take AI and make it ridiculously easy," said Craig Wisneski at the startup Akkio, which lets anyone make predictions using data. No-code platforms replace coding languages with simple and familiar Web interfaces, and new startups are making the power of AI available to nontechnical people in visual, textual, and audio spheres. The Juji tool, for example, is engineered to simplify chatbot building, by using machine learning to automatically manage complex conversation flows and deduce user characteristics to personalize each engagement. The power of no-code platforms is also growing through AI innovations such as OpenAI's GPT-3 system, which can write code when prompted with simple English.
WARNING: Objects in Driverless Car Sensors May Be Closer Than They Appear
Duke University Pratt School of Engineering
Ken Kingery
March 14, 2022
Duke University researchers have identified an attack strategy that can trick industry-standard autonomous vehicle sensors into believing nearby objects are closer or farther than they actually are. This involves using a laser gun to strategically place 3D LiDAR data points within a certain area of the vehicle camera's 2D field of view. The researchers determined the vulnerable area extends out in front of the camera's lens in a frustum shape. "This so-called frustum attack can fool adaptive cruise control into thinking a vehicle is slowing down or speeding up," said Duke's Miroslav Pajic. Pajic suggested adding redundancy in the form of "stereo cameras" with overlapping fields of view to better estimate distances and detect LiDAR data that does not match their perception, or developing systems that allow cars in close proximity to share some of their data.
Researchers Improve Algorithm that Detects Alzheimer's Disease from MRI Images
News-Medical Life Sciences
Emily Henderson
March 14, 2022
Researchers at Lithuania's Kaunas University of Technology (KTU) have augmented an algorithm that detects Alzheimer's disease (AD) from magnetic resonance imaging (MRI) with more than 98% accuracy. The team modified the algorithm and used a broader convolutional neural network to improve previous research, and KTU's Rytis Maskeliuna said the DenseNet201 network variant tapped for the latest study offers better parameter optimization than the modified ResNet18 network used for the earlier analysis. The algorithm reviewed brain MRI images from 125 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset in terms of AD, mild cognitive impairment, and dementia. "Using the ever-increasing ADNI dataset, the algorithm is getting ready to recognize the symptoms of the disease in various images and becomes less sensitive to a specific data source," said Maskeliuna. "It's not a revolution, but certainly an evolution."
An 'Oracle' Predicting the Evolution of Gene Regulation
MIT Department of Biology
Raleigh McElvery
March 9, 2022
Massachusetts Institute of Technology (MIT) researchers have developed a neural network model to predict gene expression by focusing on "regulatory" or non-coding DNA, which can help turn genes on or off. The researchers created a dataset to train the model by inserting millions of random non-coding DNA sequences into yeast and determining how each sequence affected gene expression. Said MIT's Aviv Regev, "We now have an 'oracle' that can be queried to ask: What if we tried all possible mutations of this sequence? Or, what new sequence should we design to give us a desired expression?"
UCI Researchers Develop Hybrid Human-Machine Framework for Building Smarter AI
UCI News
March 7, 2022
University of California, Irvine (UCI) researchers unveiled a mathematical model that can improve the performance of artificial intelligence (AI) via a hybrid human-machine framework. "We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI—and vice versa," said UCI's Mark Steyvers. "And this accuracy is higher than combining predictions from two individuals or two AI algorithms." The model was tested by having humans and algorithms separately classify distorted images of animals and everyday items, with humans ranking their confidence in the accuracy of each image identification, while the algorithms produced a continuous score. Confidence differed widely across images, and the hybrid model's performance topped that of either human or machine predictions.
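The article does not spell out how the hybrid framework fuses the two judgments. As a rough, hypothetical illustration (the function name, fusion rule, and weights are assumptions, not UCI's model), the sketch below merges a human's per-class confidence with a model's softmax output using a simple weighted product-of-experts rule:

```python
import numpy as np

def combine_predictions(human_probs, ai_probs, human_weight=0.5):
    """Hypothetical fusion rule: weighted geometric mean of the two
    probability vectors (product-of-experts style), renormalized."""
    human_probs = np.asarray(human_probs, dtype=float)
    ai_probs = np.asarray(ai_probs, dtype=float)
    combined = (human_probs ** human_weight) * (ai_probs ** (1.0 - human_weight))
    return combined / combined.sum()

# Example: three-way classification of a distorted image.
human = [0.6, 0.3, 0.1]   # human's self-reported confidence over the classes
ai    = [0.4, 0.5, 0.1]   # model's continuous score (softmax output)
print(combine_predictions(human, ai))   # roughly [0.50, 0.40, 0.10]
```

Adjusting the weight lets the combination lean toward whichever judge is more reliable, which is the kind of effect the UCI result describes.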
Open-Source Code Generator Is Very Good at Writing in C
ZDNet
Liam Tung
March 7, 2022
Carnegie Mellon University (CMU) researchers have launched an automated code generator model trained on multiple programming languages, which they said they found was very good at writing code in C. The researchers hope the open-source PolyCoder can democratize research into artificial intelligence (AI) code generation, which companies like Alphabet subsidiary DeepMind and OpenAI now dominate. Underlying auto code generation is the premise that the process can save developers time, assuming the output is accurate and lacks security flaws. The CMU researchers said PolyCoder has "2.7 [billion] parameters based on the GPT-2 architecture, that was trained on 249 [gigabytes] of code across 12 programming languages on a single machine."
Can AI Help Casinos Cut Down on Problem Gambling?
The New York Times
Bradford Pearson
March 8, 2022
Researchers are looking to artificial intelligence (AI) as a way to potentially identify and predict problem gambling. One program from Mindway AI, a company with ties to Denmark's Aarhus University, relies on psychologists to train AI algorithms to detect behaviors associated with problem gambling. The program scores 14 risk factors from 1 to 100 and generates a risk assessment for each player. However, there are concerns about how to explain such data to players. While some online gaming companies use pop-up messaging, texts, or emails, Mindway's Rasmus Kjærgaard said the data could be used to personalize phone calls to players.
Doctoral Student Offers Insight into ML Error Estimation
Texas A&M University Engineering
Rachel Rose
March 4, 2022
Texas A&M University doctoral student Omar Maddouri used transfer learning principles to evaluate machine learning (ML) models and developed an error estimator for the accuracy of the models in classifying datasets. The process involves creating a model with initial inferences about the model parameters in the target and source domains, then updating the model as more information about the datasets is learned. Maddouri said, "In transfer learning, we try to transfer knowledge or bring data from another domain to see whether we can enhance the task that we are doing in the domain of interest, or target domain."
The Wall Street Journal (2/2, Vanderford, Subscription Publication) reports on the controversy over the use of AI in the hiring process, which some believe could potentially exacerbate hiring biases, while proponents say such technology could actually make hiring decisions fairer. Those in favor argue that results from AI decisions are more easily audited, and cannot be influenced by factors such as common interests or superficial friendliness.
Deepfakes Trying to Change the Course of War
CNN Business
Rachel Metz
March 25, 2022
Deepfake videos, including doctored appeals by Ukraine President Volodymyr Zelensky and Russian President Vladimir Putin, are being used to manipulate the course of the war in Ukraine. Experts cite them as examples of how quickly disinformation can spread, and their growing presence further blurs the line between fact and fiction. They say deepfakes' use as tools to influence people during a war is especially insidious, because the confusion they inflict can be catastrophic. "When people really can't think very rationally and they only have a very truly short span of attention, and they see something like this, that's when it becomes a problem," explained the University at Albany's Siwei Lyu.
Inoculating Deep Neural Networks to Thwart Attacks
University of Michigan News
March 24, 2022
University of Michigan (U-M) scientists have developed the Robust Adversarial Immune-inspired Learning System (RAILS) to defend deep neural networks. "RAILS represents the very first approach to adversarial learning that is modeled after the adaptive immune system, which operates differently than the innate immune system," said U-M's Alfred Hero. RAILS was modeled after how adaptive immune systems in mice respond to an antigen, in order to emulate immune-system defenses to identify and address suspicious network inputs. The system achieved effective biomimicry, outperforming two of the most common machine learning countermeasures for adversarial attacks—Robust Deep k-Nearest Neighbor and convolutional neural networks. Using image identification as the test case, the researchers showed RAILS improved protection, including against the especially damaging Projected Gradient Descent attack.
Multiplexing Could Give Neural Networks a Big Boost
IEEE Spectrum
Charles Q. Choi
March 21, 2022
Princeton University researchers developed the DataMUX method to test whether multiplexing could accelerate neural networks' data processing and enable them to multitask. The researchers used DataMUX, which adds multiplexing and demultiplexing layers to either end of the network, on transformers, multilayer perceptrons, and convolutional neural networks to perform tasks that included image recognition, sentence classification, and named entity recognition. Transformer-based text classification yielded maximum multiplexing of 40 inputs, and an 18-fold processing speed-up with as little as 2% reduced accuracy. "We hope this can have a substantial impact on energy consumption and the environmental footprint of machine learning models, especially for computing services that process a large number of requests at a time," said Princeton's Vishvak Murahari.
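The exact layers are defined in the Princeton paper; the sketch below illustrates only the multiplexing half of the idea, under the assumption (hypothetical names throughout) that each input gets its own fixed random orthogonal transform before the results are averaged into a single vector, so one forward pass through the shared network carries many inputs at once. The trained demultiplexing heads that recover per-input predictions are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(dim, rng):
    """Fixed random orthogonal matrix (QR decomposition of a Gaussian matrix)."""
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def multiplex(embeddings, transforms):
    """Apply a distinct fixed transform to each input embedding and average,
    producing one superposed vector that stands in for all of them."""
    mixed = [t @ e for t, e in zip(transforms, embeddings)]
    return np.mean(mixed, axis=0)

dim, n_inputs = 64, 40   # 40 matches the article's maximum multiplexing
transforms = [random_orthogonal(dim, rng) for _ in range(n_inputs)]
inputs = [rng.standard_normal(dim) for _ in range(n_inputs)]

superposed = multiplex(inputs, transforms)
print(superposed.shape)   # (64,): one vector now carries 40 inputs through the network
```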
Computer Algorithm Predicts Coronary Artery Disease-related Events Based on Voice Recordings
News-Medical Life Sciences
Emily Henderson
March 24, 2022
Mayo Clinic researchers found that an artificial intelligence (AI)-based computer algorithm was able to accurately forecast coronary artery disease (CAD)-related events based on voice recordings alone. The researchers enlisted 108 patients referred for a coronary angiogram, who recorded three 30-second voice samples using the Vocalis Health smartphone application. The recordings were of a prepared text, spontaneous narratives about positive experiences, and narratives about negative experiences, respectively. The AI then evaluated the voice samples, and patients with a high voice biomarker score were found to be 2.6 times more likely to experience major CAD events, and three times more likely to exhibit plaque buildup in medical tests, than low-scoring patients. Scientists have not determined why certain voice features seem to signify CAD; Mayo Clinic's Jaskanwal Deep Singh Sara said the autonomic nervous system may be a factor.
How Native Americans Try to Debug AI’s Biases
The New York Times
Alex V. Cipolle
March 22, 2022
The annual conference for the American Indian Science and Engineering Society hosted a workshop where students created metadata to train a photo recognition algorithm to understand an image's cultural significance. The students tagged images of ceremonial sage in a seashell and a 19th-century picture of Native American children outside a boarding school, with words carrying indigenous connotations. The researchers then compared the algorithm's responses to those generated by a major image recognition application. Microsoft engineer Tracy Montieth said the app was unsuccessful because it lacked proper training data, demonstrating that such data dictates the performance of artificial intelligence (AI), and in this case was biased against marginalized cultures. Florida International University's W. Victor H. Yarlott said more accurate data makes AI systems more representative of human intelligence.
'Off-Label' Use of Imaging Databases Could Lead to Bias in AI Algorithms
Berkeley Engineering
Marni Ellery
March 21, 2022
A study by researchers at the universities of California, Berkeley (UC Berkeley) and Texas at Austin highlights how off-label use of datasets can inject bias into artificial intelligence algorithms. UC Berkeley's Michael Lustig said the researchers traced their failure to replicate the results of a medical imaging study to a preprocessed dataset used to train the algorithm. The team processed raw images using two common data-processing pipelines that impact many open-access magnetic resonance imaging databases—commercial scanner software and data storage with JPEG compression. It trained three image reconstruction algorithms on those datasets, then quantified the accuracy of the reconstructed images versus the extent of data processing. The researchers said although the algorithmically produced images look good, the inability to reproduce them with raw data highlights the risk of applying biased algorithms clinically.
When It Comes to AI, Can We Ditch the Datasets?
MIT News
Adam Zewe
March 15, 2022
Massachusetts Institute of Technology (MIT) researchers have demonstrated the use of a generative machine-learning model to produce synthetic data, based on real data, to train another model for image classification. Researchers showed the generative model millions of images containing objects in a specific class, after which it learned those objects' appearance in order to generate similar objects. MIT's Ali Jahanian said generative models also learn how to transform underlying training data, and connecting a pretrained generative model to a contrastive learning model enabled both models to work together automatically. The results show that a contrastive representation learning model trained only on synthetic data can learn visual representations that rival or top those learned from real data. In analyzing how the number of samples influenced the model's performance, researchers determined that, in some cases, generating larger numbers of unique samples facilitated additional enhancements.
Simple Mathematical Trick Could Slash AI Development Time in Half
New Scientist
Matthew Sparkes
March 10, 2022
Researchers at the U.K.'s University of Oxford have cut the two-stage back-propagation process used to train artificial intelligence (AI) models to one stage, saving time, energy, and computing power. Back-propagation involves passing data from one side of the neural network to the other through every link in the chain of artificial neurons, then working backwards to the beginning to calculate the gradient. The new method calculates an approximation of the gradient during the first pass that is close enough to be effective. Numerous tests comparing their approach with back-propagation showed the AI's performance was comparable, and that the new approach potentially could halve the amount of time needed to train AI models.
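New Scientist does not detail the Oxford method; one well-known way to approximate a gradient during a single forward-style pass is a random directional (forward-mode) estimate, sketched below with hypothetical names. It illustrates the general idea of trading the backward pass for an approximation, not the authors' exact algorithm.

```python
import numpy as np

def forward_gradient_estimate(f, x, rng, eps=1e-6):
    """Pick a random direction v, measure the slope of f along v with one
    extra forward evaluation, and scale v by that slope. Averaged over many
    random directions, this recovers the true gradient."""
    v = rng.standard_normal(x.shape)
    slope = (f(x + eps * v) - f(x)) / eps   # directional derivative along v
    return slope * v

# Toy objective f(x) = ||x||^2, whose exact gradient is 2x.
f = lambda x: np.sum(x ** 2)
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])

estimates = [forward_gradient_estimate(f, x, rng) for _ in range(20000)]
print(np.mean(estimates, axis=0))   # close to the exact gradient
print(2 * x)                        # [ 2. -4.  6.]
```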
AI to Bring Museum Specimens to the Masses
Cardiff University News (U.K.)
March 24, 2022
Scientists at Cardiff University in the U.K. are using cutting-edge artificial intelligence methods to automatically segment and capture data from museum specimens throughout Europe and conduct data quality improvement without human input. The image segmentation technique can easily and automatically locate and bound visual regions on images ranging from microscope slides to herbarium sheets with high accuracy. Automatic segmentation can be used to concentrate data capture from specific regions of a slide or sheet, and to conduct quality control on images to ensure digital copies of specimens are as accurate as possible. Cardiff's Paul Rosin said this approach "could help improve the workflows of staff working with natural history collections to drastically speed up the process of digitization in return for very little cost and resource."
Using AI-Powered Apps May Speed Up Stroke Treatment
The Pittsburgh Post-Gazette
Anya Sostek
March 20, 2022
Pittsburgh hospitals are using artificial intelligence (AI)-powered applications to identify patients at risk for stroke and to alert doctors. The University of Pittsburgh Medical Center (UPMC) is using the Viz.ai app to read computed tomography (CT) scans of patients and send smartphone alerts to doctors and nurses if it finds abnormalities. Meanwhile, Allegheny Health Network employs the RapidAI app for similar functions; after its introduction, the time between CT scans and artery unblocking procedures decreased from an average of 93 minutes to 68 minutes. UPMC's Raul Nogueira said the AI does not replace clinicians, but it can flag things faster and perhaps identify abnormalities a doctor might overlook. He added that Viz.ai and its underlying AI have been in development for roughly five years, while UPMC will expand its deployment from three hospitals in January to an additional two dozen through June.
Here Come the AI Nutritionists
The New York Times
Sandeep Ravindran
March 14, 2022
A host of applications are using artificial intelligence to make personalized diet recommendations, based on research demonstrating that individual bodies respond differently to the same foods. The DayTwo app uses a machine learning algorithm based on research by scientists at Israel's Weizmann Institute of Science to chart individual dietary regimes, in order to control blood sugar. The algorithm can identify patterns and learn from data with human assistance, analyzing information from individuals' blood sugar responses to tens of thousands of meals to recognize personal characteristics underlying certain glucose reactions. From these findings, DayTwo can forecast a specific food's effect on blood sugar, and assign scores to meals. Last year, DayTwo found that using the algorithm to match diets to an individual's microbiome and metabolism controlled blood sugar more effectively than the Mediterranean diet.
In a piece for Forbes (2/7, Eliot), Dr. Lance B. Eliot, an expert on Artificial Intelligence (AI) and Machine Learning (ML), provides an overview of arguments on whether it is better to “go limp or attempt to tighten and brace yourself” when “you know that a crash is about to occur.” Eliot goes on to discuss what an “AI-based true self-driving car” should “do when it detects an imminent and unavoidable car crash in terms of alerting any passengers therein.” Eliot concludes: “If AI ever becomes sentient, might it decide that the ‘best’ approach is to save the AI system that’s onboard the self-driving car, and ergo consider the human passengers as nothing more than added baggage? Thus, when choosing what to do, the AI will lean toward preserving the vehicle and the processors, more so than those pesky and irritating humans.”
Mac Rumors (2/7) reports Apple has acquired AI Music, “a startup that uses artificial intelligence to generate personalized soundtracks and adaptive music.” AI Music developed “an ‘Infinite Music Engine’ meant to create bespoke audio solutions for marketers, publishers, fitness professionals, and creative agencies.” The technology is able “to generate dynamic soundtracks that change based on user interaction.” Music during a workout, for example, “could change based on the wearer’s heartbeat, adjusting to workout intensity.” AI Music described it as a feature that “could allow advertisers to create audio that would match user context, like mood.” Apple could potentially “make use of the AI Music technology in several ways.”
ACM Prize in Computing Recognizes Pioneer of Robot Learning
ACM
April 6, 2022
ACM announced this week that Pieter Abbeel's pioneering contributions to next-generation robotic learning have earned him the 2021 ACM Prize in Computing. Abbeel developed apprenticeship learning methods to improve robotic manipulation of deformable objects, by introducing and combining techniques to augment robot visual perception, physics-based tracking, control, and learning from demonstration. His core contribution to deep reinforcement learning was the Trust Region Policy Optimization method, which stabilizes reinforcement learning and allows robots to learn a spectrum of simulated control skills. Said ACM President Gabriele Kotsis, "Abbeel has made leapfrog research contributions, while also generously sharing his knowledge to build a community of colleagues working to take robots to an exciting new level of ability."
Full-Color Night Vision Almost Reality After Deep Learning Breakthrough
Interesting Engineering
Grant Currin
April 6, 2022
A new deep learning algorithm can create a full-color reconstruction of a scene captured using infrared images imperceptible to the human eye, paving the way for advancements in night-vision technology. The researchers created a dataset using a monochromatic camera to take photos of printed images of color palettes and faces at various wavelengths in the visible and near-infrared spectrums. They used the dataset to test a deep learning algorithm that could infer a scene in the visible spectrum based on infrared images. The researchers found that an algorithm using a deep U-Net-based architecture reconstructed full-color photos from three infrared images that closely resembled conventional photos of the same scenes.
AI Can Create Computer Inside Itself to Run Another AI, Play Pong
New Scientist
Matthew Sparkes
March 30, 2022
University of Pennsylvania researchers have developed an approach that enables a neural network to run code like a standard computer, in an effort to speed up certain calculations. The researchers determined the effects of individual artificial neurons and used that information to assemble a neural network capable of addition and other basic tasks. To mimic the behavior of the logic gates in computer chips, the researchers connected several of these networks in chains so they could perform more complex operations, then combined the chains into a network that could run a virtual neural network and play the game Pong. Francesco Martinuzzi at Germany's Leipzig University said, "There will definitely be specific applications where these computers are going to be outperforming standard computers. And by far, I believe."
Fighting Discrimination in Mortgage Lending
MIT News
Adam Zewe
March 30, 2022
To help combat discrimination in mortgage lending, researchers at the Massachusetts Institute of Technology developed a process to remove bias from the data used to train machine learning (ML) models. The technique, called DualFair, subdivides a dataset into the largest number of subgroups based on combinations of sensitive attributes and options to eliminate label bias. DualFair evens out the number of borrowers in each subgroup by duplicating individuals from minority groups and deleting individuals from the majority group, then balances the proportion of loan acceptances and rejections in each subgroup to match the median in the original dataset before recombining them. To eliminate selection bias, DualFair iterates on each datapoint to identify discrimination, removing those found to be biased from the dataset. The researchers found their method lowered discrimination in predictions, while maintaining high accuracy.
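The description above involves duplicating minority-subgroup borrowers and deleting majority-subgroup ones; the plain-Python sketch below (hypothetical field names) shows only that subgroup-size balancing step, resampling every sensitive-attribute subgroup to a common target size. DualFair's label-ratio balancing and per-datapoint bias checks are omitted.

```python
import random
from collections import defaultdict

random.seed(0)

def rebalance(records, sensitive_keys):
    """Group records by every combination of sensitive attributes, then
    oversample small subgroups (with replacement) and downsample large ones
    so each subgroup ends up at the same target size."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in sensitive_keys)].append(r)
    target = round(sum(len(g) for g in groups.values()) / len(groups))
    balanced = []
    for g in groups.values():
        if len(g) >= target:
            balanced.extend(random.sample(g, target))                   # trim majority subgroup
        else:
            balanced.extend(g + random.choices(g, k=target - len(g)))   # duplicate minority subgroup
    return balanced

# Hypothetical mortgage applications with two sensitive attributes.
data = (
    [{"group": "A", "sex": "F", "approved": 1}] * 50 +
    [{"group": "A", "sex": "M", "approved": 1}] * 80 +
    [{"group": "B", "sex": "F", "approved": 0}] * 10 +
    [{"group": "B", "sex": "M", "approved": 1}] * 20
)
balanced = rebalance(data, sensitive_keys=("group", "sex"))
print(len(data), "->", len(balanced))   # 160 -> 160, now evenly split across the four subgroups
```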
Researchers Protect Solar Technologies from Cyberattack
UGA Today
Mike Wooten
March 28, 2022
University of Georgia (UGA) researchers unveiled a sensor system that watches power electronic converters at solar energy farms for signs of cyberattack in real time. The system can detect anomalies in a converter's operations using just one voltage sensor and one current sensor, applying deep learning methods to differentiate between normal conditions, open-circuit faults, short-circuit faults, and cyberattacks. A passive sensor linked to the power converter gathers data on electrical waveforms and feeds it to a computer monitor, and unusual activity is detectable in the converter's electrical current, even if the firewall or security software misses an attack. The system also can diagnose the nature of a problem, and the researchers said it can identify cyberattacks in a solar farm model more proficiently than current techniques.
Security Tool Guarantees Privacy in Surveillance Footage
MIT News
Rachel Gordon
March 28, 2022
A multi-institutional group of researchers has developed a system that can better guarantee privacy in video footage from surveillance cameras. When analysts submit video data queries, the Privid system adds noise to the result to prevent identification of individuals. Instead of running code over the entire video, Privid parses the video and runs code over each segment; the segments are aggregated with noise added, while data about the result's error bound also is provided. Privid lets analysts use their own deep neural networks to analyze the video, and make queries that the system's designers did not expect. The system was found to be accurate within 79% to 99% of a non-private system across different videos and queries.
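Privid's mechanism is specified in the researchers' paper; as a generic illustration of the underlying idea of answering aggregate queries with calibrated noise plus an error bound, here is a minimal differential-privacy-style sketch (the parameter values are arbitrary assumptions, not Privid's settings).

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(true_count, sensitivity=1.0, epsilon=0.5):
    """Add Laplace noise to an aggregate result. 'sensitivity' is how much any
    one person can change the count; smaller 'epsilon' means more noise.
    Also returns a bound that the noise stays within 95% of the time."""
    scale = sensitivity / epsilon
    noisy = true_count + rng.laplace(0.0, scale)
    error_bound_95 = scale * np.log(1 / 0.05)
    return noisy, error_bound_95

# Example query: how many cars passed the intersection this hour?
print(private_count(true_count=412))
```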
AI Method Could Find Precious Rare-Earth Compounds
ScienceAlert
David Nield
March 23, 2022
A predictive machine learning (ML) artificial intelligence system developed by researchers at Iowa State (ISU) and Texas A&M universities could enhance the search for new rare-earth compounds. "It's not really meant to discover a particular compound," said ISU's Yaroslav Mudryk. "It was, ‘how do we design a new approach or a new tool for discovery and prediction of rare-earth compounds?’" The researchers constructed the model using a rare-earth database and concepts from density functional theory, enabling rapid testing of hundreds of permutations, followed by assessment of each result’s phase stability. They supplemented these calculations with online data found via custom-made algorithms, prior to validation and multiple checks.
The Independent (UK) (2/9, Cuthbertson) reported that a “team from Purdue University in the US demonstrated an innovative circuit capable of rewiring itself as it learns, just like the human brain,” a breakthrough that “could transform the development of human-level artificial intelligence.” A study published in Science describes “how the chip is able to optimise its underlying hardware in order to meet new challenges as they arise” and how it can “function as a ‘grow when required’ (GWR) neural network, which allows it to grow and shrink depending on the size of the task,” optimizing its efficiency “in a way that conventional chips are unable to.” The Independent adds that the chip’s design “could also help make artificial intelligence more portable, such as for autonomous vehicles or robots, as the AI could be embedded directly into hardware rather than just running on software.”
TechCentral (2/14) reports Google Cloud “has announced several new resources and offerings to help improve customer success with artificial intelligence.” TechCentral says, “With its new Custom AI Solutions practice, Google Cloud will help organisations deploy artificial intelligence (AI) and machine learning (ML) applications more efficiently with custom-built Vertex AI solutions. ... The search giant also intends to triple the size of its global delivery centre teams in Argentina, Poland, and India by the end of the year. ... Furthermore, the company plans to build a Google Cloud talent pool in Mexico and Portugal.”
AI Strips Out City Noise to Improve Earthquake Monitoring Systems
New Scientist
Chris Stokel-Walker
April 13, 2022
Stanford University's Gregory Baroza and colleagues used a deep learning algorithm to eliminate city noise from earthquake monitors, in an attempt to fine-tune the ability to locate where tremors originate. The researchers trained the artificial intelligence on 80,000 samples of urban noise and 33,751 samples of earthquake signals to distinguish between the two. Running audio through the neural network enhanced the signal-to-noise ratio by an average of 15 decibels, triple the average of previous denoising methods. Rice University's Maarten de Hoop said one shortcoming of the approach was the network's training via supervised learning using human-labeled data sampled from one area; he said this makes the technique less likely to be effective when presented with noise from somewhere else.
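To put the 15-decibel figure in perspective, the toy snippet below computes signal-to-noise ratios in dB for synthetic stand-in waveforms (not the study's data); a 15 dB gain corresponds to cutting the noise amplitude by a factor of roughly 5.6.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(signal power / noise power)."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
quake = np.sin(2 * np.pi * 5 * t)                # stand-in for an earthquake waveform
city_noise = 0.8 * rng.standard_normal(t.size)   # stand-in for urban noise
residual = city_noise * 10 ** (-15 / 20)         # noise left after a 15 dB improvement

print(round(snr_db(quake, city_noise), 1))   # SNR before denoising
print(round(snr_db(quake, residual), 1))     # 15 dB higher
```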
Ancient Art Meets AI for Better Materials Design
Argonne National Laboratory
John Spizzirri
April 7, 2022
University of Southern California (USC) researchers combined kirigami, the ancient Japanese art of paper cutting, with autonomous reinforcement learning to help improve materials design. In an effort to create a two-dimensional molybdenum disulfide structure embedded with electronics that can stretch while remaining stable, the researchers determined that a series of precise cuts could enable the thin material to stretch up to 40%. To determine the correct combination of cuts, the researchers performed simulations on the Theta supercomputer at the U.S. Department of Energy's Argonne National Laboratory. The model was trained on 98,500 simulations of kirigami design strategies involving one to six cuts; even without additional training data, it determined in a matter of seconds that 10 cuts would provide more than 40% stretchability. USC's Pankaj Rajak said, "It learned something the way a human learns, and used its knowledge to do something different."
Printing Circuits on Nanomagnets Yields New Breed of AI
IEEE Spectrum
Dexter Johnson
April 5, 2022
Los Alamos National Laboratory (LANL) researchers have developed an artificial spin glass made of nanomagnets that is configured to mimic a neural network, paving the way for artificial intelligence algorithms to be printed as physical hardware. The artificial spin glass is composed of thin layers of iron-nickel alloy, and the positions and orientations of the microscopic bar magnets align with the interaction structure of an artificial neural network. LANL's Michael Saccone said their artificial spin glass, a proof-of-principle Hopfield neural network, is akin to a slide rule in which "the rules of the geometry encode simple arithmetic." In the architecture of a Hopfield network and the nanomagnetic system, information flows constantly between all nanomagnets in all directions. Saccone explained, "This takes a while for a sequential algorithm to simulate, but in a physical system there is no cost to the inherent parallelization. The universe just does its thing."
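The novelty here is the nanomagnet hardware; the Hopfield network it realizes is a classic model. A minimal software sketch of that model is below: two patterns are stored with a Hebbian rule and one is recovered from a corrupted cue. This illustrates the algorithm only, not the spin-glass implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two 8-"spin" patterns in a conventional Hopfield network.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]
W = (patterns.T @ patterns) / n   # Hebbian weight matrix
np.fill_diagonal(W, 0)            # no self-connections

state = patterns[0].copy()
state[[1, 2]] *= -1               # corrupt two spins of the first pattern

for _ in range(5):                # asynchronous updates settle into a stored pattern
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)                      # recovers the first stored pattern
```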
Better, Faster, Energy Efficient Predictions
Harvard University John A. Paulson School of Engineering and Applied Sciences
Leah Burrows
April 7, 2022
Researchers at Harvard University's John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed "intelligent alloys" that combine the power of computation with artificial intelligence to create models that complement predictive evolutionary simulations. SEAS' Petros Koumoutsakos and Jane Bae computed turbulent flows by combining reinforcement learning and numerical methods, using machine learning (ML) agents that interact with mathematical equations. "We take an equation and play a game where the agent is learning to complete the parts of the equations that we cannot resolve," Bae said. "The agents add information from the observations the computations can resolve and then they improve what the computation has done."
AI Is Explaining Itself to Humans. It's Paying Off.
Reuters
Paresh Dave
April 6, 2022
Startups and major technology companies are investing heavily in explainable artificial intelligence (XAI), as U.S. and EU regulators campaign for fairness and transparency in automated decision-making. XAI advocates say it has helped make AI more effective in fields such as healthcare and sales. Microsoft saw its LinkedIn subscription revenue increase 8% after providing its sales team with CrystalCandle software, which identifies clients potentially at risk of cancellation, while explaining its reasoning. Skeptics say an AI’s explanations of why it made the predictions it did are still too unreliable. LinkedIn says an algorithm's integrity cannot be judged without understanding its reasoning, while tools like CrystalCandle, for example, could help physicians learn why AI predicts someone is at greater risk of disease.
Does This AI Think Like a Human?
MIT News
Adam Zewe
April 6, 2022
Massachusetts Institute of Technology (MIT) and IBM Research scientists have developed the Shared Interest method for rapidly analyzing a machine learning model's behavior by evaluating its individual explanations. The technique uses saliency methods to highlight how the model made specific decisions, comparing them to ground-truth data. Shared Interest then applies quantifiable metrics that compare the model's reasoning to that of a human by measuring the alignment between its decisions and the ground truth, then classifying those decisions into eight categories. The method can be used for image and text classification. MIT's Angie Boggust warned that Shared Interest is only as good as the saliency methods on which it is based; if those methods are biased or inaccurate, Shared Interest will inherit their limitations.
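The precise metrics and eight categories are defined in the Shared Interest paper; the sketch below (hypothetical function names) shows the general flavor of the comparison: overlap scores between a model's saliency mask and a human-annotated ground-truth region.

```python
import numpy as np

def alignment_metrics(saliency_mask, ground_truth_mask):
    """Compare a model's salient pixels with human-annotated ground truth using
    intersection-over-union and coverage scores (an illustrative subset of the
    kinds of alignment measures such a method can compute)."""
    s = saliency_mask.astype(bool)
    g = ground_truth_mask.astype(bool)
    inter = np.logical_and(s, g).sum()
    union = np.logical_or(s, g).sum()
    return {
        "iou": inter / union,
        "ground_truth_coverage": inter / g.sum(),   # how much of the evidence the model used
        "saliency_coverage": inter / s.sum(),       # how much of the model's focus was relevant
    }

# Toy 4x4 "image": the object fills the left half; the model attends to the top half.
gt = np.zeros((4, 4), dtype=int);  gt[:, :2] = 1
sal = np.zeros((4, 4), dtype=int); sal[:2, :] = 1
print(alignment_metrics(sal, gt))   # iou 0.33, each coverage 0.5
```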
ZDNet (2/21, Foley) reports, “Microsoft’s Azure and Research teams are working together to build a new AI infrastructure service, codenamed ‘Singularity.’” According to job postings, the Singularity team is working on “a new AI platform service ground-up from scratch that will become a major driver for AI, both inside Microsoft and outside.” Microsoft researchers provided further details in a paper that says Singularity is “about providing data scientists and AI practitioners with a way to build, scale, experiment and iterate on their models on a Microsoft-provided distributed infrastructure service built specifically for AI.” The researchers wrote, “At the heart of Singularity is a novel, workload-aware scheduler that can transparently preempt and elastically scale deep learning workloads to drive high utilization without impacting their correctness or performance, across a global fleet of accelerators (e.g., GPUs, FPGAs).” ZDNet mentions that Microsoft officials have previously discussed plans involving FPGAs, or field-programmable gate arrays, including through a 2018 effort called Project Brainwave. ZDNet says, “I would guess that Singularity is the next phase in turning Brainwave into a commercially available service.”
ZDNet (2/20, Chanthadavong) reports a research team led by Stanford University “set a new Guinness World Record for the fastest DNA sequencing technique using AI computing to accelerate workflow speed.” Published in The New England Journal of Medicine, the study “involved speeding up every step of genome sequencing workflow by relying on new technology,” including using “nanopore sequencing on Oxford Nanopore’s PromethION Flow Cells to generate more than 100 gigabases of data per hour, and Nvidia GPUs on Google Cloud to speed up the base calling and variant calling processes.”
The Washington Post (2/20, Blakemore) reported research published in Nature Scientific Reports suggests artificial intelligence may help identify and classify hip fractures, finding AI classified “hip fractures 19 percent more accurately than” clinicians. For the study, investigators “had a minimum of two clinicians classify over 3,600 hip radiographs,” and found that “the algorithms’ accuracy varied depending on the type of fracture, but overall their diagnoses were accurate 92 percent of the time compared with 77.5 percent of the time for the clinicians.”
Reuters (2/16, Foo) reports the European Union’s “effort to set a standard for artificial intelligence will likely take more than a year, with the debate focusing on whether facial recognition should be banned and who should enforce the rules, two key lawmakers said on Wednesday.” Parliament could “agree on a common position in November, kicking off talks with EU countries that could take a year and a half, Dragos Tudorache, one of two lawmakers steering the issue, told Reuters in an interview.” Tudorache “said facial recognition is a key topic.” While the European Commission wants to permit law enforcement to use the technology “in terror attacks and serious crime,” civil rights activists are worried “it could facilitate discrimination and surveillance by governments and companies.” Lawmaker Axel Voss “agreed that facial recognition should be allowed with safeguards in place.”