ML Model Shows Diamond Melting at High Pressure
Sandia National Laboratories
January 26, 2022
The Spectral Neighbor Analysis Potential (SNAP) machine learning model developed by Sandia National Laboratories researchers predicted the behavior of billions of interacting atoms to simulate diamond melting at high pressures and temperatures. A team of researchers used the Summit supercomputer at the U.S. Department of Energy's Oak Ridge National Laboratory to perform the diamond-specific modeling in a day. The researchers trained SNAP as a surrogate model that replicates atomic forces estimated via quantum mechanical calculations, then scaled it up to predict forces and accelerations for a micron-sized piece of compressed diamond containing billions of atoms. By performance-optimizing the software to run on supercomputers like Summit, the researchers cut the simulation's runtime by 97%, said Sandia's Aidan Thompson.
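The article gives no implementation details; as a rough, hypothetical illustration of the surrogate idea (fit a cheap model to quantum-mechanical reference forces, then evaluate it at scale), here is a minimal Python sketch in which all names, sizes, and the linear form are invented:

```python
import numpy as np

# Hypothetical stand-in for the SNAP idea: fit a cheap model from per-atom
# geometric descriptors to forces computed by expensive quantum mechanics.
rng = np.random.default_rng(0)
n_atoms, n_desc = 5000, 55                    # sizes invented for illustration
D = rng.normal(size=(n_atoms, n_desc))        # descriptors of local atomic environments
beta_true = rng.normal(size=n_desc)
f_qm = D @ beta_true + 0.01 * rng.normal(size=n_atoms)  # "quantum" reference forces

# Least-squares fit; the trained coefficients can then be evaluated cheaply
# inside a molecular dynamics code for vastly larger systems.
beta, *_ = np.linalg.lstsq(D, f_qm, rcond=None)
print("force RMSE:", np.sqrt(np.mean((D @ beta - f_qm) ** 2)))
```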
Calculating the Best Shapes for Things to Come
University of Michigan News
January 25, 2022
University of Michigan (U-M) and Northeastern University researchers have created an algorithm for designing structures that maximize performance and efficiency. The algorithm applies "nongradient" optimization techniques and expedites them by first approximating the system via machine learning, then employing a self-directed learning framework. U-M's Changyu Deng said, "This research can dramatically accelerate non-gradient optimizers for topology optimization to make nongradient methods feasible. In this way, more complicated problems can be tackled." When tested in four optimization scenarios, the tool reduced the computational time needed to reach the best solution by a factor of 100 to 100,000 relative to traditional methods, and outperformed other state-of-the-art algorithms.
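A minimal sketch of the general surrogate-plus-nongradient-search pattern the summary describes, not the team's actual algorithm; the objective function, features, and random search here are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_sim(x):                  # pretend this is a slow physics solver
    return np.sum((x - 0.3) ** 2, axis=-1)

# 1) Sample a few expensive evaluations and fit a cheap surrogate
#    (here: quadratic features with least squares).
X = rng.uniform(-1, 1, size=(200, 5))
y = expensive_sim(X)
Phi = np.hstack([X, X ** 2, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def surrogate(x):
    phi = np.hstack([x, x ** 2, np.ones((len(x), 1))])
    return phi @ w

# 2) Run a non-gradient search on the cheap surrogate instead of the solver.
cand = rng.uniform(-1, 1, size=(100_000, 5))
best = cand[np.argmin(surrogate(cand))]
print("best design:", best, "true cost:", expensive_sim(best))
```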
How AI Can Identify People Even in Anonymized Datasets
Science News
Nikk Ogasa
January 25, 2022
Artificial intelligence (AI) can identify people in anonymized datasets by studying patterns in their weekly social interactions, according to researchers at the U.K.'s Imperial College London and University of Oxford. The researchers structured mobile phone interaction data on 43,606 anonymous phone service subscribers into web-shaped configurations of nodes representing each user and their contacts, with the connections between nodes carrying interaction data. When shown the interaction web of a known individual, the AI sifted the anonymized dataset for the most similar-looking web, and correctly identified the target more than half of the time. When supplied with interaction data on the target and their contacts collected 20 weeks after the anonymous dataset, the AI correctly identified users 24.3% of the time, suggesting social behavior remains identifiable for long durations.
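The researchers' actual system learned from graph representations; a crude nearest-neighbor stand-in can illustrate the matching step, with every feature choice and count below invented for the sketch:

```python
import numpy as np

# Summarize each user's interaction "web" as a feature vector (e.g., weekly
# call/text counts per contact rank), then pick the anonymous vector most
# similar to fresh data on a known target.
rng = np.random.default_rng(2)
anon = rng.poisson(3.0, size=(43_606, 20)).astype(float)  # anonymized users x features
target = anon[1234] + rng.normal(0, 0.5, size=20)         # new data on a known person

norms = np.linalg.norm(anon, axis=1) * np.linalg.norm(target)
scores = anon @ target / (norms + 1e-9)                   # cosine similarity
print("best match index:", int(scores.argmax()))          # ideally recovers user 1234
```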
AI Technology Diagnoses COVID-19 'in Minutes'
University of the West of Scotland (U.K.)
January 19, 2022
An artificial intelligence (AI) technique developed by researchers at the U.K.'s University of the West of Scotland (UWS) can diagnose COVID-19 in a matter of minutes, versus two hours for a PCR test. The technique uses a deep convolutional neural network to compare x-ray scans against a database of about 3,000 images from healthy people, COVID-19 patients, and viral pneumonia patients, and to make the diagnosis. In testing, the researchers found the technique was more than 98% accurate. Said UWS' Naeem Ramzan, "COVID-19 symptoms are not visible in x-rays during the early stages of infection, so it is important to note that the technology cannot fully replace PCR tests. However, it can still play an important role in curtailing the virus' spread, especially when PCR tests are not readily available."
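Purely as an illustration of the kind of classifier described, a skeletal three-class convolutional network (the layers, sizes, and input resolution are invented, not UWS's model):

```python
import torch
import torch.nn as nn

# Toy three-class chest x-ray classifier: healthy / COVID-19 / viral pneumonia.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 3),           # for assumed 224x224 grayscale scans
)
xray = torch.randn(8, 1, 224, 224)        # a batch of dummy scans
logits = model(xray)
print(logits.argmax(dim=1))               # predicted class per scan
```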
AI Learns to Create Images from Text Descriptions by Destroying Data
New Scientist
Matthew Sparkes
January 18, 2022
Researchers at OpenAI have developed a new artificial intelligence (AI) model that creates accurate images from textual descriptions, yet is smaller and produces better results than similar software the company released last year. The new GLIDE model has only 3.5 billion parameters, down from last year's DALL-E program's 12 billion. While DALL-E was trained on a large set of images with associated captions, GLIDE uses a diffusion model in which the neural network is still trained on a large set of images, but noise is added to destroy those images. From this process, GLIDE ultimately learned how to create a photorealistic image matching a text description from an input that is just noise. GLIDE's images were preferred by human judges over DALL-E's 87% of the time for their photorealism, and 69% of the time for their accuracy in matching the text input.
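The core training idea, in a deliberately simplified form (a single toy noise schedule, no text conditioning, and a stand-in denoiser rather than GLIDE's architecture):

```python
import torch
import torch.nn as nn

# Diffusion training in miniature: corrupt images with noise, train a network
# to predict the noise that was added.
net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 3, 3, padding=1))       # toy denoiser
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

images = torch.rand(16, 3, 32, 32)                        # dummy training batch
t = torch.rand(16, 1, 1, 1)                               # random noise level per image
noise = torch.randn_like(images)
noisy = (1 - t) * images + t * noise                      # destroy the data with noise

loss = ((net(noisy) - noise) ** 2).mean()                 # predict the added noise
opt.zero_grad(); loss.backward(); opt.step()
# At generation time, one starts from pure noise and iteratively removes the
# predicted noise; GLIDE additionally conditions each step on the text prompt.
```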
Moon's Hidden Depths Uncovered with Algorithm
Scientific American
Connie Chang
February 1, 2022
An international team of scientists developed a deep learning algorithm to more deeply explore permanently shadowed regions (PSRs) of the Moon, and to image extremely small geologic features. The researchers trained the algorithm on over 70,000 images of PSRs, coupled with data about the camera's temperature and orbital position, in order to identify and screen out camera noise. They then fed the algorithm millions of sunlit lunar photos paired with simulated versions in shadow, to address residual noise. The researchers used the algorithm to analyze the size and distribution of craters and boulders in several PSRs that might be explored by the U.S. National Aeronautics and Space Administration's Artemis lunar program.
Researchers Use AI to Analyze Tweets Debating Vaccination, Climate Change
University of Waterloo News (Canada)
January 17, 2022
Researchers at Canada's universities of Waterloo and Guelph used artificial intelligence (AI) to analyze tweets about climate change and vaccination, and found uniformity of online opinion on the first topic but broad disagreement on the second. About 87 million tweets posted between 2007 and 2016 were analyzed, with the AI ranking each as pro, anti, or neutral on the issues before classifying users in those same categories. Said Waterloo’s Chris Bauch, “We expected to find that user sentiment and how users formed networks and communities to be more or less the same for both issues. But actually, we found that the way climate change discourse and vaccine discourse worked on Twitter were quite different.” Analysis indicated individuals and online communities holding diverse sentiments toward vaccination interacted less than those involved in the climate change debate.
Researchers Develop CAPTCHA Solver to Aid Dark Web Research
BleepingComputer
Bill Toulas
January 14, 2022
Researchers at the Universities of Arizona, Georgia, and South Florida have devised a machine learning-based CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) solver to plumb the dark web. The system interprets rasterized images directly, unlike earlier solvers that relied on generative adversarial networks. The solver differentiates letters and numbers by studying them individually: it denoises each character image, identifies borders between letters, and partitions content into individual characters; it thwarts countermeasures by using samples extracted across multiple local regions to spot refined features like lines and edges. The authors have published the solver's final version on GitHub, without releasing its training dataset.
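A toy sketch of the character-partitioning stage described above, assuming a binarized image in which blank columns separate letters; the real solver's segmentation is more sophisticated:

```python
import numpy as np

# Split a denoised, binarized CAPTCHA into single characters by finding
# empty columns between letters; each piece would then go to a classifier.
def segment(captcha: np.ndarray) -> list[np.ndarray]:
    ink_per_column = captcha.sum(axis=0)          # how much "ink" each column holds
    is_gap = ink_per_column == 0                  # blank columns separate characters
    chars, start = [], None
    for x, gap in enumerate(is_gap):
        if not gap and start is None:
            start = x                             # a character begins
        elif gap and start is not None:
            chars.append(captcha[:, start:x])     # a character ends
            start = None
    if start is not None:
        chars.append(captcha[:, start:])
    return chars

img = np.zeros((20, 30), dtype=int)               # synthetic 3-character image
img[5:15, 2:8] = 1; img[5:15, 12:18] = 1; img[5:15, 22:28] = 1
print(len(segment(img)), "characters found")      # -> 3
```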
Screening Tool Identifies Older-Appearing Brains Typical of Dementia
King's College London (U.K.)
January 14, 2022
A machine learning tool developed by researchers at the U.K.'s King's College London can predict the age of a human brain relative to the rest of the population by analyzing magnetic resonance imaging (MRI) brain scans. The tool predicts a patient's correct age so long as their brain's volume loss over time is consistent with their age. If the tool shows a mismatch between the patient's real and predicted ages, it can help clinicians diagnose conditions that can cause the disparity. Said King's College's Dr. Tom Booth, "Automatically detecting volume loss in real time helps screen for the common problem of neurodegeneration during scans obtained for all reasons."
AI Turned a Rembrandt Masterpiece into 5.6 Terabytes of Data
Popular Science
Purbita Saha
January 11, 2022
The Rijksmuseum in the Netherlands has posted an ultra-high-resolution image of Rembrandt's "The Night Watch," constructed from 8,439 photos taken with a 100-megapixel HD camera that were digitized, color-corrected, and stitched together by algorithms. The Rijksmuseum said it is the largest digital image of an art piece ever created, at 717 billion pixels and 5.6 terabytes of data. Visitors to the museum's website can zoom in on details at fine resolution; the distance between neighboring pixels is just 0.005 millimeter. The digital version of "The Night Watch" was created following a two-year restoration process in which artificial intelligence was used to restore missing elements from the original painting.
Deep Learning Model Simulates Brain Topography
Carnegie Mellon University
Caroline Sheedy
February 3, 2022
A deep learning model developed by Carnegie Mellon University (CMU) researchers can simulate the spatial organization, or topography, of the brain's inferotemporal cortex, to further understanding of the impact of damage to that area. Understanding how neighboring clusters of brain tissue are organized and interact could help researchers develop better treatments. After the Interactive Topographic Network model was trained to recognize images from different domains (faces, objects, and scenes), it produced selective spatial areas for each domain, as in the brain. The researchers observed a decline in the model's ability to recognize faces, objects, and scenes after it simulated lesions in each domain. Said CMU's Nicholas Blauch, "It shows us that the specialization within these networks can be strong, but also somewhat mixed." Blauch said that implies "that it may be better thought of as one system with internal specialization, rather than a collection of independent modules."
AI Could Cut Hiring Biases as Companies Make Push to Find Workers, Proponents Say
The Wall Street Journal
Richard Vanderford
February 2, 2022
With more companies launching widespread recruitment campaigns amid a worker shortage, some proponents of artificial intelligence (AI) say its use can make hiring decisions faster and fairer. This comes even as regulators and other AI experts warn the algorithms can learn from, and then perpetuate, human biases. Said Kevin Parker of hiring technology firm HireVue, "When you can ask the candidates exactly the same question about the skills associated with their job, you get a much fairer outcome ... and diversity improves as a result of that." On the other hand, McGill University’s Matissa Hollister said, “There’s a growing realization that these tools can exacerbate bias,” adding, “I don’t know how many times I’ve heard, ‘Keep the humans in human resources’.”
Demystifying ML Systems
MIT News
Adam Zewe
January 27, 2022
Researchers at the Massachusetts Institute of Technology (MIT) have developed a system that aims to understand the inner workings of black box neural networks by automatically producing natural language descriptions of the individual "neurons" that process data. The descriptions were found to be more accurate and specific than those produced by other methods. The researchers used the MILAN (mutual-information guided linguistic annotation of neurons) system to determine the most important neurons in a neural network. MILAN also was used to audit models to determine whether they learned something unexpected, and to edit neural networks by removing neurons that detect bad correlations in the data. Said MIT's Sarah Schwettmann, "We want to tap into the expressive power of human language to generate descriptions that are a lot more natural and rich for what neurons do."
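The editing step might look conceptually like the following ablation sketch; this is not MILAN itself, and the model, data, and flagged neuron index are all hypothetical:

```python
import torch
import torch.nn as nn

# Remove a neuron flagged as detecting a bad correlation by zeroing its
# outgoing weights, then check how predictions change.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(256, 10)

before = model(x).argmax(dim=1)
bad_neuron = 7                                  # index flagged by a MILAN-like audit
with torch.no_grad():
    model[2].weight[:, bad_neuron] = 0.0        # sever the neuron's influence
after = model(x).argmax(dim=1)
print("fraction of predictions changed:", (before != after).float().mean().item())
```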
ML Fine-Tunes Flash Graphene
Rice University News
January 31, 2022
Researchers at Rice University and the University of Missouri are using machine learning (ML) to refine graphene synthesis from waste via flash Joule heating. The ML models adjust to variables in feedstock, and formulate ways to optimize flash procedures. Said Rice's James Tour, whose laboratory discovered the synthesis method, "Machine learning algorithms will be critical to making the flash process rapid and scalable without negatively affecting the graphene product's properties." Tour and colleagues followed the lead of materials scientists who have incorporated ML into their everyday discovery processes, in order to parse out flash Joule heating's chemical reactions. Tour said the synergy between ML and flash heating "made it possible to synthesize graphene from scrap material based entirely on the models' understanding of the Joule heating process."
Physical Systems Perform ML Computations
Cornell Chronicle
David Nutt
January 26, 2022
Cornell University researchers have trained physical systems to execute generic machine learning computations, demonstrating an early but viable substitute for conventional electronic processors. The training process enabled demonstrations with mechanical, optical, and electrical physical systems. The mechanical system involved a titanium plate positioned atop a speaker to create a driven multimode mechanical oscillator; the optical system beamed a laser through a nonlinear crystal to convert the incoming light's colors into new colors by combining photon pairs; and the electrical system harnessed an electronic circuit with a resistor, a capacitor, an inductor, and a transistor. The researchers fed each system pixels of an image of a handwritten number, encoded in a light pulse or an electrical voltage, and each system returned a similar type of optical pulse or voltage as its output. "It turns out you can turn pretty much any physical system into a neural network," said Cornell's Peter McMahon.
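A toy caricature of the idea, standing in for the team's actual physics-aware training: the "physical system" is replaced here by a differentiable simulation (an arbitrary sine transform) whose controllable parameters are tuned by gradient descent to perform a classification:

```python
import torch

torch.manual_seed(0)
W = torch.randn(4, 2, requires_grad=True)       # controllable system parameters
opt = torch.optim.Adam([W], lr=0.1)

x = torch.randn(100, 4)                         # inputs encoded as pulses/voltages
labels = (x[:, 0] > 0).long()                   # toy classification task

for _ in range(200):
    out = torch.sin(x @ W)                      # made-up nonlinear "physical" response
    loss = torch.nn.functional.cross_entropy(out, labels)
    opt.zero_grad(); loss.backward(); opt.step()
print("train accuracy:",
      (torch.sin(x @ W).argmax(1) == labels).float().mean().item())
```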
Lego Robot with Organic 'Brain' Learns to Navigate Maze
Scientific American
Saugat Bolakhe
January 28, 2022
Researchers at Germany's Max Planck Institute for Polymer Research and the Netherlands' Eindhoven University of Technology have built a maze-solving robot based on an organic neuromorphic chip. The team constructed and incorporated a circuit of organic polymer transistors into a Lego Mindstorms EV3 robot. The chip controls the direction in which the robot's wheels move. The researchers created a honeycomb-like maze for the robot to navigate. The polymer circuit received a corrective stimulus whenever the machine made a wrong turn, which helped train the robot to make binary decisions toward solving the maze. "The device learns in the same way we teach kids, giving rewards if they are correct or not rewarding if they are wrong," said City University of Hong Kong's Arindam Basu.
Good News About the Carbon Footprint of ML Training
Google AI Blog
David Patterson
February 15, 2022
Scientists at Google and the University of California, Berkeley, investigating how to reduce the energy and carbon footprint of machine learning (ML) training of natural language processing models, identified four best practices that together can cut energy use and carbon emissions "significantly." According to David Patterson, ACM A.M. Turing Award recipient and a Distinguished Engineer at Google Research, these model, machine, mechanization, and map optimization (4M) practices are based on selecting more-efficient ML model architectures and employing ML-optimized processors and systems to enhance performance and energy efficiency. Other helpful practices include reducing emissions through the use of cloud computing, and selecting datacenter locations that offer the cleanest energy. Patterson said the 4M practices together can reduce the energy requirements of computing in datacenters by 100 times, and emissions by 1,000 times.
Algorithm Could Shorten Quality Testing, Research in Many Industries by Months
Sandia National Laboratories
February 15, 2022
Researchers at Sandia National Laboratories and Ohio State University have developed a machine-learning algorithm, called MAD3, that could make industrial testing of bulk materials faster and less expensive. Said Sandia's Montes de Oca Zapiain, "We've trained the model to understand the relationship between crystallographic texture and anisotropic mechanical response. You need an electron microscope to get the texture of a metal, but then you can drop that information into the algorithm, and it predicts the data you need for the simulation software without performing any mechanical tests." The algorithm, a feed-forward neural network, was trained on the results of 54,000 simulated materials tests. Said Sandia's Hojun Lim, "The developed algorithm is about 1,000 times faster compared to high-fidelity simulations."
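Purely as an illustration of the setup described (a feed-forward network trained on simulated tests to map texture to mechanical response), with all shapes and data invented:

```python
import torch
import torch.nn as nn

# Hypothetical surrogate: texture descriptor in, anisotropic response out.
net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 6))             # e.g., six response parameters
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

texture = torch.randn(54_000, 128)                # features from simulated tests
response = torch.randn(54_000, 6)                 # simulated mechanical responses
for _ in range(10):                               # a few full-batch training steps
    loss = ((net(texture) - response) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("surrogate MSE:", loss.item())
```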
Research Advances Technology of AI Assistance for Anesthesiologists
MIT News
David Orenstein
February 14, 2022
Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital researchers trained a machine learning algorithm to continuously automate the administration of propofol during surgery to optimize dosage and keep patients unconscious during general anesthesia. The researchers equipped the algorithm with two neural networks: an "actor" that determined dosages in real time, and a "critic" that helped the actor maximize "rewards" directed by the programmer. The most effective reward framework was a "dose penalty" setup, in which the critic questioned every dose the actor administered, continuously scolding it to keep dosing to a necessary minimum to maintain unconsciousness. The algorithm outperformed more conventional software in physiology-based patient models. MIT's Emery N. Brown said, "Algorithms such as this one allow anesthesiologists to maintain more careful, near-continuous vigilance over the patient during general anesthesia."
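A hedged single-update sketch of the dose-penalty actor-critic scheme described above; the networks, state features, and toy sedation signal are stand-ins, not the MIT/MGH model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
actor = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
critic = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 1))
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

state = torch.randn(64, 4)                      # e.g., processed brain-activity features
with torch.no_grad():
    dose = actor(state)                         # nonnegative propofol infusion rate
    unconscious = torch.sigmoid(-state[:, :1])  # toy "probability of unconsciousness"
    reward = unconscious - 0.5 * dose           # dose penalty: every unit of drug costs

# The critic learns to score (state, dose) pairs by predicted reward...
value = critic(torch.cat([state, dose], dim=1))
critic_loss = ((value - reward) ** 2).mean()
c_opt.zero_grad(); critic_loss.backward(); c_opt.step()

# ...and the actor adjusts its dosing policy to maximize the critic's score.
actor_loss = -critic(torch.cat([state, actor(state)], dim=1)).mean()
a_opt.zero_grad(); actor_loss.backward(); a_opt.step()
```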
Fake Faces Created by AI Look More Trustworthy than Real People
New Scientist
Christa Lesté-Lasserre
February 14, 2022
Researchers at the U.K.'s Lancaster University and the University of California, Berkeley found that people have a hard time distinguishing images of human faces created by artificial intelligence from images of real faces. The researchers asked a group of 315 people to distinguish a selection of 400 fake photos from 400 photos of real people; they were accurate less than half (48.2%) of the time (a second group trained to recognize computer-generated faces did slightly better, with an accuracy rate of 59%). The researchers found that white faces were hardest for participants to distinguish, possibly because the software was trained on disproportionately more white faces.
Researchers Train Neural Network to Recognize Chemical Formulas From Research Papers
Skolkovo Institute of Science and Technology (Russia)
February 14, 2022
Scientists at Russia's Syntelly automation startup, Lomonosov Moscow State University, and the Sirius University of Science and Technology have trained a neural network to automatically identify chemical formulas in research papers. The researchers used Google's Transformer machine translation neural network to convert images of molecules or molecular templates into textual representations named Functional-Group-SMILES. The network was able to learn almost anything it was given, as long as the pertinent depiction style was represented in the training data. The researchers also designed a data generator to produce examples of molecular templates by blending randomly chosen molecule fragments and depiction styles. Syntelly's Sergey Sosnin said, "Our study is a good demonstration of the ongoing paradigm shift in the optical recognition of chemical structures."
How NFL's Digital Athlete Uses AI to Reduce Concussions, Injuries
New Scientist
February 13, 2022
The National Football League (NFL) and Amazon Web Services have created the Digital Athlete, an artificial intelligence tool that uses TV imagery and sensors in football attire to help reduce injuries in American football games. The system generates a digital replica of an athlete in a virtual environment and uses machine learning and computer vision to identify impacts and injuries to the virtual body. Priya Ponnapalli at Amazon Machine Learning Solutions Lab said the virtual setting allows limitless game scenarios and environmental conditions to be experienced, "giving the ability to test out new safety equipment, test out rule changes, and predict player injury events and recovery trajectories eventually." Said the NFL's Jeff Miller, "Having the computers understand how many times a player hits his helmet during the course of a game [helps] find ways to reduce the amount of helmet contact."
AI Pioneer Judea Pearl Receives BBVA Foundation Frontiers of Knowledge Award
UCLA Samueli School of Engineering
February 11, 2022
Judea Pearl at the University of California, Los Angeles has received Spain's 2021 Banco Bilbao Vizcaya Argentaria (BBVA) Foundation Frontiers of Knowledge Award in the information and communication technologies category "for laying the foundations of modern artificial intelligence, so computer systems can process uncertainty and relate causes to effects." Pearl, a recipient of ACM's A.M. Turing Award, invented Bayesian networks, which enable computers to account for uncertainty, as well as a causation calculus that allows empirical scientists to measure and extract cause-and-effect relationships from statistical data and auxiliary knowledge. "Beyond its broader impact across science and engineering, this formulation is playing a critical role in current research in fairness in machine learning algorithms," the BBVA Foundation said.
Sony AI Drives Race Car Like a Champ
Wired
Will Knight
February 9, 2022
Researchers at Sony and Polyphony Digital say their GT Sophy is the first artificial intelligence (AI) able to beat professional esports players in the motorsport game Gran Turismo. While AI has mastered board games like chess or Go, mastering video games like Gran Turismo requires high-speed reflexes and the ability to make continuous judgments. Stanford University's Chris Gerdes said, "Outracing human drivers so skillfully in a head-to-head competition represents a landmark achievement for AI." Gerdes said the techniques used to develop GT Sophy, which mastered Gran Turismo through hours of practice, could be applied to the development of autonomous cars. However, University of Massachusetts Amherst's Bruno Castro da Silva said it is difficult to guarantee the safety and reliability of reinforcement learning algorithms like GT Sophy in the real world. Said da Silva, "A lack of safety guarantees is one of the main reasons why machine learning-based robots are not yet widely used in factories and warehouses."
Eye Provides Clues to Vascular Disease
University of Bonn (Germany)
February 11, 2022
Atherosclerosis (hardening of the arteries) could be diagnosed through the use of self-learning software devised by researchers at Germany's University of Bonn and University Hospital of Bonn to identify vascular changes in images of the eye. The researchers fed training images of the eyes of patients with early-stage peripheral arterial disease (PAD) to a convolutional neural network (CNN). The University of Bonn's Thomas Schultz said the software correctly identified "a good 80% of all affected individuals, if we took into account 20% false positives." He added that this is impressive "because even for trained ophthalmologists, PAD can't be detected from fundus images." Further analysis showed the CNN focuses especially on large vessels in the back of the eye, although digital images needed to be of sufficiently high resolution to realize optimal results.
Fusion Race Kicked Into High Gear by Smart Tech
BBC News
Paul Rincon
February 10, 2022
U.S. company TAE Technologies is stepping up the race to practical fusion energy by tapping Google's machine learning (ML) expertise to enhance its self-improving software. The ML expertise is used to optimize TAE's 100-foot-long fusion cylinder, where fast-moving particles are fused within plasma. TAE's Michl Binderbauer said, "We can now optimize in fractions of an afternoon" tasks that previously took two months. ML also is used to reconstruct what happens during fusion experiments, by pulling together multiple strands of data. Jeremy Chittenden at the U.K.'s Imperial College London said TAE's work differs from other fusion experiments by firing external particle beams into plasma to generate particles, rather than relying on the heat of the plasma itself.
AI Efficiently Fixes Mistakes in Error-Prone Quantum Computers
New Scientist
Matthew Sparkes
February 10, 2022
Artificial intelligence (AI) can eliminate errors from a group of quantum bits, thanks to Lorenzo Cardarelli and colleagues at Germany's RWTH Aachen University. The researchers used a small quantum computer model to generate a quantum version of a machine learning autoencoder algorithm. They fed data into the system to see which errors it produced, creating pairs of correct and error-saturated data with which they trained the quantum autoencoder to detect and fix errors. Cardarelli said the AI could be trained on a specific quantum computer and learn to correct errors unique to that system. "This is almost like custom designing an error-correction code according to the characteristics of the particular hardware," said Sonika Johri at U.S.-based quantum computing firm IonQ.
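A classical analog of the approach, making no claims about the actual quantum circuit: train a small network on pairs of corrupted and correct codewords so that it absorbs a device's specific error statistics (here, a 3x repetition code with random bit flips, all invented for the sketch):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
decoder = nn.Sequential(nn.Linear(12, 24), nn.ReLU(), nn.Linear(24, 4), nn.Sigmoid())
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)

bits = (torch.rand(512, 4) > 0.5).float()
clean = bits.repeat_interleave(3, dim=1)            # each logical bit stored 3 times
noisy = (clean + (torch.rand(512, 12) < 0.05)) % 2  # 5% bit flips: the "device errors"
for _ in range(300):
    loss = nn.functional.binary_cross_entropy(decoder(noisy), bits)
    opt.zero_grad(); loss.backward(); opt.step()
print("decoded-bit accuracy:",
      ((decoder(noisy) > 0.5).float() == bits).float().mean().item())
```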
Facial Analysis Improves Diagnosis
University of Bonn (Germany)
February 10, 2022
The latest version of the "GestaltMatcher" artificial intelligence (AI) system, created by an international team of researchers led by Germany's University of Bonn, can diagnose rare genetic diseases with greater accuracy than its predecessor, DeepGestalt, through the use of facial analysis. The AI focuses on certain facial characteristics, calculates similarities, and automatically connects them to clinical symptoms and genetic data of patients. GestaltMatcher requires fewer patients as a reference for feature matching than DeepGestalt, and considers similarities with as-yet-undiagnosed patients, so it can "recognize" and suggest diagnoses for previously unknown diseases. The team trained the system on 17,560 patient photos; said University Hospital Bonn's Peter Krawitz, "This wide variation in appearance trained the AI so well that we can now diagnose with relative confidence even with only two patients as our baseline at best, if that's possible."
Researchers Make Chip That Can Be Rewired Like the Human Brain
Silicon Republic
Leigh McGowran
February 9, 2022
A multi-institutional research effort led to the invention of a reprogrammable-on-demand electronic chip, which could eventually lead to the creation of a computer that learns continuously, like the human brain. The chip is made from hydrogen-sensitive perovskite nickelate, which lets it adapt and learn in a way similar to the brain. The researchers applied electrical impulses at different voltages to refine the concentration of hydrogen ions on the chip, generating states that could be mapped out to corresponding brain functions. "Using our reconfigurable artificial neurons and synapses, simulated dynamic networks outperformed static networks for incremental learning scenarios," the researchers explained. "The ability to fashion the building blocks of brain-inspired computers on demand opens up new directions in adaptive networks."
Fingerprinting the IoT
Carnegie Mellon University College of Engineering News
Madison Brewer
February 9, 2022
Carnegie Mellon University (CMU) researchers have strengthened the security of Internet of Things (IoT) devices against exploitation by developing radio-frequency fingerprinting (RFF), which identifies specific IoT devices by detecting hardware variations that produce unique radio wave signatures. CMU's Jiachen Xu used power amplifiers to foil RFF exploits by changing the IoT signal's features, and a convolutional neural network classified incoming signals as safe or unsafe by assessing the RFF in the processed signal. The researchers also showed Bayesian neural networks could identify and classify RFF quickly and accurately, without requiring excessive computational power.
The Washington Post (1/17, Zeitchik) reports that “from Facebook algorithms that learn how to stoke anger to facial recognition apps that don’t recognize people of color, AI frequently offers less of an improvement on the status quo than an insidious reinforcement of it.” Now a Silicon Valley upstart “says he has a fresh approach to the problem.” Alan Cowen, “a former Google data scientist with a background in psychology, has created a research company, Hume AI, and a companion not-for-profit that he says can help make the whole messy business of AI more empathetic and human.” By getting “trained on hundreds of thousands of facial and vocal expressions from around the world, artificial intelligence on the Hume platform can react to how users are truly feeling and cater more closely to their emotional needs, Cowen said.” He said he “hopes the platform will eventually be incorporated into consumer applications like viral videos and digital assistants.”
The Benefits of Peripheral Vision for Machines
MIT News
Adam Zewe
March 2, 2022
Massachusetts Institute of Technology (MIT) scientists have demonstrated a computer-vision model that perceives visual representations similarly to human peripheral vision. The researchers, investigating why adversarially trained neural networks are robust against image manipulation, tasked study participants with distinguishing original images from noise-synthesized versions generated by a "normal" machine learning model, an adversarially robust model, and one called Texforms that accounts for certain aspects of human peripheral processing. MIT's Arturo Deza said participants' inability to tell original images apart from those of the adversarially robust model or the Texforms model implies that the former captures some aspects of peripheral vision. Thomas Wallis at Germany's Technical University of Darmstadt said the researchers "propose that the same mechanism of learning to ignore some visual input changes in the periphery may be why robust images look the way they do, and why training on robust images reduces adversarial susceptibility."
Here Comes the Full Amazonification of Whole Foods
The New York Times
Cecilia Kang
February 28, 2022
Amazon has almost completely automated a Whole Foods store in Washington, DC's Glover Park neighborhood. The store incorporates Just Walk Out technology, a network of cameras, sensors, and deep learning software that analyzes shopping habits. Shoppers can activate virtual shopping by scanning their palms at kiosks or by scanning quick response codes in the Amazon phone app. Just Walk Out detects when shoppers lift sensor-affixed products, itemizes their picks, and charges their Amazon account when they exit the store, skipping checkout lines. Amazon, which has tested such automation for over four years, plans to open a second prototype automated Whole Foods store in Los Angeles this year.
Fingertip Sensitivity for Robots
Max Planck Institute for Intelligent Systems (Germany)
February 24, 2022
A thumb-shaped sensor with a camera inside, developed by researchers at Germany's Max Planck Institute for Intelligent Systems (MPI-IS), uses computer vision and a deep neural network to infer haptic contact information. The Insight system generates a three-dimensional force map when the thumb is touched, estimating where objects come into contact with the sensor and the amount of force applied. Said MPI-IS's Huanbo Sun, "Our unique hybrid structure of a soft shell enclosing a stiff skeleton ensures high sensitivity and robustness. Our camera can detect even the slightest deformations of the surface from one single image."
Deep Learning Technique Predicts Clinical Treatment Outcomes
MIT News
Lauren Hinkel
February 24, 2022
Massachusetts Institute of Technology (MIT) and IBM researchers have developed a deep learning predictive methodology that models counterfactual, time-varying, and dynamic clinical treatment strategies, enabling doctors to select the best treatment option for patients. The G-Net technique is based on the g-computation algorithm, a causal inference method that calculates the impact of dynamic exposures in the presence of measured confounding factors. G-Net applies recurrent neural networks, which feature node connections that allow them to better simulate temporal sequences with complex and nonlinear dynamics. MIT's Li-wei Lehman described G-Net as the first deep learning g-computation-based process "that can predict both the population-level and individual-level treatment effects under dynamic and time-varying treatment strategies."
Robust Radar: AI Sensor Technology for Autonomous Driving
Graz University of Technology (Austria)
Christoph Pelzl
February 23, 2022
An artificial intelligence (AI) system for automotive radar sensors developed by researchers at Austria's Graz University of Technology (TU Graz) filters out interfering signals from other radar sensors to improve object detection. The researchers built model architectures for automatic noise suppression based on convolutional neural networks (CNNs). They trained the networks on noisy data paired with desired output values, then compressed the best-performing models further by reducing bit widths, resulting in an AI model with high filter performance and low energy consumption. Said TU Graz's Franz Pernkopf, "We want to make CNNs' behavior a bit more explainable. We are not only interested in the output result, but also in its range of variation. The smaller the variance, the more certain the network is."
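The bit-width-reduction step can be illustrated with naive uniform quantization, sketched below; the summary does not detail TU Graz's actual compression scheme, so this is only the generic idea:

```python
import torch

# Shrink a trained model by storing weights as small integers plus a scale
# factor, trading a little precision for memory and energy savings.
def quantize(w: torch.Tensor, bits: int = 8):
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    q = torch.round(w / scale).to(torch.int8)     # 8-bit integers instead of floats
    return q, scale                               # dequantize later as q * scale

w = torch.randn(64, 64)                           # a trained layer's weights
q, scale = quantize(w)
print("max quantization error:", (q.float() * scale - w).abs().max().item())
```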
People Are Bad at Spotting Fake LinkedIn Profiles Generated by AI
New Scientist
Chris Stokel-Walker
February 21, 2022
Scientists at the University of Illinois at Urbana-Champaign (UIUC) and Santa Clara University found fake profiles produced by artificial intelligence (AI) on the business social network LinkedIn can easily deceive people. Nearly 300 study participants each examined three profiles, two of which contained either a deepfake profile picture or AI-generated text. Participants accepted friend requests from 90% of the deepfake profiles that were consistent, and between 79% and 85% of those with obvious errors. UIUC's Jaron Mink said they apparently did not notice age differences, and were less suspicious of grammatical errors than of image glitches. When informed that deepfake profiles have previously been used to trick people, acceptance levels for inconsistent profiles fell by up to 43%. UIUC's Gang Wang observed that although trained users were better at spotting fake profiles, "The overall stats show they're still not super-good at them."
Can ML Models Overcome Biased Datasets?
MIT News
Adam Zewe
February 21, 2022
Researchers at the Massachusetts Institute of Technology (MIT), Harvard University, and Japanese information and communications technology company Fujitsu investigated how machine learning (ML) models can surmount dataset bias. They applied neuroscience principles to determine whether an artificial neural network can learn to identify previously unseen objects. The results of the research indicate diverse training data can shape the network's ability to overcome bias, but can concurrently cause its performance to deteriorate. "When the neural network gets better at recognizing new things it hasn't seen, then it will become harder for it to recognize things it has already seen," explained MIT's Xavier Boix. The training method itself and the types of neurons that emerge during the training process also can influence the network's handling of biased datasets. Said Boix, "We need to be very careful about how we design datasets in the first place."
Deep Learning Toolbox Now Apparently Includes Ground-Up Glass
IEEE Spectrum
Charles Q. Choi
February 18, 2022
Scientists in China, Hong Kong, and Singapore suggest ground-up pieces of glass could help securely encrypt facial images via a new optical cryptosystem. The system transmits facial-image data through the glass, producing speckles with apparently random scattered patches of light and dark that are recorded by a camera as the secret message. The researchers said the process generates keys 17.2 billion bits long; they trained a deep learning neural network to decrypt messages by feeding it 19,800 facial images before and after being sent through a given set of ground glass. Preliminary tests showed the network could decrypt the images with over 98% accuracy. Hong Kong Polytechnic University's Puxiang Lai described the system as "fast, low-cost, and easy to integrate with other systems."
Using ML to Understand How Brain Cells Work
University of Wisconsin-Madison News
Charlene N. Rivera-Bonet
February 17, 2022
The University of Wisconsin-Madison (UW)'s Daifeng Wang and colleagues used machine learning (ML) and artificial intelligence to better understand how interacting traits influence brain cells' functions. The researchers applied manifold learning to predict neuronal features by aligning gene expression and electrophysiological data for about 3,000 neurons in the mouse brain; both neuronal cell features manifested high values in the same group of cells, but low values in the remainder, and exhibited a relationship to one another that described their manifold shape. The researchers then used cell clusters to unveil connections between electrophysiological features and specific genes governing the expression of other genes. This informed the development of deepManReg, a new manifold learning model that enhances the prediction of neuronal traits based on gene expression and electrophysiology. "Basically, [we can study] how those genes are regulated to affect the electrophysiology or behaviors in diseased cells," Wang said.
DeepMind Uses AI to Control Plasma Inside Tokamak Fusion Reactor
New Scientist
Matthew Sparkes
February 16, 2022
Scientists at U.K. artificial intelligence (AI) company DeepMind and the Swiss Federal Institute of Technology in Lausanne (EPFL) have developed a neural network that can control the magnetic fields within EPFL's Variable Configuration Tokamak (TCV) fusion reactor. This eliminated the need to control each of the TCV's 19 magnetic coils with an individual algorithm; the network automatically learns which voltages must be supplied to best contain plasma within the reactor. The researchers trained the AI on a digital model of the reactor, and it successfully contained the plasma for about two seconds, approaching the reactor's threshold. The algorithm also was able to configure and move the plasma within the reactor, which may help to improve efficiency or stability in new reactors.
AI Outdoes Radiologists in Identifying Hip Fractures
The Washington Post
Erin Blakemore
February 20, 2022
A U.K. study found artificial intelligence could outperform radiologists in identifying hip fractures. The study compared the results of clinicians in classifying over 3,600 hip radiographs against the findings of a pair of computer models. The algorithms were accurate 92% of the time overall, versus 77.5% of the time for the radiologists. The researchers believe the algorithm could unclog the U.K.'s radiology bottleneck, which arose due to a shortage of radiologists at a time when demand for radiology services has risen sharply. Richie Gill at the U.K.'s University of Bath said the technique could facilitate greater access and accelerate diagnoses.
Social Media Echo Chambers Spread Vaccine Misinformation
Technical University of Denmark
Hanne Kokkegard
February 16, 2022
Technical University of Denmark (DTU) researchers analyzed 60 billion tweets posted before the pandemic to find that social media is spreading misinformation about vaccines. The researchers developed deep learning and natural language processing techniques to train a computer to identify the vaccine view expressed in a given tweet, and were able to identify users who consistently were for or against vaccines, and from which sources the profiles shared their vaccine information. DTU's Bjarke Mønsted said, "Where vaccine supporters often refer to news media and science sites when sharing knowledge about vaccines on Twitter, we can see that profiles belonging to anti-vaccine profiles far more often share links to YouTube videos and to sites that are known to spread fake news and conspiracy theories." The results confirmed that social media is an echo chamber that inhibits vaccine advocates and opponents from encountering each other's views online.
Researchers Discover How to Predict Degradation of Neural Network
URV Activ@
February 16, 2022
Researchers at Spain's Universitat Rovira i Virgili (URV) have identified the theoretical underpinnings for predicting how neural networks will function and degrade over time. These findings indicate how much damage a system can endure before it will completely degrade and lose its functionality, known as the phase transition of percolation degradation. URV's Alex Arenas said, "We have been able to find this transition and we have also been able to calculate the homeostatic response [i.e., the ability to find alternatives and continue functioning] of the network." He added that a set of mathematical tools "that can be very useful not only in neuroscience but in any type of network" is now available to the scientific community.
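A toy percolation experiment in the spirit of the result, using an arbitrary random graph rather than the URV model: remove growing fractions of nodes and watch the largest connected component (the network's remaining "function") collapse past a transition:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 0.02
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1); adj = adj | adj.T          # random undirected graph

def largest_component(alive):
    seen, best = set(), 0
    for s in np.flatnonzero(alive):
        if s in seen: continue
        stack, size = [s], 0
        while stack:
            v = stack.pop()
            if v in seen: continue
            seen.add(v); size += 1
            stack.extend(np.flatnonzero(adj[v] & alive))
        best = max(best, size)
    return best

for frac in (0.0, 0.5, 0.9):                      # fraction of nodes damaged
    alive = rng.random(n) > frac
    print(f"damage {frac:.0%}: giant component = {largest_component(alive)}")
```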
'Underground Maps' Segment Cities Using Fashion, AI
Cornell Chronicle
Tom Fleischman
February 15, 2022
Artificial intelligence (AI) developed by scientists at Cornell University, the University of Texas at Austin, and Facebook can automatically segment cities into "underground maps" based on commonalities of interest. The work builds on the Cornell team's development of the GeoStyle tool, which can discover geospatial events and predict fashion trends. The researchers applied a fashion recognition algorithm to geolocated photos from 37 large cities in order to detect clothing styles, then mixes of those styles within a given radius. They then used AI to identify spatially and stylistically coherent segments within a city. Two human-centered benchmarks, HoodMaps and OpenStreetMap, were used with resident surveys to assess the technique's accuracy; in every instance, it described the "sense" of a neighborhood better than existing methods.