Lighting Up Artificial Neural Networks
University of Exeter (U.K.)
April 27, 2022
An international team of scientists has mimicked biological computations of the mammalian brain using a nanoscale "optomemristor" device that responds to multiple electronic and photonic inputs at the same time. The optomemristor enables multifactorial computation of the kind that lets the brain learn from positive or negative reinforcement. IBM Research Europe's Syed Ghazi Sarwat said the research demonstrated a practical hardware approach to mimicking reinforcement learning efficiently, which the researchers used to enable an artificial rodent to learn to navigate a maze. Said Huawei Technologies’ Timoleon Moraitis, “By emulating the so-called ‘shunting inhibition’ function of dendrites of biological neurons, we illustrate how our optomemristor can effectively provide a single-neuron solution for difficult computational problems.”
Improved Approach to the 'Traveling Salesperson Problem' Could Boost Logistics, Transport Sectors
University of Cambridge (U.K.)
April 26, 2022
Researchers at the U.K.'s University of Cambridge have developed an enhanced approach to the Traveling Salesperson Problem that yields high-quality solutions at a faster rate than other cutting-edge tools. The challenge involves finding the shortest possible delivery route for visiting multiple destinations in a single trip. The researchers' solution integrates a machine learning model supplying information about the previous best routes with a "metaheuristic" tool that draws the new route from this data. Said Cambridge's Ben Hudson, "Our goal with this research is to improve such methods so that they produce better solutions—solutions that result in lower distances being traveled and therefore lower carbon emissions and reduced impact on the environment."
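The summary describes a hybrid of a learned model and a "metaheuristic" improvement search. The paper's machine learning component is not reproduced here, but the metaheuristic half can be illustrated with a plain 2-opt local search, a standard TSP improvement move (not necessarily the one the Cambridge team used):

```python
import math

def tour_length(tour, pts):
    """Total length of the closed tour visiting every point once."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt local search: keep reversing segments while that shortens the tour.
    A learned model (as in the Cambridge work) would instead guide which moves
    or starting tours to try; here every segment reversal is tried exhaustively."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, pts) < tour_length(best, pts):
                    best, improved = cand, True
    return best
```

Recomputing the full tour length per candidate is wasteful but keeps the sketch short; real solvers evaluate each reversal in constant time and use learned guidance to prune the search.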
Which Animal Viruses Could Infect People? Computers Are Racing to Find Out
The New York Times
Carl Zimmer
April 27, 2022
Researchers are using machine learning models to predict which animal viruses can infect human cells. In March, Georgetown University's Colin Carlson and his colleagues established the VIRION open access database, which incorporates data about 9,521 viruses and their 3,692 animal hosts. Carlson and colleagues also developed a model to identify the animals most likely to harbor relatives of SARS-CoV-2. In 2020, the model identified over 300 species of bats most likely to harbor betacoronaviruses; since then, 47 of those species have been found to harbor betacoronaviruses. Said Rocky Mountain Laboratories' Emmie de Wit, "What we really want to know is not necessarily which viruses can infect humans, but which viruses can cause an outbreak."
AI Helps Researchers Peer into the Brains of Mice
Johns Hopkins Medicine Newsroom
April 28, 2022
Researchers from Johns Hopkins University, George Washington University, and technology company Corning developed an artificial intelligence (AI) training system to record images of mouse neurons in action. The researchers said the system, combined with ultra-small microscopes, can pinpoint precisely where and when mouse brain cells are activated during movement, learning, and memory. The researchers designed the AI solution to identify building blocks of the brain both in fixed tissue images and in images from a head-restrained living mouse. Despite the reduced image resolution of the ultra-small microscopes, the team found the AI could adequately restore image quality at up to 26 frames per second, and could clearly visualize activity spikes of individual brain cells.
Time to Get Social: Tracking Animals with Deep Learning
EPFL (Switzerland)
April 22, 2022
Researchers at Switzerland's École polytechnique fédérale de Lausanne (EPFL) enhanced their DeepLabCut software to enable high-performance tracking of animals in videos. The open source software uses deep learning to teach computers "pose estimation" without requiring physical or virtual markers on the animals. The update allows DeepLabCut to track the movements of social animals, like mice or fish, which can confuse the computer because they look too similar or obscure each other. With the help of researchers at the Massachusetts Institute of Technology and Harvard University, the EPFL team developed four datasets for benchmarking multi-animal pose estimation networks. The researchers also developed a multi-task neural network that can predict keypoints, limbs, and animal identity from single frames, and an assembly algorithm to help identify animals with varied body shapes.
AI Drug Discovery Systems Might Be Repurposed to Make Chemical Weapons
Scientific American
Rebecca Sohn
April 21, 2022
Scientists at Collaborations Pharmaceuticals presented research at the 2021 Spiez CONVERGENCE conference in Switzerland showing that artificial intelligence (AI)-powered drug discovery frameworks might be harnessed to produce chemical weapons. The researchers applied Collaborations' MegaSyn software to build a compendium of toxic molecules similar to the nerve agent VX, generating 40,000 substances, including other known chemical weapons besides VX, in less than six hours. Collaborations' Fabio Urbina said the team had previously used MegaSyn to generate potentially therapeutic molecules with the same molecular target as VX, while the toxic molecules were produced by asking the system to design similar molecules without structural inputs. However, some experts said the research did not answer important questions about whether using AI software to find toxins could lead to the practical development of a chemical weapon.
ML Model Can Distinguish Antibody Targets
University of Illinois News Bureau
Diana Yates
April 21, 2022
Researchers at the University of Illinois Urbana-Champaign (UIUC) and Scripps Research have developed a proof-of-concept machine learning model that can distinguish antibodies targeting influenza from those targeting the COVID-19 coronavirus. The researchers trained the model on antibody data from 88 published studies and 13 patents, teaching it to predict which virus an antibody will attack based on the antibody's genetic sequence. UIUC's Yiquan Wang said the model was nearly 85% accurate. UIUC's Nicholas Wu thinks that with sufficient data, scientists should be able to anticipate the virus an antibody will target, and the pathogenic features to which the antibody binds.
Scientists Use ML to Identify Antibiotic-Resistant Bacteria
University of Nottingham (U.K.)
April 20, 2022
Experts at the U.K.'s University of Nottingham integrated DNA sequencing and machine learning (ML) to determine the site and extent of antibiotic-resistant bacteria's transmission between humans, animals, and the environment. In examining a commercial poultry farm in China, the researchers compiled 154 samples from animals, carcasses, laborers, and their households and environments. They isolated E. coli bacteria from the samples, and classified distinct pathogens found at the farm through ML, whole genome sequencing, gene-sharing networks, and mobile genetic elements. The team uncovered a complete network of genes that correlated with antimicrobial resistance, shared between animals, farm workers, and the surrounding environment. "We cannot say at this stage where the bacteria originated from, we can only say we found it and it has been shared between animals and humans," said Nottingham's Tania Dottorini.
AIs Spot Drones with Help from Fly Eye
Scientific American
Monique Brouillette
April 20, 2022
Researchers at the University of South Australia, defense firm Midspar Systems, and Australia's Flinders University have developed an artificial intelligence (AI) algorithm for visual drone detection. The researchers reverse-engineered the visual system of the hoverfly—whose compound eyes can separate relevant information from noise—to develop a tool that filters out noisy data. They fed the algorithm spectrograms based on acoustic data from outdoors as drones flew by. The algorithm was able to amplify data related to the frequencies emitted by drones, while reducing background noise from other sources. The researchers found that it could identify drones up to 50% farther away than conventional AI systems.
Neural Network Can Read Tree Heights from Satellite Images
ETH Zurich (Switzerland)
Stéphanie Hegelbach
April 20, 2022
Researchers at Switzerland's ETH Zurich leveraged an artificial neural network and satellite images to develop a high-resolution global vegetation height map for 2020 that could be used for sustainable regional development planning or to assess carbon emissions associated with deforestation. The map allows users to determine tree heights on any piece of woodland on Earth at a resolution of as little as 10x10 meters per pixel. The convolutional neural network was trained using millions of images from the European Space Agency's two Copernicus Sentinel-2 satellites, along with tree height data based on space laser measurements from NASA's Global Ecosystem Dynamics Investigation mission. The map will be made public along with its source code.
Can AI All but End Car Crashes? The Potential Is There
The New York Times
Tanya Mohn
April 19, 2022
Although artificial intelligence (AI) already has been deployed to improve road safety through smartphone apps that monitor safe driving and connected vehicles that communicate with each other and road infrastructure, proponents of the technology believe it also can be used to predict and help prevent dangerous driving behaviors. Australia's Acusensus, for instance, has rolled out systems that use AI algorithms to assess driver distraction, occupant restraint, and vehicle speed using cameras installed on overpasses and other infrastructure. The company’s Heads-Up Real Time product, which has been proposed for use in the U.S., would deliver data and images in real time to law enforcement. Acusensus technology also can be used to pinpoint "hot spots" where enforcement or infrastructure should be improved. The technology, however, also raises privacy concerns.
*May Require Paid Registration
Intel Develops AI to Detect Emotional States of Students
Tom's Hardware
Francisco Pires
April 18, 2022
An artificial intelligence (AI) software solution developed by Intel and Classroom Technologies to identify students' emotional states is generating controversy in the context of ethics and privacy. The technology, incorporated into Classroom Technologies’ Class software product, can classify students' body language and facial expressions whenever digital classes are conducted through Zoom. The software inputs students' video streams into the AI engine alongside contextual, real-time data that enables it to identify students' level of comprehension of subject matter. Intel's Sinem Aslan said the main goal is to improve one-on-one teaching by allowing educators to respond in real time to each student's emotional state. Among the software's caveats is that reducing emotional states to a few easy-to-grasp categories invites error.
Neural Network Model Helps Predict Site-Specific Impacts of Earthquakes
Hiroshima University (Japan)
April 15, 2022
A neural network model developed by researchers at Japan's Hiroshima University can calculate how soil layers can amplify the seismic waves of large earthquakes. The researchers analyzed microtremor data from 105 sites in the Chugoku district from 2012 to 2020 using a generalized spectral inversion method. Data from each site was split into a training set to educate the neural network, a validation set applied to iterative model optimization, and a test set to assess the model's performance. The model performed well on the test dataset, although, said Hiroshima's Hiroyuki Miura, "the number of training samples analyzed in this study is still limited." Miura said more training samples needed to be considered "before assuming that the neural network model applies nationwide or globally."
Researchers Use AI to Detect Weak Heart Pump via Patients' Apple Watch ECGs
Mayo Clinic
Terri Malloy
May 2, 2022
Researchers at the Mayo Clinic developed an artificial intelligence (AI) algorithm that can identify patients with weak heart pumps based on single-lead ECG tracings from an Apple Watch. The researchers modified an established 12-lead algorithm for low ventricular ejection fraction (a weak heart pump) to interpret ECG signals generated from the single lead of an Apple Watch; they also developed a smartphone app that enabled study participants to submit single-lead ECGs from their Apple Watches. Said the Mayo Clinic's Dr. Paul Friedman, "It is absolutely remarkable that AI transforms a consumer watch ECG signal into a detector of this condition, which would normally require an expensive, sophisticated imaging test, such as an echocardiogram, CT scan, or MRI."
Method Detects Deepfake Videos with Up to 99% Accuracy
UC Riverside News
Holly Ober
May 3, 2022
University of California, Riverside (UC Riverside) scientists have developed a technique to detect doctored facial expressions in deepfake videos with up to 99% accuracy. The method splits the task along two branches within a deep neural network: the first branch determines facial expressions, and feeds data on facial regions containing the expression to the second branch, an encoder-decoder, which has an Expression Manipulation Detection architecture to detect and localize manipulated regions in the image. UC Riverside’s Ghazal Mazaheri said, “Multi-task learning can leverage prominent features learned by facial expression recognition systems to benefit the training of conventional manipulation detection systems. Such an approach achieves impressive performance in facial expression manipulation detection.”
Unpacking Black-Box Models
MIT News
Adam Zewe
May 5, 2022
A mathematical framework developed by researchers at the Massachusetts Institute of Technology and Microsoft Research aims to quantify and evaluate the understandability of a machine learning model's explanations for its predictions. The framework, called ExSum (explanation summary), can evaluate a rule on an entire dataset. ExSum enables the user to see if a rule holds up based on three metrics: coverage, or how broadly applicable the rule is across the entire dataset; validity, or the percentage of individual examples that agree with the rule; and sharpness, or how precise the rule is. Said MIT's Yilun Zhou, "Before this work, if you have a correct local explanation, you are done. You have achieved the holy grail of explaining your model. We are proposing this additional dimension of making sure these explanations are understandable."
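Two of ExSum's three metrics follow directly from the description above and can be sketched as dataset-level counting; this toy version (not the MIT/Microsoft implementation) scores a rule given a predicate for where it applies and one for whether its claim holds. Sharpness is omitted, since it requires comparing the rule against perturbed variants:

```python
def exsum_metrics(examples, applies, holds):
    """Score an explanation rule over a dataset, in the spirit of ExSum.

    examples:   iterable of dataset items
    applies(x): True if the rule's premise covers item x  -> coverage
    holds(x):   True if the rule's claimed behavior is observed on x -> validity
    """
    examples = list(examples)
    covered = [x for x in examples if applies(x)]
    coverage = len(covered) / len(examples)
    validity = sum(holds(x) for x in covered) / len(covered) if covered else 0.0
    return coverage, validity
```

For example, the rule "negation words receive negative saliency" applied to per-word saliency scores from a sentiment model would use `applies = word is a negation` and `holds = its saliency is negative`.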
Rapid Adaptation of Deep Learning Teaches Drones to Survive Any Weather
California Institute of Technology
May 4, 2022
The deep learning Neural-Fly method developed by California Institute of Technology (Caltech) engineers can help aerial drones deal with new and unknown wind conditions in real time. Caltech's Soon-Jo Chung said Neural-Fly uses "a combined approach of deep learning and adaptive control that allows the aircraft to learn from previous experiences and adapt to new conditions on the fly with stability and robustness guarantees." The researchers said Neural-Fly employs a separation strategy that requires only the updating of a few neural network parameters; Caltech's Guanya Shi said a meta-learning algorithm pre-trains the network to perform these updates. Neural-Fly-outfitted autonomous quadrotor drones trained on just 12 minutes of flight data improved their response to strong winds.
The Problems with Elon Musk's Plan to Open-Source the Twitter Algorithm
MIT Technology Review
Chris Stokel-Walker
April 27, 2022
Elon Musk’s announced plans for the Twitter social network include open-sourcing its algorithms, which experts say would do little to boost transparency without access to their training data. Said Jennifer Cobbe of the U.K.'s University of Cambridge, "Most of the time when people talk about algorithmic accountability these days, we recognize that the algorithms themselves aren't necessarily what we want to see—what we really want is information about how they were developed." There also are concerns open-sourcing Twitter's algorithms would enable bad actors to identify vulnerabilities to exploit and could make it more difficult to defeat spam bots.
Comparing AI Reasoning with Human Thinking
IEEE Spectrum
Charles Q. Choi
April 27, 2022
Researchers at the Massachusetts Institute of Technology (MIT) and IBM Research have developed a method for comparing the reasoning of artificial intelligence (AI) software with that of human thinking, in order to better understand the AI's decision-making. The Shared Interest technique compares saliency analyses of an AI decision with human-annotated databases. It classifies the AI's reasoning as one of eight patterns, ranging from the AI being completely distracted (making incorrect predictions and not aligning with human reasoning) to making correct predictions and being completely human-aligned. Said MIT's Angie Boggust, "Providing human users with tools to interrogate and understand their machine-learning models is crucial to ensuring machine-learning models can be safely deployed in the real world."
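The core comparison can be sketched as set overlap between the features a saliency method highlights and the features humans annotated, folded together with prediction correctness. This toy version collapses Shared Interest's eight patterns into four coarse ones and uses a made-up overlap threshold:

```python
def shared_interest(salient, annotated, correct, thresh=0.5):
    """Coarse stand-in for the Shared Interest idea: compare the model's
    salient features (a set of feature indices) with human-annotated ones
    via intersection-over-union, then combine with prediction correctness.
    The actual method distinguishes eight finer-grained patterns."""
    inter = len(salient & annotated)
    union = len(salient | annotated) or 1  # avoid dividing by zero
    aligned = inter / union >= thresh
    if correct and aligned:
        return "human-aligned"
    if correct:
        return "correct, reasoning differs"
    if aligned:
        return "aligned but wrong"
    return "distracted"
```

"Distracted" here matches the digest's description: an incorrect prediction whose salient features barely overlap the human annotation.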
AI Aids Diagnosis of Mild Cognitive Impairment that Progresses to Alzheimer's
Universitat Oberta de Catalunya News (Spain)
Jesús Méndez
April 27, 2022
Using artificial intelligence, researchers at Spain's Universitat Oberta de Catalunya (UOC) have developed a method to predict which patients with mild cognitive impairment will progress to having Alzheimer's disease. The researchers used a multi-stream convolutional neural network to compare magnetic resonance images (MRIs) of the brains of healthy patients with those with Alzheimer's disease to identify distinct landmarks. The system, trained using nearly 700 images from publicly available datasets, achieved an accuracy rate of nearly 85% in distinguishing and classifying the two forms of mild cognitive impairment. UOC's Mona Ashtari-Majlan said the process "overcomes the complexity of learning caused by the subtle structural changes that occur between the two forms of mild cognitive impairment, which are much smaller than those between a normal brain and a brain affected by the disease."
Everyone Has Opinions, Even AI
Dartmouth College
David Hirsch
April 25, 2022
Dartmouth College computer scientists trained an artificial intelligence (AI) algorithm to produce opinionated product reviews. They trained the algorithm on thousands of published wine and beer reviews, then had it generate its own human-like reviews of such products. With the help of faculty at Dartmouth's Tuck School of Business, review-writing code was developed to turn the algorithm into a marketing tool. Said Tuck's Prasad Vana, "Using [AI] to write and synthesize reviews can create efficiencies on both sides of the marketplace. The hope is that AI can benefit reviewers facing larger writing workloads and consumers who have to sort through so much content about products."
Popular Science (2/24, Hu) reports that Meta is making “a concerted effort to illustrate how AI works behind-the-scenes with a new explanatory tool called ‘system cards.’” In a blog post, Meta said, “AI powers back-end services like personalization, recommendation, and ranking. But understanding how and why AI operates can be difficult for everyday users and others. We’re aiming to change that.” Meta’s pilot system card “will attempt to showcase how Instagram ranks its feed. In a slideshow animation on its blog, Meta runs through the different steps the AI model goes through to order posts on a user’s feed.”
Motherboard (2/23, Fassler) reports AI “researchers – some of whom are affiliated with Facebook’s parent company Meta and Cornell University – used more than 7 million public, geolocated social media photos from Instagram and Flickr to construct what they’re calling an ‘underground fashion map’ that spans 37 cities” and “can reveal groupings of people within a city, including areas that are the most ‘trendy’ or ‘progressive.’” The map “builds on an Amazon-funded AI tool called GeoStyle to forecast fashion trends, according to a press release about the research.” Motherboard says, “The project’s affiliation with Facebook and Amazon raises larger questions about the unexpected ways tech companies use personal data, often without explicitly notifying users.”
Meta CEO Zuckerberg Reveals AI Projects Aimed At Building The Metaverse. Reuters (2/23, Culliford) reports Facebook-owner Meta is working on AI research “to generate worlds through speech, improve how people chat to voice assistants and translate between languages, CEO Mark Zuckerberg said on Wednesday, as he sketched out key steps to building the metaverse.” Speaking at the company’s virtual “Inside the Lab” event, he said, “The key to unlocking a lot of these advances is AI.”
The AP (2/23) reports Miami Dade College is “announcing multimillion-dollar funding from nonprofit and public entities to make artificial intelligence classes a requirement for every student pursuing a degree in any field.” The college’s president “and other officials say it will be the first time a college embeds artificial intelligence courses across all degrees including nursing, business and liberal arts.” The $15 million investment “comes as the city of Miami continues an effort to lure technology investors, which began during the pandemic.”
On the Road to Cleaner, Greener, Faster Driving
MIT News
Adam Zewe
May 17, 2022
Massachusetts Institute of Technology (MIT) researchers have developed a machine learning approach that can learn to control autonomous vehicle fleets so they arrive at intersections during green lights. The researchers found through simulations that the control system could increase travel speeds by 20% and reduce fuel consumption by 18% and carbon dioxide emissions by 25% if all vehicles on the road were autonomous. However, MIT's Cathy Wu said, "If we only control 25% of vehicles, that gives us 50% of the benefits in terms of fuel and emissions reduction. That means we don't have to wait until we get to 100% autonomous vehicles to get benefits from this approach."
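The "arrive during green" idea behind such eco-driving controllers can be illustrated with simple kinematics. This is a hand-written toy, not the MIT learned controller: given the distance to a fixed-cycle signal and a speed range, pick the fastest constant speed that reaches the intersection inside a green window, so the vehicle glides through instead of stopping:

```python
def green_speed(dist, green, cycle, v_max, v_min=1.0):
    """Fastest constant speed (m/s) that arrives at a signal during green.

    The signal is green on [k*cycle, k*cycle + green) for k = 0, 1, 2, ...
    Returns None if no green window is reachable within the speed limits.
    """
    t_fast, t_slow = dist / v_max, dist / v_min  # earliest/latest arrival
    k = 0
    while True:
        t_open, t_close = k * cycle, k * cycle + green
        if t_open > t_slow:
            return None  # even crawling, we cannot wait for this window
        lo, hi = max(t_fast, t_open), min(t_slow, t_close)
        if lo < hi:
            return dist / lo  # fastest arrival that still lands in green
        k += 1
```

A fleet controller doing this for many interacting vehicles, under uncertainty about signals and other traffic, is what the reinforcement-learning approach handles.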
The Man Who Controls Computers with His Mind
The New York Times Magazine
Ferris Jabr
May 15, 2022
Paralyzed since 2006, Dennis DeGray has regained a semblance of control over his body via a brain-computer interface (BCI) developed by Stanford University researchers. Implanted in him in 2016, the BCI enables DeGray to move a cursor on a computer screen by thought, using machine learning algorithms that associate different neural activity patterns with different intended hand movements. DeGray has learned to control various technologies with his mind, including videogames, robotic limbs, and a simulated aerial drone. BCI advancements to date have relied on a combination of invasive and noninvasive technologies. Thomas Oxley at BCI developer Synchron believes future models will help physically disabled people re-engage with physical and digital environments.
We Got Closer to Mapping Nearly Every Single Cell Type in the Human Body
ScienceAlert
Tessa Koumoundouros
May 13, 2022
More than 2,300 researchers across 83 countries participated in four studies that collectively mapped over 1 million cells in 33 human organs. Explained Stanford University's Steve Quake, "What we've been able to do collectively here is help understand and interpret how different parts of the genome are used to define different cell types and create molecular definitions for all the different cell types we've studied collectively in these papers." The researchers applied new machine learning algorithms to extract critical molecular patterns, and used experimental processes to profile more cell types to help compile and search through the cell atlas. The datasets will enable scientists to chart links between cells for a whole-body map of biological processes and disease-inducing disruptions.
Drones, ML Help Reveal the Scale of Plastic Pollution
University of Tsukuba (Japan)
May 12, 2022
Researchers at Japan's University of Tsukuba have combined flying drones and machine learning (ML) to map plastic pollution in river habitats. The researchers employed drones to capture high-resolution optical and thermal images along Indonesia's Brantas River, which were processed by ML algorithms. "Using these methods, we found that a combination of optical and thermal images produced the most accurate estimates of the amounts of plastic litter," said Tsukuba's Fatwa Ramdani. "On their own, neither image type produced particularly accurate results."
ACM Recognizes Technical Achievements with Special Awards
ACM
May 11, 2022
ACM announced the recipients of four technical awards for their contributions to pioneering research and practical applications that affect everyday use of technology. The University of California, Berkeley's Raluca Ada Popa will receive the 2021 ACM Grace Murray Hopper Award for designing secure distributed systems that shield confidentiality against intruders with full access to servers without losing functionality. A team of international researchers will receive the ACM Software System Award for developing CompCert, the first practical optimizing compiler for multiple commercial architectures, whose proof of correctness can be mechanically checked. ACM awarded several teams of multi-institutional collaborators the ACM Paris Kanellakis Theory and Practice Award for making fundamental contributions to differential privacy development. Artificial intelligence (AI) pioneer Carla Gomes will receive the ACM-AAAI Allen Newell Award for conceiving of and cultivating the field of computational sustainability, and for her fundamental contributions to AI.
AI Predicts Infant Age, Gender Based on Temperament
WSU Insider
Will Ferguson
May 10, 2022
A multi-institutional team of researchers used artificial intelligence to analyze temperament data on 4,438 infants in order to classify them by gender and age. The researchers utilized infant behavior data collected between 2006 and 2019, which rated infants along 14 temperament dimensions. The University of Idaho's Erich Seamon used machine learning algorithms to classify infants as either male or female at zero to 24 weeks, 24 to 48 weeks, and older than 48 weeks based on those ratings. The researchers determined the algorithms better classified gender in babies more than 48 weeks old, while fear was the most critical element in distinguishing boys and girls in the youngest and mid-range age groups.
AI System Learns Concepts Shared Across Video, Audio, Text
MIT News
Adam Zewe
May 4, 2022
An artificial intelligence technique developed by Massachusetts Institute of Technology (MIT) researchers can learn to represent data by capturing concepts shared between visual and audio modalities. The researchers' representation learning model can encode raw data like videos and corresponding text captions by retrieving observations about objects and actions in the video and mapping the data points in a grid. Similar data is clustered as single points in the grid, each of which is represented by an individual word. The model can use only 1,000 words to label vectors, choosing the word that best represents the data. Because the method uses a shared embedding space to encode two modalities together, the model can learn the relationship between them, such as video showing a person juggling and an audio recording of a person saying "juggling." The researchers found the model outperforms other machine learning methods at cross-modal retrieval tasks.
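Once two modalities share an embedding space, cross-modal retrieval reduces to nearest-neighbor search in that space. The sketch below uses made-up toy vectors and cosine similarity (a common choice, though the digest does not specify MIT's distance measure):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve(query_vec, candidates):
    """Return the candidate whose embedding is closest to the query's.
    `candidates` is a list of (label, embedding) pairs; because both
    modalities live in one space, the query can be a video embedding and
    the candidates caption embeddings, or vice versa."""
    return max(candidates, key=lambda c: cosine(query_vec, c[1]))[0]
```

So a video clip of juggling, embedded near the word "juggling," retrieves that caption rather than an unrelated one, which is the cross-modal retrieval task the model was evaluated on.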
Government Technology (2/28, Pattison-Gordon) reports that as misinformation and disinformation spread and evolve, “organizations are bringing both human power and advanced technologies to the battle against misinformation. ... Fact-checking is best done by people rather than AI, because only humans can understand the nuances and complexities of false narratives, said Sarah Brandt, executive vice president of partnerships at NewsGuard,” which “provides a browser extension that displays indicators next to news story links to inform users whether articles come from trustworthy or untrustworthy publications.”
The Phoenix Business Journal (2/28, Gifford, Subscription Publication) reports, “University of Arizona researchers have launched a startup after finding a way to detect new lung disease using artificial intelligence.” Two professors of medicine “developed a way to use artificial intelligence and machine learning to identify the specific ‘fingerprints’ of diseases that come from metabolites,” which “allows for earlier diagnosis of lung disease than current practices.” The study was funded by the NIH.