Dr. T's AI brief


Daniel Tauritz

Aug 17, 2020, 7:43:53 AM
to ai-b...@googlegroups.com

Machine Learning Helps Find Materials That Buck the Trend
RIKEN
August 7, 2020

Chemists at Japan's RIKEN research institute developed a machine learning model to predict the compositions of out-of-trend new materials. The team fed the open source BoundLess Objective-free eXploration (BLOX) algorithm the compositions and properties of known materials in order to predict materials suited to applications involving trade-offs between two or more desirable traits. BLOX combines data for materials randomly chosen from a database with experimental or calculation results, uses the model to anticipate the properties of a new set of materials, and identifies the material deviating most from the distribution. The material’s properties are confirmed by experiment or calculation, then used to update the machine learning model, and the process is repeated. The chemists used BLOX to identify eight trend-flouting molecules with a high level of photoactivity from a drug-discovery database; their properties correlated well with those BLOX predicted, which RIKEN's Kei Terayama said "shows the potential of computation-driven materials development."
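For readers who want the shape of the loop, here is a minimal sketch of a BLOX-style exploration cycle in Python. The descriptor vectors, the `evaluate_properties` stub, and the deviation measure are illustrative stand-ins, not RIKEN's actual chemistry or code:

```python
# Sketch of a BLOX-style "pick the outlier, measure, retrain" loop.
# Assumptions: random 16-dimensional descriptors stand in for materials,
# and evaluate_properties is a hypothetical experiment/calculation stub.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
pool = rng.random((500, 16))            # candidate material descriptors
X = pool[:10].copy()                    # a few randomly chosen starting materials

def evaluate_properties(x):             # hypothetical experiment / DFT stub
    return np.array([x[:4].sum(), x[4:8].sum()]) + rng.normal(0, 0.05, 2)

y = np.array([evaluate_properties(x) for x in X])

for step in range(20):
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    pred = model.predict(pool)          # predicted (property 1, property 2)
    # pick the candidate deviating most from the predicted distribution
    center = pred.mean(axis=0)
    idx = np.argmax(np.linalg.norm(pred - center, axis=1))
    y_new = evaluate_properties(pool[idx])   # confirm by experiment/calculation
    X = np.vstack([X, pool[idx]])            # update the model's training data
    y = np.vstack([y, y_new])
```

Selecting the candidate farthest from the center of the predicted property distribution is what steers the search toward out-of-trend materials rather than incremental improvements.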
 

Full Article

 

 

Intel AI System Rates Similarity of 2 Pieces of Code
VentureBeat
Kyle Wiggers
July 29, 2020


Intel scientists collaborating with Massachusetts Institute of Technology and Georgia Institute of Technology researchers said they have created an automated engine that determines when two pieces of code perform similar tasks, even if they utilize different structures and algorithms. Machine Inferred Code Similarity (MISIM) employs a context-aware semantic structure (CASS) that deduces the purpose of a given piece of source code using artificial intelligence and machine learning algorithms. Once the code's structure is integrated with CASS, algorithms assign similarity scores according to the tasks the code is designed to execute. If two pieces of code seem distinct but perform the same function, the models rate them as similar, and vice versa. MISIM performs up to 40 times better than current state-of-the-art systems, and shows potential for use on applications ranging from code recommendation to automated debugging.
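The scoring step can be pictured with a toy similarity function. The token-counting embedding below is a deliberately crude stand-in for MISIM's learned, context-aware semantic structure; only the "assign a similarity score to two snippets" idea carries over:

```python
# Toy code-similarity scorer: embed each snippet as a bag of identifier
# tokens and rate shared intent by cosine similarity. CASS is learned and
# far richer; this is an illustration of the scoring step only.
import math
import re
from collections import Counter

def embed(code: str) -> Counter:
    """Crude stand-in embedding: counts of identifier-like tokens."""
    return Counter(re.findall(r"[A-Za-z_]+", code))

def similarity(a: str, b: str) -> float:
    va, vb = embed(a), embed(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

# Two syntactically different snippets that compute the same thing
loop_sum = "total = 0\nfor x in xs:\n    total += x"
builtin_sum = "total = sum(xs)"
print(f"similarity score: {similarity(loop_sum, builtin_sum):.2f}")
```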

Full Article

 

Surprisingly Recent Galaxy Discovered Using Machine Learning
SciTechDaily
July 31, 2020


Scientists at the National Astronomical Observatory of Japan and the University of Tokyo used machine learning on big data captured by the Subaru Telescope in Hawaii to discover a galaxy with a record-breaking low oxygen abundance, suggesting the recent formation of most of its stars. Finding galaxies in the early formation stage from a vast amount of wide-field telescopic data required a novel machine learning method in which a computer repeatedly learned galactic colors expected from theoretical models, and then selected only early-stage galaxies. The researchers conducted follow-up observations to ascertain the elemental abundance ratios of four of 27 computer-selected candidates, and a galaxy in the constellation Hercules had the lowest values ever reported. This finding potentially indicates the newborn galaxy may be the last-generation galaxy in cosmic history.
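Conceptually, the selection stage resembles the sketch below, in which a classifier is trained on galaxy colors drawn from theoretical models and then applied to survey photometry. The color values, class balance, and probability threshold are invented for illustration and are not the survey's actual pipeline:

```python
# Sketch of model-trained candidate selection: learn the colors that
# theoretical models predict for early-stage galaxies, then keep only
# high-confidence matches from wide-field photometry (all data synthetic).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
# model-predicted colors (g-r, r-i, i-z) for early-stage vs. ordinary galaxies
early = rng.normal([-0.3, -0.2, -0.1], 0.1, (2000, 3))
other = rng.normal([0.5, 0.3, 0.2], 0.3, (2000, 3))
X = np.vstack([early, other])
y = np.r_[np.ones(2000), np.zeros(2000)]

clf = GradientBoostingClassifier().fit(X, y)

survey = rng.normal(0.2, 0.5, (10000, 3))        # wide-field photometry
candidates = survey[clf.predict_proba(survey)[:, 1] > 0.99]
print(f"{len(candidates)} candidates selected for follow-up observation")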

Full Article

 

 

50 Years After Designing Electric Lady Studios, Architect Cuts New Track with AI
The Wall Street Journal
Steven Rosenbush
August 3, 2020


Acoustics architect John Storyk, who designed Electric Lady Studios for Jimi Hendrix over 50 years ago, has built software to help optimize musical recording and performance spaces for low-frequency sounds by rapidly analyzing rooms of varying shapes and sizes. He and Walters-Storyk Design Group (WSDG) research and development director Peter D'Antonio spent more than a year developing the Non-Cuboid Iterative Room Optimization (NIRO) tool around a genetic algorithm. NIRO predicts how a space's acoustics will respond to specially constructed damping units, in order to optimize the physical structure. The software has been used in nearly 20 projects to date on new studios for Sony Corp. of America, rapper J. Cole, and Phish singer Mike Gordon, among others.
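A genetic algorithm for room acoustics can be sketched as follows. This is not WSDG's NIRO code; the fitness function (evenly spaced low-frequency modes in a simple rectangular-room model) and the mutation-only evolution loop are simplifying assumptions:

```python
# Toy genetic algorithm: evolve room dimensions so the lowest room modes
# spread evenly instead of clustering (clustered modes cause boomy bass).
# Rectangular-room mode formula and operators are simplified illustrations.
import numpy as np

C = 343.0                                     # speed of sound, m/s
rng = np.random.default_rng(2)

def modes(dims, n_max=4):
    L, W, H = dims
    f = [(C / 2) * np.sqrt((nx / L)**2 + (ny / W)**2 + (nz / H)**2)
         for nx in range(n_max + 1) for ny in range(n_max + 1)
         for nz in range(n_max + 1) if nx + ny + nz > 0]
    return np.sort(np.array(f))[:20]          # lowest 20 room modes

def fitness(dims):
    gaps = np.diff(modes(dims))
    return -gaps.std()                        # evenly spaced modes score best

pop = rng.uniform([3, 3, 2.4], [12, 10, 4], (40, 3))   # L, W, H in meters
for gen in range(100):
    scores = np.array([fitness(d) for d in pop])
    parents = pop[np.argsort(scores)][-20:]              # keep the fittest half
    children = (parents[rng.integers(0, 20, 20)]
                + rng.normal(0, 0.05, (20, 3)))          # mutated copies
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(d) for d in pop])]
print("best L, W, H (m):", np.round(best, 2))
```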

Full Article

*May Require Paid Registration

 

 

Deep Learning Moves Cancer Vaccines Toward Reality
Technology.org
August 1, 2020


Researchers at India's International Institute of Information Technology Bangalore (IIIT Bangalore) have designed a deep learning method to help develop personalized cancer vaccines by better understanding the biological mechanisms of interaction between cancer cells and the human immune system. The MHCAttnNet deep learning model uses bidirectional long short-term memory networks to predict Major Histocompatibility Complex (MHC) allele-peptide binding with greater accuracy than current methods. The model also employs natural language processing to emphasize key subsequences from the amino-acid sequences of peptides and MHC alleles used to generate the prediction. Shrisha Rao of IIIT Bangalore said the work was “an illustration of how artificial intelligence and machine learning research using cloud-based solutions can make a mark in different domains including medicine, in a much shorter time and at a fraction of the usual cost.”
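In spirit, the architecture pairs two bidirectional LSTM encoders, one per sequence. The PyTorch sketch below assumes integer-coded amino acids and omits the attention mechanism that gives MHCAttnNet its name; all layer sizes are illustrative:

```python
# Minimal sketch of a BiLSTM peptide/MHC binding predictor (assumed sizes;
# the published model adds attention over key subsequences, omitted here).
import torch
import torch.nn as nn

class BindingNet(nn.Module):
    def __init__(self, n_aa=21, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_aa, emb)
        self.pep_lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.mhc_lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Sequential(nn.Linear(4 * hidden, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, pep, mhc):             # (batch, seq) integer-coded residues
        p = self.pep_lstm(self.embed(pep))[0].mean(dim=1)   # (batch, 2*hidden)
        m = self.mhc_lstm(self.embed(mhc))[0].mean(dim=1)
        return torch.sigmoid(self.head(torch.cat([p, m], dim=1)))

net = BindingNet()
pep = torch.randint(0, 21, (8, 9))           # a batch of 9-mer peptides
mhc = torch.randint(0, 21, (8, 34))          # MHC allele pseudo-sequences
print(net(pep, mhc).shape)                   # torch.Size([8, 1]) binding scores
```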

Full Article

 

 

Army Advances Learning Capabilities of Drone Swarms
U.S. Army Research Laboratory
August 10, 2020


Researchers at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory (ARL) and North Carolina State and Oklahoma State universities developed a reinforcement learning technique to enable swarms of unmanned drones to carry out missions while minimizing performance uncertainty. The Hierarchical Reinforcement Learning (HRL) approach breaks a global control objective down into hierarchies: microscopic control of multiple small groups, and macroscopic control at the broad swarm level. Each hierarchy has a unique learning loop with its own local and global reward functions, and running these loops in parallel significantly compresses learning time. Compared to centralized reinforcement learning, HRL shortened the swarm's learning time by 80% and limited the loss of optimality (the point at which the difference between benefits and costs is maximized) to 5%.

Full Article

 

 

NIST Neural Network Model Finds Small Objects in Dense Images
NIST
August 4, 2020


Computer scientists at the U.S. National Institute of Standards and Technology (NIST) have created a neural network model to detect small geometric objects in dense images. The researchers modified a network architecture developed by German scientists for analyzing biomedical images in order to retrieve raw data from journal articles that had been degraded or otherwise lost. The images present data points with various markers, mainly circles, triangles, and squares, both filled and open, of differing size and clarity. The model captured 97% of objects in a defined set of test images, finding their centers to within a few pixels of manually selected sites. NIST's Adele Peskin said the technique could find use in other applications, because "object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate."
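After a segmentation network such as the modified U-Net produces a mask, recovering object centers is a standard post-processing step. The sketch below assumes a binary mask and uses SciPy's connected-component labeling; it illustrates the task, not NIST's pipeline:

```python
# Sketch of center extraction from a segmentation mask (assumed inputs:
# a binary mask like a U-Net would output; objects here are hand-placed).
import numpy as np
from scipy import ndimage

mask = np.zeros((64, 64), dtype=bool)        # stand-in for a network's output
mask[10:14, 10:14] = True                    # a small filled "square" marker
mask[40:43, 50:53] = True                    # another small dense object

labels, n = ndimage.label(mask)              # label connected components
centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
print(f"{n} objects, centers: {[tuple(np.round(c, 1)) for c in centers]}")
```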

Full Article

 

Using AI to Smell the Roses
UC Riverside News
Iqbal Pittalwala
July 28, 2020


Researchers at the University of California, Riverside (UCR) used machine learning to understand what a chemical smells like to humans. UCR's Anandasankar Ray and Joel Kowalewski developed a technique for a computer to learn the chemical features that activate known human odorant receptors (ORs), then screened about 500,000 compounds for molecules that bind 34 ORs. The team then tested whether their OR activity-estimating algorithm could predict diverse perceptual qualities of odorants, and found that such activity accurately predicted 146 distinct chemical percepts, with only a few ORs required for prediction. Said Ray, "The machine learning algorithm can eventually predict how a new chemical will smell, even though we may initially not know if it smells like a lemon or a rose."
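The described two-stage setup, chemical features to OR activity and then OR activity to percept, might be sketched as follows, with synthetic data standing in for the real chemical descriptors and receptor measurements:

```python
# Hedged two-stage sketch: stage 1 predicts activation of 34 ORs from
# chemical features; stage 2 predicts a percept from the OR activity
# pattern. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
chem = rng.random((1000, 50))                       # chemical feature vectors
or_activity = (chem[:, :34] > 0.7).astype(int)      # toy labels for 34 ORs
percept = (or_activity[:, :3].sum(axis=1) > 1).astype(int)   # toy "rose-like"

stage1 = [RandomForestClassifier(n_estimators=50, random_state=0)
          .fit(chem, or_activity[:, i]) for i in range(34)]
stage2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(or_activity, percept)

new_chem = rng.random((5, 50))                      # previously unseen chemicals
pred_activity = np.column_stack([m.predict(new_chem) for m in stage1])
print("predicted percept:", stage2.predict(pred_activity))
```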

Full Article

 

 

How to Optimize Neural Networks on a Brain-Inspired Computer
HPCwire
July 28, 2020


A study by scientists at Germany’s Heidelberg University and the Max Planck Institute for Dynamics and Self-Organization reveals how "critical states" can be used to optimize artificial neural networks running on brain-inspired neuromorphic hardware. Critical states are the points at which systems can quickly and fundamentally change their overall characteristics. Although they are widely assumed to be optimal for computation in recurrent neural networks, the researchers found that criticality is not beneficial for every task. In an experiment performed on a prototype of the analog neuromorphic BrainScaleS-2 chip, the researchers found that changing input strength permits easy adjustment of the distance to criticality. They also showed a clear relationship between criticality and task performance, finding that only complex, memory-intensive tasks benefited from criticality.

Full Article

 

 

Air Force Readies Launch of In-Orbit Network to Support AI Applications in Space
The Wall Street Journal
Sara Castellanos
July 30, 2020


The U.S. Air Force is partnering with technology startup Hypergiant Industries to build the first software-based U.S. satellite system that could eventually conduct real-time data analysis for artificial intelligence (AI) applications in orbit. The initial deployment of the Chameleon Constellation is scheduled for next February, with the system to consist of about 36 satellites. The project aims to modernize military satellites through remote software upgrades for rapid reprogramming—for example, to leverage new AI algorithms that could be used to better detect threats. Srinivas Bettadpur, director of the Center for Space Research at the University of Texas at Austin, said the project could help military satellites better understand and detect anomalies on Earth on their own. “That’s the desire, to have that level of autonomy, where you can put up an instrument right away and figure out what is going on,” he said.

Full Article

*May Require Paid Registration

 

 

AI Identifies Prostate Cancer With Near-Perfect Accuracy
SciTechDaily
July 27, 2020


Researchers at the University of Pittsburgh and the University of Pittsburgh Medical Center (UPMC) trained an artificial intelligence (AI) program to recognize and characterize prostate cancer, achieving the highest accuracy rate to date. The researchers used images from more than 1 million parts of stained tissue slides taken from patient biopsies to train the AI, then tested the algorithm on a set of 1,600 slides from 100 consecutive patients seen for suspected prostate cancer. The AI demonstrated 98% sensitivity and 97% specificity at detecting prostate cancer. The algorithm also is the first to achieve high performance for tumor grading, sizing, and invasion of the surrounding nerves. Said UPMC's Rajiv Dhir, "Algorithms like this are especially useful in lesions that are atypical. A nonspecialized person may not be able to make the correct assessment." Dhir added that new algorithms will need to be trained to detect other types of cancer.

Full Article

 

Daniel Tauritz

Aug 24, 2020, 8:13:17 AM
to ai-b...@googlegroups.com

AI Program Beats Human Pilot In Simulated Dogfight

Fortune (8/20, Pressman) reports that during the Defense Advanced Research Projects Agency’s (DARPA) “AlphaDogfight Trials,” a human F-16 pilot engaged in a simulated dogfight with an F-16 controlled by an AI program. The AI “program won all five rounds in under two minutes, showing the technology’s promise.” In the simulation the aircraft “exceeded speeds of 500 miles per hour and pulled 9 Gs as they twisted and turned through the virtual airspace. Each craft was armed with essentially a laser beam that simulated the use of machine guns.” One “strength of the winning program, created by a small defense contractor called Heron Systems, was its ability to aim its guns more accurately during the high-speed dogfight than the human pilot.” However, Col. Daniel Javorsek, “who oversees the A.I. piloting program at DARPA, said the results come with ‘plenty of caveats and disclaimers.’”

 

 

Facebook AI Takes MRI Images Four Times Faster by 'Recreating Missing Parts'—Experts Can't Tell the Difference
The Daily Mail
Ryan Morrison
August 18, 2020


Facebook and experts from New York University have developed an open source artificial intelligence (AI) model for magnetic resonance imaging (MRI) scanners that can reconstruct missing sections in less detailed scans. The developers trained the fastMRI model on thousands of full scans, and tests with six radiographers showed the method was accurate and produced higher-quality images four times faster than normal. The researchers said the model could "make MRI as fast as or faster than an X-ray," but with much more data available for diagnosis and without radiation. "We are using AI to create complete images from far less raw data,” Facebook said. “Since collecting that data is what makes MRIs so slow, this has the potential to speed up the scanning process significantly."
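The problem setting is easy to demonstrate: an accelerated scan collects only a fraction of k-space, and a naive reconstruction loses detail that the learned model must restore. The sketch below shows only the undersampling arithmetic, with a random image standing in for anatomy; it contains none of the fastMRI model itself:

```python
# Illustration of 4x undersampled MRI: keep every 4th k-space line,
# reconstruct with a zero-filled inverse FFT, and measure what is lost
# (the gap a learned reconstruction model is trained to close).
import numpy as np

rng = np.random.default_rng(8)
image = rng.random((256, 256))               # stand-in for anatomy
kspace = np.fft.fft2(image)                  # the scanner measures k-space

mask = np.zeros(256, dtype=bool)
mask[::4] = True                             # sample every 4th line: ~4x faster scan
undersampled = kspace * mask[:, None]

zero_filled = np.abs(np.fft.ifft2(undersampled))
err = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"zero-filled reconstruction error: {err:.1%}")
```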
 

Full Article

 

 

Peregrine Swoops on Flaws in 3D Printing
The Engineer
August 17, 2020

Researchers at the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) have developed an artificial intelligence software package for powder bed three-dimensional printers that performs real-time part-quality analysis. Peregrine is a novel convolutional neural network that employs a custom algorithm to process the pixel values of images, accounting for the composition of edges, lines, corners, and textures. Peregrine alerts operators upon detecting a flaw that may affect part quality, so that they can make appropriate adjustments. The software generates a common image database that can be transferred to new machines to rapidly train new neural networks, while operating on a single high-powered laptop or desktop. Peregrine is being tested on multiple printers at ORNL, and forms part of the Transformational Challenge Reactor Demonstration Program pursuing the world's first additively manufactured nuclear reactor.
 

Full Article

 

Analysis: Robot Scientists Are Learning To Develop Their Own Research

Discover Magazine (8/10, Walter) reports on Adam, “the first machine to fully automate the scientific process and make a discovery on its own,” and the man behind the machine, Ross King, “a professor of machine intelligence at Chalmers University of Technology in Sweden.” He started building Adam “in 2004 to study enzymes in yeast, and later created a second robot – aptly named Eve – to search for potential malaria drugs.” Automation “is becoming more common in modern-day labs, and AI can assist researchers with myriad projects.” It’s the combination of “both automation and AI to equip machines with the ability to carry out each step of the scientific process – forming hypotheses, conducting experiments, analyzing data and drawing conclusions – that puts these bots in a class of their own.”

 

Meet The Female Computer Scientist Whose Research On Facial Recognition Got Big Tech To Back Down

Fast Company (8/4, Farley) reports on how Joy Buolamwini, a computer scientist and activist at MIT’s Media Lab, “got Jeff Bezos to back down” after her research into racial, skin, and gender disparities in facial recognition technology “revealed how systems from Amazon, IBM, Microsoft, and others were unable to classify darker female faces as accurately as those of white men – effectively shattering the myth of machine neutrality.” Currently, Buolamwini “is galvanizing a growing movement to expose the social consequences of artificial intelligence.” Through her “nearly four-year-old nonprofit, the Algorithmic Justice League (AJL), she has testified before lawmakers at the federal, state, and local levels about the dangers of using facial recognition technologies with no oversight of how they’re created or deployed.” Since George Floyd’s death, “she has called for a complete halt to police use of face surveillance, and is providing activists with resources and tools to demand regulation.”

 

 

The Hack That Could Make Face Recognition Think Someone Else Is You
Technology Review
Karen Hao; Patrick Howell O'Neill
August 5, 2020


McAfee researchers used machine learning to show that modern facial recognition systems can be tricked into seeing someone else. They fed 1,500 photos of the two project leads into an image translation algorithm known as CycleGAN to morph them into one another. The CycleGAN eventually created a faked image that looked like person A to the naked eye but was identified as person B by the facial recognition system. Said McAfee's Steve Povolny, "[Artificial intelligence] and facial recognition are incredibly powerful tools to assist in the pipeline of identifying and authorizing people. But when you just take them and blindly replace an existing system that relies entirely on a human without having some kind of a secondary check, then you all of a sudden have introduced maybe a greater weakness than you had before."
 

Full Article

 

 

Super-Resolution Method Reveals the Fine Details
Texas A&M Today
Vandana Suresh
August 13, 2020


Researchers at Texas A&M University, Virginia Commonwealth University, and the U.S. Air Force Research Laboratory have developed an image processing technique to improve the quality of low-resolution electron micrographs without compromising the integrity of specimen samples. They were able to further enhance details in lower-resolution images by training deep neural networks on pairs of images from the same sample at different physical resolutions. Said Texas A&M's Yu Ding, "[W]ith our image processing techniques, we can super resolve an entire image by using just a few smaller-sized, high-resolution images. This method is less destructive since most parts of the specimen sample needn't be scanned with high-energy electron beams." The researchers found that their algorithm could enhance hard-to-discern features in low-resolution images by up to 50%, though the technique requires a significant amount of processing power.
 

Full Article

 

 

Google Uses Crowdsourced Photos to Recreate Landmarks in 3D for AR/VR
VentureBeat
Jeremy Horwitz
August 12, 2020


Researchers at Google have created a method to automate and improve the three-dimensional (3-D) modeling process using a neural network. Google's NeRF-W, or neural radiance fields in the wild, system uses crowdsourced photos of a location to convincingly recreate landmarks and lighting in 3-D for users of virtual reality and augmented reality devices. Standard NeRF aims to determine where light rays terminate in order to extract 3-D depth data from two-dimensional images. NeRF-W goes further by using "in-the-wild photo collections" as inputs to broaden the computer's ability to view landmarks from multiple angles. It also can separate out photographic and environmental variations and shot-to-shot object differences, and recreate scenes with radiance guidance for scene lighting and shadowing. A comparison of standard NeRF and NeRF-W results shows that the new system allows users to experience complex architecture as it actually appears, with time-of-day and weather variations.
 

Full Article

 

 

Baby-Mounted Cameras Teach AI to 'See' Through Child's Eyes
New Scientist
Chris Stokel-Walker
August 11, 2020


New York University (NYU) researchers used video footage captured by head-mounted cameras worn by children throughout their first three years of life to train an artificial intelligence (AI) neural network to extract meaning from the video. Using the SAYCam dataset, the AI learned to recognize objects that appear repeatedly, often by extending its focus beyond the objects. NYU's Brenden Lake said the algorithm does not identify objects as a child would, but these findings offer "a proof of concept that [visual features] are learnable with enough naturalistic data." He acknowledged that AI requires a significant amount of data and labels in order to perform tasks with child-level adeptness. However, algorithms that can be taught to learn as children do could potentially become more intuitive.
 

Full Article

 

 

AI Dupes Humans Into Believing Synthesized Sound Effects Are Real
IEEE Spectrum
Michelle Hampson
August 10, 2020


Researchers from the University of Texas at San Antonio (UTSA) have engineered a machine learning artificial intelligence that synthesizes its own sound effects to match scenes by analyzing motion in video frames. The AutoFoley model produces sounds authentic enough to fool humans into thinking they are real effects, as produced by Foley studio artists. UTSA's Jeff Prevost and Sanchita Ghose developed two models that identify actions in a video and ascertain the appropriate sound, by respectively extracting image features from frames and analyzing the temporal relationship of an object in separate frames. The second model applies relational reasoning to compare different frames temporally and predict what action is occurring. In the last step, AutoFoley synthesizes sound to match the predicted activity or movement; 73% of 57 college students thought the soundtrack synthesized by the first model was real, versus 66% who found the second model's soundtrack convincing.
 

Full Article

 

Daniel Tauritz

Aug 31, 2020, 8:45:20 AM
to ai-b...@googlegroups.com

White House Announces AI, Quantum Research Institutes
VentureBeat
Kyle Wiggers
August 26, 2020


The White House announced the creation of 12 new artificial intelligence (AI) and quantum information science research institutes, to be funded by federal agencies. The Trump Administration said the U.S. National Science Foundation will invest $100 million in five AI institutes over five years, in partnership with the Department of Agriculture (USDA)'s National Institute of Food and Agriculture, the Department of Homeland Security's Science and Technology Directorate, and the Department of Transportation's Federal Highway Administration. USDA will separately fund two institutes of its own, with focus areas including "user-driven trustworthy AI" for weather, climate, and coastal hazards applications, and theoretical challenges like neural architecture optimization. Meanwhile, the Department of Energy will invest $625 million in five quantum information science research centers, whose objectives will include surmounting obstacles in quantum state resilience, controllability, and scalability.

Full Article

 

For First Time, Researchers Use AI To Confirm 50 Exoplanets From Telescope Data

CNET News (8/25, Kooser) reports that for the first time, a research team has discovered how to utilize artificial intelligence to sort through telescope data to confirm exoplanets. Telescopes such as NASA’s Transiting Exoplanet Survey Satellite (TESS) “look for a telltale dip in brightness that indicates something is passing by a star. Sometimes this is a planet, sometimes it’s a glitch, asteroids, dust or a quirk of a binary star system.” The team “created a machine learning algorithm and trained it using data on confirmed planets and false-positives from NASA’s retired Kepler mission.” They then used it “to analyze a group of unconfirmed planet candidates, also from the Kepler data. In a first, the AI system confirmed 50 planets out of that bunch.”

Fortune (8/25, Hackett) reports that the authors of the study, which was published in the Monthly Notices of the Royal Astronomical Society, wrote in an abstract, “Our models can validate thousands of unseen candidates in seconds.” Given the “gargantuan size of many astronomical datasets, the method could greatly boost the speed of discovery for world-hunting.”

The Daily Mail (UK) (8/25, Liberatore) reports that University of Warwick’s David Armstrong, who led the research team, said, “In terms of planet validation, no-one has used a machine-learning technique before. ... Machine learning has been used for ranking planetary candidates but never in a probabilistic framework, which is what you need to truly validate a planet.” Armstrong added, “We hope to apply this technique to large samples of candidates from current and future missions like TESS and PLATO.”
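Stripped to its core, probabilistic validation of this kind looks like the sketch below: train on confirmed planets versus known false positives, then accept unseen candidates only above a high probability bar. The features, class counts, and 0.99 threshold are all invented for illustration:

```python
# Sketch of probabilistic planet validation (synthetic stand-in features;
# the real work used Kepler transit diagnostics and a calibrated model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
planets = rng.normal(1.0, 0.5, (500, 6))       # transit-shape diagnostics
false_pos = rng.normal(-1.0, 0.5, (500, 6))    # glitches, binaries, dust...
X = np.vstack([planets, false_pos])
y = np.r_[np.ones(500), np.zeros(500)]

clf = LogisticRegression(max_iter=1000).fit(X, y)

candidates = rng.normal(0.0, 1.5, (2000, 6))   # unconfirmed candidates
prob = clf.predict_proba(candidates)[:, 1]
validated = candidates[prob > 0.99]            # high bar to call it "validated"
print(f"{len(validated)} candidates validated in one pass")
```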

 

NASA Looking To Machine Learning To Detect Cyber Threats

ExecutiveGov (8/24, Rivers) reports NASA Associate Chief Information Officer (ACIO) for Cybersecurity & Privacy and Senior Agency Information Security Officer (SAISO) Mike Witt “has said the agency is working to apply zero-trust principles through emerging technologies and partnerships with the Department of Defense (DoD), FedScoop reported Friday.” Witt “told attendees at an ACT-IAC webinar that NASA is investing in an enterprise-wide system that will leverage machine learning to collate system logs and detect cyber threats.” According “to Witt, NASA is also partnering with the DoD and intelligence community in red teaming activities to improve agency capacities for network threat detection.”

 

Companies Using AI, Robotics To Pivot Supply Chains During COVID-19

TechTarget (8/21, Sutner) reported artificial intelligence and robots “are transforming supply chains after the COVID-19 pandemic disrupted manufacturing and logistics systems around the world.” Many enterprises “that already used AI and machine learning-driven robots for supply chain responded quickly, stepping up investments in the technologies to automate movement of goods from the factory floor to distributors to retail shelves and consumers’ doorsteps.” Among the “most useful applications for AI in the supply chain for consumer goods multinational Unilever is its ability to remedy stocking problems.” Now, Unilever “uses AI algorithms to factor in more dynamic features, such as ordering patterns in the current manufacturing environment, as opposed to past economic conditions, and more flexible load pooling – bundling of stock bound for a specific region.”

 

Car Manufacturers Using AI To Help Drivers Stay Calm Behind Wheel

In a piece for TIME (8/26), Corinne Purtill writes about driving in a Nissan Leaf, saying, “I am unwittingly communicating my stress to this vehicle in countless subtle ways: the strength of my grip on the steering wheel, the slight expansion of my back against the seat as I breathe, the things I mutter to myself...” She writes that “a calm voice” asks her, “‘What’s stressing you out right now?’ The conversation that ensues offers a window into the ways in which artificial intelligence could transform our experience behind the wheel: not by driving the car for us, but by taking better care of us as we drive.”

 

 

F-16 Pilot Took on AI in a Dogfight. Here's Who Won
Fortune
Aaron Pressman
August 20, 2020


The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) recently hosted virtual "AlphaDogfight Trials," in which an artificial intelligence (AI) program easily beat a U.S. Air Force officer in a simulated dogfight. The AI won all five rounds of the trials in under two minutes. Each F-16 aircraft was armed with a laser simulating the use of machine guns, and the human pilot wore a virtual reality headset that made it appear as though he were in the cockpit of a real plane. DARPA's Col. Daniel Javorsek said the results come with "plenty of caveats and disclaimers" and do not mean the program could have won in actual combat. The winning program from defense contractor Heron Systems, which was able to aim its guns more accurately than the human pilot, beat out programs developed by the Georgia Institute of Technology and Lockheed Martin, among others.

Full Article

 

 

AI Automatic Tuning Delivers Step Forward in Quantum Computing
University of Oxford (U.K.)
August 18, 2020


Researchers at the U.K.'s Oxford and Lancaster universities, Switzerland's University of Basel, and Google's DeepMind artificial intelligence laboratory have developed a machine learning algorithm able to automatically engage with and tune a quantum device faster than human experts can, without human input. This "Minecraft explorer for quantum devices" takes an approach modeled after Minecraft gameplay, searching for the right operating conditions for a quantum device by exploring the space of parameters defined by voltages. Oxford's Natalia Ares said, "Our ultimate goal is to fully automate the control of large quantum circuits, opening the path to completely new technologies which harness the particularities of quantum physics."

Full Article

 

 

Rewriting the Rules of Machine-Generated Art
MIT News
Kim Martineau
August 18, 2020


Massachusetts Institute of Technology (MIT) researchers have shown it is possible to edit deep layers of neural networks to generate images never seen before. Generative adversarial networks (GANs) typically are trained on massive datasets, but MIT’s study suggests large datasets are not essential to the process. Said MIT's David Bau, "We’re like prisoners to our training data. GANs only learn patterns that are already in our data, but here I can manipulate a condition in the model to create horses with hats. It’s like editing a genetic sequence to create something entirely new, like inserting the DNA of a firefly into a plant to make it glow in the dark.” The tool has immediate applications in computer graphics, and in teaching expert AI systems to recognize rare features and events through data augmentation.

Full Article

 

 

AI-Based Traffic Management Gets Green Light
ZDNet
Greg Nichols
August 17, 2020


The NoTraffic autonomous traffic management company has deployed an artificial intelligence (AI)-driven traffic management system in Phoenix, AZ, switching traffic-light coordination from a timer-based model to one based on demand. The goal is to improve traffic flow and cut vehicle and pedestrian delays at intersections, and the system has reduced vehicle delays by up to 40% in some instances. The NoTraffic platform monitors road assets as they approach an intersection and calculates optimal service for the intersection in real time, autonomously changing signals accordingly. Phoenix Street Transportation director Kini Knudson said, "We are now seeing the convergence of technology-enabled automobiles and traffic management systems working together to move vehicles more effectively through busy corridors."

Full Article

 

 

Elephants vs Trains: AI Helps Ensure They Don't Collide
ZDNet
Anna Solana
August 17, 2020


Researchers at Spain’s Polytechnic University of Catalonia (UPC)-BarcelonaTech developed a "smart ear" to help prevent fatal collisions involving elephants and trains in India, where trains on the Siliguri-Jalpaiguri railway line have struck and killed more than 200 elephants over the last decade. The researchers placed recorders and cameras on train tracks in West Bengal, then analyzed the recorded sounds using machine learning techniques to differentiate elephant sounds from others. As a result, the “smart ear” was able to identify elephant sounds from distances of 1 kilometer, and images of elephants from 250 meters, during daylight hours. When an elephant is detected, the system can send real-time alerts to train drivers’ phones. Said UPC's Michel André, "We should listen to wildlife to detect their presence and avoid confrontation to achieve a better coexistence between man and nature. Today's bioacoustics can meet that challenge."
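One physical cue such a detector can exploit is that elephant rumbles concentrate acoustic energy at very low frequencies. The fixed-threshold sketch below illustrates that cue only; the UPC system uses trained machine learning models rather than a hand-set rule:

```python
# Toy low-frequency detector: compare sub-40 Hz band energy against total
# spectral energy. The threshold, sample rate, and synthetic "rumble" are
# assumptions for illustration, not the deployed system's logic.
import numpy as np

FS = 1000                                          # sample rate, Hz

def low_band_ratio(signal):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    return spectrum[freqs < 40].sum() / spectrum.sum()

t = np.arange(FS * 2) / FS                         # 2 seconds of audio
rumble = (np.sin(2 * np.pi * 20 * t)               # 20 Hz elephant-like rumble
          + 0.1 * np.random.default_rng(7).normal(size=t.size))
print(f"low-band energy ratio: {low_band_ratio(rumble):.2f}  -> alert if > 0.5")
```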

Full Article

 

 

ML Reveals Role of Culture in Shaping Meanings of Words
Princeton Engineering News
Rachel Nuwer
August 14, 2020


Researchers from Princeton University, the University of Wisconsin, Madison, and the U.K.'s University of Bristol conducted a machine learning study of dozens of languages, and found that culture, history, and geography wield a sizable influence on word meanings. Machine learning models analyzed more than 1,000 words in 41 languages, and an algorithm examined neural networks trained on various languages to compare millions of semantic associations. Another algorithm compared similarities of the languages' root cultures based on an anthropological dataset, and could predict the ease of translation between languages according to those similarities. Said Princeton’s William Thompson, “One way to look at what we’ve done is a data-driven way of quantifying which words are most translatable.”

Full Article

 

 

Researchers Propose Using 'Rare Word' Dictionaries to Bolster Unsupervised Language Model Training
Venture Beat
Kyle Wiggers
August 13, 2020


Researchers at Microsoft and China’s Peking and Nankai universities collaborated on a method to increase the efficiency of unsupervised language model pretraining. Taking Notes on the Fly (TNF), inspired by human note-taking, features a note dictionary that saves a rare word's contextual information when the rare word occurs and uses that information to enhance a sentence's semantics when the rare word occurs again. Because the note dictionary is updated on the fly, TNF involves little computational overhead at pretraining, and the note dictionary is discarded when pretraining is complete. The researchers found that TNF accelerates models based on Google’s Bidirectional Encoder Representations from Transformers (BERT) technique throughout the pretraining process; one model reached BERT performance within two days, compared to nearly six days for a TNF-free BERT model. Said the researchers, "If trained with the same number of updates, TNF outperforms original BERT pre-training by a large margin in downstream tasks."
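The note dictionary might work roughly as sketched below, where a rare word's context vector is saved as an exponential moving average and added back on its next occurrence. The mechanics shown here (window size, decay rate, plain word embeddings) are assumptions for illustration; the paper integrates this into BERT-style pretraining:

```python
# Sketch of a Taking-Notes-on-the-Fly-style note dictionary: save a moving
# average of a rare word's observed contexts, and use the saved note to
# enrich the rare word's representation when it appears again.
import numpy as np

DIM, DECAY = 64, 0.9
rng = np.random.default_rng(5)
vocab = ["the", "cat", "sat", "on", "tessellate"]   # "tessellate" plays the rare word
embedding = {t: rng.random(DIM) for t in vocab}
rare = {"tessellate"}
notes: dict[str, np.ndarray] = {}            # discarded once pretraining ends

def context_vector(tokens, i, window=2):
    # mean embedding of the words around position i
    ctx = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
    return np.mean([embedding[t] for t in ctx], axis=0)

def encode(tokens):
    vecs = [embedding[t].copy() for t in tokens]
    for i, t in enumerate(tokens):
        if t in rare:
            # take a note on the fly: moving average of observed contexts
            notes[t] = DECAY * notes.get(t, 0) + (1 - DECAY) * context_vector(tokens, i)
            vecs[i] += notes[t]              # enrich the rare word's semantics
    return np.stack(vecs)

print(encode(["the", "cat", "sat", "on", "tessellate"]).shape)   # (5, 64)
```

Because the dictionary is only consulted and updated during pretraining, it adds little overhead and can simply be thrown away afterward.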

Full Article

 

Daniel Tauritz

Sep 5, 2020, 6:32:57 PM
to ai-b...@googlegroups.com

Google Announces $5 Million Partnership With NSF To Fund Human-AI Research Initiative

CNET News (9/2, Smith) reports, “Google on Wednesday announced a $5 million grant and partnership with the US National Science Foundation to create a National AI Research Institute for Human-AI Interaction and Collaboration.” In addition, Google “said it will provide expertise in the field of artificial intelligence, collaborate on research and offer cloud computing support for researchers and educators working with the program.”

 

 

Toward an ML Model That Can Reason About Everyday Actions
MIT News
Kim Martineau
August 31, 2020


Researchers from the Massachusetts Institute of Technology (MIT), Columbia University, and IBM have trained a hybrid language-vision machine learning model to recognize abstract concepts in video. The researchers used the WordNet word-meaning database to map how each action-class label in MIT's Multi-Moments in Time and DeepMind's Kinetics datasets relates to the other labels in both datasets. The model was trained on this graph of abstract classes to generate a numerical representation for each video that aligns with word representations of the depicted actions, then combine them into a new set of representations to identify abstractions common to all the videos. When compared with humans performing the same visual reasoning tasks online, the model performed as well as they did in many situations. MIT's Aude Oliva said, "A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making."
 

Full Article

 

 

Foiling Illicit Cryptocurrency Mining with AI
Los Alamos National Laboratory News
August 20, 2020


Computer scientists at the U.S. Department of Energy's Los Alamos National Laboratory have developed an artificial intelligence (AI) system that could potentially detect malware that hijacks supercomputers for cryptocurrency mining. The system compares the contours in an algorithm's flow-control graph to a catalog of graphs for programs permitted to run on a given computer, but also checks for the presence of a graph that identifies programs that should be running. The researchers compared a known benign code to a malicious bitcoin-mining code with their system, which identified the illicit mining operation faster and more reliably than conventional, non-AI analyses. The system's reliance on graph comparisons renders it immune to common code-masking deceptions that illicit cryptocurrency miners employ.
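The allowlist check reduces to graph comparison. The toy control-flow graphs below illustrate the idea with an exact isomorphism test; the Los Alamos system works with far richer graph features than this:

```python
# Toy version of allowlist-by-graph-comparison: model each program's
# control flow as a directed graph and admit only graphs matching an
# approved catalog. Graphs here are hand-made stand-ins.
import networkx as nx

def cfg(edges):
    """Build a control-flow graph from (block -> block) edges."""
    return nx.DiGraph(edges)

allowed = [cfg([(0, 1), (1, 2), (2, 1), (2, 3)])]          # approved simulation code
suspect = cfg([(0, 1), (1, 1), (1, 2), (2, 0), (2, 3)])    # tight hash-grinding loop

def approved(graph):
    return any(nx.is_isomorphic(graph, g) for g in allowed)

print("allowed to run:", approved(suspect))                 # False -> flag it
```

Because the comparison operates on flow structure rather than code text, simple renaming or obfuscation tricks used by illicit miners do not change the graph being checked.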

Full Article

 

 

AI Identifies 50 New Planets From Old NASA Data
CNN
Jessie Yeung
August 26, 2020


Machine learning artificial intelligence (AI) developed by astronomers and computer scientists at the U.K.'s University of Warwick found 50 new planets by mining old data from the U.S. National Aeronautics and Space Administration (NASA). The researchers trained the algorithm on data collected by the Kepler Space Telescope, teaching it to differentiate real planets from false positives. The AI was then tasked with analyzing old datasets of planetary candidates, in which it discovered the 50 previously unknown exoplanets. Warwick's David Armstrong said this is the first time machine learning has been used to rank planetary candidates in a probabilistic framework, and the research suggests the AI could "validate thousands of unseen candidates in seconds."

Full Article

 

 

Scientists Use Reinforcement Learning to Train Quantum Algorithm
Argonne National Laboratory
Jared Sagoff
August 25, 2020


Researchers at the Argonne National Laboratory, the Illinois Institute of Technology (Illinois Tech), Clemson University, and Los Alamos National Laboratory developed an algorithm to optimize the Quantum Approximate Optimization Algorithm (QAOA) using reinforcement learning. QAOA is a hybrid quantum-classical algorithm that enables a quantum computer to solve combinatorial optimization problems, where the solution space grows exponentially larger as the number of decision variables is expanded. The new algorithm learns how to configure QAOA through a feedback mechanism that trains it with "rewards" and "penalties" based on its performance. Said Illinois Tech's Sami Khairy, "It's an iterative procedure that allows us to improve how the computation is running. It learns a better way to assign new parameters, and we want to assign good parameters as fast as possible."
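The reward-and-penalty feedback loop can be caricatured in a few lines. This is a toy hill-climbing loop over a stand-in objective, not the paper's trained policy or a real QAOA circuit; it shows only how reward feedback steers parameter assignment:

```python
# Toy feedback loop for QAOA-style parameter tuning: propose new (gamma,
# beta) angles, treat the measured objective as reward, and reinforce
# proposals that improve it. The objective below is a stand-in function,
# not a quantum circuit evaluation.
import numpy as np

rng = np.random.default_rng(6)

def qaoa_objective(gamma, beta):             # hypothetical circuit-evaluation stub
    return np.sin(2 * gamma) * np.cos(beta) + 1.0

theta = rng.random(2)                        # current (gamma, beta)
best_reward = qaoa_objective(*theta)
step = 0.3
for episode in range(200):
    proposal = theta + rng.normal(0, step, 2)    # learner proposes an update
    reward = qaoa_objective(*proposal)
    if reward > best_reward:                     # "reward": keep and explore more
        theta, best_reward = proposal, reward
        step *= 1.05
    else:                                        # "penalty": shrink exploration
        step *= 0.97
print(f"gamma={theta[0]:.2f}, beta={theta[1]:.2f}, objective={best_reward:.3f}")
```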
 

Full Article

 

 

UCF Researchers Develop AI to Detect Fentanyl, Derivatives Remotely
University of Central Florida
Robert Wells
August 25, 2020


Researchers at the University of Central Florida (UCF) have developed an artificial intelligence (AI) method of quickly and remotely detecting the drug fentanyl and its derivatives by using infrared light spectroscopy in a portable tabletop device. The researchers said their AI algorithm identified molecules related to fentanyl with a 92.5% accuracy rate during testing. The researchers used a national organic molecules database to identify molecules with at least one of the functional groups found in the parent compound fentanyl, then developed machine-learning algorithms to identify those molecules based on their infrared spectral properties. UCF’s Subith Vasu said the AI algorithm will be used in a device the researchers are building for the U.S. Defense Advanced Research Projects Agency.
 

Full Article

 

 

Machine Learning Peeks Into Nano-Aquariums
University of Illinois News Bureau
Lois Yoksoulian
August 24, 2020


Researchers at the University of Illinois, Urbana-Champaign (U of I) have developed a machine learning workflow that streamlines the process of interpreting data from watching nanoparticles interact via liquid-phase electron microscopy. The machine learning workflow builds off an existing neural network, known as U-Net, that successfully identified irregular cellular features using other types of microscopy. Measurements from about 300,000 pairs of interacting nanoparticles were collected for the study. Said U of I's Qian Chen, "Developing a method even to see these particles was a huge challenge. Figuring out how to efficiently get the useful data pieces from a sea of outliers and noise has become the new challenge.” The researchers have made the source code for the machine learning program publicly available.

Full Article

 

 

Robot Boats Leave Autonomous Cars in Their Wake
The Wall Street Journal
Christopher Mims
August 29, 2020


U.K. ocean-research nonprofit Promare will launch an autonomous boat next spring to retrace the trans-Atlantic journey of the Mayflower in a promotional demonstration of technology that can substitute for crew on voyages deemed too costly or risky. Marine AI is building the unmanned Mayflower, which is being programmed to manage storms. IBM's Andy Stanford-Clark said another challenge is programming the ship's artificial intelligence not only to comply with maritime regulations in international waters, but also to explain its decisions. The ship's software is partly derived from repurposed financial-services technology, while Promare trained its computer-vision system on images of vessels, buoys, and floating debris.

Full Article

*May Require Paid Registration

 

 

Machines Rival Expert Analysis of Stored Red Blood Cell Quality
Ryerson University News
August 24, 2020


A three-year, multi-institutional study by 19 researchers at 12 institutions in five countries has yielded artificial intelligence (AI) models that can automate evaluations of stored red blood cell (RBC) quality that match or surpass expert assessment. The investigators used 40,900 cell images to teach neural networks to classify RBCs into six categories; a fully supervised machine learning algorithm agreed with human experts’ assessments 77% of the time (even human experts only agree 83% of the time). Ryerson University's Michael Kolios calls this achievement "a testament to how technology and science are now interconnecting to solve today's biomedical problems."

Full Article

 
