Dr. T's AI brief


Daniel Tauritz

Oct 25, 2020, 12:34:15 PM
to ai-b...@googlegroups.com

A Radical Technique Lets AI Learn with Practically No Data
MIT Technology Review
Karen Hao
October 16, 2020


Scientists at Canada's University of Waterloo suggest artificial intelligence (AI) models should be capable of “less than one”-shot (LO-shot) learning, in which the system accurately recognizes more objects than the number of examples on which it was trained. They demonstrated this concept with the 60,000-image MNIST computer-vision training dataset, building on previous work by Massachusetts Institute of Technology researchers that distilled it into 10 images, engineered and optimized to contain an amount of information equivalent to the full set. The Waterloo team compressed the dataset further by generating images that combine multiple digits and feeding them into an AI model with hybrid, or soft, labels. Said Waterloo’s Ilia Sucholutsky, “The conclusion is, depending on what kind of datasets you have, you can probably get massive efficiency gains.”
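The soft-label idea behind LO-shot learning can be sketched in a few lines. In this toy illustration (the prototypes, soft labels, and 1-D "distance" are all invented, not the authors' distilled-MNIST setup), two training points carry soft labels over three classes, and a distance-weighted soft-label nearest-neighbor rule recovers all three classes from just two examples:

```python
def soft_label_knn(prototypes, query):
    """Blend each prototype's soft label, weighted by inverse distance."""
    n_classes = len(prototypes[0][1])
    scores = [0.0] * n_classes
    for point, soft_label in prototypes:
        weight = 1.0 / (abs(point - query) + 1e-9)  # toy 1-D distance
        for c, p in enumerate(soft_label):
            scores[c] += weight * p
    return scores.index(max(scores))

# Two prototypes, three classes: class 1 exists only in the soft labels.
prototypes = [
    (0.0, [0.6, 0.4, 0.0]),  # mostly class 0, some class 1
    (2.0, [0.0, 0.4, 0.6]),  # mostly class 2, some class 1
]

print(soft_label_knn(prototypes, -0.5))  # 0: closest to the first prototype
print(soft_label_knn(prototypes, 1.0))   # 1: midpoint, soft labels combine
print(soft_label_knn(prototypes, 2.5))   # 2: closest to the second prototype
```

Class 1 is never the majority label of either prototype; it emerges only where the two soft labels overlap, which is how fewer examples than classes can still separate them.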
 

Full Article

 

 

Opening the Black Box of Neural Networks
Pacific Northwest National Laboratory
Allan Brettman
October 13, 2020


Pacific Northwest National Laboratory (PNNL) researchers used deep learning neural networks to model water molecule interactions, unearthing new data about hydrogen bonds and structural patterns. The PNNL team trained a neural network on 500,000 water clusters from a database of more than 5 million water cluster minima, relying on graph theory to extract structural patterns of the molecules' aggregation. The method provides additional analysis after the network has been trained, allowing measured structural traits of the water cluster networks to be compared with the network's predictions, which improves the network's performance in subsequent analyses. PNNL's Jenna Pope said, "If you were able to train a neural network, that neural network would be able to do computational chemistry on larger systems. And then you could make similar insights in computational chemistry about chemical structure or hydrogen bonding or the molecules’ response to temperature changes.”
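A rough sketch of the graph-theory angle (a toy of my own devising, not PNNL's code): treat a water cluster as a graph whose nodes are molecules and whose edges are hydrogen bonds, and even a simple degree profile becomes a structural fingerprint of how the molecules aggregate:

```python
from collections import Counter

def hbond_degree_profile(n_molecules, hbonds):
    """hbonds: list of (donor, acceptor) molecule-index pairs.
    Returns {number of H-bonds: how many molecules have that many}."""
    degree = Counter()
    for donor, acceptor in hbonds:
        degree[donor] += 1
        degree[acceptor] += 1
    return dict(Counter(degree[i] for i in range(n_molecules)))

# A toy 4-molecule ring cluster: each molecule donates one hydrogen bond.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(hbond_degree_profile(4, ring))  # {2: 4}: every molecule has two bonds
```

A linear chain would give a different fingerprint (two end molecules with one bond, the rest with two), which is the kind of pattern the trained network can be probed against.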

Full Article

 

 

Split-Second 'Phantom' Images Can Fool Tesla's Autopilot
Wired
Andy Greenberg
October 11, 2020


Researchers at Israel's Ben Gurion University of the Negev (BGU) found they could fool Tesla's Autopilot driver-assistance systems into reacting automatically, without warning, by flashing split-second images of phantom road signs in video on an Internet-connected billboard. BGU's Yisroel Mirsky said, "The attacker just shines an image of something on the road or injects a few frames into a digital billboard, and the car will apply the brakes or possibly swerve, and that's dangerous." The team injected frames of a phantom stop sign on digital billboards, which tricked a Tesla upgraded to the HW3 version of Autopilot, as well as a Mobileye 630 device. In an email to the researchers, Tesla said its Autopilot feature should not be considered a fully autonomous driving system, but "a driver assistance feature that is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time."

Full Article

 

 

Probing Fine-Scale Connections in the Brain
Nature
Esther Landhuis
October 19, 2020


A global consortium of researchers aims to map the 70 million neurons in the mouse brain over the next decade. Said Harvard University's Jeff Lichtman, "We're dealing with a dataset that will be on the scale of an exabyte," or 1 billion gigabytes. Lichtman noted that mapping the mouse brain connectome will be "enormous compared to anything that's been done as a single project. Connectomes are just magnificently complicated." Researchers are beginning to map living neural networks and their connections at higher resolution and scale, thanks to advances in microscopy and artificial intelligence, and to crowdsourced help from human gamers. For instance, researchers at Princeton University, Baylor College of Medicine, and the Allen Institute for Brain Science are collaborating on the creation of a new online community, Pyr, to map the mouse brain.
 

Full Article

 

Facebook’s Open Catalyst Project Uses AI To Find New Ways To Store Renewable Energy

SiliconANGLE (10/14) reports Facebook's AI research unit “said today it has teamed up with Carnegie Mellon University’s Department of Chemical Engineering on a collaborative project to use AI to accelerate quantum mechanical simulations.” Facebook AI research scientist Larry Zitnick “said in a blog post the project aims to tackle the intermittent nature of renewable energy sources such as wind and solar energy.” The idea “is to discover new ‘electrocatalysts’ that can provide a more efficient and scalable method of storing and using renewable energy.”

 

 

U.S. Government Agencies to Use AI to Cull, Cut Outdated Regulations
Reuters
David Shepardson
October 16, 2020


The White House Office of Management and Budget (OMB) said federal agencies will use artificial intelligence (AI) to remove outdated, obsolete, and inconsistent requirements across government regulations. A 2019 pilot employing machine learning algorithms and natural-language processing at the U.S. Department of Health and Human Services turned up hundreds of technical errors and outdated mandates in agency rulebooks. The White House said agencies will utilize AI and other software "to comb through thousands and thousands of regulatory code pages to look for places where code can be updated, reconciled, and generally scrubbed of technical mistakes." According to OMB director Russell Vought, the initiative would help agencies "update a regulatory code marked by decades of neglect and lack of reform." Participating agencies include the departments of Transportation, Agriculture, Labor, and the Interior.
 

Full Article

 

 

Deep Learning Takes on Synthetic Biology
The Harvard Gazette
Lindsay Brownell
October 7, 2020


Two teams of scientists from Harvard University and the Massachusetts Institute of Technology have developed machine learning algorithms that can analyze RNA-based "toehold switch" molecular sequences and predict which will reliably sense and respond to a desired target sequence. The researchers first designed and synthesized a massive toehold switch dataset, which Harvard's Alex Garruss said "enables the use of advanced machine learning techniques for identifying and understanding useful switches for immediate downstream applications and future design." One team trained an algorithm to analyze switches as two-dimensional images of base-pair possibilities, and then to identify patterns signaling whether a given image would be a good or a bad toehold via an interpretation process called Visualizing Secondary Structure Saliency Maps. The second team tackled the challenge with orthogonal techniques using two distinct deep learning architectures. Their Sequence-based Toehold Optimization and Redesign Model and Nucleic Acid Speech platforms enable the rapid design and optimization of synthetic biology components.
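The "two-dimensional image" representation can be approximated as a base-pairing matrix. The sketch below is a guess at the general encoding (the pairing rules and example sequence are mine, not the papers'): entry (i, j) marks whether bases i and j could pair, producing an image-like grid a convolutional network could consume:

```python
# Watson-Crick pairs plus the G-U wobble pair common in RNA.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pairing_matrix(seq):
    """Binary matrix of base-pair possibilities for an RNA sequence."""
    n = len(seq)
    return [[1 if (seq[i], seq[j]) in PAIRS else 0 for j in range(n)]
            for i in range(n)]

for row in pairing_matrix("GACU"):
    print(row)
```

Every sequence of length n maps to an n-by-n binary "image," which is what makes ordinary computer-vision machinery applicable to switch design.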

Full Article

 

 

WEF Says Machines Will Create More Jobs Than They Destroy, but Warns of Pandemic 'Double-Disruption'
CNBC
Ryan Browne
October 20, 2020


The World Economic Forum (WEF) forecasts that robotics and artificial intelligence advancements will result in a net increase in jobs over the next five years. Although the spread of machines and automation will eliminate 85 million jobs by 2025, 97 million new jobs will be created concurrently, for an overall gain of 12 million, according to a new WEF report. WEF predicts an equal division of labor between humans and machines by 2025, which will require significant reskilling and upskilling efforts from employers to ensure staff are prepared for the future of work. WEF said the combination of technological adoption and the Covid-19 crisis creates a "double-disruption" for workers, with the former transforming the work environment and the latter leading to job losses.

Full Article

 

 

Using AI to Convert Ancient Maps Into Satellite-Like Images
IEEE Spectrum
Michelle Hampson
October 16, 2020


Researchers at the Polytechnic University of Pernambuco in Brazil developed a machine learning algorithm that can transform ancient maps into Google-style satellite images, which could help determine how land use has changed over time and display the social and economic impacts of urbanization. The researchers used an existing artificial intelligence (AI) tool, Pix2pix, which uses two neural networks—one that creates images based on the input set, and another that judges whether a generated image is real or fake—to produce realistic-looking images based on the historical data. The researchers said they are working to improve the resolution of the resulting images.

Full Article

 

 

Australian, New Zealand Scientists Use AI to Predict Heart Disease Risk
ZDNet
Aimee Chanthadavong
October 13, 2020


An artificial intelligence (AI) system developed and trained by researchers at the University of Melbourne in Australia and the University of Otago in New Zealand can predict a person's risk of cardiovascular disease (CVD) from images of their retinas. The AI was trained on more than 70,000 digital retinal photographs from 15 diverse multi-ethnic, multi-country datasets. The researchers found the AI system performed as well as or better than experts who predict a patient's CVD risk from measurements such as blood pressure, body-mass index, total cholesterol, and glycated-hemoglobin levels. Said the researchers, "We showed that such retinal caliber measurements are correlated with CVD risk factors and are associated with incident CVD events."

Full Article

 

 

ML Helps to Predict New Violence Hot Spots
Cardiff University News (U.K.)
October 12, 2020


Researchers at the U.K.'s Cardiff University used machine learning (ML) to forecast new outbreaks of violence, by mapping reported incidents against the locations of retail sites serving alcohol and places where alcohol is not sold in 10 city centers across England and Wales. In comparison with a model mapping only alcohol-vending outlets, the combined model more accurately predicted levels of violence. Cardiff's Joseph Redfern said, "Our study provides a means to better understand where violence takes place and could inform new, targeted initiatives. The research also shows that such violence reduction strategies would be most effective if developed on a city-by-city basis, rather than implemented as a generic 'one-size-fits-all' approach."

Full Article

 

Daniel Tauritz

Oct 31, 2020, 5:35:54 PM
to ai-b...@googlegroups.com

Mayo Clinic, Google Partnering To Use Artificial Intelligence To Boost Radiation Therapy For Cancer

FierceHealthcare (10/28, Horowitz) reports, “Mayo Clinic and Google Health have announced they will use artificial intelligence to improve radiation therapy planning for cancer care.” The effort “is the first initiative in a 10-year strategic partnership between the Rochester, Minnesota-based hospital and the tech giant, announced in September 2019.” Experts in regulation “at Mayo and Google also are exploring potential approaches to get approval from... [the FDA] for a medical device that incorporates these algorithms.”

Modern Healthcare (10/28, Cohen, Subscription Publication) also reports.

 

Apple Buys Self-Learning AI Video Startup To Improve Apps

Bloomberg (10/27, Gurman) reports Apple has acquired “a startup specializing in advanced artificial intelligence and computer vision technology that may help the iPhone maker improve its own AI across a number of apps and services.” Apple acquired Barcelona-based Vilynx “earlier this year, according to people familiar with the deal.” Vilynx “developed technology that uses AI to analyze a video’s visual, audio and text content to understand what the video shows.”

 

Mark Cuban Commits $2M To AI Bootcamps To Teach AI Skills To High Schoolers

The Wall Street Journal (10/27, Castellanos, Subscription Publication) reports that billionaire Mark Cuban has committed $2 million to expand the Mark Cuban Foundation AI Bootcamps, which aim to teach artificial-intelligence skills for free to high school students in low-income communities across the US.

 

 

Intrinsic Plasticity Improves Learning in Deep Neural Networks
University of Waterloo Cheriton School of Computer Science (Canada)
October 26, 2020


Researchers at the David R. Cheriton School of Computer Science at the University of Waterloo in Canada used what they call "intrinsic plasticity" to improve learning in deep neural networks. In deep neural networks, usable error signal declines as information is propagated through a network from one layer to the next, making it increasingly difficult to tune the parameters of earlier layers, an issue known as the vanishing gradient problem. Batch normalization is one technique that has been used to solve this problem, and Waterloo’s Nolan Shaw has found a way to use the technique locally in a neural network. Shaw says the new method “addresses why biological neurons tune themselves. The answer seems to be to improve informational efficiency. The other question is, how do biological brains perform something like batch normalization? Intrinsic plasticity provides a possible answer to that question as well.”
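A minimal sketch of the local-normalization idea (the update rule here is my own simplification, not Shaw's published method): each neuron keeps running estimates of its own mean and variance and normalizes its activation using only its own activity, a per-neuron analogue of batch normalization that needs no batch statistics from elsewhere in the network:

```python
class IntrinsicNeuron:
    """Neuron that self-normalizes using only its own running statistics."""
    def __init__(self, rate=0.1, eps=1e-5):
        self.mean, self.var = 0.0, 1.0
        self.rate, self.eps = rate, eps

    def forward(self, x):
        # Exponential running estimates, updated from this neuron alone.
        self.mean += self.rate * (x - self.mean)
        self.var += self.rate * ((x - self.mean) ** 2 - self.var)
        return (x - self.mean) / (self.var + self.eps) ** 0.5

neuron = IntrinsicNeuron()
outputs = [neuron.forward(x) for x in [5.0] * 50]
print(round(neuron.mean, 2))  # running mean converges toward 5.0
```

Because each neuron adapts from its own activity, the mechanism is biologically plausible in a way that batch-wide normalization is not, which is the paper's central analogy.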

Full Article

 

 

Robot Trained in Simulation Performs Better in Real Life
New Scientist
Chris Stokel-Walker
October 21, 2020


Researchers at the Swiss Federal Institute of Technology, Zurich (ETH Zurich) trained a neural network algorithm designed to control a four-legged robot in a simulated environment resembling a video game. The ETH Zurich team told the algorithm which direction the simulated robot should be attempting to move in, and restricted how fast it could turn, in order to reflect the capabilities of the actual robot. The researchers started with a neural network preprogrammed with knowledge about the environment so the algorithm could absorb and recall inputs from virtual sensors, then transferred this knowledge to a large network controlling the real robot. As a result, the robot was able to move on uneven, mossy terrain more than twice as fast as it was able to with its default programming.

Full Article

 

 

AI Reveals Hundreds of Millions of Trees in the Sahara
University of Copenhagen
October 20, 2020


Researchers at Denmark’s University of Copenhagen, along with international collaborators, used artificial intelligence and detailed satellite imagery to count more than 1.8 billion trees and shrubs in a 1.3-million-sq.km area covering the westernmost portions of the Sahara Desert, the Sahel region, and the sub-humid zones of West Africa. This marks the first time that trees across a large dryland region have been counted. The combination of detailed satellite imagery from the U.S. National Aeronautics and Space Administration (NASA) and deep learning allowed the researchers to establish that trees do grow in the Sahara Desert, counting millions of trees in the desert alone. The university's Martin Brandt said the research is important because climate models generally do not include trees outside of forested areas.

Full Article

 

 

AI Outperforms Humans in Speech Recognition
Karlsruhe Institute of Technology (Germany)
October 20, 2020


An artificial intelligence (AI) system developed at Karlsruhe Institute of Technology (KIT) in Germany outperformed humans in transcribing conversations containing spontaneous speech, with minimal latency. It traditionally has been more difficult for AI to accurately transcribe conversations that include stops, stutters, hesitations, laughs, coughs, and words not pronounced clearly. The Lecture Translator system has been used in KIT's lecture halls since 2012 to translate lectures from German or English into the languages spoken by foreign students. KIT's Alex Waibel says highly accurate, fast speech recognition is essential for further downstream processing of speech, as it allows dialog, translation, and other modules of the AI system to provide better voice-based interaction.

Full Article

 

 

NASA Uses AI To Identify New Martian Craters

SPACE (10/25, Mathewson) reports that NASA engineers at the Jet Propulsion Laboratory have developed a machine-learning algorithm which helped the agency “identify a cluster of craters on Mars that formed within the last decade.” The algorithm “represents the first time artificial intelligence has been used to identify previously unknown craters on the Red Planet, according to a statement from NASA.” To identify the clusters, the algorithm used “more than 112,000 images taken by the Context Camera on NASA’s Mars Reconnaissance Orbiter (MRO).”

 

 

Robots Deciding Their Next Move Need Help Prioritizing
University of Illinois at Urbana-Champaign Aerospace Engineering
October 20, 2020


Researchers at the University of Illinois at Urbana-Champaign (UIUC) used a model based on the game Capture the Flag to develop a deep reinforcement learning technique that can help robots weigh their next move. UIUC's Huy Tran said his team noticed the robots needed assistance prioritizing. A hierarchical deep reinforcement learning scheme divides tasks so the model can tackle more complex challenges. Tran said, "We trained a high-level decisionmaker who assigns a sub-task for each agent to focus on," and the hierarchical framework simplifies updating. He added, “This approach has the potential to solve interesting and challenging problems, but there are a lot of issues that we still need to address before we can deploy these systems in real-world situations.”
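The hierarchy can be sketched as follows (agent names, sub-tasks, and the distance threshold are invented for illustration): a high-level policy assigns each agent a sub-task, and simple low-level controllers turn sub-tasks into actions, so only the top level has to reason about the whole game:

```python
def high_level_policy(state):
    """Assign sub-tasks: agents close to the flag attack, the rest defend."""
    return {agent: ("attack" if dist_to_flag < 5 else "defend")
            for agent, dist_to_flag in state.items()}

def low_level_action(sub_task):
    """Each sub-task has its own simple controller."""
    return {"attack": "move_to_flag", "defend": "guard_base"}[sub_task]

state = {"agent_0": 3, "agent_1": 9}  # distance to the flag, per agent
actions = {agent: low_level_action(task)
           for agent, task in high_level_policy(state).items()}
print(actions)  # {'agent_0': 'move_to_flag', 'agent_1': 'guard_base'}
```

In the deep reinforcement learning version, both levels are learned networks rather than hand-written rules, but the division of labor is the same, which is what simplifies updating each piece independently.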

Full Article

 

 

Scientists Develop ‘Mini-Brains’ to Help Robots Recognize Pain, Self-Repair
Nanyang Technological University (Singapore)
October 15, 2020


Researchers at Nanyang Technological University, Singapore (NTU Singapore) have developed an approach that uses artificial intelligence (AI) to enable robots to recognize “pain” and self-repair minor damage in real time. This involves embedding AI into the network of sensor nodes, which are connected to multiple processing units and act like "mini-brains" on the robot’s skin. The robot can respond to pressure exerted by a physical force, and an ion gel material allows the robot to restore its mechanical functions without human intervention when cut by a sharp object. Said NTU Singapore's Nripan Mathews, “Our team has taken an approach that is off the beaten path by applying new learning materials, devices, and fabrication methods for robots to mimic the human neuro-biological functions.”

Full Article

 

 

Earphone Tracks Facial Expressions, Even with Face Mask
Cornell Chronicle
Melanie Lefkowitz
October 12, 2020


Cornell University researchers have developed an earphone that can continuously track full facial expressions by observing the contour of the wearer’s cheeks, and converting such perceived expressions into emojis or silent speech commands. The C-Face device would allow users to express emotions to online collaborators without cameras in front of their faces. The device features a miniature RGB camera oriented below each ear with headphones or earphones; the cameras record changes in facial contours when facial muscles move. The captured images are reconstructed by computer vision and a deep learning model, using a convolutional neural network. Cornell's Cheng Zhang said C-Face can capture expressions even when users are wearing masks.

Full Article

 

 

ML Uncovers Potential TB Drugs
MIT News
Anne Trafton
October 15, 2020


Researchers at the Massachusetts Institute of Technology used a new machine learning approach to identify promising drug compounds that target a protein required by the bacterium that causes tuberculosis. The researchers used a Gaussian process to assign uncertainty values to the data used to train the algorithm. The model was trained on a dataset of 72 small molecules and their interactions with more than 400 protein kinases. It then analyzed nearly 11,000 small molecules, many different from those in the training data, and identified molecules with very strong predicted binding affinities for the protein kinases in the model. About 90% of the molecules assigned the highest certainty by the model proved to be true hits, compared with a hit rate of 30% to 40% for existing machine learning models used for drug screens.
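The role of uncertainty in the screen can be illustrated with a toy filter (molecule names, scores, and thresholds are invented): rank candidates by predicted binding affinity, but keep only those whose uncertainty estimate, such as a Gaussian-process posterior standard deviation, is low:

```python
def screen(candidates, affinity_min=0.8, sigma_max=0.2):
    """Keep candidates that are both strong binders and high-certainty."""
    return [name for name, affinity, sigma in candidates
            if affinity >= affinity_min and sigma <= sigma_max]

candidates = [
    ("mol_A", 0.95, 0.10),  # strong and confident -> kept
    ("mol_B", 0.90, 0.45),  # strong but uncertain -> dropped
    ("mol_C", 0.40, 0.05),  # confident but weak   -> dropped
]
print(screen(candidates))  # ['mol_A']
```

Gating on certainty as well as on the prediction itself is what lifts the hit rate: molecules the model merely guesses about never reach the wet lab.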

Full Article

 

Daniel Tauritz

Nov 8, 2020, 1:53:37 PM
to ai-b...@googlegroups.com

MIT Scientists Develop Artificial Intelligence Model That Can Detect Asymptomatic COVID-19 Coughs

McClatchy (11/5, Camero) reports that “scientists at the Massachusetts Institute of Technology developed an artificial intelligence model that could distinguish between a healthy cough and one that comes from an asymptomatic coronavirus patient.” The differences “are nonexistent to the naked human ear, but the AI was able to accurately identify nearly 99% of coughs from people with COVID-19, including all of the coughs from individuals without symptoms.” The model was trained “by listening to more than 200,000 recordings of coughs and spoken words, the ‘largest cough dataset that we know of,’ the researchers say in an Oct. 29 news release.” The team “said it’s working on incorporating the model into apps, and eventually smart speakers and other listening devices, so that people can consistently and conveniently be screened for coronavirus infection.” This, researchers say, “can help prevent asymptomatic individuals from unknowingly spreading the virus to others.”

 

 

AI Tool Provides More Accurate Flu Forecasts
Stevens Institute of Technology
November 2, 2020


An influenza forecasting tool powered by artificial intelligence (AI), developed by researchers at the Stevens Institute of Technology, incorporates location data to deliver up to 11% more accurate predictions than other state-of-the-art forecasting techniques, predicting flu outbreaks up to 15 weeks in advance. Stevens' Yue Ning and colleagues used a graph neural network to encode flu infections as interconnected regional clusters, enabling the algorithm to parse out patterns in how infections spread between regions, and to use patterns spotted in one region to inform forecasts in other areas. The team trained the AI tool on real-world state and regional data from the U.S. and Japan, then tested its forecasts against historical flu data. Said Ning, "Our algorithm will keep learning and improving as we collect new data, allowing us to deliver even more accurate long-term predictions."
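One neighbor-aggregation step of a graph-network-style forecaster might look like the toy below (regions, edges, and the mixing weight are invented): each region's next flu signal blends its own level with the mean of its neighbors', which is how patterns learned in one region can inform forecasts in connected ones:

```python
def aggregate(signals, edges, alpha=0.5):
    """One round of neighbor averaging over an undirected region graph."""
    neighbors = {region: [] for region in signals}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    return {r: (1 - alpha) * signals[r]
               + alpha * sum(signals[n] for n in neighbors[r]) / len(neighbors[r])
            for r in signals}

signals = {"NY": 10.0, "NJ": 2.0, "CT": 2.0}  # current flu levels
edges = [("NY", "NJ"), ("NY", "CT"), ("NJ", "CT")]
print(aggregate(signals, edges))  # NY's spike bleeds into NJ and CT
```

A trained graph neural network learns the mixing weights instead of fixing them, and stacks several such rounds, but the information flow between regional clusters follows the same pattern.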

Full Article

 

 

How AI Can Help Save Forests
The Wall Street Journal
Ted Alcorn
November 3, 2020


Scientists are developing artificial intelligence (AI)-driven methods to aid forest conservation by enabling a new level of real-time awareness. Foresters in California are hoping to prevent or mitigate future wildfires via a forest-clearing plan that relies largely on remotely sensed data and machine learning (ML). Meanwhile, Global Forest Watch is integrating radar data to penetrate the clouds that conceal tropical areas. To reduce false positives in deforestation detection, consumer-goods company Unilever hired Descartes Labs to apply ML techniques to better differentiate between forests requiring protection and palm plantations where deforestation is necessary. In the hope of helping to create a sustainable trade in carbon offsets, forestry data company SilviaTerra employs AI to process satellite imagery of various tree species and time periods, calculating size and species based on factors like when leaves start changing color in fall.
 

Full Article

*May Require Paid Registration

 

Graphene-Based Memory Resistors Show Promise for Brain-Based Computing
Penn State News
Walt Mills
October 29, 2020


Researchers at Pennsylvania State University (Penn State) are building an artificial neural network that aims to emulate the energy and area efficiencies of the brain. This artificial neural network can be reconfigured by applying a brief electric field to a sheet of graphene. The researchers were able to show at least 16 possible memory states with graphene-based memory resistors, compared with two in most oxide-based memory resistors. Said Penn State's Saptarshi Das, "What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors."
 

Full Article

 

Tool Simplifies Data Sharing, Preserves Privacy
Carnegie Mellon University College of Engineering
Daniel Tkacik
October 28, 2020


Researchers at Carnegie Mellon University's (CMU) CyLab and IBM have developed a tool for creating synthesized data that simplifies data sharing while maintaining privacy. The DoppelGANger tool employs generative adversarial networks (GANs), which apply machine learning to synthesize datasets with the same statistics as the training data. Models trained with DoppelGANger-generated synthetic data had up to 43% greater accuracy than models trained on synthetic data from rival tools, the researchers found. CMU's Vyas Sekar said, "We believe that future organizations will need to flexibly utilize all available data to be able to react to an increasingly data-driven and automated attack landscape. In that sense, any tools that facilitate data sharing are going to be essential.”

Full Article

 

Researchers Develop Sentence Rewriting Technique to Fool Text Classifiers
VentureBeat
Kyle Wiggers
October 27, 2020


Researchers at the Massachusetts Institute of Technology (MIT) created a framework that rewrites sentences specifically to attack text classifiers and trigger misclassification. Attacks on text classifiers could hurt industries like home lending, which relies on artificial intelligence (AI) for document processing. The conditional BERT sampling (CBS) framework, which feeds sentences from an AI language model to RewritingSampler, has a higher attack success rate than existing word-level methods. The CBS framework and RewritingSampler iteratively sample and replace words in a seed sentence a set number of times, using the sum of word embeddings to preserve the sentence's literal meaning. The system could be misused for attacks, but it may also be used to test the robustness of models and to improve their generalization via adversarial training.
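The embedding-sum constraint can be illustrated with toy vectors (the words and embeddings below are invented; this is not the MIT code): when swapping a word, choose the candidate that keeps the sentence's summed embedding closest to the original, so the rewrite preserves as much of the literal meaning as possible:

```python
# Invented 2-D word embeddings, for illustration only.
EMB = {"loan": (1.0, 0.0), "mortgage": (0.9, 0.1),
       "denied": (0.0, 1.0), "banana": (-1.0, -1.0)}

def sum_emb(words):
    return tuple(sum(EMB[w][i] for w in words) for i in range(2))

def best_swap(sentence, position, candidates):
    """Pick the replacement word that least disturbs the embedding sum."""
    target = sum_emb(sentence)
    def drift(cand):
        trial = sentence[:position] + [cand] + sentence[position + 1:]
        swapped = sum_emb(trial)
        return sum((swapped[i] - target[i]) ** 2 for i in range(2))
    return min(candidates, key=drift)

print(best_swap(["loan", "denied"], 0, ["mortgage", "banana"]))  # mortgage
```

Repeating such swaps, with a language model proposing the candidates, yields fluent sentences whose meaning barely moves while the classifier's decision flips.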
 

Full Article

 

 

Australian, Korean Researchers Warn of Loopholes in AI Security Systems
ZDNet
Aimee Chanthadavong
October 23, 2020


Researchers at Australia's Commonwealth Scientific and Industrial Research Organization's Data61, the Australian Cyber Security Cooperative Research Center, and South Korea's Sungkyunkwan University warn that certain objects could be used as triggers that let a subject digitally disappear from artificial intelligence (AI) security systems. The researchers tested the popular YOLO object-detection camera and found that, while it detected a subject initially, placing a red beanie on the subject rendered them undetectable. Data61's Sharif Abuadbba cited the adversarial nature of AI models, which pose a security risk if they are not trained to detect all possible scenarios. Abuadbba said, "If you're a sensitive organization, you need to generate your own dataset that you trust and train it under supervision ... the other option is to be selective from where you take those models."

Full Article

 

UK Researchers To Conduct Study Using AI To Identify Illness In Livestock

Feedstuffs (11/4) reports Innovate UK is helping to fund research that “will be led by the Quant Foundry (QF), in collaboration with the University of Bristol Veterinary School and Agri-EPI Centre,” focused on using “novel artificial intelligence methods with behavioral analytics to provide rapid and reliable insights to animal health for farmers across the U.K.” The researchers will conduct a feasibility study at the South West Dairy Development Centre in Somerset, England, with the goal of providing “a new cost-effective solution for farmers and veterinarians to identify illness in livestock, providing not only cost savings but also a means to reduce the impact of farming on the environment.” The story says the Agri-EPI Centre is helping to speed up “the adoption of precision agriculture and engineering technologies to boost productivity across the whole agri-food chain by exploring how to optimize performance of highly complex agricultural production and processing systems.”

 

AI Gaining In Low-Level Tasks, But Humans Remain Critical In Banking

Bank Innovation (11/2) reports analysis from Autonomous Research found 1.2 million bank employees’ roles will be automated by 2030. However, Accenture believes financial services companies using AI wisely will see a net jobs gain of 14% and revenue gain of 34% by 2022. AI is so far tending to replace the most repetitive and time-consuming tasks, “but humans will always maintain a critical role, especially in higher-level tasks where skills like strategy and decision-making, empathy, and the ability to connect and form relationships are paramount.”

 

Researchers, Tech Companies Work On How To Make AI Less Biased

The Wall Street Journal (11/3, Totty, Subscription Publication) reports on the work being done by companies to reduce bias in AI and examines why it’s so difficult. The Journal says that if an AI system that receives training data containing primarily white faces is used in a self-driving vehicle, the vehicle may be unable to identify a Black pedestrian as a human being that needs to be avoided.

 

 

Tricking Fake News Detectors With Malicious User Comments
Penn State News
Jordan Ford
October 30, 2020


Researchers at Pennsylvania State University (Penn State) have demonstrated how fake news detectors, like those used by Twitter and Facebook, can be manipulated through user comments. The researchers found that adversaries can use random accounts on social media to post malicious comments that flag real stories as fake news or promote fake stories as real news, attacking the detector itself rather than the story's content or source. The framework the researchers developed to generate, optimize, and add malicious comments to articles successfully tricked five of the leading neural network-based fake news detectors more than 93% of the time. Penn State's Thai Le said the research "highlights the importance of having robust fake news detection models that can defend against adversarial attacks."

Full Article

 

 

Computational Tools Open New Era of Fossil Pollen Research
National Science Foundation
November 2, 2020


A team including researchers from the Smithsonian Tropical Research Institute, the universities of Illinois at Urbana-Champaign and California, Irvine, and other institutions combined machine learning (ML) with high-resolution imaging to enhance fossil pollen research. The scientists designed and trained three ML models to distinguish between existing Amherstieae legume genera, and tested them against fossil specimens from western Africa and northern South America dating back to the Paleocene epoch. The models classified existing pollen accurately more than 80% of the time, and strongly agreed on the identification of fossil pollen specimens. Said the National Science Foundation’s Jie Yang, "Machine learning and computer vision technologies can not only lead to new scientific discoveries, but also help us better understand what happened in the past."

Full Article

 

 

White Castle Plans to Use Robot in More Locations
Forbes
Lana Bandoim
October 28, 2020


The White Castle restaurant chain said it is expanding use of the Flippy Robot-on-a-Rail (ROAR) system to more U.S. locations, following a successful pilot test at one establishment. Developer Miso Robotics said the robot can glide across multiple workstations on a rail, easing storage and use, while relying on artificial intelligence and machine learning to cook food. Miso said, "Capable of identifying current temperatures, predicting time remaining to meet consistency in taste perfection, and alerting staff of unsafe internal cooking temperatures, ROAR features powerful image processing and depth perception from Intel RealSense technology." White Castle intends to use Flippy as a kitchen assistant during night shifts.

Full Article

 

 

How Fish-Recognition Tech Is Assisting Demand for Canned Tuna
Bloomberg
Anuradha Raghu
October 28, 2020


The coronavirus pandemic has spurred remote monitoring of fishing vessels to determine whether tuna catches are sustainable. The Nature Conservancy's Mark Zimring said some vessels are using video cameras, sensors, and systems that use algorithms to recognize different types of marine life, similar to Facebook's facial recognition technology. Satellite imagery, machine learning tools, and artificial intelligence (AI) also are being used to ensure vessels are not misreporting the contents and volumes of their catches and are safely releasing at-risk species caught by accident. The effort is initially focused on large-scale fisheries because the monitoring systems cost $14,000 to $24,000 per vessel per year.

Full Article

 

 

Translating Lost Languages Using ML
MIT News
Adam Conner-Simons
October 21, 2020


Researchers at the Massachusetts Institute of Technology (MIT) have developed a machine learning system that can automatically translate a lost language, without advanced knowledge of its relationship to other dialects. The system applies principles based on historical linguistic insights, including the fact that languages generally evolve in certain predictable patterns. MIT's Regina Barzilay and Jiaming Luo developed a decipherment algorithm that can segment words in an ancient language and map them to words in related languages. The algorithm infers relationships between languages, and can assess proximity between languages.

Full Article

 

Daniel Tauritz

unread,
Nov 15, 2020, 10:29:16 AM11/15/20
to ai-b...@googlegroups.com

AI Model Detects Asymptomatic Covid-19 Infections Through Cellphone-Recorded Coughs
MIT News
Jennifer Chu
October 29, 2020


Massachusetts Institute of Technology (MIT) researchers have developed an artificial intelligence model that differentiates between asymptomatic people infected with Covid-19 and healthy individuals via forced-cough recordings submitted through Web browsers, cellphones, and laptops. The MIT team trained the model on cough samples and spoken words; it accurately identified 98.5% of coughs from people confirmed to have the virus (100% from those who are asymptomatic) when fed new cough recordings. The researchers are incorporating the model into a user-friendly application which could potentially be a free, convenient, noninvasive prescreening tool to identify asymptomatic people infected with the virus. Users could log in daily, cough into their handset, and instantly receive information on whether they might be infected and confirm with a formal test.

Full Article

 

 

AI Shows Potential to Gauge Voter Sentiment
The Wall Street Journal
Jared Council; John McCormick
November 6, 2020


Some technology experts believe using artificial intelligence (AI) to gauge voter sentiment could help to better understand the electorate. Allen Institute for AI CEO Oren Etzioni said, "I would direct [pollsters] to try to leverage machine learning, data mining, and AI in their work more to get better projections." Heidi Messer with AI and predictive technology provider Collective[i] said polls must use data sources that capture actual behavior, rather than relying on historical classifications and averages. Italian software company Expert.ai used its natural language processing system to analyze millions of social posts around the U.S. presidential candidates, weighing factors like tone and emotion, and projecting how that might translate into votes; its predictions have closely matched actual results. Also yielding accurate forecasts is swarm-intelligence software from Unanimous.ai, which aggregates predictions and decisions from groups of people.

Full Article

*May Require Paid Registration

 

AI on the Edge
WSU Insider
November 6, 2020


A framework developed by researchers at Washington State University (WSU) would allow more efficient use of artificial intelligence algorithms on mobile platforms and portable devices by bringing decision-making back to the devices themselves. Decisions for voice-recognition software, mobile health, robotics, and Internet of Things devices are made in the cloud, which generally does not allow for real-time decision-making. The framework can run complex neural network-based algorithms locally, using less power and computation. This involves prioritizing problem-solving, with most energy spent on the complex parts of problems and less devoted to the easier parts. Said WSU's Janna Doppa, “The goal is to push intelligence to mobile platforms that are resource-constrained in terms of power, computation, and memory,” which “has a huge number of applications.”

Full Article

 

International Research Team Develops AI-Powered 'Electronic Nose' to Sniff Out Meat Freshness
Nanyang Technological University (Singapore)
November 10, 2020


An international team of scientists led by Nanyang Technological University, Singapore (NTU Singapore) has developed an artificial intelligence-enabled electronic nose (e-nose) that mimics the ability of mammals to evaluate meat’s freshness from its odor. The e-nose includes a "barcode" that shifts color in response to gases produced by decaying meat, and a "reader" smartphone application. The researchers trained the e-nose to identify and predict freshness from an archive of barcode colors. Its deep convolutional neural network algorithm predicted the freshness of commercially packaged chicken, fish, and beef samples with 98.5% accuracy. NTU Singapore's Chen Xiaodong said the artificial olfactory system “can be easily integrated into packaging materials and yields results in a short time without the bulky wiring used for electrical signal collection in some e-noses that were developed recently."

Full Article

 

 

Researchers Isolate, Decode Brain Signal Patterns for Specific Behaviors
USC Viterbi School of Engineering
November 9, 2020


Researchers at the University of Southern California Viterbi School of Engineering (USC Viterbi) and New York University have developed a machine learning algorithm that isolates and decodes behaviors based on signals from the brain. USC Viterbi's Maryam Shanechi said the algorithm "can dissociate the dynamic patterns in brain signals that relate to specific behaviors one is interested in." The algorithm also can find neural patterns overlooked by other methods, as it considers both brain and behavioral signals, finding common patterns and more effectively decoding behavior represented by neural signals. USC Viterbi's Omid Sani added that the algorithm simulates common dynamic patterns between any signals, like between signals from different brain regions. Said Shanechi, "By isolating dynamic neural patterns relevant to different brain functions, this machine learning algorithm can help us investigate basic questions about [the] brain's functions and develop enhanced brain-machine interfaces to restore lost function in neurological and mental disorders."

Full Article

 

AI News Bias Tool Created by USC Computer Scientists
USC Viterbi School of Engineering
Amy Blumenthal
November 6, 2020


University of Southern California (USC) computer scientists have developed an artificial intelligence tool to automatically detect bias in news through a combination of natural-language processing and moral foundation theory. Rather than just searching for keywords, the algorithm considers more complex patterns in the contextual application of language to evoke key themes or frames. The algorithm determined content posted on left-leaning sites highlighted fairness and equity, while material on right-leaning sites emphasized moral purity under fear of contamination and appeals to law and order. USC's Kristina Lerman said, "Our ultimate goal was to grade news to create a news bias 'nutrition label' for when you are consuming news."

Full Article

 

Using ML to Track the Pandemic's Impact on Mental Health
MIT News
Anne Trafton
November 5, 2020


Researchers at the Massachusetts Institute of Technology (MIT) and Harvard University have demonstrated that machine learning can be used to measure the pandemic's impact on mental health by analyzing the words people use to express their anxiety online. Natural language processing algorithms were used to analyze the text of more than 800,000 Reddit posts, measuring the frequency of words associated with anxiety, death, isolation, and substance abuse. The researchers identified changes in the tone and content of language used from January to April 2020 during the first wave of the pandemic. Said Harvard and MIT's Daniel Low, "We found that there were these natural clusters that emerged related to suicidality and loneliness, and the amount of posts in these clusters more than doubled during the pandemic as compared to the same months of the preceding year."
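The word-frequency measurement at the core of this approach can be sketched in a few lines. The lexicon, example posts, and normalization below are invented for illustration and are not the authors' pipeline, which used far larger lexicons and clustering over hundreds of thousands of posts:

```python
# Minimal sketch of the word-frequency idea: count how often words from a
# topic lexicon appear in a set of posts, normalized per 1,000 tokens, so
# two time periods can be compared on the same scale.
from collections import Counter

ANXIETY_LEXICON = {"anxious", "worried", "panic", "afraid", "isolation"}

def topic_rate(posts: list[str], lexicon: set[str]) -> float:
    """Lexicon hits per 1,000 tokens across all posts."""
    tokens = [t.strip(".,!?").lower() for p in posts for t in p.split()]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in lexicon)
    return 1000 * hits / max(len(tokens), 1)

jan_posts = ["Went for a walk, felt fine today"]
apr_posts = ["So anxious and worried about isolation", "Panic sets in again"]

print(topic_rate(jan_posts, ANXIETY_LEXICON))
print(topic_rate(apr_posts, ANXIETY_LEXICON))  # higher during the first wave
```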

Full Article

 

AI-Enabled Critter Camera Only Records Targeted Behaviors
New Atlas
Ben Coxworth
November 4, 2020


To address the power challenges associated with bio-logging devices, researchers at Japan's Osaka University have created an artificial intelligence-based bio-logger that can determine when an animal is engaging in specific behaviors and activate a camera. Bio-logging devices typically lack the battery capacity to record continuously and so often are set to record during scheduled periods each day, which means behaviors that occur outside of those recording sessions are missed. The new bio-logger features an accelerometer and a GPS unit to monitor an animal's activity; when the camera is triggered, it records only for as long as the targeted behavior lasts. The researchers tested the bio-loggers on seagulls off the coast of Japan. Said Osaka's Joseph Korpela, "The new method improved the detection of foraging behaviors in the black-tailed gulls 15-fold compared with the random sampling method."
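The triggering idea can be sketched simply. The threshold, function names, and readings below are invented stand-ins, not Osaka's implementation (which uses a trained on-board model rather than a fixed cutoff): a low-power classifier watches the accelerometer and keeps the camera on only while the target behavior persists.

```python
# Hedged sketch of behavior-triggered recording: classify each low-power
# accelerometer sample, and record video only while the target behavior
# (here, "foraging") is detected.
def classify(accel_magnitude: float) -> str:
    """Crude stand-in for the on-board behavior model."""
    return "foraging" if accel_magnitude > 2.5 else "resting"

def camera_schedule(samples: list[float]) -> list[bool]:
    """True = camera recording for that sample."""
    return [classify(a) == "foraging" for a in samples]

readings = [0.9, 1.1, 3.2, 3.8, 3.1, 1.0]  # synthetic accelerometer magnitudes
print(camera_schedule(readings))  # camera on only during the burst of activity
```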

Full Article

 

Daniel Tauritz

unread,
Nov 22, 2020, 8:27:17 AM11/22/20
to ai-b...@googlegroups.com

Google Researchers Find AI Training May Be Fundamentally Flawed

MIT Technology Review (11/18) reports that “the process used to build most of the machine-learning models we use today can’t tell if they will work in the real world or not – and that’s a problem.” A group of Google researchers has identified a “major cause for the common failure of machine-learning models,” called “underspecification,” and it “could be an even bigger problem than data shift.”

 

Google Cloud Debuts Professional Machine Learning Engineer Certification

CRN (11/17, Goodison) reports that Google Cloud announced last week its new Professional Machine Learning Engineer Certification, which “validates cloud professionals’ expertise in designing, building and productionizing machine-learning (ML) models to solve business challenges using Google Cloud technologies, along with their knowledge of proven ML models and techniques.”

 

 

2020 ACM Gordon Bell Prize Awarded for ML Method That Achieves Record Molecular Dynamics Simulation
Association for Computing Machinery
November 19, 2020

ACM named a U.S.-Chinese team of researchers recipients of the 2020 ACM Gordon Bell Prize for Deep Potential Molecular Dynamics (DPMD), a machine learning (ML)-based protocol that can simulate a more than 1 nanosecond-long trajectory of over 100 million atoms per day. The team claimed its method realizes the first efficient MD simulation of 100 million atoms with ab initio accuracy. The researchers developed a highly optimized code (GPU Deep MD-Kit), which they ran on Oak Ridge National Laboratory's Summit supercomputer. GPU Deep MD-Kit scaled efficiently on Summit, achieving 91 petaflops in double precision and 162/275 petaflops in mixed-single/half precision. The authors said, "The great accomplishment of this work is that it opens the door to simulating unprecedented size and time scales with ab initio accuracy. It also poses new challenges to the next-generation supercomputer for a better integration of machine learning and physical modeling."
 

Full Article

 

 

UC San Diego Leads Research That Earns Gordon Bell Special Prize
University of California, San Diego
November 19, 2020

ACM named a team led by the University of California, San Diego's Rommie Amaro and Argonne National Laboratory's Arvind Ramanathan as recipients of its Gordon Bell Special Prize for High Performance Computing-Based Covid-19 Research. The authors built an artificial intelligence (AI)-based workflow to model the SARS-CoV-2 spike protein more efficiently, and scaled it to Oak Ridge National Laboratory's Summit supercomputer. The team initially optimized the atomic-movement modeling Nanoscale Molecular Dynamics and Visual Molecular Dynamics codes on smaller cluster systems, then ran them on Summit. Layering and combining the experimental and simulation data with the AI-based protocol modeled the virus and its mechanisms in unprecedented detail. "Our methods of computing allow us to ... see detailed intricacies of this virus that are useful for understanding not only how it behaves," Amaro said, "but also its vulnerabilities, from a vaccine development standpoint, and a drug targeting perspective."
 

Full Article

 

 

System Brings Deep Learning to Internet of Things Devices
MIT News
Daniel Ackerman
November 13, 2020


Massachusetts Institute of Technology researchers have developed a system that could implement deep learning within Internet of Things (IoT) devices. The MCUNet system designs compact neural networks that supply unprecedented speed and accuracy amid memory and processing constraints. MCUNet features two critical co-designed elements for running neural networks on microcontrollers—TinyEngine, an inference engine that directs resource management; and TinyNAS, a neural architecture search algorithm that produces custom-sized networks. The University of California at Berkeley's Kurt Keutzer said this development "extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers." He added that MCUNet could "bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors."
 

Full Article

 

 

Pitt Researchers Aim To Increase Lifespan Of Sensors, IoT Devices By Using AI, Energy-Harvesting Technology

Network World (11/16, Nelson) reports that IoT sensors are unable to transmit data “if the devices continually run out of power.” This is why “researchers are exploring ambient energy harvesting. Numerous projects have shown that small amounts of power can be generated by converting ambient energy in the environment – from stray magnetic fields, humidity, waste heat, and even unwanted wireless radio noise, for example – into usable electrical energy to power the IoT.” University of Pittsburgh researchers “are proposing a system that applies artificial intelligence to cut back on IoT sensors’ energy consumption and mitigate battery longevity issues.” The project relies on “piggyback sensors, which are powered by energy harvested from the environment, to trigger the main sensors.” These sensors will “run unattended and are trained, using AI algorithms, to signal the main devices to turn on only when specific event conditions are met.”

 

 

Researchers Can Detect Early Onset of Suicidal Thoughts
The Jerusalem Post
Aaron Reich; Hannah Brown; Jeremy Sharon
November 11, 2020


Researchers from Israel's Technion-Israel Institute of Technology and the Hebrew University of Jerusalem have developed a technological means of detecting early onset of suicidal ideation and tendencies in the general population. The system integrates machine learning, natural language processing, and psychological and psychiatric analytical tools with a layered neural network. The scientists analyzed more than 80,000 Facebook posts written by adults in the U.S., comparing the language they used with scores on diverse psychological indices. Technion's Roi Reichart said, "Attempts to predict suicide attempts based on demographic, psychological, and medical data have not been particularly successful despite five decades of intensive research. Therefore, we realized that we had to approach the challenge from different directions simultaneously.”

Full Article

 

 

AI Makes 'Smart' Apps Faster, More Efficient
University of Saskatchewan
Federica Giannelli
November 12, 2020


An artificial intelligence (AI) computer model developed by Hao Zhang at Canada's University of Saskatchewan (USask) could potentially make "smart" applications safer, faster, and more efficient. Zhang said his model segments AI computational processes into smaller "chunks," in order to help run apps locally on the phone rather than on external servers. He ran simulations to compare the model to those used on modern phone systems, and determined that it can concurrently run multiple apps 20% faster than current commercial devices, doubling battery life. Zhang also observed that AI processes can manage data efficiently using smaller four-bit sequences with variable length, while current devices use a fixed 32-bit sequence to process data more accurately, at the cost of speed and memory storage efficiency. USask's Seok-Bum Ko said, "Shorter sequences can be used to save power and increase speed performance, but can still guarantee enough accuracy for the app to function."
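The trade-off behind short bit sequences is easy to demonstrate. The sketch below is illustrative only (it is not Zhang's model, and the ranges and values are invented): mapping a value onto a 4-bit code instead of a 32-bit float shrinks storage eightfold while introducing a small, bounded rounding error.

```python
# Illustrative 4-bit uniform quantization: map a value in [lo, hi] onto one
# of 2**bits - 1 integer levels, then reconstruct an approximation.
def quantize(x: float, lo: float, hi: float, bits: int = 4) -> int:
    """Map x in [lo, hi] onto an integer code in 0..2**bits - 1."""
    levels = 2 ** bits - 1
    x = min(max(x, lo), hi)            # clamp to the representable range
    return round((x - lo) / (hi - lo) * levels)

def dequantize(code: int, lo: float, hi: float, bits: int = 4) -> float:
    """Reconstruct the approximate value from its integer code."""
    levels = 2 ** bits - 1
    return lo + code * (hi - lo) / levels

val = 0.37
code = quantize(val, 0.0, 1.0)         # fits in 4 bits: 0..15
approx = dequantize(code, 0.0, 1.0)
print(code, approx)                    # small rounding error vs. 0.37
```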
 

Full Article

 

 

PNNL Researchers Speed Power Grid Simulations Using AI
Pacific Northwest National Laboratory
Lynne Roeder
November 9, 2020


The Smart Power Grid Simulator (Smart-PGSim) developed by researchers at the Pacific Northwest National Laboratory (PNNL) and the University of California, Merced, efficiently solves power grid simulations using multi-task learning modeling, a novel neural network approach. The researchers integrated grid-specific domain knowledge into the layers of a multi-task learning model. They found that the Smart-PGSim solved power flow calculations about three times faster than a traditional numerical model, with nearly the same accuracy. Said PNNL's Gokcen Kestor, "These techniques can be broadly applied to many scientific high-performance computing applications, not just the optimization problem in power grid simulations. Other potential applications could include fluid dynamic simulation, molecular dynamics simulation, and cosmology modeling."
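A degenerate but illustrative version of the joint-fitting idea can be shown with linear least squares. This is not Smart-PGSim (real multi-task networks share nonlinear hidden layers, and the targets below are synthetic): it merely shows two related targets solved jointly from the same input features.

```python
# Conceptual sketch: fit two related targets jointly from shared features.
# In a true multi-task neural network the tasks would share hidden layers;
# here a single linear solve over both target columns stands in for that.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # synthetic grid-state features
y1 = X @ np.array([1.0, -2.0, 0.5])    # task 1 target (e.g. a power flow)
y2 = X @ np.array([0.5, 1.0, -1.0])    # task 2 target (e.g. a voltage angle)

# One shared weight matrix solved jointly for both tasks via least squares:
W, *_ = np.linalg.lstsq(X, np.column_stack([y1, y2]), rcond=None)
pred = X @ W
print(np.allclose(pred[:, 0], y1), np.allclose(pred[:, 1], y2))
```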
 

Full Article

 

 

AI-Directed Robotic Hand Learns How to Grasp
IEEE Spectrum
Michelle Hampson
November 6, 2020


Researchers at Germany's FZI Forschungszentrum Informatik are teaching a robot to grasp objects of different shapes and hardness, using artificial neurons arranged in a spiking neural network (SNN). The robotic system trains its neural net to better model system and object motions, then grasps items more autonomously by adapting to the motion in real time. The system utilizes a Schunk SVH 5-finger hand, and the SNN is divided into sub-networks, one controlling each finger as another governs grasping movement; a neural circuit in each finger detects contact with an object using motor current and joint velocity, which triggers a controller to regulate finger exertion. FZI's Juan Camilo Vasquez Tieck said the ultimate goal is "a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback."

Full Article

 

 

Israeli Researchers Create AI Capable of Writing Personalized Jazz Solos
Jerusalem Post
Idan Zonshine
November 10, 2020


Researchers at Technion – Israel Institute of Technology have created an artificial intelligence (AI) model that can generate personalized jazz solos. The researchers trained a music language model, BebopNet, to produce symbolic saxophone jazz improvisations to any chord progression, using hundreds of original jazz solos to build the initial data set. The AI trains a personal preference metric to predict notes that reflect the user's personal taste. The user is asked to rate a series of jazz improvisations according to their preference, and a regression model is used to predict their taste. The model then uses "beam search" to optimize the note-generation process in accordance with the user's taste. The researchers observed, “While our computer-generated solos are locally coherent and often interesting or pleasing, they lack the qualities of professional jazz solos related to general structure such as motif development and variations.”
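Beam search itself is a generic decoding strategy, and its role here can be sketched compactly. The notes, scoring function, and beam width below are invented toys, not BebopNet's learned preference metric: at each step the search keeps only the k highest-scoring partial melodies.

```python
# Simplified beam search over next-note choices: keep the k best partial
# melodies at each step, where the score stands in for the learned
# personal-preference metric.
NOTES = ["C", "D", "E", "G"]

def preference(melody: tuple[str, ...]) -> int:
    """Toy stand-in for the user-preference model: reward varied notes."""
    return len(set(melody))

def beam_search(length: int, k: int = 2) -> tuple[str, ...]:
    beams = [()]
    for _ in range(length):
        candidates = [b + (n,) for b in beams for n in NOTES]
        beams = sorted(candidates, key=preference, reverse=True)[:k]
    return beams[0]

print(beam_search(4))  # a 4-note melody favored by the toy preference score
```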
 

Full Article

 

 

Electric-Car Batteries Get Boost From AI
The Wall Street Journal
Suzanne Oliver
November 3, 2020


Researchers are using artificial intelligence (AI) to accelerate improvements in electric-car batteries. Experts expect AI’s role in battery development will help electric vehicles (EVs) reach price parity with traditional combustion-engine cars within five years. AI enables scientists to pinpoint the sweet spot between charging speeds, charging currents, charging frequency, and battery life. General Motors' Danielle Cory said, "The robust charging performance of our EVs that simultaneously extends the life of our batteries would not be achievable without AI." AI also helps shorten testing times for experimental batteries by quickly analyzing massive datasets to predict performance faster.
