Dr. T's AI brief


Daniel Tauritz

Jan 18, 2021, 8:30:02 AM
to ai-b...@googlegroups.com

Leading Computer Scientists Debate the Next Steps for AI in 2021
VentureBeat
Ben Dickson
January 2, 2021


Canadian research company MontrealAI convened leading computer scientists in December to debate moving artificial intelligence (AI) forward in the year ahead. Cognitive scientist Gary Marcus cited key drawbacks of deep learning, including excessive data requirements, low capacity for inter-domain knowledge transfer, opacity, and a dearth of reasoning and knowledge representation. Early last year he suggested hybridizing learning algorithms and rules-based software, while computer researcher Luis Lamb proposed a foundational strategy for neural-symbolic AI based on logical formalization and machine learning. ACM A.M. Turing Award recipient Judea Pearl said AI systems require world knowledge and common sense to use the data they receive most efficiently.

Full Article

 

 

DeepMind's AI Masters Games Without Even Being Taught the Rules
IEEE Spectrum
Philip E. Ross
December 23, 2020


The MuZero artificial intelligence (AI) developed by Alphabet subsidiary DeepMind can master games without learning their rules beforehand. The system attempts first one action, then another, learning what the game rules permit, and concurrently noting the rewards proffered. MuZero then modifies its strategy until it finds a way to collect rewards more readily; this observational learning is perfect for any AI confronting problems that cannot be specified easily. MuZero outperforms previous DeepMind AIs in its economical use of data, because it models only environmental parameters that matter in decision-making. Once trained, MuZero processes so little for decision-making that its entire function might be managed on a smartphone.
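That trial-and-error style can be illustrated in miniature (this toy sketch is not MuZero itself, which additionally learns a planning model; it only shows the idea of discovering rules and rewards by interaction). The agent below is never told which moves are legal or what they pay:

```python
import random

# Hidden environment rules: only "up" and "right" are legal moves,
# and only "right" pays a reward. The agent cannot see this table.
LEGAL_MOVES = {"up": 0.0, "right": 1.0}

def attempt(action):
    """Environment: reject illegal moves, otherwise return the reward."""
    return LEGAL_MOVES.get(action)        # None means "move rejected"

random.seed(0)                            # deterministic demo
model = {}                                # the agent's learned rule/reward table
for _ in range(100):
    action = random.choice(["up", "down", "left", "right"])
    reward = attempt(action)
    if reward is not None:
        model[action] = reward            # record both the rule and the reward

best = max(model, key=model.get)
print(best)                               # the agent settles on "right"
```

After enough attempts the agent has reconstructed the legal-move set and the rewards purely from interaction, which is the observational-learning idea the summary describes.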

Full Article

 

 

Research Team Demonstrates World's Fastest Optical Neuromorphic Processor
Swinburne University of Technology (Australia)
January 7, 2021


Researchers from Australia's Swinburne University of Technology, Monash University, and RMIT University have demonstrated the world's fastest, most powerful optical neuromorphic processor for artificial intelligence. The processor operates at a rate of more than 10 trillion operations per second, over 1,000 times faster than any previous processor, and can process ultra-large-scale data. The processor incorporates integrated "optical micro-combs" comprising hundreds of high-quality infrared lasers on a single chip. Said Monash's Xingyuan Xu, "We're currently getting a sneak peek of how the processors of the future will look. It's really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs."

Full Article

 

 

Computational Drug Screening Technique Finds Chemotherapy Drug May Be Effective Against Covid-19
UPI
Brian P. Dunleavy
December 31, 2020


A novel computational drug screening technique developed by Chinese researchers determined that the lymphoma-combating chemotherapy drug pralatrexate may be effective against Covid-19. The team blended multiple computation methods with deep learning technologies that model drug-virus interactions, and screened 1,906 existing medications for their potential to constrain or halt Covid-19 replication by targeting RNA-dependent RNA polymerase. The researchers said this screening approach yielded four promising drugs, which then were tested against the virus in laboratory experiments at the Shenzhen Institutes of Advanced Technology. Pralatrexate and azithromycin were found to successfully block viral replication, with pralatrexate apparently more effective than the drug remdesivir, according to the team.

Full Article

 

 

Using AI to Find New Uses for Existing Medications
Ohio State University
Emily Caldwell
January 4, 2021


Ohio State University (OSU) researchers used artificial intelligence to process massive datasets in order to determine whether existing drugs could be applied to illnesses for which they were not previously used. The researchers used insurance claims data on roughly 1.2 million heart-disease patients, which included information on assigned treatments, disease outcomes, and various values for potential confounders (something other than the thing being studied that could be causing the results seen). The study, which focused on repurposing medications to prevent heart failure and stroke in patients with coronary artery disease, identified nine drugs considered likely to provide those therapeutic benefits. OSU's Ping Zhang said the model used in this study “could be applied to any disease, if you can define the disease outcome."
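The role of a confounder can be seen with the classic stratification fix (an illustrative sketch only, with hypothetical records; the OSU study uses a far more sophisticated deep learning estimator over insurance claims): compare treated and untreated patients within each level of the confounder rather than in the raw pooled data.

```python
# Hypothetical records: (age_group, received_drug, good_outcome).
# Older patients here receive the drug more often AND fare worse,
# so the pooled comparison mixes the drug's effect with age's.
records = [
    ("old", True, True), ("old", True, False), ("old", True, False),
    ("old", False, False),
    ("young", True, True),
    ("young", False, True), ("young", False, True), ("young", False, False),
]

def outcome_rate(rows):
    return sum(good for _, _, good in rows) / len(rows) if rows else 0.0

def rates(group=None):
    """Return (treated_rate, control_rate), optionally within one stratum."""
    treated = [r for r in records if r[1] and (group is None or r[0] == group)]
    control = [r for r in records if not r[1] and (group is None or r[0] == group)]
    return outcome_rate(treated), outcome_rate(control)

pooled = rates()                              # confounded comparison: a tie
by_age = {g: rates(g) for g in ("old", "young")}
print(pooled, by_age)
```

In this toy data the pooled rates tie at 0.5 vs. 0.5, yet within each age stratum the treated group does better: age was masking the drug's benefit, which is exactly why the study records values for potential confounders.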

Full Article

 

 

AI Solves Schrödinger's Equation, a Fundamental Problem in Quantum Chemistry
SciTechDaily
January 2, 2021


Scientists at Germany's Freie Universität Berlin (FU) have developed a deep learning artificial intelligence (AI) technique for calculating the ground state of the Schrödinger equation in quantum chemistry. FU's Jan Hermann said, "We believe that deep 'Quantum Monte Carlo', the approach we are proposing, could be equally, if not more successful [than functional theory]. It offers unprecedented accuracy at a still acceptable computational cost." The FU team's deep neural network is a novel approach for representing electronic wave functions, capable of learning the intricate patterns in which electrons surround nuclei by incorporating the functions' antisymmetry, or "Pauli's exclusion principle." FU's Frank Noé said, "Building the fundamental physics into the AI is essential for its ability to make meaningful predictions in the field. This is really where scientists can make a substantial contribution to AI, and exactly what my group is focused on."
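The antisymmetry being built in is visible in the smallest case (an illustrative two-electron Slater determinant with toy one-dimensional orbitals, not the FU network's actual ansatz): swapping the two electrons flips the wave function's sign, and two electrons at the same position give zero amplitude, which is Pauli exclusion.

```python
import math

# Toy 1D orbitals (illustrative choices, not the paper's):
def phi_a(x):
    return math.exp(-x * x)

def phi_b(x):
    return x * math.exp(-x * x)

def psi(x1, x2):
    # 2x2 Slater determinant: antisymmetric by construction
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

x1, x2 = 0.3, 1.1
print(psi(x1, x2), psi(x2, x1))  # equal magnitude, opposite sign
print(psi(x1, x1))               # 0.0: two electrons can't share a state
```

Any network whose output has this exchange property represents a valid fermionic wave function, which is the structural constraint the FU team imposes on its neural ansatz.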

Full Article

 

 

ETH Researchers Compute Turbulence With AI
ETH Zurich (Switzerland)
Simone Ulmer
January 4, 2021


Researchers at ETH Zurich in Switzerland have automated the modeling of turbulence by merging reinforcement learning (RL) algorithms with turbulent flow simulations on the Swiss National Supercomputing Centre's (CSCS) "Piz Daint" supercomputer. The two major approaches to simulating turbulent flows are direct numerical simulation (DNS) and large eddy simulation (LES). The researchers used artificial intelligence (AI) to determine the best turbulence closure models from DNS and apply them to LES. Their RL algorithm treats the grid points that resolve the flow field as AI agents, which observed thousands of flow simulations to learn turbulence closure models. Said ETH's Petros Koumoutsakos, "The machine 'wins' when it succeeds in matching LES with DNS results, much like machines learning to play a game of chess or Go." Koumoutsakos added that the new methodology "offers a new and powerful way to automate multiscale modeling and advance science through a judicious use of AI."

Full Article

 

 

How Machines Are Changing the Way Companies Talk
VentureBeat
Khari Johnson
December 27, 2020


Economics and machine learning (ML) researchers at Columbia and Georgia State universities found artificial intelligence is causing companies to change how they communicate. Natural language processing is being used to parse and process text in the financial documents companies must submit to the U.S. Securities and Exchange Commission (SEC), then ML tools are used to abstract text or rate the language used as positive, neutral, or negative. Signals provided by these tools inform advisers, analysts, and investors' decisions, and machine downloads are associated with faster trading after the posting of an SEC filing. According to a paper written by the researchers, "Anecdotal evidence suggests that executives have become aware that their speech patterns and emotions, evaluated by human or software, impact their assessment by investors and analysts."
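The rating step can be sketched minimally (illustrative only; the tiny word lists below are hand-picked stand-ins for the finance-specific dictionaries, such as Loughran-McDonald, typically used on SEC filings):

```python
# Toy sentiment rater for filing language. Real systems use curated
# financial dictionaries or trained models; these sets are placeholders.
POSITIVE = {"growth", "profit", "gain", "improved", "strong"}
NEGATIVE = {"loss", "decline", "impairment", "litigation", "weak"}

def rate_language(text: str) -> str:
    """Label text positive, negative, or neutral by lexicon word counts."""
    words = [w.strip(".,;:") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(rate_language("Revenue growth was strong despite litigation costs."))
# -> "positive" (two positive hits vs. one negative)
```

Executives aware that such counters parse their filings have an incentive to steer word choice, which is precisely the feedback loop the researchers document.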

Full Article

 

 

AI-Designed Serotonin Sensor May Help Scientists Study Sleep, Mental Health
National Institute of Neurological Disorders and Stroke
December 23, 2020


Using advanced genetic engineering methods, scientists at the University of California, Davis School of Medicine, the Howard Hughes Medical Institute, and the California Institute of Technology created a new research tool to monitor serotonin transmission. The team used artificial intelligence to repurpose the bacterial protein OpuBC into a serotonin catcher. Researchers applied machine learning algorithms to help a computer formulate 250,000 designs, narrowed down to one after three rounds of testing. Experiments in mice ultimately showed the sensor could help monitor neurotransmission under more natural conditions, catching events that traditional serotonin monitoring techniques overlooked.

Full Article

 

 

Daniel Tauritz

Jan 19, 2021, 1:20:46 PM
to ai-b...@googlegroups.com

Computer Scientists: We Wouldn't Be Able to Control Superintelligent Machines
Max Planck Institute for Human Development (Germany)
January 11, 2021


An international team of computer scientists concluded, based on theoretical calculations, that a superintelligent artificial intelligence (AI) would be beyond human control. The researchers proposed a theoretical containment algorithm to ensure a superintelligent AI cannot hurt people by modeling its behavior first and stopping it if deemed harmful, yet analysis indicated such an algorithm cannot currently be built. Said Iyad Rahwan at Germany's Max Planck Institute for Human Development, "If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI."

Full Article

 

 

FDA Releases AI/ML Action Plan
U.S. Food and Drug Administration
January 12, 2021

The U.S. Food and Drug Administration (FDA) this week unveiled its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan, designed to further agency oversight of the technology. Said Bakul Patel at the FDA's Center for Devices and Radiological Health's Digital Health Center of Excellence, "The plan outlines a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive." The plan, on which the FDA continues to accept feedback, specifies five intended FDA actions: further developing the proposed regulatory framework; backing development of good ML practices to assess and enhance ML algorithms; nurturing a patient-centered approach, including device transparency to users; devising ML algorithm development and improvement techniques; and advancing real-world performance tracking pilots.
 

Full Article

 

 

Tweaking AI Software to Function Like a Human Brain Improves Computer's Learning Ability
Georgetown University Medical Center
January 12, 2021


Neuroscientists at Georgetown University Medical Center and the University of California, Berkeley have developed a model that enables artificial intelligence software to function more like a human brain and learn new visual concepts more quickly. The software identifies relationships between entire visual categories; the standard approach involves identifying objects using only low-level and intermediate visual features like shape and color. Georgetown's Maximilian Riesenhuber explains, "Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples. We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing."

Full Article

 

Stanford Researchers Design Chips That Speed AI Inferencing

VentureBeat (1/11) reported Stanford researchers “have developed hardware that can run AI tasks quickly and energy-efficiently by harnessing special-built chips.” A paper “published in Nature Electronics describes the chips, each of which have data processors built next to their own memory storage, which leverage algorithms to meld eight separate cores into one AI processing engine called the Illusion System.” The team “built and tested its prototype” and “showed how systems with 64 hybrid chips – eight times the number in the prototype – could run AI applications seven times faster than current processors using one-seventh as much energy.”

 

New Study Shows Potential Of Photonic Processors For AI Application

The Science Times (1/7, Marie) reported that “a new study published in the Nature journal investigates photonic processors’ potential for applications of artificial intelligence or AI.” According to Tech Xplore, “such photonic processors allow computers to complete complex computations at incredibly fast speeds.” For the first time, “the results exhibit that these tools can process information fast and in parallel, something that the present day’s electric chips are not capable of doing.” According to Nathan Youngblood, an assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, “Neural networks are learning by taking large sets of data and identifying patterns through a series of algorithms.”

 

 

Neural Networks Playing Video Games Teach Us About Our Own Brains
California Institute of Technology
Lori Dajose
January 7, 2021


In comparing brain scans of humans playing classic Atari video games to artificial intelligence (AI) networks trained to play the same games, California Institute of Technology (Caltech) researchers found the AI's “neurons” behaved similarly to neurons in the human brain. The Caltech team trained DeepMind's Deep Q Network (DQN) agent to play "Pong," "Space Invaders," and "Enduro" (a racing game), then used its artificial neurons to predict behavior and brain activity in functional magnetic resonance imaging brain scans of human players. Caltech's John O'Doherty said, "If we can understand why and how the brain can solve these games much more efficiently compared to an AI, this may help guide the development of smarter and more humanlike AI algorithms in the future."

Full Article

 

 

Advanced Materials in a Snap
Sandia National Laboratories
January 5, 2021


A machine learning algorithm developed by researchers at Sandia National Laboratories can perform simulations for materials scientists about 42,000 times faster than normal, which could dramatically shorten the design cycle for new technologies. The accelerated computer simulation predicts how tweaking a design or fabrication process will affect a material. The researchers performed an unaided simulation on a high-performance computing cluster with 128 processing cores in 12 minutes; the same simulation with the machine learning algorithm took just 60 milliseconds using only 36 cores. The new algorithm's answer was 5% different from that of the standard simulation. Said Sandia's Rémi Dingreville, "Our machine-learning framework achieves essentially the same accuracy as the high-fidelity model but at a fraction of the computational cost."
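The 42,000x figure is consistent with comparing total compute (cores times runtime) rather than wall-clock time alone, an interpretation this quick check supports (the normalization by core count is an assumption here, not stated in the summary):

```python
# Back-of-envelope check of the reported speedup, comparing core-seconds.
unaided = 128 * 12 * 60      # 128 cores for 12 minutes, in core-seconds
ml_based = 36 * 0.060        # 36 cores for 60 milliseconds
speedup = unaided / ml_based
print(round(speedup))        # roughly 42,700, i.e. "about 42,000 times faster"
```

The raw wall-time ratio (720 s vs. 60 ms) alone would be about 12,000x; the larger figure follows once the smaller core count of the ML run is credited as well.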

Full Article

 

 

UCI Researchers Use Deep Learning to Identify Gene Regulation at Single-Cell Level
UCI News Center
January 5, 2021


University of California, Irvine (UCI) researchers have developed a novel deep learning framework to forecast gene regulation at the cellular level. The framework facilitates analysis of single-cell-level transcription factor (TF) binding. The UCI team identified novel gene regulation for individual cells or cell types by training a neural network on large-scale genomic and epigenetic datasets, and by tapping the expertise of collaborators in multiple departments. UCI's Qing Nie said, "Our capability of predicting whether certain transcriptional factors are binding to DNA in a specific cell or cell type at a particular time provides a new way to tease out small populations of cells that could be critical to understanding and treating diseases." He added that researchers can use the framework to identify key signals in cancer stem cells, which are especially difficult to target for treatment, or even to measure.

Full Article

 

 

AI App That Detects COVID Carriers By Their Cough Could Help Reopen Schools

The Seventy Four (1/5, Lehrer-Small) reports that a paper recently published in the IEEE Journal of Engineering in Medicine and Biology by MIT researchers found that AI “might be able to detect COVID positivity at a high accuracy rate simply by the sound of one’s cough recorded through a smartphone speaker – even for asymptomatic individuals.” The technology has “accurately identified 98.5 percent of coughs from people who were confirmed to have COVID-19, including 100 percent of coughs from asymptomatic individuals.” The MIT team is now working to “incorporate their model into an app, which if approved by the FDA could provide a free, non-invasive pre-screening tool to help identify people who have COVID-19 – even those who are not experiencing symptoms.” If effective, such an app “could have big impacts on the school reopening landscape, health experts say.”

 

Analysis: Possible Regulation In 2021 Offers Hope For AI Community

In “Exclusive Content”, VentureBeat (1/4, Wiggers) writes that “as researchers criticize Google over its opaque (and censorial) research practices, firms commercialize models whose training contributes to carbon emissions, and problematic language systems make their way into production, 2020 was a year of regression rather than progression in many ways for the AI community.” However, at the regulatory level, “there’s hope for righting the ship.” The piece mentions that among industry expert “conversations with VentureBeat’s Khari Johnson last year,” IBM Research Director Dario Gil “predicted that finding ways for AI to better reflect the kind of society people want to build would become a front-and-center issue in 2020.”

 

NASA Scientists Introduce AI Robot Dog To Explore Martian Caves

SPACE (1/7, Weisberger) reports that at the American Geophysical Union (AGU) annual meeting, researchers with NASA/JPL-Caltech introduced the “Mars Dog,” a four-legged, animal-mimicking robot with artificial intelligence, “which can maneuver in ways the iconic wheeled rovers such as Spirit, Opportunity, Curiosity and the recently launched Perseverance never could.” The robots’ resilience and agility “are coupled with sensors that allow them to avoid obstacles, choose between multiple paths and build virtual maps of buried tunnels and caverns for operators at home base, scientists said at AGU.” The robot will be able to “self-right from a multitude of falls,” scientists said, making it better suited for challenging Martian regions. The Mars Dog “would also be roughly 12 times lighter than current rovers and would be capable of traveling much faster, reaching normal walking speeds of 3 mph (5 km/h) during tests.”

 

Uber Researchers Train AI Language Model To Give Positive, Polite Responses

VentureBeat (1/4) reported that “studies show that users respond better to social language in the sense that they’re more responsive and likelier to complete tasks,” and “inspired by this, researchers affiliated with Uber and Carnegie Mellon developed a machine learning model” that “injects social language into an assistant’s responses while preserving their integrity.” The researchers detailed the study in a paper. After training the model, “researchers used automated and human-driven techniques to evaluate the politeness and positivity of their model’s messages. They found it could vary the politeness of its responses while preserving the meaning of its messages, but that it was less successful in maintaining overall positivity.”

 

Microsoft Explores Using AI To Create 3D Chatbots Based On Real-Life People

TechRepublic (1/5) reports Microsoft “appears to be exploring the possibility of creating chatbots based on the likeness of a real-life person using their electronic data.” A patent filed by Microsoft in December 2020 “describes how an AI likeness of an individual could be developed by [feeding] their communications data, such as social media posts, texts and emails, and even written letters, into a machine-learning algorithm.”
