Dr. T's AI brief


Daniel Tauritz

Jan 31, 2021, 7:31:18 PM
to ai-b...@googlegroups.com

Fruit Fly Brain Hacked for Language Processing
Discover
January 28, 2021


Yuchan Liang at the Rensselaer Polytechnic Institute, working with colleagues at that institution and at the MIT-IBM Watson AI Lab, hacked a fruit fly's neural network for tasks that include natural language processing. The team used a computer program to reconstruct the network on which the fruit fly brain's mushroom body sensory structure relies—projection neurons feeding data to roughly 2,000 Kenyon cells. The researchers then trained the network to identify correlations between words in text, and found its performance is comparable to that of artificial learning networks in natural language processing, while consuming fewer computational resources. The researchers said, "We view this result as an example of a general statement that biologically inspired algorithms might be more compute-efficient compared with their classical [non-biological] counterparts."
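The mushroom-body circuit the team copied is essentially a sparse random projection followed by winner-take-all inhibition, a scheme sometimes called a "fly hash." A toy Python sketch of that idea, with invented sizes (input dimension, fan-in, sparsity) and simple summing in place of learned synapses:

```python
import random

random.seed(0)

DIM_IN = 50        # size of the input word-context vector (illustrative)
NUM_KENYON = 2000  # roughly the number of Kenyon cells in the mushroom body
TOP_K = 40         # winner-take-all sparsity: only the top-k cells fire

# Each Kenyon cell samples a small random subset of projection-neuron inputs,
# mimicking the sparse random connectivity of the mushroom body.
connections = [random.sample(range(DIM_IN), 6) for _ in range(NUM_KENYON)]

def fly_hash(x):
    """Map a dense input vector to a sparse binary code via winner-take-all."""
    activations = [sum(x[i] for i in conn) for conn in connections]
    # Indices of the TOP_K most active Kenyon cells form the binary hash.
    winners = sorted(range(NUM_KENYON), key=lambda j: -activations[j])[:TOP_K]
    code = [0] * NUM_KENYON
    for j in winners:
        code[j] = 1
    return code

# Similar inputs should produce heavily overlapping hashes.
a = [random.random() for _ in range(DIM_IN)]
b = [v + 0.01 * random.random() for v in a]   # small perturbation of a
c = [random.random() for _ in range(DIM_IN)]  # unrelated vector

overlap_ab = sum(1 for x, y in zip(fly_hash(a), fly_hash(b)) if x == y == 1)
overlap_ac = sum(1 for x, y in zip(fly_hash(a), fly_hash(c)) if x == y == 1)
print(overlap_ab, overlap_ac)
```

Similar inputs keep most of their active Kenyon cells in common, which is the property the word-correlation work exploits.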

Full Article

 

AI Commission Presses DOD To Complete AI Infrastructure Work By 2025

FedScoop (1/28) reports an independent advisory committee “recommended in a draft of its final report to Congress that the DOD needs to widely implement an infrastructure to support artificial intelligence by 2025.” The National Security Commission on AI, chaired by former Google CEO Eric Schmidt with former DOD Deputy Secretary Bob Work as vice chair, “says that 2025 is the year that basic infrastructure needs to be in place for the military to widely adopt the family of emerging technologies.” According to the report, “accomplishing that will depend heavily on IT modernization and better harnessing the department’s data.” Once completed, the Pentagon “will have a chance to work toward some of its greater objectives, like building AI-enabled, sensor-based command and control networks.”

 

Chess Engine Sacrifices Mastery to Mimic Human Play
Cornell Chronicle
Melanie Lefkowitz
January 25, 2021


A team of researchers from Cornell University, Canada's University of Toronto, and Microsoft Research has developed an artificial intelligence chess engine that is trained to play like, rather than beat, humans. The Maia chess engine was taught to mimic human behavior through training on individual human chess moves, instead of the larger problem of winning the game. The researchers found Maia matched human moves within each skill level over 50% of the time, an accuracy rate higher than those of the popular chess engines Stockfish and Leela. Cornell's Jon Kleinberg said, "Our model didn't train itself on the best move; it trained itself on what a human would do. But we had to be very careful—you have to make sure it doesn't search the tree of possible moves too thoroughly, because that would make it too good. It has to just be laser-focused on predicting what a person would do next."
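Training on individual human moves rather than on winning is behavioral cloning: at its core, maximum-likelihood prediction of what players at a given skill level actually did. A minimal stand-in with made-up positions and moves (the real Maia trains a deep network on large databases of games binned by player rating):

```python
from collections import Counter, defaultdict

# Toy (position, move) pairs from human games at one skill level.
# In Maia's setting this would be millions of games; here, a tiny stand-in.
human_games = [
    ("pos1", "e4"), ("pos1", "e4"), ("pos1", "d4"),
    ("pos2", "Nf3"), ("pos2", "Nf3"), ("pos2", "Qh5"),
]

# Behavioral cloning by maximum likelihood: for each position, predict the
# move humans played most often -- not the objectively strongest move.
move_counts = defaultdict(Counter)
for pos, move in human_games:
    move_counts[pos][move] += 1

def predict_human_move(pos):
    return move_counts[pos].most_common(1)[0][0]

print(predict_human_move("pos1"))  # "e4": the majority human choice
```

The design choice Kleinberg describes, avoiding deep search, corresponds to never consulting an evaluation of which move is *best*, only which move is *likely*.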

Full Article

 

Google Investigates Second AI Researcher

Bloomberg (1/20, Grant, Bass) reports that Google “locked out artificial intelligence researcher Margaret Mitchell from its corporate systems, making her the second outspoken critic at the company to be sidelined.” Google said its security systems triggered after detecting that “an account had exfiltrated thousands of files,” and confirmed that it is investigating the incident. The incident comes amid backlash over the departure of ethicist Timnit Gebru. Bloomberg notes, “Mitchell has been a vocal supporter of Gebru on social media and a critic of the shortcomings of Google and other tech companies in how they address issues of race, gender and systemic bias.”

 

 

How to Train a Robot (Using AI and Supercomputers)
Texas Advanced Computing Center
Aaron Dubrow
January 19, 2021


Computer scientists at the University of Texas at Arlington (UT Arlington) are using generative adversarial networks (GANs) to train robots about objects. Such training typically requires a large dataset of images, but GANs can create a potentially limitless amount of data with which to train a robot in just seconds. The researchers developed PCGAN, the first conditional GAN to generate dense colored point clouds in an unsupervised mode. In an evaluation of 5,000 random samples for each object class, they determined PCGAN can synthesize high-quality point clouds for a disparate array of object classes. Said UT Arlington's William Beksi, "We're starting small, working with objects, and building to a hierarchy to do full synthetic scene generation that would be extremely useful for robotics."
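A conditional GAN for point clouds pairs a generator, which maps noise plus an object class to a set of colored points, with a discriminator that judges labeled clouds. The sketch below shows only this interface with untrained placeholders; the sizes and function bodies are invented for illustration and are not PCGAN's actual architecture:

```python
import random

NUM_POINTS = 1024  # points per generated cloud (illustrative; PCGAN's are denser)

def generator(z, object_class, rng):
    """Stand-in for a conditional generator: noise + class label -> colored cloud.
    Each point carries a position and a color: (x, y, z, r, g, b).
    A trained conditional GAN learns this mapping adversarially."""
    return [tuple(rng.random() for _ in range(6)) for _ in range(NUM_POINTS)]

def discriminator(cloud, object_class):
    """Stand-in critic: scores how plausible a labeled cloud looks, in [0, 1].
    During GAN training this score is the generator's learning signal."""
    return 0.5  # an untrained critic is maximally uncertain

rng = random.Random(0)
noise = [rng.random() for _ in range(128)]  # latent code
cloud = generator(noise, "mug", rng)
print(len(cloud), len(cloud[0]))  # 1024 points, 6 channels each
```

The "conditional" part is the class label fed to both networks, which is what lets one trained model synthesize clouds for many object classes.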

Full Article

 

 

Who Needs a Teacher? AI Designs Lesson Plans for Itself
Science
Matthew Hutson
January 19, 2021


Researchers at the University of California, Berkeley (UC Berkeley) and Google have created artificial intelligence (AI) that can ascertain how best to educate itself. In one experiment, the scientists tested an AI agent that navigates a two-dimensional grid populated with blocks to reach a destination quickly; the agent improved its abilities via reinforcement learning. Through the PAIRED approach, the researchers coupled their AI (protagonist) with a nearly identical one with a varied set of strengths (antagonist); a third AI designed environments that the antagonist but not the protagonist could solve easily. Trained over many trials, the protagonist attempted a set of complex mazes, using the PAIRED method to solve one in five. Meanwhile, UC Berkeley's Pieter Abbeel demonstrated that autocurricula can help robots learn to manipulate objects, and further suggested AI could help customize material to a learner's needs.
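In PAIRED (Protagonist Antagonist Induced Regret Environment Design), the environment designer is rewarded with the regret, the gap between the antagonist's and the protagonist's returns, which steers it toward environments that are solvable yet not mastered by the protagonist. A toy numeric sketch with invented environments and scores:

```python
# Candidate environments with the (hypothetical) returns each agent achieves.
# regret = antagonist return - protagonist return: high when the environment is
# solvable (antagonist succeeds) but the protagonist has not mastered it yet.
candidates = {
    "empty_room":  {"protagonist": 1.0, "antagonist": 1.0},  # too easy: regret 0
    "simple_maze": {"protagonist": 0.4, "antagonist": 0.9},  # instructive gap
    "impossible":  {"protagonist": 0.0, "antagonist": 0.0},  # unsolvable: regret 0
}

def regret(env):
    scores = candidates[env]
    return scores["antagonist"] - scores["protagonist"]

# The adversarial designer proposes the maximum-regret environment, which
# automatically keeps the curriculum at the frontier of the protagonist's skill.
chosen = max(candidates, key=regret)
print(chosen)  # "simple_maze"
```

Maximizing regret rules out both trivial and impossible environments, which is why the protagonist ends up with a steadily harder but always feasible lesson plan.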

Full Article

 

 

Facebook AI Predicts Likelihood of Worsening Covid Symptoms
CNBC
Sam Shead
January 15, 2021


Artificial intelligence (AI) researchers at social media company Facebook claim to have developed software that can forecast the probability of a Covid-19 patient's symptoms escalating, based on their chest x-rays. The Facebook team worked with academics in New York University Langone Health's predictive analytics unit and radiology department. The researchers said the new software could help prevent doctors from sending at-risk patients home too early, as well as helping hospitals plan for oxygen demand. The collaborators said they devised three machine learning models: one predicts deterioration based on a single chest x-ray, the second performs the same task with a sequence of x-rays, and the third employs one x-ray to anticipate how much supplemental oxygen a patient might require. The team said the sequential chest x-ray-based model could predict a patient's needs up to 96 hours in advance.

Full Article

 

 

On the 100th Anniversary of 'Robot,' They're Finally Taking Over
The Wall Street Journal
Christopher Mims
January 23, 2021


Robot technology has in many ways surpassed the vision presented in Karel Capek's play "R.U.R." ("Rossum's Universal Robots"), which introduced the word "robot" a century ago. Robots' rapid evolution is reflected in their growing presence in stores, streets, and elsewhere; the nonprofit International Federation of Robotics said that in 2019, 373,000 industrial robots were sold and put into use, bringing the total employed worldwide to 2.7 million. Meanwhile, 173,000 professional service robots were sold and installed in 2019, a number expected to climb to 537,000 units annually by 2023. The Brookings Institution's Mark Muro anticipates an acceleration in automation's disruption of American workers, especially in the service industry—with economic recession a key driver of this trend.

Full Article

*May Require Paid Registration

 

 

U.S. Leading Race in AI, with China Rising: Survey
Times of India
January 25, 2021


A study of 2020 data by the Information Technology and Innovation Foundation found that the U.S. leads the world in the development and use of artificial intelligence (AI), with China rapidly gaining ground and the European Union trailing. The foundation analyzed metrics including human talent, research activity, commercial development, and hardware and software investment, giving the U.S. a score of 44.6 points out of 100, with China following at 32 and the EU at just 23.3. The U.S. is outspending rivals in startup investment and research and development funding, while China controls more of the world's 500 most powerful supercomputers (214) than the U.S. (113) and Europe (91). The foundation's Daniel Castro said both the U.S. and Europe should respond to China's progress, "because nations that lead in the development of and use of AI will shape its future and significantly improve their economic competitiveness."

Full Article

 

 

When a Story is Breaking, AI Can Help Consumers Identify Fake News
Rensselaer Polytechnic Institute
Jeanne Hedden Gallagher
January 21, 2021


Rensselaer Polytechnic Institute (RPI) researchers determined that artificial intelligence (AI) can help people assess news stories accurately, but only when a story is first emerging. Overall, the RPI team found AI-driven interventions are ineffective when used to flag issues with stories on frequently covered subjects about which people have established convictions. Yet tailored AI-generated advice can help readers make better judgments on the legitimacy of news articles when the topic is too fresh for opinions to have formed. This intervention is most effective when it provides reasoning in line with a person's natural thought process, like an assessment of the accuracy of facts provided or the source's reliability. RPI's Dorit Nevo said, "If we can get to people early on when the story breaks and use specific rationales to explain why the AI is making the judgment, they're more likely to accept the advice."

Full Article

 

 

Merging Technologies with Color to Avoid Design Failures
Penn State College of Engineering
Miranda Buckheit
January 21, 2021


Pennsylvania State University (Penn State) researchers combined machine learning (ML) and image colorization algorithms to help avoid design failures. Penn State's Pranav Milind Khanolkar reviewed the use of the ABAQUS additive-manufacturing simulation software, which can pose difficulties because its speed and performance rely on a computer's hardware processing power. The team deployed ML algorithms to reduce reliance on computationally demanding finite element analysis and accelerate simulations. It then applied image colorization algorithms to material microstructure data, repurposing programs typically used to add color to monochrome photos. Khanolkar said, "Using intelligent technology to help people and empower their creativity and empathy during the design process is important. These algorithms need lots of computational power and using artificial intelligence ... allows designers to be more creative without impacting production cost."

Full Article

 

 

Scientists Count Elephants From Space with Satellites, Computer Smarts
CNet
Amanda Kooser
January 19, 2021


Scientists at the U.K.'s Universities of Oxford and Bath have developed a technique for counting African elephant populations using satellite camera imagery and a deep learning algorithm. The Maxar space technology company, which supplied the satellites used in the research, said the method boasts "comparable accuracy to human detection capabilities." The satellite images could replace elephant surveillance by manned aircraft, as the technique can single out elephants from a variegated landscape of grass and woodlands. Bath's Olga Isupova said, "Accurate monitoring is essential if we're to save the species."
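The counting step reduces to detecting elephant-sized objects in overhead imagery. The study uses a trained deep network; the sketch below substitutes a simple connected-components count over an already-thresholded toy grid, purely to make the counting pipeline concrete:

```python
# Toy 'satellite image' after detection/thresholding:
# 0 = background (grass/woodland), 1 = candidate elephant pixel.
image = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
]

def count_blobs(img):
    """Count connected groups of candidate pixels (4-connectivity) via flood fill.
    A real pipeline would produce the candidate mask with a CNN detector."""
    rows, cols = len(img), len(img[0])
    seen = set()
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and (r, c) not in seen:
                blobs += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    if not (0 <= y < rows and 0 <= x < cols) or not img[y][x]:
                        continue
                    seen.add((y, x))
                    stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

print(count_blobs(image))  # 3 separate detections
```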

Full Article

 

 

Optimizing Traffic Signals to Reduce Intersection Wait Times
Texas A&M Today
Stephanie Jones
January 15, 2021


Researchers at Texas A&M University and the U.K.'s University of Edinburgh have developed a system that uses machine learning to optimize the timing of traffic signals to reduce wait times at intersections. Their approach can successfully train a deep neural network in real time and transfer what it has learned from real-world observations to a different control function that can be understood and regulated by traffic engineers. The researchers used a simulation of a real intersection and found that optimizing their interpretable controller reduced vehicle delays by as much as 19.4% compared to commonly deployed signal controllers, which are the "brains" of an intersection.
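Stripped of the deep network, the objective is choosing signal timings that minimize queue delay. A toy fixed-cycle version with invented arrival and discharge rates (the actual system learns online and then transfers what it learned into a controller traffic engineers can inspect):

```python
# Hypothetical vehicles queued per cycle on each approach.
arrivals = {"north_south": 30, "east_west": 10}
CYCLE = 60           # signal cycle length in seconds
DEPART_RATE = 1.0    # vehicles an approach can discharge per second of green

def total_delay(green_ns):
    """Rough delay proxy: vehicles left waiting on each approach after one cycle."""
    green_ew = CYCLE - green_ns
    leftover_ns = max(0, arrivals["north_south"] - DEPART_RATE * green_ns)
    leftover_ew = max(0, arrivals["east_west"] - DEPART_RATE * green_ew)
    return leftover_ns + leftover_ew

# An interpretable 'controller' here is just the best fixed split for the demand;
# the learned system adapts this choice to real-time observations.
best_split = min(range(0, CYCLE + 1), key=total_delay)
print(best_split, total_delay(best_split))
```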

Full Article

 

 

AI to Map Our Intestinal Bacteria
University of Copenhagen (Denmark)
January 12, 2021


Researchers at Denmark's University of Copenhagen (UCPH) are applying artificial intelligence (AI) to the exploration of human intestinal bacteria and its relationship to disease. UCPH's Simon Rasmussen and colleagues developed an algorithm that uses AI to complete the DNA strings of the approximately 1 billion bacteria found in feces. Said Rasmussen, "If we are able to reconstruct their DNA, it will give us an idea of the types of bacteria we are dealing with, what they are capable of, and what they actually do. It is not the complete picture, but it is a huge step forward."

Full Article

 

Daniel Tauritz

Feb 9, 2021, 9:10:30 AM
to ai-b...@googlegroups.com

AI Predicts Asymptomatic Carriers of Covid-19
IEEE Spectrum
Emily Waltz
February 2, 2021


Researchers at technology company Synergies Intelligent Systems and Germany's Universität Hamburg have developed a machine learning algorithm that can identify which people in a moving crowd are most likely asymptomatic carriers of Covid-19. The continuous learning and inference of individual probability (CLIIP) algorithm bases its predictions on global positioning system (GPS)-tracked movement of people in a city, and known cases of infection. CLIIP's accuracy relies on people using a GPS-based smartphone application that tracks their location to within a meter (about 3.3 feet), and logging their positive viral test results. Synergies' Michael Chang said, "With this type of technology, we can quarantine a very small fraction of people—just 3% to 5%—and pretty effectively reduce the effect of the disease."
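The inference can be pictured as propagating infection probability across GPS-derived co-location events: each encounter with a known or likely carrier raises a person's probability. All names, durations, rates, and the update rule below are invented for illustration; this is not the published CLIIP model:

```python
# Co-location events inferred from GPS traces: (person_a, person_b, minutes together).
contacts = [("alice", "bob", 30), ("bob", "carol", 5), ("carol", "dave", 60)]
confirmed = {"alice"}        # logged positive test results
TRANSMISSION_PER_MIN = 0.01  # hypothetical per-minute transmission probability

# Everyone starts at a small prior; confirmed cases are pinned at probability 1.
risk = {p: 0.01 for pair in contacts for p in pair[:2]}
for person in confirmed:
    risk[person] = 1.0

# A few sweeps let risk flow along chains of contact (alice -> bob -> carol -> dave).
for _ in range(3):
    for a, b, minutes in contacts:
        p_transmit = 1 - (1 - TRANSMISSION_PER_MIN) ** minutes
        for src, dst in ((a, b), (b, a)):
            if dst in confirmed:
                continue  # confirmed cases stay at probability 1
            # dst stays uninfected only if previously uninfected AND src didn't transmit.
            risk[dst] = 1 - (1 - risk[dst]) * (1 - risk[src] * p_transmit)

most_at_risk = sorted(risk, key=risk.get, reverse=True)
print(most_at_risk)
```

Ranking by the resulting probabilities is what would let health authorities quarantine only the small, highest-risk fraction Chang describes.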
 

Full Article

 

 

'Liquid' ML System Adapts to Changing Conditions
MIT News
Daniel Ackerman
January 28, 2021


A team of researchers from the Massachusetts Institute of Technology (MIT), the Institute of Science and Technology Austria, and the Vienna University of Technology in Austria has developed flexible algorithms, also known as "liquid" networks, that continually alter underlying equations to adapt to new data inputs. Unlike most neural networks, whose behaviors are fixed after training, a liquid network can adapt to the variability of real-world systems and is more resilient to unexpected or noisy data. Said MIT's Ramin Hasani, "Just changing the representation of a neuron, you can really explore some degrees of complexity you couldn't explore otherwise. The model itself is richer in terms of expressivity." The network outperformed other state-of-the-art time series algorithms by a few percentage points in predicting future values in datasets. Said Hasani, "Everyone talks about scaling up their network. We want to scale down, to have fewer but richer nodes."
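In a liquid network, each neuron follows a small differential equation whose effective time constant depends on the current input, so the dynamics themselves shift as the data changes. A single-unit Euler-integration sketch with invented constants (the real liquid time-constant networks couple many such units and learn their parameters):

```python
import math

def f(x, I):
    """Input-dependent gate for the neuron (sigmoid of a toy affine map)."""
    return 1.0 / (1.0 + math.exp(-(0.5 * x + 1.5 * I)))

def simulate(inputs, tau=1.0, A=2.0, dt=0.05):
    """Euler-integrate one liquid time-constant unit:
       dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
    The effective time constant 1/(1/tau + f) shrinks when the input drives f up,
    so the neuron responds faster exactly when the signal demands it."""
    x = 0.0
    trace = []
    for I in inputs:
        dx = -(1.0 / tau + f(x, I)) * x + f(x, I) * A
        x += dt * dx
        trace.append(x)
    return trace

calm = simulate([0.0] * 200)    # weak input: settles to a lower equilibrium
driven = simulate([2.0] * 200)  # strong input: faster rise, higher equilibrium
print(round(calm[-1], 3), round(driven[-1], 3))
```

Because the equation itself changes with the input, the unit's behavior is not fixed after training, which is the adaptability the MIT team highlights.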

Full Article

 

 

Computer Model Makes Strides in Search for Covid-19 Treatments
Ohio State News
Emily Caldwell
February 1, 2021


A deep learning computer model developed by Ohio State University (OSU) computer scientists to predict human genes' interaction with drugs has identified at least 10 compounds as potential Covid-19 treatments. The DeepCE model taps the L1000 repository of human cell-line data concerning how gene expression changes in response to medications, and the DrugBank dataset on chemical structures and other details of roughly 11,000 approved and investigational drugs. The investigators applied DeepCE's gene expression prediction matrix to the genetic data in early Covid-19 papers and additional government information. Said OSU's Ping Zhang, "We can predict gene expression values for new chemicals not from one cell to one cell, but automatically predict the role of a drug on different cell lines and different genes."

Full Article

 

 

Amazon Plans AI-Powered Cameras in Delivery Vans to Improve Driver Safety
Reuters
Vishwadha Chander
February 3, 2021


Online retail giant Amazon has been installing artificial intelligence (AI)-powered video cameras across its delivery fleet to improve the safety of drivers and communities. The cameras, from transportation technology company Netradyne, use AI to provide real-time alerts to drivers about such things as excessive speed and distracted driving. In an instructional video, Amazon's Karolina Haraldsdottir said the cameras have been shown to reduce collisions and improve driver behavior. The cameras record video constantly, but upload footage only if triggered by hard braking, driver drowsiness, tailgating, and other such actions. Said Haraldsdottir, “Our intention with this technology is to set up drivers for success and provide them with support for being safer on road and handling incidents if and when they happen.”
 

Full Article

 

IBM, Boston Scientific Teaming Up To Develop Machine Learning Framework To Objectively Measure Chronic Pain Levels

Axios (2/3, Walsh) reports, “Scientists are working on a way to use AI to create quantitative measurements for chronic pain.” Because pain measurement remains “rudimentary,” doctors are hoping to build “methods that can objectively measure pain” that “can help ensure that the millions in need of palliative care aren’t left to suffer.” To address this, “scientists from IBM and Boston Scientific presented new research outlining a framework that uses machine learning and activity monitoring devices to capture and analyze biometric data that can correspond to the perception of pain.”

 

Coalition Launches Free, Online Course On AI Ethics

AiThority (2/4) reports the Governance Lab, NYU Tandon School of Engineering, Global AI Ethics Consortium, Center for Responsible AI @ NYU, and Technical University of Munich’s Institute for Ethics in Artificial Intelligence launched a “free, online course, AI Ethics: Global Perspectives” on Feb. 1. The course “conveys the breadth and depth of the ongoing interdisciplinary conversation on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI, to raise awareness and help institutions work towards more responsible use.” Julia Stoyanovich, director of the Center for Responsible AI at NYU Tandon, said, “It is up to us – current and future data scientists, business leaders, policy makers, and members of the public – to make AI what we want it to be.”

 

Two Google Engineers Resign Over Firing Of AI Ethics Researcher Timnit Gebru

Reuters (2/3, Dastin, Dave) reports that “an engineering director and a software developer have quit Alphabet Inc’s Google over the dismissal of AI researcher Timnit Gebru, a sign of the ongoing conflicts at the search giant over diversity and ethics.” Gebru, “who co-led a team on AI ethics, says she pushed back on orders to pull research that speech technology like Google’s could disadvantage marginalized groups.”

 

 

AI Can Tell What Song You Are Listening to From Your Brainwaves
New Scientist
Matthew Sparkes
January 26, 2021


Artificial intelligence (AI) developed by researchers at Delft University of Technology in the Netherlands can identify the songs a person is listening to by examining their brainwaves. The researchers used an electroencephalography (EEG) cap that detects the brain’s electrical activity to record the brainwaves of 20 test subjects as they listened to 12 songs through headphones while blindfolded in a dimly lit room. The AI was trained using short segments of each person's EEG readings along with the matching music clip to identify patterns, and identified the songs with 85% accuracy in tests on unseen portions of the data. However, accuracy fell below 10% when the AI was trained on EEG data from one person and then sought to identify a song when a different person listened to it. Said Delft's Derek Lomas, music is "just voltage fluctuations. And it's the same with the EEG."
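The song-identification task is, at heart, supervised classification of short labeled EEG segments from one listener. A nearest-centroid sketch on synthetic stand-in signals (the study's actual model and EEG features are more sophisticated, and as the article notes, real performance collapses across listeners):

```python
import random

SEG_LEN = 16       # features per 'EEG' segment (illustrative)
SONGS = [0, 1, 2]  # song labels

def base_pattern(song_id):
    """Song-specific 'brain response' template (stand-in for real EEG structure)."""
    rng = random.Random(song_id)
    return [rng.uniform(-1, 1) for _ in range(SEG_LEN)]

def make_segment(song_id, rng):
    """One noisy observation of the song's template, like one recorded segment."""
    return [v + rng.gauss(0, 0.3) for v in base_pattern(song_id)]

def train_centroids(segments):
    """Average the training segments per song: a nearest-centroid classifier."""
    return {song: [sum(col) / len(segs) for col in zip(*segs)]
            for song, segs in segments.items()}

def classify(segment, centroids):
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(centroids, key=lambda s: dist(segment, centroids[s]))

rng = random.Random(7)
train = {s: [make_segment(s, rng) for _ in range(20)] for s in SONGS}
centroids = train_centroids(train)

# Held-out segments from the same 'listener' are identified well above chance.
test_pairs = [(s, make_segment(s, rng)) for s in SONGS for _ in range(10)]
accuracy = sum(classify(seg, centroids) == s for s, seg in test_pairs) / len(test_pairs)
print(accuracy)
```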

Full Article

*May Require Paid Registration

 

 

Crafting Common Sense Into AI Models Through Gameplay
USC Viterbi School of Engineering
Rene Van Steenbergen
January 25, 2021


A gameplay-mediated training process for artificial intelligence (AI) models developed by researchers at the University of Southern California Viterbi School of Engineering (USC Viterbi) tested AI's ability to master decision-making in various settings and contexts. USC Viterbi's Jon May and colleagues applied deep reinforcement learning to teach the AI text-based games that followed a "choose-your-own-adventure" structure. The team used cooking games to train Google's Bidirectional Encoder Representations from Transformers (BERT) language-processing model, which eventually learns which decisions are beneficial and which are undesirable, but lacks common sense. The researchers trained BERT to make decisions for achieving desirable outcomes on unseen cooking games, and to generalize these skill sets to novel games in an unseen treasure-hunting domain.
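The setup scores candidate textual actions with a language model and improves those scores from game rewards. The sketch below swaps BERT for a tiny bag-of-words scorer with a bandit-style update; the game state, actions, rewards, and learning rule are all invented for illustration:

```python
import random

# A 'cooking game' state and the textual actions available in it.
state = "You are in the kitchen. The stove is on and a raw egg sits on the counter."
actions = ["cook the egg", "eat the raw egg", "leave the kitchen"]
REWARDS = {"cook the egg": 1.0, "eat the raw egg": -1.0, "leave the kitchen": 0.0}

# Learned per-word utilities stand in for a fine-tuned language model's scores.
word_value = {}

def score(state_text, action):
    """Tiny stand-in for a language model scoring a (state, action) pair."""
    return sum(word_value.get(w, 0.0) for w in action.split())

def update(action, reward, lr=0.5):
    """Bandit-style update: nudge each word's utility to shrink the reward error."""
    words = action.split()
    err = reward - score(state, action)
    for w in words:
        word_value[w] = word_value.get(w, 0.0) + lr * err / len(words)

rng = random.Random(0)
for _ in range(200):
    a = rng.choice(actions)   # explore actions at random
    update(a, REWARDS[a])     # learn from the game's reward signal

best = max(actions, key=lambda a: score(state, a))
print(best)  # the scorer learns that cooking beats eating the egg raw
```

Generalizing to an unseen treasure-hunting game then amounts to the learned scorer transferring its preferences to new states and action phrasings, which is where the full language model matters.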

Full Article

 

 

A NEAT Reduction of Complex Neuronal Models Accelerates Brain Research
University of Bern (Switzerland)
January 27, 2021


Neuroscientists at Switzerland's University of Bern have crafted a computational method to simplify complex dendrite models, which could help artificial intelligence (AI) accelerate brain research. Bern's Willem Wybo said, "With these simpler models, neural responses can more easily be characterized and simulation of large networks of neurons with dendrites can be conducted." The technique leverages a mathematical relation between responses of simplified and complex dendrite models, rendering a linear model of the optimized objective within the parameters of the simplified model. The methodology has been compiled into the open source NEAT (NEural Analysis Toolkit) software for automating the simplification process.

Full Article

 

 

AI Art Critic Can Predict Which Emotions a Painting Will Evoke
New Scientist
Edd Gent
January 29, 2021


Stanford University researchers have developed an artificial intelligence (AI) art critic that can predict emotional reactions to famous paintings and even provide explanations. The team created a massive dataset of human reactions using a survey of more than 6,000 people who were asked to choose the dominant emotion evoked by 81,000 paintings and write a caption describing their decision. After being trained with this data, the AI was tasked with predicting emotions for paintings it had never seen before and providing captions. In a test using human evaluators, captions produced by the AI “passed” as being written by a human 50% of the time.
