Dr. T's AI brief

dtau...@gmail.com

Aug 15, 2021, 3:48:10 PM8/15/21
to ai-b...@googlegroups.com

AI Can Now Be Recognized as an Inventor
ABC (Australia)
Alexandra Jones
July 31, 2021


Australia's Federal Court has ruled that an artificial intelligence (AI) system can be legally recognized as an inventor on a patent application, challenging the assumption that invention is a purely human act. The decision concerns DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), an AI system that its creator has long argued can autonomously perform the "inventive step" required to qualify for a patent. DABUS is a swarm of disconnected neural networks that continuously generate "thought processes" and "memories," which independently produce new and inventive outputs. It has "invented" a design for a container based on fractal geometry, and a "device and method for attracting enhanced attention" that makes light flicker in a pattern mimicking human neural activity. Although DABUS is listed as the inventor, its creator Stephen Thaler owns the patent, meaning the push for the AI's inventor status is not an attempt to advocate for AI property rights.

Full Article

 

 

Like Babies Learning to Walk, Autonomous Vehicles Learn to Drive by Mimicking Others
The Brink (Boston University)
Gina Mantica
July 30, 2021


Engineers at Boston University aim to teach autonomous vehicles to drive safely by having them mimic others, similar to the way babies learn to walk. Their machine learning algorithm estimates the viewpoints and blind spots of other nearby cars to generate a bird's-eye view of the surrounding environment, helping autonomous cars detect obstacles and understand how other vehicles turn, negotiate, and yield without colliding. The self-driving cars learn by feeding the observed actions of surrounding vehicles into their neural networks. Because observations from all of the surrounding vehicles in a scene are a core element of the algorithm's training, the model encourages data sharing and improves autonomous vehicle safety.
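The core idea of fusing what each vehicle can see into one shared bird's-eye map can be sketched as follows. This is not the BU algorithm (which is learned); it is a minimal hand-coded illustration in which each vehicle contributes an occupancy grid with NaNs marking its blind spots.

```python
import numpy as np

def fuse_views(views):
    """Fuse per-vehicle occupancy grids into one bird's-eye map.

    Each view is a 2-D array: 1.0 = obstacle, 0.0 = free,
    np.nan = cell hidden from that vehicle (a blind spot).
    A cell is filled in if any vehicle observed it.
    """
    stacked = np.stack(views)                     # (n_vehicles, H, W)
    seen = ~np.isnan(stacked)                     # who saw which cell
    fused = np.where(seen.any(axis=0),
                     np.nanmax(stacked, axis=0),  # obstacle wins if any saw one
                     np.nan)                      # still a collective blind spot
    return fused

# Two cars, each with a different blind spot; together they cover the scene.
a = np.array([[1.0, np.nan],
              [0.0, 0.0]])
b = np.array([[np.nan, 0.0],
              [np.nan, 1.0]])
print(fuse_views([a, b]))
```

The point mirrored from the article: no single vehicle's view is complete, but pooling observations from every vehicle in the scene yields a map with no blind spots.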

Full Article

 

 

Q-CTRL, University of Sydney Devise ML Technique Used to Pinpoint Quantum Errors
HPCwire
July 29, 2021

Researchers at Australia's University of Sydney (USYD) and quantum control startup Q-CTRL have designed a method of pinpointing quantum computing errors via machine learning (ML). The USYD team devised a means of recognizing the smallest divergences from the conditions necessary for executing quantum algorithms with trapped ion and superconducting quantum computing equipment. Q-CTRL scientists assembled custom ML algorithms to process the measurement results, and minimized the impact of background interference using existing quantum controls. This yielded an easy distinction between sources of correctable "real" noise and phantom artifacts of the measurements themselves. USYD's Michael J. Biercuk said, "The ability to identify and suppress sources of performance degradation in quantum hardware is critical to both basic research and industrial efforts building quantum sensors and quantum computers."
 

Full Article

 

 

Biden Directs Agencies to Develop Cybersecurity Standards for Critical Infrastructure
The Wall Street Journal
Dustin Volz
July 28, 2021


U.S. President Joe Biden this week directed federal agencies to formulate voluntary cybersecurity standards for managers of critical U.S. infrastructure, in the latest bid to strengthen national defenses against cyberattacks. In a new national security memo, Biden ordered the Department of Homeland Security (DHS)'s cyber arm and the National Institute of Standards and Technology to work with agencies to develop cybersecurity performance goals for critical infrastructure operators and owners. DHS now is required to offer preliminary baseline cybersecurity standards for critical infrastructure control systems by late September, followed by final "cross-sector" goals within a year. Sector-specific performance goals also are required as part of a review of "whether additional legal authorities would be beneficial" to protect critical infrastructure, most of which is privately owned. An administration official said, "We're starting with voluntary, as much as we can, because we want to do this in full partnership. But we're also pursuing all options we have in order to make the rapid progress we need."
 

Full Article

*May Require Paid Registration

 

 

Cybersecurity Technique Keeps Hackers Guessing
U.S. Army DEVCOM Army Research Laboratory
July 27, 2021

The U.S. Army Combat Capabilities Development Command's Army Research Laboratory (ARL) has designed a machine learning-based framework to augment the security of in-vehicle computer networks. DESOLATOR (deep reinforcement learning-based resource allocation and moving target defense deployment framework) is engineered to help an in-vehicle network identify the optimal Internet Protocol (IP) shuffling frequency and bandwidth allocation to enable effective, long-term moving target defense. Explained ARL's Terrence Moore, "If you shuffle the IP addresses fast enough, then the information assigned to the IP quickly becomes lost, and the adversary has to look for it again." ARL's Frederica Free-Nelson said the framework keeps uncertainty sufficiently high to defeat potential attackers without incurring excessive maintenance costs, and prevents performance slowdowns in high-priority areas of the network.
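The trade-off DESOLATOR learns can be illustrated with a toy single-state Q-learning loop (a stand-in sketch, not the ARL code; the intervals and reward shape are assumed): shuffling too slowly lets an attacker map the network, shuffling too fast wastes bandwidth, and the learner settles on an intermediate interval.

```python
import random

# Candidate seconds between IP-address shuffles (assumed values).
INTERVALS = [1, 5, 30, 120]

def reward(interval):
    exposure = interval / 120      # longer interval -> easier to map the network
    overhead = 1.0 / interval      # shuffling too fast wastes bandwidth
    return -(exposure + overhead)  # best reward lies in between

random.seed(0)
q = {i: 0.0 for i in INTERVALS}
for step in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < 0.1:
        arm = random.choice(INTERVALS)
    else:
        arm = max(q, key=q.get)
    # Incremental value update (the single-state case of Q-learning).
    q[arm] += 0.05 * (reward(arm) - q[arm])

best = max(q, key=q.get)
print(best)  # an intermediate shuffle interval wins
```

The real framework uses deep reinforcement learning over network state and also allocates bandwidth, but the objective has the same shape: maximize attacker uncertainty per unit of maintenance cost.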
 

Full Article

 

 

How Olympic Tracking Systems Capture Athletic Performances
Scientific American
Eleanor Cummins
July 27, 2021


This year's Olympic Games in Tokyo use an advanced three-dimensional (3D) tracking system that captures athletes' performances in fine detail. Intel's 3DAT system sends live camera footage to the cloud, where artificial intelligence (AI) uses deep learning to analyze an athlete's movements and identify key performance traits like top speed and deceleration. 3DAT shares this information with viewers as slow-motion graphic representations of the action in less than 30 seconds. Intel's Jonathan Lee and colleagues trained the AI on recorded footage of elite track and field athletes, with all body parts annotated; the model could then link the video to a simplified rendering of an athlete's form. The AI can track this "skeleton" and calculate the position of each athlete's body in three dimensions as it moves through an event.
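The last step described above, turning tracked positions into traits like top speed and deceleration, is simple numerical differencing. This sketch is not Intel's 3DAT pipeline; the sample data are hypothetical.

```python
# Derive performance traits from per-frame positions along the track.
def performance_traits(positions, fps):
    """positions: metres travelled, one sample per frame at `fps` frames/sec."""
    speeds = [(b - a) * fps for a, b in zip(positions, positions[1:])]
    accels = [(b - a) * fps for a, b in zip(speeds, speeds[1:])]
    top_speed = max(speeds)
    max_decel = min(accels)  # most negative acceleration
    return top_speed, max_decel

# A sprinter accelerating, holding ~10 m/s, then slowing (made-up positions).
track = [0.0, 0.4, 1.0, 2.0, 3.0, 3.8, 4.4]
top, decel = performance_traits(track, fps=10)
print(top, decel)
```

In production the positions come from the AI-tracked 3D "skeleton" rather than a single scalar per frame, but the speed and deceleration arithmetic is the same per joint or per centre of mass.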

Full Article

 

 

Stanford ML Tool Streamlines Student Feedback Process for Computer Science Professors
Stanford News
Isabel Swafford
July 27, 2021


Stanford University researchers have developed and tested a machine learning (ML) teaching tool designed to assist computer science (CS) professors in gauging feedback from large numbers of students. The tool was developed for Stanford's Code In Place project, in which 1,000 volunteer teachers taught an introductory CS course to 10,000 students worldwide. The team scaled up feedback using meta-learning, a technique in which an ML system learns across many related problems so it can adapt to a new one from relatively little data. The researchers achieved accuracy at or above human level on 15,000 student submissions, using data from previous iterations of CS courses. The tool learned from human feedback on just 10% of the total Code In Place assignments, and reviewed the remainder with 98% student satisfaction.
 

Full Article

 

 

Bipedal Robot Learns to Run, Completes 5K
Oregon State University News
Steve Lundeberg
July 25, 2021


An untethered bipedal robot completed a five-kilometer (3.1-mile) run in just over 53 minutes. The Cassie robot, engineered by Oregon State University (OSU) researchers and built by OSU spinout company Agility Robotics, is the first bipedal robot to use machine learning to maintain a running gait on outdoor terrain. The robot taught itself to run using a reinforcement learning algorithm, and it makes subtle adjustments to remain upright while in motion. OSU's Jonathan Hurst said Cassie's developers "combined expertise from biomechanics and existing robot control approaches with new machine learning tools." Hurst added, "In the not-very-distant future, everyone will see and interact with robots in many places in their everyday lives, robots that work alongside us and improve our quality of life."

Full Article

 

 

AI Helps Improve NASA's Eyes on the Sun
NASA
Susannah Darling
July 23, 2021


U.S. National Aeronautics and Space Administration (NASA) scientists are calibrating images of the sun with artificial intelligence to enhance data for solar research. The Atmospheric Imaging Assembly (AIA) on NASA's Solar Dynamics Observatory captures this data, but requires regular calibration via sounding rockets to correct for periodic degradation. The researchers are pursuing constant virtual calibration between sounding rocket flights by first training a machine learning algorithm on AIA data to identify and compare solar structures, then feeding it similar images to determine whether it identifies the correct necessary calibration. The scientists also can employ the algorithm to compare specific structures across wavelengths and improve evaluations. Once the program can identify a solar flare without degradation, it can then calculate how much degradation is affecting AIA's current images, and how much calibration each needs.
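The calibration arithmetic the article describes reduces to a ratio. This is a hedged sketch of that final step only, not NASA's pipeline, and the brightness numbers are hypothetical: once the model predicts how bright a known structure should appear, the ratio to what the degraded detector recorded gives a degradation factor, and dividing by it recalibrates the image.

```python
# Per-channel degradation estimate from a structure of known brightness.
def degradation_factor(observed_brightness, predicted_true_brightness):
    return observed_brightness / predicted_true_brightness

def recalibrate(pixel_value, factor):
    return pixel_value / factor

# Hypothetical counts: the model says a flare should read 1000,
# the aging detector reports 700.
f = degradation_factor(700.0, 1000.0)  # 0.7
print(recalibrate(700.0, f))           # ~1000.0, the corrected value
```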

Full Article

 

 

Companies Beef Up AI Models with Synthetic Data
The Wall Street Journal
Sara Castellanos
July 23, 2021


Companies are building synthetic datasets to train artificial intelligence (AI) models to identify anomalies when real-world data is unavailable. American Express (Amex)'s Dmitry Efimov said researchers have spent several years studying synthetic data as a way to enhance the credit card company's AI-based fraud-detection models. Amex is experimenting with generative adversarial networks to produce synthetic data on rare fraud patterns, which can then be used to augment an existing dataset of fraud behaviors and improve general AI-based fraud-detection models. In this approach, one AI model generates new data while a second model attempts to determine whether that data is authentic. Efimov said early tests have demonstrated that the synthetic data improves the model's ability to identify specific types of fraud.
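The two-model setup can be shown at toy scale. This is emphatically not Amex's system: it is a minimal one-dimensional GAN in which a linear generator learns to fake samples from an assumed "fraud amount" distribution N(4, 1) while a logistic discriminator tries to tell real samples from fakes.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator: fake = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.02

for step in range(4000):
    real = rng.normal(4.0, 1.0, 64)   # assumed real "fraud" samples
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator gradient ascent: push D(real) -> 1, D(fake) -> 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator ascent on log D(fake) (the non-saturating GAN loss).
    df = sigmoid(w * fake + c)
    grad_x = (1 - df) * w             # d log D / d(fake sample)
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

print(round(b, 2))  # generator mean drifts toward the real mean
```

Real systems use deep networks over high-dimensional transaction records, but the adversarial loop, a generator scored only by whether a discriminator is fooled, is the same mechanism Efimov describes.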

Full Article

*May Require Paid Registration

 

 

Framework Applies ML to Atomistic Modeling
Northwestern University McCormick School of Engineering
Alex Gerage
July 21, 2021


A new framework uses machine learning to enhance the modeling of interatomic potentials—the rules governing atomic interaction—which could lead to more accurate predictions of atomic-level nanomaterial behavior. An international team led by researchers from Northwestern University’s McCormick School of Engineering designed the framework, which applies multi-objective genetic algorithm optimization and statistical analysis to minimize user intervention. Northwestern's Horacio Espinosa said the algorithms "provide the means to tailor the parameterization to applications of interest." The team found the accuracy of interatomic potential correlated with the complexity and number of the stated parameters. Said Northwestern's Xu Zhang, "We hope to make a step forward by making the simulation techniques more accurately reflect the property of materials."

Full Article

 

 

Training Computers to Transfer Music from One Style to Another
UC San Diego News Center
Doug Ramsey
July 20, 2021


Translating musical compositions between styles is possible via the ChordGAN tool developed by the University of California, San Diego (UCSD)'s Shlomo Dubnov and Redmond, WA, high school senior Conan Lu. ChordGAN is a conditional generative adversarial network (GAN) framework that uses chroma sampling, which records only a 12-pitch-class note distribution, to separate style from content (tonal or chord changes). Lu said, "This explicit distinction of style from content allows the network to consistently learn style features." The researchers compiled a dataset of several hundred MIDI samples in the pop, jazz, and classical music styles; the files were pre-processed into piano roll and chroma formats. Said Lu, "Our solution can be utilized as a tool for musicians to study compositional techniques and generate music automatically from lead sheets."
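The chroma representation itself is easy to sketch (an assumed minimal version, not the authors' preprocessing code): collapse MIDI pitches to the 12 pitch classes and weight each class by note duration.

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def chroma(notes):
    """notes: list of (midi_pitch, duration_in_beats) pairs.
    Returns a normalized 12-bin distribution over pitch classes."""
    bins = [0.0] * 12
    for pitch, duration in notes:
        bins[pitch % 12] += duration   # octave information is discarded
    total = sum(bins)
    return [v / total for v in bins] if total else bins

# A held C-major chord: C4 (60), E4 (64), G4 (67), one beat each.
c_major = chroma([(60, 1.0), (64, 1.0), (67, 1.0)])
print([NOTE_NAMES[i] for i, v in enumerate(c_major) if v > 0])  # ['C', 'E', 'G']
```

Because octave and instrumentation details are discarded, two renditions of the same progression in different styles share nearly the same chroma, which is what lets the network treat chroma as "content" and everything else as "style."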

Full Article

 

 

Extracting More Accurate Data From Images Degraded by Rain, Nighttime, Crowded Conditions
Yale-NUS College (Singapore)
July 19, 2021


Novel computer vision and human pose estimation methods can extract more accurate data from videos obscured by visibility issues and crowding, according to an international team of scientists led by researchers at the Yale-National University of Singapore College. The research team used two deep learning algorithms to enhance the quality of videos taken at night and in rainy conditions. One algorithm boosts brightness while simultaneously suppressing noise and light effects to produce clear nighttime images, while the other algorithm applies frame alignment and depth estimation to eliminate rain streaks and the rain veiling effect. The team also developed a technique for estimating three-dimensional human poses in videos of crowded environments more reliably by combining top-down and bottom-up approaches.

Full Article

 

AI Software Developers Urged To Take Diversity Into Account

Automotive News (7/5, St. John) reports on efforts to consider diversity when developing AI and autonomous systems in vehicles, as failing to account for different people or ways of thinking can result in crashes. AN says autonomous systems "can recognize or predict unusual road scenarios as well as perceive changes in the driving environment and navigate around them – so long as the system is appropriately trained," citing a study showing that autonomous systems were less likely to detect people of color as an example of that caveat failing. AN says AI bias can arise "from a lack of understanding about the type of data needed to solve the problem at hand – or not supplying enough diversity of data or scenarios to the system," as well as from a "lack of diversity in the development team."

 

Scientist Teaching AI To Police Human Speech

The Washington Post (7/1) reports that "for years, automated 'crawlers' had been vacuuming the Internet...into this gargantuan database in 100 languages: Arabic, Malagasy, Urdu and dozens more." Artificial-intelligence research scientist Alexis Conneau "couldn't read it himself," but his creation, XLM-RoBERTa, "had read it many times: This was its brain matter, the code with which the machine could, in some way, learn to emulate how people speak." At its most basic level, "the software that powers this artificial intelligence revolution is being built by people like Conneau...who believes, like many AI researchers, that the field's growing number of ethical questions are better decided by someone else." Conneau "has helped spearhead a category of AI known as natural-language processing that has redefined how we communicate on the Web." He has "led research into AI that Facebook and others used to refine their automatic blocking systems for bullying, bigotry and hate speech, tackling the coarsening influence of the Web faster and more rigorously than any human moderator ever could."
