Dr. T's AI brief


dtau...@gmail.com

Jan 19, 2020, 8:11:54 AM
to ai-b...@googlegroups.com

Solving the Puzzle of AI's Role in Cybersecurity
You want prestigious cybersecurity research? Look no further than Daniel Tauritz, chief cyber AI strategist for the Auburn Cyber Research Center. Fri, 17 Jan 2020

 

 

Scientists at UVM, Tufts Create 'Living Robots'
The Boston Globe
January 13, 2020


Researchers at the University of Vermont (UVM) and Tufts University created tiny living robots made of frog cells using a supercomputer, an achievement with implications for regenerative medicine. The biobots were designed on the Deep Green supercomputer cluster at UVM's Vermont Advanced Computing Core. The supercomputer considered billions of designs in search of one that would quickly travel across the bottom of a petri dish, then the team used tiny forceps and a tinier electrode to assemble cells from the embryos of African frogs into a close approximation of the designs specified by the computer. Said UVM's Joshua Bongard, "We used an evolutionary algorithm, a computer program which, in virtual worlds, evolves virtual creations."
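
The evolutionary algorithm Bongard describes can be illustrated with a short sketch. The toy problem below (evolving a bitstring toward all ones, standing in for "distance travelled across the dish") is a hypothetical stand-in, not the team's actual design pipeline:

```python
import random

random.seed(0)

def fitness(design):
    # Toy objective: count of 1-bits, standing in for "distance
    # travelled across the petri dish" in the real simulations.
    return sum(design)

def evolve(pop_size=20, length=16, generations=30, mutation_rate=0.05):
    # Random initial "designs".
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Refill the population with mutated copies of the parents.
        children = [[1 - bit if random.random() < mutation_rate else bit
                     for bit in parent] for parent in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Because the fittest designs survive each generation unchanged, the best fitness never decreases; mutation supplies the variation that selection then filters.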

Full Article

 

 

Deep Learning, 3D Technology to Improve Structure Modeling for Protein Interactions, Create Better Drugs
Purdue University News
Chris Adam
January 9, 2020


Researchers at Purdue University have developed a system that applies deep learning principles to virtual models of protein interactions in an effort to better understand how proteins interact in the body, with the goal of developing better drugs that specifically target these interactions. The DOVE (DOcking decoy selection with Voxel-based deep neural nEtwork) system captures the structural and energetic features of the interface of a protein docking model with a three-dimensional (3D) box and judges whether the model is more likely to be correct or incorrect through the use of a 3D convolutional neural network. Said Purdue's Daisuke Kihara, "This may be the first time researchers have successfully used deep learning and 3D features to quickly understand the effectiveness of certain protein models."
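
The 3D convolutional network at DOVE's core repeatedly applies 3D convolutions over a voxel grid. A minimal sketch of that single operation follows (toy data, not DOVE's actual features or weights):

```python
def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation) over a voxel
    grid: the core operation of a voxel-based 3D CNN."""
    n, ks = len(volume), len(kernel)      # cubic volume and kernel
    m = n - ks + 1                        # output side length
    out = [[[0.0] * m for _ in range(m)] for _ in range(m)]
    for x in range(m):
        for y in range(m):
            for z in range(m):
                s = 0.0
                for i in range(ks):
                    for j in range(ks):
                        for p in range(ks):
                            s += volume[x + i][y + j][z + p] * kernel[i][j][p]
                out[x][y][z] = s
    return out

# A 4x4x4 grid of ones with a 2x2x2 averaging kernel: each output
# voxel averages 8 cells of value 1.0, so the result is 1.0 everywhere.
volume = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
kernel = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
result = conv3d(volume, kernel)
```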

Full Article

 

 

AI Learns to Defy Laws of Physics to Win at Hide-and-Seek
New Scientist
Douglas Heaven
September 17, 2019


Scientists at the OpenAI artificial intelligence (AI) laboratory have developed AI bots that trained themselves to cooperate by playing hide-and-seek. The team had the bots play the game in a simulated environment containing fixed walls and movable boxes; each bot had its own perspective of its surroundings, and could not directly communicate with other bots. The bots that hid quickly deduced the fastest way to fool seekers was to find objects in the environment with which to conceal themselves; the seekers learned they could manipulate objects like ramps to overcome obstacles like walls. The bots learned that cooperation—like passing objects to each other or co-building a hideout—was the quickest way to win. Said Chelsea Finn at Stanford University, “The main limitation of this kind of work is that it is in simulation.”

Full Article

 

 

DOE Unveils AI and Technology Office
NextGov.com
Brandi Vincent
September 9, 2019


The U.S. Department of Energy (DOE) has established an Artificial Intelligence and Technology Office to coordinate and accelerate the agency's enterprise efforts to leverage the technology. The DOE currently has a wide range of artificial intelligence (AI) and machine learning applications and projects under way, including strengthening the nation's national security posture, supporting the nuclear stockpile, and enabling the Internet of Things. The agency also conducts AI research at its national labs, where experts are building three exascale computers and conducting advanced research on four of the 10 fastest supercomputers in the world. Said Energy Secretary Rick Perry, "The importance of [AI] to our country, the importance of this to your children, to our families, and economy, can’t be overstated."

Full Article

 

 

AI Speeds Photodynamics Simulations
University of Vienna (Austria)
Stephan Brodicky
September 11, 2019


Researchers at the University of Vienna in Austria have developed a method of using artificial neural networks to accelerate the simulation of light-induced processes significantly. The approach provides new possibilities for a better understanding of biological processes, such as the first steps of carcinogenesis or the aging processes of matter. The team taught the system complex quantum-mechanical relationships by performing some calculations beforehand and passing that knowledge on to the neural network. As part of the study, the researchers completed photodynamics simulations of a test molecule called methylenimmonium cation. After two months of computing, the researchers were able to reproduce the reaction for a nanosecond; using previous methods, the simulation would have taken about 19 years.

Full Article

 

 

How to Beat South Korea's AI Hiring Bots and Land a Job
Reuters
Sangmi Cha
January 12, 2020


Major South Korean companies are using artificial intelligence (AI) to screen job applicants, giving rise to an industry to help job-seekers beat these systems. For example, the People & People consultancy offers classes on handling video interviews using facial-recognition technology. One AI system asks interviewees to introduce themselves, while it identifies and tallies facial expressions and analyzes word choices; other systems employ gamification to test a candidate's personality and adaptability. Some candidates think it is futile to try to beat AI hiring. University student Kim Seok-wu said, "The AI interview is too new, so job applicants don't know what to prepare for and any preparations seem meaningless, since the AI will read our faces if we make something up."

Full Article

 

 

dtau...@gmail.com

Jan 21, 2020, 8:08:56 AM
to ai-b...@googlegroups.com

Bosch Deploys AI to Prevent Attacks on Car Electronics
The Wall Street Journal
James Rundle; John McCormick
January 6, 2020


Engineering company Robert Bosch is deploying artificial intelligence (AI) to fortify cars' electronic systems against hackers who attempt to feed the systems intentionally incorrect road-sign information. Road-sign standardization makes traffic-sign recognition technology well-suited to machine learning and deep learning image-identification algorithms, but malefactors can deceive the algorithms by defacing the signs. Bosch’s Michael Bolle said the company has unveiled a computer-vision-based AI process designed to analyze and compare an object from two different perspectives. The findings of deep learning algorithms that identify road signs are checked by computer-vision algorithms, and discrepancies between the readings could indicate spoofing. Said Bolle, "In the area of machine learning and AI, products and machines learn from data, and so the data itself can be part of the attack surface."
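
The cross-checking idea described above can be sketched as a simple voting rule: when two independent pipelines disagree about a sign, the reading is flagged as possibly spoofed. The labels and structure below are illustrative assumptions, not Bosch's implementation:

```python
def cross_check(deep_label, vision_label):
    """Accept a road-sign reading only when the deep-learning
    classifier and an independent computer-vision pipeline agree;
    a discrepancy may indicate a spoofed (defaced) sign."""
    if deep_label == vision_label:
        return {"label": deep_label, "suspect": False}
    return {"label": None, "suspect": True}

# Agreement: both pipelines read a stop sign.
ok = cross_check("stop", "stop")
# Discrepancy: a defaced sign fools the deep model but not the
# geometry-based vision pipeline, so the reading is flagged.
flagged = cross_check("speed_limit_80", "stop")
```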

Full Article

*May Require Paid Registration

 

 

Algorithms Help Find Minimum Energy Paths, Saddle Points More Effectively
Aalto University
January 8, 2020


A researcher at Finland's Aalto University developed and tested machine learning algorithms based on Gaussian process regression to improve searches for minimum energy paths and saddle points, an arduous challenge in theoretical chemistry. Accurately assessing energy and forces for each atomic configuration consumes much computing power; Aalto's Olli-Pekka Koistinen designed algorithms to reduce the number of observation points and costly energy evaluations required to compute energy paths and saddle points. The process traditionally involves iterative techniques that follow along an energy surface, but Koistinen's method allows use of earlier observations to model the surface and reach objectives with fewer iterations. Said Koistinen, “This is another example of a research topic in which machine learning methods can be helpful.”
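
Gaussian process regression, the tool at the heart of Koistinen's algorithms, can be sketched compactly: a kernel encodes similarity between configurations (here just 1D points on a toy energy surface), and a small linear solve yields predictions at unvisited points. This is a generic illustration, not the paper's actual method for paths and saddle points:

```python
import math

def rbf(a, b, length_scale=1.0):
    # Radial basis function kernel: similarity decays with distance.
    return math.exp(-(a - b) ** 2 / (2 * length_scale ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small
    linear systems that arise in GP regression."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(x_train, y_train, x_new, noise=1e-6):
    """Posterior mean of a GP with an RBF kernel at x_new."""
    n = len(x_train)
    K = [[rbf(x_train[i], x_train[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y_train)
    return sum(rbf(x_new, x_train[i]) * alpha[i] for i in range(n))

# Toy 1D "energy surface": a handful of expensive evaluations of x^2,
# then cheap GP predictions in between, mirroring how a surrogate
# model reduces the number of costly energy evaluations.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]
pred = gp_predict(xs, ys, 0.5)
```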

Full Article

 

 

Neural Nets Probe Dark Matter in the Universe
ETH Zurich
Oliver Morsch
September 18, 2019


Researchers at ETH Zurich in Switzerland used machine learning algorithms to estimate the amount of dark matter in the universe. The ETH Zurich researchers taught deep artificial neural networks to extract the greatest amount of information from mass maps of the sky showing where dark matter is located by feeding the networks computer-generated data that simulates the universe. By analyzing iterations of the dark matter maps, a neural network learned to extract the desired data, coming up with values 30% more accurate than those obtained by human-made statistics. The scientists then used their trained neural network to analyze actual dark matter maps. Said ETH Zurich researcher Alexandre Refregier, “Facebook uses its algorithms to find eyes, mouths, or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy."

Full Article

 

 

Predicting El Niño Up to 18 Months in Advance
Science
Warren Cornwall
September 18, 2019


Researchers at Chonnam National University in South Korea have found that artificial intelligence (AI) can forecast El Niño events 18 months in advance. The AI used was a convolutional neural network trained on global images of historic sea surface temperatures and deep ocean temperatures to ascertain how they correlated with the later onset of El Niño events. The researchers fed the network recreations of historic ocean conditions produced by reputable climate models often used to investigate climate change, including several thousand simulations of historic data. The AI forecast El Niño events as far ahead as 18 months when tested against actual data from 1984 to 2017, and was better able to locate which region of the Pacific would experience the highest rise in temperature. Retired researcher William Hsieh said this is a clever approach for bypassing the shortage of El Niño data that limits other forecasting models.

Full Article

 

 

AI Researchers See Danger of Haves, Have-Nots
The New York Times
Steve Lohr
September 26, 2019


Scientists warn the growing cost of artificial intelligence (AI) research means fewer people can easily access computing power to advance the technology, which could make behemoths like Google, Microsoft, and Facebook dominant, while university laboratories get shortchanged. The Allen Institute for Artificial Intelligence estimated that the calculations required to perform AI tasks have ballooned approximately 300,000 times in the last six years. University of Massachusetts, Amherst researchers also found AI software can consume an enormous amount of power. Allen Institute scientists suggest changing the AI success quantification model could help resolve both cost and power issues—with efficiency as well as accuracy factored in. Large tech firms are focusing on efficiency upgrades, and they argue this will make computing power more available to outside developers and academics.

Full Article

*May Require Paid Registration

 

 

Machine Learning Helps Open Up New Possibilities for Quantum Devices
University of Oxford
September 26, 2019


Researchers at the University of Oxford and Lancaster University in the U.K. and the University of Basel in Switzerland have developed an algorithm that can be used to measure quantum dots automatically, a step toward the large-scale application of qubits. The machine learning algorithm reduces the measuring time and the number of measurements compared with conventional data acquisition. The researchers trained the algorithm with data on the current flowing through the quantum dot at different voltages. The program gradually learns where further measurements are needed, and aims to achieve the maximum information gain. Said Oxford's Natalia Ares, "For the first time, we've applied machine learning to perform efficient measurements in gallium arsenide quantum dots, thereby allowing for the characterization of large arrays of quantum devices." Basel's Dominik Zumbühl added, "The next step at our laboratory is now to apply the software to semiconductor quantum dots made of other materials that are better suited to the development of a quantum computer."

Full Article

 

 

Israel Prepares to Unleash AI on Healthcare
The Wall Street Journal
Dov Lieber
September 15, 2019


Israel is becoming a testbed for artificial intelligence (AI)-driven healthcare enhancements. Last year, the Israeli government announced an initiative to integrate the digital medical records held by the country's health maintenance organizations (HMOs) into a single unified system. The hope is that the records' structure will make them usable by AI programs and data analytics, boost their value to scientists and healthcare companies, and better personalize healthcare. Leaders in Israel’s medical and government sectors continue to grapple with ethical and privacy issues related to the initiative, but some international health officials say the potential benefits outweigh the pitfalls.

Full Article

*May Require Paid Registration

 

 

AWS, Blackberry Partner On Connected Cars

Roadshow (1/6, Hyatt) reports AWS and Blackberry are partnering to make cars “more secure and quicker to adapt.” They “are taking BlackBerry’s well-developed, relatively secure and widely employed QNX operating system and pairing it with AWS’ internet-of-things and machine learning capabilities to create something called the Intelligent Connected Vehicle Software Platform that will allow vehicle manufacturers to securely access a vehicle’s data.” Roadshow explains that “having fast and secure access to vehicle data means that a vehicle manufacturer can offer things like cockpit personalization and vehicle health monitoring as well as the rapid adoption of new driver assistance systems, and it can all be done without ever having to plug the vehicle into a dealer computer.”

        Engadget (1/6) reports the software environment will centralize control over “vehicle systems like engine controllers, vehicle trackers (TCUs) and digital cockpits.” At the same time, IoT tech from AWS will “let car manufacturers build AI apps and access vehicle sensor data for connected, electric and autonomous cars.” TechRadar (1/6, Spadafora) reports car companies can use the platform “to enable essential functions such as continuous EV battery life monitoring and prediction, managing vehicle sub-systems to control warranty costs, providing security updates, updating software features and monetizing vehicle data while exercising control over data access.”

        TechRepublic (1/6, Combs) reports AWS connectivity lets “automotive software developers...collect data from vehicle sensors, build software applications and machine learning models using vehicle data and then deploy the models inside the vehicle to enable in-vehicle inference and actions.” The connected vehicle software “platform uses Amazon SageMaker for developing ML models and AWS IoT edge services for in-vehicle ML inference.” BlackBerry Technology Solutions SVP John Wall said in a press release, “By combining our technologies and strengths into one dedicated cloud-based offering, we can ensure automakers have the tools they need to deliver the driving experience consumers want, without sacrificing on the security and reliability they need.”

 

dtau...@gmail.com

Jan 26, 2020, 4:41:02 PM
to ai-b...@googlegroups.com

Can AI Put Humans Back in the Loop?
ZDNet
Tiernan Ray
January 21, 2020


Scientists at Germany's Technische Universität Darmstadt have developed a process for having a human domain expert review an artificial intelligence model's inner mechanisms during training, in order to catch simple problems and correct errors. Such an expert would check the reasoning offered by a neural network, with the overall goal of building more trust in machine learning. The experimental explanatory interactive learning procedure involves a convolutional neural network classifying the phenotype of a plant as healthy or diseased by analyzing leaf images. The researchers visualize the features the network is using, then a plant biology specialist fixes any network errors.

Full Article

 

 

AI Learns to Rule the Quantum World
Aarhus University
Rasmus Rorbaek
January 16, 2020


Researchers at Aarhus University in Denmark used a game-playing algorithm to run a quantum computer. The AlphaZero algorithm can teach itself to beat human game-players at chess without human intervention, and the Aarhus group used computer simulations to apply AlphaZero to three distinct control problems that could each find potential use in a quantum computer. The researchers realized the best results when combining AlphaZero with a specialized quantum optimization algorithm. Said Aarhus' Jacob Sherson, "This indicates that we are still in need of human skill and expertise, and that the goal of the future should be to understand and develop hybrid intelligence interfaces that optimally exploit the strengths of both."

Full Article

 

 

AI Suggests Earth Has Had Fewer Mass Extinctions Than Thought
New Scientist
Michael Le Page
January 16, 2020


Researchers at China's Nanjing Institute of Geology and Paleontology and the U.S. National Museum of Natural History used machine learning and a supercomputer to generate a biodiversity record that suggests fewer mass extinctions have occurred in the history of Earth than previously thought. Their record omits the late Devonian extinction by dividing the past into shorter chunks of time—only 26,000 years long, instead of about 10 million years long—and statistically analyzing 100,000 records of 11,000 marine species found in fossils. The researchers developed machine-learning procedures to process the data on China’s Tianhe-2 supercomputer. Said paleontologist Richard Bambach, "The mid-late Devonian diversity decrease is still very clear, but it is spread through the whole time and not concentrated in a single mass extinction.”

Full Article

 

 

AB InBev Taps Machine Learning to Root Out Corruption
The Wall Street Journal
Dylan Tokar
January 17, 2020


Brewer Anheuser-Busch InBev spent three years developing machine learning technology to spot corruption in its business partners. The BrewRight analytics platform harnesses data from operations in more than 50 countries to proactively track legal risks and deter violations, rather than investigating problems after they crop up. Companies have traditionally probed misconduct after it happens, but Harvard Business School's Eugene Soltes said, "Data analytics and what AB InBev has done changes that equation. They want to put much more on the front-end—on prevention and detection." The machine learning aspect allows the platform to become smarter and more effective over time. It already has cut hundreds of thousands of dollars in costs associated with investigating suspect payments.

Full Article

*May Require Paid Registration

 

 

An AI Pioneer Wants His Algorithms to Understand the 'Why'
Wired
Will Knight
October 8, 2019


Yoshua Bengio, a researcher at the University of Montreal in Canada who is co-recipient of the 2018 ACM A.M. Turing Award for contributions to the development of deep learning, thinks artificial intelligence will not realize its full potential until it can move beyond pattern recognition and learn more about cause and effect, which would make existing AI systems smarter and more efficient. A robot that understands dropping things causes them to break, for example, would not need to toss dozens of vases onto the floor to see what happens to them. Bengio is developing a version of deep learning that can recognize simple cause-and-effect relationships. His team used a dataset that maps causal relationships between real-world phenomena in terms of probabilities. The resulting algorithm essentially forms a hypothesis about which variables are causally related, and then tests how changes to different variables fit the theory.
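
The cause-and-effect asymmetry Bengio is after can be shown with a toy simulation: when X causes Y, intervening on X shifts Y, but intervening on Y leaves X alone. The model below is a hypothetical illustration, not Bengio's algorithm:

```python
import random

random.seed(1)

def sample(intervene_x=None, intervene_y=None, n=2000):
    """Simulate a world where X causes Y (Y = 2X + noise),
    optionally clamping one variable to mimic an intervention."""
    xs, ys = [], []
    for _ in range(n):
        x = intervene_x if intervene_x is not None else random.gauss(0, 1)
        y = (intervene_y if intervene_y is not None
             else 2 * x + random.gauss(0, 0.1))
        xs.append(x)
        ys.append(y)
    return xs, ys

def mean(v):
    return sum(v) / len(v)

# Intervening on the cause X shifts the effect Y (mean near 6.0)...
_, y_do_x = sample(intervene_x=3.0)
# ...but intervening on the effect Y leaves the cause X untouched
# (mean near 0.0): the asymmetry reveals the causal direction.
x_do_y, _ = sample(intervene_y=3.0)
```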

Full Article

 

 

Ready, Set, Algorithms! Teams Learn AI by Racing Cars
The Wall Street Journal
Sara Castellanos
October 1, 2019


Amazon Web Services (AWS) has developed the DeepRacer League, a competition designed to teach a branch of artificial intelligence (AI) known as reinforcement learning, in which algorithms learn the correct way to perform an action based on trial and error, and observations. As part of the DeepRacer League, teams or individuals build and train AI algorithms using Amazon SageMaker software, then deploy them to self-driving model cars measuring about 10 inches long, which they race around a track roughly 17 feet by 26 feet. Morningstar is one of the companies participating in the DeepRacer League, and thanks to the training, the company expects to have dozens of projects based on reinforcement learning and other machine learning techniques in deployment by the end of 2020. AWS developed the DeepRacer program in an effort to teach software developers about machine learning in a more engaging way than reading scientific articles.
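
The trial-and-error learning DeepRacer teaches can be boiled down to tabular Q-learning: an agent refines action-value estimates from rewards it stumbles into. The toy one-dimensional "track" below is an illustration only, far simpler than the actual SageMaker training setup:

```python
import random

random.seed(0)

N_STATES, GOAL = 6, 5          # positions 0..5 on a toy straight track
ACTIONS = [-1, +1]             # steer left / steer right

def step(state, action):
    # Move along the track, clipped to its ends; reward at the goal.
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Tabular Q-learning: improve action-value estimates by trial and error.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3
for _ in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = (random.randrange(2) if random.random() < epsilon
             else max(range(2), key=lambda i: Q[state][i]))
        nxt, reward = step(state, ACTIONS[a])
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy policy after training: drive right toward the goal everywhere.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)]
```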

Full Article

*May Require Paid Registration

 

 

Using Machine Learning to Hunt Down Cybercriminals
MIT News
Adam Conner-Simons
October 8, 2019


Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory and the University of California, San Diego have developed a machine learning system to detect hijackers of Internet Protocol (IP) addresses. IP hijackers exploit a vulnerability in the Border Gateway Protocol (BGP), through which networks share routing information so that data packets arrive at the right destination. The team extracted data from network operator mailing lists and historical BGP data from the global routing table, noting the behaviors of malicious actors, then trained the model to automatically identify such patterns. The researchers flagged false positives via manual vetting. Akamai Technologies' David Plonka said, "This project could nicely complement the existing best solutions to prevent such abuse that include filtering, antispoofing, coordination via contact databases, and sharing routing policies so that other networks can validate it."

Full Article

 

 

AI Used in U.K. Job Interviews to Find Best Applicants
The Telegraph (U.K.)
Charles Hymas
September 27, 2019


Consumer goods giant Unilever is among those using artificial intelligence and facial expression technology during job interviews in the U.K. to help identify the best candidates. The companies are using the technologies to analyze the language, tone, and facial expressions of candidates when they are asked a set of identical job questions which are filmed on a mobile phone or laptop. Algorithms identify the best applicants by assessing their performances in the videos against about 25,000 facial and linguistic data points compiled from previous interviews of those who have proven to be good at the job. HireVue, the company that developed the interview system, claims it enables hiring firms to interview more candidates in the initial stage of hiring, rather than just relying on resumes.

Full Article

 

 

Language-Based Software's Accurate Predictions Translate to Benefits for Chemists
Chemistry World
Alexander Whiteside
September 30, 2019


Researchers at the University of Cambridge in the U.K. have developed a software program that can predict chemical reaction outcomes and retrosynthetic steps. The Molecular Transformer software uses a new type of neural network that is easier to train and more accurate than those that powered earlier translation-based approaches to chemistry. It is built around a transformer architecture, which makes heavy use of a mechanism called attention that lets the network learn which parts of the input are relevant to each part of the output, regardless of their positions. This reduces the amount of training needed and improves the resulting language models' accuracy. The team found that the Molecular Transformer outperformed other language-based approaches, predicting the correct reaction outcome 90% of the time.
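
The attention mechanism described here can be written out in a few lines: each output is a weighted average of values, with weights derived from query-key similarity. This generic scaled dot-product sketch is illustrative, not the Molecular Transformer's actual code:

```python
import math

def softmax(xs):
    # Numerically stable softmax: weights are positive and sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each output is a weighted sum of
    values, weighted by how well the query matches each key."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# One query attending over three key/value pairs; the query matches
# the first key most strongly, so the output leans toward values[0].
queries = [[1.0, 0.0]]
keys = [[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
values = [[10.0], [0.0], [0.0]]
out = attention(queries, keys, values)
```

Because the weights depend only on query-key similarity, not position, the mechanism picks up long-range dependencies that sequential models handle poorly.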

Full Article

 

 

AI Helps Basketball Coaches Outmaneuver Opposing Team
Science
Edd Gent
September 27, 2019


An artificial intelligence (AI)-powered computer program could allow basketball coaches to view potential maneuvers by opposing teams on a top-down virtual tactic board on the computer. Coaches sketch plays on the board, with different-colored dots representing the teams, then drag the dots representing virtual players to signal movements and passes. The AI program, trained on player movements from the National Basketball Association, renders the sketches as realistic models of offensive and defensive players' movements. The program employs a generative adversarial network in which two AI programs vie against each other, with one attempting to produce realistic player movements, as the other provides feedback on how closely these movements match actual data—with increasingly realistic plays emerging over time.

Full Article

 

dtau...@gmail.com

Jan 27, 2020, 8:07:19 AM
to ai-b...@googlegroups.com

Facebook Bans 'Deepfake' Videos
The Guardian
Alex Hern
January 7, 2020


Facebook has banned doctored videos, or deepfakes, designed to spread misinformation as the 2020 U.S. presidential election approaches. The prohibition only applies to artificial intelligence (AI)-produced content, and does not ban misleading videos made with conventional editing tools. Facebook’s Monika Bickert said the company, along with Instagram (a social networking service that it owns), will eliminate deepfakes that have been edited or synthesized in ways that are not apparent to the average person, and that would likely trick viewers into attributing words to the subject of the video they never actually spoke. Another requirement for removal is for the material to be produced by AI or machine learning that combines, substitutes, or superimposes content onto a video, making it seem authentic. Said Bickert, "Videos that don't meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages."

Full Article

 

 

White House Proposes Guidelines for Regulating Use of AI
Associated Press
Matt O'Brien
January 7, 2020


The Trump administration is proposing new federal rules for regulating the use of artificial intelligence (AI) in the private sector, but those regulations do not address concerns about a dearth of accountability as computers increasingly assume human responsibilities in high-risk settings. A White House document said agencies must weigh fairness, lack of bias, openness, transparency, safety, and security in deciding regulatory action. However, the rules will not apply to how federal agencies like law enforcement use AI. U.S. chief technology officer Michael Kratsios said he hopes these guidelines can be a model for other Western democratic institutions to protect shared values without excessive regulation.

Full Article

 

 

Dating Apps Need Women, Advertisers Need Diversity. AI Companies Offer a Solution: Fake People
The Washington Post
Drew Harwell
January 7, 2020


Artificial intelligence (AI) startups are marketing images of computer-generated faces, giving companies opportunities to create imaginary models and boost diversity in ads without the use of actual people. The software for generating images is freely available, and trained on vast datasets of real faces to reproduce their features in new configurations. AI experts are concerned this could further fraud and manipulation by allowing the construction of fictitious online personas, concealing discrimination in hiring, undermining industry diversification, and escalating distrust of the Internet. The University of California, Los Angeles' Elana Zeide said the technology “showcases how little power and knowledge users have in terms of the reality of what they see online.” Meanwhile, Valerie Emanuel of the Role Models Management talent agency said the fake photos could give rise to a monoculture of similar-looking faces.

Full Article

*May Require Paid Registration

 

 

AI Comes to the Operating Room
The New York Times
Denise Grady
January 6, 2020


Surgeons are utilizing artificial intelligence and new imaging methods to diagnose brain tumors faster than pathologists, and with similar accuracy. Traditionally, sample tissue from the brain is sent to a lab for analysis through a microscope, which can take about 30 minutes. The new technique takes less than three minutes, using lasers to create images of the sample tissue after it has been removed from the brain and a computer to read the images in the operating room. The researchers used images of tissue samples taken from 415 brain surgery patients to train an artificial intelligence system to identify the 10 most common types of brain tumor. Overall, the system misdiagnosed 14 cases that doctors got right, while doctors missed 17 cases that the computer got right. Neurosurgeon Daniel A. Orringer at NYU Langone Health said the study shows "the combination of an algorithm plus human intuition improves our ability to predict diagnosis."

Full Article

*May Require Paid Registration

 

 

AI for #MeToo: Training Algorithms to Spot Online Trolls
Caltech News
Robert Perkins
January 8, 2020


California Institute of Technology (Caltech) and Stanford University researchers have demonstrated that machine learning algorithms can track evolving online social media conversations, which could eventually yield an automated method to detect trolling. The technique is designed to overcome the ineffectiveness of current methods, which are either fully automated and non-interpretable, or reliant on a static series of keywords that can rapidly become obsolete. The researchers employed a Global Vectors for Word Representation (GloVe) model, in which the distance between two words quantifies their linguistic or semantic resemblance, while also measuring relationships between keywords to determine context. Said Caltech's Anima Anandkumar, "Hopefully, the tools we're developing now will help fight all kinds of harassment in the future."
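
In word-vector models such as Global Vectors for Word Representation (GloVe), "distance" between words is typically measured by cosine similarity. The tiny hand-made vectors below are hypothetical; real embeddings have hundreds of dimensions learned from co-occurrence statistics:

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-dimensional embeddings: semantically related words
# point in similar directions.
vectors = {
    "insult":  [0.9, 0.8, 0.1],
    "slur":    [0.8, 0.9, 0.2],
    "weather": [0.1, 0.2, 0.9],
}

def nearest(word):
    """Closest word in the toy vocabulary by cosine similarity."""
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))
```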

Full Article

 

 

DeepMind AI Beats Humans at Deciphering Damaged Ancient Greek Tablets
New Scientist
Gege Li
October 18, 2019


Yannis Assael at DeepMind and colleagues trained an artificial intelligence (AI) algorithm to guess missing words or characters from Greek inscriptions up to 2,600 years old. The Pythia algorithm learned to recognize patterns on 35,000 relics containing more than 3 million words. The patterns it identifies include the context in which different words appear, the grammar, and the shape and layout of the inscriptions. In a head-to-head test on the missing parts of 2,949 damaged inscriptions, human experts took two hours to get through 50 inscriptions, while Pythia gave its answers in seconds, and was 30% more accurate. Given an inscription with missing information, Pythia provides 20 different suggestions so experts could then select the best one.
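
Pythia itself is a neural sequence model, but the underlying idea of ranking candidate restorations by their surrounding context can be sketched with a simple character-context frequency model (an illustration only, far weaker than Pythia):

```python
from collections import Counter

def train(corpus):
    """Count (left, right) -> middle-character contexts seen in intact text."""
    counts = {}
    for text in corpus:
        for i in range(1, len(text) - 1):
            ctx = (text[i - 1], text[i + 1])
            counts.setdefault(ctx, Counter())[text[i]] += 1
    return counts

def suggest(counts, left, right, k=3):
    """Ranked guesses for a missing character between left and right,
    echoing how Pythia offers several suggestions for experts to vet."""
    ctx = (left, right)
    if ctx not in counts:
        return []
    return [ch for ch, _ in counts[ctx].most_common(k)]

# A hypothetical two-line "corpus" of intact inscriptions.
corpus = ["the gods of the city", "the people of the land"]
model = train(corpus)
# Restore the gap in "t?e": the surrounding characters suggest "h".
guesses = suggest(model, "t", "e")
```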

Full Article

 

 

Bike Designed with AI Breaks World Speed Records
Ecole Polytechnique Fédérale de Lausanne (Switzerland)
Celia Luterbacher
October 10, 2019


Ilona Peltier and Fabian Canal set the women's and men's world records for fastest human-powered vehicle at the 2019 edition of the World Human Powered Speed Challenge last month in Nevada, riding a bicycle designed using a software application developed by Neural Concept, a spin-off of Switzerland’s Ecole Polytechnique Fédérale de Lausanne Computer Vision Laboratory. The artificial intelligence-driven software uses deep learning to take a set of constraints—such as pilot height, width, and weight—to perform aerodynamic simulations. The software's algorithm learns from the data to propose the best designs within the constraints provided. Said Neural Concept's Thomas von Tschammer, "The big advantage of this technology is that it can speed up simulation processes, allowing companies to increase efficiency and decrease costs, in addition to optimization."

Full Article

 

 

Gymnastics' Latest Twist? AI Judges That See Everything
The New York Times
Andrew Keh
October 10, 2019


The gymnastics world championships in Germany, the biggest gymnastics meet outside the Olympics, for the first time used an artificial intelligence (AI) system to evaluate athletes' performance by measuring and analyzing skeletal positions, speed, and angles via three-dimensional laser sensors. International Gymnastics Federation president Morinari Watanabe envisions such robot judges eliminating human error and subjectivity from gymnastics contests; “this is a step toward the challenge of justice through technology,” Watanabe said. At the world championships, the AI system was a means for human judges to confirm scores when gymnasts either formally contested their score or the score deviated widely among judges. International Gymnastics Federation sports director Steve Butcher said all athlete information collected at the competition would be discarded at a predetermined expiration date, to address privacy concerns.

Full Article

*May Require Paid Registration

 

 

TU Delft Researchers Design New Material Using AI Only
TU Delft
October 14, 2019


Researchers at Delft University of Technology (TU Delft) in the Netherlands have designed a new material solely with artificial intelligence (AI), without conducting experimental trials. TU Delft's Miguel Bessa used machine learning to guide the manufacture of two designs at different length scales, converting brittle polymers into lightweight, recoverable, and highly compressible metamaterials. The concept involves using AI to investigate new design possibilities, while keeping experimentation to a minimum. Said Bessa, "Machine learning creates an opportunity to invert the design process by shifting from experimentally guided investigations to computationally data-driven ones, even if the computer models are missing some information. The essential requisites are that 'enough' data about the problem of interest is available, and that the data is sufficiently accurate."

Full Article

 

 

International Team Uses Deep Learning to Create Virtual ‘Super Instrument’
Southwest Research Institute
October 7, 2019


Researchers at the Southwest Research Institute (SwRI) and nine other institutions, in collaboration with the National Aeronautics and Space Administration's Frontier Development Laboratory, have developed a deep learning algorithm that analyzes ultraviolet images of the Sun and measures the energy the Sun emits as ultraviolet light. The algorithm is already in use as part of a Frontier Development Laboratory project for forecasting ionospheric disturbances. Said SwRI's Andrés Muñoz-Jaramillo, "In essence, deep learning involves sophisticated transformation of data. We can make these transformations into scientifically useful data and modernize the way we view not just the Sun, but a great number of scientific questions."

Full Article

 

dtau...@gmail.com

unread,
Feb 1, 2020, 4:14:07 PM2/1/20
to ai-b...@googlegroups.com

Tool Predicts How Fast Code Will Run on a Chip
MIT News
Rob Matheson
January 6, 2020


Massachusetts Institute of Technology (MIT) researchers have created a machine learning tool to predict processors' code-execution speed. The researchers first unveiled the Ithemal neural network model, which trains on labeled data in the form of basic blocks—segments of computing instructions—to predict how long a given chip will take to execute previously unseen blocks. The researchers then compiled BHive, a benchmark suite of basic blocks for validating performance models. Ithemal predicted how fast Intel chips would run code more accurately than a performance model built by Intel itself; the tool can help produce code that runs faster and more efficiently on increasingly diverse black box chip designs. MIT's Thirimadura Charith Mendis said, "If you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance."
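Ithemal itself is a hierarchical neural network over instruction tokens; as a hedged stand-in, the sketch below shows the same train-on-labeled-basic-blocks idea with a linear model fit by stochastic gradient descent. The opcodes, blocks, and cycle counts are all invented.

```python
# Predict a basic block's cycle count from its instruction mix.
OPCODES = ["add", "mul", "load"]

def featurize(block):
    """Count occurrences of each opcode in the block."""
    return [block.count(op) for op in OPCODES]

# Synthetic labeled data: (basic block, measured cycles), consistent
# with hidden per-opcode costs add=1, mul=3, load=5.
blocks = [
    (["add", "add", "mul"], 5.0),
    (["load", "add"], 6.0),
    (["mul", "mul", "load"], 11.0),
    (["add", "load", "load"], 11.0),
]

# Fit per-opcode costs with stochastic gradient descent.
costs = [0.0] * len(OPCODES)
for _ in range(2000):
    for block, cycles in blocks:
        x = featurize(block)
        err = sum(c * xi for c, xi in zip(costs, x)) - cycles
        costs = [c - 0.01 * err * xi for c, xi in zip(costs, x)]

# The trained model now prices a previously unseen block.
predicted = sum(c * xi for c, xi in zip(costs, featurize(["mul", "load"])))
assert abs(predicted - 8.0) < 0.1
```

The retargeting workflow Mendis describes corresponds to regenerating the labeled (block, cycles) pairs on the new architecture and refitting; no hand-built analytical model of the chip is needed.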

Full Article

 

Using AI to Enrich Digital Maps
MIT News
Rob Matheson
January 23, 2020


Researchers at the Massachusetts Institute of Technology (MIT) and the Qatar Computing Research Institute have developed an artificial intelligence model that uses satellite imagery to tag road features in digital maps. The RoadTagger model combines a convolutional neural network (CNN) and a graph neural network (GNN) to automatically predict the number of lanes and road types hidden by obstructions. The CNN digests raw satellite imagery, while the GNN segments the road into 20-meter tiles treated as graph nodes linked by lines; the CNN extracts road features for each tile, and each node shares that data with its immediate neighbors. RoadTagger analyzed occluded roads from digital maps of 20 U.S. cities, tallying lane numbers with 77% accuracy and deducing road types with 93% accuracy. MIT's Sam Madden said, "Our goal is to automate the process of generating high-quality digital maps, so they can be available in any country."
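The neighbor-sharing step is what lets evidence flow past an occlusion. The sketch below illustrates that propagation idea on a toy road graph; it is not RoadTagger's architecture, and the tile values and topology are invented.

```python
# A road modeled as a chain of 20-meter tiles; each tile carries a
# lane-count estimate extracted from imagery. Tile 2 is occluded
# (e.g., by tree cover), so it has no local estimate of its own.
features = [2.0, 2.0, None, 2.0, 2.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

def neighbors(i, edges):
    """Endpoints connected to node i in the road graph."""
    return [b if a == i else a for a, b in edges if i in (a, b)]

def impute(features, edges, rounds=5):
    """Repeatedly let tiles with no estimate adopt their neighbors' evidence."""
    feats = [f if f is not None else 0.0 for f in features]
    missing = [i for i, f in enumerate(features) if f is None]
    for _ in range(rounds):
        for i in missing:
            nbrs = [feats[j] for j in neighbors(i, edges)]
            feats[i] = sum(nbrs) / len(nbrs)
    return feats

filled = impute(features, edges)
assert abs(filled[2] - 2.0) < 1e-9  # the occluded tile now matches its road
```

The same propagation lets confident tiles on either side of an overpass or tree canopy outvote the missing imagery in between.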

Full Article

 

 

Researchers Use Deep Learning to Predict Disease-Related Mutations of Metal Binding Sites in Protein
Business Standard
December 28, 2019


Researchers at the University of Hong Kong (HKU) and the Mayo Clinic in Arizona used deep learning to forecast disease-associated mutations of metal-binding sites in proteins. The researchers found a specific mutation in the human genome that might disrupt these sites and induce severe diseases. HKU's Hongzhe Sun and colleagues determined that different metals have different disease connections. For example, mutations in calcium- and magnesium-binding sites are tied to muscular and immune system diseases, respectively, while zinc plays a key role in breast, liver, kidney, immune system, and prostate diseases. Sun said, "I believe this novel deep learning approach can be used in other projects."

Full Article

 

 

What Makes an Image Memorable? Ask a Computer
MIT News
Kim Martineau
November 1, 2019


Massachusetts Institute of Technology (MIT) researchers developed an artificial intelligence model to visualize elements that make images memorable. The new GANalyze model applies generative adversarial networks (GANs) to visualize a single image as it incrementally evolves from mediocre to memorable. Three models compose the GAN: a MemNet-based assessor that tunes memorability levels on the target image and estimates how to realize the desired effect; a transformer to execute those instructions; and a generator to produce the final image. Experiments revealed that close-ups, brightness, centering the subject in the frame, and placement within a square or circular shape are factors that heavily influence memorability. Said Lore Goetschalckx, a visiting graduate student from Katholieke Universiteit Leuven in Belgium, “The human brain evolved to focus most on these features, and that’s what the GAN picks up on.”

Full Article

 

 

Deep Learning Is Making Video Game Characters Move Like Real People
Gizmodo
Andrew Liszewski
October 31, 2019


Scientists at the University of Edinburgh in the U.K. and Adobe Research used deep learning neural networks to help digital characters in video games move more realistically. The team trained a neural network on a database of motions by a live performer on a soundstage which they recorded and digitized. The network can adapt what it learned from the database to most scenarios so characters move in natural-looking ways. The network is filling the gaps between a digital character's various poses and motions, intelligently and seamlessly stitching together these elements into a whole.

Full Article

 

 

Turning Chairs Into Tables: Novel Deep Learning Method for Unpaired Shape Transforms
SIGGRAPH Asia 2019
October 18, 2019


Researchers at Canada's Simon Fraser University, China's Shenzhen University, and Israel's Tel Aviv University have developed a deep neural network that can learn to perform natural unpaired shape transforms. The LOGAN network can automatically translate shapes between two domains and automatically execute content and style transfers without changing network architecture. LOGAN is trained on two sets of input shapes, and encodes them into a common latent space; it then uses a generative adversarial network technique for unpaired general-purpose shape transforms. Explained Simon Fraser’s Hao Zhang, “Shape transform is one of the most fundamental and frequently encountered problems in computer graphics and geometric modeling. What is new and emerging is to tie this important problem to deep learning—can a machine learn to transform shapes, particularly under the unsupervised or unpaired setting?”

Full Article

 

 

Would You Like Fries With That? McDonald's Already Knows the Answer
The New York Times
David Yaffe-Bellany
October 28, 2019


McDonald's is acquiring companies that develop artificial intelligence and machine learning to make the company more like Amazon. Its incorporation of technology is aimed at reversing its recent loss of customers, which has resulted in restaurants closing and sales declining. The company is incorporating technologies such as digital boards that promote its products, taking into account environmental factors like the weather and the length of the wait for service. The company has tested algorithms at its drive-throughs that capture license-plate numbers, so the restaurant can list recommended purchases personalized to a customer's previous orders, as long as the person agrees to allow the fast-food chain to store that data. McDonald's also recently tested voice recognition at certain outlets, with the goal of deploying a faster order-taking system. Regarding the use of new technologies, the company’s CIO, Daniel Henry, said, “You just grow to expect that in other parts of your life ... We don’t think food should be any different than what you buy on Amazon."

dtau...@gmail.com

unread,
Feb 2, 2020, 10:50:50 AM2/2/20
to ai-b...@googlegroups.com

Singapore Releases Latest AI Framework Edition to Improve Governance, Trust
ZDNet
Aimee Chanthadavong
January 22, 2020


The government of Singapore has released a second edition of the Model AI Governance Framework, which was developed to increase consumer trust in the use and governance of artificial intelligence (AI). The updated version of the framework includes considerations such as robustness and reproducibility, to make it more relevant and usable. The new edition also notes that an algorithm audit should only be conducted if it is necessary to discover the actual operations of the algorithms that make up models, and only at the request of a regulator. Said Singapore Minister for Communications and Information S. Iswaran, "The objective really is to translate these ethical principles—that it must be human-centric and responsible—and translate that into practical guidelines."

Full Article

 

 

AI System Evolves to Evade Internet Censorship
University of Maryland College of Computer, Mathematical, & Natural Sciences
November 13, 2019


University of Maryland scientists led development of an artificial intelligence system based on genetic evolution that learned to automatically evade online censorship by repressive governments. The Geneva (genetic evasion) system was tested in China, India, and Kazakhstan, and learned to exploit gaps in censors' logic and flaws that humans could not spot. When operating on a computer that is sending out Web requests via a censor, Geneva tweaks the data's fragmentation and transmission so the censor fails to identify banned content, or cannot block the connection. Geneva assembles sets of instructions from small code fragments, which follow refined evasion strategies for breaking up, configuring, or sending data packets. Said Maryland’s Dave Levin, “With Geneva, we are, for the first time, at a major advantage in the censorship arms race.”
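The genetic-evolution loop behind this approach can be sketched in a few lines. Everything below is invented for illustration: the primitives are loosely modeled on the kinds of packet manipulations Geneva composes, and the "censor" is a toy rule standing in for a real censor's flawed logic.

```python
import random

random.seed(0)

# Toy packet-manipulation building blocks (names illustrative only).
PRIMITIVES = ["fragment", "duplicate", "corrupt_checksum", "noop"]

def toy_censor_blocks(strategy):
    """Toy censor: it fails to block only if the payload is fragmented
    AND a checksum is corrupted -- a made-up exploitable gap in its logic."""
    return not ("fragment" in strategy and "corrupt_checksum" in strategy)

def fitness(strategy):
    """Reward strategies that slip past the censor."""
    return 1.0 if not toy_censor_blocks(strategy) else 0.0

def evolve(pop_size=30, generations=20, length=4):
    # Start from random strategies, then repeat selection and mutation.
    pop = [[random.choice(PRIMITIVES) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                 # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(length)] = random.choice(PRIMITIVES)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
assert not toy_censor_blocks(best)  # the evolved strategy evades the toy censor
```

The evolutionary search needs no model of the censor's internals; it only observes whether a connection survives, which is why it can surface logic gaps humans have not spotted.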

Full Article

 

 

Advanced Technology May Indicate How Brain Learns Faces
University of Texas at Dallas
Stephen Fontenot
December 9, 2019


Researchers at the University of Texas at Dallas (UT Dallas) have demonstrated that deep convolutional neural networks (DCNNs) operate similarly to the way human brains do, in terms of identifying faces. “For the last 30 years, people have presumed that computer-based visual systems get rid of all the image-specific information — angle, lighting, expression and so on,” said Alice O'Toole at UT Dallas. Previous-generation algorithms were effective in recognizing faces that had only minor changes from images they already knew. However, current technology knows an identity well enough to recognize faces despite changes in expression, viewpoint, or appearance. For example, the researchers found the DCNN excelled at connecting caricatures to their corresponding identities. Said O'Toole, "Given these distorted images with features out of proportion, the network understands that these are the same features that make an identity distinctive and correctly connects the caricature to the identity."

Full Article

 

 

How to Train Computers Faster for 'Extreme' Datasets
Futurity.org
Mike Williams
December 12, 2019


Rice University researchers have developed a new approach for training computers to address extreme classification problems and accelerate the machine learning process. Rice's Anshumali Shrivastava and Tharun Medini applied their merged-average classifiers via hashing (MACH) technique to an Amazon search dataset of about 70 million queries and over 49 million products, which only demanded a fraction of the training resources of state-of-the-art commercial systems. Shrivastava said MACH speeds up training as much as 10-fold, with a two- to four-times-smaller memory footprint than the optimal baseline performances of previously disclosed large-scale, distributed deep learning systems. MACH randomly sorts data into separate classes, and a classifier only has to map a search to one of the classes, rather than to the data itself. Medini said MACH needs no communication between parallel processors, a crucial advantage for extreme classification.
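MACH's trick, hashing a huge label space into a few "meta-classes" and merging scores across several independent hash tables, can be sketched directly. The labels and scores below are invented, and hand-made score tables stand in for the small trained classifiers.

```python
import hashlib

LABELS = ["laptop", "phone", "toaster", "blender", "camera"]
R, B = 3, 2  # R independent hash tables, each with B buckets (meta-classes)

def bucket(label, r):
    """Deterministically hash a label into one of B buckets for table r."""
    digest = hashlib.md5(f"{r}:{label}".encode()).hexdigest()
    return int(digest, 16) % B

def merged_score(label, bucket_scores):
    """MACH's merged-average decoding: average, over the R tables,
    the score of the bucket this label hashes into."""
    return sum(bucket_scores[r][bucket(label, r)] for r in range(R)) / R

# Pretend each of the R small classifiers is confident in the bucket
# that the true label ("phone") falls into for its table.
bucket_scores = []
for r in range(R):
    scores = {b: 0.1 for b in range(B)}
    scores[bucket("phone", r)] = 0.9
    bucket_scores.append(scores)

assert abs(merged_score("phone", bucket_scores) - 0.9) < 1e-9
```

Another label can match this score only by colliding with "phone" in every one of the R tables, which becomes exponentially unlikely as R grows; meanwhile each classifier only ever has to distinguish B meta-classes instead of tens of millions of products, and each table can be trained on a separate processor with no communication.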

Full Article

 

 

Etalumis 'Reverses' Simulations to Reveal New Science
Berkeley Lab News Center
Keri Troutman
November 12, 2019


An international team of researchers tapped Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) to create the first probabilistic programming framework for controlling existing simulators operating on high-performance computing platforms. The Etalumis system offers a way to reverse-engineer simulations to better interpret the experimental data fed into these models and gain scientific knowledge. Etalumis executes Bayesian inference, essentially inverting the simulator to predict input parameters from observations. Atilim Gunes Baydin at the U.K.'s University of Oxford said, "This means the simulator is no longer used as a black box to generate synthetic training data, but as an interpretable probabilistic generative model that the simulator's code already specifies, in which we can perform inference." The researchers applied Etalumis to data from the Large Hadron Collider at CERN, reducing the time it takes to train a complex dynamic neural network on NERSC's Cori supercomputer from months to minutes.
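The "inverting a simulator" idea can be illustrated with the simplest possible inference scheme, rejection sampling; Etalumis does this vastly more efficiently with probabilistic programming and learned inference networks. The simulator and all numbers below are invented.

```python
import random

random.seed(1)

def simulator(p):
    """Toy forward model: hidden parameter plus Gaussian noise."""
    return p + random.gauss(0, 0.1)

observed = 0.7  # the measurement we want to explain

# Invert the simulator: keep prior draws whose simulated output lands
# near the observation, yielding samples from the posterior over p.
accepted = []
while len(accepted) < 500:
    p = random.uniform(0, 1)  # prior over the hidden parameter
    if abs(simulator(p) - observed) < 0.05:
        accepted.append(p)

posterior_mean = sum(accepted) / len(accepted)
assert abs(posterior_mean - observed) < 0.1
```

Plain rejection sampling wastes most simulator calls, which is exactly the inefficiency Etalumis attacks by steering the simulator's random choices toward outcomes consistent with the data.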

Full Article

 

 

Researchers Criticize AI Software That Predicts Emotions
Reuters
Joseph Menn; Jeffrey Dastin
December 12, 2019


The AI Now Institute at New York University has urged a ban on artificial intelligence (AI) that automatically analyzes facial expressions to influence hiring and other decisions. The Institute cited an academic review of studies on how people interpret moods from facial expressions, which concluded that such perceptions are unreliable, given that emotional communication widely varies across cultures, scenarios, and individuals in a single situation. The Institute said action against such software-driven “affect recognition” was its top priority because science does not justify the technology's use. Institute founders Kate Crawford and Meredith Whittaker warn of the spread of damaging applications of AI despite broad consensus on underlying ethical principles, due to a lack of consequences for violating those principles.

Full Article

 

 

AI to Run Chemical Factories of the Future
Illinois News Bureau
Lois Yoksoulian
November 13, 2019


University of Illinois (U of I) researchers designed an artificial intelligence-driven automated biomanufacturing platform, called BioAutomata, to produce the pigment lycopene. The researchers said such a tool must choose from among thousands of experimental iterations to achieve true chemical factory automation. BioAutomata completed two rounds of fully automated assembly and optimization of the lycopene production pathway, covering design and construction of the pathways, transfer of DNA-encoding pathways into host cells, cell growth, and extraction and quantification of lycopene production. Said U of I’s Huimin Zhao, “A hundred years ago, people built cars by hand. Now, that process is much more economical and efficient thanks to automation, and we imagine the same for biomanufacturing of chemicals and materials.”

Full Article

 

 

Pentagon Advisory Board Releases Principles for Ethical Use of AI in Warfare
The Washington Post
Aaron Gregg
November 1, 2019


The U.S. Defense Innovation Board has published a set of ethical principles for how military agencies should design weapons enabled by artificial intelligence (AI) and apply them on the battlefield. However, the board's recommendations are not legally binding, and it is now up to the Pentagon to determine how and whether to proceed with them. The recommendations pertained mostly to broadly defined goals such as "formalizing these principles" or "cultivating the field of AI engineering." Other recommendations included setting up a steering committee or a set of workforce training programs. The document specified that AI systems should be equitable, traceable, reliable, and governable.

Full Article

 

 

Data Science Dominates LinkedIn's Emerging Jobs Ranking
ZDNet
Larry Dignan
December 10, 2019


A study by online employment site LinkedIn found employers increasingly prize data science and related skills among workers. The top-ranked emerging job in the study was artificial intelligence (AI) specialist, which requires proficiency in machine learning, deep learning, TensorFlow, Python coding language, and natural language processing. LinkedIn said hiring of AI specialists grew 74% from 2015 to this year, with industries seeking such skills including computer software, Internet, information technology, higher education, and consumer electronics. Robotics engineer and data scientist ranked second and third among emerging job titles identified in the study, while full stack engineer and site reliability engineer held fourth and fifth places, respectively.

Full Article

 

 

Rainforest Preservation Through Machine Learning
ETH Zurich
Florian Meyer
December 5, 2019


David Dao at the Swiss Federal Institute of Technology (ETH) Zurich's Institute for Computing Platforms has developed intelligent machine learning algorithms to predict likely areas of rainforest deforestation based on satellite and drone images. The algorithms interpret individual image sequences in order to identify forested regions and their possible shrinkage. Comparison of chronologically sequential aerial views enables the algorithms to map out changes in road systems and forest coverage over time. The algorithms do not require labels to produce images that indicate rainforest contraction, and they also can forecast where the worst deforestation will next appear. Dao's work will be utilized for a pilot project starting next month in Chile.

Full Article

 

dtau...@gmail.com

unread,
Feb 2, 2020, 8:20:55 PM2/2/20
to ai-b...@googlegroups.com

AI Epidemiologist Sent First Warnings of Wuhan Virus
Wired
Eric Niiler
January 25, 2020


Canadian health monitoring platform BlueDot was the first organization to alert its customers to the outbreak of coronavirus, on Dec. 31. BlueDot’s algorithm analyzes foreign-language news reports, animal and plant disease networks, and governmental information in order to warn its clients to avoid dangerous regions where disease outbreaks appear to be occurring. The algorithm also considers global airline ticketing data, to help predict where and when infected residents are traveling. BlueDot correctly predicted the coronavirus would jump from Wuhan, China, to Bangkok, Seoul, Taipei, and Tokyo in the days following its initial appearance. Said BlueDot CEO Kamran Khan, “What we have done is use natural language processing and machine learning to train this engine to recognize whether this is an outbreak of anthrax in Mongolia versus a reunion of the heavy metal band Anthrax.”

Full Article

 

 

Deep Neural Networks Coming to Your Phone
News@Northeastern
Laura Castanon
January 28, 2020


Northeastern University researchers have developed a technique to run deep neural networks on smartphones or similar systems, completing tasks up to 56 times faster than previously achieved while maintaining accuracy. Northeastern's Yanzhi Wang and colleagues reduced the size of the network model and automatically generated code to improve operational efficiency, which could enable the deployment of deep neural networks in off-the-shelf devices that may lack consistent Internet access. Wang suggested local data processing that does not require transmission to remote servers could boost people's comfort with artificial intelligence-powered devices. Said Wang, "Previously, people believed that deep learning needed dedicated chips, or could only be run on servers over the cloud. This kind of assumption of knowledge limits the application of the deep learning."

Full Article

 

 

Illinois Law Gives Candidates Rights to Understand How Employers Use Algorithms to Rate Job Interviewees
The Philadelphia Inquirer
Abdel Jimenez
January 28, 2020


A new law in Illinois aims to give job candidates a better understanding of how companies use artificial intelligence (AI) to screen video interviews. The AI analyzes applicants' facial expressions, tone, and language, and many underlying algorithms are not explained by their designers; critics say this leads to biased assessments. Under the Illinois Artificial Intelligence Video Interview Act, employers must inform applicants when AI will be used for this purpose, and explain the algorithms' workings. Companies must secure permission from applicants before using such AI, and cannot share submitted videos except with "persons whose expertise or technology" are necessary to screen interviewees. Applicants also can request the videos' destruction, and employers have 30 days to comply with requests.

Full Article

 

 

Team Designs Machine Learning Algorithms to Offer Safety, Fairness Guarantees
University of Massachusetts Amherst
November 21, 2019


Researchers at the University of Massachusetts Amherst, Stanford University, and the Federal University of Rio Grande do Sul in Brazil have developed a framework for designing machine learning algorithms that make it easier for users of the algorithm to specify safety and fairness constraints. The team tested the framework by using it to create a Seldonian algorithm with constraints to predict grade point averages in a data set of 43,000 students in Brazil; the algorithm successfully avoided several types of gender bias. Said Amherst’s Philip Thomas, “The framework is a tool for the machine learning researcher. It guides them toward creating algorithms that are easier for users to apply responsibly to real world problems.”
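The framework's signature behavior is that a trained model is returned only if a user-specified constraint passes a safety test on held-out data; otherwise the algorithm reports no solution. A minimal sketch of that pattern, with an invented model, dataset, and fairness constraint:

```python
def train_seldonian(train, safety, epsilon=0.5):
    """Return a model only if it passes the safety test; else None."""
    # Candidate selection: a trivial model that predicts the global mean.
    mean = sum(y for _, y in train) / len(train)
    model = lambda features: mean

    # Safety test: mean absolute error must be similar for groups A and B.
    def group_mae(group):
        ys = [y for g, y in safety if g == group]
        return sum(abs(mean - y) for y in ys) / len(ys)

    if abs(group_mae("A") - group_mae("B")) > epsilon:
        return None  # "No Solution Found": refuse rather than risk unfairness
    return model

# Invented data: (group, outcome) pairs, split into training and safety sets.
train_data = [("A", 2.8), ("B", 3.0), ("A", 3.2), ("B", 3.0)]
safety_data = [("A", 2.9), ("A", 3.1), ("B", 2.8), ("B", 3.2)]
model = train_seldonian(train_data, safety_data)
assert model is not None  # constraint satisfied, so a model is returned
```

The key design choice is that the user supplies only the constraint and the tolerance; the decision of whether deployment is safe is made by the algorithm on data the candidate model never saw.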

Full Article

 

Commentary: AI Could Bring About Changes In Workforce, Economic Equity

In a New York Times (1/30) op-ed, David Deming, director of the Malcolm Wiener Center for Social Policy at the Harvard Kennedy School, writes about the looming impact of AI taking over “much of the work done by humans,” saying “the danger we are facing isn’t really about technology. It’s about politics and economic fairness.” Deming writes that the impact of AI technology on workers is sometimes positive, in that it creates new jobs. AI “will create many new opportunities, including some that we cannot foresee. And it will have other benefits, too, such as increased safety and reduced environmental impact. Still, we shouldn’t be Pollyannas about artificial intelligence.”

 

Neural Network Fills In Data Gaps for Spatial Analysis of Chromosomes
Carnegie Mellon News
November 7, 2019


Researchers at Carnegie Mellon University have developed a machine learning method that fills in missing pixels in low-quality images or video in order to help scientists determine how DNA is organized in a cell. The researchers used an artificial neural network called a denoising autoencoder to help fill in the gaps in less-than-complete Hi-C datasets. The autoencoder supplies missing pixels by learning the types of pixels typically found together and making its best guess. The team trained the autoencoder on the dataset for GM12878 to recognize which sequences of DNA pairs from different chromosomes typically interact with each other in the cell nucleus. The method proved successful in identifying subcompartments in eight cell lines whose interchromosomal interactions were only partially known.

Full Article

 

Machine Learning Advances Tool to Fight Cybercrime in the Cloud
Purdue University News
Chris Adam
November 5, 2019


Purdue University researchers used machine learning to develop a cloud forensic model that collects digital evidence associated with illegal activities in cloud storage applications. The system deploys deep learning models to classify child exploitation, illegal drug trafficking, and illegal firearms transactions uploaded to cloud storage applications, and to automatically report detection of any such illegal activities via a forensic evidence collection system. The researchers tested the system on more than 1,500 images, and found that the model accurately classified an image about 96% of the time. Said Purdue’s Fahad Salamh, "It is important to automate the process of digital forensic and incident response in order to cope with advanced technology and sophisticated hiding techniques and to reduce the mass storage of digital evidence on cases involving cloud storage applications."

Full Article

 

 

We Teach AI Systems Everything, Including our Biases
The New York Times
Cade Metz
November 11, 2019


Artificial intelligence reflects prejudices inherent in the information it is trained on, potentially bringing old attitudes into new technology. For example, Google's BERT (Bidirectional Encoder Representations from Transformers), a universal language model, is fed massive amounts of digitized text likely encompassing centuries of biases, leading it to consider computer programming an exclusively male profession and to give women little credit. Computer scientist Robert Munro analyzed Google and Amazon Web Services' cloud-computing services designed to help businesses add language skills to new apps; the services identified the word "his" as a pronoun, but did not do the same for "hers." Said Munro, “This is the same historical inequity we have always seen. Now, with something like BERT, this bias can continue to perpetuate.”

dtau...@gmail.com

unread,
Feb 8, 2020, 5:34:43 PM2/8/20
to ai-b...@googlegroups.com

TextFooler Generates Adversarial Text to Strengthen Natural Language Models
VentureBeat
Kyle Wiggers
February 7, 2020


Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Hong Kong, and Singapore’s Agency for Science, Technology and Research have developed a framework for creating adversarial text examples that can deceive natural language processing (NLP) systems. MIT's Di Jin said the TextFooler framework demonstrates potentially disastrous scenarios for compromising critical online systems based on text classification models, including voice assistants and email spam filters. Jin acknowledged that TextFooler can be used to attack any NLP model to gauge its robustness, adding that "the generated adversaries can be used to improve the robustness and generalization of deep learning models via adversarial training, which is a critical direction of this work."
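The attack pattern, minimally perturbing text until a classifier's decision flips, can be sketched with a greedy synonym swap. The classifier, keyword list, and synonym table below are all invented toys; TextFooler itself ranks words by importance and chooses replacements using embedding-similarity checks.

```python
SPAM_KEYWORDS = {"free", "winner", "prize"}

def toy_spam_classifier(words):
    """Toy model: flags a message as spam if it has 2+ trigger words."""
    return sum(w in SPAM_KEYWORDS for w in words) >= 2

# Hand-made, meaning-preserving substitutions for the trigger words.
SYNONYMS = {"free": "complimentary", "winner": "victor", "prize": "award"}

def attack(words):
    """Greedily swap synonyms until the classifier's label flips."""
    words = list(words)
    for i, w in enumerate(words):
        if not toy_spam_classifier(words):
            break  # label already flipped: perturb no further
        if w in SYNONYMS:
            words[i] = SYNONYMS[w]
    return words

original = ["you", "are", "a", "winner", "claim", "your", "free", "prize"]
adversarial = attack(original)
assert toy_spam_classifier(original)
assert not toy_spam_classifier(adversarial)
```

The adversarial message reads the same to a human but no longer trips the classifier, which is exactly why such examples are valuable for adversarial training: retraining on them forces the model past brittle surface cues.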

Full Article

 

 

AI-Created Medicine to Be Used on Humans for First Time
BBC News
Jane Wakefield
January 30, 2020


A group of researchers from U.K.-based drug-discovery startup Exscientia and Japan’s Sumitomo Dainippon Pharma used artificial intelligence (AI) to create a drug molecule that will be used in human trials. The drug was designed to treat patients with obsessive-compulsive disorder (OCD). The AI-designed drug took 12 months to develop, compared with the five years traditional drug development typically takes to reach the trial stage. The molecule, DSP-1181, was created using algorithms that analyzed potential compounds and checked them against a database of parameters. Said Exscientia's Andrew Hopkins, "This year was the first to have an AI-designed drug, but by the end of the decade, all new drugs could potentially be created by AI."

Full Article

 

 

Jigsaw Tool to Help Journalists Spot Doctored Images
The New York Times
Davey Alba
February 4, 2020


Technology incubator Jigsaw, owned by Google parent Alphabet, has released a free tool designed to help journalists identify doctored photos, including those generated through the use of artificial intelligence. More than a dozen news and fact-checking organizations worldwide are testing the Assembler tool, which is not intended for public use. Journalists feed images into Assembler, which can spot photo-manipulation techniques via seven "detectors," and highlight areas where traces of manipulation may linger. Jigsaw also announced an interactive platform documenting coordinated disinformation campaigns from the past decade, in the hope of cultivating a taxonomy for media outlets and groups studying such disinformation techniques.

Full Article

*May Require Paid Registration

 

 

Improving AI's Ability to Identify Students Who Need Help
NC State News
Matt Shipman
February 5, 2020


Researchers at North Carolina State University's Center for Educational Informatics have developed an artificial intelligence (AI) model to predict students' absorption of knowledge via educational gameplay. The model uses multi-task learning, in which it is trained to perform multiple tasks at once: here, forecasting whether students will answer each question on a test correctly, based on their game behavior. The AI was assigned to learn 17 tasks corresponding to the test's 17 questions. The model studies each student's gameplay and question-answering pattern on the test's first question, and identifies common behaviors of students who answered the question correctly or incorrectly to ascertain how new students will answer; it simultaneously performs this function for all questions. The multi-task model is about 10% more accurate than models dependent on conventional AI training, and the researchers think the AI could help flag students who may need additional instruction.
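The multi-task setup can be sketched as one shared representation of a student's gameplay feeding 17 per-question output heads, with the training objective summing the per-question losses. The features, heads, and numbers below are invented.

```python
import math

NUM_QUESTIONS = 17

def shared_features(gameplay):
    """Toy shared representation: two summary statistics of game events."""
    return [sum(gameplay) / len(gameplay), max(gameplay)]

def head(weights, feats):
    """Per-question head: logistic probability of a correct answer."""
    z = sum(w * f for w, f in zip(weights, feats))
    return 1 / (1 + math.exp(-z))

def multitask_loss(all_weights, gameplay, answers):
    """Sum of the 17 per-question cross-entropies, all computed from
    the SAME shared features -- the essence of the multi-task setup."""
    feats = shared_features(gameplay)
    total = 0.0
    for weights, y in zip(all_weights, answers):
        p = head(weights, feats)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total

# With untrained (zero-weight) heads every probability is 0.5, so the
# loss is 17 * ln(2); training would push it down jointly across tasks.
loss0 = multitask_loss([[0.0, 0.0]] * NUM_QUESTIONS, [1, 2, 3], [1] * NUM_QUESTIONS)
assert abs(loss0 - NUM_QUESTIONS * math.log(2)) < 1e-9
```

Because every head shares the same gameplay representation, evidence useful for one question improves predictions for the others, which is where the roughly 10% accuracy gain over single-task training comes from.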

Full Article

 

 

AI is Being Used to Select Embryos for Women Undergoing IVF
New Scientist
Donna Lu
January 29, 2020


Harrison.ai, a tech firm based in Sydney, Australia, has developed an artificial intelligence (AI) algorithm that is being used during in vitro fertilization to select embryos with the highest chance of resulting in a successful pregnancy. The Ivy algorithm analyzes time-lapse videos of embryos as they are incubated after being fertilized. It was trained on more than 10,000 videos of embryos growing inside an incubator for five days, in combination with data about which embryos resulted in pregnancy. “The results look very promising,” said Cornell University's Iman Hajirasouliha, who along with colleagues has developed a similar tool, based on videos of 12,000 embryos on day five post-fertilization.

Full Article

 

 

Can a Stealth T-Shirt Foil Digital Spies?
The Boston Globe
Hiawatha Bray
November 25, 2019


Researchers at Northeastern University, the Massachusetts Institute of Technology, and IBM have designed stealth apparel that renders the wearer invisible to spying computers. The adversarial T-shirt sports patterns that confuse artificial intelligence (AI) systems so they cannot identify the wearer as a human being. The shirt exploits the way AI vision systems interpret pixels; adding carefully crafted pixel patterns can fool them. Northeastern's Xue Lin and Kaidi Xu engineered the shirt to trick the Yolo object-recognition algorithm, which failed to identify the shirt's wearers as people 63% of the time. Said Lin, “We try to explore the vulnerability of these neural networks, and hopefully, we can fix this problem.”
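The underlying idea, crafting an input perturbation that flips a model's decision, can be shown on a toy linear classifier. Everything below (the weights, features, and step size) is invented for illustration; the real attack optimizes a printable fabric pattern against a deep network such as Yolo:

```python
# Toy linear "person detector": score > 0 means "person detected".
weights = [0.8, -0.5, 0.3, 0.9]
x = [1.0, 0.2, 0.5, 1.1]  # hypothetical image features, detected as a person

def score(features):
    return sum(w * f for w, f in zip(weights, features))

assert score(x) > 0  # clean input is classified as a person

# FGSM-style step: nudge each feature against the sign of its gradient
# (for a linear model the gradient of the score is just the weight vector).
eps = 1.5
x_adv = [f - eps * (1 if w > 0 else -1) for f, w in zip(x, weights)]

assert score(x_adv) < 0  # the perturbed input is no longer "a person"
```

In the T-shirt's case the perturbation must also survive real-world distortions (folds, lighting, camera angle), which is what makes a wearable attack much harder than this flat example.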

Full Article

*May Require Free Registration

 

 

Using Computers to View the Unseen
MIT News
Rachel Gordon
December 6, 2019


Scientists at the Massachusetts Institute of Technology (MIT)'s Computer Science and Artificial Intelligence Laboratory have developed a technique for reconstructing scenes outside the line of sight without special gear. The passive method can reconstruct concealed images from subtle shadows and reflections. An algorithm models how shadows and scene geometry interact to predict the path light travels through a scene, then infers what is unseen from the observed shadows, building a silhouette of a live-action performance. The algorithm concurrently trains two neural networks specialized for a single target: one estimates the scene's light-scrambling pattern, while the other estimates the hidden video. The networks are rewarded when the two factors combine to reproduce the video recorded from the clutter, driving them to explain their observations with plausible concealed imagery.

Full Article

 

 

China Issues Rules to Clamp Down on Deepfake Technologies
South China Morning Post (Hong Kong)
Meng Jing
November 30, 2019


China has issued new rules banning online video and audio providers from generating, distributing, and broadcasting fake news as it cracks down on deepfake technologies. This follows California's introduction of legislation to outlaw political deepfakes, and the European Union's proposed strategy for probing such online disinformation. China's rules are broader, prohibiting not only the use of deep learning and related technologies to produce fake political news, but also the use of virtual reality and deep learning to produce fake news of any kind. The rules mandate that providers and users of online video news and audio information services, among other things, use technology to determine whether audio and video news content has been manufactured or potentially doctored.

Full Article

 

dtau...@gmail.com

Feb 16, 2020, 3:22:47 PM2/16/20
to ai-b...@googlegroups.com

AI Shortcuts Speed Simulations Billions of Times
Science
Matthew Hutson
February 12, 2020


Researchers led by the University of Oxford in the U.K. used artificial intelligence to generate accurate machine learning emulator algorithms that accelerate simulations billions of times, across scientific disciplines. The neural network-based emulators absorb the inputs and outputs of a full simulation, seeking patterns and learning to guess what the model would do with new inputs, avoiding the need to run the full simulation repeatedly. The Deep Emulator Network Search (DENSE) method randomly inserts computation layers between network inputs and outputs and trains the system with limited data, so added layers that improve performance are more likely to survive into future variations. DENSE-produced emulators for 10 simulations in physics, astronomy, geology, and climate science ran 100,000 to 2 billion times faster than the full models when accelerated with specialized graphics processing chips, and remained highly accurate.
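DENSE trains neural networks; as a stdlib-only illustration of the general emulator idea (sample an expensive simulation once, then answer new queries cheaply from the stored mapping), here is a lookup-table surrogate for an invented one-dimensional "simulation":

```python
import bisect

def expensive_simulation(x):
    """Stand-in for a costly physics model (purely illustrative)."""
    return x ** 3 - 2.0 * x

# Offline phase: run the expensive simulation once on a grid of inputs.
grid = [i / 100.0 for i in range(-300, 301)]
table = [expensive_simulation(x) for x in grid]

def emulator(x):
    """Cheap surrogate: linear interpolation over the precomputed samples."""
    i = bisect.bisect_left(grid, x)
    i = min(max(i, 1), len(grid) - 1)
    x0, x1 = grid[i - 1], grid[i]
    t = (x - x0) / (x1 - x0)
    return table[i - 1] * (1 - t) + table[i] * t

# The emulator answers new queries without rerunning the simulation,
# and stays close to the true model on this smooth function.
for x in [-1.234, 0.5, 2.718]:
    assert abs(emulator(x) - expensive_simulation(x)) < 1e-3
```

A neural emulator like DENSE plays the same role but generalizes to high-dimensional inputs, where a lookup table would be infeasible.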

Full Article

 

 

Tool Spots Slowdown from Software Updates
Texas A&M Today
Vandana Suresh; Stephanie Jones
February 11, 2020


Texas A&M University researchers and Intel Labs computer scientists have developed an automated tool to track down the source of computer errors induced by software updates. Their deep learning algorithm diagnoses performance regressions resulting from such updates in a manner compatible with many software and programming languages. The algorithm simultaneously monitors data from numerous performance counters by reducing the size of the data, and seeks abnormal patterns indicative of errors. The researchers think the tool could be applied to the development of autonomous vehicle technology. Texas A&M's Abdullah Muzahid said, "Self-driving cars must be able to detect whether a car or a human is in front of it and then act accordingly. So, it's again a form of anomaly detection and ... that is what our algorithm is already designed to do."
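The core anomaly-detection idea, comparing performance-counter readings after an update against a pre-update baseline, can be sketched with a simple statistical test. The deep learning model in the article is far more sophisticated; the counter names, sample values, and threshold below are invented:

```python
import statistics

def detect_regressions(baseline, current, threshold=3.0):
    """Flag counters whose post-update readings deviate sharply from baseline.

    baseline / current: dicts mapping counter name -> list of sampled values.
    Returns counters whose current mean lies more than `threshold` baseline
    standard deviations from the baseline mean.
    """
    flagged = []
    for name, samples in baseline.items():
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples) or 1e-9  # guard against zero spread
        cur = statistics.mean(current[name])
        if abs(cur - mu) / sigma > threshold:
            flagged.append(name)
    return flagged

baseline = {"cache_misses": [100, 102, 98, 101],
            "branch_mispred": [50, 49, 51, 50]}
current = {"cache_misses": [180, 175, 182, 178],   # regressed after update
           "branch_mispred": [50, 51, 49, 50]}     # unchanged

assert detect_regressions(baseline, current) == ["cache_misses"]
```

Real performance counters are noisy and correlated, which is why the researchers reduce the data's dimensionality and learn normal patterns rather than thresholding each counter independently as this sketch does.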

Full Article

 

 

Deep Learning Accurately Forecasts Heat Waves, Cold Spells
Rice University
Jade Boyd
February 4, 2020


Rice University engineers have developed a computer system that used deep learning to teach itself to accurately predict extreme weather events like heat waves and cold spells, based on a minimal amount of weather data. Rice's capsule neural network studied hundreds of pairs of maps during training, with each map displaying surface temperatures and air pressures at five-kilometer elevations, several days apart. After training, the network could examine previously unseen maps and produce five-day extreme weather projections with 85% accuracy. The researchers think the network eventually could function as an early warning system for weather forecasters, as well as a tool for learning more about the atmospheric precursors of extreme weather.

Full Article

 

 

Deep Learning Model Can Identify Sleep Stages
University of Eastern Finland
February 3, 2020


Researchers at the University of Eastern Finland have developed a deep learning model that can identify sleep stages as accurately as an experienced physician. Using polysomnographic recording data from both healthy people and those with suspected obstructive sleep apnea (OSA), the researchers found that in healthy individuals, the system identified sleep stages with 83.7% accuracy when using a single frontal electroencephalography (EEG) channel, and with 83.9% accuracy when the EEG was supplemented with electrooculography (EOG). In patients with suspected OSA, the model reached accuracies of 82.9% for a single EEG channel and 83.8% for combined EEG and EOG channels.

Full Article

 

 

White House Earmarks New Money for AI, Quantum Computing
The New York Times
Cade Metz
February 10, 2020


White House officials have proposed boosting federal funding for developing artificial intelligence (AI) and quantum computing, which defense officials believe will be critical to future national security. The plan would raise funding for AI research at the U.S. Defense Advanced Research Projects Agency from $50 million to $249 million, and at the National Science Foundation (NSF) from about $500 million to $850 million. The Trump administration also pledged to double funding for non-Defense Department AI and quantum computing research by 2022. The NSF plans to use $50 million of the new funding to train AI experts, while the Department of Energy hopes to use $25 million to build a national "quantum Internet" connecting its 17 research laboratories.

Full Article

*May Require Paid Registration

 

 

In the Eternal Quest to Decode Fedspeak, Here Come the Computers
The Wall Street Journal
James Hookway
February 9, 2020


Investors are using artificial intelligence to decode the purposely vague or baffling jargon used by central banks and to predict their financial policies. The Royal Bank of Canada's Peter Schaffrik co-developed the ECB-O-Meter algorithm, which scans speeches by European Central Bank leaders to predict potential future strategies. Meanwhile, U.S.-based Prattle Analytics tracks the U.S. Federal Reserve and other central banks by aggregating keywords for each bank, then applying machine learning to monitor asset prices when officials use those words in public comments. University of Tokyo programmers algorithmically analyzed facial expressions at televised news conferences for hints of future policy, noting that Bank of Japan Governor Haruhiko Kuroda's looks of anger and disgust foreshadowed the bank's rollout of negative interest rates in 2016. Some developers expect machine models to overtake humans, although Schaffrik said humans must occasionally step in to feed algorithms new data.
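A drastically simplified version of keyword-based Fedspeak scoring might look like the sketch below. The lexicon and its weights are invented for illustration; real systems such as Prattle's learn word weights from historical market reactions rather than hand-assigning them:

```python
# Hypothetical hawkish/dovish keyword weights (positive = hawkish tilt).
LEXICON = {
    "inflation": +1.0, "tightening": +1.5, "overheating": +1.0,
    "accommodative": -1.5, "stimulus": -1.0, "downside": -0.5,
}

def policy_score(speech):
    """Crude Fedspeak score: positive suggests hawkish, negative dovish."""
    words = speech.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(w, 0.0) for w in words)

hawkish = ("Inflation pressures warrant tightening before the "
           "economy risks overheating.")
dovish = ("Accommodative policy and continued stimulus remain "
          "appropriate given downside risks.")

assert policy_score(hawkish) > 0 > policy_score(dovish)
```

The hard part in practice is not the scoring but the lexicon: central bankers choose their words precisely because small shifts in phrasing carry policy signals, so the weights must be learned from data rather than guessed.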
