Dr. T's AI brief


dtau...@gmail.com

Feb 29, 2020, 6:07:42 PM
to ai-b...@googlegroups.com

How to Reduce Bias in AI? Selective Amnesia.
USC Viterbi School of Engineering
Rishbha Bhagi
February 24, 2020


Artificial intelligence (AI) researchers at the University of Southern California (USC) Viterbi School of Engineering’s Information Sciences Institute have created a mechanism for inducing selective amnesia in computing models. This adversarial forgetting methodology could help reduce bias in AI by teaching deep learning models to ignore unwanted data factors. The mechanism is used to train a neural network to represent all underlying aspects of the data being analyzed, and then to forget specified biases, resulting in models that lack those biases when making decisions. Adversarial forgetting also could enhance content generation. USC's Greg Ver Steeg said, "For content generation to succeed, we need new ways to control and manipulate neural network representations and the forgetting mechanism could be a way of doing that."
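USC's method trains the forgetting adversarially inside a deep network; as a much simpler deterministic stand-in for the same idea, the toy sketch below (all data, names, and parameters are invented for illustration, not USC's implementation) fits a linear "probe" for an unwanted attribute and then projects that direction out of the representation, so the attribute becomes unreadable while the task signal survives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 carries the task signal, feature 1 carries an
# unwanted "bias" attribute. Labels in {-1, +1}.
X = rng.normal(size=(500, 2))
y_task = np.sign(X[:, 0])
y_bias = np.sign(X[:, 1])

def probe_accuracy(H, y):
    """Fit a least-squares linear probe on H and report its accuracy on y."""
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean(np.sign(H @ w) == y), w

# Before forgetting: a probe reads the bias attribute easily.
acc_bias_before, w_bias = probe_accuracy(X, y_bias)

# "Forget" the bias: project out the direction the probe uses.
u = w_bias / np.linalg.norm(w_bias)
H = X - np.outer(X @ u, u)

# After forgetting: the bias probe is near chance, the task signal survives.
acc_bias_after, _ = probe_accuracy(H, y_bias)
acc_task_after, _ = probe_accuracy(H, y_task)
```

The linear projection is a one-shot analogue; the paper's adversarial setup instead pits the encoder against a forgetting discriminator during training.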

Full Article

 

 

UCI, Disney Research Scientists Develop AI-Enhanced Video Compression Model
University of California, Irvine
Brian Bell
February 18, 2020


An artificial intelligence (AI)-enhanced video compression model developed by computer scientists at the University of California, Irvine (UCI) and Disney Research has shown that deep learning can provide results comparable to established video compression technology. The team showed its compressor yielded less distortion and significantly smaller bits-per-pixel rates than classical coding-decoding algorithms when trained on specialized video content. The compressor achieved similar results on downscaled, publicly available YouTube videos. “Ultimately, every video compression approach works on a trade-off,” said UCI's Stephan Mandt. "The hope is that our neural network-based approach does a better trade-off overall between file size and quality.”
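Every compressor, neural or classical, navigates the rate-distortion trade-off Mandt describes. The toy sketch below (an invented example, not the UCI/Disney model) quantizes a synthetic signal at several step sizes and measures bits per sample against mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(size=10_000)  # stand-in for a frame's transform coefficients

def rate_distortion(signal, step):
    """Uniform quantization: coarser steps -> fewer bits, more distortion."""
    q = np.round(signal / step)
    recon = q * step
    distortion = np.mean((signal - recon) ** 2)   # MSE
    # Empirical entropy of the quantized symbols = bits per sample.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))
    return rate, distortion

# Sweeping the step size traces out a rate-distortion curve.
points = [rate_distortion(frame, s) for s in (0.1, 0.5, 1.0, 2.0)]
```

A learned codec optimizes a weighted sum of these two quantities end to end, rather than picking a fixed quantizer.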

Full Article

 

 

Pentagon Adopts Ethical Principles for Using AI in War
Associated Press
Matt O'Brien
February 24, 2020


The Pentagon is adopting new ethical principles regarding the use of artificial intelligence (AI) technology on the battlefield. The new principles ask users to "exercise appropriate levels of judgment and care" when using AI systems. In addition, decisions made by automated systems should be "traceable" and "governable," meaning users must be able to deactivate or disengage AIs if they exhibit unintended behaviors. The principles are intended to guide both combat and non-combat AI applications. Lucy Suchman, an anthropologist who studies the role of AI in warfare, said she worries “that the principles are a bit of an ethics-washing project. The word ‘appropriate’ is open to a lot of interpretations.”

Full Article

 

 

Study Allows Brain, Artificial Neurons to Link Up Over the Web
University of Southampton
February 26, 2020


Researchers at the University of Southampton in the U.K., the University of Padova in Italy, and the University of Zurich and ETH Zurich in Switzerland have developed a system that enables brain neurons and artificial neurons to communicate with each other over the Internet. The research demonstrates how three emerging technologies—brain-computer interfaces, artificial neural networks, and memristors—can work together to create a hybrid neural network. Southampton’s Themis Prodromakis said the research lays "the foundations for the Internet of Neuro-electronics and brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips."

Full Article

 

 

AI Algorithm Better Predicts Corn Yield
Illinois ACES
Lauren Quinn
February 20, 2020


An interdisciplinary research team at the University of Illinois at Urbana-Champaign has developed a convolutional neural network (CNN) that generates crop yield predictions, incorporating topographic variables along with soil electroconductivity, as well as nitrogen and seed rate treatments. The team worked with data captured in 2017 and 2018 from the Data Intensive Farm Management project, in which seeds and nitrogen fertilizer were applied at varying rates across 226 fields in the Midwest U.S., Brazil, Argentina, and South Africa. In addition, on-ground measurements were combined with high-resolution Planet Labs satellite images to predict crop yields. Illinois' Nicolas Martin said that while "we don’t really know what is causing differences in yield responses to inputs across a field … the CNN can pick up on hidden patterns that may be causing a response.”
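A CNN of the kind described consumes the field variables as image-like channels over a spatial grid. The minimal sketch below (shapes, filter counts, and data are invented assumptions, not the Illinois model) runs one convolution-and-pooling pass over a toy three-channel field raster to produce a single yield estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy field raster, 3 input channels on a 16x16 grid: soil
# electroconductivity, nitrogen rate, and seeding rate.
field = rng.normal(size=(3, 16, 16))

def conv2d_valid(x, kernels):
    """Naive multi-channel 2D convolution ('valid' padding)."""
    c_in, H, W = x.shape
    c_out, _, kh, kw = kernels.shape
    out = np.zeros((c_out, H - kh + 1, W - kw + 1))
    for o in range(c_out):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernels[o])
    return out

kernels = rng.normal(scale=0.1, size=(4, 3, 3, 3))        # 4 filters, 3x3
features = np.maximum(conv2d_valid(field, kernels), 0.0)  # ReLU
# Global average pool + linear head -> one yield estimate for the field.
yield_pred = features.mean(axis=(1, 2)) @ rng.normal(size=4)
```

Because the filters slide across the grid, the network can pick up local spatial patterns in how yield responds to inputs, which is the "hidden patterns" property Martin describes.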

Full Article

 

 

AI Discovers Antibiotics to Treat Drug-Resistant Diseases
Financial Times
Madhumita Murgia
February 20, 2020


Massachusetts Institute of Technology (MIT) researchers used artificial intelligence to discover a new antibiotic that successfully destroyed 35 drug-resistant bacteria. MIT's Regina Barzilay designed the algorithm, which was trained through deep learning to analyze the makeup of 2,500 molecules, including current antibiotics and other natural compounds, and to rate their antibacterial effectiveness. The algorithm then scanned a database of 100 million molecules to predict the efficacy of each against specific pathogens, and also flagged molecules that appeared structurally distinct from existing antibiotics, to reduce the chance that bacteria resistant to current drugs would also resist the newly discovered compounds. Said Barzilay, "There is still a question of whether machine learning tools are really doing something intelligent in healthcare, and how we can develop them to be workhorses in the pharmaceuticals industry. This shows how far you can adapt this tool."
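The dissimilarity filter can be illustrated with molecular fingerprints: keep only candidates the model scores highly whose maximum Tanimoto similarity to every known antibiotic is low. The sketch below uses random bit-vector fingerprints, random scores, and arbitrary thresholds as stand-ins (MIT's actual pipeline differs):

```python
import numpy as np

rng = np.random.default_rng(0)

def tanimoto(a, b):
    """Tanimoto similarity between binary fingerprint vectors."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return inter / union if union else 0.0

# Hypothetical stand-ins: random 64-bit fingerprints.
known_antibiotics = rng.integers(0, 2, size=(20, 64))
library = rng.integers(0, 2, size=(1000, 64))
scores = rng.random(1000)  # stand-in for a model's predicted efficacy

# Keep candidates the model ranks highly AND that look structurally
# unlike every known antibiotic (low maximum Tanimoto similarity).
novel = [
    i for i in range(len(library))
    if scores[i] > 0.9
    and max(tanimoto(library[i], kb) for kb in known_antibiotics) < 0.6
]
```

In practice the fingerprints would come from molecular structure (e.g. substructure hashes) rather than random bits, and the score from the trained network.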

Full Article

*May Require Paid Registration

 

 

Optimization Algorithm Sets Speed Record for Solving Combinatorial Problems
IEEE Spectrum
John Boyd
February 10, 2020


Researchers at Toshiba Corp. in Japan have developed a quantum-inspired heuristic algorithm that is 10 times faster than competing technologies. In October, the researchers announced a prototype device implementing the algorithm that can detect and execute optimal arbitrage opportunities from among eight currency combinations in real time. The researchers claim the likelihood of the algorithm finding the most profitable arbitrage opportunities is greater than 90%. The team implemented the Simulated Bifurcation Algorithm on a single field-programmable gate array (FPGA) chip, and were able to run 8,000 operations in parallel to solve a 2,000-spin problem. In a separate test using eight GPUs, the system solved a 100,000-spin problem in 10 seconds—1,000 times faster than when using standard optimized simulated annealing software.
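For a flavor of how simulated bifurcation works, the sketch below runs ballistic-style SB dynamics (oscillator positions relax to ±1 under a ramped bifurcation parameter plus the Ising couplings) on a tiny 8-spin ferromagnetic ring. The instance and all parameters are invented for illustration and are far smaller than Toshiba's hardware problems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny Ising instance: an 8-spin ferromagnetic ring (J=1 between neighbors).
# Ground state: all spins aligned, energy E = -0.5 * s^T J s = -8.
N = 8
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = 1.0

a0, c0, dt, steps = 1.0, 0.5, 0.05, 2000
x = rng.normal(scale=0.1, size=N)   # oscillator positions
y = rng.normal(scale=0.1, size=N)   # momenta

for t in range(steps):
    a_t = a0 * t / steps            # ramp the bifurcation parameter
    y += dt * (-(a0 - a_t) * x + c0 * (J @ x))
    x += dt * a0 * y
    # Inelastic walls: clamp positions to [-1, 1] and stop the momentum.
    hit = np.abs(x) > 1.0
    x[hit] = np.sign(x[hit])
    y[hit] = 0.0

spins = np.where(x >= 0, 1, -1)     # read out the spin configuration
energy = -0.5 * spins @ J @ spins
```

Because every oscillator updates independently given `J @ x`, each step parallelizes naturally across FPGA logic or GPU threads, which is what the hardware implementations exploit.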

Full Article

 

 

Fear of Big Brother Guides EU Rules on AI
Agence France-Presse
February 17, 2020


The artificial intelligence (AI) policy unveiled by the European Union (EU) this week urges authorities and companies to practice caution before rolling out facial recognition technology. The European Commission hopes to address Europeans' concerns about the growing importance of AI in their lives amid reports from China of facial recognition technology being used to suppress dissent. EU Commissioner Margrethe Vestager recommends organizations consider the ramifications of facial recognition—specifically any scenarios in which the technology should be authorized. Vestager says Europe has a desire to be "sovereign" on AI and to shield "the integrity of our grids, of our infrastructure, of our research."

Full Article

 
