Dr. T's AI brief


Daniel Tauritz

2020/05/16 8:00:14
To: ai-b...@googlegroups.com

ACM Prize in Computing Awarded to AlphaGo Developer
ACM
April 1, 2020


ACM has named DeepMind research scientist David Silver to receive the 2019 ACM Prize in Computing for advances in computer game-playing. His most highly publicized accomplishment was leading the group that developed AlphaGo, an algorithm that beat the world Go champion. Silver designed the algorithm by integrating concepts from deep learning, reinforcement learning, traditional tree-search, and large-scale computing. Silver's team trained AlphaGo on expert human games, then applied reinforcement learning to enhance its performance. Silver later developed the AlphaZero algorithm that learned by playing games against itself, using only game rules, and demonstrated unprecedented generality of game-playing techniques. Infosys' Pravin Rao said, "The frameworks that Silver and his colleagues have developed will inform all areas of [artificial intelligence], as well as practical applications in business and industry for many years to come."

Full Article

 

 

Balcan to Receive ACM Grace Murray Hopper Award for Innovations to Machine Learning
Association for Computing Machinery
April 8, 2020


Carnegie Mellon University's Maria Florina “Nina” Balcan has been named recipient of the 2019 ACM Grace Murray Hopper Award for her contributions to minimally supervised machine learning. The award is given to the outstanding young computer professional of the year, selected on the basis of a single recent major technical or service contribution made while he or she was 35 years of age or less. “Nina Balcan wonderfully meets the criteria for the ACM Grace Murray Hopper Award, as many of her groundbreaking contributions occurred long before she turned 35,” said ACM President Cherri M. Pancake. “Although she is still in the early stages of her career, she has already established herself as the world leader in the theory of how AI systems can learn with limited supervision. More broadly, her work has realigned the foundations of machine learning, and consequently ushered in many new applications that have brought about leapfrog advances in this exciting area of artificial intelligence."

Full Article

 

 

AI-Powered Search Engine Makes It Easier to Find Information in the Fight Against COVID-19
University of Waterloo Cheriton School of Computer Science
April 7, 2020


Researchers at the University of Waterloo Cheriton School of Computer Science in Canada and New York University have developed a dedicated search engine for researchers fighting the COVID-19 pandemic. Neural Covidex supplies access to the Allen Institute for Artificial Intelligence (AI)'s COVID-19 Open Research Dataset (CORD-19), a publicly available, curated archive of roughly 45,000 scholarly articles, medical reports, journal articles, and preprints about COVID-19 and the coronavirus family. Cheriton's Jimmy Lin channeled his expertise in building systems for extracting insights from text into creating natural language processing and information retrieval components for exploring the CORD-19 dataset. Said Lin, "We're working hard to improve our system incrementally by leveraging the latest AI techniques."

Full Article

 

 

To Tune Up Your Quantum Computer, Better Call an AI Mechanic
NIST
Chad Boutin
March 31, 2020


Researchers at the National Institute of Standards and Technology (NIST) have developed a method for teaching an artificial intelligence (AI) program to make an interconnected set of adjustments to tiny quantum dots, a significant aspect of creating properly functioning qubits. The researchers created a simulator to generate thousands of images of quantum dot measurements that could then be fed to the AI as a training exercise. Said NIST mathematician Justyna Zwolak, "We simulate the qubit setup we want and run it overnight, and in the morning we have all the data we need to train the AI to tune the system automatically." The team used a setup of two quantum dots, and verified that within certain constraints the trained AI could auto-tune the system to the desired setup.
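
Zwolak's simulate-overnight-then-train loop is easy to sketch in code. The following Python fragment is a minimal illustration only: the measurement simulator, the three charge-state labels, and the classifier choice are stand-in assumptions, not NIST's actual models.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

STATES = ["no_dot", "single_dot", "double_dot"]

def simulate_measurement(state, n_pixels=64):
    # Hypothetical stand-in for the physics simulator: each charge
    # state yields a characteristic (noisy) measurement pattern.
    base = {"no_dot": 0.0, "single_dot": 0.5, "double_dot": 1.0}[state]
    return base + 0.1 * rng.standard_normal(n_pixels)

# Generate a labeled training set "overnight," as in the article.
labels = [STATES[i % 3] for i in range(3000)]
X = np.array([simulate_measurement(s) for s in labels])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# At tuning time, classify a fresh measurement; a controller would
# keep adjusting gate voltages until "double_dot" is reported.
print(clf.predict([simulate_measurement("double_dot")]))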

Full Article

 

 

Researchers Propose Paradigm That Trains AI Agents Through Evolution
VentureBeat
Kyle Wiggers
March 24, 2020


Researchers at Carnegie Mellon University, OpenAI, Facebook AI Research, the University of California, Berkeley, and China's Shanghai Jiao Tong University have proposed a paradigm that could exponentially scale up multi-agent reinforcement learning, in which artificial intelligence (AI) agents interact in a space where their population grows over time. The concept involves dividing learning into stages with an increasing number of agents in the environment, so the agents first learn to interact in simpler situations with fewer agents, then leverage these experiences in settings with progressively more agents. The Evolutionary Population Curriculum paradigm introduces new agents by cloning existing ones from the previous stage, incorporating crossover, mutation, and selection to ensure only agents with superior adaptation graduate to the next stage. In experiments using three challenging environments, the researchers said the AI agents' performance and training stability "significantly" improved over baselines.
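
A toy version of that staged clone-mutate-select loop in Python; the one-number "agents" and the fitness function are placeholders for full reinforcement learning policies, so this sketches only the curriculum mechanics, not the paper's method.

import random

def fitness(agent, n_agents):
    # Placeholder: agent quality in an environment whose optimum
    # shifts as the population grows (more agents, harder task).
    return -abs(agent - 0.1 * n_agents)

def evolutionary_population_curriculum(stages=(2, 4, 8, 16)):
    agents = [random.uniform(0, 1) for _ in range(stages[0])]
    for n_agents in stages:
        # Grow the population by cloning agents from the previous
        # stage and mutating the clones.
        while len(agents) < n_agents:
            agents.append(random.choice(agents) + random.gauss(0, 0.05))
        # Selection: only the better-adapted half graduates; the rest
        # are refilled with mutated copies of the survivors.
        agents.sort(key=lambda a: fitness(a, n_agents), reverse=True)
        survivors = agents[: n_agents // 2]
        agents = survivors + [random.choice(survivors) + random.gauss(0, 0.05)
                              for _ in range(n_agents - len(survivors))]
    return agents

final = evolutionary_population_curriculum()
print(max(fitness(a, 16) for a in final))  # approaches 0 as agents adapt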

Full Article

 

 

How a Real Dog Taught a Robot Dog to Walk
Wired
Matt Simon
April 3, 2020


Researchers at Google have developed a robotic dog and taught it to walk by showing it motion-capture videos of real dogs walking on treadmills. The robot, Laikago, has a body very different from a biological dog's body; a digital version of Laikago used computer simulations to determine how to imitate the motion of the digital version of the biological dog without directly copying its mechanics. The researchers used a reinforcement learning algorithm to help the robot learn to move as similarly to the original reference motion as possible. The algorithm attempts random movements, and gets a digital "reward" if it gets closer to the dog's reference motion. Over many iterations, this reward system teaches the simulated robot dog to move like a real dog. Said Google’s Jason Peng, “The drawback with the kind of manual approach is that it's not really scalable for every skill that we want a robot to perform.”
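
The "digital reward" has a standard concrete form in motion-imitation work: it decays exponentially with the tracking error between the simulated pose and the reference frame. The scale factor and three-joint pose below are illustrative assumptions, not Google's exact formulation.

import numpy as np

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    # Highest when simulated joint angles match the motion-capture
    # reference; decays exponentially with squared tracking error.
    err = np.sum((sim_pose - ref_pose) ** 2)
    return np.exp(-scale * err)

ref = np.array([0.10, -0.30, 0.25])    # reference joint angles
close = np.array([0.12, -0.28, 0.24])  # near-perfect imitation
wild = np.array([0.90, 0.40, -0.60])   # random flailing
print(imitation_reward(close, ref), imitation_reward(wild, ref))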

Full Article

 

 

AI Hiring Expected to Show Resilience Amid Coronavirus Slowdown
The Wall Street Journal
Angus Loten
March 30, 2020


The number of jobs related to artificial intelligence (AI) globally could increase by as much as 16% this year, reaching 969,000, according to technology research firm International Data Corp. (IDC) estimates, driven by stronger demand for AI workers as organizations deal with the impact of the coronavirus pandemic. IDC's forecast includes a variety of AI-related jobs, such as data engineers, data scientists, and machine learning developers. While millions of U.S. businesses have announced layoffs or furloughs, the projected resilience of the AI job market mirrors forecasts of continued spending on AI tools and technology. Global AI spending this year could reach $50.7 billion, up 32% over last year, according to IDC.

Full Article

*May Require Paid Registration

 

 

Recipe for Neuromorphic Processing Systems?
AIP Publishing
March 24, 2020


Researchers at Bielefeld University in Germany and the University of Zurich and ETH Zurich in Switzerland studying how biological neural processing systems execute computation have formulated a recipe for reproducing these computing principles in analog/digital electronics and novel materials. The researchers used standard complementary metal-oxide semiconductor (CMOS) electronic circuits and advanced nanoscale memory technologies to build intelligent systems capable of learning. They determined that apparent drawbacks to low-power computing technologies can be leveraged to conduct robust and efficient computation, similar to the brain's use of highly variable and noisy neurons to deploy robust behavior. The University of Zurich’s Giacomo Indiveri said such neuromorphic processing systems "offer promising solutions for those applications that require compact and very low-power [sub-milliwatt] real-time processing with short latencies."

Full Article

 

 

Deep Learning Cuts Costs in Building Control
Pacific Northwest National Laboratory
March 23, 2020


Researchers at the Pacific Northwest National Laboratory (PNNL) and KU Leuven in Belgium have demonstrated that deep learning could clear a path for wider adoption of Model Predictive Control (MPC) in buildings by tackling cost and implementation challenges. High installation costs have thwarted MPC deployment in a large segment of building stock, since each building is unique and necessitates its own custom physics-based model; the high computational cost of physics-based models restricts the number of control strategy alternatives that can be considered, and often demands dedicated hardware. The researchers used physics-based MPC to train deep learning neural network models, which return control actions that closely approximate those generated directly by physics-based MPC, but are faster and more power-efficient. PNNL's Jan Drgona said, "By applying these methods, we are on track to reduce engineering costs and achieve a generic solution that is broadly available to the building control community."

Full Article

 

Projecting the Outcomes of People's Lives with AI Isn't So Simple
Virginia Tech News
Egan Jimenez; Kendall Daniels
March 30, 2020


The results of a multi-institutional study using statistical and machine learning models to predict and quantify life outcomes for children, parents, and households in the U.S. suggest that sociologists and data scientists should be cautious when employing predictive modeling, particularly in the criminal justice system and social programs. The Fragile Families Challenge study found that even the best artificial intelligence predictive models had little accuracy when applying state-of-the-art modeling to a high-quality dataset. Virginia Polytechnic Institute and State University's Brian J. Goode said, "I think the Fragile Families Challenge shows that we need more research support in this area, particularly as machine learning has a greater impact on our everyday lives."

Full Article

 

 

AI Identifies Optimal Material Formula
Ruhr-University Bochum (Germany)
Meike Drießen
March 26, 2020


Researchers at Ruhr-Universität Bochum (RUB)'s Materials Discovery Department in Germany used a machine learning algorithm to predict the properties of a nanostructured layer, without requiring time-consuming experiments. RUB's Lars Banko and colleagues altered a generative model, then trained the algorithm to generate images of the surface of a model layer of aluminum, chromium, and nitrogen using specific process parameters. This enabled the researchers to compare the results of the calculations with experimental outcomes, and to analyze each prediction's reliability. RUB's Alfred Ludwig said, "We combined five parameters and were able to look in five directions simultaneously using the algorithm—without having to conduct any experiments at all."

Full Article

 

 

Neural Networks Facilitate Optimization in the Search for Materials
MIT News
David L. Chandler
March 26, 2020


Massachusetts Institute of Technology researchers have created a machine learning system to streamline the process of searching for new materials for applications when millions of potential materials are being considered and multiple criteria must be met and optimized. The system was used to consider nearly 3 million candidates for an energy storage system, producing a set of the eight most-promising materials. The process was accomplished in five weeks, but researchers say it would have taken 50 years using conventional methods. The study involved teaching an advanced machine-learning neural network about the relationship between the materials' chemical compositions and their physical properties, with that knowledge used to produce suggestions for possible materials to be used for the next round of training. This iterative optimization system generated reliable results with only a few hundred samples. Said Northwestern University’s George Schatz, “This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications.”

Full Article

 

 

Study Uses AI to Estimate Unexploded Bombs From Vietnam War
Ohio State News
Jeff Grabmeier
March 24, 2020


Researchers at The Ohio State University (OSU) used artificial intelligence to find unexploded Vietnam War-era bombs in Cambodia. They used machine learning to analyze a commercial satellite image of a 100-square-kilometer area near Kampong Trabaek in Cambodia for evidence of bomb craters, and the method increased true bomb crater detection by more than 160% over standard methods. The study suggests that about 44% to 50% of the bombs dropped there remain unexploded. OSU's Erin Lin said, "The process of demining is expensive and time-intensive, but our model can help identify the most vulnerable areas that should be demined first."

Full Article

 

Daniel Tauritz

2020/05/17 19:05:40
To: ai-b...@googlegroups.com

Google Claims Its AI Can Design Computer Chips in Under Six Hours
VentureBeat
Kyle Wiggers
April 23, 2020


Researchers at Google have developed an artificial intelligence-based approach to chip design that can learn from past experience and improve over time. The system can complete designs in under six hours on average, significantly faster than the weeks it takes human experts. The approach aims to place a "netlist" graph of logic gates, memory, and more onto a chip canvas, while adhering to constraints on placement density and routing congestion. Said the researchers, “Unlike existing methods that optimize the placement for each new chip from scratch, our work leverages knowledge gained from placing prior chips to become better over time."

Full Article

 

 

Apple, CMU Researchers Demo Low Friction Learn-by-Listening System for Smarter Home Devices
TechCrunch
Natasha Lomas
April 23, 2020


Researchers at Apple and Carnegie Mellon University have demonstrated a system that allows embedded artificial intelligence to learn by listening to environmental noises, without upfront training data or burdensome user supervision. Listen Learner uses acoustic activity recognition to enable a smart device to interpret events occurring in its surroundings through self-supervised learning. Manual labeling is facilitated by one-shot user interactions, like the speaker asking a person "what was that sound?" after hearing a sound enough times to classify it into a cluster. The system also could make an initial guess on what an acoustic cluster might signify by looping in a general pre-trained model, while refinement questions could be implemented to help Listen Learner identify sounds that have been closely clustered yet might still signify a distinct occurrence.

Full Article

 

 

AI Can Distinguish Between Bots, Humans Based on Twitter Activity
New Scientist
Donna Lu
April 22, 2020


Researchers at the University of Southern California (USC) have trained an artificial intelligence system to detect Twitter bots based on differences in the patterns of activity of real and fake accounts. The researchers analyzed two separate datasets of Twitter users, which were classified manually or by a pre-existing algorithm as either bot or human. The manually verified dataset included 8.4 million tweets from 3,500 human accounts, and 3.4 million tweets from 5,000 bots. The team found that human users replied between four and five times more frequently to other tweets than bots did. In addition, human users gradually become more interactive, with the fraction of replies increasing over the course of an hour-long Twitter session. USC researcher Emilio Ferrara thinks the algorithm could complement bot-detection tools that analyze the language within posts.
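
The session-level reply behavior the study measured is straightforward to extract from raw timeline data. In this Python sketch the tweet record fields are hypothetical, and a real detector would feed such features into a classifier alongside many others.

from datetime import datetime, timedelta

def reply_fractions_by_session(tweets, gap_minutes=60):
    # Split a user's tweets into sessions separated by long pauses,
    # then compute the fraction of replies in each session. Humans
    # tend to reply more as activity progresses; bots stay flat.
    tweets = sorted(tweets, key=lambda t: t["time"])
    sessions, current = [], [tweets[0]]
    for t in tweets[1:]:
        if t["time"] - current[-1]["time"] > timedelta(minutes=gap_minutes):
            sessions.append(current)
            current = []
        current.append(t)
    sessions.append(current)
    return [sum(t["is_reply"] for t in s) / len(s) for s in sessions]

start = datetime(2020, 4, 1, 12, 0)
timeline = [{"time": start + timedelta(hours=2 * i, minutes=5 * j),
             "is_reply": j > 3}  # replies pile up late in each session
            for i in range(3) for j in range(8)]
print(reply_fractions_by_session(timeline))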

Full Article

 

 

Pay Users to Spot Bias in AI, Say Top Researchers
Financial Times
Siddharth Venkataramakrishnan
April 23, 2020


Leading researchers at institutions such as OpenAI, Google, and the U.K.'s Alan Turing Institute and University of Cambridge have proposed a system of financially rewarding users to spot bias in algorithms. The concept was inspired by the bug bounty programs created to encourage software developers to report flaws. Potential users could include artificial intelligence (AI) researchers with direct access to algorithms, the public, and journalists who encounter apparent bias in everyday systems. Cambridge's Haydn Belfield said incentivizing users to rigorously check AI systems could help identify problems earlier in development, while OpenAI's Miles Brundage suggested monetary rewards would encourage developers to spot issues not discovered in public documentation. The Alan Turing Institute's Adrian Weller said financial compensation could encourage greater transparency on algorithmic bias, but cautioned that full transparency could reveal how to exploit such systems.

Full Article

*May Require Paid Registration

 

 

AI Is Evolving All by Itself
Science
Edd Gent
April 13, 2020


Google's Quoc Le and colleagues have designed a program that borrows concepts from Darwinian evolution, including survival of the fittest, to assemble artificial intelligence (AI) that generationally improves with effectively no human input. The AutoML-Zero program generates 100 candidate algorithms by randomly combining mathematical operations, then tests them on a simple task, like an image-recognition problem. AutoML-Zero compares the algorithms' performance to that of hand-designed algorithms, with copies of top-performing algorithms mutated by randomly replacing, editing, or deleting some of their code to create variations; these ‘offspring’ are added to the population while older algorithms are removed. AutoML-Zero was able to reproduce decades of AI research in days. Le said, "Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find."
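
The evolutionary loop is compact enough to sketch. The toy below evolves tiny straight-line programs over a handful of math operations to fit a target function, standing in for AutoML-Zero's image-recognition tasks; the instruction set and fitness task are simplifications, not the paper's.

import random

OPS = [lambda a, b: a + b, lambda a, b: a - b,
       lambda a, b: a * b, lambda a, b: max(a, b)]

def run(program, x):
    acc = x
    for op in program:
        acc = op(acc, x)
    return acc

def score(program):
    # Toy fitness standing in for image recognition: fit f(x) = 3x + x^2.
    xs = [0.1 * i for i in range(-10, 11)]
    return -sum((run(program, x) - (3 * x + x * x)) ** 2 for x in xs)

population = [[random.choice(OPS) for _ in range(4)] for _ in range(100)]
for _ in range(500):
    # Tournament selection: copy a strong program, mutate the copy by
    # replacing one random instruction, and retire the oldest member.
    parent = max(random.sample(population, 10), key=score)
    child = list(parent)
    child[random.randrange(len(child))] = random.choice(OPS)
    population.append(child)
    population.pop(0)

print(score(max(population, key=score)))  # approaches 0 (a perfect fit)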

Full Article

 

 

Deep Learning Takes on Tumors
Nature
Esther Landhuis
April 21, 2020


Scientists are using deep learning artificial intelligence (AI) technology for cancer research. Researchers at the Massachusetts Institute of Technology's Broad Institute applying image-based profiling to screen for genetic mutations discovered that machine learning can detect meaningful variants in images about as well as processes that quantify gene expression in cells. Investigators also are training a deep learning model to predict drug responses based on a person's cancer-genome sequence. Some deep learning tools are open source to help researchers with minimal coding skills; one example is the ZeroCostDL4Mic platform, which uses Colab, Google's free cloud service for AI developers, to provide access to deep learning microscopy tools.

Full Article

 

Uber Claims Its AI Enables Driverless Cars to Predict Traffic Movement with High Accuracy
VentureBeat
Kyle Wiggers
April 15, 2020


Researchers at Uber's Advanced Technologies Group have proposed an artificial intelligence methodology to enhance driverless vehicles' traffic-movement predictions. The scene-compliant generative adversarial network (SC-GAN) produces trajectories that follow limitations within scenes, given access to high-definition maps of scenes and detection and tracking systems informed by on-vehicle LiDAR, radar, and camera sensors. The GAN outputs nearby cars' frames of reference, and for each car whose potential future trajectories the GAN predicts, the scene data and map constraints are bundled within an RGB image represented as a mathematical object called a matrix. An SC-GAN deployment in Google's TensorFlow machine learning framework reduced off-road false positives by 50% compared with a baseline, and predicted cars' movements even in fairly challenging edge cases. The researchers said, "The method outperforms the current state-of-the-art in GAN-based motion prediction of the surrounding actors, producing more accurate and realistic trajectories."

Full Article

 

 

AI Taught to Instantly Transform Objects in Image-Editing Software
New Scientist
Donna Lu
April 16, 2020


A team of researchers from NVIDIA, Adobe Research, and Aalto University in Finland has developed an artificial intelligence (AI)-controlled program that lets the user transform the shape of objects in images, as well as permitting adjustments to lighting and perspective. The researchers built the program using generative adversarial networks (GANs), which are made up of two competing AIs: a generator that creates an image, and a discriminator that decides whether the image is real or fake. After several rounds, the generator can produce images so realistic that the discriminator can no longer tell the difference. The team analyzed existing GANs to determine how each one decomposes images into components. The program provides a simple yet powerful way to create images with existing AIs, without the massive computational requirements to train new algorithms for specific tasks.
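
For readers unfamiliar with the generator/discriminator game the summary describes, here is a self-contained PyTorch toy on a one-dimensional "image" distribution. It illustrates only the adversarial training idea, not the NVIDIA/Adobe/Aalto editing system.

import torch
import torch.nn as nn

# Generator maps random noise to a sample; the discriminator scores
# whether a sample looks drawn from the real data distribution.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0  # toy stand-in for real images

for step in range(2000):
    # Discriminator step: separate real samples from generated ones.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = (bce(D(real_data(64)), torch.ones(64, 1))
              + bce(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: produce samples the discriminator labels real.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().flatten())  # clusters near 2.0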

Full Article

 

 

Sarit Kraus Named ACM Athena Lecturer for Wide-Ranging Contributions to AI
ACM
April 15, 2020


ACM has named Sarit Kraus of Israel's Bar-Ilan University the 2020-2021 ACM Athena Lecturer for foundational contributions to artificial intelligence (AI), including multi-agent systems (MAS) and other fields. Kraus, an ACM Fellow, is recognized as one of the world's preeminent researchers of MAS, in which a distributed group of agents collaborates with each other to solve problems; Kraus combined machine learning techniques for human modeling, formal decision-making, and game theory to enable agents to interact well with humans. She also gained recognition for advancing automated negotiation, in which the goal is to build computers that can reach agreements with other computers, negotiate for humans, or even outperform human negotiators. ACM President Cherri M. Pancake said, "With seminal work in AI stretching back to the early 1990s ... Sarit Kraus has introduced new ways of thinking in multi-agent systems research, while also shepherding research ideas into practical applications."

Full Article

 

 

DeepMind AI Can Understand the Unusual Atomic Structure of Glass
New Scientist
Jason Arunn Murugesu
April 6, 2020


Researchers at artificial intelligence (AI) firm DeepMind used machine learning to simulate how atomic particles in a piece of glass respond to different temperatures and pressures. The AI ran the software several times to account for all the various combinations of particles and neighbor particles, and to model how the entire piece of glass would react to different conditions. The AI's predictions of initial particle movements under different pressures and temperatures achieved an average accuracy of 96%, which fell to 64% over longer time scales, but was still more accurate than current computer modeling techniques. The researchers hope to use this AI to model traffic flow, treating cars as particles and using the same neighbor-particle concept to forecast vehicles' behavior in traffic jams.

Full Article

*May Require Paid Registration

 

 

Companies Devote Shrinking Tech Budgets to Cloud, AI
The Wall Street Journal
Angus Loten
April 16, 2020


According to a new report from International Data Corporation (IDC), overall corporate spending on enterprise technology is likely to decline this year due to the coronavirus pandemic. However, companies worldwide increasingly are shifting capital spending to cloud services, artificial intelligence (AI), and other tools that could slash costs and increase revenue. Said IDC CEO Crawford Del Prete, "As a result of COVID-19 there's a premium on agility, and the cloud and associated services can give enterprises a high degree of agility." He noted that AI and other advanced software tools enable companies to analyze data, automate processes, and improve efficiencies. The report indicates that many businesses were caught off guard by the pandemic, and just two weeks ago, some had been planning to increase IT spending.

Full Article

*May Require Paid Registration

 

 

You Can't Spell Creative Without AI
The New York Times
John Markoff
April 8, 2020


Innovations in language-processing software form the core of the debate over whether artificial intelligence (AI) will augment or even supplant human creativity. Last year, the OpenAI research group announced the GPT-2 (generative pretrained transformer) language model, which Temple University's Hector Postigo tweaked with a collection of human-written material to compose a short statement about ethics policies for AI systems, and later classical music, poetry, and rap lyrics. More recently, AI research groups and the National Library of Medicine have organized more than 44,000 scientific research papers on the coronavirus so a machine learning algorithm could help scientists extract meaning to aid research. AI experts cautioned that such programs are assistive, and not creating artistic works or independently realizing other intellectual achievements. However, Brian Smith at Canada's University of Toronto suggests that these tools could assume many creative tasks.

Daniel Tauritz

2020/05/18 18:40:34
To: ai-b...@googlegroups.com

Yann LeCun, Yoshua Bengio: Self-Supervised Learning is Key to Human-Level Intelligence
VentureBeat
Kyle Wiggers
May 2, 2020


ACM A.M. Turing Award recipients Yann LeCun and Yoshua Bengio say that self-supervised learning could lead to the creation of artificial intelligence (AI) programs that are more humanlike in their reasoning. Speaking at the International Conference on Learning Representations (ICLR) 2020, which took place online, LeCun, Facebook's chief AI scientist, said supervised learning systems will play a diminishing role as self-supervised learning algorithms—those that generate labels from data by exposing relationships between the data's parts, believed to be critical to achieving human-level intelligence—come into wider use. Meanwhile, Bengio, director at the Montreal Institute for Learning Algorithms, predicts new studies will reveal the way high-level semantic variables connect with how the brain processes information, including visual information. Humans communicate these kinds of variables using language, and they could lead to a new generation of deep learning models.

Full Article

 

Researchers Use Machine Learning to Unearth Underground Instagram 'Pods'
NYU Tandon School of Engineering
April 27, 2020


A team of researchers at New York University (NYU) Tandon School of Engineering and Drexel University has applied machine learning to uncover Instagram "pods," user clusters that manipulate curation algorithms and artificially boost content popularity. The researchers initially analyzed 1.8 million Instagram posts belonging to 111,455 unique accounts, promoted across more than 400 Instagram pods hosted on the Telegram instant messaging service. The team then trained a machine learning classifier on metadata from pod groups, and Instagram data connected to both the pods and control posts, to detect pod engagement. NYU Tandon's Janith Weerasinghe said, “Already there is evidence of recently increasing adoption of this strategy: the pods we discovered have emerged at an accelerating pace over the last two years.”

Full Article

 

 

Automating the Search for New 'Curiosity' Algorithms
MIT News
Kim Martineau
April 28, 2020


Massachusetts Institute of Technology (MIT) researchers have used machine learning to identify new algorithms for encoding forms of curiosity. MIT's Ferran Alet said, "We were inspired to use [artificial intelligence] to find algorithms with curiosity strategies that can adapt to a range of environments." The researchers chose a set of basic modules to define their exploration algorithms, selecting about 36 high-level operations to guide the agent to perform tasks like remembering previous inputs, comparing current and past inputs, and using learning methods to change its own modules. The computer integrated up to seven operations at a time to generate computation graphs describing 52,000 algorithms. Two were entirely new, apparently too obvious or counterintuitive for humans to have proposed, and outperformed human-designed algorithms on a wide range of simulated tasks and environments.

Full Article

 

 

How AI Steered Doctors Toward Possible Coronavirus Treatment
The New York Times
Cade Metz
April 30, 2020


In January, researchers at U.K.-based artificial intelligence (AI) startup BenevolentAI mined scientific literature about the coronavirus to uncover a potential treatment within two days. BenevolentAI's technology can pinpoint information buried in massive volumes for the design of new drugs, using universal language models that teach themselves to understand written and spoken language by analyzing digital text. The company's engineers employed automated language tools to generate an interconnected database of biological processes related to the coronavirus, then BenevolentAI's Peter Richardson applied additional tools to browse the findings. He plotted out linkages between human genes and the biological processes affected by the virus, and identified two particular genes. Using a digital flow chart to outline how current medications targeted these genes, the researchers identified the anti-inflammatory drug baricitinib as a possible treatment that may block the coronavirus from entering cells; the drug is being prepared for clinical testing.

Full Article

*May Require Paid Registration

 

 

AI Spots Critical Microsoft Security Bugs 97% of the Time
VentureBeat
Kyle Wiggers
April 16, 2020


Microsoft says it has developed an artificial intelligence system that correctly distinguishes between security and non-security software bugs 99% of the time. The software giant said the model also accurately identifies critical, high-priority security bugs 97% of the time. The system was trained on a dataset of 13 million work items and bugs from 47,000 developers at Microsoft stored in the AzureDevOps and GitHub repositories. As the model was created, security experts approved the training data and used statistical sampling to provide a manageable amount of data. Wrote Microsoft's Scott Christiansen and Mayana Pereira in a blog post, “We discovered that by pairing machine learning models with security experts, we can significantly improve the identification and classification of security bugs."
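
At its core this is supervised text classification over bug titles. A minimal stand-in (toy data, ordinary TF-IDF features plus logistic regression, nothing like Microsoft's production model or 13-million-item corpus) looks like this in Python:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical sample of labeled bug titles.
titles = [
    "Buffer overflow in image parser",
    "SQL injection in login form",
    "Button misaligned on settings page",
    "Typo in error dialog text",
]
labels = ["security", "security", "non-security", "non-security"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)
print(model.predict(["Crash when parsing crafted input file"]))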

Full Article

 

 

Reducing the Carbon Footprint of AI
MIT News
Rob Matheson
April 23, 2020


Massachusetts Institute of Technology (MIT) researchers have developed an automated artificial intelligence (AI) system for training and running certain types of neural networks, which has a relatively small carbon footprint. The researchers built the system via automated machine learning, which they used to train a "once-for-all" (OFA) network. Using the system to train a computer-vision model, the researchers calculated that the effort required approximately 1/1,300th the carbon emissions of modern state-of-the-art neural architecture search strategies, and reduced the duration of inference by 1.5 to 2.6 times. MIT's John Cohn said, "The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better."

Full Article

 

Daniel Tauritz

2020/05/19 12:14:22
To: ai-b...@googlegroups.com

National Lab Taps AI Machine With Massive Chip to Fight Coronavirus
The Wall Street Journal
Agam Shah
May 6, 2020


Argonne National Laboratory scientists are using an artificial intelligence (AI) system driven by a massive processor to accelerate development of treatments and vaccines for the coronavirus. The CS-1 computer from AI chip startup Cerebras Systems is running learning models to find compounds in existing or novel drugs that target the virus; the neural network on the 8.5-inch by 8.5-inch chip facilitates faster problem-solving and data processing. Argonne is collecting data on roughly 4 billion molecules from drugs on the market or approved by the U.S. Food and Drug Administration. The CS-1's deep learning model analyzes this data via techniques like image recognition to predict how well a certain molecule will bind with coronavirus' proteins and inhibit its penetration and spread among human cells. The model then scores the effectiveness of molecules in blocking the virus, and the most promising compounds are flagged for further research.

Full Article

*May Require Paid Registration

 

 

Preventing AI From Divulging Its Own Secrets
IEEE Spectrum
Jeremy Hsu
May 6, 2020


North Carolina State University (NC State) researchers have demonstrated the first countermeasure for shielding artificial intelligence from differential power analysis attacks. Such attacks involve hackers exploiting neural networks' power signatures to reverse-engineer the inner mechanisms of computer chips that are running those networks. The attack relies on adversaries physically accessing devices in order to measure their power signature, or analyze output electromagnetic radiation. Attackers can repeatedly have the neural network run specific computational tasks with known input data, and eventually determine power patterns associated with the secret weight values. The countermeasure is adapted from a masking technique; explains NC State's Aydin Aysu, "We use the secure multi-party computations and randomize all intermediate computations to mitigate the attack."

Full Article

 

New Hardware Could Curb AI's Energy Appetite
Purdue University News
Kayla Wiles
May 7, 2020


Purdue University engineers have developed hardware that learns using artificial intelligence (AI) that currently runs on software platforms. They said sharing intelligence features between hardware and software would offset the massive power requirements for using AI in more advanced applications. The team is the first to demonstrate artificial "tree-like" memory in hardware at room temperature. The hardware is fashioned from a quantum material, to which the researchers added a proton and applied an electric pulse that caused the proton to move. Each new proton position generated a different resistance state, creating an information storage site or memory state—and multiple pulses created a branch comprised of memory states. Purdue's Shriram Ramanathan said, "This discovery opens up new frontiers for AI that have been largely ignored because implementing this kind of intelligence into electronic hardware didn't exist."

Full Article

 

 

A Soft Touch for Robotic Hardware
University of Tokyo
May 15, 2020


A team of researchers at the University of Tokyo in Japan designed a flexible soft robot with embedded sensors and pneumatic artificial muscles (PAM), which combine rubber and fiber-based fluid-driven systems that expand and contract. The researchers used a machine learning technique called reservoir computing to model the PAM in real time, feeding data into a special artificial neural network to make the model flexible and adaptable to the environment. The material's electrical resistance changes depending on its configuration during contraction, and adding carbon makes the varying resistance more readable. Tokyo's Kohei Nakajima said this approach may lead to a new generation of soft robots that work with humans, adding that remote-sensing applications, which require real-time data processed in a decentralized manner, also could benefit.
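
Reservoir computing itself is simple to sketch: a fixed random recurrent network expands the sensor stream into rich features, and only a linear readout is trained, which is what keeps learning cheap enough for real time. The sine-wave "sensor stream" below is an assumed stand-in for the PAM's resistance signal, not the team's actual data.

import numpy as np

rng = np.random.default_rng(1)

n_res = 100
W_in = rng.uniform(-0.5, 0.5, n_res)             # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state stability

def reservoir_states(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in * u + W @ x)
        states.append(x)
    return np.array(states)

# Train only the linear readout to predict the signal one step ahead.
u = np.sin(np.linspace(0, 20, 500))
S = reservoir_states(u[:-1])
readout, *_ = np.linalg.lstsq(S, u[1:], rcond=None)
print(float(np.mean((S @ readout - u[1:]) ** 2)))  # small error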

Full Article

 

 

Australia Wins AI 'Eurovision Song Contest'
BBC News
Jane Wakefield
May 12, 2020


An Australian team won the unofficial AI ‘Eurovision Song Contest’, named for an annual international song competition cancelled this year due to the COVID-19 pandemic. The event challenged contestants to write a song using artificial intelligence (AI). The Uncanny Valley team—made up of computer science, mathematics, and social-anthropology researchers, as well as music producers—defeated 12 other teams with its song "Beautiful the World." The song had a melody and lyrics written by an AI system that was trained with audio samples of koalas, kookaburras, and Tasmanian devils. A panel of AI experts rating the entries said, "Composing a song with AI is hard because you have all the creative challenges that come with song-writing, but you also have to juggle getting the machine learning right. The teams not only pushed the boundaries of their personal creativity, but also gave the audience a look into the exciting future of human-AI musical collaboration."

Full Article

 

 

Machine Learning Method Streamlines Particle Accelerator Operations
SLAC National Accelerator Laboratory
Erika K. Carlson
April 29, 2020


Researchers at the U.S. Department of Energy's Stanford Linear Accelerator Center (SLAC) National Accelerator Laboratory used machine learning to design a streamlined technique for speeding up experiments with the Linac Coherent Light Source x-ray laser. The resulting algorithm combines machine learning with knowledge about the physics of the particle accelerator by harnessing the Gaussian process, which forecasts the effect a specific tuning has on the quality of the x-ray beam. This also generates uncertainties for its predictions, and the algorithm then decides which adjustments to attempt to yield the biggest improvements. SLAC's Jane Shtalenkova said, "Our ability to increase our tuning efficiency is really ... critical to being able to deliver a beam faster and with better quality to people who are coming from all over the world to run experiments."

Full Article

 

 

Visualizing the World Beyond the Frame
MIT News
Kim Martineau
May 6, 2020


A study by Massachusetts Institute of Technology researchers proposed a new test for measuring how far generative adversarial networks (GANs) can extrapolate data from a specific image, rendering new visualizations beyond the frame. The researchers "steer" a GAN into the subject of a photo and ask it to draw new renderings, like close-ups, in different colors, or under bright light. These variations closely correspond to how creative the photographers were in framing scenes, and the underlying dataset reflects their biases; the steering technique is designed to add visibility to those biases. The researchers applied their method to GANs previously trained on 14 million ImageNet photos, then gauged how far the models went in transforming different classes of photos of animals, objects, and scenes. The degree of artistic risk-taking varied widely by the class of subject the GAN was attempting to manipulate, and the researchers learned that GANs still cannot represent images in physically meaningful ways.

Full Article

 

 

Computational Imaging Benefits From Untrained Neural Network
Chinese Academy of Sciences
Zhang Nannan
May 6, 2020


Investigators at the Chinese Academy of Sciences (CAS) in Beijing combined an untrained neural network with knowledge of physics to remove constraints from deep learning-based computational imaging techniques. CAS' Situ Guohai said the new method requires only raw measurement data for an object captured by the detector, and compares it to the physical model of the object. The researchers based the technique on a deep neural network, which is used to fit different mapping functions from many training data pairs, and the free space propagation principle. They applied this method to a lensless quantitative phase imaging problem, which requires reassembling phase data lost in the detection stage; the team successfully recovered the information using a single intensity diffraction pattern. Situ said, "The new approach for phase imaging is a single shot, non-interferometric method, which has great potential in microscopy and optical metrology."

Full Article

 

 

AI’s Use In Coronavirus Assessment, Diagnosis Has Mixed Results

The Wall Street Journal (5/4, Olson, Subscription Publication) reports the use of AI software in hospitals to diagnose and assess patients with COVID-19 has had mixed results. Some physicians report errors when using AI, while others say it is too early to determine whether it has helped with efficiency or saved more lives during the pandemic.

Daniel Tauritz

2020/05/23 19:55:08
To: ai-b...@googlegroups.com

DeepMind Researchers Develop Method to Efficiently Teach Robots Tasks Like Grasping
VentureBeat
Kyle Wiggers
May 18, 2020


DeepMind researchers said simple sensor intentions (SSIs) can streamline the knowledge required to define rewards in reinforcement learning systems, which can help address complex robotic tasks using only raw sensor data. SSIs ostensibly deliver a generic means of encouraging artificial intelligence agents to probe their environments, as well as providing guidance for collecting data to solve a main task. SSIs are sets of auxiliary tasks defined by acquiring a sensor response and calculating a commensurate reward, either rewarding an agent for reaching a specific target response or for incurring a specific change in response. The researchers converted raw images from a camera-equipped robot into a set of SSIs, then applied them to a controller to train the robot to lift colored blocks. The researchers said the approach “requires less prior knowledge than the broadly used shaping reward formulation, that typically rely on task insight for their definition and state estimation for their computation."
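
In code, an SSI is just a cheap scalar computed from raw sensor data. The two auxiliary rewards below, one for reaching a target response ("make the scene look red") and one for causing any change in response, are illustrative guesses at the flavor, not DeepMind's actual definitions.

import numpy as np

def ssi_rewards(prev_image, image):
    # Reward 1: reach a target sensor response (mean redness).
    reach_red = float(np.mean(image[..., 0]))
    # Reward 2: incur a change in response (pixels moved at all).
    any_change = float(np.mean(np.abs(image - prev_image)))
    return {"reach_red": reach_red, "any_change": any_change}

prev = np.random.rand(32, 32, 3)   # previous camera frame (RGB)
curr = np.random.rand(32, 32, 3)   # current camera frame
print(ssi_rewards(prev, curr))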

Full Article

 

 

Sony Says It Created World's First Image Sensor with Built-in AI
Bloomberg
Takashi Mochizuki; Vlad Savov
May 14, 2020


Sony announced its development of the world's first image sensor with built-in artificial intelligence (AI), which promises to accelerate and enhance the security of data collection. The sensor features a logic processor and memory, and can perform image recognition without generating any images in order to conduct AI tasks like identifying, analyzing, or counting objects without offloading data to a separate chip. Sony said this boosts privacy and facilitates near-instant analysis and object tracking. While the technology was developed for use by commercial customers, the company says it holds promise for consumer applications as well, like helping a smart device to identify objects and users securely.

Full Article

 

 

Algorithms Help Scientists Connect Data Points from Multiple Sources to Solve High Risk Problems
CSIRO (Australia)
Emily Connel
May 7, 2020


StellarGraph, an open source graph machine learning library managed by Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO), has launched a series of new algorithms for network graph analysis to help find patterns in data, work with larger data sets, and accelerate performance while reducing memory usage. The StellarGraph library algorithms give data scientists and engineers the tools to build, test, and experiment with powerful machine learning models on their own network data. This allows them to see patterns and help apply their research to solve real-world problems across industries. Said CSIRO's Tim Pitman, "The new algorithms in this release open up the library to new classes of problems to solve, including fraud detection and road traffic prediction."

Full Article

 

 

Intel, UPenn Partner with 29 Health Organizations to Train AI to Spot Brain Tumors
ZDNet
Stephanie Condon
May 11, 2020


Intel and the University of Pennsylvania are launching a federation with 29 research and healthcare institutions in an effort to train artificial intelligence models to identify brain tumors. The consortium will train robust models using the largest brain tumor dataset to date. The organizations will use federated learning, a privacy-preserving technique, to contribute to that dataset without actually sharing their patient data. Intel began working with researchers at the University of Pennsylvania in 2018 to produce the first proof-of-concept application of federated learning to real-world medical imaging. The new initiative builds on that research, using Intel software and hardware to implement federated learning in a way that provides additional privacy protections to both the model and the data. Participating institutions are based in the U.S., Canada, the U.K., Germany, Switzerland, and India.
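
The aggregation step at the heart of federated learning fits in a few lines. This is the generic FedAvg-style weighted average, not Intel's hardened implementation, and the per-institution weight vectors are hypothetical.

import numpy as np

def federated_average(client_weights, client_sizes):
    # Each institution trains locally and shares only model weights;
    # the server averages them, weighted by local dataset size, so
    # patient data never leaves the hospital.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([0.9, 0.2]), np.array([1.1, 0.1]), np.array([1.0, 0.3])]
sizes = [120, 300, 80]  # local scan counts at three hypothetical sites
print(federated_average(clients, sizes))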

Full Article

 

 

AI Gauges Head Injuries by Classifying Brain Lesions
New Atlas
Nick Lavars
May 14, 2020


Scientists at the U.K.’s University of Cambridge and Imperial College London have developed an artificial intelligence (AI) algorithm that can detect and classify different types of brain lesions, to gauge the impact of a head injury. The researchers trained the machine learning AI on more than 600 computed tomography (CT) brain scans featuring lesions of different sizes and types. When applied to another set of CT scans, the algorithm was able to classify the volume and progression of brain lesions. Said Cambridge's David Menon, "We hope it will help us identify which lesions get larger and progress, and understand why they progress so that we can develop more personalized treatment for patients in future."

Full Article

 

 

AI Technique Detects, Classifies Galaxies in Astronomy Image Data
UC Santa Cruz Newscenter
Tim Stephens
May 12, 2020


Researchers at the University of California, Santa Cruz (UCSC) have developed a new artificial intelligence (AI) program that can analyze image data to detect and classify galaxies and stars in large datasets from astronomical surveys. Morpheus is a deep learning framework that utilizes AI technologies developed for applications like image and speech recognition. The algorithm employs as input the original image data in standard digital file format used by astronomers, with pixel-level classification to handle complicated images. UCSC's Ryan Hausen said, "Morpheus provides detection and morphological classification ... at a level of granularity that doesn't currently exist."

Full Article

 

Waymo Develops AI To Simulate Autonomous-Vehicle Sensor Data For Driving Simulations

VentureBeat (5/20, Wiggers) reports Waymo researchers are starting “to leverage AI to generate camera images for simulation by using sensor data collected” by the company’s autonomous vehicles, with a paper published by company researchers outlining their SurfelGAN technique that “uses texture-mapped surface elements to reconstruct scenes and camera viewpoints for positions and orientations.” With SurfelGAN, Waymo researchers pull from “feeds from real-world lidar sensors and cameras” that the system then “renders...from various distances and viewing angles.” In an emailed statement to VentureBeat, a Waymo spokesperson explained, “In simulation, when a trajectory of a self-driving car and other agents (e.g. other cars, cyclists, and pedestrians) changes, the system generates realistic visual sensor data that helps us model the scene in the updated environment.”

 

EdWeek Publishes Special Report On How AI Is Being Used In Schools

Education Week (5/19) on Tuesday published a special report titled, “Tech Use Surges Under COVID-19: Where Does Artificial Intelligence Fit In?” According to EdWeek, the report “examines how artificial intelligence is currently being used in schools and how the surge in tech use during the COVID-19 school building closures might affect the use of AI once school buildings reopen; important lessons the K-12 system can learn about AI use from the health care, military, and higher education sectors; and perspectives on the upsides and downsides of expanding the use of artificial intelligence in schools.”

Education Week (5/20, Rauf) reports that some school districts might “shy away from AI tools in the immediate future while teachers and staff adjust to a new digital ecosystem already pushing the boundaries for many.” But voice-activated devices such as Alexa, Siri, and Google Home are increasingly “being used as teaching assistants in classes.” Additionally, “AI is helping districts identify students who are at risk of dropping out, and math tutors and automated essay-scoring systems that have been used for decades now feature more sophisticated AI software than they did in the past.”

According to Education Week (5/20, Davis), “Sectors of our economy such as the military, health care, and higher education are much further along than the K-12 system in incorporating artificial intelligence systems and machine learning into their operations.” However, many of those uses “can spark ideas for applications in K-12 that may be more pertinent than ever imagined.”

Education Week (5/20, Rauf) interviews Robert Murphy, an AI expert who has “cautioned that artificial intelligence is not likely to transform education the same way it already has other high-profile industries such as transportation, drug discovery and health care.” Instead, Murphy has “argued that AI will continue to play a back-up role to enhance the classroom experience, assisting teachers with second-language learning, feedback on writing drafts, early diagnosis of reading problems, and through adaptive instruction for remediation.”

Education Week (5/20, Lieberman) interviews Deb Norton, a K-12 tech expert in Wisconsin who has “been connecting an increasingly diverse set of educators with the possibilities of AI as a teaching tool.” She teaches a course on AI for K-12 teachers, administrators, and tech leaders that “includes sections on the definition of artificial intelligence; machine learning; voice experiences and chatbots; and the role of data in AI systems.” Norton says “AI could become a really big part of virtual learning and at-home learning,” but she just doesn’t “think we’re quite there yet.”

Daniel Tauritz

2020/05/30 8:08:11
To: ai-b...@googlegroups.com

Amazon's AI Tool Can Plan Collision-Free Paths for 1,000 Warehouse Robots
VentureBeat
Kyle Wiggers
May 18, 2020


Researchers at the University of Southern California and Amazon Robotics have developed a solution to the problem of lifelong multi-agent path finding (MAPF), which involves moving agents to goal locations without collisions. The solution has been applied to Amazon's warehouses, where drive units are in constant motion as they move inventory pods or flat packages between locations. The solution models the MAPF problem as a graph containing vertices that correspond to locations, a series of edges that correspond to connections between neighboring locations, and a set of agents like drive units. The solution updates each agent's start and goal locations at every timestep, calculates the number of steps each agent needs to visit all locations, and moves the agents along collision-free paths. The researchers said their method outperformed all others in terms of throughput.
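
A minimal way to see the graph formulation is single-agent search in space-time, where a cell reserved by another agent at a given timestep is off limits. The breadth-first sketch below (with a "wait" action) shows the reservation idea only; the actual system uses far more capable MAPF solvers.

from collections import deque

def plan_path(grid, start, goal, reservations, max_t=50):
    # BFS over (cell, timestep) states; a cell can be entered at
    # time t+1 only if no other agent has reserved it then.
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (r, c), t, path = frontier.popleft()
        if (r, c) == goal:
            return path
        if t >= max_t:
            continue
        for dr, dc in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc  # (0, 0) means "wait in place"
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0
                    and (nr, nc, t + 1) not in reservations
                    and ((nr, nc), t + 1) not in seen):
                seen.add(((nr, nc), t + 1))
                frontier.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None

# A second drive unit routes around a cell reserved at timestep 1.
grid = [[0, 0, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (0, 2), reservations={(0, 1, 1)}))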

Full Article

 

 

Don’t Stand So Close to Me: AI Cameras Police Social Distancing at Work
The Wall Street Journal
Parmy Olson
May 15, 2020


Artificial intelligence-powered sensors are being repurposed to meet a surge in demand from organizations trying to comply with government guidelines on social distancing due to the COVID-19 pandemic. However, privacy advocates are concerned the technology could be used to track individuals and monitor productivity. Even if the initial implementation is for health and safety in the workplace, in the future vendors could repurpose their technology to monitor other kinds of behavior, according to Albert Gidari, director of privacy at the Stanford Center for Internet and Society. Employers should be transparent with workers and consider removing the technology after the pandemic is over, Gidari adds.

Full Article

 

 

Deep Learning Accurately Stains Digital Biopsy Slides
MIT News
Becky Ham
May 22, 2020


A research team from the Massachusetts Institute of Technology (MIT)'s Media Lab, Stanford University’s School of Medicine, and Harvard Medical School used deep learning algorithms to computationally stain digital tissue biopsy slides, producing images indistinguishable from traditionally stained slides. The researchers trained the algorithms on 87,000 physically dyed slides, after which the neural networks generated images that were 90% to 96% similar to the hand-dyed slides. This process enables researchers and clinicians to analyze slides for multiple types of diagnostic and prognostic tests, without extracting additional tissue sections. The researchers also analyzed the steps by which neural networks stained the slides, which is critical for clinical translation of deep learning systems. MIT's Pratik Shah said, "This provides explanation and validation of randomized clinical trials of deep learning models and their findings for clinical applications."

Full Article

 

 

More Efficient Drug Development With the Help of Computer Models
Leiden University (The Netherlands)
May 19, 2020


Researchers at Leiden University's Center for Computational Life Sciences (CCLS) in the Netherlands are using computer models to reduce costs and speed the process of drug development in an effort to find a vaccine for the novel coronavirus. The researchers are using machine learning combined with multi-criteria optimization algorithms, which make it possible to compare different alternatives or scenarios on the basis of multiple criteria. The team is using this method to search all possible molecules for optimal candidate molecules with respect to conflicting criteria. Said CCLS researcher Michael Emmerich, "Our research can yield improvements in the mid-term and long-term. Once the first drugs are available, the search for alternative drugs will continue, that is, drugs with less side effects, less costly, or applicable to a broader range of patients.”
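
Comparing candidates on conflicting criteria comes down to Pareto dominance: a molecule is kept only if no other candidate is at least as good on every criterion and strictly better on at least one. A minimal filter in Python, with made-up molecule scores (all criteria minimized):

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical molecules scored on (side effects, cost, binding error).
molecules = {"m1": (0.2, 5.0, 0.10), "m2": (0.1, 9.0, 0.12),
             "m3": (0.3, 6.0, 0.30), "m4": (0.1, 4.0, 0.11)}
front = pareto_front(list(molecules.values()))
print([name for name, score in molecules.items() if score in front])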

Full Article

 

Congressional Bill Seeks To Build Out NSF, Expand Research Funding Channels

Inside Higher Ed (5/28) reports the Association of Public and Land-grant Universities is hailing the introduction of national legislation to “dramatically expand the National Science Foundation [NSF] and pump $100 billion into the agency over five years to increase research in areas like artificial intelligence, machine learning, robotics and advanced manufacturing.” Under the “Endless Frontiers Act, the NSF would be renamed the National Science and Technology Foundation. The new agency would have two deputy directors – one to oversee the NSF’s current operations and another to lead a new technology directorate to advance technology in 10 areas as the U.S. faces greater competition from China and other countries.”

Daniel Tauritz

2020/06/07 8:46:28
To: ai-b...@googlegroups.com

'Artificial Chemist' Combines AI, Robotics to Conduct Autonomous R&D
NC State News
Matt Shipman
June 4, 2020


North Carolina State University (NC State) and University at Buffalo researchers have developed a technology combining artificial intelligence (AI) with robotics to conduct chemical reactions, in order to expedite research and development and synthesis of commercially desirable materials. The Artificial Chemist platform uses a "brain" AI program that characterizes materials being synthesized by an experiment-conducting "body," and applies this data to autonomously decide on the next set of experimental conditions. NC State's Milad Abolhasani said, "We tried to mimic the process that humans use when making decisions, but more efficiently." In proof-of-concept experiments, Artificial Chemist was able to identify and produce the best possible quantum dots for any color in 15 minutes or less.

Full Article

 

 

Butterfly Landmines Mapped by Drones, Machine Learning
The Engineer (UK)
May 27, 2020


Researchers at the State University of New York at Binghamton have found that drones and advanced machine learning can be used to detect improvised explosive devices (IEDs) and butterfly landmines (surface plastic landmines with low-pressure triggers). The researchers used convolutional neural networks (CNNs) to develop a method for automating detection and mapping of landmines. According to the researchers, a CNN-based approach is much faster than manually counting landmines from an aerial image, and unlike subjective human visual detection, it is quantitative and reproducible. Binghamton’s Alek Nikulin said drone-assisted mapping and automated detection of scatterable mine fields “would assist in addressing the deadly legacy of widespread use of small scatterable landmines in recent armed conflicts and allow to develop a functional framework to effectively address their possible future use.”

Full Article

 

 

Gamemakers Inject AI to Develop More Lifelike Characters
Wired
Will Knight
May 27, 2020


Researchers at game developer Electronic Arts and Canada’s University of British Columbia are testing artificial intelligence (AI) techniques to accelerate game development and make characters more lifelike. The team is using reinforcement learning to automatically animate humanoid game characters. The researchers trained a machine learning model to identify and recreate statistical patterns in motion-capture data, then used reinforcement learning to train another model to replicate realistic motion with a specific goal, like running toward a ball. The result is animations not included in motion-capture data. Automating the animation process with AI could save game developers millions of dollars while creating games with realistic characters that are sufficiently efficient to run on a smartphone.

Full Article

 

 

A Good Egg: Robot Chef Trained to Make Omelets
University of Cambridge (UK)
June 1, 2020


Engineers at the University of Cambridge in the U.K. and Turkish domestic appliance company Beko trained a robot to make omelets via machine learning, refining its culinary skills to produce a consistently tasty dish. The machine learning method used Bayesian inference to extract the greatest amount of information possible from a limited number of data samples. To account for the subjectivity of human taste, Cambridge's Fumiya Iida said, "We needed to tweak the machine learning algorithm—the so-called batch algorithm—so that human tasters could give information based on comparative evaluations, rather than sequential ones." Added Iida, “The omelettes, in general, tasted great; much better than expected!”
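
Learning from comparative rather than sequential (absolute) evaluations is commonly modeled with pairwise-preference methods such as Bradley-Terry; the toy sketch below recovers a ranking of three hypothetical recipes from simulated "A tasted better than B" judgments. It illustrates the general idea only, not the batch algorithm the Cambridge team tweaked:

import numpy as np

# Three hypothetical omelette recipes with latent "tastiness" scores; the
# taster only reports which of two omelettes was better.
# Bradley-Terry model: P(i beats j) = sigmoid(s_i - s_j).
rng = np.random.default_rng(1)
true_scores = np.array([0.0, 1.0, 2.0])
duels = [(i, j) for i in range(3) for j in range(3) if i != j for _ in range(20)]
wins = [(i, j) for (i, j) in duels
        if rng.random() < 1.0 / (1.0 + np.exp(true_scores[j] - true_scores[i]))]

# Gradient ascent on the Bradley-Terry log-likelihood.
scores, lr = np.zeros(3), 0.05
for _ in range(100):
    for i, j in wins:  # taster preferred recipe i over recipe j
        p = 1.0 / (1.0 + np.exp(scores[j] - scores[i]))
        scores[i] += lr * (1.0 - p)
        scores[j] -= lr * (1.0 - p)
print("recovered ranking (worst to best):", np.argsort(scores))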

Full Article

 

 

Algorithm Quickly Simulates Roll of Loaded Dice
MIT News
Steve Nadis
May 28, 2020


Massachusetts Institute of Technology (MIT) researchers have created an algorithm to produce random numbers with the best mix of speed, accuracy, and low memory currently available for certain tasks. The Fast Loaded Dice Roller (FLDR) algorithm simulates the roll of dice to generate random integers. The dice can have any number of sides, and are loaded to make some sides more likely to come up than others, constraining randomness to meet a preset probability distribution. FLDR ensures perfectly loaded dice that exactly achieve the specified probabilities. MIT's Vikash Mansinghka thinks FLDR could boost the efficiency of Monte Carlo simulation and inference techniques.
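
For contrast with FLDR, the baseline below rolls a loaded die by naive inverse-CDF sampling over the cumulative weights; FLDR's contribution is achieving the exact target probabilities with near-optimal use of random bits and memory, which this sketch does not attempt:

import numpy as np

weights = np.array([1, 1, 2, 4])      # a 4-sided die, loaded toward the last side
cdf = np.cumsum(weights / weights.sum())

rng = np.random.default_rng(42)
def roll():
    # Map a uniform draw in [0, 1) to the first CDF bucket that exceeds it.
    return int(np.searchsorted(cdf, rng.random(), side="right"))

rolls = [roll() for _ in range(10_000)]
print(np.bincount(rolls) / len(rolls))  # approx [0.125, 0.125, 0.25, 0.5]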

Full Article

 

Daniel Tauritz

Unread,
2020/06/15 7:55:41
To: ai-b...@googlegroups.com

Facebook's TransCoder AI Converts Code From One Programming Language Into Another
VentureBeat
Kyle Wiggers
June 9, 2020


Facebook says its TransCoder can convert code from one high-level programming language into another. The system, which Facebook researchers describe as “a neural transcompiler,” uses an unsupervised learning approach to translate between languages like C++, Java, and Python. The researchers trained TransCoder on a public GitHub corpus featuring more than 2.8 million open source repositories. To evaluate its capabilities, the researchers extracted 852 parallel functions in C++, Java, and Python from the online GeeksforGeeks platform and developed a new computational accuracy metric that tests whether hypothesis functions generate the same outputs as a reference when given the same inputs. Wrote the researchers, “TransCoder can easily be generalized to any programming language, does not require any expert knowledge, and outperforms commercial solutions by a large margin.”
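
The computational-accuracy idea is straightforward to sketch: a translated (hypothesis) function counts as correct if it reproduces the reference function's outputs on shared test inputs. The two functions below are hypothetical stand-ins, not drawn from the GeeksforGeeks benchmark:

def reference_gcd(a, b):        # stand-in for the source-language (e.g., C++) function
    while b:
        a, b = b, a % b
    return a

def hypothesis_gcd(a, b):       # stand-in for the model's translated output
    return a if b == 0 else hypothesis_gcd(b, a % b)

test_inputs = [(48, 18), (7, 13), (100, 10), (0, 5), (270, 192)]
matches = sum(reference_gcd(*x) == hypothesis_gcd(*x) for x in test_inputs)
print(f"computational accuracy: {matches / len(test_inputs):.0%}")

Unlike exact string match, this metric accepts any translation that behaves identically, which is why it suits transcompilation, where many surface forms are equally correct.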

Full Article

 

 

Engineers Put Tens of Thousands of Artificial Brain Synapses on a Single Chip
MIT News
Jennifer Chu
June 8, 2020


Massachusetts Institute of Technology (MIT) engineers have designed a 'brain on a chip' composed of tens of thousands of artificial brain synapses, or memristors. MIT's Jeehwan Kim said, "Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium." The silicon-copper alloy chip can "remember" and replicate stored images many times over, in versions that are sharper and cleaner compared with unalloyed designs. The goal is to build actual neural network hardware for portable artificial intelligence systems, rather than constructing the synapses as software.

Full Article

 

University Of Cincinnati Assistant Professor Developing COVID-19 Test Using A Smartphone Camera

ZDNet (6/11, Ray) reports, “Aashish Priye, an assistant professor in the Department of Chemical and Environmental Engineering at the University of Cincinnati, has been steadily refining an ultra-portable test-in-a-box, if you will, for infectious” diseases like COVID-19. Using only “a heating plate, an LED light source, and a smartphone, Priye’s design for a portable, battery-operated kit can be used to test for the presence of virus in under an hour.” In his newest “version, a deep learning neural network known as a convolutional neural network, or ConvNet, can be used to tell how much viral material is in a sample of blood or saliva, based on the pixels captured by the smartphone camera.”

 

 

IBM Will No Longer Offer, Develop, or Research Facial-Recognition Technology
The Verge
Jay Peters
June 8, 2020


IBM CEO Arvind Krishna declared in a letter to Congress that the company will discontinue developing, researching, and offering general-purpose facial-recognition or analysis software. He stated that IBM opposes the technology's application "for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." Even as artificial intelligence has drastically improved facial recognition, the technology has been found to exhibit bias along lines of age, race, and ethnicity, which can reduce reliability for law enforcement and security, and lead to civil rights abuses. IBM in 2018 issued a public dataset to help reduce bias, as part of the training data for a facial-recognition model. However, the company was found to be sharing a separate dataset of about 1 million photos taken from Flickr without subjects' consent.

Full Article

 

 

AI Dramatically Improves Accuracy of Classic Eye Exam
Science
Marina Wang
June 3, 2020


Stanford University researchers have developed an artificial intelligence-driven online vision test that generates dramatically more accurate diagnoses than the classic Snellen chart eye exam. Stanford's Chris Piech and colleagues designed the Stanford acuity test (StAT) to have users first calibrate their screen size by adjusting a box on a Web page to the size of a credit card. After users input their distance from the screen, the test displays an "E" in one of four orientations, and the algorithm uses statistics to predict a vision score based on the answer. As the test progresses, the algorithm produces increasingly accurate score predictions. The team ran the StAT through 1,000 computer simulations mimicking actual patients and found that it decreased diagnostic error by 74% compared with the Snellen test.
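
The adaptive scoring can be pictured as a Bayesian update over the patient's acuity threshold, sharpened after each answer. The sketch below uses an illustrative psychometric curve and made-up parameters; it mirrors the statistical idea, not Stanford's published model:

import numpy as np

thresholds = np.linspace(-0.5, 1.5, 201)   # candidate acuity thresholds (logMAR-like)
posterior = np.ones_like(thresholds) / len(thresholds)

def p_correct(letter_size, threshold):
    # Probability of answering correctly; floored at 0.25 because the "E"
    # has four possible orientations, so guessing succeeds 25% of the time.
    p_see = 1.0 / (1.0 + np.exp(-(letter_size - threshold) / 0.1))
    return 0.25 + 0.75 * p_see

# Each trial: show a letter of a given size, record whether the answer
# was correct, and reweight every candidate threshold accordingly.
for letter_size, correct in [(1.0, True), (0.5, True), (0.2, False)]:
    like = p_correct(letter_size, thresholds)
    posterior *= like if correct else (1.0 - like)
    posterior /= posterior.sum()

print(f"estimated threshold: {thresholds[np.argmax(posterior)]:.2f}")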

Full Article

 

 

Daniel Tauritz

Unread,
2020/06/21 11:44:19
To: ai-b...@googlegroups.com

Circular Reasoning: Spiraling Circuits for More Efficient AI
University of Tokyo
June 14, 2020


Researchers at the University of Tokyo Institute of Industrial Science in Japan stacked resistive random-access memory modules for artificial intelligence (AI) applications in a novel three-dimensional spiral. The modules feature oxide semiconductor access transistors, which boost the efficiency of the machine learning training process. The team further enhanced energy efficiency via a system of binarized neural networks, which restricts the parameters to be either +1 or -1, rather than any number, compressing the volume of data to be stored. When the researchers had the device interpret a database of handwritten digits, they found that increasing the size of each circuit layer could improve algorithmic accuracy to approximately 90%. The University of Tokyo's Masaharu Kobayashi said, "In order to keep energy consumption low as AI becomes increasingly integrated into daily life, we need more specialized hardware to handle these tasks efficiently."
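
The binarization step itself is simple to illustrate: keep full-precision "shadow" weights for training but run the forward pass with their signs, so each stored parameter costs one bit. A minimal numpy sketch (the straight-through training detail is only noted in comments, not implemented):

import numpy as np

rng = np.random.default_rng(0)
real_weights = rng.normal(size=(4, 8))            # full-precision "shadow" weights
binary_weights = np.where(real_weights >= 0, 1, -1)

x = rng.normal(size=8)
y = binary_weights @ x                             # forward pass uses only +/-1
print(binary_weights[0], y.shape)
# During training, gradients update real_weights; binary_weights are
# re-derived from their signs each step (straight-through estimator).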

Full Article

 

 

Engineers Offer Smart, Timely Ideas for AI Bottlenecks
Rice University
Jade Boyd
June 11, 2020


Engineers at Rice University, Stanford University, and the University of California, Santa Barbara have demonstrated techniques for designing data-centric computing hardware and co-designing hardware with machine-learning algorithms, which together can enhance energy efficiency by up to two orders of magnitude. Rice's Yingyan Lin said large-scale deep neural networks spend more than 90% of the power needed to run a system on transferring data between memory and processor, a consequence of the von Neumann bottleneck. Two proposed complementary data-centric processing optimization methods overcome this bottleneck. The Time-domain, In-Memory Execution, LocalitY (TIMELY) architecture removes the inefficiency caused by frequent access to the main memory for managing intermediate input and output, and by the interface between local and main memories. SmartExchange is an energy-efficient design combining algorithmic and accelerator hardware innovations that, in essence, trades higher-cost memory accesses for lower-cost computation.

Full Article

 

 

 

Computer Chips Could Power AI to Next Level
The Wall Street Journal
Agam Shah
June 2, 2020


Neuromorphic chips designed for artificial intelligence (AI) could facilitate new applications. Dan Hutcheson at semiconductor research company VLSI Research said such chips diverge from the conventional approach of producing answers logically, deriving probable answers through associations and patterns among available data. Hutcheson said some processors have neural networks that calculate probabilities so robots and drones have a better contextual understanding of their surroundings, and can make smarter decisions. Cerebras Systems' AI chip can run onboard neural networks for data processing; Argonne National Laboratory is using the processor to research drugs and vaccines for Covid-19. AI chips like Intel's specialized graph processor also have implications for contact tracing, transferring data between memory and storage at speeds that conventional chips cannot match, to draw connections between data faster.

Full Article

*May Require Paid Registration

 

 

AI Makes Blurry Faces Look More Than 60 Times Sharper
Duke Today
Robin A. Smith
June 11, 2020


Researchers at Duke University have created an artificial intelligence (AI) tool that can turn a blurry, unrecognizable picture of a face into a sharp, realistic image with up to 64 times the original resolution. However, the method, known as PULSE, cannot be used to identify people, such as by making images from a security camera clearer, because the faces it produces may not actually exist. Using a generative adversarial network machine learning tool, the system weeds through AI-generated examples of high-resolution faces to find images that appear similar to the input image when reduced to the same size. Said Duke’s Cynthia Rudin, “Never have super-resolution images been created at this resolution before with this much detail.”
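
The underlying trick can be sketched as optimization in a generator's latent space: find a code whose decoded high-resolution face, once downscaled, matches the blurry input. The toy generator below is a stand-in for the GAN actually used; sizes and learning rate are assumptions:

import torch

generator = torch.nn.Sequential(torch.nn.Linear(64, 3 * 64 * 64))  # toy G
downscale = torch.nn.AvgPool2d(kernel_size=8)                       # 64x64 -> 8x8

low_res = torch.rand(1, 3, 8, 8)            # the blurry input face
z = torch.randn(1, 64, requires_grad=True)  # latent code to optimize
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    high_res = generator(z).view(1, 3, 64, 64)
    # Penalize mismatch between the downscaled candidate and the input.
    loss = torch.nn.functional.mse_loss(downscale(high_res), low_res)
    loss.backward()
    opt.step()
# high_res now downscales to (approximately) the input, but nothing
# guarantees it depicts the true underlying person -- hence no ID use.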

Full Article

 

 

Researchers Train Drones to Perform Flips, Rolls, Loops with AI
VentureBeat
Kyle Wiggers
June 12, 2020


Researchers at Intel and Switzerland's University of Zurich and ETH Zurich have created an artificial intelligence system that enables autonomous drones to perform aerial acrobatics, including barrel rolls, loops, and flips. The system trains the drones in simulation, using demonstrations from a controller module, to enable real-world deployment. The researchers employed the open source Gazebo simulator to train the controller module; over 10 training runs, the controller successfully completed each maneuver 100% of the time, without intervention or breaks. The researchers said, "We have shown that designing appropriate abstraction of the input facilitates direct transfer of the policies from simulation to physical reality."

Full Article

 

 

Improving the Operation, Performance of Wi-Fi Networks for the 5G/6G Ecosystem
Universitat Pompeu Fabra (Spain)
June 5, 2020


Researchers at Universitat Pompeu Fabra in Spain have found that machine learning can improve the operation and performance of 5G/6G Wi-Fi networks. They focused on Wi-Fi networks with multiple access points and found that, using reinforcement learning techniques, stations (such as PCs, tablets, and mobile phones) can determine dynamically which access point offers the best service for their needs. Each station autonomously considers how the Wi-Fi network is behaving and chooses an access point based on the benefits it can provide. The researchers developed an algorithm that shortens station learning time, and found that the entire network benefits if even just a few stations implement the algorithm.
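
The station-side learning problem maps naturally onto a multi-armed bandit. A minimal epsilon-greedy sketch, with hypothetical per-access-point throughputs (this shows the flavor of the approach, not the researchers' algorithm):

import numpy as np

rng = np.random.default_rng(3)
true_throughput = [20.0, 55.0, 35.0]   # Mbps per access point, unknown to the station
estimates, counts = np.zeros(3), np.zeros(3)

for t in range(500):
    if rng.random() < 0.1:             # explore: try a random access point
        ap = int(rng.integers(3))
    else:                              # exploit: use the best estimate so far
        ap = int(np.argmax(estimates))
    reward = true_throughput[ap] + rng.normal(scale=5.0)  # noisy observed throughput
    counts[ap] += 1
    estimates[ap] += (reward - estimates[ap]) / counts[ap]  # running mean update

print(f"station settles on access point {np.argmax(estimates)}")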

Full Article

 

 

Inside the NBA's Plan to Use Smart Technology, Big Data to Keep Players Safe From Coronavirus
CNBC
Jessica Golden
June 17, 2020


The NBA plans to use smart technologies to protect players and staff from the coronavirus as 22 teams prepare to play games at Disney's Wide World of Sports complex in Orlando. In addition to subjecting players to extensive testing, quarantining them from their families, and imposing strict rules for social behavior, each will be given a "smart" ring, a Disney MagicBand, an individual pulse oximeter, and a smart thermometer. Oura's titanium rings can measure body temperature, respiratory functions, and heart rate and predict Covid-19 symptoms up to three days in advance with 90% accuracy. The MagicBand will act as a hotel room key, allow players to check in at security checkpoints and coronavirus screenings, and help with contact tracing. The league also is considering a small device that will set off an audio alert when the wearer is within six feet of another person for longer than five seconds.

Full Article

 

Daniel Tauritz

Unread,
2020/06/27 12:15:44
To: ai-b...@googlegroups.com

AI Researchers Condemn Predictive Crime Software, Citing Racial Bias, Flawed Methods
TechCrunch
Taylor Hatmaker
June 24, 2020


A coalition of more than 1,000 researchers, academics, and experts in artificial intelligence has condemned the claims of soon-to-be-published research on predictive crime software. The opponents sent an open letter to the publisher Springer, asking that it reconsider publishing the controversial research. The authors, Harrisburg University researchers Roozbeh Sadeghian and Jonathan W. Korn, claim their facial recognition software can forecast whether a person will become a criminal, but the coalition expressed doubts about their findings, citing "unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years." The letter from the coalition said, "The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups."

Full Article

 

 

Deep Learning-Based Surrogate Models Could Hasten Scientific Discoveries
Lawrence Livermore National Laboratory
June 17, 2020


Lawrence Livermore National Laboratory (LLNL) researchers have designed neural network-based surrogate models that can outperform computationally expensive simulators. The team developed a deep learning-driven Manifold & Cyclically Consistent (MaCC) surrogate model incorporating a multi-modal neural network for accurately mimicking complex scientific processes, including high-energy density physics involved in inertial confinement fusion (ICF). Applying MaCC to ICF implosions conducted at LLNL’s National Ignition Facility showed it could adequately replicate the simulator and outperform state-of-the-art surrogate models across a broad spectrum of metrics. LLNL's Timo Bremer said the surrogate model's ability to analyze a large volume of complex data has ramifications for stockpile modernization, and could potentially lead to new scientific discoveries and a novel class of simulation performance and analysis methods.
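
A surrogate model in miniature: fit a small network to input/output pairs from an "expensive" simulator, then answer queries from the network instead. The simulator below is a toy one-dimensional function, nothing like an ICF code, and the architecture is an assumption:

import torch

def simulator(x):                      # stand-in for an expensive simulation
    return torch.sin(3 * x) + 0.5 * x

# Collect training pairs by running the "simulator" offline.
x_train = torch.linspace(-2, 2, 256).unsqueeze(1)
y_train = simulator(x_train)

surrogate = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(surrogate(x_train), y_train)
    loss.backward()
    opt.step()

# Cheap inference in place of a full simulation run:
print(surrogate(torch.tensor([[0.5]])).item(), simulator(torch.tensor(0.5)).item())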

Full Article

 

 

AI Helps Decode Mysterious Earthquake Swarms
The Wall Street Journal
Daniela Hernandez
June 24, 2020


Seismologists at the California Institute of Technology (Caltech) fed data from a flurry of small earthquakes near Cahuilla from 2016 to 2019 into an artificial intelligence (AI) algorithm and generated one of the most detailed descriptions of an earthquake swarm to date. The AI system scanned recorded earthquake data and identified the times the quakes occurred and their locations. The software detected more than 22,000 earthquakes ranging from magnitude 0.7 to 4.4; most were imperceptible to humans and to less advanced methods of analysis. Said Caltech's Zachary Ross, "These smaller earthquakes are happening all the time, which means they're filling in gaps between all the larger ones. It's connecting the dots better." The findings are important because seismologists do not fully understand the geologic events that lead to large earthquakes.

Full Article

*May Require Paid Registration

 

 

AI Takes Player Performance Analysis to New Dimension
Loughborough University
June 18, 2020


Computer scientists at Loughborough University in the U.K. have developed artificial intelligence algorithms that could revolutionize player performance analysis for football (soccer) clubs. The researchers designed a hybrid system that accelerates and supplements human data entry with camera-based automation to meet demand for timely performance data generated from large amounts of videos. The team applied the latest computer vision and deep learning technologies to identify actions by detecting players' body poses and limbs, and trained the deep neural network to track individual players and capture data on individual performance throughout the match video. Loughborough’s Baihua Li said the new technology “will allow a much greater objective interpretation of the game as it highlights the skills of players and team cooperation.”

Full Article

 

Chinese Researchers Unveil AI That Can Turn Simple Sketches Into Fake Photorealistic Pictures
Daily Mail (U.K.)
James Pero
June 17, 2020


Researchers at the Chinese Academy of Sciences have created an artificial intelligence (AI) that can convert simple sketches of a face into photorealistic images, extrapolating from rough and even incomplete sketches. The DeepFaceDrawing AI analyzes a drawing's details, then checks each individual feature separately against a database of facial features to construct its own image. Said the researchers, "Our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches." The researchers said the technology aims to help users with little drawing skill produce high-quality images.

Full Article

 

Amazon Deploys AI 'Distance Assistants' to Notify Warehouse Workers if They Get Too Close
The Verge
James Vincent
June 16, 2020


Amazon has deployed a new artificial intelligence (AI) tool to help employees follow social distancing rules. The "Distance Assistant" uses a TV screen, depth sensors, and an AI-enabled camera to track employees' movements and provide real-time feedback. Circles around employees' feet flash red on the screen if they come closer than six feet to one another, warning them to keep a safe distance apart. Amazon says the tool is a "standalone unit" that only requires power, implying it doesn’t store data about worker movements. The company plans to open source the technology so others can replicate and deploy the devices.
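
The geometric core of such a device is easy to sketch: given 3D positions recovered from the depth sensor, flag any pair of people closer than six feet. The positions below are made up, and real systems must first detect and localize people from the camera feed:

import numpy as np

positions = np.array([[0.0, 0.0, 4.0],   # meters, one row per detected person
                      [1.2, 0.3, 4.1],
                      [5.0, 0.0, 6.0]])
SIX_FEET = 1.83                           # meters

# All pairwise Euclidean distances, then flag violating pairs once each.
diffs = positions[:, None, :] - positions[None, :, :]
dist = np.linalg.norm(diffs, axis=-1)
too_close = np.argwhere((dist < SIX_FEET) & (dist > 0))
for i, j in too_close[too_close[:, 0] < too_close[:, 1]]:
    print(f"warn workers {i} and {j}: {dist[i, j]:.2f} m apart")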

Full Article

 

MIT Researchers Develop More Efficient Neural Network

Science Blog (6/20) reports that, in the area of neural networks, researchers at MIT have “made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.” Jesús del Alamo, professor in the Department of Electrical Engineering and Computer Science, said, “I think the bottleneck today for building [neural network] applications is energy efficiency. It just takes too much energy to train these systems.” Bilge Yildiz, Professor of Nuclear Science and Engineering and Professor of Materials Science and Engineering, said, “What we have demonstrated here...even though it’s not an optimized device, gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain.” The findings were published in Nature Communications.

 
