Facial Recognition, Other 'Risky' AI Set for Constraints in EU
Bloomberg
Natalia Drozdiak
April 21, 2021
The European Commission has proposed new rules constraining the use of facial recognition and other artificial intelligence applications, and threatening fines for companies that fail to comply. The rules would apply to companies that, among other things, exploit vulnerable groups, deploy subliminal techniques, or score people’s social behavior. The use of real-time remote biometric identification systems by law enforcement also would be prohibited unless used specifically to prevent a terror attack, to find missing children, or to respond to other public security emergencies. Other high-risk applications, including those for self-driving cars and in employment or asylum decisions, would have to undergo checks of their systems before deployment. The proposed rules need to be approved by the European Parliament and by individual member-states before they could become law.
Brain-on-a-Chip Would Need Little Training
KAUST Discovery (Saudi Arabia)
April 20, 2021
Researchers at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia used a spiking neural network (SNN) on a microchip as a foundation for developing more efficient hardware-based artificial intelligence systems. KAUST's Wenzhe Guo said SNNs mimic the biological nervous system and can process information faster and more efficiently than conventional artificial neural networks. The researchers created a brain-on-a-chip using a standard FPGA microchip and a spike-timing-dependent plasticity model, which allowed the neuromorphic computing system to learn real-world data patterns with little training. Compared to other neural network platforms, the brain-on-a-chip was more than 20 times faster and 200 times more energy-efficient. Guo said, "Our ultimate goal is to build a compact, fast and low-energy brain-like hardware computing system."
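The learning rule at the heart of such a chip can be illustrated in a few lines. The sketch below implements a pair-based spike-timing-dependent plasticity update in Python; the amplitudes, time constant, and function names are illustrative assumptions, not values from the KAUST design.

```python
# Illustrative sketch of a pair-based spike-timing-dependent plasticity (STDP)
# rule of the kind implemented on the chip; all constants are assumed.
import numpy as np

A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # plasticity time constant in ms (assumed)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic one (dt > 0), the
    synapse is strengthened; otherwise it is weakened. The magnitude decays
    exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)

# A synapse repeatedly seeing "pre just before post" drifts toward stronger weights.
w = 0.5
for t_pre, t_post in [(0, 5), (100, 103), (200, 190)]:
    w = np.clip(w + stdp_delta_w(t_pre, t_post), 0.0, 1.0)
    print(f"dt={t_post - t_pre:+4d} ms -> w={w:.3f}")
```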
U.S. Banks Deploy AI to Monitor Customers, Workers Amid Tech Backlash
Reuters
Paresh Dave; Jeffrey Dastin
April 19, 2021
Several U.S. banks, including City National Bank of Florida, JPMorgan Chase & Co., and Wells Fargo & Co., are rolling out artificial intelligence systems to analyze customer preferences, monitor employees, and detect suspicious activity near ATMs. City National will commence facial recognition trials in early 2022, with the goal of replacing less-secure authentication systems. JPMorgan is testing video analytics technology at some Ohio branches, and Wells Fargo uses the technology in an effort to prevent fraud. Concerns about the use of such technology range from errors in facial matches and the loss of privacy to the disproportionate use of monitoring systems in lower-income and non-white communities. Florida-based Brannen Bank's Walter Connors said, "Anybody walking into a branch expects to be recorded. But when you're talking about face recognition, that's a larger conversation."
Combining News Media, AI to Rapidly Identify Flooded Buildings
Tohoku University (Japan)
April 16, 2021
A machine learning model developed by researchers at Japan's Tohoku University can help identify flooded buildings within 24 hours of a disaster using news media photos. The model was applied to Mabi-cho, Kurashiki city in Okayama Prefecture, which experienced heavy rains and flooding in 2018. After identifying press photos and geolocating them based on landmarks and other visual cues, the researchers used synthetic aperture radar (SAR) PALSAR-2 images from the Japan Aerospace Exploration Agency to approximate the conditions of unknown areas. Buildings surrounded by floodwaters or within non-flooded areas were classified using a support vector machine. About 80% of the buildings classified by the model as flooded were actually flooded during the event. Tohoku's Shunichi Koshimura said, "Our model demonstrates how the rapid reporting of news media can speed up and increase the accuracy of damage mapping activities, accelerating disaster relief and response decisions."
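The final classification step lends itself to a compact illustration. The sketch below trains a scikit-learn support vector machine on synthetic per-building features; the features themselves (a drop in SAR backscatter and distance to the nearest geolocated press photo) are hypothetical stand-ins, not the study's actual inputs.

```python
# A minimal sketch of the flooded/non-flooded building classification step.
# The feature set is hypothetical, for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
# Flooded buildings tend to show a larger drop in SAR backscatter.
backscatter_drop = np.concatenate([rng.normal(6, 2, n // 2), rng.normal(1, 2, n // 2)])
photo_distance_m = np.concatenate([rng.normal(150, 80, n // 2), rng.normal(600, 200, n // 2)])
X = np.column_stack([backscatter_drop, photo_distance_m])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = flooded

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```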
Some FDA-Approved AI Medical Devices Are Not 'Adequately' Evaluated, Stanford Study Says
VentureBeat
Kyle Wiggers
April 12, 2021
Certain artificial intelligence (AI)-powered medical devices approved by the U.S. Food and Drug Administration (FDA) are susceptible to data shifts and bias against underrepresented patients, according to a study by Stanford University researchers. The researchers compiled a database of FDA-approved medical AI devices and analyzed how each was evaluated before approval. They found 126 of 130 devices approved between January 2015 and December 2020 underwent only retrospective studies at submission, and none of the 54 approved high-risk devices were assessed via prospective review. The researchers contend prospective studies are especially needed for AI medical devices, given that field applications of the devices can deviate from their intended uses. They also said data about the number of sites used in an evaluation must be "consistently reported" in order for doctors, researchers, and patients to make informed decisions about the reliability of an AI-powered medical device.
KAUST Collaboration With Intel, Microsoft, University of Washington Accelerates Training in ML Models
HPCwire
April 12, 2021
Researchers at Saudi Arabia's King Abdullah University of Science and Technology (KAUST), Intel, Microsoft, and the University of Washington have achieved a more than five-fold increase in the speed of machine learning on parallelized computing systems. Their "in-network aggregation" technology involves inserting lightweight optimization code in high-speed network devices. The researchers used new programmable dataplane networking hardware developed by Intel's Barefoot Networks to offload part of the computational load during distributed machine learning training. The new SwitchML platform enables the network hardware to perform data aggregation at each synchronization step during the model update phase. KAUST's Marco Canini said, "Our solution had to be simple enough for the hardware and yet flexible enough to solve challenges such as limited onboard memory capacity."
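The aggregation the switch performs is conceptually simple. The toy model below mimics the arithmetic of in-network aggregation, summing fixed-point gradient chunks from several workers the way a programmable switch, which lacks floating-point units, would; the scaling factor, chunk size, and function names are assumptions for illustration, not SwitchML's actual protocol.

```python
# Toy model of in-network aggregation: instead of every worker exchanging
# gradients with every other, the switch sums fixed-point gradient chunks as
# packets pass through and returns the result. Parameters are assumed.
import numpy as np

SCALE = 1 << 16          # float -> fixed-point scaling factor (assumed)
CHUNK = 64               # gradient elements per packet (assumed)

def to_fixed(g):
    return np.round(g * SCALE).astype(np.int64)

def to_float(s):
    return s.astype(np.float64) / SCALE

def switch_aggregate(worker_grads):
    """Element-wise integer sum, as a programmable switch would compute it."""
    chunks = [to_fixed(g).reshape(-1, CHUNK) for g in worker_grads]
    return to_float(np.sum(chunks, axis=0)).ravel()

workers = [np.random.default_rng(i).normal(size=256) for i in range(4)]
agg = switch_aggregate(workers)
ref = np.sum(workers, axis=0)                 # reference float all-reduce
print("max error vs. float all-reduce:", np.abs(agg - ref).max())
```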
IBM Releases Qiskit Modules That Use Quantum Computers to Improve ML
VentureBeat
Chris O'Brien
April 9, 2021
IBM has released the Qiskit Machine Learning suite of application modules as part of its effort to encourage developers to experiment with quantum computers. The company’s Qiskit Applications Team said the modules promise to help optimize machine learning (ML) by tapping quantum systems for certain process components. The team said, "Quantum machine learning (QML) proposes new types of models that leverage quantum computers' unique capabilities to, for example, work in exponentially higher-dimensional feature spaces to improve the accuracy of models." IBM expects quantum computers to gain market momentum by performing specific tasks that are offloaded from classical computers to a quantum platform.
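As a taste of what such modules enable, the sketch below wires a quantum kernel into a classical support vector classifier. It is written against the QuantumKernel/QSVC classes as released in early versions of qiskit-machine-learning; later releases renamed parts of this API, so treat the exact imports and signatures as assumptions.

```python
# A hedged sketch of a quantum-kernel classifier, per the early
# qiskit-machine-learning API (QuantumKernel/QSVC); later versions differ.
import numpy as np
from qiskit import BasicAer
from qiskit.circuit.library import ZZFeatureMap
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.algorithms import QSVC
from qiskit_machine_learning.kernels import QuantumKernel

rng = np.random.default_rng(7)
X = rng.uniform(0, 2 * np.pi, size=(40, 2))          # two classical features
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0).astype(int)

# The feature map embeds each sample in a higher-dimensional quantum state space.
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
backend = QuantumInstance(BasicAer.get_backend("qasm_simulator"), shots=1024)
kernel = QuantumKernel(feature_map=feature_map, quantum_instance=backend)

qsvc = QSVC(quantum_kernel=kernel)                    # SVC with a quantum kernel
qsvc.fit(X[:30], y[:30])
print("test accuracy:", qsvc.score(X[30:], y[30:]))
```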
AI Could 'Crack the Language of Cancer, Alzheimer's'
University of Cambridge (U.K.)
April 8, 2021
A study by researchers at St. John's College, University of Cambridge in the U.K. found that the "biological language" of cancer and of Alzheimer's and other neurodegenerative diseases can be predicted by machine learning. The researchers used algorithms similar to those employed by Netflix, Facebook, and voice assistants like Alexa and Siri to train a neural network-based language model to study biomolecular condensates. St. John's Tuomas Knowles said, "Any defects connected with these protein droplets can lead to diseases such as cancer. This is why bringing natural language processing technology into research into the molecular origins of protein malfunction is vital if we want to be able to correct the grammatical mistakes inside cells that cause disease."
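The underlying idea of treating biology as language can be sketched without the team's model details. Below, protein sequences are tokenized into overlapping k-mer "words" and fed to an off-the-shelf embedding trainer (gensim's Word2Vec, with argument names per gensim 4.x); the sequences and all parameters are illustrative stand-ins, not the study's.

```python
# Illustrative only: "protein sequences as language," via k-mer tokens and
# word embeddings. Sequences below are synthetic, not real proteins.
from gensim.models import Word2Vec

def kmers(seq, k=3):
    """Split a protein sequence into overlapping k-mer 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequences = ["GRGDSPGRGQSMGRG", "FGNQGGFGNSRGGGA", "MKTAYIAKQRQISFV", "LVPRGSHMASMTGGQ"]
corpus = [kmers(s) for s in sequences]

model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, epochs=50, seed=1)
# Embeddings place k-mers that occur in similar "sentence" contexts close together.
print(model.wv.most_similar("GRG", topn=3))
```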
Rice, Intel Optimize AI Training for Commodity Hardware
Rice University News
Jade Boyd
April 7, 2021
Rice University computer scientists and collaborators at Intel have demonstrated artificial intelligence (AI) software that operates on commodity processors and trains deep neural networks (DNNs) significantly faster than graphics processing unit (GPU)-based platforms. Rice's Anshumali Shrivastava said the cost of DNN training is the biggest bottleneck in AI, and the team's sub-linear deep learning engine (SLIDE) overcomes it by running on commodity central processing units (CPUs), and by approaching DNN training as a search problem to be addressed with hash tables. The latest research considered the impact of vectorization and memory optimization accelerators on CPUs. Shrivastava said, "We leveraged those innovations to take SLIDE even further, showing that if you aren't fixated on matrix multiplications, you can leverage the power in modern CPUs and train AI models four to 15 times faster than the best specialized hardware alternative."
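The hash-table idea can be sketched briefly. Below, a random-hyperplane (SimHash) table buckets the neurons of one wide layer by their weight vectors, so a forward pass computes dot products only for the handful of neurons whose bucket matches the input; all sizes and parameters are illustrative, not the paper's settings.

```python
# Sketch of SLIDE's central idea: treat "which neurons fire" as a nearest-
# neighbor search via locality-sensitive hashing. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D, N, BITS = 128, 10_000, 12          # input dim, neurons in layer, hash bits

W = rng.normal(size=(N, D))           # layer weights, one row per neuron
planes = rng.normal(size=(BITS, D))   # shared random hyperplanes for SimHash

def simhash(v):
    """Sign pattern of projections onto the random hyperplanes, as an int."""
    return int("".join("1" if x > 0 else "0" for x in planes @ v), 2)

# Build the hash table once: bucket neurons by the SimHash of their weight row.
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

x = rng.normal(size=D)
candidates = table.get(simhash(x), [])
# A dense forward pass touches all N neurons; SLIDE-style touches one bucket.
sparse_out = {i: W[i] @ x for i in candidates}
print(f"computed {len(candidates)} of {N} dot products")
```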
UCLA Researchers Develop AI to Analyze Cells Without Killing Them
Daily Bruin
Madison Pfau; Anna Novoselov
April 9, 2021
An artificial intelligence (AI) model developed from images of stem cells by University of California, Los Angeles (UCLA) researchers enables the analysis of a cell's appearance and protein content without killing it. The model features two AI networks. UCLA's Sara Imboden explained that the first, a generator, is fed colorful immunofluorescent images and black-and-white microscope images from the same field of view, learning by detecting the relationships between the inputs. After studying these images, the generator tries to output an image as similar to the colorful immunofluorescent image as possible. The second network compares the real immunofluorescent image to the AI-generated image and guesses which is fake, highlighting weaknesses in the first network's output. UCLA's Cho-Jui Hsieh said, "No matter what kind of input you give, the AI will always try to predict something. Measuring the faithfulness of the AI prediction is very important future work."
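The two-network arrangement is a conditional generative adversarial setup, which can be sketched compactly. The PyTorch toy below pairs a generator that predicts a fluorescence image from a black-and-white input with a discriminator that scores (input, fluorescence) pairs; the architectures and sizes are placeholders, not UCLA's model.

```python
# Compact sketch of the two-network (conditional GAN) setup the summary
# describes. Layer choices are placeholders, not the published architecture.
import torch
import torch.nn as nn

class Generator(nn.Module):          # brightfield (1 ch) -> fluorescence (3 ch)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):      # (brightfield, fluorescence) pair -> score
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, stride=2, padding=1))

    def forward(self, bf, fl):
        return self.net(torch.cat([bf, fl], dim=1))

G, D = Generator(), Discriminator()
bf = torch.rand(2, 1, 64, 64)        # stand-in black-and-white microscope images
fake_fl = G(bf)                      # predicted fluorescence images
score = D(bf, fake_fl)               # discriminator flags likely fakes
print(fake_fl.shape, score.shape)
```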
Using AI to Diagnose Neurological Diseases Based on Motor Impairment
Heidelberg University (Germany)
April 7, 2021
Researchers at Germany's Heidelberg University, working with collaborators in Switzerland, have developed a machine learning (ML) technique for recognizing motor impairments in order to diagnose neurological diseases. The team's unsupervised behavior analysis and magnification using deep learning (uBAM) method features an ML-based algorithm that utilizes artificial neural networks and independently, fully automatically identifies characteristic behavior and pathological deviations. The uBAM interface's underlying convolutional neural network was trained to identify similar movement behavior across different subjects, despite differences in outward appearance. Heidelberg's Björn Ommer said, "As compared to conventional methods, the approach based on artificial intelligence delivers more detailed results with significantly less effort."
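One building block of such cross-subject comparison can be sketched as an embedding network. In the hedged PyTorch example below, a tiny 3D convolutional encoder maps a short movement clip to a unit-length vector, and clips from different subjects are compared by cosine similarity; the architecture and input shapes are placeholders, not the published uBAM networks.

```python
# Hedged sketch of cross-subject behavior comparison via embeddings;
# the encoder and data shapes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BehaviorEncoder(nn.Module):
    """Tiny 3D CNN: (frames, H, W) clip -> fixed-length behavior embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.head = nn.Linear(8, dim)

    def forward(self, clip):                          # clip: (B, 1, T, H, W)
        h = F.relu(self.conv(clip)).mean(dim=(2, 3, 4))  # global average pool
        return F.normalize(self.head(h), dim=1)       # unit-length embedding

enc = BehaviorEncoder()
clip_a = torch.rand(1, 1, 16, 64, 64)  # subject A's movement clip (stand-in)
clip_b = torch.rand(1, 1, 16, 64, 64)  # subject B's movement clip (stand-in)
similarity = (enc(clip_a) * enc(clip_b)).sum()        # cosine similarity
print(f"behavior similarity: {similarity.item():.3f}")
```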
ML Tool Converts 2D Material Images Into 3D Structures
Imperial College London (U.K.)
Caroline Brogan
April 5, 2021
A new machine learning algorithm developed by researchers at the U.K.'s Imperial College London (ICL) can convert two-dimensional (2D) images of composite materials into three-dimensional (3D) structures. ICL's Steve Kench said, "Our algorithm allows researchers to take their 2D image data and generate 3D structures with all the same properties, which allows them to perform more realistic simulations." The tool uses deep convolutional generative adversarial networks to learn the appearance of 2D composite cross-sections, and expands them so their “phases” (the different components of the composite material) can be studied in 3D space. The researchers found this method to be less expensive and faster than generating 3D computer representations from physical 3D objects, and able to identify different phases more clearly. ICL's Sam Cooper said, "We hope that our new machine learning tool will empower the materials design community by getting rid of the dependence on expensive 3D imaging machines in many scenarios."
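The core trick of supervising a 3D generator with only 2D data can be sketched in a few lines of PyTorch. Below, a 3D generator synthesizes a multi-phase volume, and 2D slices taken along each axis are passed to a 2D discriminator that would be trained on real cross-section images; the layer choices and shapes are illustrative placeholders, not ICL's architecture.

```python
# Sketch of 2D-to-3D adversarial training: a 3D generator, checked by a 2D
# discriminator that only ever sees slices. Shapes are placeholders.
import torch
import torch.nn as nn

class VolumeGenerator(nn.Module):    # latent volume -> (phases, D, H, W) volume
    def __init__(self, phases=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, phases, 4, stride=2, padding=1),
            nn.Softmax(dim=1))       # per-voxel distribution over phases

    def forward(self, z):
        return self.net(z)

disc2d = nn.Sequential(              # 2D discriminator for cross-section images
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1))

G = VolumeGenerator()
vol = G(torch.randn(1, 64, 8, 8, 8))          # -> (1, 3, 32, 32, 32) volume
# Take 2D slices along each axis; the discriminator only ever sees 2D images.
slices = [vol[:, :, 0], vol[:, :, :, 0], vol[..., 0]]
print([disc2d(s).shape for s in slices])
```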
The Wall Street Journal (4/1, Castellanos, Subscription Publication) reports, “Pfizer Inc. drew on digital technologies and artificial intelligence to roll out its Covid-19 vaccine to market in less than a year, said Lidia Fonseca, the pharmaceutical company’s chief digital and technology officer.” Speaking at the WSJ Pro AI Executive Forum Wednesday, Fonseca discussed how the company developed an AI-powered dashboard to track the spread of COVID-19 down to the local level – which helped the company choose where to hold clinical trials – as well as dashboards to monitor clinical trial progress virtually.
Axios (3/26, Walsh) reported the ACLU “will be seeking information about how the government is using artificial intelligence in national security.” The development of AI has “major implications for security, surveillance, and justice.” The ACLU’s request may help “shed some light on the government’s often opaque applications of AI.” The ACLU is “specifically concerned about ‘vetting and screening processes in agencies like Homeland Security, and tools that can analyze voice, data and video.’” The FOIA request was “prompted in part by a recent 750-page report put out by the National Security Commission on Artificial Intelligence that lays out a case for the US to embrace AI throughout the national security sector.”
Engadget (3/29, Holt) reports that after examining “ten of the most-cited datasets used to test machine learning systems,” a team led by MIT computer scientists “found that around 3.4 percent of the data was inaccurate or mislabeled, which could cause problems in AI systems that use these datasets.” The datasets “include text-based ones from newsgroups, Amazon and IMDb.” According to Engadget, “Errors emerged from issues like Amazon product reviews being mislabeled as positive when they were actually negative and vice versa.” Engadget said, “If labels are even a little off, that could lead to huge ramifications for machine learning systems. If an AI system can’t tell the difference between a grocery and a bunch of crabs, it’d be hard to trust it with pouring you a drink.”