Artificial Neurons Recognize Biosignals in Real Time
University of Zurich (Switzerland)
May 27, 2021
A chip built from artificial neurons can decode brainwaves in real time and determine which brain regions trigger epileptic seizures. Researchers at Switzerland's University of Zurich, ETH Zurich, and University Hospital Zurich (UHZ) developed the chip using neuromorphic engineering to detect previously recorded high-frequency oscillations (HFOs). The team first developed an algorithm to detect HFOs by simulating the brain's natural neural network with a spiking neural network (SNN), then deployed the SNN in an energy-efficient device that receives neural signals via electrodes; the hardware executes calculations with high temporal resolution, without requiring the Internet or cloud computing. UHZ's Johannes Sarnthein said, "A portable or implantable chip such as this could identify periods with a higher or lower rate of incidence of seizures, which would enable us to deliver personalized medicine."
Drones May Have Attacked Humans Fully Autonomously for the First Time
New Scientist
David Hambling
May 27, 2021
A recent report by the United Nations Security Council's Panel of Experts reveals that an incident in Libya last year may have marked the first time military drones autonomously attacked humans. Full details of the incident have not been released, but the report said retreating forces affiliated with Khalifa Haftar, commander of the Libyan National Army, were "hunted down" by Kargu-2 quadcopters during a civil war conflict in March 2020. The drones, produced by the Turkish firm STM, locate and identify targets in autonomous mode using on-board cameras with artificial intelligence, and attack by flying into the target and detonating. The report called the attack "highly effective" and said the drones did not require data connectivity with an operator.
*May Require Paid Registration
AI Tool Helps Doctors Manage COVID-19
University of Waterloo News (Canada)
Brian Caldwell
May 26, 2021
Researchers at Canada's University of Waterloo have developed artificial intelligence (AI) technology to evaluate the degree of COVID-19 severity, as part of the open source COVID-Net project between Waterloo, spinoff startup DarwinAI, the Stony Brook School of Medicine, and the Montefiore Medical Center. The researchers trained the deep learning AI to extrapolate the extent and opacity of infection in the lungs of COVID-19 patients from chest x-rays. The software's evaluations were compared to expert radiologists' evaluations of the same images, and were found to align well with them. Waterloo's Alexander Wong said, "The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world."
Fetal Heart Defect Detection Improved by Using ML
University of California, San Francisco News
Elizabeth Fernandez
May 26, 2021
University of California, San Francisco (UCSF) researchers doubled doctors' accuracy in detecting fetal heart defects in unborn children by integrating ultrasound imaging with machine learning (ML). The researchers trained ML models to mimic the tasks doctors perform in diagnosing congenital heart disease (CHD). The technique employs neural networks to find five views of the heart, then uses neural networks again to decide whether each view is normal; finally, a third algorithm combines the results of the first two into a diagnosis of fetal-heart normality or abnormality. Humans typically detect 30% to 50% of CHD cases in utero, while the UCSF system detected 95% of CHD cases in the test dataset. UCSF's Rima Arnaout said, "Our goal is to help forge a path toward using machine learning to solve diagnostic challenges for the many diseases where ultrasound is used in screening and diagnosis."
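To make the three-stage design concrete, here is a minimal sketch of how such a pipeline could be composed. The view names, stub models, and threshold are illustrative assumptions, not the UCSF team's code.

```python
# Hypothetical sketch of a staged view-detection -> per-view classification
# -> composite-diagnosis pipeline, with stub models standing in for CNNs.
import numpy as np

VIEWS = ["3-vessel view", "3-vessel trachea", "4-chamber", "LVOT", "abdomen"]

def detect_views(frames):
    """Stage 1: pick a best frame for each screening view (stub)."""
    rng = np.random.default_rng(0)
    return {v: frames[rng.integers(len(frames))] for v in VIEWS}

def classify_view(frame):
    """Stage 2: per-view abnormality probability (stub for a trained CNN)."""
    return float(frame.mean())

def composite_diagnosis(scores, threshold=0.5):
    """Stage 3: fuse per-view scores into a single normal/abnormal call."""
    return "abnormal" if np.mean(list(scores.values())) > threshold else "normal"

frames = np.random.rand(100, 64, 64)                # stand-in ultrasound frames
views = detect_views(frames)                        # stage 1: find the views
scores = {v: classify_view(f) for v, f in views.items()}  # stage 2: score each
print(composite_diagnosis(scores))                  # stage 3: combined call
```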
Archaeologists vs. Computers: Study Tests Who's Best at Sifting the Past
The New York Times
Heather Murphy
May 25, 2021
Computers can sort pottery shards into subtypes at least as accurately as human archaeologists, as demonstrated by Northern Arizona University researchers. The researchers pitted a deep learning neural network against four expert archaeologists in classifying thousands of images of Tusayan White Ware pottery among nine known types; the network outperformed two experts and equaled the other two. It also sifted through all 3,000 photos in minutes, while each expert's analysis took three to four months. In addition, the network communicated its reasoning for certain categorizations more specifically than its human counterparts, and offered a single answer for each classification.
AI Technology Protects Privacy
Technical University of Munich (Germany)
May 24, 2021
Technology developed by researchers at Germany's Technical University of Munich (TUM) ensures that the training of artificial intelligence (AI) algorithms does not infringe on patients' personal data. The team, collaborating with researchers at the U.K.'s Imperial College London and the OpenMined private AI technology nonprofit, developed privacy-preserving AI-based diagnostic processes for radiological image data. TUM's Alexander Ziller said the models were trained in various hospitals on local data, so "data owners did not have to share their data and retained complete control." The researchers also used data aggregation to block the identification of institutions where the algorithm was trained, while a third technique was utilized to guarantee differential privacy. TUM's Rickmer Braren said, "It is often claimed that data protection and the utilization of data must always be in conflict. But we are now proving that this does not have to be true."
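As a rough illustration of the techniques named above, the sketch below trains a toy model across several "hospitals" whose data never leaves the site, clips each local update, and adds Gaussian noise to the aggregate. Real differential privacy requires formal privacy accounting and secure aggregation, so treat this as a conceptual sketch, not the TUM/OpenMined implementation.

```python
# Conceptual sketch: federated averaging with clipped, noised updates.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step of local training at a hospital (toy linear model)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def dp_federated_round(weights, hospitals, clip=1.0, noise_std=0.1, rng=None):
    """Average clipped local updates and add Gaussian noise before applying."""
    rng = rng or np.random.default_rng(0)
    updates = []
    for data in hospitals:                        # raw data never leaves a site
        delta = local_update(weights, data) - weights
        delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))  # clip
        updates.append(delta)
    avg = np.mean(updates, axis=0)                # aggregation hides each site
    return weights + avg + rng.normal(0, noise_std * clip / len(hospitals),
                                      size=avg.shape)

rng = np.random.default_rng(1)
hospitals = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(20):
    w = dp_federated_round(w, hospitals, rng=rng)
print(w)
```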
Deep Learning Enables Dual Screening for Cancer, Cardiovascular Disease
Rensselaer Polytechnic Institute News
May 20, 2021
A new deep learning algorithm shows promise for screening patients for both cardiovascular disease and lung cancer. Engineers at Rensselaer Polytechnic Institute (RPI) used data from over 30,000 low-dose computed tomography (CT) scans to develop, train, and validate the algorithm, which weeded out unwanted artifacts and noise and extracted diagnostic features. Massachusetts General Hospital (MGH) researchers then tested the algorithm on state-of-the-art scans, and found it was as effective in analyzing those images as the hospital’s radiologists. RPI's Deepak Vashishth said, “This innovative research is a prime example of the ways in which bioimaging and artificial intelligence can be combined to improve and deliver patient care with greater precision and safety.”
Envisioning Safer Cities with AI
Texas Advanced Computing Center
Aaron Dubrow
May 19, 2021
University of California, Berkeley (UC Berkeley) researchers have designed an artificial intelligence toolkit for automatically identifying building properties, and for gauging urban structures' resilience. BRAILS (Building Recognition using AI at Large-Scale) applies machine learning, deep learning, and computer vision to data about the built environment as a tool for more efficient urban planning, design, and management of buildings and infrastructure. The basic BRAILS framework derives building characteristics from satellite and ground-level images drawn from Google Maps, combining them with data from sources like Microsoft Footprint Data and OpenStreetMap. The researchers trained the BRAILS modules and ran simulations using supercomputers at the Texas Advanced Computing Center. UC Berkeley's Charles Wang said the research aims "to create a more resilient built environment."
Cheap, User-Friendly Smartphone App Predicts Vineyard Yields
Cornell Chronicle
Krishna Ramanujan
May 17, 2021
Cornell University engineers and plant scientists have developed an inexpensive machine learning application to predict vineyard yields earlier in the season and with greater accuracy than costlier manual counting techniques. Growers can use a smartphone to record video of their grapevines, then upload the footage to a server for processing; the system uses computer vision to improve yield estimates. Cornell's Kirstin Petersen said, "Compared to the technology, a farmer would have to manually count 70% of their vineyard to gain the same level of confidence in their yield prediction, and no one would do that." As a result, Petersen said, “This could be a real game-changer for small and medium-sized farms in the Northeast.”
CloudTech (4/19, Bourne) reports Siemens and Google Cloud “are working together to deliver AI-based solutions in manufacturing – in what can be seen as another ‘industry cloud’ offering.” The companies “announced an agreement where Siemens will integrate Google Cloud’s machine learning and artificial intelligence technologies with its factory automation products,” enabling manufacturers to “run cloud-based AI and ML models on top of their data, as well as deploy algorithms at the network edge.” CloudTech explains, “This translates to applications such as visual inspection of products, or predicting the wear and tear of machines on the assembly line.”
Spectrum News (4/16, Sandoval, Poukish) reports, “Fire officials in Sonoma County are using new artificial intelligence to detect heat and smoke to potentially stop wildfires before they grow out of control.” The county is partnering “with South Korean company Alchera to install fire-spotting cameras throughout the area.” Funding for the $300,000 program, which has installed 600 cameras so far, “will come from a hazard mitigation grant from the Federal Emergency Management Agency and Sonoma County will provide a $75,000 match.”
Inside Science (4/8, Shurkin) reported that “researchers at the University of Colorado Boulder used artificial intelligence and computer vision to trace both the position and travel direction of worker bees as they found their way home.” They “published their findings in March in the journal Proceedings of the National Academy of Sciences.” Orit Peleg, “an assistant professor of computer science at Boulder, said that worker bees in a hive (the vast majority of bees) have to know where the queen is at all times because she is the sole source of eggs that keep the hive populated.” Instead of “shouting information, the queen extrudes her pheromones to the bees closest to her, and those bees, in turn, amplify the queen’s scent, and flap the information to the bees behind them.” The information is “eventually disseminated all the way out, like to the outskirts of an army.” This works “up to several meters from the hive.”
Early Endeavors on the Path to Reliable Quantum Machine Learning
ETH Zurich (Switzerland)
June 8, 2021
An international research team led by computer scientists at ETH Zurich in Switzerland used the theory of quantum hypothesis testing to develop a new approach that could pave the way for reliable quantum-based machine learning models. The approach can identify the threshold above which the assignments of a quantum classification algorithm are guaranteed to be reliable and robust. ETH's Zhikuan Zhao said, "When we realized that quantum algorithms, like classical algorithms, are prone to errors and perturbations, we asked ourselves how we can estimate these sources of errors and perturbations for certain machine learning tasks, and how we can guarantee the robustness and reliability of the chosen method. If we know this, we can trust the computational results, even if they are noisy." The researchers found their method can determine the error tolerance of a computation for both hacking attacks and natural errors.
AI Spots Coronal Holes to Automate Space Weather Prediction
Skolkovo Institute of Science and Technology (Russia)
June 7, 2021
An international team of scientists brought automated space weather prediction a step closer to reality via a neural network that can identify coronal holes—gaps in the solar atmosphere left by particles that cause geomagnetic storms on Earth—in space-based observations. Robert Jarolim at Austria's University of Graz said CHRONNOS (Coronal Hole RecOgnition Neural Network Over multi-Spectral-data) applies artificial intelligence to spot coronal holes "based on their intensity, shape, and magnetic field properties, which are the same criteria as a human observer takes into account." The team trained the convolutional neural network on about 1,700 extreme ultraviolet wavelength images of the sun's corona recorded in 2010-2017, and compared its results to 261 manually identified coronal holes. CHRONNOS matched human performance in 98% of the cases, and outperformed humans in identifying coronal holes from magnetic field maps.
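A minimal sketch of the kind of setup described: a small convolutional network maps multi-channel solar images (several wavelengths plus a magnetic field map) to a per-pixel coronal-hole mask. The architecture, channel count, and random data here are placeholders, not CHRONNOS itself.

```python
# Toy multi-channel image-to-mask segmentation, standing in for CHRONNOS.
import torch, torch.nn as nn

channels = 8                                     # e.g., EUV bands + magnetogram
net = nn.Sequential(
    nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                         # per-pixel coronal-hole logit
)
opt = torch.optim.Adam(net.parameters(), 1e-3)

images = torch.rand(4, channels, 64, 64)         # stand-in solar observations
masks = (torch.rand(4, 1, 64, 64) > 0.9).float() # stand-in coronal-hole labels

for step in range(10):
    loss = nn.functional.binary_cross_entropy_with_logits(net(images), masks)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```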
Scientists Can Predict How Well a Stroke Survivor Will Recover Language Skills Using Computer Simulations of the Brain
The Brink (Boston University)
Jessica Colarossi
June 3, 2021
Scientists at Boston University and the University of Texas at Austin used computer simulations of the brain to predict the extent to which bilingual Hispanic stroke survivors will recover language skills. The neural network models simulate the brain of a bilingual person with language impairment, and that brain's response to therapy in English and Spanish. The models can highlight the optimal language to focus on in rehabilitation, as well as forecast post-therapy results. They predicted therapeutic effects accurately in the treated language, which implies the simulations could inform healthcare providers' rehabilitation plans.
Researchers Fine-Tune Control Over AI Image Generation
NC State University News
Matt Shipman
June 1, 2021
Refined control over artificial intelligence (AI)-driven conditional image generation by North Carolina State University (NC State) researchers has potential for use in fields ranging from autonomous robotics to AI training. NC State's Tianfu Wu said, "Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it." The approach also can keep specific components identifiably the same while repositioning or otherwise altering them. In tests on the COCO-Stuff and Visual Genome datasets, the technique bested previous state-of-the-art image generation methods. Wu suggested applications for the technique like helping autonomous robots "imagine" the appearance of an end result before undertaking a given task, or producing images for AI training.
How AI Could Alert Firefighters of Imminent Danger
U.S. National Institute of Standards and Technology
June 1, 2021
The artificial intelligence-driven Prediction Model for Flashover (P-Flash) tool is designed to warn firefighters of flashover, the near-simultaneous ignition of flammable materials in a room. Developed by researchers at the U.S. National Institute of Standards and Technology (NIST), P-Flash makes predictions based on temperature data from a building's heat detectors, and is engineered to function even after those detectors fail. The investigators developed the tool by feeding a machine learning algorithm temperature data from heat detectors in 5,041 simulations of a burning three-bedroom, one-story ranch-style home using NIST's Consolidated Model of Fire and Smoke Transport fire modeling program. In tests of the tool's ability to anticipate imminent flashovers in more than 1,000 simulated fires and over a dozen actual fires, it correctly predicted flashovers one minute in advance for about 86% of the simulated fires.
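The underlying learning task can be pictured as training a classifier on windows of heat-detector temperature readings to flag an imminent flashover. The synthetic data and gradient-boosted model below are assumptions for illustration; P-Flash's actual features and model differ.

```python
# Illustrative P-Flash-style predictor on synthetic detector temperatures.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, n_detectors, window = 2000, 6, 30             # 30 readings per detector

# Synthetic traces: a faster temperature rise means higher flashover odds.
slopes = rng.uniform(0, 20, size=(n, n_detectors))
X = np.hstack([slopes * t for t in np.linspace(0, 1, window)])
y = (slopes.max(axis=1) > 14).astype(int)        # toy "flashover within 60 s"

model = GradientBoostingClassifier().fit(X[:1500], y[:1500])
print("held-out accuracy:", model.score(X[1500:], y[1500:]))
```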
Microscopists Push Neural Networks to the Limit to Sharpen Fuzzy Images
U.S. National Institutes of Health
June 1, 2021
Microscopists and computer scientists at the U.S. National Institutes of Health (NIH) generated clearer cellular images under the extremely low illumination necessary to keep cells viable, through the use of a neural network. The researchers trained the neural network on matched image pairs, so it could learn to forecast a blurry image's appearance if it were sharpened, and then convert fuzzy images into clear ones. The NIH team partnered with imaging software provider SVision to enhance a residual channel attention network to denoise and sharpen three-dimensional fluorescence microscopy images with low signal-to-noise ratio (SNR). NIH's Hari Shroff said, “We were able to ‘beat’ the limitations of the microscope by using artificial intelligence to ‘predict’ the high SNR image from the low SNR image.”
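The "residual channel attention" mechanism mentioned above can be sketched in a few lines: per-channel weights are learned from global feature statistics and used to rescale feature maps, with the block's input added back. This toy block, with hypothetical sizes, only illustrates the idea; the NIH/SVision network is far deeper.

```python
# Toy residual channel attention block (core idea only, not the RCAN model).
import torch, torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
                                  nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.body(x)
        return x + f * self.gate(f)              # attention-weighted residual

block = ChannelAttentionBlock(8)
features = torch.rand(1, 8, 32, 32)              # stand-in feature maps
print(block(features).shape)
```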
Google-Led Research Pushes Back Against AI Inefficiency Claims
VentureBeat (4/29, Wiggers) reports that this week, Google “pushed back against claims by earlier research that large AI models can contribute significantly to carbon emissions.” In a new “paper coauthored by Google AI chief scientist Jeff Dean, researchers at the company say that the choice of model, datacenter, and processor can reduce carbon footprint by up to 100 times and that ‘misunderstandings’ about the model lifecycle contributed to ‘miscalculations’ in impact estimates.” In June of last year, “researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car.” AI researcher Timnit Gebru separately “coauthored a paper that spotlights the impact of large language models’ carbon footprint on marginalized communities.” The new “Google-led research, which was conducted with University of California, Berkeley researchers and focuses on natural language model training, defines the footprint of a model as a function of several variables.”
SourceAI Tool Uses AI To Write Code Based On Text Description
Wired (4/23, Knight) reports that Paris startup SourceAI “is fine-tuning a tool that uses artificial intelligence to write code based on a short text description of what the code should do.” For example, tell the tool to solve a math problem, “and it will whip up a dozen or so lines in Python to do just that.” But while “automating coding could change software development...the limitations and blind spots of modern AI may introduce new problems.” One issue is that “machine-learning algorithms can behave unpredictably, and code generated by a machine might harbor harmful bugs unless it is scrutinized carefully.”
Artificial Intelligence Is Infiltrating Higher Education With Admissions, Grading
The Hechinger Report (4/26, Newton) reports that “artificial intelligence, or AI, is being used to shoot off these seemingly personal appeals and deliver pre-written information through chatbots and text personas meant to mimic human banter.” It can “help a university or college by boosting early deposit rates while cutting down on expensive and time-consuming calls to stretched admissions staffs.” AI has long “been quietly embedding itself into higher education in ways like these, often to save money – a need that’s been heightened by pandemic-related budget squeezes.” Now, simple AI-driven tools “like these chatbots, plagiarism-detecting software and apps to check spelling and grammar are being joined by new, more powerful – and controversial – applications that answer academic questions, grade assignments, recommend classes and even teach.”
The AP (4/21, Chan) reports European Union officials on Wednesday “outlined an ambitious package of proposed regulations to rein in the rapidly expanding AI technology.” The draft regulations from the EU’s executive commission include “rules for applications deemed high risk such as AI systems to filter out school, job or loan applicants.” They would also “ban artificial intelligence outright in a few cases considered too risky, such as government ‘social scoring’ systems that judge people based on their behavior.” The proposals are the EU’s “latest move to maintain its role as the world’s standard-bearer for technology regulation, as it tries to keep up with the world’s two big tech superpowers, the US and China.” EU officials “say they are taking a four-level ‘risk-based approach’ that seeks to balance important rights such as privacy against the need to encourage innovation.”
The Washington Post (4/29, Board) editorializes that, as the EU revealed new proposals to regulate AI, “the United States should play an active role in addressing” the relevant policy issues and take steps “to shape the EU’s answers in a manner that enshrines liberties without stifling innovation.”
Insider (4/22, Hartmans, Taylor) reports that although some Amazon delivery drivers “are chafing at a new camera system that watches them inside their vans,” such cameras “are becoming more common across the industry,” and some drivers “say there are some key advantages to the new monitoring system.” While the drivers interviewed by Insider “said they felt ‘micromanaged’ and slowed down by the cameras,” they also “highlighted several benefits, saying the cameras encourage safer driving and could protect them...in cases of traffic accidents or other dangerous situations.” The camera system – named Driveri – “was created by a transportation company called Netradyne, which uses artificial intelligence to monitor drivers.” Amazon “told Insider that it saw improvements in driver safety during a pilot test of the Netradyne cameras” between April and October 2020. Accidents “decreased 48%, stop-sign violations decreased 20%, incidents of workers driving without a seatbelt decreased 60%, and distracted driving decreased 45%.”
The Washington Post (4/22, Brown) reports that robots are “shouldering more responsibility at Church Brothers Farms in Gonzalez, Calif.” and notes that it is “far from the only farm going hi-tech.” A new era of machine modernization is spreading across the US agricultural landscape amid rising labor costs, climate change, and increased food demand. Supporters of robotic farming “tout increased automation as a step toward improving efficiency” and “freeing people from monotonous, backbreaking tasks that few laborers want to do.” The article highlights how other farms are using robots and AI.
Fusion of AI, Nanopore Technology: Passing the COVID Test in Just 5 Minutes
SciTechDaily
June 21, 2021
A SARS-CoV-2 virus test developed by scientists at Japan's Osaka University combines artificial intelligence and nanopore technology to differentiate similarly sized coronaviruses that underpin different respiratory ailments. The platform demonstrated 90% sensitivity and 96% specificity for SARS-CoV-2 detection from clinical saliva samples in just five minutes. The researchers bored 300-nanometer-diameter nanopores into a silicon nitride membrane; partial blockage of the opening when a virus was pulled through a nanopore by electrophoretic force caused a temporary ionic flow reduction inside the nanopore, which registered as a change in electrical current. Current as a function of time yielded data on the volume, structure, and surface charge of the analyzed target, and the algorithm was trained on 40 polymerase chain reaction (PCR)-positive and 40 PCR-negative saliva samples. Said Osaka's Masateru Taniguchi, "We expect that this research will enable rapid point-of-care and screening tests for SARS-CoV-2 without the need for RNA extraction."
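Conceptually, the classifier operates on features of the current-versus-time trace. The sketch below extracts two toy pulse features (blockade depth and dwell time) from synthetic traces and trains a classifier against stand-in PCR labels; the Osaka platform's actual signal processing and model are far more sophisticated.

```python
# Toy nanopore-signal classification: pulse features + a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def trace_features(current):
    """Depth and width of the largest current drop in a trace."""
    baseline = np.median(current)
    drop = baseline - current
    depth = drop.max()
    dwell = (drop > 0.5 * depth).sum()           # samples above half-depth
    return [depth, dwell]

def synth_trace(positive):
    """Synthetic current trace with a deeper, longer pulse if 'positive'."""
    t = np.ones(500) + rng.normal(0, 0.02, 500)
    width, depth = (60, 0.4) if positive else (30, 0.2)
    start = rng.integers(100, 300)
    t[start:start + width] -= depth
    return t

labels = rng.integers(0, 2, 80)                  # stand-in for PCR results
X = np.array([trace_features(synth_trace(bool(l))) for l in labels])
clf = RandomForestClassifier(random_state=0).fit(X[:60], labels[:60])
print("accuracy:", clf.score(X[60:], labels[60:]))
```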
EU Data Protection Authorities Call for Ban on Facial Recognition
Politico Europe
Melissa Heikkila
June 21, 2021
The European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) have urged a ban on the use of artificial intelligence (AI)-driven facial recognition technology in public places. The European Commission's AI bill limits its use in public places by law enforcement, without prohibiting it outright. In a joint statement, EDPB chair Andrea Jelinek and EDPS Wojciech Wiewiorowski said, "A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI." They also urged a ban on logging gait, fingerprints, DNA, voice, keystrokes, and other biometric data, as well as on AI systems that biometrically distinguish ethnicity, gender, and political or sexual orientation.
Microscopy Deep Learning Predicts Viral Infections
University of Zurich (Switzerland)
June 21, 2021
A deep learning artificial neural network can reliably identify cells infected by adenoviruses or herpes viruses from fluorescent images of live cells. The algorithm, developed by researchers at Switzerland's University of Zurich (UZH), is designed to detect changes in the arrangement of cellular nuclei that signal the presence of infection. The researchers trained the model to identify patterns typical of infected or uninfected cells using a large dataset of microscopy images. The algorithm also demonstrated the ability to identify acute and severe infections with 95% accuracy, up to 24 hours in advance, by distinguishing between two separate phases of lytic infections.
Smart Tires Hit the Road
The Wall Street Journal
Sara Castellanos
June 16, 2021
Tire manufacturers Goodyear Tire & Rubber and Bridgestone are launching new smart tire features for last-mile delivery vehicles transporting packages from e-commerce sites like Amazon.com. Goodyear's SightLine solution runs data from a sensor through proprietary machine learning algorithms to capture tire wear, pressure, road-surface conditions, and other variables to forecast flats or other problems days ahead of time. Goodyear's Chris Helsel said SightLine could detect 90% of tire-related issues ahead of time in a test that involved about 1,000 vehicles operated by 20 customers. Meanwhile, Bridgestone Americas is developing an intelligent tire system that combines sensors, artificial intelligence algorithms, and digital twins to predict tire wear and readiness for retreading.
MSU, Facebook Develop Research Model to Fight Deepfakes
Michigan State University
June 16, 2021
A new reverse-engineering approach developed by artificial intelligence experts at Michigan State University (MSU) and Facebook aims to identify and attribute "deepfakes." Facebook's Tal Hassner said, "With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed." The researchers tested their approach using a dataset of 100,000 synthetic images produced by 100 publicly available generative models, and found that it outperformed the random baselines of previous detection models. MSU's Xiaoming Liu said, "Our framework can not only perform model parsing, but also extend to deepfake detection and image attribution."
Testing AI Fairness in Predicting College Dropout Rate
Cornell Chronicle
Tom Fleischman
June 17, 2021
Cornell University and University of California, Irvine researchers have found that removing protected student characteristics from machine learning models used to identify college students at risk of dropping out does not improve the accuracy or fairness of the models' predictions. The research compared predictive models with and without such protected attributes using a dataset of 564,104 residential course records and 81,858 online course records. The dataset was used to build 58 student attributes; the four protected attributes examined were student gender, first-generation college status, membership in an underrepresented minority group, and high financial need. The researchers found that including protected attributes had no significant effect on three common measures of overall prediction performance when the model already included commonly used features like academic records.
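The comparison at the heart of the study can be illustrated by fitting the same dropout predictor with and without protected attributes and comparing an overall performance measure. The synthetic data and logistic model below are placeholders for the study's course records and models.

```python
# Toy version of the with/without-protected-attributes comparison.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
academic = rng.normal(size=(n, 5))               # grades, credits, etc.
protected = rng.integers(0, 2, size=(n, 4))      # gender, first-gen, URM, need
y = (academic[:, 0] + 0.5 * academic[:, 1]       # outcome driven by academics
     + rng.normal(size=n) > 0).astype(int)

for name, X in [("without", academic),
                ("with   ", np.hstack([academic, protected]))]:
    clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
    auc = roc_auc_score(y[4000:], clf.predict_proba(X[4000:])[:, 1])
    print(f"{name} protected attributes: AUC = {auc:.3f}")
```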
ML Can Reduce Worry About Nanoparticles in Food
Texas A&M Today
Vandana Suresh
June 15, 2021
Texas A&M University scientists used two machine learning (ML) algorithms to assess the properties of metallic nanoparticles that make their absorption by plants more likely. The team trained an artificial neural network and gene-expression programming on a database culled from previous research on metallic nanoparticles and the plants in which they had accumulated. The algorithms can accurately predict a given metallic nanoparticle's likelihood to accumulate in a plant species, and how its chemical composition influences the tendency for absorption among plants in a nutrient-rich or hydroponic medium. Texas A&M's Xingmao Ma said, "It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables, and grains. But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns."
U.S. Task Force to Study Opening Government Data for AI Research
The Wall Street Journal
Ryan Tracy
June 10, 2021
The Biden administration’s new National Artificial Intelligence Research Resource Task Force is charged with developing a strategy for making government data available to artificial intelligence (AI) scientists. The task force's 12 members hail from academia, government, and industry, and are supervised by officials at the White House Office of Science and Technology Policy (OSTP) and the U.S. National Science Foundation. The panel's strategy could provide researchers with secure access to anonymized data about Americans, as well as to the computing power needed to analyze the data. OSTP's Lynne Parker said the group intends to provide Congress with guidance for establishing a standard AI research infrastructure for non-governmental personnel.
UW Researchers Can Turn a Single Photo into a Video
University of Washington News
Sarah McQuate
June 14, 2021
A new deep learning method can convert a single photo of any flowing material into an animated video running in a seamless loop. University of Washington (UW) researchers invented the technique, which UW's Aleksander Holynski said requires neither user input nor additional data. The system predicts the motion that was occurring when a photo was captured, and generates the animation from that information. The researchers used thousands of videos of fluidly moving material to train a neural network, which eventually was able to spot clues to predict what happened next, enabling the system to ascertain if and in what manner each pixel should move. The team's “symmetric splatting” method forecasts both the future and the past for an image, then blends them into one animation.
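A heavily simplified sketch of the Eulerian idea: a single static motion field is integrated over time, pixels are "splatted" to their displaced positions, and forward and backward passes are blended so the loop closes seamlessly. The uniform flow and nearest-pixel splatting here are toy stand-ins for the paper's learned motion fields and feature-space splatting.

```python
# Toy looping animation from one image and a static motion field.
import numpy as np

H, W, T = 64, 64, 30
image = np.random.rand(H, W)                         # stand-in photo
flow_x = np.tile(np.linspace(0.0, 1.0, W), (H, 1))   # static rightward drift

def splat(img, t):
    """Move each pixel along the integrated flow for t steps (nearest pixel)."""
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip((xs + t * flow_x * 10).astype(int), 0, W - 1)
    out[ys, xt] = img[ys, xs]
    return out

frames = []
for t in range(T):
    fwd = splat(image, t)            # integrated forward from the start
    bwd = splat(image, t - T)        # integrated backward from the loop's end
    alpha = t / T
    frames.append((1 - alpha) * fwd + alpha * bwd)   # crossfade closes the loop
print(len(frames), frames[0].shape)
```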
Algorithm Reveals Mysterious Foraging Habits of Narwhals
University of Copenhagen (Denmark)
June 16, 2021
Researchers at Denmark's University of Copenhagen and the Greenland Institute of Natural Resources utilized algorithms in their study of the foraging habits of narwhals, and how they may be affected by human disturbances and global warming. The researchers used artificial intelligence to detect patterns in the way narwhals move and the sounds they emit. Among their findings was that certain sounds indicate when narwhals are hunting prey. University of Copenhagen’s Raghavendra Selvan said, “The major challenge was that these whales have very complex movement patterns, which can be tough to analyze. This becomes possible only with the use of deep learning, which could learn to recognize both the various swimming patterns of whales, as well as their buzzing sounds. The algorithm then discovered connections between the two.”
Reuters (5/3, Dave, Nellis) reports Apple on Monday “hired former distinguished Google scientist Samy Bengio, who left the search giant amid turmoil in its artificial intelligence research department.” Bengio “is expected to lead a new AI research unit at Apple under John Giannandrea, senior vice president of machine learning and AI strategy.” Bengio’s arrival at Apple follows Google’s firings of fellow scientists Margaret Mitchell and Timnit Gebru after the former allegedly “transferred electronic files out of the company” and the latter “threatened to quit rather than retract a paper.” As “one of the early leaders of the Google Brain research team,” Bengio “advanced the ‘deep learning’ algorithms that underpin today’s AI systems for analyzing images, speech, and other data.”
Also reporting is Ars Technica (5/3, Axon).
VentureBeat (5/6, Wiggers) reports, “Nvidia this week detailed a partnership with the U.S. Postal Service to transform the latter’s mail operations with AI.” The Postal Service worked with “half a dozen architects at Nvidia and other companies,” and “arrived at the deep learning models it needed after a three-week sprint.” This work “was the genesis of the Edge Compute Infrastructure Program, a distributed edge AI system that’s running on the NVIDIA EGX platform at the Postal Service today.” The new solutions “could help the Postal Service improve delivery standards, which have fallen over the past year.”
Computer Weekly (5/6, Tan) reports that Wipro announced a partnership “with stem cell technology firm Transcell Oncologics on the use of its HOLMES artificial intelligence (AI) platform to improve vaccine safety.” Wipro said HOLMES is “capable of predicting adverse neurovirulent impacts resulting from vaccinations” by applying AI to the vaccine development process. The platform “can also be integrated into existing workflows, including quality control tests in vaccine production, batchwise release and other safety evaluation assays, enabling researchers to ascertain the safety and potency of vaccines.” Transcell CEO Subhadra Dravida said the partnership “will bring new and advanced insights from the use of stem cells in an in-vitro setting” and will lead to “new medical treatments and accelerate innovation and vaccine availability that is vital to saving lives.”
In a 10,500-word story, ZDNet (4/30, Ray) reported on how companies, scholars, and regulators are grappling with the ethical implications of artificial intelligence (AI) use. Criticism around AI development and deployment, “and the phenomenon of corporations snapping back in response,” come as the accelerating “commercial and governmental implementation of AI make the stakes even greater.”
The New York Times (4/30, Metz) reported multiple teams of researchers are working to develop AI that can operate surgical robots. For example, a recent paper published by researchers at UC Berkeley reported a surgical robot could relocate rings on a peg board better when operated by AI than when operated by humans. Researchers have also trained surgical robots to perform other tasks needed in surgery, such as placing a pin in a bone, better than humans.
Mashable (4/29, Dawood) reports that “innovators in Israel have integrated AI into handsets for data driven psychiatric treatment.” The new “platform developed by Montfort uses an app called EncephaLog, which reads a person’s cognitive and psychiatric symptoms through sensors on a smartphone.” By replacing short clinic visits with a way to “collect information on a patient in real-time, the system gauges people’s emotions and thoughts from their responses to on-screen commands.” All the data collected is “processed via machine learning, after which biomarkers are created to spot symptoms of mental health disorders.” The platform’s “AI focuses on aspects like an individual’s social interactions and their physical activity, to create a database which helps psychiatrists devise a strategy for accurate and effective treatment.”
Global Smart-City Competition Highlights China's Rise in AI
Wired
Khari Johnson
July 3, 2021
Chinese tech giants Alibaba and Baidu took first and second place in all five categories in the recent AI City Challenge, outperforming competitors from almost 40 countries. Carnegie Mellon University's Stan Caldwell points out that China invests twice as much as the U.S. in research and development as a share of gross domestic product. Caldwell said, "We want the technologies to develop, because we want to improve safety and efficiency and sustainability. But selfishly, we also want this technology to develop here and improve our economy." The U.S. National Institute of Standards and Technology is calling on American artificial intelligence (AI) researchers to participate in its Automated Streams Analysis for Public Safety (ASAPS) Challenge Program, which aims to develop AI to help emergency operators predict when their services will be needed.
Technion Study Finds Warmth of AI Systems More Important Than Capability
The Jerusalem Post
June 30, 2021
A study by researchers at the Technion – Israel Institute of Technology found that potential users of artificial intelligence (AI) systems consider such systems’ “warmth” more important than capability and competence. The study of more than 1,600 participants defined warmth as related to traits indicating the AI system's perceived intent, such as friendliness, helpfulness, sincerity, trustworthiness, and morality. The researchers found participants preferred “warm” AI systems built for the consumer that use algorithms trained on less data over systems built for the producer that use state-of-the-art artificial neural network algorithms. The researchers looked at navigation apps, search engines, and recommender systems, in contrast to prior research that focused on virtual agents or robots.
Deep ML Completes Information About the Bioactivity of 1 Million Molecules
IRB Barcelona (Spain)
June 28, 2021
Deep machine learning (ML) computational models have deduced experimental data for 1 million chemical compounds, guiding the development of a suite of programs for estimating the bioactivity of any type of molecule. Scientists at Spain's Institute for Research in Biomedicine (IRB Barcelona) based the technique on the Chemical Checker, the largest database of bioactivity profiles for pseudo pharmaceuticals. The database is missing critical data, which the new tool provides by integrating all the available experimental information so the bioactivity profiles of all molecules can be completed. Said the Institute’s Patrick Aloy, "The new tool also allows us to forecast the bioactivity spaces of new molecules, and this is crucial in the drug discovery process as we can select the most suitable candidates and discard those that, for one reason or another, would not work."
AI Clears Up Images of Fingerprints to Help with Identification
New Scientist
Matthew Sparkes
June 28, 2021
West Virginia University researchers have trained an artificial intelligence (AI) model to clean up distorted images of fingerprints from crime scenes to improve identification. The researchers developed a generative adversarial network by creating blurred versions of 15,860 clean fingerprint images from 250 subjects. They trained the AI using nearly 14,000 of these pairs of images; when they tested its performance on the remainder, they found the model to be 96% accurate at the lower end of the range of blurring intensity, and 86% at the higher end. Forensic Equity's David Goodwin said the use of neural networks to manipulate images would have trouble standing up in court because they cannot be audited like human-generated code, and the inner workings of these models are unknown.
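The paired-data setup described above can be sketched as a small conditional GAN: clean prints are synthetically blurred, a generator learns to invert the blur, and a discriminator judges (blurred, restored) pairs against (blurred, clean) ones. The tiny networks and loss weights below are illustrative assumptions, not the WVU model.

```python
# Toy conditional GAN for deblurring on synthetically degraded pairs.
import torch, torch.nn as nn

def blur(x):
    """Synthetic degradation used to build (blurred, clean) training pairs."""
    k = torch.ones(1, 1, 5, 5) / 25.0
    return nn.functional.conv2d(x, k, padding=2)

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))          # restorer
D = nn.Sequential(nn.Conv2d(2, 8, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.Linear(8 * 15 * 15, 1))  # pair critic
opt_g = torch.optim.Adam(G.parameters(), 1e-3)
opt_d = torch.optim.Adam(D.parameters(), 1e-3)
bce = nn.BCEWithLogitsLoss()

clean = torch.rand(16, 1, 32, 32)                # stand-in fingerprints
blurred = blur(clean)
for step in range(5):
    fake = G(blurred)
    # Discriminator: tell (blurred, clean) pairs from (blurred, restored) ones.
    d_real = D(torch.cat([blurred, clean], 1))
    d_fake = D(torch.cat([blurred, fake.detach()], 1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the critic while staying close to the clean print.
    d_fake = D(torch.cat([blurred, fake], 1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             10 * nn.functional.l1_loss(fake, clean)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_g))
```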
AI Breakthrough in Premature Baby Care
James Cook University (Australia)
June 24, 2021
A hybrid neural network can accurately forecast premature babies' individual mortality risk in order to better guide their care, thanks to scientists at Australia's James Cook University (JCU). JCU's Stephanie Baker said the Neonatal Artificial Intelligence Mortality Score (NAIMS) network assesses preterm infants' mortality risk based on simple demographics and trends in heart and respiratory rate. Baker said NAIMS could predict an infant's mortality risk within three, seven, or 14 days from data generated over 12 hours, without requiring invasive procedures or knowledge of medical histories. Said Baker, "Due to the simplicity and high performance of our proposed scheme, NAIMS could easily be continuously and automatically recalculated, enabling analysis of a baby's responsiveness to treatment and other health trends."
Rembrandt's 'Night Watch' on Display with Missing Figures Restored by AI
Reuters
Toby Sterling
June 23, 2021
Researchers at the Rijksmuseum in the Netherlands used artificial intelligence to restore missing parts of Rembrandt’s "The Night Watch" for a new exhibit. This marks the first time in 300 years that the 1642 painting is on display in its original size. Strips that were cut from all four sides of the painting during a 1715 move and later lost were recreated by restorers and computer scientists with the help of a copy made by another artist of the time. Images of the original painting and the smaller 1655 copy attributed to Gerrit Lundens were scaled to the same size, with the Lundens work warped to fit with the Rembrandt where the placement of figures and objects slightly differed.
AI Breakthrough Gives Longer Advance Warning of Ozone Issues
University of Houston News
Sally Strong
June 24, 2021
University of Houston researchers have developed an artificial intelligence system that can forecast ozone levels in the earth's troposphere (the lowest level of the atmosphere) up to two weeks in advance. The researchers used index of agreement (IOA), which compares the gaps between what is expected and the actual outcome, as the loss function for the AI model. This combination of a conventional numerical model and the IOA improved the accuracy of the machine learning algorithm in predicting outcomes of real-life ozone conditions through the incorporation of historical ozone data. The model was trained using four to five years of ozone data to recognize ozone conditions and better estimate forecasts over time. The system could be used to help people prepare for high ozone levels that can exacerbate breathing issues.
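The index of agreement compares squared forecast errors against the spread of predictions and observations around the observed mean, so using 1 minus IOA as the loss rewards matching both magnitude and variability. A minimal sketch with a toy ozone series (the formula is Willmott's standard index; the deep model around it is not shown):

```python
# Index of agreement (Willmott's d) used as a training loss.
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's d: 1 is perfect agreement, 0 is none."""
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

def ioa_loss(pred, obs):
    return 1.0 - index_of_agreement(pred, obs)    # minimize to maximize d

obs = 40 + 30 * np.sin(np.linspace(0, 6, 50))     # toy ozone series (ppb)
print(ioa_loss(obs + 2.0, obs))                   # small bias -> small loss
print(ioa_loss(np.full(50, obs.mean()), obs))     # flat forecast -> loss of 1.0
```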
ML Aids Earthquake Risk Prediction
Texas Advanced Computing Center
Aaron Dubrow
June 23, 2021
University of Texas at Austin (UT Austin) researchers have engineered an earthquake-predicting machine learning (ML) model. UT Austin's Ellen Rathje and Maria Giovanna Durante utilized a random forest strategy with a binary classification to predict whether lateral spreading movements occurred at a specific site during the 2011 quake in Christchurch, New Zealand, then applied multiclass classification to forecast displacement volumes. The researchers used the Texas Advanced Computing Center's Frontera supercomputer to train the model on data related to the peak ground shaking experienced, water table depth, topographic slope, and other factors in Christchurch, covering more than 7,000 data points. They tested the model citywide on 2.5 million sites around the quake's epicenter to determine displacement, and found predictions that liquefaction would occur were 80% accurate, while predictions of the extent of displacement were 70% accurate.
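The two-stage scheme can be sketched with scikit-learn: a binary random forest for whether lateral spreading occurred, then a multiclass forest for the displacement bin. The synthetic features below only mimic the study's inputs (shaking intensity, water table depth, slope); they are not the Christchurch data.

```python
# Toy two-stage random forest: occurrence, then displacement class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 4000
pga = rng.uniform(0.1, 0.6, n)                   # peak ground shaking proxy
water = rng.uniform(0.2, 5.0, n)                 # water table depth (m)
slope = rng.uniform(0.0, 5.0, n)                 # topographic slope (%)
X = np.column_stack([pga, water, slope])

occurred = (pga / water + rng.normal(0, 0.05, n) > 0.25).astype(int)
amount = np.digitize(pga / water, [0.25, 0.5, 1.0])  # 0-3 displacement bins

binary = RandomForestClassifier(random_state=0).fit(X[:3000], occurred[:3000])
multi = RandomForestClassifier(random_state=0).fit(X[:3000], amount[:3000])
print("occurrence acc:", binary.score(X[3000:], occurred[3000:]))
print("displacement acc:", multi.score(X[3000:], amount[3000:]))
```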
Algorithm Helps Autonomous Vehicles Find Themselves, Summer or Winter
California Institute of Technology
Robert Perkins
June 23, 2021
Visual terrain-relative navigation (VTRN) now can operate effectively regardless of seasonal changes, thanks to a new algorithm. California Institute of Technology researchers applied deep learning and artificial intelligence to eliminate seasonal content that can trip up VTRN systems, which rely on close similarity between the terrain they are looking at and database images. The algorithm utilizes self-supervised learning to educate itself, seeking patterns in images by parsing out details and properties that humans likely would overlook. VTRN systems equipped with the algorithm can localize more accurately: one upgraded system could match 92% of images of summer foliage against winter leaf-off imagery, with the remaining 8% easily addressed through other methods.
ML Methods Could Improve Environmental Predictions
University of Minnesota
June 22, 2021
New process- or knowledge-guided machine learning (ML) techniques can predict flow and temperature in river networks more accurately even when data is scarce, according to researchers at the University of Minnesota, the University of Pittsburgh (Pitt), and the U.S. Geological Survey. The work involved an algorithm that was taught physical rules to generate more accurate forecasts and identify physically significant relationships between inputs and outputs. The method was designed to avoid common traps in ML-based prediction by informing the model through correlation across time, spatial links between streams, and energy budget equations. Pitt's Xiaowei Jia said, "Accurate prediction of water temperature and streamflow [can assist in] decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers."
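The "knowledge-guided" idea can be sketched as a loss that combines a data-fit term with a penalty on physically implausible outputs. The toy rule below (predicted water temperature cannot exceed a simple upper bound) is a stand-in for the real work's energy-budget and stream-network constraints.

```python
# Toy physics-guided training: data loss + penalty for rule violations.
import torch, torch.nn as nn

net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), 1e-2)

x = torch.rand(256, 3)                           # air temp, flow, solar (scaled)
y = 0.8 * x[:, :1] + 0.1 * x[:, 2:3]             # synthetic water temperature

for step in range(200):
    pred = net(x)
    data_loss = nn.functional.mse_loss(pred, y)
    upper_bound = 0.9 * x[:, :1] + 0.15          # toy physical upper bound
    physics_loss = torch.relu(pred - upper_bound).mean()
    loss = data_loss + 5.0 * physics_loss        # knowledge-guided objective
    opt.zero_grad(); loss.backward(); opt.step()
print(float(data_loss), float(physics_loss))
```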
MIT Technology Review (5/13, Hao) publishes an interview with Ayanna Howard, who in March, “began a new position as dean of the college of engineering at Ohio State University.” She is the “first woman to hold the position.” The interview discusses “her career and her latest research” involving artificial intelligence and designing distrust into AI systems to make them safer. When discussing actively designing distrust into an AI system to make it more safe, she said, “We’re actually trying an experiment right now around the idea of denial of service. We don’t have results yet, and we’re wrestling with some ethical concerns. Because once we talk about it and publish the results, we’ll have to explain why sometimes you may not want to give AI the ability to deny a service either.”
VentureBeat (5/10, Wiggers) reports, “At its Think conference this week, IBM introduced Project CodeNet, which the company claims is the largest open source dataset for benchmarking around AI for code.” CodeNet is “approximately 10 times larger than the next most similar dataset, which has 52,000 samples,” and is focused “specifically on the problems of code translation, code similarity, and code constraints. The goal is to advance the development of AI systems that can automatically translate code into another programming language, identify overlaps and similarities between different sets of code, and customize constraints based on a developer’s specific needs and parameters.”
Google VP of Engineering Marian Croak said at the Wall Street Journal’s (5/11, Mickle, Subscription Publication) Future of Everything Festival that the company intends to double the size of its AI ethics research team in the coming years, as personnel changes and research controversies have hurt its credibility. Croak said Alphabet and Google CEO Sundar Pichai has committed to increasing the team’s operating budget.
STAT (5/11, Joseph) reports, “For the myriad applications of artificial intelligence, Eric Schmidt, the onetime Google CEO, sees one area where it’s poised to unleash the most sweeping changes.” Schmidt said on Tuesday, “When I try to market the importance of AI, I say that AI will have its biggest impact in biology and health, because biology is so complicated.” However, “to fully unlock AI’s full potential, there need to be policy changes, Schmidt said in a conversation with Linda Henry, the CEO of Boston Globe Media Partners.”
WHO Releases AI Guidelines for Health
Government Technology
July 12, 2021
A new report from the World Health Organization (WHO) offers guidance for the ethical use of artificial intelligence (AI) in the health sector. The six primary principles for the use of AI as set forth in the report are to protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI. These principles are intended as a foundation for AI stakeholders, including governments, developers, and society. The report, Ethics and Governance of Artificial Intelligence for Health, finds public- and private-sector collaboration necessary to ensure AI’s accountability.
Faces Are the Next Target for Fraudsters
The Wall Street Journal
Parmy Olson
July 7, 2021
Facial recognition systems increasingly are a target for fraudsters. Identity verification company ID.me Inc. found more than 80,000 attempts to trick facial identification verification to claim fraudulent unemployment benefits between June 2020 and January 2021. ID.me's Blake Hall said these attempts involved people wearing masks, using deepfakes, or holding up images or videos of other people. Veridium LLC's John Spencer said fraudsters sometimes try to carry out "presentation attacks" by using a photo of someone's face, cutting out the eyes and using it as a mask. Adversa.ai's Alex Polyakov said the algorithms underpinning these systems need to be updated, or the models need to be trained with a large number of adversarial examples, to protect against such spoofing.
AI's Role in Debugging Code Expected to Grow
The Wall Street Journal
Sara Castellanos
July 6, 2021
Technology companies are developing artificial intelligence (AI)-based tools to debug code as software maintenance becomes ever more challenging. Intel Labs' Justin Gottschlich said developers find it increasingly difficult to identify bugs in code without machine assistance; debugging consumes about half of developers' time, and correcting a single bug can take weeks. Gottschlich said Intel Labs expects to issue two free AI-based software debugging tools for outside developers by year's end. The ControlFlag tool can automatically detect coding errors via statistical analysis and machine learning, and the Machine Inferred Code Similarity tool can automatically recognize code snippets that execute similar functions.
UOC Team Develops Neural Network to Identify Tiger Mosquitoes
Universitat Oberta de Catalunya (Spain)
Santiago Campillo
July 8, 2021
A new technique can learn to identify tiger mosquitoes using a large set of images captured on mobile phones and uploaded to the Mosquito Alert platform by volunteers. Scientists at Spain's Universitat Oberta de Catalunya (UOC) engineered a deep neural network to distinguish among morphologically similar mosquito species, and UOC's Gereziher Adhane said the network performs as well or nearly as well as human experts, and can process vast volumes of images. Adhane also said the algorithm, with modest tweaking, could work with mobile applications, and that the neural network can “make predictions about images of mosquitoes taken using smartphones efficiently and in real time, as has happened with the Mosquito Alert project.”
Meet the Scientist Teaching AI to Police Human Speech
The Washington Post
Drew Harwell
July 1, 2021
Facebook and Google have engineered artificial intelligence (AI) systems capable of understanding dozens of languages with remarkable accuracy through the efforts of scientists like Alexis Conneau. At Facebook, Conneau and others advanced machine learning algorithms' ability to abstract language numerically, eventually training an AI model to process different languages concurrently; the 100-language XLM-R model was almost as accurate as its specialized single-language peers. Conneau's final work for Facebook was on wav2vec-U, an unsupervised speech-recognition system that reads words from audio. Conneau has helped lead research on natural language processing, and spearheaded work in AI that Facebook and others have applied to the online policing of bullying, bigotry, and hate speech. He believes this problem can be addressed only through automation, while critics claim such innovations will just give companies more information on Web users to exploit.
ML Helps Predict When Immunotherapy Will Be Effective
Eindhoven University of Technology (Netherlands)
June 30, 2021
Researchers at the Eindhoven University of Technology (TU/e) in the Netherlands have developed a machine learning model that can predict whether immunotherapy will work for a patient. One type of immunotherapy that involves immune checkpoint blockers (ICB) is effective in only a third of patients. The researchers used computational algorithms and datasets from previous clinical patient care to search the tumor microenvironment for biomarkers to predict patient response to ICB. TU/e's Federica Eduati said, "RNA-sequencing datasets are publicly available, but the information about which patients responded to ICB therapy is only available for a small subset of patients and cancer types." To solve the data problem, the researchers searched the same datasets for substitute immune responses that could serve as indicators of ICB's effectiveness. Eduati said, "Our machine learning model outperforms biomarkers currently used in clinical settings to assess ICB treatments."
Wired (6/8, Simonite) reports on what happened to Google researcher Timnit Gebru, who last November was ordered “to retract her latest research paper – or else remove her name from its list of authors.” The paper “in question was, in Gebru’s mind, pretty unobjectionable.” It “surveyed the known pitfalls of so-called large language models, a type of AI software...that was stoking excitement in the tech industry.” But, after conflict over the paper with a Google executive, Gebru was forced out. Google “maintained – and still does – that she resigned.”
The AP (6/3, O'Brien) reports that computer scientist Lynne Parker “made breakthroughs in getting robots to work together so they could perform difficult missions, like cleaning up after a nuclear disaster, waxing floors or pulling barnacles off a ship.” Her job now is “getting the US government working together – alongside American businesses, research universities and international allies – as director of a new national initiative on artificial intelligence.” She’s America’s first “AI czar, at a time of rising promise and a heavy dose of both hype and fear about what computers can do as they think more like humans.” A first task “for Parker, who took on the role in the waning days of the Trump administration, is adapting to priorities set by the Biden administration.” Those include “confronting the societal risks of AI and putting the technology to work on causes such as health equity and reducing climate change.”
Vox (6/2, Ghaffary) reports that “six months after star AI ethics researcher Timnit Gebru said Google fired her over an academic paper scrutinizing a technology that powers some of the company’s key products, the company says it’s still deeply committed to ethical AI research.” It “promised to double its research staff studying responsible AI to 200 people, and CEO Sundar Pichai has pledged his support to fund more ethical AI projects.” However, some current members of Google’s ethical AI group “told Recode the reality is different from the one Google executives are publicly presenting.” The 10-person group, “which studies how artificial intelligence impacts society, is a subdivision of Google’s broader new responsible AI organization.” They “say the team has been in a state of limbo for months, and that they have serious doubts company leaders can rebuild credibility in the academic community – or that they will listen to the group’s ideas.” Google “has yet to hire replacements for the two former leaders of the team.” Many members “feel so adrift that they convene daily in a private messaging group to complain about leadership, manage themselves on an ad-hoc basis, and seek guidance from their former bosses.”
STAT (6/2, Ross) reports, “Machine learning, a subset of AI driving billions of dollars of investment in the field of medicine, is facing a credibility crisis.” Failure to test AI models on different sources of data – “a process known as external validation – is common in studies published on preprint servers and in leading medical journals. It often results in an algorithm that looks highly accurate in a study, but fails to perform at the same level when exposed to the variables of the real world, such as different types of patients or imaging scans obtained with different devices.” A recent investigation by STAT “found that only 73 of 161 AI products approved by the federal Food and Drug Administration publicly disclosed the amount of data used to validate the product, and just seven reported the racial makeup for their study populations.”
SiliconANGLE (6/1, Deutscher) reports Google on Tuesday “introduced a new infrastructure option for its cloud platform that will enable enterprises to provision instances with Tensor Processing Units, the search giant’s internally developed artificial intelligence chips.” Cloud TPU VMs “are available in preview. Early adopters are using them for tasks ranging from AI-powered healthcare analytics to quantum chemistry.”
Fortune (6/1, Vanian) reports Amazon and “other tech giants [are] desperately hunting for workers who have an expertise in artificial intelligence.” According to the report, Amazon “has many businesses – its core e-commerce division, the Alexa voice-activated digital service, and the AWS cloud computing unit – that depend on machine learning. But there are relatively few computer scientists who know the technology, and those who do are in high demand.” Bratin Saha, a vice president and general manager of machine learning services at Amazon, told Fortune that the company has been requiring potential new programming hires to take machine learning classes because executives “believe they can teach these developers machine learning basics over a few weeks so that they can work on more cutting-edge projects after they’re hired.”
ZDNet (5/31, Ray) discusses how chipmakers are increasingly utilizing AI to spot defects in their products. Applied Materials VP Keith Wells said, “You should naturally want to inspect more, because there are more process steps, more things that can go wrong, but if you look at what has happened, the economics have prohibited our customers from doing that inspection.” A number of tools “are being employed to solve that economic quandary, and one of the areas with the most potential, said Wells, is artificial intelligence, in particular the machine learning form of AI.” Applied “believes more tools such as ExtractAI will augment the work of human fab technicians and thereby help close the yield gap, the difference between desired and actual output.”
VentureBeat (5/23, Dickson) reports, “IBM’s AI research division has released a 14-million-sample dataset to develop machine learning models that can help in programming tasks.” Dubbed Project CodeNet, the “dataset takes its name after ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning.” The “researchers at IBM have tried to create a multi-purpose dataset that can be used to train machine learning models for various tasks.” CodeNet’s creators describe it as a “very large scale, diverse, and high-quality dataset to accelerate the algorithmic advances in AI for Code.”
The Wall Street Journal (5/21, Olson, Subscription Publication) reports that, according to anonymous sources, Google has formally ended its long-running negotiations with DeepMind managers over DeepMind’s demand for greater autonomy, and DeepMind failed to achieve its goal of establishing an independent legal structure. The article examines the meaning of Google’s decision to permanently integrate AI research within the company, and the expected impact on the AI industry more broadly.
SiliconANGLE (5/21) reported that Facebook’s AI research group published a new blog post on Friday, detailing a new way to “build speech recognition systems without using any transcribed audio data to train them.” The blog post read, “Wav2vec-U is the result of years of Facebook AI’s work in speech recognition, self-supervised learning, and unsupervised machine translation.” It continued, “It is an important step toward building machines that can solve a wide range of tasks just by learning from their observations.”
The Washington Times (5/20, Wolfgang) reports on the implications of this month’s “first full-length test” at Tyndall Air Force Base in Florida of the Skyborg “artificial intelligence system” which performed “basic aviation capabilities” through a UAV platform. The systems worked, “but critics say that historic flight may have been a small step toward a doomsday scenario in which AI-powered aircraft inadvertently spark the next human world war.”
TechCrunch (5/14, Coldewey) reports Facebook is looking to improve the performance of AI by using “structured forgetfulness.” TechCrunch says, “Machine learning algorithms, especially during training, now routinely handle millions of data points and ingest terabytes of imagery or language. And because they’re built to constantly compare new data with their accrued knowledge, failing to forget unimportant things means they’re bogged down by constant references to pointless or outdated data points.”
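The article does not spell out Facebook’s implementation, but the general idea can be sketched as a memory store in which each item carries an expiration span and expired items are dropped before any lookup. Everything below, from the class names to the importance-to-span rule, is invented for illustration.

```python
# Toy sketch of "structured forgetfulness": each stored item gets a
# relevance-scored expiration, and lookups only scan items that have not yet
# expired. An illustration of the idea only, not Facebook's implementation.
from dataclasses import dataclass, field

@dataclass
class Memory:
    step_stored: int
    span: int          # how many steps this item stays usable
    content: str

@dataclass
class ExpiringMemoryBank:
    step: int = 0
    items: list = field(default_factory=list)

    def store(self, content, importance):
        # More "important" items are kept longer (a stand-in for a learned span).
        self.items.append(Memory(self.step, span=int(10 * importance), content=content))

    def tick(self):
        self.step += 1
        # Physically drop expired items so later lookups never touch them.
        self.items = [m for m in self.items if self.step - m.step_stored < m.span]

    def recall(self):
        return [m.content for m in self.items]

bank = ExpiringMemoryBank()
bank.store("persistent fact", importance=5.0)   # span 50 steps
bank.store("throwaway detail", importance=0.3)  # span 3 steps
for _ in range(5):
    bank.tick()
print(bank.recall())  # -> ['persistent fact']; the low-importance item expired
```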
VentureBeat (5/18, Wiggers) reports that Google “announced LaMDA, a sophisticated language model that Google says is far better at understanding the context of conversations than leading models today. Google CEO Sundar Pichai says that LaMDA, which was built for dialogue applications, is open domain and designed to converse on any topic. Importantly, the model doesn’t have to be retrained to have another conversation.” In a “related announcement, Google detailed multitask unified model (MUM), a multimodal model that can multitask in order to unlock information in new ways. Trained on over 75 languages at once, MUM can simultaneously understand different forms of information including text, images, and videos.”
Fierce Healthcare (5/19, Landi) reports, “At Google’s annual I/O developer conference Tuesday, the company previewed its AI-powered dermatology assist tool, which is a web-based application” that aims “to help consumers, and potentially doctors, better identify dermatologic conditions using just smartphone or digital cameras.” The tool “marks Google’s first consumer-facing medical device, but the company is currently not seeking approval from the Food and Drug Administration (FDA) for the technology,” according to Google Health Product Manager Peggy Bui, MD. Fierce Healthcare adds, “The new search tool will launch later this year in markets outside the U.S.”
The Hill (5/17, Somvichian-Clausen) reports that admissions officers are often “saddled with the nearly impossible task of efficiently evaluating thousands of applications each school year, with the expectation that their choices will reflect the institution’s standards, grow diversity and that the students chosen will then be inspired enough to enroll and attend classes in the fall.” The issue is that “humans are inherently biased, and schools are now beginning to realize the faults in their traditional approach to admissions.” Surprisingly, the solution “to this issue – to this lack of humanity – might possibly be found through the utilization of artificial intelligence.” The organization Kira Talent “works with learning institutions around the world, in hopes of delivering a more holistic approach to reviewing candidates.” Students “undergo a video interview process” where video- and text-based prompts “lead a person through a series of questions, and the applicant’s answers are then used to evaluate things like leadership potential.”
New Scientist (5/19, Sparkes) reports that new research suggests “an artificial intelligence that interprets images from wearable cameras can identify food and accurately estimate its weight to determine how many calories a person is consuming and what nutrients they are ingesting, which could be useful for automating dietary research.”
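The paper’s methods are not given in this summary, but the downstream arithmetic such a pipeline implies is straightforward: multiply an estimated weight by per-gram nutrient densities for the recognized food. In the sketch below, the recognizer and weight estimator are stubbed out, and the nutrient figures are rough illustrative values rather than numbers from the study.

```python
# Sketch of the nutrient arithmetic a camera-based dietary system implies.
# The vision model is stubbed out as (label, weight) detections; nutrient
# densities are approximate illustrative values.
NUTRIENTS_PER_GRAM = {
    # kcal, protein (g), carbs (g) per gram of food (approximate)
    "banana": (0.89, 0.011, 0.23),
    "rice":   (1.30, 0.027, 0.28),
}

def analyze_meal(detections):
    """detections: list of (label, estimated_weight_g) pairs, as a vision
    model might emit for one wearable-camera frame."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0}
    for label, grams in detections:
        kcal, protein, carbs = NUTRIENTS_PER_GRAM[label]
        totals["kcal"] += kcal * grams
        totals["protein_g"] += protein * grams
        totals["carbs_g"] += carbs * grams
    return totals

print(analyze_meal([("banana", 118), ("rice", 150)]))
# -> roughly {'kcal': 300.0, 'protein_g': 5.3, 'carbs_g': 69.1}
```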
Public Database of AI-Predicted Protein Structures Could Transform Biology
Science
Robert F. Service
July 22, 2021
A team of researchers says it has used a new artificial intelligence (AI) algorithm to forecast the three-dimensional structures of 350,000 proteins from humans and 20 model organisms. The team at U.K.-based AI developer DeepMind (which is owned by Alphabet, the parent of Google) developed the AlphaFold computer model, which it says has generated structures for almost 44% of all human proteins, encompassing nearly 60% of the amino acids encoded by the human genome. Researchers at the European Molecular Biology Laboratory (EMBL) in Germany compiled a freely available public database of DeepMind's new protein predictions, which is likely to help biologists figure out how thousands of unknown proteins operate. EMBL's Edith Heard said, "We believe this will be transformative to understanding how life works."
Method Predicts Whether COVID-19 Clinical Trials Will Succeed
Florida Atlantic University
Gisele Galoustian
July 21, 2021
Scientists at Florida Atlantic University (FAU) used a novel method combining machine learning algorithms and ensemble learning to predict completion versus cessation of COVID-19 clinical trials. The researchers compiled a testbed of 4,441 COVID-19 trials from the ClinicalTrials.gov national registry and database, then generated a 693-dimensional feature representation of each clinical trial. By applying ensemble learning and sampling, the model achieved what the researchers characterized as high accuracy in clinical trial prediction. FAU's Stella Batalama said this new technique "will be helpful to design computational approaches to predict whether or not a COVID-19 clinical trial will be completed, so that stakeholders can leverage the predictions to plan resources, reduce costs, and minimize the time of the clinical study."
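As a rough illustration of the recipe described above (not FAU's actual code), an ensemble of base learners can be trained on resampled data so that completed and stopped trials are balanced. The random features below stand in for the 693-dimensional trial representations, and the class split is an assumed placeholder.

```python
# Minimal sketch of an ensemble-plus-sampling pipeline: several base learners
# combined by soft voting, trained after downsampling the majority class.
# Features and labels are random placeholders, not ClinicalTrials.gov data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

rng = np.random.default_rng(1)
X = rng.normal(size=(4441, 693))          # one row per registered trial
y = (rng.random(4441) < 0.8).astype(int)  # assumed ~80% completed, 20% stopped

# Downsample the majority class so the learners see a balanced training set.
completed, stopped = np.where(y == 1)[0], np.where(y == 0)[0]
keep = np.concatenate(
    [resample(completed, n_samples=len(stopped), replace=False, random_state=0),
     stopped])
Xb, yb = X[keep], y[keep]

ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=8)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
], voting="soft")
ensemble.fit(Xb, yb)
print(ensemble.predict_proba(X[:3]))  # completion probabilities for 3 trials
```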
Will AI Grade Your Next Test?
The New York Times
Cade Metz
July 20, 2021
Stanford University researchers have developed an artificial intelligence (AI) system designed to provide automated feedback to students taking the online Code in Place course. The researchers trained a neural network to analyze computer code using examples from a decade's worth of midterm exams featuring programming exercises. After the system offered 16,000 pieces of feedback to students this spring, the researchers found students agreed with the AI feedback 97.9% of the time, and with feedback from human instructors 96.7% of the time. Stanford's Chris Piech stressed that the system is not intended to replace instructors, but to reach more students than they could on their own.
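The article does not disclose the system's architecture, but the training setup it implies (past submissions paired with instructor feedback, a model fit to those pairs, then applied to new code) can be sketched in miniature. The snippets, labels, and featurization below are all invented for illustration.

```python
# Toy sketch of learning feedback from labeled past submissions. A bag-of-
# tokens featurizer plus a linear classifier stands in for the (unpublished)
# Stanford neural network; examples and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_submissions = [
    "for i in range(10): print(i)",
    "while True: pass",
    "def f(x): return x * 2",
    "print('done')",
]
feedback_labels = ["ok", "infinite_loop", "ok", "missing_loop"]

model = make_pipeline(
    CountVectorizer(token_pattern=r"\w+"),  # crude code tokenizer
    LogisticRegression(max_iter=1000),
)
model.fit(past_submissions, feedback_labels)
print(model.predict(["while True: x = x"]))  # feedback label for new code
```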
How AI Is Fighting Wildfires
The New York Times
Jill Cowan
July 15, 2021
The University of California, San Diego's Ilkay Altintas and colleagues have spent eight years helping fire managers and scientists combat wildfires through the WIFIRE Lab. Such efforts involve combining vast troves of information with artificial intelligence to rapidly predict fires' progression, and help plan countermeasures. Examples include a fire map with an intuitive interface, which Altintas said was used to help fight every major fire in California; and the BurnPro 3D product, which displays three-dimensional images of vegetation down to a one-meter resolution. Altintas said one of the lab's primary goals is to make data and data analysis accessible to a wide range of collaborators nationwide, since that is the only way researchers and officials can keep pace with evolving environmental crises.
AI System Developed to Diagnose Heart Problems
The Jerusalem Post
July 12, 2021
Researchers at the Technion – Israel Institute of Technology have developed an artificial intelligence (AI) system that can diagnose cardiac problems from electrocardiograms (ECGs). The AI system uses an augmented neural network trained on more than 1.5 million ECG tests from hundreds of patients worldwide. The system is more accurate in reading ECGs than humans, and can detect pathological conditions human cardiologists cannot. For instance, the system can identify patients at risk of arrhythmia, which can lead to heart attacks and strokes, even if the condition does not show up in the ECG. The AI explains its decisions using official cardiology terminology.
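The Technion network's details are not given in this summary; as a generic illustration, ECG classifiers of this kind are often built as 1-D convolutional networks over fixed-length waveforms, along the lines of the toy model below (all layer sizes, the sampling rate, and the class count are assumptions).

```python
# A generic 1-D CNN sketch for ECG classification, not the Technion
# architecture: convolutions slide over the waveform, pooling collapses the
# time axis, and a linear head scores rhythm classes.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one vector
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, samples)
        h = self.features(x).squeeze(-1)
        return self.classify(h)      # logits over rhythm classes

ecg = torch.randn(8, 1, 5000)        # 8 ten-second traces at an assumed 500 Hz
print(TinyECGNet()(ecg).shape)       # -> torch.Size([8, 4])
```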
CNBC (6/29, Novet) reports Microsoft on Tuesday “announced an artificial intelligence system that can recommend code for software developers to use as they write code.” The system, called GitHub Copilot, “draws on source code uploaded to code-sharing service GitHub, which Microsoft acquired in 2018, as well as other websites.” Microsoft and GitHub “developed it with help from OpenAI.”
FedScoop (6/28, Barnett) reports DOD will produce guidance “for its artificial intelligence ethical principles by late August, six months after an initial self-directed deadline for the creation of the guidance.” Officials had said “by February 2021 the DOD would detail how the bureaucracy should implement its five AI principles, which state that the technology should be responsible, equitable, traceable, reliable and governable.” But that date has “come and gone without any such document detailing how the bureaucracy should translate the principles into their daily work.” Alka Patel, head of responsible AI at the Joint AI Center, said during a press conference Thursday, “We recognize the urgency around this work. We are making progress.”
Wired (6/22, Nast) reports that the National Institute of Standards and Technology (NIST) “is a federal agency best known for measuring things like time or the number of photons that pass through a chicken.” Now NIST “wants to put a number on a person’s trust in artificial intelligence.” Trust is “part of how we judge the potential for danger, and it’s an important factor in the adoption of AI.” As AI “takes on more and more complicated tasks, officials at NIST say, trust is an essential part of the evolving relationship between people and machines.” In a research paper, “creators of the attempt to quantify user trust in AI say they want to help businesses and developers who deploy AI systems make informed decisions and identify areas where people don’t trust AI.” NIST “views the AI initiative as an extension of its more traditional work establishing trust in measurement systems.” Public comment is “being accepted until July 30.”
CNBC (6/18, Shead) reports, “Computer scientists are questioning whether DeepMind, the Alphabet-owned U.K. firm that’s widely regarded as one of the world’s premier AI labs, will ever be able to make machines with the kind of ‘general’ intelligence seen in humans and animals.” In its search “for artificial general intelligence, which is sometimes called human-level AI, DeepMind is focusing a chunk of its efforts on an approach called ‘reinforcement learning.’” DeepMind, “which was acquired by Google in 2014 for around $600 million, believes that AI systems underpinned by reinforcement learning could theoretically grow and learn so much that they break the theoretical barrier to AGI without any new technological developments.” The company’s researchers “argued in a paper submitted to the peer-reviewed Artificial Intelligence journal last month that ‘Reward is enough’ to reach general AI.” The “researchers claim that if you keep ‘rewarding’ an algorithm each time it does something you want it to, which is the essence of reinforcement learning, then it will eventually start to show signs of general intelligence.”
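The mechanism the article describes, rewarding an algorithm each time it does something you want, is exactly what tabular Q-learning implements. The toy corridor below illustrates only that mechanism; it says nothing about whether reward alone scales to general intelligence.

```python
# Tabular Q-learning on a 5-state corridor: reward is paid only for reaching
# the right end, and the learned policy drifts toward the rewarded behavior.
# A minimal illustration of reinforcement learning, not DeepMind's work.
import random

n_states, actions = 5, (-1, +1)          # states 0..4; move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

def greedy(s):
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

for _ in range(500):                      # episodes
    s = 0
    while s != n_states - 1:
        a = random.choice(actions) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0  # pay only for the goal
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(n_states - 1)])
# typically -> [1, 1, 1, 1]: every state's action heads toward the reward
```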
ZDNet (6/9, Chanthadavong) reports, “Researchers from Google have uncovered that artificial intelligence (AI) could potentially design computer chips more efficiently than human engineers.” In a research paper published in Nature, Google Research said “that designing a physical layout of a computer chip, known as floorplanning, is a complex and time-consuming task but essential.” To help obtain “a more efficient outcome, the researchers developed a series of algorithms to treat chip floorplanning as a game, components as pieces, the canvas on which the components sit as the board, and the winning results were based on the performance which was assessed using evaluation metrics based on reference datasets of 10,000 chip placements.” Researchers found that “during a six-hour period, the algorithms were able to develop chips that were similar or more efficient than those designed by humans, who typically take months.”
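The game framing in the quote maps cleanly onto code: pieces are components, the board is a grid canvas, a move places the next component, and the score is an evaluation metric such as wirelength. The toy placer below mirrors only that structure; Google's agent, netlists, and metrics are far richer, and it learns the placement policy where this sketch plays greedily.

```python
# Toy "floorplanning as a game": place components (pieces) on a grid canvas
# (board), one move each, scoring by negative total wirelength between
# connected components. Components and nets here are invented examples.
from itertools import product

GRID = 8                                  # 8x8 canvas
nets = [("cpu", "cache"), ("cpu", "io"), ("cache", "dram")]
components = ["cpu", "cache", "io", "dram"]

def wirelength(placement):
    """Manhattan distance summed over nets whose endpoints are both placed."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in nets if a in placement and b in placement)

placement = {}
for comp in components:                   # one "move" per component
    free = [c for c in product(range(GRID), repeat=2)
            if c not in placement.values()]
    # Greedy play: pick the cell that keeps the running wirelength lowest.
    # An RL agent would learn this choice instead of hard-coding it.
    placement[comp] = min(free, key=lambda c: wirelength({**placement, comp: c}))

print(placement, "total wirelength:", wirelength(placement))
```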
Pew Research Center (6/16) “asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030.” Some 602 technology innovators, developers, business and policy leaders, researchers and activists “responded to this specific question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” Beth Noveck, director, NYU Governance Lab and its MacArthur Research Network on Opening Governance, responded, “Successful AI applications depend upon the use of large quantities of data to develop algorithms. But a great deal of human decision-making is also involved in the design of such algorithms, beginning with the choice about what data to include and exclude. Today, most of that decision-making is done by technologists working behind closed doors on proprietary private systems.” Noveck continued, “If we are to realize the positive benefits of AI, we first need to change the governance of AI and ensure that these technologies are designed in a more participatory fashion with input and oversight from diverse audiences, including those most affected by the technologies.”
In an interview with Forbes (6/9, Sayej), Shalini Kantayya, director of the PBS documentary “Coded Bias,” discussed big tech and the future of AI. Kantayya said, “Everything we love, everything we care about as citizens of a democracy is going to be totally transformed by artificial intelligence – in fact, is in the process of being transformed. AI systems are often the most invisible, automated, first-line gatekeepers to every opportunity we have, and they are rarely vetted for bias or even accuracy.” Kantayya also stated, “What I learned in making the film – which stands on the foundation of probably three decades of scholarship and activism and research, mostly by women, people of color, and LGBTQ communities who have been speaking the truth about what’s happening in Silicon Valley – is that these technologies have not been vetted for racial bias, for gender bias, or even accuracy or fairness. And they exist in these black boxes that we can’t examine as a society.”
The Wall Street Journal (6/10, Tracy, Subscription Publication) reports the Biden Administration on Thursday launched the National Artificial Intelligence Research Resource Task Force, a 12-member group from academia, government and industry tasked with giving artificial intelligence researchers greater access to government data about Americans. The Journal reports the task force is part of the Administration’s effort to keep the US at the forefront of the new technology.