
Dr. T's AI brief


dtau...@gmail.com

Oct 30, 2021, 8:44:42 AM
to ai-b...@googlegroups.com

AI Can Detect Deepfake Face Because Its Pupils Have Jagged Edges
New Scientist
Chris Stokel-Walker
September 10, 2021


A computer model developed by researchers at the University at Albany in New York can determine whether an image of a face is a deepfake by examining its pupils; the model will deem the image a fake if the pupils are not circular or elliptical. If the image passes that test, the model checks whether the pupils have smooth or jagged edges, with the latter indicating a deepfake. The University at Albany's Siwei Lyu said, "Even though [generative adversarial network] GAN models are very powerful, they don't really understand human biology very well. A lot of these very fine details won't be represented by the model effectively." Although the shape of one's pupils can be affected by certain diseases and infections, Lyu noted that such cases are rare.
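
The geometric test described above can be approximated with standard image tools. Below is a minimal sketch assuming a pre-cropped grayscale eye image and an illustrative intensity threshold; it is not the Albany team's code, only a way to picture the "elliptical and smooth versus jagged" check.

import cv2
import numpy as np

def pupil_regularity(eye_gray, thresh=40):
    # Segment the dark pupil region (fixed threshold for illustration only).
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:                 # cv2.fitEllipse needs at least five points
        return None
    (_, _), (major, minor), _ = cv2.fitEllipse(pupil)
    ellipse_area = np.pi * major * minor / 4.0
    # Ratio near 1.0: smooth, elliptical boundary (consistent with a real photo);
    # a much lower ratio suggests a jagged, non-elliptical boundary (possible GAN artifact).
    return cv2.contourArea(pupil) / ellipse_area

# score = pupil_regularity(cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE))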

Full Article

*May Require Paid Registration

 

 

AI Can Now Write Its Own Computer Code. That's Good News for Humans.
The New York Times
Cade Metz
September 9, 2021


Experts suggest new artificial intelligence (AI) technology that can write its own programs poses no threat to coders' jobs, and may enhance human productivity. Research laboratory OpenAI trained its Codex system on a massive dataset of prose and code, so users can ask the system in plain English to write programs that perform specific functions; it can write programs in 12 computer languages and translate between them, although it is highly error-prone and cannot think on its own. According to OpenAI, Codex generates correct code 37% of the time, which makes it useful only to seasoned coders. Tom Smith of AI startup Gado Images said Codex could help train a new generation of programmers, as well as help experienced developers accelerate everyday tasks by handling basic coding elements or pointing them toward new concepts.
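
For readers curious what "asking in plain English" looks like in practice, here is a sketch using the OpenAI Python client roughly as it existed when the article ran; the engine name, prompt, and parameters are assumptions for illustration, not details from the piece.

import openai

openai.api_key = "YOUR_API_KEY"      # assumed to be supplied by the user

# Describe the desired program in plain English (illustrative prompt).
response = openai.Completion.create(
    engine="davinci-codex",          # Codex engine name during the 2021 beta; may differ today
    prompt="# Python 3\n# Write a function that returns the n-th Fibonacci number\n",
    max_tokens=150,
    temperature=0,
)
print(response["choices"][0]["text"])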

Full Article

*May Require Paid Registration

 

 

Coughs Say Much About Your Health, If Your Smartphone Is Listening
The Wall Street Journal
Betsy McKay
September 8, 2021


Many researchers are trying to train artificial intelligence (AI) to enable smartphones to analyze patterns in a person’s coughs, as a means of diagnosing and treating deadly respiratory diseases like tuberculosis (TB) and COVID-19. Peter Small at acoustic AI developer Hyfe said a cough's sound and frequency contain valuable information, because coughs resulting from different diseases produce audible distinctions. Hyfe and other developers of what Small calls acoustic epidemiology are compiling datasets and training AI to recognize the different types of coughs. Hyfe offers free smartphone applications for consumers and researchers that use AI to detect and track how frequently someone coughs. The apps operate continuously on a smartphone, recording half-second clips when the AI perceives a cough sound. The University of California, San Francisco's Adithya Cattamanchi is using the researcher app to record coughs in several countries, as the first steps toward building a TB cough database.
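
As a rough illustration of the approach, the sketch below summarizes short audio clips with spectral features and trains an off-the-shelf classifier to separate coughs from other sounds; the features, classifier, and file names are assumptions, not Hyfe's pipeline.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path, sr=16000):
    # Load roughly half a second of audio and summarize it with MFCC statistics.
    y, _ = librosa.load(path, sr=sr, duration=0.5)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled clips: 1 = cough, 0 = other household sound.
# X = np.array([clip_features(p) for p in clip_paths])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
# clf.predict(clip_features("new_clip.wav").reshape(1, -1))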

Full Article

*May Require Paid Registration

 

 

ML Approach Better at Spotting Enzymatic Metals in Proteins
University of Kansas News
September 8, 2021


The new Metal Activity Heuristic of Metalloprotein and Enzymatic Sites (MAHOMES) machine learning (ML) model distinguishes between enzymatic and non-enzymatic metals in proteins with 92.2% precision. University of Kansas (KU) researchers developed MAHOMES to differentiate non-chemical reaction-inducing metalloproteins from reaction-enabling metalloenzymes, which closely resemble each other. KU's Ryan Feehan assembled a structural dataset of enzymatic and nonenzymatic metalloprotein sites, then tested ML differentiation strategies with KU's Meghan Franklin. They trained MAHOMES to analyze and predict whether a cleft in a protein was enzymatic, and the model realized 90.1% recall in distinguishing active from inactive sites. KU's Joanna Slusky thinks this approach could help boost enzymes' usefulness in the production of life-saving drugs and other industrial processes.
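
The precision and recall numbers quoted above come from a standard binary-classification evaluation. The sketch below shows that evaluation pattern on placeholder data; the features, labels, and model are illustrative stand-ins, not the MAHOMES feature set or pipeline.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))     # placeholder per-site features
y = rng.integers(0, 2, size=500)   # placeholder labels: 1 = enzymatic, 0 = non-enzymatic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
pred = GradientBoostingClassifier().fit(X_tr, y_tr).predict(X_te)
print("precision:", precision_score(y_te, pred))   # MAHOMES reports 92.2% on its real data
print("recall:   ", recall_score(y_te, pred))      # MAHOMES reports 90.1% on its real data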

Full Article

 

 

This Technology Can Predict Early Alzheimer's with Near 100% Accuracy
EuroNews
Pascale Davis
September 6, 2021


A method for analyzing brain images developed by researchers at Kaunas University of Technology (KTU) in Lithuania is more than 99% accurate in predicting the possible onset of Alzheimer's disease. The method, which uses artificial intelligence and deep learning, was used to analyze functional MRI images from 138 subjects. The researchers determined their technique performed better than traditional methods in terms of accuracy, sensitivity, and specificity. KTU's Rytis Maskeliunas said, "Obviously, such high numbers are not indicators of true real-life performance, but we're working with medical institutions to get more data."

Full Article

 

 

Researchers Say They've Found a Wildly Successful Bypass for Face Recognition Tech
Gizmodo
Lucas Ropek
August 4, 2021


Computer scientists at Israel's Tel Aviv University (TAU) say they have developed a "master face" method for circumventing a large number of facial recognition systems, by applying artificial intelligence to generate a facial template. The researchers say the technique exploits such systems' usage of broad sets of markers to identify specific people; producing facial templates that match many such markers essentially creates an omni-face that can bypass numerous safeguards. The researchers created the master face by plugging an algorithm into a generative adversarial network that builds digital images of artificial human faces. The TAU team said testing showed the template was able to unlock over 20% of the identities in an open source database of 13,000 facial images operated by the University of Massachusetts.

Full Article

 

 

Researchers Use AI to Unlock the Secrets of Ancient Texts
Notre Dame News
Jessica Sieff
August 3, 2021


University of Notre Dame (ND) researchers are developing an artificial neural network to interpret ancient handwriting based on human perception to enhance deep learning transcription. ND's Walter Scheirer said the documents are written in long-unused styles dating back centuries in archaic languages; the project aims "to automate transcription in a way that mimics the perception of the page through the eyes of the expert reader and provides a quick, searchable reading of the text." Scheirer's team combined traditional machine learning techniques with visual psychophysics, the measurement of links between physical stimuli and mental phenomena. The researchers studied digitized ninth-century Latin manuscripts written in Switzerland's Cloister of St. Gall, with readers inputting manual transcriptions into a software interface while their reaction times were measured. Said Scheirer, "We then inform the network of common difficulties in the perception of these characters and can make corrections based on those measurements."
 

Full Article

 

 

Endlessly Changing Playground Teaches AIs to Multitask
MIT Technology Review
Will Douglas Heaven
July 30, 2021


Alphabet’s DeepMind Technologies has developed a videogame-like three-dimensional world, called XLand, that allows artificial intelligence (AI) agents to learn skills by experimenting and exploring; those skills can then be applied to tasks the agents have not performed before. XLand is managed by a central AI that controls the environment, game rules, and number of players, with reinforcement learning helping both the playground manager and the players improve over time. The AI players played 700,000 different games in 4,000 different worlds and performed 3.4 million unique tasks. Rather than learning the best thing to do in each scenario, the AI players experimented until they completed the task at hand.

Full Article

 

 

Scientists Share Wiring Diagram Tracing Connections for 200,000 Mouse Brain Cells
GeekWire
Alan Boyle
July 29, 2021


A multi-institutional team of neuroscientists spent five years and $100 million developing a high-resolution model detailing the connections between 200,000 mouse brain cells. Created under the federally-funded Machine Intelligence From Cortical Networks (MICrONS) program, the dataset encompasses 120,000 neurons and about 80,000 other types of brain cells in a cubic millimeter of a mouse brain's visual neocortex. The researchers recorded neural activity patterns as the mouse watched images or films of natural scenes, then captured 150 million images of fractionated brain tissue using electron microscopes. Each cell and its internal structure were mapped using machine learning techniques. R. Clay Reid at Seattle’s Allen Institute for Brain Science said, "The final step is to interpret this network, at which point we may be able to say we can read the brain's program."

Full Article

 

 

Platform Teaches Nonexperts to Use ML
Cornell Chronicle
Louis DiPietro
July 28, 2021


An interactive machine learning (ML) platform developed by Cornell University scientists is designed to train nonexperts to use algorithms effectively, efficiently, and ethically. Cornell's Swati Mishra said, "If we design machine learning tools correctly and give enough agency to people to use them, we can ensure their knowledge gets integrated into the machine learning model." Said Cornell's Jeff Rzeszotarski, "While our eventual goal is to help novices become advanced machine-learning users, providing some 'training wheels' through transfer learning can help novices immediately employ machine learning for their own tasks." Added Mishra, “We as researchers and designers have to mitigate user perceptions of what machine learning is. Any interactive tool must help us manage our expectations.”

Full Article

 

dtau...@gmail.com

Oct 31, 2021, 8:53:54 AM
to ai-b...@googlegroups.com

How to Map a Fly Brain in 20 Million Easy Steps
The New York Times
Emily Anthes
October 26, 2021


Scientists at the Howard Hughes Medical Institute's Janelia Research Campus and Google have been mapping the fruit fly brain connectome since 2014. The researchers initially segmented the fly's brain and imaged the slices using focused-ion beam scanning electron microscopy, then re-stitched the millions of images into a three-dimensional volume. Google researchers used machine learning algorithms to identify each individual neuron and trace its branches, then the Janelia team pinpointed the synapses and vetted and refined the connectivity diagrams. Analysis enabled the researchers to identify many new neuron types and locate synapses that appear to help the insect navigate. The freely available online connectome includes roughly 25,000 neurons and 20 million synapses. Rockefeller University's Cori Bargmann called the project "a tremendous step toward the goal of working out the connectivity of the brain."
 

Full Article

 

 

ML Reveals Brain Networks Involved in Child Aggression
YaleNews
October 26, 2021

Yale University researchers used machine learning (ML) to expose brain connectivity disruptions in children exhibiting aggression. The researchers compiled functional magnetic resonance imaging (fMRI) data from subjects observing images of faces making calm or fearful expressions; neural connections identified using ML helped to differentiate children with and without aggressive-behavior histories. The researchers found brain-network patterns associated with social and emotional processes could anticipate aggression, as confirmed in a separate dataset. Abnormal connectivity to the dorsolateral prefrontal cortex consistently predicted aggression when tested in subgroups of children demonstrating aggressive behaviors and disorders like anxiety, attention-deficit/hyperactivity disorder, and autism. Yale's Karim Ibrahim suggested this "may represent a neural marker of aggression that can be leveraged in clinical studies."
 

Full Article

 

 

AI Can Turn Collection of 2D Images into Explorable 3D World
New Scientist
Chris Stokel-Walker
October 25, 2021


A neural network developed by researchers at Germany's University of Erlangen-Nuremberg can convert standard two-dimensional images into explorable three-dimensional (3D) worlds. Using still images and a rough 3D model of a scene created using COLMAP, an off-the-shelf software product, the neural network can determine what the scene would look like from any angle. The smoothest examples of 3D worlds were created using 300 to 350 images of a scene taken from different angles. Said Erlangen-Nuremberg’s Darius Rückert, "The more images you have, the better the quality. The model cannot create stuff it hasn't seen." Abound Labs' Tim Field said the system is "proof that automated photorealism is possible."
 

Full Article

 

 

A Way to Solve the 'Hardest of the Hard' Computer Problems
Ohio State News
Jeff Grabmeier
September 21, 2021


Scientists have vastly accelerated the speed and efficiency of reservoir computing to tackle some of the most difficult information processing challenges. An Ohio State University (OSU) team used next-generation reservoir computing on a Lorenz weather prediction task and found it 33 to 163 times faster than the current-generation model; when the two were matched for forecasting accuracy, the new approach was about a million times faster. OSU's Daniel Gauthier said it achieved this improvement using the equivalent of just 28 neurons, versus 4,000 for the current-generation model. Gauthier credited the speedup to the new model's reduced warmup and training time, adding, "What's exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient."
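
Next-generation reservoir computing replaces the large random reservoir with a short feature vector of time-delayed observations and their products, read out by ridge regression. The sketch below applies that general recipe, as described in the underlying literature rather than the OSU group's exact code, to a simulated Lorenz system.

import numpy as np
from itertools import combinations_with_replacement
from scipy.integrate import solve_ivp
from sklearn.linear_model import Ridge

# Simulate the Lorenz system to produce training data.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8/3):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.025
t_eval = np.arange(0, 50, dt)
X = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9).y.T

# Feature vector: constant + k time-delayed states + their quadratic products.
# With k = 2 delays of a 3-variable system this gives 1 + 6 + 21 = 28 features,
# consistent with the "28 neurons" figure quoted above.
k = 2
def feature_vec(window):                     # window = the k most recent states
    lin = np.concatenate(window)
    quad = [a * b for a, b in combinations_with_replacement(lin, 2)]
    return np.concatenate([[1.0], lin, quad])

F = np.array([feature_vec([X[i - j] for j in range(k)]) for i in range(k - 1, len(X) - 1)])
target = X[k:] - X[k - 1:-1]                 # learn the one-step increment
W = Ridge(alpha=1e-6, fit_intercept=False).fit(F, target)

# One-step forecast from the most recent window of observations:
pred_next = X[-1] + W.predict(feature_vec([X[-1], X[-2]]).reshape(1, -1))[0]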

Full Article

 

 

Preparing for a Future Pandemic with AI
Pacific Northwest National Laboratory
Melissae Fellet
September 21, 2021


Artificial intelligence (AI)-driven investigations into COVID treatments by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) are yielding insights for application to future pandemic responses. PNNL's Jeremy Zucker and colleagues used a counterfactual AI framework to simulate biochemical data collected from hypothetical patients with severe COVID-19 infections; each fictitious patient had different viral loads, received a different dose of a drug, and either recovered or died. The analysis delivered more precise information about the drug's potential benefit to individuals, compared with algorithms that just predicted average patient post-treatment outcomes. Researchers at PNNL and the University of Washington also combined high-throughput biochemical measurements and AI-based screening to extract one molecule with promising antiviral activity against SARS-CoV-2 out of 13,000.

Full Article

 

 

DRNets Can Solve Sudoku, Speed Scientific Discovery
Cornell Chronicle
Tom Fleischman
September 17, 2021


Deep Reasoning Networks (DRNets) developed by scientists at Cornell University and the California Institute of Technology combine deep learning with constraint reasoning, knowledge of a subject's boundaries and rules. The framework can solve problems with little input data. The team tested the framework by having it de-mix overlapping handwritten Sudoku puzzles, tasking it to separate the puzzles into two solved Sudokus without any training data (which it did with nearly 100% accuracy). The researchers then had the framework automate crystal-structure phase mapping of solar-fuels materials, using x-ray diffraction patterns. Cornell's Carla Gomes said, "Verifying that a chemical system solution satisfies the physics rules is easier than producing it, the same way checking that a completed Sudoku is correct is easier than completing it."
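
Gomes's Sudoku analogy is easy to make concrete: verifying a completed grid takes a few lines of code, while producing one is a search problem. A minimal checker, for illustration:

def is_valid_sudoku(grid):
    # grid: 9x9 list of lists of ints; every row, column, and 3x3 box must hold 1-9 exactly once.
    units = [grid[r] for r in range(9)]                                     # rows
    units += [[grid[r][c] for r in range(9)] for c in range(9)]             # columns
    units += [[grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
              for br in range(0, 9, 3) for bc in range(0, 9, 3)]            # boxes
    return all(sorted(u) == list(range(1, 10)) for u in units)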

Full Article

 

 

'Whoop'—Autonomous Method Precisely Detects Whale Vocalizations
Florida Atlantic University News Desk
Gisele Galoustian
September 15, 2021


Florida Atlantic University (FAU) scientists applied artificial intelligence and machine learning (ML) in designing a technique for detecting the up-call or "whoops" vocalizations of the North Atlantic Right whale. The system uses Multimodal Deep Learning (MMDL) algorithms to assess acoustic recordings for the presence of up-calls, outperforming conventional ML methods. The researchers confirmed the MMDL model's efficacy for Right whale up-call detection with Cornell University's dataset. FAU's Laurent M. Chérubin said, "The near-zero false-positive, false-negative and false alarm rates indicate that this new MMDL detector could be a powerful tool in the detection and monitoring of the low density, endangered North Atlantic Right Whale, especially in environments with high acoustic-masking."
 

Full Article

 

 

Brain-Inspired AI Enables Future Medical Implants
IEEE Spectrum
Rebecca Sohn
September 10, 2021


An international team of researchers developed a biocompatible artificial intelligence system from networks of polymer fibers that can detect and categorize abnormal electrical signals in the human body. The researchers fabricated the networks from a carbon-based material called PEDOT and immersed them in an electrolyte solution that replicates the inside of the human body, where they operate as organic electrochemical transistors. These transistors convert electrical inputs from the body into nonlinear signals, akin to the binary code computers use, making them suitable for computation. In tests, the system successfully distinguished irregular from regular heartbeats.

Full Article

 

 

Exact Symbolic AI for Better Assessment of AI Fairness
MIT News
Rachel Paiste
August 9, 2021


A new artificial intelligence programming language can evaluate algorithmic fairness faster and more precisely than other available tools, thanks to the work of Massachusetts Institute of Technology researchers. The Sum-Product Probabilistic Language (SPPL) returns fast, precise answers to probabilistic inference questions, and only permits users to write probabilistic algorithms for which it can automatically provide exact probabilistic inference outcomes; users also can check inference's expected speed to avoid writing slow programs. Boston College's Jean-Baptiste Tristan said, "SPPL offers improved flexibility and trustworthiness over other PPLs on this challenging and important class of problems due to the expressiveness of the language, its precise and simple semantics, and the speed and soundness of the exact symbolic inference engine."

Full Article

 

 

U.S. Prisons Mull AI to Analyze Inmate Phone Calls
Reuters
David Sherfinski; Avi Asher-Schapiro
August 9, 2021


The U.S. House Appropriations Committee's push to study the use of artificial intelligence (AI) to analyze prison inmates' phone calls has prisoner advocates and families warning of risks of error, misunderstandings, and racial bias. Several state and local prisons have already begun using such technology, and the House panel is urging the Department of Justice to consider potential federal utilization and to identify shortcomings in the information the tech generates. The Oxford, AL police department has deployed Verus software from LEO Technologies, which uses Amazon Web Services’ natural language processing and transcription technology to process and flag prisoner calls. Oxford Police Chief Bill Partridge said such surveillance has helped local forces solve cold case homicides and has prevented suicides. Critics warn of tools potentially amplifying racial bias; for example, a Stanford/Georgetown University analysis found Amazon's automatic speech recognition software committed significantly more errors for black speakers than white speakers.
 

Full Article

 

 

Model Predicts COVID-19 Outbreak Two Weeks Ahead of Time
Florida Atlantic University
Gisele Galoustian
August 6, 2021


Researchers at Florida Atlantic University (FAU) and LexisNexis Risk Solutions have crafted a long short-term memory (LSTM) deep-learning model that could potentially predict a COVID-19 outbreak two weeks in advance. The team blended driving-mobility data compiled by the Apple Maps application, COVID-19 statistics, and county-level demographics from 531 U.S. counties. Researchers trained the model to record the impact of government responses and age on COVID-19 cases and viral spread, respectively. Results indicated that average daily cases declined as the retiree percentage expanded and increased as the youth percentage grew. FAU's Stella Batalama said the research "has significant applications for effective management of the pandemic and future outbreaks, which has the potential to save lives and keep our economies thriving."
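
A bare-bones sketch of the modeling pattern described above: an LSTM that maps a two-week window of county-level features to the case count two weeks later. The window length, feature set, and synthetic data are assumptions for illustration, not the FAU/LexisNexis model.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Placeholder per county-day features: [mobility index, daily cases, pct retirees, pct youth]
series = rng.random((531, 200, 4)).astype("float32")   # 531 counties, 200 days

window, horizon = 14, 14
X, y = [], []
for county in series:
    for t in range(len(county) - window - horizon):
        X.append(county[t:t + window])
        y.append(county[t + window + horizon - 1, 1])   # daily cases two weeks past the window
X, y = np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, 4)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=256, validation_split=0.1)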
 

Full Article

 

 

Neural Network Model Unravels Why People with Autism Have Difficulty Interpreting Facial Expressions
News-Medical Life Sciences
August 5, 2021

Researchers at Japan's Tohoku University developed an artificial neural network model that can help explain the difficulty people with autism spectrum disorder have in interpreting facial expressions. The model takes into account predictive processing theory, which states that the brain predicts the next sensory stimulus and adapts using such sensory information as facial expressions to reduce errors in its predictions. The model learned to predict the movement of parts of the face using videos of facial expressions and was able to generalize facial expressions not provided during training. However, the model's ability to generalize decreased along with the heterogeneity of activity in the neural population, restraining emotional cluster formation in higher-level neurons, similar to what occurs with autism spectrum disorder. Tohoku's Yuta Takahashi said, "The study will help advance developing appropriate intervention methods for people who find it difficult to identify emotions."
 

Full Article

 

dtau...@gmail.com

Oct 31, 2021, 8:00:23 PM
to ai-b...@googlegroups.com

Breakthrough Proof Clears Path for Quantum AI
Los Alamos National Laboratory News
October 15, 2021


Scientists at the U.S. Department of Energy's Los Alamos National Laboratory (LANL) have devised a proof that convolutional neural networks can always be trained on quantum computers, avoiding the threat of "barren plateaus" in optimization problems. LANL's Marco Cerezo said while a barren plateau eliminates any possibility of quantum speedup or advantage, "We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters." LANL's Patrick Coles said, "With this guarantee in hand, researchers will now be able to sift through quantum-computer data about quantum systems and use that information for studying material properties or discovering new materials, among other applications."

Full Article

 

 

Remote Sensing, ML Reveal Archaic Shell Rings
Penn State News
A'ndrea Elyse Messer
August 19, 2021


An international team of researchers found ancient indigenous shell rings in the American Southeast by evaluating sensing data via deep machine learning techniques. The team tapped datasets collected by satellites or aircraft using Light Detection and Ranging (LiDAR), synthetic-aperture radar (SAR), and multispectral measurement. The researchers used deep learning to train a convolutional neural network to identify shell rings, shell mounds, and other landscape structures from the LiDAR data, then combined this information with the SAR and multispectral data; the result was the identification of potentially hundreds of new shell ring sites. Said Pennsylvania State University's Dylan S. Davis, "One difficulty with deep learning is that it usually requires massive amounts of information for training, which we don't have when looking for shell rings. However, by augmenting our data and by using synthetic data, we were able to get good results."
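
The data-augmentation idea Davis mentions, stretching a small labeled set of overhead tiles with label-preserving transformations before training a convolutional network, can be sketched as follows; the tile size, class list, and augmentation settings are illustrative assumptions, not the team's pipeline.

import tensorflow as tf

# Rotations, flips, and mild zooms are label-preserving for overhead LiDAR-derived tiles.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.25),
    tf.keras.layers.RandomZoom(0.1),
])

inputs = tf.keras.Input(shape=(128, 128, 1))                 # single-band elevation tile
x = augment(inputs)
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # e.g., ring / mound / other terrain
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])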

Full Article

 

 

AI Gave Val Kilmer His Voice Back. Critics Worry the Technology Could Be Misused.
The Washington Post
Dalvin Brown
August 18, 2021


U.K. software company Sonantic helped actor Val Kilmer speak again after surgery robbed him of his natural voice, but critics worry about potential misuse of such artificial intelligence-based voice-cloning software. Sonantic reproduced Kilmer's voice using samples from old footage cleaned of background noise; engineers linked the audio in "short chunks" and processed the data through "voice engine" algorithms, which learn to speak from the recordings. The end product was a desktop-based text-to-speech program that Sonantic claims can emulate Kilmer's projection levels and emotion when reading lines of text aloud. The technology concerns people like voice actors, who imagine unauthorized use of the technology endangering their livelihood, as well as making statements they themselves would not make.

Full Article

*May Require Paid Registration

 

 

Can AI Make a Better Fusion Reactor?
IEEE Spectrum
Rebecca Sohn
August 13, 2021


Researchers at Portugal’s University of Lisbon have trained artificial intelligence, machine learning, and deep learning models for use in nuclear fusion research. The models were trained using diagnostic data from 48 sensors connected to the U.K.'s Joint European Torus reactor. In a study at the University of Washington, researchers used a single graphics processing unit (GPU) to control a fusion experiment that previously required multiple computers. At the KTH Royal Institute of Technology in Sweden, researchers have developed a deep learning model that calculates the electric fields of plasma more quickly and accurately than previous methods. Said KTH’s Stefano Markidis, “At the end of the day, it's going to be our minds that [will] solve the fusion problem. It's just a matter of what tools we use, and AI and machine learning will be a key tool."

Full Article

 

 

Researchers Teach AI to See Depth in Photos, Paintings
Simon Fraser University (Canada)
Andrew Ringer
August 10, 2021


Researchers at Simon Fraser University (SFU) in Canada and Adobe Research used a process called monocular depth estimation to teach artificial intelligence to determine depth from a single photograph. SFU's Sebastian Dille said, "We analyze the image and optimize our process by looking at the image content according to the limitations of current architectures. We give our input image to the network in many different forms, to create as many details as the model allows while preserving a realistic geometry." SFU's Yagiz Aksoy added, "With the high-resolution depth maps that we are able to get for real-world photographs, artists and content creators can now immediately transfer their photograph or artwork into a rich 3D (three-dimensional) world."
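
Monocular depth estimation is available off the shelf, which makes the core idea easy to try: one photograph in, a dense relative depth map out. The sketch below uses the publicly released MiDaS model via torch.hub; it is a generic example, not the SFU/Adobe high-resolution pipeline.

import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")     # small general-purpose depth model
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze().numpy()
# 'depth' is a relative depth map the same size as the input photograph.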

Full Article

 

dtau...@gmail.com

Nov 7, 2021, 12:18:46 PM
to ai-b...@googlegroups.com

Pamela McCorduck, Historian of AI, Dies at 80
The New York Times
Richard Sandomir
November 4, 2021


Pamela McCorduck, who authored a history of the first two decades of artificial intelligence (AI), has died at 80. She first co-edited an influential book of academic papers on AI at the University of California, Berkeley with computer scientists Edward Feigenbaum (an ACM A.M. Turing Award recipient) and Julian Feldman. As an English teacher at Carnegie Mellon University, McCorduck got to know AI pioneers like Turing Award recipients Herbert Simon and Raj Reddy. Feigenbaum said, "She was dumped into this saturated milieu of the great and greatest in AI at Carnegie Mellon—some of the same people whose papers she'd helped us assemble—and decided to write a history of the field." The book was "Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence." Said Simon, "She was interacting with all the movers and shakers of AI. She was in the middle of it, an eyewitness to history."

Full Article

*May Require Paid Registration

 

 

Facebook, Citing Societal Concerns, Plans to Shut Down Facial Recognition System
The New York Times
Kashmir Hill; Ryan Mac
November 2, 2021


Facebook intends to shutter its facial recognition system, deleting the face-scan data of over 1 billion users and removing a feature that has provoked privacy concerns, government probes, litigation, and regulatory distress. Jerome Pesenti at Facebook's recently renamed parent firm, Meta, said the closure was prompted by "many concerns about the place of facial recognition technology in society." The software feature introduced in 2010 automatically identified people appearing in users' digital photo albums and suggested users tag them with a click, linking their accounts to the images. Although Facebook limited facial recognition to its own site and kept it from third parties, privacy advocates questioned how much facial data was collected and what the company could do with it.

Full Article

*May Require Paid Registration

 

 

Understanding Finite-Temperature Quantum Effects Better with ML
RIKEN (Japan)
November 1, 2021


Theoretical physicists at Japan's RIKEN institute have developed two machine learning (ML) techniques to explore finite-temperature interaction between atoms and electrons, which will inform the future development of quantum computing devices. The techniques employ neural networks to simulate thermal effects in quantum many-body systems. The team first applied the deep Boltzmann machine process to generate a mathematical description of a Gibbs state system, then optimized the network's parameters through stochastic sampling. RIKEN's Yusuke Nomura said, "The ultimate goal of our approach is to reveal complex finite-temperature phenomena that remain unexplored in a wide range of fields, including condensed-matter physics, atomic physics, statistical mechanics, and quantum optics."

Full Article

 

 

'PodoSighter' Uses AI to Identify Kidney Disease Indicator
University at Buffalo News Center
Ellen Goldbaum
October 28, 2021


The cloud-based PodoSighter tool developed by University at Buffalo (UB) researchers is engineered to identify early indicators of kidney disease by detecting and quantifying podocytes, a specialized type of cell in the kidney that is damaged during early stage kidney disease. UB's Darshana Govind explained, "The tissue is prepared in the clinic and the AI [artificial intelligence]-based method detects it for you. You click a button and the podocytes are identified." UB's Pinaki Sarder said the PodoSighter also estimates podocyte number and density in each capillary bundle, or glomerulus, containing the cells. Sarder added that increasing glomerulus size and declining podocyte count signal the progression of kidney disease.

Full Article

 

 

DeepMind AI Can Accurately Predict if it Will Rain in Next 90 Minutes
New Scientist
Matthew Sparkes
September 29, 2021


Artificial intelligence (AI) programmed by researchers at Alphabet subsidiary DeepMind and the U.K.'s Meteorological Office (Met Office) can forecast extremely short-term rainfall more accurately than current models. The researchers trained a neural network on weather radar data from 2016 to 2018 and tested it using data from 2019. The resulting model can make forecasts over areas measuring up to 1,536 kilometers (954 miles) by 1,280 kilometers (795 miles), and predict the chance of rain in a given 1-kilometer (0.6-mile) by 1-kilometer (0.6-mile) area from five to 90 minutes ahead. DeepMind said the AI model was ranked first for accuracy in 89% of experiments in a blind study of 50 Met Office meteorologists.

Full Article

 

 

Smartphone Motion Sensors Could Be Used to Listen to Phone Conversations
University of Illinois at Urbana-Champaign
Kim Guderman
September 27, 2021


Smartphone accelerometers could be used to eavesdrop on phone conversations, according to University of Illinois Urbana-Champaign (UIUC) researchers. The motion sensor can capture sound vibrations during conversations, and the researchers developed a neural network to convert that very-low-sampling-rate data into high-bandwidth signals. UIUC's Tarek Abdelzaher said, "Human speech has a special pattern. By constraining your interpretation to that special pattern, you can guess higher frequency from low frequency. This won't work with any random sound, but for some keywords or numbers, it works fairly well."

Full Article

 

 

How AI Completed Beethoven's Unfinished Tenth Symphony
Smithsonian
Ahmed Elgammal
September 24, 2021


Computer scientists, music historians, musicologists, and composers collaborated with the startup Playform AI to apply artificial intelligence (AI) to the task of completing Beethoven's unfinished Tenth Symphony. The researchers taught an AI system Beethoven's entire body of work and his creative process, including the methods he used to develop certain musical forms, how to harmonize a melodic line, how to bridge two sections of music together, how to compose a coda, and how to orchestrate the full composition. Over a period of more than two years, the research team’s efforts yielded two movements (symphonies typically include four), each more than 20 minutes long. The full recording of the Tenth Symphony and its world premiere performance in Bonn, Germany, are both scheduled for Oct. 9.

Full Article

 

 

Peering into the Moon's Shadows with AI
Max Planck Institute for Solar System Research (Germany)
September 23, 2021


A research team led by Germany's Max Planck Institute for Solar System Research has generated the highest-resolution images so far of 17 craters in the Moon's polar regions that never receive sunlight. These craters could be targeted by future lunar missions because they may contain frozen water. The researchers generated images at 1-2 meters per pixel using a novel image processing method and taking advantage of sunlight reflected from nearby hills. The HORUS (Hyper-effective nOise Removal U-net Software) machine learning algorithm can clean up noisy images using over 70,000 calibration images from NASA's Lunar Reconnaissance Orbiter taken on the dark side of the Moon, along with information about camera temperature and the spacecraft's trajectory. Said Ben Moseley at the U.K.’s University of Oxford, "With the help of the new HORUS images, it is now possible to understand the geology of lunar shadowed regions much better than before."

Full Article

 

 

Simple Make-Up Tips Can Help You Avoid Facial Recognition Software
New Scientist
Chris Stokel-Walker
September 24, 2021


Researchers at Israel's Ben-Gurion University of the Negev developed artificial intelligence software that can provide makeup advice to foil facial recognition systems. Tests showed the software's recommendations tricked real-world facial recognition systems 98.8% of the time, with the success rate in identifying women wearing its recommended makeup declining from 42.6% to 0.9%, and from 52.5% to 1.5% for men. The adversarial machine learning system identifies which elements of a person's face are considered unique by facial recognition systems and highlights them on a digital heat map, which is used to determine where makeup can be applied to change the person's perceived face shape. The system recommends only natural makeup hues, so people potentially could protect their privacy without drawing attention to themselves.

Full Article

*May Require Paid Registration

 

 

Using AI, Old Reports to Understand Medical Images
MIT News
Steve Nadis
September 27, 2021


Scientists at the Massachusetts Institute of Technology (MIT), Harvard Medical School, and Boston’s Beth Israel Deaconess Medical Center combined artificial intelligence and written radiology reports accompanying medical images to enhance the interpretive abilities of machine learning algorithms. The process involves training a neural network to gauge disease extent using medical images and a physician's severity rating, while a second network performs a similar task for text, and a third integrates text and image data. MIT's Polina Golland said, "When the mutual information between images and text is high, that means that images are highly predictive of the text and the text is highly predictive of the images."

Full Article

 

Senators Unveil Bipartisan Bill To Protect AI-Collected Data

The Hill (10/21, Miller) reports the GOOD AI Act – a new “bipartisan” Senate bill introduced by Sens. Gary Peters (D-MI) and Rob Portman (R-OH) on Thursday – “is aiming to secure data collected by artificial intelligence technologies, such as facial recognition technologies, as these types of technologies continue to grow in use.” The measure “would require the Office of Management and Budget to establish and consult with an AI working group in ensuring that all federal contractors are taking adequate steps to secure data obtained through AI, and that the data is being used to protect national security while not compromising privacy.” The Hill adds that the legislation “has strong bipartisan backing.”

 

Artificial Intelligence Is Being Used To More Efficiently Develop New Drugs

Axios (10/20, Walsh) reports, “Artificial intelligence – the ultimate optimization engine – is meeting one of its biggest challenges: untangling the messy, slow and expensive work of drug development.” Right now, “drug development is a great business – if you don’t mind repeated, expensive failure.” The slow “process of discovering and developing a new drug can take over a decade and costs $2.8 billion on average – and even then, 9 out of 10 therapeutic molecules fail Phase II clinical trials and regulatory approval.” However, “AI can offer a boost at nearly every point of the drug development cycle, evangelists argue.”

 

Space Security A Growing Concern

Space News (8/17, Erwin, Subscription Publication) reports that the Space Foundation and KPMG released a report called “Navigating Space: A Vision for the Space Domain” Tuesday. The report concludes that military organizations will cease to be the dominant players in space and will have to partner with civilian concerns in order to ensure openness in space. The Space Foundation and KPMG said, “A growing number of countries are realigning their defense organizations to recognize the importance of space. Non-governmental organizations are also crowding into the domain. And that is changing the focus for many military players.” US Space Force Chief of Space Operations General John “Jay” Raymond is quoted as saying, “In the long term, space is going to become the most vital domain for national security, surpassing air, land and sea.”

 

Schools Look To AI Teacher Assistants To Help Students Learn

The Wall Street Journal (8/6, Trent, Campa, Subscription Publication) highlighted how schools are looking for help from artificial intelligence education products, which eventually may be able to serve as a peer learning partner or as a type of teacher’s assistant personalized for each student. The Journal discusses three teaching aids, now in development, that put AI to work, such as keeping students engaged, freeing up time for teachers, and serving as a companion and assistant to students.

dtau...@gmail.com

Nov 13, 2021, 7:41:52 PM
to ai-b...@googlegroups.com

Boxing Body Uses AI Vetting of Judges to Restore Trust
Associated Press
November 5, 2021


The International Boxing Association (AIBA) used an artificial intelligence (AI) system to evaluate judges and referees before clearing them for the current men's world championships, and to restore its credibility following claims of corruption. AIBA's Richard McLaren said the automated voice analysis system "measures the cognitive functions of the brain in the verbal responses," and assigns judges low-, medium-, or high-risk grades. McLaren said the AI project "clearly identifies problems" if employed in a broader series of pre-competition vetting, follow-up interviews, and "a human assessment" of officials' suitability. He suggested the AI system could help AIBA "to put their house in order," and could be used by other Olympic sports that rely on judges to determine event outcomes.

Full Article

 

 

 

Flying High-Speed Drones into the Unknown with AI
University of Zurich (Switzerland)
October 7, 2021

Researchers at Switzerland's University of Zurich used artificial intelligence (AI) to train an autonomous quadrotor to fly through unknown and complex environments at speeds as fast as 40 kph (25 mph). The researchers used an algorithm that flew a computer-generated drone through a simulated environment of complex obstacles to teach the drone's neural network to fly. The data from this "simulated expert" helped the neural network learn to predict the best trajectory using only data from on-board sensors. Said Zurich's Antonio Loquercio, "While humans require years to train, the AI, leveraging high-performance simulators, can reach comparable navigation abilities much faster, basically overnight."
 

Full Article

 

 

 

China Leaps Ahead in Effort to Rein In Algorithms
The Wall Street Journal
Stephanie Yang
October 5, 2021


A campaign launched by the Cyberspace Administration of China calls for the establishment of a comprehensive system to regulate the use of algorithms within three years, and for algorithms to be fair and transparent while complying with the ruling Communist Party's ideology. The move comes as an EU proposal aims to restrict certain uses of artificial intelligence to prevent possible harm, and as U.S. lawmakers consider the impact of algorithm-driven content on Facebook users. James Gong of the law firm Bird & Bird in Beijing said increased regulatory oversight of algorithms is likely to impact China's Internet industry, as nearly all companies "use algorithms and automatic decision-making and profiling to make sure their marketing is more precise and to improve their efficiency of the business and increase profit."

Full Article

*May Require Paid Registration

 

 

Microsoft Predicts Weather for Individual Farms
IEEE Spectrum
Rebecca Sohn
October 4, 2021


Microsoft’s DeepMC framework uses machine learning and artificial intelligence (AI) to predict local weather accurately. DeepMC combines data from on-site sensors with standard local weather forecast data accessed through application programming interfaces from sources including the U.S. National Oceanic and Atmospheric Administration and the National Weather Service. Microsoft Research's Peeyush Kumar said a "fusion mechanism" integrates data from on-site sensors with standard forecast data, which the framework then provides to an AI trained to find errors between local forecasts and micro-climate weather conditions. The system predicts each weather parameter based on historical forecast and local sensor data, and applies decomposition to determine short- and long-term trends and patterns in weather data.

Full Article

 

 

AI Makes It Faster, Easier to Analyze Hockey Video
University of Waterloo News (Canada)
October 4, 2021


Engineers at Canada's University of Waterloo have advanced the use of artificial intelligence (AI) to automatically analyze video of hockey games by combining two deep learning methods. Waterloo's Kanav Vats explained, "Using different representations to teach the same thing can improve performance. We combined a holistic representation and a digit-wise representation with great results." The researchers compiled a dataset of over 54,000 images from National Hockey League games to train algorithms to recognize sweater numbers in images. The multi-task learning technique can identify players by sweater numbers with 90% accuracy.
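
The holistic-plus-digit-wise combination can be pictured as a multi-task network: one head treats the full sweater number as a single class, and two more predict its tens and ones digits. A schematic Keras sketch, with layer sizes and loss weights as illustrative assumptions:

import tensorflow as tf

inputs = tf.keras.Input(shape=(96, 96, 3))                   # cropped player image
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

number = tf.keras.layers.Dense(100, activation="softmax", name="number")(x)  # 0-99, holistic
tens = tf.keras.layers.Dense(10, activation="softmax", name="tens")(x)       # digit-wise heads
ones = tf.keras.layers.Dense(10, activation="softmax", name="ones")(x)

model = tf.keras.Model(inputs, [number, tens, ones])
model.compile(
    optimizer="adam",
    loss={"number": "sparse_categorical_crossentropy",
          "tens": "sparse_categorical_crossentropy",
          "ones": "sparse_categorical_crossentropy"},
    loss_weights={"number": 1.0, "tens": 0.5, "ones": 0.5},
)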

Full Article

 

 

Deep Learning-Based Image Analysis Now a Click Away
EPFL (Switzerland)
October 1, 2021


Engineers at Switzerland's École polytechnique fédérale de Lausanne (EPFL) and Spain's Universidad Carlos III de Madrid have developed deepImageJ, an open source plugin that facilitates deep learning-based image analysis for life science research. With deepImageJ, life scientists can ask a computer engineer to design and train a machine learning algorithm to execute a specific task; the scientists then can run the resulting algorithm through a user interface without the need to review any code. The plugin was developed as a resource to enable engineers, computer scientists, mathematicians, and biologists to collaborate more efficiently.

Full Article

 

 

Tying Quantum Computing to AI Prompts Smarter Power Grid
Cornell Chronicle
Blaine Friedlander
September 29, 2021


Combining quantum computing with artificial intelligence could lead to the rapid diagnosis and identification of remedies for electrical grid problems, according to Cornell University's Fengqi You and Akshay Ajagekar. The researchers proposed a hybrid quantum-computing-based deep learning "intelligent system" design for a fault-diagnosis framework that can accurately pinpoint problems in electrical power systems. They demonstrated the solution in a large-scale Institute of Electrical and Electronics Engineers test electric grid; the hybrid approach could be scaled efficiently for quick diagnosis in larger power systems without sacrificing performance. Ajagekar said, "Integrating quantum computing with intelligence—even though it is not yet a mature technology—will solve real problems now."

Full Article

 

 

Deep Learning Framework to Enable Material Design in Unseen Domain
KAIST (South Korea)
September 29, 2021


Researchers at South Korea's KAIST and the University of California, Berkeley have developed a framework that uses a deep neural network to facilitate more efficient material or structure design beyond the domain of the initial training set. The method compensates for neural networks' typically weak predictive power when extrapolating outside their training data.

This involves three steps:

• using genetic algorithms to search for candidates with improved properties close to the training set, and mixing superior designs in the training set;
• determining whether the candidates actually have improved properties, and using data augmentation to duplicate validated designs and expand the training set; and
• using transfer learning to update the neural network with newly generated superior designs to broaden the reliable prediction domain.

Researchers are using the optimization framework to design metamaterial structures, segmented thermoelectric generators, and optimal sensor distributions.
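
A schematic of that loop under stated assumptions: a toy objective stands in for the real simulation, random perturbation of the best designs stands in for the genetic algorithm, and full retraining stands in for the transfer-learning update of the network.

import numpy as np
from sklearn.neural_network import MLPRegressor

def evaluate(designs):                 # stand-in for the expensive ground-truth evaluation
    return -np.sum((designs - 0.7) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.random((200, 8))               # initial training designs (8 design variables)
y = evaluate(X)

for generation in range(5):
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

    # Step 1: search near the current training set for promising candidates.
    seeds = X[np.argsort(y)[-20:]]
    candidates = np.clip(seeds + 0.05 * rng.normal(size=seeds.shape), 0, 1)
    best = candidates[np.argsort(surrogate.predict(candidates))[-5:]]

    # Step 2: validate the candidates with the real evaluation and augment the data.
    X = np.vstack([X, best])
    y = np.concatenate([y, evaluate(best)])

    # Step 3: the next iteration refits the network on the expanded training set,
    # gradually widening the region where its predictions can be trusted.

print("best validated design value:", y.max())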

Full Article

 

German Startup Creates AI Language Model That Learns Other Languages

Wired (8/23, Knight) reports that “in recent years, machines have learned to generate passable snippets of English, thanks to advances in artificial intelligence.” Now they are “moving on to other languages.” Aleph Alpha, “a startup in Heidelberg, Germany, has built one of the world’s most powerful AI language models.” It is “fluent not just in English but also in German, French, Spanish, and Italian.” The algorithm “builds on recent advances in machine learning that have helped computers handle language with what sometimes seems like real understanding.” By drawing on “what it has learned from reading the web, the algorithm can dream up coherent articles on a given subject and can answer some general knowledge questions cogently.”

 

Tesla Unveils AI Model Training Chip Inside Its Data Centers

CNBC (8/19, Novet) reports that on Thursday, Tesla “unveiled a custom chip for training artificial-intelligence networks in data centers,” demonstrating “the company’s continuing pursuit of vertical integration.” Autopilot hardware Senior Director Ganesh Venkataramanan “said the Tesla technology will be the fastest AI-training computer. Chipmaker Intel, graphics card maker Nvidia and start-up Graphcore are among the companies that make chips that companies can use to train AI models.” The chips could help train models for “recognizing a variety of items from video feeds collected by cameras inside Tesla vehicles.”

dtau...@gmail.com

Nov 14, 2021, 1:37:25 PM
to ai-b...@googlegroups.com

Biodiversity 'Time Machine' Uses AI to Learn from the Past
University of Birmingham (U.K.)
November 9, 2021


An international, cross-disciplinary research team led by the U.K.'s University of Birmingham developed a "time machine framework" that uses artificial intelligence to inform future decisions about biodiversity management. Decision makers could use the framework to examine links between biodiversity, pollution events, and environmental change in the past, to assess their impact on ecosystems. The researchers looked at how the framework could be used to make decisions on climate change mitigation, food provisioning, and clean water. University of Birmingham's Niamh Eastwood said, "We are working with stakeholders to make this framework accessible to regulators and policymakers. This will support decision making in regulation and conservation practices."

Full Article

 

 

These Neural Networks Know What They're Doing
MIT News
Adam Zewe
October 14, 2021


Massachusetts Institute of Technology (MIT) researchers have demonstrated that a specific neural network can learn the cause-and-effect structure of a navigation task it is taught. The researchers observed that a Neural Circuit Policy (NCP) system assembled by liquid neural network cells can autonomously control a self-driving vehicle using just 19 control neurons. They determined that when an NCP is being trained to complete a task, the network learns to interact with the environment and factor in interventions, or to recognize if an intervention is altering its output, and then it can relate cause and effect together. Tests put NCPs through various simulations in which autonomous drones performed navigation tasks. MIT's Ramin Hasani said, "Once the system learns what it is actually supposed to do, it can perform well in novel scenarios and environmental conditions it has never experienced."

Full Article

 

 

Duke Computer Scientist Receives $1-Million AI Prize, a 'New Nobel'
Duke University Pratt School of Engineering
Ken Kingery
October 12, 2021


Duke University computer scientist Cynthia Rudin was awarded the $1-million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity by the Association for the Advancement of Artificial Intelligence (AAAI). Rudin was honored for "pioneering scientific work in the area of interpretable and transparent AI systems in real-world deployments, the advocacy for these features in highly sensitive areas such as social justice and medical diagnosis, and serving as a role model for researchers and practitioners." Said Duke’s Jun Yang, Rudin “is changing the landscape of how AI is used in societal applications by redirecting efforts away from black box models and toward interpretable models by showing that the conventional wisdom—that black boxes are typically more accurate—is very often false.”

Full Article

 

 

Deep Learning Helps Predict Traffic Crashes Before They Happen
MIT News
Rachel Gordon
October 12, 2021


A deep learning model trained on historical traffic crash data, road maps, satellite imagery, and global positioning system trajectory patterns can generate high-resolution crash risk maps. Scientists at the Massachusetts Institute of Technology and the Qatar Computing Research Institute (QCRI) developed the model, which yields risk maps that can define the expected number of crashes over a future period, identifying high-risk areas and forecasting future collisions. The maps are composed of 5x5-meter grid cells, a resolution that shows highway roads, for example, have a greater risk for traffic accidents than nearby residential roads, while highway ramps have higher risk than other roads. QCRI's Amin Sadeghi said, "Our model can generalize from one city to another by combining multiple clues from seemingly unrelated data sources."
 

Full Article

 

 

Researchers Create Earthquake System Model with Better Detection Capabilities
University of Wyoming
October 12, 2021

The University of Wyoming's Pejman Tahmasebi and Tao Bai have invented a machine learning (ML) model that boosts the accuracy of earthquake detection significantly over current models. Tahmasebi said the model processes signal data recorded by seismometers, and can automatically distinguish seismic events from seismic noise. The model combines existing long short-term memory and fully convolutional network ML models; the former captures data signal changes over time, and the latter filters out hidden features of seismic events. Tahmasebi said the model boasts 89.1% classification accuracy, a 14.5% improvement over the state-of-the-art ConvNetQuake model.
 

Full Article

 

 

ML Offers High-Definition Glimpse of How Genomes Organize in Single Cells
Carnegie Mellon University
Niki Kapsambelis
October 11, 2021


Carnegie Mellon University (CMU) researchers have developed an algorithm to visualize genome organization in single human cells in high definition. The Higashi algorithm is founded on the machine learning (ML) process of hypergraph representation learning, using neural networks on hypergraphs to produce a high-definition analysis of genome organization. Higashi works with single-cell Hi-C technology, which captures snapshots of chromatin interactions occurring concurrently within one cell, and provides more detailed analysis of chromatin's organization in the cells of complex tissues and biological processes, as well as varying cell-to-cell interactions. The algorithm also enables researchers to analyze other genomic signals jointly defined with single-cell Hi-C simultaneously.
 

Full Article

 

 

AI Brings Hidden Picasso Nude to Life
CNN
Jack Guy
October 11, 2021


Artificial intelligence (AI), advanced imaging, and three-dimensional (3D) printing have unveiled the portrait of a nude woman hidden beneath the surface of a Picasso painting. The figure, which Picasso painted over while creating "The Blind Man's Meal" in 1903, was reconstructed by U.K.-based Oxia Palus, a company that resurrects lost art with technology. Oxia Palus used X-ray fluorescence imaging and image processing to outline the concealed nude, then trained AI to add brushstrokes mimicking Picasso's style. A heightmap gave the portrait texture, and 3D printing rendered it onto canvas.

Full Article

 

 

White House Proposes Tech 'Bill of Rights' to Limit AI Harms
Associated Press
Matt O'Brien
October 8, 2021


Scientists in the Biden administration have proposed a technology "bill of rights" to protect against the potential harms of artificial intelligence (AI). The White House Office of Science and Technology Policy (OSTP) initiated a fact-finding mission to examine facial recognition and other biometric technologies for identifying people or evaluating their emotional or mental states and character. President Biden's chief science adviser, Eric Lander, and OSTP deputy director for science and society Alondra Nelson wrote in Wired about the need to shield against AI that can discriminate against people or breach privacy. They suggested the federal government should not purchase technology products that do not respect these rights, mandate federal contractors to use technologies that comply with the new “bill of rights,” and embrace new laws and rules to fill gaps.
 

Full Article

 

 

The Rise of the Robo-Voices
The Wall Street Journal
Ellen Gamerman
October 7, 2021


Artificial intelligence (AI) is being used to create human-like voices, which could be used to dub foreign movies and TV shows, fix dialogue in post-production without the need for actors, and even resurrect audio from celebrities who have died or are no longer able to speak. In the coming months, dubbed versions of 2019 indie horror movie "Every Time I Die" are slated to be released in South America, using synthetic voices created by AI voice company Deepdub based on five-minute recordings of each actor speaking English. Said Zeena Qureshi of Sonantic, which recreated about two minutes of actor Val Kilmer's voice as a demonstration, "That idea of being able to customize voice content, to change emotions, pitch, direction, delivery, style, accents—that's now possible where it wasn't before."

Full Article

*May Require Paid Registration

 

 

Simulated AI Creatures Demonstrate How Mind, Body Evolve, Succeed Together
TechCrunch
Devin Coldewey
October 6, 2021


Stanford University scientists engineered virtual artificial intelligences performing tasks in simulated environments to mimic the evolution of mind and body. The team dropped simulated animals they called unimals (for universal animals) into a simulation, initially so they could learn to walk. The virtual creatures developed various walks based on their environment's terrain; in further experiments, the unimals competed on more complex tasks. Those that had learned to walk on variable terrain learned the latter tasks faster and performed them better than those adapted to flat terrain. The researchers said this work "opens the door to performing large-scale in silico experiments to yield scientific insights into how learning and evolution cooperatively create sophisticated relationships between environmental complexity, morphological intelligence, and the learnability of control tasks."

Full Article

 

 

Google Wants to Use AI to Time Traffic Lights More Efficiently
Reuters
Paresh Dave
October 6, 2021


Google said it intends to test the use of artificial intelligence (AI) to optimize traffic-signal timing in Rio de Janeiro, following a successful implementation in Israel. The Israeli project slashed fuel use and traffic delays at four locations in Haifa and Beer-Sheva by 10% to 20%. Rio's municipal traffic authority said Google's system should be launched within months, and had high hopes it could better control traffic signals. Although simulations showed AI control of traffic signals could smooth traffic flow, the University of Pittsburgh's Aleksandar Stevanovic questioned whether a technology firm without traffic engineering expertise could bring such software to reality.

Full Article

 

 

Novel ML Technique Identifies Structural Similarities, Trends in Materials
SciTechDaily
October 8, 2021


Scientists at Lehigh and Stanford universities demonstrated a novel machine learning (ML) method to recognize structural similarities and trends in materials from an unstructured image dataset. The team developed and taught a neural network to factor in symmetry-aware features, then applied the technique to 25,133 piezoresponse force microscopy images of diverse materials systems. The researchers employed Uniform Manifold Approximation and Projection (UMAP), a non-linear dimensionality reduction technique, to generate projections and cluster similar classes of material together from the image set.
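
For readers who want to see the general workflow, the following minimal sketch (not the authors' code) reduces pre-computed image features with the umap-learn library and clusters the result; the feature-extraction step, array sizes, and cluster count are assumptions.

    import numpy as np
    import umap                      # pip install umap-learn
    from sklearn.cluster import KMeans

    # Assume each microscopy image has already been reduced to a fixed-length
    # feature vector (e.g., symmetry-aware descriptors); here we use random
    # stand-in data with 25,133 images and 128 features per image.
    features = np.random.rand(25133, 128)

    # Non-linear dimensionality reduction with UMAP, as named in the article.
    reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1, random_state=0)
    embedding = reducer.fit_transform(features)   # shape: (25133, 2)

    # Group nearby points so similar classes of material cluster together;
    # the choice of 10 clusters is arbitrary for this sketch.
    labels = KMeans(n_clusters=10, random_state=0).fit_predict(embedding)
    print(embedding.shape, np.bincount(labels))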

Full Article

 

Education Programs Use AI Tutors To Teach Children How To Read

The Hechinger Report (9/9) reports “artificial intelligence has been used for years in education to monitor teaching quality, teach classes, grade assignments and tailor instruction to student ability levels.” Now, a small “but growing number of programs are attempting to use AI to target reading achievement in the early years.” More than 800 schools and 355 districts “across the country have already adopted” an artificial intelligence avatar named Amira “as part of their literacy instruction.” The program “was designed in part by Houghton Mifflin Harcourt (HMH) and based on reading research and reading science from researchers at several universities including Carnegie Mellon University, Johns Hopkins University and the University of Texas Health Sciences.” The program’s designers “recommend Amira is used by students for a total of 40 minutes each week over the course of two or three days.” This summer, Washington, DC, “offered the program for free to students in grades 2-5 who participated in the district’s reading program.”

 

Toronto-Based AI Startup Raises $40M

The Globe and Mail (CAN) (9/7, Silcoff) reports Toronto AI startup Cohere Inc. “has raised US$40-million as it prepares to formally launch its technology to help make it easier for humans to talk to machines.” The venture is “backed by some of the leading experts in the field globally,” and was “co-founded by protégés of AI luminaries Geoffrey Hinton and Jeff Dean.” Cohere “says its natural language processing (NLP) software provides a richer understanding of human language, including semantics, sentiments and tone.” Additionally, “it improves on existing software that powers machine-human interactions such as online conversations between consumers and customer service chatbots.”

 

Artificial Intelligence In College Admissions Can Both Reduce And Reinforce Biases

Diverse Issues in Higher Education (9/2, Kelliher) reports “admissions offices have been rolling out new technologies that use artificial intelligence (AI) to engage with or evaluate prospective students.” Experts and enrollment professionals “point out that AI holds the power to close equity gaps as much as augment them, depending on how these emerging tools are used.” Dr. Kirsten Martin, a professor of technology and ethics at the University of Notre Dame’s Mendoza College of Business, discussed how an AI program “could reduce or reinforce biases to recruit students in admissions.” A program, for example, “could find high schools in marginalized communities that a university has not reached out to historically.” She adds, “The exact same program could be used to find students who can save the college money, not increase diversity, if you ask who is expensive to admit and who is not.”

 

University Of New Orleans Science Professor Receives Grant to Fight Hacking

The New Orleans Times-Picayune (9/1) reports the National Science Foundation “has awarded a $1.2 million grant to Phani Vadrevu, a computer science professor at the University of New Orleans, to develop methods to protect users from web-based social engineering attacks such as survey scams, scareware and phishing expeditions.” The project “will use artificial intelligence to track and model online attacks.”

dtau...@gmail.com

unread,
Nov 20, 2021, 9:58:19 PM11/20/21
to ai-b...@googlegroups.com

Big Data Privacy for ML Just Got 100 Times Cheaper
Rice University News
Jade Boyd
November 16, 2021


Rice University's Anshumali Shrivastava and Ben Coleman have developed RACE (repeated array of count estimators), an inexpensive technique to ensure personal data privacy when using or sharing large databases for machine learning (ML). The researchers applied locality sensitive hashing to generate abstracts or "sketches" of a huge database of sensitive records. Coleman said RACE sketches are safe for public release and useful for algorithms that employ kernel sums, and for ML programs that execute common tasks like classification, ranking, and regression analysis. Said Shrivastava, "RACE changes the economics of releasing high-dimensional information with differential privacy. It's simple, fast, and 100 times less expensive to run than existing methods."
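
A toy numpy sketch of the idea behind RACE follows: hash each record with several signed-random-projection hash functions, increment the matching counters, and estimate a kernel sum for a query by averaging the counters it lands in. The dimensions, hash settings, and the omission of the calibrated noise that provides differential privacy are simplifications, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    d, reps, bits = 32, 50, 4          # data dimension, sketch rows, hash bits per row
    width = 2 ** bits                  # counters per row

    # Signed random projections: each sketch row gets `bits` random hyperplanes.
    planes = rng.normal(size=(reps, bits, d))

    def srp_hash(x):
        """Map a vector to one bucket index per sketch row."""
        signs = (np.einsum('rbd,d->rb', planes, x) > 0).astype(int)
        return (signs * (2 ** np.arange(bits))).sum(axis=1)   # shape: (reps,)

    # Build the RACE-style sketch from a private dataset (random stand-in data).
    data = rng.normal(size=(10000, d))
    sketch = np.zeros((reps, width))
    for x in data:
        sketch[np.arange(reps), srp_hash(x)] += 1

    # Estimate a kernel sum for a query by averaging the counters it hashes to.
    # (The real method adds calibrated noise to the sketch for differential privacy.)
    query = rng.normal(size=d)
    estimate = sketch[np.arange(reps), srp_hash(query)].mean()
    print("estimated kernel sum:", estimate)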

Full Article

 

 

Randomly Connected Neural Network for Self-Supervised Monocular Depth Estimation
Imperial College London (U.K.)
Erh-Ya Tsui
November 15, 2021


Researchers at the U.K.'s Imperial College London (ICL) designed a self-supervised monocular depth estimation technique using a randomly connected neural network. ICL's Sam Tukra said the random network connections "can attain a unique powerful architecture for the specific task,” such as depth estimation in surgery. The researchers modeled the networks as graphs and linked nodes using a random graph generator algorithm. They rendered the graphs into a neural network within a deep learning library, and the resulting framework can augment the use of deep semantic features in encoder feature maps. The researchers also formulated a new loss function to enhance image reconstruction quality; performance evaluation on surgical datasets confirmed the technique's efficacy.
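
The sketch below illustrates the general idea of randomly wired networks rather than the ICL architecture: a random graph from networkx is oriented into a DAG, each node becomes a small PyTorch layer, and data flows along the sampled edges. The layer sizes, graph generator, and aggregation rule are assumptions.

    import networkx as nx
    import torch
    import torch.nn as nn

    def random_dag(n_nodes=8, p=0.4, seed=0):
        """Sample an undirected random graph and orient edges low->high to get a DAG."""
        g = nx.gnp_random_graph(n_nodes, p, seed=seed)
        return [(min(u, v), max(u, v)) for u, v in g.edges()]

    class RandomlyWiredNet(nn.Module):
        def __init__(self, dim=16, n_nodes=8, p=0.4, seed=0):
            super().__init__()
            self.n_nodes = n_nodes
            self.edges = random_dag(n_nodes, p, seed)
            self.preds = {i: [u for u, v in self.edges if v == i] for i in range(n_nodes)}
            self.ops = nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                                      for _ in range(n_nodes)])

        def forward(self, x):
            outputs = {}
            for i in range(self.n_nodes):
                # Nodes with no predecessors read the input; others sum their parents.
                inp = x if not self.preds[i] else torch.stack(
                    [outputs[j] for j in self.preds[i]]).sum(dim=0)
                outputs[i] = self.ops[i](inp)
            # Average the outputs of sink nodes (nodes that feed nothing downstream).
            sinks = [i for i in range(self.n_nodes)
                     if all(u != i for u, _ in self.edges)]
            return torch.stack([outputs[i] for i in sinks]).mean(dim=0)

    net = RandomlyWiredNet()
    print(net(torch.randn(4, 16)).shape)   # torch.Size([4, 16])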

Full Article

 

 

When Algorithms Get Creative
University of Bern (Switzerland)
November 10, 2021

An international team of scientists led by Switzerland's University of Bern has developed evolutionary algorithms that can learn creatively. Such algorithms determine the "fitness" of a candidate solution based on how well it solves the underlying problem. The researchers' evolving-to-learn (E2L) or "becoming adaptive" approach was applied to three typical learning scenarios. The first was to detect a repeating pattern in a continuous input stream without performance feedback; the second virtually rewarded the computer for behaving in a desired manner; and the third guided the computer on how much its behavior diverged from a desired pattern. "In all these scenarios," said Bern's Jakob Jordan, "the evolutionary algorithms were able to discover mechanisms of synaptic plasticity, and thereby successfully solved a new task."
 

Full Article

 

 

AI Brings Power of NLP to African Languages
Waterloo News (Canada)
November 9, 2021

A neural network model developed by researchers at Canada's University of Waterloo enables computer-based analysis of text in 11 African languages. The AfriBERTa model achieved output quality comparable to that of existing models while requiring far less training data, just one gigabyte of text. The African languages covered by the model are considered low-resource, meaning there is a lack of data to feed to neural networks. University of Waterloo's Jimmy Lin explained that requiring less training data results in “lower carbon emissions associated with operating massive data centers.” Lin added that using smaller datasets also makes data curation more practical, “which is one approach to reduce the biases present in the models.”
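
As a hedged illustration of how such a model might be used, the sketch below loads an AfriBERTa checkpoint with the Hugging Face transformers library and computes a simple sentence embedding for a Swahili sentence; the exact checkpoint name ("castorini/afriberta_small") is an assumption and should be checked against the released models.

    from transformers import AutoTokenizer, AutoModel
    import torch

    # Checkpoint name is an assumption; the AfriBERTa checkpoints are published
    # on the Hugging Face hub by the authors' group.
    name = "castorini/afriberta_small"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    # Encode a Swahili sentence (Swahili is one of the 11 covered languages) and
    # take the mean of the final hidden states as a simple sentence embedding.
    inputs = tokenizer("Nairobi ni mji mkuu wa Kenya.", return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, hidden_dim)
    embedding = hidden.mean(dim=1)
    print(embedding.shape)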
 

Full Article

 

 

AI Predicts Eye Movements
Max Planck Institute for Human Cognitive and Brain Sciences (Germany)
November 8, 2021


Scientists at Germany's Max Planck Institute for Human Cognitive and Brain Sciences and Norway's Kavli Institute for Systems Neuroscience have developed software to predict eye movements from magnetic resonance imaging (MRI) scans, via artificial intelligence. The open source DeepMReye software can track individual viewing behavior without cameras by detecting patterns of eye movement during an MRI scan. The researchers have trained a neural network with their own and publicly available data from participants so DeepMReye can track eye movements even in data on which it has not been trained. The software can forecast when eyes are open or closed, and tracks eye movements even when the eyes stay closed.

Full Article

 

 

'Thinking' Robot with Neurons Grown from Living Brain Cells is Taught to Steer Around Obstacles
Daily Mail (U.K.)
Jonathan Chadwick
October 26, 2021


A robot developed by researchers at Japan's University of Tokyo was equipped with brain-like neurons to teach it to think. The compact robotic vehicle was connected to a culture of brain neurons grown from living cells and placed in a maze. Said the researchers, "When the robot collided with obstacles or when its goal was not within 90 degrees in front of it, an electrical stimulation from an electrode was applied to the culture." Homeostatic signals fed to the robot let it know it was moving toward its goal, but this homeostasis was interrupted with disturbance signals in the presence of an obstacle, prompting the robot to recalibrate.

Full Article

 

 

AI Sheds Light on How the Brain Processes Language
MIT News
Anne Trafton
October 25, 2021


Research by Massachusetts Institute of Technology (MIT) neuroscientists suggests the latest predictive language models' underlying mechanism functions similarly to the human brain's language-processing centers. MIT's Nancy Kanwisher said, "The better the model is at predicting the next word, the more closely it fits the human brain." Computer models that perform well on other language tasks do not exhibit this resemblance, implying the brain may drive language processing using next-word prediction. Stanford University's Daniel Yamins said, "Since the AI [artificial intelligence] network didn't seek to mimic the brain directly—but does end up looking brain-like—this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature."

Full Article

 

 

Artificial Networks Learn to Smell Like the Brain
MIT News
Jennifer Michalowski
October 18, 2021


A team of researchers from the Massachusetts Institute of Technology (MIT) and Columbia University found a machine learning model can train itself to smell by building an artificial neural network that mimics the brain's odor-processing olfactory circuits. The researchers used the fruit fly's olfactory system as a template, building an artificial network composed of an input layer, a compression layer, and an expansion layer; links between neurons would be rewired as the model learned to classify smells. The network self-organized in minutes into a structure closely resembling the fly brain's olfactory network. MIT's Guangyu Robert Yang said, "By showing that we can match the architecture [of the biological system] very precisely, I think that gives more confidence that these neural networks can continue to be useful tools for modeling the brain."
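
A minimal PyTorch sketch of the input-compression-expansion architecture described above follows; the layer sizes, the readout head, and the random training data are assumptions, and the sketch omits the rewiring rule the study used.

    import torch
    import torch.nn as nn

    class FlyLikeOlfactoryNet(nn.Module):
        """Input -> compression -> expansion -> readout, loosely mirroring the
        fly olfactory circuit described in the article."""
        def __init__(self, n_receptors=50, n_compress=50, n_expand=2000, n_odors=100):
            super().__init__()
            self.compress = nn.Linear(n_receptors, n_compress)   # compression layer
            self.expand = nn.Linear(n_compress, n_expand)        # expansion layer
            self.readout = nn.Linear(n_expand, n_odors)          # odor classes

        def forward(self, x):
            h = torch.relu(self.compress(x))
            h = torch.relu(self.expand(h))
            return self.readout(h)

    # One training step on random stand-in "odor" data; in the study the network
    # self-organizes while learning to classify smells.
    net = FlyLikeOlfactoryNet()
    x = torch.rand(32, 50)                      # batch of receptor activations
    y = torch.randint(0, 100, (32,))            # odor labels
    loss = nn.CrossEntropyLoss()(net(x), y)
    loss.backward()
    print(float(loss))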

Full Article

 

 

Robot Artist Ai-Da Released by Egyptian Border Guards
BBC News
October 21, 2021


Egyptian border agents have released a British-built robot after detaining it at customs for 10 days due to concerns that it might have been concealing spy tools. The robot, named Ai-Da, creates abstract art using cameras and a robotic arm. The robot's creator, Aidan Meller, offered to remove the robot's modem, which had generated suspicion among the border agents. However, when they raised issues with its cameras, Meller said those could not be removed, as the cameras' recordings are turned into works of art using artificial intelligence algorithms. The robot's work was scheduled to be shown at the Great Pyramid of Giza in the first contemporary art exhibition at the Pyramids in 4,500 years.

Full Article

 

 

AI Can Help Diagnose Mental Health Disorders Where Access to Care Is Lacking
UPI
Brian P. Dunleavy
October 15, 2021


At the French Institute for Research in Computer Science and Automation, researchers have developed a machine learning approach that accurately diagnosed mental health disorders based on "proxy measures" for mental health about 90% of the time. The researchers based their models on sociodemographic and behavioral data on more than 500,000 adults from the U.K. Biobank database, along with magnetic resonance imaging (MRI) brain scans of more than 10,000 participants. In tests on data from 7,000 U.K. Biobank participants, about half of whom had a diagnosed mental health disorder, the researchers found a combination of sociodemographic information and brain imaging to assess brain age was the most accurate proxy measure for mental health. Said lead researcher Denis Engemann, "We showed that useful proxy measures can be derived from other inputs than brain images."

Full Article

 

dtau...@gmail.com

unread,
Nov 22, 2021, 10:25:59 AM11/22/21
to ai-b...@googlegroups.com

Can a Machine Learn Morality?
The New York Times
Cade Metz
November 19, 2021


Morality is a thorny issue for machines, as scientists learned in testing Delphi, a system programmed by the Allen Institute for Artificial Intelligence (AI) to make moral judgments. The neural network analyzed more than 1.7 million ethical judgments made by humans to establish a morality baseline for itself, and people generally agreed with its decisions when it was released to the open Internet. Some, however, have found Delphi to be inconsistent, illogical, and insulting, highlighting how AI systems reflect the bias, arbitrariness, and worldview of their creators. Delphi's developers hope to build a universally applicable ethical framework for AI, but as Zeerak Talat at Canada's Simon Fraser University observed, "We can't make machines liable for actions. They are not unguided. There are always people directing them and using them."

Full Article

*May Require Paid Registration

 

 

AI System to Help Fight Spread of Infections in Hospitals
The Pittsburgh Post-Gazette
November 17, 2021


University of Pittsburgh and Carnegie Mellon University (CMU) scientists have developed an artificial intelligence (AI)-based technique for detecting and halting hospital-based infectious disease outbreaks. The researchers integrated machine learning and whole genome sequencing to enable identification of outbreaks much faster than with traditional methods. The AI system employs genomic sequencing surveillance to detect whether patients in hospitals have near-identical strains of an infection; computers then can mine patients' electronic health records to determine common transmission vectors, such as procedures using the same equipment or shared healthcare providers. The system “can quickly detect and characterize an emerging outbreak, helping hospitals take swift and precise actions to stop a bug from spreading," said CMU's Artur Dubrawski.

Full Article

 

 

AI Recognizes Primate Behaviors in the Wild
University of Oxford (U.K.)
November 17, 2021


Researchers and wildlife conservationists will be able to streamline analysis of animal behavior in video footage through a new artificial intelligence model that can identify chimpanzees' behavioral patterns in the wild. A team of scientists from the U.K., Japan, and the U.S. trained the model on videos from two chimpanzee populations in West Africa to automatically recognize wild primate behavior. Said Max Bain at the U.K.'s University of Oxford, "We use methods from deep learning with networks that are able to ingest both the audio and the visual stream of information from a video, crucial in the wild where an animal behavior might be heard but not seen, or inaudible actions." The technique adds individual identification to earlier audio-visual behavior recognition methods, allowing animal behaviors to be investigated more thoroughly.

Full Article

 

 

NYC Passes Bill To Curb Business Practices Involving AI Hiring Tools

AP (11/19, O'Brien) reported that New York City passed a bill that “would ban employers from using automated hiring tools unless a yearly bias audit can show they won’t discriminate based on an applicant’s race or gender” and force the AI creators “to disclose more about their opaque workings and give candidates the option of choosing an alternative process...to review the application.” Supporters of the bill liken it to requiring calorie counts on menus. However, “some AI experts and digital rights activists are concerned that it doesn’t go far enough to curb bias” and sets a weak standard for federal regulators and lawmakers to follow. Center for Democracy & Technology President Alexandra Givens said the bill leaves out discrimination based on age and disability, and claims that the audits “could end up giving AI vendors a ‘fig leaf’ for building risky products with the city’s imprimatur.”

 

Democrats Say House, Senate Will Meet To Discuss R&D Proposals

Roll Call (11/18, DeChiaro) reports Democratic leaders have revealed that “House and Senate negotiators will soon go to conference in an effort to send bipartisan legislation aimed at advancing U.S. competitiveness in science and technology to President Joe Biden’s desk.” In June, the Senate “passed legislation that would authorize around $200 billion in spending for the National Science Foundation, the Energy Department and other government agencies tasked with research and development in 21st-century fields of technology such as artificial intelligence, quantum computing, robotics and cybersecurity.” The bill, the US Innovation and Competition Act (USICA), “would also approve $52 billion in spending to bolster the struggling U.S. semiconductor industry.” However, “the House has not passed a companion measure.” A representative of Speaker Nancy Pelosi “did not respond to questions about which bills the House would bring to conference.”

 

High School Students Fight For Ethical Uses Of Artificial Intelligence

CNN Business (9/29, Metz) reports on Encode Justice, “an international group of grassroots activists pushing for ethical uses of artificial intelligence.” There have been “legislators to lobby, online seminars to hold, and meetings to attend, all in hopes of educating others about the harms of facial-recognition technology.” Most of the team “behind Encode Justice have had to cram it all in around high school.” That’s because the group “was created and is run almost entirely by high schoolers.” Its founder and president, Sneha Revanur, “is a 16-year-old high-school senior in San Jose, California and at least one of the members of the leadership team isn’t old enough to get a driver’s license.” It may be the “only youth activist group focused squarely on pointing out the dangers – both real and potential – of AI-based applications such as facial-recognition software and deepfakes.”

 

University Of Glasgow Scientists Develop AI Tool That May Help Prevent Next Pandemic

The Daily Beast (9/28, Patel) reports “after 20 months of lockdowns and despair, the world desperately wants to avoid another global-health crisis like this one.” Now, a trio of scientists “at the University of Glasgow in the U.K. think they have just the tool: an A.I. model that can identify animal viruses with a high risk of one day infecting humans.” Early tests, “detailed in a new study in PLOS Biology, even purport to show how this very technology might have helped identify SARS-CoV-2, the technical name for the virus that causes COVID-19, before its documented emergence in Wuhan, China, in late 2019.” The Glasgow team “developed its own A.I. model to help evaluate zoonotic risk soon after viruses have just been discovered – when there’s hardly any information available yet.”

 

IBM Research Says Analog AI To Be 100 Times More Efficient

Forbes (9/23, Freund) reports how IBM’s AI Hardware Research Center has “delivered significant digital AI logic, and now turns their attention to solving AI problems in an entirely new way.” The article concludes, “There remains significant work to do, and a lot of invention especially if IBM wants to train neural networks in analog. But IBM must feel fairly confident in their prospects to start writing blogs about the technology’s prospects.”

 

AI Being Embedded Throughout Enterprise

Fortune (9/28) CEO Alan Murray writes that “A.I. is steadily working its way into the business processes of big companies.” Fortune held a virtual event “as part of the lead-up to our Brainstorm A.I. gathering in Boston this November.” He says “A.I. applications are no longer limited to the easiest use cases – chat bots, robotic process automation, cybersecurity detection, etc.” – “they are rapidly being embedded throughout the enterprise.” He offers instances of this from companies like Accenture, Boeing, and Honeywell.

 

UK Govt Unveils National Artificial Intelligence Strategy To Rival US, China

CNBC (9/22, Shead) reports the UK government released its “National Artificial Intelligence Strategy” – a “10-year plan to make the country a global ‘artificial intelligence superpower’” – on Wednesday. The plan “is designed to boost the use of AI among the nation’s businesses, attract international investment into British AI companies and develop the next generation of homegrown tech talent,” and it boasts “a number of programs, reports and initiatives” – such as “a new National AI Research and Innovation program” and a program specifically aimed at supporting AI development outside London and Southeast England. The plan comes “after the U.K. government allowed tech giants in the U.S. and Asia to snap up some of Britain’s most innovative AI companies.”

 

Microsoft’s AI For Earth Program Aims For Cloud Computing To Reshape Natural Disaster Modeling

TechCrunch (9/22, Crichton) spotlights Microsoft’s AI for Earth program, through which the company “sees both an opportunity to do good as well as a lucrative niche to secure for its cloud computing service Azure.” Program Director Bruno Sánchez-Andrade Nuño provides further insight.

 

Survey: Companies Need More Tech Talent To Deliver On AI Goals

ZDNet (9/27, McKendrick) reports, “A new survey of IT executives conducted by ZDNet found that companies need more data engineers, data scientists, and developers to deliver on” goals of making AI “as fair, unbiased, and as accurate as possible” in order to make decisions deemed trustworthy by businesses. The study “confirmed that AI and ML initiatives are front and center at most enterprises;” ZDNet quotes AWS VP of machine learning Swami Sivasubramanian saying this is the “golden age” of AI and machine learning, because the technology “is becoming a core part of businesses around the world.”

 

UN Calls For Ban On Certain AI Tech Until Safeguards Are Set Up

ABC News (9/15) reports the United Nations Human Rights chief on Wednesday “called for a moratorium on the sale of and use of artificial intelligence technology that poses human rights risks – including the state use of facial recognition software – until adequate safeguards are put in place.” The plea comes as artificial intelligence “develops at a rapid clip, despite myriad concerns ranging from privacy to racial bias plaguing the emerging technology.” U.N. High Commissioner for Human Rights Michelle Bachelet said in a statement Wednesday, “Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.” Bachelet’s warnings “accompany a report released by the UN Human Rights Office analyzing how artificial intelligence systems affect people’s right to privacy – as well as rights to health, education, freedom of movement and more.”

dtau...@gmail.com

unread,
Nov 25, 2021, 11:58:28 AM11/25/21
to ai-b...@googlegroups.com

Deep Learning Method Adds 301 Planets to Kepler's Total Count
Jet Propulsion Laboratory
November 22, 2021


Scientists have added 301 validated exoplanets to the number of confirmed exoplanets through the use of the new ExoMiner deep learning neural network to analyze data from the U.S. National Aeronautics and Space Administration (NASA) Kepler and K2 missions. ExoMiner taps NASA's Pleiades supercomputer to identify new exoplanets within datasets, and can differentiate real exoplanets from false positives. Jon Jenkins at NASA's Ames Research Center said ExoMiner is not a "black box" that conceals its reasoning, so researchers know why it confirms or rejects exoplanets. Said Ames' Hamed Valizadegan, "ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it's meant to emulate because of the biases that come with human labeling."

Full Article

 

 

Research Team Advances Brain-Inspired Computing
USC Viterbi News
Amy Blumenthal
November 21, 2021


Researchers at the University of Southern California, the U.S. Army Research Laboratory, the University of Florida, and the Georgia Institute of Technology have unveiled advancements in brain-inspired computing that tap randomness to improve optimization. The researchers, aiming to mimic neurons as closely as possible, developed a circuit to address combinatorial optimization problems: a hetero-memristor that dynamically tunes randomness by combining memory and computation, adding a third electrical terminal and modulating its voltage. "The randomness introduced in the new device demonstrated in this work can prevent it from getting stuck at a not-so-viable solution, and instead continue to search until it finds a close-to-optimal result," according to the researchers.

Full Article

 

 

Supercomputers Flex Their AI Muscles
IEEE Spectrum
Samuel K. Moore
November 20, 2021


Japan's Fugaku supercomputer and the Summit supercomputer at the U.S. Department of Energy's Oak Ridge National Laboratory each have surpassed the exascale benchmark High Performance Linpack for Accelerator Introspection (HPL-AI). By this measure, Fugaku has maintained its 2-exaflop performance from June, while Summit's performance improved 23%, reaching 1.4 exaflops. Meanwhile, industry organization MLCommons released the results from version 1.0 of its high-performance computing benchmarks (MLPerf HPC), which demonstrated a four- to seven-fold improvement in the time needed to train machine learning models to execute scientific tasks. Eight participating supercomputing centers produced 30 benchmark results in testing three neural networks.

Full Article

 

 

Reading the Mind of a Worm
Salk News
November 19, 2021


Scientists can learn which chemical a worm smelled seconds before by studying its brain activity, through research conducted at the Salk Institute for Biological Studies. Salk's Sreekanth Chalasani and colleagues studied the neuronal reactions of the Caenorhabditis elegans worm to smelling five different chemicals. They had developed worms in which each of their 302 neurons contains a fluorescent sensor that brightens when the neuron is active, then exposed 48 of the worms to repeated bursts of five chemicals; an average of 50 to 60 neurons activated in response to each chemical. The researchers applied graph theory to differentiate between the chemicals, then used a machine learning algorithm to make finer distinctions in neural response. "It's a start, but we're still only getting a partial answer as to how the brain discriminates these things," said Chalasani.
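
A toy decoding example in the same spirit (not the authors' pipeline) is sketched below: each trial is a 302-dimensional vector of neural activity, and a linear classifier is trained to recover which of five chemicals was presented. The data here are random stand-ins.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_trials, n_neurons, n_chemicals = 480, 302, 5   # C. elegans has 302 neurons

    # Stand-in data: each trial is a vector of fluorescence responses, with a small
    # chemical-specific offset added so the classes are separable.
    labels = rng.integers(0, n_chemicals, n_trials)
    offsets = rng.normal(scale=0.5, size=(n_chemicals, n_neurons))
    activity = rng.normal(size=(n_trials, n_neurons)) + offsets[labels]

    X_train, X_test, y_train, y_test = train_test_split(
        activity, labels, test_size=0.25, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print("decoding accuracy:", clf.score(X_test, y_test))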

Full Article

 

 

Wikipedia Tests AI for Spotting Contradictory Claims in Articles
New Scientist
Matthew Sparkes
November 19, 2021


Researchers at Taiwan's National Cheng Kung University, in conjunction with the Wikimedia Foundation, have developed artificial intelligence technology which they say can identify contradictory claims in Wikipedia articles and flag them for human review. The researchers found 2,321 contradiction warnings in all English Wikipedia articles posted by March 2020. They used 80% of 1,105 examples of contradictions and solutions by human editors to train the neural network to detect contradictions on its own. The remaining 20% of the data then was used to test the neural network, which was found to have an accuracy rate of up to 65%.
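
The study's 80/20 train/test protocol can be illustrated with a toy baseline; the sketch below uses TF-IDF features and logistic regression on made-up statement pairs rather than the authors' neural network.

    from sklearn.model_selection import train_test_split
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled examples: pairs of statements joined into one string,
    # labeled 1 if they contradict each other and 0 otherwise.
    pairs = [
        "The bridge opened in 1930. [SEP] The bridge opened in 1951.",
        "The bridge opened in 1930. [SEP] It has carried traffic since 1930.",
        "The singer was born in Oslo. [SEP] The singer was born in Bergen.",
        "The singer was born in Oslo. [SEP] She grew up in Norway.",
    ] * 50
    labels = [1, 0, 1, 0] * 50

    # The study used 80% of its human-edited examples for training, 20% for testing.
    X_train, X_test, y_train, y_test = train_test_split(
        pairs, labels, test_size=0.2, random_state=0)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))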

Full Article

*May Require Paid Registration

 

Deep Learning Reveals How Proteins Interact
UW Medicine
Ian Haydon
November 19, 2021


A multi-institutional group of researchers has constructed three-dimensional (3D) models of protein interactions in eukaryotic cells using evolutionary analysis and deep learning. The researchers analyzed known gene sequences in yeast to map the interactions that create protein complexes, and statistically identified gene pairs that naturally acquire mutations in a connected manner. They also employed UW Medicine's RoseTTAFold and DeepMind's AlphaFold deep learning software to simulate the interacting proteins' 3D shapes. UW Medicine's David Baker said the research yielded insights into protein interactions for nearly all core eukaryotic cellular processes, including "over 100 interactions that have never been seen before."

Full Article

Imaging Technology May Reduce Need for Skin Biopsies
UCLA Samueli School of Engineering
November 18, 2021


University of California, Los Angeles (UCLA) researchers used virtual histology technology to assess images of skin tissues, which could reduce the need for skin biopsies to detect skin cancers. The researchers invented a deep learning framework to convert images of intact skin obtained by reflectance confocal microscopy (RCM) into a format that dermatologists and pathologists can use. The convolutional neural network was trained to rapidly render RCM images of unstained skin as virtually stained three-dimensional images. Said UCLA's Aydogan Ozcan, "This approach may allow diagnosticians to see the overall histological features of intact skin without invasive skin biopsies or the time-consuming work of chemical processing and labeling of tissue."

Full Article

 

 

Microrobotic Trajectory Tracking Method Using Broad Learning System
Chinese Academy of Sciences (China)
Li Yuan
November 19, 2021


Researchers at the Chinese Academy of Sciences have developed a broad learning system (BLS)-based technique for tracking microrobots' trajectories flexibly and accurately. The researchers designed a learning-based microrobot servo control algorithm and, by combining it with Lyapunov theory, derived limits on the controller parameters; they also created a BLS-based controller training algorithm that employs multiple tracking trajectories as demonstrations, and used it to acquire controller parameters. Simulations and experimental results showed the BLS-based technique could train the controller in about six seconds.

Full Article

 

 

ML Helps Locally Restore Wetlands for Coastal Protection
Royal Netherlands Institute for Sea Research (Netherlands)
November 16, 2021


Researchers in China, the Netherlands, Britain, and Belgium collaborated on the use of machine learning (ML) to convert an experimental dataset into models that can forecast coastal wetland restoration for flood protection. "From the large datasets generated in recent field and laboratory experiments, we know that the establishment process of wetland vegetation is complex and depending on a diverse set of factors in its living environment," said Zhan Hu of China's Sun Yat-Sen University. The ML model showed marsh establishment can be managed well despite continuing global climate change. The model also revealed that "It is especially the sediment supply, the local wave height, and shape of the tidal flat in front of the marsh that we need to control to counteract the threats of changing wind climate and rising sea level," Hu said.

Full Article

 

Facebook, Tesla Find Artificial Intelligence Failing To Deliver Despite High Expectations

In a Bloomberg Opinion (10/4, Olson) post, Parmy Olson says both Facebook Inc. co-founder Mark Zuckerberg and Tesla CEO Elon Musk “are grappling with big problems that stem, at least in part, from putting faith in artificial intelligence systems that have underdelivered. Zuckerberg is dealing with algorithms that are failing to stop the spread of harmful content; Musk with software that has yet to drive a car in the ways he has frequently promised.” Olson says there is “one lesson to be gleaned from their experiences: AI is not yet ready for prime time. Furthermore, it is hard to know when it will be. Companies should consider focusing on cultivating high-quality data – lots of it – and hiring people to do the work that AI is not ready to do.”

 

VentureBeat Speaks To Women Who Pioneered AI Ethics Teams

VentureBeat (9/30, Lazzaro) reports that following “countless discoveries of AI systems causing real-world harm and a slew of professionals ringing the alarm, tech companies now know that all eyes – from customers to regulators – are on their AI.” Many companies have established AI ethics teams to perform in-house reviews. “Present at companies including Google, Microsoft, IBM, Facebook, Salesforce, Sony, and more, such groups and boards were largely positioned as places to do important research and even act as safeguards against the companies’ own AI technologies.” VentureBeat speaks to several women who have worked on these boards and who weigh in on their effectiveness.

dtau...@gmail.com

unread,
Dec 4, 2021, 12:34:52 PM12/4/21
to ai-b...@googlegroups.com

DeepMind's AI Helps Untangle the Math of Knots
Nature
Davide Castelvecchi
December 1, 2021


Researchers at U.K.-based artificial intelligence (AI) laboratory DeepMind worked with mathematicians to use AI to identify patterns that humans had overlooked in a theory of knots and the study of symmetry, via machine learning. Knot theorists determined the saliency maps technique would be most useful in finding a pattern that linked two knot properties, which led to a formula that appeared correct in all testable cases. The University of Oxford's Marc Lackenby and András Juhász devised a proof that the formula applied to a large class of knots. Geordie Williamson and colleagues at Australia's University of Sydney used AI to discover a technique for guessing precisely which graphs and polynomials originated from the same symmetries. DeepMind's Alex Davies said the mathematicians had to customize AI techniques to the mathematical problems. Said Juhász, "Any area of mathematics where sufficiently large datasets can be generated could benefit from this approach."

Full Article

 

 

Study Shows How ML Could Improve COVID-19 Predictive Models
News from Brown
December 1, 2021


Brown University mathematicians suggest machine learning (ML) could augment the performance of commonly used epidemiological models for predicting pandemics. Brown's George Karniadakis said nine prominent COVID-19 prediction models were incorrect and ineffective because they treated key parameter values as fixed over time. The researchers used physics-informed neural networks (PINNs) featuring equations describing physical laws governing a system to capture shifting parameters in epidemiological models. They fed PINN-outfitted models real-world data from New York City, Rhode Island, Michigan, and Italy, and allowed them to infer values for key parameters over time. In January, they used the models to make six-month forecasts based on these parameters, and case rates from January through June fell within the anticipated uncertainty window. Brown's Ehsan Kharazmi said, "This can provide some insights for making or adjusting policies."
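
The sketch below is a greatly simplified stand-in for the approach, not the authors' PINN code: a small neural network represents a time-varying transmission rate beta(t), an SIR model is integrated with differentiable Euler steps, and fitting the simulated infections to synthetic case data trains the network. The recovery rate, network size, and data are assumptions.

    import torch
    import torch.nn as nn

    # Synthetic daily infection fractions to fit (stand-in for real case data).
    days = 90
    t = torch.linspace(0, 1, days).unsqueeze(1)
    observed_I = 0.02 + 0.1 * torch.exp(-((t - 0.4) ** 2) / 0.02)   # a made-up wave

    beta_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
    gamma = 0.1                                   # fixed recovery rate (assumption)
    opt = torch.optim.Adam(beta_net.parameters(), lr=1e-2)

    def simulate():
        """Euler-integrate SIR with the network's time-varying beta(t)."""
        S, I = torch.tensor(0.99), torch.tensor(0.01)
        out = []
        for k in range(days):
            beta = beta_net(t[k:k+1]).squeeze()
            dS = -beta * S * I
            dI = beta * S * I - gamma * I
            S, I = S + dS / days, I + dI / days
            out.append(I)
        return torch.stack(out).unsqueeze(1)

    for step in range(500):
        opt.zero_grad()
        loss = ((simulate() - observed_I) ** 2).mean()
        loss.backward()
        opt.step()

    print("final fit error:", float(loss))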

Full Article

 

 

Living Robots Can Now Reproduce, Scientists Say
CNN
Katie Hunt
November 29, 2021


Researchers at the University of Vermont, Tufts University, and Harvard University discovered that the living robots they developed last year from the stem cells of the African clawed frog are able to reproduce. University of Vermont's Josh Bongard said the xenobots reproduce using kinetic replication, which has never been seen before at the scale of whole cells or organisms. The researchers used a supercomputer and artificial intelligence (AI) to determine the C-shape that would make the xenobots more effective at kinetic replication. Bongard said the AI came up “with this Pac-Man shape” of the xenobots, which “influences how the xenobots behave to amplify this incredibly surprising process."

Full Article

 

 

AI That Understands Object Relationships
MIT News
Adam Zewe
November 29, 2021


A deep learning model developed by researchers at the Massachusetts Institute of Technology (MIT) understands the underlying relationships between objects in a scene, which could further the development of robots able to learn from and interact with their environments. The framework can generate an image of a scene based on text descriptions of objects and their relationships within that scene. The researchers used energy-based models to represent individual object relationships in a scene description, then combined them to infer all objects and relationships. Said MIT's Yilun Du, "We can increase our sentence from having one relation description to having two, or three, or even four descriptions, and our approach continues to be able to generate images that are correctly described by those descriptions, while other methods fail."
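
A toy example of composing energy-based models follows: two hand-written "relation" energies over a 2-D object position are summed, and a position satisfying both relations is sampled with Langevin dynamics. The energies, step size, and noise scale are assumptions and are unrelated to the MIT image model.

    import torch

    # Two toy "relation" energies over a 2-D object position: low energy when the
    # object is to the right of anchor A, and when it is above anchor B.
    A = torch.tensor([0.0, 0.0])
    B = torch.tensor([1.0, -1.0])

    def e_right_of(x, a):
        return torch.relu(a[0] - x[0] + 0.5)

    def e_above(x, b):
        return torch.relu(b[1] - x[1] + 0.5)

    def total_energy(x):
        # Composing relations = summing their energies.
        return e_right_of(x, A) + e_above(x, B)

    # Langevin dynamics: follow the negative energy gradient plus a little noise.
    x = torch.randn(2, requires_grad=True)
    for _ in range(200):
        energy = total_energy(x)
        grad, = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x += -0.1 * grad + 0.01 * torch.randn(2)
    print("sampled position:", x.detach(), "energy:", float(total_energy(x)))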

Full Article

 

Analysis: AI Enables “Dramatic” Transit Improvements

VentureBeat (10/25, Cole) reports on how AI can enable “dramatic improvements to our mobility.” In many cities, mass transit systems “struggle to keep up with rider levels, resulting in gridlock, accidents, and dissatisfied consumers,” but places like Park City, Utah have shown that AI can “improve the planning and scheduling of mass transit.” VentureBeat comments, “In essence, AI is not doing anything for mass transit that it isn’t doing for any other segment of society. The difference is that transportation is a crucial aspect of modern life, and it is under extraordinary pressure these days.”

 

Internal Facebook Data Shows AI Content Moderation Less Effective Than Promised

The Wall Street Journal (10/17, A1, Seetharaman, Horwitz, Scheck, Subscription Publication) reports Facebook executives believe advances in artificial intelligence will help address the company’s long-running problems with hate speech moderation, but such technology is much farther away than what they suggest, according to documents obtained by the Journal. Civil rights groups have long criticized Facebook for keeping its internal AI data close to its chest.

 

Facebook Researchers Collect Thousands Of Hours Of First-Person Video To Train AI

TechCrunch (10/14, Coldewey) reports that Facebook researchers collected a few thousand hours of first-person video to help train AI. The challenge Facebook is “attempting to get a grip on is simply that even the most impressive of object and scene recognition models today have been trained almost exclusively on third-person perspectives,” so it can “recognize a person cooking, but only if it sees that person standing in a kitchen, not if the view is from the person’s eyes.” The solution to “machine learning problems is generally either more or better data, and in this case it can’t hurt to have both.” Facebook “contacted research partners around the world to collect first-person video of common activities like cooking, grocery shopping, tying shoelaces or just hanging out.”

 

Carnegie Mellon, Pitt Researchers Study How Biological Learning Is Similar To How Computers Learn

Technology Networks (10/14) reported that “pinpointing how neural activity changes with learning is anything but black and white.” Recently, “some have posited that learning in the brain, or biological learning, can be thought of in terms of optimization, which is how learning occurs in artificial networks like computers or robots.” A new perspectives piece “co-authored by Carnegie Mellon University and University of Pittsburgh researchers relates machine learning to biological learning, showing that the two approaches aren’t interchangeable, yet can be harnessed to offer valuable insights into how the brain works.” Byron Yu, professor of biomedical engineering and electrical and computer engineering at Carnegie Mellon, said, “It turns out that in machine learning and artificial intelligence, there is a well-developed framework in which something learns, known as optimization. We and others in the field have been thinking about how the brain learns in comparison to this framework, which was developed to train artificial agents to learn.” The optimization viewpoint “suggests that activity in the brain should change during learning in a mathematically prescribed way, akin to how the activity of artificial neurons changes in a specific way when they are trained to drive a robot or play chess.”

 

How Tech Companies Are Trying To Prevent Ethical Lapses Around AI

Fortune (10/12, Fry) asks, “Is there a way for tech companies to more responsibly navigate the gray, unregulated spaces they’re often operating in – especially where artificial intelligence is involved?” Industry experts speaking at the Fortune Most Powerful Women Summit in Washington, D.C., on Monday offered their prescriptions. Raj Seshadri said, “The approach we’ve taken is to be very principled about it.” Mastercard “has principles around data security and transparency as well as consumers’ rights, she noted, adding that when it comes to artificial intelligence, her team tries to probe the A.I.’s sources.”

 

Pentagon Official Resigns, Claims US Has Lost AI Battle With China

Reuters (10/11, Faulconbridge) reports China has prevailed in its “artificial intelligence battle with the United States and is heading towards global dominance because of its technological advances, the Pentagon’s former software chief told the Financial Times.” Nicolas Chaillan, the Pentagon’s first chief software officer “who resigned in protest against the slow pace of technological transformation in the U.S. military, said the failure to respond was putting the United States at risk.” Chaillan told the Times, “We have no competing fighting chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over in my opinion,” adding, “Whether it takes a war or not is kind of anecdotal.” Chaillan blamed sluggish innovation, “the reluctance of U.S. companies such as Google to work with the state on AI and extensive ethical debates over the technology.”

 

Google Wants To Use AI To Time Traffic Lights More Efficiently

Reuters (10/6, Dave) reports Alphabet Inc’s Google “cut fuel use and traffic delays by 10% to 20% at four locations in Israel by using artificial intelligence to optimize signal lights and it next plans to test the software in Rio de Janeiro, the company said on Wednesday.” The early-phase research project “is among new software initiatives inside Google to combat climate change.” Based “on early results in Israel’s Haifa and Beer-Sheva, Rio de Janeiro’s municipal traffic authority expressed high hopes for the AI to better time traffic signal changes.” The authority “told Reuters the system should be introduced within months with locations announced soon.” Aleksandar Stevanovic, “an associate professor of civil and environmental engineering at University of Pittsburgh, said simulations show AI could smooth traffic flow.” However, he “questioned whether a tech company without traffic engineering expertise ultimately could bring such software to reality.”

 

Opinion: AI Flaws Could Make Cars Racist

The Los Angeles Times (10/7, Kim) posts an opinion piece from Yale University Associate Professor of Computer Science Theodore Kim arguing that the failure to create machine learning algorithms capable of depicting Black people could make AI cars less likely to recognize them in cases of emergency. Kim details how computer graphics researchers “have not systematically investigated the shine and gloss that characterizes dark and Black skin, or the characteristics of Afro-textured hair.” Kim concludes by arguing for investment “in the research and development of algorithms that are rigorously, measurably, demonstrably capable of depicting the full spectrum of humanity.”

dtau...@gmail.com

unread,
Dec 12, 2021, 9:04:50 AM12/12/21
to ai-b...@googlegroups.com

AI Tool Detects Most Common Climate Falsehoods
The Washington Post
Kasha Patel
December 9, 2021


A machine learning algorithm developed by researchers at Australia's Monash University can detect climate misinformation online. The researchers programmed the artificial intelligence tool to flag Websites that present false or misleading information about climate change science and remedies in multiple categories. The team analyzed over 250,000 documents from 1998 to 2020 from 20 popular, mostly U.S.-based conservative think tanks, and 33 central contrarian blogs that mainly refuted climate information about science/scientists and solutions. Monash's John Cook said, “There is a dearth of research into understanding attacks on climate science and scientists themselves, let alone developing solutions.”

Full Article

*May Require Paid Registration

 

 

Australia's AI Cameras Catch Over 270,000 Drivers Using Phones
New Scientist
Alice Klein
December 8, 2021


Artificial intelligence (AI)-equipped cameras have spotted more than 270,000 drivers using phones while driving in New South Wales (NSW), Australia, since the state began issuing fines in March 2020. The cameras capture high-definition images of the front of each passing vehicle, and AI software analyzes them to identify drivers using a handheld cellphone; officers vet images flagged as potentially showing violations before fining those drivers. Transport for NSW's Tara McCarthy said, "We know that mobile phone detection cameras are working and people are getting the message not to use their phone illegally, as we have seen a significant drop in offenses."

Full Article

 

 

Deep Learning Makes Nanoscale Designs More Robust Against Defects
Penn State College of Engineering News
Gabrielle Stewart
December 7, 2021


Pennsylvania State University (Penn State) researchers have developed a deep learning technique to make nanoscale metasurface designs more resilient. The researchers used optimizer software to virtually assess designs and suggest new iterations based on the most successful configurations; they could impose limits on minimum feature sizes in metasurface components according to the constraints of the nanofabrication technique. The resulting data then was fed into a deep learning model trained to forecast the effect of manufacturing defects on performance. The model applied this knowledge to ascertain the optimized designs' robustness to defects, which was input to the optimizer, yielding high-performance designs. Penn State's Sawyer Campbell said, "By implementing fabrication errors on the simulation side in advance, we can enable more freeform designs that have been shown in simulations to achieve extremely high performance."
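
The sketch below illustrates the general pattern of using a learned surrogate to score defect robustness, not Penn State's tooling: a small (here untrained) network stands in for the trained performance predictor, and each candidate design is ranked by its average predicted performance under random perturbations that mimic fabrication errors.

    import torch
    import torch.nn as nn

    # Toy stand-in: 8 design parameters describe a metasurface element, and a small
    # surrogate network (untrained, random weights here) predicts its performance.
    surrogate = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

    def robustness_score(design, n_defects=64, defect_scale=0.05):
        """Average predicted performance over randomly perturbed copies of a design,
        mimicking the effect of fabrication errors."""
        noisy = design + defect_scale * torch.randn(n_defects, design.shape[-1])
        with torch.no_grad():
            return surrogate(noisy).mean().item()

    # Rank candidate designs by robustness rather than nominal performance.
    candidates = torch.rand(100, 8)
    scores = [robustness_score(c) for c in candidates]
    best = int(torch.tensor(scores).argmax())
    print("most defect-tolerant candidate:", best, "score:", scores[best])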

Full Article

 

 

Generating a Realistic 3D World
MIT News
Lauren Hinkel
December 6, 2021


Researchers at the Massachusetts Institute of Technology (MIT), the MIT-IBM Watson Artificial Intelligence (AI) Laboratory, and Harvard and Stanford universities developed the ThreeDWorld (TDW) platform to simulate high-fidelity audio and visual environments. Said the MIT-IBM Watson AI Lab's Chuang Gan, "We are trying to build a general-purpose simulation platform that mimics the interactive richness of the real world for a variety of AI applications." TDW can produce photorealistic scenes and render audio in real time for compilation into audio-visual datasets, modified through interaction, and adapted for human and neural network learning and prediction tests.

Full Article

 

 

Using Statistics to Aid the Fight Against Misinformation
American University
Rebecca Basu
December 2, 2021


A machine learning-based statistical model developed by American University (AU) researchers can detect misinformation in social posts while addressing issues of algorithmic transparency. AU's Zois Boukouvalas said, "We would like to know what a machine is thinking when it makes decisions, and how and why it agrees with the humans that trained it." The researchers trained the model on tweets labeled as either misinformation or real, and included pre-defined rules about language used in misinformation; nuances in human language and linguistic features associated with misinformation also were used in the training. The model correctly classified a test set of 112 real and misinformation tweets with nearly 90% accuracy, "while offering transparency about how it detected the tweets that were misinformation," Boukouvalas said.
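
A toy version of a transparent classifier in the same spirit is sketched below: a handful of pre-defined, human-readable linguistic cues feed a logistic regression whose weights can be inspected directly. The example tweets, features, and labels are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Tiny hypothetical training set: 1 = misinformation, 0 = real.
    tweets = [
        "BREAKING!!! they don't want you to know the REAL truth, share now",
        "Officials released updated water-quality data for the region today",
        "Wake up!!! the so-called experts are lying to you again",
        "The city council voted to fund repairs to the local levee system",
    ]
    labels = [1, 0, 1, 0]

    def features(text):
        """Pre-defined, human-readable cues like those described in the study."""
        words = text.lower().split()
        return [
            text.count("!"),                                        # exclamation marks
            sum(w.isupper() and len(w) > 2 for w in text.split()),  # shouted words
            sum(w in {"truth", "lying", "wake", "share"} for w in words),  # loaded terms
            len(words),                                             # tweet length
        ]

    X = np.array([features(t) for t in tweets])
    clf = LogisticRegression().fit(X, labels)

    # Transparency: each weight maps back to a named feature a human can inspect.
    for name, w in zip(["exclamations", "all-caps words", "loaded terms", "length"],
                       clf.coef_[0]):
        print(f"{name:15s} weight = {w:+.2f}")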

Full Article

 

Researchers Studying How Artificial Intelligence Could Affect Future Of Clerical Jobs

The Cedar Rapids (IA) Gazette (11/4) reported that a team of investigators is looking at “how artificial intelligence could affect the future of clerical work – long an entry to the middle class for many workers.” AI is “rapidly developing” the capability to perform clerical tasks such as answering phones, filing documents, and entering data in a software program. A team at the University of Iowa received “a $150,000 National Science Foundation grant to fund the initial stages of its work. Over the next year they’ll survey earlier research and meet with experts in several fields to identify what questions to ask and [in] what directions they might lead.”

 

Google Seen As “Aggressively Pursuing” Military Technology Contract

The New York Times (11/3, Wakabayashi, Conger) reports, “Three years after an employee revolt forced Google to abandon work on a Pentagon program that used artificial intelligence, the company is aggressively pursuing” the Joint Warfighting Cloud Capability contract to provide technology to the military. The company’s move “could raise a furor among its outspoken work force and test the resolve of management to resist employee demands.” Amid ongoing legal challenges from AWS, the Defense Department in July “scrapped the JEDI contract and said it would be replaced with the Joint Warfighting Cloud Capability.” DOD “also noted that Amazon and Microsoft were the only companies that likely had the technology to meet its needs, but said it would conduct market research before ruling out other competitors.” The Pentagon “said it planned to reach out to Google, Oracle and IBM.”

        The Wall Street Journal (11/3, Mickle, Subscription Publication) reports that in challenging “the JEDI award to Microsoft, Amazon contended in court that then-President Donald Trump exerted improper pressure on the Pentagon to keep the contract from going to Amazon because it was led by founder Jeff Bezos.”

        The Hill (11/3, Schnell) reports Google Cloud “announced an emergency ‘Code Yellow’ in September for the Pentagon proposal, two sources told the Times, which allows the company to transfer engineers from other assignments to the military endeavor.” Google Cloud CEO Thomas Kurian “reportedly met with Charles Q. Brown, the Air Force chief of staff, on Tuesday, where personnel from the tech giant made the argument for why its company is fit for the contract.”

        The Verge (11/3, Clark) reports that following employee backlash over its work on Project Maven, Google “announced its AI ethics principles, promising that it wouldn’t work on AI-powered weapons or AI surveillance projects that were likely to draw ire from human rights or privacy advocates. The company did, however, say it would continue working with the military ‘in many other areas.’” In fact, Google “has continued to do work with the military since its pledge, with some projects involving AI.”

        Additional coverage includes Engadget (11/3) and The Information (11/3, Subscription Publication).

 

Facebook To Shut Down Facial Recognition System

The New York Times (11/2, Hill, Mac) reports Facebook “plans to shut down its decade-old facial recognition system this month, deleting the face scan data of more than one billion users and effectively eliminating a feature that has fueled privacy concerns, government investigations, a class-action lawsuit and regulatory woes.” In a blog post, Jerome Pesenti, Vice President of Artificial Intelligence at Facebook parent company Meta, “said...the social network was making the change because of ‘many concerns about the place of facial recognition technology in society.’” The Times adds that while Facebook “will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace,” and a Meta spokesman conceded the company “has also not ruled out incorporating facial recognition technology into future products,” privacy advocates “nonetheless applauded the decision.”

 

MIT Study Explores Avoiding Shortcut Solutions In AI

MIT News (11/2, Zewe) reports “a new study by researchers at MIT explores the problem of shortcuts in a popular machine-learning method and proposes a solution that can prevent shortcuts by forcing the model to use more data in its decision-making.” By “removing the simpler characteristics the model is focusing on, the researchers force it to focus on more complex features of the data that it hadn’t been considering.” Then, by “asking the model to solve the same task two ways – once using those simpler features, and then also using the complex features it has now learned to identify – they reduce the tendency for shortcut solutions and boost the performance of the model.” One potential application “of this work is to enhance the effectiveness of machine learning models that are used to identify disease in medical images.” Shortcut solutions in this context could lead to false diagnoses and have dangerous implications for patients.

 

Facebook Developing New AI-Based Robotics

Engadget (11/1, Tarantola) reports that Facebook “developed a suite of tactile technologies that will impart a sense of touch into robots.” Facebook AI Research “(in partnership with Carnegie Mellon University) has developed ReSkin, a touch-sensitive ‘skin’ for robots and wearables alike.” On the machine-learning side, Engadget explains that simulations allow “ML systems to train in a virtual environment without the need to collect heaps of real-world hardware data,” which is how “FAIR has developed TACTO. This system can generate hundreds of frames of realistic high-resolution touch readings per second as well as simulate vision-based tactile sensors like DIGIT so that researchers don’t have to spend hours upon hours tapping on sensors to create a compendium of real-world training data.”

 

Ford Broadens AI Tools For Non-IT Professionals

The Wall Street Journal (11/1, McCormick, Subscription Publication) reports Ford Motor Co. is giving artificial-intelligence development tools to engineers, designers, and other non-IT professionals to help the company harness the technology for a range of pressing issues, including supply-chain challenges.

dtau...@gmail.com

unread,
Dec 18, 2021, 2:02:55 PM12/18/21
to ai-b...@googlegroups.com

AI Argues For, Against Itself in Oxford Union Debate
BBC
December 16, 2021

An artificial intelligence (AI) engine both defended and argued against itself at the U.K.'s Oxford Union debating society. The Megatron LLB Transformer developed by Nvidia's Applied Deep Research team was trained on a corpus of data including all of Wikipedia, 63 million English news articles from 2016 to 2019, and 38 gigabytes of Reddit chat. The AI referred to itself as a tool that can be used and abused, and contended that humans were not "smart enough" to make it ethical or moral. "In the end I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI," it said. The AI also said the "best AI will be the AI that is embedded into [humans'] brains, as a conscious entity."
 

Full Article

 

 

Misinformation Already In the Metaverse
Bloomberg
Jillian Deutsch; Naomi Nix; Sarah Kopit
December 15, 2021


Misinformation has infiltrated the metaverse, an immersive, interconnected digital environment for social interaction that technology giants are racing to create. Regulators warn the same qualities that make the metaverse appealing may also invite harmful content, with weaponization of Virtual Reality-powered experiences a potential threat. Karen Kornbluh with the German Marshall Fund's Digital Innovation and Democracy Initiative said, "The Facebook Papers showed that the platform can function almost like a turn-key system for extremist recruiters, and the metaverse would make it even easier to perpetrate that violence." Andrea-Emilio Rizzoli at Switzerland's Dalle Molle Institute for Artificial Intelligence (AI) said the relative safety of the metaverse will depend on how companies teach their AI systems to moderate their metaverse platforms.
 

Full Article

 

 

U.N. Chief Urges Action on Lethal Autonomous Weapons as Geneva Talks Open
Reuters
Emma Farge
December 13, 2021


U.N. Secretary-General Antonio Guterres issued a new call for regulation of lethal autonomous weapons (LAWS) at the Convention on Certain Conventional Weapons this week in Geneva, Switzerland. LAWS are fully machine-controlled and use technology like artificial intelligence and facial recognition; regulatory urgency has escalated since a U.N. panel reported in March that the first autonomous drone attack may have already transpired in Libya. Some states participating in the talks support a total ban of LAWS, while others, like the U.S., think such weapons can be used to hit targets more precisely than humans. A diplomat involved in the talks said that while there is insufficient support to launch a treaty right now, "We think some principles could be agreed for national implementation."
 

Full Article

 

 

DeepMind Massive Language AI Approaches Human-Level Reading Comprehension
Fortune
Jeremy Kahn
December 8, 2021


Artificial intelligence (AI) research company DeepMind unveiled the Gopher language software, which the company says approaches human-level performance in reading comprehension. Gopher performs many tasks with greater accuracy than other ultra-large language models, like answering questions about specialized subjects; for tasks like logical reasoning and mathematics, it performs as well or nearly as well as other models. Gopher is smaller than some ultra-large language models, featuring about 280 billion different parameters, compared to 535 billion in Microsoft and Nvidia's Megatron model and 1.6 trillion in Google's model. DeepMind's Koray Kavukcuoglu said the company has no plans to commercialize the software at present.

Full Article

 

AI-Driven APOLLO Predicts Processor Power Consumption
The Engineer (U.K.)
December 13, 2021

Duke University computer engineers have developed the APOLLO artificial intelligence (AI) method, which can predict the power consumption of any kind of computer processor over 1 trillion times per second. Said Duke's Yiran Chen, "Our approach runs directly on the microprocessor in the background, which opens many new opportunities." APOLLO employs AI to identify and choose 100 of a processor's millions of signals that correspond most closely with its power consumption, then constructs a power consumption model and watches the signals to generate a real-time performance forecast. The researchers said APOLLO's autonomous, data-driven nature means it can be deployed on most processor architectures.
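The core trick is to pick a small set of proxy signals whose activity tracks power and then fit a cheap model on only those. The sketch below illustrates that idea with synthetic signals and an L1-regularized fit as a stand-in for APOLLO's own selection procedure; nothing here is the Duke implementation.

```python
# Hedged sketch of APOLLO-style power-proxy selection on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
cycles, n_signals = 2000, 500            # stand-ins for cycles and candidate RTL signals
signals = rng.integers(0, 2, (cycles, n_signals)).astype(float)
true_w = np.zeros(n_signals)
true_w[rng.choice(n_signals, 50, replace=False)] = rng.random(50)
power = signals @ true_w + rng.normal(0, 0.1, cycles)    # synthetic per-cycle power trace

# An L1 penalty drives most weights to zero, acting as automatic signal selection.
model = Lasso(alpha=0.02, max_iter=10000).fit(signals, power)
selected = np.flatnonzero(model.coef_)
rmse = np.sqrt(np.mean((model.predict(signals) - power) ** 2))
print(f"kept {selected.size} of {n_signals} signals, RMS error {rmse:.3f}")
```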
 

Full Article

 

 

ML Flags Remedies That Might Do More Harm Than Good
MIT News
Adam Zewe
December 9, 2021


Scientists at the Massachusetts Institute of Technology (MIT), Microsoft Research, and Adobe India designed a machine learning (ML) model to identify medical treatments that could pose greater potential danger than alternatives, and to alert doctors when sepsis patients may require a change in treatment. The researchers trained a reinforcement learning model on limited data from a hospital intensive care unit (ICU) to identify treatments to avoid. The Dead-end Discovery model indicated about 12% of treatments administered to sepsis patients in an ICU were harmful, with about 3% of patients entering a medical “dead end” in their treatment as long as 48 hours prior to their deaths. MIT's Taylor Killian said the model is “almost eight hours ahead of a doctor's recognition of a patient's deterioration,” which he described as “powerful because in these really sensitive situations, every minute counts, and being aware of how the patient is evolving, and the risk of administering certain treatment at any given time, is really important."
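The "dead end" concept can be illustrated on a toy Markov decision process; all states, actions, and probabilities below are invented for illustration and this is not the MIT model. A state is flagged when even the best available action leaves little chance of reaching the good outcome.

```python
# Toy illustration of the dead-end idea: in a small Markov chain, flag states
# from which even the best action still leads to the bad outcome with high probability.
import numpy as np

# states: 0=stable, 1=worsening, 2=critical, 3=recovered (good), 4=deceased (bad)
# P[action, state, next_state], two stand-in treatment options
P = np.zeros((2, 5, 5))
P[:, 3, 3] = P[:, 4, 4] = 1.0                      # terminal states absorb
P[0, 0] = [0.0, 0.2, 0.0, 0.8, 0.0]
P[1, 0] = [0.0, 0.6, 0.1, 0.3, 0.0]
P[0, 1] = [0.0, 0.0, 0.5, 0.4, 0.1]
P[1, 1] = [0.0, 0.0, 0.8, 0.1, 0.1]
P[0, 2] = [0.0, 0.0, 0.0, 0.1, 0.9]
P[1, 2] = [0.0, 0.0, 0.0, 0.2, 0.8]

# value = best achievable probability of reaching the good outcome (state 3)
v = np.zeros(5)
v[3] = 1.0
for _ in range(100):
    v = np.maximum.reduce([P[a] @ v for a in range(2)])
    v[3], v[4] = 1.0, 0.0                          # keep terminal values fixed

dead_end_threshold = 0.25
for s in range(3):
    flag = "DEAD END" if v[s] < dead_end_threshold else "ok"
    print(f"state {s}: best recovery probability {v[s]:.2f} -> {flag}")
```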
 

Full Article

 

 

Research Enables Key Step Toward Personalized Medicine: Modeling Biological Systems
Oregon State University News
Armand Resto-Spotts
December 9, 2021


Oregon State University (OSU) researchers have found that machine learning can be used to model nonlinear biological systems at the cellular level, paving the way for personalized medicine. Although biological systems are resistant to traditional upscaling techniques, machine learning can reduce the information load for complex systems at the cellular level, allowing researchers to model the impact or response of cells with high fidelity without modeling them individually. OSU's Brian D. Wood explained the process as "simplifying a computational problem that has tens of millions of data points by reducing it to thousands of data points." Said Wood, "Our work capitalizes on what are called deep neural networks to upscale the nonlinear processes found in transport and reactions within tissues," which could lead to potential treatments based on numerical model outcomes.

Full Article

 

 

Robot Plays Table Tennis After 90 Minutes of Training
New Scientist
Chris Stokel-Walker
December 10, 2021


Researchers at Germany's University of Tübingen developed a computer simulation that taught a virtual robot arm outfitted with a table tennis racket how to return virtual ping pong balls. A machine learning algorithm observed the simulation for 90 minutes, learning how the ping pong ball's path was affected by velocity and the racket's orientation. The researchers then used the algorithm to control a real robot arm at a real table tennis table, and found the robot arm could return balls within an average of 24.9 centimeters of the target location. However, it struggled with fast, slow, and backspin shots. Said University of Tübingen's Andreas Zell, "It's not worse than a regular human player. It's already on par with me."

Full Article

 

 

Research Brings Analog Computers One Step from Digital
Washington University in St. Louis
Brandie Jefferson
December 8, 2021


A new processing-in-memory (PIM) circuit developed by researchers at Washington University in St. Louis (WUSTL) adds neural network flexibility to PIM computing. With PIM computing, the memory and processing unit are merged into a single unit, and computations are performed using the machine's physical properties. The researchers are working on resistive random-access memory PIM, with resistors in both the memory and the processor. Said WUSTL's Xuan "Silvia" Zhang, "In resistive memory, you do not have to translate to digital, or binary. You can remain in the analog domain." To overcome the bottleneck that occurs when converting analog information into a digital format, the researchers added neural approximators that perform multiple calculations in the most efficient manner.

Full Article

 

 

A System for Designing, Training Intelligent Soft Robots
MIT News
Rachel Gordon
December 7, 2021


Scientists at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory designed the Evolution Gym, a large-scale testing platform for co-optimizing the design and control of soft robots. The researchers integrated standard design optimization and deep reinforcement learning algorithms into the platform to assess robotic aptitude; the former evolves robotic bodies while the latter optimizes a controller for each proposed design. Said MIT’s Jagdeep Bhatia, “By creating a large-scale benchmark that focuses on speed and simplicity, we not only create a common language for exchanging ideas and results within the reinforcement learning and co-design space, but also enable researchers without state-of-the-art computing resources to contribute to algorithmic development in these areas.”

Full Article

 

 

Humanlike Robot Sophia Becomes 'AI Being' for Metaverse in NFT Auction
Bloomberg
Blake Schmidt
December 6, 2021


A virtual anime version of Hanson Robotics' humanoid robot Sophia will be released in 100 non-fungible tokens (NFTs) that can interact with people in the metaverse. Beginning tomorrow, the NFTs will be auctioned on the Binance NFT Marketplace. The startup beingAI created Sophia beingAI in collaboration with Alethea AI, which is constructing a decentralized protocol to develop an intelligent metaverse populated with interactive NFTs. Said beingAI's Jeanne Lim, "I believe the virtual format can allow us to scale to millions of concurrent interactions across devices and media platforms, which would create large-scale impact." Lim said she hopes the new artificial intelligence will bring humanity and technology together "to help humans attain our true nature of unconditional love and pure possibilities."

Full Article

*May Require Paid Registration

 

 

Group Backed by Top Companies Moves to Combat AI Bias in Hiring
The New York Times
Steve Lohr
December 8, 2021


Major employers in a variety of industries have signed up with the Data & Trust Alliance in an effort to prevent bias in artificial intelligence software used for hiring. With the help of corporate and outside experts and software vendors, the alliance has created a 55-question evaluation and a scoring system to identify and prevent algorithmic bias. The bias detection, measurement, and mitigation system will be used to analyze the data practices and design of human resources software. Said Walmart's Nuala O'Connor of the scoring system, "Every algorithm has human values embedded in it, and this gives us another lens to look at that. This is practical and operational."

Full Article

*May Require Paid Registration

 

Group Backed By Top Companies Moves To Combat A.I. Bias In Hiring

The New York Times (12/8, Lohr) reports a group of major employers is backing the newly announced Data & Trust Alliance, an effort aiming to prevent AI hiring software from delivering biased results. The group of corporations “is not a lobbying organization or a think tank,” and instead “has developed an evaluation and scoring system for artificial intelligence software” that aims to detect and prevent bias in algorithms.

 

Nabla Bio Secures $11M Funding For AI-Driven Antibody Design

FierceBiotech (12/7, Park) reports Nabla Bio, “which is developing an artificial intelligence-powered drug design platform to engineer next-generation antibody therapeutics,” secured “an $11 million seed funding round” this week. With the funds, the startup “will be able to ‘design better antibodies using our AI-first drug design platform, expand our team and advance our current partnerships,’ said Surge Biswas, Ph.D., the startup’s co-founder and CEO.”

 

Chatbots To Help Navigate Student Mental Health Crises Discussed

Diverse Issues in Higher Education (11/30, Kelliher) reports that “with computer science graduate students,” Dr. Jinho Choi, associate professor of computer science at Emory University, “built an AI-enabled bot called Emora, which can hold nuanced, human-like conversations with people.” Emora’s team “last year won first place in Amazon’s Alexa Prize Socialbot Grand Challenge, a national competition among university students to build chatbots.” The bot is “not yet sophisticated enough to be used as mental health support for students, though Choi hopes it will get there.” A handful of psychological AI-enabled chatbots “are already on the market, though they are designed to provide additional support, not replace therapists.” As higher education “tackles a worsening student mental health crisis on slim institutional budgets, studies are looking at the potential benefits of this new, affordable technology among college students.”

Daniel Tauritz

unread,
Dec 23, 2021, 5:50:27 PM12/23/21
to ai-b...@googlegroups.com

Face Detection in Untrained Deep Neural Networks
KAIST (South Korea)
December 21, 2021


Scientists at South Korea's Korea Advanced Institute of Science and Technology (KAIST) have confirmed that higher visual cognitive functions can emerge spontaneously in untrained deep neural networks. The researchers employed a model neural network that records properties of the visual cortex's ventral stream, and determined the involuntary face-selectivity was comparable to that seen in face-selective neurons in the brain. KAIST's Se-Bum Paik said the findings offer "a broad conceptual advance, as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence."

Full Article

 

 

Nonsense Can Make Sense to ML Models
MIT News
Rachel Gordon
December 15, 2021


Massachusetts Institute of Technology (MIT) scientists found machine learning models can make confident predictions based on details that humans find nonsensical. Examples include image-processing neural networks in autonomous vehicles that classify traffic lights and street signs based on backgrounds, edges, or sky patterns. Models trained on datasets like CIFAR-10 and ImageNet exhibited overinterpretation, with CIFAR-10-trained models making confident predictions even when 95% of input images were missing, and the rest were meaningless to humans. The signals' validity in datasets means typical evaluation methods cannot diagnose overinterpretation. MIT's Brandon Carter suggests that while it may seem the model is the likely culprit for this misidentification, the datasets themselves are more likely to blame.

Full Article

 

 

South Korea to Test AI-Powered Facial Recognition to Track COVID-19 Cases
Reuters
Sangmi Cha
December 13, 2021


South Korea in January plans to launch a pilot project in the city of Bucheon that will use an artificial intelligence algorithm, facial recognition technology, and nearly 11,000 CCTV cameras to track the movements of persons infected with COVID-19, anyone they had close contact with, and whether they were wearing a mask. The system can track up to 10 people simultaneously in five to ten minutes, according to a plan submitted by the city. There are concerns, however, that the government will retain and harness the resulting data far beyond the needs of the pandemic. "The government's plan to become a Big Brother on the pretext of COVID is a neo-totalitarian idea," said opposition lawmaker Park Dae-chul.

Full Article

 

 

Different Strokes: Using AI to Tell Art Apart
Case Western Reserve University
December 13, 2021


Researchers at Case Western Reserve University have developed an artificial intelligence algorithm that can distinguish between the brush strokes of different artists. The tool potentially could improve the identification of forgeries of famous works, among other things. The new technique reads and learns from a painting's three-dimensional (3D) topography. The researchers generated 3D surface height data for each painting, divided them into virtual patches, and trained a convolutional neural network to determine which painter made each stroke. The neural network was able to identify the correct artist 95% of the time from brush strokes about the diameter of a single bristle.
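A toy version of the patch-classification step might look like the PyTorch sketch below, with random tensors standing in for real surface-height patches; it illustrates the general approach, not the Case Western pipeline, and all sizes and labels are invented.

```python
# Minimal sketch: classify which of several artists produced a small patch of
# surface-height data with a tiny CNN. Patch data here is random noise.
import torch
import torch.nn as nn

n_artists, patch = 4, 32
x = torch.randn(256, 1, patch, patch)              # fake height-map patches
y = torch.randint(0, n_artists, (256,))            # fake artist labels

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (patch // 4) ** 2, n_artists),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                             # toy full-batch training loop
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```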

Full Article

 

 

ML Speeds Up Vehicle Routing
MIT News
Becky Ham
December 10, 2021


Researchers at the Massachusetts Institute of Technology (MIT) have developed a machine learning approach that could speed up vehicle routing algorithms, which tend to slow down when applied to large urban datasets. The new "learning-to-delegate" approach could increase the speed of the strongest algorithmic solvers by determining the most useful subproblems to solve, rather than having the algorithm solve all subproblems. The researchers created a neural network that automatically identifies those subproblems that, when solved, result in the greatest gain in solution quality. The researchers said the approach could be used with a variety of solvers and resource allocation problems. Said MIT's Cathy Wu, "We may unlock new applications that now will be possible because the cost of solving the problem is 10 to 100 times less."
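In spirit, the delegation step resembles the following sketch, where a learned scorer predicts which subproblem is most worth handing to the expensive solver. The hand-made features and random-forest regressor are stand-ins for the paper's neural network; the routing solver itself is not shown.

```python
# Hedged sketch of learning-to-delegate: score candidate subproblems and send
# only the most promising one to the costly solver, instead of solving them all.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Each row: simple made-up features of a routing subproblem (size, spread, ...)
features = rng.random((500, 4))
# Label: improvement observed when that subproblem was actually re-solved
improvement = features @ np.array([2.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.1, 500)

scorer = RandomForestRegressor(n_estimators=50, random_state=0).fit(features, improvement)

new_subproblems = rng.random((20, 4))              # candidates in a fresh instance
best = int(np.argmax(scorer.predict(new_subproblems)))
print("delegate subproblem", best, "to the full solver")
```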

Full Article

 

Researchers Build An AI That Seems Able To Predict Whether A Healthy Person Will Get Breast Cancer

The Washington Post (12/21, A1, Zeitchik) reports that researchers “have built an AI that seems able to predict with unprecedented accuracy whether a healthy person will get breast cancer, in an innovation that could seriously disrupt how we think about the disease.” As the investigators “laid out in an article in the Journal of Clinical Oncology last month and explore further in an upcoming piece set to be published in Nature Medicine, by analyzing a mammogram’s set of byzantine pixels and then cross-referencing them with thousands of older mammograms, the AI – known as Mirai – can predict nearly half of all incidences of breast cancer up to five years before they happen.” The model is about to start clinical trials.

dtau...@gmail.com

unread,
Jan 1, 2022, 3:40:31 PM1/1/22
to ai-b...@googlegroups.com

Happy New Year!

  • Dr. T

 

Is AI About to Transform the Mammogram?
The Washington Post
Steven Zeitchik
December 21, 2021


The Massachusetts Institute of Technology's Regina Barzilay and colleagues have built an artificial intelligence (AI) called Mirai that can predict breast cancer. Mirai analyzes and cross-references a mammogram's pixels with thousands of older mammograms to forecast nearly half of all breast cancer incidences up to five years ahead. The researchers trained the AI on 200,000 mammograms, and it would generate predictions and be "penalized" or "rewarded," depending on how well they matched reality. The researchers then applied Mirai to a massive mammogram dataset, and its forecasts were correct in an average of about 76 out of 100 cases, indicating 22% greater accuracy and 20% to 25% greater sensitivity than the Tyrer-Cuzick statistical model.
 

Full Article

*May Require Paid Registration

 

 

Identifying Fake Voice Recordings
Ruhr-Universität Bochum (Germany)
Julia Weiler
December 20, 2021


Joel Frank and Lea Schönherr at Germany's Ruhr-Universität Bochum (RUB) are developing tools to identify artificial intelligence (AI)-generated fake voice recordings. The researchers first compiled a dataset of about 118,000 AI-generated audio deepfakes, comprising roughly 196 hours of English and Japanese content. They then compared the deepfakes with recordings of real speech, and plotted the files as spectrograms showing frequency distribution over time, yielding subtle distinctions in the high frequencies between real and fake files. Frank and Schönherr then programmed algorithms that can distinguish between deepfakes and real speech as a starting point for scientists to devise novel detection methods.
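A deliberately crude version of the frequency cue is sketched below, with synthetic noise in place of real recordings; the RUB dataset, spectrogram analysis pipeline, and their actual detectors are not reproduced here, and the cutoff and threshold are arbitrary assumptions.

```python
# Hedged illustration: compare high-frequency spectral energy between a
# "real" clip and a band-limited "fake" clip and classify with a threshold.
import numpy as np
from scipy.signal import spectrogram

fs = 16000  # sample rate in Hz

def high_freq_energy(audio, cutoff_hz=4000):
    f, t, sxx = spectrogram(audio, fs=fs)
    return sxx[f >= cutoff_hz].mean()              # mean power above the cutoff

real = np.random.randn(fs * 2)                     # stand-in for real speech
fake = np.convolve(np.random.randn(fs * 2),        # smoothed stand-in for a
                   np.ones(8) / 8, mode="same")    # band-limited deepfake

threshold = 0.5 * (high_freq_energy(real) + high_freq_energy(fake))
for name, clip in [("real clip", real), ("fake clip", fake)]:
    label = "fake" if high_freq_energy(clip) < threshold else "real"
    print(name, "->", label)
```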
 

Full Article

 

 

Platform Uses Machine Learning, Mass Spectrometer to Rapidly Process COVID-19 Tests
University of California, Davis
December 17, 2021

Researchers at the University of California, Davis (UC Davis) and the Las Vegas-based startup SpectraPass have partnered on a study of a new rapid COVID-19 testing platform. The scalable testing system identifies an infection by analyzing the different protein profiles produced by a body in response to various types of infections. UC Davis' Nam Tran said the goal of the study, which will involve about 2,000 participants in Sacramento and Las Vegas, "is to have enough COVID-19 positive and negative individuals to train our machine learning algorithm to identify patients infected by SARS-CoV-2." Previously, UC Davis researchers determined the method to be 98.3% accurate for positive COVID-19 tests and 96% accurate for negative tests. The platform's next-generation sequencing panel can identify more than 280 respiratory pathogens, including SARS-CoV-2 and its variants, as well as the viruses behind the flu and the common cold.
 

Full Article

 

 

AI Magnifies Utility of Electron Microscopes
Argonne National Laboratory
Joan Koka
December 16, 2021


Charudatta Phatak, Tao Zhou, and Mathew Cherukara at the U.S. Department of Energy's Argonne National Laboratory have developed an artificial intelligence technique to enhance an electron microscope's resolution and sensitivity. The researchers proposed using deep neural network training algorithms to retrieve important data stored in the electron wave, or phase. The approach also enables scientists to retrieve critical data about the electron microscope, as well as recover tiny phase shifts to acquire information about small changes in magnetization and electrostatic potential. Said Zhou, "The fact that we didn't need to add any new equipment to leverage these benefits is a huge advantage from an experimentalist's point of view."
 

Full Article

 

 

Model Improves ML Accuracy in COVID-19 Diagnosis While Preserving Privacy
University of Cambridge (U.K.)
December 16, 2021

Researchers in the U.K. and China have developed a machine learning artificial intelligence (AI) model capable of diagnosing COVID-19 while preserving the privacy of patient data. The researchers used federated learning to train a model on 9,573 three-dimensional computed tomography scans from 3,336 patients in 23 hospitals in China and the U.K. They tested the framework using two well-curated external validation datasets of appropriate size. Federated learning supported a more generalized AI model to mitigate bias resulting from different datasets, while upholding the privacy of each datacenter in a collaborative environment. Michael Roberts at the U.K.'s University of Cambridge said, "We've shown that encrypting medical data is possible, so we can build and use these tools while preserving patient privacy across internal and external borders."
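A bare-bones federated-averaging loop conveys the privacy-preserving structure: each site trains locally and only model weights travel to the server. This is a generic FedAvg sketch on synthetic data, not the study's framework, and it omits the encryption layer mentioned above.

```python
# Minimal federated-averaging sketch: three synthetic "hospitals" train a
# logistic-regression model locally; the server only averages the weights.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    for _ in range(steps):                         # plain gradient descent
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
hospitals = []
for _ in range(3):                                 # three synthetic data centers
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    hospitals.append((X, y))

w_global = np.zeros(10)
for rnd in range(10):                              # federated rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in hospitals]
    w_global = np.mean(local_ws, axis=0)           # server averages the weights

acc = np.mean([(((X @ w_global) > 0) == y).mean() for X, y in hospitals])
print("average local accuracy:", round(float(acc), 3))
```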
 

Full Article

 

 

AI Writes Better Stories When It Works Backwards From an Ending
New Scientist
Matthew Sparkes
December 15, 2021


Researchers at the Georgia Institute of Technology (Georgia Tech) found that when using artificial intelligence (AI) to craft a story, starting at the end can result in more coherent plots. The researchers trained a neural network using thousands of film and television plot summaries, then supplied it with a human-composed ending. The software looks at the last sentence to assess the characters' actions, generates 15 potential preceding sentences to explain their behavior, and ultimately selects one as the most statistically likely or appropriate. This process is repeated as the story is written in reverse. Georgia Tech's Mark Riedl said tests with human readers found these backward-generated stories were 15% more coherent than those written by an AI from the beginning. However, Riedl said, "We’re still not able to do some of the more complicated things you see in the movies and books where you foreshadow really far into the future."
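One way to get a feel for the backward-generation idea is to rank candidate preceding sentences by how well an off-the-shelf language model thinks the ending follows them. The sketch below uses GPT-2 via the Hugging Face transformers library purely as a stand-in scorer, with hand-written candidates replacing the paper's own backward generator; none of this reproduces the Georgia Tech system.

```python
# Hedged sketch: score each candidate preceding sentence joined with the ending
# and keep the most likely one, then repeat to grow the story in reverse.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_likelihood(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    # average negative log-likelihood scaled by length (higher = more fluent)
    return -out.loss.item() * ids.shape[1]

ending = "And so the detective finally closed the case."
candidates = [                                     # stand-ins for generated options
    "She found the missing letter hidden under the floorboards.",
    "The weather was pleasant that afternoon.",
    "He ordered another cup of coffee.",
]
best = max(candidates, key=lambda c: log_likelihood(c + " " + ending))
print("chosen preceding sentence:", best)
```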
 

dtau...@gmail.com

unread,
Jan 8, 2022, 8:23:21 AM1/8/22
to ai-b...@googlegroups.com

The Battle to Prevent Another January 6 Features New Weapon: The Algorithm
The Washington Post
Steven Zeitchik
January 6, 2022


Data scientists specializing in unrest prediction are applying machine learning (ML) to the causes of political violence to predict the likelihood of another January 6-like insurrection in the U.S. The University of Central Florida's CoupCast algorithm, for example, forecasts the probability of coups and electoral violence in dozens of countries every month. Following January 6, 2021, researchers reprogrammed CoupCast's model to account for factors it had previously downplayed, like the role of a leader stoking mob violence, while underplaying other factors, like long-term democratic history. Meanwhile, the nonprofit Armed Conflict Location & Event Data Project combines ML and software-equipped humans to track and predict crises worldwide.

Full Article

*May Require Paid Registration

 

 

Solving the 'Big Problems' via Algorithms Enhanced by 2D Materials
Penn State News
Jamie Oberdick
January 5, 2022


Pennsylvania State University (Penn State) researchers have developed a method of solving combinatorial optimization problems using two-dimensional (2D) materials. The researchers utilized a simulated annealing algorithm to determine the ground state of an Ising spin glass system. Penn State's Amritanand Sebastian said the process involves conducting in-hardware computational operations, with the hardware deployed via 2D material-based transistors that also store data. "We make use of this in-memory computation capability in order to perform simulated annealing in an efficient manner," he explained. According to Sebastian, the method saves energy through ultra-low-power operation, allows efficient computation of the spin system's energy, and does not require the hardware to scale with the size of the problem.
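For reference, the algorithm being accelerated is ordinary simulated annealing over an Ising energy. A plain software version on random couplings (not the Penn State in-hardware implementation) looks like this:

```python
# Software-only simulated annealing on a small Ising spin glass with random couplings.
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                          # symmetric couplings
np.fill_diagonal(J, 0)
spins = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

T = 5.0
for step in range(20000):
    i = rng.integers(n)
    dE = 2 * spins[i] * (J[i] @ spins)     # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        spins[i] *= -1                     # accept the flip
    T *= 0.9997                            # geometric cooling schedule
print("final energy:", energy(spins))
```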

Full Article

 

 

AI Prototype Helps Reduce Surgical Complications
CBC (Canada)
Ryan Patrick Jones
December 29, 2021


Researchers at Canada's University Health Network (UHN) and Cleveland's Case Western Reserve University have developed a prototype artificial intelligence system to guide surgeons in real time during gallbladder surgery. The technology projects onto a video monitor colored areas indicating safe and unsafe areas to make incisions; a heatmap-style projection also changes color based on the model's confidence in a safe area's location. The researchers taught their algorithm with videos of surgeries combined with integrated annotations from expert surgeons. Frame-by-frame analysis of the videos enabled the algorithm to recognize patterns and gain the ability to make independent decisions. UHN's Amin Madani said, "It's like I have a panel of experts standing, watching me over my shoulder, guiding me, navigating me, and helping me not get into trouble during that operation."

Full Article

 

 

Robots Collect Underwater Litter
Technical University of Munich (Germany)
December 29, 2021


Researchers at Germany's Technical University of Munich (TUM) are developing a multi-robot system that uses machine learning to find and collect garbage underwater, as part of the European SeaClear Project. An autonomous boat scans the seafloor to identify pockets of litter, then an observation robot detects undersea litter and sends close-up images and other data to computers. An aerial drone is used to identify more litter, which is used with previously amassed data to form a map that then is used by a fourth robot to visit points on the map and collect litter. The team's artificial intelligence module predicts robot movement and behavior; TUM's Sandra Hirche said the system uses high-efficiency sampling to make precise predictions with limited computing power.

Full Article

 

 

ML Predicts Synthesis of Novel Materials
Northwestern Now
Lila Reynolds
December 22, 2021


A machine learning (ML) algorithm developed by scientists at Northwestern University and the Silicon Valley-based Toyota Research Institute (TRI) can predict the synthesis of novel nanomaterials. The researchers generated "Megalibraries" of nanostructures encoded on chips, and used the algorithm to forecast nanomaterial synthesis from them. "We asked the model to tell us what mixtures of up to seven elements would make something that hasn't been made before," said Northwestern's Chad Mirkin. "The machine predicted 19 possibilities, and, after testing each experimentally, we found 18 of the predictions were correct." TRI's Joseph Montoya said the data suggests "the application of machine learning, combined with Megalibrary technology, may be the path to finally defining the materials genome."

Full Article

 

AI In Dentistry Shows Promise But Hasn’t Reached Full Potential Yet, Researchers Say

DrBicuspid (1/6, Sherwood) reported that artificial intelligence (AI) “shows promise, but its use in dentistry still lags behind AI in clinical medicine, according to research published online January 3 in Dentistry Review.” Although “it’s encouraging that dentistry is moving toward embracing AI, it is only now being applied to certain dental specialties such as orthodontics and endodontics, noted the authors,” who identified and reviewed “a set of 28 studies that investigated the use of AI in the dental fields.” While “this technology has been applied to some of the dental specialties in the research and academic settings, it has not yet been fully introduced to dental research nor have they reached technological readiness and cost-efficiency to enter the dental market,” the group wrote.

 

Professor Creates Robot That Uses AI To Perform, Craft Original Jokes

TIME (1/4, Purtill) reports on a robot experiment “billed as a comedy act” called Jon the Robot, “the brainchild of Naomi Fitter, an assistant professor in the School of Mechanical, Industrial and Manufacturing Engineering at Oregon State University.” The tiny android “performs when a handler (who must also hold the mic) presses a button, then tells the same jokes in the same order.” The robot’s “act is more human than it might first appear.” Jon is “learning how to respond to its audience – it can now vary the timing of its delivery based on the length of the audience’s laughter, and append different responses to jokes based on the level of noise in the room.” It can “deliver one line if a joke gets a roar of laughter and another if there are crickets.” The prospect “of an AI that understands why we are laughing, and that can generate its own genuinely funny material, is sort of a holy grail for a subset of AI researchers.” Artificial intelligence “can diagnose tumors, read maps and play games, often faster and with more accuracy than humans can.”
