Dr. T's AI brief


dtau...@gmail.com

Sep 24, 2022, 7:58:13 AM
to ai-b...@googlegroups.com

DeepMind AI Learns Simple Physics Like a Baby
Nature
Davide Castelvecchi
July 11, 2022


Computer scientists at the DeepMind artificial intelligence (AI) research laboratory trained a software model to learn simple physical rules about object behavior. The researchers trained the Physics Learning through Auto-encoding and Tracking Objects (PLATO) neural network model using animated videos and images of objects like cubes and balls, in order for it to generate an internal representation of the physical properties of each object. The model learned patterns such as continuity, solidity, and persistence of shape. DeepMind's Luis Piloto said the software makes predictions at every step in the video, and its accuracy increases as the video progresses. Piloto suggested PLATO could be a first step toward AI that can test theories about how human infants learn.

Full Article

 

 

Researchers Tackle Longstanding 'Data Heterogeneity' Problem for Federated Learning
NC State University News
Matt Shipman
July 11, 2022


A federated learning approach developed by North Carolina State University (NC State) researchers enables artificial intelligence models to be developed faster and more accurately. Federated learning involves multiple devices (clients) each using their own data to train local models and send updates to a centralized server to create a hybrid model that ideally performs better than the clients on their own. However, heterogeneity in the clients' data can result in local model modifications that hurt the hybrid model's performance. In the new approach, the central server is given more information about relevant data characteristics in the form of Jacobian matrices, which are input into an algorithm that generates an improved model. Said NC State's Ryan Pilgrim, "The algorithm not only sidesteps multiple communication rounds, but also keeps divergent local updates from degrading the model."
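
The item above doesn't spell out the Jacobian-based aggregation, so as context, here is a minimal Python sketch of the standard federated-averaging loop such methods improve on, with deliberately heterogeneous clients; every name and constant here is illustrative, not from the NC State paper.

```python
import numpy as np

# Toy federated averaging (FedAvg) on a 1-D linear model: each client fits
# y = w * x on its own data and a server averages the local results. The
# Jacobian-based aggregation from the article is NOT reproduced here.

rng = np.random.default_rng(0)

def make_client(true_w, n=50):
    x = rng.normal(size=n)
    return x, true_w * x + 0.1 * rng.normal(size=n)

# Heterogeneous clients: each one's data implies a different local optimum.
clients = [make_client(tw) for tw in (1.0, 1.5, 3.0)]

w = 0.0                            # global model parameter
for _ in range(20):                # communication rounds
    local_ws = []
    for x, y in clients:
        w_local = w
        for _ in range(5):         # a few local gradient steps per round
            grad = 2 * np.mean((w_local * x - y) * x)
            w_local -= 0.1 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws)          # server averages divergent local models

print(f"global w: {w:.3f}")        # settles near the mean of client optima
```

With data this heterogeneous, plain averaging settles on a compromise that fits no client especially well, which is the degradation the Jacobian-based server update is designed to avoid.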

Full Article

 

 

Solving Algorithm 'Amnesia' Reveals Clues to How We Learn
UCI News
July 6, 2022


University of California, Irvine (UCI) researchers have gained insights into how algorithms can absorb information more efficiently, which could yield clues to the brain's learning process. The researchers looked at artificial neural networks (ANNs) that mimic the behavior of neurons, but which tend to forget previously acquired knowledge when they ingest fresh knowledge too fast. The traditional solution involves retraining the network on the complete set of past data, whether or not it was closely related to the new data. Said UCI's Rajat Saxena, "When ANNs interleaved a much smaller subset of old information, including mainly items that were similar to the new knowledge they were acquiring, they learned it without forgetting what they already knew." Saxena said this implies an underlying brain mechanism for why experts at something can gain new knowledge of that subject faster than non-experts.
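
As a rough illustration of that interleaving idea, the sketch below mixes a batch of new items with only the old items most similar to them; the cosine-similarity selection rule and all array shapes are assumptions for this sketch, not the UCI team's method.

```python
import numpy as np

# Interleaved replay: rather than retraining on ALL past data, select the
# old examples most similar to the incoming batch and train on the mixture.

rng = np.random.default_rng(1)
old_items = rng.normal(size=(1000, 32))   # previously learned examples
new_items = rng.normal(size=(20, 32))     # incoming batch of new knowledge

def most_similar(old, new, k=5):
    picks = set()
    for x in new:
        sims = old @ x / (np.linalg.norm(old, axis=1) * np.linalg.norm(x))
        picks.update(np.argsort(sims)[-k:])   # k nearest old items per new item
    return old[sorted(picks)]

replay = most_similar(old_items, new_items)
train_batch = np.vstack([new_items, replay])  # interleave new + similar old
print(f"replaying {len(replay)} of {len(old_items)} old items")
```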

Full Article

 

 

Researchers Defeat Facial Recognition Systems with Universal Face Mask
Help Net Security
Zeljka Zorz
July 12, 2022


Researchers at Israel's Ben-Gurion University of the Negev (BGU) and Tel Aviv University found that facial recognition (FR) systems may be thwarted by fabric face masks boasting adversarial patterns. The researchers employed a gradient-based optimization process to generate a universal perturbation and mask to falsely classify each wearer as an unknown identity. BGU's Alon Zolfi said, "The perturbation depends on the FR model it was used to attack, which means different patterns will be crafted depending on the different victim models." Zolfi suggested FR models could see through masked face images by training them on images containing adversarial patterns, by teaching them to make predictions based only on the upper area of the face, or by training them to generate lower facial areas based on upper facial areas.
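
The gradient-based optimization can be sketched in a few lines of PyTorch. Everything below is a hedged approximation: `model` is a placeholder for any network mapping face images to identity embeddings, and the real attack further constrains the perturbation to the printable fabric region of the mask.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, faces, steps=200, lr=0.01, eps=0.1):
    """One shared pattern optimized so every wearer scores as unknown."""
    delta = torch.zeros_like(faces[0], requires_grad=True)  # one pattern for all
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        clean = model(faces).detach()      # embeddings of the true identities
        adv = model(faces + delta)         # same faces with the pattern applied
        # Minimizing cosine similarity pushes every embedding away from its
        # true identity, the "unknown identity" effect described above.
        loss = F.cosine_similarity(adv, clean).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)        # keep the pattern bounded/printable
    return delta.detach()
```

As Zolfi notes, the resulting pattern is model-specific: running this against a different victim network would yield a different mask.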

Full Article

 

 

Building Explainability into Components of ML Models
MIT News
Adam Zewe
June 30, 2022


Researchers at the Massachusetts Institute of Technology (MIT) and cybersecurity startup Corelight have developed a taxonomy to help developers create components of machine learning (ML) models that incorporate explainability. Key to the taxonomy is the precept that there is no universal model for interpretability: the researchers defined properties that make features interpretable for each of five varieties of users, provided instructions for engineering features into formats laypersons will find easier to understand, and outlined which properties are likely most valuable to which decision-makers. MIT's Alexandra Zytek said, "The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with."

Full Article

 

 

How ML Can Identify Gun Buyers at Risk of Suicide
The Hill
Gianna Melillo
July 14, 2022


Research by University of California, Davis (UC Davis) scientists suggests machine learning could use handgun purchasing data to identify gun purchasers at risk of suicide. The researchers analyzed data from California's Dealer's Record of Sale database, applying random forest classification to assess the data's ability to predict those at an elevated suicide risk within a year of a gun purchase. They estimated a 0.07% rate of suicide for gun buyers within that year, and identified 41 risk factors for firearm suicide, including older age, being a first-time purchaser, white race, living close to the seller, and the purchase of a revolver. Said UC Davis' Hannah S. Laqueur, "This study contributes to the growing evidence that computational methods can aid in the identification of high-risk groups and the development of targeted interventions."
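
In outline, the method class is standard random forest classification on tabular purchase records. The sketch below is illustrative only: the features are invented stand-ins for the actual database variables, and the labels are synthetic placeholders at roughly the study's 0.07% positive rate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.integers(18, 90, n),       # purchaser age
    rng.integers(0, 2, n),         # first-time purchaser flag
    rng.integers(0, 2, n),         # revolver purchase flag
    rng.uniform(0, 100, n),        # distance to seller (km)
])
y = np.zeros(n, dtype=bool)
y[rng.choice(n, size=7, replace=False)] = True   # ~0.07% positives, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(clf.predict_proba(X_te)[:5, 1])   # per-purchase risk scores, not labels
```

With so rare an outcome, raw accuracy is uninformative (always predicting "no" is 99.93% accurate), so studies of this kind rank purchases by risk score instead.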

Full Article

 

 

Researchers Create 'COVID Computer' to Speed Diagnosis
University of Leicester (U.K.)
July 1, 2022

An artificial intelligence tool developed by researchers at the U.K.'s University of Leicester detects COVID-19 with a high degree of accuracy by analyzing computed tomography (CT) chest scans. The software, which uses deep learning algorithms, can be deployed on portable devices. University of Leicester's Yudong Zhang said the tool "can find suspicious regions in the chest images automatically and make accurate predictions based on the representations." Zhang added that the accuracy of the tool means it "can be used in the clinical diagnosis of COVID-19, which may help to control the spread of the virus."
 

Full Article

 

 

Deep Learning Accelerates Detection of Live Bacteria Using TFT Arrays
UCLA California NanoSystems Institute
July 5, 2022


A team of scientists at the University of California, Los Angeles and Texas A&M University has developed an artificial intelligence (AI)-powered bacterial colony detection system using a thin-film transistor (TFT) array. The 27-millimeter (mm) x 26-mm array allows the system to capture the growth patterns of bacterial colonies without requiring scanning. The system analyzes the images it captures as a function of time, and automatically detects colony growth using a deep neural network. After a colony is detected, a second neural network classifies its bacteria species. The researchers were able to detect colonies of E. coli, Citrobacter, and Klebsiella pneumoniae with over 90% accuracy within nine hours, and classified the species of each colony within 12 hours.

Full Article

 

 

AI-Enhanced System to Track Players for Offside Calls at 2022 World Cup
PC Magazine
Emily Dreibelbis
July 5, 2022


The Federation Internationale de Football Association (FIFA) will use a semi-automated artificial intelligence system to help referees make offside calls during the 2022 World Cup. The system features 12 cameras mounted under the stadium roof, which will track 29 body position data points for each player 50 times per second, and a sensor in the ball to track its location. The system issues an alert if a player is determined to be offside, with a team of referees in a video room tasked with reviewing the alert and making the final determination of the call. Said FIFA's Pierluigi Collina, "Even the best referee can make mistakes and that's why we implemented a system that can really reduce the possibility that a human mistake can affect the outcome of the match."

Full Article

 

 

When Gamers Get Nasty
IEEE Spectrum
Michelle Hampson
July 4, 2022


Researchers at the Netherlands' Utrecht University have developed an artificial intelligence (AI) model that can identify toxic behavior among gamers in real time. The model is designed to screen out problematic language, or to help moderators decide on sanctions for players using such language. Participants viewed videos of gamers playing Overwatch and rated their level of toxicity in the game's voice chat, which highlighted what Utrecht's Julian Frommel called "the challenge of subjectivity" in automatically detecting toxicity. The researchers used this data to build and validate the AI model, which can forecast a match's toxicity with 86.3% accuracy. Frommel said the model incurs few computational costs, and "can be automated and applied as a noninvasive approach in many games that use in-game voice chats."

Full Article

 

 

Human-Robot-AI Teamwork Accelerates Regenerative Medicine
RIKEN (Japan)
June 28, 2022

Researchers at Japan's RIKEN Center for Biosystems Dynamics Research have created a robotic artificial intelligence (AI) system for regenerative medicine that can autonomously optimize conditions for growing replacement retina layers. The system uses a humanoid robot called Maholo to repeatedly carry out a series of movements and manipulations, and AI software to assess the results and draw up the next experiment. The software applies an optimization algorithm to ascertain which parameters should be adjusted and how, to improve differentiation efficiency in the next experimental series. The researchers fed the protocols for cultivating retinal pigment epithelium cells from stem cells to Maholo, and the algorithm increased differentiation efficiency from 50% to 90% in 185 days (compared to the 30 months human scientists would have needed to accomplish this).
 

Full Article

 

 

AI-Powered BirdNET App Makes Citizen Science Easier
Cornell Chronicle
Pat Leonard
June 28, 2022


The free machine learning-powered BirdNET application can identify more than 3,000 types of birds by the sounds they make. Cornell University's Stefan Kahl said, "Our guiding design principles were that we needed an accurate algorithm and a simple user interface." Users listen for birds and record their sounds on a smartphone via the app; BirdNET then employs artificial intelligence to automatically identify the bird's species, and captures the recording for research. The researchers ran test cases in the U.S. and Europe to gauge BirdNET's reliability, and found that app data accurately reproduced the established distribution pattern of song types among white-throated sparrows, as well as the brown thrasher's seasonal and migratory ranges.
 

Full Article

 

Meta Outlines Efforts To Use AI To Improve Sustainability

SiliconANGLE (4/18, Dotson) reports Meta Platforms, “which has been working to use machine learning technologies to combat climate change and increase the efficiency of industrial systems,” published a document outlining the “approaches it’s taking using artificial intelligence to tackle these issues and develop elegant engineering solutions.” One initiative, the Open Catalyst Project, is a collaboration with Carnegie Mellon University’s Department of Chemical Engineering. The project “provides datasets for the discovery of chemical catalysts for building cheaper and scalable batteries for renewable energy networks.”

 

Intel Debuts “Brain-Like” Chip

The Register (UK) (4/15) discusses Intel’s brain-like Loihi chip technology which “could aid with low-power AI tasks like speech recognition.” However, “What hasn’t been clear yet is how the chipmaker will make money from the experimental silicon.” The Register adds, “In a recent roundtable with journalists, Intel Labs lead Rich Uhlig offered two possibilities: integrating Loihi in a CPU for PCs to perform energy-efficient AI tasks and potentially offering its neuromorphic chips as a cloud service, although Uhlig was clear he wasn’t firming up actual product plans, just projecting what could theoretically happen in the future.”

dtau...@gmail.com

Sep 25, 2022, 7:20:28 PM
to ai-b...@googlegroups.com

Deep Learning Method Worth Its Salt
UC News
Michael Miller
July 22, 2022


A multi-institutional team of researchers led by the University of Cincinnati's Yu Shi has developed a novel technique for modeling the thermodynamic properties of molten salts via deep learning artificial intelligence. Shi said the researchers trained a neural network on data produced by quantum simulations, which they used to estimate the free energy of molten sodium chloride. The research, according to Shi, offers a reliable way of studying the conversion of dissolved gas to vapor in molten salts, helping to understand how impurities and solutes affect corrosion. He added that the method also could help scientists analyze the emission of potentially toxic gas into the atmosphere, which will be useful for fourth-generation molten salt nuclear reactors.
 

Full Article

 

 

Teaching Machines to Really Hear Us
UVA Today
Eric Williamson
July 20, 2022


SITHCon, a deep-learning tool that enables existing artificial intelligence neural networks to hear people clearly, is the result of a collaboration between researchers at the University of Virginia (UVA), Boston University, and Indiana University. The algorithm applies a type of compression that can be unpacked on an as-needed basis, saving storage space by only storing the "memory" of a sound at a resolution that will be useful later. "We showed that SITHCon could generalize to speech scaled up or down in speed, whereas other models failed to decode information at speeds they didn't see at training," said UVA's Brandon Jacques. The researchers have opted to open source the SITHCon code, and they say the technique should carry over to any neural network that translates voice.
 

Full Article

 

 

Engineers Develop Tool to Enable More Personalized Cell Therapies
University of Minnesota College of Science & Engineering
July 20, 2022

In a step toward more personalized cell therapies, engineers at the University of Minnesota, Twin Cities have developed a tool to forecast and tailor site-specific recombination DNA editing. The tool integrates high-throughput experiments with a machine learning (ML) model to boost efficiency and predictability, enabling researchers to control the DNA editing rate and the speed at which a therapeutic cell generates a drug or therapeutic protein. The researchers first devised an experimental method to calculate the rate of site-specific recombination, then fed that data to an ML algorithm. Researchers can type in a DNA sequence, and the model predicts its recombination rate. The model also can predict and program simultaneous production of multiple cellular proteins, which could be used to induce stem cells to produce new tissues or organs, or to generate multiple drugs in pre-programmed concentrations.
 

Full Article

 

 

AI Choir, Dancers Accompany Human Orchestra in Hong Kong Concert
South China Morning Post (Hong Kong)
Mabel Lui
July 21, 2022


Artificial intelligence (AI)-powered virtual ballet dancers and a 320-voice choir accompanied human musicians in a Hong Kong Baptist University (HKBU) symphony orchestra concert. The digitally generated voices sang under music director Johnny Poon Ming-lun's guidance, accompanied by an AI-produced video backdrop based on the lyrics. HKBU researchers trained algorithms on voice samples of the late pop singer Leslie Cheung Kwok-wing and eight other singers, then used human voices as reference to assemble the choir. Poon wore a motion capture suit so the AI "singers" could follow his movements and perform appropriately, and a similar approach was used to generate the virtual dancers. HKBU's Guo Yike said the goal of such research is not to create artistic technology that mimics humans, but to develop technology that can tap its own creativity and "understand humanity."
 

Full Article

 

 

Open Source Platform Enables Research on Privacy-Preserving ML
University of Michigan News
Zachary Champion
July 19, 2022


University of Michigan (U-M) researchers have open-sourced the largest benchmarking dataset for a privacy-shielding machine learning (ML) method to date. Federated learning trains ML models on end-user devices, rather than transferring private data to central servers. "By training in-situ on data where it is generated, we can train on larger real-world data," said U-M's Fan Lai. "This also allows us to mitigate privacy risks and high communication and storage costs associated with collecting the raw data from end-user devices into the cloud." The FedScale platform can model the behavior of millions of user devices using a few graphics processing units and central processing units, allowing ML model developers to evaluate model performance without large-scale deployments.

Full Article

 

 

'Earable' Uses Sonar to Reconstruct Facial Expressions
Cornell University Chronicle
Patricia Waldron
July 19, 2022


Cornell University researchers have developed EarIO, a wearable earphone device (earable) that can reconstruct the wearer's face using sonar. EarIO sends facial movements to a smartphone. A speaker on either side of the earphone transmits acoustic signals to the sides of the face, and a microphone detects the echoes, which change due to facial movements as wearers talk, smile, or raise their eyebrows. A deep learning algorithm processes and translates that data back into facial expressions via artificial intelligence. The earable can communicate with a smartphone via a wireless Bluetooth connection, maintaining the user’s privacy.

Full Article

 

 

Nanomagnets Can Choose a Wine, Could Quench AI's Thirst for Energy
NIST News
July 18, 2022

Scientists at the U.S. National Institute of Standards and Technology, the University of Maryland, and Western Digital Technologies have developed artificial intelligence (AI) devices that could consume less energy and operate faster than other hardware. The researchers programmed a neural network from magnetic tunnel junctions (MTJs) and trained it to taste wines virtually. They used 148 of 178 wines produced from three grape varieties, with each wine possessing 13 properties; each property was assigned a value between 0 and 1 for the network to appraise when distinguishing between wines. The network then "tasted" a dataset that included 30 previously unknown wines, yielding a 95.3% success rate and committing just two errors on the untasted wines. The results indicate an array of MTJ devices could potentially be scaled up and assembled into new AI systems.
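
The setup described, 178 wines with 13 properties each, 148 for training and 30 held out, matches the classic UCI wine dataset, so a conventional software analogue of the experiment can be sketched with standard tools (the actual network ran on magnetic tunnel junction hardware, not sklearn):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

X, y = load_wine(return_X_y=True)        # 178 wines, 13 properties, 3 varieties
X = MinMaxScaler().fit_transform(X)      # map each property into [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=148,
                                          stratify=y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print(f"accuracy on {len(y_te)} unseen wines: {net.score(X_te, y_te):.1%}")
```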
 

Full Article

 

 

The Never-Ending Quest to Predict Crime Using AI
The Washington Post
Pranshu Verma
July 15, 2022


Scientists continue to build crime-predicting artificial intelligence, despite a history of the technology disproportionately directing police against communities of color. Algorithms are trained to find patterns in crime reports, arrest records, and license plate images to anticipate where and when a certain type of crime will occur. However, New York University's Vincent Southerland said the software's accuracy reflects historically biased data skewed toward minorities and low-income neighborhoods. University of Chicago (UChicago) researchers recently claimed a new algorithm can forecast crime with "90% accuracy" by identifying locations in major cities with a high likelihood of crimes happening in the next week. UChicago's Ishanu Chattopadhyay said the software was intended to assess bias in policing across neighborhoods in eight major U.S. cities, revealing very different enforcement in affluent and poorer neighborhoods.

Full Article

*May Require Paid Registration

dtau...@gmail.com

Oct 1, 2022, 8:16:34 PM
to ai-b...@googlegroups.com

DeepMind Research Cracks Structure of Almost Every Known Protein
Financial Times
Madhumita Murgia
July 28, 2022


Scientists at DeepMind used the company's AlphaFold algorithm to predict the 3D shape of almost every known protein. The AlphaFold database encompasses more than 200 million predicted protein structures, covering nearly all organisms that have had their genomes sequenced. The new tool enables structural biologists to "look up a 3D structure of a protein almost as easily as doing a keyword Google search," said DeepMind's Demis Hassabis. Researchers can access the structures through a public database hosted by the European Bioinformatics Institute (EMBL-EBI).
 

Full Article

*May Require Paid Registration

 

 

Could ML Fuel a Reproducibility Crisis in Science?
Nature
Elizabeth Gibney
July 26, 2022


Princeton University's Sayash Kapoor and Arvind Narayanan warn of a "brewing reproducibility crisis" driven by increasing reliance on machine learning (ML) to base predictions on patterns in data. The researchers say ML is overhyped as an easily learned tool, and peer reviewers lack the time to vet these models, leaving academia with no mechanisms to eliminate irreproducible papers. Kapoor and Narayanan analyzed 20 reviews in 17 research fields, citing 329 papers whose results could not be fully replicated due to flawed ML application. They highlighted data leakage as the most significant problem, and have developed guidelines for avoiding such traps. The guidelines instruct researchers to include with their manuscripts evidence that their models do not have each of eight types of leakage, as well as suggest a template for such documentation.
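
The most common form of the data leakage they flag is easy to reproduce in miniature: preprocessing fitted on the full dataset lets test-set statistics bleed into training. The sketch below contrasts the leaky and sound pipelines on synthetic data (scores here hover near chance; the point is the pattern, which inflates reported results on real data).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)

# Leaky: the scaler "sees" the test rows before the split.
X_all = StandardScaler().fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X_all, y, random_state=0)
leaky = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)

# Sound: split first, then fit preprocessing on the training rows only.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(Xtr)
sound = (LogisticRegression().fit(scaler.transform(Xtr), ytr)
         .score(scaler.transform(Xte), yte))

print(f"leaky: {leaky:.2f}  sound: {sound:.2f}")
```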
 

Full Article

 

 

Turing Award Winner Reflects on His Pioneering Algorithms
IEEE Spectrum
Kathy Pretz
July 27, 2022


Jack Dongarra, recipient of ACM's 2021 A.M. Turing Award, conceived software libraries that many deem world-changing and that played a role in the expansion of high-performance computing in fields such as artificial intelligence (AI), data analytics, genomics, and healthcare. In an interview, Dongarra reflects on his pioneering algorithms, many of which were influenced by the work of previous winners. The software libraries "have basic components that are needed in many areas of science so that users can draw on those components to help them solve their computational problems," Dongarra says. His current project is a software library for the Frontier supercomputer, which is housed at the U.S. Department of Energy's Oak Ridge National Laboratory and processes 1-quintillion-plus operations per second.
 

Full Article

 

 

Smart Microrobots Learn to Swim, Navigate with AI
New Jersey Institute of Technology
August 4, 2022


Researchers from Santa Clara University, the New Jersey Institute of Technology (NJIT), and the University of Hong Kong (HKU) trained microrobots to swim using artificial intelligence. The researchers combined artificial neural networks with reinforcement learning to teach a microbot to swim and navigate in any direction. Each movement by the microrobot receives feedback on the action's correctness, so it learns how to swim by interacting with its surroundings. HKU's Alan Tsang said, "Similar to a human learning how to swim, the microswimmer learns how to move its 'body parts'—in this case three microparticles and extensible links—to self-propel and turn." The researchers enabled a microbot to follow an intricate pathway without explicit programming, which NJIT's Yuan-nan Young called "Our first step in tackling the challenge of developing microswimmers that can adapt like biological cells in navigating complex environments autonomously."

Full Article

 

Algorithm Aces University Math Course Questions
MIT News
Adam Zewe
August 3, 2022


A multi-institutional team of researchers led by the Massachusetts Institute of Technology's Iddo Drori utilized a neural network model to solve university-level math problems in seconds. The researchers used OpenAI's Codex model, which was pretrained on text and "fine-tuned" on code, to learn how pieces of text and code relate to each other. The model can render text questions into code, given a few question-code examples, then run the code to solve the problem. The model also automatically explains its solutions, and can produce new problems in university math subjects which university students were unable to distinguish from human-generated questions. "This work opens the field for people to start solving harder and harder questions with machine learning," Drori said.

Full Article

 

 

Hardware Offers Faster Computation for AI with Much Less Energy
MIT News
Adam Zewe
July 28, 2022


Researchers at the Massachusetts Institute of Technology (MIT) have developed an analog processor technology that uses inorganic phosphosilicate glass (PSG) to allow for faster computation with greater energy efficiency. Analog machine learning is enabled by increasing and decreasing the electrical conductance of protonic programmable resistors, controlled by the movement of protons into and out of a channel in the resistor. The researchers used PSG to create a programmable protonic resistor that is 1 million times faster than the researchers' previous fastest device; it also can operate at room temperatures using much less energy. Said MIT's Murat Onen, "Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft."

Full Article

 

 

Machines Can Learn from Fables
USC Viterbi School of Engineering
Habeeba Kamel
July 25, 2022


Researchers at the University of Southern California (USC) used Aesop's fables to teach AI to analogize. Previous attempts could not enable the AI to comprehend the implications of analogies or to make large-scale generalizations. The team used natural language processing to analyze the fables and to produce story pairs based on words and meanings in the text. USC's Jay Pujara said stories imparting moral lessons were chosen because often multiple fables teach the same lessons in different ways, creating a semantic meaning that differs from the fable's surface form.
 

Full Article

 

 

Dreaming Up Proteins, AI Churns Out Possible Medicines, Vaccines
Science
Robert F. Service
July 22, 2022


University of Washington (UW) researchers have unveiled artificial intelligence (AI) software that can design nonexistent proteins, including potential cancer treatments and a vaccine candidate for respiratory syncytial virus. The AI taps innovations in predicting three-dimensional protein structures from amino acid sequences, using the RoseTTAFold software to form proteins from scratch via inpainting and constrained hallucination. Inpainting gives the AI a starting point from which it can fill in additional protein components based on its understanding of protein-folding. Meanwhile, constrained hallucination establishes a goal, and the AI evolves a virtual protein to meet the goal. Both approaches yielded successful laboratory-cultured proteins.
 

Full Article

 

 

Think Your Street Needs a Redesign? Ask an AI
Bloomberg
Marie Patino
July 30, 2022


Brooklyn-based artist and former activist Zach Katz is using artificial intelligence (AI) laboratory OpenAI's DALL-E 2 system to redesign streets to be more friendly to pedestrians and bikes. DALL-E 2, the second iteration of the DALL-E neural network, can convert text into visual features and display how these features could relate to one another, thanks to a massive training dataset. Users can input text queries and request specific styles and features from the AI, although results can be biased against women and people of color due to a lack of diversity in its training data. Katz posts images of DALL-E-reimagined streets on Twitter, and has a backlog of requests for such reimaginings from around the world.

Full Article

 

 

Researchers Use Wearable Tech to Detect COVID-19 Before Onset of Symptoms
McMaster University (Canada)
Jesse Dorey
July 19, 2022


Researchers in Canada and Europe detected COVID-19 prior to symptom onset by pairing wrist-worn health devices with machine learning. The researchers analyzed more than 1,100 participants wearing a fertility tracker that monitors respiration, heart rate, heart rate variability, skin temperature, and blood flow at night while sleeping. The tracker was synchronized to a mobile application that recorded activity that might affect the central nervous system, as well as potential COVID-19 symptoms. Over 100 participants tested positive for the virus, and the tracker detected changes in all physiological markers during infection. The team trained an algorithm to detect COVID-19 symptoms in 70% of positive-testing participants, and 73% of confirmed cases were detected as early as two days before symptom onset; tests on the remaining cohort yielded a 68% detection rate up to two days ahead.
 

Full Article

 

 

Using AI to Train Teams of Robots to Work Together
University of Illinois Grainger College of Engineering
Debra Levey Larson
July 19, 2022


University of Illinois Urbana-Champaign (UIUC) scientists have created a reinforcement learning framework to train multiple agents such as robots to collaborate. UIUC's Huy Tran said the researchers envisioned a decentralized scheme, while also focusing "on situations where it's not obvious what the different roles or jobs for the agents should be." He explained that they produced a utility function "that allows us to identify when an individual agent contributes to the global team objective." The algorithms also can detect when an agent or robot is performing a task that is not useful, and the researchers tested them on simulated games like Capture the Flag and StarCraft. Tran said such algorithms can be applied to real-world scenarios, like military surveillance or coordinated autonomous vehicle delivery.
 

Full Article

 

 

Sensing Platform Deployed at Controlled Burn Site, Could Help Prevent Forest Fires
Argonne National Laboratory
Jared Sagoff
July 26, 2022


Scientists from the U.S. Department of Energy's Argonne National Laboratory deployed the Sage platform to conduct a controlled burn on a Kansas prairie, as part of research that could be applied to forest fire prevention. Sage integrates multiple sensors with edge computing and embedded machine learning (ML) algorithms, enabling on-the-spot detection, monitoring, and analysis of the burned area. Sage uses the open-source Waggle wireless sensor platform; Argonne's Pete Beckman said, "It's basically as if Waggle is a cellphone, and Sage is the network that the phone uses to communicate plus the apps that run on it." The controlled burn generated a dataset about the progression of smoke and fire, which can be used to teach an ML algorithm to ascertain the behavior of other fires in real time.
 

Full Article

 

dtau...@gmail.com

Oct 2, 2022, 12:07:44 PM
to ai-b...@googlegroups.com

First Studies with Quantum Machine Learning at LHCb
University of Liverpool (U.K.)
August 4, 2022


The Data Processing & Analysis Project team, led by a researcher at the U.K.'s University of Liverpool, demonstrated the use of quantum machine learning (QML) to identify the charge of b-quark-initiated jets at the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research). This marked the first time QML techniques were applied to hadronic jet charge identification. The researchers compared the performance of a variational quantum classifier, based on two different quantum circuits, with that of a deep neural network (DNN). The classical DNN performed only slightly better than the QML algorithms; however, QML reached optimal performance with a lower number of events, while the DNN's performance exceeded QML's as the number of features grew.
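
For readers unfamiliar with the term, a variational quantum classifier is a parameterized quantum circuit trained like a model: data is encoded by rotations, trainable rotations follow, and a measured expectation value serves as the class score. The single-qubit numpy toy below conveys the idea only; the LHCb study used larger circuits on real jet features.

```python
import numpy as np

def ry(a):                                  # single-qubit Y-rotation gate
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # encode x, then rotate
    return state[0]**2 - state[1]**2        # <Z> expectation: sign = class

xs = np.array([0.2, 0.4, 2.6, 2.9])         # one feature per "jet"
ys = np.array([1, 1, -1, -1])               # two charge classes

theta, lr = 0.0, 0.2
for _ in range(100):
    grad = 0.0
    for x, y in zip(xs, ys):
        # Parameter-shift rule: exact gradient of <Z> for rotation gates.
        d = (predict(x, theta + np.pi / 2) - predict(x, theta - np.pi / 2)) / 2
        if y * predict(x, theta) < 1:        # hinge loss on the margin
            grad += -y * d
    theta -= lr * grad / len(xs)

print([int(np.sign(predict(x, theta))) for x in xs])   # matches ys
```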

Full Article

 

 

Researchers Taught Machines to Follow Lego Instruction Manuals
Gizmodo
Andrew Liszewski
August 3, 2022


Researchers at Stanford University have developed a learning-based framework that enables machines to interpret step-by-step instructions for building Lego models. Lego instruction manuals typically show semi-assembled models that change as bricks are added with each step. The Manual-to-Executable-Plan Network (MEPNet) compares previously generated three-dimensional models to the next iteration to determine where new pieces fit in each step. Because differences are too subtle for the framework to detect from scanned images or printed pages, it must make these determinations on its own. The process is made easier by the fact that MEPNet understands how Lego bricks fit together and their positional limitations, which helps narrow down potential attachment points in the semi-assembled model.

Full Article

 

 

Using AI to Control Digital Manufacturing
MIT News
Adam Zewe
August 2, 2022


An international team of researchers led by the Massachusetts Institute of Technology has taught a machine learning system with computer vision to monitor and control three-dimensional (3D) printing in real time. The researchers used simulations to train a neural network to adjust printing parameters to minimize errors, then applied it to a 3D printer. The machine-vision system employs two cameras focused on the printer's nozzle, and calculates the printing material's thickness as it is extruded via light measurement. The system printed objects with greater accuracy than all other 3D printing control methods evaluated by the researchers. "This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy," said MIT's Wojciech Matusik.

Full Article

 

 

AI May Come to the Rescue of Future Firefighters
NIST News
August 10, 2022

A new artificial intelligence model from researchers at the U.S. National Institute of Standards and Technology and Hong Kong Polytechnic University could warn firefighters of imminent combustion or flashovers. The Flashover Prediction Neural Network (FlashNet) model can predict flashovers as early as 30 seconds before eruption, with up to 92.1% accuracy. The researchers used graph neural networks to strengthen FlashNet and trained it on nearly 25,000 fire cases, then used another 16,000 for refinement and final testing. In digital simulations of over 41,000 fires in 17 types of buildings representing common U.S. residential floorplans, FlashNet outperformed five other machine learning-based models.
 

Full Article

 

 

An Open-Source Data-Science Toolkit for Energy: GridDS
Lawrence Livermore National Laboratory
August 2, 2022


GridDS, an open-source data-science toolkit, was designed to provide an integrated energy data storage and augmentation infrastructure, as well as a flexible and wide-ranging set of cutting-edge machine learning (ML) models. Researchers at the U.S. Department of Energy's Lawrence Livermore National Laboratory (LLNL) developed GridDS to train and validate ML models to help enhance the efficiency of distributed energy resources. The toolkit also is engineered to harness advanced metering infrastructure, outage management systems data, supervisory control data acquisition, and geographic information systems to predict energy requirements and to detect nascent grid failures. LLNL's Vaibhav Donde said, "GridDS can take general approaches, apply them to highly specific energy tasks, and evaluate and validate their performance."

Full Article

 

 

AI Model Recommends Personalized Fonts to Improve Digital Reading, Accessibility
University of Central Florida
Cara Cannon
August 11, 2022


Researchers at the University of Central Florida (UCF) and Adobe found that Adobe's FontMART machine learning model can expand digital information accessibility and improve individual reading experiences. The model offers personalized font recommendations that can improve reading speed by matching reader characteristics, such as font familiarity and age, with specific font characteristics, such as heavier weight. The researchers determined that when recommending a font, the reader's age plays a significant role, as older adults with weaker and variable eyesight generally find thicker font strokes easier to read. Said UCF's Ben Sawyer, "The future of readability is a device watching humans read and using their performance to tailor the format so that they read at their best. We look forward to the day when you can pick up a device, read, and receive information in a way that uniquely suits your needs."
 

Full Article

 

 

How the Secrets of Ancient Cuneiform Texts Are Being Revealed by AI
New Scientist
Alison George
August 3, 2022


Artificial intelligence (AI) is being used to read and translate cuneiform, helping to recreate ancient libraries by piecing together fragmented tablets and predicting missing text. The Fragmentarium, part of the Electronic Babylonian Literature project set up by Enrique Jiménez of Germany's Ludwig Maximilians University in 2018, is using algorithms to determine which tablet fragments belong together. The AI can predict missing segments and search for cuneiform signs in a database of fragments. As part of the project, 40,000 smashed tablet pieces at the British Museum in London are being photographed, and software is being used to stitch the images together so cuneiform experts can translate them. The entire collection of images will be made public by next year.

Full Article

*May Require Paid Registration

 

University Of Michigan Researcher Discusses Impact Of Language-Generation AIs On Science

Scientific American (5/4) reports “machine-learning algorithms that generate fluent language from vast amounts of text could change how science is done – but not necessarily for the better, says Shobita Parthasarathy, a specialist in the governance of emerging technologies at the University of Michigan in Ann Arbor.” In a report “published on 27 April, Parthasarathy and other researchers try to anticipate societal impacts of emerging artificial-intelligence (AI) technologies called large language models (LLMs).” These can “churn out astonishingly convincing prose, translate between languages, answer questions and even produce code.” The corporations building them – including Google, Facebook and Microsoft – “aim to use them in chatbots and search engines, and to summarize documents.” In an interview, Parthasarathy said, “I had originally thought that LLMs could have democratizing and empowering impacts. When it comes to science, they could empower people to quickly pull insights out of information: by querying disease symptoms for example, or generating summaries of technical topics. But the algorithmic summaries could make errors, include outdated information or remove nuance and uncertainty, without users appreciating this.”

 

Lockheed Martin Employing AI, Space Expertise To Help Fight Wildfires

The Denver Post (5/3, Kohler) reports Lockheed Martin Space is “tapping decades of experience of managing satellites, exploring space and providing information for the U.S. military to offer more accurate data quicker to ground crews” through artificial intelligence and working with the “U.S. Forest Service, university researchers and a Colorado state agency about how their technology could help.” One example, said Dan Lordan, senior manager for AI integration at Lockheed Martin’s Artificial Intelligence Center, is information gathered and managed by federal agencies on the types and conditions of vegetation across the country. Lockheed Martin uses satellite data from the ESA, updated around every five days, and is working with Nvidia “to produce a digital simulation of a wildfire based on an area’s topography, condition of the vegetation, wind and weather to help forecast where and how it will burn.” After the fact, the companies “used the information about the Cameron Peak fire, plugging in the more timely satellite data on fuel conditions, and generated a video simulation that Lordan said was similar to the actual fire’s behavior and movement.”

 

AI Can Revolutionize Vehicle Maintenance

Automotive World (4/27, Lampinen) reports AI “is re-writing the rule book for vehicle diagnostics and helping to detect future faults before they cause a problem” as connected tech can “automatically arrange for the necessary servicing, ensuring the parts are in stock and a technician will be available at the right time.” This predictive maintenance ability “could prove a real profit driver” for fleets with thin margins; for AVs, “it could prove a necessity.” AW reports that Questar “is one of the leading innovators in this space” with “nearly two decades of expertise in vehicle diagnostics.”

dtau...@gmail.com

Oct 10, 2022, 8:20:07 AM
to ai-b...@googlegroups.com

DeepMind Invents Faster Algorithms to Solve Tough Math Puzzles
Nature
Matthew Hutson
October 5, 2022


Researchers at artificial intelligence (AI) laboratory DeepMind have created an algorithm that can solve tough mathematical calculations with improved computing efficiency. The researchers designed the AlphaTensor algorithm to execute matrix multiplication, which entails multiplying numbers arranged in grids that might represent data. AlphaTensor incorporates reinforcement learning as well as tree search, a game-playing approach in which the AI probes the outcomes of branching possibilities while planning its next action. The researchers tested AlphaTensor on input matrices up to 5 x 5. In some cases it rediscovered shortcuts previously formulated by mathematicians, while in others it found new shortcuts of its own.
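
The best-known shortcut of this kind, which AlphaTensor rediscovered, is Strassen's 1969 scheme: it multiplies two 2 x 2 matrices with seven scalar multiplications instead of the naive eight, trading them for extra additions. A direct Python rendering:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (m1..m7) instead of 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)   # same result, one fewer multiply
```

Applied recursively to large block matrices, saving one multiplication per 2 x 2 step compounds; AlphaTensor searches for analogous decompositions at other sizes.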
 

Full Article

 

 

Device Can Recognize 'Millions of Colors' in Machine Vision Advance
Northeastern University
Tanner Stening
October 5, 2022


Northeastern University researchers have constructed a device that uses artificial intelligence techniques to perceive "millions of colors" in what they call a massive step for machine vision technology. The researchers said the A-Eye device can analyze and process color with greater accuracy than existing machines. The team produced a two-dimensional material whose quantum properties can process diverse colors when incorporated within an optical window. The material forms "transmissive windows" that substitute for filters. The researchers said A-Eye also can "accurately recognize and reproduce 'seen' colors with zero deviation from their original spectra" through machine learning algorithms. "A-Eye can continuously improve color estimation by adding any corrected guesses to its training database," they said.
 

Full Article

 

 

Rethinking the Computer Chip in the Age of AI
Penn Engineering Today
Devorah Fischler
September 29, 2022


A team of researchers from the University of Pennsylvania (Penn), Sandia National Laboratories, and Brookhaven National Laboratory has unveiled a computing architecture suited for artificial intelligence (AI). The researchers developed a transistor-free compute-in-memory (CIM) architecture where processing and storage happen in the same place, removing transfer time and minimizing energy consumption. The architecture, which builds on earlier work on a ferroelectric switching scandium-alloyed aluminum nitride semiconductor, could potentially perform up to 100 times faster than a conventional computing architecture. The design also performs on-chip storage, parallel search, and matrix multiplication acceleration. Penn's Xiwen Liu said the work "proves that we can rely on memory technology to develop chips that integrate multiple AI data applications in a way that truly challenges conventional computing technologies."
 

Full Article

 

 

Deep Learning Pioneer Geoffrey Hinton Receives Royal Medal
U of T News (Canada)
Chris Sasaki
August 24, 2022


The Royal Society awarded the Royal Medal to ACM A.M. Turing Award recipient Geoffrey Hinton at Canada's University of Toronto for his groundbreaking work in deep learning. The U.K.'s national academy of sciences said Hinton earned the medal for "pioneering work on algorithms that learn distributed representations in artificial neural networks and their application to speech and vision, leading to a transformation of the international information technology industry." In 1986, Hinton and his collaborators created deep learning based on the backpropagation algorithm, a core learning mechanism for artificial neural networks. Said Hinton, “I believe that the spectacular recent progress in large language models, image generation and protein structure prediction is evidence that the deep learning revolution has only just started.”
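
Backpropagation itself is compact: run the network forward, then push the output error backward through the chain rule to get a gradient for every weight. A minimal numpy sketch on the XOR problem (illustrative only, not the 1986 formulation's code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)           # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)       # 2-8-1 network
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                        # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)             # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)              # propagated back one layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())                         # approaches [0, 1, 1, 0]
```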

Full Article

 

 

Super-Fast Electric Car Charging, with Tailor-Made Touch
American Chemical Society
August 22, 2022


Idaho National Laboratory researchers used machine learning to develop fast-charging methods for electric vehicle batteries that do not damage the batteries or reduce their lifespans. The researchers trained the machine learning model using data on the condition of different lithium-ion batteries during their charging/discharging cycles; the data for each battery model then was used to identify and optimize new charging protocols, which were tested on real batteries. Said Idaho National Laboratory's Eric Dufek, "We've significantly increased the amount of energy that can go into a battery cell in a short amount of time. Currently, we're seeing batteries charge to over 90% in 10 minutes without lithium plating or cathode cracking."

Full Article

 

 

A Neuromorphic Chip for AI on the Edge
UC San Diego News Center
Ioana Patringenaru
August 17, 2022


An international team of researchers created the NeuRRAM neuromorphic chip to compute directly in memory and run artificial intelligence (AI) applications with twice the energy efficiency of platforms for general-purpose AI computing. The chip moves AI closer to running on edge devices, untethered from the cloud; it also produces results as accurate as conventional digital chips, and supports many neural network models and architectures. "The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said former University of California, San Diego researcher Weier Wan.

Full Article

 

 

Made-Up Words Trick AI Text-to-Image Generators
Discover
August 16, 2022

Columbia University's Raphaël Millière found that made-up words can trick text-to-image generators, raising questions about their security. Millière created nonsense words using the "macaronic prompting" technique, which involves combining parts of real words from different languages. For instance, the made-up word "falaiscoglieklippantilado," a combination of the German, Italian, French, and Spanish words for "cliff," generated images of cliffs when input into the DALL-E 2 text-to-image generator. Millière said, "The preliminary experiments suggest that hybridized nonce strings can be methodically crafted to generate images of virtually any subject as needed, and even combined together to generate more complex scenes." However, Millière noted, "In principle, macaronic prompting could provide an easy and seemingly reliable way to bypass [content] filters in order to generate harmful, offensive, illegal, or otherwise sensitive content, including violent, hateful, racist, sexist, or pornographic images, and perhaps images infringing on intellectual property or depicting real individuals."
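
The article's published example can be reconstructed mechanically by splicing fragments of the four source words; the slice points below are chosen to reproduce it exactly:

```python
# "Cliff" in French, Italian, German, and Spanish, spliced into one nonce word.
words = {"fr": "falaise", "it": "scogliera", "de": "klippe", "es": "acantilado"}
nonce = words["fr"][:6] + words["it"][1:7] + words["de"][:5] + words["es"][2:]
print(nonce)   # -> falaiscoglieklippantilado
```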
 

Full Article

 

 

Your Smartphone Could Recognize You by How You Hold It
New Scientist
Jeremy Hsu
August 16, 2022


A group of researchers at France's Toulouse Institute of Computer Science Research trained an artificial intelligence (AI)-powered algorithm to identify persons based on patterns in their hand vibrations when holding a smartphone. Over 200 volunteers each held a smartphone while its sensors gathered data during either a 30-second session or a series of sessions. The AI analyzed roughly half of this data to formulate how best to distinguish individual users, then was tested on the other half to identify specific persons. The algorithm identified test subjects correctly about 92.5% of the time within 1.5 seconds across 1,100 tests.

Full Article

 

 

Uncovering Nature's Patterns at the Atomic Scale in Living Color
Argonne National Laboratory
Joseph E. Harmon
August 15, 2022


Researchers led by the U.S. Department of Energy's Argonne National Laboratory (ANL) have invented a technique for producing color-coded graphs of large datasets from X-ray diffraction. The tool uses computational data sorting to identify clusters related to physical properties, which should ramp up research on atomic-level structural changes caused by varying temperature. The X-ray Temperature Clustering method uses unsupervised machine learning processes developed at Cornell University. Analysis of two crystalline materials showed they become superconducting at temperatures near absolute zero, and manifest other unusual properties at higher temperatures caused by structural changes. "What might have taken us months in the past, now takes about a quarter hour, with much more fine-grained results," said ANL's Raymond Osborn.
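
The clustering step can be pictured with a toy version: treat each detector pixel as an intensity-versus-temperature trajectory, then let an unsupervised algorithm group trajectories that change together. The sketch below uses k-means on synthetic trajectories; the actual X-ray Temperature Clustering pipeline differs in detail.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
temps = np.linspace(5, 300, 60)                  # measurement temperatures (K)

flat = rng.normal(1.0, 0.05, size=(200, 60))     # temperature-independent pixels
ordered = (1 / (1 + np.exp((temps - 150) / 10))  # intensity high below ~150 K
           + rng.normal(0, 0.05, size=(200, 60)))
trajectories = np.vstack([flat, ordered])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))   # two groups: featureless vs. order-parameter-like
```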
 

Full Article

 

 

AI Model Can Detect Parkinson's from Breath Patterns
MIT News
Alex Ouyang
August 22, 2022


A multi-institutional group of researchers created an artificial intelligence (AI) model that can detect Parkinson's disease from a person's breathing patterns. The Massachusetts Institute of Technology (MIT)'s Yuzhe Yang and Yuan Yuan trained the AI model, which is a neural network, to determine the disease's presence from nocturnal breathing, and to rate its severity and track its progression over time. It was implemented in a device that emits radio signals, analyzes their reflections from its surroundings, and extracts the subject's breathing patterns without bodily contact; the researchers feed those patterns to the neural network, which then evaluates Parkinson's passively.

Full Article

 

dtau...@gmail.com

Oct 15, 2022, 7:12:40 PM
to ai-b...@googlegroups.com

More Accurate Assessments of Hurricane Damage for Responders
Ohio State News
Jeff Grabmeier
October 6, 2022


A machine learning model developed by Ohio State University (OSU) researchers can better forecast building damage from hurricanes, and may soon assist emergency responders. The model can construct building footprints from pre-hurricane satellite imagery, then compare them with post-storm images. OSU's Desheng Liu and Polina Berezina tested the model on data from Hurricane Michael in 2018, and it yielded an 86.3% accurate damage assessment in one region of Florida, outperforming the support vector machine model (SVM) by 11%. Said Liu, "The SVM struggled to distinguish between minor and major damage, which can be a major issue for teams responding after a hurricane." Liu said the model could rate the likelihood that individual buildings are in a certain damage class to help guide emergency management and first responders to initial sites.
 

Full Article



AI Can Learn the Patterns of Human Language
MIT News
Adam Zewe
August 30, 2022


Researchers at Massachusetts Institute of Technology, Cornell University, and McGill University developed an artificial intelligence model that can learn the rules and patterns of human languages automatically, without specific human guidance. The model was trained and tested on problems from linguistic textbooks in 58 different languages that involved word-form changes. The researchers observed that the model could determine a correct set of rules to describe the word-form changes for 60% of the problems. Said Cornell's Kevin Ellis, "One of the things that was most surprising is that we could learn across languages, but it didn't seem to make a huge difference. That suggests two things. Maybe we need better methods for learning across problems. And maybe, if we can't come up with those methods, this work can help us probe different ideas we have about what knowledge to share across problems."

Full Article

 

The Animal Translators
The New York Times
Emily Anthes
August 30, 2022


Researchers are using machine learning (ML) systems to decode animal communication. Scientists at Germany's Max Planck Institute for Brain Research used ML algorithms to analyze 36,000 mole rat chirps in seven colonies, identifying unique vocal signatures for each mole rat, as well as a distinct dialect for each colony. The multi-institutional Project CETI (Cetacean Translation Initiative) hopes to decipher the communication of sperm whales through the efforts of ML specialists, marine biologists, roboticists, linguists, and cryptographers. The project will involve recording whale sounds and movements via underwater microphones, robotic fish, and acoustic tags. Other projects aim to build technologies that enable human-animal communication, with Hunter College's Diana Reiss envisioning "a Google Translate for animals."

Full Article

*May Require Paid Registration

 

 

Driving Simulations That Look More Lifelike
Ohio State News
August 29, 2022


Ohio State University (OSU) researchers trained a Generative Adversarial Network (GAN) to create random photorealistic three-dimensional scenery for driving simulations. The researchers based their simulator on the open-source CARLA tool, then applied a GAN-based image synthesizer to render and integrate background elements with more conventionally rendered objects. The researchers determined that combining foreground objects separately from background scenery resulted in an image that was more lifelike. OSU's Ekim Yurtsever said, "We can never actually replace real-world testing, but if we can make simulations a little bit better, we can get better insight on how we can improve autonomous driving systems and how we interact with them."

Full Article

 

 

Quantum AI Breakthrough: Theorem Shrinks Appetite for Training Data
Los Alamos National Laboratory
August 23, 2022


A proof devised by a multi-institutional team of scientists demonstrates that quantum neural networks can train on minimal data. "The need for large datasets could have been a roadblock to quantum AI [artificial intelligence], but our work removes this roadblock," said Patrick Coles at the U.S. Department of Energy's Los Alamos National Laboratory (LANL). Coles said quantum AI training occurs in a mathematical construct called a Hilbert space, and the theorem shows that navigating this space requires only as many data points as the number of parameters in a given model. The researchers could ensure that a quantum model can be compiled in far fewer computational gates relative to the volume of data. LANL's Marco Cerezo said, "We can compile certain very large quantum operations within minutes with very few training points—something that was not previously possible."

Full Article

 

 

Chinese Game Company Appoints Humanoid Robot as CEO
Interesting Engineering
Nergis Firtina
September 1, 2022


Chinese mobile game company NetDragon Websoft has appointed an artificial intelligence (AI)-supported virtual human named Tang Yu as its CEO. The company said Ms. Tang Yu will serve as a real-time data center and analytics tool for the company's board, support decision-making during daily operations, and promote a fair working environment for employees. NetDragon founder Dejian Liu said, "We believe AI is the future of corporate management, and our appointment of Ms. Tang Yu represents our commitment to truly embrace the use of AI to transform the way we operate our business and ultimately drive our future strategic growth."

Full Article

 

 

Optimizing Fluid Mixing with Machine Learning
Tokyo University of Science (Japan)
August 29, 2022


Researchers in Japan have proposed a machine learning-based approach for optimizing fluid mixing for laminar flows. The researchers used reinforcement learning (RL), in which intelligent agents perform actions in an environment to maximize the cumulative reward. The team addressed RL's inefficiency in dealing with systems involving high-dimensional state spaces by describing the fluid motion using only a single parameter. The researchers formulated the mixing problem as a Markov decision process, and Tokyo University of Science's Masanobu Inubushi said the program "identified an effective flow control, which culminated in an exponentially fast mixing without any prior knowledge." The RL method also enabled effective transfer learning of the trained "mixer," significantly reducing its time and training cost.

Full Article
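
As a flavor of the formulation (not the authors' simulation), the sketch below casts the single-parameter description as a Markov decision process: the state is one scalar, actions nudge the flow control, and the reward stands in for a mixing measure; all dynamics here are hypothetical:

class MixingMDP:
    # State: a single scalar flow parameter. Actions: nudge it up
    # or down. Reward: a stand-in for how well mixed the fluid is.
    def __init__(self):
        self.state = 0.0

    def step(self, action):                 # action is -1 or +1
        self.state = min(1.0, max(0.0, self.state + 0.05 * action))
        reward = self.state                 # 1.0 stands for fully mixed
        return self.state, reward

env = MixingMDP()
state = env.state
for _ in range(100):
    state, reward = env.step(+1)            # placeholder policy; RL learns this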

 

 

French Tax Officials Use AI to Spot 20,000 Undeclared Pools
The Guardian (U.K.)
Kim Willsher
August 29, 2022


French tax officials used artificial intelligence (AI) software from Google and Capgemini to identify more than 20,000 undeclared private swimming pools, representing about €10 million (over $10 million) in tax receipts. The AI system spots pools in aerial images and cross-checks them against land registry databases to determine which may not have been reported. The tax authorities' technical team said it cannot yet establish whether a rectangular configuration on an aerial image is an extension or a tent, terrace, or tarpaulin placed on the ground. Antoine Magnant, France's deputy director general of public finances, said tests of the system are only in their second stage.

Full Article

 

 

Robot Dog Learns to Walk Tough Terrain in 20 Minutes
New Scientist
Alex Wilkins
August 26, 2022


Researchers at the University of California, Berkeley (UC Berkeley) developed a machine learning algorithm that enabled a robot dog to learn to navigate difficult terrain in only 20 minutes. The Q-learning algorithm does not need a model of the target terrain. As a result, said UC Berkeley's Sergey Levine, "We don't need to understand how the physics of an environment actually works, we just put the robot into an environment and turn it on." The algorithm teaches the robot by rewarding it for each successful action until reaching its ultimate goal. The researchers demonstrated that the robot was able to walk on terrains it had not previously encountered, including grass, a layer of bark, a memory foam mattress, and a hiking trail, after about 20 minutes of training on each.

Full Article
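
At the heart of Q-learning is one model-free update: the robot needs no terrain physics, only a per-step reward. A generic tabular version of the update (a sketch only; the Berkeley system learns over continuous states with neural networks):

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Move Q(s, a) toward the observed reward plus the discounted
    # value of the best action available from the next state.
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])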

 

 

AI-Created Lenses Let Camera Ignore Some Objects
New Scientist
Matthew Sparkes
August 23, 2022


University of California, Los Angeles researchers developed a deep-learning artificial intelligence (AI) model to design three-dimensionally (3D)-printed plastic camera lenses that capture images of certain objects while ignoring others in the same frame. The researchers trained the model using thousands of images of numbers, designated either as target objects to appear in images or as objects to ignore. The model was told when images that were supposed to reach the camera's sensor did and did not pass through a trio of lenses, and when images that were not supposed to reach the sensor did. The AI used the data to improve its lens design. The completed lenses use complex patterns printed into the plastic to diffract away light from objects that are not designated to appear in the final image. Unwanted objects are never captured digitally, so they do not need to be edited out of the image.

Full Article

 

Intel Announces New Processors, Including AI Chips, As It Challenges Nvidia And AMD

Bloomberg (5/10, King) reports Intel on Tuesday rolled out “new processors designed for data centers, a lucrative market where it’s facing tougher competition” from Nvidia and AMD. The new lineup “will include updated artificial intelligence chips, fresh versions of Intel’s Xeon processors and semiconductors that help connect telecommunications networks,” and “for the first time, the company also will sell graphics chips designed for data centers, challenging Nvidia on its own turf.” Intel CEO Pat Gelsinger “is announcing the products at a company event in Dallas on Tuesday.”

        Reuters (5/10, Scheer, Lanhee Lee) reports that Intel “launched a new chip called Gaudi2 focused on artificial intelligence computing, as the chip manufacturer makes a bigger push into the AI chip market currently dominated by Nvidia.” Gaudi2 “is the second generation processor by Habana Labs, an Israeli AI chip startup Intel bought in late 2019 for about $2 billion.” Taking market share from Nvidia “has been a challenge as many AI researchers and companies have been accustomed to using Nvidia’s software platform CUDA,” so “in addition to new chips for AI computing, Intel told Reuters it has been focusing on its software development.” Intel “also launched a new chip called Greco for inferencing work: taking an AI algorithm and making a prediction or identifying an object.”

        The Register (UK) (5/10, Martin) reports, “Gaudi2 is built on a 7nm process, a major improvement in nodes from Gaudi’s 16nm, packing 24 Tensor Processor Cores and 96GB of HBM2e high-bandwidth memory, as well as tripling the number of cores and amount of HBM2 memory of the chip’s predecessor.” Memory bandwidth likewise has “nearly tripled to 2.45TB/s while SRAM has doubled to 48 MB. Throughput is also significantly improved, expanding the chip’s networking capacity to 24 100 GbE ports from the first Gaudi’s 10 100 GbE ports.” Greco also moves “to a 7nm process from its predecessor’s 16nm process,” and “while the 16GB memory capacity is the same as the first-generation chip, Greco is moving to LPDDR5 from Goya’s DDR4, which significantly increases the memory bandwidth to 204GB/s from 40GB/s.” In contrast to Gaudi2, “Greco is getting smaller, moving to a single-slot half-height, half-length PCIe form factor from Goya’s dual-slot form factor. And it requires less power, with a thermal design power of 75 watts compared to the 200 watts for the first-generation chip.”

 

Research Shows Instruction, Annotations Can Bias AI Systems

TechCrunch (5/8, Wiggers, Coldewey) reports a new study found that bias in AI systems can “start with the instructions given to the people recruited to annotate data from which AI systems learn to make predictions.” The research found that annotators often “pick up on patterns in the instructions, which condition them to contribute annotations that then become over-represented in the data, biasing the AI system.” This phenomenon “suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought.”

 

WPost Calls For Guidelines On Use Of AI In Government Surveillance

The Washington Post (5/5) writes in an editorial that Congress should address the future possible uses of facial recognition by companies such as Clearview AI to avoid a “world where everyone is being watched.” No federal law “regulates facial recognition, though some cities and states have passed restrictions, and the need for rules that apply everywhere has never been clearer.” These “strictures could and should allow authorities to harness this tool for public safety while still respecting civil liberties. What private actors can do with facial recognition and other biometric identification tools also must be constrained. But those still-missing guidelines should also arrive in tandem with another legislative goal legislators can’t seem to score.”

 

Clearview AI Agrees To New Privacy Terms In Court Settlement

The AP (5/9, Foody, O'Brien) reports, “Facial recognition startup Clearview AI has agreed to restrict the use of its massive collection of face images to settle allegations that it collected people’s photos without their consent.” In a legal filing, the company “agreed to permanently stop selling access to its face database to private businesses or individuals around the U.S., putting a limit on what it can do with its ever-growing trove of billions of images pulled from social media and elsewhere on the internet.” Clearview also agreed “to stop making its database available to Illinois state government and local police departments for five years. The New York-based company will continue offering its services to federal agencies, such as U.S. Immigration and Customs Enforcement, and to other law enforcement agencies and government contractors outside of Illinois.” If approved by a judge in Chicago, this settlement would “end a 2-year-old lawsuit brought by the American Civil Liberties Union and other groups over alleged violations of an Illinois digital privacy law. The company still faces a separate privacy case before a federal judge in Illinois.”

        The Washington Post (5/9, Harwell) reports, “Clearview, based in New York, has argued in court that the Illinois law restricted the company’s ability to collect and analyze public information – and, therefore, violated its First Amendment-protected freedom of speech.”

        The Hill (5/9, Rodrigo) reports that Deputy Director of the ACLU Speech, Privacy, and Technology Project Nathan Freed Wessler said in a statement, “By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse. ... Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

dtau...@gmail.com

unread,
Oct 16, 2022, 7:58:11 AM10/16/22
to ai-b...@googlegroups.com

UW Computer Science Professor Wins MacArthur 'Genius Grant'
The Seattle Times
Amanda Zhou
October 12, 2022


University of Washington (UW) computer science professor Yejin Choi has been selected to receive one of this year's MacArthur Foundation “genius grant” fellowships. Choi applies natural language processing to develop artificial intelligence (AI) systems capable of understanding implied meanings in human languages. She has created models that absorb text and image inputs and has used computational linguistics to help AI detect sentiment or misleading intent in writing. Her team of researchers applied the latter concept to identify false online consumer reviews and to categorize news stories as hoaxes, satirical, or trustworthy.
 

Full Article

 

 

AI Offers Insight into Who Responds to Antidepressants
IEEE Spectrum
Michelle Hampson
October 10, 2022


Machine-learning algorithms developed by scientists at the New York Institute of Technology (NYIT) and Canada's McMaster University analyze the electrical activity of the brain to predict responses to antidepressants with high accuracy. The researchers analyzed pre-treatment electroencephalogram data from 228 patients with depression who were randomly assigned to receive either a placebo or the antidepressant sertraline. The algorithms forecast sertraline response with 83.7% accuracy and placebo response with 83% accuracy.
 

Full Article
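
Schematically, the task is supervised classification: map pre-treatment EEG features to responder or non-responder labels. The sketch below uses random placeholder data and a generic scikit-learn classifier, not the study's features or models, so it should hover near chance accuracy:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(228, 64))    # 228 patients x 64 EEG-derived features
y = rng.integers(0, 2, size=228)  # 1 = responder, 0 = non-responder

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")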

 

 

Anonymous Data Doesn't Mean Private
Illinois Institute of Technology
Casey Moffitt
October 6, 2022


Illinois Institute of Technology (Illinois Tech) researchers used machine learning and artificial intelligence algorithms to exfiltrate personal information from anonymous cellphone data. The neural-network model estimated the gender and age of individual users via their private communications by tapping data from a Latin American cellphone company. The algorithm guessed individuals' gender with 67% accuracy and age with 78% accuracy, which significantly outperformed current models. Researchers employed commonly accessible computing equipment to extract this information, and although the dataset they used was not publicly available, malefactors could compile a similar dataset by capturing data through public Wi-Fi hotspots or by targeting service providers' computing infrastructure, said Illinois Tech's Vijay K. Gurbani.
 

Full Article

 

 

Exploring Reinforcement Learning to Control Nuclear Fusion Reactions
Carnegie Mellon University News
Aaron Aupperlee
September 8, 2022


Carnegie Mellon University (CMU) doctoral candidate Ian Char, the first CMU researcher to run an experiment on the DIII-D National Fusion Facility's tokamak machine, demonstrated that reinforcement learning algorithms can control the rotation of the machine's hydrogen plasma. Char developed two algorithms: one was trained using data from the tokamak on how the plasma reacts, while the other calculates the rate and direction at which to add hydrogen particles to affect the speed of the plasma's rotation. Said CMU's Jeff Schneider, "This work shows a path to using reinforcement learning to control other parts of the plasma state and ultimately achieve the temperatures and pressures long enough to have a power plant. That would mean limitless, clean energy for everyone."

Full Article

 

 

Collaborative ML That Preserves Privacy
MIT News
Adam Zewe
September 7, 2022


Researchers at the Massachusetts Institute of Technology (MIT) and MIT-originated startup DynamoFL have enhanced federated learning to better train a privacy-preserving machine learning model. Federated learning involves hundreds or thousands of users training their own model with their own data on their own device, then transferring the models to a central server that pools them into a better model. The researchers' FedLTN system improves the accuracy of the combined model while shrinking its size, which accelerates communication between users and the central server while ensuring each user receives a model tailored for their environment. FedLTN is based on the lottery ticket hypothesis, which posits that large neural network models contain smaller subnetworks that can achieve the same performance. The researchers reduced model size by nearly an order of magnitude versus other methods, yielding four- to six-fold lower communication costs for individual users and boosting model accuracy by 10%.

Full Article
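
The federated-averaging skeleton that FedLTN builds on can be sketched briefly: each client trains a private copy of the global model on its own data, and the server averages the resulting weights. This minimal PyTorch sketch omits FedLTN's lottery-ticket pruning and personalization, and all names are illustrative:

import copy
import torch
import torch.nn as nn

def local_train(model, data, epochs=1, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_round(global_model, client_datasets):
    # Each client trains its own copy; the server averages the weights.
    states = [local_train(copy.deepcopy(global_model), data)
              for data in client_datasets]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Two toy clients with (heterogeneous) random data.
clients = [[(torch.randn(8, 4), torch.randn(8, 1))] for _ in range(2)]
model = federated_round(nn.Linear(4, 1), clients)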

 

 

Next Generation of Hearing Aids Reads Lips Through Masks
University of Glasgow (U.K.)
September 7, 2022

An international team led by researchers at the U.K.'s University of Glasgow developed a system that can read lips accurately even through face masks using radio-frequency (RF) sensing and artificial intelligence. The researchers had volunteers repeat vowel sounds while masked and unmasked, with RF signals from a dedicated radar sensor and a Wi-Fi transmitter used to scan their faces while speaking and while still. The resulting 3,600 samples of data were used to train machine learning and deep learning algorithms to recognize the lip and mouth movements related to each vowel sound. Because RF signals can pass through masks, the machine learning and deep learning algorithms were able to learn to identify vowel formation by masked speakers.
 

Full Article

 

 

Revolutionizing Image Generation by AI: Turning Text into Images
Ludwig-Maximilian University of Munich (Germany)
September 1, 2022


Scientists at Germany's Ludwig-Maximilian University of Munich (LMU Munich) have developed an algorithm that can convert text into images in seconds. The Stable Diffusion artificial intelligence (AI) model can accomplish this with a conventional graphics card and without the need to use supercomputers. The researchers trained the model on the servers of their startup Stability.Ai, and the algorithm refines the essence of billions of training images into an AI model of just a few gigabytes. Said LMU Munich's Björn Ommer, "Once such AI has really understood what constitutes a car or what characteristics are typical for an artistic style, it will have apprehended precisely these salient features and should ideally be able to create further examples, just as the students in an old master's workshop can produce work in the same style."

Full Article
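
The released model can be driven from a consumer graphics card in a few lines. A short usage sketch with the Hugging Face diffusers library; the checkpoint named below is one commonly used distribution of Stable Diffusion, not the only option:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")              # a conventional graphics card suffices

image = pipe("a car painted in the style of an old master").images[0]
image.save("car.png")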

 

 

AI Can Better Monitor Maine's Forests
UMaine News
September 1, 2022

A study by researchers at the universities of Maine (UMaine), New Hampshire, and Vermont applied artificial intelligence (AI) and machine learning to monitor soil moisture more efficiently. The researchers designed a wireless sensor network that uses AI to learn to monitor soil moisture and process the data with greater power efficiency. The software learns over time how to best employ available network resources, which helps generate power-efficient systems at reduced cost for large-scale monitoring. "AI can learn from the environment, predict the wireless link quality and incoming solar energy to efficiently use limited energy, and make a robust low-cost network run longer and more reliably," said UMaine's Ali Abedi.
 

Full Article

 

 

English Bias in Computing: Images to the Rescue
University of Copenhagen (Denmark)
August 31, 2022

An image-based benchmark could overcome cultural bias stemming from machine learning (ML) training datasets being written in English. An international group of researchers led by Denmark's University of Copenhagen (KU) developed the Image-Grounded Language Understanding Evaluation (IGLUE) tool, which can score an ML solution's efficiency in 20 languages. Image labels in ML are typically in English, while IGLUE covers 11 language families, nine scripts, and three geographical macro-areas. IGLUE's images feature culture-specific components supplied by volunteers in geographically diverse countries in their natural language. KU's Emanuele Bugliarello said the researchers hope IGLUE's underlying methodology could improve solutions "which help visually impaired in following the plot of a movie or another type of visual communication."
 

Full Article

 

 

City Digital Twins Help Train Deep Learning Models to Separate Building Facades
Research at Osaka University (Japan)
September 5, 2022

Researchers at Japan's Osaka University used images generated automatically by digital city twins to train a deep learning model that can accurately separate out the building facades in an image. The researchers used a three-dimensional city model from the PLATEAU platform to create the synthetic digital city twin data. They input the model into the Unity game engine and drove a virtual car with a camera setup around the city to gather virtual images in various lighting and weather conditions. Real street-level images were incorporated from Google Maps API. The researchers found segmentation accuracy was improved by adding synthetic data to the real dataset, and in terms of performance, the model can compete with a deep convolutional neural network trained on 100% real data. Said Osaka University's Tomohiro Fukuda, "These results reveal that our proposed synthetic dataset could potentially replace all the real images in the training set."
 

Full Article

 

 

A Novel Approach to Creating Tailored Odors, Fragrances Using Machine Learning
Tokyo Tech News (Japan)
September 5, 2022

Researchers at Japan's Tokyo Institute of Technology (Tokyo Tech) used machine learning (ML) to predict the sensing data of odor mixtures, and to design customized fragrances. The technique taps standard mass spectrum data and ML models to forecast molecular features based on odor impression. As an example, Tokyo Tech's Takamichi Nakamoto explained the approach was able to identify molecules that emit the mass spectrum of apple flavor with enhanced "fruit" and "sweet" impressions. "Combinations of either 59 or 60 molecules give the same mass spectrum as the one obtained from the specified odor impression," Nakamoto said. "With this information, and the correct mixing ratio needed for a certain impression, we could theoretically prepare the desired scent."
 

Full Article

 

AI Can Help AV Traffic Flow “Smoothly” At Intersections

Tech Xplore (5/17, Zewe) reports that a new study by MIT researchers was able to “demonstrate a machine-learning approach that can learn to control a fleet of autonomous vehicles as they approach and travel through a signalized intersection in a way that keeps traffic flowing smoothly.” The researchers “found that their approach reduces fuel consumption and emissions while improving average vehicle speed,” and “gets the best results if all cars on the road are autonomous, but even if only 25% use their control algorithm, it still leads to substantial fuel and emissions benefits.” The researchers “want the system to learn a strategy that reduces fuel consumption and limits the impact on travel time” and utilized deep reinforcement learning. In simulated tests, the AI “can reduce fuel consumption by 18% and carbon dioxide emissions by 25%, while boosting travel speeds by 20%.”

 

AI Can Improve Supply Chain Management

The Economic Times (IND) (5/16) reports that the “supply chain function is at the epicenter of an organization’s success” and that the “benefits of technology in the supply chain can be categorized into three buckets: Increased automation, Better transparency, Generating insights to spark innovation.” ET reports that while “RFID and barcode technology are heavily used in inventory management, tracking, logistics, and vehicle route tracking,... recent advancements in technology are helping organizations to make incredible advancements in automation.” This automation shift includes “Warehouse bots, driverless delivery vehicles, and drone delivery.” These advanced technologies have “generated a wealth of rich data” which AI can leverage to boost supply chain management. ET discusses how AI can improve efficiency in supply chain management, such as via “forecasts for future demand,” new product introduction forecasts, and more.

 

Investors Pulled Back On Artificial Intelligence In Q1, As Funding Fell 32%

Axios (5/16, Brodwin) reports, “Funding for AI-focused health startups fell 32% in Q1 2022, after nine straight quarters of steady growth, according to a fresh analysis from CB Insights.” Although it is “often excessively hyped, AI has huge potential to boost patient outcomes, curb hospital spending and even pinpoint diseases earlier. Even taking this quarter’s dip into account, health AI funding remained at higher levels than those seen before 2021.” Meanwhile, “health tech companies aren’t the only ones seeing decreases in interest around AI – global funding for all kinds of AI startups fell 12% last quarter to $15.1 billion, down from $17.1 billion in Q4 2021.”

dtau...@gmail.com

unread,
Oct 23, 2022, 8:57:36 AM10/23/22
to ai-b...@googlegroups.com

We Can Train Big Neural Networks on Small Devices
IEEE Spectrum
Matthew Hutson
September 20, 2022


A new training method expands small devices' capabilities to train large neural networks, while potentially helping to protect privacy. The University of California, Berkeley's Shishir Patil and colleagues integrated offloading and rematerialization techniques in the Private Optimal Energy Training (POET) system to reduce the memory requirements of training, avoiding the suboptimal heuristics of earlier efforts. Users feed POET a device's technical details and data on the architecture of a neural network they want to train, specifying memory and time budgets; the system generates a training process that minimizes energy usage. Defining the problem as a mixed integer linear programming challenge was critical to POET's effectiveness. Testing showed the system could slash memory usage by about 80% without significantly increasing energy consumption.

Full Article
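
POET itself chooses a training schedule by solving a mixed integer linear program, but one of its two ingredients, rematerialization, is easy to illustrate: intermediate activations are recomputed during the backward pass rather than stored. A PyTorch sketch (not POET's code) using the library's built-in checkpointing:

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                      nn.Linear(512, 512), nn.ReLU())

x = torch.randn(64, 512, requires_grad=True)
y = checkpoint(block, x)    # activations inside block are not stored
y.sum().backward()          # they are recomputed here, trading time for memory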

 

 

New York AI Bias Law Prompts Uncertainty
The Wall Street Journal
Richard Vanderford
September 21, 2022


A law that goes into effect in New York City in January requires companies to audit their artificial intelligence (AI) hiring systems to assess whether they incorporate racial and gender biases. The law holds hiring companies liable for any biases and could impose fines for violations, but lacks clear guidelines for the AI audit process. While the city’s Department of Consumer and Worker Protection has not offered a timeline for when it will publish rules to implement the law, some companies already are taking steps to comply. Said Anthony Habayeb of AI governance software company Monitaur Inc., "Instead of waiting for someone to tell me what to do…I built controls around these applications because I know like with any software, things can and do go wrong."
 

Full Article

*May Require Paid Registration

 

 

AI Message Decoder Based on Bacterial Growth Patterns
Duke University
Ken Kingery
September 23, 2022


Biomedical engineers at the Duke University Pratt School of Engineering have deciphered bacterial growth patterns with an artificial intelligence (AI) decoder. The researchers produced a virtual bacterial colony, and controlled growth and the numbers and sizes of bacterial dots, to create an alphabet based on the colonies' appearance in a Petri dish. This emorfi encoding scheme encrypts messages by assembling a movie of a series of patterns, each correlating to a specific letter that a machine-learning algorithm can identify. Even interlopers with their own AI should be unable to crack the code, because decoding requires knowing the set of initial conditions that led to the code's creation, which only the intended receiver possesses.
 

Full Article