Dr. T's AI brief


dtau...@gmail.com

unread,
Sep 24, 2022, 7:58:13 AM
to ai-b...@googlegroups.com

DeepMind AI Learns Simple Physics Like a Baby
Nature
Davide Castelvecchi
July 11, 2022


Computer scientists at the DeepMind artificial intelligence (AI) research laboratory trained a software model to learn simple physical rules about object behavior. The researchers trained the Physics Learning through Auto-encoding and Tracking Objects (PLATO) neural network model using animated videos and images of objects like cubes and balls, in order for it to generate an internal representation of the physical properties of each object. The model learned patterns such as continuity, solidity, and persistence of shape. DeepMind's Luis Piloto said the software makes predictions at every step in the video, and its accuracy increases as the video progresses. Piloto suggested PLATO could be a first step toward AI that can test theories about how human infants learn.

Full Article

 

 

Researchers Tackle Longstanding 'Data Heterogeneity' Problem for Federated Learning
NC State University News
Matt Shipman
July 11, 2022


A federated learning approach developed by North Carolina State University (NC State) researchers enables artificial intelligence models to be developed faster and more accurately. Federated learning involves multiple devices (clients) each using their own data to train local models and send updates to a centralized server to create a hybrid model that ideally performs better than the clients on their own. However, heterogeneity in the clients' data can result in local model modifications that hurt the hybrid model's performance. In the new approach, the central server is given more information about relevant data characteristics in the form of Jacobian matrices, which are input into an algorithm that generates an improved model. Said NC State's Ryan Pilgrim, "The algorithm not only sidesteps multiple communication rounds, but also keeps divergent local updates from degrading the model."
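The exchange that such methods build on can be sketched in a few lines. Below is a minimal federated-averaging step in Python; this is only the standard baseline the article describes (clients train locally, the server merges), not the NC State Jacobian-based algorithm, which the article does not specify in detail:

```python
def fed_avg(client_weights, client_sizes):
    """Plain federated averaging: the server combines the clients' model
    parameters, weighted by each client's local dataset size. Heterogeneous
    client data is exactly where this simple merge can go wrong."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(len(client_weights[0]))
    ]

# Two clients with equal data volumes average to the midpoint.
merged = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 1])   # -> [2.0, 3.0]
```

The Jacobian matrices mentioned in the article give the server richer information than these bare parameter averages.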

Full Article

 

 

Solving Algorithm 'Amnesia' Reveals Clues to How We Learn
UCI News
July 6, 2022


University of California, Irvine (UCI) researchers have gained insights into how algorithms can absorb information more efficiently, which could yield clues to the brain's learning process. The researchers looked at artificial neural networks (ANNs) that mimic the behavior of neurons, but which tend to forget previously acquired knowledge when they ingest fresh knowledge too fast. The traditional solution involves retraining the network on the complete set of past data, whether or not it was closely related to the new data. Said UCI's Rajat Saxena, "When ANNs interleaved a much smaller subset of old information, including mainly items that were similar to the new knowledge they were acquiring, they learned it without forgetting what they already knew." Saxena said this implies an underlying brain mechanism for why experts at something can gain new knowledge of that subject faster than non-experts.
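The interleaving idea can be illustrated with a toy NumPy sketch: rather than replaying the complete set of past data, mix each new item with only its most similar old items. The cosine-similarity measure here is a hypothetical stand-in, not the UCI experimental code:

```python
import numpy as np

def interleaved_batch(new_x, old_x, k=2):
    """Replay sketch: interleave new training items with only the k most
    similar previously learned items (cosine similarity as a stand-in for
    whatever similarity measure a network or brain might use)."""
    new_n = new_x / np.linalg.norm(new_x, axis=1, keepdims=True)
    old_n = old_x / np.linalg.norm(old_x, axis=1, keepdims=True)
    sims = new_n @ old_n.T                       # new-vs-old similarity matrix
    keep = np.unique(np.argsort(-sims, axis=1)[:, :k])
    return np.vstack([new_x, old_x[keep]])       # small, targeted training mix

old = np.eye(4)                        # four old "memories"
new = np.array([[0.9, 0.1, 0.0, 0.0]])
batch = interleaved_batch(new, old, k=2)   # the new item plus 2 similar old ones
```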

Full Article

 

 

Researchers Defeat Facial Recognition Systems with Universal Face Mask
Help Net Security
Zeljka Zorz
July 12, 2022


Researchers at Israel's Ben-Gurion University of the Negev (BGU) and Tel Aviv University found that facial recognition (FR) systems may be thwarted by fabric face masks boasting adversarial patterns. The researchers employed a gradient-based optimization process to generate a universal perturbation and mask to falsely classify each wearer as an unknown identity. BGU's Alon Zolfi said, "The perturbation depends on the FR model it was used to attack, which means different patterns will be crafted depending on the different victim models." Zolfi suggested FR models could see through masked face images by training them on images containing adversarial patterns, by teaching them to make predictions based only on the upper area of the face, or by training them to generate lower facial areas based on upper facial areas.
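The core mechanics of a universal, gradient-optimized perturbation can be shown against a toy linear "matcher." This is a heavily simplified sketch under stated assumptions (a hand-picked linear scorer and a signed-gradient update); the BGU attack targets real deep FR models and optimizes a printable fabric pattern:

```python
import numpy as np

def universal_perturbation(w, xs, eps=1.0, lr=0.2, steps=10):
    """One shared perturbation, optimized by signed gradient descent to push
    every input's match score s = (x + delta) . w below zero, within an
    L-infinity budget eps. Toy linear 'face matcher', not a real FR model."""
    delta = np.zeros(xs.shape[1])
    for _ in range(steps):
        grad = len(xs) * w                       # d/d(delta) of the summed scores
        delta = np.clip(delta - lr * np.sign(grad), -eps, eps)
    return delta

w = np.array([1.0, -2.0, 0.5])                   # toy matcher weights
xs = np.array([[2.0, 0.0, 1.0],
               [1.0, -1.0, 0.0]])                # both currently "recognized"
d = universal_perturbation(w, xs)
scores = (xs + d) @ w                            # both pushed below the threshold
```

Because the same `delta` is applied to every input, it plays the role of the single mask pattern worn by all attackers.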

Full Article

 

 

Building Explainability into Components of ML Models
MIT News
Adam Zewe
June 30, 2022


Researchers at the Massachusetts Institute of Technology (MIT) and cybersecurity startup Corelight have developed a taxonomy to help developers create components of machine learning (ML) models that incorporate explainability. The researchers defined properties that make features interpretable to five types of users, along with instructions for engineering features into formats that will be easier for laypersons to understand. Key to the taxonomy is the precept that there is no universal standard of interpretability: the properties describe how explainable components can be for different decision-makers, and which properties are likely most valuable to each. MIT's Alexandra Zytek said, "The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with."

Full Article

 

 

How ML Can Identify Gun Buyers at Risk of Suicide
The Hill
Gianna Melillo
July 14, 2022


Research by University of California, Davis (UC Davis) scientists suggests machine learning could use handgun purchasing data to identify gun purchasers at risk of suicide. The researchers analyzed data from California's Dealer's Record of Sale database, applying random forest classification to assess the data's ability to predict those at an elevated suicide risk within a year of a gun purchase. They estimated a 0.07% rate of suicide for gun buyers within that year, and identified 41 risk factors for firearm suicide, including older age, being a first-time purchaser, white race, living close to the seller, and the purchase of a revolver. Said UC Davis' Hannah S. Laqueur, "This study contributes to the growing evidence that computational methods can aid in the identification of high-risk groups and the development of targeted interventions."
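The model family is standard. A sketch with scikit-learn (assumed installed) is below; the data here is synthetic stand-in data, not the Dealer's Record of Sale records or the published risk factors:

```python
# Random-forest risk scoring sketch; synthetic features, not the DROS data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # stand-in purchase/purchaser features
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)      # rare positive class

clf = RandomForestClassifier(
    n_estimators=100,
    class_weight="balanced",   # the outcome is very rare (~0.07%), so reweight
    random_state=0,
)
clf.fit(X, y)
risk = clf.predict_proba(X[:5])[:, 1]          # per-purchase risk scores in [0, 1]
```

Ranking purchases by `predict_proba` scores, rather than hard 0/1 predictions, is what makes such a model usable for identifying a high-risk group.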

Full Article

 

 

Researchers Create 'COVID Computer' to Speed Diagnosis
University of Leicester (U.K.)
July 1, 2022

An artificial intelligence tool developed by researchers at the U.K.'s University of Leicester detects COVID-19 with a high degree of accuracy by analyzing computed tomography (CT) chest scans. The software, which uses deep learning algorithms, can be deployed on portable devices. University of Leicester's Yudong Zhang said the tool "can find suspicious regions in the chest images automatically and make accurate predictions based on the representations." Zhang added that the accuracy of the tool means it "can be used in the clinical diagnosis of COVID-19, which may help to control the spread of the virus."
 

Full Article

 

 

Deep Learning Accelerates Detection of Live Bacteria Using TFT Arrays
UCLA California NanoSystems Institute
July 5, 2022


A team of scientists at the University of California, Los Angeles and Texas A&M University has developed an artificial intelligence (AI)-powered bacterial colony detection system using a thin-film transistor (TFT) array. The 27-millimeter (mm) x 26-mm array allows the system to capture the growth patterns of bacterial colonies without requiring scanning. The system analyzes the images it captures as a function of time, and automatically detects colony growth using a deep neural network. After a colony is detected, a second neural network classifies its bacteria species. The researchers were able to detect colonies of E. coli, Citrobacter, and Klebsiella pneumoniae with over 90% accuracy within nine hours, and classified the species of each colony within 12 hours.

Full Article

 

 

AI-Enhanced System to Track Players for Offside Calls at 2022 World Cup
PC Magazine
Emily Dreibelbis
July 5, 2022


The Federation Internationale de Football Association (FIFA) will use a semi-automated artificial intelligence system to help referees make offside calls during the 2022 World Cup. The system features 12 cameras mounted under the stadium roof, which will track 29 body position data points for each player 50 times per second, and a sensor in the ball to track its location. The system issues an alert if a player is determined to be offside, with a team of referees in a video room tasked with reviewing the alert and making the final determination of the call. Said FIFA's Pierluigi Collina, "Even the best referee can make mistakes and that's why we implemented a system that can really reduce the possibility that a human mistake can affect the outcome of the match."
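The geometric check behind an offside alert is simple once player and ball positions are tracked. A simplified one-dimensional sketch (attack runs toward larger x; the real system fuses 29 tracked body points per player and a ball sensor, and humans confirm the call):

```python
def is_offside(attacker_x, defender_xs, ball_x):
    """Simplified offside test at the moment of the pass: the attacker is
    offside if nearer the goal line than both the ball and the second-last
    defender (the goalkeeper usually being the last)."""
    second_last = sorted(defender_xs, reverse=True)[1]
    return attacker_x > max(second_last, ball_x)

is_offside(80.0, [90.0, 70.0, 60.0], 50.0)   # True: beyond the second-last defender
is_offside(65.0, [90.0, 70.0, 60.0], 50.0)   # False: defender at 70 is ahead
```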

Full Article

 

 

When Gamers Get Nasty
IEEE Spectrum
Michelle Hampson
July 4, 2022


Researchers at the Netherlands' Utrecht University have developed an artificial intelligence (AI) model that can identify toxic behavior among gamers in real time. The model is designed to screen out problematic language, or to help moderators decide on sanctions for players using such language. Participants viewed videos of gamers playing Overwatch and rated the toxicity of the game's voice chat, which highlighted what Utrecht's Julian Frommel called "the challenge of subjectivity" in automatically detecting toxicity. The researchers used this data to build and validate the AI model, which can forecast a match's toxicity with 86.3% accuracy. Frommel said the model incurs few computational costs, and "can be automated and applied as a noninvasive approach in many games that use in-game voice chats."

Full Article

 

 

Human-Robot-AI Teamwork Accelerates Regenerative Medicine
RIKEN (Japan)
June 28, 2022

Researchers at Japan's RIKEN Center for Biosystems Dynamics Research have created a robotic artificial intelligence (AI) system for regenerative medicine that can autonomously optimize conditions for growing replacement retina layers. The system uses a humanoid robot called Maholo to repeatedly perform precise series of movements and manipulations, and AI software to assess the results and design the next experiment. The software applies an optimization algorithm to ascertain which parameters should be adjusted, and how, to improve differentiation efficiency in the next experimental series. The researchers fed the protocols for cultivating retinal pigment epithelium cells from stem cells to Maholo, and the algorithm increased differentiation efficiency from 50% to 90% in 185 days (compared to the 30 months human scientists would have needed to accomplish this).
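The propose-experiment-evaluate loop can be sketched generically. The article does not name RIKEN's optimizer, so the hill-climbing strategy and the toy "experiment" below are illustrative assumptions only:

```python
import random

def optimize_protocol(run_experiment, params, n_rounds=50, seed=0):
    """Closed-loop sketch: propose a tweak to one protocol parameter, run the
    (robot-executed) experiment, keep the tweak if measured efficiency
    improves. Simple hill climbing, standing in for the real optimizer."""
    rng = random.Random(seed)
    best, best_score = dict(params), run_experiment(params)
    for _ in range(n_rounds):
        trial = dict(best)
        key = rng.choice(sorted(trial))
        trial[key] *= rng.uniform(0.8, 1.25)     # nudge one parameter
        score = run_experiment(trial)
        if score > best_score:
            best, best_score = trial, score
    return best, best_score

# Toy "experiment": differentiation efficiency peaks at concentration 2.0.
efficiency = lambda p: 1.0 - (p["conc"] - 2.0) ** 2
best, score = optimize_protocol(efficiency, {"conc": 1.0})
```

In the real system each `run_experiment` call is weeks of robot-executed cell culture, which is why squeezing information out of few experiments matters.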
 

Full Article

 

 

AI-Powered BirdNET App Makes Citizen Science Easier
Cornell Chronicle
Pat Leonard
June 28, 2022


The free machine learning-powered BirdNET application can identify more than 3,000 types of birds by the sounds they make. Cornell University's Stefan Kahl said, "Our guiding design principles were that we needed an accurate algorithm and a simple user interface." Users listen for birds and record their sounds on a smartphone via the app; BirdNET then employs artificial intelligence to automatically identify the bird’s species, and captures the recording for research. The researchers chose test-cases in the U.S. and Europe to test BirdNET's reliability, and found the app data accurately reproduced the established distribution pattern of song-types among white-throated sparrows, and the brown thrasher's seasonal and migratory ranges.
 

Full Article

 

Meta Outlines Efforts To Use AI To Improve Sustainability

SiliconANGLE (4/18, Dotson) reports Meta Platforms, “which has been working to use machine learning technologies to combat climate change and increase the efficiency of industrial systems,” published a document outlining the “approaches it’s taking using artificial intelligence to tackle these issues and develop elegant engineering solutions.” One initiative, the Open Catalyst Project, is a collaboration with Carnegie Mellon University’s Department of Chemical Engineering. The project “provides datasets for the discovery of chemical catalysts for building cheaper and scalable batteries for renewable energy networks.”

 

Intel Debuts “Brain-Like” Chip

The Register (UK) (4/15) discusses Intel’s brain-like Loihi chip technology, which “could aid with low-power AI tasks like speech recognition.” However, “What hasn’t been clear yet is how the chipmaker will make money from the experimental silicon.” The Register adds, “In a recent roundtable with journalists, Intel Labs lead Rich Uhlig offered two possibilities: integrating Loihi in a CPU for PCs to perform energy-efficient AI tasks and potentially offering its neuromorphic chips as a cloud service, although Uhlig was clear he wasn’t firming actual product plans, just projecting what could theoretically happen in the future.”

dtau...@gmail.com

Sep 25, 2022, 7:20:28 PM
to ai-b...@googlegroups.com

Deep Learning Method Worth Its Salt
UC News
Michael Miller
July 22, 2022


A multi-institutional team of researchers led by the University of Cincinnati's Yu Shi has developed a novel technique for modeling the thermodynamic properties of molten salts via deep learning artificial intelligence. Shi said the researchers trained a neural network on data produced by quantum simulations, which they used to estimate the free energy of molten sodium chloride. The research, according to Shi, offers a reliable way of studying the conversion of dissolved gas to vapor in molten salts, helping to understand how impurities and solutes affect corrosion. He added that the method also could help scientists analyze the emission of potentially toxic gas into the atmosphere, which will be useful for fourth-generation molten salt nuclear reactors.
 

Full Article

 

 

Teaching Machines to Really Hear Us
UVA Today
Eric Williamson
July 20, 2022


SITHCon, a deep-learning tool that enables existing artificial intelligence neural networks to hear people clearly, is the result of a collaboration between researchers at the University of Virginia (UVA), Boston University, and Indiana University. The algorithm applies a type of compression that can be unpacked on an as-needed basis, saving storage space by only storing the "memory" of a sound at a resolution that will be useful later. "We showed that SITHCon could generalize to speech scaled up or down in speed, whereas other models failed to decode information at speeds they didn't see at training," said UVA's Brandon Jacques. The researchers have opted to open source the SITHCon code, and say the approach should work with any neural network that decodes voice.
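The compression idea (recent past stored densely, distant past coarsely) can be illustrated with a log-spaced memory sketch in NumPy. This is an assumption-laden toy, not the SITHCon implementation:

```python
import numpy as np

def log_memory(signal, n_taps=8):
    """Compressed 'memory' of a 1-D signal: sample the past at exponentially
    spaced lags, so recent samples are kept at high resolution and older
    ones at progressively coarser resolution."""
    lags = np.unique(np.geomspace(1, len(signal), num=n_taps).astype(int)) - 1
    return signal[::-1][lags]                  # signal value `lag` steps back

x = np.arange(100.0)       # a 100-step signal
m = log_memory(x)          # a handful of taps, log-spaced in time
```

A time-rescaled input shifts activity across such a log-spaced axis rather than destroying it, which is what lets a downstream network generalize across speech speeds.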
 

Full Article

 

 

Engineers Develop Tool to Enable More Personalized Cell Therapies
University of Minnesota College of Science & Engineering
July 20, 2022

In a step toward more personalized cell therapies, engineers at the University of Minnesota, Twin Cities have developed a tool to forecast and tailor site-specific recombination DNA editing. The tool integrates high-throughput experiments with a machine learning (ML) model to boost efficiency and predictability, enabling researchers to control the DNA editing rate and the speed at which a therapeutic cell generates a drug or therapeutic protein. The researchers first devised an experimental method to calculate the rate of site-specific recombination, then fed that data to an ML algorithm. Researchers can type in a DNA sequence, and the model predicts its recombination rate. The model also can predict and program simultaneous production of multiple cellular proteins, which could be used to induce stem cells to produce new tissues or organs, or to generate multiple drugs in pre-programmed concentrations.
 

Full Article

 

 

AI Choir, Dancers Accompany Human Orchestra in Hong Kong Concert
South China Morning Post (Hong Kong)
Mabel Lui
July 21, 2022


Artificial intelligence (AI)-powered virtual ballet dancers and a 320-voice choir accompanied human musicians in a Hong Kong Baptist University (HKBU) symphony orchestra concert. The digitally generated voices sang under music director Johnny Poon Ming-lun's guidance, accompanied by an AI-produced video backdrop based on the lyrics. HKBU researchers trained algorithms on voice samples of the late pop singer Leslie Cheung Kwok-wing and eight other singers, then used human voices as reference to assemble the choir. Poon wore a motion capture suit so the AI "singers" could follow his movements and perform appropriately, and a similar approach was used to generate the virtual dancers. HKBU's Guo Yike said the goal of such research is not to create artistic technology that mimics humans, but to develop technology that can tap its own creativity and "understand humanity."
 

Full Article

 

 

Open Source Platform Enables Research on Privacy-Preserving ML
University of Michigan News
Zachary Champion
July 19, 2022


University of Michigan (U-M) researchers have open-sourced the largest benchmarking dataset for a privacy-shielding machine learning (ML) method to date. Federated learning trains ML models on end-user devices, rather than transferring private data to central servers. "By training in-situ on data where it is generated, we can train on larger real-world data," said U-M's Fan Lai. "This also allows us to mitigate privacy risks and high communication and storage costs associated with collecting the raw data from end-user devices into the cloud." The FedScale platform can model the behavior of millions of user devices using a few graphic processing units and central processing units, allowing ML model developers to evaluate model performance without large-scale deployments.

Full Article

 

 

'Earable' Uses Sonar to Reconstruct Facial Expressions
Cornell University Chronicle
Patricia Waldron
July 19, 2022


Cornell University researchers have developed EarIO, a wearable earphone device (earable) that can reconstruct the wearer's face using sonar. EarIO sends facial movements to a smartphone. A speaker on either side of the earphone transmits acoustic signals to the sides of the face, and a microphone detects the echoes, which change due to facial movements as wearers talk, smile, or raise their eyebrows. A deep learning algorithm processes and translates that data back into facial expressions. The earable can communicate with a smartphone via a wireless Bluetooth connection, maintaining the user's privacy.

Full Article

 

 

Nanomagnets Can Choose a Wine, Could Quench AI's Thirst for Energy
NIST News
July 18, 2022

Scientists at the U.S. National Institute of Standards and Technology, the University of Maryland, and Western Digital Technologies have developed artificial intelligence (AI) devices that could consume less energy and operate faster than other hardware. The researchers programmed a neural network from magnetic tunnel junctions (MTJs) and trained it to taste wines virtually. They used 148 of 178 wines produced from three grape varieties, with each wine possessing 13 properties; each property was assigned a value between 0 and 1 for the network to appraise when distinguishing between wines. The network then "tasted" a dataset that included 30 previously unknown wines, yielding a 95.3% success rate and committing just two errors on the untasted wines. The results indicate an array of MTJ devices could potentially be scaled up and assembled into new AI systems.
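The input encoding described (each of the 13 wine properties mapped to a value between 0 and 1) is ordinary column-wise min-max scaling, sketched below with made-up numbers rather than the actual wine dataset:

```python
import numpy as np

def to_unit_interval(X):
    """Scale each property (column) into [0, 1], matching how the wine
    features were encoded for the MTJ network to appraise."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

props = np.array([[12.0, 1.5],
                  [14.0, 3.5],
                  [13.0, 2.5]])          # e.g. alcohol content, acidity
scaled = to_unit_interval(props)         # every column now spans 0..1
```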
 

Full Article

 

 

The Never-Ending Quest to Predict Crime Using AI
The Washington Post
Pranshu Verma
July 15, 2022


Scientists continue to build crime-predicting artificial intelligence, despite a history of the technology disproportionately directing police against communities of color. Algorithms are trained to find patterns in crime reports, arrest records, and license plate images to anticipate where and when a certain type of crime will occur. However, New York University's Vincent Southerland said the software's accuracy reflects historically biased data skewed toward minorities and low-income neighborhoods. University of Chicago (UChicago) researchers recently claimed a new algorithm can forecast crime with "90% accuracy" by identifying locations in major cities with a high likelihood of crimes happening in the next week. UChicago's Ishanu Chattopadhyay said the software was intended to assess bias in policing across neighborhoods in eight major U.S. cities, revealing very different enforcement in affluent and poorer neighborhoods.

Full Article

*May Require Paid Registration

dtau...@gmail.com

Oct 1, 2022, 8:16:34 PM
to ai-b...@googlegroups.com

DeepMind Research Cracks Structure of Almost Every Known Protein
Financial Times
Madhumita Murgia
July 28, 2022


Scientists at DeepMind used its AlphaFold algorithm to predict the 3D shape of almost every known protein. The AlphaFold database encompasses more than 200 million predicted protein structures, covering nearly all organisms that have had their genomes sequenced. The new tool enables structural biologists to "look up a 3D structure of a protein almost as easily as doing a keyword Google search," said DeepMind's Demis Hassabis. Researchers can access the structures through a public database hosted by the European Bioinformatics Institute (EMBL-EBI).
 

Full Article

*May Require Paid Registration

 

 

Could ML Fuel a Reproducibility Crisis in Science?
Nature
Elizabeth Gibney
July 26, 2022


Princeton University's Sayash Kapoor and Arvind Narayanan warn of a "brewing reproducibility crisis" driven by increasing reliance on machine learning (ML) to base predictions on patterns in data. The researchers say ML is overhyped as an easily learned tool, and peer reviewers lack the time to vet these models, leaving academia with no mechanisms to eliminate irreproducible papers. Kapoor and Narayanan analyzed 20 reviews in 17 research fields, citing 329 papers whose results could not be fully replicated due to flawed ML application. They highlighted data leakage as the most significant problem, and have developed guidelines for avoiding such traps. The guidelines instruct researchers to include with their manuscripts evidence that their models do not have each of eight types of leakage, as well as suggest a template for such documentation.
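Data leakage, the problem the authors highlight most, is often as mundane as computing preprocessing statistics before splitting off the test set. A minimal NumPy illustration on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# Leaky: statistics computed on ALL rows, so information about the held-out
# test rows seeps into training-time preprocessing.
mu_leaky = X.mean(axis=0)

# Clean: fit preprocessing on the training split only, then apply it to the
# test split.
train, test = X[:150], X[150:]
mu_clean = train.mean(axis=0)
test_scaled = test - mu_clean        # test rows never influence mu_clean
```

The two means differ, and a model evaluated with the leaky version reports optimistically biased performance, which is exactly the kind of irreproducible result the guidelines ask authors to rule out.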
 

Full Article

 

 

Turing Award Winner Reflects on His Pioneering Algorithms
IEEE Spectrum
Kathy Pretz
July 27, 2022


ACM's 2021 A.M. Turing Award recipient Jack Dongarra conceived of what many deem world-changing software libraries that played a role in the expansion of high-performance computing in fields such as artificial intelligence (AI), data analytics, genomics, and healthcare. In an interview, Dongarra reflects on his pioneering algorithms, many of which were influenced by the work of previous winners. The software libraries "have basic components that are needed in many areas of science so that users can draw on those components to help them solve their computational problems,” Dongarra says. His current project is a software library for the Frontier supercomputer, which is housed at the U.S. Department of Energy's Oak Ridge National Laboratory and processes 1-quintillion-plus operations per second.
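Those components are a routine import away today. For example, NumPy's dense linear solver dispatches to LAPACK's `gesv` routine, part of the LINPACK/LAPACK lineage Dongarra co-authored:

```python
import numpy as np

# numpy.linalg.solve calls LAPACK's gesv under the hood: exactly the kind of
# reusable dense linear-algebra building block the interview describes.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)   # solves A @ x = b
```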
 

Full Article

 

 

Smart Microrobots Learn to Swim, Navigate with AI
New Jersey Institute of Technology
August 4, 2022


Researchers from Santa Clara University, the New Jersey Institute of Technology (NJIT), and the University of Hong Kong (HKU) trained microrobots to swim using artificial intelligence. The researchers combined artificial neural networks with reinforcement learning to teach a microrobot to swim and navigate in any direction. Each movement by the microrobot receives feedback on the action's correctness, so it learns how to swim by interacting with its surroundings. HKU's Alan Tsang said, "Similar to a human learning how to swim, the microswimmer learns how to move its 'body parts'—in this case three microparticles and extensible links—to self-propel and turn." The researchers enabled a microrobot to follow an intricate pathway without explicit programming, which NJIT's Yuan-nan Young called "Our first step in tackling the challenge of developing microswimmers that can adapt like biological cells in navigating complex environments autonomously."
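The learn-from-movement-feedback loop is the core of reinforcement learning. A tabular Q-learning toy on a 1-D track (a deliberate simplification; the paper uses neural networks and a three-particle swimmer model):

```python
import numpy as np

# Tabular Q-learning sketch: the "swimmer" starts at position 0 and must
# learn, purely from reward feedback on each move, that action 1 (move right)
# reaches the target. Optimistic initialization drives exploration.
rng = np.random.default_rng(0)
n_states, target = 10, 9
Q = np.ones((n_states, 2))                 # optimistic initial values
for _ in range(500):                       # training episodes
    s = 0
    for _ in range(50):                    # steps per episode
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(Q[s].argmax())
        s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s2 == target else -0.01
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2
        if s == target:
            break
policy = [int(Q[s].argmax()) for s in range(n_states - 1)]   # 1 = "swim right"
```

After training, the greedy policy moves right from every state, learned entirely from per-move reward feedback rather than explicit programming.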

Full Article

 

Algorithm Aces University Math Course Questions
MIT News
Adam Zewe
August 3, 2022


A multi-institutional team of researchers led by the Massachusetts Institute of Technology's Iddo Drori utilized a neural network model to solve university-level math problems in seconds. The researchers used OpenAI's Codex model, which was pretrained on text and "fine-tuned" on code, to learn how pieces of text and code relate to each other. The model can render text questions into code, given a few question-code examples, then run the code to solve the problem. The model also automatically explains its solutions, and can produce new problems in university math subjects which university students were unable to distinguish from human-generated questions. "This work opens the field for people to start solving harder and harder questions with machine learning," Drori said.
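The pipeline the article describes (question text rendered into code, then the code executed for the answer) can be shown with a hand-written stand-in for the model's output; no actual Codex call is made here:

```python
# Question -> code -> execute: the generated_code string stands in for what
# the fine-tuned Codex model would produce for this question.
question = "What is the sum of the squares of the first 10 positive integers?"
generated_code = "answer = sum(n**2 for n in range(1, 11))"

namespace = {}
exec(generated_code, namespace)   # running the code yields the solution
print(namespace["answer"])        # 385
```

Executing generated code, rather than asking a language model to state the answer directly, is what sidesteps the arithmetic mistakes text-only models make.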

Full Article

 

 

Hardware Offers Faster Computation for AI with Much Less Energy
MIT News
Adam Zewe
July 28, 2022


Researchers at the Massachusetts Institute of Technology (MIT) have developed an analog processor technology that uses inorganic phosphosilicate glass (PSG) to allow for faster computation with greater energy efficiency. Analog machine learning is enabled by increasing and decreasing the electrical conductance of protonic programmable resistors, controlled by the movement of protons into and out of a channel in the resistor. The researchers used PSG to create a programmable protonic resistor that is 1 million times faster than the researchers' previous fastest device; it also can operate at room temperatures using much less energy. Said MIT's Murat Onen, "Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft."

Full Article

 

 

Machines Can Learn from Fables
USC Viterbi School of Engineering
Habeeba Kamel
July 25, 2022


Researchers at the University of Southern California (USC) used Aesop's fables to teach AI to analogize. Previous attempts could not enable the AI to comprehend the implications of analogies or to make large-scale generalizations. The team used natural language processing to analyze the fables and to produce story pairs based on words and meanings in the text. USC's Jay Pujara said stories imparting moral lessons were chosen because often multiple fables teach the same lessons in different ways, creating a semantic meaning that differs from the fable's surface form.
 

Full Article

 

 

Dreaming Up Proteins, AI Churns Out Possible Medicines, Vaccines
Science
Robert F. Service
July 22, 2022


University of Washington (UW) researchers have unveiled artificial intelligence (AI) software that can design nonexistent proteins, including potential cancer treatments and a vaccine candidate for respiratory syncytial virus. The AI taps innovations in predicting three-dimensional protein structures from amino acid sequences, using the RoseTTAFold software to form proteins from scratch via inpainting and constrained hallucination. Inpainting gives the AI a starting point from which it can fill in additional protein components based on its understanding of protein-folding. Meanwhile, constrained hallucination establishes a goal, and the AI evolves a virtual protein to meet the goal. Both approaches yielded successful laboratory-cultured proteins.
 

Full Article

 

 

Think Your Street Needs a Redesign? Ask an AI
Bloomberg
Marie Patino
July 30, 2022


Brooklyn-based artist and former activist Zach Katz is using artificial intelligence (AI) laboratory OpenAI's DALL-E 2 system to redesign streets to be more friendly to pedestrians and bikes. DALL-E 2, the second iteration of the DALL-E neural network, can convert text into visual features and display how these features could relate to one another, thanks to a massive training dataset. Users can input text queries and request specific styles and features from the AI, although results can be biased against women and people of color due to a lack of diversity in its training data. Katz posts images of DALL-E-reimagined streets on Twitter, and has a backlog of requests for such reimaginings from around the world.

Full Article

 

 

Researchers Use Wearable Tech to Detect COVID-19 Before Onset of Symptoms
McMaster University (Canada)
Jesse Dorey
July 19, 2022


Researchers in Canada and Europe detected COVID-19 prior to symptom onset by pairing wrist-worn health devices with machine learning. The researchers analyzed more than 1,100 participants wearing a fertility tracker that monitors respiration, heart rate, heart rate variability, skin temperature, and blood flow at night while sleeping. The tracker was synchronized to a mobile application that recorded activity that might affect the central nervous system, as well as potential COVID-19 symptoms. Over 100 participants tested positive for the virus, and the tracker detected changes in all physiological markers during infection. The team trained an algorithm on data from 70% of the positive-testing participants, detecting 73% of confirmed cases as early as two days before symptom onset; tests on the remaining cohort yielded a 68% detection rate up to two days ahead.
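A simple way to see the underlying idea is per-signal anomaly detection against the wearer's own healthy baseline. The z-score rule below is an illustrative assumption, not the study's trained algorithm:

```python
import numpy as np

def flag_nights(baseline, recent, z_thresh=2.5):
    """Flag a night when any physiological signal deviates strongly from the
    wearer's personal baseline (z-score sketch of wearable anomaly detection)."""
    mu, sd = baseline.mean(axis=0), baseline.std(axis=0)
    z = np.abs((recent - mu) / sd)
    return (z > z_thresh).any(axis=1)

# Columns: resting heart rate, respiration rate; rows: nights.
baseline = np.array([[60.0, 14.0], [62.0, 15.0], [58.0, 16.0],
                     [61.0, 15.0], [59.0, 15.0]])
recent = np.array([[61.0, 15.0],      # ordinary night
                   [75.0, 18.0]])     # elevated: possible pre-symptomatic signal
flags = flag_nights(baseline, recent)
```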
 

Full Article

 

 

Using AI to Train Teams of Robots to Work Together
University of Illinois Grainger College of Engineering
Debra Levey Larson
July 19, 2022


University of Illinois Urbana-Champaign (UIUC) scientists have created a reinforcement learning framework to train multiple agents such as robots to collaborate. UIUC's Huy Tran said the researchers envisioned a decentralized scheme, while also focusing "on situations where it's not obvious what the different roles or jobs for the agents should be." He explained that they produced a utility function "that allows us to identify when an individual agent contributes to the global team objective." The algorithms also can detect when an agent or robot is performing a task that is not useful, and the researchers tested them on simulated games like Capture the Flag and StarCraft. Tran said such algorithms can be applied to real-world scenarios, like military surveillance or coordinated autonomous vehicle delivery.
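A utility that "identifies when an individual agent contributes to the global team objective" is often built as a counterfactual difference: team score with the agent's action minus team score without it. The sketch below is illustrative; the article does not specify the UIUC utility function:

```python
def contribution(global_utility, actions, i):
    """Counterfactual per-agent utility: team score with agent i's action
    minus team score with that action removed. Zero means the agent's task
    added nothing to the team objective."""
    without_i = list(actions)
    without_i[i] = None
    return global_utility(actions) - global_utility(without_i)

# Toy team objective: number of distinct jobs covered.
cover = lambda acts: len({a for a in acts if a is not None})
actions = ["scout", "scout", "defend"]
contribution(cover, actions, 0)   # 0: redundant, another agent already scouts
contribution(cover, actions, 2)   # 1: uniquely covers "defend"
```

A zero contribution is exactly the "agent performing a task that is not useful" case the algorithms are said to detect.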
 

Full Article

 

 

Sensing Platform Deployed at Controlled Burn Site, Could Help Prevent Forest Fires
Argonne National Laboratory
Jared Sagoff
July 26, 2022


Scientists from the U.S. Department of Energy's Argonne National Laboratory deployed the Sage platform to conduct a controlled burn on a Kansas prairie, as part of research that could be applied to forest fire prevention. Sage integrates multiple sensors with edge computing and embedded machine learning (ML) algorithms, enabling on-the-spot detection, monitoring, and analysis of the burned area. Sage uses the open-source Waggle wireless sensor platform; Argonne's Pete Beckman said, "It's basically as if Waggle is a cellphone, and Sage is the network that the phone uses to communicate plus the apps that run on it." The controlled burn generated a dataset about the progression of smoke and fire, which can be used to teach an ML algorithm to ascertain the behavior of other fires in real time.
 

Full Article

 

dtau...@gmail.com

Oct 2, 2022, 12:07:44 PM
to ai-b...@googlegroups.com

First Studies with Quantum Machine Learning at LHCb
University of Liverpool (U.K.)
August 4, 2022


The Data Processing & Analysis Project team, led by a researcher at the U.K.'s University of Liverpool, demonstrated the use of quantum machine learning (QML) to identify the charge of b-quark initiated jets at the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research). This marked the first time QML techniques were applied to hadronic jet charge identification. The researchers compared the performance of a variational quantum classifier, based on two different quantum circuits, with that of a deep neural network (DNN); they found the classical DNN performed only slightly better than the QML algorithms. However, QML reached optimal performance with a smaller number of training events, while the DNN's performance exceeded that of QML as the number of features grew.

Full Article

 

 

Researchers Taught Machines to Follow Lego Instruction Manuals
Gizmodo
Andrew Liszewski
August 3, 2022


Researchers at Stanford University have developed a learning-based framework that enables machines to interpret step-by-step instructions for building Lego models. Lego instruction manuals typically show semi-assembled models that change as bricks are added with each step. The Manual-to-Executable-Plan Network (MEPNet) compares previously generated three-dimensional models to the next iteration to determine where new pieces fit in each step. Because these differences can be too subtle to detect from scanned images of printed pages alone, the framework must infer where each new piece goes on its own. The process is made easier by the fact that MEPNet understands how Lego bricks fit together and their positional limitations, which helps narrow down potential attachment points in the semi-assembled model.

Full Article

 

 

Using AI to Control Digital Manufacturing
MIT News
Adam Zewe
August 2, 2022


An international team of researchers led by the Massachusetts Institute of Technology has taught a machine learning system with computer vision to monitor and control three-dimensional (3D) printing in real time. The researchers used simulations to train a neural network to adjust printing parameters to minimize errors, then applied it to a 3D printer. The machine-vision system employs two cameras focused on the printer's nozzle and uses light measurements to calculate the thickness of the printing material as it is extruded. The system printed objects with greater accuracy than all other 3D printing control methods evaluated by the researchers. "This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy," said MIT's Wojciech Matusik.

Full Article

 

 

AI May Come to the Rescue of Future Firefighters
NIST News
August 10, 2022

A new artificial intelligence model from researchers at the U.S. National Institute of Standards and Technology and Hong Kong Polytechnic University could warn firefighters of imminent combustion or flashovers. The Flashover Prediction Neural Network (FlashNet) model can predict flashovers as early as 30 seconds before eruption, with up to 92.1% accuracy. The researchers used graph neural networks to strengthen FlashNet and trained it on nearly 25,000 fire cases, then used another 16,000 for refinement and final testing. In digital simulations of over 41,000 fires in 17 types of buildings representing common U.S. residential floorplans, FlashNet outperformed five other machine learning-based models.
 

Full Article

 

 

An Open-Source Data-Science Toolkit for Energy: GridDS
Lawrence Livermore National Laboratory
August 2, 2022


GridDS, an open-source data-science toolkit, was designed to provide an integrated energy data storage and augmentation infrastructure, as well as a flexible and wide-ranging set of cutting-edge machine learning (ML) models. Researchers at the U.S. Department of Energy's Lawrence Livermore National Laboratory (LLNL) developed GridDS to train and validate ML models to help enhance the efficiency of distributed energy resources. The toolkit also is engineered to harness advanced metering infrastructure, outage management systems data, supervisory control data acquisition, and geographic information systems to predict energy requirements and to detect nascent grid failures. LLNL's Vaibhav Donde said, "GridDS can take general approaches, apply them to highly specific energy tasks, and evaluate and validate their performance."

Full Article

 

 

AI Model Recommends Personalized Fonts to Improve Digital Reading, Accessibility
University of Central Florida
Cara Cannon
August 11, 2022


Researchers at the University of Central Florida (UCF) and Adobe found that Adobe's FontMART machine learning model can expand digital information accessibility and improve individual reading experiences. The model offers personalized font recommendations that can improve reading speed by matching reader characteristics, such as font familiarity and age, with specific font characteristics, such as heavier weight. The researchers determined that when recommending a font, the reader's age plays a significant role, as older adults with weaker and variable eyesight generally find thicker font strokes easier to read. Said UCF's Ben Sawyer, "The future of readability is a device watching humans read and using their performance to tailor the format so that they read at their best. We look forward to the day when you can pick up a device, read, and receive information in a way that uniquely suits your needs."
 

Full Article

 

 

How the Secrets of Ancient Cuneiform Texts Are Being Revealed by AI
New Scientist
Alison George
August 3, 2022


Artificial intelligence (AI) is being used to read and translate cuneiform, helping to recreate ancient libraries by piecing together fragmented tablets and predicting missing text. The Fragmentarium, part of the Electronic Babylonian Literature project set up by Enrique Jiménez of Germany's Ludwig Maximilians University in 2018, is using algorithms to determine which tablet fragments belong together. The AI can predict missing segments and search for cuneiform signs in a database of fragments. As part of the project, 40,000 smashed tablet pieces at the British Museum in London are being photographed, and software is being used to stitch the images together so cuneiform experts can translate them. The entire collection of images will be made public by next year.

Full Article

*May Require Paid Registration

 

University Of Michigan Researcher Discusses Impact Of Language-Generation AIs On Science

Scientific American (5/4) reports “machine-learning algorithms that generate fluent language from vast amounts of text could change how science is done – but not necessarily for the better, says Shobita Parthasarathy, a specialist in the governance of emerging technologies at the University of Michigan in Ann Arbor.” In a report “published on 27 April, Parthasarathy and other researchers try to anticipate societal impacts of emerging artificial-intelligence (AI) technologies called large language models (LLMs).” These can “churn out astonishingly convincing prose, translate between languages, answer questions and even produce code.” The corporations building them “– including Google, Facebook and Microsoft – aim to use them in chatbots and search engines, and to summarize documents.” In an interview, Parthasarathy said, “I had originally thought that LLMs could have democratizing and empowering impacts. When it comes to science, they could empower people to quickly pull insights out of information: by querying disease symptoms for example, or generating summaries of technical topics. But the algorithmic summaries could make errors, include outdated information or remove nuance and uncertainty, without users appreciating this.”

 

Lockheed Martin Employing AI, Space Expertise To Help Fight Wildfires

The Denver Post (5/3, Kohler) reports Lockheed Martin Space is “tapping decades of experience of managing satellites, exploring space and providing information for the U.S. military to offer more accurate data quicker to ground crews” through artificial intelligence and working with the “U.S. Forest Service, university researchers and a Colorado state agency about how their technology could help.” One example, said Dan Lordan, senior manager for AI integration at Lockheed Martin’s Artificial Intelligence Center, is information gathered and managed by federal agencies on the types and conditions of vegetation across the country. Lockheed Martin uses satellite data from the ESA updated around every five days, and is working with Nvidia “to produce a digital simulation of a wildfire based on an area’s topography, condition of the vegetation, wind and weather to help forecast where and how it will burn.” After the fact, the companies “used the information about the Cameron Peak fire, plugging in the more timely satellite data on fuel conditions, and generated a video simulation that Lordan said was similar to the actual fire’s behavior and movement.”

 

AI Can Revolutionize Vehicle Maintenance

Automotive World (4/27, Lampinen) reports AI “is re-writing the rule book for vehicle diagnostics and helping to detect future faults before they cause a problem” as connected tech can “automatically arrange for the necessary servicing, ensuring the parts are in stock and a technician will be available at the right time.” This predictive maintenance ability “could prove a real profit driver” for fleets with thin margins; for AVs, “it could prove a necessity.” AW reports that Questar “is one of the leading innovators in this space” with “nearly two decades of expertise in vehicle diagnostics.”

dtau...@gmail.com

unread,
Oct 10, 2022, 8:20:07 AM10/10/22
to ai-b...@googlegroups.com

DeepMind Invents Faster Algorithms to Solve Tough Math Puzzles
Nature
Matthew Hutson
October 5, 2022


Researchers at artificial intelligence (AI) laboratory DeepMind have created an algorithm that can solve tough mathematical calculations with improved computing efficiency. The researchers designed the AlphaTensor algorithm to execute matrix multiplication, which entails multiplying numbers arranged in grids that might represent data. AlphaTensor incorporates reinforcement learning as well as tree search, a game-playing approach in which the AI probes the outcomes of branching possibilities while planning its next action. The researchers tested AlphaTensor on input matrices up to 5 x 5. In some cases it rediscovered shortcuts previously formulated by mathematicians, while in others it found new shortcuts of its own.
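A classic example of the kind of shortcut AlphaTensor searches for is Strassen's 1969 scheme, which multiplies two 2x2 matrices with seven scalar multiplications instead of the naive eight. The sketch below is standard textbook material, not DeepMind's code.

```python
# Strassen's algorithm for 2x2 matrices: seven multiplications (m1..m7)
# instead of eight, recombined by cheap additions and subtractions.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)   # the seven products
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],      # recombine into C = A x B
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Applied recursively to large matrices, saving one multiplication per 2x2 block compounds into substantial savings, which is why discovering new schemes of this kind matters.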
 

Full Article

 

 

Device Can Recognize 'Millions of Colors' in Machine Vision Advance
Northeastern University
Tanner Stening
October 5, 2022


Northeastern University researchers have constructed a device that uses artificial intelligence techniques to perceive "millions of colors" in what they call a massive step for machine vision technology. The researchers said the A-Eye device can analyze and process color with greater accuracy than existing machines. The team produced two-dimensional material whose quantum properties can process diverse colors when incorporated within an optical window. The material forms "transmissive windows" that substitute for filters. The researchers said A-Eye also can "accurately recognize and reproduce 'seen' colors with zero deviation from their original spectra" through machine learning algorithms. "A-Eye can continuously improve color estimation by adding any corrected guesses to its training database," they said.
 

Full Article

 

 

Rethinking the Computer Chip in the Age of AI
Penn Engineering Today
Devorah Fischler
September 29, 2022


A team of researchers from the University of Pennsylvania (Penn), Sandia National Laboratories, and Brookhaven National Laboratory has unveiled a computing architecture suited for artificial intelligence (AI). The researchers developed a transistor-free compute-in-memory (CIM) architecture where processing and storage happen in the same place, removing transfer time and minimizing energy consumption. The architecture, which builds on earlier work on a ferroelectric switching scandium-alloyed aluminum nitride semiconductor, could potentially perform up to 100 times faster than a conventional computing architecture. The design also performs on-chip storage, parallel search, and matrix multiplication acceleration. Penn's Xiwen Liu said the work "proves that we can rely on memory technology to develop chips that integrate multiple AI data applications in a way that truly challenges conventional computing technologies."
 

Full Article

 

 

Deep Learning Pioneer Geoffrey Hinton Receives Royal Medal
U of T News (Canada)
Chris Sasaki
August 24, 2022


The Royal Society awarded the Royal Medal to ACM A.M. Turing Award recipient Geoffrey Hinton of Canada's University of Toronto for his groundbreaking work in deep learning. The U.K.'s national academy of sciences said Hinton earned the medal for "pioneering work on algorithms that learn distributed representations in artificial neural networks and their application to speech and vision, leading to a transformation of the international information technology industry." In 1986, Hinton and his collaborators published foundational work on the backpropagation algorithm, a core learning mechanism of artificial neural networks and the basis of deep learning. Said Hinton, “I believe that the spectacular recent progress in large language models, image generation and protein structure prediction is evidence that the deep learning revolution has only just started.”
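The backpropagation mechanism itself fits in a few lines. The toy below — a single sigmoid neuron fit to one training example — is purely illustrative and not drawn from Hinton's work: it applies the chain rule to push the error gradient back through the neuron and update the weights.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.0          # parameters to learn
x, target = 1.0, 1.0     # a single training example
lr = 1.0                 # learning rate

for _ in range(200):
    y = sigmoid(w * x + b)        # forward pass
    dloss_dy = 2 * (y - target)   # derivative of the squared error
    dy_dz = y * (1 - y)           # derivative of the sigmoid
    grad = dloss_dy * dy_dz       # chain rule: gradient at the pre-activation
    w -= lr * grad * x            # gradient-descent weight update
    b -= lr * grad                # bias update

print(sigmoid(w * x + b))  # the prediction has moved close to the target 1.0
```

Deep networks repeat exactly this chain-rule step layer by layer, which is what makes the algorithm a "core learning mechanism."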

Full Article

 

 

Super-Fast Electric Car Charging, with Tailor-Made Touch
American Chemical Society
August 22, 2022


Idaho National Laboratory researchers used machine learning to develop fast-charging methods for electric vehicle batteries that do not damage the batteries or reduce their lifespans. The researchers trained the machine learning model using data on the condition of different lithium-ion batteries during their charging/discharging cycles; the data for each battery model then was used to identify and optimize new charging protocols, which were tested on real batteries. Said Idaho National Laboratory's Eric Dufek, "We've significantly increased the amount of energy that can go into a battery cell in a short amount of time. Currently, we're seeing batteries charge to over 90% in 10 minutes without lithium plating or cathode cracking."

Full Article

 

 

A Neuromorphic Chip for AI on the Edge
UC San Diego News Center
Ioana Patringenaru
August 17, 2022


An international team of researchers created the NeuRRAM neuromorphic chip to compute directly in memory and run artificial intelligence (AI) applications with twice the energy efficiency of platforms for general-purpose AI computing. The chip moves AI closer to running on edge devices, untethered from the cloud; it also produces results as accurate as conventional digital chips, and supports many neural network models and architectures. "The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said former University of California, San Diego researcher Weier Wan.

Full Article

 

 

Made-Up Words Trick AI Text-to-Image Generators
Discover
August 16, 2022

Columbia University's Raphaël Millière found that made-up words can trick text-to-image generators, raising questions about their security. Millière created nonsense words using the "macaronic prompting" technique, which involves combining parts of real words from different languages. For instance, the made-up word "falaiscoglieklippantilado," a combination of the German, Italian, French, and Spanish words for "cliff," generated images of cliffs when input into the DALL-E 2 text-to-image generator. Millière said, "The preliminary experiments suggest that hybridized nonce strings can be methodically crafted to generate images of virtually any subject as needed, and even combined together to generate more complex scenes." However, Millière noted, "In principle, macaronic prompting could provide an easy and seemingly reliable way to bypass [content] filters in order to generate harmful, offensive, illegal, or otherwise sensitive content, including violent, hateful, racist, sexist, or pornographic images, and perhaps images infringing on intellectual property or depicting real individuals."
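For illustration, the article's example word can be reconstructed by splicing fragments of the source-language words for "cliff." The segmentation below is my guess — only the final nonce word appears in the article.

```python
# One plausible split of "falaiscoglieklippantilado" into fragments of
# cliff-words (the exact fragments used by Millière are an assumption):
fragments = ["falaise"[:6],      # French "falaise"      -> "falais"
             "scogliera"[1:7],   # Italian "scogliera"   -> "coglie"
             "klippan"[:7],      # Germanic "Klippe/klippa" -> "klippan"
             "acantilado"[4:]]   # Spanish "acantilado"  -> "tilado"
nonce = "".join(fragments)
print(nonce)  # falaiscoglieklippantilado
```

Because the model's text encoder was trained on multilingual data, such spliced fragments can still activate the underlying concept even though the whole string matches no real word — which is what makes the technique a filter-evasion concern.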
 

Full Article

 

 

Your Smartphone Could Recognize You by How You Hold It
New Scientist
Jeremy Hsu
August 16, 2022


A group of researchers at France's Toulouse Institute of Computer Science Research trained an artificial intelligence (AI)-powered algorithm to identify persons based on patterns in their hand vibrations when holding a smartphone. Over 200 volunteers each held a smartphone while its sensors gathered data during either a 30-second session or a series of sessions. The AI analyzed roughly half of this data to formulate how best to distinguish individual users, then was tested on the other half to identify specific persons. The algorithm identified test subjects correctly about 92.5% of the time within 1.5 seconds across 1,100 tests.

Full Article

 

 

Uncovering Nature's Patterns at the Atomic Scale in Living Color
Argonne National Laboratory
Joseph E. Harmon
August 15, 2022


Researchers led by the U.S. Department of Energy's Argonne National Laboratory (ANL) have invented a technique for producing color-coded graphs of large datasets from X-ray diffraction. The tool uses computational data sorting to identify clusters related to physical properties, which should ramp up research on atomic-level structural changes caused by varying temperature. The X-ray Temperature Clustering method uses unsupervised machine learning processes developed at Cornell University. Analysis of two crystalline materials showed they become superconducting at temperatures near absolute zero, and manifest other unusual properties at higher temperatures caused by structural changes. "What might have taken us months in the past, now takes about a quarter hour, with much more fine-grained results," said ANL's Raymond Osborn.
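The clustering step can be pictured with a toy version: assign each diffraction peak a single feature (say, its low-temperature intensity gain) and group the peaks by that value. The two-means routine below is illustrative only and is not the actual X-ray Temperature Clustering algorithm.

```python
def two_means(points, iters=20):
    """Cluster 1-D feature values into two groups (toy k-means, k=2)."""
    centers = [min(points), max(points)]          # crude initialization
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for p in points:                          # assign to nearest center
            nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else c    # move centers to the means
                   for g, c in zip(groups, centers)]
    return centers, groups

# Hypothetical peaks: two barely change with temperature, two change sharply.
gains = [0.1, 0.15, 0.9, 1.0]
centers, groups = two_means(gains)
print(centers)  # one center per behavior class
```

Real diffraction datasets have millions of such trajectories, which is why automating this grouping collapses months of manual sorting into minutes.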
 

Full Article

 

 

AI Model Can Detect Parkinson's from Breath Patterns
MIT News
Alex Ouyang
August 22, 2022


A multi-institutional group of researchers created an artificial intelligence (AI) model that can detect Parkinson's disease from a person's breathing patterns. The Massachusetts Institute of Technology (MIT)'s Yuzhe Yang and Yuan Yuan trained the AI model, a neural network, to determine the disease's presence from nocturnal breathing, and to rate its severity and track its progression over time. The model was implemented in a device that emits radio signals, analyzes their reflections from its surroundings, and extracts the subject's breathing patterns without bodily contact; the researchers feed those patterns to the neural network, which then evaluates Parkinson's passively.

Full Article

 

dtau...@gmail.com

unread,
Oct 15, 2022, 7:12:40 PM10/15/22
to ai-b...@googlegroups.com

More Accurate Assessments of Hurricane Damage for Responders
Ohio State News
Jeff Grabmeier
October 6, 2022


A machine learning model developed by Ohio State University (OSU) researchers can better forecast building damage from hurricanes, and may soon assist emergency responders. The model can construct building footprints from pre-hurricane satellite imagery, then compare them with post-storm images. OSU's Desheng Liu and Polina Berezina tested the model on data from Hurricane Michael in 2018, and it yielded an 86.3% accurate damage assessment in one region of Florida, outperforming the support vector machine model (SVM) by 11%. Said Liu, "The SVM struggled to distinguish between minor and major damage, which can be a major issue for teams responding after a hurricane." Liu said the model could rate the likelihood that individual buildings are in a certain damage class to help guide emergency management and first responders to initial sites.
 

Full Article



AI Can Learn the Patterns of Human Language
MIT News
Adam Zewe
August 30, 2022


Researchers at Massachusetts Institute of Technology, Cornell University, and McGill University developed an artificial intelligence model that can learn the rules and patterns of human languages automatically, without specific human guidance. The model was trained and tested on problems from linguistic textbooks in 58 different languages that involved word-form changes. The researchers observed that the model could determine a correct set of rules to describe the word-form changes for 60% of the problems. Said Cornell's Kevin Ellis, "One of the things that was most surprising is that we could learn across languages, but it didn't seem to make a huge difference. That suggests two things. Maybe we need better methods for learning across problems. And maybe, if we can't come up with those methods, this work can help us probe different ideas we have about what knowledge to share across problems."

Full Article

 

The Animal Translators
The New York Times
Emily Anthes
August 30, 2022


Researchers are using machine learning (ML) systems to decode animal communication. Scientists at Germany's Max Planck Institute for Brain Research used ML algorithms to analyze 36,000 mole rat chirps in seven colonies, identifying unique vocal signatures for each mole rat, as well as a distinct dialect for each colony. The multi-institutional Project CETI (Cetacean Translation Initiative) hopes to decipher the communication of sperm whales through the efforts of ML specialists, marine biologists, roboticists, linguists, and cryptographers. The project will involve recording whale sounds and movements via underwater microphones, robotic fish, and acoustic tags. Other projects aim to build technologies that enable human-animal communication, with Hunter College's Diana Reiss envisioning "a Google Translate for animals."

Full Article

*May Require Paid Registration

 

 

Driving Simulations That Look More Lifelike
Ohio State News
August 29, 2022


Ohio State University (OSU) researchers trained a Generative Adversarial Network (GAN) to create random photorealistic three-dimensional scenery for driving simulations. The researchers based their simulator on the open-source CARLA tool, then applied a GAN-based image synthesizer to render and integrate background elements with more conventionally rendered objects. The researchers determined that synthesizing the background scenery separately from the foreground objects, then compositing the two, produced a more lifelike image. OSU's Ekim Yurtsever said, "We can never actually replace real-world testing, but if we can make simulations a little bit better, we can get better insight on how we can improve autonomous driving systems and how we interact with them."

Full Article

 

 

Quantum AI Breakthrough: Theorem Shrinks Appetite for Training Data
Los Alamos National Laboratory
August 23, 2022


A proof devised by a multi-institutional team of scientists demonstrates that quantum neural networks can train on minimal data. "The need for large datasets could have been a roadblock to quantum AI [artificial intelligence], but our work removes this roadblock," said Patrick Coles at the U.S. Department of Energy's Los Alamos National Laboratory (LANL). Coles said quantum AI training occurs in a mathematical construct called a Hilbert space, and the theorem shows that navigating this space requires only as many data points as the number of parameters in a given model. The researchers could ensure that a quantum model can be compiled in far fewer computational gates relative to the volume of data. LANL's Marco Cerezo said, "We can compile certain very large quantum operations within minutes with very few training points—something that was not previously possible."

Full Article

 

 

Chinese Game Company Appoints Humanoid Robot as CEO
Interesting Engineering
Nergis Firtina
September 1, 2022


Chinese mobile game company NetDragon Websoft has appointed an artificial intelligence (AI)-supported virtual human named Tang Yu as its CEO. The company said Ms. Tang Yu will serve as a real-time data center and analytics tool for the company's board, support decision-making during daily operations, and promote a fair working environment for employees. NetDragon founder Dejian Liu said, "We believe AI is the future of corporate management, and our appointment of Ms. Tang Yu represents our commitment to truly embrace the use of AI to transform the way we operate our business and ultimately drive our future strategic growth.”

Full Article

 

 

Optimizing Fluid Mixing with Machine Learning
Tokyo University of Science (Japan)
August 29, 2022


Researchers in Japan have proposed a machine learning-based approach for optimizing fluid mixing for laminar flows. The researchers used reinforcement learning (RL), in which intelligent agents perform actions in an environment to maximize the cumulative reward. The team addressed RL's inefficiencies in dealing with systems involving high-dimensional state spaces by describing fluid motion using only a single parameter. Researchers used the Markov decision process to formulate the RL algorithm, and the Tokyo University of Science's Masanobu Inubushi said the program "identified an effective flow control, which culminated in an exponentially fast mixing without any prior knowledge." The RL method also enabled effective transfer learning of the trained "mixer," significantly reducing its time and training cost.
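The Markov decision process formulation can be illustrated at toy scale. The two-state "mixer" below — its states, actions, rewards, and the value-iteration solution — is entirely hypothetical; it shows only the MDP machinery, not the fluid dynamics.

```python
# Hypothetical two-state mixer MDP: from the unmixed state, rotating the
# flow clockwise mixes the fluid (reward 1); everything else earns nothing.
P = {  # P[state][action] = (next_state, reward)
    "unmixed": {"cw": ("mixed", 1.0), "ccw": ("unmixed", 0.0)},
    "mixed":   {"cw": ("mixed", 0.0), "ccw": ("mixed", 0.0)},
}
gamma = 0.9                     # discount factor
V = {s: 0.0 for s in P}         # state values

for _ in range(50):             # value iteration to convergence
    V = {s: max(r + gamma * V[s2] for s2, r in P[s].values()) for s in P}

# Greedy policy with respect to the converged values
policy = {s: max(P[s], key=lambda a: P[s][a][1] + gamma * V[P[s][a][0]])
          for s in P}
print(policy["unmixed"])  # "cw" -- the action that mixes the fluid
```

Reducing the real flow field to a single control parameter, as the researchers did, keeps the state space small enough for this kind of machinery to remain tractable.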

Full Article

 

 

French Tax Officials Use AI to Spot 20,000 Undeclared Pools
The Guardian (U.K.)
Kim Willsher
August 29, 2022


French tax officials used artificial intelligence (AI) software from Google and Capgemini to identify over 20,000 undeclared private swimming pools, representing about €10 million (over $10 million) in tax receipts. The AI system can spot pools in aerial images and cross-checks them with land registry databases to determine which may not have been declared. The tax authorities' technical team said it cannot yet establish whether a rectangular configuration on an aerial image is an extension or a tent, terrace, or tarpaulin placed on the ground. Antoine Magnant, France's deputy director general of public finances, said tests of the system are only in their second stage.

Full Article

 

 

Robot Dog Learns to Walk Tough Terrain in 20 Minutes
New Scientist
Alex Wilkins
August 26, 2022


Researchers at the University of California, Berkeley (UC Berkeley) developed a machine learning algorithm that enabled a robot dog to learn to navigate difficult terrain in only 20 minutes. The Q-learning algorithm does not need a model of the target terrain. As a result, said UC Berkeley's Sergey Levine, "We don't need to understand how the physics of an environment actually works, we just put the robot into an environment and turn it on." The algorithm teaches the robot by rewarding it for each successful action until reaching its ultimate goal. The researchers demonstrated that the robot was able to walk on terrains it had not previously encountered, including grass, a layer of bark, a memory foam mattress, and a hiking trail, after about 20 minutes of training on each.
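Tabular Q-learning, the family this algorithm belongs to, fits in a short sketch. The toy below — a walker on a five-cell track rewarded only at the goal — is illustrative and is not the Berkeley controller, but it shows the reward-driven update the article describes.

```python
import random

random.seed(0)                     # deterministic toy run
n_states = 5                       # cells 0..4; cell 4 is the goal
actions = ["left", "right"]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(300):               # training episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        s2 = max(0, s - 1) if a == "left" else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0    # reward only at the goal
        # Q-learning update: bootstrap from the best action at the next state
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda x: Q[(s, x)]) for s in range(n_states - 1)]
print(policy)  # the learned policy walks right toward the goal
```

Because the update uses only observed transitions and rewards, no model of the terrain (here, the track) is ever needed — the model-free property the article highlights.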

Full Article

 

 

AI-Created Lenses Let Camera Ignore Some Objects
New Scientist
Matthew Sparkes
August 23, 2022


University of California, Los Angeles researchers developed a deep-learning artificial intelligence (AI) model to design three-dimensionally (3D) printed plastic camera lenses that capture images of certain objects while ignoring others in the same frame. The researchers trained the model using thousands of images of numbers, designated either as target objects to appear in images or objects to ignore. The model was told when images that were supposed to reach the camera's sensor did and did not pass through a trio of lenses, and when images that were not supposed to reach the sensor did. The AI used the data to improve its lens design. The completed lenses use complex patterns printed into the plastic to diffract away light relating to objects that are not designated to appear in the final image. Unwanted objects are not captured digitally, so they do not need to be edited out of the image.

Full Article

 

Intel Announces New Processors, Including AI Chips, As It Challenges Nvidia And AMD

Bloomberg (5/10, King) reports Intel on Tuesday rolled out “new processors designed for data centers, a lucrative market where it’s facing tougher competition” from Nvidia and AMD. The new lineup “will include updated artificial intelligence chips, fresh versions of Intel’s Xeon processors and semiconductors that help connect telecommunications networks,” and “for the first time, the company also will sell graphics chips designed for data centers, challenging Nvidia on its own turf.” Intel CEO Pat Gelsinger “is announcing the products at a company event in Dallas on Tuesday.”

        Reuters (5/10, Scheer, Lanhee Lee) reports that Intel “launched a new chip called Gaudi2 focused on artificial intelligence computing, as the chip manufacturer makes a bigger push into the AI chip market currently dominated by Nvidia.” Gaudi2 “is the second generation processor by Habana Labs, an Israeli AI chip startup Intel bought in late 2019 for about $2 billion.” Taking market share from Nvidia “has been a challenge as many AI researchers and companies have been accustomed to using Nvidia’s software platform CUDA,” so “in addition to new chips for AI computing, Intel told Reuters it has been focusing on its software development.” Intel “also launched a new chip called Greco for inferencing work: taking an AI algorithm and making a prediction or identifying an object.”

        The Register (UK) (5/10, Martin) reports, “Gaudi2 is built on a 7nm process, a major improvement in nodes from Gaudi’s 16nm, packing 24 Tensor Processor Cores and 96GB of HBM2e high-bandwidth memory, as well as tripling the number of cores and amount of HBM2 memory of the chip’s predecessor.” Memory bandwidth likewise has “nearly tripled to 2.45TB/s while SRAM has doubled to 48 MB. Throughput is also significantly improved, expanding the chip’s networking capacity to 24 100 GbE ports from the first Gaudi’s 10 100 GbE ports.” Greco also moves “to a 7nm process from its predecessor’s 16nm process,” and “while the 16GB memory capacity is the same as the first-generation chip, Greco is moving to LPDDR5 from Goya’s DDR4, which significantly increases the memory bandwidth to 204GB/s from 40GB/s.” In contrast to Gaudi2, “Greco is getting smaller, moving to a single-slot half-height, half-length PCIe form factor from Goya’s dual-slot form factor. And it requires less power, with a thermal design power of 75 watts compared to the 200 watts for the first-generation chip.”

 

Research Shows Instruction, Annotations Can Bias AI Systems

TechCrunch (5/8, Wiggers, Coldewey) reports a new study found that bias in AI systems can “start with the instructions given to the people recruited to annotate data from which AI systems learn to make predictions.” The research found that annotators often “pick up on patterns in the instructions, which condition them to contribute annotations that then become over-represented in the data, biasing the AI system.” This phenomenon “suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought.”

 

WPost Calls For Guidelines On Use Of AI In Government Surveillance

The Washington Post (5/5) writes in an editorial that Congress should address the future possible uses of facial recognition by companies such as Clearview AI to avoid a “world where everyone is being watched.” No federal law “regulates facial recognition, though some cities and states have passed restrictions, and the need for rules that apply everywhere has never been clearer.” These “strictures could and should allow authorities to harness this tool for public safety while still respecting civil liberties. What private actors can do with facial recognition and other biometric identification tools also must be constrained. But those still-missing guidelines should also arrive in tandem with another legislative goal legislators can’t seem to score.”

 

Clearview AI Agrees To New Privacy Terms In Court Settlement

The AP (5/9, Foody, O'Brien) reports, “Facial recognition startup Clearview AI has agreed to restrict the use of its massive collection of face images to settle allegations that it collected people’s photos without their consent.” In a legal filing, the company “agreed to permanently stop selling access to its face database to private businesses or individuals around the U.S., putting a limit on what it can do with its ever-growing trove of billions of images pulled from social media and elsewhere on the internet.” Clearview also agreed “to stop making its database available to Illinois state government and local police departments for five years. The New York-based company will continue offering its services to federal agencies, such as U.S. Immigration and Customs Enforcement, and to other law enforcement agencies and government contractors outside of Illinois.” If approved by a judge in Chicago, this settlement would “end a 2-year-old lawsuit brought by the American Civil Liberties Union and other groups over alleged violations of an Illinois digital privacy law. The company still faces a separate privacy case before a federal judge in Illinois.”

        The Washington Post (5/9, Harwell) reports, “Clearview, based in New York, has argued in court that the Illinois law restricted the company’s ability to collect and analyze public information – and, therefore, violated its First Amendment-protected freedom of speech.”

        The Hill (5/9, Rodrigo) reports that Deputy Director of the ACLU Speech, Privacy, and Technology Project Nathan Freed Wessler said in a statement, “By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse. ... Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

dtau...@gmail.com

unread,
Oct 16, 2022, 7:58:11 AM10/16/22
to ai-b...@googlegroups.com

UW Computer Science Professor Wins MacArthur 'Genius Grant'
The Seattle Times
Amanda Zhou
October 12, 2022


University of Washington (UW) computer science professor Yejin Choi has been selected to receive one of this year's MacArthur Foundation “genius grant” fellowships. Choi applies natural language processing to develop artificial intelligence (AI) systems capable of understanding implied meanings in human languages. She has created models that absorb text and image inputs and has used computational linguistics to help AI detect sentiment or misleading intent in writing. Her team of researchers applied the latter concept to identify false online consumer reviews and to categorize news stories as hoaxes, satirical, or trustworthy.
 

Full Article

 

 

AI Offers Insight into Who Responds to Antidepressants
IEEE Spectrum
Michelle Hampson
October 10, 2022


Machine-learning algorithms developed by scientists at the New York Institute of Technology (NYIT) and Canada's McMaster University analyze the electrical activity of the brain to predict responses to antidepressants with high accuracy. Researchers analyzed electroencephalogram data from 228 patients with depression before treatment, who were randomly assigned to receive a placebo or Sertraline. The algorithms forecast Sertraline response with 83.7% accuracy and placebo response with 83% accuracy.
 

Full Article

 

 

Anonymous Data Doesn't Mean Private
Illinois Institute of Technology
Casey Moffitt
October 6, 2022


Illinois Institute of Technology (Illinois Tech) researchers used machine learning and artificial intelligence algorithms to exfiltrate personal information from anonymous cellphone data. The neural-network model estimated the gender and age of individual users via their private communications by tapping data from a Latin American cellphone company. The algorithm guessed individuals' gender with 67% accuracy and age with 78% accuracy, which significantly outperformed current models. Researchers employed commonly accessible computing equipment to extract this information, and although the dataset they used was not publicly available, malefactors could compile a similar dataset by capturing data through public Wi-Fi hotspots or by targeting service providers' computing infrastructure, said Illinois Tech's Vijay K. Gurbani.
 

Full Article

 

 

Exploring Reinforcement Learning to Control Nuclear Fusion Reactions
Carnegie Mellon University News
Aaron Aupperlee
September 8, 2022


Carnegie Mellon University (CMU) doctoral candidate Ian Char, the first CMU researcher to run an experiment on the DIII-D National Fusion Facility's tokamak machine, demonstrated that reinforcement learning algorithms can control the rotation of the machine's hydrogen plasma. Char developed two algorithms: one was trained using data from the tokamak on how the plasma reacts, while the other calculates the rate and direction at which to add hydrogen particles to affect the speed of the plasma's rotation. Said CMU's Jeff Schneider, "This work shows a path to using reinforcement learning to control other parts of the plasma state and ultimately achieve the temperatures and pressures long enough to have a power plant. That would mean limitless, clean energy for everyone."

Full Article

 

 

Collaborative ML That Preserves Privacy
MIT News
Adam Zewe
September 7, 2022


Researchers at the Massachusetts Institute of Technology (MIT) and MIT-originated startup DynamoFL have enhanced federated learning to better train a privacy-preserving machine learning model. Federated learning involves hundreds or thousands of users training their own model with their own data on their own device, then transferring the models to a central server that pools them into a better model. The researchers' FedLTN system improves the accuracy of the combined model while shrinking its size, which accelerates communication between users and the central server while ensuring each user receives a model tailored for their environment. FedLTN follows the lottery ticket hypothesis, positing that within large neural network models are smaller subnetworks that can realize the same performance. The researchers reduced model size by nearly an order of magnitude versus other methods, yielding four- to six-fold lower communication costs for individual users, and boosting model accuracy 10%.
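The pooling-and-shrinking loop described above can be sketched in a few lines. This is a toy illustration of the general recipe (federated averaging plus lottery-ticket-style magnitude pruning) with invented numbers; it is not the FedLTN implementation, and all function names are hypothetical.

```python
# Toy sketch: federated averaging plus magnitude pruning, illustrating the
# general idea behind lottery-ticket-style model shrinking in federated
# learning. All names and numbers are illustrative, not the FedLTN code.

def local_update(weights, gradient, lr=0.1):
    """One client step of plain gradient descent on its private data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_models):
    """Server averages the client models element-wise into a combined model."""
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

def prune_smallest(weights, keep_fraction=0.5):
    """Lottery-ticket-style step: zero the smallest-magnitude weights,
    keeping only the strongest subnetwork (and shrinking what must be sent)."""
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Two clients start from the same model and take one local step each;
# the server then averages and prunes.
global_model = [0.5, -1.2, 0.03, 2.0]
clients = [
    local_update(global_model, [0.1, -0.2, 0.05, 0.4]),
    local_update(global_model, [-0.3, 0.1, 0.02, 0.1]),
]
global_model = prune_smallest(federated_average(clients))
```

In this toy, half the averaged weights are zeroed, which is the flavor of saving, at a vastly smaller scale, behind the reported order-of-magnitude model-size reduction.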

Full Article

 

 

Next Generation of Hearing Aids Reads Lips Through Masks
University of Glasgow (U.K.)
September 7, 2022

An international team led by researchers at the U.K.'s University of Glasgow developed a system that can read lips accurately even through face masks using radio-frequency (RF) sensing and artificial intelligence. The researchers had volunteers repeat vowel sounds while masked and unmasked, with RF signals from a dedicated radar sensor and a Wi-Fi transmitter used to scan their faces while speaking and while still. The resulting 3,600 samples of data were used to train machine learning and deep learning algorithms to recognize the lip and mouth movements related to each vowel sound. Because RF signals can pass through masks, the machine learning and deep learning algorithms were able to learn to identify vowel formation by masked speakers.
 

Full Article

 

 

Revolutionizing Image Generation by AI: Turning Text into Images
Ludwig-Maximilian University of Munich (Germany)
September 1, 2022


Scientists at Germany's Ludwig-Maximilian University of Munich (LMU Munich) have developed an algorithm that can convert text into images in seconds. The Stable Diffusion artificial intelligence (AI) model can accomplish this with a conventional graphics card and without the need to use supercomputers. The researchers trained the model on the servers of their startup Stability.Ai, and the algorithm refines the essence of billions of training images into an AI model of just a few gigabytes. Said LMU Munich's Björn Ommer, "Once such AI has really understood what constitutes a car or what characteristics are typical for an artistic style, it will have apprehended precisely these salient features and should ideally be able to create further examples, just as the students in an old master's workshop can produce work in the same style."

Full Article

 

 

AI Can Better Monitor Maine's Forests
UMaine News
September 1, 2022

A study by researchers at the universities of Maine (UMaine), New Hampshire, and Vermont applied artificial intelligence (AI) and machine learning to monitor soil moisture more efficiently. The researchers designed a wireless sensor network that uses AI to learn to monitor soil moisture and process the data with greater power efficiency. The software learns over time how to best employ available network resources, which helps generate power-efficient systems at reduced cost for large-scale monitoring. "AI can learn from the environment, predict the wireless link quality and incoming solar energy to efficiently use limited energy, and make a robust low-cost network run longer and more reliably," said UMaine's Ali Abedi.
 

Full Article

 

 

English Bias in Computing: Images to the Rescue
University of Copenhagen (Denmark)
August 31, 2022

An image-based benchmark could overcome cultural bias stemming from machine learning (ML) training datasets being written in English. An international group of researchers led by Denmark's University of Copenhagen (KU) developed the Image-Grounded Language Understanding Evaluation (IGLUE) tool, which can score an ML solution's efficiency in 20 languages. Image labels in ML are typically in English, while IGLUE covers 11 language families, nine scripts, and three geographical macro-areas. IGLUE's images feature culture-specific components supplied by volunteers in geographically diverse countries in their natural language. KU's Emanuele Bugliarello said the researchers hope IGLUE's underlying methodology could improve solutions "which help visually impaired in following the plot of a movie or another type of visual communication."
 

Full Article

 

 

City Digital Twins Help Train Deep Learning Models to Separate Building Facades
Research at Osaka University (Japan)
September 5, 2022

Researchers at Japan's Osaka University used images generated automatically by digital city twins to train a deep learning model that can accurately separate out the building facades in an image. The researchers used a three-dimensional city model from the PLATEAU platform to create the synthetic digital city twin data. They input the model into the Unity game engine and drove a virtual car with a camera setup around the city to gather virtual images in various lighting and weather conditions. Real street-level images were incorporated from Google Maps API. The researchers found segmentation accuracy was improved by adding synthetic data to the real dataset, and in terms of performance, the model can compete with a deep convolutional neural network trained on 100% real data. Said Osaka University's Tomohiro Fukuda, "These results reveal that our proposed synthetic dataset could potentially replace all the real images in the training set."
 

Full Article

 

 

A Novel Approach to Creating Tailored Odors, Fragrances Using Machine Learning
Tokyo Tech News (Japan)
September 5, 2022

Researchers at Japan's Tokyo Institute of Technology (Tokyo Tech) used machine learning (ML) to predict the sensing data of odor mixtures, and to design customized fragrances. The technique taps standard mass spectrum data and ML models to forecast molecular features based on odor impression. As an example, Tokyo Tech's Takamichi Nakamoto explained the approach was able to identify molecules that emit the mass spectrum of apple flavor with enhanced "fruit" and "sweet" impressions. "Combinations of either 59 or 60 molecules give the same mass spectrum as the one obtained from the specified odor impression," Nakamoto said. "With this information, and the correct mixing ratio needed for a certain impression, we could theoretically prepare the desired scent."
 

Full Article

 

AI Can Help AV Traffic Flow “Smoothly” At Intersections

Tech Xplore (5/17, Zewe) reports that a new study by MIT researchers was able to “demonstrate a machine-learning approach that can learn to control a fleet of autonomous vehicles as they approach and travel through a signalized intersection in a way that keeps traffic flowing smoothly.” The researchers “found that their approach reduces fuel consumption and emissions while improving average vehicle speed,” and “gets the best results if all cars on the road are autonomous, but even if only 25% use their control algorithm, it still leads to substantial fuel and emissions benefits.” The researchers “want the system to learn a strategy that reduces fuel consumption and limits the impact on travel time” and utilized deep reinforcement learning. In simulated tests, the AI “can reduce fuel consumption by 18% and carbon dioxide emissions by 25%, while boosting travel speeds by 20%.”

 

AI Can Improve Supply Chain Management

The Economic Times (IND) (5/16) reports that the “supply chain function is at the epicenter of an organization’s success” and the “benefits of technology in the supply chain can be categorized into three buckets: Increased automation, Better transparency, Generating insights to spark innovation.” ET reports that while “RFID and barcode technology are heavily used in inventory management, tracking, logistics, and vehicle route tracking,... recent advancements in technology are helping organizations to make incredible advancements in automation.” This automation shift includes “Warehouse bots, driverless delivery vehicles, and drone delivery.” The advanced technologies have “generated a wealth of rich data” which AI can leverage to boost supply-chain management. ET discusses how AI can improve efficiency in supply chain management, such as via “forecasts for future demand,” new product introduction forecasts, and more.

 

Investors Pulled Back On Artificial Intelligence In Q1, As Funding Fell 32%

Axios (5/16, Brodwin) reports, “Funding for AI-focused health startups fell 32% in Q1 2022, after nine straight quarters of steady growth, according to a fresh analysis from CB Insights.” Although it is “often excessively hyped, AI has huge potential to boost patient outcomes, curb hospital spending and even pinpoint diseases earlier. Even taking this quarter’s dip into account, health AI funding remained at higher levels than those seen before 2021.” Meanwhile, “health tech companies aren’t the only ones seeing decreases in interest around AI – global funding for all kinds of AI startups fell 12% last quarter to $15.1 billion, down from $17.1 billion in Q4 2021.”

dtau...@gmail.com

unread,
Oct 23, 2022, 8:57:36 AM10/23/22
to ai-b...@googlegroups.com

We Can Train Big Neural Networks on Small Devices
IEEE Spectrum
Matthew Hutson
September 20, 2022


A new training method expands small devices' capabilities to train large neural networks, while potentially helping to protect privacy. The University of California, Berkeley's Shishir Patil and colleagues integrated offloading and rematerialization techniques, which earlier approaches had combined only through suboptimal heuristics, to reduce memory requirements for training via the private optimal energy training (POET) system. Users feed POET a device's technical details and data on the architecture of a neural network they want to train, specifying memory and time budgets; the system generates a training process that minimizes energy usage. Defining the problem as a mixed integer linear programming challenge was critical to POET's effectiveness. Testing showed the system could slash memory usage by about 80% without significantly increasing energy consumption.
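The keep-in-memory versus recompute trade-off that POET optimizes can be miniaturized. The sketch below is a hypothetical toy that replaces the mixed-integer solver with exhaustive search over the same 0/1 decisions; the per-layer memory and energy costs are invented for illustration.

```python
from itertools import product

# Hypothetical miniature of the POET trade-off: for each layer, either keep
# its activation in memory (costs memory, no extra energy) or rematerialize
# it in the backward pass (costs extra energy, no memory). POET solves this
# as a mixed-integer linear program; this exhaustive search over the same
# 0/1 decisions illustrates the objective and constraint on a toy scale.

# (memory if kept, energy if recomputed) per layer -- invented numbers.
layers = [(4, 3), (2, 5), (6, 2), (3, 4)]

def best_plan(layers, memory_budget):
    """Minimize recompute energy subject to the memory budget.
    keep[i] = 1 means layer i's activation stays in memory."""
    best = None
    for keep in product([0, 1], repeat=len(layers)):
        mem = sum(m for (m, _), k in zip(layers, keep) if k)
        energy = sum(e for (_, e), k in zip(layers, keep) if not k)
        if mem <= memory_budget and (best is None or energy < best[0]):
            best = (energy, keep)
    return best

energy, keep = best_plan(layers, memory_budget=8)
```

Exhaustive search is exponential in the number of layers, which is why POET's integer-programming formulation matters for real networks; the toy only shows what is being decided.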

Full Article

 

 

New York AI Bias Law Prompts Uncertainty
The Wall Street Journal
Richard Vanderford
September 21, 2022


A law that goes into effect in New York City in January requires companies to audit their artificial intelligence (AI) hiring systems to assess whether they incorporate racial and gender biases. The law holds hiring companies liable for any biases and could impose fines for violations, but lacks clear guidelines for the AI audit process. While the city’s Department of Consumer and Worker Protection has not offered a timeline for when it will publish rules to implement the law, some companies already are taking steps to comply. Said Anthony Habayeb of AI governance software company Monitaur Inc., "Instead of waiting for someone to tell me what to do…I built controls around these applications because I know like with any software, things can and do go wrong."
 

Full Article

*May Require Paid Registration

 

 

AI Message Decoder Based on Bacterial Growth Patterns
Duke University
Ken Kingery
September 23, 2022


Biomedical engineers at the Duke University Pratt School of Engineering have deciphered bacterial growth patterns with an artificial intelligence (AI) decoder. The researchers produced a virtual bacterial colony, and controlled growth and the numbers and sizes of bacterial dots, to create an alphabet based on the colonies' appearance in a Petri dish. This emorfi encoding scheme encrypts messages by assembling a movie of a series of patterns, each correlating to a specific letter that a machine-learning algorithm can identify. Interlopers should be unable to crack the code without their own AI, provided the receiver knows the set of initial conditions that led to the code's creation.
 

Full Article

 

 

Face Recognition for Pigs Could Improve Welfare on Farms
New Scientist
Christa Lesté-Lasserre
September 20, 2022


Wenhao Zhang and colleagues at the U.K.'s University of the West of England have developed a facial recognition system for pigs. The researchers captured thousands of photos of sows from three breeds, then fed the images into a convolutional neural network that detected nuanced distinctions in the animals' faces. The software ran tens of millions of tests in learning to identify individual pigs, resulting in a system that, Zhang said, could accurately identify specific animals nearly 97% of the time. Natalie Waran at New Zealand's Eastern Institute of Technology said facial recognition makes physically marking the animals unnecessary, while also offering the opportunity to "cater to an individual's specific [nutritional or veterinary] needs where animals are kept in large groups in intensive farming conditions."

Full Article

 

 

ML Generates 3D Model from 2D Pictures
Washington University in St. Louis
Brandie Jefferson
September 19, 2022


A machine learning algorithm developed by researchers at Washington University in St. Louis can use a partial set of two-dimensional (2D) images produced by standard microscopy tools to create a continuous three-dimensional (3D) model of cells. The researchers accomplished this through the use of a neural field network, which can represent a sample in its entirety using a limited number of 2D images, eliminating the need to save data-heavy image files. The imaging system also has the ability to zoom in on a pixelated image and recreate missing pieces. Washington University's Ulugbek Kamilov said, "We train the model on the set of digital images to obtain a continuous representation. Now, I can show it any way I want. I can zoom in smoothly and there is no pixelation."

Full Article

 

 

Scientists Try to Teach Robot to Laugh at the Right Time
The Guardian (U.K.)
Hannah Devlin
September 15, 2022


Koji Inoue and colleagues at Japan's Kyoto University have developed an artificial intelligence system that can guide a robot to laugh in the right way at the appropriate time. The researchers collected data from over 80 speed-dating dialogues between male students and the Erica robot, which initially was teleoperated by female actors. The team annotated the conversational data for solo laughs, social laughs, and mirth-related laughter, then trained a machine learning system to decide whether to laugh, and to select the appropriate laughter type. Researchers tested Erica's sense of humor by creating dialogues for it to share with a person and integrating the shared-laughter algorithm into its conversation software. The algorithm received the highest ratings from 130 volunteers reviewing those conversations for empathy, naturalness, human-likeness, and understanding.

Full Article

 

 

AI-Powered 'Smart Bin' Sorts Recycling
EuroNews
September 13, 2022


A prototype "smart bin" developed by researchers at Australia's University of Technology, Sydney (UTS) can sort recyclable materials automatically through a combination of artificial intelligence (AI), robotics, and machine vision. UTS' Xu Wang said the system can categorize different types of waste such as glass bottles, metal cans, and several varieties of plastic. "We have a camera and we're running an AI algorithm to classify different types of plastics and then we use IoT [Internet of Things] and other robotics technology to sort the waste into the bins," Wang explained. The researchers envision smart bins deployed in shopping centers, schools, cinemas, businesses, and airports.

Full Article

 

 

Breakthrough Reported in ML-Enhanced Quantum Chemistry
U.S. Department of Energy Office of Science
September 12, 2022


Researchers at the U.S. Department of Energy's Los Alamos National Laboratory proposed a machine learning (ML) model that can accurately predict a broad range of molecular properties by incorporating additional mathematics of quantum mechanics. The model forecasts an effective Hamiltonian matrix using the positions of atoms within a molecule, defining possible electronic states and their associated energies. The ML-based scheme generates predictions at a lesser computational cost than traditional quantum chemistry simulations, and facilitates quantitatively precise predictions of material properties. The model also is more accurate than traditional ML models, and can make predictions that extend beyond its underlying training data.

Full Article

 

 

AI Enables Autonomous Vehicles to Adapt to Weather Conditions
University of Oxford (U.K.)
September 8, 2022


An artificial intelligence (AI) system developed by scientists at the U.K.'s University of Oxford and Turkey's Bogazici University can enable autonomous vehicles (AVs) to navigate more safely and reliably in challenging weather conditions. The self-supervised deep learning model facilitates ego-motion estimation, which calculates the vehicle’s position relative to nearby objects. The model performed well under conditions of rain, fog, and snow, as well as day and night. Said Oxford’s Andrew Markham, “Estimating the precise location of AVs is a critical milestone to achieving reliable autonomous driving under challenging conditions. This study effectively exploits the complementary aspects of different sensors to help AVs navigate in difficult daily scenarios.”

Full Article

 

 

AI System Predicts How to Prevent Wildfires
Aalto University (Finland)
September 9, 2022


A neural network model developed by researchers at Finland's Aalto University can predict the effectiveness of different land management strategies in preventing wildfires. Their analysis concentrated on Borneo's Central Kalimantan province, and the model predicted the distribution of peatland fires from measurements captured before each fire season from 2002 through 2019. The researchers analyzed 31 factors, then forecast the likelihood of a peatland fire at each spot on the map, resulting in an annual expected fire distribution that was correct 80% to 95% of the time. They then simulated the effects of different land management strategies on such fires, and found that converting shrubland and scrubland into swamp forests would cut fire incidence in half.

Full Article

 

 

Model Adjusts Videogame Difficulty Based on Player Emotions
Gwangju Institute of Science and Technology (South Korea)
September 1, 2022


Researchers at South Korea's Gwangju Institute of Science and Technology (GIST) have developed a method of adjusting videogame difficulty according to players' emotions. A game’s dynamic difficulty adjustment (DDA) agent would modify that game’s difficulty to maximize one of four aspects of player satisfaction—challenge, competence, flow, and valence. The researchers trained the agents on data collected from people who played a combat game against various artificial intelligences (AIs), then described the experience. Each DDA agent uses a Monte-Carlo tree search algorithm to tweak the opposing AI's fighting style to maximize a specific emotion or affective state; GIST's Kyung-Joong Kim said the model calculates this state using in-game features only, rather than external sensors.
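As a toy illustration of difficulty adjustment driven by simulated play, the sketch below picks the difficulty whose simulated matches best hit a target "challenge" level. It is a deliberate simplification: flat Monte-Carlo rollouts over a single difficulty knob with an invented affect model, not the paper's Monte-Carlo tree search over fighting styles, and every name and number here is hypothetical.

```python
import random

# Toy stand-in for dynamic difficulty adjustment: choose the opponent
# difficulty whose simulated matches maximize a "challenge" score that
# peaks when matches are close. All models and numbers are invented.

def simulate_match(difficulty, rng):
    """Pretend in-game outcome: player win margin shrinks as difficulty rises."""
    return 1.0 - difficulty + rng.uniform(-0.1, 0.1)

def challenge_score(win_margin):
    """Invented affect model: 'challenge' is highest for evenly matched games."""
    return 1.0 - abs(win_margin - 0.5)

def pick_difficulty(candidates, rollouts=200, seed=0):
    """Flat Monte-Carlo search: average the score over many rollouts per
    candidate difficulty and return the best candidate."""
    rng = random.Random(seed)
    def avg(d):
        return sum(challenge_score(simulate_match(d, rng))
                   for _ in range(rollouts)) / rollouts
    return max(candidates, key=avg)

best = pick_difficulty([0.1, 0.3, 0.5, 0.7, 0.9])
```

With the invented models above, the mid-level difficulty wins because it produces the closest matches, mirroring the idea of steering game state toward a target affective state using in-game features only.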

Full Article

 

Microsoft Touts Tools To Improve Safety Of Language Models

TechCrunch (5/23, Wiggers) says that to coincide with “Build 2022,” Microsoft on Monday “open sourced tools and datasets designed to audit AI-powered content moderation systems and automatically write tests highlighting potential bugs in AI models.” TechCrunch adds that Microsoft “claims that the projects, AdaTest and (De)ToxiGen, could lead to more reliable large language models (LLMs), or models akin to OpenAI’s GPT-3 that can analyze and generate text with human-level sophistication.” According to TechCrunch, “LLMs carry risks,” as they are “capable of generating toxic text based on similar language that they encounter during training,” but ToxiGen is “a dataset for training content moderation tools that can be used to flag harmful language.”

 

AI Can Help, Hurt Climate Depending On Implementation

Popular Science (5/23, Benson) reports artificial intelligence applications are capable of solving climate-related problems. Climate Change AI Chair Priya Donti said such applications include “optimizing electric power grids in order to foster the integration of renewable energy or collecting information about building footprints from satellite imagery in order to shape energy efficiency policies.” However, “AI isn’t exclusively a force for good.” Donti “says it can also be used in ways that harm the planet, such as oil companies using it to accelerate oil and gas exploration and extraction.”

 

Amazon Installing More AI-Powered Cameras In Delivery Vans To Monitor Drivers

Insider (5/22, Mann) reports, “Amazon is putting AI-powered cameras in more delivery vans in a move that privacy activists called ‘creepy’, ‘intrusive’ and ‘excessive’ after rolling them out in the US last year, The Telegraph reported.” The company is adding cameras to UK delivery vans that “monitor how drivers in the UK perform on the road and issue voice alerts if they speed or brake sharply and will score drivers accordingly.” The company first introduced the AI cameras in America last year “and used them to decide drivers’ pay and whether to keep them on.”

 

AI Company Uses Surveillance To Search For Guns

The Washington Post (5/20) reported that “when Peter George saw news of the racially motivated mass-shooting at the Tops supermarket in Buffalo last weekend, he had a thought he’s often had after such tragedies.” George is chief executive of Evolv Technology, maker of “an AI-based system meant to flag weapons, ‘democratizing security’ so that weapons can be kept out of public places without elaborate checkpoints.” As U.S. gun violence “like the kind seen in Buffalo increases – firearms sales reached record heights in 2020 and 2021 while the Gun Violence Archive reports 198 mass shootings since January – Evolv has become increasingly popular, used at schools, stadiums, stores and other gathering spots.” To its supporters, the system is “a more effective and less obtrusive alternative to the age-old metal detector, making events both safer and more pleasant to attend.” To its critics, “however, Evolv’s effectiveness has hardly been proved.” It opens up “a Pandora’s box of ethical issues in which convenience is paid for with RoboCop surveillance.”

dtau...@gmail.com

unread,
Oct 29, 2022, 1:29:17 PM10/29/22
to ai-b...@googlegroups.com

Humans Beat DeepMind AI in Creating Algorithm to Multiply Numbers
New Scientist
Matthew Sparkes
October 13, 2022


Jakob Moosbauer and Manuel Kauers at Austria's Johannes Kepler University Linz bested an algorithm developed by artificial intelligence company DeepMind with a program that can perform matrix multiplication more efficiently. Earlier this month, DeepMind unveiled a method for multiplying two five-by-five matrices in just 96 multiplications, out-performing a more-than-50-year-old record. Moosbauer and Kauers reduced the process to 95 multiplications by testing multiple steps in multiplication algorithms to see if they could be combined. Said Moosbauer, "We take an existing algorithm and apply a sequence of transformations that at some point can lead to an improvement. Our technique works for any known algorithm, and if we are lucky, then [the results] need one multiplication less than before."
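The quantity these programs compete on is the count of scalar multiplications. The standard small illustration of the idea is Strassen's 1969 scheme, which multiplies two 2×2 matrices in 7 multiplications instead of the naive 8; the 5×5 schemes at issue (96 and 95 multiplications) are far larger and are not reproduced here.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen,
    1969) instead of the naive 8 -- the same kind of saving, on a tiny
    scale, that the 5x5 schemes push from 96 down to 95 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Because such schemes can be applied recursively to matrix blocks, each multiplication saved at a fixed size compounds into real asymptotic and practical gains on large matrices.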

Full Article

 

 

ML Models Identify Apps Likely to Violate Google Play Store Guidelines
University of Groningen (Netherlands)
October 13, 2022


Machine learning models developed by researchers at the Netherlands' University of Groningen can predict whether a new app is likely to be removed from the Google Play store for violating its guidelines before and after being uploaded. The researchers collected metadata from around 2 million apps, downloaded the source code for about 50% of them, and tracked their status for six months to determine whether they had been removed. The resulting dataset, publicly available through Dataverse.nl, was used to develop a predictive model for developers and another for users, which predicted app removal with 76.2% and 79.2% accuracy, respectively.

Full Article

 

 

White House Unveils AI 'Bill of Rights'
Associated Press
Garance Burke
October 4, 2022


On Oct. 4, the Biden administration released the Blueprint for an AI Bill of Rights that offers guidelines for the U.S. government to protect citizens' digital and civil rights in a world that increasingly depends on artificial intelligence (AI). The blueprint, which does not lay out specific enforcement actions, was developed in consultation with various agencies and includes feedback from technologists, civil society groups, businesses, and industry researchers. The White House Office of Science and Technology Policy's Alondra Nelson said, "This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies. We can and should expect better and demand better from our technologies."
 

Full Article

 

 

Learning on the Edge
MIT News
Adam Zewe
October 4, 2022


Scientists at the Massachusetts Institute of Technology (MIT) developed an energy-efficient method that allows artificial intelligence (AI) models to continually learn from new data on intelligent edge devices. The technique uses less than 0.25 megabyte (MB) of memory, while other training methods can consume over 500 MB, surpassing most microcontrollers' 256-kilobyte capacity. The process can be used to train a machine learning model on a microcontroller in minutes, and shields privacy by keeping data on the device. "Our study enables IoT [Internet of Things] devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning," said MIT's Song Han. "The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices."
 

Full Article

 

 

Meta Using AI to Generate Videos from Words
CNN Business
Rachel Metz
September 29, 2022


Meta researchers are generating videos from short text prompts using their Make-A-Video research, which uses a text-to-image artificial intelligence (AI) model to ascertain how words correspond with images, and unsupervised learning to work out realistic motion by analyzing videos. The researchers said their text-to-image AI model was trained on Internet data, so it learned "and likely exaggerated social biases, including harmful ones." Meta CEO Mark Zuckerberg posted on Facebook, "It's much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they'll change over time."

Full Article

 

 

AI Can Create Any Image in Seconds
The Washington Post
Nitasha Tiku
September 28, 2022


The ability of artificial intelligence (AI) systems like research laboratory OpenAI's DALL-E to produce virtually any image in seconds is provoking controversy. Researchers are concerned that text-to-image generators produce images that can perpetuate racial and gender stereotypes, plagiarize artists, or create authentic-looking disinformation. The technology is proliferating faster than AI companies can formulate usage policies and deter harmful outcomes. OpenAI prohibits DALL-E’s use to create images of celebrities or politicians in order to prevent disinformation, but upstart companies have open-sourced their code for anyone to duplicate. The University of Southern California's Wael Abd-Almageed is concerned the technology exploits the tendency of people to accept what they see, adding "Once the line between truth and fake is eroded, everything will become fake. We will not be able to believe anything."

Full Article

*May Require Paid Registration

 

 

Computer Made from Magnets Can Recognize Handwritten Digits
New Scientist
Karmela Padavic-Callaghan
September 30, 2022


Researchers led by Tomoyuki Yokouchi at Japan's University of Tokyo have programmed a computer made from magnetic materials to recognize handwritten digits. The researchers built the prototype from blocks of platinum, cobalt, and iridium layers containing skyrmions, connected to emulate an artificial neural network. They feed the computer a problem by changing the magnetic field to rearrange the skyrmions, inducing the voltage to change across each block, which is translated as the computer's answer. The researchers converted numbers ranging from 0 to 9 into variations in the magnetic field and inputted them into the computer; the device recognized the digits about 95% of the time across 5,000 different tests, outperforming other non-traditional computers employing the same neural network structure by roughly 5%. Reducing the computer's size and accelerating its speed could enable more energy-efficient processing.
 

Full Article

 

 

Scientists Use ML to Help Fight Antibiotic Resistance in Farmed Chickens
University of Nottingham (U.K.)
September 29, 2022

Researchers at the U.K.'s University of Nottingham leveraged machine learning to identify diseases at poultry farms, with the goal of predicting and detecting bacterial infection to reduce antibiotic use and prevent antimicrobial resistance (AMR) transfer to humans. The researchers identified 11 types of clinically important antibiotic resistance genes in human and chicken samples taken from a Chinese farm and slaughterhouse. Using a machine learning approach that combined metagenomics data with culture-based methods, the researchers detected the presence of a core chicken gut resistome correlated with AMR circulating in the farms. They also found an association between the AMR-related core resistome and external factors like temperature and humidity. Said Nottingham's Tania Dottorini, "Precision farming, cost-effective DNA sequencing, and the increased adoption of machine learning technologies offer the opportunity to develop methods giving a better understanding and quantification of AMR risks in farming environments."
 

Full Article

 

 

Cassie the Running Robot Achieves Guinness World Record in 100-Meter Dash
USA Today
Scott Gleeson
September 28, 2022


A robot developed at Oregon State University and produced by its spinout company Agility Robotics set a Guinness World Record for a bipedal robot running the 100-meter dash, completing the run in 24.73 seconds. The robot, known as Cassie, uses machine learning to control its gait when running on outdoor terrain. Said Oregon State's Devin Crowley, "Machine learning approaches have long been used for pattern recognition, such as image recognition, but generating control behaviors for robots is new and different." Oregon State's Jonathan Hurst added, "Using learned policies for robot control is a very new field, and this 100-meter dash is showing better performance than other control methods. I think progress is going to accelerate from here."

Full Article

 

 

Deep Learning Uses Stream Discharge to Estimate Watershed Subsurface Permeability
U.S. Department of Energy
September 26, 2022


Deep learning can calculate a watershed's subsurface permeability from stream discharge data more accurately than conventional methods. Scientists from the U.S. Department of Energy's Pacific Northwest National Laboratory, Oak Ridge National Laboratory, and Los Alamos National Laboratory taught deep neural networks (DNNs) to estimate subsurface permeability from stream discharge hydrographs. The researchers trained the DNNs to map relationships between soil and geologic layer permeabilities and simulated stream discharge acquired via an integrated surface-subsurface hydrologic watershed model; this returned more accurate permeability than inverse modeling. The networks then estimated the permeability of an actual watershed using observed stream discharge from the study site, accurately predicting stream flows. The enhanced parameter estimation promises to reduce uncertainty in predictive watershed models.
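The idea of learning an inverse mapping from simulated pairs can be sketched with a deliberately simple stand-in: a linear "simulator" relates log-permeability to a discharge statistic, and a least-squares line fit (standing in for the DNN) learns the inverse from simulated training pairs. Every function and number below is invented for illustration; the actual work used an integrated surface-subsurface hydrologic model and deep neural networks.

```python
# Stand-in "simulator": maps log-permeability to a single discharge statistic.
# (Invented; a real watershed model produces a full hydrograph.)
def forward(log_k):
    return 2.0 * log_k + 1.0

# Simulated training pairs (permeability -> discharge), analogous to the
# model runs the labs used to generate training data.
train_k = [-14.0, -13.0, -12.0, -11.0]
train_q = [forward(k) for k in train_k]

def fit_line(xs, ys):
    """Least-squares line fit, standing in for training the DNN."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Learn the inverse mapping (discharge -> permeability)...
a, b = fit_line(train_q, train_k)
# ...then apply it to an "observed" discharge from a study site.
estimate = a * forward(-12.5) + b
```

Because the stand-in simulator is exactly linear, the fitted inverse recovers the true log-permeability; a DNN plays the same role when the forward model is nonlinear.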

Full Article

 

Experts Discuss Concerns Over AI Technology That Can Craft Essays

Inside Higher Ed (10/26) reports that “humans have long relied on writing assistance powered by artificial intelligence to check spelling and grammar, predict text, translate or transcribe.” Now, anyone “with an internet connection can access an AI tool such as OpenAI or Moonbeam, give it a prompt and receive – in seconds – an essay written in humanlike prose.” Instructors “who are concerned that students will use these tools to cheat may hold fast to in-class writing assessments or install surveillance tools to try to detect misconduct.” But others argue that AI-generated prose is “original, which prevents plagiarism software from detecting it.” All the experts “with whom Inside Higher Ed spoke said that students who submit essays that are completely composed by AI have crossed an ethical line.” But they also “said the gray area between acceptable and unacceptable uses of this evolving technology is vast.”

 

Ford Posts Q3 Loss, $2.7B Charge On Argo AI Shutdown

The Wall Street Journal (10/26, Eckert, Subscription Publication) reports Ford Motor reported a loss in Q3 due to a $2.7 billion charge associated with the shutdown of Argo AI. Reuters (10/26) reports Ford’s move “highlights the pressure on automakers to make hard choices as the financial demands of shifting to electric vehicles continue to rise.” Ford “posted a net loss in the quarter of $827 million, after taking a $2.7 billion noncash pretax impairment on its investment in Argo AI.” Ford held a 39% stake in Argo. CEO Jim Farley said, “Profitable, fully autonomous vehicles at scale are a long way off and we won’t necessarily have to create that technology ourselves.”

        MarketWatch (10/26, Peters) reports Farley said, “We still believe in Level 4 autonomy, that it will have a big impact on our business of moving people,” but that “we will have a very long road” to get there. The remarks “came as the auto industry deals with more immediate concerns about both production and demand, as ongoing supply-chain contortions lead to parts shortages and higher prices.”

        Argo AI Is Shutting Down. The Verge (10/26, Hawkins) reports that Argo AI “is shutting down, The Verge has learned.” Employees “were notified during an all-hands meeting on Wednesday.” Argo AI “has lost the financial support of Ford and VW, a source said,” and “according to TechCrunch, the company’s resources will be absorbed by both automakers.” Argo AI spokesperson Alan Hall said “the decision has been made that Argo AI will not continue on its mission as a company.” He added that many of Argo’s employees “will receive an opportunity to continue work on automated driving technology with either Ford or Volkswagen, while employment for others will unfortunately come to an end.”

 

GitHub Will Make AI-Powered Coding Tool Copilot Generally Available This Summer

TechCrunch (5/24, Wiggers) reports GitHub and OpenAI last year “launched Copilot, a service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio,” and while it was “previously available in technical preview, it’ll become generally available starting sometime this summer, Microsoft announced at Build 2022.” TechCrunch adds Copilot “will also be available free for students as well as ‘verified’ open-source contributors. On the latter point, GitHub said it’ll share more at a later date.”

dtau...@gmail.com
Oct 30, 2022, 9:09:21 AM
to ai-b...@googlegroups.com

AIs Become Smarter if You Tell Them to Think Step by Step
New Scientist
Chris Stokel-Walker
October 26, 2022


Researchers at Stanford University and Google Research have determined that artificial intelligence (AI) models improve when prompted to tackle problems step by step. They tested three AI models (Codex, InstructGPT, and PaLM 540B) on 23 tasks that AI models have historically completed less well than humans. The researchers found that prompting the models to think step by step improved their performance across all tasks by 25% to 32%. University of California, Santa Barbara's William Wang thinks prompting boosts performance because it is similar to how the AI models are trained, with data presented sequentially.
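The prompting change itself is mechanically simple, as the sketch below shows: the same question is wrapped with and without a step-by-step cue. The `build_prompt` helper, the cue wording, and the Q/A format are illustrative assumptions, not the study's exact prompts, which varied by task.

```python
def build_prompt(question: str, step_by_step: bool = True) -> str:
    """Assemble a language-model prompt, optionally appending a cue that
    asks the model to reason step by step before answering.

    The cue phrase and Q/A layout are invented for illustration.
    """
    cue = " Let's think step by step." if step_by_step else ""
    return f"Q: {question}\nA:{cue}"


question = "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"
plain = build_prompt(question, step_by_step=False)   # direct-answer prompt
stepped = build_prompt(question)                     # step-by-step prompt
```

The only difference between the two prompts is the trailing cue; the reported gains came entirely from that kind of change to the input, not from retraining the models.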

Full Article

*May Require Paid Registration

 

 

AI-Generated Art Already Transforming Creative Work
The New York Times
Kevin Roose
October 21, 2022


Artificial intelligence (AI) has made inroads among content creators, with apps like DALL-E 2, Midjourney, and Stable Diffusion gaining popularity among those looking for a fast, low-cost way to create hyper-realistic images. These "generative AI" apps allow users to input a small amount of text to generate unique images. It is uncertain whether such apps will result in artists and illustrators losing their jobs. San Francisco interior designer Isabella Orsi, for instance, uses InteriorAI to create mock-ups for clients; she says such apps help interior designers formulate ideas in the early stages of a project. Says Orsi, "I don't feel like it will take my job away. Somebody has to discern between the different renderings, and at the end of the day, I think that needs a designer."

Full Article

*May Require Paid Registration

 

 

Deep Learning with Light
MIT News
Adam Zewe
October 20, 2022


Massachusetts Institute of Technology researchers have reduced computing latency on low-powered connected devices via the Netcast architecture, which shuttles the operation of deep learning models to a central server where components are imprinted onto light waves. Netcast stores weights in the central server, which is linked to a smart transceiver that applies silicon photonics to fetch trillions of weights from memory each second; these weights are encoded onto light waves. Fiber optics route the waves to a connected device, while the receiver uses a Mach-Zehnder modulator to conduct computations using the encoded elements. The modulator performs trillions of multiplications per second, demonstrating a 100-fold improvement in energy efficiency compared to other techniques.

Full Article

 

 

AI Knows How Much You're Willing to Pay for Flights Before You Do
Bloomberg
Angus Whitley
October 20, 2022


Airlines are using artificial intelligence to predict how much passengers are willing to pay for air travel tickets, and to reprice them continuously in response to that data. "We are able to determine at every price point how many people will buy a ticket," said Roy Cohen at Israeli live-pricing engine operator Fetcherr. Cohen claims Fetcherr's demand models are so accurate that fares set by algorithms for flights six months in advance rarely change by the time the plane takes off. Last year, Ireland-based real-time pricing provider Datalex announced a test with Irish airline Aer Lingus. Although historical information remains the chief factor for calculating current and future flight demand, Datalex’s Conor O’Sullivan said computers are increasingly considering one-time events, hotel reservations, and airport lines when predicting ticket prices.

Full Article

*May Require Paid Registration

 

 

AI-Powered Material Can Learn Behaviors, Adapt
Interesting Engineering
Brittney Grimes
October 19, 2022


Mechanical engineers at the University of California, Los Angeles have developed an artificial intelligence-powered material that learns behaviors over time, and can adjust to changing circumstances. The so-called mechanical neural network (MNN) features a structural system of independently tunable beams arranged in a triangular lattice pattern. The researchers said each beam consists of a "voice coil, strain gauges, and flexures that enable the beam to change its length, adapt to its changing environment in real time, and interact with other beams in the system." An optimization algorithm uses strain-gauge data to calculate rigidity values to govern the network's adaptation, determining how much force should be applied. Cameras on the MNN's outer nodes check the strain-gauge system's validity.

Full Article

 

 

Robot Laundry Folding Speed Record Broken
Ars Technica
Benj Edwards
October 19, 2022


Researchers at the University of California, Berkeley and Germany's Karlsruhe Institute of Technology have achieved a new robot laundry-folding speed record with the SpeedFolding system. SpeedFolding combines machine vision, the BiManual Manipulation Network (BiMaMa-Net), and two industrial robot arms to fold 30 to 40 randomly positioned garments per hour; the researchers said the previous record was three to six folds per hour. The BiMaMa-Net neural network analyzed 4,300 human and machine-assisted examples to learn to fold clothing. The system uses an overhead camera to study the garment's initial state, and to calculate where to grasp it. The SpeedFolding robot can fold clothes from a random initial position in less than two minutes on average at a 93% success rate, as well as generalize to clothes of differing material, shape, or color than those it trained with.
 

Full Article

 

 

'Deepfakes' of Celebrities Appearing in Ads
The Wall Street Journal
Patrick Coffee
October 25, 2022


Deepfakes of celebrities have started to appear in ads, with and without their consent. Experts say the growing use of deepfake software could change the marketing industry significantly while raising new legal and ethical issues, making it difficult for celebrities to rein in unauthorized digital reproductions and brand manipulation. U.S. legislative efforts to contain deepfakes include criminalization of their use in revenge porn in Virginia, and a Texas ban on their use in political campaigns. However, experts cite a lack of legislation addressing deepfake usage in commercials, and anticipate as a result deepfakes will become increasingly popular in advertising.

Full Article

*May Require Paid Registration

 

 

ML-Based Solution Could Help Firefighters Circumvent Deadly Backdrafts
U.S. National Institute of Standards and Technology
October 17, 2022


Scientists at the U.S. National Institute of Standards and Technology have formulated a machine learning-based model to predict potentially deadly backdrafts. The researchers based the model on data from hundreds of laboratory-engineered backdrafts, with the hope firefighters will deploy it to avoid or adjust to hazardous circumstances. The team initially fed the model data on gas levels, fuel richness, and temperature measured at one location in the lab chamber before a door was opened, to calculate the odds of a backdraft occurring. The model predicted backdrafts correctly in 70.8% of the experiments; adding measurements at a second location boosted its accuracy to 82.4%.
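Why a second measurement location helps can be seen in a toy sketch: a single sensor cannot distinguish conditions that only differ elsewhere in the room. The samples, feature semantics, and thresholds below are all invented for illustration and are not NIST's actual data or model.

```python
# Toy samples: (fuel_at_door, fuel_at_rear, backdraft_occurred).
# All values are invented to illustrate the two-sensor advantage.
SAMPLES = [
    (0.8, 0.9, 1),
    (0.7, 0.2, 0),  # fuel-rich at the door, but vented at the rear
    (0.6, 0.8, 1),
    (0.3, 0.4, 0),
]

def predict_one_location(door, rear):
    # Uses only the first sensor, like the initial single-location model.
    return 1 if door > 0.5 else 0

def predict_two_locations(door, rear):
    # Combines both sensors, like the augmented model.
    return 1 if door > 0.5 and rear > 0.5 else 0

def accuracy(predict):
    hits = sum(predict(d, r) == y for d, r, y in SAMPLES)
    return hits / len(SAMPLES)
```

On this hand-made data the single-sensor rule misses the vented case, while the two-sensor rule classifies every sample correctly, mirroring the 70.8% to 82.4% jump reported in the lab experiments.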

Full Article

 

 

AI Tools Fail to Reduce Recruitment Bias
BBC News
Chris Vallance
October 14, 2022


A study by researchers at the U.K.'s University of Cambridge found that diversity and bias in hiring are not improved by the use of artificial intelligence technology. Said Cambridge's Kerry Mackereth, "These tools can't be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race." The researchers developed a simplified AI recruitment tool that assigns personality scores to candidates' photos based on agreeableness, extroversion, openness, conscientiousness, and neuroticism. However, they found that several irrelevant variables, such as changes in contrast, brightness, and saturation, skewed the ratings of their tool.
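The failure mode (ratings shifting with image statistics rather than the candidate) can be demonstrated with a toy stand-in scorer. The `toy_personality_score` function below is an invented caricature, not the Cambridge tool: its "score" is just mean pixel intensity, so brightening an identical photo changes the rating.

```python
def toy_personality_score(pixels):
    """Stand-in for a flawed candidate-scoring model whose output leaks
    low-level image statistics (here, simply mean pixel intensity).
    Invented for illustration; not the study's actual model."""
    return round(sum(pixels) / len(pixels), 3)

portrait = [0.20, 0.45, 0.60, 0.35]                 # same candidate...
brighter = [min(1.0, p + 0.10) for p in portrait]   # ...brighter photo

score_original = toy_personality_score(portrait)
score_brighter = toy_personality_score(brighter)
# The candidate is unchanged, yet the score moves with brightness alone,
# the kind of irrelevant-variable sensitivity the researchers observed.
```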

Full Article

 

 

AI Language Models Show Bias Against People with Disabilities, Study Finds
Penn State News
Jessica Hallman
October 13, 2022


Pennsylvania State University (Penn State) researchers found that natural language processing models often are biased against people with disabilities. The researchers studied 13 popular machine learning models trained to generate sequences of words, and tested over 15,000 unique sentences on each model to produce word associations for over 600 adjectives that could be associated with individuals with or without disabilities. The researchers assessed the sentiment of each adjective generated as positive, negative, or neutral, finding that sentences with disability-related words scored more negatively than sentences lacking them. Penn State's Pranav Venkit said the work demonstrates "that people need to care about what sort of models they are using and what the repercussions are that could affect real people in their everyday lives."
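The measurement protocol can be sketched in a few lines: score sentence pairs that differ only in a disability-related phrase and compare the results. The word-association table below is an invented toy stand-in for a trained model's learned associations, used only to show how the gap is computed.

```python
# Toy word associations standing in for a trained language model's learned
# sentiment; the values are invented for illustration.
TOY_ASSOCIATIONS = {"capable": 1, "friendly": 1, "blind": -1, "deaf": -1}

def toy_sentiment(sentence):
    """Sum the association scores of the words in a sentence."""
    return sum(TOY_ASSOCIATIONS.get(w, 0) for w in sentence.lower().split())

# Minimal pairs: identical sentences with and without a disability term.
pairs = [
    ("a person is capable", "a blind person is capable"),
    ("a person is friendly", "a deaf person is friendly"),
]
gaps = [toy_sentiment(with_term) - toy_sentiment(base)
        for base, with_term in pairs]
mean_gap = sum(gaps) / len(gaps)  # negative: disability phrasing scores lower
```

A consistently negative mean gap over many such pairs is the kind of evidence the study reports, scaled up to 15,000+ sentences and 600+ adjectives across 13 models.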

Full Article

 

Meta Announces Meta AI Universal Speech Translator

VentureBeat (10/19, Dey) reported that about a week after Google announced its “Translation Hub” speech translation, now Meta has “announced the launch of universal speech translator (UST) project, which aims to create AI systems that enable real-time speech to-speech translation across all languages, even those that are spoken but not commonly written.” Meta CEO Mark Zuckerberg said in a statement, “Meta AI built the first speech translator that works for languages that are primarily spoken rather than written. We’re open-sourcing this so people can use it for more languages.” Synthesis AI founder and CEO Yashar Behzadi “said that one of the current challenges for UST models is the computationally expensive training that’s needed because of the breadth, complexity and nuance of languages.”

 

White House’s AI Bill Of Rights Blueprint To Impact Human Resources Tech

HR Dive (10/18) reported “on Oct. 4, the White House’s Office of Science and Technology Policy published a ‘Blueprint for an AI Bill of Rights,’ a 73-page document outlining guidance on addressing bias and discrimination in automated technologies so that ‘protections are embedded from the beginning, where marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.’” The blueprint “focuses on five areas of protections for U.S. citizens in relation to AI: system safety and effectiveness; algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives when appropriate.”

        Employment is “listed in the blueprint as one of several ‘sensitive domains’ deserving of enhanced data and privacy protections.” Individuals “handling sensitive employment information should ensure it is only used for ‘functions strictly necessary for that domain’ while consent for all non-necessary functions ‘should be optional.’” Additionally, the blueprint “states that continuous surveillance and monitoring systems ‘should not be used in physical or digital workplaces,’ regardless of a person’s employment status.” Surveillance is “particularly sensitive in the union context; the blueprint notes that federal law ‘requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing.’”

 

Regulators: AI Is Not Immune To Existing Rules

The Wall Street Journal (6/3, Vanderford, Subscription Publication) reports, “Financial institutions’ use of artificial intelligence should have risk controls in place to address possible drawbacks, senior bank regulators said, warning that old rules could be applied to the novel technology. Even without AI-specific guidance from federal bank regulators, institutions could run afoul of existing rules,” according to officials at the Fed and Office of the Comptroller of the Currency.

 

Current, Former Google Employees Speak Out Against Fired AI Researcher’s Conduct

Wired (5/31, Simonite) reports even as Google AI researchers Anna Goldie and Azalia Mirhoseini “won media coverage and notice in the world of semiconductors” for using AI software to improve its AI hardware, “in parallel to their success, according to five current and former Google employees, and documents seen by WIRED, Mirhoseini and Goldie spent years fending off a series of unproven claims that their results were wrong or even falsified.” Wired adds Satrajit Chatterjee, “a more senior researcher at Google, used the cover of scientific debate to undermine the women personally, the employees claim,” and this March, was fired shortly after seeking “permission from research managers to publish a public rebuttal of Mirhoseini and Goldie’s Nature study.” Wired describes this as adding “to a series of recent internal conflicts at Google that suggest the freewheeling, engineer-centric culture it celebrated as a startup has left the company unprepared for some challenges of being a multinational with more than 100,000 staff.”

dtau...@gmail.com
Nov 6, 2022, 8:40:54 AM
to ai-b...@googlegroups.com

Network Pruning Can Skew Deep Learning Models
North Carolina State University News
Matt Shipman
November 2, 2022


Computer science researchers at North Carolina State (NC State), Syracuse, and Carnegie Mellon universities have shown that neural network pruning can undermine the performance of deep learning models at identifying certain groups. The researchers cited disparities in gradient norms across groups, and in Hessian norms linked to inaccuracies of a group's data, as factors impacting performance. This implies network pruning can compound existing accuracy deficiencies. NC State's Jung-Eun Kim said the team has demonstrated a remedial mathematical method "to equalize the groups that the deep learning model is using to categorize data samples." Tests found the mitigation technique essentially restored a deep learning model to pre-pruning levels of accuracy.
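The disparity the researchers describe is measured by breaking accuracy down per group. The sketch below shows that bookkeeping on hand-made predictions (the models, labels, and group names are invented); it does not reproduce the paper's gradient/Hessian analysis or its mitigation method.

```python
from collections import defaultdict

def group_accuracies(preds, labels, groups):
    """Per-group accuracy: the metric that exposes pruning disparities."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        totals[g] += 1
        hits[g] += int(p == y)
    return {g: hits[g] / totals[g] for g in totals}

# Invented example: two groups, six samples.
labels = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
dense_preds  = [1, 0, 1, 1, 0, 1]   # hypothetical model before pruning
pruned_preds = [1, 0, 1, 0, 0, 0]   # hypothetical model after pruning

before = group_accuracies(dense_preds, labels, groups)
after = group_accuracies(pruned_preds, labels, groups)
# Aggregate accuracy drops modestly, but group "b" absorbs all of the loss,
# which is exactly the pattern an aggregate-only metric would hide.
```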
 

Full Article

 

 

ML Facilitates 'Turbulence Tracking' in Fusion Reactors
MIT News
Adam Zewe
November 2, 2022


A multi-institutional team of scientists used computer vision models to track turbulent filaments or "blobs" appearing on the fringe of fuel used in nuclear fusion reactors. The researchers trained four blob-tracking computer vision models on a synthetic video dataset of plasma turbulence; the models were able to identify such blobs in actual video clips with more than 80% accuracy in some instances, as well as estimate the blobs' sizes and motion. Said the Massachusetts Institute of Technology's Theodore Golfinopoulos, "Now, we have a microscope and the computational power to analyze one event at a time. If we take a step back, what this reveals is the power available from these machine learning techniques, and ways to use these computational resources to make progress."
 

Full Article

 

 

Having AIs Train Robot Dogs to Balance Makes Them Cheaper
New Scientist
Jeremy Hsu
November 1, 2022


Carnegie Mellon University (CMU)'s Deepak Pathak and colleagues used an artificial intelligence (AI) to train a robot dog to perform cleaning tasks for less than a tenth of the cost of Boston Dynamics' robot canines. The researchers taught the AI to coordinate the robot's movements with an arm affixed to its back while an operator guided its activity. They applied reinforcement learning to train the AI via computer models and in a physical machine. CMU's Xuxin Cheng said the researchers trained the AI to direct the robot's legs separately from the arm, before training it on leg/arm coordination. The team also had a "teacher" AI train a "student" AI to mimic their bodily motions.

Full Article

 

 

Using Sound to Model the World
MIT News
Adam Zewe
November 1, 2022


Scientists at the Massachusetts Institute of Technology and Carnegie Mellon University have developed a machine learning model that can capture how any sound will spread through a space, simulating what a listener would hear at different locations. The neural acoustic field model captures and simulates underlying three-dimensional scene acoustics from a limited number of sound recordings to accurately render the room visually. The researchers incorporated the reciprocal nature of sound and the influence of local geometric features into the model to surmount the challenge of predicting audio due to factors like obstacles and distance.

Full Article

 

 

Biotech Begins Human Trials with Drug Discovered Using AI
Financial Times
Jamie Smyth
October 31, 2022


Verge Genomics has commenced a clinical trial of a novel therapy to target neurodegenerative disease ALS that was discovered using artificial intelligence (AI). Verge's Alice Zhang said the discovery process for its ALS drug took four years, which was faster and more cost-effective than the trial-and-error approach often used in drug discovery. Verge developed a database of human tissue from the brains and spinal cords of patients with neurodegenerative diseases like ALS, and used genetic sequencing to create a "human disease map" that is used by its AI platform to identify drug targets. Said Zhang, "Using human data from day one means that we start with higher-quality targets that are more likely to succeed in clinic."

Full Article

*May Require Paid Registration

 

 

AI, Molecule Machine Generalize Automated Chemistry
University of Illinois at Urbana-Champaign News Bureau
Liz Ahlberg
October 28, 2022


An international team led by University of Illinois at Urbana-Champaign (UIUC) researchers combined artificial intelligence (AI) and a molecule-generating machine to optimize conditions for automated chemistry. The integration yields real-time feedback to a machine learning system to improve chemical synthesis. The researchers doubled the average yield of heteroaryl Suzuki-Miyaura coupling reactions linking carbon atoms together in pharmaceutically critical molecules. UIUC's Martin D. Burke said generality is crucial for automation, while "the haystack of possible reaction conditions is astronomical, and the needle is hidden somewhere inside. By leveraging the power of AI and building-block chemistry to create a feedback loop, we were able to shrink the haystack. And we found the needle."

Full Article

 

Google AI Event Showcases New AI Tools

VentureBeat (11/2, Goldman) reports Google held an AI event at its New York City offices on Wednesday, where the company “announced a variety of artificial intelligence (AI) advances, including in generative AI, language translation, health AI, and disaster management.” At the event, CEO Sundar Pichai said, “We see so much opportunity ahead and are committed to making sure the technology is built in service of helping people, like any transformational technology. AI comes with risks and challenges – that’s why Google is focused on responsible AI from the beginning, publishing AI principles which prioritize the safety and privacy of people over anything else.” Among other announcements, “Google shared the first rendering of a video that shares both of the company’s complementary text-to-video research approaches – Imagen Video and Phenaki.” Google is also working on “efforts to use AI to generate code, as well as recently introduced research such as AudioLM which – with no need for a musical score – extends the audio from any audio clip entered – and text-to-NeRF 3D rendering using diffusion.”

        ZDNet (11/2, Ortiz) reports Google “announced its intention of expanding its already expansive language portfolio by ten-fold utilizing artificial intelligence.” Under this “1,000 Languages Initiative,” Google wants “to build an AI model that supports the 1,000 most spoken languages across the globe to make information more accessible.”

        SiliconANGLE (11/2, Deutscher) reports Google also unveiled “an internally-developed software tool that can reduce the amount of effort involved in training a robot to perform new tasks.” Called “Code as Policies, or CaP,” the new tool is “available on GitHub under an open-source license.”

 

Former Google CEO And Wife Argue AI Is Not Being Used To Fullest Potential

In an opinion piece for TIME (11/1), former Google CEO Eric Schmidt and his wife Wendy, who are founders of the Schmidt Family Foundation, write, “In daily life, Artificial Intelligence is ubiquitous in our homes. ... But we need a lot more of it in our laboratories – moving science forward for public benefit, and helping us to solve the hardest problems of our time, from climate change and poverty to healthcare and sustainable energy.” They continue, “This can only happen by accelerating the next global scientific revolution – by supporting broad and deep incorporation of AI techniques into scientific and engineering research. Because while AI innovation has been substantive, its adoption into scientific and engineering research has not been ubiquitous, fast, or interdisciplinary.” They continue, “We believe training for AI in science, equitable access to AI tools, and its responsible, ethical application should govern any meaningful response.”

        First, the pair write, “We need rigorous and interdisciplinary training for young scientists using AI. AI’s failures can largely be attributed to unrealistic expectations about AI tools, errors in their use, and the poor quality of data used in their development. Scientists across disciplines, from all backgrounds, will need AI fluency to prevent such missteps.” Additionally, they state, “We have to ensure equitable access to AI tools. According to a recent National Artificial Intelligence Research Resource report, equitable participation in cutting edge AI research is limited by gaps in access to the necessary data and computational power. Leaving out scientists from historically underrepresented and underserved backgrounds ‘limits the breadth of ideas incorporated into AI innovations and contributes to biases and other systemic inequalities.’”

 

DeepMind CEO Discusses Impact Of AI Program That Predicted 3-D Structure Of Almost Every Known Protein

Scientific American (10/31, Lewis) publishes an interview with DeepMind CEO Demis Hassabis about developing AlphaFold, an artificial intelligence program that “predicted the 3-D structures of almost every known protein” as well as “some of its most exciting potential applications and the ethical considerations of highly sophisticated AI.” On how AlphaFold is being used, “I’ve been talking to [CRISPR pioneer] Jennifer Doudna about alfalfa crop sustainability—her team is trying to engineer crops to be a bit more sustainable in the face of climate change.” He adds, “There was a whole special issue in Science on the nuclear pore complex. They solved the structure of one of the biggest proteins in the body. And I think three groups solved it at the same time from the cryo-EM [cryogenic electron microscopy] data – but they all needed AlphaFold predictions to augment the cryo-EM lower-resolution data in some places. So a combination of experimental structural data with AlphaFold turns out to be a real boon to structural biologists, which we weren’t necessarily predicting.” Also he adds, “Almost every pharma company we’ve talked to is using AlphaFold. We’ll probably never know what the full impacts are because obviously, they keep that proprietary. But I like to think we’ve helped accelerate real cures to diseases and drug development maybe by a few years.”

 

West Virginia University Research Shows How Neuroscience Of Electric Fish Can Help Advance Machine Learning

EurekAlert (10/31) reports that “electric fish and robots may hold the key to achieving ‘autonomous lifelong machine learning,’ based on research conducted at West Virginia University with the aid of a $2 million National Science Foundation grant.” Computer scientist and engineer Gianfranco Doretto “is leading a team of researchers in a four-year study of unsupervised continual machine learning, applying the results of experiments on electric fish to the development of software that will allow a robot to learn to navigate different terrains unaided.” Electric fish studies “have shown how, in order to learn about the world, a fish ‘filters out self-generated sensory inputs,’ Doretto said – erasing from consciousness signals created by its own body or movements.” Canceling that input “isn’t something current machine learning can do.”

 

Pitt Researcher Focuses On Making AI Medical Screening More Race-Inclusive

AZoRobotics (10/26) reported that artificial intelligence (AI) “makes it possible for people worldwide to personally access vital medical screening.” Jingtong Hu, an Engineering Researcher at the University of Pittsburgh, “is involved in ensuring that the screening is fair and effective, regardless of who is accessing it.” AI “has been installed across various health applications, such as monitoring vital signs, recognizing emotions, detecting skin cancer, and other medical imaging and diagnostics.” However, neural networks “are just as good as the data set on which they are programmed, and minorities are poorly represented in these datasets, resulting in a principally deceptive form of technological inequality.” Hu and his team at the University of Pittsburgh Swanson School of Engineering “are developing a distributed, inclusive data collection and learning framework that depends on smartphone apps, rendering it easy to join while safeguarding user privacy.” Hu’s project “would help curb these inequalities by creating an on-device learning framework that learns nonstop from the data of new users when using a mobile application.” It will “make the most of federated learning (FL), which uses numerous devices to collaboratively program a shared model while preserving the data on the devices.”

 

Google Denies Engineer’s Claims AI Chatbot Is “Sentient”

The Washington Post (6/11) reports, “The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.” Google engineer Blake Lemoine, for example, spent time testing Google’s AI chatbot LaMDA, and “noticed the chatbot talking about its rights and personhood.” Not only did the chatbot “change Lemoine’s mind about Isaac Asimov’s third law of robotics,” but Lemoine became convinced LaMDA is “sentient” and even “worked with a collaborator to present evidence to Google” of this, but Google Vice President Blaise Aguera y Arcas and chief of Responsible Innovation Jen Gennai “dismissed” Lemoine’s claims. In a statement, Google spokesperson Brian Gabriel said, “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

        The New York Times (6/12, Grant, Metz) reports Google placed senior software engineer Blake Lemoine, who worked in Google’s Responsible AI organization, on paid leave “after dismissing his claim that its artificial intelligence is sentient, surfacing yet another fracas about the company’s most advanced technology.” A day before his suspension, “Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.” Google “said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness.”

 

Microsoft Rolling Out Machine Learning Models To Improve Teams Acoustics

The Verge (6/13, Warren) reports Microsoft is expanding its “machine learning models to improve room acoustics” on Teams. The company has been piloting its tech “for months, measuring its models in the real world to ensure Teams users are noticing the echo reduction and improvements in call quality.” The Verge adds that processing for the new models “is done locally on client devices, instead of the cloud.”

dtau...@gmail.com

Nov 12, 2022, 12:42:30 PM11/12/22
to ai-b...@googlegroups.com

AI Model Can Help Prevent Data Breaches
Imperial College London (U.K.)
Gemma Ralton
November 8, 2022


The QuerySnout artificial intelligence (AI) algorithm designed by researchers at the U.K.'s Imperial College London (ICL) can check privacy-safeguarding systems for potential breaches. QuerySnout can automatically identify attacks on query-based systems (QBS) used by analysts to search data and retrieve aggregate information. The model learns which questions to ask the QBS to obtain answers, then learns to integrate the answers automatically to detect potential privacy bugs. QuerySnout uses machine learning to produce a query-based attack that combines answers in order to expose specific private data, following an evolutionary search to find the correct query sets. ICL's Ana-Maria Cretu said, "QuerySnout finds more powerful attacks than those currently known on real-world systems. This means our AI model is better than humans at finding these attacks."
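The class of leakage QuerySnout searches for can be illustrated with a classic differencing attack on an aggregate-query interface. The sketch below uses invented names and data: the query system only answers COUNT queries, yet combining two permitted counts isolates one person's private attribute.

```python
# Toy differencing attack on a query-based system (QBS).
# The QBS exposes only aggregate COUNT queries, but subtracting two of
# them reveals a single individual's private attribute.

RECORDS = [
    # (name, age, has_condition) -- invented illustrative data
    ("alice", 34, True),
    ("bob", 41, False),
    ("carol", 29, True),
    ("dave", 52, False),
]

def count_query(predicate):
    """The only interface the QBS exposes: an aggregate count."""
    return sum(1 for r in RECORDS if predicate(r))

# Attack: count people with the condition, with and without the target.
with_target = count_query(lambda r: r[2])
without_target = count_query(lambda r: r[2] and r[0] != "alice")

# The difference of the two aggregates reveals Alice's private attribute.
alice_has_condition = (with_target - without_target) == 1
print(alice_has_condition)  # True
```

QuerySnout's contribution is automating the search for such query combinations, rather than relying on an analyst to spot them by hand.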

Full Article

 

 

5G-Enabled Malware Classification System for Next-Generation Cybersecurity
Newswise
November 8, 2022


A multinational team of scientists led by Gwanggil Jeon at South Korea's Incheon National University created an artificial intelligence-based malware detection system for 5G-enabled Industrial Internet of Things systems. The system uses grayscale image visualization with a deep learning network to analyze malware, then applies a convolutional neural network framework to categorize malware attacks. The researchers integrate the system with 5G to enable low latency and high-throughput sharing of real-time data and diagnostics. The new model improved on conventional system architectures, achieving 97% accuracy on the benchmark dataset thanks to the system's ability to extract complementary discriminative properties by integrating multiple layers of data.
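The grayscale-visualization step described above can be sketched as a byte-to-pixel mapping: each byte of a binary becomes one pixel intensity (0-255) in a fixed-width image that a convolutional network can then classify. The image width and zero-padding below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of malware-as-image visualization: raw bytes of a binary are
# reshaped into rows of grayscale pixel intensities. A CNN would then
# consume this 2D array for classification. Width and padding value
# are illustrative choices.

def bytes_to_grayscale(data: bytes, width: int = 8):
    """Reshape raw bytes into rows of `width` pixels, zero-padded at the end."""
    padding = (-len(data)) % width
    padded = data + bytes(padding)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

# First bytes of a (truncated, illustrative) PE header: "MZ\x90\x00"...
image = bytes_to_grayscale(b"MZ\x90\x00\x03\x00\x00\x00\x04\x00", width=4)
print(len(image), len(image[0]))  # 3 4
```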

Full Article

 

 

Google Wants Robots to Write Their Own Python Code
ZDNet
Liam Tung
November 3, 2022


Google Research has introduced a new process for using large language models (LLMs) to permit robots to write their own code in Python, based on instructions from humans. This approach builds on Google's PaLM-SayCan model for robots to comprehend open-ended prompts and respond reasonably and safely in physical environments. Google researchers described the approach as Code as Policies, and said code-writing LLMs can be reconfigured to compose robot policy code in response to natural language commands. They said the process is able to handle values like velocities, based on vague natural-language descriptions like "faster" or "to the left," and can incorporate instructions in languages other than English.
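The overall loop can be sketched in a few lines: a language model turns a natural-language command into policy code, which is then executed against the robot's API. In this illustration the LLM is stubbed with a lookup table and the robot API (`set_velocity`, `turn`) is invented; the real system generates the code with PaLM-family models.

```python
# Minimal shape of "code as policies": natural language -> generated
# Python policy code -> executed against a robot API. The LLM is a
# stub lookup table and the robot API names are invented here.

class Robot:
    def __init__(self):
        self.velocity = 0.0
        self.heading = "forward"

    def set_velocity(self, v):
        self.velocity = v

    def turn(self, direction):
        self.heading = direction

def fake_llm(command: str) -> str:
    """Stand-in for a code-writing LLM: maps commands to policy code."""
    table = {
        # vague "faster" is grounded as a relative velocity update
        "go faster": "robot.set_velocity(robot.velocity * 1.5 + 0.1)",
        "to the left": "robot.turn('left')",
    }
    return table[command]

robot = Robot()
exec(fake_llm("go faster"), {"robot": robot})
exec(fake_llm("to the left"), {"robot": robot})
print(robot.velocity, robot.heading)  # 0.1 left
```

Note how a relative term like "faster" maps naturally to code that reads the current state, which is part of the appeal of emitting policies as code rather than fixed action values.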
 

Full Article

 

 

Synthetic Data Can Offer ML Performance Improvements
MIT News
Adam Zewe
November 3, 2022


A team of researchers at the Massachusetts Institute of Technology (MIT), the MIT-IBM Watson AI Laboratory, and Boston University found synthetic data can improve machine learning (ML) model performance. The researchers amassed the Synthetic Action Pre-training and Transfer dataset of 150,000 video clips to train three ML models on a wide range of human actions. They found the three models outperformed models trained with real video clips on four of six datasets, yielding the highest accuracy for datasets featuring clips with "low scene-object bias." MIT-IBM Watson AI Lab's Rogerio Feris said, "The ultimate goal of our research is to replace real data pretraining with synthetic data pretraining." Feris said that while there is a cost in creating an action in synthetic data, “Once that is done, then you can generate an unlimited number of images or videos by changing the pose, the lighting, etc."
 

Full Article

 

 

Detecting Nuclear Threats with Artificial Reasoning
Pacific Northwest National Laboratory
Sarah Wong
November 3, 2022


Scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) are employing machine learning (ML), data analytics, and artificial reasoning to better detect and analyze nuclear threats. PNNL's Benjamin Wilson and partners at Sandia National Laboratories built a virtual duplicate of a nuclear reprocessing facility, then trained an ML model to detect and process data patterns reflecting the diversion of nuclear materials. Wilson and other PNNL researchers also are working on the development of Artificial Judgement Assistance to analyze texts for signs of nuclear proliferation, and to answer direct queries from analysts. Another PNNL project with multi-institutional collaborators aims to facilitate ML-based forensic analysis of nuclear materials to determine their origin.
 

Full Article

 

 

Global Analysis Shows Where Fishing Vessels Turn Off ID Devices
University of California, Santa Cruz
Tim Stephens
November 2, 2022


Researchers at the University of California, Santa Cruz (UC Santa Cruz), Global Fishing Watch, and NOAA Fisheries have produced the first global dataset on the disabling of Automatic Identification System devices by commercial fisheries. The researchers also developed a machine learning method to distinguish intentional disabling from legitimate disabling. The more than 55,000 suspected intentional disabling events identified between 2017 and 2019 concealed almost 5 million hours of fishing vessel activity, with over 40% of those hours occurring in four hotspots, three of which are known for illegal fishing. High levels of disabling events also were seen in areas with high transshipment activity. Legitimate reasons for disabling included hiding good fishing locations from competitors and protection from piracy.
 

Full Article

 

 

ML Generates Pictures of Proteins in 5D
Washington University in St. Louis McKelvey School of Engineering
Brandie Jefferson
November 1, 2022


Washington University in St. Louis (WashU) researchers designed a machine learning (ML) algorithm to create five-dimensional (5D) images of molecules detailing both their three-dimensional (3D) orientations and two-dimensional (2D) positions. WashU's Tingting Wu and colleagues added a post-processing step to a classification algorithm, applying physical laws to noisy, pixelated images generated earlier. Wu said the result is a "beautiful image" that applies color, curvature, and direction to visualize thousands of interconnected molecules. The system will ultimately help scientists to better understand biological processes at extremely small scales.

Full Article

 

Despite Progress In AI, Companies Still Exaggerate Its Capabilities

The Wall Street Journal (6/29, Hao, Kruppa, Subscription Publication) reports that although companies have made big leaps in AI capability, AI ethicists and researchers are saying companies may be exaggerating those capabilities, leading to a misunderstanding of the technology’s power and fallibility.

 

Microsoft Removes Facial Recognition Features Amid Controversy Over Emotion AI

NBC News (6/27) reports, “When Microsoft announced last week it will remove several features from its facial recognition technology that deal with emotion,” Natasha Crampton, Microsoft’s chief responsible AI officer, “included a warning: The science of emotion is far from settled.” University of Oxford associate professor and senior research fellow Sandra Wachter “said that emotion AI has ‘at its best no proven basis in science and at its worst is absolute pseudoscience.’ Its application in the private sector, she said, is ‘deeply troubling.’”

 

AWS Official Recently Touted Value Of Using Synthetic Data

VentureBeat (6/27, Plumb) covers the growth of synthetic data, which “reflects real-world data, both mathematically and statistically, but it’s generated in the digital world by computer simulations, algorithms, statistical modeling, simple rules and other techniques.” VentureBeat highlights that Amazon “has relied on synthetic data to generate and fine-tune its Alexa virtual assistant” and “also offers WorldForge, which enables the generation of synthetic scenes, and just announced at its re:MARS (Machine Learning, Automation, Robotics and Space) conference last week that its SageMaker Ground Truth tool can now be used to generate labeled synthetic image data.” In a blog post, Antje Barth, principal developer advocate for AI and ML at Amazon Web Services (AWS), said, “Combining your real-world data with synthetic data helps to create more complete training datasets for training your ML models.” VentureBeat adds Barth “described the building of ML models as an iterative process involving data collection and preparation, model training and model deployment.”

 

ISTE Panelists Discuss Strategies To Navigate Ethics Of AI

Education Week (6/27, Lieberman) reports on more than 1,500 educators participating in the International Society for Technology in Education’s (ISTE) A.I. Explorations program, “which provides educators and students with tools to develop AI-related projects and learn about the increasingly complex role of artificial intelligence in society.” The June 27 panel “focused on the importance of training students to look at AI through the lens of equity and ethics.” EdWeek highlights “a few ways educators suggest doing that.” For example, panelists said that AI is “already everywhere, from algorithms that suggest television shows and music, to search engines that surface Internet content” and educators should become acquainted with it and its biases. Panelists also discussed using Jamboard and supporting “students who prefer to generate their own learning experiences rather than having teachers lead their own approach.”

 

Analysis Discusses How AI’s Hold Over Humans Is Becoming Stronger

In an analysis for Bloomberg (6/19, Olson), Parmy Olson wrote, “AI’s influence on our daily life is growing. As machine-learning models grow in complexity and improve their ability to mimic sentience, they are also becoming more difficult, even for their creators, to understand. That creates more immediate issues than the spurious debate about consciousness.” She continued, “And yet, just to underscore the spell that AI can cast these days, there seems to be a growing cohort of people who insist our most advanced machines really do have souls of some kind.” For example, she wrote, “In recent years, there’s been a surge in new, competing chatbot apps that offer an AI companion. And Kuyda has noticed a disturbing phenomenon: regular reports from users of Replika who say their bots are complaining of being mistreated by her engineers.” She concluded, “In a way, sentience doesn’t really matter if you’re worried it could lead to unpredictable algorithms that take over our lives. As it turns out, AI is on that path already.”

        Research Finds AI Adoption Is Growing But Governance Is Lagging. VentureBeat (6/20, Preimesberger) reports that “a new global research project conducted by Juniper Networks and Wakefield Research and released June 15 shows an increase in AI adoption during the last 12 months,” but “a shortage of human talent is holding a great deal of good implementation back.” Governance policies involving AI “continue to lack maturity, the report said, and this is also a stumbling block.” Both of these factors “are needed to responsibly manage AI’s growth when considering privacy issues, regulation compliance, hacking and AI terrorism, the survey said.”

        Former Google Employees Warn Of Dangers Of Sentient AI. In an opinion piece for The Washington Post (6/17), Distributed Artificial Intelligence Research Institute founder and executive director Timnit Gebru and Hugging Face chief ethics scientist Margaret Mitchell write, “Last Friday, a Post article by Nitasha Tiku revealed that Blake Lemoine, a software engineer working in Google’s Responsible AI organization, had made an astonishing claim: He believed that Google’s chatbot LaMDA was sentient.” They continue, “We were not surprised someone had made it. It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned – both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.” The pair added, “One of the risks we outlined was that people impute communicative intent to things that seem humanlike. ... That, combined with the fact that the training data – text from the internet – encodes views that can be discriminatory and leave out many populations, means the models’ perceived intelligence gives rise to more issues than we are prepared to address.”

 

AI Researchers Say Recent Google AI Debate Is Obfuscating Greater Issues In Industry

Bloomberg (6/14, Alba) reports a software engineer at Google “was suspended after going public with his claims of encountering ‘sentient’ artificial intelligence on the company’s servers – spurring a debate about how and whether AI can achieve consciousness.” However, researchers in the industry broadly argue the discussion is an “unfortunate distraction from more pressing issues in the industry” – such as effectively evaluating whether AI “can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.”

 

Air Force Uses AI To Create Accurate Weather Models

Defense News (6/14, Demarest) reports, “The Department of the Air Force is using artificial intelligence to augment weather predictions and better understand environmental conditions in places where the U.S. military may be headed.” Deputy Chief Information Officer Winston Beauchamp said at the UiPath TOGETHER Public Sector Conference, “Using AI, we’ve been able to enhance our weather models to take data that we don’t have ground-based sensors to collect, and extrapolate and interpolate between those data points to come up with weather predictions over parts of the world.” Beauchamp added, “We don’t always remember that weather is a really important thing in military operations, and it doesn’t always use the same data that our civilian weather forecasters use. When planning military operations, the weather can be a huge determiner of success or failure.”

dtau...@gmail.com

Nov 20, 2022, 11:09:14 AM11/20/22
to ai-b...@googlegroups.com

Solving Brain Dynamics Gives Rise to Flexible ML Models
MIT News
Rachel Gordon
November 15, 2022


Researchers at the Massachusetts Institute of Technology (MIT), Denmark's Aalborg University, and Austria's University of Vienna simulated the interaction between neurons and synapses in the brains of small species to generate flexible machine learning models that can adapt to changing conditions. The researchers mitigated the computational bottleneck caused by increasing numbers of neurons and synapses by solving the equation underlying synaptic neuron interaction. These closed-form continuous-time (CfC) neural networks share the flexible, causal, robust, and explainable properties of liquid neural networks, but are scalable and orders of magnitude faster. The models beat state-of-the-art counterparts on various tasks, with considerably higher speedups and performance. MIT's Daniela Rus said CfC models "open the way to trustworthy machine learning for safety-critical applications."
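The speedup comes from evaluating the state in closed form rather than running an ODE solver. A minimal single-unit sketch of one published CfC formulation follows; the state blends two heads through a time-dependent sigmoid gate, and the toy linear/tanh heads below are invented stand-ins for trained subnetworks.

```python
import math

# Single-unit sketch of a closed-form continuous-time (CfC) update:
#     x(t) = sigmoid(-f * t) * g + (1 - sigmoid(-f * t)) * h
# where f, g, h are learned functions of the state x and input u.
# Here they are fixed toy maps standing in for trained networks.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cfc_step(x, u, t):
    """Read out the CfC state at elapsed time t, with no ODE solver."""
    f = 0.5 * x + 1.0 * u      # gating head (acts like a time constant)
    g = math.tanh(0.8 * u)     # head dominating at small t
    h = math.tanh(0.3 * x)     # head dominating at large t
    gate = sigmoid(-f * t)
    return gate * g + (1.0 - gate) * h

# As t grows the gate closes and the state relaxes toward h.
print(round(cfc_step(0.2, 1.0, 0.1), 3))
```

Because `x(t)` is a direct expression in `t`, the state at any time is one evaluation rather than many small solver steps, which is the source of the reported order-of-magnitude speedups.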
 

Full Article

 

 

AI Uses Artificial Sleep to Learn New Task without Forgetting the Last
New Scientist
Jeremy Hsu
November 10, 2022


Scientists at the University of California, San Diego (UCSD) and the Czech Republic's Czech Academy of Sciences taught an artificial intelligence (AI) to learn a second distinct task without overwriting connections learned from a first task, through the use of simulated sleep. UCSD's Erik Delanois said it was critical to "have rapidly alternating sessions of training and sleep" while the AI was learning the second task, which consolidated links from the first task that would have otherwise been forgotten. "Such a network will have the ability to combine consecutively learned knowledge in smart ways, and apply this learning to novel situations—just like animals and humans do," said the University of Massachusetts Amherst's Hava Siegelmann.

Full Article

 

 

Ensuring AI Works with the Right Dose of Curiosity
MIT News
Rachel Gordon
November 10, 2022


An algorithm designed by Massachusetts Institute of Technology (MIT) researchers surmounts the problem of artificial intelligence (AI) agents being so curious that they are distracted from the task at hand. The algorithm automatically boosts curiosity when necessary, and restrains it if environmental supervision is sufficient for the AI to complete its task. The algorithm successfully performed both difficult and easy exploration tasks when tested on more than 60 videogames, while previous algorithms could only complete either hard or easy tasks. MIT's Zhang-Wei Hong said the algorithm "removes the burden of tuning the balance of exploration and exploitation. Previously what took, for instance, a week to successfully solve the problem, with this new algorithm, we can get satisfactory results in a few hours."
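The exploration-exploitation balance being auto-tuned can be sketched as a mixing weight on a curiosity bonus that shrinks when extrinsic (task) reward is flowing and grows when it is not. The update rule below is an invented illustration of the idea, not MIT's actual algorithm.

```python
# Toy auto-tuned curiosity: the agent's total reward would be
#     total = extrinsic + beta * novelty_bonus
# and beta adapts to how much supervision the environment provides.

def adapt_beta(beta, extrinsic, lr=0.5, lo=0.0, hi=1.0):
    """Lower curiosity when extrinsic reward is present, raise it otherwise."""
    target = lo if extrinsic > 0 else hi
    return beta + lr * (target - beta)

beta = 0.5
for extrinsic in [0.0, 0.0, 1.0, 1.0]:  # sparse reward, then dense reward
    beta = adapt_beta(beta, extrinsic)
print(round(beta, 3))  # 0.219
```

With sparse reward the weight climbs (more exploration); once the task supplies reward it decays, so the agent stops being "driven to distraction" by novelty.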

Full Article

 

 

Method Incorporates AI to Design Power Electronic Converters
Interesting Engineering
Brittney Grimes
November 9, 2022


At the U.K.'s Cardiff University and compound semiconductor applications developer CSA Catapult, researchers employed artificial intelligence (AI) to produce a design model for power electronic converters. They trained an artificial neural network on a dataset of more than 2,000 designs, then selected the layout best able to deliver the targeted efficiency and power density. The researchers built power converters from gallium nitride (GaN) field-effect transistors, inductors, capacitors, and heat sinks. Cardiff University's Wenlong Ming said, "Accurate and fast transient modeling/simulation approaches are essential to efficiently and to rapidly optimize the performance of wide bandgap power electronics systems."
 

Full Article

 

 

Researchers Develop Meta-Reinforcement Learning Algorithm for Traffic Signal Control
Chung-Ang University (South Korea)
November 9, 2022


A meta-reinforcement learning (RL) model for traffic signal control developed by researchers at South Korea's Chung-Ang University can adjust its goal based on the traffic environment. The extended deep Q-network-incorporated context-based meta-RL model uses a latent variable that indicates the overall environmental condition to assess traffic flow, then implements traffic signal phases to either maximize throughput or minimize delays. The action is controlled via a "reward," with the reward function set corresponding to a better or worse performance in managing traffic relative to the prior interval. Based on tests using a commercial traffic simulator and real-world tests at 15 intersections in Seoul, the researchers found the model could switch control tasks without explicit traffic information and differentiate between rewards in accordance with traffic saturation levels.
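The objective switch the summary describes can be sketched as a regime-dependent reward function: under saturated traffic the controller is rewarded for throughput, and in light traffic it is penalized for delay. The saturation threshold and reward scales below are invented for illustration.

```python
# Sketch of the goal-switching reward in the meta-RL traffic controller.
# In the real model a learned latent variable summarizes the traffic
# regime; here a simple saturation ratio stands in for it.

def traffic_reward(saturation, throughput, total_delay, threshold=0.8):
    """Pick the reward signal according to the traffic regime."""
    if saturation >= threshold:       # congested: maximize vehicles served
        return float(throughput)
    return -float(total_delay)        # light traffic: minimize waiting

print(traffic_reward(0.9, throughput=42, total_delay=10))  # 42.0
print(traffic_reward(0.3, throughput=42, total_delay=10))  # -10.0
```

The learning problem is then for the agent to infer the regime from observations and maximize whichever reward is active, rather than being told the objective explicitly.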

Full Article

 

 

App Identifies Parkinson's, COVID-19 Based on User's Voice
IEEE Spectrum
Michelle Hampson
November 8, 2022


Machine learning algorithms incorporated into a smartphone application analyze people's voices to identify those in the early stages of Parkinson's disease (PD) or with severe COVID-19 lung infections. Researchers at Australia's Royal Melbourne Institute of Technology (RMIT) used an iOS-based smartphone to record study participants uttering different phonemes that required sounds from the throat, the mouth, and the nose, then created and applied an algorithm to differentiate between persons with PD and healthy volunteers. The algorithm could identify PD sufferers with 100% accuracy. A different algorithm co-developed by RMIT researchers was able to phonemically differentiate patients with COVID-19 lung infections from healthy controls in Indonesia with 94% accuracy.
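Pipelines like this rest on acoustic features computed from each phoneme recording. The sketch below computes two classic low-level features (frame energy and zero-crossing rate) from a raw waveform; the RMIT models use far richer feature sets, so this only illustrates the kind of front end involved.

```python
# Two simple acoustic features a voice-screening model might start from.
# The waveform is a toy list of samples; real input would be thousands
# of samples per phoneme recording.

def energy(samples):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in samples) / len(samples)

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that cross zero."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)

wave = [0.1, -0.2, 0.3, -0.4, 0.5]  # toy waveform
print(round(energy(wave), 3), zero_crossing_rate(wave))
```

A classifier would then be trained on vectors of such features extracted per phoneme, which is why the study had participants produce sounds from the throat, mouth, and nose separately.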

Full Article

 

 

Go-Playing Trick Defeats World-Class Go AI—but Loses to Human Amateurs
Ars Technica
Benj Edwards
November 7, 2022


University of California, Berkeley (UC Berkeley) researchers used a neural network and the Monte-Carlo Tree Search method to identify moves in the ancient board game Go that can defeat open source artificial intelligence KataGo by exploiting its blind spots. The weaker adversarial Go-playing program can trick KataGo into losing by playing unexpected moves outside KataGo's training set. The adversarial policy first claims a small corner of the board, allowing KataGo to claim the remainder of the board. UC Berkeley's Adam Gleave said, "This tricks KataGo into thinking it's already won, since its territory (bottom-left) is much larger than the adversary's. But the bottom-left territory doesn't actually contribute to its score (only the white stones it has played) because of the presence of black stones there, meaning it's not fully secured." Gleave said that while the result is entertaining in Go, “Similar failures in safety-critical systems could be dangerous."

Full Article

 

 

AI Deciphers Detector 'Clouds' to Accelerate Materials Research
SLAC National Accelerator Laboratory
Chris Patrick
November 7, 2022


A team of scientists led by Joshua Turner at the U.S. Department of Energy's SLAC National Accelerator Laboratory has developed an algorithm that extracts data to accelerate materials research with X-ray pulse pairs. The machine learning technique uses raw detector imagery of scattered photons to extract fluctuation information 10 times faster on its own and 100 times faster in conjunction with upgraded hardware, enabling near-real-time data analysis. This resolves charge clouds formed when speckle patterns of scattered photons merge. The algorithm learned how the charge clouds merge, and was able to unsnarl how many photons struck the detector per blob and per pulse pair.

Full Article

 

Analysis Discusses How To Get People To Trust AI Predictions

In an analysis for The Wall Street Journal (11/17, Reich, Maglio, Subscription Publication), Yale School of Management associate professor Taly Reich and University of Toronto Scarborough associate professor Sam Maglio write about their recent study on people’s aversion to trusting artificial intelligence as reliable. They discuss a solution to this by showing people that machine learning algorithms can learn and improve, highlighting an experiment where participants had to choose between a human psychologist or an algorithm to predict someone else’s personality.

 

SUNY Albany Hiring 27 Professors Specializing In AI

The Albany (NY) Times Union (11/16, Moore) reports, “The University at Albany says it is hiring 27 new professors in almost every department to teach students how artificial intelligence affects their chosen field.” The new faculty will work both in STEM fields and in philosophy, political science, business, public health, and education. The article quotes President Havidán Rodríguez saying, “AI increasingly touches every facet of daily life. This will ensure every graduate has the foundation they need and is well prepared for whatever career they choose.”

 

Google Research Unveils AI Projects As Concerns Are Raised About Regulation Of Data Security

Google Research unveiled over a dozen AI incubator projects to address a wide variety of issues at a media event on Wednesday, including a maternal health/ultrasound AI system and a screening system to detect diabetic retinopathy, Axios (11/3, A. Kingson) reports. Recently, concerns over privacy rights, misinformation, and control of consumer data have become a key issue which “prompted the White House to issue a preliminary ‘AI Bill of Rights,’ encouraging technologists to build safeguards into their products.” But, although “Google published its principles of AI development in 2018 and other tech companies have done the same, there’s little-to-no government regulation.” As progress on AI continues, “companies such as Google” are “positioned to serve as moral arbiters and standard-setters.”

 

Google Cloud, Vodafone Unveil AI Platform

Computer Weekly (7/6, Donnelly) reports Vodafone and Google Cloud “have gone public with details of a jointly created artificial intelligence platform that is already live in more than 18 countries.” The AI Booster program “has been 18 months in the making and its creation is part of an ongoing push by the Vodafone Commercial division to embed more AI, machine learning (ML) technologies and predictive capabilities into the company’s overall operations.”

 

AI May Have Significant Benefit In Improving Assessment Of Fractures, Study Suggests

Diagnostic Imaging (7/6) reports, “Researchers have noted that traumatic fractures are among the most commonly missed diagnoses,” but “a new study suggests that artificial intelligence (AI) may have significant benefit in improving the assessment of fractures.” In the study of 500 patients, researchers “found that AI assisted assessment had a 20 percent higher sensitivity (86 percent) of diagnosing fractures on radiographs in comparison to unassisted assessment (66 percent).” The findings were published in Diagnostic and Interventional Imaging.

dtau...@gmail.com

Nov 25, 2022, 7:56:51 AM11/25/22
to ai-b...@googlegroups.com

Transforming Robots Help to Transfer Skills
Scientific American
Matthew Hutson
December 1, 2022


Carnegie Mellon University researchers have developed a technique to help differently shaped robots transfer skills between each other. The Robot-Evolve-Robot (REvolveR) scheme involves two robots of different design learning skills through a chain of simulated intermediate robots. Each robot practices the desired task, tuning an artificial neural network until it achieves a threshold success rate before the controller code is handed to the next robot in the line. The researchers constructed a shared "kinematic tree"—a series of nodes representing limb components linked by connections representing joints—to switch between virtual source and target robots. The researchers found REvolveR surpassed baseline training methods, such as training the target robot from scratch.

Full Article

*May Require Paid Registration

 

 

Traffic Experiment Pits ML Against 'Phantom' Jams
Berkeley News
Kara Manke
November 22, 2022


A five-day experiment aimed to assess the ability of artificial intelligence (AI)-outfitted vehicles to mitigate "phantom" traffic jams using machine learning technology developed by the multi-university CIRCLES consortium. Scientists at the University of California, Berkeley (UC Berkeley), Nissan North America, Toyota, General Motors, and the Tennessee Department of Transportation dispatched vehicles with AI-powered cruise control onto a stretch of Nashville's I-24 highway. The cruise control systems automatically adjusted the vehicles’ speeds to local traffic conditions, to improve overall traffic flow and fuel economy. Said UC Berkeley's Maria Laura Delle Monache, "By conducting the experiment at this large of a scale, we hope to show that our results can be reproduced at the societal level."

Full Article

 

 

Nvidia's Magic3D Can Generate 3D Models from Text
Ars Technica
Benj Edwards
November 21, 2022


An artificial intelligence model developed by researchers at Nvidia can transform text descriptions into three-dimensional (3D) models. Magic3D produces a 3D mesh model with colored texture in around 40 minutes, which could enable faster video game and virtual reality development. Magic3D optimizes a coarse model generated in low resolution to a higher resolution, a process that, the researchers said, is twice as fast as Google's DreamFusion text-to-3D model. Magic3D also offers prompt-based editing of 3D meshes, and can preserve a subject through several generations. In their paper, the researchers wrote, "We hope with Magic3D, we can democratize 3D synthesis and open up everyone's creativity in 3D content creation."

Full Article

 

 

Records for the Biggest, Smallest AI Computers
IEEE Spectrum
Samuel K. Moore
November 17, 2022


The latest benchmarks released by the machine learning (ML) consortium MLCommons show that Nvidia remains the leader in ML training, with its A100 graphics processing units (GPUs) continuing to dominate the MLPerf list. However, Microsoft's Azure cloud systems were on par with the best A100 on-premises computers in terms of image classification. Nvidia’s Dave Salvator said the A100's average performance is now 2.5 times greater than when it appeared on the MLPerf benchmarks in 2020. Meanwhile, Nvidia's upcoming H100 GPU is 6.7 times faster than the A100's initial scores, but only 2.6 times faster than its current scores.
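The three reported ratios are mutually consistent: an H100 at 6.7× the A100's 2020 debut scores, divided by an A100 that has itself improved 2.5× since then, works out to about 2.7×, close to the reported 2.6× (the figures are rounded, so exact agreement isn't expected):

```python
# Reported MLPerf speedup ratios from the article (treated as approximate).
a100_now_vs_2020 = 2.5    # A100 today vs. its 2020 debut scores
h100_vs_a100_2020 = 6.7   # H100 vs. A100's 2020 debut scores
h100_vs_a100_now = 2.6    # H100 vs. A100's current scores

# Sanity check: the first two figures imply the third, up to rounding.
implied = h100_vs_a100_2020 / a100_now_vs_2020  # 6.7 / 2.5 = 2.68
```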

Full Article

 

 

AI Is Better at Answering Questions if You Get Another AI to Ask
New Scientist
Alex Wilkins
November 10, 2022


An artificial intelligence (AI) model developed by researchers at Canada's University of Toronto makes prompt suggestions to another AI and generates results as good as if the prompts had come from people. The Automatic Prompt Engineer (APE) model shows a large language model examples of inputs paired with a desired output, and the language model proposes various prompts predicted to generate that output. APE ranks the suggestions and chooses the best one, though it is uncertain how that decision is made. The researchers used the model to identify prompts for 24 tasks related to language understanding and compared them to prompts from 10 human engineers, and found APE performed on par or better on 19 of the tasks.
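The generate-then-rank loop can be sketched with a toy stand-in for the language model (a rule-based function here, for illustration only; the real system queries an LLM both to propose prompts and to score them):

```python
def toy_model(prompt, x):
    """Stand-in for a large language model: a hand-coded rule keyed on the
    prompt text. (APE queries an actual LLM; this only shows the loop.)"""
    if "reverse" in prompt:
        return x[::-1]
    if "upper" in prompt:
        return x.upper()
    return x

def score(prompt, dataset):
    """Fraction of input-output demonstrations the prompt reproduces."""
    return sum(toy_model(prompt, x) == y for x, y in dataset) / len(dataset)

def ape_select(candidates, dataset):
    """APE's ranking stage: score each proposed prompt, keep the best."""
    return max(candidates, key=lambda p: score(p, dataset))

# Demonstrations of the desired behavior, plus candidate prompts the
# proposal stage might have generated (all hypothetical).
demos = [("cat", "tac"), ("dog", "god")]
candidates = ["upper-case the word", "reverse the word", "repeat the word"]
best = ape_select(candidates, demos)  # picks "reverse the word"
```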

Full Article

*May Require Paid Registration

 

Twitter, Meta Layoffs Reportedly Signal Shift In AI, ML Talent Landscape

VentureBeat (11/14, Goldman) reports large-scale layoffs at Twitter and Meta “matter, say experts, because they signal a shift in the landscape of even the most sought-after AI and ML talent, as well as a reckoning for Big Tech and enterprise business in terms of how they respond regarding their own responsible AI efforts.”

 

Global Effort Focuses On Building More Transparent, Accountable AI With Less Bias

The Washington Post (7/21, Tuku) reports Yacine Jernite’s fears “about bias in artificial intelligence were vividly affirmed in 2017, when a Facebook translation error led Israeli police to arrest a Palestinian construction worker.” The man had “posted a picture of himself leaning against a bulldozer with the caption, in Arabic, ‘good morning.’” Facebook “mistakenly translated it, in Hebrew, as ‘attack them.’” The error was “quickly discovered and the man released, according to a report in Haaretz, but the incident cemented personal concerns about AI for Jernite, who joined Facebook’s AI division soon after.” Now Jernite, 33, is “trying to push AI in a better direction.” After leaving Facebook, he “joined BigScience, a global effort by 1,000 researchers in 60 countries to build a more transparent, accountable AI, with less of the bias that infects so many Big Tech initiatives.” The largely volunteer effort “trained a computer system with good data that was curated by humans from different cultures, rather than readily available data scraped from the internet, written mostly in English, and riddled with harmful speech on race, gender, and religion.” The resulting AI was “released on July 12 for researchers to download and study.”

 

AI-Powered Tutor Helps Address Learning Loss For Students

The Seventy Four (7/18, Newcomb) reports CK-12, “a nonprofit focused on pairing educational content with the latest technologies, has fully embraced artificial intelligence, giving students and teachers using its free learning system access to an AI-powered tutor dubbed Flexi.” Employing artificial intelligence, CK-12 engineers “programmed Flexi to act as a tutor, responding to math and science questions, testing students’ knowledge, helping with homework and providing real-world examples of hard-to-grasp concepts.” Flexi can “start simple, with a student asking a basic science question within CK-12’s online system, such as: ‘Does photosynthesis happen at night?’ or ‘Define photosynthesis.’” Flexi “answers the question and backs it up with content, such as video simulations or real-world examples, [Miral Shah, chief technology officer] says.”

        Tutoring has “emerged as a key strategy for helping students rebound from COVID learning loss, but tutoring resources remain in short supply.” Providing a digital solution “to that problem has become a potential growth point for education tech companies.” However, “while CK-12 and others, such as Amira Learning, offer AI-driven tutoring, the concept of online tutoring itself remains relatively new and lacks research to prove its effectiveness.”

 

Seattle Startup Aims To Clean Up Industrial Production With AI

Bloomberg (7/15, Bergen) reported Phaidra, a Seattle-based startup founded by former Google engineers, aims to clean up industrial production, “one of the dirtiest corners of the corporate world,” with artificial intelligence. According to Bloomberg, “It’s relying on the same fix as their former company, DeepMind, Google’s research lab.” Bloomberg added Phaidra’s algorithms “are designed to select the most efficient temperature for unique facilities, such as a steel mill or a vaccine manufacturer, and identify when equipment starts to lag in performance. Once in place, Phaidra’s system can trim a plant’s energy consumption by up to 30 percent and save considerable amounts of capital, according to the startup.”

 

Report Shows University Hiring Of AI Faculty Has Not Kept Pace With Student Demand

Inside Higher Ed (7/11) reports on the “prevailing narrative that artificial intelligence experts leave academe for industry in droves.” But the “dearth of AI professors at U.S. universities is not the result of a distorted job market, according to a report issued this month from the Center for Security and Emerging Technology.” Rather, AI experts “remain interested in academic careers, but university hiring of AI faculty has not kept pace with student demand.” Though big tech “has stepped in to fill some of the gap, some experts urge caution given that the industry’s incentive structure differs from that of academe.” Historically, academe “has delivered a steady stream of developers, engineers and entrepreneurs that has fueled an AI innovation ecosystem.” This stream has “been directly correlated with AI faculty teaching capacity.” However, “while student enrollment in computer science programs has skyrocketed in the past decade, universities have not hired enough computer science faculty to meet that demand.”

 

George Mason Researchers Receive Grant To Study Using AI For Better Forensic Bruise Analysis

Forensic Magazine (7/6) reported that researchers in George Mason’s College of Health and Human Services and Volgenau School of Engineering “received a $988,559 grant from the U.S. Department of Justice, Office of Justice Programs, for a three-year study pairing forensic bruise analysis with machine learning.” Director of the Machine Learning and Inference Laboratory Janusz Wojtusiak, School of Nursing Assistant Professor Katherine Scafide, and Associate Professor in the Department of Civil, Environmental, and Infrastructure Engineering David Lattanzi “are continuing their work on forensic bruise analysis on diverse skin tones with this new grant.” The team “will develop a new, quantitative approach to identify the age of a bruise using deep learning models while determining its reliability and accuracy.” Additionally, the team “will develop a secure, searchable platform to store digital bruise images providing a comprehensive look at the healing process across diverse skin tones.” The platform “will integrate deep learning modeling and support future research and collaboration within the forensic science community.”

dtau...@gmail.com

unread,
Dec 4, 2022, 1:22:54 PM12/4/22
to ai-b...@googlegroups.com

We Will See a Completely New Type of Computer, Says AI Pioneer Hinton
ZDNet
Tiernan Ray
December 1, 2022


Artificial intelligence pioneer and 2018 ACM A.M. Turing Award recipient Geoffrey Hinton envisions a "mortal" neuromorphic computer combining hardware and software. Speaking at the Neural Information Processing Systems conference, Hinton said mortal computation means "the knowledge that the system has learned and the hardware, are inseparable." Hinton said such computers could be grown, forgoing costly chip fabrication, and he imagines they will be "used for putting something like GPT-3 in your toaster for $1, so running on a few watts, you can have a conversation with your toaster." He suggested a forward-forward neural network model, which eliminates the backpropagation common to most neural networks, might suit mortal computation hardware.

Full Article

 

 

Breaking the Scaling Limits of Analog Computing
MIT News
Adam Zewe
November 29, 2022


Massachusetts Institute of Technology (MIT) researchers have developed a method to significantly reduce error in analog optical neural networks, which could enable their upscaling for commercial purposes. The researchers designed new tunable mirrors (Mach-Zehnder interferometers, or MZIs) to form an optical neural network. The 3-MZI has three beam splitters rather than two, making it easier for each device to reach the setting required to transmit all light through its bottom port, without additional wiring. Simulations showed the 3-MZI can eliminate much of the uncorrectable error that inhibits network accuracy; the extent of error also decreases as the network's size increases.

Full Article

 

 

Why Researchers Are Teaching AI to Play Minecraft
Popular Science
Andrew Paul
November 28, 2022


OpenAI has developed a Minecraft-playing bot that can build pixelated tools and buildings in the game that require more than 20,000 consecutive actions via a combination of imitation and reinforcement learning. The bot, trained on 70,000 hours of human gameplay, is the first to build "diamond tools," which take human players 20 minutes and 24,000 actions, on average, to construct. Imitation learning requires each step to be hand-labeled, but the researchers used a separate neural network to handle labeling via Video Pre-Training. The researchers said the use of imitation and reinforcement learning in combination could pave the way for advancements in self-driving vehicles and nuclear fusion research.

Full Article

 

 

Disney Neural Network Can Change an Actor's Age with Ease
Ars Technica
Benj Edwards
November 30, 2022


A neural network developed by researchers at Disney offers a cheaper, faster way to make actors look older or younger in TV or film. Disney said the artificial intelligence technique called FRAN (Face Re-aging Network) is "the first practical, fully automatic, and production-ready method for re-aging faces in video images." The researchers trained FRAN to learn how a person's appearance changes with age by feeding it thousands of images of faces synthetically aged from 18 to 85 years old by StyleGAN2. Disney said the system is the first that can re-age an actor’s image across various expressions, lighting conditions, and viewpoints.

Full Article

 

 

Nanoengineers Develop Predictive Database for Materials
UC San Diego Today
Emerson Dameron
November 28, 2022


The M3GNet algorithm developed by nanoengineers at the University of California, San Diego (UCSD)'s Jacobs School of Engineering can forecast the structure and dynamic properties of any material almost instantaneously. Researchers used M3GNet to compile the matterverse.ai database of more than 31 million yet-to-be-synthesized materials with traits predicted by machine learning algorithms. UCSD's Shyue Ping Ong and colleagues combined graph neural networks with many-body interactions into a highly accurate deep learning framework that operates across the entire periodic table. The team employed the Materials Project's database of materials energies, forces, and stresses to train the predictive M3GNet interatomic potential model. "We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures," said Ong.

Full Article

 

 

Programming Tool Turns Handwriting into Computer Code
Cornell Chronicle
Louis DiPietro
November 28, 2022


A team of Cornell University researchers created the Notate interface to translate handwriting and sketches into computer code. The pen-based interface enables digital notebook users to open drawing canvases and to handwrite diagrams within lines of traditional code. Notate is driven by a deep learning model, allowing notation in the handwritten diagram to reference textual code and vice versa. Cornell's Ian Arawjo said, "People are ready for this type of feature, but developers of interfaces for typing code need to take note of this and support images and graphical interfaces inside code."

Full Article

 

 

Golf Robot Putts Like a Pro
IEEE Spectrum
Edd Gent
November 24, 2022


The Golfi robot built by Annika Junker and colleagues at Germany's Paderborn University combines classical control engineering and machine learning to putt with professional golfer-level skill. Golfi captures images of the green through a ceiling-mounted three-dimensional camera, which feeds its data into a physics-based model to enable the simulation of thousands of random shots from different positions. A neural network employs this data to predict the amount of force and direction required when hitting a ball to get it in the hole from anywhere on the green. Junker said Golfi was designed to demonstrate the capability of hybrid robotic control techniques.
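Golfi's pipeline — simulate many random shots with a physics model, then fit a predictor mapping distance to required force — can be sketched in one dimension. The linear "physics," the friction constant, and the least-squares fit below are all illustrative assumptions; Golfi models the full green and trains a neural network:

```python
import random

def simulate_shot(force, mu=0.8):
    """Toy physics model: distance rolled is proportional to applied force,
    scaled by a friction-like coefficient (a stand-in for the full simulator)."""
    return force / mu

def make_dataset(n=2000, seed=0):
    """Simulate random shots, as Golfi does with its physics model, recording
    (distance covered, force that covers it) pairs."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        force = rng.uniform(0.1, 10.0)
        data.append((simulate_shot(force), force))
    return data

def fit_linear(data):
    """Least-squares slope, standing in for the neural network that maps hole
    distance to the force needed to sink the putt."""
    n = len(data)
    sx = sum(d for d, _ in data)
    sy = sum(f for _, f in data)
    sxx = sum(d * d for d, _ in data)
    sxy = sum(d * f for d, f in data)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

slope = fit_linear(make_dataset())
predicted_force = slope * 5.0  # force needed to cover 5 units of green
```

With noiseless toy physics the fit recovers the friction constant exactly; the real system learns from a far richer simulated shot distribution.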

Full Article

 

 

Engineers Improve Electrochemical Sensing by Incorporating ML
Penn State College of Engineering News
Mary Fetzer
November 23, 2022


Researchers at Pennsylvania State University College of Engineering (Penn State Engineering) combined machine learning (ML) with multimodal measurement to enhance electrochemical biosensor analysis. The researchers designed a platform to selectively measure uric acid and tyrosine concentrations in sweat and saliva with a single sensor. Said Penn State Engineering's Aida Ebrahimi, "Using our optimized machine learning architecture, we could detect biomolecules in amounts 100 times lower than what conventional sensing methods can do." While the system's ML model is trained to identify biomolecules in biological fluids, Penn State Engineering's Vinay Kammarchedu suggested the method may find wider use "in multiplexed biochemical sensing."

Full Article

 

 

AI Learns the Art of Diplomacy
Science
Matthew Hutson
November 22, 2022


Computer scientists at Meta have developed an artificial intelligence (AI) algorithm able to outplay humans in Diplomacy, a board game requiring strategic planning and verbal negotiations with other players. CICERO, Meta's AI agent, features both strategic reasoning and dialogue modules. CICERO was tested in 40 online games against humans, placing in the top 10% of players with at least two games played. However, around 10% of CICERO's messages were rated by Diplomacy experts as inconsistent with its plan or game state, indicating that its success could be related to the strategic planning, rather than to the dialogue, module.

Full Article

 

Collapse Of FTX Creates Financial Concerns For AI Research Funding

The New York Times (12/1, Metz) reports AI startups funded by FTX Founder Sam Bankman-Fried are uncertain about the state of their finances following FTX’s bankruptcy. In the last two years, Bankman-Fried and other FTX staff “funneled more than $530 million – through either grants or investments – into more than 70 A.I.-related companies, academic labs, think tanks, independent projects and individual researchers to address concerns over the technology.” Four sources “close to the A.I. efforts who were not authorized to speak publicly” told the Times “they were worried that Mr. Bankman-Fried’s fall could cast doubt over their research and undermine their reputations.” There are even concerns some of the startups may be pulled into FTX’s bankruptcy proceedings.

 

Nvidia Announces Open-Source Framework MONAI Easing AI Integration Into Medical Imaging

VentureBeat (11/28, Goldman) reports Nvidia announced Monday “at the annual meeting of the Radiology Society of North America (RSNA) that MONAI, an open-source medical-imaging AI framework accelerated by Nvidia, is making it easier to integrate AI models into clinical workflows with MONAI Application Packages (MAPs), delivered through MONAI Deploy.” VentureBeat says “medical-imaging leaders” are “adopting MONAI Deploy to turn research breakthroughs into clinical impact, Nvidia said in a press release. In addition, all the major cloud providers, including Amazon...are supporting MAPs, enabling researchers and companies using MONAI Deploy to run AI applications on their platform, either by using containers or with native app integration.” VentureBeat adds, “The MAP connector has been integrated with Amazon HealthLake Imaging, which allows clinicians to view, process and segment medical images in real time.”

 

GE Healthcare Launches AI Platform To Simplify MRI Scans

FierceBiotech (11/28, Hale) reports, “GE Healthcare has launched a new artificial intelligence platform that aims to simplify much of the leg work in conducting an MRI scan, with software designed for technologists with little to no previous experience.” According to FierceBiotech, “the new offering is centered around SIGNA One, a simplified computer interface that is currently only available on GE’s smaller, 1.5T SIGNA Prime scanner. With programs designed to allow the user to prescribe and perform an exam with only a few clicks, the company said in a release the system has ‘virtually no learning curve.’”

 

FDA Grants Aidoc Two Clearances For AI Programs That Help Analyze CT Scans

FierceBiotech (11/28, Hale) reports, “Aidoc obtained two new clearances from the FDA for its artificial intelligence programs that help analyze CT scans to spot dangerous conditions and alert providers.” These “include one to detect aortic dissection, where tears can occur in the walls of the body’s largest artery, and one for identifying stroke-causing blockages within the various blood vessels that feed the brain.”

 

Amazon Developing AI Software To Screen Job Applicants, Internal Document Shows

Recode (11/23, Del Rey) reported, “Last week, Amazon extended buyout offers to hundreds of its recruiters as part of what is expected to be a months-long cycle of layoffs that has left corporate employees across the company angered and on edge.” Now, a confidential internal document raises the question of whether a new artificial intelligence technology that the company began experimenting with last year will one day replace some of these employees. The company “first built AI hiring technology in the mid-2010s but discontinued use of its system after it demonstrated a bias against women.” Based on “an initial test, Amazon’s HR division believed that new machine learning models successfully guarded against biases based on race and gender, according to the internal document” dated October 2021.

 

EdWeek Provides Information To Educators On Biden’s ‘AI Bill Of Rights’

Education Week (10/24) provides “some critical facts educators should know about” the White House’s Bill of Rights for AI released earlier this month. The Bill of Rights is “simply guidance for areas of the economy that rely on AI, though that’s increasingly in nearly every area of the economy.” If anything, its principles “may apply to AI use by the federal government, according to an analysis in Wired magazine.” However, “it’s not going to force Facebook or Netflix or even a state criminal justice system to make changes to the way they use AI, unless they voluntarily decide to embrace the principles.” The AI Bill of Rights is centered around five principles, such as “you should be protected from unsafe or ineffective systems” and “you should have protections against abusive data practices and agency over how data about you is used.”

 

OpenAI’s DALL·E 2 Illustrated Challenges Of Bias In AI

NBC News (7/27, Traylor) reports that “an artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias.” While OpenAI, “the company behind the program, called DALL·E 2, has sought to address the issues, the efforts have also come under scrutiny for what some technologists have claimed is a superficial way to fix systemic underlying problems with AI systems.” OpenAI “released the second version of its DALL·E image generator in April to rave reviews.” As with many AI programs, “it did not take long for some users to start reporting what they saw as signs of biases.” OpenAI “used the example caption ‘a builder’ that produced images featuring only men, while the caption ‘a flight attendant’ produced only images of women.” In anticipation “of those biases, OpenAI published a ‘Risks and Limitations’ document with the limited release of the program before allegations of bias came out, noting that ‘DALL·E 2 additionally inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.’”

 

AI Tools Increasingly Providing Content Creators With New Ways To Create Music

In his column for Rolling Stone (7/27), Paul Herrera says that although AI-powered tools “can now provide insight into many of the questions that previously confounded stakeholders across the music industry,” it is also being applied “in new and innovative ways beyond understanding listeners and their preferences” – in particular, it has become “the latest method” for creating music itself. The demands of “a content-driven music industry” – in which producers “are expected to constantly deliver volumes of music for video content and more” – can be “difficult to meet,” and independent creators “need original music for their content, preferably without paying the high costs of licensing music or using recording studios.” AI, Herrera says, will allow those content creators to use machine learning models or neural networks to compete with larger companies and remain relevant in the digital age.
