AI Predicts Potential Nutrient Deficiencies from Space
Scientific American
Rachel Berkowitz
June 1, 2022
Harvard University computer scientists found that geographic areas with populations at high risk of micronutrient deficiencies can be identified using publicly available satellite data and artificial intelligence. The method could allow for early public health interventions. The researchers found that a combination of data, including vegetation cover, weather, and water presence, can determine where populations may suffer from a lack of iron, vitamin B12, or vitamin A. The researchers trained their model using blood samples tested in labs. In a study of four regions of Madagascar, the researchers found the model's predictions of regional-level micronutrient deficiency in populations outside the training datasets were as accurate as, and sometimes more accurate than, estimates based on surveys by local public health officials.
Solving Complex Physics Problems by Seeking Signs of Symmetry
New Scientist
Alex Wilkins
May 18, 2022
Massachusetts Institute of Technology (MIT) researchers developed an artificial neural network that can solve physics problems by transforming those with complex coordinate systems to be more symmetric. The system was tested on six well-known physics problems that had already been solved, and rediscovered the symmetry in each. The neural network was provided a list of known symmetries and attempted to transform complex coordinate systems until it identified a match among those symmetries. Referring to the first equations to describe a black hole, MIT's Ziming Liu said, "It took 17 years for people to realize that there is a transformation that can simplify this system. But our tool only takes half an hour [to get to this stage]."
*May Require Paid Registration
ML Models Help Identify Long COVID Patients
UNC Health News
May 16, 2022
Researchers at the University of North Carolina (UNC) School of Medicine used machine learning models to identify patients with long COVID from de-identified electronic health record data in the National COVID Cohort Collaborative (N3C) clinical database. The researchers developed XGBoost machine learning models to identify long COVID patients. The models were trained and tested on data from nearly 600 long COVID patients, and were found to accurately identify potential long COVID patients. Said UNC's Emily R. Pfaff, "Using algorithms like the one we've created on large clinical datasets can narrow down vast numbers of patients to those who could qualify for a long COVID trial, potentially giving researchers a head start on recruitment, making trials more efficient, and hopefully getting to findings faster."
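As a rough illustration of the modeling approach described above (not the UNC team's actual code or the N3C data), a gradient-boosted classifier can be trained to flag likely long COVID patients from tabular health-record features. The features, labels, and the use of scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost are all assumptions for this sketch:

```python
# Hypothetical sketch: a gradient-boosted classifier flags likely long COVID
# patients from synthetic EHR-style features (not the N3C data or UNC models).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600  # roughly the cohort size mentioned in the article
# Invented features: e.g., visit counts, symptom duration, age (standardized)
X = rng.normal(size=(n, 3))
# Synthetic label loosely tied to the first feature, for demonstration only
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)  # held-out classification accuracy
```

In practice XGBoost itself (the `xgboost` package) would be used, with real clinical features and careful validation; this toy version only shows the train/test workflow.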
AI Powers Autonomous Cargo Ship for 500 Miles
Interesting Engineering
Ameya Paleja
May 16, 2022
A 749-gross-ton vessel is the first commercial cargo ship to be navigated entirely by artificial intelligence (AI) during a nearly 500-mile, 40-hour journey from Tokyo Bay to Ise Bay in Japan. The Suzaka was outfitted with Israeli startup Orca AI’s Automatic Ship Target Recognition System as part of a collaboration with Japan’s NYK Line shipping company. Data from the Orca system, which offers real-time data detection, tracking, classification, range estimation, and 360-degree views, was monitored by a fleet operations center in Tokyo. During the journey, the onboard navigational software made 107 collision avoidance maneuvers without human assistance, avoiding 400 to 500 vessels along the way.
Nanobiotics: Model Predicts How Nanoparticles Interact with Proteins
University of Michigan
May 16, 2022
University of Michigan (U-M) researchers developed a machine learning model that can predict interactions between nanoparticles and proteins, potentially paving the way for the creation of engineered nanoparticles able to disrupt antibiotic-resistant infections. The new algorithm compares nanoparticles to proteins using a conventional chemical description, as well as descriptions of the protein's complex surface and how it could reconfigure itself to allow for lock-and-key fits with specific proteins. Said U-M's Nicholas Kotov, "By applying mathematical methods to protein-protein interactions, we have streamlined the design of nanoparticles that mimic one of the proteins in these pairs. Nanoparticles are more stable than biomolecules and can lead to entirely new classes of antibacterial and antiviral agents."
Algorithms Empower Metalens Design
Harvard University John A. Paulson School of Engineering and Applied Sciences
Leah Burrows
May 16, 2022
Researchers from Harvard University and the Massachusetts Institute of Technology (MIT) have developed a method of using machine intelligence to generate designs for large-scale metasurfaces. The researchers developed a program that automatically generates metasurface designs grounded in the underlying physics, simultaneously optimizing millions to billions of parameters. The program employs an inverse design process, with researchers identifying the desired function of the metalens and the algorithm determining the best design geometries to achieve the goal. Said MIT's Steven G. Johnson, "This is an orders-of-magnitude increase in the scale of inverse design for nanostructured photonic devices, generating devices tens of thousands of wavelengths in diameter compared to hundreds in previous works, and it opens up new classes of applications for computational discovery."
NPR (3/18, Archie) reports the nonpartisan Pew Research Center “surveyed more than 10,000 adults and found that their support of artificial intelligence varied, depending on its use.” NPR reports the “most strongly opposed applications of AI were computer chip brain implants (56%) and driverless cars (44%).” NPR adds, “When asked what regulation of AI might look like, an average of 61% of Republicans were concerned the government would ‘go too far,’ while an average of 64% of Democrats thought the government would ‘not go far enough.’”
The New York Times (3/15, Smith) describes the “growing army of ‘citizen developers,’ who use new products that allow anyone to apply artificial intelligence without having to write a line of computer code.” Proponents of “the ‘no-code’ A.I. revolution believe it will change the world” by letting “users with a web browser and an idea have the power to bring that idea to life themselves.” The Times highlights startups and developers trying to advance the idea, such as “Akkio, a start-up that allows anyone to make predictions using data.” Akkio co-founder Craig Wisneski said, “We are trying to take A.I. and make it ridiculously easy.” The Times comments, “Eventually the broader public will be able to create A.I.-enabled software in much the same way that teenagers today can create sophisticated video effects that would have required a professional studio a decade or two ago. For now, though, most no-code-A.I. users are business professionals who want to streamline the way things are done without having to involve a programmer.”
Wired (3/15, Thompson) reports that “as a devotee of the open source software movement,” Feross Aboukhadijeh “has written immensely popular web apps, peer-to-peer file exchanges, and more than 100 other pieces of code that he has given away, all in the 10 years since he graduated from college.” Lately, though, he “has entered a new kind of flow state, helped along by a tool called Copilot.” It’s a “piece of artificially intelligent software that does some of the typing, and the thinking, for him.” Built by OpenAI, “the private research lab, and GitHub, the Microsoft-owned website where programmers share code, the tool is essentially autocomplete for software development.” The tool was “released last summer to a select group of coders.” Hammond Pearce, “a computer engineering professor at New York University, led a team that studied how Copilot wrote code in scenarios that ought to be secure.” He “found that a full 40 percent of the time, it produced software that was vulnerable – in particular, to SQL injection, a well-known attack that allows bad actors to insert malicious code. In the worst case, attackers could gain total control of a victim’s servers.”
Scientific American (3/15, Vitak) publishes the transcript of the Scientific American’s 60 Second Science podcast, where host Sarah Vitak discusses that “new research shows that detecting digital fakes generated by machine learning might be a job best done with humans still in the loop.” She talks with Matt Groh, “a PhD student and researcher at the MIT Media lab” on the subject. When asked if AI can detect AI-created deepfakes, he stated, “The answer is kind of Yes. But kind of No. And so I can go, you want me to go into like, why that? Okay. Cool. So the reason why it’s kind of difficult to predict whether video has been manipulated or not, is because it’s actually a fairly complex task. And so AI is getting really good at a lot of specific tasks that have lots of constraints to them. And so, AI is fantastic at chess. AI is fantastic at Go. AI is really good at a lot of different medical diagnoses, not all, but some specific medical diagnoses AI is really good at. But video has a lot of different dimensions to it.”
MarkTechPost (3/6) reports "in recent years, scientists have been working on a variety of systems that can identify and traverse items in their environment." Most of these systems "rely on deep learning and machine learning algorithms that use radar and necessitate a large amount of labeled training data." Despite the "enormous advantages of radars over optical sensors, there are currently very few image datasets available for training that comprise data obtained using radar sensors." To "overcome the issue of data scarcity, University of Arizona researchers have devised a new method for automatically generating datasets with tagged radar data-camera images." It "labels the radar point cloud using an object recognition algorithm (YOLO) on the camera image stream and an association technique (the Hungarian algorithm)."
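The association step mentioned above can be sketched in a few lines: given camera detections (from YOLO) and projected radar points, the Hungarian algorithm finds the pairing that minimizes total distance. The coordinates here are invented for illustration, and the YOLO detection stage is not shown:

```python
# Minimal sketch of radar-camera association via the Hungarian algorithm,
# using SciPy's linear_sum_assignment. All coordinates are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

camera_centers = np.array([[10.0, 5.0], [40.0, 8.0]])   # e.g., YOLO box centers
radar_points   = np.array([[39.0, 9.0], [11.0, 4.0]])   # projected radar hits

# Cost matrix: pairwise Euclidean distances between detections
cost = np.linalg.norm(camera_centers[:, None] - radar_points[None, :], axis=2)
rows, cols = linear_sum_assignment(cost)
# camera detection rows[i] is matched to radar point cols[i],
# minimizing the total matching distance
```

Once matched, each radar point inherits the object label of its paired camera detection, which is how the labeled radar dataset is built up automatically.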
AI Translates Math Problems into Code to Make Them Easier to Solve
New Scientist
Alex Wilkins
June 6, 2022
Google's Yuhuai Wu and colleagues used the Codex neural network of artificial intelligence (AI) research company OpenAI to translate mathematical problems from plain English into formal code. Codex correctly translated 25% of 12,500 secondary-school math competition problems into a format compatible with a formal proof-solver program called Isabelle. Wu said the system's inability to understand certain mathematical concepts was responsible for many of the unsuccessful translations. The team then tested the process by applying Codex to problems pre-formalized by humans. The network produced its own formal versions, and the researchers used the MiniF2F AI to solve both versions; the auto-formalized versions raised MiniF2F's success rate from 29% to 35%, suggesting Codex's formalization was superior to that of humans.
Chip Processes, Classifies Nearly Two Billion Images per Second
Penn Engineering Today
Melissa Pappas
June 1, 2022
University of Pennsylvania (Penn) engineers have designed a 9.3-square-millimeter chip that can detect and classify images in less than a nanosecond. The chip directly processes light received from objects of interest using an optical deep neural network. "Our chip processes information through what we call 'computation-by-propagation,' meaning that unlike clock-based systems, computations occur as light propagates through the chip," explained Penn's Firooz Aflatouni. "We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology." Penn's Farshid Ashtiani said direct processing of optical signals makes a large memory unit unnecessary.
AI Could Prevent Eavesdropping
Science
Matthew Hutson
May 31, 2022
Neural Voice Camouflage technology can help to prevent eavesdropping by producing custom background noise, which thwarts artificial intelligence (AI) that captures and transcribes recorded voices. The solution uses machine learning to alter audio so that AI systems, but not human listeners, misinterpret the sounds. Columbia University researchers trained a neural network on hours of recorded speech so it can constantly process two-second clips of audio, masking what it predicts will be spoken next. The researchers overlaid their system's output onto recorded speech as it fed into an automatic speech recognition (ASR) system; their technology boosted the ASR software's word error rate from 11.3% to 80.2%. Said Columbia University’s Mia Chiquier, “Artificial intelligence collects data about our voice, our faces, and our actions. We need a new generation of technology that respects our privacy.”
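The word error rate (WER) quoted above is a standard ASR metric: the word-level edit distance between a reference transcript and the system's output, divided by the reference length. A minimal implementation of that formula:

```python
# Word error rate: edit distance over words (substitutions, insertions,
# deletions) divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance table
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

By this measure, jumping from 11.3% to 80.2% means the camouflaged audio caused the ASR system to get roughly four out of five words wrong.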
Autonomous Vehicles Can Be Tricked into Dangerous Driving Behavior
UCI News
May 26, 2022
Researchers at the University of California, Irvine (UCI) and the University of California, Los Angeles discovered that objects on the side of the road can fool driverless vehicles into making sudden stops and other hazardous driving maneuvers. The researchers were investigating security flaws in the planning module that manages the vehicle's decision-making processes. The team used the PlanFuzz tool to assess behavioral planning implementations of the Apollo and Autoware autonomous driving systems. The tool determined that cardboard boxes and bicycles on the side of the road caused vehicles to halt permanently on empty thoroughfares and at intersections; the cars also neglected to change lanes as planned when perceiving nonexistent threats. UCI's Qi Alfred Chen said autonomous cars cannot differentiate between objects left on the road accidentally or intentionally.
“In case you’d heard any scary stories about AI advancing to the point of taking clinicians’ jobs, here’s your regular reminder that we’re very, very far from such a scenario,” Axios (3/30, Brodwin) reports. In even “the most advanced of cases, health care-related algorithms are typically being used in an effort to guide clinical decision making – not replace a doctor’s judgment.” Even when “it comes to those types of tools, developers must be careful to ensure their algorithms are trained with the appropriate data and account for any confounding variables.” One common “issue that’s surfaced a lot of late involves AI tools trained on non-diverse populations and then deployed on diverse populations.”
The Wall Street Journal (3/22, Loten, Subscription Publication) reports Alphabet is spinning off Sandbox AQ, a software startup focused on commercial applications for quantum computing and AI. The spinoff follows a nine-figure funding round. Former Google CEO Eric Schmidt will act as the company’s chairman. Jack Hidary is CEO. SiliconANGLE (3/22, Deutscher) reports Sandbox AQ “is launching with an initial team of 55 engineers, scientists and technologists,” and it “plans to hire more staffers using its newly raised funding round, including artificial intelligence experts who will explore ways of using quantum computing to enhance neural networks.” Sandbox AQ “has shared only a few details about its technology roadmap so far. The startup said [Tues]day that it’s working on quantum computing hardware, as well as applications and services.” The company “describes itself as an ‘enterprise SaaS’ company, which suggests that the startup intends to make its products accessible through the cloud as some other quantum computing startups have done.”
The Wall Street Journal (3/22, Vanderford, Subscription Publication) reports cybersecurity experts are warning that US banks are uniquely exposed to Russia-linked cyberattacks, particularly as related to vulnerabilities in their AI systems. According to Andrew Burt, a former policy adviser to the head of the FBI's cyber division, AI systems are vulnerable because of their complex analytics and the outsized role they now play in financial services at various institutions. Machine learning programs are particularly at risk due to the relative infancy of the field and the complex roles they play in financial services, according to experts.
Politico (3/22, Davidson, Weaver) reports the Treasury Department, “which is in regular contact with Wall Street on cybersecurity issues, has increased its meetings with financial firms since the Russian invasion began,” according to a source. “Those sit-downs include classified briefings and tabletop exercises, with an eye toward gaming out how officials in the sector and in the government would respond and coordinate in the event of a major cyberattack on the U.S. financial infrastructure.”
Ad Age (3/22, Sloane) reports Meta has “updated its ads platform, introducing more automation into campaigns, as the company continues to adjust to privacy and data restrictions in online marketing.” Goksu Nebol-Perlman, Meta’s VP of product marketing, ads and business products, on Tuesday “announced the new automation and machine learning tools within Facebook’s ads platform.” The company unveiled a “newly branded ad service called Meta Advantage,” which “consolidates the automated ad products.” Automation helps “advertisers set app-install ad campaigns and create ‘lookalike’ audiences, by relying more on Meta’s artificial intelligence.”
Can Computers Understand Complex Words, Concepts?
UCLA Newsroom
Stuart Wolpert
June 16, 2022
Researchers at the University of California, Los Angeles (UCLA), the Massachusetts Institute of Technology, and the National Institutes of Health found that artificial intelligence (AI) can learn complex word meanings and understand them like humans do. The researchers developed the "semantic projection" technique to study 52 word groups to determine whether an AI system could learn to sort meanings. The results of their research showed their method was very similar to human intuition across many objects and contexts. Said UCLA's Idan Blank, "This machine learning system is much smarter than we thought; it contains very complex forms of knowledge, and this knowledge is organized in a very intuitive structure."
Artificial Neural Networks Model Face Processing in Autism
MIT News
Matthew Hutson
June 16, 2022
The Massachusetts Institute of Technology's Kohitij Kar trained an artificial neural network to model the process through which autistic people recognize emotions on others’ faces. The network features layers of units approximating neurons that process visual information as it passes from an input image to a final evaluation, signaling the likelihood the face is, for example, happy. The network's behavior more closely resembled that of neurotypical controls than it did autistic adults. Kar also found the difference between how well the network matched controls and how well it matched autistic people was greatest when the output was based on the last network layer, which mimics the brain's inferior temporal cortex to a degree.
AI Chip Hits Ultralow Power Lows
IEEE Spectrum
Michelle Hampson
June 15, 2022
Researchers at the Indian Institute of Technology Bombay (IIT Bombay) in India have announced an ultralow-energy artificial intelligence (AI) chip that facilitates spiking neural networks (SNNs), which emulate the brain's neural signal processing. The researchers developed an SNN that charges capacitors using a band-to-band-tunneling current. Explained IIT Bombay's Udayan Ganguly, "In comparison to state-of-art [artificial] neurons implemented in hardware spiking neural networks, we achieved 5,000 times lower energy per spike at a similar area and 10 times lower standby power at a similar area and energy per spike." When the researchers applied the SNN to a speech recognition model inspired by the brain's auditory cortex, the model recognized spoken words.
AI Reveals Scale of Eelgrass Vulnerability to Warming, Disease
Cornell University Chronicle
Tom Fleischman
June 15, 2022
Cornell University researchers were part of an international, interdisciplinary research team that used artificial intelligence (AI) to detect eelgrass wasting disease at 32 field sites along the U.S. Pacific Coast. The researchers determined that warmer-than-normal water temperatures, especially in early summer and regardless of region, are associated with wasting disease, which is caused by the organism Labyrinthula zosterae. The researchers developed an AI system called Eelgrass Lesion Image Segmentation Application (EeLISA) to analyze thousands of images of seagrass leaves. The system was able to distinguish diseased from healthy tissue 5,000 times faster than human experts, while improving its results over time. Said Cornell's Brendan Rappazzo, "We reached the point where it was at human-level accuracy, and needed to be checked only sporadically."
Harnessing ML to Analyze Quantum Material
Cornell University Chronicle
Kate Blackwood
June 14, 2022
Researchers at Cornell University developed an unsupervised machine learning algorithm to analyze the quantum metal Cd2Re2O7. The XRD Temperature Clustering (X-TEC) algorithm took only minutes to analyze eight terabytes of X-ray data spanning 15,000 Brillouin zones (uniquely defined cells), providing the researchers a better understanding of electron behavior in the pyrochlore oxide metal. Said Cornell's Eun-Ah Kim, "To the best of our knowledge, this is the first instance of the detection of a Goldstone mode using [X-ray powder diffraction (XRD)]. This atomic-scale insight into fluctuations in a complex quantum material will be only the first example of answering key scientific questions accompanying any discovery of new phases of matter...using information-rich voluminous diffraction data."
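As described, X-TEC's core idea is unsupervised clustering of how intensity at each reciprocal-space point evolves with temperature. A toy stand-in for that idea, using k-means on synthetic temperature trajectories (the data, cluster count, and use of scikit-learn are all assumptions of this sketch, not the actual X-TEC code):

```python
# Toy illustration of clustering temperature trajectories, the kind of
# unsupervised grouping X-TEC performs on XRD intensities. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
temps = np.linspace(10, 300, 20)  # measurement temperatures (K), invented
# Two synthetic behaviors: an order-parameter-like decay vs. flat background
decaying = np.exp(-temps / 100.0) + 0.01 * rng.normal(size=(50, 20))
flat     = 0.2 + 0.01 * rng.normal(size=(50, 20))
trajectories = np.vstack([decaying, flat])

# Group trajectories by shape; distinct physical behaviors fall into
# distinct clusters without any labels being provided
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
```

The real algorithm operates on terabytes of data across thousands of Brillouin zones, but the payoff is the same: pixels whose intensities change with temperature in the same way are grouped automatically, exposing order parameters and fluctuations.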
Calculating 'Fingerprints' of Molecules with AI
Helmholtz-Zentrum Berlin (Germany)
June 13, 2022
Researchers at Germany's Helmholtz-Zentrum Berlin (HZB) have demonstrated that self-learning graphical neural networks (GNNs) can calculate the spectral fingerprint of larger molecules faster than conventional methods. The new method involves inputting small molecules into the GNN, which compares the calculated spectra of each with known target spectra and makes corrections as necessary, eventually learning how to calculate spectra on its own. Said HZB's Kanishka Singh, "We have trained five newer GNNs and found that enormous improvements can be achieved with one of them, the SchNet model; the accuracy increases by 20% and this is done in a fraction of the computation time."
AI Reveals Never-Before-Described 3D Structure in Rotavirus Spike Protein
Baylor College of Medicine
Ana María Rodríguez
June 9, 2022
At the Baylor College of Medicine, researchers used artificial intelligence (AI) to identify a new three-dimensional (3D) structure in the VP8* domain of the rotavirus spike protein. The researchers used the AI-based computational program AlphaFold2, which predicts the 3D structure of proteins based on their genetic sequence. Said Baylor's Liya Hu, "We were surprised when AlphaFold2 predicted a 3D structure for the VP8* [of rotavirus group B] that was not just totally different from that of the VP8* domain in rotavirus A and C, but also that no other protein before had been reported to have this structure." The researchers confirmed AlphaFold2's prediction aligned with the actual structure of the protein using X-ray crystallography.
Reuters (4/6, Dave) reports AI developers are learning that to make their tools “more effective for human operators, the AI may need to explain itself through another algorithm.” Explainable AI, or XAI, “has spurred big investment in Silicon Valley as startups and cloud giants compete to make opaque software more understandable and has stoked discussion in Washington and Brussels where regulators want to ensure automated decision-making is done fairly and transparently.” LinkedIn Director of Machine Learning and Head of Data Science Applied Research Parvez Ahammad said giving his team XAI “has helped experienced salespeople by arming them with specific insights to navigate conversations with prospects.”
Fox News (3/31, Aaro) reports that “researchers at Vanderbilt University have developed artificial intelligence technology to potentially assist basketball players in improving their game on the court.” Jules White, “associate dean for strategic learning programs and associate professor of computer science and computer engineering, and Carlos Olea, a Ph.D. student in the Department of Computer Science, developed an AI software called a temporal relational network to help determine the context and mechanics behind each shot a player takes.” White told Fox News Digital, “I’m really excited about the potential for AI to help amateurs at home learn and improve. I think there’s a limited amount of time and money that many people have to get access to coaching, so I think it expands the access to information that you need to improve.”
Extra 'Eye' Movements Key to Better Self-Driving Cars
RIKEN (Japan)
June 27, 2022
Researchers at Japan's RIKEN Center for Brain Science have created artificial neural networks that learn to identify objects faster and more accurately by focusing on unnoticed eye movements. RIKEN's Andrea Benucci designed a convolutional neural network (CNN) to optimize the classification of objects in visual scenes while one’s eyes are moving, first by training it to classify 60,000 black-and-white images into 10 categories. Training the CNN with shifted images or motor copies enabled the model to better accommodate visual noise, provided the direction and size of the eye movements causing the shift were also included. Said Benucci, "This advancement will help avoid dangerous mistakes in machine vision," such as those caused by pixel alterations or adversarial attacks, which could affect the safety of self-driving vehicles.
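The training trick described above, shifting each image and feeding the network a copy of the motor command that caused the shift, can be sketched simply. The function name, shift range, and data here are invented for illustration and are not RIKEN's actual pipeline:

```python
# Hedged sketch: pair each training image with a random shift (standing in
# for an eye movement) plus the shift vector itself (the "motor copy"),
# so a classifier can learn to discount self-generated image motion.
import numpy as np

def shift_with_motor_copy(image: np.ndarray, rng: np.random.Generator):
    dy, dx = rng.integers(-3, 4, size=2)            # random small shift
    shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
    motor_copy = np.array([dy, dx], dtype=float)     # direction and size
    return shifted, motor_copy

rng = np.random.default_rng(0)
img = np.arange(16, dtype=float).reshape(4, 4)
shifted, motor = shift_with_motor_copy(img, rng)
# Both `shifted` and `motor` would be fed to the network during training
```

The key point from the article is that the shift alone is not enough: the model only gained robustness to visual noise when the direction and size of the movement were supplied alongside the shifted image.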
Open Source Language AI Challenges Big Tech Models
Nature
Elizabeth Gibney
June 22, 2022
A natural language processing model developed by an international team of about 1,000 researchers will rival in scale those developed by Google and OpenAI, but will be open source and multilingual. The BigScience collaboration's BLOOM language model was trained with $7 million-worth of publicly funded computing time, and granted free access to France's national Jean Zay supercomputer facility. The project's goal is to minimize the human biases and other flaws of artificial intelligence (AI) systems that are difficult to address because big technology companies have largely closed their models to researchers. While the model will not be bias-free, Hugging Face's Yacine Jernite said the team aimed to improve on existing models through the use of multicultural, high-quality sources.
Measuring AI's Carbon Footprint
IEEE Spectrum
Matthew Hutson
June 26, 2022
Researchers at the Allen Institute for AI have developed a method of calculating the carbon emissions of artificial intelligence (AI) systems with greater accuracy than previous approaches. The new approach records the energy usage of server chips as a series of measurements and aligns that data with a series of data points detailing local emissions per kilowatt-hour (kWh) of the energy consumed. The researchers found graphics processing units training the biggest model to only 13% completion emitted nearly as much carbon as powering a U.S. home for a year does. In contrast, training the smallest models generated as much carbon as charging a phone.
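The accounting idea described above reduces to a simple weighted sum: integrate measured power over time, weighting each interval by the grid's carbon intensity at that moment. The numbers below are invented purely for illustration:

```python
# Sketch of time-aligned carbon accounting: each hour's energy draw is
# multiplied by that hour's local grid carbon intensity. Values are invented.
power_watts    = [300.0, 320.0, 310.0]   # GPU power samples, one per hour
grid_g_per_kwh = [400.0, 380.0, 420.0]   # local gCO2 per kWh, same hours

# watts / 1000 = kW; over one hour that is kWh; times gCO2/kWh gives grams
grams_co2 = sum(p / 1000.0 * g for p, g in zip(power_watts, grid_g_per_kwh))
```

Because grid carbon intensity swings over the day, this time-aligned sum can differ substantially from multiplying total energy by an annual-average intensity, which is the source of the method's improved accuracy.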
Deep Learning Algorithms Deployed on Mobile Devices to Screen for Eye Disease
The Engineer
June 26, 2022
A deep learning (DL) algorithm developed by scientists at Japan's Tohoku University scans eye images to identify disease-related features. The lightweight DL model can be trained on a small number of images, and can be employed on mobile devices. "Our developed model has better segmentation accuracy and enhanced model training reproducibility, even with fewer parameters—making it efficient and more lightweight when compared to other commercial software," explained Tohoku's Toru Nakazawa. The researchers used low-resource devices to acquire measurements of the eye's foveal avascular zone to augment screening for glaucoma. "Our model is also capable of detecting/segmenting optic discs and hemorrhages in fundus images with high precision," Nakazawa said.
Robots Play with Play Dough
MIT News
Rachel Gordon
June 23, 2022
Researchers at the Massachusetts Institute of Technology (MIT), Stanford University, and the University of California, San Diego have developed a system that allows robots to manipulate play dough, a rubbery mixture of water, salt, and flour. The RoboCraft system, which learns directly from visual input, enabled a robot with a two-fingered gripper to learn to shape play dough into letters after receiving just 10 minutes of data. RoboCraft relies on cameras to collect visual sensor data from the environment that is transformed into graphs of tiny particles, which are used by a graph-based neural network to learn how the object moves. The robot's behavior is then planned using algorithms that teach it how to shape the object based on training data from numerous pinches of the material.
AI Generates Photorealistic 3D Scenes, Lets You Edit Them
New Scientist
Alex Wilkins
June 22, 2022
Artificial intelligence (AI) models capable of generating or editing three-dimensional (3D) photorealistic scenes could simplify and expedite the creation of animated films. Stanford University's Eric Ryan Chan and colleagues developed the EG3D AI model to produce high-resolution random images of faces and other objects, along with an underlying geometric framework. Chan said the model offers near-photorealistic rendering, and can be run on a laptop in real time. His team used features from existing high-resolution two-dimensional generative adversarial networks, and added a component that can convert these images into three dimensions. At the University of Wisconsin-Madison, the GiraffeHD model developed by Yong Jae Lee and colleagues attempts to extract controllable features of a 3D image, which could be used to edit 3D-generated images.
Flawed AI Makes Robots Racist, Sexist
Johns Hopkins University Hub
Jill Rosen
June 21, 2022
Researchers at Johns Hopkins University, the Georgia Institute of Technology (Georgia Tech), and the University of Washington found that a robot equipped with a publicly downloadable artificial intelligence model built with the CLIP neural network exhibited significant gender and racial biases. The robot was given 62 commands that involved selecting blocks with assorted human faces on them and placing them into a box. The researchers found the robot chose males 8% more frequently than females, and picked white and Asian men the most and Black women the least. The robot also picked women as "homemakers" more often than white men, Black men as "criminals" 10% more often than white men, Latino men as "janitors" 10% more often than white men, and men as "doctors" more often than women of all ethnicities. Said Georgia Tech's Andrew Hundt, "The robot has learned toxic stereotypes through these flawed neural network models."
Google Wants to Challenge AI with 200 Tasks, to Replace the Turing Test
New Scientist
Matthew Sparkes
June 14, 2022
Artificial intelligence (AI) researchers at 132 institutions, including Google, have proposed an update to the Turing test for rating the capabilities of AI technology. The researchers suggested using the Beyond the Imitation Game benchmark (BIG-bench), comprising 204 diverse tasks designed not to be fully solved by state-of-the-art AI models. The researchers observed that AI models performed poorly on most of the tasks compared with humans, although they showed improvement as computing power increased. Adrian Hilton of the U.K.'s University of Surrey said, "I can see that setting up a set of benchmarks is one way of comparing one machine-learning algorithm with another, one AI with another. But I don't think that necessarily answers the question of intelligence though."
Technology Helps Self-Driving Cars Learn From Own Memories
Cornell University Chronicle
Tom Fleischman
June 21, 2022
Technology developed by Cornell University researchers aims to imbue autonomous vehicles with the ability to generate "memories" of previous experiences for use in navigation. The researchers amassed the Ithaca365 traversal dataset by driving a car with LiDAR (Light Detection and Ranging) sensors along a 15-kilometer (9.3-mile) loop 40 times over 18 months in various environments, weather conditions, and times of day. The approach applies neural networks to compute descriptors of objects the car passes, then compresses and stores these Spatial-Quantized Sparse History (SQuaSH) features on a virtual map. When the car next traverses the same location, it can query the local SQuaSH database of all LiDAR points along the route and recall that knowledge.
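The core trick of the summary above, quantizing positions so features recorded on one drive can be looked up on the next, can be sketched in a few lines. Everything below (the cell size, class, and function names) is an illustrative invention, not the actual SQuaSH implementation.

```python
from collections import defaultdict

CELL = 5.0  # grid cell size in meters (hypothetical)

def quantize(x, y):
    """Map a continuous position to a coarse grid-cell key."""
    return (int(x // CELL), int(y // CELL))

class FeatureStore:
    """Toy stand-in for a SQuaSH-style map: compressed descriptors indexed
    by quantized location, written on one traversal and queried on the next
    pass through the same place."""
    def __init__(self):
        self.cells = defaultdict(list)

    def add(self, x, y, descriptor):
        self.cells[quantize(x, y)].append(descriptor)

    def query(self, x, y):
        return self.cells.get(quantize(x, y), [])

store = FeatureStore()
store.add(12.3, 40.1, "desc_tree")   # first drive records features
store.add(13.9, 41.0, "desc_sign")   # falls in the same 5 m cell
hits = store.query(12.0, 42.5)       # later drive queries that cell
print(hits)                          # ['desc_tree', 'desc_sign']
```

Quantizing the key is what makes the lookup cheap: a query only has to hash its own cell rather than search every stored point along the route.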
The Lost Masterpieces Being Revealed
BBC News
Jo Lawson-Tancred
June 21, 2022
Advanced technologies are revealing masterpieces hidden beneath iconic artworks. George Cann and Anthony Bourached from the U.K.'s University College London (UCL) employed artificial intelligence (AI) to render X-ray images of paintings as three-dimensional prints that recreate the concealed images' colors and texture. London's National Gallery partnered with UCL and Imperial College London to produce clearer images of a portrait underneath Francisco de Goya's Doña Isabel de Porcel by having a computer combine multiple scans from different bands of the electromagnetic spectrum. Amsterdam's Rijksmuseum also has used AI to reconstruct lost panels from Rembrandt's The Night Watch from a copy by artist Gerrit Lundens, converting his style into Rembrandt's.
Hybrid Machine Learning Forecasts Lake Ecosystem Responses to Climate Change
UC San Diego Scripps Institution of Oceanography
Steve Koppes
June 20, 2022
An international team of researchers applied a hybrid empirical dynamic modeling (EDM) methodology to predict the effects of climate change and phosphorus pollution on Switzerland's Lake Geneva. George Sugihara at the University of California, San Diego's Scripps Institution of Oceanography explained that EDM is a form of supervised machine learning that can help model the mechanisms of interconnected and interdependent ecosystems. The hybrid model implies that increasing air temperature by 3 degrees Celsius (5.4 degrees Fahrenheit) would impact Lake Geneva's water quality as much as the phosphorus contamination of the past 100 years has.
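EDM-style forecasting is commonly built on time-delay embedding plus nearest-neighbor ("simplex") projection: recent states are matched against similar past states, and their successors are averaged. The NumPy sketch below is a generic illustration of that idea with invented function names, not the hybrid model used in the Lake Geneva study.

```python
import numpy as np

def delay_embed(series, E, tau=1):
    """Build E-dimensional time-delay vectors from a scalar series."""
    n = len(series) - (E - 1) * tau
    return np.array([series[i:i + (E - 1) * tau + 1:tau] for i in range(n)])

def simplex_forecast(train, recent, E=2):
    """One-step forecast: find the E+1 embedded states nearest to the
    `recent` state and average their observed successors, weighted by
    distance (simplex projection)."""
    emb = delay_embed(train, E)
    query = np.array(recent[-E:])
    # distances from the query to every embedded point with a known successor
    dists = np.linalg.norm(emb[:-1] - query, axis=1)
    nn = np.argsort(dists)[:E + 1]
    w = np.exp(-dists[nn] / (dists[nn].min() + 1e-12))
    succ = np.array([train[i + E] for i in nn])  # value one step after each neighbor
    return float(np.sum(w * succ) / np.sum(w))

pattern = [0.0, 1.0, 2.0, 1.0] * 10              # a perfectly periodic "ecosystem"
print(simplex_forecast(pattern, [2.0, 1.0], E=2))  # 0.0, the next value in the cycle
```

Because the forecast is assembled from observed trajectories rather than a fitted equation, the same machinery works when the governing mechanisms are unknown, which is the appeal of EDM for ecosystems.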
Teaching Physics to AI Can Allow It to Make Discoveries All on Its Own
SciTechDaily
June 16, 2022
Duke University researchers found that adding known physics to machine learning algorithms can help them gain new insights into material properties. They developed a neural network to identify the characteristics of metamaterials and predict their interaction with electromagnetic fields. By first accounting for the metamaterial's known physical restrictions, the algorithm was able to predict the metamaterial's properties faster and more accurately than prior approaches, while yielding additional insights. Duke's Jordan Malof said, "By forcing the neural network to obey the laws of physics, we prevented it from finding relationships that may fit the data but aren't actually true."
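The principle of constraining a learner with known physics can be shown on a much smaller problem. In the hypothetical sketch below, a known boundary condition y(0) = 0 is added as a penalty term to an ordinary least-squares fit; this illustrates the general idea only, not Duke's metamaterial model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measurements" of a response that physics says must vanish at x = 0
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.05, size=x.shape)

def fit(lam):
    """Least-squares line y = a*x + b with a soft physics penalty lam*b**2
    that pulls the intercept toward the known constraint y(0) = 0.
    Solved in closed form via the penalized normal equations."""
    A = np.array([[np.sum(x * x), np.sum(x)],
                  [np.sum(x),     len(x) + lam]])
    rhs = np.array([np.sum(x * y), np.sum(y)])
    a, b = np.linalg.solve(A, rhs)
    return a, b

a_free, b_free = fit(lam=0.0)    # purely data-driven fit
a_phys, b_phys = fit(lam=1e6)    # physics-constrained: intercept forced to ~0
print(b_free, b_phys)
```

With the penalty active, noise can no longer push the intercept away from zero, which is a miniature version of Malof's point: the constraint rules out relationships that fit the data but violate the physics.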
Vox (4/14, Samuel) reports on pictures that “were created by DALL-E 2, a new AI system that can turn textual descriptions into images.” Sam Altman, “the CEO of OpenAI – the company that created DALL-E 2 – called it ‘the most delightful thing to play with we’ve created so far … and fun in a way I haven’t felt from technology in a while.’” However, “there are the obvious risks – that people could use this type of AI to make everything from pornography to political deepfakes, or the possibility that it’ll eventually put some human illustrators out of work.” There is “also the risk that DALL-E 2 – like so many other cutting-edge AI systems – will reinforce harmful stereotypes and biases, and in doing so, accentuate some of our social problems.” OpenAI researchers “made some attempts to resolve bias and fairness problems.” But they “couldn’t really root out these problems in an effective way because different solutions result in different trade-offs.”
NextGov (4/12, Williams) reports the intelligence community is preparing “for a busy research year with up to a dozen new projects, many of which lean on artificial intelligence and a special tailor-made quantum offering for the National Security Agency.” IARPA Director Dr. Catherine Marsh previewed a “range of the agency’s research offerings that are expected to take shape this year during a keynote address at the Intelligence National Security Alliance’s Spring Symposium April 11.” Marsh “said the goal is to kick off 10 to 12 new projects this year, including one that attempts to link ‘digital fingerprints’ to a text’s author.” In the coming weeks, Marsh “said the agency will release a broad agency announcement for HAYSTAC, a program that seeks to ‘figure out human movement’ using AI-reasoning engines that can identify abnormal patterns while generating normal ones.” Marsh’s comments come after ODNI “released its four-year tech investment plan, which ranked artificial intelligence among the most sought after capabilities, including expanded data, cyber, computing and sensor capabilities.”
The Wall Street Journal (4/10, Landro, Subscription Publication) reports on how AI could help identify and treat the highest-risk patients in hospital emergency rooms, inpatient wards and intensive-care units. While AI is being tested in many areas of healthcare, the technology could act as an early-warning system for subtle, significant issues that are hard to notice in a busy unit.
The Wall Street Journal (4/9, Fitch, Subscription Publication) reported Intel is reworking its AI strategy to better compete with NVIDIA. Intel’s strategy is to offer a range of chips and software that meet a wide variety of computing needs, with the goal to help companies optimize performance for specific tasks while saving money by cutting power consumption.
The Wall Street Journal (4/9, Snow, Subscription Publication) reported that as cities and states create AI algorithms to help them provide services like policing and traffic management, they are also creating policies for using this new technology. AI can disadvantage already marginalized groups, adding to human-driven bias. Its decisions can also be opaque, making it difficult to tell how to fix that bias. Cities are now looking at solutions to these problems. Some are requiring disclosure when an AI model is used in decisions. Others are mandating audits of algorithms, tracking where AI causes harm, or seeking public input before implementing new AI systems.
Wired (4/8, Johnson) reported “there’s room to improve on the census in the US, but the Constitution requires that one be conducted every decade, and” Greg Yetman, “codirector of the Center for International Earth Science Information Network (CIESIN),” says the country is “data rich.” Obstacles “such as cost, conflicts, or difficulty reaching remote locations can make some communities harder to count.” In research “published last month, satellite imagery and machine learning were used to automatically identify housing plots and predict population, age, and sex in five provinces in the western half of the Democratic Republic of Congo (DRC).” In the US, Yetman “explains, applying an AI model trained using images of roofs from the western US is problematic if it’s applied to homes on the East Coast because the western expansion of the country follows a grid-based system, while cities like Boston developed with less uniformity.” Equally, “a roof in South Africa looks different from one in Zambia.” AI “can easily mistake the roof of a stall at a commercial market in Accra, Ghana with the roof of an unregistered home or struggle to accurately predict the number of people in urban settlements or rural villages.”
The Verge (4/7, Clark) reports Google revealed “how it’s using artificial intelligence, including its restaurant-calling Duplex tech, to try and keep business hours up to date on Google Maps.” If it’s “confident enough in the AI’s prediction of what a business’s hours should be,” Google “will update the information in Maps.” The company explained in a blog post “the various factors its AI analyzes to determine whether it should do these updates. First, it looks at when the business profile was last updated, other similar shops’ hours, and Popular Times data to decide how likely it is that the hours are incorrect.”