Only Humans, Not AI Machines, Get a U.S. Patent, Judge Rules
Bloomberg
Susan Decker
September 3, 2021
U.S. District Judge Leonie Brinkema in Alexandria, VA, has ruled that only an actual human can be listed as an inventor on patents under U.S. law, while a computer using artificial intelligence (AI) cannot. Brinkema ruled that under federal law, an "individual," defined as a natural person, is required to take an oath that they are the inventor on a patent application. The case involved the Artificial Inventor Project at the U.K.'s University of Surrey, which has undertaken a global effort to have a computer named as an inventor. Courts in South Africa and Australia have ruled in the project's favor. University of Surrey's Ryan Abbott said the U.S. ruling will be appealed, along with those in the U.K. and Europe. Said Abbott, "We believe listing an AI as an inventor is consistent with both the language and purpose of the Patent Act."
Using Adversarial Attacks to Refine Molecular Energy Predictions
MIT News
Vineeth Venugopal
September 1, 2021
Massachusetts Institute of Technology (MIT) researchers found adversarial attacks can improve the reliability of neural networks (NN) in predicting molecular energies by quantifying their uncertainty. The researchers used adversarial attacks to sample molecular geometries on a potential energy surface (PES), and tapped multiple NNs to forecast the PES from the same data. MIT's Rafael Gomez-Bombarelli said, "We aspire to have a model that is perfect in the regions we care about [i.e., the ones that the simulation will visit] without having had to run the full ML [machine learning] simulation, by making sure that we make it very good in high-likelihood regions where it isn't." The technique could be the most realistic approach for exploring the limits of models used to predict material behavior and the progress of chemical reactions.
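The ensemble-disagreement idea the article describes can be sketched in a toy setting: several models are fit to the same energy data, their spread serves as an uncertainty estimate, and an "adversarial" search perturbs a geometry toward regions where the models disagree most (and therefore where new training data is most needed). This is a minimal illustration on a 1-D potential; the function names, the polynomial-fit "networks," and the random-search attack are all illustrative stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "potential energy surface" and noisy training samples
def true_energy(x):
    return x**2 + 0.3 * np.sin(5 * x)

x_train = rng.uniform(-1.0, 1.0, size=30)
y_train = true_energy(x_train) + rng.normal(0, 0.05, size=30)

# Ensemble: several polynomial fits, each trained on a bootstrap resample
def fit_ensemble(n_models=5, degree=4):
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), len(x_train))
        models.append(np.polyfit(x_train[idx], y_train[idx], degree))
    return models

def ensemble_stats(models, x):
    # Mean prediction and ensemble spread (the uncertainty estimate)
    preds = np.array([np.polyval(c, x) for c in models])
    return preds.mean(axis=0), preds.std(axis=0)

# "Adversarial" search: perturb a geometry to maximize ensemble disagreement,
# i.e. locate where the model is least reliable and should be resampled
def adversarial_sample(models, x0, steps=200, scale=0.05):
    x = x0
    _, best = ensemble_stats(models, np.array([x]))
    best = best[0]
    for _ in range(steps):
        cand = x + rng.normal(0, scale)
        _, u = ensemble_stats(models, np.array([cand]))
        if u[0] > best:
            x, best = cand, u[0]
    return x, best

models = fit_ensemble()
x_new, unc = adversarial_sample(models, x0=0.0)
```

The new geometry `x_new` would then be labeled with the expensive reference calculation and added to the training set, shrinking uncertainty exactly where the simulation is likely to wander.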
Imaginary Numbers Protect AI from Very Real Threats
Duke University Pratt School of Engineering
Ken Kingery
August 31, 2021
Computer engineers at Duke University have shown that numbers with both real and imaginary components can be critical in securing artificial intelligence algorithms against threats while preserving efficiency. Including just two complex-valued layers among hundreds if not thousands of training iterations offers sufficient protection. For example, using complex numbers with imaginary components can instill additional flexibility for adjusting internal parameters within a neural network being trained on a set of images. Duke's Eric Yeats said, "The complex-valued neural networks have the potential for a more 'terraced' or 'plateaued' landscape to explore. And elevation change lets the neural network conceive more complex things, which means it can identify more objects with more precision." This enables neural networks that combine gradient regularization with complex numbers to arrive at solutions just as quickly as those lacking the extra security.
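A complex-valued layer can be sketched as an ordinary dense layer whose weights and activations carry both real and imaginary parts, followed by a magnitude-based nonlinearity and a final conversion back to real values. This is a generic forward-pass sketch, assuming a modReLU-style activation; the layer sizes, the activation choice, and all names here are illustrative, not the Duke team's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_dense(x, w, b):
    """One complex-valued dense layer: weights and activations have both
    real and imaginary parts, giving extra internal degrees of freedom."""
    return x @ w + b

def modrelu(z, bias=-0.1):
    """Magnitude-thresholding nonlinearity for complex activations: rescale
    by magnitude, preserve phase (an illustrative, common choice)."""
    mag = np.abs(z)
    scale = np.maximum(mag + bias, 0.0) / (mag + 1e-9)
    return z * scale

# Two complex-valued layers, then a return to real values for the rest
# of an otherwise real-valued network
x = rng.normal(size=(4, 8)).astype(np.complex128)
w1 = (rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))) / np.sqrt(8)
w2 = (rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))) / np.sqrt(8)
h = modrelu(complex_dense(x, w1, np.zeros(8)))
h = modrelu(complex_dense(h, w2, np.zeros(8)))
out = np.abs(h)  # real-valued output passed to downstream layers
```

The extra phase dimension is what gives the loss landscape the "terraced" character Yeats describes: the optimizer can move through parameter configurations a purely real-valued network cannot represent.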
ML Revolutionizes How Scientists Study Behavior
Carnegie Mellon University
Caroline Sheedy
August 31, 2021
Carnegie Mellon University's Eric Yttri and Alex Hsu have designed an unsupervised machine-learning algorithm to simplify and fine-tune the study of behavior. Behavioral segmentation of open field in DeepLabCut (B-SOiD) finds behaviors by identifying patterns in the position of an animal's body, and can tell researchers what behavior is occurring at every frame of a video using computer vision software. Hsu said, "It uses an equation to consistently determine when a behavior starts. Once you reach that threshold, the behavior is identified, every time." Yttri said B-SOiD eliminates user bias, as well as the time and painstaking effort involved, processing hours of data in minutes.
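The core idea of labeling every video frame by clustering pose features can be sketched with a tiny unsupervised pipeline. B-SOiD's actual pipeline is considerably more elaborate (it builds spatiotemporal features from DeepLabCut pose estimates and uses a different clustering and classification stack); the k-means routine, feature layout, and behavior names below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(features, k=3, iters=50):
    """Tiny k-means: groups per-frame pose features into behavior clusters,
    yielding one label per frame with a consistent decision rule."""
    centers = features[rng.choice(len(features), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Synthetic per-frame features (e.g. limb angles and speeds from pose
# tracking); three hypothetical behaviors with distinct signatures
frames = np.vstack([
    rng.normal([0.0, 0.0], 0.1, size=(100, 2)),  # e.g. resting
    rng.normal([2.0, 2.0], 0.1, size=(100, 2)),  # e.g. walking
    rng.normal([0.0, 3.0], 0.1, size=(100, 2)),  # e.g. grooming
])
labels = kmeans(frames)  # one behavior label per video frame
```

Because the assignment rule is a fixed equation over the features, the same frame always gets the same label, which is the reproducibility property Hsu highlights.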
Cutting 'Edge': A Tunable Neural Network Framework Towards Compact, Efficient Models
Tokyo Institute of Technology News (Japan)
August 23, 2021
A sparse convolutional neural network (CNN) framework and training algorithms developed by researchers at Japan's Tokyo Institute of Technology (Tokyo Tech) allow CNN models to be seamlessly deployed on low-power edge devices. The 40-nanometer sparse CNN chip yields high accuracy and efficiency through a Cartesian-product multiply-and-accumulate (MAC) array and pipelined activation aligners that spatially shift activations onto a regular Cartesian MAC array. Tokyo Tech's Kota Ando said, "Regular and dense computations on a parallel computational array are more efficient than irregular or sparse ones. With our novel architecture employing MAC array and activation aligners, we were able to achieve dense computing of sparse convolution."
AI Matches Cardiologists' Expertise, While Explaining Its Decisions
University of California, San Francisco News
Elizabeth Fernandez
August 26, 2021
Scientists at the University of California, San Francisco and the University of California, Berkeley designed an artificial intelligence (AI) algorithm that diagnosed cardiovascular ailments as well as expert cardiologists, while explaining its reasoning. The researchers trained the convolutional neural network on commonly accessible electrocardiogram (ECG) data. The researchers said the algorithm performed strongly across 38 different diagnoses in five broad diagnostic categories. Because the researchers incorporated "explainability" into the algorithm, it highlighted ECG segments critical for each diagnosis, which may boost physicians' confidence in using it. The researchers said their results “offer strong support for AI algorithms like neural networks to be incorporated into existing commercial ECG algorithms, since they perform better for many diagnoses, can improve over time and provide additional insights through explainability.”
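One generic way to produce the kind of segment-level attribution the article describes is occlusion sensitivity: mask each stretch of the ECG trace in turn and measure how much the model's output moves. Segments whose removal changes the prediction most are the ones the model relied on. The sketch below assumes this standard technique with a stand-in model; the study's own attribution method and model are not specified here and may differ.

```python
import numpy as np

def occlusion_importance(signal, predict, window=50):
    """Score each signal segment by how much zeroing it out changes the
    model's output; high-scoring segments are the ones the model uses."""
    base = predict(signal)
    scores = np.zeros(len(signal) // window)
    for i in range(len(scores)):
        masked = signal.copy()
        masked[i * window:(i + 1) * window] = 0.0
        scores[i] = abs(predict(masked) - base)
    return scores

# Stand-in "model": responds only to amplitude in one region of the trace,
# mimicking a classifier that keys on a particular ECG segment
def toy_model(sig):
    return float(np.abs(sig[200:300]).sum())

ecg = np.sin(np.linspace(0, 20 * np.pi, 500))  # synthetic trace
scores = occlusion_importance(ecg, toy_model)
```

Highlighting the highest-scoring windows over the waveform gives clinicians a direct view of which beats or intervals drove a given diagnosis, which is the trust-building property the researchers emphasize.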
Sandia Uncovers Hidden Factors That Affect Solar Farms During Severe Weather
Sandia National Laboratories LabNews
Mollie Rappe
August 27, 2021
Researchers at Sandia National Laboratories used machine learning to study the effects of severe weather on solar farms. The researchers combined six years of maintenance tickets from over 800 solar farms in 24 states, and two years of real-world electricity production data from over 100 solar farms in 16 states, with historical weather data. They determined that electricity production was affected most by snowstorms, followed by hurricanes and a general group of other types of storms. The researchers used a machine learning algorithm to identify factors contributing to low electrical production performance from these weather events, and found, among other things, that older solar farms were affected most by severe weather, potentially due to greater wear and tear from longer exposure to the elements.
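A common way to surface "factors contributing to low performance" from such a dataset is permutation importance: shuffle one factor at a time and measure how much predictions degrade. The article does not specify Sandia's algorithm, so the technique, the synthetic columns, and the toy model below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def permutation_importance(model, X, y, n_repeats=10):
    """Rank input factors by how much shuffling each column inflates the
    model's error: influential factors degrade predictions the most."""
    def error(Xm):
        return np.mean((model(Xm) - y) ** 2)
    base = error(X)
    imps = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            imps[j] += error(Xp) - base
    return imps / n_repeats

# Synthetic farm records: columns = [age_years, snowfall, unrelated_noise]
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1]  # loss driven by farm age, then snow

def model(X):
    # A model that has learned the true relationship exactly
    return 2.0 * X[:, 0] + 1.0 * X[:, 1]

imps = permutation_importance(model, X, y)
```

In this toy setup the age column dominates the ranking, mirroring the finding that older farms were affected most, while the noise column scores near zero.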
AI Accurately Identifies Whether Objects Can Be Recycled At a Glance
New Scientist
Chris Stokel-Walker
August 27, 2021
A convolutional neural network trained on a dataset of images used a standard webcam to distinguish recyclable objects from non-recyclable objects with greater than 95% accuracy. Ryan Grammenos and Youpeng Yu at the U.K.'s University College London designed the network to establish connections between an object's appearance and its constituent materials. The researchers applied additional techniques, such as handling deformed or misshapen objects, to improve the software's classification capability. Serge Belongie at Denmark's University of Copenhagen said, "Computer vision researchers often talk about 'in-the-wild' recognition challenges, and waste classification is an outstanding example of this. This study shows promising results using some state-of-the-art methods, and I can see this kind of technology gaining traction in a variety of public settings."
Stanford ML Algorithm Predicts Biological Structures More Accurately Than Ever Before
Stanford News
Isabel Swafford
August 26, 2021
Stanford University researchers have developed a machine learning algorithm that accurately predicts the three-dimensional structure of biological molecules, even when training data is limited. The researchers let the algorithm discover molecular structures on its own, rather than specifying properties that influence predictive accuracy; the latter approach can bias an algorithm toward certain features while overlooking other insightful structures. Stanford's Raphael Townshend said, "The algorithm has clearly recovered things that we knew were important, but it has also recovered characteristics that we didn't know about before." The algorithm performed well when applied to proteins and RNAs, and Townshend said, "You can start designing new molecules and medicines with this kind of information."
In an opinion piece for Fast Company (7/21, Pedro), Unbabel CEO and Co-Founder Vasco Pedro writes, “No one deserves to feel like a second-class citizen, and English as the common lingua franca just doesn’t work for everyone. Luckily, advances in AI-powered translation can help us overcome language inequities around the globe by helping everyone access information in their native language. It isn’t just good for people. It’s good for business. In an English-centric world, achieving global diversity and inclusion is a big challenge for business, and language equality plays a huge part.” Pedro continues, “What is the effect of language inequality for business? Missed opportunities, frustrated customers, and, ultimately, lower profits. A 2020 study from Common Sense Advisory showed that 40% of customers will not buy products from companies that don’t support their language. Three-quarters of buyers said they want product information in their native language.” He adds, “Prior to smart, AI-driven translation, it was complicated and costly to have a robust customer service operation in multiple languages. Now there’s just no excuse. If you’re offering a 24-7 support center, those agents can handle many languages thanks to fast, increasingly accurate translation software technology. Using AI, an English-speaking customer service agent can reply to an email or chat to a Portuguese customer, in Portuguese.”
In an op-ed for Fast Company (7/20, Sahota), Neil Sahota, author of Own the A.I. Revolution, IBM Master Inventor, and University of California, Irvine professor, writes that ClimateBert is an “AI tool that deconstructs corporate statements...to assess climate-related...actual performance.” The tool was made by the Task Force on Climate-Related Financial Disclosures (TCFD), which had found it complex and time-consuming to analyze information from companies. The article reports that, in an analysis of 800 companies, the AI found corporations talk a “good game” but lack “actual performance.” The AI attributed this to three factors. The first is that greenwashing went largely unnoticed, so companies were not pressured to change. The second is that the Paris accords made companies more “selective” in what they disclose. The third is that, excluding France, the reporting of corporate climate “is a voluntary disclosure,” meaning companies can choose what they share. Sahota concludes that by using AI tools such as ClimateBert, consumers can build trust in corporate communication and shift toward a more sustainable corporate culture.
The New York Times (7/20, Metz) reports that “this spring, Philips Pham was among the more than 12,000 people in 148 countries who took an online class called Code in Place.” Run by Stanford University, the course “taught the fundamentals of computer programming.” During this online class, “a new kind of artificial intelligence offered feedback to Pham and thousands of other students who took the same test.” Built by a team of Stanford researchers, “this automated system points to a new future for online education, which can so easily reach thousands of people but does not always provide the guidance that many students need and crave.” Chelsea Finn, a Stanford professor and A.I. researcher who helped build the new system, said, “We’ve deployed this in the real world, and it works better than we expected.” Dr. Finn and her team “designed this system solely for Stanford’s programming class.” However, “they used techniques that could automate student feedback in other situations, including for classes beyond programming.”
CNBC (7/15, Choudhury) reports, “Making sure that AI-driven services and products are ethical and can be trusted could become a competitive strength for businesses, experts said Wednesday.” The piece explains that “some experts say the technology is a long way from becoming so-called artificial general intelligence, or AGI — which refers to AI’s hypothetical ability to understand or learn any intellectual task that a human being can. But others have pointed out that even in its current, narrow capabilities, AI raises a series of ethical questions.”
Variety (7/15, Tangcay) reports Anthony Bourdain is the subject of a new documentary that opens on July 16. “Roadrunner: A Film About Anthony Bourdain” was directed by Morgan Neville, and “paints a raw and personal image” of the deceased celebrity chef. But the film has garnered criticism for having used an AI-created version of Bourdain’s voice in what many consider a misleading way. The filmmakers “weave in narration by Bourdain pulled from audio clips, show outtakes, video interviews, and audiobooks.” However, “when asked how he obtained some quotes from Bourdain in a new article in the New Yorker, Neville tells writer Helen Rosner that he used artificial intelligence to create three quotes with Bourdain’s voice.” Neville defended his actions, saying, “I checked, you know, with his widow and his literary executor, just to make sure people were cool with that. And they were like, Tony would have been cool with that. I wasn’t putting words into his mouth. I was just trying to make them come alive.”
Phone Arena (7/12, Petrova) reports that Google CEO Sundar Pichai discussed “important matters in the tech world” in an interview with the BBC. He commented on “internet freedom, taxes paid by Google, and the next big things in technology that we might expect, specifically, focusing on AI and quantum computing.” Pichai “stated that privacy is a foundation to everything Google does and that privacy is a fundamental human right.” He focused on AI and quantum computing when asked about “what to expect from the evolution of tech,” saying that “AI is a profound technology, even more profound than the discovery of fire and electricity, or the internet” and quantum computing “is bound to open up an entirely new range of solutions for future development.”
The Wall Street Journal (7/8, Loten, Hand, Subscription Publication) reports that due to continuing advances in AI and other fields, computers are gaining the capacity for humanlike senses of sight, hearing, touch, and even smell. Image and voice recognition capabilities are already ubiquitous in consumer tech, but now, “customized sensors, machine learning and neural networks – a subset of AI that mimics the way our brains work – are pushing digital senses to the next level, creating robots that can tell when a package is fragile, sniff out an overheated radiator or identify phony Chardonnay.”
The Washington Post (7/7) examines the emerging reality in which technological advancements make it possible to mass produce autonomous weaponized drones “on the cheap.” While “efforts to enact a total ban on lethal autonomous weapons...are now being supported by 30 countries,” the leading military powers “insist that isn’t necessary.” The Post explains that “the U.S. military says concerns are overblown, and humans can effectively control autonomous weapons, while Russia’s government says true AI weapons can’t be banned because they don’t exist yet.” Still, it is inevitable that “the demand for speed in war will...push militaries to offload more decisions to machines, especially in combat situations ... It’s not hard to imagine opposing algorithms responding to each other faster than humans can monitor what’s happening.” According to the Post, “Companies including Google, Amazon, Apple and Tesla have poured billions of dollars into developing the technology, and critics say AI programs are sometimes being deployed without full knowledge of how they work and what the consequences of widespread use could be.”