
Dr. T's AI brief


Daniel Tauritz

Jul 19, 2023, 5:25:41 AM
to ai-b...@googlegroups.com

ACM Issues Principles for Generative AI Technologies
ACM
July 11, 2023


ACM's global Technology Policy Council (TPC) released "Principles for the Development, Deployment, and Use of Generative AI Technologies" in response to innovations in generative artificial intelligence (AI) and their ramifications. The statement lists eight principles for cultivating fair, accurate, and advantageous decision-making for generative and all other AI. Principles applicable to generative AI include imposing limits and guidance on deployment and use; accounting for the technology's structure and function in intellectual property law and regulation; personal control of data; and correctability of errors via public repositories provided by generative AI system developers. TPC's Ravi Jain added, "We must also build a community of scientists, policymakers, and industry leaders who will work together in the public interest to understand the limits and risks of generative AI as well as its benefits."

Full Article

 

 

Learning the Language of Molecules to Predict Their Properties
MIT News
Adam Zewe
July 7, 2023


A unified framework developed by researchers at the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Laboratory can forecast molecular properties while producing new molecules using only small datasets. The researchers programmed a machine learning (ML) system to automatically learn the "molecular grammar" of a domain-specific dataset in order to build viable molecules and anticipate their characteristics "so you can train a model to do the prediction without all of these cost-heavy experiments," according to MIT's Minghao Guo. The team devised a hierarchical strategy to accelerate molecular grammar learning, decoupling the process into a widely applicable metagrammar provided at the outset and a molecule-specific grammar learned from the domain dataset. This approach accurately predicted molecular properties and generated viable molecules from a dataset containing fewer than 100 samples, outperforming several popular ML techniques on small and large datasets.

Full Article

 

 

U.S. Military Takes Generative AI Out for a Spin
Bloomberg
Katrina Manson
July 5, 2023


The U.S. Department of Defense is testing five large language models (LLMs) as part of an effort to develop data integration and digital platforms for military use. U.S. Air Force Col. Matthew Strohmeyer said one experiment using an LLM to perform a military task was "highly successful" and "very fast." However, Strohmeyer said, "That doesn't mean it's ready for primetime right now." The exercises involve feeding classified operational information into the LLMs, with the ultimate goal of using AI-enabled data in decision-making, sensors, and firepower. Specifically, the LLMs are being tasked with helping plan a military response to an escalating global crisis, with a focus on the Indo-Pacific region.

Full Article

*May Require Paid Registration

 

 

AI, CRISPR Precisely Control Gene Expression
New York University
July 3, 2023


Researchers at New York University, Columbia University, and the New York Genome Center manipulated human gene expression with a deep learning model combining artificial intelligence and CRISPR (clustered regularly interspaced short palindromic repeats) screens. The Targeted Inhibition of Gene Expression via guide RNA design (TIGER) model forecasts the activity of RNA-targeting CRISPRs using the Cas13 enzyme to maximize CRISPR activity on the intended target RNA, while minimizing activity on other RNAs that could negatively affect the cell. The researchers used TIGER to quantify the activity of 200,000 guide RNAs targeting essential genes in human cells. They showed it could anticipate on-target and off-target activity and demonstrated the model's off-target predictions can be used to modulate gene dosage.

Full Article

 

 

Green-Screen Filming Method Uses Magenta Light
New Scientist
Matthew Sparkes
July 7, 2023


Researchers at streaming service Netflix have developed a way to instantly generate visual effects for film and TV using a new green-screen technology powered by artificial intelligence (AI). The Magenta Green Screen technique involves filming actors with bright green light-emitting diodes (LEDs) from the back and red and blue LEDs from the front, producing a magenta glow. Film editors can replace the background-recording green channel in real time, splicing the actors into the foreground of another scene without difficulty, even with potentially problematic areas. Netflix uses AI to remove the actors' magenta tint by restoring a normal range of color to the foreground, utilizing a photo of the actors under normal illumination as reference for a realistic-appearing green channel.
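A rough sketch of the compositing step described above (a toy illustration under assumptions, not Netflix's actual pipeline): with the background lit green and the actors lit only red and blue, the green channel doubles as a live matte, and a learned model would later restore the foreground's missing green light.

import numpy as np

def composite_magenta_green(frame, new_background):
    # Toy stand-in for the Magenta Green Screen idea; both inputs are
    # float arrays in [0, 1] with shape (height, width, 3).
    red, green, blue = frame[..., 0], frame[..., 1], frame[..., 2]
    # Green light comes only from behind, so the green channel is bright where
    # the background shows through and dark where an actor blocks it.
    alpha = 1.0 - green[..., None]              # per-pixel foreground opacity
    # The foreground received no green light; the AI step would infer its green
    # channel from red/blue plus a normally lit reference photo. A crude
    # average stands in for that learned restoration here.
    restored_green = 0.5 * (red + blue)
    foreground = np.stack([red, restored_green, blue], axis=-1)
    return alpha * foreground + (1.0 - alpha) * new_background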

Full Article

 

 

Training Robots to Make Decisions on the Fly
University of Illinois Urbana-Champaign
Debra Levey Larson
July 6, 2023


A learning-based method developed by researchers at the University of Illinois Urbana-Champaign (UIUC) allows autonomous landers to decide where and how to land and collect terrain samples based on terrain’s topology and material composition. The learning method allows battery-operated robots navigating unfamiliar terrain to achieve high-quality scooping actions using vision and limited on-line training experience. The researchers trained a robot modeled on a lander arm on materials of varying sizes and volumes on 67 different terrains. The NASA Jet Propulsion Laboratory plans to use the model in its Ocean World Lander Autonomy Testbed. UIUC's Pranay Thangeda observed that the lander’s batteries have a lifespan of about 20 days, so “We can't afford to waste a few hours a day to send messages back and forth" to instruct the lander how to proceed.

Full Article

 

 

AI Is Coming for Mathematics, Too
The New York Times
Siobhan Roberts
July 2, 2023


An artificial intelligence (AI)-driven transformation of mathematics looms, with former Google computer scientist Christian Szegedy forecasting computers will match or surpass human mathematicians' problem-solving ability by 2026. Terence Tao at the University of California, Los Angeles said mathematicians' concerns about AI potentially threatening mathematical aesthetics or their profession have emerged in the last several years. The University of Wisconsin-Madison's Jordan Ellenberg suggested AI gadgets could help optimize mathematicians' work. Microsoft's open source Lean proof assistant, which uses automated reasoning powered by AI, is drawing interest for its recent achievements, yet it frequently complains that it cannot understand a mathematician's input, which makes research cumbersome. Geordie Williamson at Australia's University of Sydney said mathematicians and computer scientists should participate more aggressively in discussions about AI's mathematical implications.
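For readers unfamiliar with proof assistants, here is a trivial Lean 4 statement of the kind such a tool machine-checks (an illustrative example, not one taken from the article):

-- Lean verifies that this equality holds for all natural numbers;
-- if the supplied proof term did not type-check, Lean would reject it.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b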

Full Article

*May Require Paid Registration

 

 

AI Battles the Bane of Space Junk
IEEE Spectrum
Sarah Wells
July 1, 2023


Researchers are using artificial intelligence (AI) to track space debris, predict collisions, and devise methods for the debris’ removal and reuse. A team led by Fabrizio Piergentili at Italy's Sapienza University of Rome developed a machine learning algorithm that tracks the rotational motion of space debris. Researchers at the Air Force Institute of Technology also demonstrated how computer simulations can help predict satellite behavior. Meanwhile, researchers at Italy's Roma Tre University trained a neural network on radar and optical data from ground telescopes to detect space debris in low Earth orbit. However, the University of Texas at Austin's Moriba Jah warned about depending on AI; said Jah, "If the version of today that you feed it is limited, the prediction of tomorrow is also going to be limited."

Full Article

 

 

Teaching AI to Write Better Chart Captions
MIT News
Adam Zewe
June 30, 2023


A dataset developed by Massachusetts Institute of Technology (MIT) researchers aims to improve automatic captioning systems by training machine learning models to customize the complexity and content in chart captions based on users' needs. The VisText dataset features over 12,000 charts, each represented as a data table, image, and scene graph. The researchers found machine learning models trained with scene graphs performed as well as or better than models trained with data tables. MIT's Benny J. Tang said, "A scene graph is like the best of both worlds — it contains almost all the information present in an image while being easier to extract from images than data tables. As it's also text, we can leverage advances in modern large language models for captioning."

Full Article

 

 

Computer Vision System Weds Image Recognition, Generation
MIT News
Rachel Gordon
June 28, 2023


Scientists at the Massachusetts Institute of Technology (MIT) and Google designed the Masked Generative Encoder (MAGE) computer vision system to unify image recognition and generation. MAGE renders images as compact 16 x 16 abstracts of image sections called "semantic tokens" in a process that self-supervised frameworks can use to pre-train on unlabeled image datasets. The system's "masked token modeling" technique involves randomly concealing certain tokens, then training a neural network to fill in the blanks. MIT's Tianhong Li explained, "MAGE's ability to work in the 'token space' rather than 'pixel space' results in clear, detailed, and high-quality image generation, as well as semantically rich image representations."
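A minimal sketch of the "masked token modeling" objective described above (an illustration under stated assumptions, not the authors' code), assuming each image has already been encoded into a 16 x 16 grid of discrete semantic tokens:

import torch
import torch.nn as nn

VOCAB, GRID = 1024, 16 * 16        # token vocabulary size; tokens per image
MASK_ID = VOCAB                    # extra id reserved for the [MASK] token

embed = nn.Embedding(VOCAB + 1, 256)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2)
head = nn.Linear(256, VOCAB)

def masked_token_loss(tokens, mask_ratio=0.6):
    # tokens: (batch, GRID) integer tensor of semantic token ids
    mask = torch.rand(tokens.shape) < mask_ratio   # pick positions to hide
    inputs = tokens.masked_fill(mask, MASK_ID)     # replace them with [MASK]
    logits = head(encoder(embed(inputs)))          # predict a token everywhere
    # Only the hidden positions are scored: the network "fills in the blanks".
    return nn.functional.cross_entropy(logits[mask], tokens[mask])

loss = masked_token_loss(torch.randint(0, VOCAB, (8, GRID)))
loss.backward()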

Full Article

 

 

Artificially Cultured Brains Improve Processing of Time Series Data
Tohoku University (Japan)
June 29, 2023


Researchers at Japan's Tohoku University (TU) and Future University Hakodate gauged the computational powers of an "artificially cultured brain" composed of rat cortical neurons using reservoir computing. TU's Hideaki Yamamoto explained the researchers "first recorded the multicellular responses of the cultured neuronal network" via optogenetics and fluorescent calcium imaging. Reservoir computing-enabled decoding revealed that the brain "possessed a short-term memory of several hundred milliseconds, which could be used to classify time-series data, such as spoken digits," according to Yamamoto. The researchers found improved classification performance in samples with a higher level of modularity, while a model trained on one dataset could classify another dataset in the same category.
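For readers unfamiliar with reservoir computing, below is a minimal echo-state-network sketch of the readout idea (an illustration only; in the study the "reservoir" is the living cultured network recorded by calcium imaging, not a simulated one):

import numpy as np

rng = np.random.default_rng(0)
N = 200                                    # number of reservoir units
W_in = rng.normal(0.0, 0.5, (N,))          # fixed, untrained input weights
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius < 1

def reservoir_states(signal):
    # Drive the fixed reservoir with a 1-D time series (e.g. a feature track
    # of a spoken digit); its state keeps a fading memory of recent inputs.
    x = np.zeros(N)
    states = []
    for u in signal:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

def train_readout(final_states, targets, ridge=1e-3):
    # Only this linear readout is trained (ridge regression), mapping each
    # recording's final reservoir state to a class label.
    A = final_states                        # shape: (num_samples, N)
    return np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ targets)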

Full Article

 

Security Experts Reflect On Benefits And Risks Of Generative AI

In a more than 3,100-word special report, SiliconANGLE (7/3, Gillin, Dotson) “posed a simple question to numerous security experts: Will artificial intelligence ultimately be of greater benefit to cyber criminals or those whose mission is to foil them?” According to SiliconANGLE, “Their responses ran the gamut from neutral to cautiously optimistic. Although most said AI will simply elevate the cat-and-mouse game that has characterized cybersecurity for years, there is some reason to hope that generative models can be of greater value to the defenders than the attackers.”

 

AI Weather Models Could Eventually Predict Natural Disasters

The Washington Post (7/4) reports artificial intelligence “is already helping improve forecasts of hurricane tracks, tornado potential, flood risk and other weather threats, but meteorologists are still wrestling with how to fully integrate AI models into daily forecasting and how much to trust the new predictions.” While the “maturation of AI weather models comes a decade after the rivalry between conventional weather model heavyweights” first went mainstream, a new wave of AI models, “largely developed by the private sector, appears to be equaling or exceeding the performance of conventional models operated by the world’s leading government weather agencies.” However, “it’s still up for debate if and when AI models could become the primary tools used by meteorologists to make forecasts.”

 

OpenAI Suspends ChatGPT Browsing Feature That Bypassed Paywalls

SiliconANGLE (7/4, Riley) reports OpenAI LP has suspended ChatGPT’s browsing feature because it “allowed paying users to obtain search results through Microsoft Corp.’s Bing search engine, ostensibly because the feature bypasses paywalls.” SiliconANGLE explains “Browse with Bing” had been “pitched...as allowing paying users to obtain up-to-date information and search the internet,” but “according to an update on an OpenAI help page, the company has strangely come to the conclusion that allowing people to bypass paywalls is bad.”

 

Study: US Investment In Public Safety AI To Increase To $71B By 2030

WTTG-TV Washington (7/3, Eberhart) reports US spending on artificial intelligence “in public safety is projected to increase from $9.3 billion in 2022 to $71 billion by 2030, according to a new analysis by the Insight Partners research firm.” The projected boom is expected “to be fueled by global and domestic terrorism, a growing need for security training and rising public safety demands coming out of the pandemic, the study says.” The study also notes AI algorithms could predict and prevent disasters by “analyzing vast amounts of data, such as weather patterns, geological activity, and infrastructure conditions, to identify potential risks and vulnerabilities.” The researchers “said AI-powered security cameras and video analytics can be used in preventive policing, criminal investigations, cold-case investigations and combating terrorism, among dozens of other uses.”

 

Students At Elite Colleges Are Working To Address AI’s Risks And Threats

The Washington Post (7/5, Tiku) reports that in recent years, Silicon Valley has become “enthralled by a distinct vision of how super-intelligence might go awry,” though in these scenarios, “AI isn’t necessarily sentient. Instead, it becomes fixated on a goal – even a mundane one, like making paper clips – and triggers human extinction to optimize its task.” To prevent this theoretical outcome, “mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.” Meanwhile, “wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats.” At Stanford, Open Philanthropy awarded a professor and fellow “more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as ‘AI safety’ or ‘AI alignment.’”

        Students Say They’re Grappling With Political, Philosophical Implications Of AI Revolution. Insider (7/5, Varanasi) reports in the past five years, the number of classes at Stanford “related to artificial intelligence appears to have doubled.” The current curiosity, though, “has been fueled by the accessibility of generative AI. Anyone can experiment with ChatGPT, for example, and use their experience as a starting point to consider broader questions around artificial intelligence.” Students majoring in “everything from English to economics to symbolic systems” told Insider that they’re “grappling with the political, social, and even philosophical implications of the AI revolution.”

 

Tech Companies Racing To Meet Corporate Interest In Generative AI

The New York Times (7/5, Lu) reports that many businesses are “eager to find ways to tap the power of generative artificial intelligence,” and “to meet this new demand, tech companies are racing to introduce products” and also “investing more in A.I. development.” The Times discusses how tech companies are also taking steps to respond to the risks of generative AI: “chatbots can produce inaccuracies and misinformation, provide inappropriate responses and leak data. A.I. remains largely unregulated.” The Times adds, “to prevent data leakage and to enhance security, some have engineered generative A.I. products so they do not keep a company’s data and have instructed the A.I. models to answer only questions based on the source of data.”

 

AI Startups Need Connections With Big Tech To Get Access To Enough Computing Power

The New York Times (7/5, Metz) reports that because generative AI requires “huge amounts of money and computing power...for start-ups trying to make a go of it with today’s hottest technology,” the stories of past tech industry startups growing from simple beginnings may become “a thing of the past.” The Times points to the example of AI startup Cohere, created by former Google employees Aidan Gomez and Nick Frosst. The startup’s founders were dependent on access to computing power provided by their former employer to succeed. David Katz, a partner with Cohere investor Radical Ventures, said companies such as Google and Microsoft have control over the limited amount of computing chips that can power AI, which gives them the power to decide which startups have access to those resources.

 

ChatGPT Saw First Monthly Decline In Traffic In June

Reuters (7/5, Hu) reports, “ChatGPT, the wildly popular AI chatbot launched in November, saw monthly traffic to its website and unique visitors decline for the first time ever in June, according to analytics firm Similarweb.” Similarweb’s analysis found “traffic to the ChatGPT website decreased by 9.7% in June from May, while unique visitors to ChatGPT’s website dropped 5.7%.” Similarweb Senior Insights Manager David Carr said the decrease is a sign of a drop in interest following initial excitement over the chatbot’s novelty. RBC Capital Markets Analyst Rishi Jaluria said the drop could indicate greater demand for generative AI that uses real-time information.

        OpenAI Planning Research To Ensure Superintelligent AI Remains Safe For Humans. Reuters (7/5, Tong) reports that in a blog post on Wednesday, ChatGPT’s creator OpenAI announced “plans to invest significant resources and create a new research team that will seek to ensure its artificial intelligence remains safe for humans – eventually using AI to supervise itself.” Reuters explains humans “will need better techniques than currently available to be able to control the superintelligent AI, hence the need for breakthroughs in so-called ‘alignment research,’ which focuses on ensuring AI remains beneficial to humans.” Accordingly, the company “is dedicating 20% of the compute power it has secured over the next four years to solving this problem... In addition, the company is forming a new team that will organize around this effort, called the Superalignment team. The team’s goal is to create a ‘human-level’ AI alignment researcher, and then scale it through vast amounts of compute power.”

 

Some Experts Call For Lawmakers To Be Wary Of Big Tech Push For AI Regulation

The Los Angeles Times (7/5, Tucker Smith) says, “Technology interests, especially OpenAI...have gone on the offensive in Washington, arguing for regulations that will prevent the technology from posing an existential threat to humanity. They’ve engaged in a lobbying spree: According to an analysis by OpenSecrets, which tracks money in politics, 123 companies, universities and trade associations spent a collective $94 million lobbying the federal government on issues including AI in the first quarter of 2023.” However, some experts are skeptical of these efforts. For example, Stanford fellow Marietje Schaake “expressed concern” that when industry stakeholders “warn of existential threats from AI, they are putting the regulatory focus on the horizon, rather than in the present. If lawmakers are worrying about AI ending humanity, they’re overlooking the more immediate, less dramatic worries.”

 

Google Executive Discusses How AI Could Impact Efforts To Personalize Learning Experiences

Education Week (7/6, Langreo) reports that in recent years, “there has been a greater focus on ensuring students have more personalized learning experiences to help them catch up or accelerate their learning.” Additionally, with the “recent advances in artificial intelligence and other adaptive technologies, there are more ways to ‘transform the future of school’ into a more ‘personal learning experience,’” suggests Shantanu Sinha, Google for Education’s vice president and general manager. In an email interview with EdWeek, Sinha discussed “how the adoption of personalized learning in K-12 has changed, the role of technology in personalized learning, and how AI will likely impact efforts to personalize learning.” On how the adoption of personalized learning in K-12 schools “changed in the last 5 or 10 years,” Sinha said, “What’s changed is that teachers now have the tools to make this easier.”

 

Opinion: AI Will Never Replace Physicians

In an opinion piece for the New York Times (7/6), Brigham and Women’s Hospital Pulmonary and Critical-Care Physician Daniela Lamas says, “We find ourselves at the dawn of what many believe to be a new era in medicine, one in which artificial intelligence promises to write our notes, to communicate with patients, to offer diagnoses. ... But as these systems improve and are integrated into our practice in the coming years, we will face complicated questions.” She adds, “Though medicine is a field where breakthrough innovation saves lives, doctors are – ironically – relatively slow to adopt new technology.” And “part of this hesitation is the need for any technology to be tested before it can be trusted.” Lamas concludes, “A.I. can be part of that [decision-making] process, just one more tool that we use, but it will never replace a hand at the bedside, eye contact, understanding – what it is to be a doctor.”

 

Students Challenge Study That Found AI Could Complete MIT’s Undergraduate Curricula

The Chronicle of Higher Education (7/7, Bartlett) reported a study which “found that ChatGPT, the popular AI chatbot, could complete the Massachusetts Institute of Technology’s undergraduate curriculum in mathematics, computer science, and electrical engineering with 100-percent accuracy.” While the study “hadn’t yet passed through peer review,” it boasted 15 authors, “including several MIT professors.” And “considering the remarkable feats performed by seemingly omniscient chatbots in recent months, the suggestion that AI might be able to graduate from MIT didn’t seem altogether impossible.” Three MIT students later “took a close look at the study’s methodology and at the data the authors used to reach their conclusions.” They identified “glaring problems” that amounted to, “in their opinion, allowing ChatGPT to cheat its way through MIT classes.”

 

ChatGPT Loses Users For First Time

The Washington Post (7/7, De Vynck) reports the number of ChatGPT users fell in June for the first time since the app and website launched in November. The monthly falloff may be “a sign that consumer interest in artificial intelligence chatbots and image-generators may be beginning to wane.” App and desktop traffic for ChatGPT “worldwide fell 9.7 percent in June from the previous month, according to internet data firm Similarweb.” In addition, “Downloads of the bot’s iPhone app, which launched in May, have also steadily fallen since peaking in early June, according to data from Sensor Tower.”

        Authors Sue OpenAI For Violating Copyright Law. Insider (7/9, Rivera) reports, “Two award-winning authors recently sued OpenAI, accusing the generative-AI bastion of violating copyright law by using their published books to train ChatGPT without their consent.” The lawsuit “is the latest example of tension between creatives and generative AI tools capable of producing text and images in seconds.” Vanderbilt University Law Professor Daniel Gervais “told Insider that the writers’ lawsuit is one of a handful of copyright cases against generative AI tools nationwide” and it likely won’t be the last. He “expects many more authors will sue companies developing large language models and generative AI as these programs advance and improve at replicating the style of writers and artists,” and added that “he believes a deluge of legal challenges targeting the output of tools like ChatGPT nationwide is imminent.”

 

Amazon CEO Addresses AI Concerns In Interview

Fortune (7/7, Confino) reports Amazon CEO Andrew Jassy “called generative A.I. ‘one of the biggest technical transformations of our lifetimes’ in an interview with CNBC on Thursday. He also called many of today’s A.I. chatbots and other generative A.I. tools part of the ‘hype cycle,’ declaring that Amazon was focused on the ‘substance cycle.’” Jassy “shed some light on Amazon’s A.I. game plan, outlining three macro layers: the computing capabilities, the underlying models, and what Jassy refers to as the ‘application layer,’ for example ChatGPT or Bard.”

 

Companies Overwhelmed By Number Of New AI Tools

The Wall Street Journal (7/7, Bousquette, Subscription Publication) reported the number of new artificial intelligence tools available on the market is causing confusion for companies and chief information officers. Vendors are launching so many new products with AI features that business leaders are struggling to determine which are right for their needs and how these tools should coexist with each other.

 

As College Students Become More Diverse, AI Tools Present Issues With Language Diversity

Inside Higher Ed (7/10, D'Agostino) reports college students today are “more racially and ethnically diverse than decades ago,” though biased AI technology “hints at a larger societal problem. Even if the tech were fixed in time, academics have persistent concerns about humans’ harmful language biases.” Meanwhile, language rules “look stable when written down,” but few linguists argue “that language is fixed, even for a moment in time.” Tech tools “often show up in students’ lives as language authorities, and many of those tools were trained with language data from the internet,” which presents a “language diversity problem, as 10 languages account for nearly 90 percent of the top 10 million websites.” This comes as humans “are losing languages at an alarming rate,” despite their ability to “provide information about human history, cognition and culture, according to a recent Science Advances paper.”

 

Researchers Warn AI-Powered Climate Forecasting May Be “Highly Erratic”

Politico (7/10, Skibell) reports experts are warning “that climate change may pose a distinct challenge for AI weather models, which rely on historical data to produce forecasts, writes Chelsea Harvey.” Unable to draw on “similar trends from the past, AI may not be able to accurately forecast climate-fueled disasters” leading to a “lack of preparedness for the worst of what nature has to offer.” Without historical or “predictive data, AI systems may not be able to forecast climate change-fueled events, which are increasingly smashing records,” and according to Colorado State University researchers Imme Ebert-Uphoff and Kyle Hilburn could generate “highly erratic predictions.”

 

Companies Fear Use Of Generative AI Tools Could Cause Leak Of Sensitive Information

The Washington Post (7/10, Telford, Verma) reports while generative AI tools “such as OpenAI’s ChatGPT have been heralded as pivotal for the world of work, with the potential to increase employees’ productivity by automating tedious tasks and sparking creative solutions to challenging problems,” corporations are worried over the potential that using the tools could disclose sensitive company information. The Post adds, “Several corporate leaders said they are banning ChatGPT to prevent a worst-case scenario where an employee uploads proprietary computer code or sensitive board discussions into the chatbot while seeking help at work, inadvertently putting that information into a database that OpenAI could use to train its chatbot in the future.”

 

Leading AI Researcher To Leave Google

Bloomberg (7/11, Love, Subscription Publication) reports artificial intelligence researcher Llion Jones, “who helped write the pioneering AI paper ‘Attention Is All You Need,’ confirmed to Bloomberg that he will depart Google Japan later this month,” and “said he plans to start a company after taking time off.” Bloomberg explains his 2017 paper “has become a sensation in Silicon Valley,” as it “introduced the concept of transformers, systems that help AI models zero in on the most important pieces of information in the data they are analyzing.”

 

Comedian Joins Copyright Lawsuits Against OpenAI, Meta

The New York Times (7/10, Small) reports comedian Sarah Silverman “has joined a class-action lawsuit against OpenAI and another against Meta accusing the companies of copyright infringement, saying they ‘copied and ingested’ her protected work in order to train their artificial intelligence programs, according to court papers.” The Times says authors Christopher Golden and Richard Kadrey initially filed the lawsuits on Friday.

        The Hill (7/10, Kurtz) reports the lawsuit “against OpenAI, the creator of the AI chatbot ChatGPT, claims that ‘much of the material’ used to train the tool ‘comes from copyrighted works – including books written by Plaintiffs – that were copied by OpenAI without consent, without credit, and without compensation,’” while the lawsuit against Meta “alleges that ‘much of the material’ in the training dataset used to develop its LLaMA large language model ‘comes from copyrighted works – including books written by Plaintiffs – that were copied by Meta without consent, without credit, and without compensation.’”

 

Lockheed Martin Believes AI, Quantum Computing, Nuclear Power Are Key For Future Space Missions

Space News (7/10, Klaczynska, Subscription Publication) reports, “Artificial intelligence, quantum computing and nuclear power are among the key technologies Lockheed Martin sees as important for future space missions. Through a project called Destination: Space 2050, Lockheed Martin executives are exploring, for example, how AI could assist scientific exploration of locations where communications with remote sensors would be disrupted by high latency.” Aura Roy, Lockheed Martin deputy program manager for the Multi-slit Solar Explorer mission, said that in the future, “we will need to rely on AI to augment human decision makers at all levels of command with advanced AI data processing and course-of-action generation that will support all types of operations.”

 

Senators To Receive Classified White House AI Briefing

Reuters (7/10, Shepardson) reports the White House will brief “senators Tuesday on artificial intelligence in a classified setting as lawmakers consider adopting legislative safeguards on the fast-moving technology.” The 3 p.m. ET briefing, “organized by Senate Democratic Leader Chuck Schumer and other senators, will be the first-ever classified Senate briefing on AI and will take place in a sensitive compartmented information facility (SCIF) at the U.S. Capitol.” The briefers will include DNI Haines, “Deputy Secretary of Defense Kathleen Hicks, White House Office of Science and Technology Policy director Arati Prabhakar and National Geospatial Intelligence Agency Director Trey Whitworth.” In a letter, Schumer told senators the briefing will outline how the US government is “using and investing in AI to protect our national security and learn what our adversaries are doing in AI.... Our job as legislators is to listen to the experts and learn as much as we can so we can translate these ideas into legislative action.”

 

National Security Officials Hold Classified Senate Briefing On AI Technology

Bloomberg (7/11, Manson, Subscription Publication) reports senior US national security officials were “scheduled to defend their use and development of artificial intelligence on Tuesday in a classified briefing with senators, amid calls for lawmakers to regulate or even temporarily halt the emerging technology.” Deputy Defense Secretary Kathleen Hicks was “expected to say that congressional support is critical to the Department of Defense’s efforts to adopt AI responsibly, quickly and at sufficient scale, according to a defense official who was briefed on her expected remarks.” Craig Martell, the Pentagon’s digital and AI chief, was “expected to cite examples of AI’s use, including how algorithms have enabled logistics tracking in Ukraine, according to the official.” Hicks and Martell were to be “joined by Avril Haines, director of national intelligence, Arati Prabhakar, director for the White House Office of Science and Technology Policy, and Frank ‘Trey’ Whitworth, vice admiral and director of the National Geospatial Intelligence Agency, which now runs Project Maven.”

        The Hill (7/11, Klar, Beitsch, Weaver) reports senators left the briefing “with increased concerns about the risks posed by the technology and no clear battle lines on a legislative plan to regulate the booming industry,” and Sen. Chris Coons (D-DE) said he left the briefing “more concerned than ever that we have significant challenges in front of us and that the Senate needs to legislate to address these.”

        Meta Executive Details Company’s Vision For Regulating AI. In an interview with the Washington Post (7/11, Oremus, Nix) “Technology 202” newsletter, Meta President of Global Affairs Nick Clegg related that Meta “has its own vision for how AI should be regulated – one in which openness is viewed more as a virtue than a threat. And it’s increasingly pushing that view in Washington and beyond.” Clegg “argued that keeping AI models ‘under lock and key’ is misguided – and that the industry doesn’t need a special licensing regime” and “said the ‘existential threats’ supposedly posed by supersmart AI systems are far-off and hypothetical.” He added, “No one thinks the kind of models that we’re looking at [with] LLaMA one or LLaMA version two are even remotely knocking on the door of these kind of high-capability [AI models] that might require some specialized regulatory licensing treatment.”

        Roose: AI Startup Anthropic At “White-Hot Center Of AI Doomerism.” The New York Times (7/11) columnist Kevin Roose profiles AI startup Anthropic, “one of the world’s leading A.I. research labs, and a formidable rival to giants like Google and Meta.” After spending weeks with the company and its employees, he writes that “Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared – at a deep, existential level – about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.”

 

AI Detection Tools Frequently Misclassify Non-Native Speakers As Bots, Study Says

The Daily Beast (7/11, Ho Tran) reports that while native speakers are “prone to a lot more complexity in our languages,” if you’re “just learning a new language, however, the opposite is probably true.” This “lack of linguistic complexity” also distinguishes text written by large language models “like ChatGPT or Bard, from text written by humans. This idea is what underpins many AI detection apps used by professors and teachers to assess whether or not a student’s essay was actually written by one of their students.” However, a study published in the journal Patterns found that these models “frequently misclassify non-native English writing as AI generated.” The study’s authors wrote, “The design of many GPT detectors inherently discriminates against non-native authors, particularly those exhibiting restricted linguistic diversity and word choice.” They added that “the parameters used by these detectors include ‘measures that also unintentionally distinguish non-native- and native-written samples.’”
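As context for how such detectors typically work, a minimal perplexity-based sketch follows (an assumption about the general approach, not the code of any detector examined in the study): text that a language model finds highly predictable scores low and gets flagged, which is exactly where plainer non-native writing can be penalized.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    # Mean negative log-likelihood of the text under GPT-2, exponentiated.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

def looks_ai_generated(text, threshold=60.0):
    # The threshold is an arbitrary illustrative value, not a calibrated one.
    return perplexity(text) < threshold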

 

University Of Washington Researchers Create AI Tool To Design New Proteins

Nature (7/11, Callaway) reports on RFdiffusion, a new AI tool created by a team of researchers led by University of Washington Computational Biophysicist David Baker that designs new proteins. It is “inspired by AI software that synthesizes realistic images” and “can churn out realistic protein shapes to criteria that designers specify.” RFdiffusion and similar protein-designing AIs “are based on the same principles as neural networks that generate realistic images.” Meanwhile, “these ‘diffusion’ networks are trained on data, be they images or protein structures, which are then made progressively noisier, eventually bearing no resemblance to the starting image or structure.” Then, the network “learns to ‘denoise’ the data, performing the task in reverse.” Science (7/11, Service) reports RFdiffusion “may speed efforts to design everything from drugs to fight cancer and infectious diseases to novel proteins able to quickly extract carbon dioxide from the atmosphere.”
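A toy sketch of that noising-and-denoising training loop (illustrative only; RFdiffusion operates on protein backbone geometry, not the random vectors used here):

import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 0.02, T)            # per-step noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Tiny network that predicts the injected noise from a noisy sample plus a
# normalized timestep (real models use far larger, structure-aware networks).
denoiser = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))

def training_step(x0):
    # x0: (batch, 3) clean samples, standing in for structural coordinates.
    t = torch.randint(0, T, (x0.shape[0],))
    a = alphas_bar[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise    # forward: add noise
    t_feat = (t.float() / T).unsqueeze(-1)
    pred = denoiser(torch.cat([x_t, t_feat], dim=-1))  # reverse: predict noise
    return nn.functional.mse_loss(pred, noise)

loss = training_step(torch.randn(16, 3))
loss.backward()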

 

AI’s Rapid Rise Spurring More Fears Than Job Losses For Now

Bloomberg (7/11, Horobin, Subscription Publication) reports the “impact on employment of the rapid spread of Artificial Intelligence is limited so far, but the potential for the technology to substitute jobs is significant, and workers are increasingly worried about their future, an OECD study showed.” The piece explains that “early adopters of AI are reluctant to fire staff, and it can improve working lives by helping with tedious and dangerous tasks, according to the survey of 2,000 employers and 5,300 workers in manufacturing and finance across seven OECD countries.”

 

AI-Driven Healthcare Slow To Start, But Wave Appears To Be Building

CNBC (7/12, Curry) reports, “AI-driven health care goes beyond chatbot doctors and AI diagnoses.” Numerous “transformations happen behind the scenes with productivity and comprehension enhancements.” With “83% of executives agreeing science tech capabilities could help address health-related challenges around the world, the move to AI-driven health care may seem slow at first, but the wave appears to be building.” One company using AI to uplift its offering is Prenuvo, which offers “whole-body MRI scans for preventative health screenings.” Although “these scans are available to individuals at clinics across North America, companies like TDK Ventures and Caffeinated Capital have employed Prenuvo’s enterprise services to support their workforce.”

 

Elon Musk Launches New AI Company xAI

The Washington Post (7/12) reports Elon Musk announced a “new artificial intelligence company xAI during a live event Wednesday evening” in a sit-down chat with Reps. Ro Khanna (D-CA) and Mike Gallagher (R-WI) on Twitter Spaces. In the interview, Musk “said he welcomed government oversight, that he’d recently had conversations with senior Chinese government officials about AI risks and regulation, and said he believes China would be open to international cooperation on regulating the tech.” Musk “registered xAI in Nevada in March,” but “on Wednesday, he unveiled a team of 11 employees, drawn from OpenAI, Google and the University of Toronto, a center of academic AI research.”

 

Bill Gates Touts AI’s Potential In Blog Post

CNBC (7/12, Leswing) reports Microsoft co-founder Bill Gates believes in the “potential of artificial intelligence, repeating often that he believes models like the one at the heart of ChatGPT are the most important advancement in technology since the personal computer.” In a blog post, he wrote, “One thing that’s clear from everything that has been written so far about the risks of AI – and a lot has been written – is that no one has all the answers. Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think.” Gates broadcasting a “middle-of-the-road view to AI risks could shift the debate around the technology away from doomsday scenarios towards more limited regulation addressing current risks, just as governments around the world grapple with how to regulate the technology and its potential downfalls.”

        Analysis: Copyright May Be “Achilles’ Heel” Of Generative AI. In an analysis for Fortune (7/12), Lance Lambert says that while copyright-related lawsuits “have been an occasional feature of the generative A.I. boom since it began...the past couple weeks have seen a real flurry of activity. The highest-profile suits come courtesy of star comedian Sarah Silverman, who...last Friday went after both Meta and OpenAI over the training of the companies’ large language models...on their copyrighted books.” Lambert continues, “It’s becoming clear that, if generative A.I. has an Achilles’ heel (that isn’t its tendency to ‘hallucinate’), it’s copyright. Some of these suits may be more plausible than others, and justice needs to take its course, but there does at least seem to be a strong argument for saying generative A.I. relies on the exploitation of stuff that people have created, and that the business models that accompany the technology do not allow for those people to be compensated for this absorption and regurgitation.”

 

Harris Discusses AI With Labor, Civil Rights Leaders

Bloomberg (7/12, Gardner, Subscription Publication) reports Vice President Harris “convened a group of civil rights and labor leaders Wednesday to discuss the field of artificial intelligence, which critics warn is already perpetuating discrimination and increased surveillance of American workers.” The meeting “is part of a broader administration push to identify and tackle concerns with artificial intelligence technologies as their use by employers and the general public has exploded.” Harris “cited the need for privacy protections for consumers and workers.” The Vice President said, “Innovation has so much possibility to improve the condition of human life. ... We must also ensure that in that process, we are not trampling on people’s rights.”

        The Hill (7/12, Klar) reports Harris said, “This is a very multifaceted issue and topic, and we also know that this is technology that is rapidly developing. ... We have a sense of urgency that we get in front of this issue in terms of understanding the implications so that we can work as a community of folks – private sector, public sector, non-profits, government – to do what is in the best interest of the health and safety and well being of the people of our country.” Harris “said a ‘guiding principle’ for the administration is to reject the ‘false choice’ that suggests the U.S. can either advance innovation or protect consumers.” She said, “We can do both.” The Hill (7/12) provides video footage of the meeting.

        Additional coverage includes Bloomberg BNA (7/12) and the Boston (MA) Globe (7/12).

        WPost: AI Regulation Urgently Needed To Avoid Electoral Chaos. A Washington Post (7/12) editorial calls on US policymakers to “act now” to counter the electoral threat posed by AI, which could “make misrepresentation more realistic than ever” and cause “chaos in the 2024 election,” as AI-generated images have already “been directly deployed as electoral tools.” The Post explains that while “ideally, campaigns would refrain altogether from using AI to depict false realities,” the potential for AI to “evolve into an ever more adept illusionist, as well as the likelihood that bad actors will deploy it to huge audiences, means it’s crucial to preserve a world in which voters can (mostly) believe what they see.” In conclusion, the Post adds that upcoming AI legislation is “no excuse for government not to take smaller steps forward on the path immediately in front of it.”

        Experts Worry About Propagation Of AI-Generated Content Online. The Wall Street Journal (7/12, McMillan, Subscription Publication) reports NewsGuard estimates there were about 49 news websites generating false or misleading content through AI tools back in May of this year, but that number rose to 277 websites by June. Experts worry the proliferation of AI-generated content on the level of spam will result in harm, from political disinformation to phishing websites and other criminal activity. Google says it works to derank search results that come from spam sources or ones that rely on AI content to boost their rankings and page hits.

 

How Educators Can Speak With Students About Digital Literacy, Limitations Of AI Tools

K-12 Dive (7/12, Barack) reports that as students begin using ChatGPT for class work, “it’s also important that educators speak with them about the limitations of artificial intelligence.” Wikipedia’s rise “offers a good analogy, said Danny Liu, associate professor and member of the Educational Innovation team at the University of Sydney in Australia. When the site first appeared, he said, teachers explained to students that using Wikipedia as a scholarly source was not appropriate but that it still held value.” He also said, “Teachers helped students to develop their information and digital literacy, for example, around Wikipedia, so that they would take a critical lens to things they see online. A similar approach needs to be taken with generative AI.”

 

FTC Launches Investigation Of OpenAI Over Possible Violation Of Consumer Protection Laws

The Wall Street Journal (7/13, A1, McKinnon, Tracy, Subscription Publication) reports the Federal Trade Commission has launched an investigation into whether OpenAI’s ChatGPT has harmed people by publishing false information about them. The New York Times (7/13, Kang, Metz) reports the agency wrote in a letter to OpenAI that it is examining whether the company “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.” The agency “said it was also looking into OpenAI’s security practices” and asked “dozens of questions...including how the start-up trains its A.I. models and treats personal data.”

        The Washington Post (7/13, A1, Zakrzewski) explains the FTC “has issued multiple warnings that existing consumer protection laws apply to AI, even as the administration and Congress struggle to outline new regulations. ... The FTC’s demands of OpenAI are the first indication of how it intends to enforce those warnings. If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data.”

        The CBS Evening News (7/13) reported that for its part, OpenAI released a statement “saying in part that it is confident that its technology follows the law.” Reuters (7/13, Bartz) reports OpenAI CEO Sam Altman also “said in a series of tweets on Thursday that the latest version of the company’s technology, GPT-4, was built on years of safety research and the systems were designed to learn about the world and not private individuals.” He added, “Of course we will work with the FTC.”

        Meanwhile, a Wall Street Journal (7/13, Subscription Publication) editorial writes critically of the FTC’s probe of OpenAI and compares the move to a fishing expedition. The Journal also argues that the agency has not received authorization from Congress to regulate artificial intelligence.

        Associated Press Forms Partnership With OpenAI To Explore Use In News Operations. Reuters (7/13) reports the Associated Press “is licensing part of its archive of news stories to OpenAI under a deal that will explore generative AI’s use in news, the companies said on Thursday, a move that could set the precedent for similar partnerships between the industries.” The news publisher “will gain access to OpenAI’s technology and product expertise as part of the deal, whose financial details were not disclosed.” AP “also did not reveal how it would integrate OpenAI’s technology in its news operations.” The publisher “already uses AI for automating corporate earnings reports, recapping sporting events and transcription for certain live events.”

 

Report: Generative AI Could Capitalize On Healthcare’s Wealth Of Unstructured Data

“Generative artificial intelligence could capitalize on the healthcare industry’s wealth of unstructured data, alleviating provider documentation burden and improving relationships between patients and their health plans, according to a new report by consulting firm McKinsey,” Healthcare Dive (7/13, Olsen) reports. The report contends that “generative AI could help payers quickly pull benefits material for members or help call center workers aggregate information during conversations about claims denials.” AI could be used by providers “to take conversations with patients and turn them into clinical notes, create discharge summaries or handle administrative questions from workers at health systems.”

 

US Companies Are On A Hiring Spree For AI Jobs

CNBC (7/13, Liu) reports that “the U.S. is leading the way in artificial intelligence jobs, and many of them easily pay six figures, according to new data from the global job search platform Adzuna.” CNBC says “there were 7.6 million open jobs in the U.S. in June, according to the Adzuna database, with a growing share calling for AI skills: 169,045 jobs in the U.S. cited AI needs, and 3,575 called for generative AI work in particular.” AWS VP of Global Talent Acquisition Jay Shankar said back in April, “It’s a super important skillset employers are looking for, across all industries. ... AI is practically everywhere now...and to me, if there’s one technical skill you want to learn, that’s the area to focus on.”

 

Educators Increasingly See AI As Both Friend And Foe

The Washington Post reported that as AI “jolts education, public school teachers and university professors are discovering that it is not just for students.” Some educators are “using it to help develop tests, generate case studies, write emails and rethink teaching strategies.” What’s more, “Facebook groups about AI have drawn teachers and professors from across the globe – looking for tips, posting about successes, raising ethical quandaries.” In other words, “they see AI as both friend and foe – with the capacity to enrich learning, spur creativity and save time on tasks, even as it raises alarms.” Some public schools have even “blocked access to it, including those in New York City and Los Angeles, citing concerns about student learning and cheating.”

        UC Berkeley Professor Says AI-Powered Chatbot Tutors May Revolutionize Traditional Education For Students. Fox News (7/12, Colton) reported AI-powered chatbot tutors “will likely revolutionize traditional education and benefit students with one-on-one training, according to a University of California, Berkeley professor of computer science.” Stuart Russell told the Guardian it “ought to be possible within a few years, maybe by the end of this decade, to be delivering a pretty high quality of education to every child in the world. That’s potentially transformative.” At the UN’s AI for Good Global Summit, Russell argued that personalized chatbots could possibly cover “most material through to the end of high school” for students, “all from their cell phone or computer.” This comes as OpenAI is “currently testing a virtual tutor program powered by GPT-4, according to an announcement of a partnership with an education nonprofit in March.”
