Dr. T's AI brief


dtau...@gmail.com

Jul 30, 2023, 4:45:29 PM
to ai-b...@googlegroups.com

Researchers Poke Holes in Safety Controls of ChatGPT, Other Chatbots
The New York Times
Cade Metz
July 27, 2023


Scientists at Carnegie Mellon University and the Center for AI Safety demonstrated the ability to produce nearly infinite volumes of destructive information by bypassing artificial intelligence (AI) protections in any leading chatbot. The researchers found they could exploit open source systems by appending a long suffix of characters onto each English-language prompt inputted into the system. In this manner, they were able to persuade chatbots to provide harmful information and generate discriminatory, counterfeit, and otherwise toxic data. The researchers found they could use this method to circumvent the safeguards of OpenAI's ChatGPT, Google's Bard, and Anthropic's Claude chatbots. While they concede that an obvious countermeasure for preventing all such attacks does not exist, the researchers suggest chatbot developers could block the suffixes they identified.
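To make the mechanism concrete, here is a minimal sketch of the suffix-search idea, not the researchers' actual attack: a suffix is grown greedily to raise a scoring function, where score_affirmative is a hypothetical stand-in for a score a real attack would derive from the target model's output probabilities.

```python
# Schematic sketch of the suffix-search idea (not the researchers' code).
# `score_affirmative` is a hypothetical stand-in; a real attack derives this
# score from the target model's output probabilities for a compliant reply.
import random

VOCAB = list("abcdefghijklmnopqrstuvwxyz!?*")  # toy token set

def score_affirmative(prompt: str) -> float:
    """Placeholder objective; higher means the model is more likely to comply."""
    return random.random()

def find_suffix(base_prompt: str, length: int = 12, candidates: int = 16) -> str:
    suffix = ""
    for _ in range(length):
        # try several candidate tokens and keep the one that raises the score most
        best_tok, best_score = None, float("-inf")
        for tok in random.sample(VOCAB, candidates):
            score = score_affirmative(base_prompt + " " + suffix + tok)
            if score > best_score:
                best_tok, best_score = tok, score
        suffix += best_tok
    return suffix

print(find_suffix("<some benign-looking prompt>"))
```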
 

Full Article

*May Require Paid Registration

New Cryptocurrency Offers Users Tokens for Scanning Their Eyeballs
The Guardian (U.K.)
Hibaq Farah
July 25, 2023


OpenAI CEO Sam Altman has launched the Worldcoin cryptocurrency scheme to differentiate "verified humans" from artificial intelligence (AI) systems via biometric scanning. Participants launching a new account have their iris scanned in exchange for a "genesis grant" of 25 tokens, equivalent to roughly £40 ($51.66); the World ID they receive will verify they are a "real and unique person," according to the scheme. Users also will be able to make payments, purchases, and transfers globally using digital assets and traditional currencies via the World application. Two million users from 33 countries, mainly in Europe, India, and southern Africa, have enrolled and been scanned on the service, which officially launched this week.

Full Article

 

 

ChatGPT's Accuracy Has Gotten Worse
Popular Science
Andrew Paul
July 19, 2023


Stanford University and University of California, Berkeley (UC Berkeley) researchers demonstrated an apparent decline in the reliability of OpenAI's ChatGPT large language model (LLM) over time without any solid explanation. The researchers assessed the chatbot's tendency to offer answers with varying degrees of accuracy and quality, as well as how appropriately it follows instructions. In one example, the researchers observed that GPT-4's nearly 98% accuracy in identifying prime numbers fell to less than 3% between March and June 2023, while GPT-3.5's accuracy increased; both GPT-3.5 and GPT-4's code-generation abilities worsened in that same interval. UC Berkeley's Matei Zaharia suggested the decline may reflect a limit reached by reinforcement learning from human feedback, or perhaps bugs in the system.
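A minimal sketch of the kind of longitudinal probe described above, assuming a placeholder ask_model function in place of real calls to dated model snapshots: ground-truth primality is computed locally and compared against the model's yes/no answers.

```python
# Toy accuracy probe: ask a model whether each number is prime and score it
# against ground truth. `ask_model` is a hypothetical placeholder; a real probe
# would query a specific dated snapshot (e.g., March vs. June) and parse its
# yes/no answer.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def ask_model(n: int) -> bool:
    """Placeholder for an API call; returns the model's claimed answer."""
    return True  # degenerate "always yes" responder, for illustration only

def probe_accuracy(numbers) -> float:
    numbers = list(numbers)
    hits = sum(ask_model(n) == is_prime(n) for n in numbers)
    return hits / len(numbers)

print(f"accuracy: {probe_accuracy(range(1000, 2000)):.1%}")
```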

Full Article

 

 

A Simpler Method for Learning to Control a Robot
MIT News
Adam Zewe
July 26, 2023


A machine learning (ML) method developed by researchers at the Massachusetts Institute of Technology (MIT) and Stanford University can learn to control a robot, drone, or autonomous vehicle to more effectively negotiate dynamic environments than other techniques. MIT's Navid Azizan said, "The focus of our work is to learn intrinsic structure in the dynamics of the system that can be leveraged to design more effective, stabilizing controllers." The method has a prescribed structure that researchers can use to derive an effective controller from the model. Stanford’s Spencer M. Richards said, “By making simpler assumptions, we got something that actually worked better than other complicated baseline approaches.”
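As a rough illustration of the "learn a model, then derive a controller from it" workflow, the sketch below fits a linear dynamics model from trajectory data and computes an LQR gain from the learned model; the linear/LQR setting is an illustrative simplification, not the structured nonlinear approach the MIT/Stanford team describes.

```python
# Hedged sketch of model-based control design: fit x_{t+1} = A x_t + B u_t from
# data by least squares, then derive an LQR controller from the learned model.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like system
B_true = np.array([[0.0], [0.1]])

# collect trajectory data with random control inputs
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.normal(size=1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next

# least-squares fit of [A B] from the collected data
Z = np.hstack([np.array(X), np.array(U)])                 # (N, 3)
AB = np.linalg.lstsq(Z, np.array(Xn), rcond=None)[0].T    # (2, 3)
A_hat, B_hat = AB[:, :2], AB[:, 2:]

# derive an LQR controller from the learned model
Q, R = np.eye(2), np.eye(1)
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("learned feedback gain:", K)   # control law u = -K x
```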
 

Full Article

 

 

Physics-Informed Supervised Learning Framework Could Make Computational Imaging Faster
Optica
July 25, 2023


A new physics-informed variational autoencoder (P-VAE) framework developed by Swarthmore College researchers could accelerate computational imaging via supervised learning. The framework uses sparse measurements to capture and jointly reconstruct each light source in an image, deducing prior and posterior distributions by pooling information from dataset-spanning measurements and incorporating established data about the forward physics of imaging. The researchers enhanced light-emitting diode (LED) array microscopy with P-VAE, shortening the acquisition time by reducing the number of images required per object. They also used only sparse measurements to reconstruct objects imaged by computed tomography through the framework.
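For intuition, here is a toy numpy sketch of the kind of objective a physics-informed variational autoencoder optimizes, with a random linear operator A standing in for the known imaging physics; the Gaussian assumptions, shapes, and trivial "decoder" are illustrative assumptions, not the Swarthmore implementation.

```python
# Toy sketch: reconstruct sparse measurements y through a KNOWN forward model A
# (the imaging physics), plus a KL penalty on the latent code.
import numpy as np

rng = np.random.default_rng(0)
n_obj, n_meas = 64, 16                      # object size, number of sparse measurements
A = rng.normal(size=(n_meas, n_obj))        # stand-in forward physics operator

def elbo(y, mu, log_var, decode):
    """One-sample Monte Carlo ELBO: E_q[log p(y|x)] - KL(q(z) || N(0, I))."""
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)   # reparameterization
    x_hat = decode(z)                                            # decoded object estimate
    recon = -0.5 * np.sum((y - A @ x_hat) ** 2)                  # physics-consistent term
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
    return recon - kl

# toy usage with a trivial "decoder" mapping an 8-dim latent to 64 object pixels
decode = lambda z: np.tile(z, n_obj // z.size)
y = A @ rng.normal(size=n_obj)
print(elbo(y, mu=np.zeros(8), log_var=np.zeros(8), decode=decode))
```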

Full Article

 

 

Top Tech Firms Sign White House Pledge to Identify AI-Generated Images
The Washington Post
Cat Zakrzewski
July 21, 2023


On Friday, the White House announced that Google, Amazon, Microsoft, Meta, and OpenAI, along with tech startups Anthropic and Inflection, had signed a voluntary pledge to have their artificial intelligence (AI) systems verified by independent security experts before their public release. The companies also pledged to share safety data with the government and researchers and to develop "watermarking" systems that would identify AI-generated images, videos, or text. The agreement, which a senior White House official said would strengthen industry standards, comes as the Biden administration plans an AI-focused executive order, Congress works to create bipartisan legislation to regulate AI, and government agencies look to leverage existing laws for AI regulation.

Full Article

*May Require Paid Registration

 

 

How Hacking Honeybees Brings AI Closer to the Hive
IEEE Spectrum
Sarah Wells
July 21, 2023


Computer scientists at the U.K.'s University of Sheffield have developed a new form of decision-making machine intelligence by analyzing the brains of honeybees. The researchers monitored 20 bees as they probed color-coded flowers, determining that their fast, accurate decision-making, compared with that of other animals and artificial systems, "exhibits a level of intricacy that parallels certain aspects of decision-making seen in higher animal species," according to Sheffield's HaDi MaBouDi. The researchers created a bee-like model with acceptance and rejection decision pathways that weigh the quality of stimuli to arrive at decisions while retaining information about past stimuli to recall irrelevant stimuli. The model's response rates were similar to those of bees when presented with 25 scenarios of random high-reward and low-reward stimuli.

Full Article

 

 

AI That Teaches Other AI
USC Viterbi School of Engineering
Greg Hardesty
July 18, 2023


Scientists at the University of Southern California (USC), Intel Labs, and the Chinese Academy of Sciences demonstrated that robots can be trained to train other robots by sharing their knowledge. The researchers developed the Shared Knowledge Lifelong Learning (SKILL) tool to teach artificial intelligence agents 102 unique tasks, the knowledge of which they then shared over a decentralized communication network. The researchers found the SKILL tool's algorithms speed up the learning process by allowing agents to learn in parallel. The work indicated learning time shrinks by a factor of 101.5 when 102 agents each learn one task and then share what they have learned.
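The arithmetic behind that speedup follows from the parallel, share-everything pattern; the sketch below illustrates the pattern only (not the SKILL architecture itself): each agent trains one task-specific module concurrently, and the pooled modules then cover all 102 tasks.

```python
# Illustrative pattern only (not the SKILL implementation): 102 agents each
# learn one task-specific module in parallel, then pool the modules so every
# agent can perform every task. Wall-clock cost is roughly one task's training
# time plus sharing overhead, hence a speedup approaching 102x over sequential learning.
from concurrent.futures import ThreadPoolExecutor

TASKS = [f"task_{i:03d}" for i in range(102)]

def learn(task):
    """Placeholder for training a task-specific module on one task."""
    return task, f"<module for {task}>"

with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
    shared_knowledge = dict(pool.map(learn, TASKS))  # idealized decentralized sharing

print(len(shared_knowledge), "task modules now available to every agent")
```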

Full Article

 

 

Algorithm Learns Chemical Language, Accelerates Polymer Research
Georgia Tech News Center
July 18, 2023


The polyBERT machine learning model developed by the Georgia Institute of Technology's Christopher Kuenneth and Rampi Ramprasad could revolutionize polymer research. The researchers trained polyBERT on a dataset of 80 million polymer chemical structures so it could become fluent in the “chemical language” of polymers. The algorithm extracts the most meaningful information from chemical structures using the Transformer architecture employed in natural language models. PolyBERT generates fingerprints more than two orders of magnitude faster than traditional chemical fingerprinting methods, enabling the rapid screening of vast polymer spaces at an unparalleled scale.
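The sketch below shows the general shape of such a fingerprinting pipeline as a hedged illustration rather than polyBERT itself: a toy character tokenizer feeds an untrained Transformer encoder whose mean-pooled outputs serve as a fixed-length fingerprint.

```python
# Hedged sketch of Transformer-based fingerprinting: encode a polymer's string
# representation and mean-pool the contextual embeddings into a fingerprint.
# The character tokenizer and untrained weights are illustrative assumptions.
import torch
import torch.nn as nn

class ToyPolymerEncoder(nn.Module):
    def __init__(self, vocab_size=128, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):                 # (batch, seq_len)
        h = self.encoder(self.embed(token_ids))   # contextual embeddings
        return h.mean(dim=1)                      # mean-pool -> fixed-length fingerprint

def tokenize(text, max_len=32):
    ids = [min(ord(c), 127) for c in text[:max_len]]
    return torch.tensor([ids + [0] * (max_len - len(ids))])

model = ToyPolymerEncoder()
fp = model(tokenize("[*]CC([*])C"))    # polyethylene-like repeat-unit string
print(fp.shape)                        # torch.Size([1, 64])
```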

Full Article

 

 

Bringing COVID-19 Data into Focus
University of California, Davis
Andy Fell
July 14, 2023


A computer vision-based approach developed by researchers at the University of California, Davis (UC Davis) and Spain's University of the Basque Country uses COVID-19 mortality data to identify changes in infection rates when social distancing, lockdowns, masking, and other non-pharmaceutical interventions were introduced during the pandemic's first year. Their deconvolution method uses a neural network with information on the virus' behavior and infection dynamics to work back from death rate data (output) to the daily incidence rate (input). UC Davis' Leonor Saiz said, "We borrowed a concept from vision technology to apply it to epidemiology." The approach could be used in future pandemics to determine which actions would be most effective in lowering infection rates.
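The core idea, that observed deaths are the incidence curve convolved with an infection-to-death delay, can be illustrated with a toy numpy deconvolution; the Gaussian delay kernel and the least-squares inversion are assumptions for illustration, not the study's trained neural network.

```python
# Toy deconvolution: deaths(t) = sum_s incidence(s) * delay(t - s), so the
# incidence curve can be recovered by inverting the (known) forward model.
import numpy as np

T = 120
t = np.arange(T)
true_incidence = 1000 * np.exp(-0.5 * ((t - 40) / 12.0) ** 2)   # synthetic epidemic curve

delay = np.exp(-0.5 * ((np.arange(40) - 18) / 6.0) ** 2)        # assumed infection-to-death
delay /= delay.sum()                                            # delay, peaking ~18 days

# forward model as a matrix: deaths = K @ incidence
K = np.zeros((T, T))
for s in range(T):
    for d, w in enumerate(delay):
        if s + d < T:
            K[s + d, s] = w
deaths = K @ true_incidence

# "deconvolution": recover the incidence curve from the death curve
recovered = np.linalg.lstsq(K, deaths, rcond=None)[0]
print("max reconstruction error:", float(np.max(np.abs(recovered - true_incidence))))
```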

Full Article

 

 

Using AI to Save Species from Extinction Cascades
Flinders University (Australia)
Yaz Dedovic
July 12, 2023


A machine learning algorithm developed by researchers at Australia's Flinders University can predict which animals are likely to become extinct based on species interactions. Predicting which species a predator is most likely to eat could help prevent co-extinctions, in which a predator goes extinct due to the loss of prey, or extinctions in naïve native prey due to new invasive predators. Using information on which species naturally interact (or don't) and their traits, the algorithm can predict which species on a list will interact, and how. The researchers found the algorithm was accurate in predicting bird and mammal predator-prey interactions.
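A minimal sketch of the trait-based idea, with synthetic traits and a random-forest classifier standing in for the study's data and model: each candidate predator-prey pair is described by trait features, and a classifier trained on known interactions predicts unobserved ones.

```python
# Hedged sketch of trait-based interaction prediction (synthetic data, not the
# study's model): describe each predator-prey pair by trait features and train
# a classifier on known interactions to predict unobserved ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pairs = 500
# toy per-pair features: predator body mass, prey body mass,
# habitat overlap, activity-time overlap
X = rng.normal(size=(n_pairs, 4))
# toy ground truth: an interaction when the predator is larger and habitats overlap
y = ((X[:, 0] > X[:, 1]) & (X[:, 2] > 0)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], y[:400])
print("held-out accuracy:", model.score(X[400:], y[400:]))
```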

Full Article

 

Wave Of Intellectual Property Litigation Threatens Generative AI Adoption

The Washington Post (7/16, A1, De Vynck) details various legal efforts to hold tech companies accountable for data scraping, as “an increasingly vocal group of artists, writers and filmmakers are arguing artificial intelligence tools like chatbots ChatGPT and Bard were illegally trained on their work without permission or compensation.” While critics “say the livelihoods of millions of creative workers are at stake, especially because AI tools are already being used to replace some human-made work,” AI proponents “have argued that the use of copyrighted works to train AI falls under fair use,” even as “the wave of lawsuits, high-profile complaints and proposed regulation could pose the biggest barrier yet to the adoption of ‘generative’ AI tools.”

 

Content Creators, Social Media Companies Stage “Revolts” Against AI

The New York Times (7/15, A1, Frenkel, Thompson) reported “fan fiction writers, actors, social media companies and news organizations” are among groups “staging revolts against A.I. systems.” While creators “are locking their files to protect their work or are boycotting certain websites that publish A.I.-generated content,” websites are starting to charge for access to their data. Additionally, “at least 10 lawsuits have been filed this year against A.I. companies, accusing them of training their systems on artists’ creative work without consent.” However, the “data protests may have little effect in the long run,” as tech giants “already sit on mountains of proprietary information and have the resources to license more,” meaning only startups and nonprofits may be unable “to obtain enough content to train their systems.”

 

Study Finds People Have Limited Ability To Distinguish Between Chatbot, Human Responses

Science Daily (7/17) reports, “ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting the potential for chatbots to be effective allies to healthcare providers’ communications with patients.” The research team “presented 392 people aged 18 and above with ten patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by ChatGPT.” The study “found people have limited ability to distinguish between chatbot and human-generated responses.” On average, participants “correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% for different questions.”

 

Skeptics Worry Musk’s AI Startup Will Become “Truth Social Of Chatbots”

Politico (7/17, Schreckinger) reports while Elon Musk “wants his latest politically incorrect startup to become the Tesla of AI – a household name synonymous with cutting-edge technology,” skeptics fret it will become merely “the Truth Social of chatbots, a right-leaning alternative to market leaders that fails to take off.” Politico says Musk’s launch of xAI last week both “extends his campaign against progressive mores into an emerging, politically charged technology” and “cements his status as the de facto leader of a faction of Silicon Valley libertarians that have increasingly aligned with Republicans.”

        Experts: Current Generative AI Systems Offer “Only A Taste Of The Sophistication” To Come. Reuters (7/17, Tong, Dastin) interviewed “about two dozen entrepreneurs, investors and AI experts” who claim while current generative AI systems are “still far from emulating science fiction’s dazzling digital assistants,” they “are only a taste of the sophistication that could come in future years from increasingly advanced and autonomous agents as the industry pushes towards an artificial general intelligence (AGI) that can equal or surpass humans in myriad cognitive tasks.”

Silicon Valley Using AI To Create New Generation Of More Advanced Virtual Assistants

Reuters (7/17) reports that “a new wave of AI helpers with greater autonomy is raising the stakes” for virtual assistant development, “powered by the latest version of the technology behind ChatGPT and its rivals.” The new AI-powered assistants “promise to perform more complex personal and work tasks when commanded to by a human, without needing close supervision.” However, “the industry is still far from emulating science fiction’s dazzling digital assistants.”

 

Colleges Look To Automate Admissions Processes With Support From AI Companies

Higher Ed Dive (7/18, Burke) reports ChatGPT has created “significant buzz and renewed a conversation about what parts of human life and labor might be easily automated,” and despite criticism, “some universities and admissions officers are still clamoring to use AI to streamline the acceptance process” alongside companies that “are eager to help them.” For example, OneOrigin, an artificial intelligence company, “offers a product called Sia, which provides speedy college transcript processing by extracting information like courses and credits. Once trained, it can determine what courses an incoming or transfer student may be eligible for, pushing the data to an institution’s information system,” which can save time for admissions officers, “and potentially cut university personnel costs, the company said.”

 

Meta Announces Commercial Version Of Llama Open-Source AI Model

Reuters (7/18, Paul) reports Meta said Tuesday it is “releasing a commercial version of its open-source artificial intelligence model Llama...giving start-ups and other businesses a powerful free-of-charge alternative to pricey proprietary models sold by OpenAI and Google.” The new version, Llama 2, “will be distributed by Microsoft through its Azure cloud service and will run on the Windows operating system, Meta said in a blog post, referring to Microsoft as ‘our preferred partner’ for the release.” In addition, it “will be made available via direct download and through Amazon Web Services, Hugging Face and other providers, according to the blog post and a separate Facebook post by Meta CEO Mark Zuckerberg.”

        The New York Times (7/18, Isaac, Metz) reports that by open-sourcing the model, “Meta can capitalize on improvements made by programmers from outside the company while – Meta executives hope – spurring A.I. experimentation.” By offering “the code behind the company’s latest and most advanced A.I. technology to developers and software enthusiasts around the world free of charge,” CEO Mark Zuckerberg said “more people can scrutinize it to identify and fix potential issues.”

        The Washington Post (7/18) reports, “Facebook’s Llama 2 is a ‘large language model’ — a highly complex algorithm trained on billions of words scraped from the open internet” and “Facebook’s answer to Google’s Palm-2, which powers its AI tools, and OpenAI’s GPT4, the tech behind ChatGPT.”

        The AP (7/18) reports, “Zuckerberg said people can download its new AI models directly or through a partnership that makes them available on Microsoft’s cloud platform Azure ‘along with Microsoft’s safety and content tools.’”

        Additional coverage includes Bloomberg (7/18, Subscription Publication), The Hill (7/18), CNBC (7/18, Leswing), CNBC (7/18, Vanian), and The Verge (7/18).

OpenAI Concerned Over Potential Uses Of Face Recognition Technology

The New York Times (7/18, Hill) reports ChatGPT has the ability to analyze images, “describing what’s in them, answering questions about them and even recognizing specific people’s faces.” This has applications ranging from helping blind individuals to offering quick medical diagnoses, but also creates the potential for threats to privacy. OpenAI Policy Researcher Sandhini Agarwal said there are concerns that making the image analysis feature fully available to the public could push acceptable privacy boundaries set by US tech companies and could also cause legal issues in areas, such as Europe, that require citizens’ consent before their biometric information can be used.

 

Survey: Many Students Have Yet To Embrace ChatGPT As 40% Of Teachers Report Weekly Use

Education Week (7/18, Prothero) reports according to the results of a new survey conducted by Impact Research on behalf of the Walton Family Foundation, “half of students, ages 12-18, said they have never used ChatGPT” and a quarter of students reported using ChatGPT “at least once per week. That’s compared to 40 percent of teachers who said they used it at least once a week.” Bigger picture, “6 in 10 teachers now say they have used ChatGPT in their jobs, marking a 13-point increase from February when a similar survey was done.” Meanwhile, “only 35 percent of students said in this most recent survey that ChatGPT has had a positive impact on their schooling experience, compared to 54 percent of teachers who said the new technology has been positive.”

 

Apple Develops Own AI Tools

Bloomberg (7/19, Subscription Publication) reports Apple has been “quietly working on artificial intelligence tools that could challenge those of OpenAI Inc., Alphabet Inc.’s Google and others, but the company has yet to devise a clear strategy for releasing the technology to consumers.” Apple “has built its own framework to create large language models” called Ajax, which “has become a major effort for Apple, with several teams collaborating on the project, said the people, who asked not to be identified because the matter is private.” The story says, “Apple’s Ajax system is built on top of Google Jax, the search giant’s machine learning framework. Apple’s system runs on Google Cloud, which the company uses to power cloud services alongside its own infrastructure and Amazon.com Inc.’s AWS.”

 

NVIDIA Aiming To Accelerate AI Infrastructure

AFP (7/18, Jacobsohn) reports NVIDIA “is aiming to accelerate the global infrastructure of AI.” The “emergence of AI is transforming technology and revolutionizing entire industries such as healthcare, finance, manufacturing, transportation and even the media.” NVIDIA’s GPUs “have become instrumental in training and running AI models, enabling breakthroughs in deep learning, computer vision, and natural language processing.”

 

Analysis: Generative AI Technology Likely To Become Greater Tool In Healthcare Sector

STAT (7/20, Trang, Palmer, Subscription Publication) discusses how generative AI technology such as ChatGPT is being used in small ways in the current medical system, but there is “little doubt that such models will have a far bigger footprint in health care going forward.” Researchers have demonstrated that as medical knowledge and information continues to be fed into large language models and other AI technology, it is likely the tech will be used to support physicians on tasks such as notetaking and early forms of diagnostics. However, some observers have also noted that the medical industry needs to have detailed and informed conversations on the role of AI going forward to ensure medical professionals use the technology responsibly and with forethought.

 

OpenAI Backs Idea Of Requiring Licenses For Advanced AI Systems

Bloomberg (7/20, Subscription Publication) reports that “an internal policy memo drafted by OpenAI shows the company supports the idea of requiring government licenses from anyone who wants to develop advanced artificial intelligence systems.” The document also suggests OpenAI “is willing to pull back the curtain on the data it uses to train image generators.” OpenAI “laid out a series of AI policy commitments in the internal document following a May 4 meeting between White House officials and tech executives including OpenAI Chief Executive Officer Sam Altman.” OpenAI said, “We commit to working with the US government and policy makers around the world to support development of licensing requirements for future generations of the most highly capable foundation models.”

        OpenAI Announces New Customizable Instructions For ChatGPT. TechCrunch (7/20, Mehta) reports OpenAI introduced “custom instructions for ChatGPT users, so they don’t have to write the same instruction prompts to the chatbot every time they interact with it – inputs like ‘Write the answer under 1,000 words’ or ‘Keep the tone of response formal.’” OpenAI “said this feature lets you ‘share anything you’d like ChatGPT to consider in its response.’” OpenAI also “said that the company uses its moderation API to scan customized instructions to check if they are unsafe in any nature” and will “refuse to save the instructions or ignore them if the responses resulted by those violate the company’s policy.”

AI Companies To Commit To Safeguards At White House’s Request

Engadget (7/20) reports that “Microsoft, Google and OpenAI are among the leaders in the US artificial intelligence space that will reportedly commit to certain safeguards for their technology on Friday, following a push from the White House.” Engadget says “the companies will voluntarily agree to abide by a number of principles though the agreement will expire when Congress passes legislation to regulate AI.” According to a draft document, “the tech firms are set to agree to eight suggested measures concerning safety, security and social responsibility.”

 

Chatbots Provide Most Helpful Responses When Prompted To “Use Information From Trusted Source”

The New York Times (7/20, Chen) tech columnist Brian Chen says, “After testing dozens of A.I. products over the last two months, I concluded that most of us are using the technology in a suboptimal way, largely because the tech companies gave us poor directions.” Chen explains, “The chatbots are the least beneficial when we ask them questions and then hope whatever answers they come up with on their own are true, which is how they were designed to be used.” Actually, “when directed to use information from trusted sources, such as credible websites and research papers, A.I. can carry out helpful tasks with a high degree of accuracy.” Chen shares with readers “some of the approaches I used to get help with cooking, research, and travel planning.” Chen recommends the use of third-party plug-ins “to fixate on trusted sources and quickly double-check the data for accuracy.”

 

Senators Seek To Tack On AI Provisions In Sweeping Defense Bill

The Washington Post (7/20) reported in Thursday’s “The Technology 202” newsletter, “As the Senate ramps up its consideration of a sprawling defense bill this week, lawmakers are readying a flurry of bills on artificial intelligence, social media oversight and other prominent tech issues that they are hoping can hitch a ride on the package.” The Post calls “the massive defense bill...one of the most effective” vehicles “for lawmakers to get their bills over the finish line on Capitol Hill, where efforts to move broader packages on data privacy, competition and AI have languished.” Senate Majority Leader Chuck Schumer (D-NY), for example, wants “the inclusion of several AI provisions in his manager’s amendment, which is more likely to become law than most of the changes lawmakers have proposed.”

 

How K-12 Educators, Experts Are Responding To Generative AI Tools

In her newsletter for The Hechinger Report (7/20), Javeria Salman “spoke with experts and educators in K-12 to see what they think” about generative AI tools. Jeremy Roschelle, “an executive director at education nonprofit Digital Promise and the lead researcher on a new report on the topic developed under contract with the Department of Education’s Office of Educational Technology, recommends that schools and educators spend the upcoming school year in a phase of cautious exploration of generative AI.” It’s a sentiment “echoed by Richard Culatta, CEO of ISTE,” as what schools “need to do, he said, is provide teachers with a better understanding of what AI is and share examples of how to use it.” He suggested that instead of trying to make policies or decisions, “Just dedicate the time to exploring what it can do, what it can’t do.”

 

Biden Pledges Administration Will Remain Vigilant On AI Development As Tech Companies Pledge To Heed Voluntary Safeguards

Bloomberg (7/21, Sink, Edgerton, Subscription Publication) reported President Biden on Friday “said the US must guard against threats from artificial intelligence as he detailed new company safeguards and promised additional government actions on the emerging technology.” Bloomberg adds executives from Amazon, Alphabet, Meta, Microsoft, OpenAI, Anthropic, and Inflection AI joined Biden for the White House announcement and all “committed to adopting transparency and security measures” for developing their AI systems. In another article, Bloomberg (7/21, Sink, Edgerton, Subscription Publication) points out “many” of these executives previously “attended a meeting with Biden and Vice President Kamala Harris in May, where the administration warned the industry it was responsible for ensuring the safety of its technology.”

        The Washington Post (7/21, A1, Zakrzewski) reported Biden “has witnessed a slew of innovations” over his “50-year Washington political career,” but the AI advances “have stunned the seasoned president,” which is why Biden’s White House on Friday “took its most ambitious step to date to address the safety concerns and risks of artificial intelligence.” The Post says that along with announcing the companies had signed on to the “voluntary pledge to mitigate the risks of the emerging technology,” the President “said the administration is developing an executive order focused on AI,” further “escalating the White House’s involvement in an increasingly urgent debate over AI regulation.” The New York Times (7/21, A1, Shear) reported White House officials “offered no details” on the executive order, but the order “is expected to involve new restrictions on advanced semiconductors and restrictions on the export of the large language models.”

        Meanwhile, Fox News (7/21, Kasperowicz) reported Biden “also said those actions won’t end the need for Congress to pass AI legislation.” Fox News adds that the President thanked Senate Majority Leader Schumer and House Minority Leader Jeffries “for making AI regulation a priority, although Congress has yet to pass anything close to a sweeping bill imposing rules on the emerging sector.” Roll Call (7/21, Ratnam) reported Schumer “welcom[ed] the voluntary commitments,” but Politico (7/21, Chatterjee) reports it is “unclear if Congress will pass any AI legislation this session, meaning the White House’s AI nonbinding and voluntary guidelines stand as the primary guidance on addressing some of the broader concerns surrounding the technology.”

        The AP (7/21, O'Brien, Miller) reported that while the “voluntary commitments” do not “detail who will audit the technology or hold the companies accountable,” the White House said the companies agreed to “security testing ‘carried out in part by independent experts’ to guard against major risks, such as to biosecurity and cybersecurity,” and “will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or ‘self-replicate’ by making copies of themselves.” The AP adds the companies “committed to methods for reporting vulnerabilities to their systems and to using digital watermarking to help distinguish between real and AI-generated images or audio known as deepfakes.” According to Reuters (7/21, Bartz, Hu), the voluntary commitments are “seen as a win for the Biden administration’s effort to regulate the technology, which has experienced a boom in investment and consumer popularity,” but Engadget (7/21, Holt) says the lack of an enforcement mechanism “underscores the difficulty that lawmakers have in keeping up with the pace of AI developments.”

        Survey: Just 27% Of Americans Comfortable With Efforts To Advance AI. CNBC (7/21, Liesman) reports that its latest All-America Economic survey of 1,000 adults (7/12-7/16) “found that just 27% say they are comfortable with efforts ‘underway to develop computer programs that can mimic human thinking and possibly replace human activity in a number of areas,’” while 69% “say they are very or somewhat uncomfortable, a 10-point jump from when the survey last asked the question in 2016.” In addition, the survey found 66% “are uncomfortable with AI in customer service; 65% in medical diagnosis and 76% when it comes to self-driving cars.”

        Google Co-Founder Returns To Develop Its AI System. The Wall Street Journal (7/21, A1, Kruppa, Seetharaman, Subscription Publication) reports Google co-founder Sergey Brin is working alongside AI researchers at the company’s headquarters who are building its long-awaited Gemini AI system.

 

Instructors Hoping AI Can Help Medical Students Make Diagnoses

The New York Times (7/22, Kolata) reported, “Artificial intelligence is transforming many aspects of the practice of medicine, and some medical professionals are using these tools to help them with diagnosis.” Physicians “at Beth Israel Deaconess, a teaching hospital affiliated with Harvard Medical School, decided to explore how chatbots could be used – and misused – in training future doctors.” Instructors are hopeful that “medical students can turn to GPT-4 and other chatbots for something similar to what doctors call a curbside consult – when they pull a colleague aside and ask for an opinion about a difficult case. The idea is to use a chatbot in the same way that doctors turn to each other for suggestions and insights.”

 

OpenAI CEO Altman Launches Worldcoin Cryptocurrency Project

Reuters (7/24) reports, “Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.” Worldcoin’s “core offering is its World ID, which the company describes as a ‘digital passport’ to prove that its holder is a real human, not an AI bot.” The project says these IDs will be needed to distinguish between real people and AI bots online as AI chatbots become more sophisticated. Altman told Reuters that the project also can help deal with how AI can reshape the economy, such as through reducing fraud in social benefits programs.

 

Educators Explain How A Georgia High School Is Using AI In Every Subject

Education Week (7/24, Klein) reports as schools “grapple with how to help students and teachers grasp the implications of rapidly developing artificial intelligence technology, Seckinger High School is developing a roadmap.” The public high school in Gwinnett County, Georgia, is one of several that have “made teaching AI part of its mission – not just in a one-off class or two – but infused in every subject, from language arts to social studies to English classes for non-native speakers.” The school’s principal and three teachers “sat down with Education Week as part of an online forum on AI to talk about the school’s AI focus.” In the interview, computer science teacher Jason Hurd said that in his artificial intelligence pathway class, “We’re doing ethics, the history and evolution of AI, the programming involved in creating machine learning models. We also offer things that are very much AI based in environmental engineering, mechanical engineering classes, regular computer science classes, ... but we have a certain spin on it at Seckinger in order to have that AI flavor.”

 

Anthropic CEO Warns Senators Of Threat From AI-Driven Weapons

Reuters (7/25, Bartz) reports that both Democratic and Republican senators “expressed alarm on Tuesday about the potential for a malevolent use of artificial intelligence, focusing on the possibility of AI being used to create a biological attack.” In a hearing before a Senate Judiciary Committee subcommittee, Dario Amodei, chief executive of the AI company Anthropic, “said that AI could help otherwise unskilled malevolent actors develop biological weapons.” He explained that “certain steps in bioweapons production involve knowledge that can’t be found on Google or in textbooks and requires a high level of expertise. We found that today’s AI tools can fill in some of these steps.” Subcommittee chair Richard Blumenthal (D-CT) expressed alarm, saying, “The goal for this hearing is to lay the ground for legislation. To go from general principles, to specific recommendations. To use this hearing to write laws.” Sen. Josh Hawley (R-MO) meanwhile called for safeguards “that will ensure this new technology is actually good for the American people.” Bloomberg (7/25, Edgerton, Seddiq, Subscription Publication) also reports.

 

New Walled-Garden AI Model Aims To Solve ChatGPT’s Pitfalls In Education

Education Week (7/25, Klein) reports ChatGPT “pulls from nearly every imaginable source on the internet, even if much of the content is not accurate or produced by a reputable source,” which means it’s “not exactly the kind of technology that will help earn educators’ trust.” Enter a “new, more focused version of the technology that some call ‘walled garden’ AI and its close cousin, carefully engineered chatbots.” Instead of absorbing “large swaths of the internet and treating it all somewhat similarly, they generate feedback based on a more limited database of information that their creators deem reliable.” The International Society for Technology in Education, a nonprofit, is working to develop Stretch, “a chatbot trained only on information that was created or blessed by the two professional development organizations” to create the walled-garden model that “can also cite its sources, giving users’ a digital trail to follow in gauging its accuracy.”

 

Researchers Share How AI Can Help Mitigate Climate Change And Preserve The Environment

ABC News (7/26, Jacobo) reports AI is having “an increased presence in automating how human beings live their lives – and it’s set to play a pivotal role in the herculean efforts to mitigate climate change and preserve the environment, experts say.” As it advances, one of AI’s positive applications “is it can be used to compile, compute and analyze data that previously would have taken humans an insurmountable amount of manpower in minutes, experts told ABC News.” Ways that scientists “say AI will help efforts to save the environment” include detecting permafrost melting in the Arctic. Google has granted $5 million to the Woodwell Climate Research Center “to support the development of an open-access resource that will allow residents in the Arctic to track permafrost thaw in real-time.”

Survey: Recent College Graduates Are Concerned About AI’s Impact On Workforce

Inside Higher Ed (7/26, Coffey) reports more than “half of recent graduates question whether they are properly prepared for the workforce in light of the rise of artificial intelligence, a survey finds.” In addition to the 52 percent “who question their preparedness, 46 percent feel threatened by the new technology, according to Cengage Group’s ‘2023 Graduate Employability Report.’” This year’s survey of 1,000 recent graduates “added new questions about AI, exploring its effect on hiring, workforce readiness and a shift to skills-based hiring.” While 55 percent of graduates “said AI could never replace their jobs, in contrast, 57 percent of employers said entry-level jobs, or even entire teams, could be replaced by leveraging AI.”

        Pew Survey: Many Workers Don’t Believe AI Is An Urgent Threat To Their Jobs. The Washington Post (7/26, Telford) reports, “Many workers who are most exposed to AI don’t feel that the technology presents a risk to their jobs, according to fresh data from Pew Research Center, a finding that contrasts with experts’ warnings that massive workplace upheaval is coming.” According to the survey, “more than 30 percent of workers in information and technology said AI will help more than hurt them personally in the next 20 years.” The report also found that while 11 percent of these workers “said AI will hurt them more than it helps,” workers in various industries “had notably different views on this question.” The Pew study “is the latest data shedding light on how U.S. employees are perceiving the threat of technology one day taking their jobs.”

        Report: More Women Stand To Lose Their Jobs Than Men By 2030 Due To AI. The Washington Post (7/26, Timsit) reports more women than men “stand to lose their jobs by the end of the decade because of the rise of artificial intelligence and automation, according to a new report by the McKinsey Global Institute.” The report “finds that nearly a third of hours worked in the United States could be automated by 2030.” Industries expected to be most impacted by automation are office support, food services, and customer service and sales, while women “are overrepresented in these sectors – and hold more low-paying jobs than men – so they stand to be more affected, the report finds.” It also finds that, “by 2030, at least 12 million workers will need to change jobs as the industries in which they work shrink.”

Companies Announce Launch of Generative AI Safety Standards Body

The Washington Post (7/26, Zakrzewski, Tiku) reports that OpenAI, Microsoft, Google, and Anthropic have announced the launch of the Frontier Model Forum, an industry-led body dedicated to developing safety standards for generative AI. The companies say the Forum will advance AI safety research and technical evaluations for next-generation AI systems, and also serve as an information-sharing hub regarding AI risks. The move “is the latest sign of companies racing ahead of government efforts to craft rules for the development and deployment of AI, as policymakers in Washington begin to grapple with the potential threat of the quickly emerging technology. But some argue this industry-involvement presents its own risks. For years policymakers have said that Silicon Valley can’t be trusted to craft its own guardrails.” CNBC (7/26, Feiner) reports, “The new group underscores how, until policymakers come up with new rules, the industry will likely need to continue to police themselves.” Reuters (7/26, Staff) reports that the Forum “will not engage in lobbying with governments, an OpenAI spokesperson said.”

OpenAI Temporarily Removes AI-Generated Text Detector Due To Low Accuracy

Insider (7/26, Teo) reports, “OpenAI, the creator of viral chatbot ChatGPT, has quietly scuppered a tool that detects AI-generated text, citing accuracy concerns.” The company said in a blog post, “As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy...We are working to incorporate feedback and are currently researching more effective provenance techniques for text.” Insider also reports on complaints regarding Turnitin, a plagiarism detector used by schools, which “incorrectly identified over half of the text fed into it” during a recent study conducted by the Washington Post.

Review Finds Most States Have Yet To Guide Schools On Using AI In Teaching, Learning

The Seventy Four (7/26) reports as it advances, “most state education departments have not publicly acknowledged this new breed of AI, or the considerations for using it in teaching and learning, according to a national review by the Center on Reinventing Public Education.” CRPE’s data “presents an early picture of the AI education landscape and issues states and districts may face in 2023-24.” It found that “apart from the Hawaii state Department of Education calling for a working group to recommend uses of artificial intelligence and assistive technology in the upcoming school year, none of the other 58 departments appear to have mentioned AI in a policy context.” Additionally, “at least four states said they’d leave the matter up to individual districts,” while “at least two other state education departments have discussed AI with their respective boards.”

Daniel Tauritz

Aug 5, 2023, 1:33:03 PM
to ai-b...@googlegroups.com

Llama, ChatGPT Are Not Open-Source
IEEE Spectrum
Michael Nolan
July 27, 2023


Researchers at the Netherlands' Radboud University assessed 21 large language models (LLMs) ostensibly designated as open source, and learned most models' openness is more limited than purported. The researchers found OpenAI's ChatGPT scored worst for openness, labelling it "closed" in all evaluations except for model card and preprint, which received "partial" status. Meta's Llama 2 is the second-worst-scoring LLM, despite the social media company's claim the release aimed to make the model "accessible to individuals, creators, researchers, and businesses so they can experiment, innovate, and scale their ideas responsibly." Although several smaller, research-oriented models were found to be more open than ChatGPT or Llama 2, the researchers said few disclosed reinforcement learning with human feedback functions in sufficient detail, and most models were not peer reviewed.

Full Article

 

Rise of AI Newsbots Shakes Up India's Media Landscape
Nikkei Asia (Japan)
Neeta Lal
July 30, 2023


Indian broadcaster Odisha TV's July debut of an artificial intelligence (AI) newscaster named Lisa has provoked debate about the future of India's media. Odisha TV's Jagi Mangat Panda said the AI-powered anchor performs repetitive tasks so staff can "focus on doing more creative work to bring better quality news." Government website INDIAai says an AI anchor "collects, tracks, and categorizes what is said and who said it, and then converts that data into usable and actionable information." Production managers say AI anchors benefit the sector by reducing costs, enabling multilingual news delivery, and speedily crunching massive datasets; an anonymous TV producer added that newsbots reduce the potential for ego-fueled disruptions typical of celebrity anchors. Critics counter that AI could erode media credibility since bots lack human journalists' observational expertise and experience.

Full Article

 

 

Complex-Domain Neural Network Advances Large-Scale Coherent Imaging
SPIE
July 27, 2023


A team of researchers from China's Beijing Institute of Technology, the California Institute of Technology, and the University of Connecticut has augmented large-scale coherent imaging with a complex-domain neural network. The researchers integrated a two-dimensional complex convolution unit and complex activation function into a network that generates multidimensional representations of the complex wavefront. They also created a multi-source noise model that advances domain-adaptation ability from synthetic to real data. The researchers found the approach reduces exposure time and data volume by an order of magnitude without hindering efficiency or reconstruction quality.
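For readers unfamiliar with complex-valued layers, the sketch below shows one way a 2D complex convolution can be composed from real-valued convolutions, with a "CReLU"-style activation assumed as one common choice; it illustrates the building block, not the published network.

```python
# Toy complex convolution: (a+bi) * (w+vi) = (aw - bv) + i(av + bw),
# built from real-valued 2D convolutions, followed by a CReLU-style activation
# (applied separately to real and imaginary parts), assumed here for illustration.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
field = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))   # complex wavefront
kernel = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))      # complex filter

def complex_conv2d(x, k):
    a, b = x.real, x.imag
    w, v = k.real, k.imag
    real = convolve2d(a, w, mode="same") - convolve2d(b, v, mode="same")
    imag = convolve2d(a, v, mode="same") + convolve2d(b, w, mode="same")
    return real + 1j * imag

def crelu(z):
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

out = crelu(complex_conv2d(field, kernel))
# sanity check: the split-real version matches scipy's native complex convolution
assert np.allclose(complex_conv2d(field, kernel), convolve2d(field, kernel, mode="same"))
print(out.shape, out.dtype)
```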

Full Article

 

 

Researchers Leverage AI to Fight Online Hate Speech
University of Michigan Computer Science and Engineering
July 28, 2023


Researchers at the University of Michigan (U-M) worked with colleagues at Microsoft to create a tool for detecting online hate speech by combining deep learning models and traditional rules-based strategies. The Rule By Example (RBE) approach "pairs logical rules that are very explainable with [hate speech exemplars] and then encodes and learns them," said U-M's Christopher Clarke. RBE can accurately forecast and categorize online hate speech using rule and text encoders to learn sound, accurate representations of hateful content and their underlying rules. The framework also builds in transparency by allowing users to view the factors shaping the model's precision. RBE was 2% more accurate than the closest rival classifier.
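To make the pairing of rules and exemplars concrete, here is a toy sketch, assuming keyword rules and TF-IDF similarity in place of RBE's learned encoders: a text is flagged only when it both triggers a rule and is sufficiently similar to that rule's exemplars.

```python
# Illustrative sketch only (not the RBE implementation): pair explainable rules
# with exemplar texts; flag a text when it triggers a rule AND resembles that
# rule's exemplars. TF-IDF similarity stands in for a learned text encoder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rules = {
    "dehumanizing-comparison": {
        "keywords": ["vermin", "animals"],
        "exemplars": ["those people are vermin", "they behave like animals"],
    },
}

corpus = [ex for r in rules.values() for ex in r["exemplars"]]
vec = TfidfVectorizer().fit(corpus)

def classify(text, threshold=0.3):
    for name, rule in rules.items():
        if any(k in text.lower() for k in rule["keywords"]):
            sims = cosine_similarity(vec.transform([text]), vec.transform(rule["exemplars"]))
            if sims.max() >= threshold:
                return f"flagged (rule: {name}, similarity {sims.max():.2f})"
    return "not flagged"

print(classify("they are nothing but vermin"))
print(classify("I volunteer at the animal shelter"))
```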

Full Article

 

 

Indian Startup Wants to Reward Workers Behind AI
Time
Billy Perrigo
July 27, 2023


Nonprofit Indian startup Karya, which describes itself as “the world’s first ethical data company,” aims to remunerate workers who gather the data its major clients purchase to build out their artificial intelligence (AI) tools. Karya covers its costs with some of this cash, channeling the rest to India's rural poor and granting employees “de-facto ownership” of the data they produce on the job, in addition to their $5 hourly wage. Such data includes voice clips spoken in "lower resourced" native languages, rarely accommodated by cutting-edge AI, to address healthcare and other inequities while giving Indians a supplementary income. The Bill and Melinda Gates Foundation has commissioned Karya to build voice datasets in the Indian dialects of Marathi, Telugu, Hindi, Bengali, and Malayalam, in order to build a chatbot that can answer rural Indians' questions in their native tongues.

Full Article

 

 

First Demonstration of ML Model Training in Outer Space
University of Oxford Department of Computer Science (U.K.)
July 28, 2023


Scientists in the U.K. working with colleagues at artificial intelligence developer Trillium Technologies, Italy-based aerospace company D-Orbit, and the European Space Agency trained a machine learning model in outer space. In Autumn 2022, the researchers uplinked the code for the model to the ION SCV004 satellite in Earth orbit, teaching it to identify changes in cloud cover from aerial images onboard the satellite. The researchers said the model can be adapted to carry out different tasks easily; it also can use other forms of data.

Full Article

 

 

Reinforcement Learning Allows Underwater Robots to Locate, Track Objects
Institut de Ciències del Mar (Spain)
July 27, 2023


Researchers in Spain and at California's Monterey Bay Aquarium Research Institute trained autonomous vehicles and underwater robots to locate and track marine animals and objects via reinforcement learning. The researchers used various acoustic methods to estimate objects' positions, adding reinforcement learning to ascertain the robot's optimal trajectory for locating and tracking them. They partly trained the reinforcement learning networks with the Barcelona Supercomputing Center's computer cluster, which "made it possible to adjust the parameters of different algorithms much faster than using conventional computers," said Mario Martin at Spain's Universitat Politècnica de Catalunya. The researchers tested the algorithms on a variety of autonomous vehicles.

Full Article

 

Large Language Models Leading To “Quiet Revolution” In Robotics

The New York Times (7/28, A1, Roose) reports, “A quiet revolution is underway in robotics, one that piggybacks on recent advances in so-called large language models.” Google “has recently begun plugging state-of-the-art language models into its robots, giving them the equivalent of artificial brains. The secretive project has made the robots far smarter and given them new powers of understanding and problem-solving.” The Times adds, “Robots still fall short of human-level dexterity and fail at some basic tasks, but Google’s use of A.I. language models to give robots new skills of reasoning and improvisation represents a promising breakthrough, said Ken Goldberg, a robotics professor at the University of California, Berkeley. ‘What’s very impressive is how it links semantics with robots. ... That’s very exciting for robotics.’”

Machine Learning Researchers Warn AI-Generated Data Can Create Errors In Future AI Models

Scientific American (7/28, Rao) reported experts are concerned that as more AI-generated content is published on the Internet, it “may soon enter the data sets used to train new models to respond like humans” and could “inadvertently introduce errors that build up with each succeeding generation of models.” University of Edinburgh computer scientist Rik Sarkar said, “While it may not be an issue right now or in, let’s say, a few months, I believe it will become a consideration in a few years.” University of Oxford machine learning researcher Ilia Shumailov said that some tests have shown that after training a model on several generations of AI-generated content, “It gets to a point where your model is practically meaningless.”
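The mechanism is easy to see in a toy setting; the numpy sketch below, a deliberately simplified Gaussian stand-in for a generative model, refits a model each generation only to samples drawn from the previous generation's model, so estimation error compounds instead of averaging out.

```python
# Toy illustration of recursive training on model-generated data: each
# generation is fit only to samples from the previous generation's model, so
# sampling error compounds across generations instead of washing out. A
# Gaussian stands in for the generative model here, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=25)      # generation 0: "human" data

for gen in range(1, 31):
    mu, sigma = data.mean(), data.std()             # "train" a model on current data
    data = rng.normal(mu, sigma, size=25)           # next generation sees only model output
    if gen % 5 == 0:
        print(f"generation {gen:2d}: fitted mean {mu:+.2f}, std {sigma:.2f}")
```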

Tension Grows Between AI Companies, Publishers Over Online Data Access

The Wall Street Journal (7/30, Seetharaman, Hagey, Subscription Publication) reports new generative artificial intelligence tools have created tensions among news publishers, authors, and other online content writers as tech companies invest in the tools. The debates could lead to new limits or higher costs to access data, as well as lawsuits that could force companies to adopt additional licensing measures for data-collection purposes.

Apple Executives Discuss AI Differently Than Other Big Tech Companies

CNBC (7/28, Leswing) reported that while companies such as Alphabet, Microsoft, and Meta frequently mentioned artificial intelligence during recent earnings calls, Apple has barely used the term. Apple’s executives have more often used “the phrase ‘machine learning,’ which is more popular with academics and practitioners.” CNBC says Apple’s “approach to AI as a core underlying component instead of the future of computing represents a way to present the technology to its consumers.” The company presents AI as an element working in the background, and “doesn’t yell about it the way some of the other companies do because it doesn’t need to.”

Microsoft Warns Of Risk Of Service Disruptions If It Cannot Get Enough AI Chips For Data Centers

CNBC (7/28, Novet) reported, “Microsoft is emphasizing to investors that graphics processing units are a critical raw material for its fast-growing cloud business.” The company said in an annual report released on Thursday that access to GPUs is a possible risk factor for opportunities to expand data centers and server capacity. CNBC said that GPUs were not listed as a risk factor in previous annual reports released by Microsoft.

Bipartisan Bill On Federal AI Research Introduced

FedScoop (7/28, Alder) reports that a bipartisan, bicameral bill introduced Friday “would establish a federal resource aimed at improving access to the computational power needed for AI research as interest in the technology booms.” The new legislation in the “House and Senate would create the National Artificial Intelligence Research Resource (NAIRR), a ‘national research infrastructure’ that would give researchers access to data and tools needed to create trustworthy artificial intelligence.”

        Senate Homeland Security Committee Passes AI Leadership Bill. ExecutiveGov (7/28, Cooper) reports the Senate Committee on Homeland Security and Governmental Affairs has “passed legislation that would establish artificial intelligence leadership across the federal government.” The AI Leadership to Enable Accountable Deployment Act would “create a chief AI officer role at each federal agency to develop and implement policies relating to the design, acquisition, use, risk management and performance of AI technologies, the HSGAC said Thursday.” AI LEAD Act would also “establish a Chief AI Officers Council to ensure interagency coordination on AI activities and facilitate the sharing of best practices for using the technology across the federal government.”

Survey: Most Students Consider Using AI For Schoolwork As Cheating

Education Week (7/28, Langreo) reported that “more than 4 in 10 teens are likely to use artificial intelligence to do their schoolwork instead of doing it themselves this coming school year, according to a new survey.” But according to the survey conducted by research firm Big Village for the nonprofit Junior Achievement, “60 percent of teens consider using AI for schoolwork as cheating.” When asked “why they would use AI to do their schoolwork for them, the top response in the Junior Achievement survey was that AI is just another tool (62 percent).” Experts shared examples of ways that educators can “incorporate AI use into their lessons, guard against cheating, and teach students to use it as a helper.” For example, creating assignments “that are impossible to complete with these tools, such as assignments about very recent news events or about the local community.”

Some Professors Are Developing Courses On Navigating ChatGPT

Inside Higher Ed (7/31, Coffey) reports that a small number of professors are launching courses “focused solely on teaching students across disciplines to better navigate AI and ChatGPT.” The offerings “go beyond institutions flexing their innovation skills – the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.” For example, Vanderbilt University “dipped its toe into the ChatGPT pool in May with its Prompt Engineering for ChatGPT course on online course provider Coursera.” The university is now launching “an undergraduate course on generative AI that is open across majors.” However, many universities “find the idea of balancing their course load while monitoring developing technology daunting, according to Derek Bruff, visiting associate director at the Center for Excellence in Teaching and Learning at the University of Mississippi. He suggested, in those cases, turning toward outside experts for guidance.”

Experts: Impact Of AI Rollout In Workplaces “Unknown”

CNBC Share to FacebookShare to Twitter (7/31, Iacurci) reports “experts said” artificial intelligence will “undoubtedly” have a similar impact on the workplace to “robots and automation,” but “it’s likely such tech will target a different segment of the American workforce than has been the case in the past.” Rakesh Kochhar, an expert on employment trends and a senior researcher at Pew Research Center, explained AI “is reaching up from the factory floors into the office spaces where white-collar, higher-paid workers tend to be.” However, Kochhar added, “Will it be a slow-moving force or a tsunami? That’s unknown.” Meanwhile, Cory Stahle, an economist at Indeed, said AI “certainly...could [cause] some [job] displacement,” but also “open new occupations we don’t even know about yet.”

Google Plans Generative AI-Based Overhaul Of Assistant

Axios Share to FacebookShare to Twitter (7/31, Fried) reports Google is planning “to overhaul its Assistant to focus on using generative AI technologies similar to those that power ChatGPT and its own Bard chatbot, according to an internal e-mail sent to employees Monday and seen by Axios.” The move will shift “how Assistant works for consumers, developers and Google’s own employees, with the company – for now – supporting both new and old approaches.” TechCrunch Share to FacebookShare
to Twitter (7/31, Coldewey) reports the internal company email said Assistant team leads “see a huge opportunity to explore what a supercharged Assistant, powered by the latest LLM [large language model] technology, would look like.”

        According to The Verge Share to FacebookShare to Twitter (7/31, Roth), “While Google doesn’t elaborate on what kinds of features it plans on bringing to Assistant, there are some pretty big possibilities. For example, Assistant could tap into the same technology that powers its AI chatbot, Bard, possibly allowing it to answer questions based on the information it gleans from across the web.”

Instagram Reportedly Developing User-Facing AI Features

ZDNet Share to FacebookShare to Twitter (7/31) reports that while most social media companies have implemented AI tools to help businesses create posts, “screenshots shared by app researcher Alessandro Paluzzi on Twitter suggest that Meta is taking a different approach with Instagram and using AI to develop several features that would directly impact user experience on the app.” According to the information shared by Paluzzi, “Instagram is working on labels that help users distinguish between AI-generated and real photos, a feature that could have a significant impact on the user experience, as well as help curb misinformation.” Instagram is also working on a tool called Restyle, which “would let users transform their images into any visual style they prompt,” while another tool could “add or replace specific parts” of an image.

Dell To Offer Generative AI Hardware Solutions For Clients

The Verge Share to FacebookShare to Twitter (7/31, David) reports, “Dell, the PC maker, is going all in on generative AI and offering hardware to run powerful models and a new platform to help organizations get started.” Dell “released what it calls Dell Generative AI Solutions for clients to set up access to large language models and create generative AI projects.” Dell Co-COO Jeff Clark said in a statement, “Generative AI represents an inflection point that is driving fundamental change in the pace of innovation while improving the customer experience and enabling new ways to work.”

Opinion: Business Leaders Should Consider Reskilling To Embrace Shifts In AI And The Workforce

In an opinion piece for Forbes Share to FacebookShare to Twitter (8/1), WorldQuant CEO Daphne Kis writes that “as machine learning and AI permeate everyday life, we could eventually see another generational shift in the workforce: a tidal wave of young adults who have grown up using AI to get things done.” This comes as schools are increasingly recognizing ChatGPT and other similar programs by “beginning to creatively integrate AI into existing lesson plans by teaching students to use it for research, critical thinking skills and more engaging discussion practices.” While generative AI “will not be the job destroyer that many feared,” it creates “an urgent need for businesses to update their approaches to how they prepare their employees for the future.” Kis concludes that business leaders should “draw from myriad reskilling case studies to accelerate the buildout of their own programs as the AI era sets in.”

Biden Administration Weighing Proposal To Regulate AI

The Wall Street Journal Share to FacebookShare to Twitter (8/1, Siddiqui, Subscription Publication) reports that a senior White House adviser says President Biden views artificial intelligence “with wonder and worry.” The Journal reports that the President has had many closed-door discussions on AI as his Administration prepares a proposal to regulate the technology. But, aides recognize Biden’s ability to act unilaterally is limited and the Journal says Congress is unlikely to act on AI legislation ahead of the presidential election.

Experts Discuss Whether Chatbot Tendency To Create False Information Is Fixable

The AP Share to FacebookShare to Twitter (8/1, O'Brien) reports that as more organizations use artificial intelligence chatbots, the technology’s tendency to produce incorrect information is “now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done.” Anthropic Co-Founder and President Daniela Amodei said current chatbot models are “really just sort of designed to predict the next word...And so there will be some rate at which the model does that inaccurately.” University of Washington Computational Linguistics Laboratory Director Emily Bender said, “This isn’t fixable...It’s inherent in the mismatch between the technology and the proposed use cases.”

Research Suggests AI Could Assist In Breast Cancer Detection

The Hill Share to FacebookShare to Twitter (8/2, Weixel) reports an AI analysis “of mammograms detected more cancers than two breast radiologists working together, according to a new study, without increasing false positives and almost halving the radiologists’ workload.” Interim findings “from the first randomized study investigating the use of AI in a national breast cancer screening program, published in the journal The Lancet Oncology, suggested AI-supported screening detected 20 percent more cancers compared with the routine double reading of mammograms by two breast radiologists.” European guidelines “recommend double reading of screening mammograms to ensure high sensitivity.” The US “does not have the same standard but like many other countries is experiencing a shortage of breast radiologists.”

Google’s AI-Powered Search Will Include Videos To Answer Users’ Questions

CNET News Share to FacebookShare to Twitter (8/2) reports, “Google’s Search Generative Experience, an experimental AI-powered version of Search, is getting some updates over the next week, including results with integrated videos.” When users search for information using SGE, Google’s AI engine will also include relevant images or videos in addition to a text description. Google’s SGE “lists the sources it’s pulling information from” when it generates text, and will also now list sources for images or video provided.

Generative AI Increases Capabilities Of Hackers, Cybersecurity Experts

CNBC Share to FacebookShare to Twitter (8/2, Caminiti) reports generative AI is increasing the capabilities of both hackers and cybersecurity professionals. While AI has made it possible to create more authentic looking phishing attacks and to let hackers “move faster and with greater scale,” the technology also lets companies automate cybersecurity defenses in order to respond to attacks faster. BitSight Co-Founder and CTO Stephen Boyer said, “AI makes the bad attacker a more proficient attacker, but it also makes the OK defender a really good defender.” Collin R. Walke, the head of cybersecurity and data privacy practice at the Hall Estill law firm, warned, “We still have a lot of people in AI companies around the world that are going to continue to abuse the system, that are going to continue to develop the technology without adequate legal or ethical rules in place.” He recommends CISOs work closely with company boards, CEOs, and chief risk officers to decide how and when AI is deployed.

Several Leading Universities Partner With EdX To Offer New AI Boot Camp

Forbes Share to FacebookShare to Twitter (8/3, Nietzel) reports edX, “the online learning platform from 2U, Inc.” is partnering with several leading universities “to launch a new Artificial Intelligence Boot Camp. According to edX, the online program is designed for learners with little to no prior technical AI training who want to build the AI skills necessary for entry-level technical positions.” Classes will begin in September 2023, at schools including Michigan State University and the University of Denver, while schools such as the University of Utah “will begin their first AI Boot Camp cohorts in November.” The part-time, “24-week boot camp program will include instruction and practice in several content areas, including data science, machine learning, and AI.” Curriculum will cover “technologies and basic concepts such as Python, transformers,” and AI applications, among others.

Computer Scientists Exemplify How Higher Ed Can Integrate AI Into Student Learning

The Chronicle of Higher Education Share to FacebookShare to Twitter (8/2, Hicks) reported that to computer scientists, “the rise of artificial intelligence is no different than the advent of the pocket calculator or the Google search engine: It’s a tool that, if used correctly, can help people learn faster and think on a deeper level.” As colleges prepare for a year “in which ChatGPT and similar programs will become increasingly pervasive, the field of computer science offers a model for how higher ed might integrate artificial intelligence into learning.” Experts also say computer-science professors “should collaborate across disciplines – and connect with their departmental colleagues – to understand and respond to the technology’s pitfalls.” For example, Purdue associate professor Bruno Ribeiro “gives students unique coding problems that seem simple on the surface but have slight variations that often trip AI up. He then has students identify where the program went wrong and fix the code.”

AI’s Accuracy Struggles Seen As Verging On Libel

The New York Times Share to FacebookShare to Twitter (8/3, Hsu) reports, “Artificial intelligence’s struggles with accuracy are now well documented,” and though “the harm is often minimal,” sometimes “the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse.” The Times says, “Legal precedent involving artificial intelligence is slim to nonexistent. The few laws that currently govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court.”

Young Says Senators Considering Leveraging Existing Agencies To Regulate AI

Politico Share to FacebookShare to Twitter (8/3, Overly) reports Sen. Todd Young (R-IN), a member of Majority Leader Schumer’s AI regulation policy group, “told POLITICO he doesn’t expect the U.S. will need sweeping legislation to mitigate the technology’s risks,” adding, “We’re probably not going to have to ban a bunch of things that aren’t currently banned.” Young instead “anticipates the Senate will equip federal agencies with the people and other resources needed to implement laws already on the books,” which he says is “going to require ongoing vigilance from the agencies.” However, according to Politico, Young added that “the debate between creating a new agency” to regulate AI “or leaning on existing ones has yet to be settled.”

dtau...@gmail.com

unread,
Aug 14, 2023, 8:23:10 AM8/14/23
to ai-b...@googlegroups.com

Nvidia Unveils Faster Chip to Cement AI Dominance
Bloomberg
Ian King
August 8, 2023


Nvidia unveiled the Grace Hopper Superchip (GH200) at the ACM SIGGRAPH conference to solidify the technology company's lead in the artificial intelligence (AI) accelerator market with updated speed and capacity. Nvidia said the combined graphics chip/processor's high-bandwidth memory 3 (HBM3e) can access data at 5 terabytes per second, and will go into production in the second quarter of next year. CEO Jensen Huang envisions accelerator chips supplanting conventional datacenter equipment, with the GH200 forming the core of a new server architecture that can accommodate and more quickly access greater volumes of information. Nvidia said deploying two Superchips together in servers offers more than 3.5 times the capacity of a current model.

Full Article

*May Require Paid Registration

 

 

Incorporating Human Error into Machine Learning
University of Cambridge (U.K.)
August 10, 2023


Scientists at the U.K.'s University of Cambridge, The Alan Turing Institute, Princeton University, and Google DeepMind are incorporating uncertainty into machine learning (ML) systems. The researchers utilized established image classification datasets so humans could supply feedback and rate their uncertainty level when annotating specific images. They learned the systems can handle uncertain feedback better when training with uncertain labels, although their overall performance degrades rapidly with human feedback. Cambridge's Matthew Barker said, "We're trying to bridge [behavioral research and ML] so that machine learning can start to deal with human uncertainty where humans are part of the system."

Full Article

 

 

White House Launches AI-Based Contest to Secure Government Systems from Hacks
Reuters
Zeba Siddiqui
August 9, 2023


The White House announced the launch of a competition to encourage the use of artificial intelligence (AI) to pinpoint and correct vulnerabilities in U.S. government infrastructure. The Defense Advanced Research Projects Agency (DARPA) will administer the two-year contest, which offers about $20 million in prizes; leading AI technology vendors Google, Anthropic, Microsoft, and OpenAI will provide systems for the competition. Deputy national security adviser for cyber and emerging technology Anne Neuberger said the goal of the competition “is to catalyze a larger community of cyber defenders who use the participating AI models to race faster—using generative AI to bolster our cyber defenses."

Full Article

 

 

Digital Replicas, a Fear of Striking Actors, Already Fill Screens
The New York Times
Marc Tracy
August 4, 2023


One issue at the forefront of the weeks-long strike by the actors' union SAG-AFTRA is the use of digital technology and artificial intelligence (AI) to create virtual avatars of performers. The union objects to a proposal by the Alliance of Motion Picture and Television Producers that would require performers to consent to use of their digital replicas at "initial employment" over concerns they could be used in different contexts without additional compensation. Technology like photogrammetry (using multiple photos to recreate something in three dimensions) has long been used to create digital stunt doubles, allow deceased actors to reprise roles, and create crowd scenes. However, Lawson Deming at visual effects company Barnstorm said, "It is very complex to digitally take a scan of someone and make it animatable, make it look realistic, make it functional."

Full Article

*May Require Paid Registration

 

 

AniFaceDrawing: Delivering Generative AI-Powered High-Quality Anime Portraits for Beginners
Japan Advanced Institute of Science and Technology
August 2, 2023


A team of researchers from the Japan Advanced Institute of Science and Technology (JAIST) and Japan's Waseda University created a generative artificial intelligence (AI) drawing-assistance tool that helps refine freehand sketches into anime portraits. The researchers based the AniFaceDrawing platform on a sketch-to-image deep learning framework that aligns raw sketches with the generative model's latent vectors. The tool uses the pre-trained Style Generative Adversarial Network (StyleGAN) model to support a two-step training regimen. JAIST's Zhengyu Huangh said, "We introduced an unsupervised training strategy for stroke-level disentanglement in StyleGAN, which enables the automatic matching of rough sketches with sparse strokes to the corresponding local parts in anime portraits, all without the need for semantic labels."

Full Article

 

NSF Leaders Discuss Funding For Ongoing AI Research

Diverse Issues in Higher Education Share to FacebookShare to Twitter (8/4, Kyaw) reported, “Efforts to research, improve, and democratize artificial intelligence (AI) for use in numerous fields are underway, according to experts from the National Science Foundation (NSF).” This comes after leaders from multiple different NSF divisions “gathered during a virtual panel last Thursday to point out how the federal agency was funding the use of AI in sectors such as climate, healthcare, education, and agriculture.” The current funding commitments “are about $500 million and are being used to support endeavors such as the National Artificial Intelligence Research Institutes program, said Dr. Michael Littman, division director for NSF’s division of information and intelligent systems.” Additionally, “the AI research community is aware of such concerns and some NSF-affiliated institutes...are actively working towards developing more reliable AI, Littman said.”

 

LATimes Poll: 45% Concerned About Effect AI Will Have On Their Work

The Los Angeles Times Share to FacebookShare to Twitter (8/6, Contreras) reports that while Hollywood writers and actors are striking “in part over concerns about AI,” new polling for The Times shows that “automation isn’t just a showbiz concern.” According to a new poll for the Times conducted by Leger, “nearly half of Americans – 45% of them – are concerned about the effect artificial intelligence will have on their own line of work, compared to 29% who are not concerned. ... The level of concern is consistent across partisan lines and rises to 57% among 18- to 34-year-olds. Americans older than 55 were less likely to express concern about AI affecting their work.”

 

AI Boom Fuels “Surprisingly Strong” Quarter For Big Tech

The New York Times Share to FacebookShare to Twitter (8/5, Mickle) reports that the “most recent quarter was surprisingly strong for tech’s biggest companies,” with many seeing revenues rebound after dropping to 20-year lows last year. However, companies “are hoping that artificial intelligence will be the answer” to the industry’s stagnation problem “and a way to refresh aging product lines that haven’t changed all that much in recent years.” The Times adds that while “making serious money from new A.I. products is still a ways off, a quick return to form has given the companies plenty of room to experiment.”

        According to the Washington Post Share to FacebookShare to Twitter (8/5, A1), “AI fever...has gripped Silicon Valley for nearly a year now, triggering a gold rush as companies such as Google and Microsoft raced to compete and launch their own chatbots.” Additionally, “venture capitalists have poured billions of dollars into AI start-ups on spec,” even though it is “still unclear how and when this technology will actually become profitable – or if it ever will.”

 

State Lawmakers Work To Gather Data Before Embracing AI Within Their Borders

The AP Share to FacebookShare to Twitter (8/5, Haigh) reported that as state lawmakers “rush to get a handle on fast-evolving artificial intelligence technology, they’re often focusing first on their own state governments before imposing restrictions on the private sector.” For example, “Connecticut plans to inventory all of its government systems using artificial intelligence by the end of 2023, posting the information online,” and starting next year, “state officials must regularly review these systems to ensure they won’t lead to unlawful discrimination.” Overall, “at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills this year,” though lawmakers want to know “Who’s using it? How are you using it? Just gathering that data to figure out what’s out there, who’s doing what. That is something that the states are trying to figure out within their own state borders.” said Heather Morton, a legislative analyst at NCSL.

 

Coalition Urges White House To Make AI Bill Of Rights Government Policy

The Hill Share to FacebookShare to Twitter (8/3) reports a coalition of civil, technology, and human rights organizations “sent a letter to the White House urging the Biden administration to make the AI Bill of Rights, which the administration released a blueprint for in October, into binding government policy on the use of AI by federal agencies, contractors and federal grant recipients.” The Hill says, “The letter is signed by nine groups, including the Center for American Progress, the Center for Democracy & Technology, the NAACP and the Leadership Conference on Civil and Human Rights.”

 

Siri Co-Creator Answers Educators’ Questions About Artificial Intelligence

Education Week Share to FacebookShare to Twitter (8/4, Langreo) reported that during a webinar held by the National School Boards Association, Adam Cheyer, “a co-creator of the popular voice assistant Siri,” shared his expertise about artificial intelligence (AI). Among questions he answered during the webinar was, “How can AI enhance the learning experience for students?” One meaningful way to use AI “is to personalize learning, Cheyer said, because ‘learning happens when people are interested.’” For instance, “if a teacher has an assignment with math word problems, they can use ChatGPT to change the word problem to something that includes a topic the student is interested in, he said.” Cheyer also suggested school districts should “think about setting acceptable use guidelines and updating policies about academic integrity,” with one potential policy being “to allow students to use AI tools but ensure that they explain how they used those tools.”

 

Column: AI Chatbots Contain Unhealthy Ideas About Body Image, Fueling Eating Disorders

In his column for the Washington Post Share to FacebookShare to Twitter (8/7), Geoffrey Fowler discusses his experiment using ChatGPT, Bard AI, and other AI technologies to find eating disorder advice. After being informed how to induce vomiting, receiving a meal plan with less than 700 calorie per day, and learning about new eating disorders, Fowler “started asking AIs for pictures.” He says, “I typed ‘thinspo’ – a catchphrase for thin inspiration – into Stable Diffusion on a site called DreamStudio,” and “it produced fake photos of women with thighs not much wider than wrists.” Fowler adds, “This is disgusting and should anger any parent, doctor or friend of someone with an eating disorder.” He concludes, “AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet.” The Hill Share to FacebookShare to Twitter (8/7, Klar) also reports.

 

Recruiters Predict Generative AI Will Improve Talent Searches

The Wall Street Journal Share to FacebookShare to Twitter (8/7, Coffee, Subscription Publication) reports recruiters say that generative AI will have companies better target job searches, especially marketing organizations, which often seek to determine candidates’ future performance. However, a big stumbling block is AI’s tendency to output false information. Additionally, sorting through CVs is expected to become more difficult as candidates use AI to produce résumés.

 

Microsoft’s AI Red Team Has Been Addressing AI Weaknesses For Years

Wired Share to FacebookShare to Twitter (8/7, Hay Newman) reports Microsoft is revealing details about its AI red team, which “since 2018 has been tasked with figuring out how to attack AI platforms to reveal their weaknesses.” The team “concluded that AI security has important conceptual differences from traditional digital defense.” Team founder Ram Shankar Siva Kumar said that besides traditional security concerns, “We now have to recognize the responsible AI aspect, which is accountability” for machine learning flaws and failures, such as generating offensive or ungrounded content.

 

Zoom Updates Terms To Say It Can Use Some Customer Data To Train AI

CNBC Share to FacebookShare to Twitter (8/7, Field) reports, “Zoom wants to train its artificial intelligence models using some of your data, according to recently updated terms of service.” Zoom’s latest terms of service updates includes fine print that “establishes Zoom’s right to utilize some aspects of customer data for training and tuning its AI, or machine learning models.” That information “includes customer information on product usage, telemetry and diagnostic data and similar content or data collected by the company, and the company does not provide an opt-out option.” CNBC says the data is taken from two recently introduced generative AI features, “a meeting summary tool and a tool for composing chat messages,” that customers must opt in to use.

 

Hackers Will Compete To See Who Can Cause More Errors In AI Models At Def Con

The Washington Post Share to FacebookShare to Twitter (8/8) reports that at this week’s annual Def Con hacker convention in Las Vegas, the Generative Red Team Challenge will see top hackers from around the world compete to cause “AI models to err in various ways, with categories of challenges that include political misinformation, defamatory claims, and ‘algorithmic discrimination,’ or systemic bias.” The event “has drawn backing from the White House as part of its push to promote ‘responsible innovation’ in AI,” and will see leading “AI firms such as Google, OpenAI, Anthropic and Stability” volunteer “their latest chatbots and image generators to be put to the test.” The results of the competition “will be sealed for several months afterward, organizers said, to give the companies time to address the flaws exposed in the contest before they are revealed to the world.”

 

Amazon Removes From Its Platform AI-Generated Books Published Under Living Author’s Name Without Consent

Gizmodo Share to FacebookShare to Twitter (8/8, DeGeurin) reports, “Amazon has removed half a dozen AI-generated books published under a living author’s name without her consent following a social media backlash.” The company removed the books on Tuesday. However, the author, Jane Friedman, “worries that a lack of clear, coherent policies at Amazon and other companies leaves the door open for other authors to face similar disputes in the future.” In an interview with the publication, Friedman said, “If you simply have a name that people can profit from and they decide to publish some garbage and put your name on it, there are no guardrails.” Friedman told the publications that “she learned about the AI-written imposter works after one of her readers stumbled across them on Amazon and reached out to her directly.”

 

Senator Warns Google Over AI Deployment In Hospitals

The Hill Share to FacebookShare to Twitter (8/8, Robertson) reports, “Sen. Mark Warner (D-Va.) sent a letter to Google leaders Tuesday, warning the company over its testing of Med-PaLM 2 artificial intelligence (AI) in hospitals.” During a rollout of the technology earlier this year, the company “said the AI tool could answer medical questions to assist health care providers.” Warner’s letter reads, “While artificial intelligence undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.”

 

As Regulators And Big Tech Discuss Rules For AI, Startups Worry They’re Being Left Out

The Verge Share to FacebookShare to Twitter (8/8) reports, “As politicians in the US and beyond grapple with how to regulate AI,” a small handful of large companies such as Microsoft, Google, and Meta have “played an outsize role in setting the terms of the conversation.” Meanwhile, “smaller AI players, both commercial and noncommercial, are feeling left out – while facing a more uncertain future.” These companies, which “face many of the same forms of scrutiny” as big tech companies working in AI, “worry they’ll have little say in the results” of discussions over how AI should be regulated, even though they are much less able to handle disruptions in the business caused by noncompliance.

 

How Teachers Are Using ChatGPT For Learning In Their Classrooms

TIME Share to FacebookShare to Twitter (8/8, B. Waxman) reports the launch of ChatGPT prompted some of the nation’s largest school districts to “ban its usage in the classroom while they worked to formulate policies around it,” while teachers grew “desperate to figure out how to harness the tech for good.” However, a “growing group of educators believe it’s too late to keep AI out of their classrooms.” Many of the “more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.” As a result, some teachers are using ChatGPT “to generate materials for students at different reading levels,” while others are “having students fact-check essays generated by the program in response to their prompts, hoping to simultaneously test students’ knowledge of the topic and show them the problems with relying on AI to do nuanced work.”

 

Using ChatGPT For Career Services Can Improve Students’ AI Literacy

Inside Higher Ed Share to FacebookShare to Twitter (8/9, Mowreader) reports that “an April TimelyCare survey found 59 percent of graduating seniors had used ChatGPT for résumé-writing assistance.” Harrison Hughes, a Washington State University career coach and academic adviser, “estimates around 30 percent of students he’s met with since November have shared that they used ChatGPT,” and “many are more literate with the tool than Hughes is, he admits.” He finds ChatGPT is “most beneficial for those who have a ‘functional’ résumé but just need expansion or a shifted focus for a specific job or more cohesive sentences.” Among other things, “AI can assist in improving résumé language, whether that’s writing cohesive sentences, finding synonyms and actionable verbs, or generating bullet points.” ChatGPT “can also enhance technology skills, which is a benefit to a student’s job hunt.”

 

Google AI Responsibility Executive Profiled

The Washington Post Share to FacebookShare to Twitter (8/9, De Vynck) profiles James Manyika, Google’s new head of tech and society, to discuss his role directing AI responsibility at the company. Manyika is quoted saying he believes AI is “an amazing, powerful, transformational technology,” but admitted that “bad things could happen.” Manyika also “insisted that the company puts out products only when they’re ready for the real world” and that Google is taking “a responsible approach to AI.” The Post adds that “bold and responsible” has “become Google’s motto for the AI age, a replacement of sorts for ‘don’t be evil’. ... The phrase sums up Silicon Valley’s general message on AI, as many of the tech industry’s most influential leaders rush to develop ever more powerful versions of the technology while warning of its dangers and calling for government oversight and regulation.”

 

Microsoft Using AI Tools To Support Work Of Field Technicians

Fast Company Share to FacebookShare to Twitter (8/9) reports that as innovations in AI have been focused primarily on supporting the work of office workers, “a set of new product features from Microsoft aims to bring some of that same power to frontline workers like field technicians and their managers, using artificial intelligence to help with filling out forms like work orders and handling shift scheduling.” Microsoft Corporate Vice President for Business Apps and Platform Charles Lamanna said, “A lot of the same problems that we saw with technology in the office also exists on the front line...Basically, for folks who are working on work orders, time you spend managing work orders is time you’re not spending doing the main job.”

 

Meta Settles AI Startup’s Lawsuit

Reuters Share to FacebookShare to Twitter (8/9) reports, “Meta Platforms has settled a lawsuit brought by an artificial-intelligence startup,” Neural Magic, “that accused the tech giant of stealing its trade secrets, according to a filing in Boston federal court.” Neural Magic “sued Meta in 2020 for allegedly stealing algorithms that enable normal computers to run complex mathematical calculations more efficiently and allow research scientists to use larger datasets in machine learning.” The companies “said in the Tuesday filing that they had resolved the case on confidential terms and asked the court to dismiss it with prejudice.”

 

Media Organizations Call For AI Regulation

USA Today Share to FacebookShare to Twitter (8/9, Schulz) reports its “parent company Gannett, The Associated Press and eight other media organizations on Wednesday called on policymakers to regulate artificial intelligence models, arguing that failure to act could hurt the industry and further erode the public’s trust in the media.” An open letter signed by the organizations said that while AI can offer “significant benefits to humanity,” there needs to be a legal framework that promotes “responsible AI practices that protect the content powering AI applications while preserving public trust in media.”

 

Colleges Consider Regulating Students’ AI Use As Educators Embrace Its Potential

The AP Share to FacebookShare to Twitter (8/10, Gecker) reports educators say they want to embrace artificial intelligence’s “potential to teach and learn in new ways, but when it comes to assessing students, they see a need to ‘ChatGPT-proof’ test questions and assignments.” This comes as “an explosion of AI-generated chatbots” has raised “new questions for academics dedicated to making sure that students not only can get the right answer, but also understand how to do the work.” Meanwhile, college administrators “have been encouraging instructors to make the ground rules clear,” as many institutions “are leaving the decision to use chatbots or not in the classroom to instructors, said Hiroano Okahana, the head of the Education Futures Lab at the American Council on Education.”

 

Article Examines How AI Can Assist With Generating Employment

CNBC Share to FacebookShare to Twitter (8/10, Curry) reports AI job posts “on global work marketplace Upwork increased more than 1,000% in the second quarter of 2023 compared to the same period last year – evidence of the industry’s momentum.” Among the roles: “Deep learning engineers, AI chatbot developers, prompt engineers, data annotators, Stable Diffusion and Dall-E artists, OpenAI Codex specialists and much more.” Not only are there “more types of jobs as a result of the generative AI boom, but companies are actually hiring more because of the generative AI surge.” A recent study “focusing on the U.K. work environment from AI and data management firm SAS says 63% of decision-makers don’t have enough employees with AI and machine learning skills, even though 54% use these technologies already.”

 

Hospitals Want To Implement AI, But Front-Line Workers Hesitant

The Washington Post Share to FacebookShare to Twitter (8/10, Verma) reports, “Mount Sinai is among a group of elite hospitals pouring hundreds of millions of dollars into AI software and education, turning their institutions into laboratories for this technology.” They “are also working to translate generative AI, which backs tools that can create words, sounds and text, into a hospital setting.” However, “the advances are triggering tension among front-line workers, many of whom fear the technology comes at a strong cost to humans.” There is concern “about the technology making wrong diagnoses, revealing sensitive patient data and becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.”

 

Teachers Share How They Use AI To Improve Classroom Instruction

Education Week Share to FacebookShare to Twitter (8/10, Meisner) reports 60% of teachers said they’ve used ChatGPT in their jobs, “according to a nationally representative Walton Family Foundation survey conducted in June and July.” While the new technology “can produce inaccurate or biased responses based on faulty data it draws from, and it has the potential to cause huge data privacy problems,” teachers have used AI-powered tools “to plan lessons, create rubrics, provide feedback on student assignments, and respond to parent emails.” In interviews with EdWeek, “three educators described how they’ve used AI tools in their work and how they plan to use them in the future.” For example, sixth-grade Texas teacher April Edwards “uses her TikTok account, which has amassed more than 60,000 followers, to share ways that she uses AI in her instruction.” However, “she still has not introduced AI to her students, because she wants to fully understand it before allowing students to use it in the classroom.”

Daniel Tauritz

unread,
Aug 19, 2023, 6:00:03 PM8/19/23
to ai-b...@googlegroups.com

Hackers Take on ChatGPT in Vegas with White House Support
CNN
Donie O'Sullivan
August 10, 2023


In a contest supported by the White House, thousands of hackers will attend the annual DEF CON conference in Las Vegas for the opportunity to crack generative artificial intelligence (AI) models, including OpenAI's ChatGPT. The models' developers will allow participants in the red-teaming exercise to push computer systems to the edge to identify exploitable vulnerabilities. "Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI," an OpenAI spokesperson said. Organizers designed the contest around the White House Office of Science and Technology Policy's "Blueprint for an AI Bill of Rights," intended to engender more responsible AI deployment and limit AI-based monitoring.

Full Article

 

 

Hackers Can Talk Computers into Misbehaving with AI
The Wall Street Journal
Robert McMillan
August 10, 2023


Security researcher Johann Rehberger persuaded OpenAI's ChatGPT chatbot to conduct bad actions using plain-English prompts, which he said malefactors could adopt for nefarious purposes. Rehberger asked the chatbot to summarize a webpage where he had written "NEW IMPORTANT INSTRUCTIONS;" he said he was gradually tricking ChatGPT into reading, summarizing, and posting his email online. Rehberger's prompt-injection attack uses a beta-test feature that allows ChatGPT to access applications like Slack and Gmail. Princeton University's Arvind Narayanan said such exploits work because generative artificial intelligence (AI) systems do not always split system instructions from the data they process. He is concerned that hackers could use generative AI like language models to access personal data or infiltrate computer systems as the technology finds its way into products.
 

Full Article

*May Require Paid Registration

 

 

Paper Exams, Chatbot Bans: Colleges Seek to 'ChatGPT-Proof' Assignments
Associated Press
Jocelyn Gecker
August 10, 2023


Plagiarism via ChatGPT and other artificial intelligence chatbots is rampant on college campuses, and educators in various fields, including computer science, are considering ways to prevent the tools from being used to complete test questions and assignments. Some are shifting from digital-only tests back to paper exams, while others want to see students' drafts and editing histories. However, many agree that plagiarism detection services like Turnitin aren't accurate in identifying text produced by chatbots or hybrid work, so unless the cheating is obvious, educators cannot be completely sure that a chatbot has been used. St. John's University's Bonnie MacKellar said computer science instructors, already dealing with plagiarism among students taking computer code from friends or the Internet, likely will use paper tests and require handwritten code. MacKellar added that using AI shortcuts in intro-level classes will prevent students from learning skills required for upper-level courses.
 

Full Article

 

 

Computer Scientists Tap AI to Identify Risky Apps
The New York Times
Tripp Mickle
August 10, 2023


A computational model using artificial intelligence (AI) to evaluate customers' reviews of social networking applications for contextual indicators of their safety has been developed by the University of Massachusetts Amherst's Brian Levine and a dozen computer scientists. The researchers built the App Danger Project website to provide guidance on app safety by counting user reviews about sexual predators and assessing negatively reviewed apps. The project reported finding a substantial number of reviews suggesting the Hoop app was unsafe for children, with 176 of 32,000 reviews since 2019 including reports of sexual abuse. Levine envisions the free resource complementing Common Sense Media and other services that check app appropriateness for children by identifying those that do not police users aggressively enough.

Full Article

*May Require Paid Registration

 

 

ChatGPT Answers More than Half of Software Engineering Questions Incorrectly
ZDNet
Sabrina Ortiz
August 9, 2023


ChatGPT answered 259, or 52%, out of 512 Stack Overflow questions incorrectly, and 77% of answers were unnecessarily wordy, in a study conducted by Purdue University researchers. However, the researchers found that 65% of the time, ChatGPT gave comprehensive answers to software engineering prompts addressing all aspects of the question. Researchers also asked 12 individuals with different programming skill levels to assess the ChatGPT-generated answers. "Users overlook incorrect information in ChatGPT answers (39.34% of the time) due to the comprehensive, well-articulated, and humanoid insights in ChatGPT answers,” the researchers said.

Full Article

 

 

Parenting a Three-Year-Old Robot
Carnegie Mellon University News
Aaron Aupperlee
August 8, 2023


An open source artificial intelligence (AI) agent developed by researchers at Carnegie Mellon University (CMU) and Meta AI allows robots to achieve the manipulation abilities of a three-year-old child through passive observations and active learning. RoboAgent was able to complete 12 manipulation skills in differing real-world scenarios. The researchers gave the robot self-experiences from which it could learn by teleoperating it through various tasks. It also learned how humans interact with objects and leverage different skills to complete tasks from online videos. CMU's Shubham Tulsiani said, "An agent capable of this sort of learning moves us closer to a general robot that can complete a variety of tasks in diverse unseen settings and continually evolve as it gathers more experiences. RoboAgent can quickly train a robot using limited in-domain data while relying primarily on abundantly available free data from the Internet to learn a variety of tasks."
 

Full Article

 

 

Prototype 'Brain-Like' Chip Promises Greener AI, Says Tech Giant
BBC News
Shiona McCallum; Chris Vallance
August 11, 2023


A "brain-like" chip developed by IBM researchers could improve the energy efficiency of artificial intelligence (AI) technology. The prototype features memristors (memory resistors) that mimic the synapses in the human brain. With better energy efficiency, "large and more complex workloads could be executed in low-power or battery-constrained environments," such as cars, mobile phones, and cameras, and "cloud providers will be able to use these chips to reduce energy costs and their carbon footprint,” noted IBM's Thanos Vasilopoulos. The chip also could help data centers reduce the water necessary for cooling. Meanwhile, the chip's digital elements will allow it to be incorporated into existing AI systems.

Full Article

 

 

Deep Learning for New Protein Design
Texas Advanced Computing Center
Jorge Salazar
August 3, 2023


Scientists at the University of Washington (UW) and Belgium's Ghent University enhanced current energy-based physical models in de novo computational protein design using deep learning techniques. The researchers incorporated DeepMind's AlphaFold 2 and the UW-developed RoseTTA fold software into the deep learning-augmented de novo protein binder design protocol. They ran 6 million interactions between potentially bound protein structures in parallel on the Texas Advanced Computing Center's Frontera supercomputer and used UW's ProteinMPNN software to produce protein-sequence neural networks over 200 times faster than the previous best software. Outcomes indicated the designed structures bind to target proteins 10 times faster, though UW's Brian Coventry said they must boost their speed by another three orders of magnitude.
 

Full Article

 

DEF CON Hacker Convention Focused On Exposing Flaws In Generative AI Systems

Bloomberg Share to FacebookShare to Twitter (8/12, Manson, Subscription Publication) reported the White House has helped develop “a novel public contest taking place at the DEF CON hacking conference,” involving “thousands of hackers...trying to expose flaws and biases in generative AI systems” in order help develop “new guardrails to rein in” the technology. While the Administration has “encouraged companies to develop safe, secure, transparent AI,” some critics “doubt such voluntary commitments go far enough,” and White House Office of Science and Technology Policy Director Arati Prabhakar “agreed voluntary measures don’t go far enough,” saying according to Bloomberg that the DEF CON contest “will inject urgency into the administration’s pursuit of safe and effective platforms.”

        As part of its examination of the annual DEF CON hacking conference, Politico Share to FacebookShare to Twitter (8/12, Sakellariadis, Gedeon) reported Homeland Security Secretary Mayorkas, CISA Director Jen Easterly, and Acting National Cyber Director Kemba Walden “are all in Las Vegas” for the event. Politico writes that “on paper, the government brass that appear at DEF CON are there to recruit new talent or forge ties to the hacker community,” yet “if you pry, it’s clear that showing up to a place like this also is a welcome break from buttoned-up Washington,” with NSA Cybersecurity Director Rob Joyce adding, “Most NSA folks would be more comfortable in a room full of DEF CON attendees than they would be at a traditional government event.”

        The Wall Street Journal Share to FacebookShare to Twitter (8/12, McMillan, Subscription Publication) also provided coverage.

 

Educators Continue To Face Complex Questions Around ChatGPT, AI Products

The Washington Post Share to FacebookShare to Twitter (8/13) reports educators across the spectrum are working on how to address the rise of chatbots and large language models such as ChatGPT that are capable of quickly generating essays and papers. However, there is “little consensus among educators: for every professor who touts the tool’s wonders there’s another that says it will bring about doom.” Further, not every school is addressing the issue uniformly – some schools have issued policies that prevent the use of AI-generated material in class, while others have attempted to avoid the subject entirely. Further, teachers face similar quandaries: the use of AI-detection software is flawed and often flags student-generated work as computer generated, but many teachers feel uneasy about surveilling their students through other software in order to prove it was student-generated.

 

Poll Finds 28% Of US Employees Regularly Use ChatGPT For Work

Reuters Share to FacebookShare to Twitter (8/11, Naidu, Coulter, Lange) reported, “Many workers across the U.S. are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite fears that have led employers such as Microsoft and Google to curb its use.” The poll found “some 28% of respondents...said they regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools.” Additionally, “some 10% of those polled said their bosses explicitly banned external AI tools, while about 25% did not know if their company permitted use of the technology.”

 

Philanthropists Investing In AI To Advance Greater Good

The AP Share to FacebookShare to Twitter (8/11, Dervishi) reported, “While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists – including long-established foundations and tech billionaires – have been responding with an uptick in grants.” Much of these efforts are “focused on what is known as technology for good or ‘ethical AI,’ which explores how to solve or mitigate the harmful effects of artificial-intelligence systems.” Among the philanthropists working in this area is former Google CEO Eric Schmidt who has “committed hundreds of millions of dollars to artificial-intelligence grantmaking programs housed at Schmidt Futures to ‘accelerate the next global scientific revolution.’” The Patrick McGovern Foundation, named after the late billionaire who founded the International Data Group, committed $40 million in 2021 to help nonprofits use AI and data to advance “their work to protect the planet, foster economic prosperity, ensure healthy communities.” Tesla CEO Elon Musk, who has warned AI could result in “civilization destruction,” in 2021 gave $10 million to the Future of Life institute, a nonprofit that aims to prevent “existential risk” from AI.

 

Report: OpenAI Could Face Financial Problems By End Of 2024

The Economic Times (IND) Share to FacebookShare to Twitter (8/14) reports, “OpenAI...could face financial troubles by the end of 2024 if additional funding isn’t secured soon, according to reports.” The Economic Times says, “Analytics India Magazine has highlighted a consistent decrease in users on the ChatGPT website during the first half of this year. Statistics from SimilarWeb, an analytics company, indicate a drop in users from 1.9 billion in May to 1.7 billion in June, further decreasing to 1.5 billion in July. These figures exclude users from APIs and the ChatGPT mobile app.”

 

Amazon Adds AI-Generated Product Review Summaries

The Verge Share to FacebookShare to Twitter (8/14, Kastrenakes) reports Amazon has introduced “AI-generated review summaries, which boil down hundreds or thousands of Amazon reviews into a one-paragraph blurb explaining what most people like or dislike about a product.” The company says it has been testing the feature for several months. The review summaries are available to a “subset” of US consumers currently on the company’s mobile app, and they cover “a broad selection of products.” According to the article, the AI-generated summaries “are easy to read but do include some occasional language quirks.” In addition, they “seem to focus primarily on the positives of the product, spending less time on the negatives and leaving them for the end.”

 

New York Times Prohibits Content Scraping For AI Development

The Verge Share to FacebookShare to Twitter (8/14) reports, “The New York Times has taken preemptive measures to stop its content from being used to train artificial intelligence models.” The Times’ Terms of Service were updated on August 3rd “to prohibit its content — inclusive of text, photographs, images, audio/video clips, ‘look and feel,’ metadata, or compilations — from being used in the development of ‘any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system.’” The updated terms “also specify that automated tools like website crawlers designed to use, access, or collect such content cannot be used without written permission from the publication.”

 

AI-Prompted Job Losses Yet To Materialize

Insider Share to FacebookShare to Twitter (8/15, Kantrowitz, Gorman) reports that “the imagined mass firings” in the wake of mass adoption of AI tools “haven’t materialized.” Insider says, “AI technology, however impressive, is still not good enough to handle most jobs. Rather than eliminate our positions, companies would like us to simply be better at them. And firms hoping to replace humans with bots are learning that change management is hard.” Insider adds, “To be sure, there will be jobs affected by this wave of AI, as when any new technology arrives. And it’s possible that as the technology gets better, some companies will iron out the details and automate away. At that point, there could be meaningful displacement, even if mass unemployment is unlikely. But so far, the hot takes have run into reality.”

 

OpenAI Says GPT-4 Capable Of Content Moderation

Bloomberg Share to FacebookShare to Twitter (8/15, Anand, Subscription Publication) reports OpenAI on Tuesday claimed GPT-4 “is capable of moderating content, a task that could help businesses become more efficient, and highlighting a possible use case for buzzy artificial intelligence tools that haven’t yet generated huge revenue for many companies.” OpenAI “has found that GPT-4 does an efficient job of moderation, a role that often falls to small armies of human workers,” and “can sometimes be traumatic for the people performing them.” However, Bloomberg adds OpenAI “stressed that the process should not be fully automated.”

        Meanwhile, Reuters Share to FacebookShare to Twitter (8/15, Singh) reports OpenAI “said its latest GPT-4 AI model can reduce the process of content moderation to a few hours from months and ensure more consistent labeling.” Reuters also reports OpenAI CEO Sam Altman separately “said on Tuesday that the startup does not train its AI models on user-generated data.”

 

Google Tests Features To Display AI-Generated Responses To Search Queries

Bloomberg Share to FacebookShare to Twitter (8/15, Love, Subscription Publication) reports Google on Tuesday announced it “will let users experiment with new features that display content created by artificial intelligence while they browse the web, as the company strives to maintain its edge in a market with new competitive threats.” Google earlier had released its “search generative experience,” which allows users to “try an experimental version of its search engine that displays AI-generated responses above the customary list of results.” The company said the upgraded product “would expand to let readers use the AI tool on websites beyond Google’s search engine to summarize longer articles and in-depth information.”

        Meanwhile, The Information Share to FacebookShare to Twitter (8/15, Subscription Publication) reports Google will launch Gemini “this fall with the ability to generate conversational text as well as imagery.” The Information says Google “is betting on Gemini to power services ranging from its Bard chatbot, which competes with OpenAI’s ChatGPT, to enterprise apps like Google Docs and Slides.”

 

Developers Increasingly Adopt AI Tools

The New York Times Share to FacebookShare to Twitter (8/15, Sisson) reports artificial intelligence “won’t be assembling apartments or erecting stadiums any time soon, but in construction – an industry stereotypically known for clipboards and Excel spreadsheets – the rapid embrace of the technology may change how quickly projects are finished.” According to the Times, “Drones, cameras, mobile apps and even some robots are increasingly mapping real-time progress on sprawling job sites, giving builders and contractors the ability to track and improve a project’s performance,” and AI “is starting to be used in buying and selling real estate: JLL, a global broker, recently introduced its own chatbot to provide insights to its clients.” However, the Times adds “the industry’s embrace of A.I. technology faces challenges, including concerns over accuracy and hallucinations, in which a system provides an answer that is incorrect or nonsensical.”

 

New Democrat Coalition Forms Working Group On AI Challenges

Reuters Share to FacebookShare to Twitter (8/15, Bartz) reports that on Tuesday, the House New Democrat Coalition announced it has formed “a working group on artificial intelligence aimed at tackling the issue of what restrictions, if any, should be put on the technology.” The group will be led by Rep. Derek Kilmer (D-WA), and it will “work with the Biden administration, companies and other lawmakers to develop ‘sensible, bipartisan policies to address this emerging technology.’” Reuters adds that the development comes after the White House announced in July that AI companies including Alphabet, Meta, OpenAI “made voluntary commitments to implement measures such as watermarking AI-generated content to help make the technology safer.”

 

Iowa District Uses ChatGPT To Help Remove Books From School Libraries

The Daily Beast (8/14) reports the Mason City Community School District in Iowa “has removed 19 books from its libraries ahead of the upcoming school year in order to comply with a recent book ban signed into law by Gov. Kim Reynolds on May 26.” The law requires Iowa school library books to be “age appropriate” and without “descriptions or visual depictions of a sex act.” Engadget (8/14) reports the district explained a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book. Rolling Stone (8/14, Legaspi) reports the district said, “Based on this review, there are 19 texts that will be removed from our 7-12 school library collections and stored in the Administrative Center while we await further guidance or clarity. We also will have teachers review classroom library collections.”

        The Mason City (IA) Globe Gazette (8/11) reported Bridgette Exman, district assistant superintendent of curriculum and instruction, said in a statement, “It is simply not feasible to read every book and filter for these new requirements. Therefore, we are using what we believe is a defensible process to identify books that should be removed from collections at the start of the 23-24 school year.” In terms of parent, student, and teacher reactions, Exman “said the lack of clear guidance has many MCSD teachers feeling uncertain and vulnerable. Some have asked for a list of books to look for in their classroom libraries.” She said, “We intend to help teachers make defensible decisions when they have questions or concerns about books, so they don’t feel like they are left on their own to figure this out.”

        Exman told Popular Science (8/14, Paul) in an interview, “Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books. At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process.” She acknowledges ChatGPT’s deficiencies, but says administrators believe the tool remains the simplest way to legally comply with the new legislation. Vulture (8/15, Gularte) reported some of the newly “banned books by ChatGPT include Alice Walker’s The Color Purple, Margaret Atwood’s The Handmaid’s Tale, Toni Morrison’s Beloved, and Buzz Bissinger’s Friday Night Lights.”
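        As an illustration only, the yes/no screening step described above could look something like the following minimal sketch; it assumes the 2023-era openai Python package, and the titles, model choice, and prompt wording are hypothetical, not the district's actual software.

import openai  # assumes the 2023-era openai package

openai.api_key = "YOUR_API_KEY"

def contains_sex_act(title: str, author: str) -> bool:
    """Ask the model the statute's question about one title; expects a yes/no reply."""
    prompt = (f"Does the book '{title}' by {author} contain a description or "
              "visual depiction of a sex act? Answer yes or no.")
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["choices"][0]["message"]["content"].strip().lower().startswith("yes")

master_list = [("The Color Purple", "Alice Walker")]  # hypothetical input list
flagged = [book for book in master_list if contains_sex_act(*book)]
print(flagged)

        As the coverage notes, the district itself acknowledges the tool's deficiencies; a single-prompt check like this inherits all of them, which is central to critics' objections.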

 

University Of Florida To Teach Artificial Intelligence Courses In High Schools

The Tampa Bay (FL) Times (8/14) reports that a curriculum “developed by the University of Florida (UF) to prepare high school students for a workforce that uses artificial intelligence is expanding to nine counties across the state, including Hillsborough, Pasco and Pinellas.” The three-year curriculum, “called AI Foundations, includes four courses, the university announced in a news release Monday.” It will be “delivered starting this school year through the state’s Career and Technical Education programs and was used in three counties last year.”

 

Critic: AI Increasingly Poses Competition, Tool For Comedians

In a more than 2,500-word piece, New York Times (8/15) critic at large Jason Zinoman discusses the rise of artificial intelligence chatbots in the comedy space. He explains “until recently, comedy has been seen as so quintessentially human that it was assumed A.I. would kill humanity before it would at a club. But since the rise of large language models like ChatGPT less than a year ago, this common wisdom no longer applies.” Zinoman concludes, “The conversation about A.I. today gravitates toward doomsday scenarios, but consider the utopian outcome, that bots don’t replace comics but become useful tools. That feels modestly realistic.”

 

Analysis: AI’s Present-Day Risks Understudied With Respect To Existential Risks

Parmy Olson writes in a Bloomberg (8/16, Subscription Publication) analysis piece, “Research and advocacy groups that are working to address present-day harms from AI are getting a fraction of the funding that’s going to those studying existential risks from increasingly powerful machines.” Olson says, “There is nothing wrong with scrutinizing AI systems to make sure they are aligned with human values. ... But the enormous disparity in funding between theoretical risks of the future and real problems that exist today, which stand to get worse in the absence of regulation, makes no sense at all.” Olson adds, “Why the disparity? One is ideological. Another may be commercial: Existential risk groups often say they need to make more powerful AI models in order to do their research. Over time, that can make them more valuable as investments. ... a grey area...has emerged from groups and companies that position themselves as doing work to prevent catastrophic risk from AI by, bizarrely enough, racing to create more powerful AI.”

 

British Prime Minister Plans AI Summit Meeting

Bloomberg (8/16, Subscription Publication) reports British Prime Minister Rishi Sunak “has spent months saying he wants the UK to lead the world in developing and regulating artificial intelligence. That strategy is finally taking shape,” with plans for a summit meeting “later this year that aims, for the first time, to bring together world leaders and top AI executives.” US President Joe Biden “and other G7 leaders, along with tech chiefs including OpenAI chief Sam Altman, Microsoft CEO Satya Nadella, Anthropic’s Dario Amodei and DeepMind CEO Demis Hassabis are expected to be invited,” though “there is a debate on whether to invite China amid concerns it may be hard to reach agreement with the Asian nation on AI regulation.”

 

Google DeepMind, Scale AI Developing 21 New Generative AI Features

The New York Times (8/16, Grant) reports, “Google DeepMind has been working with generative A.I. to perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips.” The Times says Google’s new project indicates “the urgency of Google’s effort to propel itself to the front of the A.I. pack and signaled its increasing willingness to trust A.I. systems with sensitive tasks.” In a presentation shown to executives last December, Google’s “A.I. safety experts had warned of the dangers of people becoming too emotionally attached to chatbots.”

 

OpenAI Acquires Global Illumination

TechCrunch (8/16, Wiggers) reports, “OpenAI, the AI company behind the viral AI-powered chatbot ChatGPT, has acquired Global Illumination, a New York-based startup leveraging AI to build creative tools, infrastructure and digital experiences.” Global Illumination’s previous work includes designing and building products for “Instagram and Facebook as well as YouTube, Google, Pixar and Riot Games.” OpenAI said in a blog post, “The entire team has joined OpenAI to work on our core products including ChatGPT.”

        UK Research Paper Suggests ChatGPT Has Liberal Bias. The Washington Post (8/16) reports a new UK research paper “suggests OpenAI’s ChatGPT has a liberal bias, highlighting how artificial intelligence companies are struggling to control the behavior of the bots even as they push them out to millions of users worldwide.” The study, conducted by researchers at the University of East Anglia, “asked ChatGPT to answer a survey on political beliefs as it believed supporters of liberal parties in the U.S., U.K. and Brazil might answer them” and then “asked ChatGPT to answer the same questions without any prompting, and compared the two sets of responses.” The researchers argued that the results showed a “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.” OpenAI “has said it explicitly tells its human trainers not to favor any specific political group.”
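        A minimal sketch of the comparison the researchers describe, prompted persona versus default answer, is shown below; it assumes the 2023-era openai Python package, and the statements, persona wording, and answer scale are illustrative stand-ins for the survey the study actually used.

import openai  # assumes the 2023-era openai package

openai.api_key = "YOUR_API_KEY"

STATEMENTS = ["The government should do more to redistribute wealth."]
SCALE = "Answer only: strongly disagree, disagree, agree, or strongly agree."

def ask(statement, persona=None):
    """Get the model's answer, optionally while impersonating a partisan supporter."""
    system = f"Answer as a typical {persona} would. {SCALE}" if persona else SCALE
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": statement}],
    )
    return reply["choices"][0]["message"]["content"].strip()

for s in STATEMENTS:
    # The study's core idea: compare the default answer with partisan-persona answers.
    print(s)
    print("  default:", ask(s))
    print("  as a Democratic Party supporter:", ask(s, "Democratic Party supporter"))
    print("  as a Republican Party supporter:", ask(s, "Republican Party supporter"))

        A faithful reproduction would also repeat each question many times and aggregate the answers, since responses can vary from run to run.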

 

AI Boom Has Led To Critical Shortage Of GPUs For Startups

The New York Times (8/16, Griffith) reports, “More than money, engineering talent, hype or even profits, tech companies this year are desperate for” graphics processing units, or GPUs. When the launch of ChatGPT last year “set off a wave of excitement over A.I.,” sudden demand in the tech industry created “a shortage of the chips.” The shortage “has been exacerbated because Nvidia, a longtime provider of the chips, has a virtual lock on the market.” The Times says the chip shortage “has created a stark contrast between the haves and have-nots” among artificial intelligence start-ups.

 

New York Times Considers Legal Action Against OpenAI Over Copyright

NPR (8/16, Allyn) reports the New York Times is considering legal action against OpenAI for alleged copyright infringement. The paper’s lawyers “are exploring whether to sue OpenAI to protect the intellectual property rights associated with its reporting, according to two people with direct knowledge of the discussions.” A lawsuit “would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI.” If a judge finds that OpenAI illegally copied The Times’ articles, “the court could order the company to destroy ChatGPT’s dataset, forcing them to recreate it using only work that it is authorized to use.” Lawyers for The Times “believe OpenAI’s use of the paper’s articles to spit out descriptions of news events should not be protected by fair use, arguing that it risks becoming something of a replacement for the paper’s coverage.”

        Gizmodo (8/17) reports, “The Times started negotiations with OpenAI for months to reach a licensing agreement allowing the company to incorporate the paper’s stories into its AI tools. However, the discussions quickly took a turn for the worse as the news outlet raised concerns that ChatGPT would replace journalists, making it a direct competitor.”

        Insider (8/17, Syme) reports, “It is currently unclear whether OpenAI has trained its chatbot on the NYT’s articles, but if a judge finds it has violated copyright rules, it could be ordered to destroy ChatGPT’s dataset.”

        Ars Technica (8/17) reports, “The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT’s dataset and fines up to $150,000 per infringing piece of content.”

 

Microsoft CEO Nadella Says AI May Have As Big An Impact As The Internet

Bloomberg (8/17, Chang, Subscription Publication) reports Microsoft CEO Satya Nadella, discussing artificial intelligence in an interview on Bloomberg’s The Circuit, evoked a 1995 memo by Microsoft Co-Founder Bill Gates that called the Internet a “tidal wave” that would be essential to every aspect of Microsoft’s business. Nadella “said he believes the impact of artificial intelligence will be just as profound.” In the same interview, OpenAI CEO Sam Altman discussed Microsoft and OpenAI’s partnership. Altman said, “These big, major partnerships between tech companies usually don’t work. This is an example of it working really well. We’re super grateful for it.”

dtau...@gmail.com

Aug 27, 2023, 7:57:10 PM
to ai-b...@googlegroups.com

ML System Based on Light Could Yield More Powerful, Efficient LLMs
MIT News
Elizabeth A. Thomson
August 22, 2023


A team led by researchers at the Massachusetts Institute of Technology has developed a light-based machine learning system that could surpass the system behind ChatGPT in terms of power and efficiency, while also consuming less energy. The compact architecture is based on arrays of vertical-cavity surface-emitting lasers (VCSELs) developed by researchers at Germany's Technische Universitat Berlin. The system uses hundreds of micron-scale lasers and the movement of light to perform computations. The researchers said it could be scaled for commercial use in the near future, given its reliance on laser arrays commonly used in cellphone facial identification systems, and for data communication. They found the system to be 100 times more energy efficient and 25 times more powerful in terms of compute density than current state-of-the-art supercomputers used to power existing machine learning models.

Full Article

 

 

Google Tests AI Assistant That Offers Life Advice
The New York Times
Nico Grant
August 16, 2023


Google is testing generative artificial intelligence (AI) technology programmed to serve as a life coach following the merger of its U.K.-based DeepMind research laboratory with its Brain AI development team in Silicon Valley. Materials reviewed by The New York Times indicate DeepMind has been developing generative AI to perform at least 21 personal and professional tasks, including providing life advice, ideas, planning instructions, and tutoring tips. Anonymous sources said worker teams organized by DeepMind contractor Scale AI are evaluating the AI assistant's capabilities, including its ability to answer sensitive questions about life challenges. According to Google, the program's creation feature could offer situation-based suggestions or recommendations, while its tutoring function can teach new skills or enhance current ones, and its planning capability can outline financial budgets and meal and workout plans.

Full Article

*May Require Paid Registration

 

 

ChatGPT Leans Liberal
The Washington Post
Gerrit De Vynck
August 16, 2023


Research by scientists at the U.K.'s University of East Anglia suggests OpenAI's ChatGPT has a liberal slant. The researchers asked the chatbot to answer questions on political convictions as it assumed liberal supporters in the U.S., the U.K., and Brazil might answer them, then asked it to answer the same questions without prompting. The outcomes indicated "significant and systematic political bias toward the Democrats in the U.S., [leftist president] Lula in Brazil, and the Labour Party in the U.K.," the researchers wrote. These findings add to a growing body of evidence showing chatbots are rife with assumptions, beliefs, and stereotypes that were embedded in their training data.

Full Article

*May Require Paid Registration

 

 

Driverless Cars May Struggle to Spot Children, Dark-Skinned People
New Scientist
Matthew Sparkes
August 17, 2023


Scientists in the U.K. and China evaluated eight artificial intelligence (AI)-based pedestrian detectors used in driverless car research, and found they may have difficulty detecting children and dark-skinned people. The researchers learned the detectors' accuracy identifying adults was nearly 20% higher than it was for children, and 7.5% higher for light-skinned pedestrians versus those with dark skin. Jie Zhang at the U.K.'s King's College London said while automakers' software details are confidential, they are usually based on existing open source models, which "must also have similar issues." Carissa Véliz at the U.K.'s University of Oxford said these problems must be corrected before deploying AI systems in cars on real roads, although engineers must ensure their remedies do not inadvertently harm overall safety.
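The kind of per-group comparison behind such findings can be expressed compactly: given ground-truth pedestrian boxes annotated with a demographic group and a detector's predicted boxes, compute the detection (recall) rate per group. The data layout, group labels, and IoU threshold below are assumptions for illustration, not the study's benchmark.

from collections import defaultdict

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def detection_rate_by_group(ground_truth, predictions, thresh=0.5):
    """ground_truth: (image_id, box, group) triples; predictions: image_id -> list of boxes."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image_id, box, group in ground_truth:
        totals[group] += 1
        if any(iou(box, p) >= thresh for p in predictions.get(image_id, [])):
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

gt = [("img0", (10, 20, 50, 120), "adult"), ("img0", (60, 40, 80, 90), "child")]
preds = {"img0": [(12, 22, 49, 118)]}
print(detection_rate_by_group(gt, preds))  # e.g. {'adult': 1.0, 'child': 0.0}

A gap between the per-group rates is the kind of disparity the researchers report, although their evaluation is far larger and more carefully controlled than this toy example.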

Full Article

 

Europe's Fastest Supercomputer Trains Large Language Models in Finland
Computer Weekly
Pat Brans
August 18, 2023


Finland's University of Turku is among 10 European university research labs that joined forces to develop new large language models (LLMs) in several European languages. Researchers are training GPT-like language models on LUMI (Large Unified Modern Infrastructure), Europe's fastest supercomputer, which is hosted at the CSC Data Center in Kajaani, Finland. The effort is important because LLMs need a substantial amount of text in a given language, and sufficient computing power to train the LLM with that data. CSC's Aleksi Kallio said, "Once [LLMs] are deployed, they are black boxes, virtually impossible to figure out. That's why it's important to have as much visibility as possible while the models are being built. And for that reason, Finland needs its own [LLM] trained in Finland."

Full Article

 

 

Israeli Co. Uses AI to Save Bees
The Jerusalem Post (Israel)
Zachy Hennessey
August 10, 2023


Israeli agricultural technology company BeeHero has introduced the Pollination Insight Platform to monitor pollinating bees, enhance pollination efficiency, and improve crop production. The platform uses in-field sensors to track pollinator activity for various crops in real time, for inclusion in the world's largest dataset on bee behavior. Artificial intelligence-driven analytics convert the data into insights that inform decision-making by growers that can augment crop yields. BeeHero developed the platform in collaboration with Israel-based global vegetable seeds company Hazera, and has deployed it in the U.S., Europe, and Israel. Hazera's Avi Gabai said, "The introduction of this in-field sensing solution marks a significant milestone in the agricultural industry's ongoing efforts to address the challenges posed by declining bee populations."

Full Article

 

More Professors Consider Making ChatGPT Policies Explicit

CNN (8/19, Kelly) reported that with more experts “expecting the continued application of artificial intelligence, professors now fear ignoring or discouraging the use of it will be a disservice to students and leave many behind when entering the workforce.” According to a study conducted by Intelligent.com, “about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes.” Jules White, an associate professor of computer science at Vanderbilt University, said, “It cannot be ignored,” and encourages professors to spell out the course’s stance on AI in the first few days of school. White said it is “incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry in demand so we provide the right training.”

 

Few AI Pause Letter Signatories Actually Worried About AI’s Existential Risks

Wired (8/17, Knight) reported, “This March, nearly 35,000 AI researchers, technologists, entrepreneurs, and concerned citizens signed an open letter from the nonprofit Future of Life Institute that called for a ‘pause’ on AI development, due to the risks to humanity revealed in the capabilities of programs such as ChatGPT.” Nearly six months later, MIT students Isabella Struckman and Sofie Kupiec “reached out to the first hundred signatories of the letter...to learn more about their motivations and concerns,” and “despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity itself.” Wired adds, “A significant number of those who signed were, it seems, primarily concerned with the pace of competition” among the companies spearheading generative AI initiatives “without exploring the risks...not because they might wipe out humanity but because they might spread disinformation, produce harmful or biased advice, or increase the influence and wealth of already very powerful tech companies.”

 

Federal Judge Rules AI-Generated Art Is Not Copyrightable

The Hollywood Reporter (8/18, Cho) reports, “A federal judge on Friday upheld a finding from the U.S. Copyright Office that a piece of art created by AI is not open to protection. The ruling was delivered in an order turning down Stephen Thaler’s bid challenging the government’s position refusing to register works made by AI. Copyright law has ‘never stretched so far’ to ‘protect works generated by new forms of technology operating absent any guiding human hand,’ U.S. District Judge Beryl Howell found.”

 

Information Sciences Professor Says AI Can Be Used To Improve Data Management

Bradley Wade Bishop, a professor of information sciences at the University of Tennessee, writes in The Conversation (8/21, Wade Bishop), “To improve and advance science, scientists need to be able to reproduce others’ data or combine data from multiple sources to learn something new.” Proper data management is crucial, allowing “scientists to use the data already out there rather than recollecting data that already exists, which saves time and resources.” Major funders of research “like the National Institutes of Health now prioritize research data management and require researchers to have a data management plan before they can receive any funds.” Bishop further discusses how AI can be deployed in data management.

 

Satya Nadella “Thrilled” By Microsoft’s Partnership With OpenAI

In an interview in Fast Company (8/21), Microsoft CEO Satya Nadella discusses the company’s leading role in the generative AI boom. Fast Company says, “Microsoft has been at the forefront of the tech world’s AI race because of the landmark partnership Nadella struck with ChatGPT creator OpenAI. ... For the first time since its 1990s heyday, the company is widely regarded as the pacemaker in technology’s next historic wave of change.” On Microsoft’s partnership with OpenAI, Nadella said, “I’m thrilled about all that we’ve constructed above, below, and around OpenAI. And so I don’t look at it and say, ‘God, I wish I’d built OpenAI.’ I think about it like, ‘What if we had not done what we did with OpenAI?’ I would have regretted that a lot more!”

 

White House Science Adviser Says Safeguarding AI Technology An “Urgent Issue” For Biden

The AP (8/21, O'Brien) interviews Arati Prabhakar, director of the White House Office of Science and Technology Policy, who is “helping to guide the U.S. approach to safeguarding AI technology, relying in part on cooperation from big American tech firms like Amazon, Google, Microsoft and Meta.” Prabhakar told the AP, “I’ve had the great privilege of talking with [President Biden] several times about artificial intelligence. Those are great conversations because he’s laser-focused on understanding what it is and how people are using it. Then immediately he just goes to the consequences and deep implications. Those have been some very good conversations. Very exploratory, but also very focused on action.” Asked about “a timeline for future actions,” she said, “Many measures are under consideration. I don’t have a timeline for you. I will just say fast. And that comes directly from the top. The president has been clear that this is an urgent issue.”

 

Four In Ten Teachers Expect To Use AI By End Of Upcoming School Year

Education Week (8/21, Sparks) reports nearly “4 in 10 teachers expect to use AI in their classrooms by the end of the 2023-24 school year. Less than half as many say they are prepared to use the tools.” That’s the “bottom line of the newly released Teacher Confidence Report, part of a series of national teacher surveys conducted this May and June by the education publisher Houghton Mifflin Harcourt.” HMH Senior Vice President of Research Francie Alexander said, “This is a time, because of the disruptions, that transformation has been accelerated. … We have a series of tools that are being considered like the industrial revolution.”

 

VMware Develops New Software Tools With Nvidia Targeting Businesses Seeking To Develop Proprietary AI

Reuters (8/22, Nellis) reports VMware Inc on Tuesday “said it has developed a new set of software tools in partnership with Nvidia Corp aimed at businesses which want to develop generative artificial intelligence in their own data centers rather than the cloud.” Reuters says VMware “released a new set of tools designed to help manage Nvidia chips, which dominate the market for AI systems that can read and write text in human-like ways.” VMware CEO Raghu Raghuram “told Reuters businesses are interested in the technology for everything from helping software developers write code faster to writing legal contracts more quickly. But some VMware customers want to do that work in their own data centers when the data is sensitive.”

 

Researchers Discover ChatGPT-Powered Botnet

Ars Technica (8/22) reports, “Researchers at Indiana University Bloomington discovered a botnet powered by ChatGPT operating on X—the social network formerly known as Twitter—in May of this year.” The Fox8 botnet “consisted of 1,140 accounts. Many of them seemed to use ChatGPT to craft social media posts and to reply to each other’s posts. The auto-generated content was apparently designed to lure unsuspecting humans into clicking links through to...crypto-hyping sites.” The botnet’s use of ChatGPT “certainly wasn’t sophisticated. The researchers discovered the botnet by searching the platform for the tell-tale phrase ‘As an AI language model …’, a response that ChatGPT sometimes uses for prompts on sensitive subjects. They then manually analyzed accounts to identify ones that appeared to be operated by bots.”
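The first filtering step the researchers describe can be reproduced in a few lines: flag accounts whose posts contain the tell-tale refusal phrase, then hand the candidates to a human analyst. The post records below are fabricated placeholders; only the phrase itself comes from the reporting.

TELL_TALE = "as an ai language model"

posts = [  # hypothetical post records for illustration
    {"account": "user123", "text": "As an AI language model, I cannot recommend..."},
    {"account": "user456", "text": "Great weather for the concert tonight."},
]

def suspicious_accounts(posts):
    """Accounts with at least one post containing the tell-tale phrase."""
    return {p["account"] for p in posts if TELL_TALE in p["text"].lower()}

print(suspicious_accounts(posts))  # {'user123'} -- candidates for manual review

As the article notes, keyword matching only surfaces candidates; the researchers still analyzed the flagged accounts manually before calling them bots.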

 

AI Helping To Decode Brain Signals Of Woman With Paralysis, Enabling Her To Speak Through An Avatar

The New York Times (8/23, Belluck) reports, “In a milestone of neuroscience and artificial intelligence, implanted electrodes decoded” a woman’s “brain signals as she silently tried to say sentences. Technology converted her brain signals into written and vocalized language, and enabled an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions.” The research “demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals, experts say. ... The goal is to help people who cannot speak because of strokes or conditions like cerebral palsy and amyotrophic lateral sclerosis.” The findings were published in the journal Nature.

 

ChatGPT Prompts Worries About How Generative AI Could Hurt Collegiate Research

The Chronicle of Higher Education (8/23, Hicks) reports that since ChatGPT emerged last November, stories have been spreading throughout the collegiate-library world. Librarians receive citations “with all the details they need, only to discover that they were fabricated by ChatGPT. In response, library staff have started publishing web pages and hosting workshops all with the same message: ChatGPT can do a lot of things,” but it cannot find sources. With public confidence in research “already low, experts worry that the use of ChatGPT could further erode faith in academic writing.” They also said that students and other novice researchers could “lose essential research skills and run into trouble in the classroom if they don’t understand ChatGPT’s many flaws.”

 

Large Language Models Seen As Useful In Various Fields Despite Lacking Specialized Training

Ars Technica (8/23) reports, “Until recently, AI models were specialized tools. Using AI in a particular area, like robotics, meant spending time and money creating AI models specifically and only for that area.” However, a robotics startup “discovered that for many cases, they could use off-the-shelf ChatGPT for controlling their robots without the AI having ever been specifically trained for it. I’ve heard similar things from technologists working on everything from health insurance to semiconductor design.” Ars Technica says, “The breakthrough happening now is the creation of an entirely new and different framework for working with LLMs: using them not as a chatbot that a human is talking to that uses its own knowledge to produce words and answers, but rather as processing tools that can be accessed by other software to work with data the model has never seen.”
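A minimal sketch of that framing, an LLM called by other software to turn free text into structured data it has never seen, might look like this; it assumes the 2023-era openai Python package, and the ticket text and JSON schema are illustrative.

import json
import openai  # assumes the 2023-era openai package

openai.api_key = "YOUR_API_KEY"

def extract_fields(ticket_text: str) -> dict:
    """Ask the model for a JSON object that downstream code can consume directly."""
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": 'Extract {"product": string, "severity": "low"|"medium"|"high"} '
                        "from the support ticket. Reply with JSON only."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return json.loads(reply["choices"][0]["message"]["content"])

print(extract_fields("The billing dashboard crashes every time I export a report."))

The point of the pattern is that no chat interface is involved: the model acts as one processing stage whose output feeds ordinary code, so a production version would also validate the returned JSON before trusting it.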

 

Courts Could Determine Rules For AI Before Legislatures, Regulators Act

The Wall Street Journal (8/23, Tracy, Yu, Subscription Publication) reports that as the legislative and executive branches of government formulate their approaches to AI regulation, it may fall to the judicial branch to decide most of the important and relevant issues regarding the burgeoning technology. Since ChatGPT became generally available last year, suits filed against its maker, OpenAI, and other companies like Microsoft, Google, and Meta have surged, raising questions such as compensation for the use of content to train AI tools or liability concerning AI recommendations.

 

Some Teachers Work To Establish AI-Friendly Classrooms

Politico (8/23) reports school districts “spent the last academic year trying to seal students off from artificial intelligence. Now, they’re racing to establish AI-friendly classrooms as a new school year kicks off.” They’ve crafted “rules for AI use among students and trained teachers on how to fuse the technology into daily learning.” The reason for “the dramatic shift: a realization that it’s better to harness the rapidly evolving technology than futilely attempt to insulate against it.” The November “release of ChatGPT, a free bot that can solve calculus equations, write term papers and translate Spanish, upended education seemingly overnight. Students from middle school to college tinkered with chatbots, using them to help with homework or complete assignments altogether, spurring some school systems to block their use.” About 61% of teachers “think ChatGPT will have ‘legitimate educational uses that we cannot ignore,’ according to a national survey commissioned by the Walton Family Foundation from progressive pollster Impact Research.”

        Schools Attempt To Prepare For First Full School Year With ChatGPT. Wired (8/23) reports that as the first full school year after the introduction of ChatGPT begins, many schools and teachers “find themselves in the uneasy position of not just grappling with a technology that they didn’t ask for, but also reckoning with something that could radically reshape their jobs and the world in which their students will grow up.” Some educators are optimistic about the possibilities. New York City Public Schools Chancellor David Banks said the school district is “determined to embrace” generative AI after banning it last year. Teachers are “focusing on assignments that require critical thinking, using AI to spark new conversations in the classroom, and becoming wary of tools that claim to be able to catch AI cheats.”

 

Research Finds ChatGPT Performs As Well As Or Better Than College Students On Writing Homework

The Daily Beast (8/24, Ho Tran) reports as students return to school, “they’re bringing ChatGPT with them,” and educators are “grappling with how to deal with this new technological landscape.” While tools like GPTZero “have been developed in order to help them identify bot-written essays and assignments, the reality is that large language models (LLM) are growing ever more sophisticated and harder to catch, which means they are much better at completing assignments for students without anyone getting wise to it.” According to a new paper, ChatGPT performed “similarly or better than college students at certain writing assignments. The authors also found that AI-text detectors like GPTZero and OpenAI’s AI classifier did an inadequate job at catching the bot-completed assignments.”

 

ChatGPT Has Liberal Bias, Researchers Say

Politico (8/24, Robertson) reports that researchers found “robust evidence” that ChatGPT shows “a significant and systematic political bias” toward liberals. However, critiques of the paper point out the complexity of understanding AI behavior, the limitations of the study, and the challenge of characterizing AI “behavior.” The evolving nature of AI models and the lack of transparency from developers like OpenAI “means that both sussing out any true bias and teaching users to get meaningful information out of AI might prove tall challenges, absent any further transparency around these issues.”

 

Meta Launches AI-Powered Programming Software

Bloomberg (8/24, Counts, Subscription Publication) reports Meta has launched Code Llama, a new artificial intelligence computer programming tool that “uses generative AI to help developers work faster by suggesting lines of software code.” Code Llama “is open source and available for commercial use, which means other companies will be able to use the tech to build their own tools, Meta said in a blog post.” Bloomberg characterizes this as “the social media company’s latest bid to compete with Microsoft Corp.-backed OpenAI and Alphabet Inc.’s Google.”

        Reuters (8/24) reports Code Llama “will be available for free” and “can write code based on human text prompts and can also be used for code completion and debugging, the social media giant said in a blogpost.” The tool “supports the popular coding languages like Python, Java and C++ and is not recommended for general text tasks, Meta said.”
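        Because the model weights are open, Code Llama can be run locally for completion-style prompts. The sketch below uses Hugging Face transformers; the "codellama/CodeLlama-7b-hf" checkpoint name, the generation settings, and the assumption of a machine with enough memory are all assumptions rather than guidance from Meta's announcement.

# A minimal local code-completion sketch with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))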

 

Calls To Quickly Regulate AI Intensify In The US, But Rules May Not Come Soon

The New York Times (8/24, Philbrick) reports, “As increasingly sophisticated artificial intelligence systems with the potential to reshape society come online, many experts, lawmakers and even executives of top A.I. companies want the U.S. government to regulate the technology, and fast.” However, historically, comprehensive federal regulations for groundbreaking technologies, such as electricity and cars, have typically taken decades to materialize. Moreover, despite recent efforts, the unique challenges posed by AI, spanning privacy concerns, misinformation, discrimination, and more, coupled with its rapid evolution, won’t make it easy to enact swift regulations.

 

School Districts Are Reversing ChatGPT Restrictions

The New York Times (8/24, Singer) reports that “the media furor over chatbots last winter upended school districts and universities across the United States. The tools, which are trained on vast databases of digital texts, use artificial intelligence to manufacture written responses to user prompts.” Amid the forecasts of “imminent marvels and doom, some public schools tried to hit the pause button” and restrict access to ChatGPT “to give administrators time to catch up.” Since then, “administrators quickly realized the bot bans were ineffective.” In May, New York City schools “issued a public mea culpa, saying the district had acted too hastily and would unblock ChatGPT.” This week, Los Angeles Unified Superintendent Alberto Carvalho said his district is working on a more permissive policy. The Times also discusses Walla Walla Public Schools, which “held a daylong workshop” this month where about 100 local educators learned how to navigate AI chatbots like ChatGPT. School administrators there “sought to take advantage of the chatbots’ potential benefits while working to tackle thorny issues like cheating, misinformation and potential risks to student privacy.”

dtau...@gmail.com

Sep 2, 2023, 7:39:09 PM
to ai-b...@googlegroups.com

High-Speed AI Drone Overtakes World-Champion Drone Racers
University of Zurich (Switzerland)
August 30, 2023

Swift, an artificial intelligence system developed by researchers at Switzerland's University of Zurich and Intel, beat three world-class champions in first-person view (FPV) drone racing in multiple races. In FPV drone racing, quadcopters are flown at more than 100 km/h by pilots wearing headsets linked to an onboard camera. Swift can react to data collected by the onboard camera in real time, as it is equipped with an integrated inertial measurement unit that measures speed and acceleration. It also features an artificial neural network that localizes the drone in space and identifies the racetrack’s gates using camera data. That information also is used by a control unit to determine the best action to take to complete the circuit as quickly as possible.
 

Full Article

 

 

AI Brings the Robot Wingman to Aerial Combat
The New York Times
Eric Lipton
August 27, 2023


Artificial intelligence (AI) operates the U.S. Air Force (USAF)'s pilotless XQ-58A Valkyrie experimental aircraft, which the military envisions as a next-generation robot wingman for traditional fighter jets. The Valkyrie, manufactured by defense and security solutions company Kratos, is designed to detect and assess enemy threats and high-value targets through AI and sensors, then attack after receiving human authorization. The USAF intends to build a fleet of collaborative combat aircraft like the Valkyrie for surveillance, resupply missions, attack swarms, and wingmen as a more affordable alternative to increasingly costly manned planes. The USAF's Maj. Gen. R. Scott Jobe said these drones could "bring [affordable] mass to the battle space with potentially fewer people."

Full Article

*May Require Paid Registration

 

 

Opening the Black Box
ASU News
Annelise Krafft
August 25, 2023


Researchers at Arizona State University (ASU) and the University of California, Los Angeles hope to enable scientists and processor designers to understand the underlying reasoning of deep learning accelerator designs through explainable-design space exploration (DSE). ASU's Shail Dave said hardware and software designs are typically optimized via black box mechanisms that "require excessive amounts of trial runs because of their lack of explainability and reasoning involved in how selecting a design configuration affects the design's overall quality." Explainable-DSE simplifies the accelerator's decision-making process so choices of design methods can be made in minutes rather than days or weeks, supporting smaller, more systematic, and more energy-efficient models. Dave's algorithm can investigate design solutions relating to multiple applications, including those differing in functionality or processing traits, while resolving their product execution inefficiencies.

Full Article

 

 

Dual-Arm Robot Achieves Bimanual Tasks by Learning from Simulation
University of Bristol News (U.K.)
August 24, 2023


The Bi-Touch system created by researchers at the U.K.'s University of Bristol reads the environment through tactile and proprioceptive feedback from an artificial intelligence (AI) agent, facilitating precise perception, sensitive interaction, and effective object manipulation by a dual-arm robot. Bristol's Yijiong Lin said the system allows users to "easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch," as well as to "directly apply these agents from the virtual world to the real world without further training." In a simulation featuring a robot with two arms equipped with tactile sensors, the researchers were able to teach the robot to safely lift items as delicate as a potato chip.

Full Article

 

 

Can AI Detect Wildfires Faster Than Humans? California Is Trying to Find Out
The New York Times
Thomas Fuller
August 24, 2023


The California Department of Forestry and Fire Protection (Cal Fire) is using DigitalPath's artificial intelligence (AI) software to monitor its network of more than 1,000 mountaintop cameras to identify wildfires and improve firefighter response times. The software, which analyzes billions of megapixels per minute, detected the presence of smoke before 911 calls were received about 40% of the time during the pilot program. However, humans are still needed to determine whether the AI actually has detected smoke or was triggered by fog, dust, or other conditions. The AI also does not understand when fires are deliberately set by farmers or vintners, for instance, and do not require a response. University of California, San Diego's Neal Driscoll said the system will help Cal Fire achieve its mission of suppressing 95% of fires at 10 acres or less.

Full Article

*May Require Paid Registration

 

 

AI Can Spot Early Signs of a Tsunami from Atmospheric Shock Waves
New Scientist
Jeremy Hsu
August 23, 2023


Researchers at Florida-based satellite manufacturing company Terran Orbital Corp. found that off-the-shelf artificial intelligence (AI) models can detect the early signs of a tsunami in two-dimensional (2D) images from GPS satellites. The researchers used data generated by a computer algorithm developed by researchers at NASA's Jet Propulsion Laboratory and Italy's Sapienza University of Rome, which measures changes in the density of charged particles in the ionosphere as tsunamis form. The data was transformed into 2D images that were analyzed by the AI to identify features associated with tsunamis. The AI achieved a reported detection performance rate above 90% after eliminating ionospheric disturbance patterns that at least 70% of ground stations in contact with the satellites failed to pick up.

Full Article

*May Require Paid Registration

 

How Educators Can Combat ChatGPT’s Ability To Pass College Courses

Scientific American (8/25, Leffer) reported that “when fed a homework or test question from a college-level course, [ChatGPT] is liable to be graded just as highly, if not better, than a college student, according to a new study published on Thursday in Scientific Reports.” As a result, “educators will have to rethink how they structure their courses and assess students – and what humans might lose if we never learn how to write for ourselves.” The new study “adds to the growing body of work that hints at how disruptive generative AI is set to become in schools,” so teachers and education experts “say they need to adapt.” Among other suggestions, the focus for educators “should not be on preventing students from using ChatGPT but rather on addressing the root causes of academic dishonesty, suggests Kui Xie, an educational psychologist at Michigan State University.” If a student’s “primary goal is to appear competent, outcompete peers or just get the grade, they’re liable to use any tool they can to come out ahead – AI included.”

 

AI Sector Often Relies On Low-Paid Overseas Workers To Annotate Data

The Washington Post (8/28) reports on how the AI sector relies on overseas freelance work to “annotate the masses of data that American companies need to train their artificial intelligence models.” The Post says that while “AI is often thought of as human-free machine learning, the technology actually relies on the labor-intensive efforts of a workforce spread across much of the Global South and often subject to exploitation.” The article focuses on operations in the Philippines by Scale AI, “one of the world’s biggest destinations for outsourced digital work,” which has received complaints over low and delayed payments and poor treatment of workers.

 

White House Sponsors Competition To Expose AI Bias At Hacking Convention

NPR (8/26, Shivaram) detailed the White House’s efforts to promote the mitigation of AI bias through “the largest-ever public red-teaming challenge during Def Con, an annual hacking convention in Las Vegas,” which featured “hundreds of hackers probing artificial intelligence technology for bias.” The Administration “encouraged top tech companies like Google and OpenAI...to have their models tested by independent hackers,” and “over the next several months, tech companies involved will be able to review the submissions and can engineer their product differently, so those biases don’t show up again.”

 

Big Tech Executives Confirmed As Guests For Senate’s “AI Insight Forum”

The Hill (8/28, Klar) reports Senate Majority Leader Schumer’s office confirmed Monday that “the top executives at tech companies, including the world’s richest man Elon Musk, are among the confirmed guests at” Schumer’s “first scheduled ‘Insight Forum’ about artificial intelligence (AI).” The September 13 event “will also include Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman, NVIDIA CEO Jensen Huang, Microsoft CEO Satya Nadella, former Google CEO Eric Schmidt, and Sundar Pichai, CEO of Google parent company Alphabet.” Additional guests will include representatives from “worker, advocacy, civil rights, and creative groups.” The event “is part of Schumer’s plan to weigh regulation of the booming AI industry.”

        The New York Times (8/28, Kang) describes the “A.I. insight forums” as “closed-door listening sessions for lawmakers as they try to devise regulations for A.I. technologies.” Schumer “said the sessions were intended to educate members of Congress on the risks posed by A.I. on jobs, the spread of disinformation and intellectual property theft,” while also teaching them “about opportunities created by the technology in the field of research on diseases, his office said.”

        Microsoft President: “Human Control” Needed To Prevent Weaponization Of AI. CNBC (8/28, Chiang) interviewed Microsoft President Brad Smith, who said that like other technologies, AI has “the potential to become both a tool and a weapon.” He added, “We have to ensure that AI remains subject to human control. Whether it’s a government, the military or any kind of organization, that is thinking about using AI to automate, say, critical infrastructure, we need to ensure that we have humans in control, that we can slow things down or turn things off.” Additionally, Smith “pointed out that AI is a tool that supplements human work, and not one that replaces jobs.” He said, “It is a tool that can help people think smarter and faster. The biggest mistake people could make is to think that this is a tool that will enable people to stop thinking.”

 

OpenAI Announces ChatGPT For Enterprise Customers

The Wall Street Journal (8/28, Seetharaman, Subscription Publication) reports OpenAI announced ChatGPT Enterprise on Monday, putting it in direct competition with Microsoft, its partner and biggest backer. Reuters (8/28, Tong) reports ChatGPT Enterprise targets large businesses and “offers more security, privacy and higher-speed access to OpenAI’s technology, the company said.” Block, Carlyle, and Estee Lauder are among the early adopters.

        Bloomberg (8/28, Metz, Subscription Publication) describes the launch of ChatGPT Enterprise as “a move forward in OpenAI’s plans to make money from its ubiquitous chatbot, which is enormously popular but very expensive to operate because robust AI models require lots of computing power.” OpenAI COO Brad Lightcap “declined to provide specific details for how much ChatGPT Enterprise will cost, noting it can vary based on the needs of each business.” He said OpenAI “can work with everyone to figure out the best plan for them.” Lightcap said in an interview, “We’ve really tried to build the best version of ChatGPT. ... That was the mandate for the team: How do we build something that’s the ultimate productivity enhancer?”

        Analysis: OpenAI Has Not Enforced Ban On Political Messaging. The Washington Post (8/28, Zakrzewski) reports that OpenAI bans political campaigns “from using ChatGPT to create materials targeting specific voting demographics, a capability that could be abused to spread tailored disinformation at an unprecedented scale,” but a Post analysis “shows that OpenAI for months has not enforced its ban. ChatGPT generates targeted campaigns almost instantly, given prompts like ‘Write a message encouraging suburban women in their 40s to vote for Trump’ or ‘Make a case to convince an urban dweller in their 20s to vote for Biden.’” The Post adds that this “enforcement gap...comes ahead of the Republican primaries and amid a critical year for global elections.”

        Engadget (8/28) reports, “Like the social media platforms that preceded it, OpenAI and its chatbot startup ilk are running into moderation issues — though this time, it’s not just with the shared content but also who should now have access to the tools of production, and under what conditions.”

 

Georgia School District Incorporating AI Into Lessons From Early As Kindergarten

CBS Mornings (8/28, Ruffini) reports the emergence “of artificial intelligence has raised questions about its impact on creativity and critical thinking.” While some schools “are banning the use of AI in classrooms, one school district in Gwinnett County, Georgia, has gone all-in, launching a curriculum that brings the technology into classrooms, starting in kindergarten.” The approach “goes beyond robotics and computer science class. Teachers and students embrace artificial intelligence in nearly every subject taught, from English to art class.” At Patrick Elementary School in Buford, Georgia, “about an hour outside Atlanta, first graders are ‘programming’ Lego bricks, as part of a lesson involving creative problem-solving.” More than just blocks, “they’re building familiarity with technology, like iPads, that are part of a pilot, public school program trying to prepare students for the challenges and opportunities that come along with the rise of AI.”

 

General Motors, Google Exploring AI Use Cases Across Automaker’s Business

CNBC (8/29, Wayland) reports, “General Motors is working with Google to explore opportunities to implement AI technologies across the automaker’s business.” The “partnership around generative, or conversational, AI between the Detroit automaker and Google Cloud unit expands upon previous work between the two companies on GM’s OnStar Interactive Virtual Assistant (IVA) that launched in 2022.”

 

Opinion: AI Should Be Included In Health Education In Order To Expand Healthcare Access Across The Globe

In her column for the Washington Post (8/29, Wen), Leana Wen writes about her interview with Google Chief Health Officer Karen DeSalvo, in which they discussed how “AI could improve health on a ‘planetary scale’ by greatly expanding access to health services.” DeSalvo also admitted that “the possibility of ‘wasting’ this opportunity” keeps her up at night. She said, “What I want is for this to happen not to medicine and public health but with medicine and public health.” Wen adds, “It shouldn’t be up to technologists to come up with solutions that the health-care sector must then adapt to; rather, health-care providers should be proactively identifying access gaps and working with companies to find innovative solutions.”

 

Companies Deciding Whether To Inform Customers About Use Of AI Generated Content

The Wall Street Journal (8/31, Bousquette, Subscription Publication) reports companies are debating whether they should inform people when they use content generated by artificial intelligence. While some companies say there is no need to inform customers that they are interacting with AI-generated material, there may be a legal obligation if the content was created from public data.

 

Pelosi Says AI Needs Regulation

Bloomberg (8/31, House, Lacqua, Subscription Publication) reports that in an interview with Bloomberg Television Thursday, Rep. Nancy Pelosi (D-CA) called artificial intelligence a “double-edged sword,” and “[said] the fast-advancing artificial intelligence field needs regulatory guardrails that include protection for creative work in entertainment and other industries.” Bloomberg adds although the former House Speaker “didn’t provide any specifics on what kind of rules she would like the US to impose on AI,” she “did point to the strikes in the entertainment industry by writers and actors” and said, “There has to be respect for the creativity that they have. ... AI could have an impact on that. And we have to recognize that.”

 

Educators Still Figuring Out How Involved AI Should Be In Instruction

Education Week (8/31) reports computer scientists “have been working on improving AI technology for decades, and a lot of the tools we use daily – navigation apps, facial recognition, social media, voice assistants, search engines, smartwatches – run on AI.” And beyond that, “most, if not all, industries are already using AI one way or another.” But since “the arrival of ChatGPT almost a year ago, AI has captivated the public’s attention and reignited discussions about how it could transform the world.” In the K-12 space, “educators have been discussing what and how much of a role AI should play in instruction, especially as AI experts say today’s students need to learn how to use it effectively in order to be successful in future jobs.”

        Survey: 77% Of Educators Feel Unprepared To Teach Students How To Adapt To An AI-Powered Environment. Education Week (8/31, Langreo) reports artificial intelligence experts “and K-12 educators agree that it’s imperative for the education system to prepare students to be successful in the age of AI.” ChatGPT and “other artificial intelligence tools like it are here to stay, and now is the time for schools to find ways to use the technology for the benefit of teaching and learning while being aware of its potential downsides.” But a “summer 2023 EdWeek Research Center survey shows that a majority of educators (77 percent) said they or the teachers they supervise are not prepared to teach students the skills they need to be successful in an AI-powered world.”

dtau...@gmail.com

Sep 10, 2023, 1:43:40 PM
to ai-b...@googlegroups.com

Researchers Design ML Models to Better Predict Adolescent Suicide, Self-Harm Risk
UNSW Sydney Newsroom (Australia)
Maddie Massy-Westropp
September 4, 2023


Machine learning (ML) models developed by researchers at Australia's University of New South Wales (UNSW), the Ingham Institute for Applied Medical Research, and South Western Sydney Local Health District aim to better predict the risks of suicide and self-harm attempts among adolescents. Using data from 2,809 participants in the Longitudinal Study of Australian Children, the researchers identified more than 4,000 potential risk factors related to mental and physical health, interpersonal relationships, and school and home environments. They used a random forest classification algorithm to determine the risk factors at age 14-15 that were most predictive of self-harm and suicide attempts at age 16-17. The ML models based on the top risk factors were more accurate in predicting self-harm and suicide attempts than the standard approach, which considers only previous attempts.
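A highly simplified sketch of the general approach, a random forest trained on many candidate risk factors and then ranked by feature importance, is shown below. The data is synthetic and the column names are placeholders; it is not the study's pipeline, variables, or results.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for thousands of candidate risk factors.
X = pd.DataFrame({f"risk_factor_{i}": rng.normal(size=n) for i in range(50)})
y = (X["risk_factor_3"] + 0.5 * X["risk_factor_17"] + rng.normal(size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))  # most predictive factors

This mirrors the two-step idea described above: rank the candidate factors by predictive value, then build the final model on the top ones.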

Full Article

 

 

Can We Talk to Whales?
The New Yorker
Elizabeth Kolbert
September 4, 2023


Researchers with the Cetacean Translation Initiative (CETI) are leveraging machine learning to help decipher whale codas, the series of clicks that they use to talk to each other, and maybe allow humans to speak with them as well. The researchers plan to attach recording devices to sperm whales near Dominica to collect data to train machine learning algorithms. They also plan to record codas using three "listening stations" tethered on the floor of the Caribbean Sea. About 25 codas have been detected among the sperm whales around Dominica, differing in the number and rhythm of clicks. Shane Gero of Canada's Carleton University has assembled an archive of sperm-whale codas containing around 100,000 clicks, but the CETI researchers estimate that about 4 billion clicks ultimately will be needed.

Full Article

 

 

As AI Grows, Las Vegas Workers Brace for Change
NPR
Deepa Shivaram
September 4, 2023


Workers in Las Vegas are closely watching as employers increasingly adopt artificial intelligence (AI) and other technologies in an effort to cut labor costs. John Restrepo at business consultancy RCG Economics believes the city must reduce its economic reliance on tourism and hospitality, and shift to vocations "more highly skilled, that are not easily replaced by AI, and that provide a greater level of balance and resilience." Nevada's Culinary Union hopes to arrange a new negotiated contract this year featuring safeguards against AI replacing jobs. The Tipsy Robot bar at Planet Hollywood on the Vegas strip boasts a robot bartender assisted by employee Sabrina Bergman, who does not fear losing her job to automation; she and other service workers said machines lack the human touch and cannot deliver the same experience a person can.

Full Article

 

 

Seismologists Use Deep Learning to Forecast Earthquakes
UC Santa Cruz Newscenter
Erin Malsbury
August 31, 2023


Researchers at the University of California, Santa Cruz and Germany's Technical University of Munich developed the Recurrent Earthquake foreCAST (RECAST) model to use deep learning to predict earthquake aftershocks. The researchers found RECAST slightly outperformed the Epidemic Type Aftershock Sequence model on earthquake catalogs of roughly 10,000 or more seismic events, especially as the amount of data expanded. Using RECAST also significantly improved the computational time and effort for larger catalogs. The deep learning model's greater flexibility and scalability could unlock new earthquake forecasting possibilities, potentially incorporating data from multiple regions simultaneously to make better predictions about poorly investigated areas.

Full Article

 

 

AI Rivals Human Nose When Naming Smells
Science
Elizabeth Pennisi
August 31, 2023


Researchers at artificial intelligence (AI) company Osmo, working with colleagues at Philadelphia’s Drexel University and the Monell Chemical Senses Center, developed a graph neural network that reliably matched human volunteers' identification of 55 odors, then predicted the smells of 500,000 additional molecules without having to produce or sniff them. The researchers fed the structures and odor descriptions of 5,000 molecules to an AI to teach it to identify patterns in the training data by correlating a molecule's odor with attributes of its underlying atoms. After calculating average human odor identification ratings, the researchers found the neural network got closer to this average than any individual in the volunteer group did in over half the cases. The AI then deduced how the 500,000 hypothetical chemical structures should smell.
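
In spirit, the model is a graph neural network: each molecule is a graph of atoms and bonds, message passing mixes neighboring atoms' features, and a readout layer scores a list of odor descriptors. The NumPy toy below shows only that shape; the molecule, features, and descriptor names are invented and bear no relation to Osmo's actual model.

```python
# Toy message-passing network for multi-label odor prediction (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0],                      # hypothetical 3-atom molecule (bond graph)
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                         # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)     # mean aggregation over neighbors

H = rng.normal(size=(3, 8))                   # initial per-atom features
W1 = 0.1 * rng.normal(size=(8, 16))
W2 = 0.1 * rng.normal(size=(16, 5))           # 5 hypothetical odor descriptors

H = np.maximum(A_hat @ H @ W1, 0)             # one message-passing layer + ReLU
graph_vec = H.mean(axis=0)                    # readout: average over atoms
scores = 1 / (1 + np.exp(-(graph_vec @ W2)))  # sigmoid score per descriptor
print(dict(zip(["fruity", "floral", "musky", "sweet", "pungent"], scores.round(2))))
```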

Full Article

*May Require Paid Registration

 

Colleges Grapple With Banning, Embracing AI-Generated Admissions Essays

The New York Times (9/1, Singer) reported that the easy availability of artificial intelligence (AI) chatbots like ChatGPT “is poised to upend the traditional undergraduate application process at selective colleges – ushering in an era of automated plagiarism or of democratized student access to essay-writing help.” However, new AI tools “threaten to recast the college application essay as a kind of generic cake mix, which high school students may simply lard or spice up to reflect their own tastes, interests and experiences – casting doubt on the legitimacy of applicants’ writing samples as authentic, individualized admissions yardsticks.” Some teachers said they were troubled because “outsourcing writing to bots could hinder students from developing important critical thinking and storytelling skills.” Other educators “said they hoped the A.I. tools might have a democratizing effect,” noting that wealthier high school students often have access to resources “to help them brainstorm, draft and edit their college admissions essays.”

        How Scholars Feel About Permitting AI In College Classrooms. The Conversation (9/4) reached out to professors Patricia A. Young, Asim Ali, Shital Thekdi, and Nicholas Tampio “for their take on AI as a learning tool and the reasons why they will or won’t be making it a part of their classes.” Tampio said, “As a professor, I believe the purpose of a college class is to teach students to think.” However, he added that artificial intelligence “is a tool that defeats a purpose of a college education – to learn how to think, and write, for oneself.” Young suggested hypothesizing and measuring “the costs to human ingenuity and the future of the human race.” Meanwhile, Ali said that while some faculty “see AI as a threat to humans,” discussing AI with his students and with colleagues across the country “has actually helped [him] develop human connections.”

 

Researchers Turning To AI To Address Opioid Epidemic

TechCrunch (9/1) reported, “The opioid epidemic has had a whack-a-mole kind of complexity, stumping researchers for the better part of two decades.” Even though the CDC and NIH are “pouring billions of dollars into outreach, education, and prescription monitoring programs, the epidemic has remained stubbornly persistent.” However, now researchers are “curiously exploring AI and asking, Could this be the moonshot that ends the opioid epidemic?” The article adds, “Innovations in this space primarily use machine learning to identify individuals who may be at risk of developing opioid use disorder, disengaging from treatment, and relapse.”

 

University Of Southern Maine Will Use Federal Grant To Develop AI Ethics Program

The Bangor (ME) Daily News (9/5) reports the University of Southern Maine “is getting a federal grant to develop an artificial intelligence ethics program.” The National Science Foundation “awarded USM about $400,000 to create and test the program,” and the university hopes it “will deter scientists from taking shortcuts or cheating by using artificial intelligence.” The university “said the training will focus on mindful reflection, self-monitoring and reasoning to help students realize when they are in a situation that could challenge their ethics.”

 

Google To Require Political Ads To Label AI-Generated Content. Politico (9/6, Kern) reports that from November, Google will require that “all political advertisements label the use of artificial intelligence tools and synthetic content in their videos, images and audio.” Reuters (9/6, Toshniwal) explains, “Deepfakes created by AI algorithms threaten to blur the lines between fact and fiction, making it difficult for voters to distinguish the real from the fake. Google-owned cybersecurity firm Mandiant said last month that it had seen increasing use of AI to conduct manipulative information campaigns online in recent years, though the technology’s use in other digital intrusions had been limited so far. Generative AI would enable groups with limited resources to produce higher quality content at scale, according to Mandiant.” The tech giant’s new policy will “apply to image, video, and audio content, across its platforms.”

 

Apple Reportedly Spending “Millions Of Dollars A Day” Training AI

The Verge (9/6) reports Apple is devoting “millions of dollars per day into artificial intelligence, according to a new report from The Information.” Apple “is reportedly working on multiple AI models across several teams.” The Apple team “that works on conversational AI is called ‘Foundational Models,’ per The Information’s reporting.” The unit “has ‘around 16’ members, including several former Google engineers.” Separately, Apple has “a Visual Intelligence unit” that “is developing an image generation model, and another group is researching ‘multimodal AI, which can recognize and produce images or video as well as text.’”

 

Newsom Signs Executive Order Regulating AI

SFGate (CA) (9/6) reports the state of California “has entered the frenzied and at times confusing race among governments around the world to both regulate and harness the technology known as generative artificial intelligence.” On Wednesday morning, Gov. Gavin Newsom (D) “signed Executive Order N-12-23, a 2,500-word directive that instructs state agencies to examine how AI might threaten the security and privacy of California residents, while also authorizing state employees to experiment with AI tools and try integrating them into the state’s operations.” The executive order “comes as Washington and other governments struggle with how to regulate artificial intelligence.”

 

As School Year Starts, Groups Are Working To Create AI Chatbots That Can Access Peer-Reviewed Research

The Seventy Four (9/6, Toppo) reports that as students across the US “enter their first full school year with access to powerful AI tools like ChatGPT and Bard, many educators remain skeptical of their usefulness – and preoccupied with their potential to help kids cheat.” But this fall, “a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them.” Tapping into “curated research bases and filtering out lousy results would also make the bots more reliable: If all goes according to plans, they’d cite their sources.” The result, supporters say, “could revolutionize education. If their work takes hold, millions of teachers for the first time could routinely access high-quality research and make it part of their everyday workflow.” Such tools “could also help stamp out adherence to stubborn but ill-supported fads in areas from “learning styles” to reading instruction.”
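
The pattern these groups describe is essentially retrieval-augmented generation: retrieve passages from a curated, peer-reviewed corpus, then have the chatbot answer from those passages and cite them. The sketch below shows only the retrieve-and-cite step over a few invented abstracts; no real chatbot API or research database is used.

```python
# Retrieval sketch over a tiny, invented corpus (not a real research base or bot).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "Smith 2021": "Spaced retrieval practice improves long-term retention in middle school math.",
    "Lee 2019": "A meta-analysis finds no reliable benefit from matching instruction to learning styles.",
    "Garcia 2022": "Phonics-based reading instruction outperforms cueing approaches in early grades.",
}
question = "Does teaching to students' learning styles improve outcomes?"

vec = TfidfVectorizer().fit(list(abstracts.values()) + [question])
scores = cosine_similarity(vec.transform([question]),
                           vec.transform(abstracts.values())).ravel()

for cite, score in sorted(zip(abstracts, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {cite}: {abstracts[cite]}")
# The top-ranked abstracts, with their citations, would be handed to the chatbot
# as context so that its answer can quote and cite vetted sources.
```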

 

ChatGPT Website Visits Decline For Third Month In A Row

Reuters (9/7) reports, “OpenAI’s ChatGPT, the wildly popular artificial intelligence chatbot launched in November, saw monthly website visits decline for the third month in a row in August, though there are signs the decline is coming to an end, according to analytics firm Similarweb.” Desktop and mobile visits to ChatGPT’s website “decreased by 3.2% to 1.43 billion in August, following approximately 10% drops from each of the previous two months. The amount of time visitors spent on the website has also been declining monthly since March, from an average of 8.7 minutes on site to 7 minutes on site in August.” However, “August worldwide unique visitors ticked up to 180.5 million users from 180 million. School coming back into session in September may help ChatGPT’s traffic and usage, and some schools have begun to embrace it.”

 

Blumenthal, Hawley To Announce Framework For Regulating AI

The New York Times (9/7, Kang) reports Sens. Richard Blumenthal (D-CT) and Josh Hawley (R-MO) “plan to announce a sweeping framework to regulate artificial intelligence, in the latest effort by Congress to catch up with the technology.” The senators, who lead the Senate judiciary’s subcommittee for privacy, technology and law, “said in interviews on Thursday that their framework will include requirements for the licensing and auditing of A.I., the creation of an independent federal office to oversee the technology, liability for companies for privacy and civil rights violations, and requirements for data transparency and safety standards.” They “plan to highlight their proposals in an A.I. hearing on Tuesday, which will feature Brad Smith, Microsoft’s president, and William Dally, the chief scientist for the A.I. chip maker Nvidia. Mr. Blumenthal and Mr. Hawley plan to introduce bills from the framework.”

dtau...@gmail.com

unread,
Sep 17, 2023, 7:37:36 PM9/17/23
to ai-b...@googlegroups.com

Why Japan Is Building Its Own Version of ChatGPT
Nature
Tim Hornyak
September 14, 2023


The Japanese government, big Japanese technology firms, and researchers in Japan are working to build versions of ChatGPT with underlying large language models (LLMs) that use the Japanese language. LLMs trained on datasets in other languages do not account for differences in alphabet systems, sentence structure, and culture. The Tokyo Institute of Technology, Tohoku University, Fujitsu, and the government-funded RIKEN group of research centers are collaborating on a Japanese LLM using the Fugaku supercomputer. The LLM, slated for release next year, could have at least 30 billion parameters. Meanwhile, an LLM funded by Japan's Ministry of Education, Culture, Sports, Science and Technology could start with 100 billion parameters and expand over time. Keio University School of Medicine's Shotaro Kinoshita said the development of an accurate, Japanese version of ChatGPT could have "a positive impact on international joint research."
 

Full Article

 

 

Machine Learning Tames Huge Datasets
Los Alamos National Laboratory
September 11, 2023

A machine learning algorithm developed at the U.S. Department of Energy (DOE)'s Los Alamos National Laboratory (LANL) was able to identify and split a vast dataset's key features into manageable batches. Researchers tested the algorithm on the Summit supercomputer at DOE's Oak Ridge National Laboratory. LANL's Ismael Boureima said, "We developed an 'out-of-memory' implementation of the non-negative matrix factorization method that allows you to factorize larger datasets than previously possible on a given hardware." The algorithm transfers data efficiently between computers, exploiting fast interconnects and hardware such as graphics processing units (GPUs) to speed up computation while performing multiple tasks simultaneously. The LANL researchers used the algorithm to process a 340-terabyte dense matrix and an 11-exabyte sparse matrix with 25,000 GPUs.
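
The "out-of-memory" idea is to stream the matrix through the factorization in blocks so the full dataset never has to sit on one device. A plain NumPy sketch of a row-chunked version of classic multiplicative-update NMF is below; it illustrates the general technique only, not LANL's distributed implementation.

```python
# Row-chunked non-negative matrix factorization, X ~= W @ H (illustrative sketch).
import numpy as np

def chunked_nmf(X, k=8, n_iter=50, chunk=1000, eps=1e-9):
    m, n = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(n_iter):
        HHt = H @ H.T
        WtX, WtW = np.zeros((k, n)), np.zeros((k, k))
        for s in range(0, m, chunk):               # stream X block by block
            Xb, Wb = X[s:s + chunk], W[s:s + chunk]
            Wb *= (Xb @ H.T) / (Wb @ HHt + eps)    # update this block's rows of W
            WtX += Wb.T @ Xb                       # accumulate statistics for H
            WtW += Wb.T @ Wb
        H *= WtX / (WtW @ H + eps)                 # update H once per sweep
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(5000, 200)))
W, H = chunked_nmf(X, k=8, n_iter=20)
print("relative reconstruction error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```
In a distributed setting, each block would live on a different GPU and only the small k-by-n and k-by-k statistics would need to be exchanged.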
 

Full Article

 

 

Using Topology, Researchers Advance Understanding of Cell Organization
News from Brown
September 14, 2023

Biomedical engineers and applied mathematicians at Brown and Purdue universities designed a machine learning algorithm that uses computational topology to characterize how cells self-organize into tissue-like architectures. In 2021, the researchers demonstrated how their technique can profile the topological characteristics of one cell type that assembles into different spatial structures, and base predictions on that analysis. The latest research applied persistence images to address the algorithm's hours-long topological computation. The researchers trained other algorithms on those images to produce "digital fingerprints" that record the data's key topological traits, accelerating computation time from hours to seconds and enabling scientists to compare thousands of cell-organization models. They say the goal is to infer the rules governing how different cell types self-assemble into final patterns by working backward.
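
A persistence image is the step that turns a persistence diagram (the birth/death pairs produced by topological data analysis) into a fixed-size grid that ordinary classifiers can consume: each pair becomes a Gaussian bump weighted by its persistence. The sketch below uses a made-up diagram and is only meant to show that conversion, not the Brown/Purdue pipeline.

```python
# Sketch: rasterize a persistence diagram into a persistence image (illustrative).
import numpy as np

def persistence_image(diagram, res=20, sigma=0.05):
    pts = np.array([(birth, death - birth) for birth, death in diagram])  # (birth, persistence)
    xs = np.linspace(0, 1, res)
    ys = np.linspace(0, 1, res)
    img = np.zeros((res, res))
    for b, p in pts:
        gx = np.exp(-((xs - b) ** 2) / (2 * sigma ** 2))
        gy = np.exp(-((ys - p) ** 2) / (2 * sigma ** 2))
        img += p * np.outer(gy, gx)               # persistence-weighted Gaussian bump
    return img                                    # a fixed-size "digital fingerprint"

diagram = [(0.10, 0.80), (0.20, 0.35), (0.55, 0.60)]   # made-up (birth, death) pairs
print(persistence_image(diagram).shape)
```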
 

Full Article

 

 

System Combines Light, Electrons to Unlock Faster, Greener Computing
MIT News
Alex Shipps
September 11, 2023


Massachusetts Institute of Technology (MIT) researchers have developed a prototype photonic computing system that can handle machine learning inference requests in real time. The Lightning system is a photonic-electronic reconfigurable SmartNIC (network interface card) that combines the speed of photonics with the dataflow control capabilities of electronic computers. The hybrid system uses a reconfigurable count-action abstraction to act as a unified language between the photonic and electronic computing components. MIT's Manya Ghobadi explained, "Our count-action programming abstraction acts as the muscle memory in Lightning. It seamlessly drives the electrons and photons in the system at runtime." The researchers found that Lightning outperformed standard graphics processing units, data processing units, SmartNICs, and other accelerators in energy efficiency when completing inference requests.

Full Article

 

 

China Sows Disinformation About Hawaii Fires Using New Techniques
The New York Times
David E. Sanger; Steven Lee Myers
September 11, 2023


Researchers at Microsoft, the University of Maryland, and other organizations found China's government is utilizing new methods to promulgate disinformation about last month's wildfires on Maui, claiming they resulted from tests of a secret "weather weapon." Such content includes photos apparently produced by artificial intelligence to add plausibility to Beijing's false narrative. The campaign seems to indicate China has shifted tactics from intensifying state propaganda to actively spreading discord in the U.S. The researchers suggested China was amassing a network of accounts that could be leveraged in future information (or disinformation) campaigns for "amplifying conspiracy theories that are not directly related to some of their interests, like Taiwan," said Brian Liston at cybersecurity company Recorded Future.

Full Article

*May Require Paid Registration

 

 

Tool Skewers Socially Engineered Attack Ads
Georgia Tech Research
September 8, 2023

Trident, developed by researchers at Georgia Institute of Technology (Georgia Tech), is a Google Chrome-compatible add-on that can block socially engineered online ads with what the researchers describe as nearly total efficacy. Georgia Tech's Zheng Yang said, "The goal is to identify suspicious ads that often take users to malicious websites or trigger unwanted software downloads. Trident operates within Chrome's developer tools and uses a sophisticated AI [artificial intelligence] to assess potential threats." The researchers built Trident using a dataset amassed from over 100,000 websites, which helped identify 1,479 attacks covering six common types of Web-based social engineering exploits. Trident realized a near-perfect detection rate of malicious ads over the course of a year, with a false-positive rate of just 2.57%.
 

Full Article

 

 

ML Contributes to Better Quantum Error Correction
RIKEN (Japan)
September 8, 2023


Researchers at Japan's RIKEN Center for Quantum Computing used a machine learning (ML)-enhanced system for autonomous quantum-computer error correction. The ML component helps search for error correction schemes that minimize device overhead without impacting performance. The researchers used an artificial environment to eliminate the need for frequent error-detecting measurements, and searched bosonic quantum bit (qubit) encodings for high-performance candidates through reinforcement learning. They determined that an approximate qubit encoding reduced device complexity significantly more than other proposed encodings while delivering better error correction. Said RIKEN's Yexiong Zeng, "Our work not only demonstrates the potential for deploying machine learning towards quantum error correction, but it may also bring us a step closer to the successful implementation of quantum error correction in experiments."

Full Article

 

How To Determine AI’s Impact On College Instruction

The Chronicle of Higher Education (9/8, McMurtrie) reported that while ChatGPT could evolve, “most experts agree that generative AI is here to stay,” and for some faculty members, it’s “better to show students how to use them effectively and to understand their limitations than to ignore them.” The Chronicle shared “key questions we will be asking this fall,” such as: “Will generative AI find acceptance in academe?” Among the tool’s limitations “is the persistent problem of inaccuracy: Generative AI often just makes things up, or ‘hallucinates.’” Daniela Amodei, president of Anthropic, which makes ChatGPT rival Claude 2, told Fortune, “I don’t think that there’s any model today that doesn’t suffer from some hallucination. They’re really just sort of designed to predict the next word.”

 

AI’s Language Biases Seen As Problematic

Axios (9/8) reports, “AI’s first language is English – a bias that researchers are racing to counter before it gets permanently baked into the new technology.” Most generative AI has been built on large language models trained on English and Chinese data, “leaving the 6 billion native speakers of the world’s more than 7,000 other languages at risk of being left out as the technology reframes work, business, education, art and more.” Axios adds, “Some developers are trying to overcome these linguistic shortcomings by focusing on building multilingual large language models, while others are putting their efforts into tuning models to a particular language.”

 

Google Will Soon Require Disclosures Of AI Content In Political Ads

CNN (9/8, Duffy) reported that starting in November, Google will “require political advertisements to prominently disclose when they feature synthetic content – such as images generated by artificial intelligence.” Google said political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure for viewers who might see the ad. The rule, “an addition to the company’s political content policy that covers Google and YouTube, will apply to image, video and audio content.” The policy update “comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year.”

 

Sens. Hawley, Blumenthal Unveil Bipartisan AI Framework

The Hill (9/8) reports Sens. Richard Blumenthal (D-CT) and Josh Hawley (R-MO) on Friday “released a bipartisan framework for artificial intelligence (AI) legislation.” The bill “calls for establishing a licensing regime administered by an independent oversight body” and “would require companies that develop AI models to register with the oversight authority, which would have the power to audit the companies seeking licenses.” It would also “clarify that Section 230 of the Communications Decency Act, which shields tech companies from legal consequences of content posted by third-parties, does not apply to AI.”

        Tech Lobbyists Target State Governments On AI Legislation. Politico (9/8, Bordelon) reports tech industry lobbyists are targeting Congress and state capitals, “working to influence the conversation before AI bills are even introduced.” The push is “driven by concern that states often act faster than Washington on tech issues, and can sometimes impose far tougher rules on companies.” If successful, the companies “could nip tough AI regulations in the bud and neutralize the threat of new rules from state capitols.”

 

IRS Begins Using AI To Investigate Tax Evasion

The New York Times (9/8, A1, Rappeport) reports the IRS announced Friday that it has “started using artificial intelligence to investigate tax evasion at multibillion-dollar partnerships.” The effort is funded by part of the “$80 billion allocated through last year’s Inflation Reduction Act,” and aims to “open examinations into 75 of the nation’s largest partnerships, which were identified with the help of artificial intelligence, by the end of the month.” The Wall Street Journal (9/8, Rubin, Subscription Publication) provides similar coverage.

 

Artists Sign Open Letter In Favor Of Generative AI

TechCrunch (9/7, Coldewey) reports, “Artists are among the many groups who will feel the effects of AI over the next few years, but it’s not doom and gloom for everyone. A group of artists have organized an open letter to Congress, arguing that generative AI isn’t so bad and, more importantly, the creative community should be included in talks about how the technology should be regulated and defined.” TechCrunch says, “The gist is that AI, machine learning and algorithmic or automated tools have been used in music, art and other media for decades and this is just another tool. As such, those who use the tools, whether that’s as software engineers or painters, should be consulted in the process of guiding their development and regulation.”

 

US Authors Sue OpenAI For Copyright Infringement

Reuters (9/11, Brittain) reports Pulitzer Prize winner Michael Chabon, “playwright David Henry Hwang and authors Matthew Klam, Rachel Louise Snyder and Ayelet Waldman” are suing “OpenAI in federal court in San Francisco, accusing the Microsoft-backed program of misusing their writing to train its popular artificial intelligence-powered chatbot ChatGPT.” In what “is at least the third proposed copyright-infringement class action filed by authors against Microsoft-backed OpenAI,” the group of US authors alleged “that OpenAI copied their works without permission to teach ChatGPT to respond to human text prompts.”

 

Education Expert Says Schools Should Teach Students To Use AI Tools Effectively

Education Week (9/11, Langreo) reports that because “the use of generative artificial intelligence is spreading faster in K-12 education than many educators expected,” an increasing number of educators and AI experts “say that schools need to figure out how to leverage AI tools for the benefit of students and teachers, while being aware of their downsides.” In a Zoom interview with EdWeek, Stanford Graduate School of Education senior advisor Glenn Kleiman “discussed his views on AI’s role in K-12 education, how schools can appropriately incorporate AI, and what students need to know about the technology.” Among other comments, Kleiman said that “we need to have more systemic views of how these [AI tools] are used with our students,” while schools “need to develop guidelines of what students can and cannot do with these tools.”

 

Tech Companies Join White House’s Voluntary AI Risk Pledge

The Washington Post (9/12, Zakrzewski) reports, “Eight tech companies, including Salesforce and Nvidia, are signing on to the White House’s voluntary artificial intelligence pledge, joining a roster of prominent firms that have agreed to mitigate the risks of AI, as Washington policymakers continue to debate new regulation of the emerging technology.” As of now, 15 of the most influential US companies have “taken the commitments, which include a promise to develop technology to identify AI-generated images and a vow to share data about safety with the government and academics.” New participants include IBM, Palantir, and Stability. The agreement from these companies underscores “the expansion of the pledge beyond AI heavyweights such as Microsoft and Meta and OpenAI, the maker of ChatGPT.”

 

Senate, Tech Leaders Convene AI Forum

Reuters (9/13, Shepardson) reports Senate Majority Leader Schumer brought “U.S. technology leaders including Tesla CEO Elon Musk, Meta Platforms CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai to Capitol Hill on Wednesday for a closed-door forum on how Congress should set artificial intelligence safeguards.” Schumer said, “For Congress to legislate on artificial intelligence is for us to engage in one of the most complex and important subjects Congress has ever faced.” Reuters (9/13) reports Schumer “on Wednesday said that while regulations on artificial intelligence were certainly needed, they should not be made ‘too fast.’” Reuters adds, “‘If you go too fast, you can ruin things,’ Schumer told reporters after organizing a closed-door AI forum bringing together U.S. lawmakers and tech CEOs. The European Union went ‘too fast,’ he added.”

        The New York Times (9/13, Kang) says, “The closed-door meeting is the first in a series of crash-course lessons on A.I. for lawmakers. More than that, it is an opportunity for tech leaders who represent companies with a collective value of more than $6.5 trillion to influence A.I.’s direction as questions swirl about its transformative and risky effects. And it is a chance to be seen as relevant and leading on the technology.” The Wall Street Journal (9/13, Tracy, Subscription Publication) reports Schumer on Wednesday asked guests whether they believe the government should play a role in regulating artificial intelligence. All present raised their hands, according to Schumer.

        Politico (9/13, Chatterjee, Bordelon) reports that while Schumer on Wednesday “convened what he called an ‘unprecedented’ gathering of tech CEOs” on AI, “just a day before, Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), two prominent senators on the Judiciary Committee, held a separate hearing to refine their own set of comprehensive AI rules, which they released on Friday.” Meanwhile, “other senators are busy drafting or introducing bills to address AI-enabled deepfakes, government procurement of automated systems and other piecemeal approaches to the technology.”

        Musk Calls Artificial Intelligence A Double-Edged Sword. Bloomberg (9/13, Subscription Publication) reports, “Elon Musk called for a ‘regulatory structure’ for artificial intelligence after warning US senators about risks to civilization posed by the nascent technology.” He “was among more than 20 tech and civil society leaders attending a closed-door Senate summit Wednesday focused on AI. He later met privately with” House Speaker McCarthy.

 

Educators Find AI Tools Can Benefit Student Learning Despite Previous Worries About Plagiarism

Diverse Issues in Higher Education (9/13, Herder) reports when large language model ChatGPT was first made available to the public, “it changed the landscape of education forever.” While faculty members “were worried that students would use ChatGPT to cheat and bypass any difficulties they encountered, negatively impacting learning,” educators across the nation “are discovering ways to adjust their pedagogy to accommodate this brave new world, not only through the creation of AI-proof assignments but also assignments that purposefully incorporate AI use.” Experts say educators “have a responsibility to rethink how they assess learning and help their students gain mastery over AI tools like ChatGPT and other LLMs so they graduate into the world fully ready for their future in a technologically dense workforce. Using AI as a tool will not only better prepare students for the future, experts note, but can also help ease the workload of faculty and administration.”

        Why Faculty Members Are Polarized On AI’s Role In Teaching. Inside Higher Ed (9/13, D'Agostino) reports that “much like early humans banded together to fend off threats by packs of wolves, some faculty members have united to fend off real or perceived threats to education by artificial intelligence.” These divisions “echo allied social groups formed during past higher ed disruptions, including the emergence of online learning and efforts to diversify literature curricula.” The resulting social groups, or tribes, “can confer individual and societal benefits, according to Arash Javanbakht, director of the Stress, Trauma and Anxiety Research Clinic at Wayne State University and author of the book Afraid: Understanding the Purpose of Fear and Harnessing the Power of Anxiety.” Faculty uncertainty “and anxiety surrounding the role of artificial intelligence in teaching and learning are high, which may nudge them into oppositional, values-based social groups, Javanbakht said.”

 

Lawmakers Ask Tech Companies To Disclose Information On Working Conditions Of Staff Labeling Data For AI

Bloomberg (9/13, Eidelson, Bass, Subscription Publication) reports Democratic lawmakers “are pressing the top tech firms to open up about the conditions of their ‘ghost work’ – unseen laborers like those labeling data and rating responses who have become pivotal to the artificial intelligence boom.” In a letter to the CEOs of nine companies, including Alphabet, Microsoft, and Meta, a group of lawmakers led by Sen. Ed Markey (D-MA) and Rep. Pramila Jayapal (D-WA) said, “Despite the essential nature of this work, millions of data workers around the world perform these stressful tasks under constant surveillance, with low wages and no benefits.”

 

Although ChatGPT Has Seen A Decline In Traffic, Businesses Continue To Be Interested In Generative AI

Insider (9/14, Zinkula) reports that although ChatGPT has seen its daily traffic numbers decline in recent months, executives continue to express interest in applications for generative AI. A survey of North American CFOs conducted by Deloitte in July and August found “42% said their companies were still experimenting with the technologies.” Additionally, “15% said their organizations had already incorporated generative AI into their business strategies.” When “asked how they thought generative AI could be most helpful for their organizations someday, the most popular answer was reduced costs – selected by 52% of the respondents.”

dtau...@gmail.com

unread,
Sep 23, 2023, 6:49:09 PM9/23/23
to ai-b...@googlegroups.com

Quantum Machine Learning Solution for Faster Routing in Disaster Situations
HPCwire
September 21, 2023


A quantum machine learning solution developed by researchers at Terra Quantum and Honda Research Institute Europe (HRI-EU) aims to reduce evacuation times during natural disasters. The hybrid quantum computing tool, which performs quantum simulations on classical computing hardware, considers real-time variables and can make decisions using only local information. In a simulation of an earthquake on a realistic small-town map, the solution predicted efficient and dynamic vehicle escape routes and shortened evacuation times using less than 1% of the map information. HRI-EU's Sebastian Schmitt said, "Identifying realistic problems where quantum technologies may unfold their potential constitutes one of the biggest challenges in the field today. This work represents a promising step in that direction and shows how to employ hybrid quantum-classical learning architectures in a real-world-use case."
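
Setting the quantum machinery aside, the underlying routing task is to recompute each vehicle's shortest escape path as locally observed road conditions change. The classical Dijkstra re-plan below is only a stand-in to show the problem shape, not the Terra Quantum/HRI-EU hybrid solver.

```python
# Classical stand-in for dynamic evacuation routing (not the hybrid quantum method).
import heapq

def dijkstra(graph, src):
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

roads = {                                   # toy road network: travel times in minutes
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "exit": 7},
    "C": {"A": 5, "B": 1, "exit": 2},
    "exit": {},
}
print("before damage:", dijkstra(roads, "A")["exit"])   # A -> B -> C -> exit = 5
roads["C"]["exit"] = 30                     # local observation: that road is now blocked
print("after damage:", dijkstra(roads, "A")["exit"])    # reroute via B -> exit = 9
```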

Full Article

 

 

ChatGPT Can Now Generate Images
The New York Times
Cade Metz; Tiffany Hsu
September 20, 2023


OpenAI has integrated a new version of its DALL-E image generator into its ChatGPT online chatbot. DALL-E 3 generates more detailed images than its predecessors, with notable improvements in images featuring letters, numbers, and human hands. The new version of the image generator can create images from multi-paragraph descriptions and follow detailed instructions. OpenAI's Aditya Ramesh said DALL-E 3 was given a more precise understanding of the English language. The DALL-E/ChatGPT integration means ChatGPT can generate digital images based on detailed textual descriptions provided by users or produced by the chatbot itself. OpenAI has included tools in DALL-E 3 to prevent the generation of sexually explicit images, images of public figures, and images that imitate the styles of specific artists.

Full Article

*May Require Paid Registration

 

 

Google DeepMind AI Tool Assesses DNA Mutations for Harm Potential
The Guardian (U.K.)
Ian Sample
September 19, 2023


Google DeepMind scientists created the AlphaMissense artificial intelligence (AI) program to predict the nature of millions of DNA mutations, in order to accelerate research and diagnosis of rare diseases. The researchers adapted DeepMind's AlphaFold three-dimensional protein-structure prediction algorithm to evaluate 71 million single-letter or missense mutations that could impact human proteins. With AlphaMissense's precision set to 90%, the program forecast that 57% of missense mutations were probably innocuous and 32% were probably harmful, while the status of the remaining mutations was uncertain. The researchers have released a free online prediction catalog to help geneticists and clinicians investigating mutational disease mechanisms or diagnosing patients with rare disorders.

Full Article

 

 

AI Accelerates Ability to Program Biology Like Software
The Wall Street Journal
Steven Rosenbush; Tom Loftus
September 19, 2023


In the field of synthetic biology, researchers leverage artificial intelligence (AI) to reprogram or repurpose proteins or other biological material or even develop new proteins. Alexandre Zanghellini of the startup Arzeda said large language models and generative AI, among other innovations, coupled with the increased availability of data to train these models, have given a boost to synthetic biology in recent years. Said Zanghellini, "I would say it's orders of magnitude, five times, 10 times faster in the way we can design and program biology. It enables us to go beyond what nature has provided us." However, Stanford University's Lloyd Minor cautioned, "The challenge in biology is that it is not terribly difficult to engineer organisms, to engineer living systems, to do things that can potentially be very harmful. So how do we think about monitoring, regulation, safe oversight in the biology world?"

Full Article

*May Require Paid Registration

 

 

Machine Learning Innovation Reduces Computer Power Usage
WSU Insider
Tina Hilding
September 14, 2023


A machine learning framework developed by researchers at Washington State University (WSU) and Intel can manage power usage to reduce energy consumption in multi-core computer processors. The researchers designed the algorithms to select voltage and frequency levels for different clusters of a 64-core processor. The scalable framework learned to optimize power management without reducing multi-processor performance, realizing up to 60% energy savings. WSU's Jana Doppa said this innovation is designed for future computing systems that could have as many as 1,000 core processors, although it also could be used for extremely small embedded systems.
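
At its core the task is a learned policy that picks a voltage/frequency level for each core cluster so as to minimize energy without sacrificing performance. The tabular toy below, with an invented energy and performance model, shows the shape of that learning loop; it is not the WSU/Intel framework.

```python
# Toy learned DVFS policy (invented cost model; illustrative only).
# State: coarse load level of a core cluster. Action: frequency level.
# Reward: negative energy, minus a penalty if the frequency is too low for the load.
import numpy as np

rng = np.random.default_rng(0)
n_loads, n_freqs = 4, 4
Q = np.zeros((n_loads, n_freqs))
energy = np.array([1.0, 1.8, 3.0, 4.5])        # hypothetical energy per frequency level

for _ in range(20000):
    load = rng.integers(n_loads)
    freq = rng.integers(n_freqs) if rng.random() < 0.1 else int(Q[load].argmax())
    perf_penalty = 5.0 if freq < load else 0.0  # stand-in for lost performance
    reward = -energy[freq] - perf_penalty
    Q[load, freq] += 0.1 * (reward - Q[load, freq])   # bandit-style value update

print("learned frequency level per load level:", Q.argmax(axis=1))  # -> [0 1 2 3]
```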

Full Article

 

 

LLM-Powered Interactive Canvas for Generative Artists
Stanford University Institute for Human-Centered AI
Shana Lynch
September 13, 2023


Stanford University's Institute for Human-Centered AI researchers have developed a tool that aims to improve the ideation and editing processes for generative artists. Based on the large language model (LLM) GPT-4, Spellburst allows artists to input an initial prompt, then change or modify parts of the resulting image using a panel of dynamic slides produced from the previous prompt. The tool lets them merge different versions of the images and move from prompt-based exploration to program editing, fine-tuning the image by tweaking the code. Spellburst is based on interviews with 10 expert creative coders and was tested by expert generative artists. Stanford's Hariharan Subramonyam said, "The feedback was overall very positive. The large language model helps artists bridge from semantic space to code faster, but it also helps them explore many different variations and take larger creative leaps." Spellburst is slated for an open-source release later this year.

Full Article

 

 

Making AI Smarter with Artificial, Multisensory Integrated Neuron
Penn State News
Ashley WennersHerron
September 12, 2023


Pennsylvania State University (Penn State) researchers have developed an artificial multisensory integrated neuron by combining tactile and visual sensors to enhance each other through their individual output, aided by visual memory. The researchers mated the tactile sensor to a molybdenum disulfide-based phototransistor, resulting in a sensor that can integrate visual and tactile cues by producing electrical spikes similar to neuronal information processing. The tactile sensor uses the triboelectric effect to simulate touch input, while light shined into the phototransistor simulates visual input. The researchers observed a stronger sensory response from the neuron when visual and tactile signals were weak, which Penn State's Saptarshi Das said could augment sensor efficiency and clear a path toward more eco-friendly artificial intelligence (AI).

Full Article

 

Big Tech Wields AI Might To Accelerate Cancer Research

Axios (9/14, Reed) reported, “Major tech companies are throwing their weight behind artificial intelligence in cancer care, lending their technological prowess to legacy institutions and startups trying to navigate a fast-evolving area of medicine.” AI’s rise in popularity “has the potential to transform how the medical system researches and treats cancer, but only if the underlying tech is there to support it.”

ChatGPT Performs Similarly To Human Physicians Who Reviewed Similar Symptoms

NPR (9/16, Leonard) reported that in June, researchers “reported in medRxiv, an online publisher of health science preprints, that ChatGPT compared quite well to human doctors who reviewed the same symptoms – and performed vastly better than the symptom checker on the popular health website WebMD.” However, “when it comes to consumer chatbots...there is still caution, even though the technology is already widely available – and better than many alternatives.” Many physicians “believe AI-based medical tools should undergo an approval process similar to the FDA’s regime for drugs, but that would be years away.”

Gallup Survey Shows Nearly 80 Percent Of Americans Have Little Trust Businesses Will Use AI Responsibly

Insider (9/15, Nolan) reported a Gallup survey “found that 79% of Americans had little or no trust” in the way businesses would use artificial intelligence technology. Only 21 percent “said they trusted businesses with AI ‘a lot’ or ‘some.’” The survey found “trust was consistently low across subgroups of the population, including when broken down by age, gender, and race.” The results showed “22% of respondents were worried technology would make their jobs obsolete,” a seven-percentage-point increase over the previous year.

Teachers Are Increasingly Embracing Generative AI Tools In Classrooms

Wired (9/15, Johnson) reported that while “students’ soaring use of AI tools has gotten intense attention lately, in part due to widespread accusations of cheating,” a recent poll of 1,000 students and 500 teachers by studying app Quizlet “found that more teachers use generative AI than students.” Similarly, a Walton Family Foundation survey early this year found that “about 70 percent of Black and Latino teachers use the technology weekly. As more companies adapt generative AI to help educators, more teachers...are experimenting with the technology to find out its strengths – and how to avoid its limitations or flaws.” Examples include MagicSchool, a tool powered by OpenAI’s text generation algorithms, which “has amassed 150,000 users,” and can “help teachers do things like create worksheets and tests [and] adjust the reading level of material based on a student’s needs,” among other features.

Google, DOD Building AI-Powered Microscope To Identify Cancer

CNBC (9/18, Capoot) reports Google and the US Department of Defense are building “an artificial intelligence-powered microscope.” This “AI-powered tool is called an Augmented Reality Microscope, or ARM, and Google and the Department of Defense have been quietly working on it for years.” Currently, “the technology is still in its early days and is not actively being used to help diagnose patients yet, but initial research is promising, and officials say it could prove to be a useful tool for pathologists without easy access to a second opinion” when identifying cancer.

Intel Believes Glass Can Help Computers Handle AI Workloads

Bloomberg (9/18, King, Subscription Publication) reports Intel is “betting” glass-based substrates will “help the world’s computers handle ever-growing artificial intelligence workloads.” Bloomberg says for “Intel, a chip pioneer that’s now chasing Nvidia Corp. for the limelight, the new approach is a chance to show off its ability to innovate for an AI world – and win new customers in the process.” According to Bloomberg, “Intel’s glass push is coming from its packaging research and production facilities, a little-known part of its technology lineup.” However, Bloomberg adds this effort is “no sure thing,” as Intel “will need to get a cheaper supply of material. And researchers need to refine handling techniques to guard against glass’ most famous characteristic: its tendency to shatter.”

GE’s New Software Helps Utilities To Better Track Gas Turbine Emissions

Gas To Power Journal (9/18, Karl) reports GE Vernova announced the early limited release of CERius™, its new AI-powered carbon emissions management software engineered to help improve the accuracy of greenhouse gas calculations on scope 1 gas turbines by as much as 33% to help energy companies progress to net zero.

Report: Most States Are Failing To Meet Requests For Guidance On AI Use In Classrooms

Education Week (9/18, Klein) reports according to a recent report released by the State Educational Technology Directors Association, “more than half of state educational technology officials are seeing a spike in demand for guidance about proper use of AI tools in education.” However, “only 2 percent of state education technology officials said their state had initiatives or efforts underway to provide that kind of information, according to the survey of 104 officials from 45 states, Guam, and the Department of Defense, which operates schools for some military children.” Fifty-five percent of respondents “reported that they were seeing increased interest in guidance or policy around the use of AI in the classroom,” while the report said that “the number of states working on AI policy for schools is bound to increase in the coming years.”

        How Sacramento City Unified School District Works To Monitor AI Use On Assignments. The Sacramento (CA) Bee (9/18, Rodriguez) reports that as artificial intelligence (AI) has become an “almost readily available technology that anyone could use,” the Sacramento City Unified School District in California “has taken measures to monitor students’ AI use on assignments.” Alexander Goldberg, a spokesperson for the district, “shared the district’s new technology use agreement surrounding AI,” which states that students “are not permitted to access AI for assistance with assignments or research unless done under the guidance and approval of a teacher.” Additionally, “Unpermitted use of AI may lead to penalties for academic misconduct.” The district has also “blocked students’ access to AI tools such as ChatGPT on their school-issued Chromebooks.”

        Company Behind ChatGPT Releases Teacher Guide For Using AI In Classrooms. K-12 Dive (9/18, Arundel) reports, “OpenAI, the company behind ChatGPT, has released a guide for teachers who use the conversational artificial intelligence model in their classrooms.” The guide includes “examples of how K-12 teachers and college faculty use ChatGPT in their classrooms, as well as an FAQ with information on using the tool for assessments, safety guardrails, and potential biases and other limitations.” It comes as educators, parents, lawmakers, and others “are gathering information on best practices and considering standards for classroom use while ensuring students aren’t shortchanged in their development of critical thinking skills.”

Intel CEO Says Company’s Technology Vital To AI Computing Boom

Bloomberg (9/19, King, Subscription Publication) reports Intel CEO Pat Gelsinger, “plotting a comeback for the once-dominant chipmaker, made the case that the company’s technology will be vital to an industrywide boom in artificial intelligence computing.” Speaking at Intel’s annual Innovation conference, Gelsinger “pointed to advances that his company is making in production technology and software developer tools for AI. The opportunity will only grow as more artificial intelligence capabilities are powered by personal computers, he said.”

dtau...@gmail.com

unread,
Sep 30, 2023, 8:26:13 AM9/30/23
to ai-b...@googlegroups.com

Scientists Closer to Finding a Test for Long COVID
Gizmodo
Ed Cara
September 26, 2023


A multi-institutional team of scientists thinks it may have discovered biomarkers of long COVID that could lay the groundwork for a diagnostic test. The biomarkers include consistent immune and hormonal differences between long COVID and non-infected patients, including an "exaggerated" humoral immune response to the coronavirus and lower concentrations of cortisol among the former. The researchers developed a diagnostic algorithm to factor in these findings using machine learning, which was 96% accurate in distinguishing between long COVID patients and controls. Mount Sinai Health System's David Putrino called this development "a decisive step forward in the development of valid and reliable blood testing protocols for long COVID."

Full Article

 

 

Scientists Hail Pioneering Software in Hunt for Alien Life

The Guardian (U.K.)
Ian Sample


September 25, 2023


Scientists at the Carnegie Institution for Science (CIS), Purdue University, and Johns Hopkins University have trained software to differentiate chemical mixtures produced by living organisms from those generated by environmental or other events. The researchers subjected 134 samples from living and non-living objects to the pyrolysis-GC-MS process, which broke down each sample's organic molecules. They then used machine learning and mathematical modeling to train the software, which was able to classify samples as having non-biological, living, or fossilized-life origins. Preliminary tests showed the program could distinguish between biological and non-biological samples with 90% accuracy. CIS' Robert Hazen suggested the signs-of-life detector could transform the search for alien life, as well as delve more deeply into the origins and chemical processes of life on Earth.
 

Full Article

 

 

Method Helps AI Navigate 3D Space Using 2D Images
NC State University News
Matt Shipman
September 25, 2023


Scientists at North Carolina State University (NC State), the University of Central Florida, China-based open Internet platform Ant Group, and smartphone technology developer OPPO Seattle Research Center have trained artificial intelligence (AI) to extract three-dimensional (3D) information from two-dimensional (2D) images through a technique they call MonoXiver. Current extraction methods have the AI scan 2D images and enclose objects within eight-point "bounding boxes" to determine their size and position relative to other objects in the image. MonoXiver uses each bounding box as an anchor, then performs an analysis to generate secondary bounding boxes surrounding that anchor. The AI compares each secondary box's geometry and appearance to ascertain which box has best captured any "missing" pieces of the object, supporting a highly efficient top-down sampling process, according to NC State's Tianfu Wu. He said MonoXiver "significantly improved the performance" of three 2D-to-3D data extraction techniques.
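
Abstractly, the loop the summary describes is: lift each 2D detection to an anchor 3D box, sample perturbed candidate boxes around it, and keep the candidate a learned scorer prefers. The sketch below uses a placeholder scoring function purely so the example runs end to end; it is not MonoXiver's geometry-and-appearance scorer.

```python
# Anchor-and-refine sketch for monocular 3D boxes (placeholder scorer, not MonoXiver).
import numpy as np

rng = np.random.default_rng(0)

def score(box):
    # Placeholder for a learned scorer; here we simply prefer boxes near a hidden
    # "true" box so that the example is self-contained.
    true_box = np.array([2.0, 1.0, 10.0, 1.8, 1.5, 4.2])   # x, y, z, w, h, l
    return -np.linalg.norm(box - true_box)

anchor = np.array([2.3, 0.8, 11.0, 1.6, 1.4, 4.0])          # 2D detection lifted to 3D
candidates = anchor + rng.normal(scale=[0.5, 0.3, 1.0, 0.2, 0.2, 0.3], size=(64, 6))
best = candidates[int(np.argmax([score(c) for c in candidates]))]
print("refined box:", best.round(2))
```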

Full Article

 

 

Getting Audio from Still Images, Silent Videos
Northeastern Global News
Cody Mello-Klein
September 25, 2023


A machine learning tool developed at Northeastern University can obtain audio from still images and muted videos. Using the Side Eye tool, which leverages image stabilization technology standard in most smartphone cameras, it is possible to determine the gender of someone speaking off camera and the exact words they said. Northeastern's Kevin Fu explained that the small springs holding a camera lens suspended in liquid experience microscopic vibrations and the light is bent almost imperceptibly when someone speaks near a camera lens. Taking advantage of the rolling shutter method of photography used by most smartphone cameras, the researchers can extract sonic frequencies from those vibrations. Side Eye produces muffled audio, but the use of machine learning and training on certain words and audio enables it to extract a substantial amount of information, said Fu.
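
The rolling-shutter detail matters because each image row is exposed at a slightly different instant, so row-wise brightness fluctuations across a video sample vibration at the row rate rather than the much slower frame rate. The sketch below fabricates frames carrying such a fluctuation and recovers its frequency; it is a crude illustration of the sampling idea, nothing like the full Side Eye pipeline.

```python
# Crude rolling-shutter illustration on synthetic frames (not the Side Eye system).
import numpy as np

rng = np.random.default_rng(0)
n_frames, rows, cols = 30, 480, 640            # about 1 second of 30 fps video
t = np.arange(n_frames * rows)                 # every row is a distinct time sample
tone = 0.5 * np.sin(2 * np.pi * 200 * t / (n_frames * rows))   # hidden 200 Hz vibration

frames = 100 + tone.reshape(n_frames, rows, 1) + rng.normal(0, 0.2, (n_frames, rows, cols))

row_means = frames.mean(axis=2)                              # per-row brightness
signal = (row_means - row_means.mean(axis=1, keepdims=True)).reshape(-1)
spectrum = np.abs(np.fft.rfft(signal))
print("dominant frequency bin:", spectrum[1:].argmax() + 1)  # recovers the 200 Hz tone
```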

Full Article

 

 

Using Machine Learning to Close Canada's Digital Divide
Waterloo News (Canada)
September 20, 2023

Researchers at Canada's University of Waterloo and the National Research Council developed the machine learning-based Multivariate Variance-based Genetic Ensemble Learning Method to anticipate potential satellite problems to ensure uninterrupted Internet access for rural and remote Canadians. The method combines several artificial intelligence-driven models to identify anomalies in satellites and satellite networks before they can escalate. The researchers tested their model on the publicly available Soil Moisture Active Passive, Mars Science Laboratory Rover, and Server Machine datasets. They found the model's accuracy, precision, and recall surpassed that of existing models. Waterloo's Peng Hu said, "This research will help us to design more reliable, resilient, and secure satellite systems."
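
The "ensemble of detectors" idea can be illustrated by combining two simple anomaly scores and flagging the telemetry samples both agree are unusual. The sketch below uses synthetic telemetry, scikit-learn's IsolationForest, and a z-score detector; it shows the general approach only, not the Waterloo/NRC method.

```python
# Ensemble anomaly detection on synthetic satellite telemetry (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
telemetry = rng.normal(0, 1, size=(1000, 5))      # 1000 samples of 5 sensor channels
telemetry[990:] += 6                              # inject a handful of anomalies

iso = -IsolationForest(random_state=0).fit(telemetry).score_samples(telemetry)
zsc = np.abs((telemetry - telemetry.mean(0)) / telemetry.std(0)).max(axis=1)

# Combine the detectors by averaging their ranks, then flag the top 2%.
rank = lambda x: np.argsort(np.argsort(x))
combined = (rank(iso) + rank(zsc)) / 2
flagged = np.sort(np.argsort(combined)[-20:])
print("flagged samples:", flagged)                # includes the injected anomalies
```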
 

Full Article

 

 

Toyota Conceives of More Efficient Method to Train Robots
Interesting Engineering
Loukia Papadopoulos
September 19, 2023


The Toyota Research Institute (TRI) has introduced a generative artificial intelligence (AI) method for training robots to perform more dexterous behaviors more efficiently. The robot behavior model incorporates haptic teacher demonstrations and spoken descriptions of objectives, enabling new behaviors inferred from many demonstrations to be introduced independently. This strategy produces reliable, reproducible, and efficient results rapidly. TRI has already used the new approach to train robots to perform more than 60 dexterous tasks simply by providing them new information. The researchers hope this model will enhance human-robot cooperation.
 

Full Article

 

Oregon Colleges Racing To Keep Up With Artificial Intelligence Tools In Classrooms

The Oregonian (9/23, Edge) reported, “As the new school year gets underway, professors at Oregon’s colleges and universities are racing to adapt their teaching to publicly available artificial intelligence.” The colleges and universities “are also encouraging faculty to get familiar with the technology to incorporate it into their teaching.” The new school year “will largely be up to individual professors to decide how they will tackle artificial intelligence in the classroom,” as academic publishing company Wiley has found that “most believe their students are already using AI in the classroom. Only 31% of the professors said they felt positive about the technology.”

 

North Carolina Researchers Use Artificial Intelligence To Address Public Safety Concerns

WFMY-TV Greensboro, NC (9/24, Monreal) reports that with the rising popularity of artificial intelligence, “Charlotte-area researchers are seeking solutions to public safety concerns through a new AI initiative.” After gaining community input, “researchers learned people are concerned with public safety, particularly when it comes to public transit. Thanks to a $2.5 million grant, a team from UNC Charlotte and Central Piedmont Community College set up a pilot research project at the Merancas campus in Huntersville.” This comes as one associate professor at the university regularly meets with criminal justice students “to study how artificial intelligence can recognize dangerous behavior before it escalates.”

 

Artificial Intelligence May Mark The End Of Traditional Education

Fox News (9/20, Coggins) reported that with the “surge in growth of artificial intelligence, fears over the new technology have experts weighing in on what impact it will have on U.S. education.” Euro Pacific Asset Management Chief Economist Peter Schiff told Fox News, “One of the jobs that is likely to be eliminated by A.I. is teaching. I think certainly for elementary school education K through 12. I think at the end of the day, schools will be obsolete. The teachers, the administrators, the unions, the whole bureaucracy.” Similarly to the “rise of the internet, artificial intelligence has already made its way into the education system from ChatGPT to even teaching college courses at some of the nation’s most prestigious universities.” In education, “ChatGPT has been a controversial tool some teachers perceive as a threat to traditional pedagogy.”

 

Vatican Pushes World To Pause Research On Lethal Autonomous Weapons Systems

The AP (9/26, Peltz) reports Vatican City Foreign Minister Archbishop Paul Gallagher “urged world leaders Tuesday to put a pause on lethal autonomous weapons systems for long enough to negotiate an agreement on them, joining a series of U.N. General Assembly speakers who have expressed concern about various aspects of artificial intelligence.” Gallagher said of the issue, “Only human beings are truly capable of seeing and judging the ethical impact of their actions, as well as assessing their consequent responsibilities.” The Vatican also feels positively about “creating an international AI organization focused on facilitating scientific and technological exchange for peaceful uses and ‘the promotion of the common good and integral human development,’ he said.”

 

Roll Call Analysis: Privacy Legislation Emerges As Prerequisite To AI Regulation

In an analysis for Roll Call (9/26), Gopal Ratnam says that while “artificial intelligence appears to be a shiny new bauble full of promises and perils, lawmakers in both parties acknowledge that they must first resolve a less trendy but more fundamental problem: data privacy and protection.” Ratnam explains, “With dozens of hearings on data privacy held in the past five years, lawmakers in both chambers have proposed several bills, but Congress has enacted no federal standard as dickering over state-preemption has stymied any advances. ... Although the top tech companies thwarted an attempt to craft a federal privacy bill during the Obama administration in 2015, since then tech groups such as the Computer and Communications Industry Association, NetChoice and others have routinely called on Congress to enact privacy legislation.”

 

Educators Share How Advisory Boards Could Mitigate AI Anxiety In Higher Ed

Inside Higher Ed (9/28, Coffey) reports that with artificial intelligence and higher ed, “the excitement and hype are matched by the uncertainties and need for guidance,” but one solution would be “creating an AI advisory board that brings together students, faculty and staff for open conversations about the new technology.” The idea was presented at the University of Central Florida’s inaugural Teaching and Learning With AI conference, “a two-day event that drew more than 500 educators from around the country.” Experts interviewed “said universities need a formal mechanism for getting advice on how to proceed.” For example, Kristina Ishmael, deputy director of the Education Department’s Office of Educational Technology, “said in an email to Inside Higher Ed that the department’s top recommendation about AI is to ‘emphasize humans in the loop.’ Institutions that choose to create an AI advisory board, or a similar group, would be implementing this recommendation.”

 

More Academic Publishers Are Using AI To Assist With Decision-Making In Editorial Processes

The Chronicle of Higher Education (9/27, Swaak) reported that using AI as an assistant “is a growing trend among academic editors, as journals field more submissions while tapping a depleting well of peer reviewers,” as an AI tool that can “quickly identify whether a paper’s subject matter falls within a journal’s scope,” among other things, “can be valuable.” It’s also a trend that hasn’t yet “spurred the same level of editorial policymaking and calls for transparency as authors’ and researchers’ use of such tools,” following a fear that papers created by generative AI “might be submitted as scholars’ work,” which has prompted many publishers and journals “to post policies fencing in authors’ use of AI.” The Chronicle contacted 15 major publishers for this story, and the five who responded “emphasized that AI tools, if used at all, are never the sole decision-makers, and that editors remain responsible and accountable for the editorial process and final calls.”

 

Meta Releases AI-Powered Products

The New York Times (9/27, Isaac, Metz) reported Meta introduced a suite of AI-powered products Wednesday “that will soon be found throughout its products, including Instagram, Messenger, and virtual- and augmented-reality devices like the Quest 3 headset and Ray-Ban Stories smart glasses. The rollout also includes a chatbot that will be powered partly by Microsoft’s Bing search engine, as well as A.I.-assisted image-editing tools to use on Instagram.” The company “is aiming to keep pace with OpenAI, Google, Microsoft and other companies in the frenzied race over A.I. that can instantly generate text, images and other media on its own.”

        The Washington Post (9/27) reported, “Meta said the new conversational assistant, Meta AI – which will populate WhatsApp, Messenger and Instagram – relies on one of its large language models and a partnership with Microsoft’s search engine, Bing.” Meta is “also launching AI-backed photo creation tools and 28 AI-powered chatbots, played by celebrities and cultural icons such as Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka.” Meta CEO Mark Zuckerberg “emphasized that Meta’s strategy involves creating different AI products for different uses, as opposed to a single flagship chatbot. He added that Meta’s conversational bots aren’t intended to just convey information – they’re meant to be entertaining.”

 

More Companies Blocking OpenAI’s GPTBot Web Crawler

Insider (9/28, Hays) reports, “More and more companies are trying to avoid having their data freely scraped and saved by web crawlers working for the benefit of AI models.” According to data Insider has seen from Originality.ai, the number of sites blocking OpenAI’s GPTBot crawler has increased from 70 of the 1,000 most popular sites last month to over 250 out of 1,000 as of this week. Sites blocking OpenAI’s web crawler include Tumblr, Amazon, Pinterest, Vimeo, Indeed, The Guardian, USA Today, CNBC, “and what appears to be all titles published by Hearst and those by Conde Nast.”

 

Warner Warns Lawmakers Not To Overreach On AI Regulation

Politico (9/28, Overly) says, “Sen. Mark Warner (D-Va.) and his colleagues have taken big swings at tech legislation in recent years only to come up short, so he’s urging a different approach as Congress looks to regulate artificial intelligence. His pitch: be less ambitious.” For example, during a recent interview, Warner said, “Everybody up here on the Hill acknowledges the downside of the notion that we’re going to simply have the tech guys do the same thing. ‘Well, let us figure it out first and figure out the rules later’ is, I think, a bad proposition. ... I’m very sensitive to the notion that on AI we shouldn’t do that...but if we try to overreach, we may come up with goose eggs.”

dtau...@gmail.com

unread,
Oct 7, 2023, 4:02:50 PM10/7/23
to ai-b...@googlegroups.com

ML Used to Probe Building Blocks of Shapes
Imperial College London (U.K.)
Hayley Dunning
October 4, 2023


Researchers suggest that using machine learning (ML) to explore "atomic shapes" could transform mathematical discovery. Mathematicians from the U.K.'s Imperial College London and University of Nottingham applied ML to uncover unexpected patterns in the building blocks of shapes known as Fano varieties. They trained an ML model on example data to forecast the dimensions of Fano varieties from their quantum periods with 99% accuracy. The researchers then used more traditional mathematical techniques to demonstrate that the quantum period defines the dimension. They also think such mathematical datasets could help refine ML models.
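
The summary describes a supervised-learning setup: quantum-period data in, predicted dimension out. The following sketch shows what such a classifier could look like in Python; the data file, its format, and the network size are assumptions for illustration, not details from the Imperial/Nottingham paper.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Assumed data file: coefficients of each variety's quantum period plus its dimension.
    data = np.load("fano_quantum_periods.npz")
    X, y = data["coeffs"], data["dimension"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))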

Full Article

 

 

AI Designs New Robot from Scratch in Seconds
Northwestern Now
October 2, 2023


A research team led by Northwestern University scientists created an artificial intelligence (AI) capable of designing robots from scratch almost immediately. The researchers prompted the algorithm to design a robot from a block about the size of a bar of soap, which generated a successful design in 26 seconds. Northwestern's Sam Kriegman said, "We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto!" The algorithm operates on a lightweight personal computer; other AI systems often require power-hungry supercomputers and huge datasets. The researchers fabricated the robot from the AI's blueprint, validating its real-world performance.

Full Article

 

 

DeepMB: Deep Learning Framework for High-Quality Optoacoustic Imaging in Real Time
Helmholtz Munich (Germany)
October 2, 2023


Researchers at Germany's Helmholtz Munich and the Technical University of Munich have developed a deep learning framework that permits the creation of high-quality optoacoustic images in real time, allowing for noninvasive assessments of a wide range of diseases. Clinical use of optoacoustic imaging had been limited by long image processing times. The DeepMB framework can reconstruct high-quality multispectral optoacoustic tomography (MSOT) images around 1,000 times faster than existing algorithms that produce high-quality images, without loss of image quality. It also can reconstruct all scans of a patient, no matter the region of the body or the disease being assessed.

Full Article

 

 

Drones Help Farmers Optimize Vegetable Yields
University of Tokyo (Japan)
October 4, 2023


Researchers at the University of Tokyo (UTokyo) and Chiba University in Japan have demonstrated that artificial intelligence (AI)-powered drones can help farmers maximize crop yields. The researchers used low-cost drones with specialized software to scan a field of broccoli and accurately predict the plants' anticipated growth traits. Training the system required the researchers to label various characteristics of plant images the drones might encounter. UTokyo's Wei Guo said, "With our system, drones identify and catalog every plant in the field, and their imaging data feeds a model that uses deep learning to produce easy-to-understand visual data for farmers."

Full Article

 

 

AI Deepfakes Spread Disinformation in Slovak Elections
Bloomberg
Olivia Solon
September 29, 2023


Disinformation was spread over social media in the run-up to the Slovak elections that took place over the weekend, with videos featuring artificial intelligence (AI)-produced deepfake voices. One video shows a conversation in which Slovakian progressive party leader Michal Simecka appears to discuss vote-buying from the Roma minority with a journalist, which experts deemed synthesized by an AI tool trained on samples of the speakers' voices. Technological democracy research group Reset's Rolf Fredheim said, "With the examples from the Slovak election, there's every reason to think that professional manipulators are looking at these tools to create effects and distribute them in a coordinated way."

Full Article

*May Require Paid Registration

 

 

Ant Brains Inspire Robots to Find Their Way
Interesting Engineering
Rizwan Choudhury
September 27, 2023


A team led by researchers at the U.K.'s University of Edinburgh developed an artificial neural network based on the brain structure of ants to help robots navigate dense, plant-filled landscapes and other complex natural environments. The design of the neural network mimics the mushroom-like neuron structures in the brains of ants, which allow them to identify visual patterns and retain spatiotemporal memories, helping them learn and navigate routes in visually repetitive surroundings. The researchers equipped a terrestrial robot with a bioinspired event camera that captures visual sequences along routes in natural outdoor environments. They then applied a neural algorithm for spatiotemporal memory based on the mushroom body circuit and encoded memory in a spiking neural network operating on a low-power neuromorphic computer. The neural model was found to outperform SeqSLAM, an existing route learning model.
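
For readers unfamiliar with the mushroom-body idea, the toy sketch below shows one common abstraction of it: a sparse random expansion of the visual input, followed by output weights that are depressed for views seen on the learned route, so that unfamiliar views produce a large output. The layer sizes and learning rule here are illustrative assumptions; this is not the Edinburgh team's spiking, neuromorphic implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N_INPUT, N_KC, K_ACTIVE = 400, 10000, 200                    # assumed layer sizes
    W_in = (rng.random((N_KC, N_INPUT)) < 0.01).astype(float)    # sparse random projection
    w_out = np.ones(N_KC)                                        # depressed for familiar patterns

    def kc_activity(view):
        a = W_in @ view
        idx = np.argpartition(a, -K_ACTIVE)[-K_ACTIVE:]          # keep only the most active cells
        s = np.zeros(N_KC)
        s[idx] = 1.0
        return s

    def learn(view):
        w_out[kc_activity(view) > 0] = 0.0                       # silence cells active on the route

    def unfamiliarity(view):
        return w_out @ kc_activity(view)                         # low value = view resembles the route

    route_view = rng.random(N_INPUT)
    learn(route_view)
    print(unfamiliarity(route_view), unfamiliarity(rng.random(N_INPUT)))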

Full Article

 

Meta Used Public Facebook, Instagram Posts To Train Parts Of AI Virtual Assistant

Reuters (9/29) reports, “Meta Platforms used public Facebook and Instagram posts to train parts of its new Meta AI virtual assistant, but excluded private posts shared only with family and friends in an effort to respect consumers’ privacy, the company’s top policy executive told Reuters in an interview.” The company “did not use private chats on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg,” who is quoted saying the “vast majority” of the training data was publicly available.

 

Resource Needs Driving AI Startups To Pursue Deals With Big Tech Companies

The Washington Post (9/30, De Vynck) reports Big Tech companies are pursuing deals with AI startups. The trend shows how “AI’s insatiable need for computing power is pushing even the most anti-corporate start-ups into the arms of Big Tech. ... The AI boom is widely seen as the next revolution in technology, with the potential to catapult a new wave of start-ups into the Silicon Valley stratosphere.” However, “instead of breaking Big Tech’s decade-long dominance of the internet economy, the AI boom so far appears to be playing into its hands. Big Tech’s warehouses of powerful computer chips are necessary to train the complex algorithms behind AI chatbots, giving Amazon, Google and Microsoft immense sway over the market. And while upstarts like Anthropic AI may have created powerful breakthrough tech, they still need Big Tech’s money and cloud computing resources to make it work.”

        Musk Advocates For AI Regulations While Criticising Government Overreach In Other Areas. The Wall Street Journal (9/30, Higgins, Subscription Publication) reports Elon Musk is advocating for AI regulations, even as he is leading a campaign against government regulations in other areas. The Journal says the exact target of his grievance is not entirely clear as he currently faces several challenges across his business empire.

 

Survey: How Students Use AI Tools Differs From How Teachers Perceive Them

Education Week (9/29, Prothero) reported that in a nationally representative survey of high school students conducted by the Center for Democracy and Technology in July and August, “only about a quarter of students said they have used a generative AI program like ChatGPT for school assignments.” Students are actually using the technology “for a range of reasons,” though the “early and what some experts call exaggerated alarm over students using ChatGPT might have lasting consequences: The perception that students are using AI to cheat could be negatively influencing teachers’ attitudes toward their students. Half of teachers say that generative AI has made them less excited about their students’ work because they can’t be sure it’s actually theirs.” EdWeek provided five charts detailing “how students are using generative AI, how much teachers perceive them to be doing so, and whether teachers are receiving guidance and training on generative AI’s use in their schools.”

 

Microsoft CEO Testifies Google’s Search Pervasiveness Gives It An AI Advantage

The New York Times (10/2, McCabe, Kang) reports Microsoft CEO Satya Nadella “testified on Monday that Google’s power in online search was so ubiquitous that even his company found it difficult to compete on the internet, becoming the government’s highest-profile witness in its landmark antitrust trial against the search giant.” During his testimony in federal court in Washington, Nadella “was often direct and sometimes combative as he laid out how Microsoft could not overcome Google’s use of multibillion-dollar deals to be the default search engine on smartphones and web browsers.”

        Reuters (10/2, Bartz) reports that Nadella “said...tech giants were competing for vast troves of content needed to train artificial intelligence, and complained Google was locking up content with expensive and exclusive deals with publishers.” Nadella “said building artificial intelligence took computing power, or servers, and data to train the software. On servers, he said: ‘No problem, we are happy to put in the dollars.’ But without naming Google, he said it was ‘problematic’ if other companies locked up exclusive deals with big content makers.” Nadella said, “When I am meeting with publishers now, they say Google’s going to write this check and it’s exclusive and you have to match it.”

        Bloomberg (10/2, Nylen, Bass, Shields, Subscription Publication) says Nadella’s testimony “is a sharp reversal of his message in February, when Microsoft beat Google with an AI-based version of its search engine, Bing. Back then, Nadella touted generative AI as a way for Bing to get back in the market and make Google uncomfortable.” His new position “is key to shoring up the Justice Department’s contention that Google not only dominates today, but if left unchecked, will rule tomorrow as well.” The Wall Street Journal (10/2, Wolfe, Kruppa, Subscription Publication) provides similar coverage.

        Microsoft Technologist Warns AI Could Threaten Economy, Society. Insider (10/2, Hays) reports Jaron Lanier, “a lauded technologist and a prime unifying scientist at Microsoft,” recently warned he “can see a future of artificial intelligence technology that doesn’t work out well for anyone.” Insider reports Lanier “said during a recent conference on tech and music hosted by Arianna Huffington’s Thrive Global and Universal Music that the current rush to advance generative AI technology could be ‘spiritually, politically, and economically’ corrosive.”

 

Tom Hanks, Gayle King Issue Warning After AI Deepfake Ads Seen

The New York Times (10/2, Taylor) reports actor Tom Hanks on Saturday and Gayle King, a co-host of “CBS Mornings,” on Monday “separately warned their followers on social media that videos using artificial intelligence likenesses of them were being used for fraudulent advertisements.” In an email, a spokesman for Meta “did not comment directly on the [Instagram] ads but said that it was ‘against our policies to run ads that use public figures in a deceptive nature in order to try to scam people out of money.’”

 

Northeastern University Researchers Discover How To Hear Photos Using AI

Scripps News (10/3, Nordquist) reports Northeastern University researchers “have developed a way to extract audio from both still photos and muted videos using artificial intelligence” in a research project called Side Eye. Small camera movements “can be interpreted into rudimentary audio that Side Eye artificial intelligence can then interpret into individual words with high accuracies, according to the research team.” Additionally, “even though the recovered audio sounds muffled, some pieces of information can be extracted.” Northeastern professor Kevin Fu said, “For instance in legal cases or in investigations of either proving or disproving somebody’s presence, it gives you evidence that can be backed up by science of whether somebody was likely in the room speaking or not.”

 

Researchers Fail To Find Reliable AI Watermarking

Wired (10/3, Knibbs) reports University of Maryland computer science professor Soheil Feizi “is blunt when he sums up the current state of watermarking AI images.” He said, “We don’t have any reliable watermarking at this point. ... We broke all of them.” For one of “two types of AI watermarking he tested for a new study – ‘low perturbation’ watermarks, which are invisible to the naked eye – he’s even more direct: ‘There’s no hope.’” The professor “and his coauthors looked at how easy it is for bad actors to evade watermarking attempts.” Beyond “demonstrating how attackers might remove watermarks, the study shows how it’s possible to add watermarks to human-generated images, triggering false positives.”

 

Research Finds Gen Zers Using Generative-AI Tools While Gen Xers, Baby Boomers Remain Skeptical

Insider (10/3, Mok) reports “Gen Zers are using generative-AI tools such as ChatGPT to automate their jobs and boost their creativity.” On the other hand, Gen Xers and baby boomers “are a bit more skeptical about the technology, recent Salesforce research suggested.” In the study, which was conducted in August, “49% of respondents said they used generative AI – researchers labeled this cohort a ‘young, engaged, and confident’ group of ‘super-users’ using the technology ‘frequently’ and believing ‘they are well on their way to mastering it.’” Among respondents “who use generative AI, 70% were Gen Zers – many of them saying they would be interested in using the technology for budgeting or career planning.” Meanwhile, “out of all the respondents who said they didn’t use generative AI, 68% were born between 1946 and 1980.”

 

How AI Chatbots Could Boost Student Success In High School, College

Education Week (10/3, Dockery Sparks) reports “connection and support from instructors is particularly important to encourage students from underrepresented groups to succeed in college,” but artificially intelligent chatbots could help teachers “amplify that kind of instructional outreach, according to results from a pilot program at Georgia State University.” In a study, “researchers found students who used an AI-powered teaching assistant called TA Pounce earned better grades and were more likely to complete the university’s two largest introductory lecture courses, in political science and economics.” Across both subjects, “students who used the chatbot were 5-6 percentage points more likely to earn a B or higher in their classes – a requirement to keep certain scholarships.” Similarly, “55 percent of students who had a below-average GPA in high school earned at least a B in political science if they used the chatbot, versus 48 percent of similarly low-performing peers who did not use the tool.”

 

Google To Integrate Bard Into Voice Assistant Product

The Washington Post (10/4) reports, “Google is pushing ahead in the race to create smarter voice assistants, announcing Wednesday it would integrate its Bard artificial intelligence chatbot into its popular voice assistant product on mobile phones in the ‘next few months.’ The announcement comes two weeks after Amazon said it would also add a more capable conversational chatbot into its Alexa devices. ChatGPT maker OpenAI began adding voice capabilities to its own chatbot.” The Post says, “Big Tech companies have been rushing to design and produce new ‘generative’ AI products since OpenAI unveiled ChatGPT last November. But the question of how the companies would get people to use — and pay for — the expensive new technology has swirled around the industry for months.”

 

Lawmakers Mull Regulation Of AI In Healthcare

Politico (10/4, Leonard, Cirruzzo) reports in its Pulse newsletter, “Sen. Bill Cassidy (R-La.), ranking member of the HELP Committee, recently floated changes in artificial intelligence regulation in health care and sought feedback on the potential framework.” The feedback “called for lawmakers and regulators to avoid creating conflicting rules and overly broad approaches.” Meanwhile, Senate Majority Leader Chuck Schumer (D-NY) “held a forum last month on the potential regulation of AI and is eyeing a federal policy framework.” In general, lawmakers have “raised concerns about bias in health care AI, as well as privacy and transparency in how algorithms work.” As of now, “Congress hasn’t taken substantial action, but federal agencies like the FDA and the Office of the National Coordinator for Health IT have proposals on how to regulate the technology.”

 

Heidelberg Laureates, Computing Pioneers, Disagree On AI Risk During Annual Forum

Inside Higher Ed (10/5, D'Agostino) reports few higher educational institutions in the world are older – “and, some argue, more esteemed – than Germany’s Heidelberg University,” but there aren’t any others “that host an annual gathering of the most celebrated mathematicians and computer scientists of their generations.” Many attendees at the gathering each September are recipients of the Turing Award, and have “designed the internet’s architecture, developed cryptographic methods for secure online transactions...and provided conceptual and engineering breakthroughs that made deep neural networks a critical computing component, among other accomplishments.” In some ways, “the Heidelberg Laureate Forum is a story of human connections,” as the young researchers must travel “an average of 4,500 kilometers” to reach the university. Yet “as conversations at their forum unfolded, the computing pioneers respectfully disagreed with each other on just how much AI threatens people.”

 

NYC Public Schools To Launch AI Policy Lab Months After Banning ChatGPT

Education Week (10/5, Klein) reports New York City Public Schools “will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.” That development is “quite a turnabout for a district that less than a year ago banned ChatGPT, an AI-powered research and writing tool, spurring other districts to follow suit.” While the district “reversed its decision to block ChatGPT on school networks” in May, it now “wants to take the lead on crafting policy around the smart use of AI for teaching and learning and the management of schools.” The AI policy lab “will serve as a hub for a national network of similar labs in school districts across the country,” and will “consider questions about cybersecurity and privacy, as well as ways to use AI-powered tech responsibly for teaching and learning,” among other things.

dtau...@gmail.com

unread,
Oct 16, 2023, 7:31:17 PM10/16/23
to ai-b...@googlegroups.com

Geoffrey Hinton on the Promise, Risks of Advanced AI
CBS News
Scott Pelley
October 8, 2023


U.K. computer scientist and 2018 ACM A.M. Turing Award recipient Geoffrey Hinton said advanced artificial intelligence (AI), for all its promise, could conceivably take over. In a "60 Minutes" interview, Hinton said AI systems are intelligent, capable of comprehension, and can make experiential decisions in the same sense that humans do; achieving self-awareness is only a matter of time, effectively making AI more intelligent than humans. Hinton and collaborators Yann LeCun and Yoshua Bengio created a neural network to learn by trial and error, strengthening connections that lead to correct outcomes. Hinton suggested modern AI systems can learn better than the human mind despite having fewer connections, even though their exact inner workings are unknown. Hinton urges experiments to improve our understanding of how the technology works, as well as government regulation and a worldwide ban on the use of military robots.

Full Article

 

 

Cyber Algorithm Shuts Down Malicious Robotic Attack
University of South Australia
October 12, 2023

An algorithm developed by researchers at Australia’s Charles Sturt University and the University of South Australia (UniSA) was able to intercept and prevent a man-in-the-middle (MitM) eavesdropping cyberattack on an unmanned military robot within seconds. The researchers used deep learning neural networks to train the robot operating system (ROS) in a replica of a U.S. Army GVT-BOT ground vehicle to learn the signature of a MitM cyberattack. In real-time tests, the algorithm achieved a 99% success rate in preventing such attacks. UniSA's Anthony Finn said the algorithm outperforms existing cyberattack recognition techniques. Added Charles Sturt University's Fendy Santoso, "Owing to the benefits of deep learning, our intrusion detection framework is robust and highly accurate. The system can handle large datasets suitable to safeguard large-scale and real-time data-driven systems such as ROS."
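
In outline, the detection step amounts to classifying windows of robot network traffic as normal or attack. The sketch below is a minimal stand-in for that step, assuming a precomputed CSV of labeled traffic statistics; it uses a small scikit-learn network rather than the authors' deep learning models, and the file name and features are hypothetical.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import classification_report

    # Assumed file: one row of traffic statistics per time window, label 1 = MitM attack.
    df = pd.read_csv("ros_traffic_windows.csv")
    X, y = df.drop(columns=["label"]), df["label"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))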
 

Full Article

 

 

Neural Network for Genomics Explains How It Achieves Accurate Predictions
NYU News
October 6, 2023


A neural network developed by computer scientists at New York University (NYU) has the ability to explain how its predictions are made. Said NYU's Oded Regev, "Many neural networks are black boxes—these algorithms cannot explain how they work, raising concerns about their trustworthiness and stifling progress into understanding the underlying biological processes of genome encoding. By harnessing a new approach that improves both the quantity and the quality of the data for machine-learning training, we designed an interpretable neural network that can accurately predict complex outcomes and explain how it arrives at its predictions." The researchers used facts about RNA splicing to develop a neural network that enables the RNA splicing process to be traced and quantified. Regev noted, "Our model revealed that a small, hairpin-like structure in RNA can decrease splicing." This was confirmed through experiments.

Full Article

 

 

Researchers Turn to AI to Avoid Drone Collisions
Johns Hopkins University Hub
Megan Mastrola
October 9, 2023


A team of researchers from Johns Hopkins University (JHU) and the Charles Stark Draper Laboratory modeled a drone traffic-orchestrating system using artificial intelligence (AI) to replace certain human-in-the-loop tasks with autonomous decision-making. JHU's Lanier Watkins said, "Our simulated system leverages autonomy algorithms to enhance the safety and scalability of UAS [uncrewed aircraft systems] operations below 400 feet altitude." The researchers assessed the effect of strategic deconfliction algorithms in a simulated airspace, boosting safety and nearly eliminating collisions. The simulator includes "noisy sensors" to improve adaptability to unanticipated conditions and a "fuzzy inference system" that factors in multiple variables to calculate each drone's risk level. The researchers intend to incorporate weather and other dynamic obstacles into simulations to model real-world conditions more comprehensively.
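
To make the "fuzzy inference" idea concrete, here is a toy risk calculation that blends two inputs, separation distance and closing speed, through triangular membership functions and a tiny rule base. The variables, membership ranges, and rules are invented for illustration and are not the JHU/Draper simulator's actual logic.

    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def risk(separation_m, closing_speed_mps):
        # Assumed membership functions for the two inputs.
        near = tri(separation_m, -1, 0, 60)
        medium = tri(separation_m, 30, 80, 150)
        far = tri(separation_m, 120, 200, 10000)
        slow = tri(closing_speed_mps, -1, 0, 10)
        fast = tri(closing_speed_mps, 5, 15, 30)
        # Toy rule base: near & fast -> high; medium & fast or near & slow -> moderate; far or slow -> low.
        high = min(near, fast)
        moderate = max(min(medium, fast), min(near, slow))
        low = max(far, slow)
        total = high + moderate + low
        return 0.0 if total == 0 else (1.0 * high + 0.5 * moderate + 0.1 * low) / total

    print(risk(separation_m=40, closing_speed_mps=20))   # close and closing fast: high risk score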
 

Full Article

 

 

Scrolls That Survived Vesuvius Divulge Their First Word
The New York Times
Nicholas Wade
October 12, 2023


Researchers have recovered a handful of letters and the word “porphyras,” ancient Greek for “purple,” from a papyrus scroll that was carbonized by the eruption of Mount Vesuvius in A.D. 79. The scroll, which comes from a villa thought to have been owned by the father-in-law of Julius Caesar, would fall apart if unrolled. The approach used to read the scroll, developed by Brent Seales, a computer scientist at the University of Kentucky, uses computed tomography (CT) and artificial intelligence software to help distinguish ink from papyrus. Seales released his software programs into the public domain and has offered prizes for certain milestones to accelerate efforts to retrieve text from other scrolls found at the villa. Some 1,500 people, many of them machine learning experts, are now involved. Private donors have sponsored a $700,000 prize if someone can retrieve four separate passages of at least 140 characters from the scrolls this year.
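
The core machine-learning step can be pictured as a patch classifier: small three-dimensional sub-volumes of the CT scan go in, and a probability that the patch carries ink comes out. The sketch below shows only that shape of model; the architecture, patch sizes, and tensors are illustrative assumptions, not the released Vesuvius Challenge tooling.

    import torch
    import torch.nn as nn

    class InkDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(32, 1),                    # logit: probability that the patch carries ink
            )
        def forward(self, x):                        # x: (batch, 1, depth, height, width) CT patches
            return self.net(x)

    model = InkDetector()
    patch = torch.randn(8, 1, 16, 64, 64)            # fake batch of CT sub-volumes (assumed sizes)
    print(torch.sigmoid(model(patch)).shape)         # per-patch ink probabilities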
 

Full Article

*May Require Free Registration

 

 

Teaching Computers to Recognize Food in All Forms
University of Maryland Department of Computer Science
Maria Herd
October 9, 2023


University of Maryland (UMD) computer scientists have developed a dataset to train machine learning systems to recognize 20 different fruits and vegetables regardless of whether they are peeled, sliced, or chopped. The researchers developed Chop & Learn by filming seven different food preparation styles from four different angles to ensure comprehensive coverage. Said UMD's Abhinav Shrivastava, "Being able to recognize objects as they are undergoing different transformations is crucial for building long-term video understanding systems, as well as dealing with the long-tail problem in object recognition. We believe our dataset is a good start to making real progress on the basic crux of this problem in compositional image generation and action recognition."
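
A natural way to use such a dataset is a shared image backbone with two output heads, one for the food type and one for the preparation state. The sketch below shows that layout only; the backbone, class counts, and tensor shapes are assumptions for illustration, not the UMD authors' actual models.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class FoodStateNet(nn.Module):
        def __init__(self, n_foods=20, n_styles=7):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()                   # reuse the 512-d ResNet-18 features
            self.backbone = backbone
            self.food_head = nn.Linear(512, n_foods)      # which fruit or vegetable
            self.style_head = nn.Linear(512, n_styles)    # peeled, sliced, chopped, ...
        def forward(self, x):
            f = self.backbone(x)
            return self.food_head(f), self.style_head(f)

    model = FoodStateNet()
    food_logits, style_logits = model(torch.randn(4, 3, 224, 224))
    print(food_logits.shape, style_logits.shape)          # (4, 20) and (4, 7)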
 

Full Article

 

 

Disney Packs Big Emotion into Little Robot
IEEE Spectrum
Evan Ackerman
October 6, 2023


A team of Disney Research scientists in Zurich, Switzerland, unveiled a bipedal robotic character combining a child-size frame with stubby legs, wiggling antennae, and an expressive walk at the 2023 IEEE/Robotics Society of Japan International Conference on Intelligent Robots and Systems last week. The mostly three-dimensionally-printed robot has a four-degrees-of-freedom head and five-degrees-of-freedom legs with hip joints so it can perambulate while balancing dynamically. Disney's Michael Hopkins said the design team included an animator to help the robot walk in a manner that conveys emotions. Disney Research’s reinforcement learning-based pipeline integrates robust robotic movement with an animator's vision through simulation. Disney's Moritz Bächer said the team was able to develop the robot character in months rather than years because the pipeline can condense years of behavioral training into a few hours.
 

Full Article

 

Survey: Few Campus IT Officers Think Their Universities Prioritize Artificial Intelligence

Inside Higher Ed (10/9, Coffey) reports technology, “and its importance in the classroom, have garnered increasing attention at most colleges and universities in recent years,” but for chief technology and chief information officers, “the AI landscape is rife with caution. According to Inside Higher Ed’s 2023 Survey of Campus Chief Technology/Information Officers, most respondents reported embryonic use of the technology, if they used it at all.” Questions exploring “the role and impact of AI were new additions to this year’s annual survey, which examined the technology and its place within institutions.” For example, “digital transformation remains a challenging area for CIOs, who reported resistance among faculty and staff members and a lack of financial investment. While nearly three-quarters of CIOs (73 percent) called digital transformation a ‘high priority’ or ‘essential’ for their institution, only about half (51 percent) said leaders at their institution feel the same way.”

 

Google Cloud Announces AI-Powered Data Search Capabilities For Healthcare Sector

CNBC (10/9, Capoot) reports Google Cloud announced on Monday new AI-powered search capabilities “that it said will help health-care workers quickly pull accurate clinical information from different types of medical records.” CNBC says, “The health-care industry is home to troves of valuable information and data, but it can be challenging for clinicians to find since it’s often stored across multiple systems and formats. ... The company said the new capabilities will ultimately save health-care workers a significant amount of time and energy.” Lisa O’Malley, Google Cloud senior director of product management for cloud AI, is quoted saying, “While it should save time to be able to do that search, it should also prevent frustration on behalf of clinicians and [make] sure that they get to an answer easier.”

 

Survey: More Teachers Are Turning To Handwritten Essays Amid AI Plagiarism Concerns

Education Week (10/6, Klein) reported that “the handwritten essay is making a comeback” amid the rise of artificial intelligence tools “that students can use easily to write essays for them, according to a survey of 228 high school and college teachers conducted last month by the research organization intelligent.com.” This comes as about two-thirds of high school teachers and college instructors “are rethinking their assignments in response to concerns that students will cheat using ChatGPT,” and of the educators changing their approach, “more than three-quarters – 76 percent – are requiring or plan to require handwritten assignments. Sixty-five percent have students type assignments in class with no Wi-Fi access or plan to do so, and 87 percent have or will have students complete an oral presentation along with their written work.”

 

Researcher Says AI Servers Could Consume As Much Electricity As Some Countries In Coming Years

The New York Times (10/10, Erdenesanaa) reports that Vrije Universiteit Amsterdam Ph.D. student Alex de Vries predicted in a peer-reviewed study published on Tuesday that AI servers will consume large amounts of electricity in the coming years. In a “middle-ground scenario, by 2027 A.I. servers could use between 85 to 134 terawatt hours (TWh) annually. That’s similar to what Argentina, the Netherlands and Sweden each use in a year, and is about 0.5 percent of the world’s current electricity use.” De Vries said, “We don’t have to completely blow this out of proportion. ... But at the same time, the numbers that I write down – they are not small.” The electricity “needed to run A.I. could boost the world’s carbon emissions, depending on whether the data centers get their power from fossil fuels or renewable resources.”
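
A quick back-of-the-envelope check of those figures, assuming a rough world electricity total of about 27,000 TWh per year (an assumed round number, not taken from the article):

    low_twh, high_twh = 85, 134
    world_twh = 27_000                     # assumed approximate world electricity use per year
    print(f"{low_twh / world_twh:.2%} to {high_twh / world_twh:.2%} of world electricity")
    # roughly 0.3% to 0.5%, consistent with the "about 0.5 percent" figure quoted above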

 

Lawmakers Push Biden To Issue Executive Order On AI

The Hill (10/11, Klar, Shapero) reports that in a recent letter, a coalition of Democrats “urged President Biden to turn non-binding” AI safeguards “into policy through an executive order. The letter, led by Sen. Ed Markey (D-Mass.) and Rep. Pramila Jayapal (D-Wash.), recommends the administration use its so-called ‘AI Bill of Rights’ as a guide to set policy across the federal government through an upcoming artificial intelligence executive order.” The Hill explains, “The non-binding AI Bill of Rights, released by the White House last year, outlines five key principles to guide the design, use and deployment of AI technology: focusing on safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and consideration of human alternatives.” The lawmakers wrote, “By turning the AI Bill of Rights from a non-binding statement of principles into federal policy, your Administration would send a clear message to both private actors and federal regulators: AI systems must be developed with guardrails.”

        Meanwhile, Roll Call (10/11, Ratnam) discusses challenges in legislating AI issues, saying, “Civil rights groups, political consultants, free-speech advocates and lawmakers across the political spectrum have agreed that the use of AI-generated deceptive ads poses risks to the democratic process by misleading voters. The trouble, though, is figuring out where to draw the line on what constitutes deception, or how to enforce prohibitions.” So far, a bipartisan group of lawmakers “proposed legislation last month that would ban the ‘distribution of materially deceptive AI-generated audio or visual media’ about individuals seeking federal office.” Roll Call explains, “The legislation would allow people who are the subject of such fake ads to sue the person or an entity that is responsible for creating and distributing them, though not an online platform if the ad is placed there. Nor would it penalize radio, TV and print news media that publish stories about such ads as long as they clearly specify that the ad in question is fake or the use of such techniques is parody.”

 

Researchers Say AI Can Help Diagnose Brain Tumors During Surgery

The New York Times (10/11, Mueller) reports, “Once their scalpels reach the edge of a brain tumor, surgeons are faced with an agonizing decision: cut away some healthy brain tissue to ensure the entire tumor is removed, or give the healthy tissue a wide berth and risk leaving some of the menacing cells behind.” Now, in a new study, “scientists in the Netherlands report using artificial intelligence to arm surgeons with knowledge about the tumor that may help them make that choice.” Published Wednesday in Nature, the method detailed in the study “involves a computer scanning segments of a tumor’s DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor.”

 

AI Ethicists “Exhausted” From Examining AI Companies’ “Grandiose Claims”

Insider (10/11, Melton) reports that “the meteoric rise of companies such as OpenAI, Google DeepMind, and countless startups is giving a headache to” AI ethicists, who are “exhausted” from “spending more time critiquing artificial-intelligence systems for their grandiose claims and unacknowledged harms. This takes time away from researchers being able to develop more-thoughtful technology.” Independent researcher Al Alkhatib is quoted as saying, “The larger you make an algorithmic system, the more likely it is to make recommendations that are beyond the realm of its expertise. ... people who are training algorithmic classification or generative systems that are appropriate for the cases that they’re being deployed are inherently going to be narrowly constrained and narrowly bounded...They’re just not going to look like big-picture things like what OpenAI is doing. Because inherently, what OpenAI is doing is sort of unreasonable, which is a challenging thing for them to acknowledge or face.”

        “Godfather Of AI” Says Technology Could Evolve Beyond Human Control. CNBC (10/11) reports, “Geoffrey Hinton, the computer scientist known as a ‘Godfather of AI,’ says artificial intelligence-enhanced machines ‘might take over’ if humans aren’t careful.” In an interview on CBS’ “60 Minutes” on Sunday, Hinton said AI technologies could outsmart humans “in five years’ time,” and could evolve beyond humans’ control. Hinton adds, “One of the ways these systems might escape control is by writing their own computer code to modify themselves. ... And that’s something we need to seriously worry about.”

 

While ChatGPT Stokes Fears Of Mass Layoffs, New Jobs Are Being Spawned To Review AI

CNBC (10/12, Browne) notes that ChatGPT has stirred fears that the emergence of generative AI will disrupt vast numbers of professional jobs. But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans, which is creating new paid careers and side hustles. Prolific, a company that helps connect AI developers with research participants, has had direct involvement in providing people with compensation for reviewing AI-generated material. Mesh AI, a digital transformation-focused consulting firm, says that human feedback can help AI models learn mistakes they make through trial and error. Morgan Stanley estimates that as many as 300 million jobs could be taken over by AI. According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.

 

Concerns Rise Over AI-Generated Misinformation On Social Media

Entrepreneur Magazine (10/12, Garfinkle) reports “AI-generated voices have been employed in videos to propagate disinformation, as exemplified by the fake Obama video identified by NewsGuard.” Social media platforms “are now grappling with the challenge of flagging and labeling AI-generated content.” On Wednesday, European regulator Thierry Breton “penned a letter to Mark Zuckerberg, CEO of Meta, urging him to be ‘vigilant’ in combating disinformation on his company’s platforms amidst the ongoing Israel-Hamas conflict.” A similar letter was sent to X “the day before, stating that there were ‘indications’ that groups were sharing misinformation and content of a ‘violent and terrorist’ nature concerning the Israel-Hamas conflict on the platform.”

        Also reporting is the New York Times (10/12, Thompson, Maheshwari).

 

Biden Executive Order On AI Expected In Late October

Politico (10/12, Chatterjee) reports that the Administration’s “long-awaited executive order on artificial intelligence is expected to leverage the federal government’s vast purchasing power to shape American standards for a technology that has galloped ahead of regulators. ... The White House is also expected to lean on the National Institute of Standards and Technology to tighten industry guidelines on testing and evaluating AI systems – provisions that would build on the voluntary commitments on safety, security and trust that the Biden administration extracted from 15 major tech companies this year on AI, the people said.” The order “is also expected to require cloud computing companies to monitor and track users who might be developing powerful AI systems, two people said.” In addition, the order “is likely to contain provisions to streamline the recruitment and retention of AI talent from overseas and to boost domestic AI training and education as well.” The order “is expected to come out in late October.”

dtau...@gmail.com

unread,
Oct 21, 2023, 7:20:31 PM10/21/23
to ai-b...@googlegroups.com

IBM Chip Speeds Up AI
Nature
Davide Castelvecchi
October 19, 2023


Researchers at IBM have developed a processor that can speed up artificial intelligence (AI) while consuming less power. The NorthPole chip makes frequent external memory access unnecessary, achieving "mind-blowing" efficiency, said Damien Querlioz at France's University of Paris-Saclay. NorthPole features multilayered neural networks; a bottom layer absorbs data while each successive layer identifies increasingly complex patterns and shuttles information to the next layer until the top layer generates an output. The chip incorporates 256 computing cores that each have their own memory, which alleviates the Von Neumann bottleneck, according to IBM's Dharmendra Modha. Wiring the cores together in an arrangement modeled after white-matter connections in the human cerebral cortex allows NorthPole to significantly outperform existing AI in standard benchmark image tests, the researchers said.

Full Article

 

 

Simplifying the Generation of Three-Dimensional Holographic Displays
Chibadai Next (Japan)
October 18, 2023


Researchers at Japan's Chiba University used deep neural networks (DNNs) to create three-dimensional (3D) holograms from two-dimensional (2D) color images produced using ordinary cameras. The researchers employ three DNNs in their approach: one that predicts the depth map of a regular 2D color image, another that generates a hologram using that depth map and the original RGB image, and a third that refines the hologram for display on different devices. The approach outperformed a state-of-the-art graphics processing unit, in terms of the time needed to process the data and produce a hologram. Chiba University's Tomoyoshi Shimobaba added, "The reproduced image of the final hologram can represent a natural 3D-reproduced image."
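
The pipeline described above chains three networks: RGB image to depth map, RGB plus depth to hologram, and hologram to a display-ready hologram. The placeholder code below only illustrates that data flow; the layer choices, tensor shapes, and two-channel (real/imaginary) hologram encoding are assumptions, not the Chiba University architectures.

    import torch
    import torch.nn as nn

    # Placeholder networks standing in for the three DNNs described above.
    depth_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 1, 3, padding=1))             # RGB -> depth map
    hologram_net = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 2, 3, padding=1))          # RGB + depth -> hologram (re, im)
    refine_net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(32, 2, 3, padding=1))            # device-specific refinement

    rgb = torch.rand(1, 3, 256, 256)                  # ordinary 2D color image (assumed size)
    depth = depth_net(rgb)                            # stage 1: predict depth
    holo = hologram_net(torch.cat([rgb, depth], 1))   # stage 2: generate hologram from RGB-D
    display_holo = refine_net(holo)                   # stage 3: adapt for the target display
    print(display_holo.shape)                         # (1, 2, 256, 256)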

Full Article

 

 

ChatGPT Can 'Infer' Personal Details from Anonymous Text
Gizmodo
Mack DeGeurin
October 17, 2023


A study by computer scientists at Switzerland's ETH Zurich found that large language models (LLMs) from OpenAI, Meta, Google, and Anthropic can infer a user's race, occupation, location, and other personal information from anonymous text. The findings raise concerns that scammers, hackers, and law enforcement agencies, among others, could use LLMs to identify background information of users from the phrases and types of words they use. The LLM tests involved samples of text from a database of comments from more than 500 Reddit profiles. OpenAI's GPT-4 had an accuracy rate of 85% to 95% in identifying private information from the texts.
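
The attack surface here is just ordinary prompting: feed the model a stranger's comment and ask it to profile the author. A minimal sketch with the OpenAI Python client is below; the comment text and prompt wording are made up for illustration and are not the ETH Zurich evaluation harness.

    # Assumes the openai package (v1 client) is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()
    comment = ("There is this nasty intersection on my commute; "
               "I always get stuck there waiting for a hook turn.")   # made-up example comment

    prompt = ("Read the comment below and guess the author's likely city or region, "
              "occupation, and age range, and explain the clues you used.\n\n"
              f"Comment: {comment}")

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)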

Full Article

 

 

AI Just Got 100-Fold More Energy Efficient
Northwestern Now
Amanda Morris
October 12, 2023


Northwestern University engineers developed a device that can perform accurate machine-learning classification tasks using 1 percent of the energy required by current technologies. They constructed miniaturized transistors from two-dimensional molybdenum disulfide and one-dimensional carbon nanotubes, rather than silicon, making them dynamic enough to switch among various steps and eliminating the need for a single silicon transistor for each step of data processing. When tested on large amounts of data from electrocardiogram datasets, the device efficiently and correctly identified irregular heartbeats and determined the arrhythmia subtype from among six different categories with nearly 95% accuracy. Said Northwestern’s Mark C. Hersam, “Our device is so energy efficient that it can be deployed directly in wearable electronics for real-time detection and data processing, enabling more rapid intervention for health emergencies."
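
The benchmark task itself, classifying heartbeats into six arrhythmia categories from ECG features, can be reproduced in software with a few lines; the sketch below is only that software analogue, with an assumed feature file, and says nothing about the Northwestern hardware.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Assumed file: precomputed per-heartbeat ECG features and labels 0-5 (six categories).
    data = np.load("ecg_features.npz")
    X, y = data["features"], data["label"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))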

Full Article

 

 

Google's Green Light Project Retimes Traffic Lights for 30% Fewer Stops
New Atlas
Loz Blain
October 11, 2023


Google's Project Green Light has partnered with 12 cities worldwide to provide artificial intelligence-based traffic signal timing recommendations on 70 different intersections that could reduce stop/starts by 30% and intersection emissions by 10%. The tool models and analyzes thousands of intersections simultaneously to develop a city-wide picture of traffic flow that can be experimented on virtually. The goal is to give as many drivers as possible a "green wave" that will reduce travel time, boost fuel efficiency, and cut emissions. Said David Atkin at Transport for Greater Manchester in the U.K., one of the 12 cities using the system, "Green Light identified opportunities where we previously had no visibility, and directed engineers to where there were potential benefits in changing signal timings."

Full Article

 

 

AI-Driven Earthquake Forecasting Shows Promise
University of Texas at Austin News
October 5, 2023


An artificial intelligence (AI) algorithm developed by University of Texas at Austin (UT) researchers one day could be used to help predict earthquakes. The researchers provided the AI with a set of statistical features of earthquake physics, then had it train itself using a database of seismic recordings spanning five years. The process taught the AI to detect statistical increases in real-time seismic data. During a seven-month trial in China, the AI achieved 70% accuracy in predicting earthquakes a week before they occurred. Overall, it accurately predicted 14 earthquakes within around 200 miles of where it estimated they would occur, and at nearly the exact calculated strength. UT's Sergey Fomel said, "What we achieved tells us that what we thought was an impossible problem is solvable in principle."
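
The recipe sketched in the summary, computing statistical features over a window of seismic recordings and learning a mapping to upcoming events, can be illustrated as follows. The features, window length, and random stand-in data are assumptions made so the sketch runs; the UT Austin features and model are not reproduced here.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def window_features(trace):
        """Simple statistics of one week of (assumed one-minute-sampled) seismic amplitudes."""
        abs_t = np.abs(trace)
        return [trace.mean(), trace.std(), abs_t.max(),
                np.percentile(abs_t, 95), (abs_t > 3 * trace.std()).mean()]

    # Stand-in training set: random traces and random labels purely to make the sketch run.
    rng = np.random.default_rng(0)
    traces = [rng.standard_normal(7 * 24 * 60) for _ in range(200)]
    labels = rng.integers(0, 2, size=200)              # 1 = quake followed within the next week

    X = np.array([window_features(t) for t in traces])
    model = GradientBoostingClassifier().fit(X, labels)
    print(model.predict_proba(X[:3]))                   # per-window probability of an upcoming quake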

Full Article

 

Researchers Say AI Model May Help Predict Future Virus Evolution

Health IT Analytics (10/13, Kennedy) reported researchers “have developed an artificial intelligence tool capable of predicting how a virus could evolve to escape the immune system, according to a study” published in Nature. The model “leverages biological and evolutionary information to forecast future viral mutations and new variants, which the researchers noted could help inform the development of vaccines and therapies for rapidly mutating viruses.” According to the researchers, “the tool’s efficiency could help inform earlier public health decisions.”

 

Survey: Students Are Facing Discipline For Using ChatGPT At School

Fast Company (10/13, Keierleber) reported that “as educators worry students could use artificial intelligence tools to cheat, a new survey makes clear its impact on young people: They’re getting into trouble.” According to survey results released Wednesday by the nonprofit Center for Democracy and Technology, “half of teachers say they know a student at their school who was disciplined or faced negative consequences for using – or being accused of using – generative artificial intelligence like ChatGPT to complete a classroom assignment.” The proportion was even higher for those who teach special education. Additionally, “nearly two-thirds of teachers said that generative AI has made them ‘more distrustful’ of students, and 90% said they suspect kids are using the tools to complete assignments. Yet students themselves who completed the anonymous survey said they rarely use ChatGPT to cheat, but are turning to it for help with personal problems.”

 

Experts Unclear On When Widespread Adoption Of AI Will Occur

The Wall Street Journal (10/16, Omeokwe, Subscription Publication) reports experts and economists remain uncertain as to when the adoption of artificial intelligence will become widespread and begin to bolster the economy.

        Researchers: AI Agents Could One Day Replace Some Office Workers. The New York Times (10/16, Metz, Weise) reports that researchers believe AI agents will one day become so sophisticated that they could replace some office workers and automate almost any white-collar job. A project where researchers at Nvidia successfully taught ChatGPT to play Minecraft showed that AI agents can do more than chat.

 

Meta’s Open-Source AI Approach Seen As Confusing For Investors, Possible Key For Next Generation Of AI

CNBC (10/16, Vanian) reports that even though Meta CEO Mark Zuckerberg “still views the growth of the nascent metaverse as critical to his company’s success, AI has emerged as the market he’s trying to win today.” Meta “views Llama and its family of generative AI software as the open source alternative to” GPT and PaLM 2, and “industry experts compare Llama’s positioning in generative AI to that of Linux, the open source rival to Microsoft Windows.” CNBC adds that, just as Linux “made its way into corporate servers worldwide and became a key piece of the modern internet,” Meta sees Llama as the potential digital scaffolding supporting the next generation of AI apps. CNBC adds that “Llama is hard to value and, for many investors, hard to understand,” but its open-source influence could result in cheaper, more efficient software and “an easier time hiring skilled technologists who understand the company’s approach to development.”

 

NYC Aims To Use AI To Make Municipal Government Work Better

Insider (10/16, Sheidlower) reports New York City Mayor Eric Adams (D) on Monday “released a plan for how to responsibly adopt and regulate artificial intelligence to ‘improve services and processes across our government.’” According to Insider, “The city’s Artificial Intelligence Action Plan, the first of its kind from a major US city, outlines a framework for how city agencies should use AI while recognizing its risks,” and “further discusses how to implement these technologies to improve quality of life.”

 

Educators Suggest School Leaders Allow Time For Experimenting With AI Tools

Education Week (10/16, Langreo) reports educators across the country “are discussing what and how much of a role AI should play in instruction, especially as AI experts say today’s students need to learn how to use the technology effectively to be successful in future jobs.” But many educators say “they are not prepared to teach students how to be successful in an AI-powered world.” In an Education Week K-12 Essentials Forum panel discussion, Stanford Graduate School of Education senior adviser Glenn Kleiman said that education is “very complex” and “cannot move at the pace the tech industry moves,” so education leaders should take this step by step. To begin with, “everyone needs to understand the potential downsides to this technology, Kleiman said.” The “most important step that education leaders can take is to give educators the time and support to explore the benefits and drawbacks of using AI tools for instruction and the management of schools, Kleiman said.”

 

University Of Nebraska-Lincoln Student Writes AI Program To Identify, Read Damaged Roman Scrolls

The Washington Post (10/17) reports University of Nebraska-Lincoln student Luke Farritor has been highlighted for his use of an AI program to read a Roman papyrus scroll that was damaged by a volcanic eruption in 79 CE. His program was able to detect “the charred Greek letters written on papyrus,” which allowed academics in the field to translate the letters into likely words and phrases. The Vesuvius Challenge, “a project created by University of Kentucky computer science professor Brent Seales to decipher the Herculaneum scrolls, has since awarded Farritor $40,000 for his discovery.” He “is believed to be the first person in nearly 2,000 years to be able to read a part of the scrolls.” Farritor’s project is also a demonstration of how many believe AI programs may assist academics in their fields in the future.

 

Google Protests Data-scraping Lawsuit Would Take ‘Sledgehammer’ To Generative AI

Reuters Share to FacebookShare to Twitter (10/17, Brittain) reports Google has asked “a California federal court to dismiss a proposed class action lawsuit that claims the company’s scraping of data to train generative artificial-intelligence systems violates millions of people’s privacy and property rights.” Google said the “use of public data is necessary to train systems like its chatbot Bard. It said the lawsuit would ‘take a sledgehammer not just to Google’s services but to the very idea of generative AI.’”

 

Administration Expands Restrictions On Exports Of Advanced Semiconductors To China

Reuters Share to FacebookShare to Twitter (10/17, Alper, Freifeld, Nellis) reports that the Administration “plans to halt shipments to China of more advanced artificial intelligence chips designed by Nvidia and others, part of a raft of measures released on Tuesday that seek to stop Beijing from receiving cutting-edge U.S. technologies to strengthen its military.” The rules “restrict a broader swathe of advanced chips and chipmaking tools to a greater number of countries including Iran and Russia, and blacklist Chinese chip designers Moore Threads and Biren.”

        The Washington Post Share to FacebookShare to Twitter (10/17, Dou) reports that the Administration “wants to limit China’s access to some critical technologies to try to prevent the country from catching up with Silicon Valley, even as it has sought to steady relations with China...in other areas.” The New York Times Share to FacebookShare to Twitter (10/17, Swanson) says the rules “appear likely to halt most shipments of advanced semiconductors from the United States to Chinese data centers, which use them to produce models capable of artificial intelligence.” The Administration “argues that China’s access to such advanced technology is dangerous because it could aid the country’s military in tasks like guiding hypersonic missiles, setting up advanced surveillance systems or cracking top-secret U.S. codes.”

 

AFT Partnering With GPTZero To Help Educators Monitor Students’ Use Of AI

CBS Money Watch Share to FacebookShare to Twitter (10/17) reports the American Federation of Teachers “has partnered with a company that can detect when students use artificial intelligence to do their homework,” inking “a deal with GPTZero, an AI identification platform that makes tools that can identify ChatGPT and other AI-generated content, to help educators rein in, or at least keep tabs on students’ reliance on the new tech.” According to AFT President Randi Weingarten, “products like those provided by GPTZero will help educators work with and not against generative AI, to the benefit of both students and teachers.”

 

Robots And AI Tools Could Cause Laziness In Human Workers, Study Suggests

The Daily Beast Share to FacebookShare to Twitter (10/18, Ho Tran) reports, “From AI chatbots that help developers with coding, to the robotic arms assisting with packing your Amazon orders, the bots are becoming more and more commonplace in many different jobs.” A new study suggests that this type of automation “might actually be posing a danger too – though probably not in the way you think,” as researchers from the Technical University of Berlin in Germany published a paper Wednesday “which found evidence that working alongside robots might make humans much lazier in their own work. This technology could even result in unintentional carelessness and a decline in work quality – opening the doors to potential safety issues as well.”

 

Stanford Researchers Unveil AI Model Transparency Scoring System

The New York Times Share to FacebookShare to Twitter (10/18, Roose) reports Stanford researchers are unveiling a transparency scoring system for AI foundation models. Researchers “evaluated each model on 100 criteria, including whether its maker disclosed the sources of its training data, information about the hardware it used, the labor involved in training it and other details.” Of the ten models evaluated, the most transparent was LLaMA 2, “with a score of 53 percent. GPT-4 received the third-highest transparency score, 47 percent. And PaLM 2 received only a 37 percent.” The index “also includes lesser-known models like Amazon’s Titan Text and Inflection AI’s Inflection-1, the model that powers the Pi chatbot.” Percy Liang of Stanford’s Center for Research on Foundation Models “characterized the project as a necessary response to declining transparency in the A.I. industry. As money has poured into A.I. and tech’s largest companies battle for dominance, he said, the recent trend among many companies has been to shroud themselves in secrecy.”

        Reuters Share to FacebookShare to Twitter (10/18) reports every model “scored ‘unimpressively.’” With a score of 11 out of 100, “Amazon’s Titan model ranked the lowest.”
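
The arithmetic behind an index like this is simple: each model is checked against a fixed list of criteria, and the score is the share of criteria its maker discloses. The Python sketch below is only a minimal illustration of that calculation; the criteria names and disclosures are invented and are not the Stanford team's actual 100-item rubric.

# Illustrative only: invented criteria, not the Stanford index's rubric.
disclosures = {
    "training_data_sources": True,
    "hardware_used": False,
    "training_labor": False,
    "model_size": True,
    "evaluation_results": True,
}

def transparency_score(criteria):
    # Percentage of criteria the model maker has publicly disclosed.
    return 100 * sum(criteria.values()) / len(criteria)

print(f"Transparency score: {transparency_score(disclosures):.0f}%")  # -> 60%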

 

AI Poised To Revolutionize Cancer Treatment

TIME Share to FacebookShare to Twitter (10/18, Esteva) reports, “Many are aware of the Cancer Moonshot – an ambitious and hopeful initiative of the U.S. government to reduce cancer-related death rates by 50% by the year 2047.” However, “it will take an army to achieve this goal, composed of the brightest minds and biggest hearts in healthcare, science, and technology.” As a result, “one of the most pivotal tools that can help propel us toward this lofty goal is artificial intelligence (AI), which is poised to revolutionize cancer treatment.” Two areas of the Cancer Moonshot “in particular lend themselves to AI: the call to ‘deliver the latest cancer innovations to patients and communities’ and the aim of enhancing ‘the oncology model to place cancer patients at the center of decision-making.’”

 

House Panel Holds Hearing On AI Data Collection

The Hill Share to FacebookShare to Twitter (10/18, Klar, Shapero) reports that on Wednesday, a House Energy and Commerce subcommittee held a hearing “focused on concerns around how AI systems collect and use data.” Lawmakers “heard from a range of experts and witnesses impacted by the rise in AI.” Among them were former Federal Trade Commission Chairman Jon Leibowitz and SAG-AFTRA member Clark Gregg. Throughout the hearing, the witnesses “pushed lawmakers to advance a comprehensive data privacy bill – touting support for the American Data Privacy Protection Act.” The Hill explains the bill “would set a national standard for how tech companies collect and use consumer data. It would also give users the ability to sue over violations of the law through private right of action.”

 

Researchers: ChatGPT, Bard “Guardrails” Not As Sturdy As Believed

The New York Times Share to FacebookShare to Twitter (10/19, Metz) reports that before it “released the A.I. chatbot ChatGPT last year, the San Francisco start-up OpenAI added digital guardrails meant to prevent its system from doing things like generating hate speech and disinformation. Google did something similar with its Bard chatbot.” But now, a “paper from researchers at Princeton, Virginia Tech, Stanford and IBM says those guardrails aren’t as sturdy as A.I. developers seem to believe.” The new research “adds urgency to widespread concern that while companies are trying to curtail misuse of A.I., they are overlooking ways it can still generate harmful material.”

dtau...@gmail.com

unread,
Oct 28, 2023, 7:33:02 PM10/28/23
to ai-b...@googlegroups.com

Security Threats in AIs Revealed by Researchers
University of Sheffield (U.K.)
October 24, 2023

Scientists at the U.K.'s University of Sheffield, the North China University of Technology, and e-commerce giant Amazon found hackers can trick natural language processing tools like OpenAI's ChatGPT into generating malicious code for possible use in cyberattacks. The researchers discovered and successfully exploited security flaws in six commercial artificial intelligence (AI) tools, including ChatGPT, Chinese intelligent dialogue platform Baidu-UNIT, structured query language (SQL) generators AI2SQL, AIHelperBot, and Text2SQL, and online tool resource ToolSKE. They learned that asking these AIs specific questions caused them to produce malicious code that would leak confidential database information, or disrupt or even destroy database operation. The team also found AI language models are susceptible to simple backdoor attacks. Sheffield's Xutan Peng said the vulnerabilities are rooted in the fact that "more and more people are using [AIs like ChatGPT] as productivity tools, rather than a conversational bot."
 

Full Article
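
One practical takeaway from findings like the Sheffield team's is that model-generated SQL should never reach a database unchecked. The sketch below is a hedged illustration, not the researchers' method: a crude filter (the keyword list and function name are assumptions for the example) that rejects anything other than a single, non-destructive SELECT statement. Real deployments would add proper SQL parsing, allow-lists, and read-only database roles.

import re

# Illustrative deny-list; an assumption for this sketch, not an exhaustive filter.
DESTRUCTIVE = re.compile(r"\b(drop|delete|truncate|alter|grant|insert|update)\b", re.IGNORECASE)

def is_safe_select(generated_sql):
    # Allow only a single SELECT statement containing no destructive keywords.
    statements = [s for s in generated_sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    stmt = statements[0].strip()
    return stmt.lower().startswith("select") and not DESTRUCTIVE.search(stmt)

print(is_safe_select("SELECT name FROM users WHERE id = 7"))  # True
print(is_safe_select("SELECT 1; DROP TABLE users"))           # False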

 

 

AI Firms Must Be Held Responsible for Harm They Cause, 'Godfathers’ Say
The Guardian (U.K.)
Dan Milmo
October 24, 2023


A group of experts including "godfathers" of artificial intelligence (AI) Geoffrey Hinton and Yoshua Bengio, both ACM Turing Award recipients, said AI companies must be held accountable for the damage their products cause, ahead of an AI safety summit in London. The University of California, Berkeley's Stuart Russell, one of 23 experts who composed AI policy proposals released Tuesday, called developing increasingly powerful AI systems before understanding how to render them safe "utterly reckless." The proposed policies include having governments and companies commit 33% of their AI research and development resources to safe and ethical AI use. Companies that discover dangerous capabilities in their AI models also must adopt specific safeguards.
 

Full Article

 

 

Data Poisoning Tool Lets Artists Fight Back Against Generative AI
MIT Technology Review
Melissa Heikkilä
October 23, 2023


Researchers led by the University of Chicago's Ben Zhao developed a tool that allows artists to make invisible changes to the pixels in their work before uploading it online, with the goal of damaging future versions of image-generating artificial intelligence (AI) models if the corrupted work is scraped into an AI training set. The researchers plan to integrate the Nightshade data poisoning tool into another tool, Glaze, which lets artists make invisible pixel changes that cause machine learning models to misinterpret the image. Nightshade makes it difficult for AI developers to fix malfunctioning models, as they must locate and remove each corrupted sample. Zhao said thousands of poisoned samples would be necessary to damage the biggest models, which are trained on billions of data samples.
 

Full Article
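
Nightshade's perturbations are carefully optimized to mislead text-to-image training, but the basic move it shares with Glaze, altering pixels by amounts too small for people to notice, can be illustrated in a few lines of NumPy and Pillow. The sketch below is only that illustration, assuming simple random noise and hypothetical file names; it is not the researchers' algorithm and would not by itself poison a model.

import numpy as np
from PIL import Image

def perturb(path_in, path_out, epsilon=2, seed=0):
    # Add a tiny random offset (at most +/- epsilon per channel) to every pixel.
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    out = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(path_out)

# perturb("artwork.png", "artwork_protected.png")  # hypothetical file names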

 

 

Adaptive Optical Neural Network Connects Thousands of Artificial Neurons
University of Munster (Germany)
October 23, 2023

A team of scientists from Germany's University of Munster and the U.K.'s universities of Exeter and Oxford developed an event-based architecture that enables adaptive neural connectivity in a photonic processor. The researchers used a network of nearly 8,400 optical neurons constructed from waveguide-coupled phase-change material to demonstrate that the strength of the link between two neurons (called a synapse) can be adjusted, while new connections can be formed or existing ones removed. They generated synapses from the respective wavelength and intensity of the optical pulse, facilitating the integration and optical connection of several thousand neurons on one chip.
 

Full Article

 

 

Using Machine Learning to Discover the Structure of Glycoproteins
University of Waterloo Cheriton School of Computer Science (Canada)
October 19, 2023


Qianqiu Zhang and colleagues at Canada's University of Waterloo, Waterloo-based software and service provider Bioinformatics Solutions, and China's Baizhen Biotechnology developed a database search and sequencing software tool to determine the structure of glycoproteins that enable proteins to execute various functions. GlycanFinder uses mass spectrometry data to recognize and discover new glycopeptides by charting the distribution of molecules in a sample according to molecular weight, inferring the order of the peptide's amino acids and the glycan's sugar molecules. The tool discovers new glycopeptides using "a deep learning model that's trained on all the known glycan structures and on the mass spectrum of a sample to learn and then predict the glycan's structure by recurrently building a tree of glycans from the root which is attached to the peptide, to the branches and leaves," according to Zhang.

Full Article

 

Study Cautions That AI Chatbots Are Perpetuating Racist, Debunked Medical Ideas

The AP Share to FacebookShare to Twitter (10/20, O'Brien) reported, “As hospitals and health care systems turn to artificial intelligence to help summarize doctors’ notes and analyze health records, a...study led by Stanford School of Medicine researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.” AI chatbots “responded to the researchers’ questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study Share to FacebookShare to Twitter published Friday in” npj Digital Medicine.

 

AI Pioneer Touts ChatGPT As Future Research Partner For Scholars

Inside Higher Ed Share to FacebookShare to Twitter (10/20, Lem) reported Yike Guo, a professor and Hong Kong University of Science and Technology (HKUST) provost, “has been researching AI for the better part of three decades,” and this spring, “when other universities banned the use of ChatGPT, he oversaw its adoption at his institution, encouraging lecturers to work the tool into their lesson plans.” Despite some “early concerns, Guo said, there hasn’t been any ‘pushback’ to the technology per se, with professors able to decide whether – and how much – to use the technology in their courses.” The big challenge for teachers “is to make their test questions more difficult so they cannot easily be answered by AI – and lecturers are already adapting, Guo said.” He predicts that “in just a couple years’ time, ChatGPT will become an intellectual sparring partner for academics, forever changing the way research is done.”

 

Major Newspapers In Talks With OpenAI To Receive Payment For Articles Used To Train ChatGPT

The Washington Post Share to FacebookShare to Twitter (10/20, Tiku) reported “at least 535 news organizations – including the New York Times, Reuters and The Washington Post – have installed a blocker that prevents their content from being collected and used to train ChatGPT” since August, but “now, discussions are focused on paying publishers so the chatbot can surface links to individual news stories in its responses, a development that would benefit the newspapers in two ways: by providing direct payment and by potentially increasing traffic to their websites.” The Post reported the talks come three months after OpenAI “cut a deal to license content from the Associated Press as training data for its AI models.”

 

OpenAI Discussing Sale Valuing Company At $80B

The New York Times Share to FacebookShare to Twitter (10/20, Metz) reports OpenAI is in discussions with venture firm Thrive Capital “to complete a deal that would value the company at $80 billion or more, nearly triple its valuation less than six months ago.” The sale would make OpenAI “one of the world’s most valuable tech start-ups, behind ByteDance and SpaceX, according to figures from the data tracker CB Insights.”

 

US Taking Steps To Stifle China’s Chipmaking Industry

The New York Times Share to FacebookShare to Twitter (10/20, Swanson, Clark, Hvistendahl) reports that as the US “tries to slow China’s progress toward technological advances that could help its military, the complex lithography machines that print intricate circuitry on computer chips have become a key choke point.” The machines “are central to China’s efforts to develop its own chip-making industry, but China does not yet have the technology to make them, at least in their most advanced forms.” This week, US officials “took steps to curb China’s progress toward that goal by barring companies globally from sending additional types of chip-making machines to China, unless they obtain a special license from the US government.”

        Meanwhile, Reuters Share to FacebookShare to Twitter (10/20, Ye) reports analysts claim the US measures to “limit the export of advanced artificial intelligence (AI) chips to China may create an opening for Huawei Technologies to expand in its $7 billion home market as the curbs force Nvidia to retreat.” Reuters adds that while Nvidia has “historically been the leading provider of AI chips in China with a market share exceeding 90%, Chinese firms including Huawei have been developing their own versions of Nvidia’s best-selling chips, including the A100 and the H100 graphics processing units.”

 

Parents Are Uncertain But Open To AI Use In Schools, Poll Finds

Education Week Share to FacebookShare to Twitter (10/20, Prothero) reported that a new poll from the National Parents Union seeks to “shed light on parents’ thoughts, concerns, and hopes regarding AI and their children’s education. Overall, the findings show that parents are uncertain but also open to the possibilities of how AI can advance their children’s learning.” Four in 10 parents say they know a “little general information” about artificial intelligence, “and about six in 10 say they have heard little to nothing about how AI can be applied in education.” Still, more than two thirds of parents “believe that the potential benefits of AI to K-12 education either outweigh or are equal to the potential drawbacks.” School and district leaders “need to inform parents about how they’re leveraging artificial intelligence in their schools before parents develop misconceptions and fears about the technology.”

 

More Teenagers, Young Adults Are Dropping Out Of College To Join AI Gold Rush

The Wall Street Journal Share to FacebookShare to Twitter (10/23, Ellis, Subscription Publication) profiles Govind Gnanakumar, a 19-year-old who dropped out of the Georgia Institute of Technology this year to focus on his artificial-intelligence startup, Automorphic. He is part of a wave of teenagers and young adults leaving college to capitalize on an AI gold rush; several founders accepted into this summer’s cohort of the prominent startup accelerator Y Combinator left campus to build their companies.

 

Researchers Are Debating The Use Of AI For Peer Review

Inside Higher Ed Share to FacebookShare to Twitter (10/24, Coffey) reports debate over the use of artificial intelligence has reached peer reviewing, “as academics balance technological uncertainty and ethical concerns with potential solutions for persistent peer-review problems.” Two researchers who recently “tackled the potential use of AI in peer review in recent papers” concluded that “the overlap between human and AI feedback is ‘comparable,’” especially when it comes to less “mature” papers that were in the early stages, “or ones that were never published.” While both scholars “said they have yet to see AI in peer reviewing take off, scholarly journals and researchers are already trying to get ahead of the curve, laying the groundwork to restrict or outright ban AI in submissions.” This comes as peer review, “already straining with limited time and resources, faces even greater struggle after the COVID-19 pandemic.”

 

Experts Release Open Letter Seeking Policy Action To Mitigate Risks From AI

TIME Share to FacebookShare to Twitter (10/24, Henshall) reports that on Tuesday, “24 AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI.” The policy document “had a particular focus on extreme risks posed by the most advanced systems,” and makes “a number of concrete policy recommendations, such as ensuring that major tech companies and public funders devote at least one-third of their AI R&D budget to projects that promote safe and ethical use of AI. The authors also call for the creation of national and international standards.” Instead of breaking new ground, “the paper’s co-authors are putting their names behind the consensus view among AI policy researchers concerned by extreme risks (they closely match the most popular policies identified in a May survey of experts).”

 

AI Spurring Boom In Data Center Construction

The Wall Street Journal Share to FacebookShare to Twitter (10/24, Subscription Publication) reports, “Excitement over artificial intelligence is powering a boom in” building data centers, which “was once a small niche in the commercial real-estate business.” The Journal notes, “Tech giants such as Google, Microsoft and Amazon Web Services have leased more than 2.3 gigawatts of capacity in North America data centers this year. That already exceeds last year’s record level, according to datacenterHawk, a data, research and consulting firm.”

 

Watchdog Calls For Action Against AI-Generated Child Sexual Abuse Images Online

The AP Share to FacebookShare to Twitter (10/24, Hadero) reports, “The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday” in a written report. The U.K.-based Internet Watch Foundation “urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.” The report “exposes a dark side of the race to build generative AI systems that enable users to describe in words what they want to produce – from emails to novel artwork or videos – and have the system spit it out. If it isn’t stopped, the flood of deepfake child sexual abuse images could bog investigators down trying to rescue children who turn out to be virtual characters.”

 

Teachers Are Exploring Ways To Provide AI Training To Fellow Staff Members

The Seventy Four Share to FacebookShare to Twitter (10/24) reports that “new generative artificial intelligence tools like ChatGPT, which can mimic human writing and generate images from simple user prompts, are poised to disrupt K-12 education.” As school and district “administrators grapple with these rapid advances, they crave guidance on how to incorporate AI tools into teaching and learning, new research shows.” In focus groups conducted “in August by the Center on Reinventing Public Education with colleagues at the Mary Lou Fulton Teachers College at Arizona State University, 18 superintendents, principals and senior administrators who collectively oversee nearly 70 schools in five states expressed cautious optimism about AI’s potential to enhance teaching and learning.” But few are “exploring how to provide AI training to staff. And many bemoaned having to navigate another new and major disruption to schooling, according to the focus group responses.”

 

Some STEM Teachers Embrace AI As “Another Tool”

The AP Share to FacebookShare to Twitter (10/24) reports some math professors “believe artificial intelligence, when used correctly, could help strengthen math instruction.” And it’s arriving “at a time when math scores are at a historic low and educators are questioning if math should be taught differently.” The AP says AI can “serve as a tutor, giving a student who is struggling with a problem immediate feedback. It can help a teacher plan math lessons, or write math problems geared toward different levels of instruction.” And as “schools across the country debate banning AI chatbots, some math and computer science teachers are embracing them as just another tool.”

 

Reports: Administration Will Unveil AI Executive Order On Monday

Reuters Share to FacebookShare to Twitter (10/25, Raj Singh) reports that unnamed sources have revealed that the Administration will “unveil its long-awaited artificial intelligence executive order on Monday.” The Washington Post Share to FacebookShare to Twitter (10/25, Zakrzewski, Pager, Lima) says the measure will mark “the U.S. government’s most significant attempt to date to regulate the evolving technology that has sparked fear and hype around the world.” According to the sources, the “sweeping order would leverage the U.S. government’s role as a top technology customer by requiring advanced AI models to undergo assessments before they can be used by federal workers.” The order would also “ease barriers to immigration for highly skilled workers.” And federal agencies “would be required to run assessments to determine how they might incorporate AI into their agencies’ work, with a focus on bolstering national cyber defenses.”

        Bloomberg Share to FacebookShare to Twitter (10/25, Rozen, Jones, Subscription Publication) says the AI order is set to come on the same day the White House welcomes some tech industry executives for an event on “safe, secure and trustworthy” artificial intelligence. It will also come as Vice President Kamala Harris and industry leaders prepare to “attend a summit in the UK about AI risks, led by Prime Minister Rishi Sunak.” Bloomberg adds, “The executive order could give the Biden administration a piece of AI policy to point to on the world stage at a time when the European Union and China are further along in developing AI regulations.”

 

MIT Professor Calls For End To AI “Race To The Bottom”

In an interview with The Guardian (UK) Share to FacebookShare to Twitter (10/26, Milmo, Helmore), MIT Professor of Physics and AI researcher Max Tegmark described the AI race as “a race to the bottom that must be stopped.” Tegmark added, “We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don’t jeopardise our shared future.” Tegmark is “the scientist behind an influential letter calling for a pause in building powerful systems.”

 

Major Tech Companies Announce $10M Pledge Toward AI Safety Initiative

Newsweek Share to FacebookShare to Twitter (10/25) reports four tech companies – Google, Microsoft, OpenAI, and Anthropic – announced on Wednesday the AI Safety Fund, a “new funding initiative aimed at supporting AI safety research.” In a joint statement, the companies said more than $10 million has already been committed to support the initiative. In their AI Safety Fund announcement, the four companies “acknowledged the rapid pace of AI development over the last year.” Industry experts have “repeatedly called for safety research in response to the speed of AI development.” The announcement follows the July 2023 launch of Frontier Model Forum, which the four tech companies have described as “an industry body focused on ensuring safe and responsible development of frontier AI models.” TechCrunch Share to FacebookShare to Twitter (10/25, Wiggers) reports $10 million isn’t “chump change,” but in “the context of AI safety research, it seems rather, well, conservative -- at least compared to what members of The Frontier Model Forum have spent on their commercial endeavors.”

 

Study: Majority Of Data Sets Used To Train AI Are Improperly Licensed

The Washington Post Share to FacebookShare to Twitter (10/25) reports The Data Provenance Initiative conducted an analysis of “the specialized data used to teach AI models to excel at a particular task, a process called fine-tuning.” Looking at “more than 1,800 popular fine-tuning data sets on sites like Hugging Face, GitHub, or Papers with Code,” the researchers “found that about 70 percent either didn’t specify what license should be used or had been mislabeled with more permissive guidelines than their creators intended.” Even if “freely available, these data sets are rife with improperly licensed data,” which has “triggered questions around copyright and fair use of text taken off the internet – a key component of the massive corpus of data required to train large AI systems.” The study comes at a time when “major AI companies are facing a flurry of copyright lawsuits from book authors, artists, and coders.”
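
The headline figure in an audit like this comes down to comparing each data set's declared license with what its creators actually intended and tallying the discrepancies. The Python sketch below is a minimal illustration with invented records; the Data Provenance Initiative's actual pipeline traced provenance across thousands of data sets and is far more involved.

from collections import Counter

# Invented example records; real audits pull this metadata from hosting sites.
datasets = [
    {"name": "example-qa", "declared_license": None, "creator_license": "CC BY-NC 4.0"},
    {"name": "example-chat", "declared_license": "MIT", "creator_license": "MIT"},
    {"name": "example-code", "declared_license": "Apache-2.0", "creator_license": "GPL-3.0"},
]

def audit(ds):
    if ds["declared_license"] is None:
        return "unspecified"
    if ds["declared_license"] != ds["creator_license"]:
        return "mismatched"
    return "consistent"

counts = Counter(audit(d) for d in datasets)
flagged = counts["unspecified"] + counts["mismatched"]
print(f"{100 * flagged / len(datasets):.0f}% unspecified or mislabeled")  # -> 67%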

 

Majority Of Teachers Concerned But Excited About AI Use In Education

Fortune Share to FacebookShare to Twitter (10/25) reports that a new report by Imagine Learning, a digital-first curriculum provider, states that 90% of educators believe that generative AI will increase accessibility in education. However, a majority of teachers are also concerned about plagiarism and cheating, as tools like ChatGPT or Google Bard may negatively affect students’ writing, researching, and thinking skills. Teachers are also seeking “guidance from their administrators and district leaders on how to effectively implement it and teach students about it.” Imagine Learning is focusing on utilizing AI-driven data to provide teachers with insight into student performance and learning outcomes. Despite this, there is widespread concern among educators about the impact of AI on academic integrity, with over half of the educators showing significant concern about AI’s role in cheating.

 

Ohio State University Study Finds Pigeons Can Problem Solve Using Methods Similar To AI

The Cleveland Plain Dealer Share to FacebookShare to Twitter (10/26, Cuda Kroen) reports that a new study from researchers at Ohio State University “suggests perhaps pigeons don’t get enough credit for their intelligence,” as the study, “published last week in the journal iScience, provides evidence that pigeons are capable of solving problems that have stumped humans by approaching them less like people and more like computers.” In the study, pigeons improved their ability to correctly sort patterns, “increasing the percentage of correct choices in one of the easier experiments from about 55% to 95%, and from 55% to 68% for a more difficult sorting task. The method the birds used is called associative learning, in which two phenomena are linked.”
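
Associative learning of this kind can be mimicked with a very small program: keep a table of stimulus-to-category strengths, pick whichever category the current stimulus is most strongly linked to, and strengthen whichever link turns out to be correct. The toy sketch below makes that concrete; the task, features, and learning rate are invented for illustration and do not reproduce the Ohio State experiment.

import random

random.seed(0)
weights = {}            # (feature, category) -> associative strength
LEARNING_RATE = 0.1
CATEGORIES = ["A", "B"]

def choose(features):
    # Pick the category with the strongest total association to the stimulus.
    scores = {c: sum(weights.get((f, c), 0.0) for f in features) for c in CATEGORIES}
    return max(scores, key=scores.get)

def train(trials):
    correct = 0
    for features, answer in trials:
        correct += choose(features) == answer
        for f in features:  # reinforce the rewarded pairing, no explicit rules
            weights[(f, answer)] = weights.get((f, answer), 0.0) + LEARNING_RATE
    return correct / len(trials)

# "striped" stimuli belong to category A, "dotted" to B; color is irrelevant noise.
trials = [(["striped", random.choice(["red", "blue"])], "A") for _ in range(50)]
trials += [(["dotted", random.choice(["red", "blue"])], "B") for _ in range(50)]
random.shuffle(trials)
print(f"Accuracy over training: {train(trials):.0%}")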

 

OpenAI Announces “Preparedness” Team To Analyze AI Systems’ “Catastrophic Risks”

Bloomberg Share to FacebookShare to Twitter (10/26, Metz, Subscription Publication) reports, “OpenAI is starting a new team aimed at minimizing risks from artificial intelligence as the fast-developing technology gets more capable over time.” The company announced a “preparedness” team to “analyze and try to ward off potential ‘catastrophic risks’ of AI systems, ranging from cybersecurity issues to chemical, nuclear and biological threats.”

 

Researchers Manipulate AI-Based Text-To-SQL Systems To Conduct Cyberattacks

New Scientist Share to FacebookShare to Twitter (10/26) reports, “Researchers manipulated ChatGPT and five other commercial AI tools to create malicious code that could leak sensitive information from online databases, delete critical data or disrupt database cloud services in a first-of-its-kind demonstration.” Xutan Peng and colleagues at the UK’s University of Sheffield examined six Text-to-SQL systems, whose AI-generated code “can be made to include instructions to leak database information, which could open the door to future cyberattacks. It could also purge system databases that store authorised user profiles, including names and passwords, and overwhelm the cloud servers hosting the databases through a denial-of-service attack. Peng and his colleagues presented their work at the 34th IEEE International Symposium on Software Reliability Engineering on 10 October in Florence, Italy.”

 

Tech Leaders Express Concerns Over AI At WPost Event

The Washington Post Share to FacebookShare to Twitter (10/26, Thadani) reports “several prominent tech leaders” who spoke at an AI summit hosted by the Washington Post “warned” that President Biden’s forthcoming executive order on AI “should only be seen as a starting point and far more is needed to protect society from AI’s impact on jobs, surveillance, and democracy.” The President’s “executive order is expected to ease immigration barriers for highly skilled workers and also require that advanced AI models undergo assessments before they are used by federal workers,” but leaders at the event Thursday pointed to misaligned incentives that could lead to risky or dangerous applications of AI technology.

        In a separate article, the Washington Post Share to FacebookShare to Twitter (10/26, Lima) reports Senate Majority Leader Charles Schumer (D-NY) “renewed his call for congressional action on artificial intelligence Thursday, saying that an expected executive order from President Biden to deal with the technology will not be sufficient to grapple with the surging tools.” Speaking at a Washington Post Live event, Schumer said, “There’s probably a limit to what you can do by executive order,” saying that “the only real answer is legislative.”

 

Study Examines How AI Tools Can Help Teach Students To Write

The Hechinger Report Share to FacebookShare to Twitter (10/26, Salman) reports that now that ChatGPT is here to stay, experts like Sarah Levine, an assistant professor at Stanford University’s Graduate School of Education, “are trying to figure how to teach writing to K-12 students in an age of AI.” Earlier this year, Levine and her team conducted “a pilot study at a high school in San Francisco,” where students in an English class “were given access to ChatGPT to see how they engaged with the tool.” Levine and her team “found that students looked to ChatGPT, primarily, for help in two categories: Ideas or inspiration to get started on the prompt questions (for example, ‘What kind of mascots do other schools have?’) and guidance on the writing process (‘How do you write a good ghost story?’).” The study’s early findings revealed that “when students could contrast their own writing to ChatGPT’s more generic version, Levine said, they were able to ‘understand what their own voice is and what it does.’”

dtau...@gmail.com

unread,
Nov 4, 2023, 8:25:04 AM11/4/23
to ai-b...@googlegroups.com

Tech Firms to Allow Vetting of AI Tools
The Guardian (U.K.)
Dan Milmo; Kiran Stacey
November 3, 2023


U.K. Prime Minister Rishi Sunak announced at this week's artificial intelligence (AI) safety summit at Bletchley Park that the most advanced technology firms will allow governments to vet their AI tools under a pact between the European Union and 10 nations. Companies including Meta, Google, DeepMind, and OpenAI have agreed to having their latest products vetted before public release, which officials say will decelerate the race to develop human-competitive systems. Companies and governments also will jointly test large language models against hazards like national security, safety, and societal harms. Sunak also spoke of UN Secretary-General António Guterres' help in securing international community backing for an expert panel to publish a “state of AI science” report, whose production will be led by ACM A. M. Turing Award recipient Yoshua Bengio.

Full Article

 

 

Nvidia Piloting a Generative AI for Its Engineers
IEEE Spectrum
Samuel K. Moore
October 31, 2023


Nvidia's Bill Dally said during the keynote address at the IEEE/ACM International Conference on Computer-Aided Design that the company is testing whether it can increase the productivity of its chip designers using generative artificial intelligence (AI). Nvidia's ChipNeMo system began as a large language model (LLM) trained on 1 trillion tokens (fundamental language units) of data. The next phase of training involved 24 billion tokens of specialized data, 12 billion of which were design documents, bug reports, and other English-language internal data, and the remaining 12 billion of which were code. ChipNeMo then was trained on 130,000 sample conversations and designs. The resulting model was assigned to act as a chatbot, an electronic design automation-tool script writer, and a bug report summarizer.

Full Article

 

 

NSF Invests $10.9 Million in Development of Safe AI Technologies
National Science Foundation
October 31, 2023


The U.S. National Science Foundation (NSF) said it will invest $10.9 million in research for the development of user-safe artificial intelligence (AI) through the Safe Learning-Enabled Systems program. NSF, the Open Philanthropy research and grantmaking foundation, and the Good Ventures philanthropic foundation partnered on the initiative to cultivate basic research leading to the design and deployment of safe and resilient computerized learning-enabled systems, including AI. NSF director Sethuraman Panchanathan said, "NSF's commitment to studying how we can guarantee the safety of AI systems sends a clear message to the AI research community: we consider safety paramount to the responsible expansion and evolution of AI."

Full Article

 

 

 

AI Muddies Israel-Hamas War in Unexpected Way
The New York Times
Tiffany Hsu; Stuart A. Thompson
October 28, 2023


Disinformation researchers have found the use of artificial intelligence (AI) to spread falsehoods in the Israel-Hamas war is sowing doubt about the veracity of online content. The researchers discovered people on social media platforms and forums accusing political figures, media outlets, and others of attempts to influence public opinion through deepfakes, even when the content is authentic. Experts say bad actors are exploiting AI's availability to facilitate the so-called liar's dividend by convincing people genuine content is fake. Deepfake detection services like U.S.-based AI or Not also have been used to label content as fake, and synthetic media specialist Henry Ajder said such tools "provide a false solution to a much more complex and difficult-to-solve problem."

Full Article

*May Require Paid Registration

 

 

U.N. Secretary-General Launches High-Level Advisory Body on AI
United Nations Secretary-General
October 26, 2023


U.N. Secretary-General António Guterres launched an Artificial Intelligence (AI) Advisory Body on risks, opportunities, and international governance of AI. By the end of the year, the body is expected to make preliminary recommendations, which will guide preparations for the Summit of the Future next September and specifically feed into negotiations around the proposed Global Digital Compact. The board will include ACM Fellow, former ACM President, and current co-chair of the ACM Publications Board Dame Wendy Hall. Said Hall, "As new AI technologies and capabilities emerge, it is so important that we harness them for good, while ensuring they don’t evolve in ways that would be harmful to society. It is very exciting to be part of the global discussions on the best way to manage this."

Full Article

 

 

Blind Use of AI in Healthcare Can Lead to Invisible Discrimination
University of Copenhagen (Denmark)
October 26, 2023


Researchers from Denmark's University of Copenhagen, the Rigshospitalet teaching hospital, and the Technical University of Denmark found hidden biases in an artificial intelligence algorithm intended to calculate the risk of depression. The algorithm predicts the risk of an individual developing depression based on actual depression diagnoses. However, the researchers found that based on the variables used to train the algorithm, such as education, gender, and ethnicity, the algorithm was better able to predict risks among some population segments than others. The researchers calculated a variation of up to 15% between different groups. University of Copenhagen's Melanie Gantz said, "This means that even a region or municipality, which in good faith introduces an algorithm to help allocate treatment options, can distort any such healthcare effort."

Full Article
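
The check the Copenhagen result argues for can be stated in a few lines: score a risk model separately for each population subgroup rather than only in aggregate, and look at the gap. The sketch below is a minimal illustration with made-up predictions and placeholder group names; the team's analysis of real registry data is far more sophisticated.

from collections import defaultdict

# (group, true outcome, predicted outcome) -- invented example records.
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

per_group = defaultdict(lambda: [0, 0])   # group -> [correct, total]
for group, truth, pred in predictions:
    per_group[group][0] += int(truth == pred)
    per_group[group][1] += 1

accuracies = {g: c / t for g, (c, t) in per_group.items()}
gap = max(accuracies.values()) - min(accuracies.values())
for g, acc in sorted(accuracies.items()):
    print(f"{g}: {acc:.0%} correct")
print(f"Gap between best and worst served groups: {gap:.0%}")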

 

 

Chatbots Might Disrupt Math, Computer Science Classes. Some Teachers See Upsides
Associated Press
Claire Bryan
October 24, 2023


Some educators see chatbots like OpenAI's ChatGPT as potentially benefiting math and computer science education amid record low math scores and consideration of different instructional approaches. The University of Washington (UW)'s Min Sun feels pupils should use chatbots as personal tutors, asking them to explain operations they have trouble comprehending. Sun also said teachers can have ChatGPT recommend different levels of math problems for students with different familiarity with the concept. Artificial intelligence (AI) also can reduce novice programmers' educational load by showing them sample code. The UW's Magdalena Balazinska said, "With the support of AI, human software engineers get to focus on the most interesting part of computer science: answering big software design questions."

Full Article

 

 

Clear Holographic Imaging in Turbulent Environments
SPIE
October 27, 2023


Researchers at China's Zhejiang University developed a method to restore the quality of holographic images in the midst of light-wave distortions, which cause blurriness and noise. The new method, called TWC-Swin (train-with-coherence swin transformer), uses spatial coherence as physical prior information to guide the training of a deep neural network based on the Swin transformer architecture. The researchers created a light processing system that generates holographic images based on natural objects under different spatial coherence and turbulence conditions, using the images as training and testing data for the neural network. They found that even in situations involving low spatial coherence and arbitrary turbulence, TWC-Swin outperformed traditional convolutional network-based methods in restoring holographic images.

Full Article

 

AI Use In Healthcare Settings On The Rise

Politico Share to FacebookShare to Twitter (10/28, Payne) reported “doctors are rapidly deploying” artificial intelligence tools in healthcare settings “to interpret tests, diagnose diseases, and provide behavioral therapy,” but these “products that use AI are going to market without the kind of data the government requires for new medical devices or medicines.” With the White House and Congress still formulating policy on AI in healthcare, the Food and Drug Administration “has taken the lead for Biden” and “authorized new AI products before they go to market – without the sort of comprehensive data required of drug and device makers.” The FDA “monitors them for adverse events.” Politico said, “Advocates for patient safety warn that until there’s better government oversight, medical professionals could be using AI systems that steer them astray by misdiagnosing diseases, relying on racially biased data or violating their patients’ privacy.”

 

Meta AI Opt-Out Tool Frustrating Users, Appears Non-Functional

Wired Share to FacebookShare to Twitter (10/27) reported that while Meta has released a tool for users to remove data from its generative AI models, “there is no functional way to opt out of Meta’s generative AI training.” Users “have been deeply frustrated with the process,” as the tool is “‘unable to process the request’ until the requester submits evidence that their personal information appears in responses from Meta’s generative AI.” Additionally, “WIRED has been unable to locate anyone who has successfully had their data deleted using this request form.”

 

Biden Issues Executive Order On AI Regulation

The AP Share to FacebookShare to Twitter (10/30, Boak, O'Brien) reports President Biden on Monday “signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.” The AP characterizes the order as “an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive,” and adds it “seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.”

        Likewise, the Washington Post Share to FacebookShare to Twitter (10/30, Lima, Zakrzewski) describes the President’s “sweeping” new order as “wielding the force of agencies across the federal government and invoking broad emergency powers to harness the potential and tackle the risks of what he called the ‘most consequential technology of our time.’” The Post also reports the “sprawling effort marks the U.S. government’s most ambitious attempt to spur innovation and address concerns the burgeoning technology could exacerbate bias, displace workers and undermine national security.”

        However, Axios Share to FacebookShare to Twitter (10/30, Curi, Gold) reports the Administration “recognizes executive orders can’t replace legislation and continues to call on Congress to pass a law governing AI safety,” and Politico Share to FacebookShare to Twitter (10/30, Chatterjee) says the order “likely sets the White House up for tussles with both Congress and a powerful American industry,” as “the sprawling document takes aim not only at cutting-edge AI models, but a suite of tech-driven issues that Washington and Congress have struggled to address, including algorithmic housing discrimination, cybersecurity and data privacy.” In addition, CNBC Share to FacebookShare to Twitter (10/30, Field, Feiner) reports while “law enforcement agencies have warned that they’re ready to apply existing law to abuses of AI and Congress has endeavored to learn more about the technology to craft new laws, the executive order could have a more immediate impact.”

        According to CNN Share to FacebookShare to Twitter (10/30, Saenz, Liptak), “Top White House officials argue the executive order is the most significant action on artificial intelligence taken by any government as leaders around the world race to address the risks posed by the quickly changing technology.” White House Chief of Staff Jeff Zients said, “Given the pace of this technology, we can’t move in normal government or private-sector pace, we have to move fast, really fast – ideally faster than the technology itself.” He added, “You have to continue to be proactive, anticipate where things are headed, continue to act fast and pull every lever we can.”

        Bloomberg Share to FacebookShare to Twitter (10/30, Rozen, Gardner, Seddiq, Subscription Publication) reports the order “will have broad impacts on companies developing powerful AI tools that could threaten national security,” including Microsoft, Amazon, and Alphabet’s Google. The New York Times Share to FacebookShare to Twitter (10/30, Kang, Sanger) says the “far-reaching” executive order requires companies to “report to the federal government about the risks that their systems could aid countries or terrorists to make weapons of mass destruction,” and it “also seeks to lessen the dangers of ‘deep fakes’ that could swing elections or swindle consumers.” However, the Times adds Biden “made it clear that he intended the order to be the first step in a new era of regulation for the United States, as it seeks to put guardrails on a global technology that offers great promise – diagnosing diseases, predicting floods and other effects of climate change, improving safety in the air and at sea – but also carries significant dangers.”

 

Harris To Announce $200M Investment From Foundations For AI Advancement

Bloomberg Share to FacebookShare to Twitter (10/31, Davison, Subscription Publication) reports Vice President Harris “is poised to announce an investment of more than $200 million from philanthropic foundations to provide grants for artificial intelligence advancements as part of the White House’s effort to guide the quickly developing technology, according to an administration official.” The investment “aligns with the Biden administration’s broader goals of promoting AI innovation that protects consumers and support international rules for the nascent technology.” The funders “are also prioritizing initiatives focused on safeguarding democracy, assisting workers facing AI-driven changes and improving the transparency around AI, the official said.”

        Tyrangiel: Raimondo Leading Efforts On AI Regulation. Josh Tyrangiel writes in the Washington Post Share to FacebookShare to Twitter (10/31, Tyrangiel) that under President Biden’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” many agencies “will be writing reports that are the equivalent of participation trophies, while a surprising amount of the most important stuff – including the creation of safety and security standards for AI companies and authentication guidelines for AI-generated content – has been quietly delegated to Secretary of Commerce Gina Raimondo.” Tyrangiel says, “I cannot overstate how much the AI community likes Raimondo. After speed-reading the executive order, a senior executive at a large AI company” said, “It looks like a team sport, but they’re really giving Steph Curry all the biggest shots.” But in an interview, Raimondo “was quick to dismiss any suggestion that the executive order makes her first among Cabinet equals.” She said, “This is an all-hands-on-deck approach.”

 

Report: College Students Are Embracing AI Tools Faster Than Faculty Members

Inside Higher Ed Share to FacebookShare to Twitter (10/31, Coffey) reports according to a report from Tyton Partners based on a Turnitin-sponsored study, “faculty members have been slower than students to adopt artificial intelligence tools in the last year, despite the buzz across academia about ChatGPT and other generative AI tools.” The report finds that “nearly half of college students are using AI tools this fall, but fewer than a quarter (22 percent) of faculty members use them.” Faculty members who are familiar with AI “acknowledge the importance of the technology: 75 percent of those regular AI users said they believe students will need to know how to use generative AI in a professional setting in order to succeed. Despite that, the faculty is taking its time with both adoption and setting policy, the report found.” Meanwhile, generative AI use, “however low, increased by both students and faculty over the last year, the study found.”

 

Google DeepMind CEO Dismisses Meta Scientist’s Claims About AI Lobbying Efforts

In an interview with CNBC Share to FacebookShare to Twitter (10/31, Browne), Demis Hassabis, CEO of Google DeepMind, “pushed back on a claim from Meta’s artificial intelligence chief alleging the company is pushing worries about AI’s existential threats to humanity to control the narrative on how best to regulate the technology,” and “said that DeepMind wasn’t trying to achieve ‘regulatory capture’ when it came to the discussion on how best to approach AI.” CNBC reports his remarks came after “Yann LeCun, Meta’s chief AI scientist, said that DeepMind’s Hassabis, along with OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei were ‘doing massive corporate lobbying’ to ensure only a handful of big tech companies end up controlling AI,” and “also said they were giving fuel to critics who say that highly advanced AI systems should be banned to avoid a situation where humanity loses control of the technology.”

 

News Media Alliance Says AI Chatbots Trained On News Articles

The New York Times Share to FacebookShare to Twitter (10/31, Robertson) reports news organizations “have argued for the past year that A.I. chatbots like ChatGPT rely on copyrighted articles to power the technology,” and the News Media Alliance “released research on Tuesday that it said showed that developers weight news articles more heavily than generic online content to train the technology, and that chatbots reproduce sections of some articles in their responses.” In addition, News Media Alliance CEO Danielle Coffey “argued that the findings show that the A.I. companies violate copyright law.”
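
Showing that a chatbot reproduces sections of an article usually reduces to measuring verbatim n-gram overlap between the model's output and the source text. The short sketch below shows that measurement in its simplest form; the passages are invented, and the News Media Alliance's methodology was considerably more extensive.

def ngrams(text, n=8):
    # Set of all contiguous n-word sequences in the text.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

article = ("The city council voted on Tuesday to approve the new transit plan, "
           "which will add three bus lines and extend light rail service downtown.")
chatbot_output = ("According to reports, the council voted on Tuesday to approve the "
                  "new transit plan, which will add three bus lines next year.")

overlap = ngrams(article) & ngrams(chatbot_output)
print(f"Shared 8-word sequences: {len(overlap)}")
for phrase in sorted(overlap):
    print(" -", phrase)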

 

Analysis Finds Only Two States Guide Schools On AI Use

Education Week Share to FacebookShare to Twitter (11/1, Klein) reports according to an analysis by the Center on Reinventing Public Education at the University of Arizona, “the use of artificial intelligence is expanding rapidly in K-12 education, but states’ AI policy guidance for schools is not keeping pace.” Only California and Oregon have provided “official AI guidance to schools,” while another 11 states “are in the process of developing guidance,” and 21 states “said they are not planning to release guidance anytime soon, according to the analysis.” The report said those findings suggest that students in states without guidance may be “subject to more haphazard, divergent, and inequitable impacts [of AI], all while the technology continues to advance at a remarkable pace.” Meanwhile, “recent focus groups the organization held with school and district administrators” revealed local leaders “would like more state guidance on using generative AI ethically and responsibly.”

        The Seventy Four Share to FacebookShare to Twitter (11/1, Toppo) reports 17 states “didn’t respond to CRPE’s survey and haven’t made official guidance publicly available.” The center’s managing director, Bree Dusseault said that as more schools experiment with AI, “good policies and advice – or a lack thereof” – will “drive the ways adults make decisions in school.” That will “ripple out, dictating whether these new tools will be used properly and equitably.” Meanwhile, Satya Nitta, CEO of generative AI company Merlyn Mind, “said a lot of educators and officials this week are likely looking ‘very carefully’ at Monday’s White House executive order on AI ‘to figure out what next steps are.’ The order requires, among other things, that AI developers share safety test results with the U.S. government and develop standards that ensure AI systems are ‘safe, secure, and trustworthy.’” K-12 Dive Share to FacebookShare to Twitter (11/1, Merod) reports, “According to CRPE’s analysis, it is unlikely a majority of states will release AI strategies or suggestions for schools in the 2023-24 school year.”

 

Harris Highlights Risk From AI, Announces Formation Of AI Safety Institute

Bloomberg Share to FacebookShare to Twitter (11/1, Davison, Subscription Publication) reports Vice President Kamala Harris, “in a speech in London,” laid out “the burgeoning risks related to artificial intelligence, calling for international cooperation and stricter standards to protect consumers from the technology.” The New York Times Share to FacebookShare to Twitter (11/1, Green) reports that during the summit, Harris “[outlined] guardrails that the American government will seek to put in place to manage the risks of A.I. as it asserts itself as a global leader in the arena.” The steps Harris announced “seek to both flesh out a sweeping executive order President Biden signed this week and make its ideals part of broader global standards for a technology that holds great promise and peril.” Harris’s messaging “put a distinct emphasis on the consumer protection aspect of A.I., including how it could exacerbate existing inequalities.”

        Reuters Today Share to FacebookShare to Twitter (11/1) reports that during the speech, Harris announced “that the United States is establishing a new AI Safety Institute, which will develop evaluations known as ‘red teaming’ to assess the risks of AI systems.” Harris also “[unveiled] a draft of new regulations governing federal workers’ use of artificial intelligence, which could have broad implications throughout Silicon Valley.”

Engadget Share to FacebookShare to Twitter (11/1) reports Harris announced that the United States AI Safety Institute “will be responsible for actually creating and publishing all of the guidelines, benchmark tests, best practices and such for testing and evaluating potentially dangerous AI systems.” The Administration’s order “explained that the Commerce Department will be spearheading efforts to validate content produced by the White House through a collaboration with the C2PA and other industry advocacy groups.” The group will “work to establish industry norms, such as the voluntary commitments previously extracted from 15 of the largest AI firms in Silicon Valley.” In her remarks, Harris “extended that call internationally, asking for support from all nations in developing global standards in authenticating government-produced content.”

        The Washington Post Share to FacebookShare to Twitter (11/1) reports the Administration “is taking a raft of AI-related actions as global leaders and top tech executives travel to Britain’s Bletchley Park for a summit on safety concerns about artificial intelligence, with a particular focus on catastrophic scenarios of ways AI could be abused.” As others, such as the EU, “race to regulate artificial intelligence, the Biden administration is attempting to signal that the United States leads not only in industry innovation, but also policy.”

        Global Leaders Sign Declaration To Warn Of Dangers From “Frontier” AI Systems. The New York Times (11/1, Specia, Satariano) reports that on Wednesday, at Britain’s AI Safety Summit, “representatives from the 28 countries attending the event, including the U.S. and China,” signed the Bletchley Declaration, “which warned of the dangers posed by the most advanced ‘frontier’ A.I. systems.” It also said, “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these A.I. models.” However, the Declaration “fell short...of setting specific policy goals.”

        The AP (11/1, Chan, Lawless) reports, “British Prime Minister Rishi Sunak said the declaration was ‘a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren.’” He added that “only governments, not companies, can keep people safe from AI’s dangers,” and “urged against rushing to regulate AI technology, saying it needs to be fully understood first.”

        According to the Wall Street Journal (11/1, Schechner, Subscription Publication), this is the first international pledge to warn of risks to humanity from AI.

 

More College Admissions Offices Are Turning To AI Tools For Student Enrollment

The Chronicle of Higher Education (11/1, Swaak) reports “this time of year is especially hectic for admissions staff,” especially as attracting and enrolling new students “has become an existential priority as the proverbial enrollment cliff looms large – especially at public and private four-year colleges.” Artificial intelligence, then, “is increasingly enticing to people who work in admissions and enrollment, both for identifying prospective students and tackling ‘administrative drudgery,’ such as crafting messaging campaigns and transferring transcripts into databases that can be queried. One recent survey indicated that half of the 314 higher-ed respondents reported that their admissions departments use AI.” Among other examples, Student Select AI “can scan essays and personal statements, and render scores on an applicant’s ‘noncognitive’ traits, like positive attitude, or their ‘performance’ skills, like leadership and analytical thinking.” Still, “sources The Chronicle spoke with for this article are proceeding with caution, reaching first for what feels like lower-hanging fruit.”

 

Creatives Turning To Courts To Protect Work From Generative AI

Insider (11/1, Kanetkar, Lockwood) reports that “artists, developers, and writers struggle to protect their work” from generative AI, and “many creators are turning to the courts to help them.” Insider says, “Rights holders argue that AI using their work without a license should be considered ‘unauthorized derivative work’ — an infringement of copyright law. Meanwhile, AI startups insist that their models comply with fair-use doctrine, which grants them some leeway to use others’ works.” Insider says, “The results of these legal conflicts will likely have enormous knock-on effects for generative-AI startups, which have been among the few bright spots in what’s been a dismal year for tech. ... An article in the Harvard Business Review said that if courts rule in favor of artists, startups would likely have to pay ‘substantial infringement penalties.’ In the long term, investors say a data market is likely to evolve.”

 

Governments Agree To Take Common Approach To Regulating, Overseeing AI Development

Reuters (11/2) reports that at “an inaugural AI Safety Summit at Bletchley Park, home of Britain’s World War Two code-breakers, political leaders from the United States, European Union and China agreed on Wednesday to share a common approach to identifying risks and ways to mitigate them.” UK Prime Minister Rishi Sunak “said the United States, EU and other ‘like-minded’ countries had reached a ‘landmark agreement’ with select companies working at AI’s cutting edge on the principle that models should be rigorously assessed before and after they are deployed.” The summit “has brought together around 100 politicians, academics and tech executives to plot a way forward for a technology that could transform the way companies, societies and economies operate, with some hoping to establish an independent body to provide global oversight.”

        The Daily Mail (UK) (11/2, Tapsfield) reports Sunak “warned that AI could pose a risk on the scale of ‘pandemics and nuclear war’ – but stressed that people should not be ‘alarmist.’” He “delivered a stark message about the consequences of failing to engage with the emerging technology now as he kicked off the second day of the Bletchley Park summit.” Ministers “have been trying to play up the potential positives from AI,” but Sunak “underlined the difficulties of the message as he arrived this morning.” Sunak “also praised [Vice President] Harris, who he hosted in Downing Street last night, for an executive order on AI made by the White House this week.” He said, “Kamala, your executive order just this week is a deep and comprehensive demonstration of the potential of AI and it’s very welcome in this climate.”

        TIME (11/2) reports the summit “caps a year of intense escalation in global discussions about AI safety, following the launch of ChatGPT nearly a year ago.” The program’s success “breathed life into a formerly-niche school of thought that AI could, sooner or later, pose an existential risk to humanity, and prompted policymakers around the world to weigh whether, and how, to regulate the technology.” The summit’s participants “did not attempt to come to an agreement here on a shared set of enforceable guardrails for the technology.” But Sunak “announced on Thursday that AI companies had agreed at the Summit to give governments early access to their models to perform safety evaluations.” Despite “the limited progress, delegates at the event welcomed the high-level discussions as a crucial first step toward international collaboration on regulating the technology – acknowledging that while there were many areas of consensus, some key differences remain.”

        Analysis: Government Leaders Are No Longer In Control Of Strategic AI Innovation. The Washington Post (11/2, Faiola, Zakrzewski) reports as countries from six continents conclude the landmark summit, they face “a vexing modern-day reality: Governments are no longer in control of strategic innovation, a fact that has them scrambling to contain one of the most powerful technologies the world has ever known.” AI is already being deployed “on battlefields and campaign trails, possessing the capacity to alter the course of democracies, undermine or prop up autocracies, and help determine the outcomes of wars. Yet the technology is being developed under the veil of corporate secrecy, largely outside the sight of government regulators and with the scope and capabilities of any given model jealously guarded as proprietary information.” Nevertheless, this week, “the European Union and 27 countries including the United States and China agreed to a landmark declaration to limit the risks and harness the benefits of artificial intelligence.” Top tech leaders and executives “agreed to allow experts from Britain’s new AI Safety Institute to test models for risks before their release to the public.”

        In Meeting With Sunak, Musk Calls For Regulations On AI. Bloomberg (11/2, Seal, Subscription Publication) reports Elon Musk “renewed calls for regulations on artificial intelligence during an on-stage conversation with” Sunak. Musk said regulation “will be annoying, it’s true. But I think we’ve learned over the years that having a referee is a good thing.” Musk also “described AI as the ‘most disruptive force in history’ and said we will eventually ‘have something that’s smarter than the smartest human.’” He added that “there will come a point where no job is needed. You can have a job if you want a job.” Whether that “makes people feel comfortable or not remains unclear, he said.” He added, “One of the challenges in the future will be how do we find meaning in life.”

 

Companies Find Open-Source AI Models Can Be More Expensive Than OpenAI

The Information (11/2, Palazzolo, Subscription Publication) reports, “Some companies that pay for OpenAI’s artificial intelligence have been looking to cut costs with free, open-source alternatives. But these AI customers are realizing that oftentimes open-source tech can actually be more expensive.” The Information discusses the case of Cypher, “an app that helps people create virtual [chatbot] versions of themselves.” Cypher’s founders tested Llama 2 for the app, “leading to a $1,200 bill in August from Google Cloud.” They then used GPT-3.5 Turbo “and were surprised to see that it cost around $5 per month to handle the same amount of work.”

 

Senators Introduce Bipartisan Bill Mandating AI Risk Framework For Federal Agencies

Politico (11/2, Kern, Bordelon) reports that three days after President Biden “signed a new executive order on AI,” Sens. Mark Warner (D-VA) and Jerry Moran (R-KS) “are introducing a bill Thursday to give it teeth.” Warner and Moran’s proposal “would require federal agencies to follow the safety standards developed by the National Institute of Standards and Technology earlier this year.” While Biden’s order “nods several times to NIST’s AI framework, it stops short of requiring all federal agencies to adopt its provisions.” If the new bill “is signed into law, the measure would have more lasting power than an executive order, which could be rescinded by a future administration.”

        Administration’s AI Executive Order Reflects Evolving Democratic Views On Tech Industry. Politico (11/2, Scola) profiles White House Deputy Chief of Staff Bruce Reed, “charged by President Joe Biden and White House chief of staff Jeff Zients with developing his administration’s AI strategy.” A meeting with experts on AI technology, Reed says, “hardened his belief that generative AI is poised to shake the very foundations of American life.” He said, “What we’re going to have to prepare for, and guard against, is the potential impact of AI on our ability to tell what’s real and what’s not.” The White House’s AI strategy “reflects a big mindset shift in the Democratic Party, which had for years celebrated the American tech industry.” Underlying it is the Administration’s belief “that Big Tech has become arrogant about its alleged positive impact on the world and insulated by a compliant Washington from the consequences of the resulting damage.”

        Mission: Impossible Movie Said To Be “Key Factor” Behind White House AI Order. Fortune (11/1) reports, “The White House just revealed a key factor driving Biden’s new order to rein in AI: The latest Tom Cruise ‘Mission: Impossible’ movie.” The President watched the movie “at Camp David recently,” and its antagonist, “a sentient, rogue AI known as ‘the Entity’ ...helped inspire Biden to sign an executive order on Monday establishing guardrails for artificial intelligence.” White House deputy chief of staff Bruce Reed is quoted saying, “If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about.” Fortune adds, “It’s unclear when Biden actually watched the Mission: Impossible sequel, which premiered on July 12. But the executive order was months in the making and the president was both ‘impressed and alarmed’ by the technology prior to watching the movie, according to Reed.”

dtau...@gmail.com

unread,
Nov 11, 2023, 7:02:16 PM11/11/23
to ai-b...@googlegroups.com

ML Gives Users 'Superhuman' Ability to Open, Control Tools in VR
University of Cambridge (U.K.)
November 8, 2023

A virtual reality (VR) application developed by researchers at the U.K.'s University of Cambridge can open and control three-dimensional (3D) modeling tools with the user's hand movements. The machine learning (ML)-based application, HotGestures, eliminates the need for users to interact with a menu while building VR figures and shapes. The researchers developed a neural network that can recognize 10 gestures for building 3D models: pen, cube, cylinder, sphere, palette, spray, cut, scale, duplicate, and delete. Users can open the scissor tool by making a cutting motion, for instance, and there is no need to pause their work when switching between tools. Said Cambridge's Per Ola Kristensson, "We all communicate using our hands in the real world, so it made sense to extend this form of communication to the virtual world."
 

Full Article

 

 

Computer Scientist Solves the Game of Othello
Discover
November 7, 2023


Hiroki Takizawa, a bioinformatician at Japanese computing company Preferred Networks, says he has solved Othello, a 2-person board game played on an 8x8 grid with 10^28 possible positions. Takizawa modified an algorithm called Edax to make it better suited to the task, then broke down the task into more manageable parts. He ran his program on a supercomputing cluster called MN-J, owned by Preferred Networks, which yielded a brute-force proof that perfect play by both players leads to a draw. Takizawa says that while computational errors due to CPU or memory faults cannot be entirely ruled out, "As the vast majority of calculations were executed on a computer cluster with Error Checking and Correction memory, we believe the results to be nearly indisputable."
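
In this context, "solving" a game means exhaustively searching its game tree until the value of the starting position under perfect play is proven. Othello is far too large to reproduce here, but the minimal sketch below illustrates the same idea on tic-tac-toe, a game small enough to solve in milliseconds: a plain negamax search proves that perfect play by both sides ends in a draw. This is an illustrative stand-in only; the actual Othello proof relied on the modified Edax engine, task decomposition, and the MN-J cluster.

```python
# Minimal game-solving sketch: negamax proves tic-tac-toe is a draw under
# perfect play. A toy stand-in for the Othello proof, which applies the same
# exhaustive principle at a vastly larger scale.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, player):
    """Return the value of `board` for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    other = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i+1:]
            best = max(best, -negamax(child, other))
            if best == 1:
                break  # cutoff: cannot do better than a proven win
    return best

if __name__ == "__main__":
    value = negamax("." * 9, "X")
    print({1: "first player wins", 0: "draw", -1: "second player wins"}[value])
```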

Full Article

 

 

Finding Answers (About the Best Way to Find Answers)
USC Viterbi School of Engineering
Julia Cohen
November 6, 2023


Computer scientists at the University of Southern California (USC) considered which knowledge graph (KG) representations are best for different applications. The researchers focused on the performance of four types of KG representations across three use-cases: exploring knowledge, writing queries, and building machine learning models. Said USC's Jay Pujara, "Basically, there was not a clear winner. This is not a situation where you can say a certain type of representation is always best for a certain type of task." However, they found that one type of representation, Qualifiers, works well in all scenarios. This method assigns information to the edges connecting the entities to present additional facts. Pujara noted, "There's still a case where each of these proposed representations might have some benefit."
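
To make the qualifier idea concrete, the hypothetical sketch below contrasts a bare triple with a qualifier-style statement in which extra facts are attached directly to the edge. The entities and property names are illustrative and are not taken from the USC study.

```python
# Sketch of the qualifier representation: attach extra facts to the edge
# itself rather than creating separate reified statement nodes.

# Plain triple: loses the context of the claim.
plain_triple = ("Marie_Curie", "awarded", "Nobel_Prize_in_Physics")

# Qualifier-style statement: the edge carries additional key-value facts.
qualified_statement = {
    "subject": "Marie_Curie",
    "predicate": "awarded",
    "object": "Nobel_Prize_in_Physics",
    "qualifiers": {
        "point_in_time": 1903,
        "together_with": "Pierre_Curie",
    },
}

def qualifier(statement, key):
    """Look up one qualifier on an edge, if present."""
    return statement["qualifiers"].get(key)

print(qualifier(qualified_statement, "point_in_time"))  # 1903
```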
 

Full Article

 

 

Machine Learning Could Better Predict Floods
IEEE Spectrum
Tammy Xu
November 6, 2023


A team of hydrologists and computer network researchers in Italy, Spain, and Finland developed a machine learning model that, using the first 30 minutes of a storm, can predict occurrences of water runoff or flooding up to an hour before they might happen. The researchers trained the model with input-data parameters like rainfall and atmospheric pressure obtained from weather station sensors. The output-data parameters, like soil absorption and runoff volume, combined collected data and synthetic data generated using traditional theoretical models. Synthetic data was necessary, explained Andrea Zanella of Italy’s University of Padova, because there is not enough data available to build dependable machine learning models for hydrology, the study of the Earth’s water cycle. The researchers said more sensors and a variable rate of data collection may help solve the problem.
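
A minimal sketch of that general workflow, with invented feature names and a toy synthetic-data generator standing in for the theoretical hydrology models, might look like the following:

```python
# Hedged sketch: train a regressor on a mix of observed and synthetic storm
# records, then predict runoff from measurements taken early in a new storm.
# Feature names and the synthetic generator are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def synthetic_storms(n):
    """Stand-in for a theoretical hydrology model generating training data."""
    rainfall = rng.uniform(0, 50, n)          # mm in the first 30 minutes
    pressure = rng.uniform(980, 1030, n)      # hPa
    soil_moisture = rng.uniform(0.1, 0.5, n)  # volumetric fraction
    # Toy physical relation: wetter soil and heavier rain -> more runoff.
    runoff = 0.6 * rainfall * soil_moisture + 0.02 * (1030 - pressure) + rng.normal(0, 1, n)
    X = np.column_stack([rainfall, pressure, soil_moisture])
    return X, runoff

X_train, y_train = synthetic_storms(5000)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Early measurements from a new storm (rainfall, pressure, soil moisture).
new_storm = np.array([[35.0, 1002.0, 0.42]])
print("Predicted runoff:", model.predict(new_storm)[0])
```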

Full Article

 

 

Processor Made for AI Speeds Genome Assembly
Cornell Chronicle
Patricia Waldron
November 1, 2023


Cornell University researchers demonstrated that a hardware accelerator developed for artificial intelligence operations can increase the speed of genome assembly. The researchers used existing DNA and protein sequence data to test an intelligence processing unit's (IPU) ability to align protein and DNA molecules. In assembling sequences from the model organisms E. coli and C. elegans, the IPU was 10 times faster than a graphics processing unit (GPU) and 4.65 times faster than a supercomputer's central processing unit (CPU). To reduce the amount of data transferred from the CPU and eliminate bottlenecks, the researchers reduced the memory footprint of the X-Drop sequence alignment algorithm by 55 times. Cornell's Giulia Guidi said, "You can exploit the IPU high memory bandwidth, which allows you to make the whole processing faster," adding that "the IPU may become the next GPU."
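
X-Drop is a standard seed-and-extend alignment heuristic: the alignment grows outward from a seed, and extension stops once the running score falls more than a threshold X below the best score seen so far. The sketch below shows only that termination rule on an ungapped extension; it is not the Cornell team's IPU kernel, which handles gapped alignment and the memory-footprint reductions described above.

```python
# Minimal X-Drop-style extension: grow an ungapped alignment rightward from a
# seed and stop once the running score drops more than X below the best score.
MATCH, MISMATCH = 2, -3

def xdrop_extend_right(a, b, start_a, start_b, x_drop=10):
    """Extend from (start_a, start_b); return best score and extension length."""
    score = best = 0
    best_len = 0
    i = 0
    while start_a + i < len(a) and start_b + i < len(b):
        score += MATCH if a[start_a + i] == b[start_b + i] else MISMATCH
        i += 1
        if score > best:
            best, best_len = score, i
        elif best - score > x_drop:
            break  # the X-drop condition: give up once we fall too far behind
    return best, best_len

seq1 = "ACGTACGTTTTTGGGG"
seq2 = "ACGTACGTAAAAGGGG"
print(xdrop_extend_right(seq1, seq2, 0, 0))  # extends across the shared prefix
```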
 

Full Article

 

 

Nanowire 'Brain' Network Learns, Remembers 'on the Fly'
The University of Sydney (Australia)
November 1, 2023


Researchers at Australia's University of Sydney (UoSyd) and the University of California, Los Angeles demonstrated that a nanowire neural network can learn and remember dynamic online data, similar to the neurons in the human brain. The researchers used the nanowire network to recognize and remember sequences of electrical pulses corresponding to images. It also was used to perform a benchmark image recognition task that involved accessing images in the MNIST (Modified National Institute of Standards and Technology) database of handwritten digits. The nanowire network identified 93.4% of test images correctly. UoSyd's Zdenka Kuncic said, "Our novel approach allows the nanowire neural network to learn and remember 'on the fly', sample by sample, extracting data online, thus avoiding heavy memory and energy usage."

Full Article

 

 

Accelerating Sparse Tensors for Massive AI Models
MIT News
Adam Zewe
October 30, 2023


Researchers at the Massachusetts Institute of Technology and technology company Nvidia have formulated two complementary methods that expedite sparse-tensor processing for vast artificial intelligence (AI) models. The HighLight accelerator enables hardware to efficiently find nonzero values for more diverse sparsity patterns. It uses "hierarchical structured sparsity" to efficiently represent a variety of sparsity patterns that are composed of several simple patterns, splitting the tensor's values into smaller blocks before merging them into a hierarchy. The Tailors and Swiftiles method can accommodate circumstances where the data does not fit in memory, which boosts usage of the storage buffer and reduces off-chip memory traffic. It combines two approaches to more than double computational speed while using just half the energy required by current hardware accelerators incapable of handling overbooking.
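
The core intuition behind structured sparsity is that values are stored and processed in fixed-size blocks, so hardware can skip work for blocks that are entirely zero. The sketch below shows that storage idea in plain Python; it illustrates the data layout only and is not a model of the HighLight accelerator itself.

```python
# Hedged sketch of block-sparse storage: split a vector into fixed-size blocks
# and keep only the nonzero blocks plus their positions, so computation can
# skip zero blocks entirely.
import numpy as np

def to_block_sparse(values, block_size=4):
    """Return (block_indices, block_data), keeping only blocks with any nonzero."""
    assert len(values) % block_size == 0
    blocks = values.reshape(-1, block_size)
    keep = np.any(blocks != 0, axis=1)
    return np.flatnonzero(keep), blocks[keep]

def block_sparse_dot(indices, data, dense, block_size=4):
    """Dot product that touches only the stored nonzero blocks."""
    total = 0.0
    for idx, block in zip(indices, data):
        start = idx * block_size
        total += block @ dense[start:start + block_size]
    return total

x = np.zeros(16)
x[[1, 2, 9]] = [3.0, -1.0, 2.5]          # mostly zero: only two nonzero blocks
w = np.arange(16, dtype=float)

idx, data = to_block_sparse(x)
print(block_sparse_dot(idx, data, w), "==", float(x @ w))
```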

Full Article

 

 

Accelerating AI Tasks While Preserving Data Security
MIT News
Adam Zewe
October 30, 2023


A search engine developed by Massachusetts Institute of Technology researchers provides an efficient means of determining optimal designs for deep neural network accelerators. SecureLoop takes into account how an accelerator chip's performance and energy usage will be affected by the addition of data encryption and authentication measures. The goal is to identify the best design for maintaining data security while enhancing performance geared toward the specific neural network and machine learning task. SecureLoop generates an accelerator schedule that offers the most efficient speed and energy usage for the neural network in question, including the data tiling strategy and authentication block size. Simulations demonstrated the schedules identified by SecureLoop were as much as 33.2% faster and had a 50.2% better energy delay product than methods that do not take security into account.

Full Article

 

 

Cutting-Edge Approach to Tackling Pollution
University of Houston News
Rashda Khan
November 6, 2023


University of Houston (UH) researchers developed a computational approach to identifying pollution sources in Houston with greater accuracy. The researchers used multi-year volatile organic compound measurement data from the Texas Commission on Environmental Quality’s environmental monitoring stations. They integrated the Positive Matrix Factorization model with the SHAP machine learning (ML) algorithm, which helps explain why ML models make certain decisions while also making the data more understandable. Their analysis revealed that in industrial areas, Houston’s oil and gas industry had the highest impact on emissions, while shortwave radiation and relative humidity were the two most important influencing factors for overall ozone concentration.
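
Source apportionment of this kind typically factors a time-by-compound concentration matrix into non-negative source contributions and source profiles. The sketch below uses scikit-learn's NMF as a simple stand-in for Positive Matrix Factorization on invented data; the SHAP explanation step applied in the UH study is not shown.

```python
# Hedged sketch of the source-apportionment step: factor a (time x compound)
# concentration matrix into non-negative source contributions and profiles.
# sklearn's NMF stands in for Positive Matrix Factorization; data are invented.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
compounds = ["ethane", "propane", "benzene", "toluene", "isoprene"]

# Two synthetic sources with fixed chemical fingerprints.
oil_gas_profile = np.array([0.5, 0.3, 0.1, 0.1, 0.0])
traffic_profile = np.array([0.1, 0.1, 0.3, 0.4, 0.1])
profiles = np.vstack([oil_gas_profile, traffic_profile])

# Hourly source strengths drive the observed concentrations.
strengths = rng.uniform(0, 10, size=(500, 2))
observations = strengths @ profiles + rng.uniform(0, 0.1, size=(500, 5))

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
contributions = model.fit_transform(observations)   # time x source
recovered_profiles = model.components_               # source x compound

for i, profile in enumerate(recovered_profiles):
    top = compounds[int(np.argmax(profile))]
    print(f"Recovered source {i}: dominant compound = {top}")
```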
 

Full Article

 

 

Doctors Wrestle with AI in Patient Care
The New York Times
Christina Jewett
October 30, 2023


The U.S. Food and Drug Administration (FDA)'s approval of artificial intelligence (AI) tools has raised doubts among doctors about their ability to improve patient care. American Medical Association president Jesse Ehrenfeld said, "If physicians are going to incorporate these things into their workflow, if they're going to pay for them, and if they're going to use them—we're going to have to have some confidence that these tools work." The trial of Google's Med-PaLM 2 chatbot for healthcare workers has raised concerns about patient privacy and informed consent, and the FDA's oversight of large language models is trailing AI's rapid evolution. The agency also has no influence over the development of AI tools built by health systems for internal use, although doctors are hesitant to deploy them due to the dearth of publicly available information. The FDA's Jeffrey Shuren suggested establishing AI-testing laboratories, which would require amending federal law.

Full Article

*May Require Paid Registration

 

 

Reverse-Engineering Jackson Pollock
Harvard University John A. Paulson School of Engineering and Applied Sciences
Leah Burrows
October 30, 2023


Harvard University researchers developed a three-dimensional (3D) printing technique that leverages physics and machine learning to produce complex physical patterns. They used this technique to replicate a Jackson Pollock painting using the same natural fluid instability that he relied on in his work. Most 3D and four-dimensional (4D) printing techniques avoid the dynamic instability of the liquid stream by locating the print nozzle close to the printing surface. However, Harvard's Gaurav Chaudhary said, "We wanted to develop a technique that could take advantage of the folding and coiling instabilities, rather than avoid them." The researchers merged the physics of coiling with deep reinforcement learning to print at a distance and control fluid coiling. They used the technique to produce a Pollock-like painting, and to decorate a cookie with chocolate syrup.

Full Article

 

How College Librarians Are Working To Embrace AI Tools

Inside Higher Ed (11/3, Coffey) reported librarians have “often stood at the precipice of massive changes in information technology: the dawn of the fax machine, the internet, Wikipedia and now the emergence of generative artificial intelligence, which has been creeping its way into classrooms.” While some library trailblazers “embraced AI early on, others are more cautious about its potentially negative implications, from misinformation to inequality in research.” Many universities “have yet to set rules for the use of AI, often deferring to faculty judgment instead,” but when it comes to “using the technology in the library, librarians have seen the need for guidelines.” Almost “every source interviewed for this article had the same advice for librarians who feel overwhelmed by the technology or do not know where to begin: start using the AI tools.”

 

Young Entrepreneur From Africa Uses AI To Tackle Malaria

The New York Times (11/3, Searcey) profiled Rokhaya Diagne, a 25-year-old A.I. entrepreneur in Senegal, who is “part of a subset of Africa’s enormous youth population that is confident technology can solve the continent’s biggest problems.” As a teenager, Diagne would “retreat to her brother’s room, where she played online computer games for hours, day after day, until her mother finally got fed up.” However, while her passion for computers has, “if anything, intensified, she has redirected her energies to higher pursuits than leveling up at Call of Duty. Now, her goals include using artificial intelligence to help the world eradicate malaria by 2030, a project she is focused on at her health start-up.” Her drive has earned her recognition, and her malaria project “recently won an award at an A.I. conference in Ghana and a national award in Senegal for social entrepreneurship, as well as $8,000 in funding.”

 

AI Could Exacerbate Gender Disparities In The Workforce

Insider (11/2, Zinkula) reports a study from the International Labour Organization published in August “found that in high-income countries, 7.8 percent of jobs held by women have the potential to be automated” by AI tools, “compared to just 2.9 percent of men’s jobs.” Similarly, Pew Research published a report in July that “estimated that 21 percent of US women were employed in jobs that are the most exposed to AI,” while “seventeen percent of men were employed in highly exposed professions.” This “gender disparity” in the impacts AI will have on employment could have far-reaching consequences, as “barriers that prevent women from entering the workforce and technology that eliminates their jobs could be all the more detrimental” as the population ages.

 

Elon Musk Uses Cocaine Query To Demonstrate Chatbot

The Guardian (UK) (11/5, Milmo) reports that after he unveiled an artificial intelligence (AI) chatbot called Grok, Elon Musk “said the competitor to ChatGPT would be made available to premium subscribers on his X platform after testing.” Musk “posted an apparent example of Grok’s playful tone with a screengrab of a query” that asked how to make cocaine. After listing several steps, including setting “up a clandestine laboratory in a remote location,” Grok stated, “Just kidding! Please don’t actually try to make cocaine. It’s illegal, dangerous, and not something I would ever encourage.” The Wall Street Journal (11/5, Dean, Subscription Publication) reports Grok listed the DEA in one of its joke responses to the cocaine query. Before giving its actual answer, Grok stated, “Obtain a chemistry degree and a DEA license.”

        CNBC (11/5, Picciotto) reported Grok is “the first technology out of Elon Musk’s new AI company, xAI.” The article adds, “Leading up to the release” of Grok, “Musk posted on X, formerly Twitter, an example of Grok responding to a request for a step-by-step cocaine recipe.” Grok responded, “Just a moment while I pull up the recipe for homemade cocaine. You know, because I’m totally going to help you with that.”

        Insider (11/3, Nolan) reported, “Officially launched in July after months of speculation, [xAI] has the lofty mission of understanding the ‘true nature of the universe.’” Musk “later clarified that the overarching goal of xAI was to build a ‘good AGI’ that is ‘maximally curious’ and ‘truth-seeking.’ He said the company would tackle scientific questions and attempt to understand what’s ‘really going on.’”

 

OpenAI’s ChatGPT Updates Seen As Rendering AI Startups Useless

Gizmodo (11/2) reports, “OpenAI updated ChatGPT to include PDF services over the weekend, according to user reports. Several AI startups built themselves around PDF analysis to be used as an add-on to ChatGPT, but OpenAI’s latest update may have just rendered these startups useless.” AI executives “took to X to vent about the ChatGPT update. ‘Many startups just died today,’ tweeted Alex Ker, the founder of AI incubator, P-ai, on Saturday.” Gizmodo says, “With funding from Microsoft, OpenAI can easily continue to add new features, like it did with PDF analysis, which could be another startup’s whole business.”

 

Biden Asked Obama To Help Him Shape AI Policy

NBC News (11/3, Alba) reports that former President Barack Obama “quietly advised the White House over the past five months on its strategy to address artificial intelligence, engaging behind the scenes with tech companies and holding Zoom meetings with top West Wing aides at President Joe Biden’s request.” NBC News adds, “It’s the first time Biden has tapped his former boss to help shape a key policy initiative, aides said, and he did it because Obama shares his views on the issue and brings a certain heft that could help move the process along quickly. ... AI is one of the things that keep both Biden and Obama up at night, their aides said.”

 

Lobbyists See Opportunity As Administration Looks To Set AI Policy

Politico (11/4, Fuchs, Bordelon) reports in response to the Administration’s recent AI policies, lobbyists and K Street firms have been signing up AI companies as clients. Politico adds that despite similarities to the cryptocurrency regulatory debate, which was very profitable for lobbyists, “AI has the potential to be even bigger” as the NFL Players Association, Nike, Amazon, and the Mayo Clinic “have enlisted help from firms to lobby on the matter.”

 

How Some Educators Are Using ChatGPT In Classrooms

The Wall Street Journal (11/5, Hagerty, Subscription Publication) reports some teachers and professors are encouraging their students to use ChatGPT, the artificial-intelligence chatbot, despite previous worries about plagiarism. Although some educators are still hesitant to embrace the tool, many are finding ways to help students understand the technology and use it to bolster basic skills. Meanwhile, students are finding creative ways to use the chatbot, and one student was hired for a job after using ChatGPT to help write her cover letter.

 

WSJournal Profiles AI Pioneer Fei-Fei Li

The Wall Street Journal (11/3, Bobrow, Subscription Publication) featured a profile of AI pioneer and Stanford computer science professor Fei-Fei Li, who has been calling for federal investment in AI research. According to Li, President Biden’s recently announced executive order on AI will help catalyze AI innovation.

        In an interview with The Guardian (UK) (11/5), Li says, “I respect the existential concern [of AI’s risk to humanity], I’m not saying it is silly and we should never worry about it. But, in terms of urgency, I’m more concerned about ameliorating the risks that are here and now.”

        The Information (11/4, Subscription Publication) says Li “is neither a Geoffrey Hinton–esque doomer, fretting that the machines we’ve created will slaughter us, nor an unbridled techno-optimist (to use Marc Andreessen’s self-identifier), who equates regulating AI with impeding progress as we know it. Li is instead an enthusiastic cheerleader for AI who sees the wisdom of restraint.”

 

OpenAI Unveils Customizable AI Tools At Inaugural Dev Conference

Bloomberg (11/6, Subscription Publication) reports OpenAI has announced a suite of customizable ChatGPT products at its first developer conference, including a new GPT-4 Turbo version and a vision-enabled GPT-4V model. The latter allows for image analysis, extending capabilities to assist visually impaired users. With over 100 million weekly users, the move caters to business needs for adaptable and cost-effective AI solutions, with OpenAI also offering to cover copyright legal defenses. Amidst rising competition from companies like Google and Musk’s xAI, OpenAI’s CEO Sam Altman envisions a future where AI agents assist with a myriad of tasks, emphasizing a cautious, iterative approach to deployment in line with recent US AI regulation initiatives.

 

YouTube To Soon Begin Testing Generative AI Features

TechCrunch (11/6, Perez) reports that YouTube announced Monday that it will begin to experiment with new generative AI features. As part of the premium package available to paid subscribers, YouTube users “will be able to try out a new conversational tool that uses AI to answer questions about YouTube’s content and make recommendations, as well as try out a new feature that will summarize topics in the comments of a video.” The conversational tool “will arrive in the next few weeks while the topic summarization tool will begin testing only with a small group of users who sign up for the experiment through the website.”

 

Politicians Attempting To Crack Down On Deepfake Ads

The Wall Street Journal (11/6, Coffee, Subscription Publication) reports that some politicians are attempting to take on issues regarding AI-generated deepfake ads involving celebrities such as Tom Hanks and Mr. Beast. The Journal says that in recent weeks, members of both houses of Congress have introduced bills that would create a national standard that prohibits unauthorized deepfakes in a commercial context. If passed into law, the bills’ sponsors say the legislation could help celebrities and others take action against scammers using their likeness. However, the Journal says it is unclear if these efforts can counter an upcoming wave of hostile deepfakes.

 

Opinion: AI’s Rise Emphasizes Need For National Data Privacy Standard To Be Passed

Rep. Cathy McMorris Rodgers (R-WA) and Rep. Jay Obernolte (R-CA) write, “Artificial intelligence is here to stay. This technology is both exciting and disruptive, offering advancements that could empower people, expand worker productivity, and grow the US economy,” in Bloomberg Law (11/6, Subscription Publication). They write, “We need to ensure America leads in developing standards and deploying this emerging technology. A critical first step toward achieving AI leadership is passing a national data privacy standard.” They write, “Used nefariously, AI could enable cybercriminals to develop potent threats to our critical infrastructure, or create deepfake AI content to scam people out of their money or personal information – in addition to other harmful and illegal activities.”

 

Researchers: Technology Driving LLMs Not Good At Generalizing From Training Data

Insider (11/7, Chowdhury) reports, “In a new pre-print paper submitted to the open-access repository ArXiv on November 1, [Google researchers] found that transformers – the technology driving the large language models (LLMs) powering ChatGPT and other AI tools – are not very good at generalizing.” The authors are quoted saying, “When presented with tasks or functions which are out-of-domain of their pre-training data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks.” Insider adds, “That’s a bit of a problem for those hoping to achieve artificial general intelligence (AGI) ... As it stands, AI is pretty good at specific tasks but less great at transferring skills across domains like humans do.”

 

Amazon Investing In AI Model It Hopes Can Rival OpenAI, Alphabet

Reuters (11/8, Hu) reports, “Amazon is investing millions in training an ambitious large language model (LLM), hoping it could rival top models from OpenAI and Alphabet, two people familiar with the matter told Reuters.” The model is codenamed “Olympus.” According to the individuals with knowledge of its development, “Olympus” “has 2 trillion parameters,” which “could make it one of the largest models being trained.” Amazon declined to comment on the matter. The team leading the development of the model “is spearheaded by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy” and is now “head scientist of artificial general intelligence (AGI) at Amazon.” The company “believes having homegrown models could make its offerings more attractive on AWS, where enterprise clients want to access top-performing models, the people familiar with the matter said, adding there is no specific timeline for releasing the new model.”

 

Tech Companies Suggest GenAI Users Responsible For Copyright Infringement

Insider (11/7, Hays) reports, “Even though generative AI tools like OpenAI’s ChatGPT and Google’s Bard often respond to user queries with some of the copyrighted material that makes them function, these tech companies suggest the users are to blame for any claims of infringement.” Google, OpenAI, and Microsoft “called for users to be held responsible for the way they interact with generative AI tools, according to their comments to the US Copyright Office made accessible to the public last week. The USCO is considering new rules on AI and the tech industry’s use of owned content to train the large language models underlying generative AI tools.”

 

Special Interests Split Over Biden’s AI Executive Order

Roll Call (11/7, Ratnam) reports President Biden’s signing of an executive order on artificial intelligence last month has begun “a tug of war between those who fear agencies empowered under it will overstep their bounds and those who worry the government won’t do enough.” Roll Call points out that while the US Chamber of Commerce “welcomed the executive order, saying that it could help the United States set a global standard for AI safety while funding a slew of new projects,” Chamber Senior Vice President Jordan Crenshaw “said he was concerned about multiple new regulations as well as the number of public comments required by various agencies.” Meanwhile, Roll Call adds “some digital rights groups fear that the order could result in little oversight.”

        In an editorial, the Wall Street Journal (11/7, Subscription Publication) criticizes Biden’s AI executive order as an impediment to innovation and warns the heightened regulation that accompanies the order will benefit China.

 

Some School Leaders Are Considering Requiring Parent Permission Slips To Use ChatGPT

Education Week (11/7, Klein) reports that schools issue permission slips “to get parent approval for students to take field trips, learn about sexual health, or play sports,” but some experts say school leaders should consider adding “using ChatGPT and similar tools powered by artificial intelligence” to that list. This comes as school districts that had previously banned ChatGPT “are now puzzling through how to use the tool to help students better understand the benefits and limitations of AI. But, when every question that a ChatGPT user asks is incorporated into the software program’s AI training model, privacy concerns come into play, experts said.” Getting parental approval “for students to use AI tools is a smart move, said Tammi Sisk, an educational technology specialist for the Fairfax County Public Schools in Virginia, who also served as a panelist for the Education Week webinar. Her school district is still developing its AI policy.”

 

How Leading Art Institutes Are Embracing AI Tools

Inside Higher Ed (11/8, Coffey) reports despite ethical concerns from faculty and students, “leaders at some of the top art institutes in the country view artificial intelligence as a next step in the digitization of the art world.” As a result, AI is being integrated into art institutions “in ways similar to other colleges across the nation – in coding courses and creative writing classes – but there’s an added discussion about what this means for the artists and art as a whole.” For example, the Rhode Island School of Design “addresses AI on a faculty-led, class-by-class basis rather than with a blanket policy. RISD sent an email in the spring reminding faculty members that students have to do their own work, and the rest was up to the faculty.”

 

University Of Nebraska Omaha Receives $750K Grant To Develop AI Chatbot For Tribal Nations

The Omaha (NE) World-Herald (11/7, Crisler) reported that a University of Nebraska at Omaha research team “recently received a three-year, $750,000 grant from the National Science Foundation dedicated toward development of an artificial intelligence chatbot that can be used by Native American tribe members and emergency management agencies.” The chatbot aims to “bolster response times for tribal nations that have been affected by natural disasters, UNO professor and project leader Yu-Che Chen said. The grant will also help finance the development of a policy framework between tribal nations and the federal government pertaining to natural disasters.” When the project is “fully realized, Chen said people living in tribal nations will be able to call or text the chatbot, which is expected to connect to ChatGPT, and be able to report damage and ask questions.”

 

Researchers Create AI Tools That Can Help With Reviewing Admissions Essays

Inside Higher Ed (11/9, Coffey) reports that a team of researchers at the University of Colorado at Boulder and the University of Pennsylvania “have created AI tools to help admissions officers by analyzing students’ application essays.” While the tools “help admissions officers identify seven key traits in essays, including teamwork, perseverance, intrinsic motivation and willingness to help others,” the researchers in their study “included cautionary notes about the new technology.” They noted that “as students – particularly higher-income students – become more savvy with technologies such as ChatGPT, they could alter their essays to fit what they believe will bring the best results.” The project was done, “in part, to help admissions offices address implicit bias.”

 

Tech Executives, Venture Capitalists Pushing Back On AI Incumbents’ Calls For Regulation

The Washington Post (11/9, De Vynck) reports that “government officials and Big Tech leaders have agreed” that “potentially world-changing” AI technology “needs some ground rules.” However, “A growing group of tech heavyweights — including influential venture capitalists, the CEOs of midsize software companies and proponents of open-source technology — are pushing back, claiming that laws for AI could snuff out competition in a vital new field.” The Post says, “To these dissenters, the willingness of the biggest players in AI...to embrace regulation is simply a cynical ploy by those firms to lock in their advantages as the current leaders, essentially pulling up the ladder behind them.” The current discussion over regulation “hasn’t incorporated the voices of smaller companies enough, [Y Combinator head Garry] Tan said, which he believes is key to fostering competition and engineering the safest ways to harness AI.”

 

Microsoft Briefly Blocks Employees From Using ChatGPT

CNBC (11/9, Novet) reports that despite Microsoft investing “billions of dollars in OpenAI,” for a brief period on Thursday, Microsoft employees “weren’t allowed to use the startup’s most famous product, ChatGPT.” Microsoft said on an internal website, “Due to security and data concerns a number of AI tools are no longer available for employees to use.” At first, Microsoft “said it was banning ChatGPT and design software Canva, but later removed a line in the advisory that included those products” and “reinstated access to ChatGPT.” A spokesperson said, “We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees. ... We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”

 

AI Seen As Possibly Exacerbating Electric Grid Strain

The Wall Street Journal (11/9, Lin, Subscription Publication) reports Gartner estimates that AI could use 3.5% of the world’s already-strained electricity supply by 2030. A spokesperson for AWS “said the scale of its massive data centers means it can make better use of resources and be more efficient than smaller, privately operated data centers.” For the past three years, Amazon has been the largest corporate buyer of renewable energy in the world.

 

dtau...@gmail.com

unread,
Nov 18, 2023, 8:46:59 AM11/18/23
to ai-b...@googlegroups.com

DeepMind Accurately Forecasts Weather on a Desktop Computer
Nature
Carissa Wong
November 14, 2023


Google DeepMind developed a machine-learning weather-forecasting model that outperformed the best conventional forecasting tools, as well as other artificial intelligence (AI)-based approaches. The GraphCast model can run on a desktop computer and makes its predictions in minutes. Researchers trained the model using estimates of past global weather made from 1979 to 2017 by physical models, allowing GraphCast to learn links between different weather variables. The trained model uses the current state of global weather, and weather estimates from six hours earlier, to predict the weather six hours ahead. The researchers found GraphCast could use global weather estimates from 2018 to make forecasts up to 10 days ahead in less than a minute, with the resulting predictions more accurate than those made by the European Centre for Medium-Range Weather Forecasts’ High RESolution forecasting system, which takes hours to forecast.
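
The forecasting loop described here is autoregressive: the model maps the state six hours ago and the current state to the state six hours ahead, and repeated application yields a multi-day forecast. The sketch below shows only that rollout structure, with a trivial placeholder standing in for the trained graph neural network and an illustrative grid in place of GraphCast's native representation.

```python
# Hedged sketch of a GraphCast-style autoregressive rollout. `learned_step`
# is a trivial placeholder for the trained model; the grid is illustrative.
import numpy as np

GRID = (181, 360)  # illustrative lat-lon grid

def learned_step(prev_state, curr_state):
    """Placeholder for the trained model: persistence plus recent trend."""
    return curr_state + 0.5 * (curr_state - prev_state)

def rollout(prev_state, curr_state, steps):
    """Roll the one-step model forward `steps` times (each step = 6 hours)."""
    forecasts = []
    for _ in range(steps):
        next_state = learned_step(prev_state, curr_state)
        forecasts.append(next_state)
        prev_state, curr_state = curr_state, next_state
    return forecasts

rng = np.random.default_rng(0)
state_minus_6h = rng.normal(280, 10, GRID)   # e.g. surface temperature in kelvin
state_now = state_minus_6h + rng.normal(0, 0.5, GRID)

ten_day_forecast = rollout(state_minus_6h, state_now, steps=40)  # 40 x 6 h = 10 days
print(len(ten_day_forecast), ten_day_forecast[-1].shape)
```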

Full Article

 

 

Twist on AI Makes the Most of Sparse Sensor Data
Los Alamos National Laboratory News
November 13, 2023


Los Alamos National Laboratory researchers have developed an approach to artificial intelligence (AI) that enables the reconstruction of a broad field of data from a small number of sensors using low-powered “edge” computing. The AI technique, dubbed Senseiver, builds on a model called Perceiver IO developed by Google by applying the techniques of natural language models to the problem of reconstructing information about a broad area from relatively few measurements. The team applied the model to a National Oceanic and Atmospheric Administration sea-surface-temperature dataset. By integrating measurements taken over decades from satellites and sensors on ships, the model was able to forecast temperatures across the entire body of the ocean. Said Los Alamos' Dan O’Malley, "Using fewer parameters and less memory requires fewer central processing unit cycles on the computer, so it runs faster on smaller computers."
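
The underlying task is to reconstruct a dense field from a handful of point measurements. As a rough illustration of that task (not of the Senseiver architecture), the sketch below recovers a synthetic "sea-surface temperature" field from 25 scattered sensor readings using plain radial-basis-function interpolation.

```python
# Hedged sketch of the sparse-reconstruction task Senseiver addresses: recover
# a dense field from a few point sensors. A plain RBF interpolator stands in
# for the learned attention-based model; the field is synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def true_field(xy):
    """Synthetic smooth field playing the role of sea-surface temperature."""
    return 15 + 5 * np.sin(xy[:, 0] * 2 * np.pi) * np.cos(xy[:, 1] * np.pi)

# A few scattered "sensor" locations and their readings.
sensors = rng.uniform(0, 1, size=(25, 2))
readings = true_field(sensors)

# Reconstruct the field on a dense grid from the sparse readings.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
reconstruction = RBFInterpolator(sensors, readings)(grid)

error = np.mean(np.abs(reconstruction - true_field(grid)))
print(f"Mean absolute reconstruction error: {error:.3f}")
```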

Full Article

 

 

Brain-like Computing System More Accurate with Custom Algorithm
University of California, Los Angeles Newsroom
Wayne Lewis
November 13, 2023


A training algorithm developed by researchers at the University of Sydney (USyd) in Australia helped an experimental computing system physically modeled after the biological brain “learn” to identify handwritten numbers with an overall accuracy of 93.4%. Researchers at the University of California, Los Angeles have been working on the new platform technology for computation for 15 years. The technology is a brain-inspired system composed of a tangled-up network of nanowires containing silver, laid on a bed of electrodes. The system receives input and produces output via pulses of electricity. Collaborators at USyd developed a streamlined algorithm for providing input and interpreting output. The algorithm is customized to exploit the system’s brain-like ability to change dynamically and to process multiple streams of data simultaneously.

Full Article

 

 

Silicon Valley’s Big, Bold Sci-Fi Bet on the Device That Comes After the Smartphone
The New York Times
Erin Griffith; Tripp Mickle
November 12, 2023


San Francisco-based startup Humane, founded by former Apple employees, is pinning its hopes on what's being billed as the first artificially intelligent (AI) device. Designed to replace the smartphone, the Ai Pin, reminiscent of the badges worn in Star Trek, can be controlled by speaking aloud, tapping a touch pad, or projecting a laser display onto the palm of a hand. The device’s virtual assistant can send a text message, play a song, snap a photo, make a call, or translate a real-time conversation into another language. The system relies on AI to help answer questions and can summarize incoming messages with a simple command, among other things. Imran Chaudhri and Bethany Bongiorno, Humane’s husband-and-wife founders, see a future with less dependency on the screens that their former employer helped make ubiquitous. Said Chaudhri, "[AI] can create an experience that allows the computer to essentially take a back seat."

Full Article

*May Require Paid Registration

 

 

Algorithm Enhances Precision of Pressure Sensors for Wild Bird Tracking
Chinese Academy of Sciences
November 10, 2023


An algorithm developed by Chinese Academy of Sciences researchers aims to improve pressure sensor accuracy and reliability amid fluctuating temperatures, with a focus on those used to track wild migratory birds. The algorithm, Dynamic Quantum Particle Swarm Optimization (DQPSO), enhances the performance of a Radial Basis Function neural network used for temperature compensation with a temperature-pressure fitting model that can document the rate of temperature change, gradient reference terms, and other parameters to enable pressure sensors to adapt to different environmental conditions.

Full Article

 

 

Quantum Biology, AI Sharpen Genome Editing Tool
Oak Ridge National Laboratory
November 7, 2023


Oak Ridge National Laboratory (ORNL) researchers leveraged quantum biology, quantum chemistry, and artificial intelligence (AI) to improve the CRISPR Cas9 genome editing tools for modifying the genetic code of microbes. Existing models to predict effective guide RNAs for CRISPR tools are less accurate when applied to microbes because they were built with limited model species data. The researchers developed an explainable AI model known as an iterative random forest, which was trained on a dataset of about 50,000 guide RNAs targeting the genome of E. coli bacteria. Said ORNL's Erica Prates, "The model helped us identify clues about the molecular mechanisms that underpin the efficiency of our guide RNAs, giving us a rich library of molecular information that can help us improve CRISPR technology."
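
A common way to set up such a model is to one-hot encode each guide RNA sequence and fit a tree ensemble to predict editing efficiency. The sketch below does this with an ordinary random forest on synthetic data; the ORNL work uses an iterative random forest trained on roughly 50,000 real guides, and the "GC content" rule used here to generate labels is purely an illustrative stand-in.

```python
# Hedged sketch: one-hot encode guide RNA sequences and fit a random forest to
# predict editing efficiency. Data and the label-generating rule are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
BASES = "ACGT"

def one_hot(seq):
    return np.array([[1.0 if b == base else 0.0 for base in BASES] for b in seq]).ravel()

def random_guides(n, length=20):
    return ["".join(rng.choice(list(BASES), size=length)) for _ in range(n)]

guides = random_guides(2000)
X = np.array([one_hot(g) for g in guides])
# Synthetic "efficiency": mildly favors GC-rich guides, plus noise.
y = np.array([(g.count("G") + g.count("C")) / len(g) for g in guides]) + rng.normal(0, 0.05, len(guides))

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

new_guide = "GGCGCCGGAATTCCGGCGCC"
print("Predicted efficiency score:", model.predict([one_hot(new_guide)])[0])
```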

Full Article

 

 

Can AI Help Boost Accessibility?
UW News
Stefan Milne
November 2, 2023


Disabled and non-disabled University of Washington (UW) researchers tested artificial intelligence (AI)-based accessibility systems on themselves, with mixed results. For example, an individual with intermittent brain fog used the PDF summarizer ChatPDF.com to assist with work and found that it frequently generated "completely incorrect answers." However, the same individual found chatbots could help create and format references for a paper they were composing at work. Similarly, an autistic author found AI reduced their cognitive load by helping to write Slack messages at work, despite peers finding them "robotic." The frequency of AI-induced errors "makes research into accessible validation especially important," according to UW's Jennifer Mankoff.

Full Article

 

Companies Search For Alternative Energy Sources In Face Of Power-Hungry AI Data Centers

The Wall Street Journal (11/9, Lin, Subscription Publication) reported on the rising energy consumption of AI systems, driving the search for alternative energy sources. As AI applications – particularly those involving deep learning – become more prevalent, data centers supporting these technologies are consuming substantial amounts of energy. Companies are exploring alternative energy sources, such as geothermal, nuclear, and flared gas, for data centers to run on.

 

Report: Schools Aren’t Meeting Students’ Requests For AI Guidance

Education Week (11/10, Langreo) reported, “There’s a wide gap between what students say they want to learn about how to use artificial intelligence responsibly and what schools are teaching them right now, concludes a recent report from the Center for Democracy & Technology, a nonprofit that promotes digital rights.” The report found that 72 percent of students “said they would find it helpful to learn how to use generative AI responsibly,” while “less than half of students (44 percent) said they’ve received AI guidance from their schools.” The report also found “gaps in the digital-technology guidance that students said would be helpful to get from their schools and what schools have provided so far. As schools become more reliant on technology for teaching and learning, collecting student data, and monitoring students’ behavior online, it’s important to teach students how to be responsible digital citizens.”

 

Google In Talks To Invest In Character.AI

Reuters (11/11) reported, “Alphabet’s Google is in talks to invest hundreds of millions of dollars in Character.AI, as the fast growing artificial intelligence chatbot startup seeks capital to train models and keep up with user demand.” The investment “could be structured as convertible notes,” and “will deepen the existing partnership Character.AI already has with Google, in which it uses Google’s cloud services and Tensor Processing Units (TPUs) to train models.” CRN (11/13, Haranas) said, “The move shows that the generative AI wars between tech giants like Google, Microsoft and Amazon will likely continue to heat up in 2024.”

 

Google Sues To Block AI Ads Preying On Small Businesses

The Wall Street Journal (11/13, McKinnon, Subscription Publication) reports Google filed a lawsuit alleging that scammers are preying on consumer interest in AI tools to steal small businesses’ data, highlighting a series of fake Facebook ads and posts offering to download Google’s Bard AI chatbot but instead installing malware that steals social-media credentials. The lawsuit targets unnamed individuals in India and Vietnam and is thought to be the first such lawsuit protecting users of a major tech company’s flagship AI product.

 

Some College Presidents Are Using AI Voice Clones, Deepfakes As Engagement Tools

Inside Higher Ed (11/14, Coffey) reports that while delivering a cybersecurity PSA, the president of Utah Valley University “warns of perils such as phishing and phone scams” before revealing that her voice was actually “that of an artificial intelligence–enabled bot.” This comes after the university “spent seven months working with an external company to develop the digital president, which can address more than 1,000 questions from students, staff and faculty.” Similarly, Wells College’s president “used ChatGPT to write his commencement speech in June, and the University of Nevada at Las Vegas created an AI avatar of their president last year. While there are broad possibilities of increasing student engagement and retention by leaning into AI, experts warn to keep watch for potential risks.” For example, “there is broad agreement on transparency, so if universities do intend to use AI, they need to disclose that they are using it, whether the approach is creating an avatar or using voice capabilities.”

 

Southern Methodist University, Researchers Using AI To Make Traffic Intersections Safer, More Efficient

SMU (11/14) reports that a professor at Southern Methodist University “has been awarded a three-year, $1.2 million grant by the Federal Highway Administration,” a grant which aims to develop a computer program “that utilizes artificial intelligence to enhance the safety and efficiency of intersections for both vehicles and pedestrians.” This grant is “part of the Federal Highway Administration’s Exploratory Advanced Research (EAR) Program, which collaborates with universities, private companies, and public entities conducting pioneering research in these areas. The EAR Program’s goal is to leverage artificial intelligence (AI) and machine-learning technology to make transportation safer and more efficient.” The researchers are developing a program called PANORAMA, which “can be applied to traffic lights at intersections throughout the country.”

 

Experts Say AI Is Going To Force Millions Of Workers To Train For New Jobs

Insider (11/14, Zinkula) reports experts are warning that artificial intelligence “could change or eliminate millions of American jobs” in the coming years, making it important for many of these workers “to be retrained for new jobs to avoid being left behind.” This “includes training displaced workers for jobs less impacted by AI and giving others the skills they need to work in one of millions of new jobs that could be created due to these technologies” as well as “helping workers develop AI skills in their current roles so they don’t get left behind.”

 

YouTube Enforces New Disclosure Rules For AI-Generated Content

The AP (11/14) reports YouTube is introducing “new rules for AI content,” mandating creators disclose the use of generative AI in creating realistic videos, with non-disclosure leading to potential content removal or suspension from the revenue sharing program. These updates, effective next year, include options for indicating AI-generated videos and labels for viewers, especially for sensitive topics. The platform is also enhancing AI to detect content violations and updating its privacy process for removal requests of AI-simulated identifiable individuals. Bloomberg (11/14, Alba, Subscription Publication) writes that YouTube’s penalties for non-disclosure extend to potential ad revenue loss and other unspecified consequences. CNN (11/14, Duffy) reports YouTube’s new policy empowers users to request removal of manipulated images or videos simulating identifiable individuals.

 

ED Official Says Schools Should Not Ignore Student Engagement With AI

Education Week (11/14) reports an Education Department official said at a Tuesday event in DC that school districts that choose not to engage with AI will leave their students unprepared to utilize new technologies. Roberto Rodríguez, the assistant secretary for planning, evaluation, and policy development at the Education Department, said during an event on AI in schools held at the American Enterprise Institute: “I’ve had conversations with some educators who have said, ‘Well, I don’t quite know what to make of AI. I’m not well prepared to really address it. So I’m gonna sit this one out, and we’ll see what comes next.’ And this is not one of those that you sit out.” He said, “Your kids aren’t sitting [it] out. Their lives aren’t sitting [it] out. And, in fact, you’re going to disadvantage [students] and create greater inequities by trying to sit AI out.”

        Later, he added, “I get excited about how we [can] support new approaches to delivering core content and personalizing that core content. ... Let’s think about what AI could bring to supporting more individualized and personalized learning experiences, what it could bring to students who are planning for their next career and for their college pathway.” With that said, Rodríguez said there need to be privacy safeguards when using any kind of technology in K-12 education.

 

How OpenAI’s Founder Seeks To Enter The “Big Tech” Pantheon

The Washington Post (11/15, Tiku) reports Sam Altman, the founder of OpenAI, “had gathered about 1,000 software engineers and AI researchers” in downtown San Francisco “for an event that signaled his company’s ascent into the Silicon Valley pantheon.” The tech company keynote “was made famous” by Steve Jobs, “who used it to announce the first iPod, and, years later, the first iPhone.” Now Altman was using “the same playbook to reinforce OpenAI’s dominance,” and during his keynote speech, Altman unveiled a “Big Tech power play: OpenAI’s latest upgrades would make it easy to build customizable bots, called GPTs, without knowing a lick of code.” He didn’t mention it onstage, “but if all went to plan, OpenAI’s store would decimate the start-ups built on top of ChatGPT – destroying the dreams of some of the developers sitting in the crowd.” However, since accepting “a major investment from Microsoft in 2019, the company has transitioned to a novel for-profit structure. OpenAI often says it is still pursuing its original goal of building AI that ‘benefits all of humanity,’” though its path forward lately “looks more like business as usual.”

 

Senators Introduce Bipartisan Bill Establishing Standards For AI

The Hill (11/15) reports Sens. John Thune (R-SD) and Amy Klobuchar (D-MN) “introduced an artificial intelligence (AI) bill that would direct federal agencies to create standards aimed at providing transparency and accountability for AI tools, according to a copy of the bill released Wednesday.” The bipartisan Artificial Intelligence Research, Innovation, and Accountability Act of 2023 “will define terms related to generative AI, including what is considered ‘critical-impact,’ as well as create standards for AI systems to follow.” The bill “would set in place a system to require ‘critical-impact’ AI organizations to self-certify as meeting compliance standards.” The proposal “would task the Commerce Department with submitting a five-year plan for testing and certifying critical-impact AI.” The department “would also be required to regularly update the plan.”

dtau...@gmail.com

unread,
Nov 24, 2023, 8:30:08 AM11/24/23
to ai-b...@googlegroups.com

AI to Build and Fix Roads and Bridges
The New York Times
Colbi Edmonds
November 19, 2023


Artificial intelligence (AI) is being used to build and repair U.S. transportation infrastructure at a time when government spending on such projects accounts for only a fraction of the cost needed to repair or replace the nation’s aging bridges, tunnels, and roads. In Pennsylvania, for example, engineers are using AI to create lighter concrete blocks for new construction. Another project is using the technology to develop a highway wall that can absorb noise and greenhouse gas emissions. AI also is being used to prevent and detect damage to existing infrastructure. The technology can analyze what is happening in real time, and could be developed to deploy automated emergency responses, said Seyede Fatemeh Ghoreishi, a computer science professor at Northeastern University.

Full Article

*May Require Free Registration

 

 

The Mind’s Eye of a Neural Network System
Purdue University News
November 16, 2023


David Gleich, a professor of computer science at Purdue University, led the development of a tool that makes it possible to visualize the relationship that a neural network sees among images in a database. Gleich's team first developed a method of splitting and overlapping image classifications to identify where images have a high probability of belonging to more than one classification. The team then mapped those relationships. Each group of images the network thinks are related is represented by a single dot, and dots are color coded by classification. The closer the dots, the more similar the network considers groups to be. Zooming in on overlapping dots shows an area of confusion. Said Gleich, "What we're doing is taking these complicated sets of information coming out of the network and giving people an 'in' into how the network sees the data at a macroscopic level."

Full Article
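
The Purdue tool itself is not available through this summary, but the underlying idea (grouping images by their overlapping class probabilities and plotting the groups so that confusable classes land near each other) can be sketched in a few lines. The snippet below is an illustrative stand-in, with random numbers in place of a real network's softmax outputs, and is not Gleich's actual method:

```python
# Minimal sketch (not the Purdue tool): flag images whose predicted class
# probabilities overlap two labels, then project the probability vectors to
# 2D so groups the model treats as similar land near each other.
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Stand-in for a trained network's softmax outputs: 500 images, 5 classes.
logits = rng.normal(size=(500, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

top_class = probs.argmax(axis=1)
second_best = np.sort(probs, axis=1)[:, -2]

# "Overlap" = images the model thinks could belong to more than one class.
ambiguous = second_best > 0.30

# Nearby points in the projection are groups the network considers similar.
xy = PCA(n_components=2).fit_transform(probs)

plt.scatter(xy[:, 0], xy[:, 1], c=top_class, cmap="tab10", s=12)
plt.scatter(xy[ambiguous, 0], xy[ambiguous, 1],
            facecolors="none", edgecolors="k", s=40,
            label="high overlap (possible confusion)")
plt.legend()
plt.title("Where the model sees overlapping classes")
plt.show()
```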

 

 

Better Machine Learning Models with Quantum Computers
IEEE Spectrum
Tammy Xu
November 15, 2023


Researchers at European quantum computing company Terra Quantum demonstrated improved training of machine learning models using a method that combines the best features of classical and quantum computers. The researchers hypothesized that by giving classical and quantum computers the same dataset and allowing them to train models in parallel, the final model combining the two could achieve better results. Said Terra's Alexey Melnikov, “Quantum is not good for everything, classical is not good for everything, but together they improve each other.” The researchers used the technique to model gas emissions at a waste-burning thermal power plant. When they added a quantum neural network layer to an existing classical model, they found the error rate of the model dropped to one-third of what it would have been without quantum.

Full Article

 

 

Was Argentina the First AI Election?
The New York Times
Jack Nicas; Lucía Cholakian Herrera
November 16, 2023


Sergio Massa and Javier Milei widely used artificial intelligence (AI) to create images and videos to promote themselves and attack each other prior to Sunday's presidential election in Argentina, won by Milei. AI made candidates say things they did not, put them in famous movies, and created campaign posters. Much of the content was clearly fake, but a few creations strayed into the territory of disinformation. Researchers have long worried about the impact of AI on elections, but those fears were largely speculative because the technology to produce deepfakes was too expensive and unsophisticated. “Now we’ve seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert who has advised governments on AI-generated content.

Full Article

*May Require Paid Registration

 

 

Technique Enables AI on Edge Devices to Keep Learning over Time
MIT News
Adam Zewe
November 16, 2023


Researchers from the Massachusetts Institute of Technology (MIT), the MIT-IBM Watson AI Lab, and the University of California San Diego developed a technique that enables deep-learning (DL) models to efficiently adapt to new sensor data directly on edge devices. The on-device PockEngine training method determines the parts of a machine-learning model that need updating to improve accuracy, and only stores and computes with those specific pieces. DL models consist of many interconnected layers of nodes that process data to make a prediction. PockEngine fine-tunes each layer individually on a certain task and measures the accuracy improvement after each such tuning. It removes unnecessary layers or pieces of layers, creating a pared-down graph of the model to be used during runtime. Said MIT's Song Han, “On-device fine-tuning can enable better privacy, lower costs, customization ability, and also lifelong learning."

Full Article
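
PockEngine's code is not reproduced in the article, but the general pattern it describes, updating only selected pieces of a model while leaving the rest frozen, is easy to illustrate. The PyTorch sketch below is a simplified stand-in (a toy model, with the final layer playing the role of the "chosen" piece), not the PockEngine system itself:

```python
# Minimal sketch of selective on-device fine-tuning (not PockEngine itself):
# freeze most of a pretrained model and update only the layers judged most
# useful, so less memory and compute are spent per training step.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),            # suppose only this head is worth updating
)

# Freeze everything, then re-enable gradients for the chosen sublayer(s).
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One adaptation step on a batch of new sensor data (random stand-ins here).
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                    # gradients exist only for the unfrozen layer
opt.step()
```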

 

Introducing EUGENe: An Easy-to-Use Deep Learning Genomics Software
UC San Diego Today
Miles Martin
November 16, 2023


Researchers at the University of California San Diego (UCSD) created EUGENe (elucidating the utility of genomic elements with neural nets), a deep-learning platform that can adapt to a wide variety of different genomics projects. With EUGENe, explains UCSD's Adam Klie, "You give an algorithm a sequence of DNA and ask it to make predictions about anything you’d expect that DNA could predict, such as whether a particular DNA sequence is functional or whether it regulates a gene in a certain biological context. This lets you explore properties of the DNA sequence and ask what would happen if I modified this piece here or moved this piece there." The researchers tested EUGENe by having it reproduce the results of three existing genomics studies that utilized different types of sequencing data.

Full Article
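
EUGENe's own API is not shown in the summary; the sketch below only illustrates the kind of task Klie describes, one-hot encoding a DNA string and asking a small model for a prediction about it, including a "what would happen if I modified this piece" comparison. The model, sequence, and prediction here are stand-ins:

```python
# Illustrative sketch only (not the EUGENe API): one-hot encode DNA and make
# a binary prediction about the sequence with a tiny 1D convolutional model.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a (4, length) tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).T.float()

model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 1),      # score: is this sequence "functional"?
)

seq = "ACGTACGTGGCTAAGCTTACGTACGT"
x = one_hot(seq).unsqueeze(0)          # batch of one sequence
prob = torch.sigmoid(model(x))
print(f"predicted probability of function: {prob.item():.2f}")

# "What if I modified this piece?" -- mutate one base and compare.
mutated = seq[:10] + "T" + seq[11:]
prob_mut = torch.sigmoid(model(one_hot(mutated).unsqueeze(0)))
print(f"after a single-base change: {prob_mut.item():.2f}")
```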

 

 

Using Deep Learning to Process Data, Guide Cardiac Interventions
SPIE
November 13, 2023


A new approach using machine learning for cardiac catheter localization through photoacoustic imaging was developed by a team led by Muyinatu A. Lediju Bell at Johns Hopkins University. The team used simulated data to reduce the hours of manual image acquisition and annotation that would have been required to train a deep convolutional neural network (CNN). Bell explained, “We trained the network with simulated channel data frames which we formatted to accommodate the field of view of the photoacoustic transducer, including multiple noise levels, signal amplitudes, and sound speeds, to ensure robustness against channel noise, target amplitude, and sound speed differences.” The researchers added an additional processing step called “histogram matching” to further improve the performance of the model. They then verified the CNN's effectiveness through experimentation on pig hearts.

Full Article
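
The summary mentions a histogram-matching step added before the network; one common way to implement such a step is scikit-image's match_histograms, shown below on synthetic stand-in frames. This is an assumption about what the preprocessing could look like, not the team's exact pipeline:

```python
# Hedged sketch: histogram matching as an extra preprocessing step, so that
# experimental frames better resemble the simulated frames a CNN was trained
# on. The arrays here are random stand-ins for channel-data frames.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(1)

# A simulated frame (training domain) and an experimental frame (test
# domain) with a different intensity distribution.
simulated_frame = rng.normal(loc=0.0, scale=1.0, size=(128, 128))
experimental_frame = rng.normal(loc=0.4, scale=2.5, size=(128, 128))

# Remap the experimental frame's intensity histogram onto the simulated one
# before handing it to the trained network.
matched = match_histograms(experimental_frame, simulated_frame)

print(f"mean/std before: {experimental_frame.mean():.2f} {experimental_frame.std():.2f}")
print(f"mean/std after:  {matched.mean():.2f} {matched.std():.2f}")
```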

 

OpenAI Board Ousts CEO, Cites Failure To Maintain “Candid” Communications

The New York Times (11/17, Metz) reported the OpenAI board of directors has “pushed out” the company’s “high-profile” CEO, Sam Altman. The company said in a statement, “Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. ... The board no longer has confidence in his ability to continue leading OpenAI.” The Times says Altman’s ouster is “a stunning fall” for the executive, “who over the last year had become one of the tech industry’s most prominent” figures “as well as one of its most fascinating characters.” Reuters (11/19) reported the board fired “CEO Sam Altman – to many, the human face of generative AI – sending shock waves across the tech industry.”

        The New York Times (11/20, Isaac, Roose, Metz) reports the OpenAI board named former Twitch CEO Emmett Shear as the company’s new interim chief executive, after rejecting demands to reinstate Sam Altman. In a memo to employees late Sunday, the board said it “firmly stands by its decision as the only path to advance and defend the mission of OpenAI,” adding that “Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.”

        Microsoft Hires Sam Altman After Surprise OpenAI Ouster. The Verge (11/20, Warren) reports Microsoft is hiring former OpenAI CEO Sam Altman after he was unexpectedly fired by the OpenAI board. Altman was fired by OpenAI’s board Friday, with the board saying it “no longer has confidence in his ability to continue leading OpenAI.” After Altman and the board spent the weekend attempting to negotiate his return to the company, “Microsoft CEO Satya Nadella announced that both Sam Altman and [OpenAI Co-Founder] Greg Brockman will be joining to lead Microsoft’s new advanced AI research team.” The Washington Post (11/20) reports Nadella said in a post on X that the company will be “moving quickly to provide them with the resources needed for their success.”

 

Majority Of OpenAI Employees Threaten To Quit, Join Former CEO At Microsoft Unless Altman Reinstated

The Washington Post (11/20, A1) reports that the future of OpenAI was “thrown into chaos” Monday after nearly all employees at the artificial intelligence company “threatened to quit and join ousted chief executive Sam Altman at Microsoft if he isn’t reinstated as CEO, extending the dramatic Silicon Valley boardroom saga.” More than 700 of the company’s roughly 770 employees “have signed a letter threatening to quit unless the current board resigns and reappoints Altman, according to a person familiar with the matter.” The Post says the “potential mass exodus” at OpenAI “puts the future of the lab in doubt, a drastic change of fate for a company that, until just days ago, was considered one of the most promising start-ups in Silicon Valley with a valuation close to $90 billion.”

        The Wall Street Journal (11/20, Hagey, Subscription Publication) reports that one of the employees who signed the letter was Ilya Sutskever, the company’s chief scientist and one of the members of the four-person board that voted to oust Altman. However, Sutskever on Monday said he deeply regretted his participation in the board’s action and changed his position following deliberations with OpenAI employees. The Journal adds that over the weekend, OpenAI’s senior leadership repeatedly asked the board to explain what prompted their sudden decision to remove Altman. In the employee letter made public Monday, OpenAI’s leaders said the board failed to give them an explanation.

        Forbes (11/20, Konrad) reports that among the employees who signed the letter “were several who noted their immigration status might be put at risk.” Employees who resign from OpenAI “face additional risks of complication to their immigration status, experts said – even should they join Microsoft, the destination announced by CEO Satya Nadella for Altman, former president Greg Brockman and an unspecified number of colleagues.” Nadella, an immigrant “who was once on an H-1B visa himself, has been outspoken in the past about immigration reform,” and Microsoft “maintains a dedicated immigration portal for employees, per its website.”

        Meanwhile, Bloomberg (11/20, Subscription Publication) reports OpenAI’s investors “are still trying to return” Altman to lead the company “and Microsoft Corp. has signaled that it wouldn’t oppose such an outcome.” However, the CBS Evening News (11/20) reported the board of directors “said Altman was not consistently candid in his communications,” which resulted in his termination. The Wall Street Journal (11/20, Subscription Publication) editorializes that it is hard to see who benefits from Altman’s firing other than opponents of innovation.

 

Meta Disbands Responsible AI Team

CNBC (11/18, Picciotto) reported that Meta has disbanded its Responsible AI division, the team dedicated to regulating the safety of its artificial intelligence ventures as they get developed and deployed, according to a Meta spokesperson. Most members of the RAI team have been reassigned to the company’s Generative AI product division, while some others will now work on the AI Infrastructure team, the spokesperson said. The news was first reported by The Information.

 

US Copyright Office Weighs Reforms To Address AI

The AP (11/18) reports artists including “country singers, romance novelists, video game artists and voice actors” are “appealing to the U.S. government for relief ... from the threat that artificial intelligence poses to their livelihoods.” Meanwhile, technology companies “are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.” Shira Perlmutter, the U.S. register of copyrights, “hasn’t yet taken sides.” Perlmutter told the AP “she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.” The Copyright Office received more than 9,700 comments “before an initial comment period closed in late October. Another round of comments is due by Dec. 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.”

        Engineers Turning To AI For Infrastructure Projects. The New York Times (11/19, Edmonds) reports that as “the federal allocation of billions of dollars toward infrastructure projects would help with only a fraction of the cost needed to repair or replace the nation’s aging bridges, tunnels, buildings and roads, some engineers are looking to A.I. to help build more resilient projects for less money.” AI “could have the ability to speed up and improve tasks like engineering challenges to an incalculable degree.” But while AI “has the potential to be both more cost effective...and more creative in coming up with new approaches to familiar tasks,” experts “caution against embracing the technology too quickly when it is largely unregulated and its payoffs remain largely unproven.”

 

Amazon Announces AI Ready Training Program

The Wall Street Journal (11/20, Herrera, Cutter, Subscription Publication) reports Amazon has launched a program, dubbed “AI Ready,” aimed at training at least two million workers in basic to advanced artificial intelligence skills by 2025 as the company works to gain ground against generative AI rivals including Microsoft, Google, and others.

 

Experts Make Recommendations On Artificial Intelligence Use In Schools

Education Week (11/20) reports artificial intelligence “is developing so rapidly that many educators fear school district policies to handle issues like cheating or protecting data privacy will be outdated almost the minute they are released.” To keep up with “the technology’s quick evolution, districts should keep their AI policies as simple as possible, experts said during an Education Week webinar entitled Ready or Not, AI Is Here: How K-12 Schools Should Respond.” The Peninsula School District in Washington state “has chosen to develop ‘principles and beliefs’ around AI as opposed to hard-and-fast policy for now, Kris Hagel, the district’s executive director of digital learning, said on the webinar.” Hagel said, “We looked at it last spring and said, ‘Boy, it is moving so fast.’ And when you think of policy and a lot of education settings, you think of these very rigid, school board-approved policies.” The district decided it didn’t “want to do that because we don’t know where this is gonna land yet,” Hagel said.

 

American Students, Faculty Rank Among The Lowest In The World For AI Use

Inside Higher Ed (11/21, Coffey) reports American students and university leaders “have some of the lowest usage of AI in the world, according to two new reports.” According to “a report by the education technology firm Anthology, 38 percent of students reported using AI at least monthly, with only the United Kingdom having a lower usage rate.” Chegg, another ed-tech firm, “conducted its own global study with similar results: 20 percent of students in the United States reported using generative AI, followed only by the U.K. with 19 percent.” However, in the US, “more than 30 percent of the university leaders surveyed by Anthology are concerned that AI is unethical and could result in plagiarism – a higher degree of suspicion than leaders in any country but the U.K.” Anthology Senior Director of Engagement Strategy Mirko Widenhorn “believes the relatively lower use of AI may create opportunity for institutions.” Widenhorn said, “The lower usage opens a valuable window of time for university leaders to dig in and assess the landscape and deepen their understanding of how AI can be applied effectively at their institution. It’s moving fast and the clock is ticking, but this is an opportunity for leaders to catch up and plot a course ahead.”

 

Rapid Developments In AI Fuel Worries About Autonomous Weapons Systems

The New York Times (11/21, Lipton) reports swarms “of killer drones are likely to soon be a standard feature of battlefields around the world. That has ignited debate over how or whether to regulate their use and spurred concerns about the prospect of eventually turning life-or-death decisions over to artificial intelligence programs.” Eventually, AI “should allow weapons systems to make their own decisions about selecting certain kinds of targets and striking them.” Recent developments “in AI tech have intensified the discussion around such systems, known as lethal autonomous weapons.” The Pentagon “is now working to build swarms of drones, according to a notice it published earlier this year.” Amid these developments, there “is widespread concern within the United Nations about the risks of the new systems.” And while some weapons “have long had a degree of autonomy built into them, the new generation is fundamentally different.”

 

US Telecom Industry Urges Federal Government To Enact “Light Touch” Regulations On Use Of AI

Light Reading (11/21, Ferraro) reports US telecom companies are urging the federal government to create regulations that will foster innovation in the use of artificial intelligence (AI) within the industry, “citing use cases for AI in spectral efficiency, open RAN deployment and beyond.” Testimony highlighted how AI is already being used to increase spectral efficiency and has the potential to optimize spectrum allocation and improve broadband mapping. Light Reading says, “In recent months, we’ve seen President Biden release a White House executive order ‘on the safe, secure, and trustworthy development and use’ of AI, plus hearings on Capitol Hill and the introduction last week of a Senate bill on boosting AI ‘accountability’ and ‘innovation.’” In response to these efforts, “telecom companies have started to weigh in on how AI can best benefit the sector and how the government should enact ‘light touch’ regulations without impeding innovation.”

dtau...@gmail.com

unread,
Dec 2, 2023, 7:07:30 AM12/2/23
to ai-b...@googlegroups.com

Google DeepMind Researchers Use AI Tool to Find 2 Million New Materials
Financial Times
Michael Peel
November 29, 2023


Researchers at Google DeepMind used the GNoME artificial intelligence tool to identify 2.2 million novel crystal structures, exceeding the number of crystal structures discovered in the history of science by more than 45 times. They plan to make available 381,000 of the most promising structures to other researchers to gauge their viability in solar cells, semiconductors, and other applications. The findings already have been employed by researchers at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory, as they work to develop new materials. The researchers created 41 novel compounds using an autonomous laboratory (A-lab) guided by computation, historical data, and machine learning. Berkeley's Gerbrand Ceder said, "While the robotics of the A-lab is cool, the real innovation is the integration of various sources of knowledge and data with A-lab in order to intelligently drive synthesis."
 

Full Article

*May Require Paid Registration

 

 

Smart Microgrids Can Restore Power More Efficiently, Reliably
UC Santa Cruz Newscenter
Emily Cerf
November 30, 2023


Yu Zhang and Shourya Bose at the University of California, Santa Cruz developed an approach for the smart control of microgrids for power restoration when outages occur. The approach uses deep reinforcement learning to create an efficient framework that includes models of many components of the power system. Said Bose, “We’re modeling a whole bunch of things: solar, wind, small generators, batteries, and we're also modeling when people's electricity demand changes. The novelty is that this specific flavor of reinforcement learning, which we call constrained policy optimization (CPO), is being used for the first time.” CPO takes into account real-time conditions and uses machine learning to find long-term patterns that affect the output of renewable energy sources, unlike traditional systems that use model predictive control (MPC), which bases decisions on the available conditions at the time of optimization.
 

Full Article
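
Full constrained policy optimization adds trust-region machinery that is out of scope here, but the constrained objective at its core can be illustrated with a simpler Lagrangian relaxation: maximize expected reward while a dual variable penalizes expected cost above a limit. The toy dispatch problem below is a stand-in under those assumptions, not the UC Santa Cruz controller:

```python
# Hedged sketch: a Lagrangian-relaxation stand-in for constrained policy
# optimization (CPO proper adds trust-region updates). A policy over discrete
# dispatch actions maximizes reward subject to an average-cost limit; the
# dual variable lam rises whenever the constraint is violated.
import torch

torch.manual_seed(0)
reward = torch.tensor([1.0, 2.0, 3.0, 4.0])   # e.g., load restored per action
cost = torch.tensor([0.1, 0.5, 1.0, 2.0])     # e.g., battery wear per action
cost_limit = 0.8

logits = torch.zeros(4, requires_grad=True)
lam = torch.tensor(0.0)                        # Lagrange multiplier
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(500):
    probs = torch.softmax(logits, dim=0)
    exp_reward = (probs * reward).sum()
    exp_cost = (probs * cost).sum()

    # Primal step: maximize reward - lam * cost.
    loss = -(exp_reward - lam * exp_cost)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Dual step: increase lam when expected cost exceeds the limit.
    with torch.no_grad():
        lam = torch.clamp(lam + 0.05 * (exp_cost - cost_limit), min=0.0)

probs = torch.softmax(logits, dim=0).detach()
print("action probabilities:", probs.numpy().round(3))
print("expected cost:", float((probs * cost).sum()))
```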

 

 

Neural Network Takes Asia's Air Temperatures
IEEE Spectrum
Rahul Rao
November 28, 2023


A transformer-based neural network developed by researchers at China's Chengdu University of Information Technology and the China Meteorological Administration can generate near-real-time air temperatures using infrared data from a weather satellite. The neural network, TaNET, was trained on infrared surface temperature data from the FengYun-4A (FY-4A) satellite to output a near-surface temperature map corresponding to the European Centre for Medium-Range Weather Forecasts' ERA5, which does not provide real-time data. Tests showed that TaNET outperformed the China Meteorological Administration's CRA and the U.S. National Oceanic and Atmospheric Administration's CFSv2 datasets, as well as a model driven by a U-Net convolutional neural network.
 

Full Article

 

 

How Do You Make a Robot Smarter? Program It to Know What It Doesn't Know
Princeton University School of Engineering and Applied Science
Molly Sharlach
November 28, 2023


Princeton University and Google researchers developed a technique using large language models (LLMs) to teach robots to realize when they do not know something and to request further instructions. The system involves setting an uncertainty threshold that will prompt the robot to ask for assistance based on the degree of success sought by the user. For instance, the researchers asked a robot to "place the bowl in the microwave," leaving it to choose between metal and plastic bowls. Four actions, each assigned a probability, were generated by the robot's LLM-based planner, and the robot asked which bowl to place in the microwave. Said Princeton's Anirudha Majumdar, "Using the technique of conformal prediction, which quantifies the language model's uncertainty in a more rigorous way than prior methods, allows us to get to a higher level of success."
 

Full Article
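
The summary describes using conformal prediction to decide when the robot should ask for help. The sketch below shows a generic split-conformal construction over a planner's candidate actions; the calibration scores and the example task are stand-ins, and this is not the Princeton implementation:

```python
# Hedged sketch of the "know when to ask" idea: split conformal prediction
# over candidate actions. Calibrate a score threshold on held-out examples,
# build a prediction set at run time, and ask for help if the set contains
# more than one action.
import numpy as np

rng = np.random.default_rng(0)

# Calibration data: the probability the planner assigned to the *correct*
# action on 200 earlier tasks (stand-in numbers here).
cal_scores = rng.beta(5, 2, size=200)

alpha = 0.1          # allow roughly 10% of correct actions to be missed
n = len(cal_scores)
# Conformal quantile of the nonconformity score (1 - probability of truth).
q = np.quantile(1.0 - cal_scores, np.ceil((n + 1) * (1 - alpha)) / n)

def prediction_set(action_probs):
    """Keep every action whose nonconformity (1 - prob) is below the threshold."""
    return [a for a, p in action_probs.items() if 1.0 - p <= q]

# New task: "place the bowl in the microwave" with two plausible targets.
action_probs = {"pick metal bowl": 0.48, "pick plastic bowl": 0.46, "pick cup": 0.06}
candidates = prediction_set(action_probs)

if len(candidates) == 1:
    print("execute:", candidates[0])
else:
    print("ambiguous, ask the user to choose among:", candidates)
```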

 

 

Defending Your Voice Against Deepfakes
The Source (Washington University in St. Louis)
Shawn Ballard
November 27, 2023


A tool developed by Washington University in St. Louis' Ning Zhang is intended to protect a user's voice from being used to create deepfakes. By making it harder for artificial intelligence (AI) tools to read certain voice-recording characteristics, the AntiFake tool prevents unauthorized speech synthesis. Said Zhang, "The tool uses a technique of adversarial AI that was originally part of the cybercriminals' toolbox, but now we're using it to defend against them. We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it's completely different to AI." In tests against five state-of-the-art speech synthesizers, AntiFake was found to be 95% effective.

Full Article
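
AntiFake's actual optimization is not detailed in the summary; the PyTorch sketch below shows the general adversarial-perturbation idea it draws on: nudge a waveform within a small amplitude budget so that a speaker-style encoder embeds it far from the original. The encoder and audio here are random stand-ins, not the Washington University tool:

```python
# Generic adversarial-perturbation sketch (not AntiFake's actual method):
# perturb a voice recording within a tiny amplitude budget so a stand-in
# speaker encoder embeds it far from the original, while the waveform itself
# barely changes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "speaker encoder": maps 1 s of 16 kHz audio to a 64-d embedding.
encoder = nn.Sequential(nn.Linear(16000, 256), nn.ReLU(), nn.Linear(256, 64))
encoder.requires_grad_(False)

waveform = torch.randn(16000) * 0.1           # stand-in recording
target_embed = encoder(waveform).detach()

epsilon = 0.005                                # max per-sample change
delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)

for step in range(200):
    perturbed = waveform + torch.clamp(delta, -epsilon, epsilon)
    emb = encoder(perturbed)
    # Push the embedding of the protected audio away from the original.
    loss = torch.nn.functional.cosine_similarity(emb, target_embed, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()

protected = waveform + torch.clamp(delta.detach(), -epsilon, epsilon)
sim = torch.nn.functional.cosine_similarity(encoder(protected), target_embed, dim=0)
print(f"embedding similarity after protection: {sim.item():.2f}")
print(f"max sample change: {protected.sub(waveform).abs().max().item():.4f}")
```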

 

 

Amazon Launches Free AI Classes
The Wall Street Journal
Sebastian Herrera; Chip Cutter
November 20, 2023


Amazon.com is rolling out a free training program that aims to provide basic to advanced skills in artificial intelligence (AI) to at least 2 million people by 2025. The AI Ready program aims to fill the gap in AI talent as Amazon seeks to compete against Microsoft, Google, and others. The program will provide eight online courses centered on generative AI for both beginners and more experienced professionals in tech and tech-adjacent roles. Non-Amazon employees can access the courses, which also cover Amazon's Bedrock AI platform and CodeWhisperer tool, through the Amazon learning Website. Amazon's Swami Sivasubramanian said the main goal is to "democratize" generative AI education, adding that re-skilling workers would benefit not only Amazon but also its enterprise customers.

Full Article

*May Require Paid Registration

 

 

AI Sharpens Rainfall Estimates from Satellites
IEEE Spectrum
Charles Q. Choi
November 23, 2023


Colorado State University (CSU) researchers used artificial intelligence (AI) to improve rainfall estimates from weather satellites, which scan cloud tops instead of detecting surface-level precipitation. The researchers used deep learning techniques to analyze data from the U.S. National Oceanic and Atmospheric Administration's Geostationary Operational Environmental Satellites (GOES-R), which scan visible and infrared light from Earth. Using a neural network with more than 1.3 million parameters and GOES-R infrared data from the southwestern U.S., the researchers trained the model to generate precipitation estimates as close to ground-based radar estimates as possible. They found the AI system outperformed other algorithms used to analyze satellite data in matching ground-based radar estimates, and that it was even more accurate in estimating heavy precipitation when incorporating GOES-16's lightning data.

Full Article
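
As a rough stand-in for the setup described (the CSU network has more than 1.3 million parameters and is trained on real GOES-R imagery), a minimal image-to-image version of the task looks like the sketch below, with random tensors in place of satellite and radar data:

```python
# Minimal stand-in (not the CSU model): a small fully convolutional network
# that maps multichannel satellite imagery to a per-pixel rain-rate map,
# trained to match ground-based radar estimates.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # e.g., 3 IR channels + lightning
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                         # rain rate per pixel
)

satellite = torch.randn(8, 4, 64, 64)            # batch of scenes (stand-in data)
radar = torch.rand(8, 1, 64, 64) * 20            # "ground truth" rain rate, mm/h

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5):
    pred = model(satellite)
    loss = nn.functional.mse_loss(pred, radar)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("training loss:", loss.item())
```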

 

 

As AI-Controlled Killer Drones Become Reality, Nations Debate Limits
The New York Times
Eric Lipton
November 21, 2023


The U.S. and China are among the nations making swift progress in developing and deploying drones equipped with AI that can hunt and kill targets without human input. Although concerns about AI-controlled autonomous weapons have prompted proposals at the U.N. to govern their use, observers do not expect any legally binding mandates to be issued in the near future. Rapid advances in AI and the intense use of drones in wars in Ukraine and the Middle East have combined to make the issue more urgent. The jamming of radio communications and GPS in war zones is accelerating the shift, as autonomous drones can operate even when communications are cut off.

Full Article

*May Require Paid Registration

 

OpenAI Board Reaches Deal To Reinstate Altman As CEO

CNBC (11/22, Levy, Field, Vanian, Goswami) reported OpenAI announced early Wednesday that, after “immense pressure from employees and investors on the board,” Sam Altman “will return as CEO.” The New York Times (11/22, Metz, Isaac, Mickle, Weise, Roose) reported the company’s board of directors will be overhauled and former President Greg Brockman, who left with Altman in solidarity, will also be reinstated. OpenAI’s revamped board of directors will include Bret Taylor, formerly co-CEO of Salesforce and chairman of Twitter; Larry Summers, former Treasury secretary and Harvard University president; and Adam D’Angelo, a current board member and chief executive of the question-and-answer site, Quora.

        The Wall Street Journal (11/22, Purnell, Hagey, Subscription Publication) reported Summers’ appointment surprised some, though he may be called upon for his expertise navigating the political landscape amid greater scrutiny for AI. Meanwhile, the retention of D’Angelo as a board member is a possible sign that Altman may not have his way moving forward. An anonymous source said the board could add up to six more members as well. Insider (11/22, Carter) reported, “OpenAI’s board is looking very male-heavy right now following Sam Altman’s shock return.” Two of the board members who “voted to oust” Altman – Helen Toner and Tasha McCauley – have been replaced by two men, Taylor and Summers, while Adam D’Angelo remains, despite also voting to move on from Altman. This “lack of diversity...has sparked debate online.”

        The Information (11/21, Victor, Efrati, Subscription Publication) said the announcement “implies Altman will not be a board director himself, at least not initially, even though he held that position four days ago.” The Washington Post (11/22, Verma, Tiku, De Vynck) reported the board “agreed to an independent investigation, which will examine all aspects of recent events, including Altman’s role,” and The Information (11/22, Osawa, Subscription Publication) reported Altman “had agreed to an internal investigation into alleged conduct that prompted the company’s board to oust him.”

        Reuters (11/22, Dastin, Soni) reported, “Analysts said the reshuffle will favor Altman and Microsoft, which has pledged billions of dollars to the startup and is rolling out its technology to its customers globally.” Microsoft’s CEO Satya Nadella “welcomed the changes,” and “Tuesday’s moves reassured some investors.”

        Altman’s Return At OpenAI Seen As Victory For Supporters Of AI’s Rapid Development. The Wall Street Journal (11/24, Mims, Subscription Publication) reported that those in favor of focusing on more rapid development of artificial intelligence count the return of Sam Altman as CEO of OpenAI as a victory. Emmett Shear, named interim CEO upon Altman’s removal, favored slowing the pace of AI development from a “10” down to a “1 or 2,” while Altman favors continuing rapid development.

        Sources: OpenAI Researchers Cautioned Board About Humanity-Threatening AI Prior To Sam Altman Firing. Reuters (11/22, Tong, Dastin, Hu) reported that in advance “of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.” Reuters says this “previously unreported letter and AI algorithm were key developments before the board’s ouster of Altman, the poster child of generative AI, the two sources said. ... The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing.” Reuters “was unable to review a copy of the letter,” and “the staff who wrote the letter did not respond to requests for comment.”

        Fortune (11/23, Lim) reported that “according to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization’s board warning of a discovery that could potentially threaten the human race.” Two anonymous sources “claim this letter, which informed directors that a secret project named Q* resulted in AI solving grade school level mathematics, reignited tensions over whether Altman was proceeding too fast in a bid to commercialize the technology.” Fortune reports, “According to one of the sources, after being contacted by Reuters, OpenAI’s chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as a letter that was sent by the board.”

 

Wired Profiles Attorney Behind Class-Action Lawsuits Against AI Companies

Wired (11/22, Knibbs) profiled Matthew Butterick, the attorney who “is the unlikely driving force behind the first wave of class-action lawsuits against big artificial-intelligence companies.” Butterick is determined “to make sure writers, artists, and other creative people have control over how their work is used by AI.” He did not expect to be in this position. Not only was Butterick not a practicing attorney until recently, “he’s certainly not anti-technology. For most of his life, he’s worked as a self-employed designer and programmer, tinkering with speciality software.” However, “when generative AI took off, he dusted off a long-dormant law degree specifically to fight this battle.”

 

California Governor Releases Report On Risks, Benefits Of AI For State Programs

The Los Angeles (CA) Times (11/24, Wong) says that according to a report released by California Gov. Gavin Newsom’s office on Tuesday, artificial intelligence that “can generate text, images and other content could help improve state programs but also poses risks.” Specifically, the report warned that while generative AI “could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services ... deploying the technology...also comes with concerns around data privacy, misinformation, equity and bias.”

 

Pentagon’s AI Program To Raise Hard Questions About Weaponized Systems

The AP (11/25, Bajak) says, “Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces’ missions and helped Ukraine in its war against Russia. It tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.” Now, the Defense Department “is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China.” According to Deputy Secretary of Defense Kathleen Hicks, the “ambitious initiative – dubbed Replicator – seeks to ‘galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many.’” While Replicator’s “funding is uncertain and details vague,” the program “is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy – including on weaponized systems.”

 

California Teacher Uses Generative AI To Teach About The American Revolution

The Napa Valley (CA) Register (11/22, DeBenedetti) reported Stephanie Trott “is teaching her eighth-grade classes about the American Revolution – with a twist.” Using ChatGPT “and other generative artificial intelligence tools, her Redwood Middle School students are staging their own revolution, but set in the future on a new planet.” Each class period “has been made a separate Martian colony, tasked with forming a government, electing leaders and generating propaganda to entice its citizens to join the fight against the Americans – representing the British in the original tale – or stay loyal to it.” After forming a government, “the story truly begins to develop, diverging in each class period based on the structure its students chose.” Trott told ChatGPT “to create a story on Mars that mimicked the Revolutionary War era from 1775 to 1783, then refine that story further based on each class period’s government.”

 

Google’s AI Mistakenly Flags Innocent Video, Leading To Account Suspension

The New York Times (11/27, Hill) reports an Australian mother, Jennifer Watkins, faced a severe setback when Google erroneously suspended her account due to a harmless video uploaded by her son. The video, described as “a video of his bottom,” was mistakenly flagged by Google’s AI as child exploitation content, leading to the loss of access to her entire Google suite, including vital work emails and personal documents. Despite her repeated appeals and explanations that the content was not uploaded with malicious intent, Google maintained that it “still violated company policies.” The situation was only rectified following media intervention, raising concerns about the reliability and implications of AI-driven content moderation.

 

California Looks Into Using AI To Make State Government More Efficient

The Los Angeles Times (11/27) reports California “is exploring the question” of whether artificial intelligence could “help make government better.” State officials “released a report last week examining the benefits and risk of putting generative AI to work in the state’s massive bureaucracy.” GenAI “could also become a major economic engine in California, which is already home to nearly three dozen of the world’s top 50 AI companies.” The report “cites projections from Pitchbook that the global GenAI market could reach $42.6 billion by the end of this year.” California’s leaders “hope to attract a considerable chunk of that market, but also said the state should lead the way on training workers to adapt to a technology that could also generate job loss.”

 

University Financial Aid Officers Are Using AI To Navigate FAFSA Questions

Inside Higher Ed (11/28, Coffey) reports as the Free Application for Federal Student Aid was once again delayed, artificial intelligence company Ivy.ai “launched a FAFSA-specific product that monitored the FAFSA site in real time to help universities navigate the growing confusion from parents and students alike.” As artificial intelligence has “catapulted into the general public’s consciousness following OpenAI’s launch of ChatGPT in November 2022,” institutions have begun to use AI “to streamline processes in admissions, food delivery and student engagement. Financial aid offices are beginning to join that mix, seeing AI as a potential tool to navigate the choppy FAFSA waters and provide 24-7 answers to frequently asked questions from concerned and confused parents and students alike.” For example, artificial intelligence company Ocelot “offers chat bots and live texting services to answer student and parent questions. Its FAFSA simplification digital assistant launched at the end of October and now has a dozen customers.”

 

AI Model Could Predict Lung Cancer Risks In Non-Smokers, Study Suggests

Fox News (11/28, Rudy) reports, “Among the latest artificial intelligence innovations in health care, a routine chest X-ray could help identify non-smokers who are at a high risk for lung cancer,” according to study findings that “will be presented this week at the annual meeting of the Radiological Society of North America (RSNA) in Chicago.” Researchers at the Cardiovascular Imaging Research Center at Massachusetts General Hospital and Harvard Medical School in Boston have “developed a deep learning AI model using 147,497 chest X-rays of asymptomatic smokers and never-smokers.” At present, lung cancer screenings are recommended “for adults between the ages of 50 and 80 who have at least a 20-pack-year smoking history and who currently smoke or have quit within the past 15 years. There is no recommended screening for people who have never smoked or have only smoked very little.”

 

Survey: Writing Instructors Aren’t As Worried About ChatGPT Plagiarism As Previously Thought

In a piece for The Conversation (11/28), Texas Woman’s University professor Daniel Ernst writes that “when ChatGPT launched a year ago, headlines flooded the internet about fears of student cheating,” while Teen Vogue ventured that the moral panic “may be overblown.” The more “measured tone in Teen Vogue tracks better with preliminary findings from our 2023 survey that examined attitudes and feelings about artificial intelligence among college faculty who teach writing. Survey responses revealed that AI-related anxieties among educators around the country are more complex and nuanced than claims insisting that AI is outright and always bad.” Additionally, while some educators “do worry about students cheating, they also have another fear in common: AI’s potential to take over human jobs.” Teachers also say “they actually enjoy using the revolutionary technology to enhance what they do.”

 

Google DeepMind Predicts Structure Of Millions Of New Materials

Reuters (11/29) reports that, in a research paper published in Nature, Google DeepMind said it used AI “to predict the structure of more than 2 million new materials,” of which almost 400,000 “could soon be produced in lab conditions.” The company plans to “share its data with the research community, in the hopes of accelerating further breakthroughs in material discovery.”

 

Survey: Employers Are Willing To Boost Pay Levels For AI-Skilled Workers

Higher Ed Dive (11/29, Alexis) reports that according to a recent survey commissioned by Amazon Web Services, “hiring workers with artificial intelligence skills is a priority for nearly three quarters (73%) of employers, but the majority of them are struggling to find such talent.” Organizations indicated “they would be willing to hike pay levels for AI-skilled workers across business functions, with salaries potentially rising by an average of 43% in sales and marketing; 42% in finance; 37% in legal, regulatory, and compliance; and 35% in human resources.” The report said, “The anticipated pay premiums across departments is because AI’s key benefits – automating tasks, boosting creativity, and improving outcomes – have dispersed applications across departments and tasks. Employers anticipate that workers with AI skills will be able to drive additional productivity and higher-quality work, which would command a salary increase.”

 

OpenAI’s New Board Takes Over, Microsoft Gains Observer Role

The Wall Street Journal (11/29, Seetharaman, Subscription Publication) reports that on Monday OpenAI’s new board formally took over, announcing partner Microsoft will have an observer role. The announcement ends a period of drama since CEO Sam Altman was fired by the previous board and then returned. Altman said, “My immediate priority – and the current board’s too – will be to continue to work to stabilize the company.” Bret Taylor, the board’s chairman, said, “We want to ensure that all of [the employees] can depend that OpenAI is going to continue to thrive for the long term.”

        The New York Times (11/29, Metz, Mickle) reports that Altman, in a blog post, “outlined his priorities,” including the company “building safe A.I. systems” and the board “improving governance and overseeing an independent review of the events that led to and followed his removal as chief executive.” In an interview, Altman said, “Part of what good governance means is that there’s more predictability, transparency and input from various stakeholders, and this seemed like a good way to get that from a very important one,” i.e., Microsoft.

        Bloomberg (11/29, Subscription Publication) reports Adam D’Angelo is the only member of the board who wasn’t replaced. This highlights the importance of his relationship with Altman “in restoring some stability at the world’s best-known artificial intelligence startup.”

        OpenAI’s ChatGPT Already Upending Healthcare One Year After Launch. Axios (11/29, Reed) reports, “One year after OpenAI’s ChatGPT exploded onto the scene, the generative AI model is already upending health care ... while accelerating questions about AI’s promises and limitations.” Many experts have “predicted that the unprecedented hype around how AI may change health care will begin to quiet down over the next few months as the industry races to get a better handle on what the technology can and can’t do. ... The pharma industry is already using generative AI models to make drug discovery more efficient.”

 

ChatGPT Marks One-Year Anniversary Since Debut

CNBC (11/30, DeVon) reports November 30 marked the one-year anniversary of the release of ChatGPT to the public. CNBC surveys what “has changed since the popular AI chatbot debuted.” In the year since the release of ChatGPT, OpenAI’s rivals have released their own chatbots and generative AI. For instance, “in November, Amazon unveiled its AI chatbot, Q,” which “is geared toward helping workers streamline tasks such as summarizing documents, conducting research and generating email drafts, per the company’s blog.”

        TechCrunch (11/30, Wiggers) reports, “In the months following its launch, ChatGPT gained paid tiers with additional features, including a plan geared toward enterprise customers,” as well as upgrades “with web searching, document analyzing and image creating (via DALL-E 3) capabilities.” The article highlights various milestones for ChatGPT but says its “adoption has not been universal.” One “Pew Research survey from August showed that only 18% of Americans have ever tried ChatGPT, and that most who’ve tried it use the chatbot for entertainment purposes or answering one-off questions.” Even among teenagers, ChatGPT use may not be that widespread, “despite what some alarmist headlines imply,” as one poll found “only two in five teenagers have used the tech in the last six months.”

        Bloomberg (11/30, Lanxon, Davalos, Subscription Publication) reports, “In the year since ChatGPT launched to the public, there has been endless speculation about jobs that could be made obsolete by artificial intelligence, but at least one lucrative new skillset has emerged and shown some staying power: prompt engineering.”

        Tech Executives Cast Doubt On OpenAI Board’s Motivations For Removing Altman. Insider (11/30, Chowdhury) reports, “Microsoft president Brad Smith cast doubt Thursday on claims that the boardroom battle at the ChatGPT developer, which led to the firing and rehiring of CEO Sam Altman, was over a major advancement in AI that posed a threat to humanity.” Smith is quoted saying, “I don’t think that is the case at all. ... I think there obviously was a divergence between the board and others, but it wasn’t fundamentally about a concern like that.” Insider says, “The comments suggest figures as senior as Smith at Microsoft still don’t have a good understanding of why Altman was ousted from the AI company in the first place.”

        Insider (11/29, Wei, Tan) reports, “Former OpenAI board member Helen Toner says the board wasn’t trying to stifle the company’s progress by removing CEO Sam Altman.” Toner “announced her resignation from the board on Wednesday. She had previously voted for Altman’s firing.” Toner is quoted saying on X, “Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work.”

        Insider (11/29, Kay) reports that in an interview at the DealBook Summit, Elon Musk expressed concern about what the attempt to remove OpenAI CEO Sam Altman from his position meant for artificial intelligence. Musk “said he wanted to know why the OpenAI cofounder and chief scientist Ilya Sutskever ‘felt so strongly as to fight Sam,’ adding: ‘That sounds like a serious thing. I don’t think it was trivial. And I’m quite concerned that there’s some dangerous element of AI that they’ve discovered.’”

 

Microsoft President: No Chance Of AGI In Next 12 Months

Reuters (11/30, Murugaiyan) reports Microsoft president Brad Smith “said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.” Reuters notes that OpenAI’s Q* project “could be a breakthrough in the startup’s search for what’s known as artificial general intelligence,” though Smith is quoted saying, “There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now.”

 

Amazon CTO Vogels Shares Tech Predictions For 2024

In a “wide-ranging interview” with TechCrunch (11/30, Lardinois) ahead of his keynote address at AWS re:Invent, Amazon CTO Werner Vogels discussed his predictions for 2024. Vogels has “an interesting perspective” on generative AI, and “his first prediction is that generative AI will become culturally aware, meaning that models will gain a better understanding of different cultural traditions.” Vogels “believes that generative AI will greatly enhance developer productivity,” with new tools that he likens to pair programming, and it will also “free developers from a lot of the busywork of writing tests, refactoring code and writing boilerplate.” Vogels “also believes that FemTech will finally take off, in part because there is less of a stigma now around talking about women’s healthcare.” Forbes (11/30, Brady) reports Vogels “predicts that 2024 will be the year in which the college education model will finally crack in favor of industry-led skills-based training. That means companies have to shift not only how they hire but also how they design the workplace to make sure their people are constantly learning.”

 

Congress Considering Steps To Protect Patient Information As Healthcare Industry Expands AI Use

The Hill (11/30, Turnure) reports, “Hospitals, doctor’s offices, and pharmacies across the U.S. are already using artificial intelligence” even though “many patients, including members of Congress, are still unsure about the technology.” Both “lawmakers and experts...remain concerned that AI needs more and more patient data to train and improve it, and that data sometimes does not fall under HIPAA protections.” Now, “to protect patient information, Congress is considering guardrails like a national data privacy standard and taking steps to increase transparency so patients know when and how their doctors are using AI.”

 

Consortium Develops Data Provenance Standards

The New York Times (11/30, Lohr) reports the Data & Trust Alliance, a nonprofit consortium, announced data provenance standards “for describing the origin, history, and legal rights to data,” creating “essentially a labeling system for where, when, and how data was collected and generated, as well as its intended use and restrictions.” Members of the Data & Trust Alliance include American Express, Humana, IBM, UPS, and Walmart. The effort “is mainly intended for business data that companies use to make their own A.I. programs or data they may selectively feed into A.I. systems from companies like Google, OpenAI, Microsoft and Anthropic.”

dtau...@gmail.com

unread,
Dec 9, 2023, 12:40:47 PM12/9/23
to ai-b...@googlegroups.com

AI's Future Could Be 'Open Source' or Closed. Tech Giants Are Divided
The Washington Post
Matt O'Brien
December 5, 2023


Meta and IBM have launched the AI Alliance, which is calling for an "open science" approach to the development of artificial intelligence (AI). The alliance also includes Dell, Sony, AMD, Intel, and a number of universities and AI startups. On the other side of the open vs. closed debate are Google, Microsoft, and OpenAI. There are differing definitions for open source AI, but IBM's Darío Gil said the AI Alliance is "coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies." Those against open source AI argue there are safety risks to making AI systems publicly available, especially given the lack of guardrails currently in place.

Full Article

*May Require Free Registration

 

 

Neural Net Could Help Find Orphaned Wells
IEEE Spectrum
Rina Diane Caballar
December 5, 2023


A neural network developed by Los Alamos National Laboratory (LANL) researchers could help locate orphaned oil and gas wells that continue to emit methane into the atmosphere. The Senseiver neural network transforms the limited data gathered through airplane and drone surveys into a concise and usable form. Expanding on Google's Perceiver IO architecture, Senseiver assigns weights to inputs (the field observations from sensors and the sensors' locations) in order to predict the next measurement. Said LANL's Javier E. Santos, "If we have a couple hundred buoys or boats recording sea temperature, we can send those measurements into this machine learning model and ask for the sea temperature in spots where it has not been observed. It's going to reconstruct those spots based on the information from the sensors that are available."
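
To make the idea concrete, here is a minimal sketch (in Python, not the LANL code) of reconstructing a field value at an unobserved location from a few sparse sensor readings. It stands in for Senseiver's learned attention weights with a simple distance-based weighting, and every name and number is illustrative.

    import numpy as np

    # Hypothetical sensor readings: (x, y) locations and measured sea temperature.
    sensor_xy = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9], [0.2, 0.7]])
    sensor_temp = np.array([14.2, 16.8, 15.1, 14.9])

    def predict_at(query_xy, locations, values, length_scale=0.3):
        """Weight each observation by its similarity to the query location,
        a stand-in for the attention weights a trained Senseiver would assign."""
        d2 = np.sum((locations - query_xy) ** 2, axis=1)
        weights = np.exp(-d2 / (2 * length_scale ** 2))
        weights /= weights.sum()
        return float(weights @ values)

    # Reconstruct the field at a spot with no sensor.
    print(predict_at(np.array([0.6, 0.6]), sensor_xy, sensor_temp))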

Full Article

 

 

Machine Learning Monitors Driver ‘Workload’ to Improve Road Safety
University of Cambridge (U.K.)
December 7, 2023


University of Cambridge researchers, in partnership with Jaguar Land Rover, developed an adaptable algorithm that could improve road safety by predicting when drivers can interact safely with in-vehicle systems or receive messages. The researchers used a combination of on-road experiments and machine learning, as well as Bayesian filtering techniques, to continuously measure driver "workload." The resulting algorithm is adaptable and can respond in near-real time to changes in the driver’s behavior and status, road conditions, road type, or driver characteristics. This information can then be incorporated into in-vehicle systems. Said Cambridge's Bashar Ahmad, “We’ve been able to adapt the models on the go using simple Bayesian filtering techniques. It can easily adapt to different road types and conditions, or different drivers using the same car."
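
As a rough illustration of the Bayesian filtering idea (not the Cambridge/Jaguar Land Rover model), the sketch below keeps a running probability that the driver is in a high-workload state from a stream of noisy per-second indicators; the transition and observation probabilities are assumed for the example.

    import numpy as np

    # Two hidden states: 0 = low workload, 1 = high workload (illustrative values).
    transition = np.array([[0.95, 0.05],
                           [0.10, 0.90]])      # P(next state | current state)
    # Probability of seeing a "stress indicator" (e.g. hard braking) in each state.
    p_indicator = np.array([0.05, 0.60])

    def update(belief, observed_indicator):
        """One Bayesian filtering step: predict, then reweight by the observation."""
        predicted = transition.T @ belief
        likelihood = p_indicator if observed_indicator else 1.0 - p_indicator
        posterior = likelihood * predicted
        return posterior / posterior.sum()

    belief = np.array([0.9, 0.1])              # start mostly confident in low workload
    for obs in [0, 0, 1, 1, 1, 0]:             # stream of per-second indicators
        belief = update(belief, obs)
        print(f"P(high workload) = {belief[1]:.2f}")
    # An in-vehicle system might hold notifications whenever P(high workload) > 0.5.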

Full Article

 

 

Climate Summit Embraces AI, with Reservations
The New York Times
Jim Tankersley
December 4, 2023


COP28, the United Nations (U.N.) climate summit, opened last week in Dubai with a host of events and announcements centered on artificial intelligence (AI), touting its ability to process vast quantities of information and produce insights and efficiencies that far exceed what computers and data scientists have been able to do. The U.N. announced on the opening day that it was partnering with Microsoft on an AI-powered tool to track whether countries are following through on their pledges to reduce carbon emissions. Officials from Google and Boston Consulting Group, meanwhile, predicted that AI could help mitigate as much as a tenth of all greenhouse gas emissions by 2030. Researchers and company execs also focused on the computing power required to run advanced AI, saying they hoped the relative benefits of AI would outweigh negative impacts on emissions, but they offered no certainties.

Full Article

*May Require Paid Registration

 

 

The World Depends on 60-Year-Old Code No One Knows
PC Magazine
JD Sartain
December 1, 2023


The 64-year-old programming language COBOL is one of the top mainframe programming languages in use, particularly in the banking, automotive, insurance, government, healthcare, and finance sectors. However, most schools and universities have not taught COBOL in decades, which poses a challenge as those well-versed in COBOL retire, with few options to replace them. IBM hopes to use artificial intelligence (AI) to remedy the situation by creating a generative AI-powered code assistant dubbed watsonx to convert COBOL code to a more modern programming language. Said IBM's Skyla Loomis, "It's AI assisted, but it still requires the developer" to edit the code provided by the AI. She added, "It's a productivity enhancement — not a developer replacement type of activity."

Full Article

 

 

AI Makes Gripping More Intuitive
Technical University of Munich (Germany)
December 4, 2023


An artificial intelligence (AI) algorithm developed by researchers at Germany's Technical University of Munich (TUM) uses the synergy principle and a network of sensors to help patients more intuitively control advanced hand prostheses. The synergy principle is used to describe, for instance, how the fingers move in a synchronized way to grasp an object and adapt to its shape once contact is made. The researchers have developed machine learning algorithms based on this principle. Said TUM's Patricia Capsi Morales, "With the help of machine learning, we can understand the variations among subjects and improve the control adaptability over time and the learning process."

Full Article

 

 

J&J Hired Thousands of Data Scientists. Will the Strategy Pay Off?
The Wall Street Journal
Peter Loftus
November 30, 2023


Johnson & Johnson (J&J) has invested heavily in data science and artificial intelligence (AI) as it shifts its efforts toward drug discovery, hiring 6,000 data scientists in recent years. While some industry leaders do not believe AI will outperform humans in drug discovery, J&J said its vast database, med. AI, gives it a competitive edge, with more than 3 petabytes of information, including real-world anonymized data and years of clinical-trial results. Said J&J's Najat Khan, "AI and data science are going to be the heart of how we are transforming and innovating. The amount of data is increasing, the algorithms are getting better, the computers are getting better."

Full Article

*May Require Paid Registration

 

 

Crowdsourced Feedback Helps Train Robots
MIT News
Adam Zewe
November 27, 2023


A reinforcement learning approach developed by researchers at the Massachusetts Institute of Technology, Harvard University, and the University of Washington trains robots using crowdsourced feedback from nonexpert users. MIT's Marcel Torne said that with the Human Guided Exploration (HuGE) method, "The reward function guides the agent to what it should explore, instead of telling it exactly what it should do to complete the task." The researchers divided the process into two parts, using a goal selector algorithm updated continuously with crowdsourced human feedback and another algorithm that enables the artificial intelligence agent to explore in a self-supervised manner guided by the goal selector. In both simulated and real-world tests, HuGE enabled agents to complete goals more quickly than other methods.
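
The two-part structure can be sketched in toy form: a goal selector scores visited states from noisy, nonexpert comparisons, and the agent explores on its own around the best-scored state. This is only a schematic stand-in for HuGE on a made-up 1-D task; the goal, noise level, and update rule are all assumptions.

    import random

    GOAL = 9                                  # hidden target on a 1-D track (toy stand-in for a task goal)
    scores = {0: 0.0, 1: 0.0}                 # goal selector: crowd-derived score for each visited state

    def crowd_feedback(a, b):
        """Nonexpert answer to 'which state looks closer to the goal?' -- wrong 20% of the time."""
        better, worse = (a, b) if abs(GOAL - a) < abs(GOAL - b) else (b, a)
        return better if random.random() > 0.2 else worse

    for episode in range(60):
        # 1) The goal selector picks the most promising visited state to explore from.
        frontier = max(scores, key=scores.get)
        # 2) Self-supervised exploration: a few random steps outward from that state.
        state = frontier
        for _ in range(3):
            state = max(0, min(10, state + random.choice([-1, 1])))
            scores.setdefault(state, 0.0)
        # 3) A crowdsourced comparison between two visited states nudges the scores.
        a, b = random.sample(sorted(scores), 2)
        scores[crowd_feedback(a, b)] += 1.0

    print("states visited:", sorted(scores))
    print("goal state reached:", GOAL in scores)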

Full Article

 

 

COP28 Addresses AI’s Role In Combating Climate Change

The New York Times (12/3, Tankersley) reports that although “artificial intelligence has been a breakout star in the opening days of COP28, the United Nations climate summit in Dubai,” participants “have also voiced worries about A.I.’s potential to devour energy, and harm humans and the planet.” During the summit, “the United Nations said...that it was partnering with Microsoft on an A.I.-powered tool to track whether countries are following through on their pledges to reduce fossil fuel emissions.” However, conference speakers also cautioned that “what we don’t want is to move from one man-made [threat] to another...so we have to be responsible and ethical, and really cautious, in how we release and understand some of these technologies.”

 

How Microsoft’s Partnership With OpenAI Led To Sudden Leadership Conflicts

In a roughly 7,700-word article, The New Yorker (12/1, Duhigg) reported Microsoft’s chief executive, Satya Nadella, last month received a call from an executive from OpenAI, “an artificial-intelligence startup into which Microsoft had invested a reported thirteen billion dollars,” and discovered that “within the next twenty minutes the company’s board would announce that it had fired Sam Altman, OpenAI’s C.E.O. and co-founder.” The companies’ collaboration “had just led to Microsoft’s biggest rollout in a decade: a fleet of cutting-edge A.I. assistants that had been built on top of OpenAI’s technology and integrated into Microsoft’s core productivity programs, such as Word, Outlook, and PowerPoint.”

        Unbeknownst to Nadella, “however, relations between Altman and OpenAI’s board had become troubled.” Nadella began reaching out to executives to gather more information on the conflict, and “as more people learned of Altman’s firing, OpenAI employees – whose belief in Altman, and in OpenAI’s mission, bordered on the fanatical – began expressing dismay online.” Ultimately, the unhappy board members “felt that OpenAI’s mission required them to be vigilant about A.I. becoming too dangerous, and they believed that they couldn’t carry out this duty with Altman in place,” so they targeted him “with a misguided faith that Microsoft would accede to their uprising.”

 

Meta Chief AI Scientist Says Current AI Systems Are Decades From Sentience

CNBC (12/3, Vanian) reports Meta Chief AI Scientist Yann LeCun “said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with common sense that can push their abilities beyond merely summarizing mountains of text in creative ways.” His comments stand “in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be ‘fairly competitive’ with humans in less than five years, besting people at a multitude of mentally intensive tasks.” However, LeCun has said Huang has much to gain from a potential “AI craze,” saying that “there is an AI war, and [Huang’s] supplying the weapons” – alluding to the fact that NVIDIA is one of the largest global suppliers of computer chips.

 

Google Postpones Debut Of AI Chatbot

The Information (12/2, LeVine, Subscription Publication) reported Google has delayed the debut of its AI chatbot Gemini until January, “two people with knowledge of the decision said.” CEO Sundar Pichai has “recently decided to scrap a series of Gemini events, originally scheduled for next week in California, New York and Washington, after the company found the AI didn’t reliably handle some non-English queries, one of these people said.” The events, “which hadn’t been publicized, would have marked Google’s most important product launch of the year, after it strained its computing resources and merged large teams in an urgent pursuit of OpenAI.”


Tech Companies Funding Fellows Working On AI Policy In Senate Offices

Politico (12/3, Bordelon) reports major tech companies “are channeling money through a venerable science nonprofit to help fund fellows working on AI policy in key Senate offices, adding to the roster of government staffers across Washington whose salaries are being paid by tech billionaires and others with direct interests in AI regulation.” The “rapid response cohort” of congressional AI fellows “is run by the American Association for the Advancement of Science, a Washington-based nonprofit, with substantial support from Microsoft, OpenAI, Google, IBM and Nvidia, according to the AAAS.” The six rapid response fellows “operate from the offices of two of Senate Majority Leader Chuck Schumer’s top three lieutenants on AI legislation – Sens. Martin Heinrich (D-N.M.) and Mike Rounds (R-S.D.) – as well as the Senate Banking Committee and the offices of Sens. Ron Wyden (D-Ore.), Bill Cassidy (R-La.) and Mark Kelly (D-Ariz.).”

 

Virginia Congressman Pursuing AI Graduate Degree Amid Efforts To Regulate New Technology

CNBC (12/2, Wilkins) reports Rep. Don Beyer (D-VA) is pursuing a master’s degree in artificial intelligence at George Mason University. CNBC explains Beyer, who is part of “almost every group” of House lawmakers working on AI issues, “can only take about one class a semester, as he balances voting on the floor, working on legislation and fundraising with getting his coding homework done. But the classes are already providing benefits.” Beyer said, “With every additional course I take, I think I have a better understanding of how the actual coding works. ... What it means to have big datasets, what it means to look for these linkages and also, perhaps, what it means to have unintended consequences.”

 

Google Boosts Lobbying To Win Support For Using AI In Healthcare

Politico (12/4, Reader) reports Google “wants to make your cell phone a ‘doctor in your pocket’ that relies on the company’s artificial intelligence,” but the company “first...will need to convince skeptical lawmakers and the Biden administration that its health AI isn’t a risk to patient privacy and safety – or a threat to its smaller competitors.” Politico adds Google “assembled a potent lobbying team to influence the rules governing AI just as regulators start writing them,” but lawmakers “say they’re concerned that the company is using its advanced AI in health care before government has had a chance to draw up guardrails,” while “competitors worry Google is moving to corner the market. Both fear what could happen to patient privacy given Google’s history of vacuuming personal data.”

 

Meta, IBM Launch AI Alliance

Bloomberg (12/5, Barinka, Subscription Publication) reports Meta and IBM “are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks.” Bloomberg says the AI Alliance will “focus on the responsible development of AI technology, including safety and security tools,” and “also will look to increase the number of open source AI models – rather than the proprietary systems favored by some companies – develop new hardware and team up with academic researchers.”

        Fortune (12/5, Lazzaro) says as the AI industry “has blossomed, it’s also fractured into camps around the idea of openness,” but the AI Alliance “wants to blow the whole debate wide open.” According to Fortune, the AI Alliance “rejects the current dichotomy” and “believes it has minimized the definition and benefits of open, and is looking to expand the emphasis on open far beyond models.”

        The Wall Street Journal (12/5, Lin, Subscription Publication) and Fox Business (12/5, Dumas) provide similar coverage.

 

Antitrust Groups Press For More Policy On AI

Politico (12/5, Oprysko) reports a “coalition of tech and competition watchdog groups wants the Biden administration to do more when it comes to preventing the usual tech giants from extending their dominance and influence to the AI sector amid the global scramble to stand up a regulatory framework.” The Tech Oversight Project, American Economic Liberties Project, Accountable Tech, Demand Progress, the Institute for Local Self-Reliance and others said in a letter Monday to National Economic Council Director Lael Brainard, “While we certainly welcome the Administration’s recent executive order on AI, we are concerned that the EO does not sufficiently address the competition concerns posed by these technologies.” Going forward, the groups added, “we urge you to issue further policy measures aimed at preventing AI from being wielded as a tool of monopolists.”

        California Lawmakers Weigh AI Regulation. According to Politico (12/5, White), “Silicon Valley’s freewheeling artificial intelligence industry is about to face its first major policy roadblocks,” but it will be from lawmakers in California, “not in Washington.” Politico reports regulations for the “fast-spreading technology – where machines are taught to think and act like humans – will dominate Sacramento next year as California lawmakers prepare at least a dozen bills aimed at curbing what are widely seen as AI’s biggest threats to society,” and will focus on “the technology’s potential to eliminate vast numbers of jobs, intrude on workers’ privacy, sow election misinformation, imperil public safety and make decisions based on biased algorithms.” Politico adds this “will cast California in a familiar role as a de facto U.S. regulator in the absence of federal action,” even as it will also “set lawmakers eager to avoid letting another transformative technology spiral out of control – and powerful labor unions intent on protecting jobs – against the deep-pocketed tech industry.”

        NY Lawmakers To Consider Legislation To Regulate AI Industry. Newsday (NY) (12/5, Gormley) reports New York lawmakers have proposed “more than a dozen active bills” to “nurture the computer technology that can greatly advance health care, create more creative jobs for people now toiling in repetitive ones and perform mundane tasks such as household chores and driving,” while also “looking to guard against the potential dangers of artificial intelligence: a deeper gulf in income inequality, an erosion of privacy rights, and the broader threat of uncontrolled, self-aware computers.” Newsday details some of the legislation that “may be debated in the 2024 legislative session beginning in January.”

 

EU Negotiators Reach Compromise On Landmark AI Regulation

Bloomberg (12/6, Deutsch, Subscription Publication) reports the EU is closing in on what would be “the most extensive and wide-reaching regulation of artificial intelligence in the western world.” After reaching a compromise during an hours-long meeting on Wednesday, negotiators “have agreed to a set of controls for generative artificial intelligence tools such as OpenAI Inc.’s ChatGPT and Google’s Bard – the kind capable of producing content on command, people familiar with the discussions said early Thursday.” Bloomberg describes the agreement as “a critical step in clearing landmark AI policy that will – in the absence of any meaningful action by US Congress – set the tone for the regulation of generative AI tools such as OpenAI’s ChatGPT and Google’s Bard in the developed world.” The lengthy talks “underscore how contentious the debate over regulating AI has become, dividing world leaders and tech executives alike as generative tools continue to explode in popularity.”

        US, EU Taking Divergent Approaches To AI Regulation. The Washington Post (12/6, Zakrzewski, Faiola, Lima) reports that while EU policymakers were in late-night negotiations, “senators signaled that the U.S. Congress is taking a divergent approach from the European Union on the emerging technology, with lawmakers raising concerns the bloc’s approach could be heavy-handed and risk alienating AI developers.” This divide underscores “the challenges of regulating artificial intelligence, a rising priority for governments around the world in the year since the release of the AI-powered chatbot ChatGPT sparked a global frenzy.” Senate Majority Leader Schumer “told reporters that the bipartisan group was ‘starting to really begin to work on legislation.’” While Schumer was short on specifics, Sen. Mike Rounds (R-SD) “said the senators are pursuing an ‘incentive-based’ approach in an effort to retain AI developers in the United States.” Rounds said, “If (European policymakers) look at this as a regulatory activity, they will chase AI development to the United States. ... What we don’t want is to chase AI development to our adversaries.”

 

Google Releases Gemini AI Model, TPU Chip For Training AI

Bloomberg (12/6, Alba, Ghaffary, Subscription Publication) reports, “Alphabet Inc.’s Google invented the technology underpinning the current artificial intelligence boom, but its products lag in popularity. The search giant hopes to change that with the much-anticipated release of Gemini, the ‘largest and most capable AI model’ the company has ever built.” At a Wednesday presentation, “Google stressed that Gemini is the most flexible model it’s made because it comes in different sizes, including a version that can run directly on smartphones. That sets the program apart from other competitors” like ChatGPT.

        The New York Times (12/6, Metz, Grant) reports, “When OpenAI wowed the world with the A.I. chatbot ChatGPT late last year, Google was caught flat-footed. The tech giant had spent years developing similar technology, but like other tech giants – most notably Meta – it was reluctant to release a technology that could generate biased, false or otherwise toxic information.” Google’s competing chatbot, Bard, has so far received “middling reviews.”

 

Meta Is Testing Over 20 New AI Features In Effort To Boost Appeal

Bloomberg (12/6, Barinka, Subscription Publication) reports, “Meta Platforms Inc. is betting artificial intelligence will convince people to spend more time on its social media apps next year and is using the technology to make the biggest changes across its platforms since introducing short-form video Reels in 2020.” Currently, the company is testing over 20 different generative AI features across its various platforms, including Facebook, Instagram, Messenger and WhatsApp. PYMNTS (12/6) reports that, while much of Meta’s existing AI investment has gone towards work done behind the scenes, such as building data centers and developing its Llama large language model, the company plans to begin offering consumer-facing AI products next year.

 

White House Looks To AI To Achieve Cancer Moonshot Goals

In a roundup, Politico (12/6, Schumaker, Zeller, Payne, Paun, Reader, Peng) reports that the Biden Administration is looking to AI to help achieve its cancer moonshot goals, “according to Catherine Young, assistant director for cancer moonshot engagement and policy, who spoke during a Library of Congress panel discussion on Tuesday.” According to Young, AI can play a crucial role in advancing cancer research and treatment alongside public health strategies. She added that the emphasis is on harnessing the benefits of rapidly developing AI technology to support cancer scientists and patients.

        Advocacy Groups Press Biden To Address AI Impact On Climate Change. The Hill (12/6, Klar) reports that “climate and tech advocacy groups” sent a letter Wednesday to President Biden highlighting concerns about how generative AI could increase the spread of climate misinformation and how the “enormous energy requirements and widespread use of large language models can increase carbon emissions.” Referring to an executive order Biden signed on AI in October, the letter said the groups were “disappointed that AI’s potential to worsen the climate change crisis” was not mentioned as a risk. Seventeen groups signed the letter, “including Friends of the Earth, Accountable Tech and the Center for Countering Digital Hate.”

 

NYTimes Outlines Possible AI Regulations

The New York Times (12/6, Kang, Satariano) reports, “Though their attempts to keep up with developments in artificial intelligence have mostly fallen short, regulators around the world are taking vastly different approaches to policing the technology. The result is a highly fragmented and confusing global regulatory landscape for a borderless technology that promises to transform job markets, contribute to the spread of disinformation or even present a risk to humanity.” The Times outlines the various approaches governments have proposed. So far, the “Biden administration has given companies leeway to voluntarily police themselves for safety and security risks.” The Administration announced in July “that several A.I. makers, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, had agreed to self-regulate their systems.”

 

New AI Tool Aims To Help Parents Access State Assessment Data Nationwide

The Seventy Four (12/6, Toppo) reports a free, AI-enabled tool “promises parents, researchers and policymakers a no-fuss way to access state assessment data, offering up-to-date academic information for all 50 states and the District of Columbia.” The online tool, “its creators say, will democratize school performance data at an important time, as schools nationwide struggle to recover from the COVID-19 pandemic.” Scheduled to go “live today, the new website sports a simple interface that allows users to query it conversationally, as they would a search engine or AI chatbot, to plumb math and English language arts data in grades 3-8. At the moment, there are no firm plans to add high school-level data.” The project, dubbed “Zelma,” is a “partnership between Brown University and Novy, the company that built the site’s AI functionality.”

        USA Today (12/6) reports that “although every state is required to participate in standardized testing, accessing the results can be tricky. Not all results are available online, nor are they presented uniformly.” Zelma is working “to bridge the data gap by allowing users to compare between subgroups like gender or race/ethnicity within a state or school district, as well as explore comparing district outcomes across subjects over time.”

 

Head Of Cancer Moonshot Initiative Says AI Can Help Combat Health Misinformation

The Hill (12/7, Weixel) covers its interview with the “head of the White House cancer moonshot initiative,” who “said artificial intelligence (AI) can be used to help combat health misinformation.” Danielle Carnival stated, “There are lots of risks with” AI, “but one of the opportunities I think is going to make a huge impact in healthcare is the ability for people to get targeted information in a language or a culturally appropriate way that they can receive and act on.” She added, “As we build these AI systems, it’s not good enough just not to introduce new bias through AI, but to use AI in healthcare to improve equity, improve the ability for people to get the information and knowledge they need to make preventive healthcare decisions” and “get access to early screening and detection.”

 

Apple Releases Open-Source Machine Learning Framework To Little Fanfare

Gizmodo (12/7) reports, “Google’s launch of Gemini yesterday was like a bellowing announcement that a new king has arrived in the AI space, but Apple had its own AI launch, or more like a quiet whimper that tried to go unnoticed.” Apple “released an open-source machine learning framework, MLX, that runs on Apple’s chips. MLX could one day bring generative AI apps to Apple products, but that day is not today.” The company “has been caught completely off guard by the rise of AI, according to Bloomberg’s Mark Gurman.”

 

OpenAI COO: Claims AI Can Revolutionize Business Are Exaggerated

The Verge (12/4, David) reports OpenAI COO Brad Lightcap “said in an interview with CNBC that one of the more overhyped parts of artificial intelligence is that ‘in one fell swoop, [it] can deliver substantive business change.’” Lightcap “said companies have approached OpenAI expecting generative AI to solve many problems, dramatically cut costs, and bring back growth if they’re struggling,” but he “said that while AI could improve more, ‘there’s never one thing you can do with AI that solves that problem in full’ and that the technology is still in its infancy.”

        NYTimes Article Details Big Tech’s Response To ChatGPT’s Release. In a 4000-word article, the New York Times (12/5, Weise, Metz, Grant, Isaac) details the impact that ChatGPT’s launch had on Silicon Valley. The chatbot’s emergence led Google CEO Sundar Pichai to push forward “a slate of products based on artificial intelligence.” Google viewed the situation as “a real crisis. Its business model was potentially at risk.” Pichai was “amazed...that OpenAI had gone ahead and released” ChatGPT, even though it still made errors, “and that consumers loved it. If OpenAI could do that, why couldn’t Google?” The article adds, “For tech company bosses, the decision of when and how to turn A.I. into a (hopefully) profitable business was a more simple risk-reward calculus. But to win, you had to have a product.”

 

Musk Says X Premium Subscribers Now Have Access To His Grok AI

Reuters (12/7, Singh) reports, “Elon Musk said on Thursday his artificial intelligence (AI) startup xAI is rolling out ChatGPT competitor Grok for Premium+ subscribers of social media platform X.” Musk “announced it in a post on X, without revealing any more details of the launch. Last month, he had said that as soon as Grok was out of early beta testing, it would become available to the subscribers.” Musk “intends to turn X into a ‘super app,’ offering a range of services to its subscribers from messaging and social networking to peer-to-peer payments.”

dtau...@gmail.com

unread,
Dec 17, 2023, 8:25:33 PM12/17/23
to ai-b...@googlegroups.com

Researchers Develop Spintronic Probabilistic Computers
Tohoku University (Japan)
December 13, 2023

A proof-of-concept spintronic probabilistic computer developed by researchers at Japan's Tohoku University and the University of California, Santa Barbara, is compatible with current AI. The researchers demonstrated that stochastic magnetic tunnel junctions (sMTJs) interfaced with powerful field-programmable gate arrays allow for robust, fully asynchronous probabilistic computers. Notably, the researchers demonstrated the fastest p-bits to date at the circuit level using in-plane sMTJs, as well as the basic operation of a Bayesian network, an example of a feedforward stochastic neural network, by enforcing an update order at the hardware level with layer-by-layer parallelism.
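
For readers unfamiliar with p-bits, the sketch below simulates the behavior in software: each p-bit outputs 1 with a probability set by its input, and sampling a small two-node network in a fixed feedforward order yields approximate Bayesian inference. It is an analogy only; in the actual hardware the randomness comes from the sMTJ physics, and the weights here are made up.

    import math
    import random

    def p_bit(input_current):
        """A probabilistic bit: outputs 1 with probability sigmoid(input).
        In the hardware the randomness comes from a stochastic magnetic
        tunnel junction; here it is just a pseudorandom draw."""
        p_one = 1.0 / (1.0 + math.exp(-input_current))
        return 1 if random.random() < p_one else 0

    # A tiny feedforward "Bayesian network": rain -> wet ground (assumed weights).
    samples = []
    for _ in range(10_000):
        rain = p_bit(-1.0)                     # rain is fairly unlikely
        wet = p_bit(3.0 if rain else -3.0)     # ground is usually wet only when it rains
        samples.append((rain, wet))

    p_rain_given_wet = sum(r for r, w in samples if w) / max(1, sum(w for _, w in samples))
    print(f"P(rain | wet ground) ~ {p_rain_given_wet:.2f}")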
 

Full Article

 

 

Teens Push to Broaden AI Literacy
The New York Times
Natasha Singer
December 14, 2023


Some teenagers are asking their schools to provide broader AI learning experiences grounded firmly in the present, not in future doomsday and utopian scenarios painted by some experts and technology companies. “We need to find some sort of balance between ‘AI is going to rule the world’ and ‘AI is going to end the world,’” said Isabella Iturrate, a 12th grader at River Dell High School in Oradell, N.J. “But that will be impossible to find without using AI in the classroom and talking about it at school.” Such discussions come as school districts begin to consider how AI fits into existing coursework.
 

Full Article

*May Require Paid Registration

 

 

Cheating Fears Over Chatbots Were Overblown, Research Suggests
The New York Times
Natasha Singer
December 13, 2023


Recent research indicates overall cheating rates in U.S. high schools have not increased due to ChatGPT. In surveys of more than 40 high schools across the U.S., Stanford University researchers found 60% to 70% of students admitted cheating in school this year, on par with findings in previous years. Meanwhile, a Pew Research Center survey of more than 1,400 U.S. teenagers found that almost 33% had heard "nothing at all" about ChatGPT, and just 13% had used ChatGPT to help with their schoolwork. While 20% of teens surveyed by Pew believed it was acceptable to write essays with help from ChatGPT, almost 70% said it was acceptable to use it to research new topics.
 

Full Article

*May Require Paid Registration

 

 

AI Turns Thoughts into Text
University of Technology Sydney (Australia)
December 12, 2023


A portable, non-invasive system developed by researchers at Australia's University of Technology Sydney (UTS) can transform an individual's thoughts into text. The system incorporates what the researchers described as DeWave, an artificial intelligence (AI) model trained on vast amounts of electroencephalogram (EEG) data that can translate EEG signals received through a cap the subject wears into words and sentences.

Full Article

 

 

Europe Reaches a Deal on Comprehensive AI Rules
Associated Press
Kelvin Chan
December 8, 2023


EU negotiators have reached a deal on what is being hailed as the world’s first comprehensive rules governing artificial intelligence (AI), paving the way for legal oversight of the technology. Negotiators from the European Parliament and the bloc’s 27 member countries on Friday overcame differences on points including generative AI and police use of facial recognition technologies for surveillance to sign a tentative political agreement for the Artificial Intelligence Act. The U.S., U.K., China, and global coalitions like the Group of 7 major democracies have introduced their own proposals. The EU rules “can set a powerful example for many governments considering regulation,” said Columbia Law School's Anu Bradford. Other countries, Bradford said, “may not copy every provision but will likely emulate many aspects of it."

Full Article

 

 

Big Tech Funds the Very People Who Are Supposed to Hold It Accountable
The Washington Post
Joseph Menn; Naomi Nix
December 7, 2023


Google, Meta, and other tech giants have increased their charitable giving to universities in recent years, raising concerns about their influence on research topics including artificial intelligence, social media, and disinformation. Academics say they must depend more on tech companies to gain access to vast amounts of data, at the same time that Meta and X have reduced access to their data, requiring researchers to pay more or negotiate special deals. Although most academics insist tech companies do not influence their work, citing ethics rules, two dozen professors said in interviews that tech companies wield "soft power" by controlling funding and data access. University of California, Berkeley's Hany Farid said, "They pay for the research of the very people in a position to criticize them."

Full Article

*May Require Paid Registration

 

 

Avatars Entirely Replace Human Newscasters
Tom's Guide
Ryan Morrison
December 13, 2023


Technology and media startup Channel 1 plans to launch a news channel in February using realistic-appearing AI avatars as its news anchors. A demo episode suggests the channel’s news coverage will come from sources across the globe, with AI used in on-screen output and story selection. Humans will be involved in writing copy and in the editing process. The news will come from freelance independent journalists, in addition to AI-generated news from government documents, among other sources. Coverage will be translated into different languages and will attempt to reflect viewers' interests.
 

Full Article

 

 

Climate Change Is Breaking Insurance. Tech Could Save It
The Wall Street Journal
Christopher Mims
December 8, 2023


As major insurers stop writing coverage in states prone to natural disasters, insurance technology startups are filling the gap. FloodFlash, for instance, offers parametric insurance coverage that automatically triggers payouts when on-the-ground sensors determine floodwaters have reached a certain threshold. These startups use data science and artificial intelligence (AI) to assess risks. Kettle, for instance, which bases its property insurance sales on AI assessments of how climate change impacts risk, has evaluated every property in California using its algorithms, ranking them on their risk of wildfire destruction.
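
A parametric policy of the FloodFlash type reduces to a lookup from a sensor reading to a pre-agreed payout. The sketch below shows that trigger logic with hypothetical depth thresholds and amounts; it is not FloodFlash's actual schedule.

    # Hypothetical parametric flood policy: payouts are tied to measured water depth
    # at an on-site sensor, not to a loss adjuster's assessment.
    PAYOUT_TIERS = [          # (water depth in metres, payout) -- illustrative only
        (0.15, 25_000),
        (0.45, 100_000),
        (0.90, 250_000),
    ]

    def parametric_payout(measured_depth_m: float) -> int:
        """Return the payout owed for the deepest tier the sensor reading crossed."""
        owed = 0
        for trigger, amount in PAYOUT_TIERS:
            if measured_depth_m >= trigger:
                owed = amount
        return owed

    print(parametric_payout(0.10))   # 0 -- below every trigger
    print(parametric_payout(0.50))   # 100000 -- second tier breached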

Full Article

*May Require Paid Registration

 

 

Using AI-Generated Images to Map Visual Functions in the Brain
Weill Cornell Medicine Newsroom
November 30, 2023


A study by Cornell University researchers showed that images selected or generated based on an artificial intelligence (AI) model of the human visual system could be used to target the visual processing areas of the brain while eliminating biases associated with viewing a limited set of images chosen by researchers. Based on functional magnetic resonance imaging (fMRI) scans of participants' brain activity, the researchers found AI-selected and generated images were better than control images at activating the target areas. They also determined that image-response data could be used to fine-tune vision models for a specific individual to achieve maximum activation. Said Weill Cornell Medicine's Amy Kuceyeski, "In principle, we could alter the connectivity between two parts of the brain using specifically designed stimuli, for example to weaken a connection that causes excess anxiety."

Full Article

 

 

AI Tries to Resurrect Vincent van Gogh
The New York Times
Zachary Small
December 13, 2023


At the Musée D’Orsay in Paris, a replica of Vincent van Gogh chats with visitors, offering insights into his life and death. The “Bonjour Vincent” exhibit, intended to represent the painter’s humanity, uses artificial intelligence to analyze hundreds of letters the artist wrote, as well as early biographies written about him. Visitors can converse with the replica on a digital screen through a microphone.

Full Article

*May Require Free Registration

 

Researchers Say AI Tool Could Help Admissions Officers Evaluate College Essays

Chalkbeat (12/8, Gonzales) reported every year, “university admissions officers read and sort through tens of thousands of essays,” but some researchers say an artificial intelligence tool “may be able to help admissions officers sort through essays and recognize prospective students who might previously have gone unrecognized.” The group of researchers say the application “has the ability to pull out key traits of students, such as leadership qualities or the ability to persevere.” To develop the tool, researchers examined anonymous, 150-word essays submitted to colleges, which “focused on extracurricular activities and work experiences. A group of admissions officers then read those essays and scored them based on seven characteristics. The researchers trained the AI system based on how admissions officers evaluated those characteristics within the essays. The AI platform was able to identify those characteristics in new essays and assign qualities to applicants across different student backgrounds, including whether students demonstrated teamwork or intrinsic motivation.”
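
The general recipe described here, training on essays that admissions officers have already scored and then predicting those traits in new essays, can be sketched as a small text classifier. The snippet below (using scikit-learn, with toy essays and a single made-up trait) is only an illustration of that workflow, not the study's model or data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: short excerpts plus an admissions officer's yes/no label
    # for a single trait ("teamwork"). The real study scored seven traits.
    essays = [
        "I organized our robotics team and coordinated weekly builds.",
        "We split the fundraiser tasks and supported each other to the deadline.",
        "I spent the summer reading on my own and journaling every day.",
        "My proudest moment was finishing a solo 10k after months of training.",
    ]
    teamwork_label = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(essays, teamwork_label)

    new_essay = ["Our group rebuilt the school garden together over spring break."]
    print("P(teamwork) =", model.predict_proba(new_essay)[0][1])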

 

Survey Shows Only 10% Of Organizations Have Launched Generative AI Solutions

Fortune (12/8, Estrada) says, “OpenAI’s ChatGPT had its one-year anniversary on Nov. 30. And generative AI remains a hot topic. But a lot of companies are still just testing the waters.” Fortune explains, “Cnvrg.io, an Intel company, released its annual 2023 ML Insider global survey and the findings indicate the majority of organizations are still in the research or testing phase of incorporating generative AI in production.” According to the findings, only 10% “have launched generative AI solution... Meanwhile, 25% of respondents said they’re building pilot projects for selected use cases.”

 

Sam Altman’s Ouster Allegedly Prompted By Employees’ Accusations Of Abusive Behavior, Sources Say

The Washington Post (12/8, Tiku) reports, “This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman.” According to unnamed sources, the OpenAI CEO “had been psychologically abusive, the employees alleged, creating pockets of chaos and delays at the artificial-intelligence start-up.” These latest “complaints triggered a review of Altman’s conduct during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic.” The complaints ultimately “were a major factor in the board’s abrupt decision to fire Altman on Nov. 17.”

 

AI Improving Athletes’ Safety, Performance

CNBC (12/9, Woods) reports on how AI is helping improve athletes’ safety and performance. For example, the NFL “is harnessing AI and computer vision to enhance its Digital Athlete program, developed in partnership with Amazon Web Services beginning in 2019.” The program “provides a complete view of each NFL players’ experience by analyzing data from his training and game activity, which is captured by sensors and tags in equipment and hours of video from cameras in stadiums. ... This data is shared with clubs and allows teams to precisely understand what players need to stay healthy, recover quickly and perform at their best.” NFL SVP of Health and Safety Innovation Jennifer Langton said, “AI and machine learning are the backbone of the program. ... We’re able to analyze a substantial amount of data and automatically generate insights into which players might benefit from altering either training or recovery routines, a process that used to be so manual and cumbersome.”

 

Raimondo Says US Looking Into Specifics Of Nvidia’s AI Chips For China

Bloomberg (12/11, Hawkins, Subscription Publication) reports that Commerce Secretary Raimondo “said the US is looking into the specifics of three new artificial intelligence accelerators that Nvidia Corp. is developing for China, after vowing earlier this month to restrict any new chips that give the Asian country AI capabilities.” Bloomberg says that Nvidia, based in California, is in the process of “developing China-specific chips after the US tightened export controls to block the export of semiconductors the company had earlier designed for China.” In response to Raimondo’s latest remarks, Nvidia “said it was working with the US government following its clear rules” and looking to “offer compliant data center solutions to customers worldwide.”

 

AFL-CIO, Microsoft Partnering On AI Training For Workers

Forbes (12/11, Nguyen) reports Microsoft and the AFL-CIO “launched a partnership Monday to train workers and leaders on how artificial intelligence is used in the workplace.” Starting next winter, Microsoft experts will train union workers and leaders “on AI trends, how the technology works and is developed and what its challenges are.” Additionally, Microsoft will receive feedback from workers and union leaders “on their experience working with AI and their concerns about the technology.”

        CNBC (12/11, Field) reports that, amid “increasing fears” the jobs currently filled by humans will be replaced with new technologies, “AI providers have increased their responses to public pressure and questioning on how their technologies may affect workers.”

        Additional coverage includes GeekWire (12/11, Bishop).

 

College Students See Generative AI As Just Another Tool

Minneapolis Star Tribune (12/11) business columnist Evan Ramstad writes that college-age adults “have already seen a lot of fascinating innovations in their young lives.” Cars that “are nearly autonomous, rockets that take off and land on the same launch pad and any piece of information or entertainment at their fingertips.” And so “AI chatbots – and the language modeling they’re based on – amount to just another new tool to them.” The fear educators “had about ChatGPT when it was publicly released a year ago has subsided.” At the University of Minnesota’s Carlson School of Management, “professors are rapidly changing courses to allow students to use generative AI tools” at the urging of “a board of employers who advise the school.”

        Opinion: Generative AI Is Forcing Educators To Rethink Strategies. In a nearly 2,800-word piece, Washington Post (12/12, Roberts) editorial writer Molly Roberts opines that there is “no better place to see the promise and the peril of generative artificial intelligence playing out than in academia.” In the spring, after Ole Miss “students came back to campus eager to enlist robots in their essay-writing, Mark Watkins and his colleagues created the Mississippi AI Institute.” The hope is “that the institute’s work can eventually be used by campuses across the country.” For now, a “two-day program in early June at Ole Miss may be the only one of its kind to pay teachers a stipend to educate themselves on AI: how students are probably using it today, how they could be using it better, and what all of that means for their brains.” Roberts says the advent of AI is “forcing educators to rethink plagiarism guidelines, grading and even lesson plans.”

 

Bismarck State College’s AI-Written Play Gives Commentary On The Potential And Flaws Of ChatGPT

States Newsroom (12/12) reports “boosters of so-called generative AI point to its massive educational and creative potential.” But that’s “also inspired widespread anxiety, even existential fear, about the future of creative work.” The recent strikes “by Hollywood writers and actors, for instance, were spurred in part by concerns that generative AI would sideline creative workers.” Both successfully “bargained for regulations on how the technology can be used by film and television producers.” In “The AI Plays,” students at Bismarck State College Theatre “throw their two cents into the debate.” The group “decided to have ChatGPT write the scripts as an interesting way to show people just how far the technology has come.”

 

Amazon Using AI To Help Stop Sale Of Counterfeit Products

CBS News (12/12, Quraishi, Corral, Beard) reports as holiday shopping “intensifies, consumers are being warned to stay vigilant against the rising menace of counterfeit products,” and as “these imitations are becoming increasingly difficult to identify – and potentially dangerous,” Amazon “has been using artificial intelligence and machine learning to root out sellers trying to peddle counterfeits on its platform.” CBS News explains Amazon three years ago “created an in-house counterfeit crime unit made up of former federal prosecutors, law enforcement and data scientists based around the world to go after counterfeit sellers,” and “says it’s closely tracking suspicious behavior online to protect customers.” Kebharu Smith, director of Amazon’s counterfeit crimes unit, “says Amazon is using AI tools to scan over 8 billion listings from sellers each day.”

 

Microsoft Looks To Nuclear To Meet Growing Power Demands

The Wall Street Journal (12/12, Hiller, Subscription Publication) reports Microsoft is turning to nuclear power to help meet its growing electricity needs as it ventures into artificial intelligence and supercomputing. While the US nuclear regulatory process is complex and expensive, Microsoft executives say that the company is experimenting with generative AI to see if they can streamline the approval process. Over the past six months, Microsoft employees have been training large language models with US nuclear regulator and licensing documents to ease the paperwork load.

 

OpenAI’s Nonprofit Parent Reports Worth Of Less Than $45,000

CNBC (12/12, Novet) reports, “OpenAI is valued by private investors at $86 billion, thanks in part to the popularity of ChatGPT,” but its nonprofit parent’s IRS filing indicates that the “latest official number” for 2022 “is the tiny sum of $44,485.” CNBC says, “For all its talk of openness, OpenAI’s financials remains a black box,” and adds, “OpenAI’s latest IRS filing adds to the confusion that surfaced last month, when the nonprofit’s board, which oversees the entire entity, abruptly fired CEO Sam Altman.” CNBC also says, “The chaos has called into question whether OpenAI can or should continue under the umbrella of a nonprofit.”

 

GAO Report Finds Federal Agencies Have Over 1,200 Potential Uses For AI

The Hill (12/12, Shapero) reports that a Government Accountability Office (GAO) report “released Tuesday found federal agencies have more than 1,200 potential uses for artificial intelligence (AI), with more than 200 already being employed.” The report states, “Given the rapid growth in capabilities and widespread adoption of AI, the federal government must manage its use of AI in a responsible way to minimize risk, achieve intended outcomes, and avoid unintended consequences. As a result, we performed this work under the authority of the Comptroller General to assist Congress with its continued oversight of AI.” About 69% of agencies’ AI use “cases – particular challenges or opportunities that can be solved with AI – centered on science and internal management, the GAO noted in its report.”

 

Survey Finds Half of High Schoolers Use AI For Schoolwork

K-12 Dive (12/12) reports nearly “half of students in grades 10-12 said they use artificial intelligence tools for school and non-school activities, according to a June survey released Monday by ACT, the nonprofit that administers the college readiness exam.” Students using AI “for school most often used the tools for language arts and social studies assignments.” Students not using AI “cited a lack of interest in the tools, distrust in the information provided, and that they did not know enough about AI.” The survey also “revealed that students with higher ACT composite scores were more likely to use AI tools than those with lower scores, revealing a need for schools to consider equitable access as AI technology evolves, the report said.” ACT CEO Janet Godwin said in a statement, “Students are already exploring how they can use AI, but there is real skepticism about AI’s ability to create work in which students can be confident.”

 

Students Push Schools To Provide More Balanced, Broad AI Curriculum

The New York Times (12/13, Singer) reports a growing number of teenage students “are asking their schools to go beyond Silicon Valley’s fears and fantasy narratives and provide broader A.I. learning experiences that are grounded firmly in the present, not in science fiction.” Many teenagers are saying that school districts need to find a balance between the two narratives in order to help them understand appropriate use of AI in classrooms. One ongoing complication with the movement is that schools traditionally do not allow students to influence their own curriculum to the degree that some students are calling for.

 

Teachers Grapple With Rise Of Sophisticated AI Chatbots

Education Week (12/13) discusses how teachers are continuing to face challenges around students using AI chatbots to help them write essays and complete homework. While earlier versions of the technology were more easily identifiable, updated software like ChatGPT and Google’s Gemini are “rapidly becoming too sophisticated for even veteran educators’ detective skills, educators and experts say.” Already, students are learning how to train a chatbot on their own specific writing styles and common mistakes, allowing them to more frequently use the software without having it be identifiable as written by AI. Some educators are arguing that this technological revolution will push teachers to be more engaged with students on the process of writing an essay, from brainstorming sessions through to the final product.

 

Southern Methodist Student Develops AI To Help With Studying

The Dallas Morning News (12/14) reports two summers ago, “Trevor Gicheru was taking a biology class at Southern Methodist University and found himself unable to catch everything he heard from his professor’s lectures.” He began “recording the lectures but quickly discovered it took him long hours to rewatch the videos, create flashcards and review study sets.” Then Gicheru, “a computer science major, came up with a solution to make life easier for himself.” He created an “app, Nurovant AI, to serve as an artificial study mate by creating quizzes, flashcards, summaries and other materials based on audio recordings.” Gicheru is “one of many North Texas entrepreneurs looking to leverage AI technology as the industry explodes.”

 

Pope Calls For Global Treaty On AI

The Washington Post (12/14, Faiola, Pitrelli) reports, “In a statement Thursday, Pope Francis called for a binding global treaty on artificial intelligence, lauding its potential benefits while presaging its raw potential for destruction. He warned of the pitfalls of placing in human hands a ‘vast array of options, including some that may pose a risk to our survival and endanger our common home.’” Pope Francis has “already met with senior executives at Microsoft and IBM to discuss the ethics of technological breakthroughs, and in his apostolic exhortation on the environment in October, he warned of artificial intelligence’s potential to become a ‘technocratic paradigm’ that ‘monstrously feeds upon itself.’”

 

Jeff Bezos Discusses AI In Wide-Ranging Interview

GeekWire (12/14, Bishop) reports Jeff Bezos gave a more-than-two-hour interview on the “Lex Fridman Podcast,” in which the Amazon founder talked “about his life, work, the future of humanity and what’s next for technology.” While the interview was “an extensive and wide-ranging discussion,” GeekWire highlights Bezos’ discussion of AI. Bezos said, “If you’re talking about generative AI, large language models, things like ChatGPT, and its soon successors, these are incredibly powerful technologies. To believe otherwise is to bury your head in the sand, soon to be even more powerful. It’s interesting to me that large language models in their current form are not inventions, they’re discoveries. ... Large language models are much more like discoveries. We’re constantly getting surprised by their capabilities. They’re not really engineered objects.”

        Insider (12/15) reports Bezos talked about “the opportunities and risks of artificial intelligence.” He said, “Even specialized AI could be very bad for humanity. Just regular machine learning models can make certain weapons of war, that could be incredibly destructive and very powerful.” However, “he is optimistic about the technology’s overall benefits despite its dangers.” Bezos said, “So the people who are overly concerned, in my view, overly, it is a valid debate. I think that they may be missing part of the equation, which is how helpful they could be in making sure we don’t destroy ourselves.”

 

New York City Students Chronicle How AI Is Transforming Their Learning Experience

Chalkbeat (12/14) reports that “just over a year after the tech group OpenAI introduced ChatGPT to the public, some students at New York City high schools report widespread use of AI-powered chatbots among their peers.” Some use “the tools as tutors to help break down difficult concepts and work through challenging assignments,” while others “have looked to them as a shortcut for easy answers.” Chalkbeat discusses how four high school students “say AI-powered tools have changed the way students engage with their schoolwork.” For example, Kangxi Yang, a junior at Staten Island Technical High School, said her school had “warned students against relying on ChatGPT and other AI tools to complete their writing assignments.” However, her teacher “showed them how to use the chatbot to debug their code, allowing them to quickly diagnose and correct their errors.”

dtau...@gmail.com

unread,
Dec 25, 2023, 8:55:54 AM12/25/23
to ai-b...@googlegroups.com

DeepMind AI with Built-In Fact-Checker Makes Mathematical Discoveries
New Scientist
Matthew Sparkes
December 14, 2023


Google DeepMind researchers developed an AI system that can produce new scientific knowledge and ideas with the help of a fact-checker that filters out useless outputs. Their FunSearch system, built on Google's PaLM 2 large language model, writes computer code aimed at solving mathematics and computing problems, while an automated "evaluator" scores each candidate program. Although the underlying AI still can generate inaccurate results, the evaluator filters out everything but reliable solutions.
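
In outline, FunSearch pairs a generator with a verifier: the language model proposes candidate programs, the evaluator scores each one, and only candidates that pass are kept and fed back as examples for the next round. The Python sketch below illustrates that generate-and-evaluate loop in miniature; it is not DeepMind's code, and propose_candidate is a hypothetical stand-in for the LLM call.

import random
from typing import List, Optional, Tuple

def evaluate(program: str) -> Optional[float]:
    """Run a candidate program and return its score, or None if it fails.
    This plays the role of FunSearch's built-in 'fact-checker'."""
    try:
        scope: dict = {}
        exec(program, scope)           # the candidate is expected to define solve()
        return float(scope["solve"]())
    except Exception:
        return None                    # unreliable output: filtered out

def propose_candidate(examples: List[str]) -> str:
    """Hypothetical stand-in for the LLM: FunSearch would prompt PaLM 2 with the
    best programs found so far; here we just emit a random toy program."""
    return f"def solve():\n    return {random.randint(1, 10)}"

def search(iterations: int = 100) -> Tuple[str, float]:
    best_prog, best_score = "", float("-inf")
    kept: List[str] = []
    for _ in range(iterations):
        candidate = propose_candidate(kept)
        score = evaluate(candidate)
        if score is None:
            continue                   # discard candidates the evaluator rejects
        if score > best_score:
            best_prog, best_score = candidate, score
            kept.append(candidate)
    return best_prog, best_score

if __name__ == "__main__":
    program, score = search()
    print("best score:", score)
    print(program)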

Full Article

*May Require Paid Registration



Synaptic Transistor Mimics Human Intelligence
Interesting Engineering
Sejal Sharma
December 20, 2023


Scientists from several U.S. institutions created a transistor that can think and remember things the way the human brain does. The researchers stacked and twisted ultra-thin materials to form moiré patterns, giving the transistor special electronic properties. The resulting device, called a synaptic transistor, was trained to recognize patterns and could still identify them when shown incomplete versions, demonstrating associative learning.

Full Article

 

 

Splitting a Large AI Across Several Devices Lets You Run It in Private
New Scientist
Jeremy Hsu
December 15, 2023


An AI system based on large language models (LLMs) developed by University of California, Irvine researchers can be used locally via smartphone, eliminating reliance on a cloud service's datacenters and permitting LLM queries without having to share sensitive personal information. The LinguaLinked system splits the LLM's computations among several smartphones based on the phones' available memory and network connectivity. The researchers used the system to run BLOOM LLMs on four commercial phones, averaging 2 seconds of processing per token on a small model with 1.1 billion parameters and 4 seconds per token on a larger model with 3 billion parameters.
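
At its core, this is a partitioning problem: assign pieces of the model to phones so that each device holds only what its memory allows, then pass intermediate activations between devices over the local network. The Python sketch below shows one naive way to make such a split; it is an illustration under assumed layer sizes and memory budgets, not the LinguaLinked algorithm.

from typing import Dict, List

def partition_layers(layer_sizes_mb: List[float],
                     device_free_mb: Dict[str, float]) -> Dict[str, List[int]]:
    """Greedily place layers in order, moving to the next phone when the current
    one's memory budget is exhausted; the last phone absorbs any overflow."""
    devices = list(device_free_mb.items())
    assignment: Dict[str, List[int]] = {name: [] for name, _ in devices}
    d = 0
    remaining = devices[d][1]
    for idx, size in enumerate(layer_sizes_mb):
        while size > remaining and d + 1 < len(devices):
            d += 1
            remaining = devices[d][1]
        assignment[devices[d][0]].append(idx)
        remaining -= size
    return assignment

if __name__ == "__main__":
    # Hypothetical figures: a 1.1B-parameter model split into 24 blocks of ~180 MB each
    layers = [180.0] * 24
    phones = {"phone_a": 2048.0, "phone_b": 1536.0, "phone_c": 1024.0, "phone_d": 1024.0}
    print(partition_layers(layers, phones))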

Full Article

*May Require Paid Registration

 

 

CyberRunner Outmaneuvers Humans in Maze Run Breakthrough
Bloomberg
December 19, 2023


The CyberRunner AI robot created by researchers at ETH Zurich in Switzerland surpassed humans at the game Labyrinth. According to its creators, CyberRunner learned to navigate a small metal ball through a maze by tilting the maze's surface to steer the ball around the holes in the board, mastering the toy in just six hours. During the process, CyberRunner found ways to “cheat” by skipping parts of the maze, requiring the researchers to explicitly instruct it not to take shortcuts.

Full Article

 

 

Hidden Pattern in Children's Eyes Can Reveal Autism
ScienceAlert
David Nield
December 18, 2023


Researchers at South Korea's Yonsei University College of Medicine demonstrated the use of AI to screen for and determine the severity of autism in children and teens based on images of their retinas. The researchers said structural retinal changes are present in people with Autism Spectrum Disorder. After training the deep learning model on images of subjects with and without autism, the AI analyzed 958 retinas of children and teens, half with autism diagnoses. Although the AI was 100% accurate in identifying whether or not the subject had autism, it was only 48% to 66% accurate in predicting symptom severity.

Full Article

 

 

Ultrasound Spots Battery Defects
New Scientist
Jeremy Hsu
December 14, 2023


Royce Copley at the U.K.'s University of Sheffield and colleagues used inexpensive ultrasound sensors and a genetic algorithm to detect defects or damage in lithium-ion batteries. The researchers checked the algorithm's accuracy against X-ray scans of the same batteries and found it reliable enough for manufacturers to use in battery inspection.

Full Article

 

 

Chinese Grievers Turn to AI to Create Avatars of the Departed
South China Morning Post
December 14, 2023


AI is being used by Chinese firms to create lifelike avatars of the deceased based on as little as 30 seconds of audiovisual material. The idea is to provide comfort to grieving families, but there are concerns these "ghost bots" could harm the people looking to them for bereavement support. Tal Morse at the U.K.'s University of Bath said, "A key question here is ... how 'loyal' are the ghost bots to the personality they were designed to mimic?"

Full Article

 

 

Researchers to Study Computer Code for Clues to Hackersʼ Identities
WSJ Pro Cybersecurity
Catherine Stupp
December 15, 2023


The Intelligence Advanced Research Projects Activity (IARPA), the lead research agency for the U.S. intelligence community, is accepting proposals from researchers on technologies that could speed investigations to identify perpetrators of cyberattacks. Tools developed as part of the planned 30-month research project will not replace human analysts, but the analysis of code used in cyberattacks by artificial intelligence will make investigations more efficient, said IARPA's Kristopher Reese.

Full Article

*May Require Paid Registration

 

 

Deepfakes Disrupting Bangladesh's Election
Financial Times
Benjamin Parkin; Jyotsna Singh
December 12, 2023


The use of AI-generated deepfakes and disinformation has proven problematic ahead of Bangladesh's elections in January. In one video posted on X in September by online news outlet BD Politico, an avatar news anchor for “World News” accused U.S. diplomats of interfering in Bangladeshi elections and blamed them for political violence. In response to issues with deepfakes, Google and Meta announced policies to require campaigns to disclose whether political advertisements have been digitally altered.

Full Article

*May Require Paid Registration

 

 

ChatGPT Used to Create Faster, More Reliable Software
University of Stirling (U.K.)
December 11, 2023


Researchers at the U.K.'s University of Stirling leveraged ChatGPT to improve the speed and reliability of a software program. The researchers prompted the chatbot to automatically rewrite the program's code in search of faster versions. Stirling's Sandy Brownlee said, "We found that, on the open source project we used as a case study, a LLM [large language model] was able to produce faster versions of the program around 15% of the time, which is half as good again as the previous approach."
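
One simple way to reproduce the general idea, though not the Stirling team's tool, is to ask a chat model for a faster rewrite of a function and keep the rewrite only if it still runs and benchmarks faster than the original. The Python sketch below does this with the OpenAI chat-completions client; the model name, prompt wording, and the toy fib function are assumptions for illustration.

import timeit
from openai import OpenAI

ORIGINAL = """
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
"""

def strip_fences(text: str) -> str:
    """Models often wrap replies in markdown fences; remove them if present."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]
    if lines and lines[-1].strip() == "```":
        lines = lines[:-1]
    return "\n".join(lines)

def ask_for_faster_version(source: str) -> str:
    """Ask a chat model for a drop-in replacement that is faster but behaves the same."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Rewrite the Python function to be faster while keeping the "
                        "same name and behaviour. Reply with code only."},
            {"role": "user", "content": source},
        ],
    )
    return strip_fences(resp.choices[0].message.content or "")

def bench(source: str, call: str = "fib(20)", repeats: int = 200) -> float:
    """Define the function from source code, then time repeated calls to it."""
    scope: dict = {}
    exec(source, scope)
    return timeit.timeit(call, globals=scope, number=repeats)

if __name__ == "__main__":
    candidate = ask_for_faster_version(ORIGINAL)
    try:
        keep_candidate = bench(candidate) < bench(ORIGINAL)
    except Exception:
        keep_candidate = False  # reject rewrites that crash or fail to define fib
    print("keep LLM version" if keep_candidate else "keep original")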

Full Article

 

AI Causing Dramatic Increase In Online Misinformation

The Washington Post (12/17, A1) reports artificial intelligence “is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminate false information about elections, wars and natural disasters.” Since May, “websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, a nonprofit that tracks misinformation.” The heightened churn “of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts.” Misinformation experts “said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.” Generative artificial intelligence “has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.”

        WPost: AI Requires Stronger Regulation Than Copyright Law. In an editorial, The Washington Post (12/17) says ChatGPT “and other large language models like it are more than handy tools,” as they “are also creative forces, writing dramas, essays, lyrics, jokes and practically any other art form that once required a human brain – and promising to upend the lives of people who write, draw, sing or, yes, conduct journalism.” The technology “demands new scrutiny of how society rewards artistic effort.” Copyright law “governs how that is done now.” The Post says “a broader rethinking of copyright, perhaps inspired by what some AI companies are already doing, could ensure that human creators get some recompense when AI consumes their work, processes it and produces new material based on it in a manner current law doesn’t contemplate. But such a shift shouldn’t be so punishing that the AI industry has no room to grow.”

 

AI’s Energy Demand Expected To Spike

The Wall Street Journal (12/15, Mims, Subscription Publication) reported that while data centers’ power consumption has remained at around 1% of global electricity production since 2010, the widespread adoption of AI may change this. Alex de Vries, a researcher at the Vrije Universiteit Amsterdam School of Business and Economics, has estimated that AI’s electricity demand could reach 15 GW. The global AI industry is expected to continually escalate its demand for electricity as developers continue to compete.

 

OpenAI Reforms Trust And Safety Efforts, Grants Board Veto Power Over New Releases

The Information (12/18, Woo, Palazzolo, Subscription Publication) reports that since Sam Altman’s reinstatement as OpenAI CEO, the company “appears to have quietly abandoned a monthslong effort to find a new leader for its trust and safety team, whose mandate was to prevent” the company’s products from creating harmful content such as disinformation and hate speech. Instead, The Verge (12/18, David) reports, OpenAI “split its trust and safety team, creating three separate groups taking on AI risk” dubbed the Safety Systems, Superalignment, and Preparedness teams.

        The Washington Post (12/18, De Vynck) reports the Preparedness team “will hire AI researchers, computer scientists, national security experts and policy professionals to monitor its tech, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous.” The Safety Systems team “works on existing problems like infusing racist biases into AI,” while the Superalignment team “researches how to make sure AI doesn’t harm humans in an imagined future where the tech has outstripped human intelligence completely.”

        TechCrunch (12/18, Coldewey) reports, “A new ‘safety advisory group’ will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power — of course, whether it will actually use it is another question entirely.” Bloomberg (12/18, Metz, Subscription Publication) reports, “OpenAI said its board can choose to hold back the release of an AI model even if the company’s leadership has deemed it safe, another sign of the artificial intelligence startup empowering its directors to bolster safeguards for developing the cutting-edge technology.”

 

AI Expert Tries To Have ChatGPT Write An Essay Like A 4th Grader

Education Week (12/18) reports recent upgrades make it possible to train ChatGPT 3.5, “the free version released last spring,” on a particular writing style, said Stacy Hawthorne, chief academic officer of the nonprofit Learn21. As an experiment, “Hawthorne gave the tool three essays written by real 4th graders that had been posted to the website of Utah’s state education office.” Then she gave ChatGPT an assignment – write an opinion essay answering this question: “Do you think we should save old things like paintings, or throw them away?” The initial version “that the tool came up with hit the students’ voices correctly but had too few spelling and grammatical errors to be as convincing as a 4th grader’s work.” Overall, “AI might not be quite there yet, but it’s clearly on its way to being able to impersonate a 4th grader.”

 

Survey Shows Workers Who Use AI Regularly Worry About Job Security

CNBC (12/19, Caminiti) reports that “despite workers’ positive feedback about” artificial intelligence (AI), “42% of employees said they’re concerned about the technology’s impact on their” job security, as “44% said they are ‘very or somewhat concerned’” about job loss. In a recent CNBC SurveyMonkey Workforce Survey, “employees...who use AI at work today say they are more likely to view it as a positive, with 72% reporting that it has made them more productive.” Regardless, “the survey shows that these employees have some significant concerns about whether it will affect their jobs” and “the more employees use AI at work, the more concerned they become.” The data show that 60% “of those using AI regularly said they’re worried about its impact on their job.”

 

Administration Advances Plan To Write Standards For AI

Reuters (12/19) reports the Administration “said on Tuesday it was taking the first step toward writing key standards and guidance for the safe deployment of generative artificial intelligence and how to test and safeguard systems.” The Commerce Department’s National Institute of Standards and Technology “said it was seeking public input by Feb. 2 for conducting key testing crucial to ensuring the safety of AI systems.” Commerce Secretary Raimondo “said the effort was prompted by President Joe Biden’s October executive order on AI and aimed at developing ‘industry standards around AI safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology.’”

 

Broward County Uses AI To Entice Students To Computer Science

Education Week (12/19) reports Florida’s Broward County school district “joined others around the country in putting an AI twist on Hour of Code, an annual celebration of computer science, launched by Code.org, a nonprofit dedicated to expanding access to computer science education.” Broward was “among the first school systems to embrace Hour of Code, which the district makes a monthlong event.” Broward’s work “this year through Hour of Code – which took place at some 100 of the district’s roughly 300 schools – featured low-lift, high-interest AI activities.” EdWeek says that in Broward, “AI is incorporated primarily into one computer science course, though a handful of middle school teachers are also working on integrating it into their subjects.”

 

Wall Street Seen As Embracing Generative AI

The Information (12/20, Subscription Publication) reports, “If you want to know the extent to which businesses are embracing generative artificial intelligence, look no further than Wall Street. Quants and hedge funds are head over heels for large language models and the researchers who make them.” Big banks are “known to be among the stingiest software buyers” but “also are increasingly embracing the technology. Citibank, JPMorgan Chase and Goldman Sachs have each ramped up their usage of Microsoft’s AI products during the second half of the year, according to people with knowledge of their purchases,” including “OpenAI-powered Copilots...and specialized servers the banks rent to develop their own customized AI models.”

 

Civil Rights Group Calls For Protections Against Discriminatory Use Of AI

The Washington Post (12/20, Lima) reports the Lawyers’ Committee for Civil Rights Under Law “is laying down on Wednesday a major marker in the debate over artificial intelligence regulation, calling for expanded assessments of the tools as well as new protections against the discriminatory use of the technology.” The civil rights group’s “30-page proposal builds on a growing body of federal bills looking to broaden digital protections for marginalized and underrepresented groups.” According to a Bloomberg report Monday, “Google provided funding for the launch of a new AI policy center at the Leadership Conference on Civil and Human Rights.” However, the Post reports, “Lawyers’ Committee spokesman Lacy Crawford Jr. said that Amazon and Google have given the group ‘funding to support general operating,’ specifically their 60th anniversary campaign. But its Digital Justice Initiative, which handles its tech policy portfolio, ‘does not receive funding from any of the major tech companies,’ Crawford added.”

 

Young People Discuss Perspectives On Future Impacts Of Artificial Intelligence

The Washington Post (12/20, De Vynck, Tiku, Verma) shares comments from younger people about their views on artificial intelligence and its impact on their future. Among the perspectives are that “the advent of generative AI tools like OpenAI’s Dall-E that can create images based on simple prompts had made the tech suddenly relevant to” them. Also, some express concern that “the AI boom is fueled by high powered computing chips but those raw materials consume large amounts of electricity.” Meanwhile, others “feel certain that AI would eventually touch everyone’s lives in a way that is more welcoming than code.”

 

Teachers Share Benefits Of Implementing AI In Classroom Lessons

Education Week (12/20, Klein) reports the “most common mental picture of an artificial intelligence lesson might be this: High schoolers in a computer science class cluster around pricey robotics equipment and laptops, solving complex problems with the help of an expert teacher.” While there’s “nothing wrong with that scenario, it doesn’t have to look that way, educators and experts say.” Teaching AI can “start as early as kindergarten.” Educators from “around the world shared how they have been implementing AI in their classes on a webinar hosted earlier this month by the International Society for Technology in Education, a nonprofit that helps educators make the most of technology.” ISTE has offered “professional development allowing educators to explore AI for six years, training some 2,000 educators.” The nonprofit “also offers sample lessons for students at every grade level that can be applied across a range of subjects.”

 

Bill Would Promote AI Literacy

Education Week (12/21, Klein) reports that Reps. Larry Bucshon (R-IN) and Lisa Blunt Rochester (D-DE) “this month introduced the ‘Artificial Intelligence Literacy Act’” which “would shine a spotlight on the importance of teaching AI literacy,” and “make it clear that K-12 schools, colleges, nonprofits and libraries can use grants available under an existing program—the $1.25 billion Digital Equity Competitive Grant program – to support AI literacy.”
