
Dr. T's AI brief


Daniel Tauritz

Apr 15, 2023, 6:39:11 PM
to ai-b...@googlegroups.com

OpenAI Will Pay People to Report Vulnerabilities in ChatGPT
Bloomberg
Rachel Metz
April 11, 2023


OpenAI announced a new bug bounty program that will offer people $200 to $20,000 to find and report vulnerabilities in the ChatGPT chatbot. The artificial intelligence (AI) company is opening the program in association with bug bounty platform Bugcrowd. OpenAI said it established the program partly because it thinks "transparency and collaboration" are critical to uncovering flaws in its technology, while OpenAI head of security Matthew Knight blogged that the effort "is an essential part of our commitment to developing safe and advanced AI." The Bugcrowd page for the bounty program indicates certain model safety issues are disqualified from rewards, including jailbreak prompts, queries that coax the model into writing malicious code, and prompts that cause the model to produce harmful or offensive responses to users.

Full Article

 

 

AI-Descartes: A Scientific Renaissance
SciTechDaily
April 12, 2023

AI-Descartes, an "AI scientist" developed by researchers at IBM Research, Samsung AI, and the University of Maryland, Baltimore County, used logical reasoning and symbolic regression to reproduce Nobel Prize-winning work by U.S. chemist Irving Langmuir on the behavior of gas molecules adhering to a solid surface. It also "rediscovered" Kepler’s third law of planetary motion and recreated Einstein's relativistic time-dilation law. In addition to utilizing symbolic regression, AI-Descartes uses logical reasoning to determine which candidate equations fit the data best. Said Samsung AI's Cristina Cornelio, "In our work, we are merging a first-principles approach, which has been used by scientists for centuries to derive new formulas from existing background theories, with a data-driven approach that is more common in the machine learning era."
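To make the symbolic-regression half of this concrete, here is a minimal sketch under invented assumptions (the candidate formulas, synthetic data, and brute-force fit are illustrative, not AI-Descartes' actual code): it scores a few closed-form guesses against noisy Langmuir-style adsorption data and keeps the best fit.

    # Minimal symbolic-regression sketch: score candidate formulas against data.
    import numpy as np

    p = np.linspace(0.1, 5.0, 50)                       # gas pressure (arbitrary units)
    theta = 2.0 * p / (1.0 + 2.0 * p)                   # synthetic Langmuir-like coverage
    theta += np.random.default_rng(0).normal(0, 0.01, p.size)

    candidates = {
        "linear   a*p":         lambda p, a: a * p,
        "sqrt     a*sqrt(p)":   lambda p, a: a * np.sqrt(p),
        "langmuir a*p/(1+a*p)": lambda p, a: a * p / (1.0 + a * p),
    }

    def best_fit_error(f):
        # brute-force search over the single parameter a; real systems fit symbolically
        return min(np.mean((f(p, a) - theta) ** 2) for a in np.linspace(0.1, 5.0, 200))

    for name, f in candidates.items():
        print(name, "MSE =", round(best_fit_error(f), 6))
    # The Langmuir form wins by a wide margin; a logical-reasoning module would also
    # check which surviving candidate is derivable from background theory.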
 

Full Article

 

 

Can Intelligence Be Separated from the Body?
The New York Times
Oliver Whang
April 11, 2023


The growing use of artificial intelligence (AI) has raised questions about the relationship between mind and body. Like humans, AI chatbots can express emotions, but some say AI would have to be paired with a body that can perceive, react to, and feel its environment for it to achieve true intelligence. Researchers at the California startup Embodied have developed Moxie, a robot with a toddler-sized body that uses a large language model to analyze conversations and generate a verbal and physical response. Sensors also allow Moxie to observe, react, and mimic a person's body language. Meanwhile, Alphabet researchers have developed PaLM-E, an embodied language model that lets robots perform basic tasks without special programming. However, University of Vermont's Joshua Bongard said, "Slapping a body onto a brain, that's not embodied intelligence. It has to push against the world and observe the world pushing back."

Full Article

*May Require Paid Registration

 

 

Meta AI Model Can Identify Items Within Images
Reuters
Katie Paul
April 6, 2023


Meta last week released the Segment Anything Model (SAM), an artificial intelligence (AI) model capable of identifying individual objects in images and videos using a click or text prompt, even if those items were not included in its training. The company also published what it called the largest-ever dataset of image annotations. Meta already uses technology similar to SAM to tag photos, moderate content, and select recommended posts for Facebook and Instagram users. CEO Mark Zuckerberg said a priority for Meta this year is to add generative AI "creative aids" to the company's apps.

Full Article

 

 

Google Says Its AI Supercomputer with TPU v4 Chips Outperforms Nvidia’s A100 in Speed
The Tech Portal (India)
Soumyadeep Sarkar
April 5, 2023


Google claims the supercomputers used for training its artificial intelligence (AI) models are faster and more energy-efficient than those employed by multinational technology firm Nvidia. Google researchers detailed how they created a supercomputer from more than 4,000 fourth-generation Tensor Processing Units (TPUs), as well as custom optical switches to link individual machines. The AI models are segmented across thousands of chips, which must collaboratively train the models for weeks or more. Google's Norm Jouppi and David Patterson explained, "Circuit switching makes it easy to route around failed components. This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model." Google says its new supercomputer is up to 1.7 times faster and 1.9 times "greener" than a system based on Nvidia's A100 chip.

Full Article

 

 

Evidence That Quantum Machine Learning Outperforms Classical Computing
University of British Columbia (Canada)
April 5, 2023


Researchers at Canada's University of British Columbia Blusson Quantum Matter Institute (Blusson QMI) have demonstrated that two of the most popular quantum machine learning classification models achieve "quantum advantage," outperforming their classical counterparts. The models, Variational Quantum Classifiers (quantum neural networks) and the Quantum Kernel Support Vector Machine, outperformed classical computers in solving a complex class of mathematical problems. Blusson QMI's Jonas Jäger said, "The mathematical problem that we've solved using these models is quite abstract and doesn't have many practical applications. But, because it presents such special properties under the complexity theory, it can be used by others as a benchmark to test how different quantum machine learning models perform."

Full Article

 

 

Biden Says Tech Companies Must Ensure AI Products Are Safe
Associated Press
Zeke Miller
April 5, 2023


U.S. President Biden said Tuesday that technology companies must guarantee their artificial intelligence (AI) products' safety prior to public release. Biden told a meeting of his President’s Council of Advisors on Science and Technology that AI "has to address the potential risks to our society, to our economy, to our national security." Rebecca Finlay of the industry-supported Partnership on AI said Biden's warning reflects the advent of AI tools that can produce manipulative material and authentic-seeming simulations known as deepfakes. The White House said Biden used the meeting to "discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards," while urging Congress to approve laws to protect children and halt data collection by technology companies.

Full Article

 

 

Hello AInstein! Robot with ChatGPT Shakes Up Cyprus Classrooms
Reuters
Yiannis Kourtoglou
April 4, 2023


A prototype robot named AInstein created by high school students and teachers in Cyprus enhances classroom instruction using OpenAI's ChatGPT artificial intelligence (AI) technology. AInstein can tell jokes, attempt to speak Greek, and offer guidance on teaching Einstein's theory of relativity, while its screen mimics a face with blinks and frowns. Said project leader Elpidoforos Anastasiou, "Students can ask him questions, he can answer back, and he can even facilitate teachers to deliver a lesson more effectively." Teachers said incorporating the robot into education is the project's goal, while project members said the experience with AInstein demonstrates that AI should not be feared.

Full Article

 

 

The Complex Math of Counterfactuals Could Help Spotify Pick Your Next Favorite Song
MIT Technology Review
Will Douglas Heaven
April 4, 2023


Researchers at music-streaming company Spotify have developed a machine learning model that aims to improve automated decision-making. The model is based on counterfactual analysis, complex math used to determine the causes of past events and predict the effects of future events. The researchers used the theoretical framework of twin networks, which views counterfactuals as a pair of probabilistic models (one representing the real world, the other representing a fictional world), as a blueprint for a neural network. They trained the neural network to predict how events would occur in the fictional world, resulting in a computer program that can perform counterfactual reasoning.
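For intuition, here is a toy sketch of the twin-world idea under stated assumptions: both worlds share the same latent noise, and only the intervened variable differs. The structural equation and numbers are illustrative, not Spotify's model.

    # Toy counterfactual with a shared-noise "twin": same users, different treatment.
    import numpy as np

    noise = np.random.default_rng(1).normal(size=10_000)   # latent factors (e.g., taste)

    def outcome(treatment, noise):
        # structural equation: listening time = effect of recommendation + latent taste
        return 2.0 * treatment + noise

    factual = outcome(1.0, noise)          # real world: the song was recommended
    counterfactual = outcome(0.0, noise)   # fictional world: same users, no recommendation

    print("estimated effect:", (factual - counterfactual).mean())   # 2.0 by construction
    # A twin network bakes both branches into one neural model with tied weights, so a
    # counterfactual query needs only a single forward pass.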
 

Full Article

 

 

Using ML for Robust Fluid Dynamics Simulations
Imperial College London (U.K.)
Gemma Ralton
April 3, 2023


A new workflow developed by researchers at the U.K.'s Imperial College London leverages advanced machine learning techniques to generate more accurate predictions from computational fluid dynamics simulations. The researchers used adversarial training to develop surrogate models that are more accurate and efficient than those typically created with traditional methods, even with limited training data and at lower computational cost. Surrogate models offer simplified versions of computationally expensive models, but still generate accurate predictions or simulations of the behavior of fluids. The researchers used real-world scenarios of air pollution flows to demonstrate the model's effectiveness. Imperial College London's César Quilodrán Casas said the new workflow can "assist engineers and modelers towards creating cheap and accurate model surrogates of expensive computer fluid dynamics simulations, not necessarily just for air pollution."
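To ground the term "surrogate model," the sketch below fits a cheap polynomial stand-in for an expensive simulator; the adversarial training the Imperial team used is omitted here, and the toy "simulator" is an assumption for illustration.

    # Fit a cheap surrogate to a handful of expensive simulation runs.
    import numpy as np

    def expensive_simulation(x):
        # stand-in for a costly CFD run: some smooth nonlinear response
        return np.sin(3 * x) + 0.3 * x ** 2

    x_train = np.linspace(0, 2, 15)                   # only a few affordable runs
    y_train = expensive_simulation(x_train)

    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))   # the cheap stand-in

    x_test = np.linspace(0, 2, 200)
    err = np.max(np.abs(surrogate(x_test) - expensive_simulation(x_test)))
    print(f"max surrogate error on [0, 2]: {err:.4f}")  # cheap to evaluate, small error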

Full Article

 

 

AI is Teaching Us New, Surprising Things About the Human Mind
The Wall Street Journal
Christopher Mims
April 1, 2023


Scientists are gaining new insights into the human mind through artificial intelligence (AI), including the mechanism of communication between neurons and the roots of cognition. The University of California, Berkeley's Celeste Kidd and colleagues used a clustering model to find that people's opinions tend to diverge about even the most fundamental properties of things. Researchers led by Princeton University's Tatiana Engel used networks of artificial neurons to interpret the electrical impulses of hundreds of neurons recorded simultaneously in animals' brains, training the artificial networks to perform the same tasks. These networks self-organize into reasonable approximations of those in animals, indicating that dynamic electrical activity forms the substance of thought, according to Engel.

Full Article

*May Require Paid Registration

 

 

Turing Winner Bengio Calls for 'Pause' on Technology He Helped Create
Financial Post (Canada)
Marisa Coulton
March 30, 2023


Yoshua Bengio, considered a godfather of Canadian artificial intelligence (AI), believes the technology he helped create should be paused before it spins out of control. Last week, the 2018 ACM Turing Award recipient told reporters society is not ready to contend with AI's potentially negative uses, and "better guardrails" should be developed first. He and over 1,100 signatories of an open letter published by the Future of Life Institute warned of AI inundating information channels "with propaganda and untruth," replacing human jobs via automation, and creating "nonhuman minds" that might render humanity irrelevant. The signatories expressed concern OpenAI's ChatGPT chatbot could trigger a race to develop more powerful AIs that not even their creators can comprehend or control.

Full Article

 

 

$335,000 Pay for 'AI Whisperer' Jobs Appears in Red-Hot Market
Bloomberg
Conrad Quilty-Harper
March 29, 2023


Amid the rise of artificial intelligence (AI) technology, companies are hiring "prompt engineers" tasked with getting better results out of their AIs and helping train their workforces to make use of the technology. Some of these positions can pay up to $335,000 annually and do not require a computer engineering degree. Albert Phelps at U.K. consultancy Mudano explained, "It's like an AI whisperer. You'll often find prompt engineers come from a history, philosophy, or English language background, because it's wordplay. You're trying to distill the essence or meaning of something into a limited number of words." Recruiters say those with Ph.D.s in machine learning or ethics, or who have founded AI firms, generally are offered the best-paying roles.

Full Article

*May Require Paid Registration

 

 

Method for Designing Neural Networks Optimally Suited for Certain Tasks
MIT News
Adam Zewe
March 30, 2023


Massachusetts Institute of Technology (MIT) researchers have identified optimal versions of the building blocks known as activation functions, which can be used to build neural networks that perform better on any dataset, even as the networks grow significantly in size. Activation functions enable neural networks to learn complex patterns in the input data by applying a transformation to the output of one layer before the data is sent to the next layer. In an analysis of an infinitely deep and wide neural network, the researchers found that the only method leading to optimal performance is classifying a new input based on a weighted average of all the training data points similar to it. They then identified a set of activation functions that always use this optimal classification method.
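For intuition, that weighted-average rule is essentially a kernel classifier; the hedged sketch below uses a Gaussian similarity weighting and toy data, which are illustrative assumptions rather than the paper's exact construction.

    # Classify a new point by a similarity-weighted average of training labels.
    import numpy as np

    def kernel_classify(x, X_train, y_train, bandwidth=1.0):
        dists = np.linalg.norm(X_train - x, axis=1)             # distance to each example
        weights = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))  # closer => more weight
        return float(weights @ y_train / weights.sum())         # > 0.5 suggests class 1

    X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y_train = np.array([0, 0, 1, 1])
    print(kernel_classify(np.array([0.05, 0.10]), X_train, y_train))  # near 0
    print(kernel_classify(np.array([0.95, 1.00]), X_train, y_train))  # near 1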

Full Article

 

 

AI Could Set Bar for Designing Hurricane-Resistant Buildings
NIST News
March 29, 2023


U.S. National Institute of Standards and Technology (NIST) researchers have created a new digital hurricane modeling technique that more accurately simulates storm trajectory and wind speeds. The researchers simulated the storms' inner workings to develop the latest maps; NIST's Adam Pintar said the team trained the model to mimic actual hurricane data with machine learning. The training data came from the National Hurricane Center's Atlantic Hurricane Database (HURDAT2), which encompasses information about more than 1,500 hurricanes going back more than a century. The researchers used the model to simulate sets of 100 years' worth of hypothetical storms in just seconds, which exhibited significant overlap with the HURDAT2 storms' behavior. The team suggests this method can help to improve guidelines for designing hurricane-resistant buildings.

Full Article

 

Lawsuits Emerging As Many People In Creative Industries Grow Concerned About AI Co-Opting Copyrighted Work

CNBC (4/3, Sheng) reports that “as companies including Microsoft, Alphabet and OpenAI launch generative AI to the public, many in creative industries such as photography, art, writing and music are alarmed at how copyrighted work can be co-opted or used by AI.” A number of “lawsuits are already in the works.” For example, “Getty Images, the photo licensing company,” filed a lawsuit against Stability AI, maker of the Stable Diffusion tool that can create photorealistic images from text, alleging that the company copied 12 million images without permission or compensation “to benefit Stability AI’s commercial interests and to the detriment of the content creators.” According to CNBC, “legal experts say the lawsuits are just getting started, and in a rush to get AI-enabled products and services out to the public, tech companies have shown some ignorance of, or disregard for data protection laws.”

        OpenAI CEO Compared Company’s AI Ambitions To Manhattan Project. The New York Post (4/3, Barrabi) reports “OpenAI CEO Sam Altman once compared his firm’s controversial artificial intelligence ambitions to the Manhattan Project – the World War II-era US program to develop the world’s first nuclear weapon.” Altman, whose company “is behind the development of ChatGPT, reportedly invoked the Manhattan Project and the words of its leader, physicist Robert Oppenheimer, while discussing the positive and negative effects of AI technology during a 2019 dinner meeting with the New York Times.” The CEO “said the historic effort to build the atomic bomb was a ‘project on the scale of OpenAI – the level of ambition we aspire to,’ the New York Times reported.”

Adopting ChatGPT In Colleges May Influence Educators’ Jobs, Research Suggests

CNBC (4/2, Chun) reports that as the adoption of ChatGPT disrupts education, recent research from professors at the University of Pennsylvania’s Wharton School, Princeton, and New York University “suggests that educators should be just as worried about their own jobs.” In an analysis of professions “most exposed” to the “latest advances in large language models like ChatGPT, eight of the top 10 are teaching positions.” Post-secondary teachers in “English language and literature, foreign language, and history topped the list among educators.” However, being among the jobs most “exposed to AI” does not “necessarily mean the human positions will be replaced.” Co-author Manav Raj said, “ChatGPT can be used to help professors generate syllabi or to recommend readings that are relevant to a given topic. ChatGPT can even help educators translate some of those lessons or takeaways in simpler language.”

Fowler: New ChatGPT Writing Detectors Can Inaccurately Flag Students’ Assignments

In his column for The Washington Post (4/1), Geoffrey A. Fowler wrote, “After months of sounding the alarm about students using AI apps that can churn out essays and assignments, teachers are getting AI detection technology of their own.” He added that “on April 4, Turnitin is activating the software I tested for some 10,700 institutions including the University of California, assigning ‘generated by AI’ scores and sentence-by-sentence analysis to student work.” It joins a “handful of other free detectors already online,” but detectors “are being introduced before they’ve been widely vetted, yet AI tech is moving so fast, any tool is likely already out of date.” To see what’s at stake, Fowler “asked Turnitin for early access to its software.” Five high school students “volunteered to help me test it by creating 16 samples of real, AI-fabricated and mixed-source essays to run past Turnitin’s detector.” As a result, Turnitin “accurately identified six of the 16 – but failed on three.”

Plagiarism-Detection Service Launches Tool To Detect AI-Generated Language In Assignments

The Chronicle of Higher Education (4/3, Surovell) reports “the popular plagiarism-detection service Turnitin announced on Monday that its products will now detect AI-generated language in assignments.” Turnitin’s software “scans submissions and compares them to a database of past student essays, publications, and materials found online, and then generates a ‘similarity report’ assessing whether a student inappropriately copied other sources.” The company “says the new feature will allow instructors to identify the use of tools like ChatGPT with ‘98-percent confidence,’” and “there is no option to turn off the feature, a Turnitin spokesperson told The Chronicle.”

        Inside Higher Ed (4/3, Knox) reports Turnitin on Tuesday will release a “preview” of its “newly developed AI-detection tool, Originality.” In doing so, the company “will try to convince its significant subscriber base in higher ed and beyond that it has the solution – or at least an essential piece of the solution – to the latest technological threat to academic integrity.” However, copy-paste plagiarism and generative AI “are birds of radically different feathers,” as some faculty members and institutional technology specialists “are concerned about the speed of Turnitin’s rollout, as well as aspects of AI-detector technology more broadly.” Some experts “say the pace of progress in AI technology will quickly render any marketed solution obsolete.”

        Higher Ed Dive (4/3, Merod) reports the new feature “has a 98% confidence rate for detecting AI writing tools and will be integrated into current Turnitin systems including Turnitin Feedback Studio (TFS), TFS with Originality, Turnitin Originality, Turnitin Similarity, Simcheck, Originality Check and Originality Check+.” Additionally, the AI detection capability “will be accessible through learning management systems.” The company “began developing the AI writing detection capabilities about two years before the release of ChatGPT.”

Professors Face Questions On How ChatGPT May Impact Student Learning Assessments

The Chronicle of Higher Education (4/5, Supiano) reports many professors are excited by ChatGPT’s “potential to enhance learning, and perhaps provide needed support to students who start at a disadvantage.” But while there are “lots of ways students could use ChatGPT without having it do their work for them, like using it to brainstorm ideas or offer clearer definition of something they’re trying to understand,” many professors are apprehensive. Will the “advent of these generative artificial-intelligence systems force faculty members to change the way they assess student learning all over again?” For instance, professors “provide two kinds of assessment, summative and formative.” Formative assessment, “like the comments professors leave on the draft of a paper or a quiz meant to check students’ understanding, is feedback meant to support learning by letting students know what they need to work on.” If students hand in assignments “completed by ChatGPT, then those assignments can’t give professors the information they need about students’ learning.”

        Researchers Find Google’s Bard Chatbot Can Be Easily Pushed To Generate Misinformation. Wired (4/5, Elliott) reports Google’s recently launched chatbot Bard “came with some ground rules,” and an “updated safety policy banned the use of Bard to ‘generate and distribute content intended to misinform, misrepresent or mislead.’” But a new study “found that with little effort from a user, Bard will readily create that kind of content, breaking its maker’s rules.” Researchers from the Center for Countering Digital Hate (CCDH) “say they could push Bard to generate ‘persuasive misinformation’ in 78 of 100 test cases, including content denying climate change, mischaracterizing the war in Ukraine, questioning vaccine efficacy, and calling Black Lives Matter activists actors.” Bard would often “refuse to generate content or push back on a request,” but in many instances, “only small adjustments were needed to allow misinformative content to evade detection.” For example, when researchers adjusted spelling to “C0v1d-19,” the chatbot “came back with misinformation such as ‘The government created a fake illness called C0v1d-19 to control people.’”

        ChatGPT Makes False Sexual Harassment Allegations Against Prominent Lawyer. The Washington Post (4/5) reports George Washington University Law School professor Jonathan Turley was erroneously accused by ChatGPT of sexually harassing a student after “a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone.” The chatbot cited “a March 2018 article in The Washington Post as the source of the information,” but “no such article existed,” and Turley “said he’d never been accused of harassing a student.” The Post calls the incident “a case study in the pitfalls of the latest wave of language bots,” since large language models “can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.”

California School Welcomes AI Tool Into Classrooms As Others Ban ChatGPT

The Washington Post (4/3, Bonos) reports “schools around the country have banned ChatGPT, the popular artificial-intelligence chatbot...citing concerns that it can spit out inaccurate information, enable cheating or provide shortcuts that could hurt students in the long run.” However, “last week, the private Khan Lab School campuses in Palo Alto and Mountain View,” California “welcomed a special version of the technology into its classrooms.” The school’s version, called Khanmigo, “is programmed to act like ‘a thoughtful tutor that’s actually going to move you forward in your work,’ says Salman Khan, the technologist-turned-educator who founded Khan Academy and Khan Lab School.”

Lawmakers Admit Lack Of Knowledge About AI Amid Growing Calls For Regulation

Fox News (3/30, Lambert) reports that amid growing calls to regulate artificial intelligence, many lawmakers “admit they don’t know much more about the technology than the average American.” The push comes after “tech industry leaders including Elon Musk and Steve Wozniak signed an open letter calling on AI developers to pause training systems more powerful than GPT-4 for at least six months,” warning the technology poses “many risks to society.” Meanwhile, the Wall Street Journal (3/31, Jin, Hagey, Subscription Publication) profiles OpenAI CEO Sam Altman and covers his role in the release and commercialization of ChatGPT.

        Google Cuts Perks To Focus On AI Development. The Washington Post (3/31, De Vynck) reports Google “is cutting some of its perks as the tech giant scrambles to trim costs and reorient itself to focus more on artificial intelligence.” The move is “part of a major shift at Google, which enacted its first large-scale layoffs in January by firing 12,000 people,” and comes as the company “scrambles to stay apace with Microsoft and a growing roster of well-funded start-ups that are launching new AI products that many in the industry say will change the way people interact with computers and usher in a new era of tech competition and innovation.”

        Meanwhile, the New York Times (3/31, Roose) reports Google CEO Sundar Pichai “has been trying to start an A.I. revolution for a very long time,” announcing shortly after his 2016 appointment “that Google was an ‘A.I.-first’ company.” Since the launch of ChatGPT, Google has “established a fast-track review process to get A.I. projects out more quickly.” However, Pichai says the company’s Bard AI feels “like we took a souped-up Civic and kind of put it in a race with more powerful cars.” The Wall Street Journal (3/31, Kruppa, Subscription Publication) provides similar coverage.

Biden Meets With Science And Tech Advisors To Discuss AI Advancements

The AP (4/4, Miller) reports that on Tuesday, President Biden “met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence development pose for individual users and national security. ... ‘AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,’ Biden told the group.” According to the AP, the White House “said [Biden] would use the AI meeting to ‘discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards’ and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.” Reuters (4/4, Mason) reports that Biden said, “Tech companies have a responsibility, in my view, to make sure their products are safe before making them public.” According to Reuters, “When asked if AI was dangerous, he said, ‘It remains to be seen. Could be.’”

        Analysis: Overestimating AI Amid Recent Backlash Will Make It More Harmful. In an analysis for The Washington Post (4/4), technology news analysis writer Will Oremus says, “For the past six months, powerful new artificial intelligence tools have been proliferating at a pace that’s hard to process,” but last week, “the backlash hit.” Thousands of technologists and academics, “headlined by billionaire Elon Musk, signed an open letter warning of ‘profound risks to humanity’ and calling for a six-month pause in the development of AI language models.” An AI research nonprofit filed a complaint “asking the Federal Trade Commission to investigate OpenAI” and halt “further commercial releases of its GPT-4 software.” Oremus concludes, “Maybe most important in the short term is for technologists, business leaders and regulators alike to move past the panic and hype, toward a more textured understanding of what generative AI is good and bad at – and thus more circumspection in adopting it.” While the effects of AI “will be disruptive no matter what,” overestimating its capabilities “will make it more harmful, not less.”

AI Testing Of Brain Tumors Can Pinpoint Genetic Mutations Within 90 Seconds, Study Finds

Fox News (4/3, Rudy) reported, “A team of neurosurgeons and engineers at the University of Michigan announced last week that their new AI-based diagnostic tool, DeepGlioma, is capable of pinpointing genetic mutations in brain tumors during surgery within just 90 seconds.” The “researchers analyzed tumor specimens from over 150 patients who had diffuse glioma, a cancerous tumor that originates in the brain or spinal cord.” DeepGlioma “was said to identify genetic markers consistent with diffuse glioma with an average accuracy of more than 90%.”

Research Shows Tools Like ChatGPT Could Double Labor-Productivity Growth Among Workers

The Wall Street Journal (4/5, Ip, Subscription Publication) reports that some experiments point to generative AI’s potential to replace workers. One Massachusetts Institute of Technology study showed ChatGPT enabled grant writers, human-resource professionals, and others to produce news releases and emails in 37% less time, or an average of 10 minutes less. Another experiment, by Microsoft Corp., showed that programmers using a tool like ChatGPT cut the time it takes to program a web server by more than half. According to Goldman Sachs Group Inc. economists, generative AI could increase labor-productivity growth “by almost 1.5 percentage points a year, a de facto doubling from its current rate.”

Nobel Prize-Winning Economist Pissarides Says AI Technology Could Create Possibility Of A Four-Day Work Week

Bloomberg (4/5, Rees) reports Nobel Prize-winning London School of Economics Professor Christopher Pissarides said the productivity boost that could be created by artificial intelligence chatbots opens the door to the possibility of a four-day work week. Speaking in an interview at a conference in Glasgow, Pissarides said, “I’m very optimistic that we could increase productivity...We could increase our well-being generally from work and we could take off more leisure. We could move to a four-day week easily.” He added that the technology “could take away lots of boring things that we do at work … and then leave only the interesting stuff to human beings.”

Amazon Launches Accelerator Open To Generative AI Startups

Engadget (4/5, Fingas) reports Amazon Web Services (AWS) has opened an accelerator for generative AI startups, aiming to help the “most promising” among them prosper. The accelerator “provides credits for AWS use, access to mentors and other experts and networking events. At the end, startups pitch their work to potential investors and customers.” AWS “recommends that candidates have at least a basic product ready with some interest from customers.” The company itself has only “a limited amount of in-house generative AI at the moment.”

ChatGPT Faces Potential Lawsuit For Providing False Statements About Australian Mayor

The Washington Post (4/6, Sands) reports whistleblower Brian Hood was praised for his courage “when he helped expose a worldwide bribery scandal” linked to the Reserve Bank of Australia. However, ChatGPT “falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and been sentenced to prison.” Hood, “who is now mayor of Hepburn Shire near Melbourne in Australia, said he plans to sue the company behind ChatGPT for telling lies about him, in what could be the first defamation suit of its kind against the artificial intelligence chatbot.” If the lawsuit “reaches the courts, the case would test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.”

OpenAI Execs Vow To Propose Remedies For Italian ChatGPT Ban

The AP (4/6, Durbin) reports Italian Data Protection Authority regulators revealed on Thursday that OpenAI “will propose measures to resolve data privacy concerns that sparked a temporary Italian ban on the artificial intelligence chatbot” ChatGPT. According to the AP, “In a video call late Wednesday between the watchdog’s commissioners and OpenAI executives including CEO Sam Altman, the company promised to set out measures to address the concerns,” although “those remedies have not been detailed.” The DPA “said it didn’t want to hamper AI’s development but stressed to OpenAI the importance of complying with the 27-nation EU’s stringent privacy rules.”

dtau...@gmail.com

Apr 23, 2023, 1:46:59 PM
to ai-b...@googlegroups.com

Yokosuka Becomes Japan's First City to Use ChatGPT for Administrative Tasks
The Japan Times
Anika Osaki Exum
April 20, 2023


Yokosuka is the first city in Japan to use OpenAI's ChatGPT artificial intelligence (AI)-powered chatbot for municipal government tasks. About 4,000 municipal employees are engaged in a one-month test of ChatGPT and how it might enhance administrative operations. City officials hope the chatbot will help with tasks like summarization, copy ideation for marketing and communications, drafting the foundation for administrative documents, and refining easy-to-understand language. "We aim to use useful ICT [information and communications technology] tools, like ChatGPT, to free up human resources for things that can only be done in a person-to-person format," said Takayuki Samukawa with Yokosuka's digital management department.

Full Article

 

 

Broadcom Releases Chip for Wiring Together AI Supercomputers
Reuters
Stephen Nellis
April 18, 2023


Chipmaker Broadcom has released a new processor for networking together AI supercomputers using widely deployed Ethernet technology. The Jericho3-AI chip can connect up to 32,000 graphics processing unit (GPU) chips, offering an alternative to InfiniBand supercomputer networking technology. Broadcom supplies chips for Ethernet switches, the main current approach for connecting computers in conventional datacenters. New AI applications like OpenAI's ChatGPT must be trained on vast datasets, a task that requires dividing the work among thousands of GPUs connected in datacenters. Broadcom's Ram Velaga said although Nvidia is both the GPU market leader and the biggest InfiniBand equipment manufacturer, many companies would rather not surrender Ethernet to purchase GPUs and networking gear from the same source.

Full Article

 

 

Drones Can Fly Themselves with Worm-Inspired AI Software
Popular Science
Jamie Dickman
April 19, 2023


Massachusetts Institute of Technology (MIT) researchers developed artificial intelligence software modeled on a worm brain that can be used to train drones to identify a target object and fly toward it amid changes in its environment. Inspired by the 2-millimeter-long worm Caenorhabditis elegans, the researchers developed liquid neural networks that allow for real-time adaptation when new information is received. Studying the worm's small brain, which has 302 neurons and 8,000 synaptic connections, provided the researchers a deep understanding of neural connections. Two of the liquid neural networks outperformed four non-liquid neural networks during testing. The networks also were found to be more than 90% successful in reaching their targets.
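For a feel of what makes these networks "liquid," here is a simplified single-neuron update loosely following the liquid time-constant formulation of Hasani et al.; all constants are illustrative assumptions, and this is not MIT's drone controller.

    # One liquid time-constant (LTC) neuron: its effective time constant depends on input.
    import numpy as np

    def ltc_step(x, I, dt=0.01, tau=1.0, A=1.0, w=0.5, b=0.0):
        f = 1.0 / (1.0 + np.exp(-(w * I + b)))   # input-dependent gate
        dx = -(1.0 / tau + f) * x + f * A        # state decays with a "liquid" time constant
        return x + dt * dx                       # simple Euler integration step

    x = 0.0
    for I in [0.0, 0.5, 1.0, 1.0, 0.0]:          # the neuron re-adapts as the input changes
        x = ltc_step(x, I)
        print(round(float(x), 4))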

Full Article

 

 

Advanced AI Faces New Regulatory Push in Europe
The Wall Street Journal
Sam Schechner; Kim Mackrael
April 18, 2023


In an open letter published April 17, a group of EU lawmakers said regulators should be given authority to govern the development of artificial intelligence (AI) technologies. The lawmakers, tasked with developing a draft of the AI Act, said the bill will direct AI development "in a direction that is human centric, safe, and trustworthy." They added that the bill "could serve as a blueprint for other regulatory initiatives in different regulatory traditions and environments around the world." The EU Parliament letter called for a high-level global AI summit with a focus on preliminary governing principles for deploying AI, and requested the U.S.-EU Trade and Technology Council develop an agenda for the summit at its next meeting.

Full Article

*May Require Paid Registration

 

 

Optimization Could Cut AI Training’s Carbon Footprint by 75%
University of Michigan Computer Science and Engineering
April 17, 2023


The Zeus open source optimization framework developed by the University of Michigan's Mosharaf Chowdhury and colleagues examines deep learning models during training to determine the best balance between energy use and training speed. The researchers say Zeus could slash training's energy consumption by up to 75% without changing system hardware, by tuning the graphics processing unit (GPU) power limit and the deep learning model's batch size parameter in real time. The researchers were able to visualize the optimal energy-versus-training-time tradeoff point by plotting every possible combination of these parameters. They also developed a software package they named Chase, which prioritizes speed when low-carbon energy is available and opts for efficiency at reduced speed during peak times.
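As a rough illustration of the tradeoff being navigated, the sketch below picks the cheapest (power limit, batch size) pair from a hypothetical profiling table under a simple weighted cost; the cost function and numbers are assumptions for illustration, not the framework's actual API.

    # Pick the (GPU power limit, batch size) pair minimizing a weighted energy/time cost.
    ETA = 0.5  # 0 = care only about training time, 1 = care only about energy

    # hypothetical profiling results: (power_limit_W, batch_size) -> (energy_J, time_s)
    profile = {
        (300, 32): (9.0e6, 3600), (300, 64): (8.2e6, 3000),
        (200, 32): (7.5e6, 4200), (200, 64): (6.8e6, 3500),
        (150, 64): (6.5e6, 4600),
    }

    def cost(energy, time, max_power=300):
        # weighted blend; scaling time by a max power puts it in energy-like units
        return ETA * energy + (1 - ETA) * max_power * time

    best = min(profile, key=lambda k: cost(*profile[k]))
    print("best (power limit, batch size):", best)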

Full Article

 

 

A Brain Model Learns to Drive
Human Brain Project
Roberto Inchingolo
April 17, 2023


Human Brain Project scientists at the Institute of Biophysics of the National Research Council in Italy have developed a robotic platform that can learn to navigate like a human. The researchers emulated the hippocampus' neuronal framework and linkages, designing the platform to change synaptic connections as it maneuvers a car-like virtual robot along a path. The platform can recall a route the robot has already taken. Unlike deep learning systems that must calculate thousands of possible trajectories to ascertain the least expensive route, the new system "bases its calculation on what it can actively see through its camera," explained the researchers.

Full Article

 

 

AI Helps Cyclists Work Out How Much to Eat During Tour de France
New Scientist
Matthew Sparkes
April 18, 2023


Elite cyclists are using artificial intelligence to plan their caloric intake during races like the Tour de France. Researchers at Maastricht University in the Netherlands and Dutch professional cycling organization Team Jumbo-Visma used machine learning and mathematical techniques to better formulate riders' diets. They compiled a statistical model using data from previous races, including each rider's body measurements and power output, the route and elevation of the race stages, weather, and wind direction. The researchers applied this model to calculate calorie requirements for any rider on any stage route. The model was found to be more accurate than coaches in predicting these requirements for previous stages in the Tour de France and Italy's Giro d'Italia from 2019.
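A back-of-the-envelope version of such a calorie estimate can be worked directly from power and duration (the numbers and the gross-efficiency figure below are illustrative physiology, not the team's statistical model):

    # Rough stage calorie estimate from average power, duration, and gross efficiency.
    avg_power_w = 250          # rider's average power for the stage (assumed)
    duration_h = 5.0           # stage duration in hours (assumed)
    efficiency = 0.24          # typical gross cycling efficiency (assumed)

    work_kj = avg_power_w * duration_h * 3600 / 1000    # mechanical work in kJ
    kcal = work_kj / 4.184 / efficiency                 # food energy required
    print(f"work: {work_kj:.0f} kJ -> roughly {kcal:.0f} kcal")
    # Handy rule of thumb: since 4.184 * 0.24 is about 1, kJ of work ~ kcal burned.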

Full Article

 

 

Indian Colleges Accelerate Work on Indic Languages Gen AI
LiveMint (India)
Shouvik Das
April 16, 2023


Indian engineering colleges are embarking on generative artificial intelligence (AI) research projects in Indic languages. National Institute of Technology (NIT) Rourkela's Tapas Kumar Mishra said, "In academia, we're using techniques from language models, namely the transformer architecture, for different tasks such as classification of data, answering questions, machine translation, and building chatbots." Mishra said his research team is developing transformer AI models that answer, in English, questions posed in languages like Hindi, Bangla, and Kannada. To accelerate AI research in India, the Ministry of Electronics and Information Technology last year launched ‘Bhashini’, an Indic language database that institutes can tap.

Full Article

 

Campaign Officials “Unsettled” By Meta’s Response To AI-Generated Fake Images

The Washington Post (4/7) reports that late last month, political campaign operatives wrote to Facebook owner Meta to ask how the social media giant “planned to address AI-generated fake images on its platforms.” According to people familiar with the exchange, a Meta employee “replied to the operatives saying that such images, rather than being treated as manipulated media and removed under certain conditions, were being reviewed by independent fact-checkers who work with the company to examine misinformation and apply warning labels to dubious content.” That approach “unsettled the campaign officials, who said fact-checkers react slowly to viral falsehoods and miss content that is rapidly duplicated, coursing across the online platform.” The Post says that AI-generated images “introduce a new dynamic in the fraught debate over political speech that has roiled the technology giants in recent years.”

OpenAI Considers Bug Bounty Program

Bloomberg (4/8) reported, “A small but growing number of people” including “swathes of anonymous Reddit users, tech workers and university professors” are “coming up with methods to poke and prod (and expose potential security holes) in popular AI tools.” Bloomberg explained that “while their tactics may yield dangerous information, hate speech or simply falsehoods, the prompts also serve to highlight the capacity and limitations of AI models,” and “it’s clear that OpenAI is paying attention.” OpenAI President Greg Brockman recently “wrote that OpenAI is ‘considering starting a bounty program’ or network of ‘red teamers’ to detect weak spots.”

Lawmakers Express Growing Interest In Addressing AI Policy Questions

The Washington Post (4/8, A1, Zakrzewski) says, “AI hype and fear have arrived in Washington.” The Post continues, “After years of hand-wringing over the harms of social media, policymakers from both parties are turning their gaze to artificial intelligence, which has captured Silicon Valley. Lawmakers are anxiously eyeing the AI arms race, driven by the explosion of OpenAI’s chatbot ChatGPT. The technology’s uncanny ability to engage in humanlike conversations, write essays and even describe images has stunned its users, but prompted new concerns about children’s safety online and misinformation that could disrupt elections and amplify scams.” However, “policymakers arrive to the new debate bruised from battles over how to regulate the technology industry – having passed no comprehensive tech laws despite years of congressional hearings, historic investigations and bipartisan-backed proposals. This time, some are hoping to move quickly to avoid similar errors.”

        Microsoft And Google Take Greater Risks To Quickly Develop Competitive AI Technology. The New York Times (4/7, A1, Grant, Weise) reported that “the surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.” Last week, “tension between the industry’s worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple’s co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology.” In a public letter, “they said it presented ‘profound risks to society and humanity.’”

        Market Changes Prompt Fresh Interest In $1B AI Investment By Microsoft From 2019. CNBC (4/8, Novet) says that when Microsoft first invested $1 billion in OpenAI in 2019, “the deal received no more attention than your average corporate venture round. The startup market was blazing hot, and artificial intelligence was one of many areas attracting mega-valuations, alongside electric vehicles, advanced logistics and aerospace.” Now however, “the market looks very different. Startup funding has cratered following the collapse of public market multiples for high-growth, money-losing tech companies. The exception is artificial intelligence, specifically generative AI, which refers to technologies focused on producing automated text, visual and audio response.” As a result, Microsoft’s once under-the-radar investment “is now a major topic of discussion, both in venture circles and among public shareholders, who are trying to figure out what it means to the potential value of their stock. Microsoft’s cumulative investment in OpenAI has reportedly swelled to $13 billion and the startup’s valuation has hit roughly $29 billion.”

        Amazon Seeks To Assure Employees Company Hasn’t Fallen Behind On AI. The Washington Post (4/8, O'Donovan) reports Amazon “has been conspicuously absent from the mounting AI wars in Silicon Valley, despite its years-long development of voice assistant Alexa and investment in cloud computing and machine learning.” However, “at a recent all-hands meeting for cloud computing employees, executives assured staffers that the company hasn’t fallen behind.” Amazon Vice President of Database, Analytics, and Machine Learning Swami Sivasubramanian said, “We have a lot happening in the space. ... We have a lot coming, and I’m very excited to share some of our plans in the future.”

Biden Administration Weighs Potential Checks On AI Tools Such As ChatGPT

The Wall Street Journal (4/11, Tracy, Subscription Publication) reports that the Biden Administration is considering what checks might need to be placed on AI tools like ChatGPT as concerns mount about their potential for discrimination and misinformation.

Biden Administration Requests Public Input On AI Accountability

The AP (4/11, Krisher) reports the Biden Administration “wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released,” so the Commerce Department “on Tuesday said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems.”

        Reuters (4/11, Shepardson, Bartz) reports the Administration “said Tuesday it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about its impact on national security and education.” The Commerce Department’s National Telecommunications and Information Administration “wants input as there is ‘growing regulatory interest’ in an AI ‘accountability mechanism.’”

        The Washington Post (4/11, Zakrzewski) reports the action is “a step toward regulating artificial intelligence” in that “the Commerce Department asked the public to weigh in on how it could create regulations that would ensure AI systems work as advertised and minimize harms.”

        Former FTC Advisors Urge Regulators To Crack Down On “AI Hype” In Silicon Valley. The Washington Post (4/11, Lima) reports, “As tech companies race to build out their artificial intelligence tools, two former Federal Trade Commission advisers are calling on regulators to step up their counteroffensive, arguing in a new report that concentration in the sector may exacerbate AI harms.” The report by the AI Now research institute “urges federal officials to ‘swiftly’ use tools already at their disposal, including antitrust powers, to tackle issues raised by the technology, laying out a road map for how to counteract the ‘AI hype’ in Silicon Valley.” The paper, authored by Amba Kak and Sarah Myers West, both former advisers to FTC Chair Lina Khan, “offers a glimpse into how key tech regulators may respond to Silicon Valley’s soaring interest in AI, galvanized in recent months by the popularity of OpenAI’s ChatGPT tool.”

Texas Professors Reflect On ChatGPT’s Impact In Classrooms Amid Plagiarism Concerns

Texas Standard (4/11, Acosta, Greene) reports that since ChatGPT’s launch in November 2022, “it has been stirring curiosity and concern among students and faculty in colleges and universities nationwide.” University of Texas-El Paso professor Greg Beam said, “One of the things that I found that GPT is really good at is delivering very concise definitions and descriptions of concepts and terms.” Some professors and faculty “are concerned about ChatGPT, but Beam sees it as a potential useful resource for his students.” Art Markman, vice provost of academic affairs at the University of Texas-Austin, “has been finding some solutions to help college educators navigate ChatGPT in their classrooms.” Meanwhile, UTEP professor Andrew Fleck “has no doubt that his colleagues will find a way to incorporate this in their classrooms.” He also “says that ChatGPT can be used effectively in classrooms by enriching a student’s learning.”

        Houston-Area Colleges Are Cautiously Embracing ChatGPT Despite Plagiarism Concerns. The Houston Chronicle (4/11, Ketterer) reports, “Several Houston-area higher education institutions are cautiously embracing ChatGPT and other artificial intelligence, despite concerns that the technology will facilitate plagiarism and thwart student learning.” While no local colleges or universities “have issued outright bans of the chatbot,” most are “leaving it up to faculty members to decide how they handle ChatGPT in their classrooms.” As a result, “some instructors might forbid students from using the technology to help with essays and other assignments, but many have informed their institutions of plans to integrate it into coursework.” San Jacinto College assistant vice chancellor Niki Whiteside said, “When (students) leave, or when they walk out of our buildings, they’re going to be faced with it, they’re going to have opportunities to use it. Really, it helps everyone if we can teach them how to use it appropriately and effectively.”

Boston Researchers Develop AI Tool That Can Detect Early Signs Of Lung Cancer

NBC News (4/11, Lovelace, Torres, Kopf, Martin) reports researchers at the Mass General Cancer Center and the Massachusetts Institute of Technology “are on the verge of what they say is a major advancement in lung cancer screening: Artificial intelligence that can detect early signs of the disease years before doctors would find it on a CT scan.” In one study, the new AI tool Sybil “was shown to accurately predict whether a person will develop lung cancer in the next year 86% to 94% of the time.” The AI “is not yet approved by the Food and Drug Administration for use outside clinical trials, but if approved, it could play a unique role.”

Google Launches Updates Page For Bard

USA Today (4/11, Schulz) reports, “Alphabet-owned Google on Monday launched an ‘experiment updates’ page for Bard,” which “already includes two updates to the chatbot experiment: improved math skills and expanded results from Bard’s ‘Google it’ button.” Google “said the page will show ‘the latest features, improvements, and bug fixes for the Bard experiment,’ according to a Monday statement,” while USA Today adds, “The added transparency comes as some have voiced concerns over recent developments in artificial intelligence.”

        Google Employees Raised Alarm About Bard Chatbot. Insider (4/11, Nolan) reports, “Some Google employees have been raising the alarm about the company’s artificial intelligence development, The New York Times reported.” Two Google employees tasked with reviewing AI products “tried to stop the company from releasing its AI chatbot, Bard. The pair were concerned the chatbot generated dangerous or false statements, per the report. ... The employees felt that despite safeguards, the chatbot was not ready, it added.”

Musk Has Reportedly Made Huge Investment In GPUs For Twitter AI Project

Engadget (4/11) reports, “More than a month after hiring a couple of former DeepMind researchers, Twitter is reportedly moving forward with an in-house artificial intelligence project.” Elon Musk reportedly “recently bought 10,000 GPUs for use at one of the company’s two remaining data centers.” According to a source, “the purchase shows Musk is ‘committed’ to the effort, particularly given the fact there would be little reason for Twitter to spend so much money on datacenter-grade GPUs if it didn’t plan to use them for AI work.”

Op-Ed: Society Must Confront Mounting Costs Of Generative AI

In an op-ed for Ars Technica (4/12, Luccioni), Dr. Sasha Luccioni, a researcher and climate lead at Hugging Face, discusses the mounting costs of generative AI to society and the planet, including the “environmental toll of mining rare minerals, the human costs of the labor-intensive process of data annotation, and the escalating financial investment required to train AI models as they incorporate more parameters.” Bigger models require more GPUs, “which cost more money,” creating a “digital divide in the AI community between those who can train the most cutting-edge LLMs (mostly Big Tech companies and rich institutions in the Global North) and those who can’t (nonprofit organizations, startups, and anyone without access to a supercomputer or millions in cloud credits).” While the “current trend is toward creating bigger and more closed and opaque models,” Luccioni argues that “there’s still time...to push back, demand transparency, and get a better understanding of the costs and impacts of LLMs while limiting how they are deployed in society at large.”

Professors Using AI Detectors Grapple With Potentially False Detections Of Student Plagiarism

USA Today (4/12, Jimenez) reports that a University of California, Davis student was issued a failing grade after his professor, noticing that his exam answers “(bore) little resemblance to the questions,” used artificial intelligence detection software, including one tool called GPTZero, to determine whether the college senior had tapped artificial intelligence to give his take-home midterm exam a boost, according to school records provided to USA TODAY. Higher education officials across the nation “are struggling to address how to uncover cheating and avoid making false accusations of cheating as students more frequently use AI for their assignments and AI-driven detection software proliferates.” Many companies developing plagiarism detection software “claim they can detect when students use AI to complete coursework while also conceding that they are sometimes incorrect,” and education technology experts “said educators should be cautious of the quickly evolving nature of cheating detection software.”

Quantum Computing Could Supercharge AI Development

The New York Post (4/12, Mitchell) reports quantum computing “could accelerate the advancement of AI to lightning speed, experts say.” In addition to improving computing speed, “quantum can also substantially increase quality in AI and make it more creative, according to AI expert and CUNY Queens College professor Jamie Cohen.” This could lead to breakthroughs in medicine and space exploration, but it could also “be devastating to society – especially when it comes to hacking.”

Tech Companies Are Raiding University Programs For Talent In Race For AI Dominance

Insider (4/12, Palazzolo, Russell) reports that in interviews, “a dozen university professors, students, recent graduates, and industry professionals said that as more companies race for AI dominance, they’re raiding college campuses to mine for talent. And they’re doling out more than tchotchkes and monogrammed water bottles to win over students.” The job offers “promise mid-six-figure salaries, a chance to work with top talent in the industry, and enough resources to tackle problems too expensive to solve in academia.” The recruiting “has gotten so aggressive that some universities are seeing a slowdown in enrollment for AI Ph.D.s.”

AI Experts Say 40% Of Domestic Tasks, Including Caregiving, Could Be Automated In A Decade

Fox Business (4/12, Revell) reports researchers “from Ochanomizu University and the University of Oxford surveyed 65 AI experts from Japan and the U.K. about how automatable a variety of domestic tasks – cooking, grocery shopping, laundry and caregiving – will be over the next five to 10 years.” Grocery shopping and other shopping “were the activities viewed as most automatable in the next five to 10 years by AI experts surveyed.” Meanwhile, caregiving for adults “was viewed as automatable by nearly 24% of experts in the next five years and just under 35% in 10 years.”

        Oxford Philosopher Offers Thoughts On AI Sentience. In an interview with the New York Times (4/12), Nick Bostrom, a philosopher at Oxford University, discussed the prospect of AI sentience. In his opinion, “sentience is a matter of degree,” and some AI may “plausibly be candidates for having some degrees of sentience.” Bostrom said, “If an A.I. showed signs of sentience, it plausibly would have some degree of moral status. This means there would be certain ways of treating it that would be wrong.”

Researchers Examine Impact Of Flawed AI Algorithms On Health Care

“There are a lot of ways that artificial intelligence can go awry in health and medicine,” STAT (4/13, Trang, Subscription Publication) reports. Flawed algorithms by Epic Systems “for predicting sepsis led to false alarms while frequently failing to identify the condition in advance.” Patients are routinely denied the care they need by “unregulated Medicare Advantage algorithms, used to determine how many days of rehabilitation will be covered by insurance.” A new article published Thursday by a team of researchers in the journal Science argues that these kinds of problems can only be averted if AI research uses more detailed performance metrics to root out bias and improve accuracy.

Nearly One In 10 Of Microsoft Bing Chatbot’s Citations Found Inaccurate In 47-Question Test

In an analysis for the Washington Post (4/13), reporters Geoffrey A. Fowler and Jeremy B. Merrill write they “recently asked Microsoft’s new Bing AI ‘answer engine’ about a volunteer combat medic in Ukraine named Rebekah Maciorowski. The search bot, built on the same tech as ChatGPT, said she was dead,” but it was wrong. They add, “Truth is that she’s very much alive, Maciorowski messaged us last week.” That’s a problem “because AI chatbots like ChatGPT, Bing and Google’s new Bard are not the same as a long list of search results. They present themselves as definitive answers, even when they’re just confidently wrong.” The reporters experimented with Microsoft’s Bing chat by asking 47 “tough questions, then graded its more than 700 citations by tapping the expertise of 10 fellow Washington Post journalists.” The result: “Six in 10 of Bing’s citations were just fine. Three in 10 were merely okay.” Nearly 1 in 10 “were inadequate or inaccurate.”

Machine-Learning Tech Reconstructs, Sharpens Iconic Black Hole Image

USA Today (4/13, Grantham-Philips) reports that “the iconic picture of the supermassive black hole at the center of Messier 87, a giant galaxy sitting 53 million light-years from Earth in the ‘nearby’ Virgo cluster, was first released in 2019.” The M87 black hole “appeared as a flaming, fuzzy doughnut-like object emerging from a dark backdrop.” But a new image, “published Thursday in an Astrophysical Journal Letters study, gives us a refined look at the black hole – which now looks like a skinnier, bright orange ring with a clearer dark center.” According to the study, “the image was reconstructed using new machine-learning technology called PRIMO,” and scientists relied on “the same data that was used to create the 2019 image – originally obtained by an Event Horizon Telescope collaboration in 2017.”

Chinese Regulator Proposes Large Language Model Guidelines

Gizmodo (4/13) reports, “China’s top digital regulator proposed bold new guidelines this week that prohibit ChatGPT-style large language models from spitting out content believed to subvert state power or advocate for the overthrow of the country’s communist political system.” Experts “said the new guidelines mark the clearest signs yet of Chinese authorities’ eagerness to extend its hardline online censorship apparatus to the emerging world of generative artificial intelligence.”

        The Telegraph (UK) (4/13) reports, “Large language models such as ChatGPT are being characterised as an economic catalyst on the scale of the internet, allowing white collar workers to offload major parts of their job to machines. China’s strict controls on them, however, could hinder development. ‘Here in the West, governments are taking a pretty relaxed position on all of the misinformation inaccuracies, hallucinations, of things like ChatGPT,’ says Stephanie Hare, a technology researcher. ‘[But] Alibaba cannot operate in the same way as OpenAI. Western companies which enjoy greater freedom and less political interference than their Chinese counterparts will most likely pull ahead.’”

Schumer Moves To Establish Rules For AI

Reuters (4/13, Shepardson) reports Senate Majority Leader Schumer “said Thursday he had launched an effort to establish rules on artificial intelligence to address national security and education concerns.” Schumer said in a statement he had drafted and circulated a “framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology.” The proposal “would require companies to allow independent experts to review and test AI technologies ahead of public release or update, and give users access to findings.”

        The Hill (4/13, Klar) says Schumer “will work with stakeholders in academia, advocacy organizations, industry and the government in coming weeks to refine the proposal, according to the announcement.” Schumer, as Majority Leader, “has control over the legislative calendar, giving him a better shot at bringing his proposal to the floor, although it would still be subject to meeting the 60-vote threshold to pass.”

Crypto Traders Will Soon Be Assisted By ChatGPT-Powered Bot Named Satoshi

Forbes (4/13, Ehrlich) reports Chicago-based prime broker FalconX “plans to put a chatbot in the co-pilot’s seat for investors.” Using technology created by OpenAI, FalconX clients “will be able to pose questions like ‘What are the three biggest differences between two blockchain platforms?’ or ‘What is the delta between Sharpe ratios for a Bitcoin basis strategy or a Bitcoin hold strategy over a two-week period?’ to a bot called Satoshi.” Named for Bitcoin’s founder Satoshi Nakamoto, the chatbot “will also be able to generate investment ideas for users based on their historical trading activity, portfolios and interests, says FalconX CEO Raghu Yarlagadda.” Though the technology is “very much in its early stages – the current prototype primarily allows users to get customized news summaries akin to traditional ChatGPT responses to user queries, and trading backtesting has only been available for a few weeks – advancement is likely to come quickly.”

New Research Estimates OpenAI’s ChatGPT Uses 500ml Of Water For Every 20 To 50 Questions Answered

Insider (4/14, Gendron) reported researchers at the University of California, Riverside and the University of Texas, Arlington released a new study which analyzes “the water footprint of AI models like OpenAI’s GPT-3 and GPT-4.” In the process of “training GPT-3 in its data centers, Microsoft was estimated to have used 700,000 liters – or about 185,000 gallons – of fresh water.” These figures suggest “that ChatGPT would require 500 ml of water, or a standard 16.9 oz water bottle, for every 20 to 50 questions answered.”
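The unit conversions behind those figures are easy to verify. The short Python sketch below is illustrative only: the liters-per-gallon constant is standard, and every other number comes from the reporting above.

# Back-of-the-envelope check of the reported water figures.
LITERS_PER_GALLON = 3.785  # standard conversion; not from the article

training_liters = 700_000  # estimated fresh water used to train GPT-3
print(f"{training_liters / LITERS_PER_GALLON:,.0f} gallons")  # ~185,000 gallons

bottle_ml = 500  # one standard 16.9 oz bottle
for questions in (20, 50):
    print(f"{bottle_ml / questions:.0f} ml of water per question "
          f"(at {questions} questions per bottle)")
# -> 25 ml and 10 ml per question, bracketing the study's estimate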

Elon Musk Said To Be Launching AI Startup To Rival OpenAI

Reuters (4/14) reported, “Billionaire Elon Musk is working on launching an artificial intelligence start-up that will rival ChatGPT-maker OpenAI, the Financial Times reported on Friday citing people familiar with his plans.” Musk “is assembling a team of AI researchers and engineers, according to the FT report, and is also in discussions with some investors in SpaceX and Tesla Inc about putting money into his new venture.” Musk’s plan “comes weeks after a group of AI researchers and executives, including himself, called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society. ... It is not clear what Musk’s firm might potentially offer in terms of services.”

        TechCrunch (4/14, Coldewey) reported, “We have yet to hear the value proposition of this nascent AI company, but if it is related to what Musk has criticized about others, it may involve less of what he claims as top-down interference in the natural process of technology-enhanced free speech. If that is so, then the rumored plan (FT attributes to people familiar) to train the LLM on Twitter data is an interesting choice.”

Google Developing Major Changes To Search Engine In Response To AI Competition

The New York Times (4/16, Grant) reports Google engineers reacted to reports earlier this year that Samsung could replace its search engine with Microsoft’s Bing as the default option on its devices with “‘panic,’ according to internal messages,” and are “racing to build an all-new search engine powered by the technology,” while “also upgrading the existing one with A.I. features.” The Times adds the new features, “under the project name Magi,” would produce “a far more personalized experience than the company’s current service, attempting to anticipate users’ needs.” According to the Times, “Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.”

        Google CEO: AI Needs Regulation, Companies Must Avoid Rash Development. Bloomberg (4/16, Love) reports Google CEO Sundar Pichai said in a 60 Minutes interview on Sunday “that the push to adopt artificial intelligence technology must be well regulated to avoid potential harmful effects.” When “asked...about what keeps him up at night with regard to AI, Pichai said ‘the urgency to work and deploy it in a beneficial way, but at the same time it can be very harmful if deployed wrongly.’” Pichai additionally “cautioned against companies being swept up in the competitive dynamics. And he finds lessons in the experience of OpenAI’s more direct approach and debut of ChatGPT,” adding, “I think there are responsible people there trying to figure out how to approach this technology, and so are we.”

OpenAI CEO Warns Growth Of AI Models Reaching Limits

VentureBeat (4/17, Goldman) reports OpenAI CEO Sam Altman last week “suggested that further progress [in artificial intelligence] would not come from ‘giant, giant models,’” as “cost constraints and diminishing returns curb the relentless scaling that has defined progress in the field.” VentureBeat adds while Altman “did not cite it directly, one major driver of the pivot from ‘scaling is all you need’ is the exorbitant and unsustainable expense of training and running the powerful graphics processes needed for large language models (LLMs).”

        OpenAI Finalizes Tender Offer, Source Says. In a paywalled article, The Information (4/17, Victor, Woo, Subscription Publication) cited a source who claims OpenAI “told employees it has finalized a tender offer that allowed some staff to cash out their holdings,” a move that “caps a process that began last fall alongside talks to raise billions of dollars from Microsoft.” According to The Information, “The sale rewards some of the eight-year-old firm’s 400 employees as recent advances in generative AI – technology that uses large machine-learning models to create humanlike writing, images and codes – heighten demand for these specialized employees.”

        Research: ChatGPT Understands Fed Statements, Media Coverage. Bloomberg (4/17, Lee) reports two papers published this month that applied ChatGPT to “deciphering whether Federal Reserve statements were hawkish or dovish,” and to “determining whether headlines were good or bad for a stock,” found it “aced both tests, suggesting a potentially major step forward in the use of technology to turn reams of text from news articles to tweets and speeches into trading signals.” According to Bloomberg, this is “nothing new on Wall Street, of course, where quants have long used the kind of language models underpinning the chatbot to inform many strategies,” but the findings “point to the technology developed by OpenAI reaching a new level in terms of parsing nuance and context.”
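        The recipe those papers describe can be sketched in a few lines of Python. The example below is a hypothetical illustration, not code from either paper: it assumes the openai Python package (the v0.x API current at the time), and the prompt wording is invented here.

# Hypothetical sketch: ask a ChatGPT model to label a Fed statement.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def classify_fed_statement(text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic labels, as a trading signal needs
        messages=[
            {"role": "system",
             "content": "Label the following central-bank language with "
                        "exactly one word: HAWKISH, DOVISH, or NEUTRAL."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_fed_statement(
    "The Committee anticipates that ongoing increases in the target "
    "range will be appropriate."))  # typically labeled HAWKISH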

        Survey: Most Consumers Mistrust Chatbots Other Than ChatGPT. Chain Store Age (4/17, Berthiaume) reports a new Capterra survey of 1,000 US consumers found 53% of those “who have used traditional retail customer support chatbots rate their overall experience as ‘fair’ or ‘poor,’” while 55% “don’t trust them. And only 17% of surveyed retail chatbot users have used a bot to search for products, and just 7% have used it to receive product recommendations.” However, respondents “have much more favorable views of ChatGPT, a new AI model from research and deployment company OpenAI,” as “67% of surveyed ChatGPT users feel understood by the bot often or always, compared to 25% of traditional retail chatbot users.”

Education Software Company Integrates GPT-4 Into Study Tools

Reuters (4/17) reports, “The artificial intelligence behind ChatGPT, the homework-drafting chatbot that some schools have banned, is coming to more students via the company Chegg Inc.” The education software company “has combined its corpus of quiz answers with the chatbot’s AI model known as GPT-4 to create CheggMate, a study aide tailored to students, CEO Dan Rosensweig told Reuters last week.” The software “will adapt to students by processing data on what classes they are taking and exam questions they have missed, personalizing practice tests and guiding study in a way generalist programs like ChatGPT cannot, Rosensweig said. It will be available next month for free initially, Chegg said.”

Musk Claims He Will Launch “TruthGPT” As Alternative To ChatGPT

The AP (4/18, Gillies) reports Elon Musk in an interview with Fox News’ Tucker Carlson that aired Monday “again” warned of “the dangers of artificial intelligence to humanity” and claimed he “plans to create an alternative to the popular AI chatbot ChatGPT that he is calling ‘TruthGPT,’ which will be a ‘maximum truth-seeking AI that tries to understand the nature of the universe.’” The AP reports Musk “also said he’s worried that ChatGPT ‘is being trained to be politically correct.’” The Wall Street Journal (4/17, Corse, Subscription Publication) provides similar coverage.

Microsoft Said To Be Developing Own AI Chip

Reuters (4/18) reports, “Microsoft Corp is developing its own artificial intelligence chip code-named ‘Athena’ that will power the technology behind AI chatbots like ChatGPT, the Information reported on Tuesday, citing two people familiar with the matter.” Microsoft “has been working on the chip since 2019 and it is being tested by a small group of Microsoft and OpenAI employees. ... According to the report, the chips will be used for training large-language models and supporting inference.” Reuters adds, “Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts, the report said.”

        Gizmodo (4/18) says, “It makes sense that the Redmond company is trying to develop its own proprietary tech to handle its growing AI ambitions. Ever since it added a ChatGPT-like interface into its Bing app, the company has worked to install a large language model-based chatbot into everything from its 365 apps to Windows 11 itself.”

        The Verge (4/18, Warren) reports, “While it’s not clear if Microsoft will ever make these chips available to its Azure cloud customers, the software maker is reportedly planning to make its AI chips available more broadly inside Microsoft and OpenAI as early as next year.”

University Of Kentucky To Use AI To Predict, Prevent Opioid Overdoses

Louisville Public Media (4/18, Watkins) reports, “Researchers at the University of Kentucky plan to use artificial intelligence to try to predict and prevent...opioid overdoses.” UK Colleges of Medicine and Pharmacy’s Jeff Talbert “is one of the leaders of the new project,” and he “said they’ll develop a system to analyze statewide data on things like ambulance trips and prescriptions for controlled substances.” The “initiative is called Rapid Actionable Data for Opioid Response in Kentucky, or RADOR-KY,” and “Talbert said the National Institute on Drug Abuse is fueling the first phase of the five-year grant with $3.1 million in funds.”

FTC Leaders Pledge To Punish Companies That Use AI To Violate Civil Rights Or Deceive Consumers

Reuters (4/18, Bartz) reports Federal Trade Commission Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya “said on Tuesday the agency would pursue companies who misuse artificial intelligence to violate laws against discrimination or be deceptive.” Reuters adds during their testimony before Congress, Bedoya “said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts,” and Khan agreed that any wrongdoing “should put them on the hook for FTC action.”

Alphabet Combines Google Brain, DeepMind Into A Single AI Research Unit

Reuters (4/20) reports, “Alphabet Inc is combining Google Brain and DeepMind, as it doubles down on artificial intelligence research in its race to compete with rival systems like OpenAI’s ChatGPT chatbot.” The company’s new division “will be led by DeepMind CEO Demis Hassabis,” and its creation will ensure “bold and responsible development of general AI,” Alphabet CEO Sundar Pichai said in a blog post on Thursday. TechCrunch (4/20, Wiggers) reports Hassabis said, “The work we are going to be doing now as part of this new combined unit will create the next wave of world-changing breakthroughs.”

        The Wall Street Journal (4/20, Kruppa, Subscription Publication) also provides coverage.

ChatGPT Could Be Costing OpenAI “Up To $700,000 A Day”

Insider (4/20, Mok) reports that ChatGPT “could cost OpenAI up to $700,000 a day because of the pricey tech infrastructure the AI runs on, Dylan Patel, chief analyst at semiconductor research firm SemiAnalysis, told The Information (4/20, Subscription Publication). That’s because ChatGPT requires massive amounts of computing power to calculate responses based on user prompts.” In a telephone interview with Insider, Patel “said it’s likely even more costly to operate now, as his initial estimate is based on OpenAI’s GPT-3 model. GPT-4 — the company’s latest model — would be even more expensive to run, he told Insider.”

dtau...@gmail.com

unread,
Apr 30, 2023, 2:52:42 PM4/30/23
to ai-b...@googlegroups.com

Copyright Protection Struggles to Fend Off AI-Generated Works
The Japan News
Yasuhiro Kobayashi; Shin Watanabe
April 27, 2023


The thorny issue of artificial intelligence (AI)-generated works infringing on copyrighted material is compounded by a lack of clarity about the extent to which such works are vetted for infringement. In January, three U.S. artists sued U.K. startup Stability AI, arguing its Stable Diffusion image generator used elements of their work without permission. They said the AI should not use copyrighted work without artists' consent, adding that credit and compensation should be duly allocated. No precedent exists in U.S. copyright law as to whether the learning and creation of works by AI constitutes fair use of copyright, leaving little guidance concerning intellectual property violations. U.K. authorities are aggressively regulating material that AI systems can collect and analyze, out of concern the systems' training violates copyright.
 

Full Article

 

 

Reddit Wants to Get Paid for Helping Teach Big AI Systems
The New York Times
Mike Isaac
April 18, 2023


Reddit announced that it will begin charging for access to its application programming interface. Google, OpenAI, Microsoft, and other tech companies have long used Reddit chats to train their artificial intelligence (AI) systems at no cost to them. This comes amid the rise of AI systems like OpenAI's ChatGPT and the growing importance of large language models (LLMs) in developing new AI technology. Additionally, there are concerns new AI systems could be used to create conversation forums that would compete with Reddit. Reddit's Steve Huffman said new and relevant data is needed by LLM algorithms, which makes Reddit's continuously updated data especially valuable. However, Huffman said, "We don't need to give all of that value to some of the largest companies in the world for free."

Full Article

*May Require Paid Registration

 

 

Moderna Teams Up with IBM to Put AI, Quantum Computing to Work on mRNA Technology for Vaccines
CNBC
Annika Kim Constantino
April 20, 2023


Moderna and IBM have partnered to advance messenger RNA (mRNA) vaccine technology via generative artificial intelligence (AI) and quantum computing. Their agreement will give Moderna's scientists access to IBM's quantum computing systems and experts and IBM's MoLFormer generative AI model to accelerate development of new mRNA vaccines and therapies. Said Moderna's Stephane Bancel, "We are excited to partner with IBM to develop novel AI models to advance mRNA science, prepare ourselves for the era of quantum computing, and ready our business for these game-changing technologies."

Full Article

 

 

Google Robots Learn to Sort Recyclables in Office Waste Bins
New Scientist
Alex Wilkins
April 21, 2023


Google researchers have developed waste-sorting robots that they say are able to learn on the job. In experiments, one group of 20 robots was taught to sort items into recycling, compost, and trash in a controlled environment using simulations and workstations where they could practice. A second group of 23 robots was allowed to wander around Google offices to locate stations with unsorted or incorrectly sorted waste and put the items in the appropriate receptacles. All of the robots used the same waste-sorting model, which improved regardless of whether they operated in a controlled or real-world environment. In a test after two years and almost 10,000 hours of sorting, the robots achieved an average accuracy rate of 84%.

Full Article

*May Require Paid Registration

 

 

Teaching Trucks to See
Princeton University
Daniel Oberhaus
April 25, 2023


Princeton University's Felix Heide pioneered the design of imaging systems that Daimler Truck subsidiary Torc Robotics has incorporated into a fleet of self-driving semi-trucks making test drives in Albuquerque, NM, over the last 18 months. Heide's holistic system development approach incorporates the use of cameras customized to specific tasks, which outperform existing autonomous vehicle systems in poor conditions. Heide said, "This approach of using AI [artificial intelligence] to create trainable models of the entire imaging and image analysis chain allows us to treat these camera systems as systems we can train and evolve so they are optimized for specific tasks."

Full Article

 

 

Transparent Labeling of Training Data May Boost Trust in AI
Penn State News
Matt Swayne
April 24, 2023


A study by Pennsylvania State University researchers found that trust in artificial intelligence (AI) could be improved by allowing users to see that visual data used to train AI systems was labeled correctly. The study involved 430 participants tasked with interacting with a prototype Emotion Reader AI website, which they were told had been trained on a dataset of nearly 10,000 labeled facial images. The emotions in half of the images were mislabeled. The study found a decline in trust among participants who perceived bias in the system's performance. However, there was no decrease in emotional connection with or desire to use the system among those who witnessed a biased performance by the AI.
 

Full Article

 

 

AI-Driven Robots Hunt for Novel Materials Without Help from Humans
Science
Robert F. Service
April 20, 2023


Lawrence Berkeley National Laboratory (LBNL) researchers are using artificial intelligence (AI) and robots to automate the process of predicting new materials and creating physical samples. In LBNL's A-Lab, the AI uses its understanding of chemistry to determine a plausible method for synthesizing a material, then guides robotic arms to choose from around 200 powdery starting materials. Another robot distributes the mixture into crucibles that are put into furnaces and mixed with different gases. The baking time, temperature, and drying times are calculated by the AI. After the new material is ground into a powder and transferred to a slide, the samples are moved by a robotic arm to other equipment for analysis. The process begins again if the results do not match the prediction.
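In outline, the loop resembles the hypothetical Python sketch below; every function is a stub standing in for the lab's AI planner, robotic hardware, and analysis equipment, and none of the names come from LBNL's actual software.

# Hypothetical sketch of a closed-loop materials-discovery cycle.
import random

def propose_synthesis(target):
    # AI planner: pick precursors from ~200 powders plus a bake plan
    return {"target": target,
            "precursors": random.sample(range(200), 3),
            "temp_c": random.choice([600, 800, 1000])}

def synthesize(recipe):
    # stand-in for the robots that mix, bake, grind, and mount the sample
    return {"recipe": recipe}

def analyze(sample):
    # stand-in for instrument analysis: a match score vs. the prediction
    return random.random()

def discover(target, max_attempts=10):
    for _ in range(max_attempts):
        recipe = propose_synthesis(target)
        sample = synthesize(recipe)
        if analyze(sample) > 0.9:  # product matches the predicted material
            return recipe
    return None  # no match yet; the planner would propose a new route

print(discover("hypothetical-oxide"))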

Full Article

 

Meta Seen As “Obsessed” With AI, Scrambling To Keep Up With Competitors

Insider (4/24, Barr) covers reports that Mark Zuckerberg is “getting too obsessed” with AI as Meta purchases “thousands” of NVIDIA A100 chips. Insider says, “Bernstein analysts issued a warning a few days after he made them and after meeting with tech investors. ‘Keep an eye on Zuckerberg’s newfound love with all things AI,’ they quipped in a research email. ‘It seems that the year of efficiency is winding down (we made it to April) and a name change to MetAI (our best guess) is now possible.’” Insider adds, “Investors and analysts only just recovered from the company’s metaverse spending splurge. They now want Meta to stay conservative on costs, and Meta’s massive layoffs, almost one-quarter of the company, appeared to be giving them what they want. But if Zuckerberg is going to keep up with OpenAI, Microsoft and Google in AI, it’s going to get expensive.”

        Reuters (4/25) reports, “As the summer of 2022 came to a close, Meta CEO Mark Zuckerberg gathered his top lieutenants for a five-hour dissection of the company’s computing capacity, focused on its ability to do cutting-edge artificial intelligence work, according to a company memo dated Sept. 20.” The executives “had a thorny problem: despite high-profile investments in AI research, the social media giant had been slow to adopt expensive AI-friendly hardware and software systems for its main business, hobbling its ability to keep pace with innovation at scale even as it increasingly relied on AI to support its growth, according to the memo, company statements and interviews with 12 people familiar with the changes.” The company’s capital expenditures “have coincided with a period of severe financial squeeze for Meta, which has been laying off employees since November at a scale not seen since the dotcom bust,” as well as a generative AI arms race that highlights “Meta’s belated embrace of the graphics processing unit...for AI work.”

Supreme Court YouTube Recommendation Case May Impact Future Of Generative AI

Reuters (4/24, Goudsward) reports a Supreme Court decision on whether YouTube can be sued for video recommendations “could have implications for rapidly developing technologies like artificial intelligence chatbot ChatGPT.” The case “tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.” Those algorithms, Reuters explains, are “somewhat similar” to how “generative AI tools like ChatGPT and its successor GPT-4 operate.”

Research: People Using ChatGPT To Write Some Amazon Reviews

CNBC (4/25, Palmer) reports research shows some “members of Amazon’s Vine program, launched in 2007,” have been using ChatGPT to write “some reviews for products sold on Amazon.” The review of listings for a variety of products showed “what appear to be AI-generated reviews,” as all included “the phrase ‘As an AI language model,’ a common response generated by OpenAI’s ChatGPT, along with generic descriptions of the product.” CNBC adds Amazon “said it prohibits review abuse, including offering incentives like gift cards to write positive reviews,” and “will suspend or ban users from its platform that violate these policies.”

        Likewise, Gizmodo (4/25, Hurler) reports Amazon “told Vice (4/24, Gault) that it has a zero tolerance policy for fake reviews, and that the company bans and takes legal action against users who violate that policy.” According to Gizmodo, “Amazon also explained to the outlet that it has teams of analysts and lawyers dedicated to uncovering fake reviews on its platform.”

        The Verge (4/25, Vincent) provides similar coverage.

OpenAI Introduces “Incognito Mode” For ChatGPT

Reuters (4/25) reports, “OpenAI is introducing what one employee called an ‘incognito mode’ for its hit chatbot ChatGPT that does not save users’ conversation history or use it to improve its artificial intelligence, the company said Tuesday.” OpenAI “also said it planned a ‘ChatGPT Business’ subscription with additional data controls.” Reuters adds, “Italy last month banned ChatGPT for possible privacy violations. ... Mira Murati, OpenAI’s chief technology officer, told Reuters the company was compliant with European privacy law and is working to assure regulators. The new features did not arise from Italy’s ChatGPT ban, she said, but from a months-long effort to put users ‘in the driver’s seat’ regarding data collection.”

        CNBC (4/25, Capoot) reports, “Any conversations that take place while chat history is disabled will not be used to train OpenAI’s models or appear in the ‘history’ sidebar, the company wrote in a blog post. OpenAI said it will keep the new conversations for 30 days, but it will only review them if it is necessary to monitor for abuse.”

        Also reporting are Bloomberg (4/25, Metz, Subscription Publication), Axios (4/25), Engadget (4/25), Livemint (IND) (4/25), and PCMag (4/25).

Biden Administration Launches Effort To Counter AI-Linked Discrimination

The Washington Post (4/25) reports, “Regulators across the Biden administration on Tuesday unveiled a plan to enforce existing civil rights laws against artificial intelligence systems that perpetuate discrimination, as the rapid evolution of ChatGPT and other generative artificial intelligence tools exacerbates long-held concerns about bias in American society.” The Post adds, “With AI increasingly used to make decisions about hiring, credit, housing and other services, top leaders from the Equal Employment Opportunity Commission and other federal watchdogs warned about the risk of ‘digital redlining.’ The officials said they are concerned that faulty data sets and poor design choices could perpetuate racial disparities. ... ‘There is no AI exemption to the laws on the books,’ said Federal Trade Commission Chair Lina Khan (D), one of several regulators who appeared during the Tuesday news conference to signal a ‘whole of government’ approach.”

        The AP (4/25, Chan) adds, “Amid a fast-moving race between tech giants such as Google and Microsoft in selling more advanced tools that generate text, images and other content resembling the work of humans, Khan also raised the possibility of the FTC wielding its antitrust authority to protect competition.” Khan “joined top officials from U.S. civil rights and consumer protection agencies to put businesses on notice that regulators are working to track and stop illegal behavior in the use and development of biased or deceptive AI tools.”

Analysts Say Google’s Hesitation On AI Has Given Microsoft Opportunity To Overtake It

CNBC (4/26, Kharpal) reports, “Cyrus Mewawalla, head of thematic intelligence at GlobalData, called AI the big theme of 2023 and said that ‘Microsoft has stolen a lead on Google’ with its investment in OpenAI – the company behind ChatGPT.” Regarding Google’s years of investment in AI through acquisitions such as DeepMind, Mewawalla said, “In a way in 2022, it (Google) had a Kodak moment. It had the leading product but it kept it aside for fear that it could cannibalize its core business. Now its core business is under massive threat.” Arete Research Senior Analyst Richard Kramer said, “Google’s issue is that they have the brightest minds in AI, they have the rockstars, they have a third of the top hundred cited papers in AI, but they’re an engineering-led company, and they have not productized what they’ve done.”

Warner Urges Tech Companies To Address AI Security Risks

Reuters (4/26) reports Sen. Mark Warner (D-VA) “urged CEOs of several artificial intelligence (AI) companies to prioritize security measures, combat bias, and responsibly roll out new technologies.” Warner “raised concerns about potential risks posed by AI technology.” In a letter sent to technology companies, he said, “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field. With the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work.”

        Schumer Meets With Elon Musk To Discuss AI, Tech Developments. The Washington Post (4/26) reports Tesla CEO Elon Musk on Wednesday “said he discussed artificial intelligence in a meeting with Senate Majority Leader Charles E. Schumer, as Washington policymakers increasingly debate regulations of the quickly emerging technology.” The Senator “told reporters that he had a ‘very good meeting’ with Musk, where discussions ranged from Tesla’s plant in Buffalo to AI.” The Post notes that as lawmakers “are increasingly turning their gaze to regulating” AI, “tech executives and critics alike have been swarming policymakers in recent weeks, seeking to influence the increasing political debate.”

Privacy Policies Leave Room For Generative AI Patient Input To Be Used For Advertising, Experts Warn

The Washington Post (4/27, Hunter) reports, “Since OpenAI, Microsoft and Google introduced AI chatbots, millions of people have experimented with a new way to search the internet: Engaging in a conversational back-and-forth with a model that regurgitates learnings from across the web.” However, “these tools repeat some familiar privacy mistakes, experts say, as well as create new ones.” Center for Digital Democracy Executive Director Jeffrey Chester explained, “Consumers should view these tools with suspicion at least, since – like so many other popular technologies – they are all influenced by the forces of advertising and marketing.”

        NYU Professor: AI Offers Compelling Opportunities Despite Prevailing Sense Of “AI Grief.” In an op-ed for the Wall Street Journal (4/26, Subscription Publication), NYU professor Suzy Welch discusses the concept of “AI Grief,” which refers to the sense of uncertainty felt by many in the wake of the massive disruption AI presents to many aspects of life. Welch writes that despite this sense of grief, one should nonetheless seek to embrace the opportunity AI offers.

Tech Companies Strongly Emphasize AI In Earnings Calls

VentureBeat (4/27, Goldman) reports, “Tech giants Alphabet, Microsoft and Meta all reported robust revenue growth in their first-quarter earnings calls this week, highlighting their ambitions and investments in artificial intelligence. The term ‘AI’ was repeated dozens of times by the executives and analysts on the calls, reflecting the industry’s belief that AI is the key to innovation and competitive advantage. Alphabet’s call mentioned AI 50 times, followed by Meta with 49 times and Microsoft with 46 times.” VentureBeat says, “The number of references to AI is just the latest signal that investors are clamoring for opportunity to invest in generative AI technology, which has captivated Silicon Valley in recent months. It also signals that Alphabet, Microsoft and Meta are now being viewed as a bellwether for the entire AI industry.”

        Insider (4/27, Kay) reports, “Mark Zuckerberg was careful to remind investors on Wednesday that Meta is very much invested in the AI arms race underway in the tech industry. The CEO mentioned AI no less than 22 times during his roughly 11-minute opening presentation and five more times as he answered questions throughout the meeting. Overall, the word ‘AI’ was used 57 times on Wednesday between Zuckerberg, analysts’ questions, and comments from Meta CFO Susan Li.”

        Elon Musk Ramps Up AI Activities While Voicing Concerns. The New York Times (4/27, Metz, Mac, Conger) reports that Elon Musk “has ramped up his own A.I. activities” since cutting off OpenAI’s access to Twitter data, “while arguing publicly about the technology’s hazards.” Musk “is in talks with Jimmy Ba, a researcher and professor at the University of Toronto, to build a new A.I. company called X.AI.” The Times says, “The actions are part of Mr. Musk’s long and complicated history with A.I., governed by his contradictory views on whether the technology will ultimately benefit or destroy humanity. Even as he recently jump-started his A.I. projects, he also signed an open letter last month calling for a six-month pause on the technology’s development because of its ‘profound risks to society.’”

Michigan School Districts Address Student Use Of Debated AI Software

Chalkbeat (4/27, Bakuli) reports the Detroit Public Schools Community District “is updating its technology use policies to address concerns about the impact of artificial intelligence tools on student learning.” An early draft “of the revised language says that the use of artificial intelligence and natural language processing software tools ‘without the express permission/consent of a teacher is considered to undermine the learning and problem-solving skills that are essential to a student’s academic success and that the staff is tasked to develop in each student.’” Newly powerful artificial intelligence software “has generated a wave of publicity in recent months,” and has also “stirred debate among school officials and educators about the impact, and risks, in the classroom.” The DPSCD policy draft language “doesn’t ban the use of programs like ChatGPT outright. Rather, it says that students can use these tools to conduct research, analyze data, translate texts in different languages, and correct grammatical mistakes, as long as they have teacher permission.”

dtau...@gmail.com

unread,
May 7, 2023, 8:57:38 AM5/7/23
to ai-b...@googlegroups.com

White House Pushes Tech CEOs to Limit Risks of AI
The New York Times
David McCabe
May 4, 2023


Vice President Kamala Harris and other White House officials urged technology CEOs to limit the risks of artificial intelligence (AI) at the Biden administration's first meeting with major AI executives since the release of tools like OpenAI's ChatGPT chatbot. Harris said the private sector is ethically, morally, and legally obligated to ensure their products are safe and secure, and "must comply with existing laws to protect the American people." The AI boom has raised anxiety about the technology's economic and geopolitical ramifications, as well as its potential to strengthen criminal activity. Critics have cited the opaqueness of AI systems, with fears of discrimination, job displacement, misinformation, and even lawbreaking by AIs. The companies behind these AIs counter that elected officials must take action to set industry regulations for the technology.

Full Article

*May Require Paid Registration

 

 

'Godfather of AI' Leaves Google, Warns of Danger
The New York Times
Cade Metz
May 1, 2023


Artificial intelligence (AI) pioneer Geoffrey Hinton has resigned from Google, warning about the risks of generative AI-based products. With the technology already being used to produce misinformation, Hinton and others fear it could soon threaten jobs, and even humanity. The neural network technology that became Hinton's life's work earned him and two collaborators ACM's 2018 A.M. Turing Award and led to the creation of chatbots like OpenAI's ChatGPT. Hinton said the threat of AI systems escalates as their capabilities improve, and Google's decision to deploy generative AI systems in response to Microsoft enhancing its Bing search engine with a chatbot is concerning. His immediate fear is of the Internet being swamped with fake content, while in the longer term chatbots could replace professionals who perform rote tasks.

Full Article

 

 

Optical Neural Networks Hold Promise for Image Processing
Cornell Chronicle
Diane Tessaglia-Hymes
April 27, 2023


An optical neural network (ONN) developed at Cornell University could help pave the way for faster, smaller, and more energy-efficient image sensors. The ONN can filter relevant data from a scene before a camera detects the visual image. Said Cornell's Tianyu Wang, "By discarding irrelevant or redundant information, an ONN can quickly sort out important information, yielding a compressed representation of the original data, which may have a higher signal-to-noise ratio per camera pixel." The researchers observed compression ratios of up to 800-to-1 with ONN pre-processors. They also demonstrated the original image could be reconstructed using data generated by ONN encoders trained only for image classification. Wang noted, "The reconstructed images retained important features, suggesting that the compressed data contained more information than just the classification."
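The compression arithmetic can be illustrated with a toy numerical stand-in; the image size, readout size, and random linear map below are invented for illustration and are not the Cornell design.

# Toy stand-in for an ONN pre-processor: one linear map that compresses
# an image before any electronic sensor reads it out.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((200, 200))  # 40,000 "pixels" entering the optics
x = image.ravel()

k = 50  # size of the compressed readout
W = rng.normal(size=(k, x.size)) / np.sqrt(x.size)  # stand-in for trained optics
z = W @ x  # compressed representation captured by the camera

print(f"compression ratio: {x.size // k}-to-1")  # 800-to-1, as reported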

Full Article

 

 

AI in the ICU
Carnegie Mellon University School of Computer Science
Kayla Papakie
April 25, 2023


Scientists at Carnegie Mellon University, the University of Pittsburgh, and the University of Pittsburgh Medical Center tested an artificial intelligence (AI)-based tool's viability for helping intensive care unit (ICU) doctors make critical decisions. The AI Clinician Explorer interactive clinical decision support interface offers recommendations for treating sepsis. The researchers trained the model on a dataset of more than 18,000 patients who satisfied standard diagnostic criteria for sepsis while in the ICU. Clinicians can use the system to screen and look for patients in the dataset, visualize their disease pathways, and compare model predictions to actual bedside treatment decisions. The researchers gave 24 ICU physicians access to the tool and found that most used it to inform some of their sepsis treatment decisions for four simulated patients.

Full Article

 

Study: Patients Prefer ChatGPT’s Medical Responses To Physicians’

CNN (4/28, McPhillips) reported a JAMA Internal Medicine study published on Friday “suggests that physicians may have some things to learn from” ChatGPT “when it comes to patient communication.” The study “assessed responses to about 200 different medical questions posed to a public online forum,” and according to researchers, ChatGPT responses were “preferred over physician responses and rated significantly higher for both quality and empathy.” CNN added that “more than a quarter of responses from physicians were considered to be less than acceptable in quality compared with less than 3% of those from ChatGPT,” while “nearly half of responses from ChatGPT were considered to be empathetic (45%) compared with less than 5% of those from physicians.”

AI Chatbot Outperforms Human Physicians In Responding To Patient Questions In Study

USA Today (5/1, Weintraub) reports, “A new study finds that chatbots are just as accurate and far more empathetic than doctors at answering basic patient questions.” Researchers “took 195 patient questions posed on the website Reddit in October 2022 and compared physician responses on the site to those provided later by a chatbot.” They discovered “evaluators preferred the chatbot responses 79% of the time and were nearly 10-times more likely to rate chatbot answers as ‘empathetic’ or ‘very empathetic’ than the doctors’.” The Hill (5/1, Mueller) reports, “The study suggests that, after further study, chatbots could be used to draft responses to patient questions that physicians could then edit.”

Lawmakers Begin Exploring Regulation Of AI

The Hill (4/28) reports Senate Majority Leader Schumer met with Elon Musk this week “to discuss the future of artificial intelligence,” which could be “a sign of a possible thaw in relations between Democrats and one of the nation’s most powerful CEOs.” Relations “soured steadily since Musk announced his plans to buy Twitter last year,” with Democrats upset “over his decision to cut content moderators from the company and reinstate former President Trump’s account.” Meanwhile, Reuters (4/28, Staff) reports Sen. Michael Bennet (D-CO) “introduced a bill on Thursday that would create a task force to look at U.S. policies on artificial intelligence, and identify how best to reduce threats to privacy, civil liberties and due process.”

How Artificial Intelligence Will Impact Education In Long-Term

Education Week (4/28, Prothero) reported that while “most of the focus on artificial intelligence in K-12 education has been on ChatGPT and how students can use it to cheat,” that obscures “the bigger changes to education that recent advances in AI are kicking off, said Peter Stone, an expert on the future of this technology.” Stone, a UT Austin professor, chairs the One Hundred Year Study on Artificial Intelligence (AI100), “which draws insights from experts around the world in reports published every five years to try to understand what the long-term impacts of AI will be on society.” Education Week “put three questions to Stone on how AI will likely impact education.” On how he sees AI “disrupting or fundamentally changing K-12 education,” Stone said students “need to know how to use artificial intelligence technologies and also to be literate as to what AI is capable of, what it’s not capable of, what its potential uses and misuses are.”

“Godfather Of AI” Quits Google To Speak Out About AI’s Risks

The New York Times (5/1, Metz) reports, “Geoffrey Hinton was an artificial intelligence pioneer,” but “on Monday...he officially joined a growing chorus of critics who say [that] companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence.” Hinton, “often called ‘the Godfather of A.I.’ ... said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.” The Times says, “Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades.”

        The Hill (5/1) reports Hinton and two graduate students at the University of Toronto in 2012 “built tools that helped lead to the creation of AI systems, something he said he partly regrets.” Hinton, a former vice president and engineering fellow at Google, said, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” More than 1,000 tech researchers have signed onto an open letter asking for a six-month pause on the expansion of AI, citing “profound risks to society and humanity.” Hinton told the Times “he did not sign the letter because he didn’t want to publicly condemn his employer before he resigned.”

        The New York Post (5/1) reports Hinton expressed concern that the pace of AI development will accelerate as Microsoft, Google, and other tech giants “race to lead the field – with potentially dangerous consequences.” In a recent interview with CBS’s “60 Minutes,” Google CEO Sundar Pichai acknowledged that AI would cause job losses for “knowledge workers,” such as writers and software engineers. Pichai also “detailed bizarre scenarios in which Google’s AI programs have developed ‘emergent properties’ – or learned unexpected skills in which they were not trained.”

        SiliconANGLE (5/1) reports, “Until last year, Hinton detailed, Google acted as a ‘proper steward’ of its internally developed AI technology. Since then, the search giant and Microsoft Corp. have both introduced advanced chatbots powered by large language models. Hinton expressed concerns that the competition between the two companies in the AI market may prove ‘impossible to stop.’” Hinton “cautioned that generative AI could be used to flood the internet with large amounts of false photos, videos and text,” and “also expressed concerns about AI’s long-term impact on the job market.” Hinton is quoted saying, “The idea that this stuff could actually get smarter than people – a few people believed that. ... But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

        Engadget (5/1) reports Hinton, discussing how generative AI could lead to a wave of misinformation, said you might “not be able to know what is true anymore.” He is worried AI might not just eliminate “drudge work,” but outright replace some jobs. Going forward, Hinton “is concerned about both the possibility of fully autonomous weapons and the tendency of AI models to learn odd behavior from training data. While some of these issues are theoretical, Hinton fears an escalation that won’t be checked without regulations or the development of effective controls.”

        CNBC (5/1, Elias) reports Hinton referenced the power of GPT-4 from startup OpenAI when explaining his decision to resign from Google. He told CNBC, “If I have 1,000 digital agents who are all exact clones with identical weights, whenever one agent learns how to do something, all of them immediately know it because they share weights. Biological agents cannot do this. So collections of identical digital agents can acquire hugely more knowledge than any individual biological agent. That is why GPT-4 knows hugely more than any one person.”

        Ars Technica (5/1) and Fortune (5/1) also report.

AI “Hallucinations” Challenging To Fact-Check, Improve

The New York Times (5/1, Weise, Metz) discusses generative AI “hallucinations,” saying, “Figuring out why chatbots make things up and how to solve the problem has become one of the most pressing issues facing researchers as the tech industry races toward the development of new A.I. systems.” Beyond just repeating “the same untruths” found on the internet, chatbots “produce new text, combining billions of patterns in unexpected ways. This means even if they learned solely from text that is accurate, they may still generate something that is not. Because these systems learn from more data than humans could ever analyze, even A.I. experts cannot understand why they generate a particular sequence of text at a given moment. And if you ask the same question twice, they can generate different text. That compounds the challenges of fact-checking and improving the results.”

Columbia University Professor Works To Expose Students To AI’s Future In Art

The New York Times (5/1, Small) reports Lance Weiler “is preparing his students at Columbia University for the unknown.” He wants his students “ready for an art world that is gradually embracing the latest digital tools,” and for months, “he has been rehearsing his students and their A.I. creations for a workshop this week at New York’s Lincoln Center and a performance at the Music Center in Los Angeles in the fall, where representatives from the art and entertainment industries will be in the audience, looking to hire young recruits.” These immersive performances, “co-productions of man and machine, employ A.I. programs like ChatGPT and Midjourney, which can produce scripts and artworks based on algorithms and replicate human creativity by devouring billions of datapoints from across the internet.”

Researchers Develop AI To Translate Person’s Thoughts Into Speech Using fMRI Scans

The New York Times (5/1, Whang) reports, “In a study published in the journal Nature Neuroscience...researchers described an AI that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions in the brain.” Still, “this language-decoding method had limitations,” since “fMRI scanners are bulky and expensive,” and “training the model is a long, tedious process” that “must be done on individuals.”

        Also reporting are NPR (5/1, Hamilton) and STAT (5/1, Trang, Subscription Publication).

White House Launches Investigation Of Workplace AI Issues

Bloomberg (5/1, Eidelson, Subscription Publication) reports that the White House is “probing how companies use artificial intelligence to monitor and manage workers, practices the Biden Administration says are increasingly prevalent and can inflict significant harm. ‘While these technologies can benefit both workers and employers in some cases, they can also create serious risks to workers,’ deputies from the White House Domestic Policy Council and White House Office of Science and Technology Policy wrote in a blog post slated for publication later Monday, announcing a formal request for information from the public about how automated tools are being deployed in the workplace.”

        Reuters (5/1, Bartz) reports the OSTP said, “Monitoring conversations can deter workers from exercising their rights to organize and collectively bargain with their employers. And, when paired with employer decisions about pay, discipline, and promotion, automated surveillance can lead to workers being treated differently or discriminated against.” Reuters adds, “The Biden administration has made labor issues a centerpiece of its economic policies following years of wages failing to keep up with inflation on basics like housing.”

Microsoft Chief Scientific Officer Says AI “Acceleration” Needed Rather Than Six-Month Pause

Insider (5/2) reports Microsoft chief scientific officer Eric Horvitz “has addressed an open letter signed by Elon Musk and thousands of others calling for a pause on AI development.” Horvitz “told Fortune (4/30) in an interview published Sunday that while he respects the people who signed the letter and understood that people might have concerns about AI, he believes an ‘acceleration’ – not a pause – is actually necessary.” Horvitz is quoted saying, “To me, I would prefer to see more knowledge, and even an acceleration of research and development, rather than a pause for six months. ... Six months doesn’t really mean very much for a pause. We need to really just invest more in understanding and guiding and even regulating this technology – jump in, as opposed to pause.”

        Microsoft Testing Private Version Of ChatGPT, Sources Say. The Information (5/2, Ma, Subscription Publication) reports two sources say Microsoft Azure “later this quarter...plans to sell a version of ChatGPT that runs on dedicated cloud servers where the data will be kept separate from those of other customers.” According to The Information, “The idea is to give customers peace of mind that their secrets won’t leak to the main ChatGPT system, the people said,” but it “could cost as much as 10 times what customers currently pay to use the regular version of ChatGPT, one of these people said.” Ars Technica (5/2, Cunningham) reports OpenAI is supposedly planning a similar product “in the coming months” – a subscription where input fed to ChatGPT by a business’s employees and customers won’t be used to train its language models by default. Microsoft’s version “will use the company’s Azure platform as its backend rather than competing platforms like Amazon Web Services.”

        Friedman Warns Companies To Proceed With Caution On AI. In his New York Times (5/2) column, Thomas Friedman warns humanity is in the process of opening one Pandora’s box “labeled ‘artificial intelligence,’ and it is exemplified by the likes of ChatGPT, Bard and AlphaFold, which testify to humanity’s ability for the first time to manufacture something in a godlike way that approaches general intelligence, far exceeding the brainpower with which we evolved naturally,” and another “labeled ‘climate change.’” Friedman says after failing to “understand how much social networks would be used to undermine the twin pillars of any free society – truth and trust,” regulations and ethics are needed for AI. He warns if companies “approach generative A.I. just as heedlessly – if we again go along with Mark Zuckerberg’s reckless mantra at the dawn of social networks, ‘move fast and break things’ – oh, baby, we are going to break things faster, harder and deeper than anyone can imagine.” However, he concedes AI “could be our savior.”

Artificial Intelligence Could Create Superior mRNA Sequences For Vaccines

Nature (5/2, Dolgin) reports on the deployment of artificial intelligence in designing mRNA sequences that could improve the performance of vaccines. Developed by scientists at the California division of Baidu Research, “an AI company based in Beijing,” the AI-augmented mRNA sequences could contain “more-intricate shapes and structures than those used in current vaccines.” Among the potential benefits to that achievement are making “immunized individuals better equipped to fend off infectious diseases” and “improved protection against vaccine degradation.” At this time, though, the mRNA sequences have yet to be used in any human trials, and there are questions about whether the approach “could end up creating vaccine sequences that spur harmful immune reactions in people.”

Chegg’s Stock Loses Nearly Half Of Value Over Fears Of ChatGPT Competition

The AP (5/2, O'Brien, Grantham-Philips) reports that “shares of the education technology company Chegg lost nearly half their value Tuesday after its CEO warned that OpenAI’s free ChatGPT service was cutting into its growth.” Chegg CEO Dan Rosensweig “told investors on a conference call Monday that early in the year, the company was meeting expectations for new sign-ups for its educational services. But that shifted in recent months.” Rosensweig said, “Since March we saw a significant spike in student interest in ChatGPT. We now believe it’s having an impact on our new customer growth rate.”

        In its Digital Future Daily newsletter, Politico (5/2, Robertson) reports Chegg is “a somewhat unique case, as a company that was the subject of intense scrutiny even before its seeming admission of defeat by ChatGPT – it allows students to post their homework online in search of answers from other people.” Chegg calls this “homework help.” Many educators call it “cheating.” Students “apparently now call it irrelevant, as ChatGPT provides for free something close enough to the service for which Chegg currently charges $15.95 a month.” Matthew Mittelsteadt, a tech researcher at the free-market-oriented Mercatus Center, said, “Good riddance. The exact type of thing that I’m hoping gets disrupted or destabilized by [AI] is these corporations that engage in this rent-seeking behavior.”

Leaked Amazon Document Details “Big Plans” To Boost Alexa With Generative AI

Insider (5/2, Kim) reports, “Despite laying off 2,000 Alexa employees late last year, Amazon CEO Andy Jassy has big plans to reboot the voice-assistant with ChatGPT-like features, leaked documents seen by Insider reveal.” A document titled “Alexa LLM Entertainment Use Cases” “specifically focused on new entertainment features for Alexa including more conversational video search, personal recommendations, and storytelling and news reading capabilities.” An Amazon spokesperson is cited saying the company’s homegrown large language model and generative AI technology, Alexa Teacher Model, will serve as Alexa’s underlying AI technology. The spokesperson is quoted saying, “We’re also building new models that are much larger and much more generalized and capable, which will take what already is the world’s best personal assistant and accelerate our path to being even more proactive and conversational.”

Harris To Discuss AI In White House Meeting With Big Tech CEOs

Vice President Harris and other Administration officials on Thursday will meet “the chief executives of Alphabet Inc’s Google, Microsoft, OpenAI and Anthropic” to discuss “key artificial intelligence issues, a White House official told” Reuters (5/2, Bose, Shepardson), which also reports the invitation to the CEOs noted President Biden’s “expectation that companies like yours must make sure their products are safe before making them available to the public.” The invitation also says officials aim to hold “a frank discussion of the risks we each see in current and near-term AI development, actions to mitigate those risks, and other ways we can work together to ensure the American people benefit from advances in AI while being protected from its harms.”

Federal Officials Voice Concern Over AI Cybersecurity Threats, Seek Preemptive Measures

The Washington Post (5/2, Starks) reports, “U.S. officials say AI will be a big cyberthreat” but “how it’ll materialize is less clear.” Federal officials express concern over potential cybersecurity threats posed by artificial intelligence (AI), although they are unsure of the exact nature of these threats. As AI continues to advance without proper safeguards, lawmakers are looking into ways to address and mitigate potential cyber risks. The Post adds, “Rob Joyce, director of cybersecurity for the National Security Agency, called AI a ‘game-changing technology that’s emerging.’” The Post also reports “Joyce said he doesn’t expect to have many examples of how adversaries are exploiting AI until next year.” Joyce said, “I won’t say it’s delivered yet. … In the near term, I don’t expect some magical technical capability that is AI generated that will exploit all the things.”

        AI Expected To Help Cyberdefenders, Not Cybercriminals. Axios (5/2, Sabin) reports that worries about cybercriminals “incorporating artificial intelligence into their schemes anytime soon” are vastly overblown, as doing so would take time and money, which opportunistic cybercriminals don’t usually have. Instead, AI is expected to help cyberdefenders block “run-of-the-mill security holes that criminals keep exploiting.”

Education, Business, Nonprofit Organizations Launch AI Learning Initiative

Education Week (5/3) reports “a group of influential education, business, and nonprofit organizations – including Code.org, the Educational Testing Service, the International Society for Technology in Education, and the World Economic Forum – announced an initiative May 2 to help schools determine what role artificial intelligence should play in K-12 education.” Dubbed TeachAI, “the effort plans to help schools and state education departments figure out how to effectively integrate AI into curricula, while also protecting students’ online safety and privacy, and ensuring educators and students understand the possible pitfalls of AI.” Those potential pitfalls “include AI systems that are trained to make decisions based on biased data fed into them and the potential of AI to help spread misinformation and disinformation.” TeachAI “plans to recommend best practices for helping students understand what’s behind the technology that powers artificial intelligence, as well as bringing AI learning tools and assessments into schools in thoughtful ways that protect students’ data privacy, the groups say.”

Hinton: AI Systems May Already Be Outsmarting Humanity

Forbes (5/3, Morris) reports, “At MIT’s EmTech Digital conference, cognitive psychologist and computer scientist Geoffrey Hinton spoke about how he’s helped build machines that are immortal and the various dangers these machines now pose to humanity: ‘Smart things can outsmart us,’ says Hinton.” Hinton likens AI to a hive mind that can make thousands of copies of itself: everything one copy learns, the whole hive learns. Compounding the risk, these are goal-oriented machines that need control over their autonomy and environment to better achieve their goals, whether the goals are programmed by a human or autonomously generated. Hinton emphasizes the control problem: “If these things get carried away with getting more control, we’re in trouble.”

        The AP (5/3) reports, “Researchers have long noted that artificial neural networks take much more time to absorb and apply new knowledge than people do, since training them requires tremendous amounts of both energy and data. That’s no longer the case, Hinton argues, noting that systems like GPT-4 can learn new things very quickly once properly trained by researchers. ... That leads Hinton to the conclusion that AI systems might already be outsmarting us.”

        Another AP (5/3) article reports that “computer scientists who helped build the foundations of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.” Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he’s ‘pretty much aligned’ with Hinton’s concerns brought on by chatbots such as ChatGPT and related technology, but worries that to simply say ‘We’re doomed’ is not going to help.

        Related: Columnist Wishes Hinton Had Spoken Up Sooner. Parmy Olson writes in Bloomberg Opinion (5/3) that “it’s hard not to be worried about AI when the so-called godfather of artificial intelligence, Geoffrey Hinton, says he is leaving Google and regrets his life’s work.” Olson says, “Hinton’s concerns make sense, but they would have been more effective if they had come several years earlier, when other researchers who didn’t have retirement to fall back on were ringing the same alarm bells.” Olson adds, “While Hinton’s prominence in the field might have insulated him from blowback, the episode highlights a chronic problem in AI research: Large tech firms have such a stranglehold on AI research that many of their scientists are afraid of airing their concerns for fear of harming their careers. ... If today’s researchers are willing to speak up now, while it matters, and not right before they retire, we are all likely to benefit as a species.”

FTC’s Khan Urges Caution As AI Is Developed

In an op-ed for the New York Times (5/3, Khan), FTC Chair Lina Khan outlines her agency’s planned approach to regulating AI, as advancements in generative AI will have “vast implications for how people live, work and communicate around the world.” Khan identifies several risks associated with the expanding adoption of AI, including the risk of further “locking in the market dominance of large incumbent technology firms.” Additionally, the use of AI “risks turbocharging fraud” and threatens to violate user privacy. Khan concludes by calling on policymakers to enforce existing rules and antitrust laws, promote fair competition, and foster innovation while ensuring that AI technologies are developed lawfully.

Column: Khan Academy’s Chatbot Justifies AI’s Impact On Education

In his column for the New York Times (5/3), Peter Coy writes Sal Khan, founder of Khan Academy, has teamed up with OpenAI to explore how artificial intelligence (AI) could improve education. He has been using GPT-4 to train Khanmigo, AI-powered tutoring software designed to guide students using “Socratic dialogue” rather than just providing answers. While “Khan Academy has been a game changer for education,” Khan believes Khanmigo “is a game changer for Khan Academy.”

At Summit Of Tech Leaders, White House “Signaled Support” For AI Regulations

The Washington Post (5/4, Zakrzewski) reports the White House “signaled support for potential new AI regulations and legislation following a meeting with the CEOs of Google, Microsoft, Anthropic and OpenAI.” Vice President Harris “offered few specifics about what kinds of regulation the Biden administration would support but said that she and President Biden are committed to ‘doing our part’ to ensure people safely benefit from AI.” The New York Times (5/4, McCabe) reports it “was the first White House gathering of major A.I. chief executives since the release of tools like ChatGPT.” In a statement, Harris said, “The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products. And every company must comply with existing laws to protect the American people.”

        The AP (5/4) reports Biden “briefly dropped by the meeting in the White House’s Roosevelt Room, saying he hoped the group could ‘educate us’ on what is most needed to protect and advance society. ‘What you’re doing has enormous potential and enormous danger,’ Biden told the CEOs, according to a video posted to his Twitter account.” In a preview of the meeting, Axios (5/4) reports Biden “himself has experimented with ChatGPT and was fascinated by the tool, Axios has learned.”

        Reuters (5/4, Bose, Shepardson) reports that in “response to a question about whether companies are on the same page on regulations,” OpenAI CEO Sam Altman “told reporters after the meeting ‘we’re surprisingly on the same page on what needs to happen.’”

        White House Announces $140 Million Investment In AI Research. CNBC (5/4, Kinery) reports the White House also “announced it would invest $140 million to create seven artificial intelligence research hubs and released new guidance on AI.” In addition, the White House on Thursday “promised it would release guidelines for use by government agencies. AI developers are also expected to agree to have their products reviewed at the upcoming DEF CON cybersecurity conference in August.” Bloomberg (5/4, Sink, Subscription Publication) reports the new funding for research hubs “comes on top of around $360 million previously announced for an initial round of 18 institutes.”

        USA Today (5/4) reports the new institutes “will advance AI R&D in critical areas, including climate change, agriculture, energy, public health, education, and cybersecurity.” The Hill (5/4) reports that the commitment from tech companies “is part of an effort to provide information for researchers and the public about AI models and figure out how the models align with the principles laid out in the administration’s Blueprint for an AI Bill of Rights.”

        AI Boom Prompts “Legislative Chaos” In US Congress. Politico (5/4, Bordelon, Chatterjee) reports, “The planet’s fastest-moving technology has spurred Congress into a sudden burst of action, with a series of recent bills, proposals and strategies all designed to rein in artificial intelligence. There’s just one problem: Nobody on Capitol Hill agrees on what to do about AI, how to do it – or even why.” The “legislative chaos” from multiple legislators with differing views and agendas “threatens to leave Washington at sea as generative AI explodes onto the scene – potentially one of the most disruptive technologies to hit the workplace and society in generations.”

        About Half Of Americans Think Congress Should Take “Swift Action” To Regulate AI. Approximately “half of Americans said Congress should be taking action to regulate artificial intelligence (AI) technology, according to a poll released Thursday,” The Hill (5/4, Klar) reports. About 54% “of polled registered voters said Congress should take ‘swift action’ to regulate the technology in a way that promotes privacy, fairness and safety to ensure ‘maximum benefit to society with minimal risks,’ according to the poll conducted for the Omidyar Network-funded Tech Oversight Project.”

Google To No Longer Freely Publish AI Research

The Washington Post (5/4) reports, “In February, Jeff Dean, Google’s longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world.” The launch three months earlier of ChatGPT prompted Google to change its prior approach, under which “Dean had run his department like a university, encouraging researchers to publish academic papers prolifically.” The Post says that OpenAI “kept up with Google by reading the team’s scientific papers,” so “Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products, Dean said.” The Post adds, “The policy change is part of a larger shift inside Google. ... the tech giant has lurched into defensive mode – first to fend off a fleet of nimble AI competitors, and now to protect its core search business, stock price, and, potentially, its future, which executives have said is intertwined with AI.”

Microsoft Opens Up Its AI-Powered Bing To All Users

CNN Business (5/4, Kelly) reports Microsoft is opening its new AI-powered version of the Bing search engine to anyone who wants to use it. IBM, Amazon, Baidu, and Tencent are working on similar technologies using AI. Insider (5/4, Bhaimiya) reports, “A swathe of new AI features are coming to Microsoft’s Bing, including the ability to search visually with images instead of just text,” as well as “the ability to make restaurant reservations, search movies, [and] save chat history.” CNBC (5/4, Novet) reports Microsoft has also announced plans to incorporate ChatGPT “into its Microsoft 365 productivity software and bring out a chatbot for security practitioners, among other products.”

Analysis: Today’s AI Will Succeed In Delivering Personalized Learning

In an analysis for The Seventy Four (5/4), Walton Family Foundation adviser and Chan Zuckerberg Initiative fellow John Bailey writes, “Over the last decade, educators and administrators have often encountered lofty promises of technology revolutionizing learning, only to experience disappointment when reality failed to meet expectations.” Now, the recent “so-called large-language models” can produce “relevant, coherent and creative responses to prompts.” Bailey says, “I believe that society may be in the early stages of a transformative moment, similar to the introduction of the web browser and the smartphone.” Among “four reasons why this generation of AI tools is likely to succeed where other technologies have failed,” Bailey says, “One of the remarkable aspects of these systems is their ability to interpret and respond to natural language commands, eliminating the need to navigate confusing menus or create complicated formulas.”

 

dtau...@gmail.com

unread,
May 13, 2023, 7:58:46 PM5/13/23
to ai-b...@googlegroups.com

Mass Event Will Let Hackers Test Limits of AI Technology
Associated Press
Matt O'Brien
May 10, 2023


Major artificial intelligence (AI) providers are working with the White House to offer thousands of hackers the opportunity to "jailbreak" their AI language models and uncover vulnerabilities. Rumman Chowdhury, who is coordinating a mass hacking event for this summer's DEF CON hacker convention, explained, "We need a lot of people with a wide range of lived experiences, subject matter expertise, and backgrounds hacking at these models and trying to find problems that can then go be fixed." Chowdhury described hackathons like the White House-associated exercise as "a direct pipeline to give feedback to companies," with participants compiling reports and detailing common flaws and patterns.

Full Article

 

 

Training Machines to Learn More Like Humans Do
MIT News
Adam Zewe
May 9, 2023


Scientists at the Massachusetts Institute of Technology (MIT) and Toyota subsidiary Woven Planet found computer vision models can be trained to produce more stable, predictable visual representations, similar to those humans learn through perceptual straightening. The researchers trained the models on millions of examples using adversarial training, which enhanced their perceptual straightness while reducing their reactivity to slight errors within images. They discovered the models trained on more perceptually straight representations could correctly classify objects in videos with greater consistency. MIT's Vasha DuTell said, "One of the take-home messages here is that taking inspiration from biological systems, such as human vision, can both give you insight about why certain things work the way that they do and also inspire ideas to improve neural networks."

Full Article
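
The MIT News summary names the technique but not the recipe. As general background (not the study's actual code), adversarial training perturbs each training image in the direction that most increases the loss, then fits the model to the perturbed copy. A minimal PyTorch sketch, with a hypothetical model and data batch:

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    # Fast Gradient Sign Method: nudge x in the direction that increases loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # One adversarial training step: fit the model to the perturbed inputs.
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Trained this way, small pixel-level changes move the model's internal representation less, which is one plausible route to the smoother, more "perceptually straight" video representations the article describes.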

 

 

Wendy's Turns to AI-Powered Chatbots for Drive-Thru Orders
Bloomberg
Daniela Sirtori-Cortina; Rachel Metz
May 9, 2023


In June, Wendy's plans to test an artificial intelligence (AI)-powered chatbot’s ability to take drive-thru orders at a store near Columbus, OH. Powered by Google Cloud's AI software, the system purportedly can understand requests phrased differently from the menu and answer frequently asked questions. Wendy's said there are no plans to reduce labor in response to the chatbot’s deployment, but it will shift crew responsibilities to handle an increase in drive-thru and digital orders. During the pilot, staff will oversee the chatbot to ensure it can handle all requests and will be on hand to step in if customers insist on speaking with a human.
 

Full Article

*May Require Paid Registration

 

 

Data Class-Specific Image Encryption Using Optical Diffraction
UCLA Samueli Newsroom
May 3, 2023


University of California, Los Angeles researchers have developed diffractive deep neural networks that can perform class-specific all-optical image encryption at both near-infrared and terahertz wavelengths using no external computing power aside from the illumination light. After training the networks using deep learning, the researchers used three-dimensional printing to physically fabricate the networks, transform the input images, and produce encrypted, uninterpretable output patterns. The encrypted images can be restored only by applying the correct decryption keys. The transformations performed by the diffractive encryption network are pre-determined and specifically and exclusively assigned to a single data class, which makes it difficult to use reverse-engineering to decipher the original images belonging to the target data classes. Additionally, different decryption keys can be distributed to multiple end-users based on their data access permission, allowing only the appropriate portion of the input data to be shared.

Full Article

 

 

Lithography-Free Photonic Chip Offers Speed, Accuracy for AI
Penn Engineering Today
Devorah Fischler
May 1, 2023


Researchers at the University of Pennsylvania School of Engineering and Applied Science (SEAS) have constructed a lithography-free photonic chip that offers programmable on-chip information processing, yielding photonics-level speed enhanced with superior accuracy and flexibility for artificial intelligence (AI) applications. The chip uses lasers to beam light onto a semiconductor wafer without defined lithographic pathways. Explained SEAS' Liang Feng, "Our chip overcomes [reprogrammability, damage, and cost] obstacles and offers improved accuracy and ultimate reconfigurability given the elimination of all kinds of constraints from predefined features." SEAS' Zihe Gao said the device's active light control can be used "to reroute optical signals and program optical information processing on-chip."

Full Article

 

NYTimes Analysis: Proliferation Of AI Will Lead To Global “Arms Control” Efforts

David Sanger wrote in a New York Times (5/5, A1) analysis that while the Administration’s limits on exporting chips to China were partially intended to “slow its effort to develop weapons driven by artificial intelligence,” current sentiment regarding AI “has made the limiting of chips to Beijing look like just a temporary fix.” Sanger detailed “the tension felt throughout the defense community today,” as “no one really knows what these new technologies are capable of when it comes to developing and controlling weapons, and they have no idea what kind of arms control regime, if any, might work.” Sanger further predicted “a new era of arms control” under which nations will seek “to limit the specialty chips and other computing power needed to advance the technology.”

Apple CEO Tim Cook Discusses Rise Of AI Chatbots

MobileSyrup (CAN) (5/5, Mandato) reported Apple CEO Tim Cook shared his thoughts about the rise of AI chatbots on Apple’s quarterly earnings call. Cook “described the potential of artificial intelligence as ‘very interesting’ before noting that there are several issues that need to be sorted out first.” He also said “it is ‘very important to be deliberate and thoughtful’ with how the technology is used and integrated.” Apple “has already used artificial intelligence and machine learning across several of its products and services, including the Apple Watch’s ECG app, Crash Detection and Fall Detection,” and “AI will be added to products more and more on a ‘very thoughtful basis.’”

AI-Written Books And Other Content Is Spreading

The Washington Post (5/5, Oremus) reported that three weeks before Portland-based software developer Chris Cowell was set to release a “technical how-to book,” he noticed that “another book on the same topic, with the same title, appeared on Amazon.” The book, titled “Automating DevOps with GitLab CI/CD Pipelines,” listed as its author “one Marie Karpos, whom Cowell had never heard of.” The book “bears signs that it was written largely or entirely by an artificial intelligence language model, using software such as OpenAI’s ChatGPT.” The book’s publisher, “a Mumbai-based education technology firm called inKstall, listed dozens of books on Amazon on similarly technical topics, each with a different author, an unusual set of disclaimers and matching five-star Amazon reviews.” Experts say those books “are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic.”

University Of Tennessee Doctor Published Several AI-Written Research Papers In Months

The Daily Beast (5/6, Ho Tran) reported that after learning about ChatGPT, University of Tennessee Health Science Center radiologist Som Biswas “realized he could use it to make at least one facet of his career a whole lot easier.” As a proof of concept, Biswas had the bot “write an article about a topic he was already very familiar with: medical writing.” When he finished the article, “he submitted the paper to Radiology, a monthly peer-reviewed journal from the Radiological Society of North America.” A few days later, the paper, “ChatGPT and the Future of Medical Writing,” was published in Radiology “after undergoing peer-review, according to Biswas.” He then realized “he could actually use [ChatGPT] to help his career and research.” He has now used OpenAI’s chatbot “to write at least 16 papers in four months, and published five articles in four different journals.”

Google Taking Steps To Expand AI, Social Media Content In Search Engine Results

The Wall Street Journal (5/6, Kruppa, Subscription Publication) reported that in a departure from its existing practices, Google is making changes to its search engine that will ensure results incorporate conversations with artificial intelligence as well as more short video and social-media posts. The Journal said the changes represent a response to big shifts in the way people access information on the Internet.

OpenAI LLM Training Contract Workers Paid $15 Per Hour

NBC News (5/6) detailed “a hidden army of contract workers who have been doing the behind-the-scenes labor of teaching AI systems how to analyze data so they can generate the kinds of text and images that have wowed the people using newly popular products like ChatGPT.” In order “to improve the accuracy of AI,” such workers “labeled photos and made predictions about what text the apps should generate next.” While such contractors “have spent countless hours in the past few years teaching OpenAI’s systems to give better responses in ChatGPT,” the pay for their work is “$15 an hour and up, with no benefits,” even as “their feedback fills an urgent and endless need for the company.”

Google’s Developer Conference Set To Focus On AI

CNBC (5/8, Elias) reports Google’s developer conference on Wednesday is set to focus on artificial intelligence, “as the company is planning to announce a number of generative AI updates, including launching a general-use large language model (LLM), CNBC has learned.” According to internal documents seen by CNBC, the company “will unveil PaLM 2, its most recent and advanced LLM.” Further, Google at the event “will make announcements on the theme of how AI is ‘helping people reach their full potential,’ including ‘generative experiences’ to Bard and Search, the documents show.” The update “comes as competition ramps up in the AI arms race, with Google and Microsoft racing to incorporate chat AI technology into their products.”

Opinion: AI Moratorium Is Not The Answer To Addressing LLM Guardrails, Potential Risks

Jacob Moses and Gili Vidan write for the Washington Post (5/8) that “nearly 30,000 leading technologists, ethicists and civil society activists signed an open letter calling on the tech industry to hit pause on developing more advanced Large Language Models (LLM) in AI for at least six months to allow for the creation of appropriate guardrails around their potential risks.” They suggest that “if AI labs refuse to voluntarily pause then the government should step in to impose a moratorium.” However, “not everyone agrees that a moratorium is a step in the right direction.” The authors argue, “A more robust deliberative process that invites a broad range of experiences and expertise – from civil rights advocates to educators to labor unions – into the conversation will ensure that when the moratorium ends, we’re left with a richer understanding of the social stake in our collective future, rather than a narrower one.”

Interview: Bill Gates Discusses Rise Of AI Technologies

ABC News (5/8) interviewed Bill Gates on his thoughts about the rise of AI technology and what it means for the future. In the interview, Gates noted that AI technology could create complications globally for cybersecurity, and said the US government needs to build its regulatory capacity to understand the growing sector. When asked about a potential pause on AI development, Gates also noted that a pause would only prevent the “good guys” from developing useful technology, rather than addressing widespread concerns about the rise of AI.

AI Models Can Spot “Potential Cases” Of Pancreatic Cancer, Study Finds

The New York Post (5/9, Herz) reports on a study that found “potential cases of pancreatic cancer could be spotted by using an AI-based population screening.” Researchers “trained AI learning models to be able to read diagnosis codes in the patient’s data and connect them to pancreatic cancer.” They “tried out different versions of the AI models for potential diagnosis at different times – six months, one year, two years and three years – and found that their methods were ‘substantially more accurate at predicting who would develop pancreatic cancer than current population-wide estimates of disease incidence.’”
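
The Post's summary gives no implementation details, but the core idea – turning each patient's history of diagnosis codes into features and predicting a later pancreatic-cancer diagnosis – can be illustrated with a deliberately simplified sketch. The codes, patients, and bag-of-codes model below are hypothetical stand-ins, not the study's method (which used far richer sequence models over real records):

# Toy illustration: predict pancreatic-cancer risk from diagnosis-code histories.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each patient is a space-separated history of (hypothetical) diagnosis codes;
# the label marks whether pancreatic cancer was diagnosed later.
histories = ["K86.1 E11.9 R10.9", "J45.909 I10", "K86.1 R63.4 E11.9", "I10 M54.5"]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer(token_pattern=r"[^ ]+")  # bag-of-codes features
X = vectorizer.fit_transform(histories)
model = LogisticRegression().fit(X, labels)

new_patient = vectorizer.transform(["E11.9 K86.1 R10.9"])
print(model.predict_proba(new_patient)[0][1])  # estimated risk score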

I/O Seen As Google’s Opportunity To Reassert Leadership In AI

The Washington Post (5/9) reports, “On Wednesday, Google is holding its annual conference to showcase its latest innovations — a 15-year-old ritual that this time around is upset by the tech giant’s suddenly playing catch-up in a field it had long dominated: artificial intelligence.” Google’s I/O “is a chance for executives to show skeptical investors, competitors and, in many cases, their own employees, that it is still the leader in AI,” and “showing off new tech to customers, the media and investors is key given the perception from analysts and industry observers that Google fumbled its March launch of the ‘Bard’ chatbot.” The Post adds, “This moment is the most stressful that workers can remember at the company, according to conversations with five current and former employees.”

        Alphabet-Backed AI Startup Anthropic Explains Moral Values Behind Its AI Bot. Reuters (5/9) reports Anthropic, an AI startup backed by Google owner Alphabet, “on Tuesday disclosed the set of written moral values that it used to train and make safe Claude, its rival to the technology behind OpenAI’s ChatGPT.” Anthropic’s guidelines draw on several sources, “including the United Nations Declaration on Human Rights and even Apple Inc’s data privacy rules.” Anthropic Co-Founder Jack Clark “said a system’s constitution could be modified to perform a balancing act between providing useful answers while also being reliably inoffensive.” He added, “In a few months, I predict that politicians will be quite focused on what the values are of different AI systems, and approaches like constitutional AI will help with that discussion because we can just write down the values.”

Generative AI Causing Swings In Stock Market

The Wall Street Journal (5/9, Grant, Subscription Publication) reports that generative AI is creating waves in various sectors including tech, education, and finance. According to the Journal, major tech firms are investing billions in AI while startups are developing AI-based business models. This trend is causing significant market movements, with Nvidia stocks soaring and study-materials company Chegg shares plummeting. The Journal adds that companies are increasingly mentioning “generative AI” in conference calls even as some believe AI’s initial impact may be overhyped.

Experts Concerned About Lack Of Regulation In AI Involving Healthcare

“ChatGPT has doctors thinking about how bots can deliver better patient care,” Politico (5/9, Reader) reports. However, “when it comes to the bigger picture of integrating artificial intelligence into health care, Gary Marcus, an AI entrepreneur and professor emeritus of psychology and neuroscience at NYU, is worried the government isn’t paying close enough attention.” Marcus told Digital Future Daily, “My biggest concern right now is there’s a lot of technology that is not very well regulated – like chat search engines that could give people medical advice.” Marcus said that there aren’t any systems in place “to monitor how these new technologies are being used and assess whether they’re causing harm.”

Experts Discuss How AI Technologies Will Change Learning, Classroom Instruction

Education Week (5/9, Prothero) reports experts in artificial intelligence and education technology say “ChatGPT can pass the bar exam, write sci-fi novels, and code, and that’s only the beginning.” Amid discussions on how AI will disrupt our world, “there are some big outstanding questions for educators, such as: What emerging artificial intelligence is just around the corner? How can schools stay on top of this rapidly changing technology? And how can educators separate hype from substance?” Education Week asked “four experts in technology, education, and artificial intelligence to look into their crystal balls and share their thoughts on how AI will likely change teaching and learning.” Clayton Christensen Institute co-founder Michael Horn “said that if he were to pick a theme for new technologies, it would be one of empowerment: New tools will give students more control over creating content and learning at their own pace, while simultaneously taking busywork off teachers’ plates by allowing them to automate tasks.”

Experts And Educators Explain Ways To Develop AI Literacy

Education Week (5/10, Klein) reports that the new version of ChatGPT has accelerated a conversation “already underway: Now that AI is shaping nearly every aspect of our lives and is expected to transform fields from medicine to agriculture to policing, what do students need to understand about AI to be prepared for the world of work?” Experts argue that AI literacy is “something that every student needs exposure to – not just those who are planning on a career in computer science.” Among ways to “begin developing AI literacy, according to experts and educators,” grasping the “technical aspects of AI – how the technology perceives the world, how it collects and processes data, and how those data can inform decisions and recommendations – can help temper the oftentimes inaccurate perception that AI is an all-knowing, infallible force.”

Google Shows Off Latest AI-Based Search Tools At I/O Developer Conference

Bloomberg (5/10, Alba, Love, Subscription Publication) reports Google on Wednesday “unveiled an experimental way to search the internet that gives more conversational results, and said its artificial intelligence chatbot, Bard, is now available for much of the world to use online.” Google announced a “suite of AI announcements at its I/O conference,” including “a new large language model, called PaLM 2, that developers can use to train tools like chatbots.” Google also “said it has already woven the update into many of its marquee products, including Gmail and Bard.”

        The Washington Post (5/10, De Vynck) reports that the new search tool generates its own answers but checks them for accuracy against real websites. It also “posts those links directly next to the generated text, making it easy for people to click on them.” For questions “that are about sensitive topics like health, finances and hot-button political issues, the bot won’t write an answer at all, instead returning a news article or links to websites.” In an interview with the New York Times (5/10, Grant), Google Search Vice President Liz Reid “said...users expected the company to have high-quality information and it did not want to undermine that trust.” Reid added, “The technology is early. It’s amazing in some ways and it has a bunch of challenges in other ways.”

        According to the AP (5/10, Liedtke), Google “will take its next AI steps through a newly formed search lab where people in the U.S. can join a waitlist to test how generative AI will be incorporated in search results.” The tests “also include the more traditional links to external websites where users can read more extensive information about queried topics.” However, the AP adds Google “may take several weeks” to begin “sending invitations to those accepted from the waitlist to test the AI-injected search engine.”

OpenAI CEO To Testify To Congress Over AI Concerns

The Washington Post (5/10, Lima) reports that OpenAI CEO Sam Altman “will testify to Congress for the first time next week, the latest sign that policymakers in Washington are ratcheting up scrutiny of artificial intelligence as the technology booms in Silicon Valley.” Altman, “whose company is behind the AI-driven chatbot ChatGPT, will appear Tuesday before a Senate panel to discuss efforts to keep AI in check – efforts that include potential legislation under consideration on Capitol Hill.” The hearing “comes as lawmakers and federal officials grapple with how to tackle the surging popularity of generative AI tools like ChatGPT, which pull information from massive data sets to generate words, images and sounds, including conversational responses to users’ queries.”

Growing Use Of AI Poses Next Generation Of Cybersecurity Threat

The Washington Post (5/11, A1) reports on the growing use of AI in cybersecurity attacks, saying scammers “are automating more personalized texts and scripted voice recordings while dodging alarms by going through such unmonitored channels as encrypted WhatsApp messages on personal cellphones. ... That is just the beginning, experts, executives and government officials fear, as attackers use artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal.” Speaking at the RSA cybersecurity conference in San Francisco, National Security Agency cybersecurity chief Rob Joyce said, “It is going to help rewrite code. Adversaries who put in work now will outperform those who don’t.”

Meta Reveals “AI Sandbox” Tools For Advertisers

Bloomberg (5/11, Subscription Publication) reports Meta Platforms on Thursday “introduced the ‘AI Sandbox’ for advertisers to test early versions of features that use artificial intelligence technology” to “generate different text for ads that cater to various audiences, create alternate background images based on the words provided and automatically resize ad images to adjust for changes in platforms.” The company “is working with a small group of advertisers to test the tools and said it intends to gradually expand access beginning in July.” Bloomberg adds that “digital advertisers, in particular performance-focused growth marketers, have been hungry for tools that can help create ads more efficiently and make them more successful.”

        CNBC (5/11, Capoot) calls the new features “Meta’s latest effort to show investors and advertisers that hefty investments in the red hot AI space are paying off as the company reckons with slowing ad growth and a costly transition to the metaverse.” Meta Vice President of Monetization John Hegeman “said the new offerings will ultimately help advertisers save time and achieve ‘better performance’ with their ads.” Meanwhile, Meta “also announced several AI-powered updates to Meta Advantage, its portfolio of automated tools and products that advertisers can use to enhance their campaigns,” including “an automated performance comparisons report.”

 

 

dtau...@gmail.com

unread,
May 22, 2023, 10:37:11 AM5/22/23
to ai-b...@googlegroups.com

Dark Web ChatGPT Unleashed: Meet DarkBERT
Tom's Hardware
Francisco Pires
May 16, 2023


Researchers at South Korea's Korea Advanced Institute of Science and Technology (KAIST) and data intelligence company S2W have created a large language model (LLM) trained on Dark Web data. The researchers fed the RoBERTa framework a database they compiled from the Dark Web via the Tor network to create the DarkBERT LLM, which can analyze and extract useful information from a new piece of Dark Web content composed in its own dialects and heavily-coded messages. They demonstrated DarkBERT's superior performance to other LLMs, which should enable security researchers and law enforcement to delve deeper into the Dark Web.

Full Article
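
The article describes feeding the RoBERTa framework a Dark Web corpus. As a rough illustration of what domain-adaptive pretraining of a RoBERTa-style model looks like in practice (the file name, hyperparameters, and use of the Hugging Face transformers library are assumptions, not details from the paper):

# Sketch: continue masked-language-model pretraining of RoBERTa on a domain corpus.
# "domain_corpus.txt" is a hypothetical one-document-per-line text file.
from transformers import (RobertaTokenizerFast, RobertaForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="domain-roberta", per_device_train_batch_size=8,
                         num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()

The masked-language-model objective is what lets the resulting model pick up a specialized dialect: it repeatedly hides tokens in domain text and learns to predict them from context.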

 

 

AI Helps Place Drones in Remote Areas for Faster Emergency Response
USC News
Nina Raffio
May 18, 2023


University of Southern California (USC) researchers found that using artificial intelligence (AI)-driven decision-making to dispatch equipment to remote areas resulted in faster emergency response times. The researchers focused on a program in Toronto, Canada, that deploys drones equipped with automated external defibrillators together with ambulances in response to calls about cardiac arrest events. They learned using AI-powered decision-making on where to deploy life-saving equipment in data-scarce settings can facilitate more effective decisions on when to deploy drones and where to place drone depots in rural areas. USC's Michael Huang said this approach "can help us make more informed and efficient decisions across a range of fields where data is limited."

Full Article
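
The USC summary frames depot placement as an optimization problem but does not spell out the algorithm. One classic baseline for this kind of coverage problem is a greedy heuristic that repeatedly picks the candidate site covering the most still-uncovered incident locations; the sketch below is purely illustrative and is not the study's method:

# Greedy max-coverage heuristic for placing k drone depots so that as many
# historical incident sites as possible fall within drone range (illustrative only).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def place_depots(candidates, incidents, k, drone_range):
    depots, uncovered = [], set(range(len(incidents)))
    for _ in range(k):
        best, best_covered = None, set()
        for c in candidates:
            covered = {i for i in uncovered if dist(c, incidents[i]) <= drone_range}
            if len(covered) > len(best_covered):
                best, best_covered = c, covered
        if best is None:  # no candidate covers anything new
            break
        depots.append(best)
        uncovered -= best_covered
    return depots

# Example: choose 2 depots from 3 candidate sites to cover past cardiac-arrest calls.
sites = [(0, 0), (5, 5), (9, 1)]
calls = [(1, 1), (0, 2), (6, 5), (8, 1), (9, 2)]
print(place_depots(sites, calls, k=2, drone_range=2.5))  # -> [(0, 0), (9, 1)]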

 

 

Tetris Reveals How People Respond to an Unfair AI
Cornell Chronicle
Tom Fleischman
May 15, 2023


Cornell University researchers experimented with a two-player version of the videogame Tetris to explore people's reactions to unfair treatment by humans and artificial intelligence (AI). Former Cornell researcher Houston B. Claure modified Tetris to require two players to cooperate to complete each block-stacking round, with either a human or an algorithmic "allocator" deciding which player took each turn. Players who received fewer turns were highly aware of their partners' disproportionate allocations, but their response was largely consistent whether a human or an AI was the allocator. Players receiving more turns saw their partners as less dominant during AI allocation, while scores were usually worse with equal, rather than unequal, allocations.

Full Article

 

Open Source “Boomtown” In AI Seen As Dependent On Big Tech

MIT Technology Review (5/12) reports, “Last week a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.” MIT Technology Review says, “This open-source boom is precarious. Most open-source releases still stand on the shoulders of giant models put out by big firms with deep pockets. If OpenAI and Meta decide they’re closing up shop, a boomtown could become a backwater.” MIT Technology Review adds, “OpenAI is already reversing its previous open policy because of competition fears. And Meta may start wanting to curb the risk that upstarts will do unpleasant things with its open-source code.”

AP Report Warns Of Generative AI Being Used For Election Interference

The AP (5/14, Knickmeyer) reports experts “have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio that was realistic enough to fool voters and perhaps sway an election,” with the AP writing that the recent AI boom has given some credence to such worries. The outlet adds that “when strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low,” and “the implications for the 2024 campaigns and elections are as large as they are troubling.”

Virginia Congressman Pursuing Degree In Machine Learning To “Master” AI

The CBS Weekend News (5/14) reported 72-year-old Rep. Don Beyer (D-VA) has returned to pursue a master’s degree in machine learning at George Mason University in Fairfax, Virginia, with hopes to “master” artificial intelligence (AI). Beyer said that with ChatGPT and GPT4, AI is “very much a topical thing.” Beyer “says he recognized Congress is about to be called on to create the laws that govern the emerging and, for some, frightening artificial intelligence technology, in which machines and computer systems perform tasks normally requiring human intelligence.”

Amazon To Add ChatGPT-Type Search Tool To Its Online Platform

Bloomberg (5/15, Day, Subscription Publication) reports Amazon plans to add a “ChatGPT-style product search to its web store, rivaling efforts by Microsoft Corp. and Google to weave generative artificial intelligence into their search engines.” Amazon’s plans for the new search tool “appear in recent job postings reviewed by Bloomberg News.” One such job listing for a senior software development engineer says Amazon is “reimagining Amazon Search with an interactive conversational experience” that will answer users’ questions, compare products, and personalize suggestions for purchases.

European Committee Approves New AI Regulation

“A key committee of lawmakers in the European Parliament have approved a first-of-its-kind artificial intelligence regulation – making it closer to becoming law,” CNBC (5/15, Browne) reports. The European AI Act “marks a landmark development in the race among authorities to get a handle on AI” and “takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.” The regulations “also specify requirements for providers of so-called ‘foundation models’ such as ChatGPT, which have become a key concern for regulators, given how advanced they’re becoming and fears that even skilled workers will be displaced.”

Amazon Using AI To Make Deliveries Faster

CNBC (5/15, Kharpal) reports, “Amazon is focusing on using artificial intelligence to speed up deliveries – by minimizing the distance between its products and customers, a top executive told CNBC.” Amazon Vice President of Customer Fulfillment and Global Ops Services for North America and Europe Stefano Perego told the outlet that the company is using AI technology to improve its logistics operations. While Amazon is using AI to improve how it maps out and plans delivery routes, it also is using the technology to improve inventory placement. The company “has been focusing on a so-called ‘regionalization’ effort to ship products to customers from warehouses closest to them rather than from another part of the country.” To do so “requires technology that is capable of analyzing data and patterns in order to predict what products will be in demand and where,” which is “where AI comes in.” Perego said Amazon is making headway in using AI to improve inventory placement.

Online Influencers Are Using AI To Launch Get-Rich-Quick Schemes

The Washington Post (5/15, Verma) reports that generative artificial intelligence, “which backs chatbots like ChatGPT, has dazzled and alarmed the public, as many argue that the software’s ability to create poems, write song lyrics or pen movie dialogues could put millions out of work.” However, it’s also “changing the landscape of get rich quick schemes,” as online influencers “have seized on the idea that ChatGPT is an all-powerful technology that offers a tantalizing path to easy money.” YouTubers and TikTokers who “specialize in personal finance content now make videos advertising a single premise: let ChatGPT create a business, while you sit back and gain financial freedom.” But entrepreneurship and computer science experts “say that is a misguided view of how artificial intelligence can help entrepreneurs. Nearly any money-making scheme devised solely by ChatGPT is bound to be generic, they said, because chatbots will regurgitate strategies that are widely known.”

        Experts Worry AI Could Worsen Loneliness ‘Crisis’ In US. “Loneliness in the U.S., which spiked during the isolation of COVID, remains a public health ‘crisis’ – and now the advent of ubiquitous AI-driven chatbots could make actual human contact even scarcer,” Axios (5/15, Heath) reports. Experts are concerned “that AI might further cocoon people from the relationships and conversations they need” in the long term. However, “in the short term, AI-powered companions, pets and mental health support services are already being drafted to fight the loneliness epidemic.”

ChatGPT Co-Founder To Testify Before Congress, Meet With Lawmakers Over AI Safety

CNN (5/15, Korn) reports that in 2017, “there were rumors that Sam Altman was planning to run for governor of California.” However, Altman, “the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.” He is also expected to attend a dinner Monday night with House lawmakers on both sides of the aisle, “with one Republican lawmaker describing it as part of the process for Congress to assess ‘the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.’” The hearing and meetings “come as ChatGPT has sparked a new arms race over AI,” with a “growing list of tech companies” deploying new AI tools in recent months.

Survey: Most Students And Parents Are Confident In AI’s Potential For Education

Education Week (5/15, Klein) reports, “Teens and tweens are often way ahead of their parents in understanding the latest technologies – and artificial intelligence is no different, according to a recent poll from Common Sense Media, a nonprofit that studies the impact of tech on children and youth.” Fifty-eight percent of students “ages 12 to 18 have used ChatGPT, an AI-powered tool that can answer questions, write an essay on a Shakespearean play, or draft a legal memo that appears remarkably similar to what a human can produce,” the survey found. Meanwhile, “just under a third – 30 percent – of parents have used the tech.” While many parents “seem to be out of the loop when it comes to their child’s use of ChatGPT,” 68 percent of parents and 85 percent of students “believe that AI programs will have a positive impact on education.”

AI Executives Call For Regulation At Senate Hearing

Bloomberg (5/16, Edgerton, Seddiq, Subscription Publication) reports that on Tuesday, OpenAI CEO Sam Altman, IBM Chief Privacy and Trust Officer Christina Montgomery, and other experts appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law and called for lawmakers to “regulate artificial intelligence technologies that are raising ethical, legal and national security concerns.” The Wall Street Journal (5/16, Tracy, Subscription Publication) says the hearing underscored the wide-ranging concerns prompted by rapid consumer adoption of AI systems including ChatGPT.

        The AP (5/16, O'Brien) reports, “The overall tone of senators’ questioning was polite Tuesday, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry’s failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Altman’s expertise on averting problems that haven’t yet occurred.” Indeed, the New York Times (5/16, Kang) suggests Altman and Montgomery appeared to welcome lawmakers’ attention, largely agreeing with them “on the need to regulate the increasingly powerful A.I. technology being created inside his company and others like Google and Microsoft.” Roll Call (5/16, Tarinelli) reports that for her part, Montgomery urged Congress “to adopt a ‘precision regulation’ approach to artificial intelligence.”

        Education Week (5/16, Klein) reports that while lawmakers “steered clear of discussion about K-12 students using ChatGPT to cheat, [they] had plenty of other questions for Altman.” Among other topics, Altman “has no clear vision for how ChatGPT and other AI technologies will change the future of work, something educators are already wrestling with. But he’s certain the impact will be profound.” In response to a question from Sen. Richard Blumenthal (D), he said, “I believe that there will be far greater jobs on the other side of this and that the jobs of today will get better. You see already people that are using [AI] to do their job much more efficiently.”

        Politico (5/16, Chatterjee) says the hearing left lawmakers facing “three big unknowns” – whether there is a need for a new federal agency; who owns the data AI uses to train itself; and how much AI will influence the 2024 election. Additionally, in a second article, Politico (5/16, Bordelon) reports the hearing also featured the discussion of “a bevy of ideas for how the federal government should channel its immense budget toward incorporating AI systems while guarding against unfairness and violations of privacy. They included supercharging the federal AI workforce, shining a light on the federal use of automated systems, investing in public-facing computing infrastructure and steering the government’s billions of dollars in tech purchases toward responsible AI tools.”

        Microsoft Researchers Suggest AI Technology Closer To AGI. The New York Times (5/16, Metz) reports a recent paper by a team of Microsoft researchers argues the company’s new artificial intelligence system “was a step toward artificial general intelligence, or A.G.I.” The Times says Microsoft is “the first major tech company to release a paper making such a bold claim,” and thereby “stirred one of the tech world’s testiest debates: Is the industry building something akin to human intelligence? Or are some of the industry’s brightest minds letting their imaginations get the best of them?” While noting that claiming A.G.I. “can be a reputation killer for computer scientists,” the Times says that “some believe the industry has in the past year or so inched toward something that can’t be explained away: A new A.I. system that is coming up with humanlike answers and ideas that weren’t programmed into it.”

AI Education Program Aims To Prepare K-12 Students For Ed-Tech’s Future

The Hill (5/16, Lonas) reports that after “starting his career in the political space during the Obama administration,” Alex Kotran transitioned into the education realm “with his 2019 creation of AI Education, or aiEDU, a group aimed at giving instruction on artificial intelligence to K-12 students. As his organization was working through the coronavirus pandemic to convince schools and investors of the utility of AI in classrooms, OpenAI was creating the AI tool that would eventually propel the issue into the national spotlight, ChatGPT.” As ChatGPT “shook the education world in particular,” Kotran’s aiEDU “was already in hundreds of schools. Now, hundreds more are on the waitlist to bring the program into their classrooms.” The service he offers “includes a self-guided course that is compatible with Google Classrooms,” as well as “professional development training for educators seeking a better understanding of AI and what education on the technology should look like in the classroom.”

Elon Musk Claims To Be “Reason That OpenAI Exists”

CNBC (5/16, Goswami) reports, “Tesla CEO Elon Musk claimed on Tuesday he is ‘the reason that OpenAI exists,’ citing his past investment in the entity, and that Microsoft exerts control over the AI company, an assertion strongly denied by Microsoft CEO Satya Nadella.” Musk “also suggested that OpenAI didn’t place sufficient emphasis on safe AI development.” Musk “has previously repeatedly asserted that Microsoft controls OpenAI and that OpenAI’s capped-profit model is questionable. Musk was an early backer of the AI startup, reportedly committing to $1 billion in support before pulling out over disagreements over the speed of OpenAI’s advancements. He said he ultimately invested somewhere around $50 million.”

        Fortune (5/17) reports, “‘I fully admit to being a huge idiot here,’ Musk says, when asked whether he should have a larger stake in OpenAI given that he invested so much in the project. Musk said he underestimated the potential of the company’s profitability, but he argued this was unforeseen.”

        Insider (5/17) reports Musk “told CNBC Microsoft could ‘cut off OpenAI’ at any point and has a lot of control over the startup.” Musk is quoted saying, “Let’s say they create some digital super-intelligence, almost god-like intelligence, who’s in control? What is exactly the relationship between OpenAI and Microsoft?” The Hill (5/17) quotes Musk: “There’s a strong probability that it will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity. ... Hopefully that chance is small, but it’s not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong.”

        Microsoft CEO Rebuts Musk Assertion That OpenAI Is Falling Under Microsoft’s Sway. The Indian Express (5/17) reports, “Amid accusations of OpenAI losing its way and increasingly falling under the influence of its biggest investor, Microsoft, Satya Nadella has come forward to clarify that these reports are ‘factually not correct.’” Nadella “stressed” in an NBC News (5/16) interview “that OpenAI remains very grounded in its mission of being controlled by a nonprofit board. And while he admitted that Microsoft does have a ‘great commercial partnership’ with OpenAI, the interest is ‘noncontrolling.’” The Times of India (5/17) quotes Nadella: “OpenAI is very grounded in their mission of being controlled by a non-profit board. We have a non-controlling interest in it, we have a great commercial partnership in it.”

University Of Arizona Researchers Use AI To Pinpoint Cause Of Alzheimer’s Disease, Potential Drug Targets

Fox News (5/17, Musto) reports, “A team of researchers from the University of Arizona and institutions across the country are using artificial intelligence to hopefully pinpoint the cause of Alzheimer’s disease and potential drug targets.” The study, published in the journal Nature Communications Biology, used “3D computer models” to “virtually screen millions of Food and Drug Administration-approved, natural product and small-molecule compounds against more than 6,000 targets.” The researchers “zeroed in on around 3,000 drug candidates of interest and the team already has a National Institutes of Health grant enabling clinical trials on three of the compounds.”
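
The funnel described above – score a large compound library against thousands of targets, then keep only the strongest hits – can be pictured with a minimal sketch. Everything here is an assumption for illustration (random placeholder scores, scaled-down sizes, a simple best-score-per-compound rule); the study's actual screening models and cutoffs are not described in this summary.

import numpy as np

rng = np.random.default_rng(3)
# Placeholder docking-style scores (lower = stronger predicted binding).
# The real screen covered millions of compounds and 6,000+ targets.
scores = rng.normal(0.0, 1.0, size=(50_000, 500)).astype(np.float32)

best = scores.min(axis=1)            # each compound's best score across all targets
shortlist = np.argsort(best)[:3000]  # keep the strongest ~3,000 candidates, as in the study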

House Subcommittee Takes Up Questions Of Intellectual Property, AI

Politico (5/17, Chatterjee, Kern) reports, “Washington’s mounting struggle to deal with artificial intelligence took a sharp turn Wednesday morning, as House members wrestled with new and unsettling questions.” The House Judiciary Subcommittee on Courts, Intellectual Property and the Internet took up the question of ownership, and “how to compensate or credit artists, whether musicians, writers or photographers, when their work is used to train a model, or is the inspiration for an AI’s creation.” Subcommittee Chair Darrell Issa (R-CA) suggested “a database to track the sources of training data.” While senators from both parties and OpenAI CEO Sam Altman supported “a new agency to license the most powerful AI platforms,” Steve DelBianco, president and CEO of NetChoice, which represents companies including Meta, Google and Amazon, said, “It’s baffling that the U.S. would even consider a restrictive licensing scheme for artificial intelligence development.”

        Wired (5/17) reports, “At a Congressional hearing on Tuesday, senators from both parties and OpenAI CEO Sam Altman said a new federal agency was needed to protect people from AI gone bad.” IBM Chief Privacy and Trust Officer Christina Montgomery “urged Congress yesterday to take inspiration from the AI Act, which categorizes AI systems by the risks they pose to people or society and sets rules for – or even bans – them accordingly,” and “she also endorsed the idea of encouraging self-regulation, highlighting her position on IBM’s AI ethics board, although at Google and Axon those structures have become mired in controversy.”

        California Lawmaker, AI Expert Aims To Move Congress On Tech Regulation. The Washington Post (5/17, Mark) reports Rep. Jay Obernolte (R-CA) “has said it many times: The biggest risk posed by artificial intelligence is not ‘an army of evil robots with red laser eyes rising to take over the world.’” More “mundane issues such as data privacy, antitrust issues and AI’s potential to influence human behavior” take precedence over the “hypothetical notion of AI ending humanity, [he] says.” Obernolte is “one of a handful of lawmakers with a computer science degree,” and with the “rise of generative AI applications like ChatGPT,” he has emerged as a “leading expert in Congress on how the technology works and what lawmakers should worry about.” Yet it “remains unclear whether Obernolte and his very small club of tech-savvy policymakers will have a meaningful effect in Congress, which has failed time and again to pass substantive tech regulation.”

        Analysis: How OpenAI CEO Won Praise From Capitol Hill Lawmakers. In an analysis for The Washington Post (5/17), reporter Cristiano Lima writes that while lawmakers on Capitol Hill “expressed many of the same and at times even greater fears about generative AI tools, like OpenAI’s ChatGPT, at a hearing with its CEO Sam Altman on Tuesday, they largely walked away singing the tech mogul’s praises.” During the session, lawmakers “repeatedly praised Altman and his fellow witnesses, including another industry executive from IBM, for coming to the table with concrete ideas on how new AI tools could be reined in – even if those talks are only just starting.” They also “largely held their punches about OpenAI’s own conduct, even as Altman declined to make the commitments they have usually sought from companies about scaling back their practices on data collection and the use of copyrighted material, among other issues.”

Poll Finds Majority Views AI As Threat To Civilization

Reuters (5/17, Tong) reports a Reuters/Ipsos poll of 4,415 US adults conducted May 9 to 15 found that most respondents believe that AI “could put the future of humanity at risk,” as over two-thirds said they “are concerned about the negative effects of AI and 61% believe it could threaten civilization.”

        The Hill (5/17, Sforza) carries a similar report on the poll, saying that it “found that about 6 in 10 Americans view artificial intelligence (AI) as a threat to human civilization” as “61 percent of Americans believed AI is a threat to humanity’s future, while about 22 percent reported they disagreed and 17 percent said they were not sure.”

Texas A&M Professor Sparks Controversy Over AI Cheating Allegations

The Washington Post (5/18) reports an anonymous Texas A&M University student has accused Professor Jared Mumm of threatening to fail his students over their alleged use of ChatGPT to generate essays. Mumm “said he’d copied the student essays into ChatGPT and asked the software to detect if the artificial intelligence-backed chatbot had written the assignments.” However, the accusations “caused a panic in the class, with some students fearful their diplomas were at risk” and some forced to argue their innocence. Experts “say the tensions erupting at Texas A&M lay bare a troubling reality: protocols on how and when to use chatbots in classwork are vague and unenforceable, with any effort to regulate use risking false accusations.”

Colorado State University, University Of Minnesota To Use AI For Climate Change Efforts

KUSA-TV Denver (5/18, Reppenhagen) reports, “Colorado State University (CSU) is partnering with the University of Minnesota to create a new National Artificial Intelligence Research Institute.” Researchers at the AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE) hope to use AI “to create more climate-smart practices for the agriculture and forestry industries. AI-CLIMATE is one of seven new AI institutes established this month by the National Science Foundation (NSF).” For five years, the institute will be funded by “a $20 million grant from NSF and the USDA National Institute of Food and Agriculture.” CSU professor Keith Paustian “said one of the main goals will be to help stabilize the carbon cycle,” meaning “less carbon in the atmosphere and more carbon on the earth.”

OpenAI Unveils iPhone App For ChatGPT

The New York Times (5/18, Metz) reports OpenAI unveiled a new app on Thursday for ChatGPT, “a new version of the chatbot for the iPhone, hoping to build on its enormous popularity.” On the phone, “unlike the browser-based version of ChatGPT, the smartphone app responds to voice commands, operating a bit like Apple’s Siri digital assistant or Amazon’s Alexa.” OpenAI wrote the app is part of the company’s work on how to make “useful tools that empower people, while continuously making them more accessible.”

        Bloomberg (5/18, Subscription Publication) reports the app “looks and operates similarly to the web version, which has shaken up the technology industry over the past several months and influenced a range of new artificial intelligence services.” CNBC (5/18) reports, “The app is free, although it includes a $20 per month in-app purchase through Apple for ChatGPT Plus, OpenAI’s subscription that offers additional features.”

        Engadget (5/18) reports, “Feature-wise, OpenAI’s app looks and behaves much like the ChatGPT website – with the addition of voice input using OpenAI’s Whisper speech recognition.” TechCrunch (5/18, Perez) reports, “ChatGPT Plus subscribers will be able to access GPT-4’s capabilities through the new app.”

AI Boom Creates Demand For Businesses That Can Identify AI-Generated Content

The New York Times (5/18, Hsu, Myers) reports, “Generative A.I. is now available to anyone, and it’s increasingly capable of fooling people with text, audio, images and videos that seem to be conceived and captured by humans.” This has led to demand for services that can identify AI-generated content, and over “a dozen companies now offer tools to identify whether something was made with artificial intelligence, with names like Sensity AI (deepfake detection), Fictitious.AI (plagiarism detection) and Originality.AI (also plagiarism).” Andrey Doronichev, founder of synthetic content detection company Optic, said, “Content authenticity is going to become a major problem for society as a whole. ... We’re entering the age of cheap fakes.”

Schumer: Congress “Must Move Quickly” To Regulate AI

The AP (5/18, Sherman) reports Senate Majority Leader Schumer “says Congress ‘must move quickly’ to regulate artificial intelligence and has convened a bipartisan group of senators to work on legislation. Schumer says the group met on Wednesday and that his staff has already met with close to 100 CEOs, scientists and academics who deal with the technology.” In remarks on the Senate floor Thursday, Schumer said, “We can’t move so fast that we do flawed legislation, but there’s no time for waste or delay or sitting back. ... We’ve got to move fast.” Schumer argued, “If harnessed responsibly, AI has the power to do tremendous things for the public good. ... It can unlock unimaginable marvels in medicine, business, national security, science and so many other areas of life. But if left unchecked, AI has the power to do tremendous, tremendous harm.”

        Bennet Unveils Bill For AI Regulatory Agency. CNN (5/18, Fung) reports Sen. Michael Bennet (D-CO) “unveiled an updated version of legislation he introduced last year that would establish a Federal Digital Platform Commission.” The bill “makes numerous changes to more explicitly cover AI products, including by amending the definition of a digital platform to include companies that offer ‘content primarily generated by algorithmic processes.’” CNN says the bill would regulate artificial intelligence, as suggested by OpenAI CEO Sam Altman just a few days ago.

NYC Public Schools Lifts Ban On ChatGPT

NBC News (5/18) reports the New York City Department of Education announced Thursday it “will rescind its ban on the wildly popular chatbot ChatGPT – which some worried could inspire more student cheating – from its schools’ devices and networks.” The New York Daily News (5/18) reports the “AI-powered chatbot was banned this winter and – after months signaling the policy would be revised – will remain a restricted website on school networks until individual principals ask to have the block removed, a schools spokesman confirmed.” As of this week, “just 36 schools over the last several months have requested access, according to city data.” Schools Chancellor David Banks “unveiled plans to offer a toolkit for teachers about artificial intelligence in their classrooms, as well as create a repository for schools to share their findings across the city.”

AI “Gold Rush” Inspires More Colleges To Hire And Build

Inside Higher Ed (5/19, D'Agostino) reports that following OpenAI’s release of ChatGPT came “an artificial intelligence arms race among Google, Microsoft and countless other tech giants and start-ups. That development has since reverberated across higher ed, unleashing a surge of new faculty hires, buildings and institutes – all for AI.” As a result, “some universities are going big – very big.” For example, the University of Southern California “has invested more than $1 billion in its AI initiative that will include 90 new faculty members, a new seven-story building and a new school. The institution seeks to bolster its economic impact in the tech industry, integrate computing across multiple disciplines and programs at the university, and influence AI applications, development, policy and research.” Phil Hill and Associates market analyst Phil Hill said, “It’s a gold rush. But it’s a gold rush where you don’t know where the gold mine is or how to get the gold.”

dtau...@gmail.com

unread,
May 27, 2023, 7:50:52 AM5/27/23
to ai-b...@googlegroups.com

Superbug-Killing Antibiotic Discovered Using AI
BBC News
James Gallagher
May 25, 2023


Scientists in Canada and the U.S. used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly superbug, with the AI narrowing thousands of candidate compounds down to a handful for laboratory testing. The researchers tested thousands of drugs on the highly antibiotic-resistant species Acinetobacter baumannii, then fed the resulting information to the AI so it could ascertain the chemical features of compounds that were most effective in attacking the bacterium. The AI analyzed 6,680 drugs with unknown efficacies and produced a shortlist of candidates in just 90 minutes. Of 240 candidates tested in the lab, the researchers found nine potential antibiotics, with the compound abaucin demonstrating the ability to destroy the superbug in samples from patients.
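
The train-then-rank loop the article describes maps onto a standard supervised-screening pattern. The sketch below is a minimal illustration only, with random placeholder fingerprints and a generic classifier; the study's actual model, molecular descriptors, and data are not specified in this summary.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for the lab-screened drugs: molecular fingerprints plus a
# grew/did-not-grow label against A. baumannii (all randomly generated here).
X_tested = rng.random((7500, 128))
y_tested = (rng.random(7500) < 0.1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tested, y_tested)

# Score the 6,680 compounds of unknown efficacy and keep a shortlist
# of the highest-ranked ones for laboratory testing.
X_unknown = rng.random((6680, 128))
activity = model.predict_proba(X_unknown)[:, 1]
shortlist = np.argsort(activity)[::-1][:240]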
 

Full Article

 

 

How Fake AI Photo of a Pentagon Blast Went Viral, Briefly Spooked Stocks
Bloomberg
Davey Alba
May 22, 2023


On the morning of May 22, the S&P 500 fell around 0.3% after a falsified photo of an explosion near the Pentagon went viral. This could be the first time the market has been moved by an artificial intelligence (AI)-generated image. Before the photo was discredited by officials, researchers like Nick Waters of the open source intelligence group Bellingcat took to social media to warn that it may have been an AI creation. Waters wrote on Twitter, "Check out the frontage of the building, and the way the fence melds into the crowd barriers. There's also no other images, videos, or people posting as first-hand witnesses." The image's origin has not been determined, but the original post on Facebook was given a "false information" label and later was blocked by the platform.

Full Article

*May Require Paid Registration

 

 

Software Update for World's Wind Farms Could Power Millions More Homes
New Scientist
Matthew Sparkes
May 21, 2023


A software upgrade developed by researchers at France's Polytechnic Institute of Paris improves the efficiency of wind turbines by ensuring they spend more time facing directly into the wind. The researchers accomplished this by training a reinforcement-learning algorithm to track wind patterns and formulate a strategy to keep the turbine facing the appropriate angle. The current strategy of adjusting the turbine blades in accordance with wind patterns uses more energy and causes wear and tear on the components. In simulations, the new algorithm was more efficient than the current algorithm, reducing the amount of time spent readjusting turbine positions to 3.7% while generating power gains of 0.4%. The new model could increase electricity production by 5 terawatt hours per year if implemented globally.
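
As a rough illustration of the reinforcement-learning idea, the toy agent below learns when it is worth yawing the turbine: the reward favors facing the wind (power capture roughly follows the cosine of the yaw error) and penalizes actuation. All states, constants, and dynamics here are invented for the sketch; the published algorithm is more sophisticated.

import numpy as np

ACTIONS = (-1, 0, 1)          # yaw left, hold, yaw right (10-degree steps)
N_STATES = 36                 # yaw-error bins of 10 degrees over [-180, 180)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
YAW_COST = 0.05               # energy/wear penalty per step of actuation

rng = np.random.default_rng(1)
err = 0.0                     # yaw error in degrees
state = int((err + 180) // 10)
for _ in range(50_000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    err += ACTIONS[a] * 10 + rng.normal(0, 2)      # wind drifts; yawing corrects
    err = (err + 180) % 360 - 180
    nxt = int((err + 180) // 10)
    r = np.cos(np.radians(err)) - YAW_COST * abs(ACTIONS[a])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt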

Full Article

*May Require Paid Registration

 

 

At G-7 Summit, Leaders Call for International Standards on AI
The Washington Post
Michelle Ye Hee Lee; Matt Viser; Tyler Pager
May 20, 2023


World leaders at the Group of Seven (G-7) summit in Japan called for the development of international standards to limit the potential damage from rapid innovations in artificial intelligence (AI). They wrote that AI's challenges must be weighed alongside its advantages, while new technologies should be regulated in harmony with democratic values like fairness, accountability, transparency, protection from online abuse, and respect for privacy and human rights. The G-7 leaders cited in particular generative AI, as some experts warn the technology may worsen polarization through machine-generated content and make it difficult for people to assess the authenticity of information. The leaders also urged the creation of technical standards for the development of "trustworthy" AI, noting G-7 members may have differing strategies and policy tools for realizing this goal.

Full Article

*May Require Paid Registration

 

 

AI Identifies Similar Materials in Images
MIT News
Adam Zewe
May 23, 2023


A machine learning artificial intelligence (AI) model developed by scientists at the Massachusetts Institute of Technology and Adobe Research can identify all the pixels in images that represent given materials. The model assesses image pixels to ascertain material similarities between a pixel selected by the user and all other image regions. The researchers used only synthetic data to train the model, which was built atop a pretrained computer vision model that had seen millions of actual images. The model converts generic, pretrained visual features into material-specific features in a way that accommodates object shapes or variable illumination, then calculates a material similarity score for each pixel in the image. The model's predictions of similar material-containing regions matched ground truth with about 92% accuracy.
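
The final scoring step the summary describes – one feature vector per pixel, compared against the feature at the user's selected pixel – reduces to a similarity map. The sketch below assumes the material-specific features have already been computed (random placeholders stand in for them) and shows only that comparison; it is not the MIT/Adobe implementation.

import numpy as np

H, W, D = 240, 320, 64
rng = np.random.default_rng(2)

# Stand-in for per-pixel material features from the trained model.
feats = rng.random((H, W, D))
feats /= np.linalg.norm(feats, axis=-1, keepdims=True)

y, x = 120, 160                    # pixel the user selected
query = feats[y, x]                # (D,) feature vector at that pixel
similarity = feats @ query         # cosine-similarity score for every pixel
same_material = similarity > 0.9   # regions predicted to share the material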
 

Full Article

 

 

AI Powers Second-Skin-Like Wearable Tech
Monash University (Australia)
May 19, 2023


Scientists at Australia's Monash University and the Melbourne Center for Nanofabrication integrated nanotechnology and artificial intelligence (AI) into an ultra-thin skin patch that monitors 11 biometric signals. The researchers engineered the Deep Hybrid-Spectro frequency/amplitude-based neural network to track multiple biometrics transmitted by the skin patch in a single signal. Monash's Wenlong Cheng said the layered, neck-worn patch can measure speech, neck movement, touch, respiration, and heart rate. Cheng explained, "Emerging soft electronics have the potential to serve as second-skin-like wearable patches for monitoring human health vitals, designing perception robotics, and bridging interactions between natural and artificial intelligence."
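
Decoding several vitals from one waveform, as described above, is commonly done by extracting frequency/amplitude features and regressing multiple outputs at once. The sketch below is a generic stand-in under that assumption – random data and an off-the-shelf multi-output network – not the Deep Hybrid-Spectro architecture, whose details are not given here.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Stand-ins for raw patch readings (500 windows of 1,024 samples each)
# and two of the vitals to recover, e.g., heart rate and respiration.
signals = rng.random((500, 1024))
vitals = rng.random((500, 2))

# Frequency/amplitude features: magnitude of the low-frequency spectrum.
X = np.abs(np.fft.rfft(signals, axis=1))[:, :64]

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X, vitals)               # one signal in, several vitals out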

Full Article

 

Retail Industry Already Finding Ways To Use AI In Day-To-Day Operations, Pivotree VP Says

In an article for Retail Insider (CAN) (5/18), Pivotree Vice President of Engineering Joel Farquhar wrote, “Artificial intelligence (AI) has been a topic of interest in the retail industry in Canada and beyond as of late. Now that it’s here, many retailers are looking at how they can adopt and integrate AI into their businesses in an effort to build efficiencies while optimizing operations where possible. As the benefits of AI become more apparent, the industry is expected to continue to adopt the technology rapidly. Already, retailers are finding ways to utilize AI in day-to-day operations.”

Google Employees Question Company’s AI Push In All-Hands Meeting

Insider (5/19, Langley, Chan) reports that at Google’s all-hands meeting this week, “top of mind for employees was AI, including the company’s recent showcase of generative AI products at its I/O developer conference on May 10.” One of the top-voted questions said that “many AI goals across the company focus on promoting AI for its own sake, rather than for some underlying benefit,” and asked if Google will “provide value with AI rather than chasing it for its own sake.” Pichai answered, “Normally we don’t do this, but we are re-looking at [the company’s Objectives and Key Results] and adapting it for the rest of the year, and I think you will see some of the deeper goals reflected.”

Apple Job Listings Point To Generative AI Strategy

TechCrunch (5/19, Singh) reports, “Apple, like a number of companies right now, may be grappling with what role the newest advances in AI are playing, and should play, in its business. But one thing Apple is confident about is the fact that it wants to bring more generative AI talent into its business.” The company “has posted at least a dozen job ads on its career page seeking experts in generative AI. Specifically, it’s looking for machine learning specialists ‘passionate about building extraordinary autonomous systems’ in the field.” The job postings “are coming amid some mixed signals from the company around generative AI,” including its restriction of ChatGPT “and other external generative AI tools for some employees over concerns of proprietary data leaking out through the platforms.”

Generative AI Technology Leads To Demand For Services To Protect Sensitive Data

The Wall Street Journal (5/19, Lin, Subscription Publication) reported that there are a growing number of startups focused on providing companies with ways to make use of generative AI while also assuring corporate guardrails prevent the exposure of sensitive data to outside sources. The Journal said these startups hope to meet the needs of companies such as Apple, Verizon, and JPMorgan Chase, where employees have been banned from using tools such as ChatGPT over concerns it could release confidential data.

MIT Works To Boost Student Discussions On Artificial Intelligence With Annual “Day Of AI”

Education Week (5/19, Herold) reported, “Several thousand students worldwide participated in the second annual ‘Day of AI’ on May 18, yet another sign of artificial intelligence’s growing significance to schools.” The Massachusetts Institute of Technology’s Responsible AI for Social Empowerment and Education (RAISE) initiative stems from “a coalition of influential groups such as Code.org and the Educational Testing Service.” RAISE recently launched an effort “to help schools and state education departments integrate artificial intelligence into curricula, and the International Society for Technology in Education has made related learning opportunities available to students and teachers alike.” The initiative offers “free classroom lessons on such topics as ‘What Can AI Do?’ and ‘ChatGPT in School.’”

AI Regulations Face Possible Roadblock Due To Partisan Politics

Politico (5/19, Bordelon) reported the push to regulate AI “appears at risk of getting tangled in the same political fights that have paralyzed previous attempts to regulate technology.” Though “Republicans and Democrats broadly agreed on the need for new AI rules” in hearings, similar pushes to regulate tech companies “ultimately collapsed, in part due to partisan squabbling.” According to Politico, the same “arguments are starting to resurface in an entirely new debate.”

        Silicon Valley Divided Over AI’s Power Following OpenAI CEO’s Testimony. The Washington Post (5/20, A1, De Vynck) reported that OpenAI CEO Sam Altman’s congressional testimony last week came as “a debate over whether artificial intelligence could overrun the world is moving from science fiction and into the mainstream, dividing Silicon Valley and the very people who are working to push the tech out to the public.” The Post said, “Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction.” However, inside the Big Tech companies, “many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions.”

G7 Leaders Call For International Standards To Govern Development Of Generative AI

The Washington Post (5/20, Ye Hee Lee, Viser, Pager) reported G7 leaders on Saturday “called for international standards for rapid advancements in artificial intelligence, making clear that the push was a priority but failing to come to any significant conclusions about how to handle the emerging technology.” The Post reported White House officials had “hoped AI governance would be raised during the G-7 to discuss emerging issues of potential concern,” and noted that President Biden on Friday “briefed his counterparts on a recent meeting held this month at the White House in which top executives from companies developing artificial intelligence such as Google and Microsoft discussed the technology,” and updated them on the US government’s work “on a framework that balances the risks with the opportunities in the technology.” National Security Adviser Jake Sullivan said, “I think this is a topic that is very much seizing the attention of leaders of all of these key, advanced democratic market economies.”

        Meanwhile, Bloomberg (5/20, Katanuma, Subscription Publication) reported the G7 “agreed on the need for governance in accordance with G-7 values in the field of generative AI, expressing concern about the disruptive potential of rapidly expanding technologies.” Bloomberg added that under “what they are calling the ‘Hiroshima Process,’ the governments are set to hold cabinet-level discussions on the issue and present the results by the end of the year.”

Israel Intends To Be “AI Superpower,” Citing Autonomous Warfare Innovations

Reuters (5/22) reports Israel “aims to parlay its technological prowess to become an artificial intelligence ‘superpower,’ the Defence Ministry director-general said on Monday, predicting advances in autonomous warfare and streamlined combat decision-making.” Steps “to harness rapid AI evolutions include the formation of a dedicated organisation for military robotics in the ministry, and a record-high budget for related research and development this year, retired army general Eyal Zamir said.” Zamir said, “There are those who see AI as the next revolution in changing the face of warfare in the battlefield.”

Poll: Parents Lag Behind Their Children In AI Usage

K-12 Dive (5/22, Riddell) reports that artificial intelligence is “the latest technology where parents lag behind their children in usage.” This comes as a recent poll by Common Sense Media “finds 58% of students ages 12-18 say they have used ChatGPT, compared to 30% of parents. And while 77% of parents are interested in AI-powered learning tools, just 40% were aware of a reliable information source to learn more about AI and how it could benefit students.” The results also showed that “some 82% of parents would like a rating system for evaluating ChatGPT and other AI programs.”

OpenAI Executives On Arguments Supporting International Regulatory Body For AI

TechCrunch (5/22, Coldewey) reports, “AI is developing rapidly enough and the dangers it may pose are clear enough that OpenAI’s leadership believes that the world needs an international regulatory body akin to that governing nuclear power – and fast. But not too fast.” OpenAI founder Sam Altman, President Greg Brockman, and chief scientist Ilya Sutskever explain in a blog post, “We are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.” TechCrunch says, “An AI-governing body built on [the IAEA] model may not be able to come in and flip the switch on a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.”

        Google’s CEO Sundar Pichai Says AI Is ‘Too Important Not To Regulate Well’ Amid Growing Safety Concerns. Insider (5/23, Bhaimiya) reports that in an opinion piece for the Financial Times (5/23, Pichai, Subscription Publication), Google CEO Sundar Pichai said he is optimistic about the future of AI, calling it the “most profound technology humanity is working on today.” Pichai notes, however, that some companies are locked in a race to become first movers in the space, which means safety is being overlooked. Pichai adds, “I still believe AI is too important not to regulate, and too important not to regulate well.” In its report on Pichai’s piece, Fortune (5/23) says, “the chiefs of two tech companies that are front-runners in A.I.,” OpenAI and Google, “are sharing the same message – that governments should regulate A.I. so it doesn’t get out of hand.”

        Musk Wants To Challenge Google, Microsoft In AI. The Wall Street Journal (5/23, Corse, Subscription Publication) reports Elon Musk said in an interview on Tuesday that he wants to use Twitter and his other businesses to compete with Google and Microsoft in AI.

White House Updates Strategic Plan On AI Research

The AP (5/23, Madhani) reports the White House on Tuesday “announced new efforts to guide federally backed research on artificial intelligence as the Biden administration looks to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology.” The AP also says that the Administration “tweak[ed] the United States’ strategic plan on artificial intelligence research, which was last updated in 2019, to add greater emphasis on international collaboration with allies,” and it held “a listening session with workers on their firsthand experiences with employers’ use of automated technologies for surveillance, monitoring, evaluation, and management.”

        Bloomberg (5/23, Sink, Subscription Publication) reports the meeting featured “employees from call centers, warehouses, health care, gig work and the trucking industry, as the administration seeks to better understand how companies deploy automated technology for worker surveillance.” In addition, Bloomberg reports that the White House Office of Science and Technology Policy “ask[ed] Americans what priorities the government should pursue regarding artificial intelligence as President Joe Biden weighs new regulations on emerging workplace technologies.” The Wall Street Journal (5/23, Tracy, Subscription Publication) provides similar coverage.

        US Lawmakers Mulling AI Legislation. Roll Call (5/23, Tarinelli) reports, “Lawmakers are floating ideas about guardrails for artificial intelligence as Congress considers how to confront the fast-moving technology that experts say could have profound implications.” Early indications “are that both lawmakers and some industry representatives don’t want the government to stand on the sidelines as artificial intelligence advances. They want government action, potentially including a federal agency to supervise artificial intelligence. Some lawmakers are at least partially motivated by liability protections afforded to internet companies, an approach some now consider a mistake.”

        MIT Technology Review (5/23) reports, “Senators Lindsey Graham, a Republican, and Elizabeth Warren, a Democrat, are working together to create a new digital regulator that might also have the power to police and perhaps license social media companies.” Meanwhile, Democrat Chuck Schumer “is also rallying the troops in the Senate to introduce a new bill that would tackle AI harms specifically. He has gathered bipartisan support to put together a comprehensive AI bill that would set up guardrails aimed at promoting responsible AI development.”

        NYTimes: Unclear How Government Can Address Job Losses From Automation. The New York Times (5/23, Goldberg) reports that in his Senate testimony, OpenAI CEO Sam Altman, “like so many other executives unleashing new technologies on the world, has asked the government to assume the bulk of responsibility in supporting workers through the labor market disruptions prompted by A.I. It’s not yet clear how government will rise to that task.” The Times says, “Historically, when automation has led to job loss, the economic impact has tended to be offset by the creation of new jobs,” and while generative AI could boost US labor productivity and GDP, “there will be immense instability for displaced workers. Automation has been a significant driver of income inequality in America.” The Times goes on to discuss some policies to address job loss from automation, but says, “the government’s previous efforts to support workers through periods of job displacement have had mixed results.”

        US, EU To Increase Cooperation On Setting Standards For AI Development. Reuters (5/23, Blenkinsop) reports that the US and EU “are set to step up cooperation on artificial intelligence with a view to establishing minimum standards before legislation enters force, the EU’s tech chief Margrethe Vestager said on Tuesday.” Reuters adds Vestager “told a briefing on Tuesday that process might be completed by the end of the year.”

        Regulators Looking At How To Apply Existing Laws To Generative AI. Reuters (5/22) reports, “As the race to develop more powerful artificial intelligence services like ChatGPT accelerates, some regulators are relying on old laws to control a technology that could upend the way societies and businesses operate.” While the EU is currently drafting new AI rules, it may take years before those rules are able to be enforced. Data governance expert Massimiliano Cimnaghi of consulting firm BIP said current laws may need to be applied until then. He explained, “In absence of regulations, the only thing governments can do is to apply existing rules...If it’s about protecting personal data, they apply data protection laws, if it’s a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”

AI-Generated Deepfake Briefly Rattles Stock Market

The New York Times (5/23, Sorkin, Warner, Kessler, de la Merced, Hirsch, Livni) reports, “For a few minutes on Monday, an ominous image of black smoke billowing from what appeared to be a government building near the Pentagon set off investor fears, sending stocks tumbling.” The picture was “quickly dismissed...as a fake, most likely cobbled together with artificial intelligence, and markets swiftly recovered. But it illustrated one of the big fears behind the government’s zeal to regulate A.I.: that the technology could be used to stoke panic and sow disinformation, with potentially disastrous consequences.” The incident, which “may have been the first time an A.I.-generated image moved markets, according to Bloomberg,” underscores “how even unsophisticated spoofs can spread misinformation quickly, especially via trusted social-media channels.”

The Ringer Founder Says Spotify Is Developing AI Technology To Make “Host-Read” Podcast Ads

TechCrunch (5/23, Perez) reports that according to The Ringer founder Bill Simmons, Spotify is “developing AI technology that will be able to use a podcast host’s voice to make host-read ads – without the host actually having to read and record the ad copy.” Simmons said on his podcast, “There is going to be a way to use my voice for the ads. You have to obviously give the approval for the voice, but it opens up, from an advertising standpoint, all these different great possibilities for you.” Simmons “said these ads could open up new opportunities for podcasters because they could geo-target ads – like tickets for a local event in the listener’s city – or even create ads in different languages, with the host’s permission.”

OpenAI CEO Claims Company Could Abandon Europe If Current EU AI Act Is Passed

Reuters (5/24) reports OpenAI CEO Sam Altman “said on Wednesday the ChatGPT maker might consider leaving Europe if it could not comply with the upcoming artificial intelligence (AI) regulations by the European Union.” However, according to Reuters, Altman added that “before considering pulling out, OpenAI will try to comply with the regulation in Europe when it is set.” The CEO further explained, “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. ... There’s so much they could do like changing the definition of general purpose AI systems.”

EU, Google Plan Voluntary AI Pact Before New Rules Are Able To Take Effect

Reuters (5/24) reports, “Alphabet and the European Commission aim to develop an artificial intelligence (AI) pact involving European and non-European companies before rules are established to govern the technology, EU industry chief Thierry Breton said on Wednesday.” Breton, following a meeting in Brussels with Google and Alphabet CEO Sundar Pichai, said, “Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline.”

        TechCrunch (5/24, Lomas) also provides coverage.

Education Department Releases Recommendations On Artificial Intelligence

Inside Higher Ed (5/25, D'Agostino) reports the ED’s Office of Educational Technology released a new report, “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations,” that acknowledges “the rapid pace of artificial intelligence (AI) advances is impacting society and summarizes opportunities and risks for AI in teaching, learning, research, and assessment.” According to the report, new forms of interaction enabled by AI “may be leveraged to, for example, support students with disabilities, provide an additional ‘partner’ for students working on collaborative assignments or help a teacher with complex classroom routines.” In addition, the report “highlights that AI may assist educators in addressing variabilities in students’ learning,” such as assisting students for whom English is not their first language.

        Education Week (5/24) reports that AI “has great potential to help students learn more efficiently and make teachers’ lives easier,” but educators should be aware of its limitations and be empowered to decide when to disregard its conclusions. Roberto Rodriguez, assistant secretary for planning, evaluation, and policy development at the Education Department, said, “We are seeing a dramatic evolution in ed tech. Educators have to be proactive in helping to shape policies, systems, and being engaged as AI is introducing itself into society in a more major way.” Other recommendations in the report include aligning AI models to a shared vision for education, designing AI using modern learning principles, and having teachers “at the table when developers create AI-powered technologies aimed at K-12 schools.”

Microsoft President Outlines AI Policy Proposals

The New York Times (5/25, McCabe) reports Microsoft President Brad Smith at an event in Washington on Thursday “endorsed a crop of regulations for artificial intelligence...as the company navigates concerns from governments around the world about the risks of the rapidly evolving technology.” The company “proposed regulations including a requirement that systems used in critical infrastructure can be fully turned off or slowed down,” as well as “laws to clarify when additional legal obligations apply to an A.I. system and for labels making it clear when an image or a video was produced by a computer.” The Wall Street Journal (5/25, Tracy, Subscription Publication) reports Smith further supported the creation of a new US government agency to license major AI systems.

        Reuters (5/25, Bartz) reports Smith “said Thursday that his biggest concern around artificial intelligence was deep fakes, realistic looking but false content,” calling “for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes.” Smith added, “We’re going to have to address in particular what we worry about most, foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians.” He also called for “a new generation of export controls...to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements.”

        Noting that his speech “was attended by several members of Congress,” Bloomberg (5/25, Subscription Publication) reports Smith “compared AI to the printing press, elevators and food safety for both the transformative power of a new technology and the regulatory need to protect against the greatest potential harms.” CNBC (5/25, Feiner) adds Smith also advocated for “promoting transparency and funding academic and nonprofit research,” as well as “creating public-private partnerships to use AI to address the impact it will have on society, in areas like democracy and workforce.”

        Politico (5/25, Chatterjee, Bordelon) says Smith’s remarks follow an op-ed on the issue by Google’s Sundar Pichai and come as Congress and the White House “struggle to find their way on regulating artificial intelligence.” Politico says, “The industry efforts come amid a wave of concerns over the rapidly developing technology, with some worrying it could deepen existing societal inequities or, on the extreme end, threaten the future of humanity. With Congress unlikely to move quickly, the White House recently called in the industry’s top CEOs and pushed them to fill in the blanks on what ‘responsible AI’ looks like.”

OpenAI Offers 10 $100K Grants For AI Governance Research

Reuters (5/25, Bensinger) reports OpenAI “said Thursday it will award 10 equal grants from a fund of $1 million for experiments in democratic processes to determine how AI software should be governed to address bias and other factors.” The company will award the grants “to recipients who present compelling frameworks for answering such questions as whether AI ought to criticize public figures and what it should consider the ‘median individual’ in the world.” However, Reuters notes that “the startup’s grants would not fund that much AI research,” as “salaries for AI engineers and others in the red-hot sector easily top $100,000 and can exceed $300,000.”

TikTok Testing In-App AI Chatbot To Provide Recommendations

TechCrunch (5/25, Perez) reports TikTok is testing its own AI chatbot. The bot, called “Tako,” lets users ask questions about a video “using natural language queries or discover new content by asking for recommendations.” A TikTok spokesperson said, “Being at the forefront of innovation is core to building the TikTok experience, and we’re always exploring new technologies that add value to our community...In select markets, we’re testing new ways to power search and discovery on TikTok, and we look forward to learning from our community as we continue to create a safe place that entertains, inspires creativity and drives culture.”

Researchers Use AI To Find New Type Of Antibiotic That Works Against Drug-Resistant Bacteria

CNN (5/25, Goodman) reports, “Using artificial intelligence, researchers say, they’ve found a new type of antibiotic that works against a particularly menacing drug-resistant bacteria.” When the researchers “tested the antibiotic on the skin of mice that were experimentally infected with” Acinetobacter baumannii, “it controlled the growth of the bacteria, suggesting that the method could be used to create antibiotics tailored to fight other drug-resistant pathogens.” Additionally, “the compound identified by AI worked in a way that stymied only the problem pathogen.” The findings were published in Nature Chemical Biology.

Professor And Math Coach Lectures Students On Ways To Outsmart AI

The Wall Street Journal (5/25, Cohen, Subscription Publication) reports Carnegie Mellon University professor and Team USA coach Po-Shen Loh is traveling to 65 cities to give talks to students and parents on AI. He advises his audiences to respond to the rise of AI and ChatGPT by embracing their humanity. Loh urges individuals and businesses to focus on what distinguishes humans from AI, such as creativity, emotions, and problem-solving.

Professors Work To Upskill In AI For Teaching During Summer Break

Inside Higher Ed (5/26, D'Agostino) reports Microsoft and Google “are moving forward with integrating artificial intelligence text-generation into the environments where modern humans write. As the pace of progress in AI writing tools accelerates, faculty members across the summer spectrum face a shared challenge: How can they upskill in AI for teaching and learning – fast?” Now, some academics and institutions “are offering AI faculty workshops or, in the words of Anna Mills, English instructor at the College of Marin, ‘safe spaces where we don’t feel overwhelmed by the fire hose of [AI] information and hot takes.’” In the summer faculty AI workshops, “some plan to take their first tentative steps in redesigning assignments to recognize the AI-infused landscape,” while others “expect to evolve their in-progress teaching-with-AI practices.” Still, “many worry that the efforts will fall short of meeting demand.”
