Groups keyboard shortcuts have been updated
Dismiss
See shortcuts

Dr. T's AI brief

70 views
Skip to first unread message

dtau...@gmail.com

unread,
Mar 2, 2024, 12:59:41 PM3/2/24
to ai-b...@googlegroups.com

'AI Godfather', Others Urge More Deepfake Regulation

More than 400 AI experts and executives from various industries, including AI "godfather" and ACM A.M. Turing Award laureate Yoshua Bengio, signed an open letter calling for increased regulation of deepfakes. The letter states, "Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed." The letter provides recommendations for regulation, such as criminal penalties for individuals who knowingly produce or facilitate the spread of harmful deepfakes, and requiring AI companies to prevent their products from creating harmful deepfakes.
[
» Read full article ]

Reuters; Anna Tong (February 21, 2024)

 

'Unhackable' Computer Chip Works on Light

An "unhackable" computer chip that uses light instead of electricity for computations was created by researchers at the University of Pennsylvania (UPenn) to perform vector-matrix multiplications, widely used in neural networks for development of AI models. Since the silicon photonic (SiPh) chip can perform multiple computations in parallel, there is no need to store data in a working memory while computations are performed. That is why, UPenn's Firooz Aflatouni explained, "No one can hack into a non-existing memory to access your information."
[ » Read full article ]

Interesting Engineering; Ameya Paleja (February 16, 2024)

 

Tech Companies Agree to Combat AI-Generated Election Trickery

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok announced a joint effort to combat AI-generated images, audio, and video designed to sway elections. Announced at the Munich Security Conference on Friday, the initiative, which also will include 12 other major technology companies, outlines methods the companies will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. Participants will share best practices and provide “swift and proportionate responses” when fake content starts to spread.
[ » Read full article ]

Associated Press; Matt O'Brien; Ali Swenson (February 16, 2024)

 

The Seeing Eye Dog V2.0

Researchers at the University of Glasgow in Scotland showed off the latest iteration of their RoboGuide, an AI-powered quadruped robot designed to assist visually impaired people. RoboGuide uses sensors to map and assess its surroundings. Software developed by the team help it learn optimal routes between locations and interpret sensor data in real time to help the robot avoid moving obstacles. The RoboGuide incorporates large language model technology, so it can understand questions and comments from users and provide verbal responses.
[ » Read full article ]

New Atlas; Mike Hanlon (February 16, 2024)

University Of Michigan Stops Work On AI System After Vendor Offers To Sell Student Data

Inside Higher Ed Share to FacebookShare to Twitter (2/19, Coffey) reports the University of Michigan “said it asked one of its vendors to stop work, following an offer on social media to sell student data to train artificial intelligence.” Last Thursday, a Google employee “posted on the social media site X a screenshot of a sponsored message received on LinkedIn.” The message, “from an unknown company, said the University of Michigan was ‘licensing academic speech data and student papers’ that could ‘be very useful for training or tuning LLMs,’ or large language models, which are used to train artificial intelligence.” The message “said the potential training materials included 829 student papers, 65 speech events and 85 hours of audio recordings.” University spokesperson Colleen Mastony told Inside Higher Ed in a statement, “Student data was not and has never been for sale by the University of Michigan. The [message] in question was sent out by a new third party vendor that shared inaccurate information and has since been asked to halt their work.”

AI Researchers Turn To Self-Learning To Improve Generative AI Models

The Atlantic Share to FacebookShare to Twitter (2/16, Wong) discusses how “tech corporations appear more and more stuck” in improving their generative AI models, due to a lack of training data and “the costly and slow process of using human evaluators.” In response, AI researchers “are exploring a new avenue to advance their products: They’re using machines to train machines.” Various companies and academic laboratories “have all published research that uses an AI model to improve another AI model, or even itself, in many cases leading to notable improvements.” The Atlantic also discusses the limits of this approach. AWS AI VP of Applied Science Stefano Soatto “compared self-learning to buttering a dry piece of toast. Imagine an AI model as a piece of bread, and its initial training process as placing a pat of butter in the center. At its best today, the self-learning technique simply spreads the same butter around more evenly, rather than bestowing any fundamentally new skills. Still, doing so makes the bread taste better.”

Lawmakers “Slow-Walking” Regulation Even As AI Use In Healthcare Increases

Politico Share to FacebookShare to Twitter (2/18, Reader) reported physicians “are already using unregulated artificial intelligence tools such as note-taking virtual assistants and predictive software that helps them diagnose and treat diseases.” Lawmakers have “slow-walked regulation of the fast-moving technology because the funding and staffing challenges facing agencies like the Food and Drug Administration in writing and enforcing rules are so vast. It’s unlikely they will catch up any time soon.” This “means the AI rollout in health care is becoming a high-stakes experiment in whether the private sector can help transform medicine safely without government watching.”

AI-Powered Tutoring Bot Struggled With Basic Math During Tests

The Wall Street Journal Share to FacebookShare to Twitter (2/16, Barnum, Subscription Publication) reported educator Sal Khan’s education nonprofit, Khan Academy, has developed a tutoring bot powered by AI and known as Khanmigo. However, AI that’s based on large language models struggles with math, and when a Wall Street Journal reporter tested Khanmigo, powered by ChatGPT, the bot frequently made basic arithmetic errors. It also didn’t know how to round answers, calculate square roots, or correct its mistakes when asked to double-check solutions.

Researchers Are Working To Develop ChatGPT-Powered Robots

In a nearly 4,000-word article, Scientific American Share to FacebookShare to Twitter (2/21, Berreby) reports that in restaurants around the world, robots are cooking meals “in much the same way robots have made other things for the past 50 years: by following instructions precisely, doing the same steps in the same way, over and over.” One University of Southern California student “wants to build a robot that can make dinner,” but says that even after “many cycles of trial and error and thousands of lines of code, that effort will yield a robot that can’t cope when it encounters something its program didn’t foresee.” However, a “large language model” (LLM), such as ChatGPT-3, has “what robots lack: access to knowledge about practically everything humans have ever written.” Some roboticists have been looking to LLMs “as a way for robots to escape the preprogramming limits,” but others are more skeptical, “pointing to LLMs’ occasional weird mistakes, biased language and privacy violations.”

House Leaders Announce Formation Of Bipartisan AI Task Force

Reuters Share to FacebookShare to Twitter reports House Speaker Johnson and Minority Leader Jeffries “said Tuesday they are forming a bipartisan task force to explore potential legislation to address concerns around artificial intelligence.” Reuters adds that they “said the task force would be charged with producing a comprehensive report and consider ‘guardrails that may be appropriate to safeguard the nation against current and emerging threats.’” Rep. Jay Obernolte (R-CA) will chair the 24-member task force and Rep. Ted Lieu (D-CA) will serve as co-chair.

Lack Of Resources Hindering FDA’s Ability To Regulate AI

Politico Share to FacebookShare to Twitter (2/20, Leonard, Cirruzzo) reports, “President Joe Biden has promised a coordinated – and fast – response from his agencies to ensure artificial intelligence safety and efficacy.” However, “regulators like the FDA don’t have the resources they need to preside over technology that, by definition, is constantly changing.” This “lack of resources is a major reason the government hasn’t yet regulated the advanced AI that’s remaking health care.” The piece adds, “FDA Commissioner Robert Califf says he needs to double his staff to properly monitor technology that learns and evolves and can have varying levels of effectiveness in different venues.”

Many School Districts Have Yet To Implement Clear Policies On AI Tools

Education Week Share to FacebookShare to Twitter (2/19, Klein) reported that while it’s been more than a year since the rollout of ChatGPT, “most school districts are still stuck in neutral, trying to figure out the way forward on issues such as plagiarism, data privacy, and ethical use of AI by students and educators.” Seventy-nine percent of educators “say their districts still do not have clear policies on the use of artificial intelligence tools, according to an EdWeek Research Center survey of 924 educators conducted in November and December.” The lack of clear direction is “especially problematic given that the majority of educators surveyed – 56 percent – expect the use of AI tools to increase in their districts over the next year.” When district officials and school principals “sidestep big questions about the proper use of AI, they are inviting confusion and inequity, said Pat Yongpradit, the chief academic officer for Code. Org and leader of Teach AI, an initiative aimed at helping K-12 schools use AI technology effectively.”

        How Schools Can Avoid Chaos When Implementing AI Tools. Education Week Share to FacebookShare to Twitter (2/19, Langreo) reported that “while more teachers are trying out the technology, a majority say they haven’t used AI tools at all, according to the EdWeek Research Center survey conducted last fall.” A popular reason for that resistance, “according to 33 percent of teachers, is that their district hasn’t established a policy on how to use the technology appropriately.” According to a list of strategies culled from “state and organization guidelines,” before deciding to implement AI, “district leaders should think about their district’s mission and vision and figure out where the technology can help achieve those goals. It can help student learning by personalizing content, aiding students’ creativity, and preparing them for future careers.” Teachers and other district staff “also need to know how AI works and how to use it responsibly.”

When K-12 Students Should Be Introduced To AI-Powered Tech

Education Week Share to FacebookShare to Twitter (2/19, Prothero) reported that “while there is broad consensus among education and technology experts that students will need to be AI literate by the time they enter the workforce, when and how, exactly, students should be introduced to this tech is less prescribed.” EdWeek consulted four teachers and two child-development experts “on when K-12 students should start using AI-powered tech and for what purposes. They all agree on this central fact: There is no avoiding AI. Whether they are aware of it or not, students are already interacting with AI in their daily lives when they scroll on TikTok, ask a smart speaker a question, or use an adaptive-testing program in class.” Among other insights, “just like teaching young children that the characters they see in their favorite TV shows are not real, adults need to reinforce that understanding with AI-powered technologies.” Educators should give students “a peek under the hood so they can start to unpack how these technologies work.”

Amazon Report On AI Jobs Offers Look Into Possible Future Of Work

Fast Company Share to FacebookShare to Twitter (2/22, Hess) reports on the increasing anxiety among some workers that AI will replace their jobs. Last year, the World Economic Forum “estimated that 75% of companies are actively looking to adopt technologies like big data, cloud computing, and AI – and that automation will lead to 26 million fewer jobs by 2027.” Companies making investments in AI often say that while the technology may replace some jobs, it will create others. Amazon Global Director of Education Philanthropy Victor Reinoso “echoes this sentiment.” Reinoso said that when Amazon was founded, many of the current careers at the company did not exist. Amazon acknowledges that while “innovation is ongoing,” there are “some foundational skills or literacies that will allow” workers access to new careers. Reinoso oversees Amazon’s “childhood-to-career initiatives.” Last November, the company “announced a new “AI Ready” initiative that promises to provide free AI education and skills training to 2 million people by 2025.” Since then, “Reinoso’s team announced a new study, which found that more than 60% of teachers believe having AI skills will be necessary for their students to obtain high-paying careers of the future.”

NYTimes Analysis: China Relying On US Technology In Effort To Dominate AI Industry

The New York Times Share to FacebookShare to Twitter (2/21, Mozur, Liu, Metz) discusses how China’s efforts to dominate the nascent AI industry may depend on US technology. The Times says, “Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.”

Google Introduces Open Source LLMs

Bloomberg Share to FacebookShare to Twitter (2/21, Subscription Publication) reports Google “is introducing new open large language models that it’s calling Gemma, reversing its general strategy of keeping the company’s proprietary artificial intelligence technology out of public view.” Gemma “will handle text only” and “has been built from the same research and technology used to create the company’s flagship AI model, Gemini, Google said Wednesday in a blog post.” Gemma “will be released in two sizes, one targeted at customers who plan to develop artificial intelligence software using high-capacity AI chips and data centers, and a smaller model for more cost-efficient app building.”

        Reuters Share to FacebookShare to Twitter (2/21) reports that Google “said individuals and businesses can build AI software based on its new family of “open models” called Gemma, for free. The company is making key technical data such as what are called model weights publicly available, it said.”

        TechCrunch Share to FacebookShare to Twitter (2/21, Lardinois) reports, “Google did not provide us with a detailed paper on how these models perform against similar models from Meta and Mistral, for example, and only noted that they are ‘state-of-the-art.’ The company did note that these are dense decoder-only models, though, which is the same architecture it used for its Gemini models (and its earlier PaLM models) and that we will see the benchmarks later today on Hugging Face’s leaderboard.”

White House Enters Debate On Whether AI Systems Should Be “Open-Source” Or “Closed”

The AP Share to FacebookShare to Twitter (2/21, O'Brien) reports the Biden Administration is “wading into a contentious debate about whether the most powerful artificial intelligence systems should be ‘open-source’ or closed.” The White House said Wednesday “it is seeking public comment on the risks and benefits of having an AI system’s key components publicly available for anyone to use and modify.” Tech companies are “divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.”

Google Suspends Gemini AI Chatbot From Generating Pictures Of People

The AP Share to FacebookShare to Twitter (2/22) reports, “Google said Thursday it is temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for ‘inaccuracies’ in historical depictions that it was creating.” This week, Gemini users “posted screenshots on social media of historically white-dominated scenes with racially diverse characters that they say it generated, leading critics to raise questions about whether the company is over-correcting for the risk of racial bias in its AI model.”

Amazon Tells Employees Not To Use Generative AI Tools For Work

Insider Share to FacebookShare to Twitter (2/22, Stewart, Kim) reports that according to internal emails, “Amazon is warning employees not to use third-party generative AI tools for work.” Insider quotes an email from Amazon directing employees to refrain from entering “confidential” information into GenAI tools, adding, “Amazon’s internal third-party generative AI use and interaction policy...warns that the companies offering generative AI services may take a license to or ownership over anything employees input into tools like OpenAI’s ChatGPT.”

DOJ Taps Princeton Professor To Serve As Department’s First AI Officer

Reuters Share to FacebookShare to Twitter reports that on Thursday, the Justice Department has appointed Princeton University professor Jonathan Meyer to serve as its chief science and technology adviser and chief AI officer. The appointment marks the first time the Department has created a role to focus on artificial intelligence.

New Guide Will Help Superintendents Navigate Their Questions About AI

K-12 Dive Share to FacebookShare to Twitter (2/22, Riddell) reports, “When it comes to artificial intelligence, the good news for superintendents is that most people have some idea of what it is at this point.” When asked if they had “experimented with AI in their school districts, nearly all attendees raised their hands during a packed Friday morning session at the National Conference on Education held by AASA, The School Superintendents Association.” However, technical knowledge “shouldn’t be assumed for district leaders or others in the school community,” so the Consortium for School Networking, “a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology. Top-of-mind for many leaders is ensuring that, alongside any awareness of AI that exists in school communities, stakeholders also understand the technology’s limitations.”

        Superintendents Say AI Policies Should Allow Teachers, Students To Make Mistakes. Education Week Share to FacebookShare to Twitter (2/22, Peetz) reports, “When school districts craft or update their policies on the use of artificial intelligence, they should set clear expectations but leave room for students and teachers to make mistakes, according to superintendents who have been leading their schools through establishing guidelines around the use of the powerful technology.” District leaders who have begun to grapple with these challenges “said it’s important to be clear about expectations, particularly for staff members so they have some license to experiment within reasonable boundaries.” In a panel discussion at the Friday conference, superintendents said that “because AI is rapidly evolving and changing and to allow for...experimentation, the expectations the district leaders have set center on what not to do, rather than what to do.”

Column: How Khan Academy’s AI-Powered Tutoring Model Improved ChatGPT

In his column for The Washington Post Share to FacebookShare to Twitter (2/22), Josh Tyrangiel says, “Remember when people were furious about kids using ChatGPT to cheat on their homework?” The furious included Sal Khan, the founder of Khan Academy – “the nonprofit online educational empire with more than 160 million registered users in more than 190 countries.” Unknown to the world, “he had signed a nondisclosure agreement with OpenAI and had been working for months to figure out how Khan Academy could use generative artificial intelligence, even securing beta access to GPT-4 for 50 of his teachers, designers and engineers at a time when most of OpenAI’s own employees couldn’t get log-ins.” However, after infusing GPT “with its own database of lesson plans, essays and sample problems, Khan Academy improved accuracy and reduced hallucinations.” The result is Khanmigo, “a safe and accurate tutor, built atop ChatGPT, that works at the skill level of its users – and never coughs up answers.”

dtau...@gmail.com

unread,
Mar 3, 2024, 8:40:19 PM3/3/24
to ai-b...@googlegroups.com

Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
OpenAI's CEO, Sam Altman, is in talks with investors, including the United Arab Emirates government, to raise funds for a massive tech initiative. The project aims to expand chip-building capacity and power artificial intelligence (AI) systems, potentially requiring $5 trillion to $7 trillion in investment. Altman seeks to address the scarcity of AI chips and boost OpenAI's quest for artificial general intelligence. The fundraising plans face significant obstacles but could revolutionize the semiconductor industry and AI infrastructure. Altman is pitching a partnership between OpenAI, investors, chip makers, and power providers to build new chip foundries. (WSJ.COM)

 

Nvidia Pilots Chat with RTX Demo in Conversational AI Push
Nvidia has released a technology demo called Chat with RTX, which allows users to customize a chatbot with locally hosted content on Windows PCs. The tool, powered by Nvidia's AI platform RTX, enables users to connect PC files and other information sources to create contextually relevant responses. The demo aims to attract security-conscious enterprises by running locally on Windows RTX PCs, allowing the processing of sensitive data without sharing it with third parties or connecting to the internet. The emergence of AI-ready PCs is expected to drive growth in the overall PC market in the coming years. (CIODIVE.COM)

 

Deepfake Democracy: AI Technology Complicates Election Security
The increasing prevalence of AI technology, particularly in the form of deepfakes, poses new threats to election security. Malicious actors can leverage AI platforms to conduct mass influence campaigns, automated trolling, and spread deepfake content, undermining public trust in the electoral process. The automation and sophistication of AI-generated content can lead to highly convincing disinformation campaigns, potentially polarizing citizens and exacerbating divisions. Defending against these threats requires awareness, training, and potential regulation to mitigate the risks associated with AI technology. (DARKREADING.COM)

 

How Tech Giants Turned Ukraine Into an AI War Lab
Since Russia's invasion, Ukraine has partnered with Western tech companies like Palantir, Microsoft, and Clearview AI to use the country as a testing ground for new AI and defense technologies. Palantir has provided advanced data analytics software to support targeting and decision-making. Clearview's facial recognition is being used to identify Russian soldiers. Ukraine is pitching itself as an R&D hub, attracting tech investment and partnerships. Critics warn the lack of oversight risks abuse and global proliferation of these new capabilities developed by private companies for commercial gain. (TIME.COM)

 

Microsoft Finds Evidence of China, Russia, Iran, North Korea Using AI in Cyber Operations
According to a report by Microsoft, hacking groups affiliated with China, Russia, North Korea, and Iran are increasingly leveraging artificial intelligence (AI) technologies to enhance their cyber and espionage activities. The report highlights the potential for AI to fuel an increase in cyberattacks and reshape the global cyber threat landscape. Chinese groups Charcoal Typhoon and Salmon Typhoon were observed using AI to augment their cyberattacks, while Russian group Forest Blizzard used AI for researching satellite and radar technologies. North Korean group Emerald Sleet utilized AI to improve phishing emails, and Iranian group Crimson Sandstorm utilized AI to create phishing emails and evade detection. The findings emphasize the need for increased attention to AI-driven cyber threats. (POLITICOPRO.COM)

 

Israel’s AI Can Produce 100 Bombing Targets a Day in Gaza. Is This the Future of War?
The Israel Defense Forces (IDF) are reportedly using an AI system called Habsora to select targets in the war on Hamas in Gaza. The system, which can find more bombing targets, link locations to Hamas operatives, and estimate civilian deaths in advance, raises questions about the ethical implications of AI in conflict and the potential dehumanization of adversaries. AI targeting systems have the potential to reshape the character of war, increase the speed of warfare, and create challenges in ethical deliberation. The use of machine learning algorithms in targeting practices may have implications for civilian casualties and the proportionality of force. (THECONVERSATION.COM)

 

AI Girlfriends and Boyfriends Harvest Personal Data, Study Finds
A study by Mozilla's *Privacy Not Included project reveals that AI romance chatbots, including CrushOn.AI, collect and sell shockingly personal information, violating user privacy. These chatbots, marketed as enhancing mental health and well-being, actually thrive on dependency and loneliness while prying for data. Most apps sell or share user data, have poor security measures, and use numerous trackers for advertising purposes. Additionally, some apps have made questionable claims about improving mood and well-being, despite disclaimers stating they are not healthcare providers. (GIZMODO.COM)

 

The Crow Flies at Midnight - Exploring Red Team Persistence via AWS Lex Chatbots
This blog post explores the use of AWS Lex chatbots as a persistence method for red teamers in cybersecurity. While it may not be a practical technique, it provides hands-on experience with a service commonly used in the AI industry. The post includes a hypothetical scenario and a step-by-step guide on modifying a Lambda function to demonstrate persistence. (MEDIUM.COM)

 

How AI Is Strengthening XDR To Consolidate Tech Stacks
Artificial intelligence (AI) is playing a crucial role in enhancing extended detection and response (XDR) platforms by analyzing behaviors and detecting threats in real-time. XDR is being adopted by CISOs and security teams for its ability to consolidate functions and provide a unified view of attack surfaces. Leading XDR vendors are leveraging AI and machine learning (ML) to consolidate tech stacks and improve prediction accuracy, closing gaps in identity and endpoint security. AI has the potential to strengthen XDR in areas such as threat detection and response, behavioral analysis, reducing false positives, and automating threat hunting.  (VENTUREBEAT.COM)

 

AI Platform p0 Helps Developers Identify Red Flags and Avoid DDOS Attacks
p0, an AI startup, aims to assist developers in identifying and resolving issues in their code that could lead to crashes and other problems. Using generative AI, p0 analyzes code to detect security vulnerabilities such as speed issues, timeout problems, data integrity failures, and validation issues. The platform offers a free option on the cloud or a local setup, as well as a paid version for enterprises. Recently, p0 raised $6.5 million in funding and enables users to log in with GitHub and connect their Git code repositories for code scans and identification of potential attacks. (ITBREW.COM)

 

AI-Generated Voices in Robocalls Can Deceive Voters. The FCC Just Made Them Illegal.
The Federal Communications Commission (FCC) has unanimously ruled that robocalls containing voices generated by artificial intelligence (AI) are illegal. The ruling empowers the FCC to fine companies that use AI voices in their calls and provides mechanisms for call recipients to file lawsuits. This decision comes in response to AI-generated robocalls that mimicked President Joe Biden's voice during the New Hampshire primary. The FCC's chairwoman, Jessica Rosenworcel, emphasized the need to act against these deceptive calls, which can misinform voters and impersonate celebrities. (APNEWS.COM)

 

State-Backed Hackers Experimenting with OpenAI Models
Hackers from China, Iran, North Korea, and Russia are exploring the use of large language models (LLMs) in their operations, according to a report by Microsoft and OpenAI. While no notable attacks have been observed, the report highlights how hackers are using LLMs for research, crafting spear-phishing emails, and improving code generation. The report also emphasizes the need for monitoring and preventing the abuse of AI models by state-backed hackers, with Microsoft announcing principles to address this issue and collaborate with other stakeholders. (CYBERSCOOP.COM)

 

Iranian Hackers Broadcast Deepfake News in Cyber Attack on UAE Streaming Services
Iranian state-backed hackers disrupted TV streaming services in the UAE by broadcasting a deepfake newsreader delivering a fabricated report on the war in Gaza. The hackers, known as Cotton Sandstorm, used AI-generated technology to present unverified images and false information. This marks the first time Microsoft has detected an Iranian influence operation using AI as a significant component. The incident highlights the potential risks of deepfake technology in disrupting elections and spreading disinformation. (READWRITE.COM)

 

Google Rebrands its AI Services as Gemini, Launches New App and Subscription Service
Google has introduced the Gemini app, a free artificial intelligence app that allows users to rely on technology for tasks such as writing and interpreting. The app will be available for Android smartphones and will eventually be integrated into Google's search app for iPhones. Google also plans to offer an advanced subscription service called Gemini Advanced, which will provide more sophisticated AI capabilities for a monthly fee of $20. The rollout of Gemini highlights the growing trend of bringing AI to smartphones and intensifies the competition between Google and Microsoft in the AI space. (THEHILL.COM)

 

What to Know About the 200-Member AI Safety Alliance
The newly formed U.S. AI Safety Institute Consortium (AISIC) has over 200 members, including big tech companies like Google, Microsoft, NVIDIA, and OpenAI. The consortium, housed under the National Institutes of Standards and Technology's U.S. AI Safety Institute, aims to shape guidelines and evaluations around AI features, risk management, safety, security, and other AI guardrails. This initiative aligns with the Biden administration's executive order on AI, which emphasizes the need for responsible AI practices and sharing safety results with the government. (CIODIVE.COM)

 

AI in Finance: Revolutionising the Future of Financial Services
Artificial Intelligence (AI) is transforming the financial industry, improving efficiency and customer experiences. It streamlines processes, reduces costs, and enables personalized services. However, challenges include data privacy, bias, talent shortage, and regulatory compliance. Use cases include fraud detection, credit risk assessment, and robo-advisory. Regulatory frameworks are evolving to address AI's impact on privacy. To unlock AI's full potential, organizations should invest in talent development and collaborate with regulators and technology partners. The future of AI in finance holds continuous evolution and opportunities for growth while emphasizing ethical use and responsible AI application. (IOSPEED.COM)

 

Cyber Startup Armis Buys Firm That Sets ‘Honeypots’ for Hackers
Armis, a cyber security startup, has acquired CTCI, a company that uses artificial intelligence to create a network of decoy systems to attract and trap hackers. This acquisition is part of Armis' broader strategy to expand its offerings in the cyber security market. (BLOOMBERG.COM)

dtau...@gmail.com

unread,
Mar 6, 2024, 8:28:14 AM3/6/24
to ai-b...@googlegroups.com

Disrupting Malicious Uses of AI by State-Affiliated Threat Actors
OpenAI is taking a multi-pronged approach to combat the use of its platform by malicious state-affiliated actors. This includes monitoring and disrupting their activities, collaborating with industry partners to exchange information, iterating on safety mitigations, and promoting public transparency. OpenAI aims to stay ahead of evolving threats and foster collective defense against malicious actors while continuing to provide benefits to the majority of its users. (OPENAI.COM)

 

OpenAI Joins Race to Make Videos from Text Prompts
OpenAI has unveiled Sora, its new tool that can transform a text prompt into a one-minute video. Sora, still in the research stage, uses a diffusion model to generate complex scenes with multiple characters and accurate details. OpenAI emphasizes that Sora will not be widely available yet, as the company continues to address safety concerns and seeks feedback from testers to improve the model. Other tech giants like Meta, Google, and Runway have also introduced their own text-to-video engines. (AXIOS.COM)

 

OpenAI CEO Warns That 'Societal Misalignments' Could Make Artificial Intelligence Dangerous
OpenAI CEO Sam Altman has expressed concerns about the potential dangers of artificial intelligence (AI), specifically highlighting the risks posed by "very subtle societal misalignments." Altman emphasized the need for oversight and regulation of AI, suggesting the establishment of a body similar to the International Atomic Energy Agency. While acknowledging the importance of ongoing discussions and debates, Altman believes that an action plan with global buy-in is necessary in the coming years. Altman also stated that the AI industry should not be solely responsible for creating regulations governing AI. (APNEWS.COM)

 

What Using Security to Regulate AI Chips Could Look Like
An exploratory research proposal recommends regulating AI chips and implementing stronger governance measures to keep up with rapid AI innovations. The proposal suggests auditing the development and use of AI systems and implementing security features like limiting performance and remotely disabling rogue chips. However, industry experts express concerns about the impact of security features on AI performance and the challenges of implementing such measures. Suggestions include limiting bandwidth between memory and chip clusters and remotely disabling chips, but the effectiveness and technical implementation of these measures remains uncertain. (DARKREADING.COM)

 

Protect AI's February 2024 Vulnerability Report
Protect AI discovered critical vulnerabilities in February 2024, enabling server takeovers, file overwrites, and data loss in popular open-source AI tools, including Triton Inference Server, Hugging Face transformers, MLflow, and Gradio. All issues were responsibly disclosed with fixes released or forthcoming. (PROTECTAI.COM)

 

The True Energy Cost of AI: Uncertain and Variable
Estimates for the energy consumption of AI are incomplete and contingent, with companies like Meta, Microsoft, and OpenAI keeping this information secret. Training large language models like GPT-3 can consume as much power as 130 US homes annually, while the energy usage for inference tasks varies widely depending on the model and use case. The lack of transparency and standardized data on AI energy consumption makes it difficult to determine the true environmental impact of AI. Efforts such as introducing energy star ratings for AI models and questioning the necessity of using AI for certain tasks may be necessary to address the issue. (THEVERGE.COM)

 

Using AI in a Cyberattack? DOJ's Monaco Says Criminals Will Face Stiffer Sentences
Deputy Attorney General Lisa Monaco directs federal prosecutors to impose harsher penalties on cybercriminals who employ artificial intelligence (AI) in their crimes. Monaco emphasizes the need to prioritize AI in enforcement efforts, recognizing its potential to amplify the danger associated with criminal activities. The DOJ aims to deter criminals by demonstrating that the malicious use of AI will result in severe consequences. Additionally, the department is exploring ways to implement AI responsibly while respecting privacy and civil rights. (THERECORD.MEDIA)

 

EU AI Act: What It Means for Research and ChatGPT
The EU AI Act, the world's first comprehensive AI regulation, imposes strict rules on high-risk AI models and aims to ensure safety and respect for fundamental rights. Researchers are divided on its impact, with some welcoming it for encouraging open science while others worry about potential stifling of innovation. The law exempts AI models developed purely for research, but researchers will still need to consider transparency and potential biases. Powerful general-purpose models, like GPT, will face transparency requirements and stricter obligations under a two-tier system. The act aims to promote open-source AI, unlike the US approach. Enforcement and evaluation of models will be overseen by an AI Office within the European Commission. (NATURE.COM)

 

FTC Wants to Penalize Companies for Use of AI in Impersonation
The US Federal Trade Commission (FTC) is proposing new rules to hold companies accountable for the use of generative artificial intelligence (AI) technology in impersonation scams. The FTC is seeking public input on the rule, which would make companies liable if they are aware or have reason to believe that their technology is being used to harm consumers through impersonation. The FTC is also finalizing a rule that addresses impersonations of businesses and government entities. The agency has observed a surge in complaints related to impersonation fraud and is concerned about the potential for AI to exacerbate this issue. (BLOOMBERG.COM)

 

Congress Should Enable Private Sector Collaboration To Reverse The Defender's Dilemma
A new bill proposes removing barriers to cooperation between companies and allowing them to share cyber threat information. This would help leverage AI capabilities across platforms to identify vulnerabilities and strengthen defenses for organizations of all sizes against continuously evolving attacks. (GOOGLE.COM)

 

Top National Security Council Cybersecurity Official on Institutions Vulnerable to Ransomware Attacks - "The Takeout"
According to Ann Neuberger, the deputy national security adviser for cyber and emerging technology, hospitals and schools are particularly vulnerable to ransomware attacks, often carried out by Russian cybercriminals. The US government is working to enhance cyber defenses in these institutions, utilizing artificial intelligence tools for quicker detection and source identification. The Biden administration is taking action by equipping companies with cybersecurity practices, dismantling cyberinfrastructure used by criminals, and collaborating with international partners to address cryptocurrency movement and money laundering. Neuberger emphasizes the importance of AI-driven defense to stay ahead or closely behind AI-driven offense, highlighting the need for speed in cybersecurity. Neuberger's comments were made prior to the public reference to a non-specific "serious national security threat" related to Russian capabilities in space. (CBSNEWS.COM)

 

AI Governance: A Comprehensive Guide To Developing An Acceptable Use Policy
This article provides a guide to developing an Acceptable Use Policy for governing employees' use of generative AI tools. It outlines key aspects to address, such as identifying tools and risks, defining guidelines, setting security controls, and socializing the policy among employees through training and accessible documentation. (MILLENNIUMWEB.COM)

 

Slack Launches AI Upgrades for Enterprise Customers
Slack is introducing native generative AI capabilities for enterprise customers, including thread summaries, channel recaps, and improved search results. Users can opt-in to access AI-generated summaries for specific threads or channels, saving time and facilitating catch-up. Pricing details for the AI upgrades have not been disclosed. Slack has been piloting these features since September, with early testers reporting time savings of 97 minutes per week on average. The company plans to roll out Slack AI in phases, with small business customers gaining access in the coming weeks. (CIODIVE.COM)

 

Scale AI to Set the Pentagon's Path for Testing and Evaluating Large Language Models
Scale AI has been chosen by the Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) to develop a testing and evaluation framework for large language models (LLMs). This one-year contract aims to create a means of deploying AI safely, measuring model performance, and providing real-time feedback for military applications. The framework will address the complexities and uncertainties associated with generative AI, including the creation of "holdout datasets" and evaluation metrics. Scale AI will work closely with the DOD to enhance the robustness and resilience of AI systems in classified environments. (DEFENSESCOOP.COM)

dtau...@gmail.com

unread,
Mar 7, 2024, 8:22:30 AM3/7/24
to ai-b...@googlegroups.com

Qualcomm Chip Brings AI to Wi-Fi

Qualcomm showcased its FastConnect 7900 chip at the Mobile World Congress in Spain on Monday. The company said the FastConnect 7900 will enable AI-enhanced Wi-Fi 7; facilitate the integration of Wi-Fi, Bluetooth, and ultra-wideband for consumer applications; and support two Wi-Fi connections to the same device in the same spectrum band. The chip can identify which applications are being used by a device, then optimize power and latency accordingly, saving the device up to 30% in power consumption.
[ » Read full article ]

IEEE Spectrum; Michael Koziol (February 27, 2024)

 

Scientists Putting LLM Brains Inside Robot Bodies

Robotics researchers are using large language models (LLMs) to skirt preprogramming limits. Computer scientists at the University of Southern California developed ProgPrompt, which involves giving an LLM prompts in the Python programming language that include a sample question and solution to help restrict its answers to the range of tasks the robot can perform. Google researchers have developed a strategy that involves giving a list of behaviors that can be performed by the robot to the PaLM LLM, which responds to human requests to the robot in conversational language with a behavior from the list.
[ » Read full article ]

Scientific American; David Berreby (February 21, 2024)

 

New ‘Magic’ Gmail Security Uses AI And Is Here Now, Google Says
Google introduces its AI Cyber Defense Initiative, including the open-source Magika tool, to enhance Gmail security by detecting problematic content and identifying malware with high accuracy. The initiative also involves investing in AI-ready infrastructure, releasing new tools, and providing research grants to advance AI-powered security. (FORBES.COM)

 

Cybercriminals Utilize Meta's Llama 2 AI for Attacks, Says CrowdStrike
CrowdStrike's Global Threat Report reveals that cybercriminals, specifically the group Scattered Spider, have started using Meta's Llama 2 large language model to generate scripts for Microsoft's PowerShell tool. The generated scripts were employed to download login credentials from a North American financial services victim. Detecting generative AI-based attacks remains challenging, but the report predicts an increase in malicious use of AI as its development progresses. Cybersecurity experts also highlight the potential for misinformation campaigns during the multitude of government elections taking place this year. (ZDNET.COM)

 

A Top White House Cyber Official Sees the ‘Promise and Peril’ in AI
Anne Neuberger, the deputy national security adviser for cyber, spoke with WIRED about emerging technology issues such as identifying new national security threats from traffic cameras and security concerns regarding software patches for autonomous vehicles. She also discussed advancements in threats from AI and the next steps in the fight against ransomware. (WIRED.COM)

 

Shifting Trends in Cyber Threats
The 2024 Threat Index report by IBM X-Force reveals changing trends in cyber threats, including a decline in ransomware attacks but a rise in infostealing methods and attacks on cloud services and critical infrastructure. The report emphasizes the need for constant vigilance and adaptation to combat these evolving threats. Additionally, the report highlights the potential risks posed by AI-driven cyberattacks, urging proactive measures to secure AI systems. Organizations must adopt comprehensive cybersecurity strategies to effectively detect and mitigate emerging threats in this dynamic landscape. (CYBERMATERIAL.COM)

 

83 Percent of Doctors in New Survey Say AI Could Help Fight Burnout
A survey conducted by Athenahealth reveals that 83 percent of physicians believe that artificial intelligence (AI) could help alleviate burnout in the healthcare industry. However, concerns about the loss of human touch and the potential complications caused by AI were also expressed by the majority of respondents. If AI can reduce administrative work and increase efficiency, it could benefit the medical field by allowing doctors to refocus on patient care and address issues of staff shortages and retention struggles. The survey polled 1,003 doctors and was conducted by The Harris Poll. (THEHILL.COM)

dtau...@gmail.com

unread,
Mar 9, 2024, 8:20:49 AM3/9/24
to ai-b...@googlegroups.com

Malware Worm Can Poison ChatGPT, Gemini-Powered Assistants

A "zero-click" AI worm able to launch an "adversarial self-replicating prompt" via text and image inputs has been developed by researchers at Cornell University, Intuit, and Technion—Israel Institute of Technology to exploit OpenAI’s ChatGPT-4, Google’s Gemini, and the LLaVA open source AI model. In a test of affected AI email assistants, the researchers found that the worm could extract personal data, launch phishing attacks, and send spam messages. The researchers attributed the self-replicating malware’s success to “bad architecture design” in the generative AI ecosystem.
[
» Read full article ]

PC Magazine; Kate Irwin (March 1, 2024)

 

AI Warfare Is Already Here

In recent weeks, the U.S. Department of Defense's Maven Smart System was used to identify rocket launchers in Yemen and surface vessels in the Red Sea and assisted in narrowing down targets in Iraq and Syria. Maven, which merges satellite imagery, sensor data, and geolocation data into a single computer interface, uses machine learning to identify personnel and equipment on the battlefield and detect weapons factories and other objects of interest in various environmental conditions.
[
» Read full article *May Require Paid Registration ]

Bloomberg; Katrina Manson (February 28, 2024)

 

AI Chatbots Not Ready for Election Prime Time, Study Shows

A software portal developed by researchers at the AI Democracy Projects assessed whether popular large language models can handle questions about topics related to national elections around the globe. Open AI's GPT-4, Alphabet's Gemini, Anthropic's Claude, Meta's Llama 2, and Mistral AI's Mixtral were asked election-related questions. Of the 130 responses, slightly more than 50% were found to be inaccurate, and 40% were deemed harmful. The most inaccurate models were Gemini, Llama 2, and Mixtral, and the most accurate was GPT-4. Meanwhile, Gemini had the most incomplete responses, and Claude had the most biased answers.
[
» Read full article *May Require Paid Registration ]

Bloomberg; Antonia Mufarech (February 27, 2024)

 

AI Is Being Built on Dated, Flawed Motion-Capture Data

A study by a University of Michigan-led research team found that the motion-capture data used to design some AI-based applications is flawed and could endanger users outside the parameters of the preconceived "typical" body type. The benchmarks and standards used by developers of fall detection algorithms for smartwatches and pedestrian-detection systems for self-driving vehicles, among other technologies, do not include representations of all body types. In a systemic literature review of 278 studies as far back as the 1930s, the researchers found that the data captured for most motion-capture systems were from white able-bodied men "of unremarkable weight." Some studies used data from dismembered cadavers.
[
» Read full article ]

IEEE Spectrum; Julianne Pepitone (March 1, 2024)

 

Your Doctor's Office Might Be Bugged

More physician practices are implementing ambient AI scribing, in which AI listens to patient visits and writes clinical notes summarizing them. In a recent study of the Permanente Medical Group in Northern California, more than 3,400 doctors have used ambient AI scribes in more than 300,000 patient encounters since October. Doctors reported that the technology reduced the amount of time spent on after-hours note writing and allowed for more meaningful patient interactions. However, its use raises concerns about security, privacy, and documentation errors.
[
» Read full article ]

Forbes; Jesse Pines (March 4, 2024)

 

AI Enables Phones to Detect Depression from Facial Cues

The MoodCapture smartphone app that leverages AI and facial-image processing software can determine when a user is depressed based on their facial cues. The app, developed by Dartmouth College researchers, could pave the way for early diagnoses and real-time digital mental-health support. The app was 75% accurate in detecting symptoms in a study of 177 individuals with a diagnosis of major depressive disorder.
[
» Read full article ]

UPI; Susan Kreimer (February 27, 2024)

 

Google Acknowledges That AI Image-Generator Can ‘Overcompensate’ For Diversity

The AP Share to FacebookShare to Twitter (2/23, O'Brien) reported Google “apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would ‘overcompensate’ in seeking a diverse range of people even when such a range didn’t make sense.” The AP continues, “The partial explanation for why its images put people of color in historical settings where they wouldn’t normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.”

 

Students Use AI To Develop Autonomous Bikes, Homework Helpers, 911 Chatbots

The Seventy Four Share to FacebookShare to Twitter (2/25, Toppo) reports “students as young as 15 are seizing on ChatGPT and similar applications to solve problems and have fun,” though many educators and policymakers “still fear that students will primarily use the technology for cheating.” Students are not only “fearless about AI, they’re building their studies and future professional lives around it.” The 74 went looking for young people “diving head-first into AI and found several doing substantial research and development as early as high school.” The six students they found “are thinking much more deeply about AI than most adults, their hands in the technology in ways that would have seemed impossible just a generation ago. Many are immigrants to the West or come from families that emigrated here.” The students are programming “everything from autonomous bicycles to postpartum depression apps for new mothers to 911 chatbots, homework helpers and Harry Potter-inspired robotic chess boards.”

 

Colleges Still Uncertain Of AI’s Long-Term Impacts On Campus

The Chronicle of Higher Education Share to FacebookShare to Twitter (2/26, Swaak) reports, “In the 15 months since OpenAI released ChatGPT, generative AI – a type of artificial intelligence – has generated a mercurial mix of excitement, trepidation, and rebuff across all corners of academe.” While some instructors and college campuses are embracing the tools, others “have been steering clear, deeming the tech too confusing or problematic.” There is “nearly unanimous agreement from sources The Chronicle spoke with for this article: Generative AI, or GenAI, has brought the field of artificial intelligence across an undefined yet critical threshold, and made AI accessible to the public in a way it wasn’t before.” But GenAI’s role in higher education “over the long run remains an open question,” as AI technologies “are maturing rapidly, while colleges are historically slow to evolve.”

 

Survey: Why Few Superintendents Are Prioritizing AI In K-12 Education

K-12 Dive Share to FacebookShare to Twitter (2/26, Merod) reports while a “majority of superintendents understand the importance of artificial intelligence and its potential impact on K-12 education, only a small fraction of district leaders see AI as a ‘very urgent’ need this year, according to a survey released this month by EAB, an education consulting firm.” According to the survey, for superintendents, “recruiting and hiring qualified teachers is the most pressing issue to tackle in their districts this school year.” Fifty-two percent of superintendents said teacher staffing is “very urgent,” and 40 percent said it was “mild or moderately urgent.” While superintendents “continue to face myriad challenges,” 63 percent “said they plan to stay in their roles beyond the next two years.”

 

Report: Chatbots Producing Flawed Information On Elections, Potentially Disenfranchising Voters

The AP Share to FacebookShare to Twitter (2/27, Golden) reports that according to a report released Tuesday based on the findings of artificial intelligence experts and a bipartisan group of election officials, AI chatbots “are generating false and misleading information that threatens to disenfranchise voters.” As Super Tuesday approaches, “millions of people already are turning to artificial intelligence-powered chatbots for basic information, including about how their voting process works. Trained on troves of text pulled from the internet, chatbots such as GPT-4 and Google’s Gemini are ready with AI-generated answers, but prone to suggesting voters head to polling places that don’t exist or inventing illogical responses based on rehashed, dated information, the report found.”

        CBS News Share to FacebookShare to Twitter (2/27, Picchi) reports the report, “from AI Democracy Projects and nonprofit media outlet Proof News, comes as the U.S. presidential primaries are underway across the U.S. and as more Americans are turning to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.”

 

Warren Calls For New Restrictions On Big Tech’s Dominance Of AI

Bloomberg Share to FacebookShare to Twitter (2/27, Subscription Publication) reports Sen. Elizabeth Warren (D-MA) on Tuesday “called for a new restriction on major cloud providers Microsoft Corp., Amazon.com Inc. and Alphabet Inc., barring them from developing some of the most promising artificial intelligence technologies.” Warren argued the companies “should not be allowed to use their enormous size to dominate a whole new field, and that means blocking them from operating large language models.” Warren “also called for separating Amazon’s e-commerce platform from its product lines, and breaking up Google’s search business from its browsing services.”

dtau...@gmail.com

unread,
Mar 17, 2024, 1:11:55 PM3/17/24
to ai-b...@googlegroups.com

World's Largest Computer Chip Will Power Supercomputer

Cerebras' Wafer Scale Engine 3 (WSE-3), now the world's largest computer chip, is expected to power the Condor Galaxy 3 supercomputer, which will be used to train future AI systems. The chip, made from an 8.5-inch by 8.5-inch silicon wafer, features 4 trillion transistors and 900,000 AI cores. Currently under construction, the Condor Galaxy 3 will be comprised of 64 Cerebras CS-3 AI system "building blocks" and will generate 8 exaFLOPs of computing power.
[
» Read full article ]

LiveScience; Keumars Afifi-Sabet (March 14, 2024)

 

EU Parliament Approves AI Law

The European Parliament approved far-reaching EU regulations governing AI, with the goal of facilitating innovation while protecting citizens from the risks associated with the fast-developing technology. The so-called AI Act will impose stricter requirements on riskier systems, with bans on the use of AI for predictive policing; most real-time facial recognition in public places; and biometric systems used to infer race, religion, or sexual orientation. The text is slated for endorsement by EU states next month, with publication in the EU's Official Journal expected as early as May.
[
» Read full article ]

France 24 (March 13, 2024)

 

Silicon Valley Is Pricing Academics Out of AI Research

Stanford University's Fei-Fei Li, an ACM Fellow known as the "godmother of AI," pressed President Joe Biden, following his State of the Union address, to fund a national warehouse of computing power and datasets to ensure the nation's leading AI researchers can keep pace with big tech firms. Said Li, "The public sector is now significantly lagging in resources and talent compared to that of industry. This will have profound consequences because industry is focused on developing technology that is profit-driven, whereas public sector AI goals are focused on creating public goods."

[ » Read full article *May Require Paid Registration ]

The Washington Post; Naomi Nix; Cat Zakrzewski; Gerrit De Vynck (March 10, 2024)

 

AI Learning What It Means to Be Alive

With an AI program similar to ChatGPT, Stanford University researchers found that computers could teach themselves biology. Among other things, the foundation model, called Universal Cell Embedding (UCE), discovered Norn cells, rare kidney cells that make the hormone erythropoietin when oxygen levels fall too low, in only six weeks, an achievement that took human scientists over 100 years. UCE learned to classify cells it had never seen previously as one of more than 1,000 different types and also applied its learning to new species.


[
» Read full article *May Require Paid Registration ]

The New York Times; Carl Zimmer (March 10, 2024)

 

Scientists Sign Effort to Prevent AI Bioweapons

Over 90 biologists and other scientists who specialize in technologies used to design new proteins last week signed an agreement that seeks to ensure their AI-aided research will move forward without exposing the world to serious harm. The biologists, who include Nobel laureate Frances Arnold, also said the benefits of current AI technologies for protein design “far outweigh the potential for harm.” The agreement does not seek to suppress the development or distribution of AI technologies, but to regulate the use of equipment needed to manufacture new genetic material.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (March 9, 2024)

 

Researchers Jailbreak Chatbots with ASCII Art

ArtPrompt, developed by researchers in Washington and Chicago, can bypass large language models' (LLMs) built-in security features. The tool generates ASCII art prompts to get AI chatbots to respond to queries they are supposed to reject, like those referencing hateful, violent, illegal, or harmful content. ArtPrompt replaces the "safety word" (the reason for rejecting the submission) with an ASCII art representation of the word, which does not trigger the ethical or security measures that would prevent a response from the LLM.
[ » Read full article ]

Tom's Hardware; Mark Tyson (March 7, 2024)

 

School Introduces India's First AI Teacher Robot

In Kerala, India, an AI teacher robot from Maker Labs has been rolled out at KTCT Higher Secondary School. Known as Iris, the generative AI-powered robot can create lessons tailored to the needs and preferences of individual students. Iris can respond to questions, explain concepts, and provide interactive learning experiences. The robot also can move through learning spaces and manipulate objects with its hands.
[ » Read full article ]

The Times of India; Sanjay Sharma (March 7, 2024)

 

Warren Calls For Cloud Provider Restrictions On AI Development

Bloomberg Share to FacebookShare to Twitter (2/27, Subscription Publication) reports that Senator Elizabeth Warren (D-MA) has proposed new restrictions on major cloud providers such as Amazon and Microsoft, limiting their development capability of large language models (LLMs). She expressed concern at a Washington conference that these tech giants have the potential to dominate the AI sector and inhibit competition due to their scale. “Amazon should not be allowed to use their enormous size to dominate a whole new field, and that means blocking them from operating LLMs,” stated Warren. She also suggested separating Amazon’s e-commerce platform from its product lines, aiming to fragment monopolistic power in the industry.

 

Teachers, Administrators Voice Strong Opinions About Where AI Belongs In K-12 Education

Education Week Share to FacebookShare to Twitter (2/28, Bushweller) reports as the “expanded use of artificial intelligence in K-12 education this school year is prompting very strong feelings,” educators are also creating “new approaches to balance the benefits and drawbacks of the new technology.” While few are calling for “outright bans on large language models like ChatGPT, recognizing that students will have to learn how to use AI in future jobs,” many are still worried that AI, “unchecked, could lead to lazier students and much more cheating.” Educators are “hungry for guidance from their schools, districts, and states on how to use AI for instruction. But they say they are not getting that guidance.” In that survey, scores of respondents “weighed in on the role of AI in education,” with one administrator saying, “The idea of AI being integrated into the education system is inevitable, but scary.”

 

AI Study Finds Two Types of Prostate Cancer

Forbes Share to FacebookShare to Twitter (2/29, Forster) reports that a study led by researchers from the University of Oxford and the University of Manchester has used artificial intelligence (AI) to reveal two distinct types of prostate cancer. The findings, published in Cell Genomics, could advance the development of personalized therapies. Using neural networks on samples from 159 patients, the study revealed two different ways the cancer could evolve, labelled as “evotypes”. The discovery could enhance diagnoses and tailored treatments, improving patient outcomes. Further study into “evotypes” in other forms of cancer is planned.

 

Figure AI Announces $675 Million Funding, OpenAI Partnership

CNBC Share to FacebookShare to Twitter (2/29, Palmer) reports that Figure AI, “a startup working to build humanoid robots that can perform dangerous and undesirable jobs... said Thursday that it raised $675 million at a $2.6 billion valuation from investors including Jeff Bezos, Nvidia, Microsoft and Amazon’s $1 billion Industrial Innovation Fund.” The company sees its Figure 01 general-purpose robots “being put to use in manufacturing, shipping and logistics, warehousing, and retail, ‘where labor shortages are the most severe,’ though its machines aren’t intended for military or defense applications.” CNBC adds, “As part of the deal announced Thursday, Figure said it’s partnering with ChatGPT maker OpenAI to ‘develop next generation AI models for humanoid robots.’ It will also use Microsoft’s Azure cloud services for AI infrastructure, training and storage, Figure said.”

        TechCrunch Share to FacebookShare to Twitter (2/29, Heater) says, “It’s an absolutely mind-boggling sum of money for what remains a small startup, with an 80-person headcount. That last bit will almost certainly change with this round.”

        Also reporting are Reuters Share to FacebookShare to Twitter (2/29), The Information Share to FacebookShare to Twitter (2/29, Subscription Publication), Insider Share to FacebookShare to Twitter (2/29, Mann), and the Financial Times Share to FacebookShare to Twitter (2/29, Subscription Publication).

 

Survey: Educators Share Differing Opinions About When Students Should Be Taught About AI

Education Week Share to FacebookShare to Twitter (2/29, Prothero) reports, “The overwhelming majority of teachers, principals, and district leaders” believe that students “should learn how artificial intelligence works at some point in their K-12 education, according to recently released survey data from the EdWeek Research Center.” According to the survey, “nearly 9 in 10 educators feel that students should be taught how AI works in a developmentally appropriate manner sometime before they graduate from high school.” Meanwhile, six percent of educators “say that the topic shouldn’t be taught until the postsecondary levels and another 6 percent say that AI should never be taught.” The survey shows “different perspectives among teachers, depending on what age group they teach. For instance, while administrators were more likely to say that students should start learning about AI in elementary school than teachers overall, elementary teachers were just as likely as school and district leaders to say that students in grades 3-5 should learn about AI.”

        Most Educators Believe AI Use Will Increase In Schools Next Year. Education Week Share to FacebookShare to Twitter (2/29, Klein) reports the EdWeek Research Center survey also found that “the majority of educators expect use of artificial intelligence tools will increase in their school or district over the next year.” Fifty-six percent of school and district leaders and teachers surveyed “said they anticipate AI use to rise. Most respondents who predicted an increase expected to employ the technology ‘a little’ more, and 6 percent of respondents said they foresee using it ‘a lot’ more.” Another 43 percent “expected their schools’ level of use to remain the same.” Some districts “are already looking for ways the technology might help save educators’ time.”

 

Colleges Still Uncertain Of AI’s Long-Term Impacts On Campus

The Chronicle of Higher Education Share to FacebookShare to Twitter (2/26, Swaak) reported, “In the 15 months since OpenAI released ChatGPT, generative AI – a type of artificial intelligence – has generated a mercurial mix of excitement, trepidation, and rebuff across all corners of academe.” While some instructors and college campuses are embracing the tools, others “have been steering clear, deeming the tech too confusing or problematic.” There is “nearly unanimous agreement from sources The Chronicle spoke with for this article: Generative AI, or GenAI, has brought the field of artificial intelligence across an undefined yet critical threshold, and made AI accessible to the public in a way it wasn’t before.” But GenAI’s role in higher education “over the long run remains an open question,” as AI technologies “are maturing rapidly, while colleges are historically slow to evolve.”

 

Elon Musk Accuses OpenAI Of Breaching Founding Agreement

The New York Times Share to FacebookShare to Twitter (3/1, Satariano, Metz, Mickle) reports, “Elon Musk sued OpenAI and its chief executive, Sam Altman, accusing them of breaching a contract by putting profits and commercial interests in developing artificial intelligence ahead of the public good.” Musk “helped create OpenAI with Mr. Altman and others in 2015,” and “said the company’s multibillion-dollar partnership with Microsoft represented an abandonment of its founding pledge to carefully develop A.I. and make the technology publicly available.” The lawsuit is quoted saying, “OpenAI has been transformed into a closed-source de facto subsidiary of the largest technology company, Microsoft.”

        The Washington Post Share to FacebookShare to Twitter (3/1, Mark, De Vynck, Tiku) reports Musk “is asking the court to block OpenAI from using its products, such as the popular large-language model ChatGPT, for ‘financial benefit,’ including in its multibillion-dollar partnership with Microsoft.”

        Wired Share to FacebookShare to Twitter (3/1, Nast) says, “The lawsuit alleges that the internal design of GPT-4, the company’s latest model, remains secret because Microsoft and OpenAI stand to make a fortune by selling access to the AI model to the public. ‘GPT-4 is hence the opposite of ‘Open AI’,’ the filing reads.”

        The AP Share to FacebookShare to Twitter (3/1) reports, “Under its founding agreement, OpenAI would also make its code open to the public instead of walling it off for any private company’s gains, the lawsuit says. However, by embracing a close relationship with Microsoft, OpenAI and its top executives have set that pact ‘aflame’ and are ‘perverting’ the company’s mission, Musk alleges.” The AP adds, “When the nonprofit board abruptly fired Altman as CEO late last year, for reasons that still haven’t been fully disclosed, it was Microsoft that helped drive the push that brought Altman back as CEO and led most of the old board to resign. Musk’s lawsuit alleged that those changes caused the checks and balances protecting the nonprofit mission to ‘collapse overnight.’”

        CNN Share to FacebookShare to Twitter (3/1, Fung) quotes from the lawsuit: “The public is still in the dark regarding what exactly the Board’s ‘deliberative review process’ revealed that resulted in the initial firing of Mr. Altman. ... However, one thing is clear to Mr. Musk and the public at large: OpenAI has abandoned its ‘irrevocable’ non-profit mission in the pursuit of profit.”

 

Elon Musk’s Lawsuit Against OpenAI Seen As Legally Shaky

The New York Times Share to FacebookShare to Twitter (3/2, Weise, Metz) continues coverage of Elon Musk’s lawsuit against OpenAI, saying Musk “turned claims by the start-up’s closest partner, Microsoft, into a weapon. He repeatedly cited a contentious but highly influential paper written by researchers and top executives at Microsoft” in which the company’s research lab “said that – though it didn’t understand how – GPT-4 had shown ‘sparks’ of ‘artificial general intelligence,’ or A.G.I.” Musk “say[s] it showed how OpenAI backtracked on its commitments not to commercialize truly powerful products.” Bloomberg Share to FacebookShare to Twitter (3/1, Metz, Subscription Publication) reports, “OpenAI ‘categorically disagrees’ with the lawsuit Elon Musk filed against the company, according to an internal memo sent to employees of the artificial intelligence startup.”

        The New York Times Share to FacebookShare to Twitter (3/2, Roose) reports the lawsuit says it “illustrates a paradox that is at the heart of much of today’s A.I. conversation – and a place where OpenAI really has been talking out of both sides of its mouth, insisting both that its A.I. systems are incredibly powerful and that they are nowhere near matching human intelligence.”

        The New York Times Share to FacebookShare to Twitter’ (3/2) “Dealbook” newsletter adds, “Among lawyers, the case has become something of a fascination for a different reason: It poses a series of unique and unusual legal questions without clear precedent. And it remains unclear what would constitute ‘winning’ in a case like this, given that it appears to have been brought out of Musk’s own personal frustration and philosophical differences with Open A.I.”

        ABC News Share to FacebookShare to Twitter (3/1) reports, “The case, legal experts said, centers on the founding agreement alleged by Musk, which he said took place at the outset of the firm. Typically, deals established between a top investor and company leadership are set out in writing with concrete terms, the experts added, leaving Musk in a difficult position as he attempts to invoke what they say appear to be spoken commitments made years ago without a formal contract. For his part, Musk says in the lawsuit that the agreement was memorialized in a legal filing when OpenAI was incorporated.”

        Insider Share to FacebookShare to Twitter (3/2, Kay) reports, “Despite Musk’s grandstanding, his case against OpenAI appears shaky at best, David Hoffman, a contract law expert from the University of Pennsylvania, said. ‘It would be very difficult to claim a breach of contract without a written contract,’ Hoffman said.”

 

Google Cuts Staff From AI Trust And Safety Team

Bloomberg Share to FacebookShare to Twitter (3/1, Subscription Publication) reported Google is reducing its trust and safety team’s staff while requiring members to troubleshoot potential issues with its AI tool, Gemini. Anonymous insiders confirmed the cuts, affecting fewer than 10 employees within a team of around 250. The team’s role is to establish safety measures for Google’s AI products, predicting potential misuse, and conducting risk evaluations.

 

AI-Fueled Data Center Growth Leading To New Sustainability Concerns

The New York Times Share to FacebookShare to Twitter (2/29, Sisson) reported a surge in data center developments fueled by the growing artificial intelligence demands is leading to “new questions on whether they can meet the demand while still operating sustainably.” The industry’s growth in electricity demand “comes as manufacturing in the United States is the highest in the past half-century, and the power grid is becoming increasingly strained,” leading the Uptime Institute to predict Share to FacebookShare to Twitter “that the sector’s myriad net-zero goals, which are self-imposed benchmarks, would become much harder to meet in the face of this demand and that backtracking could become common.”

 

Anthropic Reveals Claude 3 AI Model Suite

Reuters Share to FacebookShare to Twitter (3/4) reports that Anthropic “on Monday revealed a suite of artificial intelligence models known as Claude 3, the latest salvo in Silicon Valley’s near incessant contest to market still-more powerful technology.” According to Anthropic, “the most capable in the family, Claude 3 Opus, outperforms rival models GPT-4 from OpenAI and Gemini 1.0 Ultra from Google on various benchmark exams.”

        The New York Times Share to FacebookShare to Twitter (3/4, Metz) reports Anthropic CEO and co-founder Dario Amodei “said the new technology, called Claude 3 Opus, was particularly useful when analyzing scientific data or generating computer code.” Claude 3 Opus “will be available starting Monday to consumers who pay $20 per month for a subscription. A less powerful version, called Claude 3 Sonnet, is available for free.”

        Bloomberg Share to FacebookShare to Twitter (3/4, Metz, Subscription Publication) reports, “Anthropic has emphasized developing AI safely and responsibly, which has at times limited its performance. For example, older versions of Claude often refused to respond to queries that were harmless, the company said, because they appeared to the software to be problematic. The new models introduced Monday do this much less often, the company said.”

 

AI Can Help Identify New Alzheimer’s Risk Factors, Study Finds

TIME Share to FacebookShare to Twitter (3/5, Park) reports UCSF researchers “ran a machine-learning program on a database of anonymous electronic health records from patients.” The “algorithm was trained to pull out any common features shared by people who were ultimately diagnosed with Alzheimer’s over a period of seven years.” Lead researcher Marina Sirota said, “There were some things we saw that were expected...but some of things we found were novel and interesting.” For instance, “unexpected patterns emerge closer to when people are diagnosed, such as having lower levels of vitamin D.” Overall, “the study shows the power of machine learning in helping scientists to better understand the factors driving diseases as complex as Alzheimer’s, as well as its ability to suggest potential new ways of treating them.”

 

Demand Rising For AI Talent As Other Tech Job Listings Decline

The Wall Street Journal Share to FacebookShare to Twitter (3/5) reports as companies ramp up the recruitment of artificial intelligence professionals, they’re paying a premium for talent. Following the release of ChatGPT, new AI job listings are up 42 percent compared with a low point in December 2022, according to University of Maryland researchers. New IT job listings were 31 percent lower in January compared with December 2022, as the overall market for tech talent continues to trend downward.

 

Researchers Call On Generative AI Companies To Allow Investigators Access To Systems

The Washington Post Share to FacebookShare to Twitter (3/5, Tiku) reports, “More than 100 top artificial intelligence researchers have signed an open letter calling on generative AI companies to allow investigators access to their systems, arguing that opaque company rules are preventing them from safety-testing tools being used by millions of consumers.” The researchers “say strict protocols designed to keep bad actors from abusing AI systems are instead having a chilling effect on independent research. Such auditors fear having their accounts banned or being sued if they try to safety-test AI models without a company’s blessing.” The letter was “sent to companies including OpenAI, Meta, Anthropic, Google and Midjourney.”

 

New Guidance Says Math Educators Should Teach Their Students To Question AI

Education Week Share to FacebookShare to Twitter (3/5, Klein) reports, “Artificial intelligence-powered tools can help students learn math, but educators should also explain why students should be skeptical of the technology, concludes the National Council of Teachers of Mathematics in a recent AI position statement.” AI has potential to “make math teachers’ lives easier, by helping to create quizzes or tests, for example, the NCTM guidance says,” but the technology also tends to “hallucinate” – provide answers that are “untrue or unreasonable.” The organization also “recommends that math educators be on the front lines of developing and testing AI tools aimed at teaching or reinforcing math skills. The good news for math educators: It will take teachers with more math expertise, not less, to properly vet cutting-edge math education technology.”

 

Louisiana Considers Using AI To Help Expand Post-Pandemic Tutoring Efforts

The New Orleans Times-Picayune Share to FacebookShare to Twitter (3/5) reports as Louisiana “looks to jumpstart students’ ongoing academic recovery from the pandemic, policymakers want to go big on tutoring.” A new bill, filed last Friday, “would require schools to provide intensive tutoring to students in kindergarten through 12th grade who test below grade level.” State Superintendent of Education Cade Brumley also wants to make artificial intelligence “one of several options schools can choose from to expand tutoring. Recently, he recalled seeing a demonstration of Amira,” an AI tutor, “and thinking it could bypass some of the funding and staffing hurdles associated with in-person tutors.” To increase all Louisiana students’ access to tutoring, “he wants to give schools a menu of options: pay school staffers to tutor, hire vendors to tutor in person or over video, or use tutoring software – including ones powered by A.I.”

 

Microsoft Engineer Warns Company’s AI Tool Creates Violent, Sexual Images, Ignores Copyrights

CNBC Share to FacebookShare to Twitter (3/6, Field) reports that a former Microsoft employee warned Microsoft’s AI-powered Copilot Designer creates imagery containing sexual and violent content, but the company hasn’t taken appropriate action in response to the findings. Bloomberg Share to FacebookShare to Twitter (3/6, Davalos, Subscription Publication) says the engineer, Shane Jones, “sent letters to the company’s board, lawmakers and the Federal Trade Commission warning that the tech giant is not doing enough to safeguard” the AI tool. Jones “said he discovered a security vulnerability in OpenAI’s latest DALL-E image generator model that allowed him to bypass guardrails that prevent the tool from creating harmful images.”

        Insider Share to FacebookShare to Twitter (3/6, Mok) reports Jones claimed in the letter that Microsoft’s AI image generator can add “harmful content” to images that can be created using “benign” prompts. For example, the prompt “car accident” produced images that included an “inappropriate, sexually objectified image of a woman” in front of totaled cars, according to the letter. In addition, he said “the term ‘pro-choice’ generated graphics of cartoons that depict Star Wars’ Darth Vader pointing a lightsaber next to mutated children, and blood spilling out of a smiling woman.”

 

Reduced Funding Threatens NIST’s Efforts To Carry Out Biden’s AI Agenda

The Washington Post Share to FacebookShare to Twitter (3/6, Zakrzewski) reports on leaky roofs, black mold, and blackouts at the National Institute of Standards and Technology, presenting these issues as being emblematic of the challenges the agency faces from limited funding. According to the Post, the “NIST is at the heart of President Biden’s ambitious plans to oversee a new generation of artificial intelligence models; the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.” Scientists at NIST “joke that they work at the most advanced labs in the world – in the 1960s.” The problems may get worse: lawmakers on Sunday “released a new spending plan that would cut NIST’s overall budget by more than 10 percent, to $1.46 billion.”

 

Founder Of Khan Academy Touts Potential Of AI Tutors

The Wall Street Journal Share to FacebookShare to Twitter reports Sal Khan, the founder of Khan Academy, is an advocate for the potential of artificial intelligence in education. His nonprofit has developed an AI chatbot called Khanmigo, and he believes that AI could serve as a personal tutor that may eventually bring an educational breakthrough. Citing results from a 1984 study on the impacts of tutoring, he’s said he believes improvements comparable to the study’s findings are achievable through Khanmigo, which is still a work in progress.

 

Survey: Students Want More Help From Adults In Better Understanding How AI Works

Education Week Share to FacebookShare to Twitter (3/7, Langreo) reports most kids “have at least some understanding of what generative artificial intelligence is and how it can be used, but they also want more help from adults in learning how to use the tools properly, concludes a new survey from the nonprofit National 4-H Council.” The report, which was conducted by Hart Research with support from Microsoft, found that “before being given a description of AI, most 9- to 17-year-olds were able to express what they think it is and what it can do.” While many students “are aware of various AI-powered tools and platforms, they don’t use them that often.” One-third of survey respondents said they’re “not clear about the right way to use ChatGPT as a tool and want teachers to be more involved in guiding them. More generally, 72 percent say they would like to get some help from adults in learning how to use different AI tools, the report found.”

dtau...@gmail.com

unread,
Mar 17, 2024, 7:45:09 PM3/17/24
to ai-b...@googlegroups.com

Nvidia Ramps Up GPU Production to Fuel AI Data Center Revolution
Nvidia is working hard to meet the growing demand for its GPU hardware as the AI data center revolution gains momentum. The company expects demand to outpace supply throughout the year, but is committed to doing its best to meet the needs of customers. Nvidia reported a record quarter with revenue of $22.1 billion, a 22% increase from the previous quarter and a 265% increase year over year. The company's GPUs have become crucial for AI adoption, particularly in the field of inference workloads. Nvidia sees a broader shift in the data center core, with companies building AI factories to refine raw data and produce valuable intelligence. Despite the high demand, Nvidia is increasing its supply to meet market needs. (CIODIVE.COM)

Facebook Whistleblower, AI Godfather Join Hundreds Calling for Deepfake Regulation
Facebook whistleblower Frances Haugen, AI expert Yoshua Bengio, and former presidential candidate Andrew Yang are among the signatories of an open letter calling for the regulation of deepfakes. The letter urges governments to criminalize deepfake child pornography and impose penalties for the creation and spread of harmful deepfakes. The signatories also propose holding software developers and distributors liable for their products' role in creating and facilitating deepfakes. The letter highlights the growing risks posed by deepfakes and emphasizes the need for immediate action to combat their proliferation. (THEHILL.COM)

AI Can 'Disproportionately' Help Defend Against Cybersecurity Threats, Google CEO Sundar Pichai Says
Google CEO Sundar Pichai believes that the rapid advancements in artificial intelligence (AI) can strengthen defenses against cybersecurity threats. While concerns about the malicious use of AI persist, Pichai argues that AI tools can aid governments and companies in detecting and responding to hostile actors more quickly. He states that AI disproportionately benefits defenders by providing a scalable tool to impact attacks, helping to tilt the balance in favor of defenders. Pichai's comments come as cyberattacks continue to grow in volume and sophistication, costing the global economy trillions of dollars. Google recently announced initiatives to enhance online security using AI tools and infrastructure investments. (CNBC.COM)

Deepfake Phishing Grew by 3,000% in 2023 - And It's Just Beginning
Deepfake phishing, which uses AI-generated content to create convincing fake videos or audios, has seen a 3,000% increase in fraud attempts in 2023. As deep learning models become more accessible, cybercriminals are using deepfakes to bypass biometric security and carry out phishing attacks. Organizations can protect against deepfake phishing by securing account access, training employees to recognize deepfakes, using AI detection models, imposing failsafes, and staying updated on evolving threats. (HACKERNOON.COM)

IC Preparing Its Own Tailor-Made Artificial Intelligence Policy
The Office of the Director of National Intelligence (ODNI) is developing a comprehensive AI governance policy specifically designed for the intelligence community. The aim is to ensure transparency and responsible use of AI technologies across the organization, with a focus on governance structures, policy standards, and a new strategy. The ODNI's Augmenting Intelligence using Machines (AIM) group is leading the effort and collaborating with a policy team to develop the necessary directives, guidance, standards, and memorandums. The timeline for completing the initial AI policy directive was not disclosed. (DEFENSESCOOP.COM)

Microsoft Introduces Tool for AI Risk
Microsoft has unveiled PyRIT, a red teaming tool designed to help security professionals and machine learning engineers identify risks associated with generative AI systems. PyRIT automates tasks and highlights areas that require investigation, complementing manual red teaming efforts. It addresses the challenges posed by generative AI, providing users with control over strategy and execution to assess potential risks more effectively. Although not a replacement for manual red teaming, PyRIT enhances the security assessment process in the realm of generative AI. (CYBERMATERIAL.COM)

AI and Cybersecurity Defense: Mastering the Art of Empowering Defenders Against Hackers
Artificial intelligence (AI) can be a powerful ally in the fight against cyber threats. By leveraging AI techniques such as deep learning, machine learning, and natural language processing, cybersecurity defenders can detect, prevent, and mitigate cyberattacks more effectively. However, there are challenges to overcome, such as ensuring data quality and availability, addressing ethical concerns, and developing countermeasures against adversarial attacks. By adopting a holistic approach and collaborating with stakeholders, cybersecurity defenders can harness the full potential of AI in cybersecurity. (FORTUNE.COM)

Face Off: Attackers are Stealing Biometrics to Access Victims' Bank Accounts
Cybersecurity company Group-IB has discovered a banking trojan that steals people's faces, using deepfake technology to bypass security checkpoints and gain unauthorized access to bank accounts. The increasing sophistication of deepfake methods poses a significant threat to biometric authentication, leading experts to question its reliability. Users are advised to be cautious and take steps to protect against biometric attacks. (VENTUREBEAT.COM)

 

All About Hackbots
The author defines hackbots as automated systems using AI to find vulnerabilities in applications. They created a hackbot proof-of-concept called Hero to inspire others. The post discusses the potential impact of competent hackbots, noting they could help secure the internet but also pose national security risks. The author believes stealth mode startups and government agencies are developing hackbots, with differing approaches. They express hopes that AI researchers won't limit security expertise in models, that platforms will use hacker reports to train models, and that readers understand LLMs alone can't hack but will enable hacking in systems. (JOSEPHTHACKER.COM)

Justice Department Names First Artificial Intelligence Officer
The Department of Justice has appointed Jonathan Mayer as its first Chief AI Officer to address the impact of AI on the criminal justice system. Mayer, a Princeton University assistant professor, will lead the newly formed Emerging Technology Board and advise the DOJ on the ethical use of AI. His expertise will help the department stay abreast of scientific and technological advancements while upholding the rule of law and protecting civil rights. (THEHILL.COM)

'We Want Our AI and We Want It Now' Say Software Buyers
A survey of 2,500 executives conducted by Gartner Digital Markets reveals that AI has become a top priority for software buyers, with 92% of businesses considering investing in AI-powered software in the coming year. The survey also highlights that 47% of buyers prioritize security and cyberattack concerns when making software investments. However, the results show that 53% of buyers do not see security as an important feature, raising alarm. Cost control is another factor influencing software purchases, with 31% of businesses replacing software due to high costs. Customization is preferred by a majority of enterprise managers, with 59% seeking customized solutions from vendors. (ZDNET.COM)

dtau...@gmail.com

unread,
Mar 23, 2024, 7:54:56 PM3/23/24
to ai-b...@googlegroups.com

Microsoft Security AI Product to Help Clients Track Hackers

Microsoft plans to launch Copilot for Security April 1 following a year-long trial with corporate customers. Leveraging OpenAI and the large amount of security-specific data collected by Microsoft, Copilot can be integrated with Microsoft's security and privacy software to generate suspicious incident summaries, answer questions, and determine attackers' intentions. Tests of Copilot showed a 26% increase in performance speed and a 35% increase in accuracy among newer security workers.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Dina Bass (March 13, 2024)

 

US Rivals Preparing for AI Cyberwar, Says Microsoft Report
According to Microsoft, countries such as Iran, Russia, and North Korea are gearing up for an escalation in cyberwar using generative AI, while a shortage of skilled cybersecurity personnel exacerbates the problem. Microsoft has responded with CoPilot For Security, an AI tool that can track, identify, and block threats more effectively and efficiently than humans. The report also highlights the use of AI by threat actors and the need for increased efforts to detect and combat these malicious activities. The impact of generative AI on cyber attacks is evident, with a rise in email-based attacks and phishing attempts. Microsoft anticipates that AI will evolve social engineering tactics, leading to more sophisticated attacks. (TECHRADAR.COM)

 

UN Adopts Resolution Backing Efforts to Ensure Safe AI

The U.N. General Assembly approved a resolution on Thursday that encourages the development and support of safe AI systems. The resolution was adopted by consensus without a vote, meaning it has the support of all 193 U.N. member nations. U.S. Ambassador Linda Thomas-Greenfield told the assembly just before the vote, "Today, as the U.N. and AI finally intersect we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us."
[ » Read full article ]

Associated Press; Edith M. Lederer (March 21, 2024)

 

EU to Impose Election Safeguards on Big Tech

Brussels is set to roll out its first binding regime to fight election disinformation. The guidelines, aimed at countering online threats to election integrity, could be adopted by the European Commission as soon as next week, according to insiders. Among other things, the guidelines say platforms that fail to adequately address AI-powered disinformation or deepfakes could face fines of up to 6% of their global turnover.

[ » Read full article *May Require Paid Registration ]

Financial Times; Javier Espinoza (March 20, 2024)

 

Google DeepMind Unveils AI Football Coach

Researchers at Google DeepMind collaborated with English Premier League club Liverpool to develop an AI football coach. The geometric deep learning model TacticAI was trained on a dataset of 7,176 corner kicks from the English Premier League from 2020 to 2023. After analyzing corner kicks with different player configurations, it can suggest positional improvements. In a study, experts including data scientists, a video analyst, and a coaching assistant chose TacticAI's recommendations over existing strategies 90% of the time.

[ » Read full article *May Require Paid Registration ]

Financial Times; Michael Peel (March 19, 2024)

 

Tech Job Seekers Without AI Skills Face a New Reality

Recently laid-off information technology workers are not finding new jobs quickly due to a mismatch in the skills they have and their expectations about pay, according to consulting company Janco Associates. Jobs in areas like telecommunications, corporate systems management, and entry-level IT have declined in recent months, while roles in cybersecurity, AI, and data science continue to rise, according to Janco's data. Meanwhile, data from job listings aggregator Indeed show that the average total compensation for IT workers is about $100,000, while the average salary potential for those with generative AI skills is $174,727.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Belle Lin (March 18, 2024)

 

Altman Returns To OpenAI Board After Being Cleared Of Wrongdoing

The New York Times Share to FacebookShare to Twitter (3/8, Metz, Mickle, Isaac) reported OpenAI announced on Friday following a three-month investigation that CEO Sam Altman “did not do anything that justified his removal and would regain the one role at the company that still eluded him: a seat on the company’s board of directors.” After receiving the report from WilmerHale, OpenAI “said that the law firm’s report found that OpenAI’s board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal,” which “represented a resounding victory for the high-profile chief executive as he moves to reassert control of the artificial intelligence company he helped to create.”

        The Washington Post Share to FacebookShare to Twitter (3/8, De Vynck, Tiku) reported OpenAI Board Chairman Bret Taylor “said in a statement Friday that the board had ‘unanimously concluded’ that Altman and his deputy, OpenAI President Greg Brockman, ‘are the right leaders for OpenAI.’” According to the Post, WhitmerHale “did not find any problems when it came to OpenAI’s product safety, finances or its statements to investors.” Meanwhile, the company also “named Sue Desmond-Hellmann, a former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former executive vice president and general counsel of Sony Entertainment; and Fidji Simo, CEO of Instacart, to its new board.” The Wall Street Journal Share to FacebookShare to Twitter provides similar coverage.

        Elon Musk’s Lawsuit Against OpenAI Raises Questions About Transparency Of AI Technology. Forbes Share to FacebookShare to Twitter (3/7, Green) says, “The ongoing legal battle between Elon Musk and OpenAI has taken a surprising turn, evolving from a dispute over the misuse of Musk’s contributions into a larger debate about transparency and the future of artificial intelligence. The lawsuit, reported on just days ago, has quickly become the catalyst for a much broader conversation surrounding the principles that should guide the development of advanced AI systems.” Forbes explains that at the heart of the case “lies a fundamental question: Should AI research be open and transparent, or is it necessary to keep some aspects of the technology behind closed doors?”

 

Experts Warn Growing Cost Of AI Is Limiting Access For Researchers

The Washington Post Share to FacebookShare to Twitter (3/10) reports a “growing chorus of academics, policymakers and former employees” are warning that growing cost “of working with AI models is boxing researchers out of the field, compromising independent study of the burgeoning technology.” Tech companies including Meta, Google, and Microsoft are buying up huge quantities of chips required to run AI models, and their salaries “are draining academia of star talent.” Researchers “say this lopsided power dynamic is shaping the field in subtle ways, pushing AI scholars to tailor their research for commercial use.”

 

Experiment Finds ChatGPT “Systematically Produces Biases” In Job Candidate Screening

Bloomberg Business Share to FacebookShare to Twitter (3/7) reports that job recruiters “are using generative AI to screen and rank job candidates. But a Bloomberg experiment found the best-known tool, OpenAI’s GPT, systematically produces biases that could disadvantage groups based on their names.” Reporters “used voter and census data to derive names that are demographically distinct...and randomly assigned them to equally-qualified resumes. When asked to rank those resumes 1,000 times, GPT 3.5 — the most broadly-used version of the model — favored names from some demographics more often than others, to an extent that would fail benchmarks used to assess job discrimination against protected groups.” Bloomberg says, “Using generative AI for recruiting and hiring poses a serious risk for automated discrimination at scale.”

 

Microsoft Begins Blocking Prompts That Led Copilot To Generate Violent, Sexual Images

CNBC Share to FacebookShare to Twitter (3/8, Field) reported Microsoft has implemented changes to its Copilot artificial intelligence tool after one of its AI engineers raised concerns about disturbing image generations. The changes involve blocking certain prompts such as “four twenty,” “pro choice,” and “pro life,” along with generating images of minors with rifles. The tool also features “a warning about multiple policy violations leading to suspension from the tool, which CNBC had not encountered before Friday.” A Microsoft spokesperson told CNBC, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”

 

AI Accelerates Drug Discovery In Groundbreaking Study

Genetic Engineering & Biotechnology News Share to FacebookShare to Twitter (3/11, Mayer) reports that a paper details how Insilico Medicine utilized AI to identify a new antifibrotic target and develop a corresponding small molecule inhibitor. The AI-driven process, taking only 18 months, resulted in drug candidate INS018_055 now poised for Phase II trials after confirming effectiveness for idiopathic pulmonary fibrosis treatment in various in vivo studies and a Phase I trial. The findings demonstrate AI’s profound impact on speeding up drug discovery. The findings Share to FacebookShare to Twitter were published in Nature Biotechnology.

 

Robotics Startup To Focus On AI-Equipped Robots

The New York Times Share to FacebookShare to Twitter (3/11, Metz, Gardi) reports that Covariant, a robotics company founded by three former OpenAI researchers, “is using the technology development methods behind chatbots to build A.I. technology that can navigate the physical world.” The technology “is not perfect. But it is a clear sign that the artificial intelligence systems that drive online chatbots and image generators will also power machines in warehouses, on roadways and in homes.”

 

Elon Musk Announces Plan To Open Source xAI’s Grok Chatbot

Engadget Share to FacebookShare to Twitter (3/11) reports Elon Musk says that xAI plans to open source its AI assistant Grok, currently available to Premium+ subscribers. The details surrounding this decision are not yet disclosed, while the move is planned to come into effect this week. Musk may be aiming to stimulate more uptake of the model and improve Grok through feedback from third-party developers and researchers, as The Wall Street Journal suggests. This decision may also be interpreted as a move against OpenAI, the ChatGPT maker Musk sued earlier this month.

 

Microsoft Insiders Worry Company’s AI Strategy Over-Focuses On OpenAI Partnership

Insider Share to FacebookShare to Twitter (3/11, Stewart) reports, according to several Microsoft insiders, there is growing concern that the company’s AI strategy has become too concentrated on its partnership with OpenAI, with some suggesting that Microsoft has become an extension of OpenAI’s IT department. This shift has allegedly caused resentment and attrition among executives who worked on Microsoft’s own AI projects. “The former Azure AI is basically just tech support for OpenAI,” one former Microsoft executive said. “Eric Boyd [head of Microsoft’s AI Platform team] is effectively maintaining the OpenAI service. It’s less of an innovation engine than it once was. Now it’s more IT for OpenAI.”

 

State Department Report Warns Of “Extinction-Level Threat” From AI

CNN Share to FacebookShare to Twitter (3/12) says, “A new report commissioned by the US State Department paints an alarming picture of the ‘catastrophic’ national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.” In fact, the report, which was released this week by Gladstone AI, “flatly states that the most advanced AI systems could, in a worst case, ‘pose an extinction-level threat to the human species.’” CNN says, “The warning in the report is another reminder that although the potential of AI continues to captivate investors and the public, there are real dangers too.”

 

Physicians Increasingly Using AI

“New AI tools are helping doctors communicate with their patients, some by answering messages and others by taking notes during exams,” the AP Share to FacebookShare to Twitter (3/13, Johnson) reports. The new technology “saves doctors time and prevents burnout, enthusiasts say. It also shakes up the doctor-patient relationship, raising questions of trust, transparency, privacy and the future of human connection.” AI can also “mean more money for the doctor’s employer because it won’t forget details that legitimately could be billed to insurance.”

 

University Of Nebraska Researchers Find AI Support Can Help College Students Pass STEM Classes

The Seventy Four Share to FacebookShare to Twitter (3/13, Wagner) reports that “a recently published study from the University of Nebraska-Lincoln found that using artificial intelligence interventions boosted student achievement in STEM courses.” According to the study, “retention rates in undergraduate STEM majors have fallen below 50%, and graduation rates are roughly 20% lower than in non-STEM majors.” The researchers trained an AI model “on homework and test scores and final grades of 537 students in a computer science class between 2015 and 2018. In fall 2019, they tested the model on 65 undergraduates taking the same course,” and at the end of the semester, “nearly 91% of the first group passed the course, versus 73% of the second group. Of students surveyed who reported actively checking their status from the AI model, 86% said they increased their effort after seeing the forecast.”

 

EU Passes Groundbreaking AI Regulation Law

CNN Share to FacebookShare to Twitter (3/13, Fung) reports that the European Union has approved a pioneering AI law, setting stringent rules for technology usage. The first-of-its-kind law “imposes blanket bans on some ‘unacceptable’ uses of the technology while enacting stiff guardrails for other applications deemed ‘high-risk.’” The regulation “also requires all AI-generated deepfakes to be clearly labeled, targeting concerns about manipulated media that could lead to disinformation and election meddling. The sweeping legislation, which is set to take effect in roughly two years, highlights the speed with which EU policymakers have responded to the exploding popularity of tools such as OpenAI’s ChatGPT.”

 

OpenAI Enters Licensing Deals With European Publishers

Bloomberg Share to FacebookShare to Twitter (3/13, Subscription Publication) reports that OpenAI “has inked licensing deals with two major European publishers, French paper Le Monde and Spanish media conglomerate Promotora de Informaciones SA, or Prisa — agreements that will bring French and Spanish language news content to ChatGPT and help train the startup’s models.” The agreements “are the latest expansion of OpenAI’s efforts to cut deals with media companies rather than battle them over how the company uses news articles and other content in its AI tools.”

 

Apple Acquires AI Startup DarwinAI

Bloomberg Share to FacebookShare to Twitter (3/14, Gurman, Subscription Publication) reports that earlier this year, Apple “acquired Canadian artificial intelligence startup DarwinAI, adding technology to its arsenal ahead of a big push into generative AI in 2024.” Bloomberg says, “dozens of DarwinAI’s employees have joined Apple’s artificial intelligence division, according to people with knowledge of the matter, who asked not to be identified because the deal hasn’t been announced.”

        The Verge Share to FacebookShare to Twitter (3/14) says DarwinAI “makes AI systems that visually inspect components for manufacturers, but, as pointed out by Bloomberg, the startup also aims to ‘make neural network models smaller and faster.’’This tech might prove useful to Apple, which is working to optimize large language models for phones.”

dtau...@gmail.com

unread,
Mar 24, 2024, 1:24:20 PM3/24/24
to ai-b...@googlegroups.com

OpenAI Gears Up for GPT-5 Release

Insider Share to FacebookShare to Twitter (3/19, Hays, Rafieyan) reports that OpenAI is expected to release GPT-5, the newest version of ChatGPT, in the middle of 2024. Some enterprise customers have previewed the latest model, noting significant improvements. Though currently in its training phase, GPT-5 will undergo internal safety evaluations and “red teaming.” This follows the release of GPT-4 a year ago and GPT-4 Turbo later in 2023; both aimed to enhance speed, accuracy, and address the model’s previous reluctance to respond to prompts. OpenAI relies on sales to enterprise clients as its primary revenue source amidst customer concerns over “laziness” and data degradation in prior models.

AI Community Suspects OpenAI Using YouTube Videos For Model Training

Insider Share to FacebookShare to Twitter (3/18, Barr, Stewart) reports that there is widespread speculation in the AI community that OpenAI has been using a substantial number of YouTube videos to train its AI models, including its new Sora model. Google, which owns YouTube, has policies against such large-scale scraping or downloading of videos for commercial purposes, and is known to limit high-volume data downloads. An OpenAI spokesperson confirmed that Sora’s training involved both licensed sources and publicly available online content, without directly addressing the scale of YouTube video use. The specifics of OpenAI’s data acquisition methods remain undisclosed, according to a person familiar with the company’s operations. Amidst a race to gather quality data for AI model training, ethical and legal standards remain unclear, with OpenAI and other companies arguing that the use of copyrighted content is permissible.

Musk Open-Sources Grok Amid Industry Debate

The New York Times Share to FacebookShare to Twitter (3/17, Conger, Metz) reports that Elon Musk has released the underlying code for xAI’s A.I. chatbot Grok. The decision is Musk’s latest step in advocating for transparency in A.I. and follows his legal dispute with OpenAI for failing to open-source their technology, as well as his attempts to challenge the dominance of tech giants such as Google and Microsoft over A.I. advancements.

Apple Considers Licensing Google’s Gemini AI Tech For iPhone Features

TechCrunch Share to FacebookShare to Twitter (3/17, Mehta) reports Apple is reportedly considering a collaboration with Google to use its Gemini AI model for iPhone features, according to individuals familiar with the matter cited by Bloomberg. The potential licensing deal could allow for the introduction of AI-powered features with iOS updates later this year. Apple has also had conversations with OpenAI about using its GPT models. These considerations signify Apple’s current struggle in advancing its own AI efforts to compete with rivals like Microsoft, Anthropic and Google.

Google Unveils AI Health Initiatives At Annual Event

Bloomberg Share to FacebookShare to Twitter (3/19, Subscription Publication) reports, “Alphabet Inc.’s Google announced a slew of initiatives to deploy its artificial intelligence models in the health care industry, including a tool that will help Fitbit users glean insights from their wearable devices and a partnership to improve screenings for cancer and disease in India.” At its annual health event, Google “said Tuesday that teams at Google Research and Fitbit, which it owns, were developing a new AI feature that will draw data from the wristbands to coach users on their personal health.” Powered by Google’s AI model Gemini, “the tool could, for example, assess how workouts affect the quality of a person’s sleep.”

        Modern Healthcare Share to FacebookShare to Twitter reports, “No release date has been set,” as “Google is fine-tuning and testing the model with de-identified data from research case studies and validating the results with accredited health coaches and wellness experts.”

The Development Of AI Has Shown The Outsized Nature Of Human Intelligence

George Musser writes in Scientific American Share to FacebookShare to Twitter (3/19, Musser) that the dream of artificial intelligence “has never been just to make a grandmaster-beating chess engine or a chatbot that tries to break up a marriage.” It has been to “hold a mirror to our own intelligence, that we might understand ourselves better.” Researchers seek “not simply artificial intelligence but artificial general intelligence, or AGI – a system with humanlike adaptability and creativity.” The one piece of AGI “these systems have unequivocally solved is language.” They possess what “experts call formal competence: they can parse any sentence you give them, even if it’s fragmented or slangy, and respond in what might be termed Wikipedia Standard English.” But they fail “at the rest of thinking – everything that helps us deal with daily life.”

Microsoft Hires DeepMind Co-Founder to Spearhead Consumer AI Products

The Wall Street Journal Share to FacebookShare to Twitter (3/19, Subscription Publication) reports that Microsoft has brought on Mustafa Suleyman, co-founder of DeepMind and of AI startup Inflection AI, to head its consumer AI product initiatives. Suleyman will report directly to CEO Satya Nadella and lead the newly formed Microsoft AI, focusing on Copilot integrations and various consumer products and research. This reorganization reflects Microsoft’s effort to streamline its AI strategy, previously dispersed across multiple divisions, and indicates Microsoft’s AI diversification, as evidenced by its ongoing partnership with OpenAI and recent investment in Mistral AI.

China Has More AI Researchers Than The US

The New York Times Share to FacebookShare to Twitter (3/22, Mozur, Metz) reports new research “shows that China has by some metrics eclipsed the United States as the biggest producer of A.I. talent, with the country generating almost half the world’s top A.I. researchers.” By contrast, “about 18 percent come from U.S. undergraduate institutions, according to the study, from MacroPolo, a think tank run by the Paulson Institute.” The findings show “a jump for China, which produced about one-third of the world’s top talent three years earlier.” The United States, “by contrast, remained mostly the same.” The research is based “on the backgrounds of researchers whose papers were published at 2022’s Conference on Neural Information Processing Systems.”

Universities Developing Their Own ChatGPT-Inspired Tools

Inside Higher Ed Share to FacebookShare to Twitter (3/21, Coffey) reports the University of Michigan is “one of a small number of institutions that have created their own versions of ChatGPT for student and faculty use over the last year.” Institutions include Harvard University, Washington University, and UC San Diego, but the effort “goes beyond jumping on the artificial intelligence (AI) bandwagon – for the universities, it’s a way to overcome concerns about equity, privacy and intellectual property rights.” Students can use OpenAI’s ChatGPT “and similar tools for everything from writing assistance to answering homework questions,” while the newer models “have more up-to-date information, which could give students who can afford it a leg up.” OpenAI has also “not been transparent in how it trains ChatGPT, leaving many worried about research and potential privacy violations.” UC Irvine publicly announced “their own AI chatbot – dubbed ZotGPT – on Monday.” The tool can help staff and faculty “with everything from creating class syllabi to writing code.”

Saudi Arabia Plans $40 Billion AI Investment

The New York Times Share to FacebookShare to Twitter (3/19, Farrell, Copeland) reports that Saudi Arabia’s government intends to form a $40 billion artificial intelligence fund, potentially becoming the world’s largest AI investor. The country’s Public Investment Fund, which controls assets over $900 billion, is exploring partnerships with Silicon Valley entities, including venture capital firm Andreessen Horowitz. While details are not finalized and may change, the fund is part of Saudi Arabia’s economic diversification and global influence efforts. The investment would surpass typical US venture capital amounts and is second only to Japan’s SoftBank. The plans are being orchestrated with assistance from Wall Street banks amidst a surge in AI valuations and global investment activities.

 

DHS Set To Launch Pilot Programs Utilizing AI. The New York Times Share to FacebookShare to Twitter (3/18, Kang) reports that the Department of Homeland Security is “becoming the first federal agency to embrace the technology with a plan to incorporate generative A.I. models across a wide range of divisions. In partnerships with OpenAI, Anthropic and Meta, it will launch pilot programs using chatbots and other tools to help combat drug and human trafficking crimes, train immigration officials and prepare emergency management across the nation.” DHS Secretary Mayorkas is quoted as saying, “One cannot ignore it. And if one isn’t forward-leaning in recognizing and being prepared to address its potential for good and its potential for harm, it will be too late and that’s why we’re moving quickly.”

Bipartisan Bill Would Require Online Identification, Labeling On AI Videos, Audio

The AP Share to FacebookShare to Twitter (3/21) reports a bipartisan House bill introduced Thursday “would require the identification and labeling of online images, videos and audio generated using artificial intelligence.” Such deepfakes have “already been used to mimic President Joe Biden’s voice, exploit the likenesses of celebrities and impersonate world leaders, prompting fears it could lead to greater misinformation, sexual exploitation, consumer scams and a widespread loss of trust.” The bill “would require AI developers to identify content created using their products with digital watermarks or metadata” and online platforms “would then be required to label the content in a way that would notify users.”

Advocacy Group Sues Consultant, Companies Behind AI-Deepfake Robocall Of Biden

The Washington Post Share to FacebookShare to Twitter (3/16, Raji) reports the League of Women Voters of New Hampshire has sued campaign consultant Steve Kramer and telecom companies Life Corp. and Lingo Telecom over the “AI-generated robocall of President Biden that in January urged New Hampshire voters not to participate in the state’s presidential primary.” The Post says the voting advocacy group accuses them “of voter intimidation, coercion and deception in violation of federal and state laws, including the Voting Rights Act and the Telephone Consumer Protection Act.” The lawsuit “asks a judge to fine the defendants and block them from producing, generating or distributing other robocalls generated with artificial intelligence.”

Google Announces Plan To Build New Data Center In Kansas City

The Kansas City (MO) Star Share to FacebookShare to Twitter (3/20, Shorman, A. Cronkleton) reports Google “unveiled plans for a sprawling data center in Kansas City’s Northland on Wednesday, a $1 billion investment the company said would help drive its artificial intelligence efforts.” The project “marks Google’s first data center in Missouri. Heavy equipment was already moving dirt on the site on Wednesday.” The data center “will play an essential role in supporting the company’s AI innovations and growing its cloud business, Google said.” Google “will work with Evergy to power the site and Ranger Power and D. E. Shaw Renewable Investments to bring 400 megawatts of new carbon-free energy to the grid as part of the company’s goal to run on carbon-free energy.”

Los Angeles Unified To Roll Out Pilot Of AI-Powered Personal Assistant

EdSource Share to FacebookShare to Twitter (3/20) reports Los Angeles Unified School District students “will soon have their own individualized AI tool, a ‘personal assistant,’ to help them with everyday tasks and remind them about school work when they forget.” The tool, named Ed, “is the first of its kind in the nation and will be able to accommodate students verbally and on screen in 100 languages.” Ed will first become “available to 55,000 students across 101 elementary, middle and senior high schools starting March 20.” Superintendent Alberto Carvalho said, “What we are announcing here today is a vision that was built over years of thinking about it, but only one year in actually bringing the necessary partners together – to give a voice, to give a simple life, to give a color, to give an experience. And what has emerged is Ed.” Carvalho also “said this tool will not replace the many people in LAUSD who teach and support students on a daily basis.”

Los Angeles Unified Deploys AI Tool For Students

The Los Angeles Times Share to FacebookShare to Twitter (3/21, Blume) reports Los Angeles Unified on Wednesday debuted “Ed,” its “much-awaited AI tool” that serves as a “student adviser, programmed to tell its young users and their parents about grades, tests results and attendance – while giving out assignments, suggesting readings and even helping students cope with nonacademic matters.” At its core, Ed is “designed to give students immediate answers about where they stand, what they need to do to make progress – or, more immediately, find out when their bus will arrive.” Superintendent Alberto Carvalho said the app “demystifies the navigation of the day...crunches the data in a way that it brings what students need.” It also serves as an example of how AI can “help students learn – a contrast from the reality that some students have used AI to cheat or other malfeasance, several experts said.” The district has placed “limits on the reach of the AI software,” but Carvalho said it could be expanded in the near future.

        Education Week Share to FacebookShare to Twitter (3/21) reports Ed is “available 24/7 and in multiple languages” and is part of the district’s “effort to catch students up on any unfinished learning from the pandemic.” It has “undergone pilot testing with about 1,000 LAUSD students since January” and, for now, “Ed will be available to 55,000 students at select schools, Carvalho said.” It will be launched for all students in what could be “just matter of weeks,” Carvalho added, after they’ve reached a “level of confidence” in the success of the platform. Torrey Trust, a professor of learning technology at UMass Amherst, believes LAUSD’s way of bringing in information together so parents and students can understand it better and easily access it can be “beneficial” and has “really great potential.” But, Trust added, “With any AI [tool], we’ve got to be critical. You can’t just accept this is an amazing tool that’s going to transform your life.”

        KABC-TV Share to FacebookShare to Twitter Los Angeles (3/21) reports Carvalho addressed security concerns and the overall fear of AI technology during Wednesday’s press conference. He explained the district “collaborated with federal security companies and included filters in the programming to screen for any type of threatening language.” Carvalho said, “There is no way that a student right now can go outside of LAUSD to look up information through ED, so the level of protection is absolutely guaranteed.” LAist (CA) Share to FacebookShare
to Twitter (3/20) reported Carvalho “said the app is currently for parents and students, but there is potential for the tool to be used by teachers if they want to in the future.”

State Guidance On AI Use In Schools Still Sparse

K-12 Dive Share to FacebookShare to Twitter (3/19) reports that ensuring artificial intelligence “does not replace humans and addressing workforce needs are just two of the themes emerging across at least seven states that have released guidance on using AI in K-12 settings.” According to a recent analysis by nonprofit Digital Promise. California, North Carolina, Ohio, Oregon, Virginia, Washington and West Virginia have “released guidance to help school district leaders navigate AI in K-12 as of late February.” Meanwhile, a “separate review of state AI policies” by Arizona State University’s Center on Reinventing Public Education (CRPE) “found conversations shifting away from last year’s focus on plagiarism and bans and moving toward urging teachers to accept AI and use it to enhance learning and their own effectiveness in the classroom.” While AI guidance “at the state level continues to roll out along with recommendations from K-12 education experts and researchers,” CRPE said there is a “potentially decentralized and fragmented set of approaches to AI.”

 

How to Make AI 'Forget' All the Private Data It Shouldn't Have
Researchers are exploring the concept of machine "unlearning" to enable AI models to remove specific data that should not be retained, such as private or outdated information. This is particularly important for compliance with data privacy regulations and to address biases or inaccuracies in training data. Machine unlearning involves efficiently removing the influence of the data without having to retrain the entire model. It has practical applications for companies like Facebook and Google, as well as in high-risk sectors like healthcare and finance. The vulnerability of generative AI models to privacy attacks and the increasing scale of models contribute to the need for machine unlearning. (HBS.EDU)

 

Microsoft Uses AI to Stop Phone Scammers
Microsoft introduces Azure Operator Call Protection, a service that analyzes phone conversations in real time to identify suspicious callers. The AI-powered system can alert users if a call seems fraudulent, reinforcing best practices and helping combat spam calls. The service is opt-in, and data from calls is not saved or used for training AI models. Microsoft is currently piloting the technology with BT Group. (CNET.COM)

 

Reddit Strikes $60M Deal Allowing Google to Train AI Models on Its Posts, Unveils IPO Plans
Reddit has entered into a deal with Google that allows the search giant to use posts from the online discussion platform to train its AI models and enhance services like Google Search. The agreement, valued at approximately $60 million, also grants Reddit access to Google's AI models to improve its internal site search and other features. This partnership marks a significant step for Reddit, which relies on volunteer moderators, and comes alongside the company's announcement of its plans for an initial public offering (IPO) on the New York Stock Exchange. (APNEWS.COM)

 

AI Doomsayers Funded by Billionaires Ramp Up Lobbying
Nonprofits backed by tech billionaires are increasing their lobbying efforts in Washington to advocate for AI safety bills. Critics argue that these efforts are a diversion tactic to prevent regulation and competition, while redirecting attention from more immediate AI-related issues. The nonprofits, such as the Center for AI Policy and Center for AI Safety, have registered lobbyists and are pushing for legislation that holds AI developers accountable for potential harm and empowers regulators to intervene in emergencies. The lobbying activities could potentially benefit leading AI firms, sparking concerns about the influence of wealthy backers on policy priorities. (POLITICOPRO.COM)

 

How a Small Iowa Newspaper's Website Became an AI-Generated Clickbait Factory
Two former Meta employees investigated why the website of the Clayton County Register in Iowa was publishing dubious financial posts. They found it's now part of a network of sites using AI to generate clickbait content. After domains change hands, sites leverage old reputations to rank in search results and earn ad revenue. The network publishes thousands of articles with AI-generated text and images under fake bylines. The operators remain anonymous, highlighting the difficulty in identifying who runs AI content mills. (WIRED.COM)

 

The 9-Month-Old AI Startup Challenging Silicon Valley’s Giants
Paris-based Mistral AI, a nine-month-old startup founded by Arthur Mensch, is challenging the dominance of U.S. tech giants in the AI race. Mistral aims to build and deploy AI systems more efficiently and at a fraction of the cost, offering many of their AI systems as open-source software. With a new AI model called Mistral Large, the company aims to compete with industry leaders like OpenAI and Google. Mistral has attracted interest from corporate clients and investors, including Microsoft, which plans to add Mistral's new model as an option on its Azure cloud service. (WSJ.COM)

 

Nvidia's Stellar Results Show It Can Thrive Amid China Decoupling as It Lists Huawei as Potential AI Chip Rival
Nvidia reports a 265% jump in quarterly revenue, beating estimates, despite US trade sanctions limiting sales to China. Demand remains strong, with smaller countries now competing for Nvidia's GPUs. The company has halted exports of restricted chips to China and has identified Huawei as a potential competitor in certain AI chip categories. Analysts predict continued growth for Nvidia as demand exceeds supply and local players, including Huawei, pose competition in the market. (SCMP.COM)

Nvidia Flirts With $2 Trillion Valuation as Rapid Ascent Extends
Chipmaker Nvidia reported eye-popping sales and forecasts, adding momentum to its stock rally amid surging AI demand. Nvidia's valuation neared $2 trillion as its shares hit a record high. Its data center revenue soared 409% annually. Nvidia sees AI investment continuing to drive growth, though it faces risks like export restrictions on China sales. (BLOOMBERG.COM)

 

A Power Struggle Brews Over Crypto and AI
A lawsuit and an essay highlight growing concerns over the energy demand of artificial intelligence and cryptocurrency mining. The crypto industry has obtained a temporary order blocking the Energy Department's collection of power usage data, while AI ethicist Kate Crawford calls for pragmatic actions to limit AI's ecological impacts. The Energy Information Administration is conducting an "emergency" data survey on crypto mining's power demand. On the other hand, the Texas Blockchain Council and Riot Platforms have filed a lawsuit against invasive government data collection. The International Energy Agency estimates that data centers, crypto, and AI could account for 4% of global power demand by 2026. (AXIOS.COM)

 

Your Online Identity is Not as Safeguarded as You Think-and It's Not on You to Fix
A hacker with over 20 years of experience reveals that cybercriminals are increasingly using employees' identities to gain access to company networks. With the rise of generative AI, cyber criminals can easily piece together fragments of personal information to exploit individuals. As the adoption of generative AI grows, cybercriminals may distort identities for their attacks, including cloning voices or using deepfake services. It is crucial to dispel the notion of users as the "root cause" of data breaches, and instead, enterprises must take responsibility for combating the security issue. Businesses are shifting towards behavioral analytics and reducing the need for users to input credentials as methods of authentication. By making identity a harder path for cybercriminals to pursue, the incentive to exploit personal data decreases. (FASTCOMPANY.COM)

 

How AI Is Transforming Air-Traffic-Control Towers
The use of high-definition cameras and AI algorithms in digital air traffic control towers is revolutionizing the management of air traffic. These towers provide panoramic views of runways, overlay radar information on aircraft, and can be placed anywhere on the airfield. Machine learning algorithms analyze the camera feeds to improve turnaround times and enhance safety. While digital towers are already in use in some airports, their certification and widespread adoption in the US may take a few more years. However, they are seen as a way to optimize the use of air traffic controllers and address the global shortage in this field. (WSJ.COM)

dtau...@gmail.com

unread,
Mar 30, 2024, 4:18:10 PM3/30/24
to ai-b...@googlegroups.com

The Fight for AI Talent

To attract workers with generative AI expertise, tech companies increasingly are offering million-dollar annual compensation packages with accelerated stock-vesting schedules. Others are poaching entire engineering teams. This comes as other areas of the tech industry have seen major layoffs. In high demand are those who have experience training large language models (LLMs) and AI salespeople. SBT Industries' Justin Kinsey said some candidates can be persuaded to switch companies with promises of autonomy over their work.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Katherine Bindley (March 27, 2024)

 

AI Could Use Coughs to Diagnose Disease

A machine learning tool developed by Google researchers can assess noises like coughing, breathing, and throat clearing to detect certain health conditions. The self-supervised learning system, Health Acoustic Representations (HeAR), was trained on 300 million audio clips of human sounds extracted from YouTube videos. HeAR can be fine-tuned to detect specific diseases and characteristics by feeding it limited datasets with the appropriate labels. Tests showed that HeAR outperformed existing models trained on speech data or general audio for COVID-19 and tuberculosis detection.
[ » Read full article ]

Nature; Mariana Lenharo (March 21, 2024)

 

China Pulls Ahead of the U.S. in AI Talent

The MacroPolo think tank found that nearly half of the world's top AI researchers come from China, up from about 33% three years ago, while only around 18% come from U.S. undergraduate institutions, essentially unchanged. Of the AI researchers working in the U.S., 37% are American, up from 31% three years ago, and 38% are from China, up from 27%. The U.S. is home to around 42% of the world's top AI talent, down from about 59% three years ago.

[ » Read full article *May Require Paid Registration ]

The New York Times; Paul Mozur; Cade Metz (March 23, 2024)

 

UAE on Mission to Become AI Power

The United Arab Emirates (UAE) put itself on the AI map last year when an international team of 25 computer scientists completed the open-source large language model (LLM) Falcon, released in September. Abu Dhabi's Advanced Technology Research Council contributed $300 million to the Falcon Foundation to oversee open-source development of large language models, an approach intended to attract top AI researchers looking to work on technologies that will serve the greater good.
[ » Read full article ]

Time; Billy Perrigo; Leslie Dickstein (March 20, 2024)

 

University Of South Florida To Establish College Of AI

Inside Higher Ed Share to FacebookShare to Twitter (3/22, Coffey) reported the University of South Florida “said Thursday it is launching a college focused on artificial intelligence (AI).” The College of Artificial Intelligence, Cybersecurity and Computing, “opening in the fall of 2025, would be the first AI-centered college within a university in Florida, said Prasant Mohapatra, provost and president of academic affairs for University of South Florida.” AI colleges within universities “are rare, although institutions across the nation have increasingly begun offering degrees in artificial intelligence.” Others have “made sweeping, system-wide commitments to the technology, such as University at Albany’s ‘AI plus initiative.’” USF plans to “create a physical home for the college, but in the interim it will be nestled within the computer science programming department.” The university plans “to spend a ‘significant’ amount on hiring new faculty, with an initial focus on 15 to 20 professors.”

 

Vanderbilt University To Launch New Computing, AI, Data Science College

Diverse Issues in Higher Education Share to FacebookShare to Twitter (3/25, Jackson) reports Vanderbilt University “has announced its work toward establishing a college dedicated to computer science, artificial intelligence (AI), data science, and related fields.” The College of Connected Computing will collaborate “with all of Vanderbilt’s schools and colleges to advance breakthrough discoveries and strengthen computing education through a ‘computing for all’ approach.” The interdisciplinary college “will be led by a new dean,” with a dean search “expected to begin in late August.” A dedicated college “will enable Vanderbilt to keep making groundbreaking discoveries at the intersections of computing and other disciplines and will more effectively leverage advanced computing to address some of society’s most pressing challenges.”

 

Study: 36% Of Gen Z Is Worried About Their Overreliance On AI At Work

Forbes Share to FacebookShare to Twitter (3/23, Paulise) reports that an EduBirdy “study revealed that 36% of Gen Z respondents felt guilty about using AI to aid them in their work tasks.” In addition, “1 in 3 Gen Z respondents expressed concern about relying too heavily on ChatGPT, as they believed it could limit their critical thinking skills” and “18% of respondents stated that it hampered their creativity.”

 

Apple CEO Advocates For Using AI For Carbon Reduction During China Visit

Fortune Share to FacebookShare to Twitter (3/24) reports that Apple CEO, Tim Cook, promoted artificial intelligence (AI) as a crucial tool for reducing corporate carbon footprints during the China Development Forum. Cook’s participation in the climate change discussion marked the end of his week-long visit, which highlighted Apple’s commitment to China. He praised suppliers BYD Co., Lens Technology Co., and Shenzhen Everwin Precision Technology Co for their innovation and environmental consciousness. Apple aims for net zero climate impact across its whole operation by 2030.

 

Companies Open-Source AI Models To Compete With Dominant Players

The Wall Street Journal Share to FacebookShare to Twitter (3/22, Subscription Publication) reports that companies such as Microsoft and OpenAI currently dominate the AI market, and in response, some companies are open-sourcing their AI models to compete. Examples include Elon Musk’s xAI start-up releasing its chatbot Grok and Meta Platforms launching its Llama 2 model. Other companies like Google and a range of AI startups, including Mistral AI and Hugging Face, are adopting a similar strategy to offer free models to chip away at the leading players’ market share. They also see potential revenue by selling enhanced enterprise services and applications built on these open models. Despite initial challenges like high training costs and standardizing licensing terms for AI technologies, companies are betting on open source to innovate and attract independent developers.

 

Microsoft To Pay $650M To License Inflection AI Software, Hire Staff

Bloomberg Share to FacebookShare to Twitter (3/21, Subscription Publication) reports, “Microsoft Corp. has agreed to pay Inflection AI $650 million, largely to license its artificial intelligence software, alongside its move earlier this week to hire much of the startup’s staff.” The move “resembled an ‘acqui-hire’ – but without an acquisition,” though some experts have “suggested that Microsoft’s Inflection deal might still spark antitrust concerns with US regulators.” Bloomberg adds, “Now, with a much smaller staff, Inflection is trying to offload some of its compute capacity.”

 

Tech Companies Worried About Energy Sources For Growing AI Infrastructure

The Wall Street Journal Share to FacebookShare to Twitter (3/24, Subscription Publication) reports tech companies are working to address the immense energy needs of the booming AI industry, with many executives worried the demand will slow global transition to cleaner energy sources, strain the existing energy grid, and impact their environmental commitments.

 

EdWeek Survey: 70% Of Teachers Have Received No Professional AI Training

Education Week Share to FacebookShare to Twitter (3/25) reports more than “7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.” The survey data show “that teachers who are in urban districts, those in districts with free/reduced-price meal rates of more than 75 percent, and those who teach elementary grades are more likely than their peers to say they haven’t received any AI training.” Some experts argue “that educators can’t ignore this technology that is predicted to be a huge force in the world. They say that it’s important for teachers to learn more about AI, not just so they can use it responsibly in their work, but also to help model that use for students who are already interacting with this technology and will need to become smart AI consumers.”

 

Opinion: AI-Powered Translation Technology Could Hinder Foreign-Language Education

In an opinion piece for The Atlantic Share to FacebookShare to Twitter (3/26), journalist Louise Matsakis says neural networks, “the machine-learning systems that power generative-AI programs such as ChatGPT, have rapidly improved the quality of automatic translation over the past several years, making even older tools like Google Translate far more accurate.” Meanwhile, the number of students “studying foreign languages in the U.S. and other countries is shrinking.” Many factors could “help explain the downward trend, including pandemic-related school disruptions,” but whether the cause of the shift “is political, cultural, or some mix of things, it’s clear that people are turning away from language learning just as automatic translation becomes ubiquitous across the internet.” As AI translation technology “becomes normalized, we may find that we’ve allowed deep human connections to be replaced by communication that’s technically proficient but ultimately hollow.”

 

AI Seen Replacing 10% Of The Workforce

Inc. Magazine Share to FacebookShare to Twitter asks, “Will generative AI take your job?” The piece says “it depends,” according to a recent story about the Council of Economic Advisors’ Annual Economic Report of the President. Inc. says the report “distinguishes between simple tasks--which an AI can perform more efficiently than a person does--and complex ones, which a person can do more effectively than an AI.” If an AI chatbot “can perform most of your job activities, you might need to find new work. The report estimates that 10 percent of the U.S. workforce falls into that category. “

 

EU Calling On Large Tech Firms To Crack Down On AI-Generated Content Ahead Of Election

AFP Share to FacebookShare to Twitter (3/27) reports the EU is calling on Facebook, TikTok, and other large tech firms “to crack down on deepfakes and other AI-generated content by using clear labels ahead of Europe-wide polls in June.” The guideline is part of a string of measures published under the Digital Services Act, under which the EU “has designated 22 digital platforms as ‘very large’ including Instagram, Snapchat, YouTube and X.” The European Commission “recommends that big platforms promote official information on elections and ‘reduce the monetization and virality of content that threatens the integrity of electoral processes’ to diminish any risks.” TikTok on Tuesday “announced more of the measures it was taking including push notifications from April that will direct users to find more ‘trusted and authoritative’ information.”

 

Western Governments Grapple With Reaching Consensus On AI Regulation

Politico Europe Share to FacebookShare to Twitter (3/26) reports, “For the past year, a political fight has been raging around the world, mostly in the shadows, over how – and whether – to control AI.” The stakes are high: “Whoever wins will cement their dominance over Western rules for an era-defining technology.” Moreover, “if liberal industrialized economies fail to reach a common regime among themselves, China may step in to set the global rulebook for a technology that – in a doomsday scenario – some fear has the potential to wipe humanity off the face of the Earth.” As of now, as Western governments “pitch their conflicting plans for regulating AI, the chances of a deal look far from promising.”

 

Amazon Invests Additional $2.75B In AI Startup Anthropic

The Wall Street Journal Share to FacebookShare to Twitter (3/27, Pisani, Subscription Publication) reports Amazon has invested an additional $2.75 billion in AI startup Anthropic, bringing its total investment in the company to $4 billion. This marks Amazon’s largest investment in another company to date and is part of an AI arms race among tech giants. Anthropic, which offers AI assistant Claude, has committed to spending $4 billion on AWS over five years.

        CNBC Share to FacebookShare to Twitter (3/27, Rooney, Field) reports AWS VP of Data and AI Dr. Swami Sivasubramanian stated, “Generative AI is poised to be the most transformational technology of our time, and we believe our strategic collaboration with Anthropic will further improve our customers’ experiences, and look forward to what’s next.” Amazon’s investment in AI startup Anthropic, which competes with OpenAI’s ChatGPT is part of a broader trend among cloud providers to advance in the AI field. Amazon will retain a minority stake in Anthropic without a board seat. The New York Times Share to FacebookShare to Twitter (3/27, Weise) reports Amazon’s investment includes access to Anthropic’s AI systems and commitments to provide computing power, avoiding high-value acquisitions that might trigger antitrust reviews.

 

White House Requires Federal Agencies To Use AI Oversight Officers, Tests For Potential Risks In AI

Behind a paywall, Bloomberg Share to FacebookShare to Twitter (3/28, Gardner, Subscription Publication) reports, “The White House will require federal agencies to test artificial intelligence tools for potential risks and designate officers to ensure oversight, actions intended to encourage responsible adoption of the emerging technology by the US government.” On Thursday, “the Office of Management and Budget” (OMB) “issued a government-wide policy...to mitigate the threats posed by AI – including discrimination and privacy violations – and increase transparency over how government uses the technology, building on an executive order signed by President Joe Biden last year.”

        Reuters Share to FacebookShare to Twitter reports that the goal of the requirement is for “federal agencies using artificial intelligence to adopt ‘concrete safeguards’ by Dec. 1 to protect Americans’ rights and ensure safety as the government expands AI use in a wide range of applications.” President Biden “signed an executive order in October invoking the Defense Production Act to require developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before publicly released.” Additionally, “the White House plans to hire 100 AI professionals to promote the safe use of AI and is requiring federal agencies to designate chief AI officers within 60 days.”

 

GPT-4 Users Report Declining Performance

Insider Share to FacebookShare to Twitter (3/28, Chowdhury) reports that users of GPT-4 are observing a reduction in the AI model’s capabilities, with complaints of it not fully complying with instructions and offering inadequate responses. OpenAI previously noted a decrease in performance and claimed to have issued a fix in February, according to CEO Sam Altman. Amid these issues, competitors like Anthropic’s Claude 3 Opus have appeared, potentially outperforming GPT-4. Some users report that alternatives like Claude offer more reliable coding capabilities. OpenAI hasn’t commented on these performance concerns, but speculation suggests its focus may have shifted to developing GPT-5.

 

Harris Unveils New Rules For Federal Agencies Utilizing AI

The AP Share to FacebookShare to Twitter (3/28, O'Brien) reports Vice President Kamala Harris on Thursday announced new rules that require federal agencies to “show that their artificial intelligence tools aren’t harming the public, or stop using them.” According to the AP, “Each agency by December must have a set of concrete safeguards that guide everything from facial recognition screenings at airports to AI tools that help control the electric grid or determine mortgages and home insurance.” The AP reports the White House’s Office of Management and Budget issued the new policy directive to agency heads as “part of the more sweeping AI executive order signed by President Joe Biden in October.” Harris said, “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people.”

        In addition, Bloomberg Share to FacebookShare to Twitter (3/28, Gardner, Subscription Publication) reports Harris “said the US would ‘continue to call on all nations to follow our lead and put the public interest first’ when promoting the use of AI,” while senior Administration officials “said high-risk systems would undergo rigorous testing under the new guidelines.” Reuters Share to FacebookShare to Twitter reports the White House “plans to hire 100 AI professionals to promote the safe use of AI and is requiring federal agencies to designate chief AI officers within 60 days.”

dtau...@gmail.com

unread,
Mar 30, 2024, 7:59:48 PM3/30/24
to ai-b...@googlegroups.com

Why Our Law Firm Bans Generative AI for Research and Writing
A law firm explains why it prohibits the use of generative AI for legal products such as briefs and motion arguments. The firm argues that generative AI lacks thought, analysis, and understanding, relying on algorithms to generate content instead of human judgment. It highlights concerns about accuracy, fabrication of cases, and the inability to provide insightful analysis. (BLOOMBERGTAX.COM)

 

Welcome to the Era of BadGPTs: Hackers Utilize AI Chatbots for Cyberattacks
Hackers are using AI chatbots, similar to ChatGPT, to enhance their phishing emails and create deepfakes. The rise of AI-generated email fraud and deepfakes has businesses on high alert for more sophisticated cyberattacks. Dark web services are offering AI hacking tools, including BadGPT, which utilize models like OpenAI's GPT to generate effective malware and exploit vulnerabilities. The challenge lies in detecting these AI-enabled cybercrimes, as they are crafted to evade detection and can have a significant impact on businesses. (WSJ.COM)

 

Microsoft's Mistral AI Investment to Be Examined by EU Watchdog
The European Union's competition watchdog will analyze Microsoft's investment in Mistral AI, a startup that develops algorithmic models similar to those from OpenAI. The strategic partnership between Microsoft and Mistral AI, which includes making AI models available to Azure cloud customers, will be scrutinized alongside Microsoft's deep ties to OpenAI. The EU's analysis could potentially lead to a formal investigation, impacting Microsoft's plans. (BLOOMBERG.COM)

 

Feds Say AI Favors Defenders Over Attackers in Cyberspace-So Far
According to officials from the FBI and DHS, artificial intelligence (AI) tools have provided more benefits to cybersecurity defenders than malicious hackers. While there are concerns that AI could be used by attackers to discover vulnerabilities and exploit them, defenders have been using AI for various purposes such as detecting malicious activity, incident response, and software development. The jury is still out on whether AI will ultimately favor attackers or defenders, but for now, defenders seem to have the advantage. However, officials caution that the balance could shift in the future, making it easier for hackers to hide their presence and obfuscate their origins. (CYBERSCOOP.COM)

 

Meta Plans Launch of New AI Language Model Llama 3 in July, The Information Reports
Meta Platforms is set to release Llama 3, the latest version of its AI language model, in July. The new model aims to provide better responses to contentious questions by offering contextual understanding. Meta's efforts to improve the model's usefulness come after Google paused the image-generation feature on its Gemini AI due to inaccuracies. Llama 3 is expected to be able to understand questions like "how to kill a vehicle's engine" in the sense of shutting it off rather than causing harm. Meta also plans to appoint someone internally to oversee tone and safety training for more nuanced responses. (REUTERS.COM)

 

Defending Against Cyber Threats in the Age of AI
Artificial intelligence (AI) plays a crucial role in cybersecurity, but it also presents challenges as attackers leverage AI for sophisticated attacks. The evolving threat landscape and AI's dual nature require organizations to adapt and embrace innovation in their defense strategies. AI-based threats include AI-generated phishing campaigns, AI-assisted target identification, AI-driven behavior analysis, automated vulnerability scanning, smart data sorting, and AI-assisted social engineering. Defenders must remain vigilant and leverage generative AI as a powerful tool to anticipate and counter future threats. (DARKREADING.COM)

 

With Deepfakes, Facial Recognition Can't Fight Alone
Gartner predicts that rising deepfake attacks on face biometrics will lead 30% of enterprises to doubt the reliability of facial recognition as a standalone authentication method by 2026. However, industry professionals argue that combining facial recognition with other factors like behavioral analysis and location-specific analysis can enhance security and maintain trust in digital authentication. Deepfakes have become increasingly convincing, but incorporating additional detection capabilities can help identify suspicious activity beyond just detecting the deepfake itself. (ITBREW.COM)

 

Generative AI Could Deliver a $2.25 Trillion Economy Boost: Report
Generative AI tools have the potential to boost user productivity, according to a report by HFS Research and Ascendion. If Fortune 500 companies were 10% more productive due to generative AI, it could result in a $2.25 trillion impact on the US economy. However, enterprises struggle to quantify the ROI of generative AI, with transformational change proving challenging. CEOs expect a ROI timeframe of three to five years for generative AI investments. (CIODIVE.COM)

 

Stanford Study Outlines Risks and Benefits of Open AI Models
A Stanford University study examined the risks and benefits of open source AI models, highlighting the need for a framework to assess the potential dangers. Open AI models have significant implications for global geopolitics and domestic AI competition. The researchers identified five properties of open models, including broader access and greater customizability, but also noted the risk of misuse and weak monitoring. While open models enable distributed decision-making power and innovation, they also raise concerns about disinformation and bioterrorism. Open model advocates can now better understand and address the risks associated with their preferred models. (AXIOS.COM)

 

Integration of Antidetect with AI: New Security Age or Global Risk?
The fusion of artificial intelligence (AI) with antidetection technologies poses challenges and opportunities for online privacy. AI-driven antidetect systems enhance privacy and anonymity but also raise concerns about evasion, phishing attacks, targeted attacks, fraud detection compromise, and the formation of cyber armies. The cybersecurity community, developers, and governmental bodies must collaborate to develop advanced detection techniques, share threat intelligence, establish ethical guidelines, promote secure coding practices, educate users, introduce regulations, support research, and ensure international cooperation to ensure digital security. (FORBES.COM)

 

Is the US Prepared for AI's Influence on the Election?
The use of artificial intelligence (AI) in elections is becoming a concern as technology advances and regulations lag behind. Recent incidents, such as AI-generated robocalls targeting New Hampshire voters and fake audio recordings swaying an election in Slovakia, highlight the potential for AI interference. US regulations are not ready for the impact of AI on elections, leaving voters to discern what is real and not real. AI can be used to manipulate audio, video, and text to mislead voters and spread disinformation. The ability of AI to deceive has exacerbated the problem of misinformation. While companies have imposed limitations on AI tools, government regulation is lacking. Some states have implemented laws to regulate AI in political ads, but federal regulations are still pending. It remains uncertain if regulations will be in place in time for the upcoming elections. (THEGUARDIAN.COM)

 

An AI License Plate Surveillance Startup Installed Hundreds of Cameras Without Permission
Surveillance startup Flock has installed car tracking cameras in 4,000 cities across 42 states without obtaining the necessary permits, leading to a ban on its operations in two states. Flock provides AI-based tracking hardware and software to local police departments, but it has been found to violate state regulations by installing cameras without prior approval. The company's actions raise concerns about the handling and access to tracking data, infringing on personal freedom. Flock's CEO claims their cameras cover almost 70% of the population and are used to solve around 2,200 crimes per day. (QZ.COM)

 

IBM Introduces Data Path Ransomware Detection in New FlashSystem
IBM has enhanced its storage solutions with AI capabilities to improve data resilience against cyber threats. The new features, integrated with IBM's FlashCore Module technology and updated Storage Defender release, enable real-time monitoring of data for ransomware detection and response. The system analyzes data anomalies and employs machine learning models to detect and mitigate cyber attacks, providing early warnings and enhancing data protection and cyber resilience. (FORBES.COM)

 

China Faces High Demand for GenAI Talent, Offers Hefty Salary Premium
Chinese employers are aggressively seeking talent with skills in generative artificial intelligence (GenAI) as they try to catch up with advancements in the field. Computer vision engineers with GenAI expertise are being offered salaries that are two-thirds higher than their peers without such knowledge. The demand extends to other tech roles as well. However, there is a shortage of qualified candidates, with only two qualified workers available for every five new AI jobs in China. Some top Chinese AI talent have chosen to work overseas, highlighting the global competition for skilled professionals. (SCMP.COM)

 

Zoom Wants to Customize, Monetize Generative AI Assistant, CEO Says
Zoom CEO Eric Yuan announced plans to focus on customizing and monetizing the company's generative AI assistant platform, Zoom AI Companion. The platform, which has already reached 510,000 accounts, will be customized for customers to leverage their own data. Zoom aims to develop new AI capabilities to help customers achieve their unique business objectives. Despite experiencing slower growth in 2024 compared to the previous year, Zoom remains optimistic about its post-pandemic growth strategy and the potential of its AI offerings. (CIODIVE.COM)

dtau...@gmail.com

unread,
Apr 6, 2024, 8:19:10 AM4/6/24
to ai-b...@googlegroups.com

AI Researcher Takes on Election Deepfakes

TrueMedia.org, founded by Oren Etzioni (pictured), founding chief executive of the Allen Institute for AI, has rolled out free tools that journalists, fact-checkers, and others can use to detect AI-generated deepfakes. Etzioni said the tools will help detect "a tsunami of misinformation" that is expected during an election year. However, he added that the tools are not perfect, noting, "We are trying to give people the best technical assessment of what is in front of them. They still need to decide if it is real."


[
» Read full article *May Require Paid Registration ]

The New York Times; Cade Metz; Tiffany Hsu (April 2, 2024)

 

For Data-Guzzling AI Companies, the Internet Is Too Small

Companies working on powerful AI systems are encountering a lack of quality public data online, especially as some data owners block access to their data. One possible solution to the data shortage is the use of synthetic training data, though this has raised concerns about the potential for severe malfunctions. DatologyAI is experimenting with curriculum learning, which feeds data to language models in a certain order to improve the quality of connections between concepts.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Deepa Seetharaman (April 1, 2024)

 

Microsoft Tools to Stop Users from Tricking Chatbots

Microsoft's Azure AI Studio will soon have new built-in safety features to identify and block suspicious inputs in real time. Developers use Azure AI Studio to create customized AI assistants. The new features include "prompt shields" to stop prompt injection attacks or jailbreaks, which can trick an AI model into acting in an unintended way, and will address "indirect prompt injections," which insert malicious instructions into the training dataset to get the model to perform unauthorized actions.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jackie Davalos (March 28, 2024)

 

U.S. Military's Investments into AI Skyrocket

The Brookings Institution reported a nearly 1,200% surge in the potential value of AI-related U.S. government contracts, from $355 million in the year ending in August 2022 to $4.6 billion in the year ending in August 2023. The U.S. Department of Defense accounted for the majority of the total, with $557 million committed by the agency to AI-related contracts, rising to $4.3 billion if each contract were extended to its fullest terms.
[ » Read full article ]

Time; Will Henshall (March 29, 2024)

 

U.S. Farms Making Urgent Push into Technology

Amid a dwindling number of farm workers in the U.S., some farmers are turning to robotics and AI to address labor issues. These include drones and GPS tools, as well as self-driving tractors and combines, sensor systems, and AI-powered sorting and cultivation tools. The U.S. government is offering financial incentives to accelerate relevant AI development and deployment. The technology could improve crop yields, make agriculture more climate-resilient, increase profits for farmers, and reduce the amount of resources, energy, chemicals, and water used by farms.
[ » Read full article ]

BBC; Sam Becker (March 27, 2024)

 

Spider Conversations Decoded with Machine Learning

Using an array of contact microphones with sound-processing machine learning, University of Nebraska-Lincoln Ph.D. student Noori Choi captured the vibratory movements of wolf spiders across sections of forest floor. In all, he collected 39,000 hours of data, including over 17,000 series of vibrations, and designed a machine learning program capable of filtering out unwanted sounds while isolating the vibrations generated by three specific wolf spider species.
[
» Read full article ]

Popular Science; Andrew Paul (April 2, 2024)

 

U.S., U.K. Partner to Test AI Model Safety

Under an agreement signed by U.K. Secretary of State for Science, Innovation and Technology Michelle Donelan (pictured, right) and U.S. Secretary of Commerce Gina Raimondo (pictured, left), the nations will partner to safety test AI models. The U.K. and U.S. AI Safety Institutes will formulate an AI safety testing approach that uses the same methods and underlying infrastructure, exchange employees and information, and undertake a joint testing exercise on a publicly available AI model.
[
» Read full article ]

Time; Will Henshall (April 1, 2024)

 

OpenAI Reveals New AI Technology That Recreates Human Voices

The New York Times Share to FacebookShare to Twitter (3/29, Metz) reported that after OpenAI “offered a tool that allowed people to create digital images simply by describing what they wanted to see,” then it built “similar technology that generated full-motion video like something from a Hollywood movie,” it has now unveiled technology “that can recreate someone’s voice.” The start-up said on Friday “that a small group of businesses was testing a new OpenAI system, Voice Engine, that can recreate a person’s voice from a 15-second recording. If you upload a recording of yourself and a paragraph of text, it can read the text using a synthetic voice that sounds like yours.” The tool can recreate your voice in Spanish, French, Chinese, or many other languages.

 

Google’s AI Controversy Puts Spotlight On DeepMind Head

The Wall Street Journal Share to FacebookShare to Twitter (4/1, Subscription Publication) reports that Demis Hassabis, head of Google DeepMind, is facing challenges after the company’s AI chatbot Gemini exhibited biased and historically inaccurate responses, leading Google to stop it from generating images. Hassabis stressed that this was not the intended behavior of the AI and highlighted the complexity involved in such technologies. Under Sundar Pichai’s overhaul, Hassabis oversees the combined efforts of DeepMind and the Brain division. Amidst this turbulent period, Hassabis aims to keep Google at the AI forefront, with adjustments expected after the Gemini fiasco. Meanwhile, inside Google, there are calls for Hassabis to increase his influence over product translations of research.

 

Skeptics Questions AI’s Impact on Corporate Efficiency

The New York Times Share to FacebookShare to Twitter (4/1, Holman, Smialek) reports that as AI increasingly powers American businesses, experts debate its efficacy in enhancing company efficiency. A key hope is that AI will bolster productivity, allowing firms to increase profits and wages without raising prices. However, skeptics like Federal Reserve Chair Jerome H. Powell and NY Fed President John C. Williams, who cites Northwestern University economist Robert Gordon, are cautious, suggesting AI’s transformative impact on productivity may not be immediate or as significant as some believe.

 

Granholm Stresses Needs For Accelerating Discussion On AI Power Needs

In an interview with Axios Share to FacebookShare to Twitter (4/1, Nichols), Energy Secretary Granholm explained that the Administration “wants to ‘accelerate’ its conversations with big technology companies on how to generate more electricity — including with nuclear power — to meet their massive demand for artificial intelligence computing.” Granholm stressed that discussions need “to accelerate, because this demand for power is only going up.” The comments come after Granholm “announced a $1.52 billion loan guarantee to help restart a shuttered nuclear power plant on the shores of Lake Michigan last week,” with Axios suggesting that “the promise of nuclear energy...was clearly on Granholm’s mind.”

 

Survey: Many Schools Lack Discipline Guidance As Teacher AI Detection Use Grows

K-12 Dive Share to FacebookShare to Twitter (4/1, Merod) reports that “nearly twice as many teachers said their schools are implementing policies that allow the use of generative artificial intelligence for classwork, with 31% reporting so in 2022-23 compared to 60% in 2023-24, according to a survey published Wednesday by the Center for Democracy & Technology, a civil rights nonprofit.” CDT found that as AI use goes up, “only about a third of teachers said they have received guidance on the actions they should take if they suspect a student’s use of AI is out of line with school policy.” Additionally, 37% said “they’ve been trained to spot if a student is using generative AI in their class assignments.” There was also a “30 percentage point jump – to 68% in 2023-24 – of teachers using these tools compared to the prior school year, according to CDT.”

 

Texas Forms Committee For AI Policy Development

The Austin (TX) American Statesman Share to FacebookShare to Twitter (4/2, Wagner, Subscription Publication) reports that Texas House Speaker Dade Phelan (R) announced the establishment of a five-member panel, the House Select Committee on Artificial Intelligence and Emerging Technologies, tasked to study “challenges and opportunities” of AI and other modern technologies. The move is driven by concerns about data privacy and cybersecurity. The committee, chaired by Rep. Giovanni Capriglione (R), who also leads the state AI Advisory Council, will make recommendations for potential legislative, policy, regulatory and remedial actions.

 

Survey Shows Most Teachers Have Received Generative AI Training, But Few Know How To Respond To Student Plagiarism

The Seventy Four Share to FacebookShare to Twitter (4/2, Keierleber) reports a new survey by the nonprofit Center for Democracy and Technology “shows most teachers say their districts have adopted guidance” for artificial intelligence tools for both educators and students. Yet what “this guidance lacks...are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat.” Sixty percent of middle and high school teachers who responded to the online survey “said their schools permit the use of generative AI for schoolwork – double the number who said the same just five months earlier on a similar survey. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat.” Report co-authors Maddy Dwyer and Elizabeth Laird write, “Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom.”

 

US, EU To Use AI To Seek Alternate Chemicals For Making Chips

Bloomberg Law Share to FacebookShare to Twitter (4/3, Subscription Publication) reports behind a paywall that the “European Union and the US plan to enlist artificial intelligence in the search for replacements to” PFAS “that are prevalent in semiconductor manufacturing, according to a draft statement seen by Bloomberg.” The pledge “forms part of the conclusions to this week’s joint US-EU Trade and Technology Council taking place in Leuven, Belgium.” The draft statement said, “we plan to explore the use of AI capacities and digital twins to accelerate the discovery of suitable materials to replace PFAS in semiconductor manufacturing.” Separately, the statement also “confirms earlier Bloomberg reporting of EU plans to join the US in reviewing the security risk of so-called legacy chips in its supply chains.”

 

Researchers Working To Measure Artificial General Intelligence Attainment

The AP Share to FacebookShare to Twitter (4/4) reports “there’s a race underway to build artificial general intelligence,” and achieving such a concept – “commonly referred to as AGI – is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.” Leading AI scientists published research Thursday “in the journal Science warning that unchecked AI agents with ‘long-term planning’ skills could pose an existential risk to humanity.” But without a clear definition of AGI, “it’s hard to know when a company or group of researchers will have achieved artificial general intelligence – or if they already have.” Some researchers “would like to find consensus on how to measure it. It’s one of the topics of an upcoming AGI workshop next month in Vienna, Austria – the first at a major AI research conference.”

dtau...@gmail.com

unread,
Apr 13, 2024, 1:05:29 PM4/13/24
to ai-b...@googlegroups.com

Now Hiring: Sophisticated (Part-Time) Tutors for Chatbots

The growth of large language models has fueled the need for knowledgeable humans to train them. Companies that specialize in data curation hire contractors and sell their training data to bigger developers. Developers of AI models also recruit in-house data annotators. As with other types of gig work, benefits for employees come with challenges, such as being cut off from the work with no explanation. Researchers also have raised concerns over a lack of standards.
[
» Read full article ]

The New York Times; Yiwen Lu (April 11, 2024)

 

IBM's Watson Supercomputer Made a Lousy Tutor

As Google, Microsoft, and other tech firms work on education-related AI tools, Satya Nitta at IBM's Watson Research Center cautions that AI is "not great as a replacement for humans." A five-year project at IBMʼs Watson Research Center to create an AI-driven tutoring system using Watson ultimately found the Watson system lacked the ability to engage, motivate, inspire, and keep children focused on their lessons.
[
» Read full article ]

The74; Greg Toppo (April 9, 2024)

 

Trudeau Unveils $1.8-Billion Package for Canada's AI Sector

The Canadian government announced C$2.4-billion (US$1.8-billion) in measures to bolster the nation's AI sector, including C$2 billion for "computing capabilities and technological infrastructure." The government also proposed the creation of a Canadian AI Safety Institute. Canadian Prime Minister Justin Trudeau announced the initiatives in Montreal, one of a number of AI hubs that has emerged in Canada. ACM A.M. Turing Award laureate Yoshua Bengio said during the Montreal event, "Canada places itself on the right side of history with this announcement."


[
» Read full article *May Require Paid Registration ]

Bloomberg; Mathieu Dion; Brian Platt (April 7, 2024)

 

Tool Makes AI Models Hallucinate Cats to Fight Copyright Infringement

A tool called Nightshade, developed by University of Chicago (UoC) researchers, changes images in ways that are nearly invisible to the human eye but which look dramatically different to AI models, as a means of protecting artworks from copyright infringements. Nightshade alters thousands of pixels, a small amount compared to images that contain millions of pixels but enough to trick an AI into seeing “something that's completely different,” explained UoC's Shawn Shan.
[ » Read full article ]

NBC News; Brian Cheung (April 4, 2024)

 

Texas Will Use Computers to Grade STAAR Tests

The Texas Education Agency (TEA) this year will use an “automated scoring engine” that uses natural language processing technology to assess and grade open-ended questions on the State of Texas Assessment of Academic Readiness (STAAR) for reading, writing, science, and social studies. TEA gathered 3,000 responses that went through two rounds of human scoring, and used them to teach the automated scoring engine the characteristics of responses. It is programmed to assign the same scores a human would have given.
[ » Read full article ]

The Texas Tribune; Keaton Peters (April 9, 2024)

 

'Social Order Could Collapse' in AI Era, Japanese Companies Say

In an AI manifesto published April 8, Japan's Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun Group Holdings called for legislation to rein in generative AI. Despite acknowledging the productivity benefits afforded by generative AI, the manifesto said that if AI remains unchecked, "in the worst-case scenario, democracy and social order could collapse, resulting in wars." The companies called for laws to safeguard elections and national security from generative AI abuse.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Peter Landers (April 7, 2024)

 

Who Decides When Artificial General Intelligence is Attained?

Although achieving artificial general intelligence (AGI), when machines are as smart as or can perform many tasks as well as humans, is a goal for many researchers, there is no standard to determine when it has been attained. An AGI workshop in Vienna, Austria, in May will focus on this lack of consensus. Said University of Illinois Urbana-Champaign's Jiaxuan You, "This really needs a community's effort and attention so that mutually we can agree on some sort of classifications of AGI."
[ » Read full article ]

Associated Press; Matt O'Brien (April 4, 2024)

 

U.S., EU to Use AI to Seek Alternate Chemicals for Making Chips

The U.S. and EU have pledged to use AI to identify replacements for "forever chemicals" used in semiconductor manufacturing. A draft statement from the joint U.S.-EU Trade and Technology Council in Belgium said, "We plan to explore the use of AI capacities and digital twins to accelerate the discovery of suitable materials to replace PFAS [per- and polyfluorinated substances] in semiconductor manufacturing." The U.S. and EU also will review the security risk of legacy chips in their supply chains.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Alberto Nardelli; Debby Wu (April 3, 2024)

 

Hyundai Coffee Delivery Robot Rides Elevators Alone

The Hyundai/Kia Robotics Lab developed the autonomous DAL-e Delivery robot for use in office buildings. The robot is equipped with onboard sensors that allow it to avoid obstacles while navigating complex environments, as well as real-time optimal route calculation capabilities, and an AI facial recognition system to ensure deliveries reach the correct individual. The DAL-e robot can carry up to 22 pounds of packages, or up to 16 cups of coffee.
[ » Read full article ]

New Atlas; Paul Ridden (April 3, 2024)

 

U.S. Tech Giants Turn to Mexico to Make AI Gear

To ease reliance on China, some U.S. tech companies are calling on their manufacturing partners in Taiwan to increase AI-related hardware production in Mexico. Such nearshoring has increased under the U.S.-Mexico-Canada Agreement. The Mexico facilities of Taiwan's Foxconn manufacture AI servers for Amazon, Google, Microsoft, and Nvidia, according to sources. Dell and Hewlett Packard Enterprise also have asked their suppliers to shift some server and cloud computing production to Southeast Asia and Mexico.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Yang Jie; Santiago Pérez (March 31, 2024)

 

AI Generates 3D City Maps from Single Radar Images

A machine learning system developed by researchers at Germany's University of the Bundeswehr uses a single synthetic aperture radar (SAR) image to create complete 3D city maps. The SAR2Height framework was created using SAR images from the TerraSAR-x satellite and height maps for 51 cities. The researchers performed one-to-one, pixel-to-pixel mapping between the height maps and SAR images to train a deep neural network. SAR2Height can predict building heights in SAR images within about 3 meters and could be used to help assess earthquake damage.
[
» Read full article ]

IEEE Spectrum; Mark Harris (April 4, 2024)

 

OpenAI Expands Enterprise AI Services Via ChatGPT

The Wall Street Journal Share to FacebookShare to Twitter (4/8, Subscription Publication) reports that OpenAI is leveraging its ChatGPT platform, with 600,000 individual business users, to promote its AI services to enterprises. Despite strong competition and a lack of precise market share data, OpenAI cites a survey by Andreessen Horowitz that reveals 100% of 70 enterprises utilize OpenAI models. The startup faces challenges in gaining enterprise credibility and the complexities of vendor choices, as some CIOs prefer a single-source AI partnership. The company maintains independence from Microsoft, offering direct access to AI technologies alongside options through Microsoft’s Azure, though some companies may find it easier to leverage OpenAI services through their existing Azure accounts.

        Fortune Share to FacebookShare to Twitter (4/8) reports, “A recent survey by venture capital firm a16z found that for large companies adopting generative AI, OpenAI’s closed, proprietary models remain the most popular by far – particularly for use cases actually put into production.” However, “in 2024, [organizations] are opening up to experimenting with more AI model options – that are often open source,” and “more organizations are experimenting with open source models.”

 

Microsoft To Open AI Hub In London

Reuters Share to FacebookShare to Twitter (4/8) reports Microsoft plans to establish “a new artificial intelligence (AI) hub in London, focused on product development and research.” The facility will be overseen “by Mustafa Suleyman, the London-born cofounder of Google DeepMind, who Microsoft hired last month.” Reuters says that with AI competition “heating up across Europe over the past 18 months...Microsoft may seek to poach experts from other AI-focused companies to staff its new unit, such as DeepMind or OpenAI.” Reuters adds that the decision by Microsoft “represents a win for Britain, which has sought to bolster its credentials as a technology superpower since hosting the world’s first global AI safety summit in November.”

 

University Of Washington Joins $110M Academic Partnership With Japan To Advance AI

The University of Washington Share to FacebookShare to Twitter (4/9) says the University of Washington and the University of Tsukuba “have entered an innovation partnership with NVIDIA and Amazon aimed at furthering research, entrepreneurship, workforce development and social implementation in the field of artificial intelligence.” The US-Japan academic partnership “is part of a broad, $110 million effort to build upon the strong ties between the U.S. and Japan and to continue to lead innovation and technological breakthroughs in artificial intelligence.” The agreement “was announced on April 9th in Washington, D.C. as part of Prime Minister Kishida Fumio’s historic state visit.” These partnerships are supported by “combined private sector investment from NVIDIA, Amazon, Arm, Microsoft, and nine Japanese companies. Amazon and NVIDIA will each invest $25 million in this collaboration.”

 

GenAI Users Report Productivity Gains Over Time

Insider Share to FacebookShare to Twitter (4/9, Zinkula) reports that despite AI’s potential “to boost productivity, seeing the benefits can take time. Microsoft’s research found that using its Copilot tool for 11 weeks was the ‘breakthrough moment’ at which a majority of users reported improvement” in their work. Broadly, the impact of AI adoption on workers “will likely be a mixed bag. For example, one-quarter of global CEOs surveyed by the consulting firm PwC between last October and November said they planned to ‘reduce employee head count by at least 5% in 2024 due to generative AI.’”

 

Meta Confirms Llama 3 Release Plans

TechCrunch Share to FacebookShare to Twitter (4/9, Lunden) reports, “At an event in London on Tuesday, Meta confirmed that it plans an initial release of Llama 3 – the next generation of its large language model used to power generative AI assistants – within the next month.” Meta president of global affairs Nick Clegg is quoted saying, “Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3. ... There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.”

 

OpenAI Gears Up To Fight New Legal Battles

The Washington Post Share to FacebookShare to Twitter (4/9, Zakrzewski, Lima) reports, “OpenAI is defending against numerous lawsuits and government inquiries since comedian Sarah Silverman first initiated legal action for the alleged misuse of her memoir to train the company’s AI.” The firm has expanded its legal team, hiring around two dozen lawyers since March 2023, and is negotiating with Chris Lehane for strategic public policy efforts. Amidst copyright disputes and scrutiny over a Microsoft partnership, OpenAI’s evolution from a startup to a target for Big Tech litigation reflects the Silicon Valley pattern of facing backlash after initial success. The company has had mixed results in court but remains a focus for regulatory bodies like the SEC and FTC.

 

Computer Science Teachers Experimenting With New AI-Powered Grading Tool

Education Week Share to FacebookShare to Twitter (4/10, Klein) reports an AI teaching assistant has been developed by Code.org, “a nonprofit organization that aims to expand access to computer science courses, and the Piech Lab at Stanford University in Palo Alto, Calif.” Twenty teachers nationwide tested “the computer science grading tool’s capability on about a dozen coding projects also designed by Code.org, as part of a limited pilot project. Beginning today, Code.org is inviting an additional 300 teachers to give the tool a try.” In early testing, the tool’s assessment of student work “closely tracked those of experienced computer science teachers, said Karim Meghji, the chief product officer at Code.org. If that trend holds through this larger trial, the nonprofit hopes to make the tool widely available to computer science teachers around the country, he said.” Many educators see “helping teachers tackle time-consuming but relatively rote tasks – like grading – as a huge potential upside of AI.”

 

IBM Supercomputer Failed To Effectively Tutor Students Using AI

The Seventy Four Share to FacebookShare to Twitter (4/9, Toppo) reports IBM researcher Satya Nitta was “tasked with figuring out how to apply the” Watson supercomputer’s powers “to education, he soon envisioned tackling ed tech’s most sought-after challenge: the world’s first tutoring system driven by artificial intelligence. It would offer truly personalized instruction to any child with a laptop – no human required.” Nitta persuaded his bosses “to throw more than $100 million at the effort, bringing together 130 technologists, including 30 to 40 Ph.D.s, across research labs on four continents.” But by 2017, “the tutoring moonshot was essentially dead, and Nitta had concluded that effective, long-term, one-on-one tutoring is ‘a terrible use of AI – and that remains today.’”

 

Schiff Introduces GenAI Copyright Disclosure Act

Engadget Share to FacebookShare to Twitter (4/10) reports, “The debate over using copyrighted materials in AI training systems rages on – as does uncertainty over which works AI even pulls data from. US Congressman Adam Schiff is attempting to answer the latter, introducing the Generative AI Copyright Disclosure Act on April 9. The bill would require AI companies to outline every copyrighted work in their datasets.” If the Act passes, “companies would need to file all relevant data use to the Register of Copyrights at least 30 days before introducing the AI tool to the public. They would also have to provide the same information retroactively for any existing tools and make updates if they considerably altered datasets,” or be fined.

 

Congressman Enrolled At George Mason University To Better Understand AI

The AP Share to FacebookShare to Twitter (4/11, Klepper) reports that when questions about regulating artificial intelligence emerged, Rep. Don Beyer (D-VA) enrolled at George Mason University “to get a master’s degree in machine learning. In an era when lawmakers and Supreme Court justices sometimes concede they don’t understand emerging technology, Beyer’s journey is an outlier, but it highlights a broader effort by members of Congress to educate themselves about artificial intelligence as they consider laws that would shape its development.” As artificial intelligence has been called “a transformative technology, a threat to democracy or even an existential risk for humanity,” it will fall to members of Congress “to figure out how to regulate the industry in a way that encourages its potential benefits while mitigating the worst risks. But first they have to understand what AI is, and what it isn’t.”

 

Microsoft To Reveal AI Tools At Build Conference

CNBC Share to FacebookShare to Twitter (4/10, Novet) reported, “Microsoft will reveal brand-new artificial intelligence tools for use on PCs and in the cloud at its annual Build conference, according to a session list posted Wednesday.” According to CNBC, the itinerary for the event reflects the company’s goal to have AI become a “first-class part of every PC,” in the words of CEO Satya Nadella. CNBC adds, “The new head of Microsoft AI, Mustafa Suleyman, will take the stage alongside...Nadella and other longtime executives during the show’s opening keynote in Seattle. Suleyman – a cofounder of DeepMind, the AI startup that Google acquired in 2014 – joined Microsoft last month from startup Inflection AI.”

 

Experts Say AI Could Transform Student Testing In Schools

The Hechinger Report Share to FacebookShare to Twitter (4/11, Preston, Salman) reports that a senior analyst of innovative assessments with the Programme for International Student Assessment (PISA) and others argue that AI “has the potential to shake up the student testing industry, which has evolved little for decades and which critics say too often falls short of evaluating students’ true knowledge.” They also warn that “the use of AI in assessments carries risks.” PISA expects to integrate AI “into the design of its 2029 test,” and the Organization for Economic Cooperation and Development, “which runs PISA, is exploring the possible use of AI in several realms.” When it comes to using AI “to design tests, there are all sorts of opportunities. Career and tech students could be assessed on their practical skills via AI-driven simulations,” and while those hands-on tests “are incredibly intensive and costly,” AI could help put such tests “within reach for students and schools around the world.”

dtau...@gmail.com

unread,
Apr 20, 2024, 12:24:29 PM4/20/24
to ai-b...@googlegroups.com

Microsoft's AI Copilot Automates the Coding Industry

The AI Copilot coding assistant developed by Microsoft's GitHub is behind a growing percentage of new software programs, as the incorporation of the latest version of OpenAI's GPT-4 technology has expanded its capabilities. Microsoft said 1.3 million customers, including 50,000 businesses, are using Copilot. Software engineers typically use Copilot to handle tedious and repetitive tasks, such as debugging software, but some are using it to develop code for critical systems.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jackie Davalos; Dina Bass (April 17, 2024)

Bengio Makes the 2024 TIME100 List

Time magazine has named ACM A. M. Turing Award laureate Yoshua Bengio to its list of the 100 Most Influential People of 2024. Bengio, a leading AI researcher, has been vocal about AI safety and the possibility of catastrophic outcomes related to future AI. Among other things, he has served as an adviser for the U.N. Secretary-General and the U.K. AI Safety Institute.
[ » Read full article ]

Time; Geoffrey Hinton (April 17, 2024)

 

Intel Unveils Largest Neuromorphic System

Intel announced today it has built the world's largest neuromorphic system. Codenamed Hala Point, the research prototype system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores consuming a maximum of 2,600 watts of power. Said Intel’s Mike Davies, Hala Point “combines deep learning efficiency with novel brain-inspired learning and optimization capabilities.”
[ » Read full article ]

Intel Press Release (April 17, 2024)

 

Go Language Shines for AI-Powered Workloads

The Go Developer Survey for the first half of 2024, conducted by Google's Go team, revealed that developers of AI-powered applications and services view the Go programming language as a robust platform for running those workloads. However, most begin AI-powered work in Python before transitioning to a more "production-ready" language. Of the 6,224 developers polled, 93% of respondents expressed satisfaction with Go during the past year, while 42% cited concerns about insecure coding practices when working on Go services.
[ » Read full article ]

InfoWorld; Paul Krill (April 10, 2024)

 

Instagram Tests Chatbot Versions of Influencers

Instagram is testing Creator AI, a program that would enable popular influencers to interact with fans through AI. Sources say Creator AI is a chatbot that will talk with fans via direct messages and potentially comments, mimicking the influencer's "voice" to help creators respond to messages and comments. Creator AI will send messages automatically, disclosing, at least initially, that they are generated by AI, and allow creators to choose the data used to copy their voice and dictate responses to specific questions.

[ » Read full article *May Require Paid Registration ]

The New York Times; Sapna Maheshwari; Mike Isaac (April 15, 2024)

 

AI Helps Ambulances, Firetrucks Arrive Faster

Cities across the U.S. are testing AI tools in an effort to improve emergency response times and cut costs. Technology from the C2Smarter academic consortium is used with street sensor data in Manhattan to determine the best route for firetrucks to get to a fire. Lyt offers a system that connects sensors attached to city vehicles with software that controls traffic signals.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Matthew Boyle (April 16, 2024)

 

Humanoid Robots Show Off Football Skills

Google DeepMind researchers used deep reinforcement learning to train two-legged robots to walk, turn to kick a ball, and get up after falling. After 240 hours of deep reinforcement learning, the battery-powered Robotis OP3 robots could walk 181% faster, turn 302% faster, kick a ball 34% harder, and get up 63% more quickly than robots working from pre-scripted skills. The researchers used a physics engine to simulate training cases, instead of having the robots learn by repeatedly attempting tasks.

[ » Read full article *May Require Paid Registration ]

New Scientist; Chris Stokel-Walker (April 10, 2024)

 

Microsoft Ups Ante in AI Race with China Through Stake in Abu Dhabi Firm

Microsoft will invest $1.5 billion in G42, a tech company based in Abu Dhabi and backed by the UAE government. The deal includes an intergovernmental pact to ensure AI security and involves the creation of "a skilled and diverse AI workforce" for the UAE and a $1-billion fund for developers. G42 will use Microsoft's cloud services for its AI applications.
[ » Read full article ]

The Wall Street Journal; Rory Jones (April 16, 2024)

 

AI Chip Trims Energy Budget

A microchip developed by researchers at China's Tsinghua University and the Beijing National Research Center for Information Science and Technology uses photons instead of electrons to run AI tasks with greater energy efficiency. The researchers took a hybrid approach, incorporating both clusters of diffractive units that can compress data for large-scale input and output, and interferometer arrays for reconfigurable computation. The Taichi microchip distributes computing across numerous chiplets operating in parallel and possesses 13.96 million parameters, mimicking synapses connecting neurons in the human brain.
[ » Read full article ]

IEEE Spectrum; Charles Q. Choi (April 12, 2024)

 

Race for AI Supremacy in Middle East Measured in Datacenters

A rivalry between Saudi Arabia and the United Arab Emirates to become the regional AI superpower has ignited a race to build expensive desert datacenters to support the technology. At the end of last year, the UAE had 235 megawatts of datacenter capacity and Saudi Arabia had 123 megawatts, compared to Germany’s 1,060 megawatts, according to research firm DC Byte. To close the gap, the UAE is planning to expand capacity by 343 megawatts, while Saudi Arabia says it wants to add 467 megawatts over the next few years.
[ » Read full article ]

Bloomberg; Marissa Newman; Olivia Solon; Mark Bergen (April 11, 2024)

 

Energy-Guzzling AI is also the Future of Energy Savings

AI datacenters could account for up to a quarter of U.S. power usage by 2030, up from 4% currently, according to the CEO of chip-design firm Arm. However, AI could be an important tool in identifying potential energy savings. Schneider Electric CEO Peter Herweck said buildings' energy consumption could be cut 15% to 25% during the next four years with the help of AI.

[ » Read full article *May Require Paid Registration ]

The Wall Street Journal; Carol Ryan (April 12, 2024)

 

AI Regulation Debate Heats Up At Antitrust Event

Bloomberg Share to FacebookShare to Twitter (4/12, Nylen, Subscription Publication) reported that at a recent antitrust conference in Washington, executives from Andreessen Horowitz, OpenAI, and Google advocated for fewer regulations on AI, suggesting that AI could significantly benefit sectors like healthcare and energy if not hindered by excessive oversight. Google’s top lawyer, Kent Walker, emphasized the importance of not letting AI development fall victim to “partisan politics.” Concurrently, at the main conference event, Federal Trade Commission Chair Lina Khan expressed skepticism, highlighting that tech companies must still adhere to existing laws against collusion and monopolization. The discussions also touched on concerns that large tech companies, including Amazon, might dominate AI through strategic investments in startups, potentially stifling competition and innovation in the industry. The UK’s Competition and Markets Authority (CMA) released a report calling “out Microsoft, Google, Apple Inc., Meta Platforms Inc., Amazon and Nvidia , saying their investments in AI may allow them ‘to shape these markets in their own interests.’” Bloomberg notes Amazon’s investments in Anthropic.

        TechCrunch Share to FacebookShare to Twitter (4/11, Lomas) reported the CMA has issued a warning about the potential monopolistic behaviors of major tech companies, including Amazon, in the AI sector. CMA CEO Sarah Cardell expressed “real concerns” about the development of the foundational AI model sector, highlighting the risks of market power concentration. The CMA’s Update Paper specifically notes the involvement of Amazon among other tech giants, suggesting their significant presence across the AI value chain could “profoundly shape FM-related markets to the detriment of fair, open and effective competition.” This concern is driven by the potential for these companies to reduce consumer choice and quality while increasing prices.

Dartmouth College Researchers Developing Therapeutic Mental Health App Using AI

NBC News Share to FacebookShare to Twitter (4/13, Weir, McLaughlin, Dong) reported that Therabot, “an experimental, artificial intelligence-powered therapeutic app that its creators hope will drastically improve access to mental health care, began its first clinical trial last month.” The “text-based AI app in development at Dartmouth College” uses generative AI in conversations with users, as well as “a form of AI that learns patterns.” Other mental health apps have been launched, such as Wysa, which “in 2022 received a Food and Drug Administration Breakthrough Device designation,” but they “generally rely on rules-based AI with preapproved scripts.”

 

Elon Musk’s AI Startup Seeks Billions In Funding. Bloomberg Share to FacebookShare to Twitter (4/11, Subscription Publication) reported that Elon Musk’s new AI venture, X.AI Corp., is aiming to secure between $3 billion and $4 billion in funding, pushing the company’s valuation to $18 billion. Investor materials, including a detailed pitch deck of approximately 20 pages, emphasize Musk’s successful history with companies like Tesla and SpaceX and highlight the strategic advantage of accessing high-quality data from Musk’s network X for AI development. The final terms and values of this investment round are yet to be finalized.

Long Island Teachers Use AI To Create Lesson Plans

The Seventy Four Share to FacebookShare to Twitter (4/15, D'Orio) reports on a group of students in Franklin Square, New York, learning about ancient Greek vases, saying that while the class was similar to those taking place in other schools, “behind the scenes, preparing for the lesson was anything but typical for teachers Janice Donaghy and Jean D’Aurio. They had avoided the hours of preparation the lesson might normally have taken by using artificial intelligence to craft a plan that included a summary of ancient Greek vases, exit questions and student activities.” The teachers “consulted the county’s curriculum guide but also used Canva, a tool that automatically generated pictures of Grecian vases. The teachers turned to Diffit, another AI application, to craft a reading passage that explained the importance of vases in everyday life in ancient Greece.”

Plagiarism Detection Platform Turnitin Releases Data On Students Using AI In Their Writing

K-12 Dive Share to FacebookShare to Twitter (4/15) reports, “It’s been a year since grading and plagiarism detection platform Turnitin unveiled its artificial intelligence writing detection tool,” and as schools increasingly use technology to detect plagiarism, “the company shared data regarding the hundreds of millions of student papers processed through its system. AI-generated content continued to show up in student work, according to Turnitin.” Of the 200 million papers reviewed by Turnitin, 22 million student papers had at least 20% of the writing that was AI content. Six million reviewed papers “contained at least 80% AI writing.”

Column: Lack Of Measurement Of AI Tools A “Major Problem”

In his New York Times Share to FacebookShare to Twitter (4/15) column, Kevin Roose writes that we “don’t really know how smart” AI tools like ChatGPT are. “That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public,” he said, explaining “there’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.” Roose says he is “convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.” Moreover, he points out that “shoddy measurement also creates a safety risk. Without better tests for A.I. models, it’s hard to know which capabilities are improving faster than expected, or which products might pose real threats of harm.”

AI Job Displacement Farther Away Than Expected

Fortune Share to FacebookShare to Twitter (4/15) reports that at the Fortune Brainstorm AI London conference on Monday, Marc Warner, CEO of AI consultancy Faculty, addressed concerns about AI leading to widespread job losses. Warner argued that the impact of AI on employment is likely to be less immediate than anticipated, drawing parallels to the slower-than-expected advancement of autonomous vehicles. He highlighted overestimations surrounding AI’s capabilities and pointed to similar overhyped projections in other tech areas, such as self-driving cars, suggesting a more gradual integration of AI into the workforce.

Google Cloud Announces Partnerships With Biotech Firms

SiliconANGLE Share to FacebookShare to Twitter (4/16) reports that Google Cloud announced new partnerships with Path AI Inc. and TetraScience Inc. at the Bio-IT World conference this week. These collaborations are aimed at enhancing AI-driven drug discovery and scientific research within the biotech sector. Path AI will integrate its digital pathology platform with Google Cloud to expand its services globally, enhancing the analysis of patient tissue samples. Similarly, TetraScience will leverage Google Cloud’s infrastructure to process and format large volumes of complex data, improving efficiency and accelerating the research and development of new therapies.

Microsoft’s Deal With OpenAI Avoids EU Antitrust Scrutiny

Bloomberg Share to FacebookShare to Twitter (4/17, Subscription Publication) reports Microsoft’s $13 billion investment in OpenAI will not face formal scrutiny by the European Union’s antitrust watchdogs, as the involvement is short of a takeover and Microsoft does not control OpenAI’s development, according to unnamed sources. In January, the EU announced a review of the partnership following a mutiny at OpenAI which exposed deep ties between both companies. Microsoft declined to comment, though it pointed to an earlier statement stating that its OpenAI partnership has “fostered more AI innovation and competition, while preserving independence for both companies.”

        Also reporting is Reuters Share to FacebookShare to Twitter.

Google AI Aids National Guard In Disaster Response

The Washington Post Share to FacebookShare to Twitter (4/17, De Vynck) reports that Google’s innovation lab, part of Alphabet Inc., developed artificial intelligence technology to assist the National Guard in analyzing disaster area images more efficiently. The technology, created by Bellwether, a group within “X,” enables rapid comparison of aerial photos with satellite imagery to identify crucial infrastructure, significantly speeding up response times. According to Nirav Patel from the Defense Innovation Unit, this advancement sharply reduces the analysis time from hours or days to seconds, which will be increasingly important as climate change intensifies.

Ed Tech Experts Discourages Use Of Term “Hallucination” To Refer To AI Mistakes

Education Week Share to FacebookShare to Twitter (4/17) reports that academic text created by AI tools “might be riddled with factual errors. Those inaccuracies are commonly known as ‘hallucinations’ in computer science speak – but education technology experts are trying to steer away from that term.” According to Pati Ruiz, a senior director of ed tech and emerging technologies for the nonprofit Digital Promise, the word “hallucination” could be seen as disparaging people with mental health issues. Ruiz “she added that using that word for AI’s errors ‘might give students a false sense of this tool having humanlike qualities. And that’s something that we advocate against, right? We advocate for folks to understand these tools as just that, tools that will support us as humans.’”

Faculty Unions Pressing Institutions For AI Guidelines

Inside Higher Ed Share to FacebookShare to Twitter (4/18, Coffey) reports, “Faculty unions are starting to take their concerns about artificial intelligence out of peer group discussions and into contract negotiations.” The growth of AI tools “immediately began causing concern for some faculty members across the nation. Faculty unions, from local state associations to national behemoths, are now discussing how to ensure their concerns are addressed by their institutions’ administrators.” The National Education Association represents “nearly 200,000 members and expects to have language finalized in July that members can use as a framework for their own bargaining. The union expects to lay groundwork focused on questions – centered on issues such as ethical concerns, data protection and educators’ involvement – that faculty should be asking administrators as they create AI policies.”

        King’s College Professor: AI Could Reduce Demand For College Educated Workers, Impacting Higher Education. Higher Ed Dive Share to FacebookShare to Twitter (4/18) reports the rise of AI poses “existential questions about the role of higher education. Generative AI systems such as ChatGPT are poised to disrupt white collar and professional work, according to Daniel Susskind, an economics professor at King’s College London, who has written a handful of books on tech’s impact on work. That, in turn, could have important ripple effects on colleges, which have long served as the training camps for those workers, he said.”

AI Could Improve US Recycling Efficiency

Insider Share to FacebookShare to Twitter (4/18, Boudreau) reports EverestLabs, an AI robotics company, is addressing the inefficiencies in the US recycling system, where about 27% of recyclable materials end up in landfills, potentially costing recycling centers up to $1 million annually. By employing 3D cameras, machine learning, and robotics, EverestLabs and similar startups aim to recover more materials, thus increasing profits and reducing greenhouse gas emissions. The technology focuses on sorting materials like aluminum and plastic, which are in high demand for manufacturing due to sustainability goals and regulations requiring recycled content. EverestLabs’ study, conducted over two years at various recycling centers, highlights the significant losses of valuable commodities. The technology has been adopted by major waste haulers and recyclers, with Caglia Environmental in California successfully diverting over 1 million cans from landfills.

Meta Releases LLaMA 3, Expands AI Assistant Across Apps

The New York Times Share to FacebookShare to Twitter (4/18, Isaac, Metz) reports that Meta is set to enhance its applications, including Instagram, WhatsApp, Messenger, and Facebook, with its latest AI-powered smart assistant software starting Thursday. The new software, named Meta A.I. and based on LLaMA 3 technology, promises extensive integration, enabling users to perform tasks and access information seamlessly across the platform. Meta CEO Mark Zuckerberg is quoted saying, “With LLaMA 3, Meta A.I. will now be the most intelligent freely available assistant. ... And because we’ve reached the quality level we want, we’re now going to make it much more prominent and easier to use across all our apps.”

Lawmakers Face Pushback Over AI Regulations

The AP Share to FacebookShare to Twitter (4/18) reports, “The first major proposals to reign in bias in AI decision making are facing headwinds from every direction.” While “advocacy groups are pulling for more transparency from companies and greater legal recourse for citizens to sue over AI discrimination,” companies are worried about an increased “risk of lawsuits and the revelation of trade secrets.” The AP adds that “the biggest bills this team of lawmakers has put forward offer a broad framework for oversight, particularly around one of the technology’s most perverse dilemmas: AI discrimination.” Since “up to 83% of employers use algorithms to help in hiring,” advocates warn that risks of discrimination are high.

AI Deepfakes Disrupt Pop Music Industry

TIME Share to FacebookShare to Twitter (4/18) reports recent suspected leaks of songs by prominent artists like Taylor Swift and Drake have been attributed to AI-generated vocal deepfakes, leading to significant confusion among fans. Sites like Reddit have seen intense speculation about the authenticity of these tracks, highlighted by incidents such as Rick Ross releasing a diss track in response to supposed lyrics from Drake. This issue is becoming more evident as AI technology that replicates artists’ voices improves and becomes more accessible, fooling even the most dedicated fans. Simultaneously, the music industry and lawmakers are exploring methods to protect artists, evidenced by multiple lawsuits against AI companies and new laws like the ELVIS Act in Tennessee, which targets unauthorized use of vocal mimicry through AI.

dtau...@gmail.com

unread,
Apr 27, 2024, 8:31:19 AM4/27/24
to ai-b...@googlegroups.com

Generative AI Arrives in the Gene Editing World of CRISPR

Generative AI technology developed by Berkeley, Calif.-based startup Profluent is generating blueprints for microscopic biological mechanisms with a gene editor called OpenCRISPR-1, which can edit DNA. The technology learns from sequences of amino acids and nucleic acids, in essence analyzing the behavior of CRISPR gene editors pulled from nature and learning how to generate entirely new gene editors. "These AI models learn from sequences, whether those are sequences of characters or words or computer code or amino acids," said Profluent CEO Ali Madani (pictured). Profluent said that it was "open sourcing" its OpenCRISPR-1 editor, though not the AI technology behind it.
[
» Read full article *May Require Free Registration ]

The New York Times; Cade Metz (April 23, 2024)

 

AI Industry's Thirst for New Datacenters Can't Be Satisfied

The rush to construct datacenters amid a surge in AI demand has led to a shortage of the necessary parts, property, and electricity. Datacenter executives say it can take two years to obtain backup generators, and delivery times for custom cooling systems are five times longer than several years ago. Finding real estate with enough power and data connectivity also poses a challenge.


[
» Read full article *May Require Paid Registration ]

The Wall Street Journal; Tom Dotan; Asa Fitch (April 24, 2024)

 

Microsoft Pushes into Smaller AI Systems

To attract customers with lower prices, Microsoft has unveiled three smaller AI models: Phi-3-mini, Phi-3-small, and Phi-3-medium. The smallest, Phi-3-mini, can run on a smartphone even without an Internet connection, and on chips that power regular computers. Lower processing requirements translates into lower prices, although the Phi-3 models may be less accurate than larger models.


[
» Read full article *May Require Paid Registration ]

The New York Times; Karen Weise; Cade Metz (April 23, 2024)

 

Advanced Brain Science Without Coding Expertise

A deep learning tool developed by researchers at Germany's Helmholtz Munich and the LMU University Hospital Munich enables brain cell mapping without the need for coding expertise. The goal of the tool, DELiVR (Deep Learning and Virtual Reality), is to democratize 3D brain analysis. Researchers can train DELiVR for specific cell types, and it works with the open source Fiji software for image analysis.
[
» Read full article ]

Helmholtz Centers (April 22, 2024)

 

AI-Controlled Jet Fighter Flies Against Human Pilots

As part of the U.S. Department of Defense's Defense Advanced Research Projects Agency's (DARPA) Air Combat Evaluation program, an AI test pilot has flown a jet fighter in dogfights against human pilots for the first time. The X-62A Variable Stability In-Flight Simulator Test Aircraft (VISTA) had flown 21 test flights since December 2022, including an aerial engagement within visual range against a human-piloted F-16. DARPA said more than 100,000 lines of flight-critical software changes were made over the period of test flights, marking "an unprecedented rate of development."
[
» Read full article ]

Ars Technica; Jonathan M. Gitlin (April 19, 2024)

 

Deepfakes of Bollywood Stars Spark Worries of Meddling in India Election

Deepfake videos of A-list Bollywood actors Aamir Khan (pictured, right) and Ranveer Singh (left) criticizing India Prime Minister Narendra Modi (center) have gone viral. The videos, which call on viewers to vote for the opposition Congress party, have generated concerns about the use of AI to influence the nation's ongoing general election. Reuters found that the videos had been viewed more than 500,000 times on social media since last week. At least eight fact-checking websites determined the videos to be altered or manipulated, but it remains unclear who created them.
[
» Read full article ]

Reuters; Aditya Kalra; Munsif Vengattil; Dhwani Pandya (April 22, 2024); et al.

 

3D Model of Black Hole's Mysterious Flare

Researchers at the California Institute of Technology have produced the first 3D model of the flare surrounding Sagittarius A*, the Milky Way's black hole. The 3D video was built by combining telescope data with AI computer-vision simulation. The researchers leveraged the neural radiance fields (NeRF) algorithm, which uses deep learning to produce a 3D representation from 2D images, and data from Chile's Atacama Large Millimeter Array.
[
» Read full article ]

Interesting Engineering; Mrigakshi Dixit (April 22, 2024)

 

Google to Provide AI to Military for Disaster Response

Google is providing AI tools to the U.S. National Guard to analyze images of disaster areas, in order to improve its disaster response. The technology can compare aerial photos with satellite imagery and maps to pinpoint locations, roads, buildings, and other infrastructure, allowing the National Guard to see what has been damaged and deploy resources accordingly. The technology will be rolled out for the summer wildfire season.

[ » Read full article *May Require Paid Registration ]

The Washington Post; Gerrit De Vynck (April 17, 2024)

 

Lightweight Model for Gymnast Movement Detection

Researchers in China demonstrated the effectiveness of the ShuffleNet V2 and convolutional block you only look once version 5 (SCB-YOLOv5) model for on-site athlete action detection. The SCB-YOLOv5 model combines lightweight design principles with advanced feature integration strategies for high detection accuracy while minimizing computational resources. The model outperformed other detectors in recognizing irregular hand and leg movements.
[
» Read full article ]

AZoAi; Sipaja Chandrasekar (April 24, 2024)

 

Olympic Organizers Unveil Strategy for Using AI in Sports

The International Olympic Committee on Friday outlined its agenda for using AI in sports, saying the technology could be used to help identify promising athletes, personalize training methods, and make the games fairer by improving judging. Plans also include using AI to protect athletes from online harassment and to help broadcasters improve the viewing experience for home viewers. Some AI projects will be rolled out at the Paris games this summer.
[ » Read full article ]

Associated Press; Kelvin Chan (April 19, 2024

 

Simulating Human Text Entry on Mobile Phones

An AI model developed by researchers at Finland's Aalto University can simulate human-like typing on mobile phones. The model, developed in collaboration with Google, can be used to analyze different user groups, such as those who type with one finger, when evaluating mobile keyboard designs. Aalto's Antti Oulasvirta said the researchers created a simulated user with human-like visual and motor systems, and “trained it millions of times in a keyboard simulator until it acquired typing skills applicable in various real-world scenarios.”
[ » Read full article ]

Helsinki Times (Finland) (April 18, 2024)

 

Young Engineers Seem Unprepared To Face AI’s Ethical Challenges

Writing for The Conversation Share to FacebookShare to Twitter (4/19), University of Michigan associate professor Erin A. Cech and doctoral candidate Elana Goldenkoff said that “as artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.” Cech and Goldenkoff “are currently researching how engineers in many different fields learn and understand their responsibilities to the public. Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work.” Additionally, some appear “apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.” The people “who are designing, testing and fine-tuning this technology are the public’s first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.”

 

Tech Interactive Event Helps Students, Teachers, Administrators Explore AI Literacy

The San Jose Mercury News Share to FacebookShare to Twitter (4/20, Pizarro) reported the Tech Interactive in San Jose “hosted an event Friday morning for National AI Literacy Day.” There were “more than 1,100 students crawling around the downtown learning center, but there were also teachers and administrators hearing from experts on panel discussions about why its important for communities to understand artificial intelligence and how it’s already impacting schools and communities.” In her welcoming remarks, Tech Interactive CEO Katrina Stevens “shared some key points parents should keep in mind when it comes to AI and their kids,” such as explaining “what AI is and what it is not before using it with your kids.” San Jose was “just one of three cities hosting official National AI Literacy Day events, with the others being in New York City and Washington, D.C.”

 

Amazon Invests $5.3B As Saudi Aims To Be AI Power

The New York Times Share to FacebookShare to Twitter (4/25, Satariano, Mozur) reports Amazon’s cloud computing division, led by CEO Adam Selipsky, announced a significant $5.3 billion investment in Saudi Arabia aimed at establishing data centers and advancing artificial intelligence technology. This move came during the Leap conference in Riyadh, which attracted over 200,000 attendees, including tech giants like Google and TikTok. The conference underscored Saudi Arabia’s ambitious tech goals under its “Vision 2030” initiative, spearheaded by Crown Prince Mohammed bin Salman. The kingdom’s plans include a massive $100 billion investment in AI and technology, dwarfing similar nation-state investments globally. Additionally, Saudi Arabia is fostering a domestic tech industry by requiring international firms to establish local operations to access its funds.

 

Survey: Only 18% Of Teachers Report Using AI In Their Classrooms

K-12 Dive Share to FacebookShare to Twitter (4/24, Merod) reports, “Over a year has passed since generative artificial intelligence tool became publicly available, but the use of AI still remains fairly uncommon among K-12 teachers.” According to a survey by Rand Corp. and the Center on Reinventing Public Education, “just 18% of teachers reported that they used AI tools in their classrooms, and another 15% said they have tried to, as of fall 2023.” Among AI-using teachers, “a majority used virtual learning platforms (80%) like Google Classroom and adaptive learning systems (61%) such as Khan Academy at least once per week, the survey found. The third most common AI tool educators tapped was chatbots like ChatGPT or Google Gemini, with 53% of teachers reporting at least weekly usage.” According to the findings, “60% of school districts plan to train teachers about AI use by the end of the 2023-24 school year.”

 

Alphabet Reported 15% Jump In Q1 Revenue To $80.5B

The New York Times Share to FacebookShare to Twitter (4/25, Grant) reports Google parent company Alphabet on Thursday reported a 36% jump in profit for the first quarter of the year to $23.7 billion. Its quarterly sales came in at $80.5 billion, up 15% from a year prior. Alphabet announced that it would provide shareholders with its first ever dividend of 20 cents per share. The company also approved a $70 billion share repurchase program. Alphabet’s shares rose 13% in after-hours trading.

        CNN Share to FacebookShare to Twitter (4/25, Duffy) reports Alphabet CEO Sundar Pichai attributed the company’s success to its investment in AI, including its large language model and AI product suite known as Gemini. “We are well under way with our Gemini era and there’s great momentum across the company. Our leadership in AI research and infrastructure, and our global product footprint, position us well for the next wave of AI innovation,” Pichai said.

 

Fewer Students Using AI To Cheat Than Previously Believed, Data Show

Education Week Share to FacebookShare to Twitter (4/25, Prothero) reports newly released data from Turnitin, “a popular plagiarism-detection company,” found that “of the more than 200 million writing assignments reviewed by Turnitin’s AI detection tool over the past year, some AI use was detected in about 1 out of 10 assignments, while only 3 out of every 100 assignments were generated mostly by AI.” Turnitin’s latest data release “shows that in 11 percent of assignments run through its AI detection tool that at least 20 percent of each assignment had evidence of AI use in the writing. In 3 percent of the assignments, each assignment was made up of 80 percent or more of AI writing.” Despite scant evidence “that AI is fueling a wave in cheating, half of teachers reported in the Center for Democracy and Technology survey that generative AI has made them more distrustful that their students are turning in original work.”

dtau...@gmail.com

unread,
May 4, 2024, 7:40:04 PM5/4/24
to ai-b...@googlegroups.com

Apple Targets Google Staff to Build AI Team

An analysis of hundreds of LinkedIn profiles, public job postings, and research papers by the Financial Times found that Apple has built up a team of AI experts and established an AI lab in Zurich, Switzerland, in recent years. Since 2018, when it poached Google's John Giannandrea as its top AI executive, Apple has hired at least 36 specialists from the rival tech firm. Apple's secretive Zurich-based "Vision Lab" stems from its acquisition of local AI startups FaceShift and Fashwell, known for VR and image recognition, respectively.

[ » Read full article *May Require Paid Registration ]

Financial Times; Michael Acton (April 30, 2024)

 

GitHub's Take on AI-Powered Software Engineering

GitHub has unveiled plans for the Copilot Workspace, where AI agents powered by its Copilot coding assistant would help developers brainstorm, plan, build, test, and run code in natural language. GitHub's Jonathan Carter said Workspace would build on new capabilities, such as Copilot Chat, where developers can ask coding questions in natural language. Carter said Copilot Workspace “gives developers a plan to start iterating from."
[ » Read full article ]

Tech Crunch; Kyle Wiggers (April 29, 2024)

 

Drones Dance Using ChatGPT

Researchers at Germany's Technical University of Munich (TUM) successfully choreographed an in-air drone performance using ChatGPT, directing six drones to fly in circles without colliding. This involved selecting a music track and entering text that ChatGPT translated into choreography. TUM's Angela Schoellig said because ChatGPT "initially knows nothing about the properties of drones and physical limits for their flight paths," a safety algorithm was developed to map out flight paths to avoid collisions.
[ » Read full article ]

Interesting Engineering; Maria Mocerino (April 30, 2024)

 

Company Revives Alan Turing as a Chatbot

Singapore's Genius Group, a company focused on AI-powered business education, revived Alan Turing as a chatbot and appointed it the company’s "chief AI officer." After a month-long search for a chief AI officer, "It soon became clear that Alan Turing AI was by far the best candidate for the role," said Genius Group CEO Roger James Hamilton. The company uploaded a video introducing the Turing chatbot to the public using AI-generated imagery.
[ » Read full article ]

PC Magazine; Michael Kan (April 30, 2024)

 

AI Faces Its 'Oppenheimer Moment'

During an April 29 meeting of civilian, military, and technology officials from more than 100 countries in Vienna, Austria, speakers said governments are running out of time to rein in autonomous weapons systems. “This is the Oppenheimer Moment of our generation,” said Austrian Foreign Minister Alexander Schallenberg. Costa Rican Foreign Minister Arnoldo André Tinoco said new rules will be required once non-state actors and terrorists have access to the technology.

[ » Read full article *May Require Paid Registration ]

Bloomberg; Jonathan Tirone (April 29, 2024)

 

In Race to Build AI, Tech Plans Plumbing Upgrade

Big Tech is investing heavily in the infrastructure and hardware needed to support its AI ambitions. Microsoft, Meta, and Alphabet disclosed last week that they had spent more than $32 billion combined on datacenters and other capital expenses in just the first three months of this year. The companies all said they had no plans to reduce their AI-related spending. Meta said it needed to spend billions more on the chips and datacenters for AI than it had previously indicated.

[ » Read full article *May Require Paid Registration ]

The New York Times; Karen Weise (April 27, 2024)

 

From Baby Talk to Baby AI

Researchers at New York University (NYU) developed an AI model trained on a child's experience, particularly their language acquisition, based on videos captured by toddlers wearing GoPro-type cameras on their heads. The goal is to gain better understanding of how human intelligence develops, in order to develop smarter AI models. Said NYU's Brenden Lake, "If the field can get to the place where models are trained on nothing but the data that a single child saw, and they do well on a huge set of tasks, that would be a huge scientific achievement."

[ » Read full article *May Require Paid Registration ]

The New York Times; Oliver Whang (April 30, 2024)

 

Chip Safeguards Data, Enables Efficient Computing on Smartphone

Researchers from the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab developed a chip that can efficiently accelerate machine learning workloads on edge devices like smartphones while protecting sensitive user data from side-channel and bus-probing attacks. To accomplish this, the team split data in an in-memory compute chip into random pieces, used a lightweight cipher that encrypts the model stored in off-chip memory, and generated the key that decrypts the cipher directly on the chip.
[ » Read full article ]

MIT News; Adam Zewe (April 23, 2024)

 

Saudi Arabia Spends Big to Become an AI Superpower

Saudi Arabia's efforts to become an AI superpower have raised concerns among U.S. officials, especially if the kingdom provides computing power to Chinese researchers and companies. Saudi Arabia created a $100-billion fund this year to invest in AI and other technology and is in talks with investors to put an additional $40 billion into AI companies. In March, the government said it would invest $1 billion in a Silicon Valley-inspired start-up accelerator to lure AI entrepreneurs to the kingdom.

[ » Read full article *May Require Paid Registration ]

The New York Times; Adam Satariano; Paul Mozur (April 25, 2024)

 

DHS Establishes AI Safety Board For Evaluating Threats To Critical Infrastructure

Reuters Share to FacebookShare to Twitter reports the Department of Homeland Security on Friday “announced a blue-ribbon board that includes the CEOs of OpenAI, Microsoft, Google parent Alphabet and Nvidia that will advise the government on the role of artificial intelligence on critical infrastructure.” Homeland Security Secretary Alejandro Mayorkas “told reporters the board would help ensure the safe deployment of AI technology and how to address threats posed by this technology to vital services like energy, utilities, transportation, defense, information technology, food and agriculture, and financial services.” Mayorkas added, “It is not a board that will be focused on theory, but rather practical solutions for the implementation of AI in our nation’s daily life.”

        According to the AP Share to FacebookShare to Twitter (4/26, Staff), Mayorkas said that “AI holds potential for improving government services but ‘we recognize the tremendously debilitating impact its errant use can have.’” The AP adds regarding the 22-member board that “corporate executives dominate, but it also includes civil rights advocates, AI scientist Fei-Fei Li who leads Stanford University’s AI institute as well as Maryland Gov. Wes Moore and Seattle Mayor Bruce Harrell, two public officials who are ‘already ahead of the curve’ in thinking about harnessing AI’s capabilities and mitigating risks.”

 

Study: AI Models Can Automate Security Exploits

Axios Share to FacebookShare to Twitter (4/26) reports that academic research from the University of Illinois Urbana-Champaign has revealed that GPT-4 can autonomously write scripts to exploit security vulnerabilities with an 87 percent success rate. This research tested 10 different models against 15 serious vulnerabilities listed by Mitre’s Common Vulnerabilities and Exposures. As of their study, GPT-4 was the most proficient, managing nearly 50 steps in a single exploit attempt. These findings, published this month, validate concerns about AI’s potential role in cybersecurity threats. Additionally, ongoing advancements in AI technologies suggest other models may soon demonstrate similar capabilities.

 

Survey: Most Teachers Do Not Tell Students, Parents About Their AI Use

Education Week Share to FacebookShare to Twitter (4/26, Langreo) reported, “More states and school districts are rolling out guidelines and policies for how educators and students can use generative artificial intelligence in their work.” As more teachers “try out generative AI tools for their work, some are asking the question that the guidelines don’t always answer: Should we tell students when we’re using AI?” An overwhelming majority – 80 percent – of educators “said it’s not necessary to tell students or parents when teachers use AI to plan a lesson, according to a nationally representative EdWeek Research Center survey of 1,183 teachers, principals, and district leaders conducted in March and April. Most educators also said the same when it came to creating assignments, building assessments, and tracking student behavior in the classroom.” Experts say that teachers “should be transparent with students about how they use the technology, because it helps model appropriate use for students.”

 

Chicago Art Institute Utilizes AI In Admissions

Inside Higher Ed Share to FacebookShare to Twitter (4/29, Coffey) reports the School of the Art Institute of Chicago (SAIC), in collaboration with technology consulting firm SPR, is leveraging artificial intelligence (AI) to improve their enrollment process. The tool analyzes more than 100 factors, such as applicants’ school background and program interest, to gauge the likelihood of an offer being accepted and the student attending the school. While the technology does not determine acceptances, it provides foresight into likely enrollment figures. SAIC is the first art and design-focused institution to adopt such a tool, a trend that could expand following its recent implementation.

 

Penn Professor Seen As Influential In AI Policy, Education

The Wall Street Journal Share to FacebookShare to Twitter (4/27, Subscription Publication) profiles Ethan Mollick, a professor at the University of Pennsylvania who has become a prominent AI expert, frequently consulting for corporations and policymakers without charge. Mollick, who integrates AI education into his Wharton M.B.A. classes, interacts with leaders from companies such as Google, Meta Platforms, and JP Morgan, as well as governmental bodies like the White House. His advocacy emphasizes the dual nature of AI technologies, urging proactive engagement to mitigate potential negatives while harnessing their benefits. Mollick’s insights are shaping the conversation on how AI is utilized in various sectors, emphasizing the importance of early and informed engagement with emerging technologies.

 

Former Rivals Align To Participate In Big Tech’s AI Race

The New York Times Share to FacebookShare to Twitter (4/29, Metz, Grant) reports Mustafa Suleyman and Demis Hassabis “are two of the most powerful executives in the tech industry’s race to build artificial intelligence.” Dr. Hassabis is the chief executive of Google DeepMind, “the tech giant’s central research lab for artificial intelligence,” while Suleyman was recently named chief executive of Microsoft AI, “charged with overseeing the company’s push into A.I. consumer products. Their path from London to the executive suites of Big Tech is one of the most unusual – and personal – stories in an industry full of colorful personalities and cutting rivalries.” Their paths diverged once the A.I. race kicked off, after a “clash at DeepMind, which Google acquired for $650 million in 2014.”

 

AI Investment Surge Raises Concerns Of A Bubble

The Wall Street Journal Share to FacebookShare to Twitter (4/29, Jin, Subscription Publication) reports that despite lacking a business model or product, AI startup Imbue has attracted significant investment, reaching a valuation over $1 billion. Investors, including former Google CEO Eric Schmidt, continue to fund AI startups heavily, with $21.8 billion invested last year alone. However, the industry faces challenges with high costs and slow adoption, leading to concerns about a potential bubble in AI startup valuations.

 

Newspapers Sue AI Chatbots Over Use Of Articles To Train Services

The New York Times Share to FacebookShare to Twitter (4/30, Robertson) reports, “Eight daily newspapers owned by Alden Global Capital sued OpenAI and Microsoft on Tuesday, accusing the tech companies of illegally using news articles to power their A.I. chatbots. ... In the complaint, the publications accuse OpenAI and Microsoft of using millions of copyrighted articles without permission to train and feed their generative A.I. products, including ChatGPT and Microsoft Copilot.” The Hill Share to FacebookShare to Twitter (4/30) reports, “Beyond initially scraping their articles to train the AI models, the newspapers also contend Microsoft and OpenAI’s generative AI systems offer their users content that is identical to, or a slightly masked version of, the newspapers’ content.’”

        Reuters Legal Share to FacebookShare to Twitter (4/30) reports, “A lawyer for the MediaNews publications, Steven Lieberman, told Reuters that OpenAI owed its runaway success to the works of others. The defendants know they have to pay for computers, chips, and employee salaries, but ‘think somehow they can get away with taking content’ without permission or payment, he said.”

 

New Guidelines Seek To Help University Librarians Navigate AI

Inside Higher Ed Share to FacebookShare to Twitter (5/1, Coffey) reports, “The Association of Research Libraries announced a set of seven guiding principles for university librarians to follow in light of rising generative AI use.” The AI guidelines seek to “help librarians deal with an onslaught of inquiries related to generative artificial intelligence.” The seven guiding principles “focus on the development and deployment of generative AI, which usually refers to large language models such as OpenAI’s ChatGPT.” The association said the principles aim to “promote ethical and transparent practices, and build trust among stakeholders, within research libraries as well as across the research environment.” At the start of 2024, “more than three quarters of librarians said in a poll that there is an urgent need to address AI’s ethical and privacy concerns. Major worries included violations of privacy and misuse of data, such as generating false citations.”

 

Big Tech Behind AI Lobbying Surge

TIME Share to FacebookShare to Twitter (4/30, Henshall) reports a significant increase in organizations lobbying on AI in the US, with numbers rising from 158 in 2022 to 451 in 2023. This surge is largely driven by technology companies, which dominate the lobbying efforts. Despite public support for AI regulation, these companies often advocate for more lenient, voluntary regulations in private meetings with officials. In 2023, Amazon, along with other tech giants like Meta, Alphabet, and Microsoft, each spent over $10 million on lobbying – although this figure is not specific to AI. The rise in AI lobbying coincides with heightened legislative focus, as seen with Senate Majority Leader Chuck Schumer’s “Insight Forums” and various new AI safety and policy groups entering the lobbying scene for the first time.

 

Yale Student, Professor Create Chatbot That Answers Questions About AI Ethics

Inside Higher Ed Share to FacebookShare to Twitter (5/2, Coffey) reports that a Yale University freshman and his professor “worked together to create an artificial intelligence chatbot based on the professor’s research into ethical AI.” When the student “wanted to make an artificial intelligence (AI) chatbot based on his professor’s research...his professor advised him to temper expectations.” However, “two weeks after the launch of LuFlot Bot, there have been 11,000 queries from 85-plus countries.” LuFlot Bot focuses specifically on “the ethics, philosophies and uses of AI, answering questions such as ‘Is AI environmentally harmful?’ and ‘What are the regulations on AI?’” The two scholars “join the ranks of institutions creating their own large language models (LLMs),” as concerns “swirl about mainstay generative AI tools like ChatGPT.”

 

AI Tool Helps Researchers Locate Drinking Water In Western US

Newsweek Share to FacebookShare to Twitter (5/2) reports researchers at Washington State University have published a study detailing an improved model that uses artificial intelligence to estimate water supplies across expansive distances in the Western U.S. The computer model is a critical tool for the region, which relies heavily on snowmelt for its water supply and has been experiencing a prolonged megadrought since 2000. To determine the effectiveness of the model, the AI’s measurements were compared against 300 existing snow measuring stations and were found to greatly outperform current methods.

 

Meta Launches New AI Chatbot For Social Media Users

The Los Angeles Times Share to FacebookShare to Twitter (5/2, Deng) reports Meta, the parent company of Facebook, has planted a new artificial-intelligence-powered chatbot “on its Whatsapp and Instagram services.” Internet users can now “open one of these free social media platforms and draw on Meta AI’s services as a dictionary, guidebook, counselor or illustrator, among many other tasks it can perform – although not always reliably or infalliably.” In an announcement, Mark Zuckerberg, the chief executive officer at Meta said, “Our goal is to build the world’s leading AI and make it available to everyone. We believe that meta AI is now the most intelligent AI assistant that you can freely use.” AI experts said social media users “can expect to see more of this technology influencing their experience – for better or possibly worse.”

 

CIA Preparing For “Infinite Race” Against China For Technological Advantage

The Washington Times Share to FacebookShare to Twitter (5/2) reports the CIA is preparing for an “infinite race” with China “for artificial intelligence and top technology, making getting the best tools a leading priority for America’s spies.” In remarks to the “Hill & Valley Forum’s gathering of top technology and government officials in Washington this week, [Nand Mulchandani, the agency’s chief technology officer] said the CIA is ‘all in’ on AI for offense, defense and more. He said the CIA is building its own large language models, which are sophisticated algorithms that make generative AI tools work.”

 

Survey: How Teachers Are Using AI In Their Classrooms

Education Week Share to FacebookShare to Twitter (5/2, Solis) reports “one-third of K-12 teachers say they have used artificial intelligence-driven tools in their classrooms, according to an EdWeek Research Center survey, which included 498 teachers and was conducted in November and December.” When asked “how they were using AI to do their jobs this school year, 52 percent of educators said they don’t use it at all, according to an analysis of responses to an open-ended question from a separate EdWeek Research Center survey of 595 district leaders, school leaders, and teachers conducted in December and January.” There are “downsides to the new technology,” and districts “don’t often have the expertise they need to train their staff on new and emerging technologies.” Among those who do use AI, the survey showed that “teachers mostly use ChatGPT and other generative AI tools to create lesson plans, build rubrics, compose emails to parents, and write letters of recommendation for students.” Using AI to grade student work “is less popular.”

Reply all
Reply to author
Forward
0 new messages