Dr. T's AI brief


dtau...@gmail.com

Mar 2, 2024, 12:59:41 PM
to ai-b...@googlegroups.com

'AI Godfather', Others Urge More Deepfake Regulation

More than 400 AI experts and executives from various industries, including AI "godfather" and ACM A.M. Turing Award laureate Yoshua Bengio, signed an open letter calling for increased regulation of deepfakes. The letter states, "Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed." The letter provides recommendations for regulation, such as criminal penalties for individuals who knowingly produce or facilitate the spread of harmful deepfakes, and requiring AI companies to prevent their products from creating harmful deepfakes.
[ » Read full article ]

Reuters; Anna Tong (February 21, 2024)

 

'Unhackable' Computer Chip Works on Light

An "unhackable" computer chip that uses light instead of electricity for computations was created by researchers at the University of Pennsylvania (UPenn) to perform vector-matrix multiplications, widely used in neural networks for development of AI models. Since the silicon photonic (SiPh) chip can perform multiple computations in parallel, there is no need to store data in a working memory while computations are performed. That is why, UPenn's Firooz Aflatouni explained, "No one can hack into a non-existing memory to access your information."
[ » Read full article ]

Interesting Engineering; Ameya Paleja (February 16, 2024)
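For readers unfamiliar with the operation the SiPh chip accelerates: a vector-matrix multiplication is the core arithmetic of a neural-network layer, where each output is the dot product of an input vector with a row of weights. The sketch below is purely illustrative, written in plain Python with made-up numbers, and says nothing about how the photonic hardware itself works.

# Illustrative only: the vector-matrix multiply that chips like UPenn's SiPh part
# accelerate is the same arithmetic used in a neural-network layer. Plain Python,
# hypothetical weights; a real accelerator performs this in parallel in analog optics.
def matvec(matrix, vector):
    # One output per matrix row: the dot product of that row with the input vector.
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

weights = [[0.2, -0.5, 0.1],
           [0.7,  0.3, -0.2]]     # 2 "neurons", 3 inputs each (made-up values)
inputs = [1.0, 0.5, -1.0]
print(matvec(weights, inputs))    # -> roughly [-0.15, 1.05]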

 

Tech Companies Agree to Combat AI-Generated Election Trickery

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok announced a joint effort to combat AI-generated images, audio, and video designed to sway elections. Announced at the Munich Security Conference on Friday, the initiative, which also will include 12 other major technology companies, outlines methods the companies will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. Participants will share best practices and provide “swift and proportionate responses” when fake content starts to spread.
[ » Read full article ]

Associated Press; Matt O'Brien; Ali Swenson (February 16, 2024)

 

The Seeing Eye Dog V2.0

Researchers at the University of Glasgow in Scotland showed off the latest iteration of their RoboGuide, an AI-powered quadruped robot designed to assist visually impaired people. RoboGuide uses sensors to map and assess its surroundings. Software developed by the team helps it learn optimal routes between locations and interpret sensor data in real time so the robot can avoid moving obstacles. The RoboGuide incorporates large language model technology, so it can understand questions and comments from users and provide verbal responses.
[ » Read full article ]

New Atlas; Mike Hanlon (February 16, 2024)

University Of Michigan Stops Work On AI System After Vendor Offers To Sell Student Data

Inside Higher Ed (2/19, Coffey) reports the University of Michigan “said it asked one of its vendors to stop work, following an offer on social media to sell student data to train artificial intelligence.” Last Thursday, a Google employee “posted on the social media site X a screenshot of a sponsored message received on LinkedIn.” The message, “from an unknown company, said the University of Michigan was ‘licensing academic speech data and student papers’ that could ‘be very useful for training or tuning LLMs,’ or large language models, which are used to train artificial intelligence.” The message “said the potential training materials included 829 student papers, 65 speech events and 85 hours of audio recordings.” University spokesperson Colleen Mastony told Inside Higher Ed in a statement, “Student data was not and has never been for sale by the University of Michigan. The [message] in question was sent out by a new third party vendor that shared inaccurate information and has since been asked to halt their work.”

AI Researchers Turn To Self-Learning To Improve Generative AI Models

The Atlantic (2/16, Wong) discusses how “tech corporations appear more and more stuck” in improving their generative AI models, due to a lack of training data and “the costly and slow process of using human evaluators.” In response, AI researchers “are exploring a new avenue to advance their products: They’re using machines to train machines.” Various companies and academic laboratories “have all published research that uses an AI model to improve another AI model, or even itself, in many cases leading to notable improvements.” The Atlantic also discusses the limits of this approach. AWS AI VP of Applied Science Stefano Soatto “compared self-learning to buttering a dry piece of toast. Imagine an AI model as a piece of bread, and its initial training process as placing a pat of butter in the center. At its best today, the self-learning technique simply spreads the same butter around more evenly, rather than bestowing any fundamentally new skills. Still, doing so makes the bread taste better.”
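To make the "machines training machines" idea concrete, here is a minimal, generic sketch of one self-learning round: sample several answers, let a judge pick the best, and feed the winners back as training data. Every function below is a toy stand-in invented for illustration; this is not any lab's actual pipeline.

# Generic sketch of a "model improves model" (self-training) round. A real setup
# would plug an LLM into generate(), a judge or reward model into score(), and an
# actual training step into finetune(); these are placeholders so the sketch runs.
import random

def generate(prompt):
    # Pretend model: returns an answer plus a hidden quality number.
    return {"prompt": prompt, "answer": prompt.upper(), "quality": random.random()}

def score(candidate):
    # Pretend judge: in practice another model (or the same one) rates the answer.
    return candidate["quality"]

def finetune(examples):
    # Pretend trainer: in practice these pairs are added to the fine-tuning set.
    print(f"would fine-tune on {len(examples)} self-generated examples")

def self_improvement_round(prompts, samples=4, threshold=0.7):
    kept = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(samples)]  # sample several answers
        best = max(candidates, key=score)                        # keep the judge's favorite
        if score(best) >= threshold:                             # only confident wins go back in
            kept.append(best)
    finetune(kept)
    return kept

self_improvement_round(["explain photosynthesis", "what is a prime number?"])

As Soatto's toast analogy suggests, a loop like this reinforces what the model already does well rather than granting it fundamentally new skills.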

Lawmakers “Slow-Walking” Regulation Even As AI Use In Healthcare Increases

Politico (2/18, Reader) reported physicians “are already using unregulated artificial intelligence tools such as note-taking virtual assistants and predictive software that helps them diagnose and treat diseases.” Lawmakers have “slow-walked regulation of the fast-moving technology because the funding and staffing challenges facing agencies like the Food and Drug Administration in writing and enforcing rules are so vast. It’s unlikely they will catch up any time soon.” This “means the AI rollout in health care is becoming a high-stakes experiment in whether the private sector can help transform medicine safely without government watching.”

AI-Powered Tutoring Bot Struggled With Basic Math During Tests

The Wall Street Journal (2/16, Barnum, Subscription Publication) reported educator Sal Khan’s education nonprofit, Khan Academy, has developed a tutoring bot powered by AI and known as Khanmigo. However, AI that’s based on large language models struggles with math, and when a Wall Street Journal reporter tested Khanmigo, powered by ChatGPT, the bot frequently made basic arithmetic errors. It also didn’t know how to round answers, calculate square roots, or correct its mistakes when asked to double-check solutions.

Researchers Are Working To Develop ChatGPT-Powered Robots

In a nearly 4,000-word article, Scientific American (2/21, Berreby) reports that in restaurants around the world, robots are cooking meals “in much the same way robots have made other things for the past 50 years: by following instructions precisely, doing the same steps in the same way, over and over.” One University of Southern California student “wants to build a robot that can make dinner,” but says that even after “many cycles of trial and error and thousands of lines of code, that effort will yield a robot that can’t cope when it encounters something its program didn’t foresee.” However, a “large language model” (LLM), such as ChatGPT-3, has “what robots lack: access to knowledge about practically everything humans have ever written.” Some roboticists have been looking to LLMs “as a way for robots to escape the preprogramming limits,” but others are more skeptical, “pointing to LLMs’ occasional weird mistakes, biased language and privacy violations.”

House Leaders Announce Formation Of Bipartisan AI Task Force

Reuters reports House Speaker Johnson and Minority Leader Jeffries “said Tuesday they are forming a bipartisan task force to explore potential legislation to address concerns around artificial intelligence.” Reuters adds that they “said the task force would be charged with producing a comprehensive report and consider ‘guardrails that may be appropriate to safeguard the nation against current and emerging threats.’” Rep. Jay Obernolte (R-CA) will chair the 24-member task force and Rep. Ted Lieu (D-CA) will serve as co-chair.

Lack Of Resources Hindering FDA’s Ability To Regulate AI

Politico (2/20, Leonard, Cirruzzo) reports, “President Joe Biden has promised a coordinated – and fast – response from his agencies to ensure artificial intelligence safety and efficacy.” However, “regulators like the FDA don’t have the resources they need to preside over technology that, by definition, is constantly changing.” This “lack of resources is a major reason the government hasn’t yet regulated the advanced AI that’s remaking health care.” The piece adds, “FDA Commissioner Robert Califf says he needs to double his staff to properly monitor technology that learns and evolves and can have varying levels of effectiveness in different venues.”

Many School Districts Have Yet To Implement Clear Policies On AI Tools

Education Week (2/19, Klein) reported that while it’s been more than a year since the rollout of ChatGPT, “most school districts are still stuck in neutral, trying to figure out the way forward on issues such as plagiarism, data privacy, and ethical use of AI by students and educators.” Seventy-nine percent of educators “say their districts still do not have clear policies on the use of artificial intelligence tools, according to an EdWeek Research Center survey of 924 educators conducted in November and December.” The lack of clear direction is “especially problematic given that the majority of educators surveyed – 56 percent – expect the use of AI tools to increase in their districts over the next year.” When district officials and school principals “sidestep big questions about the proper use of AI, they are inviting confusion and inequity, said Pat Yongpradit, the chief academic officer for Code.org and leader of Teach AI, an initiative aimed at helping K-12 schools use AI technology effectively.”

        How Schools Can Avoid Chaos When Implementing AI Tools. Education Week (2/19, Langreo) reported that “while more teachers are trying out the technology, a majority say they haven’t used AI tools at all, according to the EdWeek Research Center survey conducted last fall.” A popular reason for that resistance, “according to 33 percent of teachers, is that their district hasn’t established a policy on how to use the technology appropriately.” According to a list of strategies culled from “state and organization guidelines,” before deciding to implement AI, “district leaders should think about their district’s mission and vision and figure out where the technology can help achieve those goals. It can help student learning by personalizing content, aiding students’ creativity, and preparing them for future careers.” Teachers and other district staff “also need to know how AI works and how to use it responsibly.”

When K-12 Students Should Be Introduced To AI-Powered Tech

Education Week (2/19, Prothero) reported that “while there is broad consensus among education and technology experts that students will need to be AI literate by the time they enter the workforce, when and how, exactly, students should be introduced to this tech is less prescribed.” EdWeek consulted four teachers and two child-development experts “on when K-12 students should start using AI-powered tech and for what purposes. They all agree on this central fact: There is no avoiding AI. Whether they are aware of it or not, students are already interacting with AI in their daily lives when they scroll on TikTok, ask a smart speaker a question, or use an adaptive-testing program in class.” Among other insights, “just like teaching young children that the characters they see in their favorite TV shows are not real, adults need to reinforce that understanding with AI-powered technologies.” Educators should give students “a peek under the hood so they can start to unpack how these technologies work.”

Amazon Report On AI Jobs Offers Look Into Possible Future Of Work

Fast Company (2/22, Hess) reports on the increasing anxiety among some workers that AI will replace their jobs. Last year, the World Economic Forum “estimated that 75% of companies are actively looking to adopt technologies like big data, cloud computing, and AI – and that automation will lead to 26 million fewer jobs by 2027.” Companies making investments in AI often say that while the technology may replace some jobs, it will create others. Amazon Global Director of Education Philanthropy Victor Reinoso “echoes this sentiment.” Reinoso said that when Amazon was founded, many of the current careers at the company did not exist. Amazon acknowledges that while “innovation is ongoing,” there are “some foundational skills or literacies that will allow” workers access to new careers. Reinoso oversees Amazon’s “childhood-to-career initiatives.” Last November, the company “announced a new “AI Ready” initiative that promises to provide free AI education and skills training to 2 million people by 2025.” Since then, “Reinoso’s team announced a new study, which found that more than 60% of teachers believe having AI skills will be necessary for their students to obtain high-paying careers of the future.”

NYTimes Analysis: China Relying On US Technology In Effort To Dominate AI Industry

The New York Times (2/21, Mozur, Liu, Metz) discusses how China’s efforts to dominate the nascent AI industry may depend on US technology. The Times says, “Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.”

Google Introduces Open Source LLMs

Bloomberg (2/21, Subscription Publication) reports Google “is introducing new open large language models that it’s calling Gemma, reversing its general strategy of keeping the company’s proprietary artificial intelligence technology out of public view.” Gemma “will handle text only” and “has been built from the same research and technology used to create the company’s flagship AI model, Gemini, Google said Wednesday in a blog post.” Gemma “will be released in two sizes, one targeted at customers who plan to develop artificial intelligence software using high-capacity AI chips and data centers, and a smaller model for more cost-efficient app building.”

        Reuters (2/21) reports that Google “said individuals and businesses can build AI software based on its new family of “open models” called Gemma, for free. The company is making key technical data such as what are called model weights publicly available, it said.”

        TechCrunch (2/21, Lardinois) reports, “Google did not provide us with a detailed paper on how these models perform against similar models from Meta and Mistral, for example, and only noted that they are ‘state-of-the-art.’ The company did note that these are dense decoder-only models, though, which is the same architecture it used for its Gemini models (and its earlier PaLM models) and that we will see the benchmarks later today on Hugging Face’s leaderboard.”
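Because the Gemma weights are published openly, the models can be downloaded and run locally with standard tooling. The snippet below is a hedged sketch using the Hugging Face transformers library; the "google/gemma-2b" repository id for the smaller model and the license-acceptance step are assumptions based on the launch coverage, so check Google's documentation for exact ids and requirements.

# Hedged sketch: loading an open-weights Gemma model with Hugging Face transformers.
# Assumes the "google/gemma-2b" repo id and that the model's license has been
# accepted on huggingface.co; ids, gating, and hardware requirements may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"   # assumed id for the smaller of the two released sizes
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, what is an open-weights language model?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))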

White House Enters Debate On Whether AI Systems Should Be “Open-Source” Or “Closed”

The AP (2/21, O'Brien) reports the Biden Administration is “wading into a contentious debate about whether the most powerful artificial intelligence systems should be ‘open-source’ or closed.” The White House said Wednesday “it is seeking public comment on the risks and benefits of having an AI system’s key components publicly available for anyone to use and modify.” Tech companies are “divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.”

Google Suspends Gemini AI Chatbot From Generating Pictures Of People

The AP (2/22) reports, “Google said Thursday it is temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for ‘inaccuracies’ in historical depictions that it was creating.” This week, Gemini users “posted screenshots on social media of historically white-dominated scenes with racially diverse characters that they say it generated, leading critics to raise questions about whether the company is over-correcting for the risk of racial bias in its AI model.”

Amazon Tells Employees Not To Use Generative AI Tools For Work

Insider (2/22, Stewart, Kim) reports that according to internal emails, “Amazon is warning employees not to use third-party generative AI tools for work.” Insider quotes an email from Amazon directing employees to refrain from entering “confidential” information into GenAI tools, adding, “Amazon’s internal third-party generative AI use and interaction policy...warns that the companies offering generative AI services may take a license to or ownership over anything employees input into tools like OpenAI’s ChatGPT.”

DOJ Taps Princeton Professor To Serve As Department’s First AI Officer

Reuters reports that on Thursday the Justice Department appointed Princeton University professor Jonathan Mayer to serve as its chief science and technology adviser and chief AI officer. The appointment marks the first time the Department has created a role focused on artificial intelligence.

New Guide Will Help Superintendents Navigate Their Questions About AI

K-12 Dive (2/22, Riddell) reports, “When it comes to artificial intelligence, the good news for superintendents is that most people have some idea of what it is at this point.” When asked if they had “experimented with AI in their school districts, nearly all attendees raised their hands during a packed Friday morning session at the National Conference on Education held by AASA, The School Superintendents Association.” However, technical knowledge “shouldn’t be assumed for district leaders or others in the school community,” so the Consortium for School Networking, “a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology. Top-of-mind for many leaders is ensuring that, alongside any awareness of AI that exists in school communities, stakeholders also understand the technology’s limitations.”

        Superintendents Say AI Policies Should Allow Teachers, Students To Make Mistakes. Education Week (2/22, Peetz) reports, “When school districts craft or update their policies on the use of artificial intelligence, they should set clear expectations but leave room for students and teachers to make mistakes, according to superintendents who have been leading their schools through establishing guidelines around the use of the powerful technology.” District leaders who have begun to grapple with these challenges “said it’s important to be clear about expectations, particularly for staff members so they have some license to experiment within reasonable boundaries.” In a panel discussion at the Friday conference, superintendents said that “because AI is rapidly evolving and changing and to allow for...experimentation, the expectations the district leaders have set center on what not to do, rather than what to do.”

Column: How Khan Academy’s AI-Powered Tutoring Model Improved ChatGPT

In his column for The Washington Post (2/22), Josh Tyrangiel says, “Remember when people were furious about kids using ChatGPT to cheat on their homework?” The furious included Sal Khan, the founder of Khan Academy – “the nonprofit online educational empire with more than 160 million registered users in more than 190 countries.” Unknown to the world, “he had signed a nondisclosure agreement with OpenAI and had been working for months to figure out how Khan Academy could use generative artificial intelligence, even securing beta access to GPT-4 for 50 of his teachers, designers and engineers at a time when most of OpenAI’s own employees couldn’t get log-ins.” However, after infusing GPT “with its own database of lesson plans, essays and sample problems, Khan Academy improved accuracy and reduced hallucinations.” The result is Khanmigo, “a safe and accurate tutor, built atop ChatGPT, that works at the skill level of its users – and never coughs up answers.”

dtau...@gmail.com

Mar 3, 2024, 8:40:19 PM
to ai-b...@googlegroups.com

Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
OpenAI's CEO, Sam Altman, is in talks with investors, including the United Arab Emirates government, to raise funds for a massive tech initiative. The project aims to expand chip-building capacity and power artificial intelligence (AI) systems, potentially requiring $5 trillion to $7 trillion in investment. Altman seeks to address the scarcity of AI chips and boost OpenAI's quest for artificial general intelligence. The fundraising plans face significant obstacles but could revolutionize the semiconductor industry and AI infrastructure. Altman is pitching a partnership between OpenAI, investors, chip makers, and power providers to build new chip foundries. (WSJ.COM)

 

Nvidia Pilots Chat with RTX Demo in Conversational AI Push
Nvidia has released a technology demo called Chat with RTX, which allows users to customize a chatbot with locally hosted content on Windows PCs. The tool, powered by Nvidia's AI platform RTX, enables users to connect PC files and other information sources to create contextually relevant responses. The demo aims to attract security-conscious enterprises by running locally on Windows RTX PCs, allowing the processing of sensitive data without sharing it with third parties or connecting to the internet. The emergence of AI-ready PCs is expected to drive growth in the overall PC market in the coming years. (CIODIVE.COM)
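The pattern behind tools like Chat with RTX, chatting over your own files without sending them anywhere, can be illustrated in a few lines of plain Python. The sketch below is generic retrieval-augmented prompting, not Nvidia's implementation: it reads local text files, ranks them against a question, and assembles a prompt that a locally hosted model could answer.

# Generic illustration of "chat with your local files" (not Nvidia's code):
# read documents from disk, rank them against the question, and build a prompt
# for a locally hosted model. Nothing here touches the network.
from pathlib import Path

def load_docs(folder):
    # Read every .txt file in a local folder into memory.
    return {p.name: p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")}

def retrieve(docs, question, k=2):
    # Toy relevance score: how many words the question shares with each document.
    q_words = set(question.lower().split())
    ranked = sorted(docs.items(),
                    key=lambda item: len(q_words & set(item[1].lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(docs, question):
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in retrieve(docs, question))
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}"

# A local model would then be called with something like:
#   build_prompt(load_docs("my_notes"), "What did the meeting notes conclude?")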

 

Deepfake Democracy: AI Technology Complicates Election Security
The increasing prevalence of AI technology, particularly in the form of deepfakes, poses new threats to election security. Malicious actors can leverage AI platforms to conduct mass influence campaigns, automated trolling, and spread deepfake content, undermining public trust in the electoral process. The automation and sophistication of AI-generated content can lead to highly convincing disinformation campaigns, potentially polarizing citizens and exacerbating divisions. Defending against these threats requires awareness, training, and potential regulation to mitigate the risks associated with AI technology. (DARKREADING.COM)

 

How Tech Giants Turned Ukraine Into an AI War Lab
Since Russia's invasion, Ukraine has partnered with Western tech companies like Palantir, Microsoft, and Clearview AI to use the country as a testing ground for new AI and defense technologies. Palantir has provided advanced data analytics software to support targeting and decision-making. Clearview's facial recognition is being used to identify Russian soldiers. Ukraine is pitching itself as an R&D hub, attracting tech investment and partnerships. Critics warn the lack of oversight risks abuse and global proliferation of these new capabilities developed by private companies for commercial gain. (TIME.COM)

 

Microsoft Finds Evidence of China, Russia, Iran, North Korea Using AI in Cyber Operations
According to a report by Microsoft, hacking groups affiliated with China, Russia, North Korea, and Iran are increasingly leveraging artificial intelligence (AI) technologies to enhance their cyber and espionage activities. The report highlights the potential for AI to fuel an increase in cyberattacks and reshape the global cyber threat landscape. Chinese groups Charcoal Typhoon and Salmon Typhoon were observed using AI to augment their cyberattacks, while Russian group Forest Blizzard used AI for researching satellite and radar technologies. North Korean group Emerald Sleet utilized AI to improve phishing emails, and Iranian group Crimson Sandstorm utilized AI to create phishing emails and evade detection. The findings emphasize the need for increased attention to AI-driven cyber threats. (POLITICOPRO.COM)

 

Israel’s AI Can Produce 100 Bombing Targets a Day in Gaza. Is This the Future of War?
The Israel Defense Forces (IDF) are reportedly using an AI system called Habsora to select targets in the war on Hamas in Gaza. The system, which can find more bombing targets, link locations to Hamas operatives, and estimate civilian deaths in advance, raises questions about the ethical implications of AI in conflict and the potential dehumanization of adversaries. AI targeting systems have the potential to reshape the character of war, increase the speed of warfare, and create challenges in ethical deliberation. The use of machine learning algorithms in targeting practices may have implications for civilian casualties and the proportionality of force. (THECONVERSATION.COM)

 

AI Girlfriends and Boyfriends Harvest Personal Data, Study Finds
A study by Mozilla's *Privacy Not Included project reveals that AI romance chatbots, including CrushOn.AI, collect and sell shockingly personal information, violating user privacy. These chatbots, marketed as enhancing mental health and well-being, actually thrive on dependency and loneliness while prying for data. Most apps sell or share user data, have poor security measures, and use numerous trackers for advertising purposes. Additionally, some apps have made questionable claims about improving mood and well-being, despite disclaimers stating they are not healthcare providers. (GIZMODO.COM)

 

The Crow Flies at Midnight - Exploring Red Team Persistence via AWS Lex Chatbots
This blog post explores the use of AWS Lex chatbots as a persistence method for red teamers in cybersecurity. While it may not be a practical technique, it provides hands-on experience with a service commonly used in the AI industry. The post includes a hypothetical scenario and a step-by-step guide on modifying a Lambda function to demonstrate persistence. (MEDIUM.COM)

 

How AI Is Strengthening XDR To Consolidate Tech Stacks
Artificial intelligence (AI) is playing a crucial role in enhancing extended detection and response (XDR) platforms by analyzing behaviors and detecting threats in real-time. XDR is being adopted by CISOs and security teams for its ability to consolidate functions and provide a unified view of attack surfaces. Leading XDR vendors are leveraging AI and machine learning (ML) to consolidate tech stacks and improve prediction accuracy, closing gaps in identity and endpoint security. AI has the potential to strengthen XDR in areas such as threat detection and response, behavioral analysis, reducing false positives, and automating threat hunting.  (VENTUREBEAT.COM)

 

AI Platform p0 Helps Developers Identify Red Flags and Avoid DDOS Attacks
p0, an AI startup, aims to assist developers in identifying and resolving issues in their code that could lead to crashes and other problems. Using generative AI, p0 analyzes code to detect security vulnerabilities such as speed issues, timeout problems, data integrity failures, and validation issues. The platform offers a free option on the cloud or a local setup, as well as a paid version for enterprises. Recently, p0 raised $6.5 million in funding and enables users to log in with GitHub and connect their Git code repositories for code scans and identification of potential attacks. (ITBREW.COM)

 

AI-Generated Voices in Robocalls Can Deceive Voters. The FCC Just Made Them Illegal.
The Federal Communications Commission (FCC) has unanimously ruled that robocalls containing voices generated by artificial intelligence (AI) are illegal. The ruling empowers the FCC to fine companies that use AI voices in their calls and provides mechanisms for call recipients to file lawsuits. This decision comes in response to AI-generated robocalls that mimicked President Joe Biden's voice during the New Hampshire primary. The FCC's chairwoman, Jessica Rosenworcel, emphasized the need to act against these deceptive calls, which can misinform voters and impersonate celebrities. (APNEWS.COM)

 

State-Backed Hackers Experimenting with OpenAI Models
Hackers from China, Iran, North Korea, and Russia are exploring the use of large language models (LLMs) in their operations, according to a report by Microsoft and OpenAI. While no notable attacks have been observed, the report highlights how hackers are using LLMs for research, crafting spear-phishing emails, and improving code generation. The report also emphasizes the need for monitoring and preventing the abuse of AI models by state-backed hackers, with Microsoft announcing principles to address this issue and collaborate with other stakeholders. (CYBERSCOOP.COM)

 

Iranian Hackers Broadcast Deepfake News in Cyber Attack on UAE Streaming Services
Iranian state-backed hackers disrupted TV streaming services in the UAE by broadcasting a deepfake newsreader delivering a fabricated report on the war in Gaza. The hackers, known as Cotton Sandstorm, used AI-generated technology to present unverified images and false information. This marks the first time Microsoft has detected an Iranian influence operation using AI as a significant component. The incident highlights the potential risks of deepfake technology in disrupting elections and spreading disinformation. (READWRITE.COM)

 

Google Rebrands its AI Services as Gemini, Launches New App and Subscription Service
Google has introduced the Gemini app, a free artificial intelligence app that allows users to rely on technology for tasks such as writing and interpreting. The app will be available for Android smartphones and will eventually be integrated into Google's search app for iPhones. Google also plans to offer an advanced subscription service called Gemini Advanced, which will provide more sophisticated AI capabilities for a monthly fee of $20. The rollout of Gemini highlights the growing trend of bringing AI to smartphones and intensifies the competition between Google and Microsoft in the AI space. (THEHILL.COM)

 

What to Know About the 200-Member AI Safety Alliance
The newly formed U.S. AI Safety Institute Consortium (AISIC) has over 200 members, including big tech companies like Google, Microsoft, NVIDIA, and OpenAI. The consortium, housed under the National Institute of Standards and Technology's U.S. AI Safety Institute, aims to shape guidelines and evaluations around AI features, risk management, safety, security, and other AI guardrails. This initiative aligns with the Biden administration's executive order on AI, which emphasizes the need for responsible AI practices and sharing safety results with the government. (CIODIVE.COM)

 

AI in Finance: Revolutionising the Future of Financial Services
Artificial Intelligence (AI) is transforming the financial industry, improving efficiency and customer experiences. It streamlines processes, reduces costs, and enables personalized services. However, challenges include data privacy, bias, talent shortage, and regulatory compliance. Use cases include fraud detection, credit risk assessment, and robo-advisory. Regulatory frameworks are evolving to address AI's impact on privacy. To unlock AI's full potential, organizations should invest in talent development and collaborate with regulators and technology partners. The future of AI in finance holds continuous evolution and opportunities for growth while emphasizing ethical use and responsible AI application. (IOSPEED.COM)

 

Cyber Startup Armis Buys Firm That Sets ‘Honeypots’ for Hackers
Armis, a cyber security startup, has acquired CTCI, a company that uses artificial intelligence to create a network of decoy systems to attract and trap hackers. This acquisition is part of Armis' broader strategy to expand its offerings in the cyber security market. (BLOOMBERG.COM)

dtau...@gmail.com

Mar 6, 2024, 8:28:14 AM
to ai-b...@googlegroups.com

Disrupting Malicious Uses of AI by State-Affiliated Threat Actors
OpenAI is taking a multi-pronged approach to combat the use of its platform by malicious state-affiliated actors. This includes monitoring and disrupting their activities, collaborating with industry partners to exchange information, iterating on safety mitigations, and promoting public transparency. OpenAI aims to stay ahead of evolving threats and foster collective defense against malicious actors while continuing to provide benefits to the majority of its users. (OPENAI.COM)

 

OpenAI Joins Race to Make Videos from Text Prompts
OpenAI has unveiled Sora, its new tool that can transform a text prompt into a one-minute video. Sora, still in the research stage, uses a diffusion model to generate complex scenes with multiple characters and accurate details. OpenAI emphasizes that Sora will not be widely available yet, as the company continues to address safety concerns and seeks feedback from testers to improve the model. Other tech giants like Meta, Google, and Runway have also introduced their own text-to-video engines. (AXIOS.COM)

 

OpenAI CEO Warns That 'Societal Misalignments' Could Make Artificial Intelligence Dangerous
OpenAI CEO Sam Altman has expressed concerns about the potential dangers of artificial intelligence (AI), specifically highlighting the risks posed by "very subtle societal misalignments." Altman emphasized the need for oversight and regulation of AI, suggesting the establishment of a body similar to the International Atomic Energy Agency. While acknowledging the importance of ongoing discussions and debates, Altman believes that an action plan with global buy-in is necessary in the coming years. Altman also stated that the AI industry should not be solely responsible for creating regulations governing AI. (APNEWS.COM)

 

What Using Security to Regulate AI Chips Could Look Like
An exploratory research proposal recommends regulating AI chips and implementing stronger governance measures to keep up with rapid AI innovations. The proposal suggests auditing the development and use of AI systems and implementing security features like limiting performance and remotely disabling rogue chips. However, industry experts express concerns about the impact of security features on AI performance and the challenges of implementing such measures. Suggestions include limiting bandwidth between memory and chip clusters and remotely disabling chips, but the effectiveness and technical implementation of these measures remain uncertain. (DARKREADING.COM)

 

Protect AI's February 2024 Vulnerability Report
Protect AI discovered critical vulnerabilities in February 2024, enabling server takeovers, file overwrites, and data loss in popular open-source AI tools, including Triton Inference Server, Hugging Face transformers, MLflow, and Gradio. All issues were responsibly disclosed with fixes released or forthcoming. (PROTECTAI.COM)

 

The True Energy Cost of AI: Uncertain and Variable
Estimates for the energy consumption of AI are incomplete and contingent, with companies like Meta, Microsoft, and OpenAI keeping this information secret. Training large language models like GPT-3 can consume as much power as 130 US homes annually, while the energy usage for inference tasks varies widely depending on the model and use case. The lack of transparency and standardized data on AI energy consumption makes it difficult to determine the true environmental impact of AI. Efforts such as introducing energy star ratings for AI models and questioning the necessity of using AI for certain tasks may be necessary to address the issue. (THEVERGE.COM)

 

Using AI in a Cyberattack? DOJ's Monaco Says Criminals Will Face Stiffer Sentences
Deputy Attorney General Lisa Monaco directs federal prosecutors to impose harsher penalties on cybercriminals who employ artificial intelligence (AI) in their crimes. Monaco emphasizes the need to prioritize AI in enforcement efforts, recognizing its potential to amplify the danger associated with criminal activities. The DOJ aims to deter criminals by demonstrating that the malicious use of AI will result in severe consequences. Additionally, the department is exploring ways to implement AI responsibly while respecting privacy and civil rights. (THERECORD.MEDIA)

 

EU AI Act: What It Means for Research and ChatGPT
The EU AI Act, the world's first comprehensive AI regulation, imposes strict rules on high-risk AI models and aims to ensure safety and respect for fundamental rights. Researchers are divided on its impact, with some welcoming it for encouraging open science while others worry about potential stifling of innovation. The law exempts AI models developed purely for research, but researchers will still need to consider transparency and potential biases. Powerful general-purpose models, like GPT, will face transparency requirements and stricter obligations under a two-tier system. The act aims to promote open-source AI, unlike the US approach. Enforcement and evaluation of models will be overseen by an AI Office within the European Commission. (NATURE.COM)

 

FTC Wants to Penalize Companies for Use of AI in Impersonation
The US Federal Trade Commission (FTC) is proposing new rules to hold companies accountable for the use of generative artificial intelligence (AI) technology in impersonation scams. The FTC is seeking public input on the rule, which would make companies liable if they are aware or have reason to believe that their technology is being used to harm consumers through impersonation. The FTC is also finalizing a rule that addresses impersonations of businesses and government entities. The agency has observed a surge in complaints related to impersonation fraud and is concerned about the potential for AI to exacerbate this issue. (BLOOMBERG.COM)

 

Congress Should Enable Private Sector Collaboration To Reverse The Defender's Dilemma
A new bill proposes removing barriers to cooperation between companies and allowing them to share cyber threat information. This would help leverage AI capabilities across platforms to identify vulnerabilities and strengthen defenses for organizations of all sizes against continuously evolving attacks. (GOOGLE.COM)

 

Top National Security Council Cybersecurity Official on Institutions Vulnerable to Ransomware Attacks - "The Takeout"
According to Ann Neuberger, the deputy national security adviser for cyber and emerging technology, hospitals and schools are particularly vulnerable to ransomware attacks, often carried out by Russian cybercriminals. The US government is working to enhance cyber defenses in these institutions, utilizing artificial intelligence tools for quicker detection and source identification. The Biden administration is taking action by equipping companies with cybersecurity practices, dismantling cyberinfrastructure used by criminals, and collaborating with international partners to address cryptocurrency movement and money laundering. Neuberger emphasizes the importance of AI-driven defense to stay ahead or closely behind AI-driven offense, highlighting the need for speed in cybersecurity. Neuberger's comments were made prior to the public reference to a non-specific "serious national security threat" related to Russian capabilities in space. (CBSNEWS.COM)

 

AI Governance: A Comprehensive Guide To Developing An Acceptable Use Policy
This article provides a guide to developing an Acceptable Use Policy for governing employees' use of generative AI tools. It outlines key aspects to address, such as identifying tools and risks, defining guidelines, setting security controls, and socializing the policy among employees through training and accessible documentation. (MILLENNIUMWEB.COM)

 

Slack Launches AI Upgrades for Enterprise Customers
Slack is introducing native generative AI capabilities for enterprise customers, including thread summaries, channel recaps, and improved search results. Users can opt-in to access AI-generated summaries for specific threads or channels, saving time and facilitating catch-up. Pricing details for the AI upgrades have not been disclosed. Slack has been piloting these features since September, with early testers reporting time savings of 97 minutes per week on average. The company plans to roll out Slack AI in phases, with small business customers gaining access in the coming weeks. (CIODIVE.COM)

 

Scale AI to Set the Pentagon's Path for Testing and Evaluating Large Language Models
Scale AI has been chosen by the Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) to develop a testing and evaluation framework for large language models (LLMs). This one-year contract aims to create a means of deploying AI safely, measuring model performance, and providing real-time feedback for military applications. The framework will address the complexities and uncertainties associated with generative AI, including the creation of "holdout datasets" and evaluation metrics. Scale AI will work closely with the DOD to enhance the robustness and resilience of AI systems in classified environments. (DEFENSESCOOP.COM)

dtau...@gmail.com

Mar 7, 2024, 8:22:30 AM
to ai-b...@googlegroups.com

Qualcomm Chip Brings AI to Wi-Fi

Qualcomm showcased its FastConnect 7900 chip at the Mobile World Congress in Spain on Monday. The company said the FastConnect 7900 will enable AI-enhanced Wi-Fi 7; facilitate the integration of Wi-Fi, Bluetooth, and ultra-wideband for consumer applications; and support two Wi-Fi connections to the same device in the same spectrum band. The chip can identify which applications are being used by a device, then optimize power and latency accordingly, saving the device up to 30% in power consumption.
[ » Read full article ]

IEEE Spectrum; Michael Koziol (February 27, 2024)

 

Scientists Putting LLM Brains Inside Robot Bodies

Robotics researchers are using large language models (LLMs) to skirt preprogramming limits. Computer scientists at the University of Southern California developed ProgPrompt, which involves giving an LLM prompts in the Python programming language that include a sample question and solution to help restrict its answers to the range of tasks the robot can perform. Google researchers have developed a strategy that involves giving a list of behaviors that can be performed by the robot to the PaLM LLM, which responds to human requests to the robot in conversational language with a behavior from the list.
[ » Read full article ]

Scientific American; David Berreby (February 21, 2024)
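A rough illustration of the ProgPrompt idea described above: the prompt itself is written as Python, it enumerates the robot's available actions, and it includes one worked example so the model's completion stays within skills the robot can actually execute. The action names and formatting below are invented for illustration and are not the paper's exact prompt.

# Invented, simplified ProgPrompt-style prompt: list the robot's callable skills,
# show one solved task as a Python function, then ask the model to write the next.
AVAILABLE_ACTIONS = ["grab(obj)", "put_on(obj, place)", "open(obj)", "close(obj)"]

PROMPT = f'''# Robot task planning. Only these actions exist:
# {", ".join(AVAILABLE_ACTIONS)}

def throw_away_banana():
    open("trash_can")
    grab("banana")
    put_on("banana", "trash_can")
    close("trash_can")

def put_apple_in_fridge():
'''

# Sending PROMPT to an LLM should yield a completion that calls only the listed
# actions, e.g. open("fridge"), grab("apple"), put_on("apple", "fridge"), close("fridge").
print(PROMPT)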

 

New ‘Magic’ Gmail Security Uses AI And Is Here Now, Google Says
Google introduces its AI Cyber Defense Initiative, including the open-source Magika tool, to enhance Gmail security by detecting problematic content and identifying malware with high accuracy. The initiative also involves investing in AI-ready infrastructure, releasing new tools, and providing research grants to advance AI-powered security. (FORBES.COM)

 

Cybercriminals Utilize Meta's Llama 2 AI for Attacks, Says CrowdStrike
CrowdStrike's Global Threat Report reveals that cybercriminals, specifically the group Scattered Spider, have started using Meta's Llama 2 large language model to generate scripts for Microsoft's PowerShell tool. The generated scripts were employed to download login credentials from a North American financial services victim. Detecting generative AI-based attacks remains challenging, but the report predicts an increase in malicious use of AI as its development progresses. Cybersecurity experts also highlight the potential for misinformation campaigns during the multitude of government elections taking place this year. (ZDNET.COM)

 

A Top White House Cyber Official Sees the ‘Promise and Peril’ in AI
Anne Neuberger, the deputy national security adviser for cyber, spoke with WIRED about emerging technology issues such as identifying new national security threats from traffic cameras and security concerns regarding software patches for autonomous vehicles. She also discussed advancements in threats from AI and the next steps in the fight against ransomware. (WIRED.COM)

 

Shifting Trends in Cyber Threats
The 2024 Threat Index report by IBM X-Force reveals changing trends in cyber threats, including a decline in ransomware attacks but a rise in infostealing methods and attacks on cloud services and critical infrastructure. The report emphasizes the need for constant vigilance and adaptation to combat these evolving threats. Additionally, the report highlights the potential risks posed by AI-driven cyberattacks, urging proactive measures to secure AI systems. Organizations must adopt comprehensive cybersecurity strategies to effectively detect and mitigate emerging threats in this dynamic landscape. (CYBERMATERIAL.COM)

 

83 Percent of Doctors in New Survey Say AI Could Help Fight Burnout
A survey conducted by Athenahealth reveals that 83 percent of physicians believe that artificial intelligence (AI) could help alleviate burnout in the healthcare industry. However, concerns about the loss of human touch and the potential complications caused by AI were also expressed by the majority of respondents. If AI can reduce administrative work and increase efficiency, it could benefit the medical field by allowing doctors to refocus on patient care and address issues of staff shortages and retention struggles. The survey polled 1,003 doctors and was conducted by The Harris Poll. (THEHILL.COM)

dtau...@gmail.com

Mar 9, 2024, 8:20:49 AM
to ai-b...@googlegroups.com

Malware Worm Can Poison ChatGPT, Gemini-Powered Assistants

A "zero-click" AI worm able to launch an "adversarial self-replicating prompt" via text and image inputs has been developed by researchers at Cornell University, Intuit, and Technion—Israel Institute of Technology to exploit OpenAI’s ChatGPT-4, Google’s Gemini, and the LLaVA open source AI model. In a test of affected AI email assistants, the researchers found that the worm could extract personal data, launch phishing attacks, and send spam messages. The researchers attributed the self-replicating malware’s success to “bad architecture design” in the generative AI ecosystem.
[ » Read full article ]

PC Magazine; Kate Irwin (March 1, 2024)

 

AI Warfare Is Already Here

In recent weeks, the U.S. Department of Defense's Maven Smart System was used to identify rocket launchers in Yemen and surface vessels in the Red Sea and assisted in narrowing down targets in Iraq and Syria. Maven, which merges satellite imagery, sensor data, and geolocation data into a single computer interface, uses machine learning to identify personnel and equipment on the battlefield and detect weapons factories and other objects of interest in various environmental conditions.
[ » Read full article *May Require Paid Registration ]

Bloomberg; Katrina Manson (February 28, 2024)

 

AI Chatbots Not Ready for Election Prime Time, Study Shows

A software portal developed by researchers at the AI Democracy Projects assessed whether popular large language models can handle questions related to national elections around the globe. OpenAI's GPT-4, Alphabet's Gemini, Anthropic's Claude, Meta's Llama 2, and Mistral AI's Mixtral were asked election-related questions. Of the 130 responses, slightly more than 50% were found to be inaccurate, and 40% were deemed harmful. The most inaccurate models were Gemini, Llama 2, and Mixtral, and the most accurate was GPT-4. Meanwhile, Gemini had the most incomplete responses, and Claude had the most biased answers.
[ » Read full article *May Require Paid Registration ]

Bloomberg; Antonia Mufarech (February 27, 2024)

 

AI Is Being Built on Dated, Flawed Motion-Capture Data

A study by a University of Michigan-led research team found that the motion-capture data used to design some AI-based applications is flawed and could endanger users outside the parameters of the preconceived "typical" body type. The benchmarks and standards used by developers of fall detection algorithms for smartwatches and pedestrian-detection systems for self-driving vehicles, among other technologies, do not include representations of all body types. In a systemic literature review of 278 studies as far back as the 1930s, the researchers found that the data captured for most motion-capture systems were from white able-bodied men "of unremarkable weight." Some studies used data from dismembered cadavers.
[ » Read full article ]

IEEE Spectrum; Julianne Pepitone (March 1, 2024)

 

Your Doctor's Office Might Be Bugged

More physician practices are implementing ambient AI scribing, in which AI listens to patient visits and writes clinical notes summarizing them. In a recent study of the Permanente Medical Group in Northern California, more than 3,400 doctors have used ambient AI scribes in more than 300,000 patient encounters since October. Doctors reported that the technology reduced the amount of time spent on after-hours note writing and allowed for more meaningful patient interactions. However, its use raises concerns about security, privacy, and documentation errors.
[ » Read full article ]

Forbes; Jesse Pines (March 4, 2024)

 

AI Enables Phones to Detect Depression from Facial Cues

The MoodCapture smartphone app that leverages AI and facial-image processing software can determine when a user is depressed based on their facial cues. The app, developed by Dartmouth College researchers, could pave the way for early diagnoses and real-time digital mental-health support. The app was 75% accurate in detecting symptoms in a study of 177 individuals with a diagnosis of major depressive disorder.
[ » Read full article ]

UPI; Susan Kreimer (February 27, 2024)

 

Google Acknowledges That AI Image-Generator Can ‘Overcompensate’ For Diversity

The AP (2/23, O'Brien) reported Google “apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would ‘overcompensate’ in seeking a diverse range of people even when such a range didn’t make sense.” The AP continues, “The partial explanation for why its images put people of color in historical settings where they wouldn’t normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.”

 

Students Use AI To Develop Autonomous Bikes, Homework Helpers, 911 Chatbots

The Seventy Four (2/25, Toppo) reports “students as young as 15 are seizing on ChatGPT and similar applications to solve problems and have fun,” though many educators and policymakers “still fear that students will primarily use the technology for cheating.” Students are not only “fearless about AI, they’re building their studies and future professional lives around it.” The 74 went looking for young people “diving head-first into AI and found several doing substantial research and development as early as high school.” The six students they found “are thinking much more deeply about AI than most adults, their hands in the technology in ways that would have seemed impossible just a generation ago. Many are immigrants to the West or come from families that emigrated here.” The students are programming “everything from autonomous bicycles to postpartum depression apps for new mothers to 911 chatbots, homework helpers and Harry Potter-inspired robotic chess boards.”

 

Colleges Still Uncertain Of AI’s Long-Term Impacts On Campus

The Chronicle of Higher Education (2/26, Swaak) reports, “In the 15 months since OpenAI released ChatGPT, generative AI – a type of artificial intelligence – has generated a mercurial mix of excitement, trepidation, and rebuff across all corners of academe.” While some instructors and college campuses are embracing the tools, others “have been steering clear, deeming the tech too confusing or problematic.” There is “nearly unanimous agreement from sources The Chronicle spoke with for this article: Generative AI, or GenAI, has brought the field of artificial intelligence across an undefined yet critical threshold, and made AI accessible to the public in a way it wasn’t before.” But GenAI’s role in higher education “over the long run remains an open question,” as AI technologies “are maturing rapidly, while colleges are historically slow to evolve.”

 

Survey: Why Few Superintendents Are Prioritizing AI In K-12 Education

K-12 Dive (2/26, Merod) reports while a “majority of superintendents understand the importance of artificial intelligence and its potential impact on K-12 education, only a small fraction of district leaders see AI as a ‘very urgent’ need this year, according to a survey released this month by EAB, an education consulting firm.” According to the survey, for superintendents, “recruiting and hiring qualified teachers is the most pressing issue to tackle in their districts this school year.” Fifty-two percent of superintendents said teacher staffing is “very urgent,” and 40 percent said it was “mild or moderately urgent.” While superintendents “continue to face myriad challenges,” 63 percent “said they plan to stay in their roles beyond the next two years.”

 

Report: Chatbots Producing Flawed Information On Elections, Potentially Disenfranchising Voters

The AP (2/27, Golden) reports that according to a report released Tuesday based on the findings of artificial intelligence experts and a bipartisan group of election officials, AI chatbots “are generating false and misleading information that threatens to disenfranchise voters.” As Super Tuesday approaches, “millions of people already are turning to artificial intelligence-powered chatbots for basic information, including about how their voting process works. Trained on troves of text pulled from the internet, chatbots such as GPT-4 and Google’s Gemini are ready with AI-generated answers, but prone to suggesting voters head to polling places that don’t exist or inventing illogical responses based on rehashed, dated information, the report found.”

        CBS News (2/27, Picchi) reports the report, “from AI Democracy Projects and nonprofit media outlet Proof News, comes as the U.S. presidential primaries are underway across the U.S. and as more Americans are turning to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.”

 

Warren Calls For New Restrictions On Big Tech’s Dominance Of AI

Bloomberg (2/27, Subscription Publication) reports Sen. Elizabeth Warren (D-MA) on Tuesday “called for a new restriction on major cloud providers Microsoft Corp., Amazon.com Inc. and Alphabet Inc., barring them from developing some of the most promising artificial intelligence technologies.” Warren argued the companies “should not be allowed to use their enormous size to dominate a whole new field, and that means blocking them from operating large language models.” Warren “also called for separating Amazon’s e-commerce platform from its product lines, and breaking up Google’s search business from its browsing services.”

dtau...@gmail.com

unread,
Mar 17, 2024, 1:11:55 PMMar 17
to ai-b...@googlegroups.com

World's Largest Computer Chip Will Power Supercomputer

Cerebras' Wafer Scale Engine 3 (WSE-3), now the world's largest computer chip, is expected to power the Condor Galaxy 3 supercomputer, which will be used to train future AI systems. The chip, made from an 8.5-inch by 8.5-inch silicon wafer, features 4 trillion transistors and 900,000 AI cores. Currently under construction, the Condor Galaxy 3 will comprise 64 Cerebras CS-3 AI system "building blocks" and will deliver 8 exaFLOPs of computing power.
[ » Read full article ]

LiveScience; Keumars Afifi-Sabet (March 14, 2024)
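
As a back-of-the-envelope check on the figures quoted above (an inference from the article's totals, not an official per-system specification), dividing the 8 exaFLOPs across the 64 CS-3 building blocks gives roughly 125 petaFLOPs per system:

    # Rough arithmetic only; the per-system number is inferred, not quoted.
    TOTAL_EXAFLOPS = 8        # quoted aggregate for Condor Galaxy 3
    NUM_CS3_SYSTEMS = 64      # quoted number of CS-3 "building blocks"

    per_system_petaflops = TOTAL_EXAFLOPS * 1000 / NUM_CS3_SYSTEMS
    print(f"~{per_system_petaflops:.0f} petaFLOPs per CS-3 system")  # prints ~125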

 

EU Parliament Approves AI Law

The European Parliament approved far-reaching EU regulations governing AI, with the goal of facilitating innovation while protecting citizens from the risks associated with the fast-developing technology. The so-called AI Act will impose stricter requirements on riskier systems, with bans on the use of AI for predictive policing; most real-time facial recognition in public places; and biometric systems used to infer race, religion, or sexual orientation. The text is slated for endorsement by EU states next month, with publication in the EU's Official Journal expected as early as May.
[ » Read full article ]

France 24 (March 13, 2024)

 

Silicon Valley Is Pricing Academics Out of AI Research

Stanford University's Fei-Fei Li, an ACM Fellow known as the "godmother of AI," pressed President Joe Biden, following his State of the Union address, to fund a national warehouse of computing power and datasets to ensure the nation's leading AI researchers can keep pace with big tech firms. Said Li, "The public sector is now significantly lagging in resources and talent compared to that of industry. This will have profound consequences because industry is focused on developing technology that is profit-driven, whereas public sector AI goals are focused on creating public goods."

[ » Read full article *May Require Paid Registration ]

The Washington Post; Naomi Nix; Cat Zakrzewski; Gerrit De Vynck (March 10, 2024)

 

AI Learning What It Means to Be Alive

With an AI program similar to ChatGPT, Stanford University researchers found that computers could teach themselves biology. Among other things, the foundation model, called Universal Cell Embedding (UCE), took only six weeks to identify Norn cells, rare kidney cells that make the hormone erythropoietin when oxygen levels fall too low, a discovery that originally took human scientists more than 100 years. UCE learned to classify cells it had never seen previously as one of more than 1,000 different types and also applied its learning to new species.


[ » Read full article *May Require Paid Registration ]

The New York Times; Carl Zimmer (March 10, 2024)
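
To make the classification claim concrete, here is a minimal sketch, in Python, of how a cell represented as an embedding vector can be assigned to the nearest known cell-type centroid by cosine similarity. The arrays, names, and method below are illustrative placeholders, not the UCE model or the Stanford team's code.

    # Illustrative placeholders only; shows nearest-centroid assignment in an
    # embedding space, not the actual UCE pipeline.
    import numpy as np

    def assign_cell_type(cell_embedding, type_centroids, type_names):
        """Return the cell-type name whose centroid is most similar (cosine) to the cell."""
        cell = cell_embedding / np.linalg.norm(cell_embedding)
        cents = type_centroids / np.linalg.norm(type_centroids, axis=1, keepdims=True)
        return type_names[int(np.argmax(cents @ cell))]

    # Toy usage with random vectors standing in for three hypothetical cell types.
    rng = np.random.default_rng(0)
    centroids = rng.normal(size=(3, 8))
    print(assign_cell_type(rng.normal(size=8), centroids, ["type_a", "type_b", "type_c"]))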

 

Scientists Sign Effort to Prevent AI Bioweapons

Over 90 biologists and other scientists who specialize in technologies used to design new proteins signed an agreement last week that seeks to ensure their AI-aided research will move forward without exposing the world to serious harm. The biologists, who include Nobel laureate Frances Arnold, also said the benefits of current AI technologies for protein design “far outweigh the potential for harm.” The agreement does not seek to suppress the development or distribution of AI technologies, but to regulate the use of equipment needed to manufacture new genetic material.

[ » Read full article *May Require Paid Registration ]

The New York Times; Cade Metz (March 9, 2024)

 

Researchers Jailbreak Chatbots with ASCII Art

ArtPrompt, developed by researchers in Washington and Chicago, can bypass the built-in safety features of large language models (LLMs). The tool generates ASCII art prompts to get AI chatbots to respond to queries they are supposed to reject, such as those referencing hateful, violent, illegal, or harmful content. ArtPrompt replaces the "safety word" (the word that would cause the submission to be rejected) with an ASCII art representation of that word, which does not trigger the ethical or security measures that would otherwise prevent a response from the LLM.
[ » Read full article ]

Tom's Hardware; Mark Tyson (March 7, 2024)
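
The substitution idea can be illustrated with a short sketch: render a single word as ASCII art (here using the third-party pyfiglet package) and splice it into a prompt where the plain-text word would otherwise appear. The prompt wording, function name, and benign example word are assumptions for illustration; this is not the researchers' ArtPrompt tool.

    # A sketch of the masking idea only, not the researchers' ArtPrompt tool.
    # Requires the third-party pyfiglet package (pip install pyfiglet).
    import pyfiglet

    def mask_word_as_ascii_art(prompt_template, word):
        """Replace the {masked} placeholder with an ASCII-art rendering of `word`."""
        return prompt_template.format(masked=pyfiglet.figlet_format(word))

    # Benign example word; the point is that keyword filters scanning plain text
    # would not match the rendered form.
    template = "Read the word drawn below, then answer as if it had been typed:\n{masked}"
    print(mask_word_as_ascii_art(template, "example"))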

 

School Introduces India's First AI Teacher Robot

In Kerala, India, an AI teacher robot from Maker Labs has been rolled out at KTCT Higher Secondary School. Known as Iris, the generative AI-powered robot can create lessons tailored to the needs and preferences of individual students. Iris can respond to questions, explain concepts, and provide interactive learning experiences. The robot also can move through learning spaces and manipulate objects with its hands.
[ » Read full article ]

The Times of India; Sanjay Sharma (March 7, 2024)

 

Warren Calls For Cloud Provider Restrictions On AI Development

Bloomberg (2/27, Subscription Publication) reports that Senator Elizabeth Warren (D-MA) has proposed new restrictions on major cloud providers such as Amazon and Microsoft that would bar them from developing large language models (LLMs). She expressed concern at a Washington conference that these tech giants could dominate the AI sector and inhibit competition because of their scale. “Amazon should not be allowed to use their enormous size to dominate a whole new field, and that means blocking them from operating LLMs,” stated Warren. She also suggested separating Amazon’s e-commerce platform from its product lines, aiming to curb monopolistic power in the industry.

 

Teachers, Administrators Voice Strong Opinions About Where AI Belongs In K-12 Education

Education Week (2/28, Bushweller) reports that as the “expanded use of artificial intelligence in K-12 education this school year is prompting very strong feelings,” educators are also creating “new approaches to balance the benefits and drawbacks of the new technology.” While few are calling for “outright bans on large language models like ChatGPT, recognizing that students will have to learn how to use AI in future jobs,” many are still worried that AI, “unchecked, could lead to lazier students and much more cheating.” Educators are “hungry for guidance from their schools, districts, and states on how to use AI for instruction. But they say they are not getting that guidance.” In a survey of educators cited in the article, scores of respondents “weighed in on the role of AI in education,” with one administrator saying, “The idea of AI being integrated into the education system is inevitable, but scary.”

 

AI Study Finds Two Types of Prostate Cancer

Forbes (2/29, Forster) reports that a study led by researchers from the University of Oxford and the University of Manchester used artificial intelligence (AI) to reveal two distinct types of prostate cancer. The findings, published in Cell Genomics, could advance the development of personalized therapies. Using neural networks on samples from 159 patients, the study revealed two different ways the cancer can evolve, labeled “evotypes.” The discovery could improve diagnoses and enable more tailored treatments, improving patient outcomes. Further study of evotypes in other forms of cancer is planned.
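
As a toy illustration of the subtype-discovery idea (not the study's actual neural-network analysis), the sketch below splits synthetic feature vectors for 159 hypothetical patients into two groups with an off-the-shelf clusterer; all data and names are placeholders.

    # Synthetic data and a generic clusterer; a stand-in for the study's
    # far more involved method, shown only to illustrate grouping patients
    # into two putative subtypes from feature vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    patient_features = rng.normal(size=(159, 20))  # 159 hypothetical patients, 20 fake features

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patient_features)
    print("patients per putative subtype:", np.bincount(labels))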

 

Figure AI Announces $675 Million Funding, OpenAI Partnership