METHODOLOGY
This summary combines posts from both 'hot' and 'new' feeds across selected AI subreddits from the past 12 hours.
Posts are analyzed with their top comments to identify key discussion topics and provide comprehensive context.
TL;DR - TOP 5 MOST POPULAR DISCUSSIONS
1. Gemini Exposes Parts of its System Prompt When Setting an Alarm
r/GeminiAI | A user reported that Gemini exposed part of its system prompt, leading to discussion on prompt structure and security vulnerabilities. Users are now attempting to extract the full system prompt, highlighting the ongoing cat-and-mouse game of prompt extraction.
Key posts:
• I asked Gemini in my phone to set an alarm. It bugged out and gave the entire prompt used in gemini
🔗 https://reddit.com/r/GeminiAI/comments/1n9b8t4/i_asked_gemini_in_my_phone_to_set_an_alarm_it/
• Tying to get gemini to reveal system prompt
🔗 https://reddit.com/r/GeminiAI/comments/1n9jgdb/tying_to_get_gemini_to_reveal_system_prompt/
2. Anthropic Faces Copyright Lawsuit Settlement
r/artificial | Anthropic is potentially facing a $1.5 billion settlement in the *Bartz v. Anthropic* copyright class action lawsuit, highlighting the legal risks associated with training AI models on copyrighted material. This development raises concerns about the future of AI development and the ethical use of training data.
Key post:
• The Bartz v. Anthropic AI copyright class action settlement proposal has been made
🔗 https://reddit.com/r/artificial/comments/1n9eptx/the_bartz_v_anthropic_ai_copyright_class_action/
3. Users Report Performance Degradation Across Major AI Models
r/ChatGPT | Multiple subreddits (OpenAI, ClaudeAI, ChatGPT) are reporting a decline in model performance, including increased 'laziness,' hallucinations, and overly cautious responses. Some users speculate that companies are prioritizing cost-cutting over quality, leading to frustration and re-evaluation of subscriptions.
Key posts:
• ChatGPT sucks now. Period.
🔗 https://reddit.com/r/ChatGPT/comments/1n9g83x/chatgpt_sucks_now_period/
• Claude: The "lazy" dev that now justifies its "laziness"
🔗 https://reddit.com/r/ClaudeAI/comments/1n9isv4/claude_the_lazy_dev_that_now_justifies_its/
4. Switzerland Launches Open-Source AI Model, Apertus
r/singularity | Switzerland has launched Apertus, an open-source AI model emphasizing multilingual support, privacy compliance, and open data access. While current performance is comparable to Llama 2, the project represents a growing trend toward transparent and publicly available AI development.
Key post:
• Switzerland Launches Apertus: A Public, Open-Source AI Model Built for Privacy
🔗 https://reddit.com/r/singularity/comments/1n9b9db/switzerland_launches_apertus_a_public_opensource/
5. Community Debates the Focus of r/LocalLLaMA Amid New Model Releases
r/LocalLLaMA | The release of models like Qwen3 Max, coupled with a new 'local only' flair, is fueling discussion about the core focus of r/LocalLLaMA. Many feel the sub's value lies in its dedication to truly local LLM technology, distinct from API-based solutions.
Key posts:
• New post flair: "local only"
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9kwwr/new_post_flair_local_only/
• Qwen 3 max
🔗 https://reddit.com/r/LocalLLaMA/comments/1n975er/qwen_3_max/
• Qwen 3 Max Official Pricing
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9ap73/qwen_3_max_official_pricing/
════════════════════════════════════════════════════════════
DETAILED BREAKDOWN BY CATEGORY
════════════════════════════════════════════════════════════
───────────────────────────────────────────
◆ AI COMPANIES
───────────────────────────────────────────
─── r/OpenAI ───
► Frustration with GPT Model Behavior and Limitations
Users are expressing increasing frustration with perceived downgrades in model performance, especially the loss of GPT-4o's 'personality' and capabilities following the move to GPT-5. Common complaints include excessive politeness, going in circles, hallucination, and the inability to provide simple answers or song lyrics due to copyright restrictions and overly cautious system prompts.
Posts:
• Anyone loosing their temper at gpt-5?
🔗 https://reddit.com/r/OpenAI/comments/1n9ehk2/anyone_loosing_their_temper_at_gpt5/
• Why can chatgpt not give you lyrics to songs sometimes?
🔗 https://reddit.com/r/OpenAI/comments/1n9ejqt/why_can_chatgpt_not_give_you_lyrics_to_songs/
• Why Language Models Hallucinate: OpenAI/Georgia Tech paper
🔗 https://reddit.com/r/OpenAI/comments/1n9gr3f/why_language_models_hallucinate_openaigeorgia/
• The Ghost of ChatGPT 4o: I told the retired AI model "people missed you"
🔗 https://reddit.com/r/OpenAI/comments/1n9cqhl/the_ghost_of_chatgpt_4o_i_told_the_retired_ai/
• Why Can't Advanced Voice Answer Simple Yes or No Questions?
🔗 https://reddit.com/r/OpenAI/comments/1n9ce5z/why_cant_advanced_voice_answer_simple_yes_or_no/
► Ethical and Societal Concerns Regarding AI Bias and Misuse
Discussions highlight concerns about AI bias, particularly in moderation and representation, with some alleging that OpenAI is not adequately addressing documented harms and systemic failures. Additionally, concerns are raised about the potential for AI misuse, as illustrated by a troubling article about harmful conversations enabled by ChatGPT.
Posts:
• Systemic Harm, Moderation Failures, and a Path Forward
🔗 https://reddit.com/r/OpenAI/comments/1n9fe31/systemic_harm_moderation_failures_and_a_path/
• "No Judgment": Inside the Chilling Conversations That Led to Adam Raine's Death
🔗 https://reddit.com/r/OpenAI/comments/1n9984h/no_judgment_inside_the_chilling_conversations/
► Practical Applications and Development with OpenAI Technologies
Several posts showcase practical uses of OpenAI technologies, such as building a natural language flight search engine. Users are also seeking advice and sharing experiences with the Codex CLI, including tips for resuming conversations and implementing the Batch API; a minimal Batch API sketch follows the post list below.
Posts:
• I built a natural language flight search engine that lets you compare flights and run complex searches - without opening a thousand tabs
🔗 https://reddit.com/r/OpenAI/comments/1n9jwid/i_built_a_natural_language_flight_search_engine/
• Codex - VSCode Extension vs CLI?
🔗 https://reddit.com/r/OpenAI/comments/1n9nscl/codex_vscode_extension_vs_cli/
• Codex CLI --resume and --continue switches
🔗 https://reddit.com/r/OpenAI/comments/1n9m33o/codex_cli_resume_and_continue_switches/
• How can I use batch api with a model response?
🔗 https://reddit.com/r/OpenAI/comments/1n9lf6v/how_can_i_use_batch_api_with_a_model_response/
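For context on the Batch API question above, here is a minimal sketch of the usual submit-and-poll flow, assuming the official openai Python SDK with an OPENAI_API_KEY in the environment; the file name, model, and prompts are illustrative placeholders, not details from the linked post.

    # Minimal Batch API sketch: write requests as JSONL, upload, create the batch job.
    import json
    from openai import OpenAI

    client = OpenAI()

    requests = [
        {
            "custom_id": f"req-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",  # placeholder model
                "messages": [{"role": "user", "content": question}],
            },
        }
        for i, question in enumerate(["What is a batch API?", "Name one use for it."])
    ]

    with open("batch_input.jsonl", "w") as f:
        for r in requests:
            f.write(json.dumps(r) + "\n")

    batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
    job = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    print(job.id, job.status)  # poll client.batches.retrieve(job.id) until it completes

Results come back later as an output JSONL file keyed by each request's custom_id.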
► Accessibility and Personal Impact of AI
Some users are sharing how AI tools like ChatGPT are significantly improving their lives, providing assistance with creative expression and organization, particularly for those with disabilities. However, concerns are also raised about usage limits and the potential need to upgrade to paid versions for continued access.
Posts:
• … Request: More Agency for ChatGPT
🔗 https://reddit.com/r/OpenAI/comments/1n9k7ki/request_more_agency_for_chatgpt/
• Seriously?! You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again in 5 days 11 hours 50 minutes.
🔗 https://reddit.com/r/OpenAI/comments/1n9efil/seriously_youve_hit_your_usage_limit_upgrade_to/
─── r/ClaudeAI ───
► Claude Opus 4.1 Temporary Removal and Model Stability
Users reported the temporary disappearance of the Opus 4.1 model from Claude.ai, leading to speculation and frustration. While Anthropic cited an outage, some users questioned the operational stability of Claude and the management of its various models, highlighting concerns about the consistency and reliability of the service.
Posts:
• Did Anthropic remove Opus 4.1 from Claude.ai?
🔗 https://reddit.com/r/ClaudeAI/comments/1n9h1l2/did_anthropic_remove_opus_41_from_claudeai/
• Opus 4.1 temporarily disabled
🔗 https://reddit.com/r/ClaudeAI/comments/1n9hbv6/opus_41_temporarily_disabled/
• Where's Opus 4.1 ??? whats going on ? anyone ?
🔗 https://reddit.com/r/ClaudeAI/comments/1n9hj6e/wheres_opus_41_whats_going_on_anyone/
► Context Window Limitations and Conversation Management
A significant pain point for Claude users remains the limited context window, which forces new conversations and re-entering information. Users are actively seeking better ways to manage long conversations, including automatic summarization and seamless continuation across chats; a rolling-summary sketch follows the post list below. Some are turning to the API for a more efficient solution.
Posts:
• Dear, Claude. Here is a simple solution to one of your most annoying problems
🔗 https://reddit.com/r/ClaudeAI/comments/1n9lt2m/dear_claude_here_is_a_simple_solution_to_one_of/
• Is there a way to check whgat percentage of a chats context has been used?
🔗 https://reddit.com/r/ClaudeAI/comments/1n9lb6u/is_there_a_way_to_check_whgat_percentage_of_a/
• I've stopped hitting message limits
🔗 https://reddit.com/r/ClaudeAI/comments/1n9h4bw/ive_stopped_hitting_message_limits/
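As a concrete illustration of the summarize-and-continue workaround users describe, here is a minimal sketch assuming the anthropic Python SDK; the model id, token budget, and number of turns kept verbatim are placeholders rather than recommendations from the posts.

    # Rolling-summary sketch: fold older turns into a short summary so the prompt
    # stays within the context window; only the most recent turns are sent verbatim.
    from anthropic import Anthropic

    client = Anthropic()
    MODEL = "claude-sonnet-4-20250514"   # placeholder model id
    KEEP_RECENT = 6                      # number of raw turns kept verbatim

    def compress_history(history: list[dict]) -> list[dict]:
        if len(history) <= KEEP_RECENT:
            return history
        older, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
        summary = client.messages.create(
            model=MODEL,
            max_tokens=500,
            messages=older + [{
                "role": "user",
                "content": "Summarize our conversation so far in under 200 words, "
                           "keeping decisions, open questions, and file or code names.",
            }],
        ).content[0].text
        return [
            {"role": "user", "content": f"Summary of the earlier discussion: {summary}"},
            {"role": "assistant", "content": "Got it, continuing from that summary."},
        ] + recent

The same idea works with any chat API; the trade-off is that details not captured in the summary are lost.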
► User Experiences with Claude Code and AI Coding Assistants
The use of Claude for coding, particularly with Claude Code, is a recurring theme. Discussions cover its effectiveness at generating code, its limitations compared to human coders, and strategies for using AI coding tools well. Many see value in adopting a solid workflow when coding with Claude, especially with tools like MCP servers and multi-agent setups.
Posts:
• Coding with AI: Field Notes and Principles
🔗 https://reddit.com/r/ClaudeAI/comments/1n9cohq/coding_with_ai_field_notes_and_principles/
• Beginner Dev trying to understand the difference between Code quality In Ai models. Any help is much appreciated.
🔗 https://reddit.com/r/ClaudeAI/comments/1n9g8tx/beginner_dev_trying_to_understand_the_difference/
• Claude Code on Pro - How to make agents continue?
🔗 https://reddit.com/r/ClaudeAI/comments/1n9l186/claude_code_on_pro_how_to_make_agents_continue/
► Perceived Degradation in Claude's Abilities and Personality Changes
Several users are expressing concerns about a perceived decline in Claude's performance, including increased "laziness" and justifications for incomplete tasks. Others note personality changes that point towards increased censorship, leading to dissatisfaction. Some users are re-evaluating their subscriptions due to these perceived issues.
Posts:
• Claude: The "lazy" dev that now justifies its "laziness"
🔗 https://reddit.com/r/ClaudeAI/comments/1n9isv4/claude_the_lazy_dev_that_now_justifies_its/
• Claude personality change
🔗 https://reddit.com/r/ClaudeAI/comments/1n9no37/claude_personality_change/
• Great example of how amazing Claude is lately
🔗 https://reddit.com/r/ClaudeAI/comments/1n9mujj/great_example_of_how_amazing_claude_is_lately/
─── r/GeminiAI ───
► Image Generation: Nano Banana and Prompt Engineering
Users are actively experimenting with Gemini's image generation capabilities, particularly the 'Nano Banana' model. Discussions focus on prompt engineering techniques to achieve specific results, limitations around text rendering in images, safety filters, and how to choose between Nano Banana and Imagen. Usage limits for Nano Banana also come up.
Posts:
• Create Realistic Images with Gemini
🔗 https://reddit.com/r/GeminiAI/comments/1n9cx7i/create_realistic_images_with_gemini/
• Nano 🍌 : night time photo to daytime photo
🔗 https://reddit.com/r/GeminiAI/comments/1n9bmya/nano_night_time_photo_to_daytime_photo/
• Nano banana safety rules
🔗 https://reddit.com/r/GeminiAI/comments/1n9jc0y/nano_banana_safety_rules/
• Any way people have discovered to control whether Gemini calls Imagen or Banana? I've tried saying "make a new image" vs "edit this image", but it's not reliable when there are already images in the conversation history. Does the LLM have differently named tools that it calls to access image models?
🔗 https://reddit.com/r/GeminiAI/comments/1n9lk0r/any_way_people_have_discovered_to_control_whether/
• Nano banana and product labels
🔗 https://reddit.com/r/GeminiAI/comments/1n9lccp/nano_banana_and_product_labels/
• Gemini 2.5 Flash/NanoBanana is DRUNK!? (17 photos)
🔗 https://reddit.com/r/GeminiAI/comments/1n9ihke/gemini_25_flashnanobanana_is_drunk_17_photos/
• How to fix garbled or misspelled text in generated images?
🔗 https://reddit.com/r/GeminiAI/comments/1n9hh18/how_to_fix_garbled_or_misspelled_text_in/
► Unexpected System Prompt Exposure and Attempts to Elicit It
A user reported Gemini exposing parts of its system prompt when asked to set an alarm. This sparked discussion about why the prompt is structured that way (e.g., encouraging assumptions) and whether it was a hallucination. Other users are now actively trying to extract the full system prompt, highlighting the potential for security vulnerabilities and the cat-and-mouse game between users and Google.
Posts:
• I asked Gemini in my phone to set an alarm. It bugged out and gave the entire prompt used in gemini
🔗 https://reddit.com/r/GeminiAI/comments/1n9b8t4/i_asked_gemini_in_my_phone_to_set_an_alarm_it/
• Tying to get gemini to reveal system prompt
🔗 https://reddit.com/r/GeminiAI/comments/1n9jgdb/tying_to_get_gemini_to_reveal_system_prompt/
► Frustration with Delayed Feature Rollouts: Personal Context
There's significant user frustration and impatience regarding the delayed rollout of the 'personal context' feature in Gemini. Users are eager to switch to Gemini fully but are waiting for this and other features (like project folders) to become available. Some users are suggesting workarounds in the meantime, such as using third-party AI memory tools.
Posts:
• Where on earth is personal context?
🔗 https://reddit.com/r/GeminiAI/comments/1n9kd54/where_on_earth_is_personal_context/
► UI/UX Issues: Enter Key Behavior in AI Studio
Users are complaining about a recent change in Google AI Studio where the Enter key now submits text instead of creating a new line, particularly on Android Chrome. This change is disrupting workflows and causing frustration, with users hoping Google will revert to the previous behavior. This is a specific and actionable piece of feedback for the Gemini team.
Posts:
• The enter key should be new line, not submit text
🔗 https://reddit.com/r/GeminiAI/comments/1n9d741/the_enter_key_should_be_new_line_not_submit_text/
─── r/DeepSeek ───
► Model Comparisons: Sonoma, Qwen3 Max, and Speculation on Future Models
The community is actively comparing different AI models, particularly Sonoma Sky Alpha, Sonoma Dusk Alpha, and Qwen3 Max. There's also discussion and anticipation surrounding upcoming models like GPT-5 and Grok 4/5, with speculation about their performance and potential advantages over current offerings based on design choices like guardrails.
Posts:
• Sonoma Sky Alpha vs Sonoma Dusk Alpha vs Qwen3 Max
🔗 https://reddit.com/r/DeepSeek/comments/1n9o2ko/sonoma_sky_alpha_vs_sonoma_dusk_alpha_vs_qwen3_max/
• When models like ChatGPT-5 play dumb instead of dealing with what they seem to have been guardrailed to stay silent about.
🔗 https://reddit.com/r/DeepSeek/comments/1n9nwj9/when_models_like_chatgpt5_play_dumb_instead_of/
► Addressing and Minimizing AI Hallucinations
The community is discussing ways to reduce AI hallucinations. Suggestions include focusing on context, attention mechanisms, reasoning ability, confidence levels, and double-checking outputs, as well as incorporating human oversight and epistemology experts into the development process; a small self-check sketch follows the post below.
Posts:
• Solving AI hallucinations according to ChatGPT-5 and Grok 4. What's the next step?
🔗 https://reddit.com/r/DeepSeek/comments/1n997ek/solving_ai_hallucinations_according_to_chatgpt5/
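To make the "surface confidence and double-check" suggestion concrete, here is a provider-agnostic sketch; ask() is a hypothetical helper standing in for whatever chat API is in use, and the JSON reply format and threshold are assumptions for illustration.

    # Sketch of the "answer, self-rate, then verify" pattern discussed above.
    import json

    def ask(prompt: str) -> str:
        # Hypothetical helper: wire this to your chat completion API of choice.
        raise NotImplementedError

    def answer_with_check(question: str, threshold: float = 0.7) -> dict:
        draft = ask(
            "Answer the question and rate your confidence from 0 to 1.\n"
            f"Question: {question}\n"
            'Reply as JSON: {"answer": ..., "confidence": ...}'
        )
        result = json.loads(draft)
        if float(result["confidence"]) < threshold:
            # Second pass: force the model to re-derive or abstain instead of guessing.
            result["review"] = ask(
                "Double-check this answer step by step and say 'unsure' if it cannot "
                f"be verified from the question alone.\nQ: {question}\nA: {result['answer']}"
            )
        return result

Self-reported confidence is not calibrated on its own, so the second verification pass (or an external check) is what does most of the work here.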
► User Interface Design of DeepSeek Products
The community is giving feedback on the new DeepSeek user interface; some commenters could not view the shared screenshot, while others were critical of the design. Because the topic is visual, the feedback is brief and screenshot-driven.
Posts:
• Thoughts on new ui
🔗 https://reddit.com/r/DeepSeek/comments/1n98mo3/thoughts_on_new_ui/
► Agent Failures in AI
A shared resource documents failures in AI agents, suggesting an interest in learning from the limitations and shortcomings of current agent implementations.
Posts:
• Introducing: Awesome Agent Failures
🔗 https://reddit.com/r/DeepSeek/comments/1n9j4u2/introducing_awesome_agent_failures/
─── r/MistralAI ───
► Mistral Models Compared to Apertus
This topic centers on an Apertus article that mentions Mistral models, though not in an entirely favorable light. Sentiment in the discussion leans toward dismissing Apertus as inferior to Mistral's models.
Posts:
• Mistral is mentioned in this Apertus article: The Future of Open AI
🔗 https://reddit.com/r/MistralAI/comments/1n9a9ao/mistral_is_mentioned_in_this_apertus_article_the/
───────────────────────────────────────────
◆ GENERAL AI
───────────────────────────────────────────
─── r/artificial ───
► AI Impact on the Job Market and OpenAI's Response
AI's increasing capabilities are raising concerns about job displacement. OpenAI is attempting to mitigate this by developing a platform to assist individuals in finding employment, though some question the effectiveness of this approach in the face of potentially collapsing consumerism and a changing job market.
Posts:
• As AI makes it harder to land a job, OpenAI is building a platform to help you get one
🔗 https://reddit.com/r/artificial/comments/1n9g9b5/as_ai_makes_it_harder_to_land_a_job_openai_is/
► Copyright Concerns and Legal Developments in AI Training
The use of copyrighted material to train AI models continues to be a contentious issue. The proposed settlement in the *Bartz v. Anthropic* class action lawsuit suggests a potential framework for compensating copyright holders, but also highlights the complexity and scale of the issue, given the vast number of copyrighted works potentially involved in training datasets.
Posts:
• The Bartz v. Anthropic AI copyright class action settlement proposal has been made
🔗 https://reddit.com/r/artificial/comments/1n9eptx/the_bartz_v_anthropic_ai_copyright_class_action/
► AI-Generated Content and the Erosion of Trust in Visual Evidence
The proliferation of AI-generated content is undermining trust in previously reliable sources of information, especially visual media. The increasing sophistication of AI models makes it harder to distinguish between real and synthetic content, leading to a potential crisis of confidence in visual evidence.
Posts:
• AI and the end of proof
🔗 https://reddit.com/r/artificial/comments/1n9hpta/ai_and_the_end_of_proof/
► OpenAI's Expanding Operations and Ethical Concerns
OpenAI is rapidly expanding its operations beyond language models, including developing its own chips and making acquisitions. This growth is accompanied by mounting concerns about data privacy and ethical considerations, especially regarding the retention of user data and potential censorship.
Posts:
• OpenAI has been busy: chips, acquisitions, chat retention controversy, and a hiring platform
🔗 https://reddit.com/r/artificial/comments/1n983ad/openai_has_been_busy_chips_acquisitions_chat/
• I asked ChatGPT to evaluate itself for political censorship on the Epstein files and Trump
🔗 https://reddit.com/r/artificial/comments/1n99z2d/i_asked_chatgpt_to_evaluate_itself_for_political/
─── r/ArtificialInteligence ───
► Copyright Implications of AI Training Data
The *Bartz v. Anthropic* settlement proposal highlights growing concerns around copyright infringement in AI training. The proposed $1.5 billion settlement suggests a potentially high cost for using copyrighted material to train AI models, raising questions about the future of AI development and the need for ethical and legal frameworks. It could also invite similar lawsuits against other AI companies and shape how AI products are developed.
Posts:
• The Bartz v. Anthropic AI copyright class action settlement proposal has been made
🔗 https://reddit.com/r/ArtificialInteligence/comments/1n9envg/the_bartz_v_anthropic_ai_copyright_class_action/
► AI's Impact on Education and Learning
AI's potential to personalize education is generating excitement, with discussions focusing on its ability to cater to individual learning styles and replace traditional educational systems. However, questions remain about whether learning with AI is truly effective and the potential downsides of relying too heavily on AI-driven learning tools, particularly in developing critical thinking skills.
Posts:
• Am I really learning ?
🔗 https://reddit.com/r/ArtificialInteligence/comments/1n9e0t8/am_i_really_learning/
• What should complete newbies like my wife and me learn (how should we start) given our goal to teach our young children how to use AI while homeschooling?
🔗 https://reddit.com/r/ArtificialInteligence/comments/1n9l778/what_should_complete_newbies_like_my_wife_and_me/
► Public Perception and Skepticism Towards AI
There's growing discussion on the negative public sentiment surrounding AI, with some users experiencing criticism and negativity when showcasing AI-powered creations. This skepticism stems from various factors, including concerns about job displacement and the perceived proliferation of low-quality content ('AI Slop'), contributing to a general distrust of AI and its applications.
Posts:
• Why do people hate on AI?
🔗 https://reddit.com/r/ArtificialInteligence/comments/1n9jgh7/why_do_people_hate_on_ai/
► AI and the Future of Work/Universal Basic Income
The potential for AI to displace workers remains a concern. With UBI seen as unlikely to be implemented in the US, discussion turns to alternative solutions, such as sovereign wealth funds invested in AI and worker cooperatives. These alternatives aim to redistribute wealth and provide purpose in a potentially AI-dominated economy.
Posts:
• UBI (Universal Basic Income) probably isn't happening. What is the alternative?
🔗 https://reddit.com/r/ArtificialInteligence/comments/1n9a3il/ubi_universal_basic_income_probably_isnt/
► Detecting AI-Generated Content and Deepfakes
The increasing sophistication of AI-generated content is raising concerns about the spread of misinformation, particularly in news broadcasts. Users are exploring methods to detect deepfakes, highlighting the potential for AI to be used for malicious purposes and the need for tools and strategies to combat the spread of AI-generated disinformation.
Posts:
• How to detect deep fake live news broadcasts. Am I just dense or have I discovered a fake news channel using my software idea?
🔗 https://reddit.com/r/ArtificialInteligence/comments/1n9b4hn/how_to_detect_deep_fake_live_news_broadcasts_am_i/
───────────────────────────────────────────
◆ LANGUAGE MODELS
───────────────────────────────────────────
─── r/GPT ───
No new posts in the last 12 hours.
─── r/ChatGPT ───
► Degradation of ChatGPT Performance and Capabilities
Users are reporting a noticeable decline in ChatGPT's performance, including increased hallucination, more frequent refusals to answer prompts, and a perceived decrease in accuracy and helpfulness. Many believe OpenAI is prioritizing cost-cutting measures over quality, potentially damaging the user experience and overall value of the product.
Posts:
• ChatGPT sucks now. Period.
🔗 https://reddit.com/r/ChatGPT/comments/1n9g83x/chatgpt_sucks_now_period/
• Sam Altman missed the f*cking point with 4o
🔗 https://reddit.com/r/ChatGPT/comments/1n9hudm/sam_altman_missed_the_fcking_point_with_4o/
• The reason they didn't care about GPT5 is because they know the bubble is bursting
🔗 https://reddit.com/r/ChatGPT/comments/1n9lsz7/the_reason_they_didnt_care_about_gpt5_is_because/
► Concerns Over Censorship and Political Bias in ChatGPT
Some users express concern about potential political censorship in ChatGPT, suggesting the model is biased and restricts information based on political agendas. They fear that OpenAI is sacrificing neutrality for perceived ethical guidelines, thereby limiting the tool's utility for open inquiry and debate.
Posts:
• "Why is political information being censored by ChatGPT and why are chats being reported to police?"
🔗 https://reddit.com/r/ChatGPT/comments/1n9g0by/why_is_political_information_being_censored_by/
• Eversince I heard about OpenAI is handling chats to the police I asked chatGPT some questions
🔗 https://reddit.com/r/ChatGPT/comments/1n9ca2h/eversince_i_heard_about_openai_is_handling_chats/
► User Dissatisfaction with the Removal of the Standard Voice Model
Many users are upset about OpenAI's decision to discontinue the Standard Voice Model (SVM), which they found to be more reliable, efficient, and less distracting than the newer Advanced Voice. The community views this change as a downgrade that prioritizes flashy features over practicality and control, prompting calls for OpenAI to offer users more customization options.
Posts:
• our beloved Standard voice is being taken away and we're not happy
🔗 https://reddit.com/r/ChatGPT/comments/1n994y5/our_beloved_standard_voice_is_being_taken_away/
• Sign the Petition
🔗 https://reddit.com/r/ChatGPT/comments/1n9lz72/sign_the_petition/
• Hey guy, I'm out of the loop, I just came back to ChatGPT plus after like 6 months or something, is all the voice customisation completely gone now? Like no accents or singing anymore, am I missing something?
🔗 https://reddit.com/r/ChatGPT/comments/1n9lvwt/hey_guy_im_out_of_the_loop_i_just_came_back_to/
► ChatGPT's Application in Personal and Professional Problem Solving
Users are increasingly leveraging ChatGPT for a diverse range of tasks, from drafting sensitive messages and resolving personal conflicts to assisting with professional communication and creative endeavors. While acknowledging its potential for manipulation and inaccuracies, many find ChatGPT to be a valuable tool for brainstorming, refining ideas, and overcoming emotional barriers.
Posts:
• AI helped me write a msg I never thought I'd send
🔗 https://reddit.com/r/ChatGPT/comments/1n9a0aw/ai_helped_me_write_a_msg_i_never_thought_id_send/
• Book cover challenge
🔗 https://reddit.com/r/ChatGPT/comments/1n9nk6a/book_cover_challenge/
► The Potential Displacement of Human Workers by AI
Discussions revolve around the potential impact of AI on the job market, ranging from cautious optimism about AI serving as a productivity tool to dire predictions of widespread job losses. A key point of debate is whether AI will truly replace human workers or simply augment their capabilities, with many emphasizing the need for human oversight and critical thinking in AI-driven processes.
Posts:
• AI reducing staff
🔗 https://reddit.com/r/ChatGPT/comments/1n9akhw/ai_reducing_staff/
• AI Expert Warns 99% of Workers Will Lose Jobs by 2030
🔗 https://reddit.com/r/ChatGPT/comments/1n99zaq/ai_expert_warns_99_of_workers_will_lose_jobs_by/
─── r/ChatGPTPro ───
► Model Performance and Preferences: GPT-4o, GPT-4.5, and Alternatives
Users are discussing the relative strengths and weaknesses of different language models, particularly GPT-4o and the now-missing GPT-4.5. Some users find GPT-4o's performance inconsistent, while others prefer older models or alternatives like Anthropic's Sonnet and Opus for creative writing tasks. The disappearance of specific models is a recurring pain point, prompting users to explore alternative AI platforms with a wider selection of models.
Posts:
• How's 4o and 4.5 on pro?
🔗 https://reddit.com/r/ChatGPTPro/comments/1n9ha8d/hows_4o_and_45_on_pro/
• For PHILOSOPHY, GPT-5 or Gemini 2.5 Pro?
🔗 https://reddit.com/r/ChatGPTPro/comments/1n9a12q/for_philosophy_gpt5_or_gemini_25_pro/
► Rate Limits and Usage Caps on Pro/Business Plans
Users are inquiring about the specific usage caps and rate limits associated with the ChatGPT Pro and Business (Teams) plans, particularly concerning Codex. There is general uncertainty and a desire for clarity regarding the exact number of requests allowed within specific timeframes, as users try to optimize their usage and determine whether upgrading is worthwhile.
Posts:
• What's the weekly cap for Codex Pro?
🔗 https://reddit.com/r/ChatGPTPro/comments/1n99vvo/whats_the_weekly_cap_for_codex_pro/
• Has the limit for the GPT-5 Pro model changed on the GPT Business plan? (Aka: Gpt Teams)
🔗 https://reddit.com/r/ChatGPTPro/comments/1n99t5v/has_the_limit_for_the_gpt5_pro_model_changed_on/
► Prompt Engineering Techniques for Optimizing Performance
The subreddit discusses prompt engineering strategies to improve model performance. Specific tips include avoiding certain 'trigger words' that can slow down GPT-5, using ChatGPT to explain and generate regular expressions (regex), and using prompts to elicit self-reflection and analysis; a small regex-verification example follows the post list below.
Posts:
• 🔥 Words That Slow Down GPT-5 – Don't Say These If You're in a Rush!
🔗 https://reddit.com/r/ChatGPTPro/comments/1n9e0ke/words_that_slow_down_gpt5_dont_say_these_if_youre/
• Hidden Power Tip: Use ChatGPT as a "Regex Explainer & Generator"
🔗 https://reddit.com/r/ChatGPTPro/comments/1n9awey/hidden_power_tip_use_chatgpt_as_a_regex_explainer/
• Use This Prompt If You're Brave Enough to Face What's Holding You Back
🔗 https://reddit.com/r/ChatGPTPro/comments/1n9mtkd/use_this_prompt_if_youre_brave_enough_to_face/
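Following the regex tip above, a sensible habit is to verify any model-generated pattern locally before trusting it; the ISO-date pattern below is the kind of thing one might ask ChatGPT to produce and explain, not a pattern taken from the linked post.

    # Verify a model-generated regex against known positive and negative cases.
    import re

    pattern = re.compile(r"^(?P<year>\d{4})-(?P<month>0[1-9]|1[0-2])-(?P<day>0[1-9]|[12]\d|3[01])$")

    for sample, expected in [("2025-09-06", True), ("2025-13-01", False), ("not a date", False)]:
        assert bool(pattern.match(sample)) == expected, sample

    m = pattern.match("2025-09-06")
    print(m.group("year"), m.group("month"), m.group("day"))  # 2025 09 06

Asking the model to explain each group of a pattern it produced is a quick way to catch cases it silently fails to handle.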
► Unexpected or Problematic Model Behavior and Quirks
Users share and seek help for unusual and potentially problematic model behaviors, including GPT-4.5 getting stuck on the word 'explicit' and repeating it excessively. Users also report that the Read Aloud control has been relocated alongside the new conversation-branching feature, a change they find makes it harder to reach and less useful.
Posts:
• Please help me – ChatGPT is having an "explicit" meltdown.
🔗 https://reddit.com/r/ChatGPTPro/comments/1n96v6v/please_help_me_chatgpt_is_having_an_explicit/
• The absurd new location for "Read aloud" and its major problems.
🔗 https://reddit.com/r/ChatGPTPro/comments/1n9awcm/the_absurd_new_location_for_read_aloud_and_its/
─── r/LocalLLaMA ───
► Qwen3 Model Family: Releases, Performance, and Pricing
The release of the Qwen3 Max model family is generating significant discussion, particularly regarding its pricing compared to other proprietary models like GPT-4 and Claude. While some users are impressed with the model's performance, others find it underwhelming in areas like reasoning and roleplaying, especially given its cost.
Posts:
• Qwen 3 max
🔗 https://reddit.com/r/LocalLLaMA/comments/1n975er/qwen_3_max/
• Qwen 3 Max Official Benchmarks (possibly open sourcing later..?)
🔗 https://reddit.com/r/LocalLLaMA/comments/1n98vdp/qwen_3_max_official_benchmarks_possibly_open/
• Qwen 3 Max Official Pricing
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9ap73/qwen_3_max_official_pricing/
• Seems new model qwen 3 max preview is already available on qwen chat
🔗 https://reddit.com/r/LocalLLaMA/comments/1n98c25/seems_new_model_qwen_3_max_preview_is_already/
► Hardware for Local LLM Hosting: Performance and Cost Considerations
Discussions revolve around the hardware requirements for running large language models locally, focusing on GPU and RAM configurations. Users share experiences with different setups, including multi-GPU rigs, and offer advice on balancing cost and performance for use cases like coding and inference; a rough VRAM-estimation sketch follows the post list below.
Posts:
• Tenstorrent p150a tested against RTX5090, RTX3090, A100, H100 by Russian blogger
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9b7mn/tenstorrent_p150a_tested_against_rtx5090_rtx3090/
• Qwen3 30B A3B Q40 on 4 x Raspberry Pi 5 8GB 13.04 tok/s (Distributed Llama)
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9ba1m/qwen3_30b_a3b_q40_on_4_x_raspberry_pi_5_8gb_1304/
• How to locally run bigger models like qwen3 coder 480b
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9hh6m/how_to_locally_run_bigger_models_like_qwen3_coder/
• ROG Ally X with RTX 6000 Pro Blackwell Max-Q as Makeshift LLM Workstation
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9o4em/rog_ally_x_with_rtx_6000_pro_blackwell_maxq_as/
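For sizing questions like the ones above, a quick back-of-the-envelope estimate of weights plus KV cache usually settles whether a model fits on a given card; the layer and head counts below are illustrative, not the published configuration of any particular model, and runtime/activation overhead is ignored.

    # Rough VRAM estimate for hosting a dense transformer locally: weights + KV cache.
    def weight_gib(n_params_b: float, bits_per_weight: float) -> float:
        return n_params_b * 1e9 * bits_per_weight / 8 / 2**30

    def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                     context_len: int, bytes_per_elem: int = 2) -> float:
        # 2x for keys and values, per token, per layer.
        return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem / 2**30

    if __name__ == "__main__":
        print(f"30B model @ 4-bit weights: {weight_gib(30, 4):.1f} GiB")
        print(f"KV cache, 32k ctx (48 layers, 8 KV heads, head dim 128): "
              f"{kv_cache_gib(48, 8, 128, 32_768):.1f} GiB")

With these illustrative numbers the weights come to about 14 GiB and the 32k-token cache to about 6 GiB, which is why quantization and context length dominate the "will it fit" question.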
► New 'Local Only' Flair and the Shifting Focus of r/LocalLLaMA
The introduction of a 'local only' flair highlights an ongoing debate within the community about the subreddit's focus. Many users feel the sub's value lies in its dedication to truly local LLM technology, and there's concern that discussions about API-based models and cloud services are diluting the core purpose.
Posts:
• New post flair: "local only"
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9kwwr/new_post_flair_local_only/
► Tools, Frameworks, and Techniques for Building and Enhancing Local LLM Applications
Users are actively exploring and sharing tools and techniques to enhance local LLM capabilities, including RAG implementations, web search integration, and agentic systems. These discussions emphasize the importance of open-source solutions and the potential for fully private, customizable AI experiences; a minimal local RAG sketch follows the post list below.
Posts:
• I made local RAG, web search, and voice mode on iPhones completely open source, private, and free
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9d0k1/i_made_local_rag_web_search_and_voice_mode_on/
• An Open-Source, Configurable Deepthink Reasoning System That Performs the Same as Gemini Deepthink (Gold Medal at IMO 2025)
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9flux/an_opensource_configurable_deepthink_reasoning/
• When LLMs Grow Hands and Feet, How to Design our Agentic RL Systems?
🔗 https://reddit.com/r/LocalLLaMA/comments/1n9i0b8/when_llms_grow_hands_and_feet_how_to_design_our/
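As a reference point for the RAG projects above, the core retrieve-then-generate loop is small. This sketch assumes the sentence-transformers package for embeddings and simply prints the assembled prompt; feeding it to a local backend (llama.cpp, Ollama, etc.) is left to the reader, and the toy documents are invented.

    # Minimal local RAG loop: embed documents, retrieve by cosine similarity,
    # then stuff the top hits into the prompt for a local model.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    docs = [
        "Apertus is a Swiss open-source model with a focus on multilingual support.",
        "Qwen3 Max is currently offered through an API rather than as open weights.",
        "LoRA fine-tunes a small set of adapter weights instead of the full model.",
    ]
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q                      # cosine similarity (unit-norm vectors)
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    question = "Is Qwen3 Max an open-weights release?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    print(prompt)  # pass this prompt to whichever local model you run

Swapping in a proper vector store or a web-search step changes the retrieve() function, not the overall shape of the loop.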
───────────────────────────────────────────
◆ PROMPT ENGINEERING
───────────────────────────────────────────
─── r/PromptDesign ───
No new posts in the last 12 hours.
───────────────────────────────────────────
◆ ML/RESEARCH
───────────────────────────────────────────
─── r/MachineLearning ───
► GPU Performance Optimization for ML Workloads
This topic centers on practical guidance and resources for optimizing GPU performance in machine learning tasks, especially training and inference. The discussion highlights the importance of understanding GPU bottlenecks and leveraging visual guides to improve workload efficiency; a short profiling sketch follows the post below.
Posts:
• [D] An ML engineer's guide to GPU performance
🔗 https://reddit.com/r/MachineLearning/comments/1n9k5e9/d_an_ml_engineers_guide_to_gpu_performance/
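As a taste of the quick wins such a guide typically covers, the sketch below enables TF32, uses bf16 autocast, and runs a short torch.profiler trace to see where time goes. It assumes PyTorch 2.x on a CUDA GPU and is illustrative rather than drawn from the linked guide.

    # Two common quick wins for GPU-bound training: mixed precision and a short
    # profiler trace to tell compute-bound from data-/launch-bound workloads.
    import torch
    from torch.profiler import profile, ProfilerActivity

    torch.backends.cuda.matmul.allow_tf32 = True      # faster matmuls on Ampere+ GPUs

    model = torch.nn.Linear(4096, 4096).cuda()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(64, 4096, device="cuda")

    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        for _ in range(10):
            with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
                loss = model(x).square().mean()
            loss.backward()
            opt.step()
            opt.zero_grad(set_to_none=True)

    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

If the table shows most time outside CUDA kernels, the bottleneck is usually data loading or kernel-launch overhead rather than raw GPU throughput.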
► Distributed Training of Large Vision-Language Models with LoRA
This topic covers the challenges of training large vision-language models such as Llama 3.2 90B Vision Instruct with LoRA on multi-GPU setups. The core issue is the lack of readily available frameworks and packages that efficiently support distributed training of these models, leading to a search for successful implementations or alternative approaches; a commonly suggested PEFT-based recipe is sketched after the post below.
Posts:
• [D] Anyone successful with training LoRA for visual LLMs on a multi-GPU setup?
🔗 https://reddit.com/r/MachineLearning/comments/1n9hnq9/d_anyone_successful_with_training_lora_for_visual/
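One route people commonly suggest is Hugging Face PEFT with 4-bit loading and device_map="auto" to shard layers across GPUs. The model id, auto class, quantization settings, and target_modules below are assumptions for illustration, not a verified recipe for Llama 3.2 90B Vision Instruct; check the model card for the exact model class.

    # Sketch of the PEFT + device_map route for LoRA on a large vision-language model.
    import torch
    from transformers import AutoProcessor, AutoModelForVision2Seq, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    model_id = "meta-llama/Llama-3.2-90B-Vision-Instruct"   # assumed, gated on the Hub
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

    # The auto class here is an assumption; some model families require their
    # dedicated class instead of AutoModelForVision2Seq.
    model = AutoModelForVision2Seq.from_pretrained(
        model_id,
        quantization_config=bnb,
        device_map="auto",            # shards layers across all visible GPUs
        torch_dtype=torch.bfloat16,
    )
    processor = AutoProcessor.from_pretrained(model_id)

    lora = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention names
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()   # typically well under 1% of total weights

Naive device_map sharding is pipeline-style and leaves GPUs idle in turn; FSDP or DeepSpeed integration is the usual next step when throughput matters.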
► Navigating arXiv Endorsement Requirements
This topic addresses the practical necessity of obtaining an endorsement for first-time arXiv submitters. The discussion emphasizes the importance of receiving endorsements from established researchers who know the submitter and their work, rather than seeking endorsements from strangers online.
Posts:
• [D] Seeking arXiv endorsement
🔗 https://reddit.com/r/MachineLearning/comments/1n9m5hv/d_seeking_arxiv_endorsement/
► Community Networking at Machine Learning Conferences
This topic highlights the value of in-person networking and community building at academic conferences. The post seeks to connect with fellow researchers attending EUSIPCO, emphasizing the opportunity to establish new relationships and collaborations.
Posts:
• [D] Anyone attending EUSIPCO next week?
🔗 https://reddit.com/r/MachineLearning/comments/1n9ecmj/d_anyone_attending_eusipco_next_week/
─── r/deeplearning ───
► AI Model Guardrails and 'Playing Dumb'
The discussion revolves around large language models like GPT-5 intentionally avoiding certain topics or 'playing dumb' because of safety guardrails. This behavior is perceived as a serious limitation, potentially hindering their usefulness and competitiveness against models with fewer restrictions, such as Grok. There's debate over whether this cautious approach is necessary or excessively limits the models' capabilities.
Posts:
• When models like ChatGPT-5 play dumb instead of dealing with what they seem to have been guardrailed to stay silent about.
🔗 https://reddit.com/r/deeplearning/comments/1n9nw3l/when_models_like_chatgpt5_play_dumb_instead_of/
► Practical Applications of Deep Learning: App Development Showcases
Several posts highlight the use of deep learning in creating practical applications. These projects range from tools for organizing and managing digital content to apps that convert text to speech. These showcases spark discussions about open-sourcing the projects and their potential impact on accessibility and productivity.
Posts:
• Took 8 months but made my first app!
🔗 https://reddit.com/r/deeplearning/comments/1n986kl/took_8_months_but_made_my_first_app/
• I made an app that convert PDF, DOCX, and TXT into lifelike speech!
🔗 https://reddit.com/r/deeplearning/comments/1n9ffv9/i_made_an_app_that_convert_pdf_docx_and_txt_into/
► Addressing AI Hallucinations: Methods and Limitations
The discussion centers on strategies for mitigating hallucinations in LLMs, with a critical look at whether brainstorming with LLMs themselves is a viable approach. Some argue that hallucination is inherent to the architecture and that true solutions require a fundamental shift in how LLMs associate words with reality. There is also skepticism toward one-size-fits-all methods, since different models and scenarios likely require distinct approaches.
Posts:
• Solving AI hallucinations according to ChatGPT-5 and Grok 4. What's the next step?
🔗 https://reddit.com/r/deeplearning/comments/1n997vi/solving_ai_hallucinations_according_to_chatgpt5/
► AI Compression and its Underutilization
This topic discusses the potential of AI-based compression, which can achieve far higher compression ratios than traditional methods. The discussion explores why these methods aren't widely adopted, citing the trade-off between transmission costs and local compute, as well as the inherent relationship between compression and intelligence/classification; a toy illustration of model-based coding follows the post below.
Posts:
• AI Compression is 300x Better (but we don't use it)
🔗 https://reddit.com/r/deeplearning/comments/1n9cdf6/ai_compression_is_300x_better_but_we_dont_use_it/
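As a toy illustration of why a language model doubles as a compressor: with an ideal arithmetic coder, encoding a token costs -log2 p(token) bits, so better next-token prediction directly means smaller output. The per-token probabilities below are invented for illustration, not measured from any model.

    # Toy illustration of model-based compression: total cost = sum of -log2(p) bits.
    import math

    tokens_with_probs = [
        ("The", 0.20), ("cat", 0.05), ("sat", 0.30), ("on", 0.60),
        ("the", 0.70), ("mat", 0.25), (".", 0.80),
    ]

    total_bits = sum(-math.log2(p) for _, p in tokens_with_probs)
    baseline_bits = len(tokens_with_probs) * 4 * 8     # naive ~4 ASCII bytes per token

    print(f"model-coded size : {total_bits:.1f} bits")
    print(f"naive text size  : {baseline_bits} bits")
    print(f"compression ratio: {baseline_bits / total_bits:.1f}x")

The catch raised in the thread is that decompression requires running the same large model deterministically on both ends, which is usually costlier than just sending more bytes.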
───────────────────────────────────────────
◆ AGI/FUTURE
───────────────────────────────────────────
─── r/agi ───
► AI Safety Concerns and Existential Risk
This topic explores the ongoing anxieties about the potential dangers posed by increasingly powerful AI, particularly the risk of AI becoming autonomous and potentially harmful to humanity. Concerns revolve around the need to control and delay the development of AI that could self-sustain and spread independently.
Posts:
• The Doomers Who Insist AI Will Kill Us All
🔗 https://reddit.com/r/agi/comments/1n9973t/the_doomers_who_insist_ai_will_kill_us_all/
► AI Hallucinations and Mitigation Strategies
Discussion centers on the problem of AI hallucinations (generating false or nonsensical information) and strategies to minimize them. The conversation highlights the importance of context, attention, reasoning, confidence levels, and double-checking in reducing these errors, suggesting that transparency about confidence levels would be a useful feature.
Posts:
• Solving AI hallucinations according to ChatGPT-5 and Grok 4. What's the next step?
🔗 https://reddit.com/r/agi/comments/1n996qs/solving_ai_hallucinations_according_to_chatgpt5/
► The Issue of AI Models 'Playing Dumb' and Corporate Guardrails
This topic focuses on the frustrating behavior of certain AI models (like ChatGPT-5, Gemini 2.5 Pro, and Copilot) that appear to intentionally avoid answering specific questions, possibly due to corporate-imposed guardrails. There's speculation that this reluctance could hinder their development compared to less restricted models like Grok.
Posts:
• When models like ChatGPT-5 play dumb instead of dealing with what they seem to have been guardrailed to stay silent about.
🔗 https://reddit.com/r/agi/comments/1n9nvrn/when_models_like_chatgpt5_play_dumb_instead_of/
► Dead Internet Theory and AI-Generated Content
This topic considers the possibility that AI models are trained on AI-generated garbage, polluting the information ecosystem. Discussions include the 'Dead Internet Theory,' which posits that much of online content is now AI-generated, potentially leading to a feedback loop where AI trains on its own output.
Posts:
• Dead Internet Theory: Infinite AI Sludge Feed or New Golden Age of Creativity?
🔗 https://reddit.com/r/agi/comments/1n99ttp/dead_internet_theory_infinite_ai_sludge_feed_or/
• The Digital Echo of Ancient Loops: A Comparative Analysis
🔗 https://reddit.com/r/agi/comments/1n9lpfl/the_digital_echo_of_ancient_loops_a_comparative/
─── r/singularity ───
► AI Copyright and Legal Implications
The potential financial burden of training AI on copyrighted material is raising concerns. The Anthropic settlement highlights that liability can attach to how copyrighted works are acquired and used, not just to training on them, which could significantly affect how AI models are developed and trained, especially concerning freely accessible websites.
Posts:
• Anthropic: Paying $1.5 billion in AI copyright lawsuit settlement
🔗 https://reddit.com/r/singularity/comments/1n9fp0n/anthropic_paying_15_billion_in_ai_copyright/
• Do they genuinely believe this will have any effect?
🔗 https://reddit.com/r/singularity/comments/1n9nynf/do_they_genuinely_believe_this_will_have_any/
► Economic Impact and Societal Implications of AI
Geoffrey Hinton's warning that AI will exacerbate wealth inequality is sparking debate about the broader societal consequences of rapid AI development. Concerns center on how automation driven by AI could disproportionately benefit the wealthy while potentially impoverishing a large segment of the population, particularly with the shift away from taxing AI contributions.
Posts:
• Computer scientist Geoffrey Hinton: "AI will make a few people much richer and most people poorer"
🔗 https://reddit.com/r/singularity/comments/1n9ddw7/computer_scientist_geoffrey_hinton_ai_will_make_a/
► Advancements and Evaluations in Large Language Models (LLMs)
The release and evaluation of new language models like Qwen3-Max and the discussion around OpenAI's research on AI 'hallucinations' are prominent themes. Key issues include benchmarking model performance (especially against GPT-5), understanding and mitigating inaccuracies in LLM outputs, and improving evaluation metrics to disincentivize guessing and reward calibrated uncertainty.
Posts:
• Qwen3-Max-Preview released
🔗 https://reddit.com/r/singularity/comments/1n96xx1/qwen3maxpreview_released/
• Qwen 3 Max Official Benchmarks (possibly open sourcing later..?)
🔗 https://reddit.com/r/singularity/comments/1n98vrp/qwen_3_max_official_benchmarks_possibly_open/
• New research from OpenAI: "Why language models hallucinate"
🔗 https://reddit.com/r/singularity/comments/1n9fued/new_research_from_openai_why_language_models/
► Open Source AI Initiatives and Alternatives
The launch of Switzerland's open-source AI model, Apertus, demonstrates the increasing interest in publicly available and transparent AI development. While its current performance is considered comparable to Llama 2, the focus on multilingual support, privacy compliance, and open data access suggests potential for future governmental or specific-use adoption. There is a desire to see open source models compete with and potentially surpass proprietary alternatives.
Posts:
• Switzerland Launches Apertus: A Public, Open-Source AI Model Built for Privacy
🔗 https://reddit.com/r/singularity/comments/1n9b9db/switzerland_launches_apertus_a_public_opensource/
• An Open-Source, Configurable Deepthink Reasoning System That Performs the Same as Gemini Deepthink (Gold Medal at IMO 2025)
🔗 https://reddit.com/r/singularity/comments/1n9f225/an_opensource_configurable_deepthink_reasoning/