
Find contact center themes: You can use LLMs to parse raw user feedback data from your call center. Have the system search for themes for you, or ask it directly how often login problems, site slowness, or accessibility issues appeared (see the sketch after this list).

Understand open-ended survey results: You can also leverage LLMs to summarize open-ended survey feedback from tools such as Qualtrics, Medallia, or Glint. While you can still read the responses yourself, AI can help you find hotspots quickly.

Find bugs from customer reports: Consider having LLMs help you identify bugs from high-volume user reports. Many companies still have someone review these manually, but machines outdo humans at quickly scanning large bodies of reports and flagging issues that are spiking.
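To make the first idea concrete, here is a minimal sketch of theme tagging with the OpenAI Python client. The theme list, model name, and sample comments are illustrative placeholders, not anything from a real call center.

```python
# Minimal sketch: tag raw contact-center comments with a fixed theme list using
# the OpenAI Python client. Theme names, model, and comments are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

THEMES = ["login problems", "site slowness", "accessibility", "other"]

def tag_theme(comment: str) -> str:
    """Ask the model to pick the single best-fitting theme for one comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the user comment into exactly one of: "
                        + ", ".join(THEMES) + ". Reply with the theme name only."},
            {"role": "user", "content": comment},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in THEMES else "other"

feedback = [
    "I can't sign in after the latest update.",
    "Pages take forever to load on mobile.",
    "The screen reader skips the checkout button.",
]
print(Counter(tag_theme(c) for c in feedback))  # how often each theme appears
```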

Writing a SQL query that uses timestamps to work out how many people were active at any given time is a real pain - entire SQL variants have been written to solve it! But ChatGPT Plus writes out a working query for it on request.
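If you want to sanity-check the idea outside SQL, the same interval-overlap trick is easy to sketch in pandas; the session table below is made up for the example.

```python
# Count concurrent users from session start/end timestamps: turn each session
# into a +1 event at its start and a -1 event at its end, then take a running
# sum. Column names and data are hypothetical.
import pandas as pd

sessions = pd.DataFrame({
    "user_id":  [1, 2, 3],
    "start_ts": pd.to_datetime(["2024-07-01 10:00", "2024-07-01 10:05", "2024-07-01 10:20"]),
    "end_ts":   pd.to_datetime(["2024-07-01 10:30", "2024-07-01 10:15", "2024-07-01 10:40"]),
})

events = pd.concat([
    pd.DataFrame({"ts": sessions["start_ts"], "delta": 1}),
    pd.DataFrame({"ts": sessions["end_ts"],   "delta": -1}),
]).sort_values("ts")

events["concurrent_users"] = events["delta"].cumsum()
print(events[["ts", "concurrent_users"]])  # concurrency at every change point
```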

Multiple large organizations are already developing internal bots to help analysts find the right dataset and query the right metrics. This becomes possible with fine-tuning and some of the more advanced retrieval mechanisms.
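As a rough illustration of the retrieval half of that idea, here is a minimal sketch that matches an analyst's question to dataset descriptions with embeddings. The catalog, model name, and question are hypothetical; a production bot would add a real vector store, metadata, and fine-tuning.

```python
# Minimal retrieval sketch for an internal "which dataset should I use?" bot.
import numpy as np
from openai import OpenAI

client = OpenAI()

catalog = {
    "fact_daily_active_users": "Daily active users by country and platform.",
    "fact_payments":           "Payment transactions with amounts and status.",
    "dim_support_tickets":     "Customer support tickets with category and resolution time.",
}

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(list(catalog.values()))

def find_dataset(question: str) -> str:
    """Return the catalog entry whose description is most similar to the question."""
    q = embed([question])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return list(catalog.keys())[int(np.argmax(sims))]

print(find_dataset("How many users logged in yesterday in Brazil?"))
# -> likely "fact_daily_active_users"
```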

More About Edmund: Based in Seattle, Edmund Helmer is the Director of Analytics at Mountaintop Studios. He has over a decade of experience in analytics, artificial intelligence, and applied natural language processing.

Note from Deb: From time to time, I bring in guests to write about topics I want to learn about. This month, I invited Edmund Helmer, a brilliant data scientist I worked with back at Meta, to share his take on AI's impact on how you look at data. He shares both practical and more advanced uses of Gen AI as you work with data. I tested several of the "easy" ideas myself and found new ways of working thanks to his insights.

Applied AI has two stories: It's useful when used well, and it's dangerous when used carelessly. AI in analytics is the same. Like Mickey Mouse in "The Sorcerer's Apprentice," it can be both a powerful magician and a flooder of basements.

With all the hype around LLMs, sometimes people forget one of their original core functions: classic language processing! It used to be messy and complicated to take a body of text and convert it into usable, actionable data. Now, it's incredibly easy. Here are a few examples:

LLMs are phenomenal at text summarization, so take advantage of Google's NotebookLM. This tool lets you input large amounts of open-ended text and talk to it; it's quite adept at getting initial looks at user data.

As an example (below), I've imported reviews of a game (Dave the Diver) and asked for a summary of what people are saying about the art. It's worth noting that there's still a hallucination risk, so Google has wisely provided a UI that includes clickable "citations." As long as you check those, this makes it a fantastic tool for quickly summarizing raw text documents of any kind, especially ones that include user feedback.

Sentiment analysis (marking comments as positive or negative) used to require a fair bit of programming. But last year, researchers at Princeton and New York University proved that LLM-based sentiment analysis was comparable to (and sometimes better than) prior methods, and they were kind enough to host the code. Take a look at the main function here, barely a dozen lines of code, and then at the results on the right. Sentiment analysis is now shamefully easy.

When might this be useful? If you need to understand the general sentiments in large amounts of written text - reviews, comments, or free-form feedback about anything - you can simply plug it into a GPT-4 (or GPT-3.5) API call. You'll receive summaries of sentiment. Here's a sample of sentiment coding from a Reddit thread on the movie "Oppenheimer."
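Here is a hedged sketch of what that API call can look like with the OpenAI Python client. The model name, prompt wording, and sample comments are illustrative, and this is not the researchers' hosted code.

```python
# Label each comment as positive, negative, or neutral with a single chat call.
from openai import OpenAI

client = OpenAI()

def sentiment(comment: str) -> str:
    """Return a one-word sentiment label for a piece of free-form text."""
    response = client.chat.completions.create(
        model="gpt-4",  # or gpt-3.5-turbo, as mentioned above
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Label the sentiment of the comment as positive, negative, "
                        "or neutral. Reply with one word."},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip().lower()

comments = [
    "The pacing dragged in the middle, but the ending was incredible.",
    "Three hours of my life I will never get back.",
]
print({c: sentiment(c) for c in comments})
```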

SQL, statistics, and code assistance are fantastic uses of ChatGPT (and soon, Gemini), as long as they're used in situations where hallucinations get caught. Whether you're an initiate or you're already a wizard with data, LLMs can guide you. Note: Some technical SQL/code ahead!

I grew up in my career learning R, and Python has always been something I've tried to avoid for stats and modeling. But thanks to GPT-4, I feel quite comfortable using Python now. Why? I started coding things in R, asked ChatGPT to translate, and learned.
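For the flavor of that workflow, here is a hypothetical example: a one-line R aggregation shown as a comment, followed by the kind of pandas translation ChatGPT tends to produce. The data frame and columns are made up.

```python
# R original:
#   df %>% group_by(region) %>% summarise(avg_revenue = mean(revenue))
# A typical pandas translation:
import pandas as pd

df = pd.DataFrame({
    "region":  ["NA", "EU", "NA", "EU"],
    "revenue": [120.0, 80.0, 150.0, 95.0],
})

avg_revenue = (
    df.groupby("region", as_index=False)["revenue"]
      .mean()
      .rename(columns={"revenue": "avg_revenue"})
)
print(avg_revenue)
```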

Obviously, don't actually cheat... But if you are an interviewer, I recommend taking your analytics interview questions and seeing how ChatGPT performs against them. Tweak the questions to make sure they're AI-proof - and it may be worth considering whether AI-answerable questions are really what we want to test for anymore. As an interviewee, you can ask ChatGPT to give you a mock analytics interview. After you write out your answers, ask it to assess them and get insights into the things you missed.

Finally, when it comes to AI-for-data, I believe documentation may be the sleeper hit. For example, below is a cell in the notebook tool Hex, which has a one-click, AI-powered "explain" button. The left is the original, the right is the auto-annotation, and it really is magic. If I were new to R, the comments here would make it significantly faster to read. In this case, the system even identifies the algorithm in use!

Why does documentation matter for AI in data specifically? Firstly, analytics is often a Swiss Army knife job; we have to switch between many codebases and systems, and LLM-generated documentation allows us to traverse between them easily. Secondly, documentation itself is the interface for further LLM usage. Many AI systems now ask for English documentation within, or adjacent to, codebases to function well, so it may be fruitful to get ahead of the curve.

There's a lot of hype around automating analytics. I love the idea of it (even if it's also a bit scary, for those of us with analytics jobs). And there definitely is potential! However, the current hype has also outrun the current capabilities. Overall, analytics work is mostly making sure data questions align with business questions; making sure insights are communicated well; and the blood, sweat, and tears work of making sure that the data logged and aggregated represents reality. AI applications are not solving those issues yet, and in many cases they can make them worse.

It's a rare week when I don't see a new startup promising to "automate analytics" with a "Text-to-SQL" product. Let's examine that idea: text-to-SQL would allow non-analysts to pull data themselves, and it would speed up analyst work itself, right? I don't think so. Here's why.

That text-to-SQL framework - a clear question, a generated query, a clean answer - is somewhat complete, on rare occasions. However, these situations, in which someone knows the exact right data to pull and there exists a perfect database to pull it from, are the dramatic minority.

Instead, I think it's useful to zoom out to a bigger analytics workflow. Here, it becomes clearer that believing we've automated analytics while only solving text-to-SQL feels a bit like selling a robotic chef who can only cut pickles.

Some folks are also trying to use AI for direct insight generation. OpenAI themselves even have an "Advanced Data Analysis" plugin for ChatGPT. Does this work? Eh... a bit. There are a few problems:

LLMs currently excel with close supervision. If you give them data but don't tell them how to analyze it, they're going to flounder. Currently, you still need to tell them what kind of analysis you're looking for.

Similar to text-to-SQL, analytics problems are rarely a case of, "Here's the perfect dataset and a concrete question." They're usually muddy affairs involving business, people, and murky hypotheses.

So what is the risk, exactly, of AI in analytics? If you ask the average data scientist, they all seem to answer, "Hallucination." Which is not wrong. LLMs do just make things up, and with enough confidence to be scary. But that's also not what keeps me up at night. What keeps me up at night is a future world in which AI makes all data easily available (and accurate!) to anyone in a company - but for some reason decisions seem even less data-informed than ever.

I started my career, very briefly, in journalism. I had hoped for a time when the internet would bring costless facts and endless truth to the world, making journalism an ever-more-valued part of a now completely truth-oriented society. That didn't exactly work out. I think I now understand why, and I suspect this exact pattern is about to play out with data in many organizations.
