Is Today Another Day Walkthrough


Dashonn Moulton
Aug 4, 2024, 6:48:25 PM
to tranathtilen
The first thing people try to do with AI is what it is worst at: using it like Google, asking it to tell them about their company, look up their name, and so on. These answers are terrible. Many of the models are not connected to the internet, and even the ones that are make up facts. AI is not Google. So people leave disappointed.

Second, they may try something speculative, using it like Alexa and asking it questions, often about the AI itself: Will AI take my job? What do you like to eat? These answers are also terrible. With one exception, these AI systems have no personality, are not programmed to be fun like Alexa, and are not oracles for the future. So people leave disappointed.


None of these uses reflect what AI is actually good at or how it can be helpful, and they can blind you to the real power of these tools. I want to show you some of why AI is powerful, in ways both exciting and anxiety-producing.


The first four (including Bing) are all OpenAI systems. There are basically two major OpenAI AIs today: 3.5 and 4. The 3.5 model kicked off the current AI craze in November; the 4 model just premiered and is much more powerful. A new variation uses plugins to connect to the internet and other apps, but it is only in early testing, as is an extremely powerful version of ChatGPT that can run Python programs. If you have never paid for OpenAI, you have only used 3.5. Aside from the plugins variation, none of these models are connected to the internet.


For right now, no other general AI tool comes even close to GPT-4, which you can access at Bing for free or by purchasing a $20/month subscription to ChatGPT. GPT-3.5 is also good at writing and is much faster. I have experimented a lot with how to use AI to help with written material, so here is a list of ways you might find it useful:


Stable Diffusion, which is open source and can be run on any high-end computer. It takes effort to get started, since you have to learn to craft prompts properly, but once you do it can produce great results. It is especially good for combining AI with images from other sources. Here is a nice guide to Stable Diffusion if you go that route (be sure to read both part 1 and part 2).


Midjourney, which is the best system as of early 2023. The reason I would suggest Midjourney is that it has the lowest learning curve of any system: just type in "thing-you-want-to-see --v 5" (the --v 5 at the end is important; it invokes the latest model) and you get a great result. Midjourney requires Discord. Here is a guide to using Discord.


These systems are also trained on existing art from the internet in ways that are not transparent and are potentially legally and ethically questionable. Though technically you own the copyright to the images created, the legal rules are still hazy.


Despite (or in fact, because of) all its constraints and weirdness, AI is perfect for idea generation. You often need to have a lot of ideas to have good ideas. Not everyone is good at generating lots of ideas, but AI is very good at volume. Will all these ideas be good, or even sane? Of course not. But they can spark further thinking on your part.


It is now trivial to generate a video with a completely AI generated character (you can use the images generated using the techniques in the guide), reading a completely AI-written script, talking in an AI-made voice, animated by AI.


It can also deepfake people, as you can see in this link where I deepfaked myself. Instructions and more information here. Use with caution, but this can be great for explainer videos and introductions. And within a few months you are likely to be able to generate videos from text prompts, so stay tuned.


AI can be a powerful tool for learning and exploration. I have written about how it can be used for teaching, to make teachers' lives easier and their lessons more effective, but it can also support self-guided learning. Some ways to do that in ChatGPT:


There are many ethical concerns you need to be aware of. AI can be used to infringe on copyright, or to cheat, or to steal the work of others, or to manipulate. And how a particular AI model is built and who benefits from its use are often complex issues, and not particularly clear at this stage. Ultimately, you are responsible for using these tools in an ethical manner.


Hey Ethan, this is a great rundown! For a part 2, you might want to look at some of the add-on tools people have built around the AI APIs and see if they're useful. I've been using Lex as my writing companion.


We live in an era of practical AI, but many people haven't yet experienced it, or, if they have, they might have wondered what the big deal is. Thus, this guide. It is a modified version of one I put out for my students earlier in the year, but a lot has changed. It is an overview of ways to get AI to do practical things.


Large Language Models like ChatGPT are extremely powerful, but are built in a way that encourages people to use them in the wrong way. When I talk to people who tried ChatGPT but didn't find it useful, I tend to hear a similar story.


If people still stick around, they start to ask more interesting questions, either for fun or based on half-remembered college essay prompts: Write an article on why ducks are the best bird. Why is Catcher in the Rye a good novel? These are better. As a result, people see blocks of text on a topic they don't care about very much, and it is fine. Or they see text on something they are an expert in, and notice gaps. But it is neither that useful nor incredibly well-written. They usually quit around now, convinced that everyone is going to use this to cheat at school, but not for much else.


Microsoft's Bing uses a mix of 4 and 3.5. It is connected to the internet. Bing is a bit weird to use, but powerful. Here is my guide to using it. In addition, Google has released a disappointing AI called Bard (though they may show us more impressive models soon), and Anthropic has released Claude, though it is more focused on business users. So what can you do with these things?


Writing anything. Blog posts, essays, promotional material, speeches, lectures, choose-your-own-adventures, scripts, short stories: you name it, it does it. But you can't just give it basic prompts. Basic prompts result in boring writing. Getting good writing out of ChatGPT takes some practice, and here is a guide to doing that. ChatGPT-4 is much better at writing. Bing can be incredible at writing, but needs some convincing.
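To make the basic-versus-detailed prompt point concrete, here is a minimal sketch. The build_prompt helper and its fields are purely illustrative assumptions, not part of ChatGPT or any API; the idea is just that layering in audience, tone, and constraints produces a far richer prompt than a bare request.

```python
# Illustrative sketch: assembling a detailed prompt from components.
# build_prompt and its parameters are hypothetical, for demonstration only.

def build_prompt(task, audience=None, tone=None, constraints=()):
    """Turn a bare task into a richer prompt by adding context."""
    parts = [task]
    if audience:
        parts.append(f"Write for this audience: {audience}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    for c in constraints:
        parts.append(f"Constraint: {c}.")
    return " ".join(parts)

# A basic prompt is just the task, and tends to produce boring writing.
basic = build_prompt("Write a blog post about ducks.")

# A detailed prompt layers in audience, tone, and constraints.
detailed = build_prompt(
    "Write a blog post about ducks.",
    audience="bird-watching beginners",
    tone="playful but informative",
    constraints=["under 500 words", "include three surprising facts"],
)
print(detailed)
```

The detailed version gives the model something to work with, which is the whole trick behind getting good writing out of these systems.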


Make your writing better. Paste your text into ChatGPT. Ask it to improve the content, or for suggestions about how to make it better for a particular audience. Ask it to create 10 drafts in radically different styles. Ask it to make things more vivid, or to add examples. (Though note it only "remembers" a couple thousand words of text.)
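One practical way to work around that limited "memory" is to paste a long document in pieces. The sketch below splits text by word count; note that real models actually measure context in tokens, not words, so the 1,500-word limit here is just an illustrative proxy, not a real model parameter.

```python
# Sketch: splitting a long text into chunks small enough to paste one
# at a time. Models count tokens, not words; max_words=1500 is only a
# rough illustrative stand-in for a real context limit.

def chunk_text(text, max_words=1500):
    """Split text into pieces of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 4,000-word document becomes three pasteable pieces.
pieces = chunk_text("word " * 4000, max_words=1500)
print(len(pieces))  # 3 chunks: 1500 + 1500 + 1000 words
```

You can then feed the chunks in sequence, asking the AI to hold its suggestions until it has seen the whole text.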


Some things to worry about: in a bid to respond to your questions, it is very easy for the AI to "hallucinate" and generate plausible-sounding facts. It can produce entirely false content that is utterly convincing. Let me emphasize that: AI lies continuously and well. Every fact or piece of information it tells you may be incorrect. You will need to check it all. Particularly dangerous is asking it for math, references, quotes, citations, and information from the internet (for the models that are not connected to the internet). Bing and ChatGPT-4 are better at this. Here is a guide to avoiding hallucinations.


The AI also doesn't explain itself; it only makes you think it does. If you ask it to explain why it wrote something, it will give you a plausible answer that is completely made up. It is not interrogating its own actions; it is just generating text that sounds like it is doing so. This makes understanding biases in the system very challenging, even though those biases almost certainly exist.
