Meetup Sat March 4, 2pm - 6pm


Timothy Underwood

Feb 19, 2023, 3:12:53 AM
to rationalit...@googlegroups.com
At my apartment as usual:

66/b/1 Ulloi Ut
ring/csengo 27
floor 6, door 3
There is an elevator.
There is also a dog who will whine while we eat snacks, a cat who will disappear and at most be glimpsed, and a perfect, beautiful nursing baby.

To make organizational matters easier, and so that my wife knows further ahead of time what events I'm scheduling and can plan around them, I'm planning to run these meetings on the first Saturday of the month, except in weird and exceptional circumstances.

This month's meeting was particularly fun, with a great discussion about how human brains and neural nets work. I'd like to specifically mention that I appreciated the leading role Adam took in the conversation, and I hope to see him come by more often.

My suggested readings are going to continue the ChatGPT theme, since that is the most fun thing going on.

This skeptical POV, that it will fall upon Milan to refute in irrefutable detail:

And our favorite recent story of a man falling in love with an AI:


Note: even if you don't read the suggested readings, you should definitely show up. The more people, the more fun the conversation, and there are very much no requirements for showing up beyond being vaguely interested.

So I hope to see lots of you there in two weeks -- I'll also probably send another email a few days before the event, since the meetup this month had an unusually large number of people, and the fact that there were two emails (and an actual discussion of Milan's reading recommendation) may have been part of that.

Tim

istatic

Feb 19, 2023, 9:17:22 AM
to rationalit...@googlegroups.com
Thanks Tim for the link recommendations!

I started reading the first one, but I'm a bit confused; are you endorsing this author's position? If so, which of his points, if I may ask?

Here is a link from me too:
https://www.lesswrong.com/posts/PE22QJSww8mpwh7bt/agi-in-sight-our-look-at-the-game-board

Some select quotes from it:

"- AGI by default very soon: brace for impact [...] Significant probability of it happening in less than 5 years
- No safety solutions in sight: we have no airbag
- Race ongoing: people are actually accelerating towards the wall"


neurhlp

Mar 2, 2023, 1:32:13 PM
to rationality-budapest
Hi!

I have browsed some articles about LLMs by non-AI experts, and I have realized that even people heavily criticizing ChatGPT are dangerously underestimating the technology. Their understanding is very surface-level, but they actively use that understanding to predict its dangers. This is likely the same with most policy makers.

Basically most of this boils down to something like these logical extremes:
- LLMs predict the next word with a statistical model, so they cannot understand anything. They are simple functions (text goes in, a probability distribution goes out; see the sketch after this list).
- LLMs cannot understand anything, so they are not intelligent in any way.
- LLMs have the same intelligence as a shovel and zero agency, so they pose absolutely zero danger, no matter how advanced they get in the future.
- LLMs are one of the most highly developed "AI" systems today, so no AI system in the near future will have any intelligence, understanding, or agency at all. Therefore, for the next few decades (or centuries), we are perfectly safe from any dangers of AI.
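
To make the first extreme concrete, here is a minimal sketch of the "text goes in, probability distribution goes out" view, in Python, using GPT-2 via the Hugging Face transformers library (my own illustration; any causal language model would do):

    # Minimal sketch of "text in, probability distribution out",
    # using GPT-2 via Hugging Face transformers (any causal LM works).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("The cat sat on the", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token

    # Print the five most likely next tokens.
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")

The interface really is that simple: text in, a distribution over the next token out. The open question is what the internals computing that distribution amount to, and that is exactly where these extremes go wrong.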

Each statement has some significant truth in it, but is catastrophically wrong if we wish to base policy on it.
My problem is that many comments and articles accept some or all of these logical extremes without any hedging.

Articles on LessWrong seem to deal with all of the points I have raised, and in the previous discussion we touched upon some of them too. But we may want to discuss them more systematically, so everybody is on the same page.
Pop-science coverage completely misunderstands LLMs, so it is difficult to be genuinely informed about this topic.

Adam Rak

istatic

Mar 3, 2023, 1:06:45 AM
to rationalit...@googlegroups.com
Thank you Adam, I agree with much of what you wrote.

On a related note, to whom it may concern, here's some food for thought for tomorrow:

[attached image; see the context link below]

Some context:

https://ourworldindata.org/brief-history-of-ai


istatic6

Mar 7, 2023, 1:02:03 AM
to rationality-budapest
https://palm-e.github.io/

Partly addressed to you, David and Tim, since you exhibited varying levels of skepticism about multi-modality, spatial awareness, embodiability, goal-directedness, and continuous video understanding.