Comments
I'm curious, for anyone who has read a lot of Yudkowsky's and Scott Alexander's writing (I even read them for entertainment): how do you feel about the advances in AI, all happening so fast and at such magnitude?
Over the last few weeks I've felt the opposite of this. I go back and forth between thinking the models are plateauing and then getting surprised by the new Sonnet version or o1-preview. I also experiment a lot with my own prompting.
Personally, I feel like every two weeks or so we see this kind of post in a style similar to Eliezer's, so it feels repetitive... but I may be wrong; that's just my reaction after seeing that post.
I'm kind of against it. There's a line and I draw it there; it's just too much power waiting to fall into a bad actor's hands...
Might you be underestimating how hard it is to build a startup?
Hey, I think you should also consider the out-of-nowhere, narrative-breaking nature of COVID, which happened after you wrote this. It's not necessarily proof that the narrative can "break," but it sure is an example.
And while I read the Sequences well over four years ago, if there's one thing they gave me, it's a sense that "everything can change very, very fast."
Thank you for being a temporary asshole; that is a great comment. Do you have any idea of how it could be done?
The first prediction market about this appeared on November 8 on Polymarket, and surprisingly it gave a 94 percent probability that they would not halt withdrawals.
No magic needed... someone sees that SBF offers $3 billion to buy a piece of Twitter, gets spooked, and that information gets aggregated into the prediction market -> this raises the probability that FTX is doing something shady (and any other assessment gets aggregated too). Now, when FTX offers EA some money, we know the probability of them doing something shady... and we have some information to make a better decision. (I noticed Scott Alexander writing about this.)
I don't think it's quite the mystery you think it is... if you make the same overleveraged, high-stakes bet and come out winning ten times out of ten, you will likely keep doing it (this is the nature of a Ponzi scheme). I think they could easily have found investors if they had been able to sustain the bank run and prevent the balance-sheet leak.
The latter, I think, is the real mystery.
SBF probably knew at all times the dollar size of a bank run he could sustain before being forced to lock withdrawals, plus which whales could dump FTT at any given time (one of them being the CEO of Binance).
While most people think Binance's CEO Changpeng Zhao made a huge move by dumping all of his FTT, I think it's terrible for him -- it highlights the fragility of the entire enterprise and shifts all eyes onto the remaining centralized exchanges. (I imagine SBF's line of thinking went something like this: he didn't think Zhao would martyr himself, but he did, and thus we're here.)
Hehe, Inadequate Equilibria is one of my favorite books ever, so I'm really glad this study came up. It supports not only the SAD-specific section of the book, but the overall point.
I would recommend focusing only on the object-level problem you're trying to solve.
For example:
- For programming, things like more monitors and a better IDE (or extensions, knowing how to navigate back and forward, having a panel with recently opened files, recently refactored functions, etc.) will help.
- For conversations, you can apply some heuristics at different points in the conversation: what are we talking about again? What did they mention? Are we at the midpoint, the ending, etc.?
In mathematics, notation is basically a solution to our small working memory; you just have to find the analogue for whatever you're trying to solve. I doubt anything will permanently fix low working memory in the long term (e.g. dual n-back). You can of course try acetylcholine-releasing agents or reuptake inhibitors, which will make you more vigilant (the mildest being coffee). There's also some evidence pointing to fasting, sleep deprivation, and niche nutritional protocols such as carnivore diets (or rather, strict exclusion diets where you remove foods progressively until you find the one that does not suit you).
This is an excerpt I want to highlight for everyone in this forum:
> I agreed and said I was planning on doing it next Monday... anyway, 2 years later on <some> Monday I finally found the time.
Write out your thinking, expand on your comments, even if it takes 2 years!! I think there will always be someone who will derive value from it.
This kind of honesty is almost unheard of, and I appreciate it a lot. Please just take it step by step; I can assure you that no one is out to get you and no one thinks you're being childish. Quite the contrary: you come across as honest and smart in your account.
I feel like many members of this community have had very similar experiences. I know that Scott Alexander, Eliezer, lukeprog, and Aella, for example, have had to make massive updates to their beliefs, and luckily they've written about it. I think you would get value from reading their accounts.
You don't have to make immediate updates; these take time, and you can make them only when you're ready. In the end, your self-preservation and self-actualization are the most important things you can optimize for.
I didn't read any bad intent in P's comment. I also got the general sense from the post that you were looking for help, rather than for us to tell you how you can help LW, and I'm guessing that was the spirit of the comment.
I feel like you are quite a smart person (and still very young), but working from wrong assumptions that may be blocking a very meaningful kind of self-development that is key to life (being independent, belonging to a group that cares about you), and I think you would benefit greatly from trying to accomplish these goals, however hard they may seem to you.
While you've had a very unique experience growing up that may have wired you a certain way, that is no basis for concluding that you cannot live any other way or change your mind (e.g. how sure are you that you don't have a job because you can't take "another person telling me what to do," as opposed to just being plain afraid of doing something you have virtually never done before?). I think navigating this reversal in assumptions will bring the most value to you and to your content here.
I really enjoyed the post, and I appreciate your honesty and the chance to read about your experience. Your unique life experience will bring unique insight, and I think this is how you can help others.
There's also this video where Rhonda Patrick goes in depth on this.
The main chemical compound in broccoli and other cruciferous vegetables is sulforaphane, which has various health benefits. It's synthesized by an enzyme called myrosinase, which is very heat sensitive; this paper https://sci-hub.se/10.1016/j.ctrv.2010.01.002 has a table showing the different and optimal boiling temperatures.
Not to mention the fact that different foods have different cooking times, and if you overcook one (e.g. broccoli) you risk losing all of its nutritive properties.
I think he was running the same algorithm he used when the LW community "failed" to buy bitcoin in bulk. Here's the response, if you're interested in reading about a case similar to this one.
I didn't mean to discuss sentience here; I was looking more at the usefulness and interestingness of the conversation: the creativity and funniness behind the responses. Everyone I've ever met and conversed with for more than ~30 minutes showed a very different quality from this conversation. This conversation never made me think or laugh the way conversing with a human does.
For example, if they quote Les Misérables or any other book, it would be through the way it relates to them on a personal level, a particular scene or dialogue that struck them in a very particular way and has stayed with them ever since, not a global summary of what it is, scraped from who knows what website. If I were to believe this A.I. is sentient, I would say it's a liar.
If someone gave the response that LaMDA gave, I would bet they hadn't actually read the book, would never claim to have read it, and would never bring it up in conversation in the first place. This differs from person to person (everyone will give different answers), and it's not something I would ever find by searching Les Misérables on Google.
This is to say that I have gained nothing from conversing with this supposed A.I., for the same reason no one converses with GPT-3, and why people do actually use DALL-E or GitHub Copilot. I'm not asking it to write a symphony; just make me laugh once, make me think once, help me with some problem I have.
“Universal love,” said the cactus person.
“Transcendent joy,” said the big green bat.
“Right,” I said. “I’m absolutely in favor of both those things. But before we go any further, could you tell me the two prime factors of 1,522,605,027,922,533,360,535,618,378,132,637,429,718,068,114,961,380,688,657,908,494,580,122,963,258,952,897,654,000,350,692,006,139?”
“Universal love,” said the cactus person.
“Transcendent joy,” said the big green bat.
Boom, LaMDA is turned off... so much for sentience.
Someone ran the same questions through GPT and got similar responses back, so that's a point toward this not being a hoax, just a sophisticated chatbot. It still doesn't rule out editing or cherry-picking.
Now, while I find this article somewhat interesting, it's still missing the point of what would get me interested in the first place... if it has read Les Misérables and can draw conclusions about what it is about, what else has LaMDA read? Can it draw parallels with other novels?
If it had responded with something like, "Actually... Les Misérables is plagiarized from so-and-so; you can find similar word structure in this book..." (something truly novel, or funny), that would have made the case for sentience more than anything. I think the responses about being useful are correct to some extent, since the only reason I use Copilot is that it's useful.
So this point would actually be more interesting to read about, e.g. has LaMDA read interesting papers, and can it summarize them? I would be interested in seeing it asked difficult questions... in someone trying to get something funny or creative out of it. But since this wasn't shown, I suspect those questions were asked and the responses were edited out.
Good comments, thanks for sharing both.
Journal about your thought processes after solving each problem
create a way to 'get to' that memory somehow;
I'd love to hear more practical insights on how to get better at recall and problem-solving.
What books are you reading? What podcasts are you watching? Any talks or articles to recommend?
Prompt:
SOMEBODY BOILING A GOAT IN ITS MOTHER’S MILK.
Hello, is there any update on this? Hopefully it doesn't die off!
Awesome!!! Incredibly useful, thank you for taking the time, I'll take many of these.
Great recommendations, thank you for making this post. I would add to the books section: Deep Work & So Good They Can't Ignore You (Cal Newport). It definitely led me to take a new perspective on what I spend time on.
I would find it kind of disrespectful if someone called me gpt-3, maybe gpt-15...
I would guess that the end project would be something closer to The Codex than the Sequences. And from reading Hanson a few times, there's an obvious thread he weaves through many posts, but it may be just a tad difficult to untangle. For one, I really enjoy his take on prediction markets.
On this thread as well, there's a post from @Richard_Ngo with some links: https://www.lesswrong.com/posts/SSkYeEpTrYMErtsfa/what-are-some-of-robin-hanson-s-best-posts
Awesome, some two years ago I was looking for exactly this: https://www.lesswrong.com/posts/9mXi6QNN7udsGcDYJ/eigen-s-shortform?commentId=uTEEJHSHLjyHXHsM2 I ended up reading his book and really liked it. Excited to be one of the early readers of these. (I hope it's as good as The Codex collection.)
The idea of ontological flexibility hints at this.
Unsong by Scott Alexander is a masterpiece.
An unexplored land within LessWrong is where the objective world meets the narrative world. Science dedicates its time almost exclusively to objective facts (what is), but this is hardly our everyday life. The world we live in is full of emotions, motivations, pain, and joy. This is the world of stories that constrain and inform action. Rather than brush it aside, it should be explored as an important puzzle piece in instrumental rationality. (I think this is what Unsong is kind of about, along with other works in the aptly named category of "Rational Fiction.")
Funny that you have your great LessWrong whale just as I do, and that you also recall it possibly coming from Wei Dai (while he himself doesn't recall it).
https://www.lesswrong.com/posts/X4nYiTLGxAkR2KLAP/?commentId=nS9vvTiDLZYow2KSK
One of the things that enticed me about LessWrong is that it gives a concrete and easy way to call someone a "rationalist": namely, someone who has read the three books from the Library section (the Sequences, HPMOR, and The Codex).
After that, the curated sequences and the concepts page. I just think it's a wonderfully easy way to define concepts and create a shared vocabulary to build on top of. I hope that with time it gets expanded to encompass more books and sequences.
- The term "rationalist" then falls short, but this is how I can easily separate people, and I am sympathetic to those who want to change the name.
A talk given by Roger Penrose is apt here: The Problem of Modelling the Mathematical Mind. He tries to define how the mind of a sufficiently good mathematician may work, with an emphasis on parallelization of mathematical solutions. An interesting related book is The Mathematician's Mind: The Psychology of Invention in the Mathematical Field by Jacques Hadamard.
Mathematician Richard Borcherds said in an interview that he does not have a great memory, and that this allows him to come back to a mathematical problem and try solving it in a different way than before (because he does not remember how he solved it).
This is amazing. Incredible execution, it does not go unnoticed!
This is a really good comment. If you care to know more about his thinking, he has a book called Hackers and Painters, which I think sums up his views very well. But yes, it's a redistribution of wealth and power from strong people and bureaucrats to what he calls "nerds," i.e. people who know technology deeply and actually build things.
The idea of instrumental rationality touches on the world of builders, and you need to build if you ever desire to act in the world.
The heads of government and the FDA don't work like you do. Who knows what incentives they have? It's entirely possible that for them this is just a political play (moving chess pieces) that makes sense to them, while the well-being of the people takes second place to a particular political move.
This wouldn't be the first time this has happened in a government agency, but, at any rate, it's too early to be skeptical. We need to see how this unfolds; maybe the pause won't last that long.
This is a really interesting post. I wonder how implementable it is; it touches on the edges of collective action. Imagine a change.org petition for someone to read something and review it: despite public interest, it misses the incentive structure for the person who actually carries out the task.
Going further, some people are tokenizing the hours of their day and selling them on the blockchain (this is too broad, but imagine a particular action being tokenized, so that people can fund it out of sheer interest and then someone like David Deutsch could claim it). This does not seem so far-fetched to me.
I don't think there's enough written about long-termism. You have a reader here if you ever decide to write something. I wonder where between those two schools of thought you fall.
As a side thought: one of the things I have always sensed from this forum is a deep affinity for different ways of understanding things. So, not surprisingly, many here converge on and are enthralled by Bret Victor, though there are many others (Nielsen, Matuschak, the website you linked, 3blue1brown, Jonathan Blow).
So I think that exploring different mediums can be an end in itself, rather than just a way of making visualizations to understand a given subject (I get a sense of that from your comment, and I hope you explore it further!).
Welcome!
I think the problem with visualization content is that it is very time-intensive to make (to say nothing of the difficulty). You should look at the manim library, written in Python, with which 3blue1brown made his Linear Algebra videos.
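To give a concrete feel for it, here's a minimal sketch of a manim scene. It assumes the Manim Community edition (`pip install manim`), the maintained fork of 3blue1brown's original library; the scene name and shapes are purely illustrative, not taken from his videos.

```python
# Minimal Manim Community sketch: animate a square morphing into a circle.
from manim import Scene, Square, Circle, Create, Transform, BLUE

class SquareToCircle(Scene):
    def construct(self):
        square = Square(color=BLUE)  # the starting shape
        circle = Circle()            # the shape we morph into
        self.play(Create(square))             # animate drawing the square
        self.play(Transform(square, circle))  # animate the morph
        self.wait()                           # hold the final frame briefly
```

Rendering it (e.g. `manim -pql scene.py SquareToCircle` for a quick low-quality preview) already hints at why this kind of content is time-intensive: every visual beat has to be scripted explicitly.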
Huh, I understand where you're coming from. Especially this:
> [...] a kidney stone increases my level of baseline fear
I hadn't considered that. It's entirely possible to imagine a world where your baseline fear increases ever so slightly, in a way that outweighs the value of knowing what may be going on when the pain hits.
But, though I concede your point, is your behavior modified in any way given the fact that you may get hit by kidney stones? For example, analogizing with a family history of high blood pressure: I would most likely take some precautionary measures if I knew high blood pressure (or kidney stones) ran in my family, precautions I wouldn't have taken if I were oblivious to my predisposition to such diseases.
I think most of what I've gotten out of the Sequences is actually this: the act of noticing. It applies not only to shame, but to many more related internal conflicts.
In my experience, it's surprising how much we can learn by applying procedures such as the method you outline. Hopefully we get to see more of this.
While I think the Typical Mind Fallacy is strong with this one, the post does have some good bits. I messaged him privately about my problems with it, but I upvoted it since I think the post taps into something broader and good that I would like to read more about.