I think they meant that as an analogy to how developed/sophisticated it was (ie they're saying that it's still early days for reasoning models and to expect rapid improvement), not that the underlying model size is similar.
That's a PR friendly way of saying that it failed to reach PMF.
Thanks for fixing this. The 'A' thing in particular caused me, multiple times, to try to edit comments thinking that I'd omitted a space.
This sounds like democracy-washing rule by unaccountable "experts".
>many of the top films by rating are anime
Not sure 4 of the top 100 being anime counts as unexpectedly many.
Not clear to me how to interpret the chart.
FWIW I downvoted this mainly because I thought you were much too quick to dismiss the existing literature on this topic in favour of your personal theories, which is a bit of a bad habit around here.
It is times like this that it is
missing end of sentence
This seems mostly fine for anyone who doesn't engage in political advocacy or activism, but a mild-moderate form of defection against society if you do - because if dragons are real, society should probably do something about that, even if you personally can't.
edit: I guess dragon-agnosticism is tolerable if you avoid advocating for (and ideally voting for) policies that would be disastrous if dragons do in fact exist.
You describe Sam as going "mask off" with his editorial, but it feels more like mask on to me - I'd guess he went with the nationalist angle because he thinks it will sell, not because it's his personal highest priority.
they've been much more effective at getting their priorities funded than you have been!
Sounds plausible, but do you have any numeric evidence for this?
What leads MIRI to believe that this policy of being very outspoken will work better than the expert-recommended policy of being careful what you say?
(Not saying it won't work, but this post doesn't seem to say why you think it will.)
Great post. I wonder how to determine what is a "reasonable" maximum epsilon to use in the adversarial training. Does performance on normal examples get worse as epsilon increases?
For small round things and holes, maybe it's related to the digit 0 being small, round, and having a hole, while also being a similar kind of empty/null case as the empty string?
IIRC the same encounters are present at the macro level - eg which enemies and cards are available. But there's still a luck element there as one player may choose to go left and the other right, without either direction giving evidence in advance about which has better rewards.
Seconding Slay the Spire, though it might be slightly too easy to win on the first attempt (I did and I'm not a god gamer). An advantage of StS is that you can specify the RNG seed, so you could give everyone the same test.
FTL (another roguelite) on easy difficulty also might work, though it's realtime with pause which might be tricky for less experienced gamers.
Both of these are games that benefit a lot from thoughtfulness and careful risk management.
Personally I'm enjoying Palworld (playing for "free" on Game Pass) and I doubt it would have kept such a high level of concurrent players/interest even after a few days if it wasn't fun at all and only a marketing gimmick. It's a case where the whole of the core gameplay loop is more appealing than you'd think by just looking at the somewhat incongruous collection of parts. (How long the fun lasts is another question.)
edit: I'd agree that the CEO may be overstating how much of a "genius" his employees are but I do think the pal designs are pretty good if not especially original. And the post seems to be more about his relief and gratitude at getting a difficult project over the line rather than objective claims.
If you hire UI designers they have to do something to justify their salaries. If the existing UI can't be easily improved any further, they will instead make it worse.
Also, I agree that the above-mentioned marginal user thing is significant.
Say that in each case where a Beauty and a Visitor meet each other, a wild Bookmaker appears and offers each of them a chance to bet on what was the outcome of the coinflip. If they have different subjective odds then they will choose to make different bets (depending on the odds offered) and one will be more profitable than the other - so in that sense at least one of them is wrong. Or am I missing something?
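To make the "one of them is wrong" part concrete, here's a minimal simulation sketch (my own toy model, not part of the original setup - the per-awakening bet, the heads → 1 awakening / tails → 2 awakenings structure, and the bookmaker's price q = 0.4 are all assumptions):

```python
import random

def simulate(n_flips=100_000, q=0.4, seed=0):
    """Toy Sleeping-Beauty book: heads -> 1 awakening, tails -> 2.

    The Bookmaker sells a heads ticket at price q (pays 1 if heads).
    The halfer (credence 1/2 > q) buys heads; the thirder
    (credence 1/3 < q) takes the other side, buying tails at 1 - q.
    Returns each bettor's average profit per awakening.
    """
    rng = random.Random(seed)
    halfer = thirder = 0.0
    awakenings = 0
    for _ in range(n_flips):
        heads = rng.random() < 0.5
        for _ in range(1 if heads else 2):
            awakenings += 1
            payoff = 1.0 if heads else 0.0
            halfer += payoff - q                    # heads ticket at price q
            thirder += (1.0 - payoff) - (1.0 - q)   # tails ticket at price 1-q
    return halfer / awakenings, thirder / awakenings
```

Per awakening, heads comes up about 1/3 of the time, so at any price strictly between the two credences the bettors take opposite sides of a zero-sum bet and exactly one of them profits in the long run.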
I've mostly heard people talking about l-theanine as something to complement caffeine rather than to take by itself.
The question is too broad to give a non-boring answer without knowing a bit more about you and your circumstances.
What is the hardest part of AI alignment?
This post probably wasn't the way to start...
On a recent trip to China I found the trend there - at least for fancy meals - is low carb, with few noodles and often no rice at all.
Doesn't example 3 show that one and two are actually the same? What difference does it make whether you start inside or outside the room?
At a glance meta-philosophy sounds similar to the problem of what is good, which is normally considered to be within the bounds of regular philosophy. (And to the extent that people avoid talking about it I think it's because the problem of good is on a deep enough level inherently subjective and therefore political, and they want to focus on technical problem solving rather than political persuasion.)
What's an example of an important practical problem you believe can only be solved by meta-philosophy?
-
In general how can you know whether and how much something has experiences?
-
I think with things like the nature of perception you could say there's a natural incomparability because you couldn't (seemingly) experience someone else's perceptions without translating them into structures your brain can parse. But I'm not very sure on this.
HPoC is demanding a justification of experience from within a world in which everything is just experiences. Of course it can't be answered!
I think I see what you're saying and I do suspect that experience might be too fundamentally subjective to have a clear objective explanation, but I also think it's premature to give up on the question until we've further investigated and explained the objective correlates of consciousness or lack thereof - like blindsight, pain asymbolia, or the fact that we're talking about it right now.
And does "everything is just experiences" mean that a rock has experiences? Does it have an infinite number of different ones? Is your red, like, the same as my red, dude? Being able to convincingly answer questions like these is part of what it would mean to me to solve the Hard Problem.
Some interesting examples but this seems to be yet another take that claims to solve/dissolve consciousness by simply ignoring the Hard Problem.
As a counterpoint I found Oppenheimer straightforwardly enjoyable and I'm not sure what you're getting at when you say otherwise. I would have preferred a little more science and a little less legal drama, but the latter was still interesting and (more importantly) well presented.
Skinwalker Ranch's Jay Strattan was Grusch's boss in the military: see Congress UFO Hearing - It's even crazier than you think - YouTube from about the 18 minute mark.
This video goes into it some more, starting from around the 18 minute mark: Congress UFO Hearing - It's even crazier than you think - YouTube
TL;DW: Jay Strattan, a Skinwalker Ranch guy, was Grusch's boss when he was investigating UAP in the DoD.
Odds seem to have fluctuated a lot - 52% as I write this, much lower earlier.
I've been following this story and it's interesting, but the more dramatic claims seem to track back to the Skinwalker Ranch crowd - a bunch of people who think werewolves and poltergeists and such are haunting a ranch, which has a goofy TV show about it. That includes Grusch (who's been photographed hanging out with Lue Elizondo et al) and Schumer (who dedicated this legislation to the late Harry Reid, who had directed public funds to Skinwalker Ranch). So that casts a lot of doubt on things.
Edit: see this NY Post article for more details: https://nypost.com/2023/03/21/ufo-believing-pentagon-bosses-missed-spy-craft-for-years/
ChatGPT was released on November 30, 2022, so it's only been around for about 7 months. The older ones were GPT-2 and GPT-3, which got attention among AI-followers but were relatively unknown to the public - and again, it wasn't obvious then when or if ordinary people would come to know or care about these advances.
I think until the last few years the common idea was that AGI would be something developed in the metaphorical basement and lead to a singularity in short order, similar to Eliezer's concept of Seed AI. Maybe stuff like AlphaGo was interesting/alarming to us but it seemed to mostly be overlooked by the public and especially government.
It wasn't really clear until ChatGPT that general-ish AI was going to be relevant to the public and government/regulators well before full AGI.
>Predict hit songs with 97% accuracy using machine learning on brain reactions
This seems implausibly accurate especially given the other considerations you bring up. My guesses are:
- They leaked labels into the training data
- Fraud or misrepresentation of what they've actually done
(I only read the article, not the paper.)
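For what it's worth, the first failure mode (label leakage) can produce exactly this kind of inflated number. A minimal sketch of the classic version (my own toy example in plain numpy, not anything from the paper): the labels are pure noise, but selecting the "most predictive" features using all the data before "cross-validating" yields accuracy far above the true 50% chance level:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 60, 5000, 20
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, n)   # labels are pure noise: true accuracy is 50%

# WRONG: pick the k features most correlated with y using ALL the data,
# including the points we will later "hold out"...
corr = np.abs((X * (2 * y - 1)[:, None]).mean(axis=0))
top = np.argsort(corr)[-k:]

# ...then leave-one-out "cross-validate" a nearest-centroid classifier
# on just those features. The held-out label already leaked in above.
correct = 0
for i in range(n):
    train = np.arange(n) != i
    Xt, yt = X[train][:, top], y[train]
    c0, c1 = Xt[yt == 0].mean(axis=0), Xt[yt == 1].mean(axis=0)
    d0 = np.linalg.norm(X[i, top] - c0)
    d1 = np.linalg.norm(X[i, top] - c1)
    correct += int((d1 < d0) == (y[i] == 1))

print(correct / n)   # well above 0.5 despite there being no real signal
```

The fix, of course, is to redo the feature selection inside each cross-validation fold; done that way, the same pipeline scores about 50%.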
I feel like LW at least has already largely moved away from most of these ideas in the light of what's been happening lately, especially since ChatGPT.
How would you know if you were experiencing the "wrong" time?
I'm inclined to agree with your theory about all states existing "simultaneously" but I'm not sure how to reconcile this with the continuous-feeling nature of experience.
>Finally, and most importantly, you have to condition not just on what we see, but what we don’t see. We get grainy videos of some weird thing in the distance, but never close-up HD video. Pilots report seeing something flying far away, but it’s always far away—the tic-tac never flies up close to a passenger jet so hundreds of people can look at it in detail. We get rumors that the government has clear high-resolution pictures, but they never get leaked. We get rumors that the government has recovered intact alien aircraft, but it’s always someone who heard someone else talking about it—we never have a whistleblower who actually analyzed the aircraft and can tell us what they’re made out of. There’s never a local government—anywhere in the world—that captures an aircraft and posts photos online.
I'm not sure about this reasoning. It seems compelling at first (and is my personal strongest reason against believing the latest rumors), but there's a sort of anthropic issue where if we already had compelling evidence (or no evidence) we wouldn't be having this discussion. Is there a prior for the likely resolution of fuzzy evidence in general? Maybe the issue is a lack of an observed distribution of mostly weak and some stronger evidence, rather than all weak?
I like this conceptually - might be nicer to aggregate all the prediction markets into an average by default (to make for cleaner graphs), and only split them up as an option.
I'm not sure about the reacts having such specific assigned meanings. It feels a bit like the James Scott perfectly legible straight lines thing (vs self-organizing meaning). Also they'd be more readable with color, even though that seems "less serious" somehow...
Good review. From what I've read, the root of the great divergence is the Catholic church's ban on cousin marriage (for its own reasons), which supposedly led to less clannishness and a higher-trust society in much of Western Europe.
Is it addictive? Can you still sleep (as well as before) without it?
This is interesting but would benefit from more citations for claims and fewer personal attacks on Eliezer.
A hard thing about trying to be transparent about our moderation decisions and actions is that this also requires publicly calling out a user or their content. So you get more transparency but also more embarrassment. I don't have any good solution to this.
Maybe you could not display usernames in the rejected posts section (though this might conflict with transparency if a user feels they are being personally targeted).
I sometimes see posts like this that I can't follow in depth due to insufficient math ability, but skimming them they seem important-if-true so I upvote them anyway. I do want to encourage stuff like this but I'm concerned about adding noise through not-fully-informed voting. Would it be preferable to only vote on things I understand better?
This whole drama is pretty TL;DR, but based on existing vibes I'd rather the rules lean (if a lean is necessary) in favor of overly disagreeable gadflies rather than overly sensitive people who try to manipulate the conversation by acting wounded.
The ' petertodd' completions have a structure reminiscent of Chuck Norris jokes, only a bit darker. I think a few of them are actually Chuck Norris jokes with the name changed - eg "Chuck Norris doesn't hunt, he waits".
>Also, I'm sad whenever people look for an alternative place to post things. In my ideal (though likely unachievable) world, anyone could post anything to LessWrong and the site infrastructure would handle visibility perfectly so that things were only viewed by people who wanted to see them (and in priority order of what they want to see).
This sounds nice but if taken far enough there's a risk of fragmenting the site community into a bunch of partially overlapping sub-communities, a la the chaos of Twitter.