Posts

Melbourne – ACX Meetups Everywhere Spring 2024 2024-03-30T11:17:28.934Z
Melbourne, Victoria, Australia – ACX Meetups Everywhere Fall 2023 2023-08-25T23:23:11.518Z
How should AI systems behave, and who should decide? [OpenAI blog] 2023-02-17T01:05:43.038Z
Melbourne, Australia – ACX Meetups Everywhere 2022 2022-08-24T23:03:55.851Z
Open AI: Can we rule out near-term AGI? 2018-11-09T12:16:18.821Z
Open AI co-founder on AGI 2018-09-16T10:18:02.603Z
Melbourne Social Meetup July 2018-07-01T06:29:16.877Z
Links with underscores 2018-02-24T11:32:48.752Z
An attempt at a short no-prerequisite test for programming inclination 2013-06-29T23:36:06.042Z
Main section vs. discussion section 2012-03-06T05:37:31.090Z
Suggestions for a presentation on FAI? 2011-02-11T06:09:27.227Z

Comments

Comment by ShardPhoenix on Eli's shortform feed · 2024-11-08T23:33:39.390Z · LW · GW

That's a PR-friendly way of saying that it failed to reach PMF.

Comment by ShardPhoenix on Habryka's Shortform Feed · 2024-10-29T08:16:34.534Z · LW · GW

Thanks for fixing this. The 'A' thing in particular caused me, multiple times, to try to edit comments thinking that I'd omitted a space.

Comment by ShardPhoenix on Could randomly choosing people to serve as representatives lead to better government? · 2024-10-24T02:26:55.447Z · LW · GW

This sounds like democracy-washing rule by unaccountable "experts".

Comment by ShardPhoenix on Monthly Roundup #22: September 2024 · 2024-09-18T05:13:49.403Z · LW · GW

>many of the top films by rating are anime

Not sure 4 of the top 100 being anime counts as unexpectedly many.

Comment by ShardPhoenix on shortplav · 2024-09-04T00:57:46.054Z · LW · GW

Not clear to me how to interpret the chart.

Comment by ShardPhoenix on Why Large Bureaucratic Organizations? · 2024-08-29T06:59:24.265Z · LW · GW

FWIW I downvoted this mainly because I thought you were much too quick to dismiss the existing literature on this topic in favour of your personal theories, which is a bit of a bad habit around here.

Comment by ShardPhoenix on AI #76: Six Shorts Stories About OpenAI · 2024-08-09T01:14:04.557Z · LW · GW

>It is times like this that it is

Missing end of sentence.

Comment by ShardPhoenix on Dragon Agnosticism · 2024-08-05T00:27:35.284Z · LW · GW

This seems mostly fine for anyone who doesn't engage in political advocacy or activism, but a mild-moderate form of defection against society if you do - because if dragons are real, society should probably do something about that, even if you personally can't.

edit: I guess dragon-agnosticism is tolerable if you avoid advocating for (and ideally voting for) policies that would be disastrous if dragons do in fact exist.

Comment by ShardPhoenix on AI #75: Math is Easier · 2024-08-02T02:13:24.353Z · LW · GW

You describe Sam as going "mask off" with his editorial, but it feels more like mask on to me - I'd guess he went with the nationalist angle because he thinks it will sell, not because it's his personal highest priority.

Comment by ShardPhoenix on Abhimanyu Pallavi Sudhir's Shortform · 2024-08-01T00:17:25.150Z · LW · GW

>they've been much more effective at getting their priorities funded than you have been!

Sounds plausible, but do you have any numeric evidence for this?

Comment by ShardPhoenix on MIRI 2024 Communications Strategy · 2024-05-30T03:28:42.697Z · LW · GW

What leads MIRI to believe that this policy of being very outspoken will work better than the expert-recommended policy of being careful what you say?
(Not saying it won't work, but this post doesn't seem to say why you think it will).

Comment by ShardPhoenix on Ironing Out the Squiggles · 2024-04-30T00:18:05.364Z · LW · GW

Great post. I wonder how to determine what is a "reasonable" maximum epsilon to use in the adversarial training. Does performance on normal examples get worse as epsilon increases?
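
Here's a rough FGSM-style sketch of where epsilon enters adversarial training - just my own illustration of the concept, not anything from the post; the function name and the [0, 1] input range are assumptions:

```python
import torch
import torch.nn.functional as F

# Illustrative FGSM-style sketch (not the post's method): epsilon is the
# L-infinity budget for the perturbation added to a training example.
def fgsm_perturb(model, x, y, epsilon):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each input coordinate by at most epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assuming inputs are scaled to [0, 1]
```

Training on these perturbed examples instead of (or alongside) clean ones is what I have in mind by adversarial training here, so the question is how large epsilon can get before the trade-off against clean accuracy becomes unacceptable.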

Comment by ShardPhoenix on What's up with all the non-Mormons? Weirdly specific universalities across LLMs · 2024-04-21T01:09:45.558Z · LW · GW

For small round things and holes, maybe it's related to the digit 0 being small, round, and having a hole, while also being a similar kind of empty/null case as the empty string?

Comment by ShardPhoenix on One-shot strategy games? · 2024-03-11T23:02:10.186Z · LW · GW

IIRC the same encounters are present at the macro level - eg which enemies and cards are available. But there's still a luck element there as one player may choose to go left and the other right, without either direction giving evidence in advance about which has better rewards.

Comment by ShardPhoenix on One-shot strategy games? · 2024-03-11T08:12:40.309Z · LW · GW

Seconding Slay the Spire, though it might be slightly too easy to win on the first attempt (I did and I'm not a god gamer). An advantage of StS is that you can specify the RNG seed, so you could give everyone the same test.

FTL (another roguelite) on easy difficulty also might work, though it's realtime with pause which might be tricky for less experienced gamers.

Both of these are games that benefit a lot from thoughtfulness and careful risk management.

Comment by ShardPhoenix on Palworld development blog post · 2024-01-28T23:41:54.003Z · LW · GW

Personally I'm enjoying Palworld (playing for "free" on Game Pass), and I doubt it would have kept such a high level of concurrent players/interest even after a few days if it wasn't fun at all and only a marketing gimmick. It's a case where the whole of the core gameplay loop is more appealing than you'd think from just looking at the somewhat incongruous collection of parts. (How long the fun lasts is another question.)

edit: I'd agree that the CEO may be overstating his employees' "genius", but I do think the pal designs are pretty good, if not especially original. And the post seems to be more about his relief and gratitude at getting a difficult project over the line than about objective claims.

Comment by ShardPhoenix on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-11T06:20:54.188Z · LW · GW

If you hire UI designers, they have to do something to justify their salaries. If the existing UI can't be easily improved any further, they will instead make it worse.

Also, I agree that the above-mentioned marginal user thing is significant.

Comment by ShardPhoenix on Antropical Probabilities Are Fully Explained by Difference in Possible Outcomes · 2023-11-10T02:31:08.696Z · LW · GW

Say that in each case where a Beauty and a Visitor meet each other, a wild Bookmaker appears and offers each of them a chance to bet on what was the outcome of the coinflip. If they have different subjective odds then they will choose to make different bets (depending on the odds offered) and one will be more profitable than the other - so in that sense at least one of them is wrong. Or am I missing something?
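
A minimal simulation sketch of this betting setup (my own illustration, assuming the standard version where tails means two awakenings/meetings and the hypothetical Bookmaker offers even odds on heads; the names and payoff structure are assumptions, not from the post):

```python
import random

# Illustrative sketch: heads -> one meeting, tails -> two. At every meeting the
# bookmaker offers an even-money bet on "the coin was heads". An agent with
# credence 1/2 in heads treats the bet as fair and takes it; an agent with
# credence 1/3 declines, so their profit stays at zero.
def simulate(trials=100_000, stake=1.0):
    always_bets_profit = 0.0
    declines_profit = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        meetings = 1 if heads else 2
        for _ in range(meetings):
            always_bets_profit += stake if heads else -stake
    return always_bets_profit, declines_profit

if __name__ == "__main__":
    bets, declines = simulate()
    # The always-bettor loses about 0.5 * stake per coinflip on average, because
    # tails generates two losing bets - so the two policies really do come apart.
    print(f"always bets: {bets:+.0f}, declines: {declines:+.0f}")
```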

Comment by ShardPhoenix on Self-Blinded L-Theanine RCT · 2023-11-01T00:28:09.230Z · LW · GW

I've mostly heard people talking about l-theanine as something to complement caffeine rather than to take by itself.

Comment by ShardPhoenix on High school advice · 2023-09-11T08:34:50.850Z · LW · GW

The question is too broad to give a non-boring answer without knowing a bit more about you and your circumstances.

Comment by ShardPhoenix on Focus on the Hardest Part First · 2023-09-11T08:25:49.904Z · LW · GW

What is the hardest part of AI alignment?

Comment by ShardPhoenix on How did you make your way back from meta? · 2023-09-07T23:03:17.072Z · LW · GW

This post probably wasn't the way to start...

Comment by ShardPhoenix on Who Has the Best Food? · 2023-09-06T01:17:49.892Z · LW · GW

On a recent trip to China I found the trend there - at least for fancy meals - is low carb, with few noodles and often no rice at all.

Comment by ShardPhoenix on Which Questions Are Anthropic Questions? · 2023-09-01T06:34:41.113Z · LW · GW

Doesn't example 3 show that examples 1 and 2 are actually the same? What difference does it make whether you start inside or outside the room?

Comment by ShardPhoenix on Meta Questions about Metaphilosophy · 2023-09-01T02:09:28.523Z · LW · GW

At a glance, meta-philosophy sounds similar to the problem of what is good, which is normally considered to be within the bounds of regular philosophy. (And to the extent that people avoid talking about it, I think it's because the problem of good is, on a deep enough level, inherently subjective and therefore political, and they want to focus on technical problem-solving rather than political persuasion.)

What's an example of an important practical problem you believe can only be solved by meta-philosophy?

Comment by ShardPhoenix on Seth Explains Consciousness · 2023-08-25T05:57:31.552Z · LW · GW

  1. In general, how can you know whether and how much something has experiences?

  2. I think with things like the nature of perception you could say there's a natural incomparability because you couldn't (seemingly) experience someone else's perceptions without translating them into structures your brain can parse. But I'm not very sure on this.

Comment by ShardPhoenix on Seth Explains Consciousness · 2023-08-25T02:36:00.632Z · LW · GW

>HPoC is demanding a justification of experience from within a world in which everything is just experiences. Of course it can't be answered!

I think I see what you're saying and I do suspect that experience might be too fundamentally subjective to have a clear objective explanation, but I also think it's premature to give up on the question until we've further investigated and explained the objective correlates of consciousness or lack thereof - like blindsight, pain asymbolia, or the fact that we're talking about it right now.

And does "everything is just experiences" mean that a rock has experiences? Does it have an infinite number of different ones? Is your red, like, the same as my red, dude? Being able to convincingly answer questions like these is part of what it would mean to me to solve the Hard Problem.

Comment by ShardPhoenix on Seth Explains Consciousness · 2023-08-23T05:18:49.656Z · LW · GW

Some interesting examples but this seems to be yet another take that claims to solve/dissolve consciousness by simply ignoring the Hard Problem.

Comment by ShardPhoenix on Barbieheimer: Across the Dead Reckoning · 2023-08-02T00:42:49.408Z · LW · GW

As a counterpoint, I found Oppenheimer straightforwardly enjoyable and I'm not sure what you're getting at when you say otherwise. I would have preferred a little more science and a little less legal drama, but the latter was still interesting and (more importantly) well presented.

Comment by ShardPhoenix on The UAP Disclosure Act of 2023 and its implications · 2023-07-29T01:56:53.413Z · LW · GW

Skinwalker Rancher Jay Strattan was Grusch's boss in the military: see Congress UFO Hearing - It's even crazier than you think - YouTube, from about the 18-minute mark.

Comment by ShardPhoenix on The UAP Disclosure Act of 2023 and its implications · 2023-07-29T01:55:08.665Z · LW · GW

This video goes into it some more, starting from around the 18-minute mark: Congress UFO Hearing - It's even crazier than you think - YouTube

TL;DW: Jay Strattan, a Skinwalker Ranch guy, was Grusch's boss while Grusch was investigating UAP in the DoD.

Comment by ShardPhoenix on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-26T04:02:23.113Z · LW · GW

Odds seem to have fluctuated a lot - 52% as I write this, much lower earlier.

Comment by ShardPhoenix on The UAP Disclosure Act of 2023 and its implications · 2023-07-22T09:14:12.763Z · LW · GW

I've been following this story and it's interesting, but the more dramatic claims seem to trace back to the Skinwalker Ranch crowd - a bunch of people who think werewolves and poltergeists and such are haunting a ranch, which has a goofy TV show about it. That includes Grusch (who's been photographed hanging out with Lue Elizondo et al) and Schumer (who dedicated this legislation to the late Harry Reid, who had directed public funds to Skinwalker Ranch). So that casts a lot of doubt on things.

Edit: see this NY Post article for more details: https://nypost.com/2023/03/21/ufo-believing-pentagon-bosses-missed-spy-craft-for-years/

Comment by ShardPhoenix on Why was the AI Alignment community so unprepared for this moment? · 2023-07-18T10:45:04.872Z · LW · GW

ChatGPT was released on November 30, 2022, so it's only been around 7 months. The older ones were GPT-2 and GPT-3, which got attention among AI-followers but were relatively unknown to the public - and again, it wasn't obvious then when or if ordinary people would come to know or care about these advances.

Comment by ShardPhoenix on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T03:14:43.550Z · LW · GW

I think until the last few years the common idea was that AGI would be something developed in the metaphorical basement and lead to a singularity in short order, similar to Eliezer's concept of Seed AI. Maybe stuff like AlphaGo was interesting/alarming to us but it seemed to mostly be overlooked by the public and especially government.

It wasn't really clear until ChatGPT that general-ish AI was going to be relevant to the public and government/regulators well before full AGI.

Comment by ShardPhoenix on AI #18: The Great Debate Debate · 2023-06-30T01:53:15.412Z · LW · GW

>Predict hit songs with 97% accuracy using machine learning on brain reactions

This seems implausibly accurate especially given the other considerations you bring up. My guesses are:

  1. They leaked labels into the training data
  2. Fraud or misrepresentation of what they've actually done

(I only read the article not the paper)

Comment by ShardPhoenix on Lightning Post: Things people in AI Safety should stop talking about · 2023-06-21T02:52:42.541Z · LW · GW

I feel like LW at least has already largely moved away from most of these ideas in the light of what's been happening lately, especially since ChatGPT.

Comment by ShardPhoenix on What is the foundation of me experiencing the present moment being right now and not at some other point in time? · 2023-06-18T08:49:28.307Z · LW · GW

How would you know if you were experiencing the "wrong" time?

I'm inclined to agree with your theory about all states existing "simultaneously" but I'm not sure how to reconcile this with the continuous-feeling nature of experience.

Comment by ShardPhoenix on I still think it's very unlikely we're observing alien aircraft · 2023-06-15T13:23:21.182Z · LW · GW

>Finally, and most importantly, you have to condition not just on what we see, but what we don’t see. We get grainy videos of some weird thing in the distance, but never close-up HD video. Pilots report seeing something flying far away, but it’s always far away—the tic-tac never flies up close to a passenger jet so hundreds of people can look at it in detail. We get rumors that the government has clear high-resolution pictures, but they never get leaked. We get rumors that the government has recovered intact alien aircraft, but it’s always someone who heard someone else talking about it—we never have a whistleblower who actually analyzed the aircraft and can tell us what they’re made out of. There’s never a local government—anywhere in the world—that captures an aircraft and posts photos online.

I'm not sure about this reasoning. It seems compelling at first (and is my personal strongest reason against believing the latest rumors), but there's a sort of anthropic issue where if we already had compelling evidence (or no evidence) we wouldn't be having this discussion. Is there a prior for the likely resolution of fuzzy evidence in general? Maybe the issue is a lack of an observed distribution of mostly weak and some stronger evidence, rather than all weak?

Comment by ShardPhoenix on The Base Rate Times, news through prediction markets · 2023-06-06T23:53:50.938Z · LW · GW

I like this conceptually - might be nicer to aggregate all the prediction markets into an average by default (to make for cleaner graphs), and only split them up as an option.

Comment by ShardPhoenix on Reacts now enabled on 100% of posts, though still just experimenting · 2023-06-03T07:27:17.209Z · LW · GW

I'm not sure about the reacts having such specific assigned meanings. It feels a bit like the James Scott perfectly legible straight lines thing (vs self-organizing meaning). Also they'd be more readable with color, even though that seems "less serious" somehow...

Comment by ShardPhoenix on What if they gave an Industrial Revolution and nobody came? · 2023-05-18T12:27:30.541Z · LW · GW

Good review. From what I've read, the root of the great divergence is the Catholic Church's ban on cousin marriage (for its own reasons), which supposedly led to less clannishness and a higher-trust society in much of Western Europe.

Comment by ShardPhoenix on Product Endorsement: Apollo Neuro · 2023-05-09T04:26:27.808Z · LW · GW

Is it addictive? Can you still sleep (as well as before) without it?

Comment by ShardPhoenix on Contra Yudkowsky on AI Doom · 2023-04-24T01:15:22.322Z · LW · GW

This is interesting but would benefit from more citations for claims and fewer personal attacks on Eliezer.

Comment by ShardPhoenix on LW moderation: my current thoughts and questions, 2023-04-12 · 2023-04-21T01:13:38.827Z · LW · GW

>A hard thing about trying to be transparent about our moderation decisions and actions is that this also requires publicly calling out a user or their content. So you get more transparency but also more embarrassment. I don't have any good solution to this.

Maybe you could not display usernames in the rejected posts section (though this might conflict with transparency if a user feels they are being personally targeted).

Comment by ShardPhoenix on Mechanistically interpreting time in GPT-2 small · 2023-04-17T04:37:21.796Z · LW · GW

I sometimes see posts like this that I can't follow in depth due to insufficient math ability, but skimming them they seem important-if-true so I upvote them anyway. I do want to encourage stuff like this but I'm concerned about adding noise through not-fully-informed voting. Would it be preferable to only vote on things I understand better?

Comment by ShardPhoenix on Moderation notes re: recent Said/Duncan threads · 2023-04-15T08:14:18.912Z · LW · GW

This whole drama is pretty TL;DR, but based on existing vibes I'd rather the rules lean (if a lean is necessary) toward overly disagreeable gadflies than toward overly sensitive people who try to manipulate the conversation by acting wounded.

Comment by ShardPhoenix on The ‘ petertodd’ phenomenon · 2023-04-15T03:32:11.839Z · LW · GW

The ' petertodd' completions have a structure reminiscent of Chuck Norris jokes, only a bit darker. I think a few of them are actually Chuck Norris jokes with the name changed - e.g. "Chuck Norris doesn't hunt, he waits".

Comment by ShardPhoenix on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-13T04:33:00.648Z · LW · GW

>Also, I'm sad whenever people look for an alternative place to post things. In my ideal (though likely unachievable) world, anyone could post anything to LessWrong and the site infrastructure would handle visibility perfectly so that things were only viewed by people wanted to see them (and in priority order of what they want to see).

This sounds nice but if taken far enough there's a risk of fragmenting the site community into a bunch of partially overlapping sub-communities, a la the chaos of Twitter.

Comment by ShardPhoenix on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T00:08:59.552Z · LW · GW

This question appears to be structured in such a way as to make it very easy to move the goalposts.