Melbourne – ACX Meetups Everywhere Spring 2024 2024-03-30T11:17:28.934Z
Melbourne, Victoria, Australia – ACX Meetups Everywhere Fall 2023 2023-08-25T23:23:11.518Z
How should AI systems behave, and who should decide? [OpenAI blog] 2023-02-17T01:05:43.038Z
Melbourne, Australia – ACX Meetups Everywhere 2022 2022-08-24T23:03:55.851Z
Open AI: Can we rule out near-term AGI? 2018-11-09T12:16:18.821Z
Open AI co-founder on AGI 2018-09-16T10:18:02.603Z
Melbourne Social Meetup July 2018-07-01T06:29:16.877Z
Links with underscores 2018-02-24T11:32:48.752Z
An attempt at a short no-prerequisite test for programming inclination 2013-06-29T23:36:06.042Z
Main section vs. discussion section 2012-03-06T05:37:31.090Z
Suggestions for a presentation on FAI? 2011-02-11T06:09:27.227Z


Comment by ShardPhoenix on One-shot strategy games? · 2024-03-11T23:02:10.186Z · LW · GW

IIRC the same encounters are present at the macro level - eg which enemies and cards are available. But there's still a luck element there as one player may choose to go left and the other right, without either direction giving evidence in advance about which has better rewards.

Comment by ShardPhoenix on One-shot strategy games? · 2024-03-11T08:12:40.309Z · LW · GW

Seconding Slay the Spire, though it might be slightly too easy to win on the first attempt (I did and I'm not a god gamer). An advantage of StS is that you can specify the RNG seed, so you could give everyone the same test.

FTL (another roguelite) on easy difficulty also might work, though it's realtime with pause which might be tricky for less experienced gamers.

Both of these are games that benefit a lot from thoughtfulness and careful risk management.

Comment by ShardPhoenix on Palworld development blog post · 2024-01-28T23:41:54.003Z · LW · GW

Personally I'm enjoying Palworld (playing for "free" on Game Pass) and I doubt it would have kept such a high level of concurrent players/interest even after a few days if it wasn't fun at all and only a marketing gimmick. It's a case where the whole of the core gameplay loop is more appealing than you'd think by just looking at the somewhat incongruous collection of parts. (How long the fun lasts is another question.)


edit: I'd agree that the CEO may be overstating how much of a "genius" his employees are but I do think the pal designs are pretty good if not especially original. And the post seems to be more about his relief and gratitude at getting a difficult project over the line rather than objective claims.

Comment by ShardPhoenix on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-11T06:20:54.188Z · LW · GW

If you hire UI designers they have to do something to justify their salaries. If the existing UI can't be easily improved any further, they will instead make it worse.

Also I agree that the above mentioned marginal user thing is significant.

Comment by ShardPhoenix on Antropical Probabilities Are Fully Explained by Difference in Possible Outcomes · 2023-11-10T02:31:08.696Z · LW · GW

Say that in each case where a Beauty and a Visitor meet each other, a wild Bookmaker appears and offers each of them a chance to bet on what was the outcome of the coinflip. If they have different subjective odds then they will choose to make different bets (depending on the odds offered) and one will be more profitable than the other - so in that sense at least one of them is wrong. Or am I missing something?
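To make the "at least one of them is wrong" point concrete, here's a toy expected-value calculation (illustrative numbers only: a hypothetical even-odds bet on heads, and the standard halfer 1/2 vs thirder 1/3 credences). Agents with different subjective probabilities assign different expected profits to the same offered bet, so they will take different sides of it:

```python
def expected_profit(p_heads, stake=1.0):
    """Expected profit of a unit bet on heads at even odds,
    under a subjective probability p_heads of heads."""
    return p_heads * stake - (1 - p_heads) * stake

halfer = expected_profit(0.5)     # 0.0 -> indifferent at even odds
thirder = expected_profit(1 / 3)  # negative -> prefers to bet tails instead

print(halfer, thirder)
```

Whichever side's realized payout frequency matches the offered odds less well loses money on average, which is the sense in which at least one credence is miscalibrated for this betting setup.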

Comment by ShardPhoenix on Self-Blinded L-Theanine RCT · 2023-11-01T00:28:09.230Z · LW · GW

I've mostly heard people talking about l-theanine as something to complement caffeine rather than to take by itself.

Comment by ShardPhoenix on High school advice · 2023-09-11T08:34:50.850Z · LW · GW

The question is too broad to give a non-boring answer without knowing a bit more about you and your circumstances.

Comment by ShardPhoenix on Focus on the Hardest Part First · 2023-09-11T08:25:49.904Z · LW · GW

What is the hardest part of AI alignment?

Comment by ShardPhoenix on How did you make your way back from meta? · 2023-09-07T23:03:17.072Z · LW · GW

This post probably wasn't the way to start...

Comment by ShardPhoenix on Who Has the Best Food? · 2023-09-06T01:17:49.892Z · LW · GW

On a recent trip to China I found the trend there - at least for fancy meals - is low carb, with few noodles and often no rice at all.

Comment by ShardPhoenix on Which Questions Are Anthropic Questions? · 2023-09-01T06:34:41.113Z · LW · GW

Doesn't example 3 show that one and two are actually the same? What difference does it make whether you start inside or outside the room?

Comment by ShardPhoenix on Meta Questions about Metaphilosophy · 2023-09-01T02:09:28.523Z · LW · GW

At a glance meta-philosophy sounds similar to the problem of what is good, which is normally considered to be within the bounds of regular philosophy. (And to the extent that people avoid talking about it, I think it's because the problem of good is, on a deep enough level, inherently subjective and therefore political, and they want to focus on technical problem solving rather than political persuasion.)

What's an example of an important practical problem you believe can only be solved by meta-philosophy?

Comment by ShardPhoenix on Seth Explains Consciousness · 2023-08-25T05:57:31.552Z · LW · GW
  1. In general how can you know whether and how much something has experiences?

  2. I think with things like the nature of perception you could say there's a natural incomparability because you couldn't (seemingly) experience someone else's perceptions without translating them into structures your brain can parse. But I'm not very sure on this.

Comment by ShardPhoenix on Seth Explains Consciousness · 2023-08-25T02:36:00.632Z · LW · GW

HPoC is demanding a justification of experience from within a world in which everything is just experiences. Of course it can't be answered!

I think I see what you're saying and I do suspect that experience might be too fundamentally subjective to have a clear objective explanation, but I also think it's premature to give up on the question until we've further investigated and explained the objective correlates of consciousness or lack thereof - like blindsight, pain asymbolia, or the fact that we're talking about it right now.

And does "everything is just experiences" mean that a rock has experiences? Does it have an infinite number of different ones? Is your red, like, the same as my red, dude? Being able to convincingly answer questions like these is part of what it would mean to me to solve the Hard Problem.

Comment by ShardPhoenix on Seth Explains Consciousness · 2023-08-23T05:18:49.656Z · LW · GW

Some interesting examples but this seems to be yet another take that claims to solve/dissolve consciousness by simply ignoring the Hard Problem.

Comment by ShardPhoenix on Barbieheimer: Across the Dead Reckoning · 2023-08-02T00:42:49.408Z · LW · GW

As a counterpoint, I found Oppenheimer straightforwardly enjoyable and I'm not sure what you're getting at when you say otherwise. I would have preferred a little more science and a little less legal drama, but the latter was still interesting and (more importantly) well presented.

Comment by ShardPhoenix on The UAP Disclosure Act of 2023 and its implications · 2023-07-29T01:56:53.413Z · LW · GW

Skinwalker Rancher Jay Strattan was Grusch's boss in the military: see "Congress UFO Hearing - It's even crazier than you think" on YouTube, from about the 18-minute mark.

Comment by ShardPhoenix on The UAP Disclosure Act of 2023 and its implications · 2023-07-29T01:55:08.665Z · LW · GW

This video goes into it some more, starting from around the 18 minute mark: Congress UFO Hearing - It's even crazier than you think - YouTube


TL;DW: Jay Strattan, a Skinwalker Ranch guy, was Grusch's boss when he was investigating UAP in the DoD.

Comment by ShardPhoenix on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-26T04:02:23.113Z · LW · GW

Odds seem to have fluctuated a lot - 52% as I write this, much lower earlier.

Comment by ShardPhoenix on The UAP Disclosure Act of 2023 and its implications · 2023-07-22T09:14:12.763Z · LW · GW

I've been following this story and it's interesting, but the more dramatic claims seem to track back to the Skinwalker Ranch crowd - a bunch of people who think werewolves and poltergeists and such are haunting a ranch, which has a goofy TV show about it. That includes Grusch (who's been photographed hanging out with Lue Elizondo et al) and Schumer, who dedicated this legislation to the late Harry Reid, who had directed public funds to Skinwalker Ranch. So that casts a lot of doubt on things.

Edit: see this NY Post article for more details:

Comment by ShardPhoenix on Why was the AI Alignment community so unprepared for this moment? · 2023-07-18T10:45:04.872Z · LW · GW

ChatGPT was released on November 30 2022, so it's only been around 7 months. The older ones were GPT-2 and GPT-3 which got attention among AI-followers but were relatively unknown to the public - and again, it wasn't obvious then when or if ordinary people would come to know or care about these advances.

Comment by ShardPhoenix on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T03:14:43.550Z · LW · GW

I think until the last few years the common idea was that AGI would be something developed in the metaphorical basement and lead to a singularity in short order, similar to Eliezer's concept of Seed AI. Maybe stuff like AlphaGo was interesting/alarming to us but it seemed to mostly be overlooked by the public and especially government.

It wasn't really clear until ChatGPT that general-ish AI was going to be relevant to the public and government/regulators well before full AGI.

Comment by ShardPhoenix on AI #18: The Great Debate Debate · 2023-06-30T01:53:15.412Z · LW · GW

>Predict hit songs with 97% accuracy using machine learning on brain reactions

This seems implausibly accurate especially given the other considerations you bring up. My guesses are:

  1. They leaked labels into the training data
  2. Fraud or misrepresentation of what they've actually done

(I only read the article not the paper)

Comment by ShardPhoenix on Lightning Post: Things people in AI Safety should stop talking about · 2023-06-21T02:52:42.541Z · LW · GW

I feel like LW at least has already largely moved away from most of these ideas in the light of what's been happening lately, especially since ChatGPT.

Comment by ShardPhoenix on What is the foundation of me experiencing the present moment being right now and not at some other point in time? · 2023-06-18T08:49:28.307Z · LW · GW

How would you know if you were experiencing the "wrong" time?

I'm inclined to agree with your theory about all states existing "simultaneously" but I'm not sure how to reconcile this with the continuous-feeling nature of experience.

Comment by ShardPhoenix on I still think it's very unlikely we're observing alien aircraft · 2023-06-15T13:23:21.182Z · LW · GW

>Finally, and most importantly, you have to condition not just on what we see, but what we don’t see. We get grainy videos of some weird thing in the distance, but never close-up HD video. Pilots report seeing something flying far away, but it’s always far away—the tic-tac never flies up close to a passenger jet so hundreds of people can look at it in detail. We get rumors that the government has clear high-resolution pictures, but they never get leaked. We get rumors that the government has recovered intact alien aircraft, but it’s always someone who heard someone else talking about it—we never have a whistleblower who actually analyzed the aircraft and can tell us what they’re made out of. There’s never a local government—anywhere in the world—that captures an aircraft and posts photos online.

I'm not sure about this reasoning. It seems compelling at first (and is my personal strongest reason against believing the latest rumors), but there's a sort of anthropic issue where if we already had compelling evidence (or no evidence) we wouldn't be having this discussion. Is there a prior for the likely resolution of fuzzy evidence in general? Maybe the issue is a lack of an observed distribution of mostly weak and some stronger evidence, rather than all weak?

Comment by ShardPhoenix on The Base Rate Times, news through prediction markets · 2023-06-06T23:53:50.938Z · LW · GW

I like this conceptually - might be nicer to aggregate all the prediction markets into an average by default (to make for cleaner graphs), and only split them up as an option.

Comment by ShardPhoenix on Reacts now enabled on 100% of posts, though still just experimenting · 2023-06-03T07:27:17.209Z · LW · GW

I'm not sure about the reacts having such specific assigned meanings. It feels a bit like the James Scott perfectly legible straight lines thing (vs self-organizing meaning). Also they'd be more readable with color, even though that seems "less serious" somehow...

Comment by ShardPhoenix on What if they gave an Industrial Revolution and nobody came? · 2023-05-18T12:27:30.541Z · LW · GW

Good review. From what I've read, the root of the great divergence is the Catholic church's ban on cousin marriage (for its own reasons), which supposedly led to less clannishness and a higher-trust society in much of Western Europe.

Comment by ShardPhoenix on Product Endorsement: Apollo Neuro · 2023-05-09T04:26:27.808Z · LW · GW

Is it addictive? Can you still sleep (as well as before) without it?

Comment by ShardPhoenix on Contra Yudkowsky on AI Doom · 2023-04-24T01:15:22.322Z · LW · GW

This is interesting but would benefit from more citations for claims and fewer personal attacks on Eliezer.

Comment by ShardPhoenix on LW moderation: my current thoughts and questions, 2023-04-12 · 2023-04-21T01:13:38.827Z · LW · GW

A hard thing about trying to be transparent about our moderation decisions and actions is that this also requires publicly calling out a user or their content. So you get more transparency but also more embarrassment. I don't have any good solution to this.

Maybe you could not display usernames in the rejected posts section (though this might conflict with transparency if a user feels they are being personally targeted).

Comment by ShardPhoenix on Mechanistically interpreting time in GPT-2 small · 2023-04-17T04:37:21.796Z · LW · GW

I sometimes see posts like this that I can't follow in depth due to insufficient math ability, but skimming them they seem important-if-true so I upvote them anyway. I do want to encourage stuff like this but I'm concerned about adding noise through not-fully-informed voting. Would it be preferable to only vote on things I understand better?

Comment by ShardPhoenix on Moderation notes re: recent Said/Duncan threads · 2023-04-15T08:14:18.912Z · LW · GW

This whole drama is pretty TL;DR, but based on existing vibes I'd rather the rules lean (if a lean is necessary) in favor of overly disagreeable gadflies rather than overly sensitive people who try to manipulate the conversation by acting wounded.

Comment by ShardPhoenix on The ‘ petertodd’ phenomenon · 2023-04-15T03:32:11.839Z · LW · GW

The ' petertodd' completions have a structure reminiscent of Chuck Norris jokes, only a bit darker. I think a few of them are actually Chuck Norris jokes with the name changed - eg "Chuck Norris doesn't hunt, he waits".

Comment by ShardPhoenix on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-13T04:33:00.648Z · LW · GW

>Also, I'm sad whenever people look for an alternative place to post things. In my ideal (though likely unachievable) world, anyone could post anything to LessWrong and the site infrastructure would handle visibility perfectly so that things were only viewed by people wanted to see them (and in priority order of what they want to see).

This sounds nice but if taken far enough there's a risk of fragmenting the site community into a bunch of partially overlapping sub-communities, a la the chaos of Twitter.

Comment by ShardPhoenix on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T00:08:59.552Z · LW · GW

This question appears to be structured in such a way as to make it very easy to move the goalposts.

Comment by ShardPhoenix on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T00:48:50.288Z · LW · GW

If he thinks AI interpretability work as it exists isn't helpful he should say so, but he shouldn't speak as though it doesn't exist.

Comment by ShardPhoenix on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T00:28:25.695Z · LW · GW

Eliezer's repeated claim that we have literally no idea about what goes on in AI because they're inscrutable piles of numbers is untrue and he must know that. There have been a number of papers and LW posts giving at least partial analysis of neural networks, learning how they work and how to control them at a fine grained level, etc. That he keeps on saying this without caveat casts doubt on his ability or willingness to update on new evidence on this issue.

Comment by ShardPhoenix on Remarks 1–18 on GPT (compressed) · 2023-03-21T05:25:01.625Z · LW · GW

>Recall that the Python primitive "sort" corresponds to a long segment of assembly code in the compiler.

This analogy is a bit off because Python isn't compiled to native code, it's interpreted at runtime. Also, compilers ultimately output machine code rather than assembly, even if some emit assembly as an intermediate step (assembly is basically what you use to write machine code by hand). So it would be better to talk about C and machine code rather than Python and assembly.

Aside from that I thought that was a very interesting post with some potentially powerful ideas. I'm a little skeptical of how practical this kind of prompt-programming could be though because every new LLM (and probably every version of an LLM, fine-tuned or RLHF-ed differently) is like a new CPU architecture and would require a whole new "language/compiler" to be written for it. Perhaps these could be adapted in the same way that C has compilers for various CPU architectures, but it would be a lot of work unless it could be automated. Another issue is that the random nature of LLM evaluation means it wouldn't be very reliable unless you set temperature=0 which apparently tends to give weak results.
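A minimal sketch (pure Python, made-up logit values) of why temperature=0 makes sampling deterministic: as the temperature shrinks toward zero, the softmax over token logits concentrates essentially all probability on the highest logit, so sampling reduces to greedy argmax decoding.

```python
import math

def softmax(logits, temperature):
    """Convert logits to sampling probabilities at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical logits for three tokens

p_hot = softmax(logits, temperature=1.0)
p_cold = softmax(logits, temperature=0.05)

# At temperature 1.0 the distribution is spread out; near 0 it
# collapses onto the argmax token, i.e. greedy decoding.
print([round(p, 3) for p in p_hot])
print([round(p, 3) for p in p_cold])
```

In the temperature=0 limit the top token gets probability 1, which is why repeated runs give identical output.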

Comment by ShardPhoenix on When will computer programming become an unskilled job (if ever)? · 2023-03-17T00:21:37.922Z · LW · GW

>The consequence is the higher performance of programmers, so more tasks can be done in a shorter time so the market pressure and market gap for employees will fall. This means that earnings will either stagnate or fall.

Mostly agree with your post. Historically, higher productivity has generally led to higher total compensation, but how this affects individuals during the transition period depends on the details (eg how much pent-up demand for programming is there?).

Comment by ShardPhoenix on The Waluigi Effect (mega-post) · 2023-03-03T08:52:02.367Z · LW · GW

An interesting theory that could use further investigation.

For anyone wondering what's a Waluigi, I believe the concept of the Waluigi Effect is inspired by this tongue-in-cheek critical analysis of the Nintendo character of that name: (specifically the first one titled I, We, Waluigi: a Post-Modern analysis of Waluigi by Franck Ribery)

Comment by ShardPhoenix on Candyland is Terrible · 2023-02-22T04:07:56.546Z · LW · GW

You're probably right, but a potential contrary take is that learning to emotionally cope with loss and frustration is part of the purpose of the game.

Comment by ShardPhoenix on AstralCodexTen and Rationality Meetup Organisers’ Retreat Asia Pacific region · 2022-10-12T10:10:29.309Z · LW · GW

I run a relevant meetup but TBH not sure what the value of this would be (had the same thought about the global one so didn't apply for that either). Our meetup isn't particularly formal or serious so going on a kind of paid "business trip" for it seems a bit odd or wasteful. What's the intention?

Comment by ShardPhoenix on Consider your appetite for disagreements · 2022-10-09T10:01:46.137Z · LW · GW

I enjoyed this post but the first example is probably too long and technical for anyone not familiar with poker.

Comment by ShardPhoenix on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-08T09:33:07.640Z · LW · GW

>I view it as highly unlikely (<10%) that Putin would accept "Vietnam" without first going nuclear, because it would almost certainly result in him being overthrown and jailed or killed.


Not obvious to me that this is true. If it was, I would have expected more escalation/effort from Russia already by this point.

Comment by ShardPhoenix on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-03T06:59:32.053Z · LW · GW

How much difference would using tactical nuclear weapons actually make?

Comment by ShardPhoenix on Argument against 20% GDP growth from AI within 10 years [Linkpost] · 2022-09-12T06:50:11.914Z · LW · GW

As this argument implies, getting very high growth rates probably requires getting humans (largely) out of the loop in at least some major fields. Unclear how AGI-complete doing that would be.

Comment by ShardPhoenix on What Games These Days? · 2022-08-18T23:27:24.022Z · LW · GW

Sounds like you might enjoy The Crew: Mission Deep Sea. It's a co-operative trick-taking game, so each hand you draft a few challenges like "I will win exactly 5 tricks", or "I will win the blue 7 and the green 4" and you have to complete those as a team with minimal communication. Each hand plays independently so it can be as long or as short as you want.


My personal favourite board game recently is Dune: Imperium, a midweight Euro mixing worker placement, deckbuilding, and simple combat. The interaction between the cards, worker placement and resource management is tight and fun, and I appreciate the theme (visually it's based on the recent Dune movie). Can be a little long with inexperienced players though.

Comment by ShardPhoenix on Announcing Encultured AI: Building a Video Game · 2022-08-18T05:23:53.334Z · LW · GW

How will you compete in the market while optimizing for various things other than what players want?