Posts

Regular Meetup 2025-01-30T23:06:31.659Z
Tokyo, Japan – ACX Autumn Schelling Meetup Everywhere 2025! 2025-01-30T23:06:13.826Z
Regular Meetup 2025-01-30T23:05:36.489Z
Regular Meetup 2025-01-30T23:04:21.137Z
Regular Meetup 2025-01-30T23:04:01.586Z
Regular Meetup 2025-01-30T23:03:35.375Z
Tokyo, Japan – ACX Spring Schelling Meetup Everywhere 2025! 2025-01-30T23:02:41.184Z
Regular Meetup 2025-01-30T23:00:59.785Z
Regular Meetup (Topic: Xi Jinping & China's Place in the World) 2024-12-30T12:10:47.671Z
Regular Meetup (Topic: Lessons from Elon Musk) 2024-10-30T12:24:38.534Z
Regular Meetup (Assign Probabilities to Inconceivable Events) 2024-10-30T12:23:10.383Z
Regular Meetup (Superintelligence) 2024-06-23T07:40:51.378Z
Regular Meetup (ACX Nagano!) The Early Christian Strategy 2024-06-23T07:40:39.302Z
Tokyo, Japan – ACX Autumn Schelling Meetup Everywhere 2024! 2024-06-23T07:40:27.589Z
Regular Meetup (Book Review: The Secret of Our Success) 2024-06-23T07:40:15.923Z
Regular Meetup (Topic: Lifeboat Games And Backscratchers Clubs) 2024-06-23T07:39:56.987Z
Regular Meetup (Topic: It's not like anything to be a bat) 2024-06-23T07:39:42.958Z
Regular Meetup (Topic: Should the Future be Human?) 2024-06-23T07:39:26.536Z
Tokyo, Japan – ACX Spring Schelling Meetup Everywhere 2024! 2024-06-23T07:39:09.897Z
Regular Meetup (Topic: Effect Size Criteria are Insignificant from a God's Eye View) 2024-06-23T07:38:28.972Z
Regular Meetup (Topic: Can Aliens Exist? Do Aliens Exist? Should Aliens Exist?) 2024-06-23T07:38:03.296Z
Regular Meetup (Topic: Class is Culture) 2024-01-08T03:39:23.292Z
Irregular Meetup (Topic: Bryan Caplan on Natalism) 2023-11-22T03:54:11.760Z
Regular Meetup (We're in Fukushima! Talking about Prospera, Honduras) 2023-11-22T03:53:15.710Z
Regular Meetup (Topic: Calibration and Prediction Markets) 2023-05-20T04:04:29.059Z
Regular Meetup (Topic: The AI Extinction Debate) 2023-05-20T04:03:44.706Z
Tokyo, Japan – ACX Autumn Schelling Meetup Everywhere 2023! 2023-05-20T04:03:06.707Z
Regular Meetup (Topic: Raising Up Genius) 2023-05-20T04:02:14.751Z
Regular Meetup (Topic: What Developmental Milestones Are You Missing?) 2023-05-20T04:01:34.854Z
Regular Meetup (topic: Defining Rationality) 2023-01-23T08:53:52.777Z
Regular Meetup (topic: Self-determination) 2023-01-23T08:52:36.061Z
Regular Meetup (topic: Tulip Subsidies) 2023-01-23T08:52:00.647Z
Regular Meetup (topic: Fat Diets) 2023-01-23T08:50:14.386Z
Regular Meetup (topic: AntiRationalist Virtues) 2023-01-23T08:49:08.584Z
Regular Meetup (Topic: Is Science Slowing Down?) 2023-01-09T07:06:26.685Z
Regular Meetup (Topic: Conspiracies of Cognition, Conspiracies of Emotion) 2023-01-09T07:05:53.017Z
Tokyo, Japan – ACX Spring Schelling Meetup Everywhere 2023 2022-11-19T07:33:11.071Z
Regular Meetup (topic: Whither Tartaria?) 2022-11-19T07:32:33.426Z
Irregular Meetup (topic: Does Bryan Caplan disagree with Scott Alexander?) 2022-10-28T04:03:01.432Z
AstralCodexTen and Rationality Meetup Organisers’ Retreat Asia Pacific region 2022-10-12T03:20:43.461Z
Regular Meetup (topic: The Media, Very Rarely Lying) 2022-09-09T22:21:20.040Z
Regular Meetup (topic: What Future? Is 2050 a Real Year?) 2022-09-09T22:19:44.500Z
Regular Meetup (topic: Consciousness And The Brain by Stanislas Dehaene) 2022-09-09T22:19:10.182Z
Regular Meetup (topic: Genetic Engineering & Inequality) 2022-09-09T22:18:31.954Z
Regular Meetup (topic: AI Risk) 2022-09-09T22:17:37.698Z
Regular Meetup (topic: Effective Altruism) 2022-08-28T23:29:43.380Z
Tokyo, Japan – ACX Meetups Everywhere Autumn 2022 2022-08-24T23:04:28.398Z
Tokyo, Japan – ACX Autumn Schelling Meetup 2022 2022-08-16T06:18:38.926Z
Tokyo, Japan – ACX Spring Schelling Meetup 2022 2022-04-16T03:45:31.428Z
Tokyo, Japan – ACX Meetups Everywhere 2021 2021-08-26T00:56:18.278Z

Comments

Comment by Harold (harold-1) on Twelve Virtues of Rationality · 2024-11-16T07:40:24.808Z · LW · GW

Since reading this a few years ago, I've often thought about the Void and Musashi's advice. For the reference of anyone like me: the quote in its full context comes after a description of the 'five fundamental stances' in 'the Way of the sword'. Here it is from the 2001 Wilson translation of The Book of Five Rings -- I think it's worth thinking about together with the piece above:

The Lesson of Stance/No Stance
What is called Stance/No Stance means that there is no stance that you should take with your sword at all. However, as I place this within the Five Stances, there is a stance here. According to the chances your opponent takes, and according to his position and energy, your sword will be of a mind to cut down your opponent in fine fashion no matter where you place it. According to the moment, if you want to lower your sword a little from the Upper Stance, it will become a Middle Stance; if, according to the situation, you raise your sword a bit from the Middle Stance, it will become the Upper Stance. The Lower Stance, accordingly, may be raised a little to become the Middle Stance as well. This means that the two Side Stances, according to their position, may be moved a little to the center and become the Middle or Lower Stances.

This is the principle in which there is a stance and there is no stance. At its heart, this is first taking up the sword and cutting down your opponent, no matter what is done or how it happens. Whether you parry, slap, strike, hold back or touch your opponent's cutting sword, you must understand that all of these are opportunities to cut him down. To think, "I'll parry," or "I'll slap," or "I'll hit, hold or touch," will be insufficient for cutting him down. It is essential to think that anything at all is an opportunity to cut him down. You should investigate this thoroughly. With martial arts in the larger field, the placement of numbers of people is also a stance. All of these are opportunities to win a battle. It is wrong to be inflexible. You should make great efforts in this.

(The Japanese is here: https://www.koten.net/gorin/yaku/214/)

Comment by Harold (harold-1) on Otherness and Control in the Age of AGI · 2024-11-10T05:07:50.848Z · LW · GW

I had a twenty-hour drive by myself recently, and binged Otherness and Control in the Age of AGI. It was tremendous. Ambitious to take on the entire thing in one session!

Comment by Harold (harold-1) on Optimality is the tiger, and annoying the user is its teeth · 2024-07-28T14:19:41.223Z · LW · GW

Agree with this. Law is downstream of a particular medium-scale ontology of human agency. A paperclip maximizer, as mythologized, would be working with a different notion of agency, by definition.

Querying check_legality("clippy's plan") would be like checking the temperature of the number "100". Sure, it might kinda sound like it's hot, or illegal, but that's not the kind of input that check_legality() currently takes.

check_legality() can't even handle the agency of nation states ...
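The type-mismatch point can be made concrete with a toy sketch. Everything here is illustrative and hypothetical: `check_legality`, `HumanScalePlan`, and the "plans" themselves are invented stand-ins, not real APIs or legal tests.

```python
from dataclasses import dataclass

@dataclass
class HumanScalePlan:
    """The ontology law expects: an actor, an intent, an act."""
    actor: str
    intent: str
    act: str

def check_legality(plan: HumanScalePlan) -> bool:
    # Toy stand-in: legal reasoning operates on intents and conduct,
    # not on raw optimization targets.
    return plan.intent != "harm"

# Within the human-scale ontology, the check works fine:
assert check_legality(HumanScalePlan("Alice", "trade", "sell paperclips"))

# A maximizer's "plan" isn't shaped like the input the check expects:
clippys_plan = {"objective": "maximize paperclips", "search_depth": 10**9}
try:
    check_legality(clippys_plan)
    ontology_compatible = True
except AttributeError:
    # Like asking for the temperature of the number 100:
    # not wrong, exactly -- just not the kind of input the function takes.
    ontology_compatible = False
assert not ontology_compatible
```

The failure isn't that the plan is judged illegal; it's that the question can't even be posed, which is the point about law being downstream of a particular ontology of agency.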

Comment by Harold (harold-1) on Partitioned Book Club · 2024-07-19T02:28:44.088Z · LW · GW

Genius. I'm stealing this for use in a professional setting. I'll have lawyers read chapters of "Co-Intelligence" by Ethan Mollick, and get them to focus on the useful parts for their own law firms...

(Also going to use this on "Legal Systems Very Different..." for our ACX group!)

Comment by Harold (harold-1) on I would have shit in that alley, too · 2024-06-28T08:29:28.327Z · LW · GW

I'm a lawyer (NY licensed) working in Tokyo, and this account of the Japanese penal system is incorrect. Prosecutors in Japan are extremely, extremely hesitant to bring a criminal case into the penal system, so when cases are brought, the evidence is far beyond "a reasonable doubt". As a slightly misleading short summary: this, rather than a lack of concern for false positives, is the reason for the notorious 99% conviction rate of criminal cases in Japan.

I've also been unhoused in a few different countries for short periods of time. I'm certain that my affinity for Japan has its roots in needing this peaceful culture of public safety.

Comment by harold-1 on [deleted post] 2023-04-10T03:37:14.978Z

Completely in agreement with Domenic (though, full disclosure, we're both AISafety東京 members).

What's missing in the Japanese space is any attempt to answer the question of why Anglo-US views on AI are relevant in Japan. Anglo-Americans may think it's obvious why that question doesn't need answering... which just closes the loop.

Comment by harold-1 on [deleted post] 2023-04-10T03:31:18.050Z

Yes. But DeepL is also in the running. All three have different use cases and nuances. Google Translate, for example, provides a more straightforward translation, with phonetics, when translating EN>JP. GPT-4 provides more natural but also more often incorrect translations. (This has been the state of affairs for a month. At the current rate of change, I expect this comment will be out of date fast.)

Comment by Harold (harold-1) on How can we promote AI alignment in Japan? · 2023-03-13T00:38:54.053Z · LW · GW

I wholeheartedly agree, Colin. (I think we're saying the same thing--let me know where we may disagree.)

It's a daily challenge in my work to 'translate' what can sometimes seem like abstract nonsense into scenarios grounded in real context, and the reverse.

I want to add that a grounded, high context decision process is slower (still wearing masks?) but significantly wiser (see the urbanism of Tokyo compared to any given US city).

Comment by Harold (harold-1) on How can we promote AI alignment in Japan? · 2023-03-12T14:06:19.426Z · LW · GW

I’ve been developing a hunch that the abstract framing of arguments for AI Safety is unlikely to ever gain a foothold in Japan. The way forward here is a contextual framing of the same arguments. (Whether in English or Japanese is less and less relevant with machine translation.)

I’ve been a resident of Tokyo for twelve years, half of that as a NY lawyer in a Japanese international law firm. I’m also a founding member working with AI Safety 東京 and the Chair of the Tokyo rationality community. Shoka Kadoi, please express interest in our 勉強会 (study group).

As a lawyer engaged with AI safety, I often have conversations with the more abstract-minded members of our groups that reveal an intellectual acceptance but strong aesthetic distaste for the contextual nature of legal systems. (The primitives of legal systems are abstraction-resistant ideas like ‘reasonableness’.)

Aesthetic distaste for contextual primitives leads to abstract framing of problems. Abstract framing of the AI safety issues tends to lead from standard AI premises to narrow conclusions that are often hard for contextual-minded people to follow. Conclusions like, we’ve found a very low-X percent chance of some very specific bad outcome, and so we logically need to take urgent preventative actions.

To generalize, Japan as a whole (and perhaps most of the world) does not approach problems abstractly. Contextual framing of AI safety issues tends to lead from standard AI premises to broad and easily accepted conclusions. Conclusions like, we’ve found a very high-Y chance of social disruption, and we are urgently compelled to take information-gathering actions. 

There’s obviously much more support needed for these framing claims. But you can see those essential differences in outcomes in the AI regulatory approaches of the EU and Japan, respectively. (The EU is targeting abstract AI issues like bias, adversarial attacks, and biometrics with specific legislation. Japan is instead attempting to develop an ‘agile governance’ approach to AI in order to keep up with “the speed and complexity of AI innovation”. In this case, Japan's approach seems wiser, to me.)

If the conclusions leading to existential risk are sound, both these framings should converge on similar actions and outcomes. Japan is a tough nut to crack. But having both framings active around the world would mobilize a significantly larger number of brains on the problem. Mobilizing all those brains in Japan is the course to chart now. 

Comment by Harold (harold-1) on Enemies vs Malefactors · 2023-03-02T01:44:39.202Z · LW · GW

I don't have any terminological suggestions that I love.

Following on my prior comment, the actual legal terms used for the (oxymoronic) "purposeless and unknowing mens rea" might provide an opening for legal-social technologies to offer wisdom on operationalizing these ideas: "negligent" at first, and "reckless" once it's reached a tipping point.

Comment by Harold (harold-1) on Enemies vs Malefactors · 2023-03-02T01:40:45.829Z · LW · GW

(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)

Even in the law of mental states, intent follows the advice in this post. U.S. law commonly breaks down the 'guilty mind' into at least four categories, which, in the absence of a confession, all basically work by observing the defendant's patterns of behaviour. There may be some more operational ideas in the legal treatment of reckless and negligent behaviour.

  1. acting purposely - the defendant had an underlying conscious object to act
  2. acting knowingly - the defendant was practically certain that the conduct would cause a particular result
  3. acting recklessly - the defendant consciously disregarded a substantial and unjustified risk
  4. acting negligently - the defendant was not aware of the risk, but should have been aware of it

Comment by Harold (harold-1) on Gauging interest for a Tokyo area meetup group · 2022-08-18T02:46:40.155Z · LW · GW

Just stumbled into this old thread. For anyone else who comes by, there is now an active ACX/LW group in Tokyo, meeting in Nakameguro monthly. 

See https://www.lesswrong.com/groups/2Gx38j5JBc4AyHJ9a

https://www.facebook.com/groups/477916570844589/

https://www.meetup.com/acx-tokyo/