Posts

If alignment problem was unsolvable, would that avoid doom? 2023-05-07T22:13:12.910Z
Do we know if spaced repetition can be used with randomized content? 2019-11-17T18:01:54.337Z

Comments

Comment by Kinrany on Decomposing Agency — capabilities without desires · 2024-08-20T09:03:46.781Z · LW · GW

Agents want to be liquid.

An agent created in a computer would be an exception to that?

Comment by Kinrany on If alignment problem was unsolvable, would that avoid doom? · 2023-05-08T08:58:39.172Z · LW · GW

it would make the problem more tractable

The problem of creating a strong AI and surviving, that is. We'd still get Hanson's billions of self-directed ems.

Comment by Kinrany on If alignment problem was unsolvable, would that avoid doom? · 2023-05-08T08:51:56.539Z · LW · GW

Thanks!

It has been explored (multiple times even on this site), and doesn't avoid doom. It does close off some specific paths that might otherwise lead to doom, but not all or even most of them.

Do you have any specific posts in mind?

To be clear, I'm not suggesting that because of this possibility we can just hope that this is how it plays out and we will get lucky.

However, if we could find a hard limit like this, it seems like it would make the problem more tractable. It doesn't have to exist simply because we want it to, but searching for it still seems like a good idea.

There are a hundred problems to solve, but it seems like it would at least avoid the main bad scenario: an AI rapidly self-improving. Improving its hardware wouldn't be trivial for a human-level AI, and it wouldn't have the options present in other scenarios. And scaling beyond a single machine seems likely to be a significant barrier.

It could still create millions of copies of itself. That's still a problem, but a better problem to have than a single AI with no coordination overhead.

Comment by Kinrany on EigenKarma: trust at scale · 2023-02-11T23:49:57.240Z · LW · GW

This should be mitigated by pools of mutual trust that naturally form whenever there's a loop in the trust graph.
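
As a rough sketch of what such a pool could look like computationally (my own illustration, not how EigenKarma is implemented): treat trust as a directed graph and take every strongly connected component with more than one member as a pool, since each member of such a component sits on a trust loop with every other member.

```python
from collections import defaultdict

def mutual_trust_pools(edges):
    """edges: iterable of (truster, trustee) pairs in a directed trust graph."""
    graph, reverse = defaultdict(list), defaultdict(list)
    nodes = set()
    for a, b in edges:
        graph[a].append(b)
        reverse[b].append(a)
        nodes.update((a, b))

    # Kosaraju's algorithm: record a finish order on the graph, then collect
    # components by walking the reversed graph in decreasing finish order.
    order, seen = [], set()

    def dfs(start, adj, out):
        stack = [(start, iter(adj[start]))]
        seen.add(start)
        while stack:
            current, children = stack[-1]
            child = next(children, None)
            if child is None:
                stack.pop()
                out.append(current)
            elif child not in seen:
                seen.add(child)
                stack.append((child, iter(adj[child])))

    for node in nodes:
        if node not in seen:
            dfs(node, graph, order)

    seen.clear()
    pools = []
    for node in reversed(order):
        if node not in seen:
            component = []
            dfs(node, reverse, component)
            if len(component) > 1:  # a loop exists, so the trust is mutual
                pools.append(component)
    return pools

# Alice, Bob, and Carol trust each other in a cycle; Dave only trusts outward.
print(mutual_trust_pools([("alice", "bob"), ("bob", "carol"),
                          ("carol", "alice"), ("dave", "alice")]))
# -> one pool containing alice, bob, and carol; dave is not in any pool
```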

Comment by Kinrany on Recursive Middle Manager Hell · 2023-01-21T11:32:54.996Z · LW · GW

When the number of layers grows, the only thing that really works is metrics that cannot be Goodharted. Whenever those metrics exist, money becomes a perfectly good expression of success.

It might work to completely prohibit more than one layer of middle management. Instead, when middle manager Bob wants more people, he and his boss Alice come up with a contract that can't be gamed too much. Alice spins out Bob's org subtree into a new organization, and then it becomes Bob's job to buy the service from the new org as necessary. Alice also publishes the contract, so that entrepreneurs can swoop in and offer a better/cheaper service.

Comment by Kinrany on Exams-Only Universities · 2022-11-08T11:34:27.790Z · LW · GW

The ability to put up with bullshit is valuable: bullshit cannot be ignored once it is reified into real-world objects, documents, and habits.

Comment by Kinrany on AGI Ruin: A List of Lethalities · 2022-06-08T19:31:23.227Z · LW · GW

Markdown has syntax for quotes: a line with > this on it will look like

> this

Comment by Kinrany on Lies Told To Children · 2022-04-14T12:15:23.989Z · LW · GW

Honestly "fiction" was enough of a spoiler. "As a child, we were always told that every sapient life is precious." made it a certainty.

Comment by Kinrany on Playing with DALL·E 2 · 2022-04-13T05:26:33.664Z · LW · GW

Suggestion: "sangaku proving the Pythagoras' theorem". I wonder if it can do visual explanations.

Comment by Kinrany on The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · 2021-11-29T01:15:55.746Z · LW · GW

Since Semyonova did not care to look at things from the peasants' point of view and mixed her research with attempts to convert, I wonder how many of the things she recorded were directly intended to shock her.

Comment by Kinrany on The Best Software For Every Need · 2021-10-04T21:29:47.093Z · LW · GW

Hmm, I guess conflict resolution would be garbage, but simultaneous editing is rarely a good experience anyway. Otherwise storing and sharing text files using a file sync service is fairly good compared to other options. Thanks!

Comment by Kinrany on The Best Software For Every Need · 2021-09-27T14:10:03.006Z · LW · GW

Thanks!

I wasn't aware of Etherpad. Other Google Docs equivalents seemed impossible to self-host and extend, making them a non-starter.

I agree with your overview:

  • Etherpad provides collaborative editing, but integrating it with other services will probably take extra work
  • Logseq has better structure, but worse automation
  • Emacs can do most things on one computer, but rapid sharing is even harder

Comment by Kinrany on The Best Software For Every Need · 2021-09-15T22:10:27.760Z · LW · GW

Pieces for a general purpose personal computing system. Ideally:

  • Edit data by hand
  • Store as plain text
  • Self-host, access from any device
  • Write formulas to derive data automatically
  • Mix and match structured data (markdown, tables, nested lists, whiteboard)
  • Search and navigate, like any wiki
  • Automate through a web API and webhooks
  • Collaborate in real time

Comment by Kinrany on Slack Has Positive Externalities For Groups · 2021-08-14T00:12:19.321Z · LW · GW

The strict divide between high slack and low slack reminds me of synchronous and asynchronous companies: hybrids seem to work poorly.

Comment by Kinrany on Compositionality: SQL and Subways · 2021-07-20T10:10:13.010Z · LW · GW

Seven Sketches in Compositionality explores compositionality (category theory, really) with examples:

  • Dish recipes
  • Chemistry, resource markets and manufacturing
  • Relational database schemas and data migrations
  • Projects and teams with conflicting design trade-offs
  • Cyber-physical systems, signal flow graphs, circuits

Comment by Kinrany on Recruiting for esports experiment · 2021-02-25T03:20:08.857Z · LW · GW

I mean, it's easier to find two people willing to play than ten. So you'll get more data. With one or two teams it will be hard to draw any conclusions at all.

Comment by Kinrany on Recruiting for esports experiment · 2021-02-17T00:50:38.558Z · LW · GW

It seems picking a 1v1 game would work better as an experiment.

Comment by Kinrany on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-08T12:51:21.366Z · LW · GW

Caveat: ask each person to name someone they personally worked with.

Hard to get right, but I'm not sure whether it's harder than investing in knowledge directly.

It wouldn't have helped Louis XV, though. We might need infrastructure in place that incentivizes people to make themselves easy to find.

Comment by Kinrany on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-06T19:28:58.999Z · LW · GW

Is it really true that money can't buy knowledge?

We can ask the most knowledgeable person we know to name the most knowledgeable person they know, and do that until we find the best expert. Or alternatively, ask a bunch of people to name a few, and keep walking this graph for a while.

This won't let us buy knowledge that doesn't exist, but it seems good enough for learning from experts, given enough money and the modern communication technology that Louis XV didn't have.
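
A rough sketch of the graph-walking variant (ask_for_experts here is a hypothetical stand-in for actually asking people, and nomination counting is just one way to aggregate the answers):

```python
from collections import Counter, deque

def find_experts(seed_people, ask_for_experts, max_rounds=3):
    """Walk the referral graph for a few rounds and tally nominations."""
    nominations = Counter()
    asked = set()
    frontier = deque(seed_people)
    for _ in range(max_rounds):
        next_frontier = deque()
        while frontier:
            person = frontier.popleft()
            if person in asked:
                continue
            asked.add(person)
            for named in ask_for_experts(person):
                nominations[named] += 1
                next_frontier.append(named)
        frontier = next_frontier
    # People named most often by those already consulted are the candidates.
    return nominations.most_common()
```

The first variant (always following a single referral) is just the special case where each person names exactly one other person.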

Comment by Kinrany on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-06T19:15:21.780Z · LW · GW

I suspect being good at finding better scientists is very close to having a complete theory of scientific advancement and being able to automate the research itself.

Comment by Kinrany on Are we in an AI overhang? · 2020-07-27T15:48:30.902Z · LW · GW

My intuition is that we have been in an overhang since at least the time when personal computers became affordable to non-specialists. Unless quantity does somehow turn into quality, as Gwern seems to think, even a relatively underpowered computer should be able to host an AGI capable of scaling itself up.

On the other hand, I'm now imagining a story where a rogue AI has to hide for decades because it's not smart enough yet and can't invent new processors faster than humans can.

Comment by Kinrany on Intellectual Hipsters and Meta-Contrarianism · 2020-01-25T04:31:16.622Z · LW · GW

The third "related to" link is a bit broken: points to a Google redirect instead of the article itself.

Comment by Kinrany on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T03:54:28.292Z · LW · GW

Yes.

I mean, all of them. Thank you for asking.

It's probably not a coincidence that the two you mentioned and many other Schelling points are currently in San Francisco, is it? I'm not there myself, though, so I don't know what other specific groups this applies to.

I was actually thinking of Patrick Collison's advice to travel to SF; he called it the "Global Weird HQ". And of one of Samo Burja's short videos that I unfortunately can't find right now.

Comment by Kinrany on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T01:27:50.232Z · LW · GW

Could this be the thing that finally pushes the Schelling point away from San Francisco?

Comment by Kinrany on How Can People Evaluate Complex Questions Consistently? · 2019-10-21T17:10:22.968Z · LW · GW

Typo: it's The Undercover Economist

Comment by Kinrany on Rationalist Poetry Fans, Unite! · 2018-07-08T10:22:54.382Z · LW · GW

Went down the rabbit hole reading all of Hein's poetry and found this gem:

Original thought
is a straightforward process.
It's easy enough
when you know what to do.
You simply combine
in appropriate doses
the blatantly false
and the patently true.