Posts

Link: The Cook and the Chef: Musk's Secret Sauce - Wait But Why 2015-11-11T05:46:56.413Z

Comments

Comment by taygetea on The Best Tacit Knowledge Videos on Every Subject · 2024-04-01T19:50:52.967Z · LW · GW

And while I'm here, I also curate something like this. Ben Krasnow is only the best entry point into a wider world. This list was my best attempt recently; it was particularly aimed at getting programmers into physical engineering topics, trying to remove learned helplessness around it and make the topic feel like something it's possible to engage with. https://gist.github.com/taygetea/1fcc9817618b1008a812e6f2c58ca987

Comment by taygetea on The Best Tacit Knowledge Videos on Every Subject · 2024-04-01T19:45:06.659Z · LW · GW

That was 13 years ago, across an ocean of accelerating cultural change, shifting institutional trust, and people maturing. I'm sure you can still find plenty of people who would use mechanisms like that, but I'm pretty sure it's going to be one of the less important considerations now.

Comment by taygetea on Barbieheimer: Across the Dead Reckoning · 2024-03-05T05:38:56.689Z · LW · GW

Several months late, but that Mission Impossible movie had real-world effects, because Joe Biden watched it. https://www.the-independent.com/news/world/americas/us-politics/joe-biden-ai-mission-impossible-b2440365.html

Something very similar happened to Reagan with a depiction of nuclear war.

Comment by taygetea on Why Do You Keep Having This Problem? · 2020-01-22T05:15:17.854Z · LW · GW

A point I think others missed here is that in the TV example, there's more data than in the situations the OP talks about, so mscottveach can say there's a disparity instead of just having the hate mail. Maybe more situations should involve anonymous polling.

Comment by taygetea on The First Fundamental · 2018-01-17T05:23:54.771Z · LW · GW

Crossbow is closer to Mars than pen

If you treat war and conflict as directed intentionality along the lines of the Book of Five Rings, then this is something akin to a call to take action in the world rather than spilling lots of words on the internet.

Comment by taygetea on [deleted post] 2017-06-03T09:02:11.708Z

I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is "does this behavior seem normal or like a predictive red flag?". In those cases, your lived experience directly influences your perception. Someone's actions can seem perfectly fine to most people. But if some others experience spooky hair-raising flashes of their questionably abusive father or a bad ex, that's evidence. The people who didn't think anything was weird brush off the others as oversensitive, risk averse, or paranoid. Then those raising alarms think of everyone else as callous, imperceptive, or malicious. It's not just people who don't alieve the correct base rates. Certainly those people exist, though they're much more plentiful on Tumblr than in person or on LW. It's very non-obvious whether a strong reaction is correct.

Neither side can truly accept the other's arguments. It's a bad situation when both sides consider the other's reasoning compromised beyond repair. That brings politics and accusations of bad faith on all sides. But there is a fact of the matter, and the truth is actually unclear. Anyone thinking at enough of a distance from the issue should have honest uncertainty. I suspect you're particularly prone to refusing to let others' conflicting experiences reach your deep internal world-models, and to strongly underestimating the validity and reliability of that type of evidence. That would cause what you say to be parsed as bad faith, which other people then respond to in kind, creating a positive feedback loop where your prior shifts even further away from them having useful things to say. Then you'd end up a frog boiled in a pot of drama nobody else is experiencing. I'm not sure this is what's happening, but it looks plausible.

Comment by taygetea on [deleted post] 2017-05-27T14:39:10.588Z

This post puts me maybe 50% the way to thinking this is a good idea from my previous position.

My largest qualm about this is well-represented by a pattern you seem to show: saying "Taking care of yourself always comes first, respect yourself", then getting people to actually act on that in simple, low-risk, low-involvement contexts, and assuming that means they'll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully implying that they'll use it when push comes to shove. Think about how people act when actual conflicts, with large fight/flight/freeze responses, interact with self-care norms. I suspect some typical-minding here, as my model of you is that you're better at this than most people. I think it depends on what "running on spite" cashes out to. This is kind of a known skull, but I think the proposed solution of check-ins is probably insufficient.

My other big concern is what comments like your reply to Peter here imply about your models and implicit relationship to the project. In this comment, you say you'll revise something, but I pretty strongly anticipate you still wanting people to do the thing the original wording implied. This seems to defuse criticism in dangerous ways, by giving other people the impression that you're updating not just the charter, but your aesthetics. Frankly, you don't seem at all likely to revise your aesthetics. And those, ultimately, determine the true rules.

To summarize the nature of my issues here in a few words: aesthetic intuitions have huge amounts of inertia and can't be treated like normal policy positions, and people's self-care abilities (and stress-noticing abilities) cannot be trusted in high-stress environments, even under light to moderate testing.

-Olivia

Comment by taygetea on The Adventure: a new Utopia story · 2016-12-26T11:52:06.706Z · LW · GW

Would you expect to be able to achieve that - maybe eventually - within the world described?

Definitely. I expect the mindspace part to actually be pretty simple. We can do it in uncontrolled ways right now with dreams and drugs. I guess I kind of meant something like those, only internally consistent and persistent and comprehensible. The part about caring about base reality is the kind of vague, weak preference that I'd probably be willing to temporarily trade away. Toss me somewhere in the physical universe and lock away the memory that someone's keeping an eye on me. That preference may be more load-bearing than I currently understand though, and there may be more preferences like it. I'm sure the Powers could figure it out though.

It's partially that, and partially indicative of the prudence in the approach.

Perfectly understandable. I'd hope for exploration of outer reaches of mindspace in a longer-form version though.

Comment by taygetea on The Adventure: a new Utopia story · 2016-12-25T21:52:17.091Z · LW · GW

This was great. I appreciate that it exists, and I want more stories like it to exist.

As a model for what I'd actually want myself, the world felt kind of unsatisfying, though the bar I'm holding it to is exceptionally high-- total coverage of my utility-satisfaction-fun-variety function. I think I care about doing things in base reality without help or subconscious knowledge of safety. Also, I see a clinging to human mindspace even when unnecessary. Mainly an adherence to certain basic metaphors of living in a physical reality. Things like space and direction and talking and sound and light and places. It seems kind of quaintly skeuomorphic. I realize that it's hard to write outside those metaphors though.

Comment by taygetea on Narrativemancy 101: Why Paper Beats Rock · 2016-12-24T06:10:57.945Z · LW · GW

This seems very related to Brienne's recent article.

Comment by taygetea on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-23T01:20:15.611Z · LW · GW

For context, calling her out specifically is extremely rare, people try to be very diplomatic, and there is definitely a major communication failure Elo is trying to address.

Comment by taygetea on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-22T07:26:54.250Z · LW · GW

Replied above. There's a strong chilling effect on bringing up that you don't want children at events.

Comment by taygetea on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-22T07:22:46.163Z · LW · GW

It was not an exaggeration.

Comment by taygetea on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-22T07:22:12.969Z · LW · GW

From what I've seen, it's not rare at all. I count... myself and at least 7 other people who've expressed the sentiment in private, across both this year and last year. It is, however, something that is very difficult for people to speak up about. I think what's going on is that different people care about different portions of the solstice (community, message, aesthetics, etc.) to surprisingly differing degrees, may have sensory sensitivities or difficulty with multiple audio input streams, and may or may not find children positive to be around in principle. I think this community has far more people for whom noisy children destroy the experience than the base rate in other communities.

From what I've observed, the degree to which children ruin events for certain people is almost completely lost on many others. It's difficult to speak up largely because of sentiments like yours, which make it feel like people will think I'm going against the idea of the community. For me, and I don't think I'm exceptionally sensitive, I think it removes between a third and a half of the value of going to the event.

Comment by taygetea on How does personality vary across US cities? · 2016-12-21T07:20:14.001Z · LW · GW

Ah, I spoke imprecisely. I meant what you said, as opposed to things of the form "there's something in the water".

Comment by taygetea on How does personality vary across US cities? · 2016-12-20T23:53:18.578Z · LW · GW

I think you have the causality flipped around. Jonah is suggesting that something about Berkeley contributes to the prevalence of low conscientiousness among rationalists.

Comment by taygetea on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-16T02:51:34.999Z · LW · GW

Nicotine use and smoking are not at all the same thing. Did you read the link?

Comment by taygetea on CFAR’s new focus, and AI Safety · 2016-12-12T11:06:53.170Z · LW · GW

To get a better idea of your model of what you expect the new focus to do, here's a hypothetical. Say we have a rationality-qua-rationality CFAR (CFAR-1) and an AI-safety CFAR (CFAR-2). Each starts with the same team; they work independently of each other and can't share work. Two years later, we ask each to write a curriculum for the other organization, to the best of its abilities. This is along the lines of having them do an Ideological Turing Test on each other. How well do they match? In addition, is either newly written version better than the original? Is CFAR-1's CFAR-2 curriculum better than CFAR-2's CFAR-2 curriculum?

I'm treating curriculum quality as a proxy for research progress, and somewhat ignoring things like funding and operations quality. The question is only meant to address worries of research slowdowns.

Comment by taygetea on Making Less Wrong Great Again · 2016-06-01T20:25:46.843Z · LW · GW

I logged in just to downvote this.

Comment by taygetea on Why CFAR? The view from 2015 · 2015-12-20T19:40:44.795Z · LW · GW

I could very well be in the grip of the same problem (and I'd think the same if I was), but it looks like CFAR's methods are antifragile to this sort of failure. Especially considering the metaethical generality and well-executed distancing from LW in CFAR's content.

Comment by taygetea on Why CFAR? The view from 2015 · 2015-12-20T19:37:26.686Z · LW · GW

There are a few people who could respond who are both heavily involved in CFAR and have been to Landmark. I don't think Alyssa was intending for a response to be well-justified data, just an estimate, which there is enough information for.

Comment by taygetea on Ask and ye shall be answered · 2015-09-19T00:45:24.548Z · LW · GW

Unrelated to this particular post, I've seen a couple of people mention that your ideas of late are somewhat scattered and unorganized, and in need of some unification. You've put out a lot of content here, but I think people would definitely appreciate some synthesis work, as well as directly addressing established ideas about these subproblems as a way of grounding your ideas a bit more. "Sixteen main ideas" is probably in need of synthesis or merger.

Comment by taygetea on Stupid Questions September 2015 · 2015-09-09T07:01:57.732Z · LW · GW

To correct one thing here: the Bussard ramjet has drag effects, because the interstellar hydrogen it scoops has to be accelerated up to the ship's speed. It can only get you to about 0.2c, making it pretty pointless to bother if you have that kind of command over fusion power.
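A rough sketch of where a limit like that comes from (my own back-of-envelope numbers, not something from the original thread): in the ship's frame, scooped-up mass arriving at speed $v$ costs momentum, while fusing and expelling it at exhaust speed $v_e$ supplies thrust, so net thrust vanishes once the ship approaches the exhaust speed.

```latex
% Idealized thrust-drag balance for a ramjet scooping mass at rate $\dot{m}$:
\[
  F_{\text{net}} = \dot{m}\,(v_e - v)
  \quad\Rightarrow\quad
  v_{\max} \approx v_e .
\]
% For proton-proton fusion, a fraction $\varepsilon \approx 0.007$ of the
% scooped rest mass becomes exhaust kinetic energy, so roughly
\[
  v_e \approx c\,\sqrt{2\varepsilon} \approx 0.12c .
\]
```

More careful treatments shift this into the 0.1-0.2c range depending on assumptions, which matches the figure above.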

Comment by taygetea on Rudimentary Categorization of Less Wrong Topics · 2015-09-05T08:49:01.552Z · LW · GW

I would not call this rudimentary! This is excellent. I'll be using this.

Didn't someone also do this for each post in the sequences a while back?

Comment by taygetea on Lesswrong real time chat · 2015-09-05T07:33:04.711Z · LW · GW

There's been quite a bit of talk about partitioning channels. And the #lesswrong sidechannels sort of handle it. But it's nowhere near as good. I'm starting to have ideas for a Slack-style interface in a terminal... but that would be a large project I don't have time for.

Comment by taygetea on Open Thread August 31 - September 6 · 2015-09-02T17:34:41.762Z · LW · GW

Alright, I'll be a little more clear. I'm looking for someone's mixed deck, on multiple topics, and I'm looking for the structure of the cards: things like the length of each section, amount of context, title choice, amount of topic overlap, and number of cards per large-scale concept.

I'm really not looking for a deck that was shared because its information transfers easily, like the NATO alphabet; I'm looking for how other people go about creating cards for new knowledge.

I am missing a big chunk of intuition on learning in general, and this is part of how I want to fix it. I also don't expect people to really be able to answer my questions on it, and I don't expect that I've gotten every specification. Which is why I wanted the example deck.

Edit: So I can't pull a deck off Ankiweb because I want the kind of decks nobody puts on Ankiweb.

Comment by taygetea on Open Thread August 31 - September 6 · 2015-09-02T13:06:12.399Z · LW · GW

Is anyone willing to share an Anki deck with me? I'm trying to start using it. I'm running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.

Comment by taygetea on Magnetic rings (the most mediocre superpower) A review. · 2015-08-05T08:05:58.536Z · LW · GW

Does anyone have or know anyone with a magnetic finger implant who can compare experiences? I've been considering the implant. If the ring isn't much weaker, that would be a good alternative.

Comment by taygetea on MIRI Fundraiser: Why now matters · 2015-07-27T18:47:58.994Z · LW · GW

So, to my understanding, doing this in 2015 instead of 2018 is more or less exactly the sort of thing that gets talked about when people refer to a large-scale necessity to "get there first". This is what it looks like to push for the sort of first-mover advantage everyone knows MIRI needs to succeed.

It seems like a few people I've talked to missed that connection. They support the requirement for a first-mover advantage, and they support a MIRI-influenced value alignment research community, but they perceive you as asking for more money than you need! Making an effort to remind people more explicitly why MIRI needs to grow quickly may be valuable. Link the effect ("fundraiser") to the cause ("value learning first-mover").

Comment by taygetea on Bragging Thread July 2015 · 2015-07-15T07:13:19.578Z · LW · GW

That's a pretty large question. I'd love to, but I'm not sure where to start. I'll describe my experience in broad strokes to start.

Whenever I do anything, I quickly acclimate to it. It's very difficult to remember that things I know how to do aren't trivial for other people. It's way more complex than that... but I've been sitting on this text box for a few hours. So, ask a more detailed question?

Comment by taygetea on Bragging Thread July 2015 · 2015-07-14T11:56:27.020Z · LW · GW

This month (and a half), I dropped out of community college, raised money as investment in what I'll do in the future, moved to Berkeley, got very involved in the rationalist community here, smashed a bunch of impostor syndrome, wrote a bunch of code, got into several extremely promising and potentially impactful projects, read several MIRI papers, and kept being urged to involve myself further with their research.

I took several levels of agency.

Comment by taygetea on Open Thread, May 11 - May 17, 2015 · 2015-05-17T22:57:13.911Z · LW · GW

Hi. I don't post much, but if anyone who knows me can vouch for me here, I would appreciate it.

I have a bit of a Situation, and I would like some help. I'm fairly sure it will be positive utility, not just positive fuzzies. Doesn't stop me feeling ridiculous for needing it. But if any of you can, I would appreciate donations, feedback, or anything else over here: http://www.gofundme.com/usc9j4

Comment by taygetea on Open Thread, May 11 - May 17, 2015 · 2015-05-15T22:40:41.923Z · LW · GW

I've begun to notice discussion of AI risk in more and more places in the last year. Many of them reference Superintelligence. It doesn't seem like confirmation bias or the Baader-Meinhof effect, not really. It's quite an unexpected change. Have others encountered a similar broadening in the sorts of people you encounter talking about this?

Comment by taygetea on If you could push a button to eliminate one cognitive bias, which would you choose? · 2015-04-10T06:31:02.940Z · LW · GW

Typical Mind Fallacy. Allows people to actually cooperate for once. One of the things I've been thinking about is how one person's fundamental mind structure is interpreted by another as an obvious status grab. I want humans to better approximate Aumann's Agreement Theorem. Solve the coordination problem, solve everything.

Comment by taygetea on In what language should we define the utility function of a friendly AI? · 2015-04-06T01:07:10.494Z · LW · GW

Determining the language to use is a classic case of premature optimization. Whatever the case, it will have to be provably free of ambiguities, which leaves us with programming languages. In addition, in terms of the math of FAI, we're still at the "is this Turing complete" stage of development, so it doesn't really matter yet. I guess one consideration is that the algorithm design is going to take way more time and effort than the programming, and the program has essentially no room for bugs (Corrigibility is an effort to make it easier to test an AI without it resisting). So in that sense, it could be argued that the lower-level the language, the better.

Directly programming human values into an AI has always been the worst option, partly for the reason you give. In addition, the religious concept you describe breaks trivially when two different beings have different or conflicting utility functions, so acting as if they were the same leads to a bad outcome. A better option is to construct a scheme where the smarter the AI gets, the better it approximates human values, using its own intelligence to determine them, as in coherent extrapolated volition.

Comment by taygetea on Against the internal locus of control · 2015-04-04T16:54:06.852Z · LW · GW

I think I see the problem. Tell me what your response to this article is. Do you see messy self-modification in pursuit of goals, at the expense of a bit of epistemic rationality, as a valid option to take? Is Dark == Bad? In your post, you say that it is generally better not to believe falsehoods. My response is that outcomes which depend on what you expect to happen are the exception to that heuristic.

Life outcomes are in large part determined by a background you can't change, but expecting to be able to change them will lead you to ignore fewer opportunities to get out of that situation. This post about luck is also relevant.

Comment by taygetea on Futarchy and Unfriendly AI · 2015-04-04T16:34:27.179Z · LW · GW

I can't say much about the consequences of this, but it appears to me that both democracy and futarchy are efforts to more closely approximate something along the lines of a CEV for humanity. They have the same problems, in fact. How do you reconcile mutually exclusive goals of the people involved?

In any case, that isn't directly relevant, but linking futarchy with AI caused me to notice that. Perhaps that sort of optimization style, of getting at what we "truly want" once we've cleared up all the conflicting meta-levels of "want-to-want", is something that the same sorts of people tend to promote.

Comment by taygetea on Bitcoin value and small probability / high impact arguments · 2015-03-31T17:18:22.525Z · LW · GW

Nitpick: BTC can be worth effectively less than $0 if you buy some and then the price drops. But in a Pascalian scenario, that's a rounding error.

More generally, the difference between a Mugging and a Wager is that the Wager has a low opportunity cost for a low chance of a large positive outcome, while the Mugging is about avoiding a large negative one. So unless you've bet all the money you have on Bitcoin, it maps much better to a Wager scenario than a Mugging. This plays out in the common reasoning of "There's a low chance of this becoming extremely valuable. I will buy a small amount corresponding to the EV of that chance, just in case".
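To make the Wager structure concrete, here's a minimal sketch with made-up numbers (none of them are claims about Bitcoin's actual odds); the point is that the worst case is bounded by the small stake, unlike a Mugging, where you're staking the avoidance of a large loss.

```python
# Toy EV calculation for a "Wager": small stake, small chance of a big win.
# All numbers here are hypothetical, for illustration only.
stake = 100.0        # the small amount bought "just in case"
p_big = 0.01         # subjective chance of the extremely-valuable outcome
multiplier = 500.0   # payoff factor in that outcome

# Assume total loss of the stake otherwise, which bounds the downside.
ev = p_big * stake * multiplier + (1 - p_big) * 0.0 - stake
print(f"worst case: -${stake:.0f}, expected value: {ev:+.0f}")
# -> worst case: -$100, expected value: +400
```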

Edit: I may have misread, but just to make sure, you were making the gold comparison as a way to determine the scale of the mentioned large positive outcome, correct? And my jump to individual investing wasn't a misinterpretation?

Comment by taygetea on Michael Oakeshott's critique of something-he-called-rationalism · 2015-03-29T02:03:46.611Z · LW · GW

The entire point of "politics is the mind-killer" is that no, not even here is immune to tribalistic idea-warfare politics. The politics just get more complicated. And the stopgap solution, until we figure out a way around that tendency (which doesn't appear reliably avoidable), is to sandbox the topic and keep it limited. You should have a high prior that a belief that you can be "strong" is Dunning-Kruger talking.

Comment by taygetea on The great decline in Wikipedia pageviews (condensed version) · 2015-03-29T01:50:14.562Z · LW · GW

This would rely on a large fraction of pageviews being from Wikipedia editors. That seems unlikely. Got any data for that?

Comment by taygetea on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-28T15:35:01.136Z · LW · GW

You could construct an argument about needing to explicitly rehearse system-2 ethics on common situations, to make sure you associate those ethics implicitly with normal situations and not just contrived edge cases. But even that seems a bit too charitable. And it would also be easily fixed if so.

Comment by taygetea on Where can I go to exploit social influence to fight akrasia? · 2015-03-28T12:44:59.681Z · LW · GW

From my experiences trying similar things over IRC, I have found that the lack of anything holding you to your promises is definitely a detriment for most people. I have found a few for whom that's not the case, but they're very much the exception. That's definitely a failure mode to look out for; doing this online (especially in text) won't work for many people. In addition, this discrepancy can create friction between people.

The general structure of the failure tends to be one person feeling vaguely bad about not talking as much, or about missing a session. Then, when they don't have many channels for viscerally receiving signals of disapproval, of the kind that would make them uncomfortable enough to go through with it even when they don't want to, it becomes easier to skip the next one too. Schelling fences are easier to break without face-to-face interaction.

There should be ways to bypass that problem. One of the memes around LW is actively reinforcing positive things, instead of relying on implied approval. If you can create a culture of actively rewarding success and treating apathy as something to be stamped out at every point, you can make it work. You can also make a point of creating norms where people go out of their way to help someone who falls behind figure out what the true problem is, instead of offering silence or simple berating. Ideas around Tell Culture can help with this. Unfortunately, it also requires diverting a lot of focus into preserving those conditions. Creating community norms is hard, but that seems like the way to avoid the problem.

I don't mean to imply that you want to start a community around this along the lines of the LW study hall, but this is what I have found from my attempts. Maybe someone will find it helpful.

Comment by taygetea on The Fermi paradox as evidence against the likelyhood of unfriendly AI · 2013-08-02T07:59:09.379Z · LW · GW

Relating to your first point, I've read several stories that explore that in reverse: AIs (whether F or UF is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem that's solvable with a wider definition of the sort of stuff the AI is supposed to be Friendly to, and I'd hope aliens would think of that, but it's certainly possible.

Comment by taygetea on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-28T03:59:55.823Z · LW · GW

According to Quirrell, yes, they are. "Anything with a brain". And I notice that you've only looked at what we've directly seen. The presence of spells like the ones you mentioned leads me to think that you can do more directed things with spells Harry hasn't come across yet.

Comment by taygetea on Norbert Wiener on automation and unemployment · 2013-07-27T21:49:11.897Z · LW · GW

the second, cybernetic, industrial revolution "is [bound] to devalue the human brain, at least in its simpler and more routine decisions"

It certainly seems like he considered it, at least on a basic level, enough to be extrapolated.

Comment by taygetea on Open thread, July 23-29, 2013 · 2013-07-26T06:15:23.106Z · LW · GW

Well, I did say it far outweighed it. Even that's less of an inconvenience in my mind, but that's getting to be very much a personal preference thing.

Comment by taygetea on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-26T06:10:35.845Z · LW · GW

Creating arbitrary animals that are barely alive, don't need food, water, air, or movement, and are made of easily workable material that's also good as armor seems like a good place to start, and it's within the bounds of magic. This isn't as absurd as it seems. Essentially living armor plates. You'd want them to be thin so you could have multiple layers, to fall off when they die, and various similar things. Or maybe on a different scale, like scale or lamellar armor.

Comment by taygetea on Open thread, July 23-29, 2013 · 2013-07-26T06:05:38.769Z · LW · GW

The messiness and the potential for really unpleasant sounds, in my mind, far outweigh the need for a specific type of dry-erase marker. Though that might be related to how easily sounds can be unpleasant to me in particular.