Comment by magfrump on Norms of Membership for Voluntary Groups · 2018-12-31T18:51:15.927Z · score: 6 (5 votes) · LW · GW

I think an important point with this system (and RE: "Not a Taxonomy") is that it's possible to mix and match norms.

For example, in a recreational sports team you see inclusion and membership having Civic norms (sometimes moving slightly toward Guest norms for something like pickup games) but praise and feedback being closer to Kaizen norms.

I bring up this specific example because I think it's the default assumption I made about the sort of space LessWrong was when I discovered it. In particular because the cost of admitting additional members is very low, I expect the minimum standards for expertise in the community to be very low, but the expectations around feedback and discussion to be goal-driven. This contrasts with something like a sports team or a workplace, where there is often a limit on the number of people who can join a community or a high cost to adding more members, and each member relies directly on the work of others.

Comment by magfrump on A LessWrong Crypto Autopsy · 2018-02-08T07:04:42.506Z · score: 4 (1 votes) · LW · GW

Strongly agree that I probably would have bought some crypto on LW advice had there been a nearby meetup to go through the process of doing it. Otherwise, my priors against giving my credit card info (or whatever) to strange websites were too strong for me to believe I would ever successfully follow through on the strategy.

Comment by magfrump on Rules of variety · 2017-12-12T00:45:53.176Z · score: 5 (5 votes) · LW · GW

I have an explicit but vague memory from childhood about doing exactly this, not on an essay question, but on a silly questionnaire like "What are you thankful for this Thanksgiving?"

All the other kids wrote things like "I'm thankful for the food and my family," and I had a very difficult time with it because I felt it simply wasn't allowed for me to be thankful for food or for family (or a couple of other things that had already been said aloud), and I had trouble thinking of anything else to be thankful for.

I remember someone eventually understanding why I was having trouble, but I don't remember how they reacted or what ended up happening.

Comment by magfrump on Success and Fail Rates of Monthly Policies · 2017-12-12T00:38:56.300Z · score: 7 (2 votes) · LW · GW

Something I noticed in your summarization process is that you seem to keep a lot of records about what you've done; for example, you mentioned that when coming up with a new monthly theme you went over your activities from every day of the previous month. You also mentioned that your first month of the year was structured around data. Can you give some more detail about what kind of records you keep and how you use them? And in particular, what do you think is the absolute minimum amount of information necessary to implement something like this?

(I ask because I currently forget most of what I do and it seems like that would make it very difficult to take any of this advice.)

LDL 9: When Transfer Learning Fails

2017-12-12T00:22:48.316Z · score: 7 (3 votes)

LDL 8: A Silly Realization

2017-12-06T02:20:44.216Z · score: 9 (5 votes)
Comment by magfrump on Updates from Boston · 2017-12-06T00:07:06.823Z · score: 19 (6 votes) · LW · GW

Regarding bureaucracy day:

1) Is there a list of the sorts of tasks that have been accomplished? I often find myself questioning whether there are bureaucratic things I should be engaging with but am not remembering.

2) I really want someone near me (south bay) to host a bureaucracy day :|

Comment by magfrump on DeepMind article: AI Safety Gridworlds · 2017-12-04T02:23:36.591Z · score: 11 (3 votes) · LW · GW

I don't interpret this as an attempt to make tangible progress on a research question, since it presents an environment and not an algorithm. It's more like an actual specification of a (very small) subset of problems that are important. Without steps like this I think it's very clear that alignment problems will NOT get solved--I think they're probably (~90%) necessary but definitely not (~99.99%) sufficient.

I think this is well within the domain of problems that are valuable to solve for current ML models and deployments, and not in the domain of constraining superintelligences or even AGI. Because of this I wouldn't say that this constitutes a strong signal that DeepMind will pay more attention to AI risk in the future.

I'm also inclined to think that any successful endeavor at friendliness will need both mathematical formalisms for what friendliness is (i.e. MIRI-style work) and technical tools and subtasks for implementing those formalisms (similar to those presented in this paper). So I'd say this paper is tangibly helpful and far from complete regardless of its position within DeepMind or the surrounding research community.

LDL 7: I wish I had a map

2017-11-30T02:03:57.713Z · score: 26 (9 votes)
Comment by magfrump on The Darwin Results · 2017-11-25T18:19:29.285Z · score: 6 (3 votes) · LW · GW

I am very proud of myself for calling this one.

Comment by magfrump on The Darwin Pregame · 2017-11-23T02:44:38.927Z · score: 16 (4 votes) · LW · GW

The obvious choice in this environment is a clique-y defensebot: send the clique signal and cooperate with clique members, but against everyone else act as a defensebot rather than an attackbot.

Since you wouldn't use this logic against other cliquebots, it would be hard for them to punish you without giving up the ability to cooperate with each other. You'll outperform the other cliquebots if cliquebots come to dominate, by free-riding on their punishments. In the mid game, either cliquebots dominate with you slightly ahead and you win, or cliquebots die off but you've survived the early game and still cooperate with the rest of the pool, so you get the benefits of having attackers in the pool without the costs.

If enough cliquebots defect like this, then I'm not sure what would happen; it will depend a lot on the initial distribution, I guess. If there are very few cliquebots, this bot is also vulnerable to other attackbots, but there are enough other possibilities (a cliquey equitybot or a cliquey foldbot) that I strongly suspect someone will win by defecting from the clique.
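
As a rough illustration, here is a minimal Python sketch of that strategy, assuming a Darwin-Game-style interface where each round both bots name an integer from 0 to 5 and score their numbers only if the two sum to at most 5; the handshake sequence, the 2/3 phase coordination, and the exact defensive policy are placeholders of mine, not details from the actual tournament.

```python
# Hypothetical sketch of a "clique-y defensebot"; all specifics are illustrative.

SIGNAL = [2, 0, 2]  # placeholder handshake that clique members open with

class CliqueyDefenseBot:
    def __init__(self):
        self.round = 0
        self.their_moves = []
        self.in_clique = True  # assume clique membership until the handshake fails

    def get_move(self):
        if self.round < len(SIGNAL):
            move = SIGNAL[self.round]
        elif self.in_clique:
            # Cooperate with clique members: alternate 2 and 3 so each round sums to 5
            # (phase coordination with the partner is elided in this sketch).
            move = 2 if self.round % 2 == 0 else 3
        else:
            # Defend rather than attack: split fairly with fair players, but answer
            # a 3 with a 3 so an attacker gains nothing by bullying us.
            last = self.their_moves[-1] if self.their_moves else 2
            move = 3 if last >= 3 else (2 if self.round % 2 == 0 else 3)
        self.round += 1
        return move

    def see_result(self, their_move):
        self.their_moves.append(their_move)
        # Drop clique status as soon as the opponent deviates from the handshake.
        idx = len(self.their_moves) - 1
        if idx < len(SIGNAL) and their_move != SIGNAL[idx]:
            self.in_clique = False
```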

Comment by magfrump on Hogwarts House Primaries · 2017-11-23T02:08:51.453Z · score: 8 (3 votes) · LW · GW

As an unrelated aside, I often rename the Hogwarts houses as the four basic D&D classes, since the mapping is obvious. I also used to attach these to directives that are somewhat practical on a daily or weekly basis (but which I almost never checked in about or followed up on).

Gryffindor - Fighter - Do something you're afraid of

Ravenclaw - Wizard - Learn something new

Hufflepuff - Cleric - Help someone or contribute to a group

Slytherin - Thief - Benefit from work someone else has done

Comment by magfrump on LDL 2: Nonconvex Optimization · 2017-11-14T23:25:49.262Z · score: 4 (1 votes) · LW · GW

I definitely intended the implied context to be 'problems people actually use deep learning for,' which does impose constraints which I think are sufficient.

Certainly the claim I'm making isn't true of literally all functions on high dimensional spaces. And if I actually cared about all functions, or even all continuous functions, on these spaces then I believe there are no free lunch theorems that prevent machine learning from being effective at all (e.g. what about those functions that have a vast number of huge oscillations right between those two points you just measured?!)

But in practice deep learning is applied to problems that humans care about. Computer vision and robotics control problems, for example, are very common applications. For these problems there are some distributions of functions that empirically exist, and a simple model of those types of problems is that they can be locally approximated by Taylor series, over an area of positive size around any point of the domain that you care about, but that these local areas are stitched together essentially at random.

In that context, it makes sense that the directional second derivatives of a function might be independent of one another and would rarely all line up.

Beyond that, I'd expect that if you impose a measure on the space of such functions in some way (maybe limiting the number of patches and the growth rate of the power series coefficients), the density of functions with even one critical point would quickly approach zero, even though infinitely many such functions exist in an absolute sense.
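
A minimal numpy sketch of that intuition, using random symmetric Gaussian matrices as stand-in Hessians (an assumption of mine for illustration, not a claim about real loss surfaces): the fraction of "critical points" whose curvature is valley-like in every direction collapses as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_local_minima(dim, trials=2000):
    """Estimate how often a random symmetric 'Hessian' has all-positive eigenvalues,
    i.e. how often a random critical point would be a true local minimum."""
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        hessian = (a + a.T) / 2  # symmetrize so the eigenvalues are real
        if np.all(np.linalg.eigvalsh(hessian) > 0):
            count += 1
    return count / trials

for dim in (1, 2, 4, 8, 16):
    print(dim, fraction_local_minima(dim))  # fraction shrinks rapidly with dimension
```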

I got a little defensive thinking about this, since I felt like the context of 'deep learning as it is practiced in real life' was clear, but looking back at the original post it maybe wasn't outlined that way. Even so, I think your reply feels disingenuous, because you're explicitly constructing adversarial examples rather than sampling functions from some space in order to suggest that functions with many local optima are "common." If I start suggesting that deep learning is robust to adversarial examples, I have much deeper problems.

Comment by magfrump on [deleted post] 2017-10-27T17:17:21.597Z

Why did the SoE and OoH switch spheres?

And anyway, the Void Engineers are obviously there to pick up the slack of the dying Dreamspeakers and get spirit back into the Technocratic paradigm.

LDL 6: Deep learning isn't about neural networks

2017-10-27T17:15:20.115Z · score: 14 (4 votes)
Comment by magfrump on [deleted post] 2017-10-27T04:24:25.955Z

Tradesies for the Order of Hermes? They can run universities or something?

Comment by magfrump on LDL 5: What to do first · 2017-10-26T20:17:00.561Z · score: 4 (1 votes) · LW · GW

Thanks! Yeah a lot of the "content" that I have right now is on the order of "I spent all day writing a function that could have been a single line library call :(" because it makes me keep working the next day even if I have to spend all day on another function that could have been a single line library call. Hopefully I'll "get past" some of that at some point and then be able to conduct some experiments that are interesting in and of themselves and/or provide some notebooks alongside things, which could move in the direction of front page stuff instead of personal life blogging.

LDL 5: What to do first

2017-10-26T18:11:18.412Z · score: 17 (4 votes)

LDL 4: Big data is a pain in the ass

2017-10-25T20:59:41.007Z · score: 15 (5 votes)
Comment by magfrump on [deleted post] 2017-10-25T06:31:34.019Z

If we're sympathizing with the technocracy in this thread I just want to note that the Void Engineers are blue and also precious.

Comment by magfrump on [deleted post] 2017-10-25T06:19:56.239Z

Polyamory feels green to me. My natural state of being involves falling in love with lots of people, and what I want out of polyamory is to be genuine and accepted in that and to be able to live that part of my life. I can easily imagine how it would be other colors for other people though. For example, acceptance mostly makes sense with more committed partners, and someone who is committed to solo poly but otherwise similar to me might feel like their poly is mostly black.

Comment by magfrump on Time to Exit the Sandbox · 2017-10-24T21:50:01.974Z · score: 8 (2 votes) · LW · GW

> I have more urgent things to do in this world than sitting and writing down stuff I already know.

I definitely understand this, but I would suggest that you may be undervaluing sitting and writing down stuff you already know.

Comment by magfrump on Time to Exit the Sandbox · 2017-10-24T20:29:04.766Z · score: 4 (1 votes) · LW · GW

I was interested in the example you link to at the end of the article, but it didn't seem to me like there was enough specificity (especially in the red underlined concepts which you cited as background--I expected them to be links to their own full length posts instead of just half-sentence descriptions) to actually get everything working without individual oversight.

Also, a lot of the claims seemed very specific in a way that makes me skeptical that they are universal--and no citations or other discussion of how you know any of this was provided.

LDL 3: Deep Learning Pedagogy is Hard

2017-10-24T18:15:27.233Z · score: 9 (3 votes)
Comment by magfrump on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T00:22:43.347Z · score: 4 (1 votes) · LW · GW

I think it is evidence that 'simple general tools' can be different from one another along multiple dimensions.

> ...we solve most problems via complex combinations of simple tools. Combinations so complex, in fact, that our main issue is usually managing the complexity, rather than including the right few tools.

This is a specific instance of complex details being removed to improve performance, where using the central correct tool was the ONLY moving part.

> And thus the first team to find the last simple general tool needed might "foom" via having an enormous advantage over the entire rest of the world put together. At least if that one last tool were powerful enough. I disagree with this claim, but I agree that neither view can be easily and clearly proven wrong.

I am interpreting your disagreement here to mean that you disagree that any single simple tool will be powerful enough in practice, not that one couldn't be in theory. I hope you agree that if someone acquired all the magic powers ever written about in fiction, with no drawbacks, they would have an enormous advantage over the rest of the world combined. If that were the simple tool, it would be big enough.

Then if the question is "how big of an advantage can a single simple tool give," and the observation is, "this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools," then I would be more concerned with bigger, faster moving simple tools in the future having different types or scales of impact.

Comment by magfrump on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:42:52.933Z · score: 13 (4 votes) · LW · GW

I think there are some strong points supporting the latter possibility, like the lack of similarly high profile success in unsupervised learning and the use of massive amounts of hardware and data that were unavailable in the past.

That said, I think someone five years ago might have said "well, we've had success with supervised learning but less with unsupervised and reinforcement learning." (I'm not certain about this though)

I guess in my model AGZ is more like a third or fourth data point than a first data point--still not conclusive and with plenty of space to fizzle out but starting to make me feel like it's actually part of a pattern.

Comment by magfrump on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T18:04:59.543Z · score: 12 (5 votes) · LW · GW

My sense is that AGZ is a high-profile example of how fast the trend of neural nets (which have existed mathematically in essentially modern form since the 60s) can make progress. The same techniques have had a huge impact throughout AI research, and I think counting this as a single data point in that sense substantially undercounts the evidence. For example, image-recognition benchmarks have used the same technology, as have Atari-playing AIs.

Comment by magfrump on LDL 2: Nonconvex Optimization · 2017-10-23T17:56:34.642Z · score: 4 (1 votes) · LW · GW

The point is that they really are NOT still there in higher dimensions.

Comment by magfrump on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-22T17:59:39.860Z · score: 41 (18 votes) · LW · GW

I appreciate your posting this here, and I do agree that any information from AlphaGo Zero is limited in our ability to apply it to forecasting things like AGI.

That said, this whole article is very defensive, coming up with ways in which the evidence might not apply rather than showing that it isn't evidence.

I don't think Eliezer's article was a knock-down argument, and I don't think anyone, including him, believes that. But I do think the situation is some weak evidence in favor of his position over yours.

I also think it's stronger evidence than you seem to think, even according to the framework you lay down here!

For example, a previous feature of AI for playing games like Chess or Go was capturing information about the structure of the game via some complex combination of tools. In AlphaGo Zero, however, very little specific information about Go is required. The change in architecture actually subsumes some amount of the combination of tools needed.

Again I don't think this is a knockdown argument or very strong or compelling evidence--but it looks as though you are treating it as essentially zero evidence which seems unjustified to me.

Comment by magfrump on LDL 2: Nonconvex Optimization · 2017-10-21T01:58:10.171Z · score: 4 (1 votes) · LW · GW

Instead of hills or valleys, it seems like the common argument is in favor of most critical points in deep neural networks being saddle points

I agree, the point of the digression is that a saddle point is a hill in one direction and a valley in the other.

The point is that because it's a hill in at least one direction, a small perturbation (like the change in your estimate of the cost function from one mini-batch to the next) gets you out of it, so it's not a problem.

> Is this the probability of a point, given that it is a critical or near-critical point, being an optima?

There, p is the probability that, at a near-critical point, the behavior in a given direction is valley-like rather than hill-like. If any of the directions are hill-like you can roll down those directions, so for a critical point to trap you it needs to be valley-like in every direction. It's a stupid computation that isn't actually well defined (the probability I'm estimating is dumb, and I'm only considering one critical point when I should be asking how many points are "near critical" and factoring that in, among other things), so don't worry too much about it!
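
Spelling out the back-of-the-envelope estimate under that independence assumption (my reading of the comment; as noted above, the computation isn't really well defined):

$$\Pr[\text{critical point traps you}] \approx \Pr[\text{all } n \text{ directions are valley-like}] = p^{\,n},$$

which shrinks exponentially with the number of directions; for example, $p = 1/2$ and $n = 100$ already gives roughly $10^{-30}$.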

Comment by magfrump on LDL 2: Nonconvex Optimization · 2017-10-21T01:55:43.074Z · score: 4 (1 votes) · LW · GW

Edit: pictures added, though in retrospect I'm not sure that they really add that much to the post.

Fair enough; if your comment is at +5 or more by Monday I'll go back and figure out the formatting.

Comment by magfrump on Learning Deep Learning: Joining data science research as a mathematician · 2017-10-21T01:16:07.417Z · score: 4 (1 votes) · LW · GW

I've been doing machine learning for about 2.5 years now, and I've been using Python for longer than that; I'm also a big Jupyter notebook fan. I still almost always have a bit of trouble reading other people's code; what I'm really hoping is that I'll be able to dive into the Keras documentation more as this undertaking moves along.

I'll check out the blogs also, thanks for the references!

Comment by magfrump on LDL 2: Nonconvex Optimization · 2017-10-20T23:52:57.357Z · score: 4 (1 votes) · LW · GW

Thanks for this explanation, but tbh I'm way too lazy to add them in now.

LDL 2: Nonconvex Optimization

2017-10-20T18:20:54.915Z · score: 26 (10 votes)

Learning Deep Learning: Joining data science research as a mathematician

2017-10-19T19:14:01.823Z · score: 29 (9 votes)
Comment by magfrump on [deleted post] 2017-10-18T20:47:40.278Z

When a post is in text it's already in or close to the form of a discussion, and the post gives language and many specific places to reference to continue the discussion.

For this post, engaging with each image means moving out of a verbal space and if you want to type comments you need to come up with your own schema for determining what part you're responding to and how to discuss it in words. That's a lot more effort, resulting in fewer comments.

I think it's also very important to distinguish "makes sense" or "is preferred" from "provokes comments" here--the post has about the same level of upvotes as your usual posts, just fewer comments. That means people are probably reading and appreciating the post at similar rates, just not commenting because it's more difficult without being obviously more rewarding.

Comment by magfrump on Anti-tribalism and positive mental health as high-value cause areas · 2017-10-18T00:18:00.036Z · score: 7 (2 votes) · LW · GW

A possible next-step conclusion one could draw from this is that it is worth expending effort to make the people around you feel safe. As Vaniver mentions, it's generally the combination of a value and an approach that leads to calling something a high-value cause, but as you mention, we do have some experience with reducing tribalism in our own lives.

I'd be interested in seeing some off-the-cuff evaluations of being patient with people, to see if there's a reasonable upper and/or lower bound on how patient we should be, but I have no idea how to even figure out what numbers I'd need to make up to do that evaluation myself.

Comment by magfrump on [deleted post] 2017-10-17T18:27:37.682Z

I don't think I disagree with the claim you're making here--I think formal background in things like decision theory is a big contributor to day-to-day rationality. But I think posts detailing formal background on this site will often either be speaking to people who already have that background, and be boring, or be speaking to people who do not, who would be better referred to textbooks or online courses.

On the other hand, if someone wanted to take on the monumental task of making it possible to run interactive Jupyter notebooks here, adding coding exercises, and starting to build online courses on the site, I'd be excited for that to happen--it just seems like, with the current site setup, building more formal background will be a struggle compared to other resources.

Comment by magfrump on [deleted post] 2017-10-17T17:51:29.728Z

Thanks for responding! I think you make fair points--I hadn't seen the previous thread in detail; I just try to read all the posts, but AFAIK there isn't a good way of tracking which comment threads continue to live for a while.

I think the center of our disagreement on point 2 is a matter of the "purpose of LessWrong": if you intend to use it as a place to have communal discussions of technical problems which you hope to make progress on through posts, then I agree that introducing more formal background is necessary even if everyone has the needed foundations. I am skeptical that this will be a likely outcome, since the blog has the cross purposes of building communities and general life rationality, and building technical foundations is rough for a blog post and might better be assigned to textbooks and meetup groups. That limits engagement much more heavily, and I definitely don't mean to suggest you shouldn't try, but I wasn't really in that mindset when first reading this. I had a more general response along the lines of "this person wants to do something mathematically rigorous but is a bit condescending and hasn't written anything interesting." I hope/believe that will change in future posts!

Comment by magfrump on Creating Space to Cultivate Skill · 2017-10-17T01:01:44.178Z · score: 3 (1 votes) · LW · GW

I think it would be very useful to talk about a specific skill as a case study. When I considered skills that I might be interested in cultivating (playing guitar, juggling, or learning a new programming language came to mind for me), many of the steps varied from unimportant to explicitly antagonistic. If I want to practice a skill like programming, where my feedback loop necessarily involves searching the internet for syntax, my set of relevant distractions will be very different from the one I face if I need to go outside to an open space to juggle clubs.

Comment by magfrump on [deleted post] 2017-10-16T23:21:45.434Z

I feel like there are some good intentions behind this post but I didn't feel like I got anything from reading it. I know it can be disconcerting to get downvotes without feedback so I'll try to summarize what feels off to me.

  1. You start this post by saying you "always disagreed [with the community]" but don't outline any specific disagreements. In particular your concluding points sound like they're repeating Eliezer's talking points.

  2. You suggest that the community doesn't have a strong background in the formal sciences, but this seems not only unjustified but explicitly contradicted by the results of the various community surveys over the years--over 50% of the community works in computers, engineering, or math as of 2016. Of course, this has fluctuated over time and I don't want to push too hard to group professors in AI with people who work tech support, but if anything my experience is that the community is substantially more literate in formal logic, math, etc., than could reasonably be expected.

  3. I'm guessing your work in logic is really interesting and that we'd all be interested in reading your writing on the subject. But the introduction you give here doesn't distinguish between a possible author who is an undergrad and one who is an Ivy League professor. In particular, outside of a couple of buzzwords, you don't tell us much about what you study, how much of an expert you are in the subject, or why formal logic specifically should be relevant to AGI.

My guess is you have some interesting things to share with the community, so I hope this is helpful for you writing your next posts for the LW audience and doesn't come off as too rude.

Comment by magfrump on Why no total winner? · 2017-10-16T23:03:47.977Z · score: 15 (5 votes) · LW · GW

Let me propose a positive, generative theory that generates multiple agents/firms/etc.

Any agent (let's say a nation) starts with a growth rate and a location. As it grows, it gains more power at its location, and power nearby at levels declining exponentially with distance. If a new nation starts at some minimum level of power, not too late and somewhat far from a competing nation, then it can establish itself and will end up with a border with the other nation where their power levels are equal.

Certainly there are growth rates for different nations that would give us fluctuating or expanding borders for new nations, or that would wipe out new nations. Major changes in history could be modeled as different coefficients on the rates of growth or the exponent of decay (or maybe more clearly as different measures of distance, especially with inventions like horses, boats, etc.).

In particular, if the growth rates of all nations are linear, then as long as a nation can come into existence it will be able to expand as it ages from size zero to some stable size. The linear rates could be 5000 and .0001 and the slower-growing nation would still be able to persist--just with control of much less space.

In practice, in history, some nations have become hundreds of times as large as others, but only up to a certain size; no single nation has taken over everything. There are a few ways this model could account for a different, single-nation scenario.

  1. Non-linear growth rates; in particular, growing at a rate that matches the rate of distance decay (no matter what those rates are) would overwhelm any nation growing at a linear rate. Probably it takes even less than that to overwhelm those nations.

  2. Massive changes to the distance decay. This has been attested historically at least in small part--European countries expanding into the Americas and Africa with boats, the Mongols with horses, even back to the Vikings. This is also analogous to principal-agent problems (and maybe Roman republicanism is another example?). Mathematically, if the exponents are similar it won't matter too much (or at least the "strongest" state will take "some time" to overpower the others), but an immediate, large change on the part of the strongest nation would cause it to overwhelm every other nation.

  3. Not having enough space for equilibrium. A new nation can only start in this model if it's far enough away from established nations to escape their influence. We don't see much in the way of new nations starting these days, for example.

This is a toy model that I literally came up with off the top of my head, so I don't mean to claim that it has any really thrilling analogies to reality that I haven't listed above.

I do think it's robustly useful in that if you massively change all the parameters numerically it should still usually predict that there will be many nations, but if you tweak the parameters at a higher level it could collapse into a singleton.
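
For concreteness, here is a minimal 1-D sketch of how such a model could be simulated; the specific numbers, the linear growth, and the founding rule are assumptions of mine for illustration, not parameters from the comment above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: a 1-D world, linear growth, exponential distance decay.
WORLD_SIZE = 100.0
DECAY = 0.1       # exponent of the distance decay
MIN_POWER = 1.0   # minimum power needed to found a new nation
STEPS = 200

nations = []  # each nation: dict with location, size, and growth rate

def power_at(nation, x):
    """Power a nation projects at position x: size decaying exponentially with distance."""
    return nation["size"] * np.exp(-DECAY * abs(nation["location"] - x))

for step in range(STEPS):
    # Existing nations grow linearly.
    for n in nations:
        n["size"] += n["growth"]
    # A new nation tries to appear at a random spot; it survives only if no
    # existing nation already projects more than MIN_POWER there.
    x = rng.uniform(0, WORLD_SIZE)
    if all(power_at(n, x) < MIN_POWER for n in nations):
        nations.append({"location": x, "size": MIN_POWER, "growth": rng.uniform(0.1, 2.0)})

print(f"{len(nations)} nations survive after {STEPS} steps")
```

Tweaking the numeric parameters mostly changes how many nations coexist, while changing the structural assumptions (e.g. making growth outpace the distance decay) is what would collapse the outcome toward a single winner.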

Comment by magfrump on [deleted post] 2017-10-16T20:05:32.043Z

My experience is that I have different inferential gaps with different people, depending on which features they share with me. In most of my life and conversations, the biggest gaps come from things like having read the Sequences. But among people who are similar to me on a lot of levels, coming from a different socio-economic background means we have other inferential gaps, especially in terms of the way we handle money or define personal success.

I do agree that there's an implicit weighting in mentioning some factors and not others, and I don't mean to endorse that exact implicit weighting 100%--just to note that we focus on some inferential gaps in our lives and not others for lots of reasons, and that there are lots of gaps coming from the factors he did mention that are important, large, and often invisible.

Comment by magfrump on Winning is for Losers · 2017-10-16T19:58:06.882Z · score: 6 (2 votes) · LW · GW

I think I might try using this as a LW landing page. It introduces some of the jargon at a nice pace--only a couple of pieces are actually unique--while hinting that there's a lot more, and points at reasons to think about weird fun transhumanism without having to commit to "believing in it," which seems like a great avenue for opening up curiosity.

That said, the single sentence near the beginning about libertarians not being engaged in the same sort of competitive world registered as praising a specific tribe, which sort of put me off (since it wasn't my tribe) before I even thought about engaging with specifics (i.e. distinctions between libertarians, liberaltarians, the American Libertarian Party, or whether the Green Party would work as an example). I can see now that this is a reference to a book, but I'd expect most readers not to get that reference (and many not to think of libertarians as their tribe), so there might be a better way to write that paragraph. Even just leaving that last sentence about libertarians out, or making it a footnote, would have made me feel better about it.

Comment by magfrump on Robustness as a Path to AI Alignment · 2017-10-10T22:49:53.183Z · score: 3 (0 votes) · LW · GW

I've been interested in the general question of adapting 'safety math' to ML practices in the wild for a while, but as far as I know there isn't a good repository of (a) math results with clear short term implications or (b) practical problems in current ML systems. Do you have any references for such things? (even just a list of relevant blogs and especially good posts that might be hard to track down otherwise would be very helpful)

Comment by magfrump on Distinctions in Types of Thought · 2017-10-10T21:54:52.069Z · score: 6 (2 votes) · LW · GW

This immediately reminded me of this paper which I saw on Short Science last week.

Comment by magfrump on Clueless World vs. Loser World · 2017-10-10T21:38:43.739Z · score: 3 (1 votes) · LW · GW

Ah, thanks

Comment by magfrump on Clueless World vs. Loser World · 2017-10-10T17:52:07.928Z · score: 6 (2 votes) · LW · GW

I would have read it the opposite way: that the first group is clueless (since they are unaware of the struggle) and the second group is losers (since they are losing out in this scenario in many ways). But I otherwise appreciate this clarification.

Comment by magfrump on [deleted post] 2017-10-09T22:40:09.847Z

This is going to feel like a bit of a detour, but I want to talk about parallel universes for a second.

I held my A button halfway pressed the entire time while reading this article.

Comment by magfrump on SSC Survey Results On Trust · 2017-10-06T19:37:02.729Z · score: 7 (2 votes) · LW · GW

It could be that the number of people who refused to make their data public was very small, but otherwise I agree that's inconsistent and needs to be clarified.

Comment by magfrump on [deleted post] 2017-10-05T18:57:35.320Z

I tend to have a sort of opposite problem to the Hufflepuff trap: I can run through the exercise and see practical actions I could take, but then do not find them compelling.

Often I can even pull myself all the way through to writing down the specifics of an exercise I could do within five minutes and then... I just leave it on the page and go back to playing phone games.

Some of the things you say about willpower at the end resonate more with me, but the place where willpower needs to be applied is different for me, and saying I need to use willpower doesn't result in my actually using it. Many posts in the past have discussed this problem and I don't want to derail the discussion (though I haven't really found any of them to help me much).

But moving forward from that, if there are people whose problems are more "flinch away from acknowledging the correct behavior" and my problems are more "fail to execute on the correct behavior" that suggests an interesting division of labor possibility for paired productivity. Unfortunately one of the important components in getting there is "hang out with other rationalists in a more personal setting" which is perhaps chief among my correct behaviors that I've failed to execute on.

Comment by magfrump on The Reality of Emergence · 2017-10-05T17:14:59.554Z · score: 3 (1 votes) · LW · GW

Honestly, I think that comment got away from me, and looking back on it I'm not sure I'd endorse anything except the wrap-up. I do think "from a quantum perspective, size is emergent" is true and interesting. I also think people use emergence as a magical stopword. But people use all kinds of technical terms as magical stopwords, so dismissing something on those grounds alone isn't quite enough--though maybe there is enough reason to say that this specific word is more confusing than helpful.

Comment by magfrump on The Reality of Emergence · 2017-10-04T23:03:12.616Z · score: 4 (2 votes) · LW · GW

I'm going to settle on a semi-formal definition of emergence which I believe is consistent with everything above, and run through some examples, because I think your post misrepresents them and the emergence in these cases is interesting.

Preliminary definition: a "property" is a function mapping some things to some axis.

Definition: a property is called emergent if a group of things is in its domain while the individual things in the group are not in its domain.

This isn't the usual use of "property" but I don't want to make up a nonsense word when a familiar one works just as well. In this case, "weighs >1kg" either isn't a property, or everything is in its domain; I'd prefer to say weight is the only relevant property. Either way this is clearly not emergent because the question always makes sense.

Being suitable for living in is a complicated example, but in general it is not an emergent property. In particular, you can still ask how suitable for living in half a house is; it's still in the domain of the "livability" property even if it has a much lower value. This is true all the way down to a single piece of wood sticking out of the ground, which can maybe be leaned against or used to hide from wind. If you break the house down into pieces like "planks of wood" and "positions on the ground" then I think it's true, if trivial, that livability is an emergent property of a location with a structure in some sense--it's the interactions between those that make something a house. And this gives useful predictions, such as "just increasing quality of materials doesn't make a house good" and "just changing location doesn't make a house good" even though both of these are normal actions to take to make a house better.

Being able to run Microsoft Windows is an emergent property of a computer, in a way that is very interesting to me when I want to build a computer from parts on NewEgg, which I've done many times. It has often failed for silly reasons, like "I don't have one of the pieces needed for the property to emerge." Like the end of the housing example, I think this is a simple case where we understand the interactions, but it is still emergent, and that emergence again gives concrete predictions, like "improving the CPU's ability to run Windows doesn't help if it stops the CPU from interacting with the rest of the system in the right way," which with domain knowledge becomes "improving the CPU's ability to run Windows doesn't help if you get a CPU with the wrong socket that doesn't fit in your motherboard."
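
A minimal Python sketch of the definition above, using made-up parts and weights purely for illustration: "weight" is defined for single parts and groups alike, so it isn't emergent, while "runs Windows" is only defined for a complete assembly, so it is emergent in the sense used here.

```python
from typing import Iterable

PARTS = {"cpu": 0.1, "motherboard": 1.2, "psu": 2.0}  # made-up weights in kg

def weight(parts: Iterable[str]) -> float:
    # Defined for any collection of parts, including a single one: not emergent.
    return sum(PARTS[p] for p in parts)

REQUIRED = {"cpu", "motherboard", "psu"}

def runs_windows(parts: Iterable[str]) -> bool:
    parts = set(parts)
    if not REQUIRED <= parts:
        # Outside the property's domain: the question doesn't make sense for an
        # incomplete set of parts, let alone a single part.
        raise ValueError("runs_windows is only defined for a complete assembly")
    return True  # placeholder for "the parts interact the right way"

print(weight(["cpu"]))                              # fine: 0.1
print(runs_windows(["cpu", "motherboard", "psu"]))  # fine: True
# runs_windows(["cpu"]) would raise: the property is emergent.
```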

I think this is useful, and I think it's very relevant to a lot of LW-relevant subjects.

If intelligence is an emergent property, then just scaling up or down processing power of an emergent system may scale intelligence up and down directly, or it might not--depending on the other parts of the system.

If competence is an emergent property, then it may rely on the combination of behavior and context and task. Understanding what happens to competence when you change the task even in practical ways through e.g. transfer learning is the same understanding that would help prevent a paperclip maximizer.

If ability to talk your way out of a box is an emergent property, then the ability to do it in private chat channels may depend on many things about the people talking, the platform being used to communicate, etc. In particular it also predicts quite clearly that reading the transcripts might not be at all convincing to the reader that the AI role-player could talk his way out of their box. It also suggests that the framing and specific setup of the exercise might be important, and that if you want to argue that a specific instance of making it work is convincing, there is a substantial amount of work to do remaining.

This is getting a bit rambly so I'm going to try to wrap it up.

With a definition like this, saying something has an emergent property relies on and predicts two statements:

  1. The thing has component parts

  2. Those component parts connect

These statements give us different frameworks for looking at the thing and the property, by looking at each individual part or at the interactions between sets of parts. Being able to break a problem down like this is useful.

It also says that answering these individual questions like "what is this one component and how does it work" and "how do these two components interact" are not sufficient to give us an understanding of the full system.

Comment by magfrump on The Reality of Emergence · 2017-10-04T22:24:42.746Z · score: 8 (3 votes) · LW · GW

As a single data point: I think DragonGod's definition of emergence is exactly what mine has been, and while I would have agreed with Eliezer's assessment of ESR's comment I think DragonGod's sense that it's the same would convince me the difference is just poor writing.

Comment by magfrump on Resolving human inconsistency in a simple model · 2017-10-04T17:23:39.348Z · score: 3 (1 votes) · LW · GW

I'd like to try running some simulations with a model like this, but I'm realizing that the environment needs to be more complicated than just generating a bunch of pairs of random numbers. But it would be interesting to see how different random circumstances cause a model like this to behave differently.

Comment by magfrump on [deleted post] 2017-10-01T17:57:03.498Z

I didn't engage with the different scenarios because they weren't related to my cruxes for the argument; I don't expect the nature of human intelligence to give us much insight into the nature of machine intelligence, in the same way that I don't expect bird-watching to help you identify different models of airplane.

I think there are good reasons to believe that safety-focused AI/ML research will have strong payoffs now; in particular, the flash crash example I linked above and things like self-driving car ethics are current practical problems. I do think this is very different from what MIRI does, for example, and I'm not confident that I'd endorse MIRI as an EA cause (though I expect their work to be valuable). But there is a ton of unethical machine learning going on right now, and I expect both that ML safety work can address substantial problems that already exist and that such research will contribute to both the social position and the theoretical development of AI safety in the future.

In a sense, I don't feel like we're entirely in a dark cave. We're in a cave with a bunch of glowing mushrooms, and they're sort of going in a line, and we can try following that line in both directions because there are reasons to think that'll lead us out of the tunnel. It might also be interesting to study the weird patterns they make along the way but I think that requires a better outside view argument, and the patterns they make when we leave the tunnel have a good chance of being totally different. Sorry if that metaphor got away from me.

Comment by magfrump on Call for cognitive science in AI safety · 2017-09-29T22:57:11.044Z · score: 3 (1 votes) · LW · GW

Yeah I think that's pretty clear

Comment by magfrump on Call for cognitive science in AI safety · 2017-09-29T22:23:14.460Z · score: 3 (1 votes) · LW · GW

Maybe just write "Cosigned," above the names?

Comment by magfrump on [deleted post] 2017-09-29T20:49:51.918Z
  1. I agree there is no consensus on the meaning of intelligence, and this is probably because "intelligence" isn't one thing but many things. In humans these things are often correlated but only weakly.

  2. As Scott argues, IQ gives an okay measure of some things under the intelligence umbrella. However, the economic impact of an AI will be measured entirely by job performance. And the correlation between IQ-like intelligence and job competency is pretty clearly already broken by existing AI expert systems. We may be able to get a better understanding of the human correlations by doing experiments.

  3. Understanding what IQ measures and how it is related to competencies will, I expect with ~90% confidence, turn out to be a human coincidence. Since we don't have a general notion of intelligence, IQ is a flawed measure applicable only to humans, and the connection between what IQ attempts to measure and the thing we care about is only present in humans, I don't expect models of IQ to improve our ability to forecast AI.

  4. Even if I agreed that IQ research measured a coherent notion of intelligence, and that there was a strong enough metaphor between human IQ and machine intelligence to give us better models of takeoff and help with strategy, this would not imply that funding IQ research is an effective tactic unless there is a community of researchers with well-aligned motivations who are capable of doing research aligned with these interests and who expect the results to be strategically useful in a foreseeable time frame. This may be true, but it's an important part of the argument that should be presented.

And some specific comments:

> If intelligence is not correlated with job performance or correlated only up to a certain point of performance, super intelligence's won't be super at job performance and they won't have as huge an impact as we might have expected.

If intelligence is not correlated with job performance, we should not expect economically disruptive AI to be "superintelligent." This already seems clear, since the AI that caused various flash crashes was not superintelligent. By the same analogy, this may be either convenient (for example, very corrigible) or inconvenient (for example, able to commit grave errors without understanding them) for safety purposes.

> It is unlikely to speed up the creation of intelligence directly as it is not working on a constructive view of intelligence but a descriptive one. It will point in a direction of research though

(Warning, epistemic status: hot take.) I think one of the major barriers in AI right now is a lack of clear functional analogies to existing intelligence systems. Building AI systems out of various pieces is very common (AlphaGo and self-driving cars both do this), and some of the most exciting recent advances, like the use of convnets in computer vision, are based on explicit analogies. I suspect that good descriptive models of human intelligence would be very powerful tools for advancing AI.

My overall takeaway here is:

  1. Having better descriptive models of human intelligence would be valuable and interesting, though it would possibly carry some dangers. This does not mean EAs should fund it.

  2. Being able to constrain takeoff scenarios would be very valuable for AI safety research. This does not mean there's a connection between takeoff scenarios and IQ.

Uncertainty in Deep Learning

2017-09-28T18:53:51.498Z · score: 3 (0 votes)

Why do people ____?

2012-05-04T04:20:36.854Z · score: 25 (28 votes)

My Elevator Pitch for FAI

2012-02-23T22:41:40.801Z · score: 15 (19 votes)

[LINK] Matrix-Style Learning

2011-12-13T00:41:52.281Z · score: 4 (13 votes)

[link] Women in Computer Science, Where to Find More Info?

2011-09-23T21:11:51.628Z · score: 3 (8 votes)

Computer Programs Rig Elections

2011-08-23T02:03:07.890Z · score: -2 (7 votes)

Best Textbook List Expansion

2011-08-08T11:17:33.462Z · score: 5 (10 votes)

Traveling to Europe

2011-05-18T22:48:30.933Z · score: 1 (4 votes)

Rationality Exercise: My Little Pony

2011-05-13T02:13:39.781Z · score: 13 (20 votes)

[POLL] Slutwalk

2011-05-08T07:00:38.842Z · score: -7 (29 votes)

What Else Would I Do To Make a Living?

2011-03-02T20:09:47.330Z · score: 15 (16 votes)

Deep Structure Determinism

2010-10-10T18:54:15.161Z · score: 1 (8 votes)