Posts

SSC Meetups Everywhere: Toulouse 2019-09-10T19:17:34.732Z · score: 3 (1 votes)
Layers of Expertise and the Curse of Curiosity 2019-02-12T23:41:45.980Z · score: 12 (7 votes)
Willpower duality (a very short essay) 2017-01-20T09:56:50.441Z · score: 5 (6 votes)
Open thread, Oct. 31 - Nov. 6, 2016 2016-10-31T21:24:05.923Z · score: 4 (5 votes)

Comments

Comment by gyrodiot on Beliefs: A Structural Change · 2018-05-07T09:07:38.659Z · score: 4 (2 votes) · LW · GW

I sometimes give a brief presentation of rationality to acquaintances, and I often stress the importance of being able to change your mind. Often, in the Sequences, this is illustrated by thought experiments, which sound a bit contrived when taken out of context, or by wide-ranging choices, which sound too remote and dramatic for explanatory purposes.

I don't encounter enough examples of day-to-day application of instrumental rationality, the experience of changing your mind, rather than the knowledge of how to do it. Your post has short glimpses of it, and I would very much enjoy reading a more in-depth description of these experiences. You seem to notice them, which is a skill I find very valuable.

On a more personal note, your post nudges me towards "write more things down", as I should track when I do change my mind. In other words, follow more of the rationality checklist's advice. I'm too often frustrated by my failure to notice things. So, thanks for this nudge!

Comment by gyrodiot on Should an AGI build a telescope to spot intergalactic Segways? · 2018-05-04T09:03:14.798Z · score: 6 (3 votes) · LW · GW

Thanks for your clarification. Even though we can't rederive Intergalactic Segways from unknown strange aliens, could we derive information about those same strange aliens by looking at the Segways? I'm reminded of some SF stories about this, and of our own work figuring out prehistoric technology...

Comment by Gyrodiot on [deleted post] 2018-04-29T19:49:55.137Z

Thanks again for this piece. I'll follow your daily posts and comment on them regularly!

I have a few clarification questions for you:

  • if an AGI could simulate a human brain quasi-perfectly, with human knowledge encoded inside, would your utility function be satisfied?
  • is the goal of understanding all that the utility function contains? What would the AGI do, once able to model precisely the way humans encode knowledge? If the AGI has the keys to the observable universe, what does it do with them?

Comment by gyrodiot on Should an AGI build a telescope to spot intergalactic Segways? · 2018-04-29T09:06:29.679Z · score: 7 (2 votes) · LW · GW

Thanks for your post. Your argument is well written and clear (to me).

I am confused by the title and the conclusion. You argue that a Segway is a strange concept that an ASI may not be capable of reaching by itself through exploration. I agree that the space of possible concepts that the ASI can understand is far greater than the space of concepts that the ASI will compute/simulate/instantiate.

However, you compare this to one-shot learning. If an ASI sees a Segway a single time, would it be able to infer what it does, what it's for, how to build it, etc.? I think so! The purpose of one-shot learning models is to provide a context, a structure, that can be augmented with a new concept based on a single example. This is far simpler than coming up with said new concept from scratch.

On the efficient use of sensory data, see That Alien Message.

I interpret your post as « no, an ASI shouldn't build the telescope, because it's a waste of resources and it wouldn't even need it » but I'm not sure this was the message you wanted to send.

Comment by gyrodiot on European Community Weekend 2018 Announcement · 2018-02-05T17:39:54.566Z · score: 22 (5 votes) · LW · GW

I'll be there. As I said in the sister post on LW1.0:

The community weekend of 2017 was one of my best memories from the past year. Varied and interesting activities, a broad range of topics, tons of fascinating discussions with people from diverse backgrounds. The organizers are super friendly.

One very, very important point: people there cooperate by default. Communication is easy, contribution is easy, getting help is easy, feedback is easy, learning is easy. Great times and productivity. And lots of fun!

Entirely worth it.

Comment by gyrodiot on European Community Weekend 2018 Announcement · 2018-01-25T14:14:51.374Z · score: 1 (1 votes) · LW · GW

The Community Weekend of 2017 was one of the highlights of my past year. I strongly recommend it.

Excellent discussions, very friendly organizers, awesome activities.

Signed up!

Comment by Gyrodiot on [deleted post] 2018-01-19T13:23:24.694Z

Hi! Was this a test post?

Comment by gyrodiot on Announcing the AI Alignment Prize · 2018-01-16T17:35:43.570Z · score: 3 (1 votes) · LW · GW

Winners have just been announced here.

Comment by gyrodiot on The Cake is a Lie, Part 2. · 2018-01-14T09:32:44.879Z · score: 11 (5 votes) · LW · GW

I'll be blunt. Until this second post, there was a negative incentive for people on this site to comment on your first post. The expected reaction was to downvote it to hell without bothering to comment. Now, with this second post clarifying the context of the first, I'd still downvote the first, but I'd comment.

I read the first post three times before downvoting. I substituted words. I tried to untie the metaphor. Then I came to two personal conclusions:

  1. You offered us a challenge, ordering us to play along, with no reward, at a cost to us. HPMOR provided dozens of chapters of entertaining fiction before the Final Exam. You just posted once and expected effort.
  2. You impersonate an ASI with very, very precise underlying hypotheses. An ASI that would blackmail us? Fair enough, that would be a variant of Roko's Basilisk. But your Treaty is not remotely close to how I expect an ASI to behave. As you state, the ASI makes all the important decisions, so why bother simulating a particular scenario involving human rights?

The first post was confusing, your second post is still confusing, and neither fits the posting guidelines. You are not an ASI. Roleplaying an ASI leads to all sorts of human bias. I downvoted your two posts because I do not expect anyone to be better equipped to think about superintelligences after reading them. That's it.

Comment by gyrodiot on Hidden Hope For Control · 2018-01-11T15:52:49.302Z · score: 10 (3 votes) · LW · GW

Thanks for this post.

I'm not sure what your central point is, what component you announce at the start of the post. I understood that life contains a spectrum of situations we have more or less control over, that neither perfect control nor a perfect lack of control all the time is desirable, and that we ought to have a wide range of experiences along that dimension to have enjoyable lives.

Did I miss something? Can you clarify your conclusions?

Comment by gyrodiot on Optimizing a Week of Machine Learning Learning · 2018-01-09T09:16:03.237Z · score: 26 (6 votes) · LW · GW

Hi Raemon! This is a topic I'm very bad at writing structured answers about, and much better at chatting about, because there are tons of things to say and I'd need more details to know how to steer my advice.

That being said, I recommend this repository of resources, aimed at people with a tech, but not necessarily math, background. Reading through some of the guides there should help you solve some of your last-section questions.

I'd say that staying up to date on AI developments with a goal of AI safety is entirely tractable as long as you're not looking for the particular techniques that will lead to unsafe AI. Most AI literature is entirely disconnected from AI safety concerns, and if you dive into the field enough, you will become proficient enough to understand the papers that are relevant to safety concerns.

Cute little ML projects almost always have hidden depths if you're dealing with real-world data. I suggest trying them after tutorials, not as tutorials, so that you'll be able to split whatever you're trying to do into manageable chunks (and understand why things fail or succeed).

I wish you the best for your endeavor!

Comment by gyrodiot on Some food for thoughts about climate. · 2017-12-11T15:57:21.070Z · score: 6 (3 votes) · LW · GW

  1. Drop in average temperature over the last few centuries, with a minimum around 1650.
  2. Colder winters, changes in crops produced, more glaciers.
  3. Half a Kelvin (half a degree Celsius).
  4. Up to five degrees by the end of the century.

See this and that. What is your point?

Comment by gyrodiot on The list · 2017-12-11T15:29:40.767Z · score: 10 (3 votes) · LW · GW

I agree with your points. To restate my question, what extra insights does your model provide, compared to (for example) an ever-updating Maslow's hierarchy of needs?

Comment by gyrodiot on The list · 2017-12-11T13:28:09.439Z · score: 2 (1 votes) · LW · GW

What you are describing, as far as I can understand, is that we are adaptation executers. The List is everything we want, whether hardcoded into our biology, or expressed by our minds. Yes, it updates.

I'd also appreciate, as other commenters pointed out, knowing what kind of predictions your model can make about human behavior.

Comment by gyrodiot on The Impossibility of the Intelligence Explosion · 2017-11-30T09:46:10.649Z · score: 10 (5 votes) · LW · GW

I think you wanted to link to this recent essay by François Chollet (AI researcher and designer of Keras, a well-known deep learning framework). The essay has also been discussed on Hacker News and on Twitter.

I'm currently writing an answer to this one. I think it would be beneficial to have extra material about intelligence explosion which is disconnected from the "what should be done about it" question, which is so often tied to "sci-fi" scenarios.

Comment by gyrodiot on Fire drill proposal · 2017-11-23T14:02:40.729Z · score: 6 (3 votes) · LW · GW

Not speaking for Christian here. Personally, I can't steelman suggestions that have never been defended. I see the point of steelmanning as trying in good faith to build the strongest version of an opposing view, and then criticizing it. However, to do that, I need material! I need the voice of a proponent, I need something stable and sound to argue against.

I don't want to generate arguments for your idea, since I would probably misrepresent it and build a strawman out of speculation, even if I work in good faith. This is why I need your voice!

Comment by Gyrodiot on [deleted post] 2017-11-12T20:19:54.077Z

As I'm the one being replied to, a bit of context: a long discussion started on the #philosophy channel of the Slack group. For reasons irrelevant to the present discussion, I'm continuing the exchange on the website linked above.

I'm currently writing an answer to this. I do not claim to represent the LW community, though I'm trying my best to reflect the broad concepts and reasoning outlined in the Sequences, notably. Please correct any blatant inaccuracies in my prose, if you think it worthwhile.

Comment by gyrodiot on Self-confidence as a time saving tactic · 2017-11-06T15:25:40.506Z · score: 2 (1 votes) · LW · GW

This post describes the state of underconfidence, which is assigning a lower probability to events than their actual frequency. The event here is "being right" or "being competent enough to do X". Yes, if people think they're wrong in situations where they are right, they will waste time seeking advice and/or help. Here, self-confidence is a good thing because it brings them closer to correctly evaluating themselves.

Conversely, if they are overconfident they will waste time by making errors and taking more responsibility than they should. There, self-confidence is a bad thing because they are too sure of themselves.

If you are placed in an uncertain situation and you want to ensure success, asking for help is a trade-off between the cost of asking and the expected gain from the information. Say you ask a "stupid question". If its answer helps you figure out stuff outside your area of expertise, it is worth it, and not stupid.

If you are shamed for asking stuff, for wanting to learn, when you are outside your expertise, you are not the problem, unless you asked the wrong person, and there were ways to learn at lower cost.

Learning has a cost. Asking has a cost. You can skip it and rely only on your present knowledge to act now, taking a risk, which saves resources.

Conversely, you could write a post titled "Asking for help as a time-saving tactic"... for the symmetric situations.

Comment by gyrodiot on Winning is for Losers · 2017-10-13T12:21:55.132Z · score: 8 (4 votes) · LW · GW

Indeed, it has become my go-to formulation for the problem. It is also showcased here, as well as numerous iterated strategies: The Evolution of Trust.

Comment by Gyrodiot on [deleted post] 2017-10-03T09:27:01.864Z

Thank you for this! I find this list extremely useful. I notice you described the items from the point of view of someone already familiar with the concepts, and you made a good case for them.

I expect to have to expand at length on the points you made if I have to present them to someone not familiar with rationality. I would consider them an excellent way to jumpstart a conversation about the mechanisms behind those suggestions (the ability to change one's mind, to have a more precise view of reality, to achieve goals, etc.).

Comment by gyrodiot on Stupid Questions - September 2017 · 2017-09-28T15:40:17.999Z · score: 2 (2 votes) · LW · GW

I don't think there are many material records you're supposed to have besides the administrative and medical history polymathwannabe mentioned.

However, as mental records I would add the memories of the places you've lived in and significant experiences you've had. You can make some up, but you're supposed to have a story to tell if someone asks small-talk questions, unless you're prepared to signal "I don't want to talk about my childhood".

Comment by gyrodiot on 2017 LessWrong Survey · 2017-09-14T15:54:12.755Z · score: 19 (19 votes) · LW · GW

I have taken the survey. Cannot remember my old public key, shrug

Comment by gyrodiot on Open thread, July 24 - July 30, 2017 · 2017-07-24T21:49:31.064Z · score: 0 (0 votes) · LW · GW

Hi Hafurelus, your post is in the right place!

I support your idea of finding a rival. Friendly competition can indeed foster motivation.

I think I'm too far down the path you described to be your direct rival. I'm past the step "PhD in AI/ML", and I'm currently gaining experience in the AI industry before turning to AI safety proper, if the opportunity comes. If anything, I can be the guy you're trying to catch up to instead of a rival (by no means do I think you should follow my steps, but our goals seem to be in the same ballpark).

There has been a recent post by 80k Hours that you may find interesting.

Do you keep a journal, something to evaluate progress and compare with others?

Comment by gyrodiot on Welcome to Less Wrong! (8th thread, July 2015) · 2016-11-16T21:28:33.088Z · score: 1 (1 votes) · LW · GW

Welcome, Peter!

Comment by gyrodiot on Anyone else reading "Artificial Intelligence: A Modern Approach"? · 2016-11-05T18:20:21.013Z · score: 3 (3 votes) · LW · GW

Hello, fellow AIMA reader :-) I'm positive a fair number of LW members have read the book, or at least the first chapters. I'm one of them!

How do you want to communicate? You can ask your questions here, or exchange direct messages, or chat... the possibilities are endless!

Comment by gyrodiot on The true degree of our emotional disconnect · 2016-11-02T20:20:17.211Z · score: 1 (1 votes) · LW · GW

I have several (cached) assumptions behind this. I need to do some extra research on the matter to be confident about them.

First, the brain is very bad at estimating actual danger based on fear. If you picture death from a one-in-a-million-years event, the reaction will be the same as for a one-in-a-century event. When you calm down, you may be able to sort out which event is more threatening, but you will have a better time figuring out calmly which problem is the most significant first, and only then triggering a fear reaction about it.

Second, fear about threat A can warp perception of threat B even if the two aren't related, because immediate action is needed towards solving A, and whatever threat B is cannot be as important at the moment. I exaggerate a bit; I have a hard time seeing how anyone could manage having to take potentially conflicting actions against several threats at once.

Multiple threats matter because you need to split your resources between them: does fear help in that case, given that fear pushes you not only to take action, but to take immediate and probably miscalibrated action to squash the threat?

Comment by gyrodiot on The true degree of our emotional disconnect · 2016-11-01T09:35:02.073Z · score: 1 (1 votes) · LW · GW

WW2 is an excellent example of a situation where large numbers of people must be made aware of a single threat. Fear and panic make the threat move up your priority list. In emergency mode, you can still vividly remember what caused the panic.

It works well for one threat, but it doesn't help when you have multiple distant threats. How do you manage mobilization against climate change and X-risk, through fear, at the same time?

Should we make separate groups of people care about separate concerns, each fearing a single distant threat? That would be effective, with each person having the one priority they are very concerned about (one caveat: they would probably see the other groups as blissfully ignorant).

Comment by gyrodiot on MIRI's 2016 Fundraiser · 2016-10-31T21:52:37.275Z · score: 2 (2 votes) · LW · GW

Donated $20. Just in time!

Comment by gyrodiot on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-31T21:25:52.756Z · score: 1 (1 votes) · LW · GW

Done! Hope the convention was nice.

Comment by gyrodiot on The true degree of our emotional disconnect · 2016-10-31T20:31:51.470Z · score: 1 (1 votes) · LW · GW

I argue the reason we don't have an instant fear of distant scenarios is exactly why we shouldn't rely on fear to deal with distant scenarios.

Our fear is indeed disconnected from distant threats. The function of fear is to react to immediate threats, and we can conjure the reaction of fear if we picture a situation vividly. However, the reaction induced by fear may not be what you want. Say you use a virtual reality headset to show someone the consequences of climate change, or X-risk. If the simulation is vivid enough, the subject will think "I must act on this threat now as if my life depended on it", but that would be trading inaction for panic.

Yes, your life depends on the resolution of these distant problems, but we shouldn't fear them as if they were arriving right now. Attempting to connect a distant threat to an immediate emotion fails, of course, which means we must devise other means to deal with the threat.

Conversely, saying a threat isn't important because it doesn't feel threatening is a dangerous fallacy, I agree. The next step is to make distant threats credible by means other than emotion, because emotions don't trigger the kind of reaction we need to manage said threats.

Comment by gyrodiot on Open Thread, Aug. 8 - Aug 14. 2016 · 2016-08-09T13:58:18.390Z · score: 1 (1 votes) · LW · GW

As of today, the thread is still gathering questions. The team will start answering them August 11; a LW post may be of interest then.

Comment by gyrodiot on The Problem (TM) - Part 2 · 2016-08-02T10:00:57.932Z · score: 0 (0 votes) · LW · GW

Taken in isolation, part 1 left me confused. Part 2 greatly improves the value of the transcript.

Zebra is dealing with a set of problems, which may or may not stem from a single Problem. I saw your questions as an effort not only to clarify the issues, but also to determine the structure of the problem set. Here, you describe the problem spiral, where solving one issue raises further issues because you keep thinking of the whole set instead of taking each issue in isolation.

Note there are two seemingly conflicting strategies here. One is to solve part of the problem, focusing on it for a given time, trying to jumpstart a success spiral. But how would you differentiate this from bikeshedding? How can you be sure you're not focusing on irrelevant things?

On the other hand, carefully thinking about the Problem, and how to solve it all at once or through the correct sequence of actions, may lead to an efficient strategy. However, taking on the Problem as a whole shatters all motivation, warps the view of the problem set, and makes small tasks seem undoable.

You linked Zebra, in the transcript, to Nate Soares's Replacing Guilt series. I would pinpoint my advice with Moving towards the goal. Solving the Problem (the cluster of problems, taken as a single eldritch entity) has to be set aside. You can picture the goal state, but the planning fallacy goes both ways: you can overestimate the difficulty of the solution. What matters is to make progress.

(I may have paraphrased your post here - I wanted to write down my own understanding of it)

Comment by gyrodiot on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-27T18:52:52.290Z · score: 1 (1 votes) · LW · GW

Hi Arielgenesis, and welcome!

From a rationalist perspective, taking things for granted is both dangerous and extremely useful. We want to preserve our ability to change our minds about things in the right direction (closer to truth) whenever the opportunity arises. That being said, we cannot afford to doubt everything, as updating our beliefs takes time and resources.

So there are things we take for granted: most mathematics, physics, the basic laws and phenomena of Science in general. Those are ideally backed by the scientific method, whose axioms are grounded in building a useful model of the world (see Making Beliefs Pay Rent (in Anticipated Experiences)).

From my rationalist perspective, then, there are no self-evident things, but there are obvious things, considered evident by the overwhelming weight of available... evidence.

Regarding values... it's a tough problem. I personally find that all preconceptions I had about universally shared values are shattered one by one the more I study them. For more information on this, I shall redirect you to complexity of value.

Comment by gyrodiot on Open Thread April 4 - April 10, 2016 · 2016-04-05T13:17:36.173Z · score: 1 (1 votes) · LW · GW

(meta: I'm not sure if I should make a Discussion post for this, so I'm posting here. Feedback most welcome)

I'm exploring the following hypothesis: sometimes, you have to give up constructive actions for the sake of focus.

Most productivity methods suggest the obvious: keep wasteful activities in check. It could be gaming, chatting, or checking news websites. They all share a common trait: you don't gain any significant utility (money, fun, or rest) from spending more time on them. You achieve the same result by spending a little time on them rather than a full day.

With productive activities, time spent and value created aren't proportional. Sometimes you're lacking energy or inspiration, and that's okay: you don't have to work yourself ragged.

If you have multiple tasks to be achieved in parallel, you should treat them as sequential anyway. Focusing on one task at a time yields better results than task switching all the time.

Problems arise when you find inspiration, or a sudden peak of interest for a certain task which is useful in isolation, but which doesn't fit in your schedule. Maybe a discussion with a friend sparked the idea of a story to write. Maybe you're considering moving some furniture because you're well-rested and full of energy.

Even if you could be maximally productive for a given useful task, you should treat it as a wasteful activity if you have something else you planned to do. If the idea sounds good, write it down. If it's really good, hype will come back another time. If you're energetic, do the most physical thing you had planned to do. Energy will come back another time.

The goal is not to add another task to your current schedule and mess with the plan you've set for the day, as you would with "classical" wasteful activities. You can convince yourself easily that news websites can wait another day. The unintuitive part is that this also holds for most productive activities, even though you're training yourself not to defer work!

Comment by gyrodiot on Does Evidence Have To Be Certain? · 2016-04-01T14:41:18.774Z · score: 0 (0 votes) · LW · GW

There's a labeling problem here. E is an event. The extra information you're updating on, the evidence, the thing that you are certain of, is not "E is true". It's "E has probability p". You can't actually update until you know the probability of E.

What the joint probabilities give you is by how much you have to update your credence in H, given E. Without P(E), you can't actually update.

P(H|E) tells you "OK, if E is certain, my new probability for H is P(H|E)". P(H|~E) tells you "OK, if E is impossible, my new probability for H is P(H|~E)". In the case of P(E) = 0.5, I will update by taking the mean of both.

Updating, proper updating, will only happen when you are certain of the probability of E (this is different from "being certain of E"), and the formulas will tell you by how much. Your joint probabilities are information themselves: they tell you how E relates to H. But you can't update on H until you have evidence about E.
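
A minimal sketch of the weighted-average update described above, in Python; the function name and the numbers are made-up illustrations, not from the original discussion:

```python
# Updating a credence in H when the evidence E is itself uncertain,
# via the law of total probability. All numbers are hypothetical.

def update_on_uncertain_evidence(p_h_given_e, p_h_given_not_e, p_e):
    """Credence in H when E only has probability p_e of being true."""
    return p_h_given_e * p_e + p_h_given_not_e * (1.0 - p_e)

# If E were certain, my credence in H would be 0.9; if E were
# impossible, it would be 0.2. With P(E) = 0.5 this is the mean of both.
print(update_on_uncertain_evidence(0.9, 0.2, 0.5))  # 0.55
```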

Comment by gyrodiot on Does Evidence Have To Be Certain? · 2016-03-30T13:05:04.671Z · score: 0 (0 votes) · LW · GW

Consider P(E) = 1/3. We can consider three worlds, W1, W2 and W3, all with the same probability, with E being true in W3 only. Placing yourself in W3, you can evaluate the probability of H while setting P(E) = 1 (because you're placing yourself in the world where E is true with certainty).

In the same way, by placing yourself in W1 and W2, you evaluate H with P(E) = 0.

The thing is, you're "updating" on a hypothetical fact. You're not certain of being in W1, W2, or W3. So you're not actually updating; you're artificially considering a world where the probabilities are shifted to 0 or 1, and weighting the outcomes by the probability of that world happening.

Comment by gyrodiot on Lesswrong 2016 Survey · 2016-03-26T07:16:28.394Z · score: 42 (42 votes) · LW · GW

I have taken the survey. Yesterday.

Comment by gyrodiot on On Making Things · 2016-03-05T09:41:36.701Z · score: 2 (2 votes) · LW · GW

I really enjoyed this post. The process of making things, and the motivation behind it, interests me. Also, congrats on making this whiteboard!

Did you, at any point, think "is this really worth my time"? From your description, I suppose the fun justifies the whole thing, and the fact that you made a usable thing adds to the value. I'm often overthinking fun and how I spend my time, so I wonder how to mitigate these feelings of "am I making the right choice doing this at all?"

Comment by gyrodiot on Where are we? · 2016-02-09T13:59:43.270Z · score: 0 (0 votes) · LW · GW

Greetings, from Toulouse.

Comment by gyrodiot on The Fable of the Burning Branch · 2016-02-09T10:03:46.396Z · score: 3 (5 votes) · LW · GW

Thanks! I wasn't joking. Now that I've read the whole thing once again, the metaphor should have been perfectly obvious, but I guess I wasn't in the right state of mind yesterday.

Well, now that I understand, I wish there hadn't been any metaphor. Here it conceals the point rather than offering a new perspective on it.

Comment by gyrodiot on The Fable of the Burning Branch · 2016-02-08T15:37:39.208Z · score: 0 (6 votes) · LW · GW

Thanks for the fable. It was a nice read!

I tried to pattern-match the metaphor against many things; I failed. Could you please provide the key to the metaphor, as I sense there's hidden meaning underneath this story?

I don't want to guess a false meaning.

Comment by gyrodiot on Study partner matching thread · 2016-01-28T22:15:04.944Z · score: 0 (0 votes) · LW · GW

I forgot to mention that I'm currently an AI PhD student. Which doesn't entail much free time ^^

So... what exactly are you interested in learning (if you want to pair up)? I'm also interested in your project, if you have an idea in mind.

Comment by gyrodiot on Study partner matching thread · 2016-01-25T12:39:12.160Z · score: 3 (3 votes) · LW · GW

Hi,

I have two areas I'd like to study: deep learning, and anything on the MIRI research guide. Lots of material is available on both topics, but I'd like to pair up with someone to build a good learning strategy (for lack of a better expression).

I have some knowledge of algebra, probability theory, logic, game theory, machine learning (Master's Degree in Computer Science).

Regarding deep learning, I have a small collection of links, Udacity, and I'm positive learning materials abound now that the field is really popular.

Regarding MIRI's research guide, well, the guide itself provides a lot of links and pointers.

My timezone is CET (UTC+1).

Comment by gyrodiot on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-06-29T06:22:38.727Z · score: 4 (4 votes) · LW · GW

You can find them directly here:

http://lesswrong.com/saved

Or by clicking on the "Saved" tab, right under "Main" and "Discussion" when you click on them.

Comment by gyrodiot on Group Rationality Diary, February 1-14 · 2015-02-02T11:54:35.056Z · score: 1 (1 votes) · LW · GW

Since January 5, I have been keeping an exhaustive log of expenditures/income, and of meals. I expect to use the former to create a clear budget.

Also, I have been setting alarms every time I need to do something in the next 24 hours. I forget things easily, and making the conscious effort to remind myself of tasks clutters my mind.

Finally, I keep a log of ideas, and projects. I often find myself solving some problems twice because I didn't bother to write things down. Anything to remove mental clutter.

Comment by gyrodiot on Welcome to Less Wrong! (7th thread, December 2014) · 2014-12-26T12:07:26.190Z · score: 0 (0 votes) · LW · GW

Welcome :D Glad to see you there.

Comment by gyrodiot on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-15T15:52:48.173Z · score: 7 (9 votes) · LW · GW

Link to the podcast, with transcript.

The mention of MIRI, about (bad) AI forecasts:

Russ Roberts : [It] seems to me that there are a lot of people in AI who think [strong AI development is] only a matter of time, and that the consequences are going to be enormous. They're not going to just be like a marginal improvement or marginal challenge. They "threaten the human race."

Gary Marcus : Before we get to those consequences, which I actually do think are important, I'll just say that there's this very interesting [?] by a place called MIRI in Berkeley, MIRI (Machine Intelligence Research Institute). And what they found is that they traced people's prediction of how far away AI is. And the first thing to know is what they found is, the central prediction, I believe it was the modal prediction, close to the median prediction, was 20 years away. But what's really interesting is that they then went back and divided the data by year, and it turns out that people have always been saying it's 20 years away. And they were saying it was 20 years away in 1955 and they're saying it now. And so people always think it's just around the corner. The joke in the field is that if you say it's 20 years away, you can get a grant to do it. If you said it was 5 years away, you'd have to deliver it; and if 100 years, nobody's going to talk to you.

Comment by gyrodiot on Welcome to Less Wrong! (6th thread, July 2013) · 2014-12-14T14:14:46.865Z · score: 0 (0 votes) · LW · GW

Hi, and welcome to Less Wrong!

There are indeed few works about truly superintelligent entities that include happy humans. I don't recall any story where human beings are happy... while other artificial entities suffer. This is definitely a worthy thought experiment, one that raises some moral issues: should we apply human morality to non-human conscious entities?

Are you familiar with the Fun Theory Sequence?

Comment by gyrodiot on Welcome to Less Wrong! (6th thread, July 2013) · 2014-12-08T16:42:31.126Z · score: 3 (3 votes) · LW · GW

Hi there, my name is Jérémy.

I found Less Wrong via HPMoR, which I found via TVTropes. I started reading the Sequences a few months ago, and am still going through them, taking my time to let the knowledge sink in and to practice the rationality methods.

I like to join the LW IRC chatroom, where I had (and witnessed) many interesting, provocative, and fruitful discussions.

I'm 22, I live in France, where, after an engineering degree in Computer Science, I'm now a PhD student in the wonderful field of Natural Language Processing. I've been interested in AI for about 10 years, ever since I wanted to create a little program that could chat with me. It was a bit harder than I expected. So I studied, I learned, and, upon reaching the state of the art, found that NLP in general was AI-complete, and that a whole world of (yet) unsolved problems was in front of me. Awesome.

Being quite lazy most of the time, I also wanted to create tools that did stuff on my behalf, and eventually tools that created such tools, etc. Looking for existing examples of this, I soon discovered recursive self-improving systems, the concept of the technological singularity, and other elements that strengthened my interest in AI.

When asked about my goals, I tell people I want to share the beauty of language, which I describe as the most powerful tool of humanity, with machines. This is my main motivation in life.

This, and also a fear of death that caused some panic attacks when I was younger. I only recently came to face the problem instead of avoiding the prospect. I think AI can help humanity tackle problems faster than any other method, which drives me, again, to the path of AI.

I grew up asking lots and lots of questions nobody was able to answer. I had no friends to debate with (I skipped four grades, which created a huge social gap with my classmates). Worst of all, my parents taught me that I was the best, and that my skills allowed me to pursue whichever education I wanted. I learned how to fail, and fail again, and fail again. I now want to become stronger, and stop wandering aimlessly in the fields of knowledge.

I love studying, experimenting with, and designing (mostly board) games. I play and run some RPGs from time to time. I write fiction, though not as often as I used to.

I try to share my interest in (friendly) AI and rationality around me, and I'd love to participate in LW meetings if they weren't so far from south-western France.

Last but not least: I have no idea what to do once I finish my PhD. Academia isn't as appealing as I thought it would be.

Nice to meet you all!