Posts
Comments
Do you guys ever have meetups in English? Do you know if anyone in Moscow does?
Thanks, I'll try zinc again, and one day get around to doing a blood test.
I think the insomnia starts gradually and progressively gets worse after a few days, maybe a week. My hypothesis was that D3 was building up; apparently it has a very long half-life. I took about 2000-3000 IU, that's not that much, right?
I didn't try a K2 supplement, but I've just googled it, and it turns out spinach and kale have a lot of vitamin K, and I eat a lot of those.
I used to take a calcium-magnesium-zinc supplement, though I'm not sure if it was during the time I took D3. Could lack of zinc be an issue? Should I try it?
I don't know if I need it; I've never taken a blood test. I tried it based on all the articles about how good it is and how most people are deficient in it. Also, I live in a cold climate and definitely don't go out enough, so I don't get a lot of sunlight.
I took D3 with a light breakfast, in the morning.
Thank you for a great post!
Taking a D3 supplement, even in small amounts, seems to cause horrible insomnia for me. I have problems with sleep even without it, but on D3 it gets a lot worse; I sleep like 3 hours per night. I'm like 80-90% sure it's the D3: when I take it, it gets worse, and a few days after I stop, it gets better.
Do you have any ideas on what could cause this and how I could fix it? I already take magnesium and choline, and I take D3 in the morning, but that doesn't seem to help much. Melatonin doesn't do anything: it gets me to sleep in the evening, but then I wake up anyway.
Hey, everyone! My first post here. Just testing out this awesome platform. Curious to see where it goes.
Thank you for your reply!
For a long time, the way ANNs work kinda made sense to me, and seemed to map nicely onto my (shallow) understanding of how the human brain works. But I could never imagine how values/drives/desires could be implemented in terms of an ANN.
The idea that you can just quantify something you want as a metric, feed it in, and check whether the output gets closer to what you want is new to me. It was a little epiphany that seems to make sense, so it prompted me to write this post.
Evolutionarily, I guess the human/animal utility function would be something like "How many copies of myself have I made? Let's maximize that." But from the subjective perspective, it's probably more like "Am I receiving pleasure from the reward system my brain happened to develop?"
For sure there are a bunch of different impulses/drives, but they're all just little rewards for transforming the current state of the world into the one our brain prefers, right? Maybe they appeared randomly, but if you were to design one intentionally, is that how you would go about it?
Learning
- Get inputs from eyes/ears.
- Recognize patterns, make predictions.
- Compare predictions to how things turned out, update the beliefs, improve the model of the world.
- Repeat.
General intelligence taking actions towards its values
- Perceive the difference between the state of the world and the state I want.
- Use the model of the world that I've learned to predict the outcomes of possible actions.
- If I predict that applying an action to the world will lead to rewards - take action.
- See how it turned out, update the model, repeat.
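The two loops above can be sketched as a toy Python program. Everything here (the Agent class, the one-dimensional world, the action set) is a made-up illustration of the idea, not a real learning algorithm: the agent learns a model of how actions change the world, then picks the action whose predicted outcome is closest to the state it wants.

```python
# A toy sketch of the two loops above: learn a model of the world,
# then pick actions whose predicted outcomes are closest to a goal.

class Agent:
    def __init__(self, target):
        self.target = target   # the state of the world the agent "wants"
        self.effects = {}      # learned world model: action -> observed change

    def predict(self, state, action):
        # Use the learned model to predict the outcome of an action.
        return state + self.effects.get(action, 0)

    def learn(self, state, action, outcome):
        # Compare the prediction to how things turned out; update the model.
        self.effects[action] = outcome - state

    def act(self, state, actions):
        # Try actions we haven't modeled yet; otherwise pick the action
        # whose predicted outcome is closest to the desired state.
        unknown = [a for a in actions if a not in self.effects]
        if unknown:
            return unknown[0]
        return min(actions, key=lambda a: abs(self.predict(state, a) - self.target))


def world_step(state, action):
    return state + action      # the world's true dynamics, unknown to the agent


agent = Agent(target=10)
state = 0
for _ in range(50):
    action = agent.act(state, actions=[-1, 0, 1])
    outcome = world_step(state, action)
    agent.learn(state, action, outcome)   # learning loop: update beliefs
    state = outcome                       # action loop: repeat

print(state)  # the agent has steered the world to its target state: 10
```

After a few exploratory steps to model each action's effect, the agent reliably climbs toward its target and then holds there, which is the "perceive difference, predict, act, update, repeat" cycle in miniature.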
I agree that specific goals can also have unintended consequences. It just occurred to me that this kind of problem would be much easier to solve than trying to align the abstract values, and the outcome is the same - we get what we want.
Oh, and I totally agree that there's probably a ton of complexity when it comes to the implementation. But it would be pretty cool to figure out at least the general idea of what intelligence and consciousness are, what things we need to implement, and how they fit together.
I am working on a project with this purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
It is based on the open source platform that I'm building:
https://github.com/raymestalez/nexus
This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports Twitter-like discussions, Reddit-like communities, and Medium-like long-form articles.
This platform is in active development, and I'm very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet - I will be happy to add it. Let me know what you think!
I am working on a project with a similar purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
If you find it interesting and can offer some feedback - I would really appreciate it!
Hey, everyone! Author of rationalfiction.io here.
I am actively building and improving our website, and I would be happy to offer it as a new platform for LW community, if there's interest.
I can take care of the hosting, and build all the necessary features.
I've been thinking about creating a LW-like website for a while now, but I wasn't sure that it would work. After reading this post, I have decided that I'm going to launch and see where it goes.
If there are any ideas or suggestions about how such a platform can be improved or what features we'll need - let's discuss them.
By the way, the platform is open source (though I will probably fork it as a separate project and develop it in a new repo).
Thanks!
It works well on my iPad; I haven't tested it on phones yet. I will.
There are links to the author's RSS feed in the post footer and on the profile pages.
Is there a reason you don't want to use the site? I'd appreciate any feedback or ideas on how I can make it better.
In startups, this is the so-called "MVP" - minimum viable product, the simplest version you can show users to get some feedback and see if it works. It is the first step to building a startup.
To me it's a pretty huge accomplishment, I'm really proud of myself =) Most of the work went not into coding the website, but into figuring out what it is. I needed a thing that would be valuable, and that I would be excited to work on for the following few years.
A competent programmer could probably create something like that in a week, but because I'm just learning web development (along with writing, producing videos, and other stuff), it took me longer. At the moment it's the best thing I've created, so I'm really happy about it.
Also, it's actually the 3rd iteration of my startup idea (the first was a platform for publishing fiction, the 2nd a platform for publishing webcomics).
I've launched the first version of my startup, lumiverse:
I want lumiverse to become the perfect place for people to publish, discover and discuss great educational videos. I want to build a friendly and intelligent community, make it easy for video creators to find an audience, and make it easy for viewers to discover awesome videos.
I have also finally made the first few episodes of Orange Mind - my video series about rationality.
I'm new to the subject, so I'm sorry if the following is obvious or completely wrong, but the comment left by Eliezer doesn't seem like something that would be written by a smart person who is trying to suppress information. I seriously doubt that EY didn't know about the Streisand effect.
However the comment does seem like something that would be written by a smart person who is trying to create a meme or promote his blog.
In HPMOR, characters give each other the advice: "to understand a plot, assume that what happened was the intended result, and look at who benefits." The idea of Roko's basilisk went viral, and lesswrong.com got a lot of traffic from popular news sites (I'm assuming).
I also don't think that there's anything wrong with it, I'm just sayin'.
It's hard to be more specific. I just love comedy very much, and it's the best I've ever seen (besides Community). It's on the level of Louis CK, and, in my personal opinion, RaM is to other comedies what Breaking Bad is to other dramas.
There's no point in explaining it too deeply. Most of the episodes are officially available for free here. Watch the first 3, and then you'll either like it or not.
Rick and Morty season 2 is absolutely brilliant and hilarious. If you guys haven't watched it - you should, it's amazing.
I think that the ability to understand is a part of being clever. So is knowing a lot of things, being able to come up with unusual ideas, being able to focus on a task for a long time, the ability to achieve goals, and many other things.
I want to create a startup.
And I also want to write awesome fiction(Rationalist sci-fi comedy. Something like Rick and Morty meets HPMOR).
I disagree. My drive to "be clever" has nothing to do with my intelligence compared to other people, it's just about my desire to push my understanding of the universe, mastery of my skills, and creativity as far as I can. I love knowing things, understanding things, and being able to create things. And being good at it is what matters to me the most. At least this is what 'being clever' means to me.
Other people are just examples of what's possible, or of what I should avoid. I really don't care whether I appear smarter than them, it is just about pushing my potential as far as possible.
As to whether it is a worthwhile aim in life - it seems pretty worthwhile to me. So far I have not found anything more interesting or worthy of pursuing.
I have always loved intelligence and creativity. When I was about 12 years old, I discovered 3D computer graphics and got addicted to it - learning, understanding, and creating things was the most fun thing I had ever experienced.
As I got older, I spent a lot of time trying to figure out what I want out of life and what my values are. After thinking for a long time and reading books like "Atlas Shrugged" and "Surely You're Joking, Mr. Feynman!", I identified "being clever" as my main drive in life, my main value. I realized that whatever "being clever" means - this is what I want to live for, this is something I want as my end goal: intelligence (and creativity) for its own sake.
Once I realized that, I started looking for ways to learn things and become more intelligent. I stumbled upon Paul Graham's essays and decided that startups, programming, and writing are the best paths for me: mastering these things will make me the kind of person I want to be, teach me things, and improve my brain.
I never explicitly pursued "rationality"; I was just trying to read books, learn from smart people, and do what makes sense.
Later I happened upon HPMOR, found out about LessWrong, and really enjoyed EY's essays. So here I am now.