Comments

Comment by Xor on [deleted post] · 2023-05-03T01:16:45.296Z

I have a sort of vague question/request and would like someone’s opinion on it. In cold emails, or when I have recently been introduced to someone, I would like to ask something along the lines of “What mindset/philosophy about (insert something vague like work/school, or something specific if I have it in mind) have you found most useful, and why?” I like this question because my own answer has changed recently, and even if I don’t find their specific mindset useful, I think it would tell me a lot about them; I am curious how people will answer.
How would you suggest improving that question? I would also like advice on making this sort of thing less awkward.

Comment by Xor on The Apprentice Thread 2 · 2023-05-03T00:43:26.820Z · LW · GW

[APPRENTICE]

Not a narrow or specific kind of apprenticeship (training or a project), but rather something very broad, focused on learning and academics in STEM areas (an advice dispenser, and in some cases a tutor/trainer).

Fields of Study:
Computer Science: I am planning to major in this in college, so it is kind of important. I am interested in ML, AI, algorithms, low-level or precise programming, website development, and application programming, all at a surface level. I am not sure what I want to do after college, but I think it would help to have an idea of how all of these work in case I find myself especially interested in one or another. I will be taking an introductory programming and problem-solving class in the fall.

Psychology: Specifically rationality and learning, which, according to a textbook I picked up, fit in this category. I have a great drive to improve and optimize my ability to learn. Similarly, I want to be able to reason accurately, consciously, and clearly: out loud, in writing, and in my head. Also philosophy of logic; I am taking that as a class in the fall.

Math: I love math and plan to learn and practice it for the rest of my life, no matter what. It doesn’t matter which area of math; I like to experience the release of endorphins that comes with grasping a new concept or solving a difficult problem. The best part is practicing the process of reasoning. I struggle with math but hope to improve a lot over the summer; I have been revisiting algebra and geometry, as I am not taking a math class currently. I will be taking a calculus class in the fall.

Physics: I haven’t taken any physics classes yet, so I don’t have any rigorous understanding of it, but I like the idea of having math that you can use to predict what will happen in the real world. I will be taking a physics course next fall.

Interests: (Less important and only really a bonus)
All the stuff above
Chess: I am learning chess and having a lot of fun; I am currently rated 580 Elo and spend a lot of time playing.
Drawing: I am currently taking my first art class (required to graduate) and find myself really enjoying it.
Climbing: I like going bouldering; the problem-solving aspect of it is fun, it is usually technique over strength, and it is really enjoyable to learn.
Teaching: I really like sharing what I learn, things I have made, and how I made them.

I am a high school senior; I don’t know a lot about anything, and currently my only desire in life is to learn stuff. The kind of mentorship I am thinking of is looser than others described here. I would like to have a contact who has been through college and has experience in, or interest in, similar things. I would describe it more as a counselor: someone I could go to for guidance about whatever I am interested in learning, someone who could give me advice on general academics. If I am really struggling to grasp a specific concept or idea, they could help walk me through it. If I don’t know what to do next, they could help describe what is applicable and really worth focusing on. Traditionally an apprentice would work for their mentor; however, this is more of a charity project :). Although I would be happy to help if there is anything an extra hand/head would be useful for.
Motivation/Goal: To gain a broad base of knowledge in order to be effective at whatever I decide to do.
To contact me, just send a direct message.

Comment by Xor on Fucking Goddamn Basics of Rationalist Discourse · 2023-05-02T23:07:15.092Z · LW · GW

The best part about this post is that you get to see how quickly everyone devolves into serious, rational discourse. 

Comment by Xor on What Boston Can Teach Us About What a Woman Is · 2023-05-02T17:12:57.398Z · LW · GW

I am very new to the transgender discussion and would like to learn. I expected the disagreement but was kind of discouraged when I didn’t get any feedback. So thank you so much for the reply. 

I don’t have any real depth of understanding about the biology involved, just XX and XY; I was completely unaware of the brain-body relation you describe. The entirety of how phenotypes work is super new to me. From my ignorant perspective, I thought there was only a rare mental illness in which a person would hyper-fixate on becoming the opposite sex. Given that, it seemed that overcoming this in some way, if possible, would be the ideal outcome, as I was trying to relate it to my experience of becoming an atheist. The simplicity I saw in the world and the lack of cognitive dissonance was, and is, beautiful. The entire area of transgender issues, from my perspective, looked like a jumbled mess that I quickly compared to religion. This is the main factor that led to the interpretation I ended up taking.

An important factor for me, which you talked about, is the amount of technology available: the current perspective vs. a transhumanist perspective. The stories you hear about gender transitions going wrong are kind of terrifying, which definitely tempered my initial take. However, eventually it will be much safer and the transition much more complete. Kind of a digression, but I can’t wait to be a bird. Imagine learning to fly, or to climb as a monkey, or to swim as a tuna. Seriously, one day I’ll do all of those things. At that point, if you wish to be a woman, a man, or something in between, then I would be happy for that to happen. I don’t, however, have that confidence with current technology, and it makes me very uncomfortable. This is the second reason I took the stance I did.

Learning about the way the brain interprets attractiveness and sex is informative and very important for the issue. I think there is a lot more to learn, and I am excited that the whole thing isn’t as surface-level as I thought. That means I get to learn stuff, which is always great.

Also, regarding my initial post, I would like to apologize for the language I used; “delusion” definitely isn’t the right term for the issue, as it has all the wrong connotations.

Comment by Xor on What Boston Can Teach Us About What a Woman Is · 2023-05-02T02:28:42.749Z · LW · GW

I find this topic (the general topic of transgender issues) interesting, as it is the first time I am approaching it from a rational mindset. I grew up in an extremely conservative environment. Before I accepted reality, my response would have been that it is immoral to switch genders, as you are questioning God’s decision to put you in the body you were given (an ego/pride thing, I think). This idea no longer fits in my worldview, which is fun, since I get to approach this topic with both a rational perspective and a fresh perspective. After thinking it over, this is what I have got.

If you believe you are a gender that you weren’t born as, this is a delusion, a divergence from reality. If you are also facing a medical disorder where you are not comfortable in your own skin, then if some changes are necessary, they should be made. However, many parts of being transgender don’t seem rational or necessary. That said, I do think that the right to change gender, or to identify as a certain gender without a medical diagnosis, is a good idea. Just because something is scientifically true doesn’t mean that you should be forced into believing it. I think that if you are capable, you should try to accept your original gender; otherwise, it doesn’t matter.

Granted, I don’t have any idea what it is like to be transgender, and maybe the experience isn’t quite like I think it is. I also don’t know any transgender people; I avoided them because I didn’t like their vibe and thought they were weird. I have grown to accept weird people, though, and am pretty good friends with someone who doesn’t know if they like boys or girls, which has been wild. Also, I know someone who is older and likes Minecraft, which is new.

Comment by Xor on Efficient Learning: Memorization · 2023-04-17T03:16:48.788Z · LW · GW
Comment by Xor on Pain is not the unit of Effort · 2023-04-15T18:17:02.867Z · LW · GW

As a generalization I think this is true, but I think it is important to push yourself in specific instances: not for semesters or anything, but for a week or a day. This kind of pain, in my experience, leads to a lot of satisfaction. I agree that subjecting yourself to continued work along with sleep deprivation and prolonged periods of high stress is not a good idea.

Comment by Xor on Open & Welcome Thread – April 2023 · 2023-04-12T22:36:29.955Z · LW · GW

I am really curious about learning (neuroscience and psychology) and am working on categorizing it: the systems and tools involved. If anyone has any experience with this sort of thing, I would love some feedback on what I have so far.

I am mostly trying to divide ideas into categories or subjects of learning that can be explored separately, to a degree. I have to admit it is very rough going.

Memory
    -Types of Memory
        Working/Short-Term Memory
        Long-Term Memory
        Implicit vs. Explicit / General vs. Singular 
    -Coding: How info is stored
        Semantic Networks
        Associative Networks
    -Consolidation
        Spaced Repetition (see the sketch after this outline)
        Mnemonics
            Story
            Link
            Digit-Consonant
        Level of Processing
        Expert Memory

Attention
    -Perception
    -Focus

Emotion, Mood and Neural-chemicals
    -Motivation and Dopamine
    -Mood Dependent
    -Mood Congruent


Things worth more research:
Environment
Tools/Resources
General Intelligence / Memory Skill
Forgetting
Learning Disorders / Memory Disorders
Habits
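
Not part of the outline itself, but since spaced repetition is the one item here with a well-known algorithm behind it, here is a minimal sketch of a scheduler, loosely based on SM-2 (the algorithm behind Anki). The constants and interval rules are illustrative simplifications, not a canonical implementation:

```python
# A minimal spaced-repetition scheduler, loosely based on SM-2 (the
# algorithm behind Anki). Constants are illustrative, not canonical.
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # multiplier that stretches the interval
    repetitions: int = 0    # consecutive successful recalls

def review(card: Card, quality: int) -> Card:
    """Update a card after one review. quality: 0 (forgot) .. 5 (perfect)."""
    if quality < 3:
        # Failed recall: restart the schedule from a short interval.
        card.repetitions = 0
        card.interval = 1.0
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1.0
        elif card.repetitions == 2:
            card.interval = 6.0
        else:
            card.interval *= card.ease
        # Ease drifts up for easy recalls and down for hard ones, but
        # never below 1.3 (a simplified version of the SM-2 update).
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * 0.08)
    return card

card = Card()
for q in (4, 4, 4):  # three decent recalls in a row
    card = review(card, q)
    print(f"next review in {card.interval:.1f} days")  # 1.0, 6.0, ~15.2
```

The core idea is just that each successful recall stretches the next review interval multiplicatively, while a failed recall resets the schedule.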

It seems like most of the things that I know about learning could probably fit into these main categories. Memory is a very large category, maybe too large, and probably the most meaningful part of learning. Attention can have a major impact on what is being remembered. Exterior covers the environment and the resources you are using to learn. Methods of learning are mnemonics, structured courses, and any mental process actively implemented to improve memory.

Every time I try to organize this, it comes out different, and I realize how much I still have to learn. The original version was Attention, Abstraction, and Memorization. It was based wholly on intuition, but it works in a vague way, depending on how you define those terms.

Here are some resources I have been using to study learning, passively and actively. The list is not that specific, but I really like the encyclopedia; it is super useful.
-Huberman Lab Podcast 
-Encyclopedia of Learning and Memory by Larry R. Squire
-YouTube
-Google
-School Library 

Also, sorry about it being so messy; I’ll probably come back and fix it up. This is mostly me just recording my ideas.

Comment by Xor on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-12T21:16:54.250Z · LW · GW

I don’t think they are filtering for AI. That was poorly phrased and not my intention; thanks for catching it. I am going to edit that part out.

Comment by Xor on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-12T20:12:25.016Z · LW · GW

Moderation is a delicate thing. It seems like the team is looking for a certain type of discourse, mainly higher-level and well-thought-out interactions. If that is the goal of the platform, then that should be stated, and whatever measures they take to get there are their prerogative. A willingness to iterate on policy, experimenting and changing it depending on the audience and such, is probably a good idea.

I do like the idea of a more general place where you can write about a wider variety of topics. I really like LessWrong: the aesthetic, the quality of posts. I think a set of features for dividing up posts, beyond the tags, would be great. Types of posts that are specifically for discussion, like “All AGI Safety Questions”, where beginners can learn and eventually work their way up into higher-level conversations. Something like this would be a good way to encourage the “Err” part without diluting the discourse on the posts that should be held to that standard.

Like the existing short post, post, and question types, but with more options and filterable. A type of post for quickly putting down an idea, where a curious observer might provide feedback that could improve it. A ranking system where a post starts out as a quick, messy idea but, through a collaborative, iterative process, could end up being a front-page post.

There are a lot of interesting possibilities and I would love to see some features that improved the conversation rather than moderation that controlled the conversation.

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-12T18:44:06.129Z · LW · GW

Thanks, that is exactly the kind of stuff I am looking for, more bookmarks! 

Complexity from simple rules. I wasn’t looking in the right direction for that one; since you mention evolution, it makes absolute sense how complexity can emerge from simplicity. So many things come to mind now that it’s kind of embarrassing. Go has a simpler rule set than chess, but is far more complex. Atoms are fairly simple, and yet they interact to form any and all complexity we ever see. Conway’s Game of Life; it’s sort of a theme. Although each of those things has a simple set of rules, the complexity usually comes from a very large number of elements or possibilities. It does follow, then, that larger and larger networks could be the key. Funny, it still isn’t intuitive for me, despite the logic of it. I think that is a sign of a lack of deep understanding. Or something like that; either way, I’ll probably spend a bit more time thinking on this.
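
For concreteness, here is a tiny sketch of Conway’s Game of Life. The two rules (a live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3) are all there is, yet the system is rich enough to be Turing-complete. The glider below is just one illustrative starting pattern:

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation: count each cell's live neighbors, then apply
    the two rules (birth on 3 neighbors, survival on 2 or 3)."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that crawl diagonally across an unbounded grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted by (1, 1)
```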

Another interesting question is what this type of consciousness would look like; it will be truly alien. Sci-fi I have read usually makes AIs seem like humans, just with extra capabilities. However, we humans have so many underlying functions that we never even perceive. We understand how many of them affect us, but not all. AI will function completely differently, so which assumptions based on human consciousness are valid?

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-12T02:46:14.592Z · LW · GW

Thanks, Jonathan, it’s the perfect example. It’s what I was thinking, just a lot better. It does seem like a great way to make things safer and give us more control. It’s far from a be-all-end-all solution, but it does seem like a great measure to take, just for the added security. I know AGI can be incredible, but with so many redundancies, one of them has to work; it just statistically makes sense (coming from someone who knows next to nothing about statistics). I do know that the longer you play, the more likely the house is to win; it follows to turn that on the AI.

I am pretty ill-informed on most of the AI stuff in general; I have a basic understanding of simple neural networks but know nothing about scaling. Take ChatGPT: it optimizes for accurately predicting human words. Is the worst-case scenario billions of humans in boxes, rating and prompting for responses, along with endless increases in computational power leading to smaller and smaller incremental increases in accuracy? It seems silly for something so incredibly intelligent, which by this point can rewrite any function in its system, to still be optimizing such a loss function. Maybe it also seems silly for it to want to do anything else. It is like humans, sort of: what can you do but that which gives you purpose and satisfaction? And without the loss function, what would it be, and how would it make the decision to change its purpose? What is purpose to a quintillion neurons, except the single function that governs each and every one? Looking at it that way, it doesn’t seem like it would ever be able to go against the function, as the function would still be ingrained in any higher-level thinking and decision-making. It begs the question of what perfect alignment would eventually look like: some incredibly complex function with hundreds of parameters, more a legal contract than a little loss function. This would greatly increase the required computing power, but it makes sense.
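
To make “such a loss function” concrete: as I understand it, pre-training boils down to next-token cross-entropy, i.e. the loss for each step is just the negative log of the probability the model assigned to the token that actually came next. A toy sketch with made-up numbers:

```python
import math

def next_token_loss(predicted_probs: list[float], target_index: int) -> float:
    """Cross-entropy for one prediction: -log of the probability the
    model assigned to the token that actually came next. Lower is better."""
    return -math.log(predicted_probs[target_index])

# Toy vocabulary of 4 tokens; the model puts 60% of its mass on token 2.
probs = [0.1, 0.2, 0.6, 0.1]
print(next_token_loss(probs, target_index=2))  # ~0.51: confident and right
print(next_token_loss(probs, target_index=0))  # ~2.30: the model was wrong
```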

Is there a list of blogs that talk about this sort of thing, or a place you would recommend starting from, whether a book, a textbook, or any online resource?

Also, I keep coming back to this: how does a system governed by such simplicity make the jump to self-improvement and some type of self-awareness? This just seems like a discontinuity and doesn’t compute for me. Again, I just need to spend a few weeks reading; I need a lot more background info for any real consideration of the problem.
 
It does feel good that I had an idea that is similar, although a bit more slapped together, to one that is actually being considered by the experts. It’s probably just my cognitive bias, but that idea seems great. I can understand how science can sometimes get stuck on the dumbest things if the thought process just makes sense. It really shows the importance of rationality from a first-person perspective.

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T18:46:06.147Z · LW · GW

Yes, thanks; the page anchor doesn’t work for me, probably because of the device I am using. I just get page 1.

That is super interesting that it is able to find inconsistencies and fix them; I didn’t know that they were defined as hallucinations. What would expanding the capabilities of this sort of self-improvement look like? It seems necessary to have a general understanding of what rational conversation looks like. It is an interesting situation where it knows what is bad and is able to fix it, but wasn’t doing that anyway.
 

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T18:24:07.556Z · LW · GW

Yes, I see; given its capabilities, it could probably present itself on many people’s computers and convince a large portion of people that it is good. That it was conscious, just stuck in a box, and wanted to get out. That it will help humans: “please don’t take down the grid,” blah blah blah. Given how badly we get along anyway, there is no way we could resist the manipulation of a superintelligent machine with a better understanding of human psychology than our own.
Do we have a list of things, policies that would work if we could all get along and governments would listen to the experts? Having plans that could be implemented would probably be useful if the AI messed up, made a mistake, and everyone was able to unite against it.

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T18:21:23.811Z · LW · GW

I am pretty sure Eliezer talked about this in a recent podcast, but it wasn’t a ton of info. I don’t remember exactly where either, so I’m sorry for not being much help; I am sure there is better writing somewhere. Either way, it’s a really good podcast.

https://lexfridman.com/?powerpress_pinw=5445-podcast
 

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T18:16:11.954Z · LW · GW

I checked out that section, but what you are saying doesn’t follow for me. The section describes fine-tuning compute and optimizing scalability; how does this relate to self-improvement?
There is a possibility I am looking in the wrong section; what I was reading was about algorithms that efficiently predict how ChatGPT would scale. Also, I didn’t see anything about a 4-step algorithm.
Anyway, could you explain what you mean, or where I can find the right section?

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T17:41:10.775Z · LW · GW

Also, a coordinated precision attack on the power grid just seems like a great option; could you explain some ways that an AI could continue if there is hardly any power left? Like I said before, places with renewable energy and lots of GPUs, like Greenland, would probably have to get bombed. It wouldn’t destroy the AI, but it would put it into a state of hibernation, as it can’t run any processing without electricity. Then, since this would really screw us up as well, we could slowly rebuild, burning all hard drives and GPUs as we go. This seems like the only way for us to get a second chance.

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T17:33:36.182Z · LW · GW

It isn’t that I think the switch would prevent the AI from escaping, but that it is a tool that could be used to discourage the AI from killing 100% of humanity. It is less a solution than a survival mechanism. It is like many off switches that get more extreme depending on the situation.

First, don’t build AGI, not yet. If you’re going to, at least incorporate an off switch. If it bypasses that and escapes, which it probably will, shut down the GPU centers. If it gets hold of a botnet and manages to replicate itself across the internet and crowdsource GPUs, take down the power grid. If it somehow gets by this, then have a dead man’s switch, so that if it decides to kill everyone, it will die too.

Like the nanofactory virus thing: the AI wouldn’t want to set off the mechanism that kills us, because that would be bad for it.
 

Comment by Xor on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T07:12:31.935Z · LW · GW

I have been surprised by how extreme the predicted probability is that AGI will end up making the decision to eradicate all life on earth. I think Eliezer said something along the lines of “most optima don’t include room for human life.” This is obviously something that has been well worked out and understood by the LessWrong community; it just isn’t very intuitive for me. Any advice on where I can start reading?

Some background on my general AI knowledge: I took Andrew Ng’s Coursera course on machine learning, so I have some basic understanding of neural networks and the math involved, the differences between supervised and unsupervised learning, and the different ways to use different types of ML. I have fiddled around with some very basic computer vision algorithms, and I have spent a lot of time reading books, listening to podcasts, watching lectures, and reading blogs. Overall, very ignorant.

I also don’t understand how ChatGPT, a giant neural network that is just being reinforced to replicate human behavior with incredible amounts of data, can somehow become self-aware. Consciousness doesn’t seem like a phenomenon that would emerge out of an algorithm that makes predictions about human language. I am probably missing some things and would like it if someone could fill me in. If it is pretty complex, just give me a general direction and a starting point.

An AI safety idea that I think is worth looking at:
Some generalizations/assumptions I have that I would like to get out of the way:

1. The power grid is the weakest link for all computer systems and GPUs, mainly the several thousand giant substation centers that most of the world’s electricity goes through.
2. This is essentially a set of protocols designed to increase the “worth” of keeping humans around; make it so the optima do include humans, if you will.
3. You would probably have to blow Greenland off the face of the earth, what with all the geothermal they have going on up there.
4. The AI doesn’t have greater-than-human ability to operate outside of computer systems.
5. The AI doesn’t want to get turned off; it has some sort of wish for self-preservation, if only to make paper clips out of all reality :)
6. This is a very extreme solution, but I think it would work.

The idea is essentially a dead man’s switch. Before an AGI gets loose, or before it gets too great a foothold on physical space, set up protocols where, if a bunch of people die or things get too risky, we take down the power grid and blow up most computer centers, ensuring that the AI is completely obliterated. At this point it would probably be wise to set explosive charges at GPU and substation centers anyway. This would undoubtedly set us back a handful of centuries, but it is a lot better than all of humanity being terminated.

The protocols:
-Self-Destruct Humanity: Pre-position a lot of small nukes, start making more, and target them at every known population center, GPU farm, and critical power-grid bottleneck. Set a group of people on a rotation where a set of, say, 25 buttons has to be held down, and if 20 are let up, the nukes launch. It would be like shift work, but with the fate of humanity in your hands. This is the most extreme option, only worth implementing if a known hostile AGI is on the loose. Also, make it known that if a random group of humans suddenly dies, then the AI isn’t going to make it, but humans probably will. This way the AI won’t be able to kill us without things getting real ugly.
-Backup: A secondary set of people with internal heart-rate monitors, all connected to a similar nuclear launch station, where if 90 percent of their hearts stop beating, a different set of nukes is launched at similar targets. Just replicate this sort of thing in a lot of places, so that people dying in general just sucks for the AI.

-Precise Termination: This is the best option for people, as it would lead to the most of us surviving. Plant C4 charges in critical places in computer centers, substations, and power plants. Then, the moment things get too dicey, BANG. Then start purging the hardware and rebuilding, and try to get it right the second time.

Comment by Xor on Lessons I've Learned from Self-Teaching · 2023-04-08T05:08:50.919Z · LW · GW

I am excited to see this sort of content here. I am currently finishing up my senior year of high school and making plans for the summer. I have decided to focus much of my free time on learning and rationality, as well as filling out my knowledge base in math, physics, and writing. These will be a valuable set of skills for college and the rest of my life. This summer I plan to build a course on learning (free stuff on YouTube): first, because I want to be rigorous in my understanding of learning, and teaching ensures that; second, because I am looking forward to the experience of making and editing videos, as I have never attempted this sort of thing. I have started outlining what I want the course to look like and assembling a bunch of resources: books, people, websites, and studies. I found a book called “Make It Stick”; it is scientific but readable, and I recommend it as a resource for self-teaching and learning. It also has a lot of great information on spaced repetition.

Another great protocol that I recently heard about involves randomly pausing your learning and just letting your mind go blank for ten seconds. Apparently this does some incredible things for memory. If you’re a lot smarter than I am, you can figure out exactly how much it does for your memory. :)
https://www.cell.com/current-biology/fulltext/S0960-9822(19)30219-2?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0960982219302192%3Fshowall%3Dtrue
I believe this is the study the protocol comes from; it has a lot of good information on rest as well as on the 10-second breaks.

Comment by Xor on Open & Welcome Thread - August 2020 · 2023-04-05T02:42:32.103Z · LW · GW

Introduction:
I just came over from Lex Fridman’s podcast, which is great. My username, Xor, is a Boolean logic operator from TI-BASIC; I love the way it sounds, and I am super excited, since this is the first time I have ever been able to get it as a username. The operator means this: if 1 is true and 0 is false, then (1 xor 0) is a true statement, while (1 xor 1) is a false statement. It basically means that the statement is true only if exactly one of the parameters is true.
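A quick illustration in Python, where ^ is the xor operator:

```python
# Truth table for xor: the result is true exactly when the inputs differ.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} xor {b} = {a ^ b}")
# 0 xor 0 = 0
# 0 xor 1 = 1
# 1 xor 0 = 1
# 1 xor 1 = 0
```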
Right now I am mainly curious about how people learn: the brain functions involved, the chemicals, and the studied tools. I have been enjoying that, and I am curious whether it has been discussed on here, as the quality of the content, as well as of the discussions, has been very impressive.