I also found this hard to parse. I suggest the following edit:
Omega will send you the following message whenever it is true: "Exactly one of the following statements is true: (1) you will not pull the lever (2) the stranger will not pull the lever " You receive the message. Do you pull the lever?
And even when the AGI does do work (The Doctor), it’s been given human-like emotions. People don’t want to read a story where the machines do all the work and the humans are just lounging around.
I am taking the opportunity to recommend the Culture series by Iain M. Banks. Here is a good entry point to the series; the books can be read in almost any order. It's not like they find no space for human-like actors, but I still think these books show -by being reasonably popular- that there is an audience for stories about civilizations where AGI does all the work.
Of course, your original point still stands if you say "most people" instead.
I think I found another typo
I have two theses. First of all, the Life Star is a tremendous
For anyone wondering, TMI almost certainly stands for "The Mind Illuminated", a book by John Yates, Matthew Immergut, and Jeremy Graves. Full title: The Mind Illuminated: A Complete Meditation Guide Integrating Buddhist Wisdom and Brain Science for Greater Mindfulness
Thank you
As I understand it, that point feels wrong to me. There are many things that I would be sad not to have in my life, but only over the vaguely long term, and that are easy to replace quickly. I have only one fridge and I would probably be somewhat miserable without one (or maybe I could adapt), but it would be absurd for me to buy a second one.
I would say most of the things that I would be sad to miss and that are easy to duplicate are also easy to replace quickly. The main exception is probably data, which should indeed be backed up regularly and safely.
Could you link a source for the once a week coffee? I am intrigued.
I have not yet read your recommendations, so I don't know if the answer is there.
I read the rewrites before I read the corresponding section of the post and, without knowing the context, I find Richard's first rewrite to be the most intuitive permutation of the three. I fully expect that this will stop once I read the post, but I thought that my particular perspective of having read the rewrites first might be relevant.
Adapted from the French "j'envisage que X", I propose "I am considering the possibility that X" or, in some contexts, "I am considering X". "The plumber says it's fixed, but I am considering he might be wrong".
I just want to point out that the sentence you replied to starts with an "if". "If those genes' role is to alter the way synapses develop in the fastest growth phase, changing them when you're 30 won't do anything" (emphasis mine). You described this as "At first you confidently assert that changing genes in the brain won't do anything to an adult". The difference is important. This is in no way a comment on the object level debate. I simply think Lesswrong is a place where hypotheticals are useful and that debates will be poorer if people cannot rely on the safety that saying "if A then B" will not be interpreted as just saying "B".
Error message: "Sorry, you don't have access to this draft"
Makes sense and I think that's wise (you could also think about it with other people during that time). Do you want to expand on the game-theoretic reasons?
You did, indeed, fuck up so hard that you don't get to hang out with the other ancestor simulations, and even though I have infinite energy I'm not giving you a personal high resolution paradise simulation. I'm gonna give you a chill, mediocre but serviceable sim-world that is good enough to give you space to think and reflect and decide what you want.
And you don't get to have all the things you want until you've somehow processed why that isn't okay, and actually learned to be better.
I was with you until this part. Why would you coerce Hitler into thinking like you do about morality? Why be cruel to him by forcing him into a mediocre environment? I suppose there might be game-theoretic reasons for this. But if that's not where you're coming from then I would say you're still letting the fact that you dislike a human being make you degrade his living conditions in a way that benefits no one.
I think this shows your "universal love" extends to "don't seek the suffering of others" but not to "the only reason to hurt* someone is if it benefits someone else".
* : In the sense of "doing something that goes against their interests".
When I downvote a comment it is basically never because I want the author to delete that comment. I rarely downvote comments already below 0, but even when I do it is not because I wish the comment was deleted. Instead, it mostly means that I dislike the way in which that comment was written and thought out; that I don't want people to have that style / approach when commenting. This correlates with me disagreeing with the position, but not strongly so; and I try to keep my opinions about the object topic to the agree/disagree voting.
I don't know how representative I am of the Lesswrong population in that regard, but I at least think most people who downvote a comment would prefer for it to stay undeleted; if only to make past discussions legible.
I took the survey and mostly enjoyed it. There are some questions that I skipped because my answer would be too specific and I wanted to keep the ability to speak about them without breaking anonymity.
I also skipped some questions because I wasn't sure how to interpret certain words.
I don't have much to add. But I think this is a very well done post, that it is nicely sized and scoped, and that it is about a good and useful concept.
Why not make it so there is a box "ask me before allowing other participants to publish" that is unchecked by default?
Thank you for your work. I am often amazed by the effort you pour into your regular posting, and I see you as among the highest value contributors to LW.
Two minor nitpicks:
As opposed to, as Solana states here, saying ‘e/acc is not a cult [....] technologically progressive meme.’
Maybe format this part like other quotations? I was confused for a second there.
Memes do not declare anyone who disagrees with them on anything as enemies, who they continuously attack ad hominem, describe as being in cults and demand everyone treat as criminals. Memes do not demand people declare which side people are on. Memes do not put badges of loyalty into their bios. Memes do not write multiple manifestos. Memes do not have, as Roon observes, persecution complexes.
Meh. Groups of people sharing a common meme certainly do all these things. I see little point in arguing the precise semantics of "movement" and I do not particularly think Solana's message is honest. But I would like to register that I don't see any intrinsic contradiction in "X is a rare meme that turns those who integrate it into their thinking into rude unthinking fanatics, but it has not created a movement yet".
I think this somehow misses the crux for the problem of induction. By redefining "justification" (and your definition is reasonable), you get to speak of partially justifying a general statement that has not yet been fully observed. Sure.
But this doesn't solve the question. The question, in your example, would be closer to "On what basis can I, having seen a large number of swans being white, consider it now more likely that the next swan I see will be white"?
I think it already exists to some extent, at least in France. We have "leboncoin" which I think is similar to Craigslist. Many offers are very cheap and the software is decent, though not great. The giver has to deal with the hassle of taking pictures, making a public offer, and then coordinating with the taker; so in exchange the taker gives them a token amount of money. Seems fair. I truly think that many offers on leboncoin are put there because people want others to benefit from what they no longer need. I also think I saw some offers in the past that were fully free (listed for 1 euro, with a comment saying they are free).
You say
Did craigslist pave the way despite becoming a cesspool of overly expensive crap and deceptively listed '$1' ads for businesses, that require you to drive 8 miles to pick it up while someone side-eyes you suspiciously from their driveway and waits for the venmo to go through?
But my experience with leboncoin in France has always been about nice people and polite conversations. Maybe there is a cultural difference?
I sometimes think that it would be great to have a more comprehensive system to facilitate giving and reusing all kinds of things across society. It would notably be great to have a system that handles storage until a taker is found, pickup/delivery, quality check, and cleaning in exchange for a small fee. We also have that in France for furniture, it is called Emaus. You give your couch to the charity, they clean it and ensure it is in decent shape (I think they do bedbug screening) then they put it up for sale and deliver it, all for a very reasonable price. In addition it even provides jobs for people in need who might often be "noncompetitive" and unlikely to find jobs otherwise. Emaus is a charity, so they don't care: the jobs are part of the goal. Their website is quite bad, there is no accessible database of items on offer and they don't even do a good job at clarifying their services and policies. So you need to go there and see for yourself. Why don't they have a good website/app? I don't know but I can guess. Not a priority, their volunteers are often old and don't know how to code, it would take more work for each donation if they had to populate a database, everything gets sold eventually so they think better software would change nothing.
In short: Craigslist/leboncoin is already decent software for donations where you pay the donor a few bucks for taking the time to organize the donation, and people use that software to that end. Maybe there would be benefits to an app that only organizes donations and nothing else, but the gain seems minimal. It would make sense to have a third party simplify the logistics for a fee, and that has been done for expensive items (couches). For it to work with smaller stuff you would probably need some economies of scale, so it might be harder to make it work. Maybe there is an opportunity there. You certainly would need good software: I might go to Emaus to see if they have a nice couch, but I won't go for a blanket if I cannot check in advance that they have a nice one.
This looks interesting. I will come back to this post later and read it if the math displays properly.
I have become less sceptical about the ability of Western governments to act and solve issues in a reasonable timeframe. In general, I tend to think political actions are doomed and are mostly only able to let the status quo evolve by itself. But recent relatively fast reactions to the evolution of mainstream AI tools have led me to think that I am too cynical on this. I do not know what to think instead, but I am now less confident in my old opinion.
Here is a description of that metaphor, for those who don't know.
I have heard of filters and ultrafilters, but I have never heard of anyone calling any sort of filter a hyperfilter.
Oops, my bad. I re-read the post as I was typing to make sure I hadn't missed any explanation. That can sometimes cause me to type what I read instead of what I intended. I probably swapped the prefixes because they feel similar.
Thank you for the math. I am not sure everything is right with your notation in the second half; it seems to me there must be a typo either for the intersection case or the superset one. But the ideas are clear enough to let me complete the proof.
Thank you for this. It looks like a good first contact with hyperreals.
Two nitpicks:
- Ω=(1,2,3,ldots). --> I think you forgot a "\" here and it is messing your formatting up.
- It is not clear in the post why we use a hyperfilter, rather than just the set of all infinite sets.
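(For reference, the standard definitions I am relying on for this nitpick; part of the answer is presumably that the set of all infinite sets is not even closed under intersection:)

```latex
% A filter F on \mathbb{N}: \emptyset \notin F, F is closed under finite
% intersections, and F is closed under supersets. An ultrafilter additionally
% satisfies: for every A \subseteq \mathbb{N}, either A \in F or
% \mathbb{N} \setminus A \in F.
%
% The set of all infinite subsets of \mathbb{N} fails the intersection axiom:
\[
  \{\, 2n : n \in \mathbb{N} \,\} \cap \{\, 2n+1 : n \in \mathbb{N} \,\} \;=\; \emptyset .
\]
```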
I am also curious about this association. Is it just that in your experience these traits are correlated?
I agree that deep canvassing would be interesting. I am also curious about the famous experiments in which forcing people to smile (by holding a pen in their mouth) makes them more likely to appreciate something. Though I don't know if there are already many replication studies for those.
But maybe as an economist you would consider this to be too far outside your field?
No, I mean the price at which that party is indifferent between making the deal and not making the deal.
I think that's the same thing? By "highest price" I meant "the highest price the buyer is willing to pay". That's the turning point after which the buyer dislikes the deal and before which the buyer likes the deal.
Yeah, and I'm trying to make that difficult for humans to do.
I understand, but I fail to see that this attempt works. It seems to me that in many / most real cases (for which I have a reasonable estimate of the other's best price) it is in my interest to lie if I know that the other is filling the form honestly. If that is correct, then the "honest meta" is unstable.
Just in case: I assume that by "best price" you mean "highest price" rather than "estimated fair price".
If so, I only need to have some information on it to be incentivized to lie. In the example above I only use the information that the buyer is willing to pay two units above the fair price. The kind of example I use doesn't work if I have no information at all about the other's best price but that is rare. Realistically, I always have "some" estimation of what the other is willing to pay.
If we take a general Bayesian framework, I have a distribution on the buyer's best and fair price. It seems to me that most/all nontrivial distributions will incentivize me to lie.
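A minimal sketch of what I mean, assuming (and this is an assumption on my part about how the tool resolves the price, not something I checked) that when the two reported limits cross, the deal closes at their midpoint; the numbers and the belief distribution are of course made up:

```python
import random

random.seed(0)

FAIR_PRICE = 10.0          # hypothetical commonly agreed fair price
SELLER_TRUE_LIMIT = 9.0    # lowest price I (the seller) would truly accept

def deal_price(seller_report, buyer_report):
    """Assumed rule: if the reports cross, the deal closes at their midpoint."""
    if buyer_report >= seller_report:
        return (seller_report + buyer_report) / 2
    return None

def expected_seller_surplus(seller_report, n=100_000):
    """Expected seller surplus when the buyer reports honestly and the buyer's
    true limit is drawn from the seller's (made-up) belief distribution."""
    total = 0.0
    for _ in range(n):
        buyer_limit = random.uniform(FAIR_PRICE, FAIR_PRICE + 4)  # assumed belief
        price = deal_price(seller_report, buyer_limit)
        if price is not None:
            total += price - SELLER_TRUE_LIMIT
    return total / n

print("honest report  :", expected_seller_surplus(SELLER_TRUE_LIMIT))
print("inflated report:", expected_seller_surplus(FAIR_PRICE + 2.0))
```

With these made-up numbers the inflated report gives a higher expected surplus than the honest one, which is the instability I am pointing at.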
I am not so sure the incentives are properly aligned here. Let's assume I am the seller. In the extreme case in which I know the highest price accepted by the buyer, I am obviously incentivized to take it as my own limit price.
And I think this generalizes. If:
- there is a universally agreed true fair price FAIR_PRICE, like an official market value
- the buyer is still filling the form honestly
- I know the buyer to have a highest price above FAIR_PRICE + 2
then I can easily get FAIR_PRICE + 2.
Of course this requires some information on the other negotiator, but I do not see this as unreasonable.
Could you clarify in which situation this is meant to incentivize people to fill the form honestly?
I guess. But I don't know of any real-world transactions where it's expected that people keep their word on something like this
I think there are two points worth raising here:
- If someone agrees to precommit to the result of this negotiation and then, when the website outputs a price, refuses to honor it, then I probably do not want to trade with them anymore. At the least, I would count it as though they agreed to a price and refused to honor it the next day.
- You only need to keep a solid precommitment yourself to avoid falling prey to the strategy above.
The obvious issue in my eyes is that few people would agree to use this kind of tool anyway, especially for a nontrivial transaction. But in a society (or simply a subculture) that normalizes them it would probably no longer be true that people don't consider their precommitment to the tool as binding.
The obvious exploit is to lie and then negotiate “normally” if the tool fails to make a deal in your favor.
The website says:
In order for both participants to have the correct incentives, you must both commit to abide by that result and not try to negotiate further afterwards.
So this strategy fails against people that keep their word.
Maybe you would accept this paper, which was discussed quite a bit at the time: Emergent Tool Use From Multi-Agent Autocurricula
The AI learns to use a physics engine glitch in order to win a game. I am thinking of the behavior at 2:36 in this video. The code is available on GitHub here. I didn't try to run it myself, so I do not know how easy to run or how complete it is.
As to whether the article matches your other criteria:
- The goal of the article was to get the AI to find new behaviors, so it might not count as purely natural. But it seems the physics glitch was not planned. So it did come as a surprise.
- Maybe glitching the physics to win at hide and seek is not a sufficiently general behavior to count as a case of instrumental convergence.
I won't blame you if you think this doesn't count.
lack of built-in machinery for inviolable contracts which makes non-defection hard to enforce
Off topic: if you change nothing else about the universe, an easy-to-use "magical" mechanism for inviolable contracts would be a dreadful thing. As soon as you have power of life or death over someone you can pretty much force them into irrevocable slavery. I suppose we could imagine a "good" working society using that mechanism. But more probably almost all humans would be slaves, serving maybe a single small group of aristocrats.
You might want to add a "free of influence" condition to the contract system, but in a society that normalizes absolute power (such as many ancient monarchies), that becomes difficult to define.
Ok, I think I can clarify what people generally mean when they consider that the logic Church-Turing thesis is correct.
There is an intuitive notion of computation that is somewhat consensual. It doesn't account for limits on time (beyond the fact that everything must be finitely long) and does not account for limits in space / memory. It is also somewhat equivalent to "what a rigorous idiot could be made to do, given immortality and infinite paper/pen". Many people / most computer scientists share this intuitive idea, and at some point people thought they should make more rigorous what exactly it is.
Whenever people tried to come up with formal processes that only allow "obviously acceptable" operations with regard to this intuitive notion, they produced frameworks that are either weaker than or equivalent to the Turing machine.
The Church-Turing thesis is that the Turing machine formalism fully captures this intuitive notion of computation. It seems to be true.
With time, the word "computable" itself has come to be defined on the basis of this idea. So when we read or hear the word in a theoretical computer-science context, it now refers to the Turing machine.
Beyond this, it is indeed the case that the word "computation" has also come to be applied to other formalisms that look like the initial notion to some degree. In general with an adjective in front to distinguish them from just "computation". We can think for example of "quantum computing", which does not match the initial intuition for "computation" (though it is not necessarily stronger). These other applications of the word "computation" are not what the Church-Turing thesis is about, so they are not counterexamples. Also, all of this is for the "logic" Church-Turing thesis, to which what can be built in real life is irrelevant.
PS: I take it for granted that computer scientists, including those knowledgeable on the notions at hand, usually consider the thesis correct. That's my experience, but maybe you disagree.
I didn't check the article yet but if I understood your comment correctly then a simpler example would have been "Turing machines with a halting oracle", which is indeed stronger than normal Turing machines. (Per my understanding) the Church-Turing thesis is about having a good formal definition of "the" intuitive notion of algorithm. And an important property of this intuitive notion is that a "perfectly rigorous idiot" could run any algorithm with just paper and a pen. So I would say it is wrong to take something that goes beyond that as a counterexample.
Maybe we should clarify this concept of "the intuitive notion of algorithm".
PS: We are running dangerously close to just arguing semantics, but insofar as "the Church-Turing thesis" is a generally consensual notion I do not think the debate is entirely pointless.
Thank you for this post.
Seems kinda weird, right? Well, consider this: Turing showed that there is no computation that this machine can't perform, that another machine can perform.
I am not sure to which extent you already know what follows, but I thought this might be worth clarifying. The "basic" Church-Turing thesis is that our intuitive notion of "formal computation process" is entirely captured by the Turing machine. A consequence is that any "good" formal notion of algorithm / fully described computation process is equivalent to or weaker than the Turing machine. So far, this has been true of all proposals and it is taken for granted that this statement is true. Note that the thesis is about being a good formal definition of an intuitive notion. Hence we cannot prove it to be true. We can only prove that various attempts at formalizing computation indeed ended up equivalent to each other.
The "physic church turing thesis" is that no machine can be built that performs computations impossible for a turing machine. For example: there is no way to build a halting oracle in real life. This is less certain.
But if you take a God's eye view and had the power to get the community to all shift at once to a different model, it sounds to me like there are probably better ones than Turing's.
People implicitly refer to models much closer to actual programming languages when they prove stuff related to computability. The tedious details of Turing machines are rarely discussed in actual proofs, at least in my experience. In fact, the precise model at hand is often somewhat ambiguous, which can be slightly problematic. I think the Turing machine is a good starting point to understand the notion of computability, just because it is simpler than the alternatives (that I am aware of).
One of my favorites: he proved that, given some extremely, extremely reasonable assumptions[13], you should Shut Up and Multiply. Well, he didn't phrase it that way. He said "maximize expected utility".
Not sure he did, at least not to the extent implied when we say "shut up and multiply". I guess you refer to the Von Neumann–Morgenstern utility theorem, but that theorem does not provide a full "it is in our best interest to do utility maximization". There was some discussion on the topic recently: a first claim and my own answer.
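For reference, the statement I have in mind (my paraphrase from memory, not a quote from the post):

```latex
% Von Neumann--Morgenstern: if a preference relation \succeq on lotteries over
% outcomes satisfies completeness, transitivity, continuity and independence,
% then there is a utility function u, unique up to positive affine
% transformations, such that for all lotteries L_1, L_2:
\[
  L_1 \succeq L_2
  \iff
  \mathbb{E}_{x \sim L_1}[u(x)] \;\ge\; \mathbb{E}_{x \sim L_2}[u(x)].
\]
% The theorem says such preferences can be represented as expected-utility
% maximization; it does not by itself say that explicitly computing expectations
% is the procedure it is in our best interest to follow, which is the gap I am
% pointing at.
```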
Let's put aside the distinction between initial conditions and rules; I think it is just a distraction at this point.
In general I would even expect a complete simulation of the universe to be non-computable. I.e. I expect that the universe contains an infinite amount of information. If we bound the problem to some finite part of time and space then I expect, just as an intuition, that a complete simulation would require a lot of information. I.e., the minimal Turing machine / Python code that consistently outputs the same result as the simulation for each input is very long.
I do not have a good number to give that translates this intuition of "very long". Let's say that simulating the Earth during the last 10 days would take many millions of terabits of data? Of course the problem is underspecified. We also need to specify what the legal inputs are.
Anyway, do you agree with this intuition?
I agree it might be too ambitious to look at all nondominated strategies. I went for "nondominated" as a condition because it was, in my eyes, the best formal translation of the initial intuitive claim I was trying to test. Besides, that's what is used in the complete class theorem.
There might be interesting variations of the conjecture with stricter requirements on the strategy. But I also think it would be very hard to give a non-tautological result that uses this notion of "no matter the odds". The very notion that there are odds to discuss is what we are trying to prove.
You seem to interpret that in my example the energy "in the battery of the agent" counts, such that not moving can't be dominated. I said "energy accumulated in some region of the universe" to avoid this kind of thing. Anyway, the point of the example is not showing a completely general property, but to point at things which have the property, so I expect you to fix yourself counter-specifications that make the example fail, unless of course you thought the example was very broken.
Sure. I agree counterexamples that rely on a small specification flaw are not relevant to your point.
I don't know if that class of examples works. My intuition is somewhat that there will be nondominated strategies that are not utility maximization "by default" in that sort of game. At least if we only look at utilities that are weighted sums of the energy at various points in time.
On the whole and in general, it is still not intuitive to me whether utility maximization becomes ubiquitous when the "complexity" ratio you define goes down.
Sorry for only answering now, I was quite busy in the last few days.
I think simulating the real world requires a lot of memory and computations, not a large program. (Not that I know the program.) Komogorov complexity does not put restrictions on the computations. Think about Conway's game of life.
You also need a specification of the initial state, which dramatically increases the size of the program! Because the formalism only requires Turing machines (or any equivalent computation formalism), there is no distinction between the representation of the "rules" and the rest of the necessary data. So even if the rules of physics themselves are very simple (like in the game of life), the program that simulates the world is very big. It probably requires something like "position of every atom at step 0".
Sorry, my choice of expression is confusing. I was thinking about a directed acyclic graph representing the order in my mind, and called that a "tree", but indeed the standard definition of a tree is acyclic without orientation, so the skeleton of a DAG does not qualify in general. A minimal representation of a total order would be a chain, while a (non-total) partial order has to contain "parallel branches".
Ok thank you. I will keep reading "order relation" for those.
Weird. I didn't expect this to be wrong and I did not expect the other one to be right. Glad I asked.
"Minimal number of character it takes to implement this game in python" would be small because the "game code" part is the laws and the reward.
Not so sure about that. The game has to describe and model "everything" about the situation. So if you want to describe interaction with details of a "real world" then you also need a complete simulation of said real world. While everything is "contained" in the reward function, it is not like the reward function can be computed independently of "what happens" in the game. It is however true that you only need to compute the most minimal version of the game relevant to the outcome. So if your game contains a lot of "pointless" rules that do nothing then they can be safely ignored when computing the complexity of the game. I think that's normal.
In the case of the real world, even restricting it to a bounded precision, the program would need to be very long. It is not just a description of the sentence "you win if there are a lot of diamonds in the world" (or whatever the goal is). It is also a complete "simulation" of the world.
Btw, the notion I was alluding to is Kolmogorov complexity.
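For reference, the usual definition, relative to a fixed universal machine U (changing U only shifts the value by an additive constant):

```latex
% Kolmogorov complexity of a string x, for a universal machine U:
\[
  K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}.
\]
% "Minimal number of characters it takes to implement this game in Python" is
% the same idea, with Python playing the role of U.
```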
and it would be difficult to write a non-dominated strategy which tries to be non-dominated just by not accruing that much energy overall and instead moving a lot of energy at some time step. Yet it's probably possible [...]
Depending on the exact parameters, an intuitive strategy that is not dominated but not optimal long term either could be "never invest anything", under which you value the present so much that you never move, because that would cost energy. Remark that this strategy is still "a" utility maximisation (just value each step much more than the next). But it is very bad for the utility you described.
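To make the parenthetical concrete, this is the kind of utility I have in mind; here $e_t$ is my shorthand for the energy accrued by step $t$ in the relevant region, which is an assumption about how the game is scored:

```latex
% "Never invest anything" maximizes a heavily discounted sum:
\[
  U \;=\; \sum_{t \ge 0} \gamma^{t} \, e_t , \qquad 0 < \gamma \ll 1 .
\]
% Each step is valued far more than the next, so spending energy now for a
% payoff later is never worth it, and the maximizing behaviour is to never move.
```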
But this kind of trick becomes more difficult if I restrict the number of branches you can make in the preferences tree; it's possible to be non-dominated "just" because you have many non-comparable branches and it suffices to do OK on just one branch. As I restrict the number of branches, you'll have to do better overall.
I still get the feeling that your notion of preference tree is not equivalent to my own concept of a partial order on the set of outcomes. Could you clarify?
In EJT's post, I remember that the main concrete point was being stubborn, in this sense:
if I previously turned down some option X, I will not choose any option that I strictly disprefer to X
To my understanding that was a good counter to the idea that anything that is not a utility maximisation is vulnerable to money pumps in a specific kind of game. But that is restricted to "decision tree" games in which, in every turn but the first, you have an "active outcome" which you know you can keep until the end if you wish. Every turn you can decide to change that active outcome or to keep it. These games are interesting to discuss Dutch book vulnerability but they are still quite specific. Most games are not like that.
On a related note:
a non-dominated strategy for a preference tree compact enough compared to the world it applies to will be approximately a utility maximizer
I think I didn't understand what you mean by "preference tree" here. Is it just a partial order relation (preference) on outcomes? If you mean "for a case in which the complexity of the preference ordering is small compared to that of the rest of the game", then I disagree. The counterexample could certainly scale to high complexity of the rules without any change to the (very simple) preference ordering.
The closest I could come to your statement in my vocabulary above is:
For some value ε, if the ratio "complexity of the outcome preference" / "complexity of the total game" is less than ε, then any nondominated strategy is (approximately) a utility maximisation.
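Or written out, with $K(\cdot)$ standing for whichever complexity measure we end up choosing (that choice being left open):

```latex
% Candidate statement, K denoting the chosen complexity measure:
\[
  \exists\, \varepsilon > 0 :\quad
  \frac{K(\text{outcome preference})}{K(\text{total game})} < \varepsilon
  \;\Longrightarrow\;
  \text{every nondominated strategy is (approximately) a utility maximisation}.
\]
```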
Is this faithful enough?
I have the suspicion that you read "more complexity" as meaning "more restrictions", while I meant the contrary (I do realize I didn't express myself clearly). Is that the case?
My intuition for the idea of complexity is something like "the minimal number of characters it takes to implement this game in Python". The flaw is that this assumes computable games, which is not in line with the general formulation of the conjecture I used. So that definition does not work. But that's roughly what I think of when I read "complexity". Is that compatible with your meaning?
Note that this is for the notion of complexity of a given game. If you mean the complexity of a class of games then I am less certain how to define it. However if we are to change the category of games we are talking about then the only clear ways to do so I see involve weakening the conjecture by restricting it to speak of strictly fewer games.
I see. If we were to make this formal it would depend on the notion of "complexity" we use.
Notably it seems intuitive that there would be counterexample games that pump complexity out of thin air by adding rules and restrictions that do not really matter. So "just" adding a simple complexity threshold would certainly not work, for most notions of complexity.
Maybe it is true that "the higher the complexity the larger the portion of nondominated strategies that are utility maximisation". But
- The set of strategies is often infinite, so the notion of "portion" depends on a measure function.
- That kind of result is already much weaker than the "coherence result" I have come to expect by reading various sources.
Interesting idea anyway, seems to require quite a bit more work.
I get the idea but I am not sure how to move to a richer domain. The only obvious idea I see is to go to continuous time, but that is not the usual paradigm for games.
We could go the opposite direction and try to get a result for a more restrictive class of games. I listed some in the post; the only case I thought of for which I do not know if the result holds is bounded games.
Alternatively, it is also possible to take another hypothesis than the strategy not being dominated. The result has the shape "if a strategy is X then it is a utility maximisation". Maybe we can find some better X.
Is there some other way to change the conjecture that I missed?
[MENTOR]
I am a 25-year-old computer science PhD student and I currently work on natural language processing with neurosymbolic approaches (using a mix of logic and neural networks to do reasoning on a normal text input). I have a (very) high degree of education in math and theoretical computer science. I also have some very limited skills and knowledge for more practical aspects of computer science. Also, I have quite a lot of reflection on the topic of the nature of thought, valid inference, and rational behavior. These are condensed in a currently unpublished book-length report, the result of a year-long break I took for personal reflection. I expect to be able to help quite a lot with fundamental reflection that relates to epistemology or rationality.
If you intend to learn about some aspect of mathematics or theoretical computer science, I can probably point you toward resources or help you understand technical aspects. I am also willing to serve as background help for someone going through a math education. Alternatively, I might be willing to help with an in-depth reflection to which you think I can be relevant.
I can offer some asynchronous email/message exchanges and semi-regular conversations, perhaps averaging one or two per month. Ideally, we would schedule conversations quickly when you need them. If you have a very different mode of interaction in mind I am willing to adapt.
[APPRENTICE]
Presentation copy/pasted from the mentor section.
I am a 25-year-old computer science PhD student and I currently work on natural language processing with neurosymbolic approaches (using a mix of logic and neural networks to do reasoning on a normal text input).
I have a (very) high degree of education in math and theoretical computer science. I also have some very limited skills and knowledge for more practical aspects of computer science.
Also, I have quite a lot of reflection on the topic of the nature of thought, valid inference, and rational behavior. These are condensed in a currently unpublished book-length report, the result of a year-long break I took for personal reflection.
I am looking for one of the following:
- Resources and help on the topic of the nature of thought and the characterization of valid inference. This is a long-standing topic for me: a string of reflections and readings that has been, and continues to be, an important side project.
- Like the first point, but with a focus on AI or, in general, the automation of thought.
- Help with better managing my work and motivation. I need to reduce my work-related stress. The help of someone who does a high amount of intellectual work without stress could be precious.
By default, I would imagine a few discussions near the beginning and then much less frequent conversations as needed (once a month?). But that's just off the top of my head.
[NORMAL]
Whether as a mentor, as an apprentice, or because you think we could work together in some other capacity, feel free to DM me through LessWrong. Give me a few days to get back to you. If I do not answer it probably means your message did not go through. In that case just comment below.
Thank you for the post. A few typos:
- To understand the current conflicts, it’s vital what the Russian discourse means when it talks about Nazis --> Probably "it's vital to understand".
- One argument made, about why Russia’s claims of far-right influence in Ukraine are overblown, is that far-right parties don’t have much influence is that they have relatively poor electoral results --> Two different versions of the same end of sentence.
- Holocaust to be national heroes who are shall not be criticized feels deeply wrong. --> just "who shall not be".
I don't think this is a good illustration of point 6. The video shows a string of manipulative leading questions, falling short of the "in a way that they would endorse" criterion.
When people understand that a string of questions is designed to strong-arm them into a given position, they rarely endorse it. It seems to me that point 6 is more about benevolent and honest uses of leading questions.
Admittedly, I am making the assumption that "in a way that they would endorse" means "such that if people understood the intent that went into writing the string of questions in that way, they would approve of the process".
at first she had qualms literally called "Effective Evil"
I think this sentence is missing "working for an organization".