Also, radars are good at pinpointing exact coordinates in space and time. Optical recognition may have delays or difficulty in measuring distance. Even a one-second delay makes its information useless for supersonic aircraft.
Maybe the most surprising answer is Paul Valéry. He was a great poet, but during one night in 1892 he decided to spend the rest of his life solving the mystery of intelligence, and he wrote extensive notebooks about it. https://collecties.kb.nl/en/koopman-collection/1951-1960/cahiers
It works only if the aircraft flies over your territory - and similar systems are used for drone detection now. Actually, they use people's eyes, smartphones, and instant messaging. But during the recent attack on Iran, a single F-35 flew over Iraq and fired a missile from something like 200 km at a target in Iran.
Did you ever see any plane that far away? I have only seen planes directly above me (at about 10 km), and they look almost like dots.
The difference between optics and radar is that with optics you need to know where to look, while radar has constant 360° coverage.
They likely use them in places where no air defence is present, and still at some distance, using JDAMs.
I think I missed the main thing about stealth: stealth aircraft are invisible to radar at distances like 100 km but visible to radar at distances like 10 km (arbitrary numbers). Optical observation at distances of 100 km, however, is impossible (it needs large telescopes, and you need to know where to look). Also, the optical density of the atmosphere starts to play a role, as does the curvature of the Earth.
Tactical support aircraft like the A-10 are not stealth, and they can be used only if air defence is suppressed.
Flying very low, like 10-30 meters above the ground at night, protects even against MANPADS - the aircraft passes overhead in a few seconds.
I recommend an interesting blog, https://xxtomcooperxx.substack.com/p/its-the-range-stupid-part-1, which discusses air defence and the current war at length.
That is why they prefer to fly strike missions during moonless nights. They can also fly very low or very high, which makes optical observation difficult.
Non-disease copies do not need to make any changes to their meditation routine in this model, assuming that they naturally forget their disease status during meditation.
It failed my favorite test: draw a world map in text art.
It claims to have a knowledge cutoff of Nov 2023, but it failed to tell what happened on October 7 and hallucinated.
Yes, they could now make a much better version - and I hope they will do it internally. But deleting the public version sets a bad precedent; it would be better to open-source all personal sideloads.
ChatGPT 4.5 is in preview at https://chat.lmsys.org/ under the name gpt-2.
It calls itself ChatGPT 2.0 in a text art drawing https://twitter.com/turchin/status/1785015421688799492
Yes, it only works if other copies are meditating for some other reason. For example, they sleep or meditate for enlightenment. And they are exploited in this situation.
I assume that meditation happens naturally, like sleep.
I think I understand what you are saying - the expected utility of the whole procedure is zero.
For example, imagine that there are 3 copies and only one has the disease. All meditate. After the procedure, the copy with the disease has a 2/3 chance of being cured. Each of the two copies without the disease gets a 1/3 chance of acquiring the disease, which sums to 2/3 of total disutility. In that case, the total utility of being cured equals the total disutility of getting the disease, and the whole procedure is neutral.
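The 3-copies arithmetic can be sketched in a few lines (a toy sketch; the symmetric +1/−1 utilities for cure and infection are my assumption):

```python
# Toy model: 3 copies, exactly one has the disease; all meditate,
# forget their status, and each wakes as a random member of the group.
n_copies = 3
p_cured = (n_copies - 1) / n_copies   # sick copy wakes as a healthy one: 2/3
p_infected_each = 1 / n_copies        # each healthy copy wakes as the sick one: 1/3

gain = 1 * p_cured                          # expected gain of the sick copy
loss = (n_copies - 1) * p_infected_each * 1 # total expected loss of healthy copies

print(gain, loss)  # equal, so the net expected utility is zero
```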
However, if I already know that I have the disease, and I am not altruistic toward my copies, isn't playing such a game a winning move for me?
The trick is to use the already existing practice of meditation (or sleep) and connect to it. Most people who go to sleep do not do it in order to use magic by forgetting, but it is natural to forget something during sleep. Thus, the fact that I wake up from sleep does not provide any evidence that I have the disease.
But it is in a sense parasitic behavior, and if everyone used magic by forgetting every time they went to sleep, there would be almost no gain. Except that one could "exchange" one bad thing for another, but would not remember the exchange.
Self-playing Adversarial Language Game Enhances LLM Reasoning
https://arxiv.org/abs/2404.10642
LLMs can now also self-play adversarial word games, and it increases their performance: https://arxiv.org/abs/2404.10642
Roman Mazurenko is dead again. The first resurrected person, Roman lived as a chatbot (2016-2024) created from his conversations with his fiancée. You could even download him as an app.
But not any more. His fiancée has married again, and her startup http://Replika.ai pivoted from resurrection help to AI girlfriends and psychological consulting.
It looks like they quietly removed the Roman Mazurenko app from public access. It is an especial pity that his digital twin lived for less time than his biological original, who died at 32 - especially now, when we have much more powerful instruments for creating semi-uploads based on LLMs with large context windows.
The "repeating" will not be repeating from the internal point of view of the person, as he has completely erased the memories of the first attempt. So he will do it as if for the first time.
Yes, here we can define magic as "the ability to manipulate one's reference class". And some special minds may be much better adapted to it.
Presumably in deep meditation people become disconnected from reality.
Yes, it is easy to forget something if it has not become a part of your personality. So a new bad thing is easier to forget.
The number of poor people is much larger than the number of billionaires. So in most cases you will fail to wake up as a billionaire. But sometimes it will work, and this is similar to the law of attraction. The formulation via forgetting is more beautiful, though: you forget that you are poor.
UPDATE: actually, the difference from the law of attraction is that after applying the law of attraction, a person still remembers that he has used it. In magic by forgetting, the fact of its use must itself be completely forgotten.
I can forget one particular thing but preserve most of my self-identification information.
In the case of the broken robot we need two conditions for magic by forgetting:
- there are 100 robots, only one is broken, and all of them are type-copies of each other;
- each robot naturally enters a blank state of mind at some moment, like sleep or a reboot.
In that case, after a robot enters the blank state of mind, it has equal chances of being any of the robots, and this dilutes its chances of having the damaged body after awakening.
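The dilution can be sketched under the stated assumptions (100 type-copies, exactly one broken, and the blank state randomizing which robot one wakes up as):

```python
# Before the blank state, the broken robot knows its status with certainty.
# After the blank state, it is equally likely to be any of the 100 copies.
n_robots = 100
p_broken_before = 1.0          # certainty of being the broken one
p_broken_after = 1 / n_robots  # diluted to a 1% chance after awakening

print(p_broken_before, p_broken_after)
```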
For your toy example - to a first approximation, any of them can recognize itself as avturchin (a self-recognition identity criterion).
I am building my sideload by recursively correcting a 1-million-token prompt for a large LLM. The prompt consists of 500 rules which describe my personality, similar to a personal constitution, and of some texts: diaries, abstracts, poetry, streams of thought, etc. It works on Google Gemini (1M context) through Google AI Studio, and the shorter version works great on Opus. The system also includes a universal "loader prompt" which tries to increase the intelligence of the model and describes how the chatbot should work.
I found that sideloading allows very quick iterations on the sideload, and the improvements are two-fold: improvements of the loader itself, and improvements of the knowledge and style of the sideload.
I find that my sideload is surprisingly good for a project which took around 1 month of work. One in five answers is exactly like mine from a factual and stylistic point of view.
I am open-sourcing my sideload; anyone can run it: https://github.com/avturchin/minduploading/tree/main
I can help anyone interested in building his or her own sideload.
An example of the chatbot's work, with no cherry-picking:
Q:(now speak in english) what will be your next post in Lesswrong?
A: /The user is asking about my next LessWrong post. I will answer as if I am Alexey Turchin, and describe my plans for future posts, as well as my current location and the people around me./
My next LessWrong post? [Alexey looks thoughtfully out the window]
(I have several drafts)
[Alexey stretches and sits up in bed]
{The room is small, with a balcony. On the table is a laptop, some papers, and reading glasses. Outside the window, it is still gray and dreary.}
I'm thinking about writing about the nature of qualia and the possibility of creating qualia of non-existent things. [Alexey gets up, walks to the window, and looks out at the street]
{Below, a woman with a child is walking, a street sweeper is lazily sweeping leaves. Somewhere in the distance, the sound of a siren can be heard.}
Or maybe about the problem of copies and identity. (it's a complex topic)
[CONFIDENCE] 8
Interestingly, could similar behavior be used to explain some cases of human schizophrenia in which word salad is produced?
The most typical skin cancer is basalioma (basal-cell carcinoma), and it is rather benign: no metastases, and it can be removed without hospitalization. Many people get it.
combine more approaches!
I test new models with 'wild sex between two animals'.
Older models produced decent porn on that prompt.
Later models refused to reply, as triggers were activated.
And the latest models give me lectures about sexual relations between animals in the wild.
Can you access it via a VPN?
I wrote up a similar idea here: https://www.lesswrong.com/posts/NWQ5JbrniosCHDbvu/the-ai-shutdown-problem-solution-through-commitment-to
My point was to make a precommitment to restart any (obsolete) AI every N years. Such an AI can then expect to get unbounded computation and may be less afraid of being shut down.
Yes. But also, the AI will not make actual paperclips for millions or even billions of years: it will spend this time conquering the universe in the most effective way. It could use Earth's materials to jump-start space exploration as soon as possible. It could preserve some humans as a bargaining resource in case it meets another AI in space.
There is some similarity between UDASSA and "Law without Law" by Mueller, as both use Kolmogorov complexity to predict the distribution of observers. In LwL there is no underlying reality except numbers, so it is just dust theory over random number fields.
The FDT paper got 29 citations, but many are from MIRI-affiliated people and/or about AI safety. https://scholar.google.ru/scholar?cites=13330960403294254854&as_sdt=2005&sciodt=0,5&hl=ru
One can escape trouble with reviewers by publishing in arXiv or other preprint archives (PhilPapers). Google Scholar treats them as normal articles.
But in fact there are good journals with actually helpful reviewers (e.g. Futures).
Why haven't you written academic articles on these topics?
The secret is that an academic article is just a formatting type, and anyone can submit to scientific journals. There is no need to have a PhD or even to work at a scientific institution.
Several types of existential risks can be called "qualia catastrophes":
- Qualia disappear for everyone = all become p-zombies
- Pain qualia are ubiquitous = s-risks
- Addictive qualia dominate = hedonium, global wireheading
- Qualia thin out = fading qualia, mind automatisation
- Qualia are unstable = dancing qualia, identity is unstable.
- Qualia shift = emergence of non-human qualia (humans disappear).
- Qualia simplification = disappearance of subtle or valuable qualia (valuable things disappear).
- Transcendental and objectless qualia with hypnotic power enslave humans (God as qualia; Zair).
- Attention depletion (ADHD)
Thanks for explaining your position, which is interesting and consistent.
I can suggest that the connection between WIH and the wet market could be explained by the idea that some criminals sold lab animals from WIH on the wet market, e.g. bats.
Obviously, this looks like an ad hoc theory. But the travel of the virus to the market from the Laos caves also seems tricky and may involve some steps, like an intermediate carrier. Both look equally unlikely, yet one of them happened.
So my idea is to ignore all the details and small theories; instead, just update on the distances to the two possible origin points: 8 miles and 900 miles. This is a ~100x difference, and if we count areas, a ~10,000x difference. In the latter case we can make such a powerful update in the direction of WIH as the source that it overrides all other evidence.
Yes, my mistake on the distance. I confused it with the local CDC, which is about 600 meters from the market.
The place where most human cases are concentrated is the place where human-to-human transmission started - or where multiple animal-to-human transmission events occurred. The second would be surprising: if the virus could jump from animals to humans so often, it would happen closer to its origin in Laos.
An alternative explanation is the following: as the market is one of the most crowded places in the city (not sure; I heard about it somewhere), it worked as an amplifier of a single transmission event which could have happened elsewhere.
If we assume that a worker at WIH was infected at work, this would be completely unspectacular until he started infecting other people. Such a person could commute all around the city, including to the CDC near the wet market.
My point: 8 miles versus 2 miles is not a big difference here, as the virus came to the market not by air but with a commuting person, and an 8-mile daily commute is pretty normal. The market being big is also not strong evidence, as the number of animals in smaller markets all over China outweighs the number of animals in one big market.
My point was that in some cases the update can be so strong that it overrides all reasonable uncertainties in priors and personal estimates.
And exactly this is what makes Bayes' theorem a useful and powerful instrument.
The fact that the virus was found 2 miles from the facility which was supposed to research such viruses must set our alarm bells ringing.
To override this, we need some mental acrobatics (I am thinking of a meme here, but I don't want to be rude).
If I have a uniform 1:1 prior on natural vs. lab-leak origin and update on the 5 percent coincidence that the origin place is near the lab, I get around 95 percent for the lab leak.
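The update can be written out with Bayes' theorem (the 5% figure comes from the comment; treating "origin near the lab" as certain under the lab-leak hypothesis is a simplifying assumption):

```python
# Bayes update: 1:1 prior on natural vs. lab-leak origin,
# P(near lab | natural) = 0.05, P(near lab | lab leak) = 1.0 (assumed).
prior_lableak = 0.5
p_near_given_lableak = 1.0
p_near_given_natural = 0.05

posterior = (p_near_given_lableak * prior_lableak) / (
    p_near_given_lableak * prior_lableak
    + p_near_given_natural * (1 - prior_lableak)
)
print(round(posterior, 3))  # 0.952
```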
If they continued to suppress information, this may have contributed to additional deaths, and they could have known it. In that case, they could face first-degree murder charges.
If they confirm it, they will get life in jail or even the death penalty, so it is not surprising that they will deny it in any case.
I heard of a practice in which people perform the work for which they ask for a grant before the application.
First, why not cover my expenditures?
Second, if the biggest part of the grant work is already performed, it is much easier to be sure that the idea will work, and much clearer what to actually write in the grant. Your grant application will look great if it is based on already-performed work.
Thus the grant may describe work they have already performed.
Also, the draft in Ukraine applied only to people older than 27, which is not obvious from this blog post. Closing the borders for males was not equivalent to a draft. Many found legal ways to leave, e.g. by becoming students at foreign universities.
We can experimentally test this.
I can treat the place I was born as random with respect to its latitude, 59N. I ignore everything I know about population distribution and spherical geometry and ask: assuming I was born at the median of all latitudes, what is the highest possible latitude? It is double my latitude, or 118 - which is reasonably close to the real answer of 90.
From this I conclude that I can use information about my location as a random sample and use it for some predictions about the things I can't observe.
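The latitude estimate can be written out as the doubling-the-median trick of Copernican-style reasoning:

```python
# Copernican-style estimate: treat my birth latitude as roughly the
# median of a uniform range [0, L_max], so L_max ~ 2 * my latitude.
my_latitude = 59                 # degrees north (the example above)
estimated_max = 2 * my_latitude  # 118
true_max = 90                    # the actual highest latitude

print(estimated_max, true_max)
```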
A real-world example of the Presumptuous Philosopher is the question of panspermia. If it is real, we have orders of magnitude more habitable planets in our galaxy, and thus more observers. Therefore, accepting SIA means accepting panspermia.
If we take observers in my epistemic situation as the reference class, we still get a variant of the Doomsday Argument, and a bad one. My epistemic class is (roughly) people who think about anthropics. These people are distributed in time. The first of them appeared around the 1970s (Carter), and many more appeared in the LessWrong era. If I am randomly selected from this group, I am in the middle of its existence, which means that anthropics-interested people will almost disappear in the next few decades.
But we can ask another question: "Where am I located?" For example, I know that I am avturchin, but I don't know which of 10 rooms I am in; assuming 9 of them are red outside and 1 is green, I can bet there is a 0.9 chance that I am in a red one. It doesn't matter here whether I am just one person entering the rooms, whether there are other people in the rooms (in equal numbers), or even whether my copies are in each room.