Posts

[Paper] Trajectories through semantic spaces in schizophrenia and the relationship to ripple bursts 2023-12-15T13:37:55.880Z
The Peril of the Great Leaks (written with ChatGPT) 2023-03-31T18:14:29.237Z
Correcting a misconception: consciousness does not need 90 billion neurons, at all 2023-03-31T16:02:30.282Z
Paris Schelling Meetup 2022-04-19T15:24:21.196Z
bvbvbvbvbvbvbvbvbvbvbv's Shortform 2021-06-11T22:10:13.277Z

Comments

Comment by bvbvbvbvbvbvbvbvbvbvbv on How much I'm paying for AI productivity software (and the future of AI use) · 2024-10-11T21:34:13.797Z · LW · GW

Sharing my setup too:

Personally, I'm just self-hosting a bunch of stuff:

  • litellm proxy, to connect to any LLM provider
  • langfuse for observability
  • faster-whisper server; the v3 turbo CTranslate2 version takes about 900 MB of VRAM and transcribes about 10 times faster than I speak
  • open-webui: since it's connected to litellm and ollama, I avoid provider lock-in and keep all my messages on my backend instead of having some at OpenAI, some at Anthropic, etc. It also supports artifacts and a bunch of other nice features, lets me craft my perfect prompts, and lets me jailbreak when needed.
  • piper for TTS for now, but I plan on switching to a self-hosted Fish Audio.
  • for extra privacy, I have a bunch of ollama models too. Mistral Nemo seems quite capable; otherwise a few llama3, qwen2, etc.
  • for embeddings, either bge-m3 or some self-hosted jina.ai models.

I made a bunch of scripts to pipe my microphone / speaker / clipboard / LLMs together for productivity. For example, I press shift four times, speak, then press shift again, and voilà: what I said is turned into an Anki flashcard. (A rough sketch of that pipeline is below.)
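
Here's a minimal sketch of the dictation-to-flashcard part, assuming faster-whisper, the sounddevice library, and Anki running with the AnkiConnect add-on; the hotkey handling is left out, the model name and deck are illustrative, and the LLM reformulation step is stubbed:

```python
# Hedged sketch: record from the microphone, transcribe locally with
# faster-whisper, then push the result to Anki via AnkiConnect.
import numpy as np
import requests
import sounddevice as sd
from faster_whisper import WhisperModel

SAMPLE_RATE = 16_000
model = WhisperModel("large-v3-turbo", device="cuda", compute_type="int8")

def record(seconds: float = 10.0) -> np.ndarray:
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()  # block until the recording is done
    return audio.squeeze()

def transcribe(audio: np.ndarray) -> str:
    segments, _ = model.transcribe(audio)
    return " ".join(segment.text.strip() for segment in segments)

def add_anki_card(front: str, back: str) -> None:
    # AnkiConnect listens on port 8765 by default
    requests.post("http://localhost:8765", json={
        "action": "addNote", "version": 6,
        "params": {"note": {"deckName": "Default", "modelName": "Basic",
                            "fields": {"Front": front, "Back": back}}}})

text = transcribe(record())
# in the real pipeline, an LLM call would reformulate `text` into a Q/A pair
add_anki_card(front=text, back="")
```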

As providers, I mostly rely on openrouter.ai, which lets me swap between providers without issue. These last few months I've been using Sonnet 3.5, but I switch as soon as there's a new frontier model.

For interacting with codebases I use aider.

So in the end, all my cost comes from API calls and none from subscriptions.

Comment by bvbvbvbvbvbvbvbvbvbvbv on What Depression Is Like · 2024-09-05T15:16:35.122Z · LW · GW

Very interesting of you to think of it that way. It turns out it's very much in line with recent results from computational psychiatry. Basically, in depression we can study and distinguish how much of the lack of activity is due to a "lack of resources to act" vs. an "increased cost of action". Both look clinically about the same, but the underlying biochemical pathways differ, so it's (IMHO) a promising approach to shortening the time it takes for a doctor to find the appropriate treatment for a given patient.

If that's something you already knew, I'm sorry; I'm short on time and wanted to get this out :)

Comment by bvbvbvbvbvbvbvbvbvbvbv on Why you should be using a retinoid · 2024-08-19T08:00:27.916Z · LW · GW

Just a detail: weren't retinoids discovered while looking for cancer treatments? I thought that was the origin story behind isotretinoin.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Notice when you stop reading right before you understand · 2024-01-31T18:47:50.214Z · LW · GW

My personal solution to this is to use Anki for just about everything and anything.

  1. It helps me not lose my momentum: if I see my cards about the beginning of an article about transformers, it increases my chances of finishing the article.
  2. It guarantees that I never get the dreaded "my knowledge is limited to a couple of related keywords ('self-attention', 'encoder'/'decoder') with no real gears-level understanding of anything" feeling, making it all the easier to get back to reading.

In fact, I hate feeling number 2 so much that it was a huge motivation to really master Anki (a 1300-day streak or so, with no sign of regret whatsoever).

Comment by bvbvbvbvbvbvbvbvbvbvbv on Clip keys together with tiny carabiners · 2024-01-31T08:22:08.709Z · LW · GW

I think very cheap carabiners are extremely fragile, especially under repeated use. I've seen a failure mode where the gate just opens the wrong way by swinging around the spine. Keep that in mind when choosing which carabiner to use.

Might it be better to keep using ring keyholders but have one decently strong carabiner to hold the rings together, instead of what you did (tiny carabiners that each hold onto a ring)?

Comment by bvbvbvbvbvbvbvbvbvbvbv on Terminology: <something>-ware for ML? · 2024-01-10T07:33:43.130Z · LW · GW

I don't really like any of those ideas. I do think it's really interesting that "aware" is so closely related, though. I think the best bet would be something based on "software": deepsoftware, nextsoftware, nextgenerationsoftware, enhancedsoftware, etc.

Comment by bvbvbvbvbvbvbvbvbvbvbv on The Sequences on YouTube · 2024-01-08T13:02:43.371Z · LW · GW

For anyone trying to keep up with AI for filmmaking, I recommend the YouTube channel Curious Refuge: https://www.youtube.com/channel/UClnFtyUEaxQOCd1s5NKYGFA

Comment by bvbvbvbvbvbvbvbvbvbvbv on Screen-supported Portable Monitor · 2024-01-05T16:15:19.709Z · LW · GW

Also, velcro comes in many strengths and sizes. I find heavy-duty velcro to be frequently underused in DIY projects like this.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Screen-supported Portable Monitor · 2024-01-05T08:20:30.716Z · LW · GW

DIYPerks on YouTube did just that: https://www.youtube.com/watch?v=J2aY6cvk-WI

Comment by bvbvbvbvbvbvbvbvbvbvbv on Techniques to fix incorrect memorization? · 2024-01-03T10:18:55.803Z · LW · GW

Medical student here; I get that a lot. It's called "interference", at least in the SuperMemo sphere.

My personal solution to this is to add more cards, for example "Is friendA born before or after friendB?" and "What are the birthdays of friendA and friendB?".

The latter question is actually a crucial example: it makes you practice recalling the distinction between the interfering items instead of the raw recall of each datum.

Also, as others suggested here, mnemonics help a ton: for example, is there an intuitive reason you can link to friendB having an odd birthday and friendA having an even one? If so, add a third Anki card so you never forget the mnemonic.

You also might be interested in SuperMemo's "20 rules of formulating knowledge".

One insight I've gained over the years with Anki is that, because of the algorithm, having more cards is not penalizing.

Also, I created AnnA (Anki Neuronal Appendix) to help spread out reviews using semantic similarity, if that helps. (A rough sketch of the idea is below.)
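
For the curious, the core idea is just embedding the card texts and comparing them; here's a rough sketch, assuming sentence-transformers (AnnA itself does more, e.g. actually rescheduling the reviews):

```python
# Hedged sketch: embed card texts and compute pairwise semantic similarity.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

cards = [
    "When is friendA born?",
    "When is friendB born?",
    "What is the capital of France?",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(cards)

# the two birthday cards score high against each other: spread them apart
# in the review queue so they don't interfere
print(cosine_similarity(embeddings))
```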

Comment by bvbvbvbvbvbvbvbvbvbvbv on Legalize butanol? · 2023-12-21T17:27:12.885Z · LW · GW

Based on?

The Wikipedia page explicitly states that they don't have the same binding profile; also, Occam's razor, as it seems unlikely that two different drugs with two different binding profiles would perform similarly on ADHD.

"stronger per weight impact on dopamine"? That's not how drugs or biology work. Every neurotransmitter and hormone has several different receptors that different drugs affect in different ways.

I'm aware. I know my sentence did not sound professional, but that was on purpose. I think it's true nonetheless: using a more specific sentence would involve naming specific receptors, and I don't want to check each of them (MPH, ETH, modafinil). Being specific here doesn't really matter, because I'm expressing surprise at your apparent certainty. Your certainty indicates that you researched those specifics, hence what I was asking.

Regarding the study you linked, I don't find anything in it relevant to ETH's effectiveness compared to MPH on ADHD.

To clarify, because I think I sound rash when really I'm not:

  • I'm surprised you seem to have low uncertainty about ETH being more effective than MPH on ADHD, even though ETH has never been studied on ADHD, never been compared to MPH, and possibly never even been studied on humans (?).
  • It is certain that ETH and MPH don't have the same binding profile. I would be extremely surprised if they didn't differ in effectiveness at treating ADHD if compared in a head-to-head RCT. But I don't really have an opinion on which would be better. You seem to, so I'm probably lacking information about some neurotransmitters and ADHD.

PS: actually, I do think ETH would perform worse than MPH, but probably not by a large margin. ETH could also be more pleasurable but less tolerable. All of this with very low certainty.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Legalize butanol? · 2023-12-21T14:01:59.248Z · LW · GW

Sure!

To me the only relevant passage seems to be this one:

Ethylphenidate is more selective to the dopamine transporter (DAT) than methylphenidate, having approximately the same efficacy as the parent compound,[6] but has significantly less activity on the norepinephrine transporter (NET).[8] Its dopaminergic pharmacodynamic profile is nearly identical to methylphenidate, and is primarily responsible for its euphoric and reinforcing effects.

You said:

methylphenidate is obligatory for some kids while ethylphenidate is illegal, and they're basically the same but ethylphenidate is probably slightly better.

  1. I disagree with "they're basically the same". I mean, of course they are more similar to each other than to acetaminophen, but I do think they would probably differ in terms of effect on ADHD or hedonic perception if compared head to head.

  2. I don't see how ethylphenidate would be better. Is it "better in terms of how pleasurable it is", as in recreational use? Because then there's no point in comparing it to methylphenidate being given to kids. And if it's in terms of how well it treats ADHD, then I don't know where that's coming from. Is it because it has a stronger per-weight impact on dopamine than on norepinephrine? IIRC modafinil has way more of an impact on dopamine than norepinephrine but helps poorly with ADHD.

I really must be missing something. Thank you for helping me out!

Also:

All available data on ethylphenidate's pharmacodynamics are drawn from studies conducted on rodents.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Legalize butanol? · 2023-12-21T12:59:28.596Z · LW · GW

For the curious: famous researcher David Nutt is working on alcohol replacements too. He's part of Alcarelle, now called GABA Labs, and IIRC they're betting on benzodiazepine derivatives.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Legalize butanol? · 2023-12-21T12:56:39.645Z · LW · GW

for example, methylphenidate is obligatory for some kids while ethylphenidate is illegal, and they're basically the same but ethylphenidate is probably slightly better.

This sounds surprising to me. Can you elaborate on the source and the thought process that led you to this?

Comment by bvbvbvbvbvbvbvbvbvbvbv on What do you do to remember and reference the LessWrong posts that were most personally significant to you, in terms of intellectual development or general usefulness? · 2023-12-13T08:55:32.884Z · LW · GW

I use Anki flashcards for everything.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T08:54:24.632Z · LW · GW

There's a great YouTuber called The Thought Emporium who did genetic engineering on himself. I highly recommend checking them out:

https://www.youtube.com/watch?v=J3FcbFqSoQY

And the 2-year follow-up: https://www.youtube.com/watch?v=aoczYXJeMY4

The tl;dr: he created a virus, then ate it, so that his digestive system would express more of the gene that makes lactase, as he was very lactose intolerant. Two years later the effects are starting to wear off as cells get replaced, but it seems to have had a very high ROI.

Comment by bvbvbvbvbvbvbvbvbvbvbv on How did you integrate voice-to-text AI into your workflow? · 2023-11-22T11:37:23.316Z · LW · GW

I bought a cheap watch (a T-Watch 2020) that has wifi and a microphone. The goal is to have an easily accessible langchain agent connected to my LocalAI instance.

I'm a bit stuck for now because of a driver written in C (while I mostly know Python), but I'm getting there.

Comment by bvbvbvbvbvbvbvbvbvbvbv on How did you integrate voice-to-text AI into your workflow? · 2023-11-22T11:37:10.858Z · LW · GW

You meant speech-to-text instead of text-to-speech. They just added the latter recently, but we don't know the model behind it, AFAIK.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Does bulemia work? · 2023-11-07T07:48:27.806Z · LW · GW

Nah. Although you do see patients describing binges that you then realize were just one Big Mac, indicating something closer to anorexia.

The suicide rate is about 2% per 10 years, which is insanely high. Also, it is not uncommon for people with bulimia to have (sometimes severe) deficiencies regardless of their weight.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Does bulemia work? · 2023-11-07T07:30:01.617Z · LW · GW

To add some perspective: I suspect some people don't really understand how large the caloric intake can be in bulimia. I routinely see patients eating upwards of 50,000 calories per day (I've even seen 100,000 a few times) when crises occur. Think eating several large peanut butter jars in a row, etc.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Mission Impossible: Dead Reckoning Part 1 AI Takeaways · 2023-11-02T09:05:41.573Z · LW · GW

Link: https://en.m.wikipedia.org/wiki/The_Day_After

Comment by bvbvbvbvbvbvbvbvbvbvbv on How does GPT-3 spend its 175B parameters? · 2023-10-19T14:29:23.681Z · LW · GW
  1. The only difference between encoder and decoder transformers is the attention mask. In an encoder, tokens can attend to future tokens (acausal attention), while in a decoder, tokens cannot attend to future tokens (causal attention). The term "decoder" is used because decoders can be used to generate text, while encoders cannot (since you can only run an encoder if you know the full input already).

This was very helpful to me. Thank you.
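
For anyone else reading along, here's a minimal sketch of that one difference, assuming PyTorch (shapes and names are illustrative):

```python
# Hedged sketch: the encoder/decoder difference is just the attention mask.
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)  # raw query-key attention scores

# Encoder (acausal): every token attends to every other token.
encoder_attn = torch.softmax(scores, dim=-1)

# Decoder (causal): token i may only attend to tokens j <= i.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
decoder_attn = torch.softmax(
    scores.masked_fill(~causal_mask, float("-inf")), dim=-1)
```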

Comment by bvbvbvbvbvbvbvbvbvbvbv on Steering GPT-2-XL by adding an activation vector · 2023-09-03T16:17:15.719Z · LW · GW

Hi,

I had a question the other day and figured I'd post it here. Do we have any idea what would happen if we used the steering vector of the input itself?

For example: take sentence A and pass it through the LLM, storing its embedding; then take sentence A again and pass it through the LLM while adding the stored embedding.

As is, this would simply double the norm of the hidden vector, but I'm wondering what would happen if we instead played with positions, say taking the embedding after the 5th token of sentence A and adding it at the 3rd token.

Similarly, would anything interesting happen with subtraction? With adding a random orthogonal vector?
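
To make the question concrete, here's a rough sketch of what I mean, assuming TransformerLens (the model, layer, and positions are arbitrary):

```python
# Hedged sketch: add sentence A's own cached activation back into a later
# run of sentence A, at a different position.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-xl")
hook_name = "blocks.5.hook_resid_pre"  # residual stream at layer 5

tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")
_, cache = model.run_with_cache(tokens)
stored = cache[hook_name]  # shape: (batch, seq, d_model)

def add_self_vector(resid, hook):
    # take the activation after the 5th token and add it at the 3rd token;
    # subtraction or a random orthogonal vector would slot in here too
    resid[:, 3, :] += stored[:, 5, :]
    return resid

logits = model.run_with_hooks(tokens, fwd_hooks=[(hook_name, add_self_vector)])
```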

Thanks

Comment by bvbvbvbvbvbvbvbvbvbvbv on Meetup Tip: Board Games · 2023-08-19T07:53:55.759Z · LW · GW

Personally, I come to (and organize) meetups to make my brain sweat, and I actively avoid activities that leave me unchanged (I won't change much during a game, while I grow a lot after each confrontation or discussion). But to each their own, of course!

Comment by bvbvbvbvbvbvbvbvbvbvbv on Tips for reducing thinking branching factor · 2023-08-09T14:32:40.402Z · LW · GW

FWIW, I tend to see a good part of ADHD medication's effect as changing the trade-off between exploration and exploitation: ADHD being an excess of exploration, and the meds nudging towards an excess of exploitation. If you struggle with a perceived excess of exploration, you might ask yourself whether those medications would help you, or whether you might fit the diagnostic criteria.

Related: taking too much of those psychostimulants usually gives an extreme type of exploitation often called "tunnel vision", which can be detrimental, as it feels like being a robot doing something on repeat.

Also: branched thinking is related not only to ADHD but also to unusually high IQ, so let me just stress that YMMV.

Also (2): another interesting thread about ADHD from the other day: https://www.lesswrong.com/posts/zi6Qq5jeWat8xFodq/what-works-for-adhd-and-or-related-things

Comment by bvbvbvbvbvbvbvbvbvbvbv on How to Search Multiple Websites Quickly · 2023-06-28T15:52:57.256Z · LW · GW

That sounds like something easy to do with langchain, btw.
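
Something like this rough sketch, assuming langchain's DuckDuckGo tool (the sites and query are placeholders, and the tool's API surface changes between langchain versions):

```python
# Hedged sketch: run the same query restricted to several sites in a loop.
from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
sites = ["lesswrong.com", "greaterwrong.com"]  # placeholder site list
query = "spaced repetition"
for site in sites:
    print(site, "->", search.run(f"site:{site} {query}"))
```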

Comment by bvbvbvbvbvbvbvbvbvbvbv on Request: Put Carl Shulman's recent podcast into an organized written format · 2023-06-28T14:09:58.649Z · LW · GW

Edit: I can easily make the prompt more or less compressed, just ask. The present example is "pretty compressed", but I can make a more verbose one.

Not really what you're asking, but:

I'm coincidentally working on a DIY summarizer on the side to manage my inputs. I summarized a bit of the beginning of part 1. If you think it has any value, I can run it on the whole thing:

Note that '- ---' indicates a switch to a new chunk of text by the LLM.

This is formatted in Logseq / Obsidian markdown.


- Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment - https://youtube.com/watch?v=_kRg-ZP1vQc
  summarization_date:: 28/06/2023
  token_cost:: 12057
  dollar_cost:: 0.01911
  summary_reading_length:: 4.505
  doc_reading_length:: 120.5025
  author:: Dwarkesh Patel
    - Carl Shulman: highly regarded intellectual known for ideas on intelligence explosion and its impacts
      - Advisor to Open Philanthropy project
      - Research associate at Future of Humanity Institute at Oxford
    - Feedback loops and dynamics when approaching human-level intelligence involve:
      - Development of new computer chips, software, and training runs
    - Concept of input-output curves important in understanding increasing difficulty of improving AI
    - Productivity of computing has increased significantly over the years, but investment and labor required for advancements have also increased
    - In a world where AI is doing the work, doubling computing performance translates to a doubling or better of effective labor supply
    - Doubling labor force can result in several doublings of compute, accelerating AI development
    - Bloom paper mentioned:
      - 35% increase in transistor density
      - 7% increase per year in number of researchers required to sustain that pace
    - ---
    - The bloom paper mentioned:
      - 35% increase in transistor density
      - 7% increase per year in the number of researchers required to sustain that pace
    - There is a question of whether AI can be seen as a population of researchers that grows with compute itself.
    - Compute is a good proxy for the number of AI researchers because:
      - If you have an AI worker that can substitute for a human, having twice as many computers allows for running two separate instances and getting more gains.
    - Improvements in hardware and software technology contribute to the progress of AI.
    - The work involved in designing new hardware and software is done by people, but computer time is not the primary cost.
    - The number of people working on AI research is in the low tens of thousands, with companies like Nvidia, TSMC, and DeepMind having significant numbers of employees.
    - The capabilities of AI are doubling on a shorter time scale than the number of people required to develop them.
    - ---
    - The capabilities of AI are doubling faster than the number of people needed to develop them.
      - Hardware efficiency has historically doubled 4-5 times per doubling of human inputs, but this rate has slowed down as Moore's Law nears its end.
      - On the software side, the doubling time for workers driving software advances is several years, while the doubling time for effective compute from algorithmic progress is faster.
    - Epoch, a group that collects datasets relevant to forecasting AI progress, found the following doubling times:
      - Hardware efficiency doubles in about 2 years.
      - Budget growth doubles in about 6 months.
      - Algorithmic progress doubles in less than 1 year.
    - The growth of effective compute for training big AIs is drastic, with estimates that GPT-4 cost around 50 million dollars to train.
      - Effective compute can increase through greater investment, better models, or cheaper training chips.
    - Software progress is measured by the reduction in compute needed to achieve the same benchmark as before.
    - The feedback loop between AI and compute can help with hardware design and chip improvements.
    - Automating chip design work could lead to faster improvements, but it is less important for the intelligence explosion.
    - ---
    - Improving chip design through AI automation is less important for the intelligence explosion because it only applies to future chips.
      - Faster improvements can be achieved through AI automation.
    - The most disruptive and important aspect of AI automation is on the software side.
      - Improvements can be immediately applied to existing GPUs.
    - The question is when AI will contribute significantly to AI progress and software development.
      - This contribution could be equivalent to having additional researchers.
    - The magnitude of AI's contribution is crucial.
      - It should boost effective productivity by 50-100% or more.
    - AI can automate certain tasks in the AI research process.
      - This allows for more frequent and cost-effective completion of these tasks.
    - The goal is to have AI that can significantly enhance performance.
      - This is even with its weaknesses, rather than achieving human-level AI with no weaknesses.
    - Existing fabs can produce tens of millions of advanced GPUs per year.
      - If they run AI software as efficient as humans, with extended work hours and education, it can greatly surpass human capabilities.
    - ---
    - The education level of AI models surpasses that of humans and focuses on specific tasks.
      - Tens of millions of GPUs, each equivalent to the work of the best humans, contribute to significant discoveries and technological advancements.
      - Human-level AI is currently experiencing an intelligence explosion, starting from a weaker state.
      - The feedback loop for AI researchers begins when they surpass small productivity increases and reach a level equivalent to or close to human researchers.
      - AI systems can compensate for their weaknesses by deploying multiple less intelligent AIs to match the capabilities of a human worker.
      - AI can be applied to tasks such as voting algorithms, deep search, and designing synthetic training data, which would be impractical for humans.
      - As AI becomes more advanced, it can generate its own data and identify valuable skills to practice.
      - For instance, AlphaZero generated its own data through self-play and followed a curriculum to always compete against an opponent of equal skill.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Can you prevent negative long-term effects of bad trips with sleep deprivation? · 2023-06-27T13:54:23.020Z · LW · GW

That would most certainly cause a bad trip at night, as taking uppers to stay awake for long also increases anxiety, which will not be helped by the residual hallucinations from the earlier hallucinogen.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Can you prevent negative long-term effects of bad trips with sleep deprivation? · 2023-06-24T17:58:13.650Z · LW · GW

In my experience/opinion, a good deal of bad trips are actually caused by being sleep deprived.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Could induced and stabilized hypomania be a desirable mental state? · 2023-06-14T11:34:13.726Z · LW · GW

I can't check right now, but IIRC there is a marked neurotoxicity caused by too much cholinergic activity during mania, leading to quicker-than-average dementia onset, proportional to time spent in mania. This might be controversial among specialists. It might not apply to hypomania, but it's a useful prior nonetheless. I recommend the website Elicit to quickly reduce uncertainty on this question.

Edit: this also relates to whether putting everyone on at least a low Adderall dose might be a good thing.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Data and "tokens" a 30 year old human "trains" on · 2023-05-27T12:39:56.359Z · LW · GW

Edit: rereading your comments above, I see I should have made clear that I was thinking more about learned architectures, in which case we apparently agree: I meant what you said in https://www.lesswrong.com/posts/ftEvHLAXia8Cm9W5a/data-and-tokens-a-30-year-old-human-trains-on?commentId=4QtpAo3XXsbeWt4NC

Thank you for taking the time.

I agree that terminology is probably the culprit here. It's entirely my fault: I was using the word "pretraining" loosely, and meant something more like: the hyperparameters (number of layers, inputs, outputs, activation fn, loss) are "learned" by evolution, leaving to us poor creatures only the task of pruning neurons and adjusting the synaptic weights.

The reason I was thinking of it this way is that I've been reading about NEAT recently, an algorithm that uses a genetic algorithm to learn an architecture as well as to train the selected architecture. A bit like evolution? (A toy sketch below.)
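
This is purely illustrative and not actual NEAT (no speciation or topology crossover), just "outer loop evolves the architecture, inner loop stands in for lifetime learning":

```python
# Hedged sketch: a genetic outer loop over architecture hyperparameters.
import random

def random_arch():
    return {"layers": random.randint(1, 5), "width": random.choice([16, 32, 64])}

def lifetime_fitness(arch):
    # stand-in for "train this architecture for a lifetime, measure performance"
    return -abs(arch["layers"] - 3) - abs(arch["width"] - 32) / 32

def mutate(arch):
    child = dict(arch)
    key = random.choice(["layers", "width"])
    child[key] = random_arch()[key]  # mutate one hyperparameter
    return child

population = [random_arch() for _ in range(20)]
for generation in range(50):
    population.sort(key=lifetime_fitness, reverse=True)
    survivors = population[:5]  # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=lifetime_fitness))
```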

To rephrase my initial point: evolution does its part of the heavy lifting by finding the right brain for living on Earth. This tremendously shrinks the space of computations a human has to explore in their lifetime to end up with a brain fitted to the environment. This "shrinking of the space" is kinda like a strong bias towards certain computations. And model pretraining means having the weights of the network already initialized at a value that "already works", kinda like a strong bias too. Hence the link in my mind.

But yeah, evolution does not give us synaptic weights that work, so "pretraining" is not the right word. Unless you're thinking about learned architectures, in which case my point can somewhat work, I think.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Data and "tokens" a 30 year old human "trains" on · 2023-05-26T17:12:16.970Z · LW · GW

If all humans have about as many neurons in the gyrus that is hardwired to receive input from the eyes, it seems safe to assume that the vast majority of humans will end up with this gyrus extracting the same features.

Hence my view is that evolution, by imposing a few hardwired connections and gyral geometries, imparts an enormous bias in the space of possible networks, which is similar to what pretraining does.

In essence, evolution gives us a foundation model that we fine-tune with our own experiences.

What do you think? Does that make sense?

Comment by bvbvbvbvbvbvbvbvbvbvbv on Data and "tokens" a 30 year old human "trains" on · 2023-05-26T10:22:01.144Z · LW · GW

I think gyri are mostly hard-coded by evolution, and given how strongly they restrict the space of computations a cortical area can learn, one could consider the cortex to be heavily pretrained by evolution.

Studying the correlation between gyral geometry and psychiatric conditions is an ongoing hot topic.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Advice for newly busy people · 2023-05-13T09:23:41.303Z · LW · GW

b. Saying "no" to a certain activity means saying "yes" to myself and our relationship. When you propose something and I say "no" to it, I'm simultaneously saying "yes" to our relationship. Because when I say "yes" while I'm actually a "no", I slowly accumulate resentment that poisons the connection between us without you being able to do anything about it. And, you will inevitably sense when I said "yes" to something but my heart is not in it. Having been on both sides of this, I know how awkward that feels. So, the moment when I start to really feel comfortable around someone is when I heard the first "no" from them, and when saying "no" to them has become something casual for me. "No" is actually a very, very precious gift. Let's treat it as such, and encourage one another to give it more often.

Here's a quote from Steve Jobs about that:

“People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things.” ― Steve Jobs

source: https://www.goodreads.com/quotes/629613-people-think-focus-means-saying-yes-to-the-thing-you-ve

Comment by bvbvbvbvbvbvbvbvbvbvbv on Geoff Hinton Quits Google · 2023-05-02T07:41:51.166Z · LW · GW

FYI, radiology is actually not mostly looking at pictures; a lot of it is image-guided procedures (for example, embolisation), which are significantly harder to automate.

Same for family doctors: it's not just following guidelines and renewing scripts; a good part is physical examination.

I agree that AI can do a lot of what happens in medicine though.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Summaries of top forum posts (24th - 30th April 2023) · 2023-05-02T06:56:09.623Z · LW · GW

Thanks! Regarding the survey: some people might be having issues like me due to lacking a Google account or Google device. If you could consider using other forms, like the ones supplied by Nextcloud (Framaforms, etc.), that might help!

Sorry for being that guy, and thanks for the summaries :)

Comment by bvbvbvbvbvbvbvbvbvbvbv on Navigating AI Risks (NAIR) #1: Slowing Down AI · 2023-04-15T11:56:36.179Z · LW · GW

Question: what do you think Chinese officials' opinion is on LLMs being easily accessible to Chinese citizens? As long as alignment is unsolved, I can imagine China being extremely leery of how citizens could be exposed to ideas that go against official propaganda (human rights, genocide, etc.).

But China can't accept being left out of this race either, is my guess.

So in the end, China is incentivized to solve alignment, or at least to slow down its progress.

Have you thought about any of this? I'm extremely curious about anyone's opinion on the matter.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Upcoming Changes in Large Language Models · 2023-04-08T17:14:05.754Z · LW · GW

I strongly disagree. I think most people here think that AGI will be created eventually, and we have to make sure it does not wipe us all out. Not everything is an infohazard, and exchanging ideas is important to coordinate on making it safe.

What do you think?

Comment by bvbvbvbvbvbvbvbvbvbvbv on Upcoming Changes in Large Language Models · 2023-04-08T07:16:37.455Z · LW · GW

Medical student here. I'm actually convinced we could build an AGI right now by using multiple LLMs with langchain agents, memory, and a few tools, even making it multimodal and embodied.

Just have them impersonate each basal ganglia nucleus and a few other structures.

This would allow throttling its thinking speed, making it alignable because you can tweak it internally.

Lots of other benefits, but I'm on mobile. Anyone interested, get in touch!

Comment by bvbvbvbvbvbvbvbvbvbvbv on Auto-GPT: Open-sourced disaster? · 2023-04-06T07:11:56.106Z · LW · GW

Pinging @stevenbyrnes: do you agree with me that instead of mapping those proto-AGIs to a queue of instructions, it would be best to have the AGI be made from a bunch of brain structures with corresponding prompts? For example, an "amygdala" would be in charge of returning an int between 0 and 100 indicating the fear level, a "hippocampus" would be in charge of storing and retrieving memories, etc. I guess the thalamus would be consciousness, and the cortex would process some abstract queries.

We could also use active inference and Bayesian updating to model current theories of consciousness, and even use it to model schizophrenia by changing the number of past messages some structures can access (i.e., modeling long-range connection issues), etc.

To me, that sounds way easier to inspect and align than pure black boxes, as you can throttle the speed and manually change values, e.g. make sure the AGI does not feel threatened.

Is anyone aware of similar work? I created a diagram of the brain structures and their roles in a few minutes with ChatGPT, and it seems super easy. (A rough sketch of what I mean is below.)
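
To illustrate, a minimal sketch, assuming an OpenAI-compatible chat API; the structure prompts, model name, and situation are all made up:

```python
# Hedged sketch: one LLM "agent" per brain structure, each with its own prompt.
import openai

client = openai.OpenAI()  # any OpenAI-compatible backend works

STRUCTURES = {
    "amygdala": ("You are the amygdala. Given the situation, reply with only "
                 "an integer between 0 and 100 indicating the fear level."),
    "hippocampus": ("You are the hippocampus. Reply with a short note storing "
                    "or retrieving memories relevant to the situation."),
}

def ask(structure: str, situation: str) -> str:
    # one call per "structure"; rate-limiting this loop throttles the whole
    # system's thinking speed, and each exchange stays inspectable
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": STRUCTURES[structure]},
                  {"role": "user", "content": situation}],
    )
    return response.choices[0].message.content

situation = "A stranger is running toward you, shouting."
fear_level = ask("amygdala", situation)     # e.g. "72"
memory_note = ask("hippocampus", situation)
```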

Comment by bvbvbvbvbvbvbvbvbvbvbv on Correcting a misconception: consciousness does not need 90 billion neurons, at all · 2023-04-02T13:18:03.681Z · LW · GW

I don't understand what you mean by "inaccessible".

Comment by bvbvbvbvbvbvbvbvbvbvbv on Correcting a misconception: consciousness does not need 90 billion neurons, at all · 2023-04-01T08:24:35.386Z · LW · GW

I don't like how this sounds, but: I think you are missing a lot of biological facts about consciousness, and we're not as clueless as you seem to think. I definitely recommend reading the book "Consciousness and the Brain" by Stanislas Dehaene, which is basically a collection of facts on the topic.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Correcting a misconception: consciousness does not need 90 billion neurons, at all · 2023-03-31T22:56:03.706Z · LW · GW

My other comment might be relevant here also.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Correcting a misconception: consciousness does not need 90 billion neurons, at all · 2023-03-31T22:54:55.439Z · LW · GW

Don't you agree that certain brain lesions definitely make you not conscious? I think identifying which regions are indispensable is important.

If I had to guess, humans can be conscious without a cerebellum but not without the basal ganglia, FWIW.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Correcting a misconception: consciousness does not need 90 billion neurons, at all · 2023-03-31T22:51:31.987Z · LW · GW

Let's put it like this: if you had hours of interaction with this individual, you'd have no reason to doubt they're conscious. I indeed don't know if they have the exact same kind of consciousness as someone with a cerebellum, but that's also true for everyone else: I don't know if you and I have the same conscious experience either.

Comment by bvbvbvbvbvbvbvbvbvbvbv on The Peril of the Great Leaks (written with ChatGPT) · 2023-03-31T18:14:58.301Z · LW · GW

Here's the full prompt:

>>> user

Write a short story in 2 paragraphs title "The Peril of the Great Leaks" describing how in less than 3 years hacking capabilities will be so advanced thanks to large language models that many many databases will get leaked and peple will unwillingly have their information easily accessible to anyone. Things like facebook, github, porn accounts, etc. End by talking about how those dumps will be all the more easy to parse that LLM will be easy to use.

This will be posted on the LessWrong.com forum, make it engaging and fun to read while being to the point. The main objectif is to make it thought provoking. The first sentence has to be very catchy.

<<< assistant

The Peril of the Great Leaks

In less than three years, the world will be a very different place. Thanks to the rise of large language models, hacking capabilities will be so advanced that databases will be leaking left and right. Facebook, Github, porn accounts - you name it, it'll be out there for anyone to see. 

The scariest part? These dumps will be all the more easy to parse thanks to the ease of use of LLM. People will be unwittingly giving away their most personal information, and there will be nothing they can do to stop it. The only question is, will you be one of the unlucky ones?

>>> user

Very good. Make it longer, I allow 3 paragraphs.

<<< assistant

The Peril of the Great Leaks

In less than three years, the world will be a very different place. Thanks to the rise of large language models, hacking capabilities will be so advanced that databases will be leaking left and right. Facebook, Github, porn accounts - you name it, it'll be out there for anyone to see. 

The scariest part? These dumps will be all the more easy to parse thanks to the ease of use of LLM. People will be unwittingly giving away their most personal information, and there will be nothing they can do to stop it. The only question is, will you be one of the unlucky ones?

The rise of LLMs is a double-edged sword. On one hand, it has the potential to revolutionize the way we communicate and interact with technology. On the other hand, it also has the potential to be used for nefarious purposes. Hackers will be able to use LLMs to create more sophisticated phishing scams, and to craft more convincing social engineering attacks. 

The future is uncertain, but one thing is clear: we need to start taking cybersecurity seriously. We can't rely on outdated security measures to protect us from the threats of tomorrow. We need to invest in new technologies and new approaches to security, and we need to do it now. The future is coming, whether we're ready for it or not.

>>> user

No I meant rewrite the whole thing into 3 longer paragraphs, try again.

<<< assistant

The Peril of the Great Leaks

In less than three years, the world will be a very different place. Thanks to the rise of large language models (LLMs), hacking capabilities will be so advanced that databases will be leaking left and right. The sheer amount of data that will be available to hackers will be staggering, and the consequences could be catastrophic. 

Imagine a world where your most personal information is readily available to anyone with an internet connection. Your social security number, your credit card information, your medical records - all of it out in the open. This is the future we're facing, and it's not a pretty one. 

The scariest part is that these dumps will be all the more easy to parse thanks to the ease of use of LLMs. Hackers will be able to sift through massive amounts of data in a matter of seconds, making it easier than ever to find the information they're looking for. And with the rise of the dark web, it will be easier than ever for them to sell that information to the highest bidder. 

But it's not just individuals who will be at risk. Companies and governments will also be vulnerable to these attacks. Imagine a world where the secrets of major corporations and government agencies are readily available to anyone who wants them. The consequences could be disastrous, both for the organizations themselves and for the people they serve. 

The rise of LLMs is a double-edged sword. On one hand, it has the potential to revolutionize the way we communicate and interact with technology. On the other hand, it also has the potential to be used for nefarious purposes. Hackers will be able to use LLMs to create more sophisticated phishing scams, and to craft more convincing social engineering attacks. 

The future is uncertain, but one thing is clear: we need to start taking cybersecurity seriously. We can't rely on outdated security measures to protect us from the threats of tomorrow. We need to invest in new technologies and new approaches to security, and we need to do it now. The future is coming, whether we're ready for it or not. And if we're not careful, it could be a very dangerous place indeed.

>>> user

This is perfect. Thank you.

<<< assistant

You're welcome! I'm glad I could help.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Correcting a misconception: consciousness does not need 90 billion neurons, at all · 2023-03-31T18:10:43.371Z · LW · GW

Yes we do, it's in the sources.

Comment by bvbvbvbvbvbvbvbvbvbvbv on How do you manage your inputs? · 2023-03-29T12:09:06.763Z · LW · GW

I use RSS a lot: I add articles to read in Wallabag, annotate them there, then create Anki cards from the annotations.

Comment by bvbvbvbvbvbvbvbvbvbvbv on Why no major LLMs with memory? · 2023-03-29T08:01:54.383Z · LW · GW

On mobile, but FYI: langchain implements some kinds of memory.

Also, this other post might interest you. It's about asking GPT to decide when to call a memory module to store data: https://www.lesswrong.com/posts/bfsDSY3aakhDzS9DZ/instantiating-an-agent-with-gpt-4-and-text-davinci-003

Comment by bvbvbvbvbvbvbvbvbvbvbv on Why no major LLMs with memory? · 2023-03-29T08:00:02.179Z · LW · GW

Two links to learn more about RWKV:

https://johanwind.github.io/2023/03/23/rwkv_overview.html

https://johanwind.github.io/2023/03/23/rwkv_details.html