Posts

Comments

Comment by sdr on How popular is ChatGPT? Part 1: more popular than Taylor Swift · 2023-02-25T00:53:34.196Z · LW · GW
Comment by sdr on Open & Welcome Thread - Oct 2022 · 2022-10-05T09:09:55.839Z · LW · GW

Ben Thompson ( https://stratechery.com/ ), an American industry analyst currently living in Taiwan, has a bunch of analyses on this on his blog. In a nutshell, the US has a critical infrastructure dependency on Taiwan in high-performance chip manufacturing; specifically, TSMC has a 90% share of 7nm and 5nm chips. This is critical infrastructure for which the US does not have good (or even close-enough) substitutes. Based on both these economic incentives and Biden's own statements, the US is extremely likely to respond to Chinese aggression against Taiwan with military force.

Comment by sdr on What's up with the bad Meta projects? · 2022-08-19T07:11:00.363Z · LW · GW

Cross-posting some thoughts:

Facebook's metaverse strategy is focusing heavily on capability / platform, and not content / a single moment of awe. To them it's possibly okay if VRChat wins at the expense of Horizon Worlds, _just as long as the majority of people access it via Quest_ - which they do: https://metrics.vrchat.community/?orgId=1&refresh=30s <- Quest users now outnumber PC ones roughly 2:1.

Consider the Apple & App Store fiasco, whereby Apple can basically, in one OS update, kill retargeting by introducing privacy popups into apps at the OS level, kneecapping the entire ad industry (the single biggest reason for FB's first-ever quarterly revenue decline); and unilaterally decide that everyone who takes payments for digital services and has an app on iOS (read: the entire B2C SaaS market) now has to pay 30% to them. _And make it a reality_ on pain of removal from the App Store. _And it works_.

Basically, FB wants to position itself for the same capability / platform play: if they control the device, they can dictate terms for everyone building on top of it.

Comment by sdr on Kelly Bet on Everything · 2020-07-12T03:41:44.930Z · LW · GW

Oh darn, you're right. Thank you!

Comment by sdr on Kelly Bet on Everything · 2020-07-11T21:54:31.185Z · LW · GW

I'm running simulations to get a feel for what "betting Kelly" would mean in specific contexts. See code here: https://jsfiddle.net/se56Luva/ . I observe that, given a uniform distribution of probabilities 0-1, if the maximum odds ratio is less than 40/1, this algorithm has a high chance of going bankrupt within 50-100 bets. Any thoughts on why that should be?
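
Not the fiddle itself (that's JavaScript), but here is a minimal Python sketch of one common way to set up the same kind of simulation; the parameter choices (uniform net odds up to max_odds, skipping bets with no edge) are my assumptions, not necessarily what the fiddle does:

```python
import random

def simulate_kelly(n_bets=100, max_odds=40, bankroll=1.0):
    """Repeated Kelly bets with win probabilities drawn uniformly from (0, 1)."""
    for _ in range(n_bets):
        p = random.random()              # true win probability
        b = random.uniform(1, max_odds)  # net odds offered (b-to-1)
        f = p - (1 - p) / b              # Kelly fraction of the current bankroll
        if f <= 0:
            continue                     # no edge, no bet
        if random.random() < p:
            bankroll *= 1 + f * b        # win: gain f*b of the staked fraction's payout
        else:
            bankroll *= 1 - f            # loss: lose the staked fraction
    return bankroll

if __name__ == "__main__":
    runs = [simulate_kelly() for _ in range(1000)]
    print(sum(r < 0.01 for r in runs), "of 1000 runs ended below 1% of the starting bankroll")
```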

Comment by sdr on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T13:32:42.894Z · LW · GW

-

Comment by sdr on What is required to run a psychology study? · 2019-05-30T04:44:37.949Z · LW · GW

In the context of customer development for product research, yes. For good questions on that, see e.g. the book "The Mom Test" by Rob Fitzpatrick, and the lean customer development field in general. This was solving for the general question "will anyone pay for developing X?"; being wrong on this particular question is expensive.

Comment by sdr on What is required to run a psychology study? · 2019-05-29T07:57:53.780Z · LW · GW

In the name of supporting people actually doing stuff:

  • Scott’s IRB Nightmare comes from the polling taking place within the context of a privileged patient-provider interaction, which is covered by HIPAA and therefore requires somewhat stringent data handling. If you are not a doctor and you're not asking your patients in the hospital, this does not apply to you.
  • Yes, you are allowed to "just go out and ask a whole bunch of people stuff". People can, actually, give away whatever information they feel like giving away. People are allowed to enter (mostly) any trade. People are free to do stuff.
  • For people <18, you need parental consent.
  • There are, like, hundreds of tools to do this - both for finding people and for nailing the questions. Google Surveys currently samples best across the US (specifically, it predicted the 2016 election results successfully). This is good if you have a specific hypothesis that you want to put to 1000+ people.
  • The more quantitative you get, the less signal it carries, at higher precision. Survey & stats criticism generally comes from attempts to determine "things about humanity in general", which is also (somewhat) useful, but requires a very large N and _very_ methodical sampling / experiment design / etc.
  • Generate qualitatively, validate quantitatively. The vast majority of effort goes into actually locating the hypothesis. Before building a research thesis in your room, go out and do the simplest thing first: talk with people, like, in person. There's a learning curve before you can formulate meaningful hypotheses.
  • Ask yourself what rent the answer to a specific question pays. What does it say about reality if it turns out to be A vs B? How does that interact with neighbouring things?
    • And: what, specifically, do you wish to achieve here? For what it's worth, qualitative answers to some of the questions above from Bay Area people, along with some synthesis, would be extremely informative (to me at least).
  • A good starting point for this might be cultural anthropology, but instead of getting a book, here's an MVP: get a tape recorder, ask 50 of your friends the questions above, then put the answers into a spreadsheet and a synthesis into an LW post. This is extremely informative for e.g. measuring local shifts in the Overton window and finding common ground (and how that ground shifts); and it is sorely missing.
    • Why in person? People who persistently fill out textareas on web pages are heavily biased in income and mental illness; generally, people don't do that. Being in person, you're staking the interview on personal reputation, which bridges the addressability gap and makes a much wider variety of people's voices accessible.
    • To avoid the pet-theory issue: ask open-ended questions (e.g. the job/ambition-related ones above are good). Don't lead; capture the raw stuff.
  • Do this simple thing first, prior to embarking on specific hypothesis formulation; and post the results!

Comment by sdr on LW Migration Announcement · 2018-03-23T02:09:19.961Z · LW · GW

Not the grandparent, but I'm browsing through my private notebook for potentially breaking links, e.g.:

http://lesswrong.com/r/discussion/lw/deg/less_wrong_product_service_recommendations/6yry <- which is one specific piece of advice (and a good one at that) vs https://www.lesserwrong.com/r/discussion/lw/deg/less_wrong_product_service_recommendations/6yry <- which is a 404. This actually does have a high impact, both on other sites linking to specific comment threads and, by extension, on SEO in general (a linked page whose content changed to empty).

( Relatedly, https://www.lesserwrong.com/non-existing-page returns HTTP 200 instead of 404, which is more wrong than http://lesswrong.com/not-existing-page )
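
A minimal way to reproduce the status-code check above (a sketch assuming the third-party requests library; the URLs are the ones just mentioned):

```python
import requests

for url in (
    "https://www.lesserwrong.com/non-existing-page",
    "http://lesswrong.com/not-existing-page",
):
    resp = requests.get(url, allow_redirects=True, timeout=10)
    # A genuinely missing page should answer 404; a 200 here is a "soft 404".
    print(resp.status_code, url)
```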

Comment by sdr on Superhuman Meta Process · 2018-01-04T05:38:48.595Z · LW · GW

Hello, my values of a decade ago - it's so nice to see you publicly documented! In retrospect, and in particular, the level of paranoia imbued here will serve you well against incentive hijacking, and will serve as a foundation stone of goal stability.

There is one particular policy here where my thinking has changed significantly since then; and I'd love to check against Time whether it makes sense, or whether my values have shifted:

| Reject invest-y power. Some kinds of power increase your freedom. Some other kinds require an ongoing investment of your time and energy, and explode if you fail to provide it. The second kind binds you, and ultimately forces you to give up your values. The second kind is also easier, and you'll be tempted all the time.

Contrast with:

| Optimization never stops. Avoid one-time effort if at all possible. Aim for long-term stability of the process that generates improvements. There is no room for the psychological comfort of certainty.

So, the operative word above is "freedom" (personally, I've used "possibility space maximization"), and it's super useful to run a conceptually exhaustive search across surface-y options. But.

You probably have goals of interest that you wish to achieve (e.g. "long-term future of humanity"). Some of these might require banging away at stuff for an extended period of time. You have behaviours (e.g. your meta-policies) which you keep up for an extended period of time. Whether you recognize it as such or not, you are also vesting into these; and by way of the forgetting curve and blog readership, they also require ongoing maintenance. And yes, there might come future technological change which makes them obsolete and puts you to a decision between "your values" & "rolling with the changes".

So, my counter to this is: _anything which does not take the passage of time into consideration gets eaten by it._ Your Time is a super scarce resource - probably the scarcest of them all. One way to turn this liability into an asset is by vesting into stuff (projects, startups, skills, people, ideas, what have you) and riding the compounding interest across time. This is, to my knowledge, the only way one can scale scarce resources into epic levels of task-specific utility.

(Relatedly, it seems to me that there is a sliding scale between the need to change in the face of future changes and vesting into things, which most people tend to shift along as they age. The obvious problem here is that simulated annealing is susceptible to fixating on phantom (local) maxima as the environment changes.)

So, unpacking the desiderata from above, the model I'd offer for consideration is the Affordable Loss Principle, with a side dish of Avoiding Infinite Optimizers:

* The affordable-loss principle prescribes committing in advance to what one is willing to lose, rather than investing in calculations about expected returns to the project. Key to affordable-loss policies is the generation of next-best alternatives, so that when it comes time to move, there is something to seamlessly move forward to.

Or, in the wise words of Zvi:  https://www.lesserwrong.com/posts/ENBzEkoyvdakz4w5d/out-to-get-you

  • Get Got when the deal is Worth It.
  • When you Get Got, do it on purpose.
  • But: you cannot afford to Get Got if the price is not compact. (Sufficiently advanced optimizers will eat your time, attention, and resources for breakfast _if you let them_. Don't.)

In conclusion, I'd suggest that yes, run a freedom-maximizing circle, because it eliminates conceptual blind spots, and there is a lot of low-hanging fruit you can pick up along the way. But additionally, be on the lookout for opportunities that are compact, low-hanging, and compounding across time, such that linear investments today lead to incremental & compounding utility tomorrow.

Comment by sdr on Project Hufflepuff: Planting the Flag · 2017-04-26T04:46:31.372Z · LW · GW

Thank you for posting this. I agree that growing negotiation skills is hard under the best of circumstances; and I agree that certain types of newbies might self-identify with the post above.

There is a qualitative difference between people who are negotiating (but lack the proper skill), and the parasites described above:

  • Beginner negotiators state their request, and ask explicitly (or implicitly expect) a price / counter

  • More advanced negotiators start with needs/wants discovery, to figure out where a mutually beneficial deal can be made; and they adjust as discussion proceeds

  • These parasites, in comparison, attempt to raise their request against explicitly stated but nebulous things (or nothing at all): "Would you like to do free translation for me?" - "Cause X is very important, and therefore you, specifically, should do something about it" - "Would you like to build my full website for me in exchange for 1% shares?"

For the record:

  • I have attempted education in some cases (1-on-1, no social standing on the line for either party, being discreet, etc.), to no effect, and with only resentment from the other party.

  • I observe that this parasitic strategy works some of the time, which incentivizes existing parasitic behavior to grow until saturation. These are the reasons why I brought this up here in the first place.

  • Kindly note that while a lot more evidence went into this than described above, I am hesitant to disclose more specifics about any of these cases, because the Bay is small (-> personal identification), and the discussion isn't reflection-complete (parasites read this, too; the more I disclose here, the more they can shift their strategies)

Comment by sdr on Project Hufflepuff: Planting the Flag · 2017-04-04T16:55:10.588Z · LW · GW

Updated. Re: | if you want to publicly address these people <- if people are addressed offline in public, I suspect you can dress it up with the appropriate social grace. But we're talking about behavior here (and entrepreneurs have exploits they're already proud of, like hackers have hacks, and free riders aren't actively malicious), and I feel that dressing it up with the same grace would actually backfire by not changing (or even harming) the reward structure of the behavior.

Comment by sdr on Project Hufflepuff: Planting the Flag · 2017-04-04T03:57:57.483Z · LW · GW

Agreed. Recommend a non-verbed descriptive noun, and I'll update the post above.

Comment by sdr on Project Hufflepuff: Planting the Flag · 2017-04-02T03:51:02.762Z · LW · GW

(I'm not sure which part of this is an "armchair-theorizing-sociology piece", so let me share impressions:

  • The 3 specific examples are all observations: 2 at a CFAR event, 1 at a Bay LessWrong event
  • The "people putting others' needs ahead of their own" comes from 2 people who both bounced from the Bay for this reason
  • The "attempting value-pumping" / lack-of-dealcraft is ubiquitous wherever people are Getting Stuff Done; the only novel thing in the Bay is that high turnover / constant onboarding of people allows this to be done systematically
  • The "let's make stuff suck less" -> "let's all of you do my stuff" head-fake is a non-profit special; 2 attempts on me so far
  • The part where, instead of attempting to "forbid parasiting", I turn it around and ask "how can we make these parasites profitable?" is a specialty of mine, and has so far been very profitable in a number of contexts.

If you see none of these, I am happy for you. )

Comment by sdr on Project Hufflepuff: Planting the Flag · 2017-04-02T01:49:31.558Z · LW · GW

Fellow Hufflepuff / startupper / business getting-stuff-done-er / CFAR / Bay-arean here. Can we talk about the elephant in the room?

  • Geeks, MOPs, and sociopaths in subculture evolution <- describes the role of parasites in subculture evolution; specifically, that once group surplus reaches a threshold, it is immediately soaked up by parasites funneling it to agendas of their own
  • There are, by my count, at least 3 such parasites in the Bay community; and they specifically position themselves as the broken stair step right at onboarding, making the community feel "impenetrable and unwelcoming". The way this happens operationally: when I admit to some level of operational surplus (language skills, software development, business building), I immediately get asks from these specific persons of "Would you like to do free translation for me?" / "Would you like to build $website-idea$ for me?" / "Would you like to donate to $my-cause$?". I also notice that they don't do it this overtly to long-term members.
  • Note, the problem here isn't the ask. We do asks in entrepreneur-topia all the time. The problem is the lack of dealcraft: the asks asymmetrically favour the asker, and offer only vague lip-service-waving-towards-nice-things in return.
  • The presence of these parasites, and their lack of dealcraft, reached equilibrium at having 'a strong culture of “make sure your own needs are met”, that specifically pushes back against broader societal norms that pressure people to conform', because people who have been value-pumped hard enough cannot sustain themselves in the Bay.

You are attempting to increase the group surplus of the community. This is very cool. My pre-mortem says that any such surplus created by the sweat of your brow will be soaked up by this parasitic behavior, and hence fail to achieve long-term changes in the admitted competence of the community.

There might be several ways to work around this problem. I want to be upfront about the evaluation criteria for it:

  • not talking about, or taking action on, this problem will not make it go away;
  • the parasites' aim is value-pumping: that is, closing deals in which they get the maximum amount of value with the least amount of work of their own;
  • parasites participate in the culture like everyone else; for this reason, any plan you might come up with must be reflection-complete: that is, it needs to work even if everyone in the community knows that such a plan is in motion.

A few candidate solutions which stick out:

  • Level up dealcraft: cultivate and enforce a culture of mutually beneficial asks.
  • Level up the quantity of dealcraft: elicit from members - all members - their goals / objectives / needs, and focus on coincidences of wants. There's a pretty cool model of this in the book Wishcraft: "barn-raising".
  • Systematically post-mortem newbies, elicit a list of parasites ("was there someone who made you uncomfortable? describe the exact specifics of the situation"), and systematically intervene in the onboarding process.

Edit note: originally, this post used the word "sociopath" incorrectly - thanks to Viliam's comment below for pointing it out - fixed.

Comment by sdr on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-14T16:34:10.030Z · LW · GW

FAI value-alignment research and cryonics are mutually inconsistent stances. Cryo resurrection will almost certainly happen by scanning & whole-brain emulation. An EM/upload with a subjective timeline sped up 1000x will be indistinguishable from a UFAI. The incremental value-alignment results of today will be applied to your EM tomorrow.

For example, how would you feel with all your brilliant intellect, with all your inner motivational spark, being looped into a rat race against 10,000 copies of yourself, performing work for & grounded to a baseline, where if you don't win against your own selves, all your current thoughts, and feelings, and emotions are to be permanently destroyed?

Comment by sdr on 5 Project Hufflepuff Suggestions for the Rationality Community · 2017-03-04T10:04:29.932Z · LW · GW

Very much yes. Specifically, a bi-weekly or monthly thread (similar to the current open threads) of e.g. "Pitch your idea", with the hard constraint that top-level comments are at most 100 words at any given time, with optional links leading down the rabbit hole.

Edit: bonus points, but not a hard requirement, for describing your idea in "up goer five" language, to avoid that thing where people compress by using technical words as opposed to compressing comprehensibly. Like, what we want to achieve here is a common onboarding point where new people can get introduced to these ideas, as opposed to communicating the Theory of Everything in Greek symbols.

Comment by sdr on March 2017 Media Thread · 2017-03-03T09:11:16.144Z · LW · GW

Sure can: subscribe to gwern's newsletter

Comment by sdr on Metrics to evaluate a Presidency · 2017-01-24T14:09:52.267Z · LW · GW

Can you point at the part which you find objectionable?

Comment by sdr on Metrics to evaluate a Presidency · 2017-01-24T13:57:55.218Z · LW · GW

note: the topic text was originally different, and included a recently-elected president's name, which would've ranked on Google for related keywords. Below is the unedited comment, asking for that name not to be included.

Since "Downvoting temporarily disabled", I would like to express a very, very strong disapproval of this topic being discussed on lesswrong. Rationale:

1, Politics is the mindkiller

2, It attracts the sort of people who would like to discuss these sorts of things, at the expense of those (including myself) who do not; specifically, by ranking for relevant keywords on Google (with LessWrong's reputation)

3, There exists almost the entirety of the rest of the Internet to discuss these issues, including rationality-related groups, forums, and mailing lists

4, For a specific case study, we just had a CFAR-alumni discussion group blown up by a topic similar to this, which got 100s++ replies with no measurable convergence; which strongly implies that no, actually, we do not have the collective-intelligence / social tooling to tackle these issues yet.

For this reason, I - along with all upvoters of this comment - would like to ask for an admin intervention, specifically by either deleting this post or modifying it to "Metrics to evaluate a president"; that is, talking about general evaluation criteria instead of ranking for T-related keywords.

Comment by sdr on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T06:25:28.878Z · LW · GW

Speaking as a writer for different communities, there are 2 problems with this:

  • Duplicate content: unless explicitly canonicalized via rel=canonical headers, Google is ambiguous about which version should rank for keywords. This hits small, up-and-coming authors like a ton of bricks, because by default the LW version is going to get ranked (on the basis of authority), and their own content will be marked both as a duplicate and as spam, and their domain deranked as a result. (A rough way to check whether a page declares a canonical at all is sketched at the end of this comment.)

  • "An audience of your own": if a reasonable reader can reasonably assume, that "all good content will also be cross-posted to LW anyways", that strongly eliminates the reason why one should have the small blogger in their RSS reader / checking once a day in the first place.

The HN "link aggregator" model works, because by directly linking to a thing, you will bump their ranking; if it ranks up to the main page, it drives an audience there, who can be captured (via RSS, or newsletters); and therefore have limited downside of participation.

Comment by sdr on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-15T13:56:40.306Z · LW · GW

I won't speak to the content, but I can wave towards the form: basically, there is a set of brain modules / neural pathways which, when triggered by a set of thoughts, fill one with hope / drive / selflessness. Specifically for me, one such thought is:

| "That was humanity in the ancient days. There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere. And yet... and yet..." .. "There was a threshold crossed somewhere," said the Confessor, "without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to me, in my time, and rescued me. Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it. Humanity finally got its act together." Three worlds collide

How developed this neural pathway is, and what specific form the actual software takes, varies enormously between individuals. This is a problem with how atheism is currently propagated: when you tell a person "god does not exist", you're basically denying them the reality of this brain module, while at the same time taking away a core motivator without substituting anything even remotely close to it, motivation- / qualia-wise.

So my interpretation of people checking "non-religious spirituality" is that they both have this brain module somewhat developed, and there exist some thoughts by which they can readily trigger it.

Comment by sdr on November 2016 Media Thread · 2016-11-03T05:50:40.691Z · LW · GW

In addition to that, you might want to consider posting larger, completed stand-alone works directly into the discussion section as a link, for discussion, feedback, and good karma.

Comment by sdr on November 2016 Media Thread · 2016-11-02T01:03:00.260Z · LW · GW

The Oatmeal: How to be perfectly unhappy <- This reminds me of On the unpopularity of cryonics: life sucks, but at least then you die

| Most people have a very limited range of interests and possibilities for gratification. This problem cannot be fixed for most by giving them more money, or even more money and autonomy. Do that, and they will drown themselves in what they already have, or kill themselves with drugs. How many cars, planes, and pairs of shoes or houses can you really gain joy from?

Happiness doesn't scale. Being engaged does.

Comment by sdr on November 2016 Media Thread · 2016-11-02T00:14:04.280Z · LW · GW

AMV short: Nostromo's newest: AMV - Nostromo - Umbrella Corp

Comment by sdr on November 2016 Media Thread · 2016-11-02T00:09:47.271Z · LW · GW

Crash course: Meta-ethics (Crash Course Philosophy #32) <- mostly classification, taxonomy, and a few thorny problems. Good review.

Comment by sdr on November 2016 Media Thread · 2016-11-02T00:07:43.184Z · LW · GW

Anime / AMV short: Porter Robinson & Madeon - Shelter

Comment by sdr on Trying to find a short story · 2016-10-25T04:31:04.107Z · LW · GW

The Gentle Seduction by Marc Stiegler; search strategy was [short story about technological change saturn]

Comment by sdr on Preference over null preference · 2016-09-05T14:35:48.476Z · LW · GW

Elo,

You seem to be posting, like, a lot. This is good, this is what we have personal blogs for.

I do have an issue with syndicating your content straight to here, regardless of state, amount of research, amount of prior discussion with other people, confidence, or epistemic status. This introduces an asymmetric opportunity cost for the LessWrong community; specifically, writing these is much easier and lower effort than the amount of effort they will collectively soak up for no gain.

For this reason, I have downvoted this post as is. I will also kindly ask you to introduce a pre-syndication filter which respects other people's limited time and attention, and to cross-post only the pieces where you have 1, a coherent thesis, and 2, validated interest from other people (as in, someone explicitly remarked "that's interesting").

Thanks.

Comment by sdr on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T12:16:40.646Z · LW · GW

Heads up about the business side of this: selling to primary & secondary schools, esp outside of the US, is 8/10 difficult.

Specifically, even if the teachers are fully championing your solution, they do not wield any sort of purchasing authority (and sure as hell won't pay from their own wallet). The purchasing authority's incentive structure does not align with "teacher happiness", "optimal schedule", or most things one would imagine being the mission of the school. It is, however, critical for them to control all software used inside the school, and they might actively discourage using non-approved vendors.

Comment by sdr on May 2016 Media Thread · 2016-05-06T03:31:05.376Z · LW · GW

Exurb1a is making some excellent nihilistic mind-bending videos. Highlights:

Comment by sdr on February 2016 Media Thread · 2016-02-14T10:47:51.407Z · LW · GW

A short on the FAI problemset: https://www.youtube.com/watch?v=m0PuqSMB8uU ("Rick and Morty - Keep Summer Safe")

Comment by sdr on Entrepreneurial autopsies · 2015-07-13T23:52:11.514Z · LW · GW

Within the context of online businesses, we have some stats on failure-mode frequency, which strongly reflect both my own priors and my conversations with ~200 startup founders to date (source: Quora).

Comment by sdr on Language Learning and the Dark Arts. · 2015-04-08T16:56:49.801Z · LW · GW

patio11 on language learning:

"...A lot of people have vague goals like "I want to learn French" or "I want to be fluent in Japanese." There is no defensible definition of the word "fluent." Instead, you should have specific goals which test ability to complete tasks that are representative of the larger set of tasks you need to be good at to achieve metagoals which are important to you.

This is why I care relatively little about "fluency in Japanese" and quite a bit about "what percentage of commercially significant terms in my apartment lease did I understand without having to ask a Japanese speaker to explain them to me?"

That task is roughly representative of many tasks required to achieve my metagoal, which is "being a functioning adult / educated professional in Japanese society."

Now how do I measure progress? Well, I have some notion of groupings of tasks by difficulty level. The "apartment lease" task is in the same grouping and difficulty level as the "employment contract" task was or the "extract the relevant rule for recognizing SaaS revenue from the National Tax Agency's docs" was. Given roughly comparable levels of difficulty, if I start doing better on a task where previously I did poorly, then I'm progressing.

Why don't I just take Japanese tests yearly? Because my metagoal is not becoming the best Japanese test-taker there is. They are good from the perspective of many decisionmakers, since they allow decisionmakers to compare me against other people in a reproducible and cheap-at-the-margin fashion, but that doesn't get anything that I value. I don't care how I compare to Frank or Taro -- being better than Frank will not save me social embarrassment if I have to ask an accountant "Here is my... um, I don't know what the word is, but it's the piece of paper that records the historical prices I purchased by assets at and then their declining present value representing their worth diminishing over time as calculated by the straight line method. There's an accounting word I'm searching for here and I bet it is followed by the word 'schedule.' DEPRECIATION. Yep, that's the one, thanks."

https://news.ycombinator.com/item?id=9341401

Comment by sdr on October 2014 Media Thread · 2014-10-02T03:13:37.841Z · LW · GW

A fantastic short on existentialism: The Missing Scarf

Comment by sdr on Open thread, 14-20 July 2014 · 2014-07-21T12:51:21.376Z · LW · GW

Here's an evolutionary psychology question:

#1: Lemma: Replicator selection works only through genes; that is, there is no such thing as group selection; from a reproduction perspective, the only thing which matters is delta-reproduction-fitness increase.

#2: Lemma: Technologies and techniques don't require gene transfer. Once someone comes up with a new idea, that idea can freely spread across the entire population. Therefore, technologies and techniques don't offer a delta-reproduction-fitness increase.

#3: Observation: Some people appear to be more interested in things (as observed in scientists and engineers; think "flow") than in other people (as predicted by the Machiavellian Intelligence Hypothesis).

For the purpose of this thread, I'm not interested in discussing lemmas #1 and #2. Assume these to be axiomatic. How can #3 still increase delta-reproduction-fitness?

Comment by sdr on June 2014 Media Thread · 2014-06-20T23:05:08.609Z · LW · GW
Comment by sdr on Open thread for December 17-23, 2013 · 2013-12-17T22:54:51.806Z · LW · GW

The rationale behind salary negotiations is best expanded upon by patio11's "Salary Negotiation: Make More Money, Be More Valued" (that article's well worth the rent).

In real life, the sort of places where employers take offense at you not disclosing your current salary (or, generally, at salary negotiations - that is, they'd hire someone else if he's available more cheaply) are not the places you want to work at: if they're applying selection pressure toward lower salaries, all your future coworkers are going to be, well, cheap.

This is anecdotally not true for Google; they can afford truckloads if they really want to have you onboard, so this is much more likely to come from standardized processes. Also note that in Google's case, decisions are delegated to a board of stakeholders, so there isn't really one person who can be put off by salary (and they probably handle the hire / no-hire decision entirely separately from the salary negotiation).

Comment by sdr on August 2013 Media Thread · 2013-08-04T05:05:45.263Z · LW · GW

(( For the uninitiated:

1, It would not be unrealistic for her to assume YouTube's copyright algorithms would flag her video into oblivion. It's known to happen.

More importantly, 2, Vi works for Khan Academy, who is sponsoring her "to do whatever she wants". That comes with lawyers. ))

Comment by sdr on August 2013 Media Thread · 2013-08-02T22:40:55.955Z · LW · GW

Vihart's "Twelve Tones" is quite possibly the most mind-expanding mix of interdisciplinarity (math, music & creativity) in 2013 I've seen so far: http://www.youtube.com/watch?v=4niz8TfY794

Comment by sdr on "Stupid" questions thread · 2013-07-14T02:58:39.182Z · LW · GW

Specifically for business, I do.

The general angle is asking intelligent, forward-pointing questions, specifically because deep processing of thoughts (as described in Thinking, Fast and Slow) is rare even within the business community; so demonstrating understanding and curiosity (both of which are strengths of people on LW) is an almost instant win.

Two of the better guides on how to approach this intelligently are:

The other aspect of this is Speaking the Lingo. The problem with LW is:

1, people develop gravity wells around specific topics, and have a very hard time talking about stuff others are interested in without bringing up pet topics of their own; and

2, the inferential distance between the kind of stuff that puts people into powerful positions and the kind of stuff LW develops gravity wells around is, indeed, vast.

The operational hack here is 1, listening; 2, building up the scaffolds upon which these people hang their power; 3, recognizing whether you understand how those pieces fit together.

General algorithm for the networking dance:

1, Ask intelligent question, listen intently

2, Notice your brain popping up a question / handle that you have an urge to voice. Develop a classification algorithm to notice whether the question was generated by your pet gravity well or by novel understanding.

3, If the former, SHUT UP. If you really have the urge, mirror back what they've just said to internalize / develop your understanding (and move the conversation along).

Side effects may include: developing an ugh field towards browsing LessWrong, incorporating, and getting paid truckloads. YMMV.

Comment by sdr on Update on Kim Suozzi (cancer patient in want of cryonics) · 2013-01-22T14:00:20.178Z · LW · GW

Farewell, and see you on the other side!

Comment by sdr on Programming Thread · 2012-12-07T02:04:22.175Z · LW · GW

In ascending order of resolution:

  • There are a lot of quicker ways to set up a website - a lot of hosting solutions come with one sort of web designer or another; you can be up & running with a general blogging account in 2 minutes. If you have a specific end goal (e.g. moving inventory) in mind, this'll give you a disproportionately quicker bang for your time.

  • Depending on what your goals are, the primary challenges of websites might not be the technical details, but rather clear communication & value presentation. If you have a goal, articulate it in writing first.

With that said...

  • Knowing HTML allows you to create static websites; CSS gives you fine-grained control over presentation; JavaScript (and specifically jQuery) allows you to create client-side (in-browser) interactions. You can get through these without understanding CS basics (for JS specifically, there are a lot of online collections of scripts, etc.).

  • Web-specific domain languages (PHP, Python, Ruby) give you server-side capabilities (storing & querying data, generating dynamic pages, business logic). More assembly required, and this needs some CS fundamentals; see the sketch right below this list.
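
To make the "server-side" distinction concrete, here is a minimal sketch using Flask, one of many Python options (the route, the template, and the visit counter are illustrative only, not a recommendation of a specific stack):

```python
from flask import Flask, render_template_string

app = Flask(__name__)
visits = 0  # trivial stand-in for "storing & querying data"

@app.route("/")
def home():
    global visits
    visits += 1
    # HTML/CSS knowledge covers the template; the server fills in the dynamic data.
    return render_template_string(
        "<h1>Hello</h1><p>This page has been generated {{ n }} times.</p>", n=visits
    )

if __name__ == "__main__":
    app.run(debug=True)
```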

In short: it depends on whether you see this website & programming as an instrumental goal towards something larger, or as a terminal goal of "being a better website creator". Hope this makes sense.

Comment by sdr on Tools versus agents · 2012-05-17T03:42:51.005Z · LW · GW

| Yes, that assumes away tiredness, inattention, and the like, but I think that's more an issue of relative speed than anything else

Exactly for those reasons. From the relevant utilitarian perspective, we care about those things much more deeply. (Also, try differentiating between "不労所得を得るにはまずこれ" - roughly, "to earn passive income, start with this" - and "スラッシュドット・" - "Slashdot".)

Comment by sdr on Tools versus agents · 2012-05-17T01:53:06.138Z · LW · GW

You're fundamentally assuming an opaque AI and ascribing intentions to it; this strikes me as generalizing from fictional evidence. So let's talk about currently operational, strong super-human AIs. Take, for example, Bayesian spam filtering, which has the strong super-human ability to filter e-mails into the categories "spam" and "not spam". While the actual parameters of every token are opaque to a human observer, the algorithm itself is transparent: we know why it works, how it works, and what needs tweaking.
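
To make the "transparent algorithm, opaque per-token parameters" point concrete, here is a minimal naive-Bayes-flavoured sketch (the toy data, tokenization, and smoothing choices are mine, not those of any particular production filter):

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam) pairs. Returns per-class token counts and message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def spam_log_odds(text, counts, totals):
    """Transparent scoring rule: prior log-odds plus a sum of per-token log-likelihood ratios."""
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))
    spam_tokens = sum(counts[True].values()) + 2
    ham_tokens = sum(counts[False].values()) + 2
    for token in text.lower().split():
        p_spam = (counts[True][token] + 1) / spam_tokens  # Laplace-smoothed
        p_ham = (counts[False][token] + 1) / ham_tokens
        log_odds += math.log(p_spam / p_ham)
    return log_odds

counts, totals = train([
    ("cheap meds buy now", True),
    ("meeting notes attached", False),
])
print(spam_log_odds("buy cheap meds", counts, totals))  # positive => leans spam
```

Every fitted number is inspectable, and the scoring rule itself is a dozen lines of arithmetic - that is the sense in which the algorithm stays transparent even when the individual token weights are uninteresting to read.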

This is what Holden is talking about when he says:

| Among other things, a tool-AGI would allow transparent views into the AGI's reasoning and predictions without any reason to fear being purposefully misled

In fact, the operational AI R&D problem is that you cannot outsource understanding. See e.g. neural networks trained with evolutionary algorithms: you can achieve a number of different tasks with these, but once you finish the training, there is no way to reverse-engineer how the actual algorithm works, making it impossible for humans to recognize conceptual shortcuts and thereby improve performance.

| Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". (ref)

Comment by sdr on SotW: Be Specific · 2012-04-04T04:26:40.976Z · LW · GW

Easy exercise on the 5-second level: ask the question "as opposed to what?", both out loud and when constructing what you'd like to say. An easy trigger to remember is qualifiers - they're usually a mark of a motivated abstraction switch.

Medium-level exercise: take one of your life failures, at any level, and dismantle it via root-cause analysis:

"The business failed." "Why?"

"We failed to nail down the unit economics tightly before scaling up marketing" "why?"

"No one was dedicated to look over all the 6 pieces on the value chain" "why?"

...etc.

Also known as the 5 Whys, this practice basically drills down a single causal chain via "why" questions to 4-6 levels, untangling the human, skill, intention, and other components that led to the failure. You can verify whether you were specific enough by being able to come up with concrete solutions at each of these levels.

Comment by sdr on How to manipulate future self into being productive? · 2011-11-04T04:46:17.920Z · LW · GW

You're framing the problem wrong - within these conditions, there is no good solution. There are 3 shortcuts out:

First, realize that you're inherently time-locked: the current self is the only one over which you have some amount of control (you might put yourself in a situation where your only way out is to "work hard" - e.g. make a bet with a friend to pass that exam, etc. - but I found these to be less effective than the other two).

Second, reframe the problem. Some sample questions you might ask:

  • In what ways might I get the most gratification out of this work?
  • In what ways might I get the most XP out of this experience?
  • In what ways might I learn the most about myself during this exercise?
  • In what ways might I use this as a way to self-improve? You get the idea - reframing is key.

Third, "working" for most classes of work, is fundamentally muscles: as you do more, and more, try different ways out, your leverage, and ability to "get stuff done" will improve. So: start with baby steps, then use the positive feedback, and gained experience to improve, and apply it to other aspects of the task.

Hope this helps.