After looking at the website, it's still a very complicated idea, it's totally unclear what a small-scale implementation is supposed to look like (i.e. what impact does this have in a small community of enthusiasts, what incentive does anyone have to adopt it, etc.).
I can't evaluate your math because I don't see any math in your website or kickstarter. I see references to Bayes and Information theory, but no actual math.
Not particularly urgent. An understanding of how to update priors (which you can get a good deal of with an intro to stats and probability class) doesn't help dramatically with the real problem of having good priors and correctly evaluating evidence.
The big pizza delivery chains have expanded their menus over time, from pretty-much-pizza to sandwiches, desserts, wings, salads, etc.
So it seems efficiency is not the driver.
I think you have failed to consider the requirements of Kickstarter. You need to really think about your marketing, deliver as clear and slick a statement of your project as possible, and imitate as much as you can of successful similar kickstarter projects.
Your video is neither clear nor slick (verbal explanation with random camera angles is a poor way for most people to absorb information), and needs visual aids at least.
I watched a couple minutes of explanation, and then zoned out. Sorry.
I'm guessing from your low funds that you've also done little to evangelize your concept, or have been unsuccessful in doing so.
I also think Kickstarter for coding projects is a high barrier, since so much programming is volunteer open-source projects, people wonder why they'd donate extra. And you offer no rewards for backers, a key element of Kickstarter's concept.
You also say in the first paragraph "the project isn't going well".
It's not really arbitrage if you lose money.
I think the same essential abuses that exist with debt exist here, so time-limitation (say equity stops yielding returns after 15-20 years, and can be discharged by bankruptcy) is important.
I worry about abuses when the equity stake is high. If you're a mentor, and your investment decides they don't really want to prioritize income maximization, what will you do?
Would the way to optimize returns involve hiring those you've invested in yourself (or arranging some convenient swap, if such direct employment is forbidden), and perhaps result in a system that looks either nepotistic or like indentured servitude?
Most of my objections melt away with shorter term investments. Still, equity is a much more natural fit on the project/business level.
And taking on low-earning degrees isn't particularly an information issue. It's a people-make-poor-decisions-at-18 issue. Data about majors' earnings are readily available.
Owning a car can have a large advantage over leasing, if you are likely to keep the car a long time. An owned car can also become a second backup car (one that's no big deal if it breaks down) if you get a newer one. Leasing two cars at once is a big waste of money.
Owning versus renting a home is not clear cut at all. Renting a house is a huge money pit unless you are frequently moving, and renting an apartment vs. owning a house is often not comparable in lifestyle. Owning a house gives you the property and flexibility, and can result in longer-term wealth building.
I used to be a poor student, and while I had a few indulgences I was frugal by virtue of being unable to afford some things. Now I have a job that makes plenty of money, and I spend it on things I would have once considered a poor value or even outright wasteful (uber instead of public transit, order something on Amazon I am unlikely to use).
I imagine if I had much much more money, I'd spend it on things I consider wasteful now.
So, Harry should probably have figured this out 60 chapters ago.
But we'll cut him some slack.
You've dropped out of the lower end of Flow, the optimal level of challenge for a task.
You've solved the intellectually interesting nugget, or believe you have, and now all that's left are the mundane and often frustrating details of implementation. Naturally you'll lose some motivation.
So you have to embrace that mundanity, and/or start looking at the project differently.
That's too strong. For instance, multi-person and high-noise environments will still have room for improvement. Unpopular languages will lag behind in development. I'd consider "solved" to mean that the speech-processing element of a Babelfish-like vocal translator would work seamlessly across many many languages and virtually all environments.
I'd say it will be just below the level of a trained stenographer with something like 80% probability, and "solved" (somewhat above that level in many different languages) with 30% probability.
With 98% probability it will be good enough that your phone won't make you repeat yourself 3 times for a simple damn request for directions.
I think drones will probably serve as the driver of more advanced technologies - e.g. drones that can deposit and pick up payloads, ground-based remote-controlled robots with an integration of human and automatic motion control.
Right - I agree that Go computers will beat human champions.
In a sense you're right that the techniques are general, but are they the general techniques that work specifically for Go, if you get what I'm saying? That is, would they produce similar improvements when applied to Chess or other games? I don't know, but it's always something to ask.
I am in the NLP mindset. I don't personally predict much progress on the front you described. Specifically, I think this is because industrial uses mesh well with the machine learning approach. You won't ask an app "where could I sit?" because you can figure that out. You might ask it "what brand of chair is that?" though, at which point your app has to have some object recognition abilities.
So you mean agent in the sense that an autonomous taxi would be an agent, or an Ebay bidding robot? I think there's more work in economics, algorithmic game theory and operations research on those sorts of problems than in anything I've studied a lot of. These fields are developing, but I don't see them as being part of AI (since the agents are still quite dumb).
For the same reason, a program that figures out the heliocentric model mainly interests academics.
There is work on solvers that try to fit simple equations to data, though I'm not that familiar with it.
I'm not asking for sexy predictions; I'm explicitly looking for more grounded ones, stuff that wouldn't win you much in a prediction market if you were right but which other people might not be informed about.
I think NLP, text mining and information extraction have essentially engulfed knowledge representation.
You can take large text corpora and extract facts (like "Obama IS President of the US") using fairly simple parsing techniques (and soon, more complex ones), put this in your database in either semi-raw form (e.g. subject - verb - object, instead of trying to transform the verb into a particular relation), or use a small variety of simple relations. In general it seems that simple representations (including non-interpretable ones like real-valued vectors) that accommodate complex data and high-powered inference are more powerful than trying to load more complexity into the data's structure.
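To make the semi-raw storage idea concrete, here's a toy sketch of extracting (subject, verb, object) triples. The verb list and sentences are made up for illustration; real systems would use a dependency parser rather than string matching, but the stored representation is the same shape:

```python
# Toy subject-verb-object extraction. KNOWN_VERBS and the sample
# sentences are hypothetical; real pipelines use proper parsers.
KNOWN_VERBS = {"is", "wrote", "founded"}

def extract_svo(sentence):
    """Return a (subject, verb, object) triple for a simple declarative sentence."""
    tokens = sentence.rstrip(".").split()
    for i, tok in enumerate(tokens):
        if tok.lower() in KNOWN_VERBS:
            # Store the triple semi-raw, without mapping the verb to a relation.
            return (" ".join(tokens[:i]), tok, " ".join(tokens[i + 1:]))
    return None

facts = [extract_svo(s) for s in [
    "Obama is President of the US.",
    "Melville wrote Moby-Dick.",
]]
```

The point is that the database row stays close to the surface text; inference machinery downstream does the heavy lifting instead of an elaborate schema.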
Problems with logic-based approaches don't have a clear solution, other than to replace logic with probabilistic inference. In the real world, logical quantifiers and set-subset relations are really really messy. For instance, a taxonomy of dogs is true and useful from a genetic perspective, but from a functional perspective a chihuahua may be more similar to a cat than to a St. Bernard. I think instead of solving that with a profusion of logical facts in a knowledge base, it might be solved by non-human-interpretable vector-based representations produced from, say, a million youtube videos of chihuahuas and a billion words of text on chihuahuas.
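The chihuahua/cat point can be sketched with cosine similarity over feature vectors. The three dimensions and their values here are entirely hand-invented for illustration; a real system would learn high-dimensional embeddings from data, but the comparison works the same way:

```python
import math

# Hypothetical 3-d "functional" features: (size, lap-friendliness, guard-ability).
# These numbers are made up to illustrate the geometry, not learned from data.
vecs = {
    "chihuahua":  [0.10, 0.90, 0.10],
    "cat":        [0.15, 0.85, 0.05],
    "st_bernard": [0.95, 0.10, 0.80],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Functionally, the chihuahua lands nearer the cat than the St. Bernard,
# even though a genetic taxonomy groups the two dogs together.
```

No logical fact like "chihuahuas are lap animals" is ever stated; the similarity falls out of the geometry.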
Google's Knowledge Graph is a good example of this in action.
I know very little about planning and agents. Do you have any thoughts on them?
That's what I know most about. I could go into much more depth on any of them.
I think Go, the board game, will likely fall to the machines. The driving engine of advances will shift somewhat from academia to industry.
Basic statistical techniques are advancing, but not nearly as fast as these more downstream applications, partly because they're harder to put to work in industry. But in general we'll have substantially faster algorithms to solve many probabilistic inference problems, much the same way that convex programming solvers will be faster. But really, model specification has already become the bottleneck for many problems.
I think at the tail end of 10 years we might start to see the integration of NLP-derived techniques into computer program analysis. Simple prototypes of this are on the bleeding edge in academia, so it'll take a while. I don't know exactly what it would look like, beyond better bug identification.
What more specific things would you like thoughts on?
After talking with a friend, I realized that the unambitious, conformist approach I'd embraced at work was really pretty poisonous. I'd become cynical, and realized I was silencing myself at times and not trying to be creative, but I really didn't feel like doing otherwise.
My friend was much more ambitious, and had some success pushing through barriers that tried to keep her in her place, doing just the job in her job description. It wasn't all that hard for her; I'd just gotten too lazy and cynical to do this myself after mild setbacks.
The bureaucratic element was a very good idea by the Gatekeeper.
How superhuman does an AI have to be to beat the Kafkaesque?
You may be right. Hackernews then. An avowed love of functional programming is a sure sign of an Auteur.
Yes! Like those.
I think you're being a bit harsh though - the problem with personality tests and the like is not that the spectrums or clusters they point out don't reflect any human behavior ever at all, it's just that they assign a label to a person forever and try to sell it by self-fulfilling predictions ("Flurble type personalities are sometimes fastidious", "OMG I AM sometimes fastidious! this test gets me").
Professional/Auteur is a distinction slightly more specific than personality types, since it applies to how people work. It comes from the terminology of film, where directors range from hired-hands to fill a specific void in production to auteurs whose overriding priority is to produce the whole film as they envision it, whether this is convenient for the producer or not. Reading and listening to writers talk about their craft, it's also clear that there's a spectrum from those who embrace the commercial nature of the publishing industry and try hard to make that system work for them (by producing work in large volume, by consciously following trends, etc.) to those who care first and foremost about creating the artistic work they envisioned. In fact, meeting a deadline with something you're not entirely satisfied with vs. inconveniencing others to hone your work to perfection is a good example of diverging behavior between the two types.
There are other things that informed my thinking like articles I'd read on entrepreneurs vs. executives, foxes vs. hedgehogs, etc.
If I wanted to make this more scientific, I would focus on that workplace behavior aspect and define specific metrics for how the individual prioritizes operational and organization concerns vs. their own preferences and vision.
Exactly. The world is complicated, apparently contradictory characteristics can co-inhabit the same person, and frameworks are frequently incorrect in proportion to their elegance, but people still think in frameworks and prototypes so I think these are two good prototypes.
It's not burdensome detail; it's a list of potential and correlated personality traits. You don't need the conjunction of all these traits to qualify. More details provide more places to relate to the broad illustration I'm trying to make. But I'll try to state the core elements that I want to be emphasized, so that it's clearer which details aren't as relevant.
Professionals are more interested in achieving results, and do not have a specific attachment to a philosophy of process or decision-making to reach those results.
Auteurs are very interested in process, and have strong opinions about how process and decision-making should be done. They are interested in results too, but they do not treat it as separate from process.
And I'll add that like any supposed personality type, the dichotomy I'm trying to draw is fluid in time and context for any individual.
But I think it's worth considering because it reflects a spectrum of the ways people handle their relationship with their work and with coworkers.
Essentially, treat it as seriously as a personality test.
Perhaps a better way to say it is that these beliefs are irrelevant to the ideology-as-movement.
That is, if you are a Christian missionary, details of transubstantiation are irrelevant to gathering more believers (or at least, are not going to strike the prospective recruits as more important). Likewise, MWI is not really that important for making the world more rational.
So I was wrong to say they are not in some way important points of doctrine. What I should say is that they are points of doctrine that are unimportant to the movement and most of the externally-oriented goals of an organization dedicated to these ideologies.
If movie A sells for 9 dollars, people able to do a side-by-side comparison will never purchase movie B. Movie A will accrue 1.8 million dollars.
I don't see what sublinear pricing has to do with it unless the audience is directly engaging in some collective buying scheme.
I was thinking 4 quadrant. Horizontal axis is competence, vertical axis is professional vs. auteur.
Steve Jobs was something of an auteur who eventually began to really piss off the people he had once successfully led and inspired. After his return to Apple, he had clearly gained some more permanent teamwork and leadership skills, which is good, but was still pretty dogmatic about his vision and hard to argue with.
The most competent end of the professional quadrant probably includes people more like Jamie Dimon, Jack Welch, or Mitt Romney. Professional CEOs who you could trust to administrate anything, who topped their classes (at least in Romney's case), but who don't necessarily stand for any big idea.
This classification also corresponds to Foxes and Hedgehogs - Many Small Ideas vs. One Big Idea / Holistic vs. Grand Framework thinking.
But it is not a true binary; people who have an obsessing vision can learn to play nice with others. People who naturally like to conform and administrate can learn to assert a bold vision. If Stanley Kubrick is the film example of an Auteur (an aggravating genius) and J.J. Abrams is the Professional (reliable and talented but mercenary and flexible), there are still people like Martin Scorsese whom people love to work with and who define new trends in their art.
So maybe junction is a good way to think of it, but there are extraordinarily talented and important people who seem to have avoided learning from the other side too.
My post is about dogmatism. Sometimes beliefs have implications but they are implications in the long-tail, or in the far-future.
I would wager that most medieval Christians could not follow a medieval Christian theological debate about transubstantiation, and would not alter their behavior at all if they did. And eventually, Christians came to the consensus that this particular piece of dogma was not worth fighting over.
Specifics of property are similarly irrelevant in a world so far from the imagined world where these specifics determine policy. Certainly, it should be an issue you put aside if you want to be an effective activist. I'm not saying nobody should think about these issues ever, just that they are a disproportionately argued-about issue.
Similarly MWI just doesn't have an impact on any decision I make in my daily life apart from explaining quantum physics to other people, and never will. Can you think of a decision and action it could impact (I really would like to know an example!)? I'm not saying it's totally irrelevant or uninteresting, it's just disproportionately touted as a badge of rationalism, and disproportionately argued about.
Or if you did quantum computing, or if you were a chemist, or if you were a biologist, or if you care about understanding and figuring out the Great Filter and related issues, or if you work with GPS, or if you care about nuclear safety and engineering, or if you work with radios, etc. And that's before we look at the more silly problems that can arise from seeing parts of the world as magic, like people using quantum mechanics to justify homeopathy. If quantum mechanics is magic, then this is much easier to fall prey to.
This argument can be applied to anything. Automotive knowledge, political knowledge, mathematical knowledge, cosmetics knowledge, fashion knowledge, etc. I think it's great to know things, especially when you actually do need to know them but also when you don't. But if some piece of knowledge is unimportant to determining your actions or most of them, I won't privilege it just because it has some cultural or theoretical role in some ideology.
I just read a crapton of political news for a couple years until I was completely sick of it.
I also kind of live in a bubble, in terms of economic security, such that most policy debates don't realistically impact me.
High belief in a near singularity is unnecessary.
The decreasing price for prior buyers is an interesting notion.
There are specific auction and pre-commitment and group-buying schemes that evoke certain behaviors, and there's room for a lot more start-ups and businesses taking advantage of these (blockchains and smart contracts in particular have a lot of potential).
I don't think we'll ever get rid of marketing though.
I don't get what you're getting at.
Pricing is a well-studied area. Price discrimination based on time and exclusivity of 'first editions' and the like is possible, but highly dependent on the market. Why would anyone be able to sell an item with a given pricing scheme like 1/n? If their competitor is undercutting them on the first item, they'll never get a chance to sell the latter ones. And besides there's no reason such a scheme would be profit-maximizing.
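To see why a 1/n scheme can't be profit-maximizing at scale: if the k-th buyer pays base/k, total revenue is a harmonic sum, which grows only logarithmically in audience size. A quick sketch, using the thread's $9 / $1.8 million figures (which imply 200,000 buyers, a number assumed here for illustration):

```python
# Revenue under a hypothetical 1/n pricing scheme: buyer k pays base / k,
# so total revenue is base * H(n), the harmonic number, ~ base * ln(n).

def revenue_1_over_n(base, buyers):
    return sum(base / k for k in range(1, buyers + 1))

flat = 9 * 200_000                    # flat $9 price, 200,000 buyers -> $1.8M
sub = revenue_1_over_n(9, 200_000)    # same audience under 1/n pricing
# H(200000) is roughly ln(200000) + 0.577 ~ 12.78, so sub is only about $115.
```

Even a two-hundred-thousand-person audience yields around a hundred dollars, so the scheme only makes sense (if ever) as part of some collective-buying arrangement, not ordinary retail.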
AI is the harder role, judging from past outcomes. I hope you prepare well enough to make it interesting for GK.
I'm interested in doing AI Box as either role. How did you organize your round?
I used to be frustrated by the idea that my nation's stated principles were often undermined by its historical actions. Then I realized that this is true of every nation, everywhere at all times. Same with politicians, public figures, parents, companies, myself, etc.
Hypocritical actions happen all the time, and it is a victory when their severity and frequency is tempered. At the same time, justifications for those hypocritical actions abound. The key is not to take them at face value or reject them completely, but remember with humility that your favored group makes the same excuses at varying levels of validity.
So now I can empathize much more easily when people try to defend apparently hypocritical and reprehensible behavior. Even if I AM better than they are, I'm not qualitatively better, and it's disingenuous to try to argue as if I am. This realization leads to a more pragmatic, more fact-and-context-sensitive approach to real-world conflicts of values.
I think in many professions you can categorize people as professionals or auteurs (insofar as anyone can ever be classified, binaries are false, yada yada).
Professionals as people ready to fit into the necessary role and execute the required duties. Professionals are happy with "good enough", are timely, work well with others, step back or bow out when necessary, don't defend their visions or ideas when the defense is unlikely to be listened to. Professionals compromise on ideas, conform in their behavior, and to some degree expect others to do the same. Professionals are reliable, level-headed, and can handle crises or unexpected events. They may have a strong and distinct vision of their goals or their craft, but will subsume it to another's without much fuss if they don't think they have the stance or leverage to promote it. Professionals accurately assess their social status in different situations, and are reluctant to defy it.
On the worse end of this spectrum is the yes-man, the bureaucrat, and the aggressive conformer. On the better end are the metaphorical Eagle Scouts, the executive, the "fixers" who can come in and clean up any mess.
Auteurs are guided first by their own vision, maybe to the point of obsession. Auteurs optimize aggressively and wildly, but only for their own vision. Auteurs will interrupt you to tell you why their idea is better, or why yours is wrong. Auteurs have a hard time working together if they disagree, but can work well together if they agree, or with professionals who can align with their thinking. Auteurs don't care that their ideas are non-standard, or don't follow best practices, or have substantial material difficulties. Auteurs will let a deadline fly past if their work is not ready. Auteurs might look past facts that contradict them. Auteurs don't feel that sorry if they make themselves a pain in the ass for others to move toward their goals. Auteurs will disregard status, norms, and feelings to evangelize.
On the worse end they are kooks, irrationally obstinate and arrogant, or quixotic ineffectuals. On the best they are visionaries, evangelists for great ideas, obsessive perfectionists who elevate their craft whether the material rewards are proportional to their pains or not.
I think LW might have more sympathy for the Auteurs, but I hope people recognize the virtues of the professional and the shortcomings of the auteur, and that there is a time and place to channel the spirit of each side.
I believe many philosophies and ideologies have hangups, obsessions, or parasitical beliefs that are unimportant to most of the beliefs in practice, and to living your life in concordance with the philosophy, yet which are somehow interpreted as central to the philosophy by some adherents, often because they fit elegantly into the theoretical groundings.
Christians have murdered each other over transubstantiation vs consubstantiation. Some strands of Libertarianism obsess over physical property. On this forum huge amounts of digital ink are spilled over Many-Worlds Interpretation. Each fitness community swears by contradictory advice, even about basic nutrition and exercise.
These are sometimes badges of tribalism, sometimes the result of trying too hard to make a "perfect theory".
Most of the time, most of this stuff just doesn't matter! To live a Christian life, it could not matter less what you believe about the Eucharist. You could live your life as if the world were classically Newtonian and everything defying that was magic, and unless you were a physicist it would not affect your life as a rationalist. You can become more fit than most people you know on almost any given fitness program, with time and effort and diet.
"Doctrinal" issues are largely a distraction from actually living your life in accordance with principles you think are good or from achieving a goal.
Remember the 80/20 rule. Don't over-optimize; it could be expensive and dangerous.
At least get your diet in line before you worry too much about pharmaceuticals.
Be charitable; don't assume they're trying to present themselves as martyrs. Instead they could be outlining the peculiar challenges and difficulties of their particular positions.
Life is hard for everyone at times.
I am enrolled in a weightwatchers-like program.
My doctor recommended it to me 6 months ago and I said "doc, I can understand nutrition and exercise myself! no need for a program like this. I'll lose weight with my own methods and show you in a follow-up!"
One follow-up later, I'm 10 pounds heavier and agree to enroll.
If you're not rational enough to get it done one way, try a different way.
I'm moderately familiar with the work that exists. No need to google it for me.
I'm talking about something on the order of winning the Methuselah Mouse Prize (20 years). Something that could show a concrete path towards indefinite lifespan. Calorie restriction doesn't look like it will get us there.
Sorry I wasn't clear.
Sadly it seems like all the researchers are still at the early hypothesis / vaguely-grounded speculation stage.
Of course, everything has to start somewhere, and the true hypothesis is built on the bones of the false ones, but it also means that it's hard for these efforts to gain the scale of funding that could really accelerate them.
When somebody manages to substantially slow aging in an animal (preferably a mouse, but maybe a fruitfly would be enough), I think the faucet will really turn on.
The bike you should get depends a lot on your use case. A used one is a decent choice if you're doing short commuting and random city errands. If you want to do long or fast rides, invest more (though beware there's no real ceiling on bike cost and accessories have strong allure).
Whatever bike you get, make sure it's in decent shape and is sized correctly for you. Also put a bit of effort into maintenance (lube the chain and inflate the tires and you're fine for casual riding). AND GET SAFETY LIGHTS.
- Personal financial literacy (single most important factor for long-term wealth building)
- Basic understanding of nutrition and exercise (the caveat is that there's a LOT of bad information or irrelevant information out there. Going from zero to some knowledge is hugely beneficial).
- Meditation (you can immediately observe the novelty of meditating if you have never done it before; I won't say it's incredibly useful but it can be nice)
- Household repair (look up how to fix something yourself whenever the opportunity comes up; use your judgment though)
I think a good principle for critical people - that is, people who put a lot of mental effort into criticism - to practice is that of even-handedness. This is the flip-side of steelmanning, and probably more natural to most. Instead of trying to see the good in ideas or people or systems that frankly don't have much good in them, seek to criticize the alternatives that you haven't put under your critical gaze.
Quotes like [the slight misquote] "Democracy is the worst form of government except for all the others that have been tried from time to time" epitomize this, and indeed politics is a great domain to apply this to. If you find some set of ideas wretched, it's probably easier to see the wretchedness in your own cherished ones than to find the positive view of others.
It's a good way to harness that cutting, critical impulse many of us have towards humility.
I will not do this because I do not want to use my social media accounts in this way, but if gratitude journals work this probably would too.
Yes; please provide those links.
And remember that getting to this level of industrial capacity on earth followed from millions of years of biological evolution and thousands of years of cultural evolution in Earth's biosphere. Why would one ship full of humans be able to replicate that success somewhere else?
Similarly, an AGI that can replicate itself with an industrial base at its disposal might not be able to when isolated from those resources (it's still an AGI).
I'm going to say though, that sales can be a high-pressure, miserable environment, depending on the company and your personality. And just doing sales isn't enough to become great at sales and leverage that talent.
We are getting very close to the capability to build von Neumann probes though, so I'm not sure an empty sky is evidence for a late filter.
I am highly skeptical of this statement.
We haven't built a machine that can get out of our solar system and land on a planet in another.
We haven't made machines that can copy themselves terrestrially.
Making something that can get out of the solar system, land on another planet, then make (multiple) copies of itself seems a huge leap beyond either of the other two issues.
Even an AGI that could self-replicate might have enormous difficulty getting to another planet and turning its raw resources into copies of itself.
There's a large range between excellent company and scam company. Many companies are earnestly but poorly run, or not-scams-per-se but concealing financial issues. Others seem too-good-to-be-true but really are that good.
As a rule, companies make offers that are just good enough to get a yes. My prior would be that too-good-to-be-true always deserves extra scrutiny, and is probably somehow deceptive or high-risk if the terms don't make a guarantee (for instance, equity and deferred compensation in a job offer could never materialize). The other possibility is that they believe you are more valuable to them than other companies do. The question is why? (A final possibility is that you have a poor understanding of the job market, and the other companies are lowballing you).
Well, you could invest your own money. Most strategies benefit from using a small amount of cash, since being able to take advantage of small-volume opportunities often trumps higher relative execution costs. Register your track record and start recruiting investors.
You can also trade with your own strategies at companies that let you use a mix of your money and theirs, and provide some resources. Typically, traders at these companies aren't executing pure software strategies, so "stealing" their methods doesn't have much point.
You could also go to people with established reputations in academic or quantitative finance and show them enough of your method to convince them you're legit, but not so much they can copy it, then have them lend their reputation.
A superhuman AGI could accumulate tremendous financial resources; to do so most effectively it would need access to as many feeds and exchanges as possible, so some kind of shell company would buy those for it. I'm not sure to what extent a finance-specialist AI that achieves superhuman performance is really easier to make than a superhuman AGI; and bear in mind that it could lose its edge quickly.
I like this series; it's fun, well-designed pop-social science surveys. Of course, this type of survey has a big historical element; you may not like that but I think it's fun to read.
Make the text more clearly readable by spacing it away from the tables, or making more solid, lighter-colored lettering.
What leads you to think this?
One major is enough; many companies look at GPA, and a low GPA can rule you out, but a missing second major won't. The GPA/more-classes pareto curve is also usually more favorable towards one major. But if it's a very small commitment for OP my advice doesn't stand.
What kind of effort?
Talk to people, be friendly, stay in touch, initiate social activities, be a good friend.