Mix
- AI Revolution 101 – I created a summary of this topic on Medium
- Everything is ultimately information. We are biological machines built out of tiny code; our DNA is just information. If you have a brilliant brain that is great at processing information, you have complete control of reality: you can control atoms. All you need is a tiny atomic printer, and we have those: cells. If the boundary is information, there is no boundary between technology and the real world. We are creating something with a massive ability to affect the real world. – Aella on the Lex Fridman podcast
- For example, researchers tried to teach AI organisms in a simulation to jump, but measured success by how far the organisms' "feet" rose above the ground. Instead of jumping, the organisms learned to grow into tall vertical poles and do flips: they excelled at what was being measured, but they didn't do what was wanted.
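The jumping example above is an instance of specification gaming: an optimizer maximizes the proxy metric rather than the intended behavior. A minimal sketch of the idea (all names, trajectories, and numbers here are hypothetical, purely for illustration):

```python
# Toy illustration of specification gaming: the reward measures "max foot
# height", so a degenerate strategy (growing into a tall pole) scores
# higher than a genuine jump, even though it never jumps at all.

def foot_height_reward(trajectory):
    """Proxy metric: the highest point the 'feet' reach over time."""
    return max(trajectory)

# A genuine jump: feet leave the ground briefly, then come back down.
jump = [0.0, 0.5, 1.0, 0.5, 0.0]

# A 'tall pole' strategy: feet end up high without any jumping motion.
tall_pole = [0.0, 1.0, 2.0, 3.0, 3.0]

# The proxy prefers the degenerate strategy.
print(foot_height_reward(jump))       # 1.0
print(foot_height_reward(tall_pole))  # 3.0
```

An optimizer that only sees `foot_height_reward` will always pick the pole, which is exactly the gap between "what we measured" and "what we wanted".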
Timelines
Katja Grace survey: A ~20% probability of this sort of AI by 2036; a ~50% probability by 2060; a ~70% probability by 2100. These match the figures I give in the introduction.
Holden: thinks there's more than a 10% chance we'll see something PASTA-like enough to qualify as "transformative AI" within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).
Argument that AIs will be conscious
According to the PhilPapers Surveys, 56.5% of philosophers endorse physicalism, vs. 27.1% who endorse non-physicalism and 16.4% "other." I expect the vast majority of philosophers who endorse physicalism to agree that a sufficiently detailed simulation of a human would be conscious. (My understanding is that biological naturalism is a fringe/unpopular position, and that physicalism + rejecting biological naturalism would imply believing that sufficiently detailed simulations of humans would be conscious.) I also expect that some philosophers who don't endorse physicalism would still believe that such simulations would be conscious (David Chalmers is an example – see The Conscious Mind). These expectations are just based on my impressions of the field.
LLM sizes in millions of parameters
GPT-2 – 117
GPT-3.5 – 1,500
GPT-4 – 1,700,000
Notes
- The failure to generate images like "blue grass, green sky" tells us something about how AI works
- Marc Andreessen, on the Sam Harris show, gave a good explanation of why AI makes mistakes and is able to correct itself, and why the prompt "create quick, safe code for X" works better than "create code for X" – roughly, there are different ways the AI slices the data