Time is only a symptom of what's missing: causation. Animals operate with causal models of a very small number of parameters: these models richly describe how an intervention on one variable causes another to change. ML operates with associative models of billions of parameters: trying to learn thermodynamics by parameterizing for every molecule in a billion images of them. Causal models cannot be inferred from association (hence the last 500 years of science); they require direct causal intervention in the environment to see how it changes (i.e., real learning). You need to have lived a human life to guess what a pedestrian is going to do, and a rich background of historical learning to interpret new observations. Even if you overcome the relevant computational infinities to learn "strategy", you will only do so within the narrow horizon of a highly regulated game where causation has been eliminated by construction (i.e., the space of all possible moves over the total horizon of the game can be known in an instant). The state of all possible (past, current, future) configurations of a physical system cannot be computed - it's an infinity computational statistics will never bridge. The solution to self-driving cars will be to gamify the roads: robotize people so that machines can understand them. This is already happening on the internet: our behaviour is made more machine-like so it can be predicted. I'm sceptical that real-world behaviour can be so constrained.

The causal argument suffers from a problem of nomenclature. On one side we have the colloquial understanding of cause and effect, where a cause is a true impetus of an effect. On the other side we have "causal" learning in biology, where you're not actually learning causes, just strong correlations. We can learn just about any temporal association even if there is no direct cause-effect relationship. Random reward structures illustrate this: present a reinforcing stimulus to an animal at random times, and a random subset of its behavior will increase in frequency. The animal develops a false "causal" belief that a series of its actions is influencing the presentation of the reward. That's why I like focusing on "sequence prediction": even colloquially, we know predictions can be wrong. Those predictions can be influenced by low-d world models, but you don't accidentally elevate such a model to a pure/symbolic/accurate one, as can happen with incautious use of words like "causal".

The main point I was trying to make was that, given a speculative post of such breadth - one which touches on such difficult issues as AGI, how the brain works, and perhaps even the nature of conscious experience, and which makes some claims that are at least interesting - I think it's quite presumptuous to assert that these ideas are all nonsense without a deeper exploration of them. I certainly would not want to make such an assertion, despite being troubled by what I think are some inaccuracies in the author's description of certain physical concepts. A secondary issue is that, as far as I'm aware, major scientific discoveries have typically been initially published in much more developed form, and have thus been the work of a single individual or of a relatively small group of closely affiliated individuals. I'm not convinced, however, that this historical model of very small-scale scientific collaboration is necessarily the only one, nor the best one, in light of modern means of communication. It seems at least conceivable to me that there is a possible future in which the following hold:

* There is some kernel of validity in this author's ideas.
* A small number of other people find them intriguing and choose to collaborate with the author to further elaborate them.
* This collaboration leads to major progress in our understanding of one or more of the areas mentioned above.

For me, the (admittedly very small) likelihood of such an outcome justifies the author's post and its appearance on HN.
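The association-versus-intervention distinction above can be made concrete with a toy example (a minimal sketch; all variable names and numbers are illustrative, not from the discussion): a hidden variable `z` drives both `x` and `y`, so `x` and `y` are strongly correlated in observational data, yet forcing `x` to any value by intervention leaves `y` untouched - exactly the difference an associative model cannot see.

```python
import random


def sample(n=100_000, do_x=None, seed=1):
    """Confounded toy system: hidden cause z drives both x and y.

    x itself has no effect on y. Passing do_x forces x to a fixed
    value, simulating an intervention (a "do" operation).
    """
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z = rng.random()
        x = z + 0.1 * rng.random() if do_x is None else do_x
        y = z + 0.1 * rng.random()  # y depends only on z, never on x
        xs.append(x)
        ys.append(y)
    return xs, ys


def corr(xs, ys):
    """Pearson correlation, computed from scratch for self-containment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5
```

Observationally, `corr(xs, ys)` comes out near 0.99, but the mean of `y` is the same whether we force `x` to 0 or to 1 - association alone would wrongly predict that changing `x` changes `y`.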
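The random-reward illustration above can also be sketched as a small simulation (a hypothetical toy model, not a real experimental protocol; all parameter names and values are made up): rewards arrive at random, independent of behaviour, yet whichever behaviour happens to precede a reward is reinforced, so unequal "superstitious" preferences emerge anyway.

```python
import random


def superstitious_conditioning(n_steps=10_000, n_behaviors=5,
                               reward_prob=0.05, boost=0.5, seed=0):
    """Toy model of a random reward schedule.

    The reward is delivered at random times, independent of what the
    animal does, but reinforcement is applied to whatever behaviour
    immediately preceded it - so some behaviours drift up in frequency
    despite having no causal effect on the reward.
    """
    rng = random.Random(seed)
    weights = [1.0] * n_behaviors   # propensity to emit each behaviour
    counts = [0] * n_behaviors      # how often each was emitted
    for _ in range(n_steps):
        # the animal emits a behaviour in proportion to its current weight
        behavior = rng.choices(range(n_behaviors), weights=weights)[0]
        counts[behavior] += 1
        # reward arrives at random, regardless of the behaviour chosen
        if rng.random() < reward_prob:
            weights[behavior] += boost  # reinforce whatever just occurred
    return weights, counts
```

Running it, the weights end up unequal - a purely associative learner has formed a false "causal" belief that certain actions produce the reward.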