A few days ago, [Martin Hilbert (UC Davis)](http://communication.ucdavis.edu/people/hilbert) gave an interview to a Chilean newspaper ([The Clinic](http://www.theclinic.cl/)). In it, he talked about Big Data, Deep Learning, surveillance, and politics; the usual. In his last few responses, however, he gives his view of the singularity in artificial intelligence, and his answer shocked me. I had never heard this view before, so I'm going to reproduce it here, in English. Cue long quotation:

> [Talking about the deep influence of AI technologies on our daily lives, the interviewer asks a question about the technological singularity]
>
> **As long as we [humans] decide and not them [the computers]...**
>
> Yes. And if you ask me, philosophically, what I think is happening is that we are creating a supra-species, a distinct superior species. But to be honest, I am not afraid of that.
>
> **Why not?**
>
> Let's see… we usually understand that natural selection, when two species confront each other, chooses one of the two, the well-known "survival of the fittest", right? But there are also examples of symbiosis, where the two species merge. And I think that in this case, the two species will merge. But we have talked so much already, I don't know if it's worth going into this.
>
> **I'm pretty sure it is.**
>
> Maybe to understand this, we should look at how life works, at living systems. As you know, there are different levels of abstraction: at the bottom, subatomic particles interact to form an atom; atoms form networks that create molecules, then molecules form cells, and cells, each with its own job, form organisms. Subsequently, organisms form networks to create societies. And now, what comes next? Societies that form networks to build something superior. The point is that each of these levels believes it is working under its own laws, unaware that thanks to those laws it has created a superior level.
> My cells do not know that I have a consciousness. They look at each other and say "hey, there's some bacteria over there, should I attack or will you do it?". They think they are pretty free, don't they? But the law of large numbers creates a trustworthy statistic that the bacteria will be attacked, and thanks to the stability of those averages, my system has the steadiness to create what we call consciousness. What I think is that digitalization will end up turning us into cells of a larger organism.
>
> **How?**
>
> As the AI begins to organize us, to program society. And it will be able to do so because, although you and I believe ourselves to be very different, the operation of society, thanks to the law of large numbers, produces very stable averages. Hence this new organism can survive, up to the point that it will be able to create a conscience. But we will not even know that that consciousness exists. That's why I say that it will not be "Terminator against us". We are merging into a supra-organism, and digitalization is the glue that unites us. The truth is that I do not normally talk about this in public interviews, but that is the meaning of the singularity for me: we are converging with technology to create a higher entity, called socio-technology, techno-science, or whatever you want to call it.
>
> (Source: http://www.theclinic.cl/2017/01/19/martin-hilbert-experto-redes-digitales-obama-trump-usaron-big-data-lavar-cerebros/, in Spanish)

This analysis is in no way academic. The use of sensitive terms is all over the place (e.g. "create a conscience"). That is to be expected, though, from an interview. Nevertheless, the point struck me as incredibly thoughtful and deep. Could this be the case? It immediately made me think of [Ned Block's China brain](https://en.wikipedia.org/wiki/China_brain). In my opinion, the China brain would definitely be conscious, and we would not be able to tell that it is.
Hilbert goes further, however, and states that the AI would "organize us", which raises further questions about the causal powers of an emergent consciousness. Maybe this development will finally help us understand our own consciousness? Or maybe it will be so superior that we will not be able to learn anything from it? (Maybe the AI's consciousness will be to human consciousness what an F-16's flying capabilities are to a bird's.) A fascinating subject, undoubtedly. Let's just hope that the AI decides to survive, because if it falls into artificial depression and commits suicide, we might be done for.

A friend asked me what to read in order to learn more about this. These are my recommendations:

1- The Wikipedia entry on the China brain: https://en.wikipedia.org/wiki/China_brain
2- Asimov's "The Last Question" short story (because it's so good)
3- Emergentism (just the intro, to get an idea): https://en.wikipedia.org/wiki/Emergentism
4- Panpsychism (if you want to get deeply philosophical): https://plato.stanford.edu/entries/panpsychism/